## Problem
Two gaps in the plugin: (1) contest pickers have no way to know whether
a contest supports countdown (race), so the UI can't surface that
affordance; (2) `:CP submit` hardcodes a single language ID per platform
with no way to choose a C++ standard or override the platform ID.
## Solution
**Race countdown** (`4e709c8`): Add `supports_countdown` boolean to
`ContestListResult` and wire it through CSES/USACO scrapers, cache, race
module, and pickers.
**Language version selection** (`b90ac67`): Add `LANGUAGE_VERSIONS` and
`DEFAULT_VERSIONS` tables in `constants.lua`. Config gains `version` and
`submit_id` fields validated at setup time. `submit.lua` resolves the
effective config to a platform ID (priority: `submit_id` > `version` >
default). Helpdocs gain a `*cp-submit-language*` section. Tests cover
`LANGUAGE_IDS` completeness.
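For concreteness, a minimal Lua sketch of that resolution order; the table names come from the description above, while the helper itself and its module path are hypothetical:

```lua
-- Hypothetical helper illustrating the priority: submit_id > version > default.
local constants = require("cp.constants") -- assumed module path

---@param cfg { submit_id?: integer, version?: string }
---@param platform string
---@return integer|nil
local function resolve_language_id(cfg, platform)
  if cfg.submit_id then
    return cfg.submit_id -- explicit platform ID always wins
  end
  local version = cfg.version or constants.DEFAULT_VERSIONS[platform]
  return (constants.LANGUAGE_VERSIONS[platform] or {})[version]
end
```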
## Problem
`codeforces.py` used `curl_cffi` to bypass Cloudflare when fetching
contest problem HTML. `curl_cffi` is unavailable in the nix python env,
and carrying it meant an extra dependency across `pyproject.toml` and
`flake.nix`.
## Solution
Rewrite `_fetch_problems_html` to use scrapling `StealthySession` with
`solve_cloudflare=True`, matching the existing CF submit pattern. Extend
`needs_browser` in `scraper.lua` to route CF `metadata` and `tests`
through the FHS env on NixOS. Remove `curl-cffi` from `pyproject.toml`,
`flake.nix`, and test mocks.
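A minimal sketch of the new fetch path; the exact `StealthySession` constructor signature and the response attribute are assumptions, only the class name and `solve_cloudflare=True` come from the change itself:

```python
# Sketch only: session API and response attribute are assumptions.
from scrapling.fetchers import StealthySession

def _fetch_problems_html(url: str) -> str:
    # solve_cloudflare waits out the challenge before the page is returned,
    # replacing the old curl_cffi impersonation trick.
    with StealthySession(headless=True, solve_cloudflare=True) as session:
        page = session.fetch(url)
        return page.html_content
```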
## Problem
`:CP <platform> login` blindly caches username/password without
server-side validation. Bad credentials are only discovered at submit
time, which is confusing and wastes a browser session.
## Solution
Wire `:CP <platform> login` through the scraper pipeline so each
platform actually authenticates before persisting credentials. On
failure, the user sees an error and nothing is cached.
- CSES: reuses `_check_token` (fast path) and `_web_login`; returns the
API token in `LoginResult.credentials` so subsequent submits skip
re-auth.
- AtCoder/Codeforces: new `_login_headless` functions open a
StealthySession, solve Turnstile/Cloudflare, fill the login form, and
validate success by checking for the logout link (see the sketch after
this list). Cookies only persist on confirmed login.
- CodeChef/Kattis/USACO: return "not yet implemented" errors.
- `scraper.lua`: generalizes the submit-only guards (`needs_browser`
flag) to cover both `submit` and `login` subcommands.
- `credentials.lua`: prompts for username/password, passes the cached
token for the CSES fast path, shows ndjson status notifications, and
only caches on success.
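A rough sketch of the headless login shape, with a Playwright-style page object; the selector strings and field names are assumptions (the real helpers differ per platform):

```python
# Hypothetical sketch: selectors and form field names are assumptions.
def _login_headless(page, username: str, password: str) -> bool:
    page.fill('input[name="username"]', username)
    page.fill('input[name="password"]', password)
    page.click('button[type="submit"]')
    page.wait_for_load_state("networkidle")
    # Only authenticated sessions render a logout link.
    return page.locator('a[href*="logout"]').count() > 0
```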
## Problem
After the initial submit hardening, two issues remained: source code was
read in Lua and piped as stdin to the scraper (an unnecessary roundtrip
since the file already exists on disk), and CF's `page.fill()` timed out
on the hidden `textarea[name="source"]` because CodeMirror owns the
editor state.
## Solution
Pass the source file path as a CLI arg instead: AtCoder calls
`page.set_input_files(file_path)` directly, and CF reads it with
`Path(file_path).read_text()`. Fix CF source injection via
`page.evaluate()` into the CodeMirror instance (sketch below). Extract
`BROWSER_SUBMIT_NAV_TIMEOUT` as a per-platform `defaultdict` (CF
defaults to 2× nav timeout). Save the buffer with `vim.cmd.update()`
before submitting.
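The injection works roughly as in this sketch; it assumes the classic CodeMirror 5 API, where the editor instance hangs off the `.CodeMirror` wrapper element:

```python
# Sketch of the CodeMirror injection; the selector and setValue API are
# assumptions about the CodeMirror 5 widget wrapping the hidden textarea.
from pathlib import Path

def _fill_source(page, file_path: str) -> None:
    source = Path(file_path).read_text()
    page.evaluate(
        """(src) => {
            const cm = document.querySelector('.CodeMirror').CodeMirror;
            cm.setValue(src); // synced back to textarea[name="source"] on submit
        }""",
        source,
    )
```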
## Problem
AtCoder file upload always wrote a `.cpp` temp file regardless of
language. CF submit used `solve_cloudflare=True` on the submit page,
causing a spurious "No Cloudflare challenge found" error;
`_wait_for_gate_reload` in `login_action` was dead code. Stale cookies
caused silent auth failures with no recovery path. The `uv.spawn` ndjson
path for submit had no overall timeout.
## Solution
Replace AtCoder's temp file with `page.set_input_files` using an
in-memory buffer and correct extension via `_LANGUAGE_ID_EXTENSION`.
Replace CF's temp-file/fallback dance with a direct
`textarea[name="source"]` fill and set `solve_cloudflare=False` on the
submit fetch. Add a login fast-path that skips the homepage check when
cookies exist, with automatic stale-cookie recovery via `_retried` flag
on redirect-to-login detection. Remove `_wait_for_gate_reload`. Fix
`_ensure_browser` to propagate install errors. Add a 120s kill timer to
the ndjson `uv.spawn` submit path in `scraper.lua`.
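The kill timer follows the standard libuv pattern; a Lua sketch where `cmd`, `opts`, and `on_exit` are placeholders for the real spawn call:

```lua
-- Placeholder cmd/opts/on_exit; only the timer wiring is the point here.
local timer = vim.uv.new_timer()
local handle
handle = vim.uv.spawn(cmd, opts, function(code, signal)
  timer:stop()
  timer:close()
  on_exit(code, signal)
end)
timer:start(120000, 0, function()
  if handle and not handle:is_closing() then
    handle:kill("sigterm") -- give up after 120s with no exit
  end
end)
```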
## Problem
CSES submit was a stub returning "not yet implemented".
## Solution
Authenticate via web login + API token bridge (POST `/login` form, then
POST `/api/login` and confirm the auth page), submit source to
`/api/courses/problemset/submissions` with base64-encoded content, and
poll for verdict. Uses the same username/password credential model as
AtCoder — no browser dependencies needed. Tested end-to-end with a real
CSES account (verdict: `ACCEPTED`).
Also updates `scraper.lua` to pass the full ndjson event object to
`on_status` and handle `credentials` events for future platform use.
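Condensed sketch of the submit-and-poll flow above; the endpoints come from the description, while the payload field names, auth header, and response shape are assumptions:

```python
import base64
import time
import requests

# Sketch only: payload fields, header name, and response shape are assumptions.
def submit(session: requests.Session, token: str, task: str, source: str) -> str:
    payload = {
        "task": task,
        "content": base64.b64encode(source.encode()).decode(),
    }
    headers = {"X-Auth-Token": token}
    r = session.post(
        "https://cses.fi/api/courses/problemset/submissions",
        json=payload, headers=headers,
    )
    status_url = r.json()["url"]
    while True:  # poll until the judge reports a verdict
        verdict = session.get(status_url, headers=headers).json().get("result")
        if verdict:
            return verdict  # e.g. "ACCEPTED"
        time.sleep(1)
```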
## Problem
`_submit_sync` was a 170-line nested closure with `_solve_turnstile` and
the browser-install block further nested inside it. Status events went
to stderr, which `run_scraper()` silently discards, leaving the user
with a 10–30s silent hang after credential entry. The NDJSON spawn path
also lacked stdin support, so submit had no streaming path at all.
## Solution
Extract `_TURNSTILE_JS`, `_solve_turnstile`, `_ensure_browser`, and
`_submit_headless` to module level in `atcoder.py`; status events
(`installing_browser`, `checking_login`, `logging_in`, `submitting`) now
print to stdout as NDJSON. Add stdin pipe support to the NDJSON spawn
path in `scraper.lua` and switch `M.submit` to streaming with an
`on_status` callback. Wire `on_status` in `submit.lua` to fire
`vim.notify` for each phase transition.
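The emitter side is a one-liner per phase; a sketch assuming an `{event, phase}` object shape (the phase names are from the list above, the key names are assumptions):

```python
import json

# One JSON object per line so scraper.lua can decode the stream incrementally.
def emit_status(phase: str, **extra) -> None:
    print(json.dumps({"event": "status", "phase": phase, **extra}), flush=True)

emit_status("checking_login")
emit_status("submitting")
```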
## Problem
`vim.json.decode` maps JSON null to `vim.NIL` (userdata), but
`cache.set_test_cases` validates precision as `number|nil`, causing a
type error on every scrape where precision is absent.
## Solution
Guard the precision field when building the callback table, converting
`vim.NIL` to nil.
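The guard is a two-line normalization; field access paths here are illustrative:

```lua
-- vim.json.decode turns JSON null into vim.NIL (userdata), not nil.
local precision = decoded.precision
if precision == vim.NIL then
  precision = nil -- satisfies the number|nil validation in cache.set_test_cases
end
```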
## Problem
luals flagged undefined-field on uv timer methods because
`race_state.timer` was untyped, and undefined-field on
`env_extra`/`stdin` because they were missing from the `run_scraper`
opts annotation.
## Solution
Hoist `race_state.timer` into a typed local before the nil check so
luals can narrow through it; add `env_extra` and `stdin` to the opts
inline type in `run_scraper`.
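The hoist looks like this sketch (the `uv.uv_timer_t` type name assumes Neovim's bundled luv annotations):

```lua
---@type uv.uv_timer_t|nil
local timer = race_state.timer -- typed local: luals can narrow through it
if timer then
  timer:stop()
  timer:close()
end
```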
## Problem
Problem pages contain floating-point precision requirements and contest
start timestamps that were not being extracted or stored. The submit
workflow also needed a foundation in the scraper layer.
## Solution
Add `extract_precision()` to `base.py` and propagate it through all
scrapers into the cache. Add `start_time` to `ContestSummary` and
extract it from AtCoder and Codeforces. Add a `SubmitResult` model, an
abstract `submit()` method, a submit CLI case with `get_language_id()`
resolution, stdin/env_extra support in `run_scraper`, and a full AtCoder
submit implementation; stub the remaining platforms.
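A sketch of what the model and abstract hook plausibly look like; the field names and class names here are assumptions, not the actual schema:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical shape: fields are illustrative, not the real model.
@dataclass
class SubmitResult:
    ok: bool
    verdict: str | None = None
    message: str | None = None

class BaseScraper(ABC):
    @abstractmethod
    def submit(self, problem_id: str, source_path: str, language_id: str) -> SubmitResult:
        """Submit source for a problem; concrete scrapers implement or stub this."""
```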
## Problem
`vim.loop` is deprecated since Neovim 0.10 in favour of `vim.uv`. Five
call sites across `scraper.lua`, `setup.lua`, `utils.lua`, and
`health.lua` still referenced the old alias.
## Solution
Replace every `vim.loop` reference with `vim.uv` directly. Add LuaCATS
annotations to the env conversion helper and drop the `table.sort` call,
since ordering is not required by `uv.spawn`.
Co-authored-by: Codex <noreply@openai.com>
## Problem
Neovim/libuv spawn expects env as a list of `KEY=VALUE` strings. Passing
the map from `vim.fn.environ()` can fail process startup with ENOENT,
which breaks NDJSON test scraping and surfaces as "Failed to start
scraper process".
## Solution
Convert the env map to a deterministic list before `uv.spawn` in the
NDJSON scraper path.
Co-authored-by: Codex <noreply@openai.com>
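The conversion is small; a sketch (the sort was later dropped once ordering proved unnecessary, per the entry above):

```lua
-- uv.spawn wants env as { "KEY=VALUE", ... }, not the map vim.fn.environ() returns.
local function env_to_list(map)
  local list = {}
  for k, v in pairs(map) do
    list[#list + 1] = k .. "=" .. tostring(v)
  end
  table.sort(list) -- deterministic ordering (later dropped as unnecessary)
  return list
end
```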
## Problem
`setup_python_env()` is called from `check_required_runtime()` during
`config.setup()`, which runs on the very first `:CP` command. The
`uv sync` and nix build calls use `vim.system():wait()`, blocking the
Neovim event loop. During the block the UI is frozen and
`vim.schedule`-based log messages never render, so the user sees an
unresponsive editor with no feedback.
## Solution
Remove `setup_python_env()` from `check_required_runtime()` so config
init is instant. Call it lazily from `run_scraper()` instead, only when
a scraper subprocess is actually needed. Use `vim.notify` +
`vim.cmd.redraw()` before blocking calls so the notification renders
immediately via a forced screen repaint rather than being queued behind
`vim.schedule`.
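The render-before-block pattern in miniature; the message text and `plugin_root` are placeholders:

```lua
vim.notify("cp.nvim: syncing python environment...", vim.log.levels.INFO)
vim.cmd.redraw() -- paint the notification now; vim.schedule won't run while blocked
vim.system({ "uv", "sync" }, { cwd = plugin_root }):wait() -- blocking call
```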
## Problem
With `debug = true`, there is not enough diagnostic output to
troubleshoot environment or execution issues. The resolved python path,
scraper commands, and compile/run shell commands are not logged.
## Solution
Add `logger.log` calls at key decision points: python env resolution
(nix vs uv vs discovery), `uv sync` stderr output, scraper subprocess
commands, and compile/run shell strings. All are gated behind the
existing debug flag so they only appear when `debug = true`.
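Sketch of the gating; the logger module and message format are illustrative:

```lua
-- Illustrative: logger API and field names are assumptions.
if config.debug then
  logger.log("python: " .. python_path .. " (" .. resolution_source .. ")")
  logger.log("scraper cmd: " .. table.concat(cmd, " "))
end
```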