Problem: CSES credentials were cached without verifying them, so bad
passwords were only discovered at submit time.
Solution: reuse `_check_token` (fast path) and `_web_login` in the new
`login()` method. Return the credentials with the API token on success so
subsequent submits skip re-authentication.
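A minimal sketch of how those pieces might compose, assuming a `Credentials` record that carries the token (the field names and signatures here are illustrative, not confirmed by this entry):

```python
from dataclasses import dataclass

@dataclass
class Credentials:
    username: str
    password: str
    api_token: str | None = None

class CsesScraper:
    def login(self, creds: Credentials) -> Credentials:
        # Fast path: validate a cached token with one cheap request.
        if creds.api_token and self._check_token(creds.api_token):
            return creds
        # Slow path: a full web login mints a fresh token.
        creds.api_token = self._web_login(creds.username, creds.password)
        return creds

    def _check_token(self, token: str) -> bool:
        return False  # placeholder; the real check issues a single authenticated GET

    def _web_login(self, username: str, password: str) -> str:
        return "fresh-token"  # placeholder; the real flow POSTs the login form
```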
## Problem
After the initial submit hardening, two issues remained: source code was
read in Lua and piped as stdin to the scraper (an unnecessary roundtrip,
since the file already exists on disk), and CF's `page.fill()` timed out
on the hidden `textarea[name="source"]` because CodeMirror owns the
editor state.
## Solution
Pass the source file path as a CLI arg instead: AtCoder calls
`page.set_input_files(file_path)` directly, and CF reads it with
`Path(file_path).read_text()`. Fix CF source injection via
`page.evaluate()` into the CodeMirror instance. Extract
`BROWSER_SUBMIT_NAV_TIMEOUT` as a per-platform `defaultdict` (CF defaults
to 2× the nav timeout). Save the buffer with `vim.cmd.update()` before
submitting.
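Two of these pieces lend themselves to a sketch. Assuming Playwright's sync API and that the CF editor is CodeMirror 5 (whose instance hangs off the `.CodeMirror` wrapper element), the timeout map and the injection might look like:

```python
from collections import defaultdict
from pathlib import Path

NAV_TIMEOUT_MS = 30_000  # hypothetical base navigation timeout

# Unlisted platforms fall back to the base timeout; CF gets 2x.
BROWSER_SUBMIT_NAV_TIMEOUT: dict[str, int] = defaultdict(
    lambda: NAV_TIMEOUT_MS, {"codeforces": 2 * NAV_TIMEOUT_MS}
)

def fill_codeforces_source(page, file_path: str) -> None:
    source = Path(file_path).read_text()  # the file already exists on disk
    # page.fill() times out because textarea[name="source"] is hidden and
    # CodeMirror owns the editor state; write through CodeMirror's API instead.
    page.evaluate(
        "src => document.querySelector('.CodeMirror').CodeMirror.setValue(src)",
        source,
    )
```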
## Problem
Codeforces submit was a stub. CSES submit re-ran the full login flow on
every invocation (~1.5s overhead).
## Solution
**Codeforces**: headless browser submit via StealthySession (same
pattern as AtCoder). Solves the Cloudflare Turnstile challenge on login,
uploads source via the file input, and caches cookies at
`~/.cache/cp-nvim/codeforces-cookies.json` so repeat submits skip login.
**CSES**: persist the API token in credentials via a `credentials`
ndjson event. Subsequent submits validate the cached token with a single
GET before falling back to full login.
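A sketch of the cookie round trip, assuming the underlying context exposes Playwright-style `cookies()`/`add_cookies()` (StealthySession may wrap this differently):

```python
import json
from pathlib import Path

COOKIE_PATH = Path.home() / ".cache" / "cp-nvim" / "codeforces-cookies.json"

def save_cookies(context) -> None:
    COOKIE_PATH.parent.mkdir(parents=True, exist_ok=True)
    COOKIE_PATH.write_text(json.dumps(context.cookies()))

def restore_cookies(context) -> bool:
    """Reload cached cookies so a repeat submit can skip the Turnstile login."""
    if not COOKIE_PATH.exists():
        return False
    context.add_cookies(json.loads(COOKIE_PATH.read_text()))
    return True
```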
Also includes a vimdoc table of contents.
## Problem
CSES submit was a stub returning "not yet implemented".
## Solution
Authenticate via a web login + API token bridge (POST the `/login` form,
then POST `/api/login` and confirm the auth page), submit the source to
`/api/courses/problemset/submissions` as base64-encoded content, and
poll for the verdict. Uses the same username/password credential model
as AtCoder, so no browser dependencies are needed. Tested end-to-end
with a real CSES account (verdict: `ACCEPTED`).
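A condensed sketch of that flow using `requests`; the endpoints come from this entry, but every form field, header, and JSON key below is an assumption:

```python
import base64
import time

import requests

BASE = "https://cses.fi"

def submit(username: str, password: str, source: str, task: str) -> str:
    s = requests.Session()
    # Web login: POST the /login form (field names assumed), then bridge the
    # session to an API token via POST /api/login (response shape assumed).
    s.post(f"{BASE}/login", data={"nick": username, "pass": password})
    token = s.post(f"{BASE}/api/login").json()["token"]
    auth = {"Authorization": f"Bearer {token}"}  # auth scheme assumed
    # Submit the base64-encoded source, then poll until a final verdict appears.
    resp = s.post(
        f"{BASE}/api/courses/problemset/submissions",
        headers=auth,
        json={"task": task, "content": base64.b64encode(source.encode()).decode()},
    )
    status_url = resp.json()["statusUrl"]  # key name assumed
    while True:
        status = s.get(status_url, headers=auth).json()
        if status.get("verdict"):
            return status["verdict"]  # e.g. "ACCEPTED"
        time.sleep(1)
```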
Also updates `scraper.lua` to pass the full ndjson event object to
`on_status` and handle `credentials` events for future platform use.
Problem: Lua typecheck flagged a missing start_time field on ContestSummary;
ty flagged the BeautifulSoup Tag/NavigableString union on csrf_input.get(),
a 3-tuple unpack where _extract_problem_info now returns 4 values in
cses.py, and an untyped list assignment in usaco.py.
Solution: add start_time? to the ContestSummary LuaDoc, guard csrf_input
with a hasattr check and a type: ignore, unpack precision from
_extract_problem_info in cses.py callers, and use cast() in usaco.py.
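The guard works around `soup.find()` being typed `Tag | NavigableString | None`, where only `Tag` has `.get()`. A sketch of its shape (the input's `name` attribute here is illustrative):

```python
from bs4 import BeautifulSoup

html = '<form><input type="hidden" name="csrf_token" value="abc123"></form>'
soup = BeautifulSoup(html, "html.parser")
csrf_input = soup.find("input", {"name": "csrf_token"})
# find() may return a NavigableString, which has no .get(), so guard at
# runtime and silence the remaining union complaint for the type checker.
if csrf_input is None or not hasattr(csrf_input, "get"):
    raise RuntimeError("csrf input not found")
token = csrf_input.get("value")  # type: ignore[union-attr]
```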
Problem: problem pages contain floating-point precision requirements and
contest start timestamps that were not being extracted or stored. The
submit workflow also needed a foundation in the scraper layer.
Solution: add extract_precision() to base.py and propagate it through all
scrapers into the cache. Add start_time to ContestSummary and extract it
from AtCoder and Codeforces. Add a SubmitResult model, an abstract submit()
method, a submit CLI case with get_language_id() resolution, stdin/env_extra
support in run_scraper, and a full AtCoder submit implementation; stub
the remaining platforms.
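A sketch of how the model and the abstract hook could fit together; every field beyond a verdict string, and the exact signatures, are assumptions:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SubmitResult:
    verdict: str            # e.g. "ACCEPTED"
    url: str | None = None  # submission page, if the platform exposes one

class Scraper(ABC):
    @abstractmethod
    def submit(self, problem_id: str, file_path: str, language_id: str) -> SubmitResult:
        """Upload a solution and block until a verdict is available."""

class UsacoScraper(Scraper):
    def submit(self, problem_id: str, file_path: str, language_id: str) -> SubmitResult:
        raise NotImplementedError("usaco: submit not yet implemented")  # stubbed platform
```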