Merge pull request #147 from barrett-ruth/feat/softer-scripts

Warn on no test cases - don't fail
Barrett Ruth 2025-10-05 20:06:20 +02:00 committed by GitHub
commit f00691ae40
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
7 changed files with 93 additions and 154 deletions

View file

@@ -43,6 +43,7 @@ cp.nvim follows a simple principle: **solve locally, submit remotely**.
 ```
 :CP next
 :CP prev
+:CP e1
 ```
 5. **Submit** on the original website

View file

@@ -24,12 +24,6 @@ COMMANDS *cp-commands*
 :CP                                                                     *:CP*
 cp.nvim uses a single :CP command with intelligent argument parsing:
-State Restoration ~
-:CP            Restore state from current file.
-               Automatically detects platform, contest, problem,
-               and language from cached state. Use this after
-               switching files to restore your CP environment.
 Setup Commands ~
 :CP {platform} {contest_id}
                Full setup: set platform and load contest metadata.
@@ -39,13 +33,10 @@ COMMANDS *cp-commands*
 <
 :CP {platform} {contest_id}
                Contest setup: set platform, load contest metadata,
-               and scrape ALL problems in the contest. This creates
-               source files for every problem and caches all test
-               cases for efficient bulk setup. Opens the first
-               problem after completion.
+               and scrape all test cases in the contest.
+               Opens the first problem after completion.
                Example: >
                    :CP atcoder abc324
-                   :CP codeforces 1951
 <
 Action Commands ~
 :CP run        Toggle run panel for individual test cases.
@@ -70,10 +61,15 @@ COMMANDS *cp-commands*
 :CP {problem_id}
                Jump to problem {problem_id} in a contest.
                Requires that a contest has already been set up.
+State Restoration ~
+:CP            Restore state from current file.
+               Automatically detects platform, contest, problem,
+               and language from cached state. Use this after
+               switching files to restore your CP environment.
 Cache Commands ~
 :CP cache clear [contest]
-               Clear the cache data (contest list, problem
-               data, file states) for the specified contest,
+               Clear the cache data for the specified contest,
                or all contests if none specified.
 :CP cache read
@@ -86,8 +82,6 @@ Template Variables ~
 • {source}     Source file path (e.g. "abc324a.cpp")
 • {binary}     Output binary path (e.g. "build/abc324a.run")
-• {contest}    Contest identifier (e.g. "abc324", "1933")
-• {problem}    Problem identifier (e.g. "a", "b")
 Example template: >
     build = { 'g++', '{source}', '-o', '{binary}', '-std=c++17' }
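The substitution these template variables perform can be sketched in a few lines. This is a hypothetical stand-in (`expand_template` is not the plugin's actual Lua code), assuming each placeholder is replaced verbatim inside every command argument:

```python
def expand_template(cmd: list[str], source: str, binary: str) -> list[str]:
    """Replace {source}/{binary} placeholders in each command argument."""
    mapping = {"{source}": source, "{binary}": binary}
    out = []
    for arg in cmd:
        for key, val in mapping.items():
            arg = arg.replace(key, val)
        out.append(arg)
    return out

build = ["g++", "{source}", "-o", "{binary}", "-std=c++17"]
expanded = expand_template(build, "abc324a.cpp", "build/abc324a.run")
```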
@@ -98,8 +92,8 @@ Template Variables ~
 ==============================================================================
 CONFIGURATION                                                      *cp-config*
-Here's an example configuration with lazy.nvim: >lua
+Here's an example configuration with lazy.nvim:
+>lua
 {
   'barrett-ruth/cp.nvim',
   cmd = 'CP',
@@ -109,7 +103,8 @@ Here's an example configuration with lazy.nvim: >lua
   cpp = {
     extension = 'cc',
     commands = {
-      build = { 'g++', '-std=c++17', '{source}', '-o', '{binary}' },
+      build = { 'g++', '-std=c++17', '{source}', '-o', '{binary}',
+        '-fdiagnostics-color=always' },
       run = { '{binary}' },
       debug = { 'g++', '-std=c++17', '-fsanitize=address,undefined',
         '{source}', '-o', '{binary}' },
@@ -164,21 +159,17 @@ By default, C++ (g++ with ISO C++17) and Python are preconfigured under
 the default; per-platform overrides can tweak `extension` or `commands`.
 For example, to run CodeForces contests with Python by default:
 >lua
 {
   platforms = {
     codeforces = {
-      enabled_languages = { 'cpp', 'python' },
       default_language = 'python',
     },
   },
 }
 <
 Any language is supported provided the proper configuration. For example, to
 run CSES problems with Rust using the single schema:
 >lua
 {
   languages = {
@@ -198,7 +189,6 @@ run CSES problems with Rust using the single schema:
   },
 }
 <
 *cp.Config*
 Fields: ~
 {languages}    (table<string,|CpLanguage|>) Global language registry.
@@ -214,9 +204,6 @@ run CSES problems with Rust using the single schema:
                (default: concatenates contest_id and problem_id, lowercased)
 {ui}           (|CpUI|) UI settings: run panel, diff backend, picker.
-*cp.PlatformConfig*
-Replaced by |CpPlatform|. Platforms no longer inline language tables.
 *CpPlatform*
 Fields: ~
 {enabled_languages}  (string[]) Language ids enabled on this platform.
@@ -279,7 +266,8 @@ run CSES problems with Rust using the single schema:
 Hook functions receive the cp.nvim state object (cp.State). See the state
 module documentation (lua/cp/state.lua) for available methods and fields.
-Example usage in hook: >lua
+Example usage in hook:
+>lua
 hooks = {
   setup_code = function(state)
     print("Setting up " .. state.get_base_name())
@@ -300,24 +288,25 @@ PLATFORM-SPECIFIC USAGE *cp-platforms*
 AtCoder ~
                                                                  *cp-atcoder*
-URL format: https://atcoder.jp/contests/abc123/tasks/abc123_a
+URL format:
+https://atcoder.jp/contests/{contest_id}/tasks/{contest_id}_{problem_id}
 Usage examples: >
-    :CP atcoder abc324    " Contest setup: load contest metadata only
+    :CP atcoder abc324    " Set up atcoder.jp/contests/abc324
 Codeforces ~
                                                               *cp-codeforces*
-URL format: https://codeforces.com/contest/1234/problem/A
+URL format: https://codeforces.com/contest/{contest_id}/problem/{problem_id}
 Usage examples: >
-    :CP codeforces 1934    " Contest setup: load contest metadata only
+    :CP codeforces 1934    " Set up codeforces.com/contest/1934
 CSES ~
                                                                     *cp-cses*
-URL format: https://cses.fi/problemset/task/1068
+URL format: https://cses.fi/problemset/task/{problem_id}
 Usage examples: >
-    :CP cses dynamic_programming    " Set up ALL problems from DP category
+    :CP cses dynamic_programming    " Set up all problems in dp category
 ==============================================================================
@@ -329,30 +318,26 @@ Example: Setting up and solving AtCoder contest ABC324
 2. Set up entire contest (bulk setup): >
     :CP atcoder abc324
-<  This scrapes ALL problems (A, B, C, D, ...), creates source files
-   for each, downloads all test cases, and opens problem A.
-3. Alternative: Set up single problem: >
-    :CP atcoder abc324 a
-<  This creates only a.cc and scrapes its test cases
-4. Code your solution, then test: >
+<  This scrapes all test case data, downloads all test cases,
+   and opens the first problem.
+3. Code your solution, then test: >
     :CP run
 <  Navigate with j/k, run specific tests with <enter>
    Exit test panel with q or :CP run when done
-5. Move to next problem: >
+4. Move to next problem: >
     :CP next
-<  This automatically sets up problem B
-6. Continue solving problems with :CP next/:CP prev navigation
-7. Switch to another file (e.g. previous contest): >
+<  This automatically sets up the next problem (likely problem B)
+5. Continue solving problems with :CP next/:CP prev navigation
+6. Switch to another file (e.g. previous contest): >
     :e ~/contests/abc323/a.cpp
     :CP
 <  Automatically restores abc323 contest context
-8. Submit solutions on AtCoder website
+7. Submit solutions on AtCoder website
 ==============================================================================
 PICKER INTEGRATION                                                *cp-picker*
@@ -368,7 +353,7 @@ platform and contest selection using telescope.nvim or fzf-lua.
 Requires corresponding plugin (telescope.nvim or fzf-lua) to be installed.
 PICKER KEYMAPS                                                *cp-picker-keys*
-<c-r>          Force refresh contest list, bypassing cache.
+<c-r>          Force refresh/update contest list.
                Useful when contest lists are outdated or incomplete
 ==============================================================================
@@ -424,7 +409,7 @@ erroneous config. Most tools (GCC, Python, Clang, Rustc) color stdout based on
 whether stdout is connected to a terminal. One can usually get around this by
 leveraging flags to force colored output. For example, to force colors with GCC,
 alter your config as follows:
+>lua
 {
   commands = {
     build = {
@@ -434,7 +419,7 @@ alter your config as follows:
     }
   }
 }
+<
 ==============================================================================
 HIGHLIGHT GROUPS                                               *cp-highlights*
@@ -468,34 +453,16 @@ TERMINAL COLOR INTEGRATION *cp-terminal-colors*
 ANSI colors automatically use the terminal's color palette through Neovim's
 vim.g.terminal_color_* variables.
-If your colorscheme doesn't set terminal colors, set them like so: >vim
-    let g:terminal_color_1 = '#ff6b6b'
-    ...
 ==============================================================================
 HIGHLIGHT CUSTOMIZATION                                  *cp-highlight-custom*
-You can customize any highlight group by linking to existing groups or
-defining custom colors: >lua
-    -- Customize the color of "TLE" text in run panel:
-    vim.api.nvim_set_hl(0, 'CpTestTLE', { fg = '#ffa500', bold = true })
-    -- ... or the ANSI colors used to display stderr
-    vim.api.nvim_set_hl(0, 'CpAnsiRed', {
-      fg = vim.g.terminal_color_1 or '#ef4444'
-    })
-<
-Place customizations in your init.lua or after the colorscheme loads to
-prevent them from being overridden: >lua
+Customize highlight groups after your colorscheme loads:
+>lua
 vim.api.nvim_create_autocmd('ColorScheme', {
   callback = function()
-    -- Your cp.nvim highlight customizations here
     vim.api.nvim_set_hl(0, 'CpTestAC', { link = 'String' })
   end
 })
+<
 ==============================================================================
 RUN PANEL KEYMAPS                                              *cp-test-keys*

View file

@@ -160,9 +160,9 @@ end
 ---@param contest_id string
 ---@param problem_id string
 ---@param test_cases TestCase[]
----@param timeout_ms? number
----@param memory_mb? number
----@param interactive? boolean
+---@param timeout_ms number
+---@param memory_mb number
+---@param interactive boolean
 function M.set_test_cases(
   platform,
   contest_id,
@@ -185,8 +185,9 @@ function M.set_test_cases(
   local index = cache_data[platform][contest_id].index_map[problem_id]
   cache_data[platform][contest_id].problems[index].test_cases = test_cases
-  cache_data[platform][contest_id].problems[index].timeout_ms = timeout_ms or 0
-  cache_data[platform][contest_id].problems[index].memory_mb = memory_mb or 0
+  cache_data[platform][contest_id].problems[index].timeout_ms = timeout_ms
+  cache_data[platform][contest_id].problems[index].memory_mb = memory_mb
+  cache_data[platform][contest_id].problems[index].interactive = interactive
   M.save()
 end

View file

@@ -57,6 +57,12 @@ function M.setup_contest(platform, contest_id, problem_id, language)
   logger.log(('Fetching test cases...'):format(cached_len, #problems))
   scraper.scrape_all_tests(platform, contest_id, function(ev)
     local cached_tests = {}
+    if vim.tbl_isempty(ev.tests) then
+      logger.log(
+        ("No tests found for problem '%s'."):format(ev.problem_id),
+        vim.log.levels.WARN
+      )
+    end
     for i, t in ipairs(ev.tests) do
       cached_tests[i] = { index = i, input = t.input, expected = t.expected }
     end
@@ -66,7 +72,8 @@ function M.setup_contest(platform, contest_id, problem_id, language)
       ev.problem_id,
       cached_tests,
       ev.timeout_ms or 0,
-      ev.memory_mb or 0
+      ev.memory_mb or 0,
+      ev.interactive
     )
     logger.log('Test cases loaded.')
  end)
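The PR's "warn, don't fail" policy can be sketched platform-agnostically: a problem that scraped zero tests is still cached (with an empty list) and produces a warning, instead of aborting the contest setup. `ingest_scrape_event`, `cache`, and `warnings_log` are hypothetical stand-ins for the plugin's callback, cache module, and logger:

```python
def ingest_scrape_event(ev: dict, cache: dict, warnings_log: list) -> None:
    """Cache one scraped problem; warn (rather than error) on empty tests."""
    tests = ev.get("tests", [])
    if not tests:
        warnings_log.append(f"No tests found for problem '{ev['problem_id']}'.")
    cache[ev["problem_id"]] = {
        "test_cases": tests,
        "timeout_ms": ev.get("timeout_ms", 0),
        "memory_mb": ev.get("memory_mb", 0),
        "interactive": bool(ev.get("interactive")),
    }

cache, warnings_log = {}, []
ingest_scrape_event({"problem_id": "a", "tests": []}, cache, warnings_log)
ingest_scrape_event(
    {"problem_id": "b", "tests": [{"input": "1", "expected": "1"}]},
    cache, warnings_log,
)
```

Note that problem "a" still lands in the cache, so `:CP` navigation keeps working even when scraping found nothing.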

View file

@@ -169,7 +169,7 @@ def _parse_tasks_list(html: str) -> list[dict[str, str]]:
     return rows
-def _extract_limits(html: str) -> tuple[int, float]:
+def _extract_problem_info(html: str) -> tuple[int, float, bool]:
     soup = BeautifulSoup(html, "html.parser")
     txt = soup.get_text(" ", strip=True)
     timeout_ms = 0
@@ -180,7 +180,10 @@ def _extract_limits(html: str) -> tuple[int, float]:
     ms = re.search(r"Memory\s*Limit:\s*(\d+)\s*MiB", txt, flags=re.I)
     if ms:
         memory_mb = float(ms.group(1)) * MIB_TO_MB
-    return timeout_ms, memory_mb
+    div = soup.select_one("#problem-statement")
+    txt = div.get_text(" ", strip=True) if div else soup.get_text(" ", strip=True)
+    interactive = "This is an interactive" in txt
+    return timeout_ms, memory_mb, interactive
 def _extract_samples(html: str) -> list[TestCase]:
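A stdlib-only sketch of what `_extract_problem_info` now returns. The real code flattens the statement with BeautifulSoup first; here we assume the text is already plain, and the time-limit pattern is an assumption (only the memory regex appears in the hunk above):

```python
import re

MIB_TO_MB = 1.048576  # 1 MiB in MB

def extract_problem_info(txt: str) -> tuple[int, float, bool]:
    """Parse time limit (ms), memory limit (MB), and interactivity from text."""
    timeout_ms, memory_mb = 0, 0.0
    # Assumed time-limit pattern, e.g. "Time Limit: 2 sec"
    ts = re.search(r"Time\s*Limit:\s*(\d+(?:\.\d+)?)\s*sec", txt, flags=re.I)
    if ts:
        timeout_ms = int(float(ts.group(1)) * 1000)
    ms = re.search(r"Memory\s*Limit:\s*(\d+)\s*MiB", txt, flags=re.I)
    if ms:
        memory_mb = float(ms.group(1)) * MIB_TO_MB
    interactive = "This is an interactive" in txt
    return timeout_ms, memory_mb, interactive
```

The interactivity check is a plain substring match on the statement body, mirroring the `"This is an interactive"` test in the diff.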
@@ -213,13 +216,16 @@ def _scrape_tasks_sync(contest_id: str) -> list[dict[str, str]]:
 def _scrape_problem_page_sync(contest_id: str, slug: str) -> dict[str, Any]:
     html = _fetch(f"{BASE_URL}/contests/{contest_id}/tasks/{slug}")
-    tests = _extract_samples(html)
-    timeout_ms, memory_mb = _extract_limits(html)
+    try:
+        tests = _extract_samples(html)
+    except Exception:
+        tests = []
+    timeout_ms, memory_mb, interactive = _extract_problem_info(html)
     return {
         "tests": tests,
         "timeout_ms": timeout_ms,
         "memory_mb": memory_mb,
-        "interactive": False,
+        "interactive": interactive,
     }
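The softened scrape above can be reduced to a minimal sketch: a failing sample parser degrades to an empty test list instead of propagating, so the rest of the page data still comes back. `scrape_page`, `parse_samples`, and `broken_parser` are hypothetical names for illustration:

```python
def scrape_page(html: str, parse_samples) -> dict:
    """Return page data; a parser failure yields [] rather than an exception."""
    try:
        tests = parse_samples(html)
    except Exception:
        tests = []  # soften: missing samples become a downstream warning
    return {"tests": tests, "interactive": False}

def broken_parser(html):
    raise ValueError("unexpected markup")
```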
@@ -309,47 +315,22 @@ class AtcoderScraper(BaseScraper):
             slug = row.get("slug") or ""
             if not letter or not slug:
                 return
-            try:
-                data = await asyncio.to_thread(
-                    _scrape_problem_page_sync, category_id, slug
-                )
-                tests: list[TestCase] = data["tests"]
-                if not tests:
-                    print(
-                        json.dumps(
-                            {
-                                "problem_id": letter,
-                                "error": f"{self.platform_name}: no tests found",
-                            }
-                        ),
-                        flush=True,
-                    )
-                    return
-                print(
-                    json.dumps(
-                        {
-                            "problem_id": letter,
-                            "tests": [
-                                {"input": t.input, "expected": t.expected}
-                                for t in tests
-                            ],
-                            "timeout_ms": data["timeout_ms"],
-                            "memory_mb": data["memory_mb"],
-                            "interactive": bool(data["interactive"]),
-                        }
-                    ),
-                    flush=True,
-                )
-            except Exception as e:
-                print(
-                    json.dumps(
-                        {
-                            "problem_id": letter,
-                            "error": f"{self.platform_name}: {str(e)}",
-                        }
-                    ),
-                    flush=True,
-                )
+            data = await asyncio.to_thread(_scrape_problem_page_sync, category_id, slug)
+            tests: list[TestCase] = data.get("tests", [])
+            print(
+                json.dumps(
+                    {
+                        "problem_id": letter,
+                        "tests": [
+                            {"input": t.input, "expected": t.expected} for t in tests
+                        ],
+                        "timeout_ms": data.get("timeout_ms", 0),
+                        "memory_mb": data.get("memory_mb", 0),
+                        "interactive": bool(data.get("interactive")),
+                    }
+                ),
+                flush=True,
+            )
         await asyncio.gather(*(emit(r) for r in rows))
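The simplified emit path above always prints one well-formed JSON line per problem, with `.get()` defaults guarding missing keys instead of branching into a separate error object. A self-contained sketch (the `emit` signature and `out` parameter are hypothetical; the real code writes to stdout):

```python
import io
import json

def emit(data: dict, letter: str, out) -> None:
    """Print one JSON line per problem, defaulting absent fields."""
    print(
        json.dumps(
            {
                "problem_id": letter,
                "tests": data.get("tests", []),
                "timeout_ms": data.get("timeout_ms", 0),
                "memory_mb": data.get("memory_mb", 0),
                "interactive": bool(data.get("interactive")),
            }
        ),
        file=out,
        flush=True,
    )

buf = io.StringIO()
emit({}, "a", buf)  # even an empty scrape result yields a well-formed record
```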

View file

@@ -244,20 +244,7 @@ class CodeforcesScraper(BaseScraper):
         for b in blocks:
             pid = b["letter"].lower()
-            tests: list[TestCase] = b["tests"]
-            if not tests:
-                print(
-                    json.dumps(
-                        {
-                            "problem_id": pid,
-                            "error": f"{self.platform_name}: no tests found",
-                        }
-                    ),
-                    flush=True,
-                )
-                continue
+            tests: list[TestCase] = b.get("tests", [])
             print(
                 json.dumps(
                     {
@@ -265,9 +252,9 @@ class CodeforcesScraper(BaseScraper):
                         "tests": [
                             {"input": t.input, "expected": t.expected} for t in tests
                         ],
-                        "timeout_ms": b["timeout_ms"],
-                        "memory_mb": b["memory_mb"],
-                        "interactive": bool(b["interactive"]),
+                        "timeout_ms": b.get("timeout_ms", 0),
+                        "memory_mb": b.get("memory_mb", 0),
+                        "interactive": bool(b.get("interactive")),
                     }
                 ),
                 flush=True,

View file

@@ -221,23 +221,18 @@ class CSESScraper(BaseScraper):
                 html = await fetch_text(client, task_path(pid))
                 tests = parse_tests(html)
                 timeout_ms, memory_mb = parse_limits(html)
-                if not tests:
-                    return {
-                        "problem_id": pid,
-                        "error": f"{self.platform_name}: no tests found",
-                    }
-                return {
-                    "problem_id": pid,
-                    "tests": [
-                        {"input": t.input, "expected": t.expected}
-                        for t in tests
-                    ],
-                    "timeout_ms": timeout_ms,
-                    "memory_mb": memory_mb,
-                    "interactive": False,
-                }
-            except Exception as e:
-                return {"problem_id": pid, "error": str(e)}
+            except Exception:
+                tests = []
+                timeout_ms, memory_mb = 0, 0
+            return {
+                "problem_id": pid,
+                "tests": [
+                    {"input": t.input, "expected": t.expected} for t in tests
+                ],
+                "timeout_ms": timeout_ms,
+                "memory_mb": memory_mb,
+                "interactive": False,
+            }
         tasks = [run_one(p.id) for p in problems]
         for coro in asyncio.as_completed(tasks):