Merge pull request #189 from barrett-ruth/feat/multi-test-case

Multi-Test Case View
Barrett Ruth 2025-11-05 19:23:09 -05:00 committed by GitHub
commit 5995ded7d5
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
16 changed files with 495 additions and 149 deletions


@@ -2,7 +2,7 @@ minimum_pre_commit_version: '3.5.0'
repos: repos:
- repo: https://github.com/JohnnyMorganz/StyLua - repo: https://github.com/JohnnyMorganz/StyLua
rev: v2.1.0 rev: v2.3.1
hooks: hooks:
- id: stylua-github - id: stylua-github
name: stylua (Lua formatter) name: stylua (Lua formatter)
@@ -10,7 +10,7 @@ repos:
pass_filenames: true pass_filenames: true
- repo: https://github.com/astral-sh/ruff-pre-commit - repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.9 rev: v0.14.3
hooks: hooks:
- id: ruff-format - id: ruff-format
name: ruff (format) name: ruff (format)
@@ -30,7 +30,7 @@ repos:
pass_filenames: false pass_filenames: false
- repo: https://github.com/pre-commit/mirrors-prettier - repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.1.0 rev: v4.0.0-alpha.8
hooks: hooks:
- id: prettier - id: prettier
name: prettier (format markdown) name: prettier (format markdown)


@@ -34,15 +34,30 @@ COMMANDS *cp-commands*
:CP codeforces 1933 --lang python :CP codeforces 1933 --lang python
< <
View Commands ~ View Commands ~
:CP run [--debug] [n] :CP run [all|n|n,m,...] [--debug]
Run tests in I/O view (see |cp-io-view|). Run tests in I/O view (see |cp-io-view|).
Lightweight split showing test verdicts. Lightweight split showing test verdicts.
Without [n]: runs all tests, shows verdict summary
With [n]: runs test n, shows detailed output Execution modes:
• :CP run Combined: single execution with all tests
(auto-switches to individual when multiple samples)
• :CP run all Individual: N separate executions
• :CP run n Individual: run test n only
• :CP run n,m,... Individual: run specific tests (e.g. nth and mth)
--debug: Use debug build (builds to build/<name>.dbg) --debug: Use debug build (builds to build/<name>.dbg)
Combined mode runs all test inputs in one execution (matching
platform behavior for multi-test problems). When a problem has
multiple independent sample test cases, :CP run auto-switches to
individual mode to run each sample separately.
Examples: > Examples: >
:CP run " All tests :CP run " Combined: all tests, one execution
:CP run --debug 2 " Test 2, debug build :CP run all " Individual: all tests, N executions
:CP run 2 " Individual: test 2 only
:CP run 1,3,5 " Individual: tests 1, 3, and 5
:CP run all --debug " Individual with debug build
< <
:CP panel [--debug] [n] :CP panel [--debug] [n]
Open full-screen test panel (see |cp-panel|). Open full-screen test panel (see |cp-panel|).
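As a sketch of the selector grammar documented above (`all`, `n`, or `n,m,...`), here is a hypothetical Python rendition of the argument parsing; the plugin itself does this in Lua, and `parse_test_indices` is an illustrative name, not part of cp.nvim:

```python
def parse_test_indices(arg: str):
    """Parse a ':CP run' test selector: 'all', 'n', or 'n,m,...'.

    Returns (mode, indices): indices is None for 'all' (run every test
    individually) and a list of 1-based test numbers otherwise.
    Raises ValueError for anything that is not a positive integer.
    """
    if arg == "all":
        return "individual", None
    indices = []
    for part in arg.split(","):
        if not part.isdigit() or int(part) < 1:
            raise ValueError(f"Invalid test index '{part}' in list")
        indices.append(int(part))
    return "individual", indices
```

Invalid entries such as `:CP run 1,x` are rejected as a whole, matching the error-first behavior of the real parser.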
@@ -536,10 +551,27 @@ Example: Setting up and solving AtCoder contest ABC324
I/O VIEW *cp-io-view* I/O VIEW *cp-io-view*
The I/O view provides lightweight test feedback in persistent side splits. The I/O view provides lightweight test feedback in persistent side splits.
All test outputs are concatenated with verdict summaries at the bottom. Test outputs are concatenated with verdict summaries at the bottom.
The |cp-panel| offers more fine-grained analysis with diff modes. The |cp-panel| offers more fine-grained analysis with diff modes.
Access the I/O view with :CP run [n] Execution Modes ~
The I/O view supports two execution modes:
Combined Mode (:CP run with single sample)
• Single execution with all test inputs concatenated
• Matches platform behavior (e.g. Codeforces multi-test format)
• Shows one verdict for the entire execution
• Input split: All test inputs concatenated
• Output split: Single program output + verdict
• Used when problem has one sample containing multiple test cases
Individual Mode (:CP run all / :CP run n / :CP run n,m,...)
• Separate execution for each test case
• Per-test verdicts for debugging
• Input split: Selected test inputs concatenated
• Output split: All test outputs concatenated + per-test verdicts
• Auto-selected when problem has multiple independent samples
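Combined mode's stdin construction can be illustrated with a short sketch. `build_combined_input` is a hypothetical helper mirroring the count-prefix logic this PR adds to the Codeforces scraper: a grouped (multi-test) problem reads the number of test cases from the first input line, so the count is prepended before the concatenated samples.

```python
def build_combined_input(samples: list[str], grouped: bool) -> str:
    """Concatenate sample inputs into one stdin payload.

    For a grouped (Codeforces-style multi-test) problem the program
    expects the test-case count on the first line, so len(samples) is
    prepended; otherwise the inputs are simply joined.
    """
    if grouped and samples:
        return f"{len(samples)}\n" + "\n".join(samples)
    return "\n".join(samples)
```

This is why a single combined execution can match platform behavior exactly: the program sees the same count-prefixed input it would receive on the judge.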
Layout ~ Layout ~
@@ -561,7 +593,7 @@ The I/O view appears as 30% width splits on the right side: >
└──────────────────────────┴─────────────────────────────────────────────┘ └──────────────────────────┴─────────────────────────────────────────────┘
< <
The output split shows: The output split shows:
1. Concatenated test outputs (separated by blank lines) 1. Program output (raw, preserving all formatting)
2. Space-aligned verdict summary with: 2. Space-aligned verdict summary with:
- Test number and status (AC/WA/TLE/MLE/RTE with color highlighting) - Test number and status (AC/WA/TLE/MLE/RTE with color highlighting)
- Runtime: actual/limit in milliseconds - Runtime: actual/limit in milliseconds
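The space-aligned summary can be sketched as follows; `format_verdicts` and its field names are illustrative only, not the plugin's actual `format_verdict` callback. Column widths are derived from the widest runtime so the `actual/limit` fields line up across tests:

```python
def format_verdicts(results: list[dict], time_limit_ms: int) -> list[str]:
    """Render a space-aligned verdict summary, one line per test.

    Each result dict carries a 'status' (AC/WA/...) and a 'time_ms'
    runtime; runtimes are right-aligned to the widest value.
    """
    width = max(len(f"{r['time_ms']:.2f}") for r in results)
    lines = []
    for i, r in enumerate(results, start=1):
        lines.append(
            f"Test {i}: {r['status']:<3} {r['time_ms']:>{width}.2f}/{time_limit_ms} ms"
        )
    return lines
```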
@@ -570,8 +602,10 @@ The output split shows:
Usage ~ Usage ~
:CP run Run all tests :CP run Combined mode: all tests in one execution
:CP run 3 Run test 3 only :CP run all Individual mode: all tests separately
:CP run 3 Individual mode: test 3 only
:CP run 1,3,5 Individual mode: specific tests (1, 3, and 5)
Navigation ~ Navigation ~


@@ -16,6 +16,10 @@
---@field name string ---@field name string
---@field id string ---@field id string
---@class CombinedTest
---@field input string
---@field expected string
---@class Problem ---@class Problem
---@field id string ---@field id string
---@field name? string ---@field name? string
@@ -23,6 +27,7 @@
---@field multi_test? boolean ---@field multi_test? boolean
---@field memory_mb? number ---@field memory_mb? number
---@field timeout_ms? number ---@field timeout_ms? number
---@field combined_test? CombinedTest
---@field test_cases TestCase[] ---@field test_cases TestCase[]
---@class TestCase ---@class TestCase
@@ -181,9 +186,34 @@ function M.get_test_cases(platform, contest_id, problem_id)
return cache_data[platform][contest_id].problems[index].test_cases or {} return cache_data[platform][contest_id].problems[index].test_cases or {}
end end
---@param platform string
---@param contest_id string
---@param problem_id? string
---@return CombinedTest?
function M.get_combined_test(platform, contest_id, problem_id)
vim.validate({
platform = { platform, 'string' },
contest_id = { contest_id, 'string' },
problem_id = { problem_id, { 'string', 'nil' }, true },
})
if
not cache_data[platform]
or not cache_data[platform][contest_id]
or not cache_data[platform][contest_id].problems
or not cache_data[platform][contest_id].index_map
then
return nil
end
local index = cache_data[platform][contest_id].index_map[problem_id]
return cache_data[platform][contest_id].problems[index].combined_test
end
---@param platform string ---@param platform string
---@param contest_id string ---@param contest_id string
---@param problem_id string ---@param problem_id string
---@param combined_test? CombinedTest
---@param test_cases TestCase[] ---@param test_cases TestCase[]
---@param timeout_ms number ---@param timeout_ms number
---@param memory_mb number ---@param memory_mb number
@@ -193,6 +223,7 @@ function M.set_test_cases(
platform, platform,
contest_id, contest_id,
problem_id, problem_id,
combined_test,
test_cases, test_cases,
timeout_ms, timeout_ms,
memory_mb, memory_mb,
@@ -203,6 +234,7 @@ function M.set_test_cases(
platform = { platform, 'string' }, platform = { platform, 'string' },
contest_id = { contest_id, 'string' }, contest_id = { contest_id, 'string' },
problem_id = { problem_id, { 'string', 'nil' }, true }, problem_id = { problem_id, { 'string', 'nil' }, true },
combined_test = { combined_test, { 'table', 'nil' }, true },
test_cases = { test_cases, 'table' }, test_cases = { test_cases, 'table' },
timeout_ms = { timeout_ms, { 'number', 'nil' }, true }, timeout_ms = { timeout_ms, { 'number', 'nil' }, true },
memory_mb = { memory_mb, { 'number', 'nil' }, true }, memory_mb = { memory_mb, { 'number', 'nil' }, true },
@@ -212,6 +244,7 @@ function M.set_test_cases(
local index = cache_data[platform][contest_id].index_map[problem_id] local index = cache_data[platform][contest_id].index_map[problem_id]
cache_data[platform][contest_id].problems[index].combined_test = combined_test
cache_data[platform][contest_id].problems[index].test_cases = test_cases cache_data[platform][contest_id].problems[index].test_cases = test_cases
cache_data[platform][contest_id].problems[index].timeout_ms = timeout_ms cache_data[platform][contest_id].problems[index].timeout_ms = timeout_ms
cache_data[platform][contest_id].problems[index].memory_mb = memory_mb cache_data[platform][contest_id].problems[index].memory_mb = memory_mb


@@ -17,8 +17,11 @@ local actions = constants.ACTIONS
---@field problem_id? string ---@field problem_id? string
---@field interactor_cmd? string ---@field interactor_cmd? string
---@field test_index? integer ---@field test_index? integer
---@field test_indices? integer[]
---@field mode? string
---@field debug? boolean ---@field debug? boolean
---@field language? string ---@field language? string
---@field subcommand? string
--- Turn raw args into normalized structure to later dispatch --- Turn raw args into normalized structure to later dispatch
---@param args string[] The raw command-line mode args ---@param args string[] The raw command-line mode args
@@ -75,25 +78,84 @@ local function parse_command(args)
return { type = 'action', action = 'edit', test_index = test_index } return { type = 'action', action = 'edit', test_index = test_index }
elseif first == 'run' or first == 'panel' then elseif first == 'run' or first == 'panel' then
local debug = false local debug = false
local test_index = nil local test_indices = nil
local mode = 'combined'
if #args == 2 then if #args == 2 then
if args[2] == '--debug' then if args[2] == '--debug' then
debug = true debug = true
elseif args[2] == 'all' then
mode = 'individual'
else
if args[2]:find(',') then
local indices = {}
for num in args[2]:gmatch('[^,]+') do
local idx = tonumber(num)
if not idx or idx < 1 or idx ~= math.floor(idx) then
return {
type = 'error',
message = ("Invalid test index '%s' in list"):format(num),
}
end
table.insert(indices, idx)
end
if #indices == 0 then
return { type = 'error', message = 'No valid test indices provided' }
end
test_indices = indices
mode = 'individual'
else else
local idx = tonumber(args[2]) local idx = tonumber(args[2])
if not idx then if not idx then
return { return {
type = 'error', type = 'error',
message = ("Invalid argument '%s': expected test number or --debug"):format(args[2]), message = ("Invalid argument '%s': expected test number(s), 'all', or --debug"):format(
args[2]
),
} }
end end
if idx < 1 or idx ~= math.floor(idx) then if idx < 1 or idx ~= math.floor(idx) then
return { type = 'error', message = ("'%s' is not a valid test index"):format(idx) } return { type = 'error', message = ("'%s' is not a valid test index"):format(idx) }
end end
test_index = idx test_indices = { idx }
mode = 'individual'
end
end end
elseif #args == 3 then elseif #args == 3 then
if args[2] == 'all' then
mode = 'individual'
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
debug = true
elseif args[2]:find(',') then
local indices = {}
for num in args[2]:gmatch('[^,]+') do
local idx = tonumber(num)
if not idx or idx < 1 or idx ~= math.floor(idx) then
return {
type = 'error',
message = ("Invalid test index '%s' in list"):format(num),
}
end
table.insert(indices, idx)
end
if #indices == 0 then
return { type = 'error', message = 'No valid test indices provided' }
end
if args[3] ~= '--debug' then
return {
type = 'error',
message = ("Invalid argument '%s': expected --debug"):format(args[3]),
}
end
test_indices = indices
mode = 'individual'
debug = true
else
local idx = tonumber(args[2]) local idx = tonumber(args[2])
if not idx then if not idx then
return { return {
@@ -110,16 +172,26 @@ local function parse_command(args)
message = ("Invalid argument '%s': expected --debug"):format(args[3]), message = ("Invalid argument '%s': expected --debug"):format(args[3]),
} }
end end
test_index = idx test_indices = { idx }
mode = 'individual'
debug = true debug = true
end
elseif #args > 3 then elseif #args > 3 then
return { return {
type = 'error', type = 'error',
message = 'Too many arguments. Usage: :CP ' .. first .. ' [test_num] [--debug]', message = 'Too many arguments. Usage: :CP '
.. first
.. ' [all|test_num[,test_num...]] [--debug]',
} }
end end
return { type = 'action', action = first, test_index = test_index, debug = debug } return {
type = 'action',
action = first,
test_indices = test_indices,
debug = debug,
mode = mode,
}
else else
local language = nil local language = nil
if #args >= 3 and args[2] == '--lang' then if #args >= 3 and args[2] == '--lang' then
@@ -197,9 +269,12 @@ function M.handle_command(opts)
if cmd.action == 'interact' then if cmd.action == 'interact' then
ui.toggle_interactive(cmd.interactor_cmd) ui.toggle_interactive(cmd.interactor_cmd)
elseif cmd.action == 'run' then elseif cmd.action == 'run' then
ui.run_io_view(cmd.test_index, cmd.debug) ui.run_io_view(cmd.test_indices, cmd.debug, cmd.mode)
elseif cmd.action == 'panel' then elseif cmd.action == 'panel' then
ui.toggle_panel({ debug = cmd.debug, test_index = cmd.test_index }) ui.toggle_panel({
debug = cmd.debug,
test_index = cmd.test_indices and cmd.test_indices[1] or nil,
})
elseif cmd.action == 'next' then elseif cmd.action == 'next' then
setup.navigate_problem(1, cmd.language) setup.navigate_problem(1, cmd.language)
elseif cmd.action == 'prev' then elseif cmd.action == 'prev' then


@@ -198,6 +198,40 @@ function M.load_test_cases()
return #tcs > 0 return #tcs > 0
end end
---@param debug boolean?
---@return RanTestCase?
function M.run_combined_test(debug)
local combined = cache.get_combined_test(
state.get_platform() or '',
state.get_contest_id() or '',
state.get_problem_id()
)
if not combined then
logger.log('No combined test found', vim.log.levels.ERROR)
return nil
end
local ran_test = {
index = 1,
input = combined.input,
expected = combined.expected,
status = 'running',
actual = nil,
time_ms = nil,
code = nil,
ok = nil,
signal = nil,
tled = false,
mled = false,
rss_mb = 0,
selected = true,
}
local result = run_single_test_case(ran_test, debug)
return result
end
---@param index number ---@param index number
---@param debug boolean? ---@param debug boolean?
---@return boolean ---@return boolean


@@ -194,6 +194,7 @@ function M.scrape_all_tests(platform, contest_id, callback)
end end
if type(callback) == 'function' then if type(callback) == 'function' then
callback({ callback({
combined = ev.combined,
tests = ev.tests, tests = ev.tests,
timeout_ms = ev.timeout_ms or 0, timeout_ms = ev.timeout_ms or 0,
memory_mb = ev.memory_mb or 0, memory_mb = ev.memory_mb or 0,


@@ -82,7 +82,7 @@ local function start_tests(platform, contest_id, problems)
return not vim.tbl_isempty(cache.get_test_cases(platform, contest_id, p.id)) return not vim.tbl_isempty(cache.get_test_cases(platform, contest_id, p.id))
end, problems) end, problems)
if cached_len ~= #problems then if cached_len ~= #problems then
logger.log(('Fetching test cases... (%d/%d)'):format(cached_len, #problems)) logger.log(('Fetching problem test data... (%d/%d)'):format(cached_len, #problems))
scraper.scrape_all_tests(platform, contest_id, function(ev) scraper.scrape_all_tests(platform, contest_id, function(ev)
local cached_tests = {} local cached_tests = {}
if not ev.interactive and vim.tbl_isempty(ev.tests) then if not ev.interactive and vim.tbl_isempty(ev.tests) then
@@ -95,6 +95,7 @@ local function start_tests(platform, contest_id, problems)
platform, platform,
contest_id, contest_id,
ev.problem_id, ev.problem_id,
ev.combined,
cached_tests, cached_tests,
ev.timeout_ms or 0, ev.timeout_ms or 0,
ev.memory_mb or 0, ev.memory_mb or 0,
@@ -104,31 +105,12 @@ local function start_tests(platform, contest_id, problems)
local io_state = state.get_io_view_state() local io_state = state.get_io_view_state()
if io_state then if io_state then
local problem_id = state.get_problem_id() local combined_test = cache.get_combined_test(platform, contest_id, state.get_problem_id())
local test_cases = cache.get_test_cases(platform, contest_id, problem_id) if combined_test then
local input_lines = {} local input_lines = vim.split(combined_test.input, '\n')
local contest_data = cache.get_contest_data(platform, contest_id)
local is_multi_test = contest_data.problems[contest_data.index_map[problem_id]].multi_test
if is_multi_test and #test_cases > 1 then
table.insert(input_lines, tostring(#test_cases))
for _, tc in ipairs(test_cases) do
local stripped = tc.input:gsub('^1\n', '')
for _, line in ipairs(vim.split(stripped, '\n')) do
table.insert(input_lines, line)
end
end
else
for _, tc in ipairs(test_cases) do
for _, line in ipairs(vim.split(tc.input, '\n')) do
table.insert(input_lines, line)
end
end
end
require('cp.utils').update_buffer_content(io_state.input_buf, input_lines, nil, nil) require('cp.utils').update_buffer_content(io_state.input_buf, input_lines, nil, nil)
end end
end
end) end)
end end
end end


@@ -274,10 +274,25 @@ local function save_all_tests()
local is_multi_test = contest_data.problems[contest_data.index_map[problem_id]].multi_test local is_multi_test = contest_data.problems[contest_data.index_map[problem_id]].multi_test
or false or false
-- Generate combined test from individual test cases
local combined_input = table.concat(
vim.tbl_map(function(tc)
return tc.input
end, edit_state.test_cases),
'\n'
)
local combined_expected = table.concat(
vim.tbl_map(function(tc)
return tc.expected
end, edit_state.test_cases),
'\n'
)
cache.set_test_cases( cache.set_test_cases(
platform, platform,
contest_id, contest_id,
problem_id, problem_id,
{ input = combined_input, expected = combined_expected },
edit_state.test_cases, edit_state.test_cases,
edit_state.constraints and edit_state.constraints.timeout_ms or 0, edit_state.constraints and edit_state.constraints.timeout_ms or 0,
edit_state.constraints and edit_state.constraints.memory_mb or 0, edit_state.constraints and edit_state.constraints.memory_mb or 0,


@@ -287,7 +287,7 @@ function M.ensure_io_view()
return return
end end
io_view_state.current_test_index = new_index io_view_state.current_test_index = new_index
M.run_io_view(new_index) M.run_io_view({ new_index }, false, 'individual')
end end
if cfg.ui.run.next_test_key then if cfg.ui.run.next_test_key then
@@ -338,7 +338,9 @@ function M.ensure_io_view()
vim.api.nvim_set_current_win(solution_win) vim.api.nvim_set_current_win(solution_win)
end end
function M.run_io_view(test_index, debug) function M.run_io_view(test_indices_arg, debug, mode)
mode = mode or 'combined'
local platform, contest_id, problem_id = local platform, contest_id, problem_id =
state.get_platform(), state.get_contest_id(), state.get_problem_id() state.get_platform(), state.get_contest_id(), state.get_problem_id()
if not platform or not contest_id or not problem_id then if not platform or not contest_id or not problem_id then
@@ -356,35 +358,56 @@ function M.run_io_view(test_index, debug)
return return
end end
if mode == 'combined' then
local test_cases = cache.get_test_cases(platform, contest_id, problem_id)
if test_cases and #test_cases > 1 then
mode = 'individual'
end
end
M.ensure_io_view() M.ensure_io_view()
local run = require('cp.runner.run') local run = require('cp.runner.run')
if mode == 'combined' then
local combined = cache.get_combined_test(platform, contest_id, problem_id)
if not combined then
logger.log('No combined test available', vim.log.levels.ERROR)
return
end
else
if not run.load_test_cases() then if not run.load_test_cases() then
logger.log('No test cases available', vim.log.levels.ERROR) logger.log('No test cases available', vim.log.levels.ERROR)
return return
end end
end
local test_state = run.get_panel_state()
local test_indices = {} local test_indices = {}
if test_index then if mode == 'individual' then
if test_index < 1 or test_index > #test_state.test_cases then local test_state = run.get_panel_state()
if test_indices_arg then
for _, idx in ipairs(test_indices_arg) do
if idx < 1 or idx > #test_state.test_cases then
logger.log( logger.log(
string.format( string.format(
'Test %d does not exist (only %d tests available)', 'Test %d does not exist (only %d tests available)',
test_index, idx,
#test_state.test_cases #test_state.test_cases
), ),
vim.log.levels.WARN vim.log.levels.WARN
) )
return return
end end
test_indices = { test_index } end
test_indices = test_indices_arg
else else
for i = 1, #test_state.test_cases do for i = 1, #test_state.test_cases do
test_indices[i] = i test_indices[i] = i
end end
end end
end
local io_state = state.get_io_view_state() local io_state = state.get_io_view_state()
if not io_state then if not io_state then
@@ -418,8 +441,6 @@ function M.run_io_view(test_index, debug)
return return
end end
run.run_all_test_cases(test_indices, debug)
local run_render = require('cp.runner.run_render') local run_render = require('cp.runner.run_render')
run_render.setup_highlights() run_render.setup_highlights()
@@ -430,6 +451,72 @@ function M.run_io_view(test_index, debug)
local formatter = config.ui.run.format_verdict local formatter = config.ui.run.format_verdict
if mode == 'combined' then
local combined = cache.get_combined_test(platform, contest_id, problem_id)
if not combined then
logger.log('No combined test found', vim.log.levels.ERROR)
return
end
run.load_test_cases()
local result = run.run_combined_test(debug)
if not result then
logger.log('Failed to run combined test', vim.log.levels.ERROR)
return
end
input_lines = vim.split(combined.input, '\n')
if result.actual then
output_lines = vim.split(result.actual, '\n')
end
local status = run_render.get_status_info(result)
local test_state = run.get_panel_state()
---@type VerdictFormatData
local format_data = {
index = 1,
status = status,
time_ms = result.time_ms or 0,
time_limit_ms = test_state.constraints and test_state.constraints.timeout_ms or 0,
memory_mb = result.rss_mb or 0,
memory_limit_mb = test_state.constraints and test_state.constraints.memory_mb or 0,
exit_code = result.code or 0,
signal = (result.code and result.code >= 128)
and require('cp.constants').signal_codes[result.code]
or nil,
time_actual_width = #string.format('%.2f', result.time_ms or 0),
time_limit_width = #tostring(
test_state.constraints and test_state.constraints.timeout_ms or 0
),
mem_actual_width = #string.format('%.0f', result.rss_mb or 0),
mem_limit_width = #string.format(
'%.0f',
test_state.constraints and test_state.constraints.memory_mb or 0
),
}
local verdict_result = formatter(format_data)
table.insert(verdict_lines, verdict_result.line)
if verdict_result.highlights then
for _, hl in ipairs(verdict_result.highlights) do
table.insert(verdict_highlights, {
line_offset = #verdict_lines - 1,
col_start = hl.col_start,
col_end = hl.col_end,
group = hl.group,
})
end
end
else
run.run_all_test_cases(test_indices, debug)
local test_state = run.get_panel_state()
local max_time_actual = 0 local max_time_actual = 0
local max_time_limit = 0 local max_time_limit = 0
local max_mem_actual = 0 local max_mem_actual = 0
@@ -449,21 +536,28 @@
) )
end end
local is_multi_test = contest_data.problems[contest_data.index_map[problem_id]].multi_test local all_outputs = {}
if is_multi_test and #test_indices > 1 then
table.insert(input_lines, tostring(#test_indices))
end
for _, idx in ipairs(test_indices) do for _, idx in ipairs(test_indices) do
local tc = test_state.test_cases[idx] local tc = test_state.test_cases[idx]
for _, line in ipairs(vim.split(tc.input, '\n')) do
table.insert(input_lines, line)
end
if tc.actual then if tc.actual then
for _, line in ipairs(vim.split(tc.actual, '\n', { plain = true, trimempty = false })) do table.insert(all_outputs, tc.actual)
end
end
local combined_output = table.concat(all_outputs, '')
if combined_output ~= '' then
for _, line in ipairs(vim.split(combined_output, '\n')) do
table.insert(output_lines, line) table.insert(output_lines, line)
end end
end end
for _, idx in ipairs(test_indices) do
local tc = test_state.test_cases[idx]
local status = run_render.get_status_info(tc) local status = run_render.get_status_info(tc)
---@type VerdictFormatData ---@type VerdictFormatData
@@ -496,13 +590,6 @@ function M.run_io_view(test_index, debug)
}) })
end end
end end
local test_input = tc.input
if is_multi_test and #test_indices > 1 then
test_input = test_input:gsub('^1\n', '')
end
for _, line in ipairs(vim.split(test_input, '\n')) do
table.insert(input_lines, line)
end end
end end


@@ -16,6 +16,7 @@ from urllib3.util.retry import Retry
from .base import BaseScraper from .base import BaseScraper
from .models import ( from .models import (
CombinedTest,
ContestListResult, ContestListResult,
ContestSummary, ContestSummary,
MetadataResult, MetadataResult,
@@ -70,7 +71,7 @@ def _retry_after_requests(details):
on_backoff=_retry_after_requests, on_backoff=_retry_after_requests,
) )
def _fetch(url: str) -> str: def _fetch(url: str) -> str:
r = _session.get(url, headers=HEADERS, timeout=TIMEOUT_SECONDS) r = _session.get(url, headers=HEADERS, timeout=TIMEOUT_SECONDS, verify=False)
if r.status_code in RETRY_STATUS: if r.status_code in RETRY_STATUS:
raise requests.HTTPError(response=r) raise requests.HTTPError(response=r)
r.raise_for_status() r.raise_for_status()
@@ -242,7 +243,8 @@ def _to_problem_summaries(rows: list[dict[str, str]]) -> list[ProblemSummary]:
async def _fetch_all_contests_async() -> list[ContestSummary]: async def _fetch_all_contests_async() -> list[ContestSummary]:
async with httpx.AsyncClient( async with httpx.AsyncClient(
limits=httpx.Limits(max_connections=100, max_keepalive_connections=100) limits=httpx.Limits(max_connections=100, max_keepalive_connections=100),
verify=False,
) as client: ) as client:
first_html = await _get_async(client, ARCHIVE_URL) first_html = await _get_async(client, ARCHIVE_URL)
last = _parse_last_page(first_html) last = _parse_last_page(first_html)
@@ -313,16 +315,25 @@ class AtcoderScraper(BaseScraper):
return return
data = await asyncio.to_thread(_scrape_problem_page_sync, category_id, slug) data = await asyncio.to_thread(_scrape_problem_page_sync, category_id, slug)
tests: list[TestCase] = data.get("tests", []) tests: list[TestCase] = data.get("tests", [])
combined_input = "\n".join(t.input for t in tests)
combined_expected = "\n".join(t.expected for t in tests)
print( print(
json.dumps( json.dumps(
{ {
"problem_id": letter, "problem_id": letter,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [ "tests": [
{"input": t.input, "expected": t.expected} for t in tests {"input": t.input, "expected": t.expected} for t in tests
], ],
"timeout_ms": data.get("timeout_ms", 0), "timeout_ms": data.get("timeout_ms", 0),
"memory_mb": data.get("memory_mb", 0), "memory_mb": data.get("memory_mb", 0),
"interactive": bool(data.get("interactive")), "interactive": bool(data.get("interactive")),
"multi_test": False,
} }
), ),
flush=True, flush=True,
@@ -364,6 +375,7 @@ async def main_async() -> int:
success=False, success=False,
error="Usage: atcoder.py tests <contest_id>", error="Usage: atcoder.py tests <contest_id>",
problem_id="", problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[], tests=[],
timeout_ms=0, timeout_ms=0,
memory_mb=0, memory_mb=0,


@@ -34,10 +34,13 @@ class BaseScraper(ABC):
def _create_tests_error( def _create_tests_error(
self, error_msg: str, problem_id: str = "", url: str = "" self, error_msg: str, problem_id: str = "", url: str = ""
) -> TestsResult: ) -> TestsResult:
from .models import CombinedTest
return TestsResult( return TestsResult(
success=False, success=False,
error=f"{self.platform_name}: {error_msg}", error=f"{self.platform_name}: {error_msg}",
problem_id=problem_id, problem_id=problem_id,
combined=CombinedTest(input="", expected=""),
tests=[], tests=[],
timeout_ms=0, timeout_ms=0,
memory_mb=0, memory_mb=0,


@@ -11,6 +11,7 @@ from scrapling.fetchers import StealthyFetcher
from .base import BaseScraper from .base import BaseScraper
from .models import ( from .models import (
CombinedTest,
ContestListResult, ContestListResult,
ContestSummary, ContestSummary,
MetadataResult, MetadataResult,
@@ -230,14 +231,22 @@ class CodeChefScraper(BaseScraper):
memory_mb = 256.0 memory_mb = 256.0
interactive = False interactive = False
combined_input = "\n".join(t.input for t in tests)
combined_expected = "\n".join(t.expected for t in tests)
return { return {
"problem_id": problem_code, "problem_id": problem_code,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [ "tests": [
{"input": t.input, "expected": t.expected} for t in tests {"input": t.input, "expected": t.expected} for t in tests
], ],
"timeout_ms": timeout_ms, "timeout_ms": timeout_ms,
"memory_mb": memory_mb, "memory_mb": memory_mb,
"interactive": interactive, "interactive": interactive,
"multi_test": False,
} }
tasks = [run_one(problem_code) for problem_code in problems.keys()] tasks = [run_one(problem_code) for problem_code in problems.keys()]
@@ -279,6 +288,7 @@ async def main_async() -> int:
success=False, success=False,
error="Usage: codechef.py tests <contest_id>", error="Usage: codechef.py tests <contest_id>",
problem_id="", problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[], tests=[],
timeout_ms=0, timeout_ms=0,
memory_mb=0, memory_mb=0,


@@ -13,6 +13,7 @@ from scrapling.fetchers import StealthyFetcher
from .base import BaseScraper from .base import BaseScraper
from .models import ( from .models import (
CombinedTest,
ContestListResult, ContestListResult,
ContestSummary, ContestSummary,
MetadataResult, MetadataResult,
@@ -126,16 +127,12 @@ def _extract_samples(block: Tag) -> tuple[list[TestCase], bool]:
) )
for k in keys for k in keys
] ]
samples_with_prefix = [ return samples, True
TestCase(input=f"1\n{tc.input}", expected=tc.expected) for tc in samples
]
return samples_with_prefix, True
inputs = [_text_from_pre(p) for p in input_pres] inputs = [_text_from_pre(p) for p in input_pres]
outputs = [_text_from_pre(p) for p in output_pres] outputs = [_text_from_pre(p) for p in output_pres]
n = min(len(inputs), len(outputs)) n = min(len(inputs), len(outputs))
samples = [TestCase(input=inputs[i], expected=outputs[i]) for i in range(n)] return [TestCase(input=inputs[i], expected=outputs[i]) for i in range(n)], False
return samples, False
def _is_interactive(block: Tag) -> bool: def _is_interactive(block: Tag) -> bool:
@@ -164,18 +161,35 @@ def _parse_all_blocks(html: str) -> list[dict[str, Any]]:
name = _extract_title(b)[1] name = _extract_title(b)[1]
if not letter: if not letter:
continue continue
tests, multi_test = _extract_samples(b) raw_samples, is_grouped = _extract_samples(b)
timeout_ms, memory_mb = _extract_limits(b) timeout_ms, memory_mb = _extract_limits(b)
interactive = _is_interactive(b) interactive = _is_interactive(b)
if is_grouped and raw_samples:
combined_input = f"{len(raw_samples)}\n" + "\n".join(
tc.input for tc in raw_samples
)
combined_expected = "\n".join(tc.expected for tc in raw_samples)
individual_tests = [
TestCase(input=f"1\n{tc.input}", expected=tc.expected)
for tc in raw_samples
]
else:
combined_input = "\n".join(tc.input for tc in raw_samples)
combined_expected = "\n".join(tc.expected for tc in raw_samples)
individual_tests = raw_samples
out.append( out.append(
{ {
"letter": letter, "letter": letter,
"name": name, "name": name,
"tests": tests, "combined_input": combined_input,
"combined_expected": combined_expected,
"tests": individual_tests,
"timeout_ms": timeout_ms, "timeout_ms": timeout_ms,
"memory_mb": memory_mb, "memory_mb": memory_mb,
"interactive": interactive, "interactive": interactive,
"multi_test": multi_test, "multi_test": is_grouped,
} }
) )
return out return out
@@ -252,6 +266,10 @@ class CodeforcesScraper(BaseScraper):
json.dumps( json.dumps(
{ {
"problem_id": pid, "problem_id": pid,
"combined": {
"input": b.get("combined_input", ""),
"expected": b.get("combined_expected", ""),
},
"tests": [ "tests": [
{"input": t.input, "expected": t.expected} for t in tests {"input": t.input, "expected": t.expected} for t in tests
], ],
@ -298,6 +316,7 @@ async def main_async() -> int:
success=False, success=False,
error="Usage: codeforces.py tests <contest_id>", error="Usage: codeforces.py tests <contest_id>",
problem_id="", problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[],
timeout_ms=0,
memory_mb=0,


@@ -10,6 +10,7 @@ import httpx
from .base import BaseScraper
from .models import (
CombinedTest,
ContestListResult,
ContestSummary,
MetadataResult,
@@ -233,14 +234,23 @@ class CSESScraper(BaseScraper):
except Exception:
tests = []
timeout_ms, memory_mb, interactive = 0, 0, False
combined_input = "\n".join(t.input for t in tests)
combined_expected = "\n".join(t.expected for t in tests)
return {
"problem_id": pid,
"combined": {
"input": combined_input,
"expected": combined_expected,
},
"tests": [ "tests": [
{"input": t.input, "expected": t.expected} for t in tests {"input": t.input, "expected": t.expected} for t in tests
], ],
"timeout_ms": timeout_ms, "timeout_ms": timeout_ms,
"memory_mb": memory_mb, "memory_mb": memory_mb,
"interactive": interactive, "interactive": interactive,
"multi_test": False,
}
tasks = [run_one(p.id) for p in problems]
@@ -282,6 +292,7 @@ async def main_async() -> int:
success=False,
error="Usage: cses.py tests <category>",
problem_id="",
combined=CombinedTest(input="", expected=""),
tests=[],
timeout_ms=0,
memory_mb=0,


@@ -8,6 +8,13 @@ class TestCase(BaseModel):
model_config = ConfigDict(extra="forbid")
class CombinedTest(BaseModel):
input: str
expected: str
model_config = ConfigDict(extra="forbid")
class ProblemSummary(BaseModel):
id: str
name: str
@@ -46,6 +53,7 @@ class ContestListResult(ScrapingResult):
class TestsResult(ScrapingResult):
problem_id: str
combined: CombinedTest
tests: list[TestCase] = Field(default_factory=list)
timeout_ms: int
memory_mb: float


@@ -61,6 +61,16 @@ def test_scraper_offline_fixture_matrix(run_scraper_offline, scraper, mode):
tr = TestsResult.model_validate(obj)
assert tr.problem_id != ""
assert isinstance(tr.tests, list)
assert hasattr(tr, "combined"), "Missing combined field"
assert tr.combined is not None, "combined field is None"
assert hasattr(tr.combined, "input"), "combined missing input"
assert hasattr(tr.combined, "expected"), "combined missing expected"
assert isinstance(tr.combined.input, str), "combined.input not string"
assert isinstance(tr.combined.expected, str), (
"combined.expected not string"
)
assert hasattr(tr, "multi_test"), "Missing multi_test field"
assert isinstance(tr.multi_test, bool), "multi_test not boolean"
validated_any = True
else:
assert "problem_id" in obj
@@ -68,5 +78,17 @@ def test_scraper_offline_fixture_matrix(run_scraper_offline, scraper, mode):
assert (
"timeout_ms" in obj and "memory_mb" in obj and "interactive" in obj
)
assert "combined" in obj, "Missing combined field in raw JSON"
assert isinstance(obj["combined"], dict), "combined not a dict"
assert "input" in obj["combined"], "combined missing input key"
assert "expected" in obj["combined"], "combined missing expected key"
assert isinstance(obj["combined"]["input"], str), (
"combined.input not string"
)
assert isinstance(obj["combined"]["expected"], str), (
"combined.expected not string"
)
assert "multi_test" in obj, "Missing multi_test field in raw JSON"
assert isinstance(obj["multi_test"], bool), "multi_test not boolean"
validated_any = True
assert validated_any, "No valid tests payloads validated"
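The raw-JSON branch above asserts the shape of the extended tests payload. A minimal stand-alone illustration of a payload that satisfies those checks (problem id and all values are invented for the sketch):

```python
import json

# A made-up tests payload in the shape the new assertions expect:
# "combined" carries the single-execution input/expected, "multi_test"
# flags whether the samples came from one grouped multi-test block.
raw = """{
  "problem_id": "1933A",
  "combined": {"input": "2\\n1\\n2", "expected": "1\\n4"},
  "tests": [
    {"input": "1\\n1", "expected": "1"},
    {"input": "1\\n2", "expected": "4"}
  ],
  "timeout_ms": 1000,
  "memory_mb": 256.0,
  "interactive": false,
  "multi_test": true
}"""
obj = json.loads(raw)

# Mirror the raw-JSON checks from the test above.
assert "combined" in obj and isinstance(obj["combined"], dict)
assert isinstance(obj["combined"]["input"], str)
assert isinstance(obj["combined"]["expected"], str)
assert "multi_test" in obj and isinstance(obj["multi_test"], bool)
```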