This is an automated email from the ASF dual-hosted git repository.
gstein pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/steve.git
The following commit(s) were added to refs/heads/trunk by this push:
new 91bace0 docs: add testing documentation and update main README
91bace0 is described below
commit 91bace0a1783ba856f54dcc4de1a9600cf9df74b
Author: Greg Stein <[email protected]>
AuthorDate: Tue Feb 3 01:55:42 2026 -0600
docs: add testing documentation and update main README
Co-authored-by: aider (openrouter/x-ai/grok-code-fast-1) <[email protected]>
---
v3/README.md | 6 +++++-
v3/tests/README.md | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/v3/README.md b/v3/README.md
index 65c3263..54eb8c2 100644
--- a/v3/README.md
+++ b/v3/README.md
@@ -203,7 +203,11 @@ To tally a specific issue:
This is _TBD_
A basic example of using the API is available via the
-[code coverage testing script](test/check_coverage.py).
+[code coverage testing script](tests/check_coverage.py).
+
+## Testing
+
+See [tests/README.md](tests/README.md) for details on testing the codebase.
## Threat Model
diff --git a/v3/tests/README.md b/v3/tests/README.md
new file mode 100644
index 0000000..87cf5e6
--- /dev/null
+++ b/v3/tests/README.md
@@ -0,0 +1,51 @@
+# Testing
+
+This directory contains scripts and utilities for testing the v3 codebase.
+
+## Code Coverage Testing
+
+To run code coverage testing, use the `check_coverage.py` script. This script uses the `coverage` library to measure code coverage of the `steve` package.
+
+### Prerequisites
+
+- Install the `coverage` library: `pip install coverage`
+- Install the `faker` package: `pip install faker`
+
+### Usage
+
+Run the script from the `v3/tests/` directory:
+
+```bash
+python check_coverage.py
+```
+
+This will generate a coverage report in the terminal and create an HTML report in the `covreport/` directory.
+
+## STV Testing
+
+STV (Single Transferable Vote) testing involves running the STV tally process on sample data and verifying the results.
+
+### Prerequisites
+
+- Ensure the `stv_tool` module is available at `../../../monitoring/stv_tool.py` (relative to `v3/steve/vtypes/stv.py`).
+
+### Scripts
+
+- `populate_v2_stv.sh`: Generates test data directories with `raw_board_votes.txt` and `board_nominations.ini` files.
+- `run_stv.py`: Runs the STV tally on a given meeting directory (e.g., `Meetings/yyyymmdd`).
+- `check_stv_outputs.sh`: Compares the sorted outputs from two directories created by `populate_v2_stv.sh`.
+
+### Workflow
+
+1. Run `populate_v2_stv.sh` to create two test data directories (e.g., `dir1` and `dir2`).
+2. For each directory, run `run_stv.py` on the meeting subdirectories to generate outputs.
+3. Use `check_stv_outputs.sh` to verify that the outputs from `dir1` and `dir2` are pairwise equal after sorting.
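
The final "pairwise equal after sorting" step can be sketched as below. This is a hedged illustration only: the file names, directory layout, and exact semantics are assumptions, not the actual behavior of `check_stv_outputs.sh`.

```python
# Hypothetical sketch of the "pairwise equal after sorting" check; the
# real check_stv_outputs.sh may use different file names and layout.
from pathlib import Path
import tempfile

def outputs_match(dir1, dir2):
    """True if every file in dir1 has a same-named file in dir2 whose
    content is identical once each file's lines are sorted."""
    names1 = sorted(p.name for p in Path(dir1).iterdir())
    names2 = sorted(p.name for p in Path(dir2).iterdir())
    if names1 != names2:
        return False
    return all(
        sorted(Path(dir1, n).read_text().splitlines())
        == sorted(Path(dir2, n).read_text().splitlines())
        for n in names1
    )

# Demo: two runs whose output lines differ only in order still match.
with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    Path(d1, "tally.txt").write_text("alice\nbob\n")
    Path(d2, "tally.txt").write_text("bob\nalice\n")
    result = outputs_match(d1, d2)

print(result)  # True
```

Sorting before comparing makes the check insensitive to output ordering, which is the property the workflow relies on.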
+
+### Dependencies
+
+- `raw_board_votes.txt`: Contains the raw vote data.
+- `board_nominations.ini`: Contains the label mappings for candidates.
+
+### Importing stv_tool
+
+The `stv.py` module imports `stv_tool` from `../../../monitoring/stv_tool.py` using dynamic loading, since that file lives outside the package tree and is not reachable through a normal `import` statement.
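
As an illustration, loading a module from an explicit file path can be done with `importlib.util`. The sketch below uses a stand-in module written to a temporary directory so it is self-contained; the real `stv_tool.py` API and the exact mechanism in `stv.py` may differ.

```python
# Illustrative only: load a module from an explicit file path, the kind
# of dynamic loading described above. A stand-in module is created in a
# temp directory so this sketch is self-contained.
import importlib.util
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in for stv_tool.py; the real module's API may differ.
    path = os.path.join(tmp, "stv_tool.py")
    with open(path, "w") as f:
        f.write("def run_stv(votes, seats):\n    return sorted(votes)[:seats]\n")

    # Build a module spec from the file path and execute the module.
    spec = importlib.util.spec_from_file_location("stv_tool", path)
    stv_tool = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(stv_tool)

    winners = stv_tool.run_stv(["carol", "alice", "bob"], 2)

print(winners)  # ['alice', 'bob']
```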