This is an automated email from the ASF dual-hosted git repository.

asorokoumov pushed a commit to branch feat/118
in repository https://gitbox.apache.org/repos/asf/otava.git
commit a55748dc533acc61b72801fdb0a37e80e6d9e337 Author: Alex Sorokoumov <[email protected]> AuthorDate: Fri Jan 30 17:37:53 2026 -0800 Make --branch works across all importers --- docs/BASICS.md | 105 +++++++++++------- examples/postgresql/config/otava.yaml | 12 +- otava/bigquery.py | 11 +- otava/config.py | 22 ---- otava/importer.py | 81 +++++++++++--- otava/main.py | 3 +- otava/postgres.py | 4 +- otava/test_config.py | 28 ++--- pyproject.toml | 1 - tests/cli_help_test.py | 4 +- tests/config_test.py | 146 ------------------------ tests/csv_e2e_test.py | 184 +++++++++++++++++++++++++++++++ tests/graphite_e2e_test.py | 111 ++++++++++++++++++- tests/importer_test.py | 159 +++++++++++++++++++++++++- tests/postgres_e2e_test.py | 6 +- tests/resources/sample_config.yaml | 3 +- tests/resources/sample_multi_branch.csv | 9 ++ tests/resources/sample_single_branch.csv | 6 + tests/tigerbeetle_test.py | 2 +- uv.lock | 11 -- 20 files changed, 624 insertions(+), 284 deletions(-) diff --git a/docs/BASICS.md b/docs/BASICS.md index dcaedbd..b6d7afd 100644 --- a/docs/BASICS.md +++ b/docs/BASICS.md @@ -142,56 +142,85 @@ You can inherit more than one template. ## Validating Performance of a Feature Branch -The `otava regressions` command can work with feature branches. +When developing a feature, you may want to analyze performance test results from a specific branch +to detect any regressions introduced by your changes. The `--branch` option allows you to run +change-point analysis on branch-specific data. -First you need to tell Otava how to fetch the data of the tests run against a feature branch. -The `prefix` property of the graphite test definition accepts `%{BRANCH}` variable, -which is substituted at the data import time by the branch name passed to `--branch` -command argument. Alternatively, if the prefix for the main branch of your product is different -from the prefix used for feature branches, you can define an additional `branch_prefix` property. 
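The prefix handling this commit introduces (replacing the old `branch_prefix` mechanism with plain `%{BRANCH}` substitution in `GraphiteTestConfig.get_path`) can be sketched as follows. This is a simplified standalone model, not the actual class; the real code raises `TestConfigError` where this sketch uses `ValueError`:

```python
from typing import Optional

def resolve_prefix(prefix: str, branch: Optional[str]) -> str:
    """Substitute %{BRANCH} in a Graphite metric prefix.

    Mirrors the new get_path behavior: a prefix containing %{BRANCH}
    requires --branch; a plain prefix is returned unchanged.
    """
    if "%{BRANCH}" in prefix:
        if not branch:
            # The importer refuses to guess a branch for a templated prefix.
            raise ValueError("prefix uses %{BRANCH} but --branch was not specified")
        return prefix.replace("%{BRANCH}", branch)
    return prefix

# A templated prefix resolves against the selected branch:
resolve_prefix("performance-tests.%{BRANCH}.my-product", "feature-xyz")
# -> "performance-tests.feature-xyz.my-product"
```

Note the behavioral change versus the removed code: there is no longer a `main` fallback or a separate `branch_prefix`; a templated prefix simply requires `--branch`.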
+### Configuration + +To support branch-based analysis, use the `%{BRANCH}` placeholder in your test configuration. +This placeholder will be replaced with the branch name specified via `--branch`: ```yaml -my-product.test-1: - type: graphite - tags: [perf-test, daily, my-product, test-1] - prefix: performance-tests.daily.%{BRANCH}.my-product.test-1 - inherit: common-metrics +tests: + my-product.test: + type: graphite + prefix: performance-tests.%{BRANCH}.my-product + tags: [perf-test, my-product] + metrics: + throughput: + suffix: client.throughput + direction: 1 + response_time: + suffix: client.p50 + direction: -1 +``` + +For PostgreSQL or BigQuery tests, use `%{BRANCH}` in your SQL query: -my-product.test-2: - type: graphite - tags: [perf-test, daily, my-product, test-2] - prefix: performance-tests.daily.master.my-product.test-2 - branch_prefix: performance-tests.feature.%{BRANCH}.my-product.test-2 - inherit: common-metrics +```yaml +tests: + my-product.db-test: + type: postgres + time_column: commit_ts + attributes: [experiment_id, commit] + query: | + SELECT commit, commit_ts, throughput, response_time + FROM results + WHERE branch = %{BRANCH} + ORDER BY commit_ts ASC + metrics: + throughput: + direction: 1 + response_time: + direction: -1 ``` -Now you can verify if correct data are imported by running -`otava analyze <test> --branch <branch>`. +For CSV data sources, the branching is done by looking at the `branch` column in the CSV file and filtering rows based on the specified branch value. -The `--branch` argument also works with `otava regressions`. In this case a comparison will be made -between the tail of the specified branch and the tail of the main branch (or a point of the -main branch specified by one of the `--since` selectors). 
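For the SQL-backed importers, the diff rewrites `%{BRANCH}` into a parameterized query rather than interpolating the branch name into the SQL string. A simplified sketch of the Postgres variant (the real code lives in `PostgresImporter.fetch_data` and raises `DataImportError`; the function name here is illustrative):

```python
from typing import Optional, Tuple

def prepare_branch_query(query: str, branch: Optional[str]) -> Tuple[str, Optional[tuple]]:
    """Rewrite %{BRANCH} into pg8000-style %s placeholders plus parameters.

    Using query parameters instead of string interpolation is what
    prevents SQL injection via the --branch value.
    """
    if "%{BRANCH}" not in query:
        return query, None
    if not branch:
        raise ValueError("query uses %{BRANCH} but --branch was not specified")
    # One positional parameter per placeholder occurrence:
    count = query.count("%{BRANCH}")
    return query.replace("%{BRANCH}", "%s"), tuple(branch for _ in range(count))

sql = "SELECT commit, throughput FROM results WHERE branch = %{BRANCH}"
query, params = prepare_branch_query(sql, "feature-xyz")
# query  -> "SELECT commit, throughput FROM results WHERE branch = %s"
# params -> ("feature-xyz",)
```

The BigQuery path in the same commit is analogous but substitutes `@branch` and passes a `ScalarQueryParameter("branch", "STRING", ...)` instead of positional parameters.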
+### Usage + +Run the analysis with the `--branch` option: ``` -$ otava regressions <test or group> --branch <branch> -$ otava regressions <test or group> --branch <branch> --since <date> -$ otava regressions <test or group> --branch <branch> --since-version <version> -$ otava regressions <test or group> --branch <branch> --since-commit <commit> +otava analyze my-product.test --branch feature-xyz ``` -When comparing two branches, you generally want to compare the tails of both test histories, and -specifically a stable sequence from the end that doesn't contain any changes in itself. -To ignore the older test results, and compare -only the last few points on the branch with the tail of the main branch, -use the `--last <n>` selector. E.g. to check regressions on the last run of the tests -on the feature branch: +This will: +1. Fetch data from the branch-specific location. +2. Run change-point detection on the branch's performance data + +### Example ``` -$ otava regressions <test or group> --branch <branch> --last 1 +$ otava analyze my-product.test --branch feature-new-cache --since=-30d +INFO: Computing change points for test my-product.test... +my-product.test: +time throughput response_time +------------------------- ------------ --------------- +2024-01-15 10:00:00 +0000 125000 45.2 +2024-01-16 10:00:00 +0000 124500 44.8 +2024-01-17 10:00:00 +0000 126200 45.1 + ········ + +15.2% + ········ +2024-01-18 10:00:00 +0000 145000 38.5 +2024-01-19 10:00:00 +0000 144200 39.1 +2024-01-20 10:00:00 +0000 146100 38.2 ``` -Please beware that performance validation based on a single data point is quite weak -and Otava might miss a regression if the point is not too much different from -the baseline. However, accuracy improves as more data points accumulate, and it is -a normal way of using Otava to just merge a feature and then revert if it is -flagged later. 
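The CSV branch semantics added by this commit (filter rows on the `branch` column when `--branch` is given; fail fast when the file mixes branches and no branch is selected, or when `--branch` is given but no such column exists) can be sketched as:

```python
from typing import List, Optional

def filter_rows_by_branch(
    headers: List[str], rows: List[List[str]], branch: Optional[str]
) -> List[List[str]]:
    """Simplified model of the CsvImporter branch handling (names illustrative)."""
    if "branch" not in headers:
        if branch:
            raise ValueError("--branch was specified but the CSV has no 'branch' column")
        return rows
    idx = headers.index("branch")
    if branch:
        # Keep only the rows belonging to the selected branch.
        return [row for row in rows if row[idx] == branch]
    branches = {row[idx] for row in rows}
    if len(branches) > 1:
        # Mixing branches in one analysis would produce confusing change points.
        raise ValueError(
            f"CSV contains multiple branches: {sorted(branches)}; use --branch"
        )
    return rows

headers = ["time", "commit", "branch", "metric1"]
rows = [["t1", "aaa0", "main", "100"], ["t2", "bbb0", "feature-x", "150"]]
filter_rows_by_branch(headers, rows, "feature-x")  # keeps only the feature-x row
```

This matches the e2e tests added in `tests/csv_e2e_test.py`: multi-branch files without `--branch` abort with an error listing the branches found, and `--branch` against a file lacking the column is rejected rather than silently returning everything.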
+The `--branch` option can also be set via the `BRANCH` environment variable: + +``` +BRANCH=feature-xyz otava analyze my-product.test +``` \ No newline at end of file diff --git a/examples/postgresql/config/otava.yaml b/examples/postgresql/config/otava.yaml index bb52c15..caeabe2 100644 --- a/examples/postgresql/config/otava.yaml +++ b/examples/postgresql/config/otava.yaml @@ -15,14 +15,6 @@ # specific language governing permissions and limitations # under the License. -# External systems connectors configuration: -postgres: - hostname: ${POSTGRES_HOSTNAME} - port: ${POSTGRES_PORT} - username: ${POSTGRES_USERNAME} - password: ${POSTGRES_PASSWORD} - database: ${POSTGRES_DATABASE} - # Templates define common bits shared between test definitions: templates: common: @@ -63,7 +55,7 @@ tests: INNER JOIN configs c ON r.config_id = c.id INNER JOIN experiments e ON r.experiment_id = e.id WHERE e.exclude_from_analysis = false AND - e.branch = '${BRANCH}' AND + e.branch = %{BRANCH} AND e.username = 'ci' AND c.store = 'MEM' AND c.cache = true AND @@ -85,7 +77,7 @@ tests: INNER JOIN configs c ON r.config_id = c.id INNER JOIN experiments e ON r.experiment_id = e.id WHERE e.exclude_from_analysis = false AND - e.branch = '${BRANCH}' AND + e.branch = %{BRANCH} AND e.username = 'ci' AND c.store = 'TIME_ROCKS' AND c.cache = true AND diff --git a/otava/bigquery.py b/otava/bigquery.py index cf5fafd..34f329d 100644 --- a/otava/bigquery.py +++ b/otava/bigquery.py @@ -17,7 +17,7 @@ from dataclasses import dataclass from datetime import datetime -from typing import Dict +from typing import Dict, List, Optional from google.cloud import bigquery from google.oauth2 import service_account @@ -71,8 +71,13 @@ class BigQuery: self.__client = bigquery.Client(credentials=credentials, project=credentials.project_id) return self.__client - def fetch_data(self, query: str): - query_job = self.client.query(query) # API request + def fetch_data( + self, query: str, params: 
Optional[List[bigquery.ScalarQueryParameter]] = None + ): + job_config = None + if params: + job_config = bigquery.QueryJobConfig(query_parameters=params) + query_job = self.client.query(query, job_config=job_config) # API request results = query_job.result() columns = [field.name for field in results.schema] return (columns, results) diff --git a/otava/config.py b/otava/config.py index ef707a9..126fe0e 100644 --- a/otava/config.py +++ b/otava/config.py @@ -20,7 +20,6 @@ from pathlib import Path from typing import Dict, List, Optional import configargparse -from expandvars import expandvars from ruamel.yaml import YAML from otava.bigquery import BigQueryConfig @@ -96,33 +95,12 @@ def load_test_groups(config: Dict, tests: Dict[str, TestConfig]) -> Dict[str, Li return result -def expand_env_vars_recursive(obj): - """Recursively expand environment variables in all string values within a nested structure. - - Raises ConfigError if any environment variables remain undefined after expansion. - """ - if isinstance(obj, dict): - return {key: expand_env_vars_recursive(value) for key, value in obj.items()} - elif isinstance(obj, list): - return [expand_env_vars_recursive(item) for item in obj] - elif isinstance(obj, str): - return expandvars(obj, nounset=True) - else: - return obj - - def load_config_from_parser_args(args: configargparse.Namespace) -> Config: config_file = getattr(args, "config_file", None) if config_file is not None: yaml = YAML(typ="safe") config = yaml.load(Path(config_file).read_text()) - # Expand environment variables in the entire config after CLI argument replacement - try: - config = expand_env_vars_recursive(config) - except Exception as e: - raise ConfigError(f"Error expanding environment variables: {e}") - templates = load_templates(config) tests = load_tests(config, templates) groups = load_test_groups(config, tests) diff --git a/otava/importer.py b/otava/importer.py index 634c46a..c6cea32 100644 --- a/otava/importer.py +++ b/otava/importer.py @@ 
-22,7 +22,9 @@ from contextlib import contextmanager from dataclasses import dataclass from datetime import datetime, timedelta from pathlib import Path -from typing import Dict, List, Optional +from typing import Dict, List, Optional, Set + +from google.cloud import bigquery from otava.bigquery import BigQuery from otava.config import Config @@ -221,14 +223,13 @@ class CsvImporter(Importer): else: return defined_metrics + BRANCH_COLUMN = "branch" + def fetch_data(self, test_conf: TestConfig, selector: DataSelector = DataSelector()) -> Series: if not isinstance(test_conf, CsvTestConfig): raise ValueError("Expected CsvTestConfig") - if selector.branch: - raise ValueError("CSV tests don't support branching yet") - since_time = selector.since_time until_time = selector.until_time file = Path(test_conf.file) @@ -251,6 +252,17 @@ class CsvImporter(Importer): headers: List[str] = next(reader, None) metrics = self.__selected_metrics(test_conf.metrics, selector.metrics) + # Check for branch column + has_branch_column = self.BRANCH_COLUMN in headers + branch_index = headers.index(self.BRANCH_COLUMN) if has_branch_column else None + + if selector.branch and not has_branch_column: + # --branch specified but no branch column + raise DataImportError( + f"Test {test_conf.name}: --branch was specified but CSV file does not have " + f"a '{self.BRANCH_COLUMN}' column. Add a branch column to the CSV file." 
+ ) + # Decide which columns to fetch into which components of the result: try: time_index: int = headers.index(test_conf.time_column) @@ -275,10 +287,20 @@ class CsvImporter(Importer): for i in attr_indexes: attributes[headers[i]] = [] + branches: Set[str] = set() + # Append the lists with data from each row: for row in reader: self.check_row_len(headers, row) + # Track branch values if branch column exists + if has_branch_column: + row_branch = row[branch_index] + branches.add(row_branch) + # Filter by branch if --branch is specified + if selector.branch and row_branch != selector.branch: + continue + # Filter by time: ts: datetime = self.__convert_time(row[time_index]) if since_time is not None and ts < since_time: @@ -305,6 +327,15 @@ class CsvImporter(Importer): for i in attr_indexes: attributes[headers[i]].append(row[i]) + # Branch column exists but --branch not specified and multiple branches found + if has_branch_column and not selector.branch and len(branches) > 1: + raise DataImportError( + f"Test {test_conf.name}: CSV file contains data from multiple branches. " + f"Analyzing results across different branches will produce confusing results. 
" + f"Use --branch to select a specific branch.\n" + f"Branches found:\n" + "\n".join(sorted(branches)) + ) + # Convert metrics to series.Metrics metrics = {m.name: Metric(m.direction, m.scale) for m in metrics.values()} @@ -321,7 +352,7 @@ class CsvImporter(Importer): return Series( test_conf.name, - branch=None, + branch=selector.branch, time=time, metrics=metrics, data=data, @@ -476,9 +507,6 @@ class PostgresImporter(Importer): if not isinstance(test_conf, PostgresTestConfig): raise ValueError("Expected PostgresTestConfig") - if selector.branch: - raise ValueError("Postgres tests don't support branching yet") - since_time = selector.since_time until_time = selector.until_time if since_time.timestamp() > until_time.timestamp(): @@ -489,7 +517,21 @@ class PostgresImporter(Importer): ) metrics = self.__selected_metrics(test_conf.metrics, selector.metrics) - columns, rows = self.__postgres.fetch_data(test_conf.query) + # Handle %{BRANCH} placeholder using parameterized query to prevent SQL injection + query = test_conf.query + params = None + if "%{BRANCH}" in query: + if not selector.branch: + raise DataImportError( + f"Test {test_conf.name} uses %{{BRANCH}} in query but --branch was not specified" + ) + # Count occurrences and create matching number of parameters + placeholder_count = query.count("%{BRANCH}") + # Replace placeholder with %s for pg8000 parameterized query + query = query.replace("%{BRANCH}", "%s") + params = tuple(selector.branch for _ in range(placeholder_count)) + + columns, rows = self.__postgres.fetch_data(query, params) # Decide which columns to fetch into which components of the result: try: @@ -548,7 +590,7 @@ class PostgresImporter(Importer): return Series( test_conf.name, - branch=None, + branch=selector.branch, time=time, metrics=metrics, data=data, @@ -693,9 +735,6 @@ class BigQueryImporter(Importer): if not isinstance(test_conf, BigQueryTestConfig): raise ValueError("Expected BigQueryTestConfig") - if selector.branch: - raise 
ValueError("BigQuery tests don't support branching yet") - since_time = selector.since_time until_time = selector.until_time if since_time.timestamp() > until_time.timestamp(): @@ -706,7 +745,19 @@ class BigQueryImporter(Importer): ) metrics = self.__selected_metrics(test_conf.metrics, selector.metrics) - columns, rows = self.__bigquery.fetch_data(test_conf.query) + # Handle %{BRANCH} placeholder using parameterized query to prevent SQL injection + query = test_conf.query + params = None + if "%{BRANCH}" in query: + if not selector.branch: + raise DataImportError( + f"Test {test_conf.name} uses %{{BRANCH}} in query but --branch was not specified" + ) + # Replace placeholder with @branch for BigQuery parameterized query + query = query.replace("%{BRANCH}", "@branch") + params = [bigquery.ScalarQueryParameter("branch", "STRING", selector.branch)] + + columns, rows = self.__bigquery.fetch_data(query, params) # Decide which columns to fetch into which components of the result: try: @@ -765,7 +816,7 @@ class BigQueryImporter(Importer): return Series( test_conf.name, - branch=None, + branch=selector.branch, time=time, metrics=metrics, data=data, diff --git a/otava/main.py b/otava/main.py index 046e43a..a017123 100644 --- a/otava/main.py +++ b/otava/main.py @@ -301,7 +301,8 @@ class Otava: def setup_data_selector_parser(parser: argparse.ArgumentParser): parser.add_argument( - "--branch", metavar="STRING", dest="branch", help="name of the branch", nargs="?" 
+ "--branch", metavar="STRING", dest="branch", help="name of the branch", nargs="?", + env_var="BRANCH" ) parser.add_argument( "--metrics", diff --git a/otava/postgres.py b/otava/postgres.py index 7a2aaa0..40ef53d 100644 --- a/otava/postgres.py +++ b/otava/postgres.py @@ -77,9 +77,9 @@ class Postgres: ) return self.__conn - def fetch_data(self, query: str): + def fetch_data(self, query: str, params: tuple = None): cursor = self.__get_conn().cursor() - cursor.execute(query) + cursor.execute(query, params) columns = [c[0] for c in cursor.description] return (columns, cursor.fetchall()) diff --git a/otava/test_config.py b/otava/test_config.py index f7b9f2d..5cc57c9 100644 --- a/otava/test_config.py +++ b/otava/test_config.py @@ -20,7 +20,6 @@ from dataclasses import dataclass from typing import Dict, List, Optional from otava.csv_options import CsvOptions -from otava.util import interpolate @dataclass @@ -83,8 +82,7 @@ class GraphiteMetric: @dataclass class GraphiteTestConfig(TestConfig): - prefix: str # location of the performance data for the main branch - branch_prefix: Optional[str] # location of the performance data for the feature branch + prefix: str # location of the performance data (use %{BRANCH} for branch substitution) metrics: Dict[str, GraphiteMetric] # collection of metrics to fetch tags: List[str] # tags to query graphite events for this test annotate: List[str] # annotation tags @@ -93,35 +91,26 @@ class GraphiteTestConfig(TestConfig): self, name: str, prefix: str, - branch_prefix: Optional[str], metrics: List[GraphiteMetric], tags: List[str], annotate: List[str], ): self.name = name self.prefix = prefix - self.branch_prefix = branch_prefix self.metrics = {m.name: m for m in metrics} self.tags = tags self.annotate = annotate - def get_path(self, branch_name: Optional[str], metric_name: str) -> str: + def get_path(self, branch: Optional[str], metric_name: str) -> str: metric = self.metrics.get(metric_name) - substitutions = {"BRANCH": [branch_name if 
branch_name else "main"]} - if branch_name and self.branch_prefix: - return interpolate(self.branch_prefix, substitutions)[0] + "." + metric.suffix - elif branch_name: - branch_var_name = "%{BRANCH}" - if branch_var_name not in self.prefix: + prefix = self.prefix + if "%{BRANCH}" in prefix: + if not branch: raise TestConfigError( - f"Test {self.name} does not support branching. " - f"Please set the `branch_prefix` property or use {branch_var_name} " - f"in the `prefix`." + f"Test {self.name} uses %{{BRANCH}} in prefix but --branch was not specified" ) - interpolated = interpolate(self.prefix, substitutions) - return interpolated[0] + "." + metric.suffix - else: - return self.prefix + "." + metric.suffix + prefix = prefix.replace("%{BRANCH}", branch) + return prefix + "." + metric.suffix def fully_qualified_metric_names(self): return [f"{self.prefix}.{m.suffix}" for _, m in self.metrics.items()] @@ -304,7 +293,6 @@ def create_graphite_test_config(name: str, test_info: Dict) -> GraphiteTestConfi return GraphiteTestConfig( name, prefix=test_info["prefix"], - branch_prefix=test_info.get("branch_prefix"), tags=test_info.get("tags", []), annotate=test_info.get("annotate", []), metrics=metrics, diff --git a/pyproject.toml b/pyproject.toml index 0e2e18d..ab7fe61 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -49,7 +49,6 @@ dependencies = [ "google-cloud-bigquery>=3.38.0", "pg8000>=1.31.5", "configargparse>=1.7.1", - "expandvars>=0.12.0", # For Python 3.10: last series that supports it "scipy>=1.15,<1.16; python_version < '3.11'", diff --git a/tests/cli_help_test.py b/tests/cli_help_test.py index 1984062..e08278f 100644 --- a/tests/cli_help_test.py +++ b/tests/cli_help_test.py @@ -185,7 +185,7 @@ options: Slack. Same syntax as --since. --output {log,json,regressions_only} Output format for the generated report. 
- --branch [STRING] name of the branch + --branch [STRING] name of the branch [env var: BRANCH] --metrics LIST a comma-separated list of metrics to analyze --attrs LIST a comma-separated list of attribute names associated with the runs (e.g. commit, branch, version); if not specified, it will be automatically @@ -243,7 +243,7 @@ options: Slack. Same syntax as --since. --output {log,json,regressions_only} Output format for the generated report. - --branch [STRING] name of the branch + --branch [STRING] name of the branch [env var: BRANCH] --metrics LIST a comma-separated list of metrics to analyze --attrs LIST a comma-separated list of attribute names associated with the runs (e.g. commit, branch, version); if not specified, it will be automatically diff --git a/tests/config_test.py b/tests/config_test.py index 1b35c8f..58b223f 100644 --- a/tests/config_test.py +++ b/tests/config_test.py @@ -19,14 +19,10 @@ import tempfile from io import StringIO import pytest -from expandvars import UnboundVariable from otava.config import ( NestedYAMLConfigFileParser, - create_config_parser, - expand_env_vars_recursive, load_config_from_file, - load_config_from_parser_args, ) from otava.main import create_otava_cli_parser from otava.test_config import CsvTestConfig, GraphiteTestConfig, HistoStatTestConfig @@ -220,148 +216,6 @@ templates: assert section not in ignored_sections, f"Found key '{key}' from ignored section '{section}'" -def test_expand_env_vars_recursive(): - """Test the expand_env_vars_recursive function with various data types.""" - - # Set up test environment variables - test_env_vars = { - "TEST_HOST": "localhost", - "TEST_PORT": "8080", - "TEST_DB": "testdb", - "TEST_USER": "testuser", - } - - for key, value in test_env_vars.items(): - os.environ[key] = value - - try: - # Test simple string expansion - simple_string = "${TEST_HOST}:${TEST_PORT}" - result = expand_env_vars_recursive(simple_string) - assert result == "localhost:8080" - - # Test dictionary expansion - 
test_dict = { - "host": "${TEST_HOST}", - "port": "${TEST_PORT}", - "database": "${TEST_DB}", - "connection_string": "postgresql://${TEST_USER}@${TEST_HOST}:${TEST_PORT}/${TEST_DB}", - "timeout": 30, # non-string should remain unchanged - "enabled": True, # non-string should remain unchanged - } - - result_dict = expand_env_vars_recursive(test_dict) - expected_dict = { - "host": "localhost", - "port": "8080", - "database": "testdb", - "connection_string": "postgresql://testuser@localhost:8080/testdb", - "timeout": 30, - "enabled": True, - } - assert result_dict == expected_dict - - # Test list expansion - test_list = [ - "${TEST_HOST}", - {"nested_host": "${TEST_HOST}", "nested_port": "${TEST_PORT}"}, - ["${TEST_USER}", "${TEST_DB}"], - 123, # non-string should remain unchanged - ] - - result_list = expand_env_vars_recursive(test_list) - expected_list = [ - "localhost", - {"nested_host": "localhost", "nested_port": "8080"}, - ["testuser", "testdb"], - 123, - ] - assert result_list == expected_list - - # Test undefined variables (should throw UnboundVariable) - with pytest.raises(UnboundVariable, match="'UNDEFINED_VAR: unbound variable"): - expand_env_vars_recursive("${UNDEFINED_VAR}") - - # Test mixed defined/undefined variables (should throw UnboundVariable) - with pytest.raises(UnboundVariable, match="'UNDEFINED_VAR: unbound variable"): - expand_env_vars_recursive("prefix-${TEST_HOST}-middle-${UNDEFINED_VAR}-suffix") - - finally: - # Clean up environment variables - for key in test_env_vars: - if key in os.environ: - del os.environ[key] - - -def test_env_var_expansion_in_templates_and_tests(): - """Test that environment variable expansion works in template and test sections.""" - - # Set up test environment variables - test_env_vars = { - "CSV_DELIMITER": "$", - "CSV_QUOTE_CHAR": "!", - "CSV_FILENAME": "/tmp/test.csv", - } - - for key, value in test_env_vars.items(): - os.environ[key] = value - - # Create a temporary config file with env var placeholders - 
config_content = """ -templates: - csv_template_1: - csv_options: - delimiter: "${CSV_DELIMITER}" - - csv_template_2: - csv_options: - quote_char: '${CSV_QUOTE_CHAR}' - -tests: - expansion_test: - type: csv - file: ${CSV_FILENAME} - time_column: timestamp - metrics: - response_time: - column: response_ms - unit: ms - inherit: [csv_template_1, csv_template_2] -""" - - try: - with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f: - f.write(config_content) - config_file_path = f.name - - try: - # Load config and verify expansion worked - parser = create_config_parser() - args = parser.parse_args(["--config-file", config_file_path]) - config = load_config_from_parser_args(args) - - # Verify test was loaded - assert "expansion_test" in config.tests - test = config.tests["expansion_test"] - assert isinstance(test, CsvTestConfig) - - # Verify that expansion worked - assert test.file == test_env_vars["CSV_FILENAME"] - - # Verify that inheritance from templates worked with expanded values - assert test.csv_options.delimiter == test_env_vars["CSV_DELIMITER"] - assert test.csv_options.quote_char == test_env_vars["CSV_QUOTE_CHAR"] - - finally: - os.unlink(config_file_path) - - finally: - # Clean up environment variables - for key in test_env_vars: - if key in os.environ: - del os.environ[key] - - def test_cli_precedence_over_env_vars(): """Test that CLI arguments take precedence over environment variables.""" diff --git a/tests/csv_e2e_test.py b/tests/csv_e2e_test.py index 9baf494..6afb422 100644 --- a/tests/csv_e2e_test.py +++ b/tests/csv_e2e_test.py @@ -123,3 +123,187 @@ def test_analyze_csv(): ) assert _remove_trailing_whitespaces(proc.stdout) == expected_output.rstrip("\n") + + +def test_analyze_csv_multiple_branches_without_branch_flag_fails(): + """ + E2E test: CSV with multiple branches but no --branch flag should fail. 
+ """ + now = datetime.now() + data_points = [ + (now - timedelta(days=4), "aaa0", "main", 154023, 10.43), + (now - timedelta(days=3), "aaa1", "main", 138455, 10.23), + (now - timedelta(days=2), "aaa2", "feature-x", 143112, 10.29), + (now - timedelta(days=1), "aaa3", "feature-x", 149190, 10.91), + (now, "aaa4", "main", 132098, 10.34), + ] + + config_content = textwrap.dedent( + """\ + tests: + local.sample: + type: csv + file: data/local_sample.csv + time_column: time + attributes: [commit, branch] + metrics: [metric1, metric2] + """ + ) + + with tempfile.TemporaryDirectory() as td: + td_path = Path(td) + data_dir = td_path / "data" + data_dir.mkdir(parents=True, exist_ok=True) + csv_path = data_dir / "local_sample.csv" + with open(csv_path, "w", newline="") as f: + writer = csv.writer(f) + writer.writerow(["time", "commit", "branch", "metric1", "metric2"]) + for ts, commit, branch, m1, m2 in data_points: + writer.writerow([ts.strftime("%Y.%m.%d %H:%M:%S %z"), commit, branch, m1, m2]) + + config_path = td_path / "otava.yaml" + config_path.write_text(config_content, encoding="utf-8") + + cmd = ["uv", "run", "otava", "analyze", "local.sample"] + proc = subprocess.run( + cmd, + cwd=str(td_path), + capture_output=True, + text=True, + timeout=120, + env=dict(os.environ, OTAVA_CONFIG=str(config_path)), + ) + + output = proc.stderr + proc.stdout + assert "multiple branches" in output + assert "--branch" in output + assert "main" in output + assert "feature-x" in output + + +def test_analyze_csv_branch_flag_without_branch_column_fails(): + """ + E2E test: --branch flag specified but CSV has no branch column should fail. 
+ """ + now = datetime.now() + data_points = [ + (now - timedelta(days=2), "aaa0", 154023, 10.43), + (now - timedelta(days=1), "aaa1", 138455, 10.23), + (now, "aaa2", 143112, 10.29), + ] + + config_content = textwrap.dedent( + """\ + tests: + local.sample: + type: csv + file: data/local_sample.csv + time_column: time + attributes: [commit] + metrics: [metric1, metric2] + """ + ) + + with tempfile.TemporaryDirectory() as td: + td_path = Path(td) + data_dir = td_path / "data" + data_dir.mkdir(parents=True, exist_ok=True) + csv_path = data_dir / "local_sample.csv" + with open(csv_path, "w", newline="") as f: + writer = csv.writer(f) + writer.writerow(["time", "commit", "metric1", "metric2"]) + for ts, commit, m1, m2 in data_points: + writer.writerow([ts.strftime("%Y.%m.%d %H:%M:%S %z"), commit, m1, m2]) + + config_path = td_path / "otava.yaml" + config_path.write_text(config_content, encoding="utf-8") + + cmd = ["uv", "run", "otava", "analyze", "local.sample", "--branch", "main"] + proc = subprocess.run( + cmd, + cwd=str(td_path), + capture_output=True, + text=True, + timeout=120, + env=dict(os.environ, OTAVA_CONFIG=str(config_path)), + ) + + output = proc.stderr + proc.stdout + assert "--branch was specified" in output + assert "branch" in output and "column" in output + + +def test_analyze_csv_with_branch_filter(): + """ + E2E test: --branch flag filters CSV rows correctly. 
+ """ + now = datetime.now() + # Data with change point in feature-x branch + data_points = [ + # main branch - no change point + (now - timedelta(days=7), "aaa0", "main", 100, 10.0), + (now - timedelta(days=6), "aaa1", "main", 102, 10.1), + (now - timedelta(days=5), "aaa2", "main", 101, 10.0), + (now - timedelta(days=4), "aaa3", "main", 103, 10.2), + # feature-x branch - has a change point + (now - timedelta(days=7), "bbb0", "feature-x", 100, 10.0), + (now - timedelta(days=6), "bbb1", "feature-x", 102, 10.1), + (now - timedelta(days=5), "bbb2", "feature-x", 101, 10.0), + (now - timedelta(days=4), "bbb3", "feature-x", 150, 15.0), # regression + (now - timedelta(days=3), "bbb4", "feature-x", 152, 15.2), + (now - timedelta(days=2), "bbb5", "feature-x", 148, 14.8), + ] + + config_content = textwrap.dedent( + """\ + tests: + local.sample: + type: csv + file: data/local_sample.csv + time_column: time + attributes: [commit, branch] + metrics: [metric1, metric2] + """ + ) + + with tempfile.TemporaryDirectory() as td: + td_path = Path(td) + data_dir = td_path / "data" + data_dir.mkdir(parents=True, exist_ok=True) + csv_path = data_dir / "local_sample.csv" + with open(csv_path, "w", newline="") as f: + writer = csv.writer(f) + writer.writerow(["time", "commit", "branch", "metric1", "metric2"]) + for ts, commit, branch, m1, m2 in data_points: + writer.writerow([ts.strftime("%Y.%m.%d %H:%M:%S %z"), commit, branch, m1, m2]) + + config_path = td_path / "otava.yaml" + config_path.write_text(config_content, encoding="utf-8") + + # Analyze feature-x branch - should show change point + cmd = ["uv", "run", "otava", "analyze", "local.sample", "--branch", "feature-x"] + proc = subprocess.run( + cmd, + cwd=str(td_path), + capture_output=True, + text=True, + timeout=120, + env=dict(os.environ, OTAVA_CONFIG=str(config_path)), + ) + + if proc.returncode != 0: + pytest.fail( + "Command returned non-zero exit code.\n\n" + f"Command: {cmd!r}\n" + f"Exit code: {proc.returncode}\n\n" + 
f"Stdout:\n{proc.stdout}\n\n" + f"Stderr:\n{proc.stderr}\n" + ) + + output = proc.stdout + # Should only show feature-x data (bbb commits) + assert "bbb" in output + # Should NOT show main branch data (aaa commits) + assert "aaa" not in output + # Should show a change point (increase ~50%) + assert "+" in output and "%" in output diff --git a/tests/graphite_e2e_test.py b/tests/graphite_e2e_test.py index 96a2d99..7d0b5fa 100644 --- a/tests/graphite_e2e_test.py +++ b/tests/graphite_e2e_test.py @@ -19,6 +19,7 @@ import json import os import socket import subprocess +import tempfile import time import urllib.request from pathlib import Path @@ -125,7 +126,10 @@ def _graphite_readiness_check(container_id: str, port_map: dict[int, int]) -> bo return False -def _seed_graphite_data(carbon_port: int) -> int: +def _seed_graphite_data( + carbon_port: int, + prefix: str = "performance-tests.daily.my-product", +) -> int: """ Seed Graphite with test data matching the pattern from examples/graphite/datagen/datagen.sh. @@ -138,13 +142,13 @@ def _seed_graphite_data(carbon_port: int) -> int: - throughput dropped from ~61k to ~57k (-5.6% regression) - cpu increased from 0.2 to 0.8 (+300% regression) """ - throughput_path = "performance-tests.daily.my-product.client.throughput" + throughput_path = f"{prefix}.client.throughput" throughput_values = [56950, 57980, 57123, 60960, 60160, 61160] - p50_path = "performance-tests.daily.my-product.client.p50" + p50_path = f"{prefix}.client.p50" p50_values = [85, 87, 88, 89, 85, 87] - cpu_path = "performance-tests.daily.my-product.server.cpu" + cpu_path = f"{prefix}.server.cpu" cpu_values = [0.7, 0.9, 0.8, 0.1, 0.3, 0.2] start_timestamp = int(time.time()) @@ -208,3 +212,102 @@ def _wait_for_graphite_data( f"Timeout waiting for Graphite data. 
" f"Expected {expected_points} points for metric '{metric_path}' within {timeout}s, got {last_observed_count}" ) + + +def test_analyze_graphite_with_branch(): + """ + End-to-end test for Graphite with %{BRANCH} substitution. + + Verifies that using --branch correctly substitutes %{BRANCH} in the prefix + to fetch data from a branch-specific Graphite path. + """ + with container( + "graphiteapp/graphite-statsd", + ports=[HTTP_PORT, CARBON_PORT], + readiness_check=_graphite_readiness_check, + ) as (container_id, port_map): + # Seed data into a branch-specific path + branch_name = "feature-xyz" + prefix = f"performance-tests.{branch_name}.my-product" + data_points = _seed_graphite_data(port_map[CARBON_PORT], prefix=prefix) + + # Wait for data to be written and available + _wait_for_graphite_data( + http_port=port_map[HTTP_PORT], + metric_path=f"performance-tests.{branch_name}.my-product.client.throughput", + expected_points=data_points, + ) + + # Create a temporary config file with %{BRANCH} in the prefix + config_content = """ +tests: + branch-test: + type: graphite + prefix: performance-tests.%{BRANCH}.my-product + tags: [perf-test, branch] + metrics: + throughput: + suffix: client.throughput + direction: 1 + scale: 1 + response_time: + suffix: client.p50 + direction: -1 + scale: 1 + cpu_usage: + suffix: server.cpu + direction: -1 + scale: 1 +""" + with tempfile.NamedTemporaryFile( + mode="w", suffix=".yaml", delete=False + ) as config_file: + config_file.write(config_content) + config_file_path = config_file.name + + try: + # Run the Otava analysis with --branch + proc = subprocess.run( + [ + "uv", + "run", + "otava", + "analyze", + "branch-test", + "--branch", + branch_name, + "--since=-10m", + ], + capture_output=True, + text=True, + timeout=600, + env=dict( + os.environ, + OTAVA_CONFIG=config_file_path, + GRAPHITE_ADDRESS=f"http://localhost:{port_map[HTTP_PORT]}/", + ), + ) + + if proc.returncode != 0: + pytest.fail( + "Command returned non-zero exit code.\n\n" 
+ f"Command: {proc.args!r}\n" + f"Exit code: {proc.returncode}\n\n" + f"Stdout:\n{proc.stdout}\n\n" + f"Stderr:\n{proc.stderr}\n" + ) + + # Verify output contains expected columns + output = _remove_trailing_whitespaces(proc.stdout) + + # Check that the header contains expected column names + assert "throughput" in output + assert "response_time" in output + assert "cpu_usage" in output + + # Data shows throughput dropped from ~61k to ~57k (-5.6%) and cpu increased +300% + assert "-5.6%" in output # throughput change + assert "+300.0%" in output # cpu_usage change + + finally: + os.unlink(config_file_path) diff --git a/tests/importer_test.py b/tests/importer_test.py index 207083f..ce35355 100644 --- a/tests/importer_test.py +++ b/tests/importer_test.py @@ -17,6 +17,7 @@ from datetime import datetime +import pytest import pytz from otava.csv_options import CsvOptions @@ -24,6 +25,7 @@ from otava.graphite import DataSelector from otava.importer import ( BigQueryImporter, CsvImporter, + DataImportError, HistoStatImporter, PostgresImporter, ) @@ -32,9 +34,12 @@ from otava.test_config import ( BigQueryTestConfig, CsvMetric, CsvTestConfig, + GraphiteMetric, + GraphiteTestConfig, HistoStatTestConfig, PostgresMetric, PostgresTestConfig, + TestConfigError, ) SAMPLE_CSV = "tests/resources/sample.csv" @@ -149,7 +154,7 @@ def test_import_histostat_last_n_points(): class MockPostgres: - def fetch_data(self, query: str): + def fetch_data(self, query: str, params: tuple = None): return ( ["time", "metric1", "metric2", "commit"], [ @@ -225,7 +230,7 @@ def test_import_postgres_last_n_points(): class MockBigQuery: - def fetch_data(self, query: str): + def fetch_data(self, query: str, params=None): return ( ["time", "metric1", "metric2", "commit"], [ @@ -298,3 +303,153 @@ def test_import_bigquery_last_n_points(): assert len(series.time) == 5 assert len(series.data["m2"]) == 5 assert len(series.attributes["commit"]) == 5 + + +def test_graphite_substitutes_branch(): + config = 
GraphiteTestConfig( + name="test", + prefix="perf.%{BRANCH}.test", + metrics=[GraphiteMetric("m1", 1, 1.0, "metric1", annotate=[])], + tags=[], + annotate=[] + ) + assert config.get_path("feature-x", "m1") == "perf.feature-x.test.metric1" + + +def test_graphite_branch_placeholder_without_branch_raises_error(): + """Test that using %{BRANCH} in prefix without --branch raises an error.""" + config = GraphiteTestConfig( + name="branch-test", + prefix="perf.%{BRANCH}.test", + metrics=[GraphiteMetric("m1", 1, 1.0, "metric1", annotate=[])], + tags=[], + annotate=[], + ) + with pytest.raises(TestConfigError) as exc_info: + config.get_path(None, "m1") + assert "branch-test" in exc_info.value.message + assert "%{BRANCH}" in exc_info.value.message + assert "--branch" in exc_info.value.message + + +def test_postgres_branch_placeholder_without_branch_raises_error(): + """Test that using %{BRANCH} in query without --branch raises an error.""" + test = PostgresTestConfig( + name="branch-test", + query="SELECT * FROM results WHERE branch = '%{BRANCH}';", + time_column="time", + metrics=[PostgresMetric("m1", 1, 1.0, "metric1")], + attributes=["commit"], + ) + importer = PostgresImporter(MockPostgres()) + with pytest.raises(DataImportError) as exc_info: + importer.fetch_data(test_conf=test, selector=data_selector()) + assert "branch-test" in exc_info.value.message + assert "%{BRANCH}" in exc_info.value.message + assert "--branch" in exc_info.value.message + + +def test_bigquery_branch_placeholder_without_branch_raises_error(): + """Test that using %{BRANCH} in query without --branch raises an error.""" + test = BigQueryTestConfig( + name="branch-test", + query="SELECT * FROM results WHERE branch = '%{BRANCH}';", + time_column="time", + metrics=[BigQueryMetric("m1", 1, 1.0, "metric1")], + attributes=["commit"], + ) + importer = BigQueryImporter(MockBigQuery()) + with pytest.raises(DataImportError) as exc_info: + importer.fetch_data(test_conf=test, selector=data_selector()) + assert 
"branch-test" in exc_info.value.message
+    assert "%{BRANCH}" in exc_info.value.message
+    assert "--branch" in exc_info.value.message
+
+
+# CSV branch handling tests
+
+SAMPLE_SINGLE_BRANCH_CSV = "tests/resources/sample_single_branch.csv"
+SAMPLE_MULTI_BRANCH_CSV = "tests/resources/sample_multi_branch.csv"
+
+
+def csv_test_config_with_branch(file):
+    """Create a CSV test config that includes the branch column in attributes."""
+    return CsvTestConfig(
+        name="test",
+        file=file,
+        csv_options=CsvOptions(),
+        time_column="time",
+        metrics=[CsvMetric("m1", 1, 1.0, "metric1"), CsvMetric("m2", 1, 5.0, "metric2")],
+        attributes=["commit", "branch"],
+    )
+
+
+def test_csv_no_branch_no_branch_column():
+    """No --branch specified and no branch column in CSV - should succeed."""
+    importer = CsvImporter()
+    series = importer.fetch_data(csv_test_config(SAMPLE_CSV), data_selector())
+    assert len(series.time) == 10
+    assert series.branch is None
+
+
+def test_csv_no_branch_single_branch_in_column():
+    """No --branch specified but CSV has branch column with a single value - should succeed."""
+    importer = CsvImporter()
+    series = importer.fetch_data(csv_test_config_with_branch(SAMPLE_SINGLE_BRANCH_CSV), data_selector())
+    assert len(series.time) == 5
+    assert series.branch is None
+
+
+def test_csv_no_branch_multiple_branches_raises_error():
+    """No --branch specified but CSV has branch column with multiple values - should error."""
+    importer = CsvImporter()
+    with pytest.raises(DataImportError) as exc_info:
+        importer.fetch_data(csv_test_config_with_branch(SAMPLE_MULTI_BRANCH_CSV), data_selector())
+
+    error_msg = exc_info.value.message
+    assert "multiple branches" in error_msg
+    assert "--branch" in error_msg
+    assert "main" in error_msg
+    assert "feature-x" in error_msg
+    assert "feature-y" in error_msg
+
+
+def test_csv_branch_specified_no_branch_column_raises_error():
+    """--branch specified but CSV has no branch column - should error."""
+    importer = CsvImporter()
+    
selector = data_selector() + selector.branch = "main" + + with pytest.raises(DataImportError) as exc_info: + importer.fetch_data(csv_test_config(SAMPLE_CSV), selector) + + error_msg = exc_info.value.message + assert "--branch was specified" in error_msg + assert "branch" in error_msg + assert "column" in error_msg + + +def test_csv_branch_specified_filters_rows(): + """--branch specified and CSV has branch column - should filter rows.""" + importer = CsvImporter() + + # Filter by 'main' branch + selector = data_selector() + selector.branch = "main" + series = importer.fetch_data(csv_test_config_with_branch(SAMPLE_MULTI_BRANCH_CSV), selector) + assert len(series.time) == 4 # rows 1, 2, 5, 8 have 'main' + assert series.branch == "main" + + # Filter by 'feature-x' branch + selector = data_selector() + selector.branch = "feature-x" + series = importer.fetch_data(csv_test_config_with_branch(SAMPLE_MULTI_BRANCH_CSV), selector) + assert len(series.time) == 2 # rows 3, 4 have 'feature-x' + assert series.branch == "feature-x" + + # Filter by 'feature-y' branch + selector = data_selector() + selector.branch = "feature-y" + series = importer.fetch_data(csv_test_config_with_branch(SAMPLE_MULTI_BRANCH_CSV), selector) + assert len(series.time) == 2 # rows 6, 7 have 'feature-y' + assert series.branch == "feature-y" diff --git a/tests/postgres_e2e_test.py b/tests/postgres_e2e_test.py index be6a440..62decdc 100644 --- a/tests/postgres_e2e_test.py +++ b/tests/postgres_e2e_test.py @@ -41,7 +41,7 @@ def test_analyze(): with postgres_container(username, password, db) as (postgres_container_id, host_port): # Run the Otava analysis proc = subprocess.run( - ["uv", "run", "otava", "analyze", "aggregate_mem"], + ["uv", "run", "otava", "analyze", "aggregate_mem", "--branch", "trunk"], capture_output=True, text=True, timeout=600, @@ -53,7 +53,6 @@ def test_analyze(): POSTGRES_USERNAME=username, POSTGRES_PASSWORD=password, POSTGRES_DATABASE=db, - BRANCH="trunk", ), ) @@ -140,7 +139,7 @@ def 
test_analyze_and_update_postgres(): with postgres_container(username, password, db) as (postgres_container_id, host_port): # Run the Otava analysis proc = subprocess.run( - ["uv", "run", "otava", "analyze", "aggregate_mem", "--update-postgres"], + ["uv", "run", "otava", "analyze", "aggregate_mem", "--branch", "trunk", "--update-postgres"], capture_output=True, text=True, timeout=600, @@ -152,7 +151,6 @@ def test_analyze_and_update_postgres(): POSTGRES_USERNAME=username, POSTGRES_PASSWORD=password, POSTGRES_DATABASE=db, - BRANCH="trunk", ), ) diff --git a/tests/resources/sample_config.yaml b/tests/resources/sample_config.yaml index b6a1175..995204a 100644 --- a/tests/resources/sample_config.yaml +++ b/tests/resources/sample_config.yaml @@ -109,8 +109,7 @@ tests: remote2: inherit: [common_metrics, write_metrics] - prefix: "performance_regressions.my_product.main.test2" - branch_prefix: "performance_regressions.my_product.feature-%{BRANCH}.test2" + prefix: "performance_regressions.my_product.%{BRANCH}.test2" diff --git a/tests/resources/sample_multi_branch.csv b/tests/resources/sample_multi_branch.csv new file mode 100644 index 0000000..f1db321 --- /dev/null +++ b/tests/resources/sample_multi_branch.csv @@ -0,0 +1,9 @@ +time,commit,branch,metric1,metric2 +2024.01.01 3:00:00 +0100,aaa0,main,154023,10.43 +2024.01.02 3:00:00 +0100,aaa1,main,138455,10.23 +2024.01.03 3:00:00 +0100,aaa2,feature-x,143112,10.29 +2024.01.04 3:00:00 +0100,aaa3,feature-x,149190,10.91 +2024.01.05 3:00:00 +0100,aaa4,main,132098,10.34 +2024.01.06 3:00:00 +0100,aaa5,feature-y,151344,10.69 +2024.01.07 3:00:00 +0100,aaa6,feature-y,155145,9.23 +2024.01.08 3:00:00 +0100,aaa7,main,148889,9.11 diff --git a/tests/resources/sample_single_branch.csv b/tests/resources/sample_single_branch.csv new file mode 100644 index 0000000..9f17a86 --- /dev/null +++ b/tests/resources/sample_single_branch.csv @@ -0,0 +1,6 @@ +time,commit,branch,metric1,metric2 +2024.01.01 3:00:00 +0100,aaa0,main,154023,10.43 +2024.01.02 
3:00:00 +0100,aaa1,main,138455,10.23 +2024.01.03 3:00:00 +0100,aaa2,main,143112,10.29 +2024.01.04 3:00:00 +0100,aaa3,main,149190,10.91 +2024.01.05 3:00:00 +0100,aaa4,main,132098,10.34 diff --git a/tests/tigerbeetle_test.py b/tests/tigerbeetle_test.py index b797107..94a9019 100644 --- a/tests/tigerbeetle_test.py +++ b/tests/tigerbeetle_test.py @@ -22,7 +22,7 @@ from otava.analysis import compute_change_points def _get_series(): """ This is the Tigerbeetle dataset used for demo purposes at Nyrkiö. - It has a couple distinctive ups and down, ananomalous drop, then an upward slope and the rest is just normal variance. + It has a couple distinctive ups and down, anomalous drop, then an upward slope and the rest is just normal variance. ^ .' | ... ,..''.'...,......''','....'''''.......'...'.....,,,..'' diff --git a/uv.lock b/uv.lock index 860b767..c4fc7c1 100644 --- a/uv.lock +++ b/uv.lock @@ -15,7 +15,6 @@ source = { editable = "." } dependencies = [ { name = "configargparse" }, { name = "dateparser" }, - { name = "expandvars" }, { name = "google-cloud-bigquery" }, { name = "numpy" }, { name = "pg8000" }, @@ -48,7 +47,6 @@ requires-dist = [ { name = "autoflake", marker = "extra == 'dev'", specifier = ">=2.3.1" }, { name = "configargparse", specifier = ">=1.7.1" }, { name = "dateparser", specifier = ">=1.0.0" }, - { name = "expandvars", specifier = ">=0.12.0" }, { name = "flake8", marker = "extra == 'dev'", specifier = ">=7.3.0" }, { name = "google-cloud-bigquery", specifier = ">=3.38.0" }, { name = "isort", marker = "extra == 'dev'", specifier = ">=7.0.0" }, @@ -273,15 +271,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/36/f4/c6e662dade71f56cd2f3735141b265c3c79293c109549c1e6933b0651ffc/exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10", size = 16674, upload-time = "2025-05-10T17:42:49.33Z" }, ] -[[package]] -name = "expandvars" -version = "1.1.2" -source = { registry = 
"https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/9c/64/a9d8ea289d663a44b346203a24bf798507463db1e76679eaa72ee6de1c7a/expandvars-1.1.2.tar.gz", hash = "sha256:6c5822b7b756a99a356b915dd1267f52ab8a4efaa135963bd7f4bd5d368f71d7", size = 70842, upload-time = "2025-09-12T10:55:20.929Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/7f/e6/79c43f7a55264e479a9fbf21ddba6a73530b3ea8439a8bb7fa5a281721af/expandvars-1.1.2-py3-none-any.whl", hash = "sha256:d1652fe4e61914f5b88ada93aaedb396446f55ae4621de45c8cb9f66e5712526", size = 7526, upload-time = "2025-09-12T10:55:18.779Z" }, -] - [[package]] name = "filelock" version = "3.20.0"
