test: ensure global options for benchmark tests can be set in bazel (#34753)
Previously, when the benchmark tests ran outside of Bazel, developers could control how the tests run through command line options, e.g. `--dryrun`. This no longer works reliably in Bazel, where command line arguments are not passed through to the test executable. To keep the global options usable (as they can still be useful in some cases), we now pass them through Bazel's `--test_env` flag. This reduces the code needed to read the command line, while still preserving the flexibility in a Bazel-idiomatic way. PR Close #34753
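
For example, a benchmark option is now supplied as a test environment variable rather than as a command line argument (a sketch; the target and variable names are taken from the README change below, and the flag follows Bazel's documented `--test_env` syntax):

```bash
# Options reach the test as environment variables injected by Bazel.
yarn bazel test modules/benchmarks/src/tree/baseline:perf \
  --test_env=PERF_SAMPLE_SIZE=40 \
  --test_env=PERF_FORCE_GC=true
```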
Committed by: Andrew Kushnir
Parent: 8f0732fb17
Commit: 03690442dc
modules/benchmarks/README.md:

````diff
@@ -23,3 +23,23 @@ yarn bazel test modules/benchmarks/...
 
 The `*_aot.ts` files are used as entry-points within Google to run the benchmark
 tests. These are still built as part of the corresponding `ng_module` rule.
+
+## Specifying benchmark options
+
+There are options that can be specified in order to control how a given benchmark target
+runs. The following options can be set through [test environment variables](https://docs.bazel.build/versions/master/command-line-reference.html#flag--test_env):
+
+* `PERF_SAMPLE_SIZE`: Benchpress performs measurements until `scriptTime` predictively no longer
+  decreases. It does this by using a simple linear regression with the number of samples specified.
+  Defaults to `20` samples.
+* `PERF_FORCE_GC`: If set to `true`, `@angular/benchpress` will run the garbage collector
+  before and after performing measurements. Benchpress will measure and report the garbage
+  collection time.
+* `PERF_DRYRUN`: If set to `true`, no results are printed or stored in a `json` file. Also,
+  benchpress only performs a single measurement (unlike with the simple linear regression).
+
+Here is an example command that sets the `PERF_DRYRUN` option:
+
+```bash
+yarn bazel test modules/benchmarks/src/tree/baseline:perf --test_env=PERF_DRYRUN=true
+```
````
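Per Bazel's documentation for `--test_env`, a variable given without a value is inherited from the invoking shell's environment, so the option above can also be set like this (a variant sketch of the README's example):

```bash
# Without `=VALUE`, Bazel forwards the variable from the client environment.
PERF_DRYRUN=true yarn bazel test modules/benchmarks/src/tree/baseline:perf \
  --test_env=PERF_DRYRUN
```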
modules/benchmarks/benchmark_test.bzl:

```diff
@@ -7,7 +7,7 @@ load("//tools:defaults.bzl", "protractor_web_test_suite")
 unless explicitly requested.
 """
 
-def benchmark_test(name, server, deps, tags = []):
+def benchmark_test(name, server, tags = [], **kwargs):
     protractor_web_test_suite(
         name = name,
         configuration = "//:protractor-perf.conf.js",
@@ -19,7 +19,5 @@ def benchmark_test(name, server, deps, tags = []):
         # Benchmark targets should not run on CI by default.
         tags = tags + ["manual"],
         test_suite_tags = ["manual"],
-        deps = [
-            "@npm//yargs",
-        ] + deps,
+        **kwargs
     )
```
modules/benchmarks/e2e_test.bzl:

```diff
@@ -6,12 +6,10 @@ load("//tools:defaults.bzl", "protractor_web_test_suite")
 with `@angular/benchpress`.
 """
 
-def e2e_test(name, server, deps, **kwargs):
+def e2e_test(name, server, **kwargs):
     protractor_web_test_suite(
         name = name,
         on_prepare = "//modules/benchmarks:start-server.js",
         server = server,
-        # `yargs` is needed as runtime dependency for the e2e utils.
-        deps = ["@npm//yargs"] + deps,
         **kwargs
     )
```
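Since benchmark targets are tagged `manual` and benchpress writes its results to the test log, a convenient way to watch the output while iterating is Bazel's standard `--test_output` flag (a sketch combining it with one of the new options):

```bash
# Stream test output to the console instead of only the test log.
yarn bazel test modules/benchmarks/src/tree/baseline:perf \
  --test_output=streamed --test_env=PERF_SAMPLE_SIZE=40
```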