# Nixpkgs CI evaluation
The code in this directory is used by the `eval.yml` GitHub Actions workflow to evaluate the majority of Nixpkgs for all PRs, effectively making sure that when the development branches are processed by Hydra, no evaluation failures are encountered.
It also allows local evaluation using:

```bash
nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 2 \
  --arg chunkSize 10000 \
  --arg evalSystems '["x86_64-linux" "aarch64-darwin"]'
```
- `--max-jobs`: The maximum number of derivations to run at the same time. Each supported system gets its own derivation, so it doesn't make sense to set this higher than the number of systems being evaluated.
- `--cores`: The number of cores to use for each job. It is recommended to set this to the number of cores on your system divided by `--max-jobs`.
- `chunkSize`: The number of attributes that are evaluated simultaneously on a single core. Lowering this decreases memory usage at the cost of increased evaluation time. Setting it too high also increases evaluation time, because there won't be enough chunks to process in parallel.
- `evalSystems`: The set of systems for which `nixpkgs` should be evaluated. Defaults to the four official platforms (`x86_64-linux`, `aarch64-linux`, `x86_64-darwin` and `aarch64-darwin`).
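For example, applying those rules to a hypothetical 16-core machine evaluating only the two Linux systems, an invocation might look like this (a sketch; the machine sizing is an assumption, not part of this README):

```bash
# Hypothetical sizing for a 16-core machine evaluating two systems:
# --max-jobs 2 (one derivation per system), --cores = 16 / 2 = 8.
nix-build ci -A eval.full \
  --max-jobs 2 \
  --cores 8 \
  --arg chunkSize 10000 \
  --arg evalSystems '["x86_64-linux" "aarch64-linux"]'
```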
A good default is to set `chunkSize` to 10000, which leads to about 3.6GB of peak memory usage per core, making it suitable for fully utilising machines with 4 cores and 16GB of memory, 8 cores and 32GB, or 16 cores and 64GB.
Note that 16GB of memory is the recommended minimum; with less than 8GB, evaluation time suffers greatly.
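As a back-of-the-envelope check of those pairings, using the approximate 3.6GB-per-core figure above:

```
 4 cores × 3.6GB ≈ 14.4GB  → fits in 16GB
 8 cores × 3.6GB ≈ 28.8GB  → fits in 32GB
16 cores × 3.6GB ≈ 57.6GB  → fits in 64GB
```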