Usage
Ekomark mimics the inputs needed to run eko: a theory card, an observable card and, whenever the external program can be used together with lhapdf, the name of a PDF set.
Both the theory and the observable card can be generated automatically from a default:
the former with banana, the latter with a function similar to generate_observable()
provided in sandbox.py.
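As a sketch of this workflow, one might build a default observable card and then override single entries; generate_observable() is provided in sandbox.py, but the stand-in below and the card keys and values are illustrative assumptions, not the real defaults:

```python
# Hypothetical sketch: generate_observable() is provided in sandbox.py;
# this stand-in and the card keys/values are illustrative assumptions.
def generate_observable():
    """Stand-in for the sandbox.py helper: return a default observable card."""
    return {
        "interpolation_xgrid": [1e-4, 1e-3, 1e-2, 0.1, 0.5, 1.0],
        "interpolation_polynomial_degree": 4,
        "interpolation_is_log": True,
    }

# Start from the default card and override single entries as needed.
observable_card = generate_observable()
observable_card["interpolation_polynomial_degree"] = 2
```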
In addition, to run ekomark you need to specify the external program you want to benchmark against.
To do so, initialize a class of type ekomark.benchmark.runner.
To speed up the calculations, null PDFs can be skipped by setting the attribute skip_pdfs.
Finally, you can choose to display the output in the flavor or in the evolution basis by setting rotate_to_evolution_basis.
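Putting these pieces together, a minimal runner might look like the following sketch. The attribute names follow the text above, but the stand-in base class, the external-program name and the PDF ids are assumptions for illustration (the real base class lives in ekomark.benchmark.runner):

```python
# Minimal sketch of a benchmark runner. A stand-in base class is defined
# here so the example is self-contained; the real one is
# ekomark.benchmark.runner.Runner. Values are illustrative assumptions.

class Runner:
    """Stand-in for ekomark.benchmark.runner.Runner."""
    external = None                    # external program to benchmark against
    skip_pdfs = []                     # PDF ids to skip (e.g. null PDFs)
    rotate_to_evolution_basis = False  # flavor basis by default


class MyBenchmark(Runner):
    external = "APFEL"                 # hypothetical external program
    skip_pdfs = [22]                   # hypothetical id of a null PDF
    rotate_to_evolution_basis = True   # show output in the evolution basis
```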
In the following section we describe the most useful of the available runners.
The minimal setup of the input cards must contain:
| Name | Type | default | description |
|---|---|---|---|
| | | [required] | order of perturbation theory |
| | | [required] | reference value of the strong coupling \(\alpha_s(\mu_0^2)\) (note that we have to use \(\alpha_s(\mu_0^2)\) here, instead of \(a_s(\mu_0^2)\), for legacy reasons) |
| | | [required] | reference scale from which to start |
| | | 2.0 | charm mass in GeV |
| | | 4.5 | bottom mass in GeV |
| | | 173.0 | top mass in GeV |
| Name | Type | description |
|---|---|---|
| | | the interpolation grid |
| | | polynomial degree of the interpolating function |
| | | use logarithmic interpolation? |
| | | all operators at the requested values of \(Q^2\) represented by the key |
The output of ekomark is stored in data/benchmark.db
as a pandas.DataFrame table.
You can then use the ekonavigator app to inspect your database and produce plots.
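If you want to peek at the database file directly instead, something along the following lines may work; the path comes from the text above, but the internal schema is an assumption, so this sketch only lists the tables the file contains (the supported way to browse results remains the ekonavigator app):

```python
import os
import sqlite3

# Hypothetical sketch: list the tables stored in the benchmark database.
# The schema is an assumption, so we only enumerate table names via the
# standard sqlite_master catalog.
def list_tables(path):
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
    return [name for (name,) in rows]

if os.path.exists("data/benchmark.db"):
    print(list_tables("data/benchmark.db"))
```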
Available Runners
In benchmarks/runners
we provide a list of established benchmarks:
sandbox.py
: provides the boilerplate needed for a basic, quick run for debugging purposes, while still fully managed and registered by the ekomark machinery and thus available in the ekonavigator.
apfel_bench.py
: used by the corresponding workflow to run the established benchmarks against APFEL.
pegasus_bench.py
: used by the corresponding workflow to run the established benchmarks against Pegasus.
paper_LHA_bench.py
: used by the corresponding workflow to run the established benchmarks against the LHA papers.
No external Python bindings are needed, since the LHA data are stored in
ekomark/benchmark/external/LHA.yaml.
All of them are useful examples of how to use the ekomark package for benchmarking.