JSON/TOML configuration
While the openPMD API intends to be a backend-independent implementation of the openPMD standard, it is sometimes useful to pass configuration parameters to the specific backend in use. For each backend, configuration options can be passed via a JSON- or TOML-formatted string or via environment variables. A JSON/TOML option always takes precedence over an environment variable.
The fundamental structure of this JSON configuration string is given as follows:
{
"adios2": "put ADIOS2 config here",
"hdf5": "put HDF5 config here",
"json": "put JSON config here"
}
Every JSON configuration can alternatively be given by its TOML equivalent:
[adios2]
# put ADIOS2 config here
[hdf5]
# put HDF5 config here
[json]
# put JSON config here
This structure allows keeping one configuration string for several backends at once, with the concrete backend configuration being chosen upon choosing the backend itself.
Options that can be configured via JSON are often also accessible via other means, e.g. environment variables. The following list specifies the priority of these means, beginning with the lowest priority:
Default values
Automatically detected options, e.g. the backend being detected by inspection of the file extension
Environment variables
JSON/TOML configuration. For JSON/TOML, a dataset-specific configuration overwrites a global, Series-wide configuration.
Explicit API calls such as
setIterationEncoding()
The configuration is read in a case-insensitive manner, for keys as well as values.
An exception to this are string values which are forwarded to other libraries such as ADIOS2.
Those are read “as-is” and interpreted by the backend library.
Parameters that are directly passed through to an external library and not interpreted within openPMD API (e.g. adios2.engine.parameters) are unaffected by this and follow the respective library’s conventions.
The configuration string may refer to the complete openPMD::Series or may additionally be specified per openPMD::Dataset, passed in the respective constructors.
This reflects the fact that certain backend-specific parameters may refer to the whole Series (such as storage engines and their parameters) and others refer to actual datasets (such as compression).
Dataset-specific configurations are (currently) only available during dataset creation, but not when reading datasets.
Additionally, some backends may provide different implementations of the Series::flush() and Attributable::seriesFlush() calls.
JSON/TOML strings may be passed to these calls as optional parameters.
A JSON/TOML configuration may either be specified as an inline string that can be parsed as a JSON/TOML object, or alternatively as a path to a JSON/TOML-formatted text file (the latter only in the constructor of openPMD::Series; all other API calls that accept a JSON/TOML specification require inline strings):
File paths are distinguished by prepending them with an at-sign @. JSON and TOML are then distinguished by the filename extension .json or .toml. If no extension can be uniquely identified, JSON is assumed as default.
If no at-sign @ is given, an inline string is assumed. If the first non-blank character of the string is a {, it will be parsed as a JSON value. Otherwise, it is parsed as a TOML value.
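The dispatch rules above can be sketched in plain Python. This is an illustration of the rules as stated, not the openPMD-api implementation; the function name classify_config is made up for this example:

```python
def classify_config(spec: str):
    """Classify a JSON/TOML specification per the rules above.

    Returns (source, fmt) where source is "file" or "inline"
    and fmt is "json" or "toml".
    """
    spec = spec.strip()
    if spec.startswith("@"):
        # file path: format determined by the filename extension;
        # JSON is assumed if no extension can be uniquely identified
        path = spec[1:].strip()
        if path.endswith(".toml"):
            return ("file", "toml")
        return ("file", "json")
    # inline string: JSON if the first non-blank character is '{'
    if spec.startswith("{"):
        return ("inline", "json")
    return ("inline", "toml")
```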
For a consistent user interface, backends shall follow the following rules:
The configuration structures for the Series and for each dataset should be defined equivalently.
Any setting referring to single datasets should also be applicable globally, affecting all datasets (specifying a default).
If a setting is defined globally, but also for a concrete dataset, the dataset-specific setting should override the global one.
If a setting is passed to a dataset that only makes sense globally (such as the storage engine), the setting should be ignored except for printing a warning. Backends should define clearly which keys are applicable to datasets and which are not.
All dataset-specific options should be passed inside the dataset object, e.g.:
{ "adios2": { "dataset": { "put dataset options": "here" } } }
[adios2.dataset] # put dataset options here
Backend-independent JSON configuration
The openPMD backend can be chosen via the JSON/TOML key backend which recognizes the alternatives ["hdf5", "adios2", "json"].
The iteration encoding can be chosen via the JSON/TOML key iteration_encoding which recognizes the alternatives ["file_based", "group_based", "variable_based"].
Note that for file-based iteration encoding, specification of the expansion pattern in the file name (e.g. data_%T.json) remains mandatory.
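As an illustration, both keys might be combined as follows; the expansion pattern (e.g. data_%T.json) is still passed separately in the Series filename, not in this configuration:

```json
{
    "backend": "json",
    "iteration_encoding": "file_based"
}
```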
The key defer_iteration_parsing can be used to optimize the process of opening an openPMD Series (deferred/lazy parsing).
By default, a Series is parsed eagerly, i.e. opening a Series implies reading all available iterations.
Especially when a Series has many iterations, this can be a costly operation and users may wish to defer parsing of iterations to a later point by adding {"defer_iteration_parsing": true} to their JSON/TOML configuration.
When parsing non-eagerly, each iteration needs to be explicitly opened with Iteration::open() before accessing it.
(Notice that Iteration::open() is generally recommended in parallel contexts to avoid parallel file accessing hazards.)
Using the Streaming API (i.e. Series::readIterations()) will do this automatically.
Parsing eagerly might be very expensive for a Series with many iterations, but avoids bugs caused by forgotten calls to Iteration::open().
In complex environments, calling Iteration::open() on an already-open Iteration does no harm (nor does it incur additional runtime cost for repeated open() calls).
By default, the library will print a warning to suggest using deferred Iteration parsing when opening a Series takes long.
The timeout can be tuned by the JSON/TOML key hint_lazy_parsing_timeout (integer, seconds):
if set to a positive value, the library will print periodic warnings to stderr when eager parsing of Iterations takes longer than the specified number of seconds (default: 20). Setting this option to 0 disables the warnings.
Environment variables may alternatively be used for options concerning deferred iteration parsing:
Environment variable OPENPMD_DEFER_ITERATION_PARSING: if set to a truthy value (e.g. 1), the Series will be opened with deferred iteration parsing as if {"defer_iteration_parsing": true} had been supplied.
Environment variable OPENPMD_HINT_LAZY_PARSING_TIMEOUT: accepts integral values, equivalent to the hint_lazy_parsing_timeout key.
Examples:
# enable lazy parsing via env var
export OPENPMD_DEFER_ITERATION_PARSING=1
# disable the parsing hint/warning
export OPENPMD_HINT_LAZY_PARSING_TIMEOUT=0
Or in a Series constructor JSON/TOML configuration:
{
"defer_iteration_parsing": true,
"hint_lazy_parsing_timeout": 20
}
As of openPMD-api 0.17.0, the parser verifies that all records within a mesh or within a particle species have consistent shapes / extents.
This is used for filling in the shape for constant components that do not define it.
To skip this check in case of errors, the key {"verify_homogeneous_extents": false} may be set (alternatively, export OPENPMD_VERIFY_HOMOGENEOUS_EXTENTS=0 will do the same).
This helps reading datasets with inconsistent metadata definitions.
The key resizable can be passed to Dataset options.
If set to {"resizable": true}, this declares that it shall be allowed to increase the Extent of a Dataset via resetDataset() at a later time, i.e., after it has first been declared (and potentially written).
For HDF5, resizable Datasets come with a performance penalty.
For JSON and ADIOS2, all datasets are resizable, independent of this option.
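For instance, passing the following inline JSON to the Dataset constructor marks that dataset as resizable:

```json
{"resizable": true}
```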
The key rank_table allows specifying the creation of a rank table, used for tracking chunk provenance, especially in streaming setups; refer to the streaming documentation for details.
A warning is printed if the JSON/TOML configuration contains keys that are not understood (and hence ignored) by the openPMD-api.
Such warnings can be suppressed by listing key names under "dont_warn_unused_keys".
Example:
{ "adio2": { "dataset": { "put dataset options": "here" } }, "hdf5": { "an unknown": "key", "another unknown": "key", "a third unknown": "key", "dont_warn_unused_keys": ["an unknown", "another unknown"] } }
The openPMD-api will warn about the unknown key "adio2" (in order to give feedback about the apparent misspelling of "adios2" to users); there will similarly be a warning for the unknown HDF5 key "a third unknown". Warnings about the other two unknown keys have been suppressed. The printed warning will hence be about these unknown parts of the JSON configuration:
{ "adio2": { "dataset": { "put dataset options": "here" } }, "hdf5": { "a third unknown": "key" } }
Configuration Structure per Backend
Please refer to the respective backends’ documentations for further information on their configuration.
ADIOS2
A full configuration of the ADIOS2 backend:
{
"adios2": {
"engine": {
"type": "sst",
"preferred_flush_target": "disk",
"parameters": {
"BufferGrowthFactor": "2.0",
"QueueLimit": "2"
}
},
"dataset": {
"operators": [
{
"type": "blosc",
"parameters": {
"clevel": "1",
"doshuffle": "BLOSC_BITSHUFFLE"
}
}
]
},
"attribute_writing_ranks": 0
}
}
[adios2]
# ignore all attribute writes not issued on these ranks
# can also be a list if multiple ranks need to be given
# however rank 0 should be the most common option here
attribute_writing_ranks = 0
[adios2.engine]
type = "sst"
preferred_flush_target = "disk"
[adios2.engine.parameters]
BufferGrowthFactor = "2.0"
QueueLimit = "2"
# use double brackets to indicate lists
[[adios2.dataset.operators]]
type = "blosc"
# specify parameters for the current operator
[adios2.dataset.operators.parameters]
clevel = "1"
doshuffle = "BLOSC_BITSHUFFLE"
# use double brackets a second time to indicate a further entry
[[adios2.dataset.operators]]
# specify a second operator here
type = "some other operator"
# the parameters dictionary can also be specified in-line
parameters.clevel = "1"
parameters.doshuffle = "BLOSC_BITSHUFFLE"
All keys found under adios2.dataset are applicable globally as well as per dataset; any other keys, such as those found under adios2.engine, are applicable globally only.
Explanation of the single keys:
adios2.engine.type: A string that is passed directly to adios2::IO::SetEngine for choosing the ADIOS2 engine to be used. Please refer to the official ADIOS2 documentation for a list of available engines.
adios2.engine.pretend_engine: May be used for experimentally testing an ADIOS2 engine that is not explicitly supported by the openPMD-api. Specify the actual engine via adios2.engine.type and use adios2.engine.pretend_engine to make the ADIOS2 backend pretend that it is in fact using another engine that it knows. Some advanced engine-specific features will be turned off indiscriminately:
The Span API will use a fallback implementation.
PerformDataWrite() will not be used, even when specifying adios2.engine.preferred_flush_target = "disk".
Engine-specific parameters such as QueueLimit will not be set by default.
No engine-specific filename extension handling will be executed; the extension specified by the user is taken "as is".
adios2.engine.access_mode: One of "Write", "Read", "Append", "ReadRandomAccess". Only needed in specific use cases; the access mode is usually determined from the specified openPMD::Access. Useful for fine-tuning the backend-specific behavior of ADIOS2 when overwriting existing Iterations in file-based Append mode.
adios2.engine.parameters: An associative array of string-formatted engine parameters, passed directly through to adios2::IO::SetParameters. Please refer to the official ADIOS2 documentation for the available engine parameters. The openPMD-api does not interpret these values and instead simply forwards them to ADIOS2.
adios2.engine.preferred_flush_target: Only relevant for the BP5 engine; possible values are "disk", "buffer" and "new_step" (default: "disk").
If "disk", data will be moved to disk on every flush.
If "buffer", data will be moved to disk only upon ending an IO step or closing the engine.
If "new_step", a new step will be created on every flush. This should be used in combination with the ADIOS2 option adios2.engine.parameters.FlattenSteps = "on".
This behavior can be overridden on a per-flush basis by specifying this JSON/TOML key as an optional parameter to the Series::flush() or Attributable::seriesFlush() methods.
Additionally, specifying "disk_override", "buffer_override" or "new_step_override" will take precedence over options specified without the _override suffix, allowing to invert the normal precedence order. This way, a data-producing code can hardcode the preferred flush target per flush() call, but users can e.g. still entirely deactivate flushing to disk in the Series constructor by specifying preferred_flush_target = "buffer_override". This is useful when applying the asynchronous IO capabilities of the BP5 engine.
adios2.dataset.operators: This key contains either a single ADIOS2 operator or a list of operators, used to enable compression or dataset transformations. Each operator is an object with two keys:
type: a supported ADIOS2 operator type, e.g. zfp or sz.
parameters: an associative map of string parameters for the operator (e.g. compression levels).
adios2.use_span_based_put: The openPMD-api exposes the span-based Put() API of ADIOS2 via an overload of RecordComponent::storeChunk(). This API is incompatible with compression operators as described above. The openPMD-api will automatically use a fallback implementation for the span-based Put() API if any operator is added to a dataset; this workaround is enabled on a per-dataset level. The workaround can be completely deactivated by specifying {"adios2": {"use_span_based_put": true}}, or it can alternatively be activated indiscriminately for all datasets by specifying {"adios2": {"use_span_based_put": false}}.
adios2.attribute_writing_ranks: A single MPI rank or a list of MPI ranks that define metadata. ADIOS2 attributes will be written only from those ranks; any other ranks will be ignored.
Hint
Specifying adios2.attribute_writing_ranks can lead to serious serialization performance improvements at large scale.
Operations specified inside adios2.dataset.operators will be applied to ADIOS2 datasets in writing as well as in reading.
Beginning with ADIOS2 2.8.0, this can be used to specify decompressor settings:
{
"adios2": {
"dataset": {
"operators": [
{
"type": "blosc",
"parameters": {
"nthreads": 2
}
}
]
}
}
}
In older ADIOS2 versions, this specification will be without effect in read mode. Dataset-specific configurations are (currently) only possible when creating datasets, not when reading.
Any setting specified under adios2.dataset is applicable globally as well as on a per-dataset level.
Any setting under adios2.engine is applicable globally only.
HDF5
A full configuration of the HDF5 backend:
{
"hdf5": {
"dataset": {
"chunks": "auto"
},
"vfd": {
"type": "subfiling",
"ioc_selection": "every_nth_rank",
"stripe_size": 33554432,
"stripe_count": -1
}
}
}
All keys found under hdf5.dataset are applicable globally as well as per dataset.
Explanation of the single keys:
hdf5.dataset.chunks: This key contains options for data chunking via H5Pset_chunk. The default is "auto" for a heuristic; "none" can be used to disable chunking.
An explicit chunk size can be specified as a list of positive integers, e.g. hdf5.dataset.chunks = [10, 100]. Note that this specification should only be used per dataset, e.g. in resetDataset()/reset_dataset().
Chunking generally improves performance and only needs to be disabled in corner cases, e.g. when heavily relying on independent, parallel I/O that non-collectively declares data records.
hdf5.dataset.permanent_filters: Either a single HDF5 permanent filter specification or a list of HDF5 permanent filter specifications. Each filter specification is a JSON/TOML object, but there are multiple options:
Zlib: The Zlib filter has a distinct API in HDF5 and the configuration for Zlib in openPMD is hence also different. It is activated by the mandatory key type = "zlib" and configured by the optional integer key aggression. Example: {"type": "zlib", "aggression": 5}.
Filters identified by their global ID registered with the HDF Group: They are activated by the mandatory integer key id containing this global ID. All other keys are optional:
type = "by_id" may optionally be specified for clarity and consistency.
The string key flags can take the values "mandatory" or "optional", indicating whether HDF5 should abort execution if the filter cannot be applied for some reason.
The key cd_values points to a list of nonnegative integers. These are filter-specific configuration options; refer to the specific filter's documentation.
Alternatively to an integer ID, the key id may also be of string type, identifying one of the six builtin filters of HDF5: "deflate", "shuffle", "fletcher32", "szip", "nbit", "scaleoffset".
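Combining these options, a filter configuration might be sketched in TOML as follows. The numeric ID 307 and the cd_values entry are purely illustrative; use the ID and options registered and documented for your filter:

```toml
[hdf5.dataset]
permanent_filters = [
    # Zlib uses its distinct HDF5 API
    { type = "zlib", aggression = 5 },
    # a filter identified by its global ID (307 is illustrative only)
    { type = "by_id", id = 307, flags = "optional", cd_values = [9] },
    # one of the six builtin filters, identified by name
    { id = "shuffle" },
]
```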
hdf5.vfd.type: selects the HDF5 virtual file driver. Currently available are:
"default": Equivalent to specifying nothing.
"subfiling": Use the subfiling VFD. Note that the subfiling VFD needs to be enabled explicitly when configuring HDF5, and threaded MPI must be used. When using this VFD, the options described below are additionally available. They correspond with the field entries of H5FD_subfiling_params_t; refer to the HDF5 documentation for their detailed meanings.
hdf5.vfd.ioc_selection: Must be one of ["one_per_node", "every_nth_rank", "with_config", "total"]
hdf5.vfd.stripe_size: Must be an integer
hdf5.vfd.stripe_count: Must be an integer
Flush calls, e.g. Series::flush() can be configured via JSON/TOML as well.
The parameters eligible for being passed to flush calls may be configured globally as well, i.e. in the constructor of Series, to provide default settings used for the entire Series.
hdf5.independent_stores: A boolean that sets the H5FD_MPIO_INDEPENDENT dataset transfer property if true, otherwise H5FD_MPIO_COLLECTIVE. Only available when using HDF5 in combination with MPI. See the HDF5 subpage for further information on independent vs. collective flushing.
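For example, a single flush could be switched to independent MPI I/O by passing this inline configuration as the optional parameter of a flush call:

```json
{"hdf5": {"independent_stores": true}}
```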
JSON/TOML
A full configuration of the JSON backend:
{
"json": {
"dataset": {
"mode": "template"
},
"attribute": {
"mode": "short"
}
}
}
The TOML backend is configured analogously, replacing the "json" key with "toml".
All keys found under json.dataset are applicable globally as well as per dataset.
Explanation of the single keys:
json.dataset.mode / toml.dataset.mode: One of "dataset" (default) or "template". In "dataset" mode, the dataset will be written as an n-dimensional (recursive) array, padded with nulls (JSON) or zeroes (TOML) for missing values. In "template" mode, only the dataset metadata (type, extent and attributes) are stored and no chunks can be written or read (i.e. write/read operations will be skipped).
json.attribute.mode / toml.attribute.mode: One of "long" (default in openPMD 1.*) or "short" (default in openPMD 2.* and generally in TOML). The long format explicitly encodes the attribute type in the dataset on disk; the short format only writes the actual attribute as a JSON/TOML value, requiring readers to recover the type.
Dataset-specific configuration
Sometimes it is beneficial to set configuration options for specific datasets. Most dataset-specific configuration options supported by the openPMD-api are additionally backend-specific, being format-specific serialization instructions such as compression or chunking.
All dataset-specific and backend-specific configuration is specified under the key path <backend>.dataset.
Without filtering by dataset name (see the select key below), this looks like:
{
"adios2": {
"dataset": {
"operators": []
}
},
"hdf5": {
"dataset": {
"chunking": "auto"
}
}
}
Dataset-specific configuration options can be configured in multiple ways:
As part of the general JSON/TOML configuration
In the simplest case, the dataset configuration is specified without any extra steps as part of the JSON/TOML configuration that is used to initialize the openPMD Series as part of the Series constructor. This does not allow specifying different configurations per dataset, but sets the default configuration for all datasets.
As a separate JSON/TOML configuration during dataset initialization
Similarly to the Series constructor, the Dataset constructor optionally receives a JSON/TOML configuration, used for setting options specifically only for those datasets initialized with this Dataset specification. The default given in the Series constructor will be overridden.
This is the preferred way for configuring dataset-specific options that are not backend-specific (currently only {"resizable": true}).
By pattern-matching the dataset names
The above approach has the disadvantage that it has to be supported explicitly at the level of the downstream application, e.g. a simulation or data reader. As an alternative, the backend-specific dataset configuration under <backend>.dataset can also be given as a list of alternatives that are matched against the dataset name in sequence, e.g. hdf5.dataset = [<pattern_1>, <pattern_2>, ...].
Each such pattern <pattern_i> is a JSON object with key cfg and optional key select: {"select": <regex>, "cfg": <cfg>}.
In here, <regex> is a regex or a list of regexes, using the egrep flavor as defined by the C++ standard library.
<cfg> is a configuration that will be forwarded as a “regular” dataset configuration to the backend.
Note
To match lists of regular expressions select = [REGEX_1, REGEX_2, ..., REGEX_n], the list is internally transformed into a single regular expression ($^)|(REGEX_1)|(REGEX_2)|...|(REGEX_n).
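This transformation can be illustrated in plain Python. Python's re flavor differs from the egrep flavor used by the library, but the patterns here use only features common to both; the helper name join_regexes is made up for this sketch:

```python
import re

def join_regexes(regexes):
    # ($^) can never match a dataset path, so it safely anchors the alternation
    return "($^)|" + "|".join(f"({r})" for r in regexes)

pattern = re.compile(join_regexes([".*positionOffset.*", ".*particlePatches.*"]))
assert pattern.fullmatch("particles/e/positionOffset/x")
assert pattern.fullmatch("particles/e/particlePatches/extent/x")
assert not pattern.fullmatch("particles/e/position/x")
```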
In a configuration such as hdf5.dataset = [<pattern_1>, <pattern_2>, ...], the single patterns will be processed in top-down manner, selecting the first matching pattern found in the list.
The specified regexes will be matched against the openPMD dataset path either within the Iteration (e.g. meshes/E/x or particles/.*/position/.*) or within the Series (e.g. /data/1/meshes/E/x or /data/.*/particles/.*/position/.*), considering full matches only.
Note
The dataset name is determined by the result of attributable.myPath().openPMDPath() where attributable is an object in the openPMD hierarchy.
Note
To match against the path within the containing Iteration or within the containing Series, the specified regular expression is internally transformed into (/data/[0-9]+/)?(REGEX) where REGEX is the specified pattern, and then matched against the full dataset path.
The default configuration is specified by omitting the select key.
Specifying more than one default is an error.
If no pattern matches a dataset, the default configuration is chosen if specified, or an empty JSON object {} otherwise.
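Putting the matching rules together, the selection logic described above can be sketched in plain Python (an illustration only, not the library's implementation; the library uses egrep-style regexes, while this sketch uses Python's re flavor):

```python
import re

def choose_config(patterns, dataset_path):
    """Return the cfg of the first pattern whose select matches dataset_path,
    else the default cfg (the single pattern without a select key), else {}."""
    default_cfg = {}
    for pattern in patterns:
        if "select" not in pattern:
            default_cfg = pattern["cfg"]  # at most one default is allowed
            continue
        select = pattern["select"]
        regexes = [select] if isinstance(select, str) else select
        joined = "($^)|" + "|".join(f"({r})" for r in regexes)
        # match the path either within the Iteration or within the Series
        if re.fullmatch("(/data/[0-9]+/)?(" + joined + ")", dataset_path):
            return pattern["cfg"]
    return default_cfg

patterns = [
    {"cfg": {"chunks": "auto"}},                            # default
    {"select": "particles/e/.*", "cfg": {"chunks": [10]}},  # per-dataset
]
assert choose_config(patterns, "/data/5/particles/e/position/x") == {"chunks": [10]}
assert choose_config(patterns, "meshes/E/x") == {"chunks": "auto"}
```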
A full example:
# ADIOS2 config
[adios2.engine.parameters]
Profile = "On"
# default configuration
[[adios2.dataset]]
# nested list as ADIOS2 can add multiple operators to a single dataset
[[adios2.dataset.cfg.operators]]
type = "blosc"
parameters.doshuffle = "BLOSC_BITSHUFFLE"
parameters.clevel = "1"
# dataset-specific configuration to exclude some datasets
# from applying operators.
[[adios2.dataset]]
select = [".*positionOffset.*", ".*particlePatches.*"]
cfg.operators = []
# Now HDF5
[hdf5]
independent_stores = false
# default configuration
# The position of the default configuration does not matter, but there must
# be only one single default configuration.
[[hdf5.dataset]]
cfg.chunks = "auto"
# Dataset-specific configuration that specifies full paths,
# i.e. including the path to the Iteration.
# The non-default configurations are matched in top-down order,
# so the order is relevant.
[[hdf5.dataset]]
select = ["/data/1/particles/e/.*", "/data/2/particles/e/.*"]
cfg.chunks = [5]
# dataset-specific configuration that specifies only the path
# within the Iteration
[[hdf5.dataset]]
select = "particles/e/.*"
cfg.chunks = [10]
{
"adios2": {
"engine": {
"parameters": {
"Profile": "On"
}
},
"dataset": [
{
"cfg": {
"operators": [
{
"type": "blosc",
"parameters": {
"clevel": "1",
"doshuffle": "BLOSC_BITSHUFFLE"
}
}
]
}
},
{
"select": [
".*positionOffset.*",
".*particlePatches.*"
],
"cfg": {
"operators": []
}
}
]
},
"hdf5": {
"independent_stores": false,
"dataset": [
{
"cfg": {
"chunks": "auto"
}
},
{
"select": [
"/data/1/particles/e/.*",
"/data/2/particles/e/.*"
],
"cfg": {
"chunks": [
5
]
}
},
{
"select": "particles/e/.*",
"cfg": {
"chunks": [
10
]
}
}
]
}
}