datalad.api.run_procedure(spec=None, dataset=None, discover=False, help_proc=False)

Run prepared procedures (DataLad scripts) on a dataset


A “procedure” is an algorithm for processing a dataset in a particular way. Procedures can be useful in a wide range of scenarios, like adjusting dataset configuration in a uniform fashion, populating a dataset with particular content, or automating other routine tasks, such as synchronizing dataset content with certain siblings.

Implementations of some procedures are shipped together with DataLad, but additional procedures can be provided by 1) any DataLad extension, 2) any (sub-)dataset, 3) a local user, or 4) a local system administrator. DataLad will look for procedures in the following locations and order:

Directories identified by the configuration settings

  • ‘datalad.locations.user-procedures’ (determined by appdirs.user_config_dir; defaults to ‘$HOME/.config/datalad/procedures’ on GNU/Linux systems)
  • ‘datalad.locations.system-procedures’ (determined by appdirs.site_config_dir; defaults to ‘/etc/xdg/datalad/procedures’ on GNU/Linux systems)
  • ‘datalad.locations.dataset-procedures’

and subsequently in the ‘resources/procedures/’ directories of any installed extension, and, lastly, of the DataLad installation itself.

Please note that a dataset that defines ‘datalad.locations.dataset-procedures’ provides its procedures to any dataset it is a subdataset of. That way you can have a collection of such procedures in a dedicated dataset and install it as a subdataset into any dataset you want to use those procedures with. In case of a naming conflict within such a dataset hierarchy, the dataset you’re calling run-procedure on takes precedence over its subdatasets, and those over their subdatasets, and so on.

Each configuration setting can occur multiple times to indicate multiple directories to be searched. If a procedure matching a given name is found (the filename, stripped of any extension, equals the name), the search is aborted and that implementation is executed. This makes it possible for individual datasets, users, or machines to override externally provided procedures (enabling the implementation of customizable processing “hooks”).
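
The first-match-wins lookup described above can be sketched in Python (an illustrative sketch, not DataLad’s actual implementation; the function name is hypothetical):

```python
import os

def find_procedure(name, search_dirs):
    """Return the path of the first file whose basename, stripped of any
    extension, matches `name`; return None if no directory contains one.
    Directories earlier in `search_dirs` take precedence, which is how a
    dataset- or user-level procedure can shadow an externally provided one."""
    for directory in search_dirs:
        if not os.path.isdir(directory):
            continue
        for fname in sorted(os.listdir(directory)):
            if os.path.splitext(fname)[0] == name:
                return os.path.join(directory, fname)
    return None
```
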

Procedure implementation

A procedure can be any executable. Executables must have the appropriate permissions and, in the case of a script, must contain an appropriate “shebang” line. If a procedure is not executable, but its filename ends in ‘.py’, it is automatically executed by the ‘python’ interpreter (whichever version is available in the present environment). Likewise, procedure implementations ending in ‘.sh’ are executed via ‘bash’.

Procedures can implement any argument handling, but must be capable of taking at least one positional argument (the absolute path to the dataset they shall operate on).
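
A minimal procedure implementation might look like the following (a hypothetical sketch; the script name and its argument handling are illustrative):

```python
#!/usr/bin/env python
# cfg_demo.py -- hypothetical procedure script. DataLad passes the
# absolute dataset path as the first positional argument; any further
# procedure arguments follow it.
import sys

def main(argv):
    ds_path = argv[1]   # required: absolute path to the target dataset
    extra = argv[2:]    # optional: additional procedure arguments
    print("operating on dataset:", ds_path)
    for arg in extra:
        print("extra argument:", arg)

if __name__ == "__main__":
    main(sys.argv)
```
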

For further customization there are two configuration settings per procedure available:

  • ‘datalad.procedures.<NAME>.call-format’ a fully customizable format string that determines how procedure NAME is executed (see also datalad-run). It currently must include the following placeholders:

    • ‘{script}’: will be replaced by the path to the procedure

    • ‘{ds}’: will be replaced by the absolute path to the dataset the procedure shall operate on

    • ‘{args}’: (not actually required) will be replaced by all but the first element of spec, if spec is a list or tuple

  As an example, the default format string for a call to a Python script is: “python {script} {ds} {args}”

  • ‘datalad.procedures.<NAME>.help’ will be shown by datalad run-procedure --help-proc NAME to provide a description and/or usage information for procedure NAME
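
The placeholder substitution can be illustrated with Python’s str.format (a sketch; the expand helper and the example paths are hypothetical, but the default format string is the one quoted above):

```python
# Default call-format for a Python procedure, as quoted above.
CALL_FORMAT = "python {script} {ds} {args}"

def expand(fmt, script, ds, args):
    """Fill the call-format placeholders; `args` is joined with spaces,
    mirroring 'all but the first element of spec'."""
    return fmt.format(script=script, ds=ds, args=" ".join(args))
```
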

Customize other commands with procedures

On execution of any command, DataLad inspects two additional configuration settings:

  • ‘datalad.<name>.proc-pre’
  • ‘datalad.<name>.proc-post’

where ‘<name>’ is the name of a DataLad command. Using this mechanism DataLad can be instructed to run one or more procedures before or after the execution of a given command. For example, configuring a set of metadata types in any newly created dataset can be achieved via:

% datalad -c 'datalad.create.proc-post=cfg_metadatatypes xmp image' create -d myds

As procedures run on datasets, it is necessary to explicitly identify the target dataset via the -d (--dataset) option.
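
Instead of passing -c on every invocation, the same hook could be persisted in a configuration file; assuming the usual git-config syntax, the setting above would serialize roughly as:

```ini
[datalad "create"]
	proc-post = cfg_metadatatypes xmp image
```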

Parameters

  • spec – Name and possibly additional arguments of the to-be-executed procedure. [Default: None]
  • dataset (Dataset or None, optional) – specify the dataset to run the procedure on. An attempt is made to identify the dataset based on the current working directory. [Default: None]
  • discover (bool, optional) – if given, all configured paths are searched for procedures and one result record per discovered procedure is yielded, but no procedure is executed. [Default: False]
  • help_proc (bool, optional) – if given, get a help message for procedure NAME from the corresponding configuration setting. [Default: False]
  • on_failure ({'ignore', 'continue', 'stop'}, optional) – behavior to perform on failure: ‘ignore’: any failure is reported, but does not cause an exception; ‘continue’: if any failure occurs, an exception will be raised at the end, but processing of other actions will continue for as long as possible; ‘stop’: processing will stop on first failure and an exception is raised. A failure is any result with status ‘impossible’ or ‘error’. The raised exception is an IncompleteResultsError that carries the result dictionaries of the failures in its failed attribute. [Default: ‘continue’]
  • proc_post – Like proc_pre, but procedures are executed after the main command has finished. [Default: None]
  • proc_pre – DataLad procedure to run prior to the main command. The argument is a list of lists with procedure names and optional arguments. Procedures are called in the order they are given in this list. It is important to provide the respective target dataset to run a procedure on as the dataset argument of the main command. [Default: None]
  • result_filter (callable or None, optional) – if given, each to-be-returned status dictionary is passed to this callable, and is only returned if the callable’s return value does not evaluate to False or a ValueError exception is raised. If the given callable supports **kwargs it will additionally be passed the keyword arguments of the original API call. [Default: None]
  • result_renderer ({'default', 'json', 'json_pp', 'tailored'} or None, optional) – format of return value rendering on stdout. [Default: None]
  • result_xfm ({'paths', 'relpaths', 'datasets', 'successdatasets-or-none', 'metadata'} or callable or None, optional) – if given, each to-be-returned result status dictionary is passed to this callable, and its return value becomes the result instead. This is different from result_filter, as it can perform arbitrary transformation of the result value. This is mostly useful for top- level command invocations that need to provide the results in a particular format. Instead of a callable, a label for a pre-crafted result transformation can be given. [Default: None]
  • return_type ({'generator', 'list', 'item-or-list'}, optional) – return value behavior switch. If ‘item-or-list’, a single value is returned instead of a one-item return value list, or a list in case of multiple return values. None is returned in case of an empty list. [Default: ‘list’]