datalad.api.ls_file_collection

datalad.api.ls_file_collection(type: str, collection: CollectionSpec, *, hash: str | List[str] | None = None)

Report information on files in a collection

This is a utility for querying information on files in different file collections. The type of information reported varies across collection types. However, each result contains, at minimum, some kind of identifier for the collection ('collection' property) and an identifier for the respective collection item ('item' property). Each result also contains a type property that indicates the particular type of item being reported on. In most cases this will be file, but other categories like symlink or directory are recognized too.

If a collection type provides file access, this command can compute one or more hashes (checksums) for any file in a collection.
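Computing several hashes in a single pass over the file data is the typical pattern here; the hash names follow Python's hashlib (see the hash parameter below). A minimal stdlib sketch of the idea, where multihash is a hypothetical helper and not part of the datalad API:

```python
import hashlib

def multihash(data: bytes, algorithms: list[str]) -> dict[str, str]:
    """Compute several hashlib digests over the same bytes in one pass.

    Illustrative helper only, not part of the datalad API."""
    hashers = {name: hashlib.new(name) for name in algorithms}
    for h in hashers.values():
        h.update(data)
    return {name: h.hexdigest() for name, h in hashers.items()}

digests = multihash(b"example content", ["md5", "sha1"])
```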

Supported file collection types are:

directory

Reports on the content of a given directory (non-recursively). The collection identifier is the path of the directory. Item identifiers are the names of items within that directory. Standard properties like size, mtime, or link_target are included in the report. When hashes are computed, an fp property with a file-like object is provided. Reading file data from it requires a seek(0) in most cases. This file handle is only open while items are yielded directly by this command (return_type='generator'), and only until the next result is yielded.
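The seek(0) requirement exists because computing a hash reads the handle to EOF. A small stdlib illustration of the rewind pattern, using an io.BytesIO stand-in for the fp property:

```python
import hashlib
import io

# stand-in for an already-hashed open file handle
fp = io.BytesIO(b"file content")
digest = hashlib.sha256(fp.read()).hexdigest()  # hashing consumes the stream
assert fp.read() == b""  # position is now at EOF
fp.seek(0)               # rewind before reading the data again
data = fp.read()
```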

gittree

Reports on the content of a Git "tree-ish". The collection identifier is that tree-ish. The command must be executed inside a Git repository. If the working directory for the command is not the repository root (in case of a non-bare repository), the report is constrained to items underneath the working directory. Item identifiers are the relative paths of items within that working directory. Reported properties include gitsha and gittype; note that the gitsha is not equivalent to a SHA1 hash of a file's content, but is the SHA-type blob identifier as reported and used by Git. Reporting of content hashes beyond the gitsha is presently not supported.
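The difference between a gitsha and a plain content hash can be seen by reproducing Git's blob identifier, which is the SHA1 of a "blob <size>\0" header followed by the content. A stdlib sketch, where git_blob_sha is an illustrative helper and not part of the datalad API:

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    """SHA1 over Git's blob header plus content, as 'git hash-object' computes it."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

blob = git_blob_sha(b"hello\n")               # what Git reports for this blob
plain = hashlib.sha1(b"hello\n").hexdigest()  # plain SHA1 of the content alone
# the two values differ because of the header
```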

gitworktree

Reports on all tracked and untracked content of a Git repository's work tree. The collection identifier is the path of a directory in a Git repository (which can, but need not, be its root). Item identifiers are the relative paths of items within that directory. Reported properties include gitsha and gittype; note that the gitsha is not equivalent to a SHA1 hash of a file's content, but is the SHA-type blob identifier as reported and used by Git. When hashes are computed, an fp property with a file-like object is provided. Reading file data from it requires a seek(0) in most cases. This file handle is only open while items are yielded directly by this command (return_type='generator'), and only until the next result is yielded.

annexworktree

Like gitworktree, but amends the reported items with git-annex information, such as annexkey, annexsize, and annexobjpath.

tarfile

Reports on members of a TAR archive. The collection identifier is the path of the TAR file. Item identifiers are the relative paths of archive members within the archive. Reported properties are similar to the directory collection type. When hashes are computed, an fp property with a file-like object is provided. Reading file data from it requires a seek(0) in most cases. This file handle is only open while items are yielded directly by this command (return_type='generator'), and only until the next result is yielded.
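The member properties reported for this collection type correspond closely to what Python's standard tarfile module exposes. A self-contained stdlib sketch that builds a small in-memory archive and lists its members:

```python
import io
import tarfile

# build a small in-memory TAR archive for demonstration
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="hello.txt")
    payload = b"hello"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# read the archive back and collect per-member metadata
with tarfile.open(fileobj=buf, mode="r") as tar:
    members = [(m.name, m.size, m.isfile()) for m in tar.getmembers()]
```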

zipfile

Like tarfile, but for reporting on ZIP archives.

Examples

Report on the content of a directory:

> records = ls_file_collection("directory", "/tmp")

Report on the content of a TAR archive with MD5 and SHA1 file hashes:

> records = ls_file_collection("tarfile", "myarchive.tar.gz", hash=["md5", "sha1"])

List annex keys of all files in the working tree of a dataset:

> [r['annexkey']
   for r in ls_file_collection('annexworktree', '.')
   if 'annexkey' in r]
Parameters:
  • type -- Name of the type of file collection to report on.

  • collection -- identifier or location of the file collection to report on. Depending on the type of collection to process, the specific nature of this parameter can be different. A common identifier for a file collection is a path (to a directory, to an archive), but might also be a URL. See the documentation for details on supported collection types.

  • hash -- One or more names of algorithms to be used for reporting file hashes. They must be supported by the Python 'hashlib' module, e.g. 'md5' or 'sha256'. Reporting file hashes typically implies retrieving/reading file content. This processing may also enable reporting of additional properties that may otherwise not be readily available. [Default: None]

  • on_failure ({'ignore', 'continue', 'stop'}, optional) -- behavior to perform on failure: 'ignore': any failure is reported, but does not cause an exception; 'continue': if any failure occurs, an exception will be raised at the end, but processing of other actions will continue for as long as possible; 'stop': processing will stop on first failure and an exception is raised. A failure is any result with status 'impossible' or 'error'. The raised exception is an IncompleteResultsError that carries the result dictionaries of the failures in its failed attribute. [Default: 'continue']

  • result_filter (callable or None, optional) -- if given, each to-be-returned status dictionary is passed to this callable, and the result is dropped if the callable's return value evaluates to False or if it raises a ValueError exception. If the given callable supports **kwargs, it will additionally be passed the keyword arguments of the original API call. [Default: None]

  • result_renderer -- select rendering mode for command results. 'tailored' enables a command-specific rendering style that is typically tailored to human consumption, if there is one for a specific command, and otherwise falls back on the 'generic' result renderer; 'generic' renders each result on one line with key info like action, status, path, and an optional message; 'json' is a complete JSON line serialization of the full result record; 'json_pp' is like 'json', but pretty-printed across multiple lines; 'disabled' turns off result rendering entirely; '<template>' reports any value(s) of any result properties in any format indicated by the template (e.g. '{path}'; compare with JSON output for all key-value choices). The template syntax follows the Python "format() language". It is possible to report individual dictionary values, e.g. '{metadata[name]}'. If a 2nd-level key contains a colon, e.g. 'music:Genre', the ':' must be substituted by '#' in the template, like so: '{metadata[music#Genre]}'. [Default: 'tailored']

  • result_xfm ({'datasets', 'successdatasets-or-none', 'paths', 'relpaths', 'metadata'} or callable or None, optional) -- if given, each to-be-returned result status dictionary is passed to this callable, and its return value becomes the result instead. This is different from result_filter, as it can perform arbitrary transformation of the result value. This is mostly useful for top-level command invocations that need to provide the results in a particular format. Instead of a callable, a label for a pre-crafted result transformation can be given. [Default: None]

  • return_type ({'generator', 'list', 'item-or-list'}, optional) -- return value behavior switch. If 'item-or-list', a single value is returned instead of a one-item list, or a list in case of multiple return values. None is returned in case of an empty list. [Default: 'list']
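The '<template>' result_renderer mode described above uses Python's str.format() mini-language; a brief illustration of the dictionary-indexing syntax, using a made-up result record:

```python
# hypothetical result record, for illustration only
result = {"path": "/tmp/f.txt", "metadata": {"name": "demo"}}

# '{metadata[name]}' indexes into a nested dictionary, as described above
line = "{path}: {metadata[name]}".format(**result)
```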