API

Gantry’s public API.

class GitRepoState(repo: str, repo_url: str, ref: str, branch: str | None = None)[source]

Represents the state of a local git repository.

Tip

Use from_env() to instantiate this class.

repo: str

The repository name, e.g. "allenai/beaker-gantry".

repo_url: str

The repository URL for cloning, e.g. "https://github.com/allenai/beaker-gantry".

ref: str

The current ref.

branch: str | None = None

The current active branch, if any.

property is_dirty: bool

Whether the local repository state is dirty (has uncommitted changes).

property is_public: bool

Whether the repository is public.

property ref_url: str

The URL to the current ref.

property branch_url: str | None

The URL to the current active branch.

classmethod from_env(ref: str | None = None, branch: str | None = None) GitRepoState[source]

Instantiate this class from the root of a git repository.

Raises:

launch_experiment(args: Sequence[str], name: str | None = None, description: str | None = None, task_name: str = 'main', workspace: str | None = None, group_name: str | None = None, clusters: Sequence[str] | None = None, gpu_types: Sequence[str] | None = None, hostnames: Sequence[str] | None = None, beaker_image: str | None = None, docker_image: str | None = None, cpus: float | None = None, gpus: int | None = None, memory: str | None = None, shared_memory: str | None = None, datasets: Sequence[str] | None = None, gh_token_secret: str = 'GITHUB_TOKEN', ref: str | None = None, branch: str | None = None, conda: PathLike | str | None = None, pip: PathLike | str | None = None, venv: str | None = None, env_vars: Sequence[str] | None = None, env_secrets: Sequence[str] | None = None, dataset_secrets: Sequence[str] | None = None, timeout: int = 0, task_timeout: str | None = None, show_logs: bool = True, allow_dirty: bool = False, dry_run: bool = False, yes: bool = False, save_spec: PathLike | str | None = None, priority: str | None = None, install: str | None = None, no_python: bool = False, no_conda: bool = False, replicas: int | None = None, leader_selection: bool = False, host_networking: bool = False, propagate_failure: bool | None = None, propagate_preemption: bool | None = None, synchronized_start_timeout: str | None = None, mounts: Sequence[str] | None = None, weka: str | None = None, budget: str | None = None, preemptible: bool | None = None, retries: int | None = None, results: str = '/results', skip_tcpxo_setup: bool = False)[source]

Launch an experiment on Beaker. Same as the gantry run command.

follow_workload(beaker: Beaker, workload: Workload, *, task: Task | None = None, timeout: int = 0, tail: bool = False, show_logs: bool = True) Job[source]

Follow a workload until completion while streaming logs to stdout.

Parameters:
  • task – A specific task in the workload to follow. Defaults to the first task.

  • timeout – The number of seconds to wait for the workload to complete. Raises a timeout error if it doesn’t complete in time. Set to 0 (the default) to wait indefinitely.

  • tail – If a job is already running, start tailing its logs from the current point instead of showing them all from the beginning.

  • show_logs – Set to False to avoid streaming the logs.

Returns:

The finalized BeakerJob from the task being followed.

Raises:

BeakerJobTimeoutError – If timeout is set to a positive number and the workload doesn’t complete in time.
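The timeout semantics above (0 waits indefinitely, a positive value raises on expiry) can be sketched as a simple polling loop. This is an illustrative sketch, not gantry's implementation; the function and status names are assumptions, and the standard TimeoutError stands in for BeakerJobTimeoutError:

```python
import time


def follow_until_done(poll_status, *, timeout: int = 0, interval: float = 0.01) -> str:
    """Sketch of follow_workload's timeout behavior (assumed names):
    poll once per interval until the workload reports a terminal status.
    timeout == 0 waits indefinitely; a positive timeout raises
    TimeoutError (standing in for BeakerJobTimeoutError) on expiry."""
    deadline = None if timeout == 0 else time.monotonic() + timeout
    while True:
        status = poll_status()
        if status in ("succeeded", "failed", "canceled"):
            return status
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError("workload did not complete in time")
        time.sleep(interval)


# A fake workload that reaches a terminal status on the third poll.
polls = iter(["running", "running", "succeeded"])
print(follow_until_done(lambda: next(polls)))  # succeeded
```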