CLI¶
Gantry’s command-line interface (CLI) is the primary way to use it.
Run 'gantry --help' to see a list of all available commands:
❯ gantry --help
Usage: gantry [OPTIONS] COMMAND [ARGS]...
Options:
--version Show the version and exit.
--quiet / --not-quiet Don't display the gantry logo. Can also be
set through the environment variable
'GANTRY_QUIET'. Defaults to false.
--log-level [debug|info|warning|error]
The Python log level. Can also be set
through the environment variable
'GANTRY_LOG_LEVEL'. Defaults to 'warning'.
--help Show this message and exit.
Commands:
completion Generate the autocompletion script for gantry for the...
config Configure Gantry for a specific Beaker workspace.
find-gpus Find free GPUs.
follow Follow the logs for a running experiment.
list List recent experiments within a workspace or group.
logs Display the logs for an experiment workload.
open Open the page for a Beaker object in your browser.
run Run an experiment on Beaker.
stop Stop a running workload.
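The global options above go before the command name. As a minimal sketch, enabling debug logging and suppressing the logo for a single invocation might look like this (any command would work in place of 'list'):
❯ gantry --quiet --log-level=debug list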
run¶
❯ gantry run
Usage: gantry run [OPTIONS] [ARGS]...
Run an experiment on Beaker.
Example:
$ gantry run --yes --show-logs -- python -c 'print("Hello, World!")'
Options:
--help Show this message and exit.
❯❯❯ Workload settings:
-n, --name TEXT A name to assign to the experiment on
Beaker. Defaults to a randomly generated
name.
-d, --description TEXT A description for the experiment.
-w, --workspace TEXT The Beaker workspace to pull experiments
from.
-b, --budget TEXT The budget account to associate with the
experiment.
--group TEXT A group to assign the experiment to.
Multiple allowed.
❯❯❯ Launch settings:
--show-logs / --no-logs Whether or not to stream the logs to stdout
as the experiment runs.
--timeout INTEGER Time to wait (in seconds) for the experiment
to finish. A timeout of -1 means wait
indefinitely. A timeout of 0 means don't
wait at all. This defaults to 0 unless you
set --show-logs, in which case it defaults
to -1.
--allow-dirty Allow submitting the experiment with a dirty
working directory.
-y, --yes Skip all confirmation prompts.
--dry-run Do a dry run only.
--save-spec FILE A path to save the generated YAML Beaker
experiment spec to.
❯❯❯ Constraints:
-c, --cluster TEXT The name of a cluster to use or a glob
pattern, e.g. --cluster='ai2/*-cirrascale'.
Multiple allowed. If you don't specify a
cluster or the priority, the priority will
default to 'preemptible' and the job will be
able to run on any on-premise cluster.
--gpu-type TEXT Filter clusters by GPU type (e.g. "--gpu-
type=h100"). Multiple allowed.
--tag TEXT Filter clusters by a tag (e.g. "--
tag=storage:weka"). Multiple allowed, in
which case only clusters that have all
specified tags will be used.
--hostname TEXT Hostname constraints to apply to the
experiment spec. This option can be used
multiple times to allow multiple hosts.
❯❯❯ Resources:
--cpus FLOAT The number of logical CPU cores (e.g. 4.0,
0.5) to assign to each task replica.
--gpus INTEGER The number of GPUs (e.g. 1) to assign to
each task replica.
--memory TEXT The amount of system memory to assign to
each task replica. This should be specified
as a number with unit suffix (e.g. 2.5GiB).
--shared-memory TEXT The size of /dev/shm as a number with unit
suffix (e.g. 2.5GiB).
❯❯❯ Inputs:
--beaker-image TEXT The name or ID of an image on Beaker to use
for your experiment. Mutually exclusive with
--docker-image. Defaults to 'petew/gantry'
if neither is set.
--docker-image TEXT The name of a public Docker image to use for
your experiment. Mutually exclusive with
--beaker-image.
--dataset TEXT An input dataset in the form of 'dataset-
name:/mount/location' or 'dataset-
name:sub/path:/mount/location' to attach to
your experiment. You can specify this option
more than once to attach multiple datasets.
-m, --mount TEXT Host directories to mount to the Beaker
experiment. Should be in the form
'{HOST_SOURCE}:{TARGET}' similar to the '-v'
option with 'docker run'.
--weka TEXT A weka bucket to mount in the form of
'bucket-name:/mount/location', e.g.
--weka=oe-training-default:/data
--env TEXT Environment variables to add to the Beaker
experiment. Should be in the form
'{NAME}={VALUE}'.
--env-secret, --secret-env TEXT
Environment variables to add to the Beaker
experiment from Beaker secrets. Should be in
the form '{NAME}={SECRET_NAME}'.
--dataset-secret TEXT Mount a Beaker secret to a file as a
dataset. Should be in the form
'{SECRET_NAME}:{MOUNT_PATH}'.
--ref TEXT The target git ref to use. Defaults to the
latest commit.
--branch TEXT The target git branch to use. Defaults to
the active branch.
--gh-token-secret TEXT The name of the Beaker secret that contains
your GitHub token. [default: GITHUB_TOKEN]
❯❯❯ Outputs:
--results TEXT Specify the results directory on the
container (an absolute path). This is where
the results dataset will be mounted.
[default: /results]
❯❯❯ Task settings:
-t, --task-name TEXT A name to assign to the task on Beaker.
[default: main]
--priority [low|normal|high|urgent|immediate]
The job priority.
--task-timeout TEXT The Beaker job timeout, e.g. "24h". If a job
runs longer than this it will be canceled by
Beaker.
--preemptible / --not-preemptible
Mark the job as preemptible or not. If you
don't specify at least one cluster then jobs
will default to preemptible.
--retries INTEGER Specify the number of automatic retries for
the experiment.
❯❯❯ Multi-node config:
--replicas INTEGER The number of task replicas to run.
--leader-selection Specifies that the first task replica should
be the leader and populates each task with
'BEAKER_LEADER_REPLICA_HOSTNAME' and
'BEAKER_LEADER_REPLICA_NODE_ID' environment
variables. This is only applicable when '--
replicas INT' and '--host-networking' are
used, although the '--host-networking' flag
can be omitted in this case since it's
assumed.
--host-networking Specifies that each task replica should use
the host's network. When used with '--
replicas INT', this allows the replicas to
communicate with each other using their
hostnames.
--propagate-failure Stop the experiment if any task fails.
--propagate-preemption Stop the experiment if any task is
preempted.
--synchronized-start-timeout TEXT
If set, jobs in the replicated task will
wait this long to start until all other jobs
are also ready. This should be specified as
a duration such as '5m', '30s', etc.
--skip-tcpxo-setup By default Gantry will configure NCCL for
TCPXO when running a multi-node job on
Augusta (--replicas > 1), but you can use
this flag to skip that step if you need a
custom configuration. If you do use this
flag, you'll probably need to follow all of
the steps documented here:
https://beaker-docs.allen.ai/compute/augusta.html#distributed-workloads
❯❯❯ Runtime:
--runtime-dir TEXT The runtime directory on the image.
[default: /gantry-runtime]
--exec-method [exec|bash] Defines how your command+arguments are
evaluated and executed at runtime. 'exec'
means gantry will call 'exec "$@"' to
execute your command. 'bash' means gantry
will call 'bash -c "$*"' to execute your
command. One reason you might prefer 'bash'
over 'exec' is if you have shell variables
in your arguments that you want expanded at
runtime. [default: exec]
❯❯❯ Setup hooks:
--pre-setup TEXT Set a custom command or shell script to run
before gantry's setup steps.
--post-setup TEXT Set a custom command or shell script to run
after gantry's setup steps.
❯❯❯ Python settings:
--python-manager [uv|conda] The tool to use to manage Python
installations and environments at runtime.
If not specified this will default to 'uv'
(recommended) in most cases, unless other
'--conda-*' specific options are given (see
below).
--default-python-version TEXT
The default Python version to use when
constructing a new Python environment. This
will be ignored if gantry is instructed to
use an existing Python
distribution/environment on the image, such
as with the --system-python flag, the --uv-
venv option, or the --conda-env option.
[default: 3.10]
--system-python If set, gantry will try to use the default
Python installation on the image. Note that
the behavior is slightly different when
using conda as the Python manager, in which
case gantry will try to use the base conda
environment.
--install TEXT Override the default Python project
installation method with a custom command or
shell script, e.g. '--install "python
setup.py install"' or '--install "my-custom-
install-script.sh"'.
--no-python If set, gantry will skip setting up a Python
environment altogether. This can be useful
if your experiment doesn't need Python or if
your image already contains a complete
Python environment.
❯❯❯ Python uv settings: Settings specific to the uv Python manager
(--python-manager=uv).
--uv-venv TEXT A path to a Python virtual environment on
the image.
--uv-extra TEXT Include optional dependencies for your local
project from the specified extra name. Can
be specified multiple times. If not
provided, all extras will be installed
unless --uv-no-extras is given.
--uv-all-extras / --uv-no-extras
Install your local project with all extra
dependencies, or no extra dependencies. This
defaults to true unless --uv-extra is
specified.
--uv-torch-backend TEXT The backend to use when installing packages
in the PyTorch ecosystem with uv. Valid
options are 'auto', 'cpu', 'cu128', etc.
❯❯❯ Python conda settings: Settings specific to the conda Python
manager (--python-manager=conda).
--conda-file FILE Path to a conda environment file for
reconstructing your Python environment. If
not specified, an
'environment.yml'/'environment.yaml' file
will be used if it exists.
--conda-env TEXT The name or path to an existing conda
environment on the image to use.
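Putting some of these options together, here is a minimal sketch of a single-GPU run that filters clusters by GPU type and streams logs (the workspace and budget names are placeholders, and 'train.py' is a hypothetical entrypoint):
❯ gantry run \
    --workspace ai2/my-workspace \
    --budget ai2/my-budget \
    --gpus 1 \
    --gpu-type h100 \
    --show-logs \
    -- python train.py
And a sketch of a two-node job using the multi-node flags described above (again with placeholder names):
❯ gantry run \
    --workspace ai2/my-workspace \
    --budget ai2/my-budget \
    --gpus 8 \
    --replicas 2 \
    --leader-selection \
    --host-networking \
    --propagate-failure \
    -- python train.py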
config¶
❯ gantry config
Usage: gantry config [OPTIONS] COMMAND [ARGS]...
Configure Gantry for a specific Beaker workspace.
Options:
--help Show this message and exit.
Commands:
set-gh-token Set or update Gantry's GitHub token for the workspace.
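For example, to set or refresh the GitHub token Gantry uses for the workspace (the command may prompt you for the token; see its own --help for details):
❯ gantry config set-gh-token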
find-gpus¶
❯ gantry find-gpus
Usage: gantry find-gpus [OPTIONS]
Find free GPUs.
Options:
-a, --all Show all clusters, not just ones with free GPUs.
-g, --gpu-type TEXT Filter by GPU type (e.g. "h100"). Multiple allowed.
--help Show this message and exit.
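For example, to check for free H100s, or to list every cluster regardless of availability:
❯ gantry find-gpus --gpu-type h100
❯ gantry find-gpus --all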
follow¶
❯ gantry follow
Usage: gantry follow [OPTIONS] [WORKLOAD]
Follow the logs for a running experiment.
Options:
-t, --tail Only tail the logs as opposed to printing all logs so
far.
-l, --latest Get the logs from the latest running experiment (non-
session) workload.
-w, --workspace TEXT The Beaker workspace to pull experiments from.
-a, --author TEXT Pull the latest experiment workload for a particular
author. Defaults to your own account.
--help Show this message and exit.
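For example, to tail the logs of your latest running experiment workload:
❯ gantry follow --latest --tail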
logs¶
❯ gantry logs
Usage: gantry logs [OPTIONS] WORKLOAD
Display the logs for an experiment workload.
Options:
-r, --replica INTEGER The replica rank to pull logs from.
--task TEXT The name of the task to pull logs from.
-t, --tail INTEGER Tail this many lines.
--run INTEGER The run number to pull logs from.
-f, --follow Continue streaming logs for the duration of the job.
--help Show this message and exit.
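For example, to stream the logs from replica 0 of a workload ('WORKLOAD' here stands for your experiment's ID or name):
❯ gantry logs WORKLOAD --replica 0 --follow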
list¶
❯ gantry list
Usage: gantry list [OPTIONS]
List recent experiments within a workspace or group. This will only show
experiments launched with Gantry by default, unless '--all' is specified.
Options:
-w, --workspace TEXT The Beaker workspace to pull experiments
from.
-g, --group TEXT The Beaker group to pull experiments from.
-l, --limit INTEGER Limit the number of experiments to display.
Default: 10
-a, --author TEXT Filter by author. Tip: use '--me' instead to
show your own experiments.
--me Only show your own experiments. Mutually
exclusive with '--author'.
-s, --status [submitted|queued|initializing|running|stopping|uploading_results|canceled|succeeded|failed|ready_to_start]
Filter by status. Multiple allowed.
--max-age INTEGER Maximum age of experiments, in days.
Default: 7
--all Show all experiments, not just ones
submitted through Gantry.
--help Show this message and exit.
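For example, to show your own running experiments from the past day:
❯ gantry list --me --status running --max-age 1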
stop¶
❯ gantry stop
Usage: gantry stop [OPTIONS] [WORKLOAD]...
Stop a running workload.
Options:
-l, --latest Stop your latest experiment (non-session) workload.
-w, --workspace TEXT The Beaker workspace to pull experiments from.
--dry-run Do a dry-run without stopping any experiments.
-y, --yes Skip all confirmation prompts.
--help Show this message and exit.
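For example, to stop your latest experiment workload without confirmation prompts:
❯ gantry stop --latest --yes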
open¶
❯ gantry open
Usage: gantry open [OPTIONS] [IDENTIFIERS]...
Open the page for a Beaker object in your browser.
Options:
--help Show this message and exit.
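For example, to open the Beaker page for a workload (the identifier here is a placeholder for an actual Beaker ID or name):
❯ gantry open WORKLOAD_ID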
completion¶
❯ gantry completion
Usage: gantry completion [OPTIONS] COMMAND [ARGS]...
Generate the autocompletion script for gantry for the specified shell.
See each sub-command's help for details on how to use the generated script.
Options:
--help Show this message and exit.
Commands:
bash Generate the autocompletion script for bash.
fish Generate the autocompletion script for fish.
zsh Generate the autocompletion script for zsh.
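For example, to print the zsh completion script (how you install it depends on your shell setup; see the sub-command's --help):
❯ gantry completion zsh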