scripts

environment variables

every script atlas runs receives the following environment variables automatically:

  • ATLAS_WORKSPACE: always 1 — indicates the script is running inside an atlas workspace
  • ATLAS_WORKSPACE_NAME: the workspace name (e.g., amber-arch)
  • ATLAS_WORKSPACE_PATH: full path to the worktree directory
  • ATLAS_ROOT_PATH: path to the repo’s bare clone
  • ATLAS_DEFAULT_BRANCH: the repo’s default branch (e.g., main)
  • ATLAS_PORT: allocated port for the workspace (starts at 3000, increments by 10)
  • ATLAS_WORKSPACE_MODE: workspace mode (standard, browse, or quick)
  • ATLAS_MAIN_CHECKOUT: "true" for primary environments, "false" for regular workspaces
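to see exactly what a script receives, a small debug script (hypothetical, not part of atlas) can dump the atlas-provided environment:

```shell
#!/usr/bin/env bash
# hypothetical .atlas/debug-env.sh: print every ATLAS_* variable this
# script received, sorted by name. the "|| true" keeps set -e happy when
# nothing matches (e.g., when run outside atlas).
set -euo pipefail

env | sort | grep '^ATLAS_' || true
```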

conductor compatibility

when the "use conductor" setting is enabled, atlas also sets the equivalent CONDUCTOR_* variables alongside the ATLAS_* ones:

  • CONDUCTOR_WORKSPACE_NAME
  • CONDUCTOR_WORKSPACE_PATH
  • CONDUCTOR_ROOT_PATH
  • CONDUCTOR_DEFAULT_BRANCH
  • CONDUCTOR_PORT
  • CONDUCTOR_MAIN_CHECKOUT

this means existing conductor scripts that reference CONDUCTOR_* env vars work without modification. atlas will also read conductor.json from the repo root and run its setup, run, and archive scripts at the appropriate times — so if you’re migrating from conductor, you don’t need to change anything.
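since atlas sets both families of variables, scripts rarely need to branch on which tool is running them. a script that must also run directly under conductor (where only the CONDUCTOR_* variables exist) can fall back explicitly; a minimal sketch:

```shell
#!/usr/bin/env bash
# sketch: read each value from ATLAS_* first, then CONDUCTOR_*, with a
# last-resort default, so the script works under either tool
set -euo pipefail

WORKSPACE_NAME="${ATLAS_WORKSPACE_NAME:-${CONDUCTOR_WORKSPACE_NAME:-}}"
ROOT_PATH="${ATLAS_ROOT_PATH:-${CONDUCTOR_ROOT_PATH:-}}"
PORT="${ATLAS_PORT:-${CONDUCTOR_PORT:-3000}}"

echo "==> workspace: ${WORKSPACE_NAME} (port ${PORT})"
```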

the .atlas/ directory

the .atlas/ directory lives at the root of your repository and contains shell scripts that atlas runs at different points in the workspace lifecycle:

  • .atlas/preflight.sh: runs before setup to check that required tools are installed
  • .atlas/setup.sh: runs when a workspace is created
  • .atlas/run.sh: runs when the dev server is started
  • .atlas/teardown.sh: runs when a workspace is archived

all scripts should be executable (chmod +x) and use a shebang line like #!/usr/bin/env bash.
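for example, from the repo root you can mark every lifecycle script executable and warn about any that is missing a shebang line:

```shell
# make every lifecycle script executable and flag missing shebang lines
# (the directory check keeps this safe to run in a repo without .atlas/)
if [ -d .atlas ]; then
  chmod +x .atlas/*.sh
  for script in .atlas/*.sh; do
    head -n 1 "$script" | grep -q '^#!' || echo "missing shebang: $script"
  done
fi
```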

preflight script

the preflight script runs before setup and verifies that all required dependencies are present on the machine. if the preflight fails, atlas won’t proceed with setup — this saves time by catching missing tools early.

a good preflight script checks for:

  • language runtimes (ruby, node, python, etc.)
  • package managers (bundler, npm, pip, etc.)
  • databases and services (postgres, redis, elasticsearch, etc.)
  • any other tools your project depends on

example: ruby on rails preflight

#!/usr/bin/env bash
set -euo pipefail

echo "==> preflight check: ${ATLAS_WORKSPACE_NAME}"

MISSING=()

check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "  ✓ $1: $(command -v "$1")"
  else
    echo "  ✗ $1: not found"
    MISSING+=("$1")
  fi
}

check_running() {
  local name="$1"
  local port="$2"
  if lsof -i ":$port" >/dev/null 2>&1; then
    echo "  ✓ $name: running on port $port"
  else
    echo "  ✗ $name: not running on port $port"
    MISSING+=("$name")
  fi
}

# language & tools
check_cmd ruby
check_cmd bundler
check_cmd node
check_cmd yarn

# services
check_running "postgresql" 5432
check_running "redis" 6379
check_running "elasticsearch" 9200

if [ ${#MISSING[@]} -gt 0 ]; then
  echo ""
  echo "  missing ${#MISSING[@]} required tool(s): ${MISSING[*]}"
  exit 1
fi

echo ""
echo "  all dependencies present"
exit 0

example: atlas’s own preflight

atlas itself uses a preflight script to check for node, npm, cargo, rustc, cargo-tauri, jq, and git:

#!/usr/bin/env bash
set -euo pipefail

ATLAS_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$ATLAS_DIR/lib/init.sh"

check_cmd_version "node" "node"
check_cmd_version "npm" "npm"
check_cmd_version "cargo" "cargo"
check_cmd_version "rustc" "rustc"
check_cmd_version "cargo-tauri" "cargo-tauri" "--version"
check_cmd "jq" "jq"
check_cmd "git" "git"

if [ ${#MISSING[@]} -gt 0 ]; then
  error "missing ${#MISSING[@]} required tool(s): ${MISSING[*]}"
  exit 1
fi

success "all dependencies present"
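the check_cmd and check_cmd_version helpers come from lib/init.sh, which isn’t shown in this document; a hypothetical sketch of what they might look like:

```shell
#!/usr/bin/env bash
# hypothetical sketch of the lib/init.sh helpers used above; the real
# implementations are not shown in this document
set -uo pipefail

MISSING=()

# check_cmd LABEL COMMAND: record LABEL in MISSING if COMMAND isn't on PATH
check_cmd() {
  local label="$1" cmd="$2"
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "  ✓ $label: $(command -v "$cmd")"
  else
    echo "  ✗ $label: not found"
    MISSING+=("$label")
  fi
}

# check_cmd_version LABEL COMMAND [FLAG]: like check_cmd, but also print
# the first line of the command's version output (default flag: --version)
check_cmd_version() {
  local label="$1" cmd="$2" flag="${3:---version}"
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "  ✓ $label: $("$cmd" "$flag" 2>/dev/null | head -n 1)"
  else
    echo "  ✗ $label: not found"
    MISSING+=("$label")
  fi
}
```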

setup script

the setup script runs once when a workspace is created. use it to install dependencies, prepare databases, copy config files, and anything else needed to get a working dev environment.

example: ruby on rails setup

#!/usr/bin/env bash
set -euo pipefail

echo "==> setting up workspace: ${ATLAS_WORKSPACE_NAME}"

# resolve the repo root: the parent of the directory containing this script
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"

# symlink .env files from the root repo
if [ -n "$ATLAS_ROOT_PATH" ]; then
  for env_file in ".env" ".env.development.local"; do
    src="$ATLAS_ROOT_PATH/$env_file"
    dest="$REPO_ROOT/$env_file"
    if [ -f "$src" ] && [ ! -e "$dest" ]; then
      echo "==> symlinking $env_file from root repo"
      ln -sf "$src" "$dest"
    fi
  done

  # copy master.key if not present
  if [ -f "$ATLAS_ROOT_PATH/config/master.key" ] && \
     [ ! -f "$REPO_ROOT/config/master.key" ]; then
    echo "==> copying master.key from root repo"
    cp "$ATLAS_ROOT_PATH/config/master.key" "$REPO_ROOT/config/master.key"
  fi

  # copy per-environment credential keys
  if [ -d "$ATLAS_ROOT_PATH/config/credentials" ]; then
    mkdir -p "$REPO_ROOT/config/credentials"
    for key_file in "$ATLAS_ROOT_PATH"/config/credentials/*.key; do
      [ -f "$key_file" ] || continue
      dest="$REPO_ROOT/config/credentials/$(basename "$key_file")"
      if [ ! -f "$dest" ]; then
        echo "==> copying $(basename "$key_file") from root repo"
        cp "$key_file" "$dest"
      fi
    done
  fi
fi

# install dependencies
echo "==> installing ruby dependencies"
cd "$REPO_ROOT"
bundle install

# prepare database
echo "==> preparing database"
bin/rails db:prepare

echo "==> setup complete"

key things this script handles:

  • env files — symlinks .env and .env.development.local from the root repo so secrets don’t need to be duplicated
  • credentials — copies master.key and per-environment .key files so rails can decrypt credentials
  • dependencies — runs bundle install for ruby gems
  • database — runs db:prepare which creates the database and runs migrations

database isolation

for postgres or mysql, use the workspace name to create isolated databases per workspace. in config/database.yml (hyphens in the workspace name are converted to underscores so the result is a valid database identifier):

development:
  database: myapp_development<%= "_#{ENV['ATLAS_WORKSPACE_NAME'].tr('-', '_')}" if ENV['ATLAS_WORKSPACE_NAME'] %>

test:
  database: myapp_test<%= "_#{ENV['ATLAS_WORKSPACE_NAME'].tr('-', '_')}" if ENV['ATLAS_WORKSPACE_NAME'] %>

this gives each workspace its own database (e.g., myapp_development_amber_arch) so migrations on one branch don’t interfere with another.
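the same naming scheme, sketched in shell, is handy when a setup or teardown script needs the database name directly (the db_name helper is hypothetical):

```shell
# hypothetical helper: compute the per-workspace database name, matching
# the database.yml scheme above (append the workspace name, with hyphens
# converted to underscores)
db_name() {
  local base="$1"
  if [ -n "${ATLAS_WORKSPACE_NAME:-}" ]; then
    echo "${base}_${ATLAS_WORKSPACE_NAME//-/_}"
  else
    echo "$base"
  fi
}
```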

run script

the run script starts your dev server. atlas runs this when you click the start server button or when the workspace opens (if auto-start is configured).

example: ruby on rails run

#!/usr/bin/env bash
set -euo pipefail

# resolve the repo root: the parent of the directory containing this script
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"

export PORT="${ATLAS_PORT:-${PORT:-3000}}"

cd "$REPO_ROOT"
exec bin/dev

the exec replaces the shell process with your server, so atlas can manage the process lifecycle correctly. bin/dev is the standard rails entry point that starts both the rails server and css/js watchers via foreman or a procfile.

teardown script

the teardown script runs when a workspace is archived. use it to clean up databases, temp files, or anything the setup script created.

example: ruby on rails teardown

#!/usr/bin/env bash
set -euo pipefail

echo "==> archiving workspace: ${ATLAS_WORKSPACE_NAME}"

# resolve the repo root: the parent of the directory containing this script
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
cd "$REPO_ROOT"

# drop the workspace database
bin/rails db:drop

echo "==> archive complete"

tips

  • use set -euo pipefail at the top of every script — this exits on errors, undefined variables, and pipe failures
  • use exec in run scripts so the server process replaces the shell — this lets atlas manage the process correctly
  • use ATLAS_ROOT_PATH to reference files from the original repo clone (credentials, env files, etc.) that shouldn’t be duplicated in worktrees
  • use ATLAS_PORT for your dev server port so each workspace gets its own port and doesn’t conflict with others
  • check services in preflight — verifying that postgres, redis, or other services are running before setup saves debugging time later

primary environments

primary environments represent your main checkout of a repository — the existing clone on disk rather than a worktree atlas created. they behave differently from regular workspaces in a few important ways that affect how you write scripts.

what’s different

  • preflight: runs before setup for both regular workspaces and primary environments
  • setup: runs when a regular workspace is created; never runs for primary environments (your main checkout is already set up)
  • run: runs normally for both
  • teardown: runs when a regular workspace is archived; never runs for primary environments (they can only be removed, not archived)
  • ATLAS_MAIN_CHECKOUT: "false" in regular workspaces, "true" in primary environments

since primary environments point to your existing repo clone, atlas assumes everything is already installed and configured — setup scripts are skipped entirely. and since primary environments can’t be archived (only removed), teardown scripts never run either. this means you don’t need to worry about atlas accidentally dropping a database or undoing work in your main checkout.

the run script works the same for both — ATLAS_PORT is still set so your server binds to the correct port.

guarding your scripts

use the ATLAS_MAIN_CHECKOUT variable to change behavior based on whether the workspace is a primary environment. this is mainly useful for teardown scripts as a defensive measure:

#!/usr/bin/env bash
set -euo pipefail

# default to "false" so the check also works under set -u if the script
# is invoked outside atlas and the variable is unset
if [ "${ATLAS_MAIN_CHECKOUT:-false}" = "true" ]; then
  echo "==> primary environment — skipping teardown"
  exit 0
fi

echo "==> archiving workspace: ${ATLAS_WORKSPACE_NAME}"

# resolve the repo root: the parent of the directory containing this script
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
cd "$REPO_ROOT"

bin/rails db:drop
echo "==> archive complete"

while teardown never runs for primary environments today, guarding against it is good practice in case your scripts are shared across tools or invoked manually.