
Contributing to ART

Clone the repository:

git clone https://github.com/OpenPipe/ART.git
cd ART

Install the dependencies:

uv sync --group dev

Code Quality Checks (prek)

This project uses prek to run local checks (ruff, pyright, uv.lock sync, and unit tests). Before submitting a pull request, please ensure your code passes all quality checks:

# Install git hooks (optional but recommended)
uv run prek install

# Run all checks against all files (formatting, linting, typecheck, uv.lock, tests)
uv run prek run --all-files

You can also run individual hooks:

uv run prek run ruff
uv run prek run ruff-format
uv run prek run pyright
uv run prek run uv-lock-check
uv run prek run pytest

These checks are automatically run in CI for all pull requests. If your PR fails these checks, re-run the corresponding prek hook locally and commit any fixes.

CI uv Cache

The PR prek workflow uses a prebuilt full uv cache (stored as a GitHub release asset) to avoid rebuilding heavy dependencies on every run.

The cache is keyed by a fingerprint computed from pyproject.toml, uv.lock, the base Docker image, and the Python version. When dependencies change, the fingerprint changes and CI automatically rebuilds the cache using Docker Buildx and uploads it for future runs. The first CI run after a dependency change will be slower while the cache is built.
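As a rough illustration of the keying scheme described above, a fingerprint can be derived by hashing the relevant inputs together. This is only a sketch; the actual logic lives in scripts/ci/build_and_push_uv_cache.sh, and the base image and Python version shown here are placeholders, not the values CI uses:

```shell
# Sketch: derive a cache key from the dependency inputs.
# BASE_IMAGE and PY_VERSION are assumed values for illustration only.
BASE_IMAGE="nvidia/cuda:12.4.1-devel-ubuntu22.04"
PY_VERSION="3.11"

fingerprint=$(
  {
    cat pyproject.toml uv.lock 2>/dev/null  # dependency manifests (if present)
    echo "$BASE_IMAGE"
    echo "$PY_VERSION"
  } | sha256sum | cut -c1-16
)
echo "uv-cache-$fingerprint"
```

Because the hash covers all four inputs, editing any of them (for example, bumping a dependency in pyproject.toml) yields a new key, which is what triggers the automatic cache rebuild.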

To manually rebuild the cache (e.g., if the automatic build fails), run:

bash scripts/ci/build_and_push_uv_cache.sh

This requires GitHub CLI authentication (gh auth login) and should be run in an environment compatible with CI (same base CUDA image/toolchain).

Release Process

To create a new release:

  1. Review merged PRs since the last release:

    • Go to the pull requests page
    • Review PRs merged since the last release to understand what changed
    • Note any breaking changes, new features, or important bug fixes
  2. Create a draft release:

    • Go to the repository's Actions tab and select the release workflow
    • Click "Run workflow"
    • Select the version bump type:
      • patch: Bug fixes and minor changes (0.3.13 → 0.3.14)
      • minor: New features and non-breaking changes (0.3.13 → 0.4.0)
      • major: Breaking changes (0.3.13 → 1.0.0)
  3. Edit the draft release notes:

    • Go to the releases page
    • Click "Edit" on the draft release
    • Add release highlights, breaking changes, and curated changelog
    • The auto-generated PR list provides a starting point, but manual curation improves clarity
  4. Finalize the release:

    • Review and merge the automatically created release PR
    • This will automatically:
      • Create the git tag
      • Publish the curated release notes
      • Build and publish the package to PyPI

Then follow the GPU training instructions below.

GPU Training (Local or Cloud VM)

Copy the .env.example file to .env and set the environment variables:

cp .env.example .env
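After copying the file, you can sanity-check that the variables actually load into your shell. The key names below are illustrative assumptions, not the authoritative list; check .env.example for the variables ART actually reads:

```shell
# Load .env into the current shell and flag any unset keys.
# WANDB_API_KEY / OPENAI_API_KEY are example names only.
set -a
. ./.env 2>/dev/null || true
set +a

missing=""
for key in WANDB_API_KEY OPENAI_API_KEY; do
  eval "val=\${$key:-}"
  [ -n "$val" ] || missing="$missing $key"
done
[ -z "$missing" ] && echo "all keys set" || echo "missing:$missing"
```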

Make sure you're on a machine with at least one H100 or A100-80GB GPU. Machines equipped with lower-end GPUs may work, but training will be slower.

If you're using a cloud VM, you can SSH into the machine using either VSCode or the command line.

Connecting via Command Line

Simply run:

ssh art
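This assumes an `art` alias exists in your ~/.ssh/config. SkyPilot typically writes such an entry for you when it launches a cluster named `art`; if you need to create one by hand, it might look like the fragment below (hostname, user, and key path are placeholders for your VM's actual values):

```
Host art
  HostName <your-vm-ip>
  User ubuntu
  IdentityFile ~/.ssh/sky-key
```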

Connecting via VSCode

  1. Install the Remote-SSH extension on your local machine

    • Open the extensions view by clicking on the Extensions icon in the Activity Bar on the left.
    • Search for "Remote-SSH" and install it.
  2. Configure default extensions for your remote host

    • In your VSCode settings, find "Remote.SSH: Default Extensions"
    • Add the following extensions:
      • ms-python.python
      • ms-toolsai.jupyter
      • eamodio.gitlens
      • charliermarsh.ruff
  3. Connect to the host

    • Open the command palette and run "Remote-SSH: Connect to Host..."
    • Select art
  4. Set up the host

    • Click "Open Folder"
      • Select "sky_workdir"
      • Click OK
  5. Run a notebook

    • Find 2048.ipynb and run it!

"2048" example

Now you can run the "2048" example in /examples/2048/2048.ipynb.

It has been tested with the Qwen/Qwen2.5-14B-Instruct model on a 1xH100 instance.

You can monitor training progress with Weights & Biases at https://wandb.ai/your-wandb-organization/agent-reinforcement-training.

You should see immediate improvement in val/reward after one step.

If you run into any issues, note that the training output is set to maximum verbosity. Copying outputs such as the vLLM or torchtune logs, or copying/screenshotting the plotted packed tensors, will help us debug the issue.

Cleaning Up

When you're done, you can tear down the cluster with:

uv run sky down art

Adding Docs

We use Mintlify to serve our docs. Here are the steps for adding a new page:

  1. Clone the ART repo
  2. Navigate to the /docs directory in your terminal and open it in your IDE
  3. Run npx mintlify dev to start serving a local version of the docs in your browser
  4. Create a new .mdx file in the relevant directory
  5. Add a title and sidebar title (see other pages for examples)
  6. In docs.json, add a link to the new page within one of the navigation.groups
  7. Ensure everything works by navigating to and viewing the page in your browser
  8. Submit a PR
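For step 6, the docs.json entry might look like the fragment below. The group name and page paths are illustrative; match them to the groups and paths already present in docs.json:

```json
{
  "navigation": {
    "groups": [
      {
        "group": "Guides",
        "pages": [
          "guides/existing-page",
          "guides/your-new-page"
        ]
      }
    ]
  }
}
```

The page path is the new .mdx file's location relative to /docs, without the extension.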

When you're done, shut down your GPU instance (if using a cloud VM) or stop the local training process.