Running a uv Python Workspace Inside an Nx Monorepo

Two different tools resting on the same workbench
Apr 15, 2026 · 3 min read · Nx, Monorepo, pnpm, CI/CD, GitHub Actions, TypeScript

The problem

I wanted to add a FastAPI service to a pnpm + Nx monorepo. Nx has no idea what Python is. The affected graph cannot see Python imports, pnpm install cannot install Python packages, and the default Nx CI path runs lint, test, and build only against projects it recognizes.

The service is real work: a Greenhouse scraper, a scoring engine, a poller, database migrations. It needs its own linter, formatter, type checker, and test runner. I did not want to bolt a Poetry install into every CI step or ask pnpm to pretend to be a Python installer.

The approach

Four escape hatches, each small:

  1. uv workspace at the monorepo root, with the Python package as a member.
  2. Docker build from the repo root, not the service directory, so the workspace lockfile is reachable.
  3. A dedicated ci-python CI job that runs alongside the Node jobs and uses the same pnpm nx CLI to dispatch uv-backed tasks.
  4. Project-level Nx targets (dev, test, lint, mypy) that shell out to uv run --package job-api ....

Nothing about this asks Nx to understand Python. Nx dispatches commands; uv owns the Python universe.
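Concretely, the day-to-day loop uses the same entrypoint as every Node project in the repo. A sketch of the commands (assuming the targets described later in this post):

shell
# Everything is dispatched through the one Nx CLI; uv does the Python work.
pnpm nx run job-api:dev                    # uvicorn with --reload, via uv
pnpm nx run job-api:test                   # pytest
pnpm nx run-many -t lint mypy -p job-api   # ruff and mypy together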

uv workspace

pyproject.toml at the monorepo root declares the workspace and nothing else:

pyproject.toml
[project]
name = "danieljoffe-com"
version = "0.0.0"
description = "Workspace root — not a publishable package"
requires-python = ">=3.11"
 
[tool.uv.workspace]
members = ["apps/job-api"]

The service pyproject.toml in apps/job-api/ declares dependencies normally. The single uv.lock lives at the root and locks every Python dependency across every member. uv is fast enough (Rust, same author as Ruff) that uv sync --frozen finishes in a few seconds even in CI.
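For contrast, the member's pyproject.toml is entirely ordinary. Something like this (the dependency list is illustrative, not the service's actual one):

apps/job-api/pyproject.toml (sketch)
[project]
name = "job-api"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi",
    "uvicorn",
]

[dependency-groups]
dev = ["pytest", "ruff", "mypy"]

uv resolves every member's dependencies into the single root uv.lock, so there is exactly one lockfile to cache in CI and copy into Docker.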

Docker from the monorepo root

This is the gotcha. The service's Dockerfile lives in apps/job-api/Dockerfile, but the build context has to be the monorepo root, because the workspace lockfile lives at the root. A build whose context is the service directory simply cannot see uv.lock.

apps/job-api/Dockerfile
FROM python:3.11-slim
 
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
 
WORKDIR /app
 
# Build context is the monorepo root so the workspace lockfile is available.
COPY pyproject.toml uv.lock ./
COPY apps/job-api/pyproject.toml ./apps/job-api/pyproject.toml
 
RUN uv sync --frozen --no-dev --no-editable --package job-api
 
COPY apps/job-api/app ./apps/job-api/app
 
WORKDIR /app/apps/job-api
 
EXPOSE 8000
 
CMD ["uv", "run", "--package", "job-api", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

On Railway, this means setting the service's build context to the repo root and the Dockerfile path to apps/job-api/Dockerfile. Locally, the equivalent is docker build -f apps/job-api/Dockerfile . run from the repo root: the -f flag points at the Dockerfile, and the trailing dot makes the root the build context. I had to relearn this lesson the first time CI ran docker build . inside the service directory and got a uv.lock not found error.
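The full local build-and-smoke-test sequence, for reference (the image tag is my choice, not anything the platform requires):

shell
# From the monorepo root: -f picks the Dockerfile, "." sets the context.
docker build -f apps/job-api/Dockerfile -t job-api .
docker run --rm -p 8000:8000 job-api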

Project targets that pretend to be JavaScript

The project.json exposes Python work as Nx targets so the rest of the monorepo can run everything through the same pnpm nx entrypoint:

apps/job-api/project.json
{
  "name": "job-api",
  "sourceRoot": "apps/job-api",
  "projectType": "application",
  "targets": {
    "dev": {
      "executor": "nx:run-commands",
      "options": {
        "command": "uv run --package job-api uvicorn app.main:app --reload --port 8000",
        "cwd": "apps/job-api"
      }
    },
    "test": {
      "executor": "nx:run-commands",
      "options": {
        "command": "uv run --package job-api pytest -v",
        "cwd": "apps/job-api"
      }
    },
    "lint": {
      "executor": "nx:run-commands",
      "options": {
        "command": "uv run --package job-api ruff check .",
        "cwd": "apps/job-api"
      }
    },
    "mypy": {
      "executor": "nx:run-commands",
      "options": {
        "command": "uv run --package job-api mypy app/",
        "cwd": "apps/job-api"
      }
    }
  }
}

One subtle rename: the Python type-check target is mypy, not typecheck. The @nx/js/typescript plugin auto-infers a typecheck target on every project with a tsconfig. If the service also defined typecheck, the name would collide with the workspace-wide TypeScript typecheck sweep and one would silently overwrite the other. Using mypy sidesteps the conflict.

A dedicated ci-python job

Nx affected cannot see Python changes, so putting the Python work in the main affected pipeline would mean every Python commit looked like a no-op to Nx and skipped the tests. The fix is a parallel GitHub Actions job that always runs when a non-docs PR touches the workspace:

.github/workflows/ci.yml (excerpt)
ci-python:
  needs: preflight
  if: ${{ needs.preflight.outputs.docs-only != 'true' && !inputs.update-snapshots && github.event.pull_request.draft != true }}
  runs-on: ubuntu-latest
  timeout-minutes: 10
  steps:
    - uses: actions/checkout@v5
      with: { filter: tree:0, fetch-depth: 0 }
    - uses: pnpm/action-setup@v5
    - uses: actions/setup-node@v5
      with: { node-version-file: .nvmrc, cache: pnpm }
    - run: pnpm install --frozen-lockfile
    - uses: actions/setup-python@v5
      with: { python-version: '3.12' }
    - uses: astral-sh/setup-uv@v6
    - name: Install Python dependencies
      run: uv sync --frozen
    - name: Run Python tasks
      run: pnpm nx run-many -t lint test mypy --nxBail -p job-api

Two details matter. First, the job installs both pnpm and uv: pnpm to get the Nx CLI, uv to execute the actual Python work dispatched by Nx. Second, pnpm nx run-many -t lint test mypy -p job-api runs the three Python targets in parallel; --nxBail kills the run on the first failure.

The ci-status gate treats ci-python as a required check alongside fast and full. If mypy fails, the PR cannot merge.
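The gate itself is roughly this shape; the job names come from the pipeline described above, but the exact expression here is an assumption, not the repo's literal config:

.github/workflows/ci.yml (excerpt, sketch)
ci-status:
  needs: [fast, full, ci-python]
  if: always()
  runs-on: ubuntu-latest
  steps:
    - name: Fail if any required job failed
      # needs.*.result collects every upstream job's outcome.
      run: |
        if [ "${{ contains(needs.*.result, 'failure') }}" = "true" ]; then
          exit 1
        fi

Because ci-status is the single required branch-protection check, adding ci-python to its needs list is all it takes to make the Python suite merge-blocking.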

The takeaway

Nx does not have to understand every language in the repo. It only has to dispatch work. Give uv the lockfile, let it own dependency resolution, and expose the Python tasks as nx:run-commands targets so the rest of the pipeline speaks one CLI.

The two things that catch you are the Docker build context (has to be the repo root) and the typecheck name collision (rename the Python target). Past those, a Python service in a JS monorepo is boring infrastructure.