# nsc-autoscaler

nsc-autoscaler turns queued Forgejo Actions jobs into short-lived Namespace
runners. It ships two binaries:

- `forgejo-nsc-dispatcher`: accepts explicit dispatch requests and launches
  runners for a Forgejo scope plus label set.
- `forgejo-nsc-autoscaler`: polls Forgejo queues, manages `workflow_job`
  webhooks, and asks the dispatcher for runners when jobs queue up.
## Layout

```
.
├── cmd/forgejo-nsc-dispatcher
├── cmd/forgejo-nsc-autoscaler
├── internal/
├── config.example.yaml
├── autoscaler.example.yaml
└── deploy/
```
## Local use

```shell
cp config.example.yaml config.yaml
cp autoscaler.example.yaml autoscaler.yaml
nix develop
# ensure the bare `nsc` CLI is installed and authenticated, or set
# `namespace.nsc_binary` to an explicit path
go run ./cmd/forgejo-nsc-dispatcher --config config.yaml
go run ./cmd/forgejo-nsc-autoscaler --config autoscaler.yaml
```
## Smoke checks

```shell
./scripts/smoke-local.sh
```

For a live smoke test against a running dispatcher or autoscaler, set the
required environment variables and run:

```shell
DISPATCHER_URL=http://127.0.0.1:8080 ./scripts/smoke-live.sh
```

To exercise a deployed pair from Forgejo itself, use the manual
`workflow_dispatch` inputs on `/.forgejo/workflows/ci.yml`. The `live_smoke`
job is opt-in and reuses the same `scripts/runtime-http-smoke.sh` entrypoint.
## CI

This repo uses Forgejo Actions only. The canonical in-repo CI entrypoint is
`/.forgejo/workflows/ci.yml`, and it intentionally uses the same
`namespace-profile-*` runner contract that this autoscaler maps onto Namespace
profiles. The workflow is script-driven and only uses Forgejo-hosted
`checkout` and `setup-go` actions; Nix bootstrap lives in `scripts/ci/`.
The dispatcher prefers the bare `nsc` CLI as the lifecycle boundary for
Namespace instances. Linux and Windows already run entirely through the CLI,
and macOS now prefers the CLI `create`/`instance upload`/`ssh` flow as well,
with the older Compute path retained only as a fallback if the CLI bootstrap
fails.
The checked-in examples are placeholders you should adapt to your environment:

- Forgejo API on `http://127.0.0.1:3001`
- public runner registration URL `https://forgejo.example.com`
- Namespace label allowlists for Linux, macOS, and Windows
- a user-scoped controller for `conrad`
- an organization-scoped autoscaler controller for `example-org`
The dispatcher also supports optional scope-routed credentials. This lets one dispatcher choose a Forgejo PAT and Namespace auth context per user or organization scope instead of assuming one shared credential set for every request.
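As a purely illustrative sketch of what such a per-scope mapping could look like, the YAML below uses hypothetical key names, not the dispatcher's actual schema; consult `config.example.yaml` for the real shape:

```yaml
# Hypothetical shape only: every key name here is illustrative.
scopes:
  conrad:                            # user scope
    forgejo_token_file: /run/secrets/conrad-pat
    namespace_auth_context: conrad-dev
  example-org:                       # organization scope
    forgejo_token_file: /run/secrets/example-org-pat
    namespace_auth_context: example-org-prod
```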
The autoscaler examples use loopback webhook URLs, which works well when Forgejo and the autoscaler run on the same machine.
## Nix outputs

```shell
nix build .#forgejo-nsc-dispatcher
nix build .#forgejo-nsc-autoscaler
nix build .#container-amd64
```
The container output packages the dispatcher plus the `nsc` CLI for Linux
deployments. The autoscaler is intended to run next to it as a separate
process or systemd unit.
## NixOS module

This flake exports a consumable NixOS module at `nixosModules.default` (also
`nixosModules.forgejo-nsc`). Import it at the host or flake `modules = [ ... ]`
layer, not via a module argument inside another module's `imports`.
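For example, at the flake layer — the input name matches the raw-config snippet in this README, while the flake URL, host name, and system are placeholders:

```nix
{
  # Placeholder URL; point this at wherever the repo is hosted.
  inputs.nsc_autoscaler.url = "git+https://forgejo.example.com/conrad/nsc-autoscaler";

  outputs = { nixpkgs, nsc_autoscaler, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        nsc_autoscaler.nixosModules.default  # imported at the modules layer
        ./configuration.nix                  # host config setting services.forgejo-nsc
      ];
    };
  };
}
```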
For existing deployments that already manage encrypted YAML configs, raw-config mode is the lowest-friction path:
```nix
{
  imports = [ inputs.nsc_autoscaler.nixosModules.default ];

  services.forgejo-nsc = {
    enable = true;
    nscTokenFile = config.age.secrets.nscToken.path;
    dispatcher.configFile = config.age.secrets.dispatcherConfig.path;
    autoscaler = {
      enable = true;
      configFile = config.age.secrets.autoscalerConfig.path;
      allowPending = true;
    };
  };
}
```
For new deployments, the module can also generate dispatcher and autoscaler
config from structured Nix options and will automatically reuse
`services.forgejo.settings.server.LOCAL_ROOT_URL` and `ROOT_URL` when
`services.forgejo.enable = true`.
Operational compatibility options carried over from existing deployments:

- `nscTokenFile`, `nscTokenSpecFile`, and `nscEndpoint` feed the bare `nsc`
  CLI environment.
- the module copies `nscTokenFile` into the service state dir before startup
  so the service user owns the runtime token path.
- `extraEnv` and `extraPath` let host configs extend the runtime boundary.
- `dispatcher.allowPending` and `autoscaler.allowPending` preserve placeholder
  config workflows during staged rollouts.
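As a hedged illustration of the last two groups of options — the value types, the environment variable, and the package shown below are assumptions, not the module's documented schema:

```nix
services.forgejo-nsc = {
  # assumed attrset of strings merged into the service environment
  extraEnv.NSC_LOG_LEVEL = "debug";  # hypothetical variable name
  # assumed list of packages appended to the unit's PATH
  extraPath = [ pkgs.openssh ];
  # tolerate placeholder config while secrets land during a staged rollout
  dispatcher.allowPending = true;
};
```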