To run locally you need Node 18 or 20 available, with `yarn` installed. Typically you'd use a utility like nvm or nodenv to install Node, set that version active, and then use npm to install `yarn`.

There are multiple Node environment managers; see for instance https://www.honeybadger.io/blog/node-environment-managers/ for an overview.
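Before installing anything heavier, a quick sanity check along these lines (a sketch, not a script shipped in this repo) confirms whether an already-present Node is usable:

```shell
# Sketch: check that the active Node major version is 18 or 20.
major_of() {
  # Extract the major version from a string like "v20.18.1"
  printf '%s\n' "$1" | sed 's/^v\{0,1\}\([0-9]*\).*/\1/'
}
current=$(node --version 2>/dev/null || echo "")
major=$(major_of "$current")
case "$major" in
  18|20) echo "Node $major is supported" ;;
  *)     echo "Install Node 18 or 20 first (found: ${current:-none})" ;;
esac
```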
Here is an example with fnm, which is a nice cross-platform option (requires Homebrew on a Mac):

- `curl -fsSL https://fnm.vercel.app/install | bash`
- `fnm list-remote` to show available Node versions
- `fnm install v20.18.1` to install a specific one
- Edit your shell profile, such as `~/.zshrc`, and adjust it if needed - see below for an example with a Brew-installed fnm
- Make `yarn` available: `corepack enable`
- Check both from within this project directory: `node --version` and `yarn --version` (which may trigger the actual download of `yarn`)

```shell
# fnm
export PATH="/opt/homebrew/bin/fnm:$PATH"
eval "`fnm env --use-on-cd --shell zsh`"
```

To start the app, run:
```shell
yarn setup
yarn install
yarn dev
```

- The `yarn setup` step has been added for a bit of developer utility, including shipped templates for `.env` and the `app-config.local.yaml` file Backstage looks for automatically on local-only execution
  - Note: This seems to still have OS-specific trouble at times. The extras can be torn out of `package.json`, with the user just copying the two files manually. The token part is not in use yet. `yarn minisetup` can be used to skip the token and env loading - but be mindful of what will be missing, or adjust as needed for your system
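If the setup step misbehaves on your OS, the manual equivalent is just copying the two files yourself, as sketched below (the template filenames here are assumptions - check the repo for the actual names). The sketch runs in a temp directory so it is safe to try anywhere:

```shell
# Manual stand-in for `yarn setup` (a sketch; template names are assumed).
set -eu
workdir=$(mktemp -d)
cd "$workdir"
# Stand-ins for the shipped templates
printf 'GITHUB_TOKEN=\n' > .env.example
printf 'app:\n  title: Local Backstage\n' > app-config.local.example.yaml

copy_if_missing() {
  # Copy template $1 to target $2 unless the target already exists,
  # mirroring how re-running setup leaves an existing .env alone.
  if [ -e "$2" ]; then
    echo "kept existing $2"
  else
    cp "$1" "$2"
    echo "created $2"
  fi
}

copy_if_missing .env.example .env
copy_if_missing app-config.local.example.yaml app-config.local.yaml
copy_if_missing .env.example .env   # second run: no overwrite
```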
- The `packages/backend/Dockerfile` remains unmodified as of this writing and works - optimization in a multi-layered build can come later. It starts in `/app`, meaning links to local catalog files need to look there, not in `../..` as is done when running from source
  - Instructions for building the Docker image are available within the Dockerfile
- The included `Jenkinsfile` can successfully build and publish the Docker image when paired with a compatible Jenkins setup (targets Google Container Registry, easy to change)
- There is a separate infra repo at https://github.com/CervTest/backstage-infra with another Jenkinsfile that knows how to deploy to a public demo Backstage via Helm
  - This further supports secrets imported via Jenkins credentials and a `cm-app-config.extra.yaml` that is supplied in a Kubernetes context and wired up for the Backstage container without further config or rebuild needed
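As a rough sketch of that wiring (names and contents here are illustrative, not taken from the infra repo), the extra file can ride along as a ConfigMap that the chart mounts into the Backstage pod:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-extra-config
data:
  cm-app-config.extra.yaml: |
    # Extra Backstage configuration layered on top of app-config.yaml
    app:
      title: Demo Backstage
```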
For Backstage, plugins have traditionally been applied directly to source code, while also relying on a Node dependency. As such it has been tricky to "install" a bunch of plugins quickly, as each takes modifying source files, often overlapping. One approach to at least seeing how individual plugins work is making a new branch off `basic`, then installing a single plugin there and nothing else. Later, instance branches (like `main`) that correspond to a given Backstage can be merged into from various plugin branches, with any conflicts resolved at that point.
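The branch-per-plugin flow above can be sketched with plain git (branch and file names here are illustrative; run in a throwaway directory):

```shell
# Sketch of the branch-per-plugin workflow described above.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b basic
git -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "basic Backstage skeleton"

# One branch per plugin, containing only that plugin's changes
git checkout -q -b plugin-todo basic
echo "todo plugin wiring" > plugin-todo.txt
git add plugin-todo.txt
git -c user.email=dev@example.com -c user.name=dev \
  commit -q -m "install todo plugin"

# An instance branch (like main) merges in whichever plugin branches it needs
git checkout -q -b main basic
git -c user.email=dev@example.com -c user.name=dev \
  merge -q --no-edit plugin-todo
ls plugin-todo.txt
```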
While working locally with this approach it is important to keep your config files and .gitignored files in mind, lest you end up with a plugin and a half if, for instance, `app-config.local.yaml` has not been deleted. The `.env` file will also not be overwritten when running `yarn setup` again, but generally won't cause trouble from having a few more variables than needed.
To validate your plugin after following its instructions, run `yarn install` then `yarn tsc`, and consider adjusting the locally included `catalog-info.yaml` with an appropriate annotation to test. You might also want to minimize extra tweaks to whitespace and the like, to leave a clearer overview and make it easier to merge plugin branches around without causing excessive conflicts (make such edits in the relevant base branch instead).
For managing secrets on the GitHub/Jenkins/GCP setup, create the associated credential in Jenkins and then reference it in the backstage-infra repo. There the Jenkinsfile will load the credential and write it into a Kubernetes secrets file locally, which is then deployed to k8s and loaded by Backstage as extra environment variables you can then simply reach normally in config. Note that there is also an additional override config file in the infra repo, much like the local config file when working in a developer workspace.
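Conceptually, that Jenkins step does something like the following (a sketch with illustrative variable and file names; the real Jenkinsfile lives in backstage-infra): render a credential from the environment into a Kubernetes Secret manifest, which the cluster then exposes to the Backstage pod as environment variables.

```shell
# Sketch: turn a Jenkins-provided credential into a Kubernetes Secret manifest.
# Names are illustrative, not taken from the infra repo.
set -eu
workdir=$(mktemp -d)
cd "$workdir"
GITHUB_TOKEN="example-token-value"   # in Jenkins this would come from withCredentials
# Kubernetes Secret data values are base64-encoded
encoded=$(printf '%s' "$GITHUB_TOKEN" | base64)
cat > backstage-secrets.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: backstage-secrets
type: Opaque
data:
  GITHUB_TOKEN: $encoded
EOF
cat backstage-secrets.yaml
```

Once the pod has the variable, Backstage config can reference it with its normal `${GITHUB_TOKEN}`-style environment substitution.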