Commit edb06ab2 authored by 徐豪

init

accessor
accessors
ACLs
Adafruit
Airbnb
Airtable
Akismet
Alertmanager
Algolia
Alibaba
aliuid
Aliyun
allowlist
allowlisted
allowlisting
allowlists
AlmaLinux
AMIs
anonymization
anonymized
Ansible
Anthos
Anycast
apdex
API
APIs
Apparmor
Appetize
approvers
Appsec
architected
architecting
archiver
Arel
arity
Arkose
armhf
ARNs
Artifactory
Asana
Asciidoctor
asdf
Assembla
Astro
async
Atlassian
auditability
auditable
Auth0
authenticator
Authy
autocomplete
autocompleted
autocompletes
autocompleting
autogenerated
autoloaded
autoloader
autoloading
automatable
autoscale
autoscaled
autoscaler
autoscalers
autoscales
autoscaling
autovacuum
awardable
awardables
Axios
Ayoa
AZs
Azure
B-tree
backfilling
backfills
backport
backported
backporting
backports
backtrace
backtraced
backtraces
backtracing
badging
balancer
balancer's
Bamboo
Bazel
bcrypt
Beamer
Bhyve
Bitbucket
Bitnami
Bittrex
blockquote
blockquoted
blockquotes
blockquoting
boolean
booleans
Bootsnap
bot
bot's
Bottlerocket
browsable
bugfix
bugfixed
bugfixes
bugfixing
Bugzilla
Buildah
Buildkite
buildpack
buildpacks
bundler
bundlers
burndown
burnup
burstable
CA
cacheable
Caddy
callout
callouts
callstack
callstacks
camelCase
camelCased
Camo
canonicalization
canonicalized
captcha
CAPTCHAs
Capybara
Casdoor
CDNs
CE
CentOS
Ceph
Certbot
cgo
cgroup
cgroups
chai
changeset
changesets
ChaosKube
chatbot
chatbots
ChatOps
checksummable
checksummed
checksumming
Chemlab
chipset
chipsets
CIDRs
Citrix
Citus
Civo
Cleartext
ClickHouse
CLIs
Clojars
clonable
Cloudwatch
clusterized
CMake
CMK
CMKs
CNAs
CNs
Cobertura
Codeception
Codecov
codenames
Codepen
CodeSandbox
Codey
Cognito
Coinbase
colocate
colocated
colocating
commit's
CommonMark
compilable
composable
composables
Conda
config
Configs
Consul
Contentful
Corosync
corpuses
Cosign
Coursier
CPU
CPUs
CRAN
CRI-O
cron
crond
cronjob
cronjobs
crons
crontab
crontabs
crosslinked
crosslinking
crosslinks
Crossplane
Crowdin
crypto
CSSComb
CSV
CSVs
CTAs
CTEs
CUnit
customappsso
CVEs
CWEs
cybersecurity
CycloneDX
Dangerfile
DAST
Database Lab Engine
Database Lab
Databricks
Datadog
datasource
datasources
datastore
datastores
datestamp
datetime
DBeaver
Debian
debloating
decodable
Decompressor
decryptable
dedupe
deduplicate
deduplicated
deduplicates
deduplicating
deduplication
delegators
deliverables
denormalization
denormalize
denormalized
denormalizes
denormalizing
dentry
denylist
denylisted
denylisting
denylists
Depesz
deployer
deployers
deprovision
deprovisioned
deprovisioning
deprovisions
dequarantine
dequarantined
dequarantining
deserialization
deserialize
deserializers
deserializes
desugar
desugars
desynchronized
Dev
devfile
devfiles
DevOps
Dhall
dialogs
Diffblue
disambiguates
discoverability
dismissable
Disqus
Distroless
Divio
DLE
DNs
Docker
Dockerfile
Dockerfiles
Dockerize
Dockerized
Dockerizing
Docusaurus
dogfood
dogfooding
dogfoods
DOMPurify
dotenv
doublestar
downvoted
downvotes
Dpl
dput
Dreamweaver
DRIs
DSLs
DSN
Dynatrace
Ecto
eden
EGit
ElastiCache
Elasticsearch
Eleventy
enablement
Encrypt
enqueued
enqueues
enricher
enrichers
enum
enums
Enviroments
ESLint
ESXi
ETag
ETags
Etsy
Excon
exfiltrate
exfiltration
ExifTool
expirable
Facebook
failover
failovers
failsafe
Falco
falsy
Fanout
Fargate
fastlane
Fastly
Fastzip
favicon
favorited
Fediverse
ffaker
Figma
Filebeat
Filestore
Finicity
Finnhub
Fio
firewalled
firewalling
fixup
flamegraph
flamegraphs
Flawfinder
Flickr
Fluentd
Flutterwave
Flycheck
focusable
Forgerock
formatters
Fortanix
Fortinet
FQDNs
FreshBooks
frontend
Fugit
Fulcio
fuzzer
fuzzing
Gantt
Gbps
Gemfile
Gemnasium
Gemojione
Getter
Getters
gettext
GIDs
gists
Git
Gitaly
Gitea
GitHub
GitLab
gitlabsos
Gitleaks
Gitpod
Gitter
GLab
globals
globbing
globstar
globstars
Gmail
Godep
Golang
Gollum
Google
goroutine
goroutines
Gosec
GPUs
Gradle
Grafana
Grafonnet
gravatar
Grype
GUIs
Gzip
Hackathon
Haml
HAProxy
HAR
hardcode
hardcoded
hardcodes
HashiCorp
Haswell
heatmap
heatmaps
Helm
Helmfile
Heroku
Herokuish
heuristical
hexdigest
Hexo
HipChat
hostname
hostnames
hotfix
hotfixed
hotfixes
hotfixing
hotspots
HTMLHint
http
https
hyperparameter
hyperparameters
iCalendar
iCloud
idempotence
idmapper
Iglu
IIFEs
Immer
inclusivity
inflector
inflectors
Ingress
initializer
initializers
injective
innersource
innersourcing
inodes
Instrumentor
interdependencies
interdependency
interruptible
inviter
IPs
IPython
irker
issuables
Istio
Jaeger
jasmine-jquery
Javafuzz
JavaScript
Jenkins
Jenkinsfile
Jira
Jitsu
jq
jQuery
JRuby
JSDoc
jsdom
Jsonnet
JUnit
JupyterHub
JWT
JWTs
Kaminari
kanban
kanbans
kaniko
Karma
KCachegrind
Kerberos
Keycloak
keyless
keyset
keyspace
keystore
keytab
keytabs
Kibana
Kinesis
Klar
Knative
KPIs
Kramdown
Kroki
kubeconfig
Kubecost
kubectl
Kubernetes
Kubesec
Kucoin
Kustomize
Kustomization
kwargs
Laravel
LaunchDarkly
ldapsearch
Lefthook
Leiningen
Lemmy
LLM
LLMs
libFuzzer
Libgcrypt
Libravatar
liveness
lockfile
lockfiles
Lodash
Lograge
logrotate
Logrus
Logstash
lookahead
lookaheads
lookbehind
lookbehinds
Lookbook
lookups
loopback
LSP
Lua
Lucene
Lucidchart
macOS
Mailchimp
Maildir
Mailgun
Mailroom
Makefile
Makefiles
malloc
Maniphest
Markdown
markdownlint
Marketo
matcher
matchers
Matomo
Mattermost
mbox
memoization
memoize
memoized
memoizes
memoizing
Memorystore
mergeability
mergeable
metaprogramming
metric's
microformat
Microsoft
middleware
middlewares
migratable
migratus
minikube
MinIO
misconfiguration
misconfigurations
misconfigure
misconfigured
misconfigures
misconfiguring
mitigations
mitmproxy
mixin
mixins
MLflow
Mmap
mockup
mockups
ModSecurity
Monokai
monorepo
monorepos
monospace
MRs
MSBuild
multiline
mutex
nameserver
nameservers
namespace
namespace's
namespaced
namespaces
namespacing
namespacings
Nanoc
NAT
navigations
negatable
Neovim
Netlify
NGINX
ngrok
njsscan
Nokogiri
nosniff
noteable
noteables
npm
NuGet
nullability
nullable
Nurtch
NVMe
nyc
OAuth
OCP
Octokit
offboarded
offboarding
offboards
OIDs
OKRs
Okta
OLM
OmniAuth
onboarding
OpenID
OpenShift
OpenTelemetry
Opsgenie
Opstrace
ORMs
OS
osquery
OSs
OTel
outdent
Overcommit
Packagist
packfile
packfiles
Packwerk
paginator
parallelization
parallelizations
parsable
PascalCase
PascalCased
passthrough
passthroughs
passwordless
Patroni
PDFs
performant
PgBouncer
pgFormatter
pgLoader
pgMustard
pgvector
Phabricator
phaser
phasers
phpenv
Phorge
PHPUnit
PIDs
pipenv
Pipfile
Pipfiles
Piwik
plaintext
podman
Poedit
polyfill
polyfills
pooler
postfixed
Postgres
postgres.ai
PostgreSQL
Praefect's
prebuild
prebuilds
precompile
precompiled
preconfigure
preconfigured
preconfigures
prefetch
prefetching
prefill
prefilled
prefilling
prefills
preload
preloaded
preloading
preloads
prepend
prepended
prepending
prepends
prepopulate
prepopulated
presentationals
Prettifier
Pritaly
Priyanka
profiler
Prometheus
ProseMirror
protobuf
protobufs
proxied
proxies
proxyable
proxying
pseudocode
pseudonymization
pseudonymized
pseudonymizer
Pulumi
Puma
Pumble
PyPI
pytest
Python
Qualys
queryable
Quicktime
Rackspace
railties
Raspbian
rbenv
rbspy
rbtrace
Rclone
Rdoc
reachability
Realplayer
reauthenticate
reauthenticated
reauthenticates
reauthenticating
rebalancing
rebar
rebase
rebased
rebases
rebasing
rebinding
reCAPTCHA
recoverability
Redcarpet
redirection
redirections
Redis
Redmine
refactorings
referer
referers
reflog
reflogs
refname
refspec
refspecs
regexes
Rego
reimplementation
reimplemented
reindex
reindexed
reindexes
reindexing
reinitialize
reinitializing
Rekor
relicensing
remediations
renderers
renderless
replicables
repmgr
repmgrd
reposts
repurposing
requestee
requesters
requeue
requeued
requeues
requeuing
resolver
resolver's
Restlet
resync
resynced
resyncing
resyncs
retarget
retargeted
retargeting
retargets
reusability
reverified
reverifies
reverify
reviewee
RIs
roadmap
roadmaps
rock
rollout
rollouts
routable
RPCs
RSpec
rsync
rsynced
rsyncing
rsyncs
Rubinius
Rubix
RuboCop
Rubular
RubyGems
Rugged
ruleset
rulesets
runbook
runbooks
runit
runtime
runtimes
Salesforce
sandboxing
sanitization
SBOMs
sbt
SBT
scalar's
scalers
scatterplot
scatterplots
schedulable
Schemastore
scriptable
scrollable
SDKs
segmentations
SELinux
Semgrep
Sendbird
Sendinblue
Sendmail
Sentry
serializer
serializers
serializing
serverless
setuptools
severities
SFCs
sharded
sharding
SHAs
shfmt
Shippo
Shopify
Sidekiq
Sigstore
Silverlight
Sisense
Sitespeed
skippable
skopeo
Slack
Slackbot
SLAs
SLIs
Slony
SLOs
smartcard
smartcards
snake_case
snake_cased
snapshotting
Snowplow
Snyk
Sobelow
Solargraph
Solarized
Sourcegraph
Spamcheck
spammable
sparkline
sparklines
Speedscope
spidering
Splunk
SpotBugs
Squarespace
SREs
SSDs
SSGs
Stackdriver
Stackprof
starrer
starrers
storable
storages
strace
strikethrough
strikethroughs
stunnel
stylelint
subchart
subcharts
subcommand
subcommands
subcomponent
subfolder
subfolders
subgraph
subgraphs
subgroup
subgroups
subkey
subkeys
sublicense
sublicensed
sublicenses
sublicensing
submodule
submodule's
subnet
subnets
subnetting
subpath
subproject
subprojects
subqueried
subqueries
subquery
subquerying
Subreddit
substring
substrings
subtask
subtasks
subtest
subtests
subtransaction
subtransactions
subtree
subtrees
sudo
sunsetting
supercookie
supercookies
supergroup
supergroups
superset
supersets
supertype
supertypes
SVGs
swappiness
swimlane
swimlanes
syncable
Sysbench
syscall
syscalls
syslog
systemd
tablespace
tablespaces
Tamland
tanuki
taskscaler
tcpdump
teardown
templated
Thanos
thoughtbot
throughputs
Tiller
timebox
timeboxed
timeboxes
timeboxing
timecop
timelog
timelogs
Tiptap
todos
tokenizer
Tokenizers
tokenizing
tolerations
toolchain
toolchains
toolkit
toolkits
toolset
tooltip
tooltips
transactionally
transpile
transpiled
transpiles
transpiling
Trello
Trendline
triaged
triages
triaging
Trivy
Truststore
truthy
Twilio
Twitter
Typeform
TypeScript
TZInfo
Ubuntu
Udemy
UI
UIDs
unapplied
unapprove
unapproved
unapproving
unarchive
unarchived
unarchives
unarchiving
unary
unassign
unassigning
unassigns
unban
unbans
uncached
uncheck
unchecked
unchecking
unchecks
uncomment
uncommented
uncommenting
uncordon
underperforming
unencode
unencoded
unencoder
unencodes
unencrypted
unescaped
unfollow
unfollowed
unfollows
Unicorn
unindexed
unlink
unlinking
unlinks
unmappable
unmapped
unmergeable
unmerged
unmerges
unmerging
unmocked
unoptimize
unoptimized
unoptimizes
unoptimizing
unparsable
unpatched
unpause
unprioritized
unprotect
unprotected
unprotecting
unprotects
unprovision
unprovisioned
unprovisions
unpublish
unpublished
unpublishes
unpublishing
unpullable
unpushed
unreferenced
unregister
unregistered
unregisters
unreplicated
unresolve
unresolved
unresolving
unreviewed
unrevoke
unsanitized
unschedule
unscoped
unsetting
unshare
unshared
unshares
unstage
unstaged
unstages
unstaging
unstar
unstars
unstarted
unstash
unstashed
unstashing
unsynced
unsynchronized
untarred
untracked
untrusted
unverified
unverifies
unverify
unverifying
uploader
uploaders
upstreams
upvote
upvoted
upvotes
urgencies
URIs
URL
UUIDs
Vagrantfile
validator
validators
vCPUs
vendored
vendoring
versionless
viewport
viewports
virtualized
virtualizing
Vite
VMs
VPCs
VSCodium
Vue
Vuex
waitlist
walkthrough
walkthroughs
WebdriverIO
Webex
webpack
WEBrick
webserver
Webservice
websocket
websockets
whitepaper
whitepapers
wireframe
wireframed
wireframes
wireframing
Wireshark
Wordpress
Workato
workstream
worktree
worktrees
Worldline
Xcode
Xeon
XPath
Yandex
YouTrack
ytt
Yubico
Zabbix
ZAProxy
Zeitwerk
Zendesk
ZenTao
Zoekt
zsh
Zstandard
Zuora
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Omnibus GitLab architecture and components
Omnibus GitLab is a customized fork of the Omnibus project from Chef, and it
uses Chef components like cookbooks and recipes to configure GitLab on a
user's computer. The [Omnibus GitLab repository on GitLab.com](https://gitlab.com/gitlab-org/omnibus-gitlab)
hosts all the necessary components of Omnibus GitLab. These include the parts of
Omnibus required to build the package, like configurations and project
metadata, and the Chef-related components that are used on a user's computer
after installation.
![Omnibus-GitLab Components](components.png)
An in-depth video walkthrough of these components is available
[on YouTube](https://youtu.be/m89NHLhTMj4?t=807).
## Software definitions
### GitLab project definition file
A primary component of the omnibus architecture is a project definition file
that lists the project details and dependency relations to external software
and libraries.
The main components of this [project definition file](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/projects/gitlab.rb)
are:
- Project metadata: Includes attributes such as the project's name and description.
- License details of the project.
- Dependency list: List of external tools and software which are required to
build or run GitLab, and sometimes their metadata.
- Global configuration variables used for installation of GitLab: Includes the
installation directory, system user, and system group.
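To illustrate the shape of this file, the following abridged sketch is written in the omnibus project DSL. It is only an illustration: the maintainer string and the dependencies shown are a small, hand-picked sample, so refer to the linked project definition file for the real content.
```ruby
# Abridged, illustrative sketch of a project definition.
name 'gitlab'
maintainer 'GitLab Inc. <support@gitlab.com>'
homepage 'https://about.gitlab.com/'
# Global configuration used during installation.
install_dir '/opt/gitlab'
# A small sample of the external software bundled into the package.
dependency 'postgresql'
dependency 'nginx'
dependency 'gitlab-rails'
```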
### Individual software definitions
Omnibus GitLab follows a batteries-included style of distribution. All of the
software, libraries, and binaries necessary for the proper functioning of
a GitLab instance are provided as part of the package, in an embedded format.
So another one of the major components of the omnibus architecture is the
[software definitions and configurations](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/config/software).
A typical software configuration consists of the following parts:
- Version of the software required.
- License of the software.
- Dependencies for the software to be built/run.
- Commands needed to build the software and embed it inside the package.
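As an illustration, a software definition written in the omnibus software DSL has roughly the following shape. This is a sketch only: `example-lib`, its version, URL, and checksum are placeholders, not an actual dependency shipped by Omnibus GitLab.
```ruby
# Illustrative sketch only; 'example-lib' is a placeholder component.
name 'example-lib'
default_version '1.2.3'
license 'MIT'
license_file 'LICENSE'
# Software that must be built before this component.
dependency 'zlib'
# Where the source tarball is fetched from (or served from the artifact cache).
source url: "https://example.com/example-lib-#{version}.tar.gz",
       sha256: '0000000000000000000000000000000000000000000000000000000000000000'
relative_path "example-lib-#{version}"
build do
  env = with_standard_compiler_flags(with_embedded_path)
  # Install into the embedded directory so the files ship with the package.
  command "./configure --prefix=#{install_dir}/embedded", env: env
  make "install -j #{workers}", env: env
end
```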
Sometimes, a software's source code may have to be patched to use it with GitLab.
This may be to fix a security vulnerability, add some functionality needed for
GitLab, or make it work with other components of GitLab. For this purpose,
Omnibus GitLab consists of a [patch directory](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/config/patches),
where patches for different software are stored.
For more extensive changes, it may be more convenient to track the required
changes in a branch on the mirror. The pattern to follow for this is to create a
branch from an upstream tag or SHA, referencing that branch point in the
name of the branch. As an example, from the omnibus codebase, `gitlab-omnibus-v5.6.10`
is based on the `v5.6.10` tag of the upstream project. This allows us to
generate a comparison link like `https://gitlab.com/gitlab-org/omnibus/compare/v5.6.10...gitlab-omnibus-v5.6.10`
to identify what local changes are present.
## Global GitLab configuration template
Omnibus GitLab ships with a single configuration file that can be used to
configure every part of the GitLab instance that will be installed on
the user's computer. This configuration file acts as the canonical source of all
configuration settings that will be applied to the GitLab instance. It lists the
general settings for a GitLab instance as well as various options for different
components. The common structure of this file consists of configurations
specified in the format `<component>['<setting>'] = <value>`. All the available
options are listed in the [configuration template](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template),
but all except the ones necessary for the basic working of GitLab are commented out
by default. Users may uncomment them and specify corresponding values, if
necessary.
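For example, a few uncommented settings in `/etc/gitlab/gitlab.rb` might look like the following; the values shown are only illustrative.
```ruby
# /etc/gitlab/gitlab.rb -- illustrative values only.
external_url 'https://gitlab.example.com'
gitlab_rails['time_zone'] = 'UTC'
nginx['listen_port'] = 443
puma['worker_processes'] = 4
```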
## GitLab Cookbook
Omnibus GitLab, as previously described, uses many of the Chef components like
cookbooks, attributes, and resources. GitLab EE uses a separate cookbook that
extends from the one GitLab CE uses and adds the EE-only components. The major
players in the Chef-related part of Omnibus GitLab are the following:
### Default Attributes
[Default attributes](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/attributes/default.rb),
as the name suggests, specify the default values for the different settings
provided in the configuration file. These values act as a fail-safe and are used
if the user doesn't provide a value for a setting, thus ensuring a working
GitLab instance with minimal user tweaking.
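To illustrate the style of this attributes file (the keys shown are examples and may differ from the current file):
```ruby
# Illustrative excerpt in the style of attributes/default.rb.
# A nil default means the value is computed at reconfigure time unless the
# administrator overrides it in gitlab.rb.
default['gitlab']['gitlab-rails']['enable'] = true
default['gitlab']['gitlab-rails']['dir'] = '/var/opt/gitlab/gitlab-rails'
default['gitlab']['gitlab-rails']['internal_api_url'] = nil
```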
### Recipes
[Recipes](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/files/gitlab-cookbooks/gitlab/recipes)
do most of the heavy lifting while installing GitLab using the omnibus package, as
they are responsible for setting up each component of the GitLab ecosystem on a
user's computer. They create necessary files, directories, and links in their
corresponding locations, set their permissions and owners, configure, start, and
stop necessary services, and notify those services when the files they depend on
change. A master recipe, named `default`, acts as the entry point and
invokes all other necessary recipes for various components and services.
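A minimal, hypothetical recipe sketch shows the typical pattern; `example-service` and its attributes do not exist in the real cookbooks.
```ruby
# Hypothetical recipe sketch; 'example-service' is not a real component.
working_dir = node['gitlab']['example-service']['dir']
directory working_dir do
  owner 'git'
  mode '0700'
  recursive true
end
# Render the component's configuration file and restart its service on change.
template File.join(working_dir, 'config.yml') do
  source 'example-config.yml.erb'
  owner 'git'
  variables(node['gitlab']['example-service'].to_hash)
  notifies :restart, 'runit_service[example-service]'
end
```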
### Custom Resources
[Custom Resources](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/files/gitlab-cookbooks/gitlab/resources)
can be considered as global-level macros that are available across recipes. Some
common uses for Custom Resources are defining the ports used for common services, and
listing important directories that may be used by different recipes. They define
resources that may be reused by different recipes.
### Templates for configuration of components
As mentioned earlier, Omnibus GitLab provides a single configuration file to
tweak all components of a GitLab instance. However, the architectural design of
different components may require them to have individual configuration files
residing at specific locations. These configuration files have to be generated
from either the values specified by the user in the general configuration file or
from the default values specified. Hence, Omnibus GitLab ships with
[templates of such configuration files](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/files/gitlab-cookbooks/gitlab/templates)
with placeholders that may be filled by default values or values from the user. The
recipes complete these templates by filling in the placeholders and placing the
files at the necessary locations.
### General library methods
Omnibus GitLab also ships some [library methods](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/files/gitlab-cookbooks/gitlab/libraries)
that exist primarily for code reuse. These include methods to check if
services are up and running, methods to check if files exist, and helper methods
to interact with different components. They're often used in Chef recipes.
Of all the libraries used in Omnibus GitLab, there are some special ones: the
primary GitLab module and all the component-specific libraries that it invokes.
The component-specific libraries contain methods that do the job of parsing the
configuration file for settings defined for their corresponding components. The
primary GitLab module contains methods that coordinate this. It is responsible
for identifying default values, invoking component-specific libraries, merging
the default values and user-specified values, validating them, and generating
additional configurations based on their initial values. Every top-level
component shipped by the Omnibus GitLab package gets added to this module so
that it can be mentioned in the configuration file and default attributes and be
parsed correctly.
### runit
GitLab uses [runit](https://smarden.org/runit/) recipes for
service management and supervision. runit recipes do the job of identifying the
init system used by the OS and performing basic service management tasks like
creating necessary service files for GitLab, service enabling, and service
reloading. runit provides `runit_service` definitions that can be used by other
recipes to interact with services. See [`/files/gitlab-cookbooks/runit`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/files/gitlab-cookbooks/runit)
for more information.
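For illustration, a recipe might declare a supervised service as follows; the service name and options are hypothetical.
```ruby
# Hypothetical example of declaring a runit-supervised service from a recipe.
runit_service 'example-service' do
  options({
    log_directory: '/var/log/gitlab/example-service'
  })
end
```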
### Services
Services are software processes that we run using the runit process
init/supervisor. You can check their status, start, stop, and restart
them using the `gitlab-ctl` commands. Recipes may also disable or enable these
services based on their process group and the settings/roles that have been
configured for the instance of GitLab. The list of services and the service
groups associated with them can be found in
[`files/gitlab-cookbooks/package/libraries/config/services.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/package/libraries/config/services.rb).
## Additional `gitlab-ctl` commands
Omnibus, by default, provides some wrapper commands like `gitlab-ctl reconfigure`
and `gitlab-ctl restart` to manage the GitLab instance. There are some
additional [wrapper commands](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/files/gitlab-ctl-commands)
that target some specific use cases defined in the Omnibus GitLab repository.
These commands are used with the general `gitlab-ctl` command to perform
actions like running database migrations, removing dormant accounts, and similar
less common tasks.
## Tests
The Omnibus GitLab repository uses ChefSpec to [test the cookbooks and recipes](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/spec/) it ships. The usual strategy is to check a recipe to see if it behaves correctly in two (or more) conditions: when the user doesn't specify any corresponding configuration (that is, when defaults are used) and when user-specified configuration is used. Tests may include checking if files are generated in the correct locations, services are started/stopped/notified, correct binaries are invoked, and correct parameters are being passed to method invocations. Recipes and library methods have tests associated with them. Omnibus GitLab also uses some support methods or macros to help in the testing process. The tests are defined to be compatible with parallelization, where possible, to decrease the time required to run the entire test suite.
So, of the components described above, some (such as software definitions, project metadata, and tests) find use during the package building, in a build environment, and some (such as Chef cookbooks and recipes, GitLab configuration file, runit, and `gitlab-ctl` commands) are used to configure the user's installed instance.
## Work life cycle of Omnibus GitLab
### What happens during package building
The type of package being built depends on the OS the build process is run on. If the build is done in a Debian environment, a `.deb` package is created. What happens during package building can be summarized in the following steps:
1. Fetching sources of dependency software:
1. Parsing software definitions to find out corresponding versions.
1. Getting source code from remotes or cache.
1. Building individual software components:
1. Setting up necessary environment variables and flags.
1. Applying patches, if applicable.
1. Performing the build and installation of the component, which involves installing it to an appropriate location (inside `/opt/gitlab`).
1. Generating license information for all bundled components, including external software, Ruby gems, and JS modules. This involves analyzing the definition of each dependency as well as any additional licensing document provided by the components (like the `licenses.csv` file provided by GitLab Rails).
1. Checking the license of the components to make sure we are not shipping a component with an incompatible license.
1. Running a health check on the package to make sure the binaries are linked against available libraries. For bundled libraries, the binaries should link against them and not the ones available globally.
1. Building the package with contents of `/opt/gitlab`. This makes use of the metadata given inside [`gitlab.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/projects/gitlab.rb) file. This includes the package name, version, maintainer, homepage, and information regarding conflicts with other packages.
#### Caching
Omnibus uses two types of cache to optimize the build process: one to store the software artifacts (sources of dependent software), and one to store the project tree after each software component is built.
##### Software artifact cache (for GitLab Inc builds)
The software artifact cache uses an Amazon S3 bucket to store the sources of the dependent software. In our build process, this cache is populated using the command `bin/omnibus cache populate`. This pulls in all the necessary software sources from the Amazon bucket and stores them in the necessary locations. When there is a change in the version requirement of a piece of software, omnibus pulls it from the original upstream and adds it to the artifact cache. This process is internal to omnibus, and the Amazon bucket to use is configured in the [omnibus.rb](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/omnibus.rb) file available in the root of the repository. This cache ensures the availability of the dependent software even if the original upstream remotes go down.
##### Build cache
A second type of cache that plays an important role in our build process is the build cache. The build cache can be described as snapshots of the project tree (where the project gets built - `/opt/gitlab`) after each dependent piece of software is built. Consider a project with five dependent pieces of software - A, B, C, D, and E, built in that order (we're not considering their own dependencies). The build cache makes use of Git tags to make snapshots: after each piece of software is built, a Git tag is computed and committed. Now, consider we made some change to the definition of software D, while A, B, C, and E remain the same. When we try to build again, omnibus can reuse the snapshot that was made before D was built in the previous build. Thus, the time taken to build A, B, and C can be saved, as omnibus can simply check out the snapshot that was made after C was built. Omnibus uses the snapshot made just before the software which "dirtied" the cache was built (dirtying can happen either by a change in the software definition, a change in the name/version of a previous component, or a change in the version of the current component). Similarly, if in a build there is a change in the definition of software A, it dirties the cache and hence A and all the following dependencies get built from scratch. If C dirties the cache, A and B get reused and C, D, and E get built again from scratch.
This cache makes sense only if it is retained across builds. For that, we use the caching mechanism of GitLab CI. We have a dedicated runner that is configured to store its internal cache in an Amazon bucket. Before each build, we pull in this cache (the `restore_cache_bundle` target in our Makefile), move it to an appropriate location, and start the build. Omnibus uses it up to the point where the cache gets dirtied. After the build, we pack the new cache and tell CI to back it up to the Amazon bucket (the `pack_cache_bundle` target in our Makefile).
Both types of cache reduce the overall build time of GitLab and dependencies on external factors.
The cache mechanism can be summarized as follows:
1. For each software dependency:
1. Parse definition to understand version and SHA256.
1. If the source file tarball available in the artifact cache in the Amazon bucket matches the version and SHA256, use it.
1. Else, download the correct tarball from the upstream remote.
1. Get the cache from the CI cache.
1. For each software dependency:
1. If a cache has been dirtied, break the loop.
1. Else, check out the snapshot.
1. If there are remaining dependencies:
1. For each remaining dependency:
1. Build the dependency.
1. Create a snapshot and commit it.
1. Push back the new build cache to the CI cache.
## Multiple databases
Previously, the GitLab Rails application was the sole client connected to the
Omnibus GitLab database. Over time, this has changed:
- Praefect and Container Registry use their own databases.
- The Rails application now uses a [decomposed database](https://gitlab.com/groups/gitlab-org/-/epics/5883).
Because additional databases might be necessary:
- The [multi-database blueprint](multiple_database_support/index.md) explains
how to add database support to Omnibus GitLab for new components and features.
- The [accompanying development document](../development/database_support.md)
details the implementation model and provides examples of adding database
support.
---
status: proposed
creation-date: "2023-10-02"
authors: [ "@pursultani" ]
approvers: [ "@product-manager", "@engineering-manager" ]
owning-stage: "~devops::systems"
participating-stages: []
---
# Multiple databases support
## Summary
This document explains how to support a component with one or more databases. It
describes different levels of support and offers an implementation model for
each level to overcome the challenges of the [recommended deployment models](https://docs.gitlab.com/ee/administration/reference_architectures/).
The [architecture page](../index.md#multiple-databases) provides some
background on this subject.
A [development document](../../development/database_support.md) accompanies this
blueprint. It details the implementation model and provides a few examples.
## Goals
- Offer [higher levels of support](#levels-of-support) for current and new
components with database requirements.
- Implementation refactors maintain the current configuration options
already present in `gitlab.rb`.
- Minimize breaking changes and refactors in database code with a consistent,
testable, and extensible implementation model.
- Migrate code to the newer implementation method.
## Proposal
### Terminology
|Term|Definition|
|-|-|
|Database|A _logical_ database that a component, such as the Rails application, uses. For example, `gitlabhq_production`. A component can have more than one database.|
|Database server| A _standalone process_ or a _cluster_ that provides PostgreSQL database service. Not to be confused with database objects or data.|
|Database objects| Anything that is created with Data Definition Language (DDL), such as `DATABASE`, `SCHEMA`, `ROLE`, or `FUNCTION`. It may include reference data or indices as well. These are partially created by Omnibus GitLab and the rest are created by application-specific _database migrations_.|
|Standalone database server| A single PostgreSQL database server. It can be accessed through a PgBouncer instance.|
|Database server cluster|Encompasses multiple PostgreSQL database servers, managed by Patroni services, backed by a Consul cluster, accessible by using one or more PgBouncer instances, and may include an HAProxy (in TCP mode) as a frontend.|
### Levels of support
There are different levels of database support for Omnibus GitLab components.
Higher levels indicate more integration into Omnibus GitLab.
#### Level 1
Configure the component with user-provided parameters from `gitlab.rb` to work
with the database server. For example, `database.yml` is rendered with database
server connection details of the Rails application or database parameters of
Container Registry are passed to its `config.yml`.
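To illustrate, at this level the administrator only supplies connection details in `gitlab.rb`; a few illustrative settings for the Rails application are shown below (the values are placeholders).
```ruby
# Illustrative gitlab.rb settings that are rendered into the Rails database.yml.
gitlab_rails['db_host'] = '10.0.0.10'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = 'secret-password'
```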
#### Level 2
Create database objects and run migrations of the component. Full support at
this level requires Omnibus GitLab to not only create the required database
objects, such as `DATABASE` and `ROLE`, but also to run the application
migrations for the component.
#### Level 3
Static configuration of PgBouncer. At this level, Omnibus GitLab can create a
_dedicated PgBouncer user_ for the component and configure it with user-provided
(from `gitlab.rb`) or application-mandated connection settings.
This is not specific to clustered database server setups, but it is a requirement
for them. There are scenarios where PgBouncer is configured with a standalone
database server. However, all clustered database server setups depend on
PgBouncer configuration.
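A minimal illustration of such static PgBouncer configuration in `gitlab.rb` follows; the host, user, and password are placeholders.
```ruby
# Illustrative gitlab.rb settings on a PgBouncer node fronting the Rails database.
pgbouncer['enable'] = true
pgbouncer['databases'] = {
  gitlabhq_production: {
    host: '10.0.0.10',
    user: 'pgbouncer',
    password: '<md5-password-hash>'
  }
}
```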
#### Level 4
Configuration of database server cluster in high-availability (HA) mode. At this
level, Omnibus GitLab supports various deployment models, ranging from _one
cluster for all databases_ to _one cluster per database_.
Therefore the HA configuration of logical databases must be independent of the
deployment model.
Consul [services](https://developer.hashicorp.com/consul/docs/services/configuration/services-configuration-reference)
can have multiple health-checks and [watches](https://developer.hashicorp.com/consul/docs/dynamic-app-config/watches#service).
At this level, Omnibus GitLab defines _a Consul service per database cluster_
and _a service watch per logical database_.
Omnibus GitLab configures [Patroni to register a Consul service](https://patroni.readthedocs.io/en/latest/yaml_configuration.html#consul).
The name of the service is the scope parameter, and its tag is the role of the
node, which can be one of `master`, `primary`, `replica`, or `standby-leader`. It
uses this service name, which is the same as the scope of the Patroni cluster, to
address a database cluster and associate it with any logical database that the
cluster serves.
This is done with Consul watches that track Patroni services. They find cluster
leaders and notify PgBouncer with the details of both the database cluster and
the logical database.
#### Level 5
Automated or assisted transition from previous deployment models. Not all
components require this level of support but, in cases where a previously recommended
but now deprecated database configuration is in use, Omnibus GitLab may provide
specialized tools or procedures to allow transitioning to the new database
model. In most cases, this is not supported unless specified.
### Design overview
Each component manages every aspect of its own database requirements, _except
its database users_. This means that component-specific implementations of database
operations are done in the specific cookbooks of each component. For example,
Rails or Registry database requirements are exclusively addressed in the `gitlab`
and `registry` cookbooks and not in the `postgresql`, `pgbouncer`, or `patroni`
cookbooks.
The database users are excluded because only `SUPERUSER` or users with the `CREATEROLE`
privilege can create PostgreSQL users. Due to security considerations, we do not
grant this privilege to users that connect over a TCP connection. So
components that may connect to a remote database do not have the permission to
create their users.
Hence each component creates its own database objects, _except its database user_.
The `postgresql` and `patroni` cookbooks create the database users, but each component
creates the rest of its database objects. The database users must have the `CREATEDB`
privilege to allow components to create their own `DATABASE` and trusted `EXTENSION` objects.
To impose a structure and fix some of the shortcomings of this approach, such as
locality and limited reusability, we use the [Chef resource model](https://docs.chef.io/resources/)
and leverage [custom resources](https://docs.chef.io/custom_resources/) for
database configuration and operations, including:
- Manage lifecycle of component-specific database objects
- Run application-specific database migrations
- Set up PgBouncer to serve the application
- Set up Consul watches to track Patroni clusters
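The following sketch is purely hypothetical and only illustrates the intent of the proposal; neither resource shown exists in Omnibus GitLab today.
```ruby
# Hypothetical sketch: custom resources a component cookbook could use to
# declare its database needs instead of spreading them across recipe code.
registry_database 'gitlabhq_registry' do
  owner 'registry'
  action :create
end
registry_database_migration 'registry' do
  dependent_services ['runit_service[registry]']
  action :run
end
```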
Cross-cutting concerns such as [central on/off switch for auto-migration](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/7716),
logging control, and [pre-flight checks](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5428)
are addressed with [helper classes](https://docs.chef.io/helpers/) that are
available to all components. The `package` cookbook is a suitable place for
these helpers.
Helper classes also provide a place to translate the existing user configuration
model (in `gitlab.rb`) to the new model needed for management of
multiple databases.
### Implementation details
[Development document](../../development/database_support.md) provides
implementation details and concrete examples for the proposed design.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Build a GitLab Docker image locally
The GitLab Docker image uses the Ubuntu 22.04 package created by
`omnibus-gitlab`. Most of the files needed for building a Docker image
are in the `docker` directory of the `omnibus-gitlab` repository.
The `RELEASE` file is not in this directory, and you must create this file.
## Create the `RELEASE` file
The version details of the package being used are stored in the `RELEASE` file.
To build your own Docker image, create this file with contents similar to the following.
```plaintext
RELEASE_PACKAGE=gitlab-ee
RELEASE_VERSION=13.2.0-ee
DOWNLOAD_URL=https://example.com/gitlab-ee_13.2.0-ee.0_amd64.deb
```
- `RELEASE_PACKAGE` specifies whether the package is a CE one or EE one.
- `RELEASE_VERSION` specifies the version of the package, for example `13.2.0-ee`.
- `DOWNLOAD_URL` specifies the URL where that package can be downloaded from.
NOTE:
We're looking at improving this situation, and using locally available packages
[in issue #5550](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5550).
## Build the Docker image
To build the Docker image after populating the `RELEASE` file:
```shell
cd docker
docker build -t omnibus-gitlab-image:custom .
```
The image is built and tagged as `omnibus-gitlab-image:custom`.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Build an `omnibus-gitlab` package locally
## Prepare a build environment
Docker images with the necessary build tools for building `omnibus-gitlab` packages
are in the [`GitLab Omnibus Builder`](https://gitlab.com/gitlab-org/gitlab-omnibus-builder)
project's [Container Registry](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/container_registry).
1. [Install Docker](https://docs.docker.com/engine/install/).
> Containers need access to 4GB of memory to complete builds. Consult the documentation
> for your container runtime. Docker for Mac and Docker for Windows are known to set
> this value to 2GB for default installations.
1. Pull the Docker image for the OS you want to build a package for. The current
version of the image used officially by `omnibus-gitlab` is specified by the
`BUILDER_IMAGE_REVISION` environment variable in the
[CI configuration](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/.gitlab-ci.yml).
```shell
docker pull registry.gitlab.com/gitlab-org/gitlab-omnibus-builder/debian_10:${BUILDER_IMAGE_REVISION}
```
1. Clone the Omnibus GitLab source and change to the cloned directory:
```shell
git clone https://gitlab.com/gitlab-org/omnibus-gitlab.git ~/omnibus-gitlab
cd ~/omnibus-gitlab
```
1. Start the container and enter its shell, while mounting the `omnibus-gitlab`
directory in the container:
```shell
docker run -v ~/omnibus-gitlab:/omnibus-gitlab -it registry.gitlab.com/gitlab-org/gitlab-omnibus-builder/debian_10:${BUILDER_IMAGE_REVISION} bash
```
1. By default, `omnibus-gitlab` chooses public GitLab repositories to
fetch sources of various GitLab components. Set the environment variable
`ALTERNATIVE_SOURCES` to `false` to build from `dev.gitlab.org`.
```shell
export ALTERNATIVE_SOURCES=false
```
Component source information is in the
[`.custom_sources.yml`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/.custom_sources.yml)
file.
1. By default, the `omnibus-gitlab` codebase is optimized to be used in a CI
environment. One such optimization is reusing the pre-compiled Rails assets
that are built by the GitLab CI pipeline. To learn how to leverage this in your
builds, check the [Fetch upstream assets](#fetch-upstream-assets) section. Or,
you can choose to compile the assets during the package build by setting the
`COMPILE_ASSETS` environment variable to `true`.
```shell
export COMPILE_ASSETS=true
```
1. By default, XZ compression is used to produce the final DEB package,
which reduces the package size by nearly 30% in comparison to Gzip, with
little to no increase in build time and a slight increase in installation
(decompression) time. However, the system's package manager must also support
the format. If your system's package manager does not support XZ packages,
set the `COMPRESS_XZ` environment variable to `false`:
```shell
export COMPRESS_XZ=false
```
1. Install the libraries and other dependencies:
```shell
cd /omnibus-gitlab
bundle install
bundle binstubs --all
```
### Fetch upstream assets
Pipelines on GitLab and GitLab-FOSS projects create a Docker image with
pre-compiled assets and publish the image to the container registry. While building
packages, to save time you can reuse these images instead of compiling the assets
again:
1. Fetch the assets Docker image that corresponds to the ref of GitLab or
GitLab-FOSS you are building. For example, to pull the asset image
corresponding to latest master ref, run the following:
```shell
docker pull registry.gitlab.com/gitlab-org/gitlab/gitlab-assets-ee:master
```
1. Create a container using that image:
```shell
docker create --name gitlab_asset_cache registry.gitlab.com/gitlab-org/gitlab/gitlab-assets-ee:master
```
1. Copy the asset directory from the container to the host:
```shell
docker cp gitlab_asset_cache:/assets ~/gitlab-assets
```
1. While starting the build environment container, mount the asset directory in
it:
```shell
docker run -v ~/omnibus-gitlab:/omnibus-gitlab -v ~/gitlab-assets:/gitlab-assets -it registry.gitlab.com/gitlab-org/gitlab-omnibus-builder/debian_10:${BUILDER_IMAGE_REVISION} bash
```
1. Instead of setting `COMPILE_ASSETS` to true, set the path where assets can be
found:
```shell
export ASSET_PATH=/gitlab-assets
```
## Build the package
After you have prepared the build environment and have made necessary changes,
you can build packages using the provided Rake tasks:
1. For builds to work, the Git working directory should be clean, so commit your
changes to a new branch.
1. Run the Rake task to build the package:
```shell
bundle exec rake build:project
```
The packages are built and made available in the `~/omnibus-gitlab/pkg`
directory.
### Build an EE package
By default, `omnibus-gitlab` builds a CE package. If you want to build an EE
package, set the `ee` environment variable before running the Rake task:
```shell
export ee=true
```
### Clean files created during build
You can clean up all temporary files generated during the build process using
`omnibus`'s `clean` command:
```shell
bin/omnibus clean gitlab
```
Adding the `--purge` option removes **all** files generated during the
build including the project install directory (`/opt/gitlab`) and
the package cache directory (`/var/cache/omnibus/pkg`):
```shell
bin/omnibus clean --purge gitlab
```
## Get help on Omnibus
For help with the Omnibus command-line interface, run the
`help` command:
```shell
bin/omnibus help
```
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Building `omnibus-gitlab` packages and Docker images locally
NOTE:
If you are a GitLab team member, you have access to our CI infrastructure which
can be used to build these artifacts. Check the [documentation](team_member_docs.md)
for more details.
## `omnibus-gitlab` packages
`omnibus-gitlab` uses the [omnibus](https://github.com/chef/omnibus) tool for
building packages for the supported operating systems. The omnibus tool detects
the OS where it is being run and builds packages for that OS. It is recommended
to use a Docker container corresponding to the target OS as the environment for
building packages.
How to build a custom package locally is described in the
[dedicated document](build_package.md).
## All-in-one Docker image
NOTE:
If you want individual Docker images for each GitLab component instead of the
all-in-one monolithic one, check out the
[CNG](https://gitlab.com/gitlab-org/build/CNG) repository.
The GitLab all-in-one Docker image uses the `omnibus-gitlab` package built for
Ubuntu 22.04 under the hood. The Dockerfile is optimized to be used in a CI
environment, with the expectation of packages being available over the Internet.
We're looking at improving this situation
[in issue #5550](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5550).
How to build an all-in-one Docker image locally is described in the
[dedicated document](build_docker_image.md).
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# GitLab Team member's guide to using official build infrastructure
If you are a GitLab team member, you have access to the build
infrastructure, or to colleagues who have access to it, and you
can use that access to build packages.
## Test a `gitlab-org/gitlab` project merge request
If you have a merge request (MR) in the `gitlab-org/gitlab` project, you can
test that MR using a package or a Docker image.
In the CI pipeline corresponding to your MR, run the `e2e:package-and-test` job in
the `qa` stage to trigger:
- A downstream pipeline in the `omnibus-gitlab`
[QA mirror](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror), which
gives you an Ubuntu 22.04 package and an all-in-one Docker image for testing.
- A `gitlab-qa` run using these artifacts as well.
## Test an `omnibus-gitlab` project MR
If you have an MR in the `omnibus-gitlab` project, you can
test that MR using a package or a Docker image.
Similar to the `GitLab` project, pipelines running for MRs in `omnibus-gitlab` also
have manual jobs to get a package or Docker image. The `Trigger:ce-package` and
`Trigger:ee-package` jobs build CE and EE packages and Docker images and perform a QA run.
## Use specific branches or versions of a GitLab component
Versions of the primary GitLab components like GitLab Rails or Gitaly are controlled by:
- `*_VERSION` files in the `omnibus-gitlab` repository.
- `*_VERSION` environment variables present during the build.
Check the following table for more information:
| File name | Environment variable | Description |
| ------------------------------------ | ------------------------------------ | ----------- |
| `VERSION` | `GITLAB_VERSION` | Controls the Git reference of the GitLab Rails application. By default, points to the `master` branch of the GitLab-FOSS repository. If you want to use the GitLab repository, set the environment variable `ee` to true. |
| `GITALY_SERVER_VERSION` | `GITALY_SERVER_VERSION` | Git reference of the [Gitaly](https://gitlab.com/gitlab-org/gitaly) repository. |
| `GITLAB_PAGES_VERSION` | `GITLAB_PAGES_VERSION` | Git reference of the [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages) repository.|
| `GITLAB_SHELL_VERSION` | `GITLAB_SHELL_VERSION` | Git reference of the [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell) repository.|
| `GITLAB_ELASTICSEARCH_INDEXER_VERSION` | `GITLAB_ELASTICSEARCH_INDEXER_VERSION` | Git reference of the [GitLab Elasticsearch Indexer](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer) repository. Used only in EE builds.|
| `GITLAB_KAS_VERSION` | `GITLAB_KAS_VERSION` | Git reference of the [GitLab Kubernetes Agent Server](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent) repository.|
If you are running the `e2e:package-and-test` job from a GitLab MR, the `GITLAB_VERSION`
environment variable is set to the commit SHA corresponding to the pipeline.
Other environment variables, if not specified, are populated from
their corresponding files and passed on to the triggered pipeline.
NOTE:
Environment variables take precedence over `*_VERSION` files.
### Temporarily specify a component version
Temporarily specify a component version using any of the following methods:
- Edit the `*_VERSION` file, commit and push to start a pipeline, but revert
this change before the MR is marked ready for merge. We recommend you
open an unresolved discussion on this diff in the MR so you remember to
revert it.
- Set the environment variable in the `.gitlab-ci.yml` file, commit and push to
start a pipeline, but revert this change before the MR is marked ready for
merge. We recommend you open an unresolved discussion on this diff in the
MR so you remember to revert it.
- Pass the environment variable as a [Git push option](https://docs.gitlab.com/ee/user/project/push_options.html#push-options-for-gitlab-cicd).
```shell
git push <REMOTE> -o ci.variable="<ENV_VAR>=<VALUE>"
# Passing multiple variables
git push <REMOTE> -o ci.variable="<ENV_VAR_1>=<VALUE_1>" -o ci.variable="<ENV_VAR_2>=<VALUE_2>"
```
**Note:** This works only if you have changes to push. If the remote is
already up to date with your local branch, no new pipeline is created.
- Manually run the pipeline from UI while specifying the environment variables.
Environment variables are passed to the triggered downstream pipeline in the
[QA mirror](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror) so that
they are used during builds.
You should use environment variables instead of changing the `*_VERSION`
files to avoid the extra step of reverting changes. The `*_VERSION` files are
most efficient when you need repeated package builds of `omnibus-gitlab`
while the only changes are happening in GitLab components. In that case, after a
pipeline is run with the changed `*_VERSION` files, it can be retried to build
new packages that pull in changes from the upstream component feature branch, instead
of manually running new pipelines.
## Use a specific mirror or fork of a GitLab component
The repository sources for most software that Omnibus builds are in
the `.custom_sources.yml` file in the `omnibus-gitlab` repository. You can override
the main GitLab components using environment variables. Check the table
below for details:
| Environment variable | Description |
| --------------------------------------------- | ----------- |
| `ALTERNATIVE_PRIVATE_TOKEN` | An access token used when pulling from private repositories. |
| `GITLAB_ALTERNATIVE_REPO` | Git repository location for the GitLab Rails application. |
| `GITLAB_SHELL_ALTERNATIVE_REPO` | Git repository location for [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell). |
| `GITLAB_PAGES_ALTERNATIVE_REPO` | Git repository location for [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages). |
| `GITALY_SERVER_ALTERNATIVE_REPO` | Git repository location for [Gitaly](https://gitlab.com/gitlab-org/gitaly). |
| `GITLAB_ELASTICSEARCH_INDEXER_ALTERNATIVE_REPO` | Git repository location for [GitLab Elasticsearch Indexer](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). |
| `GITLAB_KAS_ALTERNATIVE_REPO` | Git repository location for [GitLab Kubernetes Agent Server](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent). |
## Build packages for other operating systems
Prerequisites:
- You must have permission to push branches to the `omnibus-gitlab` release mirror: `https://dev.gitlab.org/gitlab/omnibus-gitlab`.
Use the release mirror to:
- Build a package for an operating system other than Ubuntu 22.04.
- Ensure packages with your changes can be built on all operating systems.
To build packages for other operating systems:
1. Modify `*_VERSION` files or environment variables as specified in the
previous section if needed. You might want to set the `ee` environment variable in
the [CI configuration](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/.gitlab-ci.yml)
to `true` to use a commit from the GitLab repository instead of GitLab-FOSS.
1. Push your branch to the release mirror and check the
pipelines: `https://dev.gitlab.org/gitlab/omnibus-gitlab/-/pipelines`.
1. The pipeline builds packages for all supported operating systems and a Docker image.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Add or remove Omnibus GitLab configuration options
## Add a configuration option
Adding a configuration option may happen during any release milestone.
- Add an entry to `files/gitlab-config-template/gitlab.rb.template` as
documentation for administrators.
- Add a default value for the new option:
- Values specific to a service should be set in the appropriate `files/gitlab-cookbooks/SERVICE_NAME/attributes/default.rb`.
- General values may be set in `files/gitlab-cookbooks/gitlab/attributes/default.rb`.
- If the value requires calculations at runtime, then it should be added to
the [defined `parse_variables` method in the related cookbook](new-services.md#additional-configuration-parsing-for-the-service).
- Consider whether the option should be added to [public attributes](public-attributes.md).
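For example, exposing a hypothetical `gitlab_rails['example_timeout']` option involves both files; the option name and value here are purely illustrative.
```ruby
# Hypothetical example of exposing a new option.

# In files/gitlab-config-template/gitlab.rb.template (commented out, so new
# installations keep the default):
#   gitlab_rails['example_timeout'] = 60

# In files/gitlab-cookbooks/gitlab/attributes/default.rb:
default['gitlab']['gitlab-rails']['example_timeout'] = 60
```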
## Remove a configuration option
Distribution follows a strict process when removing configuration options to
minimize disruptions for Omnibus GitLab administrators.
1. Create an issue for deprecating the configuration option.
1. Create an issue for removing the configuration option. The removal happens no
less than three milestones after adding the deprecation messages.
### Deprecate the option
- [Add deprecation messages](adding-deprecation-messages.md).
- Remove the configuration options from `files/gitlab-config-template/gitlab.rb.template` to prevent their use in new installations.
### Final removal of the option
- Remove the default values for the deprecated option from `files/gitlab-cookbooks/gitlab/attributes/default.rb`.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Omnibus GitLab deprecation process
Besides following the [GitLab deprecation guidelines](https://handbook.gitlab.com/handbook/product/gitlab-the-product/#deprecations-removals-and-breaking-changes), we should also add deprecation messages
to the Omnibus GitLab package.
Notifying GitLab administrators of the deprecation and removal of features through deprecation messages consists of:
1. [Adding deprecation messages](#adding-deprecation-messages).
1. [Tracking the removal of deprecation messages](#tracking-the-removal-of-deprecation-messages).
1. [Tracking the removal of the feature](#track-the-removal-of-the-feature).
1. [Removing deprecation messages](#removing-deprecation-messages).
## You must know
Before you add a deprecation message, make sure to read:
- [When can a feature be deprecated](https://docs.gitlab.com/ee/development/deprecation_guidelines/#when-can-a-feature-be-deprecated).
- [Omnibus GitLab deprecation policy](https://docs.gitlab.com/ee/administration/package_information/deprecation_policy.html).
## Adding deprecation messages
We store the list of deprecations in the `list` method of the
[`Gitlab::Deprecations` class](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/package/libraries/deprecations.rb).
If a configuration has to be deprecated, it has to be added to that list with
proper details. For example:
```ruby
deprecations = [
{
config_keys: %w(gitlab postgresql data_dir),
deprecation: '11.6',
removal: '14.0',
note: "Please see https://docs.gitlab.com/omnibus/settings/database.html#store-postgresql-data-in-a-different-directory for how to use postgresql['dir']"
},
{
config_keys: %w(gitlab sidekiq cluster),
deprecation: '13.0',
removal: '14.0',
note: "Running sidekiq directly is deprecated. Please see https://docs.gitlab.com/ee/administration/operations/extra_sidekiq_processes.html for how to use sidekiq-cluster."
},
...
]
```
### `config_keys`
`config_keys` represents a list of keys, which can be used to traverse the configuration hash available
from `/opt/gitlab/embedded/nodes/{fqdn}.json` to reach a specific configuration.
For example, `%w(mattermost log_file_directory)` means the `mattermost['log_file_directory']` setting.
Similarly, `%w(gitlab nginx listen_addresses)` means `gitlab['nginx']['listen_addresses']`.
We internally convert it to `nginx['listen_addresses']`, which is what we use in `/etc/gitlab/gitlab.rb`.
### `deprecation`
`deprecation` is where you set the `<major>.<minor>` version that deprecated the change.
Starting in that version, running `gitlab-ctl reconfigure` will warn that the setting is being removed in the `removal`
version, and it will display the provided `note`.
### `removal`
`removal` is where you set the `<major>.<minor>` version that will no longer support the change at all.
This should almost always be a major release. The Omnibus package runs a script at the beginning of installation that ensures you don't have any removed configuration in your settings. The install will fail early, before making any changes, if it finds configuration that is no longer supported. Similarly, running `gitlab-ctl reconfigure` will also check the `gitlab.rb` file for removed configs. This is to tackle situations where users simply copy `gitlab.rb` from an older instance to a newer one.
### `note`
`note` is part of the deprecation message provided to users during `gitlab-ctl reconfigure`.
Use this area to inform users of how to change their settings, often by linking to new documentation,
or in the case of a settings rename, telling them what the new setting name should be.
## Tracking the removal of deprecation messages
Deprecation messages **should not** be cleaned up together with removals, because even after the removal they protect upgrades
where an administrator tries to upgrade to the version where the key got removed, but they have not yet migrated all
the old configuration.
Upgrades do this by running the `Gitlab::Deprecations.check_config` method, which compares existing
configuration against their scheduled removal date, before allowing the GitLab package to be updated.
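The real logic lives in `deprecations.rb`; the following is only a minimal sketch of the kind of comparison involved, assuming `existing_config` is the parsed node configuration and `new_version` is the package version being installed:

```ruby
# Minimal sketch (not the real Gitlab::Deprecations.check_config implementation).
# Select deprecations whose removal version has been reached while the
# corresponding key is still present in the existing configuration.
def removed_keys_still_in_use(existing_config, new_version, deprecations)
  deprecations.select do |deprecation|
    Gem::Version.new(new_version) >= Gem::Version.new(deprecation[:removal]) &&
      !existing_config.dig(*deprecation[:config_keys]).nil?
  end
end
```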
Additionally, some users skip multiple GitLab versions when upgrading. For that reason, we recommend removing deprecation
messages only in the next planned required stop following the removal milestone, as per our
[upgrade paths](https://docs.gitlab.com/ee/update/index.html#upgrade-paths). For example:
- A deprecation message was added in 15.8.
- The old configuration was removed from the codebase in 16.0.
- The deprecation messages should be removed in 16.3, as this is the next planned required stop.
To track the removal of deprecation messages:
1. Create a follow-up issue using the `Remove Deprecation Message` issue template.
1. Add a comment next to your deprecation message with a link to the follow-up issue to remove the message. For example:
```ruby
{
config_keys: ...
deprecation: '15.8', # Remove message Issue: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/XYZ
removal: '16.0',
note: "..."
},
```
## Track the removal of the feature
Define the correct milestone to remove the feature you want to deprecate, based on the [you must know](#you-must-know)
section above. Then create a follow-up issue to track the removal of the feature, and add a comment
beside the `removal` key informing which issue is tracking its removal. For example:
```ruby
{
config_keys: ...
deprecation: '15.8', # Remove message Issue: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/1
removal: '16.0', # Removal issue: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2
note: "..."
},
```
The follow-up issue should be set to the milestone in which the feature is expected to be removed.
## Removing deprecation messages
When the messages are ready to be removed you should:
1. Make sure the deprecated configuration was indeed removed in a previous milestone.
1. Make sure the message removal is being released in a required stop milestone later than the one that removed the configuration.
1. Open an MR to remove the deprecation messages and to close the follow-up issue.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Test Report Generation in Omnibus-GitLab
The following three pipelines are created while generating the Allure report:
- Omnibus pipeline
- TRIGGERED_CE/EE_PIPELINE child pipeline (Manually Triggered)
- QA-SUBSET-TEST child pipeline
## Omnibus MR Pipeline
An Omnibus-GitLab project MR pipeline can be triggered in two ways:
- running the pipeline manually
- pushing a commit to the repository while an MR exists
The tests in the pipeline are currently triggered manually by:
- `Trigger:ce-package` job
- `Trigger:ee-package` job
### Trigger:ce/ee-package job
These jobs can be triggered manually after the `generate-facts` job is completed. On triggering these jobs, a child pipeline is created.
The child pipeline, called `TRIGGERED_CE/EE_PIPELINE`, is generated in the Omnibus-GitLab repository.
## TRIGGERED_CE/EE_PIPELINE child pipeline
This child pipeline consists of a job called `qa-subset-test` which uses the `package-and-test/main.gitlab-ci.yml` file of the main GitLab project.
### qa-subset-test job
The `qa-subset-test` job triggers another child pipeline in the Omnibus-GitLab repository.
To get an Allure report snapshot as a comment in the MR, the following environment variables need to be passed to `qa-subset-test`:
| Environment Variable | Description |
| ----------------------------------|-------------|
| `GITLAB_AUTH_TOKEN` | Gives the Danger bot access to post comments in the `omnibus-gitlab` repository. We use `$DANGER_GITLAB_API_TOKEN`, which is also used for other Danger bot related access in `omnibus-gitlab`, as mentioned in [CI variables](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/doc/development/ci-variables.md). |
| `ALLURE_MERGE_REQUEST_IID` | The MR ID (for example, !6190) used by the [e2e-test-report job](#e2e-test-report-job), which in turn uses `allure-report-publisher` to post a message to the MR with the provided ID. |
### qa-remaining-test-manual
The `qa-remaining-test-manual` job is triggered manually. It triggers the same pipeline as `qa-subset-test`, but runs the tests that aren't run as part of the `qa-subset-test` job.
This job uses the same environment variables as `qa-subset-test` to generate the Allure report.
## QA-SUBSET-TEST child pipeline
This pipeline runs a subset of all the orchestrated tests using the GitLab QA project, which in turn uses the Allure gem to generate report source files for each executed test and stores the files in a common folder. Certain orchestrated jobs, like `instance`, `decomposition-single-db`, `decomposition-multiple-db`, and `praefect`, run only smoke tests; they initially ran the entire suite.
### e2e-test-report job
The `e2e-test-report` job includes the [`.generate-allure-report-base`](https://gitlab.com/gitlab-org/quality/pipeline-common/-/blob/master/ci/allure-report.yml) job, which uses the `allure-report-publisher` gem to collate all the reports in that folder into a single report and upload it to the S3 bucket.
It also posts the Allure report as a comment on the MR whose ID is passed in the `ALLURE_MERGE_REQUEST_IID` variable in the [qa-subset-test job](#qa-subset-test-job).
[allure-report-publisher](https://github.com/andrcuns/allure-report-publisher) is a gem that uses Allure in the backend. It has been adapted for GitLab to upload the report and post the comment to the MR.
The entire QA flow in the Omnibus MR pipeline is as follows:
```mermaid
%%{init: {'theme':'base'}}%%
graph TD
B0 --->|MR Pipeline Triggered on each commit| A0
A0 ---->|Creates Child Pipeline| A1
A1 ---->|Creates Child Pipelines| A2
A2 -->|"Once tests are successful <br> calls e2e-test-report job"| B1
B2 -.-|includes| B1
B1 -->|Runs| C1
A3 -.-|includes| A1
C1 -.->|uploads report| C2
C1 -.->|Posts report link as a comment on MR| B0
C3 -.->|pulls| B2
subgraph QA flow in omnibus pipeline
subgraph Omnibus Parent Pipeline
B0((Merge <br> Request))
A0["`**_trigger-package_** stage <br> Manual **_Trigger:ce/ee-package_** job kicked off`"]
end
subgraph Trigger:CE/EE-job Child Pipeline
A1["`**_trigger-qa_** stage <br> **_qa-subset-test_** job`"]
A3(["`_package-and-test/main.gitlab-ci.yml_ <br> from _gitlab-org/gitlab_`"])
end
subgraph qa-subset-test Child Pipeline
A2["`from <br> **_package-and-test/main.gitlab-ci.yml_** in **_gitlab-org/gitlab_**`"]
B1["`**_report_** stage <br> **_e2e-test-report_** job`"]
B2(["`_.generate-allure-report-base_ job from<br> _quality/pipeline-common_`"])
C1["`**_allure-report-publisher_** gem`"]
C2[("`AWS S3 <br> **_gitlab-qa-allure-report_** <br> in <br> **_eng-quality-ops-ci-cd-shared-infra_** <br> project`")]
C3["`pulls <br> image _andrcuns/allure-report-publisher:1.6.0_`"]
end
end
```
## Demo for Allure report & QA pipelines
An in-depth video walkthrough of the pipeline and how to use Allure report
is available [on YouTube](https://youtu.be/_0dM6KLdCpw).
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Amazon Machine Images (AMIs) and Marketplace Listings
GitLab caters to the AWS ecosystem via the following methods:
1. Community AMIs
1. [GitLab CE](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;owner=782774275127;search=GitLab%20CE;sort=desc:name) - amd64 and arm64
1. [GitLab EE](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;owner=782774275127;search=GitLab%20EE;sort=desc:name) (Unlicensed) - amd64 and arm64
1. [AWS Marketplace listing](https://aws.amazon.com/marketplace/seller-profile?id=9657c703-ca56-4b54-b029-9ded0fadd970)
1. [GitLab CE](https://aws.amazon.com/marketplace/pp/prodview-w6ykryurkesjq?sr=0-3&ref_=beagle&applicationId=AWSMPContessa)
1. [GitLab EE Premium (5 seat pre-licensed)](https://aws.amazon.com/marketplace/pp/prodview-amk6tacbois2k?sr=0-1&ref_=beagle&applicationId=AWSMPContessa)
1. [GitLab EE Ultimate (5 seat pre-licensed)](https://aws.amazon.com/marketplace/pp/prodview-g6ktjmpuc33zk?sr=0-2&ref_=beagle&applicationId=AWSMPContessa)
## Building the AMIs
AMIs are built as part of the regular release process in the tag pipelines that run
in the [Build mirror](https://dev.gitlab.org/gitlab/omnibus-gitlab), and use the
Ubuntu 22.04 packages under the hood. They are built using [`packer`](https://www.packer.io/)
with its [`Amazon EBS`](https://developer.hashicorp.com/packer/integrations/hashicorp/amazon/latest/components/builder/ebs)
builder. Each Community AMI listed above has a corresponding packer
configuration file to specify the build and AMI attributes and an update script
to install the GitLab package and configure the AMI's startup behaviors. The
update script downloads the Ubuntu 22.04 package and installs it on the VM. It
also installs a [`cloud-init`](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-ami-basics.html#amazon-linux-cloud-init)
script which automatically configures (with a `gitlab-ctl reconfigure` run) the
GitLab instance to work with the VM's external IP address on startup.
In addition to these public Community AMIs, two private AMIs are also built for
the GitLab EE Premium and Ultimate tiers, each shipping a 5-seat license of the
respective GitLab tier. This license file is used as part of the initial
`gitlab-ctl reconfigure` run on VM startup. These AMIs back our AWS
Marketplace listings.
## Releasing to AWS Marketplace
In addition to building the AMIs during the release process, the Omnibus GitLab
tag pipeline also publishes the new version of the respective AWS Marketplace
listing. The private AMIs mentioned above are used to back these listings. As
part of the release pipeline, we submit a changeset to publish the new version.
Unlike AMI creation, this process is not immediate, and we need to manually check
the status of the changeset periodically (preferably after 24 hours) to ensure it
got applied to the listings.
## Common release blocker events
The following events have happened often in the past, have caused the release
pipeline to fail, and need immediate attention:
1. Exhausting quota on Public AMIs - When builds fail due to quota exhaustion,
as an immediate fix, request a quota increase. Then discuss with
Alliances/Product on de-registering or making private AMIs of some of the
older versions.
Also check [issue discussing retention policy of AMIs](https://gitlab.com/gitlab-org/distribution/team-tasks/-/issues/1149).
1. AWS Marketplace version limit - AWS Marketplace has a limit of 100 versions for
each product; once it is exhausted, we can't publish newer versions. However, they
usually inform us when we are nearing that limit (via email to a specific account
that is forwarded to selected Distribution Build team and Alliance team members),
and we work with Alliances/Product to unlist some of the older versions.
1. AWS Marketplace listing blocked by a pending changeset - When this happens,
the changeset needs to be manually cancelled, and the Marketplace release job
in the release pipeline needs to be retried. This requires someone with
Maintainer level access to the Build mirror of `omnibus-gitlab` project.
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Handling broken master pipelines
We currently run [nightly pipelines](pipelines.md#scheduled-pipelines) for
building both CE and EE packages in our Release mirror: `https://dev.gitlab.org/gitlab/omnibus-gitlab`.
This mirror is configured to send pipeline failure notifications to
the `#g_distribution` channel on Slack. A broken master pipeline gets priority over
other scheduled work as per our [development guidelines](https://handbook.gitlab.com/handbook/engineering/workflow/#resolution-of-broken-master).
## Jobs are stuck due to no runners being active
This is a transient error caused by connection issues between the runner manager
machine and `dev.gitlab.org`.
1. Sign in to [runner manager machine](https://handbook.gitlab.com/handbook/engineering/infrastructure/core-platform/systems/distribution/maintenance/build-machines/#build-runners-gitlab-org).
1. Run the following command to force a connection between the runner and GitLab:
```shell
sudo gitlab-runner verify
```
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Add or change behavior during package install and upgrade
## Test changes during install/upgrade
If you are working on changes to the install/upgrade process, and not the reconfigure process itself, you can use the [scripts/repack-deb](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/scripts/repack-deb) tool to quickly repack an existing GitLab deb with changes from your local branch. It repacks the existing deb file into a new deb containing the local content from:
- `config/templates/package-scripts`
- `files/gitlab-cookbooks/`
- `files/gitlab-ctl-commands`
- `files/gitlab-ctl-commands-ee`
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# CI Variables
`omnibus-gitlab` [CI pipelines](pipelines.md) use variables provided by the CI environment to change build behavior between mirrors and
keep sensitive data out of the repositories.
Check the table below for more information about the various CI variables used in the pipelines.
## Build variables
**Required:**
These variables are required to build packages in the pipeline.
| Environment Variable | Description |
| --------------------------------------------- | ----------- |
| `AWS_SECRET_ACCESS_KEY` | Account secret to read/write the build package to a S3 location. |
| `AWS_ACCESS_KEY_ID` | Account ID to read/write the build package to a S3 location. |
**Available:**
These additional variables are available to override or enable different build behavior.
| Environment Variable | Description |
| --------------------------------------------- | ----------- |
| `AWS_MAX_ATTEMPTS` | Maximum number of times an S3 command should retry. |
| `USE_S3_CACHE` | Set to any value and Omnibus will cache fetched software sources in an s3 bucket. [Upstream documentation](https://www.rubydoc.info/github/chef/omnibus/Omnibus%2FConfig:use_s3_caching). |
| `CACHE_AWS_ACCESS_KEY_ID` | Account ID to read/write from the s3 bucket containing the s3 software fetch cache. |
| `CACHE_AWS_SECRET_ACCESS_KEY` | Account secret to read/write from the s3 bucket containing the s3 software fetch cache. |
| `CACHE_AWS_BUCKET` | S3 bucket name for the software fetch cache. |
| `CACHE_AWS_S3_REGION` | S3 bucket region to write/read the software fetch cache. |
| `CACHE_AWS_S3_ENDPOINT` | The HTTP or HTTPS endpoint to send requests to, when using s3 compatible service. |
| `CACHE_S3_ACCELERATE` | Setting any value enables the s3 software fetch cache to pull using s3 accelerate. |
| `SECRET_AWS_SECRET_ACCESS_KEY` | Account secret to read the gpg private package signing key from a secure s3 bucket. |
| `SECRET_AWS_ACCESS_KEY_ID` | Account ID to read the gpg private package signing key from a secure s3 bucket. |
| `GPG_PASSPHRASE` | The passphrase needed to use the gpg private package signing key. |
| `CE_MAX_PACKAGE_SIZE_MB` | The max package size in MB allowed for CE packages before we alert the team and investigate. |
| `EE_MAX_PACKAGE_SIZE_MB` | The max package size in MB allowed for EE packages before we alert the team and investigate. |
| `DEV_GITLAB_SSH_KEY` | SSH private key for an account able to read repositories from `dev.gitlab.org`. Used for SSH Git fetch. |
| `BUILDER_IMAGE_REGISTRY` | Registry to pull the CI Job images from. |
| `BUILD_LOG_LEVEL` | Omnibus build log level. |
| `ALTERNATIVE_SOURCES` | Switch to the custom sources listed in `https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/.custom_sources.yml`. Defaults to `true`. |
| `OMNIBUS_GEM_SOURCE` | Non-default remote URI to clone the omnibus gem from. |
| `QA_BUILD_TARGET` | Build specified QA image. See this [MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91250) for details. Defaults to `qa`. |
| `GITLAB_ASSETS_TAG` | Tag of the assets image built by the `build-assets-image` job in the `gitlab-org/gitlab` pipelines. Defaults to `$GITLAB_REF_SLUG` or the `gitlab-rails` version. |
| `BUILD_ON_ALL_OS` | Build all OS images without using manual trigger if set to `true`. |
## Test variables
| Environment Variable | Description |
| ------------------------------------------------------------ | ----------------------------------------------------------------------------------- |
| `RAT_REFERENCE_ARCHITECTURE` | Reference architecture template used in pipeline triggered by RAT job. |
| `RAT_FIPS_REFERENCE_ARCHITECTURE` | Reference architecture template used in pipeline triggered by RAT:FIPS job. |
| `RAT_PACKAGE_URL` | URL to fetch regular package - for RAT pipeline triggered by RAT job. |
| `RAT_FIPS_PACKAGE_URL` | URL to fetch FIPS package - for RAT pipeline triggered by RAT job. |
| `RAT_TRIGGER_TOKEN` | Trigger token for the RAT pipeline. |
| `RAT_PROJECT_ACCESS_TOKEN` | Project access token for triggering a RAT pipeline. |
| `OMNIBUS_GITLAB_MIRROR_PROJECT_ACCESS_TOKEN` | Project access token for building a test package. |
| `CI_SLACK_WEBHOOK_URL` | Webhook URL for Slack failure notifications. |
| `DANGER_GITLAB_API_TOKEN` | GitLab API token for dangerbot to post comments to MRs. |
| `DEPS_GITLAB_TOKEN` | Token used by [dependencies.io](https://www.dependencies.io/gitlab/) to create MRs. |
| `DEPS_TOKEN` | Token used by CI to auth to [dependencies.io](https://www.dependencies.io/gitlab/). |
| `DOCS_API_TOKEN` | Token used by CI to trigger a review-app build of the docs site. |
| `MANUAL_QA_TEST` | Variable used to decide if the `qa-subset-test` job should be played automatically or not. |
## Release variables
**Required:**
These variables are required to release packages built by the pipeline.
| Environment Variable | Description |
| --------------------------------------------- | ----------- |
| `STAGING_REPO` | Repository at `packages.gitlab.com` where releases are uploaded prior to final release. |
| `PACKAGECLOUD_USER` | Packagecloud username for pushing packages to `packages.gitlab.com`. |
| `PACKAGECLOUD_TOKEN` | API access token for pushing packages to `packages.gitlab.com`. |
| `LICENSE_S3_BUCKET` | Bucket for storing release license information published on the public page at `https://gitlab-org.gitlab.io/omnibus-gitlab/licenses.html`. |
| `LICENSE_AWS_SECRET_ACCESS_KEY` | Account secret to read/write from the S3 bucket containing license information. |
| `LICENSE_AWS_ACCESS_KEY_ID` | Account ID to read/write from the S3 bucket containing license information. |
| `GCP_SERVICE_ACCOUNT` | Used to read/write metrics in Google Object Storage. |
| `DOCKERHUB_USERNAME` | Username used when pushing the Omnibus GitLab image to Docker Hub. |
| `DOCKERHUB_PASSWORD` | Password used when pushing the Omnibus GitLab image to Docker Hub. |
| `AWS_ULTIMATE_LICENSE_FILE` | GitLab Ultimate license used in the Ultimate AWS AMIs. |
| `AWS_PREMIUM_LICENSE_FILE` | GitLab Premium license used in the Premium AWS AMIs. |
| `AWS_AMI_SECRET_ACCESS_KEY` | Account secret for read/write access to publish the AWS AMIs. |
| `AWS_AMI_ACCESS_KEY_ID` | Account ID for read/write access to publish the AWS AMIs. |
| `AWS_MARKETPLACE_ARN` | AWS ARN to allow AWS Marketplace to access our official AMIs. |
| `PACKAGE_PROMOTION_RUNNER_TAG` | Tag associated with the shared runners used to run package promotion jobs. |
**Available:**
These additional variables are available to override or enable different build behavior.
| Environment Variable | Description |
| --------------------------------------------- | ----------- |
| `RELEASE_DEPLOY_ENVIRONMENT` | Deployment name used for [`gitlab.com` deployer](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md) trigger if current ref is a stable tag. |
| `PATCH_DEPLOY_ENVIRONMENT` | Deployment name used for the [`gitlab.com` deployer](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md) trigger if current ref is a release candidate tag. |
| `AUTO_DEPLOY_ENVIRONMENT` | Deployment name used for the [`gitlab.com` deployer](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md) trigger if current ref is an auto-deploy tag. |
| `DEPLOYER_TRIGGER_PROJECT` | GitLab project ID for the repository used for the [`gitlab.com` deployer](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md). |
| `DEPLOYER_TRIGGER_TOKEN` | Trigger token for the various [`gitlab.com` deployer](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md) environments. |
| `RELEASE_BUCKET` | S3 bucket where release packages are pushed. |
| `BUILDS_BUCKET` | S3 bucket where regular branch packages are pushed. |
| `RELEASE_BUCKET_REGION` | S3 bucket region. |
| `RELEASE_BUCKET_S3_ENDPOINT` | Specify S3 endpoint. Especially useful when S3 compatible storage service is adopted. |
| `GITLAB_BUNDLE_GEMFILE` | Set Gemfile path required by `gitlab-rails` bundle. Default is `Gemfile`. |
| `GITLAB_COM_PKGS_RELEASE_BUCKET` | GCS bucket where release packages are pushed. |
| `GITLAB_COM_PKGS_BUILDS_BUCKET` | GCS bucket where regular branch packages are pushed. |
| `GITLAB_COM_PKGS_SA_FILE` | Service account key used for pushing release packages for SaaS deployments, it must have write access to the pkgs bucket. |
## Unknown/outdated variables
| Environment Variable | Description |
| --------------------------------------------- | ----------- |
| `VERSION_TOKEN` | |
| `TAKEOFF_TRIGGER_TOKEN` | |
| `TAKEOFF_TRIGGER_PROJECT` | |
| `RELEASE_TRIGGER_TOKEN` | |
| `GITLAB_DEV` | |
| `GET_SOURCES_ATTEMPTS` | A GitLab Runner variable used to control how many times runner tries to fetch the Git repository. |
| `FOG_REGION` | |
| `FOG_PROVIDER` | |
| `FOG_DIRECTORY` | |
| `AWS_RELEASE_TRIGGER_TOKEN` | Used for releases older than 13.10. |
| `ASSETS_AWS_SECRET_ACCESS_KEY` | |
| `ASSETS_AWS_ACCESS_KEY_ID` | |
| `AMI_LICENSE_FILE` | |
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Contribute to Omnibus GitLab
## Common enhancement tasks
- [Adding and removing configuration options](add-remove-configuration-options.md)
- [Adding a new Service to Omnibus GitLab](new-services.md)
- [Adding deprecation messages](adding-deprecation-messages.md)
- [Adding an attribute to `public_attributes.json`](public-attributes.md)
- [Adding a `gitlab-ctl` command](gitlab-ctl-commands.md)
## Common maintenance tasks
- [Upgrading software components](upgrading-software-components.md)
- [Patching upstream software](creating-patches.md)
- [Managing PostgreSQL versions](managing-postgresql-versions.md)
- [Upgrading the bundled Chef version](upgrading-chef.md)
- [Deprecating and removing support for an OS](deprecating-and-removing-support-for-an-os.md)
- [Adding or changing behavior during package install and upgrade](change-package-behavior.md)
## Build and test your enhancement
- [Building your own package](../build/index.md)
- [Building a package from a custom branch](../build/team_member_docs.md#test-an-omnibus-gitlab-project-mr)
## Submit your enhancement for review
### Merge request guidelines
If you are working on a new feature or an issue which doesn't have an entry on
the Omnibus GitLab issue tracker, it is always a better idea to create an issue
and mention that you will be working on it as this will help to prevent
duplication of work. Also, others may be able to provide input regarding the
issue, which can help you in your task.
It is preferred to make your changes in a branch named `<issue number>-<description>`
so that merging the request will automatically close the
specified issue.
A good merge request is expected to have the following components, based on
their applicability:
1. Full merge request description explaining why this change was needed
1. Code for implementing feature/bugfix
1. Tests, as explained in [Writing Tests](#write-tests)
1. Documentation explaining the change
1. If the merge request introduces a change in user-facing configuration, an update to [`gitlab.rb.template`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template)
1. [Changelog entry](https://docs.gitlab.com/ee/development/changelog.html) to inform about the change, if necessary.
NOTE:
Ensure shared runners are enabled for your fork in order for our automated tests to run:
1. Go to **Settings -> CI/CD**.
1. Expand Runners settings.
1. If shared runners are not enabled, click on the button labeled **Enable shared Runners**.
### Write tests
Any change in the internal cookbook also requires specs. Apart from testing the
specific feature/bug, it would be greatly appreciated if the submitted Merge
Request includes more tests. This is to ensure that the test coverage grows with
development.
When in a rush to fix something (such as a security issue, or a bug blocking the release),
writing specs can be skipped. However, an issue to implement the tests
**must be** created and assigned to the person who originally wrote the code.
To run tests, execute the following command. You may have to run `bundle install` before running it:
```shell
bundle exec rspec
```
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Create Patches
You can manually modify an external dependency to:
- Make sure that dependency works with Omnibus embedded packaging.
- Fix an issue that an upstream maintainer has not fixed.
## Bootstrap patch files
Omnibus has a specific [DSL](https://github.com/chef/omnibus#software) and
conventions to ship and apply patches automatically as part of the build
process.
To apply patch files, store `.patch` files that contain the changes in a
specific directory structure using the `patch` DSL method:
```plaintext
config/patches/<software-name>
```
For example, for a patch applied during the execution of
`gitlab-rails`, store the `.patch` files in:
```plaintext
config/patches/gitlab-rails
```
## Create a patch
To create a patch file, you can use:
- The `diff` command to compare an original file with a modified file.
- Git to output a patch based on one or more commits.
### Use `diff` to create a patch
To create a patch file using the `diff` command:
1. Duplicate the file you are changing and give the new file a new name.
1. Change the original file.
1. Compare the two files and save the output as a patch:
```shell
diff -Naur <original_file> <new_file> > <patch_filename>.patch
```
### Use Git to create a patch
Use the `git diff` command to create a patch file between two Git commits.
You must know both commit IDs.
```shell
git diff <commitid1> <commitid2> > <patch_filename>.patch
```
You can also create a patch file by comparing a single Git commit with your current working tree.
```shell
git diff <commitid1> > <patch_filename>.patch
```
## Use the patch
To patch one or more files:
1. Get the original files by downloading, bundle installing, or using a similar method.
1. For each file to patch, add a `patch` line to the component's Omnibus software definition (under `config/software/`):
```ruby
patch source: '<patch_filename>.patch', target: "#{<install_directory>}/embedded/<target_file>.txt"
```
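For context, here is a hedged sketch of where such a `patch` call typically sits in an Omnibus software definition; the software name, version, and file paths below are placeholders, not real entries in this project:

```ruby
# config/software/example-software.rb (illustrative only)
name 'example-software'
default_version '1.0.0'

build do
  # Applies config/patches/example-software/fix-build.patch to the named target file.
  patch source: 'fix-build.patch', target: "#{install_dir}/embedded/service/example/config.txt"
end
```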
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Database support
This document provides details and examples on how to implement database support
for an Omnibus GitLab component. The [architecture blueprint](../architecture/multiple_database_support/index.md)
provides the design and definitions.
1. [Level 1](#level-1)
1. [Level 2](#level-2)
1. [Examples](#examples)
1. [Example 1: Registry database objects](#example-1-registry-database-objects)
1. [Example 2: Registry database migrations](#example-2-registry-database-migrations)
1. [Example 3: Use database objects and migrations of Registry](#example-3-use-database-objects-and-migrations-of-registry)
1. [Example 4: Parameterized database objects resource for Rails](#example-4-parameterized-database-objects-resource-for-rails)
1. [Level 3](#level-3)
1. [Level 4](#level-4)
1. [Considerations](#considerations)
1. [Bridge the gap](#bridge-the-gap)
1. [Reorganize the existing database operations](#reorganize-the-existing-database-operations)
1. [Support dedicated PgBouncer user for databases](#support-dedicated-pgbouncer-user-for-databases)
1. [Delay the population of PgBouncer database configuration](#delay-the-population-of-pgbouncer-database-configuration)
1. [Configurable Consul watch for databases](#configurable-consul-watch-for-databases)
1. [Helper class for general database migration requirements](#helper-class-for-general-database-migration-requirements)
## Level 1
1. Add the new database-related configuration attributes to `gitlab.rb`. Do
not forget to update `gitlab.rb.template` (see the `gitlab.rb` sketch after this list).
1. Update the Chef recipe to consume the configuration attributes. At this
level, the requirement is to pass down the attributes to the component,
generally through configuration files or command-line arguments.
For example, in the `registry` cookbook:
- `registry['database']` attribute is added to `gitlab.rb` (see [`attributes/default.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/565f7a73f721fa40efc936dfd735b849986ce0ac/files/gitlab-cookbooks/registry/attributes/default.rb#L39)).
- The configuration template uses the attribute to configure registry (see [`templates/default/registry-config.yml.erb`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/565f7a73f721fa40efc936dfd735b849986ce0ac/files/gitlab-cookbooks/registry/templates/default/registry-config.yml.erb#L47)).
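As a hedged illustration of the first step, the corresponding `gitlab.rb` entry could look like the following sketch; the key names mirror the attributes used in Example 1 later on this page and should be verified against `gitlab.rb.template`:

```ruby
# /etc/gitlab/gitlab.rb (illustrative sketch; verify keys against gitlab.rb.template)
registry['database'] = {
  'enable' => true,
  'host' => 'postgres.example.com',
  'port' => 5432,
  'database_name' => 'registry',
  'username' => 'registry',
  'password' => 'secret-password'
}
```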
## Level 2
1. Add dependencies on the `postgresql` and `pgbouncer` cookbooks, using `depends` in
`metadata.rb` (see the sketch after this list). This ensures that requirements are met and their Chef custom
resources are available to the cookbook.
1. Create a `database_objects` custom resource in `resources/` directory of the
cookbook with the default `nothing` action (a no-op action) and a `create`
action. The `create` action can leverage the existing `postgresql` custom
resources to set up the required database objects for the component.
See:
- `postgresql_user`
- `postgresql_database`
- `postgresql_schema`
- `postgresql_extension`
- `postgresql_query`
A `database_objects` resource must create _all_ of the required database
objects for a component. It must not assume that another cookbook creates
some of the objects that it needs.
1. Create a `database_migrations` custom resource in `resources/` directory of
the cookbook with the default `nothing` action (a no-op action) and a `run`
action. The `run` action executes the commands
for database migrations of the component.
When the migration runs, it can safely assume that all of the required
database objects are available. Therefore this resource depends on successful
`create` action of `database_objects` resource.
1. In the `default` recipe of the cookbook, use a `database_objects` resource
that notifies a `database_migrations` resource to `run`. The migrations
should be able to run `immediately` after the preparation of database objects,
but a component may choose not to use the immediate trigger.
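A minimal `metadata.rb` sketch for the dependency declarations in the first step above (the cookbook name is only an example):

```ruby
# <cookbook>/metadata.rb (illustrative sketch)
name 'registry'
maintainer 'GitLab.com'
license 'Apache-2.0'

# Make the postgresql and pgbouncer custom resources available to this cookbook.
depends 'postgresql'
depends 'pgbouncer'
```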
### Examples
All of the following code blocks are provided as examples. You may need
to make adjustments to ensure that they meet your requirements.
#### Example 1: Registry database objects
The following example shows a `database_objects` resource in the `registry`
cookbook defined in `registry/resources/database_objects.rb`.
Notice how it uses custom resources from the `postgresql` cookbook to create
the required database objects.
```ruby
# registry/resources/database_objects.rb
unified_mode true
property :pg_helper, [GeoPgHelper, PgHelper], required: true, sensitive: true
default_action :nothing
action :nothing do
end
action :create do
host = node['registry']['database']['host']
port = node['registry']['database']['port']
database_name = node['registry']['database']['database_name']
username = node['registry']['database']['username']
password = node['registry']['database']['password']
postgresql_user username do
password "md5#{password}" unless password.nil?
action :create
end
postgresql_database database_name do
database_socket host
database_port port
owner username
helper new_resource.pg_helper
action :create
end
postgresql_extension 'btree_gist' do
database database_name
action :enable
end
end
```
#### Example 2: Registry database migrations
The following example shows a `database_migrations` resource in the `registry`
cookbook defined in `registry/resources/database_migrations.rb`.
Notice how the resource accepts additional parameters. Parameters help
support different migration scenarios, for example separation of pre-deployment
or post-deployment migrations. It also uses [`MigrationHelper`](#helper-class-for-general-database-migration-requirements)
to decide whether to run a migration or not.
```ruby
# registry/resources/database_migrations.rb
unified_mode true
property :name, name_property: true
property :direction, Symbol, default: :up
property :dry_run, [true, false], default: false
property :force, [true, false], default: false
property :limit, [Integer, nil], default: nil
property :skip_post_deployment, [true, false], default: false
default_action :nothing
action :nothing do
end
action :run do
# MigrationHelper is not implemented. It contains general-purpose helper
# methods for managing migrations, for example if a specific component
# migrations can run or not.
#
# See: "Helper class for general database migration requirements"
migration_helper = MigrationHelper.new(node)
account_helper = AccountHelper.new(node)
logfiles_helper = LogfilesHelper.new(node)
logging_settings = logfiles_helper.logging_settings('registry')
bash_hide_env "migrate registry database: #{new_resource.name}" do
code <<-EOH
set -e
LOG_FILE="#{logging_settings[:log_directory]}/db-migrations-$(date +%Y-%m-%d-%H-%M-%S).log"
umask 077
/opt/gitlab/embedded/bin/registry \
#{new_resource.direction} \
#{"--dry-run" if new_resource.dry_run} \
#{"--limit #{new_resource.limit}" unless new_resource.limit.nil?} \
... \
#{working_dir}/config.yml \
2>&1 | tee ${LOG_FILE}
STATUS=${PIPESTATUS[0]}
chown #{account_helper.gitlab_user}:#{account_helper.gitlab_group} ${LOG_FILE}
exit $STATUS
EOH
only_if { migration_helper.run_migration?('registry') }
end
end
```
#### Example 3: Use database objects and migrations of Registry
The resources defined in the previous examples are used in
`registry/recipes/enable.rb` recipe.
See how `only_if` and `not_if` guards are used to decide when to create the
database objects or run the migrations. Also, pay attention to the way that
`notifies` is used to show the dependency of the migrations on the successful
creation of database objects.
```ruby
# registry/recipes/enable.rb
# ...
pg_helper = PgHelper.new(node)
registry_database_objects 'default' do
pg_helper pg_helper
action :create
only_if { node['registry']['database']['enable'] }
not_if { pg_helper.replica? }
notifies :create, 'registry_database_migrations[up]', :immediately if pg_helper.is_ready?
end
registry_database_migrations 'up' do
direction :up
only_if { node['registry']['database']['enable'] }
not_if { pg_helper.replica? }
end
# ...
```
#### Example 4: Parameterized database objects resource for Rails
The following example shows how a single implementation of the database objects
for the Rails application can satisfy the requirements of the decomposed database model.
In this example, the _logical_ database is passed as the _resource name_ and is
used to look up the settings of each database from the configuration. The settings
are passed to `postgresql` custom resources. This is particularly useful when
the majority of the implementation can be reused to replace two or more resources.
```ruby
# gitlab/resources/database_objects.rb
unified_mode true
property :pg_helper, [GeoPgHelper, PgHelper], required: true, sensitive: true
default_action :nothing
action :nothing do
end
action :create do
global_database_settings = {
# ...
port: node['postgresql']['port'],
host: node['gitlab']['gitlab_rails']['db_host'],
# ...
}
database_settings = node['gitlab']['gitlab_rails']['databases'][new_resource.resource_name]
database_settings = global_database_settings.merge(database_settings) unless database_settings.nil?
username = database_settings[:username]
password = database_settings[:password]
database_name = database_settings[:database_name]
host = database_settings[:host]
port = database_settings[:port]
postgresql_user username do
password "md5#{password}" unless password.nil?
action :create
end
postgresql_database database_name do
database_socket host
database_port port
owner username
helper new_resource.pg_helper
action :create
end
postgresql_extension 'btree_gist' do
database database_name
action :enable
end
end
```
And this is how it is used in `gitlab/recipes/default.rb`:
```ruby
gitlab_database_objects 'main' do
pg_helper pg_helper
action :create
only_if { node['gitlab']['gitlab_rails']['databases']['main']['enable'] ... }
end
gitlab_database_objects 'ci' do
pg_helper pg_helper
action :create
only_if { node['gitlab']['gitlab_rails']['databases']['ci']['enable'] ... }
end
```
## Level 3
1. Add a new attribute for the PgBouncer user. Make sure that this attribute is
mapped to the existing `pgbouncer['databases']` setting and can consume it.
This attribute is used to create a dedicated PgBouncer user for the component,
as opposed to reusing the existing Rails user, similar to what Praefect
currently does.
NOTE:
It is very important that we do not introduce any breaking changes to
`gitlab.rb`. The current user settings must work without any change.
1. Use the `pgbouncer_user` custom resource from the `pgbouncer` cookbook to create the
dedicated PgBouncer user for the component, using the attribute
described in the previous step (see the sketch after this list).
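A hypothetical sketch of such a call follows; the property and attribute names below are assumptions and must be checked against the actual `pgbouncer_user` resource in the `pgbouncer` cookbook:

```ruby
# Hypothetical usage sketch; property names are assumptions, not the real resource API.
pgbouncer_user 'registry' do
  user     node['registry']['database']['pgbouncer_user']      # assumed new attribute
  password node['registry']['database']['pgbouncer_password']  # assumed new attribute
  database 'registry'
  action :create
end
```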
## Level 4
1. Add a new attribute for the component to specify the name of the Consul
service of the database cluster. This is either the name of the scope of the
Patroni cluster (when automatic service registry for Patroni, i.e. `patroni['register_service']`,
is enabled) or the name of the Consul service that is configured manually
without Omnibus GitLab.
1. Use the `database_watch` custom resource<sup>([Needs Implementation](#configurable-consul-watch-for-databases))</sup>
to define a new Consul watch for the database cluster service (see the hypothetical
sketch at the end of this section). It notifies
PgBouncer to update the logical database endpoint when the leader of the
cluster changes. Pass the name of the Consul service, the logical database, and
any other PgBouncer options as parameters to the watch.
_All_ `database_watch` _resources must be placed in the_ `consul` _cookbook_. As
opposed to the previous levels, this is the only place where database-related
resources are concentrated in one cookbook, `consul`, and not managed in
the same cookbooks as their associated components.
The reason for this exception is that the watches run on the PgBouncer nodes,
where `pgbouncer_role` is used. All components, except PgBouncer and Consul,
are disabled. Note that this is in line with existing user configuration since
it is [the recommended configuration for PgBouncer node](https://docs.gitlab.com/ee/administration/postgresql/replication_and_failover.html#configure-pgbouncer-nodes).
We don't want to introduce any breaking changes into `gitlab.rb`.
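Because `database_watch` does not exist yet, the following is only a hypothetical sketch of how it might be used in the `consul` cookbook; the property and attribute names are assumptions derived from the key attributes listed in [Configurable Consul watch for databases](#configurable-consul-watch-for-databases):

```ruby
# consul cookbook recipe (hypothetical sketch of the proposed resource)
database_watch 'registry' do
  # Patroni scope (when patroni['register_service'] is enabled) or a manually
  # configured Consul service name. The attribute name is an assumption.
  service_name node['registry']['database']['consul_service_name']
  # Logical database that PgBouncer reconfigures when the cluster leader changes.
  database 'registry'
  action :create
end
```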
## Considerations
- _No other resource should be involved with database setup_.
- All custom resources _must be idempotent_. For example, they must not fail
when an object already exists, even if it was created or run in another
cookbook. Instead, they must be able to update the current state of the
database objects, configuration, or migrations based on the new user inputs.
- In HA mode, given that multiple physical nodes are involved, Omnibus GitLab
may encounter certain limitations to provide full automation of the
configuration. This is an acceptable limitation.
## Bridge the gap
Currently not all of the custom resources or helper classes are available. Even
if they are, they may require some adjustments to meet the specified requirements.
Here are some examples.
### Reorganize the existing database operations
This model requires some adjustments in `postgresql`, `patroni`, and `gitlab`
cookbooks. For example, the `database_objects` resource that is defined in the `gitlab` cookbook
must be used in the same cookbook, and its usage must be removed from the `postgresql`
and `patroni` cookbooks.
The database service cookbooks (`postgresql` and `patroni`) should not deal with
database objects and migrations and must delegate them to the application
cookbooks (e.g. `gitlab`, `registry`, and `praefect`). However, to support this,
custom resources of `postgresql` cookbook must be able to work on any node.
Currently they assume they run on the PostgreSQL node and use the UNIX socket to
connect to the database server. This assumption forces all database
operations to be placed in one cookbook.
The same is true for the `pgbouncer` cookbook. Currently, the only PgBouncer user is
created in the `users` recipe of this cookbook. This can also change, to allow
each component cookbook to create its own PgBouncer users.
### Support dedicated PgBouncer user for databases
The current `pgbouncer` cookbook [_mostly_ supports multiple databases](https://docs.gitlab.com/ee/administration/gitaly/praefect.html#configure-a-new-pgbouncer-database-with-pool_mode--session).
However, it only creates PgBouncer users for the main Rails database. This
is why non-Rails applications connect with the same PgBouncer user created for Rails.
We can currently set up PgBouncer support for decomposed Rails databases sharing
the same user. But for Praefect or Registry, we need additional work to create
dedicated PgBouncer users.
NOTE:
A shared user does not mean connection settings for each database must
be the same. It only means that multiple databases use the same user for
PgBouncer connection.
### Delay the population of PgBouncer database configuration
The implementation of `gitlab-ctl pgb-notify` supports multiple
databases. It is generic enough that, as long as the PgBouncer users are created,
it can update `databases.ini` from the `databases.json` file.
However, when PgBouncer users are pulled into individual cookbooks, the initial
`databases.ini` that is created or updated in `gitlab-ctl reconfigure` may not
be valid because it references PgBouncer users that are not created yet.
We should be able to fix this by delaying the action on the Chef resource that calls
`gitlab-ctl pgb-notify`.
### Configurable Consul watch for databases
A Consul cluster can be shared between multiple Patroni clusters (using different
scopes, such as `patroni['scope']`), but updating PgBouncer configuration is still
problematic because the Consul watch scripts are not fully configurable.
The current implementation has several limitations:
1. Omnibus GitLab uses a `postgresql` service that is [explicitly defined](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/files/gitlab-cookbooks/consul/recipes/enable_service_postgresql.rb)
in the `consul` cookbook. This service, which is currently used to notify
PgBouncer, is a leftover from the transition from RepMgr to Patroni. It must
be replaced with the Consul service that [Patroni registers](https://patroni.readthedocs.io/en/latest/yaml_configuration.html#consul).
When `patroni['register_service']` is enabled, Patroni registers a Consul
service with the `patroni['scope']` parameter and the tag `master`, `primary`,
`replica`, or `standby-leader`, depending on the node's role.
1. The current [failover script](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/files/gitlab-cookbooks/consul/templates/default/watcher_scripts/failover_pgbouncer.erb)
is associated with a Consul watch for the `postgresql` service and is not capable
of handling multiple databases because the [database name cannot be changed](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/a9c14f6cfcc9fe0d9e98da7d04f43c5772d5f768/files/gitlab-cookbooks/consul/libraries/watch_helper.rb#L50).
We need to extend the current Omnibus GitLab capability to use Consul watches to
track Patroni services, find cluster leaders, and notify PgBouncer with a
parameterized failover script.
To do this, we implement a `database_watch` custom resource in the `consul`
cookbook. It defines a database-specific Consul watch for a database cluster
service and passes the required information to a parameterized failover script
to notify PgBouncer. The key attributes of this resource are:
1. The service name, which specifies which database cluster must be watched.
It can be the scope of the Patroni cluster when `patroni['register_service']`
is enabled, or a Consul service name when it is manually configured.
1. The database name, which specifies which logical databases should be
reconfigured when the database cluster leader changes.
### Helper class for general database migration requirements
`MigrationHelper`<sup>(Needs implementation)</sup> implements general
requirements of database migrations, including the central switch for enabling
or disabling auto-migrations. It can also provide the mapping between the
existing and new configuration attributes.
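A hypothetical sketch of what `MigrationHelper` could look like; the attribute names used for the switches are assumptions, not existing settings:

```ruby
# Hypothetical sketch of the not-yet-implemented MigrationHelper.
class MigrationHelper
  def initialize(node)
    @node = node
  end

  # Returns true when migrations for the given component are allowed to run.
  def run_migration?(component)
    # Assumed central switch for all automatic migrations.
    return false if @node['gitlab']['gitlab_rails']['auto_migrate'] == false

    # Assumed per-component switch.
    @node[component]['database']['auto_migrate'] != false
  end
end
```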
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Deprecate and remove support for a supported operating system
GitLab provides Omnibus packages for operating systems (OS) only until their end of life (EOL).
After the EOL date of the OS, GitLab stops releasing official
packages. The following content documents how to:
- Deprecate and remove support for an OS.
- Communicate this information to internal and external stakeholders.
## Check for upcoming EOL dates for supported OS
Check [supported operating systems](https://docs.gitlab.com/ee/administration/package_information/supported_os.html)
to see EOL dates for supported OS.
Slack reminders to check the EOL dates are sent to the Distribution team's Slack
channel on the first day of every quarter.
## Tell users of the deprecation and upcoming removal of support
If you find an OS has an EOL date in the upcoming quarter, open an issue to
discuss the deprecation and removal timeline. We provide a path forward for users
who are affected by this by making sure:
- We can build packages for the next version of the OS.
- Our package repository provider, [Packagecloud](https://packagecloud.io/),
supports packages for the new version.
After we decide to deprecate support for an OS, we tell affected users
through appropriate channels, including:
- In the next and following GitLab release blog posts, until removal.
- At the end of `gitlab-ctl reconfigure` run.
To add the deprecation notice to the blog post, message the Distribution team PM
in the issue to open necessary merge requests in the website repository.
To add the deprecation notice to the end of the `gitlab-ctl reconfigure` output, add
the OS information to the [`OmnibusHelper#deprecated_os_list`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/e0fbef119bdcfccc488713c68c9e076c1a592412/files/gitlab-cookbooks/package/libraries/omnibus_helper.rb#L133).
## Tell other internal stakeholders about the deprecation and upcoming removal of support
You must tell customer-facing teams about the deprecation and upcoming removal
of support for the OS. Announce the deprecation in the following Slack channels:
1. `#support_self_managed` - Support team catering to our self-managed customers.
1. `#customer-success` - Customer Success team of our Sales division.
## Remove support for an OS
When the OS EOL date has passed, open a merge request to the `omnibus-gitlab` project to
remove CI/CD jobs for that OS from the CI/CD configuration. These jobs include:
- Spec jobs that run in the
[development repository](https://gitlab.com/gitlab-org/omnibus-gitlab)
- Package build and release jobs that run in the
[Release mirror](https://dev.gitlab.org/gitlab/omnibus-gitlab).
Message the PM and all other necessary Slack channels to tell every stakeholder
about the removal of support.
When the last version that supported the OS is out of the maintenance window,
open a merge request to remove the builder image from the
[Omnibus Builder](https://gitlab.com/gitlab-org/gitlab-omnibus-builder)
project.
require "#{Omnibus::Config.project_root}/lib/gitlab/build/info/package"
require "#{Omnibus::Config.project_root}/lib/gitlab/build_iteration"
require "#{Omnibus::Config.project_root}/lib/gitlab/ohai_helper.rb"
require "#{Omnibus::Config.project_root}/lib/gitlab/openssl_helper"
require "#{Omnibus::Config.project_root}/lib/gitlab/util"
require "#{Omnibus::Config.project_root}/lib/gitlab/version"
name 'simple'
description 'Simple project to test omnibus changes'
maintainer 'GitLab, Inc. <support@gitlab.com>'
homepage 'https://about.gitlab.com/'
license 'MIT'
install_dir '/opt/simple'
dependency ''
build_version '0.1.1'