Cells
This document is a work-in-progress and represents a very early state of the Cells design. Significant aspects are not documented, though we expect to add them in the future.
Cells is a new architecture for our software as a service platform. This architecture is horizontally scalable, resilient, and provides a more consistent user experience. It may also provide additional features in the future, such as data residency control (regions) and federated features.
For more information about Cells, see also:
Cells Iterations
- The Cells 1.0 target is to deliver a solution for new enterprise customers using the SaaS GitLab.com offering.
- The Cells 1.5 target is to deliver a migration solution for existing enterprise customers using the SaaS GitLab.com offering, built on top of the Cells 1.0 architecture.
- The Cells 2.0 target is to support a public and open source contribution model in a cellular architecture.
Goals
See Goals, Glossary and Requirements.
Deployment Architecture
Work streams
We can't ship the entire Cells architecture in one go; it is too large. Instead, we are defining the key work streams required by the project. For each work stream, we need to define the effort necessary to make features compliant with Cells 1.0, Cells 1.5, and Cells 2.0, respectively.
It is expected that some objectives will not be completed for General Availability (GA), but will be enough to run Cells in production.
1. Data access layer
Before Cells can be run in production we need to prepare the codebase to accept the Cells architecture. This preparation involves:
- Allowing data sharing between Cells.
- Updating the tooling for discovering cross-Cell data traversal.
- Defining code practices for cross-Cell data traversal.
- Analyzing the data model to define the data affinity.
Under this objective the following steps are expected:
- Allow cluster-wide data to be shared through a database-level data access layer. Cells can connect to a database containing shared data, for example application settings, users, or routing information. (A minimal sketch follows this list.)
- Evaluate the efficiency of database-level access versus an API-oriented access layer. Reconsider the consequences of database-level data access for data migration, resiliency of updates, and interconnected systems when we share only a subset of data.
- Cluster-unique identifiers. Every object has a unique identifier that can be used to access data across the cluster. The IDs allocated for Projects, issues, and any other objects are cluster-unique.
- Cluster-wide deletions. If entities deleted in Cell 2 are cross-referenced, they are properly deleted or nullified across the cluster. We will likely reuse the existing loose foreign keys mechanism and extend it with cross-Cell data removal.
- Data access layer. Ensure that a stable, versioned data access layer is implemented that allows cluster-wide data to be shared.
- Database migration. Ensure that migrations can run independently between Cells, and that we safely handle migrations of shared data in a way that does not impact other Cells.
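To make the database-level sharing step more concrete, here is a minimal sketch. It assumes two separate databases (one Cell-local, one cluster-wide) and hypothetical table names; the real schema, connection handling, and data access layer are not defined in this document.

```python
import sqlite3

# Hypothetical illustration only: table names, schemas, and the idea of a
# separate "cluster" database are assumptions for this sketch, not the
# actual GitLab schema or data access layer.

# Cell-local database: Organization-scoped data such as projects.
cell_db = sqlite3.connect(":memory:")
cell_db.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, organization_id INTEGER, name TEXT)")
cell_db.execute("INSERT INTO projects VALUES (1, 100, 'gitlab')")

# Cluster-wide database: data shared by every Cell, such as users and
# application settings.
cluster_db = sqlite3.connect(":memory:")
cluster_db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT UNIQUE)")
cluster_db.execute("INSERT INTO users VALUES (1, 'alice')")

# The data access layer decides which connection serves a given table.
SHARED_TABLES = {"users", "application_settings"}

def connection_for(table: str) -> sqlite3.Connection:
    """Route reads and writes to the cluster-wide or Cell-local database."""
    return cluster_db if table in SHARED_TABLES else cell_db

# A Cell reads shared users and local projects through one code path.
print(connection_for("users").execute("SELECT username FROM users").fetchall())
print(connection_for("projects").execute("SELECT name FROM projects").fetchall())
```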
2. Workflows
To make Cells viable, we need to define and support essential workflows before we can consider Cells to be of Beta quality. These workflows are meant to cover the majority of application functionality, making the product mostly usable, though with some caveats.
The current approach is to define workflows from top to bottom. The order defines the presumed priority of the items. This list is not exhaustive, as we expect other teams to help fix their workflows after the initial phase, in which we fix the fundamental ones.
To consider a project ready for the Beta phase, it is expected that all features defined below are supported by Cells.
In the cases listed below, the workflows define a set of tables to be properly attributed to the feature. In some cases, a table with ambiguous usage has to be broken down. For example, `uploads` are used to store user avatars as well as uploaded attachments for comments. It is expected that `uploads` is split into `uploads` (describing Group/Project-level attachments) and `global_uploads` (describing, for example, user avatars), as sketched below.
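As a rough illustration of such a split, the following sketch routes rows of a legacy `uploads`-like collection into Cell-local and cluster-wide collections. The column names and the classification rule are assumptions made for this example only, not the actual uploads schema.

```python
# Hypothetical sketch: column names and the classification rule are
# assumptions used for illustration, not the actual uploads schema.
legacy_uploads = [
    {"id": 1, "model_type": "Project", "model_id": 42, "path": "attachment.png"},
    {"id": 2, "model_type": "User", "model_id": 7, "path": "avatar.png"},
]

uploads = []         # Group/Project-level attachments stay Cell-local.
global_uploads = []  # For example user avatars, shared across the cluster.

for row in legacy_uploads:
    if row["model_type"] in ("Project", "Group", "Namespace"):
        uploads.append(row)
    else:
        global_uploads.append(row)

print(len(uploads), len(global_uploads))  # 1 1
```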
It is expected that group::tenant scale will help other teams to fix their feature set to work with Cells. The first 2-3 quarters are required to define a general split of data, and build the required tooling and development guidelines.
- Instance-wide settings are shared across the cluster. The Admin Area section is, for the most part, shared across a cluster.
- User accounts are shared across the cluster. ✓ The purpose is to make `users` cluster-wide.
- User can create Organization. The purpose is to create Organizations that are isolated from each other.
- User can create Group. ✓ (demo) The purpose is to perform a targeted decomposition of `users` and `namespaces`, because `namespaces` will be stored locally in the Cell.
- User can create Project. ✓ (demo) The purpose is to perform a targeted decomposition of `users` and `projects`, because `projects` will be stored locally in the Cell.
- User can create Project with a README file. The purpose is to allow `users` to create README files in a project.
- User can change profile avatar that is shared in the cluster. The purpose is to fix global uploads that are shared in the cluster.
- User can push to Git repository. The purpose is to ensure that essential joins from the `projects` table are properly attributed to be Cell-local, and as a result the Git workflow is supported.
- User can run CI pipeline. The purpose is that `ci_pipelines` (like `ci_stages`, `ci_builds`, `ci_job_artifacts`) and adjacent tables are properly attributed to be Cell-local (see the attribution sketch after this list).
- User can create issue. The purpose is to ensure that `issues` are properly attributed to be Cell-local.
- User can create merge request, and merge it after it is green. The purpose is to ensure that merge requests are properly attributed to be Cell-local.
- User can manage Group and Project members. The `members` table is properly attributed to be either Cell-local or cluster-wide.
- User can manage instance-wide runners. The purpose is to scope all CI runners to be Cell-local. Instance-wide runners in fact become Cell-local runners. The expectation is to provide a user interface to view and manage all runners per Cell, instead of per cluster.
- User is part of Organization and can only see information from the Organization. The purpose is to have many Organizations per Cell, but never have a single Organization spanning many Cells. This is required to ensure that information shown within an Organization is isolated, and does not require fetching information from other Cells.
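As a rough illustration of what "properly attributed" could mean in practice, the sketch below keeps a hypothetical attribution registry and flags joins that would cross the Cell boundary. The table names, scopes, and the check itself are assumptions for illustration; they are not the actual tooling described in the data access layer work stream.

```python
# Hypothetical attribution of tables to a scope; the real classification is
# maintained in the application's codebase, not in this sketch.
TABLE_SCOPE = {
    "users": "cluster-wide",
    "application_settings": "cluster-wide",
    "projects": "cell-local",
    "issues": "cell-local",
    "merge_requests": "cell-local",
    "ci_pipelines": "cell-local",
    "members": "cell-local",  # could also end up cluster-wide; still to be decided
}

def check_join(left: str, right: str) -> None:
    """Flag joins that would traverse data between a Cell and the cluster."""
    scopes = {TABLE_SCOPE.get(left, "unknown"), TABLE_SCOPE.get(right, "unknown")}
    if scopes == {"cell-local", "cluster-wide"}:
        raise ValueError(
            f"cross-scope join between {left} and {right} needs an explicit data access path"
        )

check_join("projects", "issues")  # fine: both Cell-local
try:
    check_join("projects", "users")
except ValueError as error:
    print(error)  # cross-scope join between projects and users ...
```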
Some of the following workflows might need to be supported, depending on the group's decision. This list is not an exhaustive list of the work that needs to be done.
- User can use all Group-level features.
- User can use all Project-level features.
- User can share Groups with other Groups in an Organization.
- User can create system webhook.
- User can upload and manage packages.
- User can manage security detection features.
- User can manage Kubernetes integration.
- TBD
Dependencies
We have identified the following dependencies between workflows.
```mermaid
flowchart TD
    A[Create Organization] --> B[Create Group]
    B --> C[Create Project]
    L --> D[Create Issue]
    E --> F[Push to Git repo]
    E --> G[Create Merge Request]
    E --> H[Create CI Pipeline]
    G --> J[Merge when Pipeline Succeeds]
    H --> J
    J --> K[Issue gets closed by the reference in MR description]
    D --> K
    A --> L[Manage members]
    B --> L
    C --> L
    L --> E[Create file in repository]
```
3. Routing layer
4. Cell deployment
See Cell: Application deployment.
5. Migration
When we reach production and are able to store new Organizations on new Cells, we need to be able to divide big Cells into many smaller ones.
- Use GitLab Geo to clone Cells. The purpose is to use GitLab Geo to clone Cells.
- Split Cells by cloning them. Once a Cell is cloned, we change the routing information for Organizations. Organizations will encode a `cell_id`. When we update the `cell_id`, it will automatically make the given Cell authoritative to handle traffic for the given Organization. (A minimal sketch follows this list.)
- Delete redundant data from previous Cells. Since the Organization is now stored on many Cells, once we change `cell_id` we will have to remove data from all other Cells based on `organization_id`.
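A minimal sketch of the routing flip and cleanup described above, under the assumption that routing information is a plain mapping from `organization_id` to `cell_id`; the real routing store and deletion mechanism are not specified in this document.

```python
# Hypothetical routing store: organization_id -> cell_id. Names and the
# cleanup approach are assumptions for illustration only.
routing = {100: "cell-1"}

# Each Cell holds rows tagged with organization_id; after cloning Cell 1,
# Cell 2 holds a copy of the same rows.
cell_data = {
    "cell-1": [{"organization_id": 100, "table": "projects", "id": 1}],
    "cell-2": [{"organization_id": 100, "table": "projects", "id": 1}],
}

def move_organization(org_id: int, target_cell: str) -> None:
    """Make the target Cell authoritative, then drop redundant copies elsewhere."""
    routing[org_id] = target_cell  # traffic now goes to the target Cell
    for cell, rows in cell_data.items():
        if cell != target_cell:
            cell_data[cell] = [r for r in rows if r["organization_id"] != org_id]

move_organization(100, "cell-2")
print(routing)    # {100: 'cell-2'}
print(cell_data)  # cell-1 no longer holds Organization 100's rows
```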
Availability of the feature
We are following the Support for Experiment, Beta, and Generally Available features.
1. Experiment
Expectations:
- We can deploy a Cell on staging or another testing environment by using a separate domain (for example `cell2.staging.gitlab.com`) using Cell deployment tooling.
- User can create Organization, Group and Project, and run some of the workflows.
- It is not expected to be able to run a router to serve all requests under a single domain.
- We expect data loss of data stored on additional Cells.
- We expect to tear down and create many new Cells to validate tooling.
2. Beta
Expectations:
- We can run many Cells under a single domain (for example, `staging.gitlab.com`).
- All features defined in workflows are supported.
- Not all aspects of the routing layer are finalized.
- We expect additional Cells to be stable with minimal data loss.
3. GA
Expectations:
- We can run many Cells under a single domain (for example, `staging.gitlab.com`).
- All features of the routing layer are supported.
- We don't expect to support any of the migration aspects.
4. Post GA
Expectations:
- We can migrate existing Organizations onto new Cells.
Iteration plan
The delivered iterations will focus on solving particular steps of a given key work stream. It is expected that initial iterations will be rather slow, because they require substantially more changes to prepare the codebase for the data split.
Iteration 1 (FY24Q1)
- Data access layer: Initial Admin Area settings are shared across cluster.
- Workflow: Allow cluster-wide data to be shared through a database-level data access layer.
Iteration 2 (FY24Q2-FY24Q3)
- Workflow: User accounts are shared across cluster.
- Workflow: User can create Group.
Iteration 3 (FY24Q4-FY25Q1)
- Workflow: User can create Project.
- Routing: Technology.
- Routing: Cell discovery.
Iteration 4 (FY25Q1-FY25Q2)
- Workflow: User can create Organization on Cell 2.
Iteration 5..N - starting FY25Q3
- Data access layer: Cluster-unique identifiers.
- Data access layer: Evaluate the efficiency of database-level access vs. API-oriented access layer.
- Data access layer: Data access layer.
- Routing: User can use single domain to interact with many Cells.
- Cell deployment: Extend GitLab Dedicated to support GCP.
- Workflow: User can create Project with a README file.
- Workflow: User can push to Git repository.
- Workflow: User can run CI pipeline.
- Workflow: Instance-wide settings are shared across cluster.
- Workflow: User can change profile avatar that is shared in cluster.
- Workflow: User can create issue.
- Workflow: User can create merge request, and merge it after it is green.
- Workflow: User can manage Group and Project members.
- Workflow: User can manage instance-wide runners.
- Workflow: User is part of Organization and can only see information from the Organization.
- Routing: Router endpoints classification.
- Routing: GraphQL and other ambiguous endpoints.
- Data access layer: Allow cluster-wide data to be shared through a database-level data access layer.
- Data access layer: Cluster-wide deletions.
- Data access layer: Database migrations.
Technical proposals
The Cells architecture has long-lasting implications for data processing, data location, scalability, and the GitLab architecture. This section links all the different technical proposals that are being evaluated.
Impacted features
The Cells architecture will impact many features, requiring some of them to be rewritten or changed significantly. Below is a list of known affected features with preliminary proposed solutions.
- Cells: Admin Area
- Cells: Backups
- Cells: CI/CD Catalog
- Cells: CI Runners
- Cells: Container Registry
- Cells: Contributions: Forks
- Cells: Database Sequences
- Cells: Data Migration
- Cells: Explore
- Cells: Git Access
- Cells: Global Search
- Cells: GraphQL
- Cells: Organizations
- Cells: Personal Access Tokens
- Cells: Personal Namespaces
- Cells: Secrets
- Cells: Snippets
- Cells: User Profile
- Cells: Your Work
Impacted features: Placeholders
The following list of impacted features only represents placeholders that still require work to estimate the impact of Cells and develop solution proposals.
- Cells: Agent for Kubernetes
- Cells: Data pipeline ingestion
- Cells: GitLab Pages
- Cells: Group Transfer
- Cells: Issues
- Cells: Merge Requests
- Cells: Project Transfer
- Cells: Router Endpoints Classification
- Cells: Schema changes (Postgres and Elasticsearch migrations)
- Cells: Uploads
- ...
Frequently Asked Questions
What's the difference between Cells architecture and GitLab Dedicated?
The new Cells architecture is meant to scale GitLab.com. The way to achieve this is by moving Organizations into Cells, but different Organizations can still share server resources, even if the application provides isolation from other Organizations. All of them still operate under the existing GitLab SaaS domain name `gitlab.com`. Also, Cells still share some common data, like `users` and the routing information of Groups and Projects. For example, no two users can have the same username, even if they belong to different Organizations that exist on different Cells.
On the other hand, GitLab Dedicated is meant to provide a completely isolated GitLab instance for any Organization. This instance runs on its own custom domain name and is totally isolated from any other GitLab instance, including GitLab SaaS. For example, a user on GitLab Dedicated can use a username that is already taken on GitLab.com.
With the aforementioned differences, GitLab Dedicated is still offered at a higher cost because it is provisioned with dedicated server resources for each customer, while Cells use shared resources. This makes GitLab Dedicated more suited for bigger customers, and GitLab Cells more suitable for small to mid-size companies that are starting on GitLab.com.
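For illustration, here is a minimal sketch of the cluster-wide username uniqueness mentioned above. The shared username store and the helper function are hypothetical; they only show the idea that every Cell consults the same shared `users` data.

```python
# Hypothetical sketch of cluster-wide username uniqueness. The shared
# username store and this helper are illustrative assumptions, not the
# actual GitLab implementation.
shared_usernames = {"alice", "bob"}  # stands in for the shared users data

def reserve_username(username: str) -> bool:
    """Reserve a username cluster-wide; every Cell checks the same shared set."""
    if username in shared_usernames:
        return False  # taken on some Cell, possibly in another Organization
    shared_usernames.add(username)
    return True

print(reserve_username("alice"))    # False: already taken somewhere in the cluster
print(reserve_username("charlie"))  # True: now reserved for the whole cluster
```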
Can different Cells communicate with each other?
Up until iteration 3, Cells communicate with each other only via a shared database that contains common data. In iteration 4 we are going to evaluate the option of Cells calling each other via API to provide more isolation and reliability.
How are Cells provisioned?
The GitLab.com cluster of Cells will use GitLab Dedicated instances. Once a GitLab Dedicated instance is provisioned, it can join the GitLab.com cluster and become a Cell. One requirement is that the GitLab Dedicated instance does not contain any prior data.
To reach shared resources, Cells will use Private Service Connect.
See also the design discussion.
What is a Cells topology?
See the design discussion.
How are users of an Organization routed to the correct Cell?
TBD
How do users authenticate with Cells and Organizations?
See the design discussion.
How are Cells rebalanced?
TBD
How can Cells implement disaster recovery capabilities?
TBD
How do I decide whether to move my feature to the cluster, Cell or Organization level?
By default, features are required to be scoped to the Organization level. Any deviation from that rule should be validated and approved by Tenant Scale.
The design goals of the Cells architecture describe that all Cells are under a single domain and as such, Cells are invisible to the user:
- Cell-local features should be limited to those related to managing the Cell, but never be a feature where the Cell semantic is exposed to the customer.
- The Cells architecture is meant to freely control the distribution of Organization and customer data across Cells without impacting users when data is migrated.
Cluster-wide features are strongly discouraged because:
- They might require storing a substantial amount of data cluster-wide which decreases scalability headroom.
- They might require implementation of non-trivial data aggregation that reduces resilience to single node failure.
- They are harder to build due to the need to be able to run mixed deployments; cluster-wide features need to take this into account.
- They might affect our ability to provide an on-premise like experience on GitLab.com.
- Some features that are expected to be cluster-wide might in fact be better implemented using federation techniques that rely on trusted intra-cluster communication with the same user identity. For example, the User Profile is shared across the cluster.
- The Cells architecture limits what services can be considered cluster-wide. Services that might initially be cluster-wide are still expected to be split in the future to achieve full service isolation. No feature should be built to depend on such a service (like Elasticsearch).
Will Cells use the reference architecture for 50,000 users?
The infrastructure team will properly size Cells depending on the load. The Tenant Scale team sees an opportunity to use GitLab Dedicated as a base for Cells deployment.