[{"data":1,"prerenderedAt":785},["ShallowReactive",2],{"/en-us/blog/postmortem-of-database-outage-of-january-31":3,"navigation-en-us":33,"banner-en-us":432,"footer-en-us":442,"blog-post-authors-en-us-GitLab":684,"blog-related-posts-en-us-postmortem-of-database-outage-of-january-31":698,"assessment-promotions-en-us":736,"next-steps-en-us":775},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":22,"isFeatured":12,"meta":23,"navigation":24,"path":25,"publishedDate":20,"seo":26,"stem":30,"tagSlugs":31,"__hash__":32},"blogPosts/en-us/blog/postmortem-of-database-outage-of-january-31.yml","Postmortem Of Database Outage Of January 31",[7],"gitlab",null,"company",{"slug":11,"featured":12,"template":13},"postmortem-of-database-outage-of-january-31",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9},"Postmortem of database outage of January 31","Postmortem on the database outage of January 31 2017 with the lessons we learned.",[18],"GitLab","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663397/Blog/Hero%20Images/logoforblogpost.jpg","2017-02-10","\n\nOn January 31st 2017, we experienced a major service outage for one of our products, the online service GitLab.com. The outage was caused by an accidental removal of data from our primary database server.\n\nThis incident caused the GitLab.com service to be unavailable for many hours. We also lost some production data that we were eventually unable to recover. Specifically, we lost modifications to database data such as projects, comments, user accounts, issues and snippets, that took place between 17:20 and 00:00 UTC on January 31. Our best estimate is that it affected roughly 5,000 projects, 5,000 comments and 700 new user accounts. Code repositories or wikis hosted on GitLab.com were unavailable during the outage, but were not affected by the data loss. 
[GitLab Enterprise](/enterprise/) customers, GitHost customers, and self-managed GitLab CE users were not affected by the outage or the data loss.

Losing production data is unacceptable. To ensure this does not happen again we're working on multiple improvements to our operations & recovery procedures for GitLab.com. In this article we'll look at what went wrong, what we did to recover, and what we'll do to prevent this from happening in the future.

To the GitLab.com users whose data we lost and to the people affected by the outage: we're sorry. I apologize personally, as GitLab's CEO, and on behalf of everyone at GitLab.

## Database setup

GitLab.com currently uses a single primary and a single secondary in hot-standby mode. The standby is only used for failover purposes. In this setup a single database has to handle all the load, which is not ideal. The primary's hostname is `db1.cluster.gitlab.com`, while the secondary's hostname is `db2.cluster.gitlab.com`.

In the past we've had various other issues with this particular setup, due to `db1.cluster.gitlab.com` being a single point of failure. For example:

* [A database outage on November 28th, 2016 due to project_authorizations having too much bloat](https://gitlab.com/gitlab-com/infrastructure/issues/791)
* [CI distributed heavy polling and exclusive row locking for seconds takes GitLab.com down](https://gitlab.com/gitlab-com/infrastructure/issues/514)
* [Scary DB spikes](https://gitlab.com/gitlab-com/infrastructure/issues/364)

## Timeline

On January 31st an engineer started setting up multiple PostgreSQL servers in our staging environment. The plan was to try out [pgpool-II](http://www.pgpool.net/mediawiki/index.php/Main_Page) to see if it would reduce the load on our database by load balancing queries between the available hosts.
Here is the issue for that plan: [infrastructure#259](https://gitlab.com/gitlab-com/infrastructure/issues/259).

**± 17:20 UTC:** Prior to starting this work, our engineer took an LVM snapshot of the production database and loaded it into the staging environment. This was necessary to ensure the staging database was up to date, allowing for more accurate load testing. This procedure normally happens automatically once every 24 hours (at 01:00 UTC), but they wanted a more up-to-date copy of the database.

**± 19:00 UTC:** GitLab.com starts experiencing an increase in database load due to what we suspect was spam. In the week leading up to this event GitLab.com had been experiencing similar problems, but not this severe. One of the problems this load caused was that many users were not able to post comments on issues and merge requests. Getting the load under control took several hours.

We would later find out that part of the load was caused by a background job trying to remove a GitLab employee and their associated data. This was the result of their account being flagged for abuse and accidentally scheduled for removal. More information regarding this particular problem can be found in the issue ["Removal of users by spam should not hard delete"](https://gitlab.com/gitlab-org/gitlab-ce/issues/27581).

**± 23:00 UTC:** Due to the increased load, our PostgreSQL secondary's replication process started to lag behind. Replication then failed, as WAL segments needed by the secondary had already been removed from the primary. As GitLab.com was not using WAL archiving, the secondary had to be re-synchronised manually. This involves removing the existing data directory on the secondary and running [pg_basebackup](https://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html) to copy over the database from the primary to the secondary.

One of the engineers went to the secondary and wiped the data directory, then ran `pg_basebackup`.
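
A resync like this involves a destructive step (wiping a data directory), which is exactly where running a command on the wrong host becomes catastrophic. As a purely illustrative sketch (not GitLab's tooling; the helper name and hostname are hypothetical), a destructive maintenance script can refuse to proceed unless the machine it is running on matches the intended target:

```python
import socket

def assert_target_host(expected: str) -> None:
    """Abort before any destructive step unless we are on the intended host."""
    actual = socket.getfqdn()
    if actual != expected:
        raise RuntimeError(
            f"refusing to run on {actual!r}; expected {expected!r}")

# e.g. call assert_target_host("db2.cluster.gitlab.com")
# before wiping a data directory on what you believe is the standby
```

A guard like this does not fix replication, but it turns a wrong-host mistake into a loud, early failure instead of silent data loss.
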
Unfortunately `pg_basebackup` would hang, producing no meaningful output despite the `--verbose` option being set. After a few tries `pg_basebackup` reported that it could not connect because the master did not have enough available replication connections (as controlled by the `max_wal_senders` setting).

To resolve this our engineers decided to temporarily increase `max_wal_senders` from the default value of `3` to `32`. When applying the settings, PostgreSQL refused to restart, claiming too many semaphores were being created. This can happen when, for example, `max_connections` is set too high. In our case it was set to `8000`, a value far too high, yet it had been applied almost a year ago and had worked fine until that point. Reducing the setting to `2000` allowed PostgreSQL to restart without issues.

Unfortunately this did not resolve the problem of `pg_basebackup` not starting replication immediately. One of the engineers decided to run it under `strace` to see what it was blocking on. `strace` showed that `pg_basebackup` was hanging in a `poll` call, but provided no further clues as to why.

**± 23:30 UTC:** One of the engineers suspects that `pg_basebackup` might have created some files in the PostgreSQL data directory of the secondary during the previous attempts to run it. While normally `pg_basebackup` prints an error when this is the case, the engineer in question wasn't too sure what was going on.
Another engineer (who wasn't around at the time) would later reveal that this is normal behaviour: `pg_basebackup` waits silently until the primary starts sending replication data. Unfortunately this was not clearly documented in our [engineering runbooks](https://gitlab.com/gitlab-com/runbooks), nor in the official `pg_basebackup` documentation.

Trying to restore the replication process, an engineer proceeds to wipe the PostgreSQL database directory, believing they were doing so on the secondary. Unfortunately the command was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but by then around 300 GB of data had already been removed.

Hoping they could restore the database, the engineers involved went to look for the database backups and asked for help on Slack. Unfortunately the process of both finding and using backups failed completely.

## Broken recovery procedures

This brings us to the recovery procedures. Normally in an event like this one should be able to restore a database in relatively little time using a recent backup, though some form of data loss cannot always be prevented. For GitLab.com we have the following procedures in place:

1. Every 24 hours a backup is generated using `pg_dump`; this backup is uploaded to Amazon S3. Old backups are automatically removed after some time.
1. Every 24 hours we generate an LVM snapshot of the disk storing the production database data. This snapshot is then loaded into the staging environment, allowing us to more safely test changes without impacting our production environment. Direct access to the staging database is restricted, similar to our production database.
1. For various servers (e.g. the NFS servers storing Git data) we use Azure disk snapshots. These snapshots are taken once per 24 hours.
1. Replication between PostgreSQL hosts, used primarily for failover and not for disaster recovery.

At this point the replication process was broken and data had already been wiped from both the primary and the secondary, meaning we could not restore from either host.

### Database backups using pg_dump

When we went to look for the `pg_dump` backups, we found out they were not there. The S3 bucket was empty, and there was no recent backup to be found anywhere. Upon closer inspection we found that the backup procedure was using `pg_dump` 9.2, while our database runs PostgreSQL 9.6 (in the 9.x series, each release such as 9.2 or 9.6 counts as a major version). A difference in major versions results in `pg_dump` producing an error, terminating the backup procedure.

The difference is the result of how our Omnibus package works. We currently support both PostgreSQL 9.2 and 9.6, allowing users to upgrade (either manually or using commands provided by the package). To determine the correct version to use, the Omnibus package looks at the PostgreSQL version of the database cluster (as determined by `$PGDIR/PG_VERSION`, with `$PGDIR` being the path to the data directory). When PostgreSQL 9.6 is detected Omnibus ensures all binaries use PostgreSQL 9.6; otherwise it defaults to PostgreSQL 9.2.

The `pg_dump` procedure was executed on a regular application server, not the database server. As a result there is no PostgreSQL data directory present on these servers, so Omnibus defaulted to PostgreSQL 9.2. This in turn resulted in `pg_dump` terminating with an error.

While notifications are enabled for any cronjobs that error, these notifications are sent by email. For GitLab.com we use [DMARC](https://dmarc.org/). Unfortunately the cronjob emails did not pass DMARC validation, resulting in them being rejected by the receiving mail server.
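
Two cheap pre-flight checks in the backup job itself would have caught this chain of failures: comparing the dump tool's major version against the server's before dumping, and alerting when no recent dump exists. A minimal illustration of the version comparison (a hypothetical helper, not the actual Omnibus logic):

```python
def same_major_version(client: str, server: str) -> bool:
    """In the PostgreSQL 9.x series each release (9.2, 9.6, ...) is a major
    version, so compare the first two version components."""
    return client.split(".")[:2] == server.split(".")[:2]

# pg_dump 9.2 against a 9.6 cluster is exactly the mismatch that broke our backups
assert not same_major_version("9.2.18", "9.6.1")
assert same_major_version("9.6.1", "9.6.3")
```

In a real job the two versions would come from `pg_dump --version` and the server's `SHOW server_version`, and a mismatch would abort loudly up front rather than fail mid-dump.
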
As a result, we were never aware of the backups failing until it was too late.

### Azure disk snapshots

Azure disk snapshots are used to generate a snapshot of an entire disk. These snapshots don't make it easy to restore individual chunks of data (e.g. a lost user account), though it is possible. Their primary purpose is to restore entire disks in case of disk failure.

In Azure a snapshot belongs to a storage account, and a storage account in turn is linked to one or more hosts. Each storage account has a limit of roughly 30 TB. When restoring a snapshot using a host in the same storage account, the procedure usually completes very quickly. However, when using a host in a different storage account the procedure can take hours, if not days, to complete. For example, in one such case it took over a week to restore a snapshot. As a result we try not to rely on this system too much.

While enabled for the NFS servers, these snapshots were not enabled for any of the database servers, as we assumed that our other backup procedures were sufficient.

### LVM snapshots

The LVM snapshots are primarily used to easily copy data from our production environment to our staging environment. While this process was working as intended, the resulting snapshots are not really meant for disaster recovery. At the time of the outage we had two snapshots available:

1. The automatic snapshot created for our staging environment every 24 hours, taken almost 24 hours before the outage.
1. A snapshot created manually by one of the engineers roughly 6 hours before the outage.

When we generate a snapshot the following steps are taken:

1. Generate a snapshot of production.
1. Copy the snapshot to staging.
1. Create a new disk using this snapshot.
1. Remove all webhooks from the resulting database, to prevent them from being triggered by accident.

## Recovering GitLab.com

To recover GitLab.com we decided to use the LVM snapshot created 6 hours before the outage, as it was our only option to reduce data loss as much as possible (the alternative was to lose almost 24 hours of data). This process involved the following steps:

1. Copy the existing staging database to production; this copy would not contain any webhooks.
1. In parallel, copy the snapshot used to set up the staging database, as this snapshot might still contain the webhooks (we weren't entirely sure).
1. Set up a production database using the copy from step 1.
1. Set up a separate database using the snapshot from step 2.
1. Restore webhooks using the database set up in the previous step.
1. Increment all database sequences by 100,000 so IDs that might have been handed out before the outage cannot be re-used.
1. Gradually re-enable GitLab.com.

For our staging environment we were using Azure classic, without Premium Storage. This was done primarily to save costs, as Premium Storage is quite expensive. As a result the disks were very slow, making them the main bottleneck in the restoration process. Because LVM snapshots are stored on the hosts they are taken for, we had two options to restore data:

1. Copy over the LVM snapshot
1. Copy over the PostgreSQL data directory

In both cases the amount of data to copy would be roughly the same. Since copying over and restoring the data directory would be easier, we decided to go with that option.

Copying the data from the staging host to the production host took around 18 hours. The staging disks are network disks throttled to roughly 60 Mbps, and there is no way to move from this cheap storage tier to Premium Storage, so this was the throughput we had to work with.
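
The 18-hour figure is close to what the arithmetic predicts. As a back-of-the-envelope check (assuming the data to copy was on the order of the ~300 GB mentioned earlier):

```python
data_bytes = 300 * 10**9        # assumed size of the data to copy (~300 GB)
link_bits_per_sec = 60 * 10**6  # throttled disk throughput: 60 Mbps

bytes_per_sec = link_bits_per_sec / 8   # 60 Mbit/s is 7.5 MB/s
hours = data_bytes / bytes_per_sec / 3600
print(f"~{hours:.1f} hours")            # ~11.1 hours of raw transfer time
```

Add filesystem and protocol overhead on both ends, and 18 hours is unsurprising.
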
The bottleneck was the drives, not the network or the processors. Once the data was copied we were able to restore the database (including webhooks) to the state it was in at 17:20 UTC on January 31st.

On February 1st at 17:00 UTC we managed to restore the GitLab.com database without webhooks. Restoring webhooks was done by creating a separate staging database using the LVM snapshot, but without triggering the removal of webhooks. This allowed us to generate an SQL dump of the webhooks table and import it into the restored GitLab.com database.

Around 18:00 UTC we finished the final restoration procedures, such as restoring the webhooks and confirming everything was operating as expected.

## Publication of the outage

In the spirit of transparency we kept track of progress and notes in a [publicly visible Google document](https://docs.google.com/document/d/1GCK53YDcBWQveod9kfzW-VCxIABGiryG7_z_6jHdVik/pub). We also streamed the recovery procedure on YouTube, with a peak viewer count of around 5,000 (making the stream the #2 live stream on YouTube for several hours). The stream was used to give our users live updates about the recovery procedure. Finally, we used Twitter (<https://twitter.com/gitlabstatus>) to inform those who might not be watching the stream.

The document in question was initially private to GitLab employees and contained the name of the engineer who accidentally removed the data. While the name was added by the engineer themselves (and they had no problem with it being public), we will redact names in future cases, as other engineers may not be comfortable with their names being published.

## Data loss impact

Database data such as projects, issues, snippets, etc. created between January 31st 17:20 UTC and 23:30 UTC has been lost.
Git repositories and wikis were not removed, as they are stored separately.

It's hard to estimate exactly how much data was lost, but we believe it includes at least 5,000 projects, 5,000 comments, and roughly 700 user accounts. This only affected users of GitLab.com; self-managed instances and GitHost instances were not affected.

## Impact on GitLab itself

Since GitLab uses GitLab.com to develop GitLab, the outage made it harder for some of us to get work done. Most developers could continue working using their local Git repositories, but creating issues and the like had to be delayed. To publish the blog post ["GitLab.com Database Incident"](/blog/gitlab-dot-com-database-incident/) we used a private GitLab instance we normally use for private/sensitive workflows (e.g. security releases). This allowed us to build and deploy a new version of the website while GitLab.com was unavailable.

We also have a public monitoring website located at <https://dashboards.gitlab.com/>. Unfortunately the current setup for this website was not able to handle the load produced by users visiting it during the outage. Fortunately our internal monitoring systems (which dashboards.gitlab.com is based on) were not affected.

## Root cause analysis

To analyse the root cause of these problems we'll use a technique called ["The 5 Whys"](https://en.wikipedia.org/wiki/5_Whys). We'll break the incident up into two main problems: GitLab.com being down, and it taking a long time to restore GitLab.com.

**Problem 1:** GitLab.com was down for about 18 hours.

1. **Why was GitLab.com down?** - The database directory of the primary database was removed by accident, instead of the database directory of the secondary.
1. **Why was the database directory removed?** - Database replication stopped, requiring the secondary to be reset/rebuilt. This in turn requires that the PostgreSQL data directory is empty.
   Rebuilding the secondary required manual work, as the process was neither automated nor properly documented.
1. **Why did replication stop?** - A spike in database load caused the replication process to stop. This was due to the primary removing WAL segments before the secondary could replicate them.
1. **Why did the database load increase?** - This was caused by two events happening at the same time: an increase in spam, and a process trying to remove a GitLab employee and their associated data.
1. **Why was a GitLab employee scheduled for removal?** - The employee was reported for abuse by a troll. The current system used for responding to abuse reports makes it too easy to overlook the details of those reported. As a result the employee was accidentally scheduled for removal.

**Problem 2:** Restoring GitLab.com took over 18 hours.

1. **Why did restoring GitLab.com take so long?** - GitLab.com had to be restored using a copy of the staging database, which was hosted on slower Azure VMs in a different region.
1. **Why was the staging database needed for restoring GitLab.com?** - Azure disk snapshots were not enabled for the database servers, and the periodic database backups using `pg_dump` were not working.
1. **Why could we not fail over to the secondary database host?** - The secondary database's data was wiped as part of restoring database replication. As such it could not be used for disaster recovery.
1. **Why could we not use the standard backup procedure?** - The standard backup procedure uses `pg_dump` to perform a logical backup of the database. This procedure failed silently because it was using PostgreSQL 9.2, while GitLab.com runs on PostgreSQL 9.6.
1. **Why did the backup procedure fail silently?** - Notifications were sent upon failure, but because the emails were rejected there was no indication of failure.
   The sender was an automated process with no other means of reporting errors.
1. **Why were the emails rejected?** - They were rejected by the receiving mail server because they did not pass DMARC validation.
1. **Why were Azure disk snapshots not enabled?** - We assumed our other backup procedures were sufficient. Furthermore, restoring these snapshots can take days.
1. **Why was the backup procedure not tested on a regular basis?** - There was no ownership, so nobody was responsible for testing the procedure.

## Improving recovery procedures

We are currently working on fixing and improving our various recovery procedures. Work is split across the following issues:

1. [Overview of status of all issues listed in this blog post (#1684)](https://gitlab.com/gitlab-com/infrastructure/issues/1684)
1. [Update PS1 across all hosts to more clearly differentiate between hosts and environments (#1094)](https://gitlab.com/gitlab-com/infrastructure/issues/1094)
1. [Prometheus monitoring for backups (#1095)](https://gitlab.com/gitlab-com/infrastructure/issues/1095)
1. [Set PostgreSQL's max_connections to a sane value (#1096)](https://gitlab.com/gitlab-com/infrastructure/issues/1096)
1. [Investigate Point in time recovery & continuous archiving for PostgreSQL (#1097)](https://gitlab.com/gitlab-com/infrastructure/issues/1097)
1. [Hourly LVM snapshots of the production databases (#1098)](https://gitlab.com/gitlab-com/infrastructure/issues/1098)
1. [Azure disk snapshots of production databases (#1099)](https://gitlab.com/gitlab-com/infrastructure/issues/1099)
1. [Move staging to the ARM environment (#1100)](https://gitlab.com/gitlab-com/infrastructure/issues/1100)
1. [Recover production replica(s) (#1101)](https://gitlab.com/gitlab-com/infrastructure/issues/1101)
1. [Automated testing of recovering PostgreSQL database backups (#1102)](https://gitlab.com/gitlab-com/infrastructure/issues/1102)
1. [Improve PostgreSQL replication documentation/runbooks (#1103)](https://gitlab.com/gitlab-com/infrastructure/issues/1103)
1. [Investigate pgbarman for creating PostgreSQL backups (#1105)](https://gitlab.com/gitlab-com/infrastructure/issues/1105)
1. [Investigate using WAL-E as a means of Database Backup and Realtime Replication (#494)](https://gitlab.com/gitlab-com/infrastructure/issues/494)
1. [Build Streaming Database Restore](https://gitlab.com/gitlab-com/infrastructure/issues/1152)
1. [Assign an owner for data durability](https://gitlab.com/gitlab-com/infrastructure/issues/1163)

We are also working on setting up multiple secondaries and balancing the load amongst these hosts. More information on this can be found at:

* [Bundle pgpool-II 3.6.1 (!1251)](https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1251)
* [Connection pooling/load balancing for PostgreSQL (#259)](https://gitlab.com/gitlab-com/infrastructure/issues/259)

Our main focus is on improving disaster recovery and making it more obvious which host you are working on, rather than on preventing production engineers from running certain commands. For example, one could alias `rm` to something safer, but that would only protect against accidentally running `rm -rf /important-data`, not against disk corruption or any of the many other ways you can lose data.

An ideal environment is one in which you _can_ make mistakes but easily and quickly recover from them with minimal to no impact. This in turn requires being able to perform these procedures on a regular basis, and making it easy to test and roll back any changes. For example, we are in the process of setting up procedures that allow developers to test their database migrations.
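
One way to make migrations testable is to apply them inside a transaction on a disposable copy of the data, inspect the result, and roll back. A toy sketch of that idea (this is not GitLab's tool; it uses SQLite from the Python standard library as a stand-in database):

```python
import sqlite3

def try_migration(conn, migrate_sql):
    """Apply a migration in a transaction, record the schema, then roll back."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    cur.execute(migrate_sql)
    tables = [row[0] for row in cur.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    cur.execute("ROLLBACK")  # leave the database exactly as it was
    return tables

# autocommit mode, so we control the transaction explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY)")
seen = try_migration(conn, "CREATE TABLE webhooks (id INTEGER PRIMARY KEY)")
```

`seen` shows the migration's effect while the database itself is untouched afterwards; a real version would run against a staging snapshot rather than an in-memory database.
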
More information on these migration-testing procedures can be found in the issue ["Tool for executing and reverting Rails migrations on staging"](https://gitlab.com/gitlab-com/infrastructure/issues/811).

We're also looking into ways to build better recovery procedures for the entire GitLab.com infrastructure, not just the database, and to ensure these procedures have clear owners. The issue for this is ["Disaster recovery for everything that is not the database"](https://gitlab.com/gitlab-com/infrastructure/issues/1161).

On the monitoring front, we have also started working on a public backup monitoring dashboard, which can be found at <https://dashboards.gitlab.com/dashboard/db/postgresql-backups>. Currently this dashboard only contains data from our `pg_dump` backup procedure, but we aim to add more data over time.

One might notice that at the moment our `pg_dump` backups are 3 days old. We perform these backups on a secondary, as `pg_dump` can put quite a bit of pressure on a database. Since we are in the process of rebuilding our secondaries, the `pg_dump` backup procedure is suspended for the time being. Fear not, however: LVM snapshots are now taken every hour instead of once per 24 hours. Enabling Azure disk snapshots is something we're still looking into.

Finally, we're looking into improving our abuse reporting and response system. More information regarding this can be found in the issue ["Removal of users by spam should not hard delete"](https://gitlab.com/gitlab-org/gitlab-ce/issues/27581).

If you think there are additional measures we can take to prevent incidents like this, please let us know in the comments.

## Troubleshooting FAQ

### Some of my merge requests are shown as being open, but their commits have already been merged into the default branch. How can I resolve this?
Pushing to the default branch will automatically update the merge request so that it recognises there are no differences between the source and target branches. At this point you can safely close the merge request.

### My merge request has not yet been merged, and I am not seeing my changes. How can I resolve this?

There are three options to resolve this:

1. Close the MR and create a new one
1. Push new changes to the merge request's source branch
1. Rebase/amend, and force push to the merge request's source branch

### My GitLab Pages website was not updated. How can I solve this?

Go to your project, then "Pipelines", "New Pipeline", use "master" as the branch, then create the pipeline. This will create and start a new pipeline using your master branch, which should result in your website being updated.

### My pipelines were not executed

Most likely they were, but the database is not aware of this. To solve this, create a new pipeline using the right branch and run it.

### Some commits are not showing up

Pushing new commits should automatically solve this. Alternatively you can try force pushing to the target branch.

### I created a project after 17:20 UTC and it shows up, but my issues are gone. What happened?

Project details are stored in the database, meaning this data was lost for projects created after 17:20. We ran a procedure to restore these projects based on their Git repositories, which were still stored in our NFS cluster.
This procedure, however, was only able to restore projects in their most basic form, without associated data such as issues and merge requests.
more",{"href":81,"dataGaName":82,"dataGaLocation":39},"/why-gitlab/","why gitlab",{"text":84,"left":24,"config":85,"link":87,"lists":91,"footer":160},"Product",{"dataNavLevelOne":86},"solutions",{"text":88,"config":89},"View all Solutions",{"href":90,"dataGaName":86,"dataGaLocation":39},"/solutions/",[92,116,139],{"title":93,"description":94,"link":95,"items":100},"Automation","CI/CD and automation to accelerate deployment",{"config":96},{"icon":97,"href":98,"dataGaName":99,"dataGaLocation":39},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[101,105,108,112],{"text":102,"config":103},"CI/CD",{"href":104,"dataGaLocation":39,"dataGaName":102},"/solutions/continuous-integration/",{"text":68,"config":106},{"href":73,"dataGaLocation":39,"dataGaName":107},"gitlab duo agent platform - product menu",{"text":109,"config":110},"Source Code Management",{"href":111,"dataGaLocation":39,"dataGaName":109},"/solutions/source-code-management/",{"text":113,"config":114},"Automated Software Delivery",{"href":98,"dataGaLocation":39,"dataGaName":115},"Automated software delivery",{"title":117,"description":118,"link":119,"items":124},"Security","Deliver code faster without compromising security",{"config":120},{"href":121,"dataGaName":122,"dataGaLocation":39,"icon":123},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[125,129,134],{"text":126,"config":127},"Application Security Testing",{"href":121,"dataGaName":128,"dataGaLocation":39},"Application security testing",{"text":130,"config":131},"Software Supply Chain Security",{"href":132,"dataGaLocation":39,"dataGaName":133},"/solutions/supply-chain/","Software supply chain security",{"text":135,"config":136},"Software Compliance",{"href":137,"dataGaName":138,"dataGaLocation":39},"/solutions/software-compliance/","software 
compliance",{"title":140,"link":141,"items":146},"Measurement",{"config":142},{"icon":143,"href":144,"dataGaName":145,"dataGaLocation":39},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[147,151,155],{"text":148,"config":149},"Visibility & Measurement",{"href":144,"dataGaLocation":39,"dataGaName":150},"Visibility and Measurement",{"text":152,"config":153},"Value Stream Management",{"href":154,"dataGaLocation":39,"dataGaName":152},"/solutions/value-stream-management/",{"text":156,"config":157},"Analytics & Insights",{"href":158,"dataGaLocation":39,"dataGaName":159},"/solutions/analytics-and-insights/","Analytics and insights",{"title":161,"items":162},"GitLab for",[163,168,173],{"text":164,"config":165},"Enterprise",{"href":166,"dataGaLocation":39,"dataGaName":167},"/enterprise/","enterprise",{"text":169,"config":170},"Small Business",{"href":171,"dataGaLocation":39,"dataGaName":172},"/small-business/","small business",{"text":174,"config":175},"Public Sector",{"href":176,"dataGaLocation":39,"dataGaName":177},"/solutions/public-sector/","public sector",{"text":179,"config":180},"Pricing",{"href":181,"dataGaName":182,"dataGaLocation":39,"dataNavLevelOne":182},"/pricing/","pricing",{"text":184,"config":185,"link":187,"lists":191,"feature":271},"Resources",{"dataNavLevelOne":186},"resources",{"text":188,"config":189},"View all resources",{"href":190,"dataGaName":186,"dataGaLocation":39},"/resources/",[192,225,243],{"title":193,"items":194},"Getting started",[195,200,205,210,215,220],{"text":196,"config":197},"Install",{"href":198,"dataGaName":199,"dataGaLocation":39},"/install/","install",{"text":201,"config":202},"Quick start guides",{"href":203,"dataGaName":204,"dataGaLocation":39},"/get-started/","quick setup checklists",{"text":206,"config":207},"Learn",{"href":208,"dataGaLocation":39,"dataGaName":209},"https://university.gitlab.com/","learn",{"text":211,"config":212},"Product 
documentation",{"href":213,"dataGaName":214,"dataGaLocation":39},"https://docs.gitlab.com/","product documentation",{"text":216,"config":217},"Best practice videos",{"href":218,"dataGaName":219,"dataGaLocation":39},"/getting-started-videos/","best practice videos",{"text":221,"config":222},"Integrations",{"href":223,"dataGaName":224,"dataGaLocation":39},"/integrations/","integrations",{"title":226,"items":227},"Discover",[228,233,238],{"text":229,"config":230},"Customer success stories",{"href":231,"dataGaName":232,"dataGaLocation":39},"/customers/","customer success stories",{"text":234,"config":235},"Blog",{"href":236,"dataGaName":237,"dataGaLocation":39},"/blog/","blog",{"text":239,"config":240},"Remote",{"href":241,"dataGaName":242,"dataGaLocation":39},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":244,"items":245},"Connect",[246,251,256,261,266],{"text":247,"config":248},"GitLab Services",{"href":249,"dataGaName":250,"dataGaLocation":39},"/services/","services",{"text":252,"config":253},"Community",{"href":254,"dataGaName":255,"dataGaLocation":39},"/community/","community",{"text":257,"config":258},"Forum",{"href":259,"dataGaName":260,"dataGaLocation":39},"https://forum.gitlab.com/","forum",{"text":262,"config":263},"Events",{"href":264,"dataGaName":265,"dataGaLocation":39},"/events/","events",{"text":267,"config":268},"Partners",{"href":269,"dataGaName":270,"dataGaLocation":39},"/partners/","partners",{"backgroundColor":272,"textColor":273,"text":274,"image":275,"link":279},"#2f2a6b","#fff","Insights for the future of software development",{"altText":276,"config":277},"the source promo card",{"src":278},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":280,"config":281},"Read the latest",{"href":282,"dataGaName":283,"dataGaLocation":39},"/the-source/","the 
source",{"text":285,"config":286,"lists":287},"Company",{"dataNavLevelOne":9},[288],{"items":289},[290,295,301,303,308,313,318,323,328,333,338],{"text":291,"config":292},"About",{"href":293,"dataGaName":294,"dataGaLocation":39},"/company/","about",{"text":296,"config":297,"footerGa":300},"Jobs",{"href":298,"dataGaName":299,"dataGaLocation":39},"/jobs/","jobs",{"dataGaName":299},{"text":262,"config":302},{"href":264,"dataGaName":265,"dataGaLocation":39},{"text":304,"config":305},"Leadership",{"href":306,"dataGaName":307,"dataGaLocation":39},"/company/team/e-group/","leadership",{"text":309,"config":310},"Team",{"href":311,"dataGaName":312,"dataGaLocation":39},"/company/team/","team",{"text":314,"config":315},"Handbook",{"href":316,"dataGaName":317,"dataGaLocation":39},"https://handbook.gitlab.com/","handbook",{"text":319,"config":320},"Investor relations",{"href":321,"dataGaName":322,"dataGaLocation":39},"https://ir.gitlab.com/","investor relations",{"text":324,"config":325},"Trust Center",{"href":326,"dataGaName":327,"dataGaLocation":39},"/security/","trust center",{"text":329,"config":330},"AI Transparency Center",{"href":331,"dataGaName":332,"dataGaLocation":39},"/ai-transparency-center/","ai transparency center",{"text":334,"config":335},"Newsletter",{"href":336,"dataGaName":337,"dataGaLocation":39},"/company/contact/#contact-forms","newsletter",{"text":339,"config":340},"Press",{"href":341,"dataGaName":342,"dataGaLocation":39},"/press/","press",{"text":344,"config":345,"lists":346},"Contact us",{"dataNavLevelOne":9},[347],{"items":348},[349,352,357],{"text":46,"config":350},{"href":48,"dataGaName":351,"dataGaLocation":39},"talk to sales",{"text":353,"config":354},"Support portal",{"href":355,"dataGaName":356,"dataGaLocation":39},"https://support.gitlab.com","support portal",{"text":358,"config":359},"Customer portal",{"href":360,"dataGaName":361,"dataGaLocation":39},"https://customers.gitlab.com/customers/sign_in/","customer 
portal",{"close":363,"login":364,"suggestions":371},"Close",{"text":365,"link":366},"To search repositories and projects, login to",{"text":367,"config":368},"gitlab.com",{"href":53,"dataGaName":369,"dataGaLocation":370},"search login","search",{"text":372,"default":373},"Suggestions",[374,376,380,382,386,390],{"text":68,"config":375},{"href":73,"dataGaName":68,"dataGaLocation":370},{"text":377,"config":378},"Code Suggestions (AI)",{"href":379,"dataGaName":377,"dataGaLocation":370},"/solutions/code-suggestions/",{"text":102,"config":381},{"href":104,"dataGaName":102,"dataGaLocation":370},{"text":383,"config":384},"GitLab on AWS",{"href":385,"dataGaName":383,"dataGaLocation":370},"/partners/technology-partners/aws/",{"text":387,"config":388},"GitLab on Google Cloud",{"href":389,"dataGaName":387,"dataGaLocation":370},"/partners/technology-partners/google-cloud-platform/",{"text":391,"config":392},"Why GitLab?",{"href":81,"dataGaName":391,"dataGaLocation":370},{"freeTrial":394,"mobileIcon":399,"desktopIcon":404,"secondaryButton":407},{"text":395,"config":396},"Start free trial",{"href":397,"dataGaName":44,"dataGaLocation":398},"https://gitlab.com/-/trials/new/","nav",{"altText":400,"config":401},"Gitlab Icon",{"src":402,"dataGaName":403,"dataGaLocation":398},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":400,"config":405},{"src":406,"dataGaName":403,"dataGaLocation":398},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":408,"config":409},"Get Started",{"href":410,"dataGaName":411,"dataGaLocation":398},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":413,"mobileIcon":418,"desktopIcon":420},{"text":414,"config":415},"Learn more about GitLab Duo",{"href":416,"dataGaName":417,"dataGaLocation":398},"/gitlab-duo/","gitlab 
duo",{"altText":400,"config":419},{"src":402,"dataGaName":403,"dataGaLocation":398},{"altText":400,"config":421},{"src":406,"dataGaName":403,"dataGaLocation":398},{"freeTrial":423,"mobileIcon":428,"desktopIcon":430},{"text":424,"config":425},"Back to pricing",{"href":181,"dataGaName":426,"dataGaLocation":398,"icon":427},"back to pricing","GoBack",{"altText":400,"config":429},{"src":402,"dataGaName":403,"dataGaLocation":398},{"altText":400,"config":431},{"src":406,"dataGaName":403,"dataGaLocation":398},{"title":433,"button":434,"config":439},"See how agentic AI transforms software delivery",{"text":435,"config":436},"Watch GitLab Transcend now",{"href":437,"dataGaName":438,"dataGaLocation":39},"/events/transcend/virtual/","transcend event",{"layout":440,"icon":441},"release","AiStar",{"data":443},{"text":444,"source":445,"edit":451,"contribute":456,"config":461,"items":466,"minimal":673},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":446,"config":447},"View page source",{"href":448,"dataGaName":449,"dataGaLocation":450},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":452,"config":453},"Edit this page",{"href":454,"dataGaName":455,"dataGaLocation":450},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":457,"config":458},"Please contribute",{"href":459,"dataGaName":460,"dataGaLocation":450},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":462,"facebook":463,"youtube":464,"linkedin":465},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[467,514,568,612,639],{"title":179,"links":468,"subMenu":483},[469,473,478],{"text":470,"config":471},"View 
plans",{"href":181,"dataGaName":472,"dataGaLocation":450},"view plans",{"text":474,"config":475},"Why Premium?",{"href":476,"dataGaName":477,"dataGaLocation":450},"/pricing/premium/","why premium",{"text":479,"config":480},"Why Ultimate?",{"href":481,"dataGaName":482,"dataGaLocation":450},"/pricing/ultimate/","why ultimate",[484],{"title":485,"links":486},"Contact Us",[487,490,492,494,499,504,509],{"text":488,"config":489},"Contact sales",{"href":48,"dataGaName":49,"dataGaLocation":450},{"text":353,"config":491},{"href":355,"dataGaName":356,"dataGaLocation":450},{"text":358,"config":493},{"href":360,"dataGaName":361,"dataGaLocation":450},{"text":495,"config":496},"Status",{"href":497,"dataGaName":498,"dataGaLocation":450},"https://status.gitlab.com/","status",{"text":500,"config":501},"Terms of use",{"href":502,"dataGaName":503,"dataGaLocation":450},"/terms/","terms of use",{"text":505,"config":506},"Privacy statement",{"href":507,"dataGaName":508,"dataGaLocation":450},"/privacy/","privacy statement",{"text":510,"config":511},"Cookie preferences",{"dataGaName":512,"dataGaLocation":450,"id":513,"isOneTrustButton":24},"cookie preferences","ot-sdk-btn",{"title":84,"links":515,"subMenu":524},[516,520],{"text":517,"config":518},"DevSecOps platform",{"href":66,"dataGaName":519,"dataGaLocation":450},"devsecops platform",{"text":521,"config":522},"AI-Assisted Development",{"href":416,"dataGaName":523,"dataGaLocation":450},"ai-assisted development",[525],{"title":526,"links":527},"Topics",[528,533,538,543,548,553,558,563],{"text":529,"config":530},"CICD",{"href":531,"dataGaName":532,"dataGaLocation":450},"/topics/ci-cd/","cicd",{"text":534,"config":535},"GitOps",{"href":536,"dataGaName":537,"dataGaLocation":450},"/topics/gitops/","gitops",{"text":539,"config":540},"DevOps",{"href":541,"dataGaName":542,"dataGaLocation":450},"/topics/devops/","devops",{"text":544,"config":545},"Version 
Control",{"href":546,"dataGaName":547,"dataGaLocation":450},"/topics/version-control/","version control",{"text":549,"config":550},"DevSecOps",{"href":551,"dataGaName":552,"dataGaLocation":450},"/topics/devsecops/","devsecops",{"text":554,"config":555},"Cloud Native",{"href":556,"dataGaName":557,"dataGaLocation":450},"/topics/cloud-native/","cloud native",{"text":559,"config":560},"AI for Coding",{"href":561,"dataGaName":562,"dataGaLocation":450},"/topics/devops/ai-for-coding/","ai for coding",{"text":564,"config":565},"Agentic AI",{"href":566,"dataGaName":567,"dataGaLocation":450},"/topics/agentic-ai/","agentic ai",{"title":569,"links":570},"Solutions",[571,573,575,580,584,587,591,594,596,599,602,607],{"text":126,"config":572},{"href":121,"dataGaName":126,"dataGaLocation":450},{"text":115,"config":574},{"href":98,"dataGaName":99,"dataGaLocation":450},{"text":576,"config":577},"Agile development",{"href":578,"dataGaName":579,"dataGaLocation":450},"/solutions/agile-delivery/","agile delivery",{"text":581,"config":582},"SCM",{"href":111,"dataGaName":583,"dataGaLocation":450},"source code management",{"text":529,"config":585},{"href":104,"dataGaName":586,"dataGaLocation":450},"continuous integration & delivery",{"text":588,"config":589},"Value stream management",{"href":154,"dataGaName":590,"dataGaLocation":450},"value stream management",{"text":534,"config":592},{"href":593,"dataGaName":537,"dataGaLocation":450},"/solutions/gitops/",{"text":164,"config":595},{"href":166,"dataGaName":167,"dataGaLocation":450},{"text":597,"config":598},"Small business",{"href":171,"dataGaName":172,"dataGaLocation":450},{"text":600,"config":601},"Public sector",{"href":176,"dataGaName":177,"dataGaLocation":450},{"text":603,"config":604},"Education",{"href":605,"dataGaName":606,"dataGaLocation":450},"/solutions/education/","education",{"text":608,"config":609},"Financial services",{"href":610,"dataGaName":611,"dataGaLocation":450},"/solutions/finance/","financial 
services",{"title":184,"links":613},[614,616,618,620,623,625,627,629,631,633,635,637],{"text":196,"config":615},{"href":198,"dataGaName":199,"dataGaLocation":450},{"text":201,"config":617},{"href":203,"dataGaName":204,"dataGaLocation":450},{"text":206,"config":619},{"href":208,"dataGaName":209,"dataGaLocation":450},{"text":211,"config":621},{"href":213,"dataGaName":622,"dataGaLocation":450},"docs",{"text":234,"config":624},{"href":236,"dataGaName":237,"dataGaLocation":450},{"text":229,"config":626},{"href":231,"dataGaName":232,"dataGaLocation":450},{"text":239,"config":628},{"href":241,"dataGaName":242,"dataGaLocation":450},{"text":247,"config":630},{"href":249,"dataGaName":250,"dataGaLocation":450},{"text":252,"config":632},{"href":254,"dataGaName":255,"dataGaLocation":450},{"text":257,"config":634},{"href":259,"dataGaName":260,"dataGaLocation":450},{"text":262,"config":636},{"href":264,"dataGaName":265,"dataGaLocation":450},{"text":267,"config":638},{"href":269,"dataGaName":270,"dataGaLocation":450},{"title":285,"links":640},[641,643,645,647,649,651,653,657,662,664,666,668],{"text":291,"config":642},{"href":293,"dataGaName":9,"dataGaLocation":450},{"text":296,"config":644},{"href":298,"dataGaName":299,"dataGaLocation":450},{"text":304,"config":646},{"href":306,"dataGaName":307,"dataGaLocation":450},{"text":309,"config":648},{"href":311,"dataGaName":312,"dataGaLocation":450},{"text":314,"config":650},{"href":316,"dataGaName":317,"dataGaLocation":450},{"text":319,"config":652},{"href":321,"dataGaName":322,"dataGaLocation":450},{"text":654,"config":655},"Sustainability",{"href":656,"dataGaName":654,"dataGaLocation":450},"/sustainability/",{"text":658,"config":659},"Diversity, inclusion and belonging (DIB)",{"href":660,"dataGaName":661,"dataGaLocation":450},"/diversity-inclusion-belonging/","Diversity, inclusion and 
belonging",{"text":324,"config":663},{"href":326,"dataGaName":327,"dataGaLocation":450},{"text":334,"config":665},{"href":336,"dataGaName":337,"dataGaLocation":450},{"text":339,"config":667},{"href":341,"dataGaName":342,"dataGaLocation":450},{"text":669,"config":670},"Modern Slavery Transparency Statement",{"href":671,"dataGaName":672,"dataGaLocation":450},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":674},[675,678,681],{"text":676,"config":677},"Terms",{"href":502,"dataGaName":503,"dataGaLocation":450},{"text":679,"config":680},"Cookies",{"dataGaName":512,"dataGaLocation":450,"id":513,"isOneTrustButton":24},{"text":682,"config":683},"Privacy",{"href":507,"dataGaName":508,"dataGaLocation":450},[685],{"id":686,"title":687,"body":8,"config":688,"content":690,"description":8,"extension":22,"meta":693,"navigation":24,"path":694,"seo":695,"stem":696,"__hash__":697},"blogAuthors/en-us/blog/authors/gitlab.yml","Gitlab",{"template":689},"BlogAuthor",{"name":18,"config":691},{"headshot":692,"ctfId":18},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749659488/Blog/Author%20Headshots/gitlab-logo-extra-whitespace.png",{},"/en-us/blog/authors/gitlab",{},"en-us/blog/authors/gitlab","XCBKIcPoCs6zi2oHG7o-bAp52Jhaw8_zGhIJ2jNrEjU",[699,712,722],{"content":700,"config":710},{"title":701,"description":702,"authors":703,"heroImage":705,"date":706,"body":707,"category":9,"tags":708,"updatedDate":706},"GitLab names Bill Staples as new CEO","Co-founder Sid Sijbrandij transitions to Executive Chair of the Board.",[704],"Sid Sijbrandij","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749665388/Blog/Hero%20Images/Revised2.png","2024-12-05","__This message from Sid Sijbrandij and Bill Staples was shared with GitLab team members earlier today.__ \n\n__Sid:__ On today’s earnings call, I announced that I am stepping down as CEO and will remain Executive Chair of the Board. 
I also introduced GitLab’s new CEO, [Bill Staples](https://www.linkedin.com/in/williamstaples/). \n\nAs a Board, we routinely do succession planning. This includes conversations with a number of top executives. We’ve been having these conversations in greater earnest since my cancer returned. Through these discussions, we identified someone uniquely qualified to lead GitLab. I want more time to focus on my cancer treatment and health. My treatments are going well, my cancer is not metastatic, and I'm working towards making a full recovery. Stepping down from a role that I love is not easy, but I believe that it is the right decision for GitLab.\n\nI couldn't be more excited to introduce you to Bill Staples, who will be leading GitLab into its next chapter. Bill will be GitLab’s CEO, effective today. He will also join the GitLab Board as a Director. Bill was most recently a public company CEO at New Relic. During his time there, he significantly increased the value of the company by accelerating revenue and driving increased profitability. He also brings decades of experience in leadership roles at Adobe and Microsoft. When I began speaking with Bill, I was immediately drawn to his customer-centric approach and deep product expertise. As I got to know him further, I knew that his shared value system made him the right person for this role, for our team members, for our customers, and for our shareholders. I feel fortunate that GitLab has found someone with a great leadership track record and strong DevOps expertise to lead GitLab into the future.\n\nWe have come so far from the early days when we launched GitLab.com. We have created the DevOps category and are the leader in the Gartner Magic Quadrant for both vision and execution. Millions of people now use GitLab to deliver software faster and more efficiently. We have integrated AI, Security, and Compliance into our platform to offer our enterprise customers the strongest AI-powered DevSecOps solution. 
We have also built GitLab in collaboration with our contributors. Last quarter, we had an all-time high of an estimated 1,800 code contributions from the wider community. It is incredible that as GitLab grew, our contributor community grew with us. We have done all of this while being a values-driven company, leading in all-remote work, championing transparency through our public handbook and culture, and co-creating with the wider community.\n\nI feel many things today, but more than anything else, I am grateful. I want to thank our customers. Driving results for them has been at the core of GitLab’s values, and I greatly appreciate their trust in us. I want to thank the wider GitLab community for their trust and enthusiasm. Their tens of thousands of contributions have greatly enhanced GitLab and its value for all users. Thank you, GitLab team members. Your contributions are at the core of GitLab’s success and the value we drive for our customers. Thank you, E-Group. You are amazing partners and collaborators in leading GitLab and our team members to achieve our very best. Thank you, GitLab Board. I have appreciated your support throughout my time as CEO and look forward to our ongoing partnership as I continue to serve as Executive Chair. And, thank you, Bill. I am excited for you to lead our next phase of growth. I am here to support you and the company in GitLab’s next chapter!\n\nI couldn't be more thrilled about Bill and what's ahead for GitLab with him at the helm. We have an incredible opportunity in front of us. Software has never mattered more, and GitLab is well-positioned to be the platform that best enables folks to create, secure, and operate it. I look forward to staying part of the company and being actively involved wherever Bill can use me. \n\n__Bill:__ Thanks, Sid, for the warm welcome! I greatly admire you and what you have accomplished. 
Very few people in the world have built a $10B market-cap technology company, taken it public, and scaled it to $750M in run-rate revenue. You have done incredible things with GitLab, and I’m grateful you will continue to play a meaningful role in the company. I appreciate your trust in me and commit to building upon the successes you and others should rightfully celebrate. \n\nI am so excited about GitLab and the opportunity ahead of us. Over the coming decade, we will see software-driven transformation around the world as AI accelerates and transforms the software revolution already in motion. GitLab and our mission are going to be more important than ever. I look forward to working with this team to scale GitLab well beyond where it is today.\n",[709],"news",{"slug":711,"featured":24,"template":13},"gitlab-names-bill-staples-as-new-ceo",{"content":713,"config":720},{"title":714,"description":715,"authors":716,"heroImage":717,"date":718,"body":719,"category":9},"Our Privacy Policy has been updated","Our updated Privacy Policy clarifies our existing data processing activities.",[18],"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749664472/Blog/Hero%20Images/gitlabflatlogomap.png","2023-06-14","As part of our commitment to keeping our policies current, we made some updates to our [Privacy Policy](/privacy/) on June 14, 2022.  These updates are intended to clarify our existing data processing activities and to provide information on processing that may derive from new features.  Through this update, we continue to provide transparency to our data processing activities, in line with an evolving privacy landscape.  
Specifically, these policy updates include the following:\n\n- Clarification about which processing activities apply to each respective GitLab product;\n- Information about when personal data may be collected to verify someone’s identity to enable certain product features;\n- Clarification about what personal data is collected to provide a license and maintain a subscription; \n- Additional information regarding our Service Usage data collection practices, and the inclusion of certain processing activities, such as Event Analytics and Call Recordings;\n- Additional information regarding the purposes for which personal data is collected;\n- Minor updates regarding our legal basis for processing your personal data in the European Union; \n- Updates to our data retention practices for inactive accounts; \n- Clarification about how to delete your personal data at GitLab and how deletion is effectuated for public projects; \n- An additional notice that details our processing and your rights under the California Consumer Privacy Act, including CCPA metrics reporting;\n\nOverall, we believe that these updates will empower our users to make informed decisions about their personal data.  
Please visit the complete text of our Privacy Policy and [Cookie Policy](/privacy/cookies/) to learn more about how GitLab processes personal data and your rights and choices regarding such processing.\n",{"slug":721,"featured":12,"template":13},"our-privacy-policy-has-been-updated",{"content":723,"config":734},{"title":724,"description":725,"authors":726,"heroImage":728,"date":729,"body":730,"category":9,"tags":731},"Rate limitations for unauthorized users of the Projects List API","Learn details about upcoming changes for unauthenticated users of the Projects List API.",[727],"Christina Lohr","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749664087/Blog/Hero%20Images/tanukicover.jpg","2023-04-10","\n\nStarting on May 22 for self-managed GitLab, and May 8 for GitLab.com, unauthenticated users will be subject to rate limitations when using the Projects List API. This change has been made to ensure the stability and reliability of our platform for all users.\n\n**Note:** Authenticated users are not affected by this change.\n\n## What is the Projects List API?\n\nThe Projects List API provides information about GitLab projects, including name, description, and other metadata. This API is widely used by our community, including researchers, developers, and integrators, to retrieve and analyze information about GitLab projects. We value this usage and aim to support it as much as possible.\n\n## Rate limitation details\n\nIn recent months, we have observed that the frequency and intensity of requests made by unauthenticated, also known as anonymous, users to the Projects List API have increased significantly. This has resulted in an increased load on our servers, which has impacted the performance and stability of our platform for all users. 
To address this issue, we have decided to introduce rate limitations for unauthenticated users.\n\nAs a consequence of this change, unauthenticated users of the Projects List API will be limited to 400 requests per 10 minutes per unique IP address on GitLab.com. If an unauthenticated user exceeds this limit, the user will receive a \"429 Too Many Requests\" response. On GitLab.com, this limit cannot be changed. Users of self-managed GitLab instances have the same rate limitation set by default, but [admins can change the rate limits](https://docs.gitlab.com/ee/administration/settings/rate_limit_on_projects_api.html#rate-limit-on-projects-api) as they see fit via the UI or the application settings API. They can also set the rate limit to zero, which acts as if there is no rate limitation at all.\n\nWe understand that this change may impact some of our users who rely on the Projects List API, and we apologize for any inconvenience this may cause. We encourage users who need to make more than 400 requests per 10 minutes to the Projects List API to [sign up for a GitLab account](/pricing/), which provides higher rate limits and other benefits, such as access to additional APIs and integrations.\n\nIf you have any questions or concerns about this change, please do not hesitate to [leave feedback in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/404611).\n",[732,733,709],"product","features",{"slug":735,"featured":12,"template":13},"rate-limitation-for-unauthorized-users-projects-list-api",{"promotions":737},[738,752,763],{"id":739,"categories":740,"header":742,"text":743,"button":744,"image":749},"ai-modernization",[741],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":745,"config":746},"Get your AI maturity score",{"href":747,"dataGaName":748,"dataGaLocation":237},"/assessments/ai-modernization-assessment/","modernization 
assessment",{"config":750},{"src":751},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":753,"categories":754,"header":755,"text":743,"button":756,"image":760},"devops-modernization",[732,552],"Are you just managing tools or shipping innovation?",{"text":757,"config":758},"Get your DevOps maturity score",{"href":759,"dataGaName":748,"dataGaLocation":237},"/assessments/devops-modernization-assessment/",{"config":761},{"src":762},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":764,"categories":765,"header":767,"text":743,"button":768,"image":772},"security-modernization",[766],"security","Are you trading speed for security?",{"text":769,"config":770},"Get your security maturity score",{"href":771,"dataGaName":748,"dataGaLocation":237},"/assessments/security-modernization-assessment/",{"config":773},{"src":774},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"header":776,"blurb":777,"button":778,"secondaryButton":783},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":779,"config":780},"Get your free trial",{"href":781,"dataGaName":44,"dataGaLocation":782},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":488,"config":784},{"href":48,"dataGaName":49,"dataGaLocation":782},1772652077902]