[{"data":1,"prerenderedAt":792},["ShallowReactive",2],{"/en-us/blog/how-we-diagnosed-and-resolved-redis-latency-spikes":3,"navigation-en-us":38,"banner-en-us":438,"footer-en-us":448,"blog-post-authors-en-us-Matt Smiley":688,"blog-related-posts-en-us-how-we-diagnosed-and-resolved-redis-latency-spikes":702,"assessment-promotions-en-us":743,"next-steps-en-us":782},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":26,"isFeatured":12,"meta":27,"navigation":28,"path":29,"publishedDate":20,"seo":30,"stem":34,"tagSlugs":35,"__hash__":37},"blogPosts/en-us/blog/how-we-diagnosed-and-resolved-redis-latency-spikes.yml","How We Diagnosed And Resolved Redis Latency Spikes",[7],"matt-smiley",null,"engineering",{"slug":11,"featured":12,"template":13},"how-we-diagnosed-and-resolved-redis-latency-spikes",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9,"tags":22},"How we diagnosed and resolved Redis latency spikes with BPF and other tools","How we uncovered a three-phase cycle involving two distinct saturation points and a simple fix to break that cycle.",[18],"Matt Smiley","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667913/Blog/Hero%20Images/clocks.jpg","2022-11-28","If you enjoy performance engineering and peeling back abstraction layers to ask underlying subsystems to explain themselves, this article’s for you. The context is a chronic Redis latency problem, and you are about to tour a practical example of using BPF and profiling tools in concert with standard metrics to reveal unintuitive behaviors of a complex system.\n\nBeyond the tools and techniques, we also use an iterative hypothesis-testing approach to compose a behavior model of the system dynamics. This model tells us what factors influence the problem's severity and triggering conditions.\n\nUltimately, we find the root cause, and its remedy is delightfully boring and effective. We uncover a three-phase cycle involving two distinct saturation points and a simple fix to break that cycle. Along the way, we inspect aspects of the system’s behavior using stack sampling profiles, heat maps and flamegraphs, experimental tuning, source and binary analysis, instruction-level BPF instrumentation, and targeted latency injection under specific entry and exit conditions.\n\nIf you are short on time, the takeaways are summarized at the end. But the journey is the fun part, so let's dig in!\n\n## Introducing the problem: Chronic latency\n\nGitLab makes extensive use of Redis, and, on GitLab.com SaaS, we use [separate Redis clusters](https://handbook.gitlab.com/handbook/engineering/infrastructure/production/architecture/#redis-architecture) for certain functions. This tale concerns a Redis instance acting exclusively as a least recently used (LRU) cache.\n\nThis cache had a chronic latency problem that started occurring intermittently over two years ago and in recent months had become significantly worse: Every few minutes, it suffered from bursts of very high latency and corresponding throughput drop, eating into its Service Level Objective (SLO). 
These latency spikes impacted user-facing response times and [burned error budgets](https://gitlab.com/gitlab-org/gitlab/-/issues/360578#note_966597336) for dependent features, and this is what we aimed to solve.

**Graph:** Spikes in the rate of extremely slow (> 1 second) Redis requests, each corresponding to an eviction burst

![Graph showing spikes in the slow request rate every few minutes](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/00_redis_slow_request_rate_spikes_during_each_eviction_burst.png)

In prior work, we had already completed several mitigating optimizations. These sufficed for a while, but organic growth had resurfaced this as an important [long-term scaling problem](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#why-is-it-important-to-get-to-the-root-of-the-latency-spikes). We had also already ruled out externally triggered causes, such as request floods, connection rate spikes, host-level resource contention, etc. These latency spikes were consistently associated with memory usage reaching the eviction threshold (`maxmemory`), not with changes in client traffic patterns or other processes competing with Redis for CPU time, memory bandwidth, or network I/O.

We [initially thought](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1567) that Redis 6.2’s new [eviction throttling mechanism](https://github.com/redis/redis/pull/7653) might alleviate our eviction burst overhead. It did not. That mechanism solves a different problem: It prevents a stall condition where a single call to `performEvictions` could run arbitrarily long. In contrast, during this analysis we [discovered](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_977816216) that our problem (both before and after upgrading Redis) was related to numerous calls collectively reducing Redis throughput, rather than a few extremely slow calls causing a complete stall.

To discover our bottleneck and its potential solutions, we needed to investigate Redis’s behavior during our workload’s eviction bursts.

## A little background on Redis evictions

At the time, our cache was oversubscribed, trying to hold more cache keys than the [configured `maxmemory` threshold](https://redis.io/docs/reference/eviction/) allows, so evictions from the LRU cache were expected. But the dense concentration of that eviction overhead was surprising and troubling.

Redis is essentially single-threaded. With a few exceptions, the “main” thread does almost all tasks serially, including handling client requests and evictions, among other things. Spending more time on X means there is less remaining time to do Y, so think about queuing behavior as the story unfolds.
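For orientation, the two settings at the heart of this story look like the following `redis.conf` sketch. The values are illustrative, not our production configuration:

```text
maxmemory 64gb                 # memory budget; evictions begin at this threshold
maxmemory-policy allkeys-lru   # evict approximately least-recently-used keys
```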
Whenever Redis reaches its `maxmemory` threshold, it frees memory by evicting some keys, aiming to do just enough evictions to get back under `maxmemory`. However, contrary to expectation, the metrics for memory usage and eviction rate (shown below) indicated that instead of a continuous steady eviction rate, there were abrupt burst events that freed much more memory than expected. After each eviction burst, no evictions occurred until memory usage climbed back up to the `maxmemory` threshold again.

**Graph:** Redis memory usage drops by 300-500 MB during each eviction burst:

![Memory usage repeatedly rises gradually to 64 GB and then abruptly drops](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/01_redis_memory_usage_dips_during_eviction_bursts.png)

**Graph:** Key eviction spikes match the timing and size of the memory usage dips shown above

![Eviction counter shows a large spike each time the previous graph showed a large memory usage drop](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/02_redis_eviction_bursts.png)

This apparent excess of evictions became the central mystery. Initially, we thought answering that question might reveal a way to smooth the eviction rate, spreading out the overhead and avoiding the latency spikes. Instead, we discovered that these bursts are an interaction effect that we need to avoid, but more on that later.

## Eviction bursts cause CPU saturation

As shown above, we had found that these latency spikes correlated perfectly with large spikes in the cache’s eviction rate, but we did not yet understand why the evictions were concentrated into bursts that lasted a few seconds and occurred every few minutes.

As a first step, we wanted to verify a causal relationship between eviction bursts and latency spikes.

To test this, we used [`perf`](https://www.brendangregg.com/perf.html) to run a CPU sampling profile on the Redis main thread. Then we applied a filter to split that profile, isolating the samples where it was calling the [`performEvictions` function](https://github.com/redis/redis/blob/6.2.6/src/evict.c#L512). Using [`flamescope`](https://github.com/Netflix/flamescope), we can visualize the profile’s CPU usage as a [subsecond offset heat map](https://www.brendangregg.com/HeatMaps/subsecondoffset.html), where each second on the X axis is folded into a column of 20 msec buckets along the Y axis. This visualization style highlights sub-second activity patterns. Comparing these two heat maps confirmed that during an eviction burst, `performEvictions` is starving all other main thread code paths for CPU time.
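The capture step looked roughly like the following sketch. These are standard `perf` invocations, but this is a simplified reconstruction rather than the exact commands we ran (those are in the walkthrough linked below):

```shell
# Sample on-CPU stacks of the Redis main thread at 99 Hz for 60 seconds.
# (For the main thread, the thread ID equals the process ID.)
sudo perf record -F 99 -g -t "$( pgrep -o redis-server )" -- sleep 60

# Export the samples as text, then load the file into flamescope to render
# the subsecond-offset heat map and flamegraphs of selected timespans.
sudo perf script --header > redis_main_thread.stacks
```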
**Graph:** Redis main thread CPU time, excluding calls to `performEvictions`

![Heat map shows one large gap and two small gaps in an otherwise uniform pattern of 70 percent to 80 percent CPU usage](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/03_heat_map_of_redis_main_thread_during_eviction_burst__excluding_performEvictions.png)

**Graph:** Remainder of the same profile, showing only the calls to `performEvictions`

![This heat map shows the gaps in the previous heat map were CPU time spent performing evictions](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/04_heat_map_of_redis_main_thread_during_eviction_burst__only_performEvictions.png)

These results confirm that eviction bursts are causing CPU starvation on the main thread, which acts as a throughput bottleneck and increases Redis’s response time latency. These CPU utilization bursts typically lasted a few seconds, so they were too short-lived to trigger alerts but were still user impacting.

For context, the following flamegraph shows where `performEvictions` spends its CPU time. There are a few interesting things here, but the most important takeaways are:
* It gets called synchronously by `processCommand` (which handles all client requests).
* It handles many of its own deletes. Despite its name, the `dbAsyncDelete` function only delegates deletes to a helper thread under certain conditions, which turn out to be rare for this workload.

![Flamegraph of calls to function performEvictions, as described above](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/05_flamegraph_of_redis_main_thread_during_eviction_burst__only_performEvictions.png)

For more details on this analysis, see the [walkthrough and methodology](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_854745083).

## How fast are individual calls to `performEvictions`?

Each incoming request to Redis is handled by a call to `processCommand`, and it always concludes by calling the `performEvictions` function. That call to `performEvictions` is frequently a no-op, returning immediately after checking that the `maxmemory` threshold has not been breached. But when the threshold is exceeded, it will continue evicting keys until it either reaches its `mem_tofree` goal or exceeds its configured time limit per call.

The CPU heat maps shown earlier proved that `performEvictions` calls were collectively consuming a large majority of CPU time for up to several seconds.

To complement that, we also measured the wall clock time of individual calls.

Using the `funclatency` CLI tool (part of the [BCC suite of BPF tools](https://github.com/iovisor/bcc)), we measured call duration by instrumenting entry and exit from the `performEvictions` function and aggregated those measurements into a [histogram](https://en.wikipedia.org/wiki/Histogram) at 1-second intervals. When no evictions were occurring, the calls were consistently low latency (4-7 usecs/call). This is the no-op case described above (including 2.5 usecs/call of instrumentation overhead).
But during an eviction burst, the results shift to a bimodal distribution, including a combination of the fast no-op calls along with much slower calls that are actively performing evictions:

```shell
$ sudo funclatency-bpfcc --microseconds --timestamp --interval 1 --duration 600 --pid $( pgrep -o redis-server ) '/opt/gitlab/embedded/bin/redis-server:performEvictions'
...
23:54:03
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 576      |************                            |
         4 -> 7          : 1896     |****************************************|
         8 -> 15         : 392      |********                                |
        16 -> 31         : 84       |*                                       |
        32 -> 63         : 62       |*                                       |
        64 -> 127        : 94       |*                                       |
       128 -> 255        : 182      |***                                     |
       256 -> 511        : 826      |*****************                       |
       512 -> 1023       : 750      |***************                         |
```

This measurement also directly confirmed and quantified the throughput drop in Redis requests handled per second: The call rate to `performEvictions` (and hence to `processCommand`) dropped to 20% of its norm from before the evictions began, from 25K to 5K calls per second.

This has a huge impact on clients: New requests are arriving at 5x the rate they are being completed. And crucially, we will see soon that this asymmetry is what drives the eviction burst.

For more details on this analysis, see the [safety check](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_857869826) for instrumentation overhead and the [results walkthrough](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_857907521). And for more general reference, the BPF instrumentation overhead estimate is based on these [benchmark results](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1383).

## Experiment: Can tuning mitigate eviction-driven CPU saturation?

The analyses so far had shown that evictions were severely starving the Redis main thread for CPU time. There were still important unanswered questions (which we will return to shortly), but this was already enough info to [suggest some experiments](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_859236777) to test potential mitigations:
* Can we spread out the eviction overhead so it takes longer to reach its goal but consumes a smaller percentage of the main thread’s time?
* Are evictions freeing more memory than expected due to scheduling a lot of keys to be asynchronously deleted by the [lazyfree mechanism](https://github.com/redis/redis/blob/6.2.6/redis.conf#L1079)? Lazyfree is an optional feature that lets the Redis main thread [delegate to an async helper thread](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_859236777) the expensive task of deleting keys that have more than 64 elements. These async evictions do not count immediately towards the eviction loop’s memory goal, so if many keys qualify for lazyfree, this could potentially drive many extra iterations of the eviction loop.
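Both questions map onto standard Redis settings that can be changed at runtime. A sketch of the kind of adjustments we tested (the exact procedure is in the experiment linked below):

```shell
# Spread out eviction work per call (0 = least aggressive per-call effort).
redis-cli config set maxmemory-eviction-tenacity 0

# Make evicted keys be deleted synchronously on the main thread.
redis-cli config set lazyfree-lazy-eviction no

# Verify the current values.
redis-cli config get maxmemory-eviction-tenacity
redis-cli config get lazyfree-lazy-eviction
```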
The [answers](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7172#note_971197943) to both turned out to be no:
* Reducing `maxmemory-eviction-tenacity` to its minimum setting still did not make `performEvictions` cheap enough to avoid accumulating a request backlog. It did increase response rate, but arrival rate still far exceeded it, so this was not an effective mitigation.
* Disabling `lazyfree-lazy-eviction` did not prevent the eviction burst from dropping memory usage far below `maxmemory`. Those lazyfrees represent a small percentage of reclaimed memory. This rules out one of the potential explanations for the mystery of excessive memory being freed.

Having ruled out two potential mitigations and one candidate hypothesis, at this point we return to the pivotal question: Why are several hundred extra megabytes of memory being freed by the end of each eviction burst?

## Why do evictions occur in bursts and free too much memory?

Each round of eviction aims to free just barely enough memory to get back under the `maxmemory` threshold.

With a steady rate of demand for new memory allocations, the eviction rate should be similarly steady. The rate of arriving cache writes does appear to be steady. So why are evictions happening in dense bursts, rather than smoothly? And why do they reduce memory usage on a scale of hundreds of megabytes rather than hundreds of bytes?

Some potential explanations to explore:
* Do evictions only end when a large key gets evicted, spontaneously freeing enough memory to skip evictions for a while? No, the memory usage drop is far bigger than the largest keys in the dataset.
* Do deferred lazyfree evictions cause the eviction loop to overshoot its goal, freeing more memory than intended? No, the above experiment disproved this hypothesis.
* Is something causing the eviction loop to sometimes calculate an unexpectedly large value for its `mem_tofree` goal? We explore this next. The answer is no, but checking it led to a new insight.
* Is a feedback loop causing evictions to become somehow self-amplifying? If so, what conditions lead to entering and leaving this state? This turned out to be correct.

These were all plausible and testable hypotheses, and each would point towards a different solution to the eviction-driven latency problem.

The first two hypotheses we have already eliminated.

To test the next two, we built custom BPF instrumentation to peek at the calculation of `mem_tofree` at the start of each call to `performEvictions`.

## Observing the `mem_tofree` calculation with `bpftrace`

This part of the investigation was a personal favorite and led to a critical realization about the nature of the problem.

As noted above, our two remaining hypotheses were:
* an unexpectedly large `mem_tofree` goal
* a self-amplifying feedback loop

To differentiate between them, we used [`bpftrace`](https://github.com/iovisor/bpftrace) to instrument the calculation of `mem_tofree`, looking at its input variables and results.

This set of measurements directly tests the following:
* Does each call to `performEvictions` aim to free a small amount of memory -- perhaps roughly the size of an average cache entry?
If `mem_tofree` ever approaches hundreds of megabytes, that would confirm the first hypothesis and reveal what part of the calculation was causing that large value. Otherwise, it rules out the first hypothesis and makes the feedback loop hypothesis more likely.
* Does the replication buffer size significantly influence `mem_tofree` as a feedback mechanism? Each eviction adds to this buffer, just like normal writes do. If this buffer grows large (possibly partly due to evictions) and then abruptly shrinks (due to the peer consuming it), that would cause a spontaneous large drop in memory usage, ending evictions and instantly reducing memory usage. This is one potential way for evictions to drive a feedback loop.

To peek at the values of the `mem_tofree` calculation ([script](https://gitlab.com/gitlab-com/gl-infra/scalability/uploads/cab2cd03231f8dd4819f77b44d768cb9/redis_snoop.getMaxmemoryState.sha_25a228b839a93a1395907a03f83e1eee448b0f14.production_thresholds.bt)), we needed to isolate the [correct call from `performEvictions`](https://github.com/redis/redis/blob/6.2.6/src/evict.c#L523) to the [`getMaxmemoryState`](https://github.com/redis/redis/blob/6.2.6/src/evict.c#L374-L407) function and reverse engineer its assembly to find the right instruction and register to instrument for each of the source code level variables that we wanted to capture. From that data we generated histograms for each of the following variables:

```text
mem_reported = zmalloc_used_memory()        // All used memory tracked by jemalloc
overhead = freeMemoryGetNotCountedMemory()  // Replication output buffers + AOF buffer
mem_used = mem_reported - overhead          // Non-exempt used memory
mem_tofree = mem_used - maxmemory           // Eviction goal
```

_Caveat:_ Our [custom BPF instrumentation](https://gitlab.com/gitlab-com/gl-infra/scalability/uploads/cab2cd03231f8dd4819f77b44d768cb9/redis_snoop.getMaxmemoryState.sha_25a228b839a93a1395907a03f83e1eee448b0f14.production_thresholds.bt) is specific to this particular build of the `redis-server` binary, since it attaches to virtual addresses that are likely to change the next time Redis is compiled. But the approach generalizes. Treat this as a concrete example of using BPF to inspect source code variables in the middle of a function call without having to rebuild the binary. Because we are peeking at the function’s intermediate state and because the compiler inlined this function call, we needed to do binary analysis to find the correct instrumentation points. In general, peeking at a function’s arguments or return value is easier and more portable, but in this case it would not suffice.
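The essence of the technique fits in a few lines of `bpftrace`. In the sketch below, the probe offset and register name are placeholders; the real script derived them from disassembling our specific `redis-server` build:

```shell
# Hypothetical sketch: attach a uprobe at the instruction offset where
# mem_tofree is live in a known register, and aggregate its value into a
# histogram. Offset 0x123456 and register "ax" are placeholders.
sudo bpftrace -e '
uprobe:/opt/gitlab/embedded/bin/redis-server:0x123456
{
    @mem_tofree = hist(reg("ax"));
}
interval:s:1
{
    print(@mem_tofree);
    clear(@mem_tofree);
}'
```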
The results:
* Ruled out the first hypothesis: Each call to `performEvictions` had a small target value (`mem_tofree` < 2 MB). This means each call to `performEvictions` did a small amount of work. Redis’s mysterious rapid drop in memory usage cannot have been caused by an abnormally large `mem_tofree` target evicting a big batch of keys all at once. Instead, there must be many calls collectively driving down memory usage.
* The replication output buffers remained consistently small, ruling out one of the potential feedback loop mechanisms.
* Surprisingly, `mem_tofree` was usually 16 KB to 64 KB, which is larger than a typical cache entry. This size discrepancy hints that cache keys may not be the main source of the memory pressure perpetuating the eviction burst once it begins.

All of the above results were consistent with the feedback loop hypothesis.

In addition to answering the initial questions, we got a bonus outcome: Concurrently measuring both `mem_tofree` and `mem_used` revealed a crucial new fact – _the memory reclaim is a completely distinct phase from the eviction burst_.

Reframing the pathology as exhibiting separate phases for evictions versus memory reclaim led to a series of realizations, described in the next section. From that emerged a coherent hypothesis explaining all the observed properties of the pathology.

For more details on this analysis, see [methodology notes](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982498636), [build notes](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982499538) supporting the disassembly of the Redis binary, and [initial interpretations](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_977994182).

## Three-phase cycle

With the above results indicating a distinct separation between the evictions and the memory reclaim, we can now concisely characterize [three phases](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982623949) in the cycle of eviction-driven latency spikes.

**Graph:** Diagram (not to scale) comparing memory and CPU usage to request and response rates during each of the three phases

![Diagram summarizes the text that follows, showing CPU and memory saturate in Phase 2 until request rate drops to match response rate, after which they recover](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/06_3_phase_cycle_of_eviction_bursts.png)

Phase 1: Not saturated (7-15 minutes)
* Memory usage is below `maxmemory`. No evictions occur during this phase.
* Memory usage grows organically until reaching `maxmemory`, which starts the next phase.

Phase 2: Saturated memory and CPU (6-8 seconds)
* When memory usage reaches `maxmemory`, evictions begin.
* Evictions occur only during this phase, and they occur intermittently and frequently.
* Demand for memory frequently exceeds free capacity, repeatedly pushing memory usage above `maxmemory`. Throughout this phase, memory usage oscillates close to the `maxmemory` threshold, evicting a small amount of memory at a time, just enough to get back under `maxmemory`.

Phase 3: Rapid memory reclaim (30-60 seconds)
* No evictions occur during this phase.
* During this phase, something that had been holding a lot of memory starts quickly and steadily releasing it.
* Without the overhead of running evictions, CPU time is again spent mostly on handling requests (starting with the backlog that accumulated during Phase 2).
* Memory usage drops rapidly and steadily. By the time this phase ends, hundreds of megabytes have been freed. Afterwards, the cycle restarts with Phase 1.
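This cycle is visible with nothing fancier than stock Redis counters. A minimal sketch for watching it live, using standard `INFO` fields:

```shell
# Phase 1: used_memory climbs toward maxmemory, evicted_keys stays flat.
# Phase 2: evicted_keys bursts while used_memory hovers at the limit.
# Phase 3: used_memory falls steadily with no new evictions.
while sleep 1; do
  redis-cli info | grep -E '^(used_memory|evicted_keys):'
done
```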
At the transition between Phase 2 and Phase 3, evictions abruptly end because memory usage stays below the `maxmemory` threshold.

Reaching that transition point where memory pressure becomes negative signals that whatever was driving the memory demand in Phase 2 has started releasing memory faster than it is consuming it, shrinking the footprint it had accumulated during the previous phase.

What is this **mystery memory consumer** that bloats its demand during Phase 2 and frees it during Phase 3?

## The mystery revealed

[Modeling the phase transitions](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982651298) gave us some useful constraints that a viable hypothesis must satisfy. The mystery memory consumer must:
* quickly bloat its footprint to hundreds of megabytes on a timescale of less than 10 seconds (the duration of Phase 2), under conditions triggered by the start of an eviction burst
* quickly release its accumulated excess on a timescale of just tens of seconds (the duration of Phase 3), under the conditions immediately following an eviction burst

**The answer:** The client input/output buffers meet those constraints to be the mystery memory consumer.

Here is how that hypothesis plays out:
* During Phase 1 (healthy state), the Redis main thread’s CPU usage is already fairly high. At the start of Phase 2, when evictions begin, the eviction overhead saturates the main thread’s CPU capacity, quickly dropping response rate below the incoming request rate.
* This throughput mismatch between arrivals versus responses **is itself the amplifier** that takes over driving the eviction burst. As the size of that rate gap increases, the proportion of time spent doing evictions also increases.
* Accumulating a backlog of requests requires memory, and that backlog continues to grow until enough clients are stalled that the arrival rate drops to match the response rate. As clients stall, the arrival rate falls, and with it the memory pressure, eviction rate, and CPU overhead begin to reduce.
* At the equilibrium point when arrival rate falls to match response rate, memory demand is satisfied and evictions stop (ending Phase 2). Without the eviction overhead, more CPU time is available to process the backlog, so response rate increases above request arrival rate. This recovery phase steadily consumes the request backlog, incrementally freeing memory as it goes (Phase 3).
* Once the backlog is resolved, the arrival and response rates match again. CPU usage is back to its Phase 1 norm, and memory usage has temporarily dropped in proportion to the max size of Phase 2’s request backlog.
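A back-of-the-envelope check makes the magnitudes plausible. Using the rates measured earlier, and assuming (hypothetically) a few kilobytes of buffered request and response data per backlogged request:

```text
backlog growth = arrival rate - response rate = 25K - 5K = 20K requests/sec
memory growth  ≈ 20K req/sec × ~4 KB per buffered request ≈ 80 MB/sec   (4 KB is an assumed value)
Phase 2 total  ≈ 80 MB/sec × 6 sec ≈ 480 MB
```

That lands in the same range as the observed 300-500 MB memory drops, consistent with the request backlog being the mystery consumer.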
We confirmed this hypothesis via a [latency injection experiment](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_987049036) showing that queuing alone explains the pathology. This outcome supports the conclusion that the extra memory demand originates from response rate falling below request arrival rate.

## Remedies: How to avoid entering the eviction burst cycle

Now that we understand the dynamics of the pathology, we can draw confident conclusions about viable solutions.

Redis evictions are only self-amplifying when all of the following conditions are present:
* **Memory saturation:** Memory usage reaches the `maxmemory` limit, causing evictions to start.
* **CPU saturation:** The baseline CPU usage by the Redis main thread’s normal workload is close enough to a whole core that the eviction overhead pushes it to saturation. This reduces the response rate below request arrival rate, inducing self-amplification via increased memory demand for request buffering.
* **Many active clients:** The saturation only lasts as long as request arrival rate exceeds response rate. Stalled clients no longer contribute to that arrival rate, so the saturation lasts longer and has a greater impact if Redis has many active clients still sending requests.

Viable remedies include:
* Avoid memory saturation by any combination of the following to make peak memory usage less than the `maxmemory` limit:
  * Reduce cache time to live (TTL)
  * Increase `maxmemory` (and host memory if needed, but watch out for [`numa_balancing` CPU overhead](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1889) on hosts with multiple NUMA nodes)
  * Adjust client behavior to avoid writing unnecessary cache entries
  * Split the cache among multiple instances (sharding or functional partitioning, helps avoid both memory and CPU saturation)
* Avoid CPU saturation by any combination of the following to make peak CPU usage for the workload plus eviction overhead be less than 1 CPU core:
  * Use the fastest processor available for single-threaded instructions per second
  * Isolate the redis-server process (particularly its main thread) from any other competing CPU-intensive processes (dedicated host, taskset, cpuset; see the sketch after this list)
  * Adjust client behavior to avoid unnecessary cache lookups or writes
  * Split the cache among multiple instances (sharding or functional partitioning, helps avoid both memory and CPU saturation)
  * Offload work from the Redis main thread (io-threads, lazyfree)
  * Reduce eviction tenacity (only gives a minor benefit in our experiments)
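As a concrete example of the CPU isolation idea, pinning can be done with standard Linux tooling. The core number below is illustrative:

```shell
# Pin the redis-server process (and thus its main thread) to core 2;
# keep other CPU-hungry services off that core via their own affinity masks.
sudo taskset -cp 2 "$( pgrep -o redis-server )"
```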
More exotic potential remedies could include a new Redis feature. One idea is to exempt ephemeral allocations like client buffers from counting towards the `maxmemory` limit, instead applying that limit only to key storage. Alternatively, we could limit evictions to only consume at most a configurable percentage of the main thread’s time, so that most of its time is still spent on request throughput rather than eviction overhead.

Unfortunately, either of those features would trade one failure mode for another, reducing the risk of eviction-driven CPU saturation while increasing the risk of unbounded memory growth at the process level, which could potentially saturate the host or cgroup and lead to an OOM (out of memory) kill. That trade-off may not be worthwhile, and in any case it is not currently an option.

## Our solution

We had already exhausted the low-hanging fruit for CPU efficiency, so we focused our attention on avoiding memory saturation.

To improve the cache’s memory efficiency, we [evaluated](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_990891708) which types of cache keys were using the most space and how much [`IDLETIME`](https://redis.io/commands/object-idletime/) they had accrued since last access. This memory usage profile identified some rarely used cache entries (which waste space), helped inform the TTL (time to live) tuning by first focusing on keys with a high idle time, and highlighted some useful potential cutpoints for functionally partitioning the cache.
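A profile like that can be approximated by sampling random keys. A minimal sketch (our real analysis was more thorough; `RANDOMKEY` and `OBJECT IDLETIME` are standard Redis commands):

```shell
# Sample 1000 random keys and report the largest idle times (seconds since
# last access), to spot entries that occupy space but are rarely read.
for i in $( seq 1 1000 ); do
  key="$( redis-cli randomkey )"
  echo "$( redis-cli object idletime "$key" ) $key"
done | sort -rn | head -20
```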
We [decided](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_1014582669) to concurrently pursue several cache efficiency improvements and opened an [epic](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/764) for it. The goal was to avoid chronic memory saturation, and the main action items were:
* Iteratively reduce the cache’s [default TTL](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1854) from 2 weeks to 8 hours (helped a lot!)
* Switch to [client-side caching](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_1026821730) for certain cache keys (efficiently avoids spending shared cache space on non-shared cache entries)
* [Partition](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/762) a set of cache keys to a separate Redis instance

The TTL reduction was the simplest solution and turned out to be a big win. One of our main concerns with TTL reduction was that the additional cache misses could potentially increase workload on other parts of the infrastructure. Some cache misses are more expensive than others, and our metrics are not granular enough to quantify the cost of cache misses per type of cache entry. This concern is why we applied the TTL adjustment incrementally and monitored for SLO violations. Fortunately, our inference was correct: Reducing TTL did not significantly reduce the cache hit rate, and the additional cache misses did not cause noticeable impact to downstream subsystems.

The TTL reduction turned out to be sufficient to drop memory usage consistently a little below its saturation point.

Increasing `maxmemory` had initially not been feasible because the original peak memory demand (prior to the efficiency improvements) was expected to be larger than the max size of the VMs we use for Redis. However, once we dropped memory demand below saturation, we could confidently [provision headroom](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1868) for future growth and re-enable [saturation alerting](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1883).

## Results

The following graph shows Redis memory usage transitioning out of its chronically saturated state, with annotations describing the milestones when latency spikes ended and when the saturation margin became wide enough to be considered safe:

![Redis memory usage stops showing a flat top saturation](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/07_epic_results__memory_saturation_avoided_by_TTL_reductions.png)

Zooming into the days when we rolled out the TTL adjustments, we can see the harmful eviction-driven latency spikes vanish as we drop memory usage below its saturation point, exactly as predicted:

![Redis memory usage starts as a flat line and then falls below that saturation line](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/08_results__redis_memory_usage_stops_saturating.png)

![Redis response time spikes stop occurring at the exact point when memory stops being saturated](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/09_results__redis_latency_spikes_end.png)

These eviction-driven latency spikes had been the biggest cause of slowness in the Redis cache.

Solving this source of slowness significantly improved the user experience. This 1-year lookback shows only the long-tail portion of the improvement, not even the full benefit. Each weekday had roughly 2 million Redis requests slower than 1 second, until our fix in mid-August:

![Graph of the daily count of Redis cache requests slower than 1 second, showing roughly 2 million slow requests per day on weekdays until mid-August, when the TTL adjustments were applied](https://about.gitlab.com/images/blogimages/2022-11-28-diagnosing-redis-latency-spikes-with-bpf-and-friends/10_results__1_year_retrospective_of_slow_redis_requests_per_day.png)

## Conclusions

We solved a long-standing latency problem that had been worsening as the workload grew, and we learned a lot along the way. This article focuses mostly on the Redis discoveries, since those are general behaviors that some of you may encounter in your travels. We also developed some novel tools and analytical methods and uncovered several useful environment-specific facts about our workload, infrastructure, and observability, leading to several additional improvements and proposals not mentioned above.

Overall, we made several efficiency improvements and broke the cycle that was driving the pathology. Memory demand now stays well below the saturation point, eliminating the latency spikes that were burning error budgets for the development teams and causing intermittent slowness for users. All stakeholders are happy, and we came away with deeper domain knowledge and sharper skills!

## Key insights summary

The following notes summarize what we learned about Redis eviction behavior (current as of version 6.2):
* The same memory budget (`maxmemory`) is shared by key storage and client connection buffers. A spike in demand for client connection buffers counts towards the `maxmemory` limit, in the same way that a spike in key inserts or key size would.
* Redis performs evictions in the foreground on its main thread. All time spent in `performEvictions` is time not spent handling client requests. Consequently, during an eviction burst, Redis has a lower throughput ceiling.
* If eviction overhead saturates the main thread’s CPU, then response rate falls below request arrival rate. Redis accumulates a request backlog (which consumes memory), and clients experience this as slowness.
* The memory used for pending requests requires more evictions, driving the eviction burst until enough clients are stalled that arrival rate falls back below response rate. At that equilibrium point, evictions stop, eviction overhead vanishes, Redis rapidly handles its request backlog, and that backlog’s memory gets freed.
* Triggering this cycle requires all of the following:
  * Redis is configured with a `maxmemory` limit, and its memory demand exceeds that size. This memory saturation causes evictions to begin.
  * The Redis main thread’s CPU utilization is high enough under its normal workload that having to also perform evictions drives it to CPU saturation. This reduces response rate below request rate, causing a growing request backlog and high latency.
  * Many active clients are connected. The duration of the eviction burst and the size of memory spent on client connection buffers increase proportionally to the number of active clients.
* Prevent this cycle by avoiding either memory or CPU saturation. In our case, avoiding memory saturation was easier (mainly by reducing cache TTL).

## Further reading

The following lists summarize the analytical tools and methods cited in this article. These tools are all highly versatile, and any of them can provide a massive level-up when working on performance engineering problems.

Tools:
* [perf](https://www.brendangregg.com/perf.html) - A Linux performance analysis multitool. In this article, we used `perf` as a sampling profiler, capturing periodic stack traces of the `redis-server` process's main thread when it is actively running on a CPU.
* [Flamescope](https://github.com/Netflix/flamescope) - A visualization tool for rendering a `perf` profile (and other formats) into an interactive subsecond heat map. This tool invites the user to explore the timeline for microbursts of activity or inactivity and render flamegraphs of those interesting timespans to explore what code paths were active.
* [BCC](https://github.com/iovisor/bcc) - BCC is a framework for building BPF tools, and it ships with many useful tools out of the box. In this article, we used `funclatency` to measure the call durations of a specific Redis function and render the results as a histogram.
* [bpftrace](https://github.com/iovisor/bpftrace) - Another BPF framework, ideal for answering ad-hoc questions about your system's behavior. It uses an `awk`-like syntax and is [quick to learn](https://github.com/iovisor/bpftrace#readme). In this article, we wrote a [custom `bpftrace` script](https://gitlab.com/gitlab-com/gl-infra/scalability/uploads/cab2cd03231f8dd4819f77b44d768cb9/redis_snoop.getMaxmemoryState.sha_25a228b839a93a1395907a03f83e1eee448b0f14.production_thresholds.bt) for observing the variables used in computing how much memory to free during each round of evictions.
This script's instrumentation points are specific to our particular build of `redis-server`, but the [approach generalizes](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982498636) and illustrates how versatile this tool can be.

Usage examples:
* [Example](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_854745083) - Walkthrough of using `perf` and `flamescope` to capture, filter, and visualize the stack sampling CPU profiles of the Redis main thread.
* [Example](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_857869826) - Walkthrough (including safety check) of using `funclatency` to measure the durations of the frequent calls to function `performEvictions`.
* [Example](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7172#note_971197943) - Experiment for adjusting Redis settings `lazyfree-lazy-eviction` and `maxmemory-eviction-tenacity` and observing the results using `perf`, `funclatency`, `funcslower`, and the Redis metrics for eviction count and memory usage.
* [Example](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982498636) - This is a working example (script included) of using `bpftrace` to observe the values of a function's variables. In this case we inspected the `mem_tofree` calculation at the start of `performEvictions`. Also, these [companion notes](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_982499538) discuss some build-specific considerations.
* [Example](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1601#note_987049036) - Describes the latency injection experiment (the first of the three ideas). This experiment confirmed that memory demand increases at the predicted rate when we slow response rate to below request arrival rate, in the same way evictions do.
This result confirmed that the request queuing itself is the source of the memory pressure that amplifies the eviction burst once it begins.
Testing",{"href":126,"dataGaName":133,"dataGaLocation":44},"Application security testing",{"text":135,"config":136},"Software Supply Chain Security",{"href":137,"dataGaLocation":44,"dataGaName":138},"/solutions/supply-chain/","Software supply chain security",{"text":140,"config":141},"Software Compliance",{"href":142,"dataGaName":143,"dataGaLocation":44},"/solutions/software-compliance/","software compliance",{"title":145,"link":146,"items":151},"Measurement",{"config":147},{"icon":148,"href":149,"dataGaName":150,"dataGaLocation":44},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[152,156,160],{"text":153,"config":154},"Visibility & Measurement",{"href":149,"dataGaLocation":44,"dataGaName":155},"Visibility and Measurement",{"text":157,"config":158},"Value Stream Management",{"href":159,"dataGaLocation":44,"dataGaName":157},"/solutions/value-stream-management/",{"text":161,"config":162},"Analytics & Insights",{"href":163,"dataGaLocation":44,"dataGaName":164},"/solutions/analytics-and-insights/","Analytics and insights",{"title":166,"items":167},"GitLab for",[168,173,178],{"text":169,"config":170},"Enterprise",{"href":171,"dataGaLocation":44,"dataGaName":172},"/enterprise/","enterprise",{"text":174,"config":175},"Small Business",{"href":176,"dataGaLocation":44,"dataGaName":177},"/small-business/","small business",{"text":179,"config":180},"Public Sector",{"href":181,"dataGaLocation":44,"dataGaName":182},"/solutions/public-sector/","public sector",{"text":184,"config":185},"Pricing",{"href":186,"dataGaName":187,"dataGaLocation":44,"dataNavLevelOne":187},"/pricing/","pricing",{"text":189,"config":190,"link":192,"lists":196,"feature":276},"Resources",{"dataNavLevelOne":191},"resources",{"text":193,"config":194},"View all resources",{"href":195,"dataGaName":191,"dataGaLocation":44},"/resources/",[197,230,248],{"title":198,"items":199},"Getting started",[200,205,210,215,220,225],{"text":201,"config":202},"Install",{"href":203,"dataGaName":204,"dataGaLocation":44},"/install/","install",{"text":206,"config":207},"Quick start guides",{"href":208,"dataGaName":209,"dataGaLocation":44},"/get-started/","quick setup checklists",{"text":211,"config":212},"Learn",{"href":213,"dataGaLocation":44,"dataGaName":214},"https://university.gitlab.com/","learn",{"text":216,"config":217},"Product documentation",{"href":218,"dataGaName":219,"dataGaLocation":44},"https://docs.gitlab.com/","product documentation",{"text":221,"config":222},"Best practice videos",{"href":223,"dataGaName":224,"dataGaLocation":44},"/getting-started-videos/","best practice videos",{"text":226,"config":227},"Integrations",{"href":228,"dataGaName":229,"dataGaLocation":44},"/integrations/","integrations",{"title":231,"items":232},"Discover",[233,238,243],{"text":234,"config":235},"Customer success stories",{"href":236,"dataGaName":237,"dataGaLocation":44},"/customers/","customer success stories",{"text":239,"config":240},"Blog",{"href":241,"dataGaName":242,"dataGaLocation":44},"/blog/","blog",{"text":244,"config":245},"Remote",{"href":246,"dataGaName":247,"dataGaLocation":44},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":249,"items":250},"Connect",[251,256,261,266,271],{"text":252,"config":253},"GitLab 
Services",{"href":254,"dataGaName":255,"dataGaLocation":44},"/services/","services",{"text":257,"config":258},"Community",{"href":259,"dataGaName":260,"dataGaLocation":44},"/community/","community",{"text":262,"config":263},"Forum",{"href":264,"dataGaName":265,"dataGaLocation":44},"https://forum.gitlab.com/","forum",{"text":267,"config":268},"Events",{"href":269,"dataGaName":270,"dataGaLocation":44},"/events/","events",{"text":272,"config":273},"Partners",{"href":274,"dataGaName":275,"dataGaLocation":44},"/partners/","partners",{"backgroundColor":277,"textColor":278,"text":279,"image":280,"link":284},"#2f2a6b","#fff","Insights for the future of software development",{"altText":281,"config":282},"the source promo card",{"src":283},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":285,"config":286},"Read the latest",{"href":287,"dataGaName":288,"dataGaLocation":44},"/the-source/","the source",{"text":290,"config":291,"lists":293},"Company",{"dataNavLevelOne":292},"company",[294],{"items":295},[296,301,307,309,314,319,324,329,334,339,344],{"text":297,"config":298},"About",{"href":299,"dataGaName":300,"dataGaLocation":44},"/company/","about",{"text":302,"config":303,"footerGa":306},"Jobs",{"href":304,"dataGaName":305,"dataGaLocation":44},"/jobs/","jobs",{"dataGaName":305},{"text":267,"config":308},{"href":269,"dataGaName":270,"dataGaLocation":44},{"text":310,"config":311},"Leadership",{"href":312,"dataGaName":313,"dataGaLocation":44},"/company/team/e-group/","leadership",{"text":315,"config":316},"Team",{"href":317,"dataGaName":318,"dataGaLocation":44},"/company/team/","team",{"text":320,"config":321},"Handbook",{"href":322,"dataGaName":323,"dataGaLocation":44},"https://handbook.gitlab.com/","handbook",{"text":325,"config":326},"Investor relations",{"href":327,"dataGaName":328,"dataGaLocation":44},"https://ir.gitlab.com/","investor relations",{"text":330,"config":331},"Trust Center",{"href":332,"dataGaName":333,"dataGaLocation":44},"/security/","trust center",{"text":335,"config":336},"AI Transparency Center",{"href":337,"dataGaName":338,"dataGaLocation":44},"/ai-transparency-center/","ai transparency center",{"text":340,"config":341},"Newsletter",{"href":342,"dataGaName":343,"dataGaLocation":44},"/company/contact/#contact-forms","newsletter",{"text":345,"config":346},"Press",{"href":347,"dataGaName":348,"dataGaLocation":44},"/press/","press",{"text":350,"config":351,"lists":352},"Contact us",{"dataNavLevelOne":292},[353],{"items":354},[355,358,363],{"text":51,"config":356},{"href":53,"dataGaName":357,"dataGaLocation":44},"talk to sales",{"text":359,"config":360},"Support portal",{"href":361,"dataGaName":362,"dataGaLocation":44},"https://support.gitlab.com","support portal",{"text":364,"config":365},"Customer portal",{"href":366,"dataGaName":367,"dataGaLocation":44},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":369,"login":370,"suggestions":377},"Close",{"text":371,"link":372},"To search repositories and projects, login to",{"text":373,"config":374},"gitlab.com",{"href":58,"dataGaName":375,"dataGaLocation":376},"search login","search",{"text":378,"default":379},"Suggestions",[380,382,386,388,392,396],{"text":73,"config":381},{"href":78,"dataGaName":73,"dataGaLocation":376},{"text":383,"config":384},"Code Suggestions 
(AI)",{"href":385,"dataGaName":383,"dataGaLocation":376},"/solutions/code-suggestions/",{"text":107,"config":387},{"href":109,"dataGaName":107,"dataGaLocation":376},{"text":389,"config":390},"GitLab on AWS",{"href":391,"dataGaName":389,"dataGaLocation":376},"/partners/technology-partners/aws/",{"text":393,"config":394},"GitLab on Google Cloud",{"href":395,"dataGaName":393,"dataGaLocation":376},"/partners/technology-partners/google-cloud-platform/",{"text":397,"config":398},"Why GitLab?",{"href":86,"dataGaName":397,"dataGaLocation":376},{"freeTrial":400,"mobileIcon":405,"desktopIcon":410,"secondaryButton":413},{"text":401,"config":402},"Start free trial",{"href":403,"dataGaName":49,"dataGaLocation":404},"https://gitlab.com/-/trials/new/","nav",{"altText":406,"config":407},"Gitlab Icon",{"src":408,"dataGaName":409,"dataGaLocation":404},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":406,"config":411},{"src":412,"dataGaName":409,"dataGaLocation":404},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":414,"config":415},"Get Started",{"href":416,"dataGaName":417,"dataGaLocation":404},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":419,"mobileIcon":424,"desktopIcon":426},{"text":420,"config":421},"Learn more about GitLab Duo",{"href":422,"dataGaName":423,"dataGaLocation":404},"/gitlab-duo/","gitlab duo",{"altText":406,"config":425},{"src":408,"dataGaName":409,"dataGaLocation":404},{"altText":406,"config":427},{"src":412,"dataGaName":409,"dataGaLocation":404},{"freeTrial":429,"mobileIcon":434,"desktopIcon":436},{"text":430,"config":431},"Back to pricing",{"href":186,"dataGaName":432,"dataGaLocation":404,"icon":433},"back to pricing","GoBack",{"altText":406,"config":435},{"src":408,"dataGaName":409,"dataGaLocation":404},{"altText":406,"config":437},{"src":412,"dataGaName":409,"dataGaLocation":404},{"title":439,"button":440,"config":445},"See how agentic AI transforms software delivery",{"text":441,"config":442},"Watch GitLab Transcend now",{"href":443,"dataGaName":444,"dataGaLocation":44},"/events/transcend/virtual/","transcend event",{"layout":446,"icon":447},"release","AiStar",{"data":449},{"text":450,"source":451,"edit":457,"contribute":462,"config":467,"items":472,"minimal":677},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":452,"config":453},"View page source",{"href":454,"dataGaName":455,"dataGaLocation":456},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":458,"config":459},"Edit this page",{"href":460,"dataGaName":461,"dataGaLocation":456},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":463,"config":464},"Please contribute",{"href":465,"dataGaName":466,"dataGaLocation":456},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":468,"facebook":469,"youtube":470,"linkedin":471},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[473,520,572,616,643],{"title":184,"links":474,"subMenu":489},[475,479,484],{"text":476,"config":477},"View plans",{"href":186,"dataGaName":478,"dataGaLocation":456},"view 
plans",{"text":480,"config":481},"Why Premium?",{"href":482,"dataGaName":483,"dataGaLocation":456},"/pricing/premium/","why premium",{"text":485,"config":486},"Why Ultimate?",{"href":487,"dataGaName":488,"dataGaLocation":456},"/pricing/ultimate/","why ultimate",[490],{"title":491,"links":492},"Contact Us",[493,496,498,500,505,510,515],{"text":494,"config":495},"Contact sales",{"href":53,"dataGaName":54,"dataGaLocation":456},{"text":359,"config":497},{"href":361,"dataGaName":362,"dataGaLocation":456},{"text":364,"config":499},{"href":366,"dataGaName":367,"dataGaLocation":456},{"text":501,"config":502},"Status",{"href":503,"dataGaName":504,"dataGaLocation":456},"https://status.gitlab.com/","status",{"text":506,"config":507},"Terms of use",{"href":508,"dataGaName":509,"dataGaLocation":456},"/terms/","terms of use",{"text":511,"config":512},"Privacy statement",{"href":513,"dataGaName":514,"dataGaLocation":456},"/privacy/","privacy statement",{"text":516,"config":517},"Cookie preferences",{"dataGaName":518,"dataGaLocation":456,"id":519,"isOneTrustButton":28},"cookie preferences","ot-sdk-btn",{"title":89,"links":521,"subMenu":530},[522,526],{"text":523,"config":524},"DevSecOps platform",{"href":71,"dataGaName":525,"dataGaLocation":456},"devsecops platform",{"text":527,"config":528},"AI-Assisted Development",{"href":422,"dataGaName":529,"dataGaLocation":456},"ai-assisted development",[531],{"title":532,"links":533},"Topics",[534,539,544,547,552,557,562,567],{"text":535,"config":536},"CICD",{"href":537,"dataGaName":538,"dataGaLocation":456},"/topics/ci-cd/","cicd",{"text":540,"config":541},"GitOps",{"href":542,"dataGaName":543,"dataGaLocation":456},"/topics/gitops/","gitops",{"text":25,"config":545},{"href":546,"dataGaName":36,"dataGaLocation":456},"/topics/devops/",{"text":548,"config":549},"Version Control",{"href":550,"dataGaName":551,"dataGaLocation":456},"/topics/version-control/","version control",{"text":553,"config":554},"DevSecOps",{"href":555,"dataGaName":556,"dataGaLocation":456},"/topics/devsecops/","devsecops",{"text":558,"config":559},"Cloud Native",{"href":560,"dataGaName":561,"dataGaLocation":456},"/topics/cloud-native/","cloud native",{"text":563,"config":564},"AI for Coding",{"href":565,"dataGaName":566,"dataGaLocation":456},"/topics/devops/ai-for-coding/","ai for coding",{"text":568,"config":569},"Agentic AI",{"href":570,"dataGaName":571,"dataGaLocation":456},"/topics/agentic-ai/","agentic ai",{"title":573,"links":574},"Solutions",[575,577,579,584,588,591,595,598,600,603,606,611],{"text":131,"config":576},{"href":126,"dataGaName":131,"dataGaLocation":456},{"text":120,"config":578},{"href":103,"dataGaName":104,"dataGaLocation":456},{"text":580,"config":581},"Agile development",{"href":582,"dataGaName":583,"dataGaLocation":456},"/solutions/agile-delivery/","agile delivery",{"text":585,"config":586},"SCM",{"href":116,"dataGaName":587,"dataGaLocation":456},"source code management",{"text":535,"config":589},{"href":109,"dataGaName":590,"dataGaLocation":456},"continuous integration & delivery",{"text":592,"config":593},"Value stream management",{"href":159,"dataGaName":594,"dataGaLocation":456},"value stream management",{"text":540,"config":596},{"href":597,"dataGaName":543,"dataGaLocation":456},"/solutions/gitops/",{"text":169,"config":599},{"href":171,"dataGaName":172,"dataGaLocation":456},{"text":601,"config":602},"Small business",{"href":176,"dataGaName":177,"dataGaLocation":456},{"text":604,"config":605},"Public 
sector",{"href":181,"dataGaName":182,"dataGaLocation":456},{"text":607,"config":608},"Education",{"href":609,"dataGaName":610,"dataGaLocation":456},"/solutions/education/","education",{"text":612,"config":613},"Financial services",{"href":614,"dataGaName":615,"dataGaLocation":456},"/solutions/finance/","financial services",{"title":189,"links":617},[618,620,622,624,627,629,631,633,635,637,639,641],{"text":201,"config":619},{"href":203,"dataGaName":204,"dataGaLocation":456},{"text":206,"config":621},{"href":208,"dataGaName":209,"dataGaLocation":456},{"text":211,"config":623},{"href":213,"dataGaName":214,"dataGaLocation":456},{"text":216,"config":625},{"href":218,"dataGaName":626,"dataGaLocation":456},"docs",{"text":239,"config":628},{"href":241,"dataGaName":242,"dataGaLocation":456},{"text":234,"config":630},{"href":236,"dataGaName":237,"dataGaLocation":456},{"text":244,"config":632},{"href":246,"dataGaName":247,"dataGaLocation":456},{"text":252,"config":634},{"href":254,"dataGaName":255,"dataGaLocation":456},{"text":257,"config":636},{"href":259,"dataGaName":260,"dataGaLocation":456},{"text":262,"config":638},{"href":264,"dataGaName":265,"dataGaLocation":456},{"text":267,"config":640},{"href":269,"dataGaName":270,"dataGaLocation":456},{"text":272,"config":642},{"href":274,"dataGaName":275,"dataGaLocation":456},{"title":290,"links":644},[645,647,649,651,653,655,657,661,666,668,670,672],{"text":297,"config":646},{"href":299,"dataGaName":292,"dataGaLocation":456},{"text":302,"config":648},{"href":304,"dataGaName":305,"dataGaLocation":456},{"text":310,"config":650},{"href":312,"dataGaName":313,"dataGaLocation":456},{"text":315,"config":652},{"href":317,"dataGaName":318,"dataGaLocation":456},{"text":320,"config":654},{"href":322,"dataGaName":323,"dataGaLocation":456},{"text":325,"config":656},{"href":327,"dataGaName":328,"dataGaLocation":456},{"text":658,"config":659},"Sustainability",{"href":660,"dataGaName":658,"dataGaLocation":456},"/sustainability/",{"text":662,"config":663},"Diversity, inclusion and belonging (DIB)",{"href":664,"dataGaName":665,"dataGaLocation":456},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":330,"config":667},{"href":332,"dataGaName":333,"dataGaLocation":456},{"text":340,"config":669},{"href":342,"dataGaName":343,"dataGaLocation":456},{"text":345,"config":671},{"href":347,"dataGaName":348,"dataGaLocation":456},{"text":673,"config":674},"Modern Slavery Transparency Statement",{"href":675,"dataGaName":676,"dataGaLocation":456},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency 
statement",{"items":678},[679,682,685],{"text":680,"config":681},"Terms",{"href":508,"dataGaName":509,"dataGaLocation":456},{"text":683,"config":684},"Cookies",{"dataGaName":518,"dataGaLocation":456,"id":519,"isOneTrustButton":28},{"text":686,"config":687},"Privacy",{"href":513,"dataGaName":514,"dataGaLocation":456},[689],{"id":690,"title":18,"body":8,"config":691,"content":693,"description":8,"extension":26,"meta":697,"navigation":28,"path":698,"seo":699,"stem":700,"__hash__":701},"blogAuthors/en-us/blog/authors/matt-smiley.yml",{"template":692},"BlogAuthor",{"name":18,"config":694},{"headshot":695,"ctfId":696},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749682529/Blog/Author%20Headshots/msmiley-headshot.jpg","msmiley",{},"/en-us/blog/authors/matt-smiley",{},"en-us/blog/authors/matt-smiley","GaLE5OWIxcwOcDqqFiyWy1UVdAIZeYkr1nYOINaNM8E",[703,716,728],{"content":704,"config":714},{"title":705,"description":706,"authors":707,"heroImage":709,"date":710,"category":9,"tags":711,"body":713},"How IIT Bombay students are coding the future with GitLab","At GitLab, we often talk about how software accelerates innovation. But sometimes, you have to step away from the Zoom calls and stand in a crowded university hall to remember why we do this.",[708],"Nick Veenhof","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099013/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2814%29_6VTUA8mUhOZNDaRVNPeKwl_1750099012960.png","2026-01-08",[260,610,712],"open source","The GitLab team recently had the privilege of judging the **iHack Hackathon** at **IIT Bombay's E-Summit**. The energy was electric, the coffee was flowing, and the talent was undeniable. But what struck us most wasn't just the code — it was the sheer determination of students to solve real-world problems, often overcoming significant logistical and financial hurdles to simply be in the room.\n\n\nThrough our [GitLab for Education program](https://about.gitlab.com/solutions/education/), we aim to empower the next generation of developers with tools and opportunity. Here is a look at what the students built, and how they used GitLab to bridge the gap between idea and reality.\n\n## The challenge: Build faster, build securely\n\nThe premise for the GitLab track of the hackathon was simple: Don't just show us a product; show us how you built it. We wanted to see how students utilized GitLab's platform — from Issue Boards to CI/CD pipelines — to accelerate the development lifecycle.\n\nThe results were inspiring.\n\n## The winners\n\n### 1st place: Team Decode — Democratizing Scientific Research\n\n**Project:** FIRE (Fast Integrated Research Environment)\n\nTeam Decode took home the top prize with a solution that warms a developer's heart: a local-first, blazing-fast data processing tool built with [Rust](https://about.gitlab.com/blog/secure-rust-development-with-gitlab/) and Tauri. They identified a massive pain point for data science students: existing tools are fragmented, slow, and expensive.\n\nTheir solution, FIRE, allows researchers to visualize complex formats (like NetCDF) instantly. What impressed the judges most was their \"hacker\" ethos. They didn't just build a tool; they built it to be open and accessible.\n\n**How they used GitLab:** Since the team lived far apart, asynchronous communication was key. They utilized **GitLab Issue Boards** and **Milestones** to track progress and integrated their repo with Telegram to get real-time push notifications. 
As one team member noted, \"Coordinating all these technologies was really difficult, and what helped us was GitLab... the Issue Board really helped us track who was doing what.\"\n\n![Team Decode](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/epqazj1jc5c7zkgqun9h.jpg)\n\n### 2nd place: Team BichdeHueDost — Reuniting to Solve Payments\n\n**Project:** SemiPay (RFID Cashless Payment for Schools)\n\nThe team name, BichdeHueDost, translates to \"Friends who have been set apart.\" It's a fitting name for a group of friends who went to different colleges but reunited to build this project. They tackled a unique problem: handling cash in schools for young children. Their solution used RFID cards backed by a blockchain ledger to ensure secure, cashless transactions for students.\n\n**How they used GitLab:** They utilized [GitLab CI/CD](https://about.gitlab.com/topics/ci-cd/) to automate the build process for their Flutter application (APK), ensuring that every commit resulted in a testable artifact. This allowed them to iterate quickly despite the \"flaky\" nature of cross-platform mobile development.\n\n![Team BichdeHueDost](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/pkukrjgx2miukb6nrj5g.jpg)\n\n### 3rd place: Team ZenYukti — Agentic Repository Intelligence\n\n**Project:** RepoInsight AI (AI-powered, GitLab-native intelligence platform)\n\nTeam ZenYukti impressed us with a solution that tackles a universal developer pain point: understanding unfamiliar codebases. What stood out to the judges was the tool's practical approach to onboarding and code comprehension: RepoInsight-AI automatically generates documentation, visualizes repository structure, and even helps identify bugs, all while maintaining context about the entire codebase.\n\n**How they used GitLab:** The team built a comprehensive CI/CD pipeline that showcased GitLab's security and DevOps capabilities. They integrated [GitLab's Security Templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Security) (SAST, Dependency Scanning, and Secret Detection), and utilized [GitLab Container Registry](https://docs.gitlab.com/user/packages/container_registry/) to manage their Docker images for backend and frontend components. They created an AI auto-review bot that runs on merge requests, demonstrating an \"agentic workflow\" where AI assists in the development process itself.\n\n![Team ZenYukti](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/ymlzqoruv5al1secatba.jpg)\n\n## Beyond the code: A lesson in inclusion\n\nWhile the code was impressive, the most powerful moment of the event happened away from the keyboard.\n\nDuring the feedback session, we learned about the journey Team ZenYukti took to get to Mumbai. They traveled over 24 hours, covering nearly 1,800 kilometers. Because flights were too expensive and trains were booked, they traveled in the \"General Coach,\" a non-reserved, severely overcrowded carriage.\n\nAs one student described it:\n\n*\"You cannot even imagine something like this... there are no seats... people sit on the top of the train. This is what we have endured.\"*\n\nThis hit home. [Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/company/culture/inclusion/) are core values at GitLab. We realized that for these students, the barrier to entry wasn't intellect or skill, it was access.\n\nIn that moment, we decided to break that barrier. 
We committed to reimbursing the travel expenses for the participants who struggled to get there. It's a small step, but it underlines a massive truth: **talent is distributed equally, but opportunity is not.**\n\n![hackathon class together](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380252/o5aqmboquz8ehusxvgom.jpg)\n\n### The future is bright (and automated)\n\nWe also saw incredible potential in teams like Prometheus, who attempted to build an autonomous patch remediation tool (DevGuardian), and Team Arrakis, who built a voice-first job portal for blue-collar workers using [GitLab Duo](https://about.gitlab.com/gitlab-duo/) to troubleshoot their pipelines.\n\nTo all the students who participated: You are the future. Through [GitLab for Education](https://about.gitlab.com/solutions/education/), we are committed to providing you with the top-tier tools (like GitLab Ultimate) you need to learn, collaborate, and change the world — whether you are coding from a dorm room, a lab, or a train carriage. **Keep shipping.**\n\n> :bulb: Learn more about the [GitLab for Education program](https://about.gitlab.com/solutions/education/).\n",{"slug":715,"featured":12,"template":13},"how-iit-bombay-students-code-future-with-gitlab",{"content":717,"config":726},{"title":718,"description":719,"authors":720,"heroImage":721,"date":722,"category":9,"tags":723,"body":725},"Artois University elevates research and curriculum with GitLab Ultimate for Education","Artois University's CRIL leveraged the GitLab for Education program to gain free access to Ultimate, transforming advanced research and computer science curricula.",[708],"https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099203/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2820%29_2bJGC5ZP3WheoqzlLT05C5_1750099203484.png","2025-12-10",[610,260,724],"product","Leading academic institutions face a critical challenge: how to provide thousands of students and researchers with industry-standard, **full-featured DevSecOps tools** without compromising institutional control. Many start with basic version control, but the modern curriculum demands integrated capabilities for planning, security, and advanced CI/CD.\n\nThe **GitLab for Education program** is designed to solve this by providing access to **GitLab Ultimate** for qualifying institutions, allowing them to scale their operations and elevate their academic offerings. \n\nThis article showcases a powerful success story from the **Centre de Recherche en Informatique de Lens (CRIL)**, a joint laboratory of **Artois University** and CNRS in France. After years of relying solely on GitLab Community Edition (CE), the university's move to GitLab Ultimate through the GitLab for Education program immediately unlocked advanced capabilities, transforming their teaching, research, and contribution workflows virtually overnight. This story demonstrates why GitLab Ultimate is essential for institutions seeking to deliver advanced computer science and research curricula.\n\n## GitLab Ultimate unlocked: Managing scale and driving academic value\n\n**Artois University's** self-managed GitLab instance is a large-scale operation, supporting nearly **3,000 users** across approximately **19,000 projects**, primarily serving computer science students and researchers. 
While GitLab Community Edition was robust, the upgrade to GitLab Ultimate provided the sophisticated tooling necessary for managing this scale and facilitating advanced university-level work.\n\n***\"We can see the difference,\" says Daniel Le Berre, head of research at CRIL and the instance maintainer. \"It's a completely different product. Each week reveals new features that directly enhance our productivity and teaching.\"***\n\nThe institution joined the GitLab for Education program specifically because it covers both **instructional and non-commercial research use cases** and offers full access to Ultimate's features, removing significant cost barriers.\n\n### Key GitLab Ultimate benefits for students and researchers\n\n* **Advanced project management at scale:** Master's students now benefit from **GitLab Ultimate's project planning features**. This enables them to structure, track, and manage complex, long-term research projects using professional methodologies like portfolio management and advanced issue tracking that seamlessly roll up across their thousands of projects.\n\n* **Enhanced visibility:** Features like improved dashboards and code previews directly in Markdown files dramatically streamline tracking and documentation review, reducing administrative friction for both instructors and students managing large project loads.\n\n## Comprehensive curriculum: From concepts to continuous delivery\n\nGitLab Ultimate is deeply integrated into the computer science curriculum, moving students beyond simple `git` commands to practical **DevSecOps implementation**.\n\n* **Git fundamentals:** Students begin by visualizing Git concepts using open-source tools.\n\n* **Full CI/CD implementation:** Students use GitLab CI for rigorous **Test-Driven Development (TDD)** in their software projects. They learn to build, test, and perform quality assurance using unit and integration testing pipelines—a core competency made seamless by the integrated platform.\n\n* **DevSecOps for research and documentation:** The university teaches students that DevSecOps principles are vital for all collaborative work. Inspired by earlier work in Delft, students manage and produce critical research documentation (PDFs from Markdown files) using GitLab, incorporating quality checks like linters and spell checks directly in the CI pipeline. This ensures high-quality, reproducible research output.\n\n* **Future-proofing security skills:** The GitLab Ultimate platform immediately positions the institution to incorporate advanced DevSecOps features like SAST and DAST scanning as their research and development code projects grow, ensuring students are prepared for industry security standards.\n\n## Accelerating open source contributions with GitLab Duo\n\nAccess to the full GitLab platform, including our AI capabilities, has empowered students to make impactful contributions to the wider open source community faster than ever before.\n\nTwo Master's students recently completed direct contributions to the GitLab product, adding the **ORCID identifier** to user profiles. Working on GitLab.com, they leveraged **GitLab Duo's AI chat and code suggestions** to navigate the codebase efficiently.\n\n***\"This would not have been possible without GitLab Duo,\" Daniel Le Berre notes. 
\"The AI features helped students, who might have lacked deep codebase knowledge, deliver meaningful contributions in just two weeks.\"***\n\nThis demonstrates how providing students with cutting-edge tools **accelerates their learning and impact**, allowing them to translate classroom knowledge into real-world contributions immediately.\n\n## Empowering open research and institutional control\n\nThe stability of the self-managed instance at Artois University is key to its success. This model guarantees **institutional control and stability** — a critical factor for long-term research preservation.\n\nThe institution's expertise in this area was recently highlighted in a major 2024 study led by CRIL, titled: \"[Higher Education and Research Forges in France - Definition, uses, limitations encountered and needs analysis](https://hal.science/hal-04208924v4)\" ([Project on GitLab](https://gitlab.in2p3.fr/coso-college-codes-sources-et-logiciels/forges-esr-en)). The research found that the vast majority of public forges in French Higher Education and Research relied on **GitLab**. This finding underscores the consensus among academic leaders that self-hosted solutions are essential for **data control and longevity**, especially when compared to relying on external, commercial forges.\n\n## Unlock GitLab Ultimate for your institution today\n\nThe success story of **Artois University's CRIL** proves the transformative power of the GitLab for Education program. By providing **free access to GitLab Ultimate**, we enable large-scale institutions to:\n\n1.  **Deliver a modern, integrated DevSecOps curriculum.**\n\n2.  **Support advanced, collaborative research projects with Ultimate planning features.**\n\n3.  **Empower students to make AI-assisted open source contributions.**\n\n4.  **Maintain institutional control and data longevity.**\n\nIf your academic institution is ready to equip its students and researchers with the complete DevSecOps platform and its most advanced features, we invite you to join the program.\n\nThe program provides **free access to GitLab Ultimate** for qualifying instructional and non-commercial research use cases.\n\n**Apply now [online](https://about.gitlab.com/solutions/education/join/).**\n",{"slug":727,"featured":28,"template":13},"artois-university-elevates-curriculum-with-gitlab-ultimate-for-education",{"content":729,"config":741},{"category":9,"tags":730,"body":732,"date":733,"updatedDate":734,"heroImage":735,"authors":736,"title":739,"description":740},[24,731,107],"git","\nEnterprise teams are increasingly migrating from Azure DevOps to GitLab to gain strategic advantages and accelerate secure software delivery. \n\n\n- GitLab comes with integrated controls, policies, and [compliance frameworks](https://docs.gitlab.com/user/compliance/compliance_frameworks/) that allow organizations to implement software delivery standards at scale. 
This is especially important for regulated industries.\n\n- [Security testing](https://docs.gitlab.com/user/application_security/) is embedded in the pipeline and results show up in the developer workflow, including static application security testing (SAST), software composition analysis (SCA), dynamic application security testing (DAST), infrastructure-as-code (IaC) scanning, container scanning, and API scanning.\n\n- [AI capabilities](https://about.gitlab.com/gitlab-duo-agent-platform/) across the full software delivery lifecycle include advanced agent orchestration and customizable flows to support how your organizational teams work.\n\n\nGitLab's open-source, open-core approach, flexible deployment options such as single-tenant dedicated and self-managed, and truly unified platform eliminate integration complexity and security gaps. \n\n\nFor teams facing mounting pressure to accelerate delivery while strengthening security posture and maintaining regulatory compliance, GitLab represents not just a migration but a platform evolution.\n\n\nMigrating from Azure DevOps to GitLab can seem like a daunting task, but with the right approach and tools, it can be a smooth and efficient process. This guide will walk you through the steps needed to successfully migrate your projects, repositories, and pipelines from Azure DevOps to GitLab.\n\n\n## Overview\n\nGitLab provides both [Congregate](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/) (maintained by the [GitLab Professional Services](https://about.gitlab.com/professional-services/) organization) and [a built-in Git repository import](https://docs.gitlab.com/user/project/import/repo_by_url/) for migrating projects from Azure DevOps (ADO). These options support repository-by-repository or bulk migration and preserve Git commit history, branches, and tags. With Congregate and professional services tools, we support additional assets such as wikis, work items, CI/CD variables, container images, packages, pipelines, and more (see this [feature matrix](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/blob/master/customer/ado-migration-features-matrix.md)). 
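\n\nFor a single repository, the built-in import can also be driven through the GitLab REST API, which makes it straightforward to script simple bulk runs yourself. Below is a minimal sketch, assuming the Python `requests` package; the instance URL, namespace ID, and Azure DevOps coordinates are placeholders, and the `import_url` attribute of the create-project call is what triggers the import:\n\n```python\nimport os\nimport requests\n\nGITLAB_URL = 'https://gitlab.example.com'  # placeholder instance URL\nGITLAB_TOKEN = os.environ['GITLAB_TOKEN']  # GitLab PAT with the api scope\nADO_PAT = os.environ['ADO_PAT']            # ADO PAT with code read access\n\n# Azure DevOps accepts a PAT as the basic-auth password in the clone URL.\nimport_url = f'https://pat:{ADO_PAT}@dev.azure.com/my-org/my-project/_git/my-repo'\n\nresp = requests.post(\n    f'{GITLAB_URL}/api/v4/projects',\n    headers={'PRIVATE-TOKEN': GITLAB_TOKEN},\n    data={\n        'name': 'my-repo',         # destination project name\n        'namespace_id': 42,        # placeholder destination group ID\n        'import_url': import_url,  # import the ADO repository by URL\n    },\n    timeout=30,\n)\nresp.raise_for_status()\nprint(resp.json()['web_url'])\n```\n\n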
Use this guide to plan and execute your migration and complete post-migration follow-up tasks.\n\n\nEnterprises migrating from ADO to GitLab commonly follow a multi-phase approach:\n\n\n- Migrate repositories from ADO to GitLab using Congregate or GitLab's built-in repository migration.\n\n- Migrate pipelines from Azure Pipelines to GitLab CI/CD.\n\n- Migrate remaining assets such as boards, work items, and artifacts to GitLab Issues, Epics, and the Package and Container Registries.\n\n\nHigh-level migration phases:\n\n\n```mermaid\ngraph LR\n    subgraph Prerequisites\n        direction TB\n        A[\"Set up identity provider (IdP) and\u003Cbr/>provision users\"]\n        A --> B[\"Set up runners and\u003Cbr/>third-party integrations\"]\n        B --> I[\"Users enablement and\u003Cbr/>change management\"]\n    end\n    \n    subgraph MigrationPhase[\"Migration phase\"]\n        direction TB\n        C[\"Migrate source code\"]\n        C --> D[\"Preserve contributions and\u003Cbr/>format history\"]\n        D --> E[\"Migrate work items and\u003Cbr/>map to \u003Ca href=\"https://docs.gitlab.com/topics/plan_and_track/\">GitLab Plan \u003Cbr/>and track work\u003C/a>\"]\n    end\n    \n    subgraph PostMigration[\"Post-migration steps\"]\n        direction TB\n        F[\"Create or translate \u003Cbr/>ADO pipelines to GitLab CI\"]\n        F --> G[\"Migrate other assets\u003Cbr/>packages and container images\"]\n        G --> H[\"Introduce \u003Ca href=\"https://docs.gitlab.com/user/application_security/secure_your_application/\">security\u003C/a> and\u003Cbr/>SDLC improvements\"]\n    end\n    \n    Prerequisites --> MigrationPhase\n    MigrationPhase --> PostMigration\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style I fill:#FC6D26\n    style C fill:#8C929D\n    style D fill:#8C929D\n    style E fill:#8C929D\n    style F fill:#FFA500\n    style G fill:#FFA500\n    style H fill:#FFA500\n```\n\n\n## Planning your migration\n\n\n**To plan your migration, ask these questions:**\n\n\n- How soon do we need to complete the migration?\n\n- Do we understand what will be migrated?\n\n- Who will run the migration?\n\n- What organizational structure do we want in GitLab?\n\n- Are there any constraints, limitations, or pitfalls that need to be taken into account?\n\n\nDetermine your timeline, as it will largely dictate your migration approach. Identify champions or groups familiar with both ADO and GitLab platforms (such as early adopters) to help drive adoption and provide guidance.\n\n\n**Inventory what you need to migrate:**\n\n\n- The number of repositories, pull requests, and contributors\n\n- The number and complexity of work items and pipelines\n\n- Repository sizes and dependency relationships\n\n- Critical integrations and runner requirements (agent pools with specific capabilities)\n\n\nUse the GitLab Professional Services [Evaluate](https://gitlab.com/gitlab-org/professional-services-automation/tools/utilities/evaluate#beta-azure-devops) tool to produce a complete inventory of your entire Azure DevOps organization, including repositories, PR counts, contributor lists, number of pipelines, work items, CI/CD variables, and more. If you're working with the GitLab Professional Services team, share this report with your engagement manager or technical architect to help plan the migration.\n\n\nMigration timing is primarily driven by pull request count, repository size, and the amount of contributions (e.g., comments in PRs, work items, etc.). 
For example, 1,000 small repositories with few PRs and limited contributors can migrate much faster than a smaller set of repositories containing tens of thousands of PRs and thousands of contributors. Use your inventory data to estimate effort and plan test runs before proceeding with production migrations.\n\n\nCompare inventory against your desired timeline and decide whether to migrate all repositories at once or in batches. If teams cannot migrate simultaneously, batch and stagger migrations to align with team schedules. For example, in Professional Services engagements, we organize migrations into waves of 200-300 projects to manage complexity and respect API rate limits, both in [GitLab](https://docs.gitlab.com/security/rate_limits/) and [ADO](https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops).\n\n\nGitLab's built-in [repository importer](https://docs.gitlab.com/user/project/import/repo_by_url/) migrates Git repositories (commits, branches, and tags) one-by-one. Congregate is designed to preserve pull requests (known in GitLab as merge requests), comments, and related metadata where possible; the simple built-in repository import focuses only on the Git data (history, branches, and tags).\n\n\n**Items that typically require separate migration or manual recreation:**\n\n\n- Azure Pipelines - create equivalent GitLab CI/CD pipelines (consult with [CI/CD YAML](https://docs.gitlab.com/ci/yaml/) and/or with [CI/CD components](https://docs.gitlab.com/ci/components/)). Alternatively, consider using AI-based pipeline conversion available in Congregate.\n\n- Work items and boards - map to GitLab Issues, Epics, and Issue Boards.\n\n- Artifacts, container images (ACR) - migrate to GitLab Package Registry or Container Registry.\n\n- Service hooks and external integrations - recreate in GitLab.\n\n- [Permissions models](https://docs.gitlab.com/user/permissions/) differ between ADO and GitLab; review and plan permissions mapping rather than assuming exact preservation.\n\n\nReview what each tool (Congregate vs. built-in import) will migrate and choose the one that fits your needs. Make a list of any data or integrations that must be migrated or recreated manually.\n\n\n**Who will run the migration?**\n\n\nMigrations are typically run by a GitLab group owner or instance administrator, or by a designated migrator who has been granted the necessary permissions on the destination group/project. Congregate and the GitLab import APIs require valid authentication tokens for both Azure DevOps and GitLab.\n\n\n- Decide whether a group owner/admin will perform the migrations or whether you will grant a specific team/person delegated access.\n\n- Ensure the migrator has correctly configured personal access tokens (Azure DevOps and GitLab) with the scopes required by your chosen migration tool (for example, api/read_repository scopes and any tool-specific requirements). \n\n- Test tokens and permissions with a small pilot migration.\n\n**Note:** Congregate leverages file-based import functionality for ADO migrations and requires instance administrator permissions to run ([see our documentation](https://docs.gitlab.com/user/project/settings/import_export/#migrate-projects-by-uploading-an-export-file)). If you are migrating to GitLab.com, consider engaging Professional Services. For more information, see the [Professional Services Full Catalog](https://about.gitlab.com/professional-services/catalog/). 
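A non-admin account cannot preserve contribution attribution.\n\n\nBefore granting access or running a pilot, it is worth sanity-checking both tokens. Below is a minimal sketch, assuming the Python `requests` package; the instance URL, organization name, and environment variable names are placeholders:\n\n```python\nimport os\nimport requests\n\n# GitLab: confirms the token works and shows which account migrations will run as.\ngl = requests.get(\n    'https://gitlab.example.com/api/v4/user',  # placeholder instance URL\n    headers={'PRIVATE-TOKEN': os.environ['GITLAB_TOKEN']},\n    timeout=30,\n)\ngl.raise_for_status()\nprint('GitLab user:', gl.json()['username'])\n\n# Azure DevOps: a PAT is sent as the basic-auth password; the username is ignored.\nado = requests.get(\n    'https://dev.azure.com/my-org/_apis/projects?api-version=7.0',  # placeholder org\n    auth=('', os.environ['ADO_PAT']),\n    timeout=30,\n)\nado.raise_for_status()\nprint('ADO projects visible:', ado.json()['count'])\n```\n\n\n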
**What organizational structure do we want in GitLab?**\n\nWhile it's possible to map ADO structure directly to GitLab structure, it's recommended to rationalize and simplify the structure during migration. Consider how teams will work in GitLab and design the structure to facilitate collaboration and access management. Here is a way to think about mapping ADO structure to GitLab structure:\n\n\n```mermaid\ngraph TD\n    subgraph GitLab\n        direction TB\n        A[\"Top-level Group\"]\n        B[\"Subgroup (optional)\"]\n        C[\"Projects\"]\n        A --> B\n        A --> C\n        B --> C\n    end\n\n    subgraph AzureDevOps[\"Azure DevOps\"]\n        direction TB\n        F[\"Organizations\"]\n        G[\"Projects\"]\n        H[\"Repositories\"]\n        F --> G\n        G --> H\n    end\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style C fill:#FC6D26\n    style F fill:#8C929D\n    style G fill:#8C929D\n    style H fill:#8C929D\n```\n\nRecommended approach:\n\n\n- Map each ADO organization to a GitLab group (or a small set of groups), not to many small groups. Avoid creating a GitLab group for every ADO team project. Use migration as an opportunity to rationalize your GitLab structure.\n\n- Use subgroups and project-level permissions to group related repositories.\n\n- Manage access to sets of projects by using GitLab groups and group membership (groups and subgroups) rather than one group per team project.\n\n- Review GitLab [permissions](https://docs.gitlab.com/ee/user/permissions.html) and consider [SAML Group Links](https://docs.gitlab.com/user/group/saml_sso/group_sync/) to implement an enterprise RBAC model for your GitLab instance (or a GitLab.com namespace).\n\n\n**ADO Boards and work items: State of migration**\n\n\nIt's important to understand how work items migrate from ADO into GitLab Plan (issues, epics, and boards).\n\n\n- ADO Boards and work items map to GitLab Issues, Epics, and Issue Boards. Plan how your workflows and board configurations will translate.\n\n- ADO Epics and Features become GitLab Epics.\n\n- Other work item types (e.g., user stories, tasks, bugs) become project-scoped issues.\n\n- Most standard fields are preserved; selected custom fields can be migrated when supported.\n\n- Parent-child relationships are retained so Epics reference all related issues.\n\n- Links to pull requests are converted to merge request links to maintain development traceability.\n\n\nExample: Migration of an individual work item to a GitLab Issue, including field accuracy and relationships:\n\n\n![Example: Migration of an individual work item to a GitLab Issue](https://res.cloudinary.com/about-gitlab-com/image/upload/v1764769188/ztesjnxxfbwmfmtckyga.png)\n\n\nBatching guidance:\n\n\n- If you need to run migrations in batches, use your new group/subgroup structure to define batches (for example, by ADO organization or by product area).\n\n- Use inventory reports to drive batch selection and test each batch with a pilot migration before scaling.\n\n\n**Pipelines migration**\n\n\nCongregate [recently introduced](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/merge_requests/1298) AI-powered conversion for multi-stage YAML pipelines from Azure DevOps to GitLab CI/CD. This automated conversion works best for simple, single-file pipelines and is designed to provide a working starting point rather than a production-ready `.gitlab-ci.yml` file. 
The tool generates a functionally equivalent GitLab pipeline that you can then refine and optimize for your specific needs.\n\n\n- Converts Azure Pipelines YAML to `.gitlab-ci.yml` format automatically.\n\n- Best suited for straightforward, single-file pipeline configurations.\n\n- Provides a boilerplate to accelerate migration, not a final production artifact.\n\n- Requires review and adjustment for complex scenarios, custom tasks, or enterprise requirements.\n\n- Does not support Azure DevOps classic release pipelines — [convert these to multi-stage YAML](https://learn.microsoft.com/en-us/azure/devops/pipelines/release/from-classic-pipelines?view=azure-devops) first.\n\n\nRepository owners should review the [GitLab CI/CD documentation](https://docs.gitlab.com/ci/) to further optimize and enhance their pipelines after the initial conversion.\n\n\nExample of converted pipelines:\n\n\n```yml \n\n# azure-pipelines.yml\n\ntrigger:\n  - main\n\nvariables:\n  imageName: myapp\n\nstages:\n  - stage: Build\n    jobs:\n      - job: Build\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Build Docker image\n            inputs:\n              command: build\n              repository: $(imageName)\n              Dockerfile: '**/Dockerfile'\n              tags: |\n                $(Build.BuildId)\n\n  - stage: Test\n    jobs:\n      - job: Test\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          # Example: run tests inside the container\n          - script: |\n              docker run --rm $(imageName):$(Build.BuildId) npm test\n            displayName: Run tests\n\n  - stage: Push\n    jobs:\n      - job: Push\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Login to ACR\n            inputs:\n              command: login\n              containerRegistry: '\u003Cyour-acr-service-connection>'\n\n          - task: Docker@2\n            displayName: Push image to ACR\n            inputs:\n              command: push\n              repository: $(imageName)\n              tags: |\n                $(Build.BuildId)\n\n```\n\n```yaml\n\n# .gitlab-ci.yml\n\nvariables:\n  imageName: myapp\n\nstages:\n  - build\n  - test\n  - push\n\nbuild:\n  stage: build\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker build -t $imageName:$CI_PIPELINE_ID -f $(find . -name Dockerfile) .\n  only:\n    - main\n\ntest:\n  stage: test\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker run --rm $imageName:$CI_PIPELINE_ID npm test\n  only:\n    - main\n\npush:\n  stage: push\n  image: docker:latest\n  services:\n    - docker:dind\n  before_script:\n    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY\n  script:\n    - docker tag $imageName:$CI_PIPELINE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n  only:\n    - main\n\n```\n\n**Final checklist:**\n\n\n- Decide timeline and batch strategy.\n\n- Produce a full inventory of repositories, PRs, and contributors.\n\n- Choose Congregate or the built-in import based on scope (PRs and metadata vs. 
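Git data only).\n\n- Decide who will run migrations and ensure tokens/permissions are configured.\n\n- Identify assets that must be migrated separately (pipelines, work items, artifacts, and hooks) and plan those efforts.\n\n- Run pilot migrations, validate results (a scripted status check such as the sketch below can help), then scale according to your plan.\n\n\nWhen validating pilot imports programmatically, one option is to poll each project's `import_status` through the GitLab Projects API. Below is a minimal sketch, assuming the Python `requests` package; the instance URL and project IDs are placeholders:\n\n```python\nimport os\nimport time\nimport requests\n\nGITLAB_URL = 'https://gitlab.example.com'  # placeholder instance URL\nHEADERS = {'PRIVATE-TOKEN': os.environ['GITLAB_TOKEN']}\nPROJECT_IDS = [101, 102, 103]  # placeholder IDs for the pilot wave\n\npending = set(PROJECT_IDS)\nwhile pending:\n    for pid in sorted(pending):  # sorted() copies, so we can discard while looping\n        project = requests.get(f'{GITLAB_URL}/api/v4/projects/{pid}', headers=HEADERS, timeout=30).json()\n        status = project.get('import_status', 'none')  # scheduled, started, finished, or failed\n        if status in ('finished', 'failed', 'none'):\n            print(project['path_with_namespace'], status)\n            pending.discard(pid)\n    if pending:\n        time.sleep(30)  # back off between polling rounds to respect rate limits\n```\n\n\n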
## Running your migrations\n\n\nAfter planning, execute migrations in stages, starting with trial runs. Trial migrations help surface org-specific issues early and let you measure duration, validate outcomes, and fine-tune your approach before production.\n\n\nWhat trial migrations validate:\n\n\n- Whether a given repository and related assets migrate successfully (history, branches, tags; plus MRs/comments if using Congregate)\n\n- Whether the destination is usable immediately (permissions, runners, CI/CD variables, integrations)\n\n- How long each batch takes, to set schedules and stakeholder expectations\n\n\nDowntime guidance:\n\n\n- GitLab's built-in Git import and Congregate do not inherently require downtime.\n\n- For production waves, freeze changes in ADO (branch protections or read-only) to avoid missed commits, PR updates, or work items created mid-migration.\n\n- Trial runs do not require freezes and can be run anytime.\n\n\nBatching guidance:\n\n\n- Run trial batches back-to-back to shorten elapsed time; let teams validate results asynchronously.\n\n- Use your planned group/subgroup structure to define batches and respect API rate limits.\n\n\nRecommended steps:\n\n\n1. Create a test destination in GitLab for trials:\n\n\n  - GitLab.com: create a dedicated group/namespace (for example, `my-org-sandbox`)\n\n  - Self-managed: create a top-level group or a separate test instance if needed\n\n\n2. Prepare authentication:\n\n\n  - Azure DevOps PAT with required scopes.\n\n  - GitLab Personal Access Token with `api` and `read_repository` scopes (plus admin access for file-based imports used by Congregate).\n\n\n3. Run trial migrations:\n\n\n  - Repos only: use GitLab's built-in import (Repo by URL)\n\n  - Repos + PRs/MRs and additional assets: use Congregate\n\n\n4. Post-trial follow-up:\n\n\n  - Verify repo history, branches, tags; merge requests (if migrated), issues/epics (if migrated), labels, and relationships.\n\n  - Check permissions/roles, protected branches, required approvals, runners/tags, variables/secrets, integrations/webhooks.\n\n  - Validate pipelines (`.gitlab-ci.yml`) or converted pipelines where applicable.\n\n\n5. Ask users to validate functionality and data fidelity.\n\n6. Resolve issues uncovered during trials and update your runbooks.\n\n7. Network and security:\n\n\n  - If your destination uses IP allow lists, add the IPs of your migration host and any required runners/integrations so imports can succeed.\n\n\n8. Run production migrations in waves:\n\n\n  - Enforce change freezes in ADO during each wave.\n\n  - Monitor progress and logs; retry or adjust batch sizes if you hit rate limits.\n\n\n9. 
Optional: remove the sandbox group or archive it after you finish.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/ibIXGfrVbi4?si=ZxOVnXjCF-h4Ne0N\" frameborder=\"0\" allowfullscreen=\"true\">\u003C/iframe>\n\u003C/figure>\n\n\n## Terminology reference for GitLab and Azure DevOps\n\n| GitLab                                                           | Azure DevOps                                 | Similarities & Key Differences                                                                                                                                          |\n| ---------------------------------------------------------------- | -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Group                                                            | Organization                                 | Top-level namespace, membership, policies. ADO org contains Projects; GitLab Group contains Subgroups and Projects.                                                   |\n| Group or Subgroup                                                | Project                                      | Logical container, permissions boundary. ADO Project holds many repos; GitLab Groups/Subgroups organize many Projects.                                                |\n| Project (includes a Git repo)                                    | Repository (inside a Project)                | Git history, branches, tags. In GitLab, a \"Project\" is the repo plus issues, CI/CD, wiki, etc. One repo per Project.                                                  |\n| Merge Request (MR)                                               | Pull Request (PR)                            | Code review, discussions, approvals. MR rules include approvals, required pipelines, code owners.                                                                     |\n| Protected Branches, MR Approval Rules, Status Checks             | Branch Policies                              | Enforce reviews and checks. GitLab combines protections + approval rules + required status checks.                                                                    |\n| GitLab CI/CD                                                     | Azure Pipelines                              | YAML pipelines, stages/jobs, logs. ADO also has classic UI pipelines; GitLab centers on .gitlab-ci.yml.                                                               |\n| .gitlab-ci.yml                                                   | azure-pipelines.yml                          | Defines stages/jobs/triggers. Syntax/features differ; map jobs, variables, artifacts, and triggers.                                                                   |\n| Runners (shared/specific)                                        | Agents / Agent Pools                         | Execute jobs on machines/containers. Target via demands (ADO) vs tags (GitLab). Registration/scoping differs.                                                         |\n| CI/CD Variables (project/group/instance), Protected/Masked       | Pipeline Variables, Variable Groups, Library | Pass config/secrets to jobs. GitLab supports group inheritance and masking/protection flags.                                                                          
|\n| Integrations, CI/CD Variables, Deploy Keys                       | Service Connections                          | External auth to services/clouds. Map to integrations or variables; cloud-specific helpers available.                                                                 |\n| Environments & Deployments (protected envs)                      | Environments (with approvals)                | Track deploy targets/history. Approvals via protected envs and manual jobs in GitLab.                                                                                 |\n| Releases (tag + notes)                                           | Releases (classic or pipelines)              | Versioned notes/artifacts. GitLab Release ties to tags; deployments tracked separately.                                                                               |\n| Job Artifacts                                                    | Pipeline Artifacts                           | Persist job outputs. Retention/expiry configured per job or project.                                                                                                  |\n| Package Registry (NuGet/npm/Maven/PyPI/Composer, etc.)           | Azure Artifacts (NuGet/npm/Maven, etc.)      | Package hosting. Auth/namespace differ; migrate per package type.                                                                                                     |\n| GitLab Container Registry                                        | Azure Container Registry (ACR) or others     | OCI images. GitLab provides per-project/group registries.                                                                                                             |\n| Issue Boards                                                     | Boards                                       | Visualize work by columns. GitLab boards are label-driven; multiple boards per project/group.                                                                         |\n| Issues (types/labels), Epics                                     | Work Items (User Story/Bug/Task)             | Track units of work. Map ADO types/fields to labels/custom fields; epics at group level.                                                                              |\n| Epics, Parent/Child Issues                                       | Epics/Features                               | Hierarchy of work. Schema differs; use epics + issue relationships.                                                                                                   |\n| Milestones and Iterations                                        | Iteration Paths                              | Time-boxing. GitLab Iterations (group feature) or Milestones per project/group.                                                                                       |\n| Labels (scoped labels)                                           | Area Paths                                   | Categorization/ownership. Replace hierarchical areas with scoped labels.                                                                                              |\n| Project/Group Wiki                                               | Project Wiki                                 | Markdown wiki. Backed by repos in both; layout/auth differ slightly.                                                                                                  
|\n| Test reports via CI, Requirements/Test Management, integrations  | Test Plans/Cases/Runs                        | QA evidence/traceability. No 1:1 with ADO Test Plans; often use CI reports + issues/requirements.                                                                     |\n| Roles (Owner/Maintainer/Developer/Reporter/Guest) + custom roles | Access levels + granular permissions         | Control read/write/admin. Models differ; leverage group inheritance and protected resources.                                                                          |\n| Webhooks                                                         | Service Hooks                                | Event-driven integrations. Event names/payloads differ; reconfigure endpoints.                                                                                        |\n| Advanced Search                                                  | Code Search                                  | Full-text repo search. Self-managed GitLab may need Elasticsearch/OpenSearch for advanced features.                                                                   |\n","2025-12-03","2026-01-16","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749658924/Blog/Hero%20Images/securitylifecycle-light.png",[737,738],"Evgeny Rudinsky","Michael Leopard","Guide: Migrate from Azure DevOps to GitLab","Learn how to carry out the full migration from Azure DevOps to GitLab using GitLab Professional Services migration tools — from planning and execution to post-migration follow-up tasks.",{"featured":28,"template":13,"slug":742},"migration-from-azure-devops-to-gitlab",{"promotions":744},[745,759,770],{"id":746,"categories":747,"header":749,"text":750,"button":751,"image":756},"ai-modernization",[748],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":752,"config":753},"Get your AI maturity score",{"href":754,"dataGaName":755,"dataGaLocation":242},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":757},{"src":758},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":760,"categories":761,"header":762,"text":750,"button":763,"image":767},"devops-modernization",[724,556],"Are you just managing tools or shipping innovation?",{"text":764,"config":765},"Get your DevOps maturity score",{"href":766,"dataGaName":755,"dataGaLocation":242},"/assessments/devops-modernization-assessment/",{"config":768},{"src":769},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":771,"categories":772,"header":774,"text":750,"button":775,"image":779},"security-modernization",[773],"security","Are you trading speed for security?",{"text":776,"config":777},"Get your security maturity score",{"href":778,"dataGaName":755,"dataGaLocation":242},"/assessments/security-modernization-assessment/",{"config":780},{"src":781},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"header":783,"blurb":784,"button":785,"secondaryButton":790},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":786,"config":787},"Get your free trial",{"href":788,"dataGaName":49,"dataGaLocation":789},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":494,"config":791},{"href":53,"dataGaName":54,"dataGaLocation":789},1772652080772]