[{"data":1,"prerenderedAt":794},["ShallowReactive",2],{"/en-us/blog/proposed-server-purchase-for-gitlab-com":3,"navigation-en-us":33,"banner-en-us":433,"footer-en-us":443,"blog-post-authors-en-us-Sid Sijbrandij":685,"blog-related-posts-en-us-proposed-server-purchase-for-gitlab-com":703,"assessment-promotions-en-us":745,"next-steps-en-us":784},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":22,"isFeatured":12,"meta":23,"navigation":24,"path":25,"publishedDate":20,"seo":26,"stem":30,"tagSlugs":31,"__hash__":32},"blogPosts/en-us/blog/proposed-server-purchase-for-gitlab-com.yml","Proposed Server Purchase For Gitlab Com",[7],"sid-sijbrandij",null,"engineering",{"slug":11,"featured":12,"template":13},"proposed-server-purchase-for-gitlab-com",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9},"Proposed server purchase for GitLab.com","What hardware we're considering purchasing now that we have to move GitLab.com to metal.",[18],"Sid Sijbrandij","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666262/Blog/Hero%20Images/default-blog-image.png","2016-12-11","\n\nWe want to make GitLab.com fast and we [knew it was time to leave the cloud](/blog/why-choose-bare-metal/) and purchase our own servers.\nIn this post is our thinking about what chassis, rack, memory, CPU, network, power, and hosting to buy.\nWe wanted to share what we learned and get your feedback on our proposal and questions.\nWhen you reply to a question in the comments on our blog or Hacker News please reference it with the letter and number: 'Regarding R1'.\nWe'll try to update the questions with preliminary answers as we learn more.\n\n\u003C!-- more -->\n\n## Overview\n\nToday, GitLab.com hosts 96TB of data, and that number is growing rapidly. We\nare attempting to build a fault-tolerant and performant CephFS cluster. We are\nalso attempting to move GitLab application servers and supporting services\n(e.g. PostgreSQL) to bare metal.\n\nNote that for now our CI Runners will stay in the cloud. Not only are they are\nmuch less sensitive to latency, but autoscaling is easier with a cloud service.\n\n### Chassis\n\nOne of the team members that will join GitLab in 2017 recommended using a [6028TP-HTTR SuperMicro 2U Twin2 server](https://www.supermicro.nl/products/system/2U/6028/SYS-6028TP-HTTR.cfm) chassis that has 4 dual processor nodes and is 2 [rack units](https://en.wikipedia.org/wiki/Rack_unit) (U) high. The advantages are:\n\n1. Great density, 0.5U per dual processor server\n1. You have one common form factor\n1. Power supplies are shared for great efficiency similar to [blade servers](https://en.wikipedia.org/wiki/Blade_server)\n1. The network is per node for more bandwidth and reliability (like individual server)\n\nWe use the [2U Twin2](https://www.supermicro.com/products/nfo/2UTwin2.cfm) instead of the [1U Twin](https://www.supermicro.com/products/nfo/1UTwin.cfm) because it fits one more 3.5\" hard drive (3 per node instead of 2).\n\nThis server is on the list of global SKU's for SuperMicro.\nWe'll also ask for quotes from other vendors to see if they have a competitive alternative.\nFor example HPE has the [Apollo 2000 series](https://www.hpe.com/h20195/v2/getpdf.aspx/c04542552.pdf?ver=7).\n\nC1 Should we use another version of the chassis than HTTR?\n\nC2 What is the best Dell equivalent? => [C6320](http://www.dell.com/us/business/p/poweredge-c6320/pd)\n\n### Servers\n\nWe need the following servers:\n\n1. 32x File storage (CephFS OSD)\n1. 3x File Monitoring (CephFS MON)\n1. 8x Application server ([Unicorn](https://bogomips.org/unicorn/))\n1. 7x Background jobs ([Sidekiq](http://sidekiq.org/))\n1. 5x Key value store ([Redis Sentinel](https://redis.io/topics/sentinel))\n1. 4x Database (PostgreSQL)\n1. 3x Load balancers (HAproxy)\n1. 1x Staging\n1. 1x Spare\n\nFor a total of 64 nodes.\n\nWe would like to have one common node so that they are interchangeable.\nThis would mean installing only a few disks per node instead of having large fileservers.\nThis would distribute failures and IO.\n\n![IOPS on GitLab.com](https://about.gitlab.com/images/blogimages/write_iops.png)\n\nThe above picture shows the currently number of Input/output Operations Per\nSecond (IOPS) on GitLab.com. On our current NFS servers, our peak write IOPS\noften hit close to 500K, and our peak read IOPS reach 200K. These numbers\nsuggest that using spinning disks alone may not be enough; we need to use\nhigh-performance SSDs judiciously.\n\nOne task that we could not fit on the common nodes was PostgreSQL.\nOur current plan is to make PostgreSQL distributed in 2017 with the help of [Citus](https://www.citusdata.com/).\nBut for now, we need to scale vertically so we need a lot of memory and CPU.\nWe need at least a primary and secondary database.\nWe wanted to add a second pair for testing and to ensure spares in case of failure.\nDetails about this are in the following sections.\n\nChoosing a common node will mean that file storage servers will have too much CPU and that application servers will have too much disk space.\nWe plan to remedy that by running everything on Kubernetes.\nThis allows us to have a blended workload using all CPU and disk.\nFor example we can combine file storage and background jobs on the same server since one is disk heavy and one is CPU heavy.\nWe will start by having one workload per server to reduce complexity.\nThis means that when we need to grow we can still unlock almost twice as much disk space and CPU by blending the workloads.\nPlease note that this will be container based, to get maximum IO performance we won't virtualize our workload.\n\nS1 Shall we spread the database servers among different chassis to make sure they don't all fail when one chassis fails?\n\nS2 Does Ceph handle running 60 OSD nodes well or can this cause problems?\n\n### CPU\n\nThe [SuperServer 6028TP-HTTR](https://www.supermicro.nl/products/system/2U/6028/SYS-6028TP-HTTR.cfm) supports dual E5-2600v4 processors per node.\nWe think the [E5-2630v4](http://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-2630-v4-25M-Cache-2_20-GHz) is a good blend of power and cost.\nIt has 20 virtual cores at 2.20Ghz, 25MB cache, and costs about $669 per processor.\nEvery physical core is two virtual cores due to [hyperthreading](https://en.wikipedia.org/wiki/Hyper-threading).\nA slightly more powerful processor is the [E5-2640v4](https://ark.intel.com/products/92984/Intel-Xeon-Processor-E5-2640-v4-25M-Cache-2_40-GHz) but while the [SPECint score](https://en.wikipedia.org/wiki/SPECint) increases from 845 to 887 the costs increase from $669 to $939.\nYou can find the scores by entering a [search on spec.org](https://www.spec.org/cgi-bin/osgresults?conf=rint2006) with 'Hewlett Packard Enterprise' as the hardware vendor and looking for ProLiant DL360 Gen9 as the platform.\n\nOur current SQL server has one E5-2698B v3 with 32 virtual cores.\nPostgreSQL commonly uses about 20-25 virtual cores.\nMoving to dual processors should already help a lot.\nTo give us more months to grow before having to distribute the database we want to purchase some headroom.\nThat is why we're getting a [E5-2687Wv4](https://ark.intel.com/products/91750/Intel-Xeon-Processor-E5-2687W-v4-30M-Cache-3_00-GHz) for the database servers.\nThis processor costs $2100 instead of $670 but has 4 extra virtual cores and runs continuously on 3 Ghz instead of 2.2 Ghz.\nComprated to the E5-2630v4 that leads to a SPEC score or 1230 instead of 845 and 51.3 SPEC per virtual core instead of 42.3.\nFor the 4 dual processor database servers this upgrade will cost $11k.\nWe think it is worth it since the 20-40% of extra performance will buy us the month or two of extra time to distribute the database that we need.\n\n### Disk\n\nEvery node can fit 3 larger (3.5\") harddrives.\nWe plan to purchase the largest one available, a 8TB Seagate with 6Gb/s SATA and 7.2K RPM.\nAt 60 nodes this will give us 1.4PB of raw storage.\nAt a replication factor of 3 for Ceph this is 480TB of usable storage.\nRight now GitLab.com uses 96TB (54TB for repo's, 21TB for uploads, 21TB for LFS and build artifacts) so we can grow by a factor of almost 5.\n\nDisks can be slow so we looked at improving latency.\nHigher RPM hard drives typically come in [GB instead of TB sizes](http://www.seagate.com/enterprise-storage/hard-disk-drives/enterprise-performance-15k-hdd/).\nGoing all SSD is too expensive.\nTo improve latency we plan to fit every server with an SSD card.\nOn the fileservers this will be used as a cache.\nWe're thinking about using [Bcache](https://en.wikipedia.org/wiki/Bcache) for this.\n\nWe plan to use [Intel DC P3700 series](http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-p3700-spec.html) or slight less powerful [P3600 series](http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-p3600-spec.html) of SSD's because they are recommended by the CephFS experts we hired.\nFor most servers it will be the [800GB SSDPEDMD800G4](http://www.supermicro.com/products/nfo/PCI-E_SSD.cfm?show=Intel).\nFor the database servers we plan to use the 1.6TB variant to have more headroom.\nThe endurance we need for the database server is 90TB/year, the 3600 series is already above 4PB of endurance.\n\nWe plan to add a 64GB [SSD SATADOM boot drive](https://www.supermicro.com/products/nfo/SATADOM.cfm) to the servers to boot from.\nThis way we can keep the large SSD as a separate volume.\n\nD1 We plan to configure the disks as just a bunch of disks (JBOD) but heard that this caused performance problems with some controllers. Is this likely to impact us?\n\nD2 Should we use Bcache to improve latency on the Ceph OSD servers with SSD? => Make sure you're using a kernel >= 4.5, since that's when a bunch of stability patches landed (https://lkml.org/lkml/2015/12/5/38).\n\nD3 We heard concerns about fitting the PCIe 3.0 x 4 SSD card into [our chassis](https://www.supermicro.nl/products/system/2U/6028/SYS-6028TP-HTTR.cfm) that supports a PCI-E 3.0 x16 Low-profile slot. Will this fit? => [Florian Heigl](http://disq.us/p/1eedj2n): \"Somewhat unlikely you will be able to fit a P3700. I have a Twin^2 too and the only SSD I could fit there was a consumer NVME with a PCIe adapter board.\"\n\nD4 Should we ask for 8TB HGST drives instead of Seagate since they seem [more reliable](https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/).\n\nD5 Is it a good idea to have a boot drive or should we use [PXE boot](https://en.wikipedia.org/wiki/Preboot_Execution_Environment) every time it starts? => [dsr_](https://news.ycombinator.com/item?id=13153336): You want a local boot drive, and you want it to fall back to PXE booting if the local drive is unavailable. Your PXE image should default to the last known working image, and have a boot-time menu with options for a rescue image and an installer for your distribution of choice.\n\nD6 Should we go for the 3700 series SSD or save some money and go for the 3600 series? Both for the normal and the SQL servers?\n\nD7 We're planning on one SSD per node. For the OSD nodes (file server) that would mean having the Ceph journal and bcache on the same SSD. Is this a good idea?\n\n### Memory\n\nSuppose one node runs both as application server and fileserver.\nWe recommend virtual cores + 1 instances of Unicorn of about 0.5GB each, for a total of 21GB per node (2 processors * 21 unicorns per processor * 0.5GB).\nCeph recommends about 1GB per TB of data which comes out to 24 per node.\nSo theoretically we can fit everything in 45GB so 64GB should be enough.\n\nBut in practice we've seen 24TB OSD nodes use 79GB of memory.\nAnd the rule of thumb is have about 2GB per virtual core for background jobs available (40GB).\nSo in order not to be to low we'll spend the extra $30k to have 128GB of ECC memory per node instead of 64GB.\n\nFor the SQL nodes we'll need much more memory, we currently give it 440GB and it uses all of that.\nThe database is about 250GB in size and growing with 40GB per month.\nAt 250GB of server memory we redlined the server, probably because it no longer fits into memory.\nTheoretically the server supports 2TB of memory but it needs to fit in 16 memory slots per node.\nWe wanted to start with 1TB per server but we're not sure if we should go from a 64GB DIMM to 128GB to be able to expand later.\nBy having only half of the memory banks full you get half the bandwidth.\nAnd 64GB DIMMs already cost twice as much per GB as 32GB DIMMs, let alone 128GB ones.\nAt a price of about $940 per 64 DIMM the cost for 1TB of memory already is $15k per server.\n\nNote that larger sizes such as 64GB come in the form of LRDIMM that has a [small performance penalty](https://www.microway.com/hpc-tech-tips/ddr4-rdimm-lrdimm-performance-comparison/) but this looks acceptable.\n\nM1. Should we use 128GB DIMMS to be able to expand the database server later even though the will double the cost and half the bandwidth?\n\n### Network\n\nThe servers come with 2x 10Gbps RJ45 by default (Intel X540 Dual port 10GBase-T).\nWe want to [dual bound](https://docs.oracle.com/cd/E37670_01/E41138/html/ch11s05.html) the network connections to increase performance and reliability.\nThis will allow us to take routers out of service during low traffic times, for example to restart them after a software upgrade.\nWe think that 20Gbps is enough bandwidth to handle our data access and replication needs, right now our highest peaks are 1 Gbps.\nThis is important because we want to have minimal latency between the Ceph servers so network congestion would be a problem.\n\nCeph reference designs recommend a separated front and back network with the back network reserved for Ceph traffic.\nWe think that this is not needed as long as there is enough capacity.\nWe do want to have user request termination in a DMZ, so our HA proxy servers will be the only ones with a public IP.\n\nEach of the two physical network connections will connect to a different top of rack router.\nWe want to get a Software Defined Networking (SDN) compatible router so we have flexibility there.\nWe're considering the [10/40GbE SDN SuperSwitch (SSE-X3648S/SSE-X3648SR)](https://www.supermicro.com/products/accessories/Networking/SSE-X3648S.cfm) that can switch 1440 Gbps.\n\nApart from those routers we'll have a separate router for a 1Gbps management network.\nFor example to make [STONITH](https://en.wikipedia.org/wiki/STONITH) reliable when there is a lot of traffic on the normal network.\nEach node already has a separate 1Gbps connection for this.\n\nWe have 64+1 nodes (1 for backup) and most routers seem to have 48 ports.\nEvery node has 2 network ports so that is a need for 130 ports in total.\nWe're not use if we can use 3 routers with 48 ports each (144 in total) to cover that.\n\nN1 Which router should we purchase?\n\nN2 How do we interconnect the routers while keeping the network simple and fast?\n\nN3 Should we have a separate network for Ceph traffic?\n\nN4 Do we need an SDN compatible router or can we purchase something more affordable?\n\nN5 What router should we use for the management network?\n\n### Backup\n\nWe're still early in figuring out the backup solution so there are still lots of questions.\n\nBacking up 480TB of data (expected size in 2017) is pretty hard.\nWe thought about using [Google Nearline](https://cloud.google.com/storage-nearline/) because with a price of $0.01 per GB per month means that for $4800 we don't have to worry about much.\nBut restoring that over a 1Gbps connection takes 44 days, way too long.\n\nWe mainly want our backup to protect us against human and software errors.\nBecause all the files are already replicated 3 times hardware errors are unlikely to affect us.\nOf course we should have a good [Ceph CRUSH map](http://docs.ceph.com/docs/jewel/rados/operations/crush-map/) to prevent storing multiple copies on the same chassis.\n\nWe're most afraid of human error or Ceph corruption. For that reason we don't want to replicate on the Ceph level but on the file level.\n\nWe're thinking about using [Bareos backup software](https://www.bareos.org/en/) to replicate to a huge fileserver.\nWe're inspired by the posts about the [latest 480TB Backblaze storage pod 6.0](https://www.backblaze.com/blog/open-source-data-storage-server/) and these are available for $6k without drives from [Backuppods](https://www.backuppods.com/).\nBut SuperMicro offers a [comparable solution in the form of a SuperChassis that can hold 90 drives](https://www.supermicro.com/products/chassis/4U/946/SC946ED-R2KJBOD).\nAt 8TB per drive that is 720TB of raw storage.\nEven with RAID overhead it should be possible to have 480TB of usable storage (66%).\n\nThe SuperChassis is only hard drives, it still needs a controller. In a [reference architecture by Nexenta (PDF download)](https://nexenta.com/sites/default/files/docs/Nexenta_SMC_RA_DataSheet.pdf) two [SYS6028U](https://www.supermicro.com/products/system/2u/6028/sys-6028u-tr4_.cfm) with E5-2643v3 processors and 256GB of RAM is recommended. Unlike smaller configurations this one doesn't come with an SSD for [ZFS L2ARC](https://blogs.oracle.com/brendan/entry/test).\n\nSince backups are mostly linear we don't need an SSD for caching. In general 1GB of memory per TB of raw ZFS disk space is recommended. That would mean getting 512GB of RAM, 16x 32GB. Unlike the reference architecture we'll go with one controller. We're considering the [SuperServer 1028R-WC1RT](https://www.supermicro.com/products/system/1U/1028/SYS-1028R-WC1RT.cfm) since it is similar to our other servers, 1U, has 2x 10Gbps, 16 DIMM slots, and has 2 PCI slots. We'll use our regular [E5-2630v4](http://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-2630-v4-25M-Cache-2_20-GHz) processor.\n\nThe question is if this controller can saturate the 20 Gbps uplink.\nFor this it needs to use both 12 Gbps SAS buses.\nAnd each drive has to do at least 30 MBps which seems reasonable for a continuous read.\n\nThe problem is that even at 20Gbps a full restore takes 2 days.\nOf course many times you need to restore only part of the files (uploads).\nAnd most of the time it won't contain 480TB (we'll start at about 100TB).\nThe question is if we can accept this worst case scenario for GitLab.com.\n\nAn alternative would be to use multiple controllers.\nBut you can't aggregate ZFS pools over multiple servers.\nAnother option would be to have one controller with more IO.\nWe can use multiple disk enclosures and multiple SAS buses.\nAnd we can add more network ports and/or switch to 40Gbps.\nBut this all seems pretty complicated.\n\nB0 Are we on the right track here or is 20 Gbps of restore speed not OK?\n\nB1 Should we go for the [90 or 60 drive SuperChassis](https://www.supermicro.com/products/chassis/4U/?chs=946)? It looks like 60 drive one has more peak power (1600W vs. 800W) to start the drives.\n\nB2 How should we configure the SuperChassis? [ZFS on Linux](http://zfsonlinux.org/) with [RAIDZ3](https://icesquare.com/wordpress/zfs-performance-mirror-vs-raidz-vs-raidz2-vs-raidz3-vs-striped/)?\n\nB3 Will the SuperChassis be able to saturate the 20Gbsp connection?\n\nB4 Should we upgrade the networking on the SuperChassis to be able to restore even faster?\n\nB5 Is Bareos the right software to use?\n\nB6 How should we configure the backup software?  Should we use incremental backups with parallel jobs to speed things up?\n\nB7 Should we use the live filesystem or [CephFS snapshots](http://docs.ceph.com/docs/master/dev/cephfs-snapshots/) to back up from?\n\nB8 How common is it to have a tape or cloud backup in addition to the above?\n\nB9 Should we pick the top load model or [one of the front and rear access models](https://www.supermicro.com/products/chassis/JBOD/index.cfm?show=SELECT&storage=90).\n\nB10 Can we connect two SAS cables to get 2x 12 Gbps?\n\nB11 What [HBA card](https://www.supermicro.com/products/nfo/storage_cards.cfm) should be added to the controller or does it come with an LSI 3108?\n\nB12 Is it smart to make the controller a separate 1U box or should we repurpose some of our normal nodes for this?\n\nB13 Any hints on how to test the backup restore (on AWS or our hardware, how often, etc.)?\n\n### Rack\n\nThe default rack height seems to be 45U nowadays (42U used to be the standard).\n\nIt is used as follows:\n\n- 32U for 16 chassis with 64 nodes\n- 3U for three network routers\n- 1U for the management network\n- 4U for the disk enclosure\n- 1U for the disk controller\n- 4U spare for 2 new chassis (maybe distributed PostgreSQL servers)\n\n### Power\n\nEach chassis has a 2000 watt power supply (comes to 1kW per U), 32kW in total.\nNormal usage is guessed at 60% of the rated capacity, about 19kW.\nThat doesn't account for the routers and backup.\nBoth hosting providers quoted 4 x 208v 30A power supplies (2 for redundancy).\n\nP1 Does the quoted supply seem adequate for our needs?\n\n### Hosting\n\nWe've worked in [an issue](https://gitlab.com/gitlab-com/infrastructure/issues/732) to see where we should host.\n\nApart from the obvious (reliable, affordable) we had the following needs:\n\n- [AWS Direct connect](https://aws.amazon.com/directconnect/details/) so we can use the cloud for temporary application server needs\n- Based on the east coast of the USA since it provides the best latency tradeoff for most of our users\n- Advanced remote hands service so we don't have to station people near the datacenter at all times\n- Ability to upgrade from one rack to a private cage\n\nThe following networking options are a plus:\n\n- Carrier neutral (all major global network providers in its meet-me facility)\n- Backbones to other locations to provide cheap 2nd site transit\n- CDN services to reduce origin bandwidth costs\n\nSo far we've gotten quotes from [QTS in Ashburn, VA](http://www.qtsdatacenters.com/data-centers/ashburn) and [NYI in Bridgewater, NJ](https://www.nyi.net/datacenters/new-jersey/).\n\nH1 Any watchouts when selecting hosting providers?\n\nH2 Should we install the servers ourselves or is it OK to let the hosting provider do that?\n\nH3 How can we minimize installation costs? Should we ask to configure the servers to PXE boot?\n\nH4 Is there an Azure equivalent for AWS Direct Connect? => Azure will let you work with a provider to \"peer into\" the Azure network at a data center of your choice. So for example we could pay to have a circuit established in a data center that was linked into the Azure 'US East 2' data center (where we currently host out of) for direct connectivity needs.\n\n### Expense\n\nWe can't give cost details since all the quotes we receive are confidential.\nThe cloud hosting for GitLab.com excluding GitLab CI is currently costing us about $200k per month.\nThe capital needed for going to metal would be less than we pay for 1 quarter of hosting.\nThe hosting facility costs look to be less than $10k per month.\nIf you spread the capital costs over 2.5 years (10 quarters) it is 10x cheaper to host your own.\n\nOf course the growth of GitLab.com will soon force us to buy additional hardware.\nBut we would also have to pay extra for additional cloud capacity.\nOur proposed buying plan is about 5x the capacity we need now.\nHaving your own hardware means you're always overprovisioned.\nAnd we could probably have reduced the cost of cloud hosting by focussing on it.\n\nThe bigger expense will be hiring more people to deal with the additional complexity.\nWe'll probably need to hire a couple of people more to deal with this.\n\nWe looked into initially having disks in only half the servers but that saves only $20k ($225 per disk) and it would create a lot of work when we eventually have to install them.\n\nE1 If we want to look at leasing should we do that through SuperMicro or third party?\n\nE2 Are there ways we can save money?\n\n## Details\n\nOur detailed calculations and notes can be found in a [public Google sheet](https://docs.google.com/spreadsheets/d/1XG9VXdDxNd8ipgPlEr7Nb7Eg22twXPuzgDwsOhtdYKQ/edit#gid=894825456).\n","yml",{},true,"/en-us/blog/proposed-server-purchase-for-gitlab-com",{"title":15,"description":16,"ogTitle":15,"ogDescription":16,"noIndex":12,"ogImage":19,"ogUrl":27,"ogSiteName":28,"ogType":29,"canonicalUrls":27},"https://about.gitlab.com/blog/proposed-server-purchase-for-gitlab-com","https://about.gitlab.com","article","en-us/blog/proposed-server-purchase-for-gitlab-com",[],"3jaZ_8SIungJiTffc0cgB18LKbgYY0UToJz4_ztR_X0",{"data":34},{"logo":35,"freeTrial":40,"sales":45,"login":50,"items":55,"search":363,"minimal":394,"duo":413,"pricingDeployment":423},{"config":36},{"href":37,"dataGaName":38,"dataGaLocation":39},"/","gitlab logo","header",{"text":41,"config":42},"Get free trial",{"href":43,"dataGaName":44,"dataGaLocation":39},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":46,"config":47},"Talk to sales",{"href":48,"dataGaName":49,"dataGaLocation":39},"/sales/","sales",{"text":51,"config":52},"Sign in",{"href":53,"dataGaName":54,"dataGaLocation":39},"https://gitlab.com/users/sign_in/","sign in",[56,83,178,183,284,344],{"text":57,"config":58,"cards":60},"Platform",{"dataNavLevelOne":59},"platform",[61,67,75],{"title":57,"description":62,"link":63},"The intelligent orchestration platform for DevSecOps",{"text":64,"config":65},"Explore our Platform",{"href":66,"dataGaName":59,"dataGaLocation":39},"/platform/",{"title":68,"description":69,"link":70},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":71,"config":72},"Meet GitLab Duo",{"href":73,"dataGaName":74,"dataGaLocation":39},"/gitlab-duo-agent-platform/","gitlab duo agent platform",{"title":76,"description":77,"link":78},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":79,"config":80},"Learn more",{"href":81,"dataGaName":82,"dataGaLocation":39},"/why-gitlab/","why gitlab",{"text":84,"left":24,"config":85,"link":87,"lists":91,"footer":160},"Product",{"dataNavLevelOne":86},"solutions",{"text":88,"config":89},"View all Solutions",{"href":90,"dataGaName":86,"dataGaLocation":39},"/solutions/",[92,116,139],{"title":93,"description":94,"link":95,"items":100},"Automation","CI/CD and automation to accelerate deployment",{"config":96},{"icon":97,"href":98,"dataGaName":99,"dataGaLocation":39},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[101,105,108,112],{"text":102,"config":103},"CI/CD",{"href":104,"dataGaLocation":39,"dataGaName":102},"/solutions/continuous-integration/",{"text":68,"config":106},{"href":73,"dataGaLocation":39,"dataGaName":107},"gitlab duo agent platform - product menu",{"text":109,"config":110},"Source Code Management",{"href":111,"dataGaLocation":39,"dataGaName":109},"/solutions/source-code-management/",{"text":113,"config":114},"Automated Software Delivery",{"href":98,"dataGaLocation":39,"dataGaName":115},"Automated software delivery",{"title":117,"description":118,"link":119,"items":124},"Security","Deliver code faster without compromising security",{"config":120},{"href":121,"dataGaName":122,"dataGaLocation":39,"icon":123},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[125,129,134],{"text":126,"config":127},"Application Security Testing",{"href":121,"dataGaName":128,"dataGaLocation":39},"Application security testing",{"text":130,"config":131},"Software Supply Chain Security",{"href":132,"dataGaLocation":39,"dataGaName":133},"/solutions/supply-chain/","Software supply chain security",{"text":135,"config":136},"Software Compliance",{"href":137,"dataGaName":138,"dataGaLocation":39},"/solutions/software-compliance/","software compliance",{"title":140,"link":141,"items":146},"Measurement",{"config":142},{"icon":143,"href":144,"dataGaName":145,"dataGaLocation":39},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[147,151,155],{"text":148,"config":149},"Visibility & Measurement",{"href":144,"dataGaLocation":39,"dataGaName":150},"Visibility and Measurement",{"text":152,"config":153},"Value Stream Management",{"href":154,"dataGaLocation":39,"dataGaName":152},"/solutions/value-stream-management/",{"text":156,"config":157},"Analytics & Insights",{"href":158,"dataGaLocation":39,"dataGaName":159},"/solutions/analytics-and-insights/","Analytics and insights",{"title":161,"items":162},"GitLab for",[163,168,173],{"text":164,"config":165},"Enterprise",{"href":166,"dataGaLocation":39,"dataGaName":167},"/enterprise/","enterprise",{"text":169,"config":170},"Small Business",{"href":171,"dataGaLocation":39,"dataGaName":172},"/small-business/","small business",{"text":174,"config":175},"Public Sector",{"href":176,"dataGaLocation":39,"dataGaName":177},"/solutions/public-sector/","public sector",{"text":179,"config":180},"Pricing",{"href":181,"dataGaName":182,"dataGaLocation":39,"dataNavLevelOne":182},"/pricing/","pricing",{"text":184,"config":185,"link":187,"lists":191,"feature":271},"Resources",{"dataNavLevelOne":186},"resources",{"text":188,"config":189},"View all resources",{"href":190,"dataGaName":186,"dataGaLocation":39},"/resources/",[192,225,243],{"title":193,"items":194},"Getting started",[195,200,205,210,215,220],{"text":196,"config":197},"Install",{"href":198,"dataGaName":199,"dataGaLocation":39},"/install/","install",{"text":201,"config":202},"Quick start guides",{"href":203,"dataGaName":204,"dataGaLocation":39},"/get-started/","quick setup checklists",{"text":206,"config":207},"Learn",{"href":208,"dataGaLocation":39,"dataGaName":209},"https://university.gitlab.com/","learn",{"text":211,"config":212},"Product documentation",{"href":213,"dataGaName":214,"dataGaLocation":39},"https://docs.gitlab.com/","product documentation",{"text":216,"config":217},"Best practice videos",{"href":218,"dataGaName":219,"dataGaLocation":39},"/getting-started-videos/","best practice videos",{"text":221,"config":222},"Integrations",{"href":223,"dataGaName":224,"dataGaLocation":39},"/integrations/","integrations",{"title":226,"items":227},"Discover",[228,233,238],{"text":229,"config":230},"Customer success stories",{"href":231,"dataGaName":232,"dataGaLocation":39},"/customers/","customer success stories",{"text":234,"config":235},"Blog",{"href":236,"dataGaName":237,"dataGaLocation":39},"/blog/","blog",{"text":239,"config":240},"Remote",{"href":241,"dataGaName":242,"dataGaLocation":39},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":244,"items":245},"Connect",[246,251,256,261,266],{"text":247,"config":248},"GitLab Services",{"href":249,"dataGaName":250,"dataGaLocation":39},"/services/","services",{"text":252,"config":253},"Community",{"href":254,"dataGaName":255,"dataGaLocation":39},"/community/","community",{"text":257,"config":258},"Forum",{"href":259,"dataGaName":260,"dataGaLocation":39},"https://forum.gitlab.com/","forum",{"text":262,"config":263},"Events",{"href":264,"dataGaName":265,"dataGaLocation":39},"/events/","events",{"text":267,"config":268},"Partners",{"href":269,"dataGaName":270,"dataGaLocation":39},"/partners/","partners",{"backgroundColor":272,"textColor":273,"text":274,"image":275,"link":279},"#2f2a6b","#fff","Insights for the future of software development",{"altText":276,"config":277},"the source promo card",{"src":278},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":280,"config":281},"Read the latest",{"href":282,"dataGaName":283,"dataGaLocation":39},"/the-source/","the source",{"text":285,"config":286,"lists":288},"Company",{"dataNavLevelOne":287},"company",[289],{"items":290},[291,296,302,304,309,314,319,324,329,334,339],{"text":292,"config":293},"About",{"href":294,"dataGaName":295,"dataGaLocation":39},"/company/","about",{"text":297,"config":298,"footerGa":301},"Jobs",{"href":299,"dataGaName":300,"dataGaLocation":39},"/jobs/","jobs",{"dataGaName":300},{"text":262,"config":303},{"href":264,"dataGaName":265,"dataGaLocation":39},{"text":305,"config":306},"Leadership",{"href":307,"dataGaName":308,"dataGaLocation":39},"/company/team/e-group/","leadership",{"text":310,"config":311},"Team",{"href":312,"dataGaName":313,"dataGaLocation":39},"/company/team/","team",{"text":315,"config":316},"Handbook",{"href":317,"dataGaName":318,"dataGaLocation":39},"https://handbook.gitlab.com/","handbook",{"text":320,"config":321},"Investor relations",{"href":322,"dataGaName":323,"dataGaLocation":39},"https://ir.gitlab.com/","investor relations",{"text":325,"config":326},"Trust Center",{"href":327,"dataGaName":328,"dataGaLocation":39},"/security/","trust center",{"text":330,"config":331},"AI Transparency Center",{"href":332,"dataGaName":333,"dataGaLocation":39},"/ai-transparency-center/","ai transparency center",{"text":335,"config":336},"Newsletter",{"href":337,"dataGaName":338,"dataGaLocation":39},"/company/contact/#contact-forms","newsletter",{"text":340,"config":341},"Press",{"href":342,"dataGaName":343,"dataGaLocation":39},"/press/","press",{"text":345,"config":346,"lists":347},"Contact us",{"dataNavLevelOne":287},[348],{"items":349},[350,353,358],{"text":46,"config":351},{"href":48,"dataGaName":352,"dataGaLocation":39},"talk to sales",{"text":354,"config":355},"Support portal",{"href":356,"dataGaName":357,"dataGaLocation":39},"https://support.gitlab.com","support portal",{"text":359,"config":360},"Customer portal",{"href":361,"dataGaName":362,"dataGaLocation":39},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":364,"login":365,"suggestions":372},"Close",{"text":366,"link":367},"To search repositories and projects, login to",{"text":368,"config":369},"gitlab.com",{"href":53,"dataGaName":370,"dataGaLocation":371},"search login","search",{"text":373,"default":374},"Suggestions",[375,377,381,383,387,391],{"text":68,"config":376},{"href":73,"dataGaName":68,"dataGaLocation":371},{"text":378,"config":379},"Code Suggestions (AI)",{"href":380,"dataGaName":378,"dataGaLocation":371},"/solutions/code-suggestions/",{"text":102,"config":382},{"href":104,"dataGaName":102,"dataGaLocation":371},{"text":384,"config":385},"GitLab on AWS",{"href":386,"dataGaName":384,"dataGaLocation":371},"/partners/technology-partners/aws/",{"text":388,"config":389},"GitLab on Google Cloud",{"href":390,"dataGaName":388,"dataGaLocation":371},"/partners/technology-partners/google-cloud-platform/",{"text":392,"config":393},"Why GitLab?",{"href":81,"dataGaName":392,"dataGaLocation":371},{"freeTrial":395,"mobileIcon":400,"desktopIcon":405,"secondaryButton":408},{"text":396,"config":397},"Start free trial",{"href":398,"dataGaName":44,"dataGaLocation":399},"https://gitlab.com/-/trials/new/","nav",{"altText":401,"config":402},"Gitlab Icon",{"src":403,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":401,"config":406},{"src":407,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":409,"config":410},"Get Started",{"href":411,"dataGaName":412,"dataGaLocation":399},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":414,"mobileIcon":419,"desktopIcon":421},{"text":415,"config":416},"Learn more about GitLab Duo",{"href":417,"dataGaName":418,"dataGaLocation":399},"/gitlab-duo/","gitlab duo",{"altText":401,"config":420},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":422},{"src":407,"dataGaName":404,"dataGaLocation":399},{"freeTrial":424,"mobileIcon":429,"desktopIcon":431},{"text":425,"config":426},"Back to pricing",{"href":181,"dataGaName":427,"dataGaLocation":399,"icon":428},"back to pricing","GoBack",{"altText":401,"config":430},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":432},{"src":407,"dataGaName":404,"dataGaLocation":399},{"title":434,"button":435,"config":440},"See how agentic AI transforms software delivery",{"text":436,"config":437},"Watch GitLab Transcend now",{"href":438,"dataGaName":439,"dataGaLocation":39},"/events/transcend/virtual/","transcend event",{"layout":441,"icon":442},"release","AiStar",{"data":444},{"text":445,"source":446,"edit":452,"contribute":457,"config":462,"items":467,"minimal":674},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":447,"config":448},"View page source",{"href":449,"dataGaName":450,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":453,"config":454},"Edit this page",{"href":455,"dataGaName":456,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":458,"config":459},"Please contribute",{"href":460,"dataGaName":461,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":463,"facebook":464,"youtube":465,"linkedin":466},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[468,515,569,613,640],{"title":179,"links":469,"subMenu":484},[470,474,479],{"text":471,"config":472},"View plans",{"href":181,"dataGaName":473,"dataGaLocation":451},"view plans",{"text":475,"config":476},"Why Premium?",{"href":477,"dataGaName":478,"dataGaLocation":451},"/pricing/premium/","why premium",{"text":480,"config":481},"Why Ultimate?",{"href":482,"dataGaName":483,"dataGaLocation":451},"/pricing/ultimate/","why ultimate",[485],{"title":486,"links":487},"Contact Us",[488,491,493,495,500,505,510],{"text":489,"config":490},"Contact sales",{"href":48,"dataGaName":49,"dataGaLocation":451},{"text":354,"config":492},{"href":356,"dataGaName":357,"dataGaLocation":451},{"text":359,"config":494},{"href":361,"dataGaName":362,"dataGaLocation":451},{"text":496,"config":497},"Status",{"href":498,"dataGaName":499,"dataGaLocation":451},"https://status.gitlab.com/","status",{"text":501,"config":502},"Terms of use",{"href":503,"dataGaName":504,"dataGaLocation":451},"/terms/","terms of use",{"text":506,"config":507},"Privacy statement",{"href":508,"dataGaName":509,"dataGaLocation":451},"/privacy/","privacy statement",{"text":511,"config":512},"Cookie preferences",{"dataGaName":513,"dataGaLocation":451,"id":514,"isOneTrustButton":24},"cookie preferences","ot-sdk-btn",{"title":84,"links":516,"subMenu":525},[517,521],{"text":518,"config":519},"DevSecOps platform",{"href":66,"dataGaName":520,"dataGaLocation":451},"devsecops platform",{"text":522,"config":523},"AI-Assisted Development",{"href":417,"dataGaName":524,"dataGaLocation":451},"ai-assisted development",[526],{"title":527,"links":528},"Topics",[529,534,539,544,549,554,559,564],{"text":530,"config":531},"CICD",{"href":532,"dataGaName":533,"dataGaLocation":451},"/topics/ci-cd/","cicd",{"text":535,"config":536},"GitOps",{"href":537,"dataGaName":538,"dataGaLocation":451},"/topics/gitops/","gitops",{"text":540,"config":541},"DevOps",{"href":542,"dataGaName":543,"dataGaLocation":451},"/topics/devops/","devops",{"text":545,"config":546},"Version Control",{"href":547,"dataGaName":548,"dataGaLocation":451},"/topics/version-control/","version control",{"text":550,"config":551},"DevSecOps",{"href":552,"dataGaName":553,"dataGaLocation":451},"/topics/devsecops/","devsecops",{"text":555,"config":556},"Cloud Native",{"href":557,"dataGaName":558,"dataGaLocation":451},"/topics/cloud-native/","cloud native",{"text":560,"config":561},"AI for Coding",{"href":562,"dataGaName":563,"dataGaLocation":451},"/topics/devops/ai-for-coding/","ai for coding",{"text":565,"config":566},"Agentic AI",{"href":567,"dataGaName":568,"dataGaLocation":451},"/topics/agentic-ai/","agentic ai",{"title":570,"links":571},"Solutions",[572,574,576,581,585,588,592,595,597,600,603,608],{"text":126,"config":573},{"href":121,"dataGaName":126,"dataGaLocation":451},{"text":115,"config":575},{"href":98,"dataGaName":99,"dataGaLocation":451},{"text":577,"config":578},"Agile development",{"href":579,"dataGaName":580,"dataGaLocation":451},"/solutions/agile-delivery/","agile delivery",{"text":582,"config":583},"SCM",{"href":111,"dataGaName":584,"dataGaLocation":451},"source code management",{"text":530,"config":586},{"href":104,"dataGaName":587,"dataGaLocation":451},"continuous integration & delivery",{"text":589,"config":590},"Value stream management",{"href":154,"dataGaName":591,"dataGaLocation":451},"value stream management",{"text":535,"config":593},{"href":594,"dataGaName":538,"dataGaLocation":451},"/solutions/gitops/",{"text":164,"config":596},{"href":166,"dataGaName":167,"dataGaLocation":451},{"text":598,"config":599},"Small business",{"href":171,"dataGaName":172,"dataGaLocation":451},{"text":601,"config":602},"Public sector",{"href":176,"dataGaName":177,"dataGaLocation":451},{"text":604,"config":605},"Education",{"href":606,"dataGaName":607,"dataGaLocation":451},"/solutions/education/","education",{"text":609,"config":610},"Financial services",{"href":611,"dataGaName":612,"dataGaLocation":451},"/solutions/finance/","financial services",{"title":184,"links":614},[615,617,619,621,624,626,628,630,632,634,636,638],{"text":196,"config":616},{"href":198,"dataGaName":199,"dataGaLocation":451},{"text":201,"config":618},{"href":203,"dataGaName":204,"dataGaLocation":451},{"text":206,"config":620},{"href":208,"dataGaName":209,"dataGaLocation":451},{"text":211,"config":622},{"href":213,"dataGaName":623,"dataGaLocation":451},"docs",{"text":234,"config":625},{"href":236,"dataGaName":237,"dataGaLocation":451},{"text":229,"config":627},{"href":231,"dataGaName":232,"dataGaLocation":451},{"text":239,"config":629},{"href":241,"dataGaName":242,"dataGaLocation":451},{"text":247,"config":631},{"href":249,"dataGaName":250,"dataGaLocation":451},{"text":252,"config":633},{"href":254,"dataGaName":255,"dataGaLocation":451},{"text":257,"config":635},{"href":259,"dataGaName":260,"dataGaLocation":451},{"text":262,"config":637},{"href":264,"dataGaName":265,"dataGaLocation":451},{"text":267,"config":639},{"href":269,"dataGaName":270,"dataGaLocation":451},{"title":285,"links":641},[642,644,646,648,650,652,654,658,663,665,667,669],{"text":292,"config":643},{"href":294,"dataGaName":287,"dataGaLocation":451},{"text":297,"config":645},{"href":299,"dataGaName":300,"dataGaLocation":451},{"text":305,"config":647},{"href":307,"dataGaName":308,"dataGaLocation":451},{"text":310,"config":649},{"href":312,"dataGaName":313,"dataGaLocation":451},{"text":315,"config":651},{"href":317,"dataGaName":318,"dataGaLocation":451},{"text":320,"config":653},{"href":322,"dataGaName":323,"dataGaLocation":451},{"text":655,"config":656},"Sustainability",{"href":657,"dataGaName":655,"dataGaLocation":451},"/sustainability/",{"text":659,"config":660},"Diversity, inclusion and belonging (DIB)",{"href":661,"dataGaName":662,"dataGaLocation":451},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":325,"config":664},{"href":327,"dataGaName":328,"dataGaLocation":451},{"text":335,"config":666},{"href":337,"dataGaName":338,"dataGaLocation":451},{"text":340,"config":668},{"href":342,"dataGaName":343,"dataGaLocation":451},{"text":670,"config":671},"Modern Slavery Transparency Statement",{"href":672,"dataGaName":673,"dataGaLocation":451},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":675},[676,679,682],{"text":677,"config":678},"Terms",{"href":503,"dataGaName":504,"dataGaLocation":451},{"text":680,"config":681},"Cookies",{"dataGaName":513,"dataGaLocation":451,"id":514,"isOneTrustButton":24},{"text":683,"config":684},"Privacy",{"href":508,"dataGaName":509,"dataGaLocation":451},[686],{"id":687,"title":18,"body":8,"config":688,"content":690,"description":8,"extension":22,"meta":698,"navigation":24,"path":699,"seo":700,"stem":701,"__hash__":702},"blogAuthors/en-us/blog/authors/sid-sijbrandij.yml",{"template":689},"BlogAuthor",{"role":691,"name":18,"bio":692,"config":693},"Co-founder, Chief Executive Officer and Board Chair of GitLab Inc.","Sid Sijbrandij (pronounced see-brandy) is the Co-founder, Chief Executive Officer and Board Chair of GitLab Inc., the most comprehensive AI-powered DevSecOps platform. GitLab's single application helps organizations deliver software faster and more efficiently while strengthening their security and compliance.\n\nSid's career path has been anything but traditional. He spent four years building recreational submarines for U-Boat Worx and while at Ministerie van Justitie en Veiligheid he worked on the Legis project, which developed several innovative web applications to aid lawmaking. He first saw Ruby code in 2007 and loved it so much that he taught himself how to program. In 2012, as a Ruby programmer, he encountered GitLab and discovered his passion for open source. Soon after, Sid commercialized GitLab, and by 2015 he led the company through Y Combinator's Winter 2015 batch. Under his leadership, the company has grown with an estimated 30 million+ registered users from startups to global enterprises.\n\nSid studied at the University of Twente in the Netherlands where he received an M.S. in Management Science. Sid was named one of the greatest minds of the pandemic by Forbes for spreading the gospel of remote work.",{"headshot":694,"twitter":695,"linkedin":696,"ctfId":697},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749665383/Blog/Author%20Headshots/sytses-headshot.png","https://twitter.com/sytses","https://www.linkedin.com/in/sijbrandij","sytses",{},"/en-us/blog/authors/sid-sijbrandij",{},"en-us/blog/authors/sid-sijbrandij","ZdVvFbtL6NKLtKZEjFCVOecdpvuPzX3wmEZBrC6pRWg",[704,717,729],{"content":705,"config":715},{"title":706,"description":707,"authors":708,"heroImage":710,"date":711,"category":9,"tags":712,"body":714},"How IIT Bombay students are coding the future with GitLab","At GitLab, we often talk about how software accelerates innovation. But sometimes, you have to step away from the Zoom calls and stand in a crowded university hall to remember why we do this.",[709],"Nick Veenhof","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099013/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2814%29_6VTUA8mUhOZNDaRVNPeKwl_1750099012960.png","2026-01-08",[255,607,713],"open source","The GitLab team recently had the privilege of judging the **iHack Hackathon** at **IIT Bombay's E-Summit**. The energy was electric, the coffee was flowing, and the talent was undeniable. But what struck us most wasn't just the code — it was the sheer determination of students to solve real-world problems, often overcoming significant logistical and financial hurdles to simply be in the room.\n\n\nThrough our [GitLab for Education program](https://about.gitlab.com/solutions/education/), we aim to empower the next generation of developers with tools and opportunity. Here is a look at what the students built, and how they used GitLab to bridge the gap between idea and reality.\n\n## The challenge: Build faster, build securely\n\nThe premise for the GitLab track of the hackathon was simple: Don't just show us a product; show us how you built it. We wanted to see how students utilized GitLab's platform — from Issue Boards to CI/CD pipelines — to accelerate the development lifecycle.\n\nThe results were inspiring.\n\n## The winners\n\n### 1st place: Team Decode — Democratizing Scientific Research\n\n**Project:** FIRE (Fast Integrated Research Environment)\n\nTeam Decode took home the top prize with a solution that warms a developer's heart: a local-first, blazing-fast data processing tool built with [Rust](https://about.gitlab.com/blog/secure-rust-development-with-gitlab/) and Tauri. They identified a massive pain point for data science students: existing tools are fragmented, slow, and expensive.\n\nTheir solution, FIRE, allows researchers to visualize complex formats (like NetCDF) instantly. What impressed the judges most was their \"hacker\" ethos. They didn't just build a tool; they built it to be open and accessible.\n\n**How they used GitLab:** Since the team lived far apart, asynchronous communication was key. They utilized **GitLab Issue Boards** and **Milestones** to track progress and integrated their repo with Telegram to get real-time push notifications. As one team member noted, \"Coordinating all these technologies was really difficult, and what helped us was GitLab... the Issue Board really helped us track who was doing what.\"\n\n![Team Decode](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/epqazj1jc5c7zkgqun9h.jpg)\n\n### 2nd place: Team BichdeHueDost — Reuniting to Solve Payments\n\n**Project:** SemiPay (RFID Cashless Payment for Schools)\n\nThe team name, BichdeHueDost, translates to \"Friends who have been set apart.\" It's a fitting name for a group of friends who went to different colleges but reunited to build this project. They tackled a unique problem: handling cash in schools for young children. Their solution used RFID cards backed by a blockchain ledger to ensure secure, cashless transactions for students.\n\n**How they used GitLab:** They utilized [GitLab CI/CD](https://about.gitlab.com/topics/ci-cd/) to automate the build process for their Flutter application (APK), ensuring that every commit resulted in a testable artifact. This allowed them to iterate quickly despite the \"flaky\" nature of cross-platform mobile development.\n\n![Team BichdeHueDost](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/pkukrjgx2miukb6nrj5g.jpg)\n\n### 3rd place: Team ZenYukti — Agentic Repository Intelligence\n\n**Project:** RepoInsight AI (AI-powered, GitLab-native intelligence platform)\n\nTeam ZenYukti impressed us with a solution that tackles a universal developer pain point: understanding unfamiliar codebases. What stood out to the judges was the tool's practical approach to onboarding and code comprehension: RepoInsight-AI automatically generates documentation, visualizes repository structure, and even helps identify bugs, all while maintaining context about the entire codebase.\n\n**How they used GitLab:** The team built a comprehensive CI/CD pipeline that showcased GitLab's security and DevOps capabilities. They integrated [GitLab's Security Templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Security) (SAST, Dependency Scanning, and Secret Detection), and utilized [GitLab Container Registry](https://docs.gitlab.com/user/packages/container_registry/) to manage their Docker images for backend and frontend components. They created an AI auto-review bot that runs on merge requests, demonstrating an \"agentic workflow\" where AI assists in the development process itself.\n\n![Team ZenYukti](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/ymlzqoruv5al1secatba.jpg)\n\n## Beyond the code: A lesson in inclusion\n\nWhile the code was impressive, the most powerful moment of the event happened away from the keyboard.\n\nDuring the feedback session, we learned about the journey Team ZenYukti took to get to Mumbai. They traveled over 24 hours, covering nearly 1,800 kilometers. Because flights were too expensive and trains were booked, they traveled in the \"General Coach,\" a non-reserved, severely overcrowded carriage.\n\nAs one student described it:\n\n*\"You cannot even imagine something like this... there are no seats... people sit on the top of the train. This is what we have endured.\"*\n\nThis hit home. [Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/company/culture/inclusion/) are core values at GitLab. We realized that for these students, the barrier to entry wasn't intellect or skill, it was access.\n\nIn that moment, we decided to break that barrier. We committed to reimbursing the travel expenses for the participants who struggled to get there. It's a small step, but it underlines a massive truth: **talent is distributed equally, but opportunity is not.**\n\n![hackathon class together](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380252/o5aqmboquz8ehusxvgom.jpg)\n\n### The future is bright (and automated)\n\nWe also saw incredible potential in teams like Prometheus, who attempted to build an autonomous patch remediation tool (DevGuardian), and Team Arrakis, who built a voice-first job portal for blue-collar workers using [GitLab Duo](https://about.gitlab.com/gitlab-duo/) to troubleshoot their pipelines.\n\nTo all the students who participated: You are the future. Through [GitLab for Education](https://about.gitlab.com/solutions/education/), we are committed to providing you with the top-tier tools (like GitLab Ultimate) you need to learn, collaborate, and change the world — whether you are coding from a dorm room, a lab, or a train carriage. **Keep shipping.**\n\n> :bulb: Learn more about the [GitLab for Education program](https://about.gitlab.com/solutions/education/).\n",{"slug":716,"featured":12,"template":13},"how-iit-bombay-students-code-future-with-gitlab",{"content":718,"config":727},{"title":719,"description":720,"authors":721,"heroImage":722,"date":723,"category":9,"tags":724,"body":726},"Artois University elevates research and curriculum with GitLab Ultimate for Education","Artois University's CRIL leveraged the GitLab for Education program to gain free access to Ultimate, transforming advanced research and computer science curricula.",[709],"https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099203/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2820%29_2bJGC5ZP3WheoqzlLT05C5_1750099203484.png","2025-12-10",[607,255,725],"product","Leading academic institutions face a critical challenge: how to provide thousands of students and researchers with industry-standard, **full-featured DevSecOps tools** without compromising institutional control. Many start with basic version control, but the modern curriculum demands integrated capabilities for planning, security, and advanced CI/CD.\n\nThe **GitLab for Education program** is designed to solve this by providing access to **GitLab Ultimate** for qualifying institutions, allowing them to scale their operations and elevate their academic offerings. \n\nThis article showcases a powerful success story from the **Centre de Recherche en Informatique de Lens (CRIL)**, a joint laboratory of **Artois University** and CNRS in France. After years of relying solely on GitLab Community Edition (CE), the university's move to GitLab Ultimate through the GitLab for Education program immediately unlocked advanced capabilities, transforming their teaching, research, and contribution workflows virtually overnight. This story demonstrates why GitLab Ultimate is essential for institutions seeking to deliver advanced computer science and research curricula.\n\n## GitLab Ultimate unlocked: Managing scale and driving academic value\n\n**Artois University's** self-managed GitLab instance is a large-scale operation, supporting nearly **3,000 users** across approximately **19,000 projects**, primarily serving computer science students and researchers. While GitLab Community Edition was robust, the upgrade to GitLab Ultimate provided the sophisticated tooling necessary for managing this scale and facilitating advanced university-level work.\n\n***\"We can see the difference,\" says Daniel Le Berre, head of research at CRIL and the instance maintainer. \"It's a completely different product. Each week reveals new features that directly enhance our productivity and teaching.\"***\n\nThe institution joined the GitLab for Education program specifically because it covers both **instructional and non-commercial research use cases** and offers full access to Ultimate's features, removing significant cost barriers.\n\n### Key GitLab Ultimate benefits for students and researchers\n\n* **Advanced project management at scale:** Master's students now benefit from **GitLab Ultimate's project planning features**. This enables them to structure, track, and manage complex, long-term research projects using professional methodologies like portfolio management and advanced issue tracking that seamlessly roll up across their thousands of projects.\n\n* **Enhanced visibility:** Features like improved dashboards and code previews directly in Markdown files dramatically streamline tracking and documentation review, reducing administrative friction for both instructors and students managing large project loads.\n\n## Comprehensive curriculum: From concepts to continuous delivery\n\nGitLab Ultimate is deeply integrated into the computer science curriculum, moving students beyond simple `git` commands to practical **DevSecOps implementation**.\n\n* **Git fundamentals:** Students begin by visualizing concepts using open-source tools to master Git concepts.\n\n* **Full CI/CD implementation:** Students use GitLab CI for rigorous **Test-Driven Development (TDD)** in their software projects. They learn to build, test, and perform quality assurance using unit and integration testing pipelines—core competency made seamless by the integrated platform.\n\n* **DevSecOps for research and documentation:** The university teaches students that DevSecOps principles are vital for all collaborative work. Inspired by earlier work in Delft, students manage and produce critical research documentation (PDFs from Markdown files) using GitLab, incorporating quality checks like linters and spell checks directly in the CI pipeline. This ensures high-quality, reproducible research output.\n\n* **Future-proofing security skills:** The GitLab Ultimate platform immediately positions the institution to incorporate advanced DevSecOps features like SAST and DAST scanning as their research and development code projects grow, ensuring students are prepared for industry security standards.\n\n## Accelerating open source contributions with GitLab Duo\n\nAccess to the full GitLab platform, including our AI capabilities, has empowered students to make impactful contributions to the wider open source community faster than ever before.\n\nTwo Master's students recently completed direct contributions to the GitLab product, adding the **ORCID identifier** into user profiles. Working on GitLab.com, they leveraged **GitLab Duo's AI chat and code suggestions** to navigate the codebase efficiently.\n\n***\"This would not have been possible without GitLab Duo,\" Daniel Le Berre notes. \"The AI features helped students, who might have lacked deep codebase knowledge, deliver meaningful contributions in just two weeks.\"***\n\nThis demonstrates how providing students with cutting-edge tools **accelerates their learning and impact**, allowing them to translate classroom knowledge into real-world contributions immediately.\n\n## Empowering open research and institutional control\n\nThe stability of the self-managed instance at Artois University is key to its success. This model guarantees **institutional control and stability** — a critical factor for long-term research preservation.\n\nThe institution's expertise in this area was recently highlighted in a major 2024 study led by CRIL, titled: \"[Higher Education and Research Forges in France - Definition, uses, limitations encountered and needs analysis](https://hal.science/hal-04208924v4)\" ([Project on GitLab](https://gitlab.in2p3.fr/coso-college-codes-sources-et-logiciels/forges-esr-en)). The research found that the vast majority of public forges in French Higher Education and Research relied on **GitLab**. This finding underscores the consensus among academic leaders that self-hosted solutions are essential for **data control and longevity**, especially when compared to relying on external, commercial forges.\n\n## Unlock GitLab Ultimate for your institution today\n\nThe success story of **Artois University's CRIL** proves the transformative power of the GitLab for Education program. By providing **free access to GitLab Ultimate**, we enable large-scale institutions to:\n\n1.  **Deliver a modern, integrated DevSecOps curriculum.**\n\n2.  **Support advanced, collaborative research projects with Ultimate planning features.**\n\n3.  **Empower students to make AI-assisted open source contributions.**\n\n4.  **Maintain institutional control and data longevity.**\n\nIf your academic institution is ready to equip its students and researchers with the complete DevSecOps platform and its most advanced features, we invite you to join the program.\n\nThe program provides **free access to GitLab Ultimate** for qualifying instructional and non-commercial research use cases.\n\n**Apply now [online](https://about.gitlab.com/solutions/education/join/).**\n",{"slug":728,"featured":24,"template":13},"artois-university-elevates-curriculum-with-gitlab-ultimate-for-education",{"content":730,"config":743},{"category":9,"tags":731,"body":734,"date":735,"updatedDate":736,"heroImage":737,"authors":738,"title":741,"description":742},[732,733,102],"tutorial","git","\nEnterprise teams are increasingly migrating from Azure DevOps to GitLab to gain strategic advantages and accelerate secure software delivery. \n\n\n- GitLab comes with integrated controls, policies, and [compliance frameworks](https://docs.gitlab.com/user/compliance/compliance_frameworks/) that allow organizations to implement software delivery standards at scale. This is especially important for regulated industries.\n\n- [Security testing](https://docs.gitlab.com/user/application_security/) is embedded in the pipeline and results show in the developer workflow, including static application security testing (SAST), source code analysis (SCA), dynamic application security testing (DAST), infrastructure-as-code scanning (IaC), container scanning, and API scanning.\n\n- [AI capabilities](https://about.gitlab.com/gitlab-duo-agent-platform/) across the full software delivery lifecycle include advanced agent orchestration and customizable flows to support how your organizational teams work.\n\n\nGitLab's open-source, open-core approach, flexible deployment options such as single-tenant dedicated and self-managed, and truly unified platform eliminate integration complexity and security gaps. \n\n\nFor teams facing mounting pressure to accelerate delivery while strengthening security posture and maintaining regulatory compliance, GitLab represents not just a migration but a platform evolution.\n\n\nMigrating from Azure DevOps to GitLab can seem like a daunting task, but with the right approach and tools, it can be a smooth and efficient process. This guide will walk you through the steps needed to successfully migrate your projects, repositories, and pipelines from Azure DevOps to GitLab.\n\n\n## Overview\n\nGitLab provides both [Congregate](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/) (maintained by [GitLab Professional Services](https://about.gitlab.com/professional-services/) organization) and [a built-in Git repository import](https://docs.gitlab.com/user/project/import/repo_by_url/) for migrating projects from Azure DevOps (ADO). These options support repository-by-repository or bulk migration and preserve git commit history, branches, and tags. With Congregate and professional services tools, we support additional assets such as wikis, work items, CI/CD variables, container images, packages, pipelines, and more (see this [feature matrix](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/blob/master/customer/ado-migration-features-matrix.md)). Use this guide to plan and execute your migration and complete post-migration follow-up tasks.\n\n\nEnterprises migrating from ADO to GitLab commonly follow a multi-phase approach:\n\n\n- Migrate repositories from ADO to GitLab using Congregate or GitLab's built-in repository migration.\n\n- Migrate pipelines from Azure Pipelines to GitLab CI/CD.\n\n- Migrate remaining assets such as boards, work items, and artifacts to GitLab Issues, Epics, and the Package and Container Registries.\n\n\nHigh-level migration phases:\n\n\n```mermaid\ngraph LR\n    subgraph Prerequisites\n        direction TB\n        A[\"Set up identity provider (IdP) and\u003Cbr/>provision users\"]\n        A --> B[\"Set up runners and\u003Cbr/>third-party integrations\"]\n        B --> I[\"Users enablement and\u003Cbr/>change management\"]\n    end\n    \n    subgraph MigrationPhase[\"Migration phase\"]\n        direction TB\n        C[\"Migrate source code\"]\n        C --> D[\"Preserve contributions and\u003Cbr/> format history\"]\n        D --> E[\"Migrate work items and\u003Cbr/>map to \u003Ca href=\"https://docs.gitlab.com/topics/plan_and_track/\">GitLab Plan \u003Cbr/>and track work\"]\n    end\n    \n    subgraph PostMigration[\"Post-migration steps\"]\n        direction TB\n        F[\"Create or translate \u003Cbr/>ADO pipelines to GitLab CI\"]\n        F --> G[\"Migrate other assets\u003Cbr/>packages and container images\"]\n        G --> H[\"Introduce \u003Ca href=\"https://docs.gitlab.com/user/application_security/secure_your_application/\">security\u003C/a> and\u003Cbr/>SDLC improvements\"]\n    end\n    \n    Prerequisites --> MigrationPhase\n    MigrationPhase --> PostMigration\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style I fill:#FC6D26\n    style C fill:#8C929D\n    style D fill:#8C929D\n    style E fill:#8C929D\n    style F fill:#FFA500\n    style G fill:#FFA500\n    style H fill:#FFA500\n```\n\n\n## Planning your migration\n\n\n**To plan your migration, ask these questions:**\n\n\n- How soon do we need to complete the migration?\n\n- Do we understand what will be migrated?\n\n- Who will run the migration?\n\n- What organizational structure do we want in GitLab?\n\n- Are there any constraints, limitations, or pitfalls that need to be taken into account?\n\n\nDetermine your timeline, as it will largely dictate your migration approach. Identify champions or groups familiar with both ADO and GitLab platforms (such as early adopters) to help drive adoption and provide guidance.\n\n\n**Inventory what you need to migrate:**\n\n\n- The number of repositories, pull requests, and contributors\n\n- The number and complexity of work items and pipelines\n\n- Repository sizes and dependency relationships\n\n- Critical integrations and runner requirements (agent pools with specific capabilities)\n\n\nUse GitLab Professional Services's [Evaluate](https://gitlab.com/gitlab-org/professional-services-automation/tools/utilities/evaluate#beta-azure-devops) tool to produce a complete inventory of your entire Azure DevOps organization, including repositories, PR counts, contributor lists, number of pipelines, work items, CI/CD variables and more. If you're working with the GitLab Professional Services team, share this report with your engagement manager or technical architect to help plan the migration.\n\n\nMigration timing is primarily driven by pull request count, repository size, and amount of contributions (e.g. comments in PR, work items, etc). For example, 1,000 small repositories with few PRs and limited contributors can migrate much faster than a smaller set of repositories containing tens of thousands of PRs and thousands of contributors. Use your inventory data to estimate effort and plan test runs before proceeding with production migrations.\n\n\nCompare inventory against your desired timeline and decide whether to migrate all repositories at once or in batches. If teams cannot migrate simultaneously, batch and stagger migrations to align with team schedules. For example, in Professional Services engagements, we organize migrations into waves of 200-300 projects to manage complexity and respect API rate limits, both in [GitLab](https://docs.gitlab.com/security/rate_limits/) and [ADO](https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops).\n\n\nGitLab's built-in [repository importer](https://docs.gitlab.com/user/project/import/repo_by_url/) migrates Git repositories (commits, branches, and tags) one-by-one. Congregate is designed to preserve pull requests (known in GitLab as merge requests), comments, and related metadata where possible; the simple built-in repository import focuses only on the Git data (history, branches, and tags).\n\n\n**Items that typically require separate migration or manual recreation:**\n\n\n- Azure Pipelines - create equivalent GitLab CI/CD pipelines (consult with [CI/CD YAML](https://docs.gitlab.com/ci/yaml/) and/or with [CI/CD components](https://docs.gitlab.com/ci/components/)). Alternatively, consider using AI-based pipeline conversion available in Congregate.\n\n- Work items and boards - map to GitLab Issues, Epics, and Issue Boards.\n\n- Artifacts, container images (ACR) - migrate to GitLab Package Registry or Container Registry.\n\n- Service hooks and external integrations - recreate in GitLab.\n\n- [Permissions models](https://docs.gitlab.com/user/permissions/) differ between ADO and GitLab; review and plan permissions mapping rather than assuming exact preservation.\n\n\nReview what each tool (Congregate vs. built-in import) will migrate and choose the one that fits your needs. Make a list of any data or integrations that must be migrated or recreated manually.\n\n\n**Who will run the migration?**\n\n\nMigrations are typically run by a GitLab group owner or instance administrator, or by a designated migrator who has been granted the necessary permissions on the destination group/project. Congregate and the GitLab import APIs require valid authentication tokens for both Azure DevOps and GitLab.\n\n\n- Decide whether a group owner/admin will perform the migrations or whether you will grant a specific team/person delegated access.\n\n- Ensure the migrator has correctly configured personal access tokens (Azure DevOps and GitLab) with the scopes required by your chosen migration tool (for example, api/read_repository scopes and any tool-specific requirements). \n\n- Test tokens and permissions with a small pilot migration.\n\n**Note:** Congregate leverages file-based import functionality for ADO migrations and requires instance administrator permissions to run ([see our documentation](https://docs.gitlab.com/user/project/settings/import_export/#migrate-projects-by-uploading-an-export-file)). If you are migrating to GitLab.com, consider engaging Professional Services. For more information, see the [Professional Services Full Catalog](https://about.gitlab.com/professional-services/catalog/). Non-admin account cannot preserve contribution attribution!\n\n\n**What organizational structure do we want in GitLab?**\n\nWhile it's possible to map ADO structure directly to GitLab structure, it's recommended to rationalize and simplify the structure during migration. Consider how teams will work in GitLab and design the structure to facilitate collaboration and access management. Here is a way to think about mapping ADO structure to GitLab structure:\n\n\n```mermaid\ngraph TD\n    subgraph GitLab\n        direction TB\n        A[\"Top-level Group\"]\n        B[\"Subgroup (optional)\"]\n        C[\"Projects\"]\n        A --> B\n        A --> C\n        B --> C\n    end\n\n    subgraph AzureDevOps[\"Azure DevOps\"]\n        direction TB\n        F[\"Organizations\"]\n        G[\"Projects\"]\n        H[\"Repositories\"]\n        F --> G\n        G --> H\n    end\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style C fill:#FC6D26\n    style F fill:#8C929D\n    style G fill:#8C929D\n    style H fill:#8C929D\n```\n\nRecommended approach:\n\n\n- Map each ADO organization to a GitLab group (or a small set of groups), not to many small groups. Avoid creating a GitLab group for every ADO team project. Use migration as an opportunity to rationalize your GitLab structure.\n\n- Use subgroups and project-level permissions to group related repositories.\n\n- Manage access to sets of projects by using GitLab groups and group membership (groups and subgroups) rather than one group per team project.\n\n- Review GitLab [permissions](https://docs.gitlab.com/ee/user/permissions.html) and consider [SAML Group Links](https://docs.gitlab.com/user/group/saml_sso/group_sync/) to implement an enterprise RBAC model for your GitLab instance (or a GitLab.com namespace).\n\n\n**ADO Boards and work items: State of migration**\n\n\nIt's important to understand how work items migrate from ADO into GitLab Plan (issues, epics, and boards).\n\n\n- ADO Boards and work items map to GitLab Issues, Epics, and Issue Boards. Plan how your workflows and board configurations will translate.\n\n- ADO Epics and Features become GitLab Epics.\n\n- Other work item types (e.g., user stories, tasks, bugs) become project-scoped issues.\n\n- Most standard fields are preserved; selected custom fields can be migrated when supported.\n\n- Parent-child relationships are retained so Epics reference all related issues.\n\n- Links to pull requests are converted to merge request links to maintain development traceability.\n\n\nExample: Migration of an individual work item to a GitLab Issue, including field accuracy and relationships:\n\n\n![Example: Migration of an individual work item to a GitLab Issue](https://res.cloudinary.com/about-gitlab-com/image/upload/v1764769188/ztesjnxxfbwmfmtckyga.png)\n\n\nBatching guidance:\n\n\n- If you need to run migrations in batches, use your new group/subgroup structure to define batches (for example, by ADO organization or by product area).\n\n- Use inventory reports to drive batch selection and test each batch with a pilot migration before scaling.\n\n\n**Pipelines migration**\n\n\nCongregate [recently introduced](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/merge_requests/1298) AI-powered conversion for multi-stage YAML pipelines from Azure DevOps to GitLab CI/CD. This automated conversion works best for simple, single-file pipelines and is designed to provide a working starting point rather than a production-ready `.gitlab-ci.yml` file. The tool generates a functionally equivalent GitLab pipeline that you can then refine and optimize for your specific needs.\n\n\n- Converts Azure Pipelines YAML to `.gitlab-ci.yml` format automatically.\n\n- Best suited for straightforward, single-file pipeline configurations.\n\n- Provides a boilerplate to accelerate migration, not a final production artifact.\n\n- Requires review and adjustment for complex scenarios, custom tasks, or enterprise requirements.\n\n- Does not support Azure DevOps classic release pipelines — [convert these to multi-stage YAML](https://learn.microsoft.com/en-us/azure/devops/pipelines/release/from-classic-pipelines?view=azure-devops) first.\n\n\nRepository owners should review the [GitLab CI/CD documentation](https://docs.gitlab.com/ci/) to further optimize and enhance their pipelines after the initial conversion.\n\n\nExample of converted pipelines:\n\n\n```yml \n\n# azure-pipelines.yml\n\ntrigger:\n  - main\n\nvariables:\n  imageName: myapp\n\nstages:\n  - stage: Build\n    jobs:\n      - job: Build\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Build Docker image\n            inputs:\n              command: build\n              repository: $(imageName)\n              Dockerfile: '**/Dockerfile'\n              tags: |\n                $(Build.BuildId)\n\n  - stage: Test\n    jobs:\n      - job: Test\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          # Example: run tests inside the container\n          - script: |\n              docker run --rm $(imageName):$(Build.BuildId) npm test\n            displayName: Run tests\n\n  - stage: Push\n    jobs:\n      - job: Push\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Login to ACR\n            inputs:\n              command: login\n              containerRegistry: '\u003Cyour-acr-service-connection>'\n\n          - task: Docker@2\n            displayName: Push image to ACR\n            inputs:\n              command: push\n              repository: $(imageName)\n              tags: |\n                $(Build.BuildId)\n\n```\n\n```yaml\n\n# .gitlab-ci.yml\n\nvariables:\n  imageName: myapp\n\nstages:\n  - build\n  - test\n  - push\n\nbuild:\n  stage: build\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker build -t $imageName:$CI_PIPELINE_ID -f $(find . -name Dockerfile) .\n  only:\n    - main\n\ntest:\n  stage: test\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker run --rm $imageName:$CI_PIPELINE_ID npm test\n  only:\n    - main\n\npush:\n  stage: push\n  image: docker:latest\n  services:\n    - docker:dind\n  before_script:\n    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY\n  script:\n    - docker tag $imageName:$CI_PIPELINE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n  only:\n    - main\n\n```\n\n**Final checklist:**\n\n\n- Decide timeline and batch strategy.\n\n- Produce a full inventory of repositories, PRs, and contributors.\n\n- Choose Congregate or the built-in import based on scope (PRs and metadata vs. Git data only).\n\n- Decide who will run migrations and ensure tokens/permissions are configured.\n\n- Identify assets that must be migrated separately (pipelines, work items, artifacts, and hooks) and plan those efforts.\n\n- Run pilot migrations, validate results, then scale according to your plan.\n\n\n## Running your migrations\n\n\nAfter planning, execute migrations in stages, starting with trial runs. Trial migrations help surface org-specific issues early and let you measure duration, validate outcomes, and fine-tune your approach before production.\n\n\nWhat trial migrations validate:\n\n\n- Whether a given repository and related assets migrate successfully (history, branches, tags; plus MRs/comments if using Congregate)\n\n- Whether the destination is usable immediately (permissions, runners, CI/CD variables, integrations)\n\n- How long each batch takes, to set schedules and stakeholder expectations\n\n\nDowntime guidance:\n\n\n- GitLab's built-in Git import and Congregate do not inherently require downtime.\n\n- For production waves, freeze changes in ADO (branch protections or read-only) to avoid missed commits, PR updates, or work items created mid-migration.\n\n- Trial runs do not require freezes and can be run anytime.\n\n\nBatching guidance:\n\n\n- Run trial batches back-to-back to shorten elapsed time; let teams validate results asynchronously.\n\n- Use your planned group/subgroup structure to define batches and respect API rate limits.\n\n\nRecommended steps:\n\n\n1. Create a test destination in GitLab for trials:\n\n\n  - GitLab.com: create a dedicated group/namespace (for example, my-org-sandbox)\n\n  - Self-managed: create a top-level group or a separate test instance if needed\n\n\n2. Prepare authentication:\n\n\n  - Azure DevOps PAT with required scopes.\n\n  - GitLab Personal Access Token with api and read_repository (plus admin access for file-based imports used by Congregate).\n\n\n3. Run trial migrations:\n\n\n  - Repos only: use GitLab's built-in import (Repo by URL)\n\n  - Repos + PRs/MRs and additional assets: use Congregate\n\n\n4. Post-trial follow-up:\n\n\n  - Verify repo history, branches, tags; merge requests (if migrated), issues/epics (if migrated), labels, and relationships.\n\n  - Check permissions/roles, protected branches, required approvals, runners/tags, variables/secrets, integrations/webhooks.\n\n  - Validate pipelines (`.gitlab-ci.yml`) or converted pipelines where applicable.\n\n\n5. Ask users to validate functionality and data fidelity.\n\n6. Resolve issues uncovered during trials and update your runbooks.\n\n7. Network and security:\n\n\n  - If your destination uses IP allow lists, add the IPs of your migration host and any required runners/integrations so imports can succeed.\n\n\n8. Run production migrations in waves:\n\n\n  - Enforce change freezes in ADO during each wave.\n\n  - Monitor progress and logs; retry or adjust batch sizes if you hit rate limits.\n\n\n9. Optional: remove the sandbox group or archive it after you finish.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/ibIXGfrVbi4?si=ZxOVnXjCF-h4Ne0N\" frameborder=\"0\" allowfullscreen=\"true\">\u003C/iframe>\n\u003C/figure>\n\n\n## Terminology reference for GitLab and Azure DevOps\n\n| GitLab                                                           | Azure DevOps                                 | Similarities & Key Differences                                                                                                                                          |\n| ---------------------------------------------------------------- | -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Group                                                            | Organization                                 | Top-level namespace, membership, policies. ADO org contains Projects; GitLab Group contains Subgroups and Projects.                                                   |\n| Group or Subgroup                                                | Project                                      | Logical container, permissions boundary. ADO Project holds many repos; GitLab Groups/Subgroups organize many Projects.                                                |\n| Project (includes a Git repo)                                    | Repository (inside a Project)                | Git history, branches, tags. In GitLab, a \"Project\" is the repo plus issues, CI/CD, wiki, etc. One repo per Project.                                                  |\n| Merge Request (MR)                                               | Pull Request (PR)                            | Code review, discussions, approvals. MR rules include approvals, required pipelines, code owners.                                                                     |\n| Protected Branches, MR Approval Rules, Status Checks             | Branch Policies                              | Enforce reviews and checks. GitLab combines protections + approval rules + required status checks.                                                                    |\n| GitLab CI/CD                                                     | Azure Pipelines                              | YAML pipelines, stages/jobs, logs. ADO also has classic UI pipelines; GitLab centers on .gitlab-ci.yml.                                                               |\n| .gitlab-ci.yml                                                   | azure-pipelines.yml                          | Defines stages/jobs/triggers. Syntax/features differ; map jobs, variables, artifacts, and triggers.                                                                   |\n| Runners (shared/specific)                                        | Agents / Agent Pools                         | Execute jobs on machines/containers. Target via demands (ADO) vs tags (GitLab). Registration/scoping differs.                                                         |\n| CI/CD Variables (project/group/instance), Protected/Masked       | Pipeline Variables, Variable Groups, Library | Pass config/secrets to jobs. GitLab supports group inheritance and masking/protection flags.                                                                          |\n| Integrations, CI/CD Variables, Deploy Keys                       | Service Connections                          | External auth to services/clouds. Map to integrations or variables; cloud-specific helpers available.                                                                 |\n| Environments & Deployments (protected envs)                      | Environments (with approvals)                | Track deploy targets/history. Approvals via protected envs and manual jobs in GitLab.                                                                                 |\n| Releases (tag + notes)                                           | Releases (classic or pipelines)              | Versioned notes/artifacts. GitLab Release ties to tags; deployments tracked separately.                                                                               |\n| Job Artifacts                                                    | Pipeline Artifacts                           | Persist job outputs. Retention/expiry configured per job or project.                                                                                                  |\n| Package Registry (NuGet/npm/Maven/PyPI/Composer, etc.)           | Azure Artifacts (NuGet/npm/Maven, etc.)      | Package hosting. Auth/namespace differ; migrate per package type.                                                                                                     |\n| GitLab Container Registry                                        | Azure Container Registry (ACR) or others     | OCI images. GitLab provides per-project/group registries.                                                                                                             |\n| Issue Boards                                                     | Boards                                       | Visualize work by columns. GitLab boards are label-driven; multiple boards per project/group.                                                                         |\n| Issues (types/labels), Epics                                     | Work Items (User Story/Bug/Task)             | Track units of work. Map ADO types/fields to labels/custom fields; epics at group level.                                                                              |\n| Epics, Parent/Child Issues                                       | Epics/Features                               | Hierarchy of work. Schema differs; use epics + issue relationships.                                                                                                   |\n| Milestones and Iterations                                        | Iteration Paths                              | Time-boxing. GitLab Iterations (group feature) or Milestones per project/group.                                                                                       |\n| Labels (scoped labels)                                           | Area Paths                                   | Categorization/ownership. Replace hierarchical areas with scoped labels.                                                                                              |\n| Project/Group Wiki                                               | Project Wiki                                 | Markdown wiki. Backed by repos in both; layout/auth differ slightly.                                                                                                  |\n| Test reports via CI, Requirements/Test Management, integrations  | Test Plans/Cases/Runs                        | QA evidence/traceability. No 1:1 with ADO Test Plans; often use CI reports + issues/requirements.                                                                     |\n| Roles (Owner/Maintainer/Developer/Reporter/Guest) + custom roles | Access levels + granular permissions         | Control read/write/admin. Models differ; leverage group inheritance and protected resources.                                                                          |\n| Webhooks                                                         | Service Hooks                                | Event-driven integrations. Event names/payloads differ; reconfigure endpoints.                                                                                        |\n| Advanced Search                                                  | Code Search                                  | Full-text repo search. Self-managed GitLab may need Elasticsearch/OpenSearch for advanced features.                                                                   |\n","2025-12-03","2026-01-16","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749658924/Blog/Hero%20Images/securitylifecycle-light.png",[739,740],"Evgeny Rudinsky","Michael Leopard","Guide: Migrate from Azure DevOps to GitLab","Learn how to carry out the full migration from Azure DevOps to GitLab using GitLab Professional Services migration tools — from planning and execution to post-migration follow-up tasks.",{"featured":24,"template":13,"slug":744},"migration-from-azure-devops-to-gitlab",{"promotions":746},[747,761,772],{"id":748,"categories":749,"header":751,"text":752,"button":753,"image":758},"ai-modernization",[750],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":754,"config":755},"Get your AI maturity score",{"href":756,"dataGaName":757,"dataGaLocation":237},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":759},{"src":760},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":762,"categories":763,"header":764,"text":752,"button":765,"image":769},"devops-modernization",[725,553],"Are you just managing tools or shipping innovation?",{"text":766,"config":767},"Get your DevOps maturity score",{"href":768,"dataGaName":757,"dataGaLocation":237},"/assessments/devops-modernization-assessment/",{"config":770},{"src":771},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":773,"categories":774,"header":776,"text":752,"button":777,"image":781},"security-modernization",[775],"security","Are you trading speed for security?",{"text":778,"config":779},"Get your security maturity score",{"href":780,"dataGaName":757,"dataGaLocation":237},"/assessments/security-modernization-assessment/",{"config":782},{"src":783},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"header":785,"blurb":786,"button":787,"secondaryButton":792},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":788,"config":789},"Get your free trial",{"href":790,"dataGaName":44,"dataGaLocation":791},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":489,"config":793},{"href":48,"dataGaName":49,"dataGaLocation":791},1772652082427]