[{"data":1,"prerenderedAt":790},["ShallowReactive",2],{"/en-us/blog/moving-all-your-data":3,"navigation-en-us":33,"banner-en-us":433,"footer-en-us":443,"blog-post-authors-en-us-Jacob Vosmaer":685,"blog-related-posts-en-us-moving-all-your-data":699,"assessment-promotions-en-us":741,"next-steps-en-us":780},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":22,"isFeatured":12,"meta":23,"navigation":24,"path":25,"publishedDate":20,"seo":26,"stem":30,"tagSlugs":31,"__hash__":32},"blogPosts/en-us/blog/moving-all-your-data.yml","Moving All Your Data",[7],"jacob-vosmaer",null,"engineering",{"slug":11,"featured":12,"template":13},"moving-all-your-data",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9},"Moving all your data, 9TB edition","At GitLab B.V. we are working on an infrastructure upgrade to give more CPU power and storage space to GitLab.com. Learn more here!",[18],"Jacob Vosmaer","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749684774/Blog/Hero%20Images/van.jpg","2015-03-09","At GitLab B.V. we are working on an infrastructure upgrade to give more CPU\npower and storage space to GitLab.com. (We are currently still running on a\n[single server](/blog/the-hardware-that-powers-100k-git-repos/).) As a\npart of this upgrade we wanted to move gitlab.com from our own dedicated\nhardware servers to an AWS data center 400 kilometers away.  In this blog post\nI will tell you how I did that and what challenges I had to overcome. An epic\nadventure of hand-rolled network tunnels, advanced DRBD features and streaming\n9TB of data through SSH pipes!\n\n\u003C!-- more -->\n\n## What did I have to move?\n\nIn our current setup we run a stock GitLab Enterprise Edition omnibus package,\nwith a single big filesystem mounted at `/var/opt/gitlab`. This\nfilesystem holds all the user data hosted on gitlab.com: Postgres and Redis\ndatabase files, user uploads, and a lot of Git repositories. All I had to do\nto move this data to AWS is to move the files on this filesystem. Sounds simple\nenough, does it not?\n\nSo do we move the files, or the filesystem itself? This is an easy question to\nanswer. Moving the files using something like Rsync is not an option because it\nis just too slow. We do file-based backups every week where we take a block\ndevice snapshot, mount the snapshot and send it across with Rsync. That\ncurrently takes over 24 hours, and 24 hours of downtime while we move\ngitlab.com is not a nice idea. Now you might ask: what if you Rsync once to\nprepare, take the server offline, and then do a quick Rsync just to catch up?\nThat would still take hours just for Rsync to walk through all the files and\ndirectories on disk. No good.\n\nWe have faced and solved this same problem in the past when the amount of data\nwas 5 times smaller. (Rsync was not an option even then.) What I did at that\ntime was to use DRBD to move not just the files themselves, but the whole\nfilesystem they sit on. This time around DRBD again seemed like the best\nsolution for us. It is not the fastest solution to move a lot of data, but what\nis great about it is that you can keep using the filesystem while the data is\nbeing moved, and changes will get synchronized continuously. No downtime for\nour users! (Except maybe 5 minutes at the start to set up the sync.)\n\n## What is DRBD?\n\n[DRBD](http://www.drbd.org) is a system that can create a virtual hard drive\n(block device) on a Linux computer that gets mirrored across a network\nconnection to a second Linux computer. Both computers give a 'real' hard drive\nto DRBD, and DRBD keeps the contents of the real hard drive the same across\nboth computers via the network. One of the two computers gets a virtual hard\ndrive from DRBD, which shows the contents of the real hard drive underneath. If\nyour first computer crashes, you can 'plug in' the virtual hard drive on the\nsecond computer in a matter of seconds, and all your data will still be there\nbecause DRBD kept the 'real' hard drives in sync for you. You can even have the\ntwo computers that are linked by DRBD sit in different buildings, or on\ndifferent continents. Up until our move to AWS, we were using DRBD to protect\nagainst hardware failure on the server that runs gitlab.com: if such a failure\nwould happen, we could just plug in the virtual hard drive with the user data\ninto our stand-by server. In our new data center, the hosting provider (Amazon\nWeb Services) has their own solution for plugging virtual hard drives in and\nout called Elastic Block Storage, so we are no longer using DRBD as a virtual\nhard drive. From an availability standpoint this is not better or worse, but\nusing EBS drives does make it a lot easier for us to make backups because now\nwe can just store snapshots (no more Rsync).\n\n## Using DRBD for a data migration\n\nAlthough DRBD is not really made for this purpose, I felt confident using DRBD\nfor the migration because I had done it before for a migration between data\ncenters. At that time we were moving across the Atlantic Ocean; this time we\nwould only be moving from the Netherlands to Germany.  However, the last time\nwe used DRBD only as a one-off tool. In our pre-migration setup, we were\nalready using DRBD to replicate the filesystem between two servers in the same\nrack. DRBD only lets you share a virtual hard drive between two computers, so\nhow do we now send the data to a _third_ computer in the new data center?\n\nLuckily, DRBD actually has a trick up its sleeve to deal with this, called\n'stacked resources'. This means that our old servers ('linus' and 'monty')\nwould share a virtual hard drive called 'drbd0', and that whoever of the two\nhas the 'drbd0' virtual hard drive plugged in gets to use 'drbd0' as the 'real'\nhard drive underneath a second virtual hard drive, called 'drbd10', which is\nshared with the new server ('theo'). Also see the picture below.\n\n![Stacked DRBD replication](https://about.gitlab.com/images/drbd/drbd-three-nodes.png)\n\nIf linus would malfunction, we could attach drbd0 (the blue virtual hard drive)\non monty and keep gitlab.com going. The 'green' replication (to get the data to\ntheo) would also be able to continue, even after a failover to monty.\n\n## Networking\n\nI liked the picture above, so 'all' I had to do was set it up. That ended up\ntaking a few days, just to set up a test environment, and to figure out how to\ncreate a network tunnel for the green traffic. The network tunnel needed to\nhave a movable endpoint depending on whether linus or monty was primary. We\nalso needed the tunnel because DRBD is not compatible with the [Network Address\nTranslation](http://en.wikipedia.org/wiki/Network_address_translation) used by\nAWS. DRBD assumes that whenever a node listens on an IP address, it is also\nreachable for its partner node at that IP address. On AWS on the other hand, a\nnode will have one or more internal IP addresses, which are distinct from its\n_public_ IP address.\n\nWe chose to work around this with an [IPIP\ntunnel](http://en.wikipedia.org/wiki/IP_in_IP) and manually keyed IPsec\nencryption. Previous experiments indicated that this gave us the best network\nthroughput compared to OpenVPN and GRE tunnels.\n\nTo set up the tunnel I used a shell script that was kept in sync on all three\nservers involved in the migration by Chef.\n\n```bash\n# Network tunnel configuration script used by GitLab B.V. to migrate data from\n# Delft to Frankfurt\n\n#!/bin/sh\nset -u\n\nPATH=/usr/sbin:/sbin:/usr/bin:/bin\n\nfrankfurt_public=54.93.71.23\nfrankfurt_replication=172.16.228.2\ntest_public=54.152.127.180\ntest_replication=172.16.228.1\ndelft_public=62.204.93.103\ndelft_replication=172.16.228.1\n\ncreate_tunipip() {\n  if ! ip tunnel show | grep -q tunIPIP ; then\n    echo Creating tunnel tunIPIP\n    ip tunnel add tunIPIP mode ipip ttl 64 local \"$1\" remote \"$2\"\n  fi\n}\n\nadd_tunnel_address() {\n  if ! ip address show tunIPIP | grep -q \"$1\" ; then\n    ip address add \"$1/32\" peer \"$2/32\" dev tunIPIP\n  fi\n}\n\ncase $(hostname) in\n  ip-10-0-2-9)\n    create_tunipip 10.0.2.140 \"${frankfurt_public}\"\n    add_tunnel_address \"${test_replication}\" \"${frankfurt_replication}\"\n    ip link set tunIPIP up\n    ;;\n  ip-10-0-2-245)\n    create_tunipip 10.0.2.11 \"${frankfurt_public}\"\n    add_tunnel_address \"${test_replication}\" \"${frankfurt_replication}\"\n    ip link set tunIPIP up\n    ;;\n  ip-10-1-0-52|theo.gitlab.com)\n    create_tunipip 10.1.0.52 \"${delft_public}\"\n    add_tunnel_address \"${frankfurt_replication}\" \"${delft_replication}\"\n    ip link set tunIPIP up\n    ;;\n  linus|monty)\n    create_tunipip \"${delft_public}\" \"${frankfurt_public}\"\n    add_tunnel_address \"${delft_replication}\" \"${frankfurt_replication}\"\n    ip link set tunIPIP up\n    ;;\nesac\n```\n\nThis script was configured to run on boot. Note that it covers our Delft nodes\n(linus and monty, then current production), the node we were migrating to in\nFrankfurt (theo), and two AWS test nodes that were part of a staging setup. We\nchose the AWS Frankfurt (Germany) data center because of its geographic\nproximity to Delft (The Netherlands).\n\nWe configured IPsec with `/etc/ipsec-tools.conf`. An example for the 'origin'\nconfiguration would be:\n\n```text\n#!/usr/sbin/setkey -f\n\n# Configuration for 172.16.228.1\n\n# Flush the SAD and SPD\nflush;\nspdflush;\n\n# Attention: Use this keys only for testing purposes!\n# Generate your own keys!\n\n# AH SAs using 128 bit long keys\n# Fill in your keys below!\nadd 172.16.228.1 172.16.228.2 ah 0x200 -A hmac-md5 0xfoobar;\nadd 172.16.228.2 172.16.228.1 ah 0x300 -A hmac-md5 0xbarbaz;\n\n# ESP SAs using 192 bit long keys (168 + 24 parity)\n# Fill in your keys below!\nadd 172.16.228.1 172.16.228.2 esp 0x201 -E 3des-cbc 0xquxfoo;\nadd 172.16.228.2 172.16.228.1 esp 0x301 -E 3des-cbc 0xbazqux;\n\n# Security policies\n# outbound traffic from 172.16.228.1 to 172.16.228.2\nspdadd 172.16.228.1 172.16.228.2 any -P out ipsec esp/transport//require ah/transport//require;\n\n# inbound traffic from 172.16.228.2 to 172.16.228.1\nspdadd 172.16.228.2 172.16.228.1 any -P in ipsec esp/transport//require ah/transport//require;\n```\n\nGetting the networking to this point took quite some work. For starters, we did\nnot have a staging environment similar enough to our production environment, so\nI had to create one for this occasion.\n\nOn top of that, to model our production setup, I had to use an AWS 'Virtual\nPrivate Cloud', which was new technology for us. It took a while before I\nfound some [vital information about using multiple IP\naddresses](http://engineering.silk.co/post/31923247961/multiple-ip-addresses-on-amazon-ec2)\nthat was not obvious from the AWS documentation: if you want to have two public\nIP addresses on an AWS VPC node, you need to put two corresponding private IP\naddresses on one 'Elastic Network Interface', instead of creating two network\ninterfaces with one private IP each.\n\n## Configuring three-way DRBD replication\n\nWith the basic networking figured out the next thing I had to do was to adapt\nour production failover script so that we maintain redundancy while migrating\nthe data. 'Failover' is a procedure where you move a service (gitlab.com) ove\nto a different computer after a failure. Our failover procedure is managed by a\nscript. My goal was to make sure that if one of our production servers failed,\nany teammate of mine on pager duty would be able to restore the gitlab.com\nservice using our normal failover procedure. That meant I had to update the\nscript to use the new three-way DRBD configuration.\n\nI certainly got a little more familiar with tcpdump (`tcpdump -n -i\nINTERFACE`), having multiple layers of firewalls\n([UFW](http://en.wikipedia.org/wiki/Uncomplicated_Firewall) and AWS [Security\nGroups](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)),\nand how to get any useful log messages from DRBD:\n\n```shell\n# Monitor DRBD log messages\nsudo tail -f /var/log/messages | grep -e drbd -e d-con\n```\n\nI later learned that I actually deployed a new version of the failover script\nwith a bug in it that potentially could have confused the hell out of my\nteammates had they had to use it under duress. Luckily we never actually needed\nthe failover procedure during the time the new script was in production.\n\nBut, even though I was introducing new complexity and hence bugs into our\nfailover tooling, I did manage to learn and try out enough things to bring this\nproject to a successful conclusion.\n\n## Enabling the DRBD replication\n\nThis part was relatively easy. I just had to grow the DRBD block device\n'drbd0' so that it could accommodate the new stacked (inner) block device\n'drbd10' without having to shrink our production filesystem. Because drbd0 was\nbacked by LVM and we had some space left this was a matter of invoking\n`lvextend` and `drbdadm resize` on both our production nodes.\n\nThe step after this was the first one where I had to take gitlab.com offline.\nIn order to 'activate' drbd10 and start the synchronization, I had to unmount\n`/dev/drbd0` from `/var/opt/gitlab` and mount `/dev/drbd10` in its place. This\ntook less than 5 minutes. After this the actual migration was under way!\n\n## Too slow\n\nAt this point I was briefly excited to be able to share some good news with the\nrest of the team. While staring about the DRBD progress bar for the\nsynchronization I started to realize however that the progress bar was telling\nme that the synchronization would take about 50-60 days at 2MB/s.\n\nThis prognosis was an improvement over what we would expect based on our\nprevious experience moving 1.8TB from North Virginia (US) to Delft (NL) in\nabout two weeks (across the Atlantic Ocean!). If one would extrapolate that\nrate you would expect moving 9TB to take 70 days. We were disappointed\nnonetheless because we were hoping that we would gain more throughput by moving\nover a shorter distance this time around (Delft and Frankfurt are about 400km\napart).\n\nThe first thing I started looking into at this point was whether we could\nsomehow make better use of the network bandwidth at our disposal. Sending fake\ndata (zeroes) over the (encrypted) IPIP tunnel (`dd if=/dev/zero | nc remote_ip\n1234`) we could get about 17 MB/s. By disabling IPsec (not really an option as\nfar as I am concerned) we could increase that number to 40 MB/s.\n\nThe only conclusion I could come to was that we were not reaching our maximum\nbandwidth potential, but that I had no clue how to coax more speed out of the\nDRBD sync. Luckily I recalled reading about another magical DRBD feature.\n\n## Bring out the truck\n\nThe solution suggested by the DRBD documentation for situations like ours is\ncalled ['truck based\nreplication](https://drbd.linbit.com/users-guide/s-using-truck-based-replication.html).\nInstead of synchronizing 9TB of data, we would be telling DRBD to mark a point\nin time, take a full disk snapshot, move the snapshot to the new location (as a\nbox full of hard drives in a truck if needed), and then tell DRBD to get the\ndata at the new location up to date. During that 'catching-up' sync, DRBD would\nonly be resending those parts of the disk that actually changed since we marked\nthe point in time earlier. Because our users would not have written 9TB of new\ndata while the 'disks' were being shipped, we would have to sync much less than\n9TB.\n\n![Full replication versus 'truck' replication](https://about.gitlab.com/images/drbd/drbd-truck-sync.png)\n\nIn our case I would not have to use an actual truck; while testing the network\nthroughput between our old and new server I found that I could stream zeroes\nthrough SSH at about 35MB/s.\n\n```text\ndd if=/dev/zero bs=1M count=100 | ssh theo.gitlab.com dd of=/dev/null\n```\n\nAfter doing some testing with the leftover two-node staging setup I built\nearlier to figure out the networking I felt I could make this work. I followed\nthe steps in the DRBD documentation, made an LVM snapshot on the active origin\nserver, and started sending the snapshot to the new server with the following\nscript.\n\n```bash\n#!/bin/sh\nblock_count=100\nblock_size='8M'\nremote='54.93.71.23'\n\nsend_blocks() {\n  for skip in $(seq $1 ${block_count} $2) ; do\n    echo \"${skip}   $(date)\"\n    sudo dd if=/dev/gitlab_vg/truck bs=${block_size} count=${block_count} skip=${skip} status=noxfer iflag=fullblock \\\n    | ssh -T ${remote} sudo dd of=/dev/gitlab_vg/gitlab_com bs=${block_size} count=${block_count} seek=${skip} status=none iflag=fullblock\n  done\n}\n\ncheck_blocks() {\n  for skip in $(seq $2 ${block_count} $3) ; do\n    printf \"${skip}   \"\n    sudo dd if=$1 bs=${block_size} count=${block_count} skip=${skip} iflag=fullblock | md5sum\n  done\n}\n\ncase $1 in\n  send)\n    send_blocks $2 $3\n    ;;\n  check)\n    check_blocks $2 $3 $4\n    ;;\n  *)\n    echo \"Usage: $0 (send START END) | (check BLOCK_DEVICE START END)\"\n    exit 127\nesac\n```\n\nBy running this script in a [screen](http://www.gnu.org/software/screen/)\nsession I was able to copy the LVM snapshot `/dev/gitlab_vg/truck` from the old\nserver to the new server in about 3.5 days, 800 MB at a time. The 800MB number\nwas a bit of a coincidence, stemming from the recommendation from our Dutch\nhosters [NetCompany](http://www.netcompany.nl/) to use 8MB `dd`-blocks. Also\ncoincidentally, the total disk size was divisible by 8MB. If you have an eye\nfor system security you might notice that the script needed both root\nprivileges on the source server, and via short-lived unattended SSH sessions\ninto the remote server (`| ssh sudo ...`). This is not a normal thing for us to\ndo, and my colleagues got spammed by warning messages about it while this\nmigration was in progress.\n\nBecause I am a little paranoid, I was running a second instance of this script\nin parallel with the sync, where I was calculating MD5 checksums of all the\nblocks that were being sent across the network. By calculating the same\nchecksums on the migration target I could gain sufficient confidence that all\ndata made across without errors. If there would have been any, the script would\nhave made it easy to re-send an individual 800MB block.\n\nAt this point my spirits were lifting again and I told my teammates we would\nprobably need one extra day after the 'truck' stage before we could start using\nthe new server. I did not know yet that 'one day' would become 'one week'.\n\n## Shipping too much data\n\nAfter moving the big snapshot across the network with\n[dd](http://en.wikipedia.org/wiki/Dd_%28Unix%29) and SSH, the next step would\nbe to 'just turn DRBD on and let it catch up'. But that did not work all of a\nsudden! It took me a while to realize that the problem was that while trucking,\nI had sent _too much_ data to the new server (theo). If you recall the picture\nI drew earlier of the three-way DRBD replication then you can see that the goal\nwas to replicate the 'green box' from the old servers to the new server, while\nletting the old servers keep sharing the 'blue box' for redundancy.\n\n![Blue box on the left, green box on the\nright](https://about.gitlab.com/images/drbd/drbd-too-much-data.png)\n\nBut I had just sent a snapshot of the _blue_ box to theo (the server on the\nright), not just the green box. DRBD was refusing to turn back on theo,\nbecause it was expecting the green box, not the blue box (containing the green\nbox). More precisely, my disk on the new server contained metadata for drbd0 as\nwell as drbd10. DRBD finds its metadata by starting at the end of the disk and\nwalking backwards. Because of that, it was not seeing the drbd10 (green)\nmetadata on theo.\n\n![Two metadata block](https://about.gitlab.com/images/drbd/drbd-two-metadata-blocks.png)\n\nThe first thing I tried was to shrink the disk (with\n[LVM](http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29)) so that\nthe blue block at the end would fall off. Unfortunately, you can only grow and\nshrink LVM disks in fixed steps (4MB steps in our case), and those steps did\nnot align with where the drbd10 metadata (green box) ended on disk.\n\nThe next thing I tried was to erase the blue block. That would leave DRBD\nunable to find any metadata, because DRBD metadata must sit at the end of the\ndisk. To cope with that I tried and trick DRBD into thinking it was in the\nmiddle of a disk resize operation. By manually creating a doctored\n`/var/lib/drbd/drbd-minor-10.lkbd` file used by DRBD when it does a\n(legitimate) disk resize, I was pointing it to where I thought it could find\nthe green block of drbd10 metadata. To be honest this required more disk sector\narithmetic than I was comfortable with. Comfortable or not, I never got this\nprocedure to work without a few screens full of scary DRBD error messages so I\ndecided to call our first truck expedition a bust.\n\n## One last try\n\nWe had just spent four days waiting for a 9TB chunk of data to be transported\nto our new server only to find out that it was getting rejected by DRBD. The\nonly option that seemed left to us was to sit back and wait 50-60 days for a\nregular DRBD sync to happen. There was just this one last thing I wanted to try\nbefore giving up. The stumbling block at this point was getting DRBD on theo to\nfind the metadata for the drbd10 disk. From reading the documentation, I knew\nthat DRBD has metadata export and import commands. What if we would take a new\nLVM snapshot in Delft, take the disk offline and export its metadata, and then\non the other hand do a metadata import with the proper DRBD import command\n(instead of me writing zeroes to the disk and lying to DRBD about being in the\nmiddle of a resize). This would require us to redo the truck dance and wait\nfour days, but four days was still better than 50 days.\n\nUsing the staging setup I built at the start of this process (a good time\ninvestment!) I created a setup that allowed me to test three-way replication\nand truck-based replication at the same time. Without having to do any\narithmetic I came up with an intimidating but reliable sequence of commands to\n(1) initiate truck based replication and (2) export the DRBD metadata.\n\n```shell\nsudo lvremove -f gitlab_vg/truck\n## clear the bitmap to mark the sync point in time\nsudo drbdadm disconnect --stacked gitlab_data-stacked\nsudo drbdadm new-current-uuid --clear-bitmap --stacked gitlab_data-stacked/0\n## create a metadata dump\necho Yes | sudo gitlab-drbd slave\nsudo drbdadm primary gitlab_data\nsudo drbdadm apply-al --stacked gitlab_data-stacked\nsudo drbdadm dump-md --stacked gitlab_data-stacked > stacked-md-$(date +%s).txt\n## Create a block device snapshot\nsudo lvcreate -n truck -s --extents 50%FREE gitlab_vg/drbd\n## Turn gitlab back on\necho Yes |sudo gitlab-drbd slave\necho Yes |sudo gitlab-drbd master\n## Make sure the current node will 'win' as primary later on\nsudo drbdadm new-current-uuid --stacked gitlab_data-stacked/0\n```\n\nThis time I needed to take gitlab.com offline for a few minutes to be able to\ndo the metadata export. After that, a second waiting period of 4 days of\nstreaming the disk snapshot with `dd` and `ssh` commenced. And then came the\nbig moment of turning DRBD back on theo. It worked! Now I just had to wait\nfor the changes on disk of the last four days to be replicated (which took\nabout a day) and we were ready to flip the big switch, update the DNS and start\nserving gitlab.com from AWS. That final transition took another 10 minutes of\ndowntime, and then we were done.\n\n## Looking back\n\nAs soon as we flipped the switch and started operating out of AWS/Frankfurt,\ngitlab.com became noticeably more responsive. This is in spite of the fact that\nwe are _still_ running on a single server (an [AWS\nc3.8xlarge](http://aws.amazon.com/ec2/instance-types/#c3) instance at the\nmoment).\n\nCounting from the moment I was tasked to work on this data migration, we were\nable to move a 9TB filesystem to a different data center and hosting provider\nin three weeks, requiring 20 minutes of total downtime (spread over three\nmaintenance windows). We took an operational risk of prolonged downtime due to\noperator confusion in case of incidents, by deploying a new configuration that\nwhile tested to some degree was understood by only one member of the operations\nteam (myself). We were lucky that there was no incident during those three\nweeks that made this lack of shared knowledge a problem.\n\nNow if you will excuse me I have to go and explain to my colleagues how our\nnew gitlab.com infrastructure on AWS is set up. :)\n","yml",{},true,"/en-us/blog/moving-all-your-data",{"title":15,"description":16,"ogTitle":15,"ogDescription":16,"noIndex":12,"ogImage":19,"ogUrl":27,"ogSiteName":28,"ogType":29,"canonicalUrls":27},"https://about.gitlab.com/blog/moving-all-your-data","https://about.gitlab.com","article","en-us/blog/moving-all-your-data",[],"37w2pJl2KYBQP_bPPZLxSh9E1VSWOxBc9zHTaRP48wQ",{"data":34},{"logo":35,"freeTrial":40,"sales":45,"login":50,"items":55,"search":363,"minimal":394,"duo":413,"pricingDeployment":423},{"config":36},{"href":37,"dataGaName":38,"dataGaLocation":39},"/","gitlab logo","header",{"text":41,"config":42},"Get free trial",{"href":43,"dataGaName":44,"dataGaLocation":39},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":46,"config":47},"Talk to sales",{"href":48,"dataGaName":49,"dataGaLocation":39},"/sales/","sales",{"text":51,"config":52},"Sign in",{"href":53,"dataGaName":54,"dataGaLocation":39},"https://gitlab.com/users/sign_in/","sign in",[56,83,178,183,284,344],{"text":57,"config":58,"cards":60},"Platform",{"dataNavLevelOne":59},"platform",[61,67,75],{"title":57,"description":62,"link":63},"The intelligent orchestration platform for DevSecOps",{"text":64,"config":65},"Explore our Platform",{"href":66,"dataGaName":59,"dataGaLocation":39},"/platform/",{"title":68,"description":69,"link":70},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":71,"config":72},"Meet GitLab Duo",{"href":73,"dataGaName":74,"dataGaLocation":39},"/gitlab-duo-agent-platform/","gitlab duo agent platform",{"title":76,"description":77,"link":78},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":79,"config":80},"Learn more",{"href":81,"dataGaName":82,"dataGaLocation":39},"/why-gitlab/","why gitlab",{"text":84,"left":24,"config":85,"link":87,"lists":91,"footer":160},"Product",{"dataNavLevelOne":86},"solutions",{"text":88,"config":89},"View all Solutions",{"href":90,"dataGaName":86,"dataGaLocation":39},"/solutions/",[92,116,139],{"title":93,"description":94,"link":95,"items":100},"Automation","CI/CD and automation to accelerate deployment",{"config":96},{"icon":97,"href":98,"dataGaName":99,"dataGaLocation":39},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[101,105,108,112],{"text":102,"config":103},"CI/CD",{"href":104,"dataGaLocation":39,"dataGaName":102},"/solutions/continuous-integration/",{"text":68,"config":106},{"href":73,"dataGaLocation":39,"dataGaName":107},"gitlab duo agent platform - product menu",{"text":109,"config":110},"Source Code Management",{"href":111,"dataGaLocation":39,"dataGaName":109},"/solutions/source-code-management/",{"text":113,"config":114},"Automated Software Delivery",{"href":98,"dataGaLocation":39,"dataGaName":115},"Automated software delivery",{"title":117,"description":118,"link":119,"items":124},"Security","Deliver code faster without compromising security",{"config":120},{"href":121,"dataGaName":122,"dataGaLocation":39,"icon":123},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[125,129,134],{"text":126,"config":127},"Application Security Testing",{"href":121,"dataGaName":128,"dataGaLocation":39},"Application security testing",{"text":130,"config":131},"Software Supply Chain Security",{"href":132,"dataGaLocation":39,"dataGaName":133},"/solutions/supply-chain/","Software supply chain security",{"text":135,"config":136},"Software Compliance",{"href":137,"dataGaName":138,"dataGaLocation":39},"/solutions/software-compliance/","software compliance",{"title":140,"link":141,"items":146},"Measurement",{"config":142},{"icon":143,"href":144,"dataGaName":145,"dataGaLocation":39},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[147,151,155],{"text":148,"config":149},"Visibility & Measurement",{"href":144,"dataGaLocation":39,"dataGaName":150},"Visibility and Measurement",{"text":152,"config":153},"Value Stream Management",{"href":154,"dataGaLocation":39,"dataGaName":152},"/solutions/value-stream-management/",{"text":156,"config":157},"Analytics & Insights",{"href":158,"dataGaLocation":39,"dataGaName":159},"/solutions/analytics-and-insights/","Analytics and insights",{"title":161,"items":162},"GitLab for",[163,168,173],{"text":164,"config":165},"Enterprise",{"href":166,"dataGaLocation":39,"dataGaName":167},"/enterprise/","enterprise",{"text":169,"config":170},"Small Business",{"href":171,"dataGaLocation":39,"dataGaName":172},"/small-business/","small business",{"text":174,"config":175},"Public Sector",{"href":176,"dataGaLocation":39,"dataGaName":177},"/solutions/public-sector/","public sector",{"text":179,"config":180},"Pricing",{"href":181,"dataGaName":182,"dataGaLocation":39,"dataNavLevelOne":182},"/pricing/","pricing",{"text":184,"config":185,"link":187,"lists":191,"feature":271},"Resources",{"dataNavLevelOne":186},"resources",{"text":188,"config":189},"View all resources",{"href":190,"dataGaName":186,"dataGaLocation":39},"/resources/",[192,225,243],{"title":193,"items":194},"Getting started",[195,200,205,210,215,220],{"text":196,"config":197},"Install",{"href":198,"dataGaName":199,"dataGaLocation":39},"/install/","install",{"text":201,"config":202},"Quick start guides",{"href":203,"dataGaName":204,"dataGaLocation":39},"/get-started/","quick setup checklists",{"text":206,"config":207},"Learn",{"href":208,"dataGaLocation":39,"dataGaName":209},"https://university.gitlab.com/","learn",{"text":211,"config":212},"Product documentation",{"href":213,"dataGaName":214,"dataGaLocation":39},"https://docs.gitlab.com/","product documentation",{"text":216,"config":217},"Best practice videos",{"href":218,"dataGaName":219,"dataGaLocation":39},"/getting-started-videos/","best practice videos",{"text":221,"config":222},"Integrations",{"href":223,"dataGaName":224,"dataGaLocation":39},"/integrations/","integrations",{"title":226,"items":227},"Discover",[228,233,238],{"text":229,"config":230},"Customer success stories",{"href":231,"dataGaName":232,"dataGaLocation":39},"/customers/","customer success stories",{"text":234,"config":235},"Blog",{"href":236,"dataGaName":237,"dataGaLocation":39},"/blog/","blog",{"text":239,"config":240},"Remote",{"href":241,"dataGaName":242,"dataGaLocation":39},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":244,"items":245},"Connect",[246,251,256,261,266],{"text":247,"config":248},"GitLab Services",{"href":249,"dataGaName":250,"dataGaLocation":39},"/services/","services",{"text":252,"config":253},"Community",{"href":254,"dataGaName":255,"dataGaLocation":39},"/community/","community",{"text":257,"config":258},"Forum",{"href":259,"dataGaName":260,"dataGaLocation":39},"https://forum.gitlab.com/","forum",{"text":262,"config":263},"Events",{"href":264,"dataGaName":265,"dataGaLocation":39},"/events/","events",{"text":267,"config":268},"Partners",{"href":269,"dataGaName":270,"dataGaLocation":39},"/partners/","partners",{"backgroundColor":272,"textColor":273,"text":274,"image":275,"link":279},"#2f2a6b","#fff","Insights for the future of software development",{"altText":276,"config":277},"the source promo card",{"src":278},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":280,"config":281},"Read the latest",{"href":282,"dataGaName":283,"dataGaLocation":39},"/the-source/","the source",{"text":285,"config":286,"lists":288},"Company",{"dataNavLevelOne":287},"company",[289],{"items":290},[291,296,302,304,309,314,319,324,329,334,339],{"text":292,"config":293},"About",{"href":294,"dataGaName":295,"dataGaLocation":39},"/company/","about",{"text":297,"config":298,"footerGa":301},"Jobs",{"href":299,"dataGaName":300,"dataGaLocation":39},"/jobs/","jobs",{"dataGaName":300},{"text":262,"config":303},{"href":264,"dataGaName":265,"dataGaLocation":39},{"text":305,"config":306},"Leadership",{"href":307,"dataGaName":308,"dataGaLocation":39},"/company/team/e-group/","leadership",{"text":310,"config":311},"Team",{"href":312,"dataGaName":313,"dataGaLocation":39},"/company/team/","team",{"text":315,"config":316},"Handbook",{"href":317,"dataGaName":318,"dataGaLocation":39},"https://handbook.gitlab.com/","handbook",{"text":320,"config":321},"Investor relations",{"href":322,"dataGaName":323,"dataGaLocation":39},"https://ir.gitlab.com/","investor relations",{"text":325,"config":326},"Trust Center",{"href":327,"dataGaName":328,"dataGaLocation":39},"/security/","trust center",{"text":330,"config":331},"AI Transparency Center",{"href":332,"dataGaName":333,"dataGaLocation":39},"/ai-transparency-center/","ai transparency center",{"text":335,"config":336},"Newsletter",{"href":337,"dataGaName":338,"dataGaLocation":39},"/company/contact/#contact-forms","newsletter",{"text":340,"config":341},"Press",{"href":342,"dataGaName":343,"dataGaLocation":39},"/press/","press",{"text":345,"config":346,"lists":347},"Contact us",{"dataNavLevelOne":287},[348],{"items":349},[350,353,358],{"text":46,"config":351},{"href":48,"dataGaName":352,"dataGaLocation":39},"talk to sales",{"text":354,"config":355},"Support portal",{"href":356,"dataGaName":357,"dataGaLocation":39},"https://support.gitlab.com","support portal",{"text":359,"config":360},"Customer portal",{"href":361,"dataGaName":362,"dataGaLocation":39},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":364,"login":365,"suggestions":372},"Close",{"text":366,"link":367},"To search repositories and projects, login to",{"text":368,"config":369},"gitlab.com",{"href":53,"dataGaName":370,"dataGaLocation":371},"search login","search",{"text":373,"default":374},"Suggestions",[375,377,381,383,387,391],{"text":68,"config":376},{"href":73,"dataGaName":68,"dataGaLocation":371},{"text":378,"config":379},"Code Suggestions (AI)",{"href":380,"dataGaName":378,"dataGaLocation":371},"/solutions/code-suggestions/",{"text":102,"config":382},{"href":104,"dataGaName":102,"dataGaLocation":371},{"text":384,"config":385},"GitLab on AWS",{"href":386,"dataGaName":384,"dataGaLocation":371},"/partners/technology-partners/aws/",{"text":388,"config":389},"GitLab on Google Cloud",{"href":390,"dataGaName":388,"dataGaLocation":371},"/partners/technology-partners/google-cloud-platform/",{"text":392,"config":393},"Why GitLab?",{"href":81,"dataGaName":392,"dataGaLocation":371},{"freeTrial":395,"mobileIcon":400,"desktopIcon":405,"secondaryButton":408},{"text":396,"config":397},"Start free trial",{"href":398,"dataGaName":44,"dataGaLocation":399},"https://gitlab.com/-/trials/new/","nav",{"altText":401,"config":402},"Gitlab Icon",{"src":403,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":401,"config":406},{"src":407,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":409,"config":410},"Get Started",{"href":411,"dataGaName":412,"dataGaLocation":399},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":414,"mobileIcon":419,"desktopIcon":421},{"text":415,"config":416},"Learn more about GitLab Duo",{"href":417,"dataGaName":418,"dataGaLocation":399},"/gitlab-duo/","gitlab duo",{"altText":401,"config":420},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":422},{"src":407,"dataGaName":404,"dataGaLocation":399},{"freeTrial":424,"mobileIcon":429,"desktopIcon":431},{"text":425,"config":426},"Back to pricing",{"href":181,"dataGaName":427,"dataGaLocation":399,"icon":428},"back to pricing","GoBack",{"altText":401,"config":430},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":432},{"src":407,"dataGaName":404,"dataGaLocation":399},{"title":434,"button":435,"config":440},"See how agentic AI transforms software delivery",{"text":436,"config":437},"Watch GitLab Transcend now",{"href":438,"dataGaName":439,"dataGaLocation":39},"/events/transcend/virtual/","transcend event",{"layout":441,"icon":442},"release","AiStar",{"data":444},{"text":445,"source":446,"edit":452,"contribute":457,"config":462,"items":467,"minimal":674},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":447,"config":448},"View page source",{"href":449,"dataGaName":450,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":453,"config":454},"Edit this page",{"href":455,"dataGaName":456,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":458,"config":459},"Please contribute",{"href":460,"dataGaName":461,"dataGaLocation":451},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":463,"facebook":464,"youtube":465,"linkedin":466},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[468,515,569,613,640],{"title":179,"links":469,"subMenu":484},[470,474,479],{"text":471,"config":472},"View plans",{"href":181,"dataGaName":473,"dataGaLocation":451},"view plans",{"text":475,"config":476},"Why Premium?",{"href":477,"dataGaName":478,"dataGaLocation":451},"/pricing/premium/","why premium",{"text":480,"config":481},"Why Ultimate?",{"href":482,"dataGaName":483,"dataGaLocation":451},"/pricing/ultimate/","why ultimate",[485],{"title":486,"links":487},"Contact Us",[488,491,493,495,500,505,510],{"text":489,"config":490},"Contact sales",{"href":48,"dataGaName":49,"dataGaLocation":451},{"text":354,"config":492},{"href":356,"dataGaName":357,"dataGaLocation":451},{"text":359,"config":494},{"href":361,"dataGaName":362,"dataGaLocation":451},{"text":496,"config":497},"Status",{"href":498,"dataGaName":499,"dataGaLocation":451},"https://status.gitlab.com/","status",{"text":501,"config":502},"Terms of use",{"href":503,"dataGaName":504,"dataGaLocation":451},"/terms/","terms of use",{"text":506,"config":507},"Privacy statement",{"href":508,"dataGaName":509,"dataGaLocation":451},"/privacy/","privacy statement",{"text":511,"config":512},"Cookie preferences",{"dataGaName":513,"dataGaLocation":451,"id":514,"isOneTrustButton":24},"cookie preferences","ot-sdk-btn",{"title":84,"links":516,"subMenu":525},[517,521],{"text":518,"config":519},"DevSecOps platform",{"href":66,"dataGaName":520,"dataGaLocation":451},"devsecops platform",{"text":522,"config":523},"AI-Assisted Development",{"href":417,"dataGaName":524,"dataGaLocation":451},"ai-assisted development",[526],{"title":527,"links":528},"Topics",[529,534,539,544,549,554,559,564],{"text":530,"config":531},"CICD",{"href":532,"dataGaName":533,"dataGaLocation":451},"/topics/ci-cd/","cicd",{"text":535,"config":536},"GitOps",{"href":537,"dataGaName":538,"dataGaLocation":451},"/topics/gitops/","gitops",{"text":540,"config":541},"DevOps",{"href":542,"dataGaName":543,"dataGaLocation":451},"/topics/devops/","devops",{"text":545,"config":546},"Version Control",{"href":547,"dataGaName":548,"dataGaLocation":451},"/topics/version-control/","version control",{"text":550,"config":551},"DevSecOps",{"href":552,"dataGaName":553,"dataGaLocation":451},"/topics/devsecops/","devsecops",{"text":555,"config":556},"Cloud Native",{"href":557,"dataGaName":558,"dataGaLocation":451},"/topics/cloud-native/","cloud native",{"text":560,"config":561},"AI for Coding",{"href":562,"dataGaName":563,"dataGaLocation":451},"/topics/devops/ai-for-coding/","ai for coding",{"text":565,"config":566},"Agentic AI",{"href":567,"dataGaName":568,"dataGaLocation":451},"/topics/agentic-ai/","agentic ai",{"title":570,"links":571},"Solutions",[572,574,576,581,585,588,592,595,597,600,603,608],{"text":126,"config":573},{"href":121,"dataGaName":126,"dataGaLocation":451},{"text":115,"config":575},{"href":98,"dataGaName":99,"dataGaLocation":451},{"text":577,"config":578},"Agile development",{"href":579,"dataGaName":580,"dataGaLocation":451},"/solutions/agile-delivery/","agile delivery",{"text":582,"config":583},"SCM",{"href":111,"dataGaName":584,"dataGaLocation":451},"source code management",{"text":530,"config":586},{"href":104,"dataGaName":587,"dataGaLocation":451},"continuous integration & delivery",{"text":589,"config":590},"Value stream management",{"href":154,"dataGaName":591,"dataGaLocation":451},"value stream management",{"text":535,"config":593},{"href":594,"dataGaName":538,"dataGaLocation":451},"/solutions/gitops/",{"text":164,"config":596},{"href":166,"dataGaName":167,"dataGaLocation":451},{"text":598,"config":599},"Small business",{"href":171,"dataGaName":172,"dataGaLocation":451},{"text":601,"config":602},"Public sector",{"href":176,"dataGaName":177,"dataGaLocation":451},{"text":604,"config":605},"Education",{"href":606,"dataGaName":607,"dataGaLocation":451},"/solutions/education/","education",{"text":609,"config":610},"Financial services",{"href":611,"dataGaName":612,"dataGaLocation":451},"/solutions/finance/","financial services",{"title":184,"links":614},[615,617,619,621,624,626,628,630,632,634,636,638],{"text":196,"config":616},{"href":198,"dataGaName":199,"dataGaLocation":451},{"text":201,"config":618},{"href":203,"dataGaName":204,"dataGaLocation":451},{"text":206,"config":620},{"href":208,"dataGaName":209,"dataGaLocation":451},{"text":211,"config":622},{"href":213,"dataGaName":623,"dataGaLocation":451},"docs",{"text":234,"config":625},{"href":236,"dataGaName":237,"dataGaLocation":451},{"text":229,"config":627},{"href":231,"dataGaName":232,"dataGaLocation":451},{"text":239,"config":629},{"href":241,"dataGaName":242,"dataGaLocation":451},{"text":247,"config":631},{"href":249,"dataGaName":250,"dataGaLocation":451},{"text":252,"config":633},{"href":254,"dataGaName":255,"dataGaLocation":451},{"text":257,"config":635},{"href":259,"dataGaName":260,"dataGaLocation":451},{"text":262,"config":637},{"href":264,"dataGaName":265,"dataGaLocation":451},{"text":267,"config":639},{"href":269,"dataGaName":270,"dataGaLocation":451},{"title":285,"links":641},[642,644,646,648,650,652,654,658,663,665,667,669],{"text":292,"config":643},{"href":294,"dataGaName":287,"dataGaLocation":451},{"text":297,"config":645},{"href":299,"dataGaName":300,"dataGaLocation":451},{"text":305,"config":647},{"href":307,"dataGaName":308,"dataGaLocation":451},{"text":310,"config":649},{"href":312,"dataGaName":313,"dataGaLocation":451},{"text":315,"config":651},{"href":317,"dataGaName":318,"dataGaLocation":451},{"text":320,"config":653},{"href":322,"dataGaName":323,"dataGaLocation":451},{"text":655,"config":656},"Sustainability",{"href":657,"dataGaName":655,"dataGaLocation":451},"/sustainability/",{"text":659,"config":660},"Diversity, inclusion and belonging (DIB)",{"href":661,"dataGaName":662,"dataGaLocation":451},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":325,"config":664},{"href":327,"dataGaName":328,"dataGaLocation":451},{"text":335,"config":666},{"href":337,"dataGaName":338,"dataGaLocation":451},{"text":340,"config":668},{"href":342,"dataGaName":343,"dataGaLocation":451},{"text":670,"config":671},"Modern Slavery Transparency Statement",{"href":672,"dataGaName":673,"dataGaLocation":451},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":675},[676,679,682],{"text":677,"config":678},"Terms",{"href":503,"dataGaName":504,"dataGaLocation":451},{"text":680,"config":681},"Cookies",{"dataGaName":513,"dataGaLocation":451,"id":514,"isOneTrustButton":24},{"text":683,"config":684},"Privacy",{"href":508,"dataGaName":509,"dataGaLocation":451},[686],{"id":687,"title":18,"body":8,"config":688,"content":690,"description":8,"extension":22,"meta":694,"navigation":24,"path":695,"seo":696,"stem":697,"__hash__":698},"blogAuthors/en-us/blog/authors/jacob-vosmaer.yml",{"template":689},"BlogAuthor",{"name":18,"config":691},{"headshot":692,"ctfId":693},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749659488/Blog/Author%20Headshots/gitlab-logo-extra-whitespace.png","Jacob-Vosmaer",{},"/en-us/blog/authors/jacob-vosmaer",{},"en-us/blog/authors/jacob-vosmaer","8CR6ShwBsKxqGM4AAocQnzaLp61rqybzr4XSir9Y3Ag",[700,713,725],{"content":701,"config":711},{"title":702,"description":703,"authors":704,"heroImage":706,"date":707,"category":9,"tags":708,"body":710},"How IIT Bombay students are coding the future with GitLab","At GitLab, we often talk about how software accelerates innovation. But sometimes, you have to step away from the Zoom calls and stand in a crowded university hall to remember why we do this.",[705],"Nick Veenhof","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099013/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2814%29_6VTUA8mUhOZNDaRVNPeKwl_1750099012960.png","2026-01-08",[255,607,709],"open source","The GitLab team recently had the privilege of judging the **iHack Hackathon** at **IIT Bombay's E-Summit**. The energy was electric, the coffee was flowing, and the talent was undeniable. But what struck us most wasn't just the code — it was the sheer determination of students to solve real-world problems, often overcoming significant logistical and financial hurdles to simply be in the room.\n\n\nThrough our [GitLab for Education program](https://about.gitlab.com/solutions/education/), we aim to empower the next generation of developers with tools and opportunity. Here is a look at what the students built, and how they used GitLab to bridge the gap between idea and reality.\n\n## The challenge: Build faster, build securely\n\nThe premise for the GitLab track of the hackathon was simple: Don't just show us a product; show us how you built it. We wanted to see how students utilized GitLab's platform — from Issue Boards to CI/CD pipelines — to accelerate the development lifecycle.\n\nThe results were inspiring.\n\n## The winners\n\n### 1st place: Team Decode — Democratizing Scientific Research\n\n**Project:** FIRE (Fast Integrated Research Environment)\n\nTeam Decode took home the top prize with a solution that warms a developer's heart: a local-first, blazing-fast data processing tool built with [Rust](https://about.gitlab.com/blog/secure-rust-development-with-gitlab/) and Tauri. They identified a massive pain point for data science students: existing tools are fragmented, slow, and expensive.\n\nTheir solution, FIRE, allows researchers to visualize complex formats (like NetCDF) instantly. What impressed the judges most was their \"hacker\" ethos. They didn't just build a tool; they built it to be open and accessible.\n\n**How they used GitLab:** Since the team lived far apart, asynchronous communication was key. They utilized **GitLab Issue Boards** and **Milestones** to track progress and integrated their repo with Telegram to get real-time push notifications. As one team member noted, \"Coordinating all these technologies was really difficult, and what helped us was GitLab... the Issue Board really helped us track who was doing what.\"\n\n![Team Decode](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/epqazj1jc5c7zkgqun9h.jpg)\n\n### 2nd place: Team BichdeHueDost — Reuniting to Solve Payments\n\n**Project:** SemiPay (RFID Cashless Payment for Schools)\n\nThe team name, BichdeHueDost, translates to \"Friends who have been set apart.\" It's a fitting name for a group of friends who went to different colleges but reunited to build this project. They tackled a unique problem: handling cash in schools for young children. Their solution used RFID cards backed by a blockchain ledger to ensure secure, cashless transactions for students.\n\n**How they used GitLab:** They utilized [GitLab CI/CD](https://about.gitlab.com/topics/ci-cd/) to automate the build process for their Flutter application (APK), ensuring that every commit resulted in a testable artifact. This allowed them to iterate quickly despite the \"flaky\" nature of cross-platform mobile development.\n\n![Team BichdeHueDost](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/pkukrjgx2miukb6nrj5g.jpg)\n\n### 3rd place: Team ZenYukti — Agentic Repository Intelligence\n\n**Project:** RepoInsight AI (AI-powered, GitLab-native intelligence platform)\n\nTeam ZenYukti impressed us with a solution that tackles a universal developer pain point: understanding unfamiliar codebases. What stood out to the judges was the tool's practical approach to onboarding and code comprehension: RepoInsight-AI automatically generates documentation, visualizes repository structure, and even helps identify bugs, all while maintaining context about the entire codebase.\n\n**How they used GitLab:** The team built a comprehensive CI/CD pipeline that showcased GitLab's security and DevOps capabilities. They integrated [GitLab's Security Templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Security) (SAST, Dependency Scanning, and Secret Detection), and utilized [GitLab Container Registry](https://docs.gitlab.com/user/packages/container_registry/) to manage their Docker images for backend and frontend components. They created an AI auto-review bot that runs on merge requests, demonstrating an \"agentic workflow\" where AI assists in the development process itself.\n\n![Team ZenYukti](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380253/ymlzqoruv5al1secatba.jpg)\n\n## Beyond the code: A lesson in inclusion\n\nWhile the code was impressive, the most powerful moment of the event happened away from the keyboard.\n\nDuring the feedback session, we learned about the journey Team ZenYukti took to get to Mumbai. They traveled over 24 hours, covering nearly 1,800 kilometers. Because flights were too expensive and trains were booked, they traveled in the \"General Coach,\" a non-reserved, severely overcrowded carriage.\n\nAs one student described it:\n\n*\"You cannot even imagine something like this... there are no seats... people sit on the top of the train. This is what we have endured.\"*\n\nThis hit home. [Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/company/culture/inclusion/) are core values at GitLab. We realized that for these students, the barrier to entry wasn't intellect or skill, it was access.\n\nIn that moment, we decided to break that barrier. We committed to reimbursing the travel expenses for the participants who struggled to get there. It's a small step, but it underlines a massive truth: **talent is distributed equally, but opportunity is not.**\n\n![hackathon class together](https://res.cloudinary.com/about-gitlab-com/image/upload/v1767380252/o5aqmboquz8ehusxvgom.jpg)\n\n### The future is bright (and automated)\n\nWe also saw incredible potential in teams like Prometheus, who attempted to build an autonomous patch remediation tool (DevGuardian), and Team Arrakis, who built a voice-first job portal for blue-collar workers using [GitLab Duo](https://about.gitlab.com/gitlab-duo/) to troubleshoot their pipelines.\n\nTo all the students who participated: You are the future. Through [GitLab for Education](https://about.gitlab.com/solutions/education/), we are committed to providing you with the top-tier tools (like GitLab Ultimate) you need to learn, collaborate, and change the world — whether you are coding from a dorm room, a lab, or a train carriage. **Keep shipping.**\n\n> :bulb: Learn more about the [GitLab for Education program](https://about.gitlab.com/solutions/education/).\n",{"slug":712,"featured":12,"template":13},"how-iit-bombay-students-code-future-with-gitlab",{"content":714,"config":723},{"title":715,"description":716,"authors":717,"heroImage":718,"date":719,"category":9,"tags":720,"body":722},"Artois University elevates research and curriculum with GitLab Ultimate for Education","Artois University's CRIL leveraged the GitLab for Education program to gain free access to Ultimate, transforming advanced research and computer science curricula.",[705],"https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099203/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2820%29_2bJGC5ZP3WheoqzlLT05C5_1750099203484.png","2025-12-10",[607,255,721],"product","Leading academic institutions face a critical challenge: how to provide thousands of students and researchers with industry-standard, **full-featured DevSecOps tools** without compromising institutional control. Many start with basic version control, but the modern curriculum demands integrated capabilities for planning, security, and advanced CI/CD.\n\nThe **GitLab for Education program** is designed to solve this by providing access to **GitLab Ultimate** for qualifying institutions, allowing them to scale their operations and elevate their academic offerings. \n\nThis article showcases a powerful success story from the **Centre de Recherche en Informatique de Lens (CRIL)**, a joint laboratory of **Artois University** and CNRS in France. After years of relying solely on GitLab Community Edition (CE), the university's move to GitLab Ultimate through the GitLab for Education program immediately unlocked advanced capabilities, transforming their teaching, research, and contribution workflows virtually overnight. This story demonstrates why GitLab Ultimate is essential for institutions seeking to deliver advanced computer science and research curricula.\n\n## GitLab Ultimate unlocked: Managing scale and driving academic value\n\n**Artois University's** self-managed GitLab instance is a large-scale operation, supporting nearly **3,000 users** across approximately **19,000 projects**, primarily serving computer science students and researchers. While GitLab Community Edition was robust, the upgrade to GitLab Ultimate provided the sophisticated tooling necessary for managing this scale and facilitating advanced university-level work.\n\n***\"We can see the difference,\" says Daniel Le Berre, head of research at CRIL and the instance maintainer. \"It's a completely different product. Each week reveals new features that directly enhance our productivity and teaching.\"***\n\nThe institution joined the GitLab for Education program specifically because it covers both **instructional and non-commercial research use cases** and offers full access to Ultimate's features, removing significant cost barriers.\n\n### Key GitLab Ultimate benefits for students and researchers\n\n* **Advanced project management at scale:** Master's students now benefit from **GitLab Ultimate's project planning features**. This enables them to structure, track, and manage complex, long-term research projects using professional methodologies like portfolio management and advanced issue tracking that seamlessly roll up across their thousands of projects.\n\n* **Enhanced visibility:** Features like improved dashboards and code previews directly in Markdown files dramatically streamline tracking and documentation review, reducing administrative friction for both instructors and students managing large project loads.\n\n## Comprehensive curriculum: From concepts to continuous delivery\n\nGitLab Ultimate is deeply integrated into the computer science curriculum, moving students beyond simple `git` commands to practical **DevSecOps implementation**.\n\n* **Git fundamentals:** Students begin by visualizing concepts using open-source tools to master Git concepts.\n\n* **Full CI/CD implementation:** Students use GitLab CI for rigorous **Test-Driven Development (TDD)** in their software projects. They learn to build, test, and perform quality assurance using unit and integration testing pipelines—core competency made seamless by the integrated platform.\n\n* **DevSecOps for research and documentation:** The university teaches students that DevSecOps principles are vital for all collaborative work. Inspired by earlier work in Delft, students manage and produce critical research documentation (PDFs from Markdown files) using GitLab, incorporating quality checks like linters and spell checks directly in the CI pipeline. This ensures high-quality, reproducible research output.\n\n* **Future-proofing security skills:** The GitLab Ultimate platform immediately positions the institution to incorporate advanced DevSecOps features like SAST and DAST scanning as their research and development code projects grow, ensuring students are prepared for industry security standards.\n\n## Accelerating open source contributions with GitLab Duo\n\nAccess to the full GitLab platform, including our AI capabilities, has empowered students to make impactful contributions to the wider open source community faster than ever before.\n\nTwo Master's students recently completed direct contributions to the GitLab product, adding the **ORCID identifier** into user profiles. Working on GitLab.com, they leveraged **GitLab Duo's AI chat and code suggestions** to navigate the codebase efficiently.\n\n***\"This would not have been possible without GitLab Duo,\" Daniel Le Berre notes. \"The AI features helped students, who might have lacked deep codebase knowledge, deliver meaningful contributions in just two weeks.\"***\n\nThis demonstrates how providing students with cutting-edge tools **accelerates their learning and impact**, allowing them to translate classroom knowledge into real-world contributions immediately.\n\n## Empowering open research and institutional control\n\nThe stability of the self-managed instance at Artois University is key to its success. This model guarantees **institutional control and stability** — a critical factor for long-term research preservation.\n\nThe institution's expertise in this area was recently highlighted in a major 2024 study led by CRIL, titled: \"[Higher Education and Research Forges in France - Definition, uses, limitations encountered and needs analysis](https://hal.science/hal-04208924v4)\" ([Project on GitLab](https://gitlab.in2p3.fr/coso-college-codes-sources-et-logiciels/forges-esr-en)). The research found that the vast majority of public forges in French Higher Education and Research relied on **GitLab**. This finding underscores the consensus among academic leaders that self-hosted solutions are essential for **data control and longevity**, especially when compared to relying on external, commercial forges.\n\n## Unlock GitLab Ultimate for your institution today\n\nThe success story of **Artois University's CRIL** proves the transformative power of the GitLab for Education program. By providing **free access to GitLab Ultimate**, we enable large-scale institutions to:\n\n1.  **Deliver a modern, integrated DevSecOps curriculum.**\n\n2.  **Support advanced, collaborative research projects with Ultimate planning features.**\n\n3.  **Empower students to make AI-assisted open source contributions.**\n\n4.  **Maintain institutional control and data longevity.**\n\nIf your academic institution is ready to equip its students and researchers with the complete DevSecOps platform and its most advanced features, we invite you to join the program.\n\nThe program provides **free access to GitLab Ultimate** for qualifying instructional and non-commercial research use cases.\n\n**Apply now [online](https://about.gitlab.com/solutions/education/join/).**\n",{"slug":724,"featured":24,"template":13},"artois-university-elevates-curriculum-with-gitlab-ultimate-for-education",{"content":726,"config":739},{"category":9,"tags":727,"body":730,"date":731,"updatedDate":732,"heroImage":733,"authors":734,"title":737,"description":738},[728,729,102],"tutorial","git","\nEnterprise teams are increasingly migrating from Azure DevOps to GitLab to gain strategic advantages and accelerate secure software delivery. \n\n\n- GitLab comes with integrated controls, policies, and [compliance frameworks](https://docs.gitlab.com/user/compliance/compliance_frameworks/) that allow organizations to implement software delivery standards at scale. This is especially important for regulated industries.\n\n- [Security testing](https://docs.gitlab.com/user/application_security/) is embedded in the pipeline and results show in the developer workflow, including static application security testing (SAST), source code analysis (SCA), dynamic application security testing (DAST), infrastructure-as-code scanning (IaC), container scanning, and API scanning.\n\n- [AI capabilities](https://about.gitlab.com/gitlab-duo-agent-platform/) across the full software delivery lifecycle include advanced agent orchestration and customizable flows to support how your organizational teams work.\n\n\nGitLab's open-source, open-core approach, flexible deployment options such as single-tenant dedicated and self-managed, and truly unified platform eliminate integration complexity and security gaps. \n\n\nFor teams facing mounting pressure to accelerate delivery while strengthening security posture and maintaining regulatory compliance, GitLab represents not just a migration but a platform evolution.\n\n\nMigrating from Azure DevOps to GitLab can seem like a daunting task, but with the right approach and tools, it can be a smooth and efficient process. This guide will walk you through the steps needed to successfully migrate your projects, repositories, and pipelines from Azure DevOps to GitLab.\n\n\n## Overview\n\nGitLab provides both [Congregate](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/) (maintained by [GitLab Professional Services](https://about.gitlab.com/professional-services/) organization) and [a built-in Git repository import](https://docs.gitlab.com/user/project/import/repo_by_url/) for migrating projects from Azure DevOps (ADO). These options support repository-by-repository or bulk migration and preserve git commit history, branches, and tags. With Congregate and professional services tools, we support additional assets such as wikis, work items, CI/CD variables, container images, packages, pipelines, and more (see this [feature matrix](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/blob/master/customer/ado-migration-features-matrix.md)). Use this guide to plan and execute your migration and complete post-migration follow-up tasks.\n\n\nEnterprises migrating from ADO to GitLab commonly follow a multi-phase approach:\n\n\n- Migrate repositories from ADO to GitLab using Congregate or GitLab's built-in repository migration.\n\n- Migrate pipelines from Azure Pipelines to GitLab CI/CD.\n\n- Migrate remaining assets such as boards, work items, and artifacts to GitLab Issues, Epics, and the Package and Container Registries.\n\n\nHigh-level migration phases:\n\n\n```mermaid\ngraph LR\n    subgraph Prerequisites\n        direction TB\n        A[\"Set up identity provider (IdP) and\u003Cbr/>provision users\"]\n        A --> B[\"Set up runners and\u003Cbr/>third-party integrations\"]\n        B --> I[\"Users enablement and\u003Cbr/>change management\"]\n    end\n    \n    subgraph MigrationPhase[\"Migration phase\"]\n        direction TB\n        C[\"Migrate source code\"]\n        C --> D[\"Preserve contributions and\u003Cbr/> format history\"]\n        D --> E[\"Migrate work items and\u003Cbr/>map to \u003Ca href=\"https://docs.gitlab.com/topics/plan_and_track/\">GitLab Plan \u003Cbr/>and track work\"]\n    end\n    \n    subgraph PostMigration[\"Post-migration steps\"]\n        direction TB\n        F[\"Create or translate \u003Cbr/>ADO pipelines to GitLab CI\"]\n        F --> G[\"Migrate other assets\u003Cbr/>packages and container images\"]\n        G --> H[\"Introduce \u003Ca href=\"https://docs.gitlab.com/user/application_security/secure_your_application/\">security\u003C/a> and\u003Cbr/>SDLC improvements\"]\n    end\n    \n    Prerequisites --> MigrationPhase\n    MigrationPhase --> PostMigration\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style I fill:#FC6D26\n    style C fill:#8C929D\n    style D fill:#8C929D\n    style E fill:#8C929D\n    style F fill:#FFA500\n    style G fill:#FFA500\n    style H fill:#FFA500\n```\n\n\n## Planning your migration\n\n\n**To plan your migration, ask these questions:**\n\n\n- How soon do we need to complete the migration?\n\n- Do we understand what will be migrated?\n\n- Who will run the migration?\n\n- What organizational structure do we want in GitLab?\n\n- Are there any constraints, limitations, or pitfalls that need to be taken into account?\n\n\nDetermine your timeline, as it will largely dictate your migration approach. Identify champions or groups familiar with both ADO and GitLab platforms (such as early adopters) to help drive adoption and provide guidance.\n\n\n**Inventory what you need to migrate:**\n\n\n- The number of repositories, pull requests, and contributors\n\n- The number and complexity of work items and pipelines\n\n- Repository sizes and dependency relationships\n\n- Critical integrations and runner requirements (agent pools with specific capabilities)\n\n\nUse GitLab Professional Services's [Evaluate](https://gitlab.com/gitlab-org/professional-services-automation/tools/utilities/evaluate#beta-azure-devops) tool to produce a complete inventory of your entire Azure DevOps organization, including repositories, PR counts, contributor lists, number of pipelines, work items, CI/CD variables and more. If you're working with the GitLab Professional Services team, share this report with your engagement manager or technical architect to help plan the migration.\n\n\nMigration timing is primarily driven by pull request count, repository size, and amount of contributions (e.g. comments in PR, work items, etc). For example, 1,000 small repositories with few PRs and limited contributors can migrate much faster than a smaller set of repositories containing tens of thousands of PRs and thousands of contributors. Use your inventory data to estimate effort and plan test runs before proceeding with production migrations.\n\n\nCompare inventory against your desired timeline and decide whether to migrate all repositories at once or in batches. If teams cannot migrate simultaneously, batch and stagger migrations to align with team schedules. For example, in Professional Services engagements, we organize migrations into waves of 200-300 projects to manage complexity and respect API rate limits, both in [GitLab](https://docs.gitlab.com/security/rate_limits/) and [ADO](https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops).\n\n\nGitLab's built-in [repository importer](https://docs.gitlab.com/user/project/import/repo_by_url/) migrates Git repositories (commits, branches, and tags) one-by-one. Congregate is designed to preserve pull requests (known in GitLab as merge requests), comments, and related metadata where possible; the simple built-in repository import focuses only on the Git data (history, branches, and tags).\n\n\n**Items that typically require separate migration or manual recreation:**\n\n\n- Azure Pipelines - create equivalent GitLab CI/CD pipelines (consult with [CI/CD YAML](https://docs.gitlab.com/ci/yaml/) and/or with [CI/CD components](https://docs.gitlab.com/ci/components/)). Alternatively, consider using AI-based pipeline conversion available in Congregate.\n\n- Work items and boards - map to GitLab Issues, Epics, and Issue Boards.\n\n- Artifacts, container images (ACR) - migrate to GitLab Package Registry or Container Registry.\n\n- Service hooks and external integrations - recreate in GitLab.\n\n- [Permissions models](https://docs.gitlab.com/user/permissions/) differ between ADO and GitLab; review and plan permissions mapping rather than assuming exact preservation.\n\n\nReview what each tool (Congregate vs. built-in import) will migrate and choose the one that fits your needs. Make a list of any data or integrations that must be migrated or recreated manually.\n\n\n**Who will run the migration?**\n\n\nMigrations are typically run by a GitLab group owner or instance administrator, or by a designated migrator who has been granted the necessary permissions on the destination group/project. Congregate and the GitLab import APIs require valid authentication tokens for both Azure DevOps and GitLab.\n\n\n- Decide whether a group owner/admin will perform the migrations or whether you will grant a specific team/person delegated access.\n\n- Ensure the migrator has correctly configured personal access tokens (Azure DevOps and GitLab) with the scopes required by your chosen migration tool (for example, api/read_repository scopes and any tool-specific requirements). \n\n- Test tokens and permissions with a small pilot migration.\n\n**Note:** Congregate leverages file-based import functionality for ADO migrations and requires instance administrator permissions to run ([see our documentation](https://docs.gitlab.com/user/project/settings/import_export/#migrate-projects-by-uploading-an-export-file)). If you are migrating to GitLab.com, consider engaging Professional Services. For more information, see the [Professional Services Full Catalog](https://about.gitlab.com/professional-services/catalog/). Non-admin account cannot preserve contribution attribution!\n\n\n**What organizational structure do we want in GitLab?**\n\nWhile it's possible to map ADO structure directly to GitLab structure, it's recommended to rationalize and simplify the structure during migration. Consider how teams will work in GitLab and design the structure to facilitate collaboration and access management. Here is a way to think about mapping ADO structure to GitLab structure:\n\n\n```mermaid\ngraph TD\n    subgraph GitLab\n        direction TB\n        A[\"Top-level Group\"]\n        B[\"Subgroup (optional)\"]\n        C[\"Projects\"]\n        A --> B\n        A --> C\n        B --> C\n    end\n\n    subgraph AzureDevOps[\"Azure DevOps\"]\n        direction TB\n        F[\"Organizations\"]\n        G[\"Projects\"]\n        H[\"Repositories\"]\n        F --> G\n        G --> H\n    end\n\n    style A fill:#FC6D26\n    style B fill:#FC6D26\n    style C fill:#FC6D26\n    style F fill:#8C929D\n    style G fill:#8C929D\n    style H fill:#8C929D\n```\n\nRecommended approach:\n\n\n- Map each ADO organization to a GitLab group (or a small set of groups), not to many small groups. Avoid creating a GitLab group for every ADO team project. Use migration as an opportunity to rationalize your GitLab structure.\n\n- Use subgroups and project-level permissions to group related repositories.\n\n- Manage access to sets of projects by using GitLab groups and group membership (groups and subgroups) rather than one group per team project.\n\n- Review GitLab [permissions](https://docs.gitlab.com/ee/user/permissions.html) and consider [SAML Group Links](https://docs.gitlab.com/user/group/saml_sso/group_sync/) to implement an enterprise RBAC model for your GitLab instance (or a GitLab.com namespace).\n\n\n**ADO Boards and work items: State of migration**\n\n\nIt's important to understand how work items migrate from ADO into GitLab Plan (issues, epics, and boards).\n\n\n- ADO Boards and work items map to GitLab Issues, Epics, and Issue Boards. Plan how your workflows and board configurations will translate.\n\n- ADO Epics and Features become GitLab Epics.\n\n- Other work item types (e.g., user stories, tasks, bugs) become project-scoped issues.\n\n- Most standard fields are preserved; selected custom fields can be migrated when supported.\n\n- Parent-child relationships are retained so Epics reference all related issues.\n\n- Links to pull requests are converted to merge request links to maintain development traceability.\n\n\nExample: Migration of an individual work item to a GitLab Issue, including field accuracy and relationships:\n\n\n![Example: Migration of an individual work item to a GitLab Issue](https://res.cloudinary.com/about-gitlab-com/image/upload/v1764769188/ztesjnxxfbwmfmtckyga.png)\n\n\nBatching guidance:\n\n\n- If you need to run migrations in batches, use your new group/subgroup structure to define batches (for example, by ADO organization or by product area).\n\n- Use inventory reports to drive batch selection and test each batch with a pilot migration before scaling.\n\n\n**Pipelines migration**\n\n\nCongregate [recently introduced](https://gitlab.com/gitlab-org/professional-services-automation/tools/migration/congregate/-/merge_requests/1298) AI-powered conversion for multi-stage YAML pipelines from Azure DevOps to GitLab CI/CD. This automated conversion works best for simple, single-file pipelines and is designed to provide a working starting point rather than a production-ready `.gitlab-ci.yml` file. The tool generates a functionally equivalent GitLab pipeline that you can then refine and optimize for your specific needs.\n\n\n- Converts Azure Pipelines YAML to `.gitlab-ci.yml` format automatically.\n\n- Best suited for straightforward, single-file pipeline configurations.\n\n- Provides a boilerplate to accelerate migration, not a final production artifact.\n\n- Requires review and adjustment for complex scenarios, custom tasks, or enterprise requirements.\n\n- Does not support Azure DevOps classic release pipelines — [convert these to multi-stage YAML](https://learn.microsoft.com/en-us/azure/devops/pipelines/release/from-classic-pipelines?view=azure-devops) first.\n\n\nRepository owners should review the [GitLab CI/CD documentation](https://docs.gitlab.com/ci/) to further optimize and enhance their pipelines after the initial conversion.\n\n\nExample of converted pipelines:\n\n\n```yml \n\n# azure-pipelines.yml\n\ntrigger:\n  - main\n\nvariables:\n  imageName: myapp\n\nstages:\n  - stage: Build\n    jobs:\n      - job: Build\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Build Docker image\n            inputs:\n              command: build\n              repository: $(imageName)\n              Dockerfile: '**/Dockerfile'\n              tags: |\n                $(Build.BuildId)\n\n  - stage: Test\n    jobs:\n      - job: Test\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          # Example: run tests inside the container\n          - script: |\n              docker run --rm $(imageName):$(Build.BuildId) npm test\n            displayName: Run tests\n\n  - stage: Push\n    jobs:\n      - job: Push\n        pool:\n          vmImage: 'ubuntu-latest'\n        steps:\n          - checkout: self\n\n          - task: Docker@2\n            displayName: Login to ACR\n            inputs:\n              command: login\n              containerRegistry: '\u003Cyour-acr-service-connection>'\n\n          - task: Docker@2\n            displayName: Push image to ACR\n            inputs:\n              command: push\n              repository: $(imageName)\n              tags: |\n                $(Build.BuildId)\n\n```\n\n```yaml\n\n# .gitlab-ci.yml\n\nvariables:\n  imageName: myapp\n\nstages:\n  - build\n  - test\n  - push\n\nbuild:\n  stage: build\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker build -t $imageName:$CI_PIPELINE_ID -f $(find . -name Dockerfile) .\n  only:\n    - main\n\ntest:\n  stage: test\n  image: docker:latest\n  services:\n    - docker:dind\n  script:\n    - docker run --rm $imageName:$CI_PIPELINE_ID npm test\n  only:\n    - main\n\npush:\n  stage: push\n  image: docker:latest\n  services:\n    - docker:dind\n  before_script:\n    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY\n  script:\n    - docker tag $imageName:$CI_PIPELINE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$imageName:$CI_PIPELINE_ID\n  only:\n    - main\n\n```\n\n**Final checklist:**\n\n\n- Decide timeline and batch strategy.\n\n- Produce a full inventory of repositories, PRs, and contributors.\n\n- Choose Congregate or the built-in import based on scope (PRs and metadata vs. Git data only).\n\n- Decide who will run migrations and ensure tokens/permissions are configured.\n\n- Identify assets that must be migrated separately (pipelines, work items, artifacts, and hooks) and plan those efforts.\n\n- Run pilot migrations, validate results, then scale according to your plan.\n\n\n## Running your migrations\n\n\nAfter planning, execute migrations in stages, starting with trial runs. Trial migrations help surface org-specific issues early and let you measure duration, validate outcomes, and fine-tune your approach before production.\n\n\nWhat trial migrations validate:\n\n\n- Whether a given repository and related assets migrate successfully (history, branches, tags; plus MRs/comments if using Congregate)\n\n- Whether the destination is usable immediately (permissions, runners, CI/CD variables, integrations)\n\n- How long each batch takes, to set schedules and stakeholder expectations\n\n\nDowntime guidance:\n\n\n- GitLab's built-in Git import and Congregate do not inherently require downtime.\n\n- For production waves, freeze changes in ADO (branch protections or read-only) to avoid missed commits, PR updates, or work items created mid-migration.\n\n- Trial runs do not require freezes and can be run anytime.\n\n\nBatching guidance:\n\n\n- Run trial batches back-to-back to shorten elapsed time; let teams validate results asynchronously.\n\n- Use your planned group/subgroup structure to define batches and respect API rate limits.\n\n\nRecommended steps:\n\n\n1. Create a test destination in GitLab for trials:\n\n\n  - GitLab.com: create a dedicated group/namespace (for example, my-org-sandbox)\n\n  - Self-managed: create a top-level group or a separate test instance if needed\n\n\n2. Prepare authentication:\n\n\n  - Azure DevOps PAT with required scopes.\n\n  - GitLab Personal Access Token with api and read_repository (plus admin access for file-based imports used by Congregate).\n\n\n3. Run trial migrations:\n\n\n  - Repos only: use GitLab's built-in import (Repo by URL)\n\n  - Repos + PRs/MRs and additional assets: use Congregate\n\n\n4. Post-trial follow-up:\n\n\n  - Verify repo history, branches, tags; merge requests (if migrated), issues/epics (if migrated), labels, and relationships.\n\n  - Check permissions/roles, protected branches, required approvals, runners/tags, variables/secrets, integrations/webhooks.\n\n  - Validate pipelines (`.gitlab-ci.yml`) or converted pipelines where applicable.\n\n\n5. Ask users to validate functionality and data fidelity.\n\n6. Resolve issues uncovered during trials and update your runbooks.\n\n7. Network and security:\n\n\n  - If your destination uses IP allow lists, add the IPs of your migration host and any required runners/integrations so imports can succeed.\n\n\n8. Run production migrations in waves:\n\n\n  - Enforce change freezes in ADO during each wave.\n\n  - Monitor progress and logs; retry or adjust batch sizes if you hit rate limits.\n\n\n9. Optional: remove the sandbox group or archive it after you finish.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/ibIXGfrVbi4?si=ZxOVnXjCF-h4Ne0N\" frameborder=\"0\" allowfullscreen=\"true\">\u003C/iframe>\n\u003C/figure>\n\n\n## Terminology reference for GitLab and Azure DevOps\n\n| GitLab                                                           | Azure DevOps                                 | Similarities & Key Differences                                                                                                                                          |\n| ---------------------------------------------------------------- | -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Group                                                            | Organization                                 | Top-level namespace, membership, policies. ADO org contains Projects; GitLab Group contains Subgroups and Projects.                                                   |\n| Group or Subgroup                                                | Project                                      | Logical container, permissions boundary. ADO Project holds many repos; GitLab Groups/Subgroups organize many Projects.                                                |\n| Project (includes a Git repo)                                    | Repository (inside a Project)                | Git history, branches, tags. In GitLab, a \"Project\" is the repo plus issues, CI/CD, wiki, etc. One repo per Project.                                                  |\n| Merge Request (MR)                                               | Pull Request (PR)                            | Code review, discussions, approvals. MR rules include approvals, required pipelines, code owners.                                                                     |\n| Protected Branches, MR Approval Rules, Status Checks             | Branch Policies                              | Enforce reviews and checks. GitLab combines protections + approval rules + required status checks.                                                                    |\n| GitLab CI/CD                                                     | Azure Pipelines                              | YAML pipelines, stages/jobs, logs. ADO also has classic UI pipelines; GitLab centers on .gitlab-ci.yml.                                                               |\n| .gitlab-ci.yml                                                   | azure-pipelines.yml                          | Defines stages/jobs/triggers. Syntax/features differ; map jobs, variables, artifacts, and triggers.                                                                   |\n| Runners (shared/specific)                                        | Agents / Agent Pools                         | Execute jobs on machines/containers. Target via demands (ADO) vs tags (GitLab). Registration/scoping differs.                                                         |\n| CI/CD Variables (project/group/instance), Protected/Masked       | Pipeline Variables, Variable Groups, Library | Pass config/secrets to jobs. GitLab supports group inheritance and masking/protection flags.                                                                          |\n| Integrations, CI/CD Variables, Deploy Keys                       | Service Connections                          | External auth to services/clouds. Map to integrations or variables; cloud-specific helpers available.                                                                 |\n| Environments & Deployments (protected envs)                      | Environments (with approvals)                | Track deploy targets/history. Approvals via protected envs and manual jobs in GitLab.                                                                                 |\n| Releases (tag + notes)                                           | Releases (classic or pipelines)              | Versioned notes/artifacts. GitLab Release ties to tags; deployments tracked separately.                                                                               |\n| Job Artifacts                                                    | Pipeline Artifacts                           | Persist job outputs. Retention/expiry configured per job or project.                                                                                                  |\n| Package Registry (NuGet/npm/Maven/PyPI/Composer, etc.)           | Azure Artifacts (NuGet/npm/Maven, etc.)      | Package hosting. Auth/namespace differ; migrate per package type.                                                                                                     |\n| GitLab Container Registry                                        | Azure Container Registry (ACR) or others     | OCI images. GitLab provides per-project/group registries.                                                                                                             |\n| Issue Boards                                                     | Boards                                       | Visualize work by columns. GitLab boards are label-driven; multiple boards per project/group.                                                                         |\n| Issues (types/labels), Epics                                     | Work Items (User Story/Bug/Task)             | Track units of work. Map ADO types/fields to labels/custom fields; epics at group level.                                                                              |\n| Epics, Parent/Child Issues                                       | Epics/Features                               | Hierarchy of work. Schema differs; use epics + issue relationships.                                                                                                   |\n| Milestones and Iterations                                        | Iteration Paths                              | Time-boxing. GitLab Iterations (group feature) or Milestones per project/group.                                                                                       |\n| Labels (scoped labels)                                           | Area Paths                                   | Categorization/ownership. Replace hierarchical areas with scoped labels.                                                                                              |\n| Project/Group Wiki                                               | Project Wiki                                 | Markdown wiki. Backed by repos in both; layout/auth differ slightly.                                                                                                  |\n| Test reports via CI, Requirements/Test Management, integrations  | Test Plans/Cases/Runs                        | QA evidence/traceability. No 1:1 with ADO Test Plans; often use CI reports + issues/requirements.                                                                     |\n| Roles (Owner/Maintainer/Developer/Reporter/Guest) + custom roles | Access levels + granular permissions         | Control read/write/admin. Models differ; leverage group inheritance and protected resources.                                                                          |\n| Webhooks                                                         | Service Hooks                                | Event-driven integrations. Event names/payloads differ; reconfigure endpoints.                                                                                        |\n| Advanced Search                                                  | Code Search                                  | Full-text repo search. Self-managed GitLab may need Elasticsearch/OpenSearch for advanced features.                                                                   |\n","2025-12-03","2026-01-16","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749658924/Blog/Hero%20Images/securitylifecycle-light.png",[735,736],"Evgeny Rudinsky","Michael Leopard","Guide: Migrate from Azure DevOps to GitLab","Learn how to carry out the full migration from Azure DevOps to GitLab using GitLab Professional Services migration tools — from planning and execution to post-migration follow-up tasks.",{"featured":24,"template":13,"slug":740},"migration-from-azure-devops-to-gitlab",{"promotions":742},[743,757,768],{"id":744,"categories":745,"header":747,"text":748,"button":749,"image":754},"ai-modernization",[746],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":750,"config":751},"Get your AI maturity score",{"href":752,"dataGaName":753,"dataGaLocation":237},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":755},{"src":756},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":758,"categories":759,"header":760,"text":748,"button":761,"image":765},"devops-modernization",[721,553],"Are you just managing tools or shipping innovation?",{"text":762,"config":763},"Get your DevOps maturity score",{"href":764,"dataGaName":753,"dataGaLocation":237},"/assessments/devops-modernization-assessment/",{"config":766},{"src":767},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":769,"categories":770,"header":772,"text":748,"button":773,"image":777},"security-modernization",[771],"security","Are you trading speed for security?",{"text":774,"config":775},"Get your security maturity score",{"href":776,"dataGaName":753,"dataGaLocation":237},"/assessments/security-modernization-assessment/",{"config":778},{"src":779},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"header":781,"blurb":782,"button":783,"secondaryButton":788},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":784,"config":785},"Get your free trial",{"href":786,"dataGaName":44,"dataGaLocation":787},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":489,"config":789},{"href":48,"dataGaName":49,"dataGaLocation":787},1772652085308]