An anonymous reader quotes a report from The Register: Source-code hub GitLab.com is in meltdown after suffering data loss and suddenly discovering that its backups were ineffectual. On Tuesday evening, Pacific Time, the startup issued a sobering series of tweets, starting with “We are performing emergency database maintenance, GitLab.com will be taken offline” and ending with “We accidentally deleted production data and might have to restore from backup. Google Doc with live notes [link].”

Behind the scenes, a tired sysadmin, working late at night in the Netherlands, had accidentally deleted a directory on the wrong server during a frustrating database replication process: he wiped a folder containing 300GB of live production data that was due to be replicated. Just 4.5GB remained by the time he canceled the rm -rf command. The last potentially viable backup had been taken six hours beforehand.

The Google Doc mentioned in the final tweet notes: “This incident affected the database (including issues and merge requests) but not the git repos (repositories and wikis).” So there is some solace for users; not everything is lost. But the document concludes with the following: “So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.” At the time of writing, GitLab says it has no estimated restore time but is working to restore from a staging server that may be “without webhooks” but is “the only available snapshot.” That snapshot is six hours old, so there will be some data loss.
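The root cause, a destructive rm -rf run on the wrong host mid-replication, is the kind of slip a simple guard can catch. As a purely illustrative sketch (this is not GitLab's tooling; the hostname and path below are hypothetical), a script could refuse to delete a data directory unless it is actually running on the intended secondary:

# Illustrative sketch only -- not GitLab's actual procedure.
# "db2.staging.example.com" and the data path are made-up placeholders.
import shutil
import socket
import sys

EXPECTED_HOST = "db2.staging.example.com"   # hypothetical secondary host
DATA_DIR = "/var/opt/postgres/data"         # hypothetical replica data directory

def wipe_replica_data(expected_host: str, data_dir: str) -> None:
    """Delete the replica's data directory, but only on the intended host."""
    actual_host = socket.gethostname()
    if actual_host != expected_host:
        # Refuse to run the destructive step anywhere else.
        sys.exit(f"Refusing to delete {data_dir}: running on "
                 f"{actual_host!r}, expected {expected_host!r}")
    shutil.rmtree(data_dir)

if __name__ == "__main__":
    wipe_replica_data(EXPECTED_HOST, DATA_DIR)

A check like this is no substitute for working backups, but it turns a “wrong server” mistake into a loud refusal rather than silent data loss.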
Read more of this story at Slashdot.