Patience, Grasshopper. Yes, The Cloud is unicorns-and-rainbows wonderful, and we all need to know how to make good use of it. At the same time, it remains part of the real world of commercial computing, and has limits. We need to understand those limits, and what to do when we collide with them.
More generally, the wonderfulness of DevOps doesn't mean all problems are solved and it's time for us all to retire. Rather than dissolving every problem, DevOps, like any good craft, helps us focus on the problems that merit our attention. GitLab tells us, in the explanation mentioned above, that the IO (input/output) responsiveness it requires from storage is best implemented on a bare-metal base.
Human Resources departments have to treat DevOps like a commodity, something that can be squeezed from a bottle like mass-market mayonnaise. Actual data-center accomplishment is more delicate, and, as Pablo Carranza summarizes in the GitLab article, is likely to involve application-specific monitoring and follow-up. Here's a different low-level and even more mundane example of engineering trade-offs DevOps practitioners need to be able to analyze for themselves:
Move files from here to there
Suppose you have a file here and need to move it there, that is, from one host to another. Why would that happen? Lots of reasons: licensing might restrict certain computations to specific processors; compliance considerations might require you to negotiate security or geographic boundaries; you might have special-purpose processors that are fast and/or cheap for specialized loads; and so on.
How do you automate the move? This is the interesting part: not only are multiple alternatives possible, but different ones seem most natural or obvious to different practitioners. Some engineers think first of FTP, despite its severe security difficulties and performance idiosyncrasies.
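For concreteness, here's a minimal sketch of the FTP approach using Python's standard ftplib; the host, credentials, and filenames in the commented-out usage are placeholders, not a real endpoint:

```python
from ftplib import FTP_TLS  # FTP_TLS at least encrypts the control channel

def ftp_upload(ftp, local_path, remote_name):
    """Upload one file over an already-authenticated FTP connection."""
    with open(local_path, "rb") as f:
        ftp.storbinary(f"STOR {remote_name}", f)

# Typical use (hypothetical host and credentials):
# ftp = FTP_TLS("ftp.example.com")
# ftp.login("user", "secret")
# ftp.prot_p()          # encrypt the data channel too, not just the login
# ftp_upload(ftp, "report.csv", "report.csv")
```

Even in this best case, FTP_TLS rather than plain FTP is doing a lot of work: the protocol's default is to send credentials and content in the clear, which is one of the security difficulties just mentioned.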
"Web service" sounds more like a 21st-century solution. This basically replaces FTP with HTTP (or HTTPS, if you prefer not to invite everyone on your network to read all your content). Web services fit naturally in standard DevOps architecture. They also involve maintenance of the servers which provide them. Having to install and maintain Apache might be utterly simple and uneventful, or, depending on your organization's technologic context, it might present a larger problem than the original one of moving a file from one filesystem to another.
Sharing a filesystem between hosts is utterly conventional. All system administrators know how to do this, and the means to mount an external filesystem are built into every operating system (OS) likely to turn up in your datacenter. The GitLab work described above actually started as an NFS (network file system) project, from what I understand. While tuning NFS performance has the reputation of being an advanced topic, it's extensively documented. Even with these advantages, I remain wary of filesystem mounts in a DevOps context, if only because they generally result in a distributed configuration: Puppet or a competitor captures the mount configuration, the details of the endpoints of the file copy are probably in source code in a programming language, and security configuration and/or credentials are in a third location. Refactoring what looks like a simple file copy might involve coordination between three different configuration stores.
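Once the mount is in place, the copy itself degenerates to a local operation, which is the approach's charm. A sketch, with the mount point as a stand-in for any NFS/SMB mount your configuration management has established:

```python
import shutil
from pathlib import Path

def copy_to_mount(local_path, mount_root, remote_rel):
    """Copy a file onto an already-mounted remote filesystem.

    Note what is NOT here: the mount itself (e.g. /mnt/archive ->
    nfs-server:/export/archive) lives in fstab or a Puppet manifest,
    and the credentials live somewhere else again.  That split is the
    distributed-configuration concern described above.
    """
    dest = Path(mount_root) / remote_rel
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_path, dest)  # copy2 also preserves timestamps and mode
    return dest
```

To the code, the "remote" transfer is indistinguishable from a local copy; only the surrounding configuration knows the destination is another host.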
Yet another alternative is scp, or PowerShell copying to or from a foreign filesystem. While this demands a bit more coding, and can seem arcane to someone accustomed to treating all filesystems as local resources, it has the virtue, among others, that everything involved in its specification appears in procedural source code maintained in one place.
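That "one place" virtue is easy to see in code. A sketch that builds a non-interactive scp invocation; the host, user, and key path in the commented-out usage are hypothetical:

```python
import subprocess

def scp_command(local_path, user, host, remote_path, identity=None):
    """Build the argv for a non-interactive scp transfer.

    Everything -- endpoints, the credential (via the key file), the
    destination path -- sits in this one piece of procedural source
    code, which is the maintenance advantage noted above.
    """
    cmd = ["scp", "-B"]  # -B: batch mode, fail rather than prompt
    if identity:
        cmd += ["-i", identity]
    cmd += [local_path, f"{user}@{host}:{remote_path}"]
    return cmd

# Running it (hypothetical host and key):
# subprocess.run(
#     scp_command("report.csv", "deploy", "db01.example.com",
#                 "/data/incoming/report.csv",
#                 identity="/home/deploy/.ssh/deploy_key"),
#     check=True)
```

Separating command construction from execution also makes the transfer easy to log, dry-run, and unit-test, all without touching the network.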
A complete analysis of even these four alternatives would require book-length treatment to account for all the variations in engineering context that might be pertinent, and these four don't even exhaust all the possibilities. Several of my clients rely on all four approaches in different parts of their operations, and I certainly implement and maintain all four routinely.
Today's bottom line: DevOps practitioners need to be ready to think things through, and not just assume that cloud magic answers all questions. Real-world solutions generally require a bit of fitting to their real-world circumstances.