The hardware industry always moves in a cycle as technology advances. I learned this from a former sysadmin, whom I talk to frequently because he is wise.
First, faster chips are developed. The fast chips are expensive, so they are built into a single, very expensive super-server, and the focus becomes maintaining that one server to perfection, using the other computers as dumb terminals to hide the fact that they're older and slower.
Then the chips become cheaper. Having completely saturated the market for big servers, the hardware companies shift to selling many computers, each as powerful as what they were formerly touting as a super-server, so that every user can benefit from the power.
And then somebody develops a faster chip, repeating the cycle. Monolithic vs. cloud: the market swings back and forth to sell the maximum amount of hardware possible.
Oddly enough, both moves are touted as saving the administrator's precious time and effort. The super-server case is obvious: the administrator need only maintain one computer, and the rest are dumb terminals that need essentially no maintenance.
The "cloud" stage, when chips are cheap and independent computers are sold by the truckload, is promoted as easy because configuration cloning lets every user start from an identical setup, which reduces the problem to the monolithic case. Except when users need custom configurations, but that work can be fobbed off on the new technician.
If you'd like to save money on hardware, don't get too excited about any one fad. Within five years, the pendulum will swing again. Currently we're in the cloud stage.