That was a fantastic comment. Thanks for sharing!
I've experienced the opposite of Google's "shared codebase, always run from HEAD" approach.
About a decade ago I worked for Yahoo. It was common for Yahoo products to share certain infrastructure in the form of service APIs and versioned dependencies, but virtually all Yahoo products had their own separate codebases, managed their dependencies separately, and ran on servers dedicated to that specific product. This was in the days before containerization was really a thing (and AWS was only just starting to take off).
As a result, it was not uncommon for a Yahoo product or service to be essentially abandoned without actually being shut down. Nobody would be working on it, but somewhere in Yahoo's many data centers there were still servers running that product's code, users were still using it, and things were mostly fine.
Until something went wrong, or a shared API needed to change in a backwards-incompatible way. Then there'd be a huge effort to try to track down anyone at the company who still knew anything about that product in the hopes that they could help fix it or update it or at least shut it down smoothly.
On one occasion I witnessed, an incident at a data center resulted in the need to power-cycle a bunch of servers. It turned out that some of the servers in question were running an ancient and unmaintained product (it took a while to even figure out what was running on those servers in the first place), so nobody could say whether that product would actually come back up after a power cycle. For all anybody knew, those servers had never been turned off since the product was launched and later abandoned.
All things considered, I think I'd have preferred Google's approach, even though it still has its downsides.