Ah yes, I seem to remember that.
The world has indeed changed thanks to Amazon, some parts more than others. But the company has impacted more aspects of the job economy and commercial life than any other, thus far. They changed the way corporate computing is provided and delivered. As it was written somewhere, without AWS there would be no iPhone age.
I think you're giving too much credit to AWS. Cloud computing was conceptualized as far back as Sun Microsystems in the late 80s. AWS was built on hypervisor software and hardware developed by others (Amazon originally used it to support its own operations, and then opened it up as a service).
Don't get me wrong, AWS is huge in this space, but cloud computing wasn't created by Amazon; Amazon has just been the market leader.
Why do you think there would be no iPhone age without AWS?
Oh, and you could argue Salesforce had a much larger impact as they were the first SaaS company.
Mainframes are where it's all at!
It's interesting you say that. I remember when you had to reserve run time on the mainframe, which scheduled jobs and supported isolated operating systems. Then it all exploded outwards to desktops for everyone, and now with cloud computing it has all collapsed back into reserved instances and container operating systems...
Everything old is new again.
I know, right? If quantum computing eventually succeeds in building supercomputers with gigantic processing power, it would come full circle back to the mainframe concept. Paired with the networks of the future, all the consumer world will really need is some nice, shiny iPhones as mere peripherals.
Intuitively, I would say Apple. But that’s probably just because I haven’t thought about all the different ways Amazon has affected our lives. It’s certainly a worthy debate to have!
I find this picture of mainframes quite misleading, and rather limited. Mainframes are alive and well, and they are not really just about massive centralisation. It is much more about reliability. z/OS is, arguably, the most reliable operating system in use. You can hotswap processor cards and do other things quite unimaginable in desktop or rack server world, and literally nothing will happen with the jobs being executed - not a "negligible downtime" (which usually means "we don't know how long, maybe a couple seconds, right?", and don't even get me started on "eventual consistency" applied left and right where it doesn't belong), no downtime. The closest equivalent could be some mission-critical multiple-redundancies close-to-bare-metal embedded systems, and even those are few and far between.
Regarding the OP's question, I would say there's no argument. They both changed the world in significant ways (subject to interpretation galore), but there's neither a way nor a reason to compare them - and especially not quantitatively :)
I'm glad the mainframes are alive and well, and wonder why Amazon hasn't considered them. As for reliability? Any quality electronics can be built to 'military standards'; usually it's the pencil pushers cutting corners to save a cent with cheap components or shoddy workmanship. As an example, I have a little HP Cube MicroServer that's been running ESXi for almost a decade now; it has no screen or keyboard attached, as it never needs a reboot. I restart it perhaps once a year, when I remember. The notion of unreliable computing was introduced, and made generally acceptable, by Microsoft with the now ubiquitous Windows OS, which they're trying (and succeeding) to turn into a subscription-based milk cow. It's one of the easiest OSes to develop software for, which accounts for its market share and for keeping so many souls employed, no?
This would be equivalent to saying that any human being can be healthy, rich and successful :) yes, but....
HP MicroServers are wonderful little machines, but let me assure you that if you try to hot-swap a CPU on one (or a power supply), it will fail spectacularly. Because it wasn't designed for that.
You can build to exacting standards, but to even begin, you have to be smart enough to draft the standards and bold enough to imagine what is possible and paint it as desirable. The latter is especially challenging; an obvious example would be Mr. Musk's achievements. Neither Tesla nor SpaceX invented anything especially new (not to denigrate their engineering prowess, of course). But they took concepts that were looked upon with laughter and/or derision and/or a "maybe sometime later someone else will do it" attitude, and did it.
Same applies to computing, both hardware and software.
Let me add an overgeneralising flourish here by quoting from Dune:
What you of the CHOAM directorate seem unable to understand is that you
seldom find real loyalties in commerce. When did you last hear of a
clerk giving his life for the company? Perhaps your deficiency rests in
the false assumption that you can order men to think and cooperate. This
has been a failure of everything from religions to general staffs
throughout history. General staffs have a long record of destroying
their own nations. As to religions, I recommend a rereading of Thomas
Aquinas. As to you of CHOAM, what nonsense you believe! Men must want
to do things of their own innermost drives. People, not commercial
organizations or chains of command, are what make great civilizations
work. Every civilization depends upon the quality of the individuals it
produces. If you over-organize humans, over-legalize them, suppress
their urge to greatness — they cannot work and their civilization falls.
-- A letter to CHOAM, Attributed to The Preacher
I haven't seen anyone else describe mainframes as "bold enough to imagine what is possible and paint it as desirable".
Your example of hot-swapping CPUs is silly; you could do that on desktop computers decades ago. I worked on a fully fault-tolerant desktop computer in the 80s; the CPUs even voted to determine the result of computations. The fact is that it's generally an undesirable cost-versus-benefit tradeoff. In cloud computing, if a server fails you throw it away and there is zero downtime because of the failure in any normally designed system.
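The voting scheme described above is usually called N-modular redundancy: several identical units compute the same result and a majority vote masks a faulty one. A toy sketch of the idea (the function name and error handling are my own invention, not any real system's API):

```python
from collections import Counter

def redundant_vote(results):
    """Majority vote over the outputs of redundant compute units.

    Returns the value a strict majority agrees on; raises if the
    units disagree so badly that no majority exists.
    """
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority: redundant units disagree")
    return value

# Three redundant units compute the same operation; one is faulty.
print(redundant_vote([4, 4, 5]))  # the faulty unit is outvoted -> 4
```

With three units (triple modular redundancy), any single fault is masked; the tradeoff the post mentions is that you pay for three times the hardware to get that guarantee.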
In cloud computing, if a server fails you throw it away and there is zero downtime because of the failure in any normally designed system.
That's partially true; the only problem is that "normal design" is hard to come by (and the execution of one is even more rare). That passage about cloud computing is nice and abstractly correct; however, you won't find many fully implemented systems completely based on immutable servers in everyday life, and you will find lots and lots of systems implemented in one region, one availability zone, with zero redundancy, single tunnels in VPN interconnects, conflicting CIDRs on default VPCs, and so on ad infinitum.
I have also seen lots of interesting computer architectures in the 80s and probably still have a giant ISA transputer card lying around somewhere - however, none of that is available as a mainstream desktop PC, and an attempt to pull out a CPU in one, while it is running, will end badly :)
the only problem is that "normal design" is hard to come by (and the execution of one is even more rare).
And mainframes are easy to come by? You're just making a subjective statement to support your premise. Immutable servers have been the norm for years with container systems like Docker and orchestration systems like Kubernetes. Google's entire cloud infrastructure was designed on that principle, as was AWS's Lambda and services that existed even in their earliest offerings (such as SQS). They even have a service called "auto-scaling" that adds and removes servers based on real-time demand.
As far as finding "lots and lots of systems implemented in one region" and such, so what? Systems should be designed with their use in mind; why would anyone incur expense for no return?
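The auto-scaling idea above amounts to a simple rule: treat servers as identical, disposable units and add or remove whole instances to match demand. A toy sketch of such a rule (this is an illustration, not any vendor's actual API; the function and its parameters are made up):

```python
import math

def desired_replicas(current_load, capacity_per_server,
                     min_servers=1, max_servers=10):
    """Toy auto-scaling rule: run just enough identical servers to cover load.

    Servers are immutable and interchangeable; scaling means adding or
    removing whole instances, never reconfiguring a particular one.
    """
    needed = math.ceil(current_load / capacity_per_server)
    # Clamp to the configured fleet bounds.
    return max(min_servers, min(max_servers, needed))

# 450 requests/sec against servers that each handle 100 requests/sec.
print(desired_replicas(current_load=450, capacity_per_server=100))  # -> 5
```

Real autoscalers (Kubernetes' Horizontal Pod Autoscaler, AWS EC2 Auto Scaling) layer cooldowns and smoothing on top of essentially this calculation, but the immutable-server premise is the same: a failed instance is replaced, not repaired.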
I have also seen lots of interesting computer architectures in the 80s and probably still have a giant ISA transputer card lying around somewhere - however, none of that is available as a mainstream desktop PC
Right, because it turned out not to be useful as a "mainstream desktop PC". However, multi-CPU systems for parallel processing are now the norm.
'Healthy' is subject to genetic heritage and upbringing, and then societal conditions... while 'successful' and 'rich' are, to me, quite debatable. But I digress.
I neither need nor want the complexity that comes with being able to swap out the CPU or other core parts of my server; I am willing to accept the risk of going without a feature I would rarely if ever use, because I rely on component quality. And let's face it, how often does a well-built, quality computer component fail? We are talking years... once past the initial slope of the reliability curve, all the way until its technical obsolescence many, many years later.
I love the mention of the Dune excerpt, as I do the whole novel, but I don't quite see the connection with the OP's question; I'd be very eager to learn your thinking behind that parable.
After pondering this for a couple of days: I could live without Amazon, but I couldn't live without the smartphone in its current design, or without desktop computers, as well designed as they are.
Amazon is great, but there are other great online commerce sites — thousands of them. I have to hand it to them for Alexa and the Kindle, however. But I use the Kindle app on my iPad. And I prefer Google's version of Alexa.