Cake
    • Paul Duplantis

      "We herald the PC revolution, but we should remember that it made us forget to share.  Timesharing enabled groups to share a common pool resource, sharing that, which impacted social dynamics.  With PCs, we were left on our own, however empowered."

      The more I dig, the more I realize the significance of the wrong turn technology took in the late 60's and early 70's. It is time to change course. #TheEmergentWeb

      https://www.zdnet.com/article/the-shocking-truth-about-silicon-valley-genius-doug-engelbart/

    • I found that article a most peculiar take on computer history. The timesharing mainframes that I remember from the late 60s could support several hundred simultaneous users at best, and they used character-based terminals over dedicated, slow communications lines. I'm sure Xerox PARC had much cooler stuff in-house, but big iron in those days was largely batch oriented and decidedly un-sexy. It couldn't scale the way PCs did, and by the mid 80s, local area networks of PCs enabled resource sharing within organizations. Ten years or so later, Internet technology permitted sharing over long distances. The WIMP interface (windows, icons, menus and pointing devices) that Engelbart and his colleagues created and Apple brought to the mainstream has yet to be replaced. I'm not sure what we could/should have done differently.

    • Most everything has small beginnings. I believe the point is that if resources had been poured into technologies to connect and collaborate at the beginning, a different future could have been rendered. I was not there, so I only use the context of what I have been reading. But I have read countless accounts of Engelbart's ideas being passed over for the standalone PC. Also, that's a ten-year gap between the late 60's and the late 70's for the birth of the PC. I would hazard a guess that, with resources allocated, a larger network could have been built.

    • I was working in IT from the mid-70's through to 2015. The rise of the personal computer was inevitable, along with the networks and server farms to which they connect. Our terminals then were developed with more and more local processing, with integrated peripherals, multiple connections to the outside world, and increasingly complex on-board applications and processing.

      As an example, a requirement I had to fulfil was to enable a dumb terminal, connected via telephone line to our corporate mainframe, to run a local word processor. This was because of woeful communications speed and frequent communications failures. The specifications, simple at first, quickly grew to include interfacing a printer (golf ball; remember those?) and off-line data storage (the boss needed that report now, not when the techs in town finally sorted out their problems).

      We wound up with what was effectively a personal computer, though with none of the niceties like graphical user interfaces or mice. I did incorporate a graphics tablet with a wired pen for quick input later. The processor was a Motorola, the firmware written by hand (I think they would call it bespoke now). The connection to the mainframe also improved over time, eventually rising to the blistering speed of 250 Kb/s via a dedicated line.

      The penultimate is the current smart device: a computer, terminal, telephone, camera, and applications in your pocket or on your wrist using voice control to connect and communicate with the rest of the world, video included; Dick Tracy indeed.

      The ultimate will be neuro connectivity, bypassing the need to carry or wear something externally. Such a system is the logical extension and terminus of the distributed network, possibly sharing "spare" brain capacity the way some personal computer networks share resources now.

      I don't believe that there is a place for large monolithic computers with users sharing space and resources, outside of an entity such as a government department or a scientific establishment's "supercomputers"; such a workspace needs to have both its connectivity and resources rigidly controlled for safety and security, but even there a distributed network of computers and servers is better in terms of speed and data backup.

      Monolithic computing was never about shared access to data; it was about shared access to expensive computing cycles, very expensive memory space, massive and slow data backup, and finicky, expensive peripherals like printers and plotters.

      What we have now would have come about eventually anyway; the evolution of a system of massive mainframes would have led to this outcome as well. We now have machines on our laps much more powerful than the corporate silos of 10 years ago, with communications speeds undreamt of previously, and interplanetary connectivity. In terms of communications infrastructure we have so much more that we can do and need to do to enable neuro connectivity.

      Humans like autonomy, feeling that they control their own information (good luck with that) and can switch off when they need to. Everybody relying on a large, centralised system, with its oversight and accountability, was never going to continue on a personal level, and in today's world it would certainly be a cause for disaster.

    • I agree that an earlier emphasis on networking and collaboration might have changed the course of things, but there were many technical constraints that took a long time to overcome, and it was not for a lack of trying. As @lgorrie mentioned, CPU and memory were insanely expensive and pathetically limited by today's standards. Thanks to Moore's Law, they became much cheaper over time. Hardware was not the only limitation--TCP/IP was only created in 1974, and HTTP and HTML, the protocol and document format the web is built on, did not appear till the early 90s. Without that shared stack (there's a rough sketch of how it fits together after this post), communication between systems was largely limited to proprietary networks.

      While it's perhaps a minor point in the grand scheme of things, we didn't go directly from mainframes in the 60s to PCs in the 80s. Companies like DEC and Data General produced minicomputers for about 30 years starting in the mid 60s. These were much smaller and more affordable than mainframes, though they were still basically monolithic systems. They permitted many more organizations to use computers and, equally important, gave rise to new operating systems like UNIX, which gave a huge boost to collaboration.
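      Coming back to TCP/IP and HTTP/HTML for a moment, here is the rough Python sketch I mentioned above. It is purely illustrative -- "example.com" is just a stand-in host, nothing historical -- but it shows the layering: TCP/IP provides the reliable byte stream between machines, HTTP is the text protocol layered on top, and HTML is the document format it usually carries.

        # Rough sketch: an HTTP request carried over a plain TCP/IP socket.
        # "example.com" is only an illustrative host.
        import socket

        HOST = "example.com"

        # TCP/IP gives us a reliable byte stream to the remote machine...
        with socket.create_connection((HOST, 80), timeout=10) as conn:
            # ...and HTTP is the simple text protocol layered on top of it,
            # whose payload is typically an HTML document.
            request = (
                "GET / HTTP/1.1\r\n"
                f"Host: {HOST}\r\n"
                "Connection: close\r\n"
                "\r\n"
            )
            conn.sendall(request.encode("ascii"))

            response = b""
            while chunk := conn.recv(4096):
                response += chunk

        print(response.decode("latin-1")[:300])  # status line, headers, start of the HTML

      Without a common stack like that, every vendor's systems spoke their own dialect, which is why cross-system sharing stayed on proprietary networks for so long.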

    • I have to say I am just amazed at the quality of feedback on this thread and on Cake.co in general. It is less about agreement and disagreement and more about depth of engagement. All of the points made on this thread speak well to my original post and provide much food for thought. In my wish upon a star, I would love to see viewpoints from Douglas Engelbart's daughter Christina Engelbart, and from Tim Berners-Lee, on this subject. There seems to be a narrative running on how we fix what is broken on the web, but I strongly feel that if we don't look back, we might make the same mistakes again. Why were the ideas of Douglas Engelbart, Alan Kay, and Ted Nelson not fully adopted, where we would have seen a more granular and collaborative web of expressions? Is it because the technology was not possible at the time? Was it politics? Was it ego? Is there something we could learn from this reality which may help play into building a better one? I have a feeling this will be discussed during the upcoming symposium on the 50th anniversary of Douglas Engelbart's Mother of All Demos, which I plan on attending: https://thedemoat50.org/. We need a better web to pull what is best from people, not what is considered best for people.

    • Why were the ideas of Douglas Engelbart, Alan Kay, and Ted Nelson not fully adopted

      Because technology alone doesn't solve human problems. Universal or even widespread adoption of anything is one of the hardest problems we as a species have yet to learn how to solve, and that is the sole reason why we still have wars and hunger and many other problems and atrocities. Some of the purely technological achievements in communications and ubiquitous computing and networking are almost a miracle - see TCP/IP (which is 45 years old today, by the way).

      I could talk about other interesting parallels here, I guess, not least the one between true OOP/functional programming (as envisioned by the very same Alan Kay), which doesn't make a distinction between code and data, and a similar confluence of humans and their technology. Having better technology won't help us any unless we get better humans to go along with it. Does this sound like Bene Gesserit already? :)
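      A toy sketch of what that blurring looks like in practice (invented names, nothing like real Smalltalk): when procedures are ordinary values, an "object" is just data that carries the code it responds to, and the messages themselves are data you can store and replay.

        # Toy illustration (invented names, not Smalltalk): code treated as data.

        def make_counter(start=0):
            """An 'object' as plain data: its behaviour is a dict of functions."""
            state = {"count": start}

            def increment(by=1):
                state["count"] += by
                return state["count"]

            def value():
                return state["count"]

            # The methods are just values in a dictionary -- data holding code.
            return {"increment": increment, "value": value}

        def send(obj, message, *args):
            """Message passing: the message itself is ordinary data (a string)."""
            return obj[message](*args)

        counter = make_counter()

        # Messages can be stored, inspected and replayed like any other data.
        script = [("increment", 2), ("increment", 3), ("value",)]
        print([send(counter, msg, *args) for msg, *args in script])  # [2, 5, 5]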

    • Paul Duplantis

      But how do better humans surface without a better connection? Historically the masses haven't fared so well left to their own devices, have they? Better connection. Better humans. Better humans. Better connection. Engelbart's bootstrapping, but more focused on the personal domains than the organizational domains, in my opinion. And I can't agree more that technology alone will not solve all human problems, but technology augmenting our natural expressions and impressions has a better chance for progress than technology artificially influencing outcomes. That is at the core of what I think Vannevar Bush, Engelbart, Nelson and Kay were getting at, and what was not adopted in the formation of the web. In my opinion, that is. But you make great points, and yes indeed we need better humans to foster better technology.

    • Whenever people start talking about better humans or a better world, they are either engaging in a form of chauvinism or expressing a conscious or unconscious belief in an objective ideal. This idealism can be either a non-religious philosophical idealism or some form of religious belief system.

      The problem is that there is no agreement among humans as to what constitutes "better."

      Vladimir Putin has a definition of "better" which does not agree with my view of "better".

      Another example: I am not a monarchist. Yet there are some people who view the constitutional monarchy form of government as being better than a "republican" form of government.

      There are some people whose view of "better" tends towards individualism and a libertarian society, while others think that "better" is only achieved through a form of societalism that involves a very regulated society.

      Then we get into conflicting views as to what constitutes better ethically or morally.

      Is "better" an objective or subjective subject?

      As for myself, it is my view that if one believes that humans evolved from subhumans and are in the process of evolving into superhumans, this constitutes a form of racism, though not one which has skin coloration as its criterion for race. I do not think that it is possible to believe in the concept of a future or present ubermensch without a distorted view of one's relationship to other humans.

    • I am not quite sure where to begin, but while you're trying to denounce any attempt at an objective approach, aren't you trying to impose some kind of absolute viewpoint in turn?

      As far as I'm aware, I haven't even posited the existence of a universal objective scale of what is better. However, I do believe that there exists a (sub)set of values beneficial to humanity that can be agreed upon, and that based on it an optimised set of actions can be performed. Such values could be, as an example, peace, health, enough food, universal literacy, universally accessible higher education and so on; I couldn't presume to enumerate a comprehensive list. And you are echoing my sentiment by stating that there is the problem of exactly that lack of agreement. That is, IMHO, one of the pivotal points for possible "betterment" - the ability to agree on a mutually beneficial basis.

      I would also like to point out that your examples are very complex societal paradigms, whereas there are unsolved problems of a much simpler yet more important kind (see above for examples, some more complex than others). Yes, I understand that [efficient] governance is required to achieve any of those, and yet agreement starts at the lower levels, as otherwise we can't agree on what constitutes a benefit to the agreeing parties.

      Finally, I don't really understand where all the generalisations and passive-aggressive invocations of ubermensch, racism and chauvinism are coming from, because surely an arbitrary hypothesis that humans can choose a vector and optimise along it can't really be a sore subject? Better doesn't mean uber, the same way you can't infer that all positive integers are less than 100 just because 1, 3, 7, 9, 31 and even 74 are less than 100.

    • I am very sorry if my attempt to bring into the discussion a variety of conflicting paradigms sounded to you as if I were being passive aggressive on this topic. I assure you that I have no desire to discuss this topic with any animosity, whether blatant or hidden. I am also not discussing it to be disputative, but rather am merely saying that consensus does not exist, and that a realistic view of this subject is that whatever one person views as ideal, another will disagree with.

      Let's start with the term "chauvinism." There is some question among historians as to whether Chauvin was a real person or not, but the legend goes that Chauvin believed that French culture was the best of all possible cultures, and that he believed the world would be a much better place if all the world purged themselves of things which were not French and adopted French culture as the universal culture of all mankind.

      Now, I hope it is obvious that when I wrote of a form of chauvinism, I was not speaking of French culture. Rather, what I was talking about was the idea that what an individual invents as ideal, or what a group imagines together as ideal, should be the universal standard for all humanity. For example, the Trumpians seem to favor hyper-nationalism whenever it is promoted in any country. Another example is the theory that, in spite of the French Revolution, in spite of the destruction of New Echota, in spite of the after-effects of the Arab Spring, and in spite of what has happened since 2010 with Aung San Suu Kyi, if all the world chose their leaders in a manner that is democratic, the world would be a better place.

      I am highly amused by the highly regulated Star Trek milieu and the decentralized milieu of Star Wars after the fall of the Empire. (I haven't seen the more recent movies.) How can both be ideal when they postulate opposite societal paradigms?

      As to why I would discuss the ubermensch: you mentioned the Bene Gesserit. The history of eugenics, prior to the repugnance which resulted from Hitlerism, was based in a desire to make the world a better place.

      I am not bringing this up to be passive aggressive. Rather what I am saying is that yesterday's ideals are today's horrors and how are we to know what tomorrow will think of our ideals?

      In the early 1970s, I participated as a student in an inner-city high school club that was devoted to eliminating societal prejudice. But today, our efforts to break down barriers are seen by many as being bad. Instead, cultural diversity, preservation of minority cultures, and opposition to those outside a culture adopting parts of a minority's culture are viewed as ideal. Just as MLK objected to violence as an avenue to equality of rights, so also I think that he would find the ardent defenders of cultural diversity to be neo-segregationists.

      "Every generation blames the one before ..."

      "We didn't start the fire ..."

    • I certainly agree that the pursuit of "better" as a means of influence FOR people, as your Putin example shows, warrants a slippery slope argument, but I respectfully disagree when it comes to the individual. I am talking about the pursuit of better FROM people, and how technology could provide the tools for this to surface.

      Because as soon as "better" is removed from our internal means of pursuit, we are all truly lost, are we not? When used in the context of self, is it then argued that we would strive to be a better murderer? No, I believe there are certain natural laws humans struggle with every day that could be used as a measurement for better. Better creativity. Better awareness of dangers and opportunities. Better productivity. Better empathy. How could technology allow the individual to grab the wheel and steer their own interests toward the better? That is a good debate to have. That is my take, at least.

    • Oh, I most definitely believe that the individual should be seeking to be better. My point is that what I consider to be the main aspiration of my life seems to be foolishness to others, AND many others pursue a "better" which I consider to be "vanity of vanities."

    • I did not cite Putin to indicate that I agree with him in any way but rather to say that he has a set of ideals and is probably seeking to make the world better according to his estimation of what constitutes better. He would probably be as negative on my view of "better" as I am negative on his view of "better."

    • Without a doubt, and I imagine we will never fully engineer our way out of our differences, nor should we want to. Which is why I am a fan of Engelbart's theory of augmenting intelligence vs. a reliance on Artificial Intelligence. Technology should be personal, and AI should only be there to assist, not to lead. Let's not let the bots determine what is better for us. Let us decide. Anyhow, I greatly appreciate your insight. It is not taken lightly.
