I've often wondered if I would date a robot. More accurately, I've wondered what it would take for me to date a robot: what characteristics it would have to have. This may be a tricky question to answer if your friends or significant other can read your response, but it could make for interesting reading at a later date. Are there things you'd want the robot to be able to do, or would it be more important that it couldn't do some things? Which human fallibilities would we want in a robot partner, and which wouldn't we? How about a short-term friend or dating partner? Imagine one day we could pick characteristics and parts of the personalities and traits of people we like and have an AI combine them into one robot. Hmmm, I think it's going to be an amazing future. Anyone watch the TV series Westworld?
Personally, aren't we all simply biological robots ourselves? While it's not all in our genes, it's fair to say much of our behaviour is deterministic, with some autonomy. Couldn't a robot one day duplicate our behaviour and free choices? If a robot is conscious or self-aware, is it not alive? I would choose a robot over many of the Plenty of Fish dates I've had in the past, so I guess my answer is yes - an intrigued and highly motivated yes. What does that say about me??
As evidenced by movies like “Ex Machina” and “Her”, this is becoming less of a hypothetical question and more of a reality over the next few decades and beyond. If you really think about companionship and what exactly constitutes a relationship, the line between humans and AI is blurring incredibly fast.
Emotional AI connection, though still in its infancy, is getting exponentially better each year. Just compare how good Amazon’s “Alexa” and Apple’s “Siri” were a year ago with how good they are now. These AI systems are embedded in billions of devices, in our hands and in our homes. They are learning about us and grasping the intricacies of our language faster every year. So I would imagine that within a few years they’ll be able to interact with you, understand your feelings, and connect with you on an emotional level. The amazing movie “Her” shines a light on this topic perfectly.
Fascinating topic and perfect timing! I was going to start a conversation on robots today because...Oh my God! When will we send terrifying robot warriors into Syria? At what point will they start working in the world's oldest profession? Which one will spend hours with your kids teaching them to read and limiting their screen time?
Empathy is going to be a tough one. AI can certainly seem empathetic with basic responses, but will it ever get to a point where you can make an emotional connection because you are truly convinced that it understands and can relate to how you feel? 🤔
I meant to say the movie Her. Wish I could edit my posts :). I am sure that is on the list. There is a point at which robots could be better than humans: when they achieve human intellectual, emotional, and interpersonal skills, but not intrapersonal intelligence, and are still robots. At that stage they can become the best friend whose feelings you don't need to worry about hurting. When they evolve past that, to the stage where they have self-identity and intrapersonal intelligence, they turn into well-developed humans.
One of the core themes in both Ex Machina and Westworld is whether a synthetic person is capable of giving consent. I think this is a pretty crucial question.
If you create a sufficiently advanced artificial lifeform and then give it specific programming that prevents it from having free will, at what point is that slavery? And if you program it to be a sexbot, at what point is that rape?
I wouldn't want to have a relationship of any kind with any entity that couldn't consent to that relationship of its own free will. It wouldn't be ethical. I'm not even sure creating an artificial lifeform would be ethical, whether or not you give it free will.
This morning I listened to Kara Swisher’s interview of Noam Cohen, the former New York Times columnist. He’s out with a new book, “The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball.”
His point seemed to be that many technologists like us have a strong tendency to believe they know what is ethical despite their narrow experience of the world. One scary aspect is how strongly their beliefs can oppose one another. And yet they have outsized influence, partly because they are billionaires, and even those who aren’t wield it through their roles in shaping tools that profoundly influence the world, like Twitter, Facebook, AI, and self-driving cars.
Great points about free will, yaypie, but easy to solve if the technology gets that advanced. Given free will, why would it choose us? That's the question. The series Westworld does a great job of exploring some of the nuances of free will, slavery, the ethical treatment of conscious AI, and so on. If the AI was originally programmed to like someone like us as an individual and then given free will, is it still a free choice? I think this question should include our own inner drives and what influences us. Do we truly have free will? There are many innate drives within us that give us less than full free will. So for the sake of argument, let's say the AI has full free will and selects us as a sex partner. Would that be ethical? This whole free-will argument makes me feel more uncomfortable about sexual 'consent' among humans.
That's HIS point, and although it's interesting, I now want to know what you think, Chris. Both about his comments and, specifically, why it's not ethical. Then try to provide a counter to your own arguments, or a way to satisfy them so it is ethical. It's a good way to really get in touch with your own arguments. I find this endlessly fascinating. Damn, I love Cake for providing me this opportunity :) I hope that you'll find time and be interested enough to reply, but I also realize people don't necessarily want to go in depth about everything.
I have tons of respect for @yaypie so I follow everything he says closely. People who have passionate convictions are the most interesting people in the world, no? They bring the most good or evil into the world, sometimes both from the same person (Martin Luther).
On some topics, like artificial life forms, I don't have my own internal conviction yet. So I'm drawn to people who do, to try and understand what they believe and why.
I'm extremely fascinated now because Cake has the potential to become a force in the world like Twitter or Reddit. So who makes decisions about the product that will have so much influence, and how should we make decisions like those?
For example, once we get them working, Panel conversations will allow you to invite people on the panel like you would at professional conventions. Just the people you invite can post but the whole world can read and react. You could invite Oprah and Michelle Obama to a panel to debate women's issues. Millions of people could be following along, adding reactions--the virtual equivalent of an audience clapping and laughing.
Who gets to decide what kinds of panels we allow on Cake, and how do we make those decisions? Would you allow anti-vaxxer moms to use a panel to express their views without being shouted down like they would be on other Internet services?
I love that idea of panel discussions and would love to see someone like John Brockman set up some debates among great thinkers. A caveat, though: while freedom of speech is an important idea and value, we have to be careful not to fall victim to a relativist worldview in which all arguments are equal and deserve equal consideration. Young-earth creationists, or people who espouse creationism as scientific truth, should not be given equal space at the table with evolutionary scientists, for example. That's not to say there can't be some interesting debates with such people, but we have to set some kind of standard while being careful not to override freedom of speech. People are entitled to their own opinions, but that doesn't mean we have to listen to, or should praise, what they have to say.
Chris, I decided to start a new topic on Edge.org, a website that hosts conversations with leading scientific thinkers. You're probably familiar with it, but it's definitely worth checking out, as is the series of books John Brockman puts out in which he picks a question and asks leading thinkers in various disciplines to add their thoughts.
His point seemed to be that many technologists like us have a strong tendency to believe they know what is ethical despite their narrow experience in the world.
I think the important thing is to admit that we can't know everything, and to err on the side of caution.
When asking whether it's ethical to have a relationship with a synthetic person, the answer depends on the answers to many other questions.
Is the synthetic person simply a robot following a basic program, or is it capable of learning, adapting, and making decisions that its programmers didn't anticipate?
If a synthetic person can learn and adapt, then at what point should it be considered sentient?
If a sentient synthetic person appears to consent to a relationship, how can we be certain that it understands what it's consenting to, or that it even understands the concept of consent?
I'm not sure we can answer all the questions that need to be answered, so I'm not sure we can say with any degree of certainty that it is ethical for a human to have a relationship with a synthetic person. Since we can't be reasonably certain that it is ethical, we must assume that it isn't ethical until we achieve certainty beyond a reasonable doubt, because to assume otherwise might cause harm.
I think this is similar in some ways to how we treat relationships between an adult and a person below the age of consent. It doesn't matter how strongly a child affirms that they want to have a sexual relationship with an adult; it simply is not possible for a child to give consent because it's not possible to establish with certainty that the child actually understands the nature of the relationship or the consequences it could have. So the only ethical course of action is never to allow sexual relationships between adults and children.
Wonderful answer. So if it isn't sentient then it's okay, but if it's possibly sentient then we have to say no until such time as we know with absolute certainty that it's making its own decision. The problem is that we may never know with certainty whether it is indeed self-aware and capable of absolute free will. You see where this would lead us as a society: people will be having relationships and then... shouldn't be. We simply can't draw a line in the sand after it's been done. Westworld does a reasonable job of dealing with this question, and I look forward to the next season of that show. Again, we have to consider the underlying pressures on humans to have 'consensual' relationships, and the arbitrariness of someone turning 18 and all of a sudden being old enough to consent. I'm sure we could think of many situations where consent is consent but there's a gray area that society is currently struggling with. If this were an easy one to solve, wouldn't we have the same laws worldwide? Look at child marriages in the USA, or consider what's evolving with the whole Weinstein fallout. People in positions of power may be having sexual relations with a person who is in fact consenting, but... Wow, it gets messy, and so will the future of human/AI relationships. I wonder if anyone has come out with a whole book on this topic yet?
It seems we're equating sentient AI with self-aware AI, but I'm not sure that's a good idea. A sentient fetus, for example, is not self-aware. What if a self-aware AI had no long-term memory? That could change the ethics. Not that unethical things would suddenly become ethical, but it would get messier still - or would it? What if it were discovered that AI could achieve a higher level of consciousness than humans? Would human consent then have to be reevaluated? What if an AI was self-aware but chose to have consensual relations for purely selfish reasons that had nothing to do with actually wanting to be close to the human? How could you tease out its reasoning? Would it matter what the AI's reasoning or motivations were? Does it matter what a human's reasons or motivations are? What about a self-aware human with lower cognitive reasoning skills? At what point is the person or AI being manipulated in such a way that a line has been crossed? Alcohol changes consent. Could there be some equivalent viral intoxication that meant an AI did not consent to an act when the AI itself thought it did?
In the book Do Androids Dream of Electric Sheep?, the main character identifies androids by their supposed lack of empathy. Now consider human psychopaths, who are considered to have impaired empathy and remorse. Could there come a day when androids become more human than humans themselves, or at least than some humans? People with severe ASD (autism spectrum disorder) also face severe challenges engaging in human interaction and conversation. I could imagine androids being better than some humans at carrying on conversations, reading body language, reacting to other people, and even empathizing. At what age do humans become self-aware? It isn't until around four years old or thereabouts that children start to exceed some of the abilities of our primate cousins. We would have to draw some arbitrary line in the sand to decide where consent is acceptable. There would appear to be no easy solution to this problem, and if an android acts as if it's consenting, then maybe that's enough? What if the android has free will but lacks humans' deeper empathy? Maybe the android is sentient but not hurt in any way by consenting to and partaking in intimate or sexual acts.
Befriending a robot is obviously much different than dating one. Having worked in the AI field, I'd find it easy to befriend one, but I wouldn't date one, since so much of their ability is based on programming.
Yeah, I agree with your comments, but what would you think if it wasn't because of programming, and the AI had simply started as a learning program with some parameters that would guide it to be more like humans? Could you imagine an AI actually spending years learning to become very human-like? Seems pretty far-fetched at this point, but who knows.