Impressive. In the first example, I'm pretty sure I wouldn't have known which side was the machine. In the second, it handled accented speech well and managed to solve the problem despite the unexpected turns. It remains to be seen how well the system performs in the real world--obviously, they're not going to show the ridiculous failures in a demo. I'd be curious to know how the system reacts when it realizes that it's not solving the problem.
WOW! As a psychotherapist who works with people with Autism and/or intellectual disabilities, I am often trying to teach the intricacies of social communication. In my experience, this is NOT easy to teach, but Google has somehow taught their AI to do it! There is an immediate need for the application of this technology for those who cannot speak for themselves. It also seems that within it lies the ability to help anyone who has difficulty communicating for whatever reason.
This is really impressive, but I have a lot of mixed feelings about it.
On one hand, this could be a big time saver for a lot of people. It could also be great for people who are deaf, hard of hearing, have crippling social anxiety, or who find it difficult or impossible to call businesses on the phone for other reasons. And, in theory at least, it could be great for businesses by helping them get customers without having to sign their souls away to the kinds of parasitic companies that promise to help small businesses take reservations or make appointments online for exorbitant fees.
On the other hand, there's something fundamentally creepy about a computer intentionally trying to deceive a human, and that's what Google is doing here. Is it unethical? I'm not sure. The intentions are good. But it does set a creepy and potentially worrisome precedent.
Ten years from now when I call tech support because my Internet connection is down, will I be able to tell whether I'm talking to a human? When the voice on the other end of the line tells me to turn my cable modem off and on again, is that a person reading from a script or is that a computer following a program?
When Google Cloud launches a pay-as-you-go API for custom AI phone bots, how many humans will lose their jobs in call centers? How many of those humans needed those call center jobs because the flexible hours allow them to be home when their kids get home from school, or because they have a physical disability and a call center is one of the few entry-level jobs that doesn't require you to be able to stand or move around or lift things?
So many questions I just don't know the answers to.
In general I'm a fan of incredible new technology, but sometimes I wonder if, as Ian Malcolm says, we get so preoccupied with whether or not we could do a thing that we don't stop to think whether or not we should do that thing.
I am not at all tech-savvy like you are, but to me, at least at first glance, like most technology, the problem does not lie with the AI, but with how it might get used. For example, I think that clearly pro-social use (e.g., allowing someone who was silenced by a stroke or paralyzed to once again communicate verbally) would not be a problem. Anyway, we are not going to stop the growth of AI (or other) technologies, so I think we have to embrace it, try to control misuse where we can, and bank on my belief that humans are far more good than evil.
The thing I love about automated calls now is that they are so easy to recognize, I don’t have to find a polite way to hang up.
Much as the speaker wants to present this as a time-saver for people like you and me, I have my doubts. I think this just might be a way to dress up spam from Google’s business clients.
I imagine I won’t be the only one who gets very angry when candidates implement this technology two weeks before primaries/elections and inundate us all with robo-calls that sound like college kids trying to make ends meet, but are actually just Google AI on autodial. 😡
One thing I think I am learning since I joined Cake is that those of you involved with the hi-tech industry are much more cautious and cynical about the technology and its potential for abuse than I am. I wonder if y'all are a lot more optimistic about the mental health system than me? Does familiarity breed contempt? Or does being an insider simply make us more skeptical?
This has been bugging me all day because it feels like a technology that can be used for some serious trouble. I've always thought there was a special place in Hell for TV evangelists and other hucksters who prey on the elderly, like happened to both my mother and grandmother. "Oh, they're such nice men, I just had to give them another donation."
Somewhere a headline passed by the other day that estimated defrauding the elderly is a $37 billion industry in the U.S. Who knows what it really is, but AI like this seems like it could be a very powerful tool in that arsenal. I'm already getting several robo calls a day claiming the FBI is coming for me if I don't pay some taxes today. But that voice is so fake I think even my mentally ill mom would not be fooled.
That's a really good question.
For me at least, I think my caution and cynicism about technology stems from having seen how the sausage is made and knowing that very often there's nobody thinking about the ethical or societal implications of the technology they're creating.
I'm much more worried about accidental consequences than I am about people intentionally using technology for evil purposes, because accidental consequences happen all the time and can be really hard to predict even if you try (and a lot of companies don't try).
I think the question to ask is: if you call in to tech support and you can't tell whether it's AI or not, does it really matter?
As for the displacement of workers, this isn't something new. Efficiency for maximizing profit has always been a priority for businesses. Once the tech exists, it will be used, whether at the time of its creation or later on, once society has had time to adjust.
Personally, I think things like this, and other automation of industries, are just stepping stones to a different type of economy and way of living. I hesitate to use the term, but the direction feels like it's pushing towards a post-scarcity society.
The second video was more troubling: the one where Google is predicting the chance of readmission to the hospital by reviewing medical records. I'm not sure I want machine learning or AI monetizing my medical records.
Back to the original. I'm not sure if it would be better if my automation called your automation and the two negotiated a haircut appointment...
Wait till the SEO companies catch on to this.
I think of technology as relentlessly pushing the boundaries, whether good or bad.
I think of mental health as a constantly moving target, which makes it super slippery.
Two completely different types of formidable challenges.
Google could build it so that it always starts a conversation by stating that it's a bot. It would defuse a lot of criticism if it did so, but there might be more money in concealing the fact. Or government could require it, if popular support is sufficient.
Chris, thanks for the invitation to Cake, and I'm glad you like my photography industry news site, Photo Ten Five. We launched seven weeks ago and already have 1,000 links posted! Although this story isn't exactly photo related, I added it to the site links because it is very fascinating, even if it seems a bit frightening. Photographers, publishers and editors have been grappling with the ethics of manipulating images for decades, but every year technology improves to the point where there is now virtually no demarcation between reality and illusion. What is troubling is the tendency of some to hide behind AI/VR to the point where reality is just so boring or unappealing that they can't cope with the truth about how they actually look or the honest appearance of what they are seeing... and now hearing...
I wondered why I hadn't heard of Photo Ten Five. I can't imagine how you come up with all those links, but a few of them draw me in every time I go there.
The one I thought was fascinating this morning was How This Artist Makes Money Off YouTube Without Brand Sponsorships. It's about a food photographer who got bored and opened a YouTube channel like I considered doing a couple years ago.
There have been studies showing that folks are generally very unsympathetic, bordering on rude and aggressive, when interacting with a machine in a physical or verbal way. The things I call Siri when it messes up all the time are not something I would want public! 🤪
If the conversation got into a contentious issue, it could proclaim that it's AI?
I was a bit annoyed that the AI didn't just state matter-of-factly what it wanted for the appointment; instead it got there in a more conversational way, perhaps to hide that it was AI?
It could be a great tool for the elderly and for companionship, in situations where having someone (or something) to talk with is soothing or almost therapeutic.
Yeah, I think I'd feel better if it identified itself as Richard's computer calling to make a haircutting appointment.
I have to admit the ease of use of ALEXA is enticing.
Grocery lists, shuffling John Prine, setting a timer for a meatloaf to go off at 5pm. When they can do the dishes and clean the bathroom, it's gonna get a little crazy. Yet it will seem perfectly normal, and we will say, "I can't believe we lived without these things."
While they plot our demise! 🤣
the NSA thinks this is fantastic btw, put a listening device in millions of homes and the owners pay for it!
They are gonna hear a lot of silly talk directed at my dog.😇
Things are moving faster than our ability and energy to learn how to protect ourselves from them.
I would be truly impressed if it called Fry's Electronics in Palo Alto and got any information whatsoever out of them.
I actually think it might help here: unlimited calm, plus knowledge of different accents and dialects. Perhaps the question would be why you would ever try to call Fry's; it's bad enough in person. 😂
TechCrunch has an article out this morning, Duplex Shows Google failing at ethical and creative AI design. The bottom line is they feel that tricking people into thinking they're actually speaking with a human is deceptive by design.
I think most of us have already experienced calling a customer support line and wondering whether we're dealing with artificial intelligence or natural stupidity. If this technology becomes smarter than the usual first-line support, great. Perhaps I'd be more tolerant of AI answering than AI calling me. I hang up immediately on robo-calls, but a human making an unsolicited sales or political pitch doesn't fare much better. Neither would an AI. The current state of voice response systems is pretty bad, I think--multi-level menus, long wait times, crappy music. After I've been through ten minutes of that and my call is dropped, I get furious. If Google can solve that, hats off to them. But if Duplex is truly intelligent, it will know better than to call me.