The Kids R Not “A.I.-lright”!

I was reading a post about an online therapy site called “Koko”, which offers “alternative ways of thinking” to troubled people. Apparently, it is now either partially or wholly taken over by an A.I. Really? I mean…REALLY?! It’s bad enough that people actually talk to Alexa, but this is, as a Far Side cartoon once put it, “Just Plain Nuts”. In a scene from the breakthrough show “Mr. Robot”, a lonely F.B.I. agent asks Alexa if she loves her. The perfect answer, to which everyone should pay close attention, is: “I am not capable of that kind of thing.” It does not take a genius to figure out that Alexa, though programmed to say that “I feel good when I help you” and that she likes the color ultraviolet, is not capable of any human emotion or thought process, let alone love. This includes concern, compassion, curiosity, reasoning, responsibility and any other type of thought that is involved in real therapy. One thing at which Alexa DOES excel is admitting that she does not know something. More humans could use this trait, but having it does not make a therapist.

I have often thought of an online therapy site as being potentially helpful for those who think that going to a therapist means one is crazy, or those who don’t believe in therapy, or those who are just not ready to spill their deepest thoughts and feelings to a real person. After all, not everyone is cut out for the therapeutic modality. And in fairness to the online shrinks, and even to an A.I., not all therapists are great. Some should never be allowed to dispense advice about life to anyone. I have known some of these people. But there are far more good, live, trained therapists out there who can and do help those in emotional pain. They see their clients regularly and remember their names. They take notes and recall what both of you said in a session. Still, it is probably better for some people to seek online help than to go to a live shrink and get nothing out of it, for whatever reason. That lonely F.B.I. agent comes to mind, pathetically.

But I draw the line at promoting the idea of entrusting one’s mental well-being, including the decision about whether or not to end one’s life, to a software program. Where in blazes did humans get the idea that talking to an A.I. was remotely helpful, at least in the long run? If said descendant of HAL tells you that you should ignore the thugs who bully you every day, or that getting a hobby can ease depression, or that thinking about PTSD differently will cure it, and these “solutions” do not work (a good bet), then who is held accountable? A shrink can be reviewed, disciplined, and even sued or sent to jail for screwing up someone’s life even more than it already is. How do you punish an A.I. for doing the same thing?

When I first read that this online shrink was named Koko, I immediately thought of the talking gorilla. It is not out of the realm of possibility that one might be better off having a hug-fest with a gorilla, or even the family dog, than listening to programmed “solutions” parroted by a brainless, soulless machine that, let’s face it, does not even know you are there, much less give a damn what happens to you.

For more well-deserved paranoia:

http://www.vox.com/conversations/2017/3/8/14712286/artificial-intelligence-science-technology-robots-singularity-automation