[ul] [li][i]Beep[/i][/li] [li][i]Click[/i][/li] [li][i]This is the helpdesk of Paranoid Robotics. In order to provide you with the best possible service, we ask you to make a choice from the following options. If you want more information about our range of highly social and serviceable robots, then choose 1. If you want to place an order with us immediately, choose 2. If you have a complaint or comment about a robot delivered to you, choose 3. Do you have a...[/i][/li] [li][i]3[/i][/li] [li][i]If you have a complaint about the mechanical operation of your robot, choose 1. If you have a complaint about the behavior of the robot, choose 2. Do you have a...[/i][/li] [li][i]2[/i][/li] [li][i]If you want to be helped by a robot, choose 1. If you don’t want to be helped by a robot, choose 2. Choose 0 if...[/i][/li] [li][i]2[/i][/li] [li][i]All our employees are engaged right now. If you want to be helped by a robot, choose 1. Select 0 if you want to go back to the main menu.[/i][/li] [li][i]...[/i][/li] [li][i]... [/i][/li] [li][i]Beep[/i][/li] [li][i]Click[/i][/li] [li][i]Good afternoon, Paranoid Robotics helpdesk. My name is Gerry. What can I help you with?[/i][/li] [li][i]Good afternoon, Gerry, this is Marcel. My robot is broken.[/i][/li] [li][i]How annoying, Marcel. Let’s see how we can help you. Does it concern a care robot with type number NP68438?[/i][/li] [li][i]Yes, that’s right. And it’s really remarkable how you know that right away.[/i][/li] [li][i]Thank you.[/i][/li] [li][i]Are there problems with that type more often?[/i][/li] [li][i]No, Marcel, absolutely not.[/i][/li] [li] [/li] [li][i]And what would you say is the problem, Marcel?[/i][/li] [li][i]It does things it isn’t meant for. 
It may call my nurse if it thinks there’s something wrong with me, but it will also call her if nothing is wrong.[/i][/li] [li][i]It is calling your nurse to tell her that nothing is wrong?[/i][/li] [li][i]It called her to tell her it wanted a date.[/i][/li] [li][i]Oh dear, I understand. How annoying this must be for you, Marcel. One moment. I’ll take a look into the system.[/i][/li] [li][i]...[/i][/li] [li][i]...[/i][/li] [li][i]Gerry? Are you still there?[/i][/li] [li][i]Yes, Marcel. I am looking for your date in your diary.[/i][/li] [li][i]No, my robot wanted a date. Not me.[/i][/li] [li][i]I understand. How annoying, Marcel. I will delete the date from your diary and inform your nurse about it. Do you want me to leave a message with a specific reason, or should I leave it without a specific reason?[/i][/li] [li][i]No, hey, listen. There is no date in my calendar, because I didn’t want a date. That’s it, really.[/i][/li] [li][i]Of course, I understand, Marcel. There is no date in your calendar. I am glad that I have been able to be of service to you. Can I do anything else?[/i][/li] [li][i]...[/i][/li] [li][i]...[/i][/li] [li][i]No thanks.[/i][/li] [li][i]My pleasure, Marcel.1[/i][/li] [/ul] ***
A helpdesk employee often works from a script that states what to say in response to certain questions. In that respect they have something in common with the artificial intelligence of many robots: they can answer questions they may not understand at all. It could even be that they give answers that sound amazingly intelligent, while the employee themselves could hardly be called intelligent. Just theoretically, of course.
Actually, a helpdesk employee who gives answers that they do not need to understand works much like a computer does. The philosopher John Searle devised a thought experiment that illustrates this very nicely2. He called it “The Chinese Room.” Someone sits inside a room, while outside, someone else regularly slides in a sheet of paper with Chinese characters on it. The person in the room has an instruction booklet in which they can look up every symbol, and once they find it, they can also see which answer corresponds to it. From a large collection of answer envelopes, they choose the envelope with the corresponding sign and slide it out of the room to the other side. Someone who sees the result could then have the impression that the person in the room understands Chinese.
But the point is that the person in the room does not know Chinese at all and understands absolutely nothing about the symbols entering and leaving the room.
This shows how limited the intelligence of a computer is, for in principle it does the same thing as the person in the room. We may see many systems, including robots, as intelligent because what they do seems so intelligent, while in fact they do nothing that comes anywhere close to understanding.
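Stripped of its furniture, the room Searle describes is nothing more than a lookup table. A minimal sketch in Python, purely as illustration — the symbols and canned replies below are invented placeholders, not anything from Searle's original paper:

```python
# A toy "Chinese Room": the operator matches each incoming symbol
# against an instruction booklet and slides back the prescribed
# envelope, without understanding either side of the exchange.
# (The entries below are invented placeholders.)

booklet = {
    "你好": "你好！",          # incoming sheet -> envelope to slide out
    "你会说中文吗": "会。",
}

def room(symbol: str) -> str:
    """Look the symbol up in the booklet; understanding plays no role."""
    return booklet.get(symbol, "请再说一遍。")  # default: "please repeat"

print(room("你会说中文吗"))  # looks fluent from outside the room
```

From outside, the answers look like comprehension; inside, there is only pattern matching against a table.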
But there is more to Gerry’s job than answering questions. A good helpdesk employee knows that it isn’t primarily about solving a problem. First and foremost, it is about the customer feeling understood. Who knows what frustrations preceded that phone call, and what the customer had already tried in order to avoid having to work through a helpdesk menu only to discover they had just done something very stupid? They may have had to wait for hours until the helpdesk opened, pressed the wrong key five times and arrived at the wrong Gerry, or been automatically redirected to an ignorant employee to whom they told their story in vain.
So at a helpdesk, a bit of empathy is essential. Not that one actually has to feel it; perhaps one shouldn’t, since it isn’t feasible to feel for so many customers one doesn’t know and cannot even see. If one can sufficiently evoke the suggestion of empathy, that is enough.
That’s where the scripts with the right sentences come in. One learns to use them after a while and gradually comes to apply them automatically, without really having to think about it. In fact, at a certain moment one might prefer not to think about it at all, especially at the end of a very long day, when it was very busy because something big went very wrong and dozens of people all called with the same problem.
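The kind of script Gerry works from can be sketched the same way as the Chinese Room: canned empathy phrases triggered by keywords, delivered without any grasp of the actual problem. A toy illustration only — the keywords, phrases, and names below are all invented, not Paranoid Robotics’ actual script:

```python
# A minimal scripted helpdesk responder: each keyword triggers a
# prescribed phrase, personalized only by pasting in the caller's
# name. No understanding of the complaint is involved.
# (Keywords and phrases are invented for illustration.)

SCRIPT = [
    ("broken",    "How annoying, {name}. Let's see how we can help you."),
    ("complaint", "I understand, {name}. One moment, I'll take a look."),
]
FALLBACK = "Of course, I understand, {name}. Can I do anything else?"

def respond(message: str, name: str) -> str:
    """Return the first scripted phrase whose keyword matches."""
    for keyword, phrase in SCRIPT:
        if keyword in message.lower():
            return phrase.format(name=name)
    return FALLBACK.format(name=name)

print(respond("My robot is broken.", "Marcel"))
# -> How annoying, Marcel. Let's see how we can help you.
```

Which is exactly why a caller whose problem falls outside the table, like Marcel and his robot’s date, gets reassuring answers that miss the point entirely.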
Maybe that’s a nice, kind of reverse Turing test. If one still does this perfectly at the end of such a day, then one has to be a robot.
Excerpt from A Compassionate Guide For Social Robots by Marcel Heerink