I found this article today. As humans, we do anthropomorphize things. We like to think, for example, that our pets have feelings.
Could you switch off a robot that was pleading not to be turned off? Apparently, yes, you could...every participant did eventually "kill" the begging robot.
But as robots become more sophisticated, I wonder whether our attitude will change. We might end up treating robots the way we treat pets or working animals. (This is all leaving aside the issue of robots that might be our intellectual equals.)
When I work with a horse, I'm very careful not to anthropomorphize. It causes problems. Horses are not humans. Their minds work differently, and their intellect is probably that of a 3-to-5-year-old human. Certainly, the way they try to manipulate their handlers is very reminiscent of toddlers - I'm dealing with one right now who is faking terror at random things to try to get out of work. If you want to understand what's going on, you have to think "horse." You have to keep in mind that they can't grasp long-term consequences and struggle immensely with change of any kind.
Now, the difference with robots is that they come with operating manuals that very specifically state what they are programmed to do. For now.
What if we find that, as we move into quantum computing, our robots and computers demonstrate quirks...real ones, not the ones we tend to imagine? What then?
Will we be able to turn off the robots of the future?