hosh 310 days ago [-]
To be clear: From a psychological perspective, the robot's reading of emotion is cognitive empathy rather than affective empathy.

There are several personality disorders in which there is reduced cognitive or affective empathy in different combinations. Someone with high cognitive empathy, low affective empathy, a willingness and skill to manipulate, and a lack of ethics can cause problems. I don't see how that would be different when it is a robot or android. (We already have problems with socially engineering dopamine feedback loops to raise "engagement" on our apps and platforms, and as a society we have not figured out what we want to do about this yet.)

Lastly, in regard to manipulative behavior and trust -- there is an insight from Crucial Conversations: when someone feels that another person has their best interest at heart, it is easier to trust that person even when talking about difficult subjects. I think that applies here to the question of AI and ethics. It is not so much whether it is more ethical for a self-driving AI to sacrifice one group of people to save another, but whether individuals feel that the AI has their best interest at heart (whatever that means).

joe_the_user 310 days ago [-]
Well, if I set a series of broad goals I want a robot to help me achieve, it seems OK for it to use high cognitive empathy to get me there. Regular feedback on where the robot is in the process would be good.

It seems like affective empathy, actually feeling, is something you would only want from a person or person-equivalent, since it implies they do what you want only because they like you and they'd expect you to satisfy them in turn, a problematic relationship to have with a robot. I'd want a robot doing what I told it to do with various fail-safes and feedback (obviously, knowing something made me unhappy would be a moment to request feedback).

hosh 308 days ago [-]
Affective empathy is not necessarily the best thing for therapy or coaching, yet it is often what a person craves.

Both antisocial personality disorder and narcissistic personality disorder involve reduced or no affective empathy and often high cognitive empathy. However, I would be more inclined to work with the "pro-social" person with ASPD than with someone with NPD. Someone with NPD is incapable of thinking beyond themselves, while someone with ASPD is willing to entertain mutually beneficial arrangements.

You could swing the opposite way too. Someone on the autism spectrum may have reduced cognitive empathy yet high affective empathy. They tend to care more, but only after understanding, and then the emotion tends to be more intense.

Likewise, there are people who feel so deeply what everyone else is feeling that they lose their center and are unable to function as individuals.

The main point I was making is that people tend to confuse one type of empathy for the other. The other point is that a good test is whether you trust the other person or AI to have your best interest at heart.

joe_the_user 308 days ago [-]
But we're talking about robots, right? Mechanical devices that are constructed, not human beings of any sort.

It seems like the appropriate analogy would be something like a drug or a video game. I would claim it is unwise in the extreme to allow a robot with emotion-influencing abilities/programming to first calculate one's "best interests" and then go about imposing those best interests without getting overt, conscious go-ahead from the user.

Of course, even with human beings, you want to interact with people who intend to be good to you and whose idea of "good" is somewhat in accordance with your own, otherwise you get nuts and cults and so forth. With robots, devices which function imperfectly and which we don't want just imposing their "values" on society, jeesh, obviously one wants tight control over what concept they have of "best interests", along with tight control over what sort of influencing they do.

The inability of people to see human-robot interactions as rather distinct from human-human interactions just makes the situation more urgent.

hosh 304 days ago [-]
I didn't say the robot had to calculate your best interest. I said it is about whether _you_ yourself feel that the robot has your best interest at heart. I am talking about being in a relationship with something that interacts with your emotional responses.

userbinator 310 days ago [-]
This is very much in the "uncanny valley" territory for me. The only feeling a robot needs to read from me is "I don't want a robot showing some fake 'emotion'." It brings to mind the irritatingly insincere, smarmy fake friendliness that some companies like to exude in their communications.

otakucode 310 days ago [-]
It seems like they are taking a very difficult approach to this. As humans, we develop the ability to read human feelings through our mirror neurons and biofeedback from our own bodies. If you take away the body, and all those nerves feeding back sensation, you're making the problem far harder. Without a body, of course, there won't ever really be meaningful emotion in a robot, since emotion is primarily a body-oriented biofeedback thing.

That's why people who experience total facial paralysis lose the ability to feel some emotions, then lose the ability to remember what feeling them felt like, and eventually lose the ability to recognize them in others. Anger, from things I've read, is the standout on that count. The expression of emotion in your body, in a very real way, is the emotion. Put a pencil in your teeth and you will feel happier because it forces your mouth into a smile. IMO the research screams that you need a body (or perhaps a simulacrum of one) for emotion. And to recognize emotion well and be able to empathize, mirror neurons seem like a tried-and-true solution.

everdev 310 days ago [-]
I think people are tuned to read other people's emotions pretty well. The trick is to actually care and show empathy. Not sure a robot can effectively do that.

nuanced 310 days ago [-]
People are tuned to communicate their emotions really well. What humans have naturally evolved to constantly broadcast about themselves to others doesn't take particularly sophisticated sentiment analysis to read.

QAPereo 310 days ago [-]
The challenges of humans with limited affect scream otherwise. It doesn't take much for a human to lose the ability to translate emotional signals, a process we don't really understand, yet you think it would be relatively easy to automate?