Making Conversation Models More Empathic

My team and I wrote a paper on making conversation models more empathetic. Automated dialogue systems are becoming commonplace, and it is important that they can respond to people's input in a way that is appropriate to their feelings. Research in many other domains shows that, in practice, when a system or person is perceived as more caring and more empathetic, it also improves end metrics such as how satisfied people are with the interaction.
There are currently no benchmarks for empathy, and no sufficiently large-scale, publicly available datasets covering both feelings and the relevant dialogues that would call for empathy. The scarcity of such datasets and benchmarks reflects a lack of awareness of the importance of empathy, even for goal-directed applications. So in this paper, we introduce a new dataset of about 25,000 dialogues in which one person talks about something that happened to them and another person responds to that.
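To make the shape of such a dataset concrete, here is a minimal sketch of what one dialogue record might look like. The field names and example text are illustrative assumptions, not the paper's exact schema: each dialogue pairs a short situation description (and an emotion label grounding it) with alternating speaker/listener turns.

```python
from dataclasses import dataclass, field

@dataclass
class Dialogue:
    """Hypothetical record shape for one empathy-grounded dialogue."""
    situation: str                    # what happened to the speaker
    emotion: str                      # emotion label grounding the dialogue
    turns: list = field(default_factory=list)  # alternating utterances, speaker first

example = Dialogue(
    situation="I lost my wallet on the bus yesterday.",
    emotion="annoyed",
    turns=[
        "I lost my wallet on the bus yesterday.",                 # speaker
        "Oh no, that sounds so frustrating. Did it have much in it?",  # listener
    ],
)

# The listener's utterances (the responses a model must learn to produce)
# are every second turn, starting from the second one.
listener_turns = example.turns[1::2]
```

A real release would of course store many thousands of such records; the point here is only the pairing of a hidden situation with an observable conversation.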
We used the dialogues we collected to train, and also to evaluate, models on the task of generating conversational responses. The model plays the role of the listener: once trained, it has access only to what was said earlier in the conversation, not to the description of the situation itself. The models we use are based on modern architectures built on Transformers and a model called BERT. We take those architectures and fine-tune them on our task of producing the empathetic responder, the role of the person who responds to the one who is going through a situation.
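The listener-only setup described above can be sketched as follows. This is an illustrative assumption about how (context, target) training pairs could be built from a dialogue, not the paper's actual preprocessing code: each listener turn becomes a target, and its context is only the prior utterances, never the hidden situation description.

```python
def make_training_pairs(dialogue_turns):
    """Build (context, target) pairs for the listener role.

    Speaker turns are at even indices (0, 2, ...), listener turns at odd
    indices. The context deliberately contains only earlier utterances:
    the situation description never appears in the model's input.
    """
    pairs = []
    for i, turn in enumerate(dialogue_turns):
        if i % 2 == 1:  # listener turn -> training target
            context = dialogue_turns[:i]  # everything said before this turn
            pairs.append((context, turn))
    return pairs

turns = [
    "My dog passed away last week.",           # speaker
    "I'm so sorry, losing a pet is hard.",     # listener
    "Thanks, he was with us for 12 years.",    # speaker
    "Twelve years of good memories, though.",  # listener
]
pairs = make_training_pairs(turns)
# Two pairs: one per listener turn, each conditioned only on prior utterances.
```

A generator such as a fine-tuned Transformer would then be trained to map each context to its target response.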
The models are evaluated on how well they perform the part of the listener, the person who responds to the situation, as opposed to the one who is talking about what is happening to them. When these models are fine-tuned on the data we collected, we show that humans judge the resulting responses as more empathetic.
Something we want to work on down the line is moving this type of responding to a more general setting, where it is not always clear when it is a good moment to react with empathy as opposed to moving the goal forward. So our next step is figuring out how a model should decide when to be in a more empathetic responding mode, as opposed to giving information or moving the task forward.
Reference – Making Conversation Models More Empathic