As we studied, though, we became more aware of how much nuance there was in each of these methods, especially once we started having humans chat with the models. In the end, the paper became more of a thorough analysis of which behaviors are preferred in dialogue agents. For example, we found that you should spend roughly 60-70% of your turns asking questions to maximize engagement: enough to convey interest in your partner without coming off as super nosy.
Most conversations are based on a shared pretext that the participants understand. In fact, if someone entering a conversation doesn't know that pretext, it can be awkward for them. That's why most people open a conversation by saying something neutral like 'hello' rather than 'wow, is the food horrible here!'. You never know who is listening. You wouldn't greet your boss the same way you greet your mother, for instance (in most cases).
The tl;dr question here is: 'Do you take any preconditions (like the relationship between the parties) into account before they begin chatting?'
So we restrict ourselves to the setting of meeting someone for the first time, trying to learn about them, and sharing a bit about ourselves.
The personas assigned to the bot and the human were both randomized (drawn from a large, fixed set) and unseen in the training data.