In episode 54, we talked about propensities in human beings and in chatbots, and I suggested that chatbots will embody, to some extent, certain characteristics that reflect the way they've been trained and the choices made by those training them. Extend that a little, and you can start to see how different chatbots from different parts of the world, trained by different groups of people, are likely to have different characteristics and so different personalities. Human beings, when they come to decide which chatbots to use, will be affected by those factors in just the same way they are when they choose their human partners, friends and collaborators. But the fact that we can wrap these chatbots with fine-tuning means we can also tailor them to our own personal, corporate or collective interests, in order to persuade them to give better responses to the things we are interested in. Where does that start and where does it finish, though? And how safe are we from the misappropriation of chatbot technology by those who would wrap it in skins and give it personalities of a kind we would deplore, and justifiably find alarming, dangerous and frightening? Fine-tuning comes up at 8:56.
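The "wrapping" idea mentioned above can be sketched in a few lines of code. This is only an illustration, not any real vendor's API: the `send()` function below is a hypothetical stand-in for whatever chat-model call a developer would actually use, and the persona strings are invented. The point is simply that a thin wrapper can steer a model's personality without retraining it at all.

```python
# A minimal sketch of "wrapping" a chatbot with a persona.
# send() is a HYPOTHETICAL placeholder for a real chat-model API call;
# here it just echoes the persona so the example is self-contained.
def send(messages):
    system = next(m["content"] for m in messages if m["role"] == "system")
    return f"[responding as: {system}]"

def make_persona_bot(persona):
    """Return a chat function whose replies are steered by a persona prompt."""
    def chat(user_text):
        messages = [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_text},
        ]
        return send(messages)
    return chat

helpful = make_persona_bot("a cautious, friendly assistant")
print(helpful("Hello"))
# → [responding as: a cautious, friendly assistant]
```

The same mechanism, of course, accepts any persona string, which is exactly the double-edged quality the episode worries about: the wrapper that tailors a chatbot to helpful corporate ends can just as easily impose a personality we would deplore.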