Our artificial intelligence models should not be thought of as if they are trying to reproduce whatever it was that produced the phenomena they are modelling. If they produce something that looks vaguely like Shakespeare, that does not mean they have recreated Shakespeare's brain. If they produce reliable predictions of earthquakes, it does not mean they have reproduced a working theory of the Earth's seismology; all that matters is that their predictions are reliable, interesting, or entertaining. This view comes from my experience with Andrej Karpathy's GPT model from his YouTube video "Let's build GPT: from scratch, in code, spelled out."
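To make the point concrete, here is a minimal sketch of the idea underlying that tutorial: a model that only learns which character tends to follow which, with no theory of authorship at all. This is not Karpathy's code (his video builds up to a transformer); it is the simplest possible stand-in, a bigram count model, offered purely as an illustration of "predict the next token without modelling the producer."

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count character bigrams: counts[a][b] = how often b followed a."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample characters in proportion to observed bigram counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break  # no observed successor; stop generating
        chars = list(nexts)
        weights = [nexts[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)
```

Trained on a line of Shakespeare, this will emit Shakespeare-flavoured gibberish. Nothing in it resembles a brain; it just captures surface statistics, which is exactly the distinction the paragraph above is drawing.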