We consider the possibility that, as embedding vectors get longer and longer - we have already reached 1536 floating-point numbers, and there are reports that elements of GPT-4 use over 12,000 - embeddings could come to encapsulate so much of the semantics of everything they embed that, between them, they become absolutely unique, and as such capable of replacing the weights and even the layers of their neural networks. It's an interesting idea.
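
To give a feel for why very long embeddings might behave as if they were unique, here is a minimal sketch (our illustration, not part of the original argument) that samples random unit vectors at the dimensionalities mentioned above and reports the largest cosine similarity between any pair; as the dimension grows, independent vectors become nearly orthogonal, so accidental collisions become vanishingly unlikely.

    import numpy as np

    rng = np.random.default_rng(0)

    for dim in (64, 1536, 12000):  # 1536 and ~12,000 are the sizes cited above
        vecs = rng.standard_normal((1000, dim))
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # project onto the unit sphere
        sims = vecs @ vecs.T                                  # pairwise cosine similarities
        np.fill_diagonal(sims, 0.0)                           # ignore self-similarity
        print(f"dim={dim:>6}  max off-diagonal cosine similarity: {sims.max():.3f}")

Running this, the maximum similarity between any two of the thousand random vectors falls steadily as the dimension increases, which is the geometric intuition behind treating sufficiently long embeddings as effectively unique identifiers of what they encode.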