From inference engines to probability models: The story of machine cognition
In the 20th century, mathematics, computing, and the sciences converged in a way that allowed us to abstract a very fundamental human task -- learning. One major contributing factor was David Hilbert's undertaking to build a rigorous foundation for mathematics. This in turn enabled the development of inference/deduction engines that could automatically prove theorems, since there was now a rigorous definition of what a proof is.
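To make the idea of a deduction engine concrete, here is a minimal sketch of forward chaining in Python: it repeatedly applies modus ponens until no new facts can be derived. The rules and fact names are illustrative, not drawn from any particular system.

```python
def forward_chain(facts, rules):
    """facts: a set of atoms; rules: a list of (premises, conclusion) pairs.
    Repeatedly fires any rule whose premises all hold, adding its
    conclusion, until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Illustrative grounded rule: man(socrates) -> mortal(socrates)
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]
print(forward_chain({"socrates_is_a_man"}, rules))
```

Real theorem provers work over far richer logics (first-order resolution, sequent calculi), but the core loop -- mechanically applying inference rules to derive new statements -- is the same.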
Following this, our focus shifted toward the study of probability, which allowed us to model events under uncertainty. However, there is no widely accepted unification of these two methods. What would such a unification look like? How can we teach computers to make clear, explainable inferences that make use of probability? And is there more to human cognition than this combined process?
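One way to see the contrast with pure deduction is to make a rule probabilistic: instead of "A implies B" with certainty, we carry P(B | A) and update beliefs with Bayes' rule. The sketch below uses a hypothetical diagnostic example with made-up numbers.

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior via Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical example: H = has condition, E = positive test result.
prior = 0.01        # P(H), base rate (assumed)
sensitivity = 0.95  # P(E | H) (assumed)
false_pos = 0.05    # P(E | not H) (assumed)

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = bayes_update(prior, sensitivity, evidence)
print(posterior)
```

Even a highly reliable test leaves the posterior well below certainty when the prior is small -- a conclusion a purely deductive engine, which only tracks true/false, cannot express.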