In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe a large database of common sense knowledge, represented in first-order logic, which is freely available to machine learning researchers; we provide a hyperlink to this free database. Fourth, we discuss some problems with first-order logic and explain how they can be resolved by transforming logical rules into probabilistic rules using Markov Logic Nets. Fifth, we review the book “Markov Logic: An Interface Layer for Artificial Intelligence” by Pedro Domingos and Daniel Lowd, which provides further discussion of the issues in this podcast. The review also covers some important applications of Markov Logic Nets not discussed in detail here, such as object labeling, social network link analysis, information extraction, and support for robot navigation. Finally, at the end of the podcast we provide information about a free software program which you can use to build and evaluate your own Markov Logic Net! For more information check out: www.learningmachines101.com
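To give a flavor of the idea of turning logical rules into probabilistic rules, here is a minimal sketch of a tiny Markov Logic Net (this is an illustrative toy, not the free software mentioned in the episode). It uses one hypothetical weighted first-order rule, Smokes(x) => Cancer(x), over two constants, and assigns each possible world a probability proportional to exp(weight × number of satisfied rule groundings), so worlds that violate the rule become less likely rather than impossible:

```python
import itertools
import math

# Hypothetical example: one weighted rule, Smokes(x) => Cancer(x),
# over two constants. The weight value 1.5 is an arbitrary choice.
PEOPLE = ["Anna", "Bob"]
WEIGHT = 1.5  # higher weight = worlds violating the rule are less probable

def n_satisfied(world):
    """Count groundings of Smokes(x) => Cancer(x) true in this world.
    A world maps each ground atom (predicate, person) to True/False."""
    return sum(
        1 for p in PEOPLE
        if (not world[("Smokes", p)]) or world[("Cancer", p)]
    )

def world_probabilities():
    """P(world) = exp(WEIGHT * n_satisfied(world)) / Z,
    enumerating all 2^4 truth assignments to the four ground atoms."""
    atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in PEOPLE]
    scored = []
    for bits in itertools.product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        scored.append((world, math.exp(WEIGHT * n_satisfied(world))))
    z = sum(score for _, score in scored)  # normalizing constant
    return [(world, score / z) for world, score in scored]

probs = world_probabilities()
# The most probable worlds satisfy both groundings of the rule;
# rule-violating worlds keep nonzero probability (a soft constraint).
best = max(probs, key=lambda t: t[1])
worst = min(probs, key=lambda t: t[1])
assert n_satisfied(best[0]) == 2 and n_satisfied(worst[0]) == 0
```

Unlike a pure first-order knowledge base, where a single exception makes the rule false, the soft constraint only lowers the probability of exceptional worlds, which is what makes this representation attractive for common sense knowledge.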
LM101-046: How to Optimize Student Learning using Recurrent Neural Networks (Educational Technology)
LM101-045: How to Build a Deep Learning Machine for Answering Questions about Images
LM101-044: What happened at the Deep Reinforcement Learning Tutorial at the 2015 Neural Information Processing Systems Conference?
LM101-043: How to Learn a Monte Carlo Markov Chain to Solve Constraint Satisfaction Problems (Rerun of Episode 22)
LM101-042: What happened at the Monte Carlo Markov Chain (MCMC) Inference Methods Tutorial at the 2015 Neural Information Processing Systems Conference?
LM101-041: What happened at the 2015 Neural Information Processing Systems Deep Learning Tutorial?
LM101-040: How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis
LM101-039: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain and Markov Fields)[Rerun]
LM101-038: How to Model Knowledge Skill Growth Over Time using Bayesian Nets
LM101-037: How to Build a Smart Computerized Adaptive Testing Machine using Item Response Theory
LM101-036: How to Predict the Future from the Distant Past using Recurrent Neural Networks
LM101-035: What is a Neural Network and What is a Hot Dog?
LM101-034: How to Use Nonlinear Machine Learning Software to Make Predictions (Feedforward Perceptrons with Radial Basis Functions)[Rerun]
LM101-033: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)[RERUN]
LM101-032: How To Build a Support Vector Machine to Classify Patterns
LM101-031: How to Analyze and Design Learning Rules using Gradient Descent Methods (RERUN)
LM101-030: How to Improve Deep Learning Performance with Artificial Brain Damage (Dropout and Model Averaging)
LM101-029: How to Modernize Deep Learning with Rectilinear units, Convolutional Nets, and Max-Pooling
LM101-028: How to Evaluate the Ability to Generalize from Experience (Cross-Validation Methods)[RERUN]
LM101-027: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws)[RERUN]