Data Science #32 - A Markovian Decision Process, Richard Bellman (1957)
We reviewed Richard Bellman's "A Markovian Decision Process" (1957), which introduced a mathematical framework for sequential decision-making under uncertainty. By connecting recurrence relations to Markov processes, Bellman showed how current choices shape future outcomes and formalized the principle of optimality, laying the groundwork for dynamic programming and the Bellman equation. This paper is directly relevant to reinforcement learning and modern AI: it defines the structure of Markov Decision Processes (MDPs), which underpin algorithms like value iteration, policy iteration, and Q-learning. From robotics to large-scale systems like AlphaGo, nearly all of RL traces back to the foundations Bellman set in 1957.
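To make the Bellman optimality update concrete, here is a minimal value-iteration sketch in Python on a hand-made two-state, two-action MDP; the transition probabilities, rewards, and discount factor are illustrative assumptions, not taken from Bellman's paper.

```python
# Minimal value iteration on a toy MDP (all numbers are illustrative).
import numpy as np

states = [0, 1]
actions = [0, 1]
# P[s][a] = list of (probability, next_state, reward)
P = {
    0: {0: [(0.8, 0, 0.0), (0.2, 1, 1.0)],
        1: [(0.5, 0, 0.0), (0.5, 1, 2.0)]},
    1: {0: [(1.0, 1, 0.5)],
        1: [(0.9, 0, 0.0), (0.1, 1, 3.0)]},
}
gamma = 0.9  # discount factor

V = np.zeros(len(states))
for _ in range(1000):
    V_new = np.empty_like(V)
    for s in states:
        # Bellman optimality update: V(s) = max_a sum_{s'} p * (r + gamma * V(s'))
        V_new[s] = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in actions
        )
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("Optimal state values:", V)
```

The same fixed-point update, applied to action values instead of state values, is what Q-learning approximates from sampled experience.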
Data Science #31 - Correlation and Causation (1921), Sewall Wright
On the 31st episode of the podcast, we add Liron to the team and review a gem from 1921, in which Sewall Wright introduced path analysis, mapping hypothesized causal arrows into simple diagrams and proving that any sample correlation can be written as a sum of products of "path coefficients." By treating each arrow as a standardised regression weight, he showed how to split the variance of an outcome into direct, indirect, and joint pieces, then solve for unknown paths from an ordinary correlation matrix, turning the slogan "correlation ≠ causation" into a workable calculus for observational data. Wright's algebra and diagrams became the blueprint for modern graphical causal models, structural-equation modelling, and DAG-based inference that power libraries such as DoWhy, Pyro, and CausalNex. The same logic underlies feature-importance decompositions, counterfactual A/B testing, fairness audits, and explainable-AI tooling, making a century-old livestock-breeding study a foundation stone of present-day data-science and AI practice.
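To see Wright's decomposition in action, below is a minimal path-analysis sketch in Python on simulated standardized data for a simple X → M → Y model with a direct X → Y arrow; the variable names and effect sizes are illustrative assumptions, not from Wright's paper. The direct and indirect paths recovered by standardized regression sum to the observed correlation, which is exactly Wright's rule.

```python
# Minimal path-analysis sketch: decompose corr(X, Y) into path coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
m = 0.6 * x + np.sqrt(1 - 0.6**2) * rng.standard_normal(n)  # X -> M
y = 0.3 * x + 0.5 * m + rng.standard_normal(n)              # direct X -> Y plus M -> Y

# Standardize so regression weights are path coefficients.
z = lambda v: (v - v.mean()) / v.std()
x, m, y = z(x), z(m), z(y)

# Path coefficients: regress M on X, and Y on X and M.
p_mx = np.linalg.lstsq(np.c_[x], m, rcond=None)[0][0]
p_yx, p_ym = np.linalg.lstsq(np.c_[x, m], y, rcond=None)[0]

r_xy = np.corrcoef(x, y)[0, 1]
print("direct path X->Y:      ", round(p_yx, 3))
print("indirect path X->M->Y: ", round(p_mx * p_ym, 3))
print("sum of paths:          ", round(p_yx + p_mx * p_ym, 3))
print("observed corr(X, Y):   ", round(r_xy, 3))
```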
Data Science #30 - The Bootstrap Method (1977)
In the 30th episode we review the bootstrap method, introduced by Bradley Efron in 1979. It is a non-parametric resampling technique that approximates a statistic's sampling distribution by repeatedly drawing with replacement from the observed data, allowing estimation of standard errors, confidence intervals, and bias without relying on strong distributional assumptions. Its ability to quantify uncertainty cheaply and flexibly underlies many staples of modern data science and AI, powering model evaluation and feature stability analysis, inspiring ensemble methods like bagging and random forests, and informing uncertainty calibration for deep-learning predictions, thereby making contemporary models more reliable and robust.

Efron, B. "Bootstrap Methods: Another Look at the Jackknife." The Annals of Statistics 7, no. 1 (1979): 1-26.
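The resampling idea described above fits in a few lines of code: a minimal bootstrap sketch in Python, assuming we want a standard error and a percentile confidence interval for the median of a simulated skewed sample (the data and number of resamples are illustrative).

```python
# Minimal bootstrap: resample with replacement, recompute the statistic each time.
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)  # a skewed sample

n_boot = 10_000
medians = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)  # draw with replacement
    medians[b] = np.median(resample)

print("bootstrap SE of the median:", medians.std(ddof=1))
print("95% percentile CI:", np.percentile(medians, [2.5, 97.5]))
```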
Data Science #29 - The Chi-square automatic interaction detection (CHAID) algorithm (1979)
In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced the CHAID algorithm. CHAID (Chi-squared Automatic Interaction Detection) is a tree-based partitioning method for exploring large categorical data sets by iteratively splitting records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than maximal explanatory power. Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully. In practice, CHAID proceeds by merging predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting. Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data. In modern data science, CHAID's core ideas underpin contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that transform raw categorical variables into actionable insights.
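As a toy illustration of CHAID's core step, the sketch below cross-tabulates each categorical predictor against the target and picks the split with the most significant chi-squared test; the tiny dataset is invented, and category merging and Bonferroni correction are omitted for brevity.

```python
# Minimal "pick the most significant split" step in the spirit of CHAID.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "region":  ["N", "S", "S", "N", "E", "E", "S", "N", "E", "S"],
    "channel": ["web", "web", "store", "store", "web", "store", "web", "web", "store", "store"],
    "bought":  [1, 0, 0, 1, 1, 0, 0, 1, 1, 0],
})

best = None
for predictor in ["region", "channel"]:
    table = pd.crosstab(df[predictor], df["bought"])      # contingency table
    chi2, p, dof, _ = chi2_contingency(table)              # chi-squared test of independence
    print(f"{predictor}: chi2={chi2:.2f}, p={p:.3f}")
    if best is None or p < best[1]:
        best = (predictor, p)

print("split on:", best[0])
```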
Data Science #28 - The Bloom filter algorithm
In the 28th episode, we go over Burton Bloom's Bloom filter from 1970, a groundbreaking data structure that enables fast, space-efficient set membership checks by allowing a small, controllable rate of false positives. Unlike traditional methods that store full data, Bloom filters use a compact bit array and multiple hash functions, trading exactness for speed and memory savings. This idea transformed modern data science and big data systems, powering tools like Apache Spark, Cassandra, and Kafka, where fast filtering and memory efficiency are critical for performance at scale.
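A minimal Bloom-filter sketch in Python, assuming a fixed-size bit array and k indices derived from salted SHA-256 hashes; the array size, hash count, and example items are illustrative, not tuned for a target false-positive rate.

```python
# Minimal Bloom filter: a bit array plus k salted hash positions per item.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k indices by hashing the item with k different salts.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means possibly present (false positives allowed).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice@example.com")
print(bf.might_contain("alice@example.com"))  # True
print(bf.might_contain("bob@example.com"))    # False (with high probability)
```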