We present TabPFN, a trained Transformer that can do supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning, and is competitive with state-of-the-art classification methods. TabPFN is fully entailed in the weights of our network, which accepts training and test samples as a set-valued input and yields predictions for the entire test set in a single forward pass. TabPFN is a Prior-Data Fitted Network (PFN) and is trained offline once, to approximate Bayesian inference on synthetic datasets drawn from our prior.
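The interface described above — passing the labeled training rows and the unlabeled test rows together as one set-valued input, with all test predictions returned at once — can be illustrated with a minimal sketch. The function name `pfn_predict` is hypothetical, and the trained Transformer is replaced by a 1-nearest-neighbour stand-in; the point is the in-context prediction interface, not the model itself.

```python
import numpy as np

def pfn_predict(train_X, train_y, test_X):
    """Hypothetical PFN-style interface: predict all test rows in one call.

    A real PFN would feed the concatenated set through a trained
    Transformer; here a 1-nearest-neighbour rule stands in for it.
    """
    # Concatenate train and test rows into a single set-valued input,
    # mirroring how TabPFN consumes both in one forward pass.
    full_set = np.vstack([train_X, test_X])
    n_train = len(train_X)
    preds = []
    for x in full_set[n_train:]:  # one prediction per test row
        dists = np.linalg.norm(train_X - x, axis=1)
        preds.append(train_y[np.argmin(dists)])
    return np.array(preds)

# Toy two-class data: the label follows the first feature.
train_X = np.array([[0.0, 1.0], [1.0, 0.0], [5.0, 1.0], [6.0, 0.0]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.5, 0.5], [5.5, 0.5]])
print(pfn_predict(train_X, train_y, test_X))  # → [0 1]
```

Note that nothing is fit at call time: all information from the training rows enters only through the single joint pass, which is what lets the real TabPFN skip per-dataset training and hyperparameter tuning.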
2022: Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter
https://arxiv.org/pdf/2207.01848v3.pdf