ddml #
Ahrens, A., Hansen, C.B., Schaffer, M.E. and Wiemann, T., 2023. ddml: Double/debiased machine learning in Stata. arXiv preprint arXiv:2301.09397.
We introduce the package ddml for Double/Debiased Machine Learning (DDML) in Stata. Estimators of causal parameters for five different econometric models are supported, allowing for flexible estimation of causal effects of endogenous variables in settings with unknown functional forms and/or many exogenous variables. ddml is compatible with many existing supervised machine learning programs in Stata. We recommend using DDML in combination with stacking estimation, which combines multiple machine learners into a final predictor. We provide Monte Carlo evidence to support our recommendation.
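As an illustration, a partially linear model workflow might look like the sketch below. This is a minimal sketch, not the definitive usage: the variables y (outcome), d (treatment) and x1-x20 (controls) are hypothetical placeholders, and the learner menu passed to pystacked is just one possible choice.

```stata
* Initialize a DDML estimation for the partially linear model
* with 5-fold cross-fitting.
ddml init partial, kfolds(5)

* Register learners for the conditional expectations E[Y|X] and E[D|X];
* here pystacked combines OLS, cross-validated lasso and gradient boosting.
ddml E[Y|X]: pystacked y x1-x20, type(reg) methods(ols lassocv gradboost)
ddml E[D|X]: pystacked d x1-x20, type(reg) methods(ols lassocv gradboost)

* Cross-fit the nuisance functions, then estimate the causal parameter.
ddml crossfit
ddml estimate, robust
```

The other supported models (e.g. the interactive and instrumental-variable models) follow the same init/crossfit/estimate pattern; see the package help files for details.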
pystacked #
Ahrens, A., Hansen, C.B. and Schaffer, M.E., 2022. pystacked: Stacking generalization and machine learning in Stata. arXiv preprint arXiv:2208.10896.
pystacked implements stacked generalization (Wolpert, 1992) for regression and binary classification via Python’s scikit-learn. Stacking combines multiple supervised machine learners – the “base” or “level-0” learners – into a single learner. The currently supported base learners include regularized regression, random forest, gradient boosted trees, support vector machines, and feed-forward neural nets (multi-layer perceptron). pystacked can also be used as a ‘regular’ machine learning program to fit a single base learner and thus provides an easy-to-use API for scikit-learn’s machine learning algorithms.
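As a minimal sketch (with hypothetical variables y and x1-x20, and an arbitrary selection from the learner menu), a single stacking call might look like this:

```stata
* Stack three base learners for a regression problem; pystacked fits each
* learner via scikit-learn and combines their predictions with weights
* that are, by default, non-negative and sum to one.
pystacked y x1-x20, type(reg) methods(ols lassocv gradboost)

* Stacked in-sample predictions.
predict double yhat
```

Passing a single learner to methods() gives the ‘regular’ single-learner use mentioned above.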
lassopack #
Ahrens, A., Hansen, C.B. and Schaffer, M.E., 2020. lassopack: Model selection and prediction with regularized regression in Stata. The Stata Journal, 20(1), 176-235. doi:10.1177/1536867X20909697
In this article, we introduce lassopack, a suite of programs for regularized regression in Stata. lassopack implements lasso, square-root lasso, elastic net, ridge regression, adaptive lasso, and postestimation ordinary least squares. The methods are suitable for the high-dimensional setting, where the number of predictors p may be large and possibly greater than the number of observations, n. We offer three approaches for selecting the penalization (“tuning”) parameters: information criteria (implemented in lasso2), K-fold cross-validation and h-step-ahead rolling cross-validation for cross-section, panel, and time-series data (cvlasso), and theory-driven (“rigorous” or plugin) penalization for the lasso and square-root lasso for cross-section and panel data (rlasso). We discuss the theoretical framework and practical considerations for each approach. We also present Monte Carlo results to compare the performances of the penalization approaches.
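To make the three approaches concrete, here is a minimal sketch with hypothetical variables y and x1-x100; the options shown are illustrative rather than exhaustive:

```stata
* lasso2: penalty level selected by an information criterion (here EBIC).
lasso2 y x1-x100, lic(ebic)

* cvlasso: penalty level selected by 10-fold cross-validation;
* lopt re-estimates at the lambda minimizing the estimated MSPE.
cvlasso y x1-x100, nfolds(10) lopt

* rlasso: theory-driven ("rigorous") penalization with
* heteroskedasticity-robust penalty loadings.
rlasso y x1-x100, robust
```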
Download the earlier arXiv version or the Stata Journal paper.
Feel free to contact me (AA) if you have trouble accessing the SJ version.
Slides #
- Slides from our presentation at the 2018 London Stata Conference are available here.
- Presentation slides for pystacked are here.
- Presentation slides for ddml are here.