Abstract
Least squares (LS) and maximum likelihood estimation (MLE) overfit when the
dimension of the model is not small relative to the sample size, which is
almost always the case in high dimensions. Regularization often works by
adding a penalty to the fitting criterion, as in classical model selection
methods such as AIC and BIC, and in L1-penalized LS, known as the Lasso. We
will also introduce cross-validation (CV) for selecting the regularization
parameter.
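
As a brief illustration of the penalized criterion referred to above (the notation here is assumed, not fixed by the abstract): with response $y$, design matrix $X$, and regularization parameter $\lambda \ge 0$, the Lasso solves
$$
\hat{\beta}_{\mathrm{lasso}} \;=\; \arg\min_{\beta}\; \|y - X\beta\|_2^2 \;+\; \lambda \|\beta\|_1 ,
$$
where $\lambda$ is typically chosen by cross-validation.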