Regularization, in machine learning, is the process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. A theoretical justification for regularization is that it attempts to impose Occam’s razor on the solution, as depicted in the figure. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. Regularization can be used to learn simpler models, induce sparsity, introduce group structure into the learning problem, and more. The same idea arises in many fields of science; for example, the least-squares method can be viewed as a very simple form of regularization.
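To make this concrete, here is a minimal sketch of one common regularization technique, ridge regression, which adds an L2 penalty on the weights to ordinary least squares. The function name `ridge_fit` and the penalty strength `alpha` are illustrative choices, not part of the text above; the closed-form solution shown assumes a small dense problem where the normal equations are practical.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Minimize ||Xw - y||^2 + alpha * ||w||^2 in closed form.

    alpha=0 recovers ordinary least squares; larger alpha shrinks
    the weights toward zero (the Bayesian view: a Gaussian prior
    on w with precision proportional to alpha).
    """
    n_features = X.shape[1]
    # Normal equations with the L2 penalty added to the diagonal
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Illustrative data (hypothetical): noisy observations of a linear model
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

w_ols = ridge_fit(X, y, alpha=0.0)    # unregularized least squares
w_ridge = ridge_fit(X, y, alpha=10.0) # penalized, smaller-norm solution
```

Increasing `alpha` trades a small amount of training-set fit for a simpler (smaller-norm) model, which is exactly the Occam's-razor trade-off described above.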