In this short series of notebooks we review some fundamental ideas regarding mathematical functions, ideas we will see used over and over again throughout our study of machine learning / deep learning. We begin in this first post by introducing the notion of a mathematical function, first in terms of real data and then in terms of equations/formulae.
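As a quick illustration of these two views, here is a minimal sketch in Python (the names are our own, not taken from the post): the same function expressed once as an equation and once as a small table of input/output data.

```python
import numpy as np

# The "equation" view: an explicit rule mapping inputs to outputs.
def f(x):
    return x**2 + 1

# The "data" view: the same function seen only through samples.
x_samples = np.linspace(-2, 2, 5)
y_samples = f(x_samples)

for x, y in zip(x_samples, y_samples):
    print(f"f({x:+.1f}) = {y:.2f}")
```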
In this post we review a variety of elementary functions often seen in the study of machine learning / deep learning, as well as various ways of combining these elementary functions to create an unending array of interesting and complicated functions with known equations. These elementary functions are used extensively not only in the study of machine learning / deep learning, but in many areas of science in general.
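For instance, a handful of elementary functions can already be combined through addition, multiplication, and composition to produce far more elaborate ones. The sketch below is our own illustration of this idea, not code from the post.

```python
import numpy as np

# A few elementary functions.
def poly(x): return x**2        # degree-2 monomial
def sine(x): return np.sin(x)   # sinusoid
def expo(x): return np.exp(x)   # exponential

# Three ways of combining them.
def summed(x):   return poly(x) + sine(x)   # addition
def product(x):  return poly(x) * expo(x)   # multiplication
def composed(x): return sine(expo(x))       # composition

x = 0.5
print(summed(x), product(x), composed(x))
```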
In this post we discuss an algorithm for combining the elementary functions and operations introduced in the previous post to construct new functions of arbitrary complexity. The same thinking allows us to deconstruct generic functions into their elementary functions / operations in a consistent manner. This basic idea arises throughout machine learning, from the design of Automatic Differentiation algorithms to the design of novel neural network architectures.
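One way to picture this construction/deconstruction idea is as a sequence of elementary steps, which is precisely the bookkeeping that Automatic Differentiation traverses. Below is a hedged sketch (the naming and the example function are our own) that builds sin(x^2) + e^x one elementary operation at a time while recording each step.

```python
import numpy as np

# Build f(x) = sin(x**2) + exp(x) one elementary step at a time,
# recording the sequence of operations used along the way.
def f_with_trace(x):
    trace = []
    a = x**2;       trace.append(("square", a))
    b = np.sin(a);  trace.append(("sin", b))
    c = np.exp(x);  trace.append(("exp", c))
    y = b + c;      trace.append(("add", y))
    return y, trace

value, steps = f_with_trace(1.0)
for name, out in steps:
    print(f"{name:>6s} -> {out:.4f}")
```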
In the previous posts we discussed elementary function operations, in particular function composition. In this post we explore what happens when we apply function composition repeatedly, which creates so-called recursive functions. These sorts of functions arise throughout machine learning, from the computation of higher-order derivatives to the construction of neural networks.
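As a small illustration (our own sketch, not the post's code), repeatedly composing a single function g with itself yields such a recursive function, much like stacking identical layers in a neural network.

```python
import numpy as np

# One "layer": a simple elementary function.
def g(x):
    return np.tanh(2.0 * x)

# Repeated composition: f_n(x) = g(g(...g(x)...)), n times.
def compose_n(g, n):
    def f(x):
        for _ in range(n):
            x = g(x)
        return x
    return f

f3 = compose_n(g, 3)   # g composed with itself 3 times
print(f3(0.5))         # equivalent to g(g(g(0.5)))
```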