# Summer reading — The Hundred-Page Machine Learning Book review

How many pages does a book need if its purpose is to explain, with a non-trivial level of detail, what Machine Learning (ML) is?

It’s natural to think of thousand-page tomes, like The Lord of the Rings…or even bigger.

Let’s think about it: ML is made up of many different topics, there is math involved, many, many concepts to deal with and, let’s not forget, we’re talking about an applied science, so some code is necessary to see something in action.

And then I found this book (the title is already a spoiler about the page count) and I thought: wow, is this possible? Can this book, by Andriy Burkov, of just about one hundred pages achieve the goal of being a valuable resource for everyone who wants to approach (or consolidate) ML knowledge?

The short answer is: hell yes! Let’s see why.

The book is structured in several chapters, and every chapter deals with a different subject, but, generally, there is a good mix of intuition (concepts explained in words) and formal content (math and formulas), so it is a great resource for a broad range of readers with different learning goals.

Intrigued? Let’s see the chapters and topics.

*Chapter 1: Introduction* → An introduction to ML types and how they differ (supervised, unsupervised,…)

Then…

Part I: Supervised Learning

*Chapter 2: Notation and Definitions* → The math chapter (data structures, functions, matrix operations, derivatives and gradients, random variables, distributions, probabilities)

*Chapter 3: Fundamental Algorithms* → The main supervised algorithms (Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine, k-Nearest Neighbors)
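To give a flavor of how simple some of these fundamental algorithms are at their core, here is a minimal sketch of k-Nearest Neighbors in Python. This is not code from the book; the function name and toy data are my own:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # most common label among neighbors

# two small clusters with labels 0 and 1
X_train = np.array([[0., 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # near the first cluster
```

The whole “training” phase is just storing the data; all the work happens at prediction time, which is exactly the trade-off the book discusses.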

*Chapter 4: Anatomy of a Learning Algorithm* → Types of gradient descent and how they work
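The batch vs. stochastic distinction covered in this chapter can be sketched in a few lines of Python for a simple linear model. This is my own minimal illustration, not the book’s code:

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.1, epochs=1000):
    """Fit y ≈ X @ w + b, updating on the FULL dataset at every step."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(epochs):
        error = X @ w + b - y                 # residuals on all examples
        w -= lr * (2 / n) * (X.T @ error)     # gradient of mean squared error
        b -= lr * (2 / n) * error.sum()
    return w, b

def stochastic_gradient_descent(X, y, lr=0.02, epochs=300, seed=0):
    """Same objective, but update after each single (shuffled) example."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):     # visit examples in random order
            error = X[i] @ w + b - y[i]
            w -= lr * 2 * error * X[i]
            b -= lr * 2 * error
    return w, b
```

On a toy dataset like `y = 3x + 1`, both versions recover the slope and intercept; the batch version takes smooth steps while the stochastic one takes many cheap, noisy ones, which is the trade-off the chapter explains.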

*Chapter 5: Basic Practice* → Common actions and problems in practice (feature engineering, algorithm selection, overfitting and underfitting, assessing model performance)

*Chapter 6: Neural Networks and Deep Learning* → Neural networks a go-go (MLP, CNN, RNN)

*Chapter 7: Problems and Solutions* → The different problem types and approaches (kernel regression, one-class, multiclass, and multilabel classification, ensemble learning)

*Chapter 8: Advanced Practice* → Specific solutions to specific problems (imbalanced sets, combining models, regularization, multiple inputs and outputs, transfer learning)

Part II: Unsupervised and Other Forms of Learning

*Chapter 9: Unsupervised Learning* → The unsupervised algorithms (density estimation, clustering algorithms, dimensionality reduction)
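As a taste of the clustering side of this chapter, k-means (Lloyd’s algorithm) fits in a dozen lines of Python. Again, a minimal sketch of my own, not the book’s code:

```python
import numpy as np

def k_means(X, k=2, iters=20, seed=0):
    """Lloyd's algorithm: alternate point assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from random points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels
```

Run on two well-separated blobs, it recovers one cluster per blob; no labels are ever needed, which is the defining trait of unsupervised learning.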

*Chapter 10: Other Forms of Learning* → Particular types of learning (metric learning, recommender systems, word embeddings)

*Chapter 11: Conclusion* → Final notes and what was not covered (GANs, genetic algorithms, reinforcement learning)

Of course, you can’t cover everything in full detail, but to let you go deeper into some subjects the book includes several QR codes linking to specific web resources: an elegant way to keep things compact…and stay in the nearly 100-page range :)

You can read it from start to finish, use it as a reference, or treat it as a thorough entry point to the ML world.

One final note: the book follows a “read first, buy later” formula, so you can “taste” it directly and decide if it suits your expectations.

In conclusion, if you’re interested in these topics, this is one of the books to check out.