This talk is dedicated to an introduction to machine learning and, more specifically, to neural networks composed of several hidden layers (deep learning). I first present some elementary machines, such as the perceptron, and introduce the concept of distributed memory. We then move to the more general situation of elementary neural units linked by activation functions. I show how such a machine is trained, leading to the notions of gradient descent and backpropagation. The last part is dedicated to unsupervised machines, such as the (restricted) Boltzmann machine and the autoencoder.