Armando Maynez
Apr 15, 2021

I don't think this is entirely true. A multi-layer perceptron can learn non-linear decision boundaries even with a piecewise-linear activation like ReLU. Take the very simple XOR problem: with as few as 2 hidden layers of 2 hidden neurons each, you could teach a network to separate classes that no single straight line can.
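As a minimal sketch of that claim (with hand-picked, untrained weights of my own choosing, in NumPy), here is a two-hidden-layer ReLU network that computes XOR exactly:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hidden layer 1 (hand-picked, illustrative weights, not trained values):
#   h1 = ReLU(x1 + x2),  h2 = ReLU(x1 + x2 - 1)
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])

# Hidden layer 2: identity weights; ReLU passes h1, h2 through
# unchanged because they are already non-negative.
W2 = np.eye(2)
b2 = np.zeros(2)

# Linear output: y = h1 - 2*h2
W3 = np.array([[1.0], [-2.0]])
b3 = np.array([0.0])

def forward(x):
    h = relu(x @ W1 + b1)
    g = relu(h @ W2 + b2)
    return g @ W3 + b3

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for x, y in zip(X, forward(X)):
    print(x, "->", y[0])  # prints 0.0, 1.0, 1.0, 0.0
```

Note that purely linear activations would not work here: a stack of linear layers collapses into a single linear map, so it is ReLU's kink at zero that lets the second hidden unit carve out the (1, 1) corner.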
