Jeterion85
New member
- Joined: Mar 6, 2020
- Messages: 13
I know that using activation functions in a CNN addresses the linearity problem by introducing non-linearity, and while I can understand this for a simple perceptron, I can't quite see how it works in a CNN (especially when max pooling is applied). Can anyone please give me a mathematical example of how linearity is preserved into the next convolutional layer if I don't apply an activation function?
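To make what I'm asking concrete, here is a minimal NumPy sketch (the input and kernel values are arbitrary placeholders I made up): two convolutions stacked with no activation in between collapse into a single equivalent convolution, which is the linearity I mean.

```python
import numpy as np

# Two conv layers with NO activation in between: y = k2 * (k1 * x).
# Convolution is linear and associative, so there is a single kernel
# k12 = k2 * k1 with y = k12 * x -- the two layers collapse into one.
# (All values below are arbitrary placeholders.)

x  = np.array([1.0, 2.0, -1.0, 0.5, 3.0])   # toy 1-D input signal
k1 = np.array([0.5, -1.0, 2.0])             # first conv layer's kernel
k2 = np.array([1.0, 0.25])                  # second conv layer's kernel

# Stack the two convolutions directly, no activation function between them.
stacked = np.convolve(np.convolve(x, k1, mode="full"), k2, mode="full")

# Pre-combine the kernels into one, then convolve just once.
k12      = np.convolve(k1, k2, mode="full")
combined = np.convolve(x, k12, mode="full")

print(np.allclose(stacked, combined))  # True: both give the same output
```

If a non-linearity like ReLU were applied between the two convolutions, this collapse would no longer hold, which I assume is the whole point of the activation function.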
Thank you in advance!