Deconstructing Deep Learning + δeviations
Implementing activation functions.
Activation functions are an extremely important part of any neural network, but they are much simpler than we tend to make them out to be. Here are implementations of some common ones. First, let's define a test matrix to run them on.
test = [100.0 1.0 0.0 -300.0; 100.0 1.0 0.0 300.0]   # 2×4 matrix with large, zero, and negative entries
relu(mat) = max.(0, mat)        # ReLU: clamp negative entries to zero
lrelu(x) = max.(0.01 .* x, x)   # leaky ReLU: small fixed slope (0.01) for negative inputs
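The definitions above can be checked on the same matrix as everything that follows:
relu(test)
lrelu(test)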
prelu(x, a) = max.(x, x .* a)   # parametric ReLU: slope a (with 0 < a < 1) for negative inputs
prelu(test,0.10)
maxout(x, a) = max.(x, x .* a)   # two-piece maxout over {x, a*x}; see the more general sketch below
maxout(test,0.10)
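The two-argument version above is really just PReLU again. Maxout in the general sense takes the elementwise maximum over several affine pieces; here is a minimal sketch of that idea, where the name maxout_general and the example weights and biases are made up for illustration, not taken from any library:
# Maxout over k affine pieces: max_i(w_i * x + b_i), elementwise.
function maxout_general(x, ws, bs)
    reduce((p, q) -> max.(p, q), [w .* x .+ b for (w, b) in zip(ws, bs)])
end
maxout_general(test, [1.0, 0.1, -0.5], [0.0, 0.0, 1.0])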
σ(x) = 1 ./ (1 .+ exp.(-x))
σ(test)
using Distributions
# Noisy ReLU: add Gaussian noise to every entry before the ReLU cutoff
noisyrelu(x) = max.(0, x .+ rand(Normal(), size(x)))
noisyrelu(test)
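Since the noise is drawn fresh on every call, the output changes between runs; seeding Julia's global RNG makes it reproducible:
using Random
Random.seed!(42)   # fix the RNG so repeated calls produce the same noise
noisyrelu(test)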
softplus(x) = log.(exp.(x) .+ 1)   # softplus: smooth approximation to ReLU
softplus(test)
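With 64-bit floats the direct form survives the ±300 entries in test, but exp can still overflow for larger inputs. A numerically safer variant (the name softplus_stable is just for illustration) uses the identity log(1 + e^x) = max(x, 0) + log1p(e^(-|x|)):
softplus_stable(x) = max.(x, 0) .+ log1p.(exp.(-abs.(x)))
softplus_stable(test)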
# ELU: identity for positive inputs, a*(exp(x) - 1) for negative inputs.
# (max.(x, a .* (exp.(x) .- 1)) is not quite right: for large positive x the exponential term wins.)
elu(x, a) = ifelse.(x .> 0, x, a .* (exp.(x) .- 1))
elu(test,0.1)
swish(x, β) = x ./ (1 .+ exp.(-β .* x))   # swish / SiLU: x * sigmoid(βx)
swish(test,0.1)
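To eyeball how these behave side by side, one quick check (not part of the definitions above) is to loop over the fixed-parameter functions and print each one applied to the test matrix:
# Apply each activation to the test matrix and print the result.
for (name, f) in [("relu", relu), ("lrelu", lrelu), ("sigmoid", σ),
                  ("softplus", softplus), ("swish β=0.1", x -> swish(x, 0.1))]
    println(name, " => ", f(test))
end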