Layer activation functions
- Original Link : https://keras.io/api/layers/activations/
- Last Checked at : 2024-11-24
Usage of activations
Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers:
model.add(layers.Dense(64, activation=activations.relu))
This is equivalent to:
from keras import layers
from keras import activations
model.add(layers.Dense(64))
model.add(layers.Activation(activations.relu))
All built-in activations may also be passed via their string identifier:
model.add(layers.Dense(64, activation='relu'))
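String identifiers are resolved to the corresponding callables via keras.activations.get, which can also be called directly. A minimal sketch (output formatting depends on the backend in use):
>>> from keras import activations
>>> fn = activations.get('relu')  # resolves the string to the relu function
>>> fn([-3.0, 0.0, 3.0])
[0., 0., 3.]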
Available activations
relu
function
keras.activations.relu(x, negative_slope=0.0, max_value=None, threshold=0.0)
Applies the rectified linear unit activation function.
With default values, this returns the standard ReLU activation: max(x, 0), the element-wise maximum of 0 and the input tensor.
Modifying default parameters allows you to use non-zero thresholds, change the max value of the activation, and to use a non-zero multiple of the input for values below the threshold.
Examples
>>> x = [-10, -5, 0.0, 5, 10]
>>> keras.activations.relu(x)
[ 0., 0., 0., 5., 10.]
>>> keras.activations.relu(x, negative_slope=0.5)
[-5. , -2.5, 0. , 5. , 10. ]
>>> keras.activations.relu(x, max_value=5.)
[0., 0., 0., 5., 5.]
>>> keras.activations.relu(x, threshold=5.)
[-0., -0., 0., 0., 10.]
Arguments
- x: Input tensor.
- negative_slope: A float that controls the slope for values lower than the threshold.
- max_value: A float that sets the saturation threshold (the largest value the function will return).
- threshold: A float giving the threshold value of the activation function below which values will be damped or set to zero.
Returns
A tensor with the same shape and dtype as input x.
sigmoid
function
keras.activations.sigmoid(x)
Sigmoid activation function.
It is defined as: sigmoid(x) = 1 / (1 + exp(-x)).
For small values (< -5), sigmoid returns a value close to zero, and for large values (> 5) the result of the function gets close to 1.
Sigmoid is equivalent to a 2-element softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1.
Arguments
- x: Input tensor.
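A quick numeric sketch (values rounded; output formatting depends on the backend):
>>> x = [-20.0, -1.0, 0.0, 1.0, 20.0]
>>> keras.activations.sigmoid(x)
[0.0, 0.269, 0.5, 0.731, 1.0]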
softmax
function
keras.activations.softmax(x, axis=-1)
Softmax converts a vector of values to a probability distribution.
The elements of the output vector are in range [0, 1] and sum to 1.
Each input vector is handled independently. The axis argument sets which axis of the input the function is applied along.
Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution.
The softmax of each vector x is computed as exp(x) / sum(exp(x)).
The input values are the log-odds of the resulting probability.
Arguments
- x: Input tensor.
- axis: Integer, axis along which the softmax is applied.
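A quick sketch showing that each row of the output sums to 1 (values rounded):
>>> x = [[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]]
>>> keras.activations.softmax(x)  # applied along the last axis by default
[[0.09, 0.245, 0.665], [0.333, 0.333, 0.333]]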
softplus
function
keras.activations.softplus(x)
Softplus activation function.
It is defined as: softplus(x) = log(exp(x) + 1).
Arguments
- x: Input tensor.
softsign
function
keras.activations.softsign(x)
Softsign activation function.
Softsign is defined as: softsign(x) = x / (abs(x) + 1).
Arguments
- x: Input tensor.
tanh
function
keras.activations.tanh(x)
Hyperbolic tangent activation function.
It is defined as: tanh(x) = sinh(x) / cosh(x), i.e. tanh(x) = ((exp(x) - exp(-x)) / (exp(x) + exp(-x))).
Arguments
- x: Input tensor.
selu
function
keras.activations.selu(x)
Scaled Exponential Linear Unit (SELU).
The Scaled Exponential Linear Unit (SELU) activation function is defined as:
- scale * x if x > 0
- scale * alpha * (exp(x) - 1) if x < 0
where alpha and scale are pre-defined constants (alpha=1.67326324 and scale=1.05070098).
Basically, the SELU activation function multiplies scale (> 1) with the output of the keras.activations.elu function to ensure a slope larger than one for positive inputs.
The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see the keras.initializers.LecunNormal initializer) and the number of input units is “large enough” (see reference paper for more information).
Arguments
- x: Input tensor.
Notes:
- To be used together with the keras.initializers.LecunNormal initializer.
- To be used together with the dropout variant keras.layers.AlphaDropout (rather than regular dropout), as in the sketch below.
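Following these notes, a minimal usage sketch (the layer width and dropout rate are arbitrary; model is assumed to be an existing Sequential model, as in the usage examples above):
from keras import layers
model.add(layers.Dense(64, activation='selu', kernel_initializer='lecun_normal'))
model.add(layers.AlphaDropout(0.1))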
Reference
elu
function
keras.activations.elu(x, alpha=1.0)
Exponential Linear Unit.
The exponential linear unit (ELU) with alpha > 0 is defined as:
- x if x > 0
- alpha * (exp(x) - 1) if x < 0
ELUs have negative values, which push the mean of the activations closer to zero.
Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative which decreases the variation and the information that is propagated to the next layer.
Arguments
- x: Input tensor.
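A quick numeric sketch with the default alpha=1.0 (values rounded): negative inputs saturate towards -alpha while positive inputs pass through unchanged.
>>> x = [-2.0, -1.0, 0.0, 2.0]
>>> keras.activations.elu(x)
[-0.865, -0.632, 0.0, 2.0]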
Reference
exponential
function
keras.activations.exponential(x)
Exponential activation function.
Arguments
- x: Input tensor.
leaky_relu
function
keras.activations.leaky_relu(x, negative_slope=0.2)
Leaky relu activation function.
Arguments
- x: Input tensor.
- negative_slope: A float that controls the slope for values lower than the threshold.
relu6
function
keras.activations.relu6(x)
Relu6 activation function.
It’s the ReLU function, but truncated to a maximum value of 6.
Arguments
- x: Input tensor.
silu
function
keras.activations.silu(x)
Swish (or Silu) activation function.
It is defined as: swish(x) = x * sigmoid(x).
The Swish (or Silu) activation function is a smooth, non-monotonic function that is unbounded above and bounded below.
Arguments
- x: Input tensor.
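A quick numeric sketch, matching x * sigmoid(x) (values rounded):
>>> x = [-5.0, -1.0, 0.0, 1.0, 5.0]
>>> keras.activations.silu(x)
[-0.033, -0.269, 0.0, 0.731, 4.967]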
Reference
hard_silu
function
keras.activations.hard_silu(x)
Hard SiLU activation function, also known as Hard Swish.
It is defined as:
- 0 if x < -3
- x if x > 3
- x * (x + 3) / 6 if -3 <= x <= 3
It’s a faster, piecewise linear approximation of the silu activation.
Arguments
- x: Input tensor.
Reference
gelu
function
keras.activations.gelu(x, approximate=False)
Gaussian error linear unit (GELU) activation function.
The Gaussian error linear unit (GELU) is defined as:
gelu(x) = x * P(X <= x) where P(X) ~ N(0, 1),
i.e. gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))).
GELU weights inputs by their value, rather than gating inputs by their sign as in ReLU.
Arguments
- x: Input tensor.
- approximate: A bool, whether to enable approximation.
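A quick numeric sketch; for these inputs the exact form and the tanh-based approximation agree to about three decimal places (values rounded):
>>> x = [-1.0, 0.0, 1.0]
>>> keras.activations.gelu(x)
[-0.159, 0.0, 0.841]
>>> keras.activations.gelu(x, approximate=True)
[-0.159, 0.0, 0.841]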
Reference
hard_sigmoid
function
keras.activations.hard_sigmoid(x)
Hard sigmoid activation function.
The hard sigmoid activation is defined as:
- 0 if x <= -3
- 1 if x >= 3
- (x/6) + 0.5 if -3 < x < 3
It’s a faster, piecewise linear approximation of the sigmoid activation.
Arguments
- x: Input tensor.
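A quick numeric sketch covering the three pieces (values rounded):
>>> x = [-4.0, 0.0, 1.5, 4.0]
>>> keras.activations.hard_sigmoid(x)
[0.0, 0.5, 0.75, 1.0]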
Reference
linear
function
keras.activations.linear(x)
Linear activation function (pass-through).
A “linear” activation is an identity function: it returns the input, unmodified.
Arguments
- x: Input tensor.
mish
function
keras.activations.mish(x)
Mish activation function.
It is defined as:
mish(x) = x * tanh(softplus(x))
where softplus is defined as:
softplus(x) = log(exp(x) + 1)
Arguments
- x: Input tensor.
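A quick numeric sketch, matching x * tanh(softplus(x)) (values rounded):
>>> x = [-5.0, -1.0, 0.0, 1.0, 2.0]
>>> keras.activations.mish(x)
[-0.034, -0.303, 0.0, 0.865, 1.944]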
Reference
log_softmax
function
keras.activations.log_softmax(x, axis=-1)
Log-Softmax activation function.
Each input vector is handled independently. The axis argument sets which axis of the input the function is applied along.
Arguments
- x: Input tensor.
- axis: Integer, axis along which the softmax is applied.
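A quick numeric sketch; exponentiating the outputs recovers the softmax distribution (values rounded):
>>> x = [1.0, 2.0, 3.0]
>>> keras.activations.log_softmax(x)
[-2.408, -1.408, -0.408]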
Creating custom activations
You can also use a callable as an activation (in this case it should take a tensor and return a tensor of the same shape and dtype):
model.add(layers.Dense(64, activation=keras.ops.tanh))
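A small sketch of a hand-written activation built from keras.ops (scaled_tanh is a hypothetical name used only for illustration; if the model is later saved and reloaded, the function has to be made known again, e.g. via custom_objects):
from keras import ops
def scaled_tanh(x):
    # hypothetical custom activation: tanh stretched to the range (-2, 2)
    return 2.0 * ops.tanh(x)
model.add(layers.Dense(64, activation=scaled_tanh))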
About “advanced activation” layers
Activations that are more complex than a simple function (e.g. learnable activations, which maintain a state) are available as Advanced Activation layers.
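For instance, a learnable activation such as keras.layers.PReLU (which learns its negative slope during training) is added as its own layer rather than through the activation argument; a minimal sketch:
model.add(layers.Dense(64))
model.add(layers.PReLU())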