HardTanh Activation Function
A HardTanh Activation Function is a Hyperbolic Tangent-based Activation Function that is defined by the piecewise function:
[math]\displaystyle{ f(x) = \begin{cases} +1, & \mbox{ if } x \gt 1 \\ -1, & \mbox{ if } x \lt -1\\ x, & \mbox{ otherwise} \end{cases} }[/math]
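As a quick illustration, a minimal NumPy sketch of this piecewise definition (the hardtanh helper name and its adjustable bounds are illustrative, not taken from a specific library):

import numpy as np

def hardtanh(x, min_val=-1.0, max_val=1.0):
    # Clip element-wise: values above max_val saturate to max_val,
    # values below min_val saturate to min_val, others pass through.
    return np.clip(x, min_val, max_val)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(hardtanh(x))  # [-1.  -0.5  0.   0.5  1. ]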
- Context:
- It can (typically) be used in the activation of HardTanh Neurons.
- Example(s):
- torch.nn.Hardtanh (PyTorch's HardTanh implementation, quoted under References below).
- Counter-Example(s):
- a Tanhshrink Activation Function,
- a Rectified-based Activation Function,
- a Heaviside Step Activation Function,
- a Ramp Function-based Activation Function,
- a Softmax-based Activation Function,
- a Logistic Sigmoid-based Activation Function,
- a Gaussian-based Activation Function,
- a Softsign Activation Function,
- a Softshrink Activation Function,
- an Adaptive Piecewise Linear Activation Function,
- a Bent Identity Activation Function,
- a Maxout Activation Function.
- See: Hyperbolic Tangent Function, Artificial Neural Network, Artificial Neuron, Neural Network Topology, Neural Network Layer, Neural Network Learning Rate.
References
2018
- (PyTorch, 2018) ⇒ http://pytorch.org/docs/master/nn.html#hardtanh
- QUOTE:
class torch.nn.Hardtanh(min_val=-1, max_val=1, inplace=False, min_value=None, max_value=None)
Applies the HardTanh function element-wise.
HardTanh is defined as:
[math]\displaystyle{ f(x) = \begin{cases} +1, & \mbox{ if } x \gt 1 \\ -1, & \mbox{ if } x \lt -1 \\ x, & \mbox{ otherwise} \end{cases} }[/math]
- The range of the linear region [−1,1] can be adjusted.
- min_val – minimum value of the linear region range. Default: -1
- max_val – maximum value of the linear region range. Default: 1
- inplace – can optionally do the operation in-place. Default: False
- Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val.
- Shape:
- Input: [math]\displaystyle{ (N, ∗) }[/math], where ∗ means any number of additional dimensions.
- Output: [math]\displaystyle{ (N,∗) }[/math], same shape as the input.
- Examples:
>>> m = nn.Hardtanh(-2, 2)
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
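For comparison, a minimal sketch with a more recent PyTorch release (assumption: any version after the Variable/Tensor merge in PyTorch 0.4, where autograd.Variable is no longer needed and plain tensors suffice):

import torch
import torch.nn as nn

# Widen the linear region from the default [-1, 1] to [-2, 2].
m = nn.Hardtanh(min_val=-2.0, max_val=2.0)
x = torch.randn(2)
print(x)
print(m(x))  # inputs outside [-2, 2] are clamped to the boundary values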