Scaled Exponential Linear Activation Function


A Scaled Exponential Linear Activation Function (SELU) is a Rectified-based Activation Function that multiplies an Exponential Linear Activation Function by a fixed scale factor, with constants chosen so that activations self-normalize (see the Self-Normalizing Neural Networks paper referenced below).
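
A minimal NumPy sketch of this relationship (the helper names are illustrative; the constants are the ones quoted in the PyTorch and Chainer references below):

import numpy as np

# Constants quoted in the PyTorch/Chainer documentation referenced below.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def elu(x, alpha=1.0):
    # Exponential Linear Unit: x for x >= 0, alpha * (exp(x) - 1) for x < 0.
    return np.where(x >= 0, x, alpha * np.expm1(x))

def selu(x):
    # SELU is an ELU (with alpha fixed) multiplied by a fixed scale factor.
    return SCALE * elu(x, alpha=ALPHA)

print(selu(np.array([-2.0, 0.0, 2.0])))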



References

2018a

  • (PyTorch, 2018) ⇒ http://pytorch.org/docs/master/nn.html#selu Retrieved: 2018-2-10.
    • QUOTE: class torch.nn.SELU(inplace=False)

      Applies element-wise, [math]\displaystyle{ f(x) = \mbox{scale} * (\max(0, x) + \min(0, \mbox{alpha} * (\exp(x) - 1))) }[/math], with alpha=1.6732632423543772848170429916717 and scale=1.0507009873554804934193349852946.

      More details can be found in the paper Self-Normalizing Neural Networks.

      Parameters:

      • inplace (bool, optional) – can optionally do the operation in-place. Default: False
Shape:
  • Input: (N,∗) where * means any number of additional dimensions
  • Output: (N,∗), same shape as the input
Examples:
>>> m = nn.SELU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
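
In PyTorch 0.4 and later, autograd.Variable has been merged into torch.Tensor, so the quoted example can be written with plain tensors; a minimal sketch:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.SELU()
>>> inp = torch.randn(2)                   # plain tensor; no Variable wrapper needed
>>> print(m(inp))
>>> print(torch.nn.functional.selu(inp))   # functional form, same result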

2018b

  • (Chainer, 2018) ⇒ http://docs.chainer.org/en/stable/reference/generated/chainer.functions.selu.html Retrieved: 2018-2-18.
    • QUOTE: chainer.functions.selu(x, alpha=1.6732632423543772, scale=1.0507009873554805)

       Scaled Exponential Linear Unit function.

      For parameters [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \lambda }[/math], it is expressed as

      [math]\displaystyle{ f(x) = \lambda \begin{cases} x, & \mbox{if } x \ge 0 \\ \alpha(\exp(x)-1), & \mbox{if } x \lt 0 \end{cases} }[/math].

      See: https://arxiv.org/abs/1706.02515

      Parameters:

      • x (Variable or numpy.ndarray or cupy.ndarray) – Input variable. A [math]\displaystyle{ (s_1,s_2,\cdots,s_N) }[/math]-shaped float array.
      • alpha (float) – Parameter [math]\displaystyle{ \alpha }[/math].
      • scale (float) – Parameter [math]\displaystyle{ \lambda }[/math].
Returns: Output variable. A [math]\displaystyle{ (s_1,s_2,\cdots,s_N) }[/math]-shaped float array.
Return type: Variable
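
A minimal usage sketch of the quoted function (assuming Chainer and NumPy are installed; the input values are illustrative):

import numpy as np
import chainer.functions as F

x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
y = F.selu(x)    # uses the default alpha and scale quoted above
print(y.data)    # the result is a chainer.Variable; .data holds the ndarray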

2017a

  • (Mate Labs, 2017) ⇒ Mate Labs (Aug 23, 2017). "Secret Sauce behind the beauty of Deep Learning: Beginners guide to Activation Functions"
    • QUOTE:  Scaled Exponential Linear Unit (SELU)

      Range: [math]\displaystyle{ (-\lambda\alpha,+\infty) }[/math]

      [math]\displaystyle{ f(x) = \lambda \begin{cases} \alpha(e^x-1) & \mbox{if } x \lt 0 \\ x & \mbox{if } x \ge 0 \end{cases} }[/math]

      with [math]\displaystyle{ \lambda=1.0507 }[/math] and [math]\displaystyle{ \alpha=1.67326 }[/math]
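
The lower bound of the quoted range follows from the negative branch of this definition: as [math]\displaystyle{ x \to -\infty }[/math],

[math]\displaystyle{ \lambda\,\alpha\,(e^x-1) \;\to\; -\lambda\alpha \approx -(1.0507)(1.67326) \approx -1.758, }[/math]

so the output is bounded below by [math]\displaystyle{ -\lambda\alpha }[/math] and unbounded above, giving the range [math]\displaystyle{ (-\lambda\alpha,+\infty) }[/math] stated above.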

2017b