Softsign Activation Function


A Softsign Activation Function is a neuron activation function that is based on the mathematical function: [math]\displaystyle{ f(x)=\dfrac{x}{1+|x|} }[/math].
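
For illustration, a minimal NumPy sketch of the function and its derivative (the derivative, [math]\displaystyle{ f'(x)=\dfrac{1}{(1+|x|)^2} }[/math], follows from the quotient rule; the function names here are illustrative, not part of any library):

    import numpy as np

    def softsign(x):
        # f(x) = x / (1 + |x|); output is bounded in (-1, 1)
        return x / (1.0 + np.abs(x))

    def softsign_grad(x):
        # f'(x) = 1 / (1 + |x|)^2; peaks at 1 for x = 0 and decays polynomially
        return 1.0 / (1.0 + np.abs(x)) ** 2

    x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    print(softsign(x))       # [-0.909... -0.5   0.   0.5   0.909...]
    print(softsign_grad(x))  # [ 0.0082... 0.25  1.   0.25  0.0082...]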



References

2018a

  • (PyTorch, 2018) ⇒ http://pytorch.org/docs/master/nn.html#softsign
    • QUOTE: class torch.nn.Softsign

      Applies, element-wise, the function [math]\displaystyle{ f(x)=\dfrac{x}{1+|x|} }[/math]

      Shape:

      • Input: [math]\displaystyle{ (N,∗) }[/math] where * means any number of additional dimensions

      • Output: [math]\displaystyle{ (N,∗) }[/math], same shape as the input
      Examples:

      >>> m = nn.Softsign()
      >>> input = autograd.Variable(torch.randn(2))
      >>> print(input)
      >>> print(m(input))

      forward(input)

      Defines the computation performed at every call.

      Should be overridden by all subclasses.

      Note:

      Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling forward directly, since the former takes care of running the registered hooks while the latter silently ignores them.
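
In current PyTorch releases, autograd.Variable is deprecated and plain tensors are used directly, so the quoted example can be restated under that assumption. A minimal sketch that also follows the note above by calling the module instance rather than forward, with torch.nn.functional.softsign as the functional counterpart:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    m = nn.Softsign()
    input = torch.randn(2)    # a plain tensor; wrapping in autograd.Variable is no longer needed
    output = m(input)         # calling the instance runs any registered hooks; m.forward(input) would skip them
    print(input)
    print(output)             # same shape as the input, every value in (-1, 1)
    print(F.softsign(input))  # the functional form computes the same element-wise result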
