Leaky Rectified Linear Neuron

From GM-RKB

A Leaky Rectified Linear Neuron is a Rectified Linear Neuron in which the leakage coefficient is a small non-zero fraction, i.e. [math]\displaystyle{ 0 \lt \alpha \ll 1 }[/math].
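The activation above can be sketched in a few lines of NumPy; the `leaky_relu` name, the default [math]\displaystyle{ \alpha = 0.01 }[/math], and the use of `np.where` are illustrative choices, not part of any particular library's API:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Pass positive inputs through unchanged; scale negative
    # inputs by the small leakage coefficient alpha (0 < alpha << 1).
    return np.where(x >= 0, x, alpha * x)

# Example: negative inputs are attenuated rather than zeroed out,
# so the gradient on the negative side is alpha instead of 0.
print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # → [-0.02  0.    3.  ]
```

Because the negative branch keeps a small slope, units never produce an exactly zero gradient, which is the motivation for the "leaky" variant over the plain rectified linear neuron.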



References

2017

  • (Mate Labs, 2017) ⇒ Mate Labs Aug 23, 2017. Secret Sauce behind the beauty of Deep Learning: Beginners guide to Activation Functions
    • QUOTE: Leaky rectified linear unit (Leaky ReLU) — Leaky ReLUs allow a small, non-zero gradient when the unit is not active; 0.01 is the small non-zero gradient here.

      [math]\displaystyle{ f(x) = \begin{cases} 0.01x, & \mbox{for } x \lt 0 \\ x, & \mbox{for } x \geq 0 \end{cases} }[/math]

      Range: [math]\displaystyle{ (-\infty, +\infty) }[/math]