2017 Programming with a Differentiable Forth Interpreter


Subject Headings: Differentiable Programming.

Notes

Cited By

Quotes

Abstract

There are families of neural networks that can learn to compute any function, provided sufficient training data. However, given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. Here we consider the case of prior procedural knowledge, such as knowing the overall recursive structure of a sequence transduction program or the fact that a program will likely use arithmetic operations on real numbers to solve a task. To this end we present a differentiable interpreter for the programming language Forth. Through a neural implementation of the dual stack machine that underlies Forth, programmers can write program sketches with slots that can be filled with behaviour trained from program input-output data. As the program interpreter is end-to-end differentiable, we can optimize this behaviour directly through gradient descent techniques on user specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex transduction tasks such as sequence sorting or addition with substantially less data and better generalisation over problem sizes. In addition, we introduce neural program optimisations based on symbolic computation and parallel branching that lead to significant speed improvements.
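The central mechanism in the abstract, a stack machine whose state transitions are smooth, can be made concrete with a small sketch. Below is a minimal NumPy illustration (not the authors' code) of a continuous stack: the top-of-stack pointer is a probability distribution over cells, so push and pop become compositions of linear maps and pointwise products, and gradients from a loss on program outputs can flow back into whatever parameters produced the pushed values or pointer weights (for instance, a network filling a sketch slot). The class and method names and the circular-shift pointer update are illustrative assumptions; the paper's machine is a full dual-stack (data and return stack) Forth interpreter, not just this single structure.

```python
import numpy as np

class DifferentiableStack:
    """Continuous relaxation of a stack: the top-of-stack pointer is a
    probability distribution over cells, so push and pop are smooth
    functions of their inputs and gradients can flow through them.
    Illustrative sketch only, not the paper's exact formulation."""

    def __init__(self, depth, width):
        self.memory = np.zeros((depth, width))  # soft stack cells
        self.pointer = np.zeros(depth)          # distribution over cells
        self.pointer[0] = 1.0                   # pointer starts at cell 0

    def _shift(self, offset):
        # Circular shift of the pointer distribution: a fixed linear
        # (permutation) map, hence differentiable.
        self.pointer = np.roll(self.pointer, offset)

    def push(self, value):
        # Advance the pointer, then blend `value` into the cells under
        # it: S <- S * (1 - p) + p * v^T (outer products throughout).
        self._shift(1)
        p = self.pointer[:, None]
        self.memory = self.memory * (1.0 - p) + p * value[None, :]

    def pop(self):
        # Soft read at the pointer, then retreat the pointer.
        value = self.memory.T @ self.pointer
        self._shift(-1)
        return value

# Usage: with one-hot pointers this behaves exactly like a hard stack;
# with soft pointers (e.g. produced by a trained controller filling a
# sketch slot) the same code mixes alternatives differentiably.
stack = DifferentiableStack(depth=8, width=4)
stack.push(np.array([1.0, 0.0, 0.0, 0.0]))
stack.push(np.array([0.0, 1.0, 0.0, 0.0]))
print(stack.pop())  # ~[0., 1., 0., 0.]
print(stack.pop())  # ~[1., 0., 0., 0.]
```

Because one-hot pointers reduce this to an ordinary hard stack, the fully specified parts of a Forth sketch can execute exactly while only the slots remain soft and trainable, which is what lets the interpreter sit inside a larger neural computation graph.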

References

Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel (2017). "Programming with a Differentiable Forth Interpreter." In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017).