Code-to-Vector (code2vec) Neural Network
Revision as of 13:53, 6 July 2022
A Code-to-Vector (code2vec) Neural Network is a Path-Attention Neural Network that represents an arbitrary-sized code snippet as a bag of syntactic paths and learns to aggregate them into a single fixed-size vector.
- AKA: Code2Vec.
- Context:
- Source code available at: https://github.com/tech-srl/code2vec
- It can be trained by a Code-to-Vector Neural Network Training System and evaluated by a Code-to-Vector (code2vec) Benchmarking Task.
- Example(s):
  - the code2vec online demo at https://code2vec.org.
- Counter-Example(s):
  - a Code-to-Sequence (code2seq) Neural Network,
  - a CNN with Attention,
  - an LSTM with Attention.
- See: Attention Mechanism, Code Summarization Task, Bimodal Modelling of Code and Natural Language.
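The aggregation step described above can be sketched as follows. This is a minimal NumPy illustration, not the released implementation: the dimensions are arbitrary, and the context vectors and the global attention vector `a` (learned during training in the real model) are random placeholders here.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # width of a combined context vector (hypothetical)
n_paths = 5    # number of path-contexts extracted from one snippet

# Each path-context (source token, AST path, target token) is assumed to have
# already been embedded and combined into a single context vector c_i.
contexts = rng.standard_normal((n_paths, d))

# Global attention vector 'a'; learned in the real model, random here.
a = rng.standard_normal(d)

# Attention weights: softmax over the dot products a . c_i.
scores = contexts @ a
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The code vector is the attention-weighted sum of the context vectors,
# giving one fixed-size vector regardless of how many paths the snippet has.
code_vector = weights @ contexts
```

Because the softmax weights sum to one, `code_vector` is a convex combination of the path-context vectors, which is what lets the model handle snippets with any number of paths.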
References
2019
- (Alon et al., 2019) ⇒ Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. (2019). “code2vec: Learning Distributed Representations of Code.” In: Proceedings of the ACM on Programming Languages (POPL), Volume 3.
- QUOTE: The goal of this paper is to learn code embeddings, continuous vectors for representing snippets of code. By learning code embeddings, our long-term goal is to enable the application of neural techniques to a wide-range of programming-languages tasks. In this paper, we use the motivating task of semantic labeling of code snippets.