Code-to-Vector (code2vec) Neural Network
Latest revision as of 17:20, 1 August 2022
A Code-to-Vector (code2vec) Neural Network is a Path-Attention Neural Network that represents an arbitrary-sized code snippet as a collection of its syntactic paths and learns to aggregate them, via attention, into a single fixed-size vector.
- AKA: Code2Vec.
- Context:
- Source code available at: https://github.com/tech-srl/code2vec
- It can be trained by a Code-to-Vector Neural Network Training System and evaluated by a Code-to-Vector (code2vec) Benchmarking Task.
- Example(s):
- Counter-Example(s):
- See: Attention Mechanism, Code Summarization Task, Bimodal Modelling of Code and Natural Language.
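The path-attention aggregation described above can be sketched as follows. This is a minimal numpy sketch, not the reference implementation: the weight matrix `W`, the global attention vector `a`, and all dimensions are illustrative stand-ins for the fully connected layer and attention vector described in Alon et al. (2019).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def code2vec_aggregate(path_contexts, W, a):
    """Aggregate path-context embeddings into one fixed-size code vector.

    path_contexts: (n, d_in) array, one row per extracted syntactic path context.
    W:             (d, d_in) fully connected layer weights (illustrative).
    a:             (d,) global attention vector (illustrative).
    Returns a (d,) code vector, regardless of the number n of paths.
    """
    # 1. Project each path context to a "combined context" vector.
    combined = np.tanh(path_contexts @ W.T)   # shape (n, d)
    # 2. Attention weight per context: similarity to the global vector a.
    weights = softmax(combined @ a)           # shape (n,), sums to 1
    # 3. Code vector: attention-weighted sum of the combined contexts.
    return weights @ combined                 # shape (d,)

# Usage with random stand-in values: 4 path contexts in, one 3-d vector out.
rng = np.random.default_rng(0)
n, d_in, d = 4, 6, 3
code_vector = code2vec_aggregate(rng.normal(size=(n, d_in)),
                                 rng.normal(size=(d, d_in)),
                                 rng.normal(size=d))
```

The key property this sketch illustrates is that the output size is fixed: changing the number of input paths changes only the attention distribution, not the shape of the resulting code vector.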
References
2019
- (Alon et al., 2019) ⇒ Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. (2019). “code2vec: Learning Distributed Representations of Code.” In: Proceedings of the ACM on Programming Languages (POPL), Volume 3.
- QUOTE: The goal of this paper is to learn code embeddings, continuous vectors for representing snippets of code. By learning code embeddings, our long-term goal is to enable the application of neural techniques to a wide-range of programming-languages tasks. In this paper, we use the motivating task of semantic labeling of code snippets.