Neural Summarization Algorithm

A Neural Summarization Algorithm is a text summarization algorithm that leverages neural network architectures.
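As a concrete illustration, the sketch below implements a minimal neural extractive summarizer: each sentence is embedded with a pretrained BERT encoder and scored by cosine similarity to the document's mean embedding, and the top-scoring sentences are returned as the summary. The Hugging Face transformers library, the bert-base-uncased checkpoint, and the centroid-scoring heuristic are illustrative assumptions; this is not the BertSum framework of Liu & Lapata (2019) cited below.

<syntaxhighlight lang="python">
# Minimal sketch of a neural extractive summarizer (illustrative, not BertSum):
# embed each sentence with BERT, score it against the document centroid,
# and return the k highest-scoring sentences in their original order.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def summarize(sentences, k=3):
    """Return the k sentences closest to the document centroid, in original order."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        # Use each sentence's [CLS] vector as its representation.
        cls = encoder(**batch).last_hidden_state[:, 0, :]      # (n_sentences, hidden)
    centroid = cls.mean(dim=0, keepdim=True)                   # crude document vector
    scores = torch.nn.functional.cosine_similarity(cls, centroid)
    top = scores.topk(min(k, len(sentences))).indices.tolist()
    return [sentences[i] for i in sorted(top)]

document = [
    "Neural summarizers encode source sentences with a pretrained language model.",
    "Extractive models select the most salient sentences from the document.",
    "Abstractive models instead generate new sentences with a decoder.",
    "The conference venue served excellent coffee that year.",
]
print(summarize(document, k=2))
</syntaxhighlight>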



== References ==

=== 2019 ===
* ([[Liu & Lapata, 2019]]) ⇒ [[Yang Liu]], and [[Mirella Lapata]]. ([[2019]]). “Text Summarization with Pretrained Encoders.” In: [https://arxiv.org/pdf/1908.08345.pdf arXiv:1908.08345] [doi:10.48550/arXiv.1908.08345]
** QUOTE: [[Bidirectional Encoder Representations from Transformer (BERT)]] represents the latest incarnation of [[pretrained language models]], which have recently advanced a wide range of [[natural language processing tasks]]. In this paper, they demonstrate how BERT can be applied in [[text summarization]] using a [[general framework]] for both [[extractive summarization]] and [[abstractive summarization]] models. They introduce a novel [[document-level encoder]] based on BERT for this purpose.