2024 LegalProBERTClassificationofLeg

* ([[2024_LegalProBERTClassificationofLeg|Tewari, 2024]]) ⇒ [[author::Amit Tewari]]. ([[year::2024]]). “[https://arxiv.org/pdf/2404.10097.pdf LegalPro-BERT: Classification of Legal Provisions by Fine-tuning BERT Large Language Model].”  [http://dx.doi.org/10.48550/arXiv.2404.10097 doi:10.48550/arXiv.2404.10097]  


<B>Subject Headings:</B>  


== Notes ==
* [[The paper]] introduces [[LegalPro-BERT]], a fine-tuned [[BERT (model)|BERT]] model specialized in classifying [[legal provision|legal provisions]] within [[contract]]s, demonstrating significant [[performance improvement]]s over existing [[benchmark]]s.
* [[The research]] employs the [[LEDGAR dataset]], sourced from [[LexGLUE]], consisting of 80,000 labeled [[paragraph]]s from [[legal contract|legal contracts]], to fine-tune and evaluate the [[LegalPro-BERT]] model.
* [[The paper]] also discusses potential future [[work]], suggesting further exploration of fine-tuning with different subsets of [[word]]s and applying the model to other [[domain]]s beyond [[legal]], such as [[finance]] and [[healthcare]], which could benefit from specialized [[document classification]].
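The fine-tuning recipe summarized in these notes (a pre-trained [[BERT (model)|BERT]] backbone with a classification head over [[LEDGAR dataset|LEDGAR]] provision labels) can be sketched with the Hugging Face transformers API. This is a minimal offline sketch, not the paper's setup: the tiny randomly initialized configuration is illustrative, and the 100-label count is an assumption taken from LexGLUE's LEDGAR task; a real run would start from a pre-trained checkpoint such as bert-base-uncased.

```python
# Sketch of a BERT-for-provision-classification setup in the spirit of
# LegalPro-BERT. The config below is deliberately tiny and randomly
# initialized so the example runs offline; the 100-label count for
# LEDGAR is an assumption taken from LexGLUE, not from the paper.
import torch
from transformers import BertConfig, BertForSequenceClassification

NUM_PROVISION_LABELS = 100  # assumed LEDGAR label count

config = BertConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=NUM_PROVISION_LABELS,
)
model = BertForSequenceClassification(config)

# One "tokenized contract paragraph" (dummy token ids) with a gold label;
# passing labels makes the model also return a cross-entropy loss, which
# is what fine-tuning would minimize over the labeled LEDGAR paragraphs.
input_ids = torch.randint(0, 1000, (1, 32))
labels = torch.tensor([7])  # hypothetical provision class id

out = model(input_ids=input_ids, labels=labels)
print(out.logits.shape)  # torch.Size([1, 100])
```

Fine-tuning would then simply backpropagate `out.loss` over batches of labeled paragraphs, updating both the backbone and the classification head.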


== Cited By ==
* http://scholar.google.com/scholar?q=%222024%22+LegalPro-BERT%3A+Classification+of+Legal+Provisions+by+Fine-tuning+BERT+Large+Language+Model


== Quotes ==


=== Abstract ===


A [[contract]] is a [[type]] of [[legal document]] commonly used in [[organization]]s. </s>
[[Contract review]] is an integral and repetitive process to avoid [[business risk]] and [[liability]]. </s>
[[Contract analysis]] requires the identification and classification of key [[legal provision|provision]]s and [[paragraph]]s within an [[agreement]]. </s>
Identification and validation of [[contract clause]]s can be a time-consuming and challenging task demanding the services of trained and expensive [[lawyer]]s, [[paralegal]]s or other [[legal assistant]]s. </s>
Classification of [[legal provision]]s in [[contract]]s using [[artificial intelligence]] and [[natural language processing]] is complex due to the requirement of domain-specialized [[legal language]] for [[model training]] and the scarcity of sufficient [[labeled data]] in the [[legal domain]]. </s>
Using [[general-purpose model]]s is not effective in this context due to the use of specialized legal vocabulary in [[contract]]s which may not be recognized by a general model. </s>
To address this problem, we propose the use of a [[pre-trained large language model]] which is subsequently calibrated on [[legal taxonomy]]. </s>
We propose [[LegalPro-BERT]], a [[BERT (model)|BERT]] [[transformer architecture]] model that we fine-tune to efficiently handle the [[classification task]] for [[legal provision]]s. </s>
[[We]] conducted [[experiment]]s to measure and compare [[metric]]s with current [[benchmark]] results. </s>
[[We]] found that [[LegalPro-BERT]] outperforms the previous [[benchmark]] used for [[comparison]] in this [[research]]. </s>
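Benchmark comparisons of this kind are typically reported as micro- and macro-averaged [[F1 score|F1]] (the convention across [[LexGLUE]] tasks); that the paper uses exactly these metrics is an assumption here. A minimal sketch of the two averages on made-up label ids:

```python
# Micro- vs macro-F1 on a toy multi-class example; the label ids and
# predictions are made up for illustration, not taken from the paper.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]  # gold provision classes
y_pred = [0, 1, 2, 1, 1, 0]  # model predictions (one error at index 3)

micro = f1_score(y_true, y_pred, average="micro")  # pools counts globally
macro = f1_score(y_true, y_pred, average="macro")  # unweighted mean over classes
print(round(micro, 3), round(macro, 3))  # 0.833 0.822
```

Macro-F1 weights rare provision types equally with common ones, which matters for a skewed label distribution like LEDGAR's.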


== References ==
{{#ifanon:|



Revision as of 23:07, 24 April 2024

{| class="wikitable"
! Author !! Title !! Doi !! Year
|-
| Amit Tewari || LegalPro-BERT: Classification of Legal Provisions by Fine-tuning BERT Large Language Model || 10.48550/arXiv.2404.10097 || 2024
|}