Clause-Risk Identification Benchmark


A Clause-Risk Identification Benchmark is a legal AI benchmark that evaluates systems on clause-level risk identification tasks, i.e., on locating the clauses in a contract that carry legal risk and labeling each with the relevant risk category (e.g., termination, indemnification).
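
As a minimal illustration, the sketch below shows how such a benchmark is commonly scored: each contract clause is paired with a gold risk label, a system predicts a label per clause, and micro-averaged precision, recall, and F1 are computed over the flagged clauses. The example clauses, risk categories, and the `predict_risk` stub are hypothetical placeholders, not drawn from CUAD, ContractEval, or the other datasets cited below.

```python
# Minimal sketch of clause-level risk identification scoring.
# The clauses, labels, and predict_risk() stub are hypothetical
# placeholders, not taken from any cited dataset.
from collections import Counter

# Each benchmark item pairs a contract clause with a gold risk label;
# None marks a clause that carries no flagged risk.
gold = [
    ("The Supplier may terminate this Agreement at any time.", "termination"),
    ("All notices shall be sent to the addresses listed above.", None),
    ("Licensee shall indemnify Licensor against all claims.", "indemnification"),
    ("Liability of either party shall be capped at fees paid.", "liability_limitation"),
]

def predict_risk(clause: str):
    """Stand-in for the system under evaluation (hypothetical rules)."""
    if "terminate" in clause:
        return "termination"
    if "indemnify" in clause:
        return "indemnification"
    return None

# Micro-averaged counts over all (clause, gold label) pairs.
counts = Counter()
for clause, label in gold:
    pred = predict_risk(clause)
    if pred is not None and pred == label:
        counts["tp"] += 1          # correctly flagged and categorized
    else:
        if pred is not None:
            counts["fp"] += 1      # flagged wrongly (or wrong category)
        if label is not None:
            counts["fn"] += 1      # risky clause missed

precision = counts["tp"] / max(counts["tp"] + counts["fp"], 1)
recall = counts["tp"] / max(counts["tp"] + counts["fn"], 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

Actual benchmarks typically add span-level matching and per-category breakdowns; the sketch only captures the overall clause-in, label-out framing of the task.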



References

2025

  • Liu, Shuang, Zelong Li, Ruoyun Ma, Haiyan Zhao, and Mengnan Du. *ContractEval: Benchmarking LLMs for Clause-Level Legal Risk Identification in Commercial Contracts*. arXiv preprint arXiv:2508.03080, 2025. https://arxiv.org/abs/2508.03080.

2024

  • Bizzaro, Pietro Giovanni, Elena Della Valentina, Maurizio Napolitano, Nadia Mana, and Massimo Zancanaro. *Annotation and Classification of Relevant Clauses in Terms-and-Conditions Contracts*. In *Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)*, 1209–1214. Torino, Italy: ELRA and ICCL, 2024. https://aclanthology.org/2024.lrec-main.108/.

2023

  • Impedovo, Angelo, Giuseppe Rizzo, and Angelo Mauro. *Towards Open-Set Contract Clause Recognition*. In *2023 IEEE International Conference on Big Data (BigData)*, 1190–1199. IEEE, 2023. doi:10.1109/BigData59044.2023.10386681. https://ieeexplore.ieee.org/document/10386681/.

2021

  • Hendrycks, Dan, Collin Burns, Anya Chen, and Spencer Ball. *CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review*. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS 2021 Datasets and Benchmarks, Round 1)*, 1–12, 2021. https://arxiv.org/abs/2103.06268.

2017

  • Lippi, Marco, Przemysław Pałka, Giuseppe Contissa, Francesca Lagioia, Hans-Wolfgang Micklitz, Giovanni Sartor, and Paolo Torroni. *Automated Detection of Unfair Clauses in Online Consumer Contracts*. In *Legal Knowledge and Information Systems: JURIX 2017: The Thirtieth Annual Conference*, edited by Adam Wyner and Giovanni Casini, 145–154. Frontiers in Artificial Intelligence and Applications 302. Amsterdam: IOS Press, 2017. doi:10.3233/978-1-61499-838-9-145. https://www.researchgate.net/publication/389219290_Automated_Detection_of_Unfair_Clauses_in_Online_Consumer_Contracts.