2023 FineTuningPretrainedLanguageMod


Subject Headings: Dialogue Summarization, RDASS Text Summarization Measure, Customer Support Dialog Session, Korean NLP.

Notes

Cited By

Quotes

Abstract

The application of pretrained language models (PLMs) in real-world business domains has gained significant attention. However, research on the practical use of generative artificial intelligence (AI) to address real-world downstream tasks is limited. This study aims to enhance the routine tasks of customer service (CS) representatives, particularly in the finance domain, by applying a fine-tuning method to dialogue summarization in CS centers. KakaoBank handles an average of 15,000 CS calls daily. By employing a fine-tuning method that uses real-world CS dialogue data, we can reduce the time required to summarize CS dialogues and standardize summarization skills. To ensure effective dialogue summarization in the finance domain, PLMs should acquire additional knowledge and skills, such as specific knowledge of financial products, problem-solving abilities, and the capacity to handle emotionally charged customers. In this study, we developed a reference fine-tuned model using Polyglot-Ko (5.8B) as the baseline PLM and a dataset containing a wide range of zero-shot instructions, a portion of which were summarization instructions. We compared this reference model with another model fine-tuned using KakaoBank’s CS dialogues and summarization data as the instruct dataset. The results demonstrated that the model fine-tuned on KakaoBank’s internal datasets outperformed the reference model, with improvements of 199% in ROUGE-L and 12% in RDASS. This study emphasizes the significance of task-specific fine-tuning with appropriate instruct datasets for effective performance on specific downstream tasks. Given its practical utility, we suggest that fine-tuning with real-world instruct datasets is a powerful and cost-effective technique for developing generative AI in the business domain.
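The fine-tuning recipe described in the abstract can be sketched as follows, assuming a standard Hugging Face Transformers supervised fine-tuning workflow. The dataset file, field names, prompt template, and hyperparameters below are illustrative assumptions, not the authors' actual configuration; only the Polyglot-Ko 5.8B checkpoint identifier is a real published model name.

    # Minimal sketch: supervised fine-tuning of a causal LM such as
    # Polyglot-Ko (5.8B) on dialogue-summarization instruction pairs.
    # File name, field names, prompt format, and hyperparameters are
    # illustrative assumptions, not the paper's configuration.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "EleutherAI/polyglot-ko-5.8b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    def to_features(example):
        # Concatenate an instruction-style prompt and the target summary
        # into one causal-LM training sequence (hypothetical field names).
        text = (f"### Dialogue:\n{example['dialogue']}\n"
                f"### Summary:\n{example['summary']}")
        return tokenizer(text, truncation=True, max_length=1024)

    train_set = load_dataset("json", data_files="cs_dialogues.jsonl")["train"]
    train_set = train_set.map(to_features, remove_columns=train_set.column_names)

    # In practice a 5.8B model would need multi-GPU placement or
    # parameter-efficient tuning; both are omitted here for brevity.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="polyglot-ko-cs-summarizer",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8,
                               num_train_epochs=3),
        train_dataset=train_set,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

The RDASS measure reported alongside ROUGE-L can likewise be sketched under one commonly cited formulation (Lee et al., 2020): the score averages the cosine similarity between the generated summary and the reference summary with the cosine similarity between the generated summary and the source dialogue, so a summary is rewarded both for matching the reference and for staying faithful to the source. The sentence encoder named below is an assumption, not the one used in the paper.

    # Sketch of RDASS (Reference and Document Aware Semantic Score):
    # the average of cos(v_p, v_r) and cos(v_p, v_d), where v_p, v_r, v_d
    # are embeddings of the generated summary, the reference summary,
    # and the source dialogue. The encoder choice is an assumption.
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import cos_sim

    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    def rdass(generated: str, reference: str, document: str) -> float:
        v_p, v_r, v_d = encoder.encode([generated, reference, document])
        return float((cos_sim(v_p, v_r) + cos_sim(v_p, v_d)) / 2)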

References

(Yun et al., 2023) ⇒ Jiseon Yun, Jae Eui Sohn, and Sunghyon Kyeong. (2023). "Fine-Tuning Pretrained Language Models to Enhance Dialogue Summarization in Customer Service Centers." doi:10.1145/3604237.3626838