Advanced Medical Question Classification Approach SEKE2023

Explore a cutting-edge approach for medical question classification presented at the 35th International Conference on Software Engineering & Knowledge Engineering (SEKE2023). Learn about the innovative methods of prompt tuning and contrastive learning utilized to enhance classification accuracy and efficiency.

  • Medical
  • Classification
  • SEKE2023
  • Prompt Tuning
  • Contrastive Learning




Presentation Transcript


  1. The 35th International Conference on Software Engineering & Knowledge Engineering (SEKE2023). A Medical Question Classification Approach Based on Prompt Tuning and Contrastive Learning. Qian Wang, Cheng Zeng*, Yujin Liu, Peng He. 1. School of Computer Science and Information Engineering, Hubei University, Wuhan, China; 2. School of Artificial Intelligence, Hubei University, Wuhan, China; 3. School of Cyber Science and Technology, Hubei University, Wuhan, China. *Corresponding author, e-mail: zc@hubu.edu.cn. Reporter: Qian Wang

  2. Contents: 01 Background · 02 Methods · 03 Evaluation and Result · 04 Conclusion

  3. 1.Background
The challenges of medical question classification:
- The scarcity of professional medical guides.
- The ambiguous phrasing of medical questions.
The shortcomings of PLMs:
- The pre-training task does not match the downstream task.
- A huge waste of deployment resources.

  4. 1.Background
Contributions:
- For the medical question classification task, we propose a model that combines prompt learning and supervised contrastive sample learning.
- We construct a prompt template as part of the original input to assist ERNIE 3.0 in achieving excellent classification results in our experiments.
- In the training process, we adopt a contrastive sample learning strategy to alleviate the problem of insufficient medical samples.

  5. 2.Methods
Prompt tuning. The input X is wrapped in a template, X_prompt = T(X), e.g. "[CLS] This is a [MASK] News [SEP] Swimming gold medal". A verbalizer v maps each label to a label word, e.g. v(sport) = "exercise". The probability of a label y is computed by the MLM head at the [MASK] position:

  P(y | x) = P([MASK] = v(y) | x_prompt) = exp(W · E_v(y)) / Σ_{y' ∈ Y} exp(W · E_v(y'))

where W and E_v(y) represent the hidden vector of [MASK] and the embedding of the label word, respectively.
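The verbalizer computation above can be sketched in a few lines (a minimal NumPy illustration; the token ids and label words below are hypothetical stand-ins, not the paper's actual vocabulary):

```python
import numpy as np

def verbalizer_probs(mask_logits, label_word_ids):
    """Map MLM logits at the [MASK] position to label probabilities.

    mask_logits: (vocab_size,) scores over the whole vocabulary at [MASK].
    label_word_ids: the verbalizer v(y), mapping each label to its
                    label-word token id.
    Returns P(y|x): a softmax taken over the label-word logits only.
    """
    labels = list(label_word_ids)
    scores = np.array([mask_logits[label_word_ids[y]] for y in labels])
    scores = scores - scores.max()               # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(labels, probs))

# Hypothetical 10-token vocabulary; "exercise" (label: sport) is token 3.
logits = np.zeros(10)
logits[3] = 2.0                                  # MLM head favours "exercise"
print(verbalizer_probs(logits, {"sport": 3, "diet": 7}))
```

In practice the `mask_logits` row would come from the MLM head of the pre-trained model (here, ERNIE 3.0) evaluated at the [MASK] position of X_prompt.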

  6. 2.Methods
Contrastive learning. (Figure: for each anchor, positive samples are pulled closer while negative samples are pushed away.)
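The pull-together/push-apart idea in this slide corresponds to a supervised contrastive loss over a batch. A minimal NumPy sketch, assuming same-label samples act as positives for each anchor; the temperature value is an assumption, not the paper's setting:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of feature vectors.

    For each anchor, the other samples with the same label are positives;
    every remaining sample acts as a negative.
    """
    feats = np.asarray(features, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature            # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-contrast
    row_max = sim.max(axis=1, keepdims=True)       # numerical stability
    log_prob = sim - row_max - np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)
    # mean negative log-probability over each anchor's positives
    contrib = np.where(pos_mask > 0, log_prob, 0.0)
    per_anchor = -contrib.sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1.0)
    return per_anchor.mean()
```

When an anchor's positives sit close in feature space and its negatives far away, the loss approaches zero, which is exactly the "get closer / keep distant" behaviour the slide depicts.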

  7. 2.Methods
The framework of ERNIE 3.0-CL with prompt tuning. (Figure: the framework has two parts, a Prompt Addition module and a Contrastive Sample Learning module. Every input shares the prompt template "This is a [MASK][MASK] intention". The target sample "[CLS]This is a [MASK][MASK] intention [SEP] best diet", a positive sample "[CLS]This is a [MASK][MASK] intention [SEP] good diet", and a negative sample "[CLS]This is a [MASK][MASK] intention [SEP] bad diet" are each encoded by ERNIE 3.0. The [CLS] feature representations of the target and positive samples get closer, while the negative sample is kept distant. Training fuses a cross-entropy loss with the contrastive loss: loss = L_CE + λ · L_CL.)
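The fused objective at the bottom of the framework can be sketched as follows. This is an illustrative sketch: the weight name `lam`, its default value, and the toy probabilities are assumptions, with the fusion ratio being the hyperparameter the paper tunes:

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the gold label under predicted probabilities."""
    return -np.log(probs[target_idx] + 1e-12)   # epsilon guards against log(0)

def fused_loss(probs, target_idx, contrastive_term, lam=0.5):
    """loss = L_CE + lam * L_CL, matching the fusion in the framework slide."""
    return cross_entropy(probs, target_idx) + lam * contrastive_term

# Toy example: classifier puts 0.7 on the gold class, contrastive term is 0.3.
print(fused_loss(np.array([0.7, 0.2, 0.1]), 0, contrastive_term=0.3))
```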

  8. 3.Evaluation and Result
Experimental dataset and settings:
- We use a crawler tool to collect texts from mainstream medical websites and construct a medical question dataset for experiments, with manual annotation.
- Based on the crawled data, we integrated and labeled five categories: disease diet, seasonal diet, sports and fitness, weight loss and beauty, and dietary contraindications.

  Label                     | Example text
  seasonal diet             | The principles of winter health care.
  disease diet              | What to eat after an injury or fracture.
  dietary contraindications | People who should not eat watermelon.
  sports and fitness        | Football promotes good health.
  weight loss and beauty    | No meals after 7:00 p.m.

  9. 3.Result

  Paradigm      | Model            | Acc    | F1
  Fine-tuning   | BERT             | 0.9213 | 0.9240
  Fine-tuning   | BERT-Rdrop       | 0.9228 | 0.9284
  Fine-tuning   | BERT-SimCSE      | 0.9259 | 0.9306
  Fine-tuning   | RoBERTa          | 0.9265 | 0.9303
  Fine-tuning   | RoBERTa-Rdrop    | 0.9276 | 0.9306
  Fine-tuning   | RoBERTa-SimCSE   | 0.9270 | 0.9318
  Fine-tuning   | RoBERTa-CL       | 0.9291 | 0.9323
  Fine-tuning   | ERNIE 3.0        | 0.9281 | 0.9334
  Fine-tuning   | ERNIE 3.0-Rdrop  | 0.9286 | 0.9314
  Fine-tuning   | ERNIE 3.0-SimCSE | 0.9307 | 0.9327
  Prompt-tuning | ERNIE 3.0-CL     | 0.9365 | 0.9391

The prompt-tuning-based ERNIE 3.0-CL method proposed in this paper improves accuracy by 0.8 percentage points and the F1 value by 0.6 percentage points. Our proposed ERNIE 3.0-CL also compares favorably with the other methods that fuse ERNIE 3.0 with contrastive learning, ERNIE 3.0-Rdrop and ERNIE 3.0-SimCSE.

  10. 3.Evaluation and Result
Hyperparameter effects. (Figure: (a) loss function fusion ratio; (b) temperature coefficient.)

  11. 4.Conclusion
We design a novel supervised contrastive learning method for medical question classification using the ERNIE 3.0 pre-trained language model and prompt fine-tuning. We evaluate the efficiency of our model on the medical question classification task and outperform recent work in experimental tasks. In subsequent work, we will pursue the automatic generation of prompt templates and the optimization of sampling strategies, and try experimenting with different tasks.

  12. Thanks! Qian Wang wqmaster@stu.hubu.edu.cn
