Detecting Toxicity in Multiplayer Online Games through Annotation System


Explore the research on toxicity detection in multiplayer online games focusing on MOBA games like League of Legends. The study introduces an annotation system to identify and classify toxic behavior in player communication during gameplay, with a dataset from DotA as a case study. The paper discusses the challenges and techniques involved in modeling game communication for toxicity detection.

  • Toxicity Detection
  • Multiplayer Games
  • MOBA
  • Annotation System
  • Game Communication


Presentation Transcript


  1. Toxicity Detection in Multiplayer Online Games. Marcus Märtens, Siqi Shen, Alexandru Iosup, Fernando Kuipers. Best Paper Award, NetGames 2015

  2. INTRODUCTION

  3. Toxicity
     • Perceived hostility in a player community
     • Triggered by game events such as kills or other mistakes
     • Players verbally assault other players, degrading the QoE
     • Focus on MOBA games (such as LoL)
     • Devise an annotation system

  4. Contribution
     • Focus on MOBA games, such as LoL, Dota 2, etc.
     • Devise an annotation system for the chats of multiplayer online games
     • Detect toxicity

  5. DATA

  6. Dataset
     • A dataset from DotA, collected from DotAlicious (one of the DotA platforms)
     • 2nd to 6th of February 2012
     • Around 12K matches
     • Contains both team channel and public channel

  7. MODELLING GAME COMMUNICATION

  8. Annotation System
     • The language used is abbreviated and does not follow grammar
     • Little variety in topic: a repetitive and restricted vocabulary
     • Derive a system to classify the most frequent words and their misspelled variants
     • Only English is considered

  9. Rules
     • Pattern: the word includes or starts with certain symbols
     • List: the word is a member of a pre-defined list
     • Letterset: the set of letters of the word equals a pre-defined set

  10. Letterset
     • Useful to capture unintentionally or intentionally misspelled words
     • Example: noob, a common insult derived from newbie, with letterset {n, o, b}
     • Matches variants such as NOOOOOOOOOOOb and nooobbbbb; 224 different ways of writing noob appear in the dataset
     • Low false positive rate
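A minimal sketch of how these three rule types could be applied to chat words is shown below, in Python. The example word lists, the category labels, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical rule tables: a Pattern prefix check, a pre-defined List,
# and a Letterset mapping (only "noob" is shown, as on the slide).
SMILEY_PREFIXES = (":", ";", "xD")          # Pattern: starts with certain symbols
SLANG_LIST = {"gg", "gl", "hf", "wp"}       # List: member of a pre-defined list
LETTERSETS = {frozenset("nob"): "noob"}     # Letterset: letters of the word equal a set

def letterset(word: str) -> frozenset:
    """Reduce a chat word to the set of letters it contains."""
    return frozenset(c for c in word.lower() if c.isalpha())

def annotate(word: str) -> str | None:
    """Return an annotation label for a chat word, or None if no rule matches."""
    if word.startswith(SMILEY_PREFIXES):
        return "smiley"
    if word.lower() in SLANG_LIST:
        return "slang"
    return LETTERSETS.get(letterset(word))

# The letterset rule catches intentional and unintentional misspellings:
for w in ["NOOOOOOOOOOOb", "nooobbbbb", "gg", ":)", "nice"]:
    print(w, "->", annotate(w))   # noob, noob, slang, smiley, None
```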

  11. Percentage of Annotation
     • 7,042,112 words in total, 286,654 distinct
     • 16% of distinct words are annotated
     • These annotated words cover 60% of all word occurrences

  12. Annotation Categories

  13. Different Chat Modes
     • All-chat: visible to all players
     • Ally-chat: visible only to friendly players
     • Player-to-player chat could not be obtained
     • 90% ally-chat, 10% all-chat

  14. Average Use of Annotated Words

  15. Average Use of Annotated Words

  16. Average Use of Annotated Words

  17. Toxicity Detection
     • Try to detect toxicity towards teammates, which is different from simply detecting bad words
     • Bad words can also be used for other purposes
     • Use n-grams: extract sequences of 1 to 4 consecutive words that contain bad words
     • Manually determine whether they are toxic (list available online)
     • 45 unigrams, 21 bigrams, 32 trigrams, 36 quadgrams
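A minimal sketch of the n-gram extraction step, assuming whitespace tokenization in Python; the bad-word set and the helper name are hypothetical, and the toxic/non-toxic labeling still happens by hand as described on the slide.

```python
BAD_WORDS = {"noob", "idiot"}   # hypothetical annotated bad words

def ngrams_with_bad_words(message: str, max_n: int = 4):
    """Yield every 1- to max_n-gram of a chat message that contains a bad word."""
    tokens = message.lower().split()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if any(tok in BAD_WORDS for tok in gram):
                yield " ".join(gram)

# Candidate n-grams are then labeled toxic or non-toxic manually,
# e.g. "what a noob mid" vs. "sorry i played like a noob".
for g in ngrams_with_bad_words("omg what a noob mid"):
    print(g)
```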

  18. ANALYSIS OF GAME TOXICITY AND SUCCESS

  19. Trigger of Toxicity
     • 6528 out of 10305 matches contain at least one toxic remark
     • At most 5 per match in 90% of cases (outliers with up to 22)
     • 16950 toxic remarks in total
     • Try to find the events that trigger toxicity: kill events?

  20. Game Success and Profanity
     • Does losing cause more profanity?
     • Only players with over 10 matches are considered: 4900 distinct players
     • Calculate win rate and the words they used
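A small sketch of the per-player aggregation this implies, assuming Python and a hypothetical per-match record format; the field names and helper name are assumptions, not the paper's code.

```python
from collections import defaultdict

# Hypothetical records: (player_id, won, n_annotated_bad_words, n_words) per match.
def player_stats(records, min_matches=10):
    """Aggregate win rate and bad-word rate per player with more than min_matches matches."""
    acc = defaultdict(lambda: [0, 0, 0, 0])        # matches, wins, bad words, words
    for pid, won, bad, words in records:
        a = acc[pid]
        a[0] += 1; a[1] += int(won); a[2] += bad; a[3] += words
    return {pid: {"winrate": w / m, "bad_word_rate": b / max(t, 1)}
            for pid, (m, w, b, t) in acc.items() if m > min_matches}
```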

  21. Game Success and Profanity
     • The winning team shows lower toxicity overall, and even lower toxicity in the late stage of the match

  22. Game Success and Profanity

  23. Predicting Match Outcome
     • Try to predict the outcome using all words
     • Uses a linear SVM
     • Feature set based on Term Frequency-Inverse Document Frequency (TF-IDF)
     • Default parameters and no optimization
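A minimal sketch of this setup with scikit-learn, on toy placeholder data; the chat strings and labels below are invented for illustration, not taken from the DotAlicious dataset.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy placeholder data: one concatenated chat log per team per match,
# labeled 1 if that team won the match and 0 otherwise.
chats  = ["gg wp nice push top", "report mid noob feeder", "good ward gank now", "wtf team so bad"]
labels = [1, 0, 1, 0]

# TF-IDF features fed into a linear SVM, default parameters and no tuning.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(chats, labels)
print(model.predict(["nice teamwork gg"]))
```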

  24. Predicting Match Outcome
     • Bad words are not a good indicator
     • Slang is a pretty good one: it indicates more teamwork
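One way such indicators could be inspected is through the weights of the linear SVM; the sketch below, again on toy data and with assumed pipeline step names, shows the idea rather than the paper's exact analysis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

chats  = ["gg wp nice push", "bad team report mid", "gl hf good ward", "so bad wtf noob"]
labels = [1, 0, 1, 0]   # 1 = team won (toy data)

model = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(chats, labels)
words = np.array(model.named_steps["tfidfvectorizer"].get_feature_names_out())
order = np.argsort(model.named_steps["linearsvc"].coef_.ravel())
print("most indicative of losing:", words[order[:5]])
print("most indicative of winning:", words[order[-5:]])
```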

  25. CONCLUSION

  26. Conclusion
     • Distinguish simple swearing from deliberate insults
     • A building block for a game monitoring system
     • Predict the outcome using chat messages
     • Point out that players who are successful in-game can still be toxic
     • Toxicity could be considered in matchmaking systems
