Entity/Event-Level Sentiment Detection and Inference in the Intelligent Systems Program

Explore the research on entity/event-level sentiment detection and inference conducted by Dr. Lingjia Deng and the Intelligent Systems Program at the University of Pittsburgh. The proposal targets both explicit and implicit sentiments, develops computational models to infer implicit opinions, and describes the annotations and corpus used for sentiment inference.

  • Sentiment Analysis
  • Intelligent Systems
  • Sentiment Detection
  • Lingjia Deng
  • University of Pittsburgh

Presentation Transcript


  1. Entity/Event-Level Sentiment Detection and Inference Lingjia Deng Intelligent Systems Program University of Pittsburgh Dr. Janyce Wiebe, Intelligent Systems Program, University of Pittsburgh Dr. Rebecca Hwa, Intelligent Systems Program, University of Pittsburgh Dr. Yuru Lin, Intelligent Systems Program, University of Pittsburgh Dr. William Cohen, Machine Learning Department, Carnegie Mellon University 1

  2. A World Of Opinions NEWS REVIEW EDITORIALS TWITTER BLOGS 3

  3. Motivation "... people protest the country's same-sex marriage ban ..." The protest expresses that the people are negative toward the ban and, implicitly, positive toward same-sex marriage. 6

  4. Explicit Opinions Explicit opinions are revealed by opinion expressions: "protest" reveals the people's negative sentiment toward the same-sex marriage ban. 7

  5. Implicit Opinions Implicit opinions are not revealed by opinion expressions but are implied by the text; the system needs to infer them. Here, the people's positive sentiment toward same-sex marriage is implicit. 8

  6. Goal: Explicit and Implicit Sentiments Explicit: negative sentiment (people toward the same-sex marriage ban). Implicit: positive sentiment (people toward same-sex marriage). 9

  7. Goal: Entity/Event-Level Sentiments PositivePair(people, same-sex marriage) NegativePair(people, same-sex marriage ban) 10

  8. Three Questions to Solve Is there any corpus annotated with both explicit and implicit sentiments? No; this proposal develops one. Are there inference rules defining how to infer implicit sentiments? Yes (Wiebe and Deng, arXiv, 2014). How do we incorporate the inference rules into computational models? This proposal investigates that. 11

  9. Completed and Proposed Work Corpus (MPQA 3.0): expert annotations on 70 documents (Deng and Wiebe, NAACL 2015); non-expert annotations on hundreds of documents. Sentiment inference on +/-effect events & entities: a corpus of +/-effect event sentiments (Deng et al., ACL 2013); a model validating rules (Deng and Wiebe, EACL 2014); a model inferring sentiments (Deng et al., COLING 2014). Sentiment inference on general events & entities (joint models): a pilot study (Deng and Wiebe, EMNLP 2015); extracting nested source and entity/event target; blocking the rules. 12

  10. Background: Sentiment Corpora
  Review Sentiment Corpus (Hu and Liu, 2004) o genre: product reviews o source: the writer o target: the product or product features o implicit opinions: no
  Sentiment Treebank (Socher et al., 2013) o genre: movie reviews o source: the writer o target: the movie o implicit opinions: no
  MPQA 2.0 (Wiebe et al., 2005; Wilson, 2008) o genre: news, editorials, blogs, etc. o source: the writer and any entity o target: an arbitrary span o implicit opinions: no
  MPQA 3.0 o genre: news, editorials, blogs, etc. o source: the writer and any entity o target: any entity/event eTarget (head of a noun phrase or verb phrase) o implicit opinions: yes 13

  11. Background: MPQA Corpus Direct subjective o nested source o attitude (attitude type, target) Expressive subjective element (ESE) o nested source o polarity Objective speech event o nested source o target 14

  12. MPQA 2.0: An Example "When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet" o nested source: writer, Imam o attitude: negative o target: "Salman Rushdie for insulting the Prophet" 15

  13. Background: Explicit and Implicit Sentiment Explicit sentiments o Extracting explicit opinion expressions, sources, and targets (Wiebe et al., 2005; Johansson and Moschitti, 2013a; Yang and Cardie, 2013; Moilanen and Pulman, 2007; Choi and Cardie, 2008; Moilanen et al., 2010). Implicit sentiments o Investigating features that directly indicate implicit sentiment (Zhang and Liu, 2011; Feng et al., 2013); no inference. o A rule-based system requiring oracle information for all components (Wiebe and Deng, arXiv 2014). 16

  14. Completed and Proposed Work Corpus (MPQA 3.0): expert annotations on 70 documents (Deng and Wiebe, NAACL 2015); non-expert annotations on hundreds of documents. Sentiment inference on +/-effect events & entities: a corpus of +/-effect event sentiments (Deng et al., ACL 2013); a model validating rules (Deng and Wiebe, EACL 2014); a model inferring sentiments (Deng et al., COLING 2014). Sentiment inference on general events & entities (joint models): a pilot study (Deng and Wiebe, EMNLP 2015); extracting nested source and entity/event target; blocking the rules. 18

  15. Completed and Proposed Work Corpus (MPQA 3.0): expert annotations on 70 documents (Deng and Wiebe, NAACL 2015); non-expert annotations on hundreds of documents. Sentiment inference on +/-effect events & entities: a corpus of +/-effect event sentiments (Deng et al., ACL 2013); a model validating rules (Deng and Wiebe, EACL 2014); a model inferring sentiments (Deng et al., COLING 2014). Sentiment inference on general events & entities (joint models): a pilot study (Deng and Wiebe, EMNLP 2015); extracting nested source and entity/event target; blocking the rules. 19

  16. From MPQA 2.0 To MPQA 3.0 "When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet" o nested source: writer, Imam o negative attitude o target: "Salman Rushdie for insulting the Prophet" MPQA 3.0 adds eTargets within the target span: o Imam is negative toward Rushdie. o Imam is negative toward insulting. o Imam is NOT negative toward the Prophet. 20

  17. Expert Annotations The expert annotators are Dr. Janyce Wiebe and me. The expert annotators are asked to select which noun or verb is the eTarget of an attitude or an ESE. The expert annotators annotated 70 documents; the agreement score is 0.82 on average over four documents. 21

  18. Non-Expert Annotations Previous work has asked non-expert annotators to annotate subjectivity and opinions (Akkaya et al., 2010; Socher et al., 2013). Reliable annotations o non-expert annotators with high credits o majority vote o weighted vote and reliable annotators (Welinder and Perona, 2010). Validating the annotation scheme o 70 documents: compare non-expert annotations with expert annotations o then collect non-expert annotations for the remaining corpus. 22
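
As a concrete illustration of the reliable-annotation step, here is a minimal majority-vote sketch in Python; the worker labels, the tie-handling policy, and the function name are illustrative assumptions, not part of the proposal.

    from collections import Counter

    def majority_vote(labels):
        """labels: list of labels for one item from different annotators.
        Returns the majority label, or None on a tie."""
        counts = Counter(labels).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            return None                 # tie: leave for expert adjudication
        return counts[0][0]

    # Three workers judge whether "Rushdie" is an eTarget of the negative attitude.
    print(majority_vote(["yes", "yes", "no"]))   # -> "yes"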

  19. Part 1 Summary An entity/event-level sentiment corpus, MPQA 3.0. Completed: expert annotations o 70 documents (Deng and Wiebe, NAACL 2015). Proposed: non-expert annotations o the remaining hundreds of documents o crowdsourcing tasks o automatically acquiring reliable labels. 27

  20. Completed and Proposed Work Corpus (MPQA 3.0): expert annotations on 70 documents (Deng and Wiebe, NAACL 2015); non-expert annotations on hundreds of documents. Sentiment inference on +/-effect events & entities: a corpus of +/-effect event sentiments (Deng et al., ACL 2013); a model validating rules (Deng and Wiebe, EACL 2014); a model inferring sentiments (Deng et al., COLING 2014). Sentiment inference on general events & entities (joint models): a pilot study (Deng and Wiebe, EMNLP 2015); extracting nested source and entity/event target; blocking the rules. 28

  21. +/-Effect Event Definition A +effect event has a benefiting effect on its theme. o help, increase, etc. A -effect event has a harmful effect on its theme. o harm, decrease, etc. An event is represented as a triple <agent, event, theme>. Example: "He rejects the paper." o -effect event: reject o agent: He o theme: paper o triple: <He, reject, paper> 29

  22. +/-Effect Event Representation +Effect(x) o x is a +effect event -Effect(x) o x is a -effect event Agent(x, a) o a is the agent of +/-effect event x Theme(x, h) o h is the theme of +/-effect event x 31

  23. +/-Effect Event Corpus +/-Effect event information is annotated: o the +/-effect events o the agents o the themes. The writer's sentiments toward the agents and themes are annotated: o positive, negative, neutral. 134 political editorials. 32

  24. Sentiment Inference Rules "people protest the country's same-sex marriage ban." Explicit sentiment o NegativePair(people, ban) +/-Effect event information o -Effect(ban) o Theme(ban, same-sex marriage) NegativePair(people, ban) ^ -Effect(ban) ^ Theme(ban, same-sex marriage) → PositivePair(people, same-sex marriage) 33

  25. Sentiment Inference Rules +Effect rule: if two entities participate in a +effect event, the writer's sentiments toward the entities are the same. -Effect rule: if two entities participate in a -effect event, the writer's sentiments toward the entities are opposite. Can the rules infer sentiments correctly? (Deng and Wiebe, EACL 2014) 34
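
To make the two rules concrete, here is a minimal Python sketch of how a sentiment toward a -effect event is carried over (flipped) to its theme, as in the ban example above; the class and function names and the data layout are illustrative assumptions, not the proposal's implementation.

    from dataclasses import dataclass

    @dataclass
    class EffectEvent:
        trigger: str    # e.g. "ban"
        polarity: str   # "+effect" or "-effect"
        agent: str
        theme: str

    def infer_pairs(explicit_pairs, events):
        """explicit_pairs: set of (holder, target, "pos"/"neg") triples.
        Returns additional pairs implied by the +effect/-effect rules."""
        inferred = set()
        for holder, target, pol in explicit_pairs:
            for ev in events:
                if target != ev.trigger:
                    continue
                if ev.polarity == "+effect":
                    # +effect rule: sentiment toward the event is kept for the theme.
                    inferred.add((holder, ev.theme, pol))
                else:
                    # -effect rule: sentiment toward the event is flipped for the theme.
                    inferred.add((holder, ev.theme, "neg" if pol == "pos" else "pos"))
        return inferred

    # "people protest the country's same-sex marriage ban"
    events = [EffectEvent("ban", "-effect", "the country", "same-sex marriage")]
    explicit = {("people", "ban", "neg")}          # NegativePair(people, ban)
    print(infer_pairs(explicit, events))           # {('people', 'same-sex marriage', 'pos')}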

  26. Building the graph from annotations (Deng and Wiebe, EACL 2014) Entities and +/-effect events are nodes (A, B, C, D, E) connected by agent/theme edges. Node score: two sentiment scores per node, with f_A(pos) + f_A(neg) = 1. 35

  27. Building the graph from annotations Edge score: four sentiment-constraint scores per edge, e.g. Y_{D,E}(pos, pos) is the score that the sentiment toward D is positive AND the sentiment toward E is positive. 36

  28. Building the graph from annotations Edge scores encode the inference rules: if the connecting event is +effect, Y_{D,E}(pos, pos) = 1 and Y_{D,E}(neg, neg) = 1; if it is -effect, Y_{D,E}(pos, neg) = 1 and Y_{D,E}(neg, pos) = 1. 37

  29. Loopy Belief Propagation Input: the gold-standard sentiment of one node. Model: loopy belief propagation over the graph. Output: the propagated sentiments of the other nodes. 38
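
The following is a minimal sum-product loopy belief propagation sketch for a graph of this shape (binary pos/neg states, node scores f, edge scores Y); the function, the toy two-node graph, and the scores are illustrative assumptions rather than the EACL 2014 implementation.

    STATES = ("pos", "neg")

    def loopy_bp(nodes, edges, f, Y, iters=20):
        """nodes: list of node ids; edges: list of (u, v) pairs; f[n][s]: node score;
        Y[(u, v)][(su, sv)]: edge score. Returns normalized per-node beliefs."""
        # msgs[(u, v)][s]: message from u to v about v taking state s
        msgs = {(u, v): {s: 1.0 for s in STATES}
                for a, b in edges for (u, v) in ((a, b), (b, a))}
        for _ in range(iters):
            new = {}
            for (u, v) in msgs:
                out = {}
                for sv in STATES:
                    total = 0.0
                    for su in STATES:
                        # edge compatibility, respecting the orientation stored in Y
                        comp = Y[(u, v)][(su, sv)] if (u, v) in Y else Y[(v, u)][(sv, su)]
                        incoming = 1.0
                        for (w, x) in msgs:        # messages into u, except from v
                            if x == u and w != v:
                                incoming *= msgs[(w, x)][su]
                        total += f[u][su] * comp * incoming
                    out[sv] = total
                z = sum(out.values()) or 1.0
                new[(u, v)] = {s: val / z for s, val in out.items()}
            msgs = new
        beliefs = {}
        for n in nodes:
            b = {s: f[n][s] for s in STATES}
            for (w, x) in msgs:
                if x == n:
                    for s in STATES:
                        b[s] *= msgs[(w, x)][s]
            z = sum(b.values()) or 1.0
            beliefs[n] = {s: val / z for s, val in b.items()}
        return beliefs

    # Toy graph: one -effect edge between A and E, A seeded with a gold label.
    nodes, edges = ["A", "E"], [("A", "E")]
    f = {"A": {"pos": 0.9, "neg": 0.1}, "E": {"pos": 0.5, "neg": 0.5}}
    Y = {("A", "E"): {("pos", "pos"): 0.0, ("neg", "neg"): 0.0,
                      ("pos", "neg"): 1.0, ("neg", "pos"): 1.0}}
    print(loopy_bp(nodes, edges, f, Y))   # E comes out mostly negative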

  30. Propagating sentiments For node E, can it be propagated with the correct sentiment label? 39

  31. Propagating sentiments (A → E) Node A is assigned its gold-standard sentiment. Run the propagation. Record whether node E is propagated correctly or not. 40

  32. Propagating sentiments (B → E) Node B is assigned its gold-standard sentiment. Run the propagation. Record whether node E is propagated correctly or not. 41

  33. Propagating sentiments (C → E) Node C is assigned its gold-standard sentiment. Run the propagation. Record whether node E is propagated correctly or not. 42

  34. Propagating sentiments (D → E) Node D is assigned its gold-standard sentiment. Run the propagation. Record whether node E is propagated correctly or not. 43

  35. Evaluating whether E is propagated correctly Node E receives a propagated sentiment 4 times (once for each seed node). correctness(E) = (# times node E is propagated correctly) / 4. Average correctness over all nodes = 88.74%. 44

  36. Conclusion Defined the graph-based model with sentiment inference rules. Sentiments are propagated correctly in 88.74% of cases. To validate the inference rules only, the graph-based propagation model is built from manual annotations. Can we infer sentiments automatically? (Deng et al., COLING 2014) 45

  37. Local Detectors Given a +/-effect event span in a document, run state-of-the-art systems to assign local scores (Deng et al., COLING 2014). (Q1) Is it +effect or -effect? (Q2) Is the effect reversed? (Q3) Which spans are the agents and themes? (Q4) What are the writer's sentiments? Each candidate answer (agent/theme spans, +effect/-effect, reverser, pos/neg) receives a local score, e.g. +effect: 0.8, -effect: 0.2, reverser: 0.9. 46

  38. Local Detectors (Deng et al., COLING 2014) (Q1) word sense disambiguation (Q2) negation detection (Q3) semantic role labeling (Q4) sentiment analysis 47

  39. Global Optimization The global model selects an optimal set of candidates: o one candidate from the four agent sentiment candidates (Agent1-pos, Agent1-neg, Agent2-pos, Agent2-neg) o one or no candidate for the reverser o one candidate from the +effect/-effect candidates o one candidate from the four theme sentiment candidates. 48

  40. Objective Function
    min  - \sum_{i \in EffectEvent \cup Entity} \sum_{c \in L_i} p_{ic} u_{ic} + \sum_{<i,k,j> \in Triple} \xi_{ikj} + \sum_{<i,k,j> \in Triple} \delta_{ikj}
  o u: binary indicator of choosing a candidate o \xi, \delta: slack variables of triple <i,k,j>, indicating that the triple is an exception to the +effect/-effect rules (exception: 1) o p: candidate local score. The framework assigns values (0 or 1) to u, maximizing the scores given by the local detectors, and assigns values (0 or 1) to \xi, \delta, minimizing the cases where the +/-effect sentiment rules are violated. Integer Linear Programming (ILP) is used; a solver sketch follows the constraint slides below. 49

  41. +Effect Rule Constraints In a +effect event, sentiments toward the agent and theme are the same (encoding: +effect: 1, -effect: 0; exception: 1, not exception: 0):
    u_{i,pos} - u_{j,pos} + u_{k,+effect} - u_{k,reversed} <= 1 + \xi_{ikj}
    u_{i,neg} - u_{j,neg} + u_{k,+effect} - u_{k,reversed} <= 1 + \xi_{ikj}
  for each triple <i,k,j> with agent candidate i, event k, and theme candidate j. 50

  42. -Effect Rule Constraints In a -effect event, sentiments toward the agent and theme are opposite:
    u_{i,pos} + u_{j,pos} - 1 + u_{k,-effect} - u_{k,reversed} <= 1 + \delta_{ikj}
    u_{i,neg} + u_{j,neg} - 1 + u_{k,-effect} - u_{k,reversed} <= 1 + \delta_{ikj}
  for each triple <i,k,j>. 51
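
As a concrete illustration of slides 40-42, here is a minimal ILP sketch for a single triple written with the PuLP solver wrapper; the variable names, the toy local scores, and the pick-one constraints are illustrative assumptions, not the COLING 2014 system.

    import pulp

    # Local detector scores for one triple <agent i, event k, theme j>,
    # e.g. the "ban" (-effect) event from the motivating example.
    p = {
        "i_pos": 0.2, "i_neg": 0.8,        # writer sentiment toward the agent
        "j_pos": 0.7, "j_neg": 0.3,        # writer sentiment toward the theme
        "k_plus": 0.1, "k_minus": 0.9,     # +effect vs. -effect score for the event
        "k_rev": 0.05, "k_norev": 0.95,    # reverser (negation) vs. no reverser
    }

    prob = pulp.LpProblem("effect_event_sentiment", pulp.LpMinimize)
    u = {name: pulp.LpVariable("u_" + name, cat="Binary") for name in p}
    xi = pulp.LpVariable("xi", cat="Binary")        # exception to the +effect rule
    delta = pulp.LpVariable("delta", cat="Binary")  # exception to the -effect rule

    # Objective (slide 40): -sum(p_ic * u_ic) plus the slack penalties.
    prob += -pulp.lpSum(p[name] * u[name] for name in p) + xi + delta

    # Pick exactly one candidate per decision.
    prob += u["i_pos"] + u["i_neg"] == 1
    prob += u["j_pos"] + u["j_neg"] == 1
    prob += u["k_plus"] + u["k_minus"] == 1
    prob += u["k_rev"] + u["k_norev"] == 1

    # +effect rule constraints (slide 41): same sentiment toward agent and theme.
    prob += u["i_pos"] - u["j_pos"] + u["k_plus"] - u["k_rev"] <= 1 + xi
    prob += u["i_neg"] - u["j_neg"] + u["k_plus"] - u["k_rev"] <= 1 + xi

    # -effect rule constraints (slide 42): opposite sentiments toward agent and theme.
    prob += u["i_pos"] + u["j_pos"] - 1 + u["k_minus"] - u["k_rev"] <= 1 + delta
    prob += u["i_neg"] + u["j_neg"] - 1 + u["k_minus"] - u["k_rev"] <= 1 + delta

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([name for name in u if u[name].value() == 1])
    # expected: i_neg, j_pos, k_minus, k_norev chosen, with no exceptions needed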

  43. Performances [bar chart comparing the local detectors (light bars) and the joint ILP model (dark bars): accuracy for Q1, Q2, Q3; precision, recall, and F-measure for Q4] (Q1) Is it +effect or -effect? (Q2) Is the effect reversed? (Q3) Which spans are agents and themes? (Q4) What are the writer's sentiments? 52

  44. Part 2 Summary Inferring sentiments toward entities participating in +/-effect events. Developed an annotated corpus (Deng et al., ACL 2013). Developed a graph-based propagation model showing the inference ability of the rules (Deng and Wiebe, EACL 2014). Developed an Integer Linear Programming model jointly resolving various ambiguities w.r.t. +/-effect events and sentiments (Deng et al., COLING 2014). 56

  45. Completed and Proposed Work Corpus (MPQA 3.0): expert annotations on 70 documents (Deng and Wiebe, NAACL 2015); non-expert annotations on hundreds of documents. Sentiment inference on +/-effect events & entities: a corpus of +/-effect event sentiments (Deng et al., ACL 2013); a model validating rules (Deng and Wiebe, EACL 2014); a model inferring sentiments (Deng et al., COLING 2014). Sentiment inference on general events & entities (joint models): a pilot study (Deng and Wiebe, EMNLP 2015); extracting nested source and entity/event target; blocking the rules. 57

  46. Joint Models In (Deng et al., COLING 2014), we use an Integer Linear Programming framework: local systems are run, the joint model takes their local scores as input, and the sentiment inference rules are imposed as constraints. In ILP, the rules are written as equations and inequalities, e.g. u_{i,pos} - u_{j,pos} + u_{k,+effect} - u_{k,reversed} <= 1 + \xi_{ikj}. 59

  47. Joint Models: General Inference Rules Great! Dr. Thompson likes the project. Explicit sentiment: o Positive(Great) o Source(Great, speaker) o ETarget(Great, likes) o PositivePair(speaker, likes) Explicit sentiment: o Positive(likes) o Source(likes, Dr. Thompson) o ETarget(likes, project) o PositivePair(Dr. Thompson, project) 60

  48. Joint Models: General Inference Rules Great! Dr. Thompson likes the project. Explicit sentiment: o Positive(Great) o Source(Great, speaker) o ETarget(Great, likes) o PositivePair(speaker, likes) Explicit sentiment: o Positive(likes) o Source(likes, Dr. Thompson) o ETarget(likes, project) o PositivePair(Dr. Thompson, project) Sentiment toward a sentiment: PositivePair(speaker, likes) ^ Positive(likes) ^ ETarget(likes, project) → PositivePair(speaker, project) 61
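
Here is a minimal sketch of applying this "sentiment toward a sentiment" rule to the example above; the dictionary layout and function name are illustrative assumptions only.

    explicit = {
        "Positive": {"Great", "likes"},
        "Source": {("Great", "speaker"), ("likes", "Dr. Thompson")},
        "ETarget": {("Great", "likes"), ("likes", "project")},
    }
    pairs = {("speaker", "likes", "pos"), ("Dr. Thompson", "project", "pos")}

    def infer(pairs, explicit):
        """PositivePair(s, y) ^ Positive(y) ^ ETarget(y, t) -> PositivePair(s, t)."""
        new = set()
        for s, y, pol in pairs:
            if pol == "pos" and y in explicit["Positive"]:
                for expr, t in explicit["ETarget"]:
                    if expr == y:
                        new.add((s, t, "pos"))
        return new - pairs

    print(infer(pairs, explicit))   # {('speaker', 'project', 'pos')}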

  49. Joint Models More complex rules, in first-order logic. Markov Logic Network (Richardson and Domingos, 2006): o a set of atoms to be grounded o a set of weighted if-then rules o rule: friend(a,b) ^ voteFor(a,c) → voteFor(b,c) o atoms: friend(a,b), voteFor(a,c) o ground atom: friend(Mary, Tom). The MLN selects a set of ground atoms (a possible world) that maximizes the total weight of satisfied ground rules. 62
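
A tiny illustration of what grounding and scoring such a rule looks like, using the slide's voting example; this is not a real MLN engine, and the constants, the possible world, and the rule weight are assumptions made up for the illustration.

    from itertools import product

    people = ["Mary", "Tom", "Ann"]
    candidates = ["C1", "C2"]

    # A possible world: the set of ground atoms that are true.
    world = {("friend", "Mary", "Tom"), ("voteFor", "Mary", "C1"), ("voteFor", "Tom", "C1")}

    def satisfied(world, a, b, c):
        """Ground rule: friend(a,b) ^ voteFor(a,c) -> voteFor(b,c)."""
        body = ("friend", a, b) in world and ("voteFor", a, c) in world
        head = ("voteFor", b, c) in world
        return (not body) or head

    weight = 1.5   # assumed weight of this rule
    score = sum(weight for a, b, c in product(people, people, candidates)
                if satisfied(world, a, b, c))
    print(score)   # total weight of satisfied groundings; MAP inference picks
                   # the world maximizing this sum over all weighted rules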

  50. Joint Model: Pilot Study Atoms predicted by the joint model: o PositivePair(s,t) o NegativePair(s,t) Atoms assigned scores by local systems: o Positive(y), Negative(y) o Source(y,s), ETarget(y,t) o +Effect(x), -Effect(x) o Agent(x,a), Theme(x,h) 63
