Introduction to Information Retrieval

Information Retrieval (IR) is the task of finding unstructured material, usually text documents stored on computers, that satisfies an information need from within large collections. It covers many search scenarios beyond web search, including email search, searching your own laptop, corporate knowledge bases, and legal information retrieval. The goal is to retrieve documents that are relevant to the user's information need and help the user complete a task, following a classic search model that runs from the user's task and information need through query formulation, the search engine, and query refinement, with results evaluated for precision and recall. The distinction between unstructured (text) and structured (database) data has shifted over time, presenting both challenges and opportunities for IR practice.

  • Information Retrieval
  • Unstructured Data
  • Structured Data
  • Search Model
  • Precision


Presentation Transcript


  1. INFORMATION RETRIEVAL

  2. INFORMATION RETRIEVAL Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). These days we frequently think first of web search, but there are many other cases: e-mail search, searching your laptop, corporate knowledge bases, legal information retrieval.

  3. UNSTRUCTURED (TEXT) VS. STRUCTURED (DATABASE) DATA IN THE MID-NINETIES

  4. UNSTRUCTURED (TEXT) VS. STRUCTURED (DATABASE) DATA TODAY

  5. BASIC ASSUMPTIONS OF INFORMATION RETRIEVAL Collection: a set of documents (assume a static collection for the moment). Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task.

  6. THE CLASSIC SEARCH MODEL User task → info need → query → search engine (run over the collection) → results, with query refinement feeding back into the query.

  7. HOW GOOD ARE THE RETRIEVED DOCS? Precision: fraction of retrieved docs that are relevant to the user's information need. Recall: fraction of relevant docs in the collection that are retrieved.
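
As an illustration (not part of the original slides), here is a minimal Python sketch of the two measures, assuming the retrieved and relevant documents are given as sets of docIDs; the example sets are made up.

```python
# Minimal sketch of precision and recall on illustrative sets of docIDs.
retrieved = {1, 2, 4, 8, 16}          # documents returned by the system
relevant  = {2, 4, 5, 8, 21, 34}      # documents that satisfy the information need

true_positives = retrieved & relevant  # retrieved AND relevant

precision = len(true_positives) / len(retrieved)  # fraction of retrieved docs that are relevant
recall    = len(true_positives) / len(relevant)   # fraction of relevant docs that are retrieved

print(precision, recall)  # 0.6 0.5
```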

  8. Term-document incidence

  9. UNSTRUCTURED DATA IN 1620 Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer? It is slow (for large corpora); NOT Calpurnia is non-trivial; other operations (e.g., find the word Romans near countrymen) are not feasible; and there is no ranked retrieval (returning the best documents first).

  10. TERM-DOCUMENT INCIDENCE MATRICES Entry is 1 if the play contains the word, 0 otherwise. Query: Brutus AND Caesar BUT NOT Calpurnia.

                 Antony and Cleopatra   Julius Caesar   The Tempest   Hamlet   Othello   Macbeth
      Antony              1                   1              0           0        0         1
      Brutus              1                   1              0           1        0         0
      Caesar              1                   1              0           1        1         1
      Calpurnia           0                   1              0           0        0         0
      Cleopatra           1                   0              0           0        0         0
      mercy               1                   0              1           1        1         1
      worser              1                   0              1           1        1         0
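
A small Python sketch of answering the Boolean query by combining the 0/1 rows of this matrix bitwise; the row values come from the table above, and everything else (variable names, the subset of terms shown) is illustrative.

```python
# Columns of the incidence matrix, in order.
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

# Rows of the matrix for the three query terms (from the table above).
incidence = {
    "Brutus":    [1, 1, 0, 1, 0, 0],
    "Caesar":    [1, 1, 0, 1, 1, 1],
    "Calpurnia": [0, 1, 0, 0, 0, 0],
}

# Brutus AND Caesar AND NOT Calpurnia: combine the rows bitwise.
answer = [b & c & (1 - cal)
          for b, c, cal in zip(incidence["Brutus"],
                               incidence["Caesar"],
                               incidence["Calpurnia"])]

print([play for play, bit in zip(plays, answer) if bit])
# ['Antony and Cleopatra', 'Hamlet']
```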

  11. BIGGER COLLECTIONS A 500K (distinct terms) × 1M (documents) matrix has half a trillion 0s and 1s, but it contains no more than one billion 1s: the matrix is extremely sparse. What's a better representation? We only record the 1 positions.

  12. Inverted Index The key data structure underlying modern IR

  13. INVERTED INDEX For each term t, we must store a list of all documents that contain t. Identify each doc by a docID, a document serial number. Example postings lists:
      Brutus → 1 2 4 11 31 45 173 174
      Caesar → 1 2 4 5 6 16 57 132
      Calpurnia → 2 31 54 101
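
As a sketch, the same postings can be kept in a plain Python dictionary mapping each term to its sorted list of docIDs (only the "1 positions" are recorded); this is just one possible in-memory representation, not the only one.

```python
# Inverted index as a dictionary: term -> sorted list of docIDs.
# Postings values are taken from the example lists above.
index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}

# Looking up a term returns exactly the documents that contain it.
print(index["Calpurnia"])  # [2, 31, 54, 101]
```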

  14. INVERTED INDEX CONSTRUCTION Documents to be indexed ("Friends, Romans, countrymen.") → Tokenizer → token stream (Friends, Romans, Countrymen) → Linguistic modules → modified tokens (friend, roman, countryman) → Indexer → inverted index (friend → 2 4; roman → 1 2; countryman → 13 16).

  15. INITIAL STAGES OF TEXT PROCESSING Tokenization: cut the character sequence into word tokens; deal with cases like "John's" and "a state-of-the-art solution". Normalization: map text and query terms to the same form; you want U.S.A. and USA to match. Stemming: we may wish different forms of a root to match, e.g. authorize, authorization. Stop words: we may omit very common words (or not), e.g. the, a, to, of.
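
A toy Python sketch of these stages, assuming a crude regex tokenizer and a hand-rolled suffix stripper in place of a real stemmer such as Porter's; the suffix list and stop-word set are illustrative only, not part of the slides.

```python
import re

STOP_WORDS = {"the", "a", "to", "of"}   # illustrative stop-word list

def process(text):
    """Toy pipeline: tokenize, normalize (lowercase, drop periods so
    'U.S.A.' and 'USA' both become 'usa'), remove stop words, and apply
    a crude suffix stripper (NOT a real stemmer such as Porter's)."""
    tokens = re.findall(r"[A-Za-z.']+", text)                           # tokenization
    tokens = [t.lower().replace(".", "").strip("'") for t in tokens]    # normalization
    tokens = [t for t in tokens if t and t not in STOP_WORDS]           # stop-word removal
    stemmed = []
    for t in tokens:                                                    # crude "stemming"
        for suffix in ("ization", "ation", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(process("Friends, Romans, countrymen."))
# ['friend', 'roman', 'countrymen']  (a real stemmer would do a better job)
```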

  16. INDEXER STEPS: TOKEN SEQUENCE Sequence of (modified token, document ID) pairs. Doc 1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me." Doc 2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious."

  17. INDEXER STEPS: SORT Sort by terms, and then by docID. This is the core indexing step.

  18. INDEXER STEPS: DICTIONARY & POSTINGS Multiple term entries in a single document are merged. Split into Dictionary and Postings Doc. frequency information is added. Why frequency? Will discuss later.
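
A minimal Python sketch of these indexer steps on the two example documents, assuming the tokens have already been lowercased and stripped of punctuation; the variable names and the defaultdict representation are illustrative choices, not part of the slides.

```python
from collections import defaultdict

# The two example documents, with tokens already normalized.
docs = {
    1: "i did enact julius caesar i was killed i the capitol brutus killed me",
    2: "so let it be with caesar the noble brutus hath told you caesar was ambitious",
}

# Token sequence: (token, docID) pairs, sorted by term and then docID (core indexing step).
pairs = sorted((token, doc_id)
               for doc_id, text in docs.items()
               for token in text.split())

# Build postings lists, merging multiple occurrences of a term within the same document.
postings = defaultdict(list)
for token, doc_id in pairs:
    if not postings[token] or postings[token][-1] != doc_id:
        postings[token].append(doc_id)

# Dictionary side: term -> document frequency (length of its postings list).
dictionary = {term: len(plist) for term, plist in postings.items()}

print(postings["brutus"], dictionary["brutus"])    # [1, 2] 2
print(postings["capitol"], dictionary["capitol"])  # [1] 1
```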

  19. Query processing with an inverted index

  20. THE INDEX WE JUST BUILT Our focus: how do we process a query? Later: what kinds of queries can we process?

  21. QUERY PROCESSING: AND Consider processing the query Brutus AND Caesar. Locate Brutus in the dictionary and retrieve its postings; locate Caesar in the dictionary and retrieve its postings; merge the two postings lists (intersect the document sets). Example postings:
      Brutus → 2 4 8 16 32 64 128
      Caesar → 1 2 3 5 8 13 21 34
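
A sketch of the merge as a two-pointer walk over the two sorted postings lists, using the example postings above; the function and variable names are illustrative.

```python
def intersect(p1, p2):
    """Merge two sorted postings lists: advance a pointer in each list,
    keeping only docIDs present in both. Runs in O(len(p1) + len(p2))."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))  # [2, 8]
```

The merge relies on both postings lists being sorted by docID, which is exactly why the indexer sorts by term and then docID.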

  22. IR SYSTEM COMPONENTS Text processing forms index words (tokens). Indexing constructs an inverted index of word to document pointers. Searching retrieves documents that contain a given query token from the inverted index. Ranking scores all retrieved documents according to a relevance metric.

  23. REFERENCE http://www.stanford.edu/class/cs276/
