How Computers Decipher Thoughts: Stuart Reges, Principal Lecturer
Stuart Reges, a Principal Lecturer in Computer Science & Engineering, explains how companies infer what users are thinking from the data they leave behind: search queries, purchase and browsing analytics, and the structure of social networks. The talk then drills into one concrete technique, ngram language models, showing how computing probabilities over word sequences lets a program predict and even generate human-like text.
Presentation Transcript
How do Computers Figure Out What You're Thinking? STUART REGES, Principal Lecturer, Computer Science & Engineering
How do companies know what you're thinking?
> Suppose a user types "saving banks" or "savings bank" or "savings banks" (see the sketch after this list)
> Google wants to know what you're searching for
  > gold standard: you click on the first link they show and no others
> Amazon wants to know what you want to buy
  > they know what you actually buy
  > they have extensive analytics (what you looked at, how long you spent there)
  > A/B testing helps them figure out how to maximize purchasing
> Facebook wants to know everything
  > you tell Facebook tons
  > the friendship graph tells many secrets
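A toy sketch of why those three queries can be treated as one search intent (this is illustrative only, not Google's actual pipeline): crude plural-stripping maps all three onto the same normalized key.

    # Toy query normalization (hypothetical, not Google's real system):
    # lowercase, strip a trailing "s" from each word, and sort the
    # stems so word order doesn't matter.
    def normalize(query):
        words = query.lower().split()
        stems = [w[:-1] if w.endswith("s") else w for w in words]
        return " ".join(sorted(stems))

    for q in ["saving banks", "savings bank", "savings banks"]:
        print(q, "->", normalize(q))
    # all three queries map to the same key: "bank saving"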
An example from Facebook
"We organize our analysis around a basic question: given all the connections among a person's friends, can you recognize his or her romantic partner from the network structure alone? Using data from a large sample of Facebook users, we find that this task can be accomplished with high accuracy."
from Romantic Partnerships and the Dispersion of Social Ties: A Network Analysis of Relationship Status on Facebook by Lars Backstrom (Facebook) and Jon Kleinberg (Cornell)
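The key signal in that paper is "dispersion": the mutual friends you share with a romantic partner tend to be spread across otherwise disconnected parts of your network (work, family, college), rather than forming one tight cluster. Here is a simplified sketch of that idea using networkx; it just counts pairs of mutual friends who have no direct tie to each other, which is a simplification of the paper's actual measure.

    import networkx as nx
    from itertools import combinations

    # Simplified version of Backstrom & Kleinberg's "dispersion" idea
    # (the paper's exact measure is more refined): count pairs of
    # mutual friends of u and v that are not themselves connected.
    def dispersion(G, u, v):
        common = set(G[u]) & set(G[v])          # mutual friends of u and v
        return sum(1 for s, t in combinations(common, 2)
                   if not G.has_edge(s, t))     # pairs with no direct tie

    # Tiny made-up graph: "amy" shares friends with u from clusters
    # that don't know each other (partner-like), while "bob" shares
    # one tight cluster with u.
    G = nx.Graph([("u", "amy"), ("u", "bob"),
                  ("u", "w1"), ("u", "f1"), ("u", "c1"),
                  ("amy", "w1"), ("amy", "f1"), ("amy", "c1"),
                  ("u", "b1"), ("u", "b2"),
                  ("bob", "b1"), ("bob", "b2"), ("b1", "b2")])
    for v in ["amy", "bob"]:
        print(v, dispersion(G, "u", v))
    # amy scores 3, bob scores 0: amy's mutual friends span
    # disconnected groups, which is the partner-like signature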
Grand Challenge: Big Data
> Big Data and Machine Learning are the hottest of the hot fields right now (google "data scientist")
> UW has been hiring influential ML researchers
> How do we teach ML?
  > we have an ML course
  > new data science option being developed at UW with statistics
> Read a book?
  > The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos
Lessons from the Four Color Map Theorem
> Given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map so that no two adjacent regions have the same color
> Why do mathematicians hate this theorem?
  > because the proof is not elegant: it required a computer
> Lesson: our puny brains can handle only 7 +/- 2 things at a time, and we don't have the time or patience for tedious computations
> Hands-on ML examples are challenging, but we'll try
We will drill down and explore ngrams
> An ngram is a sequence of n items that appear in a row
  > What tends to come after "They lived happily ___"?
  > can be word-based or character-based
  > one of the primary tools used by Google
  > involves computing probabilities for different sequences
> How do you babble in a Tom Sawyer-ly way?
  > let's construct an ngram-based random Markov chain generator using the text of Tom Sawyer as our training set (sketched below)
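A minimal word-level sketch of that generator, assuming a plain-text copy of the novel saved as tomsawyer.txt (the filename is an assumption, and this is an illustration rather than the exact classroom code). It builds a table mapping each 2-word prefix to every word that follows it in the book; because repeated continuations appear multiple times in the list, sampling uniformly from it reproduces the observed probabilities.

    import random
    from collections import defaultdict

    N = 3  # trigrams: a 2-word prefix predicts the next word

    # Assumed input file: a plain-text copy of Tom Sawyer
    words = open("tomsawyer.txt").read().split()

    # Record what follows each (N-1)-word prefix in the training text
    table = defaultdict(list)
    for i in range(len(words) - N + 1):
        prefix = tuple(words[i:i + N - 1])
        table[prefix].append(words[i + N - 1])

    # Babble: start at a random prefix, repeatedly sample a likely
    # next word, and slide the prefix window forward one word
    prefix = random.choice(list(table))
    output = list(prefix)
    for _ in range(50):
        choices = table.get(prefix)
        if not choices:  # dead end (end of text): restart randomly
            prefix = random.choice(list(table))
            continue
        nxt = random.choice(choices)
        output.append(nxt)
        prefix = (*prefix[1:], nxt)
    print(" ".join(output))

Raising N makes the babble sound more like real Tom Sawyer sentences but copies longer stretches verbatim; lowering it gives more novelty and less coherence.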
Links for further exploration
> Google Books Ngram Viewer
  > https://books.google.com/ngrams
> Wikipedia pages
  > https://en.wikipedia.org/wiki/N-gram
  > https://en.wikipedia.org/wiki/Markov_chain
> Older "All Our N-gram are Belong to You" data:
  > https://research.googleblog.com/2006/08/all-our-n-gram-are-belong-to-you.html
> Raw data for Ngram Viewer
  > http://storage.googleapis.com/books/ngrams/books/datasetsv2.html