Introduction to Algorithms and Problem-solving Process

This presentation explores the fundamentals of algorithms, complexity, and problem-solving. It shows how algorithms play a role in tackling tasks from drawing shapes to performing surgeries, walks through the essential steps of the problem-solving process, and introduces Muhammad ibn Musa al-Khwarizmi, the mathematician after whom algorithms are named.

  • Algorithms
  • Complexity
  • Problem-solving
  • Muhammad ibn Musa al-Khwarizmi
  • Instructions


Presentation Transcript


  1. A Brief Introduction to Algorithms, Complexity, and Computability. TACT Boot Camp, July 2016. Matthew Patitz, University of Arkansas.

  2. How do we solve problems?
     • We "just do"
     • Guesswork-and-luck
     • Trial-and-error
     • Experience (possibly someone else's)
     • "Scientifically"

  3. The Problem-solving Process
     "Doctor, my head hurts" → [Analysis] → Problem specification: "Patient has elevated pressure in anterior parietal lobe."
     Problem specification → [Design] → Algorithm:
        1. Sterilize cranial saw
        2. Anaesthetize patient
        3. Remove top of skull
        4. Get the big spoon...
        5. etc., etc.
     Algorithm → [Implementation] → Program:
        sterilize(saw, alcohol);
        raise_hammer();
        lower_hammer(fast);
        start(saw);
        /* etc., etc. */
     Program → [Compilation] → Executable (solution): 0100111010110010... (machine code)

  4. Algorithm: a sequence of instructions specifying the steps required to accomplish some task. Named after Muhammad ibn Musa al-Khwarizmi of Khwarazm (now Khiva in Uzbekistan), a Persian mathematician, astronomer, and geographer during the Abbasid Caliphate, a scholar in the House of Wisdom in Baghdad, often considered one of the fathers of algebra.

  5. Algorithm: Muhammad ibn Musa al-Khwarizmi, circa 780-850 C.E. (Common Era).

  6. Algorithm, working definition: a sequence of instructions describing how to do a task [as opposed to actually executing the instructions].

  7. Algorithm -- Examples
     • A cooking recipe
     • Assembly instructions for a model
     • The rules of how to play a game
     • VCR instructions
     • Description of a martial arts technique
     • Directions for driving from A to B
     • A knitting pattern
     • A car repair manual

  8. Algorithm -- Exercise. Write algorithms to describe how to draw shapes/figures: give instructions which use only line lengths, point locations, directions, and/or angles to guide someone in drawing the shape/figure you are given.

  9. From Algorithms to Programs: Problem → Algorithm (a sequence of instructions describing how to do a task or process) → C Program.

  10. How can we find the maximum element in an array?
     • High-level description in spoken language
     • Pseudo-code
     • Code in a programming language

  11. Finding an array max (n = length of A):
     int M = A[0];
     for (int i = 1; i < n; ++i) {
         if (A[i] >= M) {
             M = A[i];
         }
     }
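A runnable sketch of the loop above (the class name and sample array are illustrative, not from the slides):

```java
public class ArrayMax {
    // Returns the largest element of a non-empty array,
    // scanning each element once: O(n) critical steps.
    public static int max(int[] a) {
        int m = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > m) {
                m = a[i];
            }
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(max(new int[] {3, 9, 2, 7})); // prints 9
    }
}
```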

  12. Comparing algorithms. Assume two algorithms solve a given problem correctly; which is better? Metrics: amount of code, simplicity of code, running time, amount of memory needed. Simplicity is not rigorously defined. Space is cheap and can be re-used, so let's focus on running time (it can't be re-used).

  13. Algorithm analysis. We need a rigorous, mathematical way to determine how efficient an algorithm is with respect to execution time, so that we can compare algorithms. We could try timings, but they can vary a lot depending on the input and/or hardware. Another approach is to count the number of critical steps of code required to process N elements.

  14. Critical steps. To get a feel for the behavior of an algorithm, we count only the dominant instructions. For a sort, that would be the comparisons of array elements, or the number of times elements are swapped. For a search, it would be the comparisons of an array element to the item being searched for. The count will be some function of N, i.e., the number of items being processed.
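As a sketch of this counting approach (class and method names are made up), here is a bubble sort instrumented to tally its comparisons, the dominant instruction for a sort; the count comes out to n(n-1)/2 regardless of the input's order:

```java
public class CriticalSteps {
    // Counts element comparisons made by a simple bubble sort.
    public static long bubbleSortComparisons(int[] a) {
        long comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                comparisons++;                      // the critical step
                if (a[j] > a[j + 1]) {              // swap out-of-order neighbors
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i;   // reverse-sorted input
        System.out.println(bubbleSortComparisons(a)); // prints 4950 = 100*99/2
    }
}
```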

  15. Comparing critical step counts. If we have the critical step counts for two algorithms on equal-size inputs, then we can compare their performance: the algorithm with the lower count wins! Actually, the algorithm whose critical step count function grows more slowly wins. Messy! A critical step count for an algorithm could look like this:
     F(N) = 17·N / log₂N + N^(3/2) − 99·log₂N / N^(2/3)
  Compare it to another critical step count that looks like this:
     G(N) = N·log₂N + N^(3/4)·log₂(N³)
  Which one grows more slowly?

  16. Big O notation. Don't use F(N) = 17·N / log₂N + N^(3/2) − 99·log₂N / N^(2/3). Re-write F(N) using big O notation: F(N) = O(N^(3/2)). This says that F(N) grows on the order of N^(3/2). Intuitively, the big O of a function is the order of the fastest-growing term of the number of critical steps performed (as a function of N).

  17. Big O notation. Example: if F(N) = 10·N² + 2·N + 1000099917, then F(N) = O(N²). Think of 1000099917 as an initialization constant. Drop constant multipliers: different languages or compilers may require more or fewer instructions, such as array lookup operations in Pascal and C:
     [Pascal] M := A[ i ];
     [C]      if ( i >= 0 && i < n ) { M = A[ i ]; }
  Pascal automatically checks that i is in bounds. Also drop slower-growing terms.

  18. Big O notation. Filtering by dropping constants and keeping only the quickest-growing term gives the asymptotic behavior: the behavior in the limit, as n tends to infinity. [Graph: blue curve f(n) = n³, red curve g(n) = 1999n; f(n) becomes (and remains) larger than g(n) for all n > 44.]
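The crossover point quoted on the slide can be checked numerically; this sketch (names are illustrative) finds the first n where n³ exceeds 1999n:

```java
public class Crossover {
    // Finds the smallest positive n where f(n) = n^3 overtakes g(n) = 1999n.
    public static int firstNWhereCubeWins() {
        int n = 1;
        while ((long) n * n * n <= 1999L * n) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // n^3 > 1999n iff n^2 > 1999, i.e. from n = 45 on (44^2 = 1936, 45^2 = 2025).
        System.out.println(firstNWhereCubeWins()); // prints 45
    }
}
```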

  19. Examples of asymptotic behavior:
     F(n) = 5n + 12             → n
     F(n) = 10⁹                 → 1
     F(n) = 20n² + 3n + 15      → n²
     F(n) = 3n³ + 1999n + 1337  → n³
     F(n) = n + √n              → n

  20. Asymptotic behavior exercises:
     1. f(n) = 47n² + 19n⁶ + 3n
     2. f(n) = 2ⁿ + 12
     3. f(n) = 2ⁿ + 3ⁿ
     4. f(n) = 13nⁿ + 380n

  21. Linear search:
     boolean LinearSearch(int searchVal, int[] array) {
         boolean returnVal = false;
         for (int i = 0; i < array.length; i++) {
             if (array[i] == searchVal) {
                 returnVal = true;
             }
         }
         return returnVal;
     }
  What variable determines the run time? What function describes its asymptotic behavior?

  22. Types of Analysis. Worst-case: on any input, what is the largest number of steps required to output the answer? Best-case: what is the least? Average-case: over all inputs, what is the average number of steps required? Questions: 1. What is each of these for the linear search algorithm? 2. How can it be improved for best-case scenarios?

  23. Modified linear search:
     boolean LinearSearch(int searchVal, int[] array) {
         for (int i = 0; i < array.length; i++) {
             if (array[i] == searchVal) {
                 return true;
             }
         }
         return false;
     }
  On average, the loop only runs for half of the array elements, but it is still O(n).
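To illustrate why early exit improves the best case but not the big O, this sketch (class and counts are illustrative) instruments the search to count comparisons: 1 when the target is first, n when it is absent:

```java
public class SearchCost {
    // Linear search with early exit, instrumented to count element comparisons.
    public static int comparisons(int searchVal, int[] array) {
        int count = 0;
        for (int i = 0; i < array.length; i++) {
            count++;
            if (array[i] == searchVal) {
                break;   // stop as soon as the value is found
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {5, 8, 1, 9, 4};
        System.out.println(comparisons(5, a)); // best case: prints 1
        System.out.println(comparisons(7, a)); // worst case (absent): prints 5
    }
}
```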

  24. Asymptotic analysis:
     void Foo(int[] array) {
         for (int i = 0; i < array.length; i++) {
             for (int j = 0; j < array.length; j++) {
                 if (array[i] > array[j]) {
                     array[i] = array[j];
                 }
             }
         }
     }
  O(n²)

  25. Asymptotic analysis. Finding the last element of an array:
     if (array.length > 0) {
         return array[array.length - 1];
     }
  O(1)

  26. Asymptotic analysis examples (n: array1 length, m: array2 length):
     void Foo2(int[] array1, int[] array2) {
         for (int i = 0; i < array1.length; i++) {
             if (array1[i] < array2[i]) {
                 for (int j = 0; j < array2.length; j++) {
                     array1[i] = array1[i] + array2[j];
                 }
             }
             if (array2[i] < 100) {
                 for (int j = 0; j < array2.length; j++) {
                     if (array2[i] > array2[j]) {
                         array2[i] = array2[j];
                     }
                 }
             }
         }
     }
  O(n·m)

  27. Asymptotic analysis examples. Multiplying two square matrices:
     for (int i = 0; i < n; i++) {
         for (int j = 0; j < n; j++) {
             double sum = 0;
             for (int k = 0; k < n; k++) {
                 sum += a[i][k] * b[k][j];
             }
             c[i][j] = sum;
         }
     }
  O(n³)
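For a quick sanity check, the triple loop can be wrapped in a method and run on a small input (the class name and sample matrices are illustrative):

```java
public class MatMul {
    // Multiplies two n-by-n matrices with the standard triple loop:
    // n * n * n inner multiplications, hence O(n^3).
    public static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] = sum;
            }
        }
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = multiply(a, b);
        System.out.println(c[0][0] + " " + c[0][1]); // prints 19.0 22.0
        System.out.println(c[1][0] + " " + c[1][1]); // prints 43.0 50.0
    }
}
```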

  28. Asymptotic analysis examples. Interesting loop (the outer index must start at 1; starting at 0 would loop forever, since 0*2 = 0):
     for (int i = 1; i < n; i = i * 2) {
         for (int j = 0; j < n; j++) {
             count = count + 1;
         }
     }
  O(n·log(n))
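The n·log(n) count can be confirmed empirically; this sketch tallies the iterations (with the outer index started at 1 so the doubling terminates):

```java
public class LoopCount {
    // Counts iterations of the nested loop: roughly log2(n) outer
    // passes (i = 1, 2, 4, ...) of n inner iterations each.
    public static long count(int n) {
        long count = 0;
        for (int i = 1; i < n; i = i * 2) {
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 16, the outer loop runs for i = 1, 2, 4, 8: 4 passes.
        System.out.println(count(16)); // prints 64 = 16 * log2(16)
    }
}
```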

  29. Asymptotic growth comparisons:
     Class              | Example      | Description/example
     -------------------|--------------|--------------------------------------------------
     Constant           | 1            | Fixed # of operations regardless of input
     Log-logarithmic    | log(log(n))  |
     Logarithmic        | log(n)       | Cutting problem by a fraction each iteration
     Poly-logarithmic   | (log(n))^c   |
     Fractional power   | n^c, 0<c<1   | E.g. square root
     Linear             | n            | Visits each element of the input
     Linearithmic       | n·log(n)     | Breaking whole problem into pieces, then reassembling
     Quadratic          | n²           | Visits each pair of input elements
     Cubic              | n³           |
     Polynomial         | n^c, c >= 1  |
     Quasipolynomial    | n^log(n)     |
     Exponential        | c^n, c > 1   | Brute force search through subsets of input
     Factorial          | n!           | Brute force search through permutations of input
     N-to-the-n         | n^n          |
     Double exponential | 2^(2^n)      |

  30. Asymptotic growth comparisons. To get a feel for the complexity classes, consider a computer that performs one million abstract operations per second (Ga is a gigayear, or one billion years):
     T(n)         | n = 10        | 20               | 50           | 100           | 1,000          | 1,000,000
     -------------|---------------|------------------|--------------|---------------|----------------|------------------
     1            | 1 μs          | 1 μs             | 1 μs         | 1 μs          | 1 μs           | 1 μs
     log(n)       | 3.32 μs       | 4.32 μs          | 5.64 μs      | 6.64 μs       | 9.97 μs        | 19.9 μs
     n            | 10 μs         | 20 μs            | 50 μs        | 100 μs        | 1 msec         | 1 second
     n·log(n)     | 33.2 μs       | 86.4 μs          | 282 μs       | 664 μs        | 9.97 msec      | 19.9 seconds
     n²           | 100 μs        | 400 μs           | 2.5 msec     | 10 msec       | 1 second       | 11.57 days
     1000n²       | 100 msec      | 400 msec         | 2.5 seconds  | 10 seconds    | 16.7 minutes   | 31.7 years
     n³           | 1 msec        | 8 msec           | 125 msec     | 1 second      | 16.7 minutes   | 317 centuries
     n^log(n)     | 2.1 ms        | 420 ms           | 1.08 hours   | 224 days      | 2.5×10^7 Ga    | 1.23×10^97 Ga
     1.01^n       | 1.10 μs       | 1.22 μs          | 1.64 μs      | 2.7 μs        | 20.9 ms        | 7.49×10^4298 Ga
     0.000001·2^n | 1.02 ns       | 1.05 msec        | 18.8 minutes | 40.2 Ga       | 3.40×10^272 Ga | 3.14×10^301001 Ga
     2^n          | 1.02 ms       | 1.05 seconds     | 35.7 years   | 4.02×10^7 Ga  | 3.40×10^278 Ga | 3.14×10^301007 Ga
     n!           | 3.63 seconds  | 771 centuries    | 9.64×10^41 Ga| 2.96×10^135 Ga| 1.28×10^2545 Ga|
     n^n          | 2.78 hours    | 3323 Ga          | 2.81×10^62 Ga| 3.17×10^177 Ga| 3.17×10^2977 Ga|
     2^(2^n)      | 5.70×10^285 Ga| 2.14×10^315630 Ga|              |               |                |

  31. Formal big O. Let G(N) be a function of N. Let
     O(G(N)) = { F(N) | there exist constants c, N₀ such that, for all N ≥ N₀, F(N) ≤ c·G(N) }
  We say that F(N) is big O of G(N), and we usually write F(N) = O(G(N)), if and only if F(N) ∈ O(G(N)). E.g., for F(N) = N² + N + 1 and G(N) = N³ + 17: F(N) = O(G(N)) = O(N³). F(N) = O(G(N)) means that, after a certain point (for large enough values of N), F(N) is never greater than a constant times G(N). Draw a graph!

  32. Formal big Omega, Theta.
     Big Omega: let Ω(G(N)) = { F(N) | there exist constants c, N₀ such that, for all N ≥ N₀, F(N) ≥ c·G(N) }. F(N) = Ω(G(N)) if and only if F(N) ∈ Ω(G(N)).
     Big Theta: let Θ(G(N)) = { F(N) | there exist constants c₀, c₁, N₀ such that, for all N ≥ N₀, c₀·G(N) ≤ F(N) ≤ c₁·G(N) }. F(N) = Θ(G(N)) if and only if F(N) ∈ Θ(G(N)).

  33. Big Omega, Theta, O.
     Big O is an upper bound: F(N) = O(G(N)) roughly says F(N) "≤" G(N).
     Big Omega is a lower bound: F(N) = Ω(G(N)) roughly says F(N) "≥" G(N).
     Big Theta is an exact bound: F(N) = Θ(G(N)) roughly says F(N) "=" G(N).

  34. Comparing two algorithms using big O. Suppose we are comparing two algorithms, A1 and A2. Suppose the critical step count for A1 is described by F(N), and F(N) = O(N²). Sometimes we simply say "the big O of A1 is N²", or "the runtime of A1 is O(N²)". Similarly, suppose the critical step count for A2 is described by G(N), and G(N) = O(N). The more efficient algorithm is the one whose critical step count function grows more slowly.

  35. Example. Let F(N) = N + 3 and G(N) = N² − 7·N + 10. Show that F(N) = O(G(N)). We need to find constants N₀ and c.

  36. Example: finding the constants. In this case, set F(N) = G(N) and solve for N:
     N² − 7·N + 10 = N + 3
     N² − 8·N + 7 = 0
     N = 1 or N = 7
  If N₀ = 7, then what is c? c = 1.
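These constants can also be verified numerically over a finite range (a sketch; the class and helper names are made up):

```java
public class BigOCheck {
    // Checks F(N) <= c * G(N) for all N in [n0, limit], with
    // F(N) = N + 3 and G(N) = N^2 - 7N + 10, the example above.
    public static boolean holds(int c, int n0, int limit) {
        for (int n = n0; n <= limit; n++) {
            long f = n + 3;
            long g = (long) n * n - 7L * n + 10;
            if (f > c * g) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(1, 7, 1000000)); // prints true
        // Starting below N0 = 7 fails: at N = 6, F = 9 but G = 4.
        System.out.println(holds(1, 6, 1000000)); // prints false
    }
}
```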

  37. Some rules to simplify examples.
     Sum: if F₀(N) = O(G₀(N)) and F₁(N) = O(G₁(N)), then (F₀ + F₁)(N) = O((G₀ + G₁)(N)).
     Product: if F₀(N) = O(G₀(N)) and F₁(N) = O(G₁(N)), then (F₀·F₁)(N) = O((G₀·G₁)(N)).
     Scalar multiplication: if k is a constant, then O(k·G(N)) = O(G(N)).
     Constant exponents: if F(N) = O(N^a) and b ≥ a, then F(N) = O(N^b).

  38. Prove or disprove: if F(N) = log_a N and G(N) = log_b N, then F(N) = O(G(N)). (Log N, log N, and lg N always mean the same thing here: log₂ N.)

  39. An example. Let F(N) = N⁴ + 17·N² + 10013. Show that F(N) = O(N⁴). Take G(N) = N⁴. For N ≥ 1:
     F(N) = N⁴ + 17·N² + 10013 ≤ N⁴ + 17·N⁴ + 10013·N⁴ = 10031·N⁴
  Let c = 10031 and N₀ = 1. Then we have, for all N ≥ 1, F(N) ≤ 10031·N⁴.

  40. Another example. Let F(N) = N⁴ − 17·N² + 1. Show that F(N) = O(N⁴). Take G(N) = N⁴. Then
     F(N) = N⁴ − 17·N² + 1 ≤ N⁴ + 17·N⁴ + 1·N⁴ = 19·N⁴
  What are c and N₀?

  41. More examples (on your own). True or false?
     1. N⁴ + 200·N³ + 100·N² = O(N⁴)
     2. 20·N³ = Ω(N³)
     3. 0.001·N³ = O(N²)

  42. More examples (on your own). True or false?
     1. A Θ(n) algorithm is O(n). True.
     2. A Θ(n) algorithm is O(n²). As n² is worse than n, this is true.
     3. A Θ(n²) algorithm is O(n³). As n³ is worse than n², this is true.
     4. A Θ(n) algorithm is O(1). As 1 is not worse than n, this is false.
     5. An O(1) algorithm is Θ(1). This is true, as the two complexities are the same.
     6. An O(n) algorithm is Θ(1). This may or may not be true depending on the algorithm; in the general case it's false. If an algorithm is Θ(1), then it certainly is O(n). But if it's O(n), then it may not be Θ(1). For example, a Θ(n) algorithm is O(n) but not Θ(1).

  43. Time vs. Space Complexity. The space complexity of an algorithm is the maximum amount of space (i.e., memory) that is needed, relative to the input size, at any point during its execution; this does not include the space used for the input. Time and space can be traded off via parallelism. Sequential vs. parallel algorithms: a sequential algorithm (or serial algorithm) is executed once through, from start to finish, without other processing executing concurrently or in parallel. What are some examples of problems that have possible parallel algorithms to solve them? Some which do not?
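As one illustrative answer to the slide's question (not from the slides): summing an array parallelizes naturally, since partial sums over chunks can be computed independently and then combined. A sketch using Java's parallel streams:

```java
import java.util.stream.IntStream;

public class ParallelSum {
    // Splits the array among worker threads, sums each chunk,
    // then combines the partial sums.
    public static long sum(int[] a) {
        return IntStream.of(a).parallel().asLongStream().sum();
    }

    public static void main(String[] args) {
        int[] a = IntStream.rangeClosed(1, 1000).toArray();
        System.out.println(sum(a)); // prints 500500
    }
}
```

By contrast, a computation where each step depends on the previous one (e.g., iterating a recurrence) has no obvious parallel decomposition.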

  44. Recursion. A recursive function is simply a function which calls itself:
     boolean RecursiveSearch(int target, int[] array, int start) {
         if (start >= array.length) {
             return false;
         } else if (array[start] == target) {
             return true;
         } else {
             return RecursiveSearch(target, array, start + 1);
         }
     }

  45. Recursion. Let's design a recursive function together: let's write a recursive function that sums the elements of an array.

  46. Recursion:
     int RecursiveSum(int[] array, int start) {
         if (start >= array.length) {
             return 0;
         } else {
             return array[start] + RecursiveSum(array, start + 1);
         }
     }
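A quick self-contained check of the function above (the demo class is illustrative); note that the recursion also uses O(n) stack space, one frame per element:

```java
public class SumDemo {
    // Recursively sums array[start..end]: base case past the end returns 0.
    public static int recursiveSum(int[] array, int start) {
        if (start >= array.length) {
            return 0;
        }
        return array[start] + recursiveSum(array, start + 1);
    }

    public static void main(String[] args) {
        System.out.println(recursiveSum(new int[] {1, 2, 3, 4}, 0)); // prints 10
    }
}
```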

  47. Recursion. Let's design a recursive function together: let's write a recursive function that sums the elements of an array. What were some of the important things to remember? (1) Check the base case, and (2) update the arguments. Now write a recursive function on your own: a function which takes an integer n as input and returns n! (n factorial).

  48. Recursion:
     int factorial(int n) {
         if (n <= 1) {      // base case at n <= 1, so factorial(0) = 1 as well
             return 1;
         } else {
             return n * factorial(n - 1);
         }
     }
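A self-contained check (the class name is illustrative); the base case is written as n <= 1 so that factorial(0) = 1 also works, avoiding infinite recursion for n = 0:

```java
public class FactorialDemo {
    // n! computed recursively: n * (n-1) * ... * 1, with 0! = 1.
    public static long factorial(int n) {
        if (n <= 1) {
            return 1;
        }
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
        System.out.println(factorial(0)); // prints 1
    }
}
```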

  49. Searching. An extremely important problem that must be solved for many large data sets is that of searching for specific values in the data. Sometimes we want to know whether a piece of data exists in a collection (boolean search); sometimes we want to look up associated data given a search key (indexed search).

  50. Linear search. Given a linear data set, i.e. an array, loop through the entire set and compare each element to the search value:
     boolean LinearSearch(int searchVal, int[] array) {
         for (int i = 0; i < array.length; i++) {
             if (array[i] == searchVal) {
                 return true;
             }
         }
         return false;
     }
  (Refresher) What is the worst-case running time for an n-element array?
