Algorithm Analysis

Weiss Ch. 2.

How is the running time of algorithms described in a machine-independent manner? Algorithm growth rates. Calculating growth rates from code. Leads to idea of provably fast, slow, or unusably slow algorithms, where we take speed to be measured by the number of instructions executed as a function of problem size. Problems with only impossibly slow algorithms are "intractable" problems (e.g. "NP-complete" problems like traveling salesman, knapsack...). Bad: brute-force search of things that get exponentially larger with N (e.g. the number of N-digit integers is 10^N). Or binary trees: 20 decisions means a million possible outcomes.

Chapter has nice friendly set of definitions and examples of complexity classes (different growth rates) and the "Big-Oh" notation. Then illustrates how to relate programming constructs like for loops to the complexity analysis. Then an extended study of four algorithms for the maximum subsequence problem with different complexity (O(n^3), O(n^2), O(n log n), O(n)). Motivates appreciation for logarithmic algorithms like binary search and motivates our search for quick graph and tree algorithms for, say, search trees, dictionaries, etc.

Definitions

Big-Oh notation. Usually you see: "this algorithm is obviously O(f(N))", or "O(g(N)) is preferred to O(f(N))", etc.

For us, the f(N) and g(N) functions are a small set of simple functions like f(N) = N, or N^power, N log N, log N, 2^N. Think of them as growth rates. The definitions mean that, up to a constant of multiplication, you can show your function grows slower, faster, or the same as some other function (practically, the f() and g()'s above.)

Weiss says that his T(N)'s are functions, which could lead us to think of them as what their algorithms compute. He means they are the time functions of the associated algorithms: their running times.

FOCS, rather more coherently, says T(N) are the growth-rate functions OF the algorithms of interest. Subtle difference, but Weiss treats his T(N) like FOCS does but seems to mis-identify them, so we may get an apples and oranges feeling...

Weiss writes T(N) = O(f(N)).

How to Measure Growth

Computational model has a unit of operation, or instruction, like addition, multiplication, comparisons, an inner loop, often determined by the problem. It is defined to take unit time since we don't care about the absolute size of the units.

Assume infinite memory, no access time.

So measure of "cost" of algorithm, or the time it takes, is the number of elementary operations it requires, taken to be the "running time".

Worst-case running times are usually used even if pessimistic: they are easier to compute. Average-case running times are of interest but sometimes not obvious how to define average, and they're hell to compute. Best case is sometimes amusing (e.g. for a sorting method, what's the best-case input order if any).

Other measures exist, like area for circuits, models including memory access, etc. Realistic computer and systems models are another story.

How do we measure input size? Good question. You can imagine algorithms that work with the numerical value of the input N, but also those that work on N itself as a sequence of bits, so the "length of N" is at issue, not the "value of N". Usually obvious.

Using the Definitions Directly: Big-Oh Proofs

Proofs that T(n) is O(f(n)) are all alike: From the formal definitions above, we "need a c and an n0 such that..." So--

  1. Find and state two specific constants: a positive c and a nonnegative n0, which are the witnesses to your claim.
  2. Use algebra to show that T(n) ≤ cf(n) for all n ≥ n0, using your c and n0.

E.g. Suppose we show our running time is T(n) = (n+1)^2. By staying awake in lecture, we observe that this is a quadratic function, O(n^2). How do we prove it? Choose witnesses n0 = 1 (a VERY common choice for reasons we'll see) and c = 4.

This c-guessing, rabbit-from-hat effect is good drama, but in practice we first compute what our witness c needs to be for our n0, using the reasoning below, and then appeal to it.

First, the obvious approach: n0 = 1 means n ≥ 1. So we want
a c such that (n+1)^2 = n^2 + 2n + 1 ≤ cn^2,
i.e. (c-1)n^2 - 2n - 1 ≥ 0. Bit of a mess, headed toward solving a quadratic.

Use our powerful inequalities -- to hell with exact solutions.
n0 = 1 means n ≥ 1. So now we need to prove (n+1)^2 ≤ 4n^2 provided n ≥ 1. Or n^2 + 2n + 1 ≤ 4n^2 as before.
Now n ≥ 1 implies both n ≤ n^2 and 1 ≤ n^2, so
n^2 + 2n + 1 ≤ n^2 + 2n^2 + n^2 = 4n^2, QED.

Wait! How'd we guess that c=4 would work? By jumping to the last line immediately, given n0 = 1 and that you see the basic trick: all the powers of n may be changed to n^2.

This n0 = 1 trick and associated reasoning lets you just glance at the claim 5n^3 + 100n^2 + 36n + 1095 ≤ cn^3
And say: for n0 = 1, c=2000 will work, and the smallest c would be 1236. You'll see that if n0 ≠ 1, we're talking about computing the intersection of two cubics, given we've somehow picked a c. Very messy and doesn't scale.
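
Spelled out (a quick check of that arithmetic in LaTeX, not Weiss's notation): for n ≥ 1 every lower power of n is at most n^3, so term by term

\[
  5n^3 + 100n^2 + 36n + 1095 \;\le\; 5n^3 + 100n^3 + 36n^3 + 1095n^3 \;=\; 1236\,n^3
  \qquad (n \ge 1),
\]

which is where the "smallest c would be 1236" comes from; any larger c (like 2000) works too.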

Common Growth Rates and Rules

constant, logarithmic, log-squared (log^2 N), N log N, linear, quadratic, cubic, polynomial, power (e.g. N^1.57), exponential.

Useful Facts or Rules:

  1. Sum rule: if T1(N) = O(f(N)) and T2(N) = O(g(N)), then T1(N) + T2(N) = O(max(f(N), g(N))).
  2. Product rule: T1(N) * T2(N) = O(f(N) g(N)).
  3. Constant factors and lower-order terms don't matter: a degree-k polynomial is O(N^k).

Algorithms to Running Times: Common Constructs

for loops: running time's at most the number of statements in the loop body times the number of iterations. For a singly nested loop with no function calls and N iterations, the body takes some constant time, so the constant goes away and we get O(N).

Nested loops: analyze inside-out.
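
A minimal Java sketch of these two rules (my own class and method names, purely for illustration):

// Hypothetical illustration of the loop rules.
public class LoopCost {

    // Single loop: O(1) body executed N times => O(N).
    static long sumTo(int n) {
        long s = 0;
        for (int i = 0; i < n; i++)     // N iterations
            s += i;                     // O(1) body
        return s;                       // total: O(N)
    }

    // Nested loops, analyzed inside-out:
    // the inner loop is O(N), and it runs N times => O(N^2).
    static long pairCount(int n) {
        long c = 0;
        for (int i = 0; i < n; i++)         // N iterations
            for (int j = 0; j < n; j++)     // O(N) inner loop
                c++;                        // O(1)
        return c;                           // total: O(N^2)
    }
}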

Consecutive statements (or loops, function calls, etc.). Their time adds, which by one of our three early rules means the answer is the Order (Big Oh) of the maximum-time one.

Conditionals: Running time is at most the time of the test plus the time of the larger (or largest) running time of the "then", "else" or "case" etc. statements.
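
A small Java sketch of the "consecutive statements" and "conditional" rules (my own class and method names; Arrays.sort and Arrays.fill are the standard library calls):

import java.util.Arrays;

public class ConsecAndBranch {
    static void process(int[] a) {
        int n = a.length;
        Arrays.sort(a);                 // O(N log N)
        for (int i = 0; i < n; i++)     // O(N)
            a[i]++;
        // consecutive statements: O(N log N) + O(N) = O(N log N)

        if (a[0] < 0)                   // O(1) test
            Arrays.fill(a, 0);          // O(N) "then" branch
        else
            a[0] = 1;                   // O(1) "else" branch
        // conditional: O(1) + max(O(N), O(1)) = O(N)
    }
}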

Function Calls: Consider the function an algorithm and recursively apply these rules!

Generally, need to work from inside out (from function calls, innermost loops, etc.).

(Really) Recursive Calls: Tail recursion is like a for loop, so easy. Otherwise, analysis leads to recurrence relations, which are a topic to themselves (for later).

Max. Subseq. Sum

Weiss 2.4.3 = Bentley Prog. Pearls Column 8.

Problem: find the largest sum of contiguous elements in a given N-vector of numbers. If they're all negative the answer's 0 for "no elements in sum". E.g. the answer for [3 -4 10] is 10, for [-4 3 10] it's 13.

1: Cubic Time: triply-nested for loop; the outer two pick all possible left and right endpoints of the sub-vector, the inner adds all the elements between them. Each loop has a maximum of N repetitions, which means O(N^3).
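
A Java sketch of this brute-force version (my class and method names, not Weiss's figure; it uses the "all negative means 0" convention above):

// Cubic max subsequence sum: try every (left, right) pair, re-sum from scratch.
public class MaxSubSumCubic {
    static int maxSubSum(int[] a) {
        int best = 0;                                        // empty sum if all negative
        for (int left = 0; left < a.length; left++)          // O(N)
            for (int right = left; right < a.length; right++) {  // O(N)
                int sum = 0;
                for (int k = left; k <= right; k++)          // O(N)
                    sum += a[k];
                if (sum > best) best = sum;
            }
        return best;                                         // three nested O(N) loops: O(N^3)
    }
}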

2: Quadratic Time: smarter ways to compute the sum in the inner loop. Either update the sum with the "current element" as the right end advances, or pre-compute the N cumulative sums of elements (O(N) to do this) and subtract two cum-sums to get any subsequence sum. Either way the inner step becomes O(1). Morals: inner loops repay scrutiny, keeping subresults is often smart.
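
A sketch of the running-sum variant (again my own names), where the O(N) inner re-summing loop has become an O(1) update:

// Quadratic max subsequence sum: keep a running sum as the right end advances.
public class MaxSubSumQuadratic {
    static int maxSubSum(int[] a) {
        int best = 0;
        for (int left = 0; left < a.length; left++) {            // O(N)
            int sum = 0;
            for (int right = left; right < a.length; right++) {  // O(N)
                sum += a[right];              // O(1) update replaces the inner loop
                if (sum > best) best = sum;
            }
        }
        return best;                          // O(N^2)
    }
}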

3: NlogN time: Typical divide and conquer approach. Cut problem in half, get maxSSS of each half, AND in O(N) time compute the max sum of subsequences that slop over the boundary you used. Report the max of these three subproblems up the recursive chain. This yields the famous Quicksort recurrence T(1) = 1, T(N) = 2T(N/2) + O(N), whose solution is O(NlogN) -- we'll visit this again, next time in our overview of Trees.
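
A quick sketch of why that recurrence gives NlogN (writing the O(N) term as cN and assuming N is a power of 2):

\[
\begin{aligned}
T(N) &= 2T(N/2) + cN \\
     &= 4T(N/4) + 2cN \\
     &= 8T(N/8) + 3cN \\
     &\;\vdots \\
     &= N\,T(1) + cN\log_2 N \;=\; O(N \log N).
\end{aligned}
\]

Each of the log_2 N levels of the recursion does cN total work, plus N unit-cost base cases.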

4: Linear time: Some actual thought (!!) leads to vast improvements over the quadratic algorithm: e.g. "can't have a (sequence of) negative number(s) as the first member(s) of the maxSS". Get an O(N) "scanning" algorithm that zips thru the sequence once.
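
A sketch of that scanning idea in Java (my names, in the spirit of the chapter's fourth algorithm): whenever the running sum goes negative it can't be a useful prefix, so drop it.

// Linear max subsequence sum: one pass, O(N).
public class MaxSubSumLinear {
    static int maxSubSum(int[] a) {
        int best = 0, sum = 0;
        for (int x : a) {                  // single scan: O(N)
            sum += x;
            if (sum > best) best = sum;    // new best subsequence ending here
            else if (sum < 0) sum = 0;     // a negative-sum prefix can't help; drop it
        }
        return best;                       // 0 if all elements are negative
    }
}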

Why do we care? Weiss doesn't, but Bentley does give some dramatic statistics and tables: he implements, times the results, figures out the multiplicative constants for the O(f(N)) formulae, and finds:
For N = 1M, the times are:
1: 41 years. 2: 1.7 weeks. 3: 11 secs. 4: 0.48 secs.

In a minute, how big a problem (N) can be solved?
1: 3600. 2: 10,000. 3: 1M. 4: 2.1x10^7.

Fast Algorithms! Log time examples

Divide and conquer algorithms, as in QuickSort say, are often NlogN (the log often comes from splitting (sub)problems in half).

Binary Search is common in dictionaries or phone books. Open in the middle, look, go to the L or R half and repeat. 20 questions means we can find one item among a million possible answers. O(logN) algorithm for lookup. But it requires O(N) for insertion (in an array representation). Of course we might need a sort to be able to use Binary Search, so that would have to be counted (or amortized, as we DS professionals say).
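
A standard Java sketch of the lookup (my class name; the array is assumed to be already sorted):

// Binary search in a sorted array: halve the candidate range each probe => O(log N).
public class BinarySearch {
    static int indexOf(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;      // unsigned shift avoids overflow of lo + hi
            if (a[mid] < key)      lo = mid + 1;
            else if (a[mid] > key) hi = mid - 1;
            else return mid;                // found
        }
        return -1;                          // not present
    }
}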

Euclid's algorithm is an Oldie but Goodie, a very clever way to find the greatest common divisor (largest integer dividing both) of two integers by repeatedly calculating remainders. They get smaller, and Weiss proves (Th. 2.1) that the size of the remainder goes down at worst by half every two iterations of the algorithm (one of which swaps the operands).

Th 2.1: if M>N, M mod N < M/2. Easy proof by cases N < M/2 and otherwise. So another O(logN) algorithm: divide size of problem by two every 2 iterations at worst.
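
A minimal Java version of Euclid's algorithm (my names; the loop body matches the remainder-swapping description above):

// Euclid's gcd: gcd(m, n) = gcd(n, m mod n). By Th. 2.1 the remainder at least
// halves every two iterations, so the loop runs O(log N) times.
public class Gcd {
    static long gcd(long m, long n) {
        while (n != 0) {
            long rem = m % n;
            m = n;                          // swap operands
            n = rem;
        }
        return m;
    }
}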

Fast exponentiation. This is the trick of using previously computed powers (aka Dynamic Programming): so N^2 is one multiplication, times itself is N^4, ditto is N^8, so there's N^8 computed with 3 = log(8) *'s, not 7. Nice little recursive algorithm (Fig. 2.11) and analysis. At most two multiplies are needed to get a problem half as big. Also Weiss discusses some coding improvements and snares in this little 9-liner, worth a look.
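
A short recursive Java sketch in the spirit of that description (my names, not a copy of Fig. 2.11):

// Fast exponentiation by repeated squaring: at most two multiplies
// reduce the exponent by half, so O(log N) multiplications total.
public class Power {
    static long pow(long x, int n) {
        if (n == 0) return 1;
        long half = pow(x, n / 2);               // solve the half-size problem
        if (n % 2 == 0) return half * half;      // even exponent: one multiply
        else            return half * half * x;  // odd exponent: two multiplies
    }
}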

Code to O(N)

x = 150;                        % O(1)
N = scanf(input);               % size of one array dimension, O(1)

Arr1 = MakeRandArr(N,2);        % make 2-D NxN random array, O(N^2)
%*******insert line here
if Sum(Arr1) < x                % O(N^2)
  x++;                          % O(1)
else
  Arr2 = MatMult(Arr1, Arr1);   % O(N^3)

PrintArr(Arr2);                 % O(N^2)

Looks O(N^3) to me, and Ω(N^2). Is that right? What changes with O(N), Ω(N), and Θ(N) if we also made a 3-D array by adding the line

...
 %*******insert line here
Arr3 = MakeRandArr(N,3);
...

CB's Putative Project Categories

  1. Visualizing and Organizing Complex Systems
    -- .jar files
    -- Scheduling
    -- (Computer) inventories
    -- Filesharing
  2. Games
    -- Video
    -- Board (chess)
  3. Embedded systems
    -- Smart house, motion, temperature sensing, remote control
    -- Robots (e.g. blimp)
    -- Reverse engg?
  4. Web Programming
    -- Optimizing Javascript
    -- Reverse engg?
  5. AI
    -- Nat. Lang. Understanding: Chatbot
    -- Other NLU Apps
    -- Machine learning
  6. Geog. Info. Systems
    -- GoogleMaps and UR Campus
  7. Music
    -- Analysis (genre, name-that-tune, filtering, effects...)
    -- Synthesis (micro-tone scales, new timbres (instruments), modeling existing instruments (ukelele? ocarina?), composition (fractals, random, natural data, new instruments).
    -- Soundtrack for animated video (see below)
  8. Graphics
    -- And AI: smart game mods? e.g. Quagents
    -- Visualization and intelligent agents (W. Gibson meets TRAINS?)
    -- Visualizing big data (VISTA Collaboratorium, Gd. floor Carlson)
    -- make Animated video
    -- Rendering ( 173 Raytracing Project?)
  9. Theory
    -- CS200 type project (Lane Hemaspaandra Advisor)


---

Last update: 9/11/14