CSC 173: Gaussian Reduction Assignment. Writeup 60%, Code 40%

Overview and Motivation

We don't have a Matlab textbook, so the best I can offer is to recommend Stormy Attaway's Matlab: A Practical Introduction to Programming and Problem Solving. Online there is CSC 160, whose lectures go through programming with Matlab and include some special topics such as plotting with Matlab.

Does partial pivoting improve numeric accuracy? We'll write both non-pivoting and partial-pivoting Gaussian Elimination (GE) solvers and generate statistics to compare them on systems with random coefficients, including cases constructed with the intent of being difficult. Random matrices (made with rand()) have very little likelihood of needing to pivot, so are a well-behaved domain.

Why write our own solver if one exists already? After all, Matlab's idiom for "solve Ax = B" is x = A\B. In fact, Matlab does not offer a non-pivoting GE solver. "Normally" we always want to pivot, but the meat of this assignment (below) tries to establish exactly why, since there still seem to be open questions about it. It's really not unusual to have to build our own version of commonly-available functions just so we can vary, instrument (collect statistics), and experiment with them. Our domain of random matrices is probably the most demanding one in which to make a case for pivoting.

This assignment was originally for an introductory programming class. Feel free to ignore all the code-structuring instructions. If they're useful, fine. I've tried to mark this introductory material "Optional" below.

As usual, this is an open-ended project. Use your instincts and judgement to choose and ignore options with the goal of maximizing results while minimizing effort.

We're looking for a professionally written and beautiful report with grabby graphics and some intelligent experimental design. Try to answer the "why pivot" question as convincingly as possible, and of course describe any meta-discoveries about difficulties in the assignment, novel ideas or features you produced, etc.

Non-pivoting Gaussian Elimination

Refer to the linear algebra lecture overheads or the web or a numerical analysis text and write a non-pivoting GE solver.

Optional Material: Code Organization etc.

Write a simple function gauss_reduce() with prototype

function solution_vec = gauss_reduce(param_mat, const_vec)

to solve a system of linear equations using direct Gaussian reduction. This function takes a square matrix of parameters and a vector of constants, and returns a solution vector. (There are other possibilities, e.g. a function that takes a single matrix representing both the coefficients and the constant vector, but let's not worry about that for now). Your function should be able to take systems of any size (up to practical limits). It should check that the matrix is square, and that the constant vector has the appropriate number of elements, printing an error message and returning the zero vector if these solvability conditions are violated. Your program should also print an error message and return the zero vector if it runs into a zero pivot, which might indicate a singular system (though it does not necessarily imply one).

In general zero pivots may not be obvious, given the initial coefficient matrix. If the top left element, the first pivot, is zero that IS obvious, but the process of reducing the first row may set the (2,2) element (the second pivot) to zero when it was non-zero before, and so on. The potential pivot element must be checked as the first step in the column-reduction operation (the reduce_column() function described below).

Optional: Debugging Checks:

Use rand and ones to make a parameter matrix A, a 4x4 random matrix with elements between -1 and 1. It turns out that it's really unlikely these matrices will cause any zero-pivot problems. Also make constant vector B, a 4x1 similarly-random column vector. Run your gaussian-elimination solver on the (A, B) system to get your solution vector X. Check X two ways:

  1. Check that AX = B.
  2. Use the Matlab built-in linear equation solver X = A\B to see if Matlab gets it right.
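
A minimal version of this check might look like the following sketch (it assumes your gauss_reduce() from above; names and tolerances are up to you):

A = 2*rand(4) - ones(4);       % 4x4, elements in [-1, 1]
B = 2*rand(4,1) - ones(4,1);   % 4x1 column vector, elements in [-1, 1]
X = gauss_reduce(A, B);
disp(norm(A*X - B));           % check 1: should be very close to zero
disp(norm(X - A\B));           % check 2: should agree with Matlab's solver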

Optional: Dealing with 0 pivot
Again, use rand and ones to make a parameter matrix A, a 4x4 random matrix with elements between -1 and 1. Make constant vector B, a 4x1 similarly-random column vector. Set
A(1,1) = 0. Run your gaussian-elimination solver on the (A, B) system. You should get an error for zero pivot.

Now swap the first two rows of A and B to get Aswap and Bswap and try your gaussian-elimination program on the Aswap and Bswap system. It should run OK. Recall that row-swapping is an Elementary Row Operation (ERO) that simply re-arranges the order of the same equations, so the solution to the system is the same as the unswapped system. Congratulations, you've gotten over a zero-pivot problem. Remember the answer (X) vector.
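
In Matlab, the swap can be done with index vectors, for example (a sketch):

Aswap = A([2 1 3 4], :);              % swap rows 1 and 2 of A
Bswap = B([2 1 3 4]);                 % same swap applied to the constant vector
Xswap = gauss_reduce(Aswap, Bswap);   % should now run without a zero-pivot error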

One bad thing about built-in Matlab operations is we have no idea what's going on inside. One good thing is that they usually are reliable and "have seen it all" and are prepared for nasty inputs. Since X = A^(-1) B, in some metaphorical sense X = B/A, and in Matlab, writing the last equation with a "divided into" (\) rather than a "divided by" (/) operator invokes linear equation solution.

So, try Xswap = Aswap\Bswap;, which should work since you fixed the zero-pivot problem. Then similarly try X = A\B. What can you conclude about the built-in \ operator?

In fact, your row swap manually did one partial pivot operation on the first column of A. Partial pivoting is the use of row swaps to avoid zero pivots. Before the elimination of the below-diagonal elements of any column begins, partial pivoting swaps the row containing the largest-magnitude element at or below the main diagonal in that column up to the diagonal position, so that element acts as the pivot. The main part of the assignment has you implement partial pivoting and investigate whether it improves, as is often claimed but not proved, the numerical accuracy of solutions.

Optional: More Code Organization

Your gaussian-elimination function should use secondary functions called by your main function as a structuring mechanism. A good place to start is to note that after the initial parameter checking, the process consists of two main steps. First, reduce the coefficient matrix to upper triangular form (modifying the constant vector in parallel). Second, perform the back-substitution to obtain the solution vector from the upper triangular matrix and the (modified) constant vector. We can write a secondary function to perform each step. Our main function thus starts out looking like this:

function solution_vec = gauss_reduce(param_mat, const_vec)
  % check for consistent size of param_mat and const_vec
  ...
  % reduce coefficient matrix to upper triangular form, modifying the
  % constant vector appropriately
  [ut_mat, new_const_vec] = ut_reduce(param_mat, const_vec);

  % Compute the solution vector using back substitution
  solution_vec = back_subst(ut_mat, new_const_vec);

  % we are done
end

The ut_reduce() function uses its own subsidiary functions. Specifically, you should write a function called reduce_column() with prototype

function [new_mat, new_const_vec] = ...
    reduce_column(param_mat, const_vec, column)

that returns a modified matrix (and constant vector) in which the input matrix has been reduced so that all the elements below the (col,col) diagonal element are zero. Initially it checks to see the (col, col) element is NOT zero, of course! This can be used iteratively to modify the original matrix and constant vector to produce an upper triangular form. Note that using a function to modify the matrix and vector passed as parameters in the following way,

[cur_mat, cur_vec] = reduce_column(cur_mat, cur_vec, cur_col);
is perfectly legitimate (as long as you don't need the old partial solutions) and a good way of structuring the process.
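
Here is a minimal sketch of reduce_column(), assuming the reduce_row_at_col() helper described next (error handling kept simple):

function [new_mat, new_const_vec] = ...
    reduce_column(param_mat, const_vec, column)
  % zero out every element below the (column, column) pivot
  if param_mat(column, column) == 0
    error('zero pivot encountered in column %d', column);
  end
  new_mat = param_mat;
  new_const_vec = const_vec;
  for row = column+1 : size(new_mat, 1)
    [new_mat, new_const_vec] = reduce_row_at_col(new_mat, new_const_vec, ...
                                                 column, column, row);
  end
end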

The reduce_column() function might itself call yet another function to reduce a specified column in a row to 0 by adding a multiple of another row, returning again, both a modified coefficient matrix and a modified constant vector. The prototype would look like

function [new_mat, new_const_vec] = ...
    reduce_row_at_col(param_mat, const_vec, ...
                      col, row_added, row_reduced)

This function adds a multiple of row_added to row_reduced so that the specified col in row_reduced is 0. The same operation is carried out on the corresponding position of the constant vector. (Basically like the "ERO" Attaway describes at the bottom of page 343)
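
One possible body for it, as a sketch (not the only reasonable way to write it):

function [new_mat, new_const_vec] = ...
    reduce_row_at_col(param_mat, const_vec, col, row_added, row_reduced)
  % add a multiple of row_added to row_reduced so that (row_reduced, col) becomes 0
  factor = -param_mat(row_reduced, col) / param_mat(row_added, col);
  new_mat = param_mat;
  new_const_vec = const_vec;
  new_mat(row_reduced, :) = new_mat(row_reduced, :) + factor * new_mat(row_added, :);
  new_const_vec(row_reduced) = new_const_vec(row_reduced) + factor * new_const_vec(row_added);
end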

The back-substitution function can similarly be constructed using a subsidiary function that takes the upper triangular matrix, the (modified) constant vector, and a partially-filled-in-from-the-end solution vector, (entries from col + 1 to the end already known) and produces a modified partial solution vector with entry col now filled in. The prototype would look like

function new_part_solution_vec = ...
    back_subst_for_col(ut_mat, new_const_vec, ...
                       column, part_solution_vec)

Note that if you call for a column to be filled without previously filling in all the higher columns, the function will probably not work as desired, so you need to be a little careful how you use it. Since it is your function, and not one you are publishing to the world, this is OK. Even so, you should leave yourself a note in comments.
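
As a sketch (assuming the solution and constant vectors are column vectors):

function new_part_solution_vec = ...
    back_subst_for_col(ut_mat, new_const_vec, column, part_solution_vec)
  % entries column+1 .. end of part_solution_vec are assumed already filled in
  n = size(ut_mat, 1);
  known = ut_mat(column, column+1:n) * part_solution_vec(column+1:n);
  new_part_solution_vec = part_solution_vec;
  new_part_solution_vec(column) = ...
      (new_const_vec(column) - known) / ut_mat(column, column);
end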

Courage, mes amis!:
Despite all the above verbiage, the amount of programming needed is small. CB did the Gauss. Elim. solution with no and partial pivoting (but not the experiments) in 40 lines (32 for elimination with pivoting, 8 for back-substitution). He didn't use exactly these functions, but they are similar in spirit and small. Most are 4-liners; there's a 7 and a couple of 6's; the longest is 8 for back-substitution. This doesn't count his dense and helpful comment lines, of course. There are some terse but not-well-structured solutions on the web too.

Science: Why Pivot?

Partial Pivoting

Write a second version of your function called gauss_reduce_pp() that uses partial pivoting.

Also, write a function with prototype

function [param_mat, const_vec] = random_test_case(n)

to generate random test cases of a specified size n, specifically coefficient matrices and constant vectors with values that are floating point values (not integers) between -100.0 and 100.0.
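
A sketch of one way to write it (the same scaling trick as in the debugging section, just with a different range):

function [param_mat, const_vec] = random_test_case(n)
  % n x n coefficients and n x 1 constants, uniform floats in [-100.0, 100.0]
  param_mat = 200*rand(n) - 100;
  const_vec = 200*rand(n, 1) - 100;
end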

Optional: Code Org. Write a secondary function that identifies and swaps in the correct pivot row for a specified working column (modifying a matrix and constant vector) to help you do this. The prototype would be something like

function [new_mat, new_const_vec] = pivot(param_mat, const_vec, col);
A function to swap a specified pair of rows (modifying both the coefficient matrix and constant vector) with prototype
function [new_mat, new_vec] = swap_rows(mat, vec, row1, row2);
will be useful.
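
A sketch of the pair (the largest-magnitude element at or below the diagonal is chosen as the pivot, per the description above):

function [new_mat, new_const_vec] = pivot(param_mat, const_vec, col)
  % index of the largest-magnitude element at or below the diagonal in this column
  [~, offset] = max(abs(param_mat(col:end, col)));
  pivot_row = col + offset - 1;
  [new_mat, new_const_vec] = swap_rows(param_mat, const_vec, col, pivot_row);
end

function [new_mat, new_vec] = swap_rows(mat, vec, row1, row2)
  new_mat = mat;
  new_vec = vec;
  new_mat([row1 row2], :) = mat([row2 row1], :);
  new_vec([row1 row2]) = vec([row2 row1]);
end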

Correctness, Timing and Distribution Tests

Now that we've got our basic tools, the research begins.

Prologue: Precision

The basic issue is the accuracy of numerical calculations. That's related to how many bits are reliably used by operations and stored as (partial) answers: in the jargon, the precision of the number representations. By default, Matlab's operations and values are double precision, or 64 bits. It's hard to investigate numerical accuracy issues using this much precision. So I suggest doing at least the first two of the upcoming cases. Naively, that could double your experiments, so you should probably do some small pilot studies... maybe you can quickly decide to do all your work in one of these three precision regimes:

  1. "Normal". Say nothing about precision, Matlab will give you double precision values and operations. Easiest to type, worst chance of getting meaningful results (IMHO)
  2. Use the single() function when creating matrices and assigning variables. It seems that Matlab will then use single-precision operations on them (see the sketch after this list). See the SP reference, more than halfway down that long page, below the images. CB thinks the minor troubles involved would pay off in clearer findings about precision.
  3. Go the whole shot and use variable precision arithmetic, e.g. vpa(), digits().... This is unexplored territory for CB as of 2012: definitely Extra Credit, definitely a way to FORCE precision-related effect to the surface.
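
For example, to set up an experiment in single precision you might create the inputs like this (a sketch):

A = single(200*rand(10) - 100);    % single-precision coefficient matrix
B = single(200*rand(10,1) - 100);  % single-precision constant vector
disp(class(A));                    % 'single'
disp(eps(single(1)));              % about 1.2e-7, versus eps = 2^(-52) for double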

Equality-comparison may not be needed for this assignment, but if it turns up, note that you should NOT use a straight == equality check to compare real numbers, and thus matrices. Here are the issues as elucidated by Verner: Floating Point Comparisons in Matlab. In the first example, 0.8-0.7 is compared == to 0.1: failure. Further, the two numbers differ in their last 6 bits! Worse, these errors can compound, and as calculations (like a summation or a Gaussian elimination algorithm) proceed, significance is progressively lost.

In Verner, it helps to know that in Matlab eps(x) is defined as the distance (real number difference) between x and the next-largest floating point number of the same precision as x. eps is defined as eps(1), and for Matlab's 64-bit floating point representation that equals 2^(-52). It's related to the "rounding error" of common parlance.

You'll note that Verner introduces Relative Error (RE) very early, and suggests 10^(-12) to 10^(-15) for double- and 10^(-6) for single-precision. He suggests experimentation, finding the RE that works for your experiment (and note the formulae using RE and EPS for computations with many operations, starting about halfway through).

Optional Debugging and Check:

Run your programs on examples of 2x2, 3x3, and 4x4 systems, and check that the solutions are correct with a function that takes the coefficient matrix, the solution vector, and the constant vector, multiplies the coefficients by the solution and checks that the result is equal to the constant vector.
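
Given the floating-point comparison issues above, a tolerance-based check is safer than ==. Here is a sketch; the function name, the relative-residual form, and the default tolerance are just one reasonable choice:

function ok = check_solution(param_mat, solution_vec, const_vec, tol)
  % relative residual check: ||A*x - b|| should be tiny compared to ||b||
  if nargin < 4
    tol = 1e-9;   % loosen (e.g. 1e-5) if working in single precision
  end
  ok = norm(param_mat*solution_vec - const_vec) <= tol * norm(const_vec);
end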

Timing Tests

Use tic, toc to time your non-pivoting solver running on 10x10, 100x100, and 1000x1000 systems (or any similar series that shows meaningful answers). Do the results seem to support the theoretical result that the running time on an NxN system is proportional to N^3? A plot over a range of sizes would be convincing one way or another. Repeat using the partial pivoting program and compare the times. What percentage overhead does adding pivoting entail?
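
A sketch of the timing loop (the sizes and the log-log plot are just one reasonable choice):

sizes = [10 50 100 200 500 1000];
times = zeros(size(sizes));
for k = 1:numel(sizes)
  [A, B] = random_test_case(sizes(k));
  tic;
  X = gauss_reduce(A, B);
  times(k) = toc;
end
loglog(sizes, times, 'o-');   % a slope near 3 on a log-log plot suggests O(N^3)
xlabel('N'); ylabel('seconds');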

Why Pivot? Basic Case

Run both the solvers (no pivot, partial pivot) on 100 random 3x3 examples (generated by your random_test_case() function, thus elements will be in the range [-100.0, 100.0]), and compute the mean and standard deviation of the 100 solutions. That is, the mean and std of the 100 values of X(1), X(2), X(3). This is just a vectorized mean and std calculation if you save the solution vectors in a 3 x 100 matrix. Your result will be two vectors (three mean X values and three stds) for each solver. (Sanity Check: What should the means be like? Remember the characteristics of the inputs!). (Hint: They should be similar, and the mean should be small compared to 100.0). Presumably you will use a program (function or script) to compute the mean and std. Otherwise you will be typing till the cows come home...

I think the prose description of these experiments is confusing and possibly ambiguous. The following three pseudocode versions are meant to be clearer, but are new and undebugged as of Nov 2014. Hope they help...

Basic Case: Ax=B, A is 3x3, B is 3x1. Random elts in [-100,100].
  initialize Xvals, Zvals to zeros(3,100)
  for trial t = 1 to 100 {
    generate new A, B
    x = no-pivot solution to Ax=B
    z = pivot solution to Az=B
    Xvals[:,t] = x,  Zvals[:,t] = z
  }
  output mean(Xvals), std(Xvals), mean(Zvals), std(Zvals)
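A Matlab version of that loop might look like the following sketch (it assumes your gauss_reduce(), gauss_reduce_pp(), and random_test_case() functions):

Xvals = zeros(3, 100);
Zvals = zeros(3, 100);
for t = 1:100
  [A, B] = random_test_case(3);
  Xvals(:, t) = gauss_reduce(A, B);      % no pivoting
  Zvals(:, t) = gauss_reduce_pp(A, B);   % partial pivoting
end
% mean and std along the second dimension: one value per component of the solution
disp([mean(Xvals, 2), std(Xvals, 0, 2)]);
disp([mean(Zvals, 2), std(Zvals, 0, 2)]);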

Running Calculations: above and below I use a batch calculation of mean and std. deviation. That's space intensive, but I think we can use Matlab's vectorized calculations to express what we need to do simply, something (not guaranteed) like the pseudocode.

But we can also calculate means and stdevs "on line", i.e. without storing all the values. Here's an oldie but goodie algorithm from CS's earlier days, first found it seems in Don Knuth's Art of Computer Programming, Vol. 2 p. 232, 3rd edition. We want a vectorized version of this to get means and stdevs of vectors, but that's why we pay Matlab:

  initialize:
    m = 0;   % mean
    S = 0;   % for variance and std. dev
    n = 0;   % no. of samples (for us, vectors)
  for each incoming sample (vector) x:
    prev_mean = m;
    n = n + 1;
    m = m + (x - m)/n;
    S = S + (x - m)*(x - prev_mean);

and at any time (for us, when all vectors are processed), variance = S/n (or probably more accepted is S/(n-1)). And stdev = sqrt(S/n) or sqrt(S/(n-1)) depending on arcane statistical preferences (see a statistics book).
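
In Matlab the update works unchanged on whole solution vectors if the multiply is made elementwise; a sketch (in the spirit of the pseudocode above, using the basic-case setup):

m = zeros(3, 1);  S = zeros(3, 1);  n = 0;
for t = 1:100
  [A, B] = random_test_case(3);
  x = gauss_reduce(A, B);            % or gauss_reduce_pp(A, B)
  prev_mean = m;
  n = n + 1;
  m = m + (x - m)/n;                 % running mean, one entry per component
  S = S + (x - m).*(x - prev_mean);  % running sum for the variance
end
stdev = sqrt(S/(n-1));               % or sqrt(S/n), per your statistical preference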

This algorithm has good numerical accuracy properties. Other algorithms can have horrid problems: a quick look at the first part of this signal processing blogpost is instructive!

Why Pivot? Perturbation Experiment

Here you only care about statistics over the 100-trial runs with perturbed matrices described below. For each size of perturbation we're interested in the difference pivoting makes to the accuracy of the answers. Thus if pivoting were doing some good, we might expect smaller errors between the perturbed and unperturbed solutions in the pivoting case.

For each of 100 random 10 x 10 systems generated by your program (i.e. with coefficient values between -100.0 and 100.0), do the following for each of your two solvers. First find the solution to the system and record it. Then perturb the system by adding a (different) random (real-valued) number between -1.0 and 1.0 to each of the matrix coefficients and each constant element. (Note that this range is 1% of the original range.) Solve the perturbed system. Compute the distance between the solution to the perturbed problem and the solution to the original problem, i.e. compute

E = sqrt( Δx_1^2 + Δx_2^2 + Δx_3^2 + ... + Δx_10^2 ).

Here Δx is the vector difference between the solution vectors of the original and perturbed problems. The square root of the sum of its squared elements (E above) is a measure of the error. Thus each of the 100 trials generates two errors: one for the non-pivoting solver and one for the partial-pivoting solver. Each is the Euclidean distance between the solution vectors for the unperturbed and perturbed systems. Then compute the mean and standard deviation of the two sets of 100 errors. Thus the final output of the experiment is four numbers: a mean and a standard deviation for each kind of error (pivoting and not).

Ax=B, A is 10x10, B is 10x1. Random elts in [-100,100]. P, Q are perturbations of the same sizes as A and B, with random elts in range [-e, e]: "small perturbations".
  for each choice of precision (if you do that) {
    for each choice of perturbation size e {
      for trial t = 1 to 100 {
        generate new A, B, P, Q;  A' = A+P,  B' = B+Q
        with no pivot, solve Ax=B and A'z=B'
        ErrNP[t] = sqrt( sum( (x[i]-z[i])^2 ) )
        with partial pivot, solve Ax=B and A'z=B'
        ErrP[t] = sqrt( sum( (x[i]-z[i])^2 ) )
      }
      get ave(ErrNP), std(ErrNP), ave(ErrP), std(ErrP)
    }
  }
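
In Matlab, the error for one trial is just the norm of the difference between the two solution vectors, for example (a sketch, using the P and Q of the pseudocode above):

x  = gauss_reduce(A, B);            % solution of the original system
xp = gauss_reduce(A + P, B + Q);    % solution of the perturbed system
E  = norm(x - xp);                  % square root of the sum of squared differences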

Our idea is to generate some statistics. In fact, a total of maybe 40 numbers could be your final output: Take the absolute values of the 100 errors from your run as input, and generate their mean and standard deviation (that's two numbers so far) for each of the 5 perturbation sizes (10 numbers) and two pivoting conditions (twenty) and two precision settings (forty). More precision settings, perturbation sizes, etc. increase the number of statistics you'd generate.

Forty numbers still sounds like a lot, so a good idea is to make one plot with two data series plotted. One is means and std error-bars (matlab can plot those, use help or the documentation) for pivoting. The other gives the values for the no-pivoting case. So we'll see a single visual with all the information, readily comparable...each "data point" represents 100 experiments or so.

Now we think it's pretty likely that you won't get a significant result (interesting differences in the means and stdevs) with that perturbation in the range [-1.0, 1.0]; if not, try smaller perturbations -- range [-0.01, 0.01]? [-0.00001, 0.00001]? etc. -- to see if you can uncover anything with them. Or now would be a good time to check whether working in single precision reveals clearer differences than the default.

Or, just automate the process and make it less interactive? Go ahead and run a series of perturbation trials, each time with the perturbation smaller by a factor of 10, and make a semilog plot of your results. This is not only interesting, but would add substantial visual 'reader appeal' (or 'grader appeal', if you care about that) to your writeup.
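
One way to do such a plot, as a sketch (meanErrNP, stdErrNP, meanErrP, stdErrP are hypothetical names for the statistics you saved, one entry per perturbation size):

perts = 10.^(0:-1:-5);                        % perturbation sizes 1, 0.1, ..., 1e-5
errorbar(perts, meanErrNP, stdErrNP, 'o-');
hold on;
errorbar(perts, meanErrP, stdErrP, 's-');
set(gca, 'XScale', 'log', 'YScale', 'log');   % log axes so the small perturbations show up
legend('no pivoting', 'partial pivoting');
xlabel('perturbation size'); ylabel('mean error (with std error bars)');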

Recall Our Big Question:
Does partial pivoting make any significant reduction in the mean error?

A small aside. Make sure that an error exit from your simple program because it encountered a zero pivot (or other reason) does not contaminate the data. The probability of a zero pivot occurring is quite low, but if it happened, the zero return vector would bollix the mean. Re-run the experiment (with new random values) if this ever happens. Actually, if you find this occurring on random inputs, check your programs (and random data generation) for errors, as the probability is really extremely low.

Why Pivot? Ill-Conditioned Systems and More

A random ill-conditioned 10 x 10 system can be produced by first generating the A coefficient matrix (and corresponding constant vector B) at random, using values between -100.0 and 100.0 as before. Then select 3 of the 10 rows (all different) and make one of them almost the average of the other two: take their sum/2 (the average) and add random "noise" values between -1.0 and 1.0 (1% of the original range) to its elements.

Repeat the perturbation experiment using 100 such ill-conditioned systems (that is, for each ill-conditioned system, compare the solutions of the original and of its perturbed version, as before).

Same as the Perturbation experiment, except also: R is a 10-long row-vector of random elts in range [-e, e]: "noise".
  for each choice of precision {
    for each perturbation size e {
      for trial t = 1 to 100 {
        generate new A, B, R;  replace A[3,:] with ((A[1,:] + A[2,:])/2) + R
        generate P, Q;  A' = A+P,  B' = B+Q
        with no pivot, solve Ax=B and A'z=B'
        ErrNP[t] = sqrt( sum( (x[i]-z[i])^2 ) )
        with partial pivot, solve Ax=B and A'z=B'
        ErrP[t] = sqrt( sum( (x[i]-z[i])^2 ) )
      }
      get ave(ErrNP), std(ErrNP), ave(ErrP), std(ErrP)
    }
  }

Again, test both non- and partial-pivoting cases. How do the mean and standard deviation of the error compare to those of the (mostly) well-conditioned random systems? How does the effect of partial pivoting compare?

Again again, if a random 'noise' addition in the 1% range doesn't yield any discernible differences in behavior, try a smaller range... 0.01%? 0.00001%? etc. Or reduce the precision. No guarantees, but it IS research and it DOES give you more to write about and it's easy to do. Again again, using a bigger set of random ranges, probably multiples of 10 as above, is a chance to show off your plotting prowess and make a scientific point visually, always better than a table or a bunch of numbers stuck in a paragraph of text.

It's probably fine just to make row 3 close to the average of rows 1 and 2. I'd do that first. To randomize a bit, if your experiment is in a "for i = " loop, you could randomize the choice of rows by selecting something like (i (mod 10) +1), ((i+1) (mod 10) +1), ((i+2) (mod 10) +1) (array indexing starts at 1...grrr). For "real" randomness, how to pick 3 random positions from 10 is a little bit of a puzzle, but not that hard (first pick one of 10, then one of the 9 remaining, then ...) But I really don't see choice of rows making much of a difference.
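
A sketch of the construction using randperm() to pick three distinct rows (on older Matlab without the two-argument form, take the first three entries of randperm(10) instead):

[A, B] = random_test_case(10);
rows = randperm(10, 3);           % three distinct row indices chosen at random
R = 2*rand(1, 10) - 1;            % noise row with elements in [-1, 1]
A(rows(3), :) = (A(rows(1), :) + A(rows(2), :))/2 + R;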

Extra Credit: Maybe see if matrix size affects either the perturbation or the ill-conditioned results. For example, use your previous work to pick a range of values for perturbation size and precision that are likely to cause some numerical accuracy problems, and repeat the perturbation experiment using 100 randomly generated systems of size 5, 10, 20, 50, 100, [200, 500, 1000] (the last 3 might take too much time, depending on your implementation). Is the mean error correlated with the size of the system? Attempt to explain.


Last Change: 12/02/14: CB