COMPUTER PROJECT
ITERATIVE METHODS FOR NONSYMMETRIC
LINEAR SYSTEMS
Due December 3, 2012 100 points
Purpose: To expose you to the matrix collections at the Matrix Market and the University of Florida, to literature searches using MathSciNet, to codes for iterative methods at Netlib, and to other internet resources, and to have you gain experience using Krylov subspace iterative methods intended for nonsymmetric systems. Recall from our discussion at the beginning of the semester that these iterative methods were identified by the computer scientist Jack Dongarra and others to be among the top ten algorithms that have had “the greatest influence on the development and practice of science and engineering in the 20th century.”
This project will require that you use the internet or the library to access and read parts of a book about iterative methods, use the library or the internet to look up one or more journal articles, use the internet or the departmental network to download some associated Matlab software, download some test matrices from the Matrix Market and the University of Florida (UF) Sparse Matrix Collection, and carry out some test runs in Matlab that compare two iterative methods that you select or that study one method using different parameter selections (such as the restart value in gmres). A typed report of approximately five pages is required. You may do this assignment with a partner so that, if you wish, one report can be turned in for two people. The project can be done in either Matlab 7 or Matlab 5.3. Obtaining matrices is easier in Matlab 7 and all the iterative methods that we use are available in Matlab. On the other hand, in Matlab 5.3, but not Matlab 7, one can count the number of floating point operations with the flops command. This document will mostly focus on material that will work in either Matlab 5.3 or Matlab 7.
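For example, in Matlab 5.3 the flop counter can be used as in the following minimal sketch (flops was removed after Matlab 5.3, so these lines will not run in Matlab 6 or 7; A and b stand for a system you have already loaded):
flops(0)     % reset the floating point operation counter (Matlab 5.3 only)
x = A \ b;   % any computation you want to measure, here a direct solve of A x = b
flops        % display the number of flops used since the reset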
The steps that you need to follow are:
1. Look at the book Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, by Richard Barrett, Michael Berry, et al., SIAM, Philadelphia, 1994. The book is available online at www.netlib.org/templates/Templates.html . The book is part of the Netlib library, which is an extensive collection of high quality programs and packages related to scientific computing. You might want to open www.netlib.org and browse around the library. You can browse through the book by clicking on items appearing in blue. The section “Computational Aspects of the Methods” might be most useful to you. After reading through this section (and after classroom discussion) you will need to choose one or two of the six methods GMRES, BiCG, QMR, CGS, Bi-CGSTAB, or LSQR to study further. You will probably want to go back to “Nonstationary Iterative Methods” to read more details about the method you choose and to note the references mentioned relating to the method. LSQR is not discussed in the Templates book but is discussed in some of the course handouts and elsewhere.
2. Next you should download the Matlab software that comes with the Templates book. This is available on my web page www.math.sjsu.edu/~foster/project143m.html and is also available on the internet at www.netlib.org/templates/index.html . Since I have made a few convenient changes in the code, you are probably best off getting the code from my home page. The programs that you get are Matlab programs that implement the above methods as well as other methods such as the classical Jacobi, Gauss-Seidel and SOR methods. Also from my web page www.math.sjsu.edu/~foster/project143m.html you should download loadmatrix.m, mmread.m, 7za.exf and the GNU license for 7za.exf. These programs will facilitate loading matrices from the Matrix Market and the UF Sparse Matrix Collection into Matlab on your computer.
3. Next you should go to MathSciNet to look at references on the method that you have chosen. Also you may wish to look at references in the Templates book or in Watkins’ textbook. You can access MathSciNet through the SJSU library (http://www.ams.org.libaccess.sjlibrary.org/mathscinet/ ). From off campus you may need to type in the bar code from the back of your student ID and you will also need to know your library PIN (get a PIN sooner rather than later if you don’t already have one). After you enter MathSciNet click on "Start Full Search". Scroll down to “anywhere” and type in a keyword or phrase for your search. For example you might enter GMRES to search for articles about the GMRES method. Now click on start search. You should get a list of references that have this name somewhere in the abstract of the reference. One of the challenges in searching a database is narrowing the number of hits to a set that is of interest to you. This is an art which we won't pursue here. In this case you need to find only one article to discuss (see below), but you will probably need to look at several articles to choose an appropriate one. You can also search MathSciNet by author or title. For example one could search for the publications of the infamous mathematician T. Kaczynski (the “Unabomber”). MathSciNet will give you the abstract of the article (click on the MR number, for example 99c:65243).
4. You can read the complete article by going to the library or, for most articles, by searching online. For example, from MathSciNet clicking on “the original article” may work. However a password may be required, and a better alternative is to go through our library. To access the full text of an article through our library go to http://www.sjlibrary.org/gateways/academic/ and search the library catalog, databases or e-journals. For example if you select e-journals and type “SIAM” in the search window you will get a list of available online SIAM journals. Many of the articles related to Krylov subspaces are in SIAM journals such as the SIAM Journal on Matrix Analysis and Applications, the SIAM Journal on Scientific Computing, or SIAM Review. The library has access to these journals over certain periods of time. I have access to all the SIAM journals, and if you can’t find the article you want online or through the library you can email me the citation and I will email you back the article. Another good way to find the text of an article is to go to the author’s web page; authors often post their articles there. Of course search engines, especially Google Scholar, can be very useful. If none of the above works one can also request articles through interlibrary loan (http://www.sjlibrary.org/services/request/ill.htm ). This can be somewhat slow.
You should look up several of the references that relate to the method that you choose. Look for an article or articles that have examples comparing your method with other methods. Sometimes you may be able to find an article that uses a matrix that appears in the Matrix Market or UF site and you can try out the same example.
5. In addition you should get on the internet to locate test matrices. The Matrix Market (http://math.nist.gov/MatrixMarket/ ) and the University of Florida (UF) Sparse Matrix Collection ( http://www.cise.ufl.edu/research/sparse/matrices ) are marvelous collections of matrices that come from real world applications and that, in many cases, are large. Also the SJSU Singular Matrix Database (http://www.math.sjsu.edu/singular/matrices/ ) is a good collection if you want to look at singular matrices. Many of the matrices are much bigger than 1000 by 1000 and indeed one matrix in the UF collection is over 5,000,000 by 5,000,000.
At the Matrix Market site right click on the file name (for example gr_30_30.mtx.gz) in the “Download as Compressed Matrix Market file: (some file name).mtx.gz” section and save the file to your computer by choosing “save link target as” or “save target as” in your web browser. At the UF Sparse Matrix Collection right click on the “MM” entry in the download column and save the file to your computer by choosing “save link target as” or “save target as.” You will need to select the folder where you will save the file, but do not change the name of the file. All files related to this project should be saved in the same folder on your computer.
From either site the files that you download will be in compressed format. If you know how to decompress files using programs such as 7-zip, winzip, powerdesk, or winrar you can decompress the file yourself. If you decompress the file yourself make sure that the decompressed file has a “.mtx” extension (for example gr_30_30.mtx). However you do not need to decompress the files yourself -- it is easier to decompress the file using the loadmatrix.m program. Loadmatrix is a Matlab program that will automatically decompress the file (if necessary) and also convert the matrix to Matlab format. It is designed to work with files downloaded from either the Matrix Market site or the UF Sparse Matrix Collection site.
To use the loadmatrix.m program you need to download three files – loadmatrix.m, 7za.exf, and mmread.m – from www.math.sjsu.edu/~foster/project143m.html. You should store these three files, and also all the files that you download from the Matrix Market and UF sites, in the same folder on your computer. To use the loadmatrix program you need to open up Matlab and move to the folder where you are storing all your files for the project. In Matlab 5.3 one way to move to this folder is to select the “file” menu, the “set path” submenu, click on the “browse” button, navigate to the desired folder and click on “ok.” In Matlab 6 or 7, to move to the appropriate folder click on the button with three dots (called “browse for folder”) next to “current directory:” in the toolbar, navigate to the appropriate folder and then click “ok.” You can confirm that you are in the desired folder by typing “pwd” (print working directory) at the Matlab prompt. You can confirm that all the desired files are in the current folder by typing “dir” at the Matlab prompt. The same steps can also be done directly from the command line, as sketched below.
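For example, a minimal sketch of moving to the project folder from the Matlab command line (the folder path here is only a placeholder; substitute the folder where you saved your files):
cd('C:\matlab_project')   % placeholder path; use the folder holding your project files
pwd                       % print the current working directory to confirm the move
dir                       % list the files in the current folder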
Now, to decompress the file and load the matrix into Matlab, after you get to the proper folder in Matlab, you should type at the Matlab prompt “matrixname = ‘the name of the matrix’ ” and then type “loadmatrix”. For example, if you downloaded the matrix named gr_30_30 then the name of the file should be “gr_30_30.mtx.gz” (if you downloaded the file from the Matrix Market), “gr_30_30.tar.gz” (if you downloaded the file from the UF site), or “gr_30_30.mtx” (for example if you have decompressed the file yourself). Then at the Matlab prompt type
>> matrixname = 'gr_30_30'
>> loadmatrix
Do not include any extension in the matrixname variable. Loadmatrix will store the matrix as a sparse matrix named A in Matlab. Also loadmatrix will construct a “true” solution xtrue and a corresponding right hand side b = A*xtrue. This true solution can be used to test the accuracy of Matlab’s solver x = A \ b, as sketched below. Of course, in the real world one would not know the true solution (why solve the system if you did?), but here we are using a true solution to test Matlab’s solver. Finally, you now have the Matrix Market matrix available in Matlab!
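For example, a minimal sketch of checking the direct solver against the true solution constructed by loadmatrix (this assumes loadmatrix has already defined A, b, and xtrue):
x = A \ b;                              % solve A x = b with Matlab's sparse direct solver
relerr = norm(x - xtrue) / norm(xtrue)  % relative error compared with the known true solution
relres = norm(b - A*x) / norm(b)        % relative residual, which does not require knowing xtrue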
If you are using Matlab 7 there is an easier way to download matrices into Matlab. You can use the UFget utility (http://www.cise.ufl.edu/research/sparse/mat/UFget.html) to download matrices from the University of Florida Sparse Matrix Collection or the SJget utility (http://www.math.sjsu.edu/singular/matrices/SJget.html) to download matrices from the SJSU Singular Matrix Database. These interfaces make it easy to loop through a large number of matrices, identified by properties that you select. See the documentation at the above sites for examples; a short sketch is given below.
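For example, a minimal sketch using UFget (assuming the UFget package has been installed and added to your Matlab path; SJget works similarly):
Problem = UFget('HB/gr_30_30');  % download (and cache) the gr_30_30 matrix from the UF collection
A = Problem.A;                   % the sparse matrix itself is stored in the field A
disp(Problem.name)               % display the matrix name, e.g. HB/gr_30_30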
You might want to begin by downloading the Matrix Market’s LAPLACE matrix gr_30_30. This 900 by 900 sparse (that is, lots of zeros) matrix is similar to the matrix in the five point star example we will do in class (with the -1 -1 4 -1 -1 coefficient pattern). It is slightly different since a fancier approximation (the "nine point star", which has a -1 -1 -1 -1 8 -1 -1 -1 -1 coefficient pattern) is used. To download this from the Matrix Market click on BROWSE, by collection, Harwell-Boeing, LAPLACE, GR3030 and finally gr_30_30.mtx.gz. From the UF collection click on “browse all matrices (sorted by name),” scroll down to HB/gr_30_30 and click on “MM.”
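As a point of comparison (not part of the download instructions), Matlab can build the 900 by 900 five point star matrix directly, so you can look at both sparsity patterns side by side; this is only a suggested sketch:
A5 = gallery('poisson',30);  % 900-by-900 five point star (standard 2-D Laplacian) matrix
spy(A5)                      % view its sparsity pattern; after loadmatrix, spy(A) shows gr_30_30
title('five point star Laplacian on a 30-by-30 grid')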
After the matrix has been loaded into Matlab it will be called A. In addition to the true solution, xtrue, and the right hand side b = A * xtrue, loadmatrix also defines a vector x0 of all zeros and an identity matrix M which is the same size as A. You can now run the jacobi method by typing
max_it = 5000
tol = 1.e-6 % Note: For your runs I recommend tol = 1.e-6, tol = 1.e-8 or less
[x,error,iter,flag] = jacobi(A,x0,b,max_it,tol);
The jacobi method is one of the oldest (from about 1850) and slowest iterative methods. We won’t be studying it except to show below that it can be much slower than more modern methods. The parameter max_it is the maximum number of iterations allowed in the program, tol is a stopping criterion that estimates the accuracy of the method and x0 is the initial guess. The output x is the final x in jacobi's method, error contains an estimate of the error(s), iter is the number of iterations required and flag is 0 if the method succeeded and 1 if not. The above run (with tol = 1.e-4) required about 5 minutes in 1996 on a 50 MHz 486 DX/2 computer, about 11 seconds in 2000 on a 120 MHz Pentium computer, 1.4 seconds in 2002 on a 700 MHz Pentium computer, 0.28 seconds in 2006 on a 1.8 GHz Pentium laptop and 0.04 seconds with a 2009 Intel quad-core processor. Whew, what an increase in speed!
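For a first comparison with one of the Krylov methods, here is a minimal sketch using the bicgstab routine from the templates software; I am assuming its argument list follows the same pattern as the jacobi call above, so check help bicgstab in the copy you downloaded before running this:
[xj, errj, iterj, flagj] = jacobi(A, x0, b, max_it, tol);       % classical Jacobi iteration
[xb, errb, iterb, flagb] = bicgstab(A, x0, b, M, max_it, tol);  % Bi-CGSTAB; M is the identity from loadmatrix (no preconditioning)
[iterj  iterb]                                                  % compare the iteration counts of the two methods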
I made one change to each of the nine programs for iterative methods in the templates library. I modified the programs so that error returns a history of the errors at every step of the method. Without this change error contains only the last error. The change is useful for viewing the convergence of a method. For example if you type