Numerical Methods in Engineering With Python 3

Written by Jaan Kiusalaas
Book Published: 2013

## 2. Systems of Linear Algebraic Equations

### 2.1 Introduction

• This section deals with solving equations of the form Ax = b
• Where x is the vector of unknowns, A is the coefficient matrix of the unknowns, and b is the vector of constants

Augmented Coefficient Matrix

• The augmented coefficient matrix [A | b], formed by appending the constant vector b as an extra column of A, is a useful representation of the equations for computation purposes

Uniqueness of Solution

• A unique solution will result if the coefficient matrix A is non-singular (|A| ≠ 0)
• Means that the rows and columns are linearly independent
• Example
•   2x + y = 3  (1)
•   4x + 2y = 6  (2)
• eqn (1) * 2 = eqn (2) therefore the equations are not linearly independent
• (example taken from book)
• A singular coefficient matrix results in infinitely many solutions or no solution
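The linear-dependence test can be checked numerically; a quick sketch with NumPy, using a system where the second equation is twice the first:

```python
import numpy as np

# Coefficient matrix of a dependent system: row 2 is 2 x row 1
A = np.array([[2.0, 1.0],
              [4.0, 2.0]])

# A singular matrix has a zero determinant -> no unique solution
print(np.linalg.det(A))
```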

Ill Conditioning

• Ill conditioning occurs when the coefficient matrix is almost singular (|A| is very small)
• When the coefficient matrix is ill conditioned small changes in the coefficients result in large changes in the solutions
• This means that round off errors that coincide with numerical computation can have very large effects on the solution
• Example
• Case 1: 2x + y = 3 and 2x + 1.001y = 0, giving x = 1501.5, y = -3000
• Case 2: 2x + y = 3 and 2x + 1.002y = 0, giving x = 751.5, y = -1500
• A small change in the y coefficient in the second equation resulted in a large change in the solution
• (example taken from book)
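This sensitivity can be demonstrated numerically; a sketch with NumPy, using a nearly-singular 2x2 system (2x + y = 3 with 2x + 1.001y = 0, then the y coefficient perturbed by 0.1%):

```python
import numpy as np

b = np.array([3.0, 0.0])

A1 = np.array([[2.0, 1.0],
               [2.0, 1.001]])   # case 1
A2 = np.array([[2.0, 1.0],
               [2.0, 1.002]])   # case 2: tiny change in one coefficient

x1 = np.linalg.solve(A1, b)     # approximately [1501.5, -3000]
x2 = np.linalg.solve(A2, b)     # approximately [751.5, -1500]
print(x1, x2)                   # the solution roughly halves
```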
• The comparison for "smallness" of the determinant of the coefficient matrix will be the matrix norm (||A||)
• |A| << ||A|| means that the determinant of A is small
• There are different matrix norms that can be used for this purpose

Euclidean Norm

• ||A||e = sqrt( Σ(i=1..n) Σ(j=1..n) Aij^2 ), the square root of the sum of the squares of all elements

Infinity Norm/Row-Sum Norm

• ||A||∞ = max(i=1..n) Σ(j=1..n) |Aij|, the largest row sum of absolute values
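Both norms are straightforward to compute directly; a minimal sketch (function names are mine):

```python
import numpy as np

def euclidean_norm(A):
    # square root of the sum of the squares of all elements
    return np.sqrt(np.sum(A**2))

def infinity_norm(A):
    # largest row sum of absolute values
    return np.max(np.sum(np.abs(A), axis=1))

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(euclidean_norm(A))   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(infinity_norm(A))    # |3| + |4| = 7
```

These match NumPy's built-ins `np.linalg.norm(A, 'fro')` and `np.linalg.norm(A, np.inf)`.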

Matrix Condition Number

• The matrix condition number is a formal measure of the condition of a matrix and can be found as follows
• cond(A) = ||A|| · ||A^-1||
• A matrix condition number around one means that the matrix is well conditioned, the condition number is infinite for a singular matrix, and in general a larger condition number means the matrix is less well conditioned
• cond(A) ≈ 1 (Well conditioned)
• cond(A) = ∞ (Singular Matrix)
• cond(A) >> 1 (Matrix is less well conditioned)
• The matrix condition number is not a unique value but rather depends on which matrix norm is used
• The matrix condition number can be computationally expensive to calculate for large matrices, so instead the magnitude of the determinant is compared to the magnitude of the elements of the coefficient matrix; similar magnitudes signify a well conditioned matrix
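NumPy exposes the condition number directly, which also shows the dependence on the chosen norm; a quick sketch:

```python
import numpy as np

well = np.eye(3)                     # identity matrix: perfectly conditioned
ill = np.array([[2.0, 1.0],
                [2.0, 1.001]])       # nearly dependent rows

print(np.linalg.cond(well))          # 1.0
print(np.linalg.cond(ill))           # large (2-norm by default)
print(np.linalg.cond(ill, np.inf))   # infinity-norm version gives a different value
```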

Methods of Solution

Direct Methods

• Change the original equations into equivalent equations that are easier to solve and yield the same solution
• e.g. can change Ax = b into Ux = c, which has the same solution
• Achieved by performing "elementary operations"
1. Exchange 2 equations (changes sign of |A|)
2. Multiply an equation by a non-zero constant (multiplies |A| by the same constant)
3. Multiply an equation by a non-zero constant and subtract it from another equation (no change to |A|)
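The determinant effects of the three elementary operations can be verified numerically; a small sketch (the matrix is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[4.0, -2.0, 1.0],
              [1.0,  3.0, 5.0],
              [2.0,  0.0, 6.0]])
d = np.linalg.det(A)

# 1. Exchange two rows: determinant changes sign
B = A[[1, 0, 2]]
print(np.linalg.det(B))          # -d

# 2. Multiply a row by a constant: determinant scales by that constant
C = A.copy()
C[0] *= 3.0
print(np.linalg.det(C))          # 3*d

# 3. Subtract a multiple of one row from another: determinant unchanged
E = A.copy()
E[1] -= 0.25 * E[0]
print(np.linalg.det(E))          # d
```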

Indirect Methods

• Start with a guess for a solution to the system and then iterate until the solution converges
• Generally less efficient than direct methods due to the large number of iterations required; however, they can have computational advantages when the coefficient matrix is large and sparse
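The guess-and-iterate idea can be illustrated with a minimal Jacobi iteration (my own sketch here, not a method from this section; it converges for diagonally dominant matrices):

```python
import numpy as np

def jacobi(A, b, x0, n_iter=100):
    """Repeatedly update x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii."""
    x = x0.astype(float)
    D = np.diag(A)               # diagonal entries
    R = A - np.diagflat(D)       # off-diagonal part
    for _ in range(n_iter):
        x = (b - R @ x) / D      # one Jacobi sweep
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])       # diagonally dominant
b = np.array([9.0, 16.0])
print(jacobi(A, b, np.zeros(2))) # converges to the solution of Ax = b
```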

Triangular Matrices

• Triangular matrices consist of zeros below or above the leading diagonal

Upper Triangular Matrix

• An upper triangular matrix, U, consists entirely of zeros below the leading diagonal and numbers with no restrictions above the leading diagonal
• Example: U = [[4, -2, 1], [0, 3, 5], [0, 0, 6]] (rows listed top to bottom)

Lower Triangular Matrix

• A lower triangular matrix, L, consists entirely of zeros above the leading diagonal and numbers with no restrictions below the leading diagonal
• Example: L = [[2, 0, 0], [-1, 3, 0], [4, 5, 6]] (rows listed top to bottom)

Forward/Back Substitution

• When the coefficient matrix is a triangular matrix the solution can be obtained easily by solving each equation individually for one unknown at a time

Forward Substitution

• In forward substitution the solution process begins at the first row of the matrices and works downward
• Example: for Lx = b, x1 = b1/L11, then xk = (bk - Σ(j=1..k-1) Lkj·xj) / Lkk for k = 2, 3, ..., n
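A minimal forward-substitution sketch (function name and demo values are mine):

```python
import numpy as np

def forward_sub(L, b):
    """Solve Lx = b for lower triangular L, starting at the first row."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n):
        # subtract the already-known terms, then divide by the diagonal
        x[k] = (b[k] - L[k, :k] @ x[:k]) / L[k, k]
    return x

L = np.array([[2.0,  0.0, 0.0],
              [-1.0, 3.0, 0.0],
              [4.0,  5.0, 6.0]])
b = np.array([4.0, 1.0, 2.0])
print(forward_sub(L, b))
```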

Back Substitution

• In back substitution the solution process begins with the last row of the matrices and works upward
• Example: for Ux = b, xn = bn/Unn, then xk = (bk - Σ(j=k+1..n) Ukj·xj) / Ukk for k = n-1, n-2, ..., 1
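The mirror-image back-substitution sketch (function name and demo values are mine):

```python
import numpy as np

def back_sub(U, b):
    """Solve Ux = b for upper triangular U, starting at the last row."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal
        x[k] = (b[k] - U[k, k+1:] @ x[k+1:]) / U[k, k]
    return x

U = np.array([[4.0, -2.0, 1.0],
              [0.0,  3.0, 5.0],
              [0.0,  0.0, 6.0]])
b = np.array([11.0, 13.0, 12.0])
print(back_sub(U, b))
```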

### 2.2 Gauss Elimination Method

• The goal of the Gauss elimination method is to get the equation Ax = b into the form Ux = c and solve for the unknowns using the back substitution method mentioned above
• Gauss elimination is a direct solution method

Elimination Phase

• During the elimination phase the third elementary operation is used to eliminate one unknown at a time from the equations (multiply by a constant and subtract from other equations)
• The equation being multiplied by a constant is referred to as the "pivot" equation
• The determinant will remain unchanged
• The multiplication is done to eliminate successive unknowns from equations
• Ex. Remove unknown x1 from all remaining equations
• Eq(2)' = Eq(2) - (A21/A11)·Eq(1), where Eq(2)' represents new equation 2, Eq(2) represents equation 2 and Eq(1) represents equation 1
• This elimination process is used to remove x1 from all equations excluding the first, then x2 from all equations excluding the first and second, and continue until the new A matrix is an upper triangular matrix
• This can be generalized as the following algorithm
• Eq(i) ← Eq(i) - λ·Eq(k), with λ = Aik/Akk, to remove unknown xk from equation i (i = k+1, ..., n)
• Do not forget to apply the same operations to the constant vector b during the elimination process
• If the augmented coefficient matrix is used, A and b can be altered simultaneously
• An example of what a partially eliminated augmented coefficient matrix looks like is given below
•   [A11 A12 A13 A14 | b1]
•   [0   A22 A23 A24 | b2]
•   [0   0   A33 A34 | b3]
•   [0   0   A43 A44 | b4]
• The remaining step would be to remove x3 from equation 4 (eliminate A43)
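The elimination phase on a matrix and constant vector can be sketched as follows (a basic version with no row swaps, assuming non-zero diagonal pivots; demo values are mine):

```python
import numpy as np

def eliminate(A, b):
    """Reduce A to upper triangular form, applying the same ops to b."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):           # pivot row k
        for i in range(k + 1, n):    # rows below the pivot
            if A[i, k] != 0.0:
                lam = A[i, k] / A[k, k]
                A[i, k:] -= lam * A[k, k:]   # Eq(i) <- Eq(i) - lam*Eq(k)
                b[i] -= lam * b[k]           # same operation on b
    return A, b

A = np.array([[4.0, -2.0,  1.0],
              [-2.0, 4.0, -2.0],
              [1.0, -2.0,  4.0]])
b = np.array([11.0, -16.0, 17.0])
U, c = eliminate(A, b)
print(U)   # zeros below the leading diagonal
```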

Back Substitution Phase

• Now that the coefficient matrix is an upper triangular matrix, U, the back substitution method can be used to solve for the unknowns
• The following algorithm can be used to solve during this process
• xk = (bk - Σ(j=k+1..n) Akj·xj) / Akk, k = n-1, n-2, ..., 1, starting from xn = bn/Ann
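Putting the two phases together gives a basic Gauss elimination solver (a sketch without pivoting; function name and demo values are mine):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by elimination followed by back substitution."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    # elimination phase: reduce A to upper triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            if A[i, k] != 0.0:
                lam = A[i, k] / A[k, k]
                A[i, k:] -= lam * A[k, k:]
                b[i] -= lam * b[k]
    # back substitution phase: solve from the last row upward
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k+1:] @ x[k+1:]) / A[k, k]
    return x

A = np.array([[6.0, -4.0,  1.0],
              [-4.0, 6.0, -4.0],
              [1.0, -4.0,  6.0]])
b = np.array([-14.0, 36.0, 6.0])
print(gauss_solve(A, b))   # -> approximately [10, 22, 14]
```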

Operation Count

• The run time of the solution method is directly proportional to the number of long operations that will need to be performed (ie. multiplications/divisions)
• During elimination approximately n^3/3 operations are performed
• During back substitution approximately n^2/2 operations are performed
• It can be seen that most of the computation time is used during the elimination phase and computation time rapidly increases with an increase in the number of equations

Multiple Sets of Equations

• Multiple sets of constant vectors can be solved simultaneously by appending them to the end of the augmented coefficient matrix
• This method is not often used because LU decomposition methods handle multiple constant vectors better
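The multiple-constant-vector case can be sketched with NumPy's solver, which accepts a matrix right-hand side whose columns are the individual b vectors (demo values are mine):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
# two constant vectors, stored as the columns of B
B = np.array([[9.0,  1.0],
              [16.0, 0.0]])

X = np.linalg.solve(A, B)   # column j of X solves A x = B[:, j]
print(X)
```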