Numerical Methods in Engineering With Python 3

Written by Jaan Kiusalaas
Book Published: 2013
Physical Book (Amazon Link)

Table of Contents

2. Systems of Linear Algebraic Equations

2.1 Introduction

Augmented Coefficient Matrix

Uniqueness of Solution

Ill Conditioning

Euclidean Norm

  • || A ||_{e} = \sqrt{\sum^{n}_{i=1} \sum^{n}_{j=1} (A_{ij})^{2}}

Infinity Norm/Row-Sum Norm

  • || A ||_{\infty} = \max_{1 \leq i \leq n} \sum^{n}_{j=1} | A_{ij} |
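
Both norms can be checked numerically. The sketch below evaluates them directly from the definitions with NumPy; the matrix A is an arbitrary example, and the np.linalg.norm calls are only a cross-check.

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 3.0]])

# Euclidean (Frobenius) norm: square root of the sum of squared elements
euclidean = np.sqrt(np.sum(A**2))             # same as np.linalg.norm(A, 'fro')

# Infinity (row-sum) norm: largest sum of absolute values taken over the rows
infinity = np.max(np.sum(np.abs(A), axis=1))  # same as np.linalg.norm(A, np.inf)

print(euclidean, infinity)
```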

Matrix Condition Number

  • The matrix condition number is a formal measure of the conditioning of a matrix and is defined as follows
    • cond(A) = || A || \, || A^{-1} ||
  • A condition number close to one means the matrix is well conditioned, the condition number of a singular matrix is \infty, and in general the larger the condition number, the less well conditioned the matrix
    • cond(A) = 1 (Well conditioned)
    • cond(A) = \infty (Singular Matrix)
    • cond(A) \uparrow (Matrix is less well conditioned)
  • The matrix condition number is not a unique value but rather depends on which matrix norm is used
  • The matrix condition number can be computationally expensive to calculate for large matrices (it requires A^{-1}), so a cheaper practical check is to compare the magnitude of the determinant |A| with the magnitudes of the elements of the coefficient matrix; similar magnitudes signify a well conditioned matrix (see the sketch below)
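
A minimal NumPy sketch of both approaches; the nearly singular matrix is an arbitrary example, and the value of cond(A) depends on which norm is used.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular, so expect a large condition number

# Condition number from the definition, using the infinity norm
cond_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

# NumPy's built-in version (2-norm by default); the result depends on the chosen norm
print(cond_inf, np.linalg.cond(A, np.inf), np.linalg.cond(A))

# Cheap heuristic: compare |det(A)| with the magnitude of the matrix elements
print(abs(np.linalg.det(A)), np.max(np.abs(A)))
```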

Methods of Solution

Direct Methods

  • Change the original equations into equivalent equations that are easier to solve and yield the same solution
    • Can change the determinant |A| (see the determinant demo after this list)
  • Achieved by performing "elementary operations"
    1. Exchange 2 equations (changes sign of |A|)
    2. Multiply an equation by a non-zero constant (multiplies |A| by the same constant)
    3. Multiply an equation by a non-zero constant and subtract it from another equation (no change to |A|)
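
The effect of each elementary operation on the determinant can be demonstrated numerically; the matrix below is an arbitrary example chosen for this sketch.

```python
import numpy as np

A = np.array([[4.0, -2.0, 1.0],
              [-2.0, 4.0, -2.0],
              [1.0, -2.0, 4.0]])
d = np.linalg.det(A)

# 1. Exchanging two rows flips the sign of the determinant
B = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(B), -d))

# 2. Multiplying a row by a constant multiplies the determinant by that constant
C = A.copy()
C[0] *= 5.0
print(np.isclose(np.linalg.det(C), 5.0 * d))

# 3. Subtracting a multiple of one row from another leaves the determinant unchanged
D = A.copy()
D[1] -= 0.5 * D[0]
print(np.isclose(np.linalg.det(D), d))
```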

Indirect Methods

  • Start with a guess for a solution to the system and then iterate until the solution converges
  • Generally less efficient than direct methods because many iterations may be required; however, they can be advantageous when the coefficient matrix is large and sparse (see the iterative sketch below)
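
As one illustration of the idea, here is a minimal Gauss-Seidel sketch (chosen as a representative iterative scheme, not taken from this section of the book); it assumes the iteration converges, e.g. for a diagonally dominant matrix.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1.0e-9, max_iter=500):
    """Illustrative Gauss-Seidel iteration for A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the most recent values of x when sweeping through the rows
            s = np.dot(A[i, :i], x[:i]) + np.dot(A[i, i+1:], x[i+1:])
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b))   # compare with np.linalg.solve(A, b)
```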

Triangular Matrices

Upper Triangular Matrix

  • An upper triangular matrix, U, consists entirely of zeros below the leading diagonal and numbers with no restrictions above the leading diagonal
    • Example U = \begin{bmatrix} U_{11} & U_{12} \\ 0 & U_{22} \end{bmatrix}

Lower Triangular Matrix

  • A lower triangular matrix, L, consists entirely of zeros above the leading diagonal and numbers with no restrictions below the leading diagonal
    • Example L = \begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix}
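
For quick experiments, NumPy can extract the triangular parts of any square matrix; a small sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

U = np.triu(A)   # upper triangular part: zeros below the leading diagonal
L = np.tril(A)   # lower triangular part: zeros above the leading diagonal
print(U)
print(L)
```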

Forward/Back Substitution

Forward Substitution

  • In forward substitution the solution process begins at the first row of the system and works downward row by row; a general sketch follows the example below
  • Example
    • \begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \end{bmatrix}
    • x_{1} = c_{1} / L_{11}
    • x_{2} = (c_{2} - L_{21} x_{1}) / L_{22}
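
A sketch that generalizes the 2 x 2 example to an n x n lower triangular system; the function name and test values are mine, not the book's.

```python
import numpy as np

def forward_substitution(L, c):
    """Solve L x = c for lower triangular L (nonzero diagonal assumed)."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n):
        # Subtract the contributions of the already-computed unknowns x[0..i-1]
        x[i] = (c[i] - np.dot(L[i, :i], x[:i])) / L[i, i]
    return x

L = np.array([[2.0, 0.0],
              [3.0, 4.0]])
c = np.array([4.0, 10.0])
print(forward_substitution(L, c))   # [2. 1.]
```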

Back Substitution

  • In back substitution the solution process begins with the last row of the system and works upward row by row; a general sketch follows the example below
  • Example
    • \begin{bmatrix} U_{11} & U_{12} \\ 0 & U_{22} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \end{bmatrix}
    • x_{2} = c_{2} / U_{22}
    • x_{1} = (c_{1} - U_{12} x_{2}) / U_{11}
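
The corresponding sketch for an n x n upper triangular system; again the function name and test values are mine, not the book's.

```python
import numpy as np

def back_substitution(U, c):
    """Solve U x = c for upper triangular U (nonzero diagonal assumed)."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-computed unknowns x[i+1..n-1]
        x[i] = (c[i] - np.dot(U[i, i+1:], x[i+1:])) / U[i, i]
    return x

U = np.array([[4.0, 3.0],
              [0.0, 2.0]])
c = np.array([10.0, 4.0])
print(back_substitution(U, c))   # [1. 2.]
```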

2.2 Gauss Elimination Method

Elimination Phase

Back Substitution Phase
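
A minimal sketch of both phases for a single right-hand side, assuming the system is nonsingular and that no row pivoting is needed; this is an illustration, not the book's listing.

```python
import numpy as np

def gauss_elimination(A, b):
    """Naive Gauss elimination (no pivoting); assumes the pivots A[k, k] stay nonzero."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Elimination phase: reduce A to upper triangular form, updating b in step
    for k in range(n - 1):
        for i in range(k + 1, n):
            if A[i, k] != 0.0:
                lam = A[i, k] / A[k, k]
                A[i, k:] -= lam * A[k, k:]
                b[i] -= lam * b[k]
    # Back substitution phase: solve the resulting upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - np.dot(A[i, i+1:], x[i+1:])) / A[i, i]
    return x

A = np.array([[4.0, -2.0, 1.0],
              [-2.0, 4.0, -2.0],
              [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gauss_elimination(A, b))   # [ 1. -2.  3.], same as np.linalg.solve(A, b)
```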

Operation Count

Multiple Sets of Equations