Study Material
Semester-04
M3
Unit-05

Unit 5: Numerical Methods

Numerical methods are essential techniques in computational mathematics that are used to obtain approximate solutions for mathematical problems that cannot be solved analytically. This unit focuses on two main areas: the numerical solution of algebraic and transcendental equations, and the numerical solutions of systems of linear equations. This comprehensive overview covers various numerical methods such as Bisection, Secant, Newton-Raphson, Gauss elimination, and Jacobi methods, among others.

Numerical Solution of Algebraic and Transcendental Equations

1. Introduction

Algebraic and transcendental equations are fundamental in mathematical modeling, where exact solutions are often unattainable. Numerical methods provide a way to find approximate solutions with a specified degree of accuracy.

Definition:

  • Algebraic equations are equations involving polynomial expressions, such as f(x) = 0, where f(x) is a polynomial.
  • Transcendental equations involve transcendental functions such as exponential, logarithmic, or trigonometric functions, e.g., e^x - x^2 = 0.

2. Bisection Method

The Bisection method is a root-finding method that applies to any continuous function for which one knows two points at which the function takes values of opposite signs.

Process:

  1. Choose two initial points a and b such that f(a) and f(b) have opposite signs (i.e., f(a) · f(b) < 0).
  2. Compute the midpoint c = (a + b)/2.
  3. Evaluate f(c):
    • If f(c) = 0, then c is a root.
    • If f(c) has the same sign as f(a), then replace a with c; otherwise, replace b with c.
  4. Repeat the process until |f(c)| is less than a specified tolerance.

Example:

For f(x) = x^2 - 4, choose a = 0 and b = 3.

  1. c = (0 + 3)/2 = 1.5 -> f(1.5) = -1.75 (same sign as f(0) = -4).
  2. Update a to 1.5.
  3. Repeat until convergence to the root x = 2.
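
The steps above can be sketched in Python as follows (a minimal illustration; the function name, tolerance, and iteration cap are choices of this sketch, not part of the text):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: f must be continuous with f(a) and f(b) of opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = (a + b) / 2
    for _ in range(max_iter):
        c = (a + b) / 2
        if abs(f(c)) < tol:
            break
        if f(a) * f(c) > 0:
            a = c  # f(c) has the same sign as f(a): root lies in [c, b]
        else:
            b = c  # otherwise the root lies in [a, c]
    return c

# Worked example from the text: f(x) = x^2 - 4 on [0, 3]
root = bisection(lambda x: x**2 - 4, 0, 3)  # converges toward x = 2
```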

3. Secant Method

The Secant method is a root-finding algorithm that uses a succession of roots of secant lines to approximate the root of a function.

Process:

  1. Select two initial approximations x_0 and x_1.
  2. Compute the next approximation x_2 using the formula:
x_2 = x_1 - f(x_1)(x_1 - x_0) / (f(x_1) - f(x_0))
  3. Update x_0 to x_1 and x_1 to x_2, and repeat until convergence.

Example:

Given f(x) = e^x - x^2:

  • Start with x_0 = 0 and x_1 = 1.
  • Calculate x_2 and update iteratively until convergence.
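
A sketch of this iteration in Python (illustrative; the stopping rule on |x_2 - x_1| is a choice of this sketch). Note that e^x - x^2 has its only real root on the negative axis, near x ≈ -0.7035:

```python
import math

def secant(f, x0, x1, tol=1e-8, max_iter=100):
    """Secant method: no derivative needed, two starting points required."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            raise ZeroDivisionError("zero denominator in secant update")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2  # shift the two most recent approximations
    return x1

# Example from the text: f(x) = e^x - x^2 with x_0 = 0, x_1 = 1
root = secant(lambda x: math.exp(x) - x**2, 0, 1)
```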

4. Regula Falsi Method

The Regula Falsi method, also known as the False Position method, is similar to the Bisection method but uses a linear interpolation to find the root.

Process:

  1. Choose two initial points a and b such that f(a) · f(b) < 0.
  2. Compute the point c using:
c = (a f(b) - b f(a)) / (f(b) - f(a))
  3. Evaluate f(c):
    • If f(c) = 0, then c is a root.
    • If f(c) has the same sign as f(a), set a = c; otherwise, set b = c.
  4. Repeat until convergence.

Example:

For f(x) = x^3 - 2x - 5, iterate using a = 2 and b = 3.
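
The procedure can be sketched in Python as below (a minimal version; like Bisection but with the interpolated point c in place of the midpoint). The equation x^3 - 2x - 5 = 0 has its real root near x ≈ 2.0946:

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=200):
    """Regula Falsi (False Position): bracketing with linear interpolation."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # Interpolated root of the secant line through (a, f(a)) and (b, f(b))
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(c)) < tol:
            break
        if f(a) * f(c) > 0:
            a = c
        else:
            b = c
    return c

# Example from the text: f(x) = x^3 - 2x - 5 on [2, 3]
root = regula_falsi(lambda x: x**3 - 2*x - 5, 2, 3)
```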

5. Newton-Raphson Method

The Newton-Raphson method is an efficient iterative technique for finding roots of real-valued functions.

Process:

  1. Start with an initial guess x_0.
  2. Compute the next approximation using the formula:
x_{n+1} = x_n - f(x_n) / f'(x_n)
  3. Repeat until convergence.

Example:

For f(x) = x^2 - 2, the derivative is f'(x) = 2x.

  1. Start with x_0 = 1.
  2. Iterate to find x ≈ sqrt(2).
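
The iteration can be sketched in Python as follows (illustrative; the derivative is passed in explicitly, and the tolerance is a choice of this sketch):

```python
import math

def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)  # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

# Example from the text: f(x) = x^2 - 2, f'(x) = 2x, x_0 = 1
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0)
```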

6. Successive Approximation Methods

Successive approximation methods are used for solving equations of the form x = g(x).

Process:

  1. Rearrange the equation into the form x = g(x).
  2. Choose an initial guess x_0.
  3. Compute subsequent approximations using x_{n+1} = g(x_n).
  4. Continue until convergence.

Example:

For x = cos(x), start with x_0 = 0.5 and iterate using g(x) = cos(x).
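
A Python sketch of the fixed-point iteration (illustrative; convergence here relies on |g'(x)| < 1 near the fixed point, which holds for g(x) = cos(x)):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Successive approximation for x = g(x): iterate x_{n+1} = g(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:  # stop when successive iterates agree
            return x_new
        x = x_new
    return x

# Example from the text: x = cos(x), starting from x_0 = 0.5
root = fixed_point(math.cos, 0.5)
```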

7. Convergence and Stability

The convergence of a numerical method is crucial, indicating whether the method will yield accurate results as iterations increase. Stability refers to how errors propagate through computations.

  • Convergence: A method converges if lim_{n -> ∞} x_n = r (the actual root).
  • Stability: A method is stable if small changes in input do not cause large changes in output.

Numerical Solutions of Systems of Linear Equations

1. Introduction

Systems of linear equations can be represented in matrix form Ax = b, where A is the matrix of coefficients, x is the vector of variables, and b is the right-hand-side vector.

2. Gauss Elimination

Gauss elimination is a systematic method for solving linear equations by transforming the system into an upper triangular form.

Process:

  1. Form the augmented matrix [A | b].
  2. Use elementary row operations to transform the matrix into upper triangular form.
  3. Perform back substitution to find the values of the variables.

Example:

For the system:

2x + 3y + z = 1
4x + y + 2z = 2
3x + 4y + 5z = 3

Transform the augmented matrix and apply back substitution.
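
The procedure can be sketched in pure Python as below (illustrative; partial pivoting is added here for numerical robustness, though the text describes the basic method):

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with back substitution."""
    n = len(A)
    # Form the augmented matrix [A | b]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: bring the largest pivot into row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate entries below the pivot to reach upper triangular form
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# System from the text
A = [[2, 3, 1], [4, 1, 2], [3, 4, 5]]
b = [1, 2, 3]
x = gauss_eliminate(A, b)  # solution: x = 2/7, y = 0, z = 3/7
```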

3. LU Decomposition

LU decomposition factors a matrix AA into the product of a lower triangular matrix LL and an upper triangular matrix UU.

Process:

  1. Decompose A into L and U such that A = LU.
  2. Solve Ly = b for y using forward substitution.
  3. Solve Ux = y for x using back substitution.

Example:

For a matrix A, perform the decomposition to obtain L and U.
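
The three steps can be sketched in Python as follows (a Doolittle-style factorization with unit diagonal in L; no pivoting, so it assumes nonzero pivots — a simplification of this sketch):

```python
def lu_decompose(A):
    """Doolittle LU factorization (no pivoting): A = L U, diag(L) = 1."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):   # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward then back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):  # L y = b (forward substitution)
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # U x = y (back substitution)
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Same system as in the Gauss elimination example
A = [[2, 3, 1], [4, 1, 2], [3, 4, 5]]
b = [1, 2, 3]
L, U = lu_decompose(A)
x = lu_solve(L, U, b)
```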

4. Cholesky Decomposition

The Cholesky decomposition is a specialized method for solving systems with positive definite matrices.

Process:

  1. Decompose A into A = LL^T.
  2. Similar to LU decomposition, solve Ly = b and then L^T x = y.

Example:

For a positive definite matrix, apply the Cholesky method.
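
A minimal Python sketch of the factorization step (the small symmetric positive definite matrix used below is a hypothetical example of this sketch, not from the text):

```python
import math

def cholesky(A):
    """Cholesky factorization of a symmetric positive definite A: A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: requires A[i][i] - s > 0 (positive definiteness)
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# Hypothetical positive definite example
A = [[4, 2], [2, 3]]
L = cholesky(A)
```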

5. Jacobi Method

The Jacobi method is an iterative algorithm for solving a system of linear equations.

Process:

  1. Rewrite each equation in terms of the variable to be solved.
  2. Use initial guesses for the variables.
  3. Update the values iteratively:
x_i^{(k+1)} = (1 / a_ii) ( b_i - Σ_{j ≠ i} a_ij x_j^{(k)} )

Example:

For the system of equations, iterate using the Jacobi formula until convergence.
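
The update formula can be sketched in Python as below. Since the text leaves the system unspecified, a small strictly diagonally dominant system is used here (a hypothetical example, chosen because diagonal dominance guarantees Jacobi converges):

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: every update uses only the previous iterate x^(k)."""
    n = len(b)
    x = x0[:] if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical diagonally dominant system: 4x + y = 1, x + 3y = 2
A = [[4, 1], [1, 3]]
b = [1, 2]
x = jacobi(A, b)  # converges toward x = 1/11, y = 7/11
```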

6. Gauss-Seidel Method

The Gauss-Seidel method is an improvement over the Jacobi method, using updated values as soon as they are calculated.

Process:

  1. Start with initial guesses.
  2. Update variables using:
x_i^{(k+1)} = (1 / a_ii) ( b_i - Σ_{j < i} a_ij x_j^{(k+1)} - Σ_{j > i} a_ij x_j^{(k)} )
  3. Repeat until convergence.

Example:

For the same system, apply the Gauss-Seidel method.
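
A Python sketch of the method (using the same hypothetical diagonally dominant system as in the Jacobi sketch; the only change from Jacobi is updating x in place, so components with j < i are already new):

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: uses updated components as soon as available."""
    n = len(b)
    x = x0[:] if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(n):
            # x[j] for j < i already holds x^(k+1); for j > i it still holds x^(k)
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            max_diff = max(max_diff, abs(x_new - x[i]))
            x[i] = x_new
        if max_diff < tol:
            return x
    return x

# Same hypothetical system: 4x + y = 1, x + 3y = 2
A = [[4, 1], [1, 3]]
b = [1, 2]
x = gauss_seidel(A, b)  # converges toward x = 1/11, y = 7/11
```

Because updated values are used immediately, Gauss-Seidel typically needs fewer iterations than Jacobi on the same system.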

7. Comparison of Methods

  • Bisection, Secant, Newton-Raphson: Suitable for single-variable equations; varying rates of convergence.
  • Gauss Elimination, LU Decomposition, Jacobi, Gauss-Seidel: Suitable for systems of linear equations; choice depends on matrix properties and dimensions.