Study Material
Semester-04
M3
Unit-06

Unit 6: Numerical Methods

Interpolation

Interpolation is a technique used to estimate unknown values by utilizing known data points. This is particularly useful in various fields, including engineering, computer graphics, and data analysis. Two common forms of interpolation are Newton’s Interpolation and Lagrange’s Interpolation.

1. Finite Differences

Finite differences are used to approximate derivatives and can be instrumental in interpolation. The main idea is to replace derivatives with differences in function values. There are three primary types of finite differences:

  • Forward Difference: This measures the change in the function values at subsequent points.

    \Delta f(x) = f(x+h) - f(x)

  • Backward Difference: This measures the change in function values at preceding points.

    \nabla f(x) = f(x) - f(x-h)

  • Central (Second) Difference: This combines the forward and backward differences and is used to approximate the second derivative.

    \Delta^2 f(x) = f(x+h) - 2f(x) + f(x-h)

These differences are the foundation of various interpolation formulas.
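As a sketch of the three differences, here is a small Python example; the function f(x) = x², the point x = 2, and the step h = 0.5 are illustrative choices, not taken from the text:

```python
# The three finite differences, evaluated for an illustrative f(x) = x^2.

def forward_diff(f, x, h):
    """Forward difference: Ξ”f(x) = f(x+h) - f(x)."""
    return f(x + h) - f(x)

def backward_diff(f, x, h):
    """Backward difference: βˆ‡f(x) = f(x) - f(x-h)."""
    return f(x) - f(x - h)

def second_diff(f, x, h):
    """Second central difference: Δ²f(x) = f(x+h) - 2f(x) + f(x-h)."""
    return f(x + h) - 2 * f(x) + f(x - h)

f = lambda x: x ** 2
print(forward_diff(f, 2, 0.5))   # Ξ”f(2)  = 6.25 - 4    = 2.25
print(backward_diff(f, 2, 0.5))  # βˆ‡f(2)  = 4 - 2.25    = 1.75
print(second_diff(f, 2, 0.5))    # Δ²f(2) = 6.25 - 8 + 2.25 = 0.5
```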

2. Newton’s Interpolation Formula

Newton’s interpolation uses the concept of divided differences to construct a polynomial that passes through a set of known points. The formula is expressed as:

P(x) = f[x_0] + (x - x_0)\, f[x_0, x_1] + (x - x_0)(x - x_1)\, f[x_0, x_1, x_2] + \ldots

where P(x) is the interpolating polynomial, x_0, x_1, \ldots are the known data points, and f[x_0, \ldots, x_k] denotes the k-th order divided difference.

Example:

Given the points (1, 1), (2, 4), (3, 9), we can use Newton's interpolation formula to find the polynomial that fits these points.

  1. Calculate the divided differences.
  2. Construct the polynomial using the formula.
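The two steps above can be sketched in Python; the helper names `divided_differences` and `newton_eval` are illustrative, not standard library functions:

```python
# Newton's divided-difference interpolation for the example points (1,1), (2,4), (3,9).

def divided_differences(xs, ys):
    """Return the coefficients f[x0], f[x0,x1], ... of Newton's polynomial."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Work bottom-up so each entry still holds the previous column's value.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] = f[x0, ..., xk]

def newton_eval(xs, coef, x):
    """Evaluate P(x) = f[x0] + (x-x0) f[x0,x1] + (x-x0)(x-x1) f[x0,x1,x2] + ..."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):  # Horner-like nested evaluation
        result = result * (x - xs[k]) + coef[k]
    return result

xs, ys = [1, 2, 3], [1, 4, 9]
coef = divided_differences(xs, ys)
print(coef)                        # [1, 3.0, 1.0] -> P(x) = 1 + 3(x-1) + (x-1)(x-2)
print(newton_eval(xs, coef, 2.5))  # 6.25, matching f(x) = x^2
```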

3. Lagrange’s Interpolation Formula

Lagrange’s interpolation provides another way to construct a polynomial through known data points. The formula is given by:

P(x) = \sum_{i=0}^{n} f(x_i)\, L_i(x)

where

L_i(x) = \prod_{\substack{0 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}

Here, L_i(x) are the Lagrange basis polynomials that ensure P(x) passes through all the given points.

Example:

For the points (1, 1), (2, 4), (3, 9), the Lagrange polynomial can be constructed similarly to Newton's method, using the above formula to find the basis polynomials and summing them up.
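A direct translation of the Lagrange formula into Python (the function name `lagrange_eval` is an illustrative choice):

```python
# Lagrange interpolation for the same points (1,1), (2,4), (3,9):
# P(x) = sum_i f(x_i) L_i(x), with L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j).

def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # build L_i(x) factor by factor
        total += yi * basis
    return total

xs, ys = [1, 2, 3], [1, 4, 9]
print(lagrange_eval(xs, ys, 2.5))  # 6.25 -- same polynomial Newton's form produces
print(lagrange_eval(xs, ys, 2))    # 4.0  -- passes through every given point
```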

Numerical Differentiation

Numerical differentiation is the process of estimating the derivative of a function based on discrete data points. It is particularly useful when dealing with experimental or observational data.

1. Forward Difference Method

The forward difference method estimates the derivative at a point using values from that point and subsequent points:

f'(x) \approx \frac{f(x+h) - f(x)}{h}

2. Backward Difference Method

The backward difference method estimates the derivative using previous points:

f'(x) \approx \frac{f(x) - f(x-h)}{h}

3. Central Difference Method

The central difference method provides a more accurate estimate by averaging the forward and backward differences:

f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}

Example:

Given values f(1) = 1, f(1.1) = 1.1, and f(0.9) = 0.9:

  1. Forward Difference: f'(1) \approx \frac{1.1 - 1}{0.1} = 1
  2. Backward Difference: f'(1) \approx \frac{1 - 0.9}{0.1} = 1
  3. Central Difference: f'(1) \approx \frac{1.1 - 0.9}{0.2} = 1

Numerical Integration

Numerical integration is essential for estimating the value of definite integrals when analytical solutions are challenging to obtain. Two commonly used methods are the Trapezoidal Rule and Simpson’s Rule.

1. Trapezoidal Rule

The Trapezoidal Rule approximates the area under a curve by dividing it into trapezoids. The formula for numerical integration using the trapezoidal rule is:

\int_a^b f(x)\, dx \approx \frac{h}{2} \left( f(a) + f(b) \right)

where h = b - a.

Example:

For f(x) = x^2 from x = 1 to x = 3:

  1. Calculate f(1) and f(3):
    • f(1) = 1^2 = 1
    • f(3) = 3^2 = 9
  2. Apply the Trapezoidal Rule:
\int_1^3 x^2\, dx \approx \frac{2}{2} (1 + 9) = 10

The exact value is \frac{26}{3} \approx 8.67, so the single-interval trapezoid overestimates here; subdividing [1, 3] into more subintervals reduces the error.
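The rule can be sketched in Python. `trapezoid(f, a, b, 1)` is the single-interval formula used in the example; the composite version with n subintervals (an extension of the formula above, not stated in the text) shrinks the error toward the exact value 26/3:

```python
# Composite trapezoidal rule with n equal subintervals.

def trapezoid(f, a, b, n=1):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoint values get weight 1/2
    for i in range(1, n):
        total += f(a + i * h)     # interior values get weight 1
    return h * total

f = lambda x: x ** 2
print(trapezoid(f, 1, 3, 1))    # 10.0 -- the single-interval estimate above
print(trapezoid(f, 1, 3, 100))  # β‰ˆ 8.6668, approaching 26/3 β‰ˆ 8.6667
```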

2. Simpson’s Rule

Simpson’s Rule is a more accurate method that uses parabolic segments to approximate the area under the curve. The formula is:

\int_a^b f(x)\, dx \approx \frac{h}{3} \left( f(a) + 4 f\!\left(\frac{a+b}{2}\right) + f(b) \right)

where h = \frac{b-a}{2}.

Example:

For f(x) = x^2 from x = 1 to x = 3, with h = \frac{3-1}{2} = 1:

  1. Calculate f(1), f(2), and f(3):
    • f(1) = 1, f(2) = 2^2 = 4, f(3) = 9
  2. Apply Simpson’s Rule:
\int_1^3 x^2\, dx \approx \frac{1}{3} (1 + 4 \times 4 + 9) = \frac{26}{3} \approx 8.67

This matches the exact value, since Simpson’s Rule is exact for polynomials of degree three or lower.

3. Bound of Truncation Error

The truncation error in numerical integration measures how much the approximation differs from the exact value. For the Trapezoidal Rule, the truncation error E_T can be expressed as:

E_T \le \frac{(b-a)^3}{12 n^2} \max_{a \le x \le b} |f''(x)|

For Simpson’s Rule, the error E_S is given by:

E_S \le \frac{(b-a)^5}{180 n^4} \max_{a \le x \le b} |f^{(4)}(x)|

where n is the number of subintervals and f^{(k)} denotes the k-th derivative of f.
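The trapezoidal bound can be checked numerically. Below is a sketch for an illustrative f(x) = x² on [1, 3], where f''(x) = 2, so max|f''| = 2; the composite trapezoidal rule used here is an assumption extending the single-interval formula in the text:

```python
# Check that the actual trapezoidal error obeys E_T <= (b-a)^3 / (12 n^2) * max|f''|.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

a, b, n = 1, 3, 4
exact = 26 / 3                                # integral of x^2 from 1 to 3
approx = trapezoid(lambda x: x * x, a, b, n)
error = abs(approx - exact)
bound = (b - a) ** 3 / (12 * n ** 2) * 2      # max|f''| = 2 for f(x) = x^2
print(error, bound)  # for a quadratic, the error attains the bound exactly
```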

Solution of Ordinary Differential Equations

Ordinary differential equations (ODEs) describe the relationship between functions and their derivatives. Numerical methods are often employed to find approximate solutions when analytical solutions are not feasible.

1. Euler’s Method

Euler’s method is the simplest numerical technique for solving first-order ODEs. It uses the formula:

y_{n+1} = y_n + h\, f(x_n, y_n)

where h is the step size, y_n is the current value, and f(x_n, y_n) is the function defining the differential equation.

Example:

Given \frac{dy}{dx} = y with initial condition y(0) = 1 and h = 0.1:

  1. y_0 = 1
  2. y_1 = y_0 + h \cdot y_0 = 1 + 0.1 \cdot 1 = 1.1
  3. Repeat for subsequent steps.

2. Modified Euler’s Method

The Modified Euler’s Method (also known as Heun’s Method) improves upon the basic Euler method by averaging the slopes at the start and end of each step:

  1. Predict: y_{n+1}^{*} = y_n + h\, f(x_n, y_n)
  2. Correct: y_{n+1} = y_n + \frac{h}{2} \left( f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{*}) \right)

3. Runge-Kutta 4th Order Method

The Runge-Kutta 4th Order Method is a more sophisticated approach that provides greater accuracy. The formula is:

y_{n+1} = y_n + \frac{h}{6} \left( k_1 + 2k_2 + 2k_3 + k_4 \right)

Where:

  • k_1 = f(x_n, y_n)
  • k_2 = f\left(x_n + \frac{h}{2},\; y_n + \frac{h}{2} k_1\right)
  • k_3 = f\left(x_n + \frac{h}{2},\; y_n + \frac{h}{2} k_2\right)
  • k_4 = f(x_n + h,\; y_n + h k_3)

This method provides a more accurate solution over a larger range of x values compared to the previous methods.