List of Topics (M 3)

Unit 1 i Linear Differential Equations of Higher Order

Unit 1 ii Simultaneous Linear Differential Equations

Unit 1 iii Applications of LDEs

Unit 2 i Laplace Transform

Unit 2 ii Fourier Transform

Unit 2 iii The {\mathcal Z} Transform

Unit 3 i Statistics, Correlation and Regression

Unit 3 ii Probability

Unit 4 Vector Differential Calculus

Unit 5 Vector Integral Calculus

Unit 6 Partial Differential Equations (for MECH, CIVIL)

Unit 6 Complex Variables, Differentiation and Integration (for Comp. Sci., EnTC, IT)


M III CS/IT May 2016

Q.1 a i) Solve {(D^2-D)y = e^x sin (x)}

Solution : The auxiliary equation is {m^2-m =0} or {m(m-1)=0}. The roots are {m_1 = 0} and {m_2 = 1}. The complementary function will be

{y_c = c_1 e^{0x} + c_2 e^{1x} = c_1 + c_2 e^x}

The particular integral will be

{y_p = \frac {1}{D^2-D} e^x \cdot sin (x)}

Using {\frac {1}{\phi (D)} e^{ax} V = e^{ax} \times \frac {1}{\phi (D+a)} V},

{y_p = e^x \times \frac {1}{(D+1)^2 - (D+1)} sin (x)}

{= e^x \times \frac {1}{D^2+2D+1-D-1} sin (x) = e^{x} \times \frac {1}{D^2+D} sin (x)}

Using {\frac {1}{\phi (D^2)} sin (ax) = \frac {1}{\phi (-a^2)} sin (ax)},

{y_p = e^x \times \frac {1}{-1+D} sin(x) = e^x \times \frac {1+D}{[-1+D][1+D]} sin(x) = e^x \times \frac {1+D}{D^2-1} sin(x) = e^x \times \frac {1+D}{-2} sin(x)}


{= e^x \times \frac {-1}{2} \left (sin (x) + \frac {d}{dx} sin(x) \right ) = \frac {-e^x}{2} [sin (x) + cos (x)]}

The complete solution is

{y =y_c + y_p = c_1 + c_2 e^x - \frac {e^x}{2} [sin (x) + cos (x)]}
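We can sanity-check the result numerically. Below is a minimal sketch in plain Python (derivatives approximated by central finite differences, constants {c_1, c_2} chosen arbitrarily) confirming that the complete solution satisfies the given equation:

```python
import math

def y(x, c1=1.3, c2=-0.4):
    # Complete solution with arbitrarily chosen constants c1, c2
    return c1 + c2*math.exp(x) - math.exp(x)*(math.sin(x) + math.cos(x))/2

def residual(x, h=1e-5):
    # (D^2 - D)y - e^x sin(x); derivatives by central differences
    d1 = (y(x + h) - y(x - h)) / (2*h)
    d2 = (y(x + h) - 2*y(x) + y(x - h)) / h**2
    return d2 - d1 - math.exp(x)*math.sin(x)

# The residual should vanish, up to finite-difference error
print(max(abs(residual(x/10)) for x in range(-20, 21)))
```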

Q.1 a ii) Solve {\frac {dx}{3z-4y} = \frac {dy}{4x-2z} = \frac {dz}{2y-3x}}

Solution : These are symmetric simultaneous differential equations. We will find 2 sets of values {l,m,n} which will give {ldx + mdy +ndz = 0}.

Note the pattern in the ratios. In the first ratio, {2} and {x} are absent in the denominator, in the second, {3} and {y} and in the third {4} and {z}.

If we choose the first set of multipliers as {2,3} and {4}, we get,

{2(3z-4y) + 3 (4x-2z) + 4 (2y-3x)}

{= 6z-8y+12x-6z+8y-12x = 0}

Hence, {2dx+3dy+4dz = 0}

On integrating,

{\int2 dx + \int 3 dy + \int 4 dz = 2x+3y+4z = c_1}

Note that this represents a family of planes normal to the position vector of the point {(2,3,4)}.

If we choose the second set of multipliers as {x,y} and {z}, we get

{x(3z-4y) + y(4x-2z) + z (2y-3x)}

{= 3zx-4xy+4xy-2yz+2yz-3zx = 0}

Hence, {xdx + ydy + zdz = 0}

On integrating,

{\int x dx + \int y dy + \int zdz = \frac {x^2}{2} + \frac {y^2}{2} + \frac {z^2}{2} = c_2}

Note that this represents a family of spheres with center at origin.
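Both relations can be cross-checked: along the solution curves, {(dx, dy, dz)} is proportional to the denominators {(3z-4y, 4x-2z, 2y-3x)}, so the gradient of each relation dotted with this vector must vanish. A quick Python sketch (sample points chosen at random):

```python
import random

def field(x, y, z):
    # (P, Q, R): the denominators of the given symmetric D.E.s
    return (3*z - 4*y, 4*x - 2*z, 2*y - 3*x)

def drift(grad, vec):
    # Directional derivative: gradient dotted with the direction field
    return sum(g*v for g, v in zip(grad, vec))

random.seed(0)
for _ in range(100):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    v = field(x, y, z)
    assert abs(drift((2, 3, 4), v)) < 1e-9   # gradient of 2x + 3y + 4z
    assert abs(drift((x, y, z), v)) < 1e-9   # gradient of (x^2+y^2+z^2)/2

print("both relations are constant along the field")
```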

Q.1 a iii) Solve {(D^2+9)y = x^2 +2x + cos (3x)}

Solution : The auxiliary equation is {m^2+9 =0}. Its roots are {0+3i} and {0-3i}. These are complex conjugates of each other. Hence, the complementary function will be

{y_c = e^{0x}[c_1 cos (3x) + c_2 sin (3x)] = c_1 cos (3x) + c_2 sin (3x)}

The particular integral will be

{y_p = \frac {1}{D^2+9} \left [ x^2 + 2x + cos (3x) \right ] = \frac {1}{D^2+9} x^2 + \frac {1}{D^2+9} 2x + \frac {1}{D^2+9} cos (3x)}

Note that the first two functions on the RHS are algebraic functions with integral powers of {x}. Hence, we need to use the binomial expansion in the form {\frac {1}{1+ \phi (D)}}.

The third function is {cos (3x)}. If we use {\frac {1}{\phi (D^2)} cos (ax) = \frac {1}{\phi (-a^2)} cos (ax)}, we get {\frac {1}{0}}, so we need to use the alternative, which is {\frac {1}{D^2+a^2} cos (ax) = \frac {x}{2a} sin (ax)}.

Considering all these,

{y_p = \frac {1}{9} \left [\frac {1}{\frac {D^2}{9} + 1} \right ] x^2 + \frac {1}{9} \left [\frac {1}{\frac {D^2}{9} + 1} \right ] 2x + \frac {x}{2 \times 3} sin (3x)}

{= \frac {1}{9} \left [1 - \frac {D^2}{9} + \frac {D^4}{81} - \cdots \right ] x^2 + \frac {1}{9} \left [1 - \frac {D^2}{9} + \cdots \right ] 2x + \frac {x}{6} sin (3x)}

All derivatives of {x^2} of order {\ge 3} will be {0} and all derivatives of {x} of order {\ge 2} will be {0}. So,

{y_p = \frac {1}{9} \left [x^2 - \frac {2}{9} \right ] + \frac {1}{9} \left [2x \right ] + \frac {x \ sin (3x)}{6}}

The complete solution is

{y = y_c + y_p = c_1 cos (3x) + c_2 sin (3x) + \frac {1}{9} \left [x^2 - \frac {2}{9} \right ] + \frac {1}{9} \left [2x \right ] + \frac {x \ sin (3x)}{6}}
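As before, the answer can be verified numerically. The sketch below (plain Python, {y''} by central differences, constants chosen arbitrarily) checks that {y'' + 9y} reproduces the right-hand side:

```python
import math

def y(x, c1=0.8, c2=-1.1):
    # Complete solution with arbitrarily chosen constants c1, c2
    return (c1*math.cos(3*x) + c2*math.sin(3*x)
            + (x**2 - 2/9)/9 + 2*x/9 + x*math.sin(3*x)/6)

def residual(x, h=1e-5):
    # (D^2 + 9)y - (x^2 + 2x + cos(3x)); y'' by central differences
    d2 = (y(x + h) - 2*y(x) + y(x - h)) / h**2
    return d2 + 9*y(x) - (x**2 + 2*x + math.cos(3*x))

print(max(abs(residual(x/10)) for x in range(-20, 21)))
```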

Vector Integral Calculus

  • Vector Integration

In the previous section, we discussed vector differentiation. We can extend our notion of integration of scalar functions to that of vector functions.

Let {\vec F (x,y,z)} be a vector field defined over a region and let {C} be a curve in this region. At each point {\vec F} will have a value. We consider an arc element {\delta s} and a unit tangent vector {\hat T} at a point {P} on the curve. If we fix 2 points {A} and {B} on the curve and allow {P} to slide along the curve from {A} to {B}, we get the path of integration.

{\int \limits_{C : A}^{B} \vec F \cdot \hat T ds}

The above integral is known as the line integral.

Note that as {\delta s \to 0}, {\hat T \delta s \to d \vec r}. Hence, the above integral can also be written as

{\int \limits_C \vec F \cdot d \vec r}

  • Conservative Field

If the definite integral {\int \limits_{C : A}^{B} \vec F \cdot d \vec r} does not depend on the path {C}, the field {\vec F} is known as a conservative field. It can be shown that this is true when there exists a scalar point function {\phi} such that {\vec F = \nabla \phi}.

The closed path integral, where {B} coincides with {A}, {\oint \limits_C \vec F \cdot d \vec r}, is {0} in a conservative field.
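Path independence can be illustrated numerically. The sketch below uses a hypothetical potential {\phi = x^2 y + sin(z)} and compares the line integral of {\vec F = \nabla \phi} along two different paths joining the same endpoints:

```python
import math

def phi(x, y, z):
    # A hypothetical scalar potential; F = grad(phi) is then conservative
    return x**2 * y + math.sin(z)

def F(x, y, z):
    # Gradient of phi, computed by hand
    return (2*x*y, x**2, math.cos(z))

def line_integral(path, n=20000):
    # Midpoint-rule approximation of the line integral of F . dr
    # along path(t), t in [0, 1]
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        x, y, z = path(t)
        p1, p0 = path(t + h/2), path(t - h/2)
        dr = tuple((a - b) / h for a, b in zip(p1, p0))   # dr/dt, numerically
        total += sum(f*d for f, d in zip(F(x, y, z), dr)) * h
    return total

straight = lambda t: (t, t, t)                               # A=(0,0,0) to B=(1,1,1)
curved = lambda t: (t**2, t**3, t + math.sin(math.pi*t)/3)   # same endpoints

# Both must equal phi(B) - phi(A)
print(line_integral(straight), line_integral(curved), phi(1, 1, 1) - phi(0, 0, 0))
```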

  • Gauss’ Divergence Theorem

The surface integral of a vector point function {\vec F} over a surface {S} is defined as the integral of the component of {\vec F} normal to the surface {S}, taken over the entire surface. In other words,

{\int_S \vec F \cdot \hat n dS}

is the surface integral. Since the elementary area {dS} can be expressed in terms of two parameters (e.g. as {dx dy} when projected on the {XY} plane), the surface integral is actually a double integral.

Gauss’ Divergence Theorem states that the surface integral of the normal component of the function {\vec F} taken over a closed surface {S} is equal to the volume integral of the divergence of {\vec F} taken over the volume {V} enclosed by the surface {S}. In other words,

{\iint \limits_S \vec F \cdot \hat n dS = \iiint \limits_V \nabla \cdot \vec F dV}
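A numerical illustration of the theorem for a sample field over the unit cube ({\vec F = (xy, yz, zx)}, chosen only for demonstration; both integrals computed by midpoint sums):

```python
def F(x, y, z):
    # A sample field: F = (xy, yz, zx), so div F = y + z + x
    return (x*y, y*z, z*x)

def volume_integral(n=40):
    # Midpoint-rule triple integral of div F over the unit cube
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + .5)*h, (j + .5)*h, (k + .5)*h
                total += (x + y + z) * h**3
    return total

def flux(n=400):
    # Outward flux of F through the six faces of the unit cube
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + .5)*h, (j + .5)*h
            total += (F(1, u, v)[0] - F(0, u, v)[0]) * h*h   # faces x=1, x=0
            total += (F(u, 1, v)[1] - F(u, 0, v)[1]) * h*h   # faces y=1, y=0
            total += (F(u, v, 1)[2] - F(u, v, 0)[2]) * h*h   # faces z=1, z=0
    return total

print(volume_integral(), flux())
```

Both sums come out equal, as the theorem demands.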

  • Stokes’ Theorem

The surface integral of the normal component of the curl of a vector point function {\vec F}, taken over an open surface {S} bounded by a closed curve {C}, is equal to the line integral of the tangential component of {\vec F} over {C}. Thus,

{\iint \limits_S (\nabla \times \vec F) \cdot \hat n dS = \oint \limits_C \vec F \cdot d \vec r}
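A classic check: for {\vec F = (-y, x, 0)} over the unit disk in the {XY} plane, {\nabla \times \vec F = (0,0,2)}, so the surface integral is {2 \times} (area of the disk) {= 2\pi}; the circulation around the boundary circle agrees. A quick sketch:

```python
import math

def circulation(n=1000):
    # Line integral of F = (-y, x, 0) around the unit circle,
    # r(t) = (cos t, sin t, 0), t in [0, 2*pi], counter-clockwise
    h = 2*math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # dr/dt
        total += (-y*dx + x*dy) * h
    return total

# curl F = (0, 0, 2) and n-hat = k-hat on the disk, so the
# surface integral is 2 * (area of the unit disk) = 2*pi
print(circulation(), 2*math.pi)
```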

  • Green’s Lemma

Green’s lemma (or theorem) is a special case of Stokes’ theorem, when the surface lies in the {XY} plane and thus {\hat n = \hat k}.

Statistics, Correlation and Regression

  • What is Statistics?

It is a branch of mathematics. It involves collection, analysis, interpretation, presentation, and organization of data. (Dictionary Definition).

Descriptive Statistics summarizes the data with the help of a few indices, such as the mean (the central tendency) and the standard deviation (the dispersion). Inferential Statistics draws conclusions from data that are subject to random variation. Inferential statistics uses probability theory.

  • Classification of Data (Numerical)

If the data values are arranged in ascending or descending order, the minimum and maximum values are revealed. The quantity represented by the numerical data is termed as a variate or a variable. Often, data values repeatedly occur in the dataset. The number of times a value occurs in the dataset is termed as the frequency. The representation using frequencies is the frequency distribution.

If the range of the data is wide, instead of mentioning individual values, they are grouped into class intervals. In general, in a class interval of {(a-b)} all values {\ge a} and {<b} are included.

Sometimes, a cumulative frequency distribution table is prepared.

  • Representation of Data

Histogram, Frequency Polygon and Ogive are commonly used to represent the data.

  • Descriptive Statistics : Central Tendency

As mentioned earlier, in descriptive statistics, the central tendency and the dispersion are studied. The indices of central tendency are as follows :

Arithmetic Mean :

{\mu_x \ or \ \bar x = \frac {x_1+x_2 + \cdots + x_n}{n} =\frac 1 n \sum \limits_{i=1}^{n} x_i}

If the data are in frequency distribution format,

{\mu_x \ or \ \bar x = \frac {\sum \limits_{i=1}^{n} x_i f_i}{\sum \limits_{i=1}^{n} f_i}}

If the data are in grouped frequency distribution format,

{\mu_x \ or \ \bar x = A + h \times \frac {\sum \limits_{i=1}^{n} f_i u_i}{\sum \limits_{i=1}^{n} f_i},}

where {A} is the assumed mean, generally the middle value in the dataset, {u} is given by {\frac {x-A}{h}}, {h} is the width of the class interval and {x} is the mid-value of the class interval.

Joint Mean of 2 distributions with {n_1} and {n_2} values is given by

{\frac {\mu_1 n_1 + \mu_2 n_2}{n_1 + n_2}}
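A quick check of the joint-mean formula on hypothetical data (the formula must agree with the mean of the pooled dataset):

```python
def mean(values):
    return sum(values) / len(values)

group1 = [12, 15, 11, 14, 18]     # hypothetical marks, batch 1
group2 = [16, 13, 17]             # hypothetical marks, batch 2

n1, n2 = len(group1), len(group2)
joint = (mean(group1)*n1 + mean(group2)*n2) / (n1 + n2)

# Must agree with the mean of the pooled dataset
print(joint, mean(group1 + group2))
```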

Geometric Mean : 

The geometric mean is given by

{\Big( \prod \limits_{i=1}^{n} x_i^{f_i} \Big)^{1/N},}

where {N = \sum \limits_{i=1}^{n} f_i}

Harmonic Mean : 

Harmonic mean is the reciprocal of arithmetic mean of the reciprocals of the given values.

Median/ Positional Average : 

Median is that value of the variate which divides the dataset into 2 equal parts. Thus, an equal number of values lie on either side of the median. If the number of values is even, the arithmetic mean of the two middle terms is the median.

In a cumulative frequency distribution, the median is that value of the variate whose cumulative frequency is equal to or just greater than {N/2}, where {N} is the total number of values.

Mode : 

It is the most frequently occurring value in the data-set.

  • Descriptive Statistics : Dispersion

These indices measure the spread of the values. So, 2 datasets can have the same mean, but the values may be spread over a wider range for one of them.

Mean Deviation : 

It is given by

{\frac {1}{N} \sum \limits_{i=1}^{n} f_i |x_i - \mu|}

Standard Deviation : 

{\sigma = \sqrt {\frac 1 N \sum \limits_{i=1}^{n} f_i (x_i- \mu)^2}}

The square of the standard deviation is known as the variance.

Root Mean Square Deviation :

{S = \sqrt {\frac 1 N \sum \limits_{i=1}^{n} f_i (x_i- A)^2},}

where {A} is any arbitrary number.

{S} and {\sigma} are related by the following relation :

{S^2 = \sigma^2 + (\mu - A)^2}

In grouped distributions, the standard deviation is calculated as follows :

{\sigma = h \sqrt {\frac 1 N \sum \limits_{i=1}^n f_i u_i^2 - \left (\frac {\sum \limits_{i=1}^{n} f_i u_i}{N} \right)^2}}

Here, {h} is the width of class interval, {u = \frac {x-A}{h}}. {A} is the assumed mean, {x} is the mid-value of the interval. {N} is the sum of all frequencies.
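The step-deviation formula can be checked against a direct computation on the class mid-values. The grouped data below are hypothetical:

```python
import math

# Hypothetical grouped distribution: class mid-values and frequencies
mids = [5, 15, 25, 35, 45]
freqs = [4, 10, 18, 6, 2]
h, A = 10, 25                     # class width and assumed mean
N = sum(freqs)

# Step-deviation formula with u = (x - A) / h
u = [(x - A) / h for x in mids]
s1 = sum(f*ui*ui for f, ui in zip(freqs, u)) / N
s2 = sum(f*ui for f, ui in zip(freqs, u)) / N
sigma_grouped = h * math.sqrt(s1 - s2*s2)

# Direct computation on the mid-values
mu = sum(f*x for f, x in zip(freqs, mids)) / N
sigma_direct = math.sqrt(sum(f*(x - mu)**2 for f, x in zip(freqs, mids)) / N)

print(sigma_grouped, sigma_direct)
```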

  • Moments

{r}th moment of a distribution about the mean {\mu} is denoted by {\mu_r} and is given by

{\mu_r = \frac {1}{N} \sum \limits_{i=1}^{n} f_i \big (x_i- \mu \big )^r}

This is similar to moment of a force about a point, where we define the moment as force {\times} the perpendicular distance.

{r}th moment of a distribution about any arbitrary number {A} is denoted by {\mu'_r} and is given by

{\mu'_r = \frac {1}{N} \sum \limits_{i=1}^{n} f_i \big (x_i- A \big )^r}


  • Relation between {r}th moment about the mean ({\mu_r}) and {r}th moment about any number {A}, ({\mu'_r})


It can be shown that {\mu_r} and {\mu'_r} are related by the following equation :

{\mu_r = \mu'_r - \ ^rC_1 \mu'_{r-1} \mu'_1 + \ ^rC_2 \mu'_{r-2} (\mu'_1)^2 - \cdots + (-1)^r (\mu'_1)^r}
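The relation can be verified numerically for small {r} (hypothetical data, arbitrary {A}); the general term is {(-1)^k \ ^rC_k \mu'_{r-k} (\mu'_1)^k}:

```python
from math import comb

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical dataset (all frequencies 1)
N = len(data)
A = 3                             # arbitrary reference point
mu = sum(data) / N

def mu_about(point, r):
    # r-th moment of the data about `point`
    return sum((x - point)**r for x in data) / N

for r in (2, 3, 4):
    m1 = mu_about(A, 1)
    from_relation = sum((-1)**k * comb(r, k) * mu_about(A, r - k) * m1**k
                        for k in range(r + 1))
    assert abs(from_relation - mu_about(mu, r)) < 1e-9

print("relation verified for r = 2, 3, 4")
```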


  • Skewness

It tells how much the frequency distribution curve deviates from symmetry.

I) Frequency curve stretches towards right : Mean to the right of mode – Right/Positively skewed

II) Frequency curve stretches towards left : Mean to the left of mode – Left/Negatively skewed

Skewness is measured by

{\frac {3(\mu - m)}{\sigma},}

where {m} is the median.

The coefficient of skewness is given by

{\beta_1 = \frac {\mu_3^2}{\mu_2^3}}


  • Kurtosis

In Greek, {kurtos} means {bulging}. The coefficient of kurtosis, {\beta_2}, is given by

{\beta_2 = \frac {\mu_4}{\mu_2^2}}

It measures how peaked the frequency distribution curve is.

A curve which is neither flat nor peaked is {mesokurtic} ({\beta_2 = 3}). If {\beta_2 > 3}, the curve is {leptokurtic} or peaked. If {\beta_2 < 3}, the curve is {platykurtic} or flat.


  • Bivariate Distributions

When the dataset contains 2 variables, each data point is an ordered pair, say {(x,y)}. Such distributions are known as bivariate distributions. We may wish to test the relationship between the 2 variables, if any.

Examples : Amount of time spent by students on social media vs. marks obtained, rainfall in a year and the crop production in the following year.

We may be tempted to draw a conclusion without having a look at the actual values. The variables in the first example may seem to be correlated, but we cannot be sure unless we have the dataset.

In the second example, we can provide a good reasoning for the correlation.


  • Correlation and Karl Pearson’s Coefficient

When a change in one variable leads to a corresponding change in the other, we say that the variables are correlated. If one increases and the other decreases, it is a negative correlation. If both increase, it is a positive correlation.

If the ratio of the change in one variable to the corresponding change in the other is constant for each pair {(x,y)}, the correlation is said to be linear. According to Karl Pearson, the strength of the linear relationship between the variables is given by the correlation coefficient :

{r = \frac {cov (x,y)}{\sigma_x \sigma_y} = \frac {\frac 1 n \sum \limits_{i=1}^n (x_i - \mu_x)(y_i - \mu_y)}{\sigma_x \sigma_y }}

{cov (x,y)} is the covariance. When the covariance is divided by the product of the standard deviations, we get the correlation coefficient.

Note that the value of {r} always lies between {-1} and {1}.
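A direct computation of {r} in plain Python (hypothetical data; the sketch also confirms that a perfectly linear dataset gives {|r| = 1}):

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    cov = sum((x - mx)*(y - my) for x, y in zip(xs, ys)) / n   # cov(x, y)
    sx = math.sqrt(sum((x - mx)**2 for x in xs) / n)           # sigma_x
    sy = math.sqrt(sum((y - my)**2 for y in ys) / n)           # sigma_y
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]           # hypothetical time spent
marks = [82, 75, 70, 64, 59]      # hypothetical marks obtained

r = pearson_r(hours, marks)
print(r)                          # a strong negative correlation

# A perfectly linear dataset gives |r| = 1
print(pearson_r([1, 2, 3], [5, 7, 9]))
```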


  • Regression

If the variables are known to be correlated, the value of one variable can be obtained if the value of the other variable is known. This is known as regression. In order to do that, a relationship between the variables, say {x} and {y}, is developed in the form of an equation. Further, if it is known that the relationship is linear, the equation will be that of a straight line:


I) Line of Regression of {y} on {x}

{y - \mu_y = r \frac {\sigma_y}{\sigma_x} (x- \mu_x)}

II) Line of Regression of {x} on {y}

{x - \mu_x = r \frac {\sigma_x}{\sigma_y} (y- \mu_y)}


Regression Coefficients

I) of {y} on {x}, {b_{yx} = \frac {cov (x,y)}{\sigma_x^2}}

II) of {x} on {y}, {b_{xy} = \frac {cov (x,y)}{\sigma_y^2}}
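The two coefficients are tied to {r}: their product equals {r^2}. A quick check on hypothetical data:

```python
import math

xs = [1, 2, 3, 4, 5]              # hypothetical bivariate data
ys = [2, 3, 5, 4, 6]

n = len(xs)
mx, my = sum(xs)/n, sum(ys)/n
cov = sum((x - mx)*(y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx)**2 for x in xs) / n     # sigma_x^2
vy = sum((y - my)**2 for y in ys) / n     # sigma_y^2

b_yx = cov / vx                   # regression coefficient of y on x
b_xy = cov / vy                   # regression coefficient of x on y
r = cov / math.sqrt(vx * vy)

print(b_yx * b_xy, r**2)          # the product of the coefficients is r^2
```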



Almost all MCQs are formula-based. So, make sure that you know all formulas with the terms forming them.

Partial Differential Equations

  • Introduction

So far, while studying calculus, we have dealt with functions of a single variable, i.e. {y=f(x)}. {sin (x^2), ln \ x, e^{cos \ (tan \ x)}} are a few examples. Irrespective of their complexity, the variable {y} always depended on the value of the independent variable {x}. We also defined the derivatives and integrals of {f(x)} and studied a few applications of them.

More often than not, we encounter situations where a function {f} needs more than 1 independent variable specified for its definition. Such functions are known as functions of several variables, e.g. a function of 2 variables is

{f(x,y) = sin (x) e^y \times xy^{3/2}}

Thus, without knowing values of both {x} and {y} simultaneously, we cannot get a unique value of {f(x,y)}.

One can define a function of as many variables as one wants. (Of course, it should make some sense.) In many of the problems in mechanical engineering, the functions are of at most 4 independent variables; viz. 3 space variables, {(x,y,z)}, and a time variable {t}.

Partial differentiation involves obtaining the derivatives of functions of several variables.

  • Definition and Rules

Let {z} be a function of 2 independent variables {x} and {y}. To differentiate {z} partially w.r.t {x}, we treat {y} as a constant and follow the usual process of differentiation. Thus,

{\frac {\partial z}{\partial x} = \lim \limits_{\delta x \to 0} \frac {f(x + \delta x, y) - f(x,y)}{\delta x}}


{\frac {\partial z}{\partial y} = \lim \limits_{\delta y \to 0} \frac {f(x, y+ \delta y) - f(x,y)}{\delta y}}

Thus, the definition is similar to that of ordinary differentiation. The condition of existence of the limit is necessary.

Note that we use the symbol {\partial} for partial derivatives and the letter {d} for ordinary derivatives.

The rules for differentiation of sums, differences, products and quotients are the same as in ordinary differentiation.

  • Derivatives of Higher Order

Having obtained the first order derivatives {\frac {\partial z}{\partial x}} and {\frac {\partial z}{\partial y}}, we now define the second order derivatives, i.e.

{\frac {\partial}{ \partial x} \Big ( \frac {\partial z}{\partial x} \Big) , \frac {\partial}{ \partial y} \Big ( \frac {\partial z}{\partial x} \Big), \frac {\partial}{ \partial x} \Big ( \frac {\partial z}{\partial y} \Big), \frac {\partial}{ \partial y} \Big ( \frac {\partial z}{\partial y} \Big)}

For a function of 2 variables, four 2nd order derivatives are possible. These are sometimes written as

{\frac {\partial^2 z}{ \partial x^2} = z_{xx}, \ \frac {\partial^2 z}{\partial y \partial x} = z_{yx},\ \frac {\partial^2 z}{\partial x \partial y} = z_{xy}, \ \frac {\partial^2 z}{\partial y^2} = z_{yy}}

If the function and its derivatives are continuous, then we have

{\frac {\partial^2 z}{\partial y \partial x}= \frac {\partial^2 z}{\partial x \partial y}}

One can define derivatives of order {> 2} by following the same procedure.
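The equality of the mixed partial derivatives can be seen numerically for a sample smooth function (nested central differences; the function below is chosen only for illustration):

```python
import math

def f(x, y):
    # A sample smooth function of 2 variables
    return math.sin(x) * math.exp(y) + x**2 * y**3

def z_xy(x, y, h=1e-4):
    # d/dy of (dz/dx), both by central differences
    fx = lambda yy: (f(x + h, yy) - f(x - h, yy)) / (2*h)
    return (fx(y + h) - fx(y - h)) / (2*h)

def z_yx(x, y, h=1e-4):
    # d/dx of (dz/dy), both by central differences
    fy = lambda xx: (f(xx, y + h) - f(xx, y - h)) / (2*h)
    return (fy(x + h) - fy(x - h)) / (2*h)

print(z_xy(0.7, 0.3), z_yx(0.7, 0.3))
```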

  • Types of Problems (Crucial from exam point of view)

I) Based on the definition and the commutative property of partial differentiation

II) Based on the concept of composite functions (Mostly involve the relations between cartesian and polar coordinates)

  • Homogeneous Functions

When the sum of the indices of the variables is the same for all terms of a function, the function is said to be homogeneous, of degree equal to that sum.

{6x^3y^2 + x^5 - xy^4}

is an example. (Degree {= 5})

Note that each term must be explicitly of the form {a x^m y^n}. Thus, {sin (6x^3y^2 + x^5 - xy^4)} is NOT a homogeneous function.

  • Euler’s Theorem (by Leonhard Euler)

For a homogeneous function {z=f(x,y)} of degree {n},

{x \frac {\partial z}{\partial x} + y \frac {\partial z}{\partial y} = nz}

As a consequence of this,

{x^2 \frac {\partial^2 z}{ \partial x^2} + 2xy \frac {\partial^2 z}{\partial x \partial y} + y^2 \frac {\partial^2 z}{ \partial y^2} = n (n-1)z}

Similarly, if {u =f(x,y,z)} is a homogeneous function of 3 independent variables of degree {n}, then

{x \frac {\partial u}{\partial x} + y \frac {\partial u}{\partial y} + z \frac {\partial u}{\partial z}= nu}
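Euler's theorem can be verified numerically for the degree-5 example above, {6x^3y^2 + x^5 - xy^4} (partial derivatives by central differences, the sample point chosen arbitrarily):

```python
def f(x, y):
    # Homogeneous of degree 5
    return 6*x**3*y**2 + x**5 - x*y**4

def fx(x, y, h=1e-6):
    # partial f / partial x, by central differences
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def fy(x, y, h=1e-6):
    # partial f / partial y, by central differences
    return (f(x, y + h) - f(x, y - h)) / (2*h)

x, y, n = 1.3, 0.7, 5
lhs = x*fx(x, y) + y*fy(x, y)
print(lhs, n*f(x, y))   # the two must agree (Euler's theorem)
```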

  • Total Derivatives

Consider a function {z = f(x,y)}. If it so happens that {x} and {y} themselves are functions of another variable {t}, then the total derivative of {z} w.r.t. {t} is defined as

{\frac {dz}{dt} = \frac {\partial z}{\partial x} \times \frac {dx}{dt} + \frac {\partial z}{\partial y} \times \frac {dy}{dt}}

Thus, if we are given a function {z = g(t)}, we would differentiate it w.r.t. {t}, thus getting {\frac {dz}{dt}}. Instead, if {z} is expressed as {f(x,y)} with {x= \phi (t)} and {y = \psi (t)}, then obtaining the total derivative of {f(x,y)} will be equivalent to getting {\frac {d}{dt} g(t)}.
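A quick numerical check of the chain-rule formula, with hypothetical choices {f(x,y) = x^2 y}, {x = cos(t)}, {y = t^2}:

```python
import math

# Hypothetical choices: z = f(x, y) = x^2 y, with x = cos(t), y = t^2
f = lambda x, y: x**2 * y
fx = lambda x, y: 2*x*y           # partial z / partial x
fy = lambda x, y: x**2            # partial z / partial y
phi, dphi = math.cos, lambda t: -math.sin(t)
psi, dpsi = lambda t: t**2, lambda t: 2*t

def total_derivative(t):
    # dz/dt via the chain-rule formula above
    x, y = phi(t), psi(t)
    return fx(x, y)*dphi(t) + fy(x, y)*dpsi(t)

def direct_derivative(t, h=1e-6):
    # d/dt of g(t) = f(phi(t), psi(t)), by central differences
    g = lambda s: f(phi(s), psi(s))
    return (g(t + h) - g(t - h)) / (2*h)

print(total_derivative(0.8), direct_derivative(0.8))
```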

  • Applications

We will discuss the applications of partial differentiation in the next blogpost.

Applications of Linear Differential Equations to Electric Circuits

  • Prerequisites :

Differential Equations of First Order and First Degree

Linear Differential Equations of Higher Order


There are 3 basic components of an electric circuit across which a voltage drop is possible. They are:

1) Resistance ({R}), voltage drop = {i R}, we saw this in Ohm’s law.

2) Capacitance ({C}), voltage drop =  {\frac {1}{C} \times \int i dt}

3) Inductance ({L}), voltage drop = {L \times \frac {di}{dt}}

Note that the electric current {i} is the rate of flow of charge {q}, hence, {i = \frac {dq}{dt}}.

To set up the differential equation, we use Kirchhoff's voltage law, which states that the sum of all the voltages around a loop is equal to zero.

The general circuit will consist of all 3 elements, {R,L} and {C} as well as the voltage source, {E}. The D.E. will be

{L \frac {di}{dt} + i R + \frac {1}{C} \int i dt = E sin (\omega_a t)}

Expressing in terms of the amount of charge, {q},

{L \frac {d^2q}{dt^2} + R \frac {dq}{dt} + \frac {q}{C} = E sin (\omega_a t)}

{\omega_a} is the frequency of applied voltage.

Resonance is a special condition in {R-L-C} circuits, when the imaginary parts of impedances due to inductor and capacitor cancel each other. It occurs when the applied frequency {\omega_a} becomes equal to the natural frequency, given by

{\omega_n = \frac {1}{\sqrt {LC}}}
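For sample (hypothetical) component values, we can confirm that the inductive and capacitive reactances cancel at {\omega_n}:

```python
import math

# Hypothetical component values
L, C = 0.5, 2e-6                  # henry, farad

omega_n = 1 / math.sqrt(L * C)    # natural frequency, rad/s

# At resonance, the inductive and capacitive reactances cancel:
reactance = omega_n*L - 1/(omega_n*C)
print(omega_n, reactance)
```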

Simultaneous Linear Differential Equations

  • Introduction

In the previous blogpost, we covered L.D.E.s of one dependent variable {y} and one independent variable {x}. Moving ahead, we will now study L.D.E.s of two or more dependent variables and one independent variable.

In order to solve such equations, we will need as many differential equations as the number of dependent variables. This is similar to what we need when we solve simultaneous equations of the form {2x+3y = 5, \ 5x - 4y = 6}, 2 unknowns, hence 2 equations.

Note that a system of {n} first-order linear differential equations in {n} dependent variables can be converted into a single linear differential equation of order {n}.

  • The Method of Substitution/ Elimination (2 Dependent Variables Only)

This method is similar to solving linear equations simultaneously. One of the variables is eliminated by a suitable operation. Having done that, an L.D.E. in only 1 dependent variable is obtained, which can be solved by the methods learnt earlier.

Once one variable is obtained, the second one can be easily obtained.

  • Symmetrical Simultaneous D.E.s

When the equations are of the form

{\frac {dx}{P} = \frac {dy}{Q} = \frac {dz}{R},}

where {P,Q} and {R} are functions of {x,y,z}, they are said to be symmetrical simultaneous D.E.s. The solution consists of 2 independent relations,

{f_1 (x,y,z)= c_1 \ and \ f_2 (x,y,z) = c_2}

  • Method of Grouping

If it so happens that any 2 of the 3 ratios above contain only 2 variables, they can be solved readily by the variable-separable method.

  • Method of Multipliers

We find a set of multipliers, say {l,m} and {n}, not necessarily constants, such that

{\frac {dx}{P} = \frac {dy}{Q} = \frac {dz}{R} = \frac {ldx + mdy + n dz}{lP + mQ + nR}}

If, by choice, {lP + mQ + nR =0}, then

{l dx + m dy + ndz =0}

This D.E. is now integrable. On integration, we get {f_1 (x,y,z)= c_1}.

We then choose another set of multipliers, follow the same procedure and get the 2nd relation.