Download Maths 4 Unit 1, Numerical Methods – 1 Notes [PDF File]

Download Maths 4 Unit 2, Numerical Methods – 2 Notes [PDF File]

Download Maths 4 Unit 3, Complex Numbers Notes [PDF File]

Download Maths 4 Unit 4, Complex Variables – 2 Notes [PDF File]

Download Maths 4 Unit 5, Special Functions Notes [PDF File]

Download Maths 4 Unit 6, Probability – 1 Notes [PDF File]

Download Maths 4 Unit 7, Probability – 2 Notes [PDF File]

Download Maths 4 Unit 8, Sampling Theory Notes [PDF File]

- Unit 1 [Numerical Methods – 1] Notes and Unit 2 [Numerical Methods – 2] Notes By Dr.V.Ramachandra Murthy, MSRIT, Bangalore
- Unit 3 [Complex Variables – 1] Notes and Unit 4 [Complex Variables – 2] Notes By Dr.V.S.Madalli, KLECET, Hubli
- Unit 5 [Special Functions] Notes and Unit 8 [Sampling Theory] Notes By Dr.K.S.Basavarajappa, BIET, Davangere
- Unit 6 [Probability Theory – 1] Notes and Unit 7 [Probability Theory – 2] Notes By Dr.S.S.Benchalli, BEC, Bagalkot

Notes Credits – VTU Elearning

## Part B – Unit 6 : Probability – 1 [Sample Notes]

By Dr. S. S. Benchalli

Associate Professor and Head, Department of Mathematics

Basaveshwar Engineering College

Bagalkot – 587102, Karnataka

Introduction:

Probability and statistics are concerned with events which occur by chance. Examples include the occurrence of accidents, errors of measurement, production of defective and non-defective items on a production line, and various games of chance, such as drawing a card from a well-mixed deck, flipping a coin, or throwing a symmetrical six-sided die. In each case we may have some knowledge of the likelihood of the various possible results, but we cannot predict with any certainty the outcome of any particular trial. Probability and statistics are used throughout engineering. In electrical engineering, signals and noise are analysed by means of probability theory. Civil, mechanical, and industrial engineers use statistics and probability to test and account for variations in materials and goods. Chemical engineers use probability and statistics to assess experimental data and to control and improve chemical processes. It is essential for today’s engineer to master these tools.

Important terms:

a) Probability is an area of study which involves predicting the relative likelihood of various outcomes. It is a mathematical area which has developed over the past three or four centuries. One of the early uses was to calculate the odds of various gambling games. Its usefulness for describing errors of scientific and engineering measurements was realized. Engineers study probability for its many practical uses, ranging from quality control and quality assurance to communication theory in electrical engineering.

b) Chance is a necessary part of any process to be described by probability. Sometimes that element of chance is due partly, or perhaps even entirely, to our lack of knowledge of the details of the process. For example, if we had complete knowledge of the composition of every part of the raw material used to make bolts, and of the physical processes and conditions in their manufacture, in principle we could predict the diameter of each bolt. But in practice we generally lack that complete knowledge, so the diameter of the next bolt to be produced is an unknown quantity showing random variation. Under these conditions the distribution of diameters can be described by probability and statistics. If we want to improve the quality of those bolts and make them more uniform, we will have to look into the causes of the variation and make changes in the raw materials or the production process. But even then, there will very likely be some random variation in diameter that can be described statistically.

Fundamental concepts:

Probability as a specific term is a measure of the likelihood that a particular event will occur. Just how likely is it that the outcome of a trial will meet a particular requirement? If we are certain that an event will occur, its probability is 1 or 100 %. If it certainly will not occur, its probability is zero. The first situation corresponds to an event which occurs in every trial, whereas the second corresponds to an event which never occurs. At this point we might be tempted to say that probability is given by relative frequency, the fraction of all the trials in a particular experiment that give an outcome meeting the stated requirements. But in general that would not be right. Why? Because the outcome of each trial is determined by chance. Say we toss a fair coin, one which is just as likely to give heads as tails. It is entirely possible that six tosses of the coin would give six heads or six tails, or anything in between, so the relative frequency of heads would vary from zero to one. If it is just as likely that an event will occur as that it will not occur, its true probability is 0.5 or 50 %.

As an illustration, suppose the weatherman on TV says that for a particular region the probability of precipitation tomorrow is 40%. Let us consider 100 days which have the same set of relevant conditions as prevailed at the time of the forecast. According to the prediction, precipitation the next day would occur at any point in the region in about 40 of the 100 trials. (This is what the weatherman predicts, but we all know that the weatherman is not always right!)

Although we cannot make an infinite number of trials, in practice we can make a moderate number of trials, and that will give some useful information. The relative frequency of a particular event, or the proportion of trials giving outcomes which meet certain requirements, will give an estimate of the probability of that event. The larger the number of trials, the more reliable that estimate will be. This is the empirical or frequency approach to probability. (Remember that “empirical” means based on observation or experience.)

Example: 260 bolts are examined as they are produced. Five of them are found to be defective. On the basis of this information, estimate the probability that a bolt will be defective.

Answer: The probability of a defective bolt is approximately equal to the relative frequency, which is 5/260 ≈ 0.019.
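The empirical estimate above can be reproduced in a short script; the counts are exactly those given in the example:

```python
# Empirical (relative-frequency) estimate of the probability of a defective bolt.
# Counts taken from the example above: 5 defectives among 260 bolts examined.
bolts_examined = 260
defective = 5

# The relative frequency of the event estimates its probability.
p_defective = defective / bolts_examined
print(round(p_defective, 3))  # 0.019
```

As the text notes, a larger number of trials would make this estimate more reliable.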

Another type of probability is the subjective estimate, based on a person’s experience. To illustrate this, say a geological engineer examines extensive geological information on a particular property. He chooses the best site to drill an oil well, and he states that on the basis of his previous experience he estimates that the probability the well will be successful is 30%. (Another experienced geological engineer using the same information might well come to a different estimate.) This, then, is a subjective estimate of probability. The executives of the company can use this estimate to decide whether to drill the well.

A third approach is possible in certain cases. This includes various gambling games, such as tossing an unbiased coin; drawing a colored ball from a number of balls, identical except for color, which are put into a bag and thoroughly mixed; throwing an unbiased die; or drawing a card from a well-shuffled deck of cards. In each of these cases we can say before the trial that a number of possible results are equally likely. This is the classical or “a priori” approach. The phrase “a priori” comes from Latin words meaning “from what was known before”. This approach is often simple to visualize, giving a better understanding of probability. In some cases it can be applied directly in engineering.

Example: Two fair coins are tossed. What is the probability of getting one heads and one tails?

Answer: For a fair or unbiased coin, on each toss of each coin P[heads] = P[tails] = ½. This assumes that all other possibilities are excluded: if a coin is lost, that toss is eliminated, and the possibility that a coin will stand on edge after tossing can be neglected. There are two possible results of tossing the first coin, heads (H) and tails (T), and they are equally likely. Whether the result of tossing the first coin is heads or tails, there are two possible results of tossing the second coin; again these are heads (H) and tails (T), and they are equally likely. The possible outcomes of tossing the two coins are therefore HH, HT, TH and TT. Since the results H and T for the first coin are equally likely, and the results H and T for the second coin are equally likely, the four outcomes of tossing the two coins must be equally likely. These relationships are conveniently summarized in a tree diagram, in which each branch point (or node) represents a point of decision where two or more results are possible.

Simple Tree Diagram [Refer PDF, Soft Copy]

Since there are four equally likely outcomes, the probability of each is 1/4. Both HT and TH correspond to getting one heads and one tails, so two of the four equally likely outcomes give this result. Then the probability of getting one heads and one tails must be 2/4=1/2 or 0.5.
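The enumeration argument above can be checked mechanically by listing the equally likely outcomes and counting the favourable ones, a minimal sketch:

```python
from itertools import product
from fractions import Fraction

# Enumerate the equally likely outcomes of tossing two fair coins:
# ('H','H'), ('H','T'), ('T','H'), ('T','T').
outcomes = list(product("HT", repeat=2))

# Event: exactly one heads and one tails (HT and TH qualify).
favourable = [o for o in outcomes if o.count("H") == 1]

# Classical probability = favourable outcomes / total equally likely outcomes.
p = Fraction(len(favourable), len(outcomes))
print(p)  # 1/2
```

The same enumeration extends directly to three or more coins by changing `repeat`.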

In the study of probability an event is a set of possible outcomes which meets stated requirements. If a six-sided cube (called a die) is tossed, we define the outcome as the number of dots on the face which is upward when the die comes to rest. The possible outcomes are 1, 2, 3, 4, 5, and 6. We might call each of these outcomes a separate event.

Remember that the probability of an event which is certain is 1, and the probability of an impossible event is 0. Then no probability can be more than 1 or less than 0. If we calculate a probability and obtain a result less than 0 or greater than 1, we know we must have made a mistake. If we can write down probabilities for all possible results, the sum of all these probabilities must be 1, and this should be used as a check whenever possible. These basic requirements for probability are called the axioms of probability, and they are used to derive the theoretical relations of probability. They are stated formally below.

Definition:

Axiomatic Probability:

Let S be a sample space of a random experiment and A ⊆ S. Then the probability of an event A is a set function P(A) satisfying the following axioms:

i) Axiom of positiveness, P (A) ≥ 0

ii) Axiom of certainty, P(S) = 1

iii) Axiom of additivity, P(A1 ∪ A2 ∪ A3 ∪ …) = P(A1) + P(A2) + P(A3) + …

where A1, A2, … are a sequence of pairwise disjoint subsets of the sample space S.
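The three axioms can be verified numerically for a concrete model. The sketch below checks them for the fair six-sided die, where each face has probability 1/6 (the choice of disjoint events A1 and A2 is just an illustration):

```python
from fractions import Fraction

# Probability model for a fair die: each face 1..6 has probability 1/6.
P = {face: Fraction(1, 6) for face in range(1, 7)}

# Axiom (i): every probability is non-negative.
assert all(p >= 0 for p in P.values())

# Axiom (ii): the probability of the whole sample space S is 1.
assert sum(P.values()) == 1

# Axiom (iii) for disjoint events, e.g. A1 = {1, 2} and A2 = {5}:
A1, A2 = {1, 2}, {5}
p_union = sum(P[x] for x in A1 | A2)        # A1 and A2 do not overlap
assert p_union == P[1] + P[2] + P[5] == Fraction(1, 2)

print("all three axioms hold for the fair-die model")
```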

Example: A mathematics class for engineers consists of 25 industrial, 10 mechanical, 10 electrical, and 8 civil engineering students. If a person is randomly selected by the instructor to answer a question, find the probability that the student chosen is

a) an industrial engineering major

b) a civil engineering or electrical engineering major.

Solution: Denote by I, M, E and C the students majoring in industrial, mechanical, electrical, and civil engineering, respectively. The total number of students in the class is 53, each of whom is equally likely to be selected.

a) Since 25 of the 53 students are majoring in industrial engineering, the probability of the event I, selecting an industrial engineering major at random, is P(I) = 25 / 53.

b) Since 18 of the 53 students are civil or electrical engineering majors, it follows that P(C ∪ E) = 18 / 53.
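Since all 53 choices are equally likely, both answers are classical ("a priori") probabilities and can be computed directly from the counts given in the example:

```python
from fractions import Fraction

# Counts of students by major, as given in the example.
counts = {"I": 25, "M": 10, "E": 10, "C": 8}
total = sum(counts.values())                # 53 equally likely selections

# Classical probability = favourable cases / total cases.
p_industrial = Fraction(counts["I"], total)
p_civil_or_electrical = Fraction(counts["C"] + counts["E"], total)

print(p_industrial)            # 25/53
print(p_civil_or_electrical)   # 18/53
```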

Basic Rules of Combining Probabilities

The basic rules or laws of combining probabilities must be consistent with the fundamental concepts.

Addition Rule: This can be divided into two parts, depending upon whether there is overlap between the events being combined.

a) If the events are mutually exclusive, there is no overlap: if one event occurs, the other events cannot occur. In that case the probability of occurrence of one or another of more than one event is the sum of the probabilities of the separate events. For example, if I throw a fair six-sided die, the probability of any one face coming up is the same as the probability of any other face, namely one-sixth. There is no overlap among these six possibilities. Then P[6] = 1/6 and P[4] = 1/6, so P[6 or 4] = 1/6 + 1/6 = 1/3. This, then, is the probability of obtaining a six or a four on throwing one die. Notice that it is consistent with the classical approach to probability: of six equally likely results, two give the specified result. The addition rule corresponds to a logical “or” and gives a sum of separate probabilities. Often we can divide all possible outcomes into two groups without overlap…


**[Please Refer Soft Copy]**

## PART B – Unit 7 : Probability – 2 [Sample Notes]

By Dr. S. S. Benchalli

Associate Professor and Head

Department of Mathematics

Basaveshwar Engineering College

Bagalkot – 587102, Karnataka

Random Variables: In most statistical problems we are concerned with one number or a few numbers that are associated with the outcomes of experiments. In the inspection of a manufactured product we may be interested only in the number of defectives; in the analysis of a road test we may be interested only in the average speed and the average fuel consumption. All these numbers are associated with situations involving an element of chance – in other words, they are values of random variables. In the study of random variables we are usually interested in their probability distributions, namely, in the probabilities with which they take on the various values in their range.

For example, in tossing a coin the outcomes are H (heads) or T (tails), and in tossing a die the outcomes are the integers 1 to 6. However, we frequently wish to assign a specific number to each outcome of an experiment; in coin tossing it may be convenient to assign 1 to H and 0 to T. Such an assignment of numerical values is called a random variable. More generally, we have the following definition.

Definition: A random variable X is a rule that assigns a numerical value to each outcome in a sample space S.

In other words, if f is a function from S into the set R of all real numbers and X = f(s), s ∈ S, then X is called a random variable on S. For the coin-tossing experiment the sample space is S = {H, T}. Let us define a function f : S → R by f(s) = 1 if s = H and f(s) = 0 if s = T. Then X = f(s) is a random variable on S. For the outcome H the value of this random variable is 1, and for the outcome T its value is 0.

If X and Y are two random variables defined on the sample space S, and a and b are two real numbers, then:

i) aX + bY is a random variable; in particular, X – Y is a random variable

ii) XY is a random variable

iii) if X(s) ≠ 0 for all s ∈ S, then 1/X is also a random variable

Definition: The event consisting of all outcomes for which X = x is denoted as { X= x} and the probability of this event is denoted as P(X = x ).

Random variables are classified as discrete random variables (DRV) and continuous random variables (CRV).

If the random variable assumes values in steps, or at most a countable number of values, it is called a discrete random variable. A random variable is said to be continuous if it assumes all the values between two limits.

For example, the number of defective items in a lot is a DRV, while the length of life of electric bulbs is an example of a CRV.

Example: In the experiment of tossing two coins, we have the sample space S = {HH, HT, TH, TT}. We assign uniform probability ¼ to each element of S. Consider a random variable X which assigns to each element of S “the number of heads” in that element. Thus X : S → R is given by X(HH) = 2, X(HT) = X(TH) = 1, X(TT) = 0, and the range of X is {0, 1, 2}. Now X⁻¹(0) = X⁻¹(no head) = {s ∈ S : X(s) = 0} = {TT}, X⁻¹(1) = {HT, TH}, and X⁻¹(2) = {HH}. Therefore the probabilities of the events are P(X = 0) = 1/4, P(X = 1) = 1/2, P(X = 2) = 1/4.
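The distribution of X in this example can be computed by enumerating the sample space and grouping outcomes by the value X assigns to them, a minimal sketch:

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Sample space of tossing two fair coins: HH, HT, TH, TT, equally likely.
outcomes = list(product("HT", repeat=2))

# Random variable X = number of heads; group outcomes by their X value.
counts = Counter(o.count("H") for o in outcomes)

# f(x) = P(X = x) = (outcomes mapping to x) / (total outcomes).
f = {x: Fraction(n, len(outcomes)) for x, n in counts.items()}
print(sorted(f.items()))  # [(0, Fraction(1, 4)), (1, Fraction(1, 2)), (2, Fraction(1, 4))]
```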

Definition: The probability distribution f(x) of a random variable X is a description of the set of possible values of X (range of X ), along with the probability associated with each of the possible values ’x’.

Example: The probability distribution of the random variable X = number of heads in the previous example is:

| x | 0 | 1 | 2 |
|---|---|---|---|
| f(x) | 1/4 | 1/2 | 1/4 |


**[Please Refer Soft Copy]**

## UNIT 8 : Sampling Theory [Sample Notes]

By Dr. K.S.Basavarajappa

Professor & Head

Department of Mathematics

Bapuji Institute of Engineering and Technology

Davangere-577004

Statistical Inference:

It is often necessary to draw valid and reasonable conclusions concerning a large mass of individuals or things. The entire group of individuals is known as the population, and a small part of this population is known as a sample. The process of drawing valid and reasonable conclusions about the entire population from a sample is called statistical inference.

Random sampling:

A large collection of individuals, attributes, or numerical data can be understood as a population or universe.

A finite subset of the universe is called a sample. The number of individuals in a sample is called a Sample Size (n).

Sampling distribution:

For every sample of size n we can compute quantities like the mean, median, standard deviation, etc.; obviously these will not be the same from sample to sample.

If we group these characteristics according to their frequencies, the frequency distributions so generated are called sampling distributions.

The sampling distribution for large samples is assumed to be approximately normal. The standard deviation of a sampling distribution is also called the standard error (SE).
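The idea of a sampling distribution and its standard error can be illustrated by simulation: draw many samples of size n from a population, compute each sample mean, and look at the spread of those means. The population (standard normal), the sample size, and the number of samples below are arbitrary choices for the sketch; theory says the standard error of the mean is σ/√n.

```python
import random
import statistics

random.seed(42)
population_sigma = 1.0   # population standard deviation (assumed known here)
n = 25                   # sample size

# The sampling distribution of the mean: 2000 sample means, each from a
# fresh random sample of size n drawn from a normal population.
sample_means = [
    statistics.mean(random.gauss(0, population_sigma) for _ in range(n))
    for _ in range(2000)
]

# Standard error = standard deviation of the sampling distribution.
se_observed = statistics.stdev(sample_means)
se_theory = population_sigma / n ** 0.5      # sigma / sqrt(n) = 0.2
print(se_observed, se_theory)                # the two values are close
```

Increasing the sample size n shrinks the standard error, which is why larger samples give more reliable estimates.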

Testing of Hypothesis

Making certain assumptions in order to arrive at a decision regarding a population on the basis of a sample is referred to as forming a hypothesis.

The hypothesis formulated for the purpose of its rejection, under the assumption that it is true, is called the null hypothesis, denoted H0 ……


**[Please Refer Soft Copy for Complete Notes]**
