Simple problems in probability theory. Basic formulas

In economics, as in other areas of human activity and in nature, we constantly deal with events that cannot be predicted exactly. For example, the sales volume of a product depends on demand, which can vary significantly, and on a number of other factors that are almost impossible to take into account. Therefore, when organizing production and sales, one has to predict the outcome of such activities on the basis of one's own previous experience, the similar experience of others, or intuition, which itself largely relies on experimental data.

To evaluate the event in question in some way, one must take into account, or specially arrange, the conditions under which the event is recorded.

The realization of a certain set of conditions or actions under which the event in question may be observed is called an experiment (or trial).

An event is called random if, as a result of the experiment, it may or may not occur.

An event is called certain if it necessarily occurs as a result of the given experiment, and impossible if it cannot occur in that experiment.

For example, snowfall in Moscow on November 30 is a random event. The daily sunrise can be considered a certain event. Snowfall at the equator can be considered an impossible event.

One of the main tasks of probability theory is to determine a quantitative measure of the possibility of an event occurring.

Algebra of events

Events are called incompatible (mutually exclusive) if they cannot occur together in the same experiment. For example, "exactly two cars on sale in the store" and "exactly three cars on sale in the store" at the same moment are two incompatible events.

The sum of events is the event consisting of the occurrence of at least one of them.

An example of the sum of events is the presence of at least one of two products in the store.

The product of events is the event consisting of the simultaneous occurrence of all of them.

An event consisting of the simultaneous appearance of two goods in a store is the product of two events: the appearance of the first product and the appearance of the second product.

Events form a complete group if at least one of them is certain to occur in the experiment.

Example. The port has two berths for receiving ships. Three events can be considered: - the absence of ships at the berths, - the presence of one ship at one of the berths, - the presence of two ships at two berths. These three events form a complete group of events.

Two uniquely possible events that form a complete group are called opposite.

If one of a pair of opposite events is denoted by A, then the opposite event is usually denoted by Ā.

Classical and statistical definitions of event probability

Each of the equally possible results of a trial (experiment) is called an elementary outcome. They are usually denoted by letters. For example, when a die is thrown, there are six elementary outcomes in total, according to the number of points on the faces.

From elementary outcomes one can form a more complex event. Thus, the event "an even number of points" consists of three outcomes: 2, 4, 6.

A quantitative measure of the possibility of the occurrence of the event in question is probability.

The most widely used definitions of the probability of an event are the classical and the statistical.

The classical definition of probability is associated with the concept of a favorable outcome.

An outcome is called favorable to a given event if its occurrence entails the occurrence of that event.

In the example above, the event in question, an even number of points on the upturned face, has three favorable outcomes, while the total number of possible outcomes is six. So here the classical definition of the probability of an event can be used.

Classical definition. The probability of an event equals the ratio of the number of favorable outcomes to the total number of possible outcomes:

P(A) = m / n,   (1.1)

where P(A) is the probability of the event A, m is the number of outcomes favorable to the event, and n is the total number of possible outcomes.

In the example considered, P(A) = 3/6 = 1/2.
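The classical definition lends itself to a short sketch (Python; the function name is illustrative, not from the text):

```python
from fractions import Fraction

def classical_probability(favorable, outcomes):
    """Classical definition: P(A) = m / n, the share of favorable outcomes."""
    return Fraction(len(list(favorable)), len(list(outcomes)))

outcomes = range(1, 7)                            # six faces of a die
favorable = [k for k in outcomes if k % 2 == 0]   # even number of points: 2, 4, 6
print(classical_probability(favorable, outcomes))  # 1/2
```

Using exact fractions avoids floating-point rounding in such ratio computations.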

The statistical definition of probability is associated with the concept of the relative frequency of occurrence of an event in experiments.

The relative frequency of occurrence of an event is calculated by the formula

W(A) = m / n,

where m is the number of occurrences of the event A in a series of n experiments (trials).

Statistical definition. The probability of an event is the number around which the relative frequency stabilizes (settles) as the number of experiments increases without bound.

In practical problems, the relative frequency over a sufficiently large number of trials is taken as the probability of the event.
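The stabilization of the relative frequency can be observed in a small simulation (Python; a fair coin is assumed for illustration):

```python
import random

def relative_frequency(n_trials, seed=0):
    """Toss a simulated fair coin n_trials times; return the relative frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

# The frequency drifts toward 0.5 as the number of trials grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```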

From these definitions of the probability of an event it is clear that the inequality 0 ≤ P(A) ≤ 1 is always satisfied.

To determine the probability of an event based on formula (1.1), combinatorics formulas are often used, which are used to find the number of favorable outcomes and the total number of possible outcomes.

Probability is a number between 0 and 1 that reflects the chance that a random event will occur: 0 means the event cannot occur, and 1 means the event in question will definitely occur.

The probability of an event E is a number from 0 to 1.
The sum of the probabilities of a complete group of mutually exclusive events is equal to 1.

Empirical probability is a probability calculated as the relative frequency of an event in the past, extracted from the analysis of historical data.

The probability of very rare events cannot be calculated empirically.

Subjective probability is a probability based on a personal assessment of an event, without regard to historical data. Investors who make decisions to buy and sell shares often act on subjective probability.

A priori (prior) probability is the probability assigned to an event before new information about it is taken into account.

Odds express the chance that an event will occur in terms of its probability. The odds in favor of an event are P/(1−P).

For example, if the probability of an event is 0.5, the odds in favor of the event are 0.5/(1−0.5) = 1, that is, 1 to 1.

The odds that an event will not occur are calculated by the formula (1−P)/P.
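The two conversions can be written down directly (Python; the function names are illustrative):

```python
def odds_for(p):
    """Odds in favor of an event: P / (1 - P)."""
    return p / (1 - p)

def odds_against(p):
    """Odds against the event: (1 - P) / P."""
    return (1 - p) / p

print(odds_for(0.5))                 # 1.0 -> odds of 1 to 1
print(round(odds_against(0.2), 2))   # 4.0 -> 4 to 1 against
```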

Inconsistent probabilities arise, for example, when the price of shares of company A reflects a possible event E with probability 85%, while the price of shares of company B reflects it with probability only 50%. According to the Dutch Book Theorem, inconsistent probabilities create profit opportunities.

Unconditional probability answers the question "What is the probability that the event will occur?"

Conditional probability answers the question "What is the probability of event A given that event B has occurred?" Conditional probability is denoted P(A|B).

Joint probability- the probability that events A and B will occur simultaneously. Denoted as P(AB).

P(A|B) = P(AB)/P(B) (1)

P(AB) = P(A|B)*P(B)
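Formulas (1) and the multiplication form can be checked on a toy sample space (Python; the events A and B are illustrative):

```python
# Toy sample space: two tosses of a fair coin.
space = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
A = {s for s in space if s[0] == "H"}    # A: first toss is heads
B = {s for s in space if "H" in s}       # B: at least one head

def prob(event):
    """Classical probability of an event inside the finite sample space."""
    return len(event) / len(space)

p_ab = prob(A & B)             # joint probability P(AB)
p_a_given_b = p_ab / prob(B)   # conditional probability P(A|B) = P(AB)/P(B)
print(p_ab)                    # 0.5
print(round(p_a_given_b, 4))   # 0.6667
```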

The rule for adding probabilities:

The probability that either event A or event B will happen is

P (A or B) = P(A) + P(B) - P(AB) (2)

If events A and B are mutually exclusive, then

P (A or B) = P(A) + P(B)

Independent events- events A and B are independent if

P(A|B) = P(A), P(B|A) = P(B)

That is, independent events form a sequence of results in which the probability does not change from one event to the next.
A coin toss is an example of such an event: the result of each subsequent toss does not depend on the result of the previous one.

Dependent events are events in which the probability of the occurrence of one depends on the occurrence of the other.

The rule for multiplying the probabilities of independent events:
If events A and B are independent, then

P(AB) = P(A) * P(B) (3)
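Rules (2) and (3) can be sketched together (Python; the card-drawing numbers are a standard illustration, not from the text):

```python
def p_union(p_a, p_b, p_ab):
    """Addition rule (2): P(A or B) = P(A) + P(B) - P(AB)."""
    return p_a + p_b - p_ab

def p_joint_independent(p_a, p_b):
    """Multiplication rule (3) for independent events: P(AB) = P(A) * P(B)."""
    return p_a * p_b

# Drawing one card: A = "heart" (13/52), B = "king" (4/52), AB = "king of hearts" (1/52)
print(round(p_union(13 / 52, 4 / 52, 1 / 52), 4))  # 0.3077

# Two independent tosses of a fair coin: P(heads on both)
print(p_joint_independent(0.5, 0.5))               # 0.25
```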

The total probability rule:

P(A) = P(AS) + P(AS′) = P(A|S)P(S) + P(A|S′)P(S′),   (4)

where S and S′ are mutually exclusive events that together exhaust all possibilities.
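Rule (4) can be sketched as follows (Python; the two-machine defect numbers are a hypothetical illustration):

```python
def total_probability(cond_probs, hyp_probs):
    """Total probability rule: P(A) = sum_i P(A|S_i) * P(S_i)."""
    return sum(c * h for c, h in zip(cond_probs, hyp_probs))

# Hypothetical example: two machines make 60% and 40% of all parts,
# with defect rates 1% and 3%; what share of all parts is defective?
p_defect = total_probability([0.01, 0.03], [0.6, 0.4])
print(round(p_defect, 4))  # 0.018
```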

The expected value of a random variable is the probability-weighted average of its possible outcomes. For a random variable X, the expectation is denoted E(X).

Suppose we have 5 values of mutually exclusive outcomes, each with a certain probability (for example, a company's income took a given value with a given probability). The expected value is the sum of all outcomes multiplied by their probabilities:

E(X) = x1·P(x1) + x2·P(x2) + … + x5·P(x5).   (5)

The dispersion (variance) of a random variable is the expectation of the squared deviation of the random variable from its expectation:

σ² = E[(X − E(X))²].   (6)
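Formulas (5) and (6) for a discrete random variable can be sketched as follows (Python; the income figures are hypothetical):

```python
def expected_value(values, probs):
    """Formula (5): E(X) = sum of value * probability over all outcomes."""
    return sum(v * p for v, p in zip(values, probs))

def variance(values, probs):
    """Formula (6): Var(X) = E[(X - E(X))^2]."""
    m = expected_value(values, probs)
    return sum(p * (v - m) ** 2 for v, p in zip(values, probs))

# hypothetical income scenarios: 5 mutually exclusive outcomes
incomes = [100, 200, 300, 400, 500]
probs   = [0.1, 0.2, 0.4, 0.2, 0.1]
print(round(expected_value(incomes, probs), 2))  # 300.0
print(round(variance(incomes, probs), 2))        # 12000.0
```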

Conditional expected value is the expected value of a random variable X given that the event S has already occurred.

Everything in the world happens deterministically or by chance...
Aristotle

Probability: Basic Rules

Probability theory calculates the probabilities of various events. Fundamental to probability theory is the concept of a random event.

For example, you toss a coin; it randomly lands heads or tails, and you don't know in advance which side it will land on. You enter into an insurance contract; you do not know in advance whether payments will be made.

In actuarial calculations, you need to be able to estimate the probability of various events, so probability theory plays a key role. No other branch of mathematics can deal with the probabilities of events.

Let us take a closer look at tossing a coin. There are two mutually exclusive outcomes: heads or tails. The outcome of a toss is random, since the observer cannot analyze and take into account all the factors that influence the result. What is the probability of heads? Most will answer ½, but why?

Let A formally denote the event "heads comes up". Let the coin be tossed n times. Then the probability of the event A can be defined as the proportion of tosses that result in heads:

n(A) / n,   (1)

where n is the total number of tosses and n(A) is the number of times heads came up.

The ratio (1) is called the frequency of the event A in a long series of trials.

It turns out that in various series of trials, for large n, the corresponding frequency clusters around some constant value P(A). This quantity is called the probability of the event A and is denoted by the letter P, an abbreviation of the English word probability.

Formally we have:

P(A) = lim n(A)/n as n → ∞.   (2)

This statement is called the law of large numbers.

If the coin is fair (symmetric), then the probability of heads equals the probability of tails and is ½.

Let A and B be events, for example, whether or not an insured event occurred. The union of two events is the event consisting of the occurrence of event A, of event B, or of both together. The intersection of two events A and B is the event consisting of the occurrence of both event A and event B.

The basic rules of the calculus of event probabilities are as follows:

1. The probability of any event lies between zero and one:

0 ≤ P(A) ≤ 1.

2. Let A and B be two events; then:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).   (3)

This reads: the probability of the union of two events equals the sum of the probabilities of these events minus the probability of their intersection. If the events are incompatible (non-overlapping), the probability of the union (sum) of the two events equals the sum of their probabilities. This law is called the addition law of probabilities.

We say that an event is certain if its probability equals 1. When analyzing certain phenomena, the question arises of how the occurrence of an event B affects the occurrence of an event A. For this, the conditional probability is introduced:

P(A|B) = P(A ∩ B) / P(B).   (4)

It reads: the probability of A occurring given that B has occurred equals the probability of the intersection of A and B divided by the probability of the event B.
Formula (4) assumes that the probability of the event B is above zero.

Formula (4) can also be written as:

P(A ∩ B) = P(A|B) P(B).   (5)

This is the multiplication formula for probabilities.

Conditional probability is also called the a posteriori probability of the event A: the probability of A occurring after the occurrence of B.

The unconditional probability is then called the a priori probability. There are several other important formulas that are used intensively in actuarial calculations.

Total Probability Formula

Suppose an experiment is carried out whose conditions can be described in advance by mutually exclusive assumptions (hypotheses) H1, H2, …, Hn.

We assume that exactly one of the hypotheses holds: either H1, or H2, …, or Hn. The probabilities of these hypotheses are known and equal P(H1), P(H2), …, P(Hn).

Then the total probability formula holds:

P(A) = P(A|H1)P(H1) + P(A|H2)P(H2) + … + P(A|Hn)P(Hn).   (6)

The probability of the occurrence of the event A equals the sum over the hypotheses of the probability of A under each hypothesis multiplied by the probability of that hypothesis.

Bayes formula

Bayes' formula allows the probabilities of the hypotheses to be recalculated in the light of the new information provided by the occurrence of the event A:

P(Hi|A) = P(A|Hi)P(Hi) / P(A),   i = 1, …, n,

where P(A) is given by the total probability formula (6). In a certain sense, Bayes' formula is the inverse of the total probability formula.

Consider the following practical problem.

Problem 1

Suppose a plane crash has occurred and experts are investigating its causes. Four possible causes of the disaster are known in advance: H1, H2, H3, or H4. According to the available statistics, these causes have known probabilities P(H1), P(H2), P(H3), P(H4).

When examining the crash site, traces of fuel ignition were found (event A); according to the statistics, the probability of this event under each of the causes, P(A|Hi), is also known.

Question: what is the most likely cause of the disaster?

Let us calculate the probabilities of the causes given that the event A has occurred:

P(Hi|A) = P(A|Hi)P(Hi) / [P(A|H1)P(H1) + … + P(A|H4)P(H4)].

Comparing these values on the original data shows that the first cause is the most likely, since its posterior probability is the largest.
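This Bayesian update can be sketched as follows (Python; the prior and likelihood numbers below are placeholders chosen for illustration, not the problem's original data):

```python
def posterior(priors, likelihoods):
    """Bayes' formula: P(H_i|A) = P(A|H_i)P(H_i) / sum_j P(A|H_j)P(H_j)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# placeholder inputs: P(H_i) and P(A|H_i) for four possible causes
priors = [0.2, 0.4, 0.3, 0.1]
likelihoods = [0.9, 0.2, 0.1, 0.3]
post = posterior(priors, likelihoods)
most_likely = max(range(len(post)), key=lambda i: post[i])
print(most_likely)  # 0 -> with these placeholder numbers, the first cause wins
```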

Problem 2

Consider an airplane landing at an airfield.

On landing, the weather may be one of two kinds: no low clouds, or low clouds present. In the first case the probability of a safe landing is P1; in the second it is P2. Clearly P1 > P2.

The instruments that provide a blind landing operate without failure with probability R. If there is low cloud cover and the blind-landing instruments have failed, the probability of a successful landing is P3, with P3 < P2. For the given airfield, the proportion of days in the year with low clouds is known; denote it by q.

Find the probability that the plane lands safely.

With low clouds there are two mutually exclusive possibilities: the blind-landing instruments work, or they have failed. Hence, by the total probability formula, the probability of a safe landing is

P = (1 − q)·P1 + q·(R·P2 + (1 − R)·P3).
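The total probability computation for the landing problem can be sketched as follows (Python; the numeric values are illustrative, since the problem statement leaves P1, P2, P3, R and the cloud fraction symbolic):

```python
def p_safe_landing(q, r, p1, p2, p3):
    """Total probability of a safe landing.

    q  - fraction of days with low clouds
    r  - probability the blind-landing instruments work
    p1 - P(safe | no low clouds)
    p2 - P(safe | low clouds, instruments working)
    p3 - P(safe | low clouds, instruments failed)
    """
    return (1 - q) * p1 + q * (r * p2 + (1 - r) * p3)

# illustrative values (not from the original problem)
print(round(p_safe_landing(q=0.3, r=0.99, p1=0.999, p2=0.95, p3=0.4), 5))  # 0.98265
```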

Problem 3

An insurance company provides life insurance. 10% of those insured by this company are smokers. If the insured person does not smoke, the probability of his death during the year is 0.01. If he is a smoker, then this probability is 0.05.

What is the proportion of smokers among those insured who died during the year?

Possible answers: (A) 5%, (B) 20%, (C) 36%, (D) 56%, (E) 90%.

Solution

Introduce the events S = "the insured is a smoker" and D = "the insured died during the year".

The conditions of the problem mean that P(S) = 0.1, P(D|S) = 0.05, and P(D|S′) = 0.01.

In addition, since the events S and S′ form a complete group of pairwise incompatible events, P(S′) = 1 − P(S) = 0.9.
The probability we are interested in is P(S|D).

Using Bayes' formula, we have:

P(S|D) = P(D|S)P(S) / [P(D|S)P(S) + P(D|S′)P(S′)] = 0.005 / (0.005 + 0.009) ≈ 0.357,

therefore the correct option is (C).
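The arithmetic of this solution checks out in a few lines (Python; variable names are illustrative):

```python
p_smoker = 0.10                  # P(S)
p_death_given_smoker = 0.05      # P(D|S)
p_death_given_nonsmoker = 0.01   # P(D|S')

# total probability of death during the year, then Bayes' formula
p_death = p_death_given_smoker * p_smoker + p_death_given_nonsmoker * (1 - p_smoker)
p_smoker_given_death = p_death_given_smoker * p_smoker / p_death
print(round(p_smoker_given_death, 3))  # 0.357 -> about 36%, option (C)
```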

Problem 4

The insurance company sells life insurance contracts in three categories: standard, preferred and ultra-privileged.

50% of all insured are standard, 40% are preferred and 10% are ultra-privileged.

The probability of death within a year for a standard insured is 0.010, for a preferred one 0.005, and for an ultra-privileged one 0.001.

What is the probability that the deceased insured is ultra-privileged?

Solution

Introduce the events S = "the insured is standard", R = "the insured is preferred", U = "the insured is ultra-privileged", and D = "the insured died within the year".

In terms of these events, the probability we are interested in is P(U|D). By the conditions:

P(S) = 0.50, P(R) = 0.40, P(U) = 0.10,
P(D|S) = 0.010, P(D|R) = 0.005, P(D|U) = 0.001.

Since the events S, R, U form a complete group of pairwise incompatible events, using Bayes' formula we have:

P(U|D) = P(D|U)P(U) / [P(D|S)P(S) + P(D|R)P(R) + P(D|U)P(U)] = 0.0001 / 0.0071 ≈ 0.0141.
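The same computation, sketched in Python (dictionary keys are illustrative labels):

```python
priors = {"standard": 0.50, "preferred": 0.40, "ultra": 0.10}   # P(category)
death  = {"standard": 0.010, "preferred": 0.005, "ultra": 0.001}  # P(D|category)

p_d = sum(priors[c] * death[c] for c in priors)            # total probability P(D)
p_ultra_given_d = priors["ultra"] * death["ultra"] / p_d   # Bayes' formula
print(round(p_d, 4))              # 0.0071
print(round(p_ultra_given_d, 4))  # 0.0141
```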

Random variables and their characteristics

Let ξ be some random variable, for example, the damage from a fire or the amount of insurance payments.
A random variable is completely characterized by its distribution function.

Definition. The function F(x) = P(ξ < x) is called the distribution function of the random variable ξ.

Definition. If there is a function f(x) such that for arbitrary a

F(a) = ∫ from −∞ to a of f(x) dx,

then the random variable ξ is said to have a probability density function f(x).

Definition. Let 0 < α < 1. For a continuous distribution function F, a theoretical α-quantile is a solution x_α of the equation F(x_α) = α.

This solution may not be unique.

The quantile of level ½ is called the theoretical median; the quantiles of levels ¼ and ¾ are called the lower and upper quartiles, respectively.

In actuarial applications, Chebyshev's inequality plays an important role:

P(|ξ| ≥ t) ≤ E|ξ| / t   for any t > 0,

where E is the symbol of mathematical expectation.

It reads: the probability that the modulus of ξ is greater than or equal to t does not exceed the mathematical expectation of the modulus divided by t.
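The inequality can be checked empirically on a simulated sample (Python; the Exp(1) distribution is chosen for illustration because its mean is known to be 1):

```python
import random

# Empirical check of the bound P(|X| >= t) <= E|X| / t
rng = random.Random(1)
sample = [rng.expovariate(1.0) for _ in range(100_000)]  # Exp(1): E|X| = 1

t = 5.0
tail = sum(x >= t for x in sample) / len(sample)  # empirical P(X >= t)
bound = 1.0 / t                                   # theoretical bound E|X| / t
print(tail <= bound)  # True: the empirical tail respects the bound
```

For this distribution the true tail probability, e^−5 ≈ 0.007, is far below the bound 0.2, which illustrates that the inequality is valid but often loose.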

Lifetime as a random variable

The uncertainty of the moment of death is a major risk factor in life insurance.

Nothing definite can be said about the moment of death of an individual. However, if we are dealing with a large homogeneous group of people and are not interested in the fate of individual people from this group, then we are within the framework of probability theory as the science of mass random phenomena that have the property of frequency stability.

Accordingly, we can speak of lifetime as a random variable T.

Survival function

Probability theory describes the stochastic nature of any random variable T by its distribution function F(x), defined as the probability that the random variable T is less than the number x:

F(x) = P(T < x).

In actuarial mathematics it is convenient to work not with the distribution function but with the complementary distribution function

s(x) = 1 − F(x) = P(T ≥ x).

In terms of longevity, this is the probability that a person will live to age x. The function s(x) is called the survival function.

The survival function has the following properties: s(0) = 1; s(x) does not increase with x; s(x) → 0 as x → ∞.

Life tables usually assume that there is some limiting age ω, so that s(x) = 0 for x > ω.

When mortality is described by analytical laws, it is usually assumed that lifetime is unbounded, but the form and parameters of the laws are chosen so that the probability of surviving beyond a certain age is negligibly small.

The survival function has a simple statistical meaning.

Suppose we observe a group of newborns of initial size l0 and can record the moments of their death.

Let lx denote the number of living representatives of this group at age x. Then:

s(x) = E(lx) / l0.

The symbol E here and below denotes mathematical expectation.

So, the survival function equals the expected proportion of those who survive to age x out of a fixed group of newborns.

In actuarial mathematics one often works not with the survival function itself but with the quantity lx just introduced (for a fixed initial group size l0).

The survival function can be recovered from the density f(x):

s(x) = ∫ from x to ∞ of f(t) dt.
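As a sketch, consider an exponential lifetime model, for which this integral has a closed form (Python; the hazard rate MU is an illustrative assumption, not a value from the text):

```python
import math

MU = 0.0125  # illustrative constant hazard rate (assumption)

def density(x):
    """Exponential lifetime density f(x) = mu * exp(-mu * x)."""
    return MU * math.exp(-MU * x)

def survival(x):
    """s(x) = integral of f(t) dt from x to infinity = exp(-mu * x)."""
    return math.exp(-MU * x)

print(survival(0))             # 1.0
print(round(survival(80), 3))  # 0.368
```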

Lifespan characteristics

From a practical point of view, the following characteristics are important:

1. The average lifetime

E(T) = ∫ from 0 to ∞ of x f(x) dx = ∫ from 0 to ∞ of s(x) dx,

2. The dispersion (variance) of the lifetime

Var(T) = E(T²) − (E(T))²,

where

E(T²) = ∫ from 0 to ∞ of x² f(x) dx = 2 ∫ from 0 to ∞ of x s(x) dx.
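These integrals can be evaluated numerically for a concrete model (Python; the exponential hazard MU is an illustrative assumption, for which the exact answers E(T) = 1/MU and Var(T) = 1/MU² are known):

```python
import math

MU = 0.02  # illustrative exponential hazard; exact answers: E(T) = 50, Var(T) = 2500

def s(x):
    """Survival function of an Exp(MU) lifetime."""
    return math.exp(-MU * x)

def integrate(g, a, b, n=100_000):
    """Midpoint-rule quadrature of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

mean = integrate(s, 0.0, 2000.0)                               # E(T) = integral of s(x)
second_moment = 2.0 * integrate(lambda x: x * s(x), 0.0, 2000.0)
var = second_moment - mean ** 2                                # Var(T) = E(T^2) - E(T)^2
print(round(mean, 2), round(var, 1))  # ≈ 50.0 and ≈ 2500.0
```

Truncating the upper limit at 2000 is safe here because s(2000) = e^−40 is negligible.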

PROBABILITY as an ontological category reflects the extent of the possibility of the emergence of any entity under any conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not carry an obligation of quantitative expression. Its meaning is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

A concept characterizing a quantitative measure of the possibility of the occurrence of a certain event under certain conditions. In scientific knowledge there are three interpretations of probability. The classical concept of probability, which arose from the mathematical analysis of gambling and was most fully developed by B. Pascal, J. Bernoulli, and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible ones. For example, when throwing a die with 6 faces, each face can be expected to land up with probability 1/6, since no face has an advantage over any other. Such symmetry of experimental outcomes is specially arranged when organizing games, but is relatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical concept of probability, which is based on actually observing the occurrence of a certain event over the course of lengthy experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of objective possibility of its occurrence, or probability. Therefore the statistical interpretation of probability rests on the concept of relative frequency, which can be determined experimentally. Probability as a theoretical concept never coincides exactly with the empirically determined frequency; however, in many cases it differs little in practice from the relative frequency found as a result of lengthy observations. Many statisticians regard probability as a "double" of the relative frequency determined by the statistical study of the results of observations or experiments. Less realistic was the definition of probability as the limit of the relative frequencies of mass events, or collectives, proposed by R. Mises. As a further development of the frequency approach, a dispositional, or propensity, interpretation of probability has been put forward (K. Popper, J. Hacking, M. Bunge, T. Settle). According to this interpretation, probability characterizes the property of the generating conditions, for example an experimental setup, to produce a sequence of mass random events. It is precisely this disposition that gives rise to physical propensities, or predispositions, which can be checked by means of relative frequencies.

The statistical interpretation of probability dominates scientific cognition, because it reflects the specific nature of the patterns inherent in mass phenomena of a random character. In many physical, biological, economic, demographic, and other social processes it is necessary to take into account the action of many random factors characterized by a stable frequency. Identifying these stable frequencies and assessing them quantitatively by means of probability makes it possible to reveal the necessity that makes its way through the cumulative action of many accidents. This is where the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Works, vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion, but only make it more or less plausible. This plausibility, with precisely formulated premises, can sometimes be assessed by means of probability. The value of this probability is most often determined by comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to construct various systems of probabilistic logic (R. Carnap, R. Jeffrey). In the semantic conceptions of logical probability, probability is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision-making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the degree of a subject's belief in the occurrence of a certain event, the probabilities themselves must be chosen so that the axioms of the probability calculus are satisfied. Therefore probability on this interpretation expresses not so much subjective as reasonable belief. Consequently, decisions made on the basis of such probabilities will be rational, because they do not depend on the psychological characteristics and inclinations of the subject.

From the epistemological point of view, the difference between the statistical, logical, and personalistic interpretations of probability is that the first characterizes the objective properties and relations of mass phenomena of a random nature, while the latter two analyze the features of subjective, cognitive human activity under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution and knowledge. The specificity of the probabilistic view of the world is revealed through the inclusion of the concepts of randomness, independence and hierarchy (the idea of ​​levels in the structure and determination of systems) among the basic concepts of existence.

Ideas about probability originated in ancient times and related to the characteristics of our knowledge; the existence of probabilistic knowledge was recognized as differing both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking and on the development of knowledge is directly related to the development of probability theory as a mathematical discipline. The origin of the mathematical doctrine of probability dates back to the 17th century, when a core of concepts admitting quantitative (numerical) characterization and expressing the probabilistic idea was developed.

Intensive applications of probability to the development of cognition occurred in the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability personifies that stage in the development of science which is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its numerous applications. Probability theory is usually defined as a mathematical discipline that studies the patterns of mass random phenomena under certain conditions. Randomness means that, within the framework of mass character, the existence of each elementary phenomenon does not depend on and is not determined by the existence of the other phenomena. At the same time, the mass nature of the phenomena itself has a stable structure and contains certain regularities. A mass phenomenon is quite strictly divided into subsystems, and the relative number of elementary phenomena in each of the subsystems (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, that is, by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical patterns and statistical systems. The latter are systems formed from independent or quasi-independent entities; their structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that for the formation of systems with integral characteristics, it is necessary that sufficiently stable connections exist between their elements that cement the systems. Stability of statistical systems is given by the presence of external conditions, external environment, external rather than internal forces. The very definition of probability is always based on setting the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of ​​hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter, as it were, are built on top of the former.

The importance of probabilistic methods in cognition lies in the fact that they make it possible to study and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, “two-level” structure.

Analysis of the nature of probability is based on its frequency, statistical interpretation. At the same time, for a very long time, such an understanding of probability dominated in science, which was called logical, or inductive, probability. Logical probability is interested in questions of the validity of a separate, individual judgment under certain conditions. Is it possible to evaluate the degree of confirmation (reliability, truth) of an inductive conclusion (hypothetical conclusion) in quantitative form? During the development of probability theory, such questions were repeatedly discussed, and they began to talk about the degrees of confirmation of hypothetical conclusions. This measure of probability is determined by the information available to a given person, his experience, views on the world and psychological mindset. In all such cases, the magnitude of probability is not amenable to strict measurements and practically lies outside the competence of probability theory as a consistent mathematical discipline.

The objective, frequentist interpretation of probability was established in science with significant difficulties. Initially, the understanding of the nature of probability was strongly influenced by those philosophical and methodological views that were characteristic of classical science. Historically, the development of probabilistic methods in physics occurred under the determining influence of the ideas of mechanics: statistical systems were interpreted simply as mechanical. Since the corresponding problems were not solved by strict methods of mechanics, assertions arose that turning to probabilistic methods and statistical laws is the result of the incompleteness of our knowledge. In the history of the development of classical statistical physics, numerous attempts were made to substantiate it on the basis of classical mechanics, but they all failed. The basis of probability is that it expresses the structural features of a certain class of systems, other than mechanical systems: the state of the elements of these systems is characterized by instability and a special (not reducible to mechanics) nature of interactions.

The entry of probability into knowledge leads to the denial of the concept of hard determinism, to the denial of the basic model of being and knowledge developed in the process of the formation of classical science. The basic models represented by statistical theories are of a different, more general nature: they include the ideas of randomness and independence. The idea of ​​probability is associated with the disclosure of the internal dynamics of objects and systems, which cannot be entirely determined by external conditions and circumstances.

The concept of a probabilistic vision of the world, based on the absolutization of ideas about independence (as the paradigm of rigid determinism was before it), has now revealed its limitations, which is most strongly reflected in the transition of modern science to analytical methods for studying complex systems and to the physical and mathematical foundations of self-organization phenomena.


When a coin is tossed, we can say that the probability it lands heads up is 1/2. Of course, this does not mean that if a coin is tossed 10 times, it will necessarily land heads 5 times. If the coin is "fair" and it is tossed many times, then heads will come up very close to half the time. Thus, there are two types of probability: experimental and theoretical.

Experimental and theoretical probability

If we flip a coin a large number of times (say, 1000) and count how many times it lands heads, we can estimate the probability that it lands heads. If heads came up 503 times, we calculate the probability of landing heads as
503/1000, or 0.503.

This is the experimental definition of probability. This definition of probability comes from observing and studying data; it is quite common and very useful. Here, for example, are some probabilities that were determined experimentally:

1. The probability that a woman will develop breast cancer is 1/11.

2. If you kiss someone who has a cold, then the probability that you will also get a cold is 0.07.

3. A person who has just been released from prison has an 80% chance of returning to prison.

If we consider tossing a coin and note that heads and tails are equally likely, we can calculate the probability of getting heads: 1/2. This is the theoretical definition of probability. Here are some other probabilities that have been determined theoretically, using mathematics:

1. If there are 30 people in a room, the probability that two of them share a birthday (ignoring the year) is 0.706.

2. During a trip you meet someone, and in conversation you discover that you have a mutual friend. A typical reaction: "This can't be!" In fact, this reaction is unwarranted, because the probability of such an event is quite high: just over 22%.

Thus, experimental probabilities are determined through observation and data collection, while theoretical probabilities are determined through mathematical reasoning. Examples of experimental and theoretical probabilities, such as those discussed above, and especially those we do not expect, lead us to the importance of studying probability. You may ask, "What is the true probability?" In fact, there is no such thing. Probabilities within certain limits can be determined experimentally; they may or may not coincide with the probabilities we obtain theoretically. There are situations in which one type of probability is much easier to determine than the other. For example, the probability of catching a cold could hardly be derived theoretically; it has to be estimated experimentally.

Calculation of experimental probabilities

Let us first consider the experimental definition of probability. The basic principle we use to calculate such probabilities is as follows.

Principle P (experimental)

If, in an experiment with n observations, a situation or event E occurs m times, then the experimental probability of the event is said to be P(E) = m/n.
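Principle P translates directly into code. The following Python sketch (the function name is our own) computes an experimental probability from counts:

```python
def experimental_probability(m, n):
    """Experimental probability P(E) = m/n: event E occurred m times in n observations."""
    if n <= 0:
        raise ValueError("need at least one observation")
    return m / n

# The coin from the earlier example: heads came up 503 times in 1000 tosses
print(experimental_probability(503, 1000))  # 0.503
```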

Example 1 Sociological survey. An experimental study was conducted to determine the number of left-handed people, right-handed people and people whose both hands are equally developed. The results are shown in the graph.

a) Determine the probability that the person is right-handed.

b) Determine the probability that the person is left-handed.

c) Determine the probability that a person is ambidextrous (uses both hands equally well).

d) Most Professional Bowling Association tournaments are limited to 120 players. Based on the data from this experiment, how many players could be left-handed?

Solution

a) The number of people who are right-handed is 82, the number who are left-handed is 17, and the number who are ambidextrous is 1. The total number of observations is 100. Thus, the probability that a person is right-handed is P, where
P = 82/100, or 0.82, or 82%.

b) The probability that a person is left-handed is P, where
P = 17/100, or 0.17, or 17%.

c) The probability that a person is ambidextrous is P, where
P = 1/100, or 0.01, or 1%.

d) There are 120 bowlers, and from (b) we can expect 17% of them to be left-handed. Hence
17% of 120 = 0.17 × 120 = 20.4,
that is, we can expect about 20 players to be left-handed.
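The whole example can be checked with a few lines of Python, using the survey counts given above:

```python
# Survey counts from the example: 82 right-handed, 17 left-handed, 1 ambidextrous
counts = {"right-handed": 82, "left-handed": 17, "ambidextrous": 1}
total = sum(counts.values())  # 100 observations in all

probabilities = {group: m / total for group, m in counts.items()}
print(probabilities)  # {'right-handed': 0.82, 'left-handed': 0.17, 'ambidextrous': 0.01}

# Part (d): expected number of left-handed players among 120 bowlers
expected_lefties = probabilities["left-handed"] * 120
print(round(expected_lefties, 1))  # 20.4, i.e. about 20 players
```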

Example 2 Quality control . It is very important for a manufacturer to keep the quality of its products at a high level. In fact, companies hire quality control inspectors to ensure this process. The goal is to produce the minimum possible number of defective products. But since the company produces thousands of products every day, it cannot afford to test every product to determine whether it is defective or not. To find out what percentage of products are defective, the company tests far fewer products.
The USDA requires that 80% of the seeds a grower sells must germinate. To determine the quality of the seeds an agricultural company produces, 500 of the seeds it produced were planted, and it was found that 417 of them germinated.

a) What is the probability that the seed will germinate?

b) Do the seeds meet government standards?

Solution a) We know that of the 500 seeds that were planted, 417 germinated. The probability of a seed germinating is P, where
P = 417/500 = 0.834, or 83.4%.

b) Since the percentage of germinated seeds, 83.4%, exceeds the required 80%, the seeds meet the government standard.
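The same check in Python, using the numbers from the example:

```python
germinated, planted = 417, 500
p_germinate = germinated / planted
print(p_germinate)  # 0.834, i.e. 83.4%

# The standard requires at least 80% germination
meets_standard = p_germinate >= 0.80
print(meets_standard)  # True
```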

Example 3 Television ratings. According to statistics, there are 105,500,000 households with televisions in the United States. Every week, viewing information is collected and processed. In one week, 7,815,000 households tuned in to the hit comedy series "Everybody Loves Raymond" on CBS, and 8,302,000 households tuned in to the hit series "Law & Order" on NBC (Source: Nielsen Media Research). What is the probability that a given household's TV was tuned to "Everybody Loves Raymond" during that week? To "Law & Order"?

Solution The probability that the TV in one household is tuned to "Everybody Loves Raymond" is P, and
P = 7,815,000/105,500,000 ≈ 0.074 ≈ 7.4%.
The probability that a household's TV was tuned to "Law & Order" is P, and
P = 8,302,000/105,500,000 ≈ 0.079 ≈ 7.9%.
These percentages are called ratings.
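Ratings of this kind are simply experimental probabilities over the population of TV households. A Python sketch (the function name is our own):

```python
TV_HOUSEHOLDS = 105_500_000  # total US households with televisions

def rating(viewers, total=TV_HOUSEHOLDS):
    """Share of all TV households tuned to a program."""
    return viewers / total

print(round(rating(7_815_000), 3))  # 0.074 for "Everybody Loves Raymond"
print(round(rating(8_302_000), 3))  # 0.079 for "Law & Order"
```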

Theoretical probability

Suppose we are conducting an experiment, such as tossing a coin, throwing darts, drawing a card from a deck, or testing products for quality on an assembly line. Each possible result of such an experiment is called an outcome. The set of all possible outcomes is called the outcome space. An event is a set of outcomes, that is, a subset of the outcome space.

Example 4 Throwing darts. Suppose that in a dart-throwing experiment a dart hits the target. Find each of the following:

a) The outcomes

b) The outcome space

Solution
a) The outcomes are: hitting black (B), hitting red (R) and hitting white (W).

b) The outcome space is {hitting black, hitting red, hitting white}, which can be written simply as {B, R, W}.

Example 5 Throwing dice. A die is a cube with six sides, each with one to six dots on it.


Suppose we are throwing a die. Find
a) Outcomes
b) Outcome space

Solution
a) Outcomes: 1, 2, 3, 4, 5, 6.
b) The outcome space is {1, 2, 3, 4, 5, 6}.

We denote the probability that an event E occurs as P(E). For example, "the coin lands heads" can be denoted by H; then P(H) is the probability that the coin lands heads. When all outcomes of an experiment have the same probability of occurring, they are said to be equally likely. To see the difference between events that are equally likely and events that are not, consider the target shown below.

For target A, the events of hitting black, red, and white are equally likely, since the black, red, and white sectors all have the same area. For target B, however, the zones of these colors are not equal in area, so hitting them is not equally likely.

Principle P (Theoretical)

If an event E can occur in m ways out of n possible equally likely outcomes in the outcome space S, then the theoretical probability of the event is
P(E) = m/n.

Example 6 What is the probability of rolling a 3 on a die?

Solution A die has 6 equally likely outcomes, and there is only one way to roll a 3. By Principle P, the probability is P(3) = 1/6.

Example 7 What is the probability of rolling an even number on a die?

Solution The event is rolling an even number. This can happen in 3 ways (rolling a 2, 4, or 6). The number of equally likely outcomes is 6, so P(even) = 3/6, or 1/2.
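Principle P keeps the fractions exact if we use Python's `Fraction` type. This sketch (the function name is our own) reproduces Examples 6 and 7:

```python
from fractions import Fraction

def theoretical_probability(favorable, total):
    """P(E) = m/n for m favorable outcomes out of n equally likely ones."""
    return Fraction(favorable, total)  # Fraction reduces automatically

print(theoretical_probability(1, 6))  # rolling a 3: 1/6
print(theoretical_probability(3, 6))  # rolling an even number: 1/2
```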

We will use a number of examples involving a standard 52-card deck, which consists of the cards shown in the figure below.

Example 8 What is the probability of drawing an Ace from a well-shuffled deck of cards?

Solution There are 52 equally likely outcomes (the number of cards in the deck, provided the deck is well shuffled), and there are 4 ways to draw an ace, so by Principle P the probability is
P(drawing an ace) = 4/52, or 1/13.

Example 9 Suppose we choose, without looking, one ball from a bag with 3 red balls and 4 green balls. What is the probability of choosing a red ball?

Solution There are 7 equally likely outcomes for drawing a ball, and since the number of ways to draw a red ball is 3, we get
P(drawing a red ball) = 3/7.

The following statements are consequences of Principle P.

Properties of Probability

a) If event E cannot happen, then P(E) = 0.
b) If event E is certain to happen, then P(E) = 1.
c) The probability that event E will occur is a number from 0 to 1: 0 ≤ P(E) ≤ 1.

For example, in a coin toss the event that the coin lands on its edge has probability 0, while the event that the coin lands either heads or tails has probability 1.

Example 10 Suppose 2 cards are drawn from a 52-card deck. What is the probability that both of them are spades?

Solution The number n of ways to draw 2 cards from a well-shuffled 52-card deck is C(52, 2). Since 13 of the 52 cards are spades, the number of ways m to draw 2 spades is C(13, 2). Then
P(drawing 2 spades) = m/n = C(13, 2)/C(52, 2) = 78/1326 = 1/17.
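The combinations can be evaluated with Python's `math.comb`, giving a quick check of the arithmetic:

```python
from fractions import Fraction
from math import comb

# m = C(13, 2) ways to draw 2 spades, n = C(52, 2) two-card hands
p_two_spades = Fraction(comb(13, 2), comb(52, 2))
print(comb(13, 2), comb(52, 2), p_two_spades)  # 78 1326 1/17
```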

Example 11 Suppose 3 people are randomly selected from a group of 6 men and 4 women. What is the probability that 1 man and 2 women will be selected?

Solution The number of ways to select 3 people from a group of 10 is C(10, 3). One man can be chosen in C(6, 1) ways, and 2 women can be chosen in C(4, 2) ways. By the fundamental counting principle, the number of ways to choose 1 man and 2 women is C(6, 1) · C(4, 2). Then the probability that 1 man and 2 women will be selected is
P = C(6, 1) · C(4, 2)/C(10, 3) = 36/120 = 3/10.
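A quick check of this count in Python:

```python
from fractions import Fraction
from math import comb

# C(6, 1) ways to choose the man times C(4, 2) ways to choose the women,
# over C(10, 3) ways to choose any 3 people from the group of 10
p_one_man_two_women = Fraction(comb(6, 1) * comb(4, 2), comb(10, 3))
print(p_one_man_two_women)  # 3/10
```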

Example 12 Throwing dice. What is the probability of rolling a total of 8 on two dice?

Solution Each die has 6 possible outcomes, and the dice are independent, so there are 6 × 6 = 36 possible ways in which the numbers on the two dice can come up. (It helps to imagine the dice as distinguishable, say one red and one blue; this makes the outcomes easier to visualize.)

The pairs of numbers that add up to 8 are shown in the figure below. There are 5 possible ways to obtain a sum equal to 8, hence the probability is 5/36.
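The 36 outcomes can simply be enumerated in Python; treating the dice as a red one and a blue one corresponds to using ordered pairs:

```python
from fractions import Fraction

# All 36 equally likely ordered outcomes (red die, blue die)
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
favorable = [pair for pair in outcomes if sum(pair) == 8]
print(favorable)  # [(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)]
print(Fraction(len(favorable), len(outcomes)))  # 5/36
```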