MATH 302 Probability


Negative Binomial Probability

-Calculates the probability that X = x trials are needed to obtain k successes. A negative binomial experiment is a statistical experiment that has the following properties:
• The experiment consists of x repeated trials.
• Each trial can result in just two possible outcomes. We call one of these outcomes a success and the other, a failure.
• The probability of success, denoted by P, is the same on every trial.
• The trials are independent; that is, the outcome on one trial does not affect the outcome on other trials.
• The experiment continues until k successes are observed, where k is specified in advance.
• Negative Binomial Formula. Suppose a negative binomial experiment consists of x trials and results in k successes. If the probability of success on an individual trial is P, then the negative binomial probability is: b*(x; k, P) = (x-1)C(k-1) * P^k * (1 - P)^(x-k)
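
A minimal sketch of this formula in Python; the function name and the coin-flip numbers are illustrative, not from the card:

```python
from math import comb

def neg_binomial_pmf(x: int, k: int, p: float) -> float:
    """P(X = x): probability that the k-th success occurs on trial x."""
    return comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

# Probability the 3rd head appears on the 5th fair-coin flip:
print(neg_binomial_pmf(5, 3, 0.5))  # C(4,2) * 0.5^3 * 0.5^2 = 6/32 = 0.1875
```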

Chebyshev Inequality

-The probability that X deviates from the mean by more than "e" is less than or equal to some value = (Variance/e^2): P(|X - μ| >= e) <= Var(X)/e^2. Equivalently, the probability/proportion of the R.V. X that lies outside the interval μ ± kσ is less than or equal to 1/k^2 (k = e/SD).
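
A quick numerical check of the bound, assuming Python with NumPy; the Exponential(1) sample is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)  # mean 1, variance 1

e = 2.0
empirical = np.mean(np.abs(x - x.mean()) >= e)  # observed tail proportion
bound = x.var() / e**2                          # Chebyshev: Var/e^2
print(empirical, bound)  # the empirical probability stays below the bound
```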

Variance

-The variance is a numerical value used to indicate how widely individuals in a group vary from the mean. If individual observations vary greatly from the group mean, the variance is big, and vice versa. -The variance measures how far each number in the set deviates from the mean. Variance is calculated by taking the difference between each number in the set and the mean, squaring the differences (to make them positive), and dividing the sum of the squares by the number of values in the set.
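
A short worked example in Python using a fair die (the pmf-weighted version of the same calculation, since all six values are equally likely):

```python
# Fair six-sided die: values 1..6, each with probability 1/6.
values = range(1, 7)
p = 1 / 6

mean = sum(x * p for x in values)                    # E(X) = 3.5
variance = sum((x - mean) ** 2 * p for x in values)  # E[(X - mu)^2] ~ 2.9167
print(mean, variance)
```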

joint distribution (discrete+continuous)

-For continuous R.V.s it is a density function/distribution that gives the probability of two random variables jointly. If X and Y are independent, the joint density is the product of the marginal densities fX and fY. In a joint probability distribution table, the numbers in the cells of the table represent the probability that particular values of X and Y occur together; for example, a table might show that the probability that X=0 and Y=3 is 0.1, the probability that X=1 and Y=3 is 0.2, and so on.
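
A sketch of such a table and the independence check in Python. Only the two cell values quoted above come from the card; the other cells are invented so everything sums to 1:

```python
# Hypothetical joint pmf table (only the (0,3) and (1,3) cells are from the card).
joint = {
    (0, 3): 0.1, (1, 3): 0.2,
    (0, 4): 0.3, (1, 4): 0.4,
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

# Marginal pmfs of X and Y
px = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (3, 4)}

# Independence requires the joint pmf to factor into the marginals everywhere.
independent = all(abs(joint[(x, y)] - px[x] * py[y]) < 1e-12
                  for x in (0, 1) for y in (3, 4))
print(independent)  # False for this table
```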

Permutation

-It is an extension of the MULTIPLICATION rule, except you are only allowed to choose k values from a sample of size n. A permutation counts how many ways/outcomes you can arrange, with respect to order, these k values from the sample of n. -A permutation of a set A is a one-to-one mapping of A onto itself. Let 1 <= k <= n; a k-permutation of A is an ordered listing of a subset of A of size k. -nPr = n! / (n - r)!
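
In Python, math.perm computes nPr directly; a small sanity check against the factorial formula (the n = 5, k = 3 values are illustrative):

```python
from math import perm, factorial

n, k = 5, 3
print(perm(n, k))                        # 60 ordered arrangements
print(factorial(n) // factorial(n - k))  # same value via n!/(n-k)!
```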

cumulative distribution function

A cumulative distribution function F(b) gives P(X <= b), where b = some value. For a continuous R.V. it is the summation/cumulation of all the probability under the PDF up to b, i.e. the integral of the pdf over (-infinity, b]. It refers to the probability that the value of a random variable falls at or below a specified value. In probability theory and statistics, the cumulative distribution function (CDF), or just distribution function, describes the probability that a real-valued random variable X with a given probability distribution will be found to have a value less than or equal to x.
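
A numeric illustration, assuming Python with NumPy: integrating an Exponential(1) pdf up to b reproduces its known CDF, F(b) = 1 - e^(-b). (np.trapezoid is called np.trapz on older NumPy.)

```python
import numpy as np

f = lambda t: np.exp(-t)  # Exponential(1) pdf

b = 2.0
t = np.linspace(0.0, b, 10_001)      # support starts at 0, so (-inf, b] becomes [0, b]
cdf_numeric = np.trapezoid(f(t), t)  # accumulated area under the pdf up to b
print(cdf_numeric, 1 - np.exp(-b))   # both ~ 0.8647
```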

Distribution

A probability distribution is a table or an equation/function that links each value a random variable can assume with its probability of occurrence (pmf). For a continuous variable, it is a function whose integral over any interval is the probability that the random variable will lie within that interval. A discrete distribution is characterized by the PMF of the values X can take (basically the PMF value for each possible value of X); the pmf values must sum to 1.

convergence in distribution

A sequence of random variables X1, ..., Xn converges weakly to the random variable X if their respective cumulative distribution functions F1, F2, ..., Fn converge to the cumulative distribution function FX at every point where FX is continuous. Weak convergence is also called convergence in distribution: Xn -(D)-> X

Bayes Theorem (improve)

An extension of the law of total probability and conditional probability. P(B) is the sum over all the events P(B n Ai). You are looking for P(A|B), but there are multiple branches that can lead to B. You are given that B happened, yet B can occur along every branch, so you must divide P(A n B) by the TOTAL probability of B over all branches: P(Ai|B) = P(B|Ai)P(Ai) / Sum_j P(B|Aj)P(Aj). AKA you are given the result of the second stage and must use it to calculate the probability of the first stage.
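
A hedged sketch of the two-branch computation in Python; the prior and likelihood numbers are invented for illustration:

```python
# Two branches A1, A2 partition the sample space; we observe B.
p_a = {"A1": 0.3, "A2": 0.7}          # prior probabilities (assumed numbers)
p_b_given_a = {"A1": 0.9, "A2": 0.2}  # likelihood of B along each branch

# Law of total probability: P(B) = sum of P(B|Ai) * P(Ai)
p_b = sum(p_b_given_a[a] * p_a[a] for a in p_a)

# Bayes: P(A1|B) = P(B|A1) * P(A1) / P(B)
posterior = p_b_given_a["A1"] * p_a["A1"] / p_b
print(posterior)  # ~ 0.659
```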

Bernoulli Trials

A BERNOULLI trial is a random experiment with exactly two possible outcomes, "success" (X=1) and "failure" (X=0), in which the probability of success, p, is the same every time the experiment is conducted.[1] P(X=1) = p, P(X=0) = 1 - p

Continuous Random Variable

Continuous variables, in contrast, can take on an infinite number of values within a range of values (an uncountable sample space). The probability of any single value is 0, but the total probability (the integral of the density) is 1. AKA a variable that can take on any value between two specified values. For example, suppose we randomly select an individual from a population. Then, we measure the age of that person. In theory, his/her age can take on any value between zero and plus infinity, so age is a continuous variable. In this example, the age of the person selected is determined by a chance event; so, in this example, age is a continuous random variable.

moment generating functions

M_X(t) = E(e^(tX)). Differentiating k times and evaluating at t = 0 generates the k-th moment: μk = E(X^k) = M_X^(k)(0).
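
A sketch using SymPy (assumed available): differentiate the Bernoulli(1/2) MGF k times at t = 0 to generate the moments. Since X^k = X for a 0/1 variable, every moment should equal p = 1/2:

```python
import sympy as sp

t = sp.symbols('t')
p = sp.Rational(1, 2)

# MGF of a Bernoulli(p) variable: E(e^{tX}) = (1-p) + p*e^t
M = (1 - p) + p * sp.exp(t)

# k-th derivative at t = 0 gives the k-th moment E(X^k)
for k in (1, 2, 3):
    print(k, sp.diff(M, t, k).subs(t, 0))  # all print 1/2
```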

Hypergeometric

Gives the probability that an experiment yields X = k successes when you draw n items WITHOUT replacement from a population of N items containing K successes: P(X = k) = KCk * (N-K)C(n-k) / NCn.
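
A minimal Python version of this pmf; the deck-of-cards numbers are illustrative:

```python
from math import comb

def hypergeom_pmf(k: int, N: int, K: int, n: int) -> float:
    """P(X = k) successes in n draws without replacement from a
    population of N items containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Probability of exactly 2 aces in a 5-card hand from a 52-card deck:
print(hypergeom_pmf(2, 52, 4, 5))  # ~ 0.0399
```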

Probability Mass Function

In probability theory and statistics, a probability mass function (pmf) is a function that gives the probability that a discrete random variable is exactly equal to some value. The PMF values over all possible values of X must sum to 1.

Probability density function

In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value. It graphs the distribution of the continuous RV. The probability of the random variable falling within a particular range of values is given by the integral of this variable's density over that range; that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The probability of any single value is P(X = x) = 0 because a single point is infinitesimally small. Most often, the equation used to describe a continuous probability distribution is called a probability density function; sometimes it is referred to as a density function, a PDF, or a pdf. For a continuous probability distribution, the density function has the following properties:
• Since the continuous random variable is defined over a continuous range of values (called the domain of the variable), the graph of the density function will also be continuous over that range.
• The area bounded by the curve of the density function and the x-axis is equal to 1, when computed over the domain of the variable.
• The probability that a random variable assumes a value between a and b is equal to the area under the density function bounded by a and b.
For example, the probability that the random variable X is less than or equal to a is the area under the density curve between minus infinity and a.
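
A numeric check of these properties, assuming Python with NumPy, using the toy density f(x) = x/2 on [0, 2] (an invented example):

```python
import numpy as np

def f(x):
    return x / 2  # density: f(x) = x/2 on [0, 2], zero elsewhere

x = np.linspace(0.0, 2.0, 10_001)
print(np.trapezoid(f(x), x))      # total area over the domain ~ 1

a, b = 0.5, 1.5
xab = np.linspace(a, b, 10_001)
print(np.trapezoid(f(xab), xab))  # P(a <= X <= b) = (b^2 - a^2)/4 = 0.5
```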

Independent Events

In probability theory, two events are independent, statistically independent, or stochastically independent[1] if the occurrence of one does not affect the probability of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. The concept of independence extends to collections of more than two events or random variables: the events are pairwise independent if each pair is independent, and mutually independent if each event is independent of every combination of the other events. Conditions that must be true:
1) P(A|B) = P(A), or equivalently P(A)P(B) = P(A n B)
2) continuous: f(x,y) = fX(x)fY(y)

Law of total Probability (improve)

It expresses the total probability of an outcome A that can be realized via several distinct events Bi: P(A) = Sum_i P(A n Bi) = Sum_i P(A|Bi)P(Bi). AKA to find the probability of A, we must sum over all the different events Bi that occur jointly (AND/BOTH) with A.

cauchy distribution (improve)

Its cumulative distribution function has the shape of an arctangent function: F(x) = 1/2 + (1/π)arctan(x) for the standard Cauchy. It has NO MEAN/expected value: the area under x*f(x) over each half-line is infinite, so the integral defining the expected value diverges and the expected value is undefined.
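
A quick simulation, assuming NumPy: because the mean does not exist, sample means never settle down (contrast this with the Law of Large Numbers card below):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (10**2, 10**4, 10**6):
    sample = rng.standard_cauchy(n)
    print(n, sample.mean())  # running means keep jumping: no LLN without a mean
```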

Combination

Note the distinction between a permutation and a combination. A combination focuses on the selection of objects without regard to the order in which they are selected. Thus, the letters AB and BA represent two different permutations, because the order is different; however, they represent only 1 combination, because order is not important in a combination. nCr = n! / [r!(n - r)!]
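
The AB/BA point, verified with Python's itertools:

```python
from itertools import combinations, permutations

letters = "AB"
print(list(permutations(letters, 2)))  # [('A','B'), ('B','A')] -> 2 permutations
print(list(combinations(letters, 2)))  # [('A','B')] -> 1 combination
```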

Binomial Probability

Repeated Bernoulli trials.
• The experiment consists of n repeated trials.
• Each trial can result in just two possible outcomes. We call one of these outcomes a success and the other, a failure.
• The probability of success, denoted by P, is the same on every trial.
• The trials are independent; that is, the outcome on one trial does not affect the outcome on other trials.
The binomial probability refers to the probability that a binomial experiment results in exactly x successes; for example, the binomial probability of getting exactly one head in two coin flips is 0.50. Given x, n, and P, we can compute the binomial probability based on the binomial formula: b(x; n, P) = nCx * P^x * (1 - P)^(n-x). Summing over all x, Sum[x=0,n] nCx * P^x * (1 - P)^(n-x) = (P + (1 - P))^n = 1 by the binomial theorem.
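
A minimal sketch of the formula in Python; it reproduces the 0.50 coin-flip example quoted above:

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """P(exactly x successes in n independent trials)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binomial_pmf(1, 2, 0.5))  # exactly one head in two flips = 0.50
```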

Convergence in Probability

A sequence of random variables X1, ..., Xn is said to converge to the random variable X IN PROBABILITY if lim(n->infinity) P(|Xn - X| >= e) = 0 for every e > 0: Xn -(P)-> X. -Implies weak convergence.

Binomial Expansion/theorem

(a + b)^n = Sum[k=0,n] nCk * a^k * b^(n-k)
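
A quick numeric verification in Python (the values a = 2, b = 3, n = 5 are arbitrary):

```python
from math import comb

a, b, n = 2.0, 3.0, 5
lhs = (a + b) ** n
rhs = sum(comb(n, k) * a**k * b**(n - k) for k in range(n + 1))
print(lhs, rhs)  # both 3125.0
```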

Convolution/Sum of RV

Sums of continuous random variables/convolution of two densities, e.g. X + Y = Z. If the random variables are independent, the density of their sum Z is the convolution of the densities of X and Y: fZ(z) = ∫ fX(x) fY(z - x) dx. Properties: commutative and associative.
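
A discrete illustration, assuming NumPy: convolving the pmfs of two fair dice gives the pmf of their sum:

```python
import numpy as np

die = np.full(6, 1 / 6)    # pmf of a fair die on values 1..6

z = np.convolve(die, die)  # pmf of Z = X + Y; index i corresponds to the sum i + 2
print(z[7 - 2])            # P(Z = 7) = 6/36 ~ 0.1667
```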

standard deviation

The SD is a numerical value used to indicate how widely individuals in a group vary/fluctuate from the mean. If individual observations vary greatly from the group mean, the SD is big, and vice versa. It is the square root of the variance.

Central Limit Theorem

The central limit theorem states that the sampling distribution of the mean of a series of IID random variables will be normal or nearly normal IN SHAPE if the sample size is large enough. After standardizing, (Sn - nμ)/(σ√n), the value 0 is the center/peak because, by the LLN, Sn - nμ concentrates around 0 (if you add all the samples together and divide by n, you get approximately μ). A statistical theory that states that, given a sufficiently large sample size from a population with a finite variance, the mean of all samples from the same population will be approximately equal to the mean of the population (LLN), and THE STANDARDIZED MEAN DISTRIBUTION will have the shape of the STANDARD NORMAL DISTRIBUTION.
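
A simulation sketch, assuming NumPy; the Uniform(0,1) summands are an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 10_000
mu, sigma = 0.5, np.sqrt(1 / 12)   # Uniform(0,1) mean and SD

samples = rng.random((reps, n))
z = (samples.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))  # standardized S_n
print(z.mean(), z.std())  # ~0 and ~1: approximately standard normal
```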

Expected Value/Mean

The mean/average of the discrete random variable X is also called the expected value of X, AKA the value of X you should expect to get on average. E(X) = μx = Σ [ xi * P(xi) ], where xi is the value of the random variable for outcome i, μx is the mean of random variable X, and P(xi) is the probability that the random variable will be outcome i.

Random Variables

The random outcome of the experiment is X, the random variable. The sample space of the experiment is the set of all possible outcomes. If the sample space is either finite or countably infinite, the random variable is said to be discrete. If the sample space is uncountably infinite (e.g., an interval of real numbers), the random variable is said to be continuous.

Conditional Probability (Multiplication Rule?)

The rule of multiplication applies to the following situation. We have two events from the same sample space, and we want to know the probability that both events occur. Rule of multiplication: if events A and B come from the same sample space, the probability that both A and B occur is equal to the probability that A occurs times the probability that B occurs given that A has occurred. Equivalently, P(B|A) = P(A ∩ B) / P(A), given P(A) > 0.
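
A brute-force check on two dice in Python; the events A and B are invented for illustration:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls

A = {o for o in outcomes if sum(o) >= 9}         # sum is at least 9
B = {o for o in outcomes if o[0] == 6}           # first die shows 6

p_a = len(A) / 36
p_ab = len(A & B) / 36
print(p_ab / p_a)  # P(B|A) = P(A n B) / P(A) = 4/10 = 0.4
```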

Strong Convergence

The sequence of random variables X1, ..., Xn is said to converge towards the random variable X STRONGLY (almost surely) if P(lim(n->infinity) Xn = X) = 1: Xn -(a.s.)-> X. Strong convergence implies convergence in probability, which implies weak convergence. (The reverse statements are not always true.)

Discrete Random Variable

Within a range of numbers, a discrete random variable X can take on only finitely or countably many values. If the sample space is finite or countably infinite, the RV is said to be discrete. Suppose, for example, that we flip a coin and count the number of heads. The number of heads will be a value between zero and plus infinity. Within that range, though, the number of heads can be only certain values. For example, the number of heads can only be a whole number, not a fraction. Therefore, the number of heads is a discrete variable. And because the number of heads results from a random process - flipping a coin - it is a discrete random variable.

Law of Large Numbers (weak and strong)

The sample average X̄n = (1/n) Sum[k=1,n] Xk of a sequence of IID variables Xk converges towards their common expectation "u", provided that the expectation of Xk is finite. Weak law: X̄n -(P)-> u as n -> infinity, i.e. lim(n->infinity) P(|X̄n - u| >= e) = 0 for every e > 0. Strong law (almost surely): X̄n -(a.s.)-> u as n -> infinity, i.e. P(lim(n->infinity) X̄n = E[X]) = 1.
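
A simulation sketch, assuming NumPy; the Exponential(scale=2) draws, with mean u = 2, are an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # IID with mean u = 2

running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for n in (10, 1_000, 100_000):
    print(n, running_mean[n - 1])  # drifts toward u = 2 as n grows
```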

Poisson Distribution

Is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume. -The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed. It uses the mean/average/EXPECTED VALUE λ to calculate the probability of X = k, where k can be a value other than the expected one: P(X = k) = λ^k * e^(-λ) / k!
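
A minimal pmf sketch in Python; the 3-calls-per-hour example is illustrative:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a known average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

# Average of 3 calls per hour; probability of exactly 5 calls in an hour:
print(poisson_pmf(5, 3.0))  # ~ 0.1008
```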

covariance

Measures the strength and direction of the association between two variables: Cov(X, Y) = E[(X - μx)(Y - μy)]. It is positive when the variables tend to move together and negative when one tends to rise as the other falls; its units are the product of the two variables' units.

correlation coefficient

Normalizes covariance by turning it into a unitless coefficient between -1 and 1. Correlation coefficients measure the strength of association between two variables. The most common correlation coefficient, called the Pearson product-moment correlation coefficient, measures the strength of the linear association between variables. The sign and the absolute value of a Pearson correlation coefficient describe the direction and the magnitude of the relationship between two variables.
-The value of a correlation coefficient ranges between -1 and 1.
-The greater the absolute value of a correlation coefficient, the stronger the linear relationship.
-The strongest linear relationship is indicated by a correlation coefficient of -1 or 1.
-The weakest linear relationship is indicated by a correlation coefficient equal to 0.
-A positive correlation means that if one variable gets bigger, the other variable tends to get bigger.
-A negative correlation means that if one variable gets bigger, the other variable tends to get smaller.
Keep in mind that the Pearson correlation coefficient only measures linear relationships. Therefore, a correlation of 0 does not mean zero relationship between two variables; rather, it means zero linear relationship. (It is possible for two variables to have zero linear relationship and a strong curvilinear relationship at the same time.)
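
A sketch with NumPy showing covariance carrying units while the correlation coefficient is unitless; the linear model y = 2x + noise is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2 * x + rng.normal(size=10_000)  # linearly related plus noise

print(np.cov(x, y)[0, 1])       # covariance, ~2 (units of x times units of y)
print(np.corrcoef(x, y)[0, 1])  # unitless, ~ 2/sqrt(5) ~ 0.894
```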

Multiplication rule

The number of possible arrangements/outcomes, with respect to order, for multiple random "experiments." If Exp. 1 has n1 outcomes and Exp. 2 has n2 outcomes, then the total # of possible outcomes is n1 x n2.

Radius of Convergence

the radius of convergence of a power series is the radius of the largest disk in which the series converges. It is either a non-negative real number or ∞.

Uniform Random Variables

When all the values of a random variable X have the same probability of occurrence. Can be discrete or continuous.

Geometric Probability

• The geometric distribution is a special case of the negative binomial distribution. It deals with the probability that X = the number of trials required for a single success, i.e. P(X = n) = P(first success on trial n) = (1 - P)^(n-1) * P. -X = n. Thus, the geometric distribution is the negative binomial distribution where the number of successes (k) is equal to 1. • SAME FORMULA as the negative binomial, except k = 1.

