Risk Analysis Oral Exam Questions


What is the multiplicative law for probability?

(a) (lecture 12) slide 3 (b) used when failures are not independent (c) The probability that two events both occur, i.e. the probability of their intersection, equals the product of the probability of one event and the conditional probability of the other given the first: P(A ∩ B) = P(A)·P(B|A); if the events are independent this reduces to P(A)·P(B)

How do we create a Boolean function from a fault tree?

(a) A Boolean function is created through either a "Top Down" or "Bottom Up" approach (b) The top-down approach entails substituting the Boolean representation of each subsystem with that of its lower-level subsystems until only components are included (c) The bottom-up approach entails taking the lowest level and integrating it with upper levels until the entire system is modeled

What is a circular triad and why is it important in paired comparison experiments?

(a) A circular triad is an intransitive set of preferences in a paired comparison experiment (b) i.e. x1>x2, x2>x3, x3>x1 (c) Important because too many circular triads indicate that the expert is not reliable

What is a coefficient of agreement and why is it important in paired comparison experiments?

(a) A coefficient of agreement is a measure of consensus amongst experts (b) It is important because a high coefficient of agreement, confirmed via statistical tests, shows that there is actually consensus amongst the experts; if not, the expert judgment is not useful for analysis

What is a common cause failure?

(a) A common cause failure is a failure of multiple components that is a result of one event (b) CCFs are considered to be the collection of all sources of dependency, especially between components, that are not known or are difficult to model explicitly.

What is a dual tree and why is it important?

(a) A dual tree is the complement of a fault tree (all AND gates are replaced by OR gates and vice versa) (b) Dual trees are important because they allow for the determination of the min path sets

What is a copula and how is it derived?

(a) A method for combining any marginal distribution form and a correlation structure (usually from t) (b) Given a joint cumulative distribution function F for random variables with marginal distribution functions F1, ..., Fn, F can be written as a function of the marginals: F(x1, ..., xn) = C[F1(x1), ..., Fn(xn)]. Given the Fi and C are jointly differentiable, the joint density function is f(x1, ..., xn) = f(x1) × ... × f(xn) × C'[F1(x1), ..., Fn(xn)], where f(xi) is the density corresponding to Fi(xi).
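
A minimal sketch of the construction, assuming a Gaussian copula C and illustrative exponential and lognormal marginals (none of these specific choices come from the notes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rho = 0.7                                   # illustrative correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw correlated standard normals (the copula's dependence structure).
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

# 2. Transform each coordinate to Uniform(0,1) via the normal CDF:
#    u_i = Phi(z_i), so (u1, u2) follows the Gaussian copula C.
u = stats.norm.cdf(z)

# 3. Apply inverse marginal CDFs: x_i = F_i^{-1}(u_i).
x1 = stats.expon(scale=2.0).ppf(u[:, 0])
x2 = stats.lognorm(s=0.5).ppf(u[:, 1])

# The marginals are preserved while the dependence comes from C.
print(np.corrcoef(x1, x2)[0, 1])
```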

Counting Process (a) Definition (b) Be able to draw a sample path from this process (c) Important Parameters, M(t), m(t) and their relationship (d) Relationship Between N(t) and Tn

(a) A stochastic process where all values are nonnegative and increases occur only by jumps of size one (b) [PHOTO] (c-1) m(t) is the rate of occurrence (ROCOF); M(t) is the mean number of failures expected by time t, i.e. E[N(t)], where N(t) = the number of failures in [0, t] (c-2) m(t) = dM(t)/dt (d) N(t) is the counting process itself, whilst Tn is the time of the nth arrival; {N(t) ≥ n} if and only if {Tn ≤ t}

Describe the Alpha Factor model for CCFs

(a) Analogous to the MGL model, but with a different interpretation (b) For a system of size m, the parameters alpha_k represent alpha_k = P{failure event involving exactly k components | failure event} [PHOTO]

Know the Mean and Variance Formulas of each Distribution (Binomial, Negative Binomial, Poisson, Geometric)

(a) Binomial Mean: np Var: np(1-p) (b) Negative binomial Mean: s/p Var: s(1-p)/p^2 (c) Poisson Mean: lambda Var: lambda (d) Geometric Mean: 1/p Var: (1-p)/p^2
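
As a quick self-check, these formulas can be compared against scipy.stats (the parameter values below are arbitrary; note scipy's nbinom counts failures before the sth success, so s is added back to its mean to match the total-trials parameterization used above):

```python
from scipy import stats

n, p, s, lam = 10, 0.3, 4, 2.5            # illustrative values

print(stats.binom.stats(n, p))            # (np, np(1-p))
print(stats.poisson.stats(lam))           # (lam, lam)
print(stats.geom.stats(p))                # (1/p, (1-p)/p**2)
m, v = stats.nbinom.stats(s, p)           # failures-only parameterization
print(m + s, v)                           # (s/p, s(1-p)/p**2)
```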

For Distributions: Binomial, Negative Binomial, Poisson, Geometric, How would you recognize when to use the distribution?

(a) Binomial - number of successes in a series of n independent trials, i.e. fixed number of trials; x = 0, ..., n (b) Negative binomial - number of trials until the sth success, i.e. no fixed number of trials; x = s, s+1, ..., no upper limit (c) Poisson - when there are no stated probabilities but stated averages; x = 0, 1, ..., no upper limit (d) Geometric - negative binomial with s = 1; F(x) = 1-(1-p)^x

Know the Special Properties of Each Distribution (Binomial, Negative Binomial, Poisson, Geometric)

(a) Binomial - preserved under convolution; can be used in conjunction with a continuous random variable X where p is a probability statement about X (b) Negative binomial - preserved under convolution; can be used in conjunction with a continuous random variable X where p is a probability statement about X; can be written in terms of # failures until s success using Y=X-s (c) Poisson - preserved under convolution, relationship with exponential (d) Geometric - memoryless property

What are the three methods of calculating probability of system failure given a min cut set representation?

(a) Boolean Representation of a min cut (b) probability representation of a min cut (c) Binary decision diagrams

What is the difference between a Bradley Terry, a Thurstone and an NEL model?

(a) Bradley-Terry: probabilities built using an iterative approach from the preference counts N(i,j) (b) Thurstone: the probability that i beats j is derived from latent values distributed ~ N(mu, sigma) (c) NEL model: the probability that i beats j is built using exponential random variables

What are the nonparametric estimators for distributions and how are they defined and used

(a) Empirical CDF (1) Assuming complete data (2) order the data from least to greatest (3) use the plotting position (i-0.3)/(n+0.4) for n < 20 (b) Kaplan-Meier Estimator (1) Rank the data from least to greatest (2) assign reverse ranks ri (3) R(t) = product over the uncensored observations of (ri-1)/ri, as sketched below
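
A minimal sketch of the reverse-rank Kaplan-Meier computation described above; the failure times and censoring pattern are made-up illustrations, not course data:

```python
import numpy as np

def kaplan_meier(times, censored):
    """Product-limit estimate of R(t) in the reverse-rank form:
    R(t) = prod (r_i - 1)/r_i over the uncensored observations.
    censored[i] is True if observation i is right censored."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    censored = np.asarray(censored)[order]
    n = len(times)
    rev_rank = np.arange(n, 0, -1)          # reverse ranks n, n-1, ..., 1
    r, steps = 1.0, []
    for t, c, ri in zip(times, censored, rev_rank):
        if not c:                           # only failures reduce R(t)
            r *= (ri - 1) / ri
            steps.append((t, r))
    return steps

print(kaplan_meier([55, 187, 216, 240, 244, 335, 361, 373, 375, 386],
                   [False, False, True, False, False, True,
                    False, False, False, False]))
```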

Why do we need to use expert judgment?

(a) Expert judgement is used to estimate parameters when data is not readily available (b) Data source not originally constructed with risk analysis in mind (c) Cost, time or technical considerations (d) Bad data in general (i.e. poor entry, bad data definitions)

Describe the Cooke Model for expert judgement. What is information? What is calibration?

(a) Experts are asked to provide breakpoints (quantiles) for the cdfs of the model variables and of seed variables. These breakpoints are used to calculate information scores and calibration scores, which are then combined to yield weights. The weights allow a combined cdf to be determined. (b) The information score is a measure of how wide the expert's ranges are (narrower ranges carry more information) (c) The calibration score is a measure of how often the seed variables' realized values fall within the expert's stated ranges

What is a truth table and how is it used?

(a) Generate all possible component states and the probabilities associated with each (b) Used for representing systems in terms of their components; evaluate the system Boolean formula for each state and sum the probabilities of the failed states (a sketch follows)
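
A sketch of the truth-table method for a hypothetical three-component system; the structure T = A or (B and C) and the failure probabilities are assumed for illustration, using the 1 = failed convention:

```python
import itertools

p_fail = {"A": 0.01, "B": 0.05, "C": 0.05}   # assumed failure probabilities

def system_fails(a, b, c):
    # Assumed structure: A in series with the parallel pair (B, C).
    return a or (b and c)

p_top = 0.0
for state in itertools.product([0, 1], repeat=3):
    # Probability of this joint component state (independence assumed).
    pr = 1.0
    for x, name in zip(state, "ABC"):
        pr *= p_fail[name] if x else 1 - p_fail[name]
    if system_fails(*state):
        p_top += pr

print(p_top)   # exact P(system failure) from the truth table
```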

Homogeneous Poisson Process: Definition and Use of Superposition Property

(a) If N1(t), ..., Nk(t) are independent HPPs with rates lambda_1, ..., lambda_k, then N(t) = N1(t) + ... + Nk(t) is an HPP with rate lambda = lambda_1 + ... + lambda_k (b) If I have multiple independent HPPs, combining them by adding the rates will also give an HPP

What are the test of hypothesis for NHPPs and how and why are they used?

(a) Log-Linear NHPP (1) H0: beta1 = 0 vs Ha: beta1 != 0 (2) where m(t) = exp(beta0 + beta1*t) (b) Power Law NHPP (1) H0: delta = 1 vs Ha: delta != 1 (2) where m(t) = gamma*delta*t^(delta-1) (c) They are used because under H0 the NHPP reduces to an HPP, i.e. the test checks whether the rate actually varies with time

Discuss how this (Tabular Form of Bayes Law and the Law of Total Probability) can be used for Discrete Priors

(a) Make three rows, =x (pdf), <=x (cdf), and >x(reliability) (b) Fill out the tables accordingly

Discuss how to solve for an MLE

(a) Mathematically (1) Build the likelihood function (2) take the derivative of the likelihood function (usually the derivative of the log likelihood) (3) set that derivative to zero (4) solve for the parameters (b) Obtaining starting values for numerical MLE: use the method of moments estimates for the distribution (a sketch follows)
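
A sketch of numerical MLE following these steps, using a Gamma model and simulated data (the distribution choice and all values are illustrative); the optimizer performs the differentiate-and-set-to-zero step:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=3.0, scale=2.0, size=500)   # illustrative data

# Step (b): method-of-moments starting values for Gamma(v, scale):
# mean = v*scale, var = v*scale^2.
xbar, s2 = data.mean(), data.var()
v0, scale0 = xbar**2 / s2, s2 / xbar

# Steps (1)-(4): maximize the log-likelihood numerically by
# minimizing its negative.
def neg_log_lik(theta):
    v, scale = theta
    if v <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.gamma.logpdf(data, a=v, scale=scale))

res = optimize.minimize(neg_log_lik, x0=[v0, scale0], method="Nelder-Mead")
print(res.x)   # MLEs of (v, scale)
```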

Markov Process: Definition

(a) Memoryless (b) Defined in a discrete state space

What are the three common estimators and how do we use them?

(a) Method of Moments: Using moments of the function to estimate parameters; past data to predict the next random variable (b) Least Squares: minimizing the least squares value of a function vs data points to estimate parameters; line of best fit (c) Maximum Likelihood: by selecting the values for the parameter that maximizes the likelihood function, we select the parameter values which maximize the probability of observing what we observed

Homogeneous Poisson Process: Definition and Use of Decomposition Property

(a) If each arrival of an HPP with rate lambda is independently classified as type i with probability pi, then the type-i arrivals form independent HPPs with rates lambda*pi (b) i.e., if independent, you can separate one HPP into separate HPPs

Bayesian Prior Construction: Describe the types of prior and their use: (a) Noninformative (b) Conjugate (c) Empirical Bayes (d) Maximum Entropy

(a) Noninformative: Zero information, e.g. a flat prior f(theta) ∝ 1 over the parameter's range (b) Conjugate: Select a prior form whose distribution is proportional to the likelihood, so the posterior is of the same family (c) Empirical Bayes: Choose a distributional form for the prior and use half of the data to estimate its parameters, then the rest to get a posterior distribution (not theoretically sound) (d) Maximum Entropy: The distribution with the greatest spread (entropy) subject to fixed constraints, e.g. a given mu or sigma^2 (a conjugate-update sketch follows)
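
A minimal conjugate-prior sketch: a Beta prior on a demand failure probability p with Binomial data (the prior parameters and data counts are illustrative):

```python
from scipy import stats

a, b = 1.0, 19.0          # Beta prior parameters (assumed)
x, n = 2, 50              # observed failures in n demands (assumed)

# Beta is conjugate to the binomial likelihood, so the posterior is
# again Beta with updated parameters.
a_post, b_post = a + x, b + n - x

prior_mean = a / (a + b)
post_mean = a_post / (a_post + b_post)
interval = stats.beta.ppf([0.05, 0.95], a_post, b_post)  # 90% credible
print(prior_mean, post_mean, interval)
```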

Develop or describe specific plotting procedures for Lognormal

(a) Plot ln(failure times) vs. z-score of the estimated F(failure times) (b) Linear = lognormal (c) sigma = 1/slope, mu = sigma × (-intercept)

Develop or describe specific plotting procedures for Weibull

(a) Plot ln(failure times) vs. ln(-ln(1 - estimated cdf)) (b) Linear = Weibull (c) beta = slope, eta = exp(-intercept/beta) (a sketch follows)
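
A sketch of the Weibull plotting procedure, reusing the (i-0.3)/(n+0.4) plotting position from the empirical CDF discussion above; the failure times are illustrative:

```python
import numpy as np

t = np.sort(np.array([37., 55., 64., 72., 74., 92., 110., 116., 144., 218.]))
n = len(t)
i = np.arange(1, n + 1)
F_hat = (i - 0.3) / (n + 0.4)            # median-rank estimate of the cdf

x = np.log(t)                            # ln(failure times)
y = np.log(-np.log(1 - F_hat))           # ln(-ln(1 - estimated cdf))

slope, intercept = np.polyfit(x, y, 1)   # roughly linear => Weibull fits
beta = slope                             # shape parameter
eta = np.exp(-intercept / beta)          # scale parameter
print(beta, eta)
```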

What is the definition and use of the important distributions in Bayesian Estimation: (a) Prior (b) Predictive (c) Posterior (d) Posterior Predictive

(a) Prior: A distribution that addresses variability in a parameter; built using expert judgment (b) Predictive: A distribution that addresses variability of the model whilst also accounting for variability of the parameters; uses the law of total probability (c) Posterior: A distribution that utilizes both data and the prior distribution to account for variability in a parameter; uses Bayes' Theorem (d) Posterior Predictive: A distribution that addresses variability of the model whilst accounting for variability of the parameters, informed by both the data and the prior distribution

What is the rare event approximation? How do we use it in Risk Analysis?

(a) Says that the probability of multiple rare events happening at once is negligible, so P(union of cut sets) ≈ sum of the individual cut set probabilities (b) We use it to simplify complicated probability expressions, especially when there is a large overlap in cut sets (a numerical sketch follows)
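
A tiny numerical illustration with two independent cut sets (probabilities assumed):

```python
# Two cut sets: exact union probability vs. rare event approximation.
p_c1, p_c2 = 1e-3, 2e-3
p_both = p_c1 * p_c2                 # overlap term (independence assumed)

exact = p_c1 + p_c2 - p_both         # inclusion-exclusion
approx = p_c1 + p_c2                 # rare event approximation

print(exact, approx, approx - exact) # error is the negligible joint term
```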

Bayesian Prior Construction: How does the posterior distribution behave given the data

(a) Shifts in location (b) Peakedness (c) Large Sample Properties

NonHomogeneous Poisson Process: Simulation of NHPP

(a) Simulate independent interarrival times from an exponential distribution with parameter lambda = 1; their cumulative sums are the arrival times of a rate-1 HPP (b) Find the inverse function of M(t) (c) Apply M^(-1) to the rate-1 arrival times and use the results as the jump times (a sketch follows)
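
A sketch of this inversion method, assuming a power-law mean value function M(t) = a*t^b (the function and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, horizon = 0.5, 1.5, 10.0       # assumed M(t) = a * t**b and horizon

def M_inv(s):
    return (s / a) ** (1.0 / b)      # inverse of the mean value function

# (a) Exponential(1) interarrival times; cumulative sums are the
#     arrival times of a rate-1 HPP.
s = np.cumsum(rng.exponential(scale=1.0, size=200))

# (b)-(c) transform through M^{-1} to get the NHPP jump times.
jumps = M_inv(s)
jumps = jumps[jumps <= horizon]
print(len(jumps), jumps[:5])
```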

Describe the Event Tree technique. What is a split fraction?

(a) Technique used to generate risk scenarios from an initiating event (b) A split fraction is the probability of a specific branch of the event tree

Homogeneous Poisson Process: Definition

(a) The jump size is always one (b) t and h are always positive (c) a counting process; a type of stochastic process (d) independent increments; constant rate m(t) = lambda

Describe the Mosleh-Apostolakis Model for Expert Judgment. Why is the model not considered a good model?

(a) The decision maker estimates a mean and variance for each expert. These values are used in conjunction with the estimate of mu given by each expert: the variances are used to weight the experts' opinions, along with the decision maker's own prediction. (b) It is not considered good because the model is too sensitive to the experts' opinions, and too many parameters must be estimated.

What are the important properties of estimator and their definitions?

(a) Unbiasedness: the mean of the estimator's sampling distribution equals the parameter being estimated (b) Minimum Variance: the estimator has the least variance possible; the spread of the sampling distribution about its mean is minimal (c) Consistency: the estimator converges to the true parameter value as n goes to infinity

Markov Process: Know the definition of Performance Measures (Visit Frequency, Mean Time Between Visits to a State)

(a) Visit Frequency to a State: how often a specific state is reached (b) Mean Time Between Visits to a State: the average time between visits to a state (c) Mean Duration of a Visit to a State: the average time spent in a state (d) System Availability: the percentage of time that the system will be running (e) System Unavailability: the percentage of time that the system will not be running (f) Frequency of System Failures: how often the system fails (g) Mean Duration of a System Failure: the average time that the system is failed (h) Mean Time Between System Failures: the average time between system failures (i) Mean Functioning Time Until System Failure: the average time a functioning system runs until failure

Markov Process: Given a problem discuss how to establish

(a) a Rate Matrix: the matrix of rates at which the process moves from the row state to the column state, with diagonal entries making each row sum to zero (b) a Transition Probability Matrix: divide each off-diagonal element in a row by the total exit rate of that row, with zeros on the diagonal (c) the Expected Time in a State: an m x 1 vector with entries 1/(exit rate of the state), i.e. one over the row's off-diagonal sum (d) Steady-State Probabilities: solve 0 = (probabilities) × (rate matrix) together with the probabilities summing to one (a sketch follows)
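
A sketch for a hypothetical three-state process (the rate matrix values are illustrative):

```python
import numpy as np

# Off-diagonal q_ij = rate of jumping from state i to state j; each
# diagonal entry makes its row sum to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# Expected time in state i = 1 / (total exit rate) = -1 / Q[i, i].
print(-1.0 / np.diag(Q))

# Embedded transition probability matrix: divide each off-diagonal
# element by the row's exit rate; zeros on the diagonal.
P = Q / (-np.diag(Q))[:, None]
np.fill_diagonal(P, 0.0)
print(P)

# Steady state: solve pi Q = 0 with sum(pi) = 1 by appending the
# normalization equation to the transposed system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)
```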

Discuss and define the definition of min path, path, min cut, and cut sets and why they are important.

(a) min path: a set of components such that when all components in the set function the system functions, and no component can be removed from the set with the system still guaranteed to function (b) path: a collection of basic events that connect input and output; represents a path through the graph (c) min cut: a set of components such that when all components in the set fail the system fails, and no component can be removed with failure of the remainder still forcing system failure (d) cut set: a group of components that, when all fail, causes the system to fail. These are important because they give minimal Boolean representations of system failure and success for quantification

Develop or describe specific plotting procedures for normal

(a) plot failure times vs z-score of F(failure times) (b) Linear = Normal

How do we create a Boolean function from a system block diagram?

A Boolean expression for a block diagram is created by abstracting each subsystem (parallel subsystems first, then series subsystems) and replacing its components with a single component labeled by the subsystem's Boolean representation, until only one component remains in the diagram. With xi = 1 denoting failure, series is 1-(1-x1)(1-x2) and parallel is x1x2

Describe the Fault Tree techniques.

A visual representation that shows a decomposition of a top event until basic events are reached. Top event = system-level failure; basic events = component-level failures. Provides a description of the occurrence of the top event.

Bayesian Prior Construction: How are the parameters of the prior distribution selected

By matching whatever summaries are available: set the parameters so the prior's moments or quantiles match expert-supplied values (cf. maximum entropy with a fixed mu or sigma^2), choose them for conjugacy with the likelihood, or estimate them from a portion of the data (empirical Bayes), as described under the types of prior above

Define or explain Weak Consistency

All of the highest priority cells have higher risks than the least prioritized cells [when max bound of all green cells is less than min of all red cells]

Describe the Binomial Failure Rate Model

Assumes two sorts of shocks - those that kill a single component and a common cause shock that can kill any group of components. Each component fails independently with rate lambda; common cause shocks occur with rate mu. Given a common cause shock, each component fails with probability p independently of the other components, so the number of components failing in a common cause shock is binomially distributed. This model assumes that the failure rate of any group of components depends only on the number of components in the group.

How is the Beta Factor Model derived?

Assumes two sorts of shocks: shocks that kill a single component and a common cause shock that can kill any group of components. Each component fails independently with rate lambda; common cause shocks occur with rate mu. Tries to span the whole range of possibilities with only three parameters. BEAUTIFUL BUT USELESS.

Describe the Beta Factor Model

Assumptions: For m components, only m+1 shock processes; one which kills each component separately and one which kills all components. The failure rate for each component is assumed to be identical. It is very simple and limited: failure sizes of 1 or m only. Beta can also be interpreted as the conditional probability that a component fails due to a common cause, given that it fails.

What is Bayesian Estimation and How is it Different from Classical Estimation?

Bayesian estimation is different from classical estimation in that it does not rely solely on observed data to estimate the proposed distribution: it utilizes a "prior" distribution to account for variability in the parameters of the distribution. Bayesian estimation thus takes parameter uncertainty into account, whereas classical estimation does not.

Describe the inference procedure for the Beta Factor Model

The beta factor model lumps the failure times together, similarly to the symmetric Marshall-Olkin model. The MLE of the individual failure rate is ni/(mT) and the MLE of the common cause rate is nd/T, where ni is the number of individual failures, nd the number of common cause failures, m the number of components, and T the total observation time.

How are the Marshall-Olkin, Beta Factor, and Binomial Failure Rate Models related?

All three are shock models. The Marshall-Olkin model assigns a separate shock rate to every group of components; the beta factor model is the special case with only single-component and all-component shocks; the binomial failure rate model claims it can cover every group size, but only by using three parameters (lambda, mu, p).

Why is Boolean algebra important for quantifying fault and event trees?

Boolean algebra is important for quantifying fault and event trees because it allows for a reduced mathematical representation of when a system fails (1) or functions (0)

Define or explain Betweenness

Cells cannot diagonally touch a cell that is two or more priority levels above them. [You can't have a red and a green cell touching - there must be a yellow between them]

Discuss some ways conditional probability may be used in risk analysis.

Conditional probability can be used to better diagnose the system when failure occurs. Determining Pr(Component|Failure) allows an analyst to better understand where the source of the failure lies. Determining Pr(Failure|Component) allows the analyst to see which components have the greatest effect on the system's reliability.

How is the Marshall-Olkin (Multivariate Exponential) Distribution derived?

Consider a system of m components and assume different kinds of independent shocks can occur, resulting in failures of groups of components. A group of components is denoted by an m-dimensional vector of 0s and 1s (a 1 in the ith position indicating component i is a member of the group), or simply by a list. Shocks killing the different groups occur independently with constant failure rate lambda_x for group x (e.g. ...). Component i can fail from any one of a number of different causes depending on which shock occurs first; the overall constant failure rate for component i is the sum of lambda_x over all groups x containing i.

What are some measures of dependence and what type of dependence do they measure?

Covariance: measures the linear dependence of 2 variables. Correlation: the scale-free (normalized) covariance, likewise a measure of linear dependence. Rank correlations (e.g. Spearman's rho) measure monotone, possibly non-linear, dependence.

What is the definition of likelihood and how is it constructed for: (a) Complete Samples (b) Right Censored Samples (c) Left Censored Samples (d) Interval Censored Samples (e) Mixture Data

Definition: the joint probability distribution of the observed data expressed as a function of the statistical parameters. Contributions: (a) Complete: X = t contributes the pdf f(t) (b) Right censored: X > t contributes R(t) (c) Left censored: X < t contributes F(t) (d) Interval censored: t1 < X <= t2 contributes F(t2) - F(t1), equivalently R(t1) - R(t2), for the interval [t1, t2] (e) Mixture data: the product of all the likelihood contributions above (a sketch follows)
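
A sketch of a mixed complete/right-censored log-likelihood for an assumed Exponential(lambda) model (the times and censoring pattern are illustrative):

```python
import numpy as np

def log_lik(lam, t_complete, t_censored):
    # Complete observations contribute f(t):
    #   f(t) = lam * exp(-lam t)  ->  log f = log(lam) - lam t
    ll = np.sum(np.log(lam) - lam * np.asarray(t_complete))
    # Right-censored observations contribute R(t):
    #   R(t) = exp(-lam t)        ->  log R = -lam t
    ll += np.sum(-lam * np.asarray(t_censored))
    return ll

t_complete = [12.0, 35.0, 48.0, 90.0]   # observed failures (hours)
t_censored = [100.0, 100.0]             # still running at 100 hours

lams = np.linspace(0.001, 0.05, 500)
vals = [log_lik(l, t_complete, t_censored) for l in lams]
print(lams[int(np.argmax(vals))])       # close to n_failures / total time
```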

Describe the Marshall-Olkin (Multivariate Exponential) Distribution

Different shocks are denoted to occur at times Ui (shocks killing component i alone) or Uij (shocks killing components i and j together), and so on depending on the number of components; each component's lifetime is the minimum of the shock times of the groups containing it.

How do we create a Boolean function from an event tree?

Each scenario is modeled by finding the Boolean representation of each subsystem. Each subsystem, or the complement of a subsystem, is then multiplied with the others along the respective scenario path.

How are event trees and fault trees related?

Event trees and fault trees both involve Boolean representations of components. Split fractions in the event tree are also calculated from fault trees.

Describe a paired comparison experiment.

Experts are asked to judge which parameter in a pair has greater value

For Distributions: Exponential, Gamma, Weibull, Lognormal, Normal, Beta, (a) Be able to Recognize the Distribution Form (b) Know the Special Properties of Each Distribution (c) Know the Mean and Variance Formulas of each Distribution

Exponential: (a) Is there only an x term in the exponent of the "e" term? (b) constant failure rate; special case of Gamma (Gamma(1, lambda)) and Weibull (Weibull(1, 1/lambda)); memoryless property (c) mean = 1/lambda; variance = 1/lambda^2
Lognormal: (a) Is there a ln x term in the exponent of the "e" term? (b) failure rate increases and then decreases (c) mean = e^(mu + sigma^2/2); variance = e^(2*mu + 2*sigma^2) - e^(2*mu + sigma^2)
Normal: (a) Is there a power term (x^2) or (x-a)^2 in the exponent of the "e" term and a 1/sqrt(2*pi) constant? (b) symmetric, bell-shaped (c) mean = mu; variance = sigma^2
Weibull: (a) Is there a power term (x^a) in the exponent of the "e" term (where a doesn't equal 1)? (b) can model increasing, decreasing, and constant failure rates (c) mean = eta*Gamma(1 + 1/beta); variance = eta^2*Gamma(1 + 2/beta) - [eta*Gamma(1 + 1/beta)]^2
Beta: (a) Is there both an (x-a) and/or (b-x) term? (b) bounded on [0,1]; extended definition bounded on [a,b]; can be uniform, normal-like, skewed left or right, J-shaped left or right, or U-shaped (c) mean = alpha/(alpha+beta); variance = (alpha*beta)/[(alpha+beta)^2*(alpha+beta+1)]; for the extended [a,b] form multiply the variance by (b-a)^2
Gamma: (a) Otherwise, there is a power term and an "e" term (b) can model increasing, decreasing, and constant failure rates (c) mean = v/alpha; variance = v/alpha^2

Be able to recognize parametric family integral identities

Exponential: f(x | lambda) = lambda * e^(-lambda*x) ==> integral(e^(-lambda*x) dx, 0 to inf) = 1/lambda
Gamma: f(x | v, alpha) = (alpha^v / Gamma(v)) * x^(v-1) * e^(-alpha*x) ==> integral(x^(v-1) * e^(-alpha*x) dx, 0 to inf) = Gamma(v)/alpha^v
Normal: f(x | mu, sigma) = (1/((2*pi)^0.5 * sigma)) * e^(-0.5*((x-mu)/sigma)^2) ==> integral(e^(-0.5*((x-mu)/sigma)^2) dx) = (2*pi)^0.5 * sigma
Beta: f(x | alpha, beta) = (Gamma(alpha+beta)/(Gamma(alpha)*Gamma(beta))) * x^(alpha-1) * (1-x)^(beta-1) ==> integral(x^(alpha-1) * (1-x)^(beta-1) dx, 0 to 1) = Gamma(alpha)*Gamma(beta)/Gamma(alpha+beta)
Poisson: f(x | lambda) = (lambda^x / x!) * e^(-lambda) ==> sum over x of (lambda^x / x!) = e^lambda
Geometric: f(x | p) = p*(1-p)^(x-1) ==> sum over x of (1-p)^(x-1) = 1/p
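
These identities can be checked numerically (the parameter values below are arbitrary):

```python
import numpy as np
from scipy import integrate, special

lam, v, alpha = 2.0, 3.0, 1.5
a, b = 2.0, 5.0

# Exponential kernel: integral_0^inf e^(-lam x) dx = 1/lam
print(integrate.quad(lambda x: np.exp(-lam * x), 0, np.inf)[0], 1 / lam)

# Gamma kernel: integral_0^inf x^(v-1) e^(-alpha x) dx = Gamma(v)/alpha^v
print(integrate.quad(lambda x: x**(v - 1) * np.exp(-alpha * x), 0, np.inf)[0],
      special.gamma(v) / alpha**v)

# Beta kernel: integral_0^1 x^(a-1)(1-x)^(b-1) dx = Gamma(a)Gamma(b)/Gamma(a+b)
print(integrate.quad(lambda x: x**(a - 1) * (1 - x)**(b - 1), 0, 1)[0],
      special.gamma(a) * special.gamma(b) / special.gamma(a + b))
```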

Describe Bayesian Interval Estimation

Given an interval [theta_L, theta_U], the area under the prior (or posterior) density over the interval equals the fraction of parameter values that fall within the interval. Using both bound values in the model yields upper and lower bounds of credibility. Note the credibility level is a probability statement about the parameter, unlike a classical confidence level.

What are importance measures and how are they used?

Importance measures are used to understand how the reliability of a specific component affects the system as a whole. A key challenge in a PRA is to identify the elements in the system that contribute most to the risk; an importance measure compares the risk contribution of each element to that of the others.

Homogeneous Poisson Process: Distribution of nth Occurrence Time

It is a gamma distribution, Tn ~ Gamma(n, lambda), since the nth occurrence time is the sum of n i.i.d. Exponential(lambda) interarrival times

NonHomogeneous Poisson Process: Definition

Jump size is one and the interarrival times are a function of time. Where an HPP has m(t) = lambda, an NHPP has m(t) = lambda(t): the intensity is non-constant.

Define the likelihood function for NHPPs

L = [Product over i of m(ti)] * exp(-M(t0)): the product of the ROCOF evaluated at each failure time ti, times exp(-M(t0)) for the observation interval [0, t0]

Describe Bayesian Point Estimation

Use the mean, median, or mode of the posterior (or of the prior, before data) to estimate parameters, and the mean, median, or mode of the predictive distribution to estimate model values. The point is to use functions of the data to estimate the unknown parameters.

Describe the inference procedure for the Binomial Failure Rate Model

The model performs poorly in practice. The rate at which multiple failures occur is estimated from N+: if Ni is the number of dependent failures involving i components, then N+ is the sum of the Ni.

NonHomogeneous Poisson Process: Be able to draw a sample path from this process

A step function starting at (0, 0) that increases by one at each arrival, like the HPP; but the arrivals cluster where lambda(t) is large and thin out where it is small

Markov Process: Be able to draw a sample path from this process

[PHOTO] A piecewise-constant path over the state space: the process holds in a state for an exponentially distributed time, then jumps to another state

Describe the Multiple Greek letter model for CCFs

(a) [PHOTO] (b) A generalization of the Beta Factor Model (c) Involves the use of m-1 parameters (i.e., add Greek letters beta, gamma, delta, ...)

Compound Poisson Process: Be able to draw a sample path from this process

[PHOTO] Instead of every jump of N(t) having size 1, X(t) is the total value of the arrivals, so the jump sizes on the y-axis are random

Discuss the use of plotting techniques for goodness fit.

Plotting techniques transform the axes so that data from the hypothesized distribution appear linear. If the plotted points are relatively linear, one can conclude the data are consistent with the specified distribution.

For Binomial and Negative Binomial, explain what a Compound Probability Problem is.

The probability of two (or more) independent events occurring is found by multiplying their probabilities together. In a compound problem, the success probability p is itself a probability statement about another random variable (e.g. p = P(X > x) for a continuous X), which is then used in the binomial or negative binomial.

What does independence mean? Why is this important?

Independence means the occurrence of one event does not change the probability of another: P(A ∩ B) = P(A)P(B). It is important as a simplifying assumption for calculating P(T), since system failure probabilities then factor into products of component probabilities.

Homogeneous Poisson Process: Simulation of HPP

Simulate independent interarrival times from an Exponential(lambda) distribution; their cumulative sums are the jump points of the realization of the HPP (a sketch follows)
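
A minimal sketch (lambda and the horizon are illustrative); for the compound Poisson process in the next item one would additionally draw a random jump size for each arrival:

```python
import numpy as np

rng = np.random.default_rng(7)
lam, horizon = 0.5, 100.0

# i.i.d. Exponential(lambda) interarrival times; their cumulative sums
# are the jump points of the realization.
gaps = rng.exponential(scale=1 / lam, size=200)
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals <= horizon]

print(len(arrivals))                 # N(horizon), roughly lam * horizon
```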

Compound Poisson Process: Simulation of Compound Poisson Process

Simulate independent interarrival times, with an exponential distribution between each jump point, and simulate jump size using another distribution.

Define or explain Consistent Coloring

Subscribes to weak consistency, and any intermediate cell must contain points lower than the max of one priority level lower, but also points greater than the min of one priority level higher [a cell must be yellow if not green or red - yellow has risks higher than some in red cells and risks lower than some in the green cells]

Compound Poisson Process: Definition

The sum of a family of independent, identically distributed random variables up to time t: X(t) = Y1 + ... + Y_N(t), the total value of the arrivals from 1 to N(t). Interarrival times are random as in a Poisson process, and the jump sizes are drawn from some distribution.

Discuss the Tabular Form of Bayes Law and the Law of Total Probability

Table 1: P(B|Ai). Table 2: P(B ∩ Ai) = P(B|Ai)*P(Ai). Table 3: P(B) = sum of each row of Table 2. Table 4: P(Ai|B) = Table 2 divided by Table 3. In other words: start with P(Ai) and P(B|Ai) [Table 1]; then calculate P(A ∩ B) ==> P(B) ==> P(A|B), using the law of total probability P(B) = sum of P(B ∩ Ai) = sum of P(B|Ai) × P(Ai), followed by Bayes' law (a sketch follows)
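
A sketch of the tabular procedure with a discrete prior over three candidate failure probabilities (all numbers illustrative):

```python
import numpy as np
from scipy.stats import binom

p_vals = np.array([0.01, 0.05, 0.10])    # candidate parameter values A_i
prior = np.array([0.50, 0.30, 0.20])     # P(A_i)

# Table 1: P(B | A_i), likelihood of the data (1 failure in 10 trials)
# under each candidate value.
like = binom.pmf(1, 10, p_vals)

joint = like * prior                     # Table 2: P(B and A_i)
p_b = joint.sum()                        # Table 3: P(B), total probability
posterior = joint / p_b                  # Table 4: P(A_i | B), Bayes' law
print(posterior)
```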

What is the additive law of probability?

The additive law of probability (inclusion-exclusion) states that when taking the union of N sets, one adds the probabilities of all odd-order intersections and subtracts those of all even-order intersections until order N is reached: P(A1 ∪ ... ∪ AN) = Σ P(Ai) - Σ P(Ai ∩ Aj) + Σ P(Ai ∩ Aj ∩ Ak) - ...

Compound Poisson Process: What makes the Compound Poisson Process different from an HPP

In an HPP the jump size is always 1; in a CPP it is not. The CPP has random jump sizes, drawn from the distribution of the family of random variables Y.

Describe the Beta Factor model for CCFs

The probability of a CCF among k specific components in a group of size m, such that 1 ≤ k ≤ m, is: [PHOTO]

Describe the Basic Parameter model for CCFs

The probability of a CCF among k specific components in a group of size m, is the same for any group of that size.

Risk Communication

The process by which information about the nature and consequences of risk, as well as the risk assessment techniques and risk management options, is shared among decision makers and other stakeholders

Risk Management

The process by which the potential for loss or the magnitude of loss is minimized and controlled

Risk Assessment

The process by which the probability or frequency of loss by or to an engineering system is assessed, and the magnitude of the loss (consequences) estimated

How is the Risk Matrix Currently Used

The risk matrix is used to set priorities and guide resource allocation when managing risks. This is used primarily in risk communication.

How do we obtain starting values for finding the MLEs from NHPPs

The starting values for the optimization routine should be based on a plot of the mean value function M(t) against the observed N(t) step function [i.e. the number of arrivals as a function of time]: pick points t, evaluate the candidate M(t), and see how well it fits the N(t) step function

Homogeneous Poisson Process: Relationship of Exponential Distribution to HPP

The times between occurrences are independent, identically distributed, Exponential(lambda) random variables

What is the benefit of using a copula?

They can model a wide range of correlation structures with arbitrary marginals, and they are easy to use

How is the Binomial Failure Rate Model derived?

This can be explained using superposition and decomposition of HPPs. In a group of size m, with q = 1-p: the constant rate at which a particular single component (only) fails is lambda + mu*p*q^(m-1); the constant rate at which some single component (only) fails is lambda_1 = m*lambda + m*mu*p*q^(m-1); the constant rate at which a particular group of i components fails is mu*p^i*q^(m-i) (i > 1); the constant rate at which some group of i components fails is lambda_i = mu*C(m,i)*p^i*q^(m-i) (i > 1)

NonHomogeneous Poisson Process: What makes the NHPP different from an HPP

Unlike the HPP, the times between arrivals are neither exponential nor independent [the failure rate is not constant; no stationary increments]. The mean value function of an HPP is a straight line (constant rate lambda); that of an NHPP is a curve (varying rate lambda(t)).

Describe the inference procedure for the Marshall-Olkin Model

Estimate the rate for each shock group by its MLE: divide the number of failures of that group observed during the trial by the total observation time, i.e. lambda_ij ≈ nij/T, where T is the observation time and nij is the number of failures in group ij. Based on these estimators (the mean vector, covariance, and diagonal elements) you can make inferences.

Describe the FMEA techniques.

[Potential failure modes and effects] A step-by-step approach for identifying all possible failures in a design, process, product, or service. Estimate probability, consequence, and likelihood of detection (each usually on a 1-10 scale); multiply these values to yield the RPN (Risk Priority Number).

What is min cut representation for fault trees?

A min cut representation is a fault tree that contains only the min cuts and their relevant components

How does the multivariate Normal Model account for dependence?

rho = CORR[X1, X2] = sigma12/(sigma1*sigma2); the bivariate normal density f(x1, x2) uses rho as a parameter to relate the variables' dependence in its calculation

What do common cause failure models do? Where are they used?

They give the probabilities of all failure events that would cause system failure - the events being single component failures, two-component failures, etc. They are used in PRAs of redundant systems, where dependencies between components are difficult to model explicitly.

Homogeneous Poisson Process: Be able to draw a sample path from this process

Start at (0, 0); move along the time axis to the first shock time and step up one unit, then to the next shock time and step up one unit, and so on (a nondecreasing step function)

Discuss how to create the System CDF

Given min cut sets, set the probability of each component's failure to its CDF F(Ci), and combine the CDFs according to the series/parallel structure. Parallel (all must fail): F_T = F(C1) × F(C2) × F(C3). Series (any failure fails the system): F_T = 1 - (1-F(C1)) × (1-F(C2)) × (1-F(C3)). (A sketch follows.)
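
A sketch with three independent exponential components (the rates are assumed for illustration):

```python
import numpy as np

t = np.linspace(0, 1000, 5)                       # evaluation times (hours)
F = [1 - np.exp(-lam * t) for lam in (1e-3, 2e-3, 5e-3)]  # component CDFs

F_parallel = F[0] * F[1] * F[2]                      # all must fail
F_series = 1 - (1 - F[0]) * (1 - F[1]) * (1 - F[2])  # any failure fails it

print(F_series)
print(F_parallel)
```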

What is the definition of a Stochastic Process?

a collection of random variables, such that at any point t, X(t) is a random variable; often used to represent the evolution of some random value or system over time

Risk

a. A measure of potential loss due to natural or human activities b. A combination of the probability or frequency of the hazard and its consequence

What are the Problems with Using Risk Matrices? Why do these exist?

a. Consequence Classification is often not easy. Categorizing severity may require inherently subjective judgments (e.g., reflecting the rater's personal degree of risk aversion). In general, there is no unique way to interpret the comparisons in a risk matrix that does not require explanations—seldom or never provided in practice—about the risk attitude and subjective judgments used by those who constructed it. b. The use of risk matrices is too widespread (and convenient) to make cessation of use an attractive option. c. Hard to know true values of probability or consequence. d. Sometimes the risk value is greater in yellow than red (inconsistent coloring). e. Risk matrix confines the risk calculation to a quadrant - some risk is missed

What is the definition and interpretation of the basic distribution form F(x).

a. Cumulative Distribution Function (cdf) b. a function that gives the probability that the random variable takes a value less than or equal to a specified value: F(x) = P(X <= x) c. This is the probability that a system has failed prior to this value d. SYSTEM FAILURE CDF

What is the definition and interpretation of the basic distribution form H(x).

a. Cumulative Failure Rate Function b. Continuous only c. Denotes cumulative wear or exposure

What is the definition and interpretation of the basic distribution form h(x).

a. Failure Rate Function b. Denotes instantaneous rate of failure c. This again shows the likelihood that a failure occurs at this moment

Determine the Min, Max, Mean, and Variance of a set of risks on a Risk Matrix Using the Uniform Assumption for Probability and Consequence

a. Min - multiply the lower bounds of both consequence and probability b. Max - multiply the upper bounds of both consequence and probability c. Mean - (1) calculate the average of both consequence and probability using (L+U)/2; (2) multiply the resulting means d. Variance - for each of probability and consequence, the uniform variance is [(U-L)^2]/12 (a sketch combining these follows)
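
A sketch combining these pieces for a single cell (the bounds are assumed, and independence of probability and consequence is assumed; the variance of the product uses the standard identity for independent variables, which goes one step beyond the single-variable formula above):

```python
# Risk = probability x consequence, each Uniform(L, U) on its cell.
pL, pU = 0.10, 0.20          # probability bounds (assumed)
cL, cU = 1000.0, 5000.0      # consequence bounds (assumed)

r_min = pL * cL                              # multiply lower bounds
r_max = pU * cU                              # multiply upper bounds
mean_p, mean_c = (pL + pU) / 2, (cL + cU) / 2
r_mean = mean_p * mean_c                     # multiply the means

# Per-variable uniform variance (U-L)^2 / 12, as in the flashcard:
var_p = (pU - pL) ** 2 / 12
var_c = (cU - cL) ** 2 / 12
# Variance of a product of independent variables:
# Var(PC) = E[P^2]E[C^2] - (E[P]E[C])^2
r_var = (var_p + mean_p**2) * (var_c + mean_c**2) - (mean_p * mean_c) ** 2

print(r_min, r_max, r_mean, r_var)
```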

What is the definition and interpretation of the basic distribution form f(x).

a. Probability Density Function (pdf) b. a function that denotes the likelihood that a certain value occurs relative to all other values

What is the definition and interpretation of the basic distribution form R(x).

a. Reliability Function b. a function that gives the probability that the value exceeds a specified value: R(x) = P(X > x) = 1 - F(x) c. This is the probability that a system will fail after this value

How can you modify a Risk Matrix to have these Properties (Weak Consistency, Betweenness, Consistent Coloring)?

a. This can be done by changing the color of individual cells b. changing the ranges of cells (the more common approach)

How can you determine if a Risk Matrix has these Properties (Weak Consistency, Betweenness, Consistent Coloring)?

a. Weak Consistency - Comparing the min risk value of red cells vs. max risk value of green cells b. Betweenness - Make sure that no green cells touch any red cells c. Consistent Coloring - Check weak consistency and check that the yellow cells contain points lower than max green, but also points greater than min red

What are the Properties of Good Risk Matrices? Why do they Make Sense?

a. Weak Consistency - if you're going to design a risk matrix, you want to be able to distinguish between a green and a red risk. b. Betweenness - there has to be some gradual ramping up in the risk; you can't go from 0 to 100 without passing through 50. c. Consistent Coloring - risks that are categorized in the same color should be of comparable severity, i.e. all yellow cell risks fall in a logical range; a cell must be yellow if not green or red - yellow has risks higher than some in red cells and risks lower than some in the green cells

What is a Risk Matrix?

a. a table with several categories of probability (or likelihood or frequency) and consequence (or severity or impact) on its two axes; the resulting cell denotes a certain level of risk b. It associates a recommended level of risk, urgency, priority, or management action with each column-row pair (i.e. cell)

How can any one form (f(x), F(x), R(x), h(x), H(x)) be derived from the other?

a. f(x) = dF(x)/dx b. F(x) = integral of f(u) du c. R(x) = 1 - F(x) d. h(x) = f(x)/R(x), or h(x) = dH(x)/dx e. H(x) = integral of h(u) du

