Comp Sim Exam 2


Examples of a terminating sim:

- a store that opens at 7 a.m., locks its doors at 11 p.m., and closes when the last customer departs
- a project that begins on a designated day and completes when all activities are finished
- a portfolio whose performance is tracked over 6 months

When is steady state sometimes okay?

- designing for sustained peak-load conditions (ex: emergency room staffing at peak times)
- be careful: a system designed as if it were always at peak may be overdesigned if the peak period is short (ex: Turner's place, where the peak is heavy but lasts only 10 minutes)

Uses of @risk

- explore model logic
- designate inputs
- designate outputs
- simulate
- analyze results

Ways to check the confidence interval on Simio

- Reports tab under Experiments
- Pivot Grid
- Response chart

Input modeling

- represent the uncertainty in a stochastic simulation
- must be capable of representing the physical realities of the process to the extent needed for the decision at hand
- there is no "true" model for any stochastic input; we can only hope to get an approximation that is useful
- when we have data, we can fit a model to the data
- when we have no data, we have to creatively use what we can to get an input model

Problems with the Bonferroni inequality

- tends to be inconclusive, since each individual alpha needs to be really small, and hence the confidence intervals get really wide
- often only okay for comparisons of 3-6 systems

How to determine the warm-up period

- trial and error (see the sketch below)
- determine the run length: (a) until the natural stopping point if terminating; (b) perhaps 10 times the warm-up period if steady state
- determine the number of replications needed to make the confidence interval short enough
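One common trial-and-error aid, not named on this card but standard practice, is a Welch-style plot: average the output series across replications, smooth it with a moving average, and pick the warm-up point where the curve levels off. A rough NumPy sketch with synthetic data that simply mimics a decaying transient:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic output series for 10 replications x 500 observation periods:
# a decaying transient plus exponential noise, mimicking a warm-up.
periods = np.arange(500)
runs = 5.0 * np.exp(-periods / 50) + rng.exponential(2.0, size=(10, 500))

ensemble_avg = runs.mean(axis=0)                                       # average across replications
smoothed = np.convolve(ensemble_avg, np.ones(25) / 25, mode="valid")   # moving-average smoothing

# Plot 'smoothed' against the period index and pick the warm-up where it
# flattens out (around period 150 or so for this synthetic example).
print(smoothed[:3].round(2), smoothed[-3:].round(2))
```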

When is steady state okay?

- when the system is truly continuously operating and stationary
- when non-productive time or downtime is ignored (e.g., ignoring shift changeovers)
- when there is no fixed planning horizon
- when the system "warms up" quickly, or the warm-up is well accounted for

Steps for input modeling with data

1. Select 2. Fit 3. Check 4. Repeat

Using @Risk to fit a distribution to data

1. Select the data
2. Click the data viewer
3. Look at the histogram and summary stats
4. Go to distribution fitting
5. Fixed lower bound: 0/unsure; upper limit: open/unsure; select the necessary distributions
6. Write to Cell puts a Risk function in a cell with all the parameters filled in to match your data; it can be copied/pasted into any spreadsheet for use
7. Use Define Distributions to show a picture of the theoretical distribution

Say I want to know whether the mean time in system for Scenario 1 (average 4.03, half width of 2.01) is less than 5 hours. I would like to draw my conclusion at the 95% confidence level. In theory, I will need to increase my number of replications by approximately a factor of ____ to get a statistically significant result.

4. The half width (2.01) needs to shrink to below the 0.97 gap between 4.03 and 5, roughly a factor of 2, so the number of replications must increase by about 2^2 = 4.

Looking at the ModelEntity -> defaultEntity -> Population -> TimeInSystem -> average, and assuming a 95% confidence level was used for all 3 scenarios, with what confidence level would the Bonferroni Inequality confirm that scenario two had the lowest average flow time?

90%. Using 1 - C*alpha, where C is the number of scenarios - 1 = 2 inferences, each tested at 95% confidence: 1 - 2(0.05) = 0.90.

Input Modeling Step 2. Fit

Alignment of the distribution to the data. Determines values for its unknown parameters

When comparing the results of multiple scenarios, you must use the __________ ____________ to determine the true confidence of the system.

Bonferroni Inequality

Input Modeling Step 3. Check

Ensure the fit to the data via tests (ex: chi-squared tests) and graphical analysis

I can conclude at the 95% confidence level that Scenario 5 (average of 6.07 and half width of .44) has a higher mean time in system than 6.0. (True/False)

False. The interval 6.07 ± 0.44 = (5.63, 6.51) covers 6.0, so the result is inconclusive.
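The check behind answers like this one is simple interval arithmetic; a minimal sketch with the numbers from this card (variable names are just illustrative):

```python
# Can we conclude the mean is above (or below) a standard of 6.0?
average, half_width, standard = 6.07, 0.44, 6.0
lower, upper = average - half_width, average + half_width   # (5.63, 6.51)

if lower > standard:
    print("Conclude the mean is above the standard")
elif upper < standard:
    print("Conclude the mean is below the standard")
else:
    print("Inconclusive: the interval covers the standard")  # this branch fires here
```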

Input Modeling Step 4. Repeat

If the distribution does not fit, select another candidate and go to step 2 of the process, or use an empirical distribution.

If the confidence interval for the mean overlaps with the standard, what conclusion can be made?

Inconclusive. The mean could be above or below the standard

If the confidence interval for mean performance overlaps the standard...

Inconclusive. The mean could easily be above or below standard. Collect more data.

What must be done if confidence intervals overlap?

Increase the number of replications to see whether the difference becomes significant. Even then, you may not be able to draw a conclusion.

With an average of 2.24 and a half-width of .188, can we conclude at 95% confidence that the average time in queue is below 2 min.?

No, we can conclude at the 95% confidence level that the average time in queue is above 2 minutes.

Terminating Simulation

Performance measures are defined with respect to well-defined initial and final conditions.

There are currently 100 replications and the confidence interval is too wide for the decision I want to make. If I want to cut the width in half, about how many replications do I need total?

To cut a confidence interval in half you must quadruple the number of replications, so 400 replications would be needed.

I can conclude at the 95% confidence level that Scenario 1 (average 5.67 and half width of .30) has a higher mean time in system than 5.0. (True/False)

True. The entire interval 5.67 ± 0.30 = (5.37, 5.97) lies above 5.0.

Warm-up period

Used only in steady-state simulations. We increase the period until further increases no longer seem to make statistically significant differences in the performance measures. Might matter in some cases more than others.

Steps to reduce the confidence interval by increasing number of reps.

a. Determine how much the half width needs to decrease (by what factor? ex: a factor of 2)
b. Replications needed increase by that factor squared
c. Multiply the current number of replications by the factor of increase (ex: 100 * 2^2 = 400 replications)
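The same arithmetic as a tiny Python sketch, using the example numbers from this card (100 replications, a half width that must shrink by a factor of 2):

```python
import math

current_reps = 100
current_half_width = 2.0    # example: current half width
target_half_width = 1.0     # example: how narrow the interval needs to be

k = current_half_width / target_half_width        # shrink factor needed (here 2)
needed_reps = math.ceil(current_reps * k ** 2)    # half width ~ 1/sqrt(n), so n grows by k^2

print(f"Shrink by a factor of {k:.1f} -> about {needed_reps} replications total")  # ~400
```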

steady state situation

Assume probability distributions and system logic do not change over time. We are interested in the "long-run" performance as time (conceptually) goes to infinity.

Which of these is not a problem with using the Bonferroni inequality? a. Alpha needs to be really small b. It is always conclusive no matter what c. Confidence intervals get really wide d. Results tend to be inconclusive

b. It is always conclusive no matter what

Input Modeling Step 1. Select

choose one or more candidate distributions, based on physical characteristics of the process and graphical examination of the data.

In @Risk, blue distributions refer to...

continuous distributions

When is Steady State OK? a. When the system is truly continuously operating and stationary b. No fixed planning horizon c. When non-productive time or downtime is ignored d. All of the above

d. all of the above

In a confidence interval comparison, if the intervals don't overlap, then the scenarios are probably...

different

In @Risk, red distributions refer to...

discrete distributions: finite possible options

If the time between events is _____, the number of events is _______

exponential, poisson
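A quick numerical check of this relationship (plain NumPy, with an illustrative rate of 3 events per unit time): generate exponential interarrival times, count events per window, and confirm the counts have the mean and variance of a Poisson distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_windows = 3.0, 1.0, 20_000   # rate, window length, number of windows (made-up values)

counts = []
for _ in range(n_windows):
    total, n_events = 0.0, 0
    while True:
        total += rng.exponential(1 / lam)   # next exponential interarrival time
        if total > t:
            break
        n_events += 1
    counts.append(n_events)

# For a Poisson count, the mean and variance are both lam * t = 3.
print(np.mean(counts), np.var(counts))
```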

Use of fitting methods

Find parameter values that make the distribution fit the data as well as possible.

Monte Carlo simulation

Estimates how likely something is to happen by repeated random sampling.
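A minimal Monte Carlo sketch in Python; the two task-duration distributions and the 9-day deadline are made up purely to illustrate estimating "how likely something will happen":

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical project: two tasks in sequence, durations in days.
task_a = rng.triangular(2, 3, 6, size=n_trials)   # min, most likely, max
task_b = rng.normal(4, 1, size=n_trials)          # mean 4, standard deviation 1

# Estimate the probability the project misses a 9-day deadline.
p_late = np.mean(task_a + task_b > 9)
print(f"Estimated P(total time > 9 days) ~ {p_late:.3f}")
```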

Bonferroni inequality

if we make C inferences, each at level 1-alpha, then Probability{all inferences are correct} >= 1-C*alpha
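The inequality is easy to apply directly; a one-function sketch, applied to the three-scenario question earlier (C = scenarios - 1 = 2 inferences, each at 95%):

```python
def bonferroni_joint_confidence(num_inferences: int, alpha: float) -> float:
    """Lower bound on P(all inferences correct) when each is made at level 1 - alpha."""
    return 1 - num_inferences * alpha

print(bonferroni_joint_confidence(2, 0.05))   # 0.90 -> at least 90% joint confidence
```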

@Risk purpose

Adds Monte Carlo risk analysis to Excel models to improve strategic management decisions.

In a confidence interval comparison, if the intervals overlap, then the test is probably...

inconclusive

Formula to increase replications and halve the confidence interval:

Quadruple the replications: the half width shrinks roughly with 1/sqrt(n), so halving it requires 4 times as many replications.

common way to compare means in confidence intervals

look for overlapping confidence intervals. Non-overlapping confidence intervals indicate statistically significant difference in mean performance.

Common methods for fitting distributions

maximum likelihood, method of moments, and least squares.
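A rough Python analogue of fitting and checking one candidate, assuming SciPy is available (SciPy's .fit uses maximum likelihood); the sample data and the exponential candidate are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.exponential(scale=5.0, size=200)   # stand-in for observed service times

# Fit by maximum likelihood, fixing the location at 0 for a pure exponential.
loc, scale = stats.expon.fit(data, floc=0)

# Check the fit (here with a Kolmogorov-Smirnov test; a chi-squared test or a
# histogram/Q-Q plot works too). Note the p-value is optimistic when the
# parameters were estimated from the same data.
ks_stat, p_value = stats.kstest(data, "expon", args=(loc, scale))
print(f"fitted mean {scale:.2f}, KS p-value {p_value:.3f}")
# A small p-value suggests a poor fit: go back and select another candidate.
```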

triangular distribution

models a process when only the minimum, most likely, and maximum values of the distribution are known. Used to base distribution on expert opinion

Lognormal distribution

models the distribution of a process that can be thought of as the product of a number of component processes. Used a lot in financial models.

Normal Distribution

models the distribution of a process that can be thought of as the sum of a number of component processes. Used especially if number of things being summed is large.
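A quick sketch illustrating this card and the lognormal one above, with made-up component distributions: summing many components gives a roughly normal total, while multiplying them gives a roughly lognormal one (its log, being a sum, is roughly symmetric):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# 100,000 assemblies, each the sum of 12 operation times (made-up uniforms).
assembly_time = rng.uniform(1, 3, size=(100_000, 12)).sum(axis=1)

# 100,000 investments, each the product of 12 period returns (made-up uniforms).
total_return = rng.uniform(0.98, 1.06, size=(100_000, 12)).prod(axis=1)

# Skewness near 0 indicates the roughly symmetric, normal-like shape.
print(stats.skew(assembly_time), stats.skew(np.log(total_return)))
```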

poisson distribution

models the number of independent events that occur in a fixed amount of time or space.

Binomial Distribution

models the number of successes in n trials, where the trials are independent with common success probability, p. A discrete distribution that can look quite normal, or skewed depending on the parameters. Used a lot in quality control

exponential distribution

models the time between independent events, or a process time which is memoryless. Used for interarrival times

physical basis for distributions

most probability distributions were invented to represent a particular physical situation. If we know the physical basis, we can match it to the situation we have to model.

If the confidence level is based on 1-alpha, then the chance of making an error is...

no more than alpha.

Is there such a thing as a steady-state situation?

No, but it can be a useful way to summarize long-run performance.

example of poisson distribution

number of customers that arrive at a store during 1 hour, or number of defects found in 30 cubic meters of sheet metal.

Non-terminating (steady-state) simulation

performance measures defined over a conceptually infinite planning horizon. "Long-run average performance."

Which distributions are one-parameter distributions, and thus have less flexibility than the other distributions?

poisson and exponential

simulations provide better estimates of _____ difference than they do ____ performance because the same simplifications go into all the models being compared.

relative; absolute

empirical distribution

reuses the data themselves by making each observed value equally likely. Can be interpolated to obtain a continuous distribution. Used when there is a very large data set.
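A minimal sketch of both flavors, using a small made-up sample:

```python
import numpy as np

rng = np.random.default_rng(3)
observed = np.array([4.2, 5.1, 5.6, 6.0, 6.3, 7.8, 9.5])   # made-up observations

# Discrete empirical distribution: each observed value is equally likely.
draws = rng.choice(observed, size=5)

# Interpolated (continuous) empirical distribution: invert the empirical CDF
# by interpolating between the sorted observations at a uniform random quantile.
u = rng.uniform(size=5)
smooth_draws = np.interp(u, np.linspace(0, 1, len(observed)), np.sort(observed))

print(draws, smooth_draws.round(2))
```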

Two types of system simulations:

terminating and nonterminating

A 95% Confidence Interval indicates

that we are 95% confident that the interval covers the true mean of the system.

What are the boundaries of the confidence interval?

the average plus or minus the half width
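A sketch of how that interval is typically computed from replication averages (a t-based half width; the replication means below are invented):

```python
import numpy as np
from scipy import stats

rep_means = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 5.8, 5.3, 5.0, 5.6, 5.4])  # one average per replication

n = len(rep_means)
avg = rep_means.mean()
half_width = stats.t.ppf(0.975, df=n - 1) * rep_means.std(ddof=1) / np.sqrt(n)

print(f"95% CI: {avg:.2f} +/- {half_width:.2f} = ({avg - half_width:.2f}, {avg + half_width:.2f})")
```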

example of triangular distribution

the minimum, most likely, and maximum inflation rate we will have this year.

Example of binomial distribution

the number of defective components found in a lot of n components.

example of lognormal distribution

the rate of return on an investment, when interest is compounded, is the product of the returns for a number of periods.

Example of normal distribution

the time to assemble a product which is the sum of the times required for each assembly operation.

example of exponential distribution

the time to failure for a system that has constant failure rate over time.

Discrete or continuous Uniform distribution

used when all outcomes on an interval are equally likely

If the confidence interval for mean performance falls below the standard...

we can reject the hypothesis that the mean is above standard in favor of the alternative, that it is below standard.

If the confidence interval for mean performance falls above the standard...

we can reject the hypothesis that the mean is below standard in favor of the alternative, that it is above standard.

