BualExam2

prediction interval

(rather than confidence interval) is used for an individual value because an individual value of Y is not a population parameter

Point prediction

, or best guess, is found by substituting the given values of the Xs into the estimated regression equation. To measure the accuracy of the point predictions, calculate a standard error for each prediction. For simple regression, the standard error of prediction for a single Y is se√(1 + 1/n + (X0 − X̄)²/Σ(Xi − X̄)²); the standard error of prediction for the mean Y is se√(1/n + (X0 − X̄)²/Σ(Xi − X̄)²).
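
A minimal numpy sketch of these two standard errors for the simple-regression case (the function and variable names are illustrative, not from the source):

```python
import numpy as np

def prediction_std_errors(x, y, x0):
    # Fit the least squares line and compute the standard error of estimate.
    n = len(x)
    b, a = np.polyfit(x, y, 1)                    # slope, intercept
    resid = y - (a + b * x)
    se = np.sqrt(np.sum(resid ** 2) / (n - 2))    # standard error of estimate
    # Term shared by both formulas.
    lev = 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    return se * np.sqrt(1 + lev), se * np.sqrt(lev)   # (single Y, mean Y)
```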

First step is the model development step

. Decide on: Decision variables; Objective; Constraints; How everything fits together (develop correct algebraic expressions and relate all variables with appropriate formulas).

Regression Assumptions

1.There is a population regression line. It joins the means of the dependent variable for all values of the explanatory variables. For any fixed values of the explanatory variables, the mean of the errors is zero. 2.For any values of the explanatory variables, the variance (or standard deviation) of the dependent variable is a constant, the same for all such values. 3.For any values of the explanatory variables, the dependent variable is normally distributed. 4.The errors are probabilistically independent.

Heteroscedasticity

: The variability of Y values is larger for some X values than for others

F-ratio

= [(SSE_R − SSE_C)/(k − j)] / MSE_C

Observed Value

=Fitted Value + Residual

Autocorrelated residuals

A common problem in time series data, where the residuals are often correlated. If residuals separated by one time period are correlated, it is called lag 1 autocorrelation.

A Two-Variable Product Mix Model

A frequent problem in business: Company must decide on a product mix (how much of each product to introduce) to maximize net profit. Contains only two decision variables, and can be solved graphically. The completed model also shows answers to what-if questions.

an X and a Y

A scatterplot is a graphical plot of two variables

Modeling Possibilities

A wide variety of explanatory variables can be used in regression equations: dummy variables, interaction variables, nonlinear transformations, and constant elasticity relationships (multiplicative relationships). It is not wise to include all or many of the different types in a particular regression equation; only a few might improve the fit of the model.

Decision variables, Objective function, Constraints

All optimization models have several common elements

Rules for Using Dummy Variables

Always use one fewer dummy than the number of categories for any categorical variable. No original categorical variables (used to create dummies) should be used simultaneously with dummies. Once the dummies have been created, you can run the regression analysis using any combination of numerical and dummy explanatory variables
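
A short pandas sketch of these rules (the DataFrame and its columns are hypothetical): drop_first=True keeps one fewer dummy than the number of categories, and get_dummies replaces the original categorical columns, so they are never used alongside their dummies.

```python
import pandas as pd

df = pd.DataFrame({
    "salary": [52, 61, 58, 70],
    "gender": ["M", "F", "F", "M"],
    "region": ["East", "West", "South", "East"],
})

# One fewer dummy than categories: the first category becomes the reference.
X = pd.get_dummies(df[["gender", "region"]], drop_first=True)
# X now holds gender_M, region_South, region_West -- 0/1 columns that can be
# combined freely with numerical explanatory variables such as salary.
```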

Outliers

An ___________ is an observation that has an extreme value for at least one variable.

Hypothesis Tests for the Regression Coefficients

Another important piece of information in regression outputs: The t-values for the individual regression coefficients. Each t-value is the ratio of the estimated coefficient to its standard error and indicates how many standard errors the regression coefficient is from zero. A t-value can be used in a hypothesis test for the corresponding regression coefficient: If a variable's coefficient is zero, there is no point including this variable in the equation. To run this test, simply compare the t-value in the regression output with a tabulated t-value and reject the null hypothesis if the output t-value is greater in magnitude.
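
A hedged sketch of this test with statsmodels and scipy (data are simulated, and the two-tailed 5% significance level is an assumption): each output t-value is compared with the tabulated t for n − k − 1 degrees of freedom.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 3 + 2 * X[:, 0] + rng.normal(size=50)      # second variable's true coefficient is 0

res = sm.OLS(y, sm.add_constant(X)).fit()
t_crit = stats.t.ppf(0.975, df=res.df_resid)   # tabulated t, two-tailed 5% test
for name, t in zip(res.model.exog_names, res.tvalues):
    verdict = "reject beta = 0" if abs(t) > t_crit else "cannot reject beta = 0"
    print(f"{name}: t = {t:.2f} -> {verdict}")
```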

Durbin-Watson statistic and is given in many regression packages' outputs.

Autocorrelation is measured by the

they are only relevant for linear relationships.

Be careful when interpreting correlations

it is useful to summarize them with a single numerical measure. This measure is called the standard error of estimate

Because there are numerous residuals

Backward

Begins with all potential explanatory variables in the equation and deletes them one at a time until further deletion is no longer warranted

Forward:

Begins with no explanatory variables in the equation, and successively adds one at a time until no remaining variables make a significant contribution

smallest sum of squared residuals

Best-fitting line through the points of a scatterplot is chosen using the

regression analysis

Drawing scatterplots is a good way to begin

can be transformed

Either the dependent, or the independent, or all of the variables

Model development stage:

Enter all the inputs, trial values for the changing cells, and formulas relating these in a spreadsheet. This stage is the most crucial. The spreadsheet must include a formula that relates the objective to the changing cells.

Assumption 3: Normal Distribution

Equivalent to stating that the errors are normally distributed. The easiest way to detect non-normality is by examining a histogram and a Q-Q plot. If assumption 3 holds, the histogram should be approximately symmetric and bell-shaped, and the points of a Q-Q plot should be close to a 45 degree line.
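
A sketch of both diagnostics with matplotlib and scipy (the residuals are simulated here; in practice they would come from a fitted regression):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

resid = np.random.default_rng(0).normal(size=200)   # stand-in for regression residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(resid, bins=20)                            # should be symmetric, bell-shaped
ax1.set_title("Histogram of residuals")
stats.probplot(resid, dist="norm", plot=ax2)        # points should hug the 45-degree line
plt.show()
```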

Solving an Optimization Problem

First step is the model development step. Decide on: Decision variables Objective Constraints How everything fits together: Develop correct algebraic expressions, relate all variables with appropriate formulas. Second step is to optimize. A feasible solution is a solution that satisfies all of the constraints. The feasible region is the set of all feasible solutions. An infeasible solution violates at least one of the constraints. The optimal solution is the feasible solution that optimizes the objective. Third step is sensitivity analysis.
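
The chapter's tool is Excel's Solver; purely to illustrate the same develop/optimize/analyze steps, here is a hedged sketch with scipy's linprog (the profit and resource numbers are invented). linprog minimizes, so the profit objective is negated.

```python
from scipy.optimize import linprog

c = [-3, -5]                     # maximize 3*x1 + 5*x2 -> minimize its negation
A_ub = [[2, 1],                  # labor:    2*x1 + 1*x2 <= 100
        [1, 3]]                  # material: 1*x1 + 3*x2 <= 90
b_ub = [100, 90]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
if res.success:                  # an optimal feasible solution was found
    print("optimal mix:", res.x, "max profit:", -res.fun)
else:                            # the model is infeasible or unbounded
    print(res.message)
```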

autocorrelation

For time-series data, this assumption is often violated. This is because of a property called

Nonnormality of residuals

Form a histogram of residuals to check for violations. Unless the distribution of the residuals is severely nonnormal, the inferences made from the regression output are still approximately valid.

Invoking Solver

Formally designate the objective cell, the changing cells, the constraints, and selected options, and tell _____________ to find the optimal solution

Three types of equation-building procedures

Forward: Begins with no explanatory variables in the equation, and successively adds one at a time until no remaining variables make a significant contribution. Backward: Begins with all potential explanatory variables in the equation and deletes them one at a time until further deletion is no longer warranted. Stepwise: Much like a forward procedure, except that it also considers possible deletions along the way.
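
A hedged sketch of the forward procedure with statsmodels (the function name and the 0.05 entry threshold are assumptions; backward and stepwise work analogously, with deletions):

```python
import statsmodels.api as sm

def forward_select(X, y, p_enter=0.05):
    # X: DataFrame of candidate explanatory variables; y: dependent variable.
    chosen, remaining = [], list(X.columns)
    while remaining:
        # Try each remaining variable and record its p-value when added.
        pvals = {v: sm.OLS(y, sm.add_constant(X[chosen + [v]])).fit().pvalues[v]
                 for v in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:       # no remaining variable contributes
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen
```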

Predicted Y = a + b1X1 + b2X2 + ... + bkXk

General Multiple Regression Equation: If Y is the dependent variable, and X1 through Xk are the explanatory variables, then a is the Y-intercept, and b1 through bk are the slopes. Collectively, a and the bs in the equation are called the regression coefficients.

transformed variables

General linear regression does not require that any of the variables be the original variables in the dataset. Often, the variables being used are

multiple regression

Graphically, you are no longer fitting a line to a set of points. The regression equation is still estimated by the least squares method. There is a slope term for each explanatory variable in the equation. The standard error of estimate and R2 summary measures are almost exactly as in simple regression. Many types of explanatory variables could be used.

Sensitivity analysis

Here you see how the optimal solution changes (if at all) as selected inputs are varied.

Assumption 2: Variation around the Population Regression Line

Homoscedasticity: The variation of the Ys about the regression line is the same, regardless of the values of the Xs. A simpler term is constant error variance. This assumption is often questionable. Heteroscedasticity: The variability of Y values is larger for some X values than for others. The easiest way to detect such a nonconstant error variance is through a visual inspection of a scatterplot
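
A short simulated example of that visual check (all numbers are invented): the errors below grow with X, so the residual plot fans out.

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 200)
y = 2 * x + rng.normal(scale=0.5 * x)     # error spread grows with x

res = sm.OLS(y, sm.add_constant(x)).fit()
plt.scatter(res.fittedvalues, res.resid, s=10)
plt.axhline(0, color="gray")
plt.xlabel("Fitted values"); plt.ylabel("Residuals")
plt.show()    # a fan shape signals heteroscedasticity; a level band, homoscedasticity
```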

Potential Uses of Regression Analysis

How do wages of employees depend on years of experience, years of education, and gender? How does the current price of a stock depend on its own past values, as well as the current and past values of a market index? How does a company's current sales level depend on its current and past advertising levels, the advertising levels of its competitors, the company's own past sales levels, and the general level of the market? How does the total cost of producing a batch of items depend on the total quantity of items that have been produced? How does the selling price of a house depend on such factors as the appraised value of the house, the square footage of the house, the number of bedrooms in the house, and perhaps others?

Proportionality:

If the level of any activity is multiplied by a constant factor, the contribution of this activity to the objective is multiplied by the same factor.

Linear Models and Scaling

If the model is poorly scaled, with some very large and some very small numbers, then the roundoff error is far more likely to be an issue. There are three possible remedies for poorly scaled models: Check the Use Automatic Scaling option in Solver. Redefine the units in which the various quantities are defined. Change the Precision setting in Solver's Options dialog box to a larger number.

multiple regression

If there are several explanatory variables, the analysis is called

simple regression

If there is a single explanatory variable, the analysis is called

it is usually apparent from the scatterplot

If there is any relationship between the two variables

Additivity

Implies that the sum of the contributions from various activities to a particular constraint equals the total contribution to that constraint

dependent variable. It is also called the response variable or the target variable

In every regression study, there is a single variable that we are trying to explain or predict, called the

Relevance, Data availability

In practice, the choice of relevant explanatory variables is almost never obvious. Two guiding principles:

sensitivity report

In real LP applications, the solution to a single model is hardly ever the end of the analysis. Solver's __________ performs two types of sensitivity analysis: On the coefficients of the objective, the cs. On the right sides of the constraints, the bs. On a Solver run, a sensitivity report is requested in Solver's final dialog box.

Proportionality Additivity Divisibility

In terms of the general setup, LP models possess three important properties that distinguish them from general mathematical programming models:

Solver Sensitivity Report

In the first row of the top section, the allowable increase and allowable decrease indicate how much the coefficient could change before the optimal product mix would change

There are many ways to develop an LP spreadsheet model. Common elements:

Inputs: All numerical inputs - that is, all numeric data given in the statement of the problem - should appear somewhere in the spreadsheet. Changing cells: Instead of using variable names, such as Xs, spreadsheet models use a set of designated cells for the decision variables. The values in these changing cells can be changed to optimize the objective. Objective cell: One cell, called the objective cell, contains the value of the objective. Solver systematically varies the values in the changing cells to optimize the value in the objective cell. Constraints: Excel does not show the constraints directly on the spreadsheet. Instead, they are specified in the Solver dialog box. Nonnegativity: Normally, the decision variables - that is, the values in the changing cells - must be nonnegative.

Assumption 4: Probabilistic Independence of the Errors

Intuitively, this assumption means that information on some of the errors provides no information on the values of the other errors. For cross-sectional data, there is generally little reason to doubt the validity of this assumption. For time-series data, this assumption is often violated. This is because of a property called autocorrelation. Autocorrelation is measured by the Durbin-Watson statistic and is given in many regression packages' outputs. A related requirement: no explanatory variable can be an exact linear combination of any other explanatory variables, a condition called exact multicollinearity. If it exists, there is redundancy in the data.

Potential characteristics of an outlier

It has an extreme value for one or more variables. Its value of the dependent variable is much larger or smaller than predicted by the regression line, and its residual is abnormally large in magnitude. Its residual is not only large in magnitude, but this point "tilts" the regression line toward it. This type of outlier is called an influential point. Its values of individual explanatory variables are not extreme, but they fall outside the general pattern of the other observations.

A Test for the Overall Fit: The ANOVA Table

It is conceivable that none of the variables in the regression equation explains the dependent variable. First indication of this problem is a very small R2 value. Another way to say this is that the same value of Y will be predicted regardless of the values of Xs. Hypotheses for ANOVA test: The null hypothesis is that all coefficients of the explanatory variables are zero. The alternative is that at least one of these coefficients is not zero. Two ways to test the hypotheses: Individual t-values (small, or statistically insignificant). F test (ANOVA test): A formal procedure for testing whether the explained variation is large compared to the unexplained variation.

Discussion of Linear Properties

It is easy to recognize the linear properties if the model is described algebraically: a1x1 + a2x2 + ... + anxn, where n is the number of decision variables. This expression is called a linear combination of the xs. It is fairly easy to recognize when the model is not linear: When there are products or quotients of expressions involving changing cells, or When there are nonlinear functions, such as squares, square roots, or logarithms, that involve changing cells. If the three properties (proportionality, additivity, and divisibility) are satisfied, Solver can use the simplex method, which is fairly efficient at solving linear problems.

sum of the absolute values of the residuals, but for technical and historical reasons, the procedure is not used.

It is perfectly appropriate to minimize the

A solution is feasible if it satisfies all of the constraints

It is possible that there are no feasible solutions to the model. It could happen when: There is a mistake in the model (an input was entered incorrectly). The problem has been so constrained that there are no solutions left. In general, there is no foolproof way to remedy the problem of infeasibility. Another problem occurs if the objective is unbounded - that is, it can be made as large or small as you like. When this happens, Solver states that the objective cell does not converge.

A correlation

It measures the strength of linear relationship only

mathematical programming models

Linear programming is a subset of a larger class of models called

Include/Exclude Decisions

Look at a variable's t-value and its associated p-value. If the p-value is above some accepted significance level, such as 0.05, this variable is a candidate for exclusion. Check whether a variable's t-value is less than 1 or greater than 1 in magnitude. If it is less than 1, then it is a mathematical fact that se will decrease (and adjusted R2 will increase) if this variable is excluded from the equation. Look at t-values and p-values, rather than correlations, when making include/exclude decisions. An explanatory variable can have a fairly high correlation with the dependent variable, but because of other variables included in the equation, it might not be needed. When there is a group of variables that are in some sense logically related, it is sometimes a good idea to include all of them or exclude all of them. Use economic and/or physical theory to decide whether to include or exclude variables.

stepwise regression.

Many statistical packages provide some assistance in include/exclude decisions. Generically, these methods are referred to as

Divisibility:

Means that both integer and noninteger levels of the activity are allowed.

Decision Support System

Most DSSs are built around spreadsheets, but there are also many other platforms. Users see the front end (which lets them select input values) and the back end (which produces a report that explains the solution in nontechnical terms).

Stepwise

Much like a forward procedure, except that it also considers possible deletions along the way.

Prediction

Once you have estimated a regression equation from a set of data, you might want to use it to predict the value of the dependent variable for new observations. Two general prediction problems in regression: The objective is to predict the value of the dependent variable for one or more individual members of the population. The objective is to predict the mean of the dependent variable for all members of the population with certain values of the explanatory variables. The second problem is inherently easier in the sense that the resulting prediction is bound to be more accurate. One problem: There is no guarantee that the relationship within the range of the sample is valid outside this range

explanatory variables in the analysis

One variable (simple regression analysis) or more variables (multiple regression analysis) can be used as

Assumption 1: Population Regression Line

Probably the most important assumption. It implies that for some set of explanatory variables, there is an exact linear relationship in the population between the means of the dependent variable and the values of the explanatory variables. Population regression line joining means: μY|X1...Xk = α + β1X1 + ... + βkXk α is the intercept term, and βs are the slope terms. They are denoted by Greek letters to indicate that they are unobservable population parameters. Most individual Ys do not lie on the regression line because of the error ε: Y = α + β1X1 + ... + βkXk + ε

regression coefficients.

Regression line : Y = α + β1X1 + ... + βkXk + ε α and βs are called

linear (straight-line) or nonlinear (curved).

Relationships analyzed could be

Sampling Distribution of the Regression Coefficients

Result implications: The estimate of b is unbiased in the sense that its mean is β, the true unknown value of the slope. The estimated standard deviation of b is labeled sb. It is usually called the standard error of b. The shape of the distribution of b is symmetric and bell-shaped.

Sampling Distribution of the Regression Coefficients

Sampling distribution of any estimate is the distribution of this estimate over all possible samples. Sampling distribution of a regression coefficient has a t distribution with n-k-1 degrees of freedom

no relationship between a pair of variables. In such situations, the scatterplot usually appears as a shapeless swarm of points.

Scatterplots are also useful for detecting

outliers

Scatterplots are especially useful for identifying

optimize

Second step is to

Addressing Outliers

Simply finding an outlier does not mean you ought to do anything about it - it depends entirely on the situation. If an outlier is clearly not a member of the population of interest, then it is probably best to delete it from the analysis (for example, a company CEO whose values dwarf the rest of the sample). If it is not clear whether outliers are members of the relevant population, you can run the regression analysis with them and again without them. If the results are practically the same in both cases, then it is probably best to report the results with the outliers included. Otherwise, you can report both sets of results with a verbal explanation of the outliers.

dummy variable

Some potential explanatory variables are categorical and cannot be measured on a quantitative scale

ANOVA Table Elements

The ANOVA table splits the total variation of the dependent variable, SST, into the part unexplained by the regression equation, SSE, and the part that is explained: SSR = SST − SSE
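
A quick numeric check of this decomposition with statsmodels (simulated data). One caution: statsmodels names the residual sum of squares `ssr` and the explained part `ess`, the reverse of the SSR/SSE labels used here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 2))
y = 1 + X @ np.array([2.0, -1.0]) + rng.normal(size=40)

fit = sm.OLS(y, sm.add_constant(X)).fit()
sst = fit.centered_tss        # total variation
sse = fit.ssr                 # unexplained (statsmodels' "ssr" is the residual SS)
ssr = sst - sse               # explained: SSR = SST - SSE
print(np.isclose(ssr, fit.ess), np.isclose(fit.rsquared, ssr / sst))  # True True
```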

Interpretation of Logarithmic Transformations

The R2 values with Y and Log(Y) as dependent variables are not directly comparable; they are percentages explained of different variables. The se values with Y and Log(Y) as dependent variables are usually of totally different magnitudes. To make the se from the log equation comparable, you need to convert the residuals back into original units. To interpret any term of the form bX in the log equation, first express b as a percentage; then, when X increases by one unit, the expected percentage change in Y is approximately that percentage.

Comparison of Algebraic and Spreadsheet Models

The algebraic models are quite straightforward. For product mix models, the spreadsheet models are almost direct translations into Excel of the algebraic models. For multiperiod production models, algebraic models have two sets of variables, while spreadsheet models have only one. Algebraic models for multiple periods must be related algebraically through a series of balance equations. Extra level of abstraction makes algebraic models much more difficult for typical users to develop and comprehend. Spreadsheet models are much easier for typical users

A Multiperiod Production Model

The distinguishing feature of this model is that it relates decisions made during several time periods. This type of problem occurs when a company must make a decision now that will have ramifications in the future. The company does not want to focus completely on the short run and forget about the long run. Many optimization problems are of a multiperiod nature, where a sequence of decisions must be made over time. When making the first of these decisions, it is usually best to include future decisions in the model

product of explanatory variables raised to powers.

The effect of one-unit change in any X on Y depends on the levels of the other Xs in the equation. The dependent variable is then expressed as a

This procedure is called validating the fit.

The fit from a regression analysis is often overly optimistic. One way to see if the regression was successful is to split the original data into two subsets: One subset for estimation and one subset for validation. A regression equation is estimated from the first subset. Then the values of the explanatory variables from the second subset are substituted into the equation to obtain predicted values for the dependent variables. Finally, these predicted values are compared to the known values of the dependent variable in the second subset.
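
A minimal sketch of validating the fit (simulated data; the 70/30 split is an arbitrary choice): estimate on one subset, predict the other, and compare.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = 5 + X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=100)

est, val = slice(0, 70), slice(70, 100)            # estimation / validation subsets
fit = sm.OLS(y[est], sm.add_constant(X[est])).fit()

pred = fit.predict(sm.add_constant(X[val]))        # predicted values for subset 2
rmse = np.sqrt(np.mean((y[val] - pred) ** 2))      # compare with the known values
print(f"se on estimation data: {np.sqrt(fit.mse_resid):.2f}, validation RMSE: {rmse:.2f}")
```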

of how useful the regression line is for predicting Y values from X values.

The magnitudes of the residuals provide a good indication

Interpretation in Multiple Regression

The multiple regression output is very similar to simple regression output. Standard error of estimate is essentially the same, but denominator gets adjusted for the number of explanatory variables (n-k-1). It is interpreted exactly the same way as before. The R2 value is again the percentage of variation explained by the combined set of explanatory variables, but it has a drawback: It can only increase with the number of variables added to the model. The adjusted R2 is an alternative measure that adjusts R2 for the number of explanatory variables in the equation. It is used to monitor whether extra explanatory variables belong in the equation.

Formula for correlation

The numerator of the equation is a measure of association between X and Y, called the covariance; the correlation is this covariance divided by the product of the standard deviations of X and Y.
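
A tiny numpy check of this relationship (the numbers are invented): the correlation is the covariance scaled by the two standard deviations.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov_xy = np.cov(x, y)[0, 1]                           # the numerator: covariance
r = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))  # scale away the units
print(np.isclose(r, np.corrcoef(x, y)[0, 1]))         # True
```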

Parsimony

The principle of parsimony is to explain the most with the least. It favors a model with fewer explanatory variables, assuming that this model explains the dependent variable almost as well as a model with additional explanatory variables.

Nonconstant Error Variance

The second regression assumption that variance of the errors should be constant for all values of the explanatory variables is almost always violated to some extent. Mild violations do not have much effect on the validity of the regression output. One very common violation needs to be dealt with: The fan-shape phenomenon. It occurs when increases in a variable result in increases in variability. Two ways to deal with it: Use a different estimation method than least squares. Weighted least squares is an option available in some statistical packages. Use a logarithmic transformation of the dependent variable.
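
A sketch of the logarithmic-transformation remedy (simulated data with multiplicative errors; statsmodels also offers sm.WLS for the weighted least squares option):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(1, 10, 200)
y = np.exp(0.2 + 0.3 * x + rng.normal(scale=0.2, size=200))  # fan shape in original units

# Regressing log(y) instead of y often stabilizes the error variance.
fit_log = sm.OLS(np.log(y), sm.add_constant(x)).fit()
print(fit_log.params)        # intercept and slope are now on the log scale
```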

standard error of estimate

The usual empirical rules for standard deviation can be applied to the

Homoscedasticity

The variation of the Ys about the regression line is the same, regardless of the values of the Xs. A simpler term is constant error variance. This assumption is often questionable.

The Partial F Test

There are many situations where a set of explanatory variables forms a logical group. It is then common to include all the variables in the equation or exclude all of them. Example: Categorical variables with more than two categories, represented by a set of dummy variables. The ________________is a test to determine whether the extra variables provide enough extra explanatory power to warrant their inclusion in the equation. To run the test, estimate both the complete (C) and the reduced (R) equations and look at the associated ANOVA tables. Then, form the F-ratio: F = [(SSE_R − SSE_C)/(k − j)] / MSE_C, where k and j are the numbers of explanatory variables in the complete and reduced equations.
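
A hedged sketch of the test with statsmodels and scipy (the DataFrame and column names are hypothetical; recall that statsmodels' `ssr` attribute is the SSE in this card's notation). statsmodels results also expose compare_f_test for the same comparison.

```python
import statsmodels.api as sm
from scipy import stats

def partial_f(df, base, extra, y="y"):
    # Reduced equation: base variables only. Complete: base plus the extra group.
    reduced = sm.OLS(df[y], sm.add_constant(df[base])).fit()
    complete = sm.OLS(df[y], sm.add_constant(df[base + extra])).fit()
    f_ratio = ((reduced.ssr - complete.ssr) / len(extra)) / complete.mse_resid
    p_value = stats.f.sf(f_ratio, len(extra), complete.df_resid)
    return f_ratio, p_value   # small p-value: the extra group earns its place
```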

Inputs Changing cells Objective cell Constraints Nonnegativity

There are many ways to develop an LP spreadsheet model. Common elements:

Violations of Regression Assumptions

There are three major issues to deal with in case regression assumptions are violated: How to detect violations of the assumptions. What goes wrong if the violations are ignored. What to do about violations if they are detected. Detection is relatively easy with available graphical tools. What could go wrong depends on the type of the violation and its severity. The last issue is the most difficult to resolve.

the fan shape. This is a violation of one of the assumptions of linear regression analysis, but there are ways to deal with it.

There is a clear upward relationship, as is evident from

The common standard deviation of the errors is σ.

There is one other unknown constant in the model: The variance of the errors, σ²

sensitivity analysis

Third step is

explanatory variables

To help explain or predict the dependent variable, we use one or more

multiple regression

To obtain improved fit to the data, several explanatory variables could be used in the regression equation, moving into realm of

The Statistical Model

To perform statistical inference in a regression context, you must first make several assumptions about the population. Assumptions represent idealization of reality, and are never likely to be entirely satisfied for the population in any real study. If the assumptions are grossly violated, statistical inferences that are based on these assumptions should be viewed with suspicion. Assumptions are crucial to the regression analysis, so it is important to understand exactly what they mean.

logarithm, square root, the reciprocal, and the square.

Typical nonlinear transformations are

X and Y is rxy.

Usual notation for a correlation between variables

constant percentage, regardless of the value of this X or the values of the other Xs.

When any explanatory variable X changes by 1%, the predicted value of the dependent variable changes by a

there is still a dilemma of what to do with them. Deleting them is not always appropriate.

When outliers have been identified,

Equation for a straight line

Y = a + bX

In regression

a is the Y-intercept of the line, and b is the slope of the line - the change in Y when X increases by one unit.

well-scaled model

all of the numbers are roughly of the same magnitude.

Regression analysis

allows you to understand how the world operates and to make predictions simultaneously

standard error of estimate

and denoted se. It is essentially the standard deviation of the residuals.

Constant elasticity relationships

are also called multiplicative relationships, and are firmly grounded in economic theory.

Correlations

are completely unaffected by the units of measurement.

Correlations

are numerical summary measures that indicate the strength of linear relationships between pairs of variables.

Categorical variables

are used when there are two categories (example: gender) or more than two categories (example: race). For each additional category above 2, an additional dummy variable needs to be created.

Nonlinear transformations

are used whenever curvature is detected in scatterplots.

Scatterplots

are useful for detecting relationships that may not be obvious otherwise.

A correlation

between a pair of variables is a single number that summarizes the information in a scatterplot.

Regression analysis

can be applied equally well to cross-sectional and time series data.

decision support system (DSS)

can help users solve problems without having to worry about technical details.

Nonnegativity

constraints imply that changing cells must contain nonnegative numbers.

changing cells

contain values of the decision variables

objective cell

contains the objective to be minimized or maximized.

correlation

with magnitude close to 1 indicates a strong linear relationship.

General linear regression

does not require that any of the variables be the original variables in the dataset.

correlation

equal to -1 (negative correlation) or +1 (positive correlation) occurs only when the linear relationship between the two variables is perfect

correlation

equal to 0 or near 0 indicates practically no linear relationship.

Relevance

(based on established economic or physical theory).

independent variables or predictor variables

Explanatory variables are also called

reduced cost

for any decision variable with value 0 in the optimal solution indicates how much better that variable's objective coefficient must be before that variable enters at a positive level.

F-value

has an associated p-value that allows you to run the test easily; it is reported in most regression outputs.

A solution is feasible

if it satisfies all of the constraints

constraints

impose restrictions on the values in the changing cells.

An interaction variable

should be included in the regression equation if you believe the effect of one explanatory variable on Y depends on the value of another explanatory variable.

Scatterplots and correlations

indicate the presence of a linear relationship, but they do not quantify it.

shadow price

indicates the change in the optimal value of the objective when the right side of some constraint changes by one unit. If a resource constraint is binding in the optimal solution, the company is willing to pay up to some amount, the shadow price, to obtain more of the resource.

Durbin-Watson (DW)

is a numerical measure used to check for lag 1 autocorrelation. It is usually positive, with DW statistic less than 2. General rule: When the number of observations is about 30 and the number of explanatory variables is relatively small, a DW statistic less than 1.2 warrants attention.
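
A minimal check with statsmodels' built-in function (the residuals are simulated; the 1.2 cutoff is the rule of thumb from this card):

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

resid = np.random.default_rng(6).normal(size=30)   # stand-in for time-series residuals
dw = durbin_watson(resid)
flag = "warrants attention" if dw < 1.2 else "no strong sign of lag 1 autocorrelation"
print(f"DW = {dw:.2f}: {flag}")
```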

feasible solution

is a solution that satisfies all of the constraints.

R2

is an important measure of fit of the least squares line. Along with the standard error of estimate, it is the most frequently quoted measure in applied regression analysis. It always ranges between 0 and 1. When the residuals are small, R2 is close to 1. It represents the fraction of variation of the dependent variable explained by the regression.

Multicollinearity

is not a problem if you simply want to use a regression equation as a "black box" for predictions.

The corresponding residual

is the difference between the actual and fitted values of the dependent variable

optimal solution

is the feasible solution that optimizes the objective.

The least squares line

is the line that minimizes the sum of the squared residuals. It is the line quoted in regression outputs.

A fitted value

is the predicted value of the dependent variable

An interaction variable

is the product of two explanatory variables.

feasible region

is the set of all feasible solutions

Regression analysis

is the study of relationships between variables. It is one of the most useful tools for a business analyst because it applies to many situations

outliers

observations that fall outside of the general pattern of the rest of the observations

the value of the explanatory variable

occasionally the variance of the dependent variable depends on

Multicollinearity

occurs when there is a fairly strong linear relationship among a set of explanatory variables. In this case, the relationship between the explanatory variable X and the dependent variable Y is not always accurately reflected in the coefficient of X. It depends on which other Xs are included or not included in the equation.

Decision variables

or the variables whose values the decision maker is allowed to choose. They are the variables that a company must know to function properly; they determine everything else.

Linear regression

quantifies the relationship between the independent and the dependent variable

A learning curve model

relates the unit production time (or cost) to the cumulative volume of output since the production process first began. Empirical studies indicate that production times tend to decrease by a relatively constant percentage every time cumulative output doubles. To model this relationship: Predicted Y = aX^b. Whenever X doubles, the predicted value of Y decreases to a constant percentage of its previous value. This constant is often called the learning rate
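
A small sketch of fitting the learning curve by regression (the data are invented to follow roughly an 81% learning rate): taking logs makes Predicted Y = aX^b linear.

```python
import numpy as np

X = np.array([1, 2, 4, 8, 16, 32], dtype=float)       # cumulative output
Y = np.array([100, 81, 66, 53, 43, 35], dtype=float)  # unit production time

# log Y = log a + b log X, so fit a straight line in log units.
b, log_a = np.polyfit(np.log(X), np.log(Y), 1)
learning_rate = 2 ** b          # each doubling of X multiplies Y by this factor
print(f"a = {np.exp(log_a):.1f}, b = {b:.3f}, learning rate = {learning_rate:.0%}")
```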

Making Predictions

Standard errors can be used to calculate a 95% prediction interval for an individual value and a 95% confidence interval for a mean value. Exactly as in previous chapters, you go out a t-multiple of the relevant standard error on either side of the point prediction.
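
A sketch with statsmodels (simulated data; X0 = 5 is an arbitrary new observation): get_prediction returns both the 95% confidence interval for the mean Y and the wider 95% prediction interval for an individual Y.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 60)
y = 4 + 1.5 * x + rng.normal(size=60)

fit = sm.OLS(y, sm.add_constant(x)).fit()
x_new = np.column_stack([[1.0], [5.0]])            # intercept term plus X0 = 5
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean_ci_lower", "mean_ci_upper"]])   # 95% CI for the mean Y
print(frame[["obs_ci_lower", "obs_ci_upper"]])     # wider 95% PI for a single Y
```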

Durbin-Watson (DW)

statistic is a numerical measure used to check for lag 1 autocorrelation. It is usually positive, with DW statistic less than 2.

Constraints

that must be satisfied: Physical, logical, or economic restrictions, depending on the nature of the problem

Objective function

to be optimized - maximized or minimized

Time series data

are treated somewhat differently, because variables are usually related to their own past values - a problem of autocorrelation.

dummy variable

is a variable with possible values 0 and 1. It equals 1 if the observation is in a particular category, and 0 if it is not.

infeasible solution

violates at least one of the constraints

Some relationships may not be linear

when points do not cluster around a straight line.

