GBUS 3302 - MODULE 6


robust fit

used to minimize the impact of response outliers. This is available only for continuous responses

hidden layers provide

weights for each input and combine them using a function.

at each neuron or node

weights are applied and a summation takes place, which provides the input for the next layer.

the prediction is

a function of the weighted transformed predictors.

the neuron combines

all of the weighted inputs and uses a specific activation function to calculate an output to be passed on.
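
A minimal NumPy sketch of that per-neuron computation. The input, weight, and bias values are made up for illustration, and TanH (the JMP default mentioned later in this set) stands in for the activation function.

```python
import numpy as np

# Illustrative values only: three inputs arriving at one neuron.
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.04, -0.01, 0.02])   # one weight per input
bias = 0.01

# Combination: each input is weighted, then everything is summed.
z = np.dot(weights, inputs) + bias

# Activation: the summed value is transformed (TanH here) into the
# output that is passed on to the next layer.
output = np.tanh(z)
print(output)
```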

neural networks

black box type methods that have been shown to be very successful in prediction accuracy; applied in finance and engineering. A data-driven method that can be used for classification and prediction.

structure supports

capturing very complex relationships between predictors and the response variable.

basic structure of a Neural Network consists of

input layers, hidden layers, and output layers.

When the desired outcome is not known

it is called unsupervised learning.

Linear functions are

limited because the output is simply proportional to the input

learning rule

modifies the weights according to the input patterns it is presented with.

neural network is made up of

neurons and layers

input layers

predictors

supervised learning network

processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights that control the network. This process occurs over and over as the weights are continually tweaked.

The weights w are typically initialized to

random values in the range -0.05 to +0.05.
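
As a quick illustration, a NumPy sketch of that initialization; the layer sizes here are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical layer sizes: 4 predictors feeding 3 hidden nodes.
n_inputs, n_hidden = 4, 3

# Every weight starts as a small random value in [-0.05, +0.05].
W = rng.uniform(-0.05, 0.05, size=(n_inputs, n_hidden))
print(W.round(3))
```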

neurons within the network are

simple processing units, which take one or more inputs and produce an output

three functions in JMP

(1) The default TanH function is often a good choice because it normalizes all inputs to a range of -1 to +1. (2) Linear transfer functions are appropriate when the response is a linear function of the predictors. (3) The Radial Basis Function (Gaussian in JMP) is useful when predictors are continuous with measurement errors that need to be smoothed.
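
The three transfer functions can be written out as plain Python, using the formulas quoted from JMP later in this set; the function names are illustrative, not JMP's.

```python
import numpy as np

def tanh_activation(x):
    # Hyperbolic tangent: (e^(2x) - 1) / (e^(2x) + 1); output in (-1, 1).
    return np.tanh(x)

def linear_activation(x):
    # Identity: the linear combination of inputs passes through untouched.
    return x

def gaussian_activation(x):
    # Gaussian (radial basis) function: e^(-x^2).
    return np.exp(-x ** 2)

x = np.linspace(-3, 3, 7)
for f in (tanh_activation, linear_activation, gaussian_activation):
    print(f.__name__, f(x).round(3))
```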

When the desired outputs are known

we have supervised learning.

neural networks have

some sort of training rule: they learn from examples and exhibit some capability for generalization beyond the training data

One disadvantage of the neural network is

that the output is a very complicated function of the input that cannot be easily expressed in closed form. That means that although we usually get very good predictions, they will be hard to explain to management. If a manager wants to know which factors make the prediction good, we don't have a satisfactory answer. But there are many cases where we don't need to know what makes a prediction good. For instance, if Facebook or Google wants to predict which ad to place for an individual visitor, all one needs to know is that the algorithm maximizes the probability that a visitor will click on the ad, not why she clicked.

For each observation,

the model produces predictions, which are then compared with the actual response value. Their difference is the error for the output node. Popular approaches to estimating weights in the networks involve using the errors iteratively to update the estimated weights. In particular, the error for the output node is distributed across all the hidden nodes that led to it, so that each node is assigned responsibility for part of the error. Each of these node-specific allocations is then used for updating the weights.
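
A minimal sketch of that iterative, error-driven updating for a one-hidden-layer network, using plain gradient-descent backpropagation as a stand-in for JMP's internal fitting; the data, learning rate, and epoch count are fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated data: 20 observations, 3 predictors, continuous response.
X = rng.normal(size=(20, 3))
y = np.tanh(X @ np.array([0.5, -0.3, 0.8]))   # made-up true relationship

# Weights start as small random values in [-0.05, +0.05], as noted above.
W1 = rng.uniform(-0.05, 0.05, size=(3, 4))    # input layer -> 4 hidden nodes
W2 = rng.uniform(-0.05, 0.05, size=(4, 1))    # hidden nodes -> output node
lr = 0.1                                      # made-up learning rate

for epoch in range(500):           # the same data is processed many times
    H = np.tanh(X @ W1)            # hidden-layer outputs (TanH activation)
    pred = H @ W2                  # prediction at the output node
    err = pred - y[:, None]        # error at the output node

    # The output error is distributed back across the hidden nodes that
    # led to it; each node-specific share updates the weights feeding it.
    grad_W2 = H.T @ err / len(X)
    hidden_err = (err @ W2.T) * (1 - H ** 2)   # TanH derivative
    grad_W1 = X.T @ hidden_err / len(X)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(float(np.mean(err ** 2)))    # training error after the final pass
```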

For categorical variables,

the platform automatically creates indicator variables.
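
Outside JMP, the same indicator coding can be reproduced by hand; a pandas sketch with a made-up column:

```python
import pandas as pd

# Hypothetical categorical predictor with three levels.
df = pd.DataFrame({"region": ["east", "west", "south", "east"]})

# Indicator (dummy) variables: one 0/1 column per level.
indicators = pd.get_dummies(df["region"], prefix="region", dtype=int)
print(indicators)
```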

During the training of a network

the same set of data is processed many times as the connection weights are continuously refined

Transform Covariates

transforms all continuous variables to near normality

A neural network consists of input layers, hidden layers, output layers, and neurons or nodes. Neural networks can be very good predictors when it is not necessary to describe the functional form of the response surface, or to describe or explain the relationship between the inputs and the response. True False

true

Which of the following is true of neural networks? Inputs are weighted at so-called neurons to produce an output for the next layer. Neural networks have input layers, hidden layers, and output layers. All are features of neural networks. There may be one or more hidden layers before an output is computed.

All are features of neural networks. Correct answer.

Penalty Method

Choose the penalty method. To mitigate the tendency neural networks have to overfit data, the fitting process incorporates a penalty on the likelihood.

The network processes the outputs and compares its resulting inputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights that control the network. Select one: True False

FALSE

The neuron simply adds together all of the inputs and calculates an output to be passed on to the next layer. Select one: True False

FALSE

The weights w are typically initialized at the value of zero. Select one: True False

FALSE

How are inputs used for creating an output in neural networks? Inputs are summed up to produce an output. Inputs are combined using weights. Inputs are averaged to produce an output. Inputs are combined using a transfer function.

Inputs are combined using weights.

neurons may weight the input in a

linear or nonlinear fashion through several layers

output layers

response variable

Select one. One must always use the model with higher AUC. None are good conclusions. Since the neural Network has only slightly lower sensitivity and slightly lower AUC than the boosted NN, the simpler neural network might be preferable. Since the neural Network has slightly lower sensitivity and lower AUC than the boosted NN, the boosted neural network is definitively the better choice.

Since the neural Network has only slightly lower sensitivity and slightly lower AUC than the boosted NN, the simpler neural network might be preferable. Correct answer.

Number of Tours

Specify the number of times to restart the fitting process, with each iteration using different random starting points for the parameter estimates. The iteration with the best validation statistic is chosen as the final model.
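
A sketch of that restart logic; fit_network is a hypothetical stand-in for a single training run and returns a random stub in place of a real validation statistic.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_network(seed):
    """Hypothetical stand-in for one full training run: returns a
    fitted model and its validation statistic (here, a random stub)."""
    return {"seed": seed}, rng.random()

n_tours = 5
best_model, best_stat = None, -np.inf

# Restart the fitting process n_tours times from different random
# starting points; keep the tour with the best validation statistic.
for tour in range(n_tours):
    model, stat = fit_network(seed=tour)
    if stat > best_stat:
        best_model, best_stat = model, stat

print(best_model, round(best_stat, 3))
```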

To mitigate the tendency neural networks have to overfit data, you should use the absolute or weight decay penalty method when there are many predictors, but only a few contribute much to the output. True False

true

A neural network consists of input layers, hidden layers, output layers, and neurons. Select one: True False

TRUE

Neural network computing uses specified activation functions applied to the sum of weighted inputs to a layer (at so-called neurons or nodes) to obtain outputs which provide the input for the next layer. Select one: True False

TRUE

Neural networks are highly flexible and generally have excellent predictive capabilities. However, one drawback is that neural networks have a tendency to overfit the data. Select one: True False

TRUE

The neural network processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights that control the network. Select one: True False

TRUE

The weights are typically initialized to random values in the range -0.05 to +0.05. Select one: True False

TRUE

To mitigate the tendency neural networks have to overfit data, you should use the absolute or weight decay penalty method when there are many predictors, but only a few contribute much to the output. Select one: True False

TRUE

The three activation functions used in the hidden layer are

TanH, which is the default function, linear, and Gaussian.

gaussian

The Gaussian function. Use this option for radial basis function behavior, or when the response surface is Gaussian (normal) in shape. The Gaussian function is e^(-x^2), where x is a linear combination of the X variables.

Select one. The Boosted NN is slightly better than all other models for all cutoff points. The cutoff value of the boosted NN may be changed to obtain a sensitivity above 90% with a specificity of 40%. The sensitivity of the neural Network is 65%. The specificity of the boosted NN is 87%.

The cutoff value of the boosted NN may be changed to obtain a sensitivity above 90% with a specificity of 40%.
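
A sketch of the underlying idea, choosing a cutoff from a ROC curve with scikit-learn; the labels and scores are fabricated, so the exact 90%/40% figures from the course data will not reproduce.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)

# Fabricated labels and predicted probabilities.
y_true = rng.integers(0, 2, size=200)
scores = y_true * 0.3 + rng.random(200) * 0.7

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Lowering the cutoff raises sensitivity (tpr) at the cost of
# specificity (1 - fpr): pick the largest cutoff with tpr >= 0.9.
ok = tpr >= 0.9
print("cutoff:", round(float(thresholds[ok][0]), 3),
      "specificity:", round(float(1 - fpr[ok][0]), 3))
```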

TanH

The hyperbolic tangent function is a sigmoid function. TanH transforms values to be between -1 and 1, and is the centered and scaled version of the logistic function. The hyperbolic tangent function is (e^(2x) - 1)/(e^(2x) + 1), where x is a linear combination of the X variables.

linear

The identity function. The linear combination of the X variables is not transformed. The Linear activation function is most often used in conjunction with one of the non-linear activation functions. In this case, the Linear activation function is placed in the second layer, and the non-linear activation functions are placed in the first layer. This is useful if you want to first reduce the dimensionality of the X variables, and then have a nonlinear model for the Y variables. For a continuous Y variable, if only Linear activation functions are used, the model for the Y variable reduces to a linear combination of the X variables. For a nominal or ordinal Y variable, the model reduces to a logistic regression.

Penalty Method

The penalty is λp(β_i), where λ is the penalty parameter and p(·) is a function of the parameter estimates, called the penalty function. Validation is used to find the optimal value of the penalty parameter.
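
A sketch of the two penalty functions named in this set, absolute and squared (weight decay), added to a made-up sum-of-squares loss; in practice λ is chosen by validation as described above.

```python
import numpy as np

def penalized_loss(residuals, beta, lam, penalty="squared"):
    """Sum-of-squares fit plus lambda * p(beta). 'squared' is weight
    decay, p = sum(beta_i^2); 'absolute' is p = sum(|beta_i|), suited
    to many predictors of which only a few contribute much."""
    fit = np.sum(residuals ** 2)
    p = np.sum(beta ** 2) if penalty == "squared" else np.sum(np.abs(beta))
    return fit + lam * p

beta = np.array([0.9, -0.02, 0.0, 1.4])   # made-up parameter estimates
residuals = np.array([0.1, -0.3, 0.2])    # made-up residuals
print(penalized_loss(residuals, beta, lam=0.5, penalty="absolute"))
```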

training set

the set of data that is processed over and over during training as the weights are continually tweaked.

Robust Fit.

Trains the model using least absolute deviations instead of least squares. This option is useful if you want to minimize the impact of response outliers. This option is available only for continuous responses.
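
A toy comparison of the two loss functions; the predictions and the deliberately outlying actual value are fabricated.

```python
import numpy as np

pred = np.array([2.0, 3.1, 4.2, 5.0])
actual = np.array([2.1, 3.0, 4.0, 50.0])   # the last value is an outlier

residuals = actual - pred

# Least squares: the outlier's residual is squared, so it dominates.
least_squares = float(np.sum(residuals ** 2))

# Least absolute deviations (Robust Fit): the outlier counts only in
# proportion to its size, so it has far less influence on the fit.
least_absolute = float(np.sum(np.abs(residuals)))

print(least_squares, least_absolute)
```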

Transform Covariates.

Transforms all continuous variables to near normality using either the Johnson Su or Johnson Sb distribution. Transforming the continuous variables helps mitigate the negative effects of outliers or heavily skewed distributions. See the Save Transformed Covariates option in Model Options.
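
A sketch of the idea using SciPy's Johnson Su distribution on fabricated skewed data; JMP's choice between Su and Sb is internal, so this shows only the Su transform to near normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Fabricated, heavily right-skewed predictor.
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)

# Fit a Johnson Su distribution to the data.
a, b, loc, scale = stats.johnsonsu.fit(x)

# The fitted Johnson transform maps x to an approximately
# standard-normal variable, taming skew and outliers.
z = a + b * np.arcsinh((x - loc) / scale)
print("skewness before:", round(float(stats.skew(x)), 2),
      "after:", round(float(stats.skew(z)), 2))
```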

Training the model involves

estimating the weights that lead to the best predictive results.

at each neuron

every input has an associated weight, which modifies the strength of each input.

network learns by

example as do their biological counterparts

Three common activation functions used in the hidden layer are linear, quadratic and Gaussian. True False

false

