Quizzes

Match the symbols in the Bellman equation: 1) v_pi(s) 2) pi(a | s) 3) p(s', r | s, a) 4) r 5) gamma

1) expected return in state s 2) policy 3) transition probability of the Markov process 4) reward for the state transition 5) discount factor for future rewards
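
For reference, the Bellman equation for the state-value function that these symbols come from, written in the same notation as the question:

    v_pi(s) = sum_a pi(a | s) sum_{s', r} p(s', r | s, a) [ r + gamma * v_pi(s') ]

Reading it left to right mirrors the matching: the value of state s is the policy-weighted, transition-weighted sum of the immediate reward plus the discounted value of the successor state.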

Since policies are only partially ordered, there can be many different optimal policies with different value functions a) True b) False

b

Which of the following is not true? Principal component analysis maps data into a lower dimensional space such that ... a) the new features are linear combinations of the original ones b) the new features have a high correlation with the target output c) the new features explain most of the variance in the data d) the new features are uncorrelated

b
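
As a quick illustration of why b) is the odd one out: PCA is unsupervised and never looks at a target variable. A minimal scikit-learn sketch, with a made-up random dataset and an arbitrary number of components:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))          # 100 records, 5 original features (made-up data)
    pca = PCA(n_components=2)              # keep the 2 directions with the largest variance
    Z = pca.fit_transform(X)               # new features are linear combinations of the old ones
    print(pca.explained_variance_ratio_)   # variance explained per component; no target involved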

Which of the following statements is not true about ordinal values? a) They can be ordered in a meaningful way b) The mean of such values can be calculated c) The median of such values can be calculated d) They are categorical

b

Which of these measures is most sensitive to outliers? a) mode b) mean c) median d) interquartile range

b
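
A tiny numeric example (values made up for illustration) shows how a single outlier drags the mean while the median and mode barely react:

    import statistics

    values = [1, 2, 3, 4, 100]                  # 100 is the outlier
    print(statistics.mean(values))              # 22 -> pulled far towards the outlier
    print(statistics.median(values))            # 3  -> unaffected
    print(statistics.mode([1, 1, 2, 3, 100]))   # 1  -> unaffected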

Which statement is true about the K-Means algorithm? a) The output attribute must be categorical b) All attributes must be numeric c) All attribute values must be categorical d) Attribute values may be either categorical or numeric

b

Any policy that, in every state s, is greedily selecting the action a with the highest q_*(s, a), is an optimal policy a) True b) False

a
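
In the notation of the Bellman question above, such a policy can be written as follows; it is optimal because q_*(s, a) already accounts for acting optimally after taking a:

    pi_*(s) = argmax_a q_*(s, a)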

Assume a classification problem with three classes A, B, and C. The false negatives of a classifier wrt. class A are ... a) the instances in class A that are classified as B or C b) the instances in classes B or C that are classified as A c) The instances in B classified as C and the instances in C classified as B d) depending on whether we want the false negatives of A vs. B or A vs. C

a
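
One way to see this concretely is via a multi-class confusion matrix: the false negatives of class A are everything in A's row except the diagonal entry. A small sketch with made-up labels (the label values are illustrative only):

    from sklearn.metrics import confusion_matrix

    y_true = ["A", "A", "A", "B", "B", "C"]
    y_pred = ["A", "B", "C", "B", "A", "C"]
    cm = confusion_matrix(y_true, y_pred, labels=["A", "B", "C"])  # rows = true class, columns = predicted class
    fn_A = cm[0, :].sum() - cm[0, 0]   # true A predicted as B or C -> false negatives of A (2 here)
    fp_A = cm[:, 0].sum() - cm[0, 0]   # true B or C predicted as A -> false positives of A (1 here)
    print(fn_A, fp_A)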

For the following data set and initial centroids, which two points are the most likely positions of the centroids after running the K-Means algorithm? https://www.dropbox.com/s/qa4jjgp7oia7hhp/Screenshot%202017-11-19%2020.01.05.png?dl=0 a) (0, -4) and (0, 4) b) (-2, 2) and (2, -2) c) (-2, 0) and (2, 0) d) (-2, -4) and (2, 4)

a

Which of the following three algorithms will learn a policy closest to the optimal policy for a given MDP, assuming an epsilon-greedy policy with epsilon=0.1 is used for generating episodes of the MDP and the learning rate is small enough? a) Q-Learning b) Sarsa c) On-policy Monte-Carlo control

a

Which of these is not a measure for the spread of the data? a) mean b) interquartile range c) variance d) standard deviation

a

Which of these statements is true about nominal (categorical) values? a) Their mode can be calculated in a meaningful way. b) They can be ordered in a meaningful way. c) Their median can be calculated in a meaningful way. d) Their mean can be calculated in a meaningful way.

a

Which reinforcement learning method belongs to this update equation? https://www.dropbox.com/s/fpp1vhunbrfsh9x/Screenshot%202017-11-19%2020.05.17.png?dl=0 a) Q-Learning b) TD(0) policy evaluation c) Dynamic programming value iteration d) On-policy Monte-Carlo control e) Sarsa

a
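
The equation itself is only linked, not reproduced here; assuming the screenshot shows the standard one-step Q-Learning update that answer a) implies, it reads, in the notation used elsewhere in this set:

    Q(S, A) <- Q(S, A) + alpha [ R + gamma * max_{a'} Q(S', a') - Q(S, A) ]

The max over the next state's actions is what makes the method off-policy.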

Our training data consists of the points plotted in the following chart: https://www.dropbox.com/s/igbz5qsn5l9156p/Screenshot%202017-11-19%2019.58.11.png?dl=0 Which class would kNN with the Euclidean distance predict for the unlabeled data point (x,y)=(6,1) with k=10 (assuming majority voting with equal weights among neighbors)? a) C b) A c) 50% B and 50% A d) B

a
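
The training points are only available through the linked screenshot, so they cannot be reproduced here; the sketch below only shows how such a prediction would be computed with scikit-learn, with placeholder coordinates and labels that must be replaced by the points from the chart:

    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder training data -- substitute the (x, y) points and class labels from the chart.
    X_train = [[0, 0], [1, 0], [2, 1], [3, 2], [4, 2], [5, 3], [6, 3], [7, 2], [8, 1], [9, 0]]
    y_train = ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"]

    knn = KNeighborsClassifier(n_neighbors=10, metric="euclidean")  # k=10, uniform (equal) weights by default
    knn.fit(X_train, y_train)
    print(knn.predict([[6, 1]]))  # majority class among the 10 nearest training points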

An internal node in a decision tree represents ... a) a class b) a test specification c) an attribute value d) a number of instances

b

For the following data set and initial centroids, which two points are the most likely positions of the centroids after running the K-Means algorithm? https://www.dropbox.com/s/c99uxc17bgk7wq2/Screenshot%202017-11-19%2020.00.26.png?dl=0 a) (-1, -1) and (1, 1) b) (-1, 0) and (1, 0) c) (0, -1) and (0, 1) d) (1, -1) and (1, 1)

b

In reinforcement learning we often need to balance exploitation vs. exploration. In this context, what is exploitation? a) greedily selecting the optimal action b) selecting the action with highest estimated value c) achieving the highest cumulative reward d) finding a loophole in the problem that allows us to always win

b
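
The distinction is easiest to see in an epsilon-greedy action selection sketch: the exploit branch is exactly option b), picking the action with the highest estimated value. The Q table and epsilon value below are illustrative assumptions:

    import random

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        """Exploit with probability 1 - epsilon, explore with probability epsilon."""
        if random.random() < epsilon:
            return random.choice(actions)                 # explore: try a non-greedy action
        return max(actions, key=lambda a: Q[(state, a)])  # exploit: action with highest estimated value

    Q = {("s0", "left"): 0.2, ("s0", "right"): 0.7}       # made-up value estimates
    print(epsilon_greedy(Q, "s0", ["left", "right"]))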

What is the dimensionality of record data? a) the range of possible values of the attributes b) the number of records c) the number of attributes d) 2, because record data is a table

c

Which of the following cases indicates overfitting? a) low training error, low test error b) high test error, the training error does not matter c) low training error, high test error d) high training error, high test error e) high training error, low test error

c

Which reinforcement learning method belongs to this update equation? https://www.dropbox.com/s/olrg2io8nitqq6f/Screenshot%202017-11-19%2020.05.52.png?dl=0 a) Q-Learning b) TD(0) policy evaluation c) Dynamic programming value iteration d) On-policy Monte-Carlo control e) Sarsa

c
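
Again the equation is only linked; assuming it is the dynamic-programming value-iteration backup that answer c) refers to, it reads, in the notation of the Bellman question above:

    v(s) <- max_a sum_{s', r} p(s', r | s, a) [ r + gamma * v(s') ]

Unlike the TD methods, it sweeps over states using the known model p(s', r | s, a) instead of sampled transitions.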

A leaf node in a decision tree represents a) a number of instances b) an attribute value c) a test specification d) a class

d

Accuracy is a misleading measure for the performance of a classifier when ... a) we have many different classes b) the classifier is underfitting c) the classifier is overfitting d) we do not have similar numbers of instances for each class

d
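
A small illustration of d) with made-up numbers: on a 95/5 class split, a classifier that always predicts the majority class already scores 95% accuracy while never detecting the minority class:

    from sklearn.metrics import accuracy_score

    y_true = ["neg"] * 95 + ["pos"] * 5    # heavily imbalanced ground truth (made-up)
    y_pred = ["neg"] * 100                 # classifier that always predicts the majority class
    print(accuracy_score(y_true, y_pred))  # 0.95, despite missing every "pos" instance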

Assume a classification problem with three classes A, B, and C. The false positives of a classifier wrt. class A are ... a) The instances in B classified as C b) the instances in class A that are classified as B or C c) depending on whether we want the false positives of A vs. B or A vs. C d) the instances in classes B or C that are classified as A e) The instances in A classified as A

d

Supervised learning differs from unsupervised learning in that supervised learning requires ... a) at least one input attribute b) input attributes to be categorical c) output attributes to be categorical d) at least one output attribute

d

The K-Means algorithm terminates when ... a) the number of instances in each cluster for the current iteration is identical to the number of instances in each cluster of the previous iteration b) a user-defined minimum value for the sum of squared distances between instances and their corresponding cluster center is reached c) the number of clusters formed for the current iteration is identical to the number of clusters formed in the previous iteration d) the cluster centers for the current iteration are identical to the cluster centers for the previous iteration

d
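
A bare-bones sketch of the loop makes the termination criterion in d) explicit; this is a simplified illustration (it ignores empty clusters, for example), not a production implementation:

    import numpy as np

    def k_means(X, centers, max_iter=100):
        """Lloyd's algorithm: stop when the centers no longer move between iterations."""
        for _ in range(max_iter):
            # Assign every point to its nearest center.
            labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
            # Recompute each center as the mean of its assigned points.
            new_centers = np.array([X[labels == k].mean(axis=0) for k in range(len(centers))])
            if np.allclose(new_centers, centers):  # centers identical to the previous iteration -> terminate
                break
            centers = new_centers
        return centers, labels

    X = np.array([[0.0, -4.0], [0.0, 4.0], [-2.0, 2.0], [2.0, -2.0]])   # illustrative points
    centers, labels = k_means(X, np.array([[-1.0, 0.0], [1.0, 0.0]]))   # two illustrative initial centroids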

Which of the following cases indicates underfitting? a) low training error, low test error b) high test error, the training error does not matter c) low training error, high test error d) high training error, high test error e) high training error, low test error

d

Which reinforcement learning method belongs to this update equation? https://www.dropbox.com/s/jkv7l6qa35ygxkp/Screenshot%202017-11-19%2020.06.58.png?dl=0 a) Q-Learning b) TD(0) policy evaluation c) Dynamic programming value iteration d) On-policy Monte-Carlo control e) Sarsa

d
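
The linked equation presumably shows the constant-alpha Monte-Carlo control update that answer d) refers to; it moves the estimate towards the full sampled return G_t rather than a bootstrapped one-step target:

    Q(S_t, A_t) <- Q(S_t, A_t) + alpha [ G_t - Q(S_t, A_t) ]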

You are training a decision tree classifier with a minimum number of instances n required for creating a new decision node. The results from cross-validation indicate overfitting. To reduce overfitting you ... a) change some other parameter, because n does not influence overfitting b) decrease n c) increase n d) try out both larger and smaller n, because it is not clear which one will reduce overfitting

c
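
In scikit-learn the parameter min_samples_split plays the role of n (min_samples_leaf behaves similarly); the dataset and parameter values below are illustrative, but raising n constrains the tree and is the standard response when cross-validation indicates overfitting:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    for n in (2, 10, 40):  # illustrative values for the minimum number of instances per split
        tree = DecisionTreeClassifier(min_samples_split=n, random_state=0)
        print(n, cross_val_score(tree, X, y, cv=5).mean())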

Our training data consists of the points plotted in the following chart: https://www.dropbox.com/s/zd7t05fjkc0v3yw/Screenshot%202017-11-19%2019.57.45.png?dl=0 Which class would kNN with the Euclidean distance predict for the unlabeled data point (x,y)=(3,6) with k=3 (assuming majority voting with equal weights among neighbors)? a) C b) B c) No prediction because value is out of range d) A

d

Which reinforcement learning method belongs to this update equation? https://www.dropbox.com/s/cgar5n4tw1zz38y/Screenshot%202017-11-19%2020.06.17.png?dl=0 a) Q-Learning b) TD(0) policy evaluation c) Dynamic programming value iteration d) On-policy Monte-Carlo control e) Sarsa

e
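
For comparison with the Q-Learning update above, the Sarsa update that answer e) presumably matches bootstraps from the action A' actually selected in the next state, which is what makes it on-policy:

    Q(S, A) <- Q(S, A) + alpha [ R + gamma * Q(S', A') - Q(S, A) ]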

