Machine Learning True/False Questions


The "winner-takes-all" problem in competitive learning is less likely to occur if weights are initialized to small random values, instead of drawing values from the training data.

False. Drawing small random values means drawing weight vectors from a region close to the origin, while the training data may be somewhere else entirely. By drawing patterns from the training set instead, we make sure that the nodes start with weight vectors in the region(s) where there actually is data.
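A minimal sketch of the two initialization strategies (NumPy; the data and node count are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(loc=5.0, scale=1.0, size=(200, 2))   # training data far from the origin
    n_nodes = 4

    # Small random weights: all nodes start near the origin, far from the data,
    # so one node may end up winning every single pattern (winner-takes-all).
    W_random = rng.normal(scale=0.01, size=(n_nodes, 2))

    # Drawing from the training set: every node starts where there is data.
    W_from_data = X[rng.choice(len(X), size=n_nodes, replace=False)].copy()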

A self-organizing feature map can be trained to separate overlapping classes

False. Even if it could, doing so would actually be wrong: separating overlapping classes would go against the idea of preserving topology (i.e. maintaining the statistical distribution of the data).

Grammatical Evolution evolves grammars

False. GE takes a grammar as one of its inputs and, given a numerical genotype, evolves expressions in the language defined by that grammar.

Reinforcement learning requires that the reward function is deterministic

False. It may be (and often is) stochastic. Values (Q values in Q learning, for example) are defined as the expected sum of future rewards. One could argue that the reward function should be stationary, though, since changing its distribution changes the objective.

In Q Learning, Q value changes are always positive.

False. It was true in lab 2 of this course, but not in general. For example, the Q values may have been initialized to values greater than their final ones (which may very well all be negative), the problem may be non-stationary, or the rewards may be noisy.
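A sketch of the tabular Q-learning update with illustrative numbers, showing a negative change when the initial Q value is optimistic:

    alpha, gamma = 0.1, 0.9
    Q = {('s', 'a'): 10.0, ('s2', 'a1'): 0.0, ('s2', 'a2'): 0.0}   # optimistic start (illustrative)

    reward = -1.0                                    # rewards can be negative or noisy
    target = reward + gamma * max(Q[('s2', 'a1')], Q[('s2', 'a2')])
    delta = alpha * (target - Q[('s', 'a')])         # here 0.1 * (-1.0 - 10.0) < 0
    Q[('s', 'a')] += delta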

The main difference between Q learning and Sarsa is that Q learning is greedy, while Sarsa explores

False. Neither is greedy. A greedy reinforcement learning algorithm would be something of a contradiction in terms, since RL is learning by trial and error.

There is no cognitive component in the PSO algorithm gbest

False. Particles in gbest strive for a combination of their own personal best (the cognitive component) and the best personal best found by any particle in the swarm (the social component).
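A sketch of the gbest velocity update for one particle (NumPy; the coefficients are illustrative), with both terms visible:

    import numpy as np

    rng = np.random.default_rng(1)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social coefficients

    x = rng.uniform(-1, 1, size=5)       # current position
    v = np.zeros(5)                      # current velocity
    p_best = rng.uniform(-1, 1, size=5)  # this particle's own best (cognitive component)
    g_best = rng.uniform(-1, 1, size=5)  # best of any particle in the swarm (social component)

    r1, r2 = rng.random(5), rng.random(5)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = x + v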

RPROP is a second order optimization method

False. Second order would mean that we use derivatives of derivatives, i.e. observe how the slope itself changes. Quickprop does this, for example, but RPROP does not. Actually, RPROP throws away most of the first-order information as well, since it only considers the sign of the first derivative.
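A simplified sketch of the RPROP step-size rule (the constants follow the values usually quoted for RPROP; weight backtracking is left out), where only the signs of the current and previous gradients matter:

    import numpy as np

    eta_plus, eta_minus = 1.2, 0.5
    step_min, step_max = 1e-6, 50.0

    def rprop_step(grad, prev_grad, step):
        # Adapt per-weight step sizes from the sign of the gradient only.
        sign_change = np.sign(grad) * np.sign(prev_grad)
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        # The weight change uses only the sign of the gradient, never its magnitude.
        return -np.sign(grad) * step, step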

In PSO, lbest is more likely to converge prematurely than gbest

False. Since all individuals in gbest (stochastically) strive for the same global best, they tend to move in a tighter group, while lbest leads to a more diverse population.

Evolutionary computing requires that the fitness function is differentiable.

False. That EC does not require this is one of its big advantages.

Binary perceptrons require that the input values are binary

False. The 'binary' in binary perceptron refers to the output of the node (being 0 or 1), not to the inputs.
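A sketch of a binary (threshold) perceptron with real-valued inputs (the weights are illustrative):

    import numpy as np

    def binary_perceptron(x, w, bias):
        # Real-valued inputs and weights; only the output is binary (0 or 1).
        return 1 if np.dot(w, x) + bias >= 0 else 0

    print(binary_perceptron(np.array([0.37, -2.1, 5.6]),
                            np.array([1.0, 0.5, 0.2]), -0.5))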

Self Organizing Feature Maps require batch learning

False. The SOFM implementation used in the lab (in MATLAB) uses batch learning, but this is not required and is in fact rather unusual. SOFM was originally meant to be trained by pattern learning, and is still usually trained that way.

Reinforcement learning requires that the problem to be solved has at least one terminating goal state

False. There are many reinforcement learning problems which don't have terminating states. Elevator scheduling, or network routing, for example. Pole balancing has terminating states, though the goal is actually not to reach them.

The 'winner takes all' problem in unsupervised learning is more likely to occur in self organizing feature maps than in competitive learning

False. It is more likely to occur in competitive learning, since only the winner is moved in that algorithm. In SOFM, the winner's neighbors are also moved towards the input, so the winner drags some friends along.
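A sketch of one SOFM update step (NumPy, 1-D grid for simplicity, illustrative parameters), where a Gaussian neighborhood function makes the winner pull its grid neighbors along:

    import numpy as np

    def sofm_update(W, x, grid, lr=0.1, sigma=1.0):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))   # best matching unit
        grid_dist = np.abs(grid - grid[winner])             # distance on the grid, not in input space
        h = np.exp(-grid_dist**2 / (2 * sigma**2))          # neighborhood function
        return W + lr * h[:, None] * (x - W)                # winner and neighbors move towards x

    W = np.random.rand(10, 2)    # 10 nodes with 2-D weight vectors
    grid = np.arange(10)         # their positions on a 1-D grid
    W = sofm_update(W, np.array([0.8, 0.2]), grid)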

In radial basis function networks, one hidden layer is sufficient.

True. In theory this is true also for MLPs (where more hidden layers are very rarely needed), but for RBF networks it is not just theory; it is a very practical rule.

Randomizing the order in which patterns are presented to the network has no effect in batch (epoch) learning

True, since all weight changes are accumulated until the whole training set has been presented. Addition is commutative.

A reinforcement learning agent trained with a low discount factor will tend to be more short-sighted than an agent trained with a greater discount factor ("short-sighted" here means to favour solutions which give quicker rewards).

True. The discount factor controls the effective distance to the horizon. The rule of thumb is that an agent with discount factor r should be able to solve problems that require planning up to 1/(1-r) steps ahead. So, for example, r=0.9 corresponds to 10 steps.
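A quick check of the rule of thumb (the weight of a reward k steps ahead is r^k, and the weights sum to 1/(1-r)):

    for r in (0.5, 0.9, 0.99):
        horizon = 1 / (1 - r)
        print(f"r={r}: effective horizon ~ {horizon:.0f} steps, "
              f"weight of a reward that far ahead: {r**horizon:.2f}")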

In the PSO variant called lbest, swarm diversity tends to decrease with increased size of the particle neighborhoods.

True. A larger neighborhood means that more particles will tend to strive for the same point, thus making the swarm more tightly grouped (i.e. lose diversity). The extreme case is gbest where the neighborhood graph is a fully interconnected structure.

Q learning and Sarsa are equivalent, if the exploration rate is set to 0.

True. If the exploration rate is 0, the agent is greedy, meaning that it always chooses the action with the highest expected future reward. In that case, both Q learning and Sarsa will update their Q values towards the same target.
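A sketch contrasting the two update targets (illustrative tabular values); with exploration rate 0 the action actually taken is the greedy one, so the targets coincide:

    import numpy as np

    def q_learning_target(Q, s_next, reward, gamma):
        return reward + gamma * np.max(Q[s_next])       # best action in the next state

    def sarsa_target(Q, s_next, a_next, reward, gamma):
        return reward + gamma * Q[s_next][a_next]       # action actually taken in the next state

    Q = {"s2": np.array([1.0, 3.0])}
    a_greedy = int(np.argmax(Q["s2"]))                  # exploration rate 0: always the greedy action
    assert q_learning_target(Q, "s2", 0.0, 0.9) == sarsa_target(Q, "s2", a_greedy, 0.0, 0.9)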

A multilayer perceptron must have at least one non-linear hidden layer to solve the XOR problem

True. In a regular MLP there must be at least two nodes in that hidden layer. If we allow bypass connections from the inputs directly to the output, only one hidden node is required.
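One hand-picked solution as a sketch (threshold units; the weights are illustrative, many others work):

    import numpy as np

    step = lambda z: (z >= 0).astype(int)                 # non-linear (threshold) activation

    def xor_mlp(x):
        h = step(np.array([x[0] + x[1] - 0.5,             # hidden node 1: a OR b
                           x[0] + x[1] - 1.5]))           # hidden node 2: a AND b
        return int(step(np.array([h[0] - h[1] - 0.5]))[0])  # output: OR but not AND

    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, xor_mlp(np.array(x)))                    # prints 0, 1, 1, 0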

If the exploration rate is decreased slowly enough over time, Q learning and Sarsa should converge to the same Q values

True. In the limit when exploration approaches zero, they are the same

Weights are updated more often in pattern learning than in batch learning.

True. Pattern learning updates the weights after every pattern presentation, while batch learning only accumulates the weight changes until the whole batch of patterns has been presented.

In general, Q Learning ends up with greater action values than Sarsa, when trained on the same task with constant learning parameters.

True. Q Learning always adjusts its values towards the maximum value in the next state, while Sarsa adjusts its values towards the value of the action actually taken in the next state (which is most likely, but not necessarily, the maximum one).

In a well-trained self-organizing feature map, nodes that are close to each other in the structure (grid) will also very likely have similar weight vectors

True. SOFMs are supposed to preserve topology, meaning that two patterns that are close to each other should activate nodes that are close to each other on the grid, i.e. that those two nodes have similar weight vectors. When a node wins it drags its grid neighbors with it, towards the same point, thus making the weight vectors more similar.

PSO adapts the particles' velocities rather than their positions

True. This is one of the special characteristics of PSO, with the effect that the search has a kind of built-in momentum. Most other optimization methods operate directly on positions.

One difference between (most) evolutionary computation algorithms and (most) neural network training algorithms, is that evolutionary computation algorithms do not require the objective function to be differentiable.

True. This is because the objective function in EC is only used for selection, not to compute how the parameters of the system should be changed.

Ant colony optimization is mostly used for combinatorial optimization problems

True. Though variants of ACO which solve continuous problems have been proposed, it is mostly used for combinatorial problems, as originally intended.

In Radial Basis Function Networks, the hidden layer is usually trained separately from, and before, the output layer

True. The hidden layer is usually trained by unsupervised learning, for example competitive learning or k-means, to find the positions of the basis functions; then the sizes of the receptive fields are set, and last the output layer is trained.
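A sketch of that two-stage training (scikit-learn's KMeans assumed for the unsupervised stage; the fixed width and the least-squares fit of the output weights are simplifications):

    import numpy as np
    from sklearn.cluster import KMeans

    def train_rbf(X, y, n_centers=10, width=1.0):
        # Stage 1 (unsupervised): place the basis functions, e.g. with k-means.
        centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
        # Hidden activations: Gaussian of the distance to each center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        H = np.exp(-(d ** 2) / (2 * width ** 2))
        # Stage 2 (supervised): fit only the output weights.
        w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
        return centers, w_out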

In general, radial basis function networks are better for on-line learning tasks (continuous learning) than multilayer perceptrons

True. This is because a basis function is local: new information from some other part of the input space does not affect it and is hence not likely to destroy what has been learnt so far. MLPs, on the other hand, tend to destroy old information when new information comes in.

Temporal difference learning tries to minimize the difference between successive predictions, rather than the difference between the current prediction and true value

True. This is the main point of TD learning and what makes it most different from supervised learning.

If a machine learning algorithm A performs better than machine learning algorithm B on a non-empty set of benchmark problems, there must be another non-empty set of problems for which B is better than A.

True. This is the essence of the "no free lunch" theorems: averaged over all possible problems, no algorithm outperforms any other, so an advantage on one set of problems must be balanced by a disadvantage on another.

The boolean logic function (a AND b AND c) OR d can be implemented by a single binary perceptron

True. The function is linearly separable, so a single binary perceptron can implement it.
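One illustrative set of weights: give a, b and c weight 1 each, d weight 3, and use threshold 3. A quick check over all 16 input combinations:

    from itertools import product

    def perceptron(a, b, c, d):
        return int(1*a + 1*b + 1*c + 3*d >= 3)            # weights (1, 1, 1, 3), threshold 3

    for a, b, c, d in product([0, 1], repeat=4):
        assert perceptron(a, b, c, d) == int((a and b and c) or d)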

There is no social component in the PSO algorithm pbest

True. In pbest, each particle strives only towards its own best position found so far (the cognitive component); there is no social term.

If a classification task is such that it is possible to separate the classes with a linear discriminant, then a perceptron trained by Rosenblatt's convergence procedure is guaranteed to find it

True, and it will do so in finite time. This is Rosenblatt's convergence theorem

In general, compact input representations are better than distributed ones when training neural networks

False. Distributed representations (more input values) are usually better; the network should not have to learn how to unpack things.

In general, radial basis function networks perform better than multilayer perceptrons for problems with many input variables.

False. RBF networks have more problems with high dimensionality than MLPs, since their local basis functions must cover the input space and the number needed grows rapidly with the number of input variables.

Radial basis function networks usually require more than one hidden layer

False. More than one hidden layer is extremely uncommon.

Stigmergy does not scale well with the number of communicating agents

False. Since the agents don't need to communicate directly with each other, stigmergy scales very well with population size. There are ant colonies with half a billion individuals.

It is not possible to overtrain a genetic algorithm

False. Any learning system trained on a finite set of training data can be overtrained on that set.

A multilayer perceptron trained by Backprop, with sufficiently many hidden layers and nodes to represent a solution to a given problem, is guaranteed to find that solution (or one equally good) if trained for long enough

False. Backprop is gradient descent and can (and often does) get stuck in local minima. Even worse, the problem may not be differentiable at all, in which case we could still represent the solution but not find it with Backprop.

The main difference between immediate and delayed reinforcement learning is in how often the rewards are received

False. Delayed RL problems require that rewards are associated to sequences of actions, instead of just the latest one as in immediate RL, but the agent may still receive rewards after every state transition.

Growing neural gas is an unsupervised clustering algorithm

True. It grows the network by adding nodes where needed and adapts their positions to the input distribution, without using any target values.

Evolutionary computation algorithms do not usually require the objective function to be differentiable

True. In evolutionary computing the objective function is only used to select individuals, not to decide how they are modified.

Evolutionary computation algorithms are very unlikely to get stuck in local minima

True. There is always at least a theoretical chance that mutation will create an outlier to the population, even if it has prematurely converged.

Any function represented by a multilayer perceptron with a linear hidden layer can be represented by a single layer of perceptrons (i.e. without that hidden layer)

True. Two linear combinations applied after each other can just as well be written as one.
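A quick NumPy check of that statement: composing two linear layers is the same as one layer whose weight matrix is the product of the two (the shapes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))        # input -> linear hidden layer
    W2 = rng.normal(size=(2, 4))        # linear hidden layer -> output
    x = rng.normal(size=3)

    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)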

The "winner-takes-all" problem in unsupervised learning is more likely to occur in self organizing feature maps than in competitive learning

False. In SOFM, the winning node drags its grid neighbors along towards the input, so sooner or later all nodes will be pulled in towards the region where the data is.

In general, genetic algorithms require smaller populations than particle swarm optimization when trained on the same task

False. The search in PSO is more directed and less stochastic than in a GA, which implies that PSO, in general, should require smaller populations.

In PSO, if the neighborhood graph used in lbest were fully interconnected, lbest would be equivalent to gbest

True. A fully interconnected graph means that the social component strives for the global best position found so far, just as in gbest.

