30 Questions to Test a Data Scientist on Tree-Based Models

Which of the following algorithms does not use learning rate as one of its hyperparameters? 1. Gradient Boosting 2. Extra Trees 3. AdaBoost 4. Random Forest A) 1 and 3 B) 1 and 4 C) 2 and 3 D) 2 and 4

D Random Forest and Extra Trees don't have learning rate as a hyperparameter.
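A quick way to check this in practice is to inspect each estimator's hyperparameters. A minimal sketch, assuming scikit-learn is available (the chosen estimators and printout are purely illustrative):

```python
from sklearn.ensemble import (
    AdaBoostClassifier,
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)

# Report whether each ensemble exposes a learning_rate hyperparameter.
for Estimator in (GradientBoostingClassifier, AdaBoostClassifier,
                  ExtraTreesClassifier, RandomForestClassifier):
    has_lr = "learning_rate" in Estimator().get_params()
    print(f"{Estimator.__name__:30s} learning_rate: {has_lr}")
# Expected: True for the two boosting models, False for Extra Trees and Random Forest.
```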

[True or False] Cross validation can be used to select the number of iterations in boosting; this procedure may help reduce overfitting. A) TRUE B) FALSE

A
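For example, one common approach is to cross-validate over the number of boosting iterations (n_estimators in scikit-learn). A minimal sketch on synthetic data, with an illustrative parameter grid:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

# 5-fold cross-validation over candidate numbers of boosting iterations.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200, 400]},
    cv=5,
)
search.fit(X, y)
print("Best number of iterations:", search.best_params_["n_estimators"])
```

Capping the number of iterations at the cross-validated optimum stops boosting before it starts fitting noise.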

True or False: Bagging is suitable for high-variance, low-bias models. A) TRUE B) FALSE

A Bagging is suitable for high-variance, low-bias models, i.e. complex models: averaging many such models reduces the variance.

How do you select the best hyperparameters in tree-based models? A) Measure performance over training data B) Measure performance over validation data C) Both of these D) None of these

B We select hyperparameters based on performance on held-out validation data; performance on the training data alone rewards overfitting.
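As an illustration, the sketch below (synthetic data, hypothetical split sizes and candidate values) scores a few hyperparameter settings on a held-out validation set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Compare candidate max_depth values on validation data, not on training data.
for max_depth in (3, 6, None):
    model = RandomForestClassifier(max_depth=max_depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={max_depth}: "
          f"train={model.score(X_train, y_train):.3f}  "
          f"val={model.score(X_val, y_val):.3f}")
# Pick the setting with the best validation score; the training score alone rewards overfitting.
```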

Which of the following is true when you choose the fraction of observations used to build the base learners in a tree-based algorithm? A) Decreasing the fraction of samples used to build each base learner will result in a decrease in variance B) Decreasing the fraction of samples used to build each base learner will result in an increase in variance C) Increasing the fraction of samples used to build each base learner will result in a decrease in variance D) Increasing the fraction of samples used to build each base learner will result in an increase in variance

A Using a smaller fraction of samples for each base learner makes the individual learners less correlated, so the aggregated ensemble has lower variance.

Which of the following is true about "max_depth" hyperparameter in Gradient Boosting? 1. Lower is better parameter in case of same validation accuracy 2. Higher is better parameter in case of same validation accuracy 3. Increase the value of max_depth may overfit the data 4. Increase the value of max_depth may underfit the data A) 1 and 3 B) 1 and 4 C) 2 and 3 D) 2 and 4

A Increasing the depth beyond a certain value may overfit the data, and when two depth values give the same validation accuracy we always prefer the smaller depth in the final model.

In Random Forest you can generate hundreds of trees (say T1, T2, ..., Tn) and then aggregate the results of these trees. Which of the following is true about an individual tree (Tk) in Random Forest? 1. An individual tree is built on a subset of the features 2. An individual tree is built on all the features 3. An individual tree is built on a subset of observations 4. An individual tree is built on the full set of observations A) 1 and 3 B) 1 and 4 C) 2 and 3 D) 2 and 4

A Random Forest is based on the bagging concept: each individual tree is built on a fraction of the samples and a fraction of the features.
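In scikit-learn these two choices correspond to the bootstrap and max_features arguments. A minimal sketch (synthetic data, illustrative settings):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

rf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,       # each tree is grown on a bootstrap sample of the observations
    max_features="sqrt",  # each split considers only a random subset of the features
    random_state=0,
).fit(X, y)

print(len(rf.estimators_), "trees; max_features per split:", rf.max_features)
```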

Suppose you are using a bagging-based algorithm, say Random Forest, for model building. Which of the following can be true? 1. The number of trees should be as large as possible 2. You will have interpretability after using Random Forest A) 1 B) 2 C) 1 and 2 D) None of these

A Since Random Forest aggregates the results of many individual learners, we would want as many trees as is practical when building the model. Random Forest is a black-box model, so you lose interpretability after using it.

Consider the learning rate hyperparameter and arrange the options in terms of the time taken to build the Gradient Boosting model. Note: the remaining hyperparameters are the same. 1. learning rate = 1 2. learning rate = 2 3. learning rate = 3 A) 1~2~3 B) 1<2<3 C) 1>2>3 D) None of these

A Since the learning rate doesn't affect training time, all three settings would take roughly the same time.
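A rough timing sketch on synthetic data (data sizes are illustrative, and learning rates above 1 are unusual in practice but accepted by scikit-learn):

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, random_state=0)

# With all other hyperparameters fixed, the learning rate only rescales each tree's
# contribution, so fitting time should be roughly constant across these runs.
for lr in (1.0, 2.0, 3.0):
    start = time.perf_counter()
    GradientBoostingClassifier(learning_rate=lr, n_estimators=100, random_state=0).fit(X, y)
    print(f"learning_rate={lr}: {time.perf_counter() - start:.2f}s")
```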

When you use a boosting algorithm you always consider weak learners. Which of the following is the main reason for using weak learners? 1. To prevent overfitting 2. To prevent underfitting A) 1 B) 2 C) 1 and 2 D) None of these

A To prevent overfitting, since the complexity of the overall learner increases at each step. Starting with weak learners implies the final classifier will be less likely to overfit.

In which of the following scenarios is gain ratio preferred over information gain? A) When a categorical variable has a very large number of categories B) When a categorical variable has a very small number of categories C) The number of categories is not the reason D) None of these

A For high-cardinality categorical variables, gain ratio is preferred over information gain because it penalises splits that produce many branches.
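A minimal sketch (the toy counts and helper names are assumptions) of why this works: gain ratio divides information gain by the entropy of the split itself, which grows with the number of categories.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(parent_counts, child_counts_list):
    """Information gain of a split, normalised by the split's own entropy."""
    n = sum(sum(c) for c in child_counts_list)
    info_gain = entropy(parent_counts) - sum(
        sum(c) / n * entropy(c) for c in child_counts_list
    )
    split_info = entropy([sum(c) for c in child_counts_list])  # grows with cardinality
    return info_gain / split_info if split_info > 0 else 0.0

# An ID-like variable that shatters 10 samples into 5 small groups: the raw gain looks
# decent, but the large split_info keeps the gain ratio small.
print(gain_ratio([5, 5], [[1, 0], [0, 1], [1, 1], [2, 1], [1, 2]]))
```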

Suppose you want to apply the AdaBoost algorithm on data D which has T observations. You initially set half the data for training and half for testing. Now you want to increase the number of training data points to T1, T2, ..., Tn where T1 < T2 < ... < Tn-1 < Tn. Which of the following is true about training and testing error in this case? A) The difference between training error and test error increases as the number of observations increases B) The difference between training error and test error decreases as the number of observations increases C) The difference between training error and test error will not change D) None of these

B As we get more and more data, training error increases and testing error decreases, and both converge towards the true error.
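A minimal sketch (synthetic data, illustrative subset sizes) of this learning-curve behaviour with AdaBoost:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Train on progressively larger subsets and watch the train/test gap shrink.
for n in (200, 500, 1000, 2000):
    model = AdaBoostClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    gap = model.score(X_train[:n], y_train[:n]) - model.score(X_test, y_test)
    print(f"n={n}: train-test accuracy gap = {gap:.3f}")
```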

Which of the following is/are true about boosting trees? 1. In boosting trees, individual weak learners are independent of each other 2. It is a method for improving performance by aggregating the results of weak learners A) 1 B) 2 C) 1 and 2 D) None of these

B In boosting trees, the individual weak learners are not independent of each other because each tree corrects the results of the previous trees. Both bagging and boosting can be considered methods for improving the results of base learners.

Consider the hyperparameter "number of trees" and arrange the options in terms of time taken by each hyperparameter for building the Gradient Boosting model? Note: remaining hyperparameters are same 1. Number of trees = 100 2. Number of trees = 500 3. Number of trees = 1000 A) 1~2~3 B) 1<2<3 C) 1>2>3 D) None of these

B Building 1000 trees takes the most time and building 100 trees takes the least, which corresponds to option B.
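A rough timing sketch on synthetic data (sizes are illustrative); boosting adds trees one stage at a time, so fitting time grows roughly linearly with n_estimators:

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, random_state=0)

for n_trees in (100, 500, 1000):
    start = time.perf_counter()
    GradientBoostingClassifier(n_estimators=n_trees, random_state=0).fit(X, y)
    print(f"n_estimators={n_trees}: {time.perf_counter() - start:.1f}s")
```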

Which of the following is true about Gradient Boosting trees? 1. In each stage, a new regression tree is introduced to compensate for the shortcomings of the existing model 2. We can use a gradient descent method to minimise the loss function A) 1 B) 2 C) 1 and 2 D) None of these

C Both statements are true: each stage fits a new tree to the shortcomings (residuals) of the current model, which amounts to gradient descent on the loss function in function space.
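A minimal from-scratch sketch (squared-error loss; the data, depth, and learning rate are illustrative) of both ideas: each stage fits a tree to the residuals, which for squared error are exactly the negative gradient of the loss:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

learning_rate, n_stages = 0.1, 100
prediction = np.full_like(y, y.mean())   # start from a constant model
trees = []
for _ in range(n_stages):
    residuals = y - prediction           # negative gradient of 0.5 * (y - F(x))^2
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)   # small step in the descent direction
    trees.append(tree)

print("Training MSE after boosting:", round(float(np.mean((y - prediction) ** 2)), 4))
```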

In Random Forest or Gradient Boosting algorithms, features can be of any type: for example, a continuous feature or a categorical feature. Which of the following options is true when you consider these types of features? A) Only the Random Forest algorithm handles real-valued attributes by discretizing them B) Only the Gradient Boosting algorithm handles real-valued attributes by discretizing them C) Both algorithms can handle real-valued attributes by discretizing them D) None of these

C Both algorithms can handle real-valued features.

Which of the following is/are true about bagging trees? 1. In bagging trees, individual trees are independent of each other 2. Bagging is a method for improving performance by aggregating the results of weak learners A) 1 B) 2 C) 1 and 2 D) None of these

C Both statements are true. In bagging, the individual trees are independent of each other because each is built on a different subset of the features and samples.

In gradient boosting it is important to tune the learning rate to get optimum output. Which of the following is true about choosing the learning rate? A) The learning rate should be as high as possible B) The learning rate should be as low as possible C) The learning rate should be low, but not very low D) The learning rate should be high, but not very high

C The learning rate should be low but not very low; otherwise the algorithm will take too long to finish training, because you have to increase the number of trees to compensate.
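A minimal sketch (synthetic data, illustrative values) of the trade-off: with the number of trees held fixed, a very small learning rate tends to underfit, so compensating for it means more trees and longer training:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Same number of trees, different learning rates.
for lr in (1.0, 0.1, 0.001):
    model = GradientBoostingClassifier(learning_rate=lr, n_estimators=100,
                                       random_state=0).fit(X_train, y_train)
    print(f"learning_rate={lr}: validation accuracy = {model.score(X_val, y_val):.3f}")
```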

When applying bagging to regression trees, which of the following is/are true? 1. We build N regression trees with N bootstrap samples 2. We take the average of the N regression trees 3. Each tree has high variance and low bias A) 1 and 2 B) 2 and 3 C) 1 and 3 D) 1, 2 and 3

D All of the options are correct: bagging fits a high-variance, low-bias tree on each bootstrap sample and averages their predictions.
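A minimal from-scratch sketch of bagged regression trees (synthetic data; the number of trees is illustrative): N bootstrap samples, one fully grown high-variance/low-bias tree per sample, predictions averaged:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

n_trees = 50
trees = []
for _ in range(n_trees):
    idx = rng.integers(0, len(X), size=len(X))                 # bootstrap sample of the rows
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))  # fully grown tree: low bias, high variance

# Averaging the N trees' predictions reduces the variance of the ensemble.
bagged_prediction = np.mean([tree.predict(X) for tree in trees], axis=0)
print("Bagged training MSE:", round(float(np.mean((y - bagged_prediction) ** 2)), 4))
```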

Which of the following is/are true about the Random Forest and Gradient Boosting ensemble methods? 1. Both methods can be used for classification tasks 2. Random Forest is used for classification whereas Gradient Boosting is used for regression tasks 3. Random Forest is used for regression whereas Gradient Boosting is used for classification tasks 4. Both methods can be used for regression tasks A) 1 B) 2 C) 3 D) 4 E) 1 and 4

E Both algorithms are designed for classification as well as regression tasks.
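In scikit-learn this is reflected in the fact that both ensembles ship classifier and regressor variants (a minimal sketch that just imports them):

```python
from sklearn.ensemble import (
    GradientBoostingClassifier, GradientBoostingRegressor,
    RandomForestClassifier, RandomForestRegressor,
)

# Both families cover classification and regression.
print([cls.__name__ for cls in (RandomForestClassifier, RandomForestRegressor,
                                GradientBoostingClassifier, GradientBoostingRegressor)])
```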

Which of the following algorithms is not an example of an ensemble learning algorithm? A) Random Forest B) AdaBoost C) Extra Trees D) Gradient Boosting E) Decision Trees

E A decision tree doesn't aggregate the results of multiple trees, so it is not an ensemble algorithm.

