Practice Quiz Module 7


Suppose you are dealing with a 4-class classification problem and you want to train an SVM model on the data using the one-vs-all method. How many times do we need to train our SVM model in such a case? 4 3 1 2

4

Suppose you are dealing with a 4-class classification problem and you want to train an SVM model on the data using the one-vs-rest method. Suppose you have the same distribution of classes in the data. Now, say that one training run in the one-vs-all setting takes the SVM 10 seconds. How many seconds would it take to train the one-vs-all method end to end? 40 80 20 60

40
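As a rough sketch of the reasoning behind both answers above (assuming scikit-learn's OneVsRestClassifier and a synthetic 4-class dataset; the data and parameters are illustrative, not from the quiz): one-vs-all fits one binary SVM per class, so 4 classes mean 4 training runs, and at roughly 10 seconds per run that is about 4 × 10 = 40 seconds end to end.

```python
# Hypothetical sketch: one-vs-all SVM training on a 4-class problem.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic 4-class data (illustrative only).
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

ovr = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)

# One binary SVM is trained per class, so 4 classes -> 4 training runs.
print(len(ovr.estimators_))   # 4
# If each run takes ~10 s, end-to-end training is roughly 4 * 10 = 40 s.
```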

What is supervised learning? Some data is labeled but most of it is unlabeled, and a mixture of supervised and unsupervised techniques can be used. All data is labeled and the algorithms learn to predict the output from the input data. All data is unlabeled and the algorithms learn the inherent structure from the input data. It is a framework for learning where an agent interacts with an environment and receives a reward for each interaction.

All data is labeled and the algorithms learn to predict the output from the input data
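A minimal sketch of that labeled-data setting (toy data, scikit-learn assumed; everything here is illustrative): every training example comes with a known output, and the model learns to map inputs to outputs.

```python
# Hypothetical sketch: supervised learning = learn outputs from labeled inputs.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 2], [3, 3]]   # inputs
y = [0, 0, 1, 1]                       # known label for every input

clf = SVC(kernel="linear").fit(X, y)   # learn the input -> output mapping
print(clf.predict([[2.5, 2.5]]))       # predict the label of a new input
```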

The effectiveness of an SVM depends upon: Selection of Kernel Kernel Parameters Soft Margin Parameter C All of the above

All of the above
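A hedged sketch of how those three choices are typically tuned together (scikit-learn's GridSearchCV assumed; the grid values and the iris dataset are illustrative):

```python
# Hypothetical sketch: tuning kernel, kernel parameters, and C together.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "kernel": ["linear", "rbf", "poly"],   # selection of kernel
    "gamma": [0.01, 0.1, 1.0],             # kernel parameter
    "C": [0.1, 1.0, 10.0],                 # soft margin parameter C
}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```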

Which of the following are real-world applications of SVMs? Text and Hypertext Categorization Image Classification Clustering of News Articles All of the above

All of the above
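One of those applications, text categorization, might look roughly like this (a sketch assuming scikit-learn; the tiny spam/ham corpus and labels are purely illustrative):

```python
# Hypothetical sketch: text categorization with a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["cheap meds buy now", "meeting agenda for tomorrow",
        "win a free prize now", "quarterly report attached"]
labels = ["spam", "ham", "spam", "ham"]          # illustrative labels

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)
print(model.predict(["free meds prize"]))        # likely 'spam'
```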

A Support Vector Machine (SVM) can be used for ........... Classification only Regression only Both classification and regression None of the above

Both classification and regression
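A small sketch showing both uses side by side (scikit-learn assumed; the toy data is illustrative):

```python
# Hypothetical sketch: the same SVM idea used for classification and regression.
from sklearn.svm import SVC, SVR

X = [[0.0], [1.0], [2.0], [3.0]]

# Classification: predict a discrete class label.
clf = SVC(kernel="linear").fit(X, [0, 0, 1, 1])
print(clf.predict([[1.6]]))

# Regression: predict a continuous value (Support Vector Regression).
reg = SVR(kernel="linear").fit(X, [0.1, 1.1, 2.0, 2.9])
print(reg.predict([[1.6]]))
```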

Suppose you are using a linear SVM classifier on a 2-class classification problem. You have been given data in which some points are circled in red, representing the support vectors. If you remove the non-red-circled points from the data, will the decision boundary change? True False

False
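The referenced figure is not reproduced here, but the claim can be checked in code. A sketch (scikit-learn assumed, synthetic blob data) that drops every point which is not a support vector and refits: the separating hyperplane stays essentially the same, because only the support vectors determine the solution.

```python
# Hypothetical sketch: non-support-vector points do not define the boundary.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1000).fit(X, y)

# Keep only the support vectors and refit.
sv = clf.support_                       # indices of the support vectors
clf_sv = SVC(kernel="linear", C=1000).fit(X[sv], y[sv])

print(clf.coef_, clf.intercept_)        # original hyperplane
print(clf_sv.coef_, clf_sv.intercept_)  # essentially the same hyperplane
```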

The distance of the vectors from the hyperplane is called the margin, i.e., the separation between the decision line and the closest points of each class. We would like to choose a hyperplane that maximizes the margin between the classes. Which options are true for the margin? (Select two) Hard margin — if the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data so that the distance between them is as small as possible. Hard margin — if the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data so that the distance between them is as large as possible. Soft margin — does not allow some data points to stay on the incorrect side of the hyperplane or between the margin and the correct side of the hyperplane. Soft margin — allows some data points to stay on the incorrect side of the hyperplane or between the margin and the correct side of the hyperplane.

Hard margin — if the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data so that the distance between them is as large as possible. Soft margin — allows some data points to stay on the incorrect side of the hyperplane or between the margin and the correct side of the hyperplane.
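For reference, the standard soft-margin formulation behind the second correct option (textbook notation, not taken from the quiz): the slack variables ξ_i are what allow some points to sit inside the margin or on the wrong side of the hyperplane, while C controls how heavily such violations are penalized. The hard margin is the special case with all ξ_i = 0.

```latex
\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_{i}
\quad\text{subject to}\quad
y_{i}\,(w^{\top}x_{i}+b)\ \ge\ 1-\xi_{i},\qquad \xi_{i}\ge 0 .
```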

Suppose you are using an SVM with a polynomial kernel of degree 2. You have applied it to the data and found that it fits the data perfectly, that is, training and testing accuracy are both 100%. Now suppose you increase the complexity (the degree of the polynomial kernel). What do you think will happen? Increasing the complexity will overfit the data Increasing the complexity will underfit the data Nothing will happen since your model was already 100% accurate None of the above

Increasing the complexity will overfit the data
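A sketch of how this is usually observed in practice (scikit-learn assumed, synthetic noisy data; exact numbers will vary): as the polynomial degree grows, the gap between training and test accuracy typically widens, which is the overfitting signal.

```python
# Hypothetical sketch: higher polynomial degree -> risk of overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, flip_y=0.1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for degree in (2, 5, 10):
    clf = SVC(kernel="poly", degree=degree, coef0=1, C=10).fit(X_tr, y_tr)
    # Training accuracy vs. test accuracy for each degree.
    print(degree, clf.score(X_tr, y_tr), clf.score(X_te, y_te))
```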

Which of the following mathematical techniques allows an SVM to work on non-linear data? Kernel trick Algebra trick Calculus trick

Kernel trick
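A small sketch of the kernel trick in action (scikit-learn assumed; concentric-circle data is a standard illustration, not from the quiz): a linear SVM cannot separate the two rings, while an RBF-kernel SVM does, because the kernel implicitly works in a higher-dimensional feature space.

```python
# Hypothetical sketch: the kernel trick on data that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

print(linear.score(X, y))   # ~0.5: a straight line cannot separate the rings
print(rbf.score(X, y))      # ~1.0: the kernel handles the non-linearity
```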

Do SVMs work for huge datasets? No, SVMs are suited only for small and medium-sized datasets Yes, SVMs are perfect for large datasets

No, SVMs are suited only for small and medium-sized datasets

If I am using all features of my dataset and I achieve 100% accuracy on my training set but only ~70% on my validation set, what should I look out for? Nothing, the model is perfect Overfitting Underfitting

Overfitting

Which options are true for SVMs? (Select two) a. An SVM can classify data points that are not linearly separable b. The distance of the vectors from the margin is called the hyperplane c. An SVM can classify linearly separable data points

An SVM can classify data points that are not linearly separable. An SVM can classify linearly separable data points.

SVMs are less effective when: The data is clean and ready to use The data is noisy and contains overlapping points The data is linearly separable

The data is noisy and contains overlapping points

A kernel function maps low-dimensional data to a high-dimensional space. True False

True
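A concrete sketch of that mapping (plain NumPy; the standard textbook degree-2 example, not from the quiz): the polynomial kernel k(x, z) = (x·z)² on 2-D inputs corresponds to an explicit map φ(x) = (x1², √2·x1·x2, x2²) into 3-D space, and the kernel computes the dot product in that space without ever constructing φ explicitly.

```python
# Hypothetical sketch: a kernel as an implicit map to a higher dimension.
import numpy as np

def phi(x):
    """Explicit degree-2 feature map from 2-D to 3-D."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def poly_kernel(x, z):
    """Degree-2 polynomial kernel, computed directly in the low dimension."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

print(np.dot(phi(x), phi(z)))   # dot product in the 3-D feature space: 16.0
print(poly_kernel(x, z))        # same value, computed without leaving 2-D
```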

Support vectors are the data points that lie closest to the decision surface. True False

True

To use SVMs for regression, you try to fit as many instances as possible on the street (i.e., inside the margin). True False

True
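A sketch of that idea (scikit-learn's SVR assumed; the noisy linear toy data is illustrative): SVR tries to fit as many points as possible inside an ε-wide street around the prediction, and only points on or outside that tube become support vectors.

```python
# Hypothetical sketch: SVR fits a street (epsilon-tube) around the data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 40).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(scale=0.2, size=40)

# epsilon is the half-width of the street; points inside it carry no penalty.
reg = SVR(kernel="linear", epsilon=0.5, C=10).fit(X, y)
print(len(reg.support_), "of", len(X), "points lie on or outside the street")
```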

Suppose you are using a linear SVM classifier on a 2-class classification problem. You have been given data in which some points are circled in red, representing the support vectors. If you remove any one of the red-circled points from the data, will the decision boundary change? No Yes

Yes

Are SVMs sensitive to feature scaling? No, they are not sensitive Yes, they are sensitive

Yes, they are sensitive
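A sketch of why scaling matters (scikit-learn assumed; synthetic data with one feature blown up to a much larger scale): without scaling, distances are dominated by the large feature, and accuracy typically improves once the features are standardized.

```python
# Hypothetical sketch: SVMs are sensitive to feature scaling.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X[:, 0] *= 1000                      # blow up the scale of one feature
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = SVC(kernel="rbf").fit(X_tr, y_tr)
scaled = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)

print("unscaled:", raw.score(X_te, y_te))
print("scaled:  ", scaled.score(X_te, y_te))
```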

What is classification? When the output variable is a real value, such as "dollars" or "weight". When the output variable is a category, such as "red" or "blue", or "disease" and "no disease".

When the output variable is a category, such as "red" or "blue", or "disease" and "no disease"

