SAS - Machine Learning (Practice Exam)


For a new project that you are creating in SAS Model Studio, you wish to use a SAS data set that is not in memory but exists on your connected server (not the local machine). In which tab would this data set be located? a. Data Sources b. Available c. Import d. Load Data

Correct: A (All data sets that exist on a connected server are found in Data Sources. Only CAS tables loaded into memory are seen in Available. Data sets that are located on your local machine can be found in Import. There is no tab named Load Data.)

Which statement is true regarding the Feature Extraction node? a. The Autoencoder method builds a neural network that uses the inputs to reconstruct the inputs. b. The original features are kept along with the new features by default. c. The Principal Component Analysis method can perform both linear and nonlinear transformations. d. It transforms the existing features into a higher-dimensional space.

Correct: A (An autoencoder network is like an MLP network except that its output layer is duplicated from the input layer. The correct answer is: The Autoencoder method builds a neural network that uses the inputs to reconstruct the inputs.)
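To make the structure concrete, here is a minimal NumPy sketch of an autoencoder forward pass (an illustration only, not the network the Feature Extraction node actually builds; all names are hypothetical). The output layer has the same width as the input layer, and training would minimize the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 observations, 5 interval inputs

n_inputs, n_hidden = X.shape[1], 2     # hidden layer = compressed feature space
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))   # decoder weights

hidden = np.tanh(X @ W1)               # encode: 5 inputs -> 2 features
reconstruction = hidden @ W2           # decode: 2 features -> 5 outputs (same size as the input layer)

# Training would minimize ||X - reconstruction||^2; the hidden-layer
# scores are the extracted features passed to downstream nodes.
print(reconstruction.shape)            # (100, 5)
```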

You are modifying a data set so that the variable AGE is imputed using the mean and all other interval inputs are imputed using the median. How can this be done in Model Studio? a. Use a Manage Variables node to assign an imputation method of mean for AGE, and then use an Imputation node which has interval variable imputation set to median. b. Use a Manage Variables node to assign an imputation method of median for all interval inputs and then use an Imputation node which has interval variable imputation set to mean. c. Use an Imputation node which has interval variable imputation set to mean followed by a second Imputation node which has interval variable imputation set to median. d. Use a Manage Variables node to assign an imputation method of mean for AGE followed by a second Manage Variables node which has interval variable imputation set to median for all other interval inputs.

Correct: A (Answer B is not correct since the Manage Variables node assigns median imputation for all interval inputs, and this takes priority over any setting in the Imputation node. This means all interval inputs, including AGE, would be imputed with the median. Answer C is not correct since the first Imputation node would perform mean imputation for all interval inputs, and then no interval inputs would be missing for the second Imputation node. Answer D is not correct since the Manage Variables node only assigns the imputation rules; an Imputation node is required to actually perform the imputation. The correct answer is: Use a Manage Variables node to assign an imputation method of mean for AGE, and then use an Imputation node which has interval variable imputation set to median.)

Within the Variable Selection node, which variable selection technique identifies the set of input variables that jointly explain the maximum amount of variance contained in the data? a. Unsupervised Selection b. Fast Supervised Selection c. Linear Regression Selection d. Gradient Boosting Selection

Correct: A (Options B, C, and D are supervised techniques that focus on explaining the target's variance rather than the variance contained in the input data; Unsupervised Selection does not consider the target. The correct answer is: *Unsupervised Selection identifies the set of input variables that jointly explain the maximum amount of variance contained in the data.*)

Which grow criterion can be used for both interval and categorical target variables? a. CHAID b. Variance c. Chi-square d. Entropy

Correct: A (CHAID can be used for both interval and class targets. Variance applies only to interval targets, while Chi-square and Entropy apply only to class targets.)

A variable, Size, has the following values: "Small," "Medium," and "Large." Size is best characterized as which type of variable? a. Ordinal b. Interval c. Nominal d. Binary

Correct: A (In this case, Size is a categorical variable with 3 levels that have a natural order to them, thus making it ordinal. The correct answer is: Ordinal)

What is the most appropriate interpretation of the Partial Dependence (PD) and Individual Conditional Expectation (ICE) overlay plot for the variable DemPctVeterans (Percent Veterans Region) shown below? a. The variable DemPctVeterans likely has an interaction with at least one other input variable when predicting these observations. b. There is a consistent positive linear relationship for DemPctVeterans and the predicted outcome overall and for these observations. c. There is a consistent negative linear relationship for DemPctVeterans and the predicted outcome overall and for these observations. d. There are no differences for DemPctVeterans when predicting the outcome for these observations.

Correct: A (Intersecting lines in a PD and ICE overlay plot indicate an interaction with at least one other input variable and show differences in the predicted outcome for these observations for the plotted variable. *The correct answer is: The variable DemPctVeterans likely has an interaction with at least one other input variable when predicting these observations.*)

Which of the following is a Global Interpretability plot, as opposed to Local Interpretability? a. Partial Dependence (PD) plot b. Individual Conditional Expectation (ICE) plot c. Local Interpretable Model-Agnostic Explanations (LIME) plot d. Kernel SHAP (Shapley) plot

Correct: A (Partial Dependence plots are based on an aggregation across all observations, thus they provide global interpretability. It is also the only model interpretability plot found under Global Interpretability in the properties pane for Model Studio. All other plots are constructed on individual observations and are found under Local Interpretability in the properties pane.)

Assume a variable has four levels coded as: 1=unmarried, 2=married, 3=divorced, and 4=widowed. Which measurement levels should be selected in Model Studio for this variable? a. Nominal b. Binary c. Ordinal d. Interval

Correct: A (The reason is that each level of the variable simply represents the fact that they are different from each other. No level is greater or smaller than the other level. These levels do not follow any natural order. Therefore, the underlying scale of measurement is nominal in nature.)

A big bank is developing models for fraud detection using decision tree, gradient boosting, and forest models in SAS Viya. Fraud cases comprise around 1% of the population. What is the correct way to preprocess data for proper modelling and model assessment? a. Use oversampling to generate the sample data. Use stratification sampling to separate the sample data into training and validation. b. Use oversampling to generate the sample data. Use random sampling to separate the sample data into training and validation. c. Use stratification sampling to generate the sample data. Use random sampling to separate the sample data into training and validation. d. Use stratification sampling to generate the sample data. Use stratification sampling to separate the sample data into training and validation.

Correct: A (Use oversampling to generate a more balanced sample, which may be 40/60, 50/50, or 30/70, and then use stratified sampling to separate the sample data into training and validation so that the event proportion is preserved in each partition. The correct answer is: *Use oversampling to generate the sample data. Use stratification sampling to separate the sample data into training and validation.*)

Which of the following are *lift-based statistics*? (Choose 2). a. F1 Score b. Gain c. Accuracy d. Percent (%) Response

Correct: A+D (*Gain and Percent (%) Response are lift-based statistics*)

What is another term for a feature in predictive modeling? a. Instance b. Input c. Target d. Outcome

Correct: B (A feature for a model is another term for an input variable. The correct answer is: Input)

What is true regarding changes made in the Data tab of a project in Model Studio? a. Target variables are automatically added to global metadata for use in other Model Studio projects. b. After the partition table has been created, you cannot change a variable with a role of target, segment, or partition. c. Selecting a transform value other than "Default" will cause the transformation to be applied when a supervised learning node runs. d. Selecting an impute value other than "Default" will cause the imputation to be applied when a supervised learning node runs.

Correct: B (Adding variables to global metadata must be done manually. Transformations and imputations are invoked by their respective nodes, not by supervised learning nodes. A lock icon appears in the variable property panel for target, segment, and partition variables after running, and the options are grayed out. The correct answer is: After the partition table has been created, you cannot change a variable with a role of target, segment, or partition.)

If you build a Forest model on 100 inputs, which settings will consider the most inputs per split? a. Figure A b. Figure B c. Figure C d. Figure D

Correct: B (By default, the number of inputs considered per split is the square root of the number of inputs, therefore A would consider 10 inputs. B considers 12 inputs, which is the most shown on the answer choices.)
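As a check of the arithmetic (using the square-root default described above, with p = 100 inputs):

m = \lfloor \sqrt{p} \rfloor = \sqrt{100} = 10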

*Image* Refer to the exhibit below. Which statement is true? a. CustID should be used as a unique identifier. b. City will be rejected by default. c. Margin is left-skewed. d. Country has a high level of variability.

Correct: B (CustID is not completely unique. City, a character variable, exceeds the default maximum cardinality for a nominal variable, which is 20. Margin is most likely right-skewed (mean greater than median). Country only has one level represented in the data set. *The correct answer is: City will be rejected by default.*)

Which optimization method would the Neural Network node use when the number of hidden layers is 3? a. LBFGS b. SGD c. ReLU d. Softplus

Correct: B (LBFGS cannot be used in networks with more than 2 hidden layers and ReLU and Softplus are hidden layer activation functions.)

What is the largest Polynomial Kernel degree possible within a Support Vector Machine node? a. 2 b. 3 c. 4 d. 5

Correct: B (Only polynomial degrees of 2 and 3 are available. Thus 3 is the largest polynomial degree possible.)

*Image* Consider the excerpt below from the pipeline and properties panel. What is true regarding missing values for interval inputs in this pipeline? a. Observations with missing values for interval inputs will be excluded. b. Observations with missing values for interval inputs will be imputed with the mean and included. c. Observations with missing values for interval inputs will be imputed with the method specified in the Data tab and included. d. Observations with missing values for interval inputs will be imputed with the midrange and included.

Correct: B (The "Include missing inputs" box is checked, so A is not correct. There would have to be an Imputation node between the Data and Neural Network nodes in Pipeline in order for C to be correct. The "Midrange" selection under "Input standardization" is not related to missing input values, so D is not correct. Hovering over "Include missing inputs" reveals a note indicating that the mean is used for missing values of interval inputs when the box is checked, so B is correct. *The correct answer is: Observations with missing values for interval inputs will be imputed with the mean and included.*)

You want to predict the rankings of a target variable and have built multiple models. Which selection statistic should be used to compare your models? a. Misclassification Rate b. Gini Coefficient c. Average Squared Error d. KS Statistic

Correct: B (The Gini coefficient is the measure of statistical dispersion in a distribution. A value of 0 expresses total equality, and a value of 1 expresses maximal inequality. This statistic will help judge whether the rankings that you predicted are accurate because the target predictions are judged based on whether the outcomes of each observation are in the desired order compared to the other observations.)
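For reference, the Gini coefficient used for rank-based assessment is commonly related to the area under the ROC curve by the standard identity below (a general relationship, not quoted from SAS documentation):

\text{Gini} = 2 \cdot \text{AUC} - 1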

*Image* Refer to the pipeline diagram below. Which of the following is true about the output from the Save Data node? Select one: a. It contains model fit statistics. b. It contains predicted probabilities. c. It is a permanent table in a SAS library. d. It is a temporary table in a SAS library.

Correct: B (The Save Data node produces a temporary table in a CAS library. Following a decision tree, the table contains predicted probabilities and leaf IDs. *The correct answer is: It contains predicted probabilities.*)

The two graphs shown below are from a study of response rate to a marketing campaign. How much more likely are the top 20% of targeted respondents to purchase the product than a randomly selected sample based on the hold out data set? a. 68% b. 230% c. 20% d. 180%

Correct: B (The answer is derived from the cumulative lift chart at a depth of 20%. The correct answer is: 230%)
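The calculation behind the answer (symbolic; the value 2.3 is read from the exhibit's cumulative lift chart):

\text{Cumulative lift at 20\% depth} = \frac{\text{response rate in the top 20\% of targeted respondents}}{\text{overall response rate}} = 2.3 \;\Rightarrow\; 230\%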

Which statement is true about the activation functions for neural networks? a. The exponential target activation function is appropriate when the target contains negative values. b. The identity target activation function is appropriate when the target error function is normally distributed. c. The rectifier (ReLU) function has a sigmoid shape. d. The softplus activation function has lower and upper limits.

Correct: B (The identity target activation function is appropriate when the target error function is normally distributed.)

When you scale the input variables for a binary target using support vector machines, what happens to the inputs? a. Values are scaled to range from -1 to 1. b. Values are scaled to range from 0 to 1. c. Values are scaled to range from negative infinity to infinity. d. Values are scaled to be positive.

Correct: B (When the Scale inputs option is selected all input variables are scaled to range from 0 to 1, inclusively.)
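The scaling is standard min-max scaling (a general formula, assuming the minimum and maximum come from the training data):

x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \in [0, 1]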

Which statement is true when you analyze unstructured data using the Text Mining node? a. Each document can belong to only one topic. b. The output of the Text Mining node contains SVD scores that can be used as inputs for models. c. You can use the Stem term option to extract entities. d. Topics are created based on terms that are not occurring together.

Correct: B (You can extract new features from the Text Mining node. The output columns contain SVD scores that can be used as input columns for downstream nodes. The correct answer is: The output of the Text Mining node contains SVD scores that can be used as inputs for models.)

*Image* Refer to the exhibit below that shows options for the Transformations node. Which statements correctly describe what happens when this node is run? (Choose 2) Select two: a. The log transformation is applied to all interval input variables. b. The log transformation is applied to all interval input variables unless metadata says otherwise. c. No transformation is applied to any class input variable. d. No transformation is applied to any class input variable unless metadata says otherwise.

Correct: B + D (Metadata always overrules transformations that are defined within the Transformations node. So all interval input variables will have the log transform applied unless metadata says otherwise. Similarly, no transformation will occur for class input variables unless the metadata says otherwise. Metadata can be set using the Manage Variables node or on the Data tab.)

*Image* Refer to the exhibit below. Which of the following is true regarding default Role assignment? a. The variable age is rejected due to high cardinality. b. The variable age is rejected due to redundancy. c. The variable Country is rejected because it is a constant. d. The level for the variable City can be changed to Interval and used as an input.

Correct: C (Age is rejected due to missingness, not cardinality or redundancy. City is a character variable and cannot be set to Interval. Number of Levels equal to 1 and Level of Unary indicate that Country is constant and is rejected because it will not add value when training models. The correct answer is: The variable Country is rejected because it is a constant.)

You wish to determine which model is best at predicting an interval target. Which assessment measure should be used to determine the champion? a. Misclassification Rate b. ROC Index c. Average Square Error d. Gini Coefficient

Correct: C (An interval target is a type of estimation prediction. Estimation predictions use Average Square Error as the fit statistic for comparison. If this were a decision focus, the answer would have been Misclassification. If this were a ranking focus, the answer could have been ROC Index or Gini Coefficient. The correct answer is: Average Square Error)

Given a four dimensional or higher input space (i.e., a scenario with 4 or more input variables) what best describes the geometric shape of a support vector machine model? a. a line b. a plane c. a hyperplane d. a sphere

Correct: C (For a two dimensional input space the SVM model is a line, for a three dimensional input space the SVM model is a plane, and for four dimensional and higher input spaces the SVM model is a hyperplane.)
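In general SVM notation (not specific to the node's options), the decision boundary in a p-dimensional input space is the hyperplane

\{\, x \in \mathbb{R}^{p} : w^{\top}x + b = 0 \,\}, \qquad f(x) = \operatorname{sign}(w^{\top}x + b)

which is a line when p = 2, a plane when p = 3, and a hyperplane for p \ge 4.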

Which statement is true regarding model deployment? a. Models can be registered from a Pipeline tab in Model Studio. b. You can publish an open source model in Model Studio. c. Models can be published from the Pipeline Comparison tab in Model Studio. d. Models must be registered before they are deployed.

Correct: C (Models may be published from the Pipeline Comparison tab but NOT from the Pipelines tab in Model Studio. The correct answer is: *Models can be published from the Pipeline Comparison tab in Model Studio.*)

Which statement is NOT true for an Open Source Code node? a. Python code can be executed. b. Python must be installed to run an Open Source Code node with Python code. c. Open Source Code node executes in CAS. d. A sample of the training data is downloaded to SAS.

Correct: C (The Open Source Code node executes on the SAS Compute Server, not in CAS. The correct answer is: Open Source Code node executes in CAS.)

Which statement is true regarding registering and publishing model? a. Registering a model will enable you to visualize the results in SAS Visual Analytics. b. Publishing a model will enable you to access it with SAS Model Manager. c. Registering a model will enable you to access it with SAS Model Manager. d. Publishing a model will enable you to visualize the results in SAS Visual Analytics.

Correct: C (Registering a model makes the model available in SAS Model Manager. Publishing a model makes the score code available in destinations such as CAS, Teradata, or Hadoop. The correct answer is: *Registering a model will enable you to access it with SAS Model Manager.*)

When "Use missing" is specified for a Support Vector Machine node, how are missing values for class variables handled? a. The missing value is replaced by the highest frequency value. b. The missing value is replaced by the lowest frequency value. c. The missing value is treated as a separate category. d. The missing value is replaced by the median frequency value.

Correct: C (SVMs treat missing values as a separate category. The correct answer is: *The missing value is treated as a separate category.*)

In a Neural Network node, what is the purpose of the Minibatch size option and to which optimization method does it apply? a. It defines the number of weights to calculate the model error and update the model coefficients. It is used in SGD. b. It defines the number of training observations to calculate the model error and update the model coefficients. It is used in LBFGS. c. It defines the number of training observations to calculate the model error and update the model coefficients. It is used in SGD. d. It defines the number of iterations to calculate the model error and update the model coefficients. It is used in LBFGS.

Correct: C (Gradient descent normally computes the model error and updates the model coefficients at each iteration using all training observations. The Minibatch size option defines the number of training observations that SGD uses for each update instead of the full training set. The correct answer is: *It (the Minibatch) defines the number of training observations to calculate the model error and update the model coefficients. It is used in SGD.*)
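A minimal NumPy sketch of minibatch SGD for a linear model is shown below (an illustration of the concept only, not the Neural Network node's internal optimizer; all names and values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                 # training observations
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)                                # model coefficients
learning_rate, minibatch_size, n_epochs = 0.05, 32, 20

for epoch in range(n_epochs):
    order = rng.permutation(len(X))            # shuffle each epoch
    for start in range(0, len(X), minibatch_size):
        idx = order[start:start + minibatch_size]    # minibatch of training observations
        error = X[idx] @ w - y[idx]                  # model error on this minibatch only
        gradient = X[idx].T @ error / len(idx)
        w -= learning_rate * gradient                # update the model coefficients

print(w)   # approaches [1.5, -2.0, 0.5]
```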

As the number of input variables in a problem increases, there is an exponential increase in the number of observations needed to densely populate the feature space. This is referred to as: a. Problem of rare events b. Multicollinearity c. Curse of Dimensionality d. Underfitting

Correct: C (The curse of dimensionality refers to the exponential increase in data required to densely populate space as the dimension increases.)
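A worked illustration of the exponential growth (an informal example, not from the exam): if a dense grid needs k points per input, a space with d inputs needs k^d points.

k^{d}: \quad 10^{2} = 100 \text{ points for 2 inputs}, \qquad 10^{10} = 10{,}000{,}000{,}000 \text{ points for 10 inputs}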

Which data set partition should be used to assess how the final model generalizes to new data? a. Training data set b. Validation data set c. Test data set d. Crossvalidation data set

Correct: C (Training generates the possible models. Validation and cross validation data sets assist in comparing possible models. Test data assesses how the final chosen model performs and generalizes to new data. The correct answer is: *Test data set should be used to assess how the final model generalizes to new data*)

Which statement is true about a forest model? a. Increasing the number of trees makes a better prediction model. b. Increasing the maximum depth makes a better prediction model. c. Forest models are not as interpretable as decision tree models. d. Forest models develop a sequence of decision trees that iteratively improve on the previous tree.

Correct: C (Forest models combine the predictions of many trees, so they are not as interpretable as a single decision tree. Answer D describes gradient boosting; a forest builds its trees independently on bootstrap samples rather than sequentially.)

Which model interpretability tools can be used to help interpret a machine learning model for a single observation? (*Choose two*) a. Variable Importance table b. Partial Dependency (PD) plots c. Local Interpretable Model-Agnostic Explanations (LIME) plots d. Kernel SHAP (Shapley) plots

Correct: C+D (LIME and Shapley plots are locally interpretable whereas Variable Importance and PD plots are globally interpretable. Variable Importance tables and PD plots require the whole data set for calculation.)

Which model can you import using a Score Code Import node? a. R Model b. Python Model c. A SAS program that contains PROC LOGISTIC. d. A SAS analytic store (ASTORE) model.

Correct: D (A Score Code import node can only import either an ASTORE model or a single DATA step file model. The correct answer is: A SAS analytic store (ASTORE) model.)

What is the purpose of the Bonferroni correction during a decision tree split search? a. To adjust for the number of irrelevant inputs. b. To prevent overfitting due to the number of tests needed to test each split point. c. To correct for the number of correlated inputs and allow for surrogate rules. d. To maintain overall confidence by inflating the p-values.

Correct: D (As the number of possible split points increases, the likelihood of obtaining significant values also increases. In this way, an input with many unique input values has a greater chance of accidentally having a large logworth than an input with only a few distinct input values. Statisticians face a similar problem when they combine the results from multiple statistical tests. As the number of tests increases, the chance of a false positive result likewise increases. To maintain overall confidence in the statistical findings, statisticians inflate the p-values of each test by a factor equal to the number of tests being conducted. If an inflated p-value shows a significant result, then the significance of the overall results is assured. This type of p-value adjustment is known as a Bonferroni correction. The correct answer is: Bonferroni Correction - to maintain overall confidence by inflating the p-values.)
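In symbols, with m candidate split points (tests) and raw p-value p, the adjustment described above is:

p_{\text{adjusted}} = \min(1,\; m \cdot p), \qquad \text{logworth}_{\text{adjusted}} = -\log_{10}(m \cdot p) = \text{logworth} - \log_{10}(m)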

Which statement is true about the "Perform Autotuning" property? a. Autotuning is available for all supervised learning methods. b. Autotuning can be performed for all of the model parameters. c. By default, autotuning uses cross validation to assess models. d. User can specify the maximum amount of time to spend on autotuning

Correct: D (Autotuning is not available for logistic regression. Autotuning is performed only for a selected subset of the model parameters, not all of them. By default, autotuning uses validation data (if it exists) to assess models. Autotuning enables users to set the maximum amount of time to spend on hyperparameter tuning. The correct answer is: User can specify the maximum amount of time to spend on autotuning.)

*Image* Based on the information shown in the exhibit below, which statement is true regarding variable selection? a. The combination selection criteria is "Selected by all". b. The combination selection criteria is "Selected by at least 1". c. The combination selection criteria is "Selected by gradient boosting". d. The combination selection criteria is "Selected by majority".

Correct: D (D is correct because variables are selected based on the majority rule. C is incorrect because there is no such criterion. *The correct answer is: The combination selection criteria is "Selected by majority".*)

The input variables have missing values. What should you do before running a Decision Tree node with these input variables? a. impute all missing values using the Impute node b. impute only interval variables using the Impute node but do not impute the class variables c. impute only class variables using the Impute node but do not impute the interval variables d. not impute any missing values because trees can handle them

Correct: D (Do not impute any missing values because trees can handle them as a separate category. The split search criteria for decision trees assign the missing values along one side of a branch at the Splitting node as a category. This is quite different from a regression or neural network, where each input variable is used in a mathematical equation and hence cannot have missing values.)

Which modeling algorithm is an ensemble learning technique? a. Support Vector Machine b. Decision Tree c. Neural Network d. Gradient Boosting

Correct: D (Ensemble learning helps improve machine learning results by combining several models. Gradient Boosting fits a sequence of weak learners to weighted versions of the data. The correct answer is: Gradient Boosting)
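In generic boosting notation (a standard formulation, not SAS-specific), each new weak learner h_m is fit to the errors of the current ensemble and added with a learning rate \nu:

F_{m}(x) = F_{m-1}(x) + \nu \, h_{m}(x)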

Which statement is true when you build a forest model? a. The more trees used, the better the model performs. b. Missing values need to be imputed before model building. c. Each tree should be pruned to increase model accuracy. d. The out-of-bag sample is used to assess the fit of the model.

Correct: D (For each individual tree, the out-of-bag sample is used to form predictions. These predictions are more reliable than those from training data.)

The Neural Network node can use weight decay to avoid overfitting. How are the L1 and L2 regularizations applied? a. L1 penalizes the bias terms. L2 penalizes the weights. b. L1 penalizes the square root of the weights. L2 penalizes the squared weights. c. L1 penalizes the absolute delta of the weights. L2 penalizes the difference of the weights. d. L1 penalizes the absolute value of the weights. L2 penalizes the squared weights.

Correct: D (L2 Regularization (also known as Ridge Regression) penalizes the squared value of the weights (hence the "2" in the name). It tends to drive all the weights toward smaller values. L1 Regularization (also known as Lasso Regression) penalizes the absolute value of the weights (a V-shaped function). It tends to drive some weights to exactly zero (introducing sparsity in the model), while allowing some weights to grow large. The correct answer is: *L1 penalizes the absolute value of the weights. L2 penalizes the squared weights.*)
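The penalty terms added to the error function, in generic notation:

L_{1}:\ \lambda_{1} \sum_{i} |w_{i}| \qquad\qquad L_{2}:\ \lambda_{2} \sum_{i} w_{i}^{2}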

*Image* Refer to the exhibit below. Why did the model creation process finish? a. The SVM node encountered an error. b. The maximum number of iterations was encountered. c. The number of support vectors exceeded the number of support vectors on the margin. d. The model converged to the specified tolerance value.

Correct: D (Max Iterations is set to 25, and the model terminated at step 22 because the delta was less than the tolerance value. *The correct answer is: The model converged to the specified tolerance value.*)

Regarding Misclassification (MCE) and Misclassification (Event) statistics within the Model Comparison Node, which statement is true? a. Misclassification (MCE) considers only the classification of the event level versus all other levels. b. Misclassification (Event) is the true misclassification rate. c. The Misclassification (MCE) statistic is computed in the context of the ROC report. d. For binary targets, Misclassification (MCE) and Misclassification (Event) are the same.

Correct: D (Misclassification (MCE) is the true misclassification rate. That is, every observation where the observed target level is predicted to be a different level counts in the misclassification rate. Misclassification (Event) considers only the classification of the event level versus all other levels. Thus, a non-event level classified as another non-event level does not count in the misclassification. The Misclassification (Event) statistic is computed in the context of the ROC report. That is, at each cutoff value, this measure is calculated. For binary targets, these two measures are the same. The correct answer is: *For binary targets, Misclassification (MCE) and Misclassification (Event) are the same.*)

*Image* Refer to the exhibit below. What are the minimum and maximum number of layers supported in the Neural Network node? a. minimum: 1 maximum: 10 b. minimum: 2 maximum: 10 c. minimum: 0 maximum: 12 d. minimum: 2 maximum: 12

Correct: D (The hidden layers slider allows from 0 to 10 hidden layers. There is also an input layer and an output layer, for a total of 2 to 12 layers.)

In Model Studio, you have multiple pipelines in a project. Which statement is true? a. The Model Comparison node compares only the champion models for each project. b. The Pipeline Comparison tab compares all of the models from each pipeline. c. You can override the champion in a Model Comparison node. d. You can override the champion in a Pipeline Comparison tab.

Correct: D (You can override the champion in a Pipeline Comparison tab.)

*Assume the Target has an event proportion of 2% in the original data. You want to build models where event-based sampling has been used such that the modeling data set will have a 50% event proportion. What are the two ways this can be done using Model Studio? (Select two)* a. - While the project is being created, after the data source has been selected, click the *Advanced* button. - Select the *Event-Based Sampling* option. - Turn on event-based sampling by checking the check box. b. - After the project is created but before a pipeline has been run, go into project settings, select the *Event-Based Sampling* option. - Turn on event-based sampling by checking the check box. c. - After the project is created and after a pipeline has been run, go into project settings, select the Event-Based Sampling option. - Turn on event-based sampling by checking the check box. d. - After the project is created, go into a pipeline and place a Save Data node after the Data Source node. - In the Properties panel turn on event-based sampling by checking the check box - Run the pipeline.

Correct: A+B (Answer C is not correct because the project setting for event-based sampling cannot be changed after a pipeline has been run. Answer D is not correct because the Save Data node does not perform event-based sampling, thus the node does not have a property for event-based sampling.)

Given the Misclassification Matrix below, calculate the misclassification rate on the validation data set. (The value below is calculated correctly; the rate is a decimal, not a percentage.)

The validation misclassification rate is (185 + 1101)/16967 = 1286/16967 ≈ 0.0758.
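A quick check of the arithmetic (a minimal sketch using only the counts quoted above):

```python
# misclassified = off-diagonal counts from the validation misclassification matrix
misclassified = 185 + 1101
total = 16967
print(round(misclassified / total, 4))   # 0.0758
```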

