MR Midterm 2


What are the major reasons that we discussed for taking a sample?

1) Practical reasons (unrealistic to survey an entire population, takes a lot of time, not enough funds) 2) Computational reasons (computers cannot handle such huge amounts of data)

Why should we care about data quality issues? What techniques can we use to identify yea-saying or nay-saying?

"Garbage in, garbage out." = If we don't have quality data, our statistical analyses are pointless because we will not get anything useful out of them. Techniques to identify yea-saying or nay-saying: look for respondents who answer favorably (or unfavorably) regardless of question content, e.g. by including reverse-scaled items; a respondent who agrees with both a statement and its reversal is likely yea-saying.

What is a nominal measure? What rule(s) do nominal measures adhere to?

"In name only" = measures that only use labels and possess only the characteristic of description = the answer describes something [gender (M/F), recall (Y/N/unsure), etc.]. Numbers are assigned only for the purpose of identification (1=Coke, 2=Pepsi, 3=Sprite...).

Psychographic analysis

= Lifestyle analysis = Consumers' way of life, what interests them, and what they like.

What are some potential issues with survey questions? If I give you some sample survey questions, be able to spot some issues with the question and rewrite it in a better way.

- Leading questions (ex: Have you heard about the new course that everyone is talking about?)
- Double-barreled questions (ex: When was the last time you upgraded your computer and printer? Were you satisfied with the food and service?)
- Overstating the case (placing undue emphasis on some aspect of the topic)
- Imprecise or confusing language/answer options (multiple meanings)
- Assumed consequences (ex: Would you like to double the number of job offers you receive as a senior?)
- Overlapping choices (some respondents may have more than one answer)
- Leaving the respondent with no choice (options that are too limited)

Non-response error

- Refusals to participate = not answering anything (or just answering everything the same way to get it over with)
- Break-offs = stopping before the end
- Item omission = not answering certain questions

"Attitude-like" variables

- Value - Quality - Satisfaction

Primary data collection methods

1) Communication (questioning respondents to secure the desired information) Ex: surveys 2) Observation (the situation of interest is watched and the relevant information is recorded) Ex: watching shoppers in a store

Developing the sample plan

1) Define the population 2) Obtain a sample frame 3) Select a sample method 4) Decide on the sample size 5) Draw the sample 6) Validate the sample

3 Considerations for Collecting Data by Communication

1) Degree of structure 2) Degree of disguise 3) Method of administration

4 Considerations for Collecting Data by Observation

1) Degree of structure 2) Degree of disguise 3) Setting 4) Method of administration

Questionnaire Organization/Design Phases

1) Design introduction 2) Arrange questions on questionnaire 3) Pretest questionnaire 4) Reorder/Reword (keep going back and forth between steps 3 and 4 until perfect) 5) Finalize questionnaire

3 Types of Measurement Scales

1) Graphic-rating scales 2) Itemized-rating scales (Likert, Semantic Differential, and Stapel Scales) 3) Comparative-rating scales (Constant Sum Scale)

Why are questionnaires important?

1) If we ask no questions, we never get any information. 2) If we ask improper questions, we get meaningless information. (Need to ask the right question to the right person.)

4 methods of administering questionnaires

1) Interviewer + No Computer = Person Administered 2) Interviewer + Computer = Computer Assisted 3) No Interviewer + No Computer = Self Administered 4) No Interviewer + Computer = Computer Administered

How should we organize our questionnaire? Why do we do this?

1) Introduction
2) Question flow:
- Screens (first questions asked, usually in the intro, to make sure we are surveying the right people)
- Warm-up questions (to make people comfortable and confident in their survey-taking abilities, so they believe the survey will be easy and don't regret agreeing to take it)
- Transitions (placed before major sections or changes of format so everything flows nicely)
- Complicated/difficult-to-answer questions toward the middle/end (respondents who have already put in some effort are motivated to finish the survey by answering them)
- Classification/demographic questions as the last section (some people find these questions sensitive; asked first, they might deter people from taking the survey)

What are the two major types of errors that occur when we are running a questionnaire or survey? How do we differentiate these two types of errors? Can we estimate and control for both types of these errors?

1) Sampling error - errors relating to the fact that we have taken a sample 2) Non-sampling error - all errors in a survey not related to sampling error (data collection errors, data interpretation errors, non-response errors, data handling errors, data coding errors). Sampling error can be estimated and controlled (through sample design and sample size); non-sampling errors cannot be estimated as precisely and must instead be minimized through careful design.

3 Rules for any set of numbers

1) Follow a rank order (4 > 3) 2) Intervals are comparable (4 - 3 = 2 - 1) 3) Absolute magnitudes are comparable (4 is twice as big as 2)

What is disguise? When is it useful?

= The amount of knowledge about the purpose or sponsor of a study communicated to the respondent Disguise is especially useful when... ...knowing the purpose or sponsor is likely to bias respondents' answers. ...re-creating the natural environment is necessary, particularly in experimental research.

Should we care about cheating? How prevalent and to what degree do people cheat according to the research of Ariely?

Yes. Ariely found that most people cheat, but only a little. People cheating do not follow the typical rules of economics (they do not simply try to maximize utility/benefit); cheating has an economics of its own. There is a threshold for cheating, the personal "fudge factor": you can cheat a little bit to get some of the benefits, but not so much that you stop feeling good about yourself. The fudge factor can be manipulated (increased or decreased), e.g. moral reminders decrease cheating.

What is a census?

Accounting for the complete population.

What is an ordinal measure? What rule(s) do ordinal measures adhere to?

Allow us to rank order respondents or their responses. (rule 1 applies)

Motivation

Any inner state that energizes, activates, or directs behavior toward goals (e.g. need, want, desire, impulse).

What major assumption does SRS, systematic sampling, and cluster sampling make about the distribution of the population of interest? What if this assumption is not met? What type of sampling technique can we use?

Assume that qualities of interest are normally distributed. If this assumption is not met, use stratified sampling.

Attitude vs. Opinion

Attitude = someone's evaluation of something Opinion = verbalization of an attitude

What is an example of an interval measured variable?

Attitudes/Opinions Ex: What is your opinion on these brands: Unfavorable (1) 2 3 4 (5) Favorable

Why are absolute magnitudes not comparable for interval measures? (Why does rule 3 not apply?)

Because a true zero origin does not exist. With ratings, you cannot say that someone who rates something a 4 likes it twice as much as someone who rated it a 2. Ex: you can have someone rate something on the scale 1 2 3 4 5 or the scale 6 7 8 9 10; a rating of 4 and a rating of 2 are the same distance apart as 9 and 7, but 9 is not double 7 the way 4 is double 2.

Why use observational research?

Best method for generating valid data about individuals' behavior.

Data quality issues

Break-offs - no more answers after a certain question
Item omissions - the respondent refused to answer certain questions but answered others
Yea-saying or nay-saying patterns - the respondent shows a persistent tendency to respond favorably (yea) or unfavorably (nay) regardless of the question (perhaps they don't really want to take the survey and are just putting whatever)
Middle-of-the-road patterns - the respondent indicates "no opinion" to most questions

How do we differentiate between objective and subjective properties?

Objective properties can be verified (ex: male or female?) Subjective cannot be verified (based more on attitude or opinion)

What is a transition statement?

When shifting topics and/or sections in the questionnaire, clear and understandable transition elements or statements are important to avoid confusion and to make the survey flow nicely.

As market researchers, what can we learn about the "chair of death?"

You can't always trust what people tell you in research. Market research is often too blunt to pick up the distinction between "bad" and "different"; first impressions need interpretation. People said they hated the Aeron chair, but what they meant was that it was so new they weren't used to it. (Same with Kenna: he did badly when subjected to market research because his music was new and different, and the new and different is always most vulnerable to market research.)

What do we mean by classification questions and where do these normally go in a survey? Why?

Classification questions ask about a person's demographics. Ex: gender, race, age, "what is the highest level of education you have attained?" These go at the very end of a survey because some people find these questions sensitive and, if they were at the beginning, they might deter certain respondents from completing the survey.

What is coding? What is a codebook?

Coding = identification of code values that are associated with possible response for each question on the questionnaire. Code Book = identifies (1) the questions on the questionnaire, (2) the variable name or label that is associated with each question, & (3) the code numbers associated with each possible response.

Where did Coke go wrong with the introduction of "New Coke?" How could Coke's market research team have avoided this disaster?

Coke changed the taste of its product without being sure that taste was even the issue, and customers were outraged. This illustrates how difficult it is to figure out what people are really thinking. Coke should not have based the decision on sip-test results. They could have tested differently with a home-use test, sending people home with a case of Coke so they could have the full experience in a non-artificial setting that is more reflective of behavior in the market.

What are the different types of non-probability samples?

Convenience samples = survey whoever is in a certain place at a certain time. Ex: contact the first 120 people you meet in a mall. (Not truly random because you can only reach people who are in the same place at the same time - sample frame error.)
Purposive samples = someone makes an educated guess about what the sample group should look like and creates a sample based on that. Ex: focus groups.
Referral ("snowball") samples = start with a short list of individuals, then have them recommend others to take the survey (build the sample off their networks).
Quota samples = ensure the sample maintains a certain distribution of, e.g., gender or age (often used with convenience sampling, but makes the survey-giver more deliberate about who they choose).

What can we measure with surveys?

- Demographic/socioeconomic characteristics
- Personality/lifestyle characteristics
- Attitudes/opinions
- Awareness/knowledge
- Intentions
- Motivation
- Behavior

What do we mean by measurement?

Determining a description or the amount of some property of an object. (Properties = characteristics, attributes, or qualities of objects). As marketers, we are interested in measuring objects such as: product properties, consumers, competitors, etc.

What is sampling error?

Error related to the fact that we used a sample (there will always be sampling error). It comes from two sources: 1) sample frame error 2) not taking a large enough sample of individuals (our "n" is not high enough)

How might we code closed-ended items like Semantic Differential or Likert scaled items?

Ex: Semantic Differential Scale: What is your overall opinion of Target department stores?
Unfavorable O O O O O O O Favorable
Coding:     1 2 3 4 5 6 7
Ex: How did you learn about this coffee shop?
O newspaper | coding: 1 if checked, 0 if not
O radio     | coding: 1 if checked, 0 if not
O online    | coding: 1 if checked, 0 if not
etc...
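
The check-all-that-apply coding above can be sketched in a few lines (the option names mirror the coffee-shop example; the function name is hypothetical):

```python
# Dummy-coding sketch for a check-all-that-apply item:
# each option becomes its own 0/1 variable, as in the codebook example.
OPTIONS = ["newspaper", "radio", "online"]

def dummy_code(checked):
    # 1 if the respondent checked the option, 0 if not.
    return {opt: int(opt in checked) for opt in OPTIONS}

print(dummy_code({"radio", "online"}))  # {'newspaper': 0, 'radio': 1, 'online': 1}
```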

Data Collection Errors

Fieldworker error:
- Intentional (cheating, leading the respondent)
- Unintentional (misunderstanding, fatigue)
Respondent error:
- Intentional (falsehoods, nonresponse)
- Unintentional (misunderstanding, guessing, attention loss & distractions, fatigue)
Mitigation: careful fieldworker training and supervision, validating completed surveys, pretesting, and clear, unbiased question wording.

Structured communication

Fixed alternative questions - Responses are limited to stated alternatives.

Questionnaire Development Phases

For each research objective: 1) What is the research objective? 2) What properties do we need to measure? 3) What type of measure/scale would be best? 4) Word the question/responses 5) Evaluate the question 6) Reword (keep going back and forth between steps 5 and 6 until perfect)

What's our goal when designing surveys?

GOAL = valid responses To get these, we must minimize question bias and make our questions as low-impact as possible.

What is an example of a nominally measured variable?

Gender, Recall, Favorite brand, Favorite flavor

What are the differences between graphic, itemized, and comparative rating scales? Give an example of each type.

Graphic = a scale on which individuals indicate their rating of an attribute, typically by placing a check or an "X" at the appropriate point on a line running from one extreme to the other. Ex: "Please evaluate each of the following attributes of mp3 players according to how important the attribute is to you personally by placing an 'X' at the position on the horizontal line that most accurately reflects your feelings."
Itemized = a scale on which individuals indicate their rating of an attribute by selecting the response category that best supports their position. Ex: Likert, Semantic Differential, and Stapel scales.
Comparative = a scale requiring subjects to make ratings as a series of relative judgments or comparisons, rating each attribute with direct reference to the other attributes being evaluated. Ex: "Please divide 100 points between the following attributes of smartphones according to the relative importance of each attribute to you."

Awareness/Knowledge

Insight into, or understanding of facts about, some object or phenomenon Usually defined for marketing research by recognition/recall

In what circumstances does it make sense to include a neutral response or a "don't know" option?

It depends on whom you are testing and what you are testing them on: - If you are testing experts in a certain field, you can ask questions without a "don't know" option - But if you are testing, for example, high school students on health care, the "don't know" option is justified because many might not know much about the subject

Settings for observation

Laboratory or natural setting

What measures of central tendency (i.e. mean, median, mode) can be used with a ratio measure?

Mean, median, and mode (mean is most important)

What measures of central tendency (i.e. mean, median, mode) can be used with an interval measure?

Mean, median, and mode (mean is most important)

Measures of central tendency

Mean, median, mode

What is a ratio measure? What rule(s) do ratio measures adhere to?

Measures in which a true zero origin exists (rules 1, 2, and 3 apply)

What is an interval measure? What rule(s) do interval measures adhere to?

Measures in which for each adjacent level the distance is normally defined as one unit. (rules 1 and 2 apply)

What measures of central tendency (i.e. mean, median, mode) can be used with an nominal type of measure?

Mode (you can keep track of most common/frequent answer)

What measures of central tendency (i.e. mean, median, mode) can be used with an ordinal type of measure?

Mode, median (can't get mean from rankings)
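
The level-of-measurement rules above can be checked with Python's statistics module (the ratings data here are hypothetical):

```python
import statistics

# Central tendency sketch: which statistic is meaningful depends on the
# measurement level (mode for nominal; + median for ordinal; + mean for scale).
ratings = [4, 5, 3, 4, 4, 2, 5]  # hypothetical 1-5 interval-scale ratings

print(statistics.mode(ratings))    # 4 (valid even for nominal data)
print(statistics.median(ratings))  # 4 (needs at least ordinal data)
print(statistics.mean(ratings))    # valid only for interval/ratio data
```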

Personality

Normal patterns of behavior exhibited by an individual.

Writing Questions Do's and Don'ts

Need to select appropriate response formats and wording (wording is critical; one word can make a huge difference!). Need questions that are understandable, unambiguous, and unbiased.
DOs:
- Focus on a single topic
- Be brief
- Be grammatically simple
- Be clear and precise
- Use simple words (ex: not "marital status")
- Avoid ambiguity (ex: what is the difference between "often" and "regularly"?)
- Avoid generalizations and estimates (ex: what is the annual per capita expenditure on groceries in your household?)
- Use 7- or 10-point scales (if appropriate) rather than 5-point scales
- For number scales, use labels to anchor items (i.e. 1 = strongly disagree; 7 = strongly agree)
- Use attention checks!
- Use open-ended questions only when necessary
DON'Ts:
- Do not lead the respondent
- Do not use double-barreled questions
- Do not overstate the case
- Be careful about what assumptions people can make about your questions
- Do not allow overlapping choices (choices should be mutually exclusive)
- Do not leave respondents without an answer (choices should be exhaustive)

What are the three major types of measures that we discussed?

Nominal Ordinal Scale (Ratio + Interval)

What is the difference between a probability and a non-probability sample?

Non-probability sample = involves human intervention in selecting individuals; the chances (probability) of selecting members from the population into the sample are unknown.
Probability sample = randomized with no human intervention, so members of the population have a known chance (probability) of being selected into the sample.

How might we change our survey or questionnaire if we are dealing with experts? What things can we do with experts that we should stay away from when dealing with novices?

Only experts are able to reliably account for their reactions Can use specific vocabulary and require more explained, detailed answers

Unstructured communication

Open ended questions - Respondents are free to reply in their own words rather than being limited to choosing from among a set of alternatives.

How might we code open-ended items?

Open-ended items seeking concrete, or factual, responses are relatively easy to code: numeric answers are typically recorded as given by the respondent, while other types of responses are given a specific code number.
Ex: In what year were you born? (code the year)
Ex: How many times have you eaten at Chipotle in the last month? (code the number)
Ex: Name the first 3 coffee shops located in NYC that come to mind. (code as 3 separate variables; assign numbers to represent each coffee shop mentioned)
Open-ended items seeking less structured responses are much more difficult to code. Process to code abstract open-ended questions:
1) Develop initial response categories before reading responses (ex: social reasons, job-related reasons, quality of life, other)
2) Review responses; add, delete, and revise categories
3) Assign code numbers for each category; use these codes to represent responses in the data file
4) Sort responses into categories, using multiple coders
5) Assess interrater reliability (the degree of agreement between coders); low interrater reliability suggests that the categories are not well-defined and need to be more specific
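
Assessing interrater reliability (the last step above) can be sketched as simple percent agreement between two coders; the category codes below are hypothetical, and more robust measures such as Cohen's kappa also exist:

```python
# Interrater reliability sketch: percent agreement between two coders
# who sorted the same open-ended responses into numbered categories.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical category codes assigned to the same ten responses.
coder_a = [1, 2, 2, 3, 1, 4, 2, 3, 3, 1]
coder_b = [1, 2, 3, 3, 1, 4, 2, 3, 1, 1]
print(percent_agreement(coder_a, coder_b))  # 0.8
```

A low score would suggest the categories are not well-defined and need revision.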

What are the major pros/cons of simple random sampling?

PROS: - Guarantees that every member of the population has an equal chance of selection. - Valid representation of the population. CONS: - Have to predesignate each population member. - Basically have to begin with a current and complete list of the population.

Pros/Cons of Computer Administered Questionnaire

PROS:
- Breadth of features
- Relatively inexpensive
- Reduced interview evaluation concern
CONS:
- Computer literacy
- Respondent control
- Lack of monitoring
- High questionnaire requirements

Pros/Cons of Person Administered Questionnaire

PROS:
- Feedback
- Rapport
- Quality control
- Adaptability
CONS:
- Errors
- Speed
- Cost
- Interview evaluation

Pros/Cons of Self Administered Questionnaire

PROS:
- Reduced cost
- Respondent control
- No interview evaluation apprehension
CONS:
- Respondent control
- Lack of monitoring
- High questionnaire requirements

Pros/Cons of Computer Assisted Questionnaire

PROS:
- Speed
- Relatively error-free
- Pictures, videos, graphics
- Quick data capture
CONS:
- Technical skill
- High set-up costs

What is the difference between a parameter and a statistic?

Parameter = Summary description of a characteristic or measure of the population. Statistic = Summary description of a characteristic or measure of the sample. **A sample is drawn from a population so that the sample statistics can be used to make inferences about that population**

What is an example of a ratio measured variable?

Purchases in a time period, dollars spent, miles traveled, years of college education. Ex: Divide 100 points among these soft drinks according to your likelihood of purchasing in the next week

What information do we need to provide in the introduction of a survey?

- Purpose/sponsor
- How/why the respondent was selected
- Request for participation (including incentives)
- Assurance of anonymity or confidentiality
- Screening questions

What is Qualtrics?

Qualtrics helps us to create and distribute electronic surveys/questionnaires. Can design many different types of questions. Tools: - Page breaks vs. adding a new block - Force responses - Timing - Randomization of questions - Survey flow - Activating & distributing surveys

What is an example of an ordinal measured variable?

Rankings, Comparative preferences Ex: Rank the following soft drinks from 1 (Most Liked) to 4 (Least Liked)

Measuring awareness

Recognition - confirming having seen the brand/communication/promotion/advertisement when provided a cue.
Recall - an individual's ability to remember the brand/communication/promotion/advertisement. [Unaided recall: "For what products and brands do you remember seeing ads?" Aided recall: "Do you remember seeing ads for any fast food restaurants?"]

How do you calculate a response rate for a survey?

Response Rate = Number of Completed Interviews / Number of Eligible Units in Sample
= Completions / (Completions + [(Completions / (Completions + Ineligible)) * (Refusals + Not Reached)])
Ex: Completions: 400, Ineligible: 300, Refusals: 100, Not Reached: 200
Response rate = 400 / (400 + [(400 / (400 + 300)) * (100 + 200)]) = 400 / (400 + 171.4) ≈ 70%
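
The response-rate formula can be sketched in code (the numbers are from the example above; the function name is hypothetical):

```python
# Response rate sketch using the card's formula: the eligibility rate
# among units of known status is used to estimate how many of the
# refusals and not-reached units were actually eligible.
def response_rate(completions, ineligible, refusals, not_reached):
    eligibility = completions / (completions + ineligible)
    eligible_units = completions + eligibility * (refusals + not_reached)
    return completions / eligible_units

rate = response_rate(completions=400, ineligible=300, refusals=100, not_reached=200)
print(round(rate * 100))  # 70
```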

What do we mean by reverse scaling? Why might we reverse scale an item or question?

Reverse scaling = the use of reverse-coded items on scales. Ex: "Class size is important to me" & "I do not care about class size."
Why use this? It motivates participants to process items more carefully and prevents negative respondent behaviors such as response set, satisficing, and acquiescence.
Response set is the tendency of participants to respond to the set of scale items rather than to individual items. For example, a respondent may have a positive impression of an instructor being evaluated and simply respond to the entire set of items positively rather than processing the nuances of each item individually. Including an item worded in the opposite direction encourages participants to read and process each item more carefully.
Also, people prefer to answer in agreement rather than disagreement, so participants may acquiesce, simply agreeing with an item out of some form of social desirability. Participants may also satisfice, agreeing with an item because doing so requires minimal cognitive effort. Reverse-coded items therefore force participants to process individual items more carefully and accommodate participants who wish to vary their responses rather than always give the same answer.
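
A minimal sketch of how reverse-coded items are flipped back before analysis on a 1-7 scale (item names and scores are hypothetical):

```python
# Reverse-coding sketch: a reverse-worded item ("I do not care about
# class size") is flipped so that a high score always means the same
# attitude before items are combined or averaged.
def reverse_code(score, scale_min=1, scale_max=7):
    return scale_max + scale_min - score

responses = {"class_size_important": 6, "class_size_dont_care": 2}  # hypothetical
aligned = reverse_code(responses["class_size_dont_care"])
print(aligned)  # 6 -- now agrees with the positively worded item
```

If both raw scores had been high, the mismatch after reverse coding would flag a possible yea-saying pattern.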

What is a sample frame? Is there usually perfect agreement between our sampling frame and our population? What is the difference between the two called?

Sample Frame = master source of sample units in the population. Sample Frame Error = the degree to which the sample frame fails to account for all of the population. Created when our sample frame does not perfectly overlap with our population. Ex: if population is all auto dealers in NYC and the sampling frame used is the telephone book... you are missing out on all businesses not in the telephone book (newer businesses)... you may also get businesses in your frame that are not within the population of interest (may get some businesses outside of NYC, like in Long Island or Jersey)

What do we mean by sample and sample unit?

Sample = a subset of a population that suitably represents the entire group. Sample unit = the basic level of investigation (one person, a school, etc.).

Scale measures (2 types)

Scale measure = Measures in which the distance between each level is known Interval - measures in which for each adjacent level the distance is normally defined as one unit Ratio - measures in which a true zero origin exists

What are the four major types of probability samples we discussed? How do we implement each of these?

Simple random sample (SRS) = use a truly random process to select people from the population (bingo balls, rolling a die, a random number generator) so that the probability of being selected into the sample is equal for all members of the population. Probability of selection = sample size / population size.
Systematic sample = makes SRS a bit easier. 1) Start with a population that is already in a list 2) Choose a random starting point 3) Compute the skip interval (skip interval = population list size / sample size) 4) Select members starting at the random starting point and then every skip interval thereafter.
Cluster sample = the population is divided into mutually exclusive and exhaustive subsets (clusters), and a random sample of subsets is drawn. Clusters are heterogeneous within (so each cluster is representative of the population) and homogeneous between groups (each cluster is equally representative of the population). 1) First divide into groups so that each group is representative of the population 2) Then choose a random group.
Stratified sample = use when the population is skewed (key properties are not normally distributed). The population is divided into mutually exclusive and exhaustive subsets (strata) from which SIMPLE RANDOM SAMPLES are drawn from EACH subset. Strata are homogeneous within and heterogeneous between with respect to the key variable(s).
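
The systematic sampling steps can be sketched as follows (the numbered sampling frame is hypothetical, and this simple version assumes the list size divides evenly by the sample size):

```python
import random

# Systematic sampling sketch: random starting point, then every
# k-th member of the population list, per the steps on the card.
def systematic_sample(population, n):
    k = len(population) // n         # skip interval = list size / sample size
    start = random.randrange(k)      # random start within the first interval
    return [population[start + i * k] for i in range(n)]

population = list(range(1000))       # hypothetical numbered sampling frame
sample = systematic_sample(population, 50)
print(len(sample))  # 50
```

Every member still has an equal chance (n/N) of selection, which is what makes this a probability sample.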

Advantages of surveys

- Standardization
- Ease of administration
- Ability to tap into the "unseen"
- Ease of analysis
- Sensitivity to subgroup differences

What is questionnaire design? What are the two major phases of questionnaire design?

A systematic process in which we contemplate various formats/factors, carefully word questions, and organize the survey layout.
2 major phases:
1) Questionnaire development (coming up with the actual questions)
2) Questionnaire organization/design (the sequence of statements and questions that make up the questionnaire)

What is a population?

The entire group under study as defined by research objectives.

What do we mean by question bias?

The language of a question can influence people's responses, even in very small details. (Ex: leading questions, assumptive questions, overstating the case, etc.) "Did you see A broken taillight?" vs. "Did you see THE broken taillight?" <-- assumptive

Likert Scale

Type of Itemized Interval Scale Measure = Indicate degree of agreement or disagreement for a series of statements.

Stapel Scale

Type of Itemized Interval Scale Measure = relies not on bipolar terms but on positive and negative numbers. Ex: -5 -4 -3 -2 -1 [Friendly Professors] +1 +2 +3 +4 +5

Semantic Differential Scale

Type of Itemized Interval Scale Measure = Series of bipolar adjectives in which respondents indicate their impressions of each property.

Ethics of disguise. How do we reconcile these?

Use of disguise amounts to a violation of the respondent's right to know. Reconcile with DEBRIEFING: The process of providing appropriate information to respondents after data have been collected using disguise.

Why are probability samples considered the "gold standard" in marketing research? What do probability samples allow us to do?

We are able to assume that our sample will be representative of the larger population. This allows us to make inferences about the population (a non-probability sample does not).

How do we interpret our results? What do we need to interpret the information we gather from ratings scales?

We need a comparative/normative standard, or norm, to compare our raw scores to. Population-based norms (Ex: comparing a survey of NYU students to a survey of students from another school) Time-based norms (Ex: comparing a survey of NYU students in 2015 to a survey of NYU students in 2010) *Think Little Jerry Seinfeld* Ex: A restaurant received an average score of 4.33 on a 1 - 7 service quality scale, where 1 = "Poor Service" and 7 = "Excellent Service." Is this score good or bad? What if 75% of similar restaurants posted higher scores than 4.33? What if the restaurant had a score of 3.14 a year earlier?

When is a questionnaire considered "complete?"

When every question is answered ???

What is reliability of a measure? How would you explain this concept to someone who has never heard of reliability before?

When we have a reliable measure the respondent will respond in the same or a very similar manner to identical or near-identical questions. (Consistency of a measure). Observed response = Truth + Error

What is validity of a measure? How would you explain this concept to someone who has never heard of validity before?

When we have a valid measure we are measuring what we intended to measure (Truthfulness of a measure). Observed Response = Truth + Error

What are intentions? What is the behavior-intention gap?

Intentions = whether or not someone is planning to engage in some action in the future. People do not always act the way they claim they intend to (people get tired or lazy, change their minds, etc.), so you cannot always trust intentions; this is the behavior-intention gap.

Do surveys have any influence on consumers? What benefits might come from using a survey or questionnaire? Would we expect these benefits to have strong external validity?

Yes, surveys influence consumers. Some benefits: increased awareness; respondents might realize their already-positive attitude about a company/brand, or their attitude might improve; measurement-induced judgments --> increased sales --> increased profitability. Strong external validity? Not for every company: surveying dissatisfied customers may strengthen negative emotions and decrease profitability.

