marketing research final exam


ethical issues in the observation of humans

Issues: -respondent's right to privacy (by observing people, are you violating their right to privacy?) -contrived observation as entrapment (when mystery shoppers go to a customer service desk with a complaint--is that entrapment? probably not) Researchers feel comfortable collecting observational data if: -the observed behavior is commonly performed in public (putting cameras in stores is fine--you're not invading privacy) -the person being observed remains anonymous (you don't need to know who they are--just what they're doing) -the person has agreed to be observed (if they're hooked up to brainwave monitors, they've agreed to be part of a study) -the person has been adequately notified that their behavior is being observed (page 217)

effects

Main effects: -the experimental difference in DV means between the different levels of any single IV; when an individual IV causes the DV to change
Interactions: -when two or more independent variables together impact the dependent variable (intensity of lighting and color of walls in the store) -differences in a DV due to a specific combination of independent variables
****a difference in the height of the lines means there's a main effect (blue walls led to a higher response)
****if the slopes are different, there's an interaction effect
in this case, there's both a main effect and an interaction effect: the best option is bright lights and blue walls in stores
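The main-effect and interaction logic above can be sketched numerically. This is a minimal illustration with made-up cell means for the lighting-by-wall-color example; the numbers and factor levels are invented, not from the course.

```python
# Hypothetical 2x2 store-atmosphere experiment: lighting (dim/bright) x wall
# color (white/blue). Cell values are mean purchase-intent scores (invented).
cell_means = {
    ("dim", "white"): 3.0, ("dim", "blue"): 4.0,
    ("bright", "white"): 3.5, ("bright", "blue"): 5.5,
}

def marginal_mean(level, position):
    """Average the cells that share one level of one IV (a marginal mean)."""
    vals = [m for cell, m in cell_means.items() if cell[position] == level]
    return sum(vals) / len(vals)

# Main effect of wall color: difference between the marginal means.
color_effect = marginal_mean("blue", 1) - marginal_mean("white", 1)  # 1.5

# Interaction: does the effect of color depend on lighting? Compare the
# simple effect of color under each lighting level (the "slopes").
color_under_dim = cell_means[("dim", "blue")] - cell_means[("dim", "white")]        # 1.0
color_under_bright = cell_means[("bright", "blue")] - cell_means[("bright", "white")]  # 2.0
interaction = color_under_bright - color_under_dim  # 1.0 -> slopes differ

print(color_effect, interaction)  # 1.5 1.0
```

A nonzero `color_effect` is the "difference in the height of the lines"; a nonzero `interaction` means the lines have different slopes, matching the note that bright lights plus blue walls is the best combination here.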

maturation effect

a function of time and the naturally occurring events that coincide with growth and experience

measurement

the process of describing some property of a phenomenon of interest, usually by assigning numbers in a systematic way that aims to be reliable and valid
concept: -what to measure -a generalized idea about something (age, income, loyalty, satisfaction, etc.) EX: US grading scale (A-F)

See's Candies

--started a long time ago --owned by Berkshire Hathaway
--2011 sales: $376 million --2011 pre-tax profits: $83 million (really good)
--Buffett calls it a "dream business" (increasing profits without needing new capital)
--211 stores, 110 in California (has moved east and is slowly moving international)
--boxed chocolate is a small industry--just under $2 billion in industry sales
--diverse industry: high end is Godiva and boutique brands; low end is pharmacy candy (Russell Stover and Whitman's Samplers)
--seasonal business (90% of sales occur within 4 months)
characteristics: --no preservatives --short shelf life (as low as 25 days) --loyal customers, which allows price flexibility (up to a 5% price increase per year) --customer reviews are positive --loyal employees
the web: --65% of orders online --880,000 Facebook likes (over 1 million today)
new strategy: --eastward expansion --expansion has not always worked well --will need a new factory --not easy to ship (warm-weather shipping) --younger target, around 30 --Cioffi: "The story is there, the heritage is there. I need to package it and tell it."
the challenge: --new stores cost $300,000 to build out --editors of Candy Industry magazine: "I don't think See's means anything to the people on the East Coast." "If you put See's everywhere, what makes it so special anymore?" --hasn't gone as well as they hoped

monitoring website traffic

-Alexa monitors traffic to sites and demographic data of visitors -conversation volume: the amount of internet postings that contain a specific name or term -NPR interviewed an Amazon exec about everything they do to measure who's viewing different sites and what sort of profile those visitors represent

selecting and measuring the DV

-the DV may be difficult to identify (Crystal Pepsi--a lot of people said they'd buy it, but they bought it because it was different and then went back to regular Pepsi; researchers thought they were measuring long-term purchasing but were actually measuring short-term trial) -choosing the right DV is part of the problem definition process--you really have to think through the DV you want to examine

advantages of scanner research

-actual vs reported behavior (not relying on people's memories) -mechanical vs human record keeping (cameras don't lie; people do) -unobtrusive (in some ways, but also, asking guys to put on a camera when they take a shower is obtrusive) -records all purchases (scanner data); too time consuming for diaries -can combine scanner data with other data (demographics) page 224

gold fishing or blue fishing

-in a blind taste test of two types of fish sticks, an experiment showed that consumers preferred Gorton's; Mrs. Paul's ran the same experiment, couldn't find a problem, and challenged Gorton's in court--the only thing done differently was the color of the plates (Gorton's researchers served Gorton's on blue plates and Mrs. Paul's on yellow or orange plates, while Mrs. Paul's researchers used white plates for both)

limitations of observations

-can't observe cognitive activity to explain why the event occurred (all you're observing is the physical activity; you don't know what the thought processes are) -observation over long periods of time is expensive or even impossible; you pay a researcher to go live in people's homes, like P&G does--that's expensive, and the participant will get tired

benefits of observations

-communication with the respondent is not necessary (you still need a researcher, though) -data not distorted by self-report bias (not relying on memories; people aren't lying to you) -no need to rely on respondents' memory -nonverbal behavior (nonverbal cues often tell you more than verbal ones, and they don't lie) -certain data may be obtained more quickly -environmental conditions may be recorded -may be combined with survey data to provide complementary evidence (often done--a researcher will observe people and later interview them, as complementary evidence, to feel more comfortable with the analysis)

layout for self-administered

-don't overcrowd (usually talking about a pen-and-paper survey) -good margins (an inch on all four sides) -no widows or orphans (a question and its answers split across pages--question at the bottom of page one and options at the top of page two) -avoid putting too many questions on a page (looks crowded, and too many questions decrease the response rate) -multiple-grid (matrix table) question (several similar questions arranged in a grid format)

measuring physiological reactions

-eye-tracking monitor: records how the subject actually reads or views an advertisement; a mechanical device used to observe unconscious eye movements--some eye monitors use infrared light beams; typically used in advertising or packaging research to see what people focus on
-pupilometer: a mechanical device used to observe and record changes in the diameter of a subject's pupils
-psychogalvanometer: a device that measures galvanic skin response (the skin's electrical conductivity)--involuntary changes in the electrical resistance of the skin; when we are excited or stimulated, our skin's galvanic measurements change (we sweat, etc.)
-voice pitch analysis (measures emotional reactions): a physiological measurement technique that records abnormal frequencies in the voice that are supposed to reflect emotional reactions to various stimuli; questions are asked while the voice is recorded
-neurological devices (viewing ads in an MRI): an MRI machine allows one to measure what portions of the brain are active at a given time

complementary evidence

-hand lotion focus group example (before it started, the moderator tried to make the participants feel comfortable; as soon as he mentioned the study was about hand lotion, the women put their hands under the table--he learned that women are self-conscious about their hands) -response latency also provides complementary evidence (how long it takes someone to answer); if you observe people and later interview them, the longer someone takes to explain something, the more thought they're putting into it because they're not as certain--but be careful, because sometimes response latency is random

selecting and assignment of test units (subjects)

-sampling error (outboard motor lubricant example)--would people who own boats like and buy this type of lubricant? they got positive results, then were hit with lawsuits a year later because engines started dying--the test boats were in operation 12 months a year; in warmer states the lubricant kept working, but in colder states it congealed
-systematic error (nonsampling error): occurs if the sampling units in one experimental cell are somehow different from the units in another cell, and this difference affects the DV
ways to overcome sampling and nonsampling errors:
-randomization: the random assignment of subjects and treatments to groups--one device for equally distributing the effects of extraneous variables to all conditions (random assignment gives you a good chance of eliminating systematic error, but you might still end up with one cell being very different)
-matching (you intentionally make sure the cells look alike)
-repeated measures: an experiment in which an individual subject is exposed to more than one level of an experimental treatment (expose one group to all four treatments--but that alerts the participant to the purpose of the experiment)
-controlling extraneous variables
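Randomization, as described above, can be sketched in a few lines. This is an illustrative helper, not a course-specified procedure; the subject IDs and condition names are invented.

```python
import random

def randomly_assign(subjects, conditions, seed=None):
    """Randomly assign subjects to experimental cells so that extraneous
    variables are, in expectation, spread equally across conditions."""
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    # Deal shuffled subjects round-robin so cell sizes stay balanced.
    groups = {c: [] for c in conditions}
    for i, s in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(s)
    return groups

# 20 hypothetical subjects split into a control cell and a treatment cell.
groups = randomly_assign(list(range(20)), ["control", "treatment"], seed=1)
# Each cell gets 10 subjects; which subjects land where is random.
```

Matching, by contrast, would replace the shuffle with a deliberate pairing step so the cells look alike on known variables.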

selecting a test market

-select cities/markets representative of the population/target -selecting test markets is a sampling problem, because a test market is like a sample used to estimate how the entire US will act

characteristics of experiments

-subjects: sampling units (those who are part of the test/experimental group) (respondents are used for surveys, not experiments)
-experimental condition: one level of IV manipulation (the researcher manipulates the IV to see its impact on the DV); one of the possible levels of an experimental variable manipulation
-blocking variable (has a limited number of potential options): a categorical independent variable (race, gender, ethnicity)--included to account for variance in the DV due to this variable; not an experimental condition because you can't manipulate it (you cannot manipulate gender)
-covariate: an IV; a continuous variable with a statistical relationship to the DV; not an experimental condition (you might have multiple blocking variables/covariates); a continuous variable included in the statistical analysis as a way of statistically controlling for variance due to that variable
experiments are widely used in causal designs. experimental research allows a researcher to control the research situation to evaluate the potential for causal relationships among variables

experimental confound

-when there is something beyond the IVs that causes the DV to change (the color of the plate may have been the reason fish sticks were preferred); the 4 Ps may interact with uncontrollable forces in the market (fish sticks again) --an experimental confound means there is an alternative explanation, beyond the experimental variables, for any observed differences in the DV -validity is questioned if confounds are present (did we measure what we intended to measure?) sources/causes of confounds: -sampling error -systematic error -later-identified extraneous variables

manipulation of the IV

-you might have several different treatment levels -more than one IV can be examined (bar example--gender and promotion type) -cell: a specific treatment combination (1/4 of the group exposed to blue walls and bright lights); refers to a specific treatment combination associated with an experimental group -experimental treatment: the term referring to the way an experimental variable is manipulated

other experimental design issues

Basic experimental designs: --one IV and one DV (does ethical reasoning impact sales performance? ethical reasoning is the IV and sales performance is the DV) Factorial experimental designs: --two or more IVs and one DV (sales performance as the DV, with ethics and gender/years in sales as IVs) Lab experiment: --conducted in some type of laboratory; good because you can control a lot of extraneous variables you cannot control in the real world Field experiment: --done in the real world; the advantage is that it isn't a contrived setting, but you might not be able to control some factors

direct and contrived observation

Direct Observation -record what naturally occurs (like putting cameras in a retail store and observing shoppers' movements) -may lead to observer bias -a straightforward attempt to observe and record what naturally occurs; the investigator does not create an artificial situation Contrived Observation -create an artificial environment (if you wait for the real situation, it may take a while) -the environment may increase the frequency of certain behavior patterns to be observed (airline passenger example--the airline industry hires researchers to sit on planes and observe the crew to see how situations are handled; researchers may create a complaint (an artificial situation) because waiting for a real complaint might take multiple flights) -also mystery shoppers -observation in which the investigator creates an artificial environment in order to test a hypothesis

levels of scale measurement

Nominal: -represents the most elementary level of measurement, in which values are assigned to an object for identification or classification purposes only -assigns a value to an object for identification or classification (i.e., gender; yes or no) -example: jersey numbers (Tom Brady wears 12)
Ordinal: -ranking scales allowing things to be arranged based on how much of some concept they possess -have nominal properties -examples: dissatisfied to satisfied; level of education; Ben & Jerry's runs four new flavors and asks you to rank them 1-4 -tells you the order but not much more than that -in a horse race you have win, place, and show--it doesn't tell you by how much the horse won
Interval: -scales that have both nominal and ordinal properties but also capture information about differences in quantities of a concept from one observation to the next -the scale is not iconic--it doesn't exactly represent the phenomenon (temperature example: 80 degrees in Spartanburg and 40 degrees in Chicago--you can say there is a 40-degree difference, but you can't say it's twice as hot) -captures more, but has no absolute zero -i.e., temperature in Fahrenheit or Celsius (see page 275 for examples) -in corporations, profitability is an interval scale because you can lose money (go below zero) -in a horse race, it will tell you how far first place finished in front of second place
Ratio: -represents the highest form of measurement: all the properties of interval scales plus the attribute of representing absolute quantities; characterized by a meaningful absolute zero -is iconic (cannot go below zero) -i.e., amount purchased (cannot purchase less than zero); the Kelvin scale; market share (cannot have a negative market share)

attitude rating scales

RANKING TASK: a measurement task that requires respondents to rank order a small number of stores, brands, or objects on the basis of overall preference or some characteristic of the stimulus (could be a product and a list of its characteristics, and you ask people to rank the features from most important to least important)
RATING: a measurement task that requires respondents to estimate the magnitude of a characteristic or quality that a brand, store, or object possesses (now you're getting magnitude--you don't get that with ranking; here you know how closely the objects are rated to each other)
SORTING: a measurement task that presents a respondent with several objects or product concepts and requires the respondent to arrange the objects into piles or classify the product concepts -cards with brand names on them -put the cards in piles (brands of athletic shoes; put them in four or five piles and afterward ask how they were organized) -piles have something in common
CHOOSE AN ALTERNATIVE: a measurement task that identifies preferences by requiring respondents to choose between two or more alternatives (of these, which is your preference?)
SIMPLE ATTITUDE SCALE: two response categories (usually agree or disagree with some statement about a company), but you don't really get much else
CATEGORY SCALE: more than two response categories -question and response-category wording is critical (make sure the question captures attitudes) -question interpretation issues ("likely to recommend" question--Dr. Madden always says unlikely because he doesn't really recommend companies, even if he really likes them)
LIKERT SCALE: a measure of attitudes designed to allow respondents to rate how strongly they agree or disagree with carefully constructed statements, ranging from very positive to very negative attitudes toward some object -the most popular method--researchers use this more than any other attitude rating scale -give people a series of statements and ask how strongly they agree or disagree (EX: it is better to have a firm pillow than a soft pillow; strongly agree to strongly disagree)
SEMANTIC DIFFERENTIAL: a measure of attitudes that consists of a series of bipolar rating scales with opposite terms on either end -bipolar adjectives anchor the ends of the scale (usually an odd number of points)--check off where a company or product falls between the two adjectives -a weight is assigned to each position on the scale -it may be difficult to choose semantic opposite anchors (the adjectives have to be truly opposite; simple & complex is a good pair) -anchors should be appropriate
CONSTANT SUM SCALE: a measure of attitudes in which respondents are asked to divide a constant sum to indicate the relative importance of attributes; respondents often sort cards, but the task may also be a rating task -example: divide 100 points among each of the following brands according to your preference for the brand -give people some number of points and ask them to allocate points to attributes based on how important they are (the item you prefer most should get the most points) -mobile phone service provider survey (screen, battery life, data cost)
GRAPHIC RATING SCALE: a measure of attitude that allows respondents to rate an object by choosing any point along a graphic continuum -advantage: allows the researcher to choose any interval desired for scoring purposes -a sliding scale -you let people choose along some continuum; works well in online surveys (they can click where they stand on some concept) -anchored by adjectives
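Likert scoring can be made concrete with a tiny sketch. The item texts, the respondent's answers, and the reverse-keyed flag below are all invented for illustration; reverse-keying (flipping items worded in the opposite direction before summing) is a standard refinement, though not one the notes above mention.

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
responses = {
    "a firm pillow is better than a soft one": 4,
    "a soft pillow causes neck pain": 5,
    "pillow firmness does not matter": 2,   # worded against the concept
}
reverse_keyed = {"pillow firmness does not matter"}  # disagreeing = pro-firm

def likert_score(responses, reverse_keyed, points=5):
    """Sum item scores; reverse-keyed items are flipped (1<->5, 2<->4) first."""
    return sum((points + 1 - v) if item in reverse_keyed else v
               for item, v in responses.items())

print(likert_score(responses, reverse_keyed))  # 4 + 5 + (6 - 2) = 13
```

Higher totals indicate a more favorable attitude toward the object (firm pillows, in this made-up example).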

advantages of between-subjects design

Within-Subjects Design: the same subject is measured under different treatments (one group of people exposed to all 3 ad campaigns; you only need one group of people, but they will become wise to what you're testing); involves repeated measures, because the same subject is measured with each treatment Between-Subjects Design: each subject receives only one treatment condition (recruit three groups of people and show each group only one ad campaign--usually advantageous because subjects are exposed to only one treatment and are less likely to figure out what you're up to; but more expensive because it needs more people; validity is usually higher because you can control things better)

observer bias

a distortion of measurement resulting from the cognitive behavior or actions of a witnessing observer -a distortion of measurement -recording events inaccurately (Envirosell--in addition to putting cameras in stores, they have researchers physically in stores; they record the same things but may do so inaccurately) -interpreting observational data incorrectly (you may observe behavior and attribute it to one thing when it may be due to something else)

concept

a generalized idea that represents something of identifiable and distinct meaning (demographic concepts, usually) -what to measure (age, income, customer loyalty, etc.)

experimental group

a group of subjects to whom an experimental treatment is administered

control group

a group of subjects to whom no experimental treatment is administered which serves as a baseline for comparison

paired comparison

a measurement technique that involves presenting the respondent with two objects and asking the respondent to pick the preferred one; more than two objects may be presented, but comparisons are made in pairs--comparing two at a time -the person chooses the preferred object of each pair -number of comparisons = n(n-1)/2 -jewelry example: comparing precious gems
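The n(n-1)/2 count above is easy to verify by enumerating the pairs. A minimal sketch, using made-up gem names for the jewelry example:

```python
from itertools import combinations

def n_comparisons(n):
    """Pairs needed when every object is compared with every other: n(n-1)/2."""
    return n * (n - 1) // 2

gems = ["ruby", "emerald", "sapphire", "opal"]   # hypothetical objects
pairs = list(combinations(gems, 2))              # every unordered pair
print(len(pairs), n_comparisons(len(gems)))      # 6 6
```

This also shows why paired comparison gets burdensome fast: 4 objects need 6 comparisons, but 10 objects would need 45.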

testing effects

a nuisance effect occurring when the initial measurement or test alerts or primes subjects in a way that affects their response to the experimental treatments

instrumentation effect

a nuisance effect that occurs when a change in the wording of questions, a change in interviewers, or a change in other procedures causes a change in the dependent variable

UK tobacco company

a smoker sued, claiming smoking caused his cancer; the case went to court, and the company showed it had done research and had found/reported that smoking can lead to lung disease... and the company won

advantages and disadvantages of test markets

advantages --real-world setting --easily communicated results (constant contact with marketers and consumers) disadvantages --cost (having retailers carry the product for the year; producing the product--the ranch dressing cost $20 a bottle because you don't have economies of scale when you're not producing a lot) --time (Dryel and Febreze used test markets that lasted 2 years or more, and that's a long time--things change) --loss of secrecy (competition will find out what you're doing)

respondent's memory

aided and unaided recall tests (typically used with advertising): aided recall: asking the respondent to remember something and giving them a clue to help unaided recall: asking respondents to remember something without providing any clue telescoping: believing past events happened more recently than they did (Dr. Madden does this) squishing: believing recent events took place longer ago than they did

the theory of reasoned action

also developed by Fishbein--an extension of the multi-attribute attitude model; assumes consumers consciously choose behaviors that have desirable consequences (we choose to engage in actions that deliver something positive); measures intention to perform a behavior (behavioral intention) rather than the behavior itself, because that's what researchers are trying to predict; we tend to perform behaviors that are evaluated favorably and are popular with others (social norm)

observation of physical objects

artifacts: -things that people made and consumed within a culture that signal something meaningful about the behavior taking place at the time of consumption (what sort of books have been written, what type of TV programming is popular--based on these, information is gained) -garbology addresses reporting problems of consumption (go through people's garbage and see what they're not telling you; consumption of alcoholic beverages is lowballed) -Charles Coolidge Parlin (father of modern marketing research) and Campbell's Soup (hired by the Saturday Evening Post to get Campbell's to advertise its soup, but Campbell's didn't want to because its customers were middle and upper class whereas the readers were lower class... Parlin went through garbage and found that lower classes were much more likely to buy soup than make it because they don't have time, while middle classes have maids who make it from scratch)
inventories: -physical trace evidence (measuring how much of a product is used during a test); if you're testing a new type of bleach, you might give people a bottle, tell them you're coming back in a couple of weeks, and actually measure how much was used during that time -count inventories for retailers and wholesalers -pantry audit (what is in people's pantries? people miss things in self-reports, but if you physically count, you know what is in there)
content analysis: -the systematic observation and quantitative description of the manifest content of communication -what is produced in a culture? (counting themes or people in ads--what are the key themes, what types of actors?) -it is more than just counting--businesses use Twitter to share hyperlinks and hashtags more than photos and videos (they feel this gives them more contact with the customer)

reliable and valid index measures

attribute: a single characteristic or fundamental feature of an object, person, situation, or issue (e.g., asking for age)
index measure: assigns a value based on a mathematical formula separating low scores from high scores -index formulas often put several variables together to measure one thing -the variables are not related to each other (social class index--occupation, affiliation, income, education, and possessions used to measure social class--five discrete attributes that are not related to each other)
composite measure: assigns a value to an observation based on a mathematical derivation of multiple variables to create an operational measure of a construct -similar to an index (two or more variables measure some concept) -differs from an index in that the variables are related to each other -example: customer satisfaction surveys (the items are all related to each other)
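An index measure's "mathematical formula" can be sketched as a weighted sum. The attribute names, category codes, and weights below are invented for illustration; a real social-class index defines these from prior research.

```python
# Hypothetical social-class index: several unrelated attributes, each coded
# into categories, combined by a formula into one score.
def index_score(attributes, weights):
    """Weighted sum of several discrete attributes -> a single index value."""
    return sum(weights[k] * attributes[k] for k in weights)

respondent = {"occupation": 4, "income": 3, "education": 5}  # coded categories (made up)
weights = {"occupation": 7, "income": 3, "education": 4}     # illustrative weights

print(index_score(respondent, weights))  # 4*7 + 3*3 + 5*4 = 57
```

A composite measure would be computed the same way mechanically; the difference, as the note says, is that its input variables are all related to one underlying construct (e.g., several satisfaction items), rather than being distinct attributes.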

test market sabotage

intentional attempts to disrupt the results of a test market being conducted by another firm

common mistakes

avoid complexity--simpler language is better (sending to the general public? aim for a fifth-grade reading level)
avoid leading and loaded questions (non-exhaustive alternatives are a type of leading question--there is some option the respondent could choose that you don't account for; ex: how many books do you read a month? (0, 1-5, 6-10)--what if I read 30? you're forcing them to choose 6-10)
avoid ambiguity (avoid words like often, frequently, good, poor--they mean different things to different people)
avoid double-barreled questions (a question that may induce bias because it covers two or more issues at once)--ex: "in your community, does Olive Garden provide the best value and the best menu?" it could provide one but not both, and there's no option for that
avoid making assumptions (Macy's gift wrapping--"should we continue our excellent gift wrapping services?"--well, what if their gift-wrapping services are not good?)
avoid burdensome questions that may tax the respondent's memory

balanced or unbalanced rating scale?

balanced rating scale: a fixed-alternative rating scale with an equal number of positive and negative categories; a neutral point or point of indifference is at the center of the scale -equal number of positive and negative categories with a neutral point at the center (most Likert scales are balanced: strongly disagree to strongly agree)
unbalanced rating scale: a fixed-alternative rating scale that has more response categories at one end than the other, resulting in an unequal number of positive and negative categories -used if there's a certain lack of balance -more positive or more negative categories on one side; only use this if you are certain the answers will be more positive than negative, or vice versa (satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied--GW bookstore example)

issues a pretest should address

can the interviewer/respondent clearly follow it? does it flow naturally? are questions clear and easy to understand? are questions easy to answer? (open-ended questions give you good information, but they are not easy to answer) what response rate is expected? (the textbook says you can get a lot of information from pretests, but Dr. Madden disagrees--a pretest may get a 100% response rate that the actual survey won't)

establishing control

constancy of conditions: -identical conditions for all subjects except for the differing experimental treatments counterbalancing: -rotate the order of treatments (Ben & Jerry's) -prevents the primacy effect (the first thing you're exposed to has the greatest impact) and the recency effect (the last thing has the greatest impact) **if Ben & Jerry's tests 4 flavors of ice cream and asks which is best, second best, etc., primacy says the first flavor will have the greatest impact, so rotate the flavors (participant 1: 1,2,3,4; participant 2: 2,3,4,1)
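The flavor-rotation scheme above (participant 1: 1,2,3,4; participant 2: 2,3,4,1; ...) is just a cyclic rotation, and can be sketched as:

```python
def counterbalanced_orders(treatments):
    """Rotate the treatment sequence once per participant so that no
    treatment is always tasted first or last (guards against primacy
    and recency effects)."""
    n = len(treatments)
    return [treatments[i:] + treatments[:i] for i in range(n)]

orders = counterbalanced_orders([1, 2, 3, 4])
print(orders)  # [[1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2], [4, 1, 2, 3]]
```

Across the four participants, each flavor appears exactly once in each serving position, so order effects are spread evenly over the treatments.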

ethical issues in experimentation

debriefing experimental subjects --at the end of an experiment, you should explain its purpose; sometimes this is not practical (debriefing a test market isn't practical) interfering with a competitor's test market: --actions such as changing prices or increasing advertising to influence (confound) a competitor's test-market results are ethically questionable (Hidden Valley Ranch was bought up by competitors to make it seem that it was successful)

basic considerations in questionnaire design

decisions in questionnaire design: 1) what should be asked? (stay focused on the research questions) 2) how should questions be phrased? (make sure each question makes sense to everyone, not just you) 3) in what sequence should the questions be arranged? (the questionnaire needs to flow and to capture the data you need--most researchers use a funnel, starting broadly and getting more specific as you move along) 4) what questionnaire layout will best serve the research objectives? 5) how can the questionnaire encourage complete responses? (not in lecture) 6) how should the questionnaire be pretested and then revised? (always pretest a new questionnaire) -a critical process (you have to word and sequence questions properly; GIGO--garbage in, garbage out) -a questionnaire/survey is only as good as the questions it asks -takes a lot of work (one of the reasons Dr. Madden does not write his own questions) -basic criteria of relevance and accuracy (questions need to pertain to your research objectives)

demand characteristics and validity

demand characteristic: something in the study alerts the participants to the purpose or hints about the research hypothesis; if you give people multiple treatments or expose the same group of people to different experimental treatments, they'll catch on to what you're doing --experimental design element or procedure that unintentionally provides subjects with hints about the research hypothesis demand effect: occurs when demand characteristics actually affect the DV; potential problem

mathematical and statistical analysis of scales

discrete measures: measures that take on only one of a finite number of values (disagree/neutral/agree) -ex: asking someone their age (only one value) continuous measures: measures that reflect the intensity of a concept by assigning values that can take on any value along some scale range -require at least interval-level measurement -ex: salespeople's sales (many different values along some range)

reducing demand characteristics

experimental disguise: -may not tell subjects the whole truth (if you do, they might tell you what they think you want to hear) -placebo: a false treatment (the placebo effect is a change in the DV due to psychological impact) (Rogaine ex.) isolate experimental subjects (so they can't talk to and alert each other) use a blind experimental administrator (if the administrator doesn't know the hypothesis, they can't spill the beans) administer only one experimental condition per subject (the best way to reduce demand characteristics!)--don't give multiple treatments

types of validity

face/content validity: the extent to which an individual measure's content matches the intended concept's definition --a scale's content logically appears to reflect what was intended to be measured --looking at the scale and its questions and asking yourself if it measures what you want it to measure
criterion validity: the ability of a measure to correlate with other standard measures of similar constructs or established criteria --can be either concurrent (using the scale/measure to measure something as it's happening) or predictive --if it correlates with other scales measuring the same thing, then it has criterion validity
construct validity: exists when a measure reliably measures and truthfully represents a unique concept; consists of several components, including face validity, convergent validity, criterion validity, discriminant validity, and fit validity
convergent validity: depends on internal consistency, so that multiple measures converge on a consistent meaning --another way of expressing internal consistency --a combination of face and criterion validity --do all the items in the scale converge on some concept?
discriminant validity: represents how unique or distinct a measure is; a scale should not correlate too highly with a measure of a different construct (correlation coefficients above 0.75 are bad--but maybe 0.3 or 0.4 according to Dr. Madden) --does this measure your construct and nothing else? use factor analysis to see if the scale discriminates
fit validity: represents the extent to which a researcher's proposed measurement approach fits better with the construct it represents than with other constructs

single or multiple items?

factors affecting the choice between a single measure and an index measure: -the complexity of the issue to be investigated -the number of dimensions the issue contains -whether individual attributes of the stimulus are part of a holistic attitude or are seen as separate items (is it a compilation of different things, or is each separate?) -for a simple, one-dimensional concept, one question can get at it

back translation

for international research: taking a questionnaire that has previously been translated into another language and having a second, independent translator translate it back into the original language (does the same meaning come back?)
problems in translation: -equivalent language concepts may not exist (ampersand does not exist in French) -differences in idiom and vernacular (things make sense to us but not to others)

forced-choice scales

forced-choice rating scale: a fixed-alternative rating scale that requires respondents to choose one of the fixed alternatives --do not give them "N/A" or "neutral" because those get used as cop-outs, and most of us have opinions about things
non-forced-choice scale: a fixed-alternative rating scale that provides a "don't know" category or allows respondents to indicate that they cannot say which alternative is their choice --a "no opinion," "N/A," or "neutral" category

internet questionnaires

graphical user interface (GUI) software: -controls the background, colors, fonts, and other features displayed -creates an attractive and easy-to-use interface -controls the look of the online questionnaire (you don't want it crowded or cluttered; the better it looks, the better participation will be)
web publishing companies (companies that specialize in internet questionnaires): -Survey Monkey (the biggest and best online survey company)
layout issues: -paging layout: going from screen to screen -scrolling layout: the entire questionnaire appears on one page (only use this for a short questionnaire)

factors affecting internal validity

history effect: when some change other than the experimental treatment occurs during the course of an experiment and affects the DV (a competitor changing their marketing strategy during a test market--Starbucks example: they put their product, which you can now buy at any mass merchandiser, on shelves in Chicago to see if it would sell in stores (test market), and then Seattle's Best Coffee ramped up its promotion only in the Chicago market); also, coronavirus and unemployment will have a big impact on test markets
cohort effect: a type of history effect; a change in the DV occurs because members of one experimental group experience different historical situations (two groups tested at different times, e.g., one group of salespeople receives training and the others don't; or one group tested before coronavirus and one group now--the group now will be affected by coronavirus)
maturation effects: over time, subjects grow older, more experienced, or tired of participating (and then may fall into a pattern of telling you what you want to hear)
testing/pretesting effects: occur in before-after studies; before you run an ad campaign, you ask questions about how people view your product... then you run the campaign and afterward see if their views have changed; the initial measurement may alert or prime the subjects, and then they'll tell you what you want to hear
instrumentation effect: a change in the wording of questions, a change in interviewers, or a change in procedures causes a change in the DV
selection effect: a problem when selecting subjects for the groups; you want the groups to be very similar, but they may not be (different ages or incomes)
mortality effect (sample attrition): subjects may leave the experiment before it is completed (they get tired of it, or die--you don't get the benefit of their data after they leave)

Belief evaluation-- (ei)

how favorably or unfavorably the person perceives that attribute or salient belief
can change over time
can vary between situations (friends who love coffee: caffeinated in the morning, decaf in the afternoon--the evaluation of caffeine changes)
typically measured on a -3 to +3 scale (-3 means the person evaluates that salient belief very negatively; +3 means very positively)

validity

internal validity: the extent to which an IV is responsible for any variance in the DV; did we measure what we intended to measure? was the change in the DV caused by the IV? --exists to the extent that an experimental variable is truly responsible for any variance in the DV
manipulation check: a validity test to make sure that the manipulation of the IV does produce differences in the DV (if Toyota is considering a new type of Camry and is testing prices, they'd want to see whether the changes in price produced a difference in the dependent variable--sales could be the same at all three prices)

attention filters

items with known and obvious answers, included just to see if participants are playing along; researchers should consider using these in surveys and experiments

leading and loaded questions

leading: a question that suggests or implies certain answers; pushes the person in some direction as to how they should respond
-may be phrased to reflect either a positive or negative aspect of an issue (may use the split-ballot technique to address this--question wording is reversed for 50% of the sample (i.e., the compact car question))
loaded: a question that suggests a socially desirable answer or is emotionally charged; the answer may be based on "ideal" behavior (i.e., teeth brushing; PETA; buying organic food); may use inflammatory language
-ex: question about dental hygiene: "we all know that we should brush our teeth twice a day..."
-may include a counterbiasing statement--an introductory statement or preamble to a potentially embarrassing question that reduces a respondent's reluctance to answer by suggesting that certain behavior is not unusual (teeth brushing)
--"We all know we should brush our teeth twice a day, but sometimes we don't have time... how often do you brush?"
--helps respondents be more honest

basic issues in experimental design

basic issues: 1) manipulation of the IV 2) selection and measurement of the DV 3) selection and assignment of experimental subjects 4) control over extraneous variables
manipulation of the independent variable: -experimental treatment: the way an experimental variable (IV) is manipulated --could be a categorical variable (one with a limited number of levels, like color or usage level) --or a continuous variable (like lighting or price) -experimental group: the group exposed to a treatment of some sort -control group: does not get the treatment (you test whether the manipulation in the experimental group led to a change in sales or behavior)

attitudes

multi-attribute attitude model (the Fishbein model): a model that constructs an attitude score based on the multiplicative sum of beliefs about an object times the evaluation of those belief characteristics -belief strength (for some object, how strongly does the person believe that the object possesses that feature/benefit?) -evaluation of the attribute (is that attribute/benefit viewed as positive or negative?) -equation: attitude = sum over salient beliefs of (belief strength bi x evaluation ei) -widely used to measure attitudes
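The attitude equation above can be sketched numerically. The attribute names and numbers below are invented for illustration, using the scales described on the belief strength (1-10) and belief evaluation (-3 to +3) cards:

```python
# Hypothetical illustration of the Fishbein multi-attribute attitude model:
# attitude = sum over salient beliefs of (belief strength b_i * evaluation e_i).

def attitude_score(beliefs):
    """beliefs: list of (belief_strength, evaluation) pairs.
    Belief strength b_i is on a 1-10 scale; evaluation e_i is on -3..+3."""
    return sum(b * e for b, e in beliefs)

# Made-up salient beliefs one consumer holds about a coffee shop
coffee_shop = [
    (9, +3),   # "has strong coffee": strongly believed, evaluated very positively
    (8, -2),   # "is expensive": strongly believed, evaluated negatively
    (5, +1),   # "has fast service": moderately believed, mildly positive
]

print(attitude_score(coffee_shop))  # 9*3 + 8*(-2) + 5*1 = 16
```

A higher total indicates a more favorable overall attitude; marketers can compare totals across brands or see which belief term drags the score down.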

history effect

occurs when some change other than the experimental treatment occurs during the course of an experiment that affects the DVs

mortality effect (sample attrition)

occurs when some subjects withdraw from the experiment before it is completed

operational definitions

operationalization: finding scales that measure the properties of the concept; the process of identifying scale devices that correspond to properties of a concept involved in a research process (a 13-item performance scale, 1-7 for each item, that measures performance)
scales: a range of values used to measure the concept; a device providing a range of values that correspond to different characteristics or amounts of a characteristic exhibited in observing a concept (the 13-item performance scale)
correspondence rules: the way a certain value on a scale corresponds to some true value of a concept (how likely are you to recommend Amazon to a friend? 1 = highly unlikely; 7 = highly likely); how well does the scale capture the concept? (the 13-item performance scale is widely accepted and has been around for 40 years because it does a good job of capturing sales performance)
variables: capture different values of a concept; used to measure the concept (13 variables on sales performance)
constructs: concepts measured with multiple variables; a term used to refer to latent concepts measured with multiple variables (gender is a single variable; the 13-item sales performance scale is a construct)

ask a sensitive question, get a sensitive answer

survey researchers believe that the responses people give are valid, but care must be taken with sensitive questions
researchers must take care to ask relevant questions in ways that produce the most truthful results
ex: if you ask people whether they visit porn sites online, the vast majority will say no; but those are among the most viewed sites, so someone is lying

what is the best question sequence?

order bias: results when a particular sequencing of questions affects the way a person responds, or when the choices provided as answers favor one response over another
-2 potential sources:
--earlier questions may bias later answers (asking specific questions before general questions is not a funnel approach and tends to bias later answers; you want a funnel approach)
--order of response choices (primacy says the first item you're exposed to has the greatest impact; recency says the last item has the greatest impact)--brands listed 1st and 5th are more likely to be chosen
-may use randomized presentations to address the second source (to avoid primacy and recency effects: 1,2,3,4,5, then 2,3,4,5,1)
funnel technique: asking general questions before specific questions in order to obtain unbiased responses
need to avoid breakoff (a respondent who stops answering questions before reaching the end of the survey)--a real problem because they started the survey but something caused them to quit
a filter question may lead to branching (skip questions)
pivot question: a filter question (one that screens out respondents who are not qualified to answer a second question); determines which version of a second question will be asked--how many movies have you seen in theaters? if I say I've seen five, that might determine which question I get next

estimating sales volume potential problems

overattention from marketers may indicate that sales will be higher than they actually will be
unrealistic store conditions: some retailers may be told to treat the product normally but might give it a prime location instead (test market sabotage, as with Starbucks and Millstone)
time lapse between test market and commercialization: the longer you wait, the more things can change, like competition

pretesting and revising questionnaires

pretesting process: try the questionnaire out on a small group of people similar to the intended respondents
preliminary tabulation: a tabulation of pretest results to see if the questionnaire meets the research objectives

internet questionnaire layout

push button: a dialogue box the respondent clicks to perform a function (on some questionnaires you click a button to go to the next page; some advance automatically)
status bar: a visual indicator that tells the respondent what portion of the survey he or she has completed (can be good or bad; on a really long survey it may discourage respondents who see how much is left)
radio button: a circular icon that activates one response choice and deactivates the others (most questionnaires ask for household income; when you click the correct category, the other buttons deactivate)
heat map question: a graphic question that tracks the parts of an image or advertisement that most capture a respondent's attention (what do you like about this advertisement? move your cursor over the ad and click what you like or what interests you)
drop-down box: a scrolling list of responses, like picking the state you live in from a list
check boxes: boxes next to answers that a respondent clicks (sometimes you can only check one; sometimes you can check more than one)
open-ended boxes: boxes to type open-ended responses into

what should be asked?

questionnaire relevancy: all information collected should address a research question (or objective); you don't want to include nice-to-know questions
questionnaire accuracy: the information is valid; requires... -simple, understandable, unbiased, unambiguous, and nonirritating words -wording that facilitates recall and motivates respondents to cooperate -proper question wording and sequencing to avoid confusion and biased answers

fixed alternative questions

questions in which respondents are given specific, limited-alternative responses and asked to choose the one closest to their own viewpoint; a closed-ended question
advantages: -require less interviewer skill -take less time to answer (checking a box) -easier for the respondent to answer (they don't have to sit there and think) -provide comparability of answers (how many people said A, how many said B)
disadvantages: -error in choosing responses (you want the categories to be exhaustive--capturing all possible options) -tendency of respondents to choose a convenient alternative ("N/A" or "doesn't apply" is commonly checked because it's an easy answer) -respondents may check the more prestigious or socially acceptable answer

open ended questions

questions that pose a problem and ask respondents to answer in their own words
advantages: -good for exploratory research (they force the respondent to open up and give detail you won't get from a closed-ended question) -may reveal unanticipated reactions toward the product (may tell you a preconceived notion is not correct) -make good first questions, allowing respondents to warm up to the questioning process (Dr. Madden disagrees--he thinks they should go at the end of the questionnaire, because they require more effort and by the end respondents probably want to finish the survey)
disadvantages: -high cost--answers have to be edited, coded, and analyzed (usually two researchers are needed to code them) -the possibility that interviewer bias will influence the answer (facial expressions and other nonverbal cues may influence how the respondent answers) -bias introduced by articulate individuals' longer answers

Belief strength--(bi)

the perceived probability of association between an object and its relevant attributes
salient beliefs: the attributes relevant to that customer (consumers hold a maximum of about 7-9 salient beliefs, but most hold far fewer)
affected by past consumer experiences
"this object possesses some attribute"
typically measured on a 1-10 scale (10 means the person strongly associates that attribute with that object; 1 means they don't associate it)

cohort effect

refers to a change in the DV that occurs because members of one experimental group experienced different historical situations than members of the other experimental group

reliability

reliability: an indicator of a measure's internal consistency
-the degree to which measures are free from random error and yield consistent results
-consistency in a scale or measure
-if you give the scale to a similar group of people and get similar results, it's reliable
internal consistency: represents a measure's homogeneity, or the extent to which each indicator of a construct converges on some common meaning
-measured by correlating scores on subsets of the items making up a scale (how well do the items correlate with one another?)
-Dr. Madden uses scales that are already out there because they're reliable--those 13 items converge on performance

internet survey technology

response quality: questions to determine if people are paying attention
-you might put a check question in to make sure people are paying attention; ex) an instruction with an obvious answer where there was no real question to select (Dr. Madden and NFO)--what animal did you see and what animal did you hear?
timing:
-measured for individual questions, pages, and the entire survey
-rule of thumb: taking 1.5 standard deviations less time than average is problematic
-how long does it take someone to respond?
-ex) companies measure how fast you answer questions; if someone is 1.5 standard deviations or more faster than the average person, they're probably not reading the questions and are just clicking through
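The 1.5-standard-deviation timing rule above can be sketched like this. The completion times are made-up seconds, and using the population standard deviation (rather than the sample SD) is an assumption of this sketch:

```python
# Flag respondents whose completion time is more than 1.5 standard
# deviations below the mean -- likely clicking through without reading.
import statistics

times = [300, 310, 295, 305, 290, 315, 80]  # last respondent sped through

mean = statistics.mean(times)
sd = statistics.pstdev(times)            # population SD (an assumption here)
cutoff = mean - 1.5 * sd

speeders = [t for t in times if t < cutoff]
print(speeders)  # [80]
```

In practice a survey platform would apply this per question and per page as well as for the whole survey, as the card describes.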

types of fixed-alternative questions

simple-dichotomy (dichotomous-alternative) question: -two alternatives (yes or no; agree or disagree)
determinant-choice question: -choose one response from several alternatives (a, b, or c); which do you prefer? which is your favorite?
frequency-determination question: -asks about frequency of occurrence (often, occasionally, or never); how often do you do something?
checklist question: -the respondent may provide multiple answers

keys to help avoid break-offs

breakoff: someone begins a survey but doesn't finish it
-make the survey visually appealing and easy to read
-few questions per page (more questions encourage breakoff)
-sensitive and open-ended questions encourage breakoffs (put them toward the end if you need to ask them)
-sophisticated samples increase response rates (the better educated your sample is, the better your response rate will be)
-pretesting is important (you have to pretest a brand-new survey that has never been administered)

extraneous variables

something other than the treatment that happens during the experiment... e.g., running a test market when coronavirus hits and unemployment skyrockets -history -maturation -testing -instrumentation -selection -mortality

methods for testing reliability

split-half method: checking the results of one half of a set of scaled items against the results from the other half; a method for assessing internal consistency
-score half the items, then correlate with the other half to see how they match up (take 7 of 13 items, add them up, and compare with the sum of the other 6; if the halves correlate, the scale is reliable)
-not really used anymore
coefficient alpha (α): the most commonly applied estimate of a multiple-item scale's reliability; it represents the average of all possible split-half reliabilities for a construct
-this is what everybody uses (the only method in practice)
-the average of all the possible split halves (with a 14-item scale, for ease, there are a lot of potential 7-and-7 split halves; coefficient alpha looks at every potential split half)
-you can compute it with SPSS, Minitab, or SAS
test-retest method: administering the same scale or measure to the same respondents at two separate points in time to test for reliability
-used before personal computers; no one uses it now
-administer the questionnaire to your sample, then administer it again later and correlate the results
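A minimal sketch of coefficient alpha using its standard formula, α = k/(k−1) × (1 − Σ item variances / variance of total scores). The response data below are invented; in practice you would let SPSS, Minitab, or SAS report this, as the card says:

```python
# Cronbach's coefficient alpha from raw item responses.
# rows = respondents, columns = items on the scale.
import statistics

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items in the scale
    items = list(zip(*rows))              # transpose: one tuple per item
    item_vars = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [  # 4 respondents x 3 items, 1-5 agreement scale (made up)
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(responses), 2))  # 0.89
```

Items that move together across respondents inflate the variance of the totals relative to the summed item variances, pushing alpha toward 1, which is why alpha reads as internal consistency.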

computing scale values

summated scale: a scale created by simply adding together the responses to each item making up the composite measure
-the scores can be, but do not have to be, averaged by the number of items making up the composite scale
-adding values together (just add up the scores on the 13-item sales performance scale)
reverse coding: means that the value assigned for a response is treated oppositely from the other items (some items may be worded negatively)
-i.e., a personality scale with negatively worded items
-questions 1 and 5 of Dr. Madden's relationship scale were worded negatively (items 2-4 were worded positively); when the surveys came back, he had to reverse the coding for 1 & 5 (the rating becomes 5-1, not 1-5)
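The reverse-coding step can be sketched like this, mirroring the relationship-scale example above: items 1 and 5 are negatively worded, so their 1-5 values are flipped before summing. The responses are invented:

```python
# Reverse-code negatively worded items on a 1-5 scale, then compute
# the summated scale score by adding the recoded responses.

def reverse(value, scale_max=5, scale_min=1):
    """Flip a response on the scale: 1 becomes 5, 2 becomes 4, etc."""
    return scale_max + scale_min - value

raw = [2, 4, 5, 4, 1]           # one respondent's answers to items 1-5
reverse_items = {0, 4}          # items 1 and 5 (0-indexed) are negatively worded

recoded = [reverse(v) if i in reverse_items else v for i, v in enumerate(raw)]
print(recoded)                  # [4, 4, 5, 4, 5]
print(sum(recoded))             # summated scale score: 22
```

Without the flip, a respondent who agreed with the positive items and disagreed with the negative ones would look inconsistent, and the summated score (and any reliability estimate on it) would be distorted.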

test marketing

taking a product into 2-3 mid-sized cities to see if it will succeed
the most common type of field marketing experiment is the test market
test markets have 3 broad uses:
1) forecasting the success of a newly developed product (Miller 64--ultra-light Miller beer; they wanted to see if it would sell)
2) testing hypotheses about different options for marketing mix elements (Jackass 3 movie: in addition to the theater release, they considered releasing it on cable for free, with ads--place!!)
3) identifying weaknesses in product designs or marketing strategies (McDonald's has attempted pizza, and it bombs every time)

mechanical observation

television and radio monitoring: computerized mechanical observation used to obtain television and radio ratings
-AC Nielsen's people meter
-Mobiltrak (records the radio stations being listened to at an intersection or red light)
monitoring website traffic:
-click-through rate (the proportion of people exposed to a hyperlinked Internet ad who actually click on its hyperlink to enter the website)
-----still the standard way of monitoring website traffic
-----may not indicate interest (click fraud)
-----may not distinguish how many unique visitors there are
-hits and page views
-----unique visitors (via cookies)
-conversation volume (a measure of the amount of Internet postings that involve a specific name or term (KISS))
-----the number of online postings of a name or term
scanner-based research:
-scanner-based consumer panel (IRI pays retailers for scanner data): a type of consumer panel in which participants' purchasing habits are recorded with a laser scanner rather than a purchase diary (what they purchase in different areas, etc.)
-at-home scanning systems (whenever you make a purchase, scan it and send the data in)
-camera surveillance (Old Spice body wash: young guys put on a helmet camera that records their showers to see how people wash; Huggies: parents put on a camera to see how they change diapers)
-smartphones and tablets (text and phone records; GPS info)

what can be observed

textbook page 210, exhibit 8.1: physical movements; verbal behavior; expressive behavior and physiological reactions; spatial relations and locations; temporal patterns; physical objects; verbal and pictorial records; neurological activities; internet activities; geographical information; physical distribution

validity

the accuracy of a measure, or the extent to which a score truthfully represents a concept
-you have to prove the scale is both reliable and valid
three questions for establishing validity:
--is there a consensus that the scale measures what it is supposed to measure? (do other people agree that it does?)
--does the measure correlate with other measures of the same concept? (you should be able to find a similar questionnaire)
--does the behavior expected from the measure predict actual observed behavior? (does it help you predict future behavior?)

external validity

the accuracy with which experimental results can be generalized beyond the experimental subjects
can the results be generalized beyond the experiment? extended to the population?
using student surrogates--problems with role playing and the time of the semester (asking students to pretend to be a 35-year-old married person with 2 children, or a marketing manager... it saves time and recruiting, but role playing is a disadvantage because we cannot put ourselves in those shoes; students are more likely to participate at the end of the semester because of extra credit and greater comfort)
trade-offs between internal and external validity: laboratory experiments are usually higher in internal validity because you can control for extraneous variables, but field experiments have much higher external validity because of the real-world setting... it's a trade-off: which is more important, high internal validity or high external validity?

response latency

the amount of time it takes to make a choice between two alternatives; used as a measure of the strength of preference -the longer a decision takes, the more difficult that decision was and the more thought the respondent put into the choice; a quick decision presumably indicates an easy or obvious choice

phrasing questions

the means of data collection will influence the question format and phrasing; how you phrase a question is a pretty big deal
-mail, internet, and telephone surveys must be less complex than personal interviews--especially mail surveys, because there is no interviewer
-telephone and personal interviews should be written in a conversational style (it's a conversation between the researcher and the respondent)

observation

the systematic process of recording the behavioral patterns of people, objects, and occurrences as they take place -qualitative or quantitative research -exploratory, descriptive, or causal (any type of research) -often ethnographic research (researcher immerses self into culture) -Callaway Golf (uses observational research--they hire researchers to go to golf courses and act as caddies. They listen to golfers, what they like and don't like, and report their findings) -Alexa, Echo, Facebook, Google, Spotify--they observe us (they learn from us to make recommendations)

what is an attitude?

the vast majority of marketing research measures consumer attitudes
attitude: an enduring predisposition to consistently respond in a given manner to various aspects of the world; composed of affective, cognitive, and behavioral components
-a global evaluation (like or dislike)
-consistently respond to various things (Dr. Madden likes Starbucks over and over)
-long-lived (mostly; but attitudes can change--Dr. Madden used to think Starbucks was overpriced and used to have a positive attitude toward Folgers and Maxwell House)
-magnitude and direction (the two most important characteristics to marketers)--magnitude means how strong the attitude is; direction means whether it's positive or negative
components of attitudes--you need to measure these:
-affective (emotions, feelings, moods)
-cognitive (thought processes, your knowledge)
-behavioral (how you physically respond to stimuli)--since you can't measure behavior directly, measure what's expressed as behavioral intention
McDonald's vs. Taco Bell breakfast--Taco Bell tracked down 20 different men named Ronald McDonald, and all 20 preferred Taco Bell's breakfast over McDonald's

behavioral intention

theory of reasoned action: intentions often translate into behavior; marketers are interested in what people are likely to do in the future
example: how likely are you to purchase a Honda Civic? (I definitely will buy, I probably will buy, I might buy, I might not buy, I probably will not buy, I definitely will not buy)--measuring the likelihood of someone doing something

software to make questionnaires interactive

variable piping software: allows answers from previous questions to be inserted into future questions ("explain why you said 'xyz'"); may also rotate questions and response categories (useful if you don't want respondents getting the questions in the same order--reduces primacy and recency effects)
error trapping software (prompting): gives an error message if the respondent does something inappropriate (e.g., "do not use the forward or back keys"--if you accidentally press one, a message pops up telling you not to do that)
forced answering software: prevents respondents from skipping questions (many online surveys put multiple grid questions on one page, so if you leave an answer out, it tells you to go back and answer it before you can move on)
interactive help desk: real-time support; if you have a question, click the "help" button and you will be connected

the nature of observation studies

visible observation: observation in which the observer's presence or mechanical measurement device is obviously known to the subject (Neuroco brain wave example--a research technique where participants are hooked up to electrodes and then shown ads or packaging while their brain waves are observed)
hidden observation: observation in which the subject is unaware that observation is taking place (Envirosell)
unobtrusive observation: no communication with the person being observed is necessary, so he or she is unaware of being an object of research
advantages over surveying:
-data are free from distortions, inaccuracies, or other response biases; you're getting the real deal
-data are recorded as the actual and nonverbal behavior takes place; you're not asking people to think back on something they've already done

some more questions

what type of category labels, if any?
--verbal labels help respondents better understand the response positions
--the maturity and educational levels of the respondents will influence the labeling decision (if you're surveying college-educated people, you can use very descriptive labels)
how many scale categories or response positions?
--five to eight points are optimal for sensitivity (Dr. Madden says 8 is pushing it... he almost always uses 5-7 because there's a midpoint, and respondents like that)

who really does housework?

women reported spending 47 hours a week on housework, compared to men's 23 hours
really? the numbers indicate the average married couple spends 70 hours a week on housework--is that plausible?
how is housework defined? response bias may be present, too

