Research Methods
4 Methods of Gathering Survey Data
Listed below...
2. Computer-Administered/Computer-Assisted Surveys
o Computer-administered → computer has integral role in posing questions and recording answers o Computer-assisted → technology plays a support role in the interviewer's work o Types • CATI = computer-assisted telephone interview (interviewer uses computer to read/input) • IVR/Fully computerized interviews (computer is programmed to administer questions) • Online interviews (respondents answer questionnaires on the internet) o Advantages • Speed!!!! • Relatively inexpensive • Relatively error-free interviews (caveat: incorrect programming or respondent mistakes) • Ability to include pictures, videos, and graphics • Quick/real-time capture of data • More control by the researcher (e.g., can have more complex skip patterns) • Reduction of "socially acceptable" answers o Disadvantages • High set-up costs • Technical skills may be required • Requires computer-literate and internet-connected respondents (sample bias!)
List the four "don'ts" of questionnaire wording
• 1. Don't lead the respondent (i.e., by using questions that have strong cues on how to answer) o Ex: "Don't you think these new cigarettes are more healthy?" • 2. Don't use loaded wording or phrases (i.e., those with emotional overtones) o Ex: "Do you think cigarettes are dangerous to sell to children?" • 3. Don't use double-barreled questions (two questions rolled into one) o Sometimes the response categories can also be non-mutually exclusive (e.g., "employment status") • 4. Don't use words that overstate the condition
List the guidelines for conducting a depth interview
• 1. Funnel questions from general → specific → general. o Don't ask heavy-hitting questions at first - let them get comfortable beforehand. • 2. Do not ask "Why?" o Ask: "How is that?" "Can you tell me more about that?" "Can you please explain that a bit more?" • 3. Do not ask yes/no questions (exceptions: customer/employee satisfaction, behavior questions) • 4. Use probes strategically without interrupting the flow • 5. Try to circle back to earlier topics for greater depth • 6. Be willing to explore tangential topics (in both qualitative and quantitative studies!) o "Is there anything we haven't covered that you'd like to address?"
Describe the five elements of ethnography in practice
• Applied ethnography seeks to understand the experiences of people and how their lives are shaped by their experiences; ethnography in practice: o 1. Focus - expected to contribute to marketing strategy or tactics by revealing consumer (or employee) wants, desires, attitudes, and behaviors o 2. Immersion/time - 8 week or so period that's spent in the culture/community o 3. Context - how people act/react and express themselves; how they interact w/ or use products o 4. Methods - little time spent on methodological explication (unlike academics) o 5. Results - PowerPoint, illustrative video clips, videography, etc. → produced for customer's benefit
Give examples of applications of psychology to business problems
• Marketing/market research • Employee training and retention • Employee satisfaction • Program evaluation • Human factors • Legal issues
List what areas standardized data may be applied
• Measuring consumer attitudes and opinions • Defining market segments • Monitoring media usage and promotion effectiveness (e.g., Nielsen ratings) • Conducting market-tracking studies
Describe the mechanics of piloting a questionnaire
• Piloting and pretesting are totally different! • Pretest = a dry run of a questionnaire to find and repair difficulties that respondents encounter while taking the survey • Piloting can be done informally • Soft-launch • Start and then stop • Get feedback from interviewers • Not always feasible
List the elements of observational interviews
• Same guidelines as depth interviews • May have more than one researcher • Takes place in a naturalistic setting • Disguised vs. undisguised (whether you're aware that you're being researched) • Researcher can observe and ask questions • Dyadic (one-on-one) interview between researcher/informant • Trust is an important element
Distinguish the different types of question formats
• 1. Open-ended o Unaided (E.g., What brands of gasoline can you think of? → Just let consumers tell you brands) o Aided (E.g., Have you heard of this brand, yes or no? → Let respondents choose from a set of brands.) • 2. Categorical o Dual choice (E.g., yes/no questions) o Multiple choice (E.g., select all that apply; what kind of health problems run in your family?) • 3. Metric o Natural (They give us their own answer.) • Ranking • Constant sum o Synthetic (Predefine what the categories are.)
Demonstrate how to pre-code questionnaires
• Coding: using numbers associated with/assigned to each unique question response option in order to facilitate data analysis after survey has been conducted • Rules: o Set up the coding system before the questionnaire is finalized o All closed-end questions should have a code number associated w/ every possible response. o Start with "1", move to the next response as a "2", and so on. o "Check all that apply" are variables within the same question (1=yes, 0=no for each option) o Use the same coding system for all questions w/ identical response options (regardless of where)
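The rules above can be sketched in code; the question names, response options, and code numbers below are all invented for illustration, not taken from any real questionnaire:

```python
# Hypothetical pre-coding scheme set up before the questionnaire is finalized.
# Closed-end questions: every response option gets a code, starting at 1.
PRECODES = {
    "q1_gender": {"Male": 1, "Female": 2, "Prefer not to say": 3},
    "q2_satisfaction": {"Very dissatisfied": 1, "Dissatisfied": 2,
                        "Neutral": 3, "Satisfied": 4, "Very satisfied": 5},
}

# "Check all that apply" options are separate variables within one question:
# 1 = yes (checked), 0 = no (not checked) for each option.
CHECK_ALL = ["q3_owns_car", "q3_owns_bike", "q3_owns_boat"]

def code_response(question, answer):
    """Translate a verbatim closed-end answer into its numeric code."""
    return PRECODES[question][answer]

def code_check_all(checked):
    """Return 1/0 codes for each check-all-that-apply option."""
    return {opt: (1 if opt in checked else 0) for opt in CHECK_ALL}

print(code_response("q2_satisfaction", "Satisfied"))  # → 4
print(code_check_all({"q3_owns_car"}))
```

Identical response options (e.g., the same satisfaction scale on several questions) would reuse one entry of `PRECODES`, matching the rule that the same coding system applies regardless of where the question appears.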
Describe the different types of validity
• Criterion/Predictive Validity (some specific criterion against which we're measuring our results) o E.g., SAT scores are "validated" by showing that high scorers really do perform better in college • Content/Face Validity (the extent to which an empirical assessment reflects a specific domain of content; i.e., does it appear to answer the question that it's asking?) • Construct Validity (make sure to measure the construct that we think we're measuring) • Statistical Validity (significant findings in results?) • External Validity (generalizability! How well does your sample generalize to the population at large?)
Compare and contrast the six different classes of data collection aids
• Data collection aids help better elicit and capture data o Range is from simple to sophisticated - but sometimes simpler is better! • 1. Written notes • 2. Audio recording o Should contain: Place; Time; What's happening; Why; Ppl present/interviewed (observations/analysis) o Check equipment prior to interview → Transcribe audio recordings afterwards → Lock file at the end • 3. Still photography o 1. Perspectives of action (image) OR o 2. Perspectives in action (convey the image and the meaning of what's going on as well - richer) o Used for: • Primary data collection → Recording information AND Eliciting information • Secondary data collection (caution here!) o Ethical questions about photography? • 4. Audio-visual recording (so it can be transcribed later) o Same "rules" apply to videos as to photos o Be careful of sound quality (e.g., mind the wind) o Convey emotion; Body language and gestures; Tone of voice → More powerful than just photographs o Some participants may like being filmed, some may not - and then again what about ethical issues? o Can be transformative in the dissemination of results • 5. Participant-produced materials o Maps; Photos; Video; Archival sources/secondary sources like YouTube o Advantages: More engaging; Info that may not be otherwise revealed; Can then use it for elicitation • Other aids o Tangible artifacts: Maps; Brochures; Objects/consumer goods; Business cards; Articles/handbooks o Also: Smartphones; Tablets; Email
Describe the features on which to evaluate survey software
• Ease of use/accessibility (we're not programmers!) • Look and feel (can you control this? If your clients are picky, this will be a big consideration!) • Survey creation (who is doing this? Who can have access to the software?) • Customization (does the program have the ability to meet the client's needs, to collect data how the client wants? If not, you may have to rewrite the script or tell your client no!) • Security (where is the data housed? If in a cloud, carefully document the security procedures of the host company!) • Functionality (does it have all the functions you normally need?) • Help and support (if you need help, will you be able to get it?) • Price (can get very expensive; the more customized the software, the more expensive)
List the important implications of causal research
• Experimentation → Determination of Causal Relationships (Establishes Causation!) o Experiment = manipulating an IV to see how it affects a DV o The variables must be related (statistically) o Temporal precedence must exist (treatment first → then outcome) o Must also control the effects of any additional extraneous variables so they're not influencing results
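A minimal simulation of these requirements - random assignment controls extraneous variables, and the treatment precedes measurement of the DV. The outcome model and the +2 "true effect" are invented for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Random assignment of 100 simulated subjects to treatment vs. control.
subjects = list(range(100))
random.shuffle(subjects)
treatment, control = subjects[:50], subjects[50:]

def measure_dv(subject_id, treated):
    # Simulated DV: the treatment adds a true effect of +2 on average,
    # plus normally distributed noise from extraneous influences.
    return random.gauss(10 + (2 if treated else 0), 1)

# Temporal precedence holds by design: assignment, then measurement.
t_scores = [measure_dv(s, True) for s in treatment]
c_scores = [measure_dv(s, False) for s in control]

# The observed group difference estimates the causal effect of the IV.
print(round(statistics.mean(t_scores) - statistics.mean(c_scores), 1))
```

Because assignment is random, extraneous subject characteristics are (on average) balanced across groups, so the mean difference can be read causally - the core logic the bullets above describe.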
Describe how to prepare for a depth interview
• Familiarize yourself w/ topic before going into field (e.g., literature reviews, perspective of clients) • Gain some distance from the topic, and forget what you've learned. • Have a recorder ready (including new batteries) • Do take notes during the interview (Helps recall/interpretation AND Makes you "look" engaged)
State the history of SPSS
• Founded in 1968 by Norman Nie (a political science student at UChicago) • Needed a way to analyze his dissertation data (there was no good package to analyze data then) → got help from a couple of computer scientists to help him write this • SPSS now widely used among social scientists (more mathy statisticians use SAS); bought by IBM in 2009
Explain the basics of questionnaire organization
• Introduction o Not a sales pitch - a crucial period because it sets the stage for answering questions! o 1. Sponsor Identity • Disguised → sponsor's name is not divulged; Undisguised → sponsoring company is identified o 2. Purpose/content (should be described clearly and simply) o 3. Describe how respondents were selected (must be made aware of how/why they were selected) o 4. You must ask for their participation; then would be the time to also → Offer Incentives! • Anonymity = respondent not identified at any point • Confidentiality = respondent is identified but their identity not shared w/ 3rd party • Ex: "All of your answers will be kept confidential and will only be reported in summary form." • Can ask to share answers at the end of the survey though o 5. Screening Questions • Used to select respondent types desired by researcher to be in survey (and exclude inappropriate ones) • Quotas: specific counts of respondents who meet specific criteria (related to weighting!) • Main Questionnaire/Body o Flow: pertains to the sequencing of questions or blocks of questions o Warm-up questions = quick and easy questions used to get the respondent's interest o Transitions (statements/questions) = used to let person know that changes are about to happen • Ex: "Next we have a few questions about the types of cat food you buy." o Skip patterns = embedded in questionnaire to ask questions relevant to only certain respondents o Difficult-to-answer questions = should come in the middle of the questionnaire ("bury" those!) o Classification Questions = "personal" and possibly offensive questions → placed at end/wind-down • Classification questions often used for profiling or analysis purposes; can include • Demographics or Firmographics • Questions specific to the topic but extraneous to the main questionnaire o Ex: "How would you rate the energy efficiency of your home?"
Articulate basic concepts in measurement
• Measurement: Determining the description or amount of some element of interest to research objective. o Measurements should be helping us to solve the business problem/research objective • Properties: Specific features, attributes, characteristics of object that can distinguish it from other objects • Objective vs. subjective properties • Objective = observable or tangible • Subjective = unobservable and intangible (thus must be translated onto a rating scale through the process of scale development) • Properties are specified through operational definition, or operationalization • Operationalization of responses o (Operationalization - how we take our research objective and specify the level of measurement to be used and then translate it into questions for our questionnaire) o Open-ended responses → non-standardized o Categorical → discrete alternatives (e.g., yes or no) o Metric → amounts/levels • Natural: Order; Distance • Synthetic: Numeric; Label/completely anchored
List the situations in which coding can be applied
• Survey data • Focus group data • In-depth interviews/one-on-one data • Archival data o Public records/documents o Newspaper/magazine articles or ads • Sales persons or CSR comments/observations • Customer comments • Web comments
Define reliability
• Consistency/stability of measurements • Extent to which experiment, test, or any measuring procedure yields the same results on repeated trials • Identical or very similar responses from the same respondent
List the caveats to qualitative research
• May not be the best research method in given cases • Often seen as quick and cheap - not seen as rigorous (an erroneous view - but still present) • Often seen as "exploratory" (customers may think that researchers are lacking in a good hypothesis) • Qualitative researchers have difficulty not complying with client requests • Must always generate actionable implications for managers/decision makers!
Define secondary data
• Primary data = information that is gathered by the researcher specifically for the project at hand • Secondary data = previously gathered by someone other than researcher and/or for some other purpose
Articulate why reliability and validity are important in applied research
• Scale construction • Test sensitivity • Prediction • Generalizability
List the steps in the research process
• 1. Identify problem/area of study • 2. Gather information (about nature of problem) • 3. Create research objective (often the most important part!) • 4. Formulate hypotheses • 5. Design research study • 6. Collect data • 7. Analyze/interpret data • 8. Report data • 9. Receive feedback (on data, interpretation, and report)
Define Validity
• Appropriateness/meaningfulness of measurements • The extent to which any measuring instrument measures what it is intended to measure • The truthfulness of responses to a measure
Apply various factors when choosing a data collection method
• Balance quality against... • Survey time horizon • Survey data collection budget • Type of respondent interaction required (e.g., if you need super detailed stuff, the internet is not for you) • Special considerations o Incidence rate (% of the population possessing some characteristic necessary to be included in the survey) • General population survey - anybody - 100% - but if you're looking for very specific people, a niche group (pregnant women who use Apple computers), finding people is going to be harder o Cultural norms and/or infrastructure considerations (i.e., limitations of communication systems - in different parts of the world, less internet access)
Articulate the classifications of secondary data
• Internal Data (data collected within the firm) o Database marketing records (process of building, maintaining, and using customer databases and other databases (products, suppliers, resellers) to contact, transact, and build customer relations) • CRM (Customer Relationship Management Data) (e.g., sales center records) • HR records (e.g., absences, lateness, reprimands) o Data mining (e.g., software that helps managers make sense out of sales data, transaction data, etc.) • External data o 1. Published: (a) Academic, (b) Industry o 2. Syndicated Services Data (provided by firms that collect data in a standard format and make them available to subscribing firms) o 3. External databases (supplied by orgs outside the firm, searchable by search engines; e.g., LinkedIn)
Describe how exploratory research informs the researcher
• Often just discussions with your client in order to get more context about research problem • Uses: o Gain background information (context in which you're conducting research) o Establish research priorities o Clarify problems and hypotheses o Define terms and concepts (unfamiliar words or familiar words that are being used differently)
Apply the process of coding open-ended questions
• Sort the codes to get like responses together • Look for large groups of similar answers • Create a code for each large group of responses • Start coding the data, applying previously created codes • Revise codes as needed, giving non-fitting responses a generic code of an "other" • Once data is initially coded, then sort the codes and recode others as needed, using codes that have been created during the process
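The steps above can be sketched as a tiny keyword-based coder; the verbatim answers and the code frame below are invented for illustration - a real code frame would emerge from sorting and grouping the actual responses:

```python
from collections import Counter

# Hypothetical verbatims to "Why did you choose this brand?"
responses = ["it was cheap", "low price", "good value", "my friend uses it",
             "cheapest option", "recommended by a friend", "tastes great"]

# Code frame created after grouping similar answers; 99 = generic "other"
# code for non-fitting responses.
CODE_FRAME = {
    1: ("price/value", ["cheap", "price", "value"]),
    2: ("word of mouth", ["friend", "recommend"]),
    99: ("other", []),
}

def assign_code(text):
    """Apply the first code whose keywords appear in the response."""
    for code, (_, keywords) in CODE_FRAME.items():
        if any(k in text.lower() for k in keywords):
            return code
    return 99  # non-fitting → "other"

coded = [assign_code(r) for r in responses]
print(Counter(coded))  # tally of each code's frequency
```

In practice the "revise codes as needed" step means re-running a pass like this after editing `CODE_FRAME`, then re-sorting the "other" responses to see if a new group has emerged.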
List the elements of a successful qualitative report
• Tell a story/narrative. • Make it useful and actionable (link up need for action w/ the info you found to help direct action) • Keep it short and tight. • Make good use of visuals (e.g., use infographics - charts and graphs.) • Use direct quotes (text, audio, video) - these can add life and energy. • Devise personas (help to show the emotions and dreams and fears of your respondents) • Make it sharable (i.e., research memes - short video, powerful diagram, etc.)
Describe why the type of measurement scale is important
• The choice of the scale is important because: o It affects how long it will take your ppl to fill it out o It affects the type of analysis to be performed (garbage in, garbage out!) o It affects how we display the results in reporting
Describe qualitative social network analysis
• The use of network theory to analyze social networks (network of social relationships) • 2 main elements in a social network o 1. Social actors themselves ("nodes") o 2. Relationships between them ("ties") • Useful for: Word of mouth; Diffusion of innovation; Key influencers • Applications: Segmenting markets; Targeting (ppl w/ lots of ties best); Positioning; Employee recruitment
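The nodes-and-ties idea can be sketched with a plain adjacency structure; the actor names are invented, and using tie count (degree) as a proxy for influence is an illustrative simplification of the "ppl w/ lots of ties best" targeting heuristic:

```python
# Minimal social network: actors ("nodes") mapped to the set of actors
# they have relationships with ("ties"). Names are made up.
ties = {
    "Ana":  {"Ben", "Cara", "Dev"},
    "Ben":  {"Ana"},
    "Cara": {"Ana", "Dev"},
    "Dev":  {"Ana", "Cara"},
}

# Degree (number of ties) as a crude "key influencer" score - the kind of
# actor you'd target first for a word-of-mouth campaign.
influencers = sorted(ties, key=lambda actor: len(ties[actor]), reverse=True)
print(influencers[0])  # → Ana (most ties)
```

Real analyses go beyond degree (e.g., who bridges otherwise-disconnected groups), but the two elements - nodes and ties - stay the same.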
Give examples of applied vs. basic research
• Pure research o Academic o Hard to apply in business settings o Done out of curiosity o Concerned w/ refuting or supporting theories o Ex: Do people like to congregate in landscape settings or in hardscapes? (Just to know!) • Applied research o Non-academic o Easier to apply in business settings o Done to solve real-world problems (just trying to solve the biz prob, not root theories!) o Ex: How do we develop parks that people will like better? (May lower blood pressure!)
Discuss the difference between standardized data and other types of information
Standardized information is a type of secondary data in which the data collection process is standardized for all users *and* is reported in the same format for all users; two kinds:
Describe the process of designing a quantitative questionnaire
o 1. Question Development • The practice of wording questions that are: • Understandable • Unambiguous • Unbiased • It also involves the selection of appropriate response formats (question scaling) o 2. Question Evaluation (used to scrutinize possible questions for question bias) • Question bias = when question's wording influences respondent's answer - avoid this! o 3. Client approval o 4. Revise as needed o 5. Programming (i.e., set up question response codes) o 6. Pilot (NOT pretest!)/soft-launch o 7. Revise as needed o 8. Finalize and duplicate o 9. (Field the survey)
Describe the three types of empirical inquiry
• 1. Descriptive o Qualitative or quantitative info about the sample you're studying o Ex: customers'/employees' attitudes, demographics, levels of satisfaction/quality • 2. Relational/Correlational o Quantitative only → Lacks causality o Ex: relationship between employee satisfaction and employee loyalty • 3. Experimental o Experimentation necessary for this (DV, IV, manipulation) → Establishes causality o Ex: How do we build a better product?
List the four "do's" of questionnaire wording
(Questions should...) • 1. Focus on one topic (e.g., no double-barreled questions) o Ex: "Did you buy this brand of cat food because it was high-quality and cheap?" • 2. Be brief • 3. Be grammatically simple (e.g., no double negatives) o Ex: "Do you feel that the new cigarettes are less unhealthy?" • 4. Be very clear, specific, and unambiguous
List the five traditions in qualitative research and what questions they ask
o 1. Phenomenology • Seeks deep understanding of lived experiences → Asks about the nature of ppl's lived experiences • Not generalizable - this approach doesn't even seek universal understanding! • Not common in applied research • Data Sources: Introspection; Depth interviews; Diaries, letters, etc. o 2. Hermeneutics • Assumes that understanding is based on language and influenced by culture → Asks how cultural notions shape specific kinds of experiences and actions • Not generalizable because this method assumes that language influences our thinking • Data Sources: Depth interviews; Observations; Archival data o 3. Postmodern • Committed to a critique of taken-for-granted assumptions and practices that marginalize or oppress groups → Asks what factors contribute to the oppression/marginalization of some groups and how this can be alleviated → Tries to identify possibilities for change • Rarely done in market research (a little too academic in nature, says Thelen) • Data Sources: Texts; Ethnography o 4. Semiotics • Focuses on the structure of meaning-producing events (both verbal and non-verbal) • Investigates sign systems, codes, symbols (advertising) → Asks how specific words, phrases, gestures, images, products, or practices within a symbol system acquire meaning • Rarely done in market research (a little too academic in nature as well) • Data Sources: Archival texts; Interviews; Observations o 5. Neopositivism • Positivism is the quantitative approach to science (empiricism) - what's used most often in AP • Looks for patterns, regularities, relational explanations → Asks what factors help explain a particular phenomenon or the consequences that may arise when the phenomenon occurs • Data Sources: All kinds, especially depth interviews; Triangulation of different data sources • Choice of empirical method can dictate: o 1. How to collect the data, o 2. How to analyze the data, o 3. How to interpret the data • Remember: research is not linear - it's an ongoing cycle! Qualitative work may span different methods/traditions! And these traditions are ever evolving!
4. Mixed-mode
o A mixture/hybrid of two or more modes o Advantages • Multiple avenues to achieve data collection goal - can access the advantages of each! • Allows for more complexity o Disadvantages • Survey mode may affect response • Additional complexity (i.e., differences in instructions; integration of data from different sources)
Describe Netnography as Applied Research
• Brand communities (Apple) • Consumer communities (Travel boards) • User groups (Sawtooth) • "Guilds" (Conjoint) • Employee experience
Evaluate secondary data sources
• Sources o Reference guides o Indexes and databases o Dictionaries and encyclopedias o Directories o Statistical sources (e.g., census) • Evaluating secondary data—reliability o Original purpose (i.e., why was it collected? What was the true purpose?) o Who collected the data (i.e., is agency reputable? Study may not have been objective, if not!) o What information was collected? (i.e., operationalized definitions may differ!) o How was the information obtained? (e.g., survey, database; large sample? Response rate?) o How consistent is the information with other sources? (If differing, investigate why!)
Discuss what questions are addressed by descriptive research
• Used to describe characteristics of a population or phenomenon being studied - not trying to intervene! o It does not answer questions about how/when/why the characteristics occurred. o Rather it addresses the "what" question (What are the characteristics of the population/situation studied?) • Cross-Sectional Studies o Measure units from a big sample of the population at only one point in time (i.e., "snapshots") o Sample Surveys = large cross-sectional studies whose samples are drawn so that they are representative of a particular research population. o Convenience samples = not systematically sampling research population - but people easiest to find o Ad-hoc survey? • Longitudinal Studies o Repeatedly measure the same sample units, or "panels," of a population at periodic intervals over a period of time (the panels agreed to answer questions at the beginning) o Market Tracking Studies = measure some specific variable of interest (share, sales) over time o Syndicated studies = companies continually do surveys in hope that people will buy the results
Articulate the relationship between reliability and validity
• Validity is usually seen as more important than reliability in the applied field • Must have reliability in order to have validity (i.e., Reliability is necessary - but not sufficient - for validity)
Describe alternative observational methods
• Videography o Uses: Videotaped interview; Naturalistic observations; Video diary; Use of smartphones/pictures o Benefits: Humanizing documentation; Captures subtle temporal, social, spatial dimensions; o Easier now with technology! • Other forms of observations o Online observations; Archival/texts; Physical traces (e.g., how a room is left after a class); "Garbology" • Plusses/Minuses of Observations o PLUS: Rich in detail and contextual (very context-sensitive) o MINUSES: Expensive; Time-Consuming; Low Internal and External Validity
List the processes of the scientific method
• 1. Question formulation (often the most important part!) • 2. Hypothesis/conjecture formulation • 3. Prediction • 4. Testing: experimentation/observation o Experimentation = involves some sort of manipulation; usually quantitative in nature o Observation = not actively manipulating the environment in order to affect the outcome • 5. Data analysis o Qualitative o Quantitative • May also include: o Data sharing o External review (rare in AP) o Replication (rare in AP)
List the characteristics of applied behavioral research
• 1. External observation (can't just conjecture about what is going on!) • 2. Uses theories and language associated with a field of study (APs develop this language for clients) • 3. Behavior is not governed by strict laws (a little more squishy than the hard sciences) • 4. Behavior is perceptible (measurable), and our measurements of it have to "make sense" • 5. Scientific study has an element of creativity (clients often have novel problems needing novel analysis) • 6. There are limits to our scientific knowledge (predictions are often poor - but this keeps APs in biz!)
Compare and contrast the advantages and disadvantages of standardized data
Advantages • Shared costs/cheaper • Data quality is high • Data is disseminated very quickly Disadvantages • Little or no control over the data collected • No strategic information advantage • Long-term contracts are often required (once a yr for several yrs)
Compare and contrast the four differences between qualitative and quantitative data
• Qualitative o Nature of data: richly detailed data (not quantified); concepts illustrated through recordings (words, pictures, videos, or all three) o Contextualism: results assumed to be specific to the time, place, ppl, and culture studied (narrow context → not generalizable) o Naturalism: ideally naturalistic, w/ multiple factors shaping the behaviors observed and discussed (in situ (in place)) o Role of researcher: researcher = the data collection instrument; uses ppl skills/rapport to gain good insights based on trust • Quantitative o Nature of data: concepts reduced to scales or binary variables; responses distilled into numeric scores o Contextualism: results are generally assumed to be generalizable across contexts and cultures (decontextualized) o Naturalism: ideal settings controlled and variables manipulated/measured to allow simple causal inferences o Role of researcher: researcher = invisible; relies on responses to structured measures or choices
Discuss the difference between syndicated data and standardized data
o 1. Packaged Services • Prepackaged marketing research process that is used to generate info for a specific user • E.g., Bass's / M/A/R/C; Mystery shopping • Advantages • Time-tested • Reduced costs • Speed/efficiency • Disadvantages • Not customized data collection - and not customized reporting either o 2. Syndicated Data • Syndicated data = data collected in a standard format and made available to all subscribers • E.g., Scarborough (media reporting agency; media/product consumption); Claritas (provides segmentation of customers in a certain zip); Dun & Bradstreet
1. Person-Administered Surveys
o Interviewer reads questions and records the answers on paper. o Types: • In-home (interviewer comes to respondent's home during appointment time) • In-office (interviewer comes to respondent's place of work) • Central telephone location (interviewer works in an office calling people on the phone) • Mall-intercept (shoppers in a mall are approached and asked to take part in the survey) o Advantages • Feedback (adjust to the person) • Rapport (actual person can create trust) • Quality control (ensures that right type of person is being asked right things) • Adaptability (to person taking survey) o Disadvantages • Humans make errors (e.g., skipping over questions, changing the wording) • Slow speeds (person with paper) • High cost (interviewers are well-trained) • Fear of interviewer evaluation (interviewer's presence creates anxieties/causes ppl to alter responses) • Representativeness (okay - just not as good at this as phone surveys are)
3. Self-Administered Surveys
o Respondent completes the survey on his or her own; no human administers the interview o Types: • Mail surveys (questionnaires mailed to prospective respondents) • Group-administered surveys (people take the survey in a group context) • Drop-off surveys (questionnaires left w/ person to fill out) o Advantages • Reduced costs and increased efficiency • Respondent control (relaxed and leisurely) • No interviewer-evaluation apprehension (i.e., no socially desirable responding) o Disadvantages • Respondent errors (people can pick and choose what questions they answer) • Non-response bias/self-selection (people who get it in the mail may just decide not to answer it) • Lack of complexity (can't be too onerous for participants - nothing fancy w/o lots of instructions) • Lack of supervision (can't supervise or encourage person to keep going)
Compare and contrast the advantages and disadvantages of focus groups
• Advantages o Relatively quick and easy (for APs) o Allow clients to observe participants first-hand (e.g., 1-way mirrors) o Generate fresh ideas (better than depth interviews) o Broad spectrum of use - employees, customers, HR, etc. o Allow easy access to special groups/niche samples • Disadvantages o Non-representative (due to small sample size) o Results can be difficult to interpret o Expensive • When should they be used? o Best used for exploratory research! (May find answers to issues - but won't be accepted by journals) • Especially good when researchers want to examine shared meaning and shared terminology • May also work well for taste and smell tests or reactions to package design • Also for investigating certain types of group dynamics (males v. females reaching consensus) o Topics in which the intent is to elicit long narratives or to gauge attitudes? → Don't use them
Articulate the key terms in survey design
• "Look and feel" (color, layout, design, etc.) • Single-select questions (one answer per question) • Multiple-select questions (check all that apply) • Grid-style questions (multiple rows of questions, columns at top) o Most often single-select questions (e.g., how satisfied w/ these things are you?) o Could be multiple-select (e.g., Who in the family has what disorders?) • Quotas/quota control (you want your sample to have a certain quota - can get complicated, e.g., age group by gender, etc. - then, after it's met, no others can come in survey) • Survey flow o Skip pattern (if A → then skip these questions) o Branching (if B → then go to this branch of questions) • List building (select all that apply → then responses carried over into new questions; dynamically creating lists of options, more advanced/expensive option; e.g., if experience with C, how satisfied were you?) • Randomization (important to limit your biases in questionnaire!) o Items (changing the order in which list items in single-select or multi-select appear) o Questions (eliminating order bias - position of questions on page) o Blocks (different concepts put in different order; frequently done when testing 2 advertisements) • Piping (take response from earlier questions → pipe into later question; more to do w/ selecting appropriate questions for respondent; e.g., how satisfied? Why very satisfied?) • Data export (most all export to Excel, probably to SPSS too) • Hosting/delivery method (where survey is hosted; the server) o On-premise (upload to your own server → more secure) o Software as a Service, SaaS (uploaded to a cloud - most surveys - hosted by a company offering the software; becoming a problem nowadays, so be sure to take out any ID'ing info, names, account #s)
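Skip patterns, branching, and piping can be sketched as plain conditional logic; the question IDs, prompts, and canned responses below are invented to show the mechanics, not drawn from any real survey platform:

```python
# Sketch of how a survey engine applies skip patterns and piping.
answers = {}

def ask(qid, prompt, response):
    """Stand-in for real data collection: record a canned response."""
    answers[qid] = response
    return response

ask("q1_owns_cat", "Do you own a cat?", "yes")

# Skip pattern: the cat-food questions are relevant only to cat owners;
# everyone else skips straight past this block.
if answers["q1_owns_cat"] == "yes":
    ask("q2_brand", "What brand of cat food do you buy?", "Brand X")
    # Piping: an earlier answer is carried into a later question's wording.
    ask("q3_why", f"Why do you buy {answers['q2_brand']}?", "my cat likes it")

print(sorted(answers))  # cat owners see all three questions
```

Branching works the same way with an `elif`/`else` sending other respondents down a different block of questions; quota control would add a counter check before `ask()` admits a respondent into the survey.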
List the seven themes to watch out for when coding
• 1. Metaphors (coding these can help you figure out how people are making sense of their own reality) • 2. Strong emotions (provide clues as to what is important to your participants) • 3. Phrases that are imported/used in a new context (may point to context-spanning discourses that structure the ways people think about the particular context you are studying) • 4. People/actors who matter in that context (may lead you to ID other categories of actors or to code the types of interdependencies that may exist) • 5. Behaviors (noticing them can sensitize you to actions that are rare or considered illegitimate) • 6. Motives (consider purposes that might lie behind the creation of this text) • 7. Contradictions (being aware of these can help you think about what other data you might need to collect to solve the problem - and might open up new research questions for consideration)
List the threats to validity of experimental designs
• 2 types of Validity o Internal: Concerns the extent to which the change in the DV was actually due to the IV (i.e., was the proper experimental design used/implemented correctly, were external sources of variation controlled for?) o External: Concerns the extent to which the relationship observed between the IV and DVs during the experiment is generalizable to the "real world" (the more representative the sample, the higher the EV) • History (Did some unanticipated event occur while the experiment was in progress (i.e., between the first measurement and the second measurement) and affect the DV?) • Instrument Reactivity o Instruments directly affect subjects (i.e., leading people to behave differently) so that true measurements are distorted o Bias: Instrument reacts w/ treatment so that subject responses to various treatments are changed and differ from treatment to treatment • Unreliability of instruments (isn't always measuring what it is supposed to be measuring the same way) • Invalidity of instruments (isn't measuring what it's actually supposed to be measuring) • Instrument change over time (if the society/behavior measured has changed, but the instrument hasn't kept up) • Differential subject loss (loss of subjects due to refusal to continue, death, injury, etc.) • Hawthorne effect (by the nature of your presence as researchers, you are eliciting a change) • Nonrepresentative samples (samples so specialized that conclusions can only be made for a limited pop)
Explain the different experimental designs
• 3 Experiment Types o 1. True experiments (i.e., laboratory studies) → internal validity • Typically done in labs, where effects of IV on DV can be isolated, extraneous variables can be controlled (through use of a control group), and random assignment is done o 2. Natural experiments/quasi-experiments • No random assignment - how subjects naturally fall may produce confounds; thus, does not properly control for effects of EVs on DVs o 3. Field experiments → external validity • IV is manipulated and the measurements of DV are made on test units in natural settings • Potential confounds due to lack of a controlled environment - but can have random assignment • ONE-GROUP DESIGNS (non-experimental, since they do not involve assignment of subjects to conditions) o 1. Pretest-Posttest (O1 X O2) • Pre-E test/measurement of DV → exposure/treatment of IV → post-E test/measurement of DV o 2. Interrupted time series (O1 O2 X O3 O4) • 2 observations → experimental treatment → 2 observations (extension of pretest-posttest) o 3. Correlation Designs (included w/ other one-group studies - purely observational) • 3a. One-Point/Cross-Sectional Designs (O) • All measurements are taken at one point in time; O = all observations on all variables • 3b. Tracking/Cross-Sectional Designs (O O') • Take place over time - but only a single measurement is made on each different variable • 3c. Longitudinal/Time Series (O1 O2 O3 O4) • Two or more measurements of variables in a study taken on the same group of subjects (but measurements done at different periods of time) • MULTIPLE-GROUP DESIGNS (experimental designs; comparisons of 2 or more treatment groups that differ along a single independent variable) o Two-Group Design • Simplest experimental design; involves 2 variables: 1 IV and 1 DV • Experimental group = (R) O1 X O2 • Control group = (R) O3 O4 o Ex Post Facto Design (Matching) • Done in observational studies where one variable can be ID'ed as independent, or in experimental designs where it was not possible to randomly assign subjects • Experimental group (matched) = O1 X O2 • Control group (matched) = O3 O4 • FACTORIAL DESIGNS • MULTIVARIATE DESIGNS (multiple IVs or DVs)
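The "(R)" in the two-group design stands for random assignment, which is easy to show concretely. A minimal sketch - the group names and the even split are illustrative assumptions, not part of the notation:

```python
import random

def randomly_assign(subjects, groups=("experimental", "control"), seed=None):
    """The (R) step: shuffle subjects, then deal them out to conditions."""
    rng = random.Random(seed)      # a seed makes the example reproducible
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, subject in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(subject)
    return assignment
```

Because assignment to conditions is determined by chance rather than by any subject characteristic, pre-existing differences are (on average) spread evenly across groups - the key to internal validity.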
Define applied research
• A form of systematic inquiry involving the practical application of science. • Accesses some part of the research community's (academia's) accumulated theories, knowledge, methods, and techniques - and uses them to solve specific problems (e.g., gov't, business, client-driven purposes)
Compare and contrast the advantages and disadvantages of secondary data
• Advantages o Readily available (for almost any application) o Can be obtained quickly (i.e., days or even hours) o Inexpensive/cost-effective (compared to collecting primary data) o May enhance primary data (a current look at the issues/trends affecting what PD should be collected) o May even negate the need for primary data collection • Disadvantages o Incompatible reporting units (e.g., areas based on zip codes - but may need a smaller 2-mile area) o Measurement units do not match (e.g., household income v. per capita income) o Class definitions are different (e.g., ranges of income used on surveys; thus not usable) o Data outdated o Missing data; Imputed data (computation/guess on what data should be - worse than missing data) o Inappropriate use
Compare and contrast the advantages and disadvantages of including open-ended questions
• Advantages: o The answer categories are not known o A richer answer is desired o It is important to have the respondent's own words (can give clients quotes!) o To give the respondent the opportunity to discuss any topics that may not have been covered previously (e.g., is there anything else? Can you please explain, etc. = probing) o To give the respondent a chance to compliment or complain about an issue (important to let the respondent air grievances, and important for the company because they can try to improve themselves or help the customer become satisfied) • Disadvantages: o Expensive o Time consuming (e.g., developing the code book, training coders, lengthy answers) o Error prone (i.e., lack of specificity in the codes, less inter-rater reliability) o Vague answers (hard to code - might as well be no answer at all!) o Difficult to use in multivariate analyses • Use a closed-ended question, if possible!
10) Qualitative Research: Wrap Up!
• Blend old technology and new technology o Don't be afraid of new technologies/media - but don't get carried away w/ their great potential! • Don't be afraid to use different qualitative methodologies! o Develop a big methodological toolkit, and select appropriate tools for the problem/question at hand! • Qualitative methods are not any easier than quantitative methods! o In fact, they are more time-consuming, require more active participation, and are quite likely more challenging and difficult to master than quantitative methods. • Just do it! The only way to really learn qualitative methods is to use them! • It is often easier to work as part of a research team than to go solo. (Analysis/interpretation made easier) • Bring backup equipment! (Two or more of each essential piece) • Be prepared to become emotionally involved with informants. • Keep it real and stay close to your data! • Research itself is a language, a field of culturally constituted meanings and dynamic social conventions. o Learn the relevant language and keep up with the current conversations!
Describe why projective methods might be used
• Can be used in depth interviews or focus groups - but not used that often in applied research • Commonly used in academic research. They have their roots in psychotherapy. • Advantages (Why might they be used?) o Help informants say things they otherwise might filter o Help informants say things that might not be in their awareness • The responses range from simple to elaborate → Interpretation can sometimes be difficult
Describe how to conduct a depth interview
• Choose a neutral place to conduct the interview. (e.g., the conference room, not their cubicle) • Have the discussion guide/protocol in front of you or memorize it o Important to develop it beforehand - treat it only as a guide/general outline, though! o E.g., deviate from it when needed, and try to circle back to important topics. • Get permission of the informant to be interviewed and recorded o Agree to the various and possible uses of the recording o They can stop at any time - and can say things "off the record" • Keep in mind that the interview is about the informant! o Appear naïve or ignorant concerning the topic o Do not disclose your own personal viewpoints (but things like "I understand" are okay) • Treat the interview as a conversation. o Get the informant comfortable by asking general questions first. o Take turns talking - but the informant should do more of the talking o The researcher should direct the conversation, though. • Transcribe the interviews for analysis afterwards.
List the different types of measurement scales used in applied psychology
• Natural Scales (they give us their own answer) o Standard natural scales (e.g., age, income, number of Target visits this week) • Nothing predefined in these questions, but we know that there is going to be a linear distribution o Ranking (rate or rank the things on this list) o Constant sum (approximates an interval scale; break some activity/perception down into points or %s) • Difficult to do; instructions are really important; best to do online • Advantage: A lot of respondents will rate things either really low or really high - this prevents that • Synthetic/Interval Scales → rating scales for subjective properties of the consumer (e.g., feelings, opinions) o Symmetric scales ("balanced" scales) • Equal numbers of positive (+) and negative (-) positions; typically "no opinion" or "neutral" separating the sides • Likert scales (measure intensity of agreement/disagreement) • Visual (smiley faces!) • Completely anchored (Strongly agree/agree/neither agree nor disagree/disagree/strongly disagree) • Partially anchored (Strongly agree 1 2 3 4 5 Strongly disagree) • Number of scale points? o More options = more nuance (for variability/a wider array of answers → 7, 10, 11 points) • Neutral point/midpoint? (Strongly agree 1 2 3 4 5 Strongly disagree) o Use when you think respondents can validly have "no opinion" but... o When you want people to pick a side → use an even number of scale points (Strongly agree 1 2 3 4 5 6 Strongly disagree) → people are unable to go right down the middle. • Semantic Differential (ex: strong-weak) • Can plot the average evaluation on each set of bipolar descriptors • Good way to measure a brand, company, or store image • Stapel Scale (+2 +1 0 -1 -2) • Easily recognized as it has numbers that range from a minus end to a corresponding plus end, with or without a zero as the midpoint • Hybrids o Non-symmetric scales • Not balanced - because not all constructs have counter-opposing ends...
• E.g., Not important/somewhat important/quite important/very important/extremely important • This scale, which has mainly degrees of positive positions, would be more appropriate because most people do not think in degrees of negative importance • Some Likert scales can also be non-symmetric (see above)
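Two of the scale types above translate directly into simple scoring rules and checks. A sketch - the labels and the 5-point numeric coding are common conventions assumed for illustration, not prescribed by the notes:

```python
# Completely anchored 5-point Likert scale, coded 5 (SA) down to 1 (SD)
LIKERT_5 = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

def score_likert(responses):
    """Map anchored labels to numeric codes and average them."""
    codes = [LIKERT_5[r] for r in responses]
    return sum(codes) / len(codes)

def valid_constant_sum(allocations, total=100):
    """Constant-sum check: the respondent's points must add to the fixed total."""
    return sum(allocations.values()) == total
```

The constant-sum check is one reason instructions matter so much for that scale type: respondents who don't allocate exactly 100 points produce unusable answers.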
Compare and contrast ethnography with netnography
• Netnography = ethnography in the social spaces of online environments • Key elements: o Researcher = Participant Observer o Lived experiences of participants o Rich contextualization (situation/data) o Systematic and rigorous o More academic, less applied • Online interactions v. face-to-face/social interactions: o 1. Alteration o 2. Anonymity (at least some degree) o 3. Accessibility (geographical, social, etc.) o 4. Archived forever! • Selecting appropriate sites: o Relevant/Accessible (informs and is clearly linked to your stated research focus) o Active (both recent and regular communication between members) o Interactive (you want members of the online community to be interacting with each other a lot; e.g., question-answer or posting-comment interactions) o Substantial (offering a critical mass of communicators and a lively cultural atmosphere) o Heterogeneous (you want a good number of people w/ different opinions!) o Data-rich (data is significantly detailed and descriptive) • Netnographic Participation: o Reading messages in real-time is best. (Archived messages are good to read as well, though!) o Following shared links, both on that site and on other sites (sites outside the forum) o At first: offering short comments → Once you learn more about the group: offering longer comments o Replying to members via email/chat (side conversations) o Starting threads/new posts o Becoming an organizer/leader in the community • Data Collection (much the same as ethnography - but w/ less travel time!) o 1. Field notes (written notes, copy/pasted info, recorded thoughts, etc.) o 2. In-field observations and initial analysis (happen at the same time) o 3. Out-of-field analysis and write-up o Types of data collected • Archival Data (past convos that you weren't a part of) • Elicited Data (answers to very specific questions you ask) • Field notes/field analysis • Structured interviews are harder online!
• Text (not so good) • Email • Phone • Skype (almost as good) • In-person (best!)
Compare and contrast data mining with opinion mining
• Opinion Mining = A specific type of data mining - attempts to measure online word of mouth • Uses natural language processing (which can be very tricky!) o Sentiment classification (valence: whether a comment is positive or negative) o Feature-based opinion mining (qualities people like/dislike about the product that are seen as important) o Comparative mining (comparisons w/ similar products offered by another brand) o Strength/passion assessment (strength of people's comments; the degree to which they're involved with the product) • Cons o Often computer-based → lots of noise in the data (SPAM, paid opinions, etc.) o Non-representative (reviewers always have something really bad or something really good to say)
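Sentiment (valence) classification can be illustrated with a toy lexicon approach; real opinion-mining systems use far more sophisticated natural language processing, which is exactly why it is "very tricky." The word lists below are invented for the example:

```python
# Invented toy lexicon -- real systems learn these weights from data
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "hate", "terrible", "broken"}

def classify_valence(comment):
    """Label a comment positive/negative/neutral by counting lexicon hits."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Even this tiny sketch shows where the noise comes in: sarcasm, negation ("not great"), and paid reviews all defeat simple word counting.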
Describe how theory influences analysis and report
• Presentations should be managerially useful (i.e., give directives for action that can be taken to solve the problem under investigation) o Presentations are based on the problem, the data, and theory - theory helps to: • Guide data analysis (coding) • Guide presentation of results • Direct our results towards meaningful managerial actions • Allocation of resources • Planning of future goals and strategies • Key categories of results in market research (each study is generally focused on one of these) o Mapping the opportunity space (open opportunity in the consumer market) o Understanding connections between consumption patterns (e.g., habits, contexts, behaviors) o Segmenting and sub-segmenting consumer groups o Describing the target market segments (e.g., developing a persona) o Decision graphing • Actionable Topics o People (Who uses the product?) o Practices (How do people use the product?) o Processes (The entire consumption process - from intention to fulfillment?) o Language (How do people talk about their consumption?) o Details (Where/when is the product used? Where is it stored? What other products are used with it?) o Problems (What sort of frustrations do consumers have with the product/service?) o Plans (What do consumers wish for? What is their ideal consumption experience?) o Proxies (Who performs particular household tasks? Who shops for particular products? Who uses them?) o Irregularities of product use (How do ppl use existing products to create new ones? Customization?) o Pairings (Do consumers combine different products together? Do they have their own 'recipes'?) o Partitions (Do consumers split larger packages into smaller ones? How so?) o Benefits/Pleasures (Where was the positive energy in the consumers' experience?)
What makes up Qualitative Research?
• Qualitative Research (the core, at least) = o (1) Interviews (usually depth interviews) + • Interviews consist of: (1) Depth interviews, (2) Casual interviews, (3) Group interviews o (2) Observational Research • Ethnography is a special type of observational research.
Describe the elements of quality in qualitative research
• Quality is assessed qualitatively! o Trustworthiness (e.g., admitting bias and problems with data collection) o Honesty (e.g., the transitional nature of research and understanding) o Sufficiency of observations and sampling o Congruence within data o Applicability of data to interpretation (i.e., Can we make interpretations based on the data?)
Discuss the approaches to reliability and validity in qualitative research
• Reliability o Objectivity (in terms of coding and data interpretation) o Reporting raw data o Field notes as a reliability check (Have we treated hypothesis/theories correctly or are we off track?) • Validity o Correctly naming concepts during coding o Field research/duplication o Theoretical validity
Compare and contrast the different types of research designs
• Research Objectives → Research Designs • 2 types of general research designs: o 1. Non-experimental studies • To gain background information and to develop hypotheses → Exploratory Research • To measure the state of a variable of interest → Descriptive Research o 2. Experimental studies • To test hypotheses that specify the relationship between 2 or more variables → Causal Research • Different from non-experimental studies because they have: • Use of IVs and DVs • Randomization plans • Temporal precedence
Describe coding (data analysis) as a process
• Run through initial coding - adjust as needed (it's an iterative process!) o Look for variation in codes among different groups in your data o Review of coding/data and literature/hypotheses may dictate other approaches, different types of interviews, or different questions that you should consider o Review of literature/hypotheses (looking for similar concepts) may dictate new codes or code groups • Look for discrepancies between what people say and what people do • Look for higher order relationships among codes (nets) o Distinct elements of the same phenomena o Process elements—steps, stages, phases o Explanatory elements that promote understanding (antecedents and consequences) • Logically order codes and nets • Diagram (if complex or multifaceted) • Repeat process as many times as needed!
Discuss the considerations when coding data
• Specify the objectives of the coding assignment • Maintain a balance between too much/too little detail (don't want to get lost in detail!) • Codes should be exhaustive and mutually exclusive (don't want codes that overlap) • Create enough codes so answers are not "forced" into an inappropriate category • Allow for the systematic coding of missing data o I.e., "None" and "Not sure/don't know" are two very different answers! • Group together related categories of similar response (translation: use nets)
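The point about systematically coding missing data - "None" and "Not sure/don't know" are two very different answers - can be made concrete with a tiny codebook. All code values and categories here are hypothetical:

```python
# Hypothetical codebook: substantive codes plus DISTINCT missing-data codes
CODEBOOK = {
    "price": 1,
    "quality": 2,
    "service": 3,
    "none": 98,         # respondent says nothing applies
    "don't know": 99,   # respondent is unsure -- a different answer!
}

def code_answer(answer):
    """Map a verbatim answer to its numeric code; 0 flags it for hand review."""
    return CODEBOOK.get(answer.strip().lower(), 0)
```

Keeping 98 and 99 separate is what allows later analysis to distinguish "nothing mattered to them" from "they couldn't say."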
List the elements of managerially actionable qualitative research
• Success indicators of managerial actionability: o Clear answer to the business problem o Clear reasoning for the methods chosen o Outcome and recommendations must be clearly linked to the data • Managerial actionability based on: o Contextualized data (place observed consumer behavior into context → new insights) o Analysis of metaphors (give insights into the values of kinds of consumption) o Attending to contradictions in points of view (can help locate unexpected new insights) o Eureka moments (on the part of the researcher or respondent) (e.g., burritos and paper towels) o Discrepancies in what people say vs. what they do (clarify how people think about/describe their consumption) o Bringing in academic theory (adds legitimacy and contributes to a higher level of thinking)
Describe the different types of reliability
• Test-retest (temporal stability) (very academic - very rarely done in the applied field) o The same test is given to the same people after a period of time. One then obtains the correlation between scores on the two administrations of the same test. It is assumed that responses to the test will correlate across time because they reflect the same true variable, t. • Split-halves (can be conducted on 1 occasion!) o The total set of items is divided into halves and the scores on the 2 halves are correlated to obtain an estimate of reliability. The halves can be considered approximations to alternative forms. • Internal consistency (statistical measure of reliability, commonly used) o Methods for estimating reliability that do not require either the splitting or repeating of items • Require only a single test administration; provides a unique estimate of reliability for a given test o Cronbach's alpha (rarely used in applied psych) • A generalization of KR-20 used to estimate the reliability of scales whose items are not dichotomously scored (e.g., rating-scale items) • Want .7 or higher (threshold for the standard of internal consistency) o KR20 (dichotomous questions, yes or no) • To determine the reliability of scales composed of dichotomously scored items • Dichotomous items are scored 1 or 0 depending on whether the respondent does or does not possess the particular characteristic under investigation o Important issues for both • Important use - to "correct" correlations for attenuation (unreliability due to random measurement error) • General rule: reliability should not be below .80 for widely used scales • Difficult to specify a single level of reliability that should apply in all situations • Often too costly in terms of time and money to try to obtain a higher reliability coefficient • Most important thing to remember: report the reliability of the scale and how it was calculated (then other researchers can determine for themselves whether it's adequate for a particular purpose) • Inter-rater reliability o You want all of your raters to be rating things the same way, coding things the same way (esp. relevant when you have open-ended questions)
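Cronbach's alpha can be computed from a single test administration using only the item variances and the variance of the total scores: alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals). A minimal sketch using only the standard library (the response data in any example would of course be your own):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Internal-consistency estimate from one test administration.

    items: one list per scale item, each holding that item's responses
    across the same n respondents (in the same order).
    """
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]   # each respondent's total score
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)
```

Perfectly consistent items yield alpha = 1.0; per the rule of thumb above, you want .7 or higher (and .80+ for widely used scales).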
Articulate what research designs are
• The "plan" for the research; includes type of research, data collection method, and the analysis method o The design is based upon the research problem and the research objective o Because every research question is unique, every design is unique as well - our primary job in MR! o Designs are not mutually exclusive or sequential (you can have multiple designs at one time - they can overlap, mix and match, etc.) Designs can sometimes be amorphous! • Research Objectives → Research Designs o To gain background information and to develop hypotheses → Exploratory Research o To measure the state of a variable of interest → Descriptive Research o To test hypotheses that specify the relationship between two or more variables → Causal Research
Describe qualitative data mining
• The process of discovering useful patterns of knowledge from sources of data such as: (1) Databases, (2) Websites, (3) Text files, (4) Images, (5) Videos. • Elements: o Complex, naturalistic, in situ situations (the internet is where it is actually happening) o Large amounts of data "scraped" off of the internet (a blend of qualitative and quantitative data) o Analysis may be decontextualized o Inductive (i.e., derives general principles from specific observations) o Unsupervised learning (no predefined categories into which we are sorting the data) o Critical question = How to analyze the data • Data Mining Process: o 1. Identification of suitable data sources (usually driven by curiosity about a phenomenon, product, or brand) o 2. Cleaning of raw data (in order to remove noise and/or abnormalities - hard spaces, typos, etc.) o 3. Pattern recognition (i.e., data processed by algorithms that try to recognize or represent patterns) o 4. Solution evaluation (Not all discovered patterns will be valid/useful - reject those that aren't, identify those that are → Does the solution solve the problem at hand?) o 5. Repeat the cycle if needed (the process is usually iterative, taking multiple rounds to achieve results)
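Steps 2 and 3 of the process above (cleaning and pattern recognition) can be sketched with a toy example. Real pattern recognition would use proper algorithms (clustering, topic models, etc.); here "patterns" are simply terms that recur across scraped documents, and all names are illustrative:

```python
import re
from collections import Counter

def clean(raw):
    """Step 2: strip markup-like noise, then normalize whitespace and case."""
    text = re.sub(r"<[^>]+>", " ", raw)          # drop HTML-ish tags
    return re.sub(r"\s+", " ", text).strip().lower()

def frequent_terms(documents, min_count=2):
    """Step 3 (toy): surface terms that recur across documents."""
    counts = Counter()
    for doc in documents:
        counts.update(set(clean(doc).split()))   # count each doc once per term
    return {term: n for term, n in counts.items() if n >= min_count}
```

Step 4 (solution evaluation) is then the human part: deciding which of the surfaced "patterns" actually bear on the problem at hand, and rejecting the rest.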
Recall the key terms in qualitative data analysis
• Theory: a system of ideas/statements explaining some phenomena (informal hypotheses are common in AP) • Coding: Concepts that reduce the data into meaningful chunks; names are assigned to these concepts o Emic: Analyses that draw directly on the language used by the informants o Etic: Analyses that draw indirectly on the language of informants, using terms that seem appropriate (paraphrase)
Describe how to recruit informants
• Think about who you want to recruit prior to interviewing • Think about the best places to find informants o Important to ask: Who is your customer's target market??? This will affect where you go! • Convenience samples are not ideal! (You want your sample to be as representative as possible!) • May need to screen respondents - for one or many things (often done in applied research) o Demographics (e.g., not too young/old) o Usage characteristics (e.g., only people who actually bought it) o Attitudes • Screener may be quantitative
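A quantitative screener over the three criteria above can be as simple as a boolean filter. The field names and cutoffs are assumptions made up for this sketch:

```python
def passes_screener(resp):
    """Hypothetical screener combining demographics, usage, and attitudes."""
    right_age = 18 <= resp["age"] <= 65       # demographics
    actual_buyer = resp["bought_product"]     # usage characteristics
    engaged = resp["interest"] >= 3           # attitudes (1-5 scale)
    return right_age and actual_buyer and engaged
```

Only respondents who clear all three checks would be recruited for the depth interviews.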
State the importance of recoding variables
• Useful for creating new variables from existing variables, especially when creating new categories
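For example, recoding a continuous age variable into a new categorical age-group variable (the cut points here are arbitrary, chosen only for illustration):

```python
def recode_age(age):
    """Create a new categorical variable from an existing numeric one."""
    if age < 30:
        return "under 30"
    if age < 50:
        return "30-49"
    return "50+"

ages = [22, 35, 61]                        # existing variable
age_group = [recode_age(a) for a in ages]  # new recoded variable
```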
List the characteristics of a good applied researcher
• Enthusiasm (about what you're studying) • Open-mindedness (can't go in thinking you know everything!) • Common sense • Role-taking ability • Inventiveness • Confidence in one's own judgment (comes from knowledge and experience) • Consistency and care about details • Ability to communicate • Honesty and integrity
Describe the value of qualitative research
• Qualitative research can (1) Supplement, (2) Complement, or (3) Stand on its own • Provides unique insights into: o HOW people behave and o WHY people behave as they do • Provides a deeper understanding of numeric data → The situations in which it is applicable are expanding • Deep history in market research → Qualitative insights are appreciated by managers/decision makers