Comm research Exam 2
problem
Problem: crisis management follow-up (trying to rebrand), e.g., the BP oil spill in the Gulf of Mexico — BP dealt with the crisis itself (a tactical effort), and once it settled down they went on a rebranding stride. Is the problem clearly defined? Can a PR campaign effectively create a new perception?
4 P's
Product, Pricing, Promotion, Placement; sometimes extended with (People) — need the right people to do it, (Process) — have a working way to do it, and (Packaging)
trends affecting PR/marketing relationships
Proliferation of mass media; audiences/publics continue to fragment; new media replacing traditional media; targeted messaging to publics getting more efficient/effective; emergence of Integrated Marketing Communication (IMC): Public Relations, Marketing, Advertising
all publics
Advertising Backgrounders Direct Mail Fact sheets Literature Media kits Media tours News releases Op Eds Pitches, feature articles PSAs VNRs Websites White papers
If we want to establish causation, we must have:
A. Strong correlation B. Temporal precedence (time order) C. No spurious variables
main features of an experiment in causality
A. Manipulation of the independent variable (IV) B. Random assignment of participants to experimental and control conditions — pre-test and post-test C. Experimental control of other factors that could influence the outcome of the experiment
strength of experimental research
Ability to isolate IV effects = can test and establish causal relationships. Replication: Can re-run experiments with different groups of participants, in different locations to establish external validity
mean-central tendency bias
Add up the responses and divide by the total number of responses
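As a minimal sketch in Python (the hour counts are hypothetical example data, reused from the time-spent-online card):

```python
# Mean: add up the responses, then divide by the total number of responses.
responses = [1, 3, 3, 4, 5, 5, 5, 7, 7, 8, 9, 10, 11, 14, 15]  # hypothetical hours online

mean = sum(responses) / len(responses)
print(round(mean, 2))  # 7.13
```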
what does a Solomon four-group design let us control for
potential measurement effects. Most effective at establishing internal validity.
mission statement: It is ? but not detailed
specific
Avoid loaded terms/triggers terms:
specific words or phrases that carry larger cultural or political meaning — reactions to certain words skew responses, e.g., "Do you think people should receive assistance?" vs. "Do you think the government should provide welfare?" People support "assistance" more than "welfare" questions because they view welfare negatively, associating it with a lack of motivation to work and dependence on the government
sampling error
that is a result from drawing a smaller sample from the larger population -- the sample is to be representative of the larger population but there is always going to be an amount of error that exists when we can't access the entire population we are interested in. This error -- the error that is inherent to drawing a sample instead of collecting data from the entire population -- is referred to as sampling error. So when you define it as "the difference between the estimate and the true parameter" the "estimate" is based on the data drawn from the sample and the "parameter" is what is true of the larger population -- sampling error captures what is the difference between those two.
parameter
the characteristic you're interested in studying (e.g., binge drinking, voting preferences)
experiments are artificial because
they look completely different from real life; social phenomena happen in real life, so when we remove people from it into an experiment, the setting is artificial
issue with a 2-group design
unable to say with certainty that our treatment resulted in a change.
vision statement:It is clear, free of jargon, ? by people outside of the field
understandable
Variables: Confounders
variables not controlled by the researcher that also vary systematically with the dependent variable. They might lead us to think a relationship exists between our IV and DV when really something else is causing the effect — a third variable influencing both our DV and IV. Also called spurious variables (they lead us to infer false relationships): two variables are related, but all we know is that they are related — we don't know causation — and a third variable we didn't measure may be what correlates A and B (other things may cause the effects). The presence of an uncontrolled confounding variable draws the study's validity into question
survey vs polls: difference
Surveys: vary in their breadth and depth, but usually several questions. Polls: shorter and brief, single topic, usually only a question or two
Why post-test-
without this measure, we would have no idea if our treatment worked
vision statement: It is ?—achieving it provides benefits that transcend the advantages its attainers may realize.
worthwhile
quasi experimental design
you have 2 non-randomized groups; give each a pre-test, give the treatment to one and the control to the other, then post-test both
Solomon 4 group
you have 4 randomized groups- you only give a pre-test to the first 2, you give treatments to groups one and three and controls to groups two and four but you give all of them a post-test
experimental condition
a condition where the IV is manipulated, e.g., a masculinity-threat condition (masculinity threat was the IV that was manipulated for this group of men)
frequency distribution
a way of presenting data that shows the number of times a variable's attributes were observed in a sample. Ex. time spent online — what do we know about our participants? 7/115 people spent 9 hours online (6.10%); 86.07% spent less time online
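A quick way to tabulate a frequency distribution, sketched in Python with made-up responses:

```python
from collections import Counter

hours = [2, 3, 3, 5, 5, 5, 9]  # hypothetical hours-online responses
freq = Counter(hours)          # counts how many times each attribute was observed

for value, count in sorted(freq.items()):
    print(f"{value} hours: {count} ({count / len(hours):.1%})")
```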
inferential
allow us to infer or make generalizations about the population from our sample-
what is marketing
an organizational function and a set of processes for creating, communicating and delivering value to customers and for managing customer relationships in ways that benefit the organization and its stakeholders. American Marketing Association
Threats to validity: example
An experiment testing the effect of an anti-war movie on people's attitudes toward war (DV). Session 1: brief introduction given; "attitudes toward wars" scale given (pre-test). Session 2: anti-war movie shown (treatment); "attitudes toward wars" scale given again (post-test). Results: participants reported more anti-war attitudes. Did the anti-war movie cause the anti-war attitudes? Internal validity issues: the pre-test will influence how participants respond, so results may not be accurate; history — a war breaking out between sessions (e.g., the 9/11 attacks) may change attitudes independently of the movie. External validity issue: in the real world I would see a war movie in a theater, not in a laboratory setting — will seeing it in the lab rather than the real world affect the results?
Condition
experimental condition: -control condition-
Change of beliefs-
happens frequently in technology — what is great today is bad tomorrow, e.g., Facebook superseded MySpace, and people then said MySpace didn't know what it was doing
true experiments
has a random assignment
Public education/information-
Once you have made people aware of the product/system, and you have researched that awareness (phone survey or social media survey), you can say "I think we have a critical mass of awareness" and move on to educating them in more detail. If you start off with details from the beginning it won't be effective — they'll say "that's too complicated" — so ease into it
Public awareness
most common type, especially in an interactive campaign — e.g., an automaker comes out with a new car: make people aware ("here it is and here is what it has")
quasi experiment
no random assignment, but a control group
non experiment
no random assignment, no control group
higher SD means that
the scores are more spread out around the mean
vision statement: It is fixed in the future within a ? amount of time, e.g. a decade not a generation.
perceptible
how can researchers accidentally and unknowingly influence responses and observations
personal attribute effect, expectancy effect and observational bias
visions statement: It is ?--the elements needed to achieve it are present or reasonably obtainable.
possible
advertising deals with
customers
marketing deals with
customers, shareholders, government, opinion leaders. and other
integrated marketing part 1
describe the organization, statement of situation, key publics, integrated message platform, key public relationship strategies, strategic public relations objectives (measures of success)
mission statement: It is (like a vision statement) clear, possible, jargon-free, worthwhile, and ?, i.e., it presents opportunities for people to do things they like and are good at (or want to get better at)
desirable
Ex. Time spent online. Your participants in your study of online behaviors report the following # of hours spent online yesterday: 1, 3, 3, 4, 5, 5, 5, 7, 7, 8, 9, 10, 11, 14, 15
5
Would switch retailers to support a cause
62
believe that cause-related marketing should be a standard part of a company's activities.
64
would switch brands to support a cause;
66
would be more likely to buy a product associated with a cause
78
Research question: Do threats to masculinity make men more likely to sexually harass female coworkers? What varies? What is the IV (cause) and what is the DV (effect)? What should be controlled?
What varies: threats to masculinity and sexual harassment behaviors. IV: threats to masculinity; DV: sexual harassment behaviors. Control for baseline masculinity to figure out cause and effect. Something like income level doesn't have to be actively manipulated — we can observe it in the world (e.g., from tax records) — but we do have to manipulate masculinity threats to see what their effects are (e.g., give men surveys designed to threaten their masculinity). Harassment can be operationalized by hiring confederates and observing what the men do
tension between internal and external
:If we control for everything in our study, how much can it really tell us about what happens in the real world?
consumer pros- private company
+ Effortless opportunity to support causes — you don't do anything extra, just pick that item up. + Gains awareness of social issues
double blind experiment
both researchers and participants are blind to who is being exposed to the treatment — removes potential for bias
Identifying variables: researchers find that exposure to beer commercials on TV increases drunk driving. We are trying to find a relationship between two variables — what are the independent and dependent variables, and what type of correlation is this?
Exposure to beer commercials (this is what you manipulate); drunk driving (don't manipulate). Positively correlated: when one goes up, the other goes up
services
-Activity or benefit offered for sale that is essentially intangible and does not result in ownership.i.e someone to cut your grass or taxi
experiences
-Consumers live the offering.i.e vacation(something that you live
Random assignment conditions: RQ: Do negative gender stereotypes cause women to perform poorly on aptitude tests? What are the independent and dependent variables and the conditions? How will random assignment help the experiment?
Independent variable: negative gender stereotypes. Dependent variable: test performance. Manipulate gender stereotypes and measure test performance (not manipulated). 2 conditions: experimental — expose female students to negative gender stereotypes, then give them an aptitude test; control — don't expose female students to negative gender stereotypes and give them the same aptitude test. By randomly assigning participants to different conditions, we can determine whether exposure to the IV caused changes in the DV (eliminating individual differences)
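Random assignment to the two conditions can be sketched in Python (the participant IDs and group sizes are hypothetical):

```python
import random

participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical participant IDs
random.shuffle(participants)                    # randomize the order

# Split the shuffled list in half: one group per condition.
half = len(participants) // 2
experimental = participants[:half]  # exposed to negative gender stereotypes
control = participants[half:]       # not exposed; takes the same aptitude test
```

Because assignment is random, pre-existing individual differences are (on average) balanced across the two groups.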
products
-Persons, places, organizations, information, ideas
to what extent do we have internal and external validity
Internal validity: does the study lead to accurate results? Missing measures or confounds sacrifice internal validity. External validity: can the results be applied to other cases?
cause related cons- corporation
- Unfunded administrative costs — it costs money to do this. - Demand from other non-profits
global campaign
- One size does not fit all (hard to do a campaign in the States that will also work in Pakistan and other places) — you have to hire people in those countries to do it. - Challenge to coordinate a seamless campaign across borders. - Language, regulation, and cultural sensitivities make some issues taboo, as well as some message techniques, slogans, and appeals (making it difficult to do)
marketing approach
1. Taking actions to define, create, grow, develop, maintain, defend and own markets. 2. An approach to business that seeks to identify, anticipate and satisfy customers' needs. 3. Any activity that connects producers with consumers — wouldn't say that; consumers sometimes sue companies (a cop-out definition). 4. War between competitors — only salesmen would define it like that
cause related marketing
A commercial partnership between a for-profit and a non-profit organization, that associates the non-profit's identity with a brand, product or service. - DEF The transaction (e.g. purchase) yields value to both parties while benefiting a worthy cause. The non-profit organization gains a valuable source of funding and membership while gaining awareness of its purpose. The for-profit organization gains sales, positive brand association and expands its customer base.- CLients learns about products
what is the aim of marketing management
Aim is to find, attract, keep, and grow customers by creating, delivering, and communicating superior value
what are social needs
Belonging, affection- need to know we are accepted and have a place in society or that people like us- may not be conscious but build into us
Experiments are often designed to establish causation because they:
Allow us to control how variables are experienced and when — we understand causality in a white room, but we need to understand whether it will exist outside the white room in reality. Allow us to control for other variables that may influence results — correlation alone only establishes that a relationship exists
Hawthorne effect — internal validity threat due to participants
Behavioral changes due to the fact that participants know they are being studied
cause related pros- corporation
+ Builds brand, attracts new customers. + Helps hire and retain the best employees — employees like to work for companies that care. + Strengthens corporate culture, builds pride, morale and loyalty — if enough people believe in and do the right things, you'll be happy to go to work
marketing
Business plan, how things work (nuts and bolts), pay taxes, earn profit, working relationships with customers — a creative thing (always thinking of new ideas)
Variables:
Characteristics, numbers or quantities that take different values (i.e., that vary) in different situations — anything that can vary, such as age, gender, education, income
Internal And external validity concerns
Internal: concerns the accuracy of conclusions drawn from a particular study. External: concerns the generalizability of conclusions — can I translate this to the real world?
constructing a scale
Convert indicators into scale items I buy apples every time I go to the grocery store I eat apples every day I enjoy the taste of apples I enjoy the crunch of apples I enjoy the smell of apples I enjoy the color of apples strongly agree - somewhat agree - neutral - somewhat disagree - strongly disagree Score each response option differently 5 - 4 - 3 - 2 - 1 Can rank each one based on Intensity Highest score possible: 30 points (strongly agree to every item) Composite score: 23
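The composite scoring described above can be sketched in Python (the single participant's response list is hypothetical, chosen to reproduce the composite score of 23 from the card):

```python
# Map each Likert response option to a score from 5 down to 1.
SCORES = {"strongly agree": 5, "somewhat agree": 4, "neutral": 3,
          "somewhat disagree": 2, "strongly disagree": 1}

# One hypothetical participant's answers to the six apple items.
answers = ["strongly agree", "somewhat agree", "strongly agree",
           "somewhat agree", "somewhat agree", "strongly disagree"]

composite = sum(SCORES[a] for a in answers)
maximum = 5 * len(answers)   # strongly agree to every item
print(composite, "out of", maximum)  # 23 out of 30
```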
What is a Campaign?
Coordinated, purposeful, extended effort. Designed to achieve specific objectives or a set of interrelated objectives. Addresses issues, solves problems, corrects or improves a situation, creates value. Changes, modifies or reinforces beliefs — beliefs drive what we do, so we want to change them, instill new ones, or reinforce them
Successful Campaign: Foundational Elements
Definitive Vision Statement (aspiration). Definitive Mission Statement (operation — what you do). Corporate Culture (shared values) — what we believe in, e.g., the customer always comes first, quality, ethics. Positive Relationships (expressed values) — people get along, like each other, respect each other, no animosity; our values are imbued in the team, and the positive relationships are visible from the outside looking in, e.g., JJ. Reputation (understood values)
Strategic Public Relations Campaign Executive Summary- know in ORDER
Description of Organization — every plan begins by describing the organization, because plans end up in the hands of people who didn't hear you present; the plan has to stand on its own and convey the vision of the brand. Statement of Situation (Problem or Opportunity). Key Strategies. Strategic Objectives (Measures of Success). Key Publics. Key Messages. Tactical Plan — what you will actually do. Budget. Timetable. Summary of Research
stratified random sampling
Dividing the population along a chosen characteristic (race is a common one) and then randomly sampling from each group. Ex. students by race: 10% Black, 25% Asian, 20% White, 35% Native American, 5% Middle Eastern. If your total sample is 200, multiply each group's percentage by 200 to get that group's sample size
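A sketch of the stratum-size arithmetic (percentages taken from the card; note they sum to 95%, so treat them as illustrative):

```python
n = 200  # total sample size
proportions = {"Black": 0.10, "Asian": 0.25, "White": 0.20,
               "Native American": 0.35, "Middle Eastern": 0.05}

# Each stratum's sample size = its share of the population times n.
strata = {group: round(p * n) for group, p in proportions.items()}
print(strata)  # e.g., 0.25 * 200 = 50 Asian students sampled
```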
key publics- typical questions asked
Do you know who they are? Can you describe them? (demographics, psychographics, lifestyle, infographics) Do you have information (research) on their current beliefs? Can you segment them? Can you arrange them in order? Do you know what their values/interests are? Do you know how to address them? (media)
observational bias
Drawing incorrect conclusions because of "seeing" certain things and not seeing others
evolution of cause related marketing
Early 1900s: a NYC candy maker donates to a local orphanage — regarded as the first instance of cause-related marketing. '70s: Wally Amos donates a portion of profits to literacy programs — he lived in uncomfortable surroundings in LA, liked to make chocolate chip cookies, and people said he should sell them; once the company was franchised, he donated a percentage to literacy training for inner-city youth from where he came from. Early '80s: from short-term sales driver to powerful brand builder. '90s: 400% increase in cause-related programs. Late '90s: became common practice
Typical Key Publics
Employees. Customers — keep us in business. Shareholders. Industry Analysts — analyze, characterize and publish comments on the company; inside commentary about how the company is viewed. Financial Analysts — follow public companies and conjecture about their future value, e.g., will stocks go up or down. Influencers/Opinion Leaders. Trade Associations. Consumer/Activist Groups — in an industry with a lot of controversy, you need to know the people who go after that company, e.g., the privacy people. Legislators/Regulators. Unions — union membership in private enterprises is at a low (single digits), but unions are still very powerful (electrical, automotive). Suppliers — provide you with raw material. Competitors — an important public, but you don't communicate to them directly or do things aimed directly at them; competitors take a message from what you are doing
Characteristics of successful campaigns
Ethical conduct — above all. Based on the values, needs and interests of key publics — you can't influence people's beliefs unless you speak to them in the customs, languages, and values that are important to them. Systematic planning. Continuous monitoring and evaluation. Multimedia (print, broadcast, interpersonal, social media...) — not just the ad in the paper. Achieves strategic objectives — if you don't achieve objectives you are not successful. Executed on time, in budget — time comes before budget (money can always be obtained). Professional satisfaction of team — the manager will want to achieve goals on time and on budget and want the team to be satisfied (everyone is congratulated and contributed equally). Learnings (documented) — build a library so that, for the next campaign, you can see what was done before, what worked, and how they went about it
history
Events that take place between measurements in an experiment that are NOT related to the treatment effect Studying attitudes towards war when a war is declared More of a problem when experiments continue over long periods of time
personal attribute effect
Ex. social support and sensitivity — influence of a male vs. female researcher. The interviewer influences our responses: e.g., talking to grandma we might not talk about sex, and women may prefer to talk to women
index construction
Face validity: do these questions logically make sense? Unidimensionality: do these questions measure only one construct? (This applies to scales too.) Specificity: are these questions general/specific enough for my purposes? How general or specific to make them depends on the construct you're trying to get at — e.g., general religiosity or religious participation? (Even a single construct has many nuances.) Variance: do these questions get a variety of responses? When we pilot items, if everyone responds the same, there may be something wrong with the item itself. If a question identifies all participants in a random sample as the same, you might question your items (that outcome is highly unlikely)
median
Find number that is exactly in the middle
PR and marketing distinction
General Distinction: PR influences beliefs; focuses on multiple publics- employees, customers, analysts, government... Marketing influences behavior; focuses primarily on customers
relationship between vision and mission
Generally, they exert equal influence on each other — if you change the vision, you have to change the mission; if the mission leads you in a successful direction, you have to reevaluate the vision. Neither one is crafted in a vacuum — they are crafted together
"Other" Types of Campaigns
Government Campaigns — e.g., military ads. Many countries use agencies to deal with social, economic and political issues — the government is a big client for PR agencies because the government doesn't know how to do PR; they are the government and they do what they want
Effects of IMC trends on Public Relations:
Greater focus on values of particular publics and responding with clear messages Proliferation of advertising and public relations agency mergers and cross discipline diversification Broader use of traditional marketing tactics by public relations -Direct mail -Advertising -Packaging design
Guttman scale
Guttman scales contain items of increasing intensity, arranged in a logical sequence — no one should choose item 4 without also choosing item 1. Assume that individuals answering a certain way on more "intense" items will respond similarly to less intense items. Example (increasing intensity): 1. I like listening to music at times. 2. I like listening to music most of the time. 3. Listening to music is very important to me. I make time for it. 4. Music is my life. I don't know what I'd do without it. A person who responds "yes" to #4 likely responded "yes" to #1-3 and is clearly a more intense music fan than others. Each item in a Guttman scale has a yes/no option, and each item is equal in score. The goal is to be perfectly logical, so that an individual who answers "Y" to item 6 will have also answered "Y" to all the others. If a participant answers "Y" to items 1-3 and "N" to 4-6, they score a 3 on our Guttman scale
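Scoring a Guttman scale is just a count of "yes" answers, since every item is worth one point; a small sketch with hypothetical responses:

```python
def guttman_score(answers):
    """Each 'Y' is worth one point; all items are equal in score."""
    return answers.count("Y")

# A perfectly scalable respondent answers 'Y' up to their intensity
# level, then 'N' for every more intense item after that.
print(guttman_score(["Y", "Y", "Y", "N", "N", "N"]))  # 3
```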
internal validity
Have I ruled out or controlled for other reasons that could be giving me these results? Eliminate alternative explanations. what are the threats to my experiment that effect my results
Value of a control group
Helps protect and establish our internal validity
external
How research from one internally valid study might be generalizable to another context? Assess how our research translates. -weight both of them- the more I control things in the lab will be cause less in external validity
confounders: e.g., a recent study found that heavy drinkers don't live as long as people who abstain from drinking completely or who drink in moderation. What are the IV and DV? What other variables could explain these results but weren't measured?
IV: amount of drinking. DV: longevity. Possible confounders: age, income, social factors
in a normal curve
In a normal curve, the mean, median, and mode all equal the same number.
understanding relationships & questions that we ask
In research, we are often interested in understanding how some variable or construct relates to another First question: Are variables related to each other? Put another way: Does a change in one variable correspond to a change in another variable? i.e is Tv viewing related to increase in violent behavior? - have to figure out cause even if they are correlated Does using wearable devices contribute to long term weight loss
non-profit cons for cause related marketing
Increased administrative costs with a relatively low return — doesn't pay back what they would like it to. - Perception of endorsement — if one party steps out of line and gets involved in something bad, it reflects on both parties. - Potential liability
key difference between an index vs. a scale
An index sums responses to survey items to capture key elements of a concept being measured — each item has the same impact. Index items may not be statistically related to each other (may not correlate with each other). Ex. ACE index: having divorced parents is not related to sexual abuse. A scale averages responses to a series of related survey items that capture a single concept or trait — we expect the items to be related. Scales combine items (a composite of items) of different degrees (or intensity)
what are individual needs
Individual needs: Learning, knowledge, self-expression
maturation
Internal changes that occur within participants but have nothing to do with the study Ex. Effects of eating breakfast on reading scores
Formulating strategic objectives/goals
Is this something new? e.g. a product or service, a policy, corporate strategy, change of leadership, market, an unfavorable perception... Do you need to: Create awareness among key publics? Provide education/information to key publics? Develop relationship with new publics? Enhance or solidify perceptions/beliefs? Change or re-define perceptions? Build/manage the brand? Do you have current facts (research) to use as benchmarks? Can you set realistic, quantifiable goals? Do you have the resources to be successful?
issue or issues
Issue or issues: What are its long-term implications for the organization? Is it something that happens or might happen? Is it large and significant enough to be part of a long-term strategic campaign? Is it one of several issues that might provide content and messaging material for a campaign? (E.g., tobacco companies had to deal with people dying and how to respond.) Have you analyzed its significance in terms of the organization's overall perception, brand, image or reputation?
government
Lobbying, political action committees, public policy testifying, grass-roots activism, direct mail. Lobbying dates to the post-Civil War era, when the industrial revolution was developing technologies such as steam engines — if you were a congressman, you were expected to do it for worthwhile outcomes, not personal interests!
mortality (attrition)
Loss of research participants
demand generation
Market Development (Public Relations doing awareness building, identifying publics, and helping with messaging to the entire domain): Advertising, Promotion, Training, Customer Service, Sales
origin of marketing
Marketing begins with a human need or a want — this is what economics is all about: people want things and people buy things. How much do you pay for things? (How is that price determined? At what price will demand increase? If you increase the price of something, demand goes down — price elasticity, equilibrium, supply and demand.) People can't fill all their needs on their own. The concept of a "market" began around agriculture — a gathering place. Then came the beginning of mass production: instead of producing one thing at a time, producers made a thousand things at a time
marketing defined
Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large
consumer cons- private company
May incur higher prices-
experiments manipulate the independent variable, which means
researchers actively change the level of the IV and observe the effects of that change. With other methods, researchers may observe the natural variation of the IV (rather than actively control or change it)
measures of central tendency
Measures that describe the center point of a distribution of quantitative data. Frequencies tell you the whole distribution. Measures of central tendency give you information about the middle (or center) of your distribution. Characterize often large amount of info with a single number Three measures: Mean: a.k.a., "average" Median: a.k.a., "absolute middle point" Mode: a.k.a., "most frequently occurring"
measures of dispersion
Measures that report how far a set of scores are spread around the center point of the data and across the distribution. Combine elements of frequencies and measures of central tendency. Range: distance between highest and lowest scores in a distribution. Ex: 1, 3, 3, 4, 5, 5, 5, 7, 7, 8, 9, 10, 11, 14, 15 Highest number? Lowest number? Distance between them?
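The range (and the standard deviation from the earlier card) can be computed directly; a sketch using the example scores:

```python
import statistics

scores = [1, 3, 3, 4, 5, 5, 5, 7, 7, 8, 9, 10, 11, 14, 15]

rng = max(scores) - min(scores)  # range: distance between highest and lowest
sd = statistics.pstdev(scores)   # higher SD = scores more spread around the mean
print(rng, round(sd, 2))         # range is 15 - 1 = 14
```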
crafting messages
Message: what the public needs to hear in order to have their beliefs influenced — e.g., changing the value of a brand when people regard it in a way that doesn't really apply (after the BP oil spill, the brand suffered as an unsafe company when BP had long been regarded as among the safest — the message must address that properly). Messages are generated by the recipient based on how they hear and process information — they make sense of what is worth believing. Messages are statements that contain explicit conclusions supported by facts and examples — a message plus its reasons to believe are called PROOF POINTS. A campaign should have no more than 3 or 4 key messages, each strong enough and provable enough. Key messages should cover all key publics — that doesn't mean every message covers all topics; some may cover just a few, or a single important one. Messages are statements that we want people to hold in their heads (beliefs)
what country goes with this gov campaign: tourism
Mexico
define needs
Need: State of felt deprivation including physical, social, and individual needs.i.e food, shelter, proper nutrition, proper health care
need want fulfillment
Needs & wants are fulfilled through a Marketing Offering: Products: . Services: Experiences:
shareholders
Newsletters, letters, annual meeting, annual report, open house, financial analyst meetings, pitches to financial media, webcasts. Don't communicate with shareholders as a PR person — it's dangerous: it is a strictly regulated process (a result of the great stock market crash), and if you don't follow the rules there is criminal liability. Defer to people who have this responsibility; brokers can do it because they don't work for that company
what country goes with this gov campaign: military expansion
North Korea
Why does data analysis matter?
Numbers carry authority- i.e politician Numbers carry value (i.e., instrumental and symbolic)
what to do with median number
Odd total number means there will be one number in the middle; when you have an even number of responses, take the two middle responses and divide by 2 to get the median.
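That odd/even rule can be written as a small Python helper:

```python
def median(values):
    s = sorted(values)
    n = len(s)
    if n % 2 == 1:                 # odd: one number sits exactly in the middle
        return s[n // 2]
    # even: average the two middle responses
    return (s[n // 2 - 1] + s[n // 2]) / 2

print(median([1, 3, 5]))     # 3
print(median([1, 2, 3, 4]))  # 2.5
```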
opportunity
Opportunity: Are the means available to take advantage of it? Is the payback significant enough to invest the time and money? Have we considered unintended consequences? There are always unintended consequences (things never envisioned)
marketing process step by step
Opportunity Identification, Research, Competitive Strategy/Positioning, Marketing Planning, Demand Generation/Management
likert scale components
Other potential Likert scale anchors: Frequency: very infrequently - very frequently. Truth: very untrue - very true. Likelihood: very unlikely - very likely. Quality: poor - excellent. Issues with Likert scales: Central tendency bias — people tend to cluster in the middle and avoid either extreme (strongly agree or disagree), e.g., "I agree because it's useful for my major, but it's hard," so they mark neutral. Social desirability bias
PR and marketing activities
PR: manages the brand (organizational) and manages issues. Marketing: manages brands (products) and drives sales.
PR and marketing organizational skills
PR: more strategic than tactical. Marketing: more tactical than strategic; it has to be, because marketing gets a report card every month and requires immediate returns.
PR and marketing general skill set
PR: writing, strategic thinking, broad perspective, and a creative nature in addition to writing. Marketing: verbal and written communication, strategic thinking, business sense, technology comfort.
internal subject bias- personal validity threats due to participants
People in experiments talk to each other and share details about the experiment with each other (that they should not know).
what are physical needs
Physical needs: Food, clothing, shelter, safety
How we interpret data has consequences:
Policy (numbers are used to influence policy), ethical, and research-related (we hear a number and want to investigate more).
likert scale
Posing a series of statements with ordered responses that demonstrate intensity through their ordering (in other scales the items themselves might be ordered). This response format is very common in survey research and can be anchored with numbers or not. Example: "All social science students should be required to take a course in research methods." Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree.
Pretest and Post-test
Pre-test: a measure of the DV before exposure to the IV (serves as a baseline: how the sample fares before any manipulation). It can also compromise internal validity (a testing threat): it may give participants a clue about the experiment and prime them, influencing how they perform. Post-test: a measure of the DV after exposure to the IV (allows us to measure the effect of the IV on the DV).
semantic differential scale
Participants are presented with two opposite adjectives and asked to rate something between those two words; similar to a Likert scale. Example: Please think of journalists in the US today. Check a space between each pair of adjectives below to indicate how you would describe journalists in general. Educated ___ ___ ___ ___ ___ Uneducated; Skilled ___ ___ ___ ___ ___ Unskilled; Biased ___ ___ ___ ___ ___ Objective; Dedicated ___ ___ ___ ___ ___ Uncommitted.
a good experiment should
Provide a logical structure that allows us to pinpoint the IV's effect on the DV and helps us address our hypotheses and/or research questions. Help us rule out alternative explanations for our results (confounding variables): issues of internal validity. Apply to contexts outside of this one experiment: issues of external validity. Apply to real-world contexts: ecological validity.
what to do with data
Quantitative methods: surveys and experiments. Quantification of data = representing concepts with numbers, i.e., shifting concepts into numerical form (e.g., asking about depression and scaling it).
Experimental Notions
R: RANDOM ASSIGNMENT O- OBSERVATION MEASUREMENT X-INDEPENDENT VARIABLE (TREATMENT) treatment resulted in change
cause related marketing pros for non profits
+Receives additional funding. +Establishes relationships with consumers that it didn't have before. +Benefits from brand-equity gains by the corporation (the bigger the corporation and the better it does in the market, the better the non-profit does).
range
Remember: means don't tell us anything about the range of our data, i.e., the distance between the highest and lowest scores. The mean annual temperature in Death Valley, CA is 77 degrees (sounds pretty comfortable), but the range can kill you: 15 to 134 degrees.
what does marketing require
Requires that consumers and the marketplace be fully understood.
weakness of survey research
Round peg, square hole problem: everything is standardized, which helps with comparison and generalizability, but respondents may find it hard to pick the response that represents them directly; some skip those questions, and others are forced to choose an answer. Is your construct valid? This is hard to assess, so some studies exist solely to measure validity; throughout your research you ask whether a question measures what it is supposed to. Question responses lack context. Issues of artificiality in responses: like multiple-choice questions, where you can imagine adding or taking away detail, survey responses can feel artificial; try to contextualize a question if we know it is hard to understand, and pilot it with a small population to see whether your questions will resonate with a particular culture (they will tell you if people will understand). Desirability bias can be a big issue. Inflexibility: once the instrument is finalized, it is difficult to adjust or change; if you change questions, the benefits of a survey (generalizability) become iffy. You try to keep a survey the same over time, but that is hard if you started years ago and are measuring again, since the questions might not resonate with the issues of today, or we may no longer understand them. Reliable, but not necessarily valid (are we measuring what we want to measure?).
expectancy effect
Self-fulfilling prophecy Ex. Elementary students and education achievement Use a double-blind experiment to solve this issue Researcher and participant don't know whether they are in the treatment or control condition.
selection
Self-selection of participants; a more common bias when we use nonrandom sampling techniques. Remember: random sampling techniques help us limit sampling biases.
tactics
Social media: blogs, wikis, content communities, social networks, podcasts/vodcasts
when and why do we do a nonrandom sample
Sometimes random sampling isn't feasible or possible. So that's one reason we might sample nonrandomly. For example, some populations are invisible and hard to reach -- and that makes drawing a random sample or identifying a good sampling frame very difficult. We also use nonrandom sampling to "pilot" test our measures -- that's the example you mentioned. If I want to pre-test a survey or some other measure to see if people understand my questions and what they're asking, I might not go through all the effort of randomly selecting participants, but will just select some participants who are convenient to access and who are willing to pre-test it. And then there's the example we looked at from Snowden who wanted to look at if there are factors from one's youth that were predictive of dementia and he purposively sampled nuns (so even though his research question was asking a question that is relevant to a much larger population, he didn't random sample that larger population) -- he went and nonrandomly sampled nuns because this population allowed him to answer his research question in better/innovative ways. (Nuns share similar lifestyles and health habits and they also are tracked in the church for decades across their life span so he had access to data from their lives from years prior -- and for these reasons, he justified using a nonrepresentative/nonrandom sample).
strength of a survey
Standardization: constructs are kept constant from respondent to respondent; a single question measures the construct, and that single question is the same for everyone. Capability for generalizing (in an interview it is hard to compare one person's response to another's). Surveys can be useful for describing the characteristics of a large population, e.g., creating a profile of a community; it is hard to interview every person face to face, but you can send them a survey and get their reports. This is best accomplished through random sampling! Surveys make large samples possible: you can survey a lot of people at relatively little cost. Flexible: you can ask multiple questions on a subject and analyze accordingly, even throwing in a few questions in an inductive way to see what emerges. This flexibility is not possible with other methods, like experiments (you can't go back and measure something else; they are controlled).
systematic error
Systematic error occurs when bias is introduced in the way we draw our sample. This is relevant for nonrandom sampling because, for instance, when I ask for volunteers to participate in my study, the people who volunteer are likely systematically biased in some way (i.e., they share some quality or characteristic that led them to volunteer). It's biased because I'm not accounting for people who did not volunteer. Or suppose I use snowball sampling to study ex-convicts: I talk to one and then ask whether he or she knows anyone else who might be willing to talk. I'm excluding a lot of ex-convicts from my sample, so my sample is systematically biased by the way I've decided to sample my population of interest.
marketing management is an
The art and science of choosing target markets and building profitable relationships with them.
employees
Town meetings (face-to-face, satellite broadcast, conference call...), newsletters, magazines, videos, bulletin boards, speeches, intranets, e-mail, special events, rallies (tell employees what to achieve).
what country goes with this gov campaign: opiod epidemic and anti tobacco
U.S
marketing process
Understand the marketplace and customer needs and wants: understand clearly what the customer wants and needs (note the difference between needs and wants). Design a customer-driven marketing strategy: not based on what we want, but on what the customer wants and needs (e.g., Pantene is customer-driven because if they modify the product in particular ways, people will buy it). Construct an integrated marketing program that delivers superior value: a combination of sales, marketing, advertising, and PR that shares resources and measures things in a shared way. Build profitable relationships and create customer delight: employees need to be paid, the company needs to grow, and shareholders need to be satisfied, which requires profitability. Capture value from customers to create profits and customer equity: the more profit we generate, the better the quality and quantity of our products can be.
community
Volunteering Donations Sponsorships Cause-related marketing Open houses Philanthropy Scholarships Endowed chairs
what country goes with this gov campaign: anti-Americanism
Venezuela
A bit of reality—hidden opportunities for the PR Professional
Very few organizations have properly written vision and mission statements (fact!). Many have one or the other, many combine the two, and some have none at all. Never assume that big brand-name companies know how to write a mission/vision statement; if they have good PR people we might assume they know how, but they may not know how to sway people.
in data analysis
We look for patterns: do they answer our RQs and reflect our Hs, or do they demonstrate something else? What presentation of the results best reflects the reality of our findings? This is where a lot of manipulation happens: numbers can be presented in many ways, and as readers we have to think critically, because it is easy to present numbers in particular ways or to hide them (raw numbers can look huge and make us think in a particular way compared to percentages). How can we compare our findings to others' results? Build on what's been done before: look at what others found and how they reported their numbers.
Marketing managers must consider the following, to ensure a successful marketing strategy:
What customers will we serve? (What is our target market? Who are we selling this to?) How can we best serve these customers? (What is our value proposition? If we know our target market, what do we have to say to them to motivate them to buy this product?)
vision/ mission statement
Vision: what you want to BE (a state of being, not a process; it is a destination). Mission: what you DO. These deal with outside audiences, and we are supposed to be good writers. They are the product of committees of people fighting over what the vision and mission statements should be. Think of the organization in terms of "Where are we going?" and "What are we doing to get there?"
sensitization
When an initial measurement influences measurements that follow When participants are given the same questions over time, they can adapt to that measurement Ex. Being asked a set of questions after each doctor's visit
mode-central tendency
Which number appears most often
reporting measure of median
are best when you have some extremely high or extremely low values (outliers) that might affect your mean ~ Ex: Income
reporting measures of modes
are generally reported with nominal data In nominal data, means and medians are less meaningful. ~ Ex: Race, university major, etc.
reporting measure of means
are reported most often; when in doubt, report the mean. However, means are subject to specific forms of bias due to dispersion, so they are not always the best.
weakness of experimental research
artificiality
vision statement-It is a brief description of an organization's (or individual's)
aspiration(what do you aspire to be).
correlation and causation & how we do we establish correlation
At a minimum, we want the variables in our study to correlate with one another. Correlation does not imply a direction for the relationship: A and B are related to one another, but we don't know whether A causes B or vice versa. Correlation does establish that a relationship exists and the strength of that relationship; it tells us nothing about causes. Correlation is not causation!
vision statement: what an organization want to ?
be
pre-test post test
Both experimental and control groups are randomized; there is a pretest for both; the independent variable (treatment) is applied only in the experimental group; and there is a post-test for both. There is no treatment for the control group; that is what makes it the control!
mission statement: it is
brief and memorable
what country goes with this gov campaign: population control
China
Bogardus social distance scale
A common Guttman scale used to measure a person's willingness to participate in social relationships (if you say yes to item 4, you should have said yes to items 1-3). Are you willing to permit immigrants to live in your country? Are you willing to permit immigrants to live in your community? Are you willing to permit immigrants to live in your neighborhood? Are you willing to permit immigrants to live next door to you? Would you permit your child to marry an immigrant?
mission statement: It instills a sense of ? and is a clearinghouse within which to qualify potential activities, e.g., someone asks you to do something and you ask yourself, "Is this something we should be doing?" As a supervisor you have to answer this question, and when you delegate tasks you have to think about how they fit into the mission and vision.
common purpose
mission statement:It provides people with a clear opportunity to assess their ? with the organization, e.g. "Hm, I don't seem to fit here" or "My skills and interests have limited potential to be relevant here."
compatibility
control of other factors: RQ: Do people judge resumes of men differently than resumes of women? What is the condition? What is the problem with rating the resumes? How could we control for other factors, and what does this help us establish?
Condition: read and rate a resume with a man's name, or read and rate a resume with a woman's name; researchers compare ratings of men's vs. women's resumes. The problem is that everyone's resume is different, so comparing them is problematic. To control for other factors, use identical resumes: we are controlling other factors that might affect the rating, which helps establish that it is gender that causes any effect. Other factors like handwriting, sentence structure, and aesthetics are controlled for or held constant. The names were the only thing that differed; the qualifications were the same.
no treatment =
control
Reinforcement of attitudes, beliefs-
You are doing well and selling good products (e.g., Apple) and think the world loves you, but then you wake up to being sued (not happy), so you do a campaign to reinforce attitudes and beliefs about the product.
public relations deals with
employees, customers, shareholders, government, opinion leaders, and others
vision statement: It captures the ? of what the organization wants to be, can be, should be-reasonable about it
essence
number one characterisistc of successful campaign
ethical conduct
mission statement: It is ? and can be ? (look at it once a week in today's environment).
flexible and modified to align with external change
why having a control group matters for validity
for knowing that the IV is truly responsible for our results
define wants
The form that a human need takes, as shaped by culture and individual personality. We aren't born wanting things, we are born needing things; wants don't form until our brains develop. Wants + Buying Power = Demand.
missions statement: It provides a ? within which people can identify, e.g. "Oh, there's where I fit in."
framework
Why pre-test
It gives us a point of comparison (baseline). Would a pre-test help the design of a study on gender stereotypes that cause females to perform poorly on aptitude tests? Yes, for comparing individuals: I can't measure change unless I do a pre-test.
key public definition
Groups that an organization has an interest in and groups that have an interest in the organization, e.g., employees, customers.
Control of Other Factors
We have maximum control of other factors whenever we make human subjects do something that they wouldn't normally or otherwise do, in order to observe the effects on them.
Always doing a campaign-
You are held accountable at the end of the year, and how well you achieve the campaign's objectives will have a 60% influence on your salary increase, but you will spend most of your time being reactive: roughly 70% reacting to issues and 30% working on the PR campaign. Most of your salary increase will come from the PR campaign.
internal validity threats
history and sensitization
standard deviation
How much individual scores vary (or deviate) from the mean score for the group. Useful because it shows how many subjects in the group score within a certain range of variation from the average for the entire group. Simple example: test scores for Com 101: 70, 80, 90, 100. Mean score: (70+80+90+100)/4 = 85. Subtract the mean from each score: -15, -5, 5, 15. Square these differences: 225, 25, 25, 225. Compute the average of these squares: (225+25+25+225)/4 = 125. Take the square root of that average: 11.18 = SD.
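The worked example above translates directly into code. This sketch reproduces each step for the Com 101 scores (population standard deviation, dividing by N as in the example):

```python
import math

scores = [70, 80, 90, 100]                 # Com 101 test scores from the example

mean = sum(scores) / len(scores)           # (70+80+90+100)/4 = 85
deviations = [s - mean for s in scores]    # -15, -5, 5, 15
squared = [d ** 2 for d in deviations]     # 225, 25, 25, 225
variance = sum(squared) / len(scores)      # 500/4 = 125
sd = math.sqrt(variance)                   # sqrt(125) ~ 11.18

print(round(sd, 2))  # 11.18
```

Note this is the population SD (divide by N); `statistics.pstdev` gives the same result, while `statistics.stdev` divides by N-1 for sample estimates.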
use median because
If I have low values with one or two extremely high values and I add them all and divide, my result will be skewed and won't accurately represent the central tendency. The median tells us that there were just as many values above the estimate as below it.
DO CHILDREN FROM LOW- INCOME ENVIRONMENTS PERFORM WORSE AT SCHOOL THAN THOSE FROM HIGH INCOME ENVIRONMENTS wHAT VARIES: Which is the IV- cause and which is the dv effect?
income level and school performance Independent variable is income level and dependent variable is school performance
independent variable
Influences another variable; the researcher manipulates the independent variable to influence another variable; it must be operationalized; the "cause."
Behavior modification?-
Influencing behavior. Our job in PR is to influence people's beliefs; marketing's job is to influence behavior (buy this product, come in for a test drive, look for it in your refrigerator). Behavior modification is not something PR does directly; in integrated public relations we influence behavior indirectly, but first we must influence beliefs.
vision statement: it ? its aspirants to look up, look ahead
inspires
vision statement:It remains ? but well-reasoned enhancements can imbue it with new life.- things change, technology change, regulations change- constantly changing world and you have to look at vision when there are changes to decide if it can stay this way or be updated
intact
validity in experiment
internal and external
Ruling out Alternative Explanations
internal and external validity
key factors of internal and external validity
Internal: minimize confounding threats. External: sampling, replication, and ecological validity (can I take this out of the lab and put it in the real world?).
part 2
Key marketing strategies, strategic marketing objectives; key advertising strategies, advertising objectives; integrated tactical plan; overall timetables; combined budget
vision statement: It has a ? expectancy--becoming more desirable, more attainable as it matures.
long-life
mission statement:A high level description of the ? or activities the organization undertakes to achieve its vision
major activity
old view
Making a sale: "telling and selling." With the industrial revolution in the late 1800s (telegraphs, steam engines, and electricity), companies produced products and services and society started "cooking" with wants and needs. Sales were made one at a time; sellers didn't try to keep customers, they just tried to sell the product and move on.
Experiments: the basic
Manipulation has to be involved: we make human subjects do something that they would not normally or otherwise do, in order to observe the effects on them.
controlling other factors
matters for validity
dependent variable
Measured by the researcher in the experiment; believed to be changed by another variable (Y depends on X, i.e., is changed by X). The independent variable influences the dependent variable. Both must be operationalized (we measure both). The IV and DV change from study to study. The "effect."
vision statement: It is brief, ?, easily learned and repeated--a bumper sticker not a magnun opus.
memorable
smaller SD
more people scored closer to the mean
vision statement: It is the first answer to the question "?"
why
mission statement: It provides a basis for ? and ? .- help them assign recruiting efforts if they know mission and vision statement
organizational design and management of resources
p value of .05 or less
The result is unlikely to be due to chance (sampling error); above .05, we cannot rule out sampling error.
types of control groups
Placebo effect, Hawthorne effect, double-blind experiments
Moving parts
Pretest and post-testing: measuring before and after treatment. At least two conditions: two levels or amounts of the independent variable (one level can be zero). Random assignment to those conditions, such as treatment/intervention (researchers manipulate the IV) and control (researchers do not manipulate the IV). Two possible explanations for the result: our hypothesis (what we predict is going to happen) and our null hypothesis (the opposite of what we predict). The null hypothesis is often left unstated.
Customers
product news releases Customer meetings Trade shows Open houses Customer service
types of pr campaigns
public awareness public education reinforcement of attitudes change of beliefs brand management behavior modification
2 group (post test)
Randomize participants into control and experimental groups, treat just the experimental group (the control group receives no treatment), and give both groups just a post-test. No pretest!
new view
Satisfying customer needs. We tend to talk about sellers' markets (the seller has the power; demand outstrips supply).
vision statement: It is ?--imagining its reality creates desire to possess it
seductive
validity threats due to participation
selection, maturation, mortality
potential threat of quasi experiment
Self-selection bias: we can't be sure the groups aren't different from one another in confounding ways.
possible issue of pre-test/post-test
Sensitization: if you ask participants about a particular topic, you may prime them to focus on that topic.
vision statement: It is ?, containing a single idea, modified by logical, clarifying, supportive and reasonable qualifiers, e.g. "To be the best at what we do, in the United States, as regarded by our peers.
singular
descriptive
Descriptive stats describe or summarize data to help us see patterns that emerge. They do NOT allow us to draw conclusions beyond our sample (beyond the data we have collected): they describe the sample, but they are powerful because they summarize raw data (which we rarely provide to a reader) that can be massive into a brief description.
eliminating threats
Step 1: random sampling (probability sampling). Step 2: random assignment (experimental vs. control). Equate groups before an experiment to limit error and bias.
Brand Management-
Supervising and calculating the status of the brand in the market (with key publics and so on). If you sense that it is changing due to competition, you have to begin fixing it; sometimes organizations completely change their brand personality because it has to change. Many companies that pioneered their industry are now non-existent (e.g., cameras used to be synonymous with Kodak). Try to win back the publics' approval.
mission statement: It is the first answer to the question, "I know what we want to be, but how will we get there?" It also helps with HR: vision and mission statements can help you recruit people (this is what we want to be and what we want to do), and people can see where they fit in or out of the organization.
what we do (how we will get there)
use mode to say
What was common across the board, because we can't use arithmetic on nominal data.
Random Assignment
Randomly assign participants to the experimental and control conditions. This works to ensure that the only difference between the experimental and control groups is the manipulation of the IV, by ensuring that individual characteristics such as biological traits, family backgrounds, attitudes, and education are randomly distributed across conditions (we draw a large sample and randomly assign people to the two groups). Historical note: researchers used to use "matching", finding people of the same age or income across the board to balance the groups, i.e., putting people into two groups while making sure there are enough women, the same economic incomes, and the same ages in each, instead of randomly distributing those attributes across conditions.
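A minimal sketch of random assignment in Python (the participant IDs and group split are hypothetical; real studies would assign actual recruited participants):

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them into experimental and control groups.
    Randomization distributes individual characteristics (age, income,
    attitudes) across conditions by chance, so the IV manipulation is the
    only systematic difference between groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                      # chance, not the researcher, decides
    half = len(pool) // 2
    return pool[:half], pool[half:]        # (experimental, control)

experimental, control = randomly_assign(range(1, 21), seed=42)
print(len(experimental), len(control))     # 10 10
```

Contrast this with matching, where the researcher deliberately balances attributes across groups; random assignment achieves balance in expectation with far less effort.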
hawthorn effect
When being observed shapes one's behavior, which confounds the impact of the IV.
negative correlation
when changes in one variable are associated with opposite changes in another variable- one goes up and the other goes down or the other way i.e when tuition goes down, satisfaction goes up
positive correlation
When changes in one variable are associated with similar changes in another variable: when one variable goes up, the other goes up (e.g., student grades and course evaluations; both lines parallel).
control
when condition is not manipulated
placebo effect
When just participating in an experiment shapes one's behavior; the placebo group is not exposed to the independent variable.
types of scales
• Likert scales • Guttman scales • Bogardus' Social Distance scale - a special kind of Guttman scale • Thurstone scales - • Semantic Differential scale
choclate milk survey
"We've become accustomed to seeing these kinds of poorly phrased survey questions pop up and go viral because of some bonkers statistic they claim to support" non of them are the right responses, report bizarre statistic - conflict of interest done by dairy organization about argicultire- make us look dumb, bias about what they want to do with data- easy to design bad survey questions
Avoid negative items-
Negative items take longer to understand (e.g., "What is most likely to be objective?" vs. "What is least likely to be objective?"; "Do you agree or disagree that the U.S. should not allow refugees to enter the country?"). Respondents have to reflect on the grammar and think through the wording rather than the question itself.
Advantage of Random Sampling
1. Estimates based on random samples are unbiased: to whatever extent estimates differ from the true population parameter, they are equally likely to over- as underestimate it. (This is not true of convenience samples.) 2. In a random sample, the difference between the estimate and the true parameter is due to chance; this difference is called sampling error. When the difference is due to a "flaw" in the design, it is a systematic error, and we can't know the size of the bias introduced (e.g., network news quick polls).
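A small simulation can illustrate the unbiasedness claim (the population here is a hypothetical toy example): individual sample means miss the true mean by chance, over- and underestimating it about equally often, so averaging many of them lands very close to the true parameter.

```python
import random

random.seed(1)
population = list(range(1, 101))          # toy population; true mean = 50.5
true_mean = sum(population) / len(population)

# Draw many random samples; each sample mean differs from the true mean
# only by chance (sampling error), not by any systematic design flaw.
sample_means = []
for _ in range(2000):
    sample = random.sample(population, 10)
    sample_means.append(sum(sample) / len(sample))

average_estimate = sum(sample_means) / len(sample_means)
print(true_mean, round(average_estimate, 1))   # the two should be very close
```

With a convenience sample (say, only population members below 50), every estimate would miss in the same direction, which is exactly the systematic error the flashcard describes.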
polling example
2016 presidential election polling was perceived as a failure, but Lapinski found the polls weren't that far off from reality: Clinton ended up with a 2% popular-vote margin over Trump, and most national polls were within that margin of error.
real world example of a construct: ACE index
Adverse Childhood Experiences (ACE index): asks respondents whether they've experienced each of 11 adverse events before age 18. Used when we are interested in the total number of adversities ("more or less" of a construct), not intensity: we wouldn't expect every event to have the same impact, but that is what an index assumes, since each adverse event is treated as equal in impact. Main finding of ACE research: the more adverse experiences one has in childhood (regardless of type), the greater one's risk of illness and substance use.
fixed quest and respondent
Fixed questions allow us to compare people across categories, e.g., "highest educational degree that you have completed," where everyone has the same categories (though we may get missing data when a category doesn't apply or a respondent doesn't know). A non-fixed version would be "How much schooling do you have?", which can be hard to compare across responses: I can say "a lot," but what does that mean? Fixed questions and responses allow us to compare answers, including subjective phenomena, e.g., a 7-point scale on which the political views people might hold are arranged from liberal to conservative.
non-response erros
And then there is nonresponse error: when the likelihood of responding to a survey is systematically related to how one would have answered it. Studies show that supporters of the trailing candidate are less likely to respond to surveys, biasing the result in favor of the more popular politician.
general rules for survey design
Ask relevant questions; don't ask for info you don't need; pay attention to order effects; use multiple questions to assess the same construct.
volunteer sampling
Asking individuals to volunteer as participants, e.g., a researcher posts a flyer asking 18-24-year-old women diagnosed with depression to contact her if interested in participating in an interview.
contingency question format
Asking people to answer only the questions that are relevant to them; if they answer "no" to a filter question, they skip the follow-up questions. This can steer participants toward relevant questions and away from those that don't concern them. Skip logic: when you design a computerized survey, you can build skip logic into the software; if someone answers "no" to the first question, the survey won't bring up the dependent follow-up, just the next applicable question.
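Skip logic is easy to see in code. This is a hypothetical two-question sketch (the `owns_pet`/`pet_type` questions are invented for illustration): the follow-up is only asked when the filter question is answered "yes."

```python
def run_survey(answers):
    """Collect responses, applying skip logic: the follow-up question is
    only presented when the filter question makes it relevant."""
    responses = {}
    responses["owns_pet"] = answers["owns_pet"]       # filter question
    if responses["owns_pet"] == "yes":                # contingency check
        responses["pet_type"] = answers["pet_type"]   # follow-up only if relevant
    return responses

print(run_survey({"owns_pet": "no", "pet_type": "dog"}))
# the follow-up never appears for a "no" respondent
```

Survey platforms implement the same branching declaratively, but the logic is identical: the "no" respondent never sees the contingent question, so there is no irrelevant missing data.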
designing Q questions
Avoid overlapping response categories, e.g., "How many times did your family eat dinner together in the last week? None / 1-2 times / 2-4 times": a respondent who ate together twice doesn't know whether to choose b or c. We want the categories to be mutually exclusive so no one can choose two. Ensure all response categories are exhaustive, e.g., for "What is your family income after taxes?", every answer someone might give should be available; if you make more than the listed ranges you can't respond, so include an "other" option or a top-end range.
funnel format
Begins with broad questions and moves to more specific ones. Assists respondents in recalling detailed information and can ease into a sensitive topic, e.g., "Tell me a bit about your childhood," "What is your best childhood memory?", "How did you get on with your mom and dad?", "How was discipline dealt with?", and then specific questions about abuse.
introverted funnel format
Begins with specific questions and moves to more general ones. The initial questions set a frame of reference for the following ones (it might be hard to say which feature benefited me the most if I haven't thought about that specific product). May be used with topics that don't evoke strong feelings (direct, non-sensitive questions are often easier to answer first); common in marketing research. 1. Did you prefer fluoride toothpaste A or B? 2. Which features/benefits appealed most to you? 3. Why do you think it will appeal to other consumers?
benefits of nonrandom sampling
Can be better for initial hypothesis testing. Snowdon's RQ: Is Alzheimer's disease (AD) due to brain changes occurring later in life, or to causes that affect the brain over time? He studied whether traits people exhibit early in life could be linked to development of AD later on. Sampled 678 Catholic nuns - why? They tend to live healthier lifestyles and have lifestyles more similar to one another, and essays they wrote as young adults entering the church were available. Finding: the more complex the sentences a nun used in her essay when she was young, the less likely she was to develop AD. The diversity of representative samples can make detecting cause-and-effect relationships more difficult (explanatory studies are hard!). Researchers can often gather more and better information from nonrepresentative samples (e.g., "The Nurses' Health Study" got high participation rates when collecting blood and urine samples).
systematic sampling
Choose every Nth person in a list to create your sample. Sampling interval: the standard distance between elements (population size / desired sample size), e.g., taking every 5th or 20th person. Example: 240 students / desired sample of 12 = interval of 20, so choose every 20th person in the population. Randomness is improved by choosing a random starting point.
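The 240-student example above can be sketched as a short Python function; the interval arithmetic and random starting point follow the definition directly.

```python
import random

def systematic_sample(population, sample_size):
    """Take every Nth element, where N = population size // sample size,
    starting from a random point to improve randomness."""
    interval = len(population) // sample_size      # 240 // 12 = 20
    start = random.randrange(interval)             # random starting point
    return population[start::interval][:sample_size]

students = list(range(240))
sample = systematic_sample(students, 12)           # every 20th student
```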
cluster sampling
Clusters (groups) are randomly sampled first, with individuals then randomly sampled from within the selected clusters. Some groups are selected at random; some persons are then selected at random from each selected group. Groups not selected contribute no one to the sample. (Contrast with stratification, where you sample from every stratum and may oversample to make sure each group is represented.)
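The two stages can be made concrete with a small sketch; the school names and members below are invented for illustration.

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster):
    """Two-stage cluster sampling: first randomly pick whole groups,
    then randomly pick individuals within each chosen group."""
    chosen_groups = random.sample(sorted(clusters), n_clusters)       # stage 1
    sample = []
    for group in chosen_groups:
        sample.extend(random.sample(clusters[group], n_per_cluster))  # stage 2
    return sample

schools = {"North": ["n1", "n2", "n3"], "South": ["s1", "s2", "s3"],
           "East": ["e1", "e2", "e3"]}
sample = cluster_sample(schools, 2, 2)   # one school contributes no one
```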
cluster vs stratified differences
Clusters are selected at random, and the sample is randomly drawn only from the selected clusters. For stratified samples, the sample includes some individuals from every stratum, and researchers decide what proportion of the sample should be drawn from each stratum (researcher manipulation).
constructing an index
Construct: "attitude toward apples." Identify indicators (operationalize the construct) - what would indicate a positive attitude toward apples? It might mean: buying apples every time you grocery shop; eating apples every day; enjoying the taste of apples; enjoying the crunch of apples; enjoying the smell of apples; enjoying the color of apples. Assign each item a score; for indexes, every item is scored the same (apple attitude index: score 0-6). In an index every single item carries exactly the same weight (a key difference between indexes and scales). The composite score is the total sum of "yes" responses.
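The apple-attitude index above reduces to counting "yes" responses. A minimal sketch, with illustrative indicator names:

```python
# Sketch of the apple-attitude index: six yes/no indicators, each
# weighted equally, so the composite score is simply the count of
# 'yes' responses (0-6). Indicator names are illustrative.
INDICATORS = ["buys_when_shopping", "eats_daily", "likes_taste",
              "likes_crunch", "likes_smell", "likes_color"]

def apple_attitude_index(responses):
    return sum(1 for item in INDICATORS if responses.get(item) == "yes")

score = apple_attitude_index({"eats_daily": "yes", "likes_taste": "yes",
                              "likes_smell": "no"})   # composite score of 2
```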
Probability theory allows us to do what with the margin of error
Determine the range of possible estimates in which we can have a specific degree of confidence (the confidence interval, i.e., the margin of error - e.g., between 40 percent and 46 percent, within a degree of certainty). Determine how likely those estimates are to include the actual population parameter (the confidence level). Example: we are 95% confident (confidence level) that our estimate of presidential approval falls between 40% and 46% (confidence interval).
advantages of cluster sampling
Does not assume a complete sampling frame - useful when you can't easily access a list of all members of the population; that's why clusters are sampled first. Often yields larger samples and is cheaper than simple random sampling.
scales
Employs multiple observations or items of measurement. Usually evaluates item intercorrelations before selecting items for inclusion - individual items are expected to correlate and go together. Composite scores allow you to rank respondents as having more or less of the construct, and scales also capture intensity - not only can we rank responses, we understand the differences between them. Provides more information than an index. Examples: TV violence - the number of violent events in an episode AND their severity; political preference - the number of conservative politicians voted for in 2016 AND how conservative they are (recognizing that not all of them are the same gives more information).
indexes
Employs multiple observations or items of measurement. Usually combines items without concern for their intercorrelations. Composite scores allow you to rank respondents as having more or less of the construct - like an ordinal measure. Examples: TV violence - the number of violent events in an episode; political preference - the number of conservative politicians voted for in 2016; health status - the number of health symptoms one has (a single value that helps us understand health status); religiosity - the number of religious beliefs one endorses.
equal probability of selection - random sampling
Every individual in the population has an equal chance of being selected for the study
tricky part of longitudinal survey
The exact same question has to be asked in each survey wave, but question wording may have to change because language shifts across time periods, which changes how you analyze the data or do the research.
ways to administer surveys
Face to Face Surveys Telephone Surveys Mail Surveys Email or Web-Based Surveys
questions formats commonly used in survey
Funnel format Inverted funnel format Contingency question format
ask relevant questions
You have to understand the population well enough that the issues are relevant to it - know your audience! There is no point in asking for people's ideas, opinions, or behaviors if they don't care about or don't know about those issues. Example: asked whether they were familiar with politician Tom Sakumoto, 9% said yes, and half of those said they had read about him or seen him on TV - but he isn't real. If you ask questions people don't understand or know about, they may answer anyway, which can hinder your results.
representativeness
How closely a sample matches its population in terms of the characteristics we want to study. Representativeness leads to generalizability: I want to study resilience among the homeless, but I have access to a sample of "couch surfers" - can I generalize to my population of interest? I test 200 volunteer students on geography - can I generalize to the larger population of students in my district? Representativeness also leads to validity: if I test 200 MIT students on math ability to understand Americans' math skill compared to other countries, the validity of my results is compromised because they may be due to the unusual place I did the study - I can't say much about Americans' math skill from that sampling approach.
confidence level
How sure we are that our confidence interval is accurate, i.e., how confident I am that the interval contains the true value. The usual confidence level is 95%. Ex: I'm 95% sure that 75-85% of registered Democrats will vote for Hillary Clinton in the 2016 presidential election.
advantages of stratified sampling
If you want to discuss subgroups, this is a way to ensure you'll get enough participants from each. Allows one to oversample - particularly smaller groups - and then "weight" some individuals to better represent the proportions of the population (weighting won't be tested). Ex: the CARDIA medical data set studies differences in cardiovascular health between racial demographics in the US; a simple random sample may not include enough minority participants (because each individual, regardless of race, has an equal chance of being selected).
random selection-random sampling
Individuals chosen by chance. Helps researchers to avoid sampling biases.
stratified sampling example
Kalev et al. (2006) studied the effectiveness of diversity training using a sample of 700 businesses. First, identify the population of businesses from which to sample: US employers with 100+ employees must file reports with the EEOC, so the EEOC report served as the sampling frame. A simple random sample of that roster would be dominated by small companies (even though more people are employed by large companies in the US), which would misrepresent where American employees actually work. So they divided employers into 2 strata: 1) those employing more than 500 and 2) those employing fewer than 500, to balance small and large companies. They oversampled: selected twice as many large companies to better reflect the proportion of Americans who work for them. Make sure you have sampled from every single group.
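The oversampling move in this design can be sketched as a draw of a fixed number from each stratum. The employer lists and counts below are hypothetical stand-ins echoing the design, not the Kalev et al. data.

```python
import random

def stratified_sample(strata, n_per_stratum):
    """Draw a chosen number of units from every stratum; unequal counts
    allow oversampling of a stratum (here, large employers)."""
    return {name: random.sample(members, n_per_stratum[name])
            for name, members in strata.items()}

# Hypothetical frame echoing the Kalev et al. design, not their data.
employers = {"large": [f"L{i}" for i in range(100)],
             "small": [f"S{i}" for i in range(600)]}
sample = stratified_sample(employers, {"large": 20, "small": 10})  # 2x large
```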
history of sampling
Literary Digest poll in 1936: sent out 10 million ballots polling voters - Alf Landon or FDR? Participants were chosen from phone directories and automobile registries. Results: the poll favored Landon (57%). Reality: Roosevelt won - by a lot. The poll predicted Landon would win 31 states; he won 2.
random sampling: the basics
Methods of identifying study samples that adhere to the assumptions of probability theory: every individual has the same likelihood of being selected for the research, there is no systematic bias, and every individual is chosen by chance.
what type of sample is this: Allison wants to study women with endometriosis and their perceptions of health care. She gets permission to post a survey announcement and link on the Endometriosis Foundation's website. Random or nonrandom? What sampling type?
Nonrandom, because the people checking that site are self-selecting themselves - volunteer sampling.
achieving generalizability in non random sampling
Nonrandom sampling methods make it hard to establish representativeness and generalizability, because our choices of whom to sample may be biased and thus not accurately represent the population. Often used to test out survey or interview questions, or when you can't construct a sampling frame.
what type of sampling:Mirabelle is doing a class project on social media use. She decides to interview her friends about what social media they use and how they use it. Random or nonrandom? What sampling type?-
Nonrandom - asking our friends introduces bias because they may be like us, representing a homogeneous group. Convenience sample.
other report
Others' reports: asking an individual to respond about someone else's behavior, opinions, etc. Benefits: useful when evaluating a person's performance or skills - we are bad reporters of our own skills, so we might ask someone else; can provide more accurate, objective information with fewer identity concerns (people want to present themselves as a certain type of person in surveys). Drawbacks: limited observations, depending on who the reporter is - social workers are assumed to know their patients, but every patient-social worker relationship is different; lack of motivation to report; bias toward the person - parents reporting about you may want to present you in a particular way to represent themselves.
nonrandom sampling: the basics
Participants are not randomly chosen, so not all participants have an equal likelihood of being selected for participation. More prone to bias and higher error (due to possible systematic biases introduced into the sample). The sample is thus NOT easily generalizable to the larger population. E.g., journalists asking passersby on the street. We can't assume that results gathered from these samples generalize to larger populations, because they are not random.
problem with polls
The public doesn't understand what an estimate is or how much uncertainty exists - there is widespread misunderstanding about what a poll result is actually telling us. The public isn't participating in polls (non-response bias - fewer and fewer people take the survey). Democratic/Republican trends in participation may vary with the political climate (additional non-response bias - supporters of whichever side is doing worse in the election may be less willing to take the poll). We also poll people who may be unlikely to vote (sampling frame error - they think or expect they will vote and say they will vote for a candidate, but then end up not voting).
write multiple questions to assess the same construct
The Rosenberg self-esteem scale asks 10 questions and averages the scores together to get a single self-esteem score (composite score). E.g., Q1: "I feel that I have a number of good qualities" (circle one); Q2: "I feel that I am a person of worth, at least on an equal basis with others" (circle one, with 0 being not at all and 7 being extremely).
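Averaging items into one composite score, Rosenberg-style, is a one-liner; the ten responses below are made up for illustration, not actual scale data.

```python
# Sketch of a Rosenberg-style composite: average ten 0-7 item
# responses into one self-esteem score. The responses are made up.
def composite_score(item_responses):
    return sum(item_responses) / len(item_responses)

score = composite_score([6, 7, 5, 6, 7, 6, 5, 6, 7, 6])  # ten items -> 6.1
```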
simple random sampling
Sampling units are randomly selected from a population - like the lottery. Each unit has an equal chance of being selected. Assumes the sampling frame (a practical list of all members of the population - you don't always have a complete one, but you need one for this method) contains all members of the population, numbered. E.g., get the roster of members as your sampling frame; it may not be fully accurate for reasons like dropouts or late registrations, but it's the best we can do. Relies on random numbers to identify the sample - have the computer do it. Usually used with smaller populations whose members can be identified individually from the sampling frame (i.e., you have a complete sampling frame). People are selected randomly without reference to what group they belong to.
self report
Self-report surveys: asking participants to report about their own attitudes, behaviors, opinions, etc. - some things are easier to report than others. The most common type of survey! Benefits: useful for measuring psychological characteristics - how people perceive or identify themselves, which is hard for others to do for them; useful for assessing one's own behavior, depending on the behavior you ask them to report (e.g., doctors asked how much office time they spend educating the patient will say most of it, but recordings show less time - they think they should have spent that much, or believe they actually did); some things only the individual can answer. Drawbacks: sometimes individuals are unable to answer accurately - we aren't self-aware about everything we're asked, but if forced to answer, we probably will; recall issues or carelessness in responding - people get careless filling out long surveys (e.g., a 40-minute survey leads to skipped questions); prone to social desirability bias - the biggest drawback when assessing survey results, especially for sensitive topics (e.g., doctors might report educating the community because they know it is socially valued).
cluster vs stratified similarities
Similarities: the population is divided into groups (clusters or strata), and sampling is done in stages (first clusters or strata, then individuals).
random sampling technique
Simple random sampling Systematic sampling Stratified random sampling Cluster sampling
composite measures
Some variables can be measured with one survey item - concepts that are not complex, like age, gender, or smoking ("Current smoking... use in the past 30 days? Select Y or N"). With constructs, we often design measures that cover all of their aspects (content validity) - not single questions but indexes and scales, which require many indicators. The more complex the construct, the more indicators and questions are needed. Physical activity vs. social justice: both have many possible indicators, but physical activity is grounded in everyday behaviors, so we might need fewer survey items. Composite measures: combining multiple items to create a single value/score that captures a multifaceted construct - take something complex and get a single value for everyone who responds. There is often no single indicator of a complex construct; composite measures allow a single value to represent it (we compose multiple items into one score) and allow for more precision in measurement.
snowball sampling
Start with one participant, and snowball/branch out to include their references and acquaintances. Useful for "invisible" or difficult-to-reach populations. What might this look like? You interview a homeless youth in Hollywood, then ask him if he knows other homeless youths who might be interested in being interviewed. Example: studying resilience and homelessness is hard because homeless people don't have addresses or phone numbers, but you can find one person, ask them for another person to talk to, and move on from there.
considerations of cluster sampling
Still a random sample, because everyone in the population has some probability of being selected (the researcher is not introducing systematic bias). But because clusters (e.g., schools, counties) are different sizes, not every individual has the same probability of being selected. Use weighting to correct for the possibility of oversampling certain kinds of individuals (e.g., students, residents) - when the sample does not represent the population in a particular way, researchers say they have "weighted" it. Only weight cluster sampling.
email or web based
Strengths: cheapest; easy to design with software programs (SurveyMonkey); easy for respondents to navigate questions (because they're preprogrammed, the survey moves you along); can reach large geographic areas (global research) - it's very difficult to mail things worldwide, but we can send surveys across the world; fewer researcher effects, since the researcher isn't present. Limitations: response rates only slightly better than mail surveys; less researcher control; may be biased toward younger and resource-rich populations.
telephone
Strengths: interview control (similar to face to face) - if someone hesitates or doesn't answer a question, there are cues to tell. In addition: lower cost to conduct; can be monitored by a supervisor who can ensure quality (researcher control); requires little advance planning. Limitations: respondents may be reluctant to talk about some issues over the phone; the increase in "robocalls" is annoying, so response rates are variable - some people do not answer the phone, which fosters lower response rates; workarounds (e.g., sending advance letters telling people you will call at a specific time) add additional costs; respondent fatigue may lead to incomplete data - people burn out because everything is processed aloud; lacks the "paradata" that face-to-face interviews provide - you get more information in person, which is useful for interpretation.
face to face
Strengths: the interviewer can help make sure the respondent understands the questions (researcher control) and does not skip any questions - trying to avoid missing data; tend to have higher response rates - it's hard for people to say no to you face to face. Limitations: the interviewer may lead respondents to answer questions in a way that doesn't accurately reflect their beliefs (sociability bias) - e.g., people have been found to report more open-minded attitudes toward gender issues when interviewed by a woman; social desirability bias - people report socially desirable behaviors, e.g., over-reporting volunteer work or attendance at religious services (e.g., Brenner's self-report vs. daily diary study), which is why we use mixed methods.
mail
Strengths: less susceptible to researcher effects - the researcher doesn't take part, and respondents fill it out on their own time; respondents are more likely to report undesirable behaviors and attitudes because it feels anonymous (exceptions: people are more likely to report their weight and smoker status in face-to-face interviews); cheap. Limitations: less researcher control - respondents are more likely to answer "I don't know" or leave items blank, producing missing data (in face-to-face interviews the researcher can see indicators, e.g., of weight, that are invisible by mail); low response rates: 20-40% (but may be as low as 3%).
representativeness and generalizability
The main thrust here is to understand how sampling matters for determining whether our sample is representative of the larger population we are interested in and thus whether we can generalize our findings to more than our sample
ACE score and alcoholism
The more bad things you experience, the more likely you are to do another bad thing. By investigating the relationship between childhood adversity and risk of alcoholism using an index (like the ACE), what do and don't we know? We know: the more adverse childhood experiences a child has, the greater their risk of alcoholism. We don't know: whether growing up with a parent who is an alcoholic is worse than growing up with a parent who is abusive, because an index weights every item equally.
what is a surveys main use
Use them to find patterns, test hypotheses, explore differences between groups, and document patterns of stability and change.
convenience sampling
Use whoever is most conveniently available as participants. Example: 450 undergraduate students in COM 101 are asked to participate in a survey about balancing work and school. Happens a lot in university studies - researchers go to large university classes and ask students to fill out a survey, drawing on the population most convenient to them.
example of random sampling with songs
Weitzer & Kubrin (2009) studied misogynistic song lyrics in pop songs. To prevent bias in song selection (i.e., to use a random sample), they made a list of every song on a platinum-selling album between 1992-2000 (2,000 songs). They had a computer program select 400 songs at random from their sampling frame - every song gets a number, and the computer randomly selects which numbers are chosen. Each song on the list was equally likely to be selected into the sample.
design sensitive Q
When asking sensitive questions, contextualize them - this helps soften the awkwardness of responding and prepares the respondent for the questions to come. Example: "Everyone needs help from their family and friends sometimes. I'd like to ask you about times that you might need someone to help you with situations that require speaking, reading, or understanding English." People are sometimes embarrassed (e.g., that they can't read), so contextualizing eases the question.
random sampling and weighting
When there are disproportionate samples, the sub-sample data might need "weighting" to represent the population proportions correctly. You also have to mix up the individuals in your population well enough - e.g., in military drafts, names were put in a hat, and when they weren't mixed up well, the resulting pattern introduced bias, so the draw was not truly random.
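One common way to weight a disproportionate sample is to give each group a weight equal to its population share divided by its sample share, so an oversampled group counts less per person. A minimal sketch with hypothetical shares:

```python
# Minimal sketch of weighting a disproportionate sample: each group's
# weight is its population share divided by its sample share.
# The group names and shares below are hypothetical.
def group_weights(pop_shares, sample_shares):
    return {g: pop_shares[g] / sample_shares[g] for g in pop_shares}

w = group_weights({"minority": 0.10, "majority": 0.90},
                  {"minority": 0.30, "majority": 0.70})
# the oversampled minority group gets weight 1/3; the majority about 1.29
```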
sampling at work-margin of error
When you hear the margin of error is plus or minus 3 percent, think plus or minus 7 instead - the reported margin covers only sampling error, so the true uncertainty is roughly double.
cluster sampling example
You want to sample American university students, but there is no list of all American university students. Student lists from 3 randomly selected universities are easily accessible, so you randomly sample from each list. (You don't have to travel to survey face to face everywhere, just to the selected universities - much cheaper to do.)
simple random sampling example
You work at a company that wants to research clients' views of the quality of service over the last year. How do we select a simple random sample for this study? 1. Prepare/get the sampling frame (you can't do it without one) - sort company records and identify every client over the last year to get a list of clients (N=1000). You don't want to talk to all 1,000 people because that's not feasible, so you decide how many to sample. 2. Sample size - decide on the number of clients in the sample (say you decide to sample 10% of the 1,000 clients from last year, s=100; not how we normally set sample size, but we will for this example). 3. Draw the sample - put each client's name in a hat and select 100, or (normally) use a computerized random number generator, like the lottery: give all members of the sampling frame a number between 1 and 1000, randomly choose 100 numbers, and pick the clients whose numbers were selected.
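The three steps above map directly onto a few lines of Python; the client list here is a hypothetical stand-in for the company records.

```python
import random

# The three steps above, with a hypothetical client list standing in
# for the company records.
clients = [f"client_{i}" for i in range(1, 1001)]   # 1. sampling frame, N=1000
sample_size = 100                                   # 2. 10% sample, s=100
sample = random.sample(clients, sample_size)        # 3. computerized random draw
# every client on the frame had an equal chance of selection
```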
confidence interval
A range of values within which our population parameter (age, income, approval) is estimated to lie. Ex: between 75-85% of registered Democrats will vote for Hillary Clinton in the 2016 presidential election.
sample
A subset (part) of a population that we use to make generalizations about the population - it should be representative of the population. How and why we select participants for our study; samples give us estimates to represent the population.
ACEitems
abuse, neglect, household dysfunction
margin of error
Always reported in scientific research but not always in news reports. "With a margin of error of 3 percentage points" means the true value is as likely to be 3 percent under your estimate as 3 percent over it - so an estimate of 43% gives an interval of 40-46%, the interval where the population value likely falls. MOE = how confident we can be that the true value of the population parameter is close to the 43% we estimated. It is the amount of uncertainty in an estimate. It pertains only to sampling error, not systematic error - the margin of error can be calculated only for random samples. With random sampling, our sample estimates are as likely to over- as under-estimate the population parameter.
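The 43% / 40-46% example can be reproduced with the standard margin-of-error formula for a proportion at 95% confidence; the sample size of 1,000 is an assumption chosen to land near a 3-point margin.

```python
import math

# Sketch of the standard margin-of-error formula for a proportion at
# 95% confidence: MOE = z * sqrt(p * (1 - p) / n), with z = 1.96.
# The n = 1000 figure is an illustrative assumption.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.43, 1000)        # about 0.031, i.e. +/- 3 points
interval = (0.43 - moe, 0.43 + moe)      # roughly the 40%-46% range
```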
non random sampling
Any sampling approach that does not adhere to the principles of probability theory. Ex: to study binge drinking among first-year students, I survey 200 first-year students in Psychology 101 (a convenience sample). Would this sample accurately represent binge drinking among all first-year students?
open-ended vs. closed-ended questions:
The difference is whether a set of answers is provided. Closed-ended: respondents choose from a fixed set of responses, e.g., "In your opinion, which of the following causes is the most important?" Open-ended: respondents answer freely, e.g., "In your opinion, which causes are the most important?"
Ask questions that are short, clear, and closed-ended-
avoid overly complex language, jargon or technical terminology unless you are targeting a specific population
summary of random sampling
Based on the assumptions of probability theory, which gives us the mathematical underpinning to say we aren't introducing bias. Increases the likelihood of representativeness and generalizability; reduces the chance of sampling error and bias.
survey
Collect attitudes, opinions, and behaviors from a sample to make observations about and generalize to the aggregate. Relies on people responding - but people are responding less and less, which is problematic. Often features the use of questionnaires. Time in surveys: cross-sectional (at one point in time) or longitudinal (across moments in time), e.g., the GSS survey.
Avoid double-barrel questions-
Combining two different ideas into one question when the respondent is forced to give only one answer - hard for people to respond to, since they may think about the two ideas differently. E.g., "Are you comfortable talking to strangers and giving public speeches?" or "Do you agree that women should have free access to birth control and a right to choose?" - you're stuck if you feel differently about the two parts, even if on the surface they seem to go together.
difference between systematic and simple random
The difference is that not every pair has the same likelihood of being selected in systematic sampling: every member of the population in a systematic sample has an equal probability of being selected, BUT NOT ALL PAIRS ARE EQUALLY LIKELY TO BE SELECTED (e.g., two adjacent names on the list can never both be chosen when the interval is larger than one).
random sampling
Each person in the population of interest has an equal chance of being selected (random selection). Based on probability theory: procedures ensure that different units in your population have equal probabilities of being chosen (e.g., picking a name out of a hat; drawing the short straw; lottery drawings). Example: you want to study binge drinking among a university's 2,000 first-year students, so you take a random sample of 200. Sampling frame = the freshman enrollment roster. Use a computer program to assign each student a random number, then survey the lowest (or highest) 200 numbers. (Based on the assumptions of probability theory.)
purposive/quota sampling
Segmenting the sample into different kinds of people, then selecting individuals who fit those characteristics - eliminating some who can't participate and choosing certain participants to get diverse responses. Common quota categories: gender, race, age, religious group, political affiliation, etc. What might this look like? Studying hospital patients' experience: develop a matrix of patient "types" (chronic/acute, short-stay/long-stay, surgical/nonsurgical, etc.), looking for diversity in a nonrandom sample - e.g., in a hospital, you want diversity across patient types such as long stay and short stay.
don't ask info you don't need
Especially if it's sensitive information. "Do you drink alcohol on a regular basis?" vs. "Please tell me the exact number of alcoholic drinks you have consumed in the last 7 days." If a question fits your research, ask it, but generally ask less rather than more - you don't want to overburden survey takers.
population
Every possible item of a pre-defined aggregate that could be studied - a complete list of the persons/objects we want to study. E.g., communication students, university faculty, all clients served in 2016, deployed military officers.
two kinds of composite measurement
indexes and scales
sampling frame
The list from which all members of a population are sampled (those you can actually identify and access). E.g., a local telephone book to represent all residents of a county (practical but imperfect); a roster of enrolled university students; a faculty directory.
what type of sampling:Camille is researching the effects of Type 2 diabetes management on mental health. She begins by surveying a colleague with Type 2 diabetes, and then asks her colleague to share the contact information of anyone she knows with Type 2 diabetes. Random or nonrandom? What sampling type?
Nonrandom; snowball sampling
sampling frame error
Occurs when there is a mismatch between the people who could possibly be included in the poll (the sampling frame) and the true target population.
what did the Literary Digest teach us about sampling
It oversampled a biased population because of the sampling frame used - this was before researchers knew about random sampling.
what if you don't have a sampling frame
Political exit polls - there is no way to know exactly who will vote beforehand (you can't do a simple random sample in advance). You can still use a random sampling method to avoid biased results: use a systematic sampling approach. Researchers keep track of the order in which people leave the polling place and approach every Nth person.
what type of sampling is this: Omar is concerned about employee morale. To get a sense of how his employees are doing, he takes the pay records of all his employees and selects every 25th person to survey. Random or nonrandom? What sampling type?
Random; systematic sampling
two main types of sampling
random(probability) or non random (non probability)
Weitzer & Kubrin (2009) studied misogynistic song lyrics To prevent bias in song selection, they made a list of every song on a platinum-selling album btw 1992-2000 (2,000 songs) They used a computer to choose 400 songs at random from their sampling frame Each song on the list was as likely to be selected into the sample
simple random sampling
is this a poll or survey: "Do you agree with the president's decision to withdraw from the Paris climate accord?" "On a scale from 1 to 5, how do you feel about the university raising tuition?"
survey poll
advantages and disadvantages fixed responses
Survey responses can be compared across respondents. Disadvantage: forcing categories on people that may not fit well or be understood; advantage: you can compare respondents.
what went wrong in sampling history
It sampled telephone subscribers and automobile owners in 1936 - a wealthy sample - and generalized to the larger voting population.
what do closed-ended responses allow us to do
To easily compare across respondents
For any random sample estimate,
we can say how confident we are that our estimate falls within a particular range (the confidence level). It's important to remember that we build room for error into our estimate - we're working with probabilities or likelihoods.
pay attention to question ordering
Pay attention to the sequence questions come in - early questions can affect how participants answer later ones (known as order effects). E.g., when asking about someone's charitable giving, it might be a bad idea to ask them to rate their generosity before asking about their monetary donations - they might inflate their reported giving to stay consistent with how they previously presented themselves. Similarly, asking "How religious are you?" and then "How many times did you attend religious services in the last month?" is out of order.
one reason to use cluster sampling
When we want to sample a population across a large geographic region. If we took a random sample of New York state, we'd have to travel pretty far to interview each person; instead we could do a cluster sample of 5 counties - more efficient and cost effective.
Use certain questions to determine
which questions will be relevant for the participant-
principles for designing the actual questions (language/wording is important)
• Avoid loaded questions • Avoid trigger terms or terms with baggage • Avoid double-barrel questions • Avoid negation • Ask short and close-ended questions • Contextualize sensitive questions
types of nonrandom sampling
• Convenience sampling • Volunteer sampling • Purposive/Quota sampling • Snowball sampling