Comm 88 Final (Matni Fall '19)


Factorial designs help us with which two kinds of effects?

"Main effect" of each IV "Interaction effect" between IVs

What is mortality/survivorship bias?

Drawing conclusions based only on those who participated from start to finish, without taking the study's drop-outs into consideration. Can lead to overly optimistic conclusions because failures might be ignored.

What are the two types of content to code?

Manifest content (visible, surface content) Latent content (digging deeper at the underlying meanings)

What are 2 key elements of a TRUE experiment?

- Manipulation of IVs
- Random assignment of participants
It's not a "true" experiment if one of these elements is missing! (It will be a quasi-experiment then.)

Type of Relationship: Correlation decreasing

as X increases, Y decreases (Negative r)

Type of Relationship: Correlation increasing

as X increases, Y increases (Positive r)

random assignment

assigning participants to experimental and control conditions by chance, thus minimizing preexisting differences between those assigned to the different groups

To ensure high internal validity...

Have a truly random way of splitting up your groups, and minimize/eliminate the "3rd variable" problem: a) control your environment well, b) do your random assignments well. Then you can say that your experiment has high internal validity.

manipulation/control + random assignment =

internal validity

Why do you think a "well-defined" coding scheme is important?

Makes the coding process clear and consistent to every person

random sample

a sample in which every element in the population has an equal chance of being selected

Both the maturation effect and the history effect are ______

outside of the researcher's control

Describe the manipulation of IVs

(1) Divide your subject pool into different "condition" groupings, then test each group the same way.
- EXAMPLE: experiment on a new painkiller drug. Half of the subjects get the drug, the other half do not; while controlling for all other variables, subjects in each condition are otherwise treated the same.
(2) Then examine effects on the Dependent Variable (DV), i.e., your outcome in the experiment.
(3) Your measures will be of the continuous types; that's just about the only type of measurement we use in experiments.
(4) Compare measures (mean scores) for subjects in each condition and see if differences exist.
- EXAMPLE DV from the example above: amount of perceived pain (e.g., on a 1-7 scale)

Analyzing Data

* Reporting Percentages
- Use when all variables are discrete/categorical (i.e., nominal or ordinal); this is typical of opinion polls
- Ex: when your answers are yes/no; M/F; support/oppose/no opinion; etc.
* Comparing mean scores (averages)
- Use for "difference" hypotheses
- Ex: your hypothesis states that Group A will give you different DVs (measure higher, or lower) than Group B
1.) Separate your IV categories (in this example: Group A vs. Group B)
2.) Compute means (averages) on the DV for each IV category
3.) Compare your two results
- To do this, the DV must be a continuous measure, i.e., interval or ratio
* Computing a Correlation
- To describe variable associations
- A statistical value (often denoted with "r") that quantifies how two (or more) continuous variables are related to each other
- r tells you the type (positive vs. negative) and magnitude (strength) of a relationship
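A minimal sketch of the correlation computation described above, in plain Python with made-up data (not course material): the sign of r gives the type of relationship and its magnitude gives the strength.

```python
# Sketch: Pearson's r for two continuous variables (hypothetical data).

def pearson_r(xs, ys):
    """Pearson correlation: sign = direction, magnitude = strength."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# As X increases, Y increases -> positive r
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# As X increases, Y decreases -> negative r
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

The same group lists could feed the mean-comparison step: compute the DV mean per IV category and compare.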

What are ethics?

- A set of moral principles - Based on well-founded standards of right and wrong that prescribe what humans ought to do - Provides us with a moral map or framework that we can use to find our way through difficult issues - Does it have to do with your feelings? With religion? With laws?

Why do we care about ethics?

- Because it's a social contract - It clarifies what we should do and shouldn't do in Science Research

Who and what can we sample? (sampling units)

- Individual Persons (voters, fans, etc.) - Groups (couples, juries, orgs, countries) - Social Artifacts (ads, TV scenes, tweets)

internal validity vs external validity

- Internal Validity: The ability of your research design to test your hypothesis. Does it test what it's designed to? - External Validity: The degree to which a study's results can be extended beyond the limited research setting and sample (generalizability).

To establish causality...

- Must be connection between IV & DV - Must establish time order (IV --> DV) - Must rule out other explanations/causes

How do we ensure that the coding scheme was done well?

- We measure consistency of how it was used - In other words: we calculate inter- and intra-coder reliability!

Summary: All Representative Sampling Techniques

- Will always have sampling error
- We can generalize to the larger target population (assuming the random sampling is done properly)
Caution: avoid the "ecological fallacy"
- Making unwarranted assertions about individuals based on observations about groups
- Generalization must be done carefully!
Caution: avoid systematic error (aka sampling bias)
- Systematically over- or under-representing certain segments of the population
Caused by:
- Very low response rate
- Imperfect sampling frame
- Using non-representative sampling methods

What are some challenges to asking good questions?

- Factually inaccurate responses
- Getting reported behavior vs. actual behavior
- Responses given with a lack of knowledge
- Misinterpretation of questions
- Cultural or other biases
- Small changes in wording can dramatically impact responses
- Question order can influence responses

What can you conclude from correlation data?

-- Can conclude if variables are related or associated -- CANNOT conclude if one other variable causes the other

Describe a factorial 2 x 2 Design

-- two factors (IVs), each with two levels (conditions), giving four groups in total

MAIN EFFECTS DON'T TELL THE WHOLE STORY

...

QUESTION WORDING (AND ORDER) IS IMPORTANT

...

The Process of Content Analysis

...

Careful: Random Sampling ≠ Random Assignment!

.....

How many GigaBytes of digital storage equal 1 PetaByte?

1,000,000 GigaBytes = 1 PetaByte or 1 PB

Steps in Survey Research

1. Establish the goals of the study 2. Determine your sample (respondents) 3. Construct survey questionnaire 4. Administer questionnaire

What are the four stages of multistage cluster sampling?

1. Identify the target population 2. Define groups within the larger population (clusters; e.g., geographical location) 3. Randomly sample the clusters 4. Randomly sample individual elements within each of the sampled clusters

What are the primary goals of a survey?

1. Identify/describe attitudes or behaviors in a given target population
2. Examine relationships b/w the variables measured
- Does one factor (X) predict/relate to an outcome (Y)? Ex: Does exposure to alcohol ads (X) predict teen drinking (Y)?
- Do (X1, X2, X3, etc.) predict (Y)? Ex: Do alcohol ads (X1), parent drinking (X2), peer drinking (X3), & risk-taking (X4) together predict teen drinking (Y)?

What are the 3 important procedures in content analysis?

1. Sampling: define the population; identify the unit of analysis for coding (e.g., for TV shows: do you code each episode? each scene? each character? For Facebook: each entry? thread? word?); draw a representative sample (especially: for what reason?)
2. Coding: transform content into numerical categories. Two types of content to code: manifest content (just the visible, surface content) and latent content (digging deeper at the underlying meanings)
3. Establishing reliability: inter- and intra-coder reliability, e.g., through Cronbach's Alpha

What 7 things can affect internal validity?

1. Selection bias 2. History effect 3. Maturation 4. Statistical regression toward the mean 5. Mortality 6. Testing 7. Reactivity effects, including placebos

Three Types of Non-Representative Sampling Techniques

1.) Convenience Sample 2.) Purposive Sample 3.) Network / Snowball Sample

What are some critiques of Big Data research?

1.) Not always objective or accurate (sometimes low validity) 2.) Must be done in the correct context (in unfocused data you could find patterns that don't actually exist) 3.) Not just about size... quality of data matters too 4.) Ethical and privacy concerns

Representative Sampling Techniques

1.) Simple Random Sampling 2.) Systematic Sampling 3.) Stratified Sampling 4.) Multistage Cluster Sampling

Quasi-Experiment

A comparison that relies on already-existing groups (i.e., groups the experimenter did not create).
- Like "true experiments" but done without random assignment
- Often use other criteria like "cut-offs" (eligibility, certain times, etc.)
- Sometimes use convenience samples (like many true experiments)
- Still considered important methods in some kinds of studies (like online studies or studies of people's social media use and content)
- Used when you can't fully control your conditions or your sample well, like in field work or online studies

What is a cross-sectional survey?

A cross-sectional survey gathers information about a population at a single point in time: one sample is taken at one point in time. For example, planners might conduct a survey on how parents feel about the quality of recreation facilities as of today.
- Use different groups of people who differ in the variable of interest
- Example: studying women with breast cancer. You give your survey to your sample of women, then divide the sample into 3 age groups: 18-40, 41-65, over 65
- Useful because it helps to make relevant comparisons

What is a codebook and why is it important?

A list of the variables to be coded: akin to operationalizing variables Gives consistency to those involved in coding process (multiple coders for reliability)

Systematic Sampling

A procedure in which the selected sampling units are spaced regularly throughout the population; that is, every n'th unit is selected.
- Start from a LIST of the population you want to sample
- Take a RANDOM STARTING POINT
- Then select every "nth" element until the cycle is through the entire list (for example, choosing every third person)
- You get similar results to SIMPLE RANDOM SAMPLING, but a more "even" sample will result
- Again, if you are worried about representation, watch out for the list you start with!
CON? A hidden periodic trait (like an un-shuffled deck of cards)
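The procedure above (random start, then every n'th element) can be sketched in a few lines of Python; the function name and example population are made up for illustration.

```python
# Sketch of systematic sampling: a random starting point, then every
# n'th element of the population list.
import random

def systematic_sample(population, n):
    """Start at a random point in [0, n), then take every n'th element."""
    start = random.randrange(n)
    return population[start::n]

random.seed(1)  # seeded only so the illustration is reproducible
sample = systematic_sample(list(range(100)), 10)
print(sample)  # 10 elements, each exactly 10 apart
```

Note the CON from the card: if the starting list has a hidden periodic trait whose period matches n, every selected element shares that trait and the sample is no longer representative.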

Simple Random Sampling

A sampling procedure in which each member of the population has an equal probability of being included in the sample.
- Select elements randomly from the population. Example: get a phonebook listing and randomly dial numbers from it (using a computer helps)
How? Create a table of random phone numbers, or use a computer program to pick random numbers for you (the usual method)
Pro? Easy and simple
Con? Because it is so simple, it is prone to more error
Can you see problems with this? Your list has to be representative of the population! Does everyone have a telephone?
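A quick sketch of simple random sampling using Python's standard library; the population and seed are made up for illustration (not course code).

```python
# Sketch: simple random sampling -- each member has an equal chance
# of being selected, with no repeats.
import random

def simple_random_sample(population, k):
    """Draw k distinct elements, each member equally likely."""
    return random.sample(list(population), k)

random.seed(42)  # seeded only to make the illustration reproducible
sample = simple_random_sample(range(1, 101), 5)
print(sample)  # 5 distinct elements drawn from 1..100
```

As the card warns, the code is only as good as the list fed into it: if the population list itself is not representative, neither is the sample.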

Stratified Sampling

A type of probability sampling in which the population is divided into groups with a common attribute and a random sample is chosen within each group.
- Use for getting population proportions more accurately than the other techniques (but it's a harder technique to do)
1.) Divide the population into subsets ("strata") of a particular variable
2.) Usually stratify on demographic variables (e.g., gender, race, political party)
3.) Select randomly from each stratum to get the right proportions of the population
- You'll need prior knowledge of your target population's proportions
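The three steps above can be sketched as follows; the population and the proportional-allocation rule are assumptions made for this illustration, not course material.

```python
# Sketch: stratified sampling -- divide the population into strata on
# one variable, then sample randomly within each stratum in proportion
# to its share of the population.
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, sample_size):
    strata = defaultdict(list)
    for person in population:                  # step 1: build strata
        strata[stratum_of(person)].append(person)
    sample = []
    for members in strata.values():            # step 3: sample each stratum
        k = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, k))
    return sample

# Made-up population: 60 women, 40 men; a 10-person sample keeps 60/40
population = [("W", i) for i in range(60)] + [("M", i) for i in range(40)]
random.seed(0)
sample = stratified_sample(population, lambda p: p[0], 10)
print(sum(1 for p in sample if p[0] == "W"), "women,",
      sum(1 for p in sample if p[0] == "M"), "men")  # 6 women, 4 men
```

This is why the card says you need prior knowledge of the target population's proportions: the per-stratum sample sizes are computed from them.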

What are the 3 types of surveys?

A. Self-Administered Surveys B. Interview Surveys C. Experience Sampling

Artificial Intelligence (A.I.) Technology

Algorithm: a detailed step-by-step plan for how to solve a problem. Program: an algorithm applied to a computing environment using specific "programming languages." - A.I. refers to types of computer programs that mimic human cognitive behavior - Certain programs can be trained on how to analyze and classify big data

Describe random assignment. Why is it important?

All participants have an equal chance of ending up in either condition. You want the IV manipulation to be the only factor! So random assignment makes the groups equal before the manipulation --> a fair comparison later.
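A minimal sketch of random assignment for a simple two-condition experiment; the function name and the half/half split are assumptions for illustration.

```python
# Sketch: random assignment -- shuffle the participant pool, then split
# it in half so each person has an equal chance of either condition.
import random

def random_assignment(participants):
    pool = list(participants)
    random.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

random.seed(7)  # seeded only to make the illustration reproducible
treatment, control = random_assignment(range(20))
print(len(treatment), len(control))  # 10 10
```

Because assignment is by chance, any preexisting differences between participants are spread across both groups rather than concentrated in one.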

What type of researchers use big data?

Applied researchers (and sometimes communication researchers)

How are participants selected for focus groups?

Usually not via representative samples. Participants are typically compensated for their time.

How is data/content coded in content analysis?

Coding into categories (as broad or narrow as the researcher needs) Use of a coding scheme that is well-defined Codes written in a "codebook"

What are the types of survey questions?

• Cognitive beliefs / perceptions: "do you think that..."
• Affective feelings / emotional responses: "how do you feel about..."
• Factual reports: "when did you see..."
• Behavioral reports: "what did you do..."
• Trait / Motivation reports: "why did you do..."
• Communication networks: "who did you call/text/snap/..."
• Demographic criteria: "what is your income/age/race/..."

Examples of Uses of Big Data in Communication Research

Communication habits of people in their communities; collective action

What does a Pre-test/Post-test experiment do?

Compares participant groups and measures the degree of change occurring as a result of treatments or interventions

How can the threats to internal validity be minimized?

Conduct a true experiment! Also: treat groups equally, use double-blind setups when appropriate, automate/script the experiment, and be aware of all limitations.

What is done with the data in grounded theory?

Data reviewed for repeated concepts, tagged with codes that are extracted from data (not from researcher like in C.A.), repeat reviewing and code again, examine categories for the basis of a new theory

What is done in Single-stage Cluster Sampling?

Define groups within larger population (clusters) then randomly sample them

Give examples of how content analysis can be used.

Describe how much or what kind of certain messages there are; assess the "image" of particular groups in media; compare media content to the "real world"; examine message changes over time

What is a possible problem of the Pre-test/Post-test control group design?

Differences in Y2 could be the result of an interaction of the manipulation with the pretest. I.e., if people's beliefs about smoking are measured first, before the anti-smoking ad, are you making them more aware? Will they be influenced differently?

What is a trend?

A longitudinal survey using different random samples from the same target population (usually a large one). (E.g., poll "Americans" every 10 years about their church-going habits; track "likely CA voters" over the course of an election campaign.)

What is a cohort?

A longitudinal survey using different samples, but of the same cohort: a "group of people banded together" (a smaller target population). (E.g., survey the "Class of 2019" every 5 years about their employment since graduation.)

How should you address worries of pre-testing influencing post-testing?

Do a bunch of comparisons: the Solomon 4-Group Design. You'd like to know if:
- Pre-testing influenced the overall results
- Pre-testing influenced just the results from the X1 IV manipulation
- Pre-testing influenced just the results from the X2 (or X3, etc.) IV manipulation

How does one handle data ethically?

Do not conceal it, make it available Do not falsify it/make it look better Keep copies of all data and notes

What are The Declaration of Helsinki and TITLE 45?

Helsinki: international set of ethical principles regarding human experimentation (est. 1975). TITLE 45: code of federal regulations that protects human subjects (est. 1991 by the US Gov't).

What is a main effect? And how do we test for it?

The effect of one IV, ignoring the effects of the other IVs; i.e., the effect of one IV individually affecting the DV.
Example: for the simple 2 (music) x 2 (caffeine) study, we want to know the MAIN EFFECT of caffeine BY ITSELF (that is, regardless of whether music was present or not). E.g., we might find that the MAIN EFFECT of caffeine is that test scores were lower when caffeine was present than without it (i.e., that caffeine worsens learning) -- note: no mention of music!
To test for main effects: compare the marginal means (the "average of the averages") of the DV(s) for each factor/IV.
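The marginal-means computation can be sketched as follows; the cell means for the 2 (music) x 2 (caffeine) design are invented for illustration, chosen so caffeine lowers scores as in the card's example.

```python
# Sketch: marginal means for a 2 (music) x 2 (caffeine) factorial design.
# Cell values are hypothetical mean test scores per condition.
scores = {
    ("music", "caffeine"): 70, ("music", "no caffeine"): 80,
    ("no music", "caffeine"): 72, ("no music", "no caffeine"): 82,
}

def marginal_mean(level, position):
    """Average the cell means across the other factor ('average of averages').
    position 0 = music factor, position 1 = caffeine factor."""
    cells = [v for key, v in scores.items() if key[position] == level]
    return sum(cells) / len(cells)

# Main effect of caffeine: compare its marginal means, ignoring music
print(marginal_mean("caffeine", 1))     # 71.0
print(marginal_mean("no caffeine", 1))  # 81.0 -> caffeine lowered scores
```

The difference between the two marginal means (71.0 vs. 81.0) is the main effect of caffeine, with no mention of music, exactly as described above.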

What is the history effect?

Events outside of the experiment affect participants' responses to experimental procedures

Random Selection

Every element gets an independent and equal chance of being selected - Can minimize bias

What is the goal and sacrifice of testing hypotheses of cause and effect?

Goal: establish internal validity Willing to sacrifice external validity

What are the pros of a multiple time series design?

Good for comparing before and after some IV manipulation, and for comparing 2 or more groups with each other:
Y1 Y2 Y3 Y4 X1 Y5 Y6 Y7 Y8 (group 1)
Y1 Y2 Y3 Y4 X2 Y5 Y6 Y7 Y8 (group 2)

What are focus groups?

Groups of people discuss an issue or a product and are led by a moderator using open-ended Q's Popular in marketing & political research

How can you increase response rates in self-administered surveys?

Have inducements ($$, credit, etc.). Make it easy to complete: clear instructions and not too long. Include a persuasive cover letter. Do several advance mailings (the "heads-up" approach) -- persistence pays.

To ensure high external validity...

We might also want our experiment to be relevant outside of the lab setting:
- Carry out the experiment in the field
- Have other researchers replicate your work
- Make your subjects more representative
- Use different research strategies to test the same hypotheses

What are the pros of a single-group interrupted time series design?

Improves upon the "one-group pretest-posttest" design. Good for comparing before and after some IV manipulation:
Y1 Y2 Y3 Y4 X1 Y5 Y6 Y7 Y8 (group 1)

What is qualitative data?

In the form of words (not #'s) Rich descriptions and explanations of processes in identifiable contexts must be well-grounded (based on good evidence/reasons) Staple in social sciences and humanities

Interview Studies

In-depth open-ended questions about a particular topic, typically tape-recorded and transcribed. Analyzed by looking for recurring themes in the data: similarities and differences. Not self-administered; personal/face-to-face.

What is network/snowball sampling?

Individuals are selected for the study who contact other similar individuals to be in the study, who contact others and so on...

What is volunteer sampling?

Individuals volunteer to be included EX: extra credit for classes

Describe inductive and deductive reasoning.

Inductive: observations --> empirical patterns --> theories Deductive: theory --> hypotheses --> gather data to test them

What legal and social harms must researchers protect subjects from?

Legal: inform subjects that if some illegal things are brought up outside of what's being studied, the researcher must report them. Social: protect privacy/anonymity and confidentiality.

Do quasi-experiments have more or less internal validity than true experiments?

Less. Quasi-experiments can be used to estimate causality but may have less internal validity. Examples: single-group time series and multiple time series designs.

What is content analysis used on?

Most often written texts like books/papers (including hypertexts, i.e., webpages). Also oral texts, audio-visual texts, and visual objects (e.g., transcripts of interviews, TV shows, and works of art).

Are ALL Experiments True?

NO! Sometimes we can't randomly assign subjects -- though there's almost always an IV manipulation.

What are some important features of qualitative studies?

A natural setting is ideal; non-representative sampling; the researcher's own influence on participants is part of the method; uses inductive reasoning

What are the guidelines for using human subjects?

Participation must be voluntary Must obtain informed consent: explain purpose & procedures, possible risks & discomforts, choice to withdraw from study

What kind of sampling is this? When "who" is being sampled is more important than giving everyone an equal chance.

Non-representative sampling aka non-probability sampling

Can we ever have perfect representative sampling?

Not usually - There is no such thing

Data

Numbers (quantitative) or other characteristics (qualitative) that represent the measures taken in research

What is data and how do you calculate total data set size?

Numbers (quantitative) or other characteristics (qualitative) that represent the measures taken. Total data set size: measures x people = # of data points. E.g., 20 questions x 200 participants; or (1 DV before + 1 DV after) x 2 groups x 50 subjects.

What are the 3 pre-experiments? (No random assignment; mostly done before "true" experiments are done)

One-shot case study (informational only): (a) see if the treatment had an effect on the outcome:
X1 Y (group 1)
Static group comparison: (a) + see if 2 different treatments led to 2 different results:
X1 Y (group 1)
X2 Y (group 2)
One-group pre-test/post-test design: (a) + see if the treatment made a difference from a starting baseline:
Y X1 Y (group 1)

What is systematic error/sampling bias? What causes it?

Over- or under-representing certain segments of the population systematically Caused by low response rate, imperfect sampling frame, and using non-representative sampling methods

Cons of Systematic Sampling

PERIODICITY - The process of selection can interact with a hidden periodic trait - If the sampling technique coincides with this periodicity, the sampling technique will no longer be random and so representativeness of the sample is compromised - This has a lot to do with your starting list and how it is arranged - The more randomized the order of your starting list is, the better!

Experience Sampling

Getting immediate input from your sample, usually through technology such as a smart-phone app or a wearable device (fit bit, iWatch, etc.). People answer some questions (for example, about their mood or physical symptoms) every day for several weeks or longer; they are usually contacted electronically ("beeped") one or more times a day at random intervals to complete the measures. Although experience sampling uses self-report as the data source, it differs from more traditional self-report methods in being able to detect moment-to-moment patterns of behavior over time. Good for selecting panels and for longitudinal data.

What is maturation?

People grow up, or change attitudes/behaviors as a matter of natural change. Especially a concern in longitudinal studies.

What is the difference between a placebo and a nocebo?

Placebo - inert treatment leads to improved results Nocebo - leads to worsened results

What are the 3 types of true experiments?

Post-Test Only Pre-Test/Post-Test Solomon 4-Group Design

Representative Sampling could also be known as ____________________

Probability Sampling

Pros and Cons of Stratified Sampling

Pros: - Increases representativeness because it reduces sampling error Cons: - More costly and more time consuming - Not all variables can be stratified

What are the pros and cons of the Solomon 4-Group Design?

Pros: Powerful, yields a lot of info about pre-test influencing Cons: Complicated, costly, takes more time, but serious studies often do this b/c the effort is worth it

Pros/ Cons of Experience Sampling

Pros: answer questions about their experiences/ feelings "at the moment", can improve accuracy of self-reports, good for longitudinal/panel data Cons: Apps are usually customized (lots of work, costly) must be well-designed (needs computer programming)

What are the pros and cons of self-administered surveys?

Pros: easy and inexpensive, no interviewer influence, increased privacy/anonymity Cons: must be very self-explanatory, suffer from low response rates

What are the pros and cons of qualitative research?

Pros: extremely rich and descriptive, good for time order, serendipitous findings Cons: labor-intensive, "data overload," researcher bias, hard to generalize, credibility & quality of conclusions must be pursued carefully

What are the pros/cons of interview surveys?

Pros: more flexible (can probe for depth) , higher response rate than self-admin Cons: potential for interviewer influence, more work, higher costs compared to self-administered

What are the pros and cons of Multistage Cluster Sampling?

Pros: practical and useful for large populations where individuals are not listed; reduces costs when sampling huge populations. Cons: there are multiple random sampling stages, and sampling error accumulates in each stage.

What psychological and physical harms must researchers protect subjects from?

Psychological: diminish self-worth or cause stress, anxiety, or embarrassment Physical: clearly outline risks in advance

What might happen to research after it is conducted?

Published in academic journals/books, put to use in practical applications (especially in Applied Research studies) Goal: research is absorbed into society's "knowledge base"

What are the limitations of content analysis?

Purely descriptive (describes what, but not why, and not effects):
- It can explain what is happening pretty well (and with numbers too!)
- It cannot explain very well why the content is that way
- It cannot conclude anything about the effects of the messages
Very reductionistic:
- Reduces content to "code-able" concepts only
- May miss out on deeper meanings in the text

What is critical theory (aka cultural studies/ideological criticism) and its goal?

Qualitative. Craft arguments about the cultural implications or cultural oppression in media (e.g. feminist analysis of ad images) Goal: social/political awareness & change

What is Grounded Theory (GT)?

Qualitative. Method for construction of theory through analysis of data Begins with question(s) and some data which is reviewed then look for repeated ideas/concepts

What is conversation analysis and its goal?

Qualitative. Micro-level analysis of recorded conversation Goal: to describe how people do things with and in talk when communicating

What is rhetorical criticism and its goal?

Qualitative. Subjectively analyze comm messages: write a critique of the form, language, imagery, delivery of speeches, etc. Goal: greater understanding and/or appreciation of the use of words and language.

What are closed-ended questions?

Questions where respondents are asked to choose from a set list of answer choices, or to give a single specific response such as yes, no, or a number. Example: "How many hours does it take for the claim process to be completed?" Easier to conduct and analyze.

Good Sampling relies on ___________________

Random Selection

What are the pros of using data sets in communication research?

Relatively simple to put into computer software tools, don't take up a lot of storage, cheap and easy to collect and analyze

What is the testing effect OR "Carry Over Effect" and some fixes for it?

Repeatedly measuring the participants can lead to bias because they may remember the correct answers, become conditioned to being tested, or become bored. Fixes: ask different questions in each test, or move the order of the questions around.

What are some features of participant observation (e.g. ethnographies, unstructured interviews, focus groups)?

Researcher participating; subjects may or may not be aware they are being studied; purposive types of sampling (not random); detailed field notes/records; stop collecting data when "saturation" is achieved

Solomon 4 group design

Researcher uses 2 control groups: only one experimental group and one control group are PRE-tested; the other control group and experimental group are merely post-tested. (Lets the researcher know if results are influenced by testing.)

What are the two types of deception in studies?

Researchers should avoid deceiving the subjects - Outright Deception: Deliberately providing false information - Concealment: Withholding key information

Multistage Cluster Sampling

Sampling in which elements are selected in two or more stages, with the first stage being the random selection of naturally occurring clusters and the last stage being the random selection of elements within clusters. Useful for target populations whose members we do NOT have listed as individuals, e.g., very large populations (full listings of targets are impractical). The main idea is to break the population down and sample in STAGES.

What are some classical uses of Big Data in scientific and communication research?

Scientific: planetary science, business analytics, biology, chemistry, particle physics Communication: habits of people in their communities, collective action

Describe the process of experience sampling.

Set up a panel (pre-arrange it): give people access to an app (downloaded to their phones/computers) or give them devices ahead of time. Send messages via app/texting/email; participants can answer short questions in real time. Combine it: participants answer short questions and/or follow a web link to an online survey about their experiences/feelings "in the moment."

What are some sources of Big Data?

Social media, social networking sites, internet searches, open/public databases Cheap, fast, and ubiquitous (i.e. everywhere)

What should researchers do for other researchers and what should they avoid?

Support proper "peer review" by submitting work and reviewing others' work. Avoid conflicts of interest, like funding sources that expect certain findings. EX: doing a study on the "benefits of alcohol consumption" that's funded by beer and vodka companies!

What is convenience sampling?

Select individuals that are available/handy: subjects are asked to be in the study because they are "in the right place at the right time" (example: a professor using students at her university for a study). Provides little opportunity to control for biases, but convenience samples are inexpensive, accessible, and usually less time-consuming to obtain than other types of samples. They are common in healthcare studies.

What are reactivity effects?

Subjects react to being studied, rather than to the IV manipulation (the Hawthorne/observer effect). Sometimes done on purpose, as with the placebo effect.

What is statistical regression towards the mean?

Subjects selected on the basis of extreme scores (i.e., far from the mean) tend to score closer to the mean when measured again, even without any treatment effect.

What is an ethnography?

Systematic study of people and cultures

What is content analysis (C.A.)?

Systematically and quantitatively examining the content of communication; a method for extracting meaningful info from documents. Finding evidence in textual and symbolic content; the goal is to present quantifiable results. Ex: how many times does the word "hate" appear in a book?

What consequences do researchers have to face if they violate ethical policies?

The university's IRB has to approve the research proposal Loss of grant money, bad professional reputation, possible legal action, expulsion/termination of employment

Where do researchers submit research proposals for approval?

The university's Institutional Review Board (IRB) --- ALL RESEARCHERS MUST COMPLETE HUMAN SUBJECT TRAINING

Describe a 3X2X2 Design

A factorial design with more than two factors: here there are three IVs, one with three levels and two with two levels each (3 x 2 x 2 = 12 conditions).

Describe a 3x2 factorial design

A factorial design with more than two levels of a factor: here there are two IVs, one with three levels and one with two (3 x 2 = 6 conditions).

Is it OK to have the internal validity of your results compromised??

To an extent/ it depends

What is the main idea of Multistage Cluster Sampling?

To break population down and sample in stages.

What is the goal of qualitative research (aka field research/ethnography)?

To develop rich understandings of peoples' personal experiences

Why are multiple coders often necessary?

To ensure reliability of the results (you want more agreement)
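Intercoder agreement can be quantified; below is a minimal sketch with hypothetical codes, using simple percent agreement (a chance-corrected statistic like Cohen's kappa is the more rigorous choice, but this shows the idea).

```python
def percent_agreement(coder_a, coder_b):
    """Simple intercoder reliability: share of units both coders coded identically."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes assigned by two coders to the same 5 units
a = ["pos", "neg", "pos", "neu", "pos"]
b = ["pos", "neg", "neu", "neu", "pos"]
print(percent_agreement(a, b))  # -> 0.8
```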

What is the purpose of factorial designs? What kind of effects do they test?

To examine the effects of two or more IVs simultaneously "main effect" of each IV and "interaction effect" b/w IVs

What is the goal of content analysis?

To present quantifiable results (e.g. how many times does that word "love" appear in a book?) Finding evidence in textual and symbolic content

What is the Solomon 4-Group Design?

A four-group design used to see if pre-testing is influencing the post-test results: two groups are pre-tested and two are not, and within each pair one group receives the treatment and one does not

What is a time-series experiment design?

Track many observations over time, before and after the manipulation - Single-group interrupted time series design: looks for an interruption in the graph where the intervention was introduced - Multiple time series design: a series of periodic measurements taken from two groups (experimental & control)

What is a placebo effect?

Treatment with no active therapeutic effect Can make it more difficult to evaluate new treatments Used as a control to the experiment Mitigated by doing double-blind studies

We can trust science but what should we look out for regarding the scientists?

Not using methods correctly, mixing too closely with a political/financial agenda, demonizing people with conflicting points of view

What is a longitudinal survey and the 3 different types?

Variables are measured at more than one point in time 1. Panel 2. Trend 3. Cohort

What is Big Data?

Very large and complex collections of data sets where info can be rich, fast moving and varied in type Difficult to work with, need specialized technology to analyze

What is the key problem with web/online surveys? How can it be overcome?

Very popularly used in social science research - Key problem (and main difference from other self-administered surveys): lack of control over the sample population ----- Think: Amazon.com product ratings (and who writes them?) Overcome by: drawing from a bounded population (don't open it up to everyone) or providing unique URLs to control who is able to respond

What are the 4 V's of Big Data

Volume, Velocity, Variety, Veracity (enormous quantities, generated quickly, from many sources, sometimes not accurate)

Multistage Cluster Sampling 1

What do we do? - What's your target population? - First, define your groups within the large population (we'll call them "clusters") - Then randomly sample the clusters, making sure that all of the clusters have equal chances of being selected Example: ---- Your target population: High School Athletes in CA - Divide the target population into different clusters of high schools - There are almost 4,500 different HS in CA - Randomly arrange them into a few dozen "clusters" IF you stop here, this is called a SINGLE-STAGE CLUSTER sample; most of the time we don't stop here, we continue on

Multistage Cluster Sampling 2

What do we do? - Next: randomly sample individual elements within each of the sampled clusters - That's called a 2-stage multistage cluster sample Example! - 1st stage: Random sample the "few dozen" clusters - Use simple or systematic sampling - 2nd stage: Random sample athletes from schools in the sampled clusters - Again, use simple or systematic sampling IMPORTANT: Give all the clusters equal chances of being selected
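The two stages above can be sketched in Python with a hypothetical roster of schools and athletes (all names and sizes invented for illustration):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical population: 12 schools ("clusters"), each with 30 athletes
schools = {f"school_{i}": [f"athlete_{i}_{j}" for j in range(30)]
           for i in range(12)}

# Stage 1: randomly sample clusters (every school has an equal chance)
sampled_schools = random.sample(list(schools), k=4)

# Stage 2: randomly sample elements (athletes) within each sampled cluster
sample = [athlete
          for school in sampled_schools
          for athlete in random.sample(schools[school], k=5)]

print(len(sample))  # -> 20  (4 schools x 5 athletes each)
```

Stopping after stage 1 and keeping every athlete in the sampled schools would be the single-stage cluster sample described above.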

When can researchers use deception in studies?

When it's justified by compelling scientific concerns, doesn't increase the risks of the study, and subjects are adequately debriefed afterwards

When can you generalize to a larger population?

When random sampling is done properly

When does selection bias happen?

When selection of subjects is not really a random sample Poor planning by the researcher

A word on design notation for experiments:

X: IV manipulation (treatment - things we change) Y: DV measure (outcome) R: Random assignment

Can we combine sampling techniques? Example?

YES! For example: multistage cluster with stratified sampling Example: High school athletes *1st stage: Cluster the HS population, then random sample clusters *2nd stage: sample HS in your chosen clusters, but stratify for private vs public *3rd stage: sample athletes in the HS, but stratify for different sports ( football, water polo, tennis, etc)

Self-Administered Survey

a data collection technique in which the respondent reads the survey questions and records his or her own answers without the presence of a trained interviewer • Mail surveys • Online or emailed questionnaires • Handouts • Diaries --- typically low cost

sampling frame

a list of individuals from whom the sample is drawn - Before we go out and start any representational sampling, we need to do 2 things, in this order: 1.) Define the target population 2.) Construct the "sampling frame" - The sampling frame is NOT a sample of anything or anyone - It's a PLAN -- or an "operational definition" -- of the target population - Example: I want to sample UCSB students at UCSB during lunchtime - What sampling frame might I come up with? - A frame that doesn't fit the target population produces coverage error

Coverage Error

a sampling error that occurs when the sample chosen to complete a survey does not provide a good representation of the population - happens when the sampling frame doesn't perfectly fit the target population

sampling error (margin of error)

a statistical calculation of the difference in results between a poll of a randomly drawn sample and a poll of the entire population - polling error that arises from using a sample instead of the whole population - My explanation: the expected difference between the true population value and the sample value - EX: a national poll has an N of 1,000 with +/- 3 percentage points Q: What role does sample size play in sampling error? A: The bigger the sample, the smaller the sampling error
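The +/- 3 figure can be checked with the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n), assuming a simple random sample and the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A national poll with N = 1,000 gives roughly +/- 3 percentage points
print(round(margin_of_error(1000) * 100, 1))  # -> 3.1
# Quadrupling the sample only halves the error
print(round(margin_of_error(4000) * 100, 1))  # -> 1.5
```

Note the diminishing returns: because error shrinks with the square root of n, a much bigger sample buys only modestly more precision.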

Non-Representative Sampling

a subgroup that differs in important ways from the larger group (or population) to which it belongs Sometimes who we sample is more important to us than giving everyone an equal chance to be sampled - These samples are generally called non-representative - We cannot generalize results from these - Typical of experiments and qualitative research - But for different reasons

Samples

a subset of the target population (that you want to report about)

Magnitude of Relationship: Correlation

a.k.a. Correlation Strength - r ranges from -1.00 to +1.00, with zero in the exact middle ***** THE FURTHER r IS FROM ZERO, THE STRONGER THE RELATIONSHIP ***** If r is zero, there is no relationship
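A minimal pure-Python sketch of computing r (Pearson's correlation coefficient), with toy data invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Correlation coefficient: ranges from -1.00 to +1.00."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))   # -> 1.0  (as X rises, Y rises)
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 2))   # -> -1.0 (as X rises, Y falls)
```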

ecological fallacy

an error in reasoning in which incorrect conclusions about individual-level processes are drawn from group-level data - erroneously drawing conclusions about individuals solely from the observation of groups

What are some guidelines for asking good questions?

Pay attention to what is needed; make questions clear and relevant; use simple wording • Avoid double-barreled questions - E.g. "How strongly do you identify with this fan group and agree with the opinions of the other fans?" • Avoid bias in wording / loaded questions - E.g. "Should a slap to a child, as part of good parental discipline, be a crime?" • Ask questions that respondents are actually competent to answer - E.g. asking a sample of US Art History students questions on Bolivian Economics

What are open-ended questions?

questions that allow respondents to answer however they want Respondents provide a response in their own words ----- Don't know what all possible answers might be

Telephone Interviews

quick, efficient, and relatively inexpensive. Disadvantage: some people are unwilling to participate • Compared to face-to-face interviews: quicker results, reduced costs, more privacy, more efficient • Compared to self-admin surveys: more detail possible, better response rate •Weakness: people use call screening... - What do you think one can do to counter that?

Representative Sample (aka Probability Sampling)

randomly selected sample of subjects from a larger population of subjects - Sample should be a "miniature version" of the target population - Allows you to generalize results to THAT population ** The Key is RANDOM SELECTION: So everyone in the population has an equal chance of being included in the sample

What is purposive sampling?

recruiting and studying certain types of participants i.e. participants with specific qualities Select certain individuals for a special reason (their characteristics, etc.) - Example: a researcher who wants to study other researchers at a conference

What is a panel?

The same people are surveyed at each point in time

What do "p" values measure?

Tests of statistical significance for ruling out chance outcomes - the smaller the p value, the less likely it is that we got the results by chance
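One way to see what a p value measures is a permutation test: shuffle the group labels many times and ask how often chance alone produces a difference at least as large as the one observed. A sketch with hypothetical pain-score data, invented to echo the painkiller example:

```python
import random

random.seed(0)  # fixed seed for a reproducible sketch

def permutation_p(group1, group2, trials=10_000):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(sum(group1) / len(group1) - sum(group2) / len(group2))
    pooled = group1 + group2
    n = len(group1)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)  # break any real group structure
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return hits / trials  # share of chance outcomes at least this extreme

drug = [2, 2, 3, 2, 1, 2]      # hypothetical pain scores (1-7 scale) with painkiller
placebo = [5, 6, 5, 4, 6, 5]   # hypothetical pain scores without
print(permutation_p(drug, placebo) < 0.05)  # -> True (unlikely to be chance)
```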

target population

the entire group about which a researcher would like to be able to generalize - A group that you are interested in studying * Examples of target populations: voters, facebook users, juries, football fans etc *Example of other target entities: TV shows, magazine ads, blog posts, etc.

Causality

the relationship between cause and effect

Sampling Units

the target population elements available for selection during the sampling process - Individual Persons (voters, fans. etc.) - Groups (couples, juries, orgs, countries) - Social Artifacts (ads, TV scenes, tweets)

Two Ways in which We Can Take Surveys

• One "snapshot" in time: CROSS-SECTIONAL • Over multiple "snapshots" in time: LONGITUDINAL • Either way, we want to look at multiple values of multiple variables

What are the pros and cons of pre-testing?

• Pros: - To "check" on the random assignments and outside influences - To get information on change (i.e. to see a "before" and "after" picture) • Cons: - Not necessary to establish causality (post-testing is sufficient for that) (so, it's an expensive addition to an exp. if it's used just for that purpose) - Since it could cause an interaction, think about if that's something you want to deal with or not

What is the interaction effect and how do you test for it?

• There could be an effect of the combination of IVs - Where the effect of one IV depends on the levels of the other IV(s) An example finding could be: Caffeine reduces learning only when combined with listening to music; caffeine without music has no effect • To test for interaction effect, you have to graph the cell means
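The cell-means logic can be sketched with hypothetical numbers, invented to match the caffeine-and-music example: compute the simple effect of one IV at each level of the other, and if those simple effects differ, the lines on the cell-means plot are not parallel and an interaction is present.

```python
# Hypothetical 2x2 cell means: learning scores by Caffeine x Music
means = {
    ("caffeine", "music"): 40,
    ("caffeine", "no_music"): 70,
    ("no_caffeine", "music"): 70,
    ("no_caffeine", "no_music"): 70,
}

# Simple effect of caffeine at each level of music
effect_with_music = means[("caffeine", "music")] - means[("no_caffeine", "music")]
effect_without_music = means[("caffeine", "no_music")] - means[("no_caffeine", "no_music")]

# Nonzero difference between the simple effects signals an interaction
interaction = effect_with_music - effect_without_music
print(effect_with_music, effect_without_music, interaction)  # -> -30 0 -30
```

Here caffeine hurts learning only when combined with music (the example finding above); graphing the four cell means would show crossing/non-parallel lines.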

