630 - Research Final

define rater bias

bias arising from characteristics of the rater, such as training, experience, orientation, and understanding of the instrument

how are qualitative and categorical variables similar?

both use either nominal or ordinal measurement scales.

what purpose does a hypothesis serve?

brings clarity, specificity and focus to a research problem

define research problem

broadly defined gaps in knowledge

what are some examples of passive observation

observing customers eating in a cafeteria to examine the relationship between eating habits and being overweight

what is the advantage of recording observations?

the interaction can be viewed a number of times before interpreting it or drawing conclusions

variables can be classified based on level or units of measurement as either

categorical = measured on nominal or ordinal scales (examples: constant = only 1 value, e.g. water, tree; dichotomous = 2 values, e.g. yes/no, rich/poor; polytomous = more than 2 values, e.g. strongly agree/agree/disagree) vs. continuous = measured on interval or ratio scales (more precise units)

define random variables or chance variables

changes in the dependent variable that occur because of the respondent's state of mood or ambiguity in the research instrument

what does operationalization mean

converting concepts into measurable variables (examples: quality of life (QOL), coordination, ADLs)

questionnaire vs. interview schedule: choose depending on what?

the choice should be based upon: - Nature of the investigation = if it concerns issues respondents may feel reluctant to discuss with an interviewer, choose a questionnaire - Geographical distribution of the study population = if potential respondents are scattered over a wide area, choose a questionnaire, as interviewing may be extremely expensive - Type of study population = if respondents are illiterate, very young or very old, or handicapped, you may have no option but to interview

what are the differences between concepts and variables

concepts are images, perceptions, or beliefs (examples: academic achievement, balance, behavior); variables are concepts that can be measured (examples: GPA, FW score, score on the Get-Up-and-Go, number of hits or positive social interactions)

define disproportionate stratified sampling

consideration is not given to the size of the stratum

define information bias

bias arising from the information provided, e.g. participants' knowledge or memory recall

define selection bias

bias arising from how participants come to be in the study (e.g. volunteers)

depending on how an observation is recorded....

determines whether it is quantitative or qualitative

describe online questionnaire

develop a questionnaire and then either post it on a website or provide a link via email or even mobile phone; data can also be analyzed through online programs

define secondary sources

sources of existing data that can provide information for a research study; the difference in their use lies between quantitative and qualitative research

what are the disadvantages of quantitative observational strategy

does not provide specific and in-depth information; may suffer from the following errors: - error of central tendency - elevation effect - halo effect

when choosing a sample with qualitative research you...

don't have a predetermined sample size; during the data collection phase you continue until you reach the point of data saturation, i.e. when you stop getting new information, or the new information is negligible, you stop!

define random/probability sampling design

each element in study population has equal and independent chance of selection in the sample; sample is representative of the total community

define simple random sampling

each element in the population is given an equal and independent chance of selection

define sampling unit/element

each individual person that forms the basis for selecting your sample

what are examples of a before and after study design

- Effects of an education module on attitudes about healthy eating - Change in knowledge after training on hip precautions

what is the rigor of quantitative research

emphasis on objectivity; hypothesis & plan formalized & developed prior to the study

what is another name for cross sectional designs?

ex post facto, or after the fact

with what type of research do you purposely select information-rich respondents, which is considered a type of sample bias?

qualitative research

with what type of research is the main aim to explore diversity, such that sample size and sampling strategy do not play a significant role?

qualitative research

with which type of research is data not collected through a set of predetermined questions but by raising issues around different areas of enquiry

qualitative research

what is another name for a deductive approach?

quantitative approach

measuring attitudes is more common with which research?

quantitative research, because it aims to measure or quantify information

with which sampling design does each element have an equal and independent chance of being selected for the sample?

random/probability sampling designs

what measurement scale is described below: has all the properties of nominal, ordinal and interval scales and also has a starting point fixed at zero, therefore it is an absolute scale

ratio scale

describe time series or multiple baselines design

repeated measures of the dependent variable both before & after the treatment O1 O2 O3 X O4 O5 O6

what are the advantages of an interview schedule

requires fewer interviewing skills than does unstructured interviewing

what do you call the road map that you decide to follow during your research journey to find answers to your research question as validly, objectively, accurately and economically as possible

research design

what provides the input to a study, such that the quality and validity of the output (the findings) depend solely upon it?

research tool

define structured interviews

researcher asks a predetermined set of questions using the same wording and order of questions as specified in the interview schedule

the saturation point in qualitative research determines

sample size

define quota sampling

select the sample from a location convenient to you; whenever you see someone with the visible characteristic of interest, ask that person to participate, and continue until you meet your required number of respondents (the quota)

define extraneous variable

several factors operating in a real-life situation may affect changes in the dependent variable. These factors, not measured in the study, may increase or decrease the magnitude or strength of the relationship between the independent and dependent variables.

what is the most commonly used method of sampling design?

simple random sampling

define elevation effect

some observers may prefer certain sections of the scale in the same way some teachers are strict markers and others are not

what is the disadvantage of recording observations?

some people may feel uncomfortable or behave differently in front of the camera

define a null or statistical hypothesis (H0)

states that any difference between groups is due to chance, or that there is no "true" difference (example: no relationship between obesity and heart disease)

the sample size in quantitative research depends on...

the type of study and the possible use of the findings

is saturation point objective or subjective

subjective

what is the purpose of quantitative research

testing theory, predicting, establishing facts, hypothesis testing

with a post-test only design, we cannot assume what?

that the effect is due to intervention

define independent variable

the cause supposed to be responsible for bringing about change(s) in a phenomenon or situation; it is manipulated, and is considered the treatment or causal factor of the change

the way a method is employed for data collection determines...

the classification of the study

what is the second most important consideration in the formulation of a research problem in quantitative research

the construction of a hypothesis

the greater the heterogeneity or diversity in what you are trying to find out about...

the greater the number of respondents you need to contact to reach saturation point

with levels of evidence, the smaller the number...

the higher the level of evidence

the larger the sample size

the more accurate your estimates

define sample size

the number of people in your sample

define dependent variable

the outcome or change(s) brought about by the introduction of an independent variable; it is not manipulated, and is considered the effect or outcome brought about by the "I" variable

the way you ask a question determines....

the response that you are likely to get from your respondents

define sample

the small group from the study population that you collect information from

what is the function of control groups?

the sole function is to quantify the impact of extraneous variables on the dependent variable.

with qualitative research what do you look at regarding attitudes?

the spread of attitudes

define halo effect

the way an observer rates an individual on one aspect of the interaction influences the way she rates that individual on another aspect of the interaction (assessment of student in one subject can affect performance assessment in another subject)

define sample design/strategy

the way you select your sample

a large sample size means that

there is inclusion of people with diverse backgrounds

define active variables

those variables that can be manipulated, changed or controlled. (types of OT intervention)

define attribute variables

those variables that cannot be manipulated and that reflect characteristics of the population (age, gender, eye color, religion)

when should you use qualitative methods

to gain information you can't easily get from quantitative methods.

define error of central tendency

unless the observer is extremely confident of his/her ability to assess an interaction, he/she may avoid extreme positions on the scale, using mostly its central part

with quantitative research how do you analyze the data?

using quantitative statistical tests

describe passive observation

variables are not manipulated but measured & then examined for relationships & patterns of prediction

examples of non-participant observation

watching nurses in a hospital

mixed methods' use or restriction is dependent on....

what and how the specific methods are used

the appropriate sample size depends on...

what you want to do with your findings and the relationships you want to establish

define participant observation

when the researcher participates in the activities of the group being observed in the same manner as its members, with or without their knowing that they are being observed (think anthropological)

define sampling error

when there is a difference between the sample statistic and the true population mean

when is null hypothesis the hypothesis of choice

when there is little research or theoretical support

when should a non-directional research hypothesis be used?

when you expect a difference or relationship but not which direction it will be

define study population

where you select your sample from

what does the alternative or research hypothesis postulate

whether a relationship or difference is expected and in what direction

define judgement or purposive sampling

you select those who, in your opinion, can provide the best information to achieve the objectives of your study; you go only to those people who are likely to have the required information

define natural observations

observing a group in its natural setting without interfering in its normal activities

what does the unit of measurement look at

two ways of categorizing variables (categorical and continuous variables)

define interview schedule

written list of questions, open-ended or closed, thoroughly tested for standardised wording, meaning and interpretation, prepared for use by an interviewer in a person-to-person interaction

define non-participant observation

you do not get involved in the activities as a researcher but remain a passive observer, watching and listening to activities and drawing conclusions

what are the advantages of quantitative observational strategy

you do not need to spend time on taking detailed notes and can thus concentrate on observation

type II errors means that...

you said that your null hypothesis was true when it wasn't, because your alternative hypothesis was actually true!

so type I error means that...

you said that your alternative hypothesis was true when it wasn't, because your null hypothesis was actually true!

define primary data

information you collect yourself, first-hand, when you undertake a research study and need the required information

define sampling statistics

your findings based on the information obtained from participants

define expert sampling

your respondents must be known experts in the field of interest to you

types of quantitative research samples are

• Random/probability sampling designs • Non-random/non-probability sampling designs; and • 'mixed' sampling designs

when you are searching for information what should you look at?

- What? What is known about the topic? What information do I need? - Why? For whom? What format? - Where? Where can the information be found? General knowledge sources (textbooks, encyclopedias), electronic databases (search engines, institutional websites, etc.) - How? Search query, topic subject/keywords, authors - How well? Evaluate the quality and reliability of information; discriminate between fact and opinion; notice interpretations of data; find additional information as needed to further understanding of different perspectives on the topic

examples of cross-sectional studies

- Group preference for a candidate - Study strategies of a group of students - Satisfaction with OT services by current patients

examples of experimental designs

- H1: Ergonomics and body mechanics training is significantly positively correlated with decreased incidence of CTS in office personnel 50 years or younger - H1: Kinesiotaping is significantly more effective in reducing subluxation than joint mobilization in young adults with shoulder injury

a true experimental design

- Have the greatest degree of control and internal validity - Upheld as the highest level of scientific evidence (RCT) - includes randomization, control group, manipulation of the "I" variable

what are some possible problems that observation as a method of data collection may suffer from?

- Hawthorne effect = you act differently when you are aware of being observed (either positive or negative) - Observer bias = if observer is not impartial, they can easily introduce bias and there's no easy way to verify the observations and inferences drawn from them - Interpretations may vary from observer to observer - Possibility of incomplete observation or recording, which varies with the method of recording

what are the threats to internal validity

- History - Maturation - Selection - Mortality/attrition - Testing - Instrumentation - Contamination - Intervention

questions to ask about attrition

- How did the researcher handle incomplete data? - Did they report the # and reasons why participants did not complete the study? Any adverse effects?

what are the threats to external validity

- How researchers use the data to make generalizations from the sample to the population, to other settings or future situations. - Interaction of selection & treatment - Interaction of setting and treatment - Interaction of history and treatment

the differences between quantitative and qualitative approaches are based on what 3 things

- How the data was collected - How it was analyzed - How the findings were communicated

what things SHOULD make your findings reasonably accurate

- If you get info from the total sampling population - If your sample represents the study population - If your method of enquiry is correct

describe manipulation

- Independent variable manipulation is done by withholding (control) or presenting (experimental) the intervention - In health services, controlling all influences on the "I" variable is not always possible; therefore, a cause-effect relationship is more difficult to establish - Mortality and/or attrition may also affect the outcome

what are some common causes of type II errors in OT

- Individuals with disabilities are a small part of the population - Individuals with disabilities vary from each other more than the general population does, making them difficult to homogenize - A large number of instruments used in OT generate measurement errors - A single intervention variable has limited long-term effects - Research designs in OT in the past were of lower strength, not using powerful statistics or robust procedures

disadvantages to closed questions

- Information obtained through them lacks depth and variety - Greater possibility of investigator bias (may only list response patterns that you're interested in) - The given response pattern may condition the thinking of respondents, so answers may not reflect respondents' opinions - Ease of answering a ready-made list of responses may create a tendency among some respondents and interviewers to tick a category without thinking about the issue

when should observations be used?

- You are more interested in the behaviour than in the perceptions of individuals - Subjects are so involved in the interaction that they are unable to provide objective information about it

tips for searching for literature

- Keyword "recall" searching yield matches in any field of a record or any part of a Web page. (more information with less precision). - "advanced," or "expert" searching using author, subject, etc (less information with more precision). - Use Quotation marks for exact phrases (i.e. "MS") - Use parenthesis for relationships (trauma and fractures) - Boolean-Connecting words that narrow or broaden a search (or, and, not) - Use wildcards and truncation symbols (* # ? !) for terms that have variant forms of spelling (child* for child, children, childhood) - Try using limiters when available (English language, A/V)

advantages of accidental sampling

- Least expensive way - You don't need information (sampling frame, total number of elements, location, or other information)

advantages to quota sampling

- Least expensive way - You don't need information (sampling frame, total number of elements, location, or other information)

what are the different levels of evidence

- Level I (RCT, meta-analyses, systematic reviews) ** - Level II (2 group, non-randomized, cohorts, case controls) - Level III (1 group, non-randomized (before-after, pre-post-test) - Level IV (descriptive studies, single subject, case series) - Level V (case reports, expert opinions)

disadvantages of questionnaires

- Limited application - Low response rate - Self-selecting bias - Lack of opportunity to clarify issues - No opportunity for spontaneous responses - The response to a question may be influenced by the response to other questions - Others can influence the answers - A response cannot be supplemented with other information

advantages of interview

- More appropriate for complex situations - Useful for collecting in-depth information - Information can be supplemented - Questions can be explained - Has a wider application

what are the prerequisites for data collection

- Motivation to share the required information - essential for respondents to be willing to share information with you; you should try to motivate by explaining clearly the objectives and relevance of the study - Clear understanding of questions (if not may give wrong or irrelevant answers) - Possession of the required information sought

study designs in quantitative research can have 3 different perspectives which are

- Number of contacts with the study population - Reference period - Nature of the investigation

what are the types of hypothesis

- research hypothesis - alternative hypothesis - null hypothesis (aka statistical)

what are the methods to draw a random sample

- Fishbowl draw = number each element, put the numbers on pieces of paper in a bowl, and then draw them out one by one without looking - Computer program - Table of randomly generated numbers

what are some pros of using a summated scale

- Good psychometric properties - Relatively cheap and easy to develop - Usually quick and easy to complete for subjects

describe the placebo design

A patient's belief that s/he is receiving treatment can play an important role in his/her recovery from an illness even if the treatment is ineffective; this psychological effect is known as the placebo effect. A placebo design attempts to gauge this effect by comparing a treatment group with a group receiving a placebo.

types of deductive hypothesis

- directional - non-directional

what are the different types of variables used when looking at causal relationships?

- "I" or independent variable: is manipulated, considered the treatment, causal factor of the change - "D" or dependent variable: is not manipulated, considered the effect or outcome brought about by the "I" variable - "Extraneous variable": factors operating in real life that affect the relationship or effects on the "D" and "I" variable. These factors are not measured in the study - "Confounding/Intervening variable": it links the independent & dependent variable. Outcome effect only assumed in its presence THINK DICE for causal relationship...

define hypothesis

- A tentative explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation. - Something taken to be true for the purpose of argument or investigation; an assumption.

what are the aims in selecting a sample

- Achieve maximum precision in your estimates within a given sample size - Avoid bias in selection of your sample

define experimental designs

- Aim to predict the causal relationships between "I" and "D" - Most rigorous types of investigation (most controls) - Many different types of models

describe qualitative research

- Allows for differences among subjects - Able to account for multiple aspects of complex situations - Semi-structured methods - Greater flexibility

describe control

- Allows us to see what the sample would be like without the experimental condition - Allows us to cancel out certain biases such as the "attention factor", "Hawthorne effect" or "halo effect": the effect of subjects experiencing change as a result of getting attention for being a participant

define cross-sectional studies

- Also known as a one-time snapshot - Most commonly used design in social science - Very simple design, inexpensive to implement - Appropriate for studying: prevalence of a problem, phenomenon, attitude, or opinion of a group at a given time

how to formulate effective questions

- Always use simple and everyday language - Do not use ambiguous questions (one that contains more than one meaning and can be interpreted differently by different respondents) - Do not ask double barrelled questions (question within a question) - Do not ask leading questions (one which, by its contents, structure, or wording, leads a respondent to answer in a certain direction) - Do not ask questions that are based on presumptions (assumes the respondents fit into a particular category)

disadvantages to open ended questions

- Analysis is more difficult (requires content analysis) - In a questionnaire, some respondents may not be able to express themselves, so information can be lost - Greater chance of interviewer bias in open-ended questions

what are the benefits of qualitative research

- Answer some questions that quantitative measures can't. - Connect directly with the population and the community with which you're concerned. - Get at certain underlying realities of the situation. - Involve the population of interest, or the community at large, in helping to assess the issues and needs of the community. - Often allow for a deeper examination of the situation or the community than quantitative methods do. - Allow for the human factor. - The parameters of the scope of a qualitative study, and information gathering methods and processes, are often flexible and evolving - Distinct to qualitative research is that, as the researcher, you make every effort to seek agreement of your respondents with your interpretation, presentation of the situations, and experiences, perceptions and conclusions

what are possible biases with selection for treatment

- Are there natural differences between the groups under study? Are the groups not homogeneous? - Due to narrow characteristics in the selection of participants, the results cannot be generalized to patients or individuals who do not have those characteristics - The researcher must restrict claims to the groups to which the results can be generalized

what are the advantages of a random/probability sample?

- As they represent the total sampling population, the inferences drawn from such samples can be generalised to the total sampling population - Some statistical tests based upon theory of probability can be applied only to data collected from random samples

describe randomization

- At selection & group assignment (example: ) - Important for generalizability & external validity of findings - Powerful tech for eliminating bias & increasing control - Influence of maturation or historical events are theoretically eliminated

what are some determinants of quality of data?

- At times the method most appropriate for a study cannot be used due to constraints such as lack of resources or lack of required skills - In selecting a method of data collection, you should know as much as possible about the study population's educational level, age structure, socioeconomic status and ethnic background - In selecting a method of data collection, you should know the purpose and relevance of the study to respondents

possible secondary sources

- Government or quasi-government publications (census, vital statistics registration, labour force surveys, health reports, economic forecasts and demographic info) - Earlier research - Personal records (diaries) - Mass media

when determining your sample size for quantitative studies (cause-and-effect) you need to consider

- At what level of confidence do you want to test your results, findings, or hypothesis - With what degree of accuracy do you wish to estimate the population parameters - What is the estimated level of variation (standard deviation) with respect to the main variable you are studying in the study population
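For illustration, a minimal Python sketch (an assumption, not part of the cards) of how these three considerations combine in the common sample-size formula for estimating a mean; the function name and example numbers are made up:
import math

def sample_size_for_mean(z, sigma, error):
    # n = (z * sigma / E)^2: confidence level (z), variability (sigma), desired accuracy (E)
    return math.ceil((z * sigma / error) ** 2)

# e.g. 95% confidence (z = 1.96), SD of 10 on the main variable, accuracy of +/- 2 units
print(sample_size_for_mean(1.96, 10, 2))  # -> 97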

unstructured interviews within qualitative research

- Based upon most of the characteristics of qualitative research - Flexible in structure, in-depth in their search, free from rigid boundaries, and at liberty to deviate from their predetermined course - Interaction can be at a one-to-one or a group level

what are possible biases with site or setting selection

- Because of the characteristics of the setting where the intervention took place, the researcher cannot generalize to individuals in other settings - Where did the study or interventions take place? - Were they different for different groups? - Clinic vs. home effects (naturalistic vs. controlled environment)

what are 2 ways we can eliminate or isolate extraneous variables

- Build the extraneous variable into the design of the study - Eliminate the variable

define categorical variables

- Categorical variables are measured on nominal or ordinal measurement scales. There are 3 types of categorical variables: - Constant variable: has only one category or value (tree, water, etc.) - Dichotomous variable: has only two categories (female/male, yes/no) - Polytomous variable: can be divided into more than two categories, for example religion (Christian, Muslim, Hindu)

define the maturation effect

- Changes in the study population may occur because it is maturing - This is particularly true when you are studying young children - If maturation is significantly correlated with the dependent variable, it is reflected in the 'after' observation

when is it best to use each closed and open-ended questions

- Closed questions are extremely useful for eliciting factual information - Open-ended questions are useful for seeking opinions, attitudes, and perceptions

types of randomization

- Cluster - Systematic - Simple: ID the population, determine the desired sample size, assign all elements a consecutive #, select arbitrary #s - Stratified: ID the population, determine the desired sample size

what are some examples of summated scales used in OT

- Community Integration Measure (CIM) - Reintegration to Normal Living (RNL) Scale - Functional Independence Measure (FIM) - Geriatric Depression Scale (GDS) - Social Support Inventory for People with Acquired Disabilities - Community Integration Questionnaire (CIQ) - Visual Function Questionnaire - 25 (VFQ-25) - Multidimensional Anxiety Scale for Children (MASC)

how would you describe concepts

- Concepts cannot be measured. - Variables can be subjected to measurement by crude/refined or subjective/ objective units of measurement. - Concepts are subjective

define variables

- Concepts that can be measured - Concepts are mental images or perceptions for which the meaning may vary from individual to individual - Variables are measured with varying degrees of accuracy depending on the scale used.

what are the functions of a research design

- Conceptualize overall design, procedures and protocols - Clearly delineate details to follow to ensure objectivity, accuracy and ensure internal and external validity

when is judgement or purposive sampling used

- Construct a historical reality - Describe a phenomenon - Develop something about which only a little is known

what are the different types of study designs based on the number of contacts within the study population?

- Cross sectional studies - Before and after studies - Longitudinal studies

describe the quantitative research concepts

- Deductive approach - Purpose: testing theory, predicting, establishing facts, hypothesis testing - Focus: Isolate variables, use large samples, collects data using formal instruments - Rigor: emphasis on objectivity - Hypothesis & plan formalized & developed prior to study - Data Analysis using quantitative statistical tests

what are the steps to developing a summated scale?

- Define the construct - Design the scale (agreement response choices are usually bipolar and symmetrical around a neutral point; most common choices concern agreement, evaluation, and frequency) - Pilot test the scale - Administration and item analysis - Validate and norm

describe descriptive designs

- Depicts naturally occurring events, characteristics of participants or data - No manipulation of "I" variable - *Common in OT - Serve as precursors to experimental studies

questions to ask about performing testing

- Did the assessors know about the participant's allocation/treatment groups? - Did the participants receive the same assessment and become familiar & remember responses?

what are some of the guiding questions in selecting a design

- Does the design answer the research question? - Does the design adequately control the "I" variables? Are there confounding variables? - Does the design maximize control & minimize bias? To what extent does the study maximize external generalizability of results? - What are some of the ethical or field limitations that affect the design?

interview guides serve to

- Ensure desired coverage of the areas of enquiry and comparability of information across respondents - Serve as starting points for discussion

study design based on the nature of investigation is classified as

- Experimental - Non-experimental - Quasi - or semi-experimental

describe type II error

- Failure to find a relationship when it actually exists - Sources of error (small sample size, possible confounding "I" variable, non-robust statistical test) - Related to β

what are some possible sources of errors?

- Faulty design - Faulty sampling procedures - Methods for data collection were inaccurate - Incorrect analysis - Statistical tests used were inappropriate - Interpretation of results are incorrect - the conclusions drawn are incorrect which can lead to 2 types of errors: - Rejection of a null hypothesis when it is true this is known as a type 1 error - Acceptance of a null hypothesis when it is false, this is known as type 2 error

possible issues of bias with timing and maturation

- Participants in the study change over time, affecting the outcomes (i.e. age, strength, physical development) - Are the effects of developmental-maturation changes controlled for? - If the study is too long, developmental change may occur; if the study is too short, effects may not be detected

what are some examples of the longitudinal design

- Participants study habits in OT program - F/U with HEP or use of AD

advantages to open ended questions

- Provide in-depth information if used in an interview by someone experienced, or allow respondents to feel more comfortable expressing their opinion in a questionnaire - Provide opportunity in a questionnaire for respondents to express themselves freely, without being conditioned to select from given answers - Eliminate the possibility of investigator bias

advantages to closed questions

- Provide ready-made categories within which respondents answer, which helps ensure that the needed info is obtained; the responses are also easier to analyse

difference in observations that are qualitative or quantitative

- Qualitative research = no framework for observation; recording is done in descriptive and narrative form - Quantitative research = follows predetermined framework and the recording is either categorical or on a scale

difference of use of secondary sources from qualitative and quantitative research

- Qualitative research use of secondary sources basically just extracts descriptive and narrative information - Quantitative research use of secondary sources usually extracts info in numerical or categorical forms

difference between quantitative and qualitative study designs?

- Quantitative study designs are specific, well structured, have been tested for their validity and reliability, and can be explicitly defined and recognised. - Qualitative studies either do not have these attributes or have them to a lesser degree. They are less specific and precise, and do not have the same structural depth.

designs for non-random sampling

- Quota sampling - Accidental sampling - Convenience sampling - Judgemental or purposive sampling - Expert sampling - Snowball sampling

to be a true experimental design what 3 characteristics must be present

- Randomization - Control group - Manipulation of the "I" variable

characteristics of RCT

- Randomization (for selection & allocation) - Control - Manipulation/intervention - Measures long-term outcomes on the "D" variable - Large sample size - Blinding - Scientific background - Clinical effectiveness - Generalizability (external validity; generalizing from the sample to the population)

describe RCT - randomized controlled trials

- Randomization (for selection & allocation), control, manipulation/intervention - Measures long-term outcomes on the "D" variable - Large sample size - Blinding - Scientific background - Clinical effectiveness - Generalizability (external validity; generalizing from the sample to the population)

define selection

- Refers to the methods used to select and allocate participants in the study

describe type I error

- Reporting a relationship between variables when there is none - Rejecting H0 when it should not be rejected - Sources of error (poor design, lack of randomization, lack of control) - Related to α (probability level)

disadvantages to quota sampling

- The resulting sample is not a probability one - Cannot generalize findings to the total sampling population - Choosing one location may mean respondents have characteristics unique to that location

disadvantages of accidental sampling

- Resulting sample is not a probability one - Some people contacted may not have the required information

possible bias regarding history of treatment

- Results are time bound, therefore, a researcher cannot generalize to past or future situations. - Would need to replicate at later time to determine if results hold

what are the different types of study designs based on the reference period (time-frame in which a study is exploring)

- Retrospective - Prospective - retrospective-prospective

randomization ensures that

- Sample is representative of population - Avoid bias

bias in selection can occur if

- Sampling is done by a non-random method - The sampling frame (list of the population) doesn't cover the sampling population accurately and completely - A section of the sampling population is impossible to find or refuses to cooperate

advantages of selecting a sample from total population

- Saves time - Saves money and human resources

what are the different threats to validity of biases

- Selection bias (participant membership, i.e. volunteers) - Information bias (knowledge, memory recall) - Interviewer bias (selective information gathered ) - Rater bias (characteristics of the rater, training, experience, orientation, understanding of the instrument ) - Hawthorne effect (influence of investigator's presence)

define before and after study design

- Similar to Cross-Sectional but involves 2 separate data sets or data periods with same group - It can measure change in a phenomenon, attitude, problem or situation - Commonly used to evaluate effectiveness of a program, treatment - This design could be Experimental or Non-Experimental

characteristics of hypothesis

- Simple - Specific - Conceptually clear; there is no room for ambiguity - It should be unidimensional; it should test only one relationship or hunch at a time

what are 3 commonly used types of random sampling design

- Simple random sampling - Stratified random sampling - Cluster sampling

how can we eliminate or isolate extraneous variables

- Sometimes it is possible to eliminate an extraneous variable or to build it into the study design. This is usually done when there is strong evidence that the extraneous variable has a high correlation with the dependent variable. - Two methods to achieve this: - Build the extraneous variable into the design of the study - Eliminate the variable

what does a good hypothesis statement include?

- Sound reasoning - Reasonable explanation for the predictable outcome - Clearly states the relationship between variables - Can be tested - Specify what you want to find out about; bring specificity & clarity - Provide focus in your design, measurement & data collection - Enhances objectivity - Enables you to specifically conclude what is true or not true based on your study & data

how do you operationally define the variable?

- State or describe the variable for purpose of the research

how do you operationally define the variables?

- State or describe the variable for purposes of the research - This is important for internal validity and to allow for replication of the study to be possible - The effectiveness of intervention results, descriptions of the populations and/or outcome measures may be invalid if the variables were not operationally defined

define the retrospective design

- Study of a phenomenon or situation that has already occurred - Document review or recollection (memory) - May use primary or secondary data (records)

what are some cons of using a summated scale

- Subjects usually need fairly high level of literacy - Some level of expertise to develop a good scale - once someone knows how, it is not difficult

What is a research design?

- The "HOW" - The "Road Map" for answering your research question - the road map that you decide to follow during your research journey to find answers to your research question as validity, objectively, accurately and economically as possible - The "Operational Plan" - The "Blueprint" for describing how you would go about completing your research

what are the different types of experimental designs?

- The after-only experimental design - The before and after experimental design - The control group design - The double-control design - The comparative design - The matched control experimental design - The placebo design

define ordinal measurement scale

- The data have the properties of nominal data and the order or rank of the data is meaningful. - A non-numeric label or a numeric code may be used. - Data are qualitative or quantitative. - Arranged in order (ranking), but differences between data entries are not meaningful or equal. - Examples: - Students of a university classified by their class standing using non-numeric labels: Freshman, Sophomore, Junior, or Senior - Attitudinal Likert scales (strongly agree, agree, disagree)

define interval measurement scale

- The data have the properties of ordinal data and the interval between observations is expressed in terms of a fixed unit of measure. - Interval data are always numeric. - Data are quantitative. A zero entry simply represents a position on a scale; the entry is not an inherent zero - Arranged in order, the differences between data entries can be calculated. - Examples: - temperature - attitudinal scales (Thurstone —statements or ratings established by judges)

describe the after-only experimental design

- The researcher knows that a population is being, or has been exposed to an intervention and wishes to study its impact on the population. - This study is widely used in impact assessment studies

what needs to be included in a research design?

- The type of design (i.e. group comparison, cross-sectional, experimental, etc.) - Who will be the "Population"? How will it be identified? - Who is the "Sample"? How will it be selected? How will ethical principles be ensured? - What data will be collected? Variables? - How will the data be collected? Analyzed? - How will confidentiality or anonymity be ensured?

describe what pre-testing a research instrument means

- This entails a critical examination of the understanding of each question by respondents - Pre-test should be carried out under actual field conditions on a group of people similar to your study - Purpose is NOT TO COLLECT DATA but to ID problems that the potential respondents might have in understanding and interpreting the question

why should you operationally define variables

- This is important for internal validity and to allow for replication of the study to be possible

disadvantages of interview

- Time consuming and expensive - Quality of data depends upon the quality of the interaction - Quality of data depends upon the quality of the interviewer - Quality of data may vary when multiple interviewers are used - Possibility of researcher bias

what is the purpose of hypothesis testing?

- To establish if our hypothesis is true; choose between competing hypotheses (H0 or H1) - Must determine the test statistic - Select the degree of certainty: probability (p value), critical value
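A minimal sketch of this decision in Python (an illustration assuming scipy and made-up scores, not part of the cards):
from scipy import stats

control = [12, 15, 14, 10, 13, 11, 14]     # made-up outcome scores, control group
treatment = [16, 18, 15, 17, 19, 16, 18]   # made-up outcome scores, treatment group
alpha = 0.05                                # chosen probability level

t_stat, p_value = stats.ttest_ind(treatment, control)  # test statistic and p value
# Reject H0 (no difference) in favour of H1 only if p < alpha
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, reject H0: {p_value < alpha}")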

what are the different types of descriptive designs

- Univariate (descriptive case study, epidemiological-incidence, normative and developmental designs) - Correlational (relationships or associations between variables)

how do we manage and control extraneous variables

- Use of a control group - Randomization - Screening (to eliminate certain factors) - Matching groups - Build effect of "E" into design (i.e. 2x3 Factorial) - Use of Standardized and valid, reliable instruments

what are the problems with data from secondary sources

- Validity and reliability = may vary markedly from source to source - Personal bias = info from personal diaries, newspapers, and magazines may have the problem of personal bias - Availability of data - Format = make sure categories match what you need

examples of when observation is appropriate

- Want to learn about the interaction in a group - Study the dietary patterns of a population - Ascertain the functions performed by a worker - Study the behaviour or personality traits of an individual - Situations where full and/or accurate info cannot be elicited by questioning (patients are not cooperative or are unaware of the answers because it is difficult for them to detach themselves from the interaction)

questions to ask about selection

- Was there randomization? Were the participants selected because they had certain predisposing characteristics known to the researcher? - Was there concealment of assignments?

when writing a literature review consider

- What is known about the subject? Present a comprehensive view - Are there gaps in the knowledge of the subject? - Have the studies reviewed identified areas of further research? - Be able to distinguish author's opinion vs. empirical evidence. - Who are the recognized researchers in the topic? Make sure to cite them. - Is there significant debate on the topic? Include the different perspectives - What methods or problems were identified by others studying the topic? Discuss them & point to how your project would address them. - Highlight the current status of research in the topic?

when operationally defining variables you should consider

- What is the theoretical rationale underlying the definition? - What is the accepted definition? - How can the variable be measured? - Is the variable defined by a standardized procedure?

when you are operationally defining the variable you should consider

- What is the theoretical rationale underlying the definition? - What is the accepted definition? - How can the variable be measured? - Is the variable defined by a standardized procedure?

questions to ask about intervention

- Who provided or will provide the interventions? - Would there be a different therapist involved? - Were they trained? - Is the protocol for intervention clearly delineated? - Are the therapists following it strictly?

all non-probability sampling designs (purposive, judgemental, expert, accidental, and snowball) can be used in qualitative research with 2 differences:

- You do not have a sample size in mind; you collect data until saturation point - You are guided by your judgement as to who is likely to provide the best info

disadvantages of selecting a sample from total population

- You do not obtain info about the population's characteristics of interest to you, but only estimate or predict them on the basis of what you found - Meaning there is a possibility of error

examples of participant observation

- You pretend to have a handicap that requires you to use a wheelchair and observe the reactions of people you encounter - You live the way prisoners live and collect information - You study a tribe in a remote area by going to live with them and collecting the data you need

define non-probability sampling designs

- a design that does not follow the theory of probability in the choice of elements from the sampling population - used when either the number of elements in a population is unknown or the elements cannot be individually identified

variables can be classified based on study design as either

- active = can be manipulated, changed or controlled (Examples Types of OT Interventions) vs. - attribute = cannot be manipulated, changed or controlled and reflect characteristics of the population (Examples - Age, gender, eye color, religion)

define longitudinal design

- Appropriate for the study of patterns of change in a phenomenon, attitude, or behavior over a long time period (*period may vary from study to study: 1 wk, 2 wks, 5 yrs) - Multiple data collection points are used (weekly, admission, discharge, 6-month f/u) - Accuracy of data is increased

what are the disadvantages of the longitudinal design

- attrition - conditioning of subjects

define accidental sampling

- Based upon convenience in accessing the population - You make no attempt to include people based on visible characteristics - You stop collecting data when you have reached your required number of respondents

what would be the design of a before and after (pre and post) test

- A baseline measurement is done (may or may not include a control group; comparability is better with one) R O X O

what are some advantages of survey

- can reach a large # of respondents with limited time & resources - Numerous variables could be measured - Statistical manipulation may allow multiple use for the data

types of variables are broken down into 3 different ways

- causal relationship (DICE - dependent, independent, confounding, extraneous variables) - study design (active or attribute variables) - unit of measurement (categorical or continuous variables)

define in-depth interviewing

- Comes from the interpretive tradition; it is known as repeated face-to-face encounters between the researcher and informants, directed toward understanding informants' perspectives - Includes face-to-face, repeated interaction and aims to understand the perspectives of respondents - Leads to increased rapport, understanding, and confidence

variables must be....

- considered in your questions - clearly delineated in the research hypothesis

describe quasi-semi experimental designs

- decision to use these designs should be based on the theory level, the research question and the constraints of the environment - In health & human services there might be ethical considerations for use of randomization and manipulation or withholding of treatment.

what types of studies under quantitative research would need large sample sizes?

- designed to formulate policies - test relationships - establish impact assessments

define indirect approach

- Used when direct questioning is likely to offend respondents or they are unlikely to answer even non-sensitive questions - Can ask questions in an indirect manner by: - Showing drawings or cartoons - Asking respondents to complete a sentence - Asking respondents to sort cards containing statements - Using random devices

deductive research breaks into what type of research hypothesis?

- directional (only use it if you have a substantive bases to believe the results would be in the indicated direction) - non-directional (used when you expect a difference or relationship, but not which direction it will be)

literature type from weakest to strongest

- expert opinions, editorials - case reports, case series - case controlled studies - cohort studies - RCT - systematic reviews

define narratives

- have almost no predetermined content except that the researcher seeks to hear a person's retelling of an incident or happening in his/her life - You let the person talk freely without interruption - Are very sensitive in nature

define ratio measurement scale

- highest level of measurement scale - The data have all the properties of interval data and the ratio between values is meaningful. - Variables such as distance, height, weight, and time use the ratio scale. - This scale must contain a zero value that indicates that nothing exists for the variable at the zero point. - Data are similar to the interval level, but a zero entry is meaningful. - Examples: - Age, GPA, height, time

example of causal relationships variables with smoking

- independent = extent of smoking (assumed cause) - dependent = incidence of cancer (assumed effect) - extraneous = secondary exposure, duration of smoking, other health factors like diet and exercise - intervening = genes

example of causal relationships variables with sleep habits

- independent = sleep habits - dependent = daytime somnolence - extraneous = motivation, stress, children - intervening = neurological disease

what are some primary sources

- information gathered using the first approach; provide first-hand information - Includes: observations (Participant and non-participant), interviewing (structured and unstructured), and questionnaires (mailed, collective, and online)

what are some secondary sources

- information gathered using the second approach; provide second-hand data - Includes: documents (government publications, earlier research, census, personal records, client histories, service records)

define oral histories

- Involve the use of both passive and active listening (like narratives) - More commonly used for learning about a historical event or episode, or for gaining info about a culture, customs, or story that's passed from generation to generation - Useful when in-depth info is needed or little is known about the area - Comparability of questions asked and responses obtained may be difficult as there is no specific list of questions - Freedom from questions can introduce investigator bias

how would you describe variables?

- is a property that takes on different values. - is something that varies - Is a symbol to which numerals or values are attached - Rational units of analysis that can assume any one of a number of designated set of values

define focus group interviewing

- Just like an in-depth interview except that it is undertaken with a group; you explore the perceptions, experiences, and understandings of a group of people who have some experience in common with regard to a situation or event - Broad areas of discussion topics are developed beforehand, either by the researcher or the group

what are some disadvantages to the before and after study design

- longer time to complete - not possible to determine causation or to quantify effects of the "I" or "E" variables - biases— attrition, maturation, memory

nominal measurement scale

- lowest measurement scale - Data are labels or names used to identify an attribute of the element. - A non-numeric label or a numeric code may be used. - measurements are qualitative only. - It is a classification scale - A numeric or non-numeric code may be assigned (ex: 1=OT, 2=NUR,3=PT) - Examples: - Colors of the American flag, - Programs of study at a college (OT, NUR, PT) - Gender - types of trees

what are the ways of administering a questionnaire

- mailed questionnaire - collective administration - online questionnaire - administration in a public place

what are the disadvantages of the retrospective design

- memory bias - limited control of variables

describe a mailed questionnaire

- most common approach to collecting information is to send the questionnaire to prospective respondents by mail - MUST INCLUDE COVER LETTER

describe a factorial design

- Multiple comparisons of factors and levels: R O X Y1 O; R O X Y2 O; R O X O; R O Y1 O; R O Y2 O

observation situations can be

- natural - controlled

what are the different levels of measurement (least to greatest detailed)

- nominal - ordinal - interval - ratio ACRONYM NOIR! like black... because well the better we measure it the more complicated the research gets and its like falling in a black hole!

types of measurements scales are

- nominal or classificatory scale - ordinal or ranking scale - interval scale - ratio scale

what are the different types of questions you can ask in a questionnaire?

- open-ended - closed

what are different types of observation

- participant observation - non-participant observation

what are the major approaches to information gathering

- primary data - secondary data

what are the two types of stratified random sampling

- proportionate - disproportionate

what are the 2 categories on the best way to order questions

- random order - follow a logical progression based on the objectives

what are the different types of sampling designs?

- random/probability sampling designs - non-random/non-probability sampling designs - 'Mixed' sampling design

what are the two methods that ensure that the control and experimental groups are comparable with one another

- randomisation - matching

advantages of questionnaires

- save time and human and financial resources - offers greater anonymity = no face-to-face interaction between respondents and interviewer

possible bias regarding reporting

- selective reporting of information

describe non experimental design

- survey - passive observation - cross sectional

define alternative hypothesis

- The formulation of this hypothesis is a convention in scientific circles. Its main function is to explicitly specify the relationship that will be considered true in case the research hypothesis proves to be wrong. - The alternative hypothesis is the opposite of the research hypothesis

define closed questions

- the possible answers are set out in the questionnaire or schedule and the respondent or investigator ticks the category that best describes the respondent's answer. - The questions and categories you develop cannot be changed therefore you should be certain about your categories

what is the design for the solomon 4 group

- This is a combined post-test and true experimental design - It includes a control group and could also use multiple intervention comparisons - It also gives the opportunity to test the potential influence of the test-retest learning phenomenon: R O X O; R O O; R X O; R O

with survey designs you need to consider

- type of questions - scaling

what are the methods of data collection with qualitative research

- unstructured interviews (in-depth interviewing, focus group interviewing, narratives, oral histories) - observations - secondary sources

how can we record observations?

- videocamera - notes

random sample can be selected using 2 different methods

- with replacement - without replacement

how do you record observations with quantitative research

- would record an observation in categorical form or on a numerical scale - Recording is done on a scale developed in order to rate various aspects of the interaction or phenomenon - Scale may be one-, two-, or three-directional - Examples: - two-directional = categorical recording - passive/active; introvert/extrovert - three-directional = nature of interaction within a group - positive, negative, and neutral or categorical - always/sometimes/never

when can you classify a study under the retrospective-prospective study design

- you measure the impact of an intervention without having a control group. - The baseline is constructed from the same population before the intervention is introduced - Combination of retrospective data collection and then f/u prospectively to determine impact of intervention or change

what are methods of drawing a random sample

1. The fishbowl draw 2. A computer program 3. A table of randomly generated numbers

what are the 2 methods to ensure that extraneous variables have a similar effects on control and experimental groups

1. Ensure that the extraneous variables have a similar impact on control and experimental groups 2. Eliminate or isolate the extraneous variable(s)

what are the steps for stratified sampling

1. Identify all elements or sampling units in the sampling population 2. Decide upon the different strata (k) into which you want to stratify the population 3. Place each element into the appropriate stratum 4. Number every element in each stratum separately 5. Decide the total sample size (n) 6. Decide whether you want to use proportionate or disproportionate stratified sampling a. Disproportionate: 7. Determine the number of elements to be selected from each stratum (n / k) 8. Select the required number of elements from each stratum by simple random sampling b. Proportionate: 7. Determine the proportion of each stratum in the study population (elements in each stratum / total population size) 8. Determine the number of elements to be selected from each stratum (sample size x p) 9. Select the required number of elements from each stratum by simple random sampling
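For illustration only, a minimal Python sketch of the proportionate and disproportionate options, assuming a hypothetical two-stratum (gender) population; the stratum names, element IDs, and sample size are made up:

```python
import random

# Hypothetical strata with made-up element IDs
strata = {
    "female": list(range(1, 61)),   # 60 elements
    "male": list(range(61, 101)),   # 40 elements
}
n = 20                              # total sample size
N = sum(len(elements) for elements in strata.values())

# Proportionate: elements per stratum = sample size x proportion of stratum
proportionate = {
    name: random.sample(elements, round(n * len(elements) / N))
    for name, elements in strata.items()
}

# Disproportionate: equal share per stratum (n / k), ignoring stratum size
k = len(strata)
disproportionate = {
    name: random.sample(elements, n // k)
    for name, elements in strata.items()
}
```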

what are the steps for simple random sampling

1. Identify by a number all elements or sampling units in the population 2. Decide on the sample size n 3. Select n using the fishbowl draw, table of random numbers, or computer program
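For illustration only, a minimal Python sketch of these steps using the "computer program" option from the methods of drawing a random sample; the population size and sample size are made-up assumptions:

```python
import random

# Step 1: hypothetical sampling frame, every element numbered 1..N
N = 500
population = list(range(1, N + 1))

# Step 2: decide on the sample size n
n = 50

# Step 3: select n elements with a computer program
# (this plays the role of the fishbowl draw or a table of random numbers)
sample = random.sample(population, n)
```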

what are the 3 principles that guide sampling

1. In a majority of cases where sampling is done, there will be a difference between the sample statistics and the true population mean, which is attributable to the selection of the units in the sample. 2. The greater the sample size, the more accurate the estimation of the true population mean. 3. The greater the difference in the variable under study in a population, for a given sample size, the greater the difference between sample statistics and true population mean.

what are the four characteristics of summated scales?

1. Multiple items 2. Items with underlying, quantitative measurement continuum 3. There is no "right" answer 4. Each item is a statement that subjects are asked to respond to
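As an illustration only, a minimal Python sketch of how a summated score might be computed from multiple items; the responses, the reverse-scored item, and the 5-point range are all made-up assumptions:

```python
# Hypothetical responses of one subject to a 5-item Likert-type scale
# (1 = strongly disagree ... 5 = strongly agree); values are made up
responses = [4, 5, 2, 4, 3]

# Negatively worded items are reverse-scored before summing
reverse_items = {2}   # index of the negatively worded item (assumption)
max_score = 5

scored = [
    (max_score + 1 - r) if i in reverse_items else r
    for i, r in enumerate(responses)
]
total = sum(scored)   # the summated attitude score for this subject
```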

steps for systematic sampling design

1. Prepare a list of all the elements in the study population 2. Decide on the sample size (n) 3. Determine the width of the interval (k), where k = total population (N) / sample size (n) 4. Using SRS, select an element from the first interval 5. Select the ith element from each subsequent interval
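A minimal Python sketch of this procedure; the population size, sample size, and the random starting point within the first interval are assumptions for illustration:

```python
import random

# Hypothetical list of all elements in the study population
population = list(range(1, 201))   # N = 200 (made up)
n = 20                             # desired sample size

k = len(population) // n           # width of the interval (k = N / n)

start = random.randint(0, k - 1)   # SRS pick from the first interval
sample = population[start::k]      # the same position in each subsequent interval
```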

factors affecting the inferences drawn from a sample

1. Size of sample = larger samples have more accurate findings 2. Extent of variation in the sampling population = the higher the variation (with respect to characteristics) in the study population, the greater the uncertainty for a given sample size

what are the 3 common attitudinal scales in quantitative design?

1. The summated rating scale (aka the Likert scale) 2. The equal-appearing interval scale (aka the Thurstone scale) 3. The cumulative scale (aka the Guttman scale)

what are 3 common difficulties with developing an attitudinal scale?

1. Which aspects of a problem or situation should be included when seeking to measure an attitude towards an issue or a problem? From the example above, which aspects of teaching should be included in the scale to determine the overall attitude of the students towards the lecturer? 2. What procedure should be adopted for combining the different aspects to obtain an overall picture? 3. How can one ensure that a scale really is measuring what it is supposed to measure?

what factors affect the inferences drawn from a sample?

1. The size of the sample = larger samples give more certainty than smaller ones. 2. The extent of variation in the sampling population = the greater the variation in the study population with respect to the characteristics under study, for a given sample size, the greater the uncertainty. In other words, the greater the SD, the greater the standard error in your estimates for a given sample size.
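Not from the source, but both points can be illustrated numerically with the usual standard-error-of-the-mean formula SE = SD / sqrt(n); the SD and sample sizes below are made up:

```python
import math

sd = 12.0                        # made-up standard deviation
for n in (25, 100, 400):
    se = sd / math.sqrt(n)       # standard error of the mean
    print(n, round(se, 2))       # SE shrinks as the sample size grows
```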

examples of a quasi experimental design

2 groups of CVA pts at a hospital - one receiving NMF, the other not

variables must have at least how many values or scores?

2!

what is a statement of hypothesis?

A statement predicting the relationship between the study variables

what is qualitative research based on?

Based on inductive rather than deductive logic; designs are flexible and emergent in nature, and are often non-linear and non-sequential in their operationalization

define variable

An image, perception or concept that is capable of measurement - hence capable of taking on different values.

questions to ask about contamination

Are the study participants receiving other types of interventions not accounted for in the study?

define following a logical progression of questions based on objectives

The author prefers this approach as it gradually leads respondents into the themes of the study, going from simple to complex

define the equal-appearing interval scale (thurstone scale)

Calculates a weight or attitudinal value for each statement. The main advantage of this scale is that as the importance of each statement is determined by judges, it reflects the absolute rather than the relative attitudes of respondents. The scale is thus able to indicate the intensity of people's attitudes and any change in this intensity should the study be replicated. On the other hand the scale is difficult to construct and a major criticism is that judges and respondents may assess the importance of a particular statement differently and therefore, the respondents' attitude might not be reflected.

example of a pre-experimental design

Case study (small group response to intervention): X O
Pretest-posttest (change over time): O X O

what could be a dependent variable for these independent variables: I: safety training, exercise I: memory aid use

D: re-hospitalization freq. D: occupational participation

the purpose of sampling in qualitative research is to

Gain in-depth knowledge about a situation, event, or episode, or a different aspect of the individual

what is deductive research?

Generally derived from theory or previous research (i.e. John X and Smith K (2009) found that the use of resting splints was significantly correlated with pain....)

how do we ensure objectivity, accuracy, and validity in research?

Defining and isolating variables (i.e. "I", "D", "C", "E") - Maximizing effect of "I" on "D" - Minimizing effect of "E" and "C" - Managing/Controlling for Extraneous variables

the purpose of sampling in quantitative research is

Draw inferences (with respect to the focus of your inquiry)

with any research regardless of sample size there is always a possibility of....

ERROR

define randomization

Each member in the population has an equal and independent chance of being selected

define prospective design

Establishing the likelihood of or prevalence of a phenomenon, situation, or event in the future.

study design examines what?

Examines association or causation; the design may be a controlled/contrived experiment, a quasi-experiment, or an ex post facto (non-experimental) study. In controlled experiments the independent (cause) variable may be introduced or manipulated either by the researcher or by someone else who is providing the service. Involves two sets of variables (active or attribute)

define the retrospective-prospective study design

Focus on past trends in a phenomenon and study into the future. Part of the data is collected retrospectively from the existing records before the intervention is introduced and then the study population is followed to ascertain the impact of the intervention.

describe systematic sampling design

Has characteristics of both random and non-random sampling

To use systematic sampling design you must

Have a sampling frame for your study population

how are hypotheses tested?

I. Formulate your hunch; construct your hypothesis II. Collect the research data and gather appropriate evidence III. Analyze the data to determine if your hunch or hypothesis is true or false (validity of your hypothesis)

define cumulative or guttman scale

Is one of the most difficult scales to construct and therefore is rarely used. This scale does not have much relevance for beginners in research.

what is the focus of quantitative research

Isolate variables, use large samples, collects data using formal instruments

examples of retrospective-prospective study design

Impact of education on smoking cessation

describe control group design

In a study utilising the control design the researcher selects two population groups instead of one. These groups are expected to be comparable as far as possible in every respect except for the intervention.

what should you consider when making a likert scale?

In developing a likert scale there are a number of things to consider. - Firstly decide whether the attitude to be measured is to be classified into one, two, or three directional categories with respect to their attitude toward the issue under study. - Next, consider whether you want to use categories or a numerical scale.

describe the matched control experimental design

In matched studies, comparability is determined on an individual-by-individual basis.

what is inductive research?

Indicates a generalization based on observed relationships, patterns among variables (i.e. therapist notes that 5 of her patients with deQuervain's who use resting splint have less c/o pain than the others..)

what are threats to validity of a study

Internal threats - Relates to flaws of the design in relation to procedures, treatments, or experiences of participants that threaten the research's ability to draw correct inferences from the data/results. External threats - Relates to incorrect inferences by the researcher to the population, setting or future situation where results/findings may be applicable.

what is the difference between interview schedule and questionnaire

With an interview schedule the interviewer asks the questions; with a questionnaire the respondents record the answers themselves

define summated scale or likert scale

Is based upon the assumption that each statement/item on the scale has equal attitudinal value, importance, or weight in terms of reflecting an attitude towards the issue in question. This assumption is also the main limitation of this scale.

what is the main aim of choosing a sample

Main aim is to find answers to your research questions as they relate to your total study population

example of time series or multiple baselines design

Measure ROM once a week, then provide dynamic splint, measure AROM once a week after

with a before and after test can you assume the effect is due to intervention?

NO

are hypotheses essential for a study?

NO!

do research instruments in qualitative research have predetermined questions?

No! but the researcher may have a loose list of issues and discussion points that they want to discuss with respondents or have ready in case (aka an interview guide)

describe quasi experimental designs

Non-equivalent control group design (pre-test, post-test comparison of two or more groups; groups may be a convenience sample, non-randomized)
O X O
O O

difference between quantitative and qualitative interviews

Quantitative = responses are categorised, coded, and quantified vs. Qualitative = responses are used as descriptors and can be integrated with your argument

define alternative or research hypothesis (H1)

Postulates whether a relationship or difference is expected and in what direction. It can be directional or non-directional Examples: H1: There is a significant positive relationship between obesity and heart disease H1: The higher the BMI, the greater the incidence of heart disease H1: There is a significant negative relationship between regular moderate exercise and MI

what would be the design of a post-test only design

R X O or r X O

define internal threats

Relates to flaws of the design in relation to procedures, treatments, or experiences of participants that threaten the research's ability to draw correct inferences from the data/results.

define external threats

Relates to incorrect inferences by the researcher to the population, setting or future situation where results/findings may be applicable.

what are some examples of cross sectional designs

Resumption of life roles in coronary bypass patients

what are examples of the retrospective design

Review of records for THR FIM scores by discharge (OT vs PT)

define reactive effect

Sometimes the instrument itself educates the respondent

define null hypothesis or statistical hypothesis

States that the difference b/w groups is due to chance or there is no "true" difference. H0, or the null hypothesis, is the hypothesis of choice when there is little research or theoretical support.

guidelines for approach in development of a research instrument for a beginner

Step I - if you have not already done so, clearly define and individually list all the specific objectives, research questions, or hypothesis, if any, to be tested Step II - for each objective, research question, or hypothesis, list all the associated questions that you want to answer through your study Step III - take each question that you identified in Step II and list the information required to answer it Step IV - formulate questions that you want to ask of your respondents to obtain the required information

describe the comparative design

Studies can be carried out either as an experiment or a non-experiment. In the comparative experimental design, the study population is divided into the same number of groups as the number of treatments being tested.

if the variables were not operationally defined then...

The effectiveness of intervention results, descriptions of the populations and/or outcome measures may be invalid

examples of prospective design

The incidence of driving accidents in adolescents with TBIs The effects on functional performance in dementia patients who receive OT.

what does it mean to collect data from secondary sources

This is when the data has already been collected by someone else or already exists as part of the routine record keeping by an organisation and what you need to do is extract the required info for the purpose of your study

what is the focus in qualitative research?

Understand, explain, explore, discover and clarify situations, feelings, perceptions, attitudes, values, beliefs and experiences of a group of people

when is snowball sampling useful?

Useful if you know little about the group or organisation you wish to study

define random order of questions

Useful in situations where the researcher wants respondents to express their agreement or disagreement with different aspects of the issue

questions to ask about internal bias

Were there any historical events that affected participants and unduly influenced the outcomes of the study? Examples: passing of legislation, institutionalization, new meds

define observations

another method used for data collection; difference between use of observation in quantitative and qualitative research depends on the degree of flexibility and freedom in what we observe, recording and analyzing the data generated through it

when choosing a sample with quantitative research you...

are guided by a predetermined sample size that is based on a number of considerations in addition to resources

define concepts

are mental images or perceptions and therefore their meanings vary markedly from individual to individual.

what is the difference in the use of hypotheses in quantitative and qualitative research?

the difference revolves around the importance attached to, and the extent of use of, hypotheses when undertaking a study

define cluster sampling

based on ability of researcher to divide the sampling population into groups (based on visible or easily identifiable characteristics) called clusters, and then select elements in each cluster using the SRS technique

define stratified random sampling

based upon the logic that if you can reduce the heterogeneity in the population for a given sample size, you can achieve greater accuracy in your estimate; the researcher attempts to stratify the population in such a way that the population within a stratum is homogeneous with respect to the characteristic on the basis of which it is stratified - Example = stratify by gender

define hawthorne effect

bias regarding the influence of investigator's presence

define interviewer bias

bias with selective information gathered

What are the meanings of Campbell and Stanley's classic notation system to diagram a study design

X = independent variable or experimental condition Y = another experimental or intervention condition Z = another experimental or intervention condition O = dependent variable - measurement/data collection R = random selection r = random assignment/allocation to group M = matching

describe the double-control design

You have to control two groups instead of one. To quantify, say, the reactive effect of an instrument, you exclude one of the control groups from the 'before' observation.

what do variables signify?

a level of measurement

define questionnaire

a written list of questions, the answers to which are recorded by respondents; respondents read questions, interpret what's expected, and then write down the answers

use of attitudinal scales in quantitative research

able to explore, measure, and determine the intensity and combine attitudes to different aspects of an issue to arrive at a single indicator that is reflective of the overall attitude.

what type of sampling is common among market researchers and news reporters

accidental sampling

define without replacement sampling

after one has been chosen it is not returned to the sample population which means that the others are more likely to be chosen

define with replacement sampling

after you choose a sample unit you put them back in the sample population you are choosing from so that each has an equal chance of being chosen
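A minimal sketch, assuming Python's standard random module, of the two selection methods; the population IDs and sample size are made up:

```python
import random

population = list(range(1, 11))          # made-up sampling population of 10 units

# Without replacement: chosen units are not returned, so no repeats
without = random.sample(population, 5)

# With replacement: each draw starts from the full population, repeats possible
with_repl = random.choices(population, k=5)
```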

what is the opposite of the research hypothesis

alternate hypothesis

what are these examples of? - How effective is safety training compared to exercise in preventing fall related re-hospitalizations in community dwelling older adults? - Is the use of assistive technology an effective cognitive compensation strategy for improving occupational participation in young adults with memory deficits?

examples of guiding questions or research questions

what are these examples of? - There is a significant negative relationship between safety training and re-hospitalization related to falls - There is a significant positive association between the use of memory aids and occupational participation in young clients with memory impairments

examples of hypotheses

what are these examples of: - Falls account for 75% of hospitalizations in the elderly - The use of technology for cognitive training has shown conflicting effects and clinicians are not familiar with the technology

examples of research problems

use of attitudinal scales in qualitative research

explore the spread of attitudes and establish the types of attitudes prevalent. We can also ascertain how many people have a particular attitude and the intensity of their attitudes.

describe quantitative research

favours restrictions, follows a structured, inflexible format; recorded info in a structured pre-planned manner

describe qualitative research

goes against restrictions, follows an unstructured, flexible format; identified issues for discussion during data collection; recorded info in a descriptive and narrative format and subjected it to categorical and descriptive analysis

define continuous variables

have continuity in their measurements, its measured on interval or ratio scale; they have more precise units of measurements.

define unstructured interviews

having almost complete freedom in terms of structure, contents, question wording and order; you may formulate questions and raise issues on the spur of the moment; useful in exploring intensively and extensively and digging deeper

what is the importance of attitudinal scales?

help us find out how people feel towards these situations or issues.

disadvantages of unstructured interviews

the high level of skill required to conduct them

what do you call a proposition, condition, or principle which is assumed, perhaps without belief, in order to draw out its logical consequences?

hypothesis

what do you call a tentative explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation.

hypothesis

what do you call something taken to be true for the purpose of argument or investigation; an assumption.

hypothesis

define saturation point

in qualitative research, the point at which you have gotten enough info - when none of the new information tells you something different or the new info is negligible

what is the difference between continuous and discrete variables?

infinite # of gradations between measurements vs. gaps between measurements, i.e. body weight (continuous) vs. # of family members (discrete)

define blinding

information about the test is masked (kept) from the participant or researcher, to reduce or eliminate bias, until after a trial outcome is known. It is understood that bias may be intentional or unconscious, thus no dishonesty is implied by blinding.

define secondary data

information required is already available and needs only to be extracted

what measurement scale is described below: this variable is relative, that is, it plots the position of individuals or responses in relation to one another with respect to the magnitude of the measurement variable. Hence, an interval scale has all the properties of an ordinal scale, and it has a unit of measurement with an arbitrary starting and terminating point.

interval scale

define controlled observations

introducing a stimulus to the group for it to react to and observing the reaction

causal relationship studies attempt to

investigate a causal relationship or association; involve four sets of variables (DICE - dependent, independent, confounding/intervening, extraneous)

define the placebo study

involves two or three groups, depending on whether or not the researcher wants to have a control group.

best technique for sensitive or threatening questions is to

there are two opposite opinions on how such questions should be asked: directly or indirectly

the order of questions is important because...

it affects the quality of information, and the interest and even willingness of a respondent to participate

what are some of the disadvantages of cross-sectional studies

it does not measure change over time

what is the theory of causality about?

it helps you to understand the different sets of variables that cause the change in the dependent variable. It also helps you to determine and isolate the impact of different categories of variables so that you can validly, objectively and accurately ascertain the impact of independent variables.

how can we ensure that extraneous variables have a similar impact on control and experimental groups?

it is assumed that if two groups are comparable, the extent to which the extraneous variable will affect the dependent variable will be similar in both groups. There are two methods that ensure that the control and experimental groups are comparable with one another: - randomisation - matching

define confounding/intervening variable

it links the independent & dependent variable. Outcome only assumed in its presence. Also called the Confounding variable. Sometimes the relationship between an independent and dependent variable cannot be established without the intervention of another variable.

describe administration of a questionnaire in a public place

like a shopping centre, health centre, hospital, school or pub; purpose is usually explained and participation is requested; slightly more time consuming but same advantages as collective administration

define sampling frame

list that identifies every element in the study population from which your sample is selected

what is the difference between a concept and a variable

measurability

describe surveys

measures characteristics & describe population parameters & predict relationships

define interview

method of collecting information from people; essentially a person-to-person interaction either face to face or otherwise; verbal interchange in which an interviewer tries to elicit info, beliefs, or opinions from another person

describe a counterbalance design

multiple interventions and effect of order of participation
R O X1 O X2 O
R O X2 O X1 O

how do you record observations with qualitative research?

narrative and descriptive recording in the researcher's own words

define research question

narrow, focused, and specific intended to generate knowledge to help close the gap

disadvantages of questionnaires

no one to explain the meaning of questions to respondents; the layout of the questionnaire should be easy to read and pleasant to the eye, and the sequence of questions easy to follow

what measurement scale is described below: enables the classification of individuals, objects or responses based on a common/shared property or characteristic. The sequence in which subgroups are listed makes no difference as there is no order relationship among subgroups.

nominal or classificatory scale

define proportionate stratified sampling

the number of elements from each stratum is selected in relation to its proportion in the total population

what are the disadvantages of qualitative observational strategy

the observer may be biased in their observation and therefore the interpretation and conclusions may also be biased; interpretations are subjective; the researcher may forget to record info; part of the recording may be missed

describe collective administration

obtain a captive audience (such as students in a classroom, people attending a function) ensures a high response rate; you can also explain relevance, importance, and clarify any questions

define direct approach

one can be sure the affirmative answer is correct

when should a directional research hypothesis be stated?

only be stated if you have a substantive basis to believe the results would be in the indicated direction

what are these examples of: - Training on environmental barriers and equipment; fall related re-hospitalizations - Use of pdf; attendance and punctuality to work

operational definitions

what do you call it when you convert concepts into variables?

operationalisation

what measurement scale is described below: has all the properties/characteristics of a nominal scale, in addition to its own. Subcategories are arranged in order of the magnitude of the property/characteristic. Also the distance between the subcategories is not equal as there is no quantitative unit of measurement.

ordinal or ranking scale

describe the before and after experimental design

overcomes the problem of retrospectively constructing the 'before' observation by establishing it before the intervention is introduced to the study population.

define open ended questions

possible response categories are not provided in the research instrument - Questionnaire - respondent write down answer in own words - Interview - investigator records the answers either verbatim or in a summary

define convenience sampling

primarily guided by convenience (easy accessibility, geographical proximity, known contacts, ready approval for undertaking the study, or being part of the group)

define snowball sampling

process of sampling using networks; you start with a few individuals, get information from them, and ask them to identify other people

define sampling

process of selecting a few (sample) from a bigger group (sampling population) as the basis for estimating or predicting the prevalence of an unknown piece of information, situation, or outcome regarding the bigger group

define randomisation

process that enables you to achieve an unbiased sample where every unit has an equal and independent chance of selection.

what is the difference between categorical vs. quantitative variables?

provide qualitative differences vs. exist on a continuum (mathematic numbers)

what are the advantages of qualitative observational strategy

provides deeper insight

define observation

purposeful, systematic and selective way of watching and listening to an interaction or phenomenon

expert sampling is more common in quantitative or qualitative research

qualitative

judgement or purposive sampling is more common in quantitative or qualitative research

qualitative

