68 Cards in this Set
The Quantitative Paradigm
1. Reality is empirical: observable and can be objectively measured
2. Research is systematic
3. Controlled conditions: usually a design issue
4. Measurement is usually numeric
5. Constancy of conditions across subjects
6. Researcher is objective and neutral
7. Studies can be replicated and applied to other groups: generalizable
8. Usually deductive from theory

Steps in a Quantitative Study
1. The conceptual phase
- Formulating the problem
- Reviewing the literature
- Defining a conceptual framework
- Formulating questions or hypotheses
2. The design and planning phase
- Selecting a research design
- Developing protocols for the intervention
- Identifying the population
- Designing a sampling plan
- Making a data collection plan
- Ensuring human rights during the research

Questions to ask during the preliminary review of a study
1. What is the study about? (What are the main concepts under investigation?)
2. What are the independent and dependent variables?
3. Do the researchers examine relationships among variables?
4. Is there an intervention?
5. Are the key concepts clearly defined?
6. Who is in the study? How were they recruited?

What is the purpose of the research design?
1. To provide the plan for answering the research question or testing the hypotheses
2. Allows the researcher to apply control so it can be said that the independent variable really changed the dependent variable
3. Rules out alternative explanations
4. The design depends on the question and must be a coherent whole

What is the purpose of the research design? (shortened)
1. Objectivity
2. Accuracy
3. Feasibility
4. Control
5. Constancy
6. Manipulation of the independent variable
7. Random assignment to groups in experimental research

Why should the researcher maximize control?
To rule out extraneous variables or alternative explanations for the outcomes, through:
- Homogeneous sampling
- Constancy in data collection
- Manipulation of the independent variable
- Random assignment to groups

What is internal validity?
Asks whether the study is about what it says it is about:
- Did the independent variable, or something else, cause the change in the dependent variables?

Threats to Internal Validity
1. History - the occurrence of external events that could affect the dependent variables
2. Selection - biases that result from pre-existing differences between groups
3. Maturation - processes that occur within subjects as a result of the passage of time
4. Testing - the effect a pre-test can have on a post-test
5. Mortality - differential drop-out from groups
6. Instrumentation - differential effects of instruments at different times

What is the purpose of experimental design?
1. Most appropriate for cause-and-effect relationships
2. Provides the highest level of evidence
3. Rules out alternative explanations
4. Not all research is amenable to manipulation or random assignment
5. Often difficult to carry out in a field setting

Quasi-experimental design
1. Non-equivalent control group
2. After-only non-equivalent control group
3. Time series design
4. Typically lacks random assignment to groups, making the groups non-equivalent
5. Sometimes lacks a control group

Quasi-Experimental Design
1. Often more feasible in a clinical setting
2. Difficult to make cause-and-effect statements

Other Research Designs
1. Correlational studies
2. Cross-sectional studies
3. Longitudinal studies
4. Cohort studies
5. Survey research
6. Retrospective
7. Prospective

Describing a Population
1. Gender
2. Age
3. Marital status
4. Socioeconomic status
5. Religion
6. Ethnicity
7. Education
8. Diagnosis
9. Co-morbidities
10. Health status
11. Descriptive statistics

Population Descriptors
1. Specify inclusion or eligibility criteria
2. Specify exclusion criteria
3. These affect who will be included in the study
4. Criteria guide sample selection
5. These directly affect the generalizability of the study
6. Frequently excluded: non-English speakers, women, those with co-morbidities

Population
The theoretical population

Target Population
The set of individuals who meet the sampling criteria

Accessible Population
The individuals or elements of the population the researcher can access

Sample
The group included in the study

Sampling
The process of selecting a subset of the designated population to represent the entire population

Representativeness
- The most important characteristic of a sample
- Generally, the larger the sample size, the more likely it is to be representative

External Validity
The degree to which a study's results can be generalized to settings and samples other than the ones being studied

Threats to external validity
- Selection effects (who) - check inclusion/exclusion criteria
- Reactive effects (where/who)
- Measurement effects (how, when, what)

Sampling Strategies
- Probability - every element of a population has an equal chance of being included in the study
- Non-probability

Probability Sampling
- Simple random
- Stratified random
- Cluster or multi-stage
- Systematic

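These strategies can be sketched in a few lines of Python. Everything below is a hypothetical illustration (a frame of 100 element IDs, a 10% sampling fraction, arbitrary strata), not data from any study:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
population = list(range(1, 101))  # hypothetical sampling frame of 100 element IDs

# Simple random: every element has an equal chance of selection.
simple = random.sample(population, k=10)

# Systematic: every k-th element after a random start.
k = len(population) // 10
systematic = population[random.randrange(k)::k]

# Stratified random: sample proportionally within each stratum
# (the strata here are arbitrary halves of the frame).
strata = {"stratum_a": population[:40], "stratum_b": population[40:]}
stratified = [element for group in strata.values()
              for element in random.sample(group, k=len(group) // 10)]

print(len(simple), len(systematic), len(stratified))  # 10 10 10
```

Cluster or multi-stage sampling repeats the same idea at two levels: randomly select clusters first, then randomly sample elements within the selected clusters.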
Non-probability Sampling
- Convenience sampling
- Quota sampling
- Purposive sampling

Sample Size
Depends on:
1. Type of design used
2. Type of sampling procedure
3. Power analysis
4. Degree of precision of measurement
5. Heterogeneity of the attributes
6. Relative frequency with which the phenomenon occurs in the population
7. Longitudinal designs: account for drop-out

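Item 3, the power analysis, comes down to a standard formula. A minimal sketch for comparing two group means, assuming alpha = .05 two-sided (z = 1.96) and power = .80 (z = 0.84); the effect size passed in is a hypothetical Cohen's d:

```python
from math import ceil

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group for a two-group
    comparison of means: n = 2 * (z_alpha + z_beta)^2 / d^2."""
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(0.5)          # medium effect -> 63 per group
print(n)

# Item 7: longitudinal designs inflate n for expected drop-out (here 20%).
print(ceil(n / (1 - 0.20)))   # 79 per group
```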
Critiquing the Sample
1. Look at the descriptive statistics that describe the sample
2. Does the author tell you how the sample resembles the population?
3. Check the inclusion/exclusion criteria to see if the sample is like the people you see as a clinician
4. Is the sample size appropriate?

Ethical Considerations
1. In most cases, people are data sources; the research must go through IRB review
2. Subjects are always volunteers
3. They may refuse to participate or drop out
4. They must give informed consent
5. They must understand the cost/benefit ratio for themselves
6. Privacy and confidentiality must be safeguarded
7. Vulnerable populations

Data Collection Strategies
1. Physiological or biological measurements
2. Observational methods
3. Interviews - open-ended or closed-ended
4. Questionnaires
5. Records or available data - hospital records, historical documents, audio or video tapes

Physiological Measurements - Advantages/Disadvantages
1. Objective, precise, and sensitive
2. Some are expensive or require specialized training and knowledge
3. Using them may change the variable being measured
4. May be affected by the environment

Observation
1. Determines how subjects behave under certain conditions
2. Must be objective and systematic
3. Consistent with the study's objectives and theoretical framework
4. Data collectors must be trained
5. All observations are documented

Observational Methods
1. Concealment without intervention
2. Concealment with intervention
3. No concealment without intervention
4. No concealment with intervention

Scientific Observation - Advantages/Disadvantages
1. Describes what people really do, not what they say they do
2. Reactivity and ethical concerns may be a problem
3. The observer may be biased

Measurement Tools
1. Open-ended questions
2. Closed-ended questions - Likert-type scales, true/false

Example of a Likert Scale Instrument
Please read the statement and decide how much of the time the statement describes how you have been feeling in the past several days.
A. A little of the time
B. Some of the time
C. A good part of the time
D. Most of the time

Interviews & Questionnaires - Advantages/Disadvantages
1. The response rate is higher with interviews
2. Interviews allow richer and more complex data to be collected
3. Interviewers can clarify misunderstood questions
4. Questionnaires are less expensive to administer and allow for complete anonymity

Importance of Accurate Measures
1. Instruments must accurately reflect the concepts being measured or the study results will be invalid
2. The appropriateness of the instruments influences the internal and external validity of a study

Measurement Error - Error Variance
The extent of variability in test scores that is attributable to error rather than a true measure of behavior

Validity
The accuracy of the measure in reflecting the concept it is supposed to measure; the extent to which, and how well, an instrument measures a concept.
Subtypes: content validity, criterion-related validity, construct validity

Reliability
The stability and consistency of the measuring instrument over time.
A measure can be reliable without being valid, but it cannot be valid without being reliable.

Content Validity
- The content of the measure is justified by other evidence, e.g. the literature
- The entire range or universe of the construct is measured
- Usually evaluated and scored by experts in the content area
- A CVI (content validity index) of .80 or more is desirable

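The CVI itself is just a proportion, so it is easy to sketch. Assume (hypothetically) five experts rating each item's relevance on a 1-4 scale, where a rating of 3 or 4 counts as content-relevant:

```python
# Hypothetical ratings: five experts score each item's relevance from 1-4.
ratings = {
    "item1": [4, 4, 3, 4, 3],
    "item2": [4, 3, 2, 4, 3],
    "item3": [2, 3, 2, 4, 2],
}

def item_cvi(scores):
    """Proportion of experts rating the item 3 or 4."""
    return sum(1 for s in scores if s >= 3) / len(scores)

cvis = {item: item_cvi(scores) for item, scores in ratings.items()}
scale_cvi = sum(cvis.values()) / len(cvis)  # scale CVI = average of item CVIs

print(cvis)                  # item1 -> 1.0, item2 -> 0.8, item3 -> 0.4
print(round(scale_cvi, 2))   # 0.73 - below .80, so item3 should be revised or dropped
```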
Face Validity
1. A subtype of content validity
2. Just on its face, the instrument appears to be a good measure of the concept: "intuitively arrived at through inspection"
3. E.g. concept = dyspnea level; measure = verbal rating scale, "rate your dyspnea from 0 to 10"

Criterion-Related Validity
1. The relationship between the subject's performance on the measurement tool and actual behavior
2. The ability of an instrument to measure a criterion (usually set by the researchers)
Subtypes: concurrent validity, predictive validity

Concurrent Validity
1. Correspondence of one measure of a phenomenon with another measure of the same construct (administered at the same time)
2. Two tools are used to measure the same concept, then a correlation analysis is performed. The tool already demonstrated to be valid is the "gold standard" with which the other measure must correlate.

Predictive Validity
The ability of one measure to predict another, future measure of the same or a similar concept

Construct Validity
The extent to which a test measures a theoretical construct or trait; attempts to validate the body of theory underlying the measurement and the hypothesized relationships

Ways of arriving at construct validity
1. Hypothesis-testing
2. Convergent and divergent
3. Contrasted-groups
4. Factor analysis

Reliability
The homogeneity, equivalence, and stability of a measure over time and subjects. The instrument yields the same results over repeated measures and subjects; expressed as a correlation coefficient.

Correlation Coefficient
1. Degree of agreement between times and subjects
2. The reliability coefficient expresses the relationship between error variance, true variance, and the observed score
3. The higher the reliability coefficient, the lower the error variance. Hence, the higher the coefficient, the more reliable the tool!

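In classical test theory, item 2 reduces to one line of arithmetic: observed variance is true variance plus error variance, and the reliability coefficient is the true share of the total. The variance figures below are hypothetical:

```python
# Hypothetical variance components of a set of test scores.
true_var, error_var = 8.0, 2.0

observed_var = true_var + error_var    # observed = true + error
reliability = true_var / observed_var  # share of variance that is "true"

print(reliability)  # 0.8 - lower error variance would push this toward 1.0
```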
Stability
1. The same results are obtained over repeated administrations of the instrument
2. Test-retest reliability
3. Parallel, equivalent, or alternate forms

Test-Retest Reliability
1. The administration of the same instrument to the same subjects two or more times (under similar conditions - not before and after treatment)
2. Scores are correlated and expressed as a Pearson r (usually .70 is acceptable)

Parallel or Alternate Forms Reliability
1. Parallel or alternate forms of a test are administered to the same individuals and the scores are correlated
2. This is desirable when the researcher believes that repeated administration will result in "test-wiseness"

Homogeneity
- Item-to-total correlation
- Split-half reliability
- Kuder-Richardson coefficient
- Cronbach's alpha

Kuder-Richardson Coefficient
1. An estimate of homogeneity when items have a dichotomous response, e.g. yes/no items
2. Should be computed for a test on initial reliability testing, and recomputed for the actual sample
3. Based on the consistency of responses to all of the items of a single form of the test

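The usual computation here is the KR-20 formula. A minimal sketch on hypothetical dichotomous data (1 = yes/correct, 0 = no/incorrect), with rows as subjects and columns as items:

```python
from statistics import pvariance

# Hypothetical responses: 6 subjects x 5 dichotomous items.
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
]

k = len(responses[0])
totals = [sum(row) for row in responses]

# For each item: p = proportion answering 1, q = 1 - p.
pq = []
for item_scores in zip(*responses):
    p = sum(item_scores) / len(item_scores)
    pq.append(p * (1 - p))

# KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of the total scores)
kr20 = (k / (k - 1)) * (1 - sum(pq) / pvariance(totals))
print(round(kr20, 2))
```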
Cronbach's Alpha
1. Used with Likert scale or linear graphic formats
2. Compares the consistency of responses across all items on the scale
3. May need to be computed for each sample

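Cronbach's alpha compares the summed item variances with the variance of the total scores. A minimal sketch on hypothetical 1-5 Likert responses (rows = subjects, columns = items):

```python
from statistics import pvariance

# Hypothetical responses: 5 subjects x 4 Likert items (1-5).
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])
item_vars = [pvariance(item) for item in zip(*responses)]
total_var = pvariance([sum(row) for row in responses])

# alpha = (k / (k - 1)) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # high here because the items move together across subjects
```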
Equivalence
1. The consistency of agreement of observers using the same measure, or among alternate forms of a tool
2. Parallel or alternate forms (described under stability)
3. Interrater reliability

Interrater Reliability
|
*Used with observational data
- Concordance between two or more observers scores on the same event of phenomemon |
|
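Concordance can be sketched as simple percent agreement, plus Cohen's kappa, which corrects that agreement for chance. The codes below are hypothetical observations by two raters of the same ten events:

```python
from collections import Counter

# Hypothetical codes from two observers rating the same 10 events.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # percent agreement

# Chance agreement: probability both raters pick the same code by accident.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[code] * counts_b[code] for code in counts_a) / n ** 2

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(round(observed, 2), round(kappa, 2))  # 0.8 0.58
```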
Critiquing
1. Were reliability and validity data presented, and are they adequate?
2. Was the appropriate method used?
3. Was the reliability recalculated for the sample?
4. Are limitations of the tool discussed - race, age, number of siblings, etc.?

Constructivism
The basis of naturalistic or qualitative research (induction)

Positivism
The basis of empirical-analytical or quantitative research (deduction)

Paradigm / Worldview
From the Greek word for "pattern"; the relationship of ideas to one another

Epistemology
|
What we know as the "truth" and how we know it - it is the branch of philosophy concerned with the nature and scope (limitations) or knowledge- do you believe truth is evolving, changing, etc?
|
|
Ontology
The study of existence: what is real versus fiction, i.e. the nature of reality. Are you comfortable with differences? What is real and important to one group may not be to another.

Context
The environment where something occurs; can refer to the clinical setting or the cultural context.
- Helps explain a condition
- Influences its meaning
- Helps to determine its interpretation

Paradigms of Knowledge - Deductive
1. Truth/objective
2. One reality
3. Generalizability of findings
4. Prediction/control
5. Experimental/controlled
6. Subjects

Paradigms of Knowledge - Inductive
1. Truth/subjective
2. Multiple realities
3. Rich detail of context; application of findings
4. Understanding
5. Transformative
6. Participants/informants

Grounded Theory
Developed by Glaser and Strauss, based on the sociologic tradition of the Chicago School of Symbolic Interactionism

Clinical State, Setting, and Circumstances
1. Patient's clinical state - acuity and history of illness, symptoms, cognitive impairment, etc.
2. Clinical setting - inpatient, outpatient, rehabilitation, nursing home, community-based; the context of seeking health care; the role/relationship between health care provider, client, and family; urban vs. rural clinical setting
3. Clinical circumstances - severity of illness, chronology of illness, functional impairment, disability, etc.