Research questions should:
- Be clear: neither too broad nor too narrow
- Be researchable
- Relate to established theory, to prior research, and to each other
- Allow the researcher to make a contribution to existing knowledge
Idiographic Designs
Explanations involve rich descriptions of a person or a group.
Explanations are not meant to apply to persons or groups who were not part of the study.
Nomothetic Designs
Explanations involve cause and effect, expressed in terms of general laws and principles
Developed through research on particular subjects (groups, regions, etc.) and extrapolated to larger populations outside the study
Variables
Characteristics or attributes of data that vary or change
Causality in social research is directly related to variables
Three criteria for evaluating social research
Reliability
Replicability
Validity
Replicability
- Results remain the same when others repeat all or part of a study
- The procedures used to conduct the research are sound
Reliability
- Results remain the same each time the same measurement technique is used on the same subject (assuming that what is being measured has not changed)
- Results aren't influenced by the researcher, the location, the timing, etc.
Validity
•There is integrity to the conclusions
Measurement validity (or construct validity)
Concerned with whether a measure actually captures the concept it is intended to measure.
Ex: Is the number of deaths recorded by Iraqi vital statistics an accurate measure of deaths caused by the war?
Internal validity is concerned with issue of whether causation has been established by a particular study.
Lincoln and Guba’s standard for qualitative research
Trustworthiness:
- Credibility (parallels internal validity)
- Transferability (parallels external validity)
- Dependability (parallels reliability)
- Confirmability (parallels objectivity)
Two kinds of experiments
- Field experiments- conducted in real-life surroundings
- Laboratory experiments- take place in artificial environments
- controls the research environment
- easier to randomly assign research subjects (each subject has an equal chance of being placed in any group)
- enhanced internal validity as a result
- easier to replicate
Experimental or Treatment group
receives a treatment or manipulation of some kind
Know "placebo" (an inert treatment given so participants cannot tell whether they received the real one) and "double-blind experiment" (neither participants nor researchers know who is in which group)
Control group
does not get the treatment or manipulation
Random assignment
participants are placed in the experimental or control group using a random method
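The logic of random assignment can be sketched in a few lines of Python; the function name and the shuffle-then-split approach are illustrative, not from the notes:

```python
import random

def randomly_assign(participants, seed=None):
    """Split participants into experimental and control groups at random.

    Shuffling first means every participant has an equal chance of
    landing in either group, which is the point of random assignment.
    """
    rng = random.Random(seed)          # seed only to make the example repeatable
    shuffled = list(participants)      # copy: leave the original list untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)
```

With an even number of participants the two groups come out the same size; any later difference between them can then be attributed to the treatment rather than to how people were chosen.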
Pre-test
Measurement of the dependent variable before the experimental manipulation
Post-test
Measurement of the dependent variable after the experimental manipulation
Quasi- or ‘natural’ experiments
Naturally occurring phenomena or changes introduced by people who are not researchers result in experiment-like conditions.
- Differ from true experiments: internal validity is harder to establish
Cross-sectional designs
Involve taking observations at one point in time.
Do not include manipulation of the independent variable- no ‘treatment’
Examples: questionnaires, structured interviews, etc.
Two or more variables are measured in order to detect patterns of association
Longitudinal designs
Cases examined at a particular time (T1), and again at a later time or times (T2, T3, etc.)
Provides information about the time-order of changes in certain variables.
Helps establish direction of causation
Panel study (Longitudinal designs)
The same people, households, or organizations are studied at each time point.
Drawbacks of longitudinal designs
Attrition over time
May be difficult to determine when subsequent waves of the study should be conducted.
Panel conditioning: people’s attitudes and behaviours may change as a result of participating in a panel
Case Studies
- A basic case study involves an in-depth study of a single case.
- A single case can be a person, family, organization, event, etc.
- Can involve qualitative and/or quantitative analysis
Two types of concept definitions
Nominal: describes concepts in words
Operational: describes how the concept is to be measured
Critical case study
Illustrates the conditions under which a certain hypothesis does or does not hold. Ex: studying a person for whom certain counselling techniques are successful
What are Indicators?
- Indicators are observable measures that stand for a concept; they suggest whether a link exists and how strong it may be
- Usually, one indicator for each concept is adequate
- Sometimes it's advantageous to use more than one indicator of a concept: this reduces the likelihood of misclassification when question wording is vague or misunderstood
- Multiple indicators also give access to a wider range of issues related to the concept
What is post-materialism?
People who grow up in material scarcity tend, over their lifetimes, to value material security, whereas people who grow up in affluence come to place more value on non-material (post-materialist) goals such as protecting the environment.
Face validity
Established if, at first glance, the measure appears to be valid
When Assessing concepts, reliability refers to…
- Stability: does the measure give the same results over time?
- Internal reliability: are multiple measures administered in one sitting consistent with each other?
- Inter-observer consistency: all observers should classify behaviour or attitudes in the same way
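Internal reliability is commonly summarized with Cronbach's alpha, which compares the variance of individual items to the variance of respondents' total scores. A minimal sketch; the function and the formula layout are my own, not from the notes:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item measure.

    `items` is a list of item-score lists, one per item, all the same length
    (one score per respondent).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):                  # sample variance (n - 1 denominator)
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Values near 1 indicate the items "hang together"; 0.8 is a common rule-of-thumb threshold for acceptable internal reliability.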
Concurrent validity
Established if the measure correlates with some criterion thought to be relevant to the concept.
Construct validity
Established if the concepts relate to each other in a way that is consistent with the researcher's theory.
Convergent validity
Established if a measure of a concept correlates with a second measure of the same concept that uses a different measurement technique
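Whether two measures of a concept "correlate" is usually checked with Pearson's r. A self-contained sketch, using made-up toy data rather than results from any real study:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores.

    r ranges from -1 (perfect inverse) through 0 (no linear
    association) to +1 (perfect positive association).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A high r between, say, a questionnaire measure and an observational measure of the same concept is evidence of convergent validity.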
Measurement
Data are used to understand or quantify social phenomena, concepts, and their interrelations.
Establishing causality
Researchers want to know what causes social phenomena.
Ex: prejudice, crime, class, conflict, etc.
Generalization of findings
To those not studied
Ex: to generate laws of social life; representative samples are needed for this.
Criticisms of quantitative research.
- People and social institutions are treated as if they were part of "the world of nature" (We can't research humans the way we research nature: people have free will and thought. For example, people who know they are part of a study may act differently.)
- Measurement process produces artificial and false sense of precision and accuracy
- Disjuncture between research and everyday life - can a survey really “get at” people’s real lives?
- Analysis of relationships between variables ignores people’s everyday experiences and how they are defined and interpreted.
- Explanations for findings may not address the perceptions of the people to whom the findings purportedly pertain
- Researchers tend to assume an objectivist view of social reality
Replication (of a study/studies)
if same methods used, provides check for biases and routine errors
Interviewer effects
•The characteristics of the interviewer may influence the responses given - sex, social class, and race of interviewer are key reactive issues.
Inter-interviewer variability
Lack of consistency in asking questions or recording answers between different interviewers.
Questionnaires
Basically structured interviews without the interviewer.
Ex: Canada Census 2011
Compared to interviews, these are (i) shorter and (ii) simpler (more closed questions)
Advantages: cheaper, quicker, convenient. No interviewer effects. Social desirability effect is reduced
Types of structured interviews:
Face to Face
- preferred method in academic research.
Social desirability effect
Insincere responses given to questions about sensitive topics such as racism or homosexuality.
Responses depend on what is socially accepted: people are less likely to admit to something that is frowned upon in society at the time.
For example, if someone is asked "Would you be okay with a family of X race moving in beside you?", some people might say yes when truthfully the answer is no.
Response sets
Respondents are not motivated to provide genuine responses.
Acquiescence
Trying to please the researcher:
- the respondent agrees just to be 'cooperative'
- that's why instruments with multi-item measures are designed with items that take logically opposite positions
Closed questions
Present respondent with a set of answers from which to choose
Feminist Critique
•Structured interviews and questionnaires have been argued to be exploitative: they involve an asymmetrical power relationship between the researcher and the respondent
•Shift in thinking: Increased attention given to the rights of research respondents in recent years, e.g., privacy rights, the right to end the interview at any time, etc.
- Some findings arising out of these research methods are consistent with feminist ideals, e.g., findings on sexual assault, sexual harassment, etc.
Purpose of a theory
To assess the adequacy of an explanation
To gather information
To understand social problems
To explore personal experiences
Open questions
Response decisions are left completely to the respondent
Difficult to convert answers to numerical data
Tend to be used in qualitative research
4 types of research design
Experiments
Cross-sectional design
Longitudinal designs
Case studies
What is a research design?
Framework for collection and analysis of data
Must meet certain criteria to produce useful results
Research ethics must be considered when choosing a design
What is a theory?
An explanation of observed regularities or patterns.
Predetermined theory
Brings one to look at social interaction through a particular framework, in contrast to observing social interaction without a specific theoretical base.
Theories of the middle range
Limited in scope and can be tested directly
e.g., Durkheim’s theory of suicide
Grand theories
General and abstract, cannot be tested directly
e.g., structural functionalism, symbolic interactionism
Deduction
The classic model of social scientific inquiry: a process that begins with a theory or explanation for something, which is then tested against data.
The process of deduction
1. Theory → 2. Hypothesis → 3. Data collection → 4. Findings → 5. Hypothesis confirmed or rejected → 6. Revision of theory → (back to step 2)
Induction
Process where a theory or explanation derives from already collected/examined data.
grounded theory
theory that derives from qualitative data.
What is Epistemology?
Notions of what can be known and how knowledge can be acquired
Positivism Epistemology
Social scientists should use the same methods of inquiry that are used in natural sciences
- A deductive approach (hypothesis testing) can be used to acquire knowledge.
- Induction may also provide knowledge in the form of generalizations or laws.
Interpretivism Epistemology
Rejects positivism; social (human) sciences can’t be studied the way natural sciences can
- Researchers should use inductive methods to try to grasp subjective meanings of people’s actions.
- Actions should be viewed from point of view of social actors, i.e., the people studied.
- Researchers should develop an empathetic understanding of the people studied
Empiricism (positivist epistemology)
knowledge must be based on information gathered through the senses.
Ontology
What we believe the world to be. What is reality?
Ontology - Objectivist View
Social reality is external and fixed; we have little or no control over it.
Ontology - Constructionist View
We create our social worlds through our actions, in particular through negotiation.
Ontology - Soft Constructivist View
There is an objective social reality, however, many of our ideas and perceptions are false because they have been constructed to justify some form of domination rather than the objective social reality.
Quantitative Research
Uses numbers and statistics in the collection and analysis of data.
- mainly deductive; testing of theory
- Natural science model; positivism
- Objectivism
Qualitative Research
Uses mainly words and other non-numeric symbols in the collection and analysis of data.
- Mainly inductive; generation of theory
- Interpretivism
- Constructionism
6 Key influences on Social Research
- Theory
- Epistemology
- Ontology
- Values
- Politics
- Practical considerations
How do values bias research?
Values can affect the choice of research area, the formulation of research questions, the choice of method, data collection and analysis, the interpretation of data, and the conclusions drawn.
Three views of values in research.
- Research should be value-free.
- Research cannot be value-free. But researchers should be open and explicit about their values.
- Researchers should use their values to direct and interpret their investigations: value commitment is a good thing for researchers to have.
Politics in social research
People often have political agendas ('taking sides'). Politics also enters through:
- funding
- gaining access
- public institutions ('the research bargain')
- publication of findings
Cross-Sectional Design Weaknesses
- Internal validity: direction of causation is hard to establish (endogeneity)
- External validity: are the cases randomly selected from the population of interest?
Ex: the 1948 "Dewey Defeats Truman" headline, the result of polling with an unrepresentative sample.
Cross-Sectional Design Strengths
Can examine effect of variables that cannot be manipulated in experiments–age, gender, ethnicity, culture, social class, etc.
Cohort study (Longitudinal Designs)
people sharing the same experience are studied at different times, but different people may be studied at each time
Extreme (or unique) Case Study
Illustrates unusual cases, which help in understanding the more common ones.
e.g., understanding the causes and sources of religiosity in the United States helps us understand more about modernity and secularism
Revelatory Case Study
Examines a case or context never before studied.
e.g., the study of a particular historical figure may be enhanced when documents are declassified or enter the public domain, such as the diaries of former Prime Minister Mackenzie King
Concept
ideas or mental representations of things
- Concepts may be independent (cause) or dependent (effect) variables
e.g., ’crime’, ’gender’, ’alienation’, ’love’, ’life satisfaction’, etc.
Coding
Transforming data into numbers
e.g., in measuring life satisfaction, respondents who say they are ’very satisfied’ may be given a code of ’1’, which is then recorded in a file.
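The coding step can be as simple as a lookup table. A sketch in Python; the category labels and code numbers below are hypothetical, chosen only to match the life-satisfaction example:

```python
# Hypothetical coding scheme for a life-satisfaction question.
LIFE_SATISFACTION_CODES = {
    "very satisfied": 1,
    "somewhat satisfied": 2,
    "somewhat dissatisfied": 3,
    "very dissatisfied": 4,
}

def code_responses(responses):
    """Map verbal answers to numeric codes; None marks an uncodable answer."""
    return [LIFE_SATISFACTION_CODES.get(r.strip().lower()) for r in responses]
```

The numeric codes, not the original words, are what get recorded in the data file and analyzed.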
Why measure concepts? (via coding)
- To delineate fine differences between cases
- To provide a consistent device for making such distinctions
- To allow more precise estimates of the relationships between concepts
Intra-interviewer variability
an interviewer is not consistent in asking questions or recording answers
Types of structured interviews:
Telephone Interviews
Strengths: cheaper/quicker to administer; reduced bias from ‘interviewer effect’
Weaknesses: sample bias; hard to sustain for long periods; cannot verify that you are actually talking to the intended person.
Types of structured interviews:
Computer-assisted interviewing (CAPI and CATI)
- increases efficiency and decreases costs.
Types of structured interviews:
Online personal interview
Advantages: the quality of face-to-face interviewing with the efficiency and economy of the Internet.
Disadvantages: fairly high dropout rate
Open question Advantages & Disadvantages
Respondents can answer in their own terms
Allow for unusual, unanticipated responses
Responses may reveal respondents' knowledge and be more genuine
Time-consuming to record answers
Answers have to be coded
Closed question Advantages & Disadvantages
Easy to process answers
Standardization allows comparison of answers
Reduced bias in recording/coding answers
Loss of spontaneity and authenticity because relevant answers may be excluded from the choices provided
Difficult to make forced-choice answers exhaustive
Designing Questions: things to avoid
- ambiguous terms
- long questions (you'll just lose people)
- "double-barrelled" questions (questions that are actually two questions in one)
- very general questions
- leading questions
- questions that use negatives
- a "don't know" option
Rules for Designing Questions
Minimize technical terms
Be sure that respondents have the knowledge needed to answer the question
Ensure symmetry between a closed question and its answers
Ensure the answers are balanced
Do not rely on respondents' memory for details of what they have done or observed
Vignette questions
Presenting people with one or more scenarios and asking them how they would respond
Anchor the choices in a realistic situation.
Creates distance between question and respondent.
Pilot Studies
Used to test whether individual items or the instrument as a whole operates well
Used with open questions to generate closed questions for subsequent studies
Provide interviewers with experience in administering the instrument
Can identify questions that are embarrassing, uninteresting, etc.
Can identify questions that are difficult to understand