10 April 2012

Phenomenological Research

After reading Clark Moustakas's book, it is time to jot down a few words about the phenomenological research method.

First of all, phenomenological research has a different philosophical foundation. From the phenomenological perspective, neither the "real world" alone nor the world of "ideas" alone explains epistemological understanding; it is rooted in both.

Simply put, phenomenological research examines a phenomenon as it is perceived by the actors in a situation. Therefore it can be summarized as the study of experiences. In this way it favors lived experiences over measured outcomes. Phenomenological research aims at arriving at the "essence of the experience" by applying qualitative research methods.

Let me note some important terms of phenomenological research, with rough one-line glosses:
  • noema: that which is experienced, the object of the experience
  • noesis: the act of experiencing, the way the experience is perceived and interpreted
  • epoche: bracketing, i.e. setting aside the researcher's prejudgments and presuppositions
  • intentionality: the idea that consciousness is always directed toward an object
  • perception: regarded as the primary source of knowledge about the experience

Also check: http://www.sld.demon.co.uk/resmethy.pdf

18 March 2012

Nitel Araştırma Yöntemleri

I want to jot down the names of the books that I've read throughout the qualification process. One of them is "Nitel Araştırma Yöntemleri" (Qualitative Research Methods), written by two esteemed professors from METU-EDS. The book covers qualitative research from top to bottom. It is essential for those who want to learn how and why to conduct qualitative research.

Yıldırım, A., & Şimşek, H. (2006). Sosyal Bilimlerde Nitel Araştırma Yöntemleri (6th ed.). Ankara: Seçkin Yayınevi. 

14 March 2012

How to increase statistical power

This is an important question if you're conducting a quantitative study. How do you increase statistical power? First, let's explain statistical power, then examine the possible ways to increase it.

Statistical power is the probability that the test will reject the null hypothesis when the null hypothesis is actually false (i.e. the probability of not committing a Type II error, or making a false negative decision). The probability of a Type II error occurring is referred to as the false negative rate (β). Therefore power is equal to 1 − β, which is also known as the sensitivity.[*] 
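
To make the arithmetic concrete: if a test has β = 0.20, i.e. a 20% chance of missing a real effect, its power is 1 − 0.20 = 0.80, which is the conventional target value recommended in many textbooks.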

Increasing the sample size: This is the answer that immediately comes to mind when talking about increasing statistical power. The test is more likely to detect the effect, if any, when the sample size is increased. 30 is given as the minimum number in many statistics books. I remember "the more the better" was our approach when conducting research for my master's thesis. Increasing the sample size has a basic mathematical explanation. When estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ², the standard error of the sample mean is σ/√n. Therefore increasing n provides a more accurate estimate of the population mean.
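
A minimal sketch of this relationship in Python (σ = 15 and the sample sizes are purely illustrative values of my own, not numbers from any study mentioned here):

    import math

    sigma = 15.0  # assumed population standard deviation (illustrative)

    # The standard error of the sample mean shrinks with the square root of n
    for n in (30, 60, 120, 240):
        se = sigma / math.sqrt(n)
        print(f"n = {n:4d}  standard error = {se:.2f}")

Note that doubling n does not halve the standard error; you need four times the sample to cut it in half, which is why large gains in power can get expensive quickly.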

Changing the α level: In statistics, different α levels are used, such as 0.05, 0.01, or 0.1. One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion. On the other hand, the α level determines the likelihood of committing a Type I error (declaring a difference that does not exist). Therefore one should be aware of this risk and evaluate this option accordingly.

Increasing the actual effect size: A greater effect size means greater power, since larger effects are easier to detect. This is not a statistically modifiable condition; however, selecting more extreme treatments or experimental conditions might work.
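
To see how the three levers above (sample size, α level, and effect size) interact, here is a minimal sketch using the usual normal approximation for a two-sided one-sample z-test; the helper approx_power and all the numbers are my own illustration, not values taken from the sources cited in this post:

    from scipy.stats import norm

    def approx_power(d, n, alpha=0.05):
        # Approximate power of a two-sided one-sample z-test for a
        # standardized effect size d and sample size n (the far tail is ignored)
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(d * n ** 0.5 - z_crit)

    # Power grows with a less conservative alpha and with a larger effect size
    for alpha in (0.01, 0.05, 0.10):
        for d in (0.2, 0.5, 0.8):  # Cohen's small / medium / large effects
            print(f"alpha = {alpha:.2f}  d = {d:.1f}  n = 30  power = {approx_power(d, 30, alpha):.2f}")

Holding d and α fixed and increasing n in the same function pushes the power toward 1, which ties this sketch back to the sample size point above.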

Developing a strong research design: This is about the whole research process. Measurement, sample selection, and matching processes should be conducted carefully to ensure adequate power.

Further analysis: In medicine, for example, tests are often designed in such a way that no false negatives (Type II errors) will be produced. But this inevitably raises the risk of obtaining a false positive (a Type I error). The rationale is that it is better to tell a healthy patient "we may have found something - let's test further," than to tell a diseased patient "all is well."[*]


05 March 2012

Validity Reliability Issues

"Research for its own sake." Is it really for its own sake? Why? How? How not? These questions are closely related to the well-known twin brothers in the area of research? Validity & reliability. These highly debated terms should be clarified at first stage: Lets me jot down the definitions from the book "Case Study Research" Lin (2003). Kidder and Judd defines the terms as below (as cited in Lin, 2003, p. 34). Other, additional definitions are also welcome.

Construct validity: Establishing correct operational measures for the concepts being studied.
Internal validity: (For explanatory or causal studies only) establishing a causal relationship whereby certain conditions are shown to lead to other conditions, as distinguished from spurious relationships.
External validity: Establishing the domain to which a study's findings can be generalized.
Reliability: Demonstrating that the operations of a study, such as the data collection procedures, can be repeated with the same results.

In fact, validity and reliability are evaluated differently for qualitative and quantitative research. Although the idea behind the terms is similar, how they are perceived changes depending on the type of research design. Therefore, to understand these issues, there is a need to read about them from multiple sources. One is provided below.


A nice resource which compares and contrasts the meanings of validity and reliability for both qualitative and quantitative research: http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf


02 March 2012

Ethnographic Design


Another qualitative research design is ethnographic research. According to Creswell (2001), ethnographic research examines a “culture-sharing group's shared patterns of behavior, beliefs and language” (p. 436).
Ethnography takes its roots from anthropology; however, it has been re-shaped for use in educational research. The original form of ethnography is called realist ethnography. Creswell adds two more types of ethnography, which are the case study and critical ethnography. Let me explain those three types in brief:

  • Realist ethnography: The researcher examines a person, a group of people, or a situation and reports it objectively, in the third person. An example realist study examines “the process a school selection committee experiences as they interview candidates.” The researcher is more like a reporter reporting the facts.
  • Case studies: In this type, researchers aim at an in-depth exploration of an actual case (an activity, an event, a process, or individuals). The case is distinguished from others by time, place, or other boundaries. But how should this special case be selected? The answer to this question is given by the aim of the study. A case might be selected because it is “unusual”, or it might be selected because it illustrates a broader phenomenon. Or the case/cases might illustrate an issue. However, case study as a form of ethnographic research makes me question the difference between “case study as a research design” and “ethnographic design”. In this research report the writers explain their point of view regarding this question. In fact, the answer to this question can only be given by discussing the world view behind the research design. This question should be elaborated in another discussion.
  • Critical ethnographies: This type of ethnographic study aims at advocating against inequality; that's why the critical researcher is not neutral, rather he/she is political. Critical ethnographers seek to create a literal dialogue with the participants (minorities, people from different social classes).

To sum up, I should note that in ethnographic research there is a “cultural theme” and a “culture-sharing group”. Of course, conducting a study with an ethnographic design is not that easy. It has lots of details and important points. But they’re beyond the scope of this text!

23 February 2012

Grounded Theory Research

Grounded theory is a research design which implements qualitative research methods. According to Creswell (2005), it is used when you need "a broad theory or explanation of a process" (p.396).
When little is known about a process, or the existing theories do not explain it, grounded theory research is used. The aim in this type of study is to generate a theory. That is quite assertive, isn't it?
However, conducting grounded theory research is not that easy. First of all, the researcher who wishes to implement such a study should decide which type of grounded theory research to conduct. Creswell (2005) groups the types of grounded theory research under three headings: the systematic design, the emerging design, and the constructivist design. The systematic design is the most "systematic" one, with a positivist theory behind it, whereas the constructivist design puts more emphasis on the beliefs and views of the participants, with a postmodern point of view. In fact, for young researchers with no experience at all, the systematic design is recommended.

In the systematic design, the structure of the research is clearly explained. In addition, a diagram showing the theory is presented. In the emerging design, on the other hand, a more flexible structure is put forward. And lastly, in the constructivist design, any kind of prescribed theory formation is not welcomed. On the continuum of types of grounded theory research, the systematic design can be placed in a more "quantitative-like" position. Nevertheless, all three aim at the formation of a theory or an explanation of an unexplained process, action, or interaction.

Creswell (2005) lists the following as the key characteristics of grounded theory research:

  • Process approach
  • Theoretical sampling (sampling is intentional and focused on the generation of a theory)
  • Constant comparative data analysis
  • A core category
  • Theory generation
  • Memos
Example purpose statement: "to explore the role of art therapy in individuals' recovery from chemical addiction."

18 February 2012

Mixed Method


During my master's, I had already read this book. I have a copy with some notes in it. I am revising it these days. I should admit that, although one can learn about a research design by reading, no one can fully grasp the knowledge necessary for application without actually conducting research. But how?

In brief, the book deals with mixed methods as a research design and categorizes it into four designs:
  • triangulation design
  • embedded design
  • explanatory design 
  • exploratory design 

17 February 2012

Methodology? Research design? Research method?


Methodology: Philosophical framework and the fundamental assumptions of research (van Manen, 1990)

Research Design: The plan of action that links the philosophical assumptions to specific methods (Creswell, 2003; Crotty, 1998)

Research Method: Techniques of data collection and analysis such as quantitative standardized instrument or qualitative theme analysis of text data (Creswell, 2003; van Manen, 1990)

Developmental Research

Reeves, T. C. (2000). Enhancing the worth of instructional technology research through “design experiments” and other development research strategies. Paper presented at International Perspectives on Instructional Technology Research for the 21st Century, New Orleans, LA, USA.

The writer makes a case to researchers for design experiments, in other words developmental research. It is a nice paper touching upon realities of instructional technology research. Why there is so much "invaluable research" and "research for its own sake" is explained clearly. It also deals with the theory vs. practice issue. Developmental research is offered as a kind of solution to this well-known problem.

RE-START


I (again) started to study for the qualification, this time as a mother. My son, who is just three months old, is sleeping in his crib next door. I hope this time, in May 2012, I will pass this exam and start my doctoral dissertation studies. I sometimes question myself about my doctorate studies. Anyway, this doesn't help. Let's get started. Re-started.