True Experimental Design
Posted by henypratiwi on 18 October 2009
True experimental design makes up for the shortcomings of the two designs previously discussed. It employs both a control group and a means of measuring the change that occurs in both groups. In this sense, we attempt to control for all confounding variables, or at least to consider their impact, while attempting to determine whether the treatment is what truly caused the change. The true experiment is often regarded as the only research method that can adequately measure a cause-and-effect relationship. Below are some examples:
Posttest Equivalent Groups Study. Randomization and the comparison of both a control and an experimental group are utilized in this type of study. Each group, chosen and assigned at random, is presented with either the treatment or some type of control. Posttests are then given to each subject to determine whether a difference between the two groups exists. While this design approaches the best method, it falls short in its lack of a pretest measure. It is difficult to determine whether the difference apparent at the end of the study represents an actual change from any difference that existed at the beginning of the study. In other words, randomization does well to mix subjects, but it does not completely assure us that this mix truly creates equivalence between the two groups.
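The random assignment step at the heart of this design can be sketched in a few lines. This is a minimal illustration, not part of the original post; the subject identifiers and the even split into two groups are assumptions for the example:

```python
import random

def assign_groups(subjects, seed=None):
    """Randomly split a pool of subjects into two equal-sized groups.

    Returns a (treatment, control) pair. Passing a seed makes the
    assignment reproducible; omit it for a genuinely random split.
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)  # random order removes any systematic bias in the pool
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical usage: 20 student IDs split into treatment and control.
treatment, control = assign_groups(range(20), seed=1)
```

Each subject ends up in exactly one group, and no characteristic of the subjects influences which group that is, which is what randomization is meant to guarantee.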
Pretest Posttest Equivalent Groups Study. Of those discussed, this method is the most effective in terms of demonstrating cause and effect, but it is also the most difficult to perform. The pretest posttest equivalent groups design provides for both a control group and a measure of change, but also adds a pretest to assess any differences between the groups prior to the study taking place. To apply this design to our work experience study, we would select students from the college at random and then place the chosen students into one of two groups using random assignment. We would then measure the previous semester's grades for each group to get a mean grade point average. The treatment, or work experience, would be applied to one group, and a control condition would be applied to the other.
It is important that the two groups be treated in a similar manner to control for variables such as socialization, so we might allow our control group to participate in some activity such as a softball league while the other group participates in the work experience program. At the end of the semester, the experiment would end and the next semester's grades would be gathered and compared. If we found that the change in grades for the experimental group was significantly different from the change in grades for our control group, we could reasonably argue that one semester of work experience, compared to one semester of non-work-related activity, results in a significant difference in grades.
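The comparison described above, asking whether the change in grades differs significantly between the two groups, can be sketched with a simple permutation test on each student's GPA change score. This is an illustrative sketch only: the GPA figures are invented, and a permutation test is one of several significance tests that could stand in for the comparison the post describes:

```python
import random
from statistics import mean

def permutation_p_value(changes_treat, changes_ctrl, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference in mean change scores.

    changes_treat / changes_ctrl are each student's (post - pre) GPA change.
    Under the null hypothesis the group labels are arbitrary, so we shuffle
    the labels many times and see how often a gap as large as the observed
    one arises by chance.
    """
    rng = random.Random(seed)
    observed = abs(mean(changes_treat) - mean(changes_ctrl))
    combined = list(changes_treat) + list(changes_ctrl)
    n = len(changes_treat)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(combined)
        diff = abs(mean(combined[:n]) - mean(combined[n:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical GPA changes (next semester minus previous semester).
work_experience = [0.5, 0.4, 0.6, 0.5, 0.7, 0.3, 0.5, 0.6]
softball_league = [0.0, 0.1, -0.1, 0.05, 0.0, 0.1, -0.05, 0.0]
p = permutation_p_value(work_experience, softball_league)
```

A small p-value (conventionally below 0.05) would support the argument that the work experience program, rather than chance, produced the difference in grade changes.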
Advantages of the true experimental design include:
- Greater internal validity
- Causal claims can be investigated
Limitations of True Experimental Design
Experimental designs are also limited by the narrow range of evaluation purposes they address. When conducting an evaluation, the researcher certainly needs to develop adequate descriptions of programs, both as they were intended and as they were realized in the specific setting. The researcher also frequently needs to provide timely, responsive feedback for purposes of program development or improvement. Although less common, access and equity issues within a critical theory framework may also be important. Experimental designs do not address these facets of evaluation.
With complex educational programs, we can rarely control all the important variables that are likely to influence program outcomes, even with the best experimental design. Nor can the researcher necessarily be sure, without verification, that the implemented program was really different in important ways from the program of the comparison group(s), or that the implemented program, and not other contemporaneous factors or events, produced the observed results. Being mindful of these issues, it is important for evaluators not to develop a false sense of security.
Finally, even when the purpose of the evaluation is to assess the impact of a program, logistical and feasibility issues constrain experimental frameworks. Randomly assigning students in educational settings frequently is not realistic, especially when the different conditions are viewed as more or less desirable. This often leads the researcher to use quasi-experimental designs. Problems associated with the lack of randomization are exacerbated as the researcher begins to realize that the programs and settings are in fact dynamic, constantly changing, and almost always unstandardized.