Course Description:

1. Introduce you to the fundamental terms, concepts, and designs characteristic of both quantitative and qualitative educational research.

  • I will use this concept throughout the rest of my educational career and in my future school counseling profession.  In my educational career, these concepts are crucial when I am reading articles and trying to understand the differences, concepts, and designs of both quantitative and qualitative educational research.  In my future school counseling profession, I will use this skill when discussing quantitative and qualitative educational research with my colleagues.
  • One specific situation in which I will apply this skill I have developed: Just yesterday in my Practicum experience, I discussed with the middle school's speech pathologist how she was testing a child with Autism using both quantitative and qualitative methods.  I recognized the importance of already understanding what both of these types of research entail, and found it beneficial that I already knew the fundamental terms, concepts, and designs characteristic of both quantitative and qualitative research.  Therefore, I have already applied these skills and will continue to use them because I understand the importance of both quantitative and qualitative educational research for students in need.

2.  Learning and application of skills that will enable you to design your own research studies and critically evaluate published research articles in an effort to encourage data-driven reflection.

  • Throughout my educational career, I have critically evaluated published research articles in order to better understand a specific topic, such as the role digital storytelling has on elementary school-aged students.  In my school counseling profession, I will use this skill when I am interested in learning more about a student's background or a particular topic, and I will apply what I have learned to design my own research studies.  I have learned the importance of reviewing published research articles in order to better understand a particular topic or subject.  I can also use this concept when I am creating classroom guidance lesson plans and group sessions, and I will have the knowledge to critically evaluate published research articles to see if they relate to the topic I want to learn more about.

3.  Evaluate the methodological procedures that an author followed.

  • I will use this throughout the rest of my educational career when I am researching different articles on particular topics and examining what types of methods the authors used in their studies.  I have learned the importance of methods in a study, and also the necessity of looking at the limitations of the methods in a research study.  I also think that this skill will be useful in my profession if I am researching a particular topic, so that I can evaluate what other researchers have used and whether or not their methods could be related to my own research study.

4.  Evaluate the results that were reported.

  • I will use this skill both throughout the rest of my educational career and in my school counseling profession.  It is crucial for me to know how to appropriately look at the results and understand the meaning of the results in relation to the research question.  I do not feel that I would be able to fully understand a research study without knowing how to look at the data analysis and results.  I will use this skill when reading future research studies so that I will be more aware of the types of methods that do and do not work based on appropriately evaluating the results that have been reported.

5.  Evaluate the practical significance of the study.

  • I will use this both throughout the rest of my educational career and school counseling profession.  I now have a better understanding and the ability to determine when I feel a study is useful, and when it is not useful to my own research.  I also feel I will use this skill when I evaluate the significance of the study in relation to my own role and responsibilities as a school counselor.

6.  Possess the skills to comprehend common research designs, methods, and procedures.

  • I will use these skills throughout the rest of my educational career when I am evaluating research studies because I now understand the common research designs, methods, and procedures.  For example, I now understand that experimental, quasi-experimental, correlational, causal-comparative, and survey research designs are a part of quantitative research while qualitative research designs include interviewing, observation, and conducting a case study.

7.  Possess the skills and ability to communicate research results clearly, concisely, logically, and in a coherent manner.

  • I will use this in the rest of my educational career when I am discussing the results of a study in upcoming papers, and also the results in relation to my own research study.  In terms of using this skill in my profession, it is important for me to have the competence to fully understand results and be able to discuss them with my colleagues, parents, and others.
  • One specific situation in which I will apply this skill I have developed: If I found in my research study results that there is a significant positive effect of using digital storytelling with elementary school-aged students, it is vital for me to be able to communicate these research findings to my fellow colleagues and others within the school.  I must be able to clearly and concisely explain the results of my study and how they relate to future school guidance lessons.  It is my role as a school counselor to communicate these results to my colleagues and to be able to describe the strategies and results accurately, while also incorporating potential future innovations that can take place within the school.

8.  Able to read and critically evaluate scholarly journal articles.

  • I feel that this skill is particularly important in my education.  For future research papers, this skill helps me read and appropriately critique scholarly journal articles, and decide whether or not they are beneficial to use in my own research.

9.  Design my own research investigation.

  • This skill is helpful for my school counseling profession if I decide I want to research a particular topic.  It is essential for me to know how to appropriately design my own research and the many elements that need to be taken into account when I am designing my own research investigation.  For example, I must look at past peer-reviewed articles, think about the participants I want in my study, and consider the type of research design most beneficial for my study.

Course Objectives:

1.  Compare and contrast quantitative, qualitative, and mixed-methods approaches to research.

  • I will use this concept for the rest of my educational career and as a school counselor because from now on, when I am presented with research studies, I will be able to determine whether the approach is quantitative, qualitative, or mixed-methods.  I will be able to determine this based upon such things as the research design and data collection of a study.

2.  Explain what experimental, quasi-experimental, and non-experimental research designs entail and describe their application to different research questions.

  • Knowing the different types of research designs will allow me to evaluate quantitative research studies throughout the rest of my educational career, and understand the differences between each design in relation to the research studies.  This will also help me as a school counselor develop my own research studies and determine whether or not these research designs apply to a possible future research question of my own.

3.  Explain descriptive statistical techniques such as measures of central tendency, standard deviation, and correlation.

  • This skill will allow me to recognize statistical techniques when I am reviewing research articles and help me better understand the data analysis of research studies.  I will need to use this skill for any quantitative research study, including my own research proposal for this course; a brief sketch of these techniques follows below.
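
As a small illustration of these techniques (the scores below are made up, not data from any actual study), here is a minimal Python sketch that computes a mean, median, standard deviation, and Pearson correlation:

```python
# A minimal sketch of the descriptive techniques named above, using made-up scores.
import statistics

pretest = [72, 85, 90, 66, 78, 88, 95, 70]     # hypothetical pre-test scores
posttest = [75, 89, 93, 70, 80, 92, 97, 74]    # hypothetical post-test scores

mean_pre = statistics.mean(pretest)            # measure of central tendency
median_pre = statistics.median(pretest)        # another measure of central tendency
sd_pre = statistics.stdev(pretest)             # sample standard deviation (variability)
r = statistics.correlation(pretest, posttest)  # Pearson correlation (Python 3.10+)

print(f"mean={mean_pre:.1f}, median={median_pre}, sd={sd_pre:.2f}, r={r:.2f}")
```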

4.  Explain the ethical principles that pertain to research involving human subjects and research conducted in educational settings.

  • This concept is particularly crucial when I perform my own research study.  It is important that I know all of the ethical principles when I am involving human subjects in my research study, in order to protect them from any harm.  As a school counselor, it is essential for me to know the ethical principles while interacting with my clients and to understand the appropriate way to form positive relationships with them.

5.  Select a research problem and formulate appropriate research hypotheses and/or questions.

  • I will use this skill throughout the rest of my educational career and as a school counselor if I am developing another research study.  Even though I will most likely not focus on research as a school counselor, I would like to carry out my own research (especially the research study I created for this class).  It is essential for me to be able to create a research problem and develop suitable research hypotheses and/or questions while conducting a study.

6.  Conduct a review of educational literature from texts, journals, and computer library databases.

  • I will use this skill throughout the rest of my educational career when I am assigned research papers in class.  I am more aware of which databases, journals, and texts are appropriate to use when writing a literature review.  This concept also allows me to gain an understanding of the existing research and findings on a particular subject by going through the appropriate educational literature in my area of interest.

7.  Write a coherent synthesis of such literature as it relates to the research problem.

  • I will use this concept for the rest of my educational career when I am writing a research paper so that I will be able to highlight the important information that relates to the research problem.  As a school counselor, I will use this skill by being able to point out the information that most closely relates to my topic of interest that I want to learn more about.

8.  Prepare a viable research proposal.

  • I will use this skill for my future classes in my educational career when I am conducting research on a particular area or subject.  As a school counselor, I will use this skill when I am interested in conducting research and my own study on a particular topic.  For example, I would be able to use this skill if I decided that I wanted to investigate the effects of media on elementary school-aged students within my school.

Research proposal question as of now: Does using multimedia in school counseling guidance lessons produce changes in the effectiveness of the lessons for elementary school-aged students?

While reviewing the quantitative rubric for the research proposal, I have identified some areas that are challenging for me right now.  My first challenge pertains to my literature review and making sure that everything I want to address in it is covered.  To address this, I have divided my literature review into two sections: one dedicated to multimedia school guidance lessons and the impact multimedia can have on elementary school-aged students, and one dedicated to print-based guidance lessons and the impact bibliotherapy can have on elementary school-aged students.  However, I still have the question of whether or not I am supposed to focus on a specific topic for the guidance lessons.  For example, should I focus on social skills guidance lessons to see whether using multimedia (a video) or print-based materials (a book) is more effective for elementary school-aged students?  Or is it okay to have my literature review focus only on multimedia and bibliotherapy?

Another challenging area for me right now is the participants of my study.  Even though I have been examining past research to see what types of participants other studies have used, and decided after speaking with you a few weeks ago that second graders would be an appropriate age group, I am still uncertain how many participants I should have and whether or not they should be divided into two groups.  For instance, I was thinking of comparing a lesson taught using a book with a lesson taught using multimedia (a video).  Should I use the same group of participants with two separate lessons on the same topic (i.e., a book on friendship skills and then a video on friendship skills)?

A third challenging area for me right now is whether or not I need to define effectiveness more specifically.  To address this, I have identified the type of guidance lesson (multimedia versus print-based) as my independent variable and the effectiveness of the lesson as my dependent variable.  However, should I define effectiveness as something more concrete?

Another challenging area for me is the research design.  To address this area, I have been thinking that the research design would be either experimental or causal-comparative.  However, my research design depends upon whether I have one group of participants who are exposed to both a multimedia (video) lesson and a book lesson, or whether I split the participants into two separate groups (i.e., one group would have a video lesson and the other group would have a book lesson on the same topic).  I have been reading over the book again and still examining which design would be the most appropriate for my research proposal.

One last challenge for me is data collection.  I was thinking of designing a pre-test/post-test to assess the effectiveness of the lesson by seeing whether the participants could correctly answer questions related to the book or video; a rough sketch of how such pre/post scores might be compared is below.  Should I be using a more formal instrument?  I am unsure what kind of instrument I should use to collect my data.  To address this issue, I have been reviewing past research and looking back into my research book to try to find ideas on how to test my research topic, but I am still unsure which to use.
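
To make the pre-test/post-test idea concrete, here is a rough, hypothetical sketch of how I might compare such scores with a paired-samples t-test in Python; the scores and the 0.05 significance level are illustrative assumptions, not part of my actual proposal:

```python
# Hypothetical pre-test/post-test comparison using a paired-samples t-test.
# The scores below are invented for illustration only.
from scipy import stats

pre_scores = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4]    # quiz scores before the lesson
post_scores = [6, 7, 5, 8, 6, 6, 9, 7, 8, 6]   # quiz scores after the lesson

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Scores changed significantly from pre-test to post-test.")
else:
    print("No statistically significant change was detected.")
```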

The methods used by quantitative and qualitative researchers to establish trustworthiness differ in many ways.  For qualitative researchers, the methods used to establish trustworthiness include credibility, transferability, dependability, and confirmability.  For quantitative researchers, the methods used to establish trustworthiness include internal validity, external validity, reliability, and objectivity.  

Criteria: Truth Value

    Credibility is one method used by qualitative researchers to establish trustworthiness; it involves examining the data, data analysis, and conclusions to see whether or not the study is correct and accurate.  For qualitative researchers, credibility involves taking on activities that increase the probability that the findings will be trustworthy.  The following are procedures qualitative researchers can use to increase credibility in qualitative studies:

1.  Prolonged engagement is an activity qualitative researchers use to learn the traditions and customs of the participants and to build trust.  It is crucial for researchers to spend a substantial amount of time at a site and to examine any distortions, including perceptual distortions, selective perception, and misconstruction of the investigator's questions.

2.  Persistent observation is used to support credibility by looking in depth at what the researchers are examining and investigating the relevant factors in detail.

3.  Triangulation is an activity used to examine multiple sources (i.e., interviewing and observation), methods, investigators, and theories.  Several different investigators are used to examine whether one researcher is more or less honest than other team members.  Multiple theories are examined because theories can be interrelated, and findings could be a function of the similarity of those theories.  Contextual validation plays a role in triangulation because it examines the validity of a piece of a study by comparing it with other kinds of evidence on the same points to find a similar characteristic style or distortion in a source.

4.  Peer debriefing is used to help make sure none of the researchers are relying on their own biased opinions.  This method consists of researchers asking a colleague or another person to look over the study for credibility and determine whether the results seem to follow from the data.

5.  Negative case analysis is used to show that not all the data will provide the same result. This improves the credibility of a study because it shows that the researchers are looking over the cases thoroughly, and it allows researchers to present information from a study that does not align with other themes, patterns, and overall results.

6.  Referential adequacy is a method of storing raw data in records so that it can be examined later and compared with other future studies to show the credibility of the data.

7.  Member checking allows participants to review the data, analytic categories, interpretations, and conclusions of the study.  This lets qualitative researchers examine the overall accuracy of the study and verify the results.

In contrast to qualitative researchers' credibility methods, quantitative researchers use internal validity to establish trustworthiness.  Quantitative researchers evaluate trustworthiness by how well the threats to internal validity have been controlled and by the validity of the instruments and measurements used in a study.  These researchers analyze data using statistical tests.  Internal validity is supported when changes in the dependent variable come only from the independent variable, not from other confounding variables.  It is important for quantitative researchers to remember the following possible threats to internal validity: history, selection, maturation, pretesting, instrumentation, treatment replications, subject attrition, statistical regression, diffusion of treatment, experimenter effects, and subject effects.

Criteria: Applicability

    Transferability is another method used by qualitative researchers to establish trustworthiness.  In qualitative studies, transferability means applying research results to other contexts and settings in order to address generalizability.  Qualitative researchers use this method by providing a detailed description of the study's site, participants, and data collection procedures so that other researchers can assess whether the results of one study are a good match for their own context and whether it makes sense to generalize.

In contrast to transferability, quantitative researchers use external validity to establish trustworthiness.  External validity is used to generalize from the research sample to the larger population.  It is crucial for quantitative researchers to examine the sampling technique when determining the trustworthiness of a study.  Researchers use external validity in the form of such things as statistical confidence limits to make reasonably accurate statements.  Quantitative researchers must look into the following factors that could affect external validity and generalizability: subjects, situation, time, intervention, and measures.
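
As one small, hypothetical illustration of the statistical confidence limits mentioned above (the scores are made up), the sketch below computes a 95% confidence interval for a sample mean:

```python
# Hypothetical 95% confidence interval for a sample mean (t-based, small sample).
import statistics
from scipy import stats

sample = [78, 82, 69, 91, 85, 74, 88, 80, 77, 83]   # made-up test scores
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n ** 0.5           # standard error of the mean

margin = stats.t.ppf(0.975, df=n - 1) * sem         # t critical value times the SEM
print(f"95% CI for the mean: {mean - margin:.1f} to {mean + margin:.1f}")
```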

Criteria: Consistency

    Dependability is a method qualitative researchers use to show consistency of findings.  Qualitative researchers describe in detail the exact methods of data collection, analysis, and interpretation.  This makes the study auditable, so that the situation is fully described and another researcher could follow the study.  The following are ways to show dependability:

1.  There can be no validity without reliability, and no credibility without dependability.

2.  “Overlap methods” as a direct technique to exemplify a kind of triangulation.

3.  “Stepwise replication” as a process of establishing reliability.  This approach requires an inquiry team of at least two people who can be separated into two inquiry teams.  The two teams deal with the data sources separately and perform their studies apart from one another.  Then, the results of the two teams are compared.

4.  An inquiry audit, in which an auditor examines the process of the study and determines its acceptability and dependability.  The auditor looks into the data, findings, interpretations, and recommendations and determines whether the study is supported by the data and is trustworthy.

For quantitative researchers, reliability is a method used to establish trustworthiness.  Quantitative researchers demonstrate reliability by examining the consistency of a group of measurements or measuring instruments used in a study (also known as internal consistency).  Researchers also use the test-retest method (also known as stability) to show reliability by administering one measure to one group of individuals, waiting for a certain amount of time, and then readministering the same instrument to the same group.  Equivalence is a measure that can be used by administering two forms of the same test to one group of individuals and then correlating the scores from the two administrations.  The equivalence and stability estimate is another way to examine reliability, by administering one form of an instrument and then a second form of the instrument after a certain amount of time to the same group of individuals.  Agreement is another way reliability is measured, by having raters observe the same behavior and examining whether or not their ratings are similar and consistent with one another.  Reliability is important to quantitative researchers because it is a basis for validity and measures whether or not a study obtains the same results each time.
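
To illustrate two of these reliability estimates (the numbers are invented, and this is only a sketch of the general idea, not any particular instrument), the code below computes a test-retest correlation and Cronbach's alpha for internal consistency:

```python
# Illustrative reliability estimates with made-up data.
import numpy as np

# Test-retest (stability): correlate the same instrument given twice to the same group.
time1 = np.array([30, 25, 40, 35, 28, 33, 38, 27])
time2 = np.array([32, 24, 41, 34, 29, 35, 37, 28])
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Internal consistency: Cronbach's alpha for a hypothetical 4-item scale
# (rows are respondents, columns are items).
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
])
k = items.shape[1]
item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of the item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)

print(f"test-retest r = {test_retest_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```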

Criteria: Neutrality

    Confirmability is a method used by qualitative researchers to establish trustworthiness.  Confirmability includes an audit trail containing raw data, such as electronically recorded materials, written field notes, documents, and records.  This method allows another researcher to verify the study when presented with the same data.  Confirmability is achieved when the findings of a study reflect the participants of the study, the data speaks for itself, and the findings are not based on the biases and assumptions of the researchers.

Unlike qualitative researchers' method of using confirmability to establish trustworthiness, quantitative researchers use objectivity.  Objectivity is pursued through the methodology of measurement, data collection, and data analysis, through which reliability and validity are established.  Objectivity is carried out through methodological procedures such as instrumentation and randomization.  Quantitative researchers focus on the facts.  Objectivity also refers to the appropriate distance between a researcher and participants that lessens bias.  The objective researcher is distant so that the researcher is not influenced by the participants and does not influence the study.

One last method both qualitative and quantitative researchers can use to establish trustworthiness is the reflexive journal.  This method is a kind of diary that both qualitative and quantitative researchers can keep on a daily basis, or as needed, to record a variety of information.  This method can be particularly useful for qualitative researchers because it provides a record of the researcher's own beliefs and thoughts about what is happening in the study.  A reflexive journal can also be useful for quantitative researchers because it can document methodological decisions and the reasons for choosing certain methods, instruments, and data analyses for the study.

*Note: I have changed my research question to the following quantitative research question: Do multimedia teaching methods affect a student’s academic success in elementary school?

Article #1: Achievement Effects of Embedded Multimedia in a Success for All Reading Program

This study examined the incorporation of video content (embedded multimedia) within teachers’ lessons to see if embedded multimedia enhances the effectiveness of beginning reading instruction.  The study design was a year-long experimental study that used a cluster randomized trial design, with random assignment of schools, comparing first graders who learned beginning reading through the Success for All reading program either with or without short video modules.  The Success for All reading program is designed for beginning readers to help them with “…letter sounds, sound blending strategies, vocabulary, and comprehension strategies” through visual models (Chambers, Cheung, Gifford, Madden, & Slavin, 2006, p. 233).  There were 450 first-grade participants who attended one of ten inner-city public elementary schools in Hartford, Connecticut (all within the same district), and 394 completed the pre-tests and post-tests.  62% of the students were Hispanic, 35% were African American, and 3% were White.  Almost all of the students qualified for free or reduced-price lunches.

In regards to the data collection methods, the study consisted of an experimental group and a control group.  The experimental group incorporated the use of multimedia content, while the control group did not use any multimedia content.  The schools were randomly assigned to either the experimental group or the control group.  The experimental and control groups were about equal in terms of the number of schools, number of students, percentage of students qualified for free lunch, ethnicity, and percentage of students whose first language is not English.  The experimental group used multimedia content included in teachers’ daily ninety-minute Success for All reading lessons, with thirty-second to three-minute skits and other multimedia components incorporated into the teachers’ lessons to demonstrate reading, comprehension, and vocabulary strategies to the students.  The control group schools (no embedded multimedia) used the daily ninety-minute Success for All reading program, with teachers using picture cards to demonstrate the letter shapes and vocabulary in the student books, as well as activities and games to help improve students’ reading and comprehension skills.  The participants were given the Peabody Picture Vocabulary Test and the Word Identification sub-test from the Woodcock Reading Mastery Tests-Revised as pretests.  For the post-test, the participants were given a reading fluency test from the Dynamic Indicators of Basic Early Literacy Skills and three scales from the Woodcock Reading Mastery Tests-Revised (Word Identification, Word Attack, and Passage Comprehension).

In terms of data analysis, the data were analyzed with hierarchical linear modeling, with students nested within schools to reflect the cluster randomized design.  Classroom-level analyses were not conducted because the participants changed reading teachers over the course of the year.  The condition was the independent variable, and the Woodcock Reading Mastery Tests-Revised sub-tests and the Dynamic Indicators of Basic Early Literacy Skills fluency test were the dependent measures.  The Peabody Picture Vocabulary Test and the Word Identification sub-test were used as covariates to help adjust for initial differences between the experimental and control groups so that statistical power could be increased.  Analyses were also performed both for the overall sample and for a Hispanic subsample.
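
The following Python sketch is not the authors' actual analysis, but a rough illustration of the same general approach: a hierarchical (mixed) linear model with students nested in schools, the treatment condition as a fixed effect, and a pretest score as a covariate.  The file name and column names are assumptions made for the example.

```python
# Rough illustration (not the authors' code) of a hierarchical linear model
# with students nested in schools, a treatment indicator, and a pretest covariate.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: school, condition (0 = control, 1 = multimedia), pretest, word_attack
data = pd.read_csv("reading_outcomes.csv")

model = smf.mixedlm(
    "word_attack ~ condition + pretest",   # fixed effects: treatment plus pretest covariate
    data,
    groups=data["school"],                 # random intercept for each school
)
result = model.fit()
print(result.summary())
```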

The researchers of this study found only one significant experimental-control difference.  That difference was on the Word Attack outcome measure, on which the experimental group scored higher than the control group.  This finding supports past theoretical expectations that students who were given multimedia components dealing with letter sounds and sound blending would benefit more from beginning reading instruction than students given no multimedia components.  The control group scored higher than the experimental schools on Word Identification, Passage Comprehension, and the Dynamic Indicators of Basic Early Literacy Skills (DIBELS).  Even though it was expected that the Hispanic participants would not benefit from the embedded multimedia treatment in the same way as the other participants, the multimedia effects for Hispanic participants were almost the same as for the other participants (who were mostly African-American).  Thus, no interaction between ethnicity and treatment was found.

In regards to whether or not the conclusions are valid given the study design, I feel that even though the data analyses were thorough and detailed, this study is somewhat weak because it only measured the achievement effects of multimedia content on an individual reading program.  As the researchers suggest, I feel it would be important to examine the effect multimedia components have on teaching practices and whether or not multimedia improves teachers’ delivery of the reading program to students.  I feel that there are also several threats to internal validity given the study design.  One threat to internal validity that pertains to this study is the selection of participants.  It is not possible to say the conclusions are necessarily valid because the researchers only studied ten schools in Hartford, Connecticut.  In addition, the majority of the participants were either African-American or Hispanic, with only a small percentage of White participants, and most came from a lower socio-economic background, which does not give a representative sample of the total population of first-grade students around the United States.  Maturation is another threat to internal validity in this study due to the possible mental developmental changes in the participants over the one-year duration.  Treatment replication is another threat to internal validity because this study was only conducted once instead of several times, which could lead to invalid results.  Diffusion of treatment can also be a threat to internal validity because the student participants may have been talking about the multimedia components within the school, which could impact the results; participants could already have ideas about the reading assessments before taking them, from talking with other participants who took the assessments first.  Even though I feel that this study used several valid assessments to measure what it was intended to measure (essential reading skills), I feel further research with a more representative sample needs to be conducted.

Article #2: Science Education in Primary Schools: Is an Animation Worth a Thousand Pictures?

This study examined fourth and fifth grade elementary school teachers’ methods for incorporating animated movies, the teachers’ views about the role of animation in improving fourth and fifth grade students’ thinking skills, and the effect of animated movies on fourth and fifth grade students’ conceptual understanding of science and their reasoning ability.  The study design was experimental and consisted of an experimental group and a control group.  A pilot study was conducted to establish the reliability and validity of the research tools.  The main study examined the teachers’ methods for incorporating animated movies, the teachers’ views about the role of animation, and the effect of animated movies on fourth and fifth grade students’ learning.  The schools in the sample were separated into experimental and control groups according to the school principals’ and science teachers’ preferences.  Stratified sampling was used to make sure students from different demographics and age groups would be represented in the sample.

The participants were from primary schools in the central part of Israel.  1,335 primary school students were separated into experimental and control groups.  The experimental group included 926 students from five elementary schools, of whom 435 were fourth graders and 491 were fifth graders.  The control group included 409 students from two elementary schools, of whom 206 were fourth graders and 203 were fifth graders.  The student participants’ gender distribution was equal (50% girls and 50% boys) across both the experimental and control groups.  Approximately 11% of the student participants’ parents had careers in a scientific field, such as medical doctors, scientists, and engineers.  12.8% of the student participants participated in extracurricular activities related to science education.  In terms of the teacher participants, there were 15 science teachers who incorporated animated movies into their teaching.  All of the science teacher participants were female, 62% had a Bachelor of Education degree, and 85% had taught for more than 10 years.

In regards to data collection methods, both quantitative and qualitative methodologies were used.  Informal discussions were held with the science teacher participants during the breaks between classes.  Five science teachers were selected for the experimental group, in which they shared their beliefs and classroom experience regarding their teaching style and their thoughts about incorporating animated movies to improve elementary school-aged students’ thinking skills.  Data were also collected by surveying teachers’ methods for incorporating web-based educational technologies before and after the BrainPop animated movies program (three- to five-minute animated movies that explain scientific concepts, interactive quizzes, and experiments that correspond with Israeli national science education standards).  Pre- and post-questionnaires were given to the experimental group and control group student participants at the beginning and at the end of the school year.  The questionnaire used was the Science Thinking Skills questionnaire, which had two versions, one for fourth grade student participants and one for fifth grade student participants, according to Israeli national standards and topics.  The questionnaires for both grade levels included the same questions, but in a different order.  Both questionnaires probed students’ understanding of scientific ideas (i.e., motion and forces, life on earth, environmental issues) through multiple-choice questions and true/false questions that required explanations.

The data analysis of this study included content analysis of informal discussions with both experimental and control group teachers.  The students’ scores on the pre-post Science Thinking Skills questionnaire were analyzed by comparing the experimental group and control group, gender (boys and girls), parents’ career choice (science and non-science occupations), and extracurricular activities (science-related or not science-related).  An ANCOVA (analysis of covariance) test was used to control for the pre-questionnaire results when testing for statistically significant differences in the post-questionnaire results.  Eta squared analysis was also used to measure the development of students’ science thinking skills.  An ANCOVA test was also used for comparing the fourth grade students’ levels of explanations.
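
As a hedged illustration of roughly this kind of analysis (not the authors' actual code; the file and column names are assumptions), the sketch below runs an ANCOVA comparing post-questionnaire scores by group while controlling for pre-questionnaire scores, then computes eta squared for the group effect:

```python
# Illustrative ANCOVA with an eta-squared effect size; data and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.read_csv("science_scores.csv")   # assumed columns: group, pre_score, post_score

# ANCOVA: post-questionnaire scores by group, controlling for pre-questionnaire scores.
model = smf.ols("post_score ~ C(group) + pre_score", data=data).fit()
table = anova_lm(model, typ=2)
print(table)

# Eta squared for the group effect: its sum of squares over the total sum of squares.
eta_squared = table.loc["C(group)", "sum_sq"] / table["sum_sq"].sum()
print(f"eta squared for group = {eta_squared:.3f}")
```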

The conclusions of this study showed that teachers felt that animated movies can be shown in the classroom to help promote class discussions, teamwork between groups of two or three students, and individual learning.  There was a statistically significant difference between the experimental and control groups: students who experienced the use of animated movies as part of their science learning improved their understanding of and performance on science concepts and reasoning, compared to students who used only textbooks and photos.  The experimental group of fourth and fifth grade students had statistically significantly higher scores on the Science Thinking Skills questionnaires compared to the control group of fourth and fifth grade students.  The study also found a significantly higher percentage of correct explanations on the true/false questions among experimental group students than among control group participants.  Therefore, this study supports the use of animated movies as a multimedia teaching tool, because they can promote scientific interest, the development of scientific language, and better understanding of scientific reasoning by integrating visual, auditory, and kinesthetic learning styles and engaging the senses of sight, hearing, and touch.

For the most part, I feel that the conclusions are valid given the study design.  The researchers performed a pilot study stage in order to establish the reliability and validity of the research tools, including the Science Thinking Skills questionnaire.  The questionnaires were also validated by four experts in science education and three elementary school teachers.  To make sure that the study measured what it intended to measure, the experimental teachers received a two-hour workshop on incorporating the web-based animations (BrainPop) and received extra help if needed from BrainPop experts.  The ANCOVA analysis also seems to have been thorough and detailed throughout this study.  However, I feel that surveys and informal discussions are difficult ways to examine the teacher participants, and this could be a threat to validity.  I also feel that there is a threat to external validity because the study was conducted only in Israel; the participants are not representative and should not be used to generalize to the wider population of all fourth and fifth grade students.  In addition, the study did not state the ages of the fourth and fifth grade student participants, which could be different from those of fourth and fifth grade students in the United States.  Even though there are threats to validity, I do feel that this study was well performed.  I think that the conclusions are valid in terms of studying the effectiveness of animated movies on student participants’ scientific learning and reasoning, based on the validity of the Science Thinking Skills instrument and the thorough data analysis.

References:

Barak, M., & Dori, Y. J. (2011). Science education in primary schools: Is an animation worth a thousand pictures? Journal of Science Education and Technology, 20(5), 608-620. doi: 10.1007/s10956-011-9315-2.

Chambers, B., Cheung, A. K., Gifford, R., Madden, N. A., & Slavin, R. E. (2006). Achievement effects of embedded multimedia in a Success for All reading program. Journal of Educational Psychology, 98(1), 233-237. doi: 10.1037/0022-0663.98.1.232

Quantitative Research Question: Does matching a student’s learning style and a teacher’s learning style affect a student’s academic success in elementary school?

There are many threats to internal validity that could affect my study of whether or not matching a student’s learning style and a teacher’s learning style affects a student’s academic success in elementary school.  History could affect my study because events and situations that I have no control over could arise inside or outside of my study and affect a student’s academic success.  An example from outside of my study: if a major crisis occurred for a participant outside the classroom, such as a death in the family, it could influence the participant’s learning style on the particular day I am performing my study.  An example of a potential threat from inside the study: during my study, disruptions from people coming in and out of the classroom while the participants are taking the assessment tests could distract the participants.  As a result, I, the researcher, would not know whether the participant answered a certain question because the participant was distracted or whether that is how the participant would truly answer without distractions.  The time of day could also influence my results.  If I administer the assessment to my student participants in the morning and then to my teacher participants in the afternoon, the participants could have answered the way they did due to the time of day the assessment was given; the participants might have answered differently if the questions were given at a different time of day.

Selection is another threat to internal validity because of the characteristics of the participants in my study.  For example, if I grouped all academically gifted student participants together, these participants would have a better chance of greater academic success than a separate group of “appropriately” academically developed students, without even taking into consideration whether or not a teacher’s learning style affects the student participants’ academic success.  Therefore, it is important for me to take into consideration the student participants’ academic performance and whether or not I want to involve gifted student participants in my study because of this threat to internal validity.

Maturation is a threat to internal validity in my study due to the mental developmental changes in participants over time.  Students who are studied at the beginning of the year may need to adjust to a teacher’s teaching style, which could influence a student’s academic success from the beginning of the year to the end of the year without any influence of the teacher’s learning style.  If my study were a longitudinal study over the student participants’ time at the elementary school, maturation could also be a threat because students typically develop mentally over that period of time.  Another reason that maturation is a threat is that my participants could easily become tired, bored, or hungry while participating in the study.  Thus, when measuring a student’s learning style and a teacher’s learning style, the answers could be skewed if the questionnaire used to measure learning style took a substantial amount of time to complete and some participants got tired and lost focus.  This could cause inaccurate results about the type of learning style of a student participant or a teacher participant.

Pretesting is another threat to internal validity due to the effect of participants taking the pretest.  For instance, if my student and teacher participants were given a pretest measuring their learning styles, the participants could become more aware of their learning style types based on the questionnaire.  After the questionnaire, the participants might want to read and investigate more about learning styles and academic success.  As a result, the participants could start acting differently or answer questions in a certain way to increase the chances of having the same type of learning style as a student or teacher.  Participants could also start acting differently in order to behave more like their designated learning style, or try to act differently in hopes of changing their designated learning style.

Instrumentation is a threat to internal validity for my study if the instrument I use to measure the students’ and teachers’ learning styles is not reliable and does not provide consistent results each time I assess the participants.  If I do not use the same instrument to measure both my student and teacher participants, it could also serve as a threat to internal validity because my measures would not be consistent and the same each time I assess the participants.

Instrumentation brings me to another threat to internal validity, treatment replication.  Treatment replication is a threat because if I do not repeat my study, it could lead to invalid results.  I must conduct the experiment repeatedly in order to make sure that the results are consistent and can potentially be generalized to a population.  I must not assume that one student’s and teacher’s learning styles will be exactly the same as another student’s and teacher’s learning styles.  It is crucial for me to repeat the experiment again and again with different student and teacher participants using the same measures.

Another threat to internal validity is subject attrition.  Over the course of my study, there is the threat of participants dropping out or not being there on the days I conduct my study.  If I study the participants over a long period of time, such as all six years of attending elementary school, and one of the student participants drops out or moves while in the third grade, it could skew my results; I will have a drop in the number of participants.  Teachers from the school could also drop out, change schools, or even stop teaching in the middle of my study, which could influence my results by leaving me with fewer teacher participants than when I initially began the study.

Statistical regression is a threat to internal validity if the participants were chosen based upon their scores on a pretest.  If only academically gifted students are used in my study, it is likely that the students’ academic success will be maintained or increase regardless of the teacher’s learning style because of the already high rate of academic success.

Diffusion of treatment can be a threat to internal validity if the intervention is given to various classes in the same school.  The student and teacher participants could talk about the learning style assessments with other participants who have not taken the assessment yet, which could impact the results.  This could result in participants already having ideas about the learning styles assessment before taking it, from talking with other participants who took the measurement before they did.

Experimenter effects could also be a threat if I, the experimenter, allow any of my own bias to impact the instrumentation process.  I may have higher expectations for certain learning styles over others, which could skew the results.  I could also be biased toward a particular learning style that I feel impacts a student’s academic success, which could bias the results.  Even if I were an observer or an interviewer, I could introduce bias by wanting my study to turn out the way the original researcher hypothesized, which would produce false and inaccurate results.

Subject effects are a threat to the internal validity of my study due to the effects of my participants knowing that they are involved in my study, which could change their natural behavior.  For example, if I explain to the teachers that their learning style is being measured along with the students’ learning styles to see if it impacts student success, they may answer in the way they think matches the students’ learning styles so that the results come out positive.  After taking a pretest and becoming more aware of whether or not matching students’ and teachers’ learning styles affects a student’s academic success, the participants could also unintentionally change their behavior in hopes of matching their behaviors with one another.

Both comparative studies and correlational studies examine the relationship, including the similarities and differences, between variables.  However, causation should not be assumed from either comparative or correlational designs, because even though there may seem to be a relationship between two variables, that does not necessarily mean that one variable affects or changes another variable.  Another reason why causation should not be assumed from comparative or correlational designs is that the direction of causation between variables is not known (McMillan, 2012).  For example, in Legette’s (1998) study of “Causal Beliefs of Public School Students about Success and Failure in Music,” it is important not to assume that effort and musical ability cause students to be more successful in music (as cited in McMillan, 2012, p. 203).  For one, the school-aged subjects were only drawn from northern Georgia, so it is unlikely that this sample is representative of the entire United States population.  In addition, since the subjects were only taken from one area of the United States, it is not plausible to assume based on these subjects’ results what causes every student to be more successful in music, because of the lack of variability in the sample.

Another reason not to infer causation from comparative research studies is the methods of the design.  In Legette’s (1998) study, he used the Music Attribution Orientation Scale questionnaire.  However, this questionnaire could have bias associated with it, such as gender bias, which could skew the scores and could be why the female subjects were scored as valuing ability and effort more than the male subjects.  In addition, the subjects who took the questionnaire could have answered the way they did because of their current mood; they may not have felt well or may have been tired, which could produce inaccurate results.  Thus, just because the female subjects valued ability and effort more than the male subjects did, it does not mean that being female caused the subjects to value ability and effort more.  This study does not establish a causal relationship between being female and valuing ability and effort more than the male subjects.

In Legette’s (1998) study, he found a positive correlation between subjects attending county schools and the belief that class environment is more important.  However, just because a student is enrolled in a county school, that does not explain why these students value class environment over ability and effort.  There are other crucial factors and variables that need to be examined, such as race, socioeconomic status, and environmental conditions, because these factors may explain why county school subjects and city school subjects responded the way they did.  Thus, it is not feasible to assume a causal relationship between attending a county school and valuing class environment, because we do not know how or whether attending a county school causes a student to value class environment over ability and effort.

Even though two things may be correlated, it is crucial not to assume that one variable causes the other, or vice versa.  A correlation only means that the two variables occur together and there could be a relationship, but it does not prove that a causal one exists.  Subjects, methods, other variables, and how data are collected are all reasons why causation should not be inferred from comparative or correlational designs, as exemplified using Legette’s (1998) study.  One of the most important things to remember is that causation should not be assumed from comparative or correlational designs, because even though there may be a relationship between two variables, that does not necessarily mean that one variable causes, affects, or changes another variable.
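
To illustrate why a correlation by itself cannot establish causation, the small simulation below (invented data, not Legette's) creates two variables that do not influence each other at all, yet end up strongly correlated because a third, unmeasured variable drives them both:

```python
# Simulated example: a third variable creates a correlation between two variables
# that have no causal link to each other.
import numpy as np

rng = np.random.default_rng(0)
n = 500

confounder = rng.normal(size=n)                     # unmeasured background factor
var_a = confounder + rng.normal(scale=0.5, size=n)  # driven by the confounder
var_b = confounder + rng.normal(scale=0.5, size=n)  # also driven by the confounder

r = np.corrcoef(var_a, var_b)[0, 1]
print(f"correlation between A and B: {r:.2f}")      # sizable, yet A does not cause B
```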

 References

Legette, R. M. (1998). Causal beliefs of public school students about success and failure in music. Journal of Research in Music Education, 46(1), 102-11.

McMillan, J. (2012). Educational research: Fundamentals for the consumer (6th ed.). Boston: Pearson Education, Inc.

There are many important factors to consider when conducting research involving human participants in educational settings.  I believe that one of the most important points for a researcher to keep in mind when conducting educational research is notifying participants of all features of the research that could possibly influence their agreeing to take part in it.  I feel that a researcher must be as honest with participants about the research study as possible and should inform participants of potential time commitments and any other factors that may affect a participant’s decision to be a part of the study.  For example, if a researcher needs all participants at a certain time of day for two hours, it is vital for the researcher to inform the potential participants before conducting the study, so that they are fully aware of the time at which the study needs to be performed and the time commitment needed to take part in it.  Based on my own experience of being a participant in multiple research studies at my undergraduate school, I felt more comfortable agreeing to participate when the researcher notified me about the basic elements of the study I was participating in.  Thus, I feel being honest is essential for a researcher because even though researchers cannot necessarily tell participants everything about the research study, participants still have the right to know the elements of the research study in which they are voluntarily participating.

Along with being honest with participants, another point that I feel is important for a researcher to think about when conducting educational research is obtaining informed consent from participants before conducting a study.  In my opinion, informed consent is essential for researchers to obtain from participants before conducting a study because it allows potential participants to read about the elements of the research and sign that they agree to the terms of the study.  If an element of the research described in the informed consent form is something that a person does not feel comfortable with and does not agree with, it is beneficial for both the potential participant and the researcher to know.  For the potential participant, it is beneficial to know before taking part in the study that it is not something the person is willing to participate in and that he or she does not agree to the terms of the study.  Informed consent is also beneficial for the researcher because it can help the researcher avoid “dropped participants” who decide not to show up for various reasons, including no longer wanting to comply with elements of the study.  From my own personal experience of conducting a research study at my undergraduate school, informed consent was important because it allowed the participants of my study to understand and agree to the conditions and factors of the study.  As a result, the informed consent from my participants allowed me, as a researcher, to have participants who were willing to comply with the factors of my study.  Another point that is crucial for a researcher to recognize is whether a potential participant is under the age of 18; if so, I feel it is necessary for parental consent to be obtained.  I believe this is an important point for researchers who are conducting studies with children under the age of 18 because their parents have the right to know about the study being conducted and also have the right to consent to allow or not allow their children to participate.

An additional point I personally feel is important for a researcher to keep in mind when conducting research involving human participants in an educational setting is protecting participants from physical and mental danger, pain, and damage.  It is vital for a researcher to recognize that there should be only limited risk, if any, to those taking part in the study.  If at any time throughout the study a researcher notices a potential harm or danger to a participant, I feel it is the researcher’s responsibility and ethical obligation to remove the participant from the study.  If there is any potential physical and/or mental risk from participating in a study, I believe it is the researcher’s duty to inform the participants, and the participants’ parents if they are under the age of 18, about these potential risks before conducting the study.

I also feel a researcher must keep in mind that participants can stop participating in a study at any time.  Participation is voluntary, and thus no punishment or penalty can be given to participants if they decide to no longer be a part of the study.  In addition, I believe it is unethical for a researcher to force participants to take part in a study.  For instance, if a participant signs up to be a part of the study and then does not show up, attendance was not mandatory and no repercussions can be imposed.  Instead, the researcher will simply no longer have that participant in the study and may have to look for an additional voluntary participant to take part in the study.

One last point I personally feel is important for a researcher to keep in mind when conducting research involving human participants in an educational setting is maintaining confidentiality and keeping data anonymous.  I feel that it is a researcher’s ethical obligation to keep all participant information private.  For example, if a researcher is conducting a study on participants with a personality disorder, it is crucial for the researcher to understand that participants’ names must never be disclosed and that the data and results should be kept confidential.  In my own personal experience of being a participant in multiple research studies at my undergraduate school, it was reassuring when the researchers told me that my participation in their research was anonymous and confidential.  I also felt more comfortable knowing that the information I provided in the research was never going to be disclosed to anyone but the researchers.  In my own experience as a researcher, it was beneficial to keep the data and information from participants anonymous because it allowed me to avoid any bias toward certain participants’ results.  I feel that a researcher needs to respect the participants who are voluntarily taking part in the study by protecting their privacy and not disclosing their information.