Student and Faculty Perspectives on Student Evaluation of Teaching: A Cross-Sectional Study at a Community College.

Sherese Mitchell, Ed.D., Associate Professor, Education

Asrat Amnie, M.D., M.P.H., Ed.D., Education

Jacqueline M. DiSanto, Ed.D., Associate Professor, Education

Allison Franzese, Ph.D., Natural Sciences

Carlos Guevara, M.S., Director, Office of Educational Technology

Juno Morrow, M.F.A., Assistant Professor & Coordinator, Humanities

Silvia Reyes, M.S.W., Director, Title V

Maria Subert, Ph.D., Assistant Professor, Humanities

Hostos Community College/CUNY

Abstract

An evidence base for improving teaching effectiveness can be developed through student evaluation of teaching (SET). However, SETs alone cannot provide a complete assessment of all essential aspects of college or university teaching. Researchers at a small, urban community college conducted a mixed quantitative and qualitative cross-sectional study to identify student and faculty perceptions of SETs. Among the categories considered were the importance of the SET instrument, its usefulness, and how it is approached by both students and faculty.

Key words: faculty evaluation; instructional evaluation; student evaluation of teaching; student voice

Introduction

How do we improve student retention and graduation rates in the face of ever-changing challenges at the individual, institutional, and social levels? This is one of the most important questions that continues to haunt institutions of higher education. Furthermore, public colleges face continued decreases in funding as a result of state budget cuts and seek new ways to cope with the financial stress. Current best practices make technology the preferred option for academic continuity, and on this campus it is the only option.

According to Bain et al. (2011), the most important factors for student success are the instructor’s interest in students’ academic success and the availability of affordable tuition, followed by the presence of a knowledgeable advisor and family support. Next in importance are personal motivation, having access to an online library, and taking online/hybrid classes. In fact, in a large-sample comparison of student learning outcomes in online versus face-to-face course formats, there was little to no difference in grade-based student performance between these two instructional modes for courses where both modes are applicable (Cavanaugh & Jacquemin, 2015). If an instructor’s interest in a student’s success is one of the most important factors in determining student learning outcomes, it is equally important to look into both student and faculty perspectives on student evaluation of teaching in order to expand our understanding and generate new evidence for decision making. After all, evaluation remains one of the most important tools for performance improvement in any field of endeavor. Accordingly, student evaluation of teaching is an important consideration in the reappointment of pre-tenure faculty and the promotion of tenured faculty.

Rationale for Student and Faculty Survey Study

Student surveys are one way of generating an evidence base for improving teaching effectiveness by providing high-quality, actionable feedback to instructors and academic leadership. Student surveys conducted using an online questionnaire whose validity and reliability have been established are an important means for measuring multiple aspects of teaching effectiveness. Moreover, student surveys help evaluate the pedagogical effectiveness of instructional approaches and practices. Instructors who hold students to high standards foster greater student satisfaction from academic success and build greater trust and confidence.

The importance of the classroom environment cannot be overemphasized. A conducive classroom environment reduces incidents requiring disciplinary action and improves student safety and security, which contributes to improved learning outcomes. Student engagement, including regular class attendance, assignment completion, staying in school, and improved academic progress, is fostered when instructors apply engaging pedagogical approaches and strategies. One of the factors that influences the teaching-learning process is the teacher-student relationship. Instructors with better communication skills encourage students to be more assertive and foster a sense of belonging and self-efficacy (Cantrell & Kane, 2013).

Student evaluation of teaching has made a difference in improving teaching effectiveness, but it does not provide a complete assessment of all essential aspects of college or university teaching. However, even minor changes to the system may significantly improve the educational experience for students as well as faculty members (Kozub, 2008). One way to improve the use of student evaluation of teaching is to transition to an online platform. A study conducted by Donovan et al. (2007) indicated a significant preference for the online course evaluation format over the traditional format.

Review of Literature

Although many researchers have examined the usefulness of Student Evaluation of Teaching (SET) for measuring teaching quality from the faculty point of view (such as Baldwin & Blattner, 2003; Boysen et al., 2014; Felton et al., 2008; Hamermesh & Parker, 2005), fewer have analyzed it from the students’ perspective (Nasser-Abu Alhija, 2017; Macfadyen et al., 2015). Even fewer scholars have paid attention to both at the same time (Bresciani et al., 2004; Keeling et al., 2008). The study discussed in this article contributes to this body of research through its parallel investigation of both faculty and student perceptions of SET; the researchers consider the two perspectives dialectical counterparts that should be examined together.

From the faculty’s perspective, previous studies argue that SETs are ineffective at measuring teaching effectiveness, that they are misused at the institutional level, and that they are biased on both the student and faculty sides (Nasser-Abu Alhija, 2017; Uttl et al., 2017). The first argument is that SETs fail to measure teaching per se. Nasser-Abu Alhija (2017) proposes that SET is more likely to measure student satisfaction than student learning; therefore, “student ratings of teaching quality should be considered with caution, for formative (teaching improvement) and summative purposes, especially for high-stakes use” (p. 11). Uttl and colleagues (2017) came to a similar conclusion, emphasizing that SET ratings and student learning are unrelated; SET should not be considered (at least not exclusively) as a stand-alone method of faculty evaluation.

The second argument in previous research is that SETs were originally intended for faculty use in improving their own teaching. For these researchers, any other use violates the SETs’ original purpose. Accordingly, they conclude that SETs are misused when included in faculty evaluation (Boysen et al., 2014).

Finally, the third argument in previous scholarship is that SETs are affected by bias on both the student and faculty sides. Hamermesh and Parker (2005) assert that women faculty more often receive lower evaluations from students than men do; minorities receive lower evaluations than white faculty; and female minority faculty receive lower evaluations than male minority faculty. Rubin and Smith (1990) add that instructors with accents are rated as poorer instructors than those without. On the other hand, Merritt (2008), Weinberg et al. (2007), and Felton et al. (2008) reveal biases introduced by faculty, as teachers try to inflate grades, make their classes easy, or “bribe” students in other ways to garner better student evaluations. Thus, as they argue, SETs do not measure the quality of actual teaching and learning. They conclude that SET should not be used institutionally for evaluating faculty. Some (such as Berk, 2014) are less strict, asserting that SET can be partially useful for evaluating teaching effectiveness. This includes the observation that SET can offer the opportunity for faculty self-reflection, if the evaluations are not viewed as an administrative task (Bresciani et al., 2004).

Only a few scholars have studied SET from the students’ perspective. Examining a Canadian research university, Macfadyen et al. (2015) identified course-specific factors that affect SET participation (such as course year level and course type) and biases, such as faculty gender and degree of student achievement in the course. Despite the acknowledgement that bias can impact SET results, the inclusion of student assessment of classroom experience plays a key role in campus assessment initiatives, as “institutions remain dependent on SET output for quality assurance and performance management processes” (Macfadyen et al., 2015). The goal of the research conducted by the authors of this article (herein “authors”) is to make SET more useful in closing the achievement gap by more clearly understanding both student and faculty attitudes toward SET.

Methodology

The purpose of this investigation was to identify the attitudes, practices, values, and perceptions of students and faculty toward the SET administered at Hostos Community College. The SET is an online survey administered at the end of each semester.

Research Questions

The researchers sought to answer the following research questions:

  • What is the level and nature of students’ attitudes, practices, values, and perceptions toward student evaluations?
  • What is the level and nature of faculty’s attitudes, practices, values, and perceptions toward student evaluations?

Research Design

This was a mixed quantitative and qualitative cross-sectional study designed to examine values and perceptions of student evaluations at a community college. Web-based surveys (created in Qualtrics, a cloud-based platform for creating, distributing, and analyzing web-based surveys) and focus-group interviews of students and faculty at Hostos were used to gather data. The focus groups for the students and for the faculty were conducted independently of each other.

Two survey instruments were created that each included closed- and open-ended questions using two distinct scales, the Likert scale and semantic differentials. Convenience sampling was used in order to include “basically everyone you can find who will cooperate” (Lindlof & Taylor, 2011, p. 151). The data analysis plan included (1) qualitative data analysis; (2) descriptive analysis; and (3) inferential data analysis; these were conducted separately for students and faculty.

For the survey, the n for students was 527, which represents about 8% of the student population (~7,000 students), and the n for faculty was 100, which represents about 22% of the faculty population (465 faculty). The invitation to participate was sent via email blasts by the principal investigator using the campus faculty and student distribution lists. The students were asked to take the Student Feedback Questionnaire (see Appendix A); faculty were offered the Faculty Perceptions about Student Evaluations Survey (see Appendix B). Additionally, flyers were prominently displayed in approved areas with the links to the surveys.

After the administration period for the quantitative surveys had concluded, invitations were sent out, again via email, to take part in a small-group discussion about SETs. There were five focus groups held for students, with a total of 33 participants; three focus groups were held for faculty, with a total of 14 participants. The prompts for the focus groups were based closely on the survey questions. The participants were asked to expand upon their previous responses. Each focus group had a discussion leader, notetaker, and timekeeper. Co-principal investigators (co-PIs) served in these roles; all co-PIs fulfilled one of these roles for at least one focus-group session.

Data Analysis

The quantitative and qualitative aspects of the data collected from the surveys were analyzed using Qualtrics. Thematic analysis was used to analyze the qualitative components of the surveys; the data were categorized and coded to identify themes that connect with the dimensions analyzed in this study.

In addition, a thematic analysis approach was adopted to analyze the data collected in the focus groups. Note-takers made sure to capture the responses of each participant, who was identified with a number to facilitate tabulation and maintain anonymity. Once the data from all the focus groups were entered into Qualtrics for analysis, the research team categorized and labeled the data entries through a two-step process: a first data-labeling round to identify all recurring themes appearing in the data, and a second data-labeling round to define the common categories used to encode the data.
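The coding itself was carried out in Qualtrics; as a minimal sketch of how coded entries from such a second labeling round could be tallied by category outside that platform (the participant numbers and category labels below are hypothetical, not the study's codes), consider:

```python
from collections import Counter

# Hypothetical focus-group entries after the second labeling round:
# each anonymized participant number is paired with the category code
# assigned to that response.
coded_entries = [
    ("S01", "student voice"),
    ("S02", "extra credit"),
    ("S03", "student voice"),
    ("S04", "distrust of follow-up"),
    ("S05", "extra credit"),
]

# Tally how often each category appears across all focus groups.
category_counts = Counter(code for _, code in coded_entries)

for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```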

Findings

Student Survey Results. Students were asked to select an option to describe how they see the student survey. In order of largest-to-smallest prevalence, students responded that they see the survey as important (57.29%), easy (15.71%), empowering (9.71%), fun (7.71%), boring (4.57%), unimportant (3.29%), disempowering (1.14%), and difficult (0.57%). Generally, student respondents perceived the survey as important or positive.

Students reported filling out the student survey sometimes (42.31%), always (36.73%), rarely (12.69%), and never (8.27%). The actual response rates of the surveys at this campus may be lower than the percentage of students reporting that they always complete the student survey. This may suggest that students perceive themselves as completing the survey more often than they actually do, or that significant selection bias is taking place.

When asked whether their professors promised extra points for completing the survey, 18.02% of students responded always, 24.49% sometimes, 11.13% rarely, and 47.36% never. Notwithstanding the aforementioned sampling issue, this is considerably higher than the proportion of faculty who report using extra credit to encourage completion of the survey (5.17%). This may suggest that social acceptability affects faculty responses.

Students were then asked whether they agreed or disagreed with a series of statements. The majority of student respondents did not know who read their answers and did not believe their answers would affect their grade. A majority of students disagreed with the statements that they did not understand the questions and that they did not like the questions. More than 50% of students agreed that they prefer to use the “Rate my Professor” (RMP) website over the Hostos Student Survey.


When asked whether they had anything else to add, one student wrote that one of their professors brought laptops or tablets into the classroom to complete the survey and that some of the students did not want to complete it. They went on to state that “we all filled it out and answered the questions picking everything positive about the professor because we felt if we didn’t there might be retaliation from the professor.” Another student stated that while some professors emphasize the importance of the survey, “the not so good professors don’t make it a point to evaluate them.” Finally, in very strong words, one student respondent expressed that there are no repercussions for faculty who receive overwhelmingly negative responses.

Student Focus-Group Findings. Based on the large percentage of students who responded to the first survey question that they see the survey as important, the authors first asked students present for the focus group “Why do you think the survey is important?” Students generally perceive the survey as important because it gives them a voice, letting faculty know what the students think about their teaching. Students perceived that their feedback is, or should be, used to help faculty identify aspects of their teaching that need to be changed. The authors then asked students whether they have ever filled out the survey without thinking honestly about their answers. In contrast to the students’ perception that their responses are useful and important, many students eventually admitted to doing this, at least occasionally. One student said, “I’m generally honest but sometimes rush through and click without thinking.” Several students said that when they complete the survey simply for extra credit, they do it quickly, without thinking. Hence, the students’ attitudes toward the survey while completing it do not necessarily align with its perceived importance. As for extra credit, students were asked if they think that their grade can be affected by whether or not they complete the survey. In general, students perceived no negative consequences to not filling out the survey, since it is anonymous, but several students noted a positive effect on their grade if the professor gives extra credit points for completing it.

The survey results showed that the majority of students disagreed that they did not understand the questions and that they did not like the questions. This was borne out in the focus groups, where the authors asked whether students think the survey questions help to evaluate teaching performance. Students’ general perception is that, yes, they do.

To explore students’ attitudes toward RMP compared with the Hostos Student Survey, the authors asked students why they prefer assessing faculty on RateMyProfessors.com. For the students present in the focus groups, attitudes were mixed. Many prefer RMP because of its transparency. Comments included: “When I don’t like a professor, I write comments about the class to let other students know;” “I know it can’t be taken down and people will see it;” “I love it. I go there to pick my classes.” Students did acknowledge the possibility for bias in feedback due to factors other than teaching ability, such as small sample size (“If there’s only 1 or 2 reviews, it could be students who had a bad experience but they’re not a bad professor”), the difficulty or rigor of the course content (“Sometimes the students rate them bad because they don’t want to do any work”), or the personality of the professor (“I have one class that the professor is very nice [so she will probably get good ratings]. Everybody likes the professor, but I don’t understand her. I have to go to tutoring to learn the material.”).

The majority of students present in these focus groups were perceptive enough to be distrustful of feedback from a small sample of students. In the course of the discussions during the student focus groups, their perceptions of the effectiveness of student feedback came to light. Students believe that, in some cases, their feedback leads to effective change, but, in other cases, their words fall on deaf ears. One student said, “Some professors, they improve the way they teach. Some professors it doesn’t help,” and another said, “I think some professors don’t care about them. It’s just always the same. Nothing changes.” The overall perception by the students is that better effort is needed to ensure that their feedback is used to improve teaching and learning at Hostos.

Faculty Survey Results. Approximately 70% of respondents selected an evaluation response rate of less than 25% of students enrolled in their courses. In the faculty survey, respondents were then asked what factors contributed to the response rates of their evaluations. Faculty generally responded with rationales for why they believed they had either low or high response rates.

A significant portion of faculty expressed that they perceived survey delivery through a paper form to have better results, with higher response rates. Other faculty cited a lack of motivation and understanding on the part of students as reasons for diminished response rates. Student stress, low prioritization, and poor timing were also mentioned as reasons for receiving a low response rate. Students were described as “too busy to care,” as seeming “totally disinterested,” and as viewing the survey as “another chore asked of them.” Faculty also reported that taking the time to explain the importance of the survey had a positive effect on response rates. One faculty member stated, “I typically have low response rates, but they went up last year. I believe this is due to my taking the time to actually explain the survey and its importance to faculty, including how I reconsider my teaching each semester based on their responses.” Extra credit was listed as a contributing factor for response rates by some faculty. However, several faculty respondents were concerned that extra credit was an option for completing the survey. One respondent wrote “I’m a little appalled that offering extra credit in the class is even an option here,” describing it as “not really ethical.” Faculty who reported bringing computers into the classroom for students to complete the survey had mixed results, with some attributing high response rates to the practice, while others found that it did not increase response rates. One respondent described it positively by saying “bringing the computers to the classroom during class time helps them complete the survey and increases the response rate.” Conversely, one response stated “I used to do in-class computers, but it didn’t help increase the response rate at all.”

One respondent expressed concern for whether students who dropped the course would be allowed to submit the form. Others expressed doubt over the validity of the instrument.

Faculty were asked, “Based on your knowledge, how are the results of the student evaluations used?” In order of largest-to-smallest prevalence, faculty responded that student-evaluation results were used for personal reflection (24.13%), tenure, promotion, and reappointment (23.78%), to assess teaching (21.33%), for professional development (11.54%), as a discussion with the department chair (10.84%), and other (2.1%).

The survey asked faculty to share why they thought the survey is important and why they thought it was not important. Some faculty reported that the survey was important for purposes of reappointment, tenure, and promotion. Bias in the survey results was also a concern among many faculty, with some claiming that results are polarized between positive and negative responses. Relatedly, low response rates were also seen as a source of bias by faculty. One respondent wrote, “The only responses are mostly from students who have an axe to grind.” Another faculty respondent cited research suggesting that student evaluation results are biased against female faculty and faculty of color. Distrust was a common attitude expressed in response to the question about why the survey is not important.

When asked to provide recommendations or thoughts for how to improve the surveys, many respondents made suggestions for how to improve the delivery of the survey, suggesting that survey delivery is perceived as a significant weakness. Generally, suggestions for improving the completion rate were the most common type. Faculty also used this section to express uncertainty regarding how the survey is used, how it should be used, and whether it reflects teaching effectiveness.

Faculty Focus-Group Findings. When asked if they encouraged students to complete the student-evaluation form, faculty responses varied significantly. However, the most salient responses alluded to the idea that it was important to let students know that they had a voice and that what they said matters. Faculty also felt that they wanted to help students make informed decisions about the courses they take; reminding students that, as students, they have responsibilities was also key. They also felt that a paper evaluation would yield better results.

In terms of the value of the evaluation, some faculty felt that the evaluation was not “appropriate and did not measure what it was supposed to.” Evaluations could be biased, not only in terms of how language is used but also in terms of gender. Other faculty felt that the evaluation could be a “valuable tool.” The evaluation could provide insight into teaching practices; however, by the time faculty receive the results, it is “well into the next semester,” preventing them from incorporating the information to better plan for the following semester. The evaluation can also serve as a self-evaluation tool.

According to some faculty, the evaluation does not reflect teaching and learning. “There are a lot of different methods of teaching.” It was reported that there is a difference between teaching face-to-face and teaching online. Other faculty felt that the evaluation “doesn’t measure the learning outcomes and the effort they put into teaching.”

When asked how they used the evaluation, some faculty stated that they did not use it. One commented that “the amount of stress it creates with faculty is not worth it.” Other faculty felt that the feedback they received from students was useful when planning for the following semester. The responses also helped some faculty enhance their teaching and make changes in the course. One faculty member commented that the evaluation could be empowering and good to use in a portfolio. According to the campus guidelines for creating the portfolio, which must be submitted for annual pre-tenure reappointment and applications for tenure and promotion, faculty are expected to write a reflective statement on their students’ evaluations and how the results will inform their teaching in the next academic year.

When asked about the support needed to make the evaluation valuable, some faculty expressed that it was important to build a “culture of evaluation” to create awareness about the significance of the evaluation in terms of tenure and promotion. Additionally, some faculty felt it was equally important to include the purpose of the evaluation on the syllabus so that students are aware they are expected to complete the survey.

Some faculty also felt that the title of the survey, “student feedback evaluation,” engendered fear and mistrust. Others said they did not have any particular feelings about it. One jokingly said “time to ‘berate’ my professor.” Another faculty member said that the title should send an empowering message.

When asked about the rate of satisfaction with the title of the survey, most faculty could not think of its name. There was one suggestion for a change to “instructor evaluation form.”

Discussion

About 70% of faculty indicated that the average response rate of the student evaluation survey is less than 25%, which is the current rate reported by the campus’s Office of Institutional Research. Also, 84% of faculty reported that the survey is valuable/important.

Faculty responses regarding the methods of communication coincide with how students responded, with in-class reminders as the most used method (36%), followed by Blackboard reminders (20%) and email reminders (13%).

Faculty value the feedback they receive from the student evaluations, and about 60% believe that it is used for tenure, promotion and reappointment processes. Half of them reported that they use the feedback to assess teaching, compared with 60% for self-reflection and only 28% for professional development. A similar percentage of faculty reported that the results are being discussed with the department chair.

In general, students perceive the student evaluation survey as important (57% of 558 participants) and believe it is beneficial for improving how faculty teach their courses. A portion of students (20%) shared concerns about how the information from this survey is used, especially because they have not seen change in some teaching practices of certain faculty.

When students were asked if they fill out the surveys while thinking honestly, many of them shared that they just filled out the survey for the extra credit or because they had to, and hence did not really pay attention to the questions. The vast majority in the focus groups trusted that the survey was anonymous. In contrast, about 2% of participants who responded to the survey indicated fear that their grades might be impacted.

When asked why students prefer RMP, almost all students concurred that the main reason was the transparency and immediacy of the information, which coincides with the survey finding that over 50% of the participants like the RMP website for the same reason. Students can see what other students are saying about professors and their ratings, and they can decide whether they want to take a class with those instructors. Some students mentioned that there is value in both approaches and that a mixed approach should be implemented.

The methods used to inform students about the survey are similar: Blackboard, email, and word of mouth. A major concern about how students are asked to fill out the survey, and about the potential for introducing bias into the results, is that a large number of students in this study mentioned that faculty have offered extra credit for filling out the survey. This aligns with the survey results showing that over 40% of students report receiving extra credit sometimes or always. Although many students indicated that they get extra credit, some shared concerns about this practice. The vast majority shared that faculty do not talk about the survey and the value of participating. All students who participated in the focus groups shared that this instrument and the responses they provide are opportunities for faculty to improve their teaching and learn about their students.

Next Steps

The researchers will seek to present their findings within their college community, at university events, and to a broader professional audience at national conferences. It is expected that the findings discussed herein will lead to professional development for faculty on how to interpret the results of student evaluations and how to use the results to drive pedagogical choices and enhancement. It is also anticipated that discussions will take place among stakeholders such as college leadership, Institutional Effectiveness and Assessment, Student Government, the Center for Teaching and Learning, the Office of Academic Affairs, and the Office of Educational Technology on how to effectively enhance the process of administering student evaluations, increase the number of students completing the survey, disseminate the results, and incorporate the findings to promote continuous improvement. Future investigation will examine the effectiveness of the actual survey questions in eliciting responses that can support further development of teaching skills, which in turn could lead to enhanced classroom experiences for students.

Conclusion

An evidence base for improving teaching effectiveness can be developed via student surveys. However, such surveys cannot provide a complete assessment of all essential aspects of college or university teaching. Therefore, in evaluating student and faculty perceptions of such a survey, focus groups were employed to delve deeper into the results. In summation, student respondents found the survey important or positive. Faculty believed the survey’s purpose was for tenure, promotion, and the reappointment process. Being reminded about the survey proved to be beneficial for student completion. Moreover, faculty expressed gratitude for receiving feedback from students, which they could apply to improving teaching and learning in their courses. At the end of the semester, students become consumed with other things, and the survey simply is not a top priority. Some students admitted to completing it without reading the questions, just because they were asked to or were offered bonus credit. Emphasizing the importance of the survey and of student voice was one way suggested by faculty to assist in effective completion, which supports Macfadyen et al. (2015), who posited that stressing the value of student voice could “ensure that [SET’s] output is valid and reliable.”

The need for recognition of, and transparency about, how student feedback is used was a common theme, which makes it imperative to view SET as a process that goes beyond survey administration and completion and becomes an intentional commitment to integrate SET into the strategic framework of an institution. Student buy-in will follow if students perceive genuine attempts to include their voices in the continuous improvement loop.

References

Bain, S., Fedynich, L., & Knight, M. (2011). The successful graduate student: A review of the factors for success. Journal of Academic and Business Ethics, 3.

Baldwin, T., & Blattner, N. (2003). Guarding against potential bias in student evaluations: What every faculty member needs to know. College Teaching, 51(1), 27–32. https://doi.org/10.1080/0260293980230207

Berk, R. A. (2014). Should student outcomes be used to evaluate teaching? Journal of Faculty Development, 28(2), 87–96.

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641–656. https://doi.org/10.1080/02602938.2013.860950

Bresciani, M., Zelna, C., & Anderson, J. (2004). Assessing student learning and development: A handbook for practitioners. NASPA-Student Affairs Administrators in Higher Education.

Cantrell, S., & Kane, T. J. (2013). Ensuring fair and reliable measures of effective teaching: Culminating findings from the MET project’s three-year study. MET Project Research Paper.

Cavanaugh, J. K., & Jacquemin, S. J. (2015). A large sample comparison of grade-based student learning outcomes in online vs. face-to-face courses. Online Learning, 19(2), n2.

Donovan, J., Mader, C., & Shinsky, J. (2007). Online vs. traditional course evaluation formats: Student perceptions. Journal of Interactive Online Learning, 6(3), 158-180.

Felton, J., Koper, P. T., Mitchell, J., & Stinson, M. (2008). Attractiveness, easiness, and other issues: Student evaluations of professors on RateMyProfessors.com. Assessment & Evaluation in Higher Education, 33(1), 45–61. https://doi.org/10.1080/02602930601122803

Hamermesh, D. S., & Parker, A. (2005). Beauty in the classroom: Instructors’ pulchritude and putative pedagogical productivity. Economics of Education Review, 24, 369–376. https://doi.org/10.1016/j.econedurev.2004.07.013

Keeling, R. P., Wall, A. F., Underhile, R., & Dungy, G. J. (2008). Assessment reconsidered: Institutional effectiveness for student success. Student Affairs Administrators in Higher Education.

Kozub, R. M. (2008). Student evaluations of faculty: Concerns and possible solutions. Journal of College Teaching & Learning (TLC), 5(11).

Lindlof, T. R., & Taylor, B. C. (2017). Qualitative communication research methods (4th ed.). Sage Publications.

Macfadyen, L. P., Dawson, S., Prest, S., & Gašević, D. (2015). Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations. Assessment & Evaluation in Higher Education, 41(6), 821–839. https://doi.org/10.1080/02602938.2015.1044421

Merritt, D. (2008). Bias, the brain, and student evaluations of teaching. St John’s Law Review, 82(1), 235–287. https://doi.org/10.2139/ssrn.963196

Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4-12. https://doi.org/10.1016/j.stueduc.2016.10.006

Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of non-native English-speaking teaching assistants. International Journal of Intercultural Relations, 14(3), 337–353. https://doi.org/10.1016/0147-1767(90)90019-S

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42. http://dx.doi.org/10.1016/j.stueduc.2016.08.007

Weinberg, B. A., Fleisher, B. M., & Hashimoto, M. (2007). Evaluating methods for evaluating instruction: The case of higher education (NBER Working Paper No. 12844). National Bureau of Economic Research. http://www.nber.org/papers/w12844


Appendix A

Student Feedback Questionnaire

The End-of-Course Student Survey of Instruction allows students to comment and give suggestions about a professor, as well as a course. Please share your opinion/experience with the Student Survey of Instruction to help us improve faculty evaluation. By completing any portion and submitting this survey you are consenting to participation.

Please circle the answer that is closest to your opinion.

1. I see the Student Survey as:

Important    Unimportant    Fun    Boring    Easy    Difficult    Empowering    Disempowering

2. I fill out the Student Survey.

Always    Sometimes    Rarely    Never

3. Someone explained the purpose of the Student Survey.

Always    Sometimes    Rarely    Never

4. My professor(s) promised extra points for filling out the Student Survey.

Always    Sometimes    Rarely    Never

5. I am informed about how my answers will be used.                         Agree    Disagree

6. I don’t know who reads my answers.                                                    Agree    Disagree

7. I am afraid my answers can affect my grade.                                      Agree    Disagree

8. I don’t understand the Student Survey questions.                             Agree    Disagree

9. I don’t like the Student Survey questions.                                            Agree    Disagree

10. I like to use Rate my Professor instead of the Student Survey.    Agree    Disagree

Is there anything you would like to add?

Thank you for your participation!

Appendix B

Faculty Perceptions about Student Evaluation Survey

Dear Faculty,

Please share your opinion/experience with the Student Survey of Instruction. Your information will assist in identifying and recommending ways to improve the student evaluation survey.

Thank you.

the Hostos Instructional Evaluation Committee

Q1 How do you encourage your students to complete the student evaluation form?

▢  In-class reminders

▢  BlackBoard reminders

▢  Email reminders

▢  Computers in classroom

▢  Class discussion

▢  Extra credit

▢  Reward

▢  Other

Q2. What is the average response rate of the student evaluation survey in your classes?

▢  less than 25%

▢  25% or higher

Q3 What factors do you think have contributed to the response rate of the student evaluation survey in your courses?

______________________________________________________________________________

______________________________________________________________________________

Q4 Based on your knowledge, how are the results of the student evaluations used?

▢  For tenure, promotion, and reappointment

▢  To assess teaching

▢  For professional development

▢  For personal reflection

▢  Are being discussed with department chair

▢  I don’t know how they are being used.

▢  What are the student evaluations?

▢  Other    ____________________________________________________________________

Q5 Is the student evaluation survey important for you?

▢  Yes

▢  No

Q6 Please share why you think the survey is important.

______________________________________________________________________________

______________________________________________________________________________

Q7 Please share why you think the survey is not important.

____________________________________________________________________________________________________________________________________________________________

Q8 Please provide recommendations or thoughts for how to improve the surveys or any impediments to survey completion and usefulness, or anything else you would like to add.

____________________________________________________________________________________________________________________________________________________________

Teaching Using a Flipped Classroom Approach: Impacts for Students of Color

Amber M. Gonzalez, Ph.D. (she/her/ella)

California State University, Sacramento

Abstract

Using a quasi-experimental research design, this study examined whether the use of a flipped classroom teaching method for undergraduate quantitative research methods had an impact on undergraduate students’ academic achievement within the course, as measured by their course assignments, quizzes, exams, and final paper. Findings suggest that utilizing a flipped classroom teaching design benefited Students of Color, who performed better than their White peers on their final papers.

Keywords: Undergraduate Research Methods, Flipped Classroom, Students of Color

Teaching Using a Flipped Classroom Approach: Impacts for Students of Color

Hispanic Serving Institutions (HSIs) are recognized as Minority Serving Institutions (MSIs), which account for at least 40% of undergraduate Students of Color enrollment (IHEP, 2014). These institutions often create a pathway to college success for students who are first-generation, low-income, and historically disadvantaged in terms of access to and success in postsecondary education. Often, when we discuss student outcomes at HSIs, we focus on enrollment and graduation outcomes without paying particular attention to the path toward graduation and the experiences of Students of Color within classroom learning environments.

Higher education learning environments are being increasingly investigated as the use of technology within these learning environments increases (Casanova et al., 2020; Goedhard et al., 2019). In addition, technology has changed the ways in which our students think and engage with information (Beichner, 2014).

In a traditional lecture-style classroom, the learning environment is often designed with row-by-row seating and the teacher facing the students from a position at the front of the classroom. In this classroom design, the teacher is often viewed as the transmitter of knowledge and students are viewed as passive learners who take in information through note-taking and question-asking (Casanova et al., 2020). By contrast, as a student-centered teaching approach (McLaughlin et al., 2014), the flipped classroom is an instructional setting characterized by both online and face-to-face instruction (DeLozier & Rhodes, 2017; Roehl et al., 2013). Although there are many models of flipped classroom design, the main characteristic is that instructional content is viewed before class through course readings, concept videos, and/or detailed notes provided to students (DeLozier & Rhodes, 2017; Roehl et al., 2013). Teachers and students are thus able to spend in-class time working on interactive engagement activities (Mitchell, 2020), which develop higher-order skills such as application and analysis through problem solving, in-depth discussions, or advancing concepts, and which allow for more one-on-one engagement between teacher and student (Nouri, 2016). The use of this model also can provide teachers with an opportunity to gain insight into their students’ learning and understanding of course concepts through in-class observations of students’ engagement in the activities (Mitchell, 2020).

Research examining the use of a flipped classroom design has indicated that courses using flipped classroom designs improve student engagement and student learning as measured by student grades (Ball & Pelco, 2006; Nouri, 2016; Peterson, 2016; Pienta, 2016). Moreover, Nouri (2016) found that students felt more motivated as learners when taught using a flipped classroom approach. In addition, Pienta’s (2016) findings suggest that students with low levels of achievement in class benefit the most from flipped classrooms. While empirical research has found that utilizing a flipped classroom design is supportive of student learning and has positive impacts on student outcomes (Ball & Pelco, 2006; Nouri, 2016; Peterson, 2016; Pienta, 2016), it is unclear how Students of Color respond to the use of a flipped classroom design.

Teaching Research Methods

              Undergraduate research methods courses are aimed at supporting students’ development of critical thinking skills through their understanding of issues associated with conducting research. Often these courses are students’ first experiences with research and have the potential to spark aspirations to pursue post-baccalaureate degrees that focus more on research.  However, even if students are not inspired to pursue graduate education at the end of the course, these courses are important in helping students to critically evaluate scientific research to make informed decisions as part of their professional development (Zablotsky, 2001). Despite their importance, undergraduate research methods courses are often perceived by students as challenging and can have a negative impact on students’ academic achievement, motivation, and attitudes towards engaging in research (Ball & Pelco, 2006).

Current Study

Although the flipped classroom is a relatively new pedagogical design being utilized in higher education settings, there are empirical studies examining the benefits of such a design (Nouri, 2016). However, the limited literature that does exist looks at students as a whole and does not examine the heterogeneity of our student population. The purpose of this quasi-experimental study was to better understand how completing an undergraduate research methods course that flips the traditional lecture/homework pedagogy and utilizes an activity-based learning environment impacts undergraduate students’ academic achievement. Additionally, this study examined how Students of Color differed from their White peers in their achievement within the course.

Method

This study examined group differences without the ability to randomly assign participants to the different learning environments, therefore employing a quasi-experimental research design (Creswell & Creswell, 2018). Using statistical analysis, the current study evaluated group differences using the learning environment as the quasi-independent variable and course achievement measures as the outcome variables. In addition to examining group differences by learning environment, this study also evaluated the effects of a flipped classroom learning environment on undergraduate students’ course achievement, with attention paid to differences between White students and Students of Color.

Research Design

              The research methods course was aimed at introducing undergraduate students majoring (or minoring) in a developmental psychology program to quantitative research methods. This course is a required foundation course within the program and students within the course were mostly junior level. A pre-requisite to the course is that students must have completed at least 45 units prior to enrollment. Major topics within the course included the structures, design and conduct of quantitative research inquiry, the generation of quantitative research questions and hypotheses, and collection and analysis of quantitative data. Emphasis was placed on challenging students to think critically about methodological issues in quantitative research as it applied to developmental research. Although the overall passing rate of students enrolled in this course is fairly good, approximately 8% of undergraduate students fail the course on a yearly basis and the retention and applicability of the course material for use in future courses is low.

Both sections were taught using the same textbook, covering the same chapters in the same order over a 16-week semester. Students in both sections completed the same number of course assignments, including problem-based activities, quizzes, exams, and a final mini research paper that was application based. Both sections were taught face-to-face for 75 minutes twice a week, but on different days of the week. More specifically, students in the flipped classroom met on Tuesday and Thursday in the early afternoon (12:00pm-1:15pm), whereas students enrolled in the lecture-based course met on Monday and Wednesday in the early afternoon (12:00pm-1:15pm). Both courses were taught by the same self-identified Latina tenure-track Assistant Professor.

Utilizing a quasi-experimental research design, section A was considered to be the control group, meaning the course was taught by the instructor using a traditional method of face-to-face lecturing in class and providing out of class problem-based activities. In contrast, section B of the course was taught using a flipped classroom design. More specifically, the instructor provided course content via short concept videos and lecture notes that were to be read outside of class and class time was devoted to completing the problem-based activities, as well as working on the final research paper.

Participants

Participants included 70 undergraduate students who self-enrolled in one of two undergraduate quantitative research methods courses in a developmental psychology program during the fall semester of 2018 at a Hispanic Serving Institution (HSI). Students self-enrolled in either a traditional lecture-based course (n = 36) or a course that utilized a flipped classroom design (n = 33). At the time of enrollment, students were unaware of which section they were enrolling in.

On average, during fall 2018, students were enrolled in 14.12 units (SD 2.24) and their grade point average was 2.96 (SD 0.45). It is important to note that students who self-identified as White had an average grade point average of 3.12 (SD 0.38) whereas Students of Color had an average grade point average that was slightly lower at 2.9 (SD 0.46); these differences were not significant (t(62) = 1.7, p = 0.10, two-tailed). Additional demographic information of students in flipped and lecture style classroom settings is provided in Table 1.

Table 1

Demographics of Students in Flipped and Lecture Style Classroom Setting

                               Lecture    Flipped    Total
n                              36         34         70
Transfer Student (Yes)         12         12         24
Gender
  Male                         5          4          9
  Female                       31         30         61
Age
  Mean                         21.47      21.38      21.43
  SD                           2.23       1.58       2.96
Race
  White                        8          9          17
  Latina/o                     19         11         30
  Black                        1          4          5
  Asian                        5          6          11
  Multi-Racial                 3          3          6
Academic Probation (Yes)       7          6          13
Grade Point Average
  Mean                         2.96       2.97       2.96
  SD                           0.47       4.3        0.45

Measures

At the start of the semester, students in both sections completed informed consent. At the end of the semester, students were provided with a survey that asked about their demographic characteristics, including their student identification number, gender, ethnicity, age, anticipated graduation date, number of units enrolled in and completed, grade point average, whether they had been on academic probation, whether they were a transfer student, and their specific program of study. At the end of the semester, after grades were submitted, students’ demographic data were merged with their grades earned in class (i.e., course assignments, quizzes, exams, final paper). Course grades included problem-based activities (highest possible score 60), quizzes (highest possible score 48), exams (highest possible score 300), and a final paper (highest possible score 150).

Analysis

Descriptive and inferential statistical analyses were conducted using SPSS 27.0 software. Independent-samples t-tests were used to determine whether there were significant differences between the instructional methods (i.e., flipped and lecture-based) in students’ earned grades. In addition, independent-samples t-tests were used to examine differences in achievement between Students of Color and White students within each instructional method.
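The analyses reported here were run in SPSS 27.0. As a hedged sketch of an equivalent workflow (the file names, column names, and group labels below are assumptions for illustration, not the study’s actual data), the merge described under Measures and the independent-samples t-tests could be reproduced in Python as follows:

```python
import pandas as pd
from scipy import stats

# Hypothetical files and column names (illustrative only, not the study's data):
# demographics.csv -> student_id, section, race_group, gender, ...
# grades.csv       -> student_id, activities, quizzes, exams, final_paper
demo = pd.read_csv("demographics.csv")
grades = pd.read_csv("grades.csv")
df = demo.merge(grades, on="student_id")  # merge survey data with earned grades

# Group difference by instructional method (flipped vs. lecture-based).
flipped = df.loc[df["section"] == "flipped", "final_paper"]
lecture = df.loc[df["section"] == "lecture", "final_paper"]
t_stat, p_val = stats.ttest_ind(flipped, lecture)  # pooled-variance t-test
print(f"Flipped vs. lecture final paper: t = {t_stat:.2f}, p = {p_val:.3f}")

# Within the flipped section: Students of Color vs. White students.
in_flipped = df[df["section"] == "flipped"]
soc = in_flipped.loc[in_flipped["race_group"] == "Students of Color", "final_paper"]
white = in_flipped.loc[in_flipped["race_group"] == "White", "final_paper"]
t_stat, p_val = stats.ttest_ind(soc, white)
print(f"Flipped section, SoC vs. White: t = {t_stat:.2f}, p = {p_val:.3f}")
```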

Findings

At the completion of the semester, overall, students performed well on their course problem-based activities, quizzes, exams, and final paper. Across both conditions, 14.3% of students earned A’s, 37.1% earned B’s, 30% earned C’s, and 18.6% failed the course. Neither classroom setting nor students’ self-identified racial identity was related to earned grades at the end of the semester.

Flipped and Lecture Based Classroom Setting

              When comparing students in the flipped classroom setting to the lecture-based setting, there were no significant differences among students in regard to their mean scores on course problem-based activities, quizzes, exams, or final paper. Table 2 provides additional details on student mean scores earned on activities, quizzes, exams, and their final paper.

Table 2

Comparing Group Means: Flipped and Lecture Based

                     Lecture               Flipped
Course Activities    48.07 (SD 10.39)      45.9 (SD 9.41)
Quizzes              42.39 (SD 25.37)      42.03 (SD 4.01)
Exams                234.89 (SD 28.46)     238.24 (SD 27.51)
Final Paper          109.31 (SD 21.94)     117.32 (SD 23.5)

Classroom Setting: Students of Color and White Students

For the flipped classroom condition, Students of Color (M = 123.54, SD = 18.90) scored significantly higher on their final paper than White students (M = 102.44, SD = 29.26), t(31) = 2.45, p < .05, two-tailed. When comparing Students of Color and White students in the flipped classroom setting, Students of Color had higher mean scores on course problem-based activities and quizzes; however, these mean differences were not significant. White students performed better on exams than Students of Color, but this difference was also not significant. Table 3 provides additional details on student mean scores earned on problem-based course activities, quizzes, exams, and the final paper, by racial group.

Table 3

Flipped Classroom Comparing Group Means: Students of Color and White Students

Assignments          White                 Students of Color
Course Activities    41.67 (SD 12.8)       47.89 (SD 7.5)
Quizzes              41.61 (SD 4.76)       42.25 (SD 3.87)
Exams                245.11 (SD 15.13)     237 (SD 30.7)
Final Paper          102.44 (SD 29.26)     123.54 (SD 18.90)
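As a rough consistency check, the reported t(31) = 2.45 follows from the pooled-variance t formula applied to the final-paper means and standard deviations above, assuming the flipped-section cell sizes implied by the race counts in Table 1 (9 White students and 24 Students of Color, an inference on the authors’ behalf rather than a figure stated in the text):

```latex
% Pooled variance from the two group SDs (n_White = 9, n_SoC = 24):
s_p^2 = \frac{(9-1)(29.26)^2 + (24-1)(18.90)^2}{9 + 24 - 2} \approx 486
% Independent-samples t statistic on df = 9 + 24 - 2 = 31:
t = \frac{123.54 - 102.44}{\sqrt{486}\,\sqrt{\frac{1}{9} + \frac{1}{24}}} \approx 2.45
```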

When comparing Students of Color and White students in the lecture-based classroom condition, Students of Color had higher mean scores on course problem-based activities and quizzes; however, these mean differences were not significant. In addition, White students performed better on exams and final paper scores; however, these mean differences were not significant. Table 4 provides additional details on student mean scores earned on activities, quizzes, exams, and the final paper by racial group for the lecture-based classroom condition.

Table 4

Lecture Based Classroom Comparing Group Means: Students of Color and White Students

                     White                 Students of Color
Course Activities    41.31 (SD 12.56)      50 (SD 9.03)
Quizzes              40.31 (SD 5.05)       42.98 (SD 5.40)
Exams                246.5 (SD 29.81)      231.57 (SD 27.71)
Final Paper          113.13 (SD 25.00)     108.21 (SD 21.36)

Discussion

While the quantitative findings did not suggest that undergraduate students do significantly better academically in the flipped classroom design than in the traditional lecture-based course design, we do see improvements for Students of Color within the flipped classroom design. Students of Color were able to engage in higher-order thinking as they applied course concepts to the writing of their final paper. This finding is consistent with those suggested by Nouri (2016). Students in the flipped classroom had more opportunities for one-on-one engagement as they completed their course problem-based activities and worked on their final papers. Students were able to ask questions, ask the instructor to read portions of the paper for immediate feedback, and work on analyzing quantitative data within the classroom environment. These opportunities were beneficial to students, as evidenced in their final paper submissions.

As the key findings of this current study suggest, Students of Color may thrive with student-centered teaching approaches (McLaughlin et al., 2014) that are more activity- and team-based. This classroom environment provides more opportunities for students to engage and interact with their peers on course assignments (Mitchell, 2020).

Limitations

The results of this study may be limited because the instructor’s teaching style may have played a role in the lack of significant findings. Both sections of the course were taught by the same instructor, and it may be that the instructor’s teaching style and energy influenced students’ engagement with the course material. In addition, the small sample size may have also impacted the study findings. While mean differences did appear within the results, these differences did not reach significance, which may have been influenced by the limited number of participants.

Conclusions

As we continue to think about the success of our Students of Color in higher education, we must reimagine how we deliver course content to our students and engage them with it. Rather than continuing to think of students as passive recipients of knowledge, we must use pedagogical strategies that allow students to actively engage with course material. Faculty must consider how they can optimize in-class, face-to-face time to interact with their students, scaffold student learning, and assess student learning outcomes.

References

Ball, C.T., & Pelco, L.E. (2006). Teaching research methods to undergraduate psychology students using an active cooperative learning approach. International Journal of Teaching and Learning in Higher Education, 17(2), 147-154.

Beichner, R. J. (2014). History and evolution of active learning spaces. New Directions for Teaching and Learning, 2014(137), 9–16.

Casanova, D., Huet, I., Garcia, F., & Pessoa, T. (2020). Role of technology in the design of learning environments. Learning Environments Research, 23, 413-427.

Creswell, J. W. & Creswell, J. D. (2018). Research design (5th ed). Sage Publications.

DeLozier, S.J., & Rhodes, M.G. (2017). Flipped classrooms: A review of key ideas and recommendations for practice. Educational Psychology Review, 29(1), 141-151.

Institute for Higher Education Policy. (2014). Minority-serving institutions “do more with less” to serve their students well. Washington, DC: Institute for Higher Education Policy. Retrieved from http://www.ihep.org/press/news-releases/minority-serving-institutions-do-more-less-serve-their-students-well

Mitchell, S. (2020). The evolution of lesson plans in a hybrid course: Flipping the classroom and engaging students through iPads and YouTube videos. Hispanic Educational Technology Services Online Journal, 10(2), 1H+.

 Nouri, J. (2016). The flipped classroom: For active, effective and increased learning – especially for low achievers. International Journal of Educational Technology in Higher Education, 13(33).

Peterson, D.J. (2016). The flipped classroom improves student achievement and course satisfaction in a statistics course: A quasi-experimental study. Teaching of Psychology, 43(1), 10-15.

Pienta, N.J. (2016). A “flipped classroom” reality check. Journal of Chemical Education, 93(1), 1-2.

Roehl, A., Reddy, S.L., & Shannon, G.J. (2013). The flipped classroom: An opportunity to engage millennial students through active learning strategies. Journal of Family & Consumer Sciences, 105(2), 44-49.

Zablotsky, D. (2001). Why do I have to learn this if I’m not going to graduate school? Teaching research methods in a social psychology of aging course. Educational Gerontology, 27, 609-622.

