The effect of implementing mind maps for online learning and assessment on students during the COVID-19 pandemic: a cross-sectional study | BMC medical training
During the online teaching period from March to the end of April 2020, three mind map assignments, a set of critical questions, and two surveys were given to students from two different courses as an alternative method of teaching and assessment. These three assignments covered important topics in both courses, as the grading system had been changed by the Ministry of Education due to the COVID-19 pandemic. Mind mapping assignments were therefore chosen as an alternative, both to enhance student learning and retention and to serve as an assessment tool. Performing this task as a group under such sudden and stressful circumstances could also help students learn soft skills, such as conflict resolution and time management. Additionally, the overall student pass rates in these courses in the second term were compared to the pass rates in the first term of 2020, before the COVID-19 outbreak, using hypothesis tests.
The study sample was selected by cluster sampling from the Department of Physics, Female Campus, King Abdulaziz University, Jeddah City, Kingdom of Saudi Arabia. The participants were medical physics students. The sample included only female students, as female and male campuses are separated in Saudi universities. The study was conducted among female students enrolled in two separate medical physics courses taught by the study author: 1) health physics, a second-level introductory course, and 2) magnetic resonance and medical imaging (MRI), taught to fourth-year students. The study sample totalled 55 students from the two courses. Students were verbally informed that 1) participation in the survey was voluntary, 2) responses would be anonymous, and 3) non-participation would not affect their course grades, as no identifying information would be collected. Participation was therefore considered to imply consent.
To complete the mind mapping assignments, students were first randomly divided into groups of 3-6 students using the Blackboard system. There were 11 students in the MRI course; thus, the students were divided into 3 groups. There were two classes for the Health Physics course, one with 14 students and one with 30 students, which were divided into 3 and 5 groups, respectively. Thus, a total of 55 students received three mind map assignments that covered the most important topics of the course. Each mind map covered a section of the course that the students had already completed.
All mind mapping assignments were posted on Blackboard with instructions for guidance. Students also attended a short session in which the assignments were explained, with emphasis on the resources provided to help them and the grading scheme (rubric). The students were asked to self-learn on the Blackboard platform using the various YouTube video tutorials and resources, posted in Arabic and English, on how to create a mind map. Students were also free to choose and download one of three online mind mapping applications: MindMaster, MindMeister, and XMind. Online materials on how to use each tool were also posted for students on Blackboard.
Based on concept map assessment criteria published online, the author designed a mind map rubric to assess student achievement (see Appendix A in Supplementary File 1; [32, 33]). After each mind mapping assignment, students received topic-specific feedback on their work via Blackboard.
Once these assignments were completed, a fourth assignment was posted for students on Blackboard. This assignment included the mind maps that students had produced in each course: 23 for the health physics course and 8 for the MRI course, along with a set of 3 questions. Mind maps with the highest scores were excluded from this assignment. Each student had to choose a mind map that they had not worked on and answer the three critical questions: “Does this mind map include all the ideas and concepts related to the topic? If not, specify what is missing (indicate only one missing aspect)”; “What aspects do you like most about the mind map (e.g., in terms of ideas, links and connections used, supporting evidence, information used)?”; and “Provide at least one suggestion for improving this mind map.” The motivation behind this task was to encourage students to constructively critique the work of their peers. Therefore, there were no wrong answers to 2 of the 3 questions, unless the students failed to spot their peers’ errors regarding missing information on the mind map. The results of this assignment are not presented in this article.
To meet the objectives of the study, a survey was distributed online to students of both courses after the first assignment (Survey 1; Appendix B in Supplementary File 1). It was divided into three main sections: student information regarding mind mapping and the associated assignments; the effects on students of the change in assessment style due to COVID-19 control measures and the transition to online teaching; and student satisfaction with using mind maps as a learning tool and the skills acquired through practice. Responses to the questions in each section were rated on a five-point Likert scale ranging from “strongly agree” to “strongly disagree”. A further question asked students which skills they thought they had learned while working on the assignments.
After submission of the last assignment, the same survey was posted again for students with an additional open-ended question: “After completing the three mind mapping assignments, what do you think are the positive and negative aspects of the assignments? Do you have any suggestions about them?” (Survey 2). This was done to compare students’ responses across the two surveys and to measure changes in their perceptions of mind mapping through their openly shared views on the assignments.
To answer the research questions, students were asked to complete a survey after the first assignment and again after the last. The data collected for the first survey (S1) are available in Supplementary File 2, and the data collected for the second survey (S2) are available in Supplementary File 3. Students’ responses to the two surveys were then compared, as were their pass rates in the first term, before the COVID-19 pandemic, and the second term, after the start of the pandemic. Data on first- and second-term student academic performance can be found in Supplementary File 4. Survey 1 (S1; N = 53 students) and Survey 2 (S2; N = 45 students) had one independent variable: medical physics students (health physics and MRI). The dependent variable for S1 comprised student responses related to the survey objectives, while for S2 it comprised student responses related to the survey objectives plus the open-ended question. Cronbach’s alpha was used to measure the internal consistency of the survey items; it provides a measure of the reliability of the survey items and of how closely a set of items within a group are related. Factor analysis was used to test validity across the survey questions within each subset: an item with a factor loading below an absolute value of 0.4 was considered inconsistent (poorly saturated). This analysis examines the relationships among the survey questions grouped by survey objective (subset), to determine whether respondents’ answers to the questions within a subset are more closely related to each other than to the other survey questions.
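For illustration, Cronbach’s alpha for a subset of Likert items can be computed directly from the item variances and the variance of the summed scores. The sketch below uses hypothetical response data, not the study’s survey data (the study’s analysis was performed in SPSS):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subset
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (5 respondents x 4 items, coded 1-5)
demo = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 3],
])
print(round(cronbach_alpha(demo), 3))  # -> 0.948 for this toy data
```

Values of alpha above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a subset of items.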
Responses to positively worded questions were coded as follows: strongly agree = 5, agree = 4, neutral = 3, disagree = 2, and strongly disagree = 1. However, to maintain consistency, questions 1 and 2 of the second section of the survey (Appendix B in Supplementary File 1) were negatively worded, and responses to these were reverse coded. The survey data were analyzed using the SPSS software package. Frequencies (N), percentages (%), means (M), and standard deviations (SD) were used to analyze the responses. Responses to the S2 open-ended question were few (only 19 responses) and therefore unsuitable for statistical analysis. A coding technique was instead used, and similar answers were grouped into the same category. Responses were categorized as positive, negative, or suggestions for the mind mapping assignments.
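The coding scheme above, including reverse coding of the negatively worded items, can be sketched as follows; the function name and the example answers are illustrative and not part of the study’s SPSS workflow:

```python
# Likert coding as described in the text: 5 = strongly agree ... 1 = strongly disagree
LIKERT = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def code_response(answer: str, negatively_worded: bool = False) -> int:
    """Code a Likert answer; reverse code items that are negatively worded."""
    score = LIKERT[answer.strip().lower()]
    # Reverse coding (5 <-> 1, 4 <-> 2) keeps 5 meaning "favourable" on every item
    return 6 - score if negatively_worded else score

print(code_response("agree"))                          # -> 4
print(code_response("agree", negatively_worded=True))  # -> 2 (reverse coded)
```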
The means of the first and second objectives of the two surveys were tested for a significant difference using a paired-samples t-test. Finally, a chi-square test was used to determine whether there was a statistically significant difference between student pass rates in term one (before the COVID-19 pandemic) and term two (after the start of the pandemic).
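A minimal sketch of these two tests, assuming hypothetical paired survey means and hypothetical term-one/term-two pass–fail counts (the study itself ran these tests in SPSS):

```python
import numpy as np
from scipy import stats

# Hypothetical per-student objective means on the two surveys (paired samples)
s1_means = np.array([4.2, 3.8, 4.5, 3.9, 4.1, 3.6, 4.4, 4.0])
s2_means = np.array([4.4, 4.0, 4.6, 4.1, 4.3, 3.9, 4.5, 4.2])

# Paired-samples t-test on the same students measured twice
t_stat, p_paired = stats.ttest_rel(s1_means, s2_means)

# Hypothetical pass/fail counts as a 2x2 contingency table
table = np.array([[48, 7],    # term 1: pass, fail
                  [53, 2]])   # term 2: pass, fail
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"chi-square:    chi2 = {chi2:.2f}, p = {p_chi2:.4f}, dof = {dof}")
```

Note that `scipy.stats.chi2_contingency` applies Yates’ continuity correction by default for 2×2 tables; a p-value below 0.05 would indicate a significant difference in pass rates between the two terms.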