How Effectively are University Students Tested? A Case Study
Testing and examining go on in higher education all the time through continuous assessments and end-of-semester examinations. The grades students score determine not only academic mobility but ultimately who gets employed in a job market that appears to be shrinking all over the world. Those charged with testing are often staff who hold higher qualifications in their subject areas but are not necessarily teaching or examination experts. Against this background, the researcher set out to find out what was happening at a selected university across three schools: Social Studies, Education, and Science. The university is fairly young, having been awarded its charter twenty years ago. The paper asked two questions: first, at what levels of Bloom's Taxonomy are lecturers asking examination questions? Second, do the level and balance of questions show growth in examining skills? The study evaluated over 1,039 questions from randomly selected examination papers obtained from the Examinations Office for the academic years 2014/15 to 2017/18 (three academic years). A guide based on the list of verbs in Anderson et al.'s (2001) revision of Bloom's Taxonomy was used to analyze the questions. Descriptive statistics were used to describe the trends in testing for each year, while ANOVA and t-tests were used to determine whether there were significant differences between numbers across categories and within categories. The results show that most examination questions are set at the two lowest levels of the taxonomy: remember and understand. In the 2016/17 and 2017/18 academic years, there were significant differences in the percentages of questions examined in these two categories. However, the study suggests that testing and examining skills do not grow merely through the practice of setting questions. There is a need for examiners to be trained so that they can set questions that discriminate effectively across the academic abilities of the students they teach.
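As an aside for readers wishing to replicate the analysis, the comparisons described above can be sketched in a few lines. The following is a minimal illustration only, using hypothetical per-paper percentages (not the study's data): a two-sample t-test comparing the shares of "remember" and "understand" questions, and a one-way ANOVA comparing shares across all six levels of the revised taxonomy.

```python
# Illustrative sketch with HYPOTHETICAL data -- not the study's actual figures.
from scipy import stats

# Hypothetical percentage of questions at each Bloom level, per examination paper
remember = [55, 60, 48, 52, 58]
understand = [30, 25, 35, 28, 26]
apply = [8, 9, 10, 12, 9]
analyze = [4, 3, 4, 5, 4]
evaluate = [2, 2, 2, 2, 2]
create = [1, 1, 1, 1, 1]

# Two-sample t-test: do the "remember" and "understand" shares differ?
t_stat, t_p = stats.ttest_ind(remember, understand)

# One-way ANOVA: do shares differ across all six taxonomy levels?
f_stat, f_p = stats.f_oneway(remember, understand, apply, analyze, evaluate, create)

print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"F = {f_stat:.2f}, p = {f_p:.4f}")
```

With data this lopsided toward the lower levels, both tests return very small p-values, mirroring the kind of significant differences the study reports between categories.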
Abosalem, Y. (2016). Assessment techniques and students’ higher-order thinking skills. International Journal of Secondary Education, 4(1), 1-11.
Airasian, P. W. (1994). Classroom assessment. New York, NY: McGraw Hill.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
Bloom, B. S. (1956). Taxonomy of educational objectives handbook: The cognitive domain. New York: David McKay.
Bloom, B. S. (1984). Taxonomy of educational objectives. Boston: Allyn and Bacon.
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4), 212-218.
Mawa, B., Haque, M. M., & Ali, M. M. (2019). Level of learning assessed through written examinations in social science courses in tertiary education: A study from Bangladesh. Journal of Teacher Education and Research, 14(1), 7-12.
Momsen, J., Offerdahl, E., Kryjevskaia, M., Montplaisir, L., Anderson, E., & Grosz, N. (2013). Using assessments to investigate and compare the nature of learning in undergraduate science courses. CBE—Life Sciences Education, 12(2), 239-249.
Phelps, R. P. (2006). Characteristics of an effective student testing system. Educational Horizons, 85(1), 19-29.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
Tremblay, K., Lalancette, D., & Roseveare, D. (2012). Assessment of higher education learning outcomes: Feasibility study report. OECD.
Weir, C., & Roberts, J. (1994). Evaluation in ELT. Oxford: Blackwell.
Copyright (c) 2020 Jane Kembo, PhD
This work is licensed under a Creative Commons Attribution 4.0 International License.