Assessment and feedback practices in higher education

Published: 2019/12/11. Number of words: 2673

Select one specific teaching issue that you’ve identified in your teaching development plan. Investigate academic and professional HE literature and resources, including some specific to your discipline or profession, for evidence and guidance regarding effective practice relating to that issue. Consider the implications of this evidence for your own practice and explain how you intend to take it forward (approx 1500 words).

1 Assessment Practice Literature Review

This section investigates the academic literature and summarises the key research evidence regarding assessment and feedback practices in higher education. From the literature, it is apparent that there is a growing body of research into assessment practices in higher education, which can form the basis of a robust planning and practice framework for my discipline.

In general, the assessment of students can be an influential educational instrument that can assist both teaching and learning. In teaching, assessments can be used to determine whether students have achieved the desired learning outcomes and to identify whether the course material has been effectively presented. In learning, assessments have a vital role to play in encouraging and helping students to organise their learning and develop greater study skills (Fleming, 2008).

On the other hand, there is evidence that assessments may have a negative effect on both students’ learning and achievement if they are poorly constructed (Biggs, 2003). Inappropriate assessment design can undermine its value as a strategy for teaching and learning (Boud et al., 2001) and encourage students to adopt a superficial approach to learning (Boud et al., 1999). Well-designed assessments, however, can offer students an overview of their learning environment and identify what they need to focus on and what they need to do to master the material (Biggs, 2003; Fleming, 2008). In addition, appropriate assessments can motivate students to adopt a more in-depth learning approach; the converse is true for assessments that are poorly designed (Bloxham, 2007).

Several vital criteria must be considered when designing assessments in order to avoid unintended, negative consequences. Assessment methods should involve active and long-term engagement with learning tasks (Ramsden, 2003). Clear assessment requirements can encourage the intrinsic motivation of learners (Mortiboys, 2010). Assessing work that falls outside the curriculum produces unrelated or counter-productive tasks of no value (Biggs, 2003). An effective approach, therefore, is to determine the learning outcome first and then identify the assessment approach related to that outcome.

There is evidence that students spend most of their available study time just before an examination or coursework submission (Gibbs et al., 2004). Moreover, early examinations or coursework submissions can influence students’ commitment to learning the rest of the course material. The timing of an assessment can therefore have a considerable impact on students’ approach to learning. Overloading students with excessive assessment tasks can also discourage them from adopting in-depth learning approaches (Boud et al., 1999). Careful assessment preparation is needed to reduce students’ anxiety, which might otherwise encourage a superficial approach to learning (Bloxham, 2007). Finally, self-assessment should be encouraged in higher education: it enhances students’ learning and equips them with the skills to use assessment tasks for lifelong learning (Boud et al., 1999).

2 Towards Designing Effective Assessments in my Higher Academic Practice

In 1956, Bloom developed a cognitive development taxonomy showing how to identify critical thinking skills by progressing from lower-order thinking skills to higher-order skills (Bloom et al., 1956). In the first assignment, which was to provide a poster on technologically-enhanced learning, it was argued that most theories associated with learning and teaching have their basis in Bloom’s taxonomy. Moreover, Anderson et al. (2001) argue that this taxonomy, revised to span remembering, understanding, applying, analysing, evaluating and creating, can help tutors design a well-constructed assessment. Computer science tutors can utilise Bloom’s taxonomy for programming assessment (Thompson et al., 2008), and for software design and analysis, in order to apply a specific cognitive process to a specific type of knowledge. In my own profession I have started using Bloom’s taxonomy table to design effective assessments, taking into account the concepts and standards of assessment design illustrated above. Table 1 shows Bloom’s taxonomy framework for assessment, while Appendix 1 gives an example.
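As a minimal illustrative sketch of this idea for programming assessment, the revised taxonomy can be encoded as a mapping from cognitive level to an example task. The level names follow Anderson et al. (2001); the example tasks are my own hypothetical illustrations, not taken from Table 1 or Appendix 1:

```python
# Illustrative sketch: levels of the revised Bloom taxonomy
# (Anderson et al., 2001) paired with hypothetical example
# programming-assessment tasks, ordered from lower- to
# higher-order thinking skills.
BLOOM_LEVELS = [
    ("remember",   "State the output of the given print statement."),
    ("understand", "Explain, in your own words, what this loop computes."),
    ("apply",      "Use the provided sorting function to order a list of records."),
    ("analyse",    "Trace this recursive function and identify the bug."),
    ("evaluate",   "Compare two implementations and justify which is better."),
    ("create",     "Design and implement a program that meets the specification."),
]

def tasks_up_to(level: str) -> list[str]:
    """Return example tasks from the lowest level up to `level`,
    reflecting the progression from lower- to higher-order skills."""
    names = [name for name, _ in BLOOM_LEVELS]
    cutoff = names.index(level) + 1
    return [task for _, task in BLOOM_LEVELS[:cutoff]]
```

For instance, an assessment pitched at the "apply" level would draw on the first three task types, ensuring that higher-order tasks always build on lower-order ones.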


Assessment techniques should verify that the work has genuinely been done by the student concerned, as well as encourage higher-level learning by constructing assessments for ‘lifelong-learning capacity’ (Bloxham, 2007). Adopting Bloom’s taxonomy for assessment can meet these demands: assessment tasks designed to progress from lower-level to higher-level thinking can increase students’ confidence in their own work, secure higher-level learning skills, and encourage metacognitive knowledge that endures.

3 Towards Providing Good Feedback in my Higher Academic Practice

Providing feedback is a significant aspect of assessment practice in improving achievement (Gibbs et al., 2004). Tutors should therefore devote considerable time to the nature of the feedback they provide. They should identify issues for development and encourage students to concentrate on these rather than simply on grades (Bloxham, 2007). There is evidence that peer groups, student self-assessment and online computerised assessments can generate rapid or instantaneous feedback and improve students’ overall understanding of the relationship between the criteria and the desired performance for a particular assessment.

In my profession, I deal with first year students who need attention and advice from the tutor to help them develop suitable strategies for future academic success. Many problems that challenge first year students can be solved through effective assessment practices. In fact, the success of assessment in higher education may be strongly correlated with early feedback, especially for learners who doubt their capability to succeed (Yorke, 2005). I design my coursework to include formative assessment tasks during the lab sessions, to assist in clarifying the objectives, criteria and standards. I provide students with feedback in order to relate their lab tasks to what is required in the coursework, while I defer a summative assessment until the end of the semester, thereby permitting students to enhance and build on their future work.

My procedure works as follows: students are given the assessment tasks in the early stages of the course and can work through these tasks along with the lab tasks and try to relate them to what is to be assessed. I provide students with high-quality information about their performance. As a result, students can reflect on their learning and fill any gaps that may exist between current and desired performance. In the ninth and tenth weeks I start to assess students in the lab. This involves checking their ability to use the software and, through demonstrations, showing that they understand the work. At the same time, I continue to give students feedback on their performance and give them the opportunity to improve their work before the submission date. Finally, they submit their coursework at the end of the semester for marking. Appendix 2 gives details of my approach.

3.1 Technology-Enhanced Feedback

‘New media’ can offer a variety of methods for giving or constructing learning opportunities. They provide new methods to access and combine information, and the opportunity for students to work cooperatively in groups. There is now no need for students to work in isolation as they belong to an electronic ‘community of learners’ (Collis, 1998). Laurillard (2002) argues that learning using ‘new media’ demands a clear modification of assessment standards and criteria to take into account the possibility of utilising the new technology available.

Yorke (2005) suggests that technology can enhance the feedback given to students by providing a way for students to work in groups and through collaborative learning. Adopting such settings can motivate students to cooperate, communicate, give and receive feedback, and promote lifelong learning (Keppell et al., 2006).

In my academic practice, we use a virtual learning environment to give students their coursework tasks, links to extra online activities related to the assignment tasks, timely and sufficient feedback after each discussion held in the lab session, and exemplars of performance. Assessing students in this way can enhance assessment results and give students lifelong learning experiences. Dropbox (Dropbox.com), as a collaborative sharing tool, has been used to share files and add related good-quality material.

3.2 Technology-Enhanced Marking

As stated above, assessment practices have been affected by advances in technology, from providing online assessment to immediate feedback. Technology can also be used for marking assessments.

It is important to ensure that the marking procedure is efficient and effective in assessing students’ performance. Assessment tasks should be designed so that the tutor’s marking is transparent and reliable. Reliability means that different tutors give the same mark to an assignment, or that the same tutor gives a consistent mark for particular work at different times. Transparency is achieved through clear marking criteria or schemes that ensure marking is fair and consistent across all disciplines (Bloxham, 2007).

In my academic practice, I deal with large groups of students, so when marking their work it is very difficult to achieve these two criteria. In addition, to reduce the chance of plagiarism, the same assessment tasks are given to all students but with different inputs depending on each student’s university number. Although this strategy is effective in preventing students from copying each other’s work, it requires considerable effort from the tutor to mark tasks reliably and transparently. We therefore use computer-based marking, developing special-purpose software to mark students’ assessments and so ensure consistent marks and fairness between students. Using such software verifies the correct completion of the assessment tasks as well as speeding up the marking process. Figure 1 shows a screenshot of this software. As can be seen, the student’s university number is provided, and the software gives answers for questions ‘a’ to ‘h’.
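The idea behind such marking software can be sketched in a few lines. This is an assumption-laden illustration, not the actual in-house tool: the hashing scheme, the question labels ‘a’ to ‘h’, and the stand-in task logic are all my own choices for the sake of the example. Each student’s inputs are derived deterministically from their university number, the model answers are computed from those inputs, and the submission is compared against them:

```python
import hashlib

def student_inputs(university_number: str, n_questions: int = 8) -> list[int]:
    """Derive deterministic, per-student input values from the
    university number, so every student gets a distinct but
    reproducible version of the same tasks."""
    digest = hashlib.sha256(university_number.encode()).digest()
    return [digest[i] % 100 for i in range(n_questions)]

def expected_answers(inputs: list[int]) -> dict[str, int]:
    """Compute the model answers for questions 'a' to 'h' from the
    student's inputs (squaring is a stand-in for the real task logic)."""
    return {chr(ord("a") + i): x * x for i, x in enumerate(inputs)}

def mark(university_number: str, submitted: dict[str, int]) -> int:
    """Return the number of correct answers, applying the same
    transparent, reliable marking procedure to every student."""
    model = expected_answers(student_inputs(university_number))
    return sum(1 for q, ans in model.items() if submitted.get(q) == ans)
```

Because the inputs are a pure function of the university number, any tutor (or the software, run at any time) reproduces the same model answers for a given student, which is exactly the reliability property described above.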


References

  1. Abiola-Ogedengbe, A. (2011) Inquiry-Based Learning in Experimental Sessions: Strategies towards conducting more effective Experimental Laboratory Sessions with Engineering Undergraduate Students, 2-17
  1. Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives. New York: Longman.
  1. Baume, C. (1999). Practice Guide 8: Developing as a Teacher. The Open University (support material for H852 Course Design in Higher Education module of the Postgraduate Certificate in Learning & Teaching in Higher Education).
  1. Biggs J. (2003). Aligning teaching and assessing to course objectives. Teaching and Learning in Higher Education: New Trends and Innovations. University of Aveiro, 13 – 17 April 2003.
  1. Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H. and Krathwohl, D.R. (1956). Taxonomy of educational objectives Handbook 1: cognitive domain. London, Longman Group Ltd.
  1. Bloxham, S., and Boyd. P. (2007). Developing effective assessment in higher education: A practical guide. Maidenhead: Open University Press.
  1. Boud, D. (Ed.). (1988). Developing Student Autonomy in Learning (2nd Edition). New York: Kogan Page.
  1. Boud, D., Cohen, R. and Sampson, J. (1999). Peer Learning and Assessment, Assessment & Evaluation in Higher Education, 24:4, 413-426
  1. Boud, D., Cohen, R. & Sampson, J. (Eds) (2001). Peer learning in higher education: learning from and with each other. London: Kogan Page.
  1. Centre for Academic Development (2009). Hot Tips for Lab Teaching. Available at: http://www.cad.auckland.ac.nz/
  1. Collis, B. (1998) New didactics for university instruction: why and how? Computers and Education 31 pp.373-393
  1. Davies, C. (2008) Learning and Teaching in Laboratories. Engineering Subject Centre. Available at: engsc.ac.uk
  1. Davis B. (nd), Practical Ideas for Enhancing Lectures, SEDA Special 13
  1. Ferman, T. (2002). Academic professional development: What lecturers find valuable. The International Journal of Academic Development, 7 (2), 146–158.
  1. Fleming, D. L. (2008). Using Best Practices In Online Discussion And Assessment to Enhance Collaborative Learning. College Teaching Methods & Styles Journal, 4(10), 21 – 40.
  1. Gibbs, G. and Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, pp. 3-31.
  1. Grace, S. and Gravestock, P. (2009), Inclusion and Diversity. New York, NY: Routledge.
  1. Keppell, M., Au, E., Ma, A. and Chan, C. (2006). Peer learning and learning‐oriented assessment in technology‐enhanced environments, Assessment & Evaluation in Higher Education, 31:4, 453- 464
  1. King, H. (2004.) Continuing professional development in higher education: what do academics do? Educational Developments, 5(4), 1-5.
  1. Knowles, M (1970), The Modern Practice of Adult Education: from pedagogy to andragogy. Cambridge: Cambridge Book Company.
  1. Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies, 2nd edition, London: RoutledgeFalmer.
  1. Lindblom-Ylänne, S., Trigwell, K., Nevgi, A., & Ashwin, P. (2006). How approaches to teaching are affected by discipline and teaching context. Studies in Higher Education, 31 (3), 285-298.
  1. Mayer, R. E. (2002). A taxonomy for computer-based assessment of problem solving, Computers in Human Behaviour, 18(6), 623–632.
  1. Mortiboys, A. (2010) How to be an Effective Teacher in G. Gibbs, Habeshaw and T. Habeshaw (1988), 53 Interesting Things to do in Your Lectures. Trowbridge: Cromwell Press Ltd.
  1. Race, P. (2005) 500 Tips on Open and Online Learning: 2nd edition London: Routledge
  1. Ramsden, P. (2003.) Learning to teach in higher education. London: Routledge Falmer.
  1. Thompson, E., Luxton-Reilly, A., Whalley, J., Hu, M. and Robbins, P. (2008). Bloom’s Taxonomy for CS assessment. In Proceedings of ACE 2008, Wollongong, NSW, Australia. 155-162.
  1. Yorke, M. (2005). Formative assessment and student success. In: Quality Assurance Agency Scotland (Ed.). Reflections on assessment: Volume II. Mansfield: Quality Assurance Agency, 125-137.

 

Appendix

1) Example of Designing Assessment using Bloom’s Taxonomy

 


2) Formative and Summative Feedback
