
Speaker Abstracts

Associate Professor Phillip Dawson, Deakin University

Hacking and online exams

A range of new computer-based tools allows assessors to create more authentic, scalable and equitable exams. Where pen-and-paper tests were static, modern computer-based exams can involve computer simulations, rich media and instant feedback. However, along with these additional affordances for assessors, computer-based exams provide new attack vectors for cheating students through hardware- and software-based hacks. This presentation provides a snapshot of the current state of exam hacking, based on successful hacks that I and other researchers have conducted. The presentation covers both invigilated exam-hall exams (which have been hacked) and remote-proctored exams conducted on students’ own computers (which have also been hacked).

I predict that over the next five to ten years there will be multiple large-scale online exam incidents. These will dwarf the other academic integrity problems the field has faced to date. To prepare for this new intersection of cybersecurity and academic integrity, this presentation argues that we need to adopt a ‘hacker mindset’. We need to ask “how can I break it?” alongside pursuing the traditional positive missions of the field of academic integrity. We need to invite others to try to break our new online exam approaches. Most importantly, we need to understand that, as with computer security, we will likely never achieve absolute security for computer-based exams. We therefore need to treat cybersecurity as another factor in assessment design, to be traded off alongside reliability, validity and authenticity.

Dr Mathew Hillier, Monash University and Dr Andrew Fluck, University of Tasmania

The assessment triad and e-exams: integrity, authenticity and scalability

The assessment triad of integrity, authenticity and scalability has been thrown into stark relief in recent times. The rate of contract cheating publicised in the media is leading institutions towards greater use of exams. Meanwhile, the increasingly ICT-intensive world in which we live leaves pen-on-paper testing looking antiquated. Yet institutions are struggling to find a way forward that works in the large-scale, budget-constrained world of higher education.

Many will have heard the saying from complex technical problem solving, “Good, quick, cheap – pick two”, and it appears to ring true for those seeking to modernise high-stakes assessment in higher education. In attempting to drag exams into the 21st century, institutions are looking at computerising the exam room. However, when selecting an e-exam solution from a managerial or technical-support perspective, there is a tendency to focus on scalability and integrity, with the capacity for authentic assessment often left out in the pedagogical cold. This could lead an institution to merely computerise poor testing pedagogies for the foreseeable future. On the other side of the campus, the needs of graduates entering the modern world are driving academic policies that encourage more authenticity in assessment, with integrity also high on the minds of academic leaders. Yet solutions that meet these needs are rarely scalable when applied to the increasingly large classes of today.

This session will explore how we can work towards assuring integrity, authenticity and scalability for examinations. The experiences of an Australian-funded project on authentic e-exams, together with stories from European institutions, will highlight issues and possible solutions. Much remains to be done, and significantly more investment in shared research and development is needed to bring the vision to fruition. Join us in finding a solution.

Dr Ann Rogerson, University of Wollongong

It is not just about the method of assessment: The importance of the question asked and the instructions given to ensure integrity

This presentation discusses the need for rigorous framing of assessment questions (whether written, online, presentation-based or via examination) as the critical foundation for academic integrity. While a properly proctored examination may ensure students are assessed on their own efforts, poorly designed examination questions can cause as many issues as poorly framed written assessment tasks: for example, they may encourage students to memorise and regurgitate facts rather than apply knowledge to a contextual situation that demonstrates deeper learning. Maintaining integrity also requires new questions to be written and designed rather than reusing material from previous instances or relying on questions from textbooks.

Effective questions allow students to demonstrate learning. They also ask students to demonstrate how a concept applies in a given disciplinary context regardless of whether it is a written assessment task, presentation or examination. Well-structured and framed questions also make unoriginal work easier to detect.

Proposing class- or hall-based examinations as the sole means of ensuring integrity limits creativity in designing authentic and meaningful assessments whose well-framed questions can evaluate learning more robustly. This can involve constructive experimentation in assessment design, including the elimination of examinations in some disciplines. The presentation outlines faculty, institutional and sector-level collaborative approaches to new and refined methods of structuring questions and assessments, providing options that move beyond a reliance on examinations.

Professor Richard James and Neil Robinson, University of Melbourne

The Examination Lifecycle: Key themes and observations from a University of Melbourne perspective

A comprehensive review of the University of Melbourne’s examination processes was conducted in late 2016. The resulting report identified the distinct stages of the Examination Lifecycle and opportunities for improvement. This presentation will outline the key themes and findings.

Participants will gain insight into the key elements, challenges and risks across the entire Examination Lifecycle. The presentation will cover attempts to strengthen each stage of the examination process.

Associate Professor Fiona Henderson, Victoria University

Associate Professor Tracey Bretag (UniSA), Dr Rowena Harper (UniSA) and Associate Professor Cath Ellis (UNSW)

Contract cheating and assessment design: Exploring the connection

Data from two large national surveys of teaching staff and students suggest that serious forms of cheating are widespread and entrenched across the Australian higher education sector. This panel, comprising project leaders and members from the OLT-funded project Contract Cheating and Assessment Design: Exploring the Connection, will report key findings and analysis from the largest survey dataset in the world on the topic of contract cheating (n = 15,047), and from a large collection of procurement requests placed on online outsourcing platforms.

The presentation will share insights regarding students’ attitudes toward, and experiences with, a range of outsourcing behaviours, as well as the related individual, contextual and institutional factors. It will make the case that while assessment design is an important part of our response to contract cheating, it alone is not enough to address the problem. We consider what other factors are at play, including the vital role of the teacher-learner relationship and the nature of the conversations we have with our students about their learning and about academic integrity. We will explain why a return to traditional examinations is unlikely to provide a meaningful solution to contract cheating and may in fact create even more problems.

This topical, and perhaps contentious, session is expected to prompt lively audience participation and debate. We welcome this interaction.

Dr Christine Slade (Presenter) and Associate Professor Susan Rowland, University of Queensland

Developing Student Identity Verified Assessment: A response to contract cheating

Contract cheating websites are a serious threat to academic integrity in universities because they challenge the authenticity of student authorship in assessment. Current plagiarism detection strategies do not catch contract cheating because purchased assessment responses are individualised and users’ details are hidden behind sophisticated firewalls. The default response is to increase the use of invigilated examinations, but we wanted to collaboratively design ways of improving the robustness of student identity verification in other types of assessment tasks.

In February 2017, we asked academics, curriculum designers, learning designers and other staff experienced in assessment design from across the university sector to participate in developing a collection of high-stakes assessment examples (other than exams) that demonstrate ways to strengthen the verification of student identity in undertaking these tasks. We ran two workshops, one in Brisbane at The University of Queensland and the other in Melbourne at Deakin University, funded by the Asia Pacific Forum on Educational Integrity (APFEI). Fifteen universities were represented, and a collection of assessment case studies was developed as immediate practice examples that can further enhance authentic assessment practices for academics and supporting staff in the sector.

This presentation will outline the workshop process, overview the common concerns raised and highlight the strategies identified through the assessment re/design process.

Associate Professor Gareth Denyer and Dr Dale Hancock, University of Sydney

Accurate Detection of Cheating Rates in Exams over Two Decades: Are Heads Still in the Sand?

We have been measuring and combatting cheating in multiple-choice question (MCQ) exams continuously for over 20 years. Our initial engagement arose from the frustration of dealing with a university administration that refused to believe that cheating occurred. They were not convinced by data that merely revealed similarity in the answer patterns of students sitting exams in close proximity. Even when we applied published analysis methods that involved pattern matching on wrong answers (famously revealing ‘islands of corruption in a sea of honesty’), the administration was unmoved.
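
To make this concrete, the following is a minimal sketch, not the published method, of one way wrong-answer pattern matching can work: for every pair of students, count the questions on which both chose the same incorrect option. Unusually high counts between students seated close together suggest copying. The data layout and function name here are illustrative assumptions.

```python
from itertools import combinations

def shared_wrong_answers(responses, key):
    """Count, per student pair, the questions where both picked the same wrong option.

    responses: {student_id: list of chosen options}; key: list of correct options.
    (Illustrative sketch only, not the authors' code.)
    """
    scores = {}
    for a, b in combinations(responses, 2):
        scores[(a, b)] = sum(
            1
            for ra, rb, k in zip(responses[a], responses[b], key)
            if ra == rb and ra != k  # identical answer, and it is wrong
        )
    return scores

# Example: s1 and s2 share two identical wrong answers; s3 shares none.
key = ["A", "C", "B", "D"]
responses = {"s1": ["A", "B", "D", "D"],
             "s2": ["A", "B", "D", "C"],
             "s3": ["A", "C", "B", "D"]}
print(shared_wrong_answers(responses, key))  # {('s1', 's2'): 2, ...}
```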

That drove us to make multiple versions of our MCQ papers, with the options to each question appearing in a different order in each version. Our innovations were an easy method for generating the multiple versions in standard word-processing software and an equally straightforward process for mark computation. Not only did the system prevent cheats from prospering (a student who copies a neighbour’s correct choice will, because the option orders differ, by definition record a wrong answer on their own paper); it also allowed us to formally quantify rates of cheating and gave us watertight evidence of who had cheated off whom.
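
A hypothetical sketch of the multiple-versions idea follows (the authors generated their versions in word-processing software, not code, and all names here are illustrative). Each version shuffles the options of every question and records its own answer key, so each response sheet is marked against the key of the version it was printed from.

```python
import random

def make_version(questions, seed):
    """Build one shuffled exam version plus its answer key.

    questions: list of (stem, options, index_of_correct_option).
    Returns (paper, key), where key[i] is the position of the correct
    option within this version's shuffled options for question i.
    """
    rng = random.Random(seed)  # the seed identifies the version, reproducibly
    paper, key = [], []
    for stem, options, correct in questions:
        order = list(range(len(options)))
        rng.shuffle(order)
        paper.append((stem, [options[i] for i in order]))
        key.append(order.index(correct))  # where the right answer landed
    return paper, key

def mark(chosen_positions, key):
    """Score a response sheet against its own version's key."""
    return sum(c == k for c, k in zip(chosen_positions, key))

# Two versions of the same paper: the same letter position is rarely
# correct in both, so copying a neighbour's positions scores at chance.
questions = [("2 + 2 = ?", ["3", "4", "5", "6"], 1),
             ("H2O is?", ["salt", "water", "air", "sand"], 1)]
paper_a, key_a = make_version(questions, seed=1)
paper_b, key_b = make_version(questions, seed=2)
```

Because a copier’s letter choices are marked against their own version’s key, systematic agreement with a neighbour’s key rather than their own both depresses the copier’s mark and fingerprints who copied from whom.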

Our data show that 1% of students are major cheats, copying nearly the entire MCQ section (sometimes >70 questions) from their neighbours. About 5% of students plagiarise discrete sections of a paper (for example, a part of the course that they may not have revised, or the questions near the end of the exam). Around 10% of the class copy sporadically (i.e., on specific difficult questions). These rates have remained constant over two decades, except recently, when central administration decided to run exams in tiered lecture theatres.

In our presentation, we will describe the impact that our research has had on examination practices and policy at the University of Sydney, and we will discuss the wider implications of the lack of plagiarism detection in MCQ exams.

Dr Lesley Sefcik, Steve Steyn and Dr Michael Baird, Curtin University and Engineering Institute of Technology, with Dr Connie Price, Associate Professor Jon Yorke, Dr Steve MacKay and Kim Li

Remote proctoring: a way forward to ensure the integrity of online tests?

The increase in online courses and assessments has made education more widely accessible; however, it has also increased the risk of cheating and fraud during online examinations. Automated remote proctoring may help reduce this risk by providing a tool for educators to asynchronously authenticate identity and monitor students during online tests. Remote proctoring software can monitor audio, video and the screen in a student’s work environment during online examinations and flag behaviour that may indicate academic dishonesty. This provides educators with greater assurance of assessment integrity while allowing students to work in a convenient location of their choosing.

Our project aims to develop a cost-effective approach that can be used widely in higher education to help assure the integrity of online tests and examinations. We will share the results of a current pilot study in which students used remote proctoring software during online tests, and we will discuss the pros and cons of the process through the lenses of the institution, the unit coordinator and the student participants.