Since the beginning of the COVID-19 pandemic, educators and licensing boards have relied on EdTech proctoring and examination programs to prevent cheating in remote testing environments. But the use of this software raises three major concerns: exam integrity, procedural fairness, and the security and privacy of those taking the exam.
Accordingly, a team consisting of Avi Ginsberg from Foley & Lardner, Ben Burgess and Edward Felten from Princeton University, and Shaanan Cohney from the University of Melbourne conducted a technical analysis of the four proctoring suites used by U.S. law schools and state attorney licensing boards. By reverse-engineering each suite, they evaluated its student privacy protections and its effectiveness in detecting cheating.
Of particular interest were:
- Reduced accuracy of the facial recognition algorithms when observing members of minority groups, indicating bias in the underlying machine learning models and raising the potential for disparate treatment of individuals with certain skin tones. Similar discrepancies have previously been documented in consumer electronics and other settings.
- The risk that personal and other sensitive information could be misused or leaked when proctoring software is installed on privately owned computers. Many of the programs in use today require privileged access to devices, install monitoring services that do not automatically uninstall, and in some instances create activity logs before examinations that are shared with exam vendors.
The implications of these findings are significant for the fields of cybersecurity, data privacy, and artificial intelligence, showing that the requirements of conducting fair and secure remote exams have outpaced the capabilities of these programs.
For a deeper dive and subsequent recommendations, use the link below to download and read the full report.