What We Can Learn from China About Using AI Assessment in Schools
China may be leading the race to implement artificial intelligence in schools. Chinese schools already use AI to evaluate student behavior and learning.
The country has developed an ambitious roadmap that aims not only to launch AI in education but eventually to extend it to every other industry – and the world – by 2030. Two of the assessment initiatives are already in place, and they have serious implications for the use of AI in schools.
Facial recognition software
Every thirty seconds, a ceiling-mounted camera scans the room, evaluates the emotions on students’ faces, and identifies what task each student is doing. Called an “intelligent classroom behavior system,” the artificial intelligence program monitors every action and transaction, including checking out library books and paying for cafeteria meals. It even records attendance for the teacher.
Although surveillance companies have received pushback over their intrusion into the privacy of minors, China may continue the practice. Monitoring student behavior in schools is a precursor to the “social credit” system being implemented for Chinese citizens — those who obey the law and conduct themselves honorably earn social credits. Even minor infractions cause citizens to lose credits and privileges, such as the ability to purchase airline tickets.
Western countries are far less tolerant of surveillance in daily life, but their schools already rely on AI to monitor what students type on keyboards and use cameras to capture students’ movement around the building. Administrators routinely review digital recordings of altercations between students to maintain school-wide safety and security. Like the intelligent classroom behavior system, these cameras can verify attendance.
While a social credit system may not appeal to Americans, schools routinely reward positive student behavior, such as compliance with agreed-upon rules.
AI grading systems
Teachers have long relied on computer-assisted grading. Optical mark readers (OMRs), first developed in the 1930s, have been a reliable tool for grading countless multiple-choice answers bubbled into tiny ovals and scanned electronically. They were never effective, however, for grading subjective responses such as essay questions.
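The mechanics of OMR scoring are easy to picture: each mark the reader detects is compared against an answer key and the matches are tallied. A minimal sketch (the answer key and responses here are hypothetical):

```python
def score_omr(answer_key, responses):
    """Score one bubbled answer sheet against the key.

    answer_key: correct choices, e.g. ["B", "C", "A", "D"]
    responses:  marks read from the sheet; None means the reader
                found no mark (or an ambiguous double mark).
    """
    correct = sum(1 for key, mark in zip(answer_key, responses)
                  if mark is not None and mark == key)
    return 100 * correct / len(answer_key)  # percentage score

# Hypothetical sheet: three of four bubbles match the key.
print(score_omr(["B", "C", "A", "D"], ["B", "C", "A", None]))  # 75.0
```

This rigidity is exactly the limitation the paragraph above describes: any answer that is not an exact match to the key scores zero, which is why OMRs cannot handle essays.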
China, however, is developing AI grading systems designed to evaluate essays and other written responses. More than a spellchecker, the software considers structure, style, and overall theme. Then it assigns a grade and offers recommendations for improvement.
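The article does not reveal how the Chinese software works internally, but a toy version of automated essay scoring can illustrate the idea: combine crude proxies for length, structure, and vocabulary into a single score. Everything below – the features, the weights, the 100-point cap – is an invented sketch, not a description of any real grading system:

```python
def grade_essay(text):
    """Toy essay score (0-100) from invented structure/style proxies."""
    words = text.split()
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    unique = {w.strip(".,;!?").lower() for w in words}
    vocab_variety = len(unique) / max(len(words), 1)  # 0.0 to 1.0
    # Invented weights: up to 40 points for length, 20 for
    # paragraphing, and 40 for vocabulary variety.
    score = (min(len(words) / 10, 40)
             + min(10 * len(paragraphs), 20)
             + 40 * vocab_variety)
    return round(min(score, 100), 1)
```

A real system would go far beyond counts like these, evaluating grammar, coherence, and theme, but even this toy shows why skepticism is warranted: such a score rewards surface features, not understanding.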
The AI grading software used in China reportedly matches human graders about 92 percent of the time, but Westerners tend to be skeptical of claims like this. It is difficult to imagine that artificial intelligence can grade better and more accurately than a classroom teacher. That skepticism, however, may delay early adoption, leaving the United States behind in optimizing AI in the classroom.
China has a long history of economic and educational disparity, and the gap between the haves and have-nots widens each year. AI programs that level the playing field may help close that chasm – or they may bring about 1984-like conditions beyond anything Orwell imagined.
The implications of AI-assisted assessment are huge, and as we look to adopt similar programs in the United States, we must consider their impact.