Schools worldwide are slowly bringing AI into the education system, aiming to better tailor education to individual learners. One AI EdTech application that is gaining traction is the AI teaching assistant. Such technology is designed to ease the workload of human teachers.
Nowadays, with an average class size of 21 in the US, teachers simply don’t have the time to develop a highly personalized learning experience for each student. So, the general idea is that these AI-driven teaching assistants will both remove mundane, repetitive tasks from the teacher’s plate and improve learners’ satisfaction, reducing dropout rates in the process.
But, in order for AI assistants to be broadly accepted by society, they need to communicate well and actually be effective. In this article, we will examine what responsibilities AI assistants can take on and highlight how this can change the education sphere.
Essay Scoring
Essay scoring is one of the most time-consuming tasks that a teacher can undertake – and is often the reason why teachers bring their work home with them. With piles upon piles of essays on a teacher’s desk, especially in a higher education setting, it becomes almost impossible to provide detailed feedback and a fair grade to every single paper.
This is where AI essay scoring comes in. Machine-scoring of essays has been around for quite a while, but it has limitations. It was proven possible to “game the system” by submitting essays filled with long sentences and multi-syllable words. Even if such an essay was filled with nonsense, it would get a higher score than a great essay with shorter sentences.
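To make the “gaming” problem concrete, here is a toy sketch of a scorer that rewards exactly the surface features early machine-scoring systems were criticized for over-weighting: long sentences and long words. The scorer, its weights, and the sample essays are all invented for illustration and do not represent any real scoring engine.

```python
# Toy illustration (not any real scoring engine): a naive essay scorer
# that rewards long sentences and multi-syllable words -- the surface
# features that made early systems easy to game.

def naive_score(essay: str) -> float:
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = essay.split()
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)          # words per sentence
    avg_word_len = sum(len(w) for w in words) / len(words)  # proxy for syllables
    return avg_sentence_len * 0.5 + avg_word_len * 2.0

concise = "Dogs are loyal. They guard homes. They help people."
gibberish = ("Multisyllabic perambulations notwithstanding incomprehensible "
             "verbosity exemplifies grandiloquent obfuscation continuously")

# The nonsense run-on outscores the clear, short-sentence writing.
print(naive_score(gibberish) > naive_score(concise))  # True
```

Because nothing in the score measures meaning, a single unpunctuated string of long words beats coherent prose, which is precisely the weakness the article describes.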
As AI technology has continued to develop, essay rating systems have become better at reading and evaluating writing. Yet, it is still not perfect and is met with plenty of criticism for assigning “biased” grades. So, essay scoring is not yet an AI EdTech feature that’s ready to roll out on a full scale. In the next few years, though, we hope to see the NLP algorithms become even stronger and, hopefully, eliminate bias.
Forum Monitoring
Online forums or “discussion boards” have become the new norm in higher education. Even if students are taking an in-person class, they will likely still be required to post to the forum and respond to each other. And for online courses, discussion board posts are one of the most common assignments.
But, as things stand, many educators consider online forums one of the least valuable components of a class – and, simultaneously, one of the most challenging assignments to track and grade. There are simply too many posts to read and evaluate.
That’s not to say that forums are without their merit; they increase class participation and encourage even the most introverted students to voice their opinion. However, the conversations held are usually vague, dull, and repetitive. Very rarely is thoughtful, substantive conversation found within a class forum.
The lack of meaningful posts is largely due to a forum’s lack of quality control. Typically, a class forum is free-functioning; the instructor might read and grade a post, but they do not engage with it. And the reason behind that is simple: there simply isn’t time. Forums are often found in higher education classes, in which a professor might be teaching several class slots with 30 students each. With 2 forum assignments per week, this amounts to hundreds of initial posts to review – not to mention branching replies and conversation threads.
AI forum monitoring, while not firmly established in the education sphere, has the potential to transform how discussion board posts are done. When the algorithms behind AI essay scoring are fully fleshed out, they can be applied to forum monitoring. With such an application, the AI forum assistant can check that each post is relevant and meaningful, and it can perhaps even challenge students to expound further upon their ideas.
Learning Diagnostics
Diagnostic assessments are continuously performed by teachers, with the aim of identifying students’ weaknesses, strengths, skills, and knowledge before instruction begins. Such assessments are not graded, but they help teachers decide where to focus their attention during lessons.
As such, this is an education area that stands to heavily benefit from AI enhancement. The results have no impact on the student’s grades, but teachers will be presented with a wealth of information about their students – and can use the results to drive personalized instruction. Teachers will gain more time to focus on actually planning a meaningful learning experience rather than spending so much time on testing.
The good news is that AI-powered assessment systems are already here and thriving, as evidenced in Continuous AI Assessment for Learners. AI systems are able to grade diagnostic assessments in a matter of seconds and in bulk – whereas it could take a human teacher days to pore over the results. What’s more, AI systems can deliver feedback and recommendations on the curriculum and areas of study, thus making the teacher’s job much easier.
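The bulk-grading-plus-recommendation workflow can be sketched in a few lines. The answer key, topics, and student submissions below are invented sample data, not output from any real assessment platform.

```python
# Illustrative sketch (invented data, not a real assessment platform):
# grade a batch of diagnostic quizzes at once and summarize which
# topics the class missed most, so the teacher knows where to focus.
from collections import defaultdict

# Each question maps to (topic, correct answer).
answer_key = {"q1": ("fractions", "b"), "q2": ("decimals", "a"), "q3": ("fractions", "c")}

submissions = {
    "student1": {"q1": "b", "q2": "a", "q3": "a"},
    "student2": {"q1": "a", "q2": "a", "q3": "c"},
}

def grade_batch(submissions):
    misses = defaultdict(int)   # topic -> how often the class missed it
    scores = {}                 # student -> fraction correct
    for student, answers in submissions.items():
        correct = 0
        for q, (topic, right) in answer_key.items():
            if answers.get(q) == right:
                correct += 1
            else:
                misses[topic] += 1
        scores[student] = correct / len(answer_key)
    return scores, dict(misses)

scores, weak_topics = grade_batch(submissions)
print(scores)       # per-student mastery, computed instantly in bulk
print(weak_topics)  # topics the class struggled with, e.g. fractions
```

The per-topic miss counts are where the “recommendations on the curriculum” come from: the teacher sees at a glance which areas need reteaching before planning the lesson.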
Automatic Test Generation
We’ve already touched on how AI-powered systems can score assessments, but they are also capable of automatically generating test questions, essentially providing an endless question bank. For instance, a question model can be built from string elements that include two genders, four names, three product materials, and three product names. A stem template written against those elements is then transformed into an item model, and, using n-layering, different values can be substituted for the gender, name, product materials, and product names to produce distinct item sets.
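The substitution mechanism can be sketched directly. The stem wording and the element values below are hypothetical stand-ins chosen to match the counts given above (two genders, four names, three materials, three product names); the source’s actual stem and item model are not reproduced here.

```python
# Hypothetical sketch of template-based item generation. The stem and
# the element values are invented for illustration only.
import itertools

elements = {
    "gender":   ["she", "he"],
    "name":     ["Maria", "David", "Aisha", "Ken"],
    "material": ["wood", "plastic", "metal"],
    "product":  ["toy", "chair", "frame"],
}

# One stem template; substituting element values yields many items.
stem = ("{name} made a {product}, and {gender} chose {material} for it. "
        "What is the {product} made of?")

def generate_items():
    for gender, name, material, product in itertools.product(*elements.values()):
        yield {
            "question": stem.format(gender=gender, name=name,
                                    material=material, product=product),
            "answer": material,
        }

items = list(generate_items())
print(len(items))  # 2 genders x 4 names x 3 materials x 3 products = 72 items
```

One short template therefore yields 72 distinct questions; adding one more element value to any slot multiplies the bank again, which is why item generation scales so much better than hand-writing questions.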
So, why is this item generation mechanism important to the education sector? Simply put, the need for test items is far greater than the supply – a fact that is largely attributed both to the transition to computerized testing and huge changes in how 21st-century tests are administered. With AI-driven test item generation, a huge number of diverse questions can be formed – and, more importantly, they can be developed in a way that is aligned to the individual state’s learning standards.
Plagiarism Detection
In secondary and higher education, plagiarism is a huge problem. According to the Academic Research Guide Association, 95% of surveyed students have cheated or plagiarized at least once, and 40% have specifically cheated on written assignments. Automated plagiarism detection software is failing – and students have noticed.
There are many reasons that a plagiarism detector might fail to flag content – for instance, if the source hasn’t been digitized, if the essay material is translated, or if multiple sources have been mixed and matched together. A plagiarism score largely depends upon the materials available for comparison and the materials used, which is why systems will assign wildly different plagiarism scores to the same piece of written work.
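Why the same essay gets wildly different scores is easy to demonstrate with a minimal sketch: score plagiarism as n-gram overlap against a reference corpus, then vary the corpus. This is not any vendor’s algorithm, and the essay and corpora are invented for illustration.

```python
# Minimal sketch (not any real vendor's algorithm): plagiarism scoring
# as word-trigram overlap with a reference corpus. The same essay
# scores very differently depending on what the corpus contains.

def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(essay: str, corpus: list) -> float:
    """Fraction of the essay's trigrams found in any corpus source."""
    essay_grams = ngrams(essay)
    if not essay_grams:
        return 0.0
    matched = {g for source in corpus for g in essay_grams & ngrams(source)}
    return len(matched) / len(essay_grams)

essay = "the mitochondria is the powerhouse of the cell and drives metabolism"

corpus_a = ["the mitochondria is the powerhouse of the cell"]  # source digitized
corpus_b = ["cells contain organelles that produce energy"]    # source missing

print(overlap_score(essay, corpus_a))  # high: the matching source is available
print(overlap_score(essay, corpus_b))  # 0.0: same essay, nothing to match
```

The essay never changed – only the comparison corpus did – which is exactly why an un-digitized, translated, or patchwork source can sail through one checker and be flagged by another.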
It’s apparent that the current method of detecting plagiarism is wildly insufficient, which is why many organizations are racing to produce AI-driven plagiarism checkers. One such company is Copyleaks, which claims to “seamlessly detect plagiarism” and provide originality scores. Results are shown as a 1:1 comparison, with support for over 100 languages and a content comparison base of over 60 trillion web pages, and the tool has been found to be quite effective. If tools like this become the new norm, we anticipate seeing drastic reductions in plagiarism.
Final Thoughts
AI-driven teaching assistants are poised to transform the educational landscape – both in online and in-person classes and for all levels of education. Like the Internet and computers before it, AI will significantly alter the face and the function (the how, what, and why) of learning. AI-driven education is a huge topic of interest in academic and business spheres alike, and it certainly isn’t a component of the far-off future. It’s already being used as a supplemental learning tool, aiding teachers in providing personalized learning – and it will only get better from here.
To find out more, download the full White Paper “AI in Education – Looking to the Future of Learning”.