Learning to live with AI

Students: make sure it’s your own work 

A teacher uses AI to set exam questions, a student uses AI to answer them, then the teacher uses AI again to mark the answers and even give feedback. So, what has been achieved? 

This hypothetical exercise in academic futility was raised by the Chancellor of the Queensland University of Technology, and it has been exercising the minds of Trinity’s leaders, too.

“At its most extreme, such a scenario raises the question of who, if anyone, has learnt anything? And what was the purpose of the assessment?” the Chancellor asked.

Debbie Williams, Trinity’s Deputy Headmaster – Academic, has been examining the potential minefield of AI in education and has sought to answer two of the most common questions. 

Short answers first: 

Is there an ethical way to use generative AI in learning and assessment? “Probably.” 

Is the use of generative AI in learning and assessment dangerous? “Unequivocally yes!” 

Her advice is to stay well away from AI applications when doing assessments. 

“It is all too easy for a student, under pressure, to plug an assessment question into ChatGPT, receive a response, read it through, and agree that yes, that’s a reasonable position, change the odd phrase or expression, copy it onto a document, and upload it as his own submission. 

“But it’s not his own work; it does not represent his own learning. 

“He has not met the School’s expectations for academic integrity.” 

Three reasons weigh heavily against using AI in assessments, she says. 

- It renders students vulnerable to academic malpractice investigations;

- It could exclude HSC and IB students from NESA assessments, with the School also acting to “protect its own reputation for academic integrity of the highest standard”;

- It limits the kind of deep learning required for genuine academic growth.

Learning involves generating, evaluating, and exploring ideas, she says, and formulating arguments, perspectives, and positions. 

“Research, critical thinking, problem-solving, and creative production take time; learning takes place during this extended time and process. 

“Generative AI applications deny students the opportunity to work through all of these steps because they produce a synthesised thesis or completed creative work in a matter of seconds.” 

Her view that it may be possible to use AI ethically at school comes with an important qualifier. 

“It is very difficult for school-age students, under the pressure of time and competing demands, to find that ethical path and stick to it, particularly in assessment contexts. 

“We have found repeatedly that students who begin preparing an assessment response by asking ChatGPT rarely go beyond the answer it offers.” 

Trinity will continue to monitor the development of AI so it can guide students in using it ethically at the ideation stage of learning, including how to reference it.

Director of Research, Kimberley Pressick-Kilborn, said one question for all schools to consider was how to use AI to amplify productivity and impact, for teachers and learners alike.

“How can Gen AI tools be used not for providing answers but for promoting human learning?” 

She invited Dr Damian Maher and Dr Keith Heggart from UTS to talk to Trinity teachers in April about the ethics and practicalities of integrating AI in secondary teaching. 

The School will continue to use software to flag submissions likely to have been generated by AI applications. 

Senior staff will work with students to ensure they understand academic integrity principles. 

“The emphasis during these conversations and consequences is always educative: ensuring that students understand how their submission fell short,” said Mrs Williams.

“At the heart of any position adopted by Trinity is the question of what is best, in the long term, for our boys.”

 

This article originally appeared in our June 2024 edition of Trinity News, which you can view on our online digital bookshelf.

 

 
