privacysavvy

Friday, June 2, 2023

Technology and Learning

Dangers of AI in education

Gary Henderson

Jun 2

I am now onto the third post in my series of AI posts following the Times' "AI is clear and present danger to education" article. In post one I provided some general thoughts (see here), while in post two I focused on some of the potential positives associated with AI (see here); now I would like to give some thought to the potential negatives. I may not cover all the issues identified in the article, but I hope to address the key issues as I see them.

The need for guardrails around AI

One of the challenges with technology innovation is the speed with which it progresses. This speed, driven by companies' desire to innovate, is so great that the potential implications often aren't fully explored and considered. Did we anticipate, for example, the potential for social media to be used to promote fake news or to influence political viewpoints? From a technology company's point of view the resultant consequences may be seen as collateral damage in the bid to innovate and progress, whereas others may see this more as a case of companies seeking profit at any cost. One look at the current situation with social media shows how we can end up with negative consequences we may wish we could reverse. Sadly, once the genie is out of the bottle it is difficult or near impossible to put back, and it seems clear from social media that companies' ability and will to police their own actions are limited. We do, however, need to stop and remember the positives of social media: the ability to share information and news at a local level in real time, connectedness to friends and family irrespective of geography, leisure and entertainment value, and a number of other benefits.

So, with a negative focus, the concern about the need for AI "guardrails" sounds reasonably well founded. But who will provide these guardrails? If it is government, for example, won't this simply result in tech companies moving to countries with fewer guardrails in place? Companies are unlikely to want to slow down by adhering to government guardrails where this may mean ceding advantage to their competitors. In a connected world it is all the more difficult to apply local restrictions, especially as it is often so easy for end users simply to bypass them. And if it is government, are governments necessarily up to date, skilled and impartial enough to make the right decisions? There is also the issue of the speed with which legislation and guardrails can be created: the related political processes are slow, especially when compared with the advancement of technology, so by the time any laws are near to being passed the issues they seek to address may already have evolved into something new. To be honest, the discussion of guardrails goes beyond education and applies to every sector AI will impact, which is likely to be most if not all sectors of business, public services, charities and more.

Cheating

There has been lots of discussion of how students might use AI solutions to cheat, with risks to the validity of coursework being particularly notable. There is clearly a threat here if we continue to rely on students submitting coursework they have developed on their own over a period of time. How do we know it is truly the student's own work? The only answer I can see is teacher professional judgement and the questioning of students, but this approach isn't scalable. How can we ensure that teachers across different schools and countries question students in the same way, and make the same efforts to confirm the origin of student work? The moderation and standardisation processes used by exam boards to check that teacher marking is consistent across schools won't work here. We will also need to wrestle with the question of what it means for submitted work to be the student's "own" and "original" work. Every year students submit assessments, more and more gets written online, and now AI adds to the mix; with this growing wealth of text, images and more, the risk of copying, whether purposeful or accidental, continues to increase. The recent court cases involving Ed Sheeran are, for me, an indication of this. When writing and creating was limited to the few, plagiarism was easy to deal with, but in a world where creativity is "democratised", as Dan Fitzpatrick has suggested will occur through the use of AI, things are not so simple.

Conclusion

The motives of tech companies for creating AI solutions may not always be in the best interests of users. They are, after all, seeking to make money, and in the iterate-and-improve model there will be unintended consequences. Yet the involvement of government to moderate and manage this innovation isn't without its own consequences, including where some governments' motives may themselves be questionable.

Looking at education, the scalable coursework assessment model has worked for a long time, but AI now casts it into question. Was its adoption about being the right way to measure student learning and understanding, or simply the easiest way to do this reliably at scale?

Maybe the key reason AI is seen as a threat is that, if we accept it is unavoidable, it requires us to question and critique the approaches we have relied on for years, for decades and even for centuries.

https://techandlearning.wordpress.com/2023/06/02/dangers-of-ai-in-education/
