{"id":974,"date":"2025-06-24T15:31:48","date_gmt":"2025-06-24T15:31:48","guid":{"rendered":"https:\/\/www.scaine.net\/site\/?p=974"},"modified":"2025-10-15T14:00:28","modified_gmt":"2025-10-15T14:00:28","slug":"the-ethics-of-ai-june-2025","status":"publish","type":"post","link":"https:\/\/www.scaine.net\/site\/2025\/06\/the-ethics-of-ai-june-2025\/","title":{"rendered":"The Ethics of AI (June 2025)"},"content":{"rendered":"\n<p>This short article outlines the ethics of using AI (mainly generative AI, or large language models), as of June 2025. In short, there exists no truly ethical way to engage with AI currently. Below, I cover a few of the areas that support this conclusion.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Copyright<\/h2>\n\n\n\n<p>Quite a lot to unpick here, but the crux of it is that LLMs are &#8220;trained&#8221; by shovelling data into a neural network to form the AI&#8217;s model (basically its &#8220;brain&#8221;). The more data, the more useful this becomes, so the tech companies quickly realised that they needed <em>all<\/em> the data. This led to suspicions, later <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/02\/meta-torrented-over-81-7tb-of-pirated-books-to-train-ai-authors-say\/\">found to be true<\/a>, that copyrighted material was being pirated and consumed too. In Meta&#8217;s case, that amounted to 81.7TB of books and publications. In book terms, that&#8217;s tens of millions of publications, illegally downloaded from pirate site LibGen via BitTorrent. Meta&#8217;s engineers joked that &#8220;Torrenting from a corporate laptop doesn\u2019t feel right&#8221;, but, you know, did it anyway.<\/p>\n\n\n\n<p>It&#8217;s not just books. Images are also consumed in the model training process, leading <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cg5vjqdm1ypo\">Disney and Universal to sue MidJourney<\/a>.<\/p>\n\n\n\n<p>Code was also consumed en masse. 
Open source projects typically publish their code in full, under licenses that require proper attribution. However, the tech companies consumed it all, and their models will happily regurgitate snippets of that code, completely disregarding those licenses.<\/p>\n\n\n\n<p>In short, nothing is safe. If it existed on the internet in any form, it was shovelled into the model with no attribution, never mind compensation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Driving nuclear power<\/h2>\n\n\n\n<figure class=\"wp-block-image alignright size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"450\" src=\"https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/ThreeMileIsland-640x450.png\" alt=\"\" class=\"wp-image-976\" style=\"width:392px;height:auto\" srcset=\"https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/ThreeMileIsland-640x450.png 640w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/ThreeMileIsland-480x338.png 480w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/ThreeMileIsland-768x540.png 768w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/ThreeMileIsland.png 934w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>While we know that AI interactions consume vast amounts of power, the scale of it was relatively unknown until recently. 
However, Meta <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/jun\/03\/meta-nuclear-power-ai\">made headlines<\/a> in June by striking a deal in Illinois to keep a nuclear reactor online for the next two decades, instead of closing in 2027.<\/p>\n\n\n\n<p>That deal echoes similar moves by Google in California, and Microsoft&#8217;s agreement at the end of last year to re-open the Three Mile Island plant in 2028, a site famous for its 1979 meltdown.<\/p>\n\n\n\n<p>At a time when the world is looking for greener energy and an answer to the climate crisis, AI is pushing big tech companies against the flow, encouraging vast, wide-scale power consumption and driving huge investment in non-renewable energy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Makes you Dumb<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.edtechinnovationhub.com\/news\/mit-study-shows-chatgpt-reshapes-student-brain-function-and-reduces-creativity-when-used-from-the-start\">MIT researchers<\/a> conducted a study in June this year which found that using AI (in a certain way) can make you dumb(er). To put it less antagonistically: if you <em>start<\/em> your creative process by using AI, you reshape your brain&#8217;s function to be less creative, compared to starting your creative process without it and perhaps only using it at the end to tidy up the results.<\/p>\n\n\n\n<p>This finding comes from evaluating students writing essays, and at what point in that process they engaged with AI. Those starting with AI felt disassociated from their work, and 80% of these students couldn&#8217;t quote from their submission at all. 
Meanwhile, those who started without AI felt like they took ownership of the result, even if it was later tweaked by AI, and 80% of those students <em>could<\/em> quote from their work.<\/p>\n\n\n\n<p>The paper also notes a tendency towards &#8220;linguistically bland&#8221; output from those starting with AI, which suggests to me that we&#8217;re in a race to the bottom. This is especially true if future models are trained on the internet as it is today, leading to homogenisation, where biases are reinforced, creativity stifled and mistakes compounded.<\/p>\n\n\n\n<p>At least developers benefit from all that coding advice and boilerplate code generation! Oh wait, no they don&#8217;t, <a href=\"https:\/\/metr.org\/blog\/2025-07-10-early-2025-ai-experienced-os-dev-study\/\">a recent study found<\/a>. In fact, instead of any increase in productivity, the study found that (experienced) developers were 20% LESS productive, as they had to fix errors in the AI-generated code, or simply peer review obscure code before it was fit for purpose.<\/p>\n\n\n\n<p>What&#8217;s incredible about this study is that economists forecast a circa 40% productivity increase, and AI experts a circa 35% increase. Even <em>during<\/em> the study, developers <em>thought<\/em> they were around 20% more productive, and they still believed it after the study was complete! 
But the results showed the opposite.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"385\" src=\"https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-640x385.png\" alt=\"\" class=\"wp-image-1011\" style=\"width:794px;height:auto\" srcset=\"https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-640x385.png 640w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-480x289.png 480w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-768x462.png 768w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-1536x923.png 1536w, https:\/\/www.scaine.net\/site\/wp-content\/uploads\/2025\/06\/forecasted-vs-observed-2048x1231.png 2048w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">AI Slop<\/h2>\n\n\n\n<p>The rise of AI has led to a corresponding rise in easily-accessible tools for creating social media content. This has led in turn to an enormous uptick in what&#8217;s being called &#8220;AI Slop&#8221;: easily produced, often professional-looking, but frequently weird or simply misleading content.<\/p>\n\n\n\n<p>AI Slop is often eye-catching, and Meta has doubled down on it, changing the algorithms on Meta-owned sites (such as Facebook, Instagram and Threads) to encourage the use of these tools and monetise viral content, whether images, video, or songs.<\/p>\n\n\n\n<p>Aside from the deterioration of social media feeds, the bigger concern is when these tools are used to mislead, particularly when the content is political.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Transparency<\/h2>\n\n\n\n<p>Related to AI Slop, the realism of certain content often makes it difficult to know that it&#8217;s generated by AI. 
An image or even a video can be so realistic that it looks authentic, which creates the risk of disinformation for political or personal gain. We need an equivalent of the padlock we used to see on websites to indicate a secure connection. This missing checkmark of authenticity <a href=\"https:\/\/c2pa.org\/\" data-type=\"link\" data-id=\"https:\/\/c2pa.org\/\">actually exists already<\/a>, in the form of the C2PA standard, but isn&#8217;t in widespread use.<\/p>\n\n\n\n<p>Provenance can also be a difficult question for prose. How much AI is too much? If an author writes their novel and then edits it using AI, is it now a &#8220;work of AI&#8221;? What if only 20 paragraphs were rephrased using AI tools; is <em>that<\/em> a work of AI? Publisher Faber <a href=\"https:\/\/www.faber.co.uk\/about-faber\/#block-without-image_4\">now include &#8220;Human Written&#8221;<\/a> in their works as a result of pushback from readers concerned by the use of AI tools. This will be an ever-evolving debate, I suspect.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">And the rest<\/h2>\n\n\n\n<p>There are honestly too many other issues with AI to cover without turning this article into a monster. This excellent <a href=\"https:\/\/en.wikipedia.org\/wiki\/Ethics_of_artificial_intelligence\">entry on Wikipedia<\/a> covers a wide variety of additional concerns, including the very real, very negative impact AI scraping has had on Wikipedia itself!<\/p>\n\n\n\n<p>Hopefully the five primary areas I covered will give you food for thought the next time you consider whether to engage with an AI tool, let alone <a href=\"https:\/\/openai.com\/chatgpt\/pricing\/\">pay<\/a> \u00a320\/month (or \u00a3200\/month!) for a subscription.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This short article outlines the ethics of using AI (mainly generative AI, or large language models), as of June 2025. In short, there exists no truly ethical way to engage with AI currently. 
This article will outline a few of the ways that support this conclusion. Copyright Quite a lot to unpick here, but the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":976,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[109,3],"tags":[],"class_list":["post-974","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science","category-technical"],"mb":[],"mfb_rest_fields":["title"],"_links":{"self":[{"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/posts\/974","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/comments?post=974"}],"version-history":[{"count":7,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/posts\/974\/revisions"}],"predecessor-version":[{"id":1012,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/posts\/974\/revisions\/1012"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/media\/976"}],"wp:attachment":[{"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/media?parent=974"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/categories?post=974"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.scaine.net\/site\/wp-json\/wp\/v2\/tags?post=974"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}