AI isn’t just unethical, it’s inaccurate

Artificial intelligence isn’t so intelligent after all

PHOTO: Matheus Bertelli / Pexels

By: Hailey Miller, Staff Writer

The use of artificial intelligence (AI) is unethical, and it takes away from our originality and skill development. There are many reasons to avoid AI chatbots and writing assistants, but what’s the point of using them when they’re often extremely inaccurate in the first place? 

AI platforms like ChatGPT and Grammarly were never great to begin with, and in recent months they've gone further downhill. A 2023 study shows that ChatGPT has become less accurate over time, answering questions on certain topics, notably medical and legal ones, with a deeply concerning level of inaccuracy. ChatGPT and other AI tools, such as company chatbots, have also presented information that isn't real and described events that never happened. An Air Canada chatbot gave a customer bad advice about plane tickets, and as a result, the airline had to pay compensation for its misleading information.

Using a more personal example, if you ask ChatGPT about the ghost at The Peak’s offices, it’ll reply with this: “The office ghost at The Peak, Simon Fraser University’s student newspaper, is a lighthearted and longstanding legend among staff and contributors. The ghostly presence is often playfully referenced by members of The Peak as part of the newspaper’s lore and culture.” For your reference, this has never been a topic of conversation among staff.
The inaccuracy of AI is not only legally concerning and confusing, it’s harmful. It carries significant consequences in every context, whether professional, academic, or creative. We can’t rely on or trust inaccurate content. When AI is used in a piece of writing, it’s evident the work is not original. AI is a waste of time: it fails to provide accurate information, struggles to write a proper draft or paper, and is often too robotic to sound genuine.

The risk of misinformation and falsehoods makes it difficult to know what to believe when using AI. Since it generates information from many unidentified internet sources, there’s a chance your result will be inaccurate or even plagiarized from someone else’s work. In the academic world, AI takes away our ability to learn and absorb content. Unlike books or scholarly databases, which let you trace where information comes from, most AI software doesn’t disclose the sources for its responses. We will never learn about the world around us if we rely on inaccurate or incomplete information, and any work generated with AI will never compare to properly researched content. It isn’t fair that those who choose not to use AI do all the difficult work and research, only to have their work either stolen by AI or put in competition with work from those who used it.

Consequences stemming from AI errors need to be taken seriously. Corporations and organizations found to be using AI should face social pressure to stop using these programs. Educational institutions should create universal mandates on AI usage and hold students accountable. If you really care about the quality of your work, you should avoid AI at all costs; it’s not helping you as much as you may think.
