Greetings everyone!
I wanted to take the opportunity to write about A.I. checkers / validators and plagiarism detectors in general. As for my credentials, I am certified in Machine Learning (Microsoft), hold many other certifications, and am the founder and creator of Jappleng, with over 20 years of experience in software engineering and, in the past decade, A.I. My involvement in advancing technology has a pretty extensive track record.
The purpose of this thread is to raise awareness about A.I. detectors and how they are essentially useless, routinely flagging essays as false positives. Plagiarism detectors don't work entirely well either, but I will begin with A.I. (chatGPT) since it's the hottest and newest thing. Keep in mind that chatGPT is not an A.I. by definition; it's a large language model (LLM), which is a text-generative system.
I have simplified this post to make it easy to understand and categorized each topic under its own header. Please note that this is all an over-simplification, because I believe that is the better way to explain it.
What do teachers expect from A.I. validators?
Teachers believe that submitting their students' essays or other written work to these checkers will give them a truthful and accurate report on whether or not the student used chatGPT or other similar software to generate the text.
How do A.I. Checkers (Validators) work? Do they really work?
The only thing they do is analyze certain keyword combinations and assume that if the text matches them, then it must be A.I. That's it. It's nothing extraordinary; it's simply “if these words are found together, add points to the A.I. score.” Simply writing something like “In conclusion” will make a piece appear as if it were A.I. written, even though it's perfectly natural to write “In conclusion”.
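To make the point concrete, here is a deliberately naive Python sketch of the kind of phrase-counting such a checker boils down to. The phrase list, weights, and normalization are all invented for illustration; no real product publishes its scoring rules, so treat this purely as a toy.

```python
# Deliberately naive sketch of keyword/phrase scoring. The phrases and weights
# here are invented for illustration and are not taken from any real detector.
SUSPECT_PHRASES = {
    "in conclusion": 2.0,
    "furthermore": 1.5,
    "it is important to note": 2.5,
    "delve into": 2.0,
}

def ai_score(text: str) -> float:
    """Return a made-up 'A.I. likelihood' score between 0 and 1."""
    lowered = text.lower()
    points = sum(weight * lowered.count(phrase)
                 for phrase, weight in SUSPECT_PHRASES.items())
    # Normalise by length so long essays aren't automatically "more A.I.".
    words = max(len(lowered.split()), 1)
    return min(points / (words / 100), 1.0)

essay = "In conclusion, the experiment confirmed our hypothesis. Furthermore, ..."
print(f"A.I. score: {ai_score(essay):.2f}")  # a perfectly human essay scores high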
How should it work if it could work?
If it were to work, then programs like chatGPT would have to embed some sort of hidden watermark in their text to show that it was in fact written by chatGPT. However, a student would just be able to use a detector to find the watermark and remove it. chatGPT and the others do not add a watermark (yet) and likely never will.
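For the sake of argument, here is a toy Python sketch of one hypothetical hidden watermark (zero-width characters interleaved into the text) and how trivially it could be stripped out. This scheme is entirely made up for illustration; research proposals for watermarking generally bias the model's word choices statistically instead, but those can also be washed out by paraphrasing.

```python
# Toy illustration only: a "watermark" made of zero-width characters.
# This is NOT how chatGPT or any real system works -- it is a hypothetical scheme
# shown here to demonstrate how easily such a mark could be detected and removed.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner pattern

def add_watermark(text: str) -> str:
    """Append an invisible marker after every sentence."""
    return text.replace(". ", ". " + ZW_MARK)

def has_watermark(text: str) -> bool:
    return ZW_MARK in text

def strip_watermark(text: str) -> str:
    """One line is all it takes to defeat the scheme."""
    return text.replace("\u200b", "").replace("\u200c", "")

marked = add_watermark("This essay was generated. It looks perfectly normal. ")
print(has_watermark(marked))                   # True
print(has_watermark(strip_watermark(marked)))  # False
```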
Is there no way to detect A.I. generated content?
The short answer is no, it's not possible. There is absolutely no way to tell whether content was generated by A.I. other than having the submitter admit to it. Almost everything I have published in the past 16 years of Jappleng's existence has been flagged as A.I. generated, simply because I write in the style of formal editorials. If you want to test this yourself, write an editorial and submit it to an A.I. checker, and you'll find that it starts assuming you are an A.I. Maybe you are, maybe you're not? It depends who, or what, is reading this message right at this moment (hah).
The inner workings of an LLM are a bit like a black box. It contains what is called a neural network, an intricate web of connections, and that network is the result of the training it received. It's called a black box because data scientists cannot easily tell you what's inside it. Since the data is so vast, it would be almost impossible to tell you what's in it.
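As a very rough illustration of the “black box” point, here is a sketch of my own (a toy two-layer network trained on the XOR truth table with plain numpy, not an LLM). After training, all you can actually inspect are matrices of floating-point numbers; the behaviour is in there, but the training data is not stored anywhere you can read it back out of.

```python
# Toy example: train a tiny 2 -> 8 -> 1 network on XOR, then look at what it "contains".
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (mean squared error, plain gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print("Predictions:", out.round(3).ravel())  # should be near [0, 1, 1, 0] (seed-dependent)
print("W1 =", W1.round(2))                   # just numbers; the XOR table is nowhere to be seen
print("W2 =", W2.round(2).ravel())
```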
Does the music, image, and text data it's trained on remain inside?
Not exactly, but it depends on how it's trained. For Stable Diffusion, for example, the image training data is not found inside the compiled data model. You can ask it to draw a famous painting and it will not be the same famous painting but something similar to it, just like if I asked you to do the same. You know what it looks like, but you won't be able to replicate it exactly.
What about plagiarism detectors?
Plagiarism detectors are a bit different, as some of them maintain a database of actual text from different websites and compare it with the text you paste in. I really have a hateful relationship with these, because plagiarism detectors will believe that this site has plagiarized other websites when we were one of the first. Why do they think that? It's anyone's guess, but it's likely because they have an incorrect dating system. Either way, plagiarism detectors are better than A.I. checkers in the sense that they can compare and show where the text originates from; A.I. detectors cannot do this, they simply don't have the capacity to do so and never will. Plagiarism detectors can also give false positives, because there are only so many ways one can write about something, so there are bound to be duplicates, just like there are times when people make the exact same logo or song. These things just happen.
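To show the contrast, here is a minimal sketch (in Python, with an invented one-entry “database”) of the overlap-style comparison a plagiarism checker can do but an A.I. checker cannot: because it has real source text to compare against, it can point at the exact passages that match and where they came from. Real tools index huge crawls of the web; the principle is the same.

```python
# Minimal n-gram overlap sketch. The "database" is a hard-coded example;
# real plagiarism tools compare against indexed copies of actual websites.
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

database = {
    "example.com/article": "the quick brown fox jumps over the lazy dog near the river bank",
}

submission = "my essay notes that the quick brown fox jumps over the lazy dog every day"

for url, source in database.items():
    shared = ngrams(submission) & ngrams(source)
    if shared:
        print(f"Overlap with {url}:")
        for gram in shared:
            print("  ...", " ".join(gram), "...")
```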
Moving forward, what can you do as a teacher?
Just like with the invention of the calculator, there isn't much you can do to prevent your students from using it. Students have been using sparknotes and other collaborative technologies, and even hiring essay writers, for a long time now. ChatGPT is another technology that will just keep getting better and more elusive. Students are most likely going to use chatGPT and an A.I. checker to make sure their work passes. I'm sure you may have noticed straight-A students suddenly failing the A.I. validator. Should you fail them? Of course not. Although a student who normally didn't submit any work and now, of all times, decides to submit everything and get an A+ on it all might just be acting a tad bit suspicious, don't you think? But personally, I've gone through those phases myself in school, where I would do my best and then stop trying altogether. It's not a guaranteed tell, and talking with your students is all you can really do. I wish my teachers hadn't given up on me and had spoken to me about what was really going on; it would have helped me tremendously between high school and college.
The only thing you can do as a teacher is offer guidance about the future and the purpose of the essays, or perhaps even provide an alternative to writing essays if possible. I've seen teachers talk about running a class exclusively on using chatGPT and how to use it responsibly. After all, the future will be A.I. driven. AGI (Artificial General Intelligence) will be the next big thing in the next 5-8 years, and it will essentially be what humans are, but in the A.I. world, and much, much smarter. We will be living in a time where essays no longer matter and machines write for us. I cannot predict the future, but for the time being, “robots” haven't replaced us too much just yet. LLM technologies such as chatGPT, Google Bard, Llama and Microsoft Co-pilot are still in their infancy and incapable, at this time, of replacing authentic creative writing.
Just remember, right now this is what “A.I.” thinks when you ask it to draw salmon swimming down a river. We've got some ways to go.