Our world to Generative AI.
Content warning: Includes subjective opinions.
First off.
Technology is a double-edged sword.
But most of the time, the people who control the tech will find every way to dismiss humanity's concerns by branding them as doomerism, especially when it comes to generative AI. They want to make sure the invention's image is one of being "done in good faith".
Over the first half of the 2020s, this invention has not only damaged artists' ability to make a living from their work, but also made real information harder to distinguish from fake. And inflicting that kind of damage gets easier every day.
The current worldview.
By this point, almost everyone knows about Generative AI, as it's been advertised and implemented in devices we use every day. But few people cared about what it has brought upon us until recently.
At first, it was still an experiment dubbed "DALL-E", which aimed to make images that mirror a distorted reality.[1] It seemed innocent enough, but it set a major precedent for what followed. After two years of training, Will Smith evolved from distorted facial expressions to almost properly enjoying his noodles.[2] All of that is thanks to relentless, unnecessary training runs, built to churn out images and videos that almost mirror our reality. It has caused irreversible damage to the art community, forcing artists to worry not only about their art being stolen by people, but also by these so-called "AI companies".
Then tools like ChatGPT came out, aiming to facilitate writing by assisting humans. But within a few months, people had jailbroken chatbots [3] to trick the AI into writing full essays without restrictions. LLMs like ChatGPT aren't inherently the type to pose a danger to the information world, but because of our involvement, we realized the absurd amount of power placed in our hands. And we're willing to go to extreme lengths just to avoid making things ourselves.
It is, unfortunately, widely agreed that all of this is part of technological progress. A lot of these AI tools are said to be aimed at improving productivity and making us more efficient in our professions. In reality, they're aimed at facilitating human tasks to further increase the quantity of output at the cost of quality. That doesn't sound particularly innovative or progressive. Because all we have done, to this day, is use them to cheat on homework and disrupt the art industry as a whole with nothing but prompts.
In times like these, we have to prove to ourselves that we have value, as others have already voiced their concerns about this dark side of AI development. As long as people believe that, humans will never cease to create valuable things for themselves.
At least, that's what people seem to focus on at the moment.
An oblivious, yet disturbing reality.
While the technology was being developed, a problem emerged that extends beyond concerns about AI and job security, or social media users simply brushing off "AI slop". We have said: "AI will get better, and will reach the point of devaluing art, or of replacing human labor[4], including programmers", but that's it. What we did not take into account was that generative AI has always been fine-tuned to fabricate reality. Think about it: if it can take people's art and make an almost exact replica of it, who's to say it cannot do the same to reality? The recent improvements to Midjourney[5] and the adoption of Sora's text-to-video generation have accelerated the erosion of our ability to distinguish the real from the fake.
This type of technology is also heavily marketed, which means it's meant for the public to use. That means many, many cases of believed misinformation will arise, with celebrities, or people who simply like putting themselves on socials, becoming victims of their own likeness being used for pornographic content. In some cases, individuals can deceive people with things we barely know about, with information that is blatantly false, or by conducting scam calls using familiar voices. A striking example that surfaced earlier this year is footage of a baby penguin being fed drinks and food it cannot consume in real life. We know these types of fakes were made before without AI assistance, but now anyone can produce them without any training whatsoever. We're simply facilitating a dangerous process. And only now do we realize how much of our reality has been mimicked.
More and more people have become incapable of distinguishing the real from the fake, no matter how hard they try.
We're not knowledgeable about everything, we're inept at dissecting information, and we look at things only at a glance. We're susceptible to any information thrown at us, and easily manipulated emotionally. Or, in some cases, we accept it because of our ideology and use it to attack others out of spite. Because of that, and given how powerful AI image and video generators have become, we're willing to let this tool transform into a weapon.
Its current and apparent purpose is to make a difficult problem even worse, at the cost of eroding our trust in images and videos as legitimate media. There are no "some bad actors" here anymore; we're all in on it, because the tools are made for us, the consumers, to wield, to freely contribute and shitpost for fun. All those seemingly innocent actions just contribute to the AI wielders' gain. This affects all of us. There's no turning back. As people say, the cat is out of the bag.
At this point, we should direct our worries toward this imminent and bleak scenario, and get ready to double- and triple-check anything that looks real at a glance. For now, people have nothing to say except "We're cooked", because, well, we have ruined the Internet's perception of reality. But what can I say? It's part of human progress, and everyone seems to agree.
I'm happy I could help with your essay. 😊 Is there anything else you need assistance with, or something else you'd like to chat about?
...
I'm just messing with you :P
Co-written by adamng074 & brot. - January 25, 2025