There are bucketfuls of criticism aimed at these AI tools, and I’m actively following where the lawsuits thrown at them are headed.
Here are some examples of how AI/machine learning tools are already causing harm in creative fields (writing, visual art, photography, human modelling):
ChatGPT:
AI-generated fiction has flooded several literary magazines. This is a shame, because one of them had a good reputation for getting stories reviewed quickly, and it had to put a hold on new submissions entirely. I really wouldn’t want to be a reviewer right now having to go through submissions. I also don’t envy teachers who have to figure out what to do with various student assignments going forward.
The Writers Guild of America is seeking to protect writers from machine-generated stories:
ChatGPT is facing criticism from major news outlets over their articles allegedly being used to train the tool without compensation:
We’ll see where that goes.
As for the visual arts, here are some issues I’ve seen come up over the past half a year or so with image generation tools such as Midjourney. Did I say some? Okay, it’s a lot:
When searching for a famous artist’s work, you’re likely to come across AI pieces that used the artist’s name in the prompt but that the artist had nothing to do with. Their name and style are getting mixed up with AI-generated content made to mimic their work.
AI-generated images being used in harassment campaigns, for example against artists who spoke out against their names/works being used in generation prompts. Prompters made a competition out of who could create the AI model that best mimics the very artist voicing their disapproval.
Art spaces where you’d usually go to admire people’s skill, such as ArtStation, DeviantArt, Facebook art communities and other social media, getting flooded with AI images. These thrive especially on social media sites where a cool thumbnail is everything and viewers generally spend only a few seconds looking at an image before leaving a reaction (or not) and moving on. The longer you look at an AI-generated image, though, the more you notice things that look off.
When it comes to roleplaying communities, the Argent Archives, for example, has long been a place where I’ve enjoyed browsing human expression in the form of stories and art, and it’s been a shame to see AI-generated content starting to appear there as well. Doubly so when a user slaps “made by ArtistNameHere” in the credits while it’s obvious to anyone with any Midjourney experience that the image was generated. We see you.
Prompters taking an artist’s work-in-progress image, generating a “finished result” from it, posting it before the artist has finished their piece, and then accusing the artist of stealing. I’ve seen this happen to a big-name artist on social media; fortunately the accuser had to back down.
After some big AI models started removing specific artist names from their datasets (such as Greg Rutkowski), there was a Kickstarter fundraiser to train a new model that claimed it would make it possible to generate you-know-what-graphy and to plagiarize specific artists. They were rather upfront about this, and after public attention edited their post to sound more innocent. Kickstarter thankfully removed the campaign from its website after it caught a lot of online attention. That project then moved to Patreon, and so on.
Prompters have been selling AI works (such as commissions passed off as original art, art books, and texture and illustration packs), even though the ability to copyright these works is very muddy right now.
There have been claims of sensitive material turning up in the LAION dataset that Midjourney used/uses, such as medical images that a patient came across and had never approved for posting online. Somehow they had leaked from the hospital and been scraped.
Human models have found their likenesses stolen for AI imagery, in some cases in very uncomfortable contexts too. Related to this, the other day I saw a new AI initiative where the creator is trying to make money by offering AI-generated models for hire to sell products such as clothing.
It’ll become easier to create fake news/events/situations of anything and anyone (given enough pictures to train on) as the effort barrier becomes super low and the image quality keeps getting more believable. Just wait until someone gets mad at you over some Argent Dawn drama and embarks on an AI crusade.
I’ve taken a look at a website where you can browse the images used in the enormous AI dataset. Searching for “night elf art”, for example, turned up tons of artworks created not by Blizzard Entertainment but by independent digital artists, including artists I know from Argent Dawn and social media art circles. You can bet that a lot of the art people on AD have commissioned is in there.
On that note, isn’t it curious that professional-looking digital art probably makes up a very small percentage of a 5-billion-image dataset, yet that small percentage must have played a huge role in Midjourney being able to deliver the results it now does? You don’t get the same results training on stick-figure crayon doodles. Midjourney has already made a lot of profit from selling its services (subscriptions).
I tried out Midjourney last summer to see how it works after seeing it featured in ImagineFX, a magazine for digital artists, and I understand where the excitement to use it comes from. But with everything that has come up around the software since then, I cannot justify using it at all. Even if you’re not paying for the product, you’re the product, helping train the AI.
Some tech companies are pushing an “opt out” rather than “opt in” stance towards datasets now that they’ve come under pressure about the ethics. Sorry, but people shouldn’t have to chase down and opt out every single bit of text, image or audio they post online. As if we even have control over where our content ends up around the Internet.
On the audio side of things, I’ve seen the partner of a voice actor who passed away say they’re now busy trying to get the VA’s recordings taken down after people started grabbing them for voice AI models. That must’ve been fun.
Summa summarum: in its current form, this isn’t the kind of tech you should release into the wild without any consideration of its long-term impacts or people’s rights to their online content. Machine learning tools will have a role in the creative industries, but right now they’re causing a lot of headaches and concern, understandably so. I hope the list above gives people unaware of all this at least some understanding of why creatives are upset and feel exploited. Machine learning is far from how we humans learn, absorb and express art, so the whole “humans learn from looking at pictures too, what’s the difference” argument shouldn’t even be used.
I saw someone say that a lot of AI prompters don’t actually want fast art; they want fast love, identity or money while this is still fresh. That got me wondering: is it, for some, about that feeling of being a rockstar… before everyone else is a rockstar too?
I believe this tech may hurt people’s creativity, especially among young folk who adopt a nihilistic attitude towards various forms of human expression: why bother learning skill X when everyone else is just using this magic box to generate in seconds what would take me 5-10 years to learn to do myself?
Lastly, as a society we’re already consuming all kinds of entertainment at an ever-increasing speed, and people’s attention spans keep getting shorter and shorter. How many of us sit down to read books anymore? Look at a picture for longer than a few seconds? What happens if we become numb in the face of a constant overload of content?
Now let me rest my fingers before I go back to edit this long post for the XXth time to fix all the grammar issues the human way. Or maybe I’ll just replace it with ChatGPT’s explanation of all these problems.