Larian Studios The Latest To Face Backlash Over Use of AI To Make Games
from the too-much-dogma dept
I guess I’m a masochist, so here we go. In my recent post about Let It Die: Inferno and the game developer’s fairly minimal use of AI and machine learning platforms, I attempted to make the point that wildly stratified opinions on the use or non-use of AI were making actual nuanced conversation quite difficult. As much as I love our community and comments section — it’s where my path to writing for this site began, after all — it really did look like some folks were going to try as hard as possible to prove me right. Some commenters treated the use of AI as essentially no big deal, while others were essentially “Never AI-ers,” indicating that any use, any at all, made a product a non-starter for them.
Still other comments pointed out that this studio and game are relatively unknown. The game was reviewed poorly for reasons that have nothing to do with its use of AI, as I myself pointed out in the post. One commenter even suggested that this might all be an attention-grabbing play to propel the studio and game into the news, small and unknown as they are.
Larian Studios is not unknown. They don’t need any hype. Larian is the studio that produces the Divinity series, not to mention the team that made Baldur’s Gate 3, one of the most awarded and best-selling games of 2023. And the studio’s next Divinity game will also make some limited use of AI and machine learning, prompting a backlash from some.
Larian Studios is experimenting with generative AI and fans aren’t too happy. The head of the Baldur’s Gate 3 maker, Swen Vincke, released a new statement to try to explain the studio’s stance in more detail and make clear the controversial tech isn’t being used to cut jobs. “Any [Machine Learning] tool used well is additive to a creative team or individual’s workflow, not a replacement for their skill or craft,” he said.
He was responding to a backlash that arose earlier today from a Bloomberg interview which reported that Larian was moving forward with gen AI despite some internal concerns among staff. Vincke made clear the tech was only being used for things like placeholder text, PowerPoint presentations, and early concept art experiments and that nothing AI-generated would be included in Larian’s upcoming RPG, Divinity.
Alright, I want to be fair to the side of this that takes an anti-AI stance. Vincke is being disingenuous at best here. Whatever use is made of AI technology, even limited use, still replaces work that would be done by some other human being. Even if you’re committed to not losing any current staff to AI, you’re still getting work product that would otherwise require you to hire and expand your team. There is obviously a serious emotional response to that concept, and it’s entirely understandable.
But some limited use of AI like this can also have other effects on the industry. It can lower the barrier to starting new studios, which will then hire more people to do the things that AI sucks at, or the things where we really don’t want AI involved. It can make indie studios faster and more productive, allowing them to compete all the more with the big publishers and studios out there. It can create faster output, meaning industries adjacent to developers and publishers might have to hire and expand to accommodate it.
All of this, all of it, relies on AI being used in narrow areas where it can be useful, on real human beings working with its output to make it actual art rather than slop, and on the end product being a good product. Absent those three things, the Anti-AI-ers are absolutely right and this will suck.
But the lashing that Larian has been getting is divorced from any of that nuance.
Vincke followed up with a separate statement on X rejecting the idea that the company is “pushing hard” on AI.
“Holy fuck guys we’re not ‘pushing hard’ for or replacing concept artists with AI.
We have a team of 72 artists of which 23 are concept artists and we are hiring more. The art they create is original and I’m very proud of what they do. I was asked explicitly about concept art and our use of Gen AI. I answered that we use it to explore things. I didn’t say we use it to develop concept art. The artists do that. And they are indeed world class artists.
We use AI tools to explore references, just like we use google and art books. At the very early ideation stages we use it as a rough outline for composition which we replace with original concept art. There is no comparison.”
Yes, exactly. There are uses for this technology in the gaming industry. Pretending otherwise is silly. There will be implications for jobs at existing studios due to its use. Pretending otherwise is silly. AI use can also have positive effects on the industry overall and the workers within it. Pretending otherwise is silly, and ignores all the technological progress that came before we started putting these two particular letters together (AI).
And, ultimately, this technology simply isn’t going away. You can rage against this literal machine all you like; it will be in use. We might as well make the project about influencing how it’s used, rather than whether it’s used.
Filed Under: ai, artists, generative ai, llm, swen vincke
Companies: larian studios
Comments on “Larian Studios The Latest To Face Backlash Over Use of AI To Make Games”
GenAI is a bubble; it is going to burst, and that is going to have a significant impact on its viability going forward.
It is possible that some limited uses of genAI, like the ones mentioned in this story, will continue. But it is not inevitable. Pretending that a short-term trend is a 100% reliable predictor of where technology is headed in the future is silly.
Re:
I’m of the opposite opinion: the bubble bursting on AI is what will ultimately result in finding actual, successful uses for gen AI. We’re going to be talking about data centers full of equipment you can buy for pennies on the dollar. As a result, companies and researchers with good ideas, but less flash, might actually get a shot at it.
At least that’s my hope.
Thanks!
Hey there! Someone read my comment! Thanks Timothy. It was so far down the comments that I figured no one read it. Glad to see that wasn’t the case (regardless of whether or not I convinced you).
The audience would probably have a better opinion of “the developers are highlighting their AI use” if AI weren’t a constant trainwreck used to cut corners where it has been implemented elsewhere within games. It also is not the audience’s role to parse PR statements and consider “well, this company might use it well” in light of what AI use to date has looked like. AI has already led to multiple PR messes, be it for translations or the mess Black Ops 7 had just a month ago, and the push for AI appears to be actively hurting most of the game studios under the Microsoft umbrella.
When the push for AI within this field has led largely to PR messes and worse products, it is perfectly reasonable for efforts to highlight it to draw backlash, even if that isn’t necessarily fair to an individual developer.
Artist and good idea exploration
I believe the current direct use of gen AI is mainly bad because it uses copyrighted images, and despite the possibility of transformative use, direct usage is too risky; I’ve heard some outputs are identical copies, creating a massive risk. Plus, this usage without the artists’ permission does help replace them.
However, when it comes to using the tool to explore ideas and then having actual humans make their own creative take on the idea, this seems morally equal to the common history of thousands of artists downloading copyrighted images without permission, editing and transforming them privately, and getting good new ideas from them, via computer tools that aren’t 100% human crafting either.
The only time I could find this usage bad, in terms of the ethics of permission and usage, is if using it fuels the program itself in a way that somehow benefits malicious use by other people, but I don’t know if it works like that. Think of a person going to a stolen-artwork page for a harmless work: while taking inspiration from the work itself is harmless, the interaction with the infringer gives the infringer more demand, which is bad.
However, despite all that, some of the arguments from some of the anti-genAI folks are rotten to the core, based on fallacies, reek of massive hypocrisy, and rest on made-up moral ideology that goes against actual rights that exist.
One notable tweet argued that merely using AI, no matter the purpose, is bad just because it was based on works used without permission, then cited moral rights and copyright. One person pointed out that people often use many images without permission outside of genAI, but the anti-genAI person said something along the lines of “Uhh, that’s different because I like that there is a human connection in how you get inspired”… despite the fact that the foundations of copyright and moral rights make no such distinction, let alone the fact that photoshop edits are already less human in some way.
Another horrible argument, and a dangerous one too, came from someone by the name of arvalis (the guy who makes “real life” pokemon). He argued that if actual human beings made 100 percent human-made art influenced by an idea that came from genAI in the first place, then the final product is still using generative AI, just because it influenced them at some point… even though the final work is based on human experience and human-made art in the end. The same person argued you are not using copyrighted work if you made it from scratch (without AI), making him hypocritical too. This argument implied that artists who gave all their blood to a work wasted all of their time, and that their hard work counts for nothing, the moment it is “tainted” by some AI influence or inspiration at the start. This was one of the most disgusting arguments I’ve ever seen in probably my life, and it’s sad it came from him. If he had wanted to argue that using genAI is bad because it creates a distraction from finding concept artists (though there are some problems with that argument) and left the after-the-fact work alone, I wouldn’t be as upset.
The argument saying it creates a distraction is flawed too, because public domain and lawful inspired works also create distractions, as does a wildly creative brain drawing on indirect memory of certain works without remembering whose they were.
Another argument I’ve seen is that we don’t need genAI to make good art. This argument misses the fact that it can still help with creative ideas in some cases, and faster too. So that is another weird argument.
There are real concerns about generated AI art, but some of these anti-generated-AI-art folks have gone so far down the barrel that they’re promoting harassment mobs against, checks notes, actual human artists. It’s gotten to the point of telling other artists that their hard work doesn’t count, or that doing something no different from long-standing traditions is bad just because a robot helped them. This isn’t ethical. This isn’t fighting for artists; this is just discouraging artists like her, and some others, from making their own interpretations and being creative because of how they got some ideas.
Current AI art has a lot of problems, and I would rather it only use lawful public domain material and lawfully licensed AI art, with a credit list for each result, but some of these people are out of their minds and are not fully ethical. Like, wow.
Re: Grammar
Some of the words here are not spelled right. I want to point out that I was on my phone, wrote a lot, and the spell checking here seems different from what I’m used to. I wanted to edit but it wouldn’t let me, so I apologize for some of the crappy grammar here. Haha
Re:
Although it was likely poorly explained and disingenuously defended (I’d prefer a citation of the public conversation in question, but am willing to grant that exact conversation has played out in this general way multiple times over the last three years), there is more merit to the ‘fruit of the poisoned tree’ anti-gen-AI argument than you, the article OP, or Larian would like to believe. The effect is corrosive even if the user is wise enough to stick to limited amounts of iteration. The entire training set already goes leaps and bounds further than an artist manually drawing inspiration from other copyrighted works; if it didn’t, it wouldn’t be useful at all.
There is undeniable merit to the article OP’s advocacy that small dev teams and solo devs use gen-AI or NPC LLM chatter to skip the drudgery and busywork that devs feel isn’t adding to the quality of the final product (it is).
As for the article OP, it’s probably time to politely hang up their hat and find a more nuanced, AI-friendly publication to write for if they feel like there’s nuanced discussion that isn’t happening (the nuance isn’t necessary, and such a publication doesn’t exist, so checking their emotions at the door and accepting that gen-AI usage is in fact a black-and-white argument is necessary and overdue for them personally at this stage).
Re: Re:
Let’s remember the history of people learning from other people’s works. People have learned from nonsense formations, including many less-human results (hence mostly nonsense). This includes photoshop manipulation, splashing around other people’s art to see what happens, mistakes made while modifying, and more, both digital and non-digital, outside of “gen-AI” visuals and audio.
Modern gen-AI simply mixes up other people’s works and can make weird formations. The difference is that it’s another angle leading to the same discovery (like the others), one a real human alone can take, then form their own ideas about the formations, put in their own human connection, redraw, and end up with a new, 100% human work. Thus, by definition, it’s a human work from human experience in the end result.
If you are seriously trying to argue that artists who worked real hard to make money for their families should just be seen as wasting their time, or that their works are “poisoned” just because an idea came from something that may not have been so great, then you are just making a personal excuse to selectively call out living artists for your own twisted opinion. Next, let’s call out the thousands of artists who felt deeply inspired by a source that was technically pirated, like an improperly downloaded song or some crap, and stop them from pursuing their dreams of hard work toward legal works. BTW, I’m talking about works that are original enough, like taking an idea from a specific result but lawfully forming your own original formation over it.
Again, we can debate whether using the current genAI is itself fueling a problem, so uhh, maybe it’s best to avoid it (not because it’s done without permission, as copying isn’t theft, but because it might be fueling bad purposes too) even if the purpose is innocent. But that doesn’t mean we should go after a transformative and harmless human formation just because it came from an idea sourced from a bad place after the fact.
Apologies for any misunderstanding of your comment, btw.
Re: Re: Re:
There’s a difference between that and a generative AI model being fed someone else’s work: The person’s work can evolve.
Everyone in the world who has ever picked up a pencil to draw has started out drawing stick figures. Everyone who makes a living from drawing/illustrating/animating started with stick figures. They liked drawing so much that they wanted to improve; maybe they had an innate talent for it, but what most likely happened is they put in the work to improve their skills. They got a little better, hit a roadblock, got a little better, hit a roadblock, and yadda yadda yadda. Even the people you’d consider to be “great” artists still have blind spots and hindrances—“flaws”, if you will—that come with the territory. Someone who’s great at figure drawing might suck at backgrounds; someone who can paint beautiful landscapes might have no skill whatsoever at painting a portrait. That’s the beauty of human artwork: There is an innate humanity to it that is born from the flaws of the artist, even when such flaws aren’t necessarily related to art.
Generative AI doesn’t evolve the same way. It doesn’t start out with stick figures. It doesn’t learn how to draw based on references and practice and the evolution of skills over time. It swallows up a person’s work (often without their permission), digests it, and vomits it back out in a mosaic of pixels that its algorithms say are the “best” combination of pixels for a given seed or moment in time depending on the model and whatnot. It’s not trying to make anything more than pre-digested slop. Generative AI makes images and sounds and videos that can resemble human artwork, but it lacks that special bit of humanity that makes art feel “real”.
As people evolve their skills, they find their style. People who like anime might go on a path that sees them mimic the art styles of their favorite shows/mangas before finding an evolution of that style that feels like their own. Generative AI just copies a style wholesale and doesn’t evolve it without an additional input that says “hey, toss in this style, too”. The evolution doesn’t happen because it isn’t really an evolution—it’s a command to a machine that doesn’t actually care, in any way, whether its output pleases the person requesting that output.
The same argument applies to using generative AI as part of, say, idea generation: As you come to rely on the Emptiness Machine to spit your ideas back at you, you lose the ability to fully iterate—to explore new variants, sometimes wildly different variants—because they don’t fit with your prompts. Artists exploring new ideas that were drastically different from their original ideas, or moving their original ideas around into new combinations, is how Street Fighter II evolved its core roster from the version that was sketched out in 1988 to the version that became one of the biggest arcade games ever. An AI generator might be able to spit out variants on a prompt, and you might even evolve that prompt to create different variants, but that doesn’t mean you’re doing the work to figure out why one variant might work better than another. Maybe you even happen upon a design that wouldn’t work as a man, but would work as a woman—but you can’t ask the AI generator to tell you if it would because it can’t tell you any-goddamn-thing about the “why” behind its “art”.
Therein lies another way human-created art will always have a one-up over AI slop: It’s collaborative. The people who worked on Street Fighter II were a team who bounced ideas off each other, drew up designs, questioned those ideas, and reiterated until they landed on ideas and designs that the whole team liked. Those discussions are how the game evolved from its original design state to the final version that dropped into arcades and changed fighting games forever. You don’t get that with generative AI because it’s fundamentally incapable of the kind of collaborative efforts that a group of people working on a big project, or even just two people drawing bullshit with each other while they smoke some weed, can provide.
Generative AI can produce an image that can look passable enough to count as “concept art”. But it can’t iterate on that art. It can’t tell you why it thought the design is “cool”. It can only ever vomit up slop and wait for the next order. Anyone who thinks that is a better state of affairs than collaborating with actual people can go right ahead and lose themselves in the Emptiness Machine—preferably while they stay as far the fuck away as possible from actual people who want to create actual art with other actual people.
Re: Re: Re:2
I feel like you are ignorant of how our existing culture works, to an extent. Millions of works did not come from 100% human hand-drawn art, digital or physical. Many of them came from a human experiencing a variety of things that aren’t always 100% crafted by a human, some much less so, some more. Many humans also experienced learning differently from each other, and that INCLUDES modern digital mashup transformation with “see what happens” tools too, way before gen visual AI existed. Some of these experiments are also similar to spitting out stuff randomly.
Photoshop splashing, procedural texture or 3D landscape generation experiments, throwing multiple pieces of physical human art into a cluster to see what happens, simple digital art tools used to see what comes up, accidents, and many more things don’t come from a traditional manual use of a physical paintbrush, in the digital and even the physical world (though not as much). Hell, most works are a combination of things found in nature that weren’t created by humans, in case I need to make that point.
And yet here you are saying that the moment a bit of special computer code makes less predictable formations or variants by copying other works first (like what millions have done without permission digitally for many years in simpler, less predictable ways, e.g. photoshop experimental mashing splashes), it doesn’t count. Nope. A human taking an idea learned from something less human, the moment it’s from a special robot, it all doesn’t count. None of it.
Let me make an example. Say I use gen visual AI, which copies other people’s works without permission (which isn’t itself wrong, because copying isn’t theft, so it always depends on more than that), via some special command. I type in “show me a dark forest,” and it gives me some variations of it. I get what I want as a base without much skill involved, but then I see some weird combinations or variations that give me an idea. I then take the idea in my head and make my own forest with real, current human skill; I make a story driven more by my own ideas, then add my own original art with original-enough shapes from scratch. Beyond that, I make a whole series about it. In this scenario I, the human (a good old-fashioned misanthrope, too), had an experience with a weird, less-human result (naturally no different than discovering a weird rock or a procedurally generated seed texture) in the first place, but came up with my own conclusions, made my own creative take, and, with my own will, added creative stories beyond it. EXACTLY the same as downloading a few copyrighted pictures directly, loading them in photoshop, experimenting with them in a see-what-happens way that is transformative (maybe even the legal fair-use kind), getting an idea, and doing the same thing with the end result. It makes no logical sense to say the two are different in this example, other than for the problems I already mentioned, but those have nothing to do with this specific argument. AI gen “art” isn’t always a lazy 1:1 imitation with a lack of skill, education, and direction followed by doing nothing.
There is also the fact that gen visual AI can produce original shapes, and one might redraw them (but not enough to be 1:1 with copyrighted material) and use them as an asset while still having creative control beyond it, like public domain works both human-made and non-human-made (e.g. a texture that was seed-generated and released as CC0).
Also, many artists adapt existing styles instead of coming up with their own, but make new things with them. It feels like you are ignoring other types of artists when you mention the style thing that way?
Like I said, there is an issue where the current tool is being fed copyrighted work, and corporations are using it for profit more directly with fewer human artists entirely, and if using it for inspiration is fueling the same thing indirectly, then yeah, the usage itself isn’t good either, and it’s morally better to use more legally licensed and lawful public domain training data. But that doesn’t mean you, I, or anyone gets to tell legit human-crafting artists that the final part of the product, of that game, is somehow fake, AI slop itself, or “tainted.” Like, Jesus…
Re: Re: Re:3
I assure you that I’m not, but please, condescend a little more and see how well that endears your argument to me.
And to all of that, I say: “No shit.” My dude, I grew up in the ’80s and ’90s—that was the prime era for chiptunes (before they became a medium that any rando could fuck around with), new wave music, and the brand new artform that was music videos. I’ve seen the evolution of culture, analog and digital, from where it was when I was born to where it is now. Maybe I glossed over some things in my comments, sure—but the general point remains the same regardless of whether I’m talking about someone writing a novel by hand or someone dicking around with glitch art.
The difference is in the intent. People who fuck around with algorithms or use algorithmic tools are experimenting with intent—they’re looking for something that gives them a spark of creativity that they can further explore through more experimentation. With generative AI, there is no spark. There is only slop. You ask for a thing, you get a thing (and it might even be of decent quality if you’re lucky), and that’s it. There’s no trying a different brush stroke size or mixing colors to see what’s aesthetically pleasing to you or futzing with the pitch of a guitar chord to see if going higher or lower might sound better. AI just barfs up a thing, then waits for the next command to barf up the next thing. To the extent that there is any experimentation with generative AI, it’s only about fine-tuning prompts to get a “better” output. Anyone who wants the entirety of culture—and the very human experience of fucking around with a medium and finding out anything from whether you like the medium to how you can adapt your tastes to that medium—condensed into the equivalent of “tea, Earl Grey, hot” is a person who cannot fathom the joy of genuine creation.
I’ve already got one argument going with you here and I’m not about to get off on a tangent, so I’ll put a pin on this point with the following sentence: If you think stealing other people’s hard work to transform it into a digital pastiche of their work and claim you created “art” is a morally righteous act, you both underestimate my hatred for copyright and overestimate how willing I am to conflate copyright infringement with plagiarism.
Okay, but why would you, though?
The whole point of generative AI, at least according to the assholes who evangelize it, is that it’s supposed to give you exactly what you want. If you want an image of a dark forest, the Emptiness Machine is supposed to give it to you. Yes, you can make tweaks to the prompt or generate multiple versions using different seeds, but the overall intent is to generate an image as close to the one in your head as possible. That’s the whole fucking point: It’s to cut out any middlemen and any need for actual artistic skill by “democratizing” the creation of (what idiot techbros think of as) art. Who needs to improve their craft or hire people to make art when “Push Button to Receive Picture” is right fucking there? Part of the reason I despise generative AI is that “easy way out” mentality, and there’s literally no other point to it other than to coddle that mentality. Yes, making art of any kind is hard. It’s supposed to be hard. The joy of creation isn’t in the work that you get as the end result, but in the making of the work itself. Generative AI evangelists talk a whole lot about that “democratizing art” idea as a bunch of cope for not having the patience to either improve their own art or find a medium that fits their specific creative tastes. I can’t draw well because I’m pretty sure I have aphantasia (and I haven’t drawn regularly in years), but maybe I might be halfway decent at writing if I ever sat down to write something other than long-ass crank-ass comments on This God Damned Internet. But if all I wanted is some quick slop to satiate my creative impulses, why would I bother learning to write better when I could find an AI story generator?
I’m aware of that. But the ones who do that also tend to experiment beyond that style. Just because someone can draw Goku in a style that is practically 1:1 to Akira Toriyama’s doesn’t mean that said artist is aping the DBZ style 24/7—or that they’re unable to draw Goku in their own personal style.
It is doing that, and it still wouldn’t be good even if it wasn’t.
Whether someone treats my opinion of generative AI seriously is wholly irrelevant to the fact that I have the right to share that opinion. You can disagree with it all you like; I can’t stop you from doing that, nor would I try. But I promise you that I very much have the right, the privilege, and the distinct honor of telling generative AI evangelists to fuck all the way off.
Re: Re: Re:4
I’m sorry, but it really sounds like you are a reactionary. You are even trying to decide that one thing is just slop when you could say that about a lot of pre-gen-AI stuff today.
“The difference is in the intent. People who…”
No, it’s not. Maybe you aren’t aware, but a lot of “see what happens” also includes slopped results based on robot commands, just simpler ones. There is 100% no difference between downloading three copyrighted images, putting them in a fancy paint program, and using a few commands or programs to produce less predictable nonsense to see what it’s like and get an idea from the slop it produces, versus using the fancier special robot (gen AI visual), seeing what happens, and getting an idea from that. BOTH use copyrighted images in the first place; one simply has some extra get-random commands and likely branch commands. (Note: yes, I’m aware it’s not quite that simple, but it’s in the same realm of random slop anyway.)
“The whole point of generative AI, at least according to the assholes who evangelize it, is that it’s supposed to give you exactly what you want. …”
For the first part, that claim isn’t true, at least not all the time. Generative AI can produce less predictable results, and some people will use it to see what comes out and get an idea faster than going to Google and spending hours finding mixes to look at.
And why would I use the forest thing? Maybe because it’s faster, less risky (well, in terms of online researching, I guess?), and it can easily make weird variations that no other artist did.
I’ve seen a video where an AI was predicting where Mario goes, one of those nonsense slop videos, clearly an experimental type of thing, and it produced many weird things that not even the original video uploader expected, if I’m assuming right. Proving my point. It doesn’t give you “exactly what you want,” at least not all the time.
Plus, some people may want to use it to enjoy content faster without having to deal with getting the visuals manually, though I would prefer the licensed-type thing I mentioned.
“I’ve already got one argument going with you here and I’m not about to get off on a tangent, so I’ll put a pin on this point with the following sentence: If you think stealing other people’s hard work to transform it into a digital pastiche of their work and claim you created “art” is a morally righteous act, you both underestimate my hatred for copyright and overestimate how willing I am to conflate copyright infringement with plagiarism.”
I’m talking about getting an idea from a tool that uses some pictures without permission, getting an idea from it, forming my own thoughts about it, then DRAWING or modeling it myself from my own experience of said idea. That’s 100 fucking percent no morally different than what millions of artists do with other people’s works in paint programs, including the less predictable slop that some editing tools actually do produce (which, BTW, is transformative fair use sometimes, iirc). BOTH of the HUMAN REACTIONS to it AFTER the fact are the same.
“It is doing that, and it still wouldn’t be good even if it wasn’t.”
And who are you to decide that, if it wasn’t? People have a right to use tools that aren’t violating anyone’s rights (the scenario I’m thinking of). If someone wants to use, say, a visual AI generator built off lawful public domain and lawfully licensed pictures for harmless stuff, that’s as much a right as someone deciding not to pay a specific artist and going to watch a baseball game instead, or using public domain art found online as a reference.
“Whether someone treats my opinion of generative AI seriously is wholly irrelevant to the fact that I have the right to share”
…yet you are trying to be a moral dictator against an artist’s right to adapt an idea that originated from AI slop into their own harmless take that actually is human after the fact. Just as much as it’s an artist’s right to adapt a procedurally generated texture or a weird rock formation found in nature instead of going to a concept artist and then paying them.
In the end, it really feels like you are selectively deciding that one human experience, based off gen AI slop, doesn’t count, while leaving alone all the other ones that are barely any different, much like some anti-digital-editing people back in the 1980s would scream about it and make up ignorant, fallacious excuses.
As a reminder too, I think I’m more pissed at people who go after an artist for adapting an idea that already came from slop. Even if my using the tool that was aiding crap was bad in the first place (scenario-wise), it’s still my right to at least adapt the idea that already came from it and have my own creative take on it. If I’m in the “wrong” on that, then you are in the wrong for fair-use transformative works in a digital program, because they’re not as authentic as painting or some crap.
Note it’s possible I missed some stuff from your comments so I apologize for missing anything.
Re: Re: Re:4
You can evolve outputs in fine-tuning ways or in experimental ones. You can incorporate generative tools into a broader workflow. Take off your hate blinders, they’re making you miss the forest for the trees.
Re: Re: Re:
The ‘fruit of the poisonous tree’ doctrine implies that as soon as a tainted training set is involved in, or starts, an iteration, the final result will be rejected by the majority of customers and must be thrown out.
It does not bar the original artist from working in the industry.
The phrase originates from illegally obtained evidence causing an entire court case to be dismissed before or during trial, even if the suspect is clearly guilty.
It does not prevent the prosecutor from bringing a better case with the ‘poisoned’ evidence removed, relying only on evidence that does not rest on the foundation of the bad or illegally obtained evidence.
The same concept applies here with gen AI: the artist is not fired or laid off for using gen AI, they are merely sent back to the drawing board (if nothing else, to ‘parallel construct’ the gen AI piece without modifying it directly, removing the elements that would be identified as gen AI). If they are not, the targeted customer base will punish the company caught using gen AI.
Vibe coding is a completely different use case: LLM-generated code is usable by only a small fraction of senior engineers, most senior engineers are more productive not having to parse how to prompt and frame the requests at all, and junior developers severely punish their own skillsets by trying to skip the busy work with AI, getting caught making their work worse.
There are other names and rules for the ‘fruit of the poisoned tree’ legal argument for dismissal of a court case, but they generally follow the same basic principles in modern democracies.
Re: Re: Re:2
I guess I’ll say that I probably don’t fully understand, but I can try to form my own take and see how you can answer it if you don’t mind.
Morally speaking, it’s very possible that some good things that originated from a bad thing can still be left alone. Popular superhero stories only exist because of horrible crimes in real life. Obviously, we do not need to reject the elements of a fictional story just because real crime is a huge part of its foundation.
Take a regular copyright infringement or ‘piracy’ case over copyrighted art. Some artist might have gotten inspired by art found on a plagiarism page. It sucks that the artist gave that page traffic, but it doesn’t really make logical sense to go after the reaction to it that was formed into new, good art (e.g., an idea gained from a morally wrong place, which the artist used in their head to make new, original, harmless art) after the fact. Yes, without it, the art the artist created by hand (due to the idea found through the plagiarized infringement page) would not exist, but we can still separate the after-the-fact result in many cases, even though a bad source helped make it possible.
So in the case of generative visual AI, even though using the tool did fuel the stolen-art machine when only trying to find a spark, the artist who takes that spark and draws out a new, more original creation as legally new art could be left alone. The artist is best off not using the harmful tool in the future, but there’s no need to destroy any good that formed after it. That was one of my main points.
It’s possible a harm could be extended by a bad source, but I see no logical way to justify the idea that the harm extends to many examples like these. So a poison tree made non-poison fruit in some cases.
Of course, it’s also possible for an artist to use generative visual AI, get a specific result, and just use that (thus actually using the AI “art”), or to imitate the AI image too closely (from scratch, so technically human art) but still copy too much of it, increasing the risk of copying an original copyrighted picture the AI trained on. That might be poison off a poison tree.
For anyone reading, I’m probably embarrassing myself if I’m still misunderstanding the comment…
However, generative AI isn’t even useful for concept art, at least in most cases. This article brings up the perspective of actual concept artists, and they all say generated “artwork” only makes their jobs harder. Part of it is due to warped expectations from clients (for example, the vast majority of concept art is relatively basic rather than big fancy renders, but clients expect the latter), but in general the consensus is that these generative models lack originality, require concept artists to figure out where the “inspirations” for each generation came from (which only adds to their workload), and take out the “discovery” part of exploring concepts that is vital to creating new and interesting ones.
Generating placeholder assets? We’ve already seen how that can cause problems: they can be “good enough” that studios accidentally forget to replace them, causing controversy later when players notice, as happened with Expedition 33 and The Alters.
Coding? “Coding assistants” more often than not tend to spit out poor-quality output that programmers need to fix anyway.
And production-level content is right out. People will notice, and it’s never as good as actual human artistry.
I’ve only found one “productive” use for generative AI in game development that doesn’t just add more work for people in the long run – generating placeholder voice lines, assuming you put them all in a “placeholder” folder that you ensure is completely deleted by the end of production, with all references to said placeholders replaced with actual production voice lines. And even then that’s not exactly new, it’s just a fancier version of using text to speech.
Generative AI is mostly just popular in the c-suite rather than with actual developers or gamers. A Quantic Foundry poll shows that 85% of gamers have a negative perception of generative AI, with 62% saying they had a “very negative” attitude towards it. “This technology simply isn’t going away” is one thing, but who will use it seriously when developers hate using it and gamers hate seeing the output?
Re:
“This isn’t going away” is an apt descriptor for metastatic cancer.
Sure, but how? I’m not sure you can (hence the horse armor joke). It’s going to be like other tools: Does it make more money (by making a bigger/better product, shaving cost, etc)? Then the industry will move towards it. There will be exceptions, but they will be niche.
I do think a nuanced approach is best, but consumer behavior is hard (impossible?) to make nuanced. We can’t even get the industry to behave when it comes to things that hurt consumers/workers, like crunch, predatory pricing, sexually harassing female employees etc. Heck, we already can’t even get companies to use ethically sourced training data to begin with. And I don’t know if you can regulate a nuanced use.
To be clear, I don’t think you can stop it. I think maximum outrage at most gets you a slightly larger speedbump. We’re going to get whatever is market optimal regardless of whether it’s good for consumers/workers or not. There’s a reason big-time execs are positively giddy about AI, and it’s not because of indie competition. Whatever influence we have is subordinate to the mighty dollar.
One thing I worry about with concept art specifically is how it could anchor things. An analogy I’ve seen used: it’s like watching a movie based on a book and then going back to read the book. The movie will tend to heavily influence how your brain pictures the book. We’re kind of seeing this in other places already: people who use LLMs are starting to pick up speech mannerisms from them.
Re:
What are you talking about? The use of copyrighted materials? That’s completely ethical on top of probably being legal.
Re: Re:
It is likely legal (at least in the U.S. The EU is more complicated with Article 4, it has explicit opt-outs). A lot of anti-AI people would disagree with you that it’s ethical, though. It is one of the major complaints about the technology.
Re: Re: Re:
I know. They’re idiots.
Re: Using genAI as "exploration" is unnecessary and limits creativity
Your last point, I think, makes a lot of sense. The guys on the Aftermath Hours podcast were talking about this story, and they pointed out that what Larian is talking about using AI for, like composition, doesn’t require AI at all. They’ve seen amazing art come from an art director drawing stick figures on a Post-it to get across what they’re going for, which the concept artist can turn into exactly what they need. And if you’re communicating at that basic a level, then you have an actual person (the concept artist) making the creative decisions, rather than the AI making the creative “decisions” that come between that stick-figure idea and the output that then becomes the concept artist’s input. By giving up those creative decisions to AI, you’re limiting the scope of creativity to what AI can do.
Thank you! The anti-AI stance in gaming has become increasingly shrill, with no basis in reality, and it’s exhausting. People pretend that Larian using gen AI for menial pre-production tasks is the same as them making another Codex Mortis.
Re:
I guess their stance is that generative AI is such a convenient tool that people may become lazy and use it too much. Then players, years from now, will have no choice but to pay $60 for slop games.
I’m not talking here about the AAA studios, like EA or Ubisoft, that already see AI as a new way to save money and time, but about the high-quality, smaller studios that have always produced content with hard work and a lot of love.
It’s pretty much like music: you like an album because of the artist and the story behind it, but who cares if pop stars use AI to write their shallow lyrics?
History will look back on these people the same way we look back on people opposed to like, self-checkout. Any big efficiency improvement might reduce the amount of meaningless, menial work for workers to do, and thus reduce jobs. Therefore we should make sure we are as wasteful as possible to prop up the wage system.
“ Some commenters treated the use of AI as essentially no big deal, while some were essentially “Never AI-ers,” indicating that any use, any at all, made a product a non-starter for them.”
Yup, that describes me.
If you are too lazy to write your own emails or draw your own art, I will spend my money elsewhere.
Re:
“If you are too lazy to write your own emails, draw your own art, I will spend my money elsewhere”
You know, it was just a few decades ago that some people claimed the exact same thing about using email instead of writing letters.
It is always impressive how the old guard ALWAYS flips out about new innovations while insisting that everything THEY do is perfectly fine.
Douglas Adams was right:

“I’ve come up with a set of rules that describe our reactions to technologies: 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. 3. Anything invented after you’re thirty-five is against the natural order of things.”

― Douglas Adams
Re: Re:
Email versus letters versus text messages: it’s still a person choosing which words to use.
As regards your quote, I am still closer to 15 than 35, and yet I despise people who cannot choose their own words to communicate with.
Re:
I’m interested to hear how you plan to avoid any use of gen AI, given that nearly every developer in every company uses Copilot or Claude for, at minimum, code completion. If you’re really committed to your zero-gen-AI approach, I guess you’ll have a lot of money you can spend on non-game things. Oh, and don’t forget to make sure your bank is committed to zero gen AI (including their vendors, like whoever writes their banking app), and uninstall Windows and find an obscure Linux distro with a small maintenance team that is willing to commit to no gen AI, and don’t do business with any companies that use ChatGPT to fluff out their emails, and… It’s just not a sustainable approach.
It’s like refusing to work with any company that uses plastic: both plastics and gen AI are here to stay; and while we don’t want them everywhere, we do need to come up with reasonable, sustainable approaches to their use.
Larian didn't use AI though
They really didn’t.
Re:
The CEO of Larian explained in detail how they do, though?
There is a serious lack of nuance on both sides of the AI debate at the moment, which I guess is reflective of general online discourse about anything, and this story is just another example of that. There was never any suggestion that AI generated assets would be in the game, and it is clear that their use of AI is limited to concept art and project management tools. The vehement anti-AI crusade is only going to drive people who are sympathetic to the genuine concerns away from the cause. I’m already starting to feel exhausted from all the hand wringing on the subject.
There are genuine issues with AI, but the hand-wringing we get over non-issue stories like this is a distraction, and it will harm us all in the long run.
So, the record studios and the RIAA et al. were right when they said every copy is a lost sale? Right?
Timothy, you are using exactly the same reasoning as they do and to that I say: FUCK YOU! Do better.
Re:
Are you seriously comparing piracy with hiring practices?
Re: Re:
The hiring is only done by the ship’s captain.
Re: Re:
I don’t think Tim is necessarily right in his assumption (it could result in shorter working weeks for existing staff for example), but it is not remotely the same as the lost sales fallacy.
Re: Re: Re:
It seems like a relatively safe assumption. There are extremely few cases where companies have rewarded improved productivity by reducing work hours rather than reducing staff.
Re: Re:
No, I’m comparing the thought process behind the argument.
The argument is that any use of AI negatively impacts hiring, which doesn’t take into account the financial situation or the type of project it is used in. It’s the same type of fallacy as “every copy is a lost sale,” i.e., the presupposition is that everyone has the financial means to throw money at something.
There is nuance in how AI can be used: it can increase productivity when resources are constrained, or it can be a machine for extracting money from our common cultural heritage.
Saying that all use of AI is bad is a stupid and simplistic take, and doesn’t in any way help the discussion about when it is or isn’t proper to use AI.
Re: Re: Re:
Well said.
I’m a software engineer, and workers in my industry ARE being replaced by LLMs in many businesses. The company I work for isn’t doing that (yet, anyway). It IS making Copilot available to us, which in certain circumstances is a helpful tool.
Yes, in theory they could hire a recent college graduate to do that work instead, but there is no budget for that. Companies don’t have an infinite money cheat. Either an LLM generates those unit tests (which I then double-check), or they pay a senior engineer’s salary to spend time doing dumb work.
For Larian, I doubt there was a world where they would hire extra actual people to do the kind of work they’re talking about. They probably hired all the actual people they could afford to hire. They don’t appear to be one of the “sack the humans, keep the cash, use LLMs” companies.
Re:
They’re not the same reasoning. Someone who pirates could choose not to buy something, at least hypothetically. To get a task done, your choices are not doing the task, or hiring someone to do it. There is no route, even hypothetical, of it being done without someone doing the labor.
He also addresses whether that can be offset on net in literally the next paragraph. But even when it does increase productivity for specific tasks or output, that is still labor lost.
Re: Re:
It’s exactly the same reasoning: that everyone has the financial means to throw money at something but chooses not to, so someone must be losing out.
What if you can’t afford to hire someone? What if you are making something by yourself just for the fun of it? Of course, you must hire someone, right? Or else that someone is losing out on work, right?
Assuming everyone has money to burn is an idea taken from la-la land. And it’s here the fallacy strikes: the assumption that someone using AI is depriving someone else of work is exactly the same reasoning the copyright mafia uses, the assumption that someone who pirates is depriving someone else of a sale.
As I said earlier, FUCK THAT. If you think I’m wrong, then every media exec complaining about lost sales is 100% right, right?
Re: Re: Re:
I don’t think he’s assuming everyone has the financial means. There will be things they don’t have the financial means to expand into, which simply wouldn’t get done (or would be done at smaller scope). In this particular example with Larian, they do happen to have the means, but that won’t always be the case. The issue with replacing labor is still there.
Even if you assume some fixed budget of $x, whatever money goes toward AI tools could have gone toward labor instead (on which, to be clear, you might get less return). It’s a fundamental trade-off at any budget, not unique to expansion.
I think you’re mixing two different takes. Acknowledging that something requires labor to be done is different from assuming someone has the money to get it done manually. The former is true even if the latter isn’t. I think he makes this pretty clear in the next paragraph? The entire discussion around indies, etc., is precisely about things being enabled that wouldn’t otherwise be viable.
The difference in the assumptions is that there isn’t a way around this one. If you want the thing done, someone has to put in the labor; if no one does, the thing won’t get made. That is true regardless of whether someone has the financial means or not. In your piracy analogy, the assumption breaks because someone who pirated may never have converted to a sale.
So, I will skip my opinion of AI itself. The problem is, maybe today they are using it and not letting people go; then the number crunchers say, “Hey, we can make more for doing less! Let’s get to it!”
We all know it’s not lower management that makes these choices; it’s the people who have investors to think about.
I hesitate to comment here because I’ve caught shit in the past for opposing generative AI on the grounds of “it has no soul to it”, and I feel like making that same argument again. But I think I’ve found a better way to make it, and it’s all thanks to Hideo Kojima.
But I’m getting ahead of myself a bit.
First things first: If there are tangible reasons to oppose generative AI, they’re already out in the open—the ethical sourcing of content for LLMs and the environmental impact of training/using generative AI tools are the two biggest, but I’m fairly sure they’re not the only ones. Point is, any argument in favor of generative AI will have to lay those concerns to rest, and I can’t think of how to steelman such arguments without sounding like an AI evangelist who thinks generative AI “art” or chatbots running on advanced LLMs are ushering in The Singularity or some shit like that.
But that’s the more tangible, less “subjective” arguments. For the one I want to make, I have to point out an interview with the co-composer for Death Stranding 2, who had this to say about a lesson he learned from Kojima (the game’s director):
I know Kojima isn’t talking about generative AI here, and I know it’s not an exact quote from Kojima himself, but there’s a phrase in that paragraph that stuck out to me: “it’s already pre-digested for people to like it”. And that gets back to the argument I want to make.
I’ve made no attempt to hide my contempt for generative AI. While I will admit to having tinkered with it in the past (because duh), I stand against it now in no uncertain terms because of the more “objective” arguments I mentioned above in addition to the “subjective” argument about how generative AI art is “empty” and “soulless”. Hell, I despise generative AI to the point where I use the title of a recent Linkin Park song (“The Emptiness Machine”) as a derisive nickname for it. But it wasn’t until that bit up there that my argument finally had a shape I could give it: Generative AI “art” is slop precisely because it’s been “pre-digested”—it’s all just art made by talented people that’s been swallowed, chewed up, and spit back out in a way the Emptiness Machine “thinks” will be acceptable to the end user.
When I used generative AI, I did generate some images that were aesthetically pleasing. And yeah, some of them were close to the image in my head that I had when I generated them. But if I were to look back on them now (which I can’t because I deleted them months ago), I’d be able to see the flaws in, and the generic nature of, all those images. They’re digital mosaics of other people’s work that were “pre-digested” and barfed back at me without any real human touch to them. I have bits of art from the furry fandom saved on my computer that are at least two decades old; even today, none of them invoke in me the same kind of boredom and emptiness as I get from looking at generative AI images.
Generative AI “art” might have some sort of future within the video game industry. It might even have a future in other creative fields, too. But the people who actually give a fuck about supporting human artists won’t give it any space because beyond the arguments about data sourcing and water usage and replacing people with an Emptiness Machine, the one thing generative AI can’t replace is the feeling one gets when they see a work of art made by an actual fucking person. That’s how I know generative AI doesn’t have a bright future ahead of it: Show me a piece of generative AI “art” that has had any kind of cultural impact beyond “ew, look at that slop, everyone bully the company who thought that was a good idea” and I can show you anything from Avengers: Endgame to Manos: The Hands of Fate in response. The fucking four-note Torgo theme has had more of a cultural impact than any individual piece of generative AI “art”. Wanna know how I know that? Most everyone will forget that shit-ass AI-generated McDonald’s ad by this time next year (other than to point at whatever McDonald’s does next and compare it to that ad), but anyone who hears the Torgo theme will have it in their head for the rest of their lives. Even one of the worst movies ever made by the hands of fate (man, a movie where “every frame … looks like someone’s last known photograph”) has more of a “soul” than the average AI-generated “funny animal” video. Pre-digested AI slop isn’t art. Manos is art.

The only genuine reason to have any kind of “nuance” about generative AI is to separate generative AI from other forms of what we’ve colloquially called “AI” in the past. In the gaming world, that can mean separating generative AI from the use of algorithms to create NPCs (i.e., making “AI-controlled” characters). Beyond that? Nah, fuck the use of generative AI. If programming and concept art and voice acting can be done by a person, it should be done by a person. And if you think there’s an excuse for not having it done by a person, you are—objectively speaking—the wrongest person on the planet.
Re:
You need to remember that even without generative visual AI, humans had already discovered nonsense formations or ideas that were not manually crafted with full intention by another human, and were able to turn them into a form of art via inspiration, transforming them. This already happened with Photoshop manipulation and procedural generation (think of landscapes, like in Minecraft), and some people mixed these splashes and weird formations with a lot of copyrighted works without permission, including manual editing with transformation.
So if someone used a few extra robot codes, using a robot to experiment with mashing copyrighted pictures together to form a new idea, which is then controlled and transformed, I don’t see this as any different from all of that; it’s just another example of human experience and creativity.
That being said, current gen AI does have problems, but if someone used it only to get a basic idea, then put real blood into transforming it, it’s still human art after the fact. Even if using the tool fuels something that’s fueling bad actors (so, ugh, best not to use it again), the end result, taken separately, is still proof of creativity, which proves it can be useful for creative aims. But it would be better off using lawful public domain and lawfully licensed images in the first place.
Re:
Well, if that’s true, you have nothing to worry about. Everyone will spot the soulless AI output and reject it.
Of course, that obviously won’t happen. People are quite capable of making soulless trash without advanced tools, and happily consume it. And a reasonable, common complaint about AI is that people can’t tell.
This kind of aesthetic chauvinism is mind-bogglingly goofy to me. Human artists are also just machines that assemble a quilt of their inputs into outputs, except we’re conscious of the process and have a lot of biological and social baggage.
Objectively speaking, copyright infringement is good, and AI is a striking example. If copyright stands in the way of innovation, that’s another strike.
The only complaint you make that holds water is, well, water. Energy requirements are high for hyperscalers. But as the technology advances, efficiency will increase — that’s what all those annoying accelerator coprocessors they’re putting in everything are for.
I am not a fan of AI, in fact it drives me up the wall having to avoid LLM summaries, educate family members, etc. But anti-AI people are radicalizing me in favor of it. Their arguments are just so bad.
Re: Re:
Hey, so, I mentioned the MCU in the same sentence as Manos for a reason, and it wasn’t because Endgame is The Objectively Best Movie Ever™.
A lot of people can, though—especially if people are generating images/video with the default models and shit. I’ve seen more than enough slop to know that most of the people posting slop aren’t using anything beyond the same basic-ass “AI style” that uses the same wonky shading, lighting, coloring, and anatomical construction across every generated image. Once you’ve seen enough slop, the style stands out so much that spotting it is as easy as spotting a pile of shit against an otherwise empty tile floor. The real challenge is video, but like with images, if the videos are being generated with default settings/models, it’s real easy to spot the slopshit when you learn to recognize the style.
But the best artists adapt and evolve their styles beyond their inspirations. Osamu Tezuka, who is often referred to as “the father of manga”, created a style that plenty of other people aped back in his day—but a lot of people who grew up with his style as an inspiration evolved their own art along the way, and as those artists became prominent, they inspired more people, and so on and so forth until you get to the anime and manga of today. You can trace influences and inspirations and evolution with human artists. With an AI generator, all you get is slop on command, barfed at you in whatever style you want (provided the model can be made to recognize the request).
I’m willing to own the fact that I’ve experimented with generative AI in the past and found it lacking from an artistic standpoint. I’m also willing to admit that while the idea of “any art you want, on command, to your heart’s content” is tempting, the actual “art” is so devoid of any life and humanity and “soul”—of any genuine complexity in both style and substance—that I’d rather watch the worst movies ever made without any MST3K-style riffing than subject myself to some AI-generated slop churned out by some mediocre asshole who thinks his slop is The Next Big Thing™ instead of machine-generated bullshit designed to placate his personal tastes. If the best you can admit to is “AI-generated art might suck on every level and destroy the environment to boot, but people complaining about it is making me want to swallow all that slop anyway”? You’re not as “anti-AI” as you want people to think you are.
Re: Re: Re:
So, basically, when trash is made without certain tools, it has a magical sparkly human essence, but when a program is involved, it’s extra-trash. Got it. Doesn’t seem arbitrary at all.
So you concede that like other artistic tools, there are ways for skilled users to create better art. You just don’t like the fact that most art is bad, and this particular tool makes it easy for people to make more bad art.
What makes you think an artist who makes use of AI won’t evolve? What a strange idea. Not even taking into account the ability to evolve prompts, train new models, etc.
User error. I did find the old early models more fun though.
I don’t give a shit what people think of me, I was trying to provide context for my attitude toward certain prevailing uses of AI. Turns out, new technologies involve growing pains — who’da thunk? Unless you prefer to pretend it will all just go away in some magical event where the bubble bursts at the same time as an EMP wipes every hard drive, it’s not going away. Whining about its existence is pointless.
Re:
Also, this “if you can’t pay people to do your drudgery, you just shouldn’t make it!” shit drives me FUCKING BANANAS. You just… hate independent artists and studios? Fuck you if you’re broke because the program hurts my feelings? Fuck all the way off.
Re: Re:
And what’s more, that’s not even a fucking excuse to not make a thing! Plenty of indie devs with budgets ranging from “zero dollars” to “I got a couple hundred bucks I can afford to blow this month” have bought (or downloaded for free) enough assets off the Internet to cobble together at least a shitty version of the thing they want to make. And as was pointed out by multiple people on social media, people who are all “I can’t draw for shit” don’t have a similar excuse when it comes to concept art. A big-name filmmaker like Rian Johnson drafted the concept of an idea for Knives Out with a stick figure. If he can turn a shitty little sketch into an actual realized shot in his movie, what the fuck is anyone else’s excuse?
Re: Re: Re:
So yeah, just “fuck you try harder”. If you specialize in one area, and you can’t find someone to do the areas you can’t do alone, you are not allowed to use the computer program, because it violates some obscure moral precept. And if you aren’t an artist in any way, but you need assets for your business, well, tough shit, trying to scrape together a living is no excuse for taking shortcuts, in fact you might as well be a thief since you didn’t waste untold hours and dollars on a shittier slower way to get the job done.
Re: Re: Re:
Also like, are you serious? Rian Johnson?! Wow, a successful Hollywood filmmaker whose entire job is getting paid to tell other people what to do can pull it off, so anyone should be able to. “Look at what Michelangelo and Steven Spielberg did without AI. Why would you need it?”
Just a reminder that there is an analogy showing that the use of AI in media development really is black-and-white, no nuance: the Hot Stove Analogy.
The longer an artist or dev team keeps touching the hot stove, the more permanently scarred they are and the more it mars the customers’ perception of the final product.
Corporations (Tim Sweeney) want to hand out gloves and oven mitts (banning mandatory disclosure of AI use), even though customers can guess with reasonable accuracy who has been touching the hot stove too long from the damage to the gloves, and treat it the same as scars anyway.
Meanwhile, advocating for touching the hot stove is advocating for faster, sloppier line cooking, because you can focus on other things while your hand is on the burner.
Then the customers wonder what the fuck you’re doing when you’re sucking your thumbs or have bandaged hands while moving fast between glimpses of development.
The heat of the stove, of course, is the size and quality of the copyrighted material scraped to make the training set.
You can advocate around it all you want, but touching hot stoves creates fruit of the poisoned tree in the minds of the most consistent paying customers with the most money. Try to make up for that in volume by serving gruel to the homeless at your own risk (you’ve already priced everyone younger than millennials out of the game development market, other than indies with small, unnoticeable stoves).
Why don’t you cover how AI is being built by socializing and subsidizing the cost onto the public, while the only ones actually profiting are the biggest and richest companies?
I use AI to “rough in” SQL queries and code snippets. I don’t feed it any data, so I still have to make adjustments for column/variable names, among other things, to make what it generates applicable to my use cases. It’s literally only useful to me for outlining syntax, but I guess some people would prefer I hire new staff to do that slower. I simply cannot afford to pay those people any mind, however, because that’s idiotic.
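For what it’s worth, the workflow described above looks something like this. A minimal sketch; the table and column names are hypothetical, made up just to show the “fix the names” step:

```python
import sqlite3

# Hypothetical schema for illustration only -- the model never sees real
# data, so its first draft guesses at table and column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10, 25.0), (2, 10, 40.0), (3, 11, 15.0)],
)

# AI-drafted skeleton (syntax is right, names are wrong):
#   SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id;
# Hand-corrected to match the actual schema:
query = "SELECT cust_id, SUM(total) FROM orders GROUP BY cust_id ORDER BY cust_id"
rows = conn.execute(query).fetchall()
print(rows)  # [(10, 65.0), (11, 15.0)]
```

The skeleton saves typing out the aggregate boilerplate, but every identifier still has to be checked against the real schema by hand.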
Re:
Except AI vibe coding is useless to senior engineers who can’t be bothered with prompt engineering and who work faster without the hurdle.
It’s useful to senior engineers who can frame the prompts and questions properly, but only moderately so compared to their peers.
And then the LLM AI code just hamstrings junior coders right out of the gate as soon as they try to generate something they don’t understand and have to troubleshoot it when they could have written it fine from scratch in half the time.
It’s not remotely the same argument as training sets built on scraped copyrighted data and transformed into offensive shit.
Re: Re:
It’s more efficient than Googling through stack overflow if you’re not a complete dullard.
AI = NFT pt.2
‘AI’ is a tool by and for the profits(/’productivity’) uber alles crowd, and frankly, fuck every last one of those psychos. They and their mindset are at the root of too many of our problems.
Re:
A computer program that can pass the Turing test, create photorealistic images and voices, and generate working programs from natural language input is… the same as fucking NFTs? Do you people hear yourselves?
Stolen training input not mentioned at all
A lot of the reason people have an issue with Larian using genAI isn’t even mentioned in this article — it’s trained on images stolen from real artists. There’s an ethical issue with using it at all, and it’s especially insulting to force your artists to use it.
He says that everyone at his studio is basically okay with how they’re using it, but has he done an anonymous survey? You have a bunch of employees in an industry where getting a job right now is extremely difficult, at a studio which is notoriously difficult to get a job at — have you considered that maybe they’re not raising a stink about this to you because they’re afraid of rocking the boat and potentially being replaced by someone who doesn’t?
Re:
Copying is not theft. This is a complete non-issue for anyone who isn’t a big company scared of getting sued or a diva who thinks they’re a True Artiste.
Re: Re:
lol I guess every artist or musician I’ve ever met is a “true artiste diva” then.
Re: Re: Re:
Many such cases.
Really recommend reading this article on how GenAI is affecting concept art as a job.
https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-only-make-their-jobs-harder/
The “middle ground” in this discussion, if we want to go there, is that GenAI does not give productivity increases in its current form. It can be useful, but it does not (at least when it comes to concept art) replace people. This tracks with what Vincke said in the controversial interview, where he actually says that GenAI uses more time than otherwise.
This doesn’t mean that AI won’t replace jobs, but let’s give a bit more context: according to current people IN the concept art industry, AI is not there yet for concept art.
I think this applies to a lot of laymen’s understanding of jobs; on a base level AI replaces “the easy stuff” but I mean, has it actually replaced any jobs at say, BestNetTech? I don’t think it’s there yet. Companies cutting jobs to replace people with AI are either trimming the fat, or going to swiftly run into issues. Microsoft is a potential case here, with all the issues from Win11 getting worse.
Re:
That depends on what one means by “the easy stuff”. If we’re talking about an email template for an office-wide message? Sure, I can see how it would be helpful. If we’re talking about concept art? You can literally grab a pen and a napkin and draw your idea in the shittiest way possible, and that would still be of more value than the pre-digested AI slop that some dipshit C-suite failson got barfed back to him by an image generator.
Re: Re:
How about you feed your sketch to an AI to make it more realistic? That’s almost a perfect workflow for conceptual thinking. All the cognition is spent on a high level, you rough out the important bits, and your idea can be immediately visualized by people who don’t have access to your imagination.
“Just learn to draw better!” is the carbon-fascist’s version of “just learn to code!”
Re: Re: Re:
How about I give myself voluntary brain damage instead?
As I pointed out in another comment, Rian Johnson sketched out a storyboard for a shot in Knives Out using a stick figure. Sure, I’d bet on someone he worked with during the production of that film iterating that shitty sketch into something a little more substantial—but it’s also possible that Johnson’s sketchy-ass storyboard image was the only reference for the shot until he could set up a camera and frame it himself. My point there, as it is here, is that artistic ability is ultimately irrelevant to making people understand an idea. If you can communicate an idea in a solid enough way that people can understand it even through stick figures, you can find a way to turn that idea into something more substantial. People have spent decades making movies and TV shows and games and books and music without any kind of generative AI. Why would we ever need it now, other than some shithead C-suite brunchlord claiming (without evidence!) that generative AI would make some part of the process “more efficient” even as he uses it to replace human artists and keep more money for himself?
Re: Re: Re:2
We’ve gotten by this far with paint, why should we use cameras?
Re: Re: Re:
“carbon-fascist”
Wow.
That’s a new one for me.
This might be finally reaching the “use it too much and making it meaningless” dividing line.
Re: Re: Re:2
It’s a term from Iain Banks’ sci-fi for people who buy into the “magic meat” theory of consciousness, the idea that sentience is limited to carbon-based life forms, and who believe AI is bad and should be eliminated.
Re: Re:
Sorry, I didn’t fully flesh out my thoughts- I tried to keep it succinct.
It replaces the “easy stuff” but that easy stuff is not so easy and is important to the development. It’s only easy stuff from the C suite who wants a rubber stamp yes on any idea they produce.
The part AI fails the most at (giving any sort of pushback) is also where the most creative decisions are made.
It's too hard to do anything at all....
The lazy LOVE AI.
Their thinking? It’s going to be used anyway, so why not use it NOW.
Does that justify anything?
Uses for AI:
Better multiple sentences and answers in games. Instead of having 3 answers to 1 question, it can have many.
Security, in that it NEEDS the net to do much of anything.
Programming? Art? Design?
As with most games, once you have the engine and basic programming, you build around what you have and make multiple games.
Welcome aboard
the Bored Ape Yacht Club
My dearest dudes of BestNetTech, whom i respect and love: There is no nuanced conversation to be had about AI any more than there is a nuanced conversation to be had about Carr, or Trump, or Musk, or the prison telecom industry.
I'm fun at parties, promise
This comment section in particular is an unbearable navel-gaze held by people who have very little idea how reality operates. There is no ability to scale AI or use it long-term without consigning the entire planet to a repeat of the Permian extinction; there is no reality in which there are enough freshwater resources on Earth for humanity, agriculture, and data centers at scale. You literal nitwits are arguing about who’s right about video game ethics as the subject of the discussion turns your forests to cinders, boils your oceans, and poisons the real, living human beings caught in the eye of the digital storm. These reckless technologies are causing a prolonged mass psychosis in thousands of people. While data centers poison human beings, your own blood, worth more than any machine, you argue that it is ever ethical to participate? Shame on every single person who debated the ethics of copyright infringement before pointing out, justly, that this industry will escalate climate change and water scarcity at a rate that will slaughter millions. Pull yourselves out of the abyss that is ever seeing a machine as equivalent to human life, and understand that this is not the time for those privileged enough to debate video games to do so over corpses.
Your principles are meaningless if you can’t give up a video game in their name, and the times ahead will flatten you like a frog on the pavement. Anyone who refuses to think without a machine, which is operated by a massive corporation that is surveilling you and stealing your information to further create what Silicon Valley quite literally believes to be God, has lost their soul. You’ve traded the bodies of poisoned humans just to make your life more convenient, given that there is no circumstance in which any human actually needs this technology to live, or even to be happy. It’s pure convenience, the complete reduction of the most complicated brain on Earth to a transaction. Congratulations. Previously in the human story, you’d need to be an oil baron to understand what it felt like to breathe a sigh of relief standing on the neck of someone else.
Re: Strange thought/idea.
All the heat generated: it’s a proven fact in Alaska that all you need is heat expansion to MAKE POWER. And considering MOST of these are probably going to have a solar-battery backup, and with the heat generated, from 40-50 degrees up to over 100-120+ (below boiling), heat expansion would be perfect for low-power generation.
BUT, BIG power DON’T like distributed systems. The LEAST these things can do is offset the power they need.
Think about it.
Re:
I drive a car sometimes too.