DLSS 5 is on track for a Fall 2026 debut and replaces a game’s original textures with AI-infused versions to make them hyperreal. Or, out of one uncanny valley and into another!
Slop: The work is not handcrafted, therefore it is slop. Especially if the results aren’t natural, don’t fit the game, or have problems. Or do you think the artist goes through each result and adjusts it? No? Then it is slop by definition.
Every slightly reasonable response in this thread was downvoted.
The lemmy hate train must chug on.
First, AI “only does the lighting, i swear, guys!”
Far from the first circlejerk slop, tho. I’m sure like in all circlejerks, the people going all into their presumptions will surely admit they were wrong when proven so, surely.
If NVIDIA was lying and this was all marketing, they would have definitely done more to hide the clearly obvious flaws of DLSS 5. It’s a shame that we can’t have a decent discussion about them because the circlejerk slop is just slopping the argument down into a hallucination.
I’ve actually just been corrected: Jensen did refer to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that it could be said that I am a supporter of generative AI now.
Your usage of “slop” is really confusing.
But I understand what you’re saying (I think). Hopefully you find someone to discuss with!

Edit: lmao nvm, I’ve happened across your other comments in one of the other posts and you clearly aren’t interested in discussion and get offended when people disagree with you.
Also, you come across as a slop-supporter.

Is that so? Thank you for your constructive criticism. No, I was not interested in a discussion, but I was under the impression that neither were you. My support for AI slop is conditional and rather minimal, but I do not believe the concept applies here. Thank you for your laughter, it brings joy into the thread.

I’ve actually just been corrected: Jensen did refer to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that your suspicion has turned out to be right: I do support some form of generative AI now.
I think they took this to heart

Why use AI to generate a frame every 8.3ms when you can just pay a person to generate a frame every 8.3ms, ez.
AI bros are so stupid.
Problem with gritty post-apocalyptic grim-dark games is that not enough people have Mar-a-Lago face and stage lighting.
Games should have a setting that toggles between the real textures and whatever is on the right
It should be a toggle between:
“Real textures” - “I think women are objects”
I mean, I don’t think this current tech does anything to materially objectify the subject (any more than the original). What it does is to smooth and brighten certain artifacts.
That might genuinely be a benefit in a game with poorly rendered models or bad lighting inherent to the game. But Resident Evil doesn’t have this problem. They made an explicit choice in setting the scene as dirty, cloudy, and grim. This modeling reversed that for everything, not just the lady bits. You’re going to have zombies and Scagdead and Lickers all brightened up and polished.
That might genuinely be a benefit in a game with poorly rendered models
No.
As for the rest, yes.
There are men out there with so few connections to real women that they truly think a girl should always wear makeup, regardless of time and circumstance.
- Angry at women for not doing up hair/makeup daily
- Thinks hair/makeup is a trap to trick men into treating you like a human being

You get both from the same chud online influencers
Incels should never be hired for anything involving women.
I’m suing you for the brain damage I got by looking at this
if it helps, iirc the image on the right was made in a circlejerk subreddit making fun of chuds, but then it got taken seriously by real chuds on twitter
Tried it with an old Simpsons game. The result is amazing!

Also, they had this running on two 5090s in a SLAI mode.
So not only will they be shitting all over the game designers’ and artists’ work with AI slop, you’ll have to buy two top-of-the-line cards for the privilege of having that slop served to you.
ETA: it’s also two top of the line cards that are massively increased in price due to AI slop.
Are novideo even making 5090s anymore?
I read that as “two slop of the line cards” and I’m not even mad about it. I think it works.
That is what they were doing for this test but that is not what will be required to run this once it releases. At least, that’s what they’re saying now
Ah, yes, because shiny graphics that require the absolute peak of expensive gaming GPUs is totally going to get people to take out a mortgage to look at the incredible post-processed details that change every time you look at them. \s
Isn’t this just upscaling?
No, it’s a stylistic filter. Think of an oil-painting filter as a comparison; it’s not just upscaling the resolution.
Upscaling is supposed to look like the same thing at a higher resolution, whereas this is specifically making a point about looking different
I’m so glad that the GDP of a medium-sized country has gone into turning up the contrast on some videogames a little bit
No way this is real! Right?
Sadly, it’s real. Direct from Nvidia: https://nvidianews.nvidia.com/news/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games
People in this thread seem to think this is “AI slop” when they could stop for a second and read the title - this isn’t even changing geometry, that’s the thing. You can take this and run a photorealistic Cyberpunk 2077 without having to be stuck in some weird daylight fog environment, but it only really benefits games made for raytracing and the people willing to throw their money away for shiny thing. Then again, DLSS did this so it might even be worthwhile if people are willing to tone down their resolutions and have no moral qualms about continuing to support one of the worst offenders in the AI bubble…
Hope you like slop in your slop
What does this even mean?
DLSS applies upscaling to video games. So, even if we buy the “call anything made by AI ‘slop’” meme then wouldn’t the headline be ‘Hope you like slop in your video games’?
Some people are so anti-AI-brained that they don’t even make sense. I’m just picturing the OP going back and forth trying to wedge the word ‘clanker’ in there somewhere but giving up and posting this nonsense instead.
Slop in slop = AI textures fed into DLSS
First fake frames, now fake textures
That’s…it. You just didn’t get it, my man…
Edit: idk why I expect the pc gaming community to be reasonable, my bad
my brother in… ellipses…
…if I don’t get it… after you’ve… explained… it.
Wouldn’t… that be your… problem?
Insanely impressive that it’s able to do that in real time.
Not at all, tiktok does the same on a phone…
If it’s doing it on the video, yeah, though I’m sure they’re using the absolute best hardware available.
But it would be a lot more effective to run it on the flat texture assets themselves.
Dual 5090s, so probably standard on more mid-range GPUs in a few more generations.
It’s going to be interesting to see what and when it generates detectable artifacts. Reminds me of this: https://www.youtube.com/watch?v=DKCyk3CeUFY
Wtf is with this editorialized title and these comments? This isn’t generative AI using an LLM. Games that have been “upscaled” and “enhanced” in the past 10 years have likely gone through a similar process, just not in real time.
And instead of charging you another $90 to get it for an individual game you already own, you get it with a driver update for a bunch of games, on an opt-in basis.
Before people start going braindead circlejerk, the way it’s working is by changing lighting only, it isn’t changing geometry, making about 90% of the memes people are replying with wrong. Basically works best with games that have high resolution raytracing modes (like Cyberpunk) and on PC rigs people can no longer really afford.
Then again, it’s literally in the title, I don’t think there’s any way my comment can fix this level of circlejerk. I dunno, “Ugh, say thing i no lik” so go ahead and downvote and reply with your strawmans …
If the picture in the article is truly representative of DLSS 5, the hair texture is obviously different and has been changed. The DLSS 5 picture even has a slightly different hairstyle.
I think you think you are making an argument, but the hair is the best example. The strands haven’t been changed a bit, all the unique curls, all there. Generative AI would have changed that big time. You might be getting confused by some of the shots like the Starfield ones, that have been taken from different frames (look at the person in the background).
I’ve actually just been corrected: Jensen did refer to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that it could be said that I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
I’d suggest taking a look at the comparisons on Nvidia’s website, because it really makes it obvious how much this is changing things https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/
If we look at the one that’s in the article thumbnail, the blonde woman in Resident Evil, you can see it has made significant changes to her face: her eyes are bigger and the outside corners of them have been moved up, and her lips are much fuller
Edit: also it straight up changes the skin colour of the black football player in an orange shirt, and that’s presumably meant to be a representation of a specific real person. It’s not even a lighting change either, because the shirt is the exact same colour. It’s only his skin that changes
Not only have I done that, I overlayed one image on top of the other in GIMP to test it out with the opacity slider. Her eyes are not bigger, and the corners have not been moved up. The overlay is perfect, and transitions perfectly. I think that what you are referring to is the optical illusion of the eyes appearing to get “bigger” when they get brighter, but if you say, place it around a fixed reference, it is clear they remain the same size.
Regarding the football player, if you look at the entire scene, there’s a dark tone applied to everything, including the soccer ball. It seems to make dark scenes brighter and outdoor scenes darker. Having said that, I agree, the filter does exaggerate the skin color of the football player, but that’s what it alters: the lighting and material properties. There’s even a point where you can place the slider where the transition is seamless enough that it appears to be the same shot of the face. To test whether this was the case, I put it into GIMP and, using just the brightness slider, tried to see whether I could make the colors match just from changing the brightness - and I could.
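The brightness-matching test above can be sketched in code: if two frames differ only in lighting, one should map onto the other with a single gain, leaving near-zero residual. This is a minimal sketch with made-up toy grayscale values, not the actual screenshots, and a single scalar gain standing in for GIMP’s brightness slider:

```python
def best_brightness_match(base, candidate):
    """Least-squares fit of a single brightness gain mapping `base`
    onto `candidate`, plus the residual error after applying it.
    Both inputs are flat lists of grayscale pixel values."""
    num = sum(b * c for b, c in zip(base, candidate))
    den = sum(b * b for b in base)
    gain = num / den
    residual = sum((gain * b - c) ** 2
                   for b, c in zip(base, candidate)) ** 0.5
    return gain, residual

# Toy example: `bright` is `dark` with the brightness turned up 1.5x,
# so a pure lighting change fits perfectly (zero residual).
dark = [40, 80, 120, 60, 100]
bright = [60, 120, 180, 90, 150]
gain, err = best_brightness_match(dark, bright)
print(gain, err)  # → 1.5 0.0
```

A large residual after the best-fit gain would instead suggest the pixels changed in ways a lighting tweak can’t explain.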
What I actually found more interesting is that in every other example, even the clothing folds remained the same - this is the only example where the folds in the clothing seem to change. Looking at the background, there’s also some evidence it’s not the same frame. I doubt it’s from a material change, it’s just that they are really one frame apart.
Without using GIMP, you can also take any one of the football players and zoom in close. Make a note of every feature in their face, because it is preserved, if exaggerated.
but if you say, place it around a fixed reference, it is clear they remain the same size.
You are working with different frames, and you are also flickering between them rather than using the opacity slider, which makes it difficult to see how the brightness and material effects are being altered between the two. All you need to do is gradually shift the opacity of the top layer once you’ve aligned them. You are actually working with the source images while I just did a down-and-dirty snip; I’m going to try getting the source image of the side-by-side comparison from the same frame and see if the higher definition makes a difference. I would make it a Streamable, but I have no experience doing that.
Yeah, just tried it out. The ones actually from the same frame are pretty low res in comparison, but the high res ones you are choosing are from different frames, so even if you align them using the pupil as a reference, zooming out shows just how uneven they are due to minor shifts in position. Unfortunately, that means having to resort to the lower resolution alternative.
Smooth fades with the brightness upped for visibility: left eye, right eye, lips
Here are the source images for you: DLSS off and DLSS on
Streamable is just a video uploading site, you can put any video file on there for free (though it will be deleted after a while). I used OBS to screen-record, it’s free and fairly simple
DLSS 5 fundamentals are based on a new real-time neural rendering model that greatly ramps up photorealism in games by combining “photoreal lighting” and lifelike materials.
It’s also affecting materials, such as the skin on her face in the example. Materials include the textures applied to, or generated by, them.
I assume material as in https://dev.epicgames.com/documentation/en-us/unreal-engine/unreal-engine-material-properties , except at the pixel level.