• 4 Posts
  • 836 Comments
Joined 9 months ago
Cake day: June 25th, 2025




  • @administrateur@tarte.nuage-libre.fr

    I’m actually having to reply to myself because, for some reason, it isn’t federating correctly, at least as of now: https://web.archive.org/web/20260318212358/https://lemmy.ca/post/61961274 (And after posting this, it only shows this one comment. It’s also been having issues federating the latest edits, but then again I’ve kept editing until now to leave it ready to be archived: https://web.archive.org/web/20260318222736/https://lemmy.ca/post/61961274 )

    The user continues to claim I’m misinforming, which I don’t actually think I’m doing, but that’s up to him. He has not removed or edited any part of the claims that were deliberately false. He can claim I’m misinforming all he wants; it’s not deliberate, and I do not believe I am.

    This thread isn’t about what he disagrees with, and I doubt he is interested in any actual discussion if he won’t bring it up directly. He could, say, edit his comments to remove the false accusations using this evidence he thinks disproves my claims, but which doesn’t.

    But, in my own defense (and I have admitted and corrected myself when I was wrong), what Jensen is saying now does not contradict anything in my previous claims, which were based on actual quotes about how the technology works.

    Citing myself:

    • It takes color and motion vectors and passes it as input through AI - same as previous versions of AI.

    • It generates no new shapes or geometry.

    • It just applies changes to lighting and materials, basically creating a mask over the original.

    Citing what they’ve told us:

    https://nvidianews.nvidia.com/news/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games

    DLSS 5 introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials. Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.

    DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame. DLSS 5 runs in real time at up to 4K resolution for smooth, interactive gameplay.

    Citing what Jensen actually is telling us now:

    https://www.tomshardware.com/pc-components/gpus/jensen-huang-says-gamers-are-completely-wrong-about-dlss-5-nvidia-ceo-responds-to-dlss-5-backlash

    “It’s not post-processing, it’s not post-processing at the frame level, it’s generative control at the geometry level,” he said.

    This is not inconsistent with what we have been told: the filter takes a game’s color and motion vectors and changes the “photoreal” lighting and materials defined over the geometry.

    It would only be inconsistent if you interpret “generative control” to mean changing the geometry itself. We will know soon enough, but my own attempts to test these claims suggest otherwise:

    https://streamable.com/qrdgmt

    https://i.imgur.com/HeQvqVO.gif


    Regardless, if it does turn out to be misinformation, it is not deliberate: it is consistent with what we have been told, and I’ve even bothered to double-check it myself. I find it rather hypocritical to complain about misinformation while continuing to promote it, given that I have not seen them take any action regarding their own comments yet. In contrast, the moment I realized I was wrong about generative AI not being involved, I corrected myself.

    I don’t think his “more than 250+ words, so it must be a bot” comment is in good faith either, given the lengths I’m going to in order to document, reply, and edit all this. I mean, the guy literally just posted a 320+ word post, so either he’s not being self-aware or worse. I apologize for this word stew, but the guy is making this personal.

    Nor do I think he actually cares whether I am continuing to “spread misinformation,” given that he avoided telling me. Being fully aware that I have gone back to edit my comments when told, he avoids slapping what he thinks is an obvious “I told you so” in my face.

    I hope you don’t mind if I don’t pass it through the auto-translator this time now that I realize you also speak English fairly well :)


  • That’s a good point, but this technique isn’t suddenly going to replace people’s works, jobs, or IP.

    I’ve criticized NVIDIA for being the abusive monopoly that it is in nearly every other comment, but what you are describing isn’t even limited to NVIDIA, and frankly, I’m not really going to defend major IP holders, just the small creators. There’s a lot of generative AI endangering small creators, but in my opinion this is not one of them.

    My biggest problem with AI training that ignores intellectual property is the hypocrisy of blindly rushing out technology being used to replace a lot of real-world jobs, not a desire to protect excessively long IP laws or IP copyright trolls and hoarders. People should download their free cars if they can.




  • It’s worse: I made plenty of criticisms to go along with those comments. They aren’t just downvoting anything that breaks the circlejerk. They’ve been given free rein to be as toxic as possible, to the point of getting away with spamming multiple communities with claims that I’m a bot - and I can’t even have my usual fun trolling back in the sea of downvotes, because I wrongly dismissed the idea that this was generative AI, something Jensen himself had called it 😭

    It would be funny if my account were lost over an issue so temporary. I’ve already begun to see the circlejerk try to move past what this actually is by switching from claiming it is AI slop to saying it just makes games look like it - criticism that’s actually more valid. Still, it’s led to the pretty funny circumstance of people implying I’m an NVIDIA shill, which would imply NVIDIA shills now have to operate by telling people not to support NVIDIA and its abusive monopoly feeding the AI bubble through global cartels.





  • That’s ok, I can paste what you were trying to compare here:

    I’m not seeing the relevance of your new video. This filter manipulates brightness and material at a pixel level, which my video shows at several points. At the level of zoom you are trying to show, there are still material differences being applied, like how light bounces off the skin, eyes, and lips, and the filter is working over detail that, as I already warned you, the only directly comparable frames are lacking.

    My video already shows the filter being applied well enough, but if you zoom in to the pixels of an image that lacks the quality to show what it’s starting from, and ignore what’s happening at the quality level that can actually be compared, it can certainly be argued into a different story.

    I think my example already does a decent job of showing that this isn’t just typical image-generation AI, so I’m afraid we’ll have to disagree from here on out, as I don’t think either of us can make the example any clearer to the other. Regardless, if you are as interested in this as I am, it will be something true experts go over and point out when it gets released.





  • You are working with different frames, and you are also flickering between them rather than using the opacity slider, which makes it difficult to see how the brightness and material effects are being altered between the two. All you need to do is gradually shift the opacity of the top layer once you’ve aligned them. You are actually working with the source images, while I just snipped it quick and dirty; I’m going to try getting the source image of the side-by-side comparison from the same frame and see if the higher definition makes a difference. I would make it a Streamable, but I have no experience doing that.


    Yeah, just tried it out. The ones actually from the same frame are pretty low-res in comparison, but the high-res ones you are choosing are from different frames, so even if you align them using the pupil as a reference, zooming out shows just how uneven they are due to minor shifts in position. Unfortunately, that means having to resort to the lower-resolution alternative.
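    For anyone who wants to reproduce the opacity-slider comparison outside of GIMP, here is a minimal sketch of the idea in plain Python. The pixel values and the `blend` helper are my own illustration, not anything from NVIDIA’s tooling: a layer opacity slider just linearly mixes the two aligned frames per channel, so if the geometry were actually changing, edges would ghost mid-blend instead of staying put.

```python
def blend(pixel_a, pixel_b, opacity):
    """Mix two RGB pixels the way a layer opacity slider does:
    result = a * (1 - opacity) + b * opacity, per channel."""
    return tuple(round(a * (1 - opacity) + b * opacity)
                 for a, b in zip(pixel_a, pixel_b))

# Two made-up pixels standing in for the same spot in the
# unfiltered and filtered frames.
off_frame = (40, 40, 40)    # darker original pixel
on_frame = (200, 120, 80)   # brighter filtered pixel

# Sweeping the slider: 0.0 shows the original, 1.0 the filtered frame.
for opacity in (0.0, 0.5, 1.0):
    print(opacity, blend(off_frame, on_frame, opacity))
```

    At 0.5 opacity the example pixel lands exactly halfway, (120, 80, 60), which is why a clean overlay fades smoothly when the underlying shapes are aligned.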


  • Not only have I done that, I overlaid one image on top of the other in GIMP to test it out with the opacity slider. Her eyes are not bigger, and the corners have not been moved up. The overlay is perfect and transitions perfectly. I think what you are referring to is the optical illusion of the eyes appearing to get “bigger” when they get brighter, but if you, say, place them against a fixed reference, it is clear they remain the same size.

    Regarding the football player, if you look at the entire scene, there’s a dark tone applied to everything, including the soccer ball. It seems to make dark scenes brighter and outdoor scenes darker. Having said that, I agree the filter does exaggerate the skin color of the football player, but that’s exactly what it alters: the lighting and material properties. There’s even a point where you can place the slider such that the transition is seamless enough that it appears to be the same shot of the face. To test whether this was the case, I put it into GIMP and, using just the brightness slider, tried to see whether I could make the colors match from changing the brightness alone - and I could.
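    The brightness-slider test can be sketched numerically too. This is just my own illustration of the idea, nothing DLSS-specific: if the filtered pixel values are (approximately) the original values times a single scale factor, the difference is explainable by brightness alone, while a large leftover error would point to something more than a lighting change.

```python
def best_brightness_scale(original, filtered):
    """Least-squares scalar k such that k * original ≈ filtered,
    plus the worst per-value error at that k."""
    k = sum(a * b for a, b in zip(original, filtered)) / sum(a * a for a in original)
    residual = max(abs(k * a - b) for a, b in zip(original, filtered))
    return k, residual

# Made-up channel values: the "filtered" pixels are exactly 1.5x the
# originals, so a brightness-only change explains them perfectly.
orig = [40, 80, 120, 160]
filt = [60, 120, 180, 240]
k, err = best_brightness_scale(orig, filt)
print(k, err)  # 1.5 0.0
```

    A residual near zero means the brightness slider alone can reproduce the change, which is what I found in GIMP.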

    What I actually found more interesting is that in every other example, even the clothing folds remained the same - this is the only example where the folds in the clothing seem to change. Looking at the background, there’s also some evidence it’s not the same frame. I doubt it’s from a material change; it’s just that they are really one frame apart.

    Without using GIMP, you can also take the football player, any one of them, and zoom in close. Make a note of every feature in their face, because they are all preserved, if exaggerated.


  • https://nvidianews.nvidia.com/news/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games

    DLSS 5 introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials. Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.

    DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame. DLSS 5 runs in real time at up to 4K resolution for smooth, interactive gameplay.

    At most it’s texture generation, but if Jensen has already identified it as generative AI, I’m not sure why they would then go and lie about it only affecting lighting and materials at the pixel level.

    It’s image generation just like the prototypes that converted your drawings into “realistic” images are.

    This it is not, and if Jensen hadn’t referred to it as generative AI, I would still think it’s just an evolution of what DLSS had been doing, expanded to lighting and materials. Jensen obviously has no problem referring to this as generative AI, so I’m not sure why he would hide and lie about doing what you claim.