• 1 Post
  • 172 Comments
Joined 2 years ago
Cake day: August 8th, 2024


  • One of my old jobs had me trying to turn the word salad our business lead fed our clients into web apps. It’s truly amazing how someone can say so much while meaning so little, all while convincing people to pay money for it. I ended up having to best-guess their actual business needs on my own. That experience was honestly valuable for seeing through the blather - Jensen Huang with DLSS 5 the other day was a good example.

  • This guy is clearly speaking from a place of technical ignorance. It can’t do any of that, because it’s a screen-space post-processing effect that operates only on the final pixel colours and motion vectors. It has no depth, material, or lighting information. It is purely a generative AI filter, and in the demo it gets a lot of the lighting and material properties wrong. There’s one scene from the Hogwarts game where it turns a cast-iron cauldron into flat ceramic or plastic. It makes up reflections that are effectively screen-space, because it can’t "see" detail that is off screen, and it overrides actual RT reflections with them. It’s bad for faces and bad for backgrounds.
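To make the argument above concrete, here is a minimal sketch of the gap between what a screen-space post-process receives and what a pass would need in order to reason about materials and lighting. The type names and buffer layout are my own assumptions for illustration, not Nvidia's actual API:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical buffer layout for illustration only; the names are
# assumptions, not any real SDK's types.

@dataclass
class ScreenSpaceInputs:
    """Everything a filter like this actually gets to see."""
    color: np.ndarray           # (H, W, 3) final tonemapped pixel colours
    motion_vectors: np.ndarray  # (H, W, 2) per-pixel screen-space motion

@dataclass
class RenderAwareInputs(ScreenSpaceInputs):
    """What it would additionally need to tell cast iron from ceramic,
    or to know where real light sources are - but does not get."""
    depth: np.ndarray           # (H, W)    scene depth
    normals: np.ndarray         # (H, W, 3) surface normals
    material: np.ndarray        # (H, W)    roughness/material data
```

Because the filter only ever sees the first dataclass, anything about geometry, surfaces, or off-screen content has to be hallucinated from final pixel colours alone.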

  • From everything Nvidia has said, the inputs are simply the final pixel colour values and the motion vectors. It’s meant to sit in the same post-processing stack as the upscaler; it’s effectively a screen-space post-processing filter over the final image. Nvidia have said the artist controls are masking (blocking it from certain areas), intensity (a slider value), and some kind of colour re-grading (since it destroys the original grading). It’s extremely limited.
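The described control set (mask, intensity slider, re-grade) reduces to a per-pixel blend between the original frame and the filtered one. A minimal sketch, assuming `generate` stands in for the neural filter itself and with the function and parameter names invented for illustration:

```python
import numpy as np

def apply_screen_space_filter(color, motion_vectors, generate,
                              mask=None, intensity=1.0, regrade=None):
    """Illustrative sketch only - not Nvidia's actual API.

    color:          (H, W, 3) float final frame colours
    motion_vectors: (H, W, 2) float per-pixel screen-space motion
    generate:       callable(color, motion_vectors) -> repainted frame
    mask:           (H, W) float in [0, 1]; 0 blocks the filter per pixel
    intensity:      global slider scaling the filter's contribution
    regrade:        optional callable restoring the original colour grading
    """
    out = generate(color, motion_vectors)  # AI-generated repaint of the frame
    if regrade is not None:
        out = regrade(out)                 # undo the grading the filter destroyed
    if mask is None:
        mask = np.ones(color.shape[:2], dtype=color.dtype)
    # All three artist controls collapse into one per-pixel lerp weight.
    w = (mask * intensity)[..., None]
    return color * (1.0 - w) + out * w
```

The point of the sketch is how little leverage the controls give: they only decide where and how strongly the repainted pixels replace the real ones, never what the filter invents.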