Nvidia’s new DLSS 4.5 announcement comes with the promise of 4K 240 Hz path-traced gameplay, and you know there’s only one way we’re going to get there right now: AI. With a combination of a second generation transformer model and an expanded Multi Frame Generation feature offering up to 6x frame gen, the green team reckons it can deliver “incredible 240 Hz+ smoothness” in some of the most graphically intensive games around.
Though I’m guessing your mileage may vary.
We’re promised better temporal stability, reduced ghosting, and smoother edges with DLSS 4.5 thanks to a mix of that second-gen transformer model and improved training for the model itself. At last year’s RTX Blackwell CES event, Nvidia’s VP of applied deep learning research, Brian Catanzaro, noted that it has a supercomputer “with many 1000s of our latest and greatest GPUs, that is running 24/7, 365 days a year improving DLSS.” And Nvidia’s had another 12 months to make that training dataset even bigger, and to get even better at analysing just where its upscaler is going wrong.
“When the DLSS model fails it looks like ghosting or flickering or blurriness,” Catanzaro tells us. “And, you know, we find failures in many of the games we’re looking at and we try to figure out what’s going on, why does the model make the wrong choice about how to draw the image there?
“We then find ways to augment our training data set. Our training data sets are always growing. We’re compiling examples of what good graphics looks like and what difficult problems DLSS needs to solve.
“We put those in our training set, and then we retrain the model, and then we test across hundreds of games in order to figure out how to make DLSS better. So, that’s the process.”
There’s also more compute being used for DLSS now than even the previous generation of transformer model, which itself used four times more compute than the old convolutional neural network (CNN) models DLSS used to be built on. We’re told the second-gen transformer now uses five times more compute (presumably versus the CNN), enabling greater contextual awareness and smarter pixel sampling.
This is essentially where the transformer models are smarter than the old CNN ones. CNN models were fine for analysing big images, but transformer models are better at picking out the finer pixel details in the data, which means compute resources can be spent more efficiently. Though the first iteration was certainly far from perfect, as you can see from my own testing of the first-gen transformer model.
“The idea behind transformer models,” Catanzaro explains, “is that attention—how you spend your compute and how you analyse data—should be driven by the data itself. And so the neural network should learn how to direct its attention in order to look at the parts of the data that are most interesting or most useful to make decisions.
“And, when you think about DLSS, you can imagine that there are a lot of opportunities to use attention to make a neural graphics model smarter, because some parts of the image are inherently more challenging.”
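To make Catanzaro's point concrete, here's a minimal self-attention sketch in plain Python. This is purely illustrative and not DLSS's actual architecture, but it shows the core property he describes: the weighting each output receives is computed from the data itself, rather than being fixed in advance.

```python
import math

def softmax(xs):
    """Turn raw similarity scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """Scaled dot-product self-attention over lists of equal-length vectors
    (think: one vector per image patch). The attention weights are derived
    from the data itself, so 'interesting' patches get more influence."""
    d = len(k[0])
    out = []
    for qi in q:
        # Similarity of this query against every key, scaled by sqrt(dim)
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        w = softmax(scores)  # data-driven focus: learned nowhere, computed here
        # Each output is a weighted mix of all value vectors
        out.append([sum(wi * vj[t] for wi, vj in zip(w, v))
                    for t in range(len(v[0]))])
    return out

# Two toy "patches", each attending over both
patches = [[1.0, 0.0], [0.0, 1.0]]
result = attention(patches, patches, patches)
```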
Nvidia must believe it has made the graphics model smart enough now that it can get past those occasional visual issues and get away with turning the MFG dial up to 6x without ending up with a laggy, artifact-ridden gaming experience.

You will still need an RTX 50-series GPU capable of delivering 60 fps in your chosen path-traced game to take advantage of the extra frame gen levels, however. Because even if you’re seeing hugely inflated frame rate figures, that is all for naught if your PC latency climbs into triple figures.
I’ve experienced that throughout all of my RTX Blackwell GPU testing from last year, where it became clear further down the stack just how limited MFG gets when the input frame rate is low and PC latency is high.
That’s the same situation for any level of frame generation—whether on Nvidia or AMD—and it’s certainly not the panacea for poor gaming performance it might have at first appeared; you still need a whole heap of graphical grunt to make it a functional experience.
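A back-of-the-envelope sketch shows why that grunt matters. The numbers and the fixed overhead figure below are my own illustrative assumptions, not Nvidia's published model, but the key relationship holds: your input is only sampled on rendered frames, so latency tracks the base frame time, no matter how many generated frames are slotted in between.

```python
# Illustrative sketch of why MFG needs a healthy base frame rate.
# The 10 ms fixed overhead is an assumption, not a measured figure.

def mfg_estimate(base_fps: float, mfg_factor: int, overhead_ms: float = 10.0):
    """Estimate presented fps and a rough input-to-photon latency.

    base_fps    : frames the GPU actually renders per second
    mfg_factor  : total output multiplier (e.g. 6x frame gen)
    overhead_ms : assumed fixed pipeline overhead (render queue, display, etc.)
    """
    presented_fps = base_fps * mfg_factor
    # Generated frames inflate the fps counter, but input is only sampled
    # on rendered frames, so latency follows the base frame time.
    latency_ms = (1000.0 / base_fps) + overhead_ms
    return presented_fps, latency_ms

for base in (30, 60):
    fps, lat = mfg_estimate(base, mfg_factor=6)
    print(f"{base} fps base -> {fps:.0f} fps presented, ~{lat:.0f} ms latency")
```

The frame counter sextuples either way, but only the 60 fps base keeps latency in comfortable territory, which is exactly the pattern I saw further down the RTX Blackwell stack.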
But if you do have that power already humming away in your rig, then another new feature being added to the Nvidia App sometime in spring will be of interest. Dynamic Multi Frame Generation allows you to use the DLSS Override feature of the app to tie the Dynamic MFG feature either to the maximum refresh rate of your screen or to a custom fixed level.
This then automatically adjusts the level of MFG needed to hit that frame rate, with Nvidia demonstrating what that might mean across a selection of games if you’re lucky enough to have an RTX 5090. The example of Black Myth Wukong running at 246 fps on an RTX 5090 at 4K with path tracing, DLSS 4.5, and 6x MFG looks very impressive, more so given that it’s reportedly doing so with a PC latency of just 53 ms.
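Nvidia hasn't published how Dynamic MFG picks its multiplier, but the basic arithmetic is easy to sketch: choose the smallest frame-gen level that lifts the current rendered frame rate to the target, capped at the 6x hardware maximum. The function below is a hypothetical illustration of that logic, not Nvidia's algorithm.

```python
import math

def pick_mfg_level(base_fps: float, target_fps: float, max_level: int = 6) -> int:
    """Hypothetical Dynamic MFG selector: smallest multiplier that reaches
    the target frame rate (e.g. the display's max refresh), capped at the
    6x maximum DLSS 4.5 supports. Not Nvidia's actual algorithm."""
    if base_fps <= 0:
        return 1
    level = math.ceil(target_fps / base_fps)
    return max(1, min(level, max_level))

# A 240 Hz display with a 60 fps base render only needs 4x,
# while a weaker base render gets pushed toward the 6x cap.
print(pick_mfg_level(base_fps=60, target_fps=240))  # -> 4
print(pick_mfg_level(base_fps=41, target_fps=240))  # -> 6
```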
That should be eminently playable, and I’m looking forward to testing that out in person when the feature is fully released to the public at large. Especially just to check out the impact of that improved transformer model and what effect it has on the frame gen issues that have put many a gamer off using the feature in the past.
But, outside of MFG, the exciting thing is that the new DLSS 4.5 Super Resolution model isn’t tied to the RTX 50-series graphics cards, meaning any RTX GPU will be able to take advantage of it. And because of the DLSS Override feature of the Nvidia App, there will be a ton of games—reportedly over 400 at launch—that will be able to use it right away.
That ought to mean every Nvidia RTX owner’s AI-powered gaming will get that bit smarter, clearer, and sharper as 2026 goes on. But whether DLSS 4.5 is enough to convince a sceptical public that Dynamic Multi Frame Gen can be a feature you enable automatically, as upscaling has arguably become, remains to be seen.
