With Work Graphs, the GPU launches a "Node." That node processes the work. If it needs more work (a second bounce, a third bounce, a particle effect that spawns more particles), it spawns a child node right there on the silicon.
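The self-scheduling pattern can be sketched as a queue the GPU feeds itself. This is a conceptual CPU-side simulation, not real D3D12 or HLSL code; the names `WorkItem` and `run_graph` are illustrative:

```cpp
#include <cstdint>
#include <queue>

// Illustrative only: simulates the scheduling pattern Work Graphs enable.
// A "node" processing a record can enqueue child records into the same
// device-managed queue, with no round trip to the CPU.
struct WorkItem {
    uint32_t depth;  // how many bounces deep this node is
};

inline uint32_t run_graph(uint32_t max_depth) {
    std::queue<WorkItem> device_queue;  // stands in for the GPU-managed queue
    device_queue.push({0});             // root node, launched once
    uint32_t nodes_launched = 0;
    while (!device_queue.empty()) {
        WorkItem item = device_queue.front();
        device_queue.pop();
        ++nodes_launched;               // "process the work"
        if (item.depth + 1 < max_depth) {
            // Needs another bounce: spawn a child node right there,
            // instead of asking the CPU for permission.
            device_queue.push({item.depth + 1});
        }
    }
    return nodes_launched;
}
```

The point of the sketch is who owns the loop: here the queue refills itself, where the classic model would return control to the CPU after every step.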
For decades, programming a graphics card has felt like managing a chaotic restaurant kitchen. The CPU (the head chef) had to shout every single instruction: chop the onions, boil the water, plate the steak. If the kitchen fell behind, the chef had to stop everything to micro-manage the cleanup.
In DirectX 11 and classic DirectX 12, the CPU had to record every single GPU task in a massive linear list. If a game needed to calculate shadows, then physics, then lighting, the CPU had to sit there, line by line, building that list.
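That CPU-driven recording looks roughly like this. Again a conceptual sketch, not the real `ID3D12GraphicsCommandList` API; `Command` and `record_frame` are made-up names for illustration:

```cpp
#include <string>
#include <vector>

// Illustrative only: in the classic model, the CPU builds the entire
// linear list of GPU tasks up front, one entry at a time, then submits
// the whole thing. The GPU executes it but cannot add to it.
using Command = std::string;

inline std::vector<Command> record_frame() {
    std::vector<Command> command_list;            // the CPU-built linear list
    command_list.push_back("dispatch: shadows");  // CPU records each task...
    command_list.push_back("dispatch: physics");  // ...line by line...
    command_list.push_back("dispatch: lighting"); // ...before the GPU sees any of it
    return command_list;                          // submitted wholesale to the GPU
}
```

Every frame, the CPU pays this recording cost again, which is exactly the bottleneck Work Graphs are designed to remove.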
We have reached a point where CPUs aren't getting much faster; they are just getting more cores. Work Graphs finally admit that the GPU is the star of the show. By letting the GPU manage itself, Microsoft has effectively removed the traffic cop from the intersection.
Imagine a ray-traced reflection. In the old model, the GPU shoots a ray. If that ray hits a mirror surface, the GPU has to stop, bounce the data back to the CPU, wait for the CPU to say "yes, shoot another ray," and then restart. That round trip costs milliseconds—an eternity in gaming.