Expanding on this:
Ray tracing dates back to the late 1960s and has long been the standard for offline CGI rendering, the kind of thing film studios like Pixar do. A ray is cast from the camera through each pixel into the scene to gather lighting, shading, reflection, and shadow information. Up until now, games haven't really been doing this; what we've been doing is rasterizing: take the polygons, project them onto the screen, and fill in the covered pixels. Shadows? Take all the objects in the room, point a camera at them from the light's point of view, take a snapshot (a shadow map), then project it onto the objects in the final image. Or my favorite: take a snapshot of the environment, and that's your reflection!
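To make the contrast concrete, here's a toy sketch (in Python; everything here is illustrative, not from any real renderer) of the ray-tracing side: cast one ray per pixel and test it against a single sphere.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed normalized, so a == 1
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Cast one primary ray per pixel of a tiny 4x4 "screen" toward a sphere.
WIDTH = HEIGHT = 4
sphere_center, sphere_radius = (0.0, 0.0, 3.0), 1.0
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map the pixel to a point on an image plane at z=1, camera at origin.
        px = (x + 0.5) / WIDTH * 2 - 1
        py = (y + 0.5) / HEIGHT * 2 - 1
        length = math.sqrt(px * px + py * py + 1)
        direction = (px / length, py / length, 1 / length)
        hit = ray_sphere_hit((0, 0, 0), direction, sphere_center, sphere_radius)
        row += "#" if hit else "."
    print(row)
```

A real renderer would then shade each hit point and recurse for reflections, but the per-pixel structure is exactly this.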
Modern techniques have tried to bolt some ray casting onto this rasterization process, and screen-space methods are the most popular. Using the depth and normal buffers collected by a deferred renderer, a partial reconstruction of the scene can be ray-marched to, say, calculate reflections on weird shiny-shaped objects. But if the ray hits something that isn't on screen (stare into a lake and you should see your own face, which the camera never rendered), there's simply no data to reflect. Some games work around this with multipass rendering, drawing the scene again from another viewpoint, but that's prohibitively expensive and far less flexible.
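A rough sketch of why screen-space reflections fail that way (my own simplified model, not any engine's actual code): march the reflected ray pixel by pixel against the depth buffer, and notice that the function has no choice but to give up the moment the ray leaves the screen.

```python
def ssr_march(depth_buffer, x, y, dx, dy, ray_depth, depth_step, max_steps=64):
    """March a reflected ray in screen space; return the hit pixel or None.

    Returns None both on a genuine miss AND whenever the ray exits the
    screen -- that's the fundamental limitation: geometry that was never
    rasterized into the buffer simply cannot show up in a reflection.
    """
    h, w = len(depth_buffer), len(depth_buffer[0])
    for _ in range(max_steps):
        # Advance one step across the screen and deeper into the scene.
        x, y, ray_depth = x + dx, y + dy, ray_depth + depth_step
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return None  # ray left the screen: no data, no reflection
        if depth_buffer[yi][xi] <= ray_depth:
            return (xi, yi)  # ray passed behind the stored depth: call it a hit
    return None
```

Real implementations add thickness tests and hierarchical stepping, but the off-screen failure case is the same.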
The only way to get accurate shadows, reflections, refractions, lighting, fog, and all sorts of volumetric and displacement effects is true ray tracing, where every pixel's color comes from actually following light paths through the scene. This is the technology games have been striving toward since the dawn of 3D graphics, and it's finally becoming a reality!
That's cool! Just let me know when you do. I've actually changed a lot in the game and controls: the momentum feels a lot better now, and you can run down a hill to gain speed.
Posted September 1st, 2018