The next great hurdle in consumer graphics technology is the successful implementation of self-shadowing.
With each successive generation of graphics technology, programmers have taken large strides toward approximating the rendering equation. We've moved from simple vertices and flat-shaded polygons to advanced geometry and multi-pass shader engines. But real-time graphics remain stuck in an uncanny valley, and I believe self-shadowing is the reason.
But of course I do, since I am a major proponent of ray-traced graphics over rasterized ones. Self-shadowing is an oft-ignored advantage of ray-tracing, which I find interesting because a central argument in the ray-trace/rasterize debate is that a ray-tracing engine is aware of the global scene while a rasterization engine is only aware of individual primitives. So of course self-shadowing is difficult for a rasterizer: it has no knowledge of occlusion at all.
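To make the point concrete, here is a minimal sketch (my own illustration, not any engine's actual code) of why a ray tracer gets self-shadowing for free: at each shaded point you cast a "shadow ray" toward the light and test it against the whole scene, including the very object the point sits on. The sphere scene, `in_shadow` helper, and epsilon bias below are all hypothetical names chosen for the example.

```python
import math

def ray_sphere_hit(origin, direction, center, radius, eps=1e-4):
    """Return the nearest positive hit distance along the ray, or None.

    The eps bias skips intersections at (or extremely near) the ray
    origin, so a surface doesn't spuriously shadow itself at t = 0.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))      # half-b of the quadratic
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    for t in (-b - root, -b + root):
        if t > eps:
            return t
    return None

def in_shadow(point, light, scene):
    """Cast a shadow ray from point toward the light.

    Any scene object hit closer than the light occludes the point --
    including the object the point itself lies on (self-shadowing).
    """
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    for center, radius in scene:
        t = ray_sphere_hit(point, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False

# One unit sphere at the origin; a light along +z.
scene = [((0.0, 0.0, 0.0), 1.0)]
light = (0.0, 0.0, 5.0)

# Point on the lit side of the sphere: the shadow ray reaches the light.
print(in_shadow((0.0, 0.0, 1.0), light, scene))   # False
# Point on the far side: the sphere occludes its own surface.
print(in_shadow((0.0, 0.0, -1.0), light, scene))  # True
```

Note there is no special "self" case anywhere: the far-side point is shadowed simply because the global intersection test sees the sphere's own body between the point and the light. A rasterizer, working one primitive at a time, has no equivalent query to ask.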
Yes, developers have found wonderful shortcuts via normal mapping, baked radiosity, and basic occlusion testing (which I'd argue is just ray-tracing by another name), but until engines faithfully reproduce this deceptively simple visual nuance, the state of graphics realism will remain sorely lacking.
And as the rest of the industry bumps up against photorealism, it is this omission that leaves so many renders looking flat and spatially "blurry": a key depth cue has been left out.
More details, less ranting, to follow.