june gloom on 1/8/2008 at 09:13
I'm... really not seeing any point to this. Could someone who lives someplace where it's not 5am explain this to me?
addink on 1/8/2008 at 10:04
In a limited form (as is required for real-time rendering on today's 'limited' processors), ray tracing obviously doesn't add that much to the experience.
However, once the processing power increases*, the results can be scarily realistic.
See here for some explanation and some examples: http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
*) Possibly by vastly increasing the number of cores; ray tracing lends itself very well to parallel processing.
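To give a feel for why it parallelizes so well: every pixel's primary ray is independent of every other pixel's, so the render loop splits trivially across cores. A toy sketch along those lines (one hard-coded sphere, nothing from any real engine):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy "trace": intersect a camera ray with a unit sphere at the origin and
// return a simple brightness. A real tracer would shade, reflect, refract...
static float traceToySphere(float px, float py)
{
    // Ray starts at (px, py, -3) and travels along +z.
    float oz = -3.0f;
    float b = 2.0f * oz;                          // 2 * dot(origin, dir), dir = (0,0,1)
    float c = px * px + py * py + oz * oz - 1.0f; // |origin|^2 - radius^2
    float disc = b * b - 4.0f * c;
    if (disc < 0.0f) return 0.0f;                 // miss: background
    float t = (-b - std::sqrt(disc)) * 0.5f;      // nearest hit distance
    float nz = oz + t;                            // z of hit point = z of normal
    return -nz;                                   // crude facing-ratio shading
}

int main()
{
    const int W = 64, H = 64;
    std::vector<float> img(W * H);

    // The key property: no pixel depends on any other pixel, so this loop
    // distributes across however many cores are available (the pragma is
    // simply ignored if OpenMP isn't enabled).
    #pragma omp parallel for
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float px = (x + 0.5f) / W * 2.0f - 1.0f;
            float py = (y + 0.5f) / H * 2.0f - 1.0f;
            img[y * W + x] = traceToySphere(px, py);
        }

    std::printf("centre pixel brightness: %f\n", img[(H / 2) * W + W / 2]);
    return 0;
}
```

Double the cores and, in principle, you roughly double the rays per second.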
EvaUnit02 on 1/8/2008 at 10:06
I don't see the point at the moment, considering that you need like a super-computer to use that rendering technique in real-time.
I'd imagine that a consumer-affordable implementation may be possible sooner rather than later if they used GPGPU to take some of the strain off the CPU.
Volca on 1/8/2008 at 11:46
Often it is said there are also numerous problems with the raytracing approach. For example, you need an efficient lookup structure over the geometry so that ray hit queries are quick.
Such a structure takes time to build, so it's usually an offline process, which limits the possibilities for a dynamic world. Every time I see someone demo a realtime raytracer, a map from Quake or similar is used. Why? Because it's static, and lookup structures can be prebuilt for such geometry. There tend to be only a small number of movable objects, which are probably not organized in those lookup trees and are queried separately.
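Roughly, the split looks like this (a minimal sketch; every type and function here is a hypothetical placeholder, not any engine's actual API):

```cpp
// Static level geometry goes into a structure built ahead of time, the
// handful of movable objects are tested with a plain loop, and the closer
// of the two hits wins.
#include <optional>
#include <vector>

struct Ray { float o[3], d[3]; };
struct Hit { float t; int id; };

// Stand-in for a kd-tree/BVH built offline over the static map.
struct StaticIndex {
    // Building this is the slow part, which is why demos stick to static maps.
    std::optional<Hit> intersect(const Ray&) const { return std::nullopt; } // stub
};

// Stand-in for one movable object (character, door, crate...).
struct DynamicObject {
    std::optional<Hit> intersect(const Ray&) const { return std::nullopt; } // stub
};

std::optional<Hit> castRay(const Ray& ray,
                           const StaticIndex& level,
                           const std::vector<DynamicObject>& movers)
{
    std::optional<Hit> best = level.intersect(ray);   // fast: prebuilt tree
    for (const auto& obj : movers) {                  // slow path, but few objects
        auto h = obj.intersect(ray);
        if (h && (!best || h->t < best->t)) best = h;
    }
    return best;
}

int main()
{
    StaticIndex level;
    std::vector<DynamicObject> movers(3);
    Ray r{};
    return castRay(r, level, movers) ? 0 : 1;
}
```

The awkward part is visible in the shape of it: anything that moves either has to stay outside the fast structure or force a rebuild of it.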
What will it likely bring? Cheap sharp-edged shadows, curved mirrors and glass materials. Later on, someone will probably try soft shadows, radiosity calculations, diffuse reflections, photon maps. Those advanced techniques are years away.
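For a sense of why the sharp shadows are cheap and the soft ones aren't: a hard shadow is one extra ray per shaded point, a soft shadow is many. A sketch, with occluded() as a hypothetical stand-in for a real scene query:

```cpp
#include <cstdlib>

struct Vec { float x, y, z; };

// Hypothetical: returns true if anything blocks the segment from p to q.
static bool occluded(const Vec& p, const Vec& q) { (void)p; (void)q; return false; }

// Hard shadow: one shadow ray toward a point light -> fully lit or fully dark.
static float hardShadow(const Vec& point, const Vec& light)
{
    return occluded(point, light) ? 0.0f : 1.0f;
}

// Soft shadow: sample many points on an area light and average the results.
// More samples = smoother penumbra = more rays = more cores needed.
static float softShadow(const Vec& point, const Vec& lightCenter,
                        float lightRadius, int samples)
{
    float visible = 0.0f;
    for (int i = 0; i < samples; ++i) {
        Vec s = lightCenter;
        s.x += lightRadius * (std::rand() / (float)RAND_MAX * 2.0f - 1.0f);
        s.y += lightRadius * (std::rand() / (float)RAND_MAX * 2.0f - 1.0f);
        if (!occluded(point, s)) visible += 1.0f;
    }
    return visible / samples;
}

int main()
{
    Vec p{0, 0, 0}, l{0, 5, 0};
    return (hardShadow(p, l) > 0.5f && softShadow(p, l, 1.0f, 16) > 0.5f) ? 0 : 1;
}
```

Radiosity, diffuse reflections and photon maps all push in the same direction: many more rays per pixel, which is why they're further off.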
Carmack wants to combine raytracing with Z-buffer based rendering. I see why - prerender the world using an optimal static raytrace, then render objects using a normal Z-buffer approach. The problem I see here: reflections of the dynamic parts of the scene in the static parts.
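One guess at how such a hybrid could merge the two passes: have both produce per-pixel color and depth, and let the nearer depth win. This is only a sketch of the idea, not Carmack's actual scheme, and it also shows the reflection problem: the raytraced image was computed without the dynamic objects, so they can never appear in its reflections.

```cpp
#include <cstddef>
#include <vector>

struct Pixel { float r, g, b, depth; };

// Merge a raytraced static pass with a rasterized (Z-buffer) dynamic pass by
// comparing depth per pixel.
void composite(const std::vector<Pixel>& raytracedStatic,
               const std::vector<Pixel>& rasterizedDynamic,
               std::vector<Pixel>& out)
{
    out.resize(raytracedStatic.size());
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = (rasterizedDynamic[i].depth < raytracedStatic[i].depth)
                     ? rasterizedDynamic[i]   // dynamic object is in front
                     : raytracedStatic[i];    // static world, incl. its reflections
}

int main()
{
    std::vector<Pixel> staticImg(4, {1, 0, 0, 10.0f}), dynImg(4, {0, 1, 0, 5.0f}), out;
    composite(staticImg, dynImg, out);
    return (out[0].depth == 5.0f) ? 0 : 1;
}
```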
Anyway, I'm still looking forward to this. It has been my dream to see this for years, and the HW finally seems capable enough to do simple RT in realtime.
OpenRT (http://www.openrt.de/) is a few years old. They even had an FPGA-based prototype HW which I can't find now. It's not that Intel is first in any way, but they might be successful where others failed. Let's wait and see :)
IndieInIndy on 1/8/2008 at 12:45
There's already an engine called Arauna: http://igad.nhtv.nl/~bikker/. But you need a hefty quad- or octo-core system to really run it, and even then you're likely to do so at 640x480.
In theory, raytracing gives you real-time shadows, soft shadows, radiosity, reflections, and other lighting effects "for free". Throw even more rays at it and you get antialiasing as well. The downside includes "sizzling" and aliasing problems on fine details and distant objects... but you can reduce that by throwing even more rays at it. In other words, the more cores and memory your system has, the higher the FPS and resolution you can get.
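The "throw more rays at it" knob is basically supersampling: instead of one ray through the pixel centre, average several jittered rays per pixel. A sketch, with shade() as a hypothetical stand-in for a full per-ray trace:

```cpp
#include <cstdlib>

struct Color { float r, g, b; };

// Hypothetical: trace one ray through screen position (sx, sy) and shade it.
static Color shade(float sx, float sy) { return Color{sx, sy, 0.0f}; }

static Color renderPixel(int x, int y, int samplesPerPixel)
{
    Color sum{0, 0, 0};
    for (int s = 0; s < samplesPerPixel; ++s) {
        // Jitter the sample position inside the pixel footprint.
        float jx = std::rand() / (float)RAND_MAX;
        float jy = std::rand() / (float)RAND_MAX;
        Color c = shade(x + jx, y + jy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    // Averaging is what smooths edges and calms the "sizzle" on fine detail;
    // the cost scales linearly with samplesPerPixel.
    sum.r /= samplesPerPixel; sum.g /= samplesPerPixel; sum.b /= samplesPerPixel;
    return sum;
}

int main() { return (renderPixel(10, 20, 16).r >= 0.0f) ? 0 : 1; }
```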
So far, ray tracing is one of the very few things you can meaningfully do with the increasingly massive parallelism found in processors over the next several years. Servers and databases also benefit, but that's about it, and they're not nearly so sexy to talk about. So expect to hear lots more about raytracing from Intel, since there isn't much else you can legitimately do with that 1024-core processor they'll be trying to sell you a few years down the road.
ZylonBane on 1/8/2008 at 12:53
Quote Posted by twisty
By the time that ray tracing in games comes to fruition, it will probably be abused in the same way that Bloom and motion blur have been.
Ray tracing is a rendering technique, not an effect. Saying it will be abused makes exactly as much sense as saying, "By the time that polygons in games come to fruition, they will probably be abused in the same way that sprites and pixels have been."
Volca on 1/8/2008 at 12:59
Quote Posted by IndieInIndy
So far, ray tracing is one of the very few things you can meaningfully do with the increasingly massive parallelism found in processors over the next several years. Servers and databases also benefit, but that's about it, and they're not nearly so sexy to talk about. So expect to hear lots more about raytracing from Intel, since there isn't much else you can legitimately do with that 1024-core processor they'll be trying to sell you a few years down the road.
Agreed. Add to that the fact that rasterization has its problems with triangles smaller than a pixel, and the fact that you can't do cheap omnidirectional shadow mapping (even today, games typically use one light per area, or at least one shadow per entity). Some effects are hard to implement and combine, etc. Rasterization is a huge hack.
I suppose interesting things will happen if the result of a raytrace contains Z information as well. Could SSAO, for example, be used on a raytraced scene? I bet it could.
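Presumably, if the primary-ray pass also writes its hit distances into a depth buffer, screen-space passes like SSAO could consume it exactly as they do a rasterized frame. A minimal sketch of that idea, with primaryHit() as a hypothetical scene query:

```cpp
#include <limits>
#include <vector>

// Hypothetical: distance to the nearest primary-ray hit for pixel (x, y),
// or a huge value on a miss.
static float primaryHit(int x, int y)
{
    (void)x; (void)y;
    return std::numeric_limits<float>::max();   // stub: everything misses
}

// Fill a per-pixel depth buffer during the primary-ray pass; an SSAO (or any
// other screen-space) pass could then read it just like a rasterized Z-buffer.
void fillDepthBuffer(std::vector<float>& depth, int w, int h)
{
    depth.assign(w * h, 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            depth[y * w + x] = primaryHit(x, y);
}

int main()
{
    std::vector<float> depth;
    fillDepthBuffer(depth, 320, 240);
    return depth.empty() ? 1 : 0;
}
```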
The_Raven on 1/8/2008 at 13:55
Quote Posted by dethtoll
I'm... really not seeing any point to this.
As has already been mentioned, ray-tracing allows more realistic lighting effects. I know it doesn't really have many gameplay applications, but the idea of proper reflections, refractions, shadows, etc. does sound very appealing.
Eabin on 1/8/2008 at 18:16
and, most importantly, skin can be made to actually look like skin, not like plastic :)