Doom 3 source code packaged and tested. - by lost_soul
lost_soul on 8/11/2011 at 00:28
It isn't just Linux gaming though. OpenGL is used for all sorts of things, from desktop enhancements to professional rendering and design. I remember trying to use the Compiz 3D effects on my Radeon X1300 and having some effects just show up as white because the driver did not support them. Some of their drivers are even blacklisted by Compiz due to bugs.
When I pay for a video card, I expect it to work. That's why I really love NVidia. They always work great. I wouldn't say I'm a fanboy though, because I know very well what happens when I can only choose from one company's products... price increases! Other manufacturers, please step up your game. If I see comparable performance, I'll consider buying your card.
Vernon on 8/11/2011 at 01:01
Lol dude, you can't expect driver vendors to support Compiz. Compiz has to be written so that its effects use standard rendering techniques, not the other way around. For the record, I have used ATI for years and have never had any problems with OpenGL.
Yakoob on 8/11/2011 at 15:02
Quote Posted by nbohr1more
That version of Id Tech 4 has "Deferred Rendering", which could be one of the most sought-after changes to the Doom 3 engine for folks who know its implications.
Simply put:
Regular Id Tech 4 = forward (multi-pass) rendering = the lit geometry is re-drawn for every light source that touches it.
Wolfenstein = Id Tech 4 with deferred rendering = the geometry is drawn once into a G-buffer, and the light contributions are then accumulated per pixel afterwards.
Conclusion: Deferred Rendering allows for INSANE amounts of light sources.
Any of the other nifty graphic stuff folks wish to add will be secondary to that improvement. (Graphic card performance wise).
Let's hope someone gets that working as soon as possible.
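A back-of-envelope sketch of why that matters. The function names and scene sizes below are made up purely for illustration; the point is only how the geometry cost scales in each scheme.

```cpp
#include <cassert>
#include <cstddef>

// Doom 3's forward renderer re-draws each lit surface once per light
// touching it, so the geometry cost scales with objects * lights.
std::size_t forwardDrawCalls(std::size_t objects, std::size_t lights) {
    return objects * lights; // one geometry pass per object, per light
}

// A deferred renderer draws the geometry once into a G-buffer, then
// runs one screen-space lighting pass per light: objects + lights.
std::size_t deferredDrawCalls(std::size_t objects, std::size_t lights) {
    return objects + lights;
}
```

With 100 lit objects and 50 lights, the forward scheme pays for 5000 geometry passes while the deferred scheme pays for 150, which is why deferred shading makes "INSANE amounts of light sources" feasible.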
Except id Tech 4 uses stencil shadow volumes, which still have to be computed individually for every light+object combination. And deferred rendering has its own issues as well (such as no native support for shadows or alpha blending).
Eldron on 8/11/2011 at 17:19
Quote Posted by Yakoob
Except id Tech 4 uses stencil shadow volumes, which still have to be computed individually for every light+object combination. And deferred rendering has its own issues as well (such as no native support for shadows or alpha blending).
Both can and have been solved though, and alpha has its issues on non-deferred too, since you'll have to sort polygons.
The pros of deferred definitely outnumber the cons.
EvaUnit02 on 8/11/2011 at 17:34
Wasn't Wolfenstein 2009 ported to a DX9 renderer? Does deferred rendering work with OGL?
We would want to stay with OGL for any source port, else the Mac hipsters and Lunix tards (all 5 of them) would be left out.
wonderfield on 8/11/2011 at 19:29
Quote Posted by Eldron
The pros of deferred definitely outnumber the cons.
Depends on what you're rendering and, perhaps more importantly, how much effort you want to put in. In Doom 3's case, though, I think it'd be well worth the effort. Being able to practically flatten the cost of lighting alone makes it a pretty compelling idea.
Quote Posted by EvaUnit02
Does deferred rendering work with OGL?
Yes, you can use deferred shading in OpenGL.
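As a rough illustration of what that lighting pass does, here is a CPU-side toy model of a single G-buffer texel being shaded by several lights. All names are illustrative; a real OpenGL implementation would render the G-buffer into FBO-attached textures (multiple render targets) and run this accumulation in a fragment shader.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy one-texel "G-buffer": the geometry pass stores surface data once;
// the lighting pass then accumulates any number of lights against it
// without ever re-processing the geometry.
struct GBufferTexel {
    std::array<float, 3> normal; // world-space surface normal
    std::array<float, 3> albedo; // diffuse surface colour
};

// Lighting pass: accumulate a simple N dot L diffuse term per light.
std::array<float, 3> shade(const GBufferTexel& g,
                           const std::array<std::array<float, 3>, 2>& lightDirs) {
    std::array<float, 3> out{0.0f, 0.0f, 0.0f};
    for (const auto& l : lightDirs) {
        float ndotl = g.normal[0]*l[0] + g.normal[1]*l[1] + g.normal[2]*l[2];
        if (ndotl < 0.0f) ndotl = 0.0f; // a back-facing light contributes nothing
        for (int c = 0; c < 3; ++c) out[c] += g.albedo[c] * ndotl;
    }
    return out;
}
```

A texel facing +Z lit by one light from +Z and one from -Z picks up exactly one light's worth of its albedo; adding more lights is just more iterations of the same cheap loop.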
Quote Posted by EvaUnit02
We would want to stay with OGL for any source port, else the Mac hipsters and Lunix tards (all 5 of them) would be left out.
Direct3D could be quite a bit faster on Intel hardware, however. That said, Direct3D has no mechanism for vendor extensions, which means you can't use features like NVIDIA's swap-tear extension.
The renderer in Tech 4 is complex enough that switching APIs would entail a fairly substantial amount of work anyway. Like I mentioned earlier, there's plenty of room for optimization without switching APIs, and that's probably going to be the right approach initially.
nbohr1more on 8/11/2011 at 19:58
Brink is OpenGL and has a deferred renderer, though its poor performance compared to Wolfenstein is kinda scary. (I'm sure its hacky MegaTexture implementation is taking some of the performance toll, though.)
With regard to the volumetric shadows, doing shadow volume extrusion on the GPU (as opposed to on the CPU, as Doom 3 currently does) has been possible since DirectX 8 hardware (according to Sebbi over at Beyond3D):
Quote:
The shadow volume CPU processing was not the main reason stencil shadows got replaced by shadow maps in all recent engines. Stencil shadow volume GPU extraction was possible and pretty fast even on DX8 GPUs. In our old DX9 engine we used fully GPU generated stencil shadows extensively and all our benchmarks indicated that GPU extraction was noticeably faster than CPU extraction. (we had around 20-30 fully uniformly shadowed light sources in view at once).
Yes, stencil shadow extraction on DX9 GPUs doubled the vertex count of the volume, but since the volumes were rather low poly, this basically didn't affect the performance at all (the bottleneck was never the vertex shader performance). The biggest drawback of stencil shadows has always been the massive stencil fillrate it requires when multiple shadow volumes are crossing the view at a bad angle. The performance is really view dependent and a slight change in viewing angle can drop the performance dramatically. It's really difficult for artists to tweak the performance of stencil shadowed scenes, as the algorithm's performance is so erratic.
Positive things about stencil shadows:
- Stencil shadows combine really well with deferred shading, especially with light-indexed deferred rendering (LIDR).
- With stencil shadows you have to light only the pixels that receive light. Stencil test skips the complex deferred lighting shader completely for pixels that are in shadow. This saves around 30% of the (deferred) lighting shader performance.
- Stencil shadows use much less memory than shadowmaps.
- Stencil shadows use much less texture memory bandwidth than shadowmaps. The rendertarget bandwidth usage is however much larger (but in systems like Xbox 360, the render target is located in really fast EDRAM to provide practically unlimited bandwidth).
- Stencil shadows are pixel perfect and have no surface acne, no blockiness, no sample walking issues in moving lights, etc... For example self shadowing looks really good and requires no tweaking (good looking self shadowing has always been hard to do with shadow maps).
Negative things:
- Performance is sometimes good and sometimes abysmal. A simple chain link fence can drop the game to 1 fps if the camera looks along it.
- No alpha masked shadow casters are possible. Plants, trees and vegetation are all almost impossible to render properly with decent performance.
- For fast (DX9) GPU volume extraction the shadow meshes have to be closed surfaces. In DX10/11 you can likely use a geometry shader to render more freely formed geometry efficiently.
- Deferred shading light area combination tricks do not work with stencil shadows, since stencil test can only fail or succeed (not succeed for one light and fail for the other). So you have to render each light separately (and this causes extra g-buffer reads and backbuffer blending).
though some say it's only practical on DX10+ hardware...
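The silhouette test at the heart of that CPU-versus-GPU extraction debate is simple; a minimal CPU sketch (names and math are illustrative, not Doom 3's actual code):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<float, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}
static float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// A triangle faces the light if the light position lies on the front
// side of the triangle's plane.
bool facesLight(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& lightPos) {
    Vec3 n = cross(sub(b, a), sub(c, a));
    return dot(n, sub(lightPos, a)) > 0.0f;
}

// An edge shared by two triangles lies on the silhouette (and must be
// extruded away from the light into a shadow-volume quad) exactly when
// one neighbour faces the light and the other does not. This is why the
// mesh must be a closed surface: every edge needs both neighbours.
bool isSilhouetteEdge(bool tri1FacesLight, bool tri2FacesLight) {
    return tri1FacesLight != tri2FacesLight;
}
```

Doom 3 runs this per-edge test on the CPU every time a light or shadow caster moves; the DX9-era GPU trick Sebbi describes pre-builds degenerate quads on every edge and lets the vertex shader do the extrusion instead.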
lost_soul on 9/11/2011 at 00:23
"We would want to stay with OGL for any source port, else the Mac hipsters and Lunix tards (all 5 of them) would be left out."
Glad to know someone is thinking of my four friends and me.
How does CUDA come into the picture? Doom 3 doesn't use it at all, right? What kinds of cool things could you do if it did support CUDA or an equivalent? Surely CUDA can be put to better use than fancy smoke effects.
nbohr1more on 9/11/2011 at 00:32
Doom 3 doesn't even support GLSL right now, so no, CUDA is not currently possible.
That said, much of what folks would do with GLSL or CUDA can be done in ARB assembly language; it just takes folks like Rebb, Sikkpin, JC Denton, etc., who are ARB assembly experts.
As to what CUDA could bring: the best use I can think of would be physics acceleration... though I do recall some nifty GPGPU pathfinding demos a while back. Most of that stuff is considered too specialized and impractical.
Of course, for an open standards project, OpenCL should be under discussion not CUDA...
lost_soul on 9/11/2011 at 01:35
No GLSL? But it does things like specular and bump mapping and per-pixel shadows. Does it use vendor-specific hacks to achieve this?