Doom 3 source code packaged and tested. - by lost_soul
nbohr1more on 9/11/2011 at 01:39
Just as Assembly Language can do anything C++ can do...
so can ARB Assembly do anything that GLSL can do...
The difference is difficulty and low-level knowledge :thumb:
Yakoob on 9/11/2011 at 13:17
Quote Posted by wonderfield
Depends on what you're rendering and, perhaps more importantly, how much effort you want to put in. In Doom 3's case, though, I think it'd be well worth the effort. Being able to practically flatten the cost of lighting alone makes it a pretty compelling idea.
I don't think so. To generate shadow volumes, you must go through each light and then create shadow volumes for each object in the game. Period. That's what "shadow volumes" are, by definition. Whether you use deferred rendering or not, whether you do it on the CPU or the GPU, you still have to go through each light+object combination. This is where the bulk of your processing gets sucked into, so no, switching to deferred rendering would not flatten the cost.
I mean, yes, it would definitely improve performance, but I highly doubt the gain would be as big as you guys expect, and it would not allow for hundreds of lights for free since, if I understand D3 correctly, the bottleneck is calculating the shadow volumes, not individual pixel colors. That being said, I do not know how D3 works behind the scenes, and I'm just assuming it's basic shaders (normal mapping and maybe parallax, though I don't recall ever seeing it) with stencil-based volume shadows.
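To make the shape of that cost concrete, here's a minimal C++ sketch (all names are made up; this is not the actual idTech4 source, just the light x object double loop I'm talking about):
[code]
#include <vector>

// Hypothetical sketch, NOT real idTech4 code: the point is the
// light x object loop that exists no matter how shading is done.
struct Light  { float origin[3]; float radius; };
struct Object { bool castsShadows; };

bool Intersects(const Light&, const Object&)          { return true; } // stub bounds test
void ExtrudeShadowVolume(const Light&, const Object&) {}               // stub silhouette + extrusion

void BuildShadowVolumes(const std::vector<Light>& lights,
                        const std::vector<Object>& objects)
{
    for (const Light& light : lights)
        for (const Object& obj : objects)
            if (obj.castsShadows && Intersects(light, obj))
                // The per-combination cost that deferred shading does nothing to remove.
                ExtrudeShadowVolume(light, obj);
}
[/code]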
nbohr1more on 9/11/2011 at 20:20
The sad truth, Yakoob, is that even when scenes are 99% devoid of shadows, via the "noshadows" keyword or lights that don't cast them, the light count still takes a heavy toll on the engine. Doom 3 will create a new render pass every time there is a light bounding box overlap. There is a trick where you can get lights to just pass their values to accumulate n-times, but it cannot be controlled via the z-distance falloff image, so all it can do is a pretty useless brightening effect. (Hell, if you could just tell the engine when you want to batch light shader passes together, it would be beneficial.) So missions like Return to the City, where there are minimal shadowed lights, can still beat your GPU to a pulp. I am just glad Bikerdude optimized the hell outta that mission... he's just like an engine upgrade :laff:
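To illustrate what I mean, here's a rough C++ sketch of the forward light loop as I understand it (made-up names, not the real renderer code):
[code]
#include <vector>

// Made-up names; just the shape of Doom 3's forward light loop
// as I understand it, not actual engine source.
struct Light   { bool noShadows; };
struct Surface {};

bool LightTouchesSurface(const Light&, const Surface&) { return true; } // stub bbox overlap test
void DrawStencilShadow(const Light&, const Surface&)   {}               // stub stencil shadow pass
void DrawInteraction(const Light&, const Surface&)     {}               // stub additive lighting pass

void DrawLitSurfaces(const std::vector<Light>& lights,
                     const std::vector<Surface>& surfaces)
{
    for (const Light& light : lights)
        for (const Surface& surf : surfaces)
            if (LightTouchesSurface(light, surf)) {
                if (!light.noShadows)
                    DrawStencilShadow(light, surf); // "noshadows" skips this...
                DrawInteraction(light, surf);       // ...but never this extra pass
            }
}
[/code]
That extra interaction pass per overlap is why light count alone hurts even in shadow-free scenes.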
Natively there is no parallax shading, but it can be added via mods, though it is expensive.
EvaUnit02 on 9/11/2011 at 21:32
Quote Posted by nbohr1more
Natively, there is no Parallax shading but it can be added via mods though it is expensive.
That's one of the shader mods by Dafama2K7. Even with all of his mods installed, Doom 3 still ran at a constant 60fps for me (C2D E8500, GTX275, 4GB) @ 1080p.
Would it again be because Doom 3 vanilla is a poorly lit corridor shooter?
nbohr1more on 9/11/2011 at 22:09
Yeah.
If you keep the lights (and therefore light overlaps) to a minimum (as the maps for Doom 3 do) then Doom 3 can run very nicely even with loads of shader effects applied on modern systems. See Sikkmod (http://www.moddb.com/mods/sikkmod) as the current best example...
If you want wide-open outdoor areas or even large well-lit indoor spaces, then performance will be hit hard unless the engine is changed.
Of course, there are workarounds if you are a genius at art and light volume construction:
Strombine (http://lunaran.com/page.php?id=165) :thumb:
Melan on 9/11/2011 at 22:11
1280x1024 and 24 FPS should be enough for everyone. :sly:
Sorry, all you folks still running Doom3 in software rendering; I feel for you, I have been where you are, but hardware requirements increase over time.
Also, I have always liked the Unreal engine, even if I never tried to make a level with it.
EvaUnit02 on 9/11/2011 at 22:17
not-sure-if-serious.jpg
:weird: You do know that the Quake 2 engine was the last from Id to support software rendering, right? It's impossible to run Doom 3 entirely on the CPU.
nbohr1more on 9/11/2011 at 22:31
Well...
If you read back a few posts, those older Intel Integrated Graphics chips (before Sandy Bridge) ARE doing the Vertex processing in software. So folks with that level of tech would technically be running Doom 3 in software mode. :thumb:
wonderfield on 9/11/2011 at 23:29
Quote Posted by Yakoob
[Shadow volumes] is where a big bulk of your processing gets sucked into so no, switching to deferred rendering would not flatten the cost.
I don't recall saying anything about shadow volume generation.
Quote Posted by Yakoob
I mean yes it would definitely improve performance, but I highly doubt the gain would be as big as you guys expect...
I never said I would expect "big gains". I said it would practically flatten the lighting cost.
Quote Posted by nbohr1more
If you read back a few posts, those older Intel Integrated Graphics chips (before Sandy Bridge) ARE doing the Vertex processing in software. So folks with that level of tech would technically be running Doom 3 in software mode. :thumb:
That's a bit like saying that doing VSD on the CPU amounts to running in "software mode". Some parts of the rendering process are, and will probably always be, performed by CPUs, but that doesn't mean the result is what you would call "software rendering".
Doing everything on the CPU, as would be the case when using WARP or some other software rasterizer, would be software rendering.
wonderfield on 9/11/2011 at 23:40
Quote Posted by lost_soul
What kinds of cool things could you do if it did support CUDA or equivalents? Surely CUDA can be put to better use than fancy smoke effects.
There aren't severe limits on what you can do with CUDA. You could do a lot of things with it, but probably very little of any real interest to most people. Complex particle physics is a good use case for GPU compute, but you don't get a hell of a lot of "bang for your buck" with those kinds of features: lots of effort and computational expense, not much visual benefit. Other uses of GPU compute have questionable benefit in most cases, at least for game engines; it depends on whether the processes you're undertaking are good candidates for the high degree of parallelization you get out of a GPU.
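For instance, here's a trivial per-particle update sketched in plain C++ (hypothetical names and constants, not from any real engine). Each iteration is independent of the others, which is exactly the property that would let it map onto one CUDA thread per particle:
[code]
#include <vector>

// Hypothetical smoke-style particle update. Every particle is
// independent, so on the GPU each loop iteration would simply
// become its own thread.
struct Particle { float pos[3]; float vel[3]; };

void StepParticles(std::vector<Particle>& particles, float dt)
{
    const float gravity = -9.8f; // illustrative constant only
    for (Particle& p : particles) {
        p.vel[1] += gravity * dt;       // accelerate downward
        for (int i = 0; i < 3; ++i)
            p.pos[i] += p.vel[i] * dt;  // integrate position
    }
}
[/code]
Lots of parallel arithmetic, very little visible payoff: which is the "fancy smoke effects" problem in a nutshell.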