Domarius on 20/4/2006 at 14:54
You guys still don't get it. He even SAID - he posted this thread because me and a few other guys ASKED him to. He's NOT here trying to get fame for something that hasn't been developed much yet. So stop - being - rude. Shut up.
Quite frankly I don't know or care at this stage if it will succeed - he mentioned some interesting things in the other thread and I want to know how he's personally going about it. And hopefully see some screens.
@hexarith, I'm interested to know the following;
You mentioned meta materials that automatically add detail so that the level size is very small and loads quickly. What are they again, and how do they work?
You mentioned the real-time destructible geometry. My brother and I were actually talking about this sort of thing (if I get your meaning) - instead of incessantly scaling up the graphical complexity to burn up the new processing power, why not leave it at a certain older standard and scale up the rest of the game instead, so you can do more with it - for example, exactly what you said, real-time terrain destruction.
I read about another game engine that does this - the way it works sounds like how yours might work - you have the initial level geometry, but when you blow a hole in a wall, that's (for example) a sphere that gets boolean-subtracted out of the wall. The reason the graphics couldn't be as complex is that this is kind of intensive. To render this every frame, the game starts with the original level geometry, then subtracts all your modifications (holes), and then draws the final result to the screen. So naturally, the more modifications visible at once, the slower the frame rate.
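To illustrate what I mean, the naive version would be something like this (just a hypothetical sketch with made-up names, not from any real engine):

```cpp
#include <vector>

struct Sphere { float x, y, z, radius; };
struct Mesh { /* vertices, indices, ... */ };

// Stand-ins for the real operations; made up for this example.
Mesh csgSubtract(const Mesh& base, const Sphere& /*hole*/) { return base; }
void drawMesh(const Mesh&) {}

void renderFrame(const Mesh& originalLevel, const std::vector<Sphere>& holes)
{
    Mesh result = originalLevel;             // start from the untouched level geometry...
    for (const Sphere& hole : holes)
        result = csgSubtract(result, hole);  // ...and redo every hole, every frame
    drawMesh(result);                        // cost grows with the number of modifications
}
```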
Does yours work like this? If not, how?
And in this type of engine, how do you cull out the unnecessary, out-of-view geometry?
footsteps on 20/4/2006 at 14:55
This is very interesting! I've been wanting to learn how to program for the longest time, but never actually got around to it. Furthest I got was like 10 years ago when I made a spinning 3D cube in pascal. I could rotate it along any one axis, but couldn't mix them without messing everything up. I think when I got past the actual challenge of figuring out how to represent 3D space on a 2D surface, I lost interest :P. But my knowledge of programming was very poor, so I spent more time trying to find bugs than actually solving problems.
I assume you will be doing this in C++? Will you be focusing on DirectX or OpenGL?
hexarith on 20/4/2006 at 16:34
First, to answer a few other questions:
* C++ sucks; I used it for developing all the small algorithms my engine uses, but when it came to building the blocks together, my needs clashed with the complications of C++. When I asked for advice on some of the code management tools I had developed, some guy on comp.lang.c++ just suggested: write your own compiler. Yeah, thanks :-(. I looked around for a streamlined, performant language and found D (http://www.digitalmars.com/d/index.html).
* My renderer infrastructure is API-neutral, but I focus on development with OpenGL, using only ARB and EXT extensions. No vendor-specific extensions are used (the kind of runtime check I mean is sketched right after this list).
* The engine is going to be cross-platform with a GPL/commercial dual licence like the one Trolltech uses for Qt. My promise of a licence for the TTLG community is that they may then use the engine without being bound to the GPL; maybe there is code from other projects (e.g. AI) they want to integrate without GPLing it.
* The build system already creates branches for Linux, FreeBSD, Mac OS X and WinXP. There will be no support for Windows Vista!
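The runtime check I mentioned for the ARB/EXT-only point can be as simple as this (just an illustration, not code from the engine; glGetString(GL_EXTENSIONS) is the standard way to get the extension list on pre-3.0 OpenGL):

```cpp
// Minimal check that an extension is present, matching whole names in the
// space-separated list returned by glGetString(GL_EXTENSIONS).
// Only ARB/EXT names are ever queried, never vendor ones like GL_NV_* / GL_ATI_*.
#include <GL/gl.h>
#include <string>

bool hasExtension(const char* name)
{
    const GLubyte* ext = glGetString(GL_EXTENSIONS);   // needs a current GL context
    if (!ext) return false;
    const std::string list = " " + std::string(reinterpret_cast<const char*>(ext)) + " ";
    return list.find(" " + std::string(name) + " ") != std::string::npos;
}

// e.g. hasExtension("GL_ARB_vertex_buffer_object")
//      hasExtension("GL_EXT_framebuffer_object")
```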
Quote Posted by Domarius
@hexarith, I'm interested to know the following; You mentioned meta materials that automatically add detail so that the level size is very small and loads quickly. What are they again, and how do they work?
Meta materials are something like a "shader-shader, scripting object generator, procedural texture". The observation is that most of the detail one can find in nature is in fact a repetition of similar patterns varying over the large scale. You may compare 1000 different 1-meter squares of a meadow. The same goes for walls, be they stone or brick, tiles on the floor, etc. A metamaterial applied to an object replaces the object's geometry and texture with procedurally generated geometry and texture, parameterized by the object itself. For example, one metamaterial could define that a floor object is to be replaced with tiles of a small mosaic, with the tiles' color defined by a texture; however, there might be an overriding rule that builds a margin of solid marble around the tiles along the border. Rules for creating procedural texture, mixing a set of textures, are also part of it. All the rules and procedures to do this are together called a metamaterial.

Now, one important thing is that the created objects have no representation in the engine's data structures as real objects. They are created by the renderer just in time, to add detail. And the metamaterial is used to create an optimized shader for the GPU in use. E.g. Radeon cards are good at fetching textures but they suck when it comes to arithmetic => precompute some stuff and put it into textures. GeForce cards can do arithmetic better than accessing textures, so compute stuff on the card.
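To make that a bit more concrete, here is a heavily simplified sketch of the idea (illustrative only, all names are made up, this is nothing like the real interface): a metamaterial that replaces a coarse floor object with mosaic tiles, colored by a texture, with a marble margin along the border:

```cpp
#include <functional>
#include <vector>

struct Color   { float r, g, b; };
struct Rect    { float x0, y0, x1, y1; };          // the coarse floor object, in floor-plane coordinates
struct TileGeo { Rect bounds; Color color; };      // one generated mosaic tile

// A "metamaterial": rules that turn a coarse object plus its parameters into detail geometry.
struct MosaicMetaMaterial
{
    float tileSize;
    float marginWidth;                               // width of the solid marble margin along the border
    std::function<Color(float u, float v)> colorMap; // e.g. sampled from a texture of the floor object

    std::vector<TileGeo> generate(const Rect& floor) const
    {
        std::vector<TileGeo> tiles;   // never stored by the engine, produced just in time for rendering
        for (float y = floor.y0; y < floor.y1; y += tileSize)
            for (float x = floor.x0; x < floor.x1; x += tileSize)
            {
                const bool onBorder =
                    x < floor.x0 + marginWidth || y < floor.y0 + marginWidth ||
                    x + tileSize > floor.x1 - marginWidth || y + tileSize > floor.y1 - marginWidth;
                const float u = (x - floor.x0) / (floor.x1 - floor.x0);
                const float v = (y - floor.y0) / (floor.y1 - floor.y0);
                const Color c = onBorder ? Color{0.9f, 0.9f, 0.85f}   // overriding rule: marble margin
                                         : colorMap(u, v);            // tile color defined by the texture
                tiles.push_back(TileGeo{Rect{x, y, x + tileSize, y + tileSize}, c});
            }
        return tiles;
    }
};
```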
Quote Posted by Domarius
I read about another game engine that does this - the way it works sounds like how yours might work - you have the initial level geometry, but when you blow a hole in a wall, that's (for example) a sphere that gets boolean-subtracted out of the wall. The reason the graphics couldn't be as complex is that this is kind of intensive. To render this every frame, the game starts with the original level geometry, then subtracts all your modifications (holes), and then draws the final result to the screen. So naturally, the more modifications visible at once, the slower the frame rate. Does yours work like this? If not, how?
The engine is a CSG engine (CSG is also used in UnrealEd, but there the CSG tree is discarded at the BSP generation stage). CSG, for those who don't know, is the process of building complex geometry by merging, intersecting and subtracting simple geometry. While boolean operations on mesh models are computationally difficult and hard to make robust, boolean operations can easily be done on simple primitives like planes, cuboids, spheres, cylinders and cones. It gets a bit harder if multiple levels of boolean operations apply (like e.g. using the intersection of a cylinder and a cube to cut out a part of a sphere - that's 3 levels), due to some sorting that becomes necessary.

There are 2 CSG algorithms, one for the visibility tests (I come to this later) and one for the tessellation. Tessellation is done by determining the parameterization bounds for all interacting objects. E.g. if 2 planes are intersected, an edge is created. One plane can only be tessellated from, say, [-inf, x(t)][-inf, y(t)] and the other from [x(t), +inf][y(t), +inf]; for more complex situations a whole tree of variable ranges is built for each object. This data has to be calculated once for a given CSG, but, and this is important, if something changes, _only_ the parts of the tree on which the modification happened must be rebuilt. This is far different from BSPs, where adding a hole in a wall would cause mayhem in all BSP nodes that can see that wall, and in the BSP nodes that see those nodes.

Yes, this CSG stuff is computationally intensive, but if you look at where modern 3D engines idle the most, you know you can do it: today's bottleneck is the fill rate. Both CPU and GPU can easily deal with 10 times the geometry computations they do today (GPUs even about 100 times; GPU manufacturers are emphasizing this at every SIGGRAPH, in every publication etc). Destruction of geometry is done by adding a subtracting (what a phrase ;-)) primitive that cuts out the hole. The hole is then detailed with debris and whatever by a metamaterial.
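A stripped-down sketch of the kind of CSG tree I mean (again illustrative only, not the engine's data structures); the point is that blowing a hole only dirties the branch it is attached to:

```cpp
// Primitives sit at the leaves, boolean operations at the inner nodes.
// Blowing a hole = wrapping the affected branch in a Subtract node;
// only that branch and its ancestors need re-tessellation.
#include <memory>

enum class Op { Primitive, Union, Intersect, Subtract };

struct CsgNode
{
    Op op = Op::Primitive;
    // For Op::Primitive this would hold the plane/cuboid/sphere/cylinder/cone description.
    std::unique_ptr<CsgNode> left, right;
    CsgNode* parent = nullptr;
    bool dirty = false;                     // branch needs re-tessellation

    void markDirtyUpwards()
    {
        for (CsgNode* n = this; n != nullptr && !n->dirty; n = n->parent)
            n->dirty = true;                // everything above an already-dirty node is dirty too
    }
};

// "Blow a hole": subtract a primitive (e.g. a sphere) from an existing branch of the tree.
void blowHole(std::unique_ptr<CsgNode>& branch, std::unique_ptr<CsgNode> holePrimitive)
{
    auto sub = std::make_unique<CsgNode>();
    sub->op = Op::Subtract;
    sub->parent = branch->parent;
    branch->parent = sub.get();
    holePrimitive->parent = sub.get();
    sub->left = std::move(branch);          // the original geometry
    sub->right = std::move(holePrimitive);  // the subtracting primitive that cuts out the hole
    branch = std::move(sub);
    branch->markDirtyUpwards();             // only this path through the tree gets rebuilt
}
```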
Quote Posted by Domarius
And in this type of engine, how do you cull out the unnecessary, out-of-view geometry?
I'll try to explain roughly: in the culling CSG process the space gets separated into 2 kinds: void and filled. "Light" can only travel through void space, while filled space is occluding. To cull out parts of the scene that are not visible, the CSG tree is virtually merged with the viewing volume (so that all space outside the volume is filled and thus not considered for determining visible parts). Then a tree is built of all connected void space, rooted at the void volume the viewer is in; this is the "void tree". Since some of those void nodes may be huge but connected only through a small opening in filled space (think of a hidden entrance into a large Hammerite cathedral ;-)), a "filled tree" is built for all filled space bordering the determined "void tree". Both trees are projected into screen space and subtracted. The result is a tree of all visible void space. Everything that is within this void space is visible. Just traverse the tree and mark the objects within for rendering. Some sorting is applied to optimize the caching on the GPU, but this is done the same way other engines do it.
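Very roughly, and only as a simplified illustration with made-up placeholder structures (not the actual implementation, which works on the projected trees as described above), the traverse-and-mark step looks like this:

```cpp
#include <vector>

struct Object;                                            // whatever the engine renders

struct ScreenRect                                         // a region of the screen, in screen space
{
    float x0, y0, x1, y1;
    bool empty() const { return x0 >= x1 || y0 >= y1; }
};

ScreenRect clip(const ScreenRect& a, const ScreenRect& b) // intersection of two screen regions
{
    return ScreenRect{ a.x0 > b.x0 ? a.x0 : b.x0, a.y0 > b.y0 ? a.y0 : b.y0,
                       a.x1 < b.x1 ? a.x1 : b.x1, a.y1 < b.y1 ? a.y1 : b.y1 };
}

struct VoidNode;
struct Opening { VoidNode* to; ScreenRect projected; };   // connection through bordering filled space

struct VoidNode                                           // one connected volume of void space
{
    std::vector<Object*> objects;                         // objects inside this void volume
    std::vector<Opening> openings;
    bool visited = false;
};

// Start at the void volume the viewer is in, with the whole screen as the window.
void collectVisible(VoidNode* node, const ScreenRect& window, std::vector<Object*>& out)
{
    node->visited = true;
    out.insert(out.end(), node->objects.begin(), node->objects.end());  // mark for rendering
    for (const Opening& o : node->openings)
    {
        const ScreenRect narrowed = clip(window, o.projected);          // filled space cuts down visibility
        if (!o.to->visited && !narrowed.empty())
            collectVisible(o.to, narrowed, out);                        // recurse into the next void volume
    }
}
```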
OrbWeaver on 20/4/2006 at 16:43
Quote Posted by hexarith
Meta materials are something like a "shader-shader, scripting object generator, procedural texture".
Actually I would like to see more of this in games, and I expect we will. Rather than just static images, textures should be defined based on object-oriented code, with functions, variables etc. This would allow a lot more variety without needing massive images.
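Even something trivial like this (purely illustrative, not any particular engine's API) replaces a static image with a handful of parameters you can vary per surface:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A texture defined by code and parameters rather than a stored bitmap.
struct CheckerTexture
{
    int cellSize = 16;                   // texels per cell
    std::uint8_t darkValue  = 60;
    std::uint8_t lightValue = 200;

    // Generates a single-channel image on demand; vary the parameters per surface.
    std::vector<std::uint8_t> generate(int width, int height) const
    {
        std::vector<std::uint8_t> pixels(static_cast<std::size_t>(width) * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                const bool dark = ((x / cellSize) + (y / cellSize)) % 2 == 0;
                pixels[static_cast<std::size_t>(y) * width + x] = dark ? darkValue : lightValue;
            }
        return pixels;
    }
};
```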
Quote:
E.g. Radeon cards are good at fetching textures but they suck when it comes to arithmetic => precompute some stuff and put it into textures. GeForce cards can do arithmetic better than accessing textures, so compute stuff on the card.
I believe this is in fact incorrect. There is a shader patch for Doom 3 that increases performance on Radeons by replacing a texture lookup with arithmetic, not the other way round.
Quote:
Both CPU and GPU can easily deal with 10 times the geometry computations they do today (GPUs even about 100 times; GPU manufacturers are emphasizing this at every SIGGRAPH, in every publication etc).
GPUs may be limited by fill rate (I am no GPU architect, but the constantly increasing number of pixel pipelines suggests this is still an issue); however, there is nowhere near that much spare CPU time available, especially when such things as shadow volumes are used.
Real-time CSG sounds nice but I think you will find it is totally infeasible on today's hardware.
sparhawk on 20/4/2006 at 16:46
Quote Posted by Domarius
You guys still don't get it. He even SAID - he posted this thread because me and a few other guys ASKED him to. He's NOT here trying to get fame for something that hasn't been developed much yet. So stop - being - rude. Shut up.
It seems you don't get that you are not some kind of internet moderator. So who are you to tell other posters to shut up?
Quote:
I read about another game engine that does this -
I think "Red Faction" supported this, if I remember correctly. AFAIK you could destroy any geometry on the map. Blow holes in all walls and such.
Quote:
To render this every frame, the game starts with the original level geometry, then subtracts all your modifications (holes), and then draws the final result to the screen. So naturally, the more modifications visible at once, the slower the frame rate.
This sounds like a bloody waste of CPU cycles. If I already write the algorithms to do this kind of deformation, then why would I need to start over again every frame? Wouldn't it be much better to just keep the destruction, wire it into the mesh and leave it like that until the next blow? You would only need a good algorithm that reduces polygons if a piece of geometry is deformed multiple times, instead of adding each new deformation in.
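Something along these lines (a rough sketch with made-up names, reusing the placeholders from the sketch earlier in the thread):

```cpp
// Bake the hole in once, when it is made; rendering just reuses the cached
// mesh until the next modification.
struct Sphere { float x, y, z, radius; };
struct Mesh { /* vertices, indices, ... */ };

// Placeholders again, not a real API.
Mesh csgSubtract(const Mesh& base, const Sphere& /*hole*/) { return base; }
Mesh simplify(const Mesh& m) { return m; }      // the polygon-reducing pass mentioned above
void drawMesh(const Mesh&) {}

struct DestructibleWall
{
    Mesh current;                               // already contains all previous holes
    int  deformations = 0;

    void blowHole(const Sphere& hole)
    {
        current = csgSubtract(current, hole);   // do the expensive work once, when the hole is made
        if (++deformations % 8 == 0)
            current = simplify(current);        // keep the polygon count under control
    }

    void render() const { drawMesh(current); }  // every frame: just draw the cached result
};
```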
Quote:
And in this type of engine, how do you cull out the unnecessary, out-of-view geometry?
A dynamic BSP tree?
Gestalt on 20/4/2006 at 17:22
The only map in Red Faction where you could blow up any of the walls was the Geomod test room. The actual in-game implementation prevented you from blowing up most walls, but I don't know if this was a technical limitation or just a design decision.
OrbWeaver on 20/4/2006 at 17:50
Quote Posted by Gestalt
The only map in Red Faction where you could blow up any of the walls was the Geomod test room. The actual in-game implementation prevented you from blowing up most walls, but I don't know if this was a technical limitation or just a design decision.
Almost certainly both. Half-Life 2 had some "dynamic" geometry towards the end, where parts of derelict buildings could be blown up to create new holes and the like - but this is presumably a pre-scripted event, where the "breakable" geometry is not actually part of the main BSP tree but just a static object that looks like part of the level (with more than one mesh/rendering state).
Martek on 20/4/2006 at 18:50
Quote Posted by Gestalt
The only map in Red Faction where you could blow up any of the walls was the Geomod test room. The actual in-game implementation prevented you from blowing up most walls, but I don't know if this was a technical limitation or just a design decision.
Hmm, I feel like I remember blowing new passages through the rock walls in an early mine level (maybe the first level of the game). And on some other mine level, where there were like a couple of bridges over chasms, I blew a new passage, but that caused the game to mess up because I bypassed a scripted sequence, so I had to reload and forget about that. But it's been a while since I've played it, so perhaps I'm just recalling the game incorrectly.
Cheers,
Martek
Tony on 20/4/2006 at 22:48
Good luck! The world definitely needs more creative game developers. Looking Glass, Arcane, and a few others are bright flowers, so alone in a great sea of grass.
ZylonBane on 20/4/2006 at 23:14
Quote Posted by Gestalt
The only map in Red Faction where you could blow up any of the walls was the Geomod test room. The actual in-game implementation prevented you from blowing up most walls, but I don't know if this was a technical limitation or just a design decision.
In general, the farther you got in the game, the less you could geomod. You started in a mine and could blow holes in pretty much anything, but once you got into office complexes and space stations... not so much.
The game sucked anyway. Developed simultaneously for the PC and PS2, it was one of the first victims of consolization.