Phatose on 5/5/2009 at 17:34
In the broadest definition, I suppose. Usually, though, the term is limited to computer generation of things that a designer or artist would otherwise create by hand.
Qooper on 5/5/2009 at 19:50
Data can be hand-made, or it can be generated using rules. Let's take textures as an example (see the Wikipedia article on procedural textures for a quick introduction [1]). In most cases, textures are drawn or photographed by texture artists. This data is then fed into video memory. The larger the texture, the more memory it takes, but the crisper the result. Using different kinds of compression techniques we can trade CPU time for memory. Some years ago this would've been a ridiculous suggestion, but today it is very plausible. Where one used to write software for two or maybe four cores, today one writes software for n cores using functional programming techniques and perhaps nested data parallelism [2]. Processing power is cheap.
So, procedural textures would of course fall into the category of data that is generated rather than hand-made. However, if we use procedural techniques only to pre-generate texture files that are then loaded into memory, we are missing the point from a technical perspective. Not only can procedural content reduce the amount of work artists would need to do by hand, it can also make a huge impact on rendering performance and make possible some things that would otherwise take astronomical effort to accomplish. One of the greatest advantages of procedural content is that it can be calculated in real time (as opposed to being pre-generated). This way it doesn't take up any video memory, which is more expensive than ordinary RAM. A procedural texture can also be dynamic, that is, a function of any variable (for example time, distance, location, orientation or any force exerted on it, just to name a few) [3]. Procedural textures allow more flexibility and scalability as well. If the CPU is busy calculating the texture for the current frame, but there are cycles available on the GPU, the workload can easily be transferred to the graphics chip (which might be able to crunch the numbers quicker thanks to an instruction set better suited to linear algebra). Because there are no heavy amounts of data to transfer (only the few lines of mathematical rules that define the texture), the workload can be shifted effortlessly either way, or even to an external texture-specific processor.
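To make the "function of any variable" idea concrete, here's a minimal sketch in Haskell (the names and the pattern are my own invention, not from any engine or paper): an animated checkerboard evaluated per pixel, per frame, with no bitmap stored anywhere.
Code:
-- A procedural texture as a pure function of time and position.
type Colour = (Float, Float, Float)  -- RGB, each channel in [0, 1]

checker :: Float -> Float -> Float -> Colour
checker t x y =
  let cell  = even (floor x + floor y :: Int)  -- which square are we in?
      pulse = 0.5 + 0.5 * sin t                -- the "dynamic" part
  in if cell then (pulse, pulse, pulse) else (0.1, 0.1, 0.1)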
In a way, shaders have already provided a form of procedural texturing. Take normal mapping, for instance: the texture you see "changes" depending on which lights illuminate it and from what angles, and your own position relative to the surface changes it too. With procedural textures this kind of thing is easy. In fact, procedural textures can be calculated using shaders, but for that we'd normally need a specific shader language, and usually it is preferable to have a single development language. One solution is to provide a common virtual machine that translates texture-specific instructions into a shader language at run-time.
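As a toy illustration of that last idea (entirely made up, not any real system): describe the texture as a small expression type in the host language, and emit shader source from it at run-time.
Code:
-- A tiny texture-expression language and a GLSL-flavoured emitter.
data Expr = X | Y | Time | Lit Float
          | Add Expr Expr | Mul Expr Expr | Sin Expr

emit :: Expr -> String
emit X         = "uv.x"
emit Y         = "uv.y"
emit Time      = "time"
emit (Lit c)   = show c
emit (Add a b) = "(" ++ emit a ++ " + " ++ emit b ++ ")"
emit (Mul a b) = "(" ++ emit a ++ " * " ++ emit b ++ ")"
emit (Sin e)   = "sin(" ++ emit e ++ ")"

-- emit (Sin (Add X Time)) == "sin((uv.x + time))"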
To conclude, I'll say that in the years to come we'll most likely enjoy more and more procedural game content.
_____________________________________
[1] Wikipedia article. Procedural textures. http://en.wikipedia.org/wiki/Procedural_textures. Retrieved 2009-05-05.
[2] Peyton Jones, S., Chakravarty, M., Keller, G. and Leshchinskiy, R. (2008). Nested Data Parallelism in Haskell. http://research.microsoft.com/en-us/um/people/simonpj/papers/ndp/ndpslides.pdf
[3] Rhoades, J., Turk, G., Bell, A., State, A., Neumann, U. and Varshney, A. (1992). Real-Time Procedural Textures. http://www.cs.umd.edu/projects/gvil/papers/i3d92.pdf
Sulphur on 5/5/2009 at 20:44
Interesting writeup, Qoop. But I was under the impression that calculating procedural textures is still fairly processor-intensive, as far as current CPU/GPU tech goes. Given that current GPUs struggle with processing geometry and shaders at a resolution like 2560x1600, I doubt they'd have enough cycles left over to also calculate procedural textures in real time for video games today.
In the future though, who knows, the sky's the limit really.
Qooper on 5/5/2009 at 22:23
Sulphur,
thank you for your post. You raise good points and you're partly correct. Procedural textures are heavy to calculate at run-time, but I've seen them in action in several academic research projects at my university. The sad truth is that a big part of these calculations is done in the wrong places. For instance, a lot of the physics in computer games is heavy CPU-wise, but could be done faster by the GPU. Especially now that we have several GPUs in parallel, or several cores in a single GPU, we can easily afford to move some of the physics calculations over to the GPU side and free up numerous CPU cycles for better use. Here's a general example:
Type A calculations take 12 time units on the CPU, but only 5 on the GPU. Type B calculations take 6 on the GPU but only 5 on the CPU. If A's are done on the CPU and B's on the GPU, we get 12 + 6 = 18, but if they are swapped, we get 5 + 5 = 10. These differences come from the instruction sets of each processor. The GPU is built to handle operations like the scalar and vector product, and data like vectors (quaternions, for example) and matrices.
This example is not very accurate, but it clarifies the point a little. We could do all kinds of cool stuff if we fully utilized the GPU's power.
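The same toy example as code, in case it helps (the numbers are the made-up ones above, nothing measured):
Code:
-- Cost table: (workload, cost on CPU, cost on GPU), in time units.
workloads :: [(String, Int, Int)]
workloads = [("A", 12, 5), ("B", 5, 6)]

naive :: Int
naive = 12 + 6  -- A on the CPU, B on the GPU: 18 time units

-- Run each workload wherever it is cheaper.
swapped :: Int
swapped = sum [min cpu gpu | (_, cpu, gpu) <- workloads]  -- 5 + 5 = 10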
The reason higher resolutions in games demand more hardware power is that the amount of ray-casting increases. But this is a problem of linear time complexity only, that is, O(n) if you're familiar with Big-O notation. The n in this case is the number of pixels, which is determined by the resolution. The reason the complexity is only linear and not higher (thank God) is portal-based culling (assuming we have a good space-partitioning scheme in use). For each pixel, we cast a single ray to determine the colour of that pixel. We take the first polygon that ray intersects and use that polygon's procedural texture to get the pixel colour. We use portal culling to greatly reduce the number of polygons we have to check. Here's the same in functional pseudo-code:
Code:
-- cam: the camera that outputs to the player's screen.
-- rays c: one ray per pixel of camera c.
-- (makeRay, pixels, firstHit and textureColour are assumed helpers.)
rays c = [ makeRay c p | p <- pixels c ]

-- getColour r: the colour of the point on the texture of the first
-- polygon that the ray r intersects.
getColour r = textureColour (firstHit r)

nextFrame = map getColour (rays cam)
As you can see, the time to evaluate rays for a given camera is linearly dependent on the number of pixels that camera has, so rays is O(n). That leaves us with getColour, which is kind of like a single ray of light hitting the camera lens: it determines the colour of a single pixel. However, if we simply check the ray against every polygon in the game world, we have another linear dependency (this time on the number of polygons in the entire world). That would make it O(k), and together the two are O(nk). Not good. Fortunately, we can use portal culling to prune the parameter space. We take into account only what we can see and cull the rest. Now we have a fairly constant number of polygons per frame, so getColour gets reduced to O(C), where C is almost a constant (not a constant mathematically, but constant enough for us).
So you are correct that higher resolutions are a problem, because they require more processing power. But the problem is not as big as it seems, and it shrinks with each new generation of CPUs (remember Moore's law). However, there's a similar kind of problem with texture resolution. Without compression, a 512x512 RGB (24-bit) texture takes 786,432 bytes of video memory. A 1024x1024 texture is four times as big, so it takes a little over 3 MB. Then we have a normal map for that texture, so add another 3 MB. Sometimes we want to include additional information about the material, say a specularity map, which is only a single-channel (8-bit) image, so it's not quite as memory-consuming, but these maps tend to multiply in number as games get visually more complex. Now we have two things increasing at once: texture resolution, and the amount of extra information per texel. That's already polynomial growth. Again, not good. So this is one reason to become more procedurally oriented.
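The arithmetic above as a throwaway helper (not from any engine), if you want to play with the numbers:
Code:
-- Uncompressed video memory for a w x h texture at bpp bits per pixel.
texBytes :: Int -> Int -> Int -> Int
texBytes w h bpp = w * h * (bpp `div` 8)

-- texBytes 512  512  24 ==  786432   (~0.75 MB)
-- texBytes 1024 1024 24 == 3145728   (~3 MB, plus the same again
--                                     for its normal map)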
This is an interesting topic (very close to the topic of my thesis), and I'd very much like to see more discussion about it.
Ajare on 5/5/2009 at 22:55
Quote Posted by Qooper
The sad truth is that a big part of these calculations is done in the wrong places. For instance, a lot of the physics in computer games is heavy CPU-wise, but could be done faster by the GPU.
The reason the physics is done on the CPU is that reading back from the GPU is still very slow, so practically speaking any physics you do on the GPU cannot affect objects used by game systems (e.g. scripting) that run on the CPU.
Shadowcat on 6/5/2009 at 05:14
Look up "Infinity - The Quest for Earth", and check out some videos.
moddb.com had some as well, IIRC.
Really incredible stuff.
Ships and space stations are modelled in the usual way, but stars and planets (including the geometry and textures) are all generated on the fly.
doctorfrog on 6/5/2009 at 07:38
I don't know jack about game programming, but it seems to me that some of the advantages of procedural generation aren't being used fully. Here's an example:
Let's say we're building a cobblestone road, and we want that road to be pretty realistic, balanced against user hardware and manpower limitations.
Well, our map is hand-guided and roughly painted, but procedural generation fills in some of the work by placing trees in realistic clumps, with random but sensible variety. Hooray for us.
We also used some hand-guided procedural generation to create a winding road that 'grew' a little naturally from the terrain shape, to mimic generations of footfalls. Good on us for that, if we did that at all.
But the road itself is just a repeated texture; it's bland and boring. So we just made it really high-resolution and hoped that gamers would be so blown away by how cool it looks that they don't notice it's basically repeated everywhere.
Same with the grass texture. And the dirt texture. And the sky. And the leaves in the trees. And so forth. I'm looking at you, vanilla unpatched Oblivion, which I played for about an hour yesterday before wandering away, disappointed, even though I know I'm being totally unfair. (I mean seriously, the tree foliage is made up of clones of clones of sprite leaf clusters; they don't even bother flipping or rotating them for faked variety.)
SO. Would it make sense, instead of creating a small collection of high-res but endlessly repeated textures, to create a set of 10-40 or so small cobblestone textures and maybe cobblestone polygons, and use developer-side procedural generation to create a road composed of a grid of these cobblestones arranged in a non-repeating pattern? The same textures could be flipped and rotated, perhaps even color-shifted to fake a larger 'palette' of stones. Hopeful result: a realistic road. Followed by realistic dirt, trees, water...
Would this not be a balance between procedurally generating a 'real' road and faking it really well with an unpredictable pattern? Memory need not be hampered overmuch with an excess of texture, CPU would not be weighed down with rendering massive real-time procedurally generated textures, load time would not suffer, etc. Or would all the processor overhead needed to coordinate these separate entities outweigh the benefits of requiring smaller textures?
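One sketch of how the layout could work (everything here is invented for illustration): hash each grid cell's coordinates into a tile index, rotation and flip, so the pattern never visibly repeats but is completely stable across loads and revisits.
Code:
import Data.Bits (shiftR, xor)

-- Mix two grid coordinates into one pseudo-random non-negative Int.
hash2 :: Int -> Int -> Int
hash2 x y = let h = (x * 73856093) `xor` (y * 19349663)
            in abs (h `xor` (h `shiftR` 13))

-- For cell (x, y): (tile index, rotation in quarter-turns, flipped?).
tileFor :: Int -> Int -> Int -> (Int, Int, Bool)
tileFor numTiles x y =
  let h = hash2 x y
  in ( h `mod` numTiles
     , (h `div` numTiles) `mod` 4
     , odd (h `div` (numTiles * 4)) )

-- tileFor 40 12 7 always picks the same stone, however often the
-- player walks this stretch of road.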
I mean, come on, for that matter, why aren't height and weight more often randomized among NPC characters? Same number of polys, same skeleton, hell, same textures, just some of the vertices are tweaked a bit for variety's sake. No, instead, everyone in Tamriel is 6'0", 160 lbs, and not a child in sight. Really? (Also they are ugly.)
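And the NPC idea in the same hand-wavy spirit (invented names again; a per-NPC seed keeps everyone's build stable between visits):
Code:
import System.Random (mkStdGen, randomR)

-- Height and girth multipliers derived from a stable per-NPC seed.
npcBuild :: Int -> (Float, Float)
npcBuild npcId =
  let g0      = mkStdGen npcId
      (h, g1) = randomR (0.92, 1.08) g0  -- height: +/- 8%
      (w, _)  = randomR (0.90, 1.15) g1  -- girth: a bit more spread
  in (h, w)  -- scale the skeleton's vertices by these at spawn time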
Or is all this mooning on pretty obvious, all boiling down to lack of developer time?
DDL on 6/5/2009 at 13:26
Getting all the cobblestone textures to tile properly might be a bit of an arse, though. You might end up wasting a lot of artist time getting them all nicely compatible (as opposed to looking like "40-odd different cobblestone tile textures stuck together repeatedly"), when ultimately the majority of your audience won't even notice (I didn't notice the tree leaf thing: I was too busy going OOOH at the way the sunlight adjusted when you went under the trees, and stuff).
Plus, unless you were doing it on the fly (i.e. at runtime), you'd end up storing more information on tile type and positioning than you would if you just used a single huge texture. And if you were doing it on the fly, you'd probably want to limit the amount of work being done (you don't want to pre-render tilesets in a city on the other side of the world, for instance), so you'd probably end up with cobblestone patterns CHANGING when you hadn't been there for a while... which might actually be more jarring.
As for no children: isn't that just because "games where you can kill kids" are usually higher rated, even if such a thing is far, far from the actual focus of the game?
Apostolus on 6/5/2009 at 15:55
Just as I thought. Everything looks boring, replicative and uninspired. Who wants to play through a ubiquitous, unending dull-drum of blah?
june gloom on 6/5/2009 at 17:39
Halo fans? WoW fans?