Why DirectX 11 will save the video card industry and why you don't care - by clearing
Koki on 23/7/2009 at 20:17
Quote Posted by nicked
Ever is a long time. Sooner or later we could reach the stage where game engines are created in the same way as the real world.
No.
june gloom on 23/7/2009 at 20:19
Back that up, Koki.
Koki on 23/7/2009 at 20:35
Next on TTLG: We're proving you can't, in fact, cast a Magic Missile.
Ostriig on 23/7/2009 at 20:38
Damn it, Koki, you should've just said "No" again.
P.S.
Regarding the sort of tech that nicked described there - engines that actually simulate reality - I'm sure that will come to pass eventually, but it's still about as far-fetched as the Magic Missile, and it's anybody's guess whether we'll even have graphics cards then, or even personal computers for that matter.
As for the subject at hand, there's plenty of room to improve. I suspect the slowdown in the evolution of visuals isn't so much that they're "good enough" or even that it's gotten exponentially difficult to make advancements; rather, I think a big part of it is that the gaming industry's requirements have changed. The PC is no longer the dominant platform, and console manufacturers have found sterling success in pushing their more affordable, couch-ready, plug-and-play products. And it's these same manufacturers who are pushing for longer model lifecycles. In this situation, choosing to deploy exclusively on the only platform that could benefit from rapid and radical evolution of graphics hardware, the PC, would be a difficult choice to make for most game development studios.
Sulphur on 23/7/2009 at 20:40
I was going to post a huge post summing up my feelings about this, but just about everybody else in this thread has already done this for me.
So I won't do that, but I'm still gonna make a fucking gargantuan post anyway.
Anyway, I just wanted to add that, in terms of animation, there are a couple of things to bear in mind: animating characters/objects for actions like walking/running/jumping etc. is normally done these days by keyframing or mocapping or whatever, a job which usually rests with the art department.
When you add physics into the equation, though, things get stickier: ragdoll physics is less of an animator task than it is a task of the graphics engine/programmers. You've got other things to worry about like inverse kinematics and blending for transitions between animations to make them seem more natural, which can call for artists as well as programmers to work together to implement viable solutions.
Then you have things like NaturalMotion's Euphoria, where even the blending between each animation stage and individual animations themselves can be determined by the software itself. The line, as such, is blurring more and more with advances in technology.
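The blending-for-transitions step mentioned above can be sketched in a few lines. This is a toy illustration only; every name here (`blend_poses`, `transition`, the joint angles) is invented for the example, and a real engine would blend quaternion joint rotations with slerp rather than lerping raw angles:

```python
# Toy sketch of cross-fading between two animation poses.
# All names are hypothetical, not from any actual engine.

def lerp(a, b, t):
    """Linear interpolation between two scalar values."""
    return a + (b - a) * t

def blend_poses(pose_a, pose_b, t):
    """Blend two poses (dicts of joint name -> angle in degrees).

    t = 0.0 gives pose_a, t = 1.0 gives pose_b. A real engine
    would slerp quaternions per joint instead of lerping angles.
    """
    return {joint: lerp(pose_a[joint], pose_b[joint], t)
            for joint in pose_a}

def transition(from_pose, to_pose, duration, elapsed):
    """Cross-fade between poses over `duration` seconds."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    return blend_poses(from_pose, to_pose, t)

walk = {"knee": 10.0, "hip": 5.0}
run = {"knee": 40.0, "hip": 25.0}
halfway = transition(walk, run, duration=0.5, elapsed=0.25)
# halfway -> {"knee": 25.0, "hip": 15.0}
```

Something like Euphoria effectively replaces the fixed `t` curve and the hand-authored target poses with values computed at runtime, which is where the animator/programmer line starts to blur.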
RE: the photorealism debate - as far as games go, there's still plenty of ground to cover. We've reached the point of diminishing returns as far as graphics quality goes - we've gotten to the stage where we have reasonable to great-looking (if not quite offline-rendering level) visuals, but that doesn't mean we're done yet. There's still a ways to go - multiple accurate realtime reflections/refractions, subsurface scattering for translucent surfaces like skin and marble, realtime radiosity for realistic lighting, etc... that's all still in the pipe.
It's not necessarily jaw-dropping stuff, like the transition from software 3D to hardware-accelerated 3D, but important in terms of verisimilitude when it comes to simulating real-world behaviour/visuals in games, something which the industry as a whole has been and will continue moving towards.
Also, nicked's post read like just the sort of philosophical reflection I've had for a while on where games would go if you took the process of developing one to its logical extreme at some far point in the future.
It's highly unlikely though, because the amount of computational power required to simulate an entire world at a molecular (or deeper) level would be, well... ginormous to the power of a couple googolplexes. Still, it's a nice, warm, fuzzy thought to have, especially when it entails being able to create simulacrums of living worlds at the drop of a hat, kinda like the holodecks in Star Trek.
Volitions Advocate on 23/7/2009 at 20:50
I get warm fuzzy thoughts just hearing the word googolplex.
What nicked said in his post was essentially the kind of new thinking I had in mind.
Chade on 23/7/2009 at 21:30
Quote Posted by nicked
Designers will build molecules with material properties and physics programming, and based on the element represented by the molecule, different hardnesses, textures, behaviours of materials will be mass produceable ... The engine programming will handle how those molecules interact with their environment, and all the artist has to do is define a volume and density of those molecules.
No matter how "advanced" the industry gets, I doubt designers will ever want to simulate at a level that's so far removed from the player's actual experience.
Renzatic on 23/7/2009 at 22:55
It'd be a great idea if it could be implemented on a macro scale. Like you have a model of a bunch of bricks. Instead of UVing the whole thing and applying a hand drawn texture, you select your polygons and tell it you want the surface to reference a red clay brick schematic with x amount of wear and have it generate the results procedurally.
You could even take it farther with surface details like lichen. Just tell it to produce a moss covering based off another schematic wherever it's most likely to grow. Set some local environmental conditions, like light and humidity, gravity settings, or whatever other things you can think of, and let it go to town.
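That "let it go to town" idea could amount to something as simple as scoring each surface point against environmental conditions and thresholding the score into a coverage mask. The sketch below is purely illustrative; the function names, inputs, and weights are all made up for the example:

```python
# Toy sketch of procedural moss placement driven by environment:
# score each surface point, then threshold into a coverage mask.
# All names and formulas here are invented for illustration.

def moss_score(light, humidity, upward_facing):
    """Heuristic 0..1 'likelihood of moss' for a surface point.

    Moss prefers shade and damp; `upward_facing` (-1..1, the dot
    product of the surface normal with 'up') favours surfaces
    where water would collect.
    """
    shade = 1.0 - light                     # light in 0..1, 1 = full sun
    damp = humidity                         # humidity in 0..1
    catchment = 0.5 + 0.5 * upward_facing   # remap -1..1 to 0..1
    return shade * damp * catchment

def moss_mask(points, threshold=0.3):
    """True for each (light, humidity, upward_facing) point that gets moss."""
    return [moss_score(l, h, u) > threshold for (l, h, u) in points]

points = [
    (0.9, 0.2, 0.0),   # sunny, dry vertical wall -> no moss
    (0.1, 0.8, 0.5),   # shaded, damp, tilted upward -> moss
]
mask = moss_mask(points)
# mask -> [False, True]
```

The artist's job then shifts from painting the moss to tuning the threshold and the environmental inputs, which is exactly the kind of schematic-driven workflow described above.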
Even if it isn't taken to the extremes Nicked suggested, it'd still be an excellent way to produce realistic textures quickly and easily.
Muzman on 23/7/2009 at 22:57
He makes a point in there that the number of graphics nerds spotting RT trickery is decreasing.
If so I reckon that's only because the number of people playing has increased greatly and is merely diluting the nerdery. On Eurogamer they noticed (after a year or two of hosting videos, including making their own show) that when they started doing hi res system comparison videos of games, these videos immediately got more downloads than everything else combined.
Console tribalism is a big part of this, of course. But it doesn't mean there are fewer people paying attention to the minutiae. We're entering an age of videophilia and it's going to be a lot worse and more widespread than the fringe audiophilia of old simply because it's pictures. Spec porn is rife, and more and more people are paying attention to the res and the artifacts and putting together rationales for understanding it all (including seeing problems that aren't there because one brand costs more than another). Games won't escape this. And since most really don't care to differentiate between frame delivery to the screen and rendering, the latter will come in for inevitable nitpicking as well. Which gives fidelity room for one engine/game to be more 'true' than the next.
It is true that people like Valve and Nintendo have cleverly found the average to appeal to and Source shadows and bobble headed tennis players are plenty for most people. But that's never been what drove innovation in the industry. The existence of a strong middle ground has never stopped the high end from existing, or stopped it from causing the average to shift. The 240 lines of VHS was plenty until people started seeing different. Your soundcard never needed to render sample rates higher than CD quality in stereo (as "the human ear cannot tell the difference" yadda.), until it did. The human eye cannot discern more than 10 million colours, apparently, so 24bit colour is all you'll ever need they told us (which is garbage in itself, but that's another story).
And look where we are now.
But this time we've got it sussed. We're finally on top of that fidelity "need". This time for sure. Mmm hmm.
Probably should add that his general argument about DX11 not being a big draw is probably true. I'm just off on a tangent as always.
Chade on 23/7/2009 at 23:31
Quote Posted by Renzatic
Even if it isn't taken to the extremes Nicked suggested, it'd still be an excellent way to produce realistic textures quickly and easily.
Well, to me nicked seemed to be making a point about the actual runtime engine of the game, rather than any algorithm which might go into producing the assets.
And I don't see designers relinquishing that much control. I imagine they'd like to directly specify, for instance, what sort of rubble gets created by a collapsing wall, and how that rubble interacts with the player. Let's say you want the player to be able to take cover behind fallen debris, or you want the player to be able to destroy shit without impeding their progress around the map ... do you really want to rely on some heavily indirect process to result in the appropriate debris, or do you just want the programmers to say "here is the probability distribution of debris size when this wall gets destroyed".
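That "probability distribution of debris size" approach is trivially direct compared to a molecular simulation. A minimal sketch, with sizes and weights invented purely for illustration:

```python
# Sketch of designer-specified debris: sample rubble directly from
# a tuned distribution instead of simulating the wall's material.
# The categories and weights below are made up for the example.
import random

DEBRIS_SIZES = ["pebble", "chunk", "slab"]
# Designer-tuned weights: mostly small rubble, with the occasional
# slab big enough for the player to take cover behind.
DEBRIS_WEIGHTS = [0.6, 0.3, 0.1]

def spawn_debris(count, rng=None):
    """Sample `count` debris pieces from the designer's distribution."""
    rng = rng or random.Random()
    return rng.choices(DEBRIS_SIZES, weights=DEBRIS_WEIGHTS, k=count)

pieces = spawn_debris(10, rng=random.Random(42))
# mostly 'pebble', a few 'chunk', maybe a 'slab'
```

The gameplay-relevant outcome (can I hide behind this?) is controlled in one line of weights, which is exactly the kind of readable, directly-authored mechanic the indirect low-level simulation can't guarantee.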
It's all very well to enthuse over these low level procedures when they are just going to be used in a cosmetic way (aka 3d nerdery in movies), but once those mechanics start affecting the gameplay, they are pretty much diametrically opposed to the sort of game mechanics that you want (they aren't readable, for starters).
EDIT: having said that, if you did model walls as individual bricks glued together, and destroying a wall was just destroying the glue, then that would make a lot of sense from a gameplay pov. But that's a far cry from simulating tiny atomic elements ... it really just shows that you want to simulate at an appropriate level, not that you want to simulate at a super low level.