catbarf on 6/8/2008 at 02:33
Quote Posted by The_Raven
Did you even bother to read the post before yours?
Yes, but
Quote Posted by The_Raven
That isn't real-time rendering, though.
isn't terribly illuminating. Forgive me for having the audacity to expand upon a six-word sentence.
The_Raven on 6/8/2008 at 04:25
After rereading your post earlier, I did realize that was probably the case. I think it was more the wording and complete lack of acknowledgment that made me wonder.
Jason Moyer on 6/8/2008 at 06:15
That post about the Radeons says they were doing it at 720p. Surely the resolution of CGI that's going to be transferred to film has to be much higher than that?
catbarf on 6/8/2008 at 13:39
It seems very, very strange to me that you would be able to get movie-quality graphics at game-quality FPS on a system that didn't cost as much as a new car.
IndieInIndy on 6/8/2008 at 20:56
Quote Posted by catbarf
It seems very, very strange to me that you would be able to get movie-quality graphics at game-quality FPS on a system that didn't cost as much as a new car.
You can't. I'm almost tempted to call shenanigans on that tgdaily article, but that might imply the author really understood what he was talking about.
The article sorta hints at it, mentioning that the commercial was "rendered on a GPU and ... directed in realtime". That implies previz (previsualization), which is shorthand for "turn everything off so we can get interactive frame rates". No shadows, no reflections, no anti-aliasing, maybe even no textures. This allows the technical director to control camera angles, position elements, etc. by good ol' click-n-drag. Once things are positioned, you start turning shadows and other effects back on and adjusting light positions and settings. Very quickly, things stop being interactive, and you're waiting seconds for things to update.
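To put that in code terms, previz is basically a preset that zeroes out every expensive feature. A made-up sketch (none of these names come from any real renderer):
Code:
struct RenderSettings {
    bool shadows      = true;
    bool reflections  = true;
    bool antialiasing = true;
    bool textures     = true;
};

// Hypothetical previz preset: everything expensive goes off so the
// viewport stays interactive for click-n-drag camera and layout work.
RenderSettings previzPreset() {
    RenderSettings s;
    s.shadows      = false;  // no shadow maps, no shadow rays
    s.reflections  = false;  // no secondary bounces
    s.antialiasing = false;  // one sample per pixel
    s.textures     = false;  // flat-shaded stand-in geometry
    return s;
}

int main() { return previzPreset().shadows; }  // 0: everything is off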
Once you've got all of the effects turned on, it takes a long time to render complex CGI, sometimes on the order of hours per frame. And the rendered result is never the final product. It still goes through touch-ups, color correction, compositing of elements that were rendered separately from one another, etc.
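To put numbers on "a long time" (figures picked purely for illustration, not from the article): a 30-second spot at 24 fps is 720 frames, and at two hours a frame that's two months on a single machine. Hence render farms.
Code:
#include <cstdio>

int main() {
    // Assumed figures, chosen only for illustration:
    const double hoursPerFrame = 2.0;   // complex CGI render time
    const int    fps           = 24;    // film frame rate
    const int    spotSeconds   = 30;    // a 30-second commercial

    const int    frames     = fps * spotSeconds;       // 720 frames
    const double totalHours = frames * hoursPerFrame;  // 1440 hours
    std::printf("%d frames -> %.0f hours (~%.0f days) on one machine\n",
                frames, totalHours, totalHours / 24.0);
    return 0;
}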
Real-time previz is nothing new. Some directors have been using Unreal machinima to set up shots beforehand. Some of the systems I worked on almost a decade ago were being used to combine realtime CGI with actors to lay out shots on the set. That they're now able to do the previz in realtime with raytracing is impressive. But we're still a long way from being able to render CGI-heavy movies, or even commercials, in realtime.
The_Raven on 7/8/2008 at 00:06
Quote:
Real-time previz is nothing new. Some directors have been using Unreal machinima to set up shots beforehand.
I remember hearing that Spielberg did that when he made AI.
Quote:
Some of the systems I worked on almost a decade ago were being used to combine realtime CGI with actors to lay out shots on the set.
What exactly do you do for a living, if you don't mind me asking?
IndieInIndy on 7/8/2008 at 13:57
Quote Posted by The_Raven
What exactly do you do for a living, if you don't mind me asking?
I write software for video display and editing systems, television and HD stuff mostly, which effectively includes the film industry now that many productions are either all digital or use digital on-set.
Not something I talk about much on-line, since I don't need my idiotic ramblings reflecting back on my day job. It's a small industry, with a long memory, and while I like to think of myself as a reasonably intelligent and coherent person, that opinion seldom survives contact with the internet.
The_Raven on 8/8/2008 at 08:21
Fair enough. Discussing the inner workings of a job on the Internet is usually against employer policy at most places; plus, it can come across as unprofessional.
Silkworm on 11/8/2008 at 00:08
That article's argument is a strawman: he assumes "hybrid approach" means rendering some of the pixels on the screen with a raytracer and others with a rasterizer. But "hybrid approach" could also mean using raytracers to "help out" or "assist" rasterizers (such as constructing polygonal scenes from voxels, or using rays for light data).
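To make that second meaning concrete, here's a toy sketch (every name in it is invented; this isn't anyone's actual pipeline). The rasterizer still decides which surface each pixel sees, and rays only get used afterward to answer lighting questions like "is this point in shadow?":
Code:
#include <vector>

struct SurfacePoint { float x, y, z, nx, ny, nz; };

// Stub: in a real renderer the GPU rasterizes the scene and fills
// one of these per pixel (a G-buffer, roughly).
std::vector<SurfacePoint> rasterizePrimaryVisibility() { return {}; }

// Stub: a real version would walk an acceleration structure toward
// the light and report whether anything blocks the ray.
bool traceShadowRay(const SurfacePoint&) { return false; }

void renderHybridFrame() {
    // Every pixel still comes from polygons...
    for (const SurfacePoint& p : rasterizePrimaryVisibility()) {
        bool lit = !traceShadowRay(p);  // ...but rays assist the shading
        (void)lit;                      // shade the pixel with the result
    }
}

int main() { renderHybridFrame(); }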
John Carmack doesn't think polygons are going away anytime soon (what, people are going to throw their Nvidia and ATI cards out the window?), and I agree. Shit, I hope he's right; this 8800 GTS cost damn near $300! From what he's said, he plans on using voxelization to make rasterization more efficient.
The logic goes like this: filling in polygons with textures (which is basically what rasterization means) only makes sense if the polygons take up a significant amount of pixel space on your monitor (a polygon in a game can be anywhere from 8 pixels in area to 8 million or more, depending on the scene). As game geometry gets more and more complex, the average polygon size will decrease (and thus rasterization will get less and less efficient) unless developers find a way to dynamically scale polygon density (rough arithmetic below).
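Here's the arithmetic behind that, using the 720p figure from earlier and made-up triangle counts: divide screen pixels by visible polygons and the average coverage collapses fast, at which point per-triangle setup overhead dominates:
Code:
#include <cstdio>

int main() {
    const long screenPixels = 1280L * 720L;  // the 720p figure: 921,600 pixels
    const long counts[] = {10000, 100000, 1000000, 10000000};
    for (long tris : counts) {
        // Average screen coverage per polygon, assuming every
        // triangle is visible and coverage is spread evenly.
        std::printf("%10ld triangles -> ~%.2f pixels each\n",
                    tris, (double)screenPixels / tris);
    }
    return 0;
}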
One way of dynamically scaling detail is "voxelizing" the geometry of the game, i.e. turning it into tiny cubes. See this for an example: http://lefebvre.sylvain.free.fr/octreetex/octree_textures_on_the_gpu.pdf
There are a number of advantages to doing it this way. For one thing, it no longer matters in theory how complex a scene is, because complexity can be scaled as you get closer or farther away (i.e. by how much pixel space a potential voxel takes up). Two, even if the user never sees the voxels directly, they can also be used for lighting, collision, sound occlusion effects, and other benefits of raytracing. And three, apparently this type of processing can be split up among multiple processors.
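For what it's worth, here's a toy version of that distance-scaling idea (the structure here is invented, not lifted from the paper): subdivide only while a cube still covers more than about a pixel on screen, so nearby geometry gets fine voxels and distant geometry stops at coarse ones:
Code:
#include <array>
#include <memory>

struct OctreeNode {
    float color[3];                                   // averaged surface/light data
    std::array<std::unique_ptr<OctreeNode>, 8> kids;  // all empty => leaf
};

// Rough screen coverage (in pixels) of a cube of size `extent`
// seen at `distance`, for a viewport `screenWidth` pixels wide.
float projectedPixels(float extent, float distance,
                      float screenWidth, float fovTan) {
    return extent / (distance * fovTan) * screenWidth;
}

// Descend only as deep as the screen can resolve; complexity scales
// with what's visible, not with how detailed the model is. (A real
// traversal would visit every occupied child; this follows a single
// branch just to show the stopping rule.)
const OctreeNode* selectLod(const OctreeNode* node, float extent,
                            float distance, float screenWidth, float fovTan) {
    if (!node->kids[0] ||
        projectedPixels(extent, distance, screenWidth, fovTan) <= 1.0f)
        return node;  // coarse enough for this distance: stop here
    return selectLod(node->kids[0].get(), extent * 0.5f,
                     distance, screenWidth, fovTan);
}

int main() {
    OctreeNode root{};  // a one-node "tree": the root is already a leaf
    const OctreeNode* lod = selectLod(&root, 16.0f, 100.0f, 1280.0f, 1.0f);
    return lod == &root ? 0 : 1;
}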
ZylonBane on 11/8/2008 at 15:50
Quote Posted by Silkworm
John Carmack doesn't think polygons are going away anytime soon (what, people are going to throw their Nvidia and ATI cards out the window?)
That's right, people certainly don't replace their video cards every 3-4 years, now do they?