zombe on 1/3/2012 at 01:20
Quote Posted by Yakoob
At last, the main reason I switched to using bullet is implemented - efficient per-triangle collision detection! I can now import complex meshes and have perfect collision out of the box :D
I really, really want to use Bullet too, but I keep postponing it - and will probably keep postponing it as long as possible, for various reasons:
* my current physics implementation [1] is extremely easy to integrate into my ever-changing data formats.
* it implements only the bare minimum needed (collision + a data struct for the response), leaving the broad phase to me - useful, as I can use my own spatial data structures directly instead of being forced into a separate structure like in Bullet (limited to the ones it provides).
* perfect access to everything (collision events / collision response / materials [down to the triangle level]) - which looks like a f* nightmare on the Bullet side [2].
* my current level of physics is sufficient for the time being.
[1] Swept (i.e. it does not skip space => it cannot "miss" collisions, get stuck, or go unstable) sphere (+ deformations) vs. triangle - see the sketch after these notes. Implemented and tested thoroughly by far smarter people than me (I have not found any faults, and as it has been used commercially for a long time, I seriously doubt there are any).
[2] One special worry for me is that Bullet has only one structure that can hold my current world data (a multi-material triangle soup), and its implementation sounds terrible (for my purposes, that is). I need to delve deeper into it to be sure, though. My primary concerns: memory consumption, cheap raytracing, and an excessive number of pointless collision checks (i.e. if Bullet asked me, I could tell it there are no triangles anywhere near the entity without looking at any triangles at all).
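To make the "cannot miss collisions" point concrete, here is a minimal sketch (not the referenced implementation - just an illustration with made-up types) of the plane phase of a swept sphere-vs-triangle test. It solves analytically for the first contact time within the step instead of sampling positions, so nothing can be skipped:

```cpp
// Minimal sketch of why a swept test cannot tunnel: solve for the first
// time t in [0,1] at which the moving sphere touches the triangle's plane.
// (A full test would go on to check the face, edge and vertex cases.)
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Sphere: center c, radius r, moving by velocity v over the step.
// Plane: unit normal n through point p. Returns true + impact time on hit.
bool sweepSpherePlane(Vec3 c, float r, Vec3 v, Vec3 n, Vec3 p, float& tHit)
{
    float dist  = n.dot(c - p);          // signed distance of center to plane
    float speed = n.dot(v);              // closing speed along the normal
    if (std::fabs(dist) <= r) { tHit = 0.0f; return true; } // already touching
    if (dist * speed >= 0.0f) return false;  // moving away (or parallel)
    // First contact when |dist + speed * t| == r:
    tHit = (dist > 0.0f ? dist - r : dist + r) / -speed;
    return tHit >= 0.0f && tHit <= 1.0f;
}
```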
Quote Posted by Volitions Advocate
Been working on this for awhile now.
Seems I missed it. How the hell does it work? I.e. how does it determine where your "cursors" are? Infrared LEDs + a specially crafted camera for infrared = ... no idea. A different amount of infrared hitting the pressed areas?
-----------------------------------------------------------
A word about JavaScript :D :
I don't know whether I have mentioned it somewhere before, but I use JavaScript a lot for prototyping and the like. Mostly for historical reasons, as one only needs a browser - very good for the computer classes (etc.) that had no "usual" programming languages available :). It is also easy to make GUIs with. JavaScript is a quite versatile language once you get to know it. Today I was also really impressed by its speed (relatively speaking, of course):
I was generating a 3D texture (proof-of-concept stuff), and it took ~35-45 s (which includes me navigating to, and clicking, the "yes, continue with the suspiciously long script?" prompt several times) to generate my 128*128*128 texture data and draw a slice of it, tiled 3x2, with 2 lights:
* processed 2 times with a convolution filter of size 9*9*9.
* several octaves of brown noise (tiling value noise [it uses vectors for displacement - so it looks even better than simplex noise, but is of course very, very slow]).
* lots of other per-texel computations.
So, how many times does the innermost loop of the convolution filter execute? Let's see: 2*128*128*128*9*9*9 > 3 billion ... with non-trivial, tiling lookups into the texture along the way ... tell me that is not bloody impressive. I doubt I could do it in heavily optimized C in under 4 s :/. Hmm, 40 s * 2.5 GHz / 3 billion ~= 33 ticks per innermost iteration (and that has to cover the innermost loop logic itself plus the texture-lookup calculations, assuming zero L1 cache misses). Damn - 4 s might have been way too optimistic.
Btw, I did the calculations because I was bothered that it took so long and was wondering whether it would be worth converting to C, with all the inconveniences C brings to the table. Well, it seems it just is not worth it :/.
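For reference, here is roughly what such an innermost loop looks like - a minimal C++ sketch of a 9*9*9 box convolution over a tiling 128^3 texture (the flat array layout and power-of-two wrap are my assumptions, not the actual prototype code):

```cpp
// 9*9*9 box convolution over a tiling 128^3 texture: the three inner loops
// run 729 times per texel, with a wrapped (tiling) lookup each time.
#include <cstddef>
#include <vector>

constexpr int N = 128;        // texture edge length (power of two)
constexpr int R = 4;          // filter radius: 2*R+1 = 9 taps per axis
constexpr int MASK = N - 1;   // cheap tiling: (i + d) & MASK wraps around

std::vector<float> convolve(const std::vector<float>& src)
{
    std::vector<float> dst(N * N * N);
    const float w = 1.0f / (9.0f * 9.0f * 9.0f);   // uniform box weight

    for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x)
    {
        float sum = 0.0f;
        for (int dz = -R; dz <= R; ++dz)           // 9 taps
        for (int dy = -R; dy <= R; ++dy)           // 9 taps
        for (int dx = -R; dx <= R; ++dx)           // 9 taps -> 729 per texel
        {
            // Tiling lookup: wrap each coordinate independently.
            const int sx = (x + dx) & MASK;
            const int sy = (y + dy) & MASK;
            const int sz = (z + dz) & MASK;
            sum += src[(static_cast<std::size_t>(sz) * N + sy) * N + sx];
        }
        dst[(static_cast<std::size_t>(z) * N + y) * N + x] = sum * w;
    }
    return dst;
}
```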
---------------------------------------------------------------------
So, what the hell am I doing, you ask?
(http://dl.dropbox.com/u/19090066/stuff/tex.png) This kind of thing.
* Moving the mouse over the part with 73 on it selects the texture slice - very convenient for seeing what is going on inside the 3D texture (especially how it renders when I have the rendering visualizer function attached to it).
* 2 lights (one reddish, at an angle, and one bluish, almost perpendicular to the slice plane).
* Materials: none in between; every cobble is one shade of pure gray (8 options) ... basically no materials to fog the view.
* The yellow line is the heightmap at y=0 at the current slice depth.
GOAL: a 3D texture that works whichever way you cut it. This includes surface normals (! think about that for a moment), occlusion (i.e. cracks and such ... ! think about that too x_x), specular & falloff, and 3 separate materials to be mixed together per texel - plus additional stuff if I find a way to squeeze all of that into 2 textures. Pretty optimistic about it so far :)
Generation:
* Try to generate 2500 points, all at least 15 units apart from each other (taking tiling into account, of course).
* Do a concurrent (per generated point) distance-timed flood fill to generate the 3D Voronoi shapes (aka cobbles) - see the sketch after this list.
* Round them using a 9*9*9 counting (texels with matching cobble ids) convolution filter (I need something better either here or somewhere else to ensure there is space between cobbles, as some of them tend to fuse together graphically).
* Find the borderness (x_x) of every cobble texel in relation to other cobbles / "empty" space (a convolution filter again) - use it as the base measure of depth (perpendicular to the triangle surface direction ... whichever way that happens to be).
* Add a bit of per-cobble noise (to prevent separate adjacent cobbles from sharing a recognizable pattern), avoiding excessive damage to the cobble surface regions while doing so.
* Calculate normals & the relevant magic to make it actually work (I am not entirely committed to the current solution - it works all right, but that is not a reason to stop looking), etc.
Currently some of it works fairly well, some does not, and some of it is not even done yet ... aka ... WIP.
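Here is a hypothetical C++ sketch of the flood-fill step (the actual prototype is JavaScript; the names and the priority-queue formulation are my assumptions): all seeds expand concurrently, ordered by true distance to their seed, so every texel ends up owned by its nearest seed - a discrete, tiling 3D Voronoi diagram.

```cpp
// Concurrent, distance-ordered flood fill: a multi-source Dijkstra-style
// expansion over a tiling 128^3 grid. Each texel is claimed by the first
// (i.e. nearest) seed wavefront that reaches it.
#include <array>
#include <cmath>
#include <cstdint>
#include <queue>
#include <vector>

constexpr int N = 128;
constexpr int MASK = N - 1;

struct Front { float dist; int x, y, z; std::int16_t id; };
struct Later { bool operator()(const Front& a, const Front& b) const
               { return a.dist > b.dist; } };     // min-heap on distance

// Shortest wrapped (tiling) delta between two coordinates on one axis.
static inline int wrapDelta(int a, int b)
{
    int d = (a - b) & MASK;
    return d > N / 2 ? d - N : d;
}

std::vector<std::int16_t> voronoiFill(const std::vector<std::array<int,3>>& seeds)
{
    std::vector<std::int16_t> owner(N * N * N, -1);
    std::priority_queue<Front, std::vector<Front>, Later> open;

    for (std::size_t i = 0; i < seeds.size(); ++i)
        open.push({0.0f, seeds[i][0], seeds[i][1], seeds[i][2],
                   (std::int16_t)i});

    while (!open.empty()) {
        Front f = open.top(); open.pop();
        std::int16_t& cell = owner[(f.z * N + f.y) * N + f.x];
        if (cell != -1) continue;          // already claimed by a nearer seed
        cell = f.id;

        static const int step[6][3] = {{1,0,0},{-1,0,0},{0,1,0},
                                       {0,-1,0},{0,0,1},{0,0,-1}};
        for (auto& s : step) {
            int nx = (f.x + s[0]) & MASK;  // tiling neighbours
            int ny = (f.y + s[1]) & MASK;
            int nz = (f.z + s[2]) & MASK;
            if (owner[(nz * N + ny) * N + nx] != -1) continue;
            auto& sd = seeds[f.id];
            float dx = wrapDelta(nx, sd[0]);
            float dy = wrapDelta(ny, sd[1]);
            float dz = wrapDelta(nz, sd[2]);
            open.push({std::sqrt(dx*dx + dy*dy + dz*dz), nx, ny, nz, f.id});
        }
    }
    return owner;
}
```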
--------------------------------
Other JavaScript stuff:
(http://dl.dropbox.com/u/19090066/stuff/triline.png) I invented a triangulation algorithm, as I could not find anything matching my needs (the shape is considered to be one piece, built from contour lines [with no prior relations known between the contour lines - i.e. it does not need to know what is inside what]; counter-clockwise contours form solids and clockwise ones form holes). One of many debugging polygons. "Step" single-steps through the algorithm with highlights, using the "yield" command.
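The solid/hole convention hinges on contour winding. For illustration, the standard signed-area (shoelace) test - not necessarily what the algorithm itself uses - classifies a contour like this:

```cpp
// Shoelace signed-area test: positive area => counter-clockwise (solid),
// negative => clockwise (hole).
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

double signedArea(const std::vector<Pt>& contour)
{
    double a = 0.0;
    for (std::size_t i = 0, n = contour.size(); i < n; ++i) {
        const Pt& p = contour[i];
        const Pt& q = contour[(i + 1) % n];   // wrap to close the loop
        a += p.x * q.y - q.x * p.y;           // shoelace term
    }
    return 0.5 * a;
}

bool isSolid(const std::vector<Pt>& contour) { return signedArea(contour) > 0.0; }
```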
(http://dl.dropbox.com/u/19090066/stuff/light.png) Testing light propagation algorithms. The bottom image is the decision map (to see what the algorithm is doing) - mousing over it shows the details at top right. It allows edits (local updates - it does not relight everything, just the area that will change [yeah, it can predict that pretty accurately]).
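For readers unfamiliar with grid light propagation, here is a sketch of one common scheme (a BFS that attenuates the light level per cell); the algorithms actually being tested above are not shown, so treat this purely as an illustration:

```cpp
// BFS light propagation on a 2D grid: light spreads from a source, losing
// one level per cell, and stops at solid cells.
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

constexpr int W = 64, H = 64;

void propagate(std::vector<std::uint8_t>& light, const std::vector<bool>& solid,
               int sx, int sy, std::uint8_t level)
{
    std::queue<std::pair<int, int>> open;
    light[sy * W + sx] = level;
    open.push({sx, sy});
    while (!open.empty()) {
        auto [x, y] = open.front(); open.pop();
        std::uint8_t cur = light[y * W + x];
        if (cur <= 1) continue;                 // too dim to spread further
        std::uint8_t next = cur - 1;
        const int d[4][2] = {{1,0},{-1,0},{0,1},{0,-1}};
        for (auto& v : d) {
            int nx = x + v[0], ny = y + v[1];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            if (solid[ny * W + nx]) continue;          // light stops at walls
            if (light[ny * W + nx] >= next) continue;  // already as bright
            light[ny * W + nx] = next;
            open.push({nx, ny});
        }
    }
}
```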
(http://dl.dropbox.com/u/19090066/stuff/dir.png) Finding a new light-direction algorithm, as I found that real-world texture interpolation precision is absolutely, horrifyingly bad (what do you mean, I cannot differentiate a 1/256th change in color, 500 units away from the origin, between 2 adjacent texels in a 256-wide texture at a 1/100th-texel delta!?) - I kind of required that not to be the case x_x. Well, the new one kind of works (even on real hardware) and bypasses most of the interpolation issues.
(http://dl.dropbox.com/u/19090066/stuff/tessel.png) Developing and generating code for voxel-based tessellation. Things to note:
* There are exactly 22 unique configurations of the 2x2x2 space (see the sketch after this list).
* Red = bottom, green = top, yellow = both.
* #0 & #21 do not produce anything.
* Cases #3, #6, #8, #9, #11, #13, #16, #17, #19 are combinations of the gray "base" cases.
* Cases #14 & #18 cannot be solved reliably (they need to know more than just the 2x2x2 area).
* Case #19 is a fucking annoying exceptional case.
* The fXrYZ stuff notes the flips and rotations needed to turn the 2x2x2 area into the single base case shown (with the case's bit-pattern number in front).
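The "exactly 22" claim can be checked by brute force: enumerate all 256 fill patterns of the 8 corners and count the equivalence classes under flips + rotations (the 48 cube symmetries). A standalone sketch, written for this post rather than taken from the actual generator:

```cpp
// Count equivalence classes of 2x2x2 corner patterns under the full cube
// symmetry group (rotations + mirror). Expected output: 48 symmetries,
// 22 classes.
#include <array>
#include <cstdio>
#include <functional>
#include <set>
#include <vector>

using Perm = std::array<int, 8>;   // permutation of the 8 corners

// Corner i has coordinates (i&1, (i>>1)&1, (i>>2)&1).
static Perm fromCoordMap(std::function<std::array<int,3>(int,int,int)> f)
{
    Perm p{};
    for (int i = 0; i < 8; ++i) {
        auto c = f(i & 1, (i >> 1) & 1, (i >> 2) & 1);
        p[i] = c[0] + 2 * c[1] + 4 * c[2];
    }
    return p;
}

int main()
{
    // Generators: 90-degree rotations about X, Y, Z, plus one mirror.
    std::vector<Perm> gens = {
        fromCoordMap([](int x,int y,int z){ return std::array<int,3>{x, 1-z, y}; }),
        fromCoordMap([](int x,int y,int z){ return std::array<int,3>{z, y, 1-x}; }),
        fromCoordMap([](int x,int y,int z){ return std::array<int,3>{1-y, x, z}; }),
        fromCoordMap([](int x,int y,int z){ return std::array<int,3>{1-x, y, z}; }),
    };

    // Close the generators into the full group (48 permutations).
    std::set<Perm> group = { fromCoordMap([](int x,int y,int z){
                                 return std::array<int,3>{x, y, z}; }) };
    bool grew = true;
    while (grew) {
        grew = false;
        for (auto g : std::vector<Perm>(group.begin(), group.end()))
            for (auto& h : gens) {
                Perm c;
                for (int i = 0; i < 8; ++i) c[i] = h[g[i]];
                grew |= group.insert(c).second;
            }
    }

    // Canonical form of a pattern = smallest image under any symmetry.
    std::set<int> classes;
    for (int pat = 0; pat < 256; ++pat) {
        int best = 255;
        for (auto& p : group) {
            int img = 0;
            for (int i = 0; i < 8; ++i)
                if (pat & (1 << i)) img |= 1 << p[i];
            if (img < best) best = img;
        }
        classes.insert(best);
    }
    std::printf("%zu symmetries, %zu classes\n", group.size(), classes.size());
}
```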
Volitions Advocate on 1/3/2012 at 02:17
There are a lot of different ways that DIY multitouch screens work. Pretty much all of them involve IR light and a camera modified to reject visible light and see only IR.
Some methods use a plane of light over the surface that shines brightly where it intersects with your finger; these can use LEDs or IR lasers. My method is called Frustrated Total Internal Reflection (FTIR). The LEDs fill the inside of the plexiglass: the light bounces around inside it and doesn't escape.
It does, however, extend a few hundred nanometers past the surface - the so-called evanescent field. When your finger touches the surface, the light is "frustrated" and shines down through the plexiglass to the IR camera. Then you just track it with software and use whatever protocol you want to turn it into mouse events.
To make the effect better, I used a foam paint roller to put a layer of silicone (from a tube at the hardware store) on the back of my projection surface, which is just drafting paper. That way, when you press on the paper, it squishes into the surface of the plexiglass, again "frustrating" the light so it shines down into the camera even more strongly (and it works much better for dragging).
It's pretty cool, but lots of work.
There's another method like mine that uses fancy acrylic plexiglass with microscopic mirrors in it, which increases the evanescent field strength so that you don't have to use a layer of silicone. This would be ideal, but it has a bit of a hover effect and the acrylic is expensive.
I'll post updates if people are interested. You can find almost everything at (www.nuigroup.org).
Fun project. Makes me excited for Windows 8, actually.
Yakoob on 1/3/2012 at 06:36
Quote Posted by zombe
I really, really want to use Bullet too, but I keep postponing it - and will probably keep postponing it as long as possible [...]
Just skim through the official manual; it will give you an idea of how Bullet works and how the integration will need to happen. It does kinda force you into thinking of your physical world in Bullet's terms, but it's not at all a bad system.
Basically, you have a "world" which has "bodies" made of different "shapes." The shapes can be various primitives (spheres, capsules, boxes, cones, etc.), a convex point cloud, a triangle mesh, and so on. The actual bodies can be rigid or soft, with lots of properties you can set, like mass, velocity, and restitution; you can also limit the linear and angular factors, so you can effectively use it as a 2D or 1D simulation, make sure the player and NPCs don't fall forward (rotate only around the y-axis), etc.
So for each of your game objects you just create an associated body made of the associated shapes, and Bullet handles the rest in a (safe) multithreaded environment with its own spatial partitioning structures and algorithms.
You basically have a mirror copy of your game world in Bullet. Whenever a body's position changes, it triggers a callback object you define yourself, which can then update your game object (for rendering and logic). And when your logic changes your game object, simply have it also update the Bullet body (position, velocity, whatever), and the two parallel worlds stay synched! (See the sketch below.)
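A minimal sketch of that setup - the bt* calls are real Bullet API, while GameObject and SyncMotionState are stand-ins for whatever your engine uses:

```cpp
// World/body/shape setup plus the motion-state "callback object" that keeps
// a game object and its Bullet mirror in sync.
#include <btBulletDynamicsCommon.h>

struct GameObject { float x = 0, y = 10, z = 0; };   // your side of the mirror

// Bullet calls this object to read the initial transform, and to push every
// position change back to you - this is the sync mechanism described above.
class SyncMotionState : public btMotionState {
    GameObject& obj;
public:
    explicit SyncMotionState(GameObject& o) : obj(o) {}
    void getWorldTransform(btTransform& t) const override {
        t.setIdentity();
        t.setOrigin(btVector3(obj.x, obj.y, obj.z));
    }
    void setWorldTransform(const btTransform& t) override {
        const btVector3& p = t.getOrigin();          // Bullet moved the body:
        obj.x = p.x(); obj.y = p.y(); obj.z = p.z(); // update the game object
    }
};

int main()
{
    // Standard boilerplate: dispatcher + broadphase + solver -> world.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.8f, 0));

    // One game object mirrored as a rigid body with a sphere shape.
    GameObject ball;
    SyncMotionState motion(ball);
    btSphereShape shape(0.5f);
    btScalar mass = 1.0f;
    btVector3 inertia(0, 0, 0);
    shape.calculateLocalInertia(mass, inertia);
    btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(
        mass, &motion, &shape, inertia));
    // Rotate only around the y-axis, as mentioned for players/NPCs.
    body.setAngularFactor(btVector3(0, 1, 0));
    world.addRigidBody(&body);

    for (int i = 0; i < 60; ++i)
        world.stepSimulation(1.0f / 60.0f);   // ball.y now reflects the fall
}
```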
There are also collision callbacks, possible controllers for NPCs/vehicles (looks early-stage), and ways to query the physical world if you want to do gameplay checks, but I haven't gotten around to those so I can't comment.