Renzatic on 15/1/2019 at 06:41
Well, Day 5 is the first one I've done that just turned out bleh as hell. Tried to go with a different style, and it didn't work out. I'm gonna chalk it up as tablet practice, and an overall learning experience.
Always do your texturing first before doing your cracks. Otherwise things get messy.
[ATTACH=CONFIG]2529[/ATTACH]
demagogue on 15/1/2019 at 08:10
I guess this can count. I've decided to make this the year I make my chat bot. I've been taking notes on making a conscious, reflective robot since the mid-90s, and each year the plan gets more and more concrete. Now I think I finally have enough operationalizable ideas to know how I want to do it, and I have the tools to do it (online tutorials, TensorFlow, other open source components & example code I can use or look at, etc.).
To give an idea of what I'm investing in to start this... Right now I'm re-taking the classes (online lectures) for Linear Algebra (matrix math), Advanced Calculus (differential & partial differential equations), and Probability (along with some of the behavioral economics / game theory that's associated with it: Nash equilibria, Markov models, ML & Bayesian and Boltzmann learning, etc.), the math I definitely know I'll need. I'm reading up on the equations & methods for voice synthesis (the legit kind where you model air pressure fronts through vocal tract geometry and filters via mouth organ articulators like the glottis, tongue, lips, teeth, etc. ... none of that concatenation bullshit Alexa et al. use. My bot may not sound fluent, but it'll be mumbling through with its own voice, dammit). After this I'll have to learn how to model and parameterize sound data so it can recognize words. And I'm going to be coding this thing in Python, so I'm learning that, and via TensorFlow, so learning that too. At the start, it'll just be able to hear sounds, and its only actions will be controlling its vocal tract muscles (plus a similar "inner monologue" version). Start small.
The first layer is going to be a kind of mirror neuron system to bootstrap the thing. Basically it's going to be working up a neural net so that when the bot hears a spoken word, it can coordinate the vocal tract organs to repeat what it hears as best it can, so I can make a neural map of phoneme-to-articulation links. Then that's going to be the base representation for its stream of consciousness "thought". In other words, it's not going to be computing thoughts by predicate calculus and converting them into speech like classic chat bots. From its perspective, it's going to be hearing itself "talking to itself," under the illusion that the words make sense, more like we do. I'm just starting with communicative grunt signaling though, like between a baby & its mother. It's going to be a long road before I get to even simple words like mama.
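To make the mirror-map idea concrete, here's a toy Python/NumPy sketch of that babbling loop. Everything in it is a placeholder: the "vocal tract" is a stand-in linear map (the real one would be the waveguide synthesizer), and the delta-rule update stands in for a proper neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "vocal tract": a fixed linear map from articulator settings
# (glottis, tongue, lips, jaw) to an acoustic feature vector. The learner
# never sees this map directly, only the sounds it produces.
N_ARTICULATORS, N_ACOUSTIC = 4, 6
TRUE_TRACT = rng.normal(size=(N_ACOUSTic := N_ACOUSTIC, N_ARTICULATORS)) if False else rng.normal(size=(N_ACOUSTIC, N_ARTICULATORS))

def speak(articulation):
    """The sound the bot hears itself make for a given articulator setting."""
    return TRUE_TRACT @ articulation

# Mirror layer: learn the inverse map (sound -> articulation) by babbling.
W = np.zeros((N_ARTICULATORS, N_ACOUSTIC))
lr = 0.01
for _ in range(20000):
    babble = rng.normal(size=N_ARTICULATORS)   # random motor command
    heard = speak(babble)                      # auditory feedback
    err = W @ heard - babble                   # how far off the inverse guess is
    W -= lr * np.outer(err, heard)             # delta-rule update

# Imitation: hear a novel sound, recover an articulation, reproduce it.
target_sound = speak(rng.normal(size=N_ARTICULATORS))
imitated_sound = speak(W @ target_sound)
print(np.max(np.abs(imitated_sound - target_sound)))  # small once converged
```

The loop is the point, not the model: babble, listen to yourself, fit the sound-to-articulation inverse, and you have the phoneme-to-articulation map to build on.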
Well, I'm still early in the preparation phase. If I ever get to a point where I feel like I'm making something like progress on the project, I think I'll make dev blog posts on it and link them here. The task appears so vast right now, even the minuscule starting tasks... we'll see if I can get traction on it.
Sulphur on 15/1/2019 at 08:36
That's ambitious, but I presume the purpose of starting off at the very atomic level is to learn even more from doing it, right? Sounds similar to those early AI stories about neural net projects (and, uh, HAL from 2001 - the book version of HAL, that is). Parametrising speech at that deep a level would also require you to have some amount of general bioengineering knowledge (assuming you're modelling virtual articulators to help control virtual air flow, resonances, etc.). And then, once you've taught it to do a reasonable facsimile of human speech with a pronunciation dictionary of some description, natural language processing and keyword interpretation before moving on to some semblance of cognition? Those are each incredibly large pieces to bite off, but I suspect you're going to learn an endless wad of cool things in the process.
demagogue on 15/1/2019 at 10:26
Yeah, to be honest the primary purpose is to write a book or something about my experiences and what I learned trying to make the thing, where I ran into roadblocks and where (I hope) I found little successes, etc--and, yes, just learning from the project for its own sake--more than my expectation for it doing much itself. I'm wading into waters full of the shipwrecked hulls of the projects that came before me & managing my expectations accordingly. But hey, you never know. That's the beauty of science & engineering. You can never really know what to expect, and sometimes you just have to jump in, try something, and see what shakes out, good or bad. I hope some kind of enlightenment shakes out of it anyway.
Also, I have a really particular way of thinking about language I've picked up that I haven't seen modeled yet (AFAIK, although some parts have been). I just want to see some form of my idea put out there in the world, so I can point to it and say my idea looks something like that, and it probably has to be me or it won't be done.
Edit: A lot of the code for vocal tract synthesis & articulation is already in an app called Pink Trombone (https://dood.al/pinktrombone/). You can see the code just with "show source", & it cites its own tutorials. For now I'm just tweaking the articulators to something a bit more muscle-like but still simple, and tacking on bot control of them.
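For anyone curious what's under the hood there: it's a Kelly-Lochbaum-style digital waveguide, where the tract is a chain of tube sections and left/right traveling pressure waves partially reflect wherever the cross-sectional area changes. Here's a toy Python/NumPy version of that idea; the area profile, reflection constants, and pulse source are made-up illustration values, not Pink Trombone's actual numbers.

```python
import numpy as np

def reflections(areas):
    """Kelly-Lochbaum reflection coefficients between adjacent tube sections."""
    a = np.asarray(areas, dtype=float)
    return (a[:-1] - a[1:]) / (a[:-1] + a[1:])

def run_tract(areas, source, glottal_refl=0.75, lip_refl=-0.85):
    """Propagate right/left traveling waves through the tube, one sample per step."""
    n = len(areas)
    k = reflections(areas)
    right = np.zeros(n)   # wave heading toward the lips
    left = np.zeros(n)    # wave heading back toward the glottis
    out = []
    for s in source:
        new_right = np.empty(n)
        new_left = np.empty(n)
        new_right[0] = left[0] * glottal_refl + s   # glottis end: reflect + excite
        for i in range(1, n):                       # scattering at each junction
            w = k[i - 1] * (right[i - 1] + left[i])
            new_right[i] = right[i - 1] - w
            new_left[i - 1] = left[i] + w
        new_left[n - 1] = right[n - 1] * lip_refl   # lip end: mostly inverted
        out.append(right[n - 1] * (1 + lip_refl))   # the rest radiates as sound
        right, left = new_right, new_left
    return np.array(out)

# Crude "ah"-like area profile (narrow near the glottis, open mouth) driven
# by a bare glottal pulse train.
areas = np.concatenate([np.full(8, 0.6), np.full(8, 2.5)])
t = np.arange(2000)
pulses = (t % 100 == 0).astype(float)
audio = run_tract(areas, pulses)
```

Change the area profile and you change the vowel; that's the knob the articulator "muscles" would be driving.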
Judith on 15/1/2019 at 20:39
Renz, IMO you're doing the same thing most of us are guilty of: distracting yourself with tiny little things and half-finished exercises instead of committing to one project and finishing it. I'd rather see new shots from that house you've been making ;)
Renzatic on 15/1/2019 at 21:59
Oh, I'm on the same project. I'm just banging things together to create the look and feel that I like.
Like that house, for instance? I think the house looks great. It's about what I had in mind. The problem is, surrounding it with more realistic foliage not only clashed with the style I already had, but getting said surroundings dense and foresty-looking took hundreds of millions of polygons, which ran like crap and ate up all the RAM in my computer. So I decided to step things back a bit, thinking it'd be better to use much lower poly game-style trees. When I started looking for good examples of those, I came across a few shots showing off that handpainted texture look. I liked it, thought it'd fit in better with my overall intended style, so...
...here I am. :P
Edit: Here's an example of one of the more realistic trees I made. The tree itself looks fine, but it clashes with my more cartoon style, and, well, it's 6 million polygons. Put 30 of them into a scene, and holy shit!
Admittedly, I could probably knock a good 5 million polys off it by using billboard branches and leaves for its extremities, but that doesn't get around the fact that I still don't like the realistic look. I actually liked the low poly trees I had in my original rendering a while back. I just want more detail. The hand-painted look provides that.
[ATTACH=CONFIG]2531[/ATTACH]
Judith on 16/1/2019 at 08:40
5 million :O Nah, it's no use for a background prop. Bake those leaves into cards, even some branches too. SpeedTree would be a nice thing to have here, do they even have a software package for freelance/indie artists these days? This thing used to generate like 2k tris models that looked pretty good.
Renzatic on 16/1/2019 at 23:48
Man, I'd love to have Speedtree. They do provide a deal through UE4 where you can rent it for $30ish a month, plus buy assets off their store for X amount of dollars.
I've considered it, but really, I think it's more advantageous for me to continue on my current course. I like the style more, and painting these textures could, in an indirect way, help me on my way to sculpting, since it's teaching me brush control, and how to leverage pressure sensitivity better. Plus, it's just a nice skill to have.
And speaking of which, here's Day 6 of my great big texture painting push. I finished most of this one last night, but wanted to touch up a few bits and pieces before showing it off here. So without further ado...MOAR WOOD FLOOR!
[ATTACH=CONFIG]2532[/ATTACH]
Judith on 17/1/2019 at 04:41
It looks like SpeedTree is either 19.99 USD a month (Unreal/Unity) or free for the Lumberyard engine. Never used that one, but ST always had export options, like FBX or OBJ, so it should be usable in any modeling software or engine. I wouldn't mind a small subscription fee, but that's the price of the whole Substance suite. I'd have to make trees all the time to make it worth it ;)
Renzatic on 17/1/2019 at 06:20
If I remember correctly, the sub model doesn't allow you to export your trees out to anything other than the engine they were created in. I think you have to buy the full package to get that flexibility.
Though on the plus side, you're allowed access to everything you make, even if you cancel your subscription. So you could rent it for a month just to make a metric shit ton of trees, and still have them forever.