demagogue on 21/4/2014 at 07:38
I'll be honest, I'm not very positive about humanity right now. Doing human rights work, you see enough cases of mutilations, extrajudicial killings, and the like that it can grate on you. And my neighbor seems to have anger issues and a phobia of any sound above a few decibels after 9 pm, meaning I got screamed at most of this weekend, unfairly I thought; but even if it were fair, you don't have to scream to make your point. It all added up to put me in a bad mood recently.
So to combat all this negative energy, I'd like to read about more positive things on the horizon. And the kind of positive thing that makes me particularly happy is a technological breakthrough that promises either to really advance our understanding to a higher level or to improve the quality of life for everybody.
I'm thinking about things like hydrogen energy, quantum computers, life on other planets, cybertech & artificial general intelligence (not your dad's question-answering phone bot, but the real deal, human-like intelligence), and so on. This brings me to my contribution:
Artificial General Intelligence
For a lot of reasons, my favorite breakthrough to look forward to is advanced AI, first because unlike other innovations, AI has a spiritual and transhuman aspect to it. I mean it allows us to get a better understanding of our own consciousness, and also deal with non-human intelligence as smart as we are. It's not just about improving some part of our life, but promises to fundamentally change the way we think about ourselves and our world at its spiritual core. I studied philosophy of cognitive science in undergrad, and it's never really left me.
So for a long time, I was very pessimistic that AI would ever get anywhere near human levels. But more recently, I'm starting to become a little more cautiously optimistic. For one, computing power, speed, and storage space are improving to the point that you really can't use them as excuses for not building a good system. I think our main barriers now are issues of architecture and design. I've been following teams developing different AI architectures recently, and while most of them are still just optimizing narrow symbol-juggling systems ("narrow AI", or GOFAI, "good old fashioned AI", which you can't call real intelligence yet), some of them are moving into the more interesting territory of general intelligence, so-called artificial general intelligence (AGI), which I think has real promise over the next generation.
My favorite project is called OpenCog, but there are a few others that are playing with AGI too. They're mostly small, neglected teams, while the Google and IBM projects stay in the limelight with their narrower but very successful approaches. But it somehow gives these fringe teams a kind of romantic flair. Somebody summed it up with a comment on a video where a guy was showing off his very life-like bot in the back of his car: "When you show off a bot in a climate-controlled lab wearing a lab coat, that's commercial R&D. When you show it off in the back of your car somewhere in Hong Kong, that's transhumanism."
At this point, I think having a bot with human-like intelligence is possible within my lifetime. It might still be at the toddler level early on, but I think once they make the conceptual leap, the important step will have been made. But most of all I think it's going to be a spiritual breakthrough too, when people start coming to terms with it. What's fun is we can start thinking about the spiritual angle even now.
So what are some of your favorite technological breakthroughs on the horizon, and what is their state of development? What sorts of things do you think can restore our faith in humanity and the future?
nicked on 21/4/2014 at 07:55
I was reading about Space Elevators the other day, and I think that if we get them up and running, it'll be a huge step toward making space habitation viable.
Essentially you need a very long cable with a counterweight in space so that the centrifugal force of the Earth's rotation keeps it up. You need super-strong, super-tensile materials to build it and be able to withstand impact from meteors and space junk, and the journey to the top would still take about a week at 200 km per hour. But, once it was in place, you'd be able to shift stuff into space for about 5% of the cost of a rocket, and therefore make constructing more space elevators that much easier. Then what - space habitats, moon bases, etc. etc.
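A quick sanity check on that climb time, assuming the climber rides all the way up to geostationary altitude (the altitude and speed here are my assumptions, not figures from any specific proposal):

```python
# Back-of-envelope: how long does the climb take at a steady climber speed?
GEO_ALTITUDE_KM = 35_786   # geostationary altitude above the equator
CLIMB_SPEED_KMH = 200      # assumed climber speed

hours = GEO_ALTITUDE_KM / CLIMB_SPEED_KMH
days = hours / 24
print(f"Climb to GEO: {hours:.0f} hours, roughly {days:.1f} days")
```

At 200 km/h it works out to roughly a week; shorter quoted figures usually assume a faster climber or a stop well below geostationary height.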
Since it seems unlikely that anyone's gonna come up with FTL travel or wormholes any time soon, and that trip to Mars that you can never return from just seems like a recipe for a sci-fi horror film, this seems to me like the most promising gateway into affordable space colonisation.
Mr.Duck on 21/4/2014 at 10:57
Tell your neighbor to eat a bent walrus dick.
If not, I'll offer a hug.
:3
faetal on 21/4/2014 at 11:29
I'm not sure creating human-like AI should be an apex goal since human intelligence is highly biased towards the biological needs of one species. Perhaps an AI which bypassed as many cognitive biases as possible would be the most use. That said, it would be very difficult to determine if it had bypassed them successfully, since we'd be the ones calibrating it.
demagogue on 21/4/2014 at 12:33
Thanks Duck. Not sure I want to ever talk to him again, so I'd rather have the hug. :snuggs:
@Faetal, that's a good point, and I think it's probably the consensus view too. Human-level shouldn't mean human-like, and it'd be much better to, e.g., tune its architecture in ways that take advantage of modern computer processing (not necessarily similar to neural architecture), or give it reasoning abilities we don't have; and there's no reason to purposely give it misleading biases.
I personally think it should still have an affective (emotion) system not too different from humans just to be able to understand emotional language, and because I tend to think even the most "logical" of thinking is still steeped in emotion-driven thinking. I'd be happy to see a proliferation of cognitive variety actually.
I was reading recently that about 100-200K years ago, the earth was home to several different intelligent Homo species (us, Neanderthals, Denisovans, and at least one more they haven't found fossils for yet but that has left a DNA footprint), and I was thinking how fascinating such a world must have been, with so many different intelligent species around. Having a variety of AGI systems might be like that.
faetal on 21/4/2014 at 12:57
I think the best we can hope for in the short term as far as AI goes is something which doesn't even calculate in the same way as the brain, since neurological processing is very different to zeroes and ones and relies heavily on a stupendous array of different factors which determine the origin, routing, and ultimate fate of any input. I think a biological style AI would require a different kind of computing. Perhaps there are things which could be done in quantum computing which could ape biological computation, but I don't know enough about it. All I do know (new information notwithstanding) is that the speed and precision of neurological processing has been mostly determined by billions of years of evolution rather than a specific blueprint, so getting something at all similar is an astronomically complex task.
In my opinion, the best way to get something resembling a sentient AI would be to work on machine learning algorithms and see what the AI becomes by way of feedback into itself.
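A toy illustration of what "feedback into itself" means in machine-learning terms (my own construction, not from any project in this thread): a learner whose behavior is shaped entirely by reward signals rather than by hand-written rules.

```python
import random

random.seed(0)

# Two possible actions with hidden payout rates the learner doesn't know.
true_payout = {"a": 0.2, "b": 0.8}
estimates = {"a": 0.0, "b": 0.0}
counts = {"a": 0, "b": 0}

for _ in range(2000):
    # Mostly exploit the current best estimate, occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_payout[action] else 0
    # The feedback step: the outcome updates the learner's own value estimates,
    # which in turn change its future behavior.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned estimates:", estimates)
```

Nothing in the code says "prefer b"; the preference emerges from the loop of acting, observing, and updating, which is the skeleton of the idea scaled down to two actions.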
Phatose on 21/4/2014 at 14:10
If we were to create a human-level AI that wasn't human-like, would we even be able to recognize it as such? There's plenty of interesting philosophy of mind out there, but at the end of the day people still recognize other intelligences by identifying them as human. We make our gods human-like so we can identify with them, and sci-fi shows a similar failure of imagination. Fictional aliens who are truly alien are incredibly rare, and I wonder if this isn't because we only recognize intelligence by its similarity to us.
Could people actually manage to tell the difference between a human-level but completely alien intelligence from an arbitrarily complex system?
faetal on 21/4/2014 at 14:47
We can recognise quirks, behaviour etc as being cat-like, dog-like, wasp-like, fish-like etc.. to varying extents. If a completely anodyne AI came along, and we could interact with it, I can imagine we might find it "AI-like", for want of a better phrase. I think overall it's difficult to know how much we do or don't understand about the minds of things which aren't us since we'll always use ourselves as the baseline comparator.
demagogue on 21/4/2014 at 22:32
The unforgiving rules of scarcity and need (aka supply & demand) mean any sophisticated AI worth its salt will have to be at least economically rational in its behavior, and that would be recognizable. Also, probably the main design focus is getting it to learn and speak natural (human) language, so even if it has novel thoughts, at least it will have a way to try to explain them to us in our own language... And of course we'll have the source code, so we can get a rough idea of what it's talking about from the inside.
On the architecture side, what I'm finding interesting is that different architectures can actually "translate" into each other (they're all Turing complete), but not all of them with the same efficiency. The two big paradigms in AI are symbolic (symbol-pushing) vs. sub-symbolic (nodes with links and weights) architectures. But the catch is you could get a symbolic language to do sub-symbolic processing, and you could get a connectionist network (which is usually sub-symbolic) to do symbolic processing; for that matter you can get both of them to do hybrids, which is all the rage these days. But the point is you could rig any Turing complete system to do the work of any other kind. The real issue is finding out what can work most efficiently with the tools you have.
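To make the second direction concrete, here's a tiny hand-wired example (my own toy, not from any of the projects above) of a "sub-symbolic" network of nodes, links, and weights doing a "symbolic" job, namely computing logical XOR:

```python
def unit(inputs, weights, threshold):
    """A single threshold unit: fires (1) iff the weighted input sum clears the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    # Hidden layer: an OR-like unit and an AND-like unit over the same two inputs.
    h_or = unit([a, b], [1, 1], threshold=1)
    h_and = unit([a, b], [1, 1], threshold=2)
    # Output unit: "OR but not AND", which is exactly XOR, expressed purely
    # as weights on the hidden units rather than as an explicit logic rule.
    return unit([h_or, h_and], [1, -1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Here the weights happen to be set by hand, but the same structure could in principle be learned, which is the translation-between-paradigms point: the logic lives in the link weights, not in any symbol-pushing rule.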
One of the keys for me is that knowledge be grounded in perception, so the word "apple" is linked to, and about, the red round image it sees (the visual data set it's getting), the texture and weight when you hold it (kinematic & tactile data set), etc., and isn't just an abstract symbol linked to other abstract symbols. It's why I think just running a genetic algorithm forever won't do much; you need the senses, along with social & environmental learning to direct it.
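A cartoon of the contrast, with data structures I made up purely for illustration: a grounded symbol points at perceptual feature summaries, while an ungrounded one only points at more symbols.

```python
from dataclasses import dataclass, field

@dataclass
class GroundedSymbol:
    """A symbol whose meaning is anchored to sensory data, one entry per modality."""
    name: str
    percepts: dict = field(default_factory=dict)

    def ground(self, modality, features):
        self.percepts[modality] = features

apple = GroundedSymbol("apple")
apple.ground("vision", {"hue": "red", "shape": "round"})
apple.ground("touch", {"texture": "smooth", "weight_g": 150})

# The ungrounded alternative: "apple" defined only in terms of other symbols,
# which are themselves defined only in terms of other symbols, and so on.
ungrounded = {"apple": ["fruit", "red", "food"]}
```

The real systems use learned feature vectors rather than little dictionaries, of course, but the structural point is the same: the symbol's links bottom out in perception somewhere rather than circling among symbols forever.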
When they make that leap, though, that's when I think things will start getting interesting. And I don't think we're far from at least that little milestone (linking knowledge to perception).
faetal on 21/4/2014 at 23:35
I'm thinking to get a half decent AI, we'll have to teach it to improve itself and thus evolve an emergent personality based on what it deems important. Sure, we can create a pseudo 'ecological niche' for it, by telling it that profitability is worth x amount of importance, provided yadayada Asimov etc... But I think after enough time has passed for the AI to be decent, the source code will look like an alien language to us.
I highly doubt that we'll be able to program a decent AI in a granular fashion.