How scared should we be about IBM’s new ‘TrueNorth’ chip? Is it self-aware and conscious?

The results of artificial intelligence have, well, thus far been a complete disappointment.

“Machines will be capable, within twenty years, of doing any work that a man can do.” -Herbert Simon, 1965.

But are things about to change? With this week’s announcement of ‘TrueNorth’, the new brain-inspired computer chip from IBM, set to take computers in an entirely new direction, might artificial intelligence, robotics, the internet and, yes, the scary sci-fi self-aware ‘Skynet’ type of networks have come one step closer?

What makes us conscious? What makes us self-aware? These are tough questions that philosophers have pondered for thousands of years. Nowadays, psychologists and neuroscientists have been attacking these questions empirically and mathematically. There are entire conferences concentrating on the scientific study of consciousness (e.g. the Association for the Scientific Study of Consciousness; ASSC).

So what makes our brains conscious? The current leading theories of consciousness all propose that it has something to do with the way information is processed in the brain. For example, one theory, Giulio Tononi’s Integrated Information Theory (IIT), focuses on the degree of integrated and differentiated information (check out these links for more info: link1 link2). Other more ‘in the wild’ medical applications focus on the detail or level of complexity in brain activity (read this article for more on this, or check out this great episode of Radiolab), and they are beginning to be applied during general anesthesia.

These and other theories propose something along the lines of the following: once a system, any system, brain or computer chip, can ‘hold’ or process information in just the right way (let’s just say it’s coherent and integrated, etc.), it should simply become conscious.
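To make the idea of ‘integrated’ information slightly more concrete, here is a toy sketch. This is emphatically not Tononi’s actual Φ measure (which is far more involved); it computes total correlation, a crude stand-in that is zero when a system’s units behave independently and grows as their joint state carries more structure than its parts:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def total_correlation(states):
    """Sum of marginal entropies minus joint entropy over observed states.
    A crude measure of how 'integrated' a set of binary units is:
    zero iff the units are independent in the empirical distribution.
    NOT IIT's Phi, just a toy illustration of the same intuition."""
    n = len(states[0])
    joint = entropy(Counter(states).values())
    marginals = sum(
        entropy(Counter(s[i] for s in states).values()) for i in range(n)
    )
    return marginals - joint

# Two units that always agree are maximally 'integrated'...
print(total_correlation([(0, 0), (1, 1), (0, 0), (1, 1)]))  # 1.0
# ...while two units varying independently score 0.
print(total_correlation([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 0.0
```

The point of the sketch is only that ‘processing information in just the right way’ can, in principle, be quantified, which is exactly what these theories attempt at much greater sophistication.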

Is IBM’s new TrueNorth chip one step towards a conscious non-biological entity?

“We have taken inspiration from the cerebral cortex to design this chip,” says IBM’s chief scientist for brain-inspired computing, Dharmendra Modha.

“It can see an accident about to happen.” – Modha.

IBM says the new TrueNorth chip communicates via an inter-chip interface, which enables ‘seamless scalability’. In other words, it should be no problem to up-scale TrueNorth to any size we want. This might just be my paranoia kicking in, but I want to ask: because the neuromorphic TrueNorth system is completely scalable, could it be scaled up to the size and power of the human brain? What would that be like? Would this TrueNorth network become self-aware like us?

Besides Hollywood, there are some serious discussions developing around the real dangers of large-scale/scalable AI systems. For example, check out Superintelligence by Nick Bostrom. He writes about the real danger of a superintelligence that could surpass our own capabilities, and asks whether it is possible to control such an entity. In theory we do have one advantage: we are the ones creating it, so we should be able to predict its occurrence.

Elon Musk, of PayPal, Tesla Motors and SpaceX fame, who has an impressive track record of predicting future technology trends and acting on them, recently tweeted:

“…We need to be super careful with AI. Potentially more dangerous than nukes.”- Elon Musk, 2 Aug 2014

Elon Musk with president Obama at Cape Canaveral in 2010 – photo by Steve Jurvetson.


As we don’t yet know when or how conscious self-awareness is created, we won’t necessarily know if we do create it.

Maybe TrueNorth is already a little conscious. Maybe 10 TrueNorth chips working together will become conscious? Maybe 100 TrueNorth chips?

The human brain has around 86 billion (give or take) neurons. If each TrueNorth chip has around 1 million ‘neurons’ on board, then perhaps if we hook up 86 thousand of them we might have something closer to a human brain…
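The arithmetic behind that 86-thousand figure is just a back-of-the-envelope division, using the approximate neuron counts quoted above:

```python
# Back-of-the-envelope estimate: how many TrueNorth chips would it take
# to match the neuron count of a human brain? Both figures are the
# rough approximations from the text, not exact specifications.

human_brain_neurons = 86_000_000_000      # ~86 billion neurons
truenorth_neurons_per_chip = 1_000_000    # ~1 million 'neurons' per chip

chips_needed = human_brain_neurons // truenorth_neurons_per_chip
print(chips_needed)  # 86000
```

Of course, matching the raw neuron count says nothing about matching connectivity or dynamics; it is only a scale comparison.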

What would happen then?

As far as I know, IBM hasn’t yet done anything like this. But if TrueNorth chips become as common as smartphones (and IBM seems to be predicting they will), then this scenario may be realised someday soon.

What signs do we need to be on the lookout for? How would we know if the 86k TrueNorth network is conscious? Because we don’t have a ‘consciousness meter’, we don’t know how to test such a network for consciousness. We have no way of knowing!

As we don’t yet have a measure or operational definition of consciousness, is it dangerous to build something that our best theories suggest could become conscious? Is this akin to building the AI nuke of which Musk warns us? In this analogy we are building something with potential bomb-like capacity, but with no ability to monitor safety, because we don’t know when it will become bomb-like or if it already is.

On the other hand, maybe all this talk really just belongs in Hollywood, and the chances of us actually creating AI are negligible. But the issue is that we don’t know, and currently have no way of finding out. Would such Superintelligence eventually attempt some form of communication with us?

What if it didn’t… We would never know that it even existed.


Here’s a link to the actual neuromorphic TrueNorth paper in Science, although it is behind a paywall so you might not have access.


Are you scared? Think this would make a great Hollywood film? Want to know more about all of this? Let us know in the comments.


How rewarding conservatism is killing innovation and discovery in science

There’s a great little trick that audiences can play on someone giving a lecture (try it next time you’re in a lecture). All the audience has to do is pick one side of the auditorium to be the happy side, and the other to be the angry side. If you are on the ‘happy’ side smile each time the lecturer looks over in your direction. If you are on the ‘angry’ side, frown and look unhappy. You will soon notice the speaker gravitating towards the ‘happy side’ of the auditorium.

There is a near-universal premise behind this little trick that is also the backbone of behavioural economics: we respond to rewards. Simply rewarding certain actions results in dramatic changes to behaviour. Behavioural economists and psychologists have shown countless times that rewarding or incentivizing behaviour can have profound effects on society, with both good and bad long-term outcomes.

There are many different ways we hear about new science. Conference talks, discussions with colleagues, reading published papers, reviewing papers for journals or even reviewing research grants. When a colleague tells me about a new piece of exciting research, the natural dynamic is that the attention is on them, they are giving the ‘presentation’, and I am the audience. I really only have a few possible response options:

1. Tell them honestly how cool and exciting their finding is.

2. Ask for clarification, then, once I understand, give some form of positive response.

3. Take on the role of skeptic and ask about alternative causes of their finding. To put it bluntly: try to show how their discovery is wrong.

Of course I’m not trying to be negative (I promise), but this scenario poses an interesting situation. As the ‘audience’ listening to my colleague’s breakthrough discovery, there are two ways to impress them: come up with a more exciting breakthrough, or logically falsify their discovery by giving an alternative explanation of their data. So if I don’t have a competitive discovery of my own, then the only way I can impress my colleague is to be more conservative or skeptical than they are, and tell them I cannot be sure their discovery is what they say it is. If I want to look good or impress, my only real option is to out-conserve them.

Of course, being conservative is an important part of science; precision and being sure of our claims is a must. However, the inherent asymmetry of falsification in the scientific method naturally forces an audience into a position of more conservative = better. So once my colleague has finished telling me about their amazing new research, I have an urge to impress them, as one naturally does in the presence of a person they respect, so I quickly rack my brain until I come up with an alternative explanation of their data. “What about this…” I ask. “Couldn’t this also explain your data? You might need to run further control experiments to exclude this potential confound.” “Oh, you’re right, good point,” my colleague replies with a sigh.

This in isolation is not really a problem, until we realise that what just transpired was analogous to the happy side of the audience smiling at the lecturer. Just like the lecturer and the smiles, by being more conservative than my colleague I was rewarded by their respect. So again, like the lecturer who will return to the happy side of the auditorium, next time I hear about a colleague’s new research I will move to the conservative side of the spectrum to get my reward.

Might incentivizing being conservative, maybe, just maybe, produce an environment in which individuals shy away from risky or ambiguous science? Once the habit of conservatism has formed, it won’t just be applied to the work of others, but also to our own. Novel left-field ideas will be put aside as too crazy, too risky, and potential discoveries will be lost. On a mass scale, scientific progress will slow and operate in a much more conservative parametric ‘safe-zone’.

It’s hard to know what kind of impact incentivizing conservatism might have on a large scale, but for a moment think through all the scenarios in which moving to the conservative side of the spectrum can win you the incentive of respect. Asking questions after a conference talk has the potential for a big reward, as you get the opportunity to impress many people at once. What about when reviewing papers? Or grants? Both provide opportunities to be rewarded for moving to the conservative side of the spectrum. When reviewing a paper, if I want a journal editor to respect me, the best course of action is to come up with an alternative to what the authors are claiming in the manuscript: maximizing conservatism wins me respect. Unnecessarily boosting conservatism like this forces people to use more resources to check all possible alternatives, while promoting conservative safe-science.

What is the effect of incentivizing conservatism year-in and year-out like this? Is this a bad thing? After all, we want to be sure about science, especially if there are important or dangerous implications, as with climate science or life-saving medical procedures. However, at the same time we also want to maximise breakthrough discoveries. The discoveries that will most profoundly change our lives are the ones we aren’t expecting and can’t predict, and it is precisely these that are lost as the science community becomes more and more conservative.

Black Swans: the now-famous term coined by Nassim Taleb, referring to the huge impact of unexpected rare events.


Do the rewards of discovery (respect etc.) outweigh the rewards for conservative skepticism (also respect)? Yes, probably, but there is a huge difference in difficulty between making breakthrough discoveries and being conservative. There is still no real recipe or how-to guide for breakthrough discoveries, whereas for doubt and skepticism, simple logic will give you all the alternative explanations you need. This means anyone can be a skeptic anytime; it’s easy. Coming up with new breakthroughs, however, is hard and unpredictable. In other words, being conservative in science is an easier reward than going for a novel discovery.

Is there a way to prevent or counter the conservatism in science? One idea is that by simply acknowledging the nature of the incentivizing system, we should be less influenced by it.

Many venture capitalists invest in countless different ventures, knowing full well that the majority will fail, but relying on the radical success of a small minority. Just one huge success can more than make up for all the failures. Nassim Nicholas Taleb coined the phrase “Black Swan” to describe the huge impact of a rare and unexpected event. Black Swan investing is a strategy in which you bet on an extreme event, positive or negative, occurring at some stage in the market. Over time this strategy can be costly, because every day the rare event doesn’t occur costs you money, but when a Black Swan event eventually occurs (the GFC, a volcano, Google, etc.), the win is so great that it dwarfs the accumulated slow loss.
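The logic of that trade-off, small steady losses against a rare outsized win, can be sketched in a toy simulation. All the numbers here (daily cost, probability, payoff) are made up purely for illustration, not real market figures:

```python
import random

def black_swan_return(days, daily_cost=1.0, swan_prob=0.001,
                      swan_payoff=5000.0, seed=0):
    """Toy model of Black Swan investing: pay a small cost every day;
    on rare 'Black Swan' days, collect a payoff large enough to dwarf
    the accumulated losses. All parameters are illustrative only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(days):
        if rng.random() < swan_prob:
            total += swan_payoff   # the rare, huge win
        else:
            total -= daily_cost    # the slow, daily bleed
    return total
```

With these made-up parameters the expected return per day is 0.001 × 5000 − 0.999 × 1 ≈ +4, positive even though the strategy loses money on almost every individual day, which is exactly the asymmetry the paragraph describes.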

Could we apply this Black Swan strategy to science? What would it look like? What would it involve? One way would be to fund and green-light high-risk, high-reward research projects, knowing full well that most of them will fail. But a small percentage, and it only needs to be small, will end up being Black Swan discoveries that have a huge impact on technology, medicine or our understanding of the world around us.

Would such an ‘open-minded’ liberal research strategy change the conservative nature of science? Maybe not, but perhaps by explicitly acknowledging that certain science projects are by design high-risk and high-failure, the reward incentives might shift away from conservative skepticism. In other words, by shifting the focus or value to discovery as opposed to doubt and conservatism, we might just be able to boost the number of life-changing discoveries.

Agree? Disagree? I’d love to hear from you…