Getting at meaning is hard

Thanks to a friend, I recently stumbled across this article by Melanie Mitchell in the New York Times on machine learning and its limitations. It highlights that despite the huge advances, especially in deep learning, machines are still pretty far from ‘understanding’ things the way we do. They make mistakes we never would, are susceptible to small changes in the input, and are pretty bad at generalisation – they need huge amounts of data to become ‘clever’, and even then their cleverness is rather limited: too limited to be called (general) intelligence, and more akin to very specialised perception, as Gary Marcus argues in his article on Medium.

Going from perception to understanding and eventually some sort of meaning (for lack of a better term) is hard – that’s what I learnt in Neuroscience too. Describing the neural activity of some cells or the abundance of a protein is easy, but understanding what this means for the whole process of cognition is pretty damn hard. I wrote earlier about the need for more behaviour in Neuroscience, and I think this all goes along the same lines. It is now possible to image the activity of all (!) neurons in a zebrafish larva, but we are still far from understanding its brain – what it does, why, and how. Maybe, somewhat counterintuitively, it is even the ‘how’ – the implementation – that we are closest to understanding. But what these implementations are good for, and why they are the way they are – very unclear.

And just today I read another article (in German), once more criticising the Blue Brain/Human Brain Project, that to me seems to go along the same lines: even if the project succeeds in modelling, say, a whole mouse brain, that would not automatically imply that we understand it. In fact, I guess it would be just another black box, like the real brain, that we do not understand (though of course with the advantage that some ‘experiments’ can be carried out much more easily).

I do not want to disparage simulation studies, or deep learning – I think they are necessary for progress. I just don’t think they will take us all the way by themselves. This stuff is hard, after all – otherwise we would have solved it long ago!

Any ideas of how to get at real understanding/meaning/general intelligence? Drop me a line 🙂

https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695
