No, we’re not yet at Mind Reading

This perhaps overly excited piece in The Economist got my attention. At its root is a series of studies from the past several years in which machine learning has been used to “recognize” the signatures of concepts (such as nouns) from many fMRI scans. Tom Mitchell’s work at Carnegie Mellon comes to mind.
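For readers curious what “recognizing the signature of a concept” amounts to in practice, here is a minimal, hypothetical sketch: train a linear classifier on voxel activation vectors labeled by noun, then check whether held-out scans can be decoded above chance. Everything here (the data, the voxel counts, the classifier choice) is a stand-in for illustration, not a reconstruction of Mitchell’s actual pipeline.

```python
# Illustrative sketch only: a toy version of decoding concepts from fMRI.
# The data, dimensions, and classifier are assumptions for demonstration,
# not the method used in the actual studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_scans, n_voxels = 120, 500          # e.g. 120 scans, 500 voxel features each
concepts = ["banana", "hammer", "house"]

# Stand-in data: each scan is a vector of voxel activations with a
# concept label. Real studies would use preprocessed fMRI volumes.
X = rng.normal(size=(n_scans, n_voxels))
y = rng.choice(concepts, size=n_scans)

# A linear classifier learns a per-concept "signature" over voxels;
# cross-validation estimates how well held-out scans can be decoded.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance ~ {1/len(concepts):.2f})")
```

On random data like this, accuracy hovers around chance; the published studies report above-chance decoding, which is what makes them exciting.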

While these are indeed exciting studies, the notion that we’re somehow at the threshold of ubiquitous “mind reading” and deception detection strikes me as far-fetched. Consider the concept “banana”: it can surely appear both in an individual’s truthful account and in a lie. While we might be able to pick out the brain-activity signature of banana in either case, we would have a great deal of difficulty figuring out which context was the lie.

The ethical, legal and social implications of cognitive neuroscience: intel edition

Jonathan Moreno raises the specter of neurotechnologies being used in the intelligence arena. I’m not at all sure I agree with his sinister take, but it’s worth weighing as these technologies become ubiquitous in society over the coming years. This is what we would definitely call a tripping hazard.

Jim

NPR’s series on Deception

NPR’s Dina Temple-Raston is doing a three-part series on deception (the second part of which aired on today’s Morning Edition). I think it’s excellent and worth paying attention to. I do think, however, that my own distinction between the “lies that we are interested in” and the casual, uninteresting lie is key. Lie detection only makes sense in the context of catching spies, uncovering fraud, and the like. It doesn’t seem fruitful for finding out what your colleagues really think of you.

Jim