By @SimonCocking. Great interview with Luke Dormehl, author of Thinking Machines, reviewed here.

What interested you in choosing this topic?

There’s something, I think, profoundly human about this desire to build thinking machines: to bring life to the lifeless. The ideas trace back at least as far as Greek mythology, and we never seem to have shaken them. It is only now, however, that we’re starting to reach the point where AI is rolling out into the real world. Three things, I suppose, influenced my decision to write this book: the fact that this year marks 60 years since AI was founded as its own official discipline (even though the ideas go further back than that); a desire to help people make sense of this field in a way that is approachable; and a way of exploring a theme which seems to have echoed throughout human history.

Since you wrote the book, has anything changed or happened faster than you expected?

The cutoff date for adding new material to my book was early this year, so the death of Marvin Minsky, the last surviving member of the core group behind the 1956 Dartmouth Conference, which established AI as its own discipline, made it in. In terms of new developments, I’m fortunate enough to cover a lot of them in my day-to-day journalism, writing for Digital Trends, Fast Company and others. I’m not sure anything has caught me off guard in terms of breakthroughs, but the volume of cutting-edge AI projects, used for everything from helping diagnose early-onset medical conditions to more frivolous goals like brewing beer, confirms for me that this was certainly the right time to write a book like this one.

Even as things change rapidly, Ray Kurzweil seems to have revised his predictions of when the Singularity will happen: in an older book of his that I read, he mentioned around 2025, whereas now you mention he says 2045. Do you think this could slip again if it turns out we have underestimated the complexity of the human mind?

There is certainly a temptation among some AI theorists to think that replicating human-level intelligence is basically just about building a brain-scale artificial neural network (the technology used to create an approximation of the brain inside a machine), or about reaching human levels of performance in a number of different sub-disciplines, such as facial recognition, and then stitching them all together. Depending on how one defines the Singularity, I don’t think we’ll be getting there by 2025 or 2045, but I don’t think that negates some astonishing advances in AI during that time. I’m also not convinced the Singularity will be quite so, well, singular. More like a Plurality, really.

Will we ever get to a Singularity? In some ways we seem to be doing everything we can to hasten reaching this point, even though it would not be in our best interests. What are your thoughts?

AI has always been a field with goals that differ from researcher to researcher: whether it’s experimental psychologists wanting to understand how the human brain works or a company like Google wanting to come up with smart ways to sell ads and structure information. There is certainly interest from some people in creating AI that can carry out general intelligence tasks — which is to say, not be confined to one specific domain — but I’m unconvinced this represents the broad approach across all researchers.

What has been the response to your book from the AI community?

My book is really designed to give some context and a “jumping on” point for casual readers hoping to find out more about what AI is, where it’s been, and where it’s off to next. I interviewed a large number of researchers as part of writing it, and they certainly proved very willing to share their insights with me. Since it was finished, I’ve had favourable responses from the people I’ve spoken with working in the industry, and fortunately some nice reviews from the mainstream press as well. So I’m very happy about that.

The whole area of ethics is fascinating and important. The recent fatal Tesla car crash: the robot’s best efforts thwarted by humans, perhaps? Do you have any suggestions on how it might be managed?

It was inevitable that a self-driving car crash was going to happen at some point, and statistically I have no problem saying that by the time autonomous vehicles become mainstream technology, we’ll be much safer in a car driven by a machine than in one driven by another person. With that said, there certainly does need to be investigation into the ethical, and by extension legal, aspects of AI, which have often been woefully ignored. Fortunately, we’re starting to see more of that, both in academia and in public discourse. As AI’s applications continue to broaden, these are conversations we need to be having… and ones your average person needs to be thinking about, too.

Overall, did researching and writing the book leave you hopeful or otherwise about mankind’s future relations with AI?

I’ve always tried to steer clear of the “techno-determinism” that casts us as powerless and machines as driving the Grand Narrative forward. AI will change the world, but it is not a monolithic field in terms of its goals, and its deployment will depend on us. Since we’re not yet at a stage where we can think of AI as being on a level with us, and certainly not above us, I think a more important question than our future relations is how we want AI to work for us now: augmenting us or replacing us? The overwhelming majority of scientists, researchers and other AI practitioners want AI to be something that will extend our lives, make us more creative, provide us with better employment opportunities, and so on. I would subscribe to that view.

Thinking Machines: The Inside Story of Artificial Intelligence by Luke Dormehl, reviewed

