From our own correspondent: IJCAI / ECAI 2018, Stockholm
I found the AI community in something of a reflective mood at IJCAI this year. Clearly, the field is enjoying an enormous surge in popularity, and this new-found success has led to what I might characterise as a degree of soul-searching. The theme of the conference was “the evolution of the contours of AI”, and there were several talks on the social utility and ethical dimensions of the field. The Moral Machine experiment from Jean-Francois Bonnefon was one such highlight - essentially a citizen-science version of the classic trolley problem, which gave compelling evidence that moral frameworks are shaped by cultural norms and prevailing social conditions. Naturally, there were plenty of talks on applications of deep learning, including some interesting biomedical ones. A particularly interesting technical stream for me was the relation of planning to various other subfields, including robotics, knowledge representation and uncertainty.
Of particular interest were the talks on bridging the gap between learning and knowledge-based methods. For instance, Facebook’s Yann LeCun pointed out that although learning methods have shown some impressive results in the last few years, they are very inefficient compared to animals, many of which can copy a behaviour from a single example. On a broadly similar note, Hector Geffner distinguished learners from solvers, drawing an analogy with Kahneman’s System 1 / System 2 model of human cognition. Learners are quick but (presently) need huge amounts of training data. Solvers, by comparison, need thinking time and an explicit model of the domain, but no data.
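To make the learners/solvers contrast concrete, here is a minimal sketch of my own (toy problems and function names are my invention, not anything presented at the conference): a solver plans over an explicit model of its domain with no training data at all, while a learner needs labelled examples before it can predict anything.

```python
from collections import deque

# --- Solver: breadth-first search over an explicit state model ---
def solve(start, goal, successors):
    """Plan a path using only the domain model (a successor function)."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy 1-D world: states 0..9, the agent can step left or right.
successors = lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 9]
plan = solve(0, 4, successors)  # no training data needed, only the model

# --- Learner: a nearest-neighbour predictor needing labelled examples ---
def learn_predict(examples, query):
    """Predict a label from (x, label) training pairs by nearest x."""
    return min(examples, key=lambda e: abs(e[0] - query))[1]

examples = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
label = learn_predict(examples, 7)  # useless without the examples
```

The solver is slow but data-free and exact; the learner answers instantly once trained, but its competence extends no further than the examples it has seen - a crude echo of the System 1 / System 2 analogy.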
Closer to my own heart was the excellent AI for Synbio workshop. The aim was to generate collaborative debate between AI researchers and synthetic biologists, although sadly the AI people were very thin on the ground. Nonetheless, it was an excellent day, with solid talks on the use of model-guided experimental optimisation for identifying cancer cells (Kobe Benenson), synthetic pathway design (Anil Wipat, Natalio Krasnogor) and an interesting application of deep learning to modelling transcriptome/proteome relationships (Hector Garcia Martin). Of course there was the obligatory discussion of ethical and social issues, with talks on threat detection from Mikhail Wolfson and Fusun Yaman, as well as an interesting survey of the area from Kenneth Taylor, including one paper describing three kinds of synthetic biologist - the epistemics, pragmatists and engineers - who seek respectively to learn about, make use of and improve the practice of biology.
In line with the reflective mood of the conference as a whole, I found myself gathering some thoughts of my own on AI for synbio. Though disappointing, the modest interest in synbio from the AI community at this stage is not a big surprise: biology is very tough for computational methods, as anyone who has worked at the sharp end of bioinformatics knows only too well. We don’t yet understand biology nearly well enough - or even know enough about it - to do much with the approaches characterised as ‘solvers’ above.
Learners, being inherently statistical, have much more near-term potential. In reality, though, they don’t currently do well outside a few fairly specific domains, because in many areas it is a real struggle to assemble the deep, broad and well-annotated data needed to train a good predictor. The potential of applying AI / ML to biology is genuinely huge, but realising it will take a fundamental shift in the way we work with biological systems - one that allows us to capture every nuance of an experiment with minimal effort - to really unlock the power of the two together.