How does the new version of Iris.ai work?


With our version 2.0 launch, Iris.AI has officially become a toddler.

Gone are the days when baby Iris.AI could only make sense of the vast, fascinating world of science through those really cool TED Talks. She can now also read the abstract of pretty much any English-language scientific paper — a great achievement, no doubt!

 

In what ways has she grown, you might wonder?

Well, there are several critical aspects to her development worth going over.

We wanted young Iris.AI to learn how to instantly map out the research landscape around a scientific text given as input. She is, after all, an aspiring AI science assistant, developing ambitious capabilities to save academic and industrial researchers hours of manual search, work that otherwise requires hard-to-come-by domain expertise in taxonomies and vocabulary.

Let’s look at her brain and how it has evolved since version 1.0.  

Iris.AI now addresses natural language processing through the in-house implementation of a novel neural model. There are three aspects to that:

- On the AI front, Iris.AI now performs non-semantic neural topic modelling, replacing our previous implementation of LDA. Given a user input, she generates a concept hierarchy flexibly tailored to that particular input.
- In terms of systems architecture, Iris.AI does her tasks using a relational database with a Python-based API platform and an HTML5/CSS3 client.
- And in terms of learning, Iris.AI now combines unsupervised learning derived from running models like TF-IDF and Word2Vec with a supervised input layer put together by our wonderful community of AI Trainers, all integrated into our Neural Topic Modelling algorithm (a rough sketch of the unsupervised part follows below).
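To make the unsupervised part of that last point a little more concrete, here is a minimal, purely illustrative Python sketch. It is not Iris.AI's Neural Topic Modelling algorithm and it leaves out the supervised AI Trainer layer entirely; it just shows how TF-IDF can surface the salient terms of an input abstract and how Word2Vec neighbours can expand those terms into a rough two-level concept map. The libraries (scikit-learn and gensim), the toy corpus and all names are our own assumptions for the example.

```python
# Minimal sketch (not Iris.AI's actual algorithm): surface salient terms of an
# input abstract with TF-IDF, then expand each term with Word2Vec neighbours
# learned from a small background corpus of abstracts.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus; a real system would use many thousands of abstracts.
corpus = [
    "neural networks learn distributed representations of words",
    "topic models discover latent themes in document collections",
    "word embeddings capture semantic similarity between terms",
    "latent dirichlet allocation is a generative probabilistic topic model",
]
input_abstract = "we propose a neural topic model based on word embeddings"

# Unsupervised step A: TF-IDF picks the most salient terms of the input.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(corpus + [input_abstract])
scores = vectorizer.transform([input_abstract]).toarray()[0]
terms = vectorizer.get_feature_names_out()
top_terms = [t for t, s in sorted(zip(terms, scores), key=lambda p: -p[1]) if s > 0][:5]

# Unsupervised step B: Word2Vec embeddings trained on the same corpus expand
# each salient term with its nearest neighbours, giving a crude concept map.
sentences = [doc.split() for doc in corpus + [input_abstract]]
w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50, seed=1)

concept_map = {
    term: [w for w, _ in w2v.wv.most_similar(term, topn=3)]
    for term in top_terms
    if term in w2v.wv
}
for concept, related in concept_map.items():
    print(f"{concept}: {related}")
```

In Iris.AI's actual pipeline, signals like these are combined with the supervised input layer contributed by the AI Trainer community rather than used raw.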

So how much better is Iris.AI performing in terms of extracting concepts, modelling topics and matching papers? Keeping a cool head, it is still too early to tell, but we are very excited about the results from our first Scithon run in Gothenburg last week.

 

What is coming next, in terms of tech developments?

From an AI technology point of view, we will strengthen the current models by shaping them as close as possible to human behavior using state-of-the-art neural models. Looking at systems architecture, we will move to a Spark framework with a graph database. And from an AI learning perspective, we will introduce deep learning with reinforcement, plus semi-supervised learning and cutting-edge annotation techniques at the disposal of our AI Trainers.
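We have not spelled out the planned Spark architecture here, but as a purely hypothetical sketch (assuming PySpark with its MLlib feature transformers, made-up column names and toy data), a distributed pass over paper abstracts could look roughly like this:

```python
# Illustrative only: a minimal PySpark job computing TF-IDF features over
# abstracts, the kind of batch step a Spark-based architecture might run.
from pyspark.ml.feature import IDF, HashingTF, Tokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("abstract-tfidf-sketch").getOrCreate()

# Hypothetical toy data; a real deployment would read abstracts from storage.
abstracts = spark.createDataFrame(
    [
        (1, "neural topic models for scientific literature"),
        (2, "graph databases for storing concept hierarchies"),
    ],
    ["paper_id", "abstract"],
)

# Tokenise, hash term counts, then weight by inverse document frequency.
tokens = Tokenizer(inputCol="abstract", outputCol="words").transform(abstracts)
tf = HashingTF(inputCol="words", outputCol="tf", numFeatures=1 << 12).transform(tokens)
idf_model = IDF(inputCol="tf", outputCol="features").fit(tf)
features = idf_model.transform(tf)

features.select("paper_id", "features").show(truncate=False)
spark.stop()
```

The appeal of a setup like this is that the same job scales from a handful of abstracts to a full corpus without changing the code.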

So stay tuned for more news around our next Scithons and the plans to grow our AI Trainer community. And please do not hesitate to send any feedback our way. We’d love to hear from you!