How exactly does Iris.AI work?

How exactly does Iris.AI do it all? Iris.AI is our baby and she is learning as fast as she can. But how does she do it?

A lot of people have asked us about the process she follows when linking open access science to any TED talk we ask her about. The short answer is simple: she has been designed to learn like a human baby – using the capacities of her algorithmic brain!

What does this mean exactly? Well, to answer that we need to get into more details. First of all, Iris.AI has very large storage and processing capacities. She can read the transcripts of all the TED talks to date in no time.

While doing this reading, Iris.AI performs an elaborate frequency analysis over the text. As a first step she tries to make sense of the different words she encounters, to figure out which are the most important ones – the ones she needs to pay closer attention to. This frequency analysis is more sophisticated than mere word counts: it analyzes the dynamics of the words in the text and their context. Iris.AI can do, and in fact does, quite a bit better than simple counting.
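The post does not name the exact weighting scheme Iris.AI uses, but a classic way to go beyond raw word counts is TF-IDF, which boosts words that are frequent in one talk yet rare across the whole collection. A minimal sketch, with made-up example talks:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each word by its frequency within a talk, discounted by
    how many talks contain it -- ubiquitous words score near zero."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: (c / len(doc)) * math.log(n / df[w])
                       for w, c in tf.items()})
    return scores

talks = [
    "the electric charge moves through the circuit".split(),
    "the chemical charge of the solution changes".split(),
    "the speaker walks onto the stage".split(),
]
weights = tf_idf(talks)
# "the" appears in every talk, so log(n / df) = log(1) = 0 for it,
# while talk-specific words like "circuit" keep a positive score
```

Words that appear in every transcript (like "the") get zero weight, which is one simple way of deciding which words deserve closer attention.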

Once the frequency analysis is performed, Iris.AI does what her machine-learning teachers refer to as feature extraction: this process includes grouping the words into clusters to find contextually similar groups of words. She then uses those clusters to work out what the words mean in this context (e.g. the word charge implies something different in electrical engineering than it does in chemistry).

Throughout this process she also looks at synonyms to expand the categories and get a broader understanding of the topic – always within the same context. She also performs filtering to disregard words that are irrelevant to the context.

As a next step, Iris.AI runs a generalization over the entire set of TED talks – she has been lucky enough to be exposed to this privileged body of knowledge to form her first notions of the world! With this exercise she finds a more precise definition of each word in its respective context.

Once that process is completed, Iris.AI organizes the concepts in hierarchies, to be able to more easily grasp and represent the context to the user communicating with her. It is important to note that our baby AI creates flexible hierarchies – not humanly pre-built ones – expressing patterns across different research disciplines that she sees from her very own, direct experience.
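One way a hierarchy can emerge from data rather than being hand-built is a subsumption rule: nest concept A under concept B when A only ever appears in talks where B also appears. This is a toy stand-in for whatever Iris.AI actually does, with invented concepts and talk IDs:

```python
def build_hierarchy(concept_docs):
    """Attach each concept to the narrowest other concept whose set of
    talks strictly contains its own (a toy subsumption rule).
    A parent of None marks a top-level concept."""
    parents = {}
    for c, docs in concept_docs.items():
        best = None
        for other, odocs in concept_docs.items():
            if other != c and docs < odocs:  # strict subset of talks
                if best is None or len(odocs) < len(concept_docs[best]):
                    best = other
        parents[c] = best
    return parents

# Hypothetical concepts mapped to the talk IDs they occur in
concept_docs = {
    "energy":    {1, 2, 3, 4},
    "batteries": {1, 2},
    "lithium":   {1},
}
parents = build_hierarchy(concept_docs)
# "lithium" nests under "batteries", which nests under "energy"
```

Because the nesting is recomputed from co-occurrence, the same concept can sit under different parents as the corpus grows – which is the flexibility the post describes.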

Lastly, Iris.AI structures the results of her thinking and presents them to users through a particular type of Voronoi treemap. This data visualization approach displays hierarchical data by partitioning a polygon continuum. The polygon areas are proportional to the relative weights of their respective nodes.
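A true Voronoi treemap requires an iterative power-diagram computation, but the core rule – each node gets an area proportional to its weight – can be shown with a much simpler rectangular slice layout. This is a simplified stand-in, not Iris.AI's actual renderer:

```python
def slice_treemap(weights, x=0.0, y=0.0, w=1.0, h=1.0):
    """Partition the rectangle (x, y, w, h) into vertical slices whose
    areas are proportional to the node weights. Voronoi treemaps
    partition a polygon instead, but the area rule is the same."""
    total = sum(weights.values())
    rects, offset = {}, 0.0
    for name, wt in weights.items():
        width = w * wt / total
        rects[name] = (x + offset, y, width, h)
        offset += width
    return rects

# Hypothetical concept weights inside a 100 x 50 canvas
layout = slice_treemap({"physics": 2, "biology": 1, "art": 1}, 0, 0, 100, 50)
# "physics" holds half the total weight, so it covers half the area
```

Nested hierarchies are handled by recursing: each node's rectangle (or polygon) is subdivided the same way among its children.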


What does this mean for you, the user?

1) Faster speed. Iris.AI's unique qualities save you time. With her help a process that could take the user several hours is now completed in a matter of seconds.

2) Better connections. By asking Iris.AI, users will find relevant fields that they were not aware of, fighting the dangers of tunnel vision when approaching a scientific topic.

3) More empowerment. Mapping contextual results helps users bypass the need to know detailed terminology before performing a search.

What are Iris.AI's shortcomings today?

In these early days, Iris.AI has learned to extract concepts from the TED talks and we are very happy with how she is performing in that regard. However, she has not yet mastered a similar concept extraction technique on the other side of the coin: the research papers that she wants users to connect to. That is going to be the next step in her learning process.

How will she learn more over time? Iris.AI has learned from inspiring, very high-quality texts – the full body of TED talks ever given – but in order to keep up with her impressive rate of learning two things will need to happen. Firstly, she will need to read a lot more scientific information. Secondly, she will need to learn from knowledgeable adults willing to spend some of their precious time training her.

Sounds similar to what you yourself have had to go through? Yes, we agree…