Join us
Hello!
Are you a bit of a weirdo, passionate about science, something of a rockstar, above-average geeky, and driven by the idea of leaving a positive imprint on the world? Then come join us!
Below are some of our current openings. However, we will be hiring extensively during 2023, so no matter your background, if you believe you would be a good fit for the team, please feel free to submit an unsolicited resume to talent@iris.ai.
Job openings
Research project 1
(Get in touch for research collaborations)
Scaling an embedding evaluation framework
RQ: How can we reliably measure the quality of a word embedding model trained for a new linguistic domain, in particular, for specific scientific domains? How can we get more detailed insights into the strengths and weaknesses of domain-specific embedding models? How can we use this information to iterate faster towards high-quality domain-specific embedding models?
Learn more
Research project 2
(Get in touch for research collaborations)
Factuality and quality evaluations for text generation
RQ: How do we reliably measure the factuality of texts generated by a language model? How do we automatically evaluate the writing quality of generated texts? How do we provide users with a quantitative evaluation of the factuality of the generated texts?
Learn more
Research projects open for collaboration
If you are interested in thesis work or an internship, please start by reviewing the open projects listed above, then get in touch if one interests you!
Get in touch!
Schedule a demo and learn more about how Iris.ai might work for your organization.