I am a research scientist at the Allen Institute for Artificial Intelligence (AI2). Before that I was a PhD student with Tom Mitchell at the Language Technologies Institute at Carnegie Mellon University.
At AI2, I am on the Aristo team, which aims to create machines that can pass elementary science exams. Within Aristo, my current research applies methods similar to semantic parsing to answering these questions. My agenda is motivated by the observation that we do not have, and likely never will have, enough question-answer pairs to learn complex models for these questions, so we must find another source of training data. I am thus looking at models that can be trained on large collections of science-related text.
I have also spent a lot of time working on knowledge base completion (KBC), and I occasionally still dabble in this area. Much of my thesis explored the connection between graph-based and embedding-based methods for KBC. I have code on GitHub that implements a few recently published KBC methods.
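To give a flavor of what an embedding-based KBC method looks like, here is a minimal sketch of a translation-style scoring function (in the spirit of TransE, one commonly implemented baseline). This is an illustrative toy, not code from my repository: the embeddings are random, and in a real system they would be learned from knowledge base triples.

```python
import numpy as np

def transe_score(head, relation, tail):
    """Score a (head, relation, tail) triple.

    TransE models a relation as a translation in embedding space:
    head + relation should land near tail, so less-negative scores
    mean more plausible triples.
    """
    return -np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
dim = 50

# Toy embeddings standing in for learned entity/relation vectors.
h = rng.normal(size=dim)
r = rng.normal(size=dim)
t_true = h + r + 0.01 * rng.normal(size=dim)  # tail consistent with the translation
t_rand = rng.normal(size=dim)                 # an unrelated entity

# The consistent tail should score higher than the random one.
assert transe_score(h, r, t_true) > transe_score(h, r, t_rand)
```

Graph-based methods, by contrast, score a triple using features derived from paths between the two entities in the knowledge graph; much of my thesis concerned how these two views relate.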