A Fast, Accurate, Non-Projective, Semantically-Enriched Parser, by Stephen Tratz and Eduard Hovy
I saw a talk by Eduard Hovy a week ago that I thought was really interesting. It was about merging propositional and distributional semantics and was very close to some things I had already been thinking about. So I went to Hovy’s website and printed off all of his recent papers that looked interesting. I picked this one because I thought “Semantically-Enriched” meant that the parser used some form of semantics as input, which would have been really cool. It turns out it doesn’t, so that was disappointing, but the paper was still interesting.
Read more...
Mintz, Bills, Snow, Jurafsky, ACL-IJCNLP 2009. Distant supervision for relation extraction without labeled data.
Read more...
Hoffmann et al., ACL 2011. Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations
The point of this paper is to use a knowledge base to learn a classifier for mapping sentences to relations over entities, very similar in purpose to what +Justin Betteridge has been working on. They use Markov random fields to model possible facts in a database. They learn the weights for the factors in their MRF from known relations in Freebase, then apply the learned model to new sentences (and with more entities? that part wasn’t exactly clear to me) to extract more relations.
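To make the distant-supervision idea concrete for myself, here’s a rough Python sketch (my own paraphrase, not the authors’ code): sentences mentioning an entity pair that Freebase already relates get treated as noisy positive training examples for that relation, and the trained model is then run over new sentences to propose new facts. The kb dictionary and the extract_features helper are placeholders I made up; the paper’s actual model is the MRF described above.

```python
# Rough sketch of the distant-supervision idea (my own paraphrase, not
# the authors' code). "kb" stands in for Freebase: a dict mapping
# (entity1, entity2) pairs to relation names.

def extract_features(sent, e1, e2):
    """Placeholder feature extractor: bag of words between the two entities."""
    words = sent.split()
    i, j = words.index(e1), words.index(e2)
    return {"between:" + w: 1 for w in words[min(i, j) + 1:max(i, j)]}

def build_training_data(sentences, kb):
    """Treat any sentence mentioning a known entity pair as a (noisy)
    positive example of the relation Freebase asserts for that pair."""
    examples = []
    for sent, (e1, e2) in sentences:  # sentences come pre-tagged with an entity pair
        relation = kb.get((e1, e2))
        if relation is not None:
            examples.append((extract_features(sent, e1, e2), relation))
    return examples

# Toy run. Note that the second sentence gets labeled "born_in" even though
# it doesn't express that relation; that is exactly the noise these
# weak-supervision models are designed to cope with.
kb = {("Obama", "Hawaii"): "born_in"}
sentences = [("Obama was born in Hawaii", ("Obama", "Hawaii")),
             ("Obama visited Hawaii last year", ("Obama", "Hawaii"))]
print(build_training_data(sentences, kb))
```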
Read more...
Coreference Resolution with World Knowledge, by Altaf Rahman and Vincent Ng, ACL 2011
The premise of this paper is that systems that try to solve the coreference problem have typically relied on linguistic knowledge encoded into the algorithms. However, they have largely ignored world knowledge, which the authors argue is an important type of knowledge for determining the antecedent of anaphoric noun phrases. The authors address this gap in the literature by augmenting existing coreference resolution systems with features derived from several sources of world knowledge.
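To make “features derived from world knowledge” concrete, here’s a small sketch of my own (the world_kb lookup is hypothetical, just standing in for whatever external resource is available): the world-knowledge signal simply gets appended to the usual linguistic features of a mention pair before the coreference classifier sees it.

```python
# Minimal sketch of adding world-knowledge features to a mention-pair
# coreference classifier. "world_kb" is a hypothetical lookup, not any
# specific resource from the paper.

def linguistic_features(np1, np2):
    """The usual surface features: string match, head match, etc."""
    return {
        "exact_match": float(np1.lower() == np2.lower()),
        "head_match": float(np1.split()[-1].lower() == np2.split()[-1].lower()),
    }

def world_knowledge_features(np1, np2, world_kb):
    """Does some external resource say these two strings can refer to
    the same entity?"""
    return {
        "kb_says_same_entity": float((np1, np2) in world_kb or (np2, np1) in world_kb),
    }

def mention_pair_features(np1, np2, world_kb):
    feats = linguistic_features(np1, np2)
    feats.update(world_knowledge_features(np1, np2, world_kb))
    return feats

# The surface features alone can't link these two mentions, but a
# world-knowledge lookup supplies the missing evidence.
world_kb = {("Barack Obama", "the president")}
print(mention_pair_features("Barack Obama", "the president", world_kb))
```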
Read more...
Types of Common-Sense Knowledge Needed for Recognizing Textual Entailment, LoBue and Yates, ACL 2011
This is an interesting short paper from ACL 2011. It examines the problem of “recognizing textual entailment,” essentially that of judging whether or not a conclusion is justified by a given piece of text. That’s interesting because drawing conclusions from text depends on a large amount of knowledge assumed of the reader, and machines generally don’t have that knowledge. The paper looks at what kinds of knowledge are typically required for making these kinds of judgments.
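As a toy illustration of the kind of judgment involved (my own examples, not the paper’s data or categories), here are a couple of text/hypothesis pairs where the entailment only goes through if the system already has a particular piece of common-sense knowledge:

```python
# Toy illustration of recognizing textual entailment (my own examples,
# not the paper's data). Each entailment holds only if the system
# already has the listed piece of common-sense / world knowledge.

rte_examples = [
    {
        "text": "The company was founded in Stockholm in 1998.",
        "hypothesis": "The company was founded in Sweden.",
        "entailed": True,
        "knowledge_needed": "geography: Stockholm is in Sweden",
    },
    {
        "text": "She handed her boarding pass to the agent.",
        "hypothesis": "She was at an airport.",
        "entailed": True,
        "knowledge_needed": "functional: boarding passes are used at airports",
    },
]

for ex in rte_examples:
    print(f"{ex['hypothesis']!r} follows from {ex['text']!r}? "
          f"{ex['entailed']} (needs: {ex['knowledge_needed']})")
```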
Read more...