Title: Building Syntactic-Semantic Inference Rules from Text
Speaker: Prachi Jain
Abstract:
The visions of machine reading and deep language understanding emphasize
the ability to draw inferences from text to discover implicit
information that may not be explicitly stated. For example, if (X, got
married to, Y) is true, then (X, is a relative of, Y) is also true. This
has natural applications to textual entailment, knowledge-base (KB)
completion, and effective querying over KBs. One popular approach to
such inference is to use inference rules.
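The rule in the example above can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch of applying one such inference rule to a small set of (subject, relation, object) triples; the rule table, triples, and function names are assumptions for illustration, not taken from any of the systems discussed in the talk.

```python
# Illustrative sketch: applying a single textual inference rule to triples.
# Rules map a relation phrase to a relation phrase it implies.
RULES = {
    "got married to": "is a relative of",  # hypothetical example rule
}

def apply_rules(triples):
    """Return the input triples together with any triples implied by RULES."""
    inferred = set(triples)
    for subj, rel, obj in triples:
        implied_rel = RULES.get(rel)
        if implied_rel:
            inferred.add((subj, implied_rel, obj))
    return inferred

kb = {("Alice", "got married to", "Bob")}
print(sorted(apply_rules(kb)))
```

Real systems induce such rules automatically from corpora and score their reliability, rather than listing them by hand as done here.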
The objective of this talk is to give a hitchhiker's view of the
evolving techniques for extracting inference rules from text. We will
discuss how syntactic structures, lexicographic resources (such as
WordNet and thesauri), and distributional similarity can be exploited to
build a corpus of inference rules, looking at systems such as SHERLOCK,
PATTY, CLEAN, VCLEAN, PPDB 2.0, and NaturalLI. We will then look at some
outstanding challenges that must be addressed for progress in the field
of semantic inference, and hence in Natural Language Understanding.