Quantifier scope ambiguity is one of the most challenging problems for deep
language understanding systems. As a result, most deep understanding
systems use a constraint-based, scope-underspecified model to represent the
output of the syntax/semantics interface; constraints on the relative order
of quantifiers can then be added to the representation at deeper processing
levels (e.g. discourse or pragmatics) to rule out unwanted readings. A major
algorithmic problem to be solved for such a representation is the
satisfiability problem, that is, whether any solution satisfies all the
constraints. We present the first tractable underspecification framework,
one broad enough to provably cover all coherent natural language sentences
under a linguistically justified notion of coherence.
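
To make the satisfiability question concrete, here is a minimal sketch in Python (my own simplification, not the framework presented in the talk): if each constraint is read as "one scope-bearing element outscopes another", one simplified notion of satisfiability is that the constraint graph admits at least one consistent ordering, i.e. contains no cycle. The quantifier names and the satisfiable helper below are purely illustrative.

    from collections import defaultdict

    def satisfiable(quantifiers, constraints):
        """Return True if the 'outscopes' constraints admit at least one
        consistent scoping, i.e. the constraint graph is acyclic."""
        graph = defaultdict(list)
        for outer, inner in constraints:   # (outer, inner): outer must outscope inner
            graph[outer].append(inner)

        WHITE, GREY, BLACK = 0, 1, 2       # unvisited / on current DFS path / finished
        colour = {q: WHITE for q in quantifiers}

        def has_cycle(node):
            colour[node] = GREY
            for succ in graph[node]:
                if colour[succ] == GREY:                       # back edge: contradictory constraints
                    return True
                if colour[succ] == WHITE and has_cycle(succ):
                    return True
            colour[node] = BLACK
            return False

        return not any(colour[q] == WHITE and has_cycle(q) for q in quantifiers)

    # Toy constraints over three quantifiers: the first two are jointly
    # satisfiable; adding the third creates a cycle and rules out all readings.
    print(satisfiable(["every", "a", "most"],
                      [("every", "a"), ("a", "most")]))                      # True
    print(satisfiable(["every", "a", "most"],
                      [("every", "a"), ("a", "most"), ("most", "every")]))   # False

In this toy setting, satisfiability reduces to cycle detection; the framework discussed in the talk handles a richer constraint language, which is where tractability becomes the interesting question.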
In the second part of the talk, I discuss automatic scope disambiguation.
We have built the first scope-disambiguated corpus of English text, in
which every pair of scope-bearing elements in a sentence (including all
noun phrases and scopal adverbials) is examined for possible scope
interactions. I present the challenges we faced in building this corpus
and the solutions we developed to address them. Given the
scope-disambiguated corpus, we frame learning to disambiguate quantifier
scope as learning to build partial orders. Using a preliminary supervised
model, we achieve a relatively high F-score in retrieving scope
preferences on our corpus.
These early results are promising and encourage further studies in this area.
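
As a rough illustration of the learning-as-partial-orders setup (a sketch under my own assumptions, not the authors' model), pairwise "outscopes" decisions for one sentence can be closed under transitivity to form a partial order and then scored against gold annotations with an F-score over ordered pairs. The element names and helper functions below are hypothetical.

    def transitive_closure(pairs):
        """Close a set of (outer, inner) scope edges under transitivity so the
        prediction forms a partial order rather than a bag of local decisions."""
        closed = set(pairs)
        changed = True
        while changed:
            changed = False
            for a, b in list(closed):
                for c, d in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        return closed

    def f_score(predicted, gold):
        """F1 over ordered 'outscopes' pairs."""
        tp = len(predicted & gold)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        return (2 * precision * recall / (precision + recall)) if tp else 0.0

    # Toy example with hypothetical pairwise decisions for one sentence.
    gold = {("every", "a"), ("every", "two"), ("a", "two")}
    predicted = transitive_closure({("every", "a"), ("a", "two")})
    print(f_score(predicted, gold))   # 1.0: the closure recovers ("every", "two")

The design choice sketched here is that local pairwise predictions are only meaningful once made globally consistent; how the actual model enforces that consistency is part of what the talk addresses.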