In order to make use of this textual data in natural language understanding systems, we need text analysis that structures this information. Natural language parsing, a fundamental problem in NLP, is one example of such analysis: it provides the basic structure of text by representing its syntax computationally. This structure is used in most NLP applications that analyze language to understand its meaning.
I will discuss three threads of research aimed at improving the accuracy and efficiency of parsing and at making better use of unlabeled data.
The first thread of research concerns an algorithm for estimating latent-variable PCFG models (L-PCFGs). These models have been shown to be a good fit for natural language parsing, but because of the latent component in the model, they are usually estimated with algorithms such as expectation-maximization (EM), which are not statistically consistent and require careful initialization. I will describe a spectral algorithm for L-PCFG estimation that solves both of these issues. The spectral algorithm is simple and efficient, with a significantly lower running time than EM.
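To give a flavor of what "spectral" means here, the following is a minimal, schematic sketch of the kind of SVD-based, method-of-moments step such estimators build on. The dimensions, feature functions, and matrix contents are placeholders for illustration only, not the quantities used in the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

num_inside_features = 50    # hypothetical "inside" feature dimension
num_outside_features = 40   # hypothetical "outside" feature dimension
m = 8                       # number of latent states per nonterminal

# Stand-in for an empirical cross-moment matrix between inside and outside
# features collected at tree nodes; its construction from data is assumed.
Omega = rng.random((num_inside_features, num_outside_features))

# Thin SVD: the top-m singular vectors give low-dimensional projections of
# the inside and outside feature spaces.
U, S, Vt = np.linalg.svd(Omega, full_matrices=False)
U_m, V_m = U[:, :m], Vt[:m, :].T

# Parameter estimates are then read off from projected empirical averages,
# with no iterative optimization and hence no initialization to tune
# (unlike EM, which alternates E- and M-steps until convergence).
print(U_m.shape, V_m.shape)   # (50, 8) (40, 8)
```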
The second thread of research concerns an algorithm that speeds up parsing with latent-variable PCFGs. Parsing with L-PCFGs scales cubically with the number of latent components in the model, which is quite prohibitive even for a relatively small number of components. Here, the idea is to exploit a tensor decomposition of the underlying parameters of the model in order to apply the model more efficiently (a sketch of the idea follows below). This basic idea also extends to parsing with regular PCFGs, improving the asymptotic complexity of the fundamental CKY parsing algorithm (with respect to the grammar size).
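As a rough illustration of where a speedup of this kind can come from (with made-up sizes and random parameters, not the grammar or decomposition from the talk): applying one binary rule during inside-score computation contracts an m x m x m parameter tensor with the two children's latent-state vectors, which costs O(m^3); if that tensor is decomposed into r rank-one components, the same contraction costs only O(rm).

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 16, 4   # latent states per nonterminal, tensor rank (illustrative sizes)

# Hypothetical rank-r parameters of one binary rule: T = sum_k U[:,k] o V[:,k] o W[:,k]
U = rng.random((m, r))
V = rng.random((m, r))
W = rng.random((m, r))
T = np.einsum('ik,jk,lk->ijl', U, V, W)   # the full m x m x m rule tensor

left = rng.random(m)    # inside score vector of the left child
right = rng.random(m)   # inside score vector of the right child

# Naive contraction with the full tensor: O(m^3) per rule application.
parent_naive = np.einsum('ijl,j,l->i', T, left, right)

# Contraction through the decomposition: O(r * m) per rule application.
parent_fast = U @ ((V.T @ left) * (W.T @ right))

assert np.allclose(parent_naive, parent_fast)
```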
In the final thread of research, I will touch on work on unsupervised language learning in the Bayesian setting. Previous work used a rather poor set of priors for this Bayesian learning. I will survey novel priors that guide the learner to find better parameters, compensating for the lack of annotated data in the unsupervised setting.
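For illustration only (the specific priors surveyed in the talk are not reproduced here), the sketch below contrasts the standard Dirichlet prior over a nonterminal's rule probabilities with a logistic-normal prior, one family of richer priors that can encode correlations between rules through a covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5   # number of rewrite rules sharing one left-hand-side nonterminal

# Standard choice: a Dirichlet prior over the rule-probability simplex.
theta_dirichlet = rng.dirichlet(alpha=np.ones(K))

# Illustrative alternative: a logistic-normal prior, whose covariance Sigma
# lets the probabilities of related rules rise and fall together.
mu = np.zeros(K)
Sigma = 0.5 * np.eye(K) + 0.5                             # correlated components
eta = rng.multivariate_normal(mu, Sigma)
theta_logistic_normal = np.exp(eta) / np.exp(eta).sum()   # softmax onto the simplex

print(theta_dirichlet)
print(theta_logistic_normal)
```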
Bio: Shay Cohen is a postdoctoral research scientist in the Department of Computer Science at Columbia University. He holds a CRA Computing Innovation Fellowship. He received his B.Sc. and M.Sc. from Tel Aviv University in 2000 and 2004, and his Ph.D. from Carnegie Mellon University in 2011. His research interests span a range of topics in natural language processing and machine learning, with a focus on structured prediction. He is especially interested in developing efficient and scalable parsing algorithms as well as learning algorithms for probabilistic grammars.