Training parsers by inverse reinforcement learning
Neu, Gergely and Szepesvári, Csaba (2009) Training parsers by inverse reinforcement learning. Machine Learning, 77 (2-3). pp. 303-337.

Abstract
One major idea in structured prediction is to assume that the predictor computes its output by finding the maximum of a score function. The training of such a predictor can then be cast as the problem of finding weights of the score function so that the output of the predictor on the inputs matches the corresponding structured labels on the training set. A similar problem is studied in inverse reinforcement learning (IRL) where one is given an environment and a set of trajectories and the problem is to find a reward function such that an agent acting optimally with respect to the reward function would follow trajectories that match those in the training set. In this paper we show how IRL algorithms can be applied to structured prediction, in particular to parser training. We present a number of recent incremental IRL algorithms in a unified framework and map them to parser training algorithms. This allows us to recover some existing parser training algorithms, as well as to obtain a new one. The resulting algorithms are compared in terms of their sensitivity to the choice of various parameters and generalization ability on the Penn Treebank WSJ corpus.
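To make the score-maximization idea from the abstract concrete, here is a minimal sketch (not the paper's IRL-based method) of argmax-style structured prediction trained with a structured-perceptron update, one of the classical parser-training baselines this family of algorithms relates to. The feature map, candidate set, and toy data are illustrative assumptions.

```python
# Sketch: the predictor outputs the argmax of a linear score w . phi(x, y);
# training adjusts w so the argmax matches the training labels.
# phi, the candidate set, and the data below are hypothetical toy choices.

def phi(x, y):
    # Hypothetical joint feature map: one indicator feature per (input, label) pair.
    return {(x, y): 1.0}

def score(w, x, y):
    return sum(w.get(f, 0.0) * v for f, v in phi(x, y).items())

def predict(w, x, candidates):
    # The predictor computes its output by maximizing the score function.
    return max(candidates, key=lambda y: score(w, x, y))

def train(data, candidates, epochs=5):
    # Structured-perceptron update: reward features of the true label,
    # penalize features of the (wrong) predicted label.
    w = {}
    for _ in range(epochs):
        for x, y in data:
            y_hat = predict(w, x, candidates)
            if y_hat != y:
                for f, v in phi(x, y).items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in phi(x, y_hat).items():
                    w[f] = w.get(f, 0.0) - v
    return w

data = [("a", 0), ("b", 1)]
w = train(data, candidates=[0, 1])
print([predict(w, x, [0, 1]) for x, _ in data])  # -> [0, 1]
```

In a parser-training setting the candidate set would be the (exponentially large) space of parse trees, searched with dynamic programming rather than enumeration; the update rule itself is unchanged.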
Item Type:  ISI Article 

Subjects:  Q Science > QA Mathematics and Computer Science > QA75 Electronic computers. Computer science / computing technology, computer science 
Depositing User:  Eszter Nagy 
Date Deposited:  11 Dec 2012 16:05 
Last Modified:  11 Dec 2012 16:05 
URI:  https://eprints.sztaki.hu/id/eprint/6000 