In psycholinguistics, there has recently been increased interest in
the role of linguistic experience in models of human parsing,
reflecting the growing prominence of statistical methods in
computational linguistics.
In this talk, I will discuss a computational model of experience-based
structural preferences in human ambiguity resolution. The model uses a
dynamic grammar extracted from the Penn Treebank, which defines the
ways in which a word (or part-of-speech category) can be attached to
an incrementally expanding tree during a left-to-right parse. It is
trained using recursive neural networks, a novel machine learning
technique designed to learn hierarchically structured objects such as
trees. The model takes a partial tree and a part-of-speech tagged word
as its input, and returns the list of trees that would result from
attaching that part-of-speech category to the partial tree in an
incremental parse. This list is ranked by the neural network according
to preferences acquired during training.
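The attach-and-rank step can be sketched roughly as follows. This is a
toy illustration only: the tuple encoding of trees, the right-frontier
enumeration, and the depth-based score are all assumptions standing in
for the treebank-derived dynamic grammar and the trained recursive
neural network.

```python
# Illustrative sketch: enumerate incremental attachments of a tagged
# word to a partial tree and rank the results. The depth-based score
# is a placeholder for the trained network's preference score.

def right_frontier(tree, path=()):
    """Yield paths to nodes on the rightmost spine of a partial tree,
    i.e. the sites where the next category can attach incrementally."""
    _label, children = tree
    yield path
    if children:
        yield from right_frontier(children[-1], path + (len(children) - 1,))

def attach(tree, path, leaf):
    """Return a copy of tree with leaf added as the rightmost child
    of the node addressed by path."""
    label, children = tree
    if not path:
        return (label, children + [leaf])
    new_children = list(children)
    new_children[path[0]] = attach(children[path[0]], path[1:], leaf)
    return (label, new_children)

def depth(tree):
    """Height of a tree; leaves have depth 1."""
    _label, children = tree
    return 1 + max((depth(c) for c in children), default=0)

def ranked_attachments(partial, tagged_word, score=lambda t: -depth(t)):
    """Enumerate every right-frontier attachment of the tagged word
    and rank the resulting trees by the (stand-in) preference score."""
    leaf = (tagged_word, [])
    candidates = [attach(partial, p, leaf) for p in right_frontier(partial)]
    return sorted(candidates, key=score, reverse=True)

# e.g. extending the partial analysis of "the dog" with a verb:
partial = ("S", [("NP", [("DT/the", []), ("NN/dog", [])])])
for tree in ranked_attachments(partial, "VBD/barked"):
    print(tree)
```

In the real model, the placeholder score would be replaced by the
recursive network's output, computed bottom-up over each candidate
tree's structure.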
Results for the model are encouraging, both for test sentences from
the Penn Treebank and for modelling human structural preferences.