Answering natural-language questions is a remarkable ability of search engines.
However, the state of the art becomes brittle when questions are formulated in
complex ways. Such complexity has two distinct dimensions:
(i) diversity of expressions that convey the same information need, and
(ii) complexity in the information need itself. We propose initial solutions to
these challenges: (i) syntactic differences in question formulation can be
tackled with a continuous-learning framework that extends template-based
answering with semantic similarity and user feedback; (ii) complexity in
information needs can be addressed by stitching pieces of evidence from multiple
documents to build a noisy graph, within which answers can be detected using
optimal interconnections. The talk will discuss results for these proposals and
conclude with promising open directions in question answering.
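As a minimal illustrative sketch (not the talk's actual algorithm), the second idea can be approximated as follows: treat evidence stitched from multiple documents as a graph, match question terms to anchor nodes, and prefer the answer candidate that is most tightly interconnected with those anchors, here scored by summed shortest-path distances. All node names, the toy graph, and the scoring rule are hypothetical assumptions for illustration.

```python
from collections import deque

def bfs_dist(graph, src):
    """Unweighted shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy "noisy" evidence graph stitched from multiple documents (hypothetical data).
graph = {
    "obama":    ["hawaii", "harvard"],
    "hawaii":   ["obama", "honolulu"],
    "honolulu": ["hawaii"],
    "harvard":  ["obama", "boston"],
    "boston":   ["harvard"],
}

anchors = ["obama", "hawaii"]          # question terms matched in the graph
candidates = ["honolulu", "boston"]    # candidate answer nodes

# Score each candidate by its total distance to all anchors;
# the best-interconnected candidate wins.
dists = {a: bfs_dist(graph, a) for a in anchors}
best = min(candidates,
           key=lambda c: sum(d.get(c, float("inf")) for d in dists.values()))
print(best)  # the candidate closest to both anchors
```

In this sketch "honolulu" wins because it lies near both anchors; richer formulations of optimal interconnection (e.g. Steiner-tree-style objectives over weighted edges) follow the same intuition.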