Relevance feedback is an important technique for improving result quality in
text retrieval, and it has recently been applied successfully to XML
retrieval, for which several feedback algorithms have been proposed.
Comparing these algorithms, however, remains an unsolved problem. Although
some evaluation methods have been proposed in the literature, it is still
unclear which of them are applicable in the XML context, and which of them
can be combined with the plethora of metrics proposed to assess the
quality of retrieval algorithms.
In this context, our goal is to define and formally analyze different
evaluation methods, and to implement an evaluation framework for feedback
algorithms that supports multiple evaluation methods and metrics. We also
aim to compare and analyze several feedback algorithms for keyword-based
and structural XML queries using real submission files.
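To illustrate what such a framework could look like, the following is a minimal sketch, not the framework described above: it assumes ranked runs of element identifiers and graded relevance judgments, uses residual-collection evaluation as one example evaluation method and precision at k as one example metric, and treats both as pluggable components. All names (precision_at_k, residual_collection, evaluate) are hypothetical.

from typing import Callable, Dict, List

# A ranked run is a list of element identifiers; judgments map identifiers to
# graded relevance values (representation is an illustrative assumption).
Run = List[str]
Judgments = Dict[str, float]
Metric = Callable[[Run, Judgments], float]

def precision_at_k(run: Run, qrels: Judgments, k: int = 10) -> float:
    """Toy metric: fraction of the top-k results judged relevant (> 0)."""
    top = run[:k]
    return sum(1 for elem in top if qrels.get(elem, 0.0) > 0) / max(k, 1)

def residual_collection(run: Run, seen: List[str]) -> Run:
    """One possible evaluation method: drop results already shown to the
    user during the feedback iteration (residual-collection evaluation)."""
    seen_set = set(seen)
    return [elem for elem in run if elem not in seen_set]

def evaluate(baseline: Run, feedback: Run, seen: List[str],
             qrels: Judgments, metric: Metric) -> Dict[str, float]:
    """Score baseline and feedback runs under the chosen evaluation method
    and metric, and report the absolute improvement of the feedback run."""
    base_score = metric(residual_collection(baseline, seen), qrels)
    fb_score = metric(residual_collection(feedback, seen), qrels)
    return {"baseline": base_score, "feedback": fb_score,
            "improvement": fb_score - base_score}

if __name__ == "__main__":
    # Tiny made-up example: three relevant elements, one already seen.
    qrels = {"e1": 1.0, "e3": 1.0, "e7": 1.0}
    baseline = ["e2", "e1", "e4", "e5", "e3"]
    feedback = ["e3", "e7", "e1", "e2", "e4"]
    print(evaluate(baseline, feedback, seen=["e1"], qrels=qrels,
                   metric=lambda r, q: precision_at_k(r, q, k=3)))

Other evaluation methods (e.g., freezing) or metrics could be swapped in by replacing the corresponding component, which is the separation of concerns the framework goal above refers to.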