CASP provides a unique opportunity to compare the performance of automatic fold recognition methods with the performance of manual experts who might utilize these methods. Here, I will show that a novel automatic fold recognition server, Pmodeller, is approaching the performance of manual experts. While a small group of experts still performs better, most of the experts participating in CASP5 actually performed worse, even though they had full access to all automatic predictions. Pmodeller is based on Pcons, the first ``consensus'' predictor, which utilizes predictions from many other servers. Therefore, the success of Pmodeller and other consensus servers should be seen as a tribute to the collective of all developers of fold recognition servers. Further, I will show that including another novel method, ProQ, which evaluates the quality of the protein models, improves the predictions.
All these methods will be discussed and described in detail.
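The consensus idea can be illustrated with a minimal sketch. This is not the actual Pcons implementation (which combines structural-similarity and server-confidence features in a more elaborate scoring scheme); it is a hypothetical simplification that only ranks candidate models by their mean pairwise structural similarity to all other server models, so that models resembling many independent predictions score highest.

```python
def consensus_scores(similarity):
    """Score each candidate model by its mean structural similarity
    to every other model in the pool.

    similarity[i][j] is a structural-similarity measure in [0, 1]
    between models i and j (e.g., an LGscore- or TM-score-like value).
    This is a simplified stand-in for a real consensus predictor.
    """
    n = len(similarity)
    scores = []
    for i in range(n):
        # Average similarity to all *other* models; a model supported
        # by many similar predictions receives a high consensus score.
        others = [similarity[i][j] for j in range(n) if j != i]
        scores.append(sum(others) / len(others))
    return scores


# Three hypothetical models: model 0 agrees most with the others.
sims = [
    [1.0, 0.8, 0.7],
    [0.8, 1.0, 0.6],
    [0.7, 0.6, 1.0],
]
scores = consensus_scores(sims)  # model 0 gets the highest score
```

Under this sketch, the "consensus" model is simply the one closest, on average, to the rest of the pool; the real servers discussed here refine this idea with learned scoring functions.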