In this talk I will describe how mathematical learning theory can be used to understand and design learning algorithms. To this end, I will first recall the binary classification problem, which is one of the most important learning scenarios in applications. In particular, I will describe the learning goals in this scenario and their formalizations. I will then present a classical result of learning theory, which essentially states that no learning algorithm can learn all classification problems well. One consequence I will discuss is that establishing learning performance guarantees always requires analyzing which implicit assumptions the algorithm at hand makes about the data-generating process. In the second part of the talk, I will present a new assumption on the data-generating process and compare it with previously introduced concepts. I will then formulate some corresponding learning performance guarantees for support vector machines, which are among the state-of-the-art classification algorithms. In the third part of my talk, I will briefly describe how learning theory can be used in the design and analysis of new anomaly detection algorithms. In particular, I will present a reliable method that allows us to compare the learning performance of different anomaly detection algorithms.
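As a brief sketch of the formalization alluded to above (standard notation from statistical learning theory, not taken from the talk itself): for an unknown distribution $P$ on $X \times \{-1, +1\}$, the quality of a classifier $f$ is measured by its classification risk, and the learning goal is to approach the best possible risk as the sample size grows:

```latex
% Classification risk of a classifier f : X -> {-1, +1} under the
% unknown data-generating distribution P on X x {-1, +1}
R_P(f) = P\bigl(\{(x, y) : f(x) \neq y\}\bigr)

% Bayes risk: the best achievable risk over all measurable classifiers
R_P^{*} = \inf_{f : X \to \{-1,+1\}} R_P(f)

% Learning goal: an algorithm producing f_n from n i.i.d. samples should satisfy
R_P(f_n) \longrightarrow R_P^{*} \quad (n \to \infty)
```

The no-free-lunch-type result mentioned above says, roughly, that no single algorithm can achieve this convergence at a uniform rate over all distributions $P$, which is why performance guarantees need explicit assumptions on $P$.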