concerns about its potential for discrimination against certain social groups. However,
incorporating nondiscrimination goals into the design of algorithmic decision making
systems (or, classifiers) has proven to be quite challenging. These challenges arise mainly
from the computational complexity of the resulting learning problems, and from the
inability of existing measures to computationally capture discrimination in various situations. The
goal of this thesis is to tackle these problems.
First, with the aim of incorporating existing measures of discrimination (namely,
disparate treatment and disparate impact) into the design of well-known classifiers, we
introduce a mechanism based on decision boundary covariance, which can be included in
the formulation of any convex boundary-based classifier as a set of convex constraints.
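As a rough illustrative sketch (not the thesis's exact formulation, and with all variable names hypothetical), the decision boundary covariance for a linear classifier can be understood as the empirical covariance between a binary sensitive attribute and each point's signed distance to the boundary; bounding its magnitude is then a constraint that is linear, hence convex, in the classifier parameters:

```python
# Illustrative sketch: an empirical "decision boundary covariance" for a
# linear classifier -- the covariance between a binary sensitive attribute
# z and the signed distance theta^T x of each point to the boundary.
# Since it is linear in theta, bounding |cov| <= c is a convex constraint.
import numpy as np

def boundary_covariance(theta, X, z):
    """Covariance between sensitive attribute z and signed distance to the boundary."""
    d = X @ theta                       # signed distances theta^T x_i
    return np.mean((z - z.mean()) * d)  # empirical covariance

# Toy data: two features, binary group membership (all values hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
z = rng.integers(0, 2, size=200).astype(float)
theta = np.array([1.0, -0.5])

cov = boundary_covariance(theta, X, z)
c = 0.1  # fairness threshold (hypothetical); the constraint would be |cov| <= c
print(cov)
```

In a training procedure along these lines, this quantity would be constrained (e.g., |cov| ≤ c) while minimizing a convex classification loss over theta.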
Second, we propose alternative measures of discrimination. Our first proposed measure,
disparate mistreatment, is useful in situations where unbiased ground-truth training data
is available. The other two measures, preferred treatment and preferred impact, are
useful in situations where the feature and class distributions of different social groups
differ significantly, and can additionally help reduce the cost of nondiscrimination
(as compared to the existing measures). We also develop mechanisms to incorporate these
new measures into the design of convex boundary-based classifiers.
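To make the idea behind disparate mistreatment concrete, here is a hedged sketch (the helper below is hypothetical, not code from the thesis): given ground-truth labels, one can compare misclassification rates, for instance false positive rates, across social groups, and the gap between them is the kind of quantity such a measure would bound:

```python
# Hypothetical sketch: disparate mistreatment concerns unequal
# misclassification rates across groups on ground-truth-labeled data.
# Here we measure the gap in false positive rates between groups z=0 and z=1.
import numpy as np

def fpr_gap(y_true, y_pred, z):
    """Absolute difference in false positive rates between the two groups."""
    rates = []
    for g in (0, 1):
        negatives = (z == g) & (y_true == 0)     # ground-truth negatives of group g
        rates.append(np.mean(y_pred[negatives] == 1))
    return abs(rates[0] - rates[1])

# Toy predictions: group z=1 receives more false positives than group z=0.
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 1, 0])
z      = np.array([0, 0, 1, 1, 0, 1])
print(fpr_gap(y_true, y_pred, z))  # prints 0.5
```

A training-time mechanism in this spirit would bound such a gap via a convex (or convex-concave) proxy rather than computing it post hoc as done here.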