Treat me fairly: Implementing unbiased and interpretable algorithmic decision making
Date: 2018-05-30

Topic: Treat me fairly: Implementing unbiased and interpretable algorithmic decision making

Speaker: Wenxuan Ding, Northwestern University

Site: B247

Time: 15:00-16:30, Friday, June 8, 2018

Abstract

Today, driven by big data and rapid improvements in computing power, many firms and organizations are interested in deploying machine learning algorithms to improve their decision-making. Although conventional machine learning models have shown great performance in many areas, the European Union has recently issued a new law, the General Data Protection Regulation (GDPR), that prohibits the conventional machine learning algorithms currently in wide use in applications such as recommendation systems, collaborative filtering in social networks, credit and insurance risk assessment, computational advertising, and intelligent agents. This law poses important challenges to industry and to the information systems and artificial intelligence communities, because conventional machine learning models suffer from two flaws: potential discrimination against data subjects, and an inability to explain the logic behind learning outcomes.

These two flaws arise from (1) aggregate-level learning from a data set consisting of different data subjects, so that sample selection bias may occur, producing biased decisions when a subject does not belong to the same population as the training sample; and (2) non-theory-driven processes that focus on correlations among feature variables rather than on their causal effects.
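
The selection-bias mechanism in point (1) can be illustrated with a minimal sketch. The two subpopulations, the opposite feature-outcome relationships, and the least-squares classifier below are all illustrative assumptions, not the paper's data or estimator; the point is only that a rule learned at the aggregate level on one group can be systematically wrong for a group absent from the training sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical subpopulations whose feature-outcome relationship
# is reversed (an assumption for illustration only).
n = 1000
x_a = rng.normal(0, 1, (n, 1))
y_a = (x_a[:, 0] > 0).astype(int)   # group A: positive x -> positive label
x_b = rng.normal(0, 1, (n, 1))
y_b = (x_b[:, 0] < 0).astype(int)   # group B: the relationship is reversed

# Aggregate-level learner trained ONLY on group A (sample selection bias):
# a least-squares linear score, thresholded at 0.5.
X = np.hstack([x_a, np.ones((n, 1))])        # add an intercept column
w, *_ = np.linalg.lstsq(X, y_a, rcond=None)

def predict(x):
    return (x @ w[:1] + w[1] > 0.5).astype(int)

acc_a = (predict(x_a) == y_a).mean()   # group represented in training: high
acc_b = (predict(x_b) == y_b).mean()   # unrepresented group: systematically wrong
print(f"accuracy on group A: {acc_a:.2f}")
print(f"accuracy on group B: {acc_b:.2f}")
```

The model is not merely less accurate on group B; because it applies group A's relationship wholesale, its decisions for group B are consistently biased in one direction.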

This paper presents a novel theory-based individual dynamic model that overcomes the discrimination issue while incorporating a causal mechanism to enable explanation. The model uses information only from the individual subject, without drawing on other subjects' data. It learns the underlying data-generating process and identifies the corresponding causal mechanism to reach fair and interpretable decisions. Using real-world credit and risk assessment as the context, we empirically test our model and show that it outperforms conventional supervised learning models and decision tree models in both fairness and prediction accuracy.
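
A minimal sketch of the "individual-level" idea, under stated assumptions: the AR(1) form, the `fit_individual_ar1` helper, and the repayment-ratio series below are all hypothetical illustrations, not the paper's actual estimator. What the sketch shows is only the key contrast with aggregate learning: every quantity in the fit comes from one subject's own history, so no other subject's population can bias the decision.

```python
import numpy as np

def fit_individual_ar1(history):
    """Fit y_t = a + b * y_{t-1} by least squares on ONE subject's own
    history; no other subjects' data enters the fit."""
    y_prev, y_next = history[:-1], history[1:]
    X = np.column_stack([np.ones_like(y_prev), y_prev])
    (a, b), *_ = np.linalg.lstsq(X, y_next, rcond=None)
    return a, b

def predict_next(history):
    # Forecast the subject's next outcome from their own dynamics.
    a, b = fit_individual_ar1(history)
    return a + b * history[-1]

# One subject's monthly repayment ratios (illustrative data).
history = np.array([0.9, 0.85, 0.8, 0.78, 0.75, 0.74])
print(round(predict_next(history), 3))
```

The fitted coefficients themselves (here, a persistence parameter and a drift toward lower repayment) are what make such a model inspectable, in contrast to a black-box classifier trained on the pooled population.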