IIIT Hyderabad Publications
Fairness in Artificial Intelligence based Decision Making

Author: Manisha Padala
Date: 2023-05-29
Report no: IIIT/TH/2023/59
Advisor: Sujit Prakash Gujar
Centre for Machine Learning Lab
Full thesis: pdf

Abstract

AI systems are ubiquitous today, facilitating numerous real-world, even real-time, applications. Such sophistication is the consequence of advances in algorithmic research and the concurrent upgrading of computational resources. Existing models achieve near-optimal results for specific performance measures. Such perfection is often obtained at the cost of fairness. By fairness, we quantify the impact an application has on an individual user (individual fairness) or on a group of users (group fairness). In this work, we shift our focus from a single performance measure and explore the fairness of existing algorithms in two settings: i) fair resource allocation with strategic agents, and ii) fair classification models. We divide our work into the following two parts.

Part A – Fair Allocations with Strategic Agents. We consider the setting of resource allocation, where there are multiple items and multiple agents who have preferences over these items. The agents are rational and strategic and may misreport their preferences to obtain higher gains. The social planner must find allocations that satisfy certain desirable fairness properties and are resistant to manipulation, i.e., ensure strategy-proofness. Researchers have proposed algorithms that charge agents payments in order to prevent manipulation. However, analytically designing payments that are both fair and strategy-proof is challenging. In this part, we propose a data-driven approach to learn payments that are fair and strategy-proof. We additionally consider resource allocation settings in which charging payments is not feasible, and analyze the existence of strategy-proof algorithms that ensure fair allocations. We consider certain well-known fairness notions such as envy-freeness, proportionality, and max-min share allocations. Such notions ensure only the individual fairness of the agents involved.

Part B – Fair Decisions for Groups. We consider machine-learning-based classification algorithms. The accuracy of such algorithms has been the primary concern and is widely researched. More recently, researchers have uncovered the prejudiced predictions of such models towards certain demographic groups. Owing to existing bias against certain races, genders, or age groups, the available data is often biased. The prejudices in the data, amplified by algorithms trained solely for higher accuracy, lead to unfair decisions for certain groups. Moreover, such algorithms, when made public on various online platforms, potentially leak private information about the individuals whose data was used in training. Ensuring fairness and privacy in a machine learning framework gives rise to a non-convex, complex optimization problem with multiple constraints. Towards this, we rely on learning-based approaches and exploit the immense capacity of neural networks to get closer to this goal.
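For reference, the individual-fairness notions named in Part A admit standard formalizations; the following is a sketch in conventional notation (ours, not necessarily the thesis's). For $n$ agents with valuations $v_i$ over an item set $M$, an allocation $A = (A_1, \dots, A_n)$ is

\begin{align*}
\text{envy-free (EF)} \quad &\text{if } v_i(A_i) \ge v_i(A_j) \quad \forall\, i, j;\\
\text{proportional (PROP)} \quad &\text{if } v_i(A_i) \ge \tfrac{1}{n}\, v_i(M) \quad \forall\, i;\\
\text{max-min share fair (MMS)} \quad &\text{if } v_i(A_i) \ge \max_{(P_1,\dots,P_n) \in \Pi_n(M)} \min_j v_i(P_j) \quad \forall\, i,
\end{align*}

where $\Pi_n(M)$ is the set of partitions of $M$ into $n$ bundles. For indivisible items, EF and PROP allocations need not exist (consider one item and two agents), which is why relaxations of these notions and strategy-proof mechanisms for them are studied.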
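As a rough illustration of the constrained learning problem Part B describes, the sketch below adds a group-fairness penalty (demographic parity, one common criterion) to a standard classification loss. This is a minimal, hypothetical PyTorch example; the thesis's actual objectives, constraints, and privacy mechanisms may differ.

    import torch
    import torch.nn as nn

    def demographic_parity_gap(probs, group):
        # Absolute difference in mean predicted-positive rate between the
        # two demographic groups (assumes both groups appear in the batch).
        return (probs[group == 1].mean() - probs[group == 0].mean()).abs()

    # Toy data: features x, binary labels y, binary sensitive attribute s.
    x = torch.randn(256, 10)
    y = torch.randint(0, 2, (256,)).float()
    s = torch.randint(0, 2, (256,))

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    lam = 1.0  # weight trading off accuracy against the fairness penalty

    for _ in range(200):
        opt.zero_grad()
        logits = model(x).squeeze(-1)
        probs = torch.sigmoid(logits)
        # Accuracy term plus a (non-convex) fairness penalty, optimized jointly.
        loss = bce(logits, y) + lam * demographic_parity_gap(probs, s)
        loss.backward()
        opt.step()

Treating lam as a multiplier and updating it during training yields the Lagrangian-style formulations commonly used for such constrained, non-convex problems.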