IIIT Hyderabad Publications
Title: Agent Reputation and Reward Fairness in Peer-Based Crowdsourcing Mechanism
Author: Samhita Kanaparthy
Date: 2023-01-11
Report no: IIIT/TH/2023/1
Advisor: Sujit Prakash Gujar

Abstract

Crowdsourcing effectively solves a large variety of tasks by employing a distributed human population. A primary challenge in crowdsourcing systems is aggregating information from multiple reports provided by potentially unreliable or malicious agents. As a result, research in this area has focused on incentivising agents to exert effort and report truthfully. In particular, Peer-Based Mechanisms (PBMs) reward agents for reporting accurately and truthfully. However, we observe that crowdsourcing systems built on PBMs may not be fair: because PBMs evaluate agents' reports by their consistency with peers' reports, agents may not receive the rewards they deserve despite investing effort and reporting truthfully. Such unfair rewards may discourage participation.

Motivated by this, we aim to build a general framework that assures fairness in PBMs. Towards this, we propose providing trustworthy agents with additional chances of pairing while evaluating their reports. These additional chances reduce the penalty trustworthy agents incur from unfair pairings, improving their expected reward. To decide which agents receive additional chances, we adopt a reputation model that quantifies agents' trustworthiness in the system. Based on this approach, we build a general iterative framework, REFORM, which adopts the reward scheme of any existing PBM together with a suitable reputation model. To quantify fairness in PBMs, we introduce two general notions of fairness, namely γ-fairness and qualitative fairness. γ-fairness measures how close the expected reward a PBM assures a truthful agent is to the optimal reward it can provide. Qualitative fairness prioritises agents who consistently report accurately over other agents.

We also consider settings in which tasks are time-sensitive: the task's requester expects agents to submit their reports as early as possible. We refer to such settings as temporal settings. In a temporal setting, the reputation model must account for both the accuracy of reports and the time taken to report; however, no existing reputation model considers reporting time. Towards this, we introduce the Temporal Reputation Model (TERM), which assigns scores to agents based on their reporting behaviour and the time taken to report. Finally, we demonstrate REFORM's significance by deploying the framework with RPTSC's reward scheme and TERM. Specifically, we prove that REFORM considerably improves fairness while incentivising truthful and early reports. Furthermore, we conduct synthetic simulations to validate our results.

Full thesis: pdf

Centre for Machine Learning Lab
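To make the γ-fairness notion above concrete, here is a minimal formalisation consistent with the abstract's description; the symbols (R_M(i) for the reward mechanism M gives truthful agent i, and R* for the optimal reward the mechanism can provide) are our own assumed notation, not the thesis's:

    A PBM M is γ-fair, for γ ∈ (0, 1], if for every truthful agent i:
        E[R_M(i)] ≥ γ · R*

A larger γ means the expected reward the mechanism assures a truthful agent is closer to the optimal reward, with γ = 1 the best achievable.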
||||||||
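The abstract's core idea, granting trustworthy agents additional pairing chances, can be sketched in a few lines of Python. This is an illustrative toy under assumed names and parameters, not the thesis's actual mechanism: REFORM would plug in a real PBM reward scheme (e.g. RPTSC's) and TERM reputation scores in place of the stand-ins below.

    import random

    def agreement_reward(report, peer_report):
        # Toy agreement-based reward: 1 if the paired reports match, else 0.
        # A real PBM (e.g. RPTSC) defines a more careful reward scheme.
        return 1.0 if report == peer_report else 0.0

    def reform_reward(agent_report, peer_reports, reputation,
                      rep_threshold=0.7, extra_chances=2):
        # Trustworthy agents (reputation above an assumed threshold) get
        # additional random peer pairings and keep the best outcome,
        # reducing the penalty from a single unfair pairing.
        chances = 1 + (extra_chances if reputation >= rep_threshold else 0)
        sampled = random.sample(peer_reports, min(chances, len(peer_reports)))
        return max(agreement_reward(agent_report, p) for p in sampled)

For instance, a trustworthy agent whose truthful report disagrees with one randomly drawn peer still earns the full reward if any of its extra pairings agree, which is how the additional chances raise a truthful agent's expected reward.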