Exposure Metric #92
Labels: enhancement (New feature or request)
The following is a simple unit test of the expu metric (described in detail below):

import pandas as pd
import FairRankTune as frt
from cmn.metric import *

# Each tuple is (member id, group membership flag, relevance score);
# below, the False group is treated as protected and True as nonprotected.
member_probs = [(0, True, 0.99), (1, True, 0.93), (2, True, 0.90), (3, True, 0.89), (4, True, 0.89), (5, False, 0.86), (6, False, 0.77), (7, False, 0.70), (8, False, 0.68), (9, False, 0.66)]
dic_before, dic_after = dict(), dict()
dic_before['expu'], dic_after['expu'] = {'protected': [], 'nonprotected': []}, {'protected': [], 'nonprotected': []}
# EXPU(ranking dataframe, item -> group dict, relevance dataframe, group aggregation method)
exp_before, per_group_exp_before = frt.Metrics.EXPU(pd.DataFrame(data=[j[0] for j in member_probs]), dict([(j[0], j[1]) for j in member_probs]), pd.DataFrame(data=[j[2] for j in member_probs]), 'MinMaxRatio')
try:
    dic_before['expu']['protected'].append(per_group_exp_before[False])
except KeyError:  # the ranking contains no protected members
    dic_before['expu']['protected'].append(0)
try:
    dic_before['expu']['nonprotected'].append(per_group_exp_before[True])
except KeyError:  # the ranking contains no nonprotected members
    dic_before['expu']['nonprotected'].append(0)
dic_before['expu']['expu'] = exp_before
reranked_list = [(4, False, 0.89), (5, False, 0.86), (6, False, 0.77), (7, False, 0.70), (8, False, 0.68), (9, True, 0.66), (0, True, 0.99), (1, True, 0.93), (2, True, 0.90), (3, True, 0.89)]
exp_after, per_group_exp_after = frt.Metrics.EXPU(pd.DataFrame(data=[j[0] for j in reranked_list]), dict([(j[0], j[1]) for j in reranked_list]), pd.DataFrame(data=[j[2] for j in reranked_list]), 'MinMaxRatio')
dic_after['expu']['protected'].append(per_group_exp_after[False])
dic_after['expu']['nonprotected'].append(per_group_exp_after[True])
dic_after['expu']['expu'] = exp_after
print('per group and overall expu before', dic_before)
print('per group and overall expu after', dic_after)

The output of this sample would be as follows:

per group and overall expu before {'expu': {'protected': [0.43463221231851584], 'nonprotected': [0.6409693736694332], 'expu': 0.6780857716029791}}
per group and overall expu after {'expu': {'protected': [0.7560151586870236], 'nonprotected': [0.3650114918098291], 'expu': 0.48280975270885695}}
Tracking completed results:
det_cons:
det_relaxed:
det_greedy:
The exposure metric in fair ranking measures how visible or accessible items (such as search results, recommendations, or candidates) are to users, with a focus on ensuring that this visibility is distributed equitably across different groups. The primary concern is to ensure that the ranking algorithm does not disproportionately favor one group over another, leading to biased outcomes.
There are different variations of exposure-based metrics. In Adila, we have so far included two of them, as follows:
Group Exposure (exp)

compares the average exposures of groups in the ranking(s) and does not consider relevances or scores associated with items. It aligns with the fairness concept of statistical parity. For a ranking τ and an item x_i, the exposure follows the standard logarithmic position discount of Singh et al.:

$\mathrm{exposure}(x_i \mid \tau) = \frac{1}{\log_2(1 + \tau(x_i))}$

where τ(x_i) is the 1-indexed position of x_i in τ. The average exposure for a protected group g_j is then:

$\mathrm{avgExp}(g_j \mid \tau) = \frac{1}{|g_j|} \sum_{x_i \in g_j} \mathrm{exposure}(x_i \mid \tau)$

The range of this metric and its most fair value depend on the group aggregation function used: with MinMaxRatio (the ratio of the minimum to the maximum per-group average exposure), the metric ranges from 0 to 1, with 1 being the most fair setting.
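For illustration, here is a minimal pure-Python sketch of group exposure; the function and variable names are ours rather than FairRankTune's, but the 1/log2(1 + position) discount is the one that reproduces the unit-test numbers above:

import math

def exposure(position):
    # Logarithmic position discount: higher-ranked items (smaller
    # 1-indexed position) receive more exposure.
    return 1.0 / math.log2(1 + position)

def avg_group_exposure(ranking, groups):
    # ranking: item ids from top to bottom; groups: item id -> group flag.
    totals, counts = {}, {}
    for pos, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] = totals.get(g, 0.0) + exposure(pos)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

ranking = list(range(10))
groups = {i: i < 5 for i in ranking}  # top half in one group, bottom half in the other
per_group = avg_group_exposure(ranking, groups)
# MinMaxRatio aggregation: min over max of the per-group averages, in [0, 1].
print(per_group, min(per_group.values()) / max(per_group.values()))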
Exposure Utility (expu)

assesses whether groups receive exposure proportional to their relevance in the ranking(s). This is a form of group fairness that considers the scores (relevances) associated with items. The per-group metric is the ratio of group average exposure to group average utility, where group average exposure is measured exactly as in group exposure (exp). Group average utility for a group g_j is:

$\mathrm{avgU}(g_j) = \frac{1}{|g_j|} \sum_{x_i \in g_j} u(x_i)$

where u(x_i) is the relevance score of item x_i, so the per-group value is $\mathrm{avgExp}(g_j \mid \tau) / \mathrm{avgU}(g_j)$. The same rules as in the previous section apply to this metric's range and most fair setting.
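As a self-contained sketch (again with our own helper names and the same logarithmic discount), the following computes per-group expu and its MinMaxRatio aggregation; run on the "before" ranking of the unit test, it reproduces the per-group values printed there:

import math

def expu_per_group(ranking, groups, relevance):
    # Per group: average exposure divided by average relevance (utility).
    exp_sum, util_sum, counts = {}, {}, {}
    for pos, item in enumerate(ranking, start=1):
        g = groups[item]
        exp_sum[g] = exp_sum.get(g, 0.0) + 1.0 / math.log2(1 + pos)
        util_sum[g] = util_sum.get(g, 0.0) + relevance[item]
        counts[g] = counts.get(g, 0) + 1
    return {g: (exp_sum[g] / counts[g]) / (util_sum[g] / counts[g]) for g in counts}

# The "before" data from the unit test: (member id, group flag, relevance).
member_probs = [(0, True, 0.99), (1, True, 0.93), (2, True, 0.90), (3, True, 0.89), (4, True, 0.89),
                (5, False, 0.86), (6, False, 0.77), (7, False, 0.70), (8, False, 0.68), (9, False, 0.66)]
per_group = expu_per_group([j[0] for j in member_probs],
                           {j[0]: j[1] for j in member_probs},
                           {j[0]: j[2] for j in member_probs})
print(per_group, min(per_group.values()) / max(per_group.values()))
# -> {True: 0.6409..., False: 0.4346...} and 0.6780..., matching the output above.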
These metrics and their variations were originally proposed by Singh et al.