---
res:
  bibo_abstract:
  - Word embedding models reflect biases towards genders, ethnicities, and other
    social groups present in their underlying training data. Metrics such as ECT,
    RNSB, and WEAT quantify bias in these models based on predefined word lists that
    represent social groups and bias-conveying concepts. However, it remains unclear
    how suitable these lists (let alone the bias metrics in general) actually are
    for revealing bias. In this paper, we study how to assess the quality of bias
    metrics for word embedding models. In particular, we present a generic method,
    Bias Silhouette Analysis (BSA), that quantifies the accuracy and robustness of
    such a metric and of the word lists used. Given a biased and an unbiased reference
    embedding model, BSA systematically applies the metric to both models for several
    subsets of the lists. The variance and rate of convergence of each model's bias
    values then indicate the robustness of the word lists, whereas the distance between
    the models' values indicates the general accuracy of the metric with the word
    lists. We demonstrate the behavior of BSA on two standard embedding models for
    the three metrics mentioned above, using several word lists from existing research.@eng
  bibo_authorlist:
  - foaf_Person:
      foaf_givenName: Maximilian
      foaf_name: Spliethöver, Maximilian
      foaf_surname: Spliethöver
      foaf_workInfoHomepage: http://www.librecat.org/personId=84035
    orcid: 0000-0003-4364-1409
  - foaf_Person:
      foaf_givenName: Henning
      foaf_name: Wachsmuth, Henning
      foaf_surname: Wachsmuth
      foaf_workInfoHomepage: http://www.librecat.org/personId=3900
  bibo_doi: 10.24963/ijcai.2021/77
  dct_date: 2021^xs_gYear
  dct_language: eng
  dct_title: 'Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics
    for Word Embedding Models@eng'
...
