Call for Papers: IP&M Special Issue on Fair and Explainable Information Access Systems for Social Good

 

Information Processing & Management (IP&M) - Elsevier

 

Impact Factor: 6.222

Scimago: Q1 journal

 

Important dates:

- Submission system opens: October 15th, 2021

- Submission system closes: April 28th, 2022 (extended from March 30th)

 

A flyer with condensed information about this call for papers is available.

 

BACKGROUND AND SCOPE

 

This special issue addresses research on the design, maintenance, evaluation, and study of fair and explainable information access systems, including recommender systems, search, (interactive) question answering, and conversational systems. In particular, it addresses what it means for an information access system to be fair, and how to assess the social and human impact of such systems when applied to equity and social welfare scenarios. The SI also addresses the explainability of information-seeking systems, a topic that has gained importance with the rise of AI regulations such as the EU GDPR and the California Consumer Privacy Act of 2018. Explainable AI (XAI) is hence a critical theme: it aims to devise machine learning models that provide interpretable outcomes, i.e., results that can be understood with little inspection.
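As an illustration of what an "interpretable outcome" can look like, the following minimal Python sketch scores an item with a linear relevance model whose per-feature contributions directly explain each score. The feature names and weights are hypothetical assumptions for illustration, not part of this call.

    # Minimal sketch (hypothetical feature names and weights): an
    # intrinsically interpretable linear scorer whose per-feature
    # contributions explain each relevance score with little inspection.
    WEIGHTS = {"query_term_match": 0.6, "click_history": 0.3, "freshness": 0.1}

    def score_with_explanation(feature_values):
        """Return a relevance score plus each feature's contribution."""
        contributions = {f: w * feature_values[f] for f, w in WEIGHTS.items()}
        return sum(contributions.values()), contributions

    score, why = score_with_explanation(
        {"query_term_match": 0.9, "click_history": 0.5, "freshness": 0.2}
    )
    print(f"score={score:.2f}")
    for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
        print(f"  {feature}: {contribution:+.2f}")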

 

 

The special issue calls for innovative papers that open up new vistas for fair and explainable information-seeking research and stimulate activity towards addressing new, long-term challenges of interest to the community. Submissions must be scientifically rigorous while also introducing new perspectives. The SI's highlight is "Information Access Systems for Social Good": innovative research that demonstrates benefits to the general public is especially encouraged.

 

The questions addressed under each criterion include the following:

-       Fairness: what might 'fairness' mean in the context of information access (or information seeking)? How could an information access system be unfair, and how could we measure such unfairness (one concrete possibility is sketched after this list)? Which stakeholders are involved in the definition of fairness? How can we measure biases in information-seeking systems?

-       Explainability: how can intuitive explanations be provided? What goal does the explanation achieve, e.g., improving the system's transparency, persuasiveness, trustworthiness, effectiveness, or scrutability? Which techniques are effective for presenting explanations?
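To make the fairness questions above concrete, the sketch below (in Python) quantifies one form of unfairness in a single ranked result list: the disparity in position-discounted exposure received by different provider groups. The function names, group labels, and the logarithmic discount (as in DCG) are illustrative assumptions rather than a prescribed metric.

    # Minimal sketch (illustrative, not a prescribed metric): exposure
    # disparity across provider groups in one ranking, using a DCG-style
    # position discount of 1 / log2(rank + 1).
    import math
    from collections import defaultdict

    def group_exposure_shares(ranking, item_group):
        """Share of position-discounted exposure received by each group."""
        exposure = defaultdict(float)
        for rank, item in enumerate(ranking, start=1):
            exposure[item_group[item]] += 1.0 / math.log2(rank + 1)
        total = sum(exposure.values())
        return {group: e / total for group, e in exposure.items()}

    def exposure_disparity(ranking, item_group):
        """Gap between the most- and least-exposed groups (0 = parity)."""
        shares = group_exposure_shares(ranking, item_group)
        return max(shares.values()) - min(shares.values())

    # Items i1, i2 come from provider group "A"; i3, i4 from group "B".
    groups = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
    print(exposure_disparity(["i1", "i2", "i3", "i4"], groups))  # ~0.27

Under this (hypothetical) metric, a value near 0 indicates that groups receive comparable exposure, while larger values flag rankings that systematically push one group's items down.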

 

TOPICS

 

We solicit different types of contributions (research papers, replicability and reproducibility studies, resource papers) on fairness and explainability. Of particular interest are case studies of successful practices in domains with large societal impact (e.g., healthcare, insurance, lending, news, educational systems), but also with large financial impact (e.g., e-commerce sites, travel booking sites, job search sites, dating sites, etc.). Studies that try to understand how algorithmic decisions fulfill some of the 17 Sustainable Development Goals proposed by the United Nations in its 2030 Agenda (see https://sdgs.un.org/goals) are also encouraged, such as goal 3 (health and well-being), goal 5 (gender equality), or goal 10 (reduced inequalities). Contributions should focus on, but are not limited to, the following areas. If in doubt about suitability, please contact the Guest Editors.

 

Designing fair and explainable Information Access systems:

-       How to define fairness or explainability in Information Access systems

-       Impact of these definitions when systems are tailored for social good

-       Analysis of constraints to implement these systems, such as collecting proper data, addressing biases or inequalities in the data, using simulations or synthetic data, etc.

-       Novel user models for explainable or fair information access using heterogeneous content, audio-visual content, or crowdsourcing techniques, while accounting for the impact of user emotion, personality, context, and individual cognitive differences

-       Analysis of interactional and presentational aspects of explanation and fairness in Information Access systems (multi-modality, level of interactivity)

Evaluating fair and explainable Information Access systems:

-       How to evaluate fairness or explainability in Information Access systems

-       Impact of these models when systems are tailored for social good

-       Comparison of evaluation measurements when assessing these systems, such as defining objective metrics, user studies, understanding settings of offline experiments, influence of multiple stakeholders in evaluation, etc.

-       Exploration of domains with large societal or financial impact: healthcare, insurance, lending, news, educational systems, e-commerce sites, travel booking sites, job search sites, dating sites, etc.

-       Measuring UX and design aspects of Information Access systems for social good

 

Interventions towards fair and explainable Information Access systems:

-       How to modify current Information Access systems to achieve some level of fairness or explainability

-       Impact of these interventions when systems are tailored for social good

-       Discussion of necessary preconditions or achievable post-hoc analyses when modifying these systems, such as designing protocols to mitigate biases, exploratory analysis of explainable systems, etc.

-       Causal and counterfactual inferences for fairness, explanation, and transparency

-       Novel techniques in adversarial machine learning, graph learning, deep neural networks, etc. to improve explainability and fairness of Information Access systems

 

PAPER SUBMISSION AND REVIEW

 

Submitted papers must conform to the author guidelines available on the IPM journal website at <https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors>. Authors are required to submit their manuscripts online through the IPM submission site at <https://www.editorialmanager.com/IPM/default.aspx>, article type 'SI: Fair&Explain4SocialGood'.

 

We encourage original submissions of excellent quality that have not been submitted to or accepted by other journals or conferences. Significantly extended variants of conference or workshop papers (with at least 30% novel content) are welcome. In this case, the authors should ensure that the submission references the previous publication and elaborates on the differences between the submitted manuscript and the preceding version(s). The authors should also highlight, in a designated section of the paper, the novel contributions compared with the preliminary version, to help the guest editors and reviewers identify the differences.

Submissions will be evaluated by at least two independent reviewers on the basis of relevance to the special issue, novelty, clarity, originality, significance of contribution, technical quality, and quality of presentation. The editors reserve the right to reject without review any submission deemed to be outside the scope of the special issue. Authors are welcome to contact the special issue editors with questions about scope before preparing a submission.

 

GUEST EDITORS/CONTACT

-     Alejandro Bellogín, Universidad Autónoma de Madrid, Spain, alejandro.bellogin@uam.es

-     Yashar Deldjoo, Polytechnic University of Bari, Italy, deldjooy@acm.org

-     Pigi Kouki, Relational AI, USA, pigikouki@gmail.com

-     Arash Rahnama, Amazon Research, USA, arashrahnama@gmail.com