International Workshop on Fair and Interpretable Learning Algorithms (FILA 2020)

December 10 - 13, 2020
Atlanta, GA, USA

In conjunction with the IEEE International Conference on Big Data (IEEE BigData 2020)
December 10 - 13, 2020
Atlanta Marriott Marquis
Atlanta, GA, USA


Introduction

With the proliferation of artificial intelligence (AI) and machine learning (ML) in every aspect of our automated, digital, and interconnected society, the fairness, explainability, and interpretability of AI and ML algorithms have become critically important. If the output of an algorithm in response to a query is neither transparent to nor interpretable by humans, people will always question its correctness and fairness. An algorithm is considered fair if its results are independent of sensitive but task-irrelevant variables (e.g., gender, race, ethnicity, sexual orientation). For an algorithm to be interpretable, it should not only output its results but also produce a certificate showing that those results were computed according to the expected specifications. This is our motivation for organizing the International Workshop on Fair and Interpretable Learning Algorithms (FILA 2020).
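As a rough illustration of the independence-based notion of fairness described above, the following minimal sketch (not part of the workshop material) measures a demographic-parity gap: the difference in a classifier's positive-prediction rates across two groups defined by a sensitive attribute. The function name and the toy data are hypothetical and for illustration only.

```python
# Minimal sketch: checking (approximate) independence between predictions
# and a binary sensitive attribute via the demographic-parity gap.
# The data below is made up purely for illustration.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_0 = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_group_1 = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_group_0 - rate_group_1)

# Example: predictions for 8 individuals, 4 in each group.
y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, sensitive))  # 0.5 -> far from independent
```

A gap near zero suggests the predictions are (statistically) independent of the sensitive attribute in this simple sense; other fairness criteria discussed in the workshop papers refine or replace this notion.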


The objectives of the FILA 2020 workshop are to:



Workshop Schedule

Date of Workshop: December 10, 2020 (All times are in EST.)

Time | Title | Presenter/Author
09:00-09:10 | Welcome Talk | Abhijnan Chakraborty and Mayank Singh
09:10-10:00 | Keynote Talk: Network Derivative Mining | Dr. Hanghang Tong
10:00-10:25 | S31212: Analyzing ‘Near Me’ Services: Potential for Exposure Bias in Location-based Retrieval | Ashmi Banerjee, Gourab K Patro, Linus W. Dietz, and Abhijnan Chakraborty
10:25-10:40 | S31214: Interpretable Machine Learning for Understanding Epidemic Data | Dean Frederick Hougen, Jin-Song Pei, and Sai Teja Kanneganti
10:40-11:05 | S31210: A Generic Framework for Black-box Explanations | Clément Henin and Daniel Le Métayer
11:05-11:20 | Coffee Break
11:20-11:45 | S31211: Fairness for Whom? Understanding the Reader's Perception of Fairness in Text Summarization | Anurag Shandilya, Abhisek Dash, Abhijnan Chakraborty, Kripabandhu Ghosh, and Saptarshi Ghosh
11:45-12:00 | S31205: Fairness Metrics: A Comparative Analysis | Pratyush Garg, John Villasenor, and Virginia Foggo
12:00-12:25 | S31209: Validation of an Alternative Neural Decision Tree | Yu Leo Lu and Chunming Wang
12:25-12:50 | S31206: BeFair: Addressing Fairness in the Banking Sector | Alessandro Castelnovo, Riccardo Crupi, Giulia Del Gamba, Greta Greco, Aisha Naseer, Daniele Regoli, and Beatriz San Miguel Gonzalez
12:50-13:00 | Closing Remarks


Call for Papers

The International Workshop on Fair and Interpretable Learning Algorithms (FILA 2020) will provide a venue for academic researchers, industry professionals, and government partners to come together to present and discuss research results, use cases, innovative ideas, challenges, and opportunities that arise from designing machine learning algorithms that are fair and interpretable. Since the task is inherently multi-disciplinary, the workshop aims to foster collaboration between research communities spanning Algorithms, Theoretical Computer Science, and Network Science as well as Artificial Intelligence, Machine Learning, Data Science, Social Choice, Game Theory, and Computational Social Science. This year, our focus is specifically on fairness. We encourage submissions spanning the full range of theoretical and applied work. Topics of interest include, but are not limited to:

Identification of unfairness and biases
  • Biases in popularly used machine learning datasets
  • Fairness audits on the use of sensitive data
  • Biases in news and social media
  • Investigation of black-box systems, particularly web platforms and algorithms
  • Biases in machine learning from complex data (e.g., networks, time series)
  • Biases in information retrieval and natural language processing
  • Other novel application domains such as economics, healthcare, and climate studies
  • Machine learning in the context of developing countries and other under-represented communities
Designing fair learning algorithms
  • The statistical and computational complexity of fair machine learning
  • Online and stochastic optimization methods for fair machine learning
  • Fair machine learning through Bayesian methods
  • Achieving fairness through computational social choice and game theory
  • Fairness, accountability, transparency and ethics in search
  • Fairness and diversity in recommender systems
  • Fairness-aware algorithms for social impact
  • Evaluation methods for fair and interpretable machine learning
  • Fairness beyond supervised learning, e.g., clustering and reinforcement learning
  • Interpretability of neural network and deep learning algorithms
  • Accountability in human-in-the-loop machine learning

Important Dates

  • Paper Submission: October 7, 2020
  • Notification: November 1, 2020
  • Camera Ready: November 15, 2020
  • Workshop: December 10, 2020
All submission deadlines are at 11:59 PM Eastern Standard Time.

Paper Guidelines

  • Paper Submission Guidelines

    We welcome submissions related to the above research areas. Submissions may include late-breaking results and work in progress. We also solicit vision or position papers. As IEEE BigData 2020 will no longer take place physically in Atlanta, Georgia, USA, and will instead be held virtually, accepted workshop submissions will be presented as videos in the virtual presentation session.

  • Submission Site: Cyberchair's FILA submission portal

    Please note that we will distinguish long vs. short papers based on their length. For non-archival submissions, please add a note to the title indicating the type of submission. We will also confirm the submission type with the authors of accepted papers at the time of camera-ready submission.

  • Archival and Non-archival

    FILA 2020 offers authors the choice between archival and non-archival paper submissions. Accepted archival papers will appear in the published proceedings of the workshop, whereas accepted non-archival papers will appear only on the workshop website and not in the proceedings. Authors of non-archival papers are free to also submit their work for publication elsewhere. Note that all submissions will be judged by the same quality standards, regardless of whether the authors choose the archival or non-archival option. Authors of all accepted papers must register and present their work at the workshop, regardless of whether their paper is archival or non-archival. For non-archival submissions, please add a note to the title indicating the type of submission.

  • Long and Short Papers

    Authors may choose between two formats: long (10 pages) and short (4 pages), both in the IEEE Computer Society proceedings manuscript format. Both formats will be rigorously peer-reviewed; we will classify papers as long or short based on their length.

  • Writing Guidelines

    All submissions must follow the IEEE Computer Society proceedings manuscript format. You are strongly encouraged to print and double-check your PDF file before submission, especially if your paper contains non-English symbols (such as Chinese or Korean characters, or Latin letters with European diacritics). Complete papers are required; abstracts and incomplete papers will not be reviewed. FILA follows a single-blind review process. The formatting guidelines are available at: https://www.ieee.org/conferences/publishing/templates.html


Keynote Speaker

Dr. Hanghang Tong will deliver the keynote talk, "Network Derivative Mining" (see the Workshop Schedule above).


Organization

  • General Chairs: Arindam Pal (Data61, CSIRO and Cyber Security CRC, Sydney, New South Wales, Australia) and Yinglong Xia (Facebook AI, Santa Clara, CA, USA)
  • Program Chairs: Abhijnan Chakraborty (Max Planck Institute for Software Systems, Saarbrücken, Germany) and Mayank Singh (IIT Gandhinagar, Gujarat, India)
  • Web and Publicity Chair: Shivam Patel (IIT Gandhinagar, Gujarat, India)
  • Technical Program Committee:
    • Abhisek Dash (IIT Kharagpur, India)
    • Ana-Andreea Stoica (Columbia University, USA)
    • Ancsa Hannák (University of Zurich, Switzerland)
    • Animesh Mukherjee (IIT Kharagpur, India)
    • Arpita Biswas (Google Research, India)
    • Chayan Sarkar (TCS Research and Innovation)
    • Dongxi Liu (Data61, CSIRO)
    • Gourab Patro (IIT Kharagpur, India)
    • Jiaojiao Jiang (University of New South Wales)
    • Kai Wang (Facebook)
    • Kripa Ghosh (IISER Kolkata, India)
    • Kunal Banerjee (Walmart)
    • Maunendra Desarkar (IIT Hyderabad, India)
    • Nina Grgic-Hlaca (MPI SWS, Germany)
    • Nipun Batra (IIT Gandhinagar, India)
    • Saptarshi Ghosh (IIT Kharagpur, India)
    • Sharif Abuadbba (Data61, CSIRO)
    • Suranga Seneviratne (University of Sydney)
    • Sushmita Ruj (Data61, CSIRO)
    • Tanmoy Chakraborty (IIIT Delhi)
    • Till Speicher (MPI SWS, Germany)
    • Udit Bhatia (IIT Gandhinagar, India)