Applying Artificial Intelligence to business applications gives us the capacity to automate repetitive tasks and to increase business revenue by adapting to users’ needs, profiles and behaviours. However, the training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural biases and prejudices.
Consequently, when we develop data-driven AI products and feed them data, we should take care neither to confirm existing biases nor to introduce new ones.
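To make the mechanism concrete, the following minimal sketch (using entirely hypothetical data and a made-up hiring scenario) shows how a model trained on historically skewed outcomes will simply reproduce the disparity present in its training labels:

```python
# Hypothetical historical hiring records: (group, hired).
# Group "A" was hired at a higher rate than group "B" for reasons
# unrelated to qualification -- a bias baked into the labels.
records = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 40 + [("B", 0)] * 60

def selection_rate(records, group):
    """Fraction of applicants in `group` with a positive outcome."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A model that learns the historical base rate per group (e.g. any
# classifier that can exploit group membership, directly or via a
# correlated proxy feature) will reproduce this gap in its predictions.
rate_a = selection_rate(records, "A")      # 0.70
rate_b = selection_rate(records, "B")      # 0.40
demographic_parity_gap = rate_a - rate_b   # ~0.30
```

The gap computed here corresponds to the demographic parity difference commonly used as a group-fairness metric; a model fitted to these labels confirms the existing bias even though no bias was deliberately introduced.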
Several attempts to fix bias in particular algorithms or products have been proposed in the literature. However, these fixes are application-specific, tailored to the product in question, and thus cannot be generalized. The underlying mechanisms by which bias arises, from both computational and design perspectives, have not yet been thoroughly explored.
The aim of this research project is to:
– Develop a theoretical understanding of how algorithms may become discriminatory and biased.
– Understand which product design and development choices lead to the transmission and amplification of existing biases and prejudices.
– Establish a fundamental theoretical framework for bias prevention. Such a framework would address the problem from a machine-learning mathematical perspective as well as from a product design, management and development perspective.
Consequently, this research proposal will help advance the state of the art from heuristic, after-the-fact repair to proactive, theoretically grounded prevention, tackling the bias problem from both algorithmic and design perspectives.