The urgent need for a national framework to combat data bias in India


In today’s world, we are surrounded by a variety of artificial intelligence (AI) tools that have made our lives easier and more efficient. From ordering groceries online to automating business processes, these tools have become an integral part of our daily routine. The global adoption rate of AI has steadily increased to 35 per cent, a four-point rise from 2021. However, as we continue to embrace these technologies, it is crucial to examine their potential impact on society, the economy and the job market. 

Growing concerns about AI bias  

As AI systems are only as unbiased as the data they are trained on, the use of AI brings with it a significant challenge concerning data bias. Data bias arises mainly from a lack of diversity in the data used to train the systems. When an AI system is trained on data that does not represent the population it serves, it becomes incapable of making unbiased decisions. For instance, in healthcare, a study examining only the medical records of white patients would not represent the entire population, since it fails to include data from patients who are Black, Indigenous and People of Colour.

Several AI systems use intricate algorithms that are challenging to comprehend or interpret. As a result, identifying biases in the training data may not be sufficient, as understanding how these biases impact the system’s decisions can be problematic.

According to our recent industry report, 66 per cent of businesses expect to increase their dependence on AI in the next five years. However, 78 per cent are worried that data bias will become a more significant issue with this increased reliance on AI/ML. The trend is consistent across regions, reflecting these companies’ concern about current levels of data bias in their organisations. While 57 per cent of respondents indicated that their business decision-makers were worried about data bias, an even larger share, 65 per cent, believed that data bias was already present in their organisations.

Addressing data bias 

Many business respondents have only recently begun to acknowledge and tackle data bias. Out of the respondents, six per cent are yet to initiate investigations into data bias. About 36 per cent are beginning to learn and address data bias, while 45 per cent have started implementing tools, training and policies to address data bias. Only 13 per cent have committed to continuously evaluating how they use tools, training and culture to tackle data bias.

Effective measures to combat data bias include education and training, improving the transparency and traceability of algorithms and data, spending more time on model building, training and evaluation, and using tools to detect bias within datasets. Despite some progress, 77 per cent of the respondents felt that their organisations still needed to do more to understand data bias. While the effective measures identified by the respondents primarily focused on developing skills, practices and training, 65 per cent believed that technology and tools were the most pressing need to better tackle data bias. About 59 per cent believe that they need more training, while 49 per cent believe that they need to adjust their strategy or vision.
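To make the idea of “tools to detect bias within datasets” concrete, here is a minimal, hypothetical sketch of one such check: comparing each group’s share of a training dataset against its share of the population served, and flagging gaps. The function name, group labels and tolerance are illustrative assumptions, not a reference to any specific product discussed in the report.

```python
from collections import Counter

def representation_gaps(records, group_key, population_shares, tolerance=0.05):
    """Flag groups whose share of the dataset deviates from their share
    of the served population by more than `tolerance` (all hypothetical)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical dataset that under-represents group "B":
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(representation_gaps(records, "group", {"A": 0.6, "B": 0.4}))
# → {'A': {'observed': 0.9, 'expected': 0.6}, 'B': {'observed': 0.1, 'expected': 0.4}}
```

Real tools add statistical tests and intersectional breakdowns, but even a simple proportion check like this surfaces the representativeness problem described above.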

Several organisations fail to address bias in key areas of their operations, such as hiring and leadership diversity. This extends to design shortcomings as well, such as sites that are not responsive or user friendly and inadequate usability testing. 

However, the data on organisational maturity shows encouraging signs of progress. About 67 per cent of businesses have assessed technology options to address data bias, while 40 per cent take data bias into account when evaluating AI/ML vendors. Additionally, 76 per cent recognised the need for a centralised approach to addressing data bias, rather than relying on individual departments to handle it in isolation. These findings suggest that a comprehensive approach that integrates people, tools, training and ongoing policy monitoring will be necessary to effectively address data bias in AI/ML practices.

Tackling data bias in AI for a more equitable future

As companies aspire for sustainable value, the use of AI and ML will only increase. More data scientists, line-of-business practitioners and programmers will dive into datasets and produce more and more algorithms. The challenge then involves carefully considering all aspects of a project to avoid the consequences of unconscious bias. Bias impacts day-to-day business, from security and governance lapses and bad business decisions to lost customer trust and potential legal and ethical exposure. And these risks do not even begin to cover the profound consequences experienced by victims of data bias, including those who suffer adverse outcomes resulting from intrinsically biased AI algorithms.

For AI to be sustainable over time, the pool of those developing these algorithms must expand. It not only needs to include those across the racial and gender spectrums but also must include those with less advanced degrees and those who hail from a broader cross-section of professions and backgrounds. Cultural training on inclusiveness is not enough. Companies must also provide technical training on dataset management so expert practitioners can develop protocols to detect, remediate and avoid creating biased algorithms. 

Every touchpoint within the entire technology or development stack and process must consider and factor in the reality of data bias. This includes all aspects of data selection and preparation, business logic development and analytical models, testing and results analysis. Only a continuous commitment to assessment and removal will ensure the bias does not perpetuate in the organisation. 
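As one illustration of what bias checks at the “testing and results analysis” touchpoint can look like, the sketch below computes a simple demographic-parity gap: the difference in positive-prediction rates between groups. The data, group labels and functions are hypothetical; real deployments would use several fairness metrics, not this one alone.

```python
def positive_rate_by_group(predictions, groups):
    """Share of positive (1) predictions for each group label."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    return rates

def parity_gap(rates):
    """Largest difference in positive-prediction rates across groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs for two groups:
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = positive_rate_by_group(preds, groups)
print(rates)             # → {'A': 0.8, 'B': 0.0}  (order may vary)
print(parity_gap(rates)) # → 0.8
```

A large gap like this is a signal to go back and examine the training data and model, not proof of bias by itself, which is why the article’s call for continuous assessment matters.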

Eliminating AI data bias will take a combination of technology, training and practices to prevent it from entering the development process. But as our world grows more dependent on machines to make vital decisions that impact lives, it’s up to those leading these efforts to ensure that biases do not creep in.



Views expressed above are the author’s own.

