
Exposing Inequities: Diversity Gaps In The Creative Sector And The Challenge Of Biased Data

In the fast-paced realm of the creative industry, innovation is the lifeblood that keeps businesses thriving and evolving. Yet, beneath the surface of groundbreaking ideas and artistic brilliance lies a pressing issue: the lack of inclusivity. As the industry marches forward, it’s essential to address the disparities that exist, not only in the workforce but also in the very processes designed to foster growth. One such challenge is the biased data conundrum, especially concerning the innovations in the recruiting process.

AI Scanning

Recent research has drawn attention to AI tools used in hiring that assess traits such as body shape, facial features, hands, irises, and even a candidate's scent.

WeSee, a London company, claims its technology can identify suspicious activity by scanning the micro-expressions on people's faces. Its claim of 100% accuracy in detecting negative feelings has understandably been met with scepticism; WeSee has qualified it by stating that its machine learning is accurate in 90% of cases. The remaining 10%, however, carries significant weight, given the technology's potential influence on life-changing decisions.

Studies investigating how AI-based facial recognition interprets emotions differently depending on an individual's race reveal troubling findings:

 ‘AI displays racial disparities in their emotional scores and are more likely to assign negative emotion to black men’s faces. Face++ interprets black players as angrier for every level of Smile. Microsoft only interprets black players as more contemptuous for ambiguous and/or non-smiling pictures.’ 

Imagine a situation where an AI system mistakenly reads a candidate's expression as hostile or suspicious, obstructing their advancement in the selection process. Worse, if a candidate were flagged as a threat and falsely accused of a crime, their employability could suffer drastically, putting them at a lasting disadvantage in the job market.

Such scenarios underscore the need for ongoing research and development to improve the AI systems, making them more reliable.  

AI Scanning and Biased Data: A Troubling Alliance 

The datasets used to train AI algorithms are often biased. Historical hiring data, which forms the basis of many hiring algorithms, can reflect the biases present in past hiring decisions. For instance, if a company historically favoured candidates from specific demographics, an AI system trained on that data will perpetuate those preferences, making it harder for individuals from underrepresented groups to get hired. This persistent algorithmic bias not only hampers diversity and inclusion but also deepens existing societal disparities. It creates a feedback loop in which the algorithms continue to reinforce and amplify prejudices, making it exceedingly difficult to break the cycle of discrimination.
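The feedback loop described above can be made concrete with a minimal, hypothetical sketch. The groups, figures, and "model" below are entirely illustrative, not real data or a real hiring system: a scorer that simply learns historical hire rates per demographic group will reproduce whatever bias the history contains.

```python
# Hypothetical sketch: a scoring "model" trained on biased historical
# hiring decisions reproduces those decisions. All names and numbers
# here are made up for illustration.

def train_hire_rates(history):
    """Learn the historical hire rate for each demographic group."""
    totals, hires = {}, {}
    for group, hired in history:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

# Biased history: group A was hired 80% of the time, group B only 20%.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

model = train_hire_rates(history)
print(model)  # {'A': 0.8, 'B': 0.2} -- the bias is now "learned"

# Feedback loop: if the model's outputs become tomorrow's training
# data, the gap persists (or widens) with every retraining cycle.
```

If the scores this model assigns are later fed back in as new "ground truth", each retraining round re-learns the same disparity, which is the cycle the paragraph above warns about.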

Diversity Gap In AI Development

A closer look reveals a stark imbalance in the industry's workforce. The data indicates that at Facebook, merely 15% of AI research employees are women, and the numbers aren't much better elsewhere: at Google, another tech giant, women make up only 10% of the AI workforce. These numbers tell a story of underrepresentation, highlighting the need for a transformative shift in the tech landscape.

But it doesn’t stop there.  

When it comes to racial diversity, the figures are even more disheartening. At major tech companies including Google, Facebook, and Microsoft, the representation of black workers hovers between a mere 2.5% and 4%. 

What Can We Do? 

Addressing this issue requires not only awareness but also active efforts to reevaluate and diversify the datasets, ensuring fairness and equal opportunities for all candidates. 

One of the most effective ways is to incorporate a wide range of expressions, cultures, and backgrounds, so the algorithms can be trained to be more inclusive and less prone to misinterpretations, becoming a reflection of the global population. 
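One naive way to act on this idea, sketched below purely for illustration (the grouping key and samples are hypothetical, and real dataset curation is far more involved), is to oversample underrepresented groups so that each group contributes equally to the training set.

```python
# Illustrative sketch, not a production method: naive oversampling to
# balance group representation in a training set before training.
import random

def balance_by_group(samples, key, seed=0):
    """Oversample each group up to the size of the largest group."""
    rng = random.Random(seed)  # seeded for reproducibility
    groups = {}
    for sample in samples:
        groups.setdefault(sample[key], []).append(sample)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate random members of smaller groups to reach the target.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "A"}] * 9 + [{"group": "B"}] * 3
balanced = balance_by_group(data, "group")
# Each group now contributes 9 samples instead of 9 vs. 3.
```

Oversampling alone cannot invent genuinely new expressions or backgrounds; it only rebalances what was already collected, which is why the paragraph above stresses gathering broader data in the first place.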

There is also a pressing need for transparency in AI systems. Companies and organisations developing AI technologies should be open about their algorithms and datasets, allowing independent audits to identify and rectify potential biases. It would also build trust between the customers and companies, as well as pave the way for fair and accountable AI systems. 
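One concrete check an independent audit might run is the "four-fifths rule" used in US employment-selection guidance: a process is flagged when any group's selection rate falls below 80% of the highest group's rate. The sketch below is a minimal illustration with made-up outcome data, not a real audit tool.

```python
# Minimal sketch of a four-fifths-rule audit on selection outcomes.
# Groups and numbers are hypothetical.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> rate per group."""
    totals, picks = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picks[group] = picks.get(group, 0) + int(selected)
    return {g: picks[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """True for groups whose rate is at least 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) >= 0.8 for g, rate in rates.items()}

outcomes = ([("A", True)] * 6 + [("A", False)] * 4 +
            [("B", True)] * 3 + [("B", False)] * 7)

print(four_fifths_check(outcomes))  # {'A': True, 'B': False}
```

Here group B is selected at half the rate of group A, so the check flags it; an auditor with access to a system's inputs and outputs can run exactly this kind of test, which is why openness about algorithms and datasets matters.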

The Human Touch: Why Diversity and Inclusivity Matter

A recent study revealed that companies with a diverse workforce are 35% more likely to outperform those with less diversity. 

Diversity and inclusivity are not just buzzwords; they are essential for the creative industry's growth and innovation. Diverse teams bring different perspectives, ideas, and problem-solving approaches to the industry. But real inclusivity is more than having a seat at the table; it is ensuring that every voice is not only heard but valued. That sense of belonging raises employee morale, creating a positive feedback loop where creativity and productivity thrive and, ultimately, better creative output follows.


The potential of AI is astonishing, but it must be approached with ongoing vigilance and professionalism. By pairing the human touch with cutting-edge technology, we can strike a balance that serves future advancements. Education, too, has a positive impact, not just on a brand's progress but on personal outlooks and beliefs.
