Channel: Trends – IAB Southeast Asia & India

Promoting Fair and Inclusive AI

Anyone who knows me knows that I’m a huge fan of strong television dramas. One particular favourite of mine is The Good Fight, a spin-off of the TV series The Good Wife. Set in Chicago, the show focuses on a law firm and the challenges it faces, both internal and from its clients. If you haven’t seen it yet, you are in for a treat.

My interest in The Good Fight probably has something to do with my passion for law, having studied it years ago. I’ve always loved a legal battle, and the series often goes into real-world issues such as ‘fake news,’ the Me Too movement, and the political use of social media. I love it so much because its creators push boundaries in order to create content that provokes meaningful responses from its audience. The show leaves an impact on its viewers, and that’s powerful, considering how much content is available out there today.

Personally, the episode that stands out most, as I reflect on my own career journey in Asia, focuses on artificial intelligence (AI). Its fictional depiction of the harm that bias in AI can cause spoke volumes to me.

The potential for bias in AI isn’t new, and it must be examined

It’s become increasingly clear in recent years that without thoughtfulness and care from technologists, bias can creep into AI and into the online search results it drives.

This particular dystopian episode of The Good Fight shows viewers how easy it is to programme the biases we practise in the real world into AI. Because it was so easy to do, the bias proliferated quickly and, before long, manifested itself as fact, ultimately causing more harm than good.

In this episode, a Black business owner, whose restaurant is in a predominantly Black neighborhood, wants to sue the series’ fictional tech search giant for its platform app, ChummyMaps. This app directs users away from “unsafe” neighborhoods, hiding businesses, like hers, located in those neighborhoods where people of color predominantly live.

The American Civil Liberties Union (ACLU) wrote a blog post on this episode that asked the question: “Will the algorithms that increasingly govern our economic and personal lives exacerbate racial inequality in America?” Sociologically, it’s a question that we do need to keep in our minds as we move forward towards greater utilisation of data and AI.

You might remember a lesson from biology or sociology classes in school that illustrates a similar principle: that when presented with the same stimuli, humans do not always have the same response or interpretation. There is beauty in this variation and, despite our anatomical similarities, individual brains can interpret things differently, resulting in diversity of thoughts, desires, and feelings.

That’s amazing when you think about it, because diversity in thoughts, desires, and feelings is part of what makes us human. Who wants everyone acting in the same manner? We might as well be a world full of robots. But oddly enough, absent adequate thought and attention, AI could push us toward exactly that kind of uniformity.

A recent incident made this evident to me. Last winter, while reading my news feed on LinkedIn, I was shocked that it took almost 10 scrolls before I reached a post written by a woman.

I’m using gender as an example because the imbalance is easy to see. There are 326 million female users on LinkedIn versus 430 million male users. Assuming a correlation between the number of users and the number of posts published, posts by men will outnumber posts by women.

Fewer posts by female authors result in less engagement. With less data indicating engagement with female-authored content, learning algorithms will, without intervention, ultimately start to deprioritise female-published content. This is a generalisation, but one that’s worth paying attention to.
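To see how quickly such a loop can compound, here is a minimal sketch in Python. It is a toy model, not any real platform’s ranking code: the post volumes mirror the user counts cited above (in millions), and every other number is an illustrative assumption.

```python
def simulate_feedback(rounds=20, male_posts=430, female_posts=326):
    """Toy model of an engagement-weighted feed ranker.

    Post volumes mirror the LinkedIn user counts cited above (in
    millions); everything else is an illustrative assumption.
    """
    weights = {"male": 1.0, "female": 1.0}  # the ranker starts neutral
    for _ in range(rounds):
        # Expected exposure: post volume scaled by the current ranking weight.
        exposure = {"male": male_posts * weights["male"],
                    "female": female_posts * weights["female"]}
        total = sum(exposure.values())
        # Engagement tracks exposure, and the ranker learns from engagement,
        # so each round the larger group's weight grows at the other's expense.
        weights = {g: exposure[g] / total * 2 for g in exposure}
    return weights

w = simulate_feedback()
```

After 20 rounds, the ratio between the two weights far exceeds the original 430:326 post ratio, even though neither group’s content changed: the initial imbalance compounds on itself unless something intervenes.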

It has, after all, taken women hundreds of years to progress to where we are today with gender equality. We must train the AI we rely on to operate in a manner that is fair and inclusive.

How do AI and the potential for bias play out in today’s digital world through our industry?

Since I work in ad tech, my thoughts went straight to what this means for our industry, and for me as an individual.

AI is not new. Its earliest conceptions date back to the 1950s, and its modern-day applications (such as search engines) have proliferated with the advent of the internet. With huge data sets becoming more commonplace, and cloud computing precipitously decreasing the cost of managing data, AI technology has become mainstream and can now influence every stage of the buying decision.

Ideally, AI technology should provide marketers with information about customers, their preferences, and how businesses can connect with them in meaningful ways. Implicit in this exchange is trust: trust in the AI.

To feel confident placing trust in AI, marketers and advertisers must educate themselves on how the potential for bias in AI can affect their ability to reach their desired audiences. Core to this is also understanding responsible and explainable AI, which can help guard against bias.

What can I do to promote fair and inclusive AI?

Sometimes the simplest actions can result in a butterfly effect, and that’s my initial thought with how I approach the potential for bias in AI. For me, it starts with pushing myself to be more active in getting my voice out, and educating my peers and the wider industry about amplifying theirs.

After all, AI takes its logic and decisioning from inputs. If you don’t have any input, then the AI will not include you in its decision. To illustrate, the more often you publish content and generate engagement, the more this engagement will factor into the algorithms responsible for serving up the most relevant content to consumers. On a larger scale, that translates into content diversity, which is one of the best things about the internet: more diverse content can help all of us learn from and think about different perspectives.

If I have learned anything in recent years working in the digital space, it’s that we all have a part to play in this new world with AI. We all need to be mindful of how future developments in digital technology will play out in regions like Asia, which boasts a vibrant landscape of diversity in its multiple languages, customs, and business practices.

To me, this vibrancy is truly the most amazing thing about life. And AI should serve to enhance, rather than dampen it.

This piece has been authored by Sonal Patel, IAB Southeast Asia and India Regional Board Member and Managing Director SEA, Quantcast.
