I love The West Wing. Before post-COVID remote working, I thrived on walking around the office and talking business at light speed, like they do in the show. Recently, I was thinking about a particular episode in which a group of cartographers visits the White House seeking to replace the widely used map of the world, the Mercator projection. It was originally designed to help European sailors navigate the oceans, and it did so at the cost of distorting the relative sizes of continents. Europe, for example, appears larger than South America, when in reality it’s only about half the size. The cartographers argued that these visual representations shape how we see global power. We unconsciously equate size with importance and the top of the map with superiority over the bottom. Thus Europe, the United States, and China get a boost in their perceived dominance over the world. Maps shape how we perceive reality.
And just as the Mercator map silently shaped how generations understood the world, I started thinking about how generative AI now shapes how we see professions, genders, cultures, and even ourselves. But unlike static maps, these AI “maps” redraw themselves constantly, according to collective user behavior, filtering, and personalization. These adjustments happen automatically and shape users in ways they are not aware of. There is no turnkey model for equal representation; every system reflects its biased data and the design choices behind it.
I remember the early bias issues AI had in its first foray into the world: the Gender Shades study revealed that face-recognition systems from major vendors had significantly higher error rates on darker-skinned women than on lighter-skinned men. There was the Amazon recruiting tool that penalized résumés containing the word “women’s” (or coming from graduates of women’s colleges). And there was the COMPAS risk-score controversy, in which the algorithm mislabeled Black defendants as “higher risk” at a higher rate than white defendants.
When biases are obvious, they can be addressed head-on. However, as the complexity of these AI models grows, the biases become harder to spot.
To better understand this, consider a 2023 study on fairness and bias in AI. It shows how bias can creep in through the data used to train AI, the way algorithms are built, or the human decisions made along the way. It addresses generative AI as well, where the bias lives in the generated images themselves. Text-to-image systems like Stable Diffusion learn statistical patterns from the internet, which often over-represent certain demographics, aesthetics, and roles. Another study found systematic biases in gender-power associations, occupational depictions (like “CEO” or “nurse”), and demographic portrayals across multiple models.
These biases can reinforce social hierarchies, deepen inequalities, and spread stereotypes. Especially now, as generative AI creates much of the media people consume through search engines, entertainment, and educational material, it is shaping the narrative that billions of people engage with daily. Over time, these outputs normalize particular views that subtly shift public perception of who holds power, who belongs, and what “typical” looks like.
Addressing these issues isn’t as simple as cleaning the code. Generative AI introduces unique technical challenges: its latent spaces encode complex statistical patterns that are hard to interpret, and biases often surface in underspecified prompts, where models revert to cultural “defaults.” Unlike traditional classification systems, where bias can be measured through error rates, bias in generative models is representational, baked into the content itself. That makes it harder to quantify and regulate.
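To make that measurement challenge concrete, here is a minimal sketch of what a representational-bias audit could look like: generate many images for an underspecified prompt, label a coarse demographic attribute on each, and compare the resulting shares against a reference. The functions generate_image and classify_attribute are hypothetical stand-ins for a text-to-image model and an attribute classifier, not any particular vendor's API.

```python
from collections import Counter

def audit_prompt(prompt, generate_image, classify_attribute, n_samples=200):
    """Estimate how images generated for an underspecified prompt
    (e.g. "a photo of a CEO") distribute across a demographic attribute."""
    counts = Counter()
    for _ in range(n_samples):
        image = generate_image(prompt)            # hypothetical model call
        counts[classify_attribute(image)] += 1    # hypothetical attribute label
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representation_gap(observed, reference):
    """Largest absolute deviation between observed shares and a reference
    distribution (e.g. real-world occupational statistics)."""
    groups = set(observed) | set(reference)
    return max(abs(observed.get(g, 0.0) - reference.get(g, 0.0)) for g in groups)
```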
The 2023 study does, however, provide several strategies: improve training datasets to increase diversity, fine-tune models to counter harmful stereotypes, and apply post-processing filters or reranking to balance outputs. Yet these measures often treat symptoms rather than causes. Because generative models are used across industries and updated continuously, mitigation also requires interdisciplinary approaches that combine technical auditing, policy oversight, and cultural awareness.
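As one illustration of the post-processing idea, a reranker can oversample candidates from the model and then pick a final set that no single group dominates. This is a simplified sketch, assuming each candidate image carries a hypothetical predicted group label and a quality score; it is not taken from the study itself.

```python
def rerank_balanced(candidates, k, group_of, score_of):
    """Select k outputs from an oversampled candidate pool, rotating through
    groups round-robin so the final set is not dominated by one group.
    group_of and score_of are assumed, illustrative callables."""
    buckets = {}
    for c in sorted(candidates, key=score_of, reverse=True):
        buckets.setdefault(group_of(c), []).append(c)   # best-scored first, per group

    selected = []
    while len(selected) < k and any(buckets.values()):
        for group in list(buckets):
            if buckets[group] and len(selected) < k:
                selected.append(buckets[group].pop(0))
    return selected
```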
Some companies and educational institutions are making efforts: as far back as 2020, the University of California, Berkeley created a playbook on addressing bias in AI. In 2022, Siemens released its own report on detecting and diminishing biases. And just last year, in 2024, the dean of Emory’s Goizueta Business School wrote a research paper on mitigating bias in high-stakes scenarios. The work is being done, even as biases run deep and change comes slowly.
On a practical level, at Solvd we are applying these bias-reducing strategies in multimodal commerce – the next frontier of consumer engagement in retail. We have developed realistic ‘try-on’ visuals that preserve body context, and we use an automated QA layer that verifies realism and garment accuracy and ensures brand safety before final output. We also generate body type, pose, and skin tone with ethical review and fairness testing, so that visuals align with user identity. We believe AI can be intentionally designed to respect identity and inclusion at every stage, and we understand that brands will lose consumer trust if AI isn’t implemented correctly, securely, and responsibly.
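As a simplified illustration of the fairness-testing step (placeholder labels and thresholds, not our production code), a QA gate can compare the attribute mix in a batch of generated visuals against a target distribution and fail the batch when any group drifts too far:

```python
def fairness_gate(generated_attrs, target_shares, tolerance=0.10):
    """Illustrative QA check: fail a batch of generated visuals if the share
    of any attribute bucket (e.g. skin tone) drifts beyond tolerance from
    its target share. Labels and thresholds are placeholders."""
    total = len(generated_attrs)
    drift = {}
    for group, target in target_shares.items():
        observed = sum(1 for a in generated_attrs if a == group) / total
        if abs(observed - target) > tolerance:
            drift[group] = round(observed - target, 3)
    return len(drift) == 0, drift   # (passes, per-group drift)

# Example: a batch heavily skewed toward one bucket fails the gate.
ok, drift = fairness_gate(
    ["light"] * 80 + ["medium"] * 15 + ["dark"] * 5,
    {"light": 0.34, "medium": 0.33, "dark": 0.33},
)
```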
In The West Wing, the cartographers conclude by suggesting we switch to the correctly proportioned Gall-Peters map, and also flip it upside down to ensure equality. The White House official responds, “You can’t do that.” They ask, “Why not?” And she says, “Because it’s freaking me out.”
Changing how we see the world can be incredibly hard. But it isn’t too late. In 2017, Boston Public Schools started using the Gall-Peters map, so new generations of kids will be more tuned into the reality of the globe we live on. As one researcher put it: “The world map is not the Earth but it can influence the worldviews that guide our thoughts and actions.”
AI is currently mapping the data from millions of people on a daily basis. We are at the beginning of mainstream AI design, but it’s moving fast. With diverse data, transparent systems, and ethical frameworks that involve multiple fields working together, we can build AI that’s fair, accountable, and reflective of a wide range of human experiences. Right now, we have the power to create an artificial world map that represents us all.