One day in November, a product strategist we’ll call Michelle logged into her LinkedIn account and switched her gender to male. She also changed her name to Michael. She was taking part in an experiment called #WearthePants, in which women tested the hypothesis that LinkedIn’s new algorithm was biased against them. For months, some heavy LinkedIn users had complained about drops in engagement and impressions on the career-oriented social network. The complaints began after the company’s vice president of engineering, Tim Jurka, said in August that the platform had implemented LLMs to help surface content useful to users.
Michelle was suspicious about the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who has only around 2,000. Yet the two tend to get roughly the same number of post impressions, she said. The only significant variable, she noted, was gender.
Marilynn Joyner, a founder, also changed her profile gender. She’s been posting on LinkedIn consistently for two years and noticed in the last few months that her posts’ visibility declined. She reported that after changing her gender on her profile from female to male, her impressions jumped 238% within a day. Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and others.
LinkedIn stated that its algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profiles, or posts in the Feed. The company added that side-by-side snapshots of feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias within the Feed.
Social algorithm experts agree that explicit sexism may not have been the cause, although implicit bias may be at work. Platforms are an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly, explained data ethics consultant Brandeis Marshall. She noted that changing one’s profile photo and name is just one such lever; the algorithm is also influenced by how a user has interacted, and currently interacts, with other content. Marshall said what we don’t know are all the other levers that make the algorithm prioritize one person’s content over another’s, calling it a more complicated problem than people assume.
The #WearthePants experiment began with two entrepreneurs, Cindy Gallop and Jane Evans. They asked two men to make and post the same content as them, curious to know whether gender was the reason so many women were seeing a dip in engagement. Gallop and Evans both have sizable followings, more than 150,000 combined, compared to around 9,400 for the two men at the time. Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers. Other women then took part. Some, like Joyner, who uses LinkedIn to market her business, became concerned.
But LinkedIn, like other LLM-dependent search and social media platforms, offers scant details on how its content-picking models were trained. Marshall said that most of these platforms have an embedded white, male, Western-centric viewpoint due to who trained the models. Researchers find evidence of human biases like sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning. Still, how any individual company implements its AI systems is shrouded in the secrecy of the algorithmic black box.
LinkedIn says that the #WearthePants experiment could not have demonstrated gender bias against women. The company reiterated that its systems do not use demographic information as a signal for visibility. Instead, LinkedIn told TechCrunch that it tests millions of posts to connect users to opportunities. It said demographic data is used only for such testing, such as checking whether posts from different creators compete on equal footing and whether the scrolling experience is consistent across audiences. LinkedIn has previously researched and adjusted its algorithm to try to provide a less biased experience for users.
Marshall said it’s the unknown variables that probably explain why some women saw increased impressions after changing their profile gender to male. Taking part in a viral trend, for example, can lead to an engagement boost; some accounts were posting for the first time in a long while, and the algorithm may have rewarded them for doing so. Tone and writing style might also play a part. Michelle, for example, said that the week she posted as Michael, she adjusted her tone slightly, writing in a simpler, more direct style, as she does for her husband. That’s when, she said, impressions jumped 200% and engagements rose 27%. She concluded the system was not explicitly sexist, but seemed to treat communication styles commonly associated with women as a proxy for lower value.
Stereotypically male writing is believed to be more concise, while stereotypically female writing is imagined to be softer and more emotional. If an LLM is trained to boost writing that complies with male stereotypes, that’s a subtle, implicit bias, and researchers have found that most LLMs are riddled with such biases. Sarah Dean, an assistant professor of computer science at Cornell, said that platforms like LinkedIn often use entire profiles, in addition to user behavior, when determining which content to boost. That includes jobs on a user’s profile and the type of content they usually engage with.
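The proxy effect the experts describe can be illustrated with a toy sketch. This is not LinkedIn’s system; the scoring function, the feature, and all the data below are invented for illustration. The point is that a ranker with no demographic input at all can still produce diverging scores for two groups when it rewards a feature, such as writing style, that happens to correlate with group membership:

```python
# Hypothetical sketch: a ranker trained without any gender feature can
# still penalize one group if it rewards a correlated proxy feature,
# such as a "directness" style score. All data here is synthetic.
import random

random.seed(0)

def engagement_score(directness, weight=1.0):
    """Toy ranker: rewards 'direct' writing style (a learned proxy)."""
    return weight * directness

# Synthetic posts: style scores drawn from overlapping distributions.
# Assume (for illustration only) that group A's posts skew slightly
# more 'direct' on average than group B's.
group_a = [engagement_score(random.gauss(0.6, 0.1)) for _ in range(1000)]
group_b = [engagement_score(random.gauss(0.5, 0.1)) for _ in range(1000)]

avg_a = sum(group_a) / len(group_a)
avg_b = sum(group_b) / len(group_b)

# Gender never appears as a feature, yet average scores diverge
# because the proxy feature is correlated with group membership.
print(f"avg score, group A: {avg_a:.3f}")
print(f"avg score, group B: {avg_b:.3f}")
```

Because the model only ever sees the style score, dropping the demographic field changes nothing; the correlation does the work. That is the sense in which a system can be implicitly, rather than explicitly, biased.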
LinkedIn told TechCrunch that its AI systems look at hundreds of signals to determine what is pushed to a user, including insights from a person’s profile, network, and activity. The company runs ongoing tests to understand what helps people find the most relevant, timely content for their careers. Member behavior also shapes the feed; what people click, save, and engage with changes daily, and what formats they like or don’t like. This behavior also naturally shapes what shows up in feeds alongside any updates from the platform.
Nevertheless, it seems that many people, across genders, either don’t like or don’t understand LinkedIn’s new algorithm. Shailvi Wakhulu, a data scientist, told TechCrunch that she’s averaged at least one post a day for five years and used to see thousands of impressions; now she and her husband are lucky to see a few hundred. She called it demotivating for content creators with large, loyal followings. One man reported about a 50% drop in engagement over the past few months. Still, another man said he’s seen post impressions and reach increase more than 100% in a similar time span, attributing it to writing on specific topics for specific audiences, which he said is what the new algorithm rewards.
In Marshall’s experience, posts about her life as a Black woman perform more poorly than posts unrelated to her race. She noted that if Black women only get interactions when they talk about Black women, but not when they talk about their particular expertise, that’s a bias. Dean, the Cornell researcher, believes the algorithm may simply be amplifying whatever signals already exist. It could be rewarding certain posts not because of the demographics of the writer, but because there’s been more of a history of response to them across the platform.
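The amplification dynamic researchers describe can also be sketched with a toy simulation. Again, this is hypothetical, not LinkedIn’s system: the `simulate_feed` function and its numbers are invented. It shows how a feed that allocates exposure in proportion to accumulated engagement widens a small initial difference between two otherwise identical posts, with no demographic signal anywhere in the loop:

```python
# Hypothetical sketch of the amplification dynamic: a feed that ranks by
# engagement history compounds small initial differences over time.
# Synthetic data only; not LinkedIn's actual ranking system.

def simulate_feed(initial_engagement, rounds=10, boost=0.2):
    """Each round, posts with more accumulated engagement get a larger
    share of exposure, which earns them proportionally more engagement."""
    engagement = list(initial_engagement)
    for _ in range(rounds):
        total = sum(engagement)
        # Exposure share is proportional to accumulated engagement.
        engagement = [e + boost * e / total * 100 for e in engagement]
    return engagement

# Two nearly identical posts; one starts with a small head start.
final = simulate_feed([10.0, 11.0])
print(f"post A: {final[0]:.1f}, post B: {final[1]:.1f}")
# The absolute gap between the posts grows every round, even though
# the model never sees anything about who wrote them.
```

In this toy version the rich-get-richer loop preserves the posts’ ratio while the absolute gap compounds, which is one simple way a ranking system can reward “whatever signals there already are.”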
LinkedIn offered some insights into what works well now. The company said the user base has grown, and as a result, posting is up 15% year-over-year while comments are up 24%. This means more competition in the feed. Posts about professional insights and career lessons, industry news and analysis, and education or informative content around work, business, and the economy are all doing well.
Ultimately, many people are just confused. Michelle said she wants transparency. But content-picking algorithms have always been closely guarded secrets, and transparency can make them easier to game, so that’s a big ask. It’s one that’s unlikely ever to be satisfied.

