Facebook’s ad-review algorithms appear to be flagging adaptive fashion ads as policy violations. When adaptive fashion companies attempt to advertise on Facebook, their ads, like the one seen above from the company Mighty Well, are consistently rejected by these algorithms.
Adaptive fashion is a relatively new but quickly growing segment of the fashion industry. Adaptive fashion brands create a wide range of products, from patterned colostomy bags to “seated fit” undergarments designed for wheelchair users.
These ads are often flagged for promoting “medical and health care products and services,” even though they are not selling such items. Facebook’s ad algorithms cannot account for the context of these ads and often flag them simply for featuring a model who uses a wheelchair or wears a catheter sleeve.
This issue is yet another example of the implicit biases embedded in machine learning algorithms, and of how those biases can harm marginalized communities. Such algorithms often encode our own implicit biases, such as the assumption that people with disabilities are not interested in fashion, which are then deployed at scale.
“It’s the untold story of the consequences of classification in machine learning,” said Kate Crawford, the author of the upcoming book, Atlas of AI. “Every classification system in machine learning contains a worldview. Every single one.”