A recent study by researchers at Princeton University and the University of Southern California suggests that Meta’s algorithms for displaying educational ads show signs of racial bias, particularly in delivering ads associated with for-profit colleges and universities with a history of predatory marketing practices.
The Register reports that the research paper, titled “Auditing Racial Discrimination in the Delivery of Educational Ads,” will be presented at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) in Rio de Janeiro, Brazil. The authors found that Meta’s algorithms disproportionately showed ads from for-profit universities, which have historically engaged in predatory practices, to black users compared to ads from public universities.
The findings raise concerns that discrimination could extend beyond the scope of current remedies, which are limited to housing, employment, and credit ads. In 2019, Meta (then known as Facebook) was charged by the U.S. Department of Housing and Urban Development (HUD) with violating the U.S. Fair Housing Act by allowing housing advertisers to block their ads from being shown to people of certain races. Meta settled the charges in June 2022 and committed to developing a new system to address racial and other disparities caused by its housing ad personalization algorithms.
The researchers’ methodology paired two otherwise comparable educational ads, one of which is tied to historical racial disparities that ad-delivery algorithms may reproduce: if ads from for-profit universities are shown to proportionally more black users than ads from public universities, the platform’s algorithmic choices can be considered racially discriminatory.
Aleksandra Korolova, one of the authors, said that while only Meta knows exactly how its advertising algorithm works, the researchers suspect that the observed algorithmic effects are driven not by the direct use of race but by proxies or other historical data that Meta may rely on.
The findings could have legal implications for Meta under the disparate impact principle of discrimination. The researchers argue that Meta has taken a narrow view of compliance, focusing on discrimination in housing, employment and credit ads while not adequately addressing algorithmic bias in ad delivery more broadly.
Meta spokesperson Daniel Roberts told The Register: “Addressing advertising fairness is an industry-wide issue, and we have worked with civil rights groups, academics and regulators to promote fairness in the advertising system. Our advertising standards do not allow advertisers to run ads that discriminate against an individual or group of individuals based on personal attributes such as race, and we are actively building technology designed to make further progress in this area.”
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.