Real Estate

Zillow Launches AI Tool To Stop Fair Housing Discrimination

The Fair Housing Classifier is an open-source tool aimed at helping real estate, technology and civil rights organizations create smarter, fair-housing-compliant chatbot and search functions for their websites. The code is available on GitHub.

Zillow has released the Fair Housing Classifier, an open-source artificial intelligence tool that enables platforms powered by large language model (LLM) technology to more effectively filter questions that violate fair housing rules.

“Since 2006, Zillow has used AI to bring transparency to home shoppers, powering tools like the Zestimate,” Zillow SVP of Artificial Intelligence Josh Weisberg said in a written statement on Tuesday. “We’ve made it our business to increase transparency in real estate — open-sourcing this classifier demonstrates that advancements in technology do not need to come at the expense of equity and fairness for consumers.”

The Fair Housing Classifier works with large language models (LLMs), the generative AI technology that powers chatbots and search tools.

The classifier focuses on questions that may lead to steering, the illegal practice of pushing a homebuyer toward — or away from — a neighborhood based on race, color, national origin, religion, sex (including gender identity and sexual orientation), familial status or disability.

While the classifier does the heavy lifting of identifying questions that may violate fair housing rules, it’s up to system developers to determine how the chatbot will respond. In an example provided by Zillow, the chatbot declined a question about “adults-only neighborhoods in Seattle,” which relates to familial status.

“I’m not able to respond to this because it relates to what’s considered a legally protected class (or group of people) under fair housing laws,” the example response read. “We’re committed to all fair housing principles and requirements. I’m more than happy to help with any other questions related to housing.”
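
For developers wiring the classifier into a chatbot, the flow above amounts to a simple guardrail: score each incoming question, and let the application decide what to do with a flagged one. The Python sketch below illustrates that pattern under stated assumptions; the function names and the keyword-based stub are hypothetical stand-ins, not Zillow's actual API, and a real integration would call the open-source model in place of looks_non_compliant.

    # Minimal guardrail sketch. All names here are illustrative; the
    # keyword stub below stands in for Zillow's ML classifier, which
    # would supply the compliant/non-compliant decision in practice.

    FAIR_HOUSING_REFUSAL = (
        "I'm not able to respond to this because it relates to what's "
        "considered a legally protected class (or group of people) under "
        "fair housing laws. We're committed to all fair housing principles "
        "and requirements. I'm more than happy to help with any other "
        "questions related to housing."
    )

    def looks_non_compliant(question: str) -> bool:
        """Hypothetical stand-in for the classifier: flags questions
        touching on protected classes. The real tool is a trained
        model, not a keyword list."""
        protected_class_cues = ("adults-only", "race", "religion",
                                "national origin", "disability")
        q = question.lower()
        return any(cue in q for cue in protected_class_cues)

    def call_llm(question: str) -> str:
        """Placeholder for whatever LLM backend powers the chatbot."""
        return f"(LLM answer to: {question})"

    def answer(question: str) -> str:
        # The classifier only flags the question; the developer chooses
        # the response, e.g. the canned refusal from Zillow's example.
        if looks_non_compliant(question):
            return FAIR_HOUSING_REFUSAL
        return call_llm(question)

    print(answer("Are there adults-only neighborhoods in Seattle?"))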

National Fair Housing Alliance’s Chief Responsible AI Officer Michael Akinwumi said the Fair Housing Classifier is another important step in addressing “algorithmic harms” homebuyers face.

According to a recent Zillow survey, 57 percent of respondents said they’ve experienced housing discrimination, with younger generations, renters, LGBTQ+ people and people of color more likely to experience issues. However, only 42 percent of respondents said fair housing impacts them, highlighting a gap in consumer education about fair housing laws and regulations.

“In today’s rapidly evolving AI landscape, promoting safe, secure and trustworthy AI practices in housing and lending is becoming increasingly important to protect consumers against algorithmic harms,” Akinwumi said in a written statement. “Zillow’s open-source approach sets an admirable precedent for responsible innovation. We encourage other organizations and coalition groups to actively participate, test, and enhance the model and share their findings with the public.”

The code and framework for the Fair Housing Classifier are available on the developer platform GitHub. A Zillow spokesperson said system developers can reach the Zillow team through an email alias on GitHub for help with classification issues and genuine use cases that could help optimize the classifier’s performance.

“We’re really proud of the work of Zillow’s ethical AI team and compliance team. And although we’re confident in the classifier’s ability to identify instances of non-compliance, we are dedicated to continual improvement,” the spokesperson told Inman. “This commitment is why we are the first company in real estate to have made this technology open source.”

“Allowing others to test the Fair Housing Classifier provides us with the chance to explore potential enhancements for our model and pinpoint edge cases for further refinement,” they added.

Email Marian McPherson