California made publishing AI-altered listing photos without disclosure a misdemeanor on January 1, 2026. Not a fine. Not a warning. A misdemeanor. AB 723 requires any digitally altered image in a listing to carry a clear disclosure AND provide access to the original unaltered photo. A realtor in Kelowna, British Columbia already got hit with a misleading advertising fine for failing to disclose AI virtual staging. And NAR's updated 2026 Code of Ethics now requires that AI applications meet MLS standards across the board.
So yeah. This is real now.
I use AI more than almost any agent in the country. At Neuhaus Realty Group, AI handles property valuations, content creation, lead identification, market analysis, and about a dozen other things I’ve written about before. And precisely because I use it this heavily, I take compliance seriously. The agents who will get in trouble aren’t the ones using AI carefully. They’re the ones using it carelessly.
Let's walk through what you're probably getting wrong.
Virtual Staging: The One That Already Has Teeth
This is the compliance area with the most enforcement activity right now, and it’s the one most agents are sleepwalking through.
If you use AI to virtually stage a listing photo, you must disclose it. Period. “Virtually staged” isn’t a nice-to-have label. It’s a legal requirement in California (AB 723, effective January 2026), a Code of Ethics obligation under NAR Articles 2 and 12, and an MLS rule in virtually every major market. HAR MLS in Houston requires a watermark at the bottom of every digitally altered photo that reads “image does not represent actual property as is.” Most other MLS systems require similar labeling.
And here's where it gets interesting. California's law doesn't just require a label. It requires you to provide access to the original, unaltered image. Either include it in the listing or provide a publicly accessible link. That's a higher bar than most agents realize.
The Kelowna case made this real for people who weren’t paying attention. A realtor got fined for misleading advertising because they used AI-enhanced images without disclosure. Not Photoshop touch-ups. Not color correction (which is exempt by the way, along with cropping, white balance, and exposure adjustments). AI virtual staging that added furniture and decor that didn’t exist in the property. That’s the line.
I use staging strategies with my sellers all the time. But when we use virtual staging in photos, every single image gets labeled. No exceptions. It takes about 30 seconds to add the disclosure. Skipping it to make your listing look prettier is genuinely one of the dumbest risks you can take right now.
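If you shoot a lot of listings, the labeling step is even automatable. Here's a minimal sketch in Python, assuming the Pillow imaging library is installed; the folder names and label wording are placeholders, and your own MLS's exact required text is what controls.

```python
# Minimal sketch: stamp a disclosure label on every virtually staged photo.
# Assumes the Pillow library (pip install Pillow). Paths and wording are
# placeholders -- check your MLS for the exact required text.
from pathlib import Path
from PIL import Image, ImageDraw

LABEL = "VIRTUALLY STAGED - image does not represent actual property as is"

def stamp_disclosure(src: Path, dst: Path) -> None:
    img = Image.open(src).convert("RGB")
    draw = ImageDraw.Draw(img)
    bar_height = max(28, img.height // 20)  # keep the bar readable at any size
    # Solid bar across the bottom so the label can't blend into the photo.
    draw.rectangle([0, img.height - bar_height, img.width, img.height],
                   fill=(0, 0, 0))
    draw.text((10, img.height - bar_height + 6), LABEL, fill=(255, 255, 255))
    img.save(dst, quality=95)

if __name__ == "__main__":
    out = Path("labeled")
    out.mkdir(exist_ok=True)
    for photo in Path("staged_photos").glob("*.jpg"):  # hypothetical folder
        stamp_disclosure(photo, out / photo.name)
```

Thirty seconds by hand, or zero seconds in a batch script. Either way, there's no excuse for skipping it.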
AI-Generated Property Descriptions: Your License Is on the Line
Ok so this one doesn’t have the same enforcement mechanism as virtual staging yet. But it should scare you more.
Here’s the scenario. You paste your listing details into ChatGPT and ask it to write a compelling property description. The AI writes something beautiful. It mentions the “walking distance to the community pool” that doesn’t exist. Or the “recently updated HVAC” that was actually replaced in 2014. Or it describes the neighborhood as “family-friendly” in a way that could be interpreted as steering under the Fair Housing Act.
You paste it into the MLS without reading it carefully. You just made a material misrepresentation. And your license is on the line.
I wrote about the first casualty of AI in real estate a few months ago. The theme was that AI kills your mediocre tech stack before it kills anything else. But there’s a compliance corollary here: AI also amplifies whatever sloppiness already existed in your process. If you’ve been writing sloppy listing descriptions, AI will make them sloppier faster.
The fix is simple but non-negotiable. Review every word of AI-generated content before it goes client-facing. Every. Word. AI hallucinates. It invents features, it makes up amenities, it gets square footage wrong. Nassim Taleb would call this a “silent risk,” the kind where the output LOOKS authoritative and polished even when it’s completely wrong. That confident tone is exactly what makes it dangerous.
I have a rule at Neuhaus Realty Group: no AI-generated content touches a client without human review. Period. My systems produce drafts. Humans verify facts. The AI is the research assistant. I’m still the broker.
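One cheap mechanical backstop on top of the human read: cross-check every number the AI asserts against the facts you personally verified. This is my own habit, not an industry-standard tool; the listing facts and draft below are made up for illustration.

```python
# Minimal sketch: flag numeric claims in an AI draft that don't appear in
# your verified source record. Anything flagged gets a human look before
# it touches the MLS. The data here is hypothetical.
import re

# Facts you have personally verified, straight from your records.
VERIFIED_FACTS = "3 bed, 2 bath, 1,850 sqft, HVAC replaced 2014, built 1962"

# An AI draft awaiting review.
draft = ("Charming 3 bed, 2 bath home with 1,850 sqft "
         "and a recently updated HVAC (2021).")

NUMBER = re.compile(r"\d(?:[\d,]*\d)?")  # matches 3, 1,850, 2014, etc.
known = set(NUMBER.findall(VERIFIED_FACTS))
for claim in NUMBER.findall(draft):
    if claim not in known:
        print(f"VERIFY: '{claim}' is in the draft but not in your facts")
# VERIFY: '2021' is in the draft but not in your facts
```

A script like this catches the invented HVAC date. It will never catch a hallucinated community pool. That's why the human read is the rule and this is just a backstop.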
Fair Housing and AI: The Compliance Bomb Nobody Sees Coming
This is the big one. And almost nobody in the industry is talking about it with the urgency it deserves.
HUD issued formal guidance in 2024 confirming that the Fair Housing Act applies to AI-powered tenant screening, advertising, and content generation. The Colorado AI Act (effective June 30, 2026) goes further, requiring formal impact assessments for any AI used in “consequential decisions” including housing. Penalties under the Fair Housing Act for AI-related violations can exceed $100,000 for repeat offenses, plus compensatory and punitive damages in private lawsuits.
But here’s the part that should keep you up at night if you’re using AI to generate marketing content. AI can produce language that violates fair housing law in subtle ways you might not catch. Describing a neighborhood as “great for young professionals” could be interpreted as age discrimination. Mentioning “close to churches” could signal religious preference. Using phrases like “perfect for families with children” in marketing (not just descriptions) can be interpreted as familial status steering.
I’ve written about fair housing implications in the context of private exclusive listings. The principle is the same here. The Fair Housing Act doesn’t care whether discrimination was intentional. Disparate impact is enough. And if your AI is generating content that has a discriminatory effect, you’re responsible. Not the AI. Not OpenAI. Not Google. You.
This is where I see the biggest gap between what agents are doing and what they should be doing. Most agents using AI for content creation have zero fair housing review process. They generate, they paste, they publish. That workflow is a lawsuit waiting to happen.
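A simple phrase scan is not a compliance program, but it catches the obvious misses before they ship. Here's a minimal sketch; the phrase list is illustrative and nowhere near exhaustive, and a clean result means "no obvious flags," never "compliant."

```python
# Minimal sketch: flag marketing copy for fair-housing review before it
# ships. The phrase list is illustrative, not exhaustive -- it cannot
# replace a human read through a compliance lens.
FLAGGED_PHRASES = {
    "young professionals": "possible age steering",
    "perfect for families": "possible familial-status steering",
    "close to churches": "possible religious preference",
    "exclusive community": "can imply exclusionary intent",
    "safe neighborhood": "can imply demographic coding",
}

def fair_housing_flags(text: str) -> list[str]:
    lowered = text.lower()
    return [f"'{phrase}' -> {reason}"
            for phrase, reason in FLAGGED_PHRASES.items()
            if phrase in lowered]

draft = "Charming bungalow, perfect for families, close to churches."
for flag in fair_housing_flags(draft):
    print("REVIEW:", flag)
```

The point isn't the script. The point is having any gate at all between "generate" and "publish."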
Client Data and AI: Stop Feeding Secrets to Free Tools
I’ll keep this one short because the principle is straightforward even if the implications are huge.
When you copy a client’s financial pre-approval letter into ChatGPT to “summarize it for the file,” where does that data go? When you paste a client’s divorce settlement details into an AI tool to help you understand the property division, who else can access that? When you upload transaction documents to an AI summarizer, what happens to those documents?
Most consumer-grade AI tools (the free ones agents love to use) explicitly state in their terms of service that they may use your inputs for training data. That means your client’s financial details, personal circumstances, and transaction information could theoretically end up informing the AI’s responses to other users.
I use business-grade AI tools with clear data handling policies. Not because I’m paranoid. Because my clients trust me with sensitive information and that trust isn’t something I’m willing to risk to save $20 a month on a ChatGPT subscription. If you’re handling client PII (personally identifiable information), use tools that are built for professional use. Know their data retention policies. And for the love of all that is holy, don’t paste your client’s tax returns into a free AI chatbot.
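If notes absolutely must pass through a general-purpose tool, scrub the obvious identifiers first. A minimal sketch with illustrative regex patterns; real client documents need business-grade handling, not a script like this alone.

```python
# Minimal sketch: redact obvious PII from text before it goes anywhere
# near a consumer AI tool. These patterns are illustrative and incomplete.
import re

PII_PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",
    r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",
    r"\$[\d,]+(?:\.\d{2})?": "[DOLLAR AMOUNT]",
}

def redact(text: str) -> str:
    for pattern, placeholder in PII_PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

note = "Buyer Jane (jane@example.com, 555-201-3344) is pre-approved for $612,500."
print(redact(note))
# Buyer Jane ([EMAIL], [PHONE]) is pre-approved for [DOLLAR AMOUNT].
```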
AI Valuations Are Not Appraisals (Including Mine)
I built an automated CMA system that I’m genuinely proud of. It pulls MLS data, adjusts for condition and location, and generates value ranges that are impressively accurate. I’ve tested it against actual closing prices and the delta is remarkably small.
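For the curious, the core mechanic of any comp-adjustment model fits in a few lines. A toy sketch; the weights here are invented for illustration and are not the calibrated values my system actually uses.

```python
# Toy sketch of the comp-adjustment idea behind an automated CMA: take
# recent sales, adjust each one toward the subject property, report the
# range. All numbers below are invented for illustration.
SUBJECT = {"sqft": 1850, "condition": 4}  # condition on a 1-5 scale

COMPS = [  # hypothetical recent sales
    {"price": 742_000, "sqft": 1790, "condition": 4},
    {"price": 705_000, "sqft": 1900, "condition": 3},
    {"price": 768_000, "sqft": 1820, "condition": 5},
]

PRICE_PER_SQFT = 400     # illustrative marginal $/sqft
CONDITION_STEP = 15_000  # illustrative $ per condition point

def adjusted_value(comp: dict) -> int:
    value = comp["price"]
    value += (SUBJECT["sqft"] - comp["sqft"]) * PRICE_PER_SQFT
    value += (SUBJECT["condition"] - comp["condition"]) * CONDITION_STEP
    return value

values = sorted(adjusted_value(c) for c in COMPS)
print(f"Suggested range: ${values[0]:,} - ${values[-1]:,}")
```

A real system earns its keep in how it calibrates those weights against actual closings, not in the arithmetic.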
But it is not an appraisal. It is an opinion of value tool. And I make that distinction clear to every single client who sees one.
This matters legally because appraisals are governed by USPAP (Uniform Standards of Professional Appraisal Practice) and must be performed by licensed appraisers. An AI-generated valuation, no matter how sophisticated, does not meet that standard. If you present an AI valuation to a client in a way that implies it carries the same weight as an appraisal, you’re creating liability for yourself.
My approach: I use AI valuations as conversation starters with sellers. "Here's what the data suggests your home is worth based on recent comparable sales. This is not an appraisal. This is my analysis tool helping us have a smarter conversation about pricing." That framing is honest, it's legally defensible, and it's more persuasive anyway because clients appreciate the transparency. Nobody trusts the agent who claims to know the exact value of their home. They trust the agent who shows them the data and lets them think.
Ed’s Framework: Five Rules for AI Real Estate Compliance
I’ve been refining this framework for about two years now. It’s simple because compliance frameworks that aren’t simple don’t get followed.
1. Disclose AI use in all visuals. Every virtually staged photo gets labeled. Every AI-enhanced image gets disclosed. No exceptions, no “it’s just minor changes,” no gray area. California made this criminal. Other states will follow. Get ahead of it now.
2. Review all AI-generated client-facing content. Your license is on the line for everything you publish, email, or present to clients. AI is the research assistant. You are the licensed professional. That means you read every word before it goes out. If you can’t verify a claim the AI made, delete it.
3. Run AI content through a fair housing filter. Before publishing any AI-generated marketing, listing description, or neighborhood content, read it specifically looking for language that references protected classes. Age, race, religion, familial status, national origin, disability, sex. If the AI described a neighborhood in terms that could be interpreted as targeting or excluding any group, rewrite it. This is the compliance area with the biggest potential downside and the least enforcement activity so far. That combination should terrify you.
4. Don’t feed client PII into consumer AI tools. Use business-grade tools with documented data handling policies. If you can’t explain to your client exactly where their data goes when you paste it into an AI tool, don’t paste it. Period.
5. Stay current. Quarterly. NAR updated 18 MLS policies in January 2026 alone. TREC added AI to the 2026-2027 Legal Update curriculum. California passed AB 723. Colorado’s AI Act takes effect in June. The regulatory landscape is changing faster than most agents realize. Set a quarterly reminder to review what’s new. I do this and I still get surprised sometimes.
The Agents Who Get in Trouble
Let's be clear about who's actually at risk here. It's not the agents avoiding AI entirely (though they have different problems, and I've written about those too). And it's not the agents like me who use AI extensively but take compliance seriously.
The agents who will get fined, sued, or lose their licenses are the ones in the middle. The “I’ll just use it and nobody will notice” crowd. The agents who virtually stage photos without disclosure because “everyone does it.” The agents who let AI write listing descriptions without reading them. The agents who use AI-generated neighborhood descriptions that inadvertently steer clients based on protected characteristics.
Kahneman would call this the "illusion of validity." Because AI's output looks so professional and polished, agents assume it must be compliant. It's the exact opposite. The better AI gets at producing convincing content, the more important it becomes that a human reviews it through a compliance lens.
NAR’s own surveys show 80 to 90 percent of agents at recent conferences report using AI in some form. That’s a lot of people using powerful tools with very little framework around responsible use. And regulators have noticed.
Where This Is All Headed
I’ll make a prediction. Within 24 months, every state will have some form of AI disclosure requirement for real estate transactions. TREC is already teaching it in the 2026-2027 Legal Update curriculum. NAR is updating MLS policies quarterly. HUD is watching. The direction is unmistakable.
The agents who build compliance into their AI workflow now will barely notice when the new rules arrive. The agents who are cutting corners will scramble. Or worse.
I’ve been in this business for 19 years. I’ve watched the industry survive the internet revolution, the social media revolution, the MLS data revolution, and now the AI revolution. Every single time, the agents who adapted early and adapted responsibly came out ahead. This time is no different.
If you're an agent figuring out how to use AI responsibly, or a buyer or seller who wants to work with someone who actually takes this seriously, let's connect. I'm always happy to talk about this stuff. Probably too happy honestly (my wife would say definitely too happy).
Be safe, be good, and be nice to people.