Every month, Kate Teves, HR consultant, recruiter and founder of The HR Pro, answers Realtors’ questions about anything and everything related to human resources. Have a question for Kate? Send her an email.
Question: What are the risks of AI use in real estate, and what policies should brokerages implement to protect data, ensure compliance, and use AI responsibly?
Kate: Artificial intelligence has become one of the most talked-about tools in the real estate industry, and for good reason. A surge of useful apps is driving productivity, and with the right tools, the time it takes to complete a CMA, write a property description, or manage a social media presence can be cut dramatically. For agents and admin staff willing to stay on the cutting edge, AI can save hours each week.
At its best, AI improves productivity, creativity and communication. But at its worst, it can expose brokerages to serious risk, especially when employees or agents feed sensitive or proprietary information into AI platforms without realizing the consequences.
Let me be very clear: we like and use AI, but it is our duty to guide employers through the pitfalls and help them balance innovation with responsibility.
Most AI tools, including chat-based ones, learn from the data you give them. That means when your assistant, marketing coordinator or agent uploads internal brokerage information, like customer data, commission reports or sections of your independent contractor agreements, that information could potentially leave your organization’s control. Additionally, anything shared over a virtual meeting and captured by an AI note-taker to save you from writing notes will also be stored. Even organizers of webinars and conferences with a hybrid attendance component should consider how all recordings will be stored and analyzed.
In a heavily regulated industry like real estate, this isn’t just bad practice; it can also create PIPEDA (Personal Information Protection and Electronic Documents Act) compliance issues, or even breach contractual confidentiality clauses. If you think this sounds far-fetched, remember: the majority of AI platforms openly state that they may retain user input for training or product improvement. That means anything shared, even inadvertently, can be stored, analyzed, and, in some cases, reproduced in future outputs.
Don’t even get me started on deepfakes and other digitally altered content that can severely compromise the reputation of even the most respected professionals (we are already seeing cases emerge in the US, some with jail sentences attached).
The risk isn’t AI; it’s the lack of policy
Right now, many businesses are in a grey zone. Staff are using AI to draft emails, write copy, summarize client files, or analyze deals, often without any formal training or policy oversight. This, of course, results in inconsistent quality, potential privacy violations, and situations where AI-generated work is being presented as original content without disclosure or review. Just as you wouldn’t authorize all staff to handle client deposits without process or supervision, you shouldn’t allow unrestricted AI use in your business.
What every brokerage should be doing right now
It’s time to establish clear, written policies that outline how AI can and cannot be used within your brokerage. Consider addressing the following areas:
1. Acceptable and unacceptable uses
Define when AI tools can be used (e.g., for marketing ideas, copy drafts or data summaries) and when they cannot (e.g., analyzing client files, entering personal or financial data, or uploading proprietary documents like commission splits).
2. Protection of confidential information
Remind all staff and contractors that confidential information, including client data, contracts, deal notes, and internal financials, must never be entered into external AI systems unless those systems have been approved and secured.
3. Ownership and attribution
If AI assists in creating materials such as blogs, marketing pieces or presentations, ensure there’s a process to verify accuracy and compliance, and be transparent about authorship when required. Presenting AI-generated work as entirely one’s own may violate ethical standards or brand guidelines.
4. Training and accountability
Provide your staff and agents with brief training on AI tools, emphasizing data sensitivity, copyright awareness, and brand tone. Make sure everyone understands that AI is a support tool, not a substitute for human judgment.
IMPORTANT: Policies and training must be documented and acknowledged by staff so the business is protected if they are not upheld in the future.
Real estate has always been a relationship-driven business, built on trust, reputation, and professionalism. AI doesn’t change that; it simply raises the standard for how we protect the information our clients and agents trust us with.
By establishing thoughtful policies and boundaries now, you don’t have to fear technology; you can embrace it confidently.
As you prepare for 2026, view this as an opportunity to formalize your brokerage’s approach to AI. It’s not just about compliance; it’s about leading.
Because in an age where technology is everywhere, safeguarding your data and integrity will be your most valuable competitive advantage.

Kate Teves is the founder and COO of The HR Pro, a recruiter and human resources professional who focuses on the real estate industry, finding incredible people to support solopreneurs, teams and brokerages. She also helps leaders and managers build HR processes and design a culture and mindset that facilitate business growth and employee development.
