Across executive teams and multifamily operations, the same pattern is playing out. Leaders are deploying AI to analyze contracts, leases, and regulatory agreements. The instinct is right. The execution is not. Most are asking for a summary, and in doing so, they are trading genuine risk reduction for the feeling of it. “Summaries don’t reduce risk,” says Kevin A. Weishaar, executive advisor and founder of Weishaar Strategic Partners. “They just reduce awareness.”
Documents Are Behavior-Shaping Systems, Not Information Sources
The foundational error is a misread of what legal and regulatory documents actually are. Treating a lease, contract, or compliance policy as an information source to be digested and compressed fundamentally misunderstands its function. “Laws, regulations, policies, and contracts are not informational documents,” Weishaar says. “They are behavior-shaping systems. They define who acts, when they act, what happens if they don’t, and who absorbs the consequences.” A summary of that document does not surface those dynamics. It buries them beneath the illusion of comprehension. “If your AI prompt doesn’t surface those dynamics, you’re not reducing complexity,” he says. “You’re just automating blind spots. Which is a very efficient way to get yourself into trouble.”
Summaries Answer the Wrong Question
A summary answers one question: what does this say? Executives and operators running complex portfolios need answers to a different set of questions entirely. Where does risk concentrate over time? What decisions does this document constrain? What assumptions are buried in the language, and what gaps do they create? What happens when reality does not match those assumptions?
“Summaries remove consequence, ownership, timing, and escalation,” Weishaar says, “which just happens to be where most of the operational failures live.” The information compressed out of a summary is often precisely the information that determines what happens when something goes wrong. Defaulting to AI summaries does not make that risk disappear. It makes it invisible until it is expensive.
The Prompt Is the Product
The real leverage in AI is not better technology. It is better prompting and the discipline to refine it. The same document that produces a useless summary under a generic prompt becomes a strategic instrument under the right one. Ask AI to identify the decisions the document drives, translate clauses into an operating plan, surface hidden assumptions, and explain how two different parties might interpret the same clause. “When you do that, AI stops paraphrasing,” Weishaar says. “It starts mapping cause and effect. And that is where clarity starts to show up.”
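In practice, that shift is a change in the prompt itself. A minimal sketch of what such a prompt might look like, with question wording and structure that are illustrative assumptions rather than a prescribed format:

```python
# A hedged sketch: building a "behavior-mapping" prompt instead of a
# summary prompt. The questions mirror those named in the text; their
# exact wording here is an illustrative assumption.

def behavior_mapping_prompt(document_text: str) -> str:
    questions = [
        "Which decisions does this document drive or constrain?",
        "Translate the key clauses into an operating plan: "
        "who acts, when, and what triggers escalation?",
        "What assumptions are buried in the language, "
        "and what gaps do they create?",
        "How might two different parties interpret the same clause?",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Do not summarize the document below. Answer each question, "
        "citing the specific clause that supports each answer.\n\n"
        f"{numbered}\n\n--- DOCUMENT ---\n{document_text}"
    )

prompt = behavior_mapping_prompt("Tenant shall maintain insurance of ...")
```

The point is not the particular wording but the demand: the prompt asks for decisions, ownership, and consequences rather than compression.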
Analyzing the same document through the lens of operations, compliance, finance, and site leadership simultaneously transforms it from static text into an operable system. Leverage points where small actions reduce large risks become visible. The sequence of obligations that determines how a situation plays out becomes navigable. “The document stops being a static text,” Weishaar says. “It becomes a system you can actually operate.”
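The multi-lens pass can be sketched the same way: run one structured prompt per perspective and compare the results. The lenses come from the text; the prompt wording is an assumption:

```python
# Sketch of the multi-lens analysis: the same document is examined from
# each operating perspective named in the text. Prompt phrasing is an
# illustrative assumption.

LENSES = ["operations", "compliance", "finance", "site leadership"]

def lens_prompts(document_text: str) -> dict[str, str]:
    return {
        lens: (
            f"Read the document below strictly from a {lens} perspective. "
            "Identify the obligations it creates for this function, the "
            "leverage points where a small action reduces a large risk, "
            "and the sequence in which obligations come due.\n\n"
            f"{document_text}"
        )
        for lens in LENSES
    }

prompts = lens_prompts("Landlord shall inspect the premises annually ...")
```

Comparing the four answers is what exposes the gaps a single summary hides, such as an obligation that finance and operations each assume the other owns.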
Clarity Is Where Performance Lives
Used with precision, AI functions like a senior operator who never gets tired. Used carelessly, it produces comfort rather than clarity. The difference is entirely in what is asked of it. “If your prompts only ask for compression, you will get comfort,” Weishaar says. “If your prompts demand structure, ownership, and consequences, you will get clarity. And clarity is where performance, compliance, and confidence actually come from.”
Prompt refinement is not a technical task. It is a strategic discipline, and it is the part that still requires human judgment. “In AI work, the prompt is the product,” he says. “Otherwise, there is already a button that says summarize. AI does not need your help doing that.”
Follow Kevin A. Weishaar on LinkedIn or visit his website or his company’s website for more insights on decision design, operational risk, and system execution.