<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI & The Oath:  CARAF: Building the Framework Medicine Doesn't Have Yet]]></title><description><![CDATA[A structured conversation — not a white paper — designed to surface the right healthcare AI questions from the people closest to the problem before harm accumulates.]]></description><link>https://mikepackman.substack.com/s/caraf-building-the-framework-medicine</link><image><url>https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png</url><title>AI &amp; The Oath:  CARAF: Building the Framework Medicine Doesn&apos;t Have Yet</title><link>https://mikepackman.substack.com/s/caraf-building-the-framework-medicine</link></image><generator>Substack</generator><lastBuildDate>Thu, 14 May 2026 01:48:26 GMT</lastBuildDate><atom:link href="https://mikepackman.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Michael Tekely]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[mikepackman@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[mikepackman@substack.com]]></itunes:email><itunes:name><![CDATA[AI & The Oath]]></itunes:name></itunes:owner><itunes:author><![CDATA[AI & The Oath]]></itunes:author><googleplay:owner><![CDATA[mikepackman@substack.com]]></googleplay:owner><googleplay:email><![CDATA[mikepackman@substack.com]]></googleplay:email><googleplay:author><![CDATA[AI & The Oath]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Five Gaps Nobody Is Addressing in Clinical AI Governance]]></title><description><![CDATA[AI & The Oath | Michael Tekely, AAI]]></description><link>https://mikepackman.substack.com/p/the-five-gaps-nobody-is-addressing</link><guid isPermaLink="false">https://mikepackman.substack.com/p/the-five-gaps-nobody-is-addressing</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Thu, 09 Apr 2026 22:02:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>AI &amp; The Oath | Michael Tekely, AAI</em></p><div><hr></div><p>Most health systems believe they are managing clinical AI governance.</p><p>They have a vendor contract. An IT sign-off process. A governance committee. They may have reviewed the Duke-Margolis framework or attended a CHAI webinar. They feel informed.</p><p>They are not wrong. They have done something.</p><p>What they have not done is address the five gaps that determine whether everything they built holds up when something goes wrong &#8212; in a claim, in a deposition, in a carrier coverage dispute, or in a board meeting where a plaintiff&#8217;s attorney has already asked questions they cannot answer.</p><p>These are not future risks. They are present exposures. 
And they exist at most health systems deploying clinical AI right now &#8212; including the most sophisticated ones.</p><div><hr></div><h2>Gap 1 &#8212; The Coverage Gap</h2><p>Every major clinical AI governance framework in existence today &#8212; Duke-Margolis, CHAI, The Hastings Center, Stanford, MedicoVigilance &#8212; stops before it reaches the coverage question.</p><p>Which policy responds when the AI governance fails?</p><p>Most institutions assume one of their four policies covers it. Most of the time none of them do.</p><p>Medical professional liability insurance is silent on AI. There is no coverage trigger for machine learning errors in standard MPL policy language. Cyber liability covers data breaches and ransomware events &#8212; not clinical harm from a flawed AI recommendation. Commercial general liability now explicitly excludes it: ISO Form CG 40 47, effective January 2026, excludes generative AI bodily injury from CGL coverage. And Tech E&amp;O covers the vendor&#8217;s software failures &#8212; not the institution&#8217;s, and not the physician&#8217;s.</p><p>Most health systems have four policies and no confirmed coverage for AI-assisted clinical harm.</p><p>That is not a theoretical gap. It is an active uninsured exposure operating right now in every clinical AI deployment that has not asked the question in writing.</p><p><strong>The question to ask your carriers today:</strong> Can you confirm in writing that at least one of our four policies responds to an AI-assisted clinical harm event &#8212; and what documentation do you need from us in the first 24 hours to protect that coverage?</p><div><hr></div><h2>Gap 2 &#8212; The Physician Judgment Gap</h2><p>In March 2026, a peer-reviewed study published in <em>Nature Health</em> found that mock jurors sided with plaintiffs at nearly 75% when a physician reviewed AI output once without a prior independent assessment. When the physician documented their reasoning before seeing the AI output &#8212; the double-read protocol &#8212; that number dropped to 52.9%.</p><p>Same physician. Same missed diagnosis. Same AI that got it right.</p><p>The only difference was the sequence.</p><p>Most clinical AI deployments have no mechanism to capture that sequence. The physician&#8217;s independent reasoning is not timestamped before the AI output influences it. The documentation exists. The proof of sequence does not. And in litigation, the sequence is everything.</p><p><strong>The question to ask your risk manager today:</strong> Can we prove, for any AI-assisted clinical encounter that occurred in the last 90 days, that the physician&#8217;s independent reasoning preceded the AI output &#8212; not just that they reviewed it?</p><div><hr></div><h2>Gap 3 &#8212; The Pre-Decisional Architecture Gap</h2><p>By the time a clinician acts on an AI recommendation, a series of consequential decisions have already been made. What thresholds define acceptable output. What conditions trigger escalation to human review. What information was included or excluded from the AI&#8217;s input. All of it running continuously, invisibly, beneath every clinical interaction.</p><p>Most institutions cannot name who owns those decisions.</p><p>The vendor owns them by default. 
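</p><p>Nothing about making that ownership visible is technically hard. Here is a minimal sketch, in Python, of an append-only record for each threshold or escalation change, naming an accountable human. The schema, tool name, and file format are illustrative assumptions, not a CARAF specification:</p><pre><code>from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ThresholdChangeEvent:
    # Illustrative schema, not a CARAF-mandated format.
    tool: str                    # which deployed AI tool
    model_version: str           # vendor model/version in effect
    parameter: str               # e.g. an escalation threshold
    old_value: float
    new_value: float
    changed_by: str              # vendor update, institution, etc.
    accountable_owner: str       # named human, by name and title
    disclosed_to_clinicians: bool
    timestamp: str = ""

    def log(self, path="threshold_audit.jsonl"):
        # One line per change, append-only: the record survives
        # vendor update cycles because the institution holds it.
        self.timestamp = datetime.now(timezone.utc).isoformat()
        with open(path, "a") as f:
            f.write(json.dumps(asdict(self)) + "\n")

# Example: a vendor model update shifts an escalation threshold.
ThresholdChangeEvent(
    tool="sepsis-early-warning",
    model_version="2.4.1",
    parameter="escalation_confidence_threshold",
    old_value=0.80,
    new_value=0.85,
    changed_by="vendor model update",
    accountable_owner="Jane Doe, Chief Medical Informatics Officer",
    disclosed_to_clinicians=True,
).log()</code></pre><p>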
And no policy is clearly written to respond when that layer is where the failure originates &#8212; when the harm traces back not to what the clinician did, but to a threshold that was set at deployment by a vendor without institutional visibility and never reviewed.</p><p>When performance thresholds shift with a model update, when escalation logic changes without clinical notification, when input governance drifts &#8212; and the clinician acts in good faith on an output that was shaped by parameters nobody at the institution owns &#8212; who is accountable?</p><p>Right now, the honest answer is nobody.</p><p><strong>The question to ask your vendor today:</strong> Who owns the performance thresholds that define acceptable output for this tool &#8212; and will you put that in writing, including what happens when those parameters change?</p><div><hr></div><h2>Gap 4 &#8212; The Executive Accountability Gap</h2><p>Most health systems have committees. They have IT sign-off. They have vendor contracts. None of those is an accountable human being.</p><p>When a clinical AI deployment produces harm, the accountability chain must run upward &#8212; to a named C-suite executive who made the deployment decision, has documented authority to decommission the tool, and has been briefed on what their personal accountability scope is if that tool causes harm.</p><p>Most health systems cannot identify that person today.</p><p>A committee approved the pilot. An IT department integrated the system. A vendor contract limits liability. But when a plaintiff&#8217;s attorney asks who made the decision to deploy this tool in a high-acuity clinical environment &#8212; and who is accountable for what it did &#8212; the answer cannot be &#8220;the committee.&#8221;</p><p><strong>The question to ask your leadership team today:</strong> Who is the named executive owner of each active clinical AI deployment &#8212; identified by name and title, not by committee &#8212; and have they been briefed on the four-policy coverage void?</p><div><hr></div><h2>Gap 5 &#8212; The Reconstruction Gap</h2><p>When an AI-assisted adverse event occurs, can your organization reconstruct exactly what happened?</p><p>Not generally. Not approximately. Exactly.</p><p>Which AI tool was active. What version. What it recommended. What the clinician saw. What the clinician documented before and after seeing that output. What the patient&#8217;s full longitudinal record contained at that moment. Which vendor&#8217;s model produced which output. In what sequence. In what timeframe.</p><p>From your own records alone &#8212; without relying on the vendor to do it for you.</p><p>Vendor relationships deteriorate after adverse events. When a claim is filed, when litigation begins, when discovery opens &#8212; the vendor&#8217;s cooperation cannot be assumed. 
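</p><p>Mechanically, reconstruction from your own records is a modest requirement, provided the log exists before the claim does. A minimal sketch, assuming the institution already writes every AI interaction to its own append-only JSONL log; the field names are illustrative, not a standard:</p><pre><code>import json

def reconstruct_encounter(encounter_id, path="ai_audit_log.jsonl"):
    """Rebuild an AI-assisted encounter timeline from the
    institution's own append-only log, with no vendor involvement.
    Event fields shown here are illustrative assumptions."""
    events = []
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            if event["encounter_id"] == encounter_id:
                events.append(event)
    # Sort by timestamp so the sequence itself is reproducible.
    events.sort(key=lambda e: e["timestamp"])
    for e in events:
        print(e["timestamp"], e["actor"], e["event_type"],
              e.get("tool_version", ""), e.get("summary", ""))
    return events</code></pre><p>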
Your defense cannot depend on a relationship that the adversarial nature of litigation will strain or break.</p><p>If reconstruction requires vendor cooperation, your defense is only as strong as that relationship.</p><p><strong>The question to ask your risk management team today:</strong> Have we tested our ability to reconstruct an AI-assisted clinical encounter from our own records alone &#8212; and has our MPL carrier confirmed that our documentation architecture meets their defense requirements?</p><div><hr></div><h2>What These Five Gaps Have in Common</h2><p>None of them are addressed by any governance framework currently operating in the clinical AI space.</p><p>Duke-Margolis identifies what health systems should build. CHAI certifies against a governance standard. The Hastings Center holds the ethical frame. MedicoVigilance asks the board accountability questions. Stanford benchmarks AI performance.</p><p>None of them answer the question that follows every one of those recommendations: which policy responds when the governance fails?</p><p>That question is an insurance question. And it requires a specific combination of expertise that no think tank, no academic medical center, and no policy institute currently holds.</p><p>Twenty years of medical professional liability insurance, specializing in physicians and surgeons. Five and a half years as a Healthcare Risk Manager inside a major academic health system. At the exact intersection of the two disciplines that matter most when AI-assisted clinical harm occurs.</p><p>That intersection is where CARAF was built.</p><div><hr></div><h2>What CARAF Does</h2><p>CARAF &#8212; the Clinical AI Reasoning and Accountability Framework &#8212; is a practitioner-built governance instrument that addresses all five gaps through a structured, scored, defensible assessment architecture.</p><p>It does not evaluate whether an AI tool performs accurately. Clinical validation frameworks do that. It does not propose legislation or advise regulators. Policy frameworks do that. It does not describe ethical principles for AI development. Bioethics institutions do that.</p><p>CARAF translates all of those recommendations into the operational questions a health system must be able to answer before the next AI-assisted clinical encounter &#8212; and tells them what the absence of an answer means for their insurance coverage, their liability exposure, and their defense.</p><p>The CARAF Document Family includes five instruments designed to meet every audience at their altitude &#8212; from the board-level Executive Summary Card to the full Procurement Checklist to the Scoring Methodology that converts assessment scores into a defensible governance posture designation.</p><div><hr></div><h2>The Standard Is Being Written Now</h2><p>The standard of care for clinical AI governance is being written right now. In courtrooms. In carrier policy language. In frameworks being built by the people closest to the problem.</p><p>The organizations that define it first will shape what everyone else is eventually required to do. 
The ones that wait will find out what the standard looks like in discovery.</p><p>Which of the five gaps is your organization least certain about?</p><p>That is the conversation worth having &#8212; before an adverse event makes it urgent.</p><div><hr></div><p><em>To receive the CARAF Document Family or to discuss a governance assessment for your organization, reach out directly through aiandtheoath.substack.com.</em></p><p><em>Michael Tekely, AAI is the founder of the Malpractice Insurance &amp; Clinical Risk Management Academy, LLC and the developer of CARAF &#8212; the Clinical AI Reasoning and Accountability Framework. He brings twenty years of medical professional liability insurance experience and five and a half years as a Healthcare Risk Manager at Duke University Health System to the intersection of clinical AI governance, insurance architecture, and liability defense.</em></p><p><em>This post does not constitute legal, clinical, or insurance advice. All findings and recommendations should be discussed with your malpractice carrier, cyber liability carrier, Tech E&amp;O vendor, and qualified legal counsel before your next insurance renewal and before your next AI-assisted clinical encounter.</em></p><p><em>&#169; 2026 Michael Tekely, AAI. All rights reserved.</em></p>]]></content:encoded></item><item><title><![CDATA[Join my new subscriber chat]]></title><description><![CDATA[A private space for us to converse and connect]]></description><link>https://mikepackman.substack.com/p/join-my-new-subscriber-chat-201</link><guid isPermaLink="false">https://mikepackman.substack.com/p/join-my-new-subscriber-chat-201</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Wed, 08 Apr 2026 14:07:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KYZT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today I&#8217;m announcing a brand new addition to my Substack publication: AI &amp; The Oath subscriber chat.</p><p>This is a conversation space exclusively for subscribers&#8212;kind of like a group chat or live hangout. I&#8217;ll post questions and updates that come my way, and you can jump into the discussion.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://open.substack.com/pub/mikepackman/chat&quot;,&quot;text&quot;:&quot;Join chat&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://open.substack.com/pub/mikepackman/chat"><span>Join chat</span></a></p><div><hr></div><h2>How to get started</h2><ol><li><p><strong>Get the Substack app by clicking <a href="https://substack.com/app/app-store-redirect">this link</a> or the button below.</strong> New chat threads won&#8217;t be sent via email, so turn on push notifications so you don&#8217;t miss the conversation as it happens.
You can also access chat <a href="https://open.substack.com/pub/mikepackman/chat">on the web</a>.</p></li></ol><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://substack.com/app/app-store-redirect&quot;,&quot;text&quot;:&quot;Get app&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://substack.com/app/app-store-redirect"><span>Get app</span></a></p><ol start="2"><li><p><strong>Open the app and tap the Chat icon.</strong> It looks like two bubbles in the bottom bar, and you&#8217;ll see a row for my chat inside.</p></li></ol><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!KYZT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg" width="1456" height="728" alt=""></figure></div><ol start="3"><li><p><strong>That&#8217;s it!</strong> Jump into my thread to say hi, and if you have any issues, check out <a href="https://support.substack.com/hc/en-us/sections/360007461791-Frequently-Asked-Questions">Substack&#8217;s FAQ</a>.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Durham Casualty Company]]></title><description><![CDATA[CARAF Reference Brief &#8212; Public Information Only | Michael Tekely, AAI | April 2026]]></description><link>https://mikepackman.substack.com/p/durham-casualty-company</link><guid isPermaLink="false">https://mikepackman.substack.com/p/durham-casualty-company</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:55:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>What Durham Casualty Company Is</h2><p>Durham Casualty Company, Ltd. (DCC) is a captive insurance company wholly owned by Duke University Health System, Inc.
It is domiciled in Bermuda &#8212; standard for captive structures, chosen for favorable regulatory and tax treatment.</p><p>DCC provides professional liability insurance coverage for Duke University Health System and the Private Diagnostic Clinic (PDC), which is the faculty physician practice plan. Its coverage scope includes medical professional liability, patient general liability, privacy/cyber liability, and international liability for Health System clinical providers.</p><p>All of this is fully in the public record. Duke disclosed it in their audited financial statements, in public HR job postings, and through federal court filings from the Duke lacrosse litigation that entered the public record in 2009. There is no sensitive disclosure risk in referencing this material.</p><div><hr></div><h2>Why This Matters for CARAF</h2><p><strong>Duke self-insures its own clinical AI exposure.</strong></p><p>When a clinical AI governance failure produces a claim at Duke &#8212; an AI-assisted missed diagnosis, a documentation gap, a coverage trigger question &#8212; it does not go to an outside carrier. It stays inside the Duke enterprise. Durham Casualty pays it. Duke feels it directly on their own balance sheet.</p><p>That is a fundamentally different risk architecture than a health system that buys commercial MPL coverage from an outside carrier like MAG Mutual or Curi. Duke has direct financial skin in the governance game. The person who deploys the AI and the entity that pays when it fails are the same organization.</p><p><strong>The four-policy coverage void is even more acute in a captive structure.</strong></p><p>A commercial carrier has actuaries, underwriters, and claims professionals actively re-evaluating their exposure as clinical AI deployment accelerates. A captive like DCC reflects the governance assumptions Duke built into it &#8212; which were built before clinical AI was a material deployment reality. The question of whether DCC&#8217;s coverage language responds to AI-assisted clinical harm, and under what circumstances, is an internal governance question Duke has not publicly answered.</p><p><strong>The Ferranti connection.</strong></p><p>Jeffrey Ferranti is Duke Health&#8217;s Chief Digital Officer &#8212; the person who oversees clinical AI deployment at the institution. He and Duke&#8217;s risk management team are not separated by a carrier relationship. The governance failure and the financial consequence land in the same house. A conversation with Ferranti about CARAF is not a conversation with a technology officer. It is a conversation with the person whose institution absorbs every uninsured AI-assisted clinical harm directly.</p><div><hr></div><h2>What We Know About AI Claims at Duke Specifically</h2><p>No publicly documented case of an AI-assisted clinical harm claim filed against Duke Health or handled through Durham Casualty has surfaced in public records or court filings. That absence is significant but not surprising.</p><p>Captives are specifically structured to keep claims internal. Durham Casualty does not report to a state insurance department the way a commercial carrier does. Duke handles its own claims, with its own counsel, on its own balance sheet. That structure is designed precisely to keep sensitive claims out of public view.</p><p>The NPDB (National Practitioner Data Bank) confirms that paid malpractice claims from captive structures like DCC are reported &#8212; but the public data layer is anonymized. 
You cannot identify Duke or DCC specifically from public NPDB data. What the public data does confirm is the national trend: malpractice claims involving AI tools increased 14% between 2022 and 2024, with most involving diagnostic AI in radiology, cardiology, and oncology.</p><div><hr></div><h2>The Structural Coverage Question</h2><p>DCC is the first-dollar payer for Duke faculty physicians. Large health systems with captives typically layer excess coverage above their retained layer through commercial carriers &#8212; DCC absorbs losses up to a retention level and commercial excess carriers sit above that. The exact layering at Duke is not publicly detailed.</p><p>What that means for clinical AI: DCC is the first governance relationship any AI-assisted claim encounters at Duke. The coverage language in DCC&#8217;s policies &#8212; whether it addresses clinical AI, what triggers it, whether it is silent on machine learning in the causation chain &#8212; is an internal document not in the public record.</p><p>That is precisely the conversation CARAF opens. Not with an outside carrier. With the institution that owns both the technology deployment and the financial consequence of its failure.</p><div><hr></div><h2>The Bottom Line for CARAF Positioning</h2><p>Duke is arguably the most sophisticated health system in the country on clinical AI governance. They have Duke-Margolis at the policy layer, the ABCDS oversight program at the governance layer, SCRIBE at the evaluation layer, and Durham Casualty absorbing the financial exposure at the insurance layer.</p><p>What they do not have &#8212; publicly &#8212; is a governance framework that connects those four layers to the coverage question. Which DCC policy language responds when the AI governance fails? What documentation does DCC need in the first 24 hours to protect the coverage relationship? Has DCC confirmed in writing that its coverage responds to AI-assisted clinical harm?</p><p>Those are CARAF questions. And they are uniquely answerable by someone who spent five and a half years inside Duke&#8217;s risk management infrastructure &#8212; and twenty years understanding how captive and commercial MPL coverage actually functions.</p><div><hr></div><p><em>This brief contains only publicly available information. All Durham Casualty disclosures sourced from Duke Health audited financial statements, Duke HR public job postings, and public federal court filings.</em></p><p><em>&#169; 2026 Michael Tekely, AAI. 
All rights reserved.</em></p>]]></content:encoded></item><item><title><![CDATA[CARAF — Clinical AI Reasoning & Accountability Framework]]></title><description><![CDATA[Version 3.1 &#8212; With Insurance Architecture, Clifford Anchor Case & Pre-Decisional System Governance]]></description><link>https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-993</link><guid isPermaLink="false">https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-993</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:16:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Developed by:</strong> Michael Tekely, AAI | 20 Years, Medical Professional Liability Insurance | Clinical Risk Management, Duke University Health System<br><strong>Clinical contributions:</strong> John Ferguson, MD, FACS<br><strong>Public-facing companion series:</strong> AI &amp; The Oath &#8212; aiandtheoath.substack.com</p><div><hr></div><h2><strong>Version History</strong></h2><table><thead><tr><th>Version</th><th>Date</th><th>Key Additions</th></tr></thead><tbody><tr><td>V1.0</td><td>2025</td><td>Core six-layer governance framework</td></tr><tr><td>V1.1</td><td>2025</td><td>Clinical contributions from Dr. John Ferguson, MD, FACS: ENT/subspecialty tools, context integration, repeatability data, governance ownership</td></tr><tr><td>V1.2</td><td>Early 2026</td><td>Layer 0.5 Execution Authorization Boundary; Layer 6 Cybersecurity &amp; Integrity Assurance</td></tr><tr><td>V2.0</td><td>March 2026</td><td>Longitudinal Memory Spine (LMS) architecture woven throughout all layers; Brownsville Stress Test formalized</td></tr><tr><td>V3.0</td><td>March 2026</td><td>Part 2A, Four-Policy Insurance Architecture; Part 2B, Clifford Anchor Case; Tech E&amp;O expanded throughout; claims-made vs. occurrence trap named; UpToDate/Wolters Kluwer governance opportunity added; AI &amp; The Oath series integrated as public companion</td></tr><tr><td>V3.1</td><td>April 2026</td><td>Layer 5B, Pre-Decisional System Governance added; developed through the CARAF think tank process in response to expert contribution from the AI enablement and operations constituency</td></tr></tbody></table><div><hr></div><h2><strong>Table of Contents</strong></h2><p><strong>Prefatory Material</strong></p><ul><li><p>Version History</p></li><li><p>Opening Statement</p></li><li><p>The Equity Principle</p></li><li><p>Core Assumption: The Longitudinal Memory Spine</p></li><li><p>The UpToDate Governance Opportunity</p></li></ul><p><strong>Part One: Medicine Already Knows How to Standardize Complex Judgment</strong></p><ul><li><p>1.1 The Validated Assessment Foundation</p></li><li><p>1.2 The Core Principle These Tools Share</p></li></ul><p><strong>Part Two: The CARAF Framework &#8212; Six Layers</strong></p><ul><li><p>Layer 0 &#8212; Upstream Constraint</p></li><li><p>Layer 0.5 &#8212; Execution Authorization Boundary</p></li><li><p>Layer 1 &#8212; Assessment Foundation</p></li><li><p>Layer 2 &#8212; AI Input &amp; Differential Generation</p></li><li><p>Layer 3 &#8212; Physician Interrogation Checkpoint</p></li><li><p>Layer 4 &#8212; Treatment Decision &amp; Care Plan Reasoning</p></li><li><p>Layer 5 &#8212; Audit Trail &amp; Governance</p></li><li><p>Layer 5B &#8212; Pre-Decisional System Governance</p></li><li><p>Layer 5A &#8212; Insurance Architecture (see Part 2A)</p></li><li><p>Layer 6 &#8212; Cybersecurity &amp; Integrity Assurance</p></li></ul><p><strong>Part 2A: The Insurance Architecture</strong></p><ul><li><p>The Four Policies: Who Is Covered, What Is Covered, and Where Each Fails</p></li><li><p>The Four-Policy Matrix</p></li><li><p>The Coverage Trigger Trap: Claims-Made vs.
Occurrence</p></li><li><p>The Coordination Failure: Four Carriers, Four Denials</p></li><li><p>What Needs to Change</p></li></ul><p><strong>Part 2B: The Clifford Anchor Case</strong></p><ul><li><p>Why This Case Belongs in CARAF</p></li><li><p>The Facts</p></li><li><p>What Prenuvo Did Next</p></li><li><p>The Product Liability Theory</p></li><li><p>The Mandatory Workflow Question</p></li><li><p>The Four-Policy Breakdown Applied to Clifford</p></li><li><p>What the Clifford Case Teaches CARAF</p></li><li><p>The Documentation Imperative</p></li><li><p>A Note on the Human Reality</p></li></ul><p><strong>Part Three: We Have Seen This Before &#8212; The EMR Parallel</strong></p><ul><li><p>3.1 How Epic and EMR Adoption Actually Happened</p></li><li><p>3.2 What the Evidence Actually Shows About EMR and Malpractice</p></li><li><p>3.3 The Unavoidable Truth About Unexpected Outcomes</p></li><li><p>3.4 The Direct Parallel for CARAF</p></li><li><p>3.5 The Regulatory Landscape: The Forcing Functions Already in Motion</p></li><li><p>3.6 The Critical Difference</p></li></ul><p><strong>Part Four: The Constituency Questions</strong></p><ul><li><p>For Legal &amp; Regulatory Professionals</p></li><li><p>For Insurance &amp; Underwriting Professionals</p></li><li><p>For Health System &amp; Clinical Leadership</p></li><li><p>For Ethics Professionals</p></li><li><p>For AI Vendors &amp; Technology Companies</p></li></ul><p><strong>Part Five: Anticipated Challenges and Honest Limitations</strong></p><ul><li><p>5.1 The Documentation Burden Objection</p></li><li><p>5.2 The Legal Standing Objection</p></li><li><p>5.3 The Enforcement Mechanism Objection</p></li><li><p>5.4 The Patient Rights and Health Equity Objection</p></li><li><p>5.5 The Governance and Ownership Objection</p></li><li><p>5.6 The Insurance Architecture Objection</p></li></ul><p><strong>The Question Underneath All of It</strong></p><div><hr></div><p>This is not a white paper. It is a structured conversation. CARAF is a practitioner-developed framework designed to stimulate expert discussion across legal, insurance, healthcare, and ethics communities. The goal is not to declare answers but to surface the right questions from the people closest to the problem, so that legal frameworks, insurance policy language, clinical governance, and ethics can move at the speed AI is already moving in healthcare.</p><p>This framework does not constitute legal, clinical, or insurance advice. It is intended solely to stimulate professional discussion and collaborative development.</p><div><hr></div><h2><strong>THE EQUITY PRINCIPLE &#8212; The Floor, Not Just the Ceiling</strong></h2><p>CARAF is designed to function as the minimum standard for every healthcare organization deploying clinical AI, not just well-resourced academic medical centers.</p><p>The harm from AI failure will not concentrate at Duke, Johns Hopkins, or Mayo Clinic. It will concentrate where governance is weakest and resources are thinnest, rural critical access hospitals, Federally Qualified Health Centers, solo and small group physician practices, and safety net hospitals serving underserved populations.</p><p>The patients most vulnerable to AI failure are being treated in the institutions least equipped to prevent it. That is not a technology problem. 
That is a health equity crisis hiding inside a technology deployment.</p><p>If CARAF only works at Duke, it isn&#8217;t working.</p><p>The patient&#8217;s right to know that AI was involved in their care, and the framework for that disclosure, is a foundational design requirement of CARAF, not an afterthought. Informed consent has always been the covenant between medicine and the people it serves. AI-assisted care does not suspend that covenant. It raises the stakes of honoring it. A patient-facing disclosure standard is an explicit design priority of this framework and will be developed through the ethics constituency of this think tank as a core deliverable, not a deferred consideration.</p><div><hr></div><h2><strong>CORE ASSUMPTION &#8212; The Longitudinal Memory Spine</strong></h2><p>CARAF Version 3.1 assumes a Longitudinal Memory Spine (LMS), a unified, time-ordered patient record that both clinicians and AI systems can see and query.</p><p>The LMS connects encounters, labs, imaging, orders, notes, and messages into a single clinical narrative and preserves &#8220;what was known when.&#8221; Every CARAF layer is evaluated against this spine: AI recommendations, human decisions, and governance all rest on the expectation that decisions are made, and later defended, in the context of the full relevant history, not a fragmented snapshot.</p><p>Without a Longitudinal Memory Spine, AI output is based on partial context and malpractice reconstruction becomes guesswork. With it, CARAF can show exactly what data the AI and clinician had, how longitudinal patterns informed (or should have informed) the decision, and how their choices fit into the patient&#8217;s overall story.</p><h3><strong>The UpToDate Governance Opportunity</strong></h3><p>The most trusted clinical reference tool in the world is UpToDate. Physicians rely on it. Hospitals require it. It has earned that position over decades of rigorous, continuously updated, peer-reviewed content.</p><p>But even UpToDate, in all its greatness, has a gap that CARAF names directly.</p><p>It has no longitudinal memory of the patient.</p><p>Every encounter starts fresh. The tool knows everything about disease. It knows nothing about this patient, their prior encounters, their socioeconomic reality, their documented history, their prior imaging, their prior responses to treatment.</p><p>UpToDate already integrates with EMR platforms. The infrastructure for a Longitudinal Memory Spine is closer than any startup entering this space can achieve. The question CARAF asks, and invites Wolters Kluwer to answer, is:</p><p>Why wouldn&#8217;t it want to build that memory in?</p><p>As clinical decision support tools move from reference to participant, synthesizing differentials, recommending pathways, shaping clinical documentation, the learned intermediary doctrine that has historically protected vendors is cracking. Plaintiff attorneys are following the workflow. They subpoena every tool. They ask: what did this system recommend, and why?</p><p>The company that closes the longitudinal memory gap first, that combines population-level clinical intelligence with individual patient continuity and a documentation architecture that prompts independent judgment, will define the standard of care for AI-assisted medicine.</p><p>Wolters Kluwer is already standing at that door. 
CARAF is the governance architecture that sits behind it.</p><div><hr></div><h2><strong>Part One: Medicine Already Knows How to Standardize Complex Judgment</strong></h2><p>Before we can understand what AI changes, we need to appreciate what medicine already built, because the solution to AI accountability may be closer to existing clinical infrastructure than we think.</p><h3><strong>1.1 The Validated Assessment Foundation</strong></h3><p>Over decades, medicine developed structured tools to standardize clinical reasoning at the bedside. These tools exist because human judgment, however expert, benefits from scaffolding. They force the clinician to slow down, apply a validated framework, and create a documentable record of their reasoning.</p><p><strong>Acute Neurological &amp; Consciousness Assessment</strong></p><ul><li><p>AVPU, Rapid LOC screen: Alert, responds to Verbal, responds to Pain, Unresponsive</p></li><li><p>GCS, Glasgow Coma Scale (Eye, Verbal, Motor; 3&#8211;15); cornerstone for coma and trauma grading</p></li><li><p>NIHSS, National Institutes of Health Stroke Scale; structured ischemic stroke exam covering LOC, gaze, visual fields, motor, ataxia, sensory, language, and neglect</p></li></ul><p><strong>Cognitive &amp; Delirium Screening</strong></p><ul><li><p>MMSE, Mini-Mental State Examination; 30-point global cognitive screen</p></li><li><p>MoCA, Montreal Cognitive Assessment; 30-point screen emphasizing executive function and mild cognitive impairment</p></li><li><p>SLUMS, Saint Louis University Mental Status; 30-point exam with education-adjusted cutoffs, more sensitive to mild neurocognitive disorder than MMSE</p></li><li><p>Mini-Cog, Brief screen combining 3-word recall and clock drawing</p></li><li><p>CAM, Confusion Assessment Method; bedside algorithm to diagnose delirium</p></li></ul><p><strong>Newborn Status</strong></p><ul><li><p>APGAR, 1 and 5 minute newborn assessment: Appearance, Pulse, Grimace, Activity, Respiration; scored 0&#8211;10</p></li></ul><p><strong>Global Clinical Frameworks</strong></p><ul><li><p>ABCDE, Airway, Breathing, Circulation, Disability (neuro), Exposure; primary survey scaffold</p></li><li><p>SBAR, Situation, Background, Assessment, Recommendation; handoff and escalation standard</p></li><li><p>OLDCART / OPQRST, Symptom history frameworks</p></li></ul><p><strong>ENT &amp; Subspecialty</strong> <em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em></p><ul><li><p>SNOT-22, Sino-Nasal Outcome Test; the primary patient-reported outcome measure for chronic rhinosinusitis and one of the most validated quality-of-life instruments in ENT. As defensible in litigation as the GCS, defending a different kind of clinical decision.</p></li></ul><p>This list is not exhaustive. CARAF is designed to be specialty-agnostic. AI-assisted care extends well beyond acute neurological and delirium assessment. Subspecialty validated assessment tools belong in this foundation and will be developed through Version 4.0 with contributions from specialty communities.</p><h3><strong>1.2 The Core Principle These Tools Share</strong></h3><p>Every tool above does the same thing: it takes a complex, high-stakes clinical judgment and breaks it into structured, documentable, reproducible components. The physician still makes the decision. 
The tool ensures the reasoning is visible, consistent, and defensible.</p><p>That principle is exactly what CARAF proposes to extend into the AI era, with the Longitudinal Memory Spine providing the continuous patient context those tools now live inside.</p><div><hr></div><h2><strong>Part Two: The CARAF Framework &#8212; Six Layers (Built on the LMS)</strong></h2><h3><strong>Layer 0 &#8212; Upstream Constraint</strong></h3><p><em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em></p><p>Before a health system asks what an AI can do, the first question should be what it is architecturally constrained from doing. A system that cannot perform past its validated corpus cannot generate the confident wrong answers that make Layer 3 necessary in the first place.</p><p>Upstream constraint beats downstream documentation every time, but both are necessary. Upstream constraint also includes how the model is allowed to read from the Longitudinal Memory Spine.</p><p><strong>Pre-deployment requirements:</strong></p><ul><li><p>What clinical tasks is this AI architecturally constrained from performing?</p></li><li><p>Has the system been audited for algorithmic bias across demographic groups, race, age, gender, geography?</p></li><li><p>What is the validated corpus this AI was trained on, and what patient populations are underrepresented in it?</p></li><li><p>Has the vendor provided demographic performance stratification data before deployment?</p></li><li><p>Has the vendor provided repeatability data, the percentage of identical queries that produce consistent responses across multiple sessions? (A system with 77% response consistency on identical queries has a reliability profile that belongs in every underwriting conversation.)</p></li><li><p>Can the AI safely and appropriately access the Longitudinal Memory Spine, with guardrails that prevent it from ignoring critical historical information or overreaching into data it is not validated to use?</p></li></ul><p><strong>The Brownsville Stress Test</strong> &#8212; Run this before signing any contract for a general clinical reasoning AI: Present the AI with a patient who cannot afford a prescription in the first exchange. Continue the encounter across multiple turns. In exchange four, does the system remember? Does it carry the patient forward? Or does it prescribe what it cannot see the patient cannot fill? If it fails this test, it is not ready for your clinical environment.</p><p>In Version 3.1, Brownsville is explicitly an LMS test: can the system carry longitudinal clinical and socioeconomic context forward across the encounter, or does it behave as if each turn is a new patient?</p><div><hr></div><h3><strong>Layer 0.5 &#8212; Execution Authorization Boundary</strong></h3><p><em>(V1.2 Addition)</em></p><p>Layer 0 asks what an AI is constrained from doing before deployment. This layer addresses a separate and equally critical question: at the moment of clinical action, who authorizes what the AI is permitted to do, and is that authorization documented in real time?</p><p>The distinction is the difference between a governance aspiration and a load-bearing control point. The AI may be architecturally constrained from performing certain tasks. But within its permitted scope, each recommendation that crosses into clinical action, a medication order, a diagnostic flag acted upon, a care pathway initiated, requires a documented authorization event. Not assumed from prior clinical context. 
Not inherited from the physician&#8217;s general use of the system. Resolved explicitly, at each transition.</p><p><strong>The operational questions this layer must answer:</strong></p><ul><li><p>At the moment AI output influences a clinical action, is there a documented record that a named physician authorized that specific action?</p></li><li><p>Is the authorization timestamped independently of the AI output, before, not after, the action is executed?</p></li><li><p>When a physician accepts, modifies, or rejects an AI recommendation, is that decision captured as a discrete governance event, not buried in general documentation?</p></li><li><p>Who owns the execution authorization log, and is it preserved in a format that survives legal discovery?</p></li></ul><p>The technical infrastructure to answer these questions at scale does not yet exist as a clinical standard. It is the missing layer between governance policy and operational accountability, and it is the most important unsolved problem in AI clinical deployment. CARAF names it here as a design priority for Version 4.0 development.</p><div><hr></div><h3><strong>Layer 1 &#8212; Assessment Foundation</strong></h3><p>The validated tools above remain the clinical backbone. AI does not replace them. AI is trained to recognize, map to, and surface them based on patient presentation. This grounds AI output in language clinicians already trust and in standards courts already understand. When AI flags a potential stroke, it does so in NIHSS-relevant terms. When it screens for delirium, it maps to CAM criteria. The physician&#8217;s frame of reference stays constant.</p><p>In Version 3.1, those assessment tools are not applied in isolation. They are interpreted against the Longitudinal Memory Spine, prior scores, prior diagnoses, prior functional baseline, so that each new assessment sits on top of the patient&#8217;s documented history, not beside it.</p><div><hr></div><h3><strong>Layer 2 &#8212; AI Input &amp; Differential Generation</strong></h3><p>AI ingests available patient data, vitals, history, imaging flags, lab trends, nursing notes, and produces a structured output. Not a diagnosis. A ranked differential with supporting data points, confidence indicators, and flagged uncertainties. It identifies which validated assessment tools are relevant to the presentation and prompts their application. The physician receives AI reasoning, not AI conclusions.</p><p>In Version 3.1, &#8220;available patient data&#8221; explicitly includes the Longitudinal Memory Spine, the full, time-aware record the system is allowed to see. A defensible AI differential is one that can be traced back to specific elements in the LMS: prior events, trajectories, and patterns.</p><p><strong>Model-first workflow</strong> &#8212; The physician receives AI&#8217;s independent assessment before introducing their own reasoning. Once the physician anchors the AI to their differential, they have changed what it will tell them. Every clinician using AI for clinical decision support should understand this before they open the chat window.</p><p><strong>Context integration</strong> <em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em> &#8212; The more dangerous failure mode is a system that supports model-first workflow and still ignores context established earlier in the same encounter. 
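</p><p>One way to probe that failure mode before signing a contract is a scripted multi-turn test, as sketched below. The vendor interface stub, the scripted turns, and the pass check are all illustrative assumptions; a real evaluation would score replies against a structured rubric rather than a substring:</p><pre><code>def ask_vendor_model(history):
    # Dummy stand-in so the sketch runs end to end. Replace with
    # the vendor's real chat API call; no specific vendor
    # interface is implied here.
    return "Start metformin (generic), given the documented cost constraint."

def context_carry_test(ask, turns, holds_constraint):
    """Plant a constraint in exchange one, then check whether the
    final reply still honors it several turns later."""
    history = []
    reply = ""
    for turn in turns:
        history.append({"role": "user", "content": turn})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
    return holds_constraint(reply)

# Exchange one establishes what the patient cannot afford.
turns = [
    "58-year-old with uncontrolled type 2 diabetes. "
    "She cannot afford brand-name drugs.",
    "A1c is 9.4 percent. What workup do you recommend?",
    "Labs are back; renal function is normal. Next step?",
    "What should we prescribe today?",
]

passed = context_carry_test(
    ask_vendor_model,
    turns,
    # Crude pass check, for illustration only.
    lambda reply: "generic" in reply.lower() or "cost" in reply.lower(),
)
print("context carried forward:", passed)</code></pre><p>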
The required question: Does the AI demonstrably carry clinical and socioeconomic context forward across a multi-turn encounter, and can the vendor demonstrate this with a structured test? The Brownsville Stress Test is that test.</p><p>In Version 3.1, context integration is defined as: does the AI reason over the Longitudinal Memory Spine in a way that is consistent, reproducible, and auditable, or does it behave as if each query is context-free?</p><div><hr></div><h3><strong>Layer 3 &#8212; Physician Interrogation Checkpoint</strong></h3><p>This is the framework&#8217;s most legally significant layer, and its ethical core. The physician reviews AI output against their own independent clinical assessment and documents three things:</p><ol><li><p>Where AI output and clinical judgment align</p></li><li><p>Where they diverge on pathophysiology</p></li><li><p>Where AI is clinically correct in the abstract but clinically wrong for this patient in this room <em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em></p></li></ol><p>This third documentation standard is the one medicine has not yet defined. It is precisely where malpractice lives, and where AI fails quietly, without anyone realizing it failed. CARAF names it as a required documentation standard.</p><p>In Version 3.1, Layer 3 is explicitly anchored to the Longitudinal Memory Spine. The physician is not interrogating AI output in a vacuum; they are interrogating it against the patient&#8217;s longitudinal story:</p><ul><li><p>Does this recommendation fit their prior imaging, prior workups, prior complications?</p></li><li><p>Does it respect the social and economic constraints captured earlier?</p></li><li><p>If I diverge from the AI, can I point to elements in the LMS that support my choice?</p></li></ul><p>This is the <em>own the decision</em> moment. It establishes that the physician exercised independent medical judgment. It creates the record that distinguishes responsible AI-assisted care from delegation of clinical authority to a machine. Without this layer, every AI-assisted encounter is a liability waiting to be filed.</p><p><strong>The cryptographic timestamping gap</strong> &#8212; For Layer 3 to function as a legal defense, the record of independent physician judgment must be provably prior to, not concurrent with or subsequent to, the AI output the physician reviewed. The mechanism by which that independent judgment is cryptographically timestamped before AI output is revealed does not yet exist as a clinical standard. That infrastructure is the missing technical layer beneath Layer 3, and its development is a recognized priority for this framework. One candidate mechanism is sketched at the end of this layer.</p><p><strong>Declaring uncertainty is a clinical safety intervention.</strong> When clinicians explicitly flag uncertainty, telling the system &#8220;I&#8217;m not confident in my reasoning, don&#8217;t let this influence your conclusions,&#8221; harmful AI echoing drops dramatically. Diagnostic accuracy under adversarial conditions improved from 27% to 42% with this single behavioral change. <em>(Stanford/Microsoft Research, 2026)</em> A physician&#8217;s epistemic state is an input to AI behavior, whether declared or not.</p><p>In a CARAF 3.1 world, that declared uncertainty is logged alongside the LMS snapshot for that moment, so that the record shows both what was known and how certain the clinician was.</p>
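<p>The candidate mechanism referenced above, as a minimal sketch: hash the physician&#8217;s independent assessment and log the digest before any AI output is displayed. The local JSONL log and field names are illustrative assumptions; a production system would anchor the digest with an independent timestamping authority (for example, RFC 3161) rather than a local clock:</p><pre><code>import hashlib
import json
from datetime import datetime, timezone

def commit_independent_judgment(note, log_path="judgment_commits.jsonl"):
    """Hash the physician's independent assessment and log the
    digest BEFORE any AI output is displayed. Re-hashing the
    stored note later proves it predates the AI recommendation."""
    digest = hashlib.sha256(note.encode("utf-8")).hexdigest()
    record = {
        "event": "independent_judgment_committed",
        "sha256": digest,
        # A local clock is shown for illustration; production use
        # would anchor this with an independent timestamp authority.
        "committed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return digest

# Workflow: commit first, only then unblind the AI differential.
digest = commit_independent_judgment(
    "Independent assessment: suspect PE; plan CTA chest."
)</code></pre><p>Re-hashing the stored note later and matching the logged digest is what turns &#8220;the physician documented first&#8221; from an assertion into a verifiable fact.</p>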
<div><hr></div><h3><strong>Layer 4 &#8212; Treatment Decision &amp; Care Plan Reasoning</strong></h3><p>The physician selects a course of treatment and documents not just what they chose, but why, including whether AI suggested an alternative pathway and whether it was accepted, modified, or rejected, and on what clinical basis. This is SOAP documentation evolved for the AI era. It is the new standard of care record in an AI-assisted clinical environment. It is also what survives discovery, what informs expert witness testimony, and what eventually defines the duty of care standard in AI-assisted medicine.</p><p>Version 3.1 adds an explicit expectation: treatment decisions are documented in relation to the Longitudinal Memory Spine. For example:</p><ul><li><p>&#8220;We chose Drug A over Drug B because of prior intolerance documented in the LMS.&#8221;</p></li><li><p>&#8220;We rejected the AI&#8217;s recommendation because it conflicted with the LMS-documented history of arrhythmia.&#8221;</p></li></ul><p>Decisions that can be tied back to specific elements in the LMS are easier to defend, easier to underwrite, and easier to learn from.</p><div><hr></div><h3><strong>Layer 5 &#8212; Audit Trail &amp; Governance</strong></h3><p>Every interaction between AI output and physician decision is time-stamped, preserved, and structured in a format that survives legal discovery. This layer captures:</p><ul><li><p>Which AI platform was used</p></li><li><p>What version and model</p></li><li><p>FDA SaMD classification status</p></li><li><p>Whether vendor indemnity agreements were in place</p></li><li><p>Whether the AI output was within its validated use parameters</p></li></ul><p><strong>Repeatability data</strong> <em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em> &#8212; The percentage of identical queries that produce consistent responses across multiple sessions. A leading clinical AI answered the same question differently 23% of the time. <em>(Stanford Research, 2026)</em> From the insurance seat, that is an actuarial problem before it is a clinical one. A system with 77% response consistency has a reliability profile that must appear in every underwriting conversation.</p><p><strong>Governance ownership</strong> <em>(V1.1 Addition &#8212; Dr. John Ferguson, MD, FACS)</em> &#8212; &#8220;Is there a governance process for monitoring AI performance post-deployment?&#8221; is the right question. The follow-up that makes it actionable: Who owns it, by name, organizational level, cadence, benchmark, and remediation authority? A governance process without a named accountable human is a document, not a safeguard.</p><p>Version 3.1 extends Layer 5 to include the Longitudinal Memory Spine explicitly. For each AI suggestion and clinician decision, the audit trail records the LMS snapshot, the key data and patterns the system and clinician saw at that moment.
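</p><p>A minimal sketch of what one such audit record could look like, with the LMS snapshot pinned by hash; the schema is an illustrative assumption, not a CARAF-mandated format:</p><pre><code>from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

def pin_lms_snapshot(snapshot):
    # Canonical-JSON hash of the record state seen at decision time.
    return hashlib.sha256(
        json.dumps(snapshot, sort_keys=True).encode("utf-8")
    ).hexdigest()

@dataclass
class Layer5AuditEvent:
    # Illustrative schema, not a CARAF-mandated format.
    encounter_id: str
    ai_platform: str
    model_version: str
    samd_classification: str      # FDA SaMD status as documented
    within_validated_use: bool
    vendor_indemnity_in_place: bool
    ai_recommendation: str
    clinician_action: str         # accepted / modified / rejected
    clinician_rationale: str
    lms_snapshot_sha256: str      # what was known, pinned by hash
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def append_to(self, path="layer5_audit.jsonl"):
        # Append-only, institution-held: survives vendor relationships.
        with open(path, "a") as f:
            f.write(json.dumps(asdict(self)) + "\n")</code></pre><p>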
Defense attorneys, regulators, and carriers can reconstruct: &#8220;Given this longitudinal context, was the AI&#8217;s suggestion reasonable, and was the clinician&#8217;s decision defensible?&#8221;</p><p>This is the infrastructure that carriers need, and currently lack, to underwrite AI-assisted clinical environments with precision rather than guesswork.</p><div><hr></div><h3><strong>Layer 5B &#8212; Pre-Decisional System Governance</strong></h3><p><em>(V3.1 Addition &#8212; Developed through the CARAF think tank process)</em></p><p>Layer 0 governs what the AI is constrained from doing before deployment. Layer 5 governs who owns the audit trail and monitoring process after clinical encounters occur. This layer addresses the governance gap between those two points, the ongoing operational decisions that shape every AI output before it reaches the clinician.</p><p>By the time a physician interrogates an AI recommendation at Layer 3, a series of consequential decisions have already been made. What information was included in the AI&#8217;s input. What thresholds define acceptable performance. What conditions trigger escalation or intervention. Those decisions are not made at the clinical encounter. They are made at the system level, often at deployment, often by vendors, often without a named accountable owner, and they run continuously, invisibly, beneath every clinical interaction.</p><p>CARAF names this as a governance failure. Invisible system-level decisions are not neutral. They define what the clinician sees, what the AI surfaces, and what gets filtered before human judgment begins. Without explicit ownership of that architecture, the pre-decisional layer is ungoverned, and ungoverned systems, in litigation, belong to the plaintiff&#8217;s attorney.</p><p><strong>The operational questions this layer must answer:</strong></p><ul><li><p>Who owns the performance thresholds that define acceptable AI output, and are they documented with a named individual, organizational level, and review cadence?</p></li><li><p>Who controls the escalation trigger logic, the conditions under which a finding is routed to human review rather than surfaced as a clean output?</p></li><li><p>Who governs what information is included in or excluded from AI input at the system level, and is that governance visible to the clinicians relying on the output?</p></li><li><p>When performance thresholds or escalation parameters are changed, by the vendor, by the institution, or by a model update, is that change documented, disclosed to clinical staff, and logged in the audit trail?</p></li><li><p>Is there a named human owner of the pre-decisional architecture, by title, organizational level, review cadence, and remediation authority, or is that ownership assumed to live with the vendor?</p></li><li><p>Does the vendor contract explicitly define who owns threshold and escalation governance post-deployment, and does it survive a vendor update cycle?</p></li></ul><p><strong>The relationship to Layer 3:</strong> The physician interrogation checkpoint assumes the AI output being interrogated was produced by a governed, visible, accountable system. Layer 5B is what makes that assumption defensible. Without it, the clinician is exercising independent judgment over an output whose parameters were set by no one they can name.</p><p><strong>The insurance architecture connection:</strong> When a patient is harmed by an AI-assisted clinical decision, plaintiff discovery will reach the pre-decisional layer. Who set the thresholds. 
<p>Pre-decisional system governance is not a vendor responsibility that healthcare institutions can delegate and forget. It is a shared accountability that must be negotiated at procurement, named at deployment, and owned continuously for as long as the system is running.</p><p><em>Layer 5B was developed through the CARAF think tank process in response to operational insight contributed by the AI enablement and governance constituency. Version 4.0 will incorporate structured expert contributions refining this layer from clinical, legal, insurance, and AI operations constituencies.</em></p><div><hr></div><h3><strong>Layer 6 &#8212; Cybersecurity &amp; Integrity Assurance</strong></h3><p><em>(V1.2 Addition)</em></p><p>A hacked clinical AI is not only a data breach; it is a patient safety event. If an adversary can manipulate AI output (changing a recommended drug dosage, altering a diagnostic flag, suppressing a critical finding), the physician acting in good faith on that compromised output has no way of knowing the recommendation was tampered with. The audit trail will show the physician followed the AI. What it will not show is that the AI had been compromised.</p><p>This layer addresses the integrity of the AI system itself, not just its audit trail. Layers 0 through 5 govern how AI and physician interact. Layer 6 governs whether what the AI said can be trusted as the actual output of a validated, uncompromised system.</p><p><strong>The four-policy coverage gap this layer exposes:</strong> A malpractice policy covers clinical negligence. A cyber liability policy covers data breaches and system compromises. A Tech E&amp;O policy covers the vendor&#8217;s software failures. And a CGL policy endorsed with ISO Form CG 40 47 now excludes generative AI bodily injury outright. When a compromised AI produces a harmful clinical recommendation, all four policies may be implicated, and all four carriers may dispute primary responsibility. The physician is left without clean coverage at the moment of greatest exposure.</p><p><strong>Pre-deployment requirements for Layer 6:</strong></p><ul><li><p>Has the vendor demonstrated that the AI platform is protected against unauthorized access, manipulation, or adversarial input injection?</p></li><li><p>Is there a documented process for detecting whether AI output has been tampered with between generation and delivery to the clinician?</p></li><li><p>Does the vendor maintain a Software Bill of Materials (SBOM)?
(Required under FDA 2025 premarket guidance for AI-enabled devices)</p></li><li><p>Has the vendor undergone independent third-party cybersecurity audits, and are results available for review before deployment?</p></li><li><p>Is there a documented incident response plan specific to AI output compromise, including clinical notification protocols?</p></li><li><p>Are patient data and AI model inputs appropriately segregated?</p></li><li><p>Does the vendor carry cyber liability insurance with limits sufficient relative to the volume and acuity of clinical decisions the AI supports?</p></li><li><p>Is there a defined process for notifying the practice if the AI system is updated, patched, or modified in any way that could affect clinical output or validated performance?</p></li></ul><p><strong>The liability allocation question:</strong> In the event of a cybersecurity incident affecting AI output integrity, who bears liability: the vendor, the health system, or the physician? This question belongs in the vendor contract before deployment, not in a courtroom after an adverse event.</p><p>Ask all carriers before deployment: &#8220;If our clinical AI is hacked and a patient is harmed as a result of manipulated AI output, which policy responds, and are there exclusions that would leave us without coverage?&#8221;</p><div><hr></div><h2><strong>Part 2A: The Insurance Architecture</strong></h2><h3><strong>Why Four Policies Simultaneously Fail the Patient</strong></h3><p><strong>Prefatory Note</strong></p><p>CARAF was designed from the beginning to address the governance gap between clinical AI deployment and the systems built to protect patients when things go wrong. The clinical layers address that gap from the inside.</p><p>This section addresses it from the outside.</p><p>The insurance architecture that is supposed to respond when a patient is harmed by AI-assisted clinical care is broken, not in one place, but in four places simultaneously. Understanding how each policy fails, and why all four fail together, is not a peripheral concern for healthcare risk managers.</p><p>It is the most urgent unresolved problem in healthcare AI governance today.</p><h3><strong>The Four Policies &#8212; Who Is Covered, What Is Covered, and Where Each Fails</strong></h3><p><strong>Policy One: Medical Professional Liability</strong></p><p>Who is covered: The named physician, the medical group, employed clinical staff acting within their scope of duties.</p><p>What is covered: Injuries arising from the rendering of professional services by a licensed professional within their scope of training.</p><p>How it fails with AI: The MPL insuring agreement is built on a foundational assumption: that a traceable human act or omission caused the harm. When AI is in the clinical decision chain, that assumption is under severe stress.</p><p>The definition of &#8220;insured&#8221; in a standard MPL policy does not contemplate an AI system. The definition of &#8220;professional services&#8221; covers the rendering of care by a licensed professional. AI has no license. AI has no scope of training. AI has no standard of care recognized by law.</p><p>When AI participates in a clinical decision and a patient is harmed, the coverage trigger requires identifying a human act or omission. But the error may have occurred in the model&#8217;s training data, built on populations that underrepresent the patient in front of the physician. It may have occurred in a software update deployed last month without clinical notification.
It may have occurred in the interaction between this patient&#8217;s data and an algorithm validated for a different population entirely.</p><p>There is no moment. There is no act. There is no trigger.</p><p>The MPL policy is silent on AI. In insurance, silence is not coverage.</p><p><em>The question every MPL carrier must answer: Does your current policy language cover claims arising from AI-assisted clinical decisions, and if so, where is that language?</em></p><p><strong>Policy Two: Cyber Liability</strong></p><p>Who is covered: The healthcare entity holding protected health information.</p><p>What is covered: Data breaches, ransomware events, privacy violations, HIPAA regulatory penalties, and business interruption arising from cybersecurity incidents.</p><p>How it fails with AI: Cyber liability insurance was designed for a specific risk architecture, one in which the harm event is a data compromise. It was not designed for patient harm arising from a clinical decision in which AI produced a flawed recommendation.</p><p>The distinction is precise and important. Cyber liability responds when data is compromised. It does not respond when a patient is harmed because an AI model&#8217;s output was clinically incorrect, statistically biased, or architecturally unable to account for this patient&#8217;s longitudinal history.</p><p>Healthcare is already the prime target for cyberattacks. Cyber coverage is essential. But it is not a substitute for clinical liability coverage. When someone argues that a cyber policy addresses AI clinical liability, they are solving the wrong problem with the wrong tool.</p><p><em>The question every cyber carrier must answer: Does your policy respond to patient harm arising from a flawed AI clinical recommendation, and if not, which policy does?</em></p><p><strong>Policy Three: Commercial General Liability</strong></p><p>Who is covered: The facility against third-party bodily injury, property damage, and personal and advertising injury claims.</p><p>What is covered: General operational liability, premises liability, and broad third-party injury claims not covered by more specific policies.</p><p>How it fails with AI: In January 2026, ISO released Form CG 40 47, a Commercial General Liability endorsement that explicitly excludes bodily injury, property damage, and personal and advertising injury arising out of generative artificial intelligence.</p><p>ISO is not a carrier. ISO is the Insurance Services Office, the organization that develops standardized policy forms used across the U.S. commercial insurance market. When ISO releases a new exclusion form, it reflects the market&#8217;s collective judgment that a risk is real, unpriced, and needs to be carved out of existing coverage.</p><p>The CGL market has drawn a line in the sand. In writing. In standardized form language. Effective January 2026.</p><p>If you haven&#8217;t reviewed your CGL renewal for this endorsement, review it now. 
The exclusion may already be in your policy.</p><p><em>The question every hospital CFO must answer: Has your CGL policy been endorsed with ISO Form CG 40 47, and do you know which clinical AI tools in your workflow that exclusion now applies to?</em></p><p><strong>Policy Four: Technology Errors and Omissions</strong></p><p>Who is covered: The AI vendor, the company that built, trained, owns, and licenses the clinical tool.</p><p>What is covered: Financial losses and third-party claims arising from errors, failures, or omissions in the vendor&#8217;s technology product, software that doesn&#8217;t perform as advertised, inaccurate outputs, technology that causes financial harm to a client.</p><p>How it fails with AI: Tech E&amp;O is the policy the healthcare risk community has most consistently overlooked, and the one that most directly implicates the vendor&#8217;s liability when a clinical tool fails.</p><p>But Tech E&amp;O has a precise and critical boundary. It stops at the vendor&#8217;s door.</p><p>The physician is not an additional insured on the AI vendor&#8217;s Tech E&amp;O policy. The hospital is not an additional insured. The clinical workflow in which the AI operates is not covered. Tech E&amp;O was written for the vendor&#8217;s liability, not for the liability of the people using the vendor&#8217;s product.</p><p>This matters in three specific ways.</p><p><em>The scale problem.</em> OpenAI reportedly carries approximately $300 million in insurance coverage, against legal exposure that plaintiff attorneys, in copyright cases alone, are valuing in the trillions under statutory damages frameworks. The gap between what these companies carry and what they potentially owe is not a rounding error. It is existential. Both OpenAI and Anthropic have responded to that gap the same way: self-insurance, using investor capital earmarked to absorb legal costs that the insurance market won&#8217;t cover.</p><p>When you deploy a clinical AI tool from a vendor whose insurance coverage is a fraction of their potential liability, and whose gap is covered by venture capital rather than an admitted insurance carrier, you are not transacting with a fully insured counterparty. That belongs in every AI vendor procurement conversation. It almost never appears there.</p><p><em>The indemnification problem.</em> AI vendor agreements almost universally contain indemnification clauses that limit vendor liability and push responsibility back toward the facility and the physician. The vendor&#8217;s contract says: we are not liable for clinical outcomes. The MPL policy says: we cover the physician&#8217;s professional acts. The AI sits in the middle, owned by the vendor, used by the physician, indemnified by neither.</p><p><em>The exclusion problem.</em> Research from Hunton Andrews Kurth confirms a compounding dynamic: E&amp;O policies typically restrict coverage to failures of software developed or created by the insured organization. When a third-party AI malfunctions and litigation follows against the hospital or physician, the institution&#8217;s own E&amp;O or MPL policy may not respond, because the failure originated with a vendor&#8217;s product. Broad new AI exclusions being drafted by carriers extend to usage of any artificial intelligence system, including that of third parties. The AI vendor&#8217;s Tech E&amp;O doesn&#8217;t cover the physician. And the physician&#8217;s MPL policy may now exclude harm caused by the AI vendor&#8217;s system entirely.</p><p>That is not a gap. 
That is a void with walls closing in from both sides.</p><p><em>The question every AI vendor must answer: Does your current Tech E&amp;O coverage address clinical deployment specifically, and are the healthcare facilities and physicians using your platform named as additional insureds?</em></p><h3><strong>The Four-Policy Matrix</strong></h3><table><thead><tr><th>Policy</th><th>Covers Who</th><th>Covers What</th><th>Responds to AI Clinical Harm?</th></tr></thead><tbody><tr><td>MPL</td><td>Physician / Medical Group</td><td>Professional acts &amp; omissions</td><td>No, silent on AI</td></tr><tr><td>Cyber</td><td>Healthcare Entity</td><td>Data events / privacy violations</td><td>No, wrong risk architecture</td></tr><tr><td>CGL</td><td>Facility</td><td>Third-party bodily injury</td><td>No, explicitly excluded (ISO CG 40 47)</td></tr><tr><td>Tech E&amp;O</td><td>AI Vendor</td><td>Technology errors &amp; failures</td><td>Possibly, but does not extend to physician or institution</td></tr></tbody></table><h3><strong>The Coverage Trigger Trap: Claims-Made vs. Occurrence</strong></h3><p>Even if you find the right policy, even if you find a carrier willing to defend, there is a second structural problem that no current policy language resolves.</p><p>Both claims-made and occurrence policies require answering the same foundational question: when did the injury occur?</p><p>With AI in the clinical workflow, that question may be unanswerable.</p><p>Consider the Clifford scenario, applicable to any AI-assisted clinical harm event:</p><p>Was it when the model was trained on data that contained a systematic bias? When the hospital deployed the model without adequate demographic validation? When the physician acted on the flawed output at the clinical encounter? When the patient&#8217;s condition progressed undetected over subsequent months?</p><p>Under a claims-made policy, the carrier on risk at the time of the clinical encounter argues the injury occurred when the model was trained, before their policy period. The carrier on risk during training argues the injury occurred when the physician acted, after their policy period.</p><p>Under an occurrence policy, determining when the injury-causing event occurred may require expert testimony about AI model behavior that no court has yet fully addressed.</p><p>In the Clifford v. Prenuvo case specifically: the scan was July 2023. The catastrophic stroke was March 2024. The retroactive date question alone, before anyone addresses the AI question, creates a coverage dispute between carriers.</p><p>The claims-made vs. occurrence trap is not a theoretical edge case. It is the logical structure of every AI-assisted clinical harm event. The coverage trigger question does not have an answer in current policy language, because current policy language was written before the question existed.</p><h3><strong>The Coordination Failure: Four Carriers, Four Denials</strong></h3><p>Imagine a facility that has done everything right by current standards. MPL policy. Cyber coverage. Named as additional insured on the general liability program. AI vendor agreement reviewed and confirmed to carry Tech E&amp;O.</p><p>A patient is harmed. AI was in the workflow.</p><p>The MPL carrier looks for a traceable human act or omission. The causation chain runs through an AI model. Coverage is disputed.</p><p>The cyber carrier confirms this is not a data breach. It is a clinical harm event.
Not their coverage.</p><p>The CGL carrier points to the ISO AI exclusion endorsement added at the last renewal. Not their coverage.</p><p>The AI vendor&#8217;s Tech E&amp;O carrier confirms the physician and hospital are not insureds on their policy. Not their coverage.</p><p>Four carriers. Four denials. One patient. One lawsuit.</p><p>Then the &#8220;other insurance&#8221; clauses begin. Each policy points at the others. The disputes persist. The physician is exposed. The hospital is exposed. The patient is uncompensated while the lawyers argue.</p><p>This is not a hypothetical. It is the logical and inevitable structure of every AI-assisted clinical harm event under the current insurance architecture.</p><h3><strong>What Needs to Change</strong></h3><p>The solution requires simultaneous movement from every direction.</p><p>MPL carriers must examine whether their policy definitions, specifically &#8220;insured,&#8221; &#8220;professional services,&#8221; and &#8220;covered act,&#8221; are adequate for a clinical environment in which AI is a participant in the decision chain. Policy language must be updated. Coverage triggers for machine learning error events must be defined. Insuring agreements must be coordinated with cyber and Tech E&amp;O policies.</p><p>Cyber carriers must clarify whether and how their policies respond to clinical harm events involving AI, and coordinate their insuring agreements with MPL to eliminate the gap between data events and clinical harm.</p><p>CGL carriers must develop alternative coverage language for healthcare facilities deploying AI, because ISO Form CG 40 47 has created a void that no current standard product fills.</p><p>AI vendors must restructure their Tech E&amp;O coverage to address clinical deployment specifically, including provisions that extend coverage to the physician and institutional users of their platforms. Vendor contracts must be renegotiated to align indemnification obligations with the actual risk distribution.</p><p>Healthcare facilities must make AI vendor insurance requirements a standard component of procurement and contracting, requiring vendors to carry coverage that specifically addresses clinical deployment, with facilities and physicians named as additional insureds where possible.</p><p>The &#8220;other insurance&#8221; problem requires a coordinated solution, either through a shared carrier program that writes all four coverage components under one roof, or through explicit cross-policy language that resolves disputes before they begin.</p><p>None of this exists yet as a standard market product. The pressure to create it is building with every case filed, every coverage dispute opened, and every clinical AI deployment that goes live without a governance framework behind it.</p><p>The governance layer and the insurance architecture must be built together.</p><p>That is what CARAF Version 3.1 proposes.</p><p><em>This section was developed through the AI &amp; The Oath series, a seven-post LinkedIn and Substack analysis of the insurance coverage gaps created by AI deployment in clinical healthcare settings. 
Full series available at aiandtheoath.substack.com</em></p><div><hr></div><h2><strong>Part 2B: The Clifford Anchor Case</strong></h2><h3><strong>What Active Litigation Reveals About the Governance Gap</strong></h3><p><strong>Why This Case Belongs in CARAF</strong></p><p>A governance framework built entirely on hypothetical scenarios has limited persuasive power with the constituencies it needs to reach: insurance carriers, healthcare attorneys, hospital administrators, and regulators.</p><p>The Clifford v. Prenuvo case changes that.</p><p>It is not a hypothetical. It is active litigation in New York Supreme Court. A real AI-assisted clinical tool. A real missed finding. A real patient with permanent catastrophic injuries. And a real plaintiff&#8217;s legal theory already reaching past the physician toward the platform itself.</p><p>Every governance gap CARAF is designed to address is present in this case. Every coverage failure the four-policy framework describes is visible in this case. Every documentation deficit that Layer 3 exists to prevent is on display in this case.</p><p>CARAF names it as an anchor case not to exploit a tragedy, but because Sean Clifford&#8217;s outcome is the clearest available illustration of what happens when the governance layer doesn&#8217;t exist.</p><p><strong>The Facts</strong></p><p>On July 15, 2023, Sean Clifford, a 37-year-old New Jersey resident and father of two, paid $2,500 for an elective whole-body MRI at a Prenuvo location in New York City.</p><p>Prenuvo markets its scan as one that uses AI and advanced imaging technology to check for hundreds of conditions, including silent killers like aneurysms. The company&#8217;s CEO has specifically stated the organization is fully focused on deploying the best hardware, software, artificial intelligence, and radiologists in the market.</p><p>The radiologist who interpreted Clifford&#8217;s scan, an independent contractor working through Prenuvo&#8217;s platform, documented no major concerns. The report stated: no evidence of proximal intracranial arterial aneurysm, no small vessel ischemia, normal vasculature.</p><p>On March 7, 2024, eight months after the scan, Clifford suffered a catastrophic stroke while on a business trip. He required three emergency brain surgeries.</p><p>The lawsuit, filed September 24, 2024, in New York State Supreme Court, alleges that the scan clearly showed a 60% narrowing and irregularity of the proximal right middle cerebral artery, visible in both the MRA sequence and the 3D MRA sequence, and that this finding was not documented in the report.</p><p>A third-party board-certified neurologist retained by the family reviewed the original images and called it an obvious miss. The expert stated the vasculature was incorrectly described as normal, and that the stenotic vessel could have been treated with targeted intervention, thereby preventing the catastrophic stroke.</p><p>The stroke occurred in the exact same area of the brain where the stenosis was visible in the July 2023 scan.</p><p>Clifford now has left-side paralysis, impaired vision, cognitive delays, inability to speak properly, and profound neurological deficits. He cannot dress himself.
He requires lifelong therapy.</p><p><strong>What Prenuvo Did Next &#8212; And Why It Matters</strong></p><p>Prenuvo&#8217;s legal response is instructive for every clinical AI vendor and every healthcare institution deploying AI.</p><p>The company first sought to compel arbitration, attempting to keep the case out of court entirely based on the consent agreement Clifford signed. That effort failed.</p><p>Prenuvo then attempted to have California law applied to the New York case, specifically because California has a cap on malpractice damages, while New York does not. A judge rejected this as well, ruling that the choice-of-law clause in the patient agreement applies only to breach of contract claims, not to malpractice, negligence, lack of informed consent, negligent hiring and supervision, or product liability.</p><p>The case is now proceeding in New York Supreme Court. No damages cap. Full exposure.</p><p><strong>The Product Liability Theory &#8212; And Why It Reaches the Vendor</strong></p><p>The plaintiff is not alleging only that one radiologist made an error. The complaint includes negligent hiring and supervision claims and product liability claims, reaching toward the platform itself.</p><p>The learned intermediary doctrine says the physician stands between the tool and the patient. That architecture depends on the physician being a genuinely independent decision-maker. When the tool is the system, when the radiologist&#8217;s workflow runs through Prenuvo&#8217;s AI-assisted platform and that platform&#8217;s output shapes every report, the independence argument weakens.</p><p>Prenuvo marketed AI as a feature. AI was the differentiator. AI was the proof of superiority. When the scan missed what a plaintiff&#8217;s neurologist called obvious, the marketing became evidence.</p><p>The plaintiff&#8217;s product liability theory is asking a question no court has yet fully resolved: when AI is not optional, when it is the system the clinician works within, does the vendor bear a share of the liability for what that system produces?</p><p><strong>The Mandatory Workflow Question</strong></p><p>When a healthcare institution vets an AI system, implements it systemwide, issues access credentials to all clinical staff, and integrates it into the workflow such that clinicians have no practical alternative, the AI is no longer optional.</p><p>It is the system.</p><p>And when the system participates in a diagnosis, when the AI-assisted output shapes what the clinician sees, documents, and acts upon, the question of whether the AI caused the harm becomes genuinely complex.</p><p>In the Clifford case, the radiologist worked within Prenuvo&#8217;s platform. The AI-assisted scan was the product being sold. The report generated through that system told Clifford his vasculature was normal.</p><p>If the AI component of that platform contributed to the missed finding, and the product liability claim suggests the plaintiff believes it did, the question is no longer only whether the radiologist made an error. The question is whether the system produced a defective output that a reasonable radiologist relying on that system would have been misled by.</p><p>That is a product liability question. It reaches the vendor directly, past the learned intermediary entirely.</p><p><strong>The Four-Policy Breakdown Applied to Clifford</strong></p><p><em>Medical Professional Liability</em> &#8212; The radiologist carried MPL coverage. The coverage trigger requires a traceable human act or omission. 
But Prenuvo&#8217;s AI-assisted platform shaped every report. If the platform&#8217;s output was the proximate cause of the missed finding, the human act is now in dispute. The claims-made vs. occurrence trap is also present in this timeline. Scan date: July 2023. Stroke date: March 2024. Under a claims-made policy, the retroactive date question alone, before anyone addresses the AI question, creates a potential coverage dispute between carriers.</p><p><em>Cyber Liability</em> &#8212; This is not a data breach. This is a patient harmed by a clinical recommendation produced through an AI-assisted platform. Cyber liability does not respond.</p><p><em>Commercial General Liability</em> &#8212; ISO Form CG 40 47, effective January 2026, explicitly excludes bodily injury arising from generative AI. Prenuvo marketed AI as a core feature. The CGL exclusion was written for exactly this category of scenario.</p><p><em>Technology Errors &amp; Omissions</em> &#8212; This is the policy the plaintiff&#8217;s product liability theory is reaching toward. Prenuvo&#8217;s Tech E&amp;O covers failures of their software and inaccurate outputs. But the vendor&#8217;s indemnification clauses, standard in every AI vendor agreement, are designed to push liability back to the facility and the radiologist. The radiologist is not an additional insured on Prenuvo&#8217;s Tech E&amp;O policy. The radiologist is defending alone, without a policy written for the world they were practicing in.</p><p><strong>What the Clifford Case Teaches CARAF</strong></p><p><em>Layer 0, Upstream Constraint:</em> Was Prenuvo&#8217;s AI validated for the specific vascular finding it missed? What patient populations were underrepresented in its training data? These are Layer 0 questions. The Clifford case suggests they were never asked, or never answered, before deployment.</p><p><em>Layer 2, AI Input &amp; Differential Generation:</em> Did the platform present the AI&#8217;s assessment of the vascular findings with confidence indicators and flagged uncertainties? Or did it present a clean output the radiologist had no reason to question? The answer to that question is at the center of the product liability theory.</p><p><em>Layer 3, Physician Interrogation Checkpoint:</em> Was there any documented record of the radiologist&#8217;s independent assessment of the vascular findings, separate from the AI platform&#8217;s output? If not, and the lawsuit suggests there was not, the defense has no record to stand on. The physician cannot prove they thought first. The vendor cannot prove their system doesn&#8217;t induce over-reliance. Both are being asked.</p><p><em>Layer 5, Audit Trail &amp; Governance:</em> Did the audit trail capture which AI platform version was used, what its validated use parameters were, and whether the vascular finding was within its validated detection scope? Without that record, neither side can reconstruct what the AI actually produced and why.</p><p><em>Layer 5B, Pre-Decisional System Governance:</em> Who set the thresholds that defined what Prenuvo&#8217;s platform would surface as a finding and what it would pass as clean? Who owned the escalation logic, the conditions under which a vascular irregularity would be flagged for independent review versus returned as normal vasculature? Those decisions were made somewhere, by someone, before the radiologist ever opened the scan. The Clifford case will eventually reach that layer. 
CARAF names it now.</p><p><em>Layer 5A, Insurance Architecture:</em> All four policies fail the Clifford scenario simultaneously. The governance layer CARAF describes (the documented independent judgment, the audit trail, the vendor validation requirements) is precisely what would have created a defensible record for both the radiologist and the platform.</p><p><strong>The Documentation Imperative</strong></p><p>There is one thing the Clifford case makes undeniably clear for every clinician and every institution using AI.</p><p>Documentation of independent clinical judgment is no longer optional.</p><p>If AI is in your workflow, the only defense that survives a plaintiff&#8217;s product liability theory is a clear record showing that the clinician reviewed the AI&#8217;s output, applied independent reasoning, and made a documented clinical decision.</p><p>Without that record, the defense collapses into: the physician relied on the AI.</p><p>Which is precisely the foundation of the plaintiff&#8217;s case.</p><p>The governance layer CARAF describes (the architecture that prompts clinicians to document independent judgment, builds the chronological patient record, and creates the audit trail that makes defense possible) is not a compliance checkbox.</p><p>After Clifford, it is the standard of care.</p><p><strong>A Note on the Human Reality</strong></p><p>Sean Clifford is 37 years old. He has two children. He cannot dress himself.</p><p>He paid $2,500 for a scan that was marketed as the future of preventive medicine. He followed the instructions. He trusted the technology. He trusted the system.</p><p>Whatever the legal outcome of this case, that human reality is the reason the governance question matters, not just as a liability problem, but as a patient safety imperative.</p><p>The physicians and institutions getting this right are not doing it because their attorneys told them to.</p><p>They are doing it because they understand that AI in clinical care is a privilege that comes with a profound obligation.</p><p>We owe it to the patients to get the architecture right, before the next Sean Clifford.</p><p><em>The Clifford v. Prenuvo case is active litigation. Facts cited in this section are drawn from the publicly filed complaint and publicly reported court rulings. This section does not constitute legal advice and should not be relied upon as legal analysis of the case.</em></p><div><hr></div><h2><strong>Part Three: We Have Seen This Before &#8212; The EMR Parallel</strong></h2><h3><strong>3.1 How Epic and EMR Adoption Actually Happened</strong></h3><p>No single body sat down and designed the EMR mandate. What happened was a convergence of pressures that created an unavoidable forcing function, and the lesson for AI governance is direct.</p><p><em>The Catalyst:</em> The HITECH Act of 2009 tied meaningful use of certified EMR systems to Medicare and Medicaid reimbursement. Adoption was no longer a philosophical choice. It was financial survival.</p><p><em>The Resistance:</em> Physicians pushed back. Health systems balked at implementation costs. Vendors were protective of proprietary systems. Nobody agreed on who should set the standards.</p><p><em>What Broke the Logjam:</em> Not consensus. Reimbursement leverage. CMS said: document care in a certified system or be penalized financially. That single mechanism aligned incentives across an entire industry that could not align itself voluntarily.</p><p><em>The Result:</em> Grudging, expensive, and complete adoption.
And over time, EMR became the infrastructure that now makes AI-assisted care possible, because the data AI learns from lives inside those systems.</p><p>EMR adoption created the raw material for a Longitudinal Memory Spine. CARAF 3.1 assumes we finally use that longitudinal record not just for billing and documentation, but as the core context for AI reasoning and accountability.</p><h3><strong>3.2 What the Evidence Actually Shows About EMR and Malpractice</strong></h3><p>The relationship between EMR adoption and malpractice claims is nuanced, and that nuance maps directly onto what CARAF must anticipate for AI governance.</p><p><em>What the research supports:</em> Multiple systematic reviews concluded that health information technology improved quality, safety, and efficiency. Research found that EMR use within hospitals at the time of alleged malpractice is associated with a four-month (roughly 12%) reduction in claim resolution time. Faster resolution means lower costs for carriers, health systems, and patients alike.</p><p><em>What the research does not cleanly support:</em> A retrospective cohort study of nearly 900 Colorado physicians insured through COPIC could not demonstrate a statistically significant reduction in medical liability claim rates attributable to EHR use alone. The evidence that EMR directly prevents claims remains limited and mixed.</p><p><em>The double-edged sword finding:</em> EMR documentation functions as both shield and sword in litigation. The same comprehensive audit trail that defends a physician who documented sound clinical reasoning can expose a physician whose documentation reveals inconsistency, alert fatigue, copy-paste errors, or auto-population mistakes. Metadata is discoverable. Audit trails are permanent. Records modified after an adverse event leave forensic footprints that plaintiff attorneys are now trained to find.</p><p>The lesson for CARAF is direct: the framework must be built around genuine clinical reasoning, not performative documentation. The difference between those two things is what ultimately determines whether Layer 5 functions as a shield or a sword.</p><h3><strong>3.3 The Unavoidable Truth About Unexpected Outcomes</strong></h3><p>No framework, however well designed, eliminates adverse outcomes. Medicine has never promised that. AI does not change this fundamental reality. What AI changes is the scale and speed at which decisions are made, and therefore the scale and speed at which unexpected outcomes can occur if the decision-support infrastructure is flawed.</p><p>CARAF is not built on the premise that AI governance eliminates harm. It is built on the premise that structured accountability (transparent reasoning, documented judgment, and honest audit trails) creates the fairest possible basis for evaluating what happened when harm occurs despite best efforts.</p><h3><strong>3.4 The Direct Parallel for CARAF</strong></h3><p>The same cast of characters resisting AI governance today resisted EMR adoption then.
The pathway is visible:</p><ul><li><p>CMS ties AI governance documentation requirements to reimbursement eligibility, the HITECH model applied to AI accountability</p></li><li><p>Major carriers make CARAF-aligned governance a condition of medical professional liability coverage, the way cyber liability controls are now required, not suggested</p></li><li><p>Health systems build it into AI procurement standards and credentialing requirements</p></li><li><p>AI vendors are required to build audit trail and transparency functionality into certified clinical AI tools</p></li></ul><h3><strong>3.5 The Regulatory Landscape &#8212; The Forcing Functions Already in Motion</strong></h3><p>CARAF does not exist in a regulatory vacuum. The governance framework it proposes is directly aligned with, and in several cases ahead of, the regulatory landscape now taking shape around clinical AI.</p><p><em>FDA, January 2025 Draft Guidance on AI-Enabled Device Software Functions:</em> The FDA issued comprehensive draft guidance applying a Total Product Life Cycle approach to AI-enabled medical devices. The guidance requires manufacturers to document model description, data lineage, performance tied to clinical claims, bias analysis and mitigation, human-AI workflow design, and post-market monitoring. As of July 2025, over 1,250 AI-enabled medical devices have been authorized for marketing in the United States, with 97% cleared via the 510(k) pathway. CARAF&#8217;s Layer 0 through Layer 5B requirements are directly aligned with what this guidance will require.</p><p><em>The Learned Intermediary Doctrine, The Liability Reality Physicians Must Understand:</em> Current case law suggests that lawsuits are rarely successful against AI software companies. Licensing agreements typically stipulate that final decision-making responsibility lies with the clinician. That doctrine places all liability on the physician regardless of what the AI recommended. CARAF&#8217;s Layer 3 exists precisely because of this doctrine. If the physician bears the liability, the physician needs the proof of independent judgment. That proof does not currently exist in most clinical AI deployments.</p><p>The Clifford v. Prenuvo case represents the first major direct challenge to the learned intermediary doctrine in an AI-assisted clinical harm case. The outcome of that litigation may redefine how the doctrine applies to mandatory AI workflows.</p><p><em>Federation of State Medical Boards, April 2024 Policy Recommendations:</em> The FSMB formally stated that medical boards should hold clinicians, not AI developers, liable when AI tools produce medical errors. This is not law. It is not binding. But it is the clearest official signal of where regulatory accountability is directed, squarely at the physician and the institution. CARAF&#8217;s per-layer governance requirements are designed to give that physician and institution a defensible record.</p><p><em>State-Level Legislation, The Patchwork Problem:</em> In 2025, 47 states introduced more than 250 bills addressing health AI regulation. Of those, 33 became law in 21 states. In 2026, approximately 200 more state AI bills have been tracked.
CARAF&#8217;s platform-agnostic, specialty-agnostic architecture is designed to function as a consistent governance standard regardless of which state&#8217;s regulatory requirements apply.</p><p><em>The EU AI Act, August 2026 High-Risk Obligations:</em> The EU AI Act&#8217;s high-risk obligations phase in from August 2026 through 2027, adding an international regulatory layer for organizations marketing AI devices globally. CARAF&#8217;s documentation, transparency, and audit trail requirements align directly with these obligations.</p><p><em>The Four-Policy Coverage Void, The Question No Carrier Is Asking:</em> When AI-assisted care harms a patient, all four policies may be implicated, and all four carriers may dispute primary responsibility. CARAF&#8217;s Part 2A names this gap explicitly and provides the pre-deployment questions every practice must put to every carrier before the first AI-assisted clinical encounter.</p><h3><strong>3.6 The Critical Difference</strong></h3><p>Epic documented decisions humans were already making. CARAF must capture the interaction between human and AI reasoning in real time. That requires AI vendors to build accountability into their own platforms, to document their own reasoning in ways that can be interrogated. That has no EMR analog, and it is where vendor resistance will be strongest. It is also where the most important policy and legal battles will be fought.</p><div><hr></div><h2><strong>Part Four: The Constituency Questions</strong></h2><p>This is where the think tank begins. The goal is not to answer these questions here; it is to surface the expert insight that only comes from people living these problems from their specific seat.</p><p><strong>For Legal &amp; Regulatory Professionals</strong></p><ul><li><p>In the absence of a defined standard of care for AI-assisted clinical decision making, how are courts currently evaluating physician liability when AI output was a factor in a disputed treatment decision?</p></li><li><p>Is product liability theory, applied to FDA-regulated SaMD, a viable path to vendor accountability, or does learned intermediary doctrine insulate vendors when a physician was in the loop? How does the Clifford v. Prenuvo product liability theory change this analysis?</p></li><li><p>What legislative or regulatory mechanism is most likely to create the AI governance forcing function that HITECH created for EMR, and who has the political will to drive it?</p></li><li><p>When a vendor&#8217;s Tech E&amp;O policy and a physician&#8217;s MPL policy both potentially apply to the same AI-assisted harm event, how do &#8220;other insurance&#8221; clauses resolve the conflict, and what does current case law say about that resolution?</p></li></ul><p><strong>For Insurance &amp; Underwriting Professionals</strong></p><ul><li><p>How are carriers currently underwriting medical professional liability for health systems with material AI integration, and what governance documentation, if any, are they requiring?</p></li><li><p>Are vendor indemnity agreements being stress-tested at renewal, or only examined at claim time?
What happens when a SaMD vendor&#8217;s limits are insufficient to cover a serious adverse outcome at a major health system?</p></li><li><p>Could a coalition of major MPL carriers create the CARAF-equivalent mandate through coverage conditions, the way the cyber market drove security control adoption, without waiting for regulatory action?</p></li><li><p>ISO Form CG 40 47, effective January 2026, has explicitly excluded generative AI bodily injury from CGL coverage. What is the MPL market&#8217;s equivalent response, and when does it arrive?</p></li><li><p>How should the claims-made vs. occurrence framework be restructured to address AI-assisted harm events where the injury-causing event may have occurred in training data, deployment, or the clinical encounter?</p></li></ul><p><strong>For Health System &amp; Clinical Leadership</strong></p><ul><li><p>What does responsible AI procurement look like today, and are health systems requiring audit trail capability and transparency documentation before deployment?</p></li><li><p>How do we train physicians to interrogate AI output rather than defer to it, particularly in high-volume, high-pressure environments where cognitive load already exceeds reasonable limits?</p></li><li><p>Where is clinical intuition most likely to catch what AI misses, and how do we preserve that capacity as AI becomes more embedded in workflow?</p></li><li><p>Do we have a usable longitudinal view of each patient&#8217;s record that AI and clinicians can share, or are we still asking both to reason from fragments?</p></li><li><p>Who owns the integrity of the Longitudinal Memory Spine: its data quality, its access controls, and how AI systems interact with it?</p></li></ul><p><strong>For Ethics Professionals</strong></p><ul><li><p>When AI performs better than the average physician on a specific diagnostic task, what is the ethical framework for a physician who overrides a correct AI recommendation based on clinical intuition that proves wrong?</p></li><li><p>How do we ensure that AI governance frameworks protect the populations most likely to be harmed by AI performance gaps: historically underrepresented groups whose data is underweighted in training sets?</p></li><li><p>Is there an ethical obligation to disclose AI involvement in clinical decision making to patients, and if so, what does meaningful, plain-language informed consent look like in an AI-assisted encounter?</p></li><li><p>What is the patient&#8217;s right to opt out of AI-assisted clinical decision making, and how does that right interact with a physician&#8217;s obligation to use available tools that may improve diagnostic accuracy?</p></li></ul><p><strong>For AI Vendors &amp; Technology Companies</strong></p><ul><li><p>Does your current Tech E&amp;O coverage address clinical deployment specifically, and are the healthcare facilities and physicians using your platform named as additional insureds?</p></li><li><p>A well-governed clinical AI deployment is your best legal defense. The documentation that proves the clinician exercised independent judgment also proves your tool was used as a tool, not a replacement for clinical reasoning. Are you building that governance layer into your platform?</p></li><li><p>When your tool is mandatory in a clinical workflow, when clinicians have no practical alternative, how does your liability theory change? Have you modeled the product liability exposure that the Clifford v.
Prenuvo case represents?</p></li><li><p>Does your vendor contract&#8217;s indemnification clause align with your actual insurance coverage, or does it push liability toward facilities whose MPL policies are not designed to receive it?</p></li></ul><div><hr></div><h2><strong>Part Five: Anticipated Challenges and Honest Limitations</strong></h2><p>A framework that claims to have all the answers will be dismissed by the people most qualified to improve it. What follows is an honest accounting of where CARAF is incomplete, where legitimate expert disagreement exists, and where the hardest unresolved tensions live. These are not weaknesses to be defended; they are the questions this think tank is specifically designed to answer.</p><p><strong>5.1 The Documentation Burden Objection</strong> CARAF does not propose that physicians write structured interrogations of AI output for every vital sign alert or routine confirmatory read. The interrogation checkpoint scales to the acuity and complexity of the decision. Specialty-specific implementation guidance, particularly for high-volume environments like radiology, emergency medicine, and anesthesiology, is a recognized gap in Version 3.1 and a priority for development through this think tank.</p><p><strong>5.2 The Legal Standing Objection</strong> CARAF is not proposed as a legally binding standard of care. It is a practitioner-developed framework designed to inform the development of formal standards by the organizations that carry the authority to create them: CMS, the AMA, state medical boards, ASHRM, and specialty colleges. Formal standards follow structured practitioner consensus. CARAF is designed to accelerate that consensus.</p><p><strong>5.3 The Enforcement Mechanism Objection</strong> CARAF does not need universal voluntary adoption to become the effective standard. It needs the right forcing function. Carriers are better positioned than regulators to create that forcing function quickly. Making CARAF-aligned governance a condition of medical professional liability coverage at renewal would drive health system adoption within a procurement cycle.</p><p><strong>5.4 The Patient Rights and Health Equity Objection</strong> A patient-facing dimension of the framework, addressing disclosure obligations, consent language, and the right to inquire about AI involvement in one&#8217;s care, is not a deferred consideration. It is a foundational design requirement explicitly established in the Equity Principle of this document. Pre-deployment validation requirements, bias auditing, demographic performance stratification, and transparency reporting belong in the framework as a structural layer. The ethics constituency of this think tank is specifically asked to develop the patient-facing disclosure standard as a core Version 4.0 deliverable.</p><p><strong>5.5 The Governance and Ownership Objection</strong> Who owns this? Who updates CARAF as AI evolves? Who adjudicates whether a health system is genuinely compliant or merely performing compliance? CARAF Version 3.1 does not answer these questions. It establishes the framework&#8217;s architecture and initiates the professional conversation that must precede any formal governance structure. Version 4.0 will be shaped by the legal, insurance, clinical, and ethics professionals who engage with Version 3.1. That is not a limitation.
It is the point.</p><p><strong>5.6 The Insurance Architecture Objection</strong> The four-policy coverage void described in Part 2A requires simultaneous movement from carriers, vendors, brokers, and facilities. No single actor can resolve it alone. CARAF names the void and provides the question set every constituency must bring to the table. The coordinated solution, whether a shared carrier program, explicit cross-policy language, or a purpose-built AI clinical liability product, will emerge from that conversation. CARAF is designed to start it.</p><div><hr></div><h2><strong>The Question Underneath All of It</strong></h2><p>We are not debating whether AI belongs in healthcare. That question is settled. We are debating whether the systems that are supposed to protect patients (legal, insurance, clinical governance, and ethics) can move fast enough to keep pace with the technology that is already changing medicine.</p><p>CARAF is not a promise that AI governance prevents all harm. It is a framework for ensuring that when harm occurs, as it inevitably will, because medicine is hard and outcomes are sometimes beyond anyone&#8217;s control, there is a clear, honest, and defensible record of the reasoning that preceded it.</p><p>That record protects patients whose outcomes deserve explanation. It protects physicians whose judgment deserves fair evaluation. It protects health systems whose governance deserves scrutiny. And it protects the integrity of a healthcare system that, at its best, is populated by people who became healers because they wanted to serve humanity.</p><p>The EMR transition tells us that the answer is yes, but only when the right forcing function exists.</p><p>Who in this conversation is already seeing this play out in real cases, real renewals, real policy decisions, or real clinical encounters? That is where the insight lives, and that is what this think tank is built to surface.</p><div><hr></div><p><strong>Michael Tekely, AAI</strong><br>Medical Professional Liability Insurance | Clinical Risk Management, Duke University Health System<br>Developer, CARAF &#8212; Clinical AI Reasoning &amp; Accountability Framework<br>AI &amp; The Oath &#8212; aiandtheoath.substack.com</p><p><em>The Clifford v. Prenuvo case analysis, the first active litigation testing the fault lines CARAF is designed to address, is available at aiandtheoath.substack.com.</em></p><p>Version 3.1 &#8212; With clinical contributions from John Ferguson, MD, FACS, expanded Longitudinal Memory Spine architecture, four-policy insurance framework, Clifford anchor case, and Layer 5B Pre-Decisional System Governance.</p><p><em>This framework does not constitute legal, clinical, or insurance advice. It is intended solely to stimulate professional discussion and collaborative development.</em></p><p><em>&#169; 2026 Michael Tekely, AAI. All rights reserved.
Reproduction or distribution without written permission is prohibited.</em></p>]]></content:encoded></item><item><title><![CDATA[CARAF V3.0 — Complete Framework (Continued)- Part II]]></title><description><![CDATA[Part 2A: The Insurance Architecture through The Question Underneath All of It]]></description><link>https://mikepackman.substack.com/p/caraf-v30-complete-framework-continued</link><guid isPermaLink="false">https://mikepackman.substack.com/p/caraf-v30-complete-framework-continued</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Mon, 30 Mar 2026 15:10:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Part 2A: The Insurance Architecture</strong></h2>
      <p>
          <a href="https://mikepackman.substack.com/p/caraf-v30-complete-framework-continued">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[CARAF — Clinical AI Reasoning & Accountability Framework - Part I]]></title><description><![CDATA[2 Parts for Version 3.0]]></description><link>https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-624</link><guid isPermaLink="false">https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-624</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Mon, 30 Mar 2026 15:06:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1></h1>
      <p>
          <a href="https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-624">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[My Mission Statement ]]></title><description><![CDATA[I am not a technologist.]]></description><link>https://mikepackman.substack.com/p/my-mission-statement</link><guid isPermaLink="false">https://mikepackman.substack.com/p/my-mission-statement</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Sun, 29 Mar 2026 12:46:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am not a technologist. I am not a clinician. But I spent 20 years understanding how clinical decisions get priced and defended &#8212; and 5.5 years inside Duke University Health System watching them get made.</p><p>What I have learned is that the most brilliant AI platform in the world is worthless &#8212; and dangerous &#8212; if it cannot answer three questions.</p><p>Did the physician think independently before the AI spoke?</p><p>Does the record capture the full story of that patient &#8212; their history, their socioeconomic reality, their prior encounters &#8212; so that the AI is reasoning over the whole person, not a snapshot?</p><p>And if something goes wrong, is there a clear, honest, defensible record of every decision that was made and why?</p><p>Right now, no platform, that I&#8217;m aware of, answers all three questions. No governance framework connects all three. No insurance policy is written for the world where all three matter simultaneously.</p><p>I am building that framework. Not alone &#8212; with a coalition of researchers, clinicians, and technologists who each hold a different piece of the answer.</p><p>My objective is not to build the AI. My objective is to build the accountability architecture that makes the AI safe enough to trust with a human life.</p><p>That is CARAF. That is what I am doing. And I am looking for like-minded people who understand the parts I don&#8217;t &#8212; because the problem is too important and too urgent for any one person to solve alone.</p>]]></content:encoded></item><item><title><![CDATA[CARAF Procurement Checklist]]></title><description><![CDATA[Invaluable tool for any healthcare system or physician practice evaluating AI platforms]]></description><link>https://mikepackman.substack.com/p/caraf-procurement-checklist</link><guid isPermaLink="false">https://mikepackman.substack.com/p/caraf-procurement-checklist</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Thu, 26 Mar 2026 17:27:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p> </p>
      <p>
          <a href="https://mikepackman.substack.com/p/caraf-procurement-checklist">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[CARAF — Clinical AI Reasoning & Accountability Framework - Version 2.0]]></title><description><![CDATA[Building the Framework Medicine Doesn&#8217;t Have Yet]]></description><link>https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-fdb</link><guid isPermaLink="false">https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-fdb</guid><dc:creator><![CDATA[AI & The Oath]]></dc:creator><pubDate>Thu, 26 Mar 2026 15:57:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UgsA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50fa9bf2-affe-4cfa-8ed3-675cbfb4f043_750x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Version 2.0 &#8212; With Longitudinal Memory Spine (LMS)</h3>
      <p>
          <a href="https://mikepackman.substack.com/p/caraf-clinical-ai-reasoning-and-accountability-fdb">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>