Adaptive Trials with Agentic AI: Real-Time Protocol and Operations
Sep 11 2025 | 4 Min Read

Clinical trials run on a protocol, the detailed plan for how the study should work. Real life often breaks that plan. Enrollment slows, visits get missed, a site falls behind. Teams then file protocol amendments, which add time and cost. A Tufts CSDD study estimates median direct costs of about $141,000 in Phase II and $535,000 in Phase III for a substantial amendment (source). An analysis in Applied Clinical Trials reports average implementation timelines around 260 days from internal approval to final external approvals and re-consent (source).
Agentic AI offers a different path: software agents that can watch live data, make decisions, and carry out tasks. In trials, these agents can launch a protocol, monitor progress, flag risks such as slow enrollment, and adjust timelines while the study runs. The goal is simple: keep the study on track with small, pre-approved actions that are logged for audit, as described in the Salesforce overview of agentic AI in pharma. Regulators are paying close attention to AI in development. FDA materials stress documentation, human oversight, and ongoing monitoring when AI supports trial decisions (AI in drug development, discussion paper).
Executive summary
Short answer: Agents watch live trial data, find risks early, and recommend small, rule-based changes that keep the study moving.
- Agents can coordinate enrollment and site work in real time and adjust timelines as conditions change, which Salesforce illustrates in its agentic AI explainer.
- Site selection and recruitment improve when models focus effort where eligible patients actually are, as outlined in Salesforce’s life sciences AI guide.
- Regulators expect clear records, human oversight, and performance monitoring for AI, which you can see in FDA’s AI in drug development page and its AI discussion paper.
The problem with rigid protocols
Short answer: Big amendments are slow and expensive, and they often arrive after damage is done.
When a plan cannot adapt, teams file amendments. Those changes cost money and time. The Tufts figures above show the direct dollars, and the Applied Clinical Trials review shows the time toll. That delay can push milestones, strain budgets, and slow patient access to treatment. Reducing the number and size of amendments is why adaptive execution matters.
What agentic AI does differently
Short answer: It adds continuous, machine-assisted oversight that nudges the study inside pre-set rules.
Generative AI is good at drafting materials such as a protocol. Agentic AI is built to execute. It launches the plan, monitors progress, flags issues such as enrollment gaps, and adjusts timing while the study runs. These actions happen within limits you set and are recorded for review, which aligns with the Salesforce view of agentic execution.
Examples you will recognize
Short answer: Small, rule-based adjustments keep the study moving.
- Enrollment focus: Shift outreach toward sites with higher pools of eligible patients, using live feasibility and referral data described in Salesforce’s AI guide.
- Visit timing: Tighten or relax scheduling windows inside the protocol’s allowed range when missed visits rise, a pattern in the agentic AI in pharma use cases.
- Targeted oversight: Raise monitoring at a site with repeat deviations and reduce checks at consistently clean sites, following the peer-reviewed evidence for risk-based monitoring and Medidata’s view of centralized monitoring.
Real-time operations, made practical
Short answer: The system watches operations as they happen, predicts where trouble might start, and alerts teams so they can act before delays grow.
Clinical operations include enrollment, visit scheduling, data capture, query handling, safety review, and site support. In a weekly review model, problems can sit for days. With agentic AI, the platform ingests live inputs from systems such as EDC, ePRO, labs, and site activity. It detects patterns, then prompts the right action within the same day. That shift from weekly to continuous oversight is how teams reduce bottlenecks and keep milestones on schedule, which is consistent with Salesforce’s life sciences AI guide and the peer-reviewed RBM overview.
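To make the continuous-oversight idea concrete, here is a minimal sketch in Python. The metric names, threshold values, and the `scan_sites` function are illustrative assumptions, not part of any real EDC or ePRO vendor API; real limits would come from the protocol and the study's pre-approved monitoring plan.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; actual limits belong in
# the protocol and the pre-approved monitoring plan.
THRESHOLDS = {
    "missed_visit_rate": 0.10,   # fraction of scheduled visits missed
    "open_query_age_days": 14,   # age of the oldest unresolved data query
    "enrollment_gap": 0.20,      # shortfall vs. the planned enrollment curve
}

@dataclass
class Alert:
    site_id: str
    metric: str
    value: float
    limit: float

def scan_sites(site_metrics: dict[str, dict[str, float]]) -> list[Alert]:
    """Compare each site's live metrics to pre-set limits and flag breaches.

    In a weekly-review model this comparison happens once a week; run
    continuously, it surfaces the same breach the day it appears.
    """
    alerts = []
    for site_id, metrics in site_metrics.items():
        for metric, limit in THRESHOLDS.items():
            value = metrics.get(metric, 0.0)
            if value > limit:
                alerts.append(Alert(site_id, metric, value, limit))
    return alerts
```

Fed with a live snapshot such as `{"site-014": {"missed_visit_rate": 0.18}}`, the scan returns one alert for that site, which the platform would route to the responsible team the same day.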
Enrollment and site performance
Short answer: Focus effort where patients actually are.
Agents track enrollment velocity, screen-fail rates, and referral sources. If a site falls behind, the system can recommend shifting outreach to stronger sites or activating a backup earlier. This keeps the enrollment curve close to plan and reduces the chance you will need timing-driven amendments. See the section on site selection and recruitment in Salesforce’s AI guide.
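The enrollment rule described above can be sketched as a small decision function. The 60% and 40% cutoffs, the four-week window, and the function name are assumptions for illustration; a real study would set these in its pre-approved rules.

```python
def enrollment_recommendation(planned_per_week: float,
                              actual_last_4_weeks: list[float],
                              backup_site_ready: bool) -> str:
    """Rule-based recommendation when a site's enrollment velocity lags plan.

    Assumed rule: if the 4-week average falls below 60% of plan, shift
    outreach to stronger sites; below 40% with a backup site ready,
    recommend activating it. Cutoffs are illustrative only.
    """
    avg = sum(actual_last_4_weeks) / len(actual_last_4_weeks)
    ratio = avg / planned_per_week
    if ratio < 0.4 and backup_site_ready:
        return "activate_backup_site"
    if ratio < 0.6:
        return "shift_outreach_to_stronger_sites"
    return "on_track"
```

For example, a site planned at 10 patients per week that enrolled 2, 3, 3, and 4 over the last month averages 3 per week (30% of plan), so the agent would recommend activating the backup site if one is ready.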
Data quality and deviations
Short answer: Catch outliers early and fix them fast.
Risk-based and centralized monitoring focus attention on data that could affect safety and endpoints. The AI highlights sites with unusual patterns or repeat protocol deviations so monitors can respond before small errors become big rework, as shown in the RBM and RBQM review and Medidata’s centralized monitoring overview.
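One simple way a centralized check can flag unusual sites is a z-score over protocol-deviation rates across the study. This is a sketch of that idea under assumed inputs; real RBQM platforms use richer statistical models, and the cutoff here is arbitrary.

```python
import statistics

def flag_deviation_outliers(deviation_rates: dict[str, float],
                            z_cutoff: float = 2.0) -> list[str]:
    """Flag sites whose protocol-deviation rate sits well above the
    study-wide norm (illustrative z-score rule, not a validated method).

    deviation_rates maps site IDs to deviations per subject-visit.
    """
    rates = list(deviation_rates.values())
    mean = statistics.mean(rates)
    sd = statistics.pstdev(rates)
    if sd == 0:
        return []  # all sites identical; nothing stands out
    return [site for site, rate in deviation_rates.items()
            if (rate - mean) / sd > z_cutoff]
```

With nine sites around a 2% deviation rate and one at 20%, the outlier is flagged for increased monitoring while the clean sites keep their reduced schedule.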
Safety vigilance
Short answer: Faster detection leads to faster medical review.
Live analytics can surface safety signals earlier than weekly checks. The platform alerts the medical monitor when labs, vitals, or notes show patterns that need attention, which supports timely action. FDA’s program pages reinforce ongoing oversight when AI supports development, for example on the AI in drug development page.
Want help mapping one adaptive workflow and the approval gates that keep it safe? Book a CI Health AI Strategy Workshop and we will scope it with your team.
Compliance and trust
Short answer: Treat agentic AI like a validated tool chain with clear records.
The FDA emphasizes documentation, human oversight, and performance monitoring for AI in trials. Building those elements into the workflow keeps AI adjustments transparent and auditable, as described in the AI in drug development page and the AI discussion paper.
Simple controls that match expectations
Short answer: Make every AI action traceable and gated.
- Link each AI suggestion to the protocol section and the data that triggered it, then keep a dated change log with snapshots, a theme in the FDA discussion paper.
- Require human sign-off for any change that affects patients or endpoints, which aligns to the FDA’s oversight posture in the same discussion paper.
- Version prompts, rules, and automated steps as you would any validated tool.
- Keep personal data out of model inputs that do not need it.
- Re-validate performance at set intervals on known studies and track how often the system was right, wrong, or escalated to a human.
What good looks like
Short answer: Fewer bottlenecks, cleaner data, and fewer large amendments.
Agentic AI use in R&D includes coordinating execution and adjusting timelines during a study, which the Salesforce page on agentic AI in pharma highlights. When oversight follows risk, teams protect quality and speed, as the RBM and RBQM review explains. Because major amendments add months and six-figure costs, preventing them improves ROI, which you can see in the Tufts amendment cost study and the Applied Clinical Trials report on implementation delays.
Conclusion
Short answer: Agentic AI keeps trials adaptive and accountable.
With live monitoring and small, auditable adjustments, teams avoid delays while staying safe and compliant. Start where the rules are clear, prove value, then scale.
Ready to keep trials on track without adding amendments? Book a CI Health AI Strategy Workshop to scope a focused 90-day pilot, define validation gates, and build the evidence pack your auditors expect.
FAQ
Can AI change a protocol by itself?
Small, pre-approved adjustments can be automated. Larger changes still need people and normal approvals. Oversight themes are laid out in the FDA discussion paper. (U.S. Food and Drug Administration)
Will this replace study teams?
No. The system removes repeatable checks so people focus on decisions, safety, and communication with sites. The Salesforce overview frames agentic AI as an execution co-pilot. (Salesforce)
Where should we start?
Enrollment pacing, site performance, and deviation hot spots. These have clear rules and fast payoff. Salesforce’s guide and centralized monitoring guidance outline patterns and signals. (Salesforce)
Let’s work together