Designing AI That Holds Up: Key Takeaways from Intellus Worldwide 2026

Intellus Worldwide has long been a meeting place for the pharma community, bringing together manufacturers, research agencies, data providers, and technology partners. Unlike more theoretical forums, we're able to engage in conversations behind the curtain, shedding light on how insights are generated, operationalized, and supported inside increasingly complex organizations.

This year, one theme cut across nearly every conversation: AI. 

It’s been a major focal point at all our recent conference visits; at Intellus, however, the tone was noticeably different. Hallway conversations shifted from “What is AI?” and “Which tools should I use?” to “How do I maximize the outputs and efficiency of our internal AI?” and “How should AI show up in our business outputs, if at all?”

AI is no longer a novelty in healthcare research. It’s beyond encouraged; it’s expected. And yet, many organizations are struggling to translate this expectation into value that’s built to last. What became clear is that the reason isn’t technical; it’s human.

AI Doesn’t Win on Its Own 

One of the most grounded perspectives from the week came from Eli Lilly, where AI innovation was shaped not by abundance, but by constraint. 

The lesson, enriched with a superhero motif, was simple and repeatable: to sell AI internally, you need a galvanizing reason. Not a roadmap. Not a feature list. Not a demo. You need a problem that is felt acutely enough to demand change.  

Internally, their successful initiatives kicked off by naming a clear “villain”: time, cost, fragmented workflows. Once that villain was identified and isolated, the team developed solutions explicitly designed to “fight” it. By leveraging existing products and owned data, they were able to clearly define success and be intentional about what AI should do.

AI didn’t earn adoption by being impressive. It earned trust by being useful. 

Synthetic Isn’t the Same as Strategic 

Synthetic respondents and AI agents surfaced repeatedly, but confidence varied. 

Some organizations are finding value using AI for ideation, early exploration, or message testing, particularly when grounded in large, continuously refreshed proprietary datasets. Others shared more cautionary experiences. They invested heavily in fine-tuned synthetic models only to watch base LLMs quickly close the gap. 

A more pragmatic framing is emerging: AI works best when it is constrained, transparent, and paired with real human data. When positioned as a full replacement for lived experience, credibility quickly erodes.

Dialogue Data Remains Misunderstood 

Dialogue research sparked some of the most pointed debate at Intellus. While critiques around efficiency and cost are real, many stem from treating dialogue as a volume exercise rather than a linguistic one. Dialogue delivers value through context, interaction, and meaning over time, not through scale alone.

This tension underscores a broader challenge with AI‑driven analysis. Without methodological rigor, AI can amplify noise as easily as insight. Tools built to accelerate analysis must prioritize traceability, transparency, and guardrails so speed doesn’t come at the expense of signal. That design philosophy guided our session at Intellus, where we demonstrated how AI can support dialogue analysis without abstracting away from the language itself. Insight doesn’t come from verbosity; it comes from understanding how language works. 

The Research Audience Is Expanding 

A significant shift is underway: market research is no longer serving marketing alone. Medical, access, and other functions are increasingly seeking insights, often without deep research fluency.

This adds a layer of complexity to what “user‑friendly” AI means. It’s less about the UI and more about systems that understand user roles, refine research questions, and translate insights appropriately. In this future, AI doesn’t just surface insights; it protects them from misuse.

Verilogue’s Vision 

At Verilogue, we left Intellus with a reinforced conviction. 

AI will absolutely reshape healthcare insights, and it already is, but the organizations that win won’t be the ones chasing every new capability. They’ll be the ones that intentionally apply AI grounded in real human data, methodological rigor, and respect for how language works. 

Dialogue data isn’t inefficient by default. Synthetic respondents aren’t inherently flawed. AI analysis isn’t the enemy. The risk lies in abstraction without accountability and traceability. 

In health, the stakes are higher. Decisions affect patients, providers, and lives. That demands more than speed. It demands judgment. 

AI is an amplifier. Of insight. Of confusion. Of rigor. Of shortcuts. 

The difference lies in how and why it’s used. 

At Intellus, the message was clear: the future of healthcare research won’t be defined by who adopts AI fastest, but by who designs it most responsibly. 

And that future is already taking shape. 
