The trials and tribulations of implementing AI technology

Editor's note: Monika Rogers wrote this article while serving as vice president of client services/operations at Illuminas. She is currently vice president of growth strategy at CMB. She can be reached at mrogers@cmbinfo.com.

I don’t know about you, but every day someone is trying to convince me of the promise of their new AI tool, technology or service. It’s not just messages in my inbox; it’s the topic of most articles, webinars and conference presentations. While AI technology is the shiny new object, translating all those promises into reality can be daunting.

Rather than talking about specific AI solutions, I want to dive into the work of finding and implementing new AI technology. I’ll start by sharing the automation journey we took at Illuminas, why some solutions succeeded while others failed, as well as team and process obstacles we had to overcome. I’ll then layer in some additional complexities we encountered related to gen AI and offer my thoughts on the future of research operations as we shift to a whole new landscape of tech-enabled insight. 

The story starts about two years ago, after I sold my research tech startup and found my way back to consulting at Illuminas. I was excited that the company had built powerful research methods and was looking to accelerate adoption of AI and automation tools. With more digging, I found that several automation tools had been explored in the past but had failed to gain traction despite the time invested in implementation. While there was strong interest in bringing on new AI technology, there was also quite a bit of concern about wasting time and effort searching for, evaluating and implementing a tool only to see it ultimately abandoned.

So, what sounded like one opportunity was really three: match the right AI tools to the right problems; build team confidence in operationalizing AI; and create a culture of AI innovation.

Opportunity number 1

Match the right tool to the right problems

Like most companies, Illuminas started its AI journey with a strategic planning session where leadership brainstormed the biggest problems and opportunities for the company. There were the requisite Post-its, whiteboards and voting dots. Ultimately, two key themes emerged. The team wanted automation, particularly related to streamlining analysis and reporting. And the team wanted more agile processes, specifically ways to reduce cycle times and complete project work more efficiently. There weren’t resources to tackle both, so the team decided to focus on automation and I was given the “Accountable” role on our RACI matrix.

Since this wasn’t our first rodeo with automation, the first thing we did was meet with folks in client services who had implemented new tools and approaches. After hearing one story about the disruption caused by introducing a tool that didn’t deliver on expectations, we identified the cycle shown in Figure 1, which we all wanted to avoid.

Several hypotheses emerged on why the team might hit obstacles adopting new AI tools:

  • The solution might be missing capabilities to fit the variety of work we do.
  • The solution might be hard to learn, taking many more hours than expected to use.
  • We might not have the right people/training to be successful with a new solution.

My own hypothesis was that potential issues would stem more from process than from technology or the people on our team. In essence, we needed to minimize the “leap of faith” between demo and implementation that increases the risk of failure. To do that, we put in place a process that allowed the team to preempt the ambiguity inherent in change: it changed how we approached problem definition and added multiple points where we could pivot based on results (Figure 2).

By adding an initial step to identify and segment concrete use cases, we were able to reframe the must-have requirements list. We split out requirements based on specific use cases and identified how various solutions spanned them. This step also helped us increase the range of solutions we considered. Seeing use cases with very specialized requirements, the team started looking beyond SaaS solutions to both in-house tools and outsourcing partners. Figure 3 summarizes the categories of solutions we considered, with the benefits as well as the risks. 

Looking beyond SaaS subscriptions to custom solutions and contractors was a game-changer. When our marketing sciences team began to explore how they could leverage gen AI to build in-house tools, many new options emerged. They saw its potential not just within research applications but also as a way to help write code for our own internal app development. In addition, once we started looking at contractors, we found they could help us navigate tool selection based on their experience across platforms, and they were among the most effective at addressing complex use cases.

Ultimately, we ended up pursuing solutions in all three camps. Even though building an in-house AI tool was higher-risk, our team believed they could get to proof-of-concept quickly. So, while we were evaluating outside solutions, another team was simultaneously working through the same process using iterations of our own AI tools. We made it through the proof-of-concept stage with six different solutions: two were our own apps, two were third-party platforms and two were solution providers with services to speed up implementation.

Opportunity number 2

Build team confidence in operationalizing AI

Before I talk about the final implementation of AI tools, I want to circle back to the team. The last thing they wanted was to reach the piloting stage and feel we were asking them to learn a new tool without a high degree of confidence it was the right one. What we needed was in-depth input from those most directly impacted by implementation. But those experts weren’t necessarily the champions who wanted to drive AI development. So, we put the implementation team’s expertise to work defining the use cases and providing detailed examples the AI team could reference along the way. Excitement and buy-in started to build as the implementation team saw their input being used to vet new solutions.

The biggest test came when we moved from proof-of-concept to piloting on live projects, where we wanted to avoid hitting obstacles that would ultimately lead to implementation failure. Even with early team involvement and the use case-specific vetting we did, unexpected things still happened in the live projects. Most of the issues we encountered were driven by unanticipated nuances specific to individual projects (one way to check for such gaps early is sketched after this list). For example:

  • Data summarization that looked reliable in some languages faltered in other languages.
  • Data visualization that worked for some data types didn’t work for others.
  • Data coding that worked for some open-ends didn’t seem to generalize to all cases.
  • Video analysis that worked for some videos faltered with captioning and longer videos.
  • Auto-formatting that worked in proof-of-concept didn’t extend to all studies or platforms.
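These gaps taught us that every use case needs its own explicit pass/fail check before a tool moves from proof-of-concept to live work. As a purely illustrative sketch, with a hypothetical summarize() function, made-up example cases and made-up pass criteria rather than our actual tooling, the Python snippet below shows how a small vetting harness might run a handful of use-case examples through the tool being piloted and report which ones it cannot yet handle.

# Illustrative only: a tiny "use-case vetting" harness.
# summarize() stands in for whatever AI capability is being piloted;
# the cases and pass criteria below are hypothetical, not real project data.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class UseCase:
    name: str                      # e.g., "German open-ends" or "long video transcript"
    sample_input: str              # sample text drawn from a past project
    passes: Callable[[str], bool]  # minimal pass/fail criterion for the tool's output

def summarize(text: str) -> str:
    """Placeholder for the tool under evaluation (SaaS API, in-house model, contractor deliverable)."""
    raise NotImplementedError("plug in the tool being piloted")

def vet(cases: List[UseCase]) -> List[Tuple[str, str]]:
    """Run every use case and report the ones the tool cannot yet handle."""
    failures = []
    for case in cases:
        try:
            output = summarize(case.sample_input)
            if not case.passes(output):
                failures.append((case.name, "output failed the check"))
        except Exception as exc:  # tools tend to break on edge cases, not happy paths
            failures.append((case.name, f"error: {exc}"))
    return failures

cases = [
    UseCase("English open-ends", "...sample verbatims...", lambda out: len(out) > 0),
    UseCase("German open-ends", "...Beispielantworten...", lambda out: len(out) > 0),
    UseCase("Long video transcript", "...transcript text...", lambda out: len(out) < 2000),
]

for name, reason in vet(cases):
    print(f"NOT READY: {name} - {reason}")

The value of a harness like this is less the code than the discipline it forces: every use case that matters gets a concrete check before the team commits to learning and piloting a tool on live projects.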

Another related challenge came with the unique needs of internally developed tools. For these tools, a wider range of issues emerged, such as UI optimization, device compatibility and the need for more sophisticated software testing and QA.

But the team felt these obstacles were manageable and persevered as we cycled through several iterations of pilots. Keeping the energy up hinged on the expectation that we wouldn’t roll any solution out more broadly until we had ironed out most issues.

Our hard work paid off as we narrowed down to implementing three of the six solutions. The ones we rolled out were so wildly successful that they were ultimately extended to do the job of the three tools we walked away from.

Opportunity number 3

Create a culture of AI innovation

Along with changing our processes and how we engaged the team to ensure successful adoption of AI tools, several mind-set shifts helped us succeed in building a culture of AI innovation.

Shift #1: Be open to multiple solutions. We initially focused on finding a single solution that would work across many use cases. When we opened our options to using multiple solutions, it helped us see a better path forward. A good example relates to our report automation. We evaluated three solutions and rated each of them: they got us 50%, 80% and 95% of the way to meeting our needs. The solution that got us 80% of the way would work across nearly all our use cases, whereas a specialized contractor could get us 95% of the way but only on a more limited set of really challenging use cases. We ended up going with both because the time and money the 95% solution saved us justified the additional investment alongside the 80% solution. The 50% solution was ultimately abandoned for all the right reasons.

Shift #2: Be open to revisiting solutions, as AI is evolving quickly. When we started our qualitative AI exploration, gen AI tools were just starting to emerge. We were fortunate to have a new VP of qualitative join the company and bring a proven tech solution. However, because this tool was slow to roll out new gen AI capabilities, we saw limited time savings compared to our expectations. We continued to experiment with new gen AI solutions as they became available and to work with our marketing sciences team to train our own models for coding open-ends. We came to realize that even if we found a winning gen AI solution today, the capabilities and tools available would surely be different next week. Fortunately, we found a startup that was interested in having us partner in development; their ability to adapt quickly became even more important to our success. So, while we are using a home-run solution that covers 80% of our use cases today, we continue to test our own solution and other third-party tools as they evolve.

Shift #3: Process goals need to evolve along with solutions. All of the experiences above led me to this final insight related to operationalizing AI: the goal line keeps shifting. When I started at Illuminas, the mind-set was to have consistent processes that everyone could master so that we could continue to deliver “The Illuminas Way” to our clients as we grew. While this approach wasn’t inherently wrong, as we adopted new solutions nearly every one of our processes changed and the training documentation needed constant updating. Even more important, our beliefs about what we could achieve shifted as we discovered new and better ways of working. In other words, the goal line for operational success continued to shift (see Figure 4).

For example, automating data entry changed our expectation from limiting rework to built-in accuracy. Automated transcription and reporting did more than accelerate timelines; they enabled a different level of flexibility in delivering insights to clients. Building repeatable processes became less important than having specialists who can drive continuous improvement. And consulting became less about simplification and more about eliminating toil and focusing on delivering better outcomes for clients.

Our lives will be different

There is no doubt that as AI tools evolve and gen AI gets embedded across our tech stacks, our lives will be different. Those of you on the client side will be faced with differentiating your insights from those created by a chatbot, or even embedding them into the chatbot. Those of you on the agency side will be faced with helping your clients navigate the speed of change and bringing them new ways of thinking and working.

While the story I shared started with goals of improving efficiency, clearly the larger opportunity is to build new products and services that drive growth and differentiation within the insights industry. The interesting part is that these outcomes are interconnected. The agility goals we had set aside to work on automation ultimately became central to our success. And the work of driving efficiencies led to the discovery that we could use these same solutions to offer new services to our clients.

As you continue your own journey with AI technology, I’ll leave you with this final thought: Whatever approach you are taking today to implement AI will likely be different tomorrow, and the goals you have will continue to shift. But at the end of the day, the process of operationalizing AI is just as important as the technology itself.