How to drive analytics adoption with generative AI
The fastest way to add conversational insights to your apps
Your customers and users expect generative AI analytics capabilities in the apps you’re building and delivering. If you don’t deliver it, your competitors will.
According to a recently published Battery Ventures survey of enterprise buyers adopting AI and machine learning, 79% of respondents plan to consume generative AI/LLMs within the next 12 months.
But simply adding GenAI to your app for its own sake isn’t a solution; it has to deliver real value. When I speak to product leaders and development teams building GenAI into their roadmaps, they consistently ask the same questions:
- Where can we innovate and add real value using AI-first experiences?
- How can we deliver it responsibly and avoid hallucinations that compromise user trust?
- How can we deliver it quickly while circumventing LLM lock-in risk?
The answer to the first question is immediate for many teams. Even before the emergence of GenAI and LLMs, a surefire way to engage users has been to embed analytics as a natural part of your product’s experience. Applications with thoughtful analytics experiences see significantly higher user engagement than those without.
Analytics can enhance the user experience by providing personalized and interactive data visualizations in-app, which drives higher engagement levels. And it’s an area ripe for GenAI reinvention to shift from analytics clicks to conversations—conversational analytics.
In this post, you’ll learn why the new era of conversational analytics, which is at the intersection of analytics and GenAI, will have profound implications for your roadmap.
I’ll share the five Cs of conversational analytics: the consistent themes for success that product and AI teams are thinking about as they look to adopt it. Finally, I’ll delve into how Sisense is expanding Compose SDK with GenAI and share how you can start on the path today.
The evolution to conversational analytics
We know conversational AI systems and LLMs excel at understanding and contextualizing user inquiries. They can leverage previous interactions and user history to provide more personalized and relevant insights. This contextual understanding can now add a layer of intelligence to analytics, enabling more meaningful and tailored insights for each user. It provides a substantial leap over old-school first-gen search-based analytics that lack context and are unable to facilitate an ongoing conversation.
Conversational AI also provides a more intuitive and user-friendly interface for interacting with analytics. Instead of navigating through complex dashboards or writing specific queries, users can have an ongoing conversation with the AI to access the desired information, from asking questions to explaining answers. This improves the user experience, reduces the learning curve, and facilitates the broader adoption of analytics.
In fact, Eckerson Group sees conversational business intelligence as the next step in analytics to make it easier for users to engage with, beyond previous-generation visual analytics and search-based BI tools.
So whether you’re embarking on your analytics journey or looking to evolve the analytics you’re already providing, it’s time to consider how you will add GenAI data analytics to your apps.
A path to embedding generative AI analytics
When it comes to embracing this new era of analytics and being successful, there’s a fundamental path that product and engineering teams can follow. We call it The Five Cs of Conversational Analytics:
1. Conversational and contextual
Enable your users to quickly identify insights that are relevant to them. Start them off with suggested natural language starter prompts based on their data model, like “What is my revenue by country?” Let them follow up with questions about their scope of analytics, such as “What fields can I ask about?” Then keep the questions flowing with contextual follow-on prompts later in the conversation, like asking for narrative guidance: “Explain these results.” It’s about quickly delivering an end-to-end, ongoing conversational experience around data.
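To make the starter-prompt idea concrete, here is a minimal sketch of how suggested questions could be derived from a data model. The `DataModel` shape and `suggestStarterPrompts` function are hypothetical, for illustration only, and are not the Compose SDK API.

```typescript
// Hypothetical sketch: derive natural language starter prompts
// from a simple description of a data model.

interface DataModel {
  name: string;
  measures: string[];   // e.g. "revenue"
  dimensions: string[]; // e.g. "country"
}

// Pair each measure with each dimension to seed the conversation.
function suggestStarterPrompts(model: DataModel, limit = 4): string[] {
  const prompts: string[] = [];
  for (const measure of model.measures) {
    for (const dim of model.dimensions) {
      prompts.push(`What is my ${measure} by ${dim}?`);
    }
  }
  return prompts.slice(0, limit);
}

const sales: DataModel = {
  name: "Sales",
  measures: ["revenue", "order count"],
  dimensions: ["country", "quarter"],
};

const starters = suggestStarterPrompts(sales);
// starters[0] is "What is my revenue by country?"
```

In a real product the suggestions would come from the semantic layer itself, so they always match fields users are actually allowed to query.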
2. Composable
If conversational analytics looks and feels different from the rest of your app, that creates a roadblock to adoption. This is where composable analytics SDKs come in, providing development teams with a range of React Components (or other frameworks)—from visualizations to an AI analytics chatbot, as well as strong TypeScript/JavaScript language support for complete control. A composable SDK enables your product team to build the exact conversational experience and workflow for users and makes natural language more intuitive.
3. Capable, responsible, and trusted
Trust is paramount regardless of how and where we engage with an LLM. For conversational analytics, a foundational way to ensure trust is to build it on a semantic layer or data model. It enables analytics delivery teams to define the scope, meaning, fields, and relationships that encompass the end-user analytics experience. Another important consideration is what controls are in place over data, prompts, and outputs—for example, are they fully isolated? Will they be used to improve and train the LLM model or shared in any other way?
4. Cloud and LLM agnostic
In AI, accelerating change is a constant. When you embed conversational analytics, it’s important that it doesn’t lock you into any one LLM. The model you use today may not be the best fit for your use case tomorrow, and for compliance reasons your organization may need to shift to its own private LLM. So whether you move from OpenAI GPT-3.5 to GPT-4, use Anthropic Claude, Bard, or Llama, or run your own private LLM (or simply your own private OpenAI instance), it pays to be cloud- and LLM-agnostic.
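The avoid-lock-in idea can be sketched as an adapter layer: the app codes against one interface and providers are swapped behind it. Everything below is illustrative (the interface, class names, and stub provider are made up, and a real provider would be asynchronous and call OpenAI, Anthropic, or a self-hosted endpoint); it is not how Compose SDK is implemented.

```typescript
// Hypothetical sketch of an LLM-agnostic adapter layer.
interface LLMProvider {
  name: string;
  complete(prompt: string): string;
}

// Stand-in provider; a real one would make an API call.
class StubProvider implements LLMProvider {
  constructor(public name: string) {}
  complete(prompt: string): string {
    return `[${this.name}] answer to: ${prompt}`;
  }
}

class AnalyticsAssistant {
  constructor(private provider: LLMProvider) {}
  // Swap models without touching the rest of the app.
  setProvider(provider: LLMProvider): void {
    this.provider = provider;
  }
  ask(question: string): string {
    return this.provider.complete(question);
  }
}

const assistant = new AnalyticsAssistant(new StubProvider("public-gpt"));
// Later, e.g. for compliance, move to a private model:
assistant.setProvider(new StubProvider("private-llm"));
const reply = assistant.ask("What is my revenue by country?");
// "[private-llm] answer to: What is my revenue by country?"
```

The design choice that matters is the seam: because only the adapter knows which model is behind it, switching providers is a configuration change, not a rewrite.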
5. Cost
Finally, with any discussion around LLMs, it’s also critical to consider cost. Ultimately, it’s about flexibility: managing cost requires the ability to plug and play different LLMs, from public to in-house models, so you remain in control as LLM costs change and evolve across providers. It also gives you the flexibility to fit the most cost-effective LLM, balancing scalability, affordability, and output quality.
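That cost/quality trade-off can be expressed as a simple selection rule: pick the cheapest model that clears your quality bar. The model names, prices, and scores below are made-up placeholders to show the shape of the decision.

```typescript
// Hypothetical sketch: choose the most cost-effective model that
// meets a minimum quality threshold from your own evaluations.

interface ModelOption {
  name: string;
  costPer1kTokens: number; // USD, illustrative only
  qualityScore: number;    // 0..1, from your own evals
}

function cheapestMeeting(
  options: ModelOption[],
  minQuality: number
): ModelOption | undefined {
  return options
    .filter(o => o.qualityScore >= minQuality)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens)[0];
}

const options: ModelOption[] = [
  { name: "large-public-model", costPer1kTokens: 0.03, qualityScore: 0.95 },
  { name: "small-public-model", costPer1kTokens: 0.002, qualityScore: 0.8 },
  { name: "in-house-model", costPer1kTokens: 0.004, qualityScore: 0.85 },
];

const pick = cheapestMeeting(options, 0.85);
// picks "in-house-model": the cheapest option at or above the 0.85 bar
```

Re-running this kind of selection as provider pricing shifts is exactly why the plug-and-play flexibility above matters.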
Sisense Compose SDK now with GenAI-powered analytics built-in
We launched Compose SDK earlier this year to enable software engineers and developers to integrate and embed analytics capabilities powered by Sisense Fusion into any web application, utilizing modern frameworks such as React and TypeScript or their preferred alternatives. Its composable and modular design ensures developers can add analytics in a flexible and versatile manner to suit their specific needs.
Now, we are expanding Compose SDK to make it incredibly easy for product teams to responsibly incorporate generative AI analytics into their products, their user experience, and workflows.
Introducing the Analytics Chatbot
Bring the power of generative AI analytics directly into your product with ease. Think of the Analytics Chatbot as an embeddable LLM-powered sidekick: a complete chatbot experience you can rapidly add to your app with full control. It’s provided as a React Component and lets your users start their conversation with pre-generated QuickStart natural language questions or, if they prefer, with their own specific questions.
The Analytics Chatbot makes it simple for your users to ask questions about the data model (such as which fields and metrics they can query), ask analytics questions, get visual answers, and generate detailed narratives for data storytelling based on the results.
Better yet, the Analytics Chatbot is completely composable. Let’s say you want to control how it works with your broader user experience. Your development team can extend and customize it, using it in tandem with the rest of your app.
For example, you might want to use Analytics Chatbot as the natural language starting point for your users, extending it with a “pin-to-palette” so they can establish their own personalized, NLQ-driven analytics home page in your app. The possibilities are endless.
Enable your users to interpret answers easily with GenAI-powered Narratives
What if you could easily add explanations for any of the analytics in your app? Now you can, with Narratives, another GenAI-powered Compose SDK React Component. With Narratives, it’s easy for your users to get textual explanations that provide insight into key findings within any data set.
These explanations are generated using natural language generation techniques and supported by advanced algorithms to produce clear and concise narratives that highlight key insights, trends, and anomalies. Narratives is also available as an API within Compose SDK for headless use.
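To illustrate the headless input/output shape, here is a toy template-based summarizer. The real Narratives capability is LLM-backed; this function, its name, and its row shape are purely illustrative assumptions, not the Compose SDK API.

```typescript
// Hypothetical headless sketch: turn query results into a short
// textual summary highlighting the leading group.

interface ResultRow {
  label: string;
  value: number;
}

function summarize(metric: string, rows: ResultRow[]): string {
  const total = rows.reduce((sum, r) => sum + r.value, 0);
  const top = rows.reduce((best, r) => (r.value > best.value ? r : best));
  const share = Math.round((top.value / total) * 100);
  return `${top.label} leads ${metric} with ${share}% of the total across ${rows.length} groups.`;
}

const narrative = summarize("revenue", [
  { label: "US", value: 500 },
  { label: "UK", value: 300 },
  { label: "DE", value: 200 },
]);
// "US leads revenue with 50% of the total across 3 groups."
```

An LLM-backed version would take the same structured results as input but could also surface trends and anomalies, as described above.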
Lego-like composability for conversational control
In fact, because Compose SDK is composable from the ground up, every conversational aspect is available as a discrete React Component, including QuickStart questions, Narratives, and Data Topics (that enable users to select a data model to converse with). And, of course, all of the functionality is available as an API if you’re looking to weave GenAI-powered analytics deep into your application.
Designed for trusted and responsible development
It may seem like common sense that for your users to confidently act on the GenAI-powered analytics answers your application gives them, they must trust those answers. But it’s a crucial factor to keep in mind as you build out your roadmap around conversational analytics.
This is the reason why all of the GenAI capabilities we’ve built into Compose SDK are created on top of the Sisense data model, our semantic layer. It gives your analytics team the power to ensure that your data is optimally designed for the right queries and enabled to go from analytics answers to in-context narratives. The Sisense data model includes fields, metrics, formulas, and relationships and is all governed by centralized row and column-level security, so everyone is chatting with the same, managed single version of the truth.
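One concrete benefit of grounding the chat in a semantic layer is scoping: questions can only be answered in terms of fields the model actually exposes. The sketch below is a deliberately naive illustration of that idea (the `SemanticModel` shape, substring matching, and function names are assumptions, not Sisense internals).

```typescript
// Hypothetical sketch: scope chat questions to the fields a
// governed semantic model exposes.

interface SemanticModel {
  name: string;
  fields: string[]; // centrally defined, governed fields
}

// Naive extraction: which known fields does the question mention?
function referencedFields(question: string, model: SemanticModel): string[] {
  const q = question.toLowerCase();
  return model.fields.filter(f => q.includes(f.toLowerCase()));
}

function isInScope(question: string, model: SemanticModel): boolean {
  return referencedFields(question, model).length > 0;
}

const sales: SemanticModel = {
  name: "Sales",
  fields: ["revenue", "country", "quarter"],
};

isInScope("What is my revenue by country?", sales); // true
isInScope("What is the weather tomorrow?", sales);  // false
```

A production system would resolve questions against the model far more robustly, but the principle is the same: out-of-scope questions are caught before they ever reach the LLM, which is one way a semantic layer curbs hallucinated answers.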
GenAI for analytics: create a roadmap that flows with change
Whether you start with OpenAI today, plan to bring your LLM in-house in the future, or are even considering switching gears to Llama 2, it’s important to be able to plug and play as new LLMs or needs emerge.
In just the same way that Sisense is cloud- and database-agnostic (run it on AWS, Azure, or GCP; use Redshift, Google BigQuery, or another database for analytics), we’re designing Compose SDK’s GenAI capabilities to be LLM-fluid too.
We aim to enable you to deliver the best GenAI-powered analytics experience while enjoying complete back-end and architecture flexibility, from database to LLM. It means you can build for today but embrace what tomorrow will bring.
Turn your generative AI analytics ideas into reality
It’s an incredibly exciting time to be a Sisense customer. Transformation in analytics is at hand. If you’re excited about what this could mean for your product roadmap, I invite you to join our webinar: GenAI, in-context analytics, and the future of data monetization.
I’ll be joined by Paul Turner, Founder & CEO of Skyview Consulting, to discuss the emerging conversational analytics landscape, highlight practical strategies for seamlessly incorporating these analytics into applications, and explore how to maintain trust in AI-driven analytics.