Given the recent pandemic, the state of the workforce, and unpredictable economic conditions, the world has changed. Technology has been part of that change by making rapid digitization of business possible, serving up masses of data that could be sifted by ML and analytics for signals of future behaviors. Into this volatile mix drops ChatGPT, accelerating change even while organizations are still trying to determine its value and its risks.
Qlik’s Vice President of Product Marketing, Dan Potter, has a front-row seat as he watches how customers and partners are making buying and design decisions for data and analytics solutions. (See his bio below.) He shared his observations with Cloud Data Insights (CDI) at the Gartner Data & Analytics Summit in March. One thought he shared is that history is still a powerful indicator of what might happen in the future.
Dan: Shall we have this interview done with ChatGPT? You ask ChatGPT a question and then just attribute the answer to me. That way, I might sound smarter than I am. I tried writing a session abstract using ChatGPT and I’d say it was about 95% perfect. ChatGPT is going to change everything.
CDI: Speaking of change, how have customers’ needs or attitudes changed given the pandemic-fueled acceleration of digital transformation? Are current economic conditions influencing customer conversations?
Dan: Absolutely, but let’s first step back for a minute and look at the major trend we’re in–the movement to the cloud. We’ve hit the point where it’s becoming much more mature, and people are comfortable moving data into the cloud and processing it there, from an integration perspective. Now you’re starting to see people behave a little differently. They’re no longer as interested in picking some open-source pieces and assembling an ingestion piece, transformation pieces, and so on. People are looking for solutions that solve a different level of problem rather than how to build an integration stack. Buying behavior has changed: people are taking a more holistic view and asking themselves, “How do I solve this business problem in a more modern way?”
The other change we’ve seen, particularly in the last year, is a focus on FinOps and how to best optimize spend, because consumption-based pricing makes costs–especially compute costs–harder to predict. In the end, a FinOps app is an analytics application.
The nice thing with cloud is you can start off small, but you scale up really quickly. So people are faced with bills they didn’t realize they were going to incur on some of these projects. They’re starting to want visibility into where they are spending their money: What part of the process is costing more, and how do we get smart about it? If I’m building a cloud data warehouse and I’m moving data from a lot of different sources, I incur cost. When I start to apply and merge that data into the warehouse, that’s a compute cost. So if you’ve got a table that gets used only once a week, don’t spend money to have it continuously updated–refresh it on the schedule it’s actually used. Being smart means turning those knobs. Another one is the transformation of that data. If it’s something that needs to be done frequently, do it frequently, but if it’s not, don’t pay for it.
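The knob-turning Dan describes is simple arithmetic. A minimal sketch, with entirely made-up per-run rates (not Qlik or any warehouse’s actual pricing), of what matching refresh cadence to usage cadence saves:

```python
# Hypothetical illustration: compare the compute cost of refreshing a
# table continuously versus on the weekly cadence it is actually used at.
# The cost-per-run figure and refresh frequencies are invented examples.

def refresh_cost(runs_per_week: int, cost_per_run: float, weeks: int = 4) -> float:
    """Total compute cost of scheduled refresh runs over a period."""
    return runs_per_week * cost_per_run * weeks

# A table merged every 15 minutes vs. once a week, at $0.05 per merge run.
continuous = refresh_cost(runs_per_week=7 * 24 * 4, cost_per_run=0.05)
weekly = refresh_cost(runs_per_week=1, cost_per_run=0.05)

print(f"continuous: ${continuous:.2f}/mo, weekly: ${weekly:.2f}/mo")
print(f"saved by matching cadence to usage: ${continuous - weekly:.2f}")
```

The numbers are toys, but the shape of the decision is exactly the one described: pay for freshness only where someone consumes it.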
In addition to visibility into usage and cost, you need different approaches for landing data. Qlik does continuous real-time change data capture. Data is moved in real time and lands in the data warehouse in real time, but you don’t have to apply it in real time. You may have teams that need real-time operational views of that data, so you create a view to that data; we don’t materialize the view. People get the benefit of real time, but the data hasn’t been applied yet to the warehouse, so they haven’t incurred the cost of applying it. There are some techniques like this that people are starting to get wiser about.
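The land-now, apply-later pattern can be sketched in a few lines. This is a conceptual toy, not Qlik’s implementation: changes land cheaply in an append-only log, a view folds them over the base table on read, and the expensive merge is deferred until someone chooses to pay for it.

```python
# Conceptual sketch of deferred CDC application. Table contents and
# change records are invented for illustration.

base = {"cust-1": {"status": "active"}}               # applied warehouse table
landed_changes = [("cust-1", {"status": "churned"})]  # CDC log: landed, not applied

def live_view() -> dict:
    """Real-time view: base table with unapplied changes folded in on read."""
    view = {key: dict(row) for key, row in base.items()}
    for key, change in landed_changes:
        view.setdefault(key, {}).update(change)
    return view

def apply_changes() -> None:
    """The costly merge, run only when it is actually worth paying for."""
    for key, change in landed_changes:
        base.setdefault(key, {}).update(change)
    landed_changes.clear()

# Readers see real-time state before any merge cost is incurred.
print(live_view())
```

The design choice this illustrates: reads get freshness immediately, while the write-side cost is a separate, schedulable decision.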
CDI: Would you say that instead of collecting new data to feed into FinOps analysis, you can leverage your platform’s operating data for insights into usage patterns and make some predictions based on past utilization?
Dan: And what’s not being used, by looking at query logs from the different systems. Many enterprise customers are doing chargebacks for IT services anyway. The data engineering team is keeping close tabs on what they create and how it gets utilized. They’re capturing this kind of data already for some systems; they can broaden that practice.
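Mining query logs for waste reduces to counting. A minimal sketch, with an invented log format and table names (real warehouses expose similar query-history views in their own schemas), of flagging tables that are refreshed far more often than they are queried:

```python
# Hypothetical sketch: find tables whose refresh count dwarfs their query
# count in a (made-up) query log of (timestamp, kind, table) records.
from collections import Counter

log = [
    ("2024-03-01T00:00", "refresh", "sales_fact"),
    ("2024-03-01T06:00", "refresh", "sales_fact"),
    ("2024-03-01T09:15", "query",   "sales_fact"),
    ("2024-03-01T00:00", "refresh", "weekly_rollup"),
    ("2024-03-01T06:00", "refresh", "weekly_rollup"),
    ("2024-03-01T12:00", "refresh", "weekly_rollup"),
]

refreshes = Counter(table for _, kind, table in log if kind == "refresh")
queries = Counter(table for _, kind, table in log if kind == "query")

# Flag tables refreshed more than twice as often as they are read.
over_refreshed = {
    table: (n_refresh, queries[table])
    for table, n_refresh in refreshes.items()
    if n_refresh > 2 * queries[table]
}
print(over_refreshed)  # weekly_rollup is refreshed three times, read zero
```

The threshold here is arbitrary; the point is that the chargeback data teams already collect is enough to drive these recommendations.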
The next step is to apply machine learning and get really intelligent about predictions and recommendations, for example, to automate the process of when and where to transform data. It might make sense to do transformation and generate the SQL in the warehouse itself as opposed to doing it on the client before you even move the data. Two factors will determine which approach to use–meeting the business requirement and reducing cost.
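The two factors Dan names can be expressed as a tiny decision rule. The cost and latency figures below are invented; the sketch only shows the shape of the choice between pushing SQL down to the warehouse and transforming on the client before the data moves:

```python
# Toy decision sketch: pick where to run a transformation, given a
# latency requirement and a (made-up) cost for each option.

def choose_transform_site(max_latency_s: float,
                          warehouse: dict, client: dict) -> str:
    """Pick the cheapest option that still meets the latency requirement."""
    candidates = [
        (site, option["cost"])
        for site, option in (("warehouse", warehouse), ("client", client))
        if option["latency_s"] <= max_latency_s
    ]
    if not candidates:
        raise ValueError("no option meets the latency requirement")
    return min(candidates, key=lambda c: c[1])[0]

# In-warehouse pushdown is fast but billed per run; client-side is
# slower but runs on hardware already paid for.
site = choose_transform_site(
    max_latency_s=60,
    warehouse={"latency_s": 5, "cost": 0.40},
    client={"latency_s": 45, "cost": 0.10},
)
print(site)  # → client
```

Tighten the latency requirement to ten seconds and the same rule picks the warehouse: business requirement first, then cost, exactly the ordering described above.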
CDI: This level of embedded advisory service is still a work in progress, right? Are there other parts of your product roadmap that speak to dual requirements of meeting business needs and containing cloud costs?
Dan: It’s not all new technology. For example, Qlik Cloud Data Integration is based on the best pieces of our client-managed technologies, offered in the cloud as a fully managed solution. Automation is one of the capabilities we’ve included. We’ve automated transformation, and the creation of data marts is a push-button operation–no data prepping like putting it into an analytics format, no SQL or scripting. We generate the SQL, we push it into the cloud warehouse, and it just runs continuously. That’s game-changing. We heard at the Summit keynote that the number one challenge is skilled people. How do you overcome that? Automation.
CDI: It does seem that the no-code, low-code, automation conversation has heated up recently. It’s no longer a topic for just software developers. Data teams see it as a productivity measure and CDOs see it as a self-service / democratization approach.
Dan: Is it no code, low code, automation, or augmentation? Assisting the human in the tasks that they do is definitely one of the goals. And that’s where ChatGPT is going to play a role.
CDI: What role do you see ChatGPT playing in your product roadmap?
Dan: We think that generative AI is really, really interesting. Automating the generation of code like SQL is really interesting to us. We haven’t announced anything in the roadmap yet, but we demonstrated some of it today in the BI Bake Off from an analytics perspective.
On the data integration side, we need to find the right way to use generative AI because we’re selling to people who code for a living, so we want to augment their work, not replace it, and automate the mundane tasks so they can focus on the higher-value work. Beyond generative AI in the abstract, we look at the promise of co-generating code with ChatGPT. My son and I used it to build a website with a map of the United States where each state was clickable and could show news stories from that state. It just took a couple of runs to get working HTML.
I think generative AI will progress with organizations like ours creating our own language models and training them on our own unique content or our user knowledge base–our collective learning. If you build your own ChatGPT-like service you can provide a lot of value that a generic ChatGPT that’s an amalgamation of everything put into a large language model cannot provide.
CDI: What challenges are your customers facing now?
Dan: The same kind of challenges they’ve been faced with for the past 30 years. Everything new that comes into play does not do away with the old themes or necessarily solve the existing problems. I think the biggest challenge though is the many kinds of source data that need to be integrated. Sensor data, unstructured data, etc. are coming into play. And the volume of data is becoming much larger. The techniques for working with this complexity keep evolving but they don’t keep pace and it’s still a struggle to make data analytics-ready and business-ready.
I look at the data architecture and data management changes of the last 20 years. We went from enterprise data warehouses, which were SQL, very rigid, and very structured, but were supposed to solve everything. Then they became too rigid and we moved to Hadoop, which was going to solve everything, and data warehouses would go away. Next came data lakes, into which the enterprise data warehouse was dumped, along with the Hadoop data and new data sets. The data lake was going to solve everything, but no, people were having a hard time getting value from it. Then all of a sudden cloud data warehouses came in and SQL was back in vogue.
The technology has changed but the challenges are still the same kinds of challenges: unlock the data, make it ready for me to use, how do I get the insights that I need? How do I take action on those insights?
CDI: What emerging technology or business trend do you think will have the most impact on your company or your customers? Is it ChatGPT?
Dan: I think generative AI will certainly have the biggest mindshare. We are just starting to scratch the surface. What business value it starts to deliver, I think we don’t even know yet. But I definitely think it’s going to have a huge impact on everything we do–not only in integration and analytics, but in our everyday work as we start to integrate this functionality into the daily productivity apps.
Another thing we can say is that we’re back into the world of no predictions.
Bio: Dan Potter is the vice president of product marketing at Qlik. He is responsible for Qlik’s go-to-market strategies for modern data architectures, data integration, and DataOps. Dan brings more than 25 years of leadership and marketing management experience, having previously held executive positions at Oracle, IBM, Attunity, and Progress Software. He helped accelerate industry-leading revenue growth in data integration at Attunity and, post-acquisition, at Qlik. He is a published author and frequent speaker on cloud modernization, data management, and analytics.