How to Create SAP Insights with Qlik and Snowflake on AWS
Technical blog about Qlik Cloud Data Integration

Order-to-cash is a critical business process for any organization, especially retail and manufacturing. It begins with booking a sales order (often on credit), followed by fulfilling that order, invoicing the customer, and finally managing accounts receivable for customer payments.
The fulfillment and invoicing of sales orders can impact customer satisfaction, while accounts receivable and payments impact working capital and cash liquidity. As a result, the order-to-cash process is the lifeblood of the business and critical to optimize.
SAP ERP (Enterprise Resource Planning) contains valuable sales order, distribution, and financial data, but it can be challenging to access data in SAP systems and integrate it with data from other sources to get a complete picture of the end-to-end process.
For example, understanding the impact of weather events on supply chain logistics could have a direct impact on customer sentiment and their propensity to pay on time. Organizational silos and data fragmentation can make it even harder to integrate with modern analytics projects, which in turn limits the value you get from your SAP data.
Order-to-cash is a process that requires active intelligence: a state of continuous intelligence that supports initiating immediate actions based on real-time, up-to-date data. Streamlining this analytics data pipeline typically requires complex data integrations and analytics that can take years to design and build, but it doesn’t have to.
- What if there was a way to combine the power of Amazon Web Services (AWS) and its artificial intelligence (AI) and machine learning (ML) engines with the computing power of Snowflake?
- What if you could use a single Qlik Software-as-a-Service (SaaS) platform to automate the capture, transformation, and analysis for some of the most common SAP-focused business transformation initiatives?
- What if suppliers and retailers/manufacturers could collaborate better by enabling mutual access to real-time data through the capabilities of Snowflake for data sharing and the marketplace?
In this blog, we discuss the Qlik Cloud Data Integration accelerators for SAP in collaboration with Snowflake on AWS.
Qlik Cloud Data Integration
Qlik Cloud Data Integration accelerators integrate with Snowflake to automate the capture, transformation, and analysis to solve some of the most common SAP business problems. This enables users to gain business insights that can drive decision making.

Qlik provides a single platform that extracts data from SAP and lands it in Snowflake as the data repository. Qlik keeps the data synchronized with its change data capture (CDC) gateway, which feeds the transformation engine, allowing Qlik to convert the raw SAP data into business-friendly data ready for analytics.
Qlik uses its Qlik Cloud Analytics service on the SAP data to enable analysis and visualization, and to feed data into Amazon SageMaker to make predictions with its artificial intelligence (AI) and machine learning (ML) engine.
Snowflake: The Data Collaboration Cloud
Snowflake has AWS Competencies in Data and Analytics and in Machine Learning, and has reimagined the data cloud for today's digital transformation needs. Organizations across all industries are using Snowflake to centralize, govern, and collaborate on data and to generate actionable insights.
Here are the top reasons why organizations entrust their data to Snowflake:
- Snowflake is a cloud- and region-independent data cloud. If a customer's SAP data is hosted on AWS, for example, they can deploy Snowflake on AWS and use AWS PrivateLink for secure, direct connectivity between SAP, AWS services, and Snowflake.
- Separating compute and storage gives users granular controls and isolation, as well as role-based access control (RBAC) policies, for different types of workloads. This means extract, transform, and load (ETL) tasks can run on compute that is isolated from critical business intelligence (BI) reports and from feature engineering for ML, and users control how much compute they dedicate to each of these workloads.
- A thriving data marketplace to enrich first-party customer data with third-party listings. The marketplace enables secure data sharing both internally and externally, without creating duplicate copies of data.
- A strong technology partner ecosystem that delivers best-in-class products in every data category: data integration, data governance, BI, data observability, and AI/ML.
- The ability to bring code to the data via Snowpark, instead of exporting or moving the data to separate processing systems. Code written in Java, Scala, or Python runs inside Snowflake; a minimal sketch of the idea follows this list.
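To make the Snowpark point concrete, here is a minimal sketch, assuming illustrative connection parameters and table/column names (none of which come from the accelerators themselves), of Python code whose work is pushed down and executed inside Snowflake:

```python
# Minimal Snowpark for Python sketch: the aggregation runs inside Snowflake,
# so the data never leaves the platform. All names below are illustrative.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ETL_WH",      # isolated compute for integration workloads
    "database": "SAP_LANDING",
    "schema": "PUBLIC",
}

session = Session.builder.configs(connection_parameters).create()

# Open order value per customer, computed entirely in Snowflake;
# only the small result set is returned to the client.
orders = session.table("SALES_ORDERS")
open_value_by_customer = (
    orders.filter(col("ORDER_STATUS") == "OPEN")
          .group_by(col("SOLD_TO_CUSTOMER"))
          .agg(sum_(col("NET_VALUE")).alias("OPEN_ORDER_VALUE"))
)
open_value_by_customer.show()
```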
Overview of the Joint Solution
Let’s take a look at an SAP business use case that all companies share: order-to-cash. This process in SAP usually involves the sales and distribution module, but we’ve added accounts receivable to complete the story.

The SAP accelerators use ready-made logic to transform raw SAP data into business use case analyses. It starts with extracting the data from SAP; you can deploy and install Qlik Data Gateway – Data Movement close to the SAP system so the data can be pulled from SAP and placed into Snowflake without affecting the performance of the SAP production system.
For the SAP accelerators, we used the SAP extractors as the basis for the data layer. This pre-transformed data allows us to use smarter methods to extract data from SAP. For the order-to-cash use case, we need 28 extractors, which would amount to more than 200 tables if we went directly against the underlying SAP structures.
Using a single endpoint, we pull the data from SAP to Snowflake, where we divide the fact data based on use cases; however, we use a common set of dimensions to feed the scenarios. Below is what this architecture looks like conceptually.

Figure 3 – QCDI data transformation process.
This process also enables easy future additions to new SAP use cases.
With our single endpoint we can load the data into our landing and storage areas at the same time. The data is landed only once, and Qlik uses views as much as possible to prevent data duplication within Snowflake.
Now we have two different dimensional loads, because some dimensions are not suitable for CDC (referred to as Delta). These are reloaded on a schedule and merged with the Delta dimensions in a transformation layer, which presents a single set of entities for building the data mart layer.
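Conceptually, that merge step resembles the following Snowpark sketch, in which the table and key names are assumptions for illustration rather than the accelerators' actual objects:

```python
# Conceptual sketch: merge a scheduled full reload of a non-CDC dimension
# into the consolidated dimension entity used by the data mart layer.
# Table and column names are illustrative assumptions.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import when_matched, when_not_matched

def merge_customer_dimension(session: Session) -> None:
    target = session.table("TRANSFORM.DIM_CUSTOMER")          # consolidated dimension
    full_reload = session.table("STORAGE.DIM_CUSTOMER_FULL")  # scheduled full reload

    target.merge(
        full_reload,
        target["CUSTOMER_ID"] == full_reload["CUSTOMER_ID"],
        [
            when_matched().update({"CUSTOMER_NAME": full_reload["CUSTOMER_NAME"]}),
            when_not_matched().insert(
                {
                    "CUSTOMER_ID": full_reload["CUSTOMER_ID"],
                    "CUSTOMER_NAME": full_reload["CUSTOMER_NAME"],
                }
            ),
        ],
    )
```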
Let's take a look at the order-to-cash process. We land and store the data in Snowflake, and in the landing layer we add the rules that rename the SAP extractors and their columns to descriptive names.

You may notice that there are a lot of rules. We ran a report in SAP to extract all the metadata for the extractors, but the same field name does not always mean the same thing. For example, KUNNR is ship-to-customer in one extractor and sold-to-customer in another.
Each extractor has its own definition, so we used Qlik Sense to create a metadata dictionary that we can apply in the user interface (UI).
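To illustrate what such a dictionary captures, here is a hypothetical sketch of a rename mapping applied as a descriptive view; the field names, table, and view are assumptions, and in the accelerators this is handled through the no-code UI rather than hand-written code:

```python
# Hypothetical sketch of a metadata dictionary entry: mapping cryptic SAP
# extractor field names to descriptive names and exposing them as a view.
# All object names are illustrative; Qlik Cloud Data Integration manages
# this through its UI rather than custom code.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

RENAME_RULES = {
    "VBELN": "SALES_DOCUMENT",
    "KUNNR": "SOLD_TO_CUSTOMER",  # the same field means ship-to-customer in other extractors
    "NETWR": "NET_VALUE",
    "ERDAT": "CREATED_DATE",
}

def create_descriptive_view(session: Session, landing_table: str, view_name: str) -> None:
    """Create a view over the landing table with business-friendly column names."""
    df = session.table(landing_table)
    renamed = df.select([col(src).alias(dst) for src, dst in RENAME_RULES.items()])
    renamed.create_or_replace_view(view_name)
```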

As you can see, there are several important things happening at the same time. Within this no-code UI, we have routed over 80 extractors to a landing area, renamed them, and added views with descriptive names in the storage layer.
This is important because many SAP solutions require a flow or a separate piece of code to maintain for each extractor or table, but within Qlik it is all managed simultaneously via the SaaS UI (no coding required).
Once the data is properly keyed and renamed, we begin executing our transformation layers. This process combines the dimensions into single entities and builds the business-specific process for a use case such as order-to-cash.
The transformation layer is where we start manipulating the data with 100% pushdown SQL to Snowflake. Some examples of transformations include currency conversion, descriptive text flattening, and other SQL manipulations.
In addition to the SQL manipulations, a stored procedure was created in Snowflake with Snowpark for Python that can be called via Qlik's SQL pushdown. This shows how engineers familiar with Python can build transformation steps as stored procedures in Snowflake and invoke them through Qlik.
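As a hedged illustration of that pattern, the sketch below registers a Snowpark for Python stored procedure that standardizes a text column; the procedure name, stage, and tables are assumptions rather than the accelerators' actual objects, and a SQL pushdown step would invoke it with a plain CALL statement:

```python
# Sketch of a Snowpark for Python stored procedure registered in Snowflake.
# All names (procedure, stage, tables, columns) are illustrative assumptions.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, trim, upper

def clean_customer_text(session: Session, source_table: str, target_table: str) -> str:
    """Standardize a descriptive text column and write the result back to Snowflake."""
    df = session.table(source_table)
    cleaned = df.with_column("CUSTOMER_NAME", trim(upper(col("CUSTOMER_NAME"))))
    cleaned.write.save_as_table(target_table, mode="overwrite")
    return f"wrote {target_table}"

# connection_parameters as defined in the earlier Snowpark sketch
session = Session.builder.configs(connection_parameters).create()

session.sproc.register(
    func=clean_customer_text,
    name="CLEAN_CUSTOMER_TEXT",
    packages=["snowflake-snowpark-python"],
    is_permanent=True,
    stage_location="@qcdi_sprocs",  # hypothetical stage
    replace=True,
)

# A SQL pushdown step can then invoke the procedure directly:
#   CALL CLEAN_CUSTOMER_TEXT('STORAGE.CUSTOMER_RAW', 'TRANSFORM.CUSTOMER_CLEAN');
```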
Once the data is fully mapped, transformed, and prepared, we create the final data mart layer. Qlik Cloud Data Integration flattens the dimensional snowflake structures into a true star schema ready for analysis, and these star schemas are consolidated per business process under a single data mart layer.

Our data layer is now complete. We have used the Qlik Cloud Data Integration SaaS platform to load, store, transform, and deliver analytics-ready data marts that feed our Qlik SaaS analytics engine.
The SAP accelerators come with modules for order-to-cash, inventory management, financial analysis, and procure-to-pay.

SAP Data from Raw to Analytics-Ready
Now that the data preparation is complete, we can harness the power of Qlik SaaS analytics. We load all star schemas from the data mart layer into Qlik and create a semantic layer on top of the pure SQL data.
The associative engine of Qlik is used to combine all parts of the order-to-cash module into a single, connected, in-memory model. We also add master measures and complex set analysis calculations in the style of online analytical processing (OLAP) to create dynamic data entities such as rolling dates or complex calculations such as days open.
This is what the refined analysis model looks like in Qlik SaaS. Note that there are multiple fact tables (10) that share the same set of dimensions.

Figure 8 – Qlik SaaS data model.
By having access to all that data, we can see the big picture of the order-to-cash process in SAP.

Figure 9 – Order-to-cash Qlik application.
SAP Order-to-Cash Analysis
The order-to-cash module answers the business question of how an order moves from the product being ordered, to the moment it was shipped, to the moment it was invoiced, to the moment the customer paid.
Let's take a look at an order that a customer placed. That order (5907) was initially placed on 17-06-1999 and the payment was completed on 12-12-1999. That's a days sales outstanding (DSO) of 194 days!
That would be the end of the story if we were using a simple SQL-based query tool, but with Qlik's associative model we can find out what actually happened.

Figure 10 – Visualization of the order-to-cash process in Qlik from SAP.
There was no material in stock to ship the entire order, so it was split into three separate shipments and invoiced/paid in three documents.
Now, the total DSO was technically 194 days, but only 187 days from invoice to payment. However, that still doesn't tell the whole story: once the customer was invoiced, they actually paid within 1-2 days. These details would have been missed without the Qlik analytics engine.
Even in this case, we’re still only looking back at what’s happened. What about looking forward and identifying trends? For example, out-of-stock parts mean we can’t ship everything at once. With Amazon SageMaker, we can predict what issues and delays we might face.
What we've created with the SAP accelerators are plug-and-play templates to ask the tough questions about SAP data, with Snowflake and AWS as the engines that drive the insights with the Qlik SaaS platform.
Predicting the Future with Amazon SageMaker
One of the more powerful components of the Qlik SaaS architecture is the ability to score the data in the Qlik application with the Amazon SageMaker engine. In our order-to-cash use case, we took sample data and trained a SageMaker model to predict late versus on-time delivery.
A quick way to achieve this is to use the Snowpark API to perform feature engineering on the dataset, before finally bringing the data to a SageMaker endpoint for training and deployment. We can then use Qlik to access the endpoint and view the predictions directly in the dashboard.
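For readers who want a feel for the endpoint call itself, here is a minimal sketch of invoking a deployed SageMaker endpoint outside of Qlik's built-in connector; the endpoint name, feature layout, and payload format are assumptions for illustration:

```python
# Minimal sketch of scoring order rows against a deployed SageMaker endpoint.
# Endpoint name, region, and feature columns are illustrative assumptions.
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# One CSV row per order: hypothetical engineered features such as number of
# shipments, days the order has been open, and a credit-hold flag.
payload = "\n".join(
    [
        "3,187,1",
        "1,12,0",
    ]
)

response = runtime.invoke_endpoint(
    EndpointName="late-delivery-predictor",  # hypothetical endpoint
    ContentType="text/csv",
    Body=payload,
)

predictions = response["Body"].read().decode("utf-8")
print(predictions)  # e.g., one late/on-time score per input row
```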
How does this work with analytics? Within Qlik SaaS we can connect to Amazon SageMaker to transfer data from the Qlik engine to an endpoint that makes the above predictions based on the SAP data.
When the data is reloaded into the Qlik analytics engine, the data from relevant in-memory tables is sent as data frames to the SageMaker endpoint, where the AI/ML prediction is calculated. The predictions are sent back to the Qlik app, cached together with the original data, and made available for presentation in the visualization layer.

Figure 11 – Amazon SageMaker and Qlik SaaS integration.
We have now come full circle: taking historical data from SAP and refining it into analytics-ready data using Qlik Cloud Data Integration, presenting that refined data with Qlik Cloud Analytics, and predicting future outcomes with Amazon SageMaker, all running on the Snowflake Data Cloud.
Conclusion
In a typical order management cycle, information sharing between organizations has become critical to the successful operation of the modern enterprise. Improved customer satisfaction, increased competitiveness, and reduced supply chain bottlenecks and days sales outstanding are key indicators of optimized cash flow for the company.
In this post we discussed how the Qlik Cloud Data Integration SAP accelerator solution, in partnership with Snowflake and AWS, can accelerate your SAP data modernization, enable greater agility and collaboration across organizations, and quickly deliver business solutions through optimized order-to-cash business insights.
Contact us!
For more information, please contact our sales department so you can leverage the full potential of your SAP data.