High-load analytic applications: a challenge for SAP HANA® | Deloitte Deutschland
High-load analytic applications
A challenge for SAP HANA®
Is your SAP HANA system running slow or operating at its capacity limit? Or has it even crossed that limit and occasionally breaks down?
Factors which cause high loads in SAP HANA
Scenarios like these, which we call high-load analytic applications, require considerable computing resources to process the data, especially big data. High-load analytics is particularly challenging when business users want the latest data available at the most detailed level.
On the one hand, batch processing usually involves many, often very complex, extraction and transformation operations; this is the main reason why such workloads still do not allow real-time querying. On the other hand, real-time calculations can lead to large temporary memory allocations in SAP HANA, ranging from megabytes to gigabytes per user, and response times grow from seconds to minutes. In summary, the main factors that drive query runtime, and thus system response time and memory allocation, are the complexity of the calculations, the data volume, and the number of concurrent users.
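The interplay of these three factors can be illustrated with a simple back-of-envelope model. The function name, parameters, and figures below are purely illustrative assumptions, not SAP HANA sizing rules; the per-user allocation range (megabytes to gigabytes) is the only input taken from the scenario above.

```python
def peak_memory_gb(temp_alloc_per_user_gb: float,
                   concurrent_users: int,
                   complexity_factor: float = 1.0) -> float:
    """Rough upper bound on temporary memory demand: per-user
    allocation, scaled by user count and calculation complexity."""
    return temp_alloc_per_user_gb * concurrent_users * complexity_factor

# Illustrative: 0.5 GB temp memory per user, 40 concurrent users,
# and a complex calculation that doubles the footprint.
print(peak_memory_gb(0.5, 40, 2.0))  # → 40.0
```

Even with modest per-user allocations, concurrency and complexity multiply quickly, which is exactly why such workloads can push a system toward its capacity limit.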
High loads during batch processing
Let’s pick two examples of high-load analytic scenarios from Deloitte’s comprehensive SAP HANA project experience. The first one deals with batch processing for profit contribution reporting. The requirement is to build an SAP HANA based financial reporting solution that can handle large amounts of data as well as complex business rules. For this scenario, a very detailed and thorough SAP HANA SQL based approach can be applied to meet all business requirements.
This provides the business with a standard snapshot of the data, giving financial analysts and auditors a consistent view. The downside of this approach is the daily, recurring need to handle very large data volumes and to parse all closed and open periods in order to restate and revise the Year-to-Period KPIs and measures. This results in high CPU and memory consumption and requires a resource-intensive SAP HANA solution to be sustained. Moreover, business requirements change: if, for example, the KPI definition is changed from Year-to-Period to Year-to-Date computation, incorporating that specification obviously leads to a significant increase in CPU and memory usage.
Depending on your current SAP HANA load, this little change might even lead to the installation of additional SAP HANA nodes just to ensure stable and performant operations of all applications.
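Why such a seemingly small definition change is expensive can be sketched in a few lines. Assuming, for illustration only, that Year-to-Period means the KPI value of each individual period while Year-to-Date means its running total (a strong simplification of real profit contribution rules, with invented numbers), the cumulative variant makes every restated period depend on all prior periods:

```python
from itertools import accumulate

# Monthly profit contribution per period (illustrative numbers only)
monthly = [120, 95, 130, 110, 140, 105]

# Year-to-Period: each period stands alone, so a restatement
# touches only the periods that actually changed.
ytp = monthly

# Year-to-Date: a running total, so restating any period forces
# re-aggregation of every later period as well.
ytd = list(accumulate(monthly))

print(ytd)  # → [120, 215, 345, 455, 595, 700]
```

On a handful of values this is trivial; over billions of line items and all open periods, the extra aggregation work is what drives the CPU and memory increase described above.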
High loads during real-time processing
To complement the picture of high-load analytic applications, we now look at a sentiment analysis use case representing a real-time scenario. Sentiment analysis is still challenging, as it involves complex rules for transforming unstructured information into structured form so it can be interpreted with high precision. Processing 250,000 reviews across 56 aggregates, for instance, demands a great deal of SAP HANA’s CPU and RAM, and the number of reviews to be analyzed grows every minute. The case is twice as challenging on virtual machines. It is therefore necessary to find an approach that consumes few resources while still meeting business expectations for data freshness and response times.
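To make the aggregation step concrete, here is a deliberately toy sketch: a word-list scorer averaged per aggregate. This is not SAP HANA’s text analysis engine; the lexicon, function names, and sample reviews are all invented for illustration. Real sentiment rules are far more complex, which is precisely why the workload is CPU- and RAM-intensive at scale.

```python
from collections import defaultdict

# Tiny illustrative lexicon; production sentiment rules are far richer.
POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "poor", "slow"}

def score(review: str) -> int:
    """Naive score: positive word count minus negative word count."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate(reviews):
    """Average sentiment per aggregate (e.g. per product), in one pass."""
    totals = defaultdict(lambda: [0, 0])  # aggregate -> [score sum, count]
    for agg, text in reviews:
        t = totals[agg]
        t[0] += score(text)
        t[1] += 1
    return {a: s / n for a, (s, n) in totals.items()}

reviews = [("laptop", "great screen but slow disk"),
           ("laptop", "excellent build"),
           ("phone", "poor battery")]
print(aggregate(reviews))  # → {'laptop': 0.5, 'phone': -1.0}
```

Even this crude version touches every word of every review on each run; multiplied by hundreds of thousands of constantly growing reviews and dozens of aggregates, the cost of recomputing everything in real time becomes clear.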