One of the benefits of moving to a cloud data warehouse is the option of a scalable system: in moments of high demand, additional resources are added, and you pay for what you use. Since its introduction, SAP Datasphere did not offer this flexibility, but that changed with the July 2024 release. This blog explains how to configure this on-demand compute functionality and gives an example of the cost savings it can bring.
Tenant Configuration

In a standard setup, your Datasphere tenant has a fixed size: you choose the amount of storage, memory, and compute up front, and that capacity is always on, whether it is used or not. In practice, this means you will always oversize your system. You want to make sure that at the moments when most people use the system it keeps performing, so that end-users do not experience long waiting times before their queries return. In other words, you size for the peak moments, which means that during off-peak moments (nights, weekends) you pay for an almost idle system. This is not what you would expect from a cloud solution.
Luckily, SAP also realized this, and in July 2024 it introduced the Elastic Compute Node concept. This allows you to purchase additional compute blocks for a certain number of hours per month (see picture).

You can choose the performance class, and based on that you will get an additional 1, 2, or 4 CPUs assigned. You also define the number of block hours you want to purchase. Within one hour, you can consume a maximum of 4 block hours.
This option is only available if your base configuration has at least 128GB of memory.
Investigation

Before creating any elastic nodes, you first need to find out which workloads cause the peaks. The Workload Analysis tool shows which spaces and objects consume the most CPU time and memory, and at which times; these are the candidates to assign to an elastic node.
Setup of Elastic Nodes

When creating an Elastic Compute Node, you set the number of compute blocks. If you have chosen 4 compute blocks, then for each hour the node runs you 'pay' 4 block hours, which are subtracted from the number of block hours you purchased in the tenant configuration.
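To make the bookkeeping concrete, here is a minimal sketch in Python. The function is purely illustrative (it is not part of any SAP API); it simply encodes the rules described above:

```python
# Illustrative sketch of elastic-node block-hour accounting;
# not an SAP API, just the bookkeeping described in the text.

def block_hours_consumed(compute_blocks: int, hours_running: float) -> float:
    """Each hour a node runs costs one block hour per compute block."""
    if not 1 <= compute_blocks <= 4:
        raise ValueError("at most 4 block hours can be consumed per hour")
    return compute_blocks * hours_running

purchased = 80                                  # block hours bought up front
used = block_hours_consumed(compute_blocks=4, hours_running=10)
print(f"used {used:g} of {purchased} block hours, {purchased - used:g} left")
# -> used 40 of 80 block hours, 40 left
```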
Once the node is created, you can assign spaces or individual objects to it. These are the spaces and objects you identified in the previous step with the Workload Analysis tool; they are shown in the 'Exposed Objects' tab.
Be aware that data from remote tables cannot be replicated to an Elastic Compute Node. If you need this data to be available on the node, you will have to persist it in a view (best practice is to use the fact view).
Running Elastic Nodes

An elastic node only consumes block hours while it is actually running. You can start and stop a node manually, or have it run on a schedule that matches the peak windows you identified, so that no block hours are spent outside those windows.
Cost Savings

Let’s take an example in which you have an environment with 512GB of storage and 128GB of memory of performance class Compute. This environment will cost you 4946 Capacity Units per month. This is, in general, enough to cover your reporting requests; however, during the Month End Close period, end-users are complaining about long response times. You have done your investigation and indeed see that specific queries on Financial Data take a lot of CPU time during the first two working days of the new month. As a consequence, requests are put in a queue and sometimes rejected, which explains the issues that end-users are experiencing.
To solve this, you would need to increase your memory by another 64GB (8 CPUs). If you do this via your base configuration and permanently raise your memory to 192GB, your monthly Capacity Units increase to 7045. With a one-to-one ratio to euros, this is an additional 2100 EUR per month.
If you decide to use elastic compute nodes instead, you would need 80 block hours of the High Compute class. These can be covered by an elastic node with 4 compute blocks that runs on the first two working days of each month between 8:00 and 18:00 (2 days * 10 hrs * 4 blocks = 80 block hours). These 80 block hours cost only 93 Capacity Units in total, saving roughly 2000 EUR per month compared to permanently increasing your memory.
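To make the comparison explicit, here is a minimal sketch of the arithmetic above. The Capacity Unit figures are taken directly from the example, and the one-to-one conversion to euros is the same assumption as in the text:

```python
# Cost comparison from the example; Capacity Units (CU) are converted
# one-to-one to EUR, as assumed in the text.

BASE_CU = 4946      # 512GB storage, 128GB memory, performance class Compute
UPSIZED_CU = 7045   # base configuration permanently increased by 64GB

# Elastic alternative: first two working days of the month, 8:00-18:00,
# with 4 compute blocks.
block_hours = 2 * 10 * 4        # 2 days * 10 hours * 4 blocks = 80
ELASTIC_CU = 93                 # cost of those 80 block hours

permanent_extra = UPSIZED_CU - BASE_CU       # ~2100 EUR/month
saving = permanent_extra - ELASTIC_CU        # ~2000 EUR/month
print(f"permanent upsizing: +{permanent_extra} EUR/month")
print(f"elastic nodes:      +{ELASTIC_CU} EUR/month for {block_hours} block hours")
print(f"monthly saving:     ~{saving} EUR")
```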
Conclusion

With Elastic Compute Nodes, SAP Datasphere finally offers the elasticity you would expect from a cloud data warehouse: instead of permanently sizing your tenant for its peak load, you purchase block hours and add compute only when you need it. For a predictable peak such as the Month End Close, the example above shows this can bring the additional monthly cost down from roughly 2100 EUR to under 100 EUR.
