
SAP HANA Transformations - Tips & Tricks



Intro


With SAP BW 7.40 on HANA, a new runtime for transformations is available, namely the SAP HANA runtime. The transformation logic is executed inside the SAP HANA database without any redundant data transfer between the database server and the application server. This opens up a whole new world of modeling techniques for business intelligence, with great performance. In this blog post, I have listed some approaches that I consider useful in the context of BW transformations on HANA. The content here is based primarily on BW release 7.50 SP10.


Restrictions


Not all BW transformations can be pushed down using this approach. In particular, transformations that contain custom ABAP logic cannot be pushed down to HANA; this logic needs to be rewritten in SQLScript first. You can check existing transformations for the possible runtimes (ABAP or HANA) in the transformation maintenance screen. If a transformation cannot be pushed down, you get a list of the unsupported features within it. On the other hand, if the transformation contains an ABAP Managed Database Procedure (a SQLScript-based routine), it can only be executed in SAP HANA.



The following list of features, provided by the official SAP Help, prevents transformations from being executed in the HANA runtime:

  • Queries as InfoProviders are not supported as the source.

  • Rule groups are not supported.

  • Customer-defined functions in the formula editor are not supported.

  • To read data from DataStore objects, the entire key must be provided.

  • Near-line connections are not supported.

On top of what is officially listed, I also want to share some scenarios I came across that are available in the ABAP runtime but are not so straightforward, or not possible at all, in the HANA runtime.


Messages in the request monitor


There is an easy way to log messages (be it information or error messages) in the request monitor in the ABAP runtime. An example could be a list of all missing exchange rates in the data package. Until now, I have not discovered a way to do the same in the HANA runtime, unless I switch on error handling, which is not always desired. Furthermore, information messages do not seem to be possible at all. I am looking forward to this feature; if someone can prove me wrong, please share.
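For reference, the ABAP-runtime approach looks roughly like the following minimal sketch. It appends an entry to the MONITOR table exposed by the generated routine methods; the message class ZBW, message number and variable are assumptions.

  * A sketch of the ABAP-runtime option (not available in the HANA runtime).
  * Assumption: a custom message class ZBW with message 001 exists.
  DATA ls_monitor TYPE rstmonitor.

  ls_monitor-msgid = 'ZBW'.          " message class (assumed)
  ls_monitor-msgty = 'I'.            " 'I' = information, 'E' = error
  ls_monitor-msgno = '001'.
  ls_monitor-msgv1 = 'USD/EUR'.      " e.g. the missing exchange rate pair
  APPEND ls_monitor TO monitor.      " MONITOR table parameter of the routine method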


Semantic grouping


With SAP HANA Execution flagged in the DTP, the semantic grouping setting only affects the records sent to the error stack. Control over how records are grouped into packages is therefore not available.



Side-Effect-Free AMDP


When a custom implementation is necessary, one can create a HANA-based routine, which in essence is an AMDP method. There are use cases where you want to modify a custom table outside the transformation, and this is no issue with an ABAP routine. However, the AMDP method header is predefined and contains the OPTIONS READ-ONLY addition. This means that the underlying database procedure has to be side-effect free; in other words, statements like INSERT, UPDATE and DELETE are not supported in the read-only procedure. Furthermore, building dynamic SQL statements (using EXEC or EXECUTE IMMEDIATE, which are sometimes very handy) is forbidden.
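As an illustration, here is a minimal, self-contained AMDP class (not a generated BW routine; names and types are made up) whose database procedure is declared with OPTIONS READ-ONLY. Any write statement or dynamic SQL inside its body would already be rejected at activation.

  CLASS zcl_amdp_readonly_demo DEFINITION PUBLIC FINAL CREATE PUBLIC.
    PUBLIC SECTION.
      INTERFACES if_amdp_marker_hdb.
      TYPES: BEGIN OF ty_row,
               comp_code TYPE c LENGTH 4,
               amount    TYPE p LENGTH 9 DECIMALS 2,
             END OF ty_row,
             tt_rows TYPE STANDARD TABLE OF ty_row WITH DEFAULT KEY.
      METHODS derive
        IMPORTING VALUE(it_data) TYPE tt_rows
        EXPORTING VALUE(et_data) TYPE tt_rows.
  ENDCLASS.

  CLASS zcl_amdp_readonly_demo IMPLEMENTATION.
    " OPTIONS READ-ONLY makes the database procedure side-effect free:
    " INSERT/UPDATE/DELETE as well as EXEC / EXECUTE IMMEDIATE are rejected.
    METHOD derive BY DATABASE PROCEDURE FOR HDB
                  LANGUAGE SQLSCRIPT
                  OPTIONS READ-ONLY.
      -- any read-only logic is allowed; a write to a custom table would not be
      et_data = SELECT comp_code,
                       amount
                FROM :it_data;
    ENDMETHOD.
  ENDCLASS.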



Global section in ABAP TRFN


In ABAP routines you can easily declare variables in the global section of the generated class and use them across different data packages. For example, you can derive a value in the first package and reuse it in all the remaining ones. I did not find a way to achieve this using SQLScript. I could imagine an implementation based on global session variables, but because of the read-only state of the underlying procedures this approach is ruled out.
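For comparison, the ABAP-runtime pattern is roughly the following sketch: a declaration in the global section of the generated class, plus a derivation that is guarded so it runs only once. The exchange rate selection is purely illustrative.

  * Global section of the generated routine class - ABAP runtime only
  DATA gv_rate TYPE ukurs_curr.

  * Inside the start routine: derive the value once, reuse it in later packages
  IF gv_rate IS INITIAL.
    SELECT SINGLE ukurs FROM tcurr
      INTO gv_rate
      WHERE kurst = 'M'
        AND fcurr = 'USD'
        AND tcurr = 'EUR'.             "illustrative selection
  ENDIF.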


Requests are not packaged when the source doesn't support high volume extraction


I have observed issues when an InfoCube is the source of the transformation: the package size setting in the DTP was disregarded and all records were processed at once. After a bit of research, I came across SAP Note 2329819, which explains the behavior. My understanding is that the providers that support packaging are listed in section A of the note, and unfortunately the InfoCube is not among them. An out-of-memory situation is very likely when a single DTP execution has to transfer more than a few million records. SAP Note 2230080 contains recommendations on how to work around this manually.


SESSION_CONTEXT function


You can use this function to retrieve the values of the global session variables set by the AS ABAP; a usage sketch follows the lists below. In 7.50 the available variables are:

  • CLIENT = SY-MANDT

  • APPLICATIONUSER = SY-UNAME

  • LOCALE_SAP = SY-LANGU

Additions since 7.51 are:

  • CDS_CLIENT = the client set by an Open SQL statement with the addition USING CLIENT

  • SAP_SYSTEM_DATE = SY-DATUM
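A minimal sketch of reading these variables inside the SQLScript body of an AMDP routine (inTab/outTab are the generated table parameters; the target field names are illustrative):

  -- read AS ABAP session variables via SESSION_CONTEXT
  outTab = SELECT it.*,
                  SESSION_CONTEXT('CLIENT')          AS "LOAD_CLIENT",   -- = sy-mandt
                  SESSION_CONTEXT('APPLICATIONUSER') AS "LOADED_BY",     -- = sy-uname
                  SESSION_CONTEXT('LOCALE_SAP')      AS "LOAD_LANGU"     -- = sy-langu
           FROM :inTab AS it;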

REQTSN derivation


If there is a requirement to use the request number in the transformation logic, it can be derived from standard SAP tables. One possible derivation of the REQTSN is sketched below; for the old REQUID, the approach would be similar.
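The original implementation is not reproduced here, so the following is only a rough sketch of one way to do it. It assumes the request management table RSPMREQUEST with the columns REQUEST_TSN and DATATARGET (the target name ZSALES_O1 is a placeholder); the table would also have to be listed in the USING clause of the AMDP method.

  -- sketch only: take the highest request TSN loaded into the target
  DECLARE lv_reqtsn NVARCHAR(23);

  SELECT MAX("REQUEST_TSN")
    INTO lv_reqtsn
    FROM "RSPMREQUEST"
    WHERE "DATATARGET" = 'ZSALES_O1';   -- assumption: name of the target aDSO

  outTab = SELECT it.*, :lv_reqtsn AS "REQTSN" FROM :inTab AS it;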


CONVERT_CURRENCY function


Once, I stumbled across a requirement that included currency conversion using the well-known SAP exchange rates. Back then, I imagined reinventing the wheel and joining all the TCUR* tables to derive the proper exchange rate, which sounded like a nightmare. Fortunately, SAP has delivered a native SQL function for currency conversion that uses these tables. I just had to provide the schema where the TCUR* tables reside and the proper client number, and the currency conversion worked flawlessly.
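A minimal sketch of such a call inside the routine body; the field names, client, schema name SAPABAP1 and rate type are assumptions and have to be adapted to your system:

  outTab = SELECT it."DOC_NUMBER",
                  it."DOC_CURRCY",
                  'EUR' AS "LOC_CURRCY",
                  CONVERT_CURRENCY( "AMOUNT"          => it."AMOUNT",
                                    "SOURCE_UNIT"     => it."DOC_CURRCY",
                                    "TARGET_UNIT"     => 'EUR',
                                    "REFERENCE_DATE"  => it."PSTNG_DATE",
                                    "CONVERSION_TYPE" => 'M',          -- exchange rate type
                                    "CLIENT"          => '100',        -- assumption
                                    "SCHEMA"          => 'SAPABAP1',   -- schema holding the TCUR* tables
                                    "ERROR_HANDLING"  => 'set to null' ) AS "AMOUNT_EUR"
           FROM :inTab AS it;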

ROW_NUMBER function


This function returns the sequence number of each record within a partition defined by a set of characteristics. A typical use, sketched below, is to deliver the “RECORD” field.
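A minimal sketch of such a projection (the source field names are illustrative):

  outTab = SELECT "DOC_NUMBER",
                  "DOC_ITEM",
                  "AMOUNT",
                  ROW_NUMBER() OVER ( PARTITION BY "DOC_NUMBER"
                                      ORDER BY "DOC_ITEM" ) AS "RECORD"
           FROM :inTab;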


Exit Handlers


In SQLScript, an exit handler is needed to terminate the current processing. In the example sketched below, we stop further processing if we are in the error-handling execution of the procedure. (In short, when you activate error handling in the DTP, the SQL procedure is called twice: once to determine the erroneous records and once to process the non-erroneous records.) The sketch also shows how to stop the processing and throw an error.
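This is a sketch of the pattern, not the original example: the condition name, error codes, check on DOC_CURRCY and the i_error_handling flag (assumed to come from the generated method signature) are illustrative.

  DECLARE lv_missing INT;
  DECLARE finish CONDITION FOR SQL_ERROR_CODE 10001;
  DECLARE EXIT HANDLER FOR finish
    BEGIN
      outTab = SELECT * FROM :inTab;  -- graceful stop (project the real outTab fields here)
    END;

  -- Case 1: skip the main logic during the error-handling call of the procedure
  IF :i_error_handling = 'TRUE' THEN
    SIGNAL finish;
  END IF;

  -- Case 2: hard stop - the error is raised and becomes visible for the DTP request
  SELECT COUNT(*) INTO lv_missing FROM :inTab WHERE "DOC_CURRCY" = '';
  IF :lv_missing > 0 THEN
    SIGNAL SQL_ERROR_CODE 10002 SET MESSAGE_TEXT = 'Document currency missing';
  END IF;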



Database schemas access


Accessing tables or views in the default BW schema is allowed, with the restriction that all objects have to be listed in the method header with the USING keyword; these objects also have to be present in the ABAP Data Dictionary. Addressing a BW-schema object without declaring it this way is not valid syntax. Other database schemas, on the other hand, are accessible in the regular way, provided the DB user has been granted the respective authorizations. Both cases are sketched below.
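A sketch of the AMDP routine method covering both cases; /BIC/PMATERIAL, the schema ECC_SCHEMA, table T001 and all field names stand in for whatever objects you actually use.

  " Everything taken from the default BW/ABAP schema must be a DDIC object
  " and must be listed after USING.
  METHOD procedure BY DATABASE PROCEDURE FOR HDB
                   LANGUAGE SQLSCRIPT
                   OPTIONS READ-ONLY
                   USING /bic/pmaterial.

    outTab = SELECT it."MATERIAL",
                    p."MATL_GROUP",               -- BW schema object declared via USING
                    c."BUTXT" AS "COMP_TEXT"      -- foreign schema, addressed directly
             FROM :inTab AS it
             LEFT JOIN "/BIC/PMATERIAL" AS p
                    ON p."MATERIAL" = it."MATERIAL"
             LEFT JOIN "ECC_SCHEMA"."T001" AS c   -- needs SELECT authorization for the DB user
                    ON c."BUKRS" = it."COMP_CODE";

  ENDMETHOD.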



AMDP Macros


Predefined AMDP macros are available since release 7.52. There are two implementations available; a combined usage sketch follows the list:

  • "$ABAP.type( int8 )" - Macro for ABAP Types. It can be used to access ABAP types. You can specify all elementary, non- generic types, which are possible after a TYPE addition in an ABAP implementation.

  • "$ABAP.schema(LOGICAL_DB_SCHEMA)" - Macro for Logical Schema mapping. The intention is to have one logical name that represents a physical database schema. This helps when for example throughout different landscapes DB schema names are not consistent. Then, with using the macro you can have your SQL script code transportable between landscapes.


In my opinion, converting existing transformations to the HANA runtime is definitely worth it. I use SQLScript whenever I can in order to avoid the additional data transfers between the application and database servers. This easily covers a lot of the trivial scenarios, like master data lookups, unit conversions, currency translations, value derivations from a custom table or DSO, etc. Choosing the AMDP approach over an ABAP routine can in some cases lead to significant performance gains; I have observed runtimes improve by a factor of more than 15.


Still, as shown above, not everything is supported. The most annoying limitation for me is the packaging issue when extracting from old-school InfoCube objects. Another concern is scenarios where more complex logic requires processing the data line by line with imperative code, such as IF statements, loops, cursors and other control structures. In this case the additional data round trip is saved, but the parallel processing power of HANA is not utilized. The regular ABAP developer can easily get puzzled here, being used to thinking in record-based processing of data. However, to get the most out of HANA, one should learn to think set-based, using ordinary SQL statements.
