ABAP/4 Performance Tuning
Tips and Tricks
Author : Mario Opsomer
Last Updated : January 7, 1998
Table Of Contents
1) Incorporate CHECK statements for table fields into the WHERE clause of the SELECT statement.
2) Do NOT use Logical Databases as programs using Logical Databases are much harder to tune.
3) Avoid the usage of the nested select construct - Use views or the 4.x join construct instead.
4) Check whether your program uses the appropriate indexes using SQL Trace (ST05) - Explain SQL.
5) All programs run fine on small systems like DE2, so include performance tests on the QA systems.
6) Pay attention with clustered and pooled tables. These often cause unexpected performance issues.
7) Only SELECT the columns really needed instead of transferring all columns with SELECT *.
8) Make sure the users will use the program properly by including the available selection options.
9) Only use an ORDER BY in your SELECT if the order matches the index which should be used.
10) Avoid reading the same data rows over and over again from the database - read data once in itab.
11) Avoid complex tests in the WHERE clause of SELECTs as these can confuse the db optimizer.
12) Use aggregate functions for calculations/summations in the SELECT (whenever possible).
13) Use SELECT SINGLE instead of SELECT-ENDSELECT whenever possible.
14) Check whether you are using the right tables to retrieve the information needed by your program.
15) If the performance problem does not seem to be db-related, use one of the other Workbench Tools.
16) Check whether the runtime matches the amount of data to be retrieved from the database.
Appendix 1 : Example Explanation of an Informix Statement (=> SAP Documentation).
Appendix 2 : SQL Explain examples - How the SQL syntax used affects performance.
Appendix 3 : SQL Trace output example - Explanation of database operation types.
Appendix 4 : SQL Explain output - How user behavior affects performance.
Appendix 5 : Output from program ZZBCR006 for table MSEG on PR1.
Appendix 6 : List of the biggest database tables on the PR1 system (status on January 7, 1998).
Appendix 7 : Same as in Appendix 6, but sorted by category-tablename (status on January 7, 1998).
1) Incorporate CHECK statements for table fields into the WHERE clause of the SELECT statement.
The database can then use an index (if possible) and network load is considerably less.
There are cases in which such changes resulted in programs running 5 times faster.
select * from table.
  check field1 = '001'.
endselect.
should become
select * from table where field1 = '001'.
endselect.
2) Do NOT use Logical Databases as programs using Logical Databases are much harder to tune.
It is very hard to tune programs using a logical database (=> not enough flexibility).
CHECK statements must often be placed after the GET statements, and as mentioned
in tip 1, CHECK statements should be replaced by additional tests in the WHERE clause,
which is not possible when working with one of the standard SAP logical databases.
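A minimal sketch of the difference (assuming a report whose attributes specify an FI logical
database such as BRF ; the company code, document type and year values are only examples).
With the logical database, the test can only be applied after the row has been read :
get bkpf.
  check bkpf-blart = 'EM'.    "the row was already read before this test
Without the logical database, the same test can be pushed into the WHERE clause :
select * from bkpf where bukrs = '4150' and blart = 'EM' and gjahr = '1999'.
endselect.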
3) Avoid the usage of the nested select construct - Use views or the 4.x join construct instead.
Each SELECT statement involves a considerable amount of overhead (on db and network).
Using the traditional nested select construct generates a lot of SELECT statements,
especially if the inner select statements are often executed, causing bad performance.
By defining views which combine (join) the different involved tables, the program only
executes one select statement (against the view) reducing the db and network overhead.
In 4.x, SAP introduced the join construct into ABAP/4 Open SQL, so you no longer
need to define views to avoid the traditional nested select construct.
When using views or the join construct, the db can also better optimize the disk accesses.
select v~vbeln v~auart v~bname v~kunnr
p~posnr p~matnr p~kwmeng p~meins e~etenr
into table itab
from vbak as v inner join vbap as p on v~vbeln = p~vbeln
inner join vbep as e on p~vbeln = e~vbeln and p~posnr = e~posnr
where v~vbeln between '0000000000' and '0000001000'.
In the above example, an inner join is done between three tables (vbak, vbap, vbep).
This method is really much faster than having three separate SELECT statements.
(The complete source of program ZZSTU06E can be found on the DE2 system.)
IMPORTANT REMARK :
SAP did not only introduce the INNER JOIN into their ABAP/4 Open SQL in SAP 4.x,
but also the following very useful ANSI-SQL constructs : the OUTER JOIN, SUBQUERIES
and the GROUP BY - HAVING clause.
The difference between an INNER JOIN and an OUTER JOIN is the following. If a query on
an INNER JOIN of VBAK (outer table) and VBAP (inner table) finds a record in VBAK but
no matching records in VBAP, then that VBAK record is not returned at all, because an inner
join only returns combinations for which both tables contain matching rows. If you still want
to keep the VBAK rows for which there are no matching VBAP rows, you need to use the
OUTER JOIN construct available in ABAP/4 Open SQL as of 4.x.
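A minimal sketch of the 4.x outer join syntax (the selected fields and the itab are only
illustrative, in the style of the join example above) :
select v~vbeln v~auart p~posnr p~matnr
into table itab
from vbak as v left outer join vbap as p on v~vbeln = p~vbeln
where v~vbeln between '0000000000' and '0000001000'.
For VBAK rows without matching VBAP rows, the VBAP fields in itab are returned with
their initial values.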
While the ABAP/4 Open SQL was only covering a small part of the ANSI-SQL standard
in previous SAP releases, it is now covering most of the ANSI-SQL standard (at last).
Please try to understand and use these techniques, as they make programs more efficient !
If you are not familiar with SQL yet, borrow or buy a good book on SQL and study it.
4) Check whether your program uses the appropriate indexes using SQL Trace (ST05) - Explain SQL.
Learn how to use the SQL Trace tool (SAP Extended Help : BC ABAP Workbench Tools).
Learn how to use the “Explain SQL” function of SQL Trace and understand its output.
Appendix 1 is a copy of the topic called “Example Explanation of an Informix Statement” from the SAP Extended Help : BC ABAP Workbench Tools.
It is CRUCIAL that programs use the proper index and use as much as possible of that index.
You can only verify this by using the SQL Trace “EXPLAIN SQL” function for statements.
Appendix 2 shows a number of select statements against the central FINANCE table BKPF.
Query 1 is phrased in three different ways, each time returning the same results,
but the runtime varies depending on the way the SQL is presented to Informix.
The runtime differences are small on DE2, but on QA2 the first version completes
in 2 minutes, while the other two versions require 18 minutes (9 times longer !).
Query 2 is phrased in two different ways, each time returning the same results. The
runtime differences are again small on DE2 (1.5 seconds versus 9 seconds). When
running the same selects on QA2, the first version completes in 4 seconds while the
second version of the SELECT takes more than 19 minutes (~ 300 times slower !).
The “Estimated Cost” line indicates how expensive the SQL statement is for Informix.
It is just a number, but the higher that number, the longer the query probably will take.
Costs above 100,000 can be considered high (especially if # of rows returned should be low).
If a select is repeatedly executed (e.g. within a nested loop), the cost should be below 100.
Keep in mind that the estimated costs are different for different SAP systems (DE2 vs QA1).
The two thresholds mentioned here are thresholds for our bigger systems (QA# and PR1).
The “Estimated Cost” thresholds for DE2 are even much lower as there is less data on DE2.
The “Index Keys” line in the “Explain SQL” output shows the fields of the index being used.
The “Lower Index Filter” in the output shows which tests are used for positioning within the index.
It’s important that a lot of tests are included there (and not in the “Filters” line of the output).
Appendix 3 shows the output from an SQL Trace on an execution of program ZZGXRC11.
It also contains an explanation of the different types of database operations.
Appendix 4 shows another example of an SQL statement with its “Explain SQL” output.
Because the user forgot to fill in the year and two other FI-GLX fields for which
there is only one valid value, the runtime was several hours on PR1. If the user
had filled in these fields, the runtime would have been less than 5 minutes on PR1.
Keep in mind that developers can guide the user in the proper direction by using
the options DEFAULT and OBLIGATORY on the PARAMETERS statement and
by using the options DEFAULT, OBLIGATORY, NO-EXTENSION and
NO INTERVALS on the SELECT-OPTIONS statements. The SAP environment should
be considered as an OLTP environment and not as some kind of data warehousing
environment in which the user can ask for whatever they want. The developer should
really ask whether it should be allowed to run the programs without tests on the most
important fields such as company code, ledger, plant, fiscal year, etc. Quite often, if
the user does not fill in a good test on these fields, the performance is really terrible.
You can run program ZZBCR006 to find technical information about a table which must be
accessed in your program. The output shows the number of rows and the size of the table.
It also shows in a single screen all indexes on the given table. Keep in mind that ZZBCR006
shows the output for the currently used SAP system, so you need to run it on QA# or PR1, if
you want to know more about the technical details of the table on our bigger SAP systems.
Appendix 5 shows the output of running program ZZBCR006 for the MSEG table on PR1.
5) All programs run fine on small systems like DE2, so include performance tests on the QA systems.
As the amount of data is very small in the DE2 db, most programs run very fast on DE2 !
So you really need to check the performance on our QA systems which are copies of PR1 !
Compare the amount of data you expect to be needed by the program run with the runtime.
If the runtime is not reflecting the amount of data needed by the run, check with SQL Trace.
In most cases, unexpectedly high runtimes are caused by too much data being read from the db.
It is for example possible that data is read for multiple plants and periods, although the user
only asked for data for a single plant-period combination, because the wrong index is used.
6) Pay attention with clustered and pooled tables. These often cause unexpected performance issues.
Some of the SAP tables are not transparent, but pooled or clustered. Be aware of this !
There are a lot of limitations on how such tables can be accessed. You cannot include such
tables in database views and join constructs. The FI-GL table BSEG, which is one of our
biggest PR1 tables, is an example of a clustered table. At the database-level, there is no table
called BSEG, but instead RFBLG is being used for the BSEG data. Most of the fields known
in BSEG are not known in the database table RFBLG, but are compressed in a VARDATA
field of RFBLG. So tests in the WHERE clause of SELECTs against BSEG are not used by
the database (e.g. lifnr = vendor account number, hkont = G/L account, kostl = cost center).
As a consequence, these tests are done after the fact, similar to using the CHECK statement,
and as already said in tip 1, CHECK statements are worse than tests in the WHERE-clause.
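A minimal sketch (the vendor number is a made-up value) : only the BSEG key fields can be
used by the database, while the test on lifnr is applied only after the cluster record has been
read and decompressed :
select * from bseg
where bukrs = '4150'
and gjahr = '1999'
and lifnr = '0000100123'.     "not pushed down to the database
endselect.
Whenever possible, use a secondary index table such as BSIK (open vendor items) instead,
where lifnr is part of a real database index.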
7) Only SELECT the columns really needed instead of transferring all columns with SELECT *.
All fields are transferred over the network with SELECT * although maybe only a few are needed.
You can reduce this network overhead by explicitly naming the columns really needed.
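A minimal sketch (the table, fields and values are only an example) :
select * from bkpf where bukrs = '4150' and gjahr = '1999'.
endselect.
should become
select bukrs belnr gjahr budat from bkpf
into (bkpf-bukrs, bkpf-belnr, bkpf-gjahr, bkpf-budat)
where bukrs = '4150' and gjahr = '1999'.
endselect.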
8) Make sure the users will use the program properly by including the available selection options.
Developers can guide the user in the proper direction by using the options DEFAULT and
OBLIGATORY on PARAMETERS statements and the options DEFAULT, OBLIGATORY,
NO-EXTENSION and NO INTERVALS on the SELECT-OPTIONS statements.
The SAP environment should be considered as an OLTP environment and not as some kind
of data warehousing environment in which the user can ask for whatever they want.
The developer should really ask whether it should be allowed to run the programs without
tests on the most important fields such as company code, ledger, plant, fiscal year, etc.
If the user does not fill in a good test on these fields, the performance could be terrible !
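A minimal sketch of such a selection screen (the field names and default values are only
examples) :
tables: bkpf.
parameters:     p_bukrs like bkpf-bukrs default '4150' obligatory,
                p_gjahr like bkpf-gjahr default '1999' obligatory.
select-options: s_blart for bkpf-blart default 'EM'
                        obligatory no-extension no intervals.
This way the user cannot start the program without a company code and a fiscal year, and
can only enter a single document type (no intervals, no multiple selections).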
9) Only use an ORDER BY in your SELECT if the order matches the index which should be used.
If the order you need is different from the order of the fields in the best index, do not use
ORDER BY in the SELECT but read the data rows into an internal table and sort the itab.
If the best index contains the fields in the same order as the one you want for the output,
then you can use the ORDER BY in the SELECT statement (and avoid sorting an itab).
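A minimal sketch (assuming itab has the fields vbeln, posnr and matnr in that order, the
output must be sorted by material, and the index used on VBAP is on vbeln) :
select vbeln posnr matnr into table itab from vbap
where vbeln between '0000000000' and '0000001000'.
sort itab by matnr.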
10) Avoid reading the same data rows over and over again from the database - read data once in itab.
In some cases, the same SELECT is run repeatedly, each time reading and returning the
same data rows, especially when using the nested loop construct. Avoid this by reading the
data once into an internal table at the beginning and working with this itab afterwards.
It is much faster to read each time from an internal table held in memory (possibly using
a binary search) than to run a SELECT against the database over and over again.
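A minimal sketch (the imakt internal table and the material number are made up for this
example) : read the material descriptions once and look them up in memory afterwards,
instead of selecting from MAKT inside a loop :
data: begin of imakt occurs 0,
        matnr like makt-matnr,
        maktx like makt-maktx,
      end of imakt.
select matnr maktx from makt into table imakt where spras = 'E'.
sort imakt by matnr.
read table imakt with key matnr = '000000000000100123' binary search.
if sy-subrc = 0.
  write: / imakt-matnr, imakt-maktx.
endif.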
11) Avoid complex tests in the WHERE clause of SELECTs as these can confuse the db optimizer.
It is a fact that complex WHERE clauses often confuse the optimizers of relational DBMSs.
This is true for all RDBMSs, including the one we are using for our SAP systems (Informix).
Some of the things which should be avoided in the WHERE clause of SELECTs are :
- OR-constructs : Never include OR-tests (for important fields) in your SELECTs !
Example of SELECT for which an alternative solution must be found :
select * from bsik
where xblnr in xblnr
and budat in date
and ( lifnr between 'PD0000' and 'PD9999' OR
lifnr between 'PR0000' and 'PR9999' ).
Use the “IN-construct” instead of the “OR-construct”, whenever possible !
select * from zzrefa1
where ( rldnr = 'C1' or rldnr = 'S1' ) and
bukrs = '4150' and ryear = '1999' and poper = '003'
should become
select * from zzrefa1
where rldnr in ( 'C1' , 'S1' ) and
bukrs = '4150' and ryear = '1999' and poper = '003'
IMPORTANT REMARK RELATED TO THE “OR-CONSTRUCT” :
Keep in mind that “OR-tests” can also be generated when the users fill
in complex combinations (e.g. multiple range-tests for a given field).
You can prevent users from entering difficult combinations by using the options of
the SELECT-OPTIONS statement which were mentioned earlier (cf. tip 8).
- NOT-conditions in a SELECT statement can not be processed via an index.
- Do NOT use the “>= AND <= construct”, but use the BETWEEN construct instead.
The results will of course be identical, performance will however be improved !
select * from tab where field >= 'val1' and field <= 'val2'
should become
select * from tab where field between 'val1' and 'val2'
Always check whether the database access path (index) chosen by the cost-based Informix optimizer is the most efficient one, by running an SQL trace in one of our QA environments.
12) Use aggregate functions for calculations/summations in the SELECT (whenever possible).
If you want to find the minimum/maximum/average value or sum of a db column or want to
find the number of rows (COUNT aggregate function), use a SELECT list with aggregate
functions instead of computing the aggregates yourself. When you do the calculations in
your program, you need to read all data rows from the db into the program (over the network),
causing a considerable load on the network.
You can of course combine these aggregate functions with the GROUP BY clause to find
the minimum/maximum/average/sum/rowcount, grouped by one or more table columns.
The following SELECT will list the number of accounting document headers (rows in BKPF)
with document type “EM” for FY1999, for each of the company codes :
select mandt bukrs count( * )
into (bkpf-mandt, bkpf-bukrs, cnt)      "cnt : any integer work field
from bkpf
where blart = 'EM' and gjahr = '1999'
group by mandt bukrs.
endselect.
In SAP 4.x, it is even possible to use the functions combined with GROUP BY - HAVING.
The following SELECT is similar to the previous one, but only the companies for which there
are more than 10000 accounting documents of type “EM” for FY1999 will be listed, again
reducing the network load and also avoiding additional coding in the ABAP/4 program.
select mandt bukrs count( * )
into (bkpf-mandt, bkpf-bukrs, cnt)
from bkpf
where blart = 'EM' and gjahr = '1999'
group by mandt bukrs having count( * ) > 10000.
endselect.