SQL is a great data access tool – in most cases

SQL is a very powerful tool and at present probably the preferred method for accessing data. It has been improved continuously over the years, and many tests show that SQL – when it selects the optimal access method – often offers unparalleled efficiency. However, if SQL is used for all types of I/O, the resulting performance will in some cases be worse. This article highlights the results of several tests that clearly indicate a common situation where SQL is not the optimal method and other options should be employed.

Commercial applications spend on average 80 % of all resources on I/O, and because QDBGETKY – the operating system's read-record-by-key routine – seems to be the program most often found active in call stacks, I decided to test alternative ways of accessing one record by key. The only difference between the variations is the data access method. I ended up with 16 solutions, where the programs using SQL topped the chart as least efficient. Each test run reads a customer database ten million times, using a generated “random” customer number as the key.
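
For illustration, a minimal sketch of such a test driver in free-format RPGLE is shown below. The file size constant MAX_CUST and the use of the C runtime's rand() function (reached through binding directory QC2LE) are my assumptions for the example; the actual test programs differed only in the access method placed inside the loop.

    **free
    ctl-opt bnddir('QC2LE');
    // Test driver skeleton: ten million keyed accesses using a
    // pseudo-random customer number. rand() is the C runtime function,
    // reachable from RPGLE through binding directory QC2LE.
    dcl-pr rand int(10) extproc('rand') end-pr;

    dcl-c MAX_CUST 100000;            // assumed number of customers
    dcl-s custNo packed(9: 0);
    dcl-s i int(10);

    for i = 1 to 10000000;
       custNo = %rem(rand(): MAX_CUST) + 1;   // "random" key in 1..MAX_CUST
       // ... access one record by custNo with the method under test ...
    endfor;
    *inlr = *on;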

The test environment was a Power 720 (8202-E4D) running IBM i 7.3, where the only other active application was our own automatic performance analysis software product, GiAPA, which assisted in documenting the results.

Since the data accessed remained in main memory during the tests, there was no I/O wait time caused by physical reads from disk.

The SQL alternatives were tested using a traditional DDS-defined file, using an SQL-defined table as input, and after creating an index that allowed SQL to select the “index only” access method. These variations resulted in less than two percent performance difference and are therefore not documented here.

When SQL performance is unsatisfactory, the typical comment from SQL gurus is that the application is probably designed incorrectly – and in most cases they are right. The exception is when records must be read one by one in random sequence and the keys used for the access do not originate from another file. The typical example is customers calling a support center, where it is unknown which customer will call next.

Test Results: Keyed access of records one by one

The traditional – or old-fashioned – way of accessing customer data in these cases is the CHAIN operation code in RPGLE or COBOL's READ of a file defined with “ORGANIZATION INDEXED”. The two perform equally well. The result of using RPG's CHAIN is shown as Test 1 in the table below.
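
As a point of reference, the Test 1 approach boils down to a single keyed read. A minimal free-format sketch, assuming a customer file CUSTF keyed on a packed customer number:

    **free
    // One keyed read against a hypothetical customer file CUSTF,
    // keyed on the customer number.
    dcl-f CUSTF keyed usage(*input);
    dcl-s searchNo packed(9: 0);

    searchNo = 4711;          // key supplied by the test driver
    chain searchNo CUSTF;     // single random read by key
    if %found(CUSTF);
       // the record's fields are now available for use
    endif;
    *inlr = *on;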

Test 2 showed that a simple FETCH provides the best SQL performance, using 3.6 times more CPU and run time than RPG's CHAIN. Test 3 was slightly slower, despite selecting only the four fields wanted. Adding a PREPARE caused Test 4 to consume over five times the resources used by RPG.
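
For comparison, the single-row SQL access of Tests 2 to 4 corresponds in essence to the sketch below, here naming only the four columns wanted, as in Test 3. The table and column names are again assumptions for illustration:

    **free
    // One row fetched by key with embedded SQL.
    dcl-s searchNo packed(9: 0);
    dcl-s name    char(40);
    dcl-s street  char(40);
    dcl-s city    char(40);
    dcl-s balance packed(11: 2);

    searchNo = 4711;
    exec sql SELECT NAME, STREET, CITY, BALANCE
               INTO :name, :street, :city, :balance
               FROM CUSTF
              WHERE CUSTNO = :searchNo;
    if sqlcode = 0;
       // row found
    endif;
    *inlr = *on;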

SQL's poor performance in this case should not come as a surprise, since SQL was never intended for accessing records one by one. Each time SQL is called, it will try to find the best access method to use. This results in superior performance when a lot of data is accessed, but adds overhead when fetching just one record.

Being interested in application performance, I also want to highlight alternatives that are significantly more efficient than the standard read by key. Tests 5 and 6 used untraditional “access methods”, which turned out to be 4 to 6 times faster than RPG's CHAIN, i.e. they outcompeted SQL by a factor of between 14 and 22.

For Test 5 the data needed – the customer number as key plus the four fields wanted – was loaded into a user index, which is then accessed instead of the standard customer database. This provides two significant advantages:

1. CPU usage and run time are very much reduced.

2. The data occupies less space in main memory.

The user index performance was four times better than the traditional read, and the solution makes a significant amount of main memory available for other jobs. Power i's advanced storage management keeps frequently used data in memory; however, if we access the customer database itself, the entire records/rows – including all the columns/fields not required for this operation – must be kept in memory to avoid disk I/Os.

A user index is a permanent object that can be saved and restored. It can be accessed using IBM APIs; however, the operations available through the C function library are more efficient. It is easy to use, and we are happy to demonstrate how this powerful option works: download https://www.giapa.com/usridxcode.zip, which contains a save file with RPGLE source code demonstrating how to create, load, access, and delete a user index.

The user index must of course contain up-to-date data and could simply be reloaded before each use. If the table/file is accessed from many jobs, a better solution is a trigger program that updates the user index whenever the database is updated. In many cases, updates to files/tables read hundreds of millions of times are very rare. For example, a product database is typically accessed by many applications, but prices etc. are not adjusted every day.

One argument for using SQL for all I/Os is that SQL allows the layout of a database to be modified without recompiling the programs that use it. This is a true and valid argument, but not a showstopper if we use SQL to access the database when the user index is loaded.
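
A rough sketch of that combination is shown below: SQL reads the customer table (keeping the layout independence), and each row is inserted into the user index as a fixed-length entry through the Add User Index Entries (QUSADDUI) API. The index, table, and column names are assumptions, error handling is omitted, and the downloadable example mentioned above shows the complete, tested pattern:

    **free
    // Load a user index from the customer table: SQL provides layout
    // independence, QUSADDUI fills the index with fixed-length entries.
    dcl-pr QUSADDUI extpgm('QUSADDUI');
       numAdded   int(10);                // number of entries added (output)
       indexName  char(20) const;         // qualified user index name
       insertType int(10) const;          // 2 = insert with replacement
       entries    char(32767) const options(*varsize);
       entryLen   int(10) const;          // length of one entry
       numEntries int(10) const;          // entries per call (1 here)
    end-pr;

    dcl-ds entry qualified;               // one fixed-length index entry
       custNo  packed(9: 0);              // key part
       name    char(40);                  // data part: the four fields needed
       street  char(40);
       city    char(40);
       balance packed(11: 2);
    end-ds;

    dcl-s added int(10);

    exec sql DECLARE custCsr CURSOR FOR
             SELECT CUSTNO, NAME, STREET, CITY, BALANCE FROM CUSTF;
    exec sql OPEN custCsr;
    exec sql FETCH custCsr INTO :entry;
    dow sqlcode = 0;
       // assumed index CUSTIDX in library MYLIB, created beforehand
       // with the Create User Index (QUSCRTUI) API
       QUSADDUI(added: 'CUSTIDX   MYLIB     ': 2:
                entry: %size(entry): 1);
       exec sql FETCH custCsr INTO :entry;
    enddo;
    exec sql CLOSE custCsr;
    *inlr = *on;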

Test 6 used a standard binary table lookup, where the fields needed were loaded into a user space. Compared with memory allocated within the program, a user space has the advantage that it can be used simultaneously by many applications. It performed 38 % better than the user index. A drawback is that it is not easy to update while in use.
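
To illustrate the principle of Test 6, the sketch below uses RPG's %LOOKUP built-in function, which performs a binary search when the array is declared with a sequence keyword. Array names and sizes are assumed, and the table is kept in program storage for brevity, whereas the test placed it in a user space so that many jobs could share it:

    **free
    // Binary search in a sorted key array; a parallel array holds the data.
    // %LOOKUP uses a binary search because the array is declared ASCEND.
    dcl-c MAX_CUST 100000;
    dcl-s keys  packed(9: 0) dim(MAX_CUST) ascend;
    dcl-s names char(40) dim(MAX_CUST);
    dcl-s i int(10);

    // ... keys/names are assumed loaded in ascending key sequence ...

    i = %lookup(4711: keys);
    if i > 0;
       // names(i) etc. belong to customer 4711
    endif;
    *inlr = *on;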

Test 7 was the absolute winner, using a method that unfortunately cannot be applied very often: it requires that the program, based on the key field(s), can compute a value that addresses each individual entry in the array containing the data needed. Here it worked wonders, because the keys of our “customer data” were generated customer numbers ascending from 1, i.e. they could be used directly as the index for the access. The resulting performance is incredibly fast – slightly more than three seconds elapsed to fetch 10,000,000 “records”.
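
The corresponding sketch of the Test 7 principle needs no search at all, because the customer number itself can serve as the array index (names and sizes again assumed):

    **free
    // Direct addressing: customer numbers run from 1 upward, so the key
    // is itself the index of the wanted array entry. One indexed move,
    // no search.
    dcl-c MAX_CUST 100000;
    dcl-ds cust qualified dim(MAX_CUST);
       name    char(40);
       balance packed(11: 2);
    end-ds;
    dcl-s searchNo int(10);

    // ... cust(n) is assumed loaded so that entry n belongs to customer n ...

    searchNo = 4711;
    if searchNo >= 1 and searchNo <= MAX_CUST;
       // cust(searchNo).name etc. are the wanted data
    endif;
    *inlr = *on;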

Although this kind of hash table cannot be used very often, its efficiency makes it worth mentioning. We use it heavily in our software product for analyzing application performance, and it is the main reason why our GiAPA software can collect performance data for all jobs and tasks running on a server – and still use less than 0.1 percent of the CPU.

Is Optimization Worth the Effort?

Optimizing application performance is often thought to be a complex and time-consuming task requiring assistance from expensive external experts. It may be a challenge to pinpoint what to modify, but in most cases the required change is in fact minimal, because only one or a few statements need modification. That is also the case here, where only the keyed access is replaced, while the main – and possibly complex – logic of the application remains untouched. The take-home message is to avoid using SQL when accessing a large number of records one by one. Instead, we recommend that you employ one of the suggestions above – and if in doubt, please reach out to us to solve your current performance issue.

Power i performance hints by Kaare

iPerformance ApS, Denmark

[email protected]

#ibm #AS400 #iseries #ibmi #ibmpower #sql #giapa 
