Practical SQL: Journal Forensics Using SQL

Journals provide a lot of data about IBM i file operations, and the DISPLAY_JOURNAL table function lets you mine that data.
I’ve talked a lot about using SQL to analyze data. But did you know that you can use SQL and journals together to analyze your programs as well? The table function DISPLAY_JOURNAL makes it easy to identify exactly which programs are updating files in your library.
Unlocking an Application’s Architecture
IBM i applications tend to be highly integrated, which typically means that the application is supported by lots of files, which are in turn managed by many programs. Also, IBM i applications tend to have very long lifespans, measured in decades, not years. And while the applications themselves may have gone through many upgrades, the specific version you’re working on may no longer be supported. Alternatively, you may find yourself working on a completely custom system whose original architects are no longer available. In either case, enhancements or even fixes to these applications can be difficult because a change in one program may affect another program in ways that are not immediately evident.
Products such as Hawkeye’s PATHFiNDER and X-Analysis from Fresche Solutions are designed to support that sort of analysis, and while they’re quite powerful they’re typically not cheap. On the other hand, you can do the analysis yourself. While there is no silver bullet for self-directed analysis, hopefully the technique I’m about to describe can help.
Identifying the Goal
What I’ve found over the years is that one of the most common tasks in IBM i application analysis is finding out which database files get updated by a specific task. This could range from something as simple as maintaining a master record to something more complex, such as entering an order. In either case, you need to identify all the files that are getting updated. Even in the simpler file maintenance scenario, entering data on a single panel might update multiple files. In a real-world scenario, we might want to perform a mass update on some master data, but if we don’t know what all the side effects are, just doing an SQL update on one file might leave other files out of sync.
The Setup
We’re going to use journals to perform this analysis. This process has one absolute prerequisite: all of the files you are investigating must have journaling enabled. And for ease of processing, it’s best if the files are all being journaled to the same journal. While that’s not strictly necessary, it definitely makes the process less complicated. We’re going to assume all files are in the same journal for this discussion. In fact, the journal will be called MYJRN in MYLIB, because that’s easy to remember.
I’m assuming that you understand how to set up journals and journal receivers; they’re really critical to a lot of IBM i processing. If you’re not familiar with them, you can start with this IBM article. I’ll also have a follow-up article with a utility to enable and disable journaling for all the files in a library; that will help those of us who aren’t currently taking advantage of journaling.
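As a quick refresher, a minimal setup looks something like the following CL, using the MYLIB/MYJRN names from this article. The receiver name and the IMAGES option are illustrative choices, not requirements; adapt them to your shop's conventions.

```
/* Create a journal receiver, then the journal attached to it */
CRTJRNRCV JRNRCV(MYLIB/MYJRNRCV)
CRTJRN    JRN(MYLIB/MYJRN) JRNRCV(MYLIB/MYJRNRCV)

/* Start journaling the physical file; *BOTH captures before and after images */
STRJRNPF  FILE(MYLIB/CUSMAS) JRN(MYLIB/MYJRN) IMAGES(*BOTH)
```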
The Execution
There are two basic steps to this process: execute the business activity, and analyze the results. As long as everything is set up correctly, it’s not that difficult. Let’s address a couple of practical issues right off the bat, though. First is the environment. Are you working in a test environment, or are you in production? While this analysis can be done in production, and in certain cases must be done there, I highly recommend you do your testing in development for a number of reasons, not the least of which is that you can’t damage production data! A second practical issue that can be harder to deal with is whether you are running interactively or in batch. While it’s not a showstopper by any means, it is a complication that I’ll show you how to address.
Let’s say we want to update the remit-to address for a customer. Finance would like us to do this automatically for a large number of customers, so manually keying this isn’t an option. We might be able to use some sort of keyboard macro; the old iSeries Access for Windows actually does this through Visual Basic and allows you to read and update a spreadsheet. You might even see an article about that in the future. But assuming that’s not an option, we’re going to want to do this using some sort of SQL update, but first we want to make sure we know exactly what needs to be updated.
We’re ready to execute the process, but before we do, we’re going to do a little bit of housekeeping. We’re going to send a special marker to the journal that we can use later to bracket our query. This is less of an issue in a lower volume environment, but it can be critical on a production machine. In any case, we’ll use the SNDJRNE command to send this marker.
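The opening marker might look like this; the entry text is a placeholder of my own choosing, and TYPE(00) matches the type 00 entries we query for later:

```
SNDJRNE JRN(MYLIB/MYJRN) TYPE(00) ENTDTA('CUSMAS UPDATE TEST - START')
```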
This sends an entry to the journal that we can find later. The text can be whatever we want, and in a more involved process we might send multiple entries identifying different steps in the process. Next, we just execute the process, and finally we send the closing bracket:
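The closing marker is the same command with different entry data; again, the text itself is just an illustrative placeholder:

```
SNDJRNE JRN(MYLIB/MYJRN) TYPE(00) ENTDTA('CUSMAS UPDATE TEST - END')
```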
Time to See the Results
This is also a two-step process. The two steps allow us to winnow out any extraneous noise from other jobs, something that’s required in a production environment. In a low-volume test environment you may be able to skip some of this, but we should review the whole process so that you have the tools available.
First, we’re going to get our bracket information. You can use time and date stamps, but I find sequence numbers to be more precise. Remember that we sent entries to the journal with type 00; the following SQL query shows those:
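A query along these lines will find the markers; this is a sketch assuming the MYLIB/MYJRN journal from earlier, and it uses the JOURNAL_ENTRY_TYPES parameter to limit the result to our type 00 user entries:

```sql
SELECT SEQUENCE_NUMBER, ENTRY_TIMESTAMP,
       JOURNAL_CODE, JOURNAL_ENTRY_TYPE
  FROM TABLE(QSYS2.DISPLAY_JOURNAL(
         'MYLIB', 'MYJRN',
         JOURNAL_ENTRY_TYPES => '00'))
 ORDER BY SEQUENCE_NUMBER;
```

The two sequence numbers returned become the brackets for the analysis query that follows.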

Figure 1: Retrieve sequence numbers
This gives you the sequence numbers to use to bracket the next statement, which is the actual analysis:
SELECT OBJECT, JOURNAL_CODE, JOURNAL_ENTRY_TYPE, COUNT(*)
  FROM TABLE(QSYS2.DISPLAY_JOURNAL(
         'MYLIB', 'MYJRN',
         STARTING_SEQUENCE => 38603413,
         ENDING_SEQUENCE => 38603458))
 WHERE JOB_NUMBER = 491213
 GROUP BY OBJECT, JOURNAL_CODE, JOURNAL_ENTRY_TYPE
 ORDER BY 1,2,3;
This statement groups all of the journal entries between the two bracket entries by object and journal entry type; this details the database operations being performed. Note that I included WHERE JOB_NUMBER = 491213. This is the job number of the interactive job where I ran the customer maintenance. In a busy environment, that lets you focus on just the process you are running yourself. It can be a little trickier with submitted jobs; you have to get the job number of the submitted job, but the concept is the same.
Anyway, at this point you can now see what happened:

Figure 2: Retrieve objects
Looking at this, you’ll notice a couple of things. First, the OBJECT field is 30 characters long. This is the file name, library, and member of the file being updated. Typically, when I do this, I substring out the file name and library. Second, though, is the interesting piece. Sure, I expected one record to get updated in the customer master (CUSMAS), but what’s with the INVHDR updates? My initial guess would be that the maintenance program goes out and updates the remit-to address on any open invoices. I could be wrong, but we’d figure that out once we opened up the INVSYNR program. And more importantly, this would be something we’d want to mimic if we did indeed go with an SQL approach to updating the customer header. And last but not least, you can see the two 00 entries; those are my brackets, and it helps show that I got the correct sequence range in my Select statement.
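Extracting the individual names from OBJECT is a simple substring exercise. This sketch assumes the file/library/member layout described above, with each name in a 10-character slot; verify the column layout against the DISPLAY_JOURNAL documentation for your release:

```sql
SELECT SUBSTR(OBJECT, 1, 10)  AS FILE_NAME,
       SUBSTR(OBJECT, 11, 10) AS LIBRARY_NAME,
       SUBSTR(OBJECT, 21, 10) AS MEMBER_NAME,
       JOURNAL_ENTRY_TYPE, COUNT(*)
  FROM TABLE(QSYS2.DISPLAY_JOURNAL(
         'MYLIB', 'MYJRN',
         STARTING_SEQUENCE => 38603413,
         ENDING_SEQUENCE => 38603458))
 GROUP BY OBJECT, JOURNAL_ENTRY_TYPE
 ORDER BY 1, 2, 3;
```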
This is a first pass at learning how to analyze an application using forensic analysis of the database operations. While not exhaustive, it can provide a solid starting point when you need to know exactly how an application works. Journals are a really powerful tool, and we’ll do more with them later.
