More Audit Journal table functions added @ RPGPGM.COM

As part of the last Technology Refresh, four SQL table functions were added that allowed me to extract certain journal entry types from the system audit journal. In the latest Technology Refreshes, IBM i 7.4 TR5 and 7.3 TR11, twelve new table functions were introduced to extract other journal entry types. Seven of them are:

Command string (CD)
Create object (CO)
User profile changes (CP)
Delete operations (DO)
Environment variable (EV)
Generic record (GR)
Action to system value (SV)

As well as those seven there are five for the Db2 Mirror product. Alas, I cannot show examples of those as I do not have access to an IBM i partition that is using Db2 Mirror.

Db2 Mirror setup tools (M0)
Db2 Mirror communication services (M6)
Db2 Mirror replication services (M7)
Db2 Mirror product services (M8)
Db2 Mirror replication state (M9)

All of these table functions have the same parameters and the same set of common result columns as the previously released table functions. Rather than repeat myself I am going to refer you to the appropriate part of that post, here.

Do notice that, as with the earlier table functions, these also reside in the SYSTOOLS library.
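As a reminder of that common interface, here is a minimal sketch of the shared calling pattern. The STARTING_TIMESTAMP parameter is used throughout this post; I believe an ENDING_TIMESTAMP parameter is accepted as well, but treat that as an assumption and check the documentation for your release:

-- The same pattern works for any of the table functions listed above;
-- swap AUDIT_JOURNAL_CD for _CO, _CP, _DO, _EV, _GR, or _SV
SELECT ENTRY_TIMESTAMP, QUALIFIED_JOB_NAME, ENTRY_TYPE_DETAIL
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CD(
         STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 24 HOURS))
 ORDER BY ENTRY_TIMESTAMP DESC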

I can only give full examples of the table functions if the data is being gathered by the audit journal, and on the partition I use for writing these posts only certain information is gathered. I can identify which types of data will be written to the audit journal by looking at the QAUDLVL and QAUDLVL2 system values. I could use the Display System Value command, DSPSYSVAL, but where is the fun in that when I can get the system value data using SQL:

SELECT SYSTEM_VALUE_NAME AS "Name",
       CURRENT_CHARACTER_VALUE AS "Values"
  FROM QSYS2.SYSTEM_VALUE_INFO
 WHERE SYSTEM_VALUE_NAME LIKE 'QAUDLVL%'

This returns to me:

Name       Values
--------   --------------------------------------
QAUDLVL    *AUTFAIL *SAVRST *SYSMGT *PGMFAIL
QAUDLVL2   *NONE

Those values mean:

*AUTFAIL:  Authority failure events are logged.
*SAVRST:  Save and restore operations are logged.
*SYSMGT:  Use of systems management functions is logged.
*PGMFAIL:  System integrity violations are logged.

This means that not all of the journal entry types are being written to the system audit journal, which makes it difficult for me to write about what I cannot see. Therefore, I will give the SQL statement and example results for the entry types I can retrieve information for. For the others I will just give the basic statement that you can copy and try on your own IBM i partition.

Command string journal entries (CD)

The "CD" journal entries are for command strings that have been run on your partition. These are retrieved from the audit journal using the AUDIT_JOURNAL_CD table function. In this case I am only interested in the most recent journal entries:

01 SELECT QUALIFIED_JOB_NAME,OBJECT_LIBRARY,OBJECT_NAME,OBJECT_TYPE,
02 ENTRY_TYPE_DETAIL,WHERE_RUN_DETAIL,COMMAND_STRING
03 FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CD(
04 STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 4 HOURS))
05 ORDER BY ENTRY_TIMESTAMP DESC
06 LIMIT 5

Lines 1 and 2: I did not want all of the columns from AUDIT_JOURNAL_CD, just the following:

QUALIFIED_JOB_NAME:  What I would call the full job name
OBJECT_LIBRARY, OBJECT_NAME, OBJECT_TYPE:  Exactly what the column names say they are
ENTRY_TYPE_DETAIL:  I could have used the ENTRY_TYPE column, but that is only one character, and without the mapping it is not self-evident what the types mean. ENTRY_TYPE_DETAIL is the description of the ENTRY_TYPE
WHERE_RUN_DETAIL:  The description of the WHERE_RUN column, which only returns a single character. In my example results some of these values end with '...' as the data is wider than I can show here
COMMAND_STRING:  The actual command string. If no command string was captured the result is null

Lines 3 and 4: This is the definition of the table function. I am using the STARTING_TIMESTAMP parameter to limit the results to anything that happened within the last four hours.

Line 5: I want to order the results so the most recent is the first displayed.

Line 6: I only want five results, therefore, I use the LIMIT clause.

My results are as follows:

QUALIFIED_JOB_NAME       OBJECT_  OBJECT_    OBJECT_  ENTRY_
                         LIBRARY  NAME       TYPE     TYPE_DETAIL
-----------------------  -------  ---------  -------  -------------
420651/QUSER/QZRCSRVS    QSYS     CHGJOB     *CMD     Command run
409494/QSECOFR/##SECDTA  QSYS     FTP        *CMD     Proxy command
429323/SIMON/QPADEV0003  QPDA     WRKMBRPDM  *CMD     Command run
429323/SIMON/QPADEV0003  QSYS     WRKMBRPDM  *CMD     Proxy command
429323/SIMON/QPADEV0003  MYLIB    WM         *CMD     Proxy command

WHERE_RUN_DETAIL
-------------------------------------------------------------
The command string was passed as a parameter to one of the...
From a compiled OPM CL program or an ILE CL program
Interactively from a command line or by choosing a menu op...
Interactively from a command line or by choosing a menu op...
Interactively from a command line or by choosing a menu op...

COMMAND_STRING
---------------------------------------------------
CHGJOB JOB(370597/QUSER/QZDASOINIT) INQMSGRPY(*DFT)
<NULL>
WM FILE(DEVSRC)
<NULL>
<NULL>

The last three results all relate to my proxy command, WM, which I use in place of the WRKMBRPDM command.
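If I only care about entries that captured an actual command string, a simple refinement of the statement above (my own sketch, not from the original examples) is to filter out the null rows:

SELECT QUALIFIED_JOB_NAME, ENTRY_TYPE_DETAIL, COMMAND_STRING
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CD(
         STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 4 HOURS))
 WHERE COMMAND_STRING IS NOT NULL  -- exclude entries with no command string
 ORDER BY ENTRY_TIMESTAMP DESC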

Create object journal entries (CO)

There are no journal entries of this type on my partition, so I can only give the basic statement:

SELECT *
FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CO())
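If object creation is being audited on your partition, a sketch like the following, using only the STARTING_TIMESTAMP parameter and the common columns described earlier, should be a sensible starting point. The same pattern applies to the CP and EV table functions below:

SELECT ENTRY_TIMESTAMP, QUALIFIED_JOB_NAME, ENTRY_TYPE_DETAIL
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CO(
         STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 1 MONTH))
 ORDER BY ENTRY_TIMESTAMP DESC
 LIMIT 10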

User profile changes journal entries (CP)

There are no journal entries of this type on my partition, so I can only give the basic statement:

SELECT *
FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_CP())

Delete operations journal entries (DO)

The AUDIT_JOURNAL_DO table function returns a list of objects deleted from the IBM i partition. I have found that I need to give a starting date and time for the table function to return any results; if I omit all of the parameters no results are returned.

My statement looks like:

01 SELECT OBJECT_LIBRARY,OBJECT_NAME,OBJECT_TYPE,OBJECT_ATTRIBUTE,
02 ENTRY_TYPE_DETAIL,
03 QUALIFIED_JOB_NAME,ENTRY_TIMESTAMP
04 FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_DO(
05 STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 1 MONTH))
06 ORDER BY ENTRY_TIMESTAMP DESC
07 LIMIT 10

Lines 1 – 3: The columns I want returned.

OBJECT_LIBRARY:  Library name
OBJECT_NAME:  Object name
OBJECT_TYPE:  Type of object
OBJECT_ATTRIBUTE:  The object's attribute if appropriate, otherwise null
ENTRY_TYPE_DETAIL:  Description of the entry type
QUALIFIED_JOB_NAME:  Full job name
ENTRY_TIMESTAMP:  Date and time the deletion happened

Lines 4 and 5: The definition of the AUDIT_JOURNAL_DO table function; as mentioned above, I need to pass the starting date and time for it to return results.

Line 6: Sort the results by the deletion timestamp, with the most recent coming first.

Line 7: As this is only a test I just want ten results returned.

My returned results are as follows:

OBJECT_    OBJECT_     OBJECT_  OBJECT_
LIBRARY    NAME        TYPE     ATTRIBUTE
---------  ----------  -------  ---------
QGPL       SYSLOADUPD  *CMD     <NULL>
#SYSLOADX  SYSLOADUPC  *PGM     CLP
QGPL       SLUPDCMD    *FILE    PF-DTA
#SYSLOADX  GETSYSLOAR  *PGM     RPGLE
#SYSLOADX  SYSVALRTNS  *SRVPGM  RPGLE
#SYSLOADX  SYSVALRTNS  *MODULE  RPGLE
#SYSLOADX  #SYSLOADQ   *JOBQ    <NULL>
#SYSLOADX  OUTPUT      *OUTQ    <NULL>
#SYSLOADX  #SYSLOADJ   *JOBD    <NULL>
#SYSLOADX  #SYSLOAD    *CLS     <NULL>

ENTRY_TYPE_DETAIL
-------------------------------------------------
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)
Object was deleted (not under commitment control)

QUALIFIED_JOB_NAME         ENTRY_TIMESTAMP
-------------------------  --------------------------
382359/#SYSLOAD/INSTSL     2021-09-20 13:11:40.702192
382359/#SYSLOAD/INSTSL     2021-09-20 13:11:40.695072
382359/#SYSLOAD/INSTSL     2021-09-20 13:11:40.568640
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:08.421392
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.594736
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.578928
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.570112
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.563984
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.421232
382325/#SYSLOAD/SYSLOADUP  2021-09-20 13:11:07.417328

The results list the various types of objects that were deleted.
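If I wanted to watch deletions from a single library I could add a WHERE clause on the OBJECT_LIBRARY column. A sketch, using the library name from my results above:

SELECT OBJECT_NAME, OBJECT_TYPE, ENTRY_TIMESTAMP
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_DO(
         STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 1 MONTH))
 WHERE OBJECT_LIBRARY = '#SYSLOADX'  -- library name taken from the results above
 ORDER BY ENTRY_TIMESTAMP DESC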

Environment variable journal entries (EV)

There are no journal entries of this type on my partition, so I can only give the basic statement:

SELECT *
FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_EV())

Generic record journal entries (GR)

It is a bit strange to have “Generic record” or “General purpose” journal entries, but the operating system does create them. To find them I can use the AUDIT_JOURNAL_GR table function.

01 SELECT USER_NAME,QUALIFIED_JOB_NAME,PROGRAM_LIBRARY,PROGRAM_NAME,
02 ENTRY_TYPE_DETAIL,ACTION_DETAIL,
03 FUNCTION_REGISTRATION_OPERATION,FUNCTION_NAME
04 FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_GR())
05 ORDER BY ENTRY_TIMESTAMP DESC
06 LIMIT 5

Line 1: These columns all come from the common columns found in all of these audit journal table functions. You may wonder why I have included the user name when it is normally part of the qualified job name. You will have to be patient.

Lines 2 and 3: These are columns specific to the AUDIT_JOURNAL_GR table function:

ENTRY_TYPE_DETAIL:  Rather than use the single character ENTRY_TYPE column, I prefer this column, which contains its description
ACTION_DETAIL:  The ACTION column contains only a code. This column gives its description
FUNCTION_REGISTRATION_OPERATION:  When the ENTRY_TYPE is “F” this column will contain data. If it is any other entry type then the value will be null
FUNCTION_NAME:  If ENTRY_TYPE is “F” this column contains data, otherwise it is null

My results are:

USER_NAME  QUALIFIED_JOB_NAME       PROGRAM_  PROGRAM_
                                    LIBRARY   NAME
---------  -----------------------  --------  ----------
RPGPGM     370597/QUSER/QZDASOINIT  QSYS      QZDASOINIT
RPGPGM     370597/QUSER/QZDASOINIT  QSYS      QZDASOINIT
RPGPGM     370597/QUSER/QZDASOINIT  QSYS      QZDASOINIT
RPGPGM     370597/QUSER/QZDASOINIT  QSYS      QZDASOINIT
QSECOFR    424214/QSECOFR/##SECDTA  FB400     FTPSAVEBCH

                                  ACTION_  FUNCTION_REGISTRATION_
ENTRY_TYPE_DETAIL                 DETAIL   OPERATION
--------------------------------  -------  ----------------------
Function registration operations  Read     CHECK USAGE
Function registration operations  Read     CHECK USAGE
Function registration operations  Read     CHECK USAGE
Function registration operations  Read     CHECK USAGE
Function registration operations  Read     CHECK USAGE

FUNCTION_NAME
------------------------
QIBM_DB_ZDA
QIBM_DB_ZDA
QIBM_DB_ZDA
QIBM_DB_ZDA
QIBM_QTMF_CLIENT_REQ_10

The first four rows happened when I connected my "Run SQL Scripts" session to this partition, which is why the USER_NAME and the user name in the QUALIFIED_JOB_NAME are different.
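To see which registered functions are being checked most often I could group by FUNCTION_NAME. A sketch:

SELECT FUNCTION_NAME, COUNT(*) AS "Entries"
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_GR())
 WHERE FUNCTION_NAME IS NOT NULL  -- only "F" type entries carry a function name
 GROUP BY FUNCTION_NAME
 ORDER BY COUNT(*) DESC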

Action to system value journal entries (SV)

I am glad I can retrieve the entries for changes made to system values. I will be adding this to my list of things to check on a regular basis.

AUDIT_JOURNAL_SV is the table function I would use to do this. When writing this post I found that if I left the table function's parameters blank no results were returned, but if I used the STARTING_TIMESTAMP parameter results were returned.

This is the statement I used:

01 SELECT SYSTEM_VALUE,ENTRY_TYPE_DETAIL,OLD_VALUE,NEW_VALUE,
02 USER_NAME,QUALIFIED_JOB_NAME,ENTRY_TIMESTAMP
03 FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_SV(
04 STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 1 MONTH))
05 ORDER BY SYSTEM_VALUE,ENTRY_TIMESTAMP DESC

Lines 1 and 2: I decided to put what I would call the "interesting" information in the first columns, followed by the common ones.

SYSTEM_VALUE:  System value name. Or, rarely, some special values that you can find in the documentation
ENTRY_TYPE_DETAIL:  The ENTRY_TYPE is only one character, and without the key it is almost impossible to determine what it means. Therefore, I am including the column that contains the description for the entry type
OLD_VALUE:  Value in the system value before it was changed
NEW_VALUE:  Value in the system value after it was changed
USER_NAME:  I am including this in case the system value was changed in ACS’s “Run SQL Scripts”
QUALIFIED_JOB_NAME:  The full job name
ENTRY_TIMESTAMP:  When the change was made

Lines 3 and 4: The definition of the AUDIT_JOURNAL_SV table function, including the STARTING_TIMESTAMP parameter.

Line 5: I want the results returned to me sorted first by the system value, and then in reverse date and time order so that the newest entry for each system value comes first.

There were only two results in the desired date and time range:

SYSTEM_     ENTRY_TYPE_DETAIL        OLD_   NEW_
VALUE                                VALUE  VALUE
----------  -----------------------  -----  -----
QIPLDATTIM  Change to system values  *NONE  *NONE
QIPLDATTIM  Change to system values  *NONE  *NONE

USER_  QUALIFIED_JOB_NAME    ENTRY_TIMESTAMP
-----  --------------------  --------------------------
QPGMR  370600/QPGMR/QSYSSCD  2021-09-19 23:50:13.805024
QPGMR  293045/QPGMR/QSYSSCD  2021-09-17 14:17:03.472832

I think these results show that if I view a system value using the Change System Value command, CHGSYSVAL, or take option "2" in the Work with System Values command, WRKSYSVAL, an "SV" journal entry is recorded even when I do not change anything and just press Enter.
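Which suggests that if I only want the genuine changes, I can compare the old and new values. A sketch:

SELECT SYSTEM_VALUE, OLD_VALUE, NEW_VALUE,
       QUALIFIED_JOB_NAME, ENTRY_TIMESTAMP
  FROM TABLE(SYSTOOLS.AUDIT_JOURNAL_SV(
         STARTING_TIMESTAMP => CURRENT_TIMESTAMP - 1 MONTH))
 WHERE OLD_VALUE <> NEW_VALUE  -- keep only entries where the value actually changed
 ORDER BY ENTRY_TIMESTAMP DESC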

You can learn more about this from the IBM website:

Audit journal entry table functions
Audit journal entry types

This article was written for IBM i 7.4 TR5 and 7.3 TR11.

Introducing Ansible Automation Platform 2


For the last two years, the Red Hat Ansible Automation Platform product team has been hard at work developing the next major release. We are incredibly excited to introduce Red Hat Ansible Automation Platform 2, which was just announced at AnsibleFest 2021.

What’s new in Ansible Automation Platform 2?

The main focus was to enhance the foundational pieces of the Ansible Automation Platform and to enable automators to automate at enterprise scale more easily and flexibly. This means everything you know and love about writing Ansible Playbooks is largely unchanged, but what is evolving is the underlying implementation of how automation is developed, managed, and operated in large complex environments. In the end, enterprise automation platforms must be designed, packaged, and supported with container native and hybrid cloud environments in mind.

So how did we get here? It’s been years in the making, which included the following changes:

1. Ansible content was separated from the Ansible executable in the Ansible Project, creating a new construct called Ansible Content Collections to house Ansible modules, plugins, roles and more in a discrete and atomic form.

Much of the recent effort has been spent relocating the majority of Ansible content (modules, plugins) into standalone Ansible Collections developed and maintained separately from the Ansible open source project. The main benefit is that the updating of Ansible content is no longer dependent on updating the Ansible project itself, allowing for continuous and asynchronous releases of content while maintaining stable releases of the Ansible executable.

2. The control plane was separated from the execution plane in Ansible Tower, and these components were renamed automation controller and automation execution environments.

Ansible Tower was split into two components, automation controller (control plane) and automation execution environments (execution plane), in order to better scale and provide more predictable automation for enterprises. By splitting Tower into two components, execution can now run outside of the control node, which is more conducive to running your automation in hybrid cloud and container native environments such as Red Hat OpenShift. You'll also see additional features in the upcoming 2.1 release with a new component called automation mesh (think: a service mesh for Ansible), which replaces isolated nodes in Ansible Tower. This becomes more interesting by enabling new use cases such as automating at or to the edge as well as cloud automation.

3. New tools were created to better enable enterprise automation developers.

Developing Ansible content has largely been up to the individual for building and curating content. New tools such as automation content navigator (ansible-navigator) and execution environment builder (ansible-builder) allow for a more consistent experience for content developed on a workstation that's destined for an enterprise automation controller instance. This is made possible with automation execution environments, which are much more predictable, portable, and scalable than the traditional Python virtual environments used previously.

Ansible Automation Platform 2 introduces an improved architecture and a variety of new tools to scale your automation while still providing a familiar Ansible experience to your teams.  We want to provide you with all the information you need to get your automation teams up to speed on the new features and start developing your migration strategy (if applicable) to best prepare for the forthcoming 2.1 general availability anticipated for later this year. Over the next month, keep an eye on the Knowledgebase on Red Hat Customer Portal for all the latest on documentation, installation, migration and component deep dives.

Where do I go next?

As automation becomes more strategic to your business, so will the changes you make in the way you can adopt, manage and operate automation. Ansible Automation Platform 2 introduces an improved architecture and flexibility with automation controller and automation execution environments, along with a variety of new tools to scale your automation while still providing a familiar experience to your teams.  We want to ensure you have all the information you need to get your automation teams up to speed on the new features and start developing your migration strategy.

There are a number of resources available as you begin to explore Ansible Automation Platform 2:

To learn more about new features and components, check out the updated product overview page on ansible.com. You can also consult our new interactive features guide.

If you’re ready to get hands on, we have self-paced interactive labs available to explore right now.  

It’s not too late to register (for free!) for AnsibleFest 2021; you can follow along live on September 29th and 30th, or explore session content on-demand following the event. 

We also encourage you to register for our free upcoming webinar “Red Hat Ansible Automation Platform brings you a new way to automate,” which will be live on November 2 and available afterwards on demand.

How do I get more help?

If you are interested in hearing more about Ansible Automation Platform 2, please reach out to your Red Hat sales representative. In the meantime, you can log into the Red Hat Customer Portal for official resources around the launch, including migration considerations, getting started blogs, and official documentation which can be found at the Early Access page. (Note: a Red Hat subscription is required for access)

Still need assistance? Can’t find your Red Hat Sales representative? Contact Red Hat Technical support for additional information. Please refer to the official resources that will help you on your automation journey.

Keys to accelerating IT innovation: technology and collaboration

Technology is helping organizations to deliver on their mission and become smarter and more efficient to differentiate themselves from the competition. It is also helping to shape the world, as we depend on it to assist with basic demands such as food, living space, health, safety, mobility, communication, education, entertainment, as well as governmental and financial services. In parallel, we are living in an era of information technology in which we are still witnessing exponential efficiency gains in basic capabilities like compute, storage, and networking. Data is fueling innovation, and the true value of data has become clearer than ever.

Many outcomes of technology innovation start as small agile projects; some of them scale to large impact. There are so many ways to innovate that organizations often question where to start and how to approach it. Co-creation and design thinking have proven to be the right concepts for innovation because they focus on jointly understanding an area of business and implementing a change, doing something more intelligently and efficiently. Experts from different areas work together to design and implement a solution as a minimum viable product, which can be augmented and scaled over time to generate big impact.

It is important to have an understanding of the specific industry or industry opportunity in place. Many innovations are specific to an industry, but they can also disrupt existing business operations in that industry if, for example, a manual task is automated or an analog asset is digitalized.

It is not only about having experts from different areas working together; it is also key to have diversity in the agile teams. This includes diversity of age: collaboration between experienced baby boomers and early professionals from Generation Y or Z can work magic. It is valuable to nurture diversity, openness, and respect.

When it comes to technology decisions, the value comes from the ecosystem that is behind a solution. An open hybrid cloud and AI platform approach backed by open source is an incredible foundation, but the largest value is gained when this system is simultaneously backed by professional services and an ecosystem of additional software. Red Hat OpenShift plays a big role here because this open, secure, enterprise-ready Kubernetes platform enables teams to develop once and deploy everywhere. It avoids lock-in to a single technology stack, and it opens the doors to deployments in private data centers, at the edge, or in public cloud or colocation sites.

Data and AI will be ingredients of nearly every innovative solution. It is crucial to have the ability to use AI and to train AI models, but also to use the trained models at the edge, that is, close to the device where the data lives. The opportunity to apply AI models to existing or future data is huge.

But as data becomes more valuable, so too does the risk of cyber-attacks as hackers are eager to get their share of the value. In the 2021 X-Force Threat Intelligence Index report, based on billions of data points from clients and public sources, a major finding was that ransomware was the most popular attack method in 2020, making up 23% of all incidents IBM Security X-Force responded to and helped remediate. Innovation needs to go hand in hand with handling increasing cyber risks.

And the efficiency gains in technology might slow down in the decade to come. Information technology is increasingly approaching atomic scale, which will limit how much further compute, storage, and networking hardware can be scaled down to deliver more outcome with fewer resources. The future of computing will need to combine new computing concepts to create the systems needed to tackle challenges with more efficiency. This will include quantum computers, AI hardware, and optimized scaling for traditional semiconductors.

All these important elements of innovation will be highlighted at this year's IBM TechU, where I will be hosting the "Architecting the Future" track. Participants at this year's TechU will get a complete picture of the drivers and ingredients for innovating your business with IBM Z, LinuxONE, Power and Storage, and beyond, from a range of technical leaders. Eight hosted live sessions will inspire you about what's next.

>> Click here to learn more about IBM TechU.

