AAG – Nagios Monitoring for IBM i – explained in simple terms with Q&A! linkedin.com/pulse/aag-nagi… #ibmi #highavailability #nagios #monitoring @shieldadca pic.twitter.com/78zYDaeef9
– Alison Hird (@AHird)08:20 – Jan 06, 2022
Register to join us next week: Cloud Backup and Disaster Recovery for #IBMi Webinar register.gotowebinar.com/register/32100… pic.twitter.com/sB3EjVZEIq
– Datanational (@Datanational)07:11 – Jan 06, 2022
Just 4 weeks to go until i-UG Conference 2022. Have you signed up yet? Register here bit.ly/31wTvi7
#IBMi #techconference #educate #iug pic.twitter.com/ERQrnhafQm
– i-UG IBM i UserGroup (@i_ug_uk)05:31 – Jan 06, 2022
Is moving pre-HANA systems causing road rage? Find out why cloud migration is one of the speediest, most picturesque drives organizations can take on the way to their SAP HANA destination.
Close your eyes and picture this: you are cruising by a clear, blue ocean, verdant tropical rainforests, and the shining sun is warming your face – it is the ultimate paradise. The cloud migration equivalent of that would be the shift to SAP HANA. The route to HANA is a similarly beautiful drive, and the destination is a reward as well.
For SAP ECC, NetWeaver, and R/3 users, HANA (High-Performance Analytic Appliance) is not so much a vacation spot as an incredibly fast database foundation for future SAP solutions. However, one question inevitably comes up on the path to SAP HANA or S/4HANA:
What should we do with our on-premises pre-HANA ECC, NetWeaver, and R/3 systems once we have moved to HANA?
One solution is to migrate these systems to the cloud. Let’s look at the benefits of moving pre-HANA systems to the cloud and preparing for the next step in the evolution of the workplace.
Businesses running SAP most likely have a large IT department with a well-thought-out disaster recovery (DR) plan. Today it is easier than ever to run pre-HANA systems in the cloud, thereby enabling DR in the cloud for those systems. It is possible to decommission on-premises DR hardware and infrastructure, in turn lowering costs and harnessing the cloud’s flexibility in a way that the IT team couldn’t before. Cloud-based DR can also offer greater resilience and make it easier to comply with data sovereignty regulations based on where the cloud provider’s data center is located.
See More: A Roadmap for Migrating Legacy Tape Storage to the Cloud
All major cloud vendors now support Power in the Cloud, including AIX, IBM i, and Linux on Power, providing a road to the cloud for current Power-based applications and SAP deployments.
Systems running in the cloud can have workload symmetry with existing on-premises systems. This means it is possible to perform an initial “lift-and-shift” of applications without altering them, then, over time, refactor them to use native cloud services piece by piece. Keeping the same hostnames, IP addresses, and overall network topology allows IT to perform the lift-and-shift, determine whether any modernization or re-engineering is required, and then do that work at its own pace. IT teams can first adjust to having the application in the cloud, and once they start refactoring, they can apply lessons learned from early projects to later ones. This slower, more controlled approach can help reduce the risk of migrating complex, business-critical applications to the cloud.
Once an organization decides to run its SAP HANA-based system in the cloud, it makes even more sense to move any pre-HANA systems to the same cloud. This reduces latency, and housing both systems in the same cloud makes it easier to modernize any satellite application systems connected to SAP. Native cloud services such as blob storage, DevOps tools, directory management, and virtual machines are just a few of the many services that could be used to revitalize pre-HANA applications once they are moved to the cloud.
If an organization chooses to run HANA on-premises, migrating the pre-HANA systems to the cloud may still make sense, as it allows the decommissioning of all on-premises hardware previously supporting those systems. Depending on the size of the deployment, the savings could be considerable, and satellite SAP applications can still be modernized once moved to the cloud. The downside is the potential latency introduced by separating the systems; that said, this is commonly addressed with a high-speed interconnect from the cloud provider back to the on-premises environment.
See More: What Is Cloud Migration? Definition, Process, Benefits and Trends
Depending on the industry, some organizations must keep legacy systems and data intact and easily accessible for several years after a migration to comply with regulations. Over time, these legacy systems will be used less and less or perhaps be discontinued entirely. One alternative to using on-premises tools to support decommissioned applications is “Application Cold Storage,” where users store applications in the cloud. The application can be stored in a powered-off state to lower costs and then activated whenever it is accessed. This way, organizations do not need to pay the full infrastructure costs for applications that are no longer needed.
The above is only a smattering of the options to consider while navigating the “journey to HANA.” By utilizing all of the cloud vendors’ latest features, the path to the beautiful destination that is SAP HANA can be smooth sailing indeed.
In this article, I will talk about an open-source utility that hasn’t had much airtime since it was released on our IBM i.
It’s called logrotate, a handy utility that can help us manage the many logs we have on our IFS.
As we all know, log files can easily get out of hand.
How many times have clients called us up and moaned about performance, only for investigation to reveal that the php.log file has never been cleared down and holds millions of records?
Logrotate is here to help you get around this problem and clean up the logs on a periodic basis.
Logrotate is very powerful. It allows automatic rotation, compression, removal and mailing of log files.
Each log file may be handled daily, weekly, monthly, or when it grows too large. All very useful.
If you are confident using a Bash session connected to your IBM i over SSH, a quick yum install will get you going.
yum install logrotate
Or, use Open Source Package Management from IBM i Access Client Solutions (ACS), as seen in the figure below.
Do you have the excellent man utility on your box? If not, why not!
See my previous PowerWire article Where’s that manual gone?
Using man, we can see the manual for logrotate.
man logrotate
Please be aware that the logrotate utility resides in the /QOpenSys/pkgs/sbin folder, so adjust your profile’s path, or prefix the command with this folder if you intend to use this utility. For example:
/QOpenSys/pkgs/sbin/logrotate
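Alternatively, you can add the folder to your session’s PATH so the bare command works; putting the same line in your .profile makes it permanent. A minimal sketch:

export PATH=/QOpenSys/pkgs/sbin:$PATH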
Logrotate works from very simple text configuration files. Create this file in any IFS folder using your favourite editor, VS Code for example.
In this example, I’ll create a configuration file that checks and maintains all the log files in my /PowerWire/Logs folder on my IBM i IFS. I’ll create a file called powerwire.conf in my /powerwire/logrotate folder, with the following entries.
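A minimal sketch of such a file, assuming we want weekly rotation with four compressed archives kept per log; the wildcard path and the values are illustrative, but the directives are standard logrotate ones:

/PowerWire/Logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}

Here, weekly rotates each log once a week, rotate 4 keeps four old copies before the oldest is deleted, compress gzips the archives, missingok suppresses errors if a log is absent, and notifempty skips logs that are empty.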
This is not an extensive list of directives by any means. Check out the logrotate manual page using man, as in my top tip above.
Now we have our config file, let me test it out.
All we must do is run logrotate with our configuration file as the first parameter, as in
logrotate /powerwire/logrotate/powerwire.conf
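One point worth knowing: logrotate records when each log was last rotated in a state file. If the default location isn’t writable for your user profile (an assumption worth checking on your system), you can point it somewhere convenient with the -s flag, for instance:

logrotate -s /powerwire/logrotate/status /powerwire/logrotate/powerwire.conf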
If we only want to check that our configuration file holds all the necessary details, we can run logrotate in debug mode, which performs a dry run and does not actually do any archiving.
To run in debug mode, just add the -d flag after logrotate. This can be seen in the figure below.
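For example, using the full path from earlier:

/QOpenSys/pkgs/sbin/logrotate -d /powerwire/logrotate/powerwire.conf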
Here we can see each log file is being interrogated to check if any archiving needs to be performed.
Once you are happy with the archiving, just run the logrotate command without the -d flag and away it goes, sorting out your logs. All very neat!
If we want to run logrotate on a scheduled basis, we can easily add it to the job scheduler using the command seen below.
ADDJOBSCDE JOB(TIDY_LOGS)
CMD(QSH CMD('/QOpenSys/pkgs/sbin/logrotate /powerwire/logrotate/powerwire.conf'))
FRQ(*WEEKLY)
SCDDATE(*NONE)
SCDDAY(*ALL)
SCDTIME(0600)
TEXT('Tidy log files')
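Once added, the schedule entry can be checked with the usual scheduler command:

WRKJOBSCDE JOB(TIDY_LOGS)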
Another great addition to the open source on IBM i catalogue, and one that opens up the possibilities of what can be achieved on the IBM i server.
All the examples I have written for this article can be found in my open-source repository on GitHub at https://github.com/AndyYouens/f_Learning
If you have any questions, either on this article or anything else on the IBM i, use the comments below, or send me a message on Twitter @AndyYouens
Andy Youens is an IBM i consultant/instructor at Milton Keynes, UK-based FormaServe Systems, with over 40 years’ IBM midrange experience.
IBM Champion 2021