Register to join us next week: Cloud Backup and Disaster Recovery for #IBMi Webinar register.gotowebinar.com/register/32100… pic.twitter.com/sB3EjVZEIq
– Datanational (@Datanational) 07:11 – Jan 06, 2022
Just 4 weeks to go until i-UG Conference 2022. Have you signed up yet? Register here bit.ly/31wTvi7
#IBMi #techconference #educate #iug pic.twitter.com/ERQrnhafQm
– i-UG IBM i UserGroup (@i_ug_uk) 05:31 – Jan 06, 2022
Is moving pre-HANA systems causing road rage? Find out why cloud migration is one of the speediest, most picturesque drives organizations can take on the way to their SAP HANA destination.
Close your eyes and picture this: you are cruising by a clear, blue ocean, verdant tropical rainforests, and the shining sun is warming your face – it is the ultimate paradise. The cloud migration equivalent of that would be the shift to SAP HANA. The route to HANA is a similarly beautiful drive, and the destination is a reward as well.
For SAP ECC, NetWeaver, and R/3 users, HANA (High Performance Analytic Appliance) is not so much a vacation spot as an incredibly fast database foundation for future SAP solutions. However, one question inevitably comes up on the path to SAP HANA or S/4HANA:
What should we do with our on-premises pre-HANA ECC, NetWeaver, and R/3 systems once we have moved to HANA?
One solution is to migrate these systems to the cloud. Let’s look at the benefits of moving pre-HANA systems to the cloud and preparing for the next step in the evolution of the workplace.
Businesses running SAP most likely have a large IT department with a well-thought-out disaster recovery (DR) plan. Today it is easier than ever to run pre-HANA systems in the cloud, thereby enabling DR in the cloud for those systems. It is possible to decommission on-premises DR hardware and infrastructure, in turn lowering costs and harnessing the cloud’s flexibility in a way that the IT team couldn’t before. Cloud-based DR can also offer greater resilience and make it easier to comply with data sovereignty regulations based on where the cloud provider’s data center is located.
See More: A Roadmap for Migrating Legacy Tape Storage to the Cloud
All major cloud vendors now support Power in the Cloud, including AIX, IBM i, and Linux on Power, providing a road to the cloud for current Power-based applications and SAP deployments.
Systems running in the cloud can have workload symmetry with existing on-premises systems. This means it is possible to perform an initial “lift-and-shift” of applications without altering them, then, over time, refactor them to use native cloud services piece by piece. Using the same hostnames, IP addresses, and overall network topology allows IT to perform a lift-and-shift, determine whether any modernization or re-engineering is required, and then do that work at its own pace. IT teams can first adjust to having the application in the cloud, and once they start refactoring, they can apply lessons learned from early projects to later ones. This slower, more controlled approach can help reduce the risk of migrating complex, business-critical applications to the cloud.
Once an organization decides to run its SAP HANA-based system in the cloud, it makes even more sense to move any pre-HANA systems to the same cloud. This reduces latency, and housing both systems in the same cloud makes it easier to modernize any of the satellite application systems connected to SAP. Native services from cloud vendors, such as blob storage, DevOps tools, directory management, and virtual machines, are just a few of the many services that could be used to revitalize pre-HANA applications once they are moved to the cloud.
If an organization chooses to run HANA on-premises, migrating the pre-HANA systems to the cloud may still make sense, as it allows decommissioning all of the on-premises hardware previously supporting those systems. Depending on the size of the deployment, these savings could be considerable. Satellite SAP applications can also be modernized once moved to the cloud. The downside is the potential latency introduced by the separation between the systems; that said, this is commonly addressed by using a high-speed interconnect from the cloud provider back to on-premises.
See More: What Is Cloud Migration? Definition, Process, Benefits and Trends
Depending on the industry, some organizations must keep legacy systems and data intact and easily accessible for several years after a migration to comply with regulations. Over time, these legacy systems will be used less and less or perhaps be discontinued entirely. One alternative to using on-premises tools to support decommissioned applications is “Application Cold Storage,” where users store applications in the cloud. The application can be stored in a powered-off state to lower costs and then activated whenever it is accessed. This way, organizations do not need to pay the full infrastructure costs for applications that are no longer needed.
The above is only a smattering of the options to consider while navigating the “journey to HANA.” By utilizing the cloud vendors’ latest features, the path to the beautiful destination that is SAP HANA can be smooth sailing indeed.
Did you find this article helpful? Tell us what you think on LinkedIn, Twitter, or Facebook. We’d be thrilled to hear from you.
In this article I will talk about an open-source utility that hasn’t had much airtime since it was released on our IBM i.
It’s called logrotate, a handy utility that can help us manage the many logs we have on our IFS.
As we all know, log files can easily get out of hand.
How many times have clients called us up and moaned about performance, only for investigation to reveal that the php.log file has never been cleared down and holds millions of records?
Logrotate is here to help you get around this problem and clean up the logs on a periodic basis.
Logrotate is very powerful. It allows automatic rotation, compression, removal and mailing of log files.
Each log file may be handled daily, weekly, monthly, or when it grows too large. All very useful.
If you are confident using a Bash session connected to your IBM i over SSH, a quick yum install will get you going.
yum install logrotate
Or, use Open Source Package Management from Access for Client Solutions (ACS), which can be seen in the figure below.
Do you have the excellent man utility on your box? If not, why not!
See my previous PowerWire article Where’s that manual gone?
Using man, we can see the manual for logrotate.
man logrotate
Please be aware that the logrotate utility resides in the /QOpenSys/pkgs/sbin folder, so adjust your profile path, or prefix the command with this folder, if you intend to use this utility. For example:
/QOpenSys/pkgs/sbin/logrotate
Logrotate works off very simple text configuration files. Create this file, in any IFS folder, using your favourite editor, VS Code for example.
In this example, I’ll create a configuration file that checks and maintains all the logfiles in my /PowerWire/Logs folder on my IBM i IFS. I’ll create a file called powerwire.conf in my /powerwire/logrotate folder, with the following entries.
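As an illustration only, a minimal configuration for that folder might look something like this; the directives shown are just a few common logrotate options picked for the example rather than a definitive recipe.
/PowerWire/Logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}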
This is not an extensive list by any means. Check out the logrotate manual using man in my top-tip above.
Now we have our config file, let me test it out.
All we must do is run logrotate with our configuration file as the first parameter, as in
logrotate /powerwire/logrotate/powerwire.conf
If we only want to check that our configuration file holds all the necessary details, we can run logrotate in debug mode, which performs just a dry run and does not actually do any archiving.
To run in debug mode, just use the -d flag after logrotate. This can be seen in the figure below.
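For example, a dry run against the configuration file we created earlier looks like this.
logrotate -d /powerwire/logrotate/powerwire.conf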
Here we can see each log file is being interrogated to check if any archiving needs to be performed.
Once you are happy with the archiving, just run the logrotate command without the -d flag and away it goes, sorting out your logs. All very neat!
If we want to perform logrotate on a scheduled basis we can easily add it to our job scheduler using the command seen below.
ADDJOBSCDE JOB(TIDY_LOGS)
CMD(QSH CMD('/QOpenSys/pkgs/sbin/logrotate /powerwire/logrotate/powerwire.conf'))
FRQ(*WEEKLY)
SCDDATE(*NONE)
SCDDAY(*ALL)
SCDTIME(0600)
TEXT(‘Tidy log files’)
Another great addition to the open source catalogue on IBM i, one that opens up the possibilities of what can be achieved on the IBM i server.
All the examples I have written for this article can be found on my open-source repository on GitHub, which can be found at https://github.com/AndyYouens/f_Learning
If you have any questions, either about this article or anything else on the IBM i, use the comments below, or send me a message on Twitter @AndyYouens.
Andy Youens is an IBM i consultant/instructor at Milton Keynes, UK-based FormaServe Systems, with over 40 years’ IBM midrange experience.
IBM Champion 2021
With IBM Power10, AI can be deployed and integrated without disruption to the enterprise attributes of the platform. Power10 provides a dramatic improvement in inferencing capability over IBM Power9, making it possible to add inferencing capabilities to your enterprise application without requiring additional hardware. As a first step we can deploy AI asynchronously, leveraging an Apache Kafka stream, within the same Power10 platform. This non-invasive way of introducing AI functionality generates a data stream from the database transactions and performs inferencing-based analysis in a separate partition that does not disrupt operations. That is what this example is focused on.
IBM i provides an efficient means of generating a Kafka stream from a set of database transactions, and in this example, we analyze the data stream in a separate Linux partition. We use a deep learning model for time-series (N-Beats) to forecast future prices of transactions.
Besides being an example of AI integration, this demonstration of Business Inferencing at Scale is also an example of application modernization. The example uses numerous open source components and leverages the flexibility of the IBM Power platform to seamlessly run different operating systems side-by-side. It allows introducing new and advanced functionality side-by-side with existing enterprise applications, in a non-disruptive manner.
Offline analysis using AI can be a precursor to a tighter, inline, integration of AI function with the enterprise application. Because Power10 provides the AI functionality within the processor core rather than in a separate system or accelerator, the Power10 hardware allows non-disruptive integration of AI functionality after it is developed and validated. Thus, for example, after a fraud detection model has been developed and validated using the approach in this example, a user may in the future deploy the fraud detection AI as an inline step and prevent a fraudulent transaction from being committed in the first place.
The purpose of this tutorial is to explain with an example how the Power10 Matrix Math Accelerator (MMA) feature can be used to accelerate enterprise AI inferencing with the data on IBM i.
In the example DayTrader7 application, the sequence consists of trades: buying and selling stock. We have modified the DayTrader7 application to replay a sequence of actual trades in order to make it more realistic, and we use an inferencing model to make a prediction about the future stock price in each time interval. Of course, stock prices are one of the most difficult sequences to predict; the intent here is to provide an integration example. We make no claims as to the ability of the example model to predict stock prices! Other example uses of similar models could be inventory management (to predict future purchases), customer management (to predict customer churn), and so on.
On IBM i, we use Apache Camel, together with other modernization tooling, to stream IBM Db2 transactions to Kafka. For detailed information about how and what we implemented, see the blog post: Using Power10’s Superfast AI With Event Streaming.
Stock market prediction is an application of time-series forecasting. Machine learning and deep learning have shown impressive results on many forecasting tasks (with applications in product sales, server utilization, meteorology, and so on), and multiple attempts exist to predict stock market values. In our case, we work with a univariate time-series: we train our predictions on one variable, the previous prices (as opposed to multivariate, where we could predict the price based on covariates: previous prices, volumes, transaction frequency, and so on). For details about the approach, the libraries used, the data collection and cleaning process, and the model we use for running our predictions, see the “Stock market prediction” section.
Power10 provides four Matrix Math Accelerator (MMA) engines per core, and the IBM Power E1080 server supports a maximum of 15 cores per socket, compared to a maximum of 12 cores per socket for the IBM Power E980 server. For large single-precision floating-point (fp32) inferencing models similar in complexity to the N-Beats model used in this demonstration, we have demonstrated a 5x throughput advantage on Power10 relative to Power9. Models based on lower-precision data types (bfloat16 and int8) are expected to see even higher speedups.
Refer to Figure 1 for an overview of the architecture.
Figure 1 . Infrastructure of time series prediction with modernized IBM i
You can also have a quick view of the case in the following demo video.
This section describes our approach and details the libraries used, the data collection and cleaning process, and the model we used for running our predictions.
The library we used is PyTorchForecasting. This library is built on top of PyTorch Lightning and provides a flexible, high-level API for working with time-series.
It provides a TimeSeriesDataSet class that wraps a pandas DataFrame and automates the holding, encoding, and normalization of time-series data. It also embeds multiple state-of-the-art models for time-series forecasting.
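As a rough sketch of what that looks like in code (the DataFrame df and its column names here are assumptions for illustration, not the exact ones used in the demo):
# Sketch only: wrapping a pandas DataFrame into a TimeSeriesDataSet.
# The DataFrame "df" and its column names are hypothetical.
from pytorch_forecasting import TimeSeriesDataSet

training = TimeSeriesDataSet(
    df,                                    # one row per time step
    time_idx="time_idx",                   # integer time index column
    target="price",                        # the value we want to forecast
    group_ids=["series"],                  # identifier of the time-series
    time_varying_unknown_reals=["price"],  # univariate: only the target itself
    max_encoder_length=60,                 # look-back window
    max_prediction_length=20,              # forecast horizon
)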
The library’s introduction article is an interesting read to get more details on its capabilities.
We used data from Yahoo Finance, which provides a stock market data API. It can be easily queried through the pandas DataReader module:
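A minimal sketch of such a query, assuming the pandas-datareader package and its Yahoo Finance reader (the ticker and date range are illustrative):
# Sketch only: fetch daily prices for one ticker from Yahoo Finance.
# Assumes the pandas-datareader package is installed.
from pandas_datareader import data as pdr

df = pdr.DataReader("IBM", data_source="yahoo",
                    start="1990-01-01", end="2020-12-31")

# Keep only the closing price as the reference value for the day.
closing = df["Close"]
closing.to_csv("IBM.csv")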
We repeat that operation for around 20 stocks, fetch daily data between 1990 and 2020, and store them in CSV files. The data returned contains the daily opening, closing, highest, and lowest prices and the trade volume, out of which we keep only the closing price as the reference value for the day.
For the demo, we then replay these daily variations as per-second variations to get a faster evolution of the stock price.
Note that we picked stocks that have a relatively small variation over the period, so that we can concatenate them (to obtain a long enough time-series) and avoid exploding stock market values or overly unusual variations. The overall plot of the data set is shown in Figure 2.
Figure 2. The overall plot of the data set
We then concatenate the stock values (simply making sure to align the last price of a stock with the first price of the following one).
We finally apply a light smoothing with a rolling window of size 10.
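A small sketch of that preprocessing, assuming the per-stock closing prices have been loaded into a list of pandas Series (series_list and the other variable names are hypothetical):
# Sketch only: concatenate the per-stock series and apply light smoothing.
import pandas as pd

# series_list: one closing-price Series per stock, loaded from the CSV files.
aligned = []
last_price = None
for s in series_list:
    if last_price is not None:
        # Align the first price of this stock with the last price of the previous one.
        s = s - s.iloc[0] + last_price
    aligned.append(s)
    last_price = s.iloc[-1]

prices = pd.concat(aligned, ignore_index=True)

# Light smoothing with a rolling window of size 10.
smoothed = prices.rolling(window=10, min_periods=1).mean()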
Note that a real application with the aim of achieving the best possible accuracy would probably not merge time-series that way. In our use case, ease of use was a criterion, and the resulting data set looks realistic and therefore suits our use.
We picked the N-Beats model (neural basis expansion analysis for interpretable time series forecasting). Released in 2020 by a team including Yoshua Bengio, it is one of the state-of-the-art models for univariate time-series prediction and outperformed the winner of the M4 competition.
This model is built into the PyTorchForecasting library and is therefore easy to test and use.
We chose N-Beats because it was well-suited to the univariate time-series forecasting problem we aimed to solve, and it showed a very good MMA instruction rate during our tests (see the “Parameters tuning” section below). We also evaluated the Temporal Fusion Transformer model, on which MMA instruction usage was a bit lower.
We trained that model on the aggregated data set of stock prices described above (with 80%/20% split for training and testing sets). We used the following parameters for the model architecture:
num_blocks = [3, 3]
num_block_layers = [4, 4]
widths = [4096, 4096]
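As a sketch of how those parameters map onto the library’s API (the training object is assumed to be the TimeSeriesDataSet built from the aggregated data set):
# Sketch only: constructing the N-Beats model with the parameters above.
from pytorch_forecasting import NBeats

net = NBeats.from_dataset(
    training,
    num_blocks=[3, 3],
    num_block_layers=[4, 4],
    widths=[4096, 4096],
)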
Note that we intentionally picked very large layers to maximize the use of MMA (see the “Parameters tuning” section below); because of that, training sometimes suffers from instability (exploding gradients), and the architecture could probably be improved with regularization techniques to address that. However, it was sufficient for our use case, and we could train a model without gradient issues.
We trained for 20 epochs with a batch size of 128 using an IBM Power AC922 server with an NVIDIA Tesla V100 GPU (with 16 GB memory).
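A minimal sketch of that training step, assuming PyTorch Lightning and dataloaders produced from the training and validation TimeSeriesDataSet objects (the variable names are assumptions, and the Trainer arguments depend on the Lightning version):
# Sketch only: dataloaders with batch size 128, then 20 epochs on one GPU.
import pytorch_lightning as pl

train_dataloader = training.to_dataloader(train=True, batch_size=128)
val_dataloader = validation.to_dataloader(train=False, batch_size=128)

trainer = pl.Trainer(max_epochs=20, gpus=1)
trainer.fit(net, train_dataloader, val_dataloader)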
We ran tests of the N-Beats model with multiple parameter sets and compared the speedup against both a Power9 system and a Power10 system without MMA, and found that the set of parameters described above yields the best results.
Figure 3. N-Beats model test results with Power9 and Power10
We also have the option to predict multiple stocks. To do so, we have two options:
Run a single model instance and increase the batch size. This option brings better throughput but increases latency (as inference on a bigger batch takes longer, and we would need to buffer the inputs until a batch is ready).
Run multiple model instances in parallel, each with a batch size of 1 or another small batch size. This option gives the best latency results.
Best performance is achieved on Power10 systems, but the demonstration is also functional on earlier generations of Power hardware that support the required OS distributions. Note that RHEL 8.4 requires systems with little-endian support for Linux (which started with IBM Power8).
IBM i: IBM i 7.3 or IBM i 7.4
Linux: Red Hat Enterprise Linux (RHEL) 8.4
On IBM i: To use the techniques described in this tutorial to deploy DayTrader with Db2 for i and stream data to a Kafka broker on IBM i, the following software must be installed:
The following open source packages, installed using RPM:
maven
wget
ca-certificates-mozilla
unzip
A suitable Java runtime, running Java version 8 or later. Most systems already have this capability through 5770-JV1 option 17.
PTF group SF99703 Level 16 or later for IBM i 7.3 or SF99704 Level 4 or later for IBM i 7.4.
If you choose to run the Kafka broker on IBM i, download Kafka directly from the Apache Kafka website.
On Linux: To try the time series prediction AI case, the following software must be installed using RPM:
wget
git
podman or Docker
If you have your IBM i and Linux systems ready (with internet access and the prerequisite software installed), it should take about one hour to finish the tasks mentioned in this tutorial.
Deploy DayTrader, Kafka, and JMeter on IBM i automatically in a simple way.
Refer to the README.md file of this example’s GitHub repository Kafka-based AI example to complete the deployment of DayTrader, Kafka, and JMeter on IBM i.
Deploy OpenCE, along with the software and model for the stock price prediction AI case, on Linux.
Refer to the Environment setup section of the README for the AI components to manually set up the environment.
Run the demo on Linux.
Refer to the remaining steps in the [Manual setup] How to run section of the README for the AI components to run the demo.
Artificial intelligence and machine learning technologies are quickly becoming the new standard. This trend is particularly interesting to IBM i clients, because the data housed and processed on IBM i has enormous potential: it can yield plenty of new business insights when processed with artificial intelligence algorithms. Thankfully, Power10 brings new on-chip optimizations for AI computations, which means much better performance without introducing unnecessary complexity to your infrastructure. Now AI can be done efficiently and in a non-disruptive manner. Like the idea of real-time predictions? If so, take some time to explore the sample application outlined in this tutorial!
We acknowledge Joe McClure from the WebSphere Performance team for helping us customize the DayTrader application for this use case; Karthik Swaminathan from the Efficient and Resilient Systems Research team for helping us evaluate the model performance on Power10; and Stu Cunliffe from the EMEA Lab Services team for providing us with the Linux and IBM i systems during the PoC phase.