IBM i Open Source Updates March 2021

Hold on to your hats, because this might be the biggest monthly update yet!

New Packages

Integer Set Library (ISL)

ISL is a math library used by GCC and needed to build GCC. This will likely be more useful in the future. 😉

GCC Package and gcc-aix

Our existing GCC-related packages have been renamed to use a gcc6 prefix: gcc‑aix ➔ gcc6‑aix, gcc‑cpp‑aix ➔ gcc6‑cpp‑aix, etc. In addition, all the GCC binaries now carry a -6 suffix, e.g. gcc ➔ gcc‑6, g++ ➔ g++‑6, etc. This will be important when we deliver a new version of GCC, so that both versions can be installed side by side. Through the magic of rpm tags, these renames should not cause any issues when upgrading through yum.
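The "rpm tags" doing the magic here are most likely the standard Obsoletes/Provides pair. A hypothetical spec-file sketch (the version numbers are illustrative, not the actual shipped ones):

```
# The renamed package declares that it replaces and stands in for the old
# name, so "yum upgrade" transparently swaps gcc-aix for gcc6-aix.
Name:      gcc6-aix
Obsoletes: gcc-aix < 6.3.0-31
Provides:  gcc-aix = 6.3.0-31
```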

In addition, we have also shipped unversioned gcc packages, which always point to the “base” GCC package version. This is currently pointed at the existing GCC 6 packages mentioned above and is the default compiler used by our rpm builds. In the future, we may decide to change our base compiler and this allows an easy way to do that.

brotli

Like zstd, which we shipped previously, Brotli is another modern compression algorithm, this time developed by Google. It is required to build Node.js. Until now, we’ve statically linked the version shipped with the Node.js source, but we’re experimenting with building some of these dependencies as shared libraries instead.

c-ares

c-ares is an asynchronous DNS resolver. Like Brotli, it is required to build Node.js and we’re experimenting with building some of these dependencies as shared libraries instead.

pixman

pixman is a low-level pixel manipulation library. It is used by cairo and other higher-level graphics libraries.

X.org X11 libraries

We are now shipping the open source X.org X11 libraries developed by the freedesktop.org community. While the AIX-sourced versions of these libraries shipped in PASE and the X.org versions share a common lineage, they have diverged over time, with much of the current development focus going to the X.org versions to support Linux desktops. As such, many open source projects using X11 libraries expect to run against the X.org versions of the libraries. Not much changes at the moment, but this lays the groundwork for the future.

Package Updates

R

The R build has been updated to build with both libtiff and the X.org X11 libraries now that they are available. There are some other optional dependencies that we are looking to build with in the future, but don’t currently have approval to ship.

In addition, we have enabled -pthread flags by default, which should enable some packages to build out of the box, without having to set special flags.

Finally, R has been built against our new BLAS library indirector, which I’ll describe in more detail below.

BLAS & related packages

Our BLAS infrastructure has gone through a massive overhaul. We currently ship the Netlib reference versions of the BLAS and LAPACK scientific libraries used by ML workloads. While these libraries do work, these are generic, unoptimized versions. We’d like to offer alternative, optimized versions of these BLAS libraries, but currently applications link directly to the Netlib libraries. To enable ease of switching BLAS implementations, we have built an indirection mechanism that provides 4 libraries:

  • libblas-indirect.so.0
  • libcblas-indirect.so.0
  • liblapack-indirect.so.0
  • liblapacke-indirect.so.0

These libraries can be switched between the various BLAS implementations using the update-alternatives mechanism, e.g. alternatives --config blas. All our existing BLAS-using packages (R, python3-numpy, python3-scipy, python3-scikit-learn) have been rebuilt to use these indirection libraries.
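Under the covers, update-alternatives works by repointing a stable symlink at the chosen implementation, which is why applications linked against the indirection name don’t need to be rebuilt when you switch. A minimal, self-contained Python sketch of that idea (the file names are hypothetical stand-ins, not the real library paths):

```python
# Sketch of alternatives-style switching: a stable "indirection" name is a
# symlink that can be repointed at different implementations.
import os
import tempfile

root = tempfile.mkdtemp()
netlib = os.path.join(root, "libblas-netlib.so.3")
openblas = os.path.join(root, "libblas-openblas.so.3")
for impl in (netlib, openblas):
    with open(impl, "w") as f:
        f.write(impl)  # stand-in for a real shared library

# The stable name that applications link against:
indirect = os.path.join(root, "libblas-indirect.so.0")

def set_alternative(target):
    """Repoint the stable indirection name at a chosen implementation."""
    if os.path.islink(indirect):
        os.remove(indirect)
    os.symlink(target, indirect)

set_alternative(netlib)
assert os.readlink(indirect) == netlib

# Switching implementations is just re-pointing the link:
set_alternative(openblas)
assert os.readlink(indirect) == openblas
```

The real mechanism has more bookkeeping (priorities, manual vs. automatic mode), but the core effect is the same: one stable soname, many interchangeable backends.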

Ok, so you can now switch between BLAS implementations, but what good is it if we only have a single implementation? Well, read on my friend!

OpenBLAS

OpenBLAS is an optimized version of GotoBLAS. It contains hand-optimized assembly versions of BLAS functions for various architectures and operating systems. Most importantly for us, it has optimized instructions for POWER and builds on IBM i.

We currently ship version 0.3.10 with support for both POWER6 and POWER8 optimization levels. The code detects at runtime which CPU you are running on and switches appropriately. Running various benchmarks, we saw OpenBLAS performing over 5x faster than Netlib BLAS on a POWER8 IBM i 7.3 machine.

While OpenBLAS supports some POWER9 and even some POWER10 optimizations, we are limited to POWER8 support at the moment due to limitations in the assembler used in PASE. Because the IBM i 7.2 PASE version of as only supports up to POWER7 and the IBM i 7.3 version only supports up to POWER8, we have only enabled the POWER8 optimizations and had to build this RPM on IBM i 7.3 to do so.

This means this package is our first IBM i 7.3+ rpm and will also be our first RPM delivered through our new IBM i 7.3+ repo. If you want to enable it, you can download the repo file here to /QOpenSys/etc/yum/repos.d. We’re working on a more permanent and seamless solution for the future. In addition, a bunch of new packages we’re working on will start showing up there (and only there). With IBM i 7.2 going out of support at the end of April, it’s time to start leaving it behind.

RPM

Various improvements for packagers:

  • The brp-python-bytecompile script now uses the correct paths for IBM i, enabling automatic Python byte-compiling during RPM builds
  • RPM scriptlets now use the default PASE $PATH with the addition of /QOpenSys/pkgs/bin.
  • Various build-related files and utilities have moved from the rpm package to the rpm-build package
  • The default compile flags (%optflags) are now set per-release to include the minimum POWER hardware level supported on that release (e.g. 7.2: -mcpu=power6, 7.3: -mcpu=power7 -maltivec -mvsx, etc.)
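As a concrete illustration, per-release %optflags are typically expressed as rpm macro definitions. This is a hypothetical sketch: the macro file location and the -O2 flag are assumptions; only the -mcpu values come from the list above.

```
# Hypothetical per-release rpm macros file (e.g. shipped with rpm-build).
# IBM i 7.2 builds target the oldest hardware that release supports:
%optflags -O2 -mcpu=power6

# On IBM i 7.3 the floor rises to POWER7 with AltiVec/VSX enabled:
#%optflags -O2 -mcpu=power7 -maltivec -mvsx
```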

bash

bash now looks for an inputrc file in /QOpenSys/etc/inputrc instead of /etc/inputrc (~/.inputrc is not affected).

perl

libperl.a has been repackaged into a standard SVR4-style shared library, libperl.so.5.24. In addition, various files have been split out into a perl-devel package.

db2util

Updated to 1.0.11 to fix an issue with empty string output for NUMERIC and DECIMAL types.

util-linux

This package has been updated to require the necessary version of libutil2.

pkg-config

pkg-config has been patched to ignore Requires.private dependencies. These are only needed when static linking, and since we provide no static libraries, disabling this allows us to avoid adding unnecessary rpm dependencies to many development packages.
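For context, Requires.private lives in a library’s .pc file. A sketch of a hypothetical foo.pc (the library name and paths are made up for illustration):

```
# Hypothetical pkg-config file. With the patch, the entry under
# Requires.private no longer pulls in an extra rpm dependency, since it is
# only consulted when generating --static link lines.
prefix=/QOpenSys/pkgs
libdir=${prefix}/lib
includedir=${prefix}/include

Name: foo
Description: Example library
Version: 1.0
Requires.private: zlib
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```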

ansible

Ansible has been updated to no longer adjust the shebang line of various Python files it ships. This was causing issues when running IBM i as the Ansible control node and communicating with a non-IBM i remote node.

Other Updates

  • nodejs10 was updated to 10.24.0.
  • nodejs12 was updated to 12.21.0.
  • nodejs14 was updated to 14.16.0.
  • libutil was updated to 0.10.0.
  • ansible was updated to 2.9.10.

Closing

Last month I mentioned bigger changes were coming and while I certainly think we nailed it on that front, there’s still more on the way. Stay tuned!

Git DevOps going on IBM i – A practitioner’s guide to easy CI/CD on IBM i with Git (or, why are you waiting?)

By Ray Bernardi | 7 April 2021

I have a question for the IBM i shops out there. What are we waiting for? Why aren’t we doing DevOps yet? I’m going to suggest that a lot of shops fall into the “if it ain’t broke, don’t fix it” category. That’s pretty much been the culture of traditional IBM i shops. It’s also the reason IBM i shops lag behind in innovation and adopting new technology and techniques. That leads to the incorrect assumption that the platform is old and can’t respond to business needs. Nothing could be further from the truth. 

Yet, I talk to businesses that are leaving the platform because they need to modernize. They are willing to take the risk of a complete rip and replace of their business systems. Why? Because they want the software running on the IBM i to be as agile as their open world software is. They want to be able to use the same pipeline for changes no matter where the code lives. They want to automate the process and implement DevOps. 

Summary

  1. It’s the technical debt 
  2. What cultural divide?
  3. Making Git ‘smart’ about native IBM i components 
  4. One CI/CD pipeline to rule them all
  5. True DevOps with Git on IBM i 
    1. It’s the technical debt

    The problem is not the platform. The problem is technical debt. Their code is old, their database has not been updated to DDL, they still use old clunky change management systems developed decades ago that are incapable of meeting the new agile requirements they face. They have no idea how to automate what’s happening on the IBM i or how to interface it with the tools they are using like Jenkins, Jira, Git and so on.

    2. What cultural divide?

    The other issue is culture. The IBM i has been around for a long time. A lot of people see what’s happening in the open world and have no idea that it can all apply to the IBM i. The answer is in tooling and automation.

    Why not use the same tools the open world uses? Where is your source control for your IBM i right now? Is it contained within an outdated change management system? Why not use Git for your native code? That’s probably where all the open code is stored and you probably have people who know Git pretty well.

    Here’s the problem. Most IBM i developers haven’t used Git. Branching, merging and the Git workflow are foreign to them. They don’t think it can work for them. Try telling an IBM i developer that from now on, when they make a change, they need to pull their source from a Git branch, and see how far you get.

    That’s where tooling comes in. A modern change management system like ARCAD knows about Git. It uses webhooks during branch creation and creates associated development libraries on the IBM i automatically. It then puts the source in those development libraries during checkout, ready to edit and compile. Not much different.

    With that kind of tooling in place, how would an IBM i developer use Git to do the pull? They simply check out the source; that’s it. The tooling does the rest. It’s just a menu option or a right click. When they are done making their changes, another menu option or right click pushes back to the branch. It’s the same workflow they’re used to: they check out, edit, compile, test and deliver the change. The difference is that Git is behind the scenes.

    Did you notice in the above paragraph I said they can take a menu option? There’s an implication there that an IBM i developer who uses 5250 can also do this. That implication is true. Green-screen development with Git as the source control is fully supported by ARCAD. That’s serious integration.

    ‘True Git’ – are there right and wrong ways to do Git on IBM i?

    Are you looking to manage your RPG and COBOL source code under Git, or have you already tried making the leap? Git brings multiple advantages, including flexible distributed development, parallel versioning and complete visibility of changes anywhere in the system. Take your IBM i teams to the next level with Git with this webinar!

    3. Making Git ‘smart’ about native IBM i components

    Ok then… next issue. Git has no clue what an IBM i is. It has no idea what a PF or a Table is. It has no clue about dependencies. So, how can a build with IBM i components possibly work? By passing what’s changed to a tool that knows the IBM i, that’s how.

    At ARCAD we call that tool Builder. It’s a smart IBM i build. If you change a table using Git as your source code repository, Git will pass what you changed to Builder. Builder will then identify all related components and recompile them automatically, in the correct order, in the correct development libraries. Again, automation is the key here. You don’t need a make file, or to script every change. It’s done for you.

    The best part is that your source is always available on your IBM i. ARCAD, using Git, will push source to the correct libraries as needed. Developers work on the IBM i in libraries with source files that are all native to them. Both RDi and 5250 are supported.

    4. One CI/CD pipeline to rule them all

    So, with this kind of tooling, your IBM i code can follow the same pipeline as your open source. Are you starting to see the light here? Now we can use the same type of automation for IBM i code that’s used for open code. This opens the door to true DevOps on the IBM i.

    For example, you could use Jira to create a task. ARCAD’s interface with Jira will create the related maintenance report at the appropriate time. That could be when a developer drags the task from Backlog to In-Development on a Kanban board, as shown here.

    Kanban Branch

    A branch is created for the developer and using ARCAD Skipper, they develop as they do now, no big changes, no new workflow, no huge learning curve, no slow down in productivity.

    Created Branch

    When they are ready, they push the change and automation takes over. The branch is updated in the Git repository. The push event triggers a webhook, something has happened, and action is needed. What kind of action? Let’s examine some possibilities. Take a look at the illustration below.

    Pipeline DevSecOps

    That’s an example of an automated pipeline for IBM i code moving into a unit test environment; Jenkins is shown in this example. When the developer did the push, Git and Builder together created everything needed. During Integration, the source code was copied from development to a test environment on the IBM i. Objects were either moved or compiled into the test area; that’s a choice you make.

    Once the source and objects are in place, the pipeline triggers ARCAD Code Checker. Code Checker automatically examines the source code for quality and security standards that you can set. It’s the first step of the quality gate for your IBM i code.

    ARCAD iUnit can then test the procedures in your programs to make sure they still function as expected. Then ARCAD Verifier can run full regression tests over the environment to see if the change had any unexpected results you could not have predicted. All this is automated.

    You get reports and results all along the way. You can view them from your automation tool, like Jenkins, so you have one place to look for issues. All this because a developer took a menu option or did a right click in RDi. Can you imagine the time savings?

    So without changing things too much for your development teams, this could be your IBM i workflow.

    Example IBM i Workflow

    You can see the production line of code on the master branch at the top. Each of the feature branches is a change under development. The feature branches get created from the master branch. Developers work on those feature branches and use the automation we have already talked about. They push to the feature branches. You can also see push events off each of the feature branches. This is not the developer doing this; let me explain.

    The developer only pushes back to the feature branch. You can have as many feature branches as you need. The developer can push to them as often as they wish. Now, imagine you’re creating a new release of your software; here is a question for you. When you’re ready to update production, would you prefer doing it change by change, feature by feature, like you did in test? Or would you rather promote the aggregate in one motion as a release? The latter sounds a lot easier to me.

    That’s what that release branch at the bottom of the diagram is about. It’s there to collect the changes delivered by developers on the feature branches. The push events shown here might be done by your QA department when the change is accepted, for example.

    Now remember, a branch has an associated IBM i library. So yes, you would be correct in assuming that when the release branch got created the same thing happened: a library was created to store the release as a whole. The illustration below is a CloudBees pipeline. Jenkins isn’t a requirement; automation is the requirement, and there are lots of tools to help you, like CloudBees, Azure DevOps and Jenkins.

    Branch Release Update Display

    Once the changes are accepted for the release, the push events shown on the diagram trigger everything seen above. The process first uses the branch information to determine the correct release library on the IBM i. Before the code is put into the release, it is scrubbed by Code Checker for quality and security compliance. If it passes that test, the source is placed in the release (version) library and compiled during the build. Once the objects are created, ARCAD iUnit tests the procedures again; if that goes well, the changes are automatically promoted from the unit test environment to the QA environment in the Integration step. Then a full regression test is run, and if that goes well, the changes are deployed to a test LPAR.

    All that happened because a feature got pulled into the release branch. That’s serious automation and it is true DevOps on the IBM i.

    5. True DevOps with Git on IBM i

    I’ve been working with the tools I’ve talked about here, and many others, for a long time now. I’ve been talking about DevOps for the IBM i for at least the last 6 years or so. These tools let you shift left. Issues are found early in the development cycle when they cost a lot less to fix. They are redundant; they check things multiple times during the lifecycle of a change, automatically. They automate the steps you now do manually. This is a modern toolset that allows you to treat your IBM i code the same way you do your open code. With agility.

    The only thing that confuses me is how slowly IBM i shops have adopted the new ecosystems we are seeing. It’s not the platform; it can handle whatever you throw at it. It’s not a lack of tooling; as you’ve seen here, the tools are available and mature.

    This solution allows open and IBM i developers to work together; it includes cross-platform cross-reference capabilities, and they can “see” each other. It allows a senior 5250 developer who’s been with the company for years to work like they always have. It allows you to attract new talent, because the (young) developers coming out of college today already know Git, Jira and tools like Jenkins, and they will feel at home with it all. This is a solution that works. It’s available today. So what is it then? What are we waiting for? An invitation?

    Well then, I formally invite you to examine the rest of the ARCAD site and see what we can do for you and your organization. Now that that’s out of the way, Git Going!!!!


    Ray Bernardi

    Senior Consultant, ARCAD Software

    Ray is a 30-year IT veteran and currently a Pre/Post Sales technical Support Specialist for ARCAD Software, international ISV and IBM Business Partner. He has been involved with the development and sales of many cutting edge software products throughout his career, with specialist knowledge in Application Lifecycle Management (ALM) products from ARCAD Software covering a broad range of functional areas including enterprise IBM i modernization and DevOps. In addition, Ray is a frequent speaker at COMMON and many other technical conferences around the world and has authored articles in several publications on the subject of application analysis and modernization, SQL, and business intelligence.

    The post Git DevOps going on IBM i – A practitioner’s guide to easy CI/CD on IBM i with Git (or, why are you waiting?) appeared first on ARCAD Software.

When Cloud Meets DevOps on IBM i

We’re witnessing multiple macro trends play out in the IT industry, with the rise of cloud computing and managed services being one of the most compelling storylines in 2021. How software is developed is changing too, with DevOps tools and agile methodologies rapidly becoming the norm. When these two trends converge in our little neck of the woods, it can leave IBM i customers wondering where to go. With their recently formed partnership, ARCAD Software and Connectria have mapped out one path.

A decade ago, companies were still unsure of the cloud. They would dabble with adding development, test, and maybe DR in the cloud, but they kept production systems on premise. That resistance is quickly melting away, as many companies now look to the public cloud platforms as their first choice for production runtimes, especially for new Web and mobile applications, and analytics and AI.

The post When Cloud Meets DevOps on IBM i appeared first on ARCAD Software.

Displaying foreign key constraints using SQL

I was asked:

What about referenced tables when you specify a foreign key when creating a table? I can check this when I run DSPFD, it is in Parent File Description but I still can’t find an SQL to elaborate all tables in our Application.

I have to admit, this one took me some time to find the information I needed to provide the example in this post.

A foreign key is one of the many constraints that can be used with Db2 for i tables and DDS files. A quick search of the IBM Knowledge Center turned up the following catalog views:

  • SYSCST:  Every constraint; can be considered the “header” file for constraints
  • SYSCSTCOL:  Columns upon which the constraints have been defined
  • SYSCSTDEP:  Tables upon which the constraints have been defined
  • SYSKEYCST:  Every unique, primary, and foreign key that has been defined
  • SYSREFCST:  Foreign keys that have been defined
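Putting those catalogs together, one way to list every foreign key and its parent table is to join SYSCST to SYSREFCST, then back to SYSCST to resolve the referenced unique/primary key. This is a sketch I haven’t run against every release; the column names follow the QSYS2 catalog documentation, but verify them on your system:

```sql
-- List each foreign key: the child (dependent) table and the parent table
-- whose unique or primary key it references.
SELECT fk.constraint_schema,
       fk.constraint_name,
       fk.table_name AS child_table,
       pk.table_name AS parent_table
  FROM qsys2.syscst fk
  JOIN qsys2.sysrefcst ref
       ON  ref.constraint_schema = fk.constraint_schema
       AND ref.constraint_name   = fk.constraint_name
  JOIN qsys2.syscst pk
       ON  pk.constraint_schema = ref.unique_constraint_schema
       AND pk.constraint_name   = ref.unique_constraint_name
 WHERE fk.constraint_type = 'FOREIGN KEY'
 ORDER BY fk.constraint_schema, fk.table_name;
```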

Read more »
