Service Commander for IBM i

A utility for unifying the daunting task of managing various services and applications running on IBM i. Its objective is to provide an intuitive, easy-to-use command line interface for managing services or jobs.

This tool can be used to manage a number of services, for instance:

IBM i host server jobs
IBM i standard TCP servers (*TCP, *SSHD, etc.)
Programs you wrote using open source technology (Node.js, Python, PHP, etc.)
Apache Tomcat instances
Apache Camel routes
Kafka, Zookeeper, ActiveMQ servers, etc
Jenkins
The Cron daemon
OSS Database servers (PostgreSQL, MariaDB)

Current features

Some of the features of the tool include:

The ability to specify dependencies (for instance, if one application or service depends on another); Service Commander will start any dependencies as needed
The ability to submit jobs to batch easily, even with custom batch settings (use your own job description or submit as another user, for instance)
The ability to check the “liveliness” of your service by either port status or job name
Customize the runtime environment variables of your job
Define custom groups for your services, and perform operations on those groups (by default, a group of “all” is defined)
Query basic performance attributes of the services
Assistance in providing/managing log files. This is a best-guess only and naively assumes the service uses stdout/stderr as its logging mechanism. Service Commander has its own primitive logging system that works well only for certain types of services
Ability to define and manage ad hoc services specified on the command line
Ability to see which ports are currently open (have a job listening)

Hands-on Exercise

Want to walk through a quick exercise to get some basic “hands-on” experience with this tool? If so, please see our very simple hands-on exercise

Have feedback or want to contribute?

Feel free to open an issue with any questions, problems, or other comments. If you’d like to contribute to the project, see CONTRIBUTING.md for more information on how to get started.

In any event, we’re glad to have you aboard in any capacity, whether as a user, spectator, or contributor!

Service Commander’s design is fundamentally different from other tools that accomplish similar tasks, like init.d, supervisord, and so on. Namely, the functions within Service Commander are intended to work regardless of:

Who else may start or stop the service
What other tools may be used to start or stop the service. For instance, Service Commander may start/stop an IBM i host server, but so could the STRHOSTSVR/ENDHOSTSVR CL commands.
Whether the service runs in the initially spawned job or a secondary job

Also, this tool doesn’t have the privilege of being a unified solution integrated with the operating system, as some other tools do. Therefore, Service Commander cannot assume that it can keep track of the resources tied to the services that it manages. So, for example, this tool does not keep track of the process IDs of launched processes. Similarly, it doesn’t have special access to kernel data structures, etc.

Instead, this tool makes strong assumptions based on checks for a particular job name or port usage (see check_alive_criteria in the file format documentation). A known limitation, therefore, is that Service Commander may mistake another job for a configured service based on one of these attributes. For example, if you configure a service that is supposed to be listening on port 80, Service Commander will assume that any job listening on port 80 is indeed that service.
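
For example, a service whose liveness is checked by port could carry these two lines in its configuration (a minimal fragment; the full format is described under “YAML File Format” below):

check_alive: port
check_alive_criteria: 80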

Service Commander’s unique design is intended to offer a great deal of flexibility and ease of management through the use of simple .yaml files.

Installation

System Requirements

For most of the features of this tool, the following is required to be installed (the installation steps should handle these for you):

db2util (yum install db2util)
OpenJDK (yum install openjdk-11)
bash (yum install bash)
GNU coreutils (yum install coreutils-gnu)
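
If you ever need to install these prerequisites by hand, a single yum invocation using the package names above should cover them:

yum install db2util openjdk-11 bash coreutils-gnu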

The performance information support (perfinfo) has additional requirements that are not automatically installed, including:

Python 3 with the ibm_db database connector (yum install python3-ibm_db)
Required operating system support, which depends on your IBM i operating system level, as follows:
IBM i 7.4: included with base OS
IBM i 7.3: Group PTF SF99703 Level 11
IBM i 7.2: Group PTF SF99702 Level 23
IBM i 7.1 (and earlier): not supported

Option 1: Binary distribution

You can install the binary distribution by installing the service-commander package:

yum install service-commander

If you are not familiar with IBM i RPMs, see this documentation to get started.

Option 2: Build from source (for development or fix evaluation)

Feel free to build from the main branch to start making code contributions or to evaluate a fix or feature that has not yet been published. This process assumes your PATH environment variable is set up properly; if not, run:

PATH=/QOpenSys/pkgs/bin:$PATH
export PATH

The build itself can be done with the following steps:

yum install git ca-certificates-mozilla make-gnu
git clone https://github.com/ThePrez/ServiceCommander-IBMi/
cd ServiceCommander-IBMi
make install_with_runtime_dependencies

Basic usage

Usage of the command is summarized as:

Usage: sc [options] <operation> <service(s)>

Valid options include:
-v: verbose mode
--disable-colors: disable colored output
--splf: send output to *SPLF when submitting jobs to batch (instead of log)
--sampletime=x.x: sampling time(s) when gathering performance info (default is 1)
--ignore-globals: ignore globally-configured services

Valid operations include:
start: start the service (and any dependencies)
stop: stop the service (and dependent services)
restart: restart the service
check: check status of the service
info: print configuration info about the service
jobinfo: print which jobs the service is running in
perfinfo: print basic performance info about the service
loginfo: get log file info for the service (best guess only)
list: print service short name and friendly name

Valid formats of the <service(s)> specifier include:
- the short name of a configured service
- a special value of “all” to represent all configured services (same as “group:all”)
- a group identifier (e.g. “group:groupname”)
- the path to a YAML file with a service configuration
- an ad hoc service specification by port (for instance, “port:8080”)
- an ad hoc service specification by job name (for instance, “job:ZOOKEEPER”)
- an ad hoc service specification by subsystem and job name (for instance, “job:QHTTPSVR/ADMIN2”)

The above usage assumes the program is installed with the above installation steps and is therefore
launched with the sc script. Otherwise, if you’ve hand-built with Maven (mvn compile), you can specify arguments in exec.args (for instance, mvn exec:java -Dexec.args='start kafka').

Specifying options in environment variables

If you would like to set some of the tool’s options via environment variables, you may do so with one of the following:

SC_TCPSVR_OPTIONS, which will be processed when invoked via the STRTCPSVR/ENDTCPSVR commands
SC_OPTIONS, which will be processed on all invocations

For example, to gather verbose output when using STRTCPSVR, run the following before your STRTCPSVR command:

ADDENVVAR ENVVAR(SC_OPTIONS) VALUE('-v') REPLACE(*YES)
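
Similarly, to apply an option only when the service is driven through STRTCPSVR/ENDTCPSVR, you could set SC_TCPSVR_OPTIONS instead (the --splf option here is purely illustrative):

ADDENVVAR ENVVAR(SC_TCPSVR_OPTIONS) VALUE('--splf') REPLACE(*YES)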

Usage examples

Start the service named kafka:

sc start kafka

Stop the service named zookeeper:

sc stop zookeeper

Check the status of all configured services (all services belong to a special group named “all”):

sc check all

Try to start all configured services:

sc start all

Print information about all configured services:

sc info all

Try to start all services in the “host_servers” group:

sc start group:host_servers

List all services:

sc list all

List jobs running on port 8080:

sc jobinfo port:8080

Stop jobs running on port 8080:

sc stop port:8080

Check if anything is running on port 8080:

sc check port:8080

Start the service defined in a local file, myservice.yml:

sc start myservice.yml

Checking which ports are currently open

As of version 0.7.x, Service Commander also comes with a utility, scopenports, that allows you to see which ports are open.
Usage is as follows:

Usage: scopenports [options]
Valid options include:
-v: verbose mode
--mine: only show ports that you have listening

Example output when invoked with the --mine option:

The value in the service name column can be used with the sc command. For instance, with
the above example, if I wanted to see which job was running on port 62006, I could run:

sc jobinfo port:62006

Important Note: Currently, the scopenports utility can only show human-readable descriptions for services that have
been configured for sc’s use. To populate some common defaults, run sc_install_defaults.

Configuring Services

Initializing your configuration with defaults

If you’d like to start with pre-made configurations for common services, simply run:

sc_install_defaults

This will install service definitions for:

IBM i TCP servers
IBM i Host Servers
The Cron daemon (if you have cron installed)
MariaDB (if you have mariadb installed)

Through YAML configuration files

This tool allows you to define any services of interest in .yaml files. These files can be stored in any of the following locations:

A global directory (/QOpenSys/etc/sc/services). This, of course, requires you to have admin access (*ALLOBJ special authority).
A user-specific directory ($HOME/.sc/services)
If defined, the directory given by the services.dir system property

The file name must be in the format service_name.yaml (or service_name.yml), where “service_name” is the “simple name” of the service as used with this tool’s CLI. The service name must consist of only lowercase letters, numbers, hyphens, and underscores.

The file can also be located in any arbitrary directory, but it must then be explicitly passed to the sc command, for instance:

sc start /path/to/myservice.yaml

YAML File Format

See the samples directory for some sample service definitions. The following attributes may be specified in the service definition (.yaml) file:

Required fields

start_cmd: the command used to start the service
check_alive: the technique used to check whether the service is alive or not. This is either “jobname” or “port”.
check_alive_criteria: The criteria used when checking whether the service is alive or not. If check_alive is set to “port”, this is expected to be a port number. If check_alive is set to “jobname”, this is expected to be a job name, either in the format “jobname” or “subsystem/jobname”.
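
As a minimal sketch (assuming a hypothetical Node.js app that listens on port 8080), a service definition could consist of nothing more than these three fields:

start_cmd: node app.js
check_alive: port
check_alive_criteria: 8080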

Optional fields that are often needed/wanted

name: A “friendly” name of the service
dir: The working directory in which to run the startup/shutdown commands

Other optional fields

stop_cmd: The service shutdown command. If unspecified, the service will be located by port number or job name and its job(s) will be ended.
startup_wait_time: The wait time, in seconds, to wait for the service to start up (the default is 60 seconds if unspecified)
stop_wait_time: The wait time, in seconds, to wait for the service to stop (the default is 45 seconds if unspecified)
batch_mode: Whether or not to submit the service to batch
sbmjob_jobname: If submitting to batch, the custom job name to be used for the batch job
sbmjob_opts: If submitting to batch, custom options for the SBMJOB command (for instance, a custom JOBD)
environment_is_inheriting_vars: Whether the service inherits environment variables from the current environment (default is true)
environment_vars: Custom environment variables to be set when launching the service. Specify as an array of strings in “KEY=VALUE” format
service_dependencies: An array of services that this service depends on. This is the simple name of the service (for instance, if the dependency is defined as “myservice”, then it is expected to be defined in a file named myservice.yaml), not the “friendly” name of the service.
groups: Custom groups that this service belongs to. Groups can be used to start and stop sets of services in a single operation. Specify as an array of strings.
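
Putting several of these fields together, a hypothetical my_node_app.yaml might look like the following sketch (the values are illustrative only; see the samples directory for authoritative examples):

name: My Node.js application
dir: /home/MYUSR/mydir
start_cmd: node app.js
check_alive: port
check_alive_criteria: 8080
batch_mode: true
sbmjob_jobname: MYNODEAPP
environment_vars:
  - NODE_ENV=production
service_dependencies:
  - postgres
groups:
  - webapps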

The scinit tool can be used to create the YAML configuration files for you. Basic usage of the tool is simply:

scinit <program start command>

The idea is that you would simply:

cd to the directory where you’d normally start the service
Run the command you’d normally use to start the service, prefixed by scinit
Answer a series of questions about how you would like the service deployed
In doing so, scinit will create the YAML configuration file for you and also show you information about the newly-configured service.

For instance, if you would normally launch a Node.js application from /home/MYUSR/mydir by running node app.js, you would run:

cd /home/MYUSR/mydir
scinit node app.js

The scinit tool will ask you for a “short name” among other things. When done, a service configuration will be saved under that short
name. So, for instance, if your short name is “my_node_app”, you can run sc start my_node_app.

Ad hoc service definition

Ad hoc services can be specified on the sc command line in the format job:jobname or port:portnumber. In these instances, the operations will be performed on the specified jobs. This is determined by looking for
jobs matching the given job name or listening on the given port. The job name can be specified either in
jobname or subsystem/jobname format.

If an existing service definition is found (configured via YAML, as in the preceding section) that matches the
job name or port criteria, that service will be used. For instance, if you have a service configured to run on
port 80, then specifying sc info port:80 will show information about the service configured to run on port 80.

Ad hoc service definition is useful for quick checks without the need to create a YAML definition. It’s also
useful if you do not recall the service name, but remember the job name or port.

It is also useful for cases where you just want to find out who (if anyone) is using a certain port. For instance,
sc jobinfo port:8080 will show you which job is listening on port 8080. Similarly, sc stop port:8080 will kill
whatever job is running on port 8080.

Demo (video)

Automatically restarting a service if it fails

Currently, this tool does not have built-in monitoring and restart capabilities. This may be a future enhancement. In the meantime, one can use simple scripting to accomplish a similar task. For instance, to check every 40 seconds and ensure that the navigator service is running, you could submit a job like this (replace the sleep time, service name, and submitted job name to match your use case):

SBMJOB CMD(CALL PGM(QP2SHELL2) PARM('/QOpenSys/usr/bin/sh' '-c' 'while :; do sleep 40 && /QOpenSys/pkgs/bin/sc start navigator >/dev/null 2>&1 ; done')) JOB(NAVMON) JOBD(*USRPRF) JOBQ(QUSRNOMAX)

This will result in several jobs that continuously check on the service and attempt to start it if the service is dead. If you wish to stop this behavior, simply kill the jobs. In the above example, the job name is NAVMON, so the WRKACTJOB command to do this interactively looks like:

WRKACTJOB JOB(NAVMON)

Testimonials

“I use this a lot for my own personal use. Might be useless for the rest of the world. I don’t know, though.”

  –@ThePrez, creator of Service Commander

STRTCPSVR Integration

Service Commander now has integration with the system STRTCPSVR and ENDTCPSVR commands. This feature is experimental and may be removed
if it proves too problematic.

To integrate with the STRTCPSVR and ENDTCPSVR commands, you can run the following command as an admin user:

/QOpenSys/pkgs/lib/sc/tcpsvr/install_sc_tcpsvr

This will create the SCOMMANDER library and compile and install the TCP server program into that library. To use a different
library, just set the SCTARGET variable. For instance:

SCTARGET=mylib /QOpenSys/pkgs/lib/sc/tcpsvr/install_sc_tcpsvr

If you need to compile for a previous release of IBM i, set the SCTGTRLS variable to the required value of the CRTCMOD TGTRLS parameter. Example for IBM i 7.1:

SCTGTRLS=V7R1M0 /QOpenSys/pkgs/lib/sc/tcpsvr/install_sc_tcpsvr

After doing so, you can run the *SC TCP server commands, specifying the simple name of the sc-managed service as the instance name. For example:

STRTCPSVR SERVER(*SC) INSTANCE('kafka')
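
Ending the service works the same way through the ENDTCPSVR command, for example:

ENDTCPSVR SERVER(*SC) INSTANCE('kafka')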

Important Notes about AUTOSTART(*YES)

You can set the *SC server to autostart via CHGTCPSVR SVRSPCVAL(*SC) AUTOSTART(*YES). However, great care must be taken in order for this to work properly and not create a security exposure. When STRTCPSVR runs at IPL time, the task will run under the QTCP user profile. This user profile does not have *ALLOBJ authority, nor does it have authority to submit jobs as other user profiles. Thus, in order for the autostart job to function properly, the QTCP user profile must have access to run the commands needed to start the service, and the service must not submit jobs to batch as a specific user. Be aware that adding QTCP to new group profiles or granting it special authorities may represent a security exposure. Also, due to the highly flexible nature of this tool, it is not good practice to run this command as an elevated user in an unattended fashion. In summary, it is likely not a good idea to use AUTOSTART(*YES).

Special groups used by STRTCPSVR/ENDTCPSVR
There are a couple special groups used by the TCP server support. You can define your services to be members of one or more of these groups:

default, which is what’s started or ended if no instance is specified (i.e. STRTCPSVR SERVER(*SC))
autostart, which is what’s started when invoked on the *AUTOSTART instance (i.e. STRTCPSVR SERVER(*SC) INSTANCE(*AUTOSTART))
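
For instance, a service could opt into both behaviors by listing these groups in its definition (a fragment only, using the groups field described earlier):

groups:
  - default
  - autostart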

Log4j vulnerabilities for IBM i

The Log4j vulnerabilities came to light earlier this month. I have not written about it, as others have a better understanding of how this affects the operating system we love, and have written good articles about it too.

I was sent this link to an IBM blog entry that describes what you can do to remediate these vulnerabilities, and I thought I would share it with you. The blog post is general to all IBM products, not just IBM i and Power Systems.

https://www.ibm.com/blogs/psirt/an-update-on-the-apache-log4j-cve-2021-44228-vulnerability/

Please share this link with your IBM i system administrators, and ask them to check if any updates need to be applied to your IBM products and environments.

IBM Certified Developer – IBM i 7.x

Certification

IBM Certified Developer – IBM i 7.x

Group: IBM Systems – Power Systems

Certification status: Live

PartnerWorld code: C9002700

Replaces PW code: N/A

Required exam: IBM i 7.x Developer

Exam status: Live

An Assessment Exam is an online test that results in a score report to help you gauge your preparedness. Assessment exams can be booked through Pearson VUE.

The Sample Test is designed to give you an idea of the type of questions you can expect to see on the exam.

This exam is available in the following languages: English

Price per exam: $200 USD

IBM Business Partners, MSPs, and ISVs amplify growth with IBM Power Systems Virtual Server

To achieve our mission of helping enterprises across the globe succeed through innovation, our team works alongside IBM® Business Partners within our valued, growing ecosystem.

Our network consists of IBM Business Partner companies, independent software vendors (ISVs), managed service providers (MSPs), global system integrators (GSIs) and other integral organizations that share our goals. Together, our Business Partners have achieved new levels of growth themselves through innovative products, including our latest IBM Power® offering, IBM Power Systems Virtual Server.

In this third installment of our blog series about IBM Power Systems Virtual Server, we’re looking at real-world examples of how growth is possible through scalable and transformative software hosting and management solutions.

IBM Power Systems Virtual Servers at a glance

Our team is thrilled about the impact our new hybrid cloud solution is already making for our Business Partners. We’ve combined the reliability and sheer processing power of on-premises Power servers with the scalability and flexibility that comes with a hybrid cloud environment. The result enables end-users to enjoy true hybrid cloud solutions that are strategically designed to be a seamless extension of their on-premises Power servers and IBM Power Systems Virtual Server data centers. IBM Power Systems Virtual Server is designed to deliver low-latency capacity thanks to 14 data centers in 7 countries, with more on the way. These servers can run IBM AIX®, IBM i and Linux® workloads, including SAP SUSE Linux Enterprise Server (SLES) 12 and 15 Open Virtualization Alliance (OVA) boot images and Red Hat® Enterprise Linux 8.1 and 8.2.

ISVs and IBM Power Systems Virtual Server

Let’s look at some examples of how ISVs are currently using IBM Power Systems Virtual Servers to promote innovation, transformation and peace of mind.

Created as a global solution

While Iptor has over 1,000 clients across the globe, its data centers were only in Denmark. Its goal was not only to globalize its operations, but also to automate and streamline its processes. The worldwide data centers of IBM Power Systems Virtual Server helped Iptor maximize uptime, while its image capture feature helped Iptor meet client needs more quickly and efficiently. Finally, our DevOps pipeline helped make installation and upgrades universally accessible so customers in its supply chain could stay up to date.

A security-rich environment for sensitive data

Silverlake, a provider of a software-as-a-service (SaaS) digital banking cloud platform, joined our ecosystem to support IBM Cloud® for Financial Services. Our collaboration allowed Silverlake to create scalable and security-rich virtual digital banking solutions in the highly regulated financial industry. Silverlake uses IBM Power Systems Virtual Server as an efficient and effective way to virtualize its solutions while meeting compliances and lowering risk.

 Sandbox capabilities

Many of our ISVs that are already part of the IBM Power Systems ecosystem have found success by conducting proof of concepts (PoCs) in IBM Power Systems Virtual Server data centers. These vendors have containerized their applications, while also running demo versions of their application in a client-accessible sandbox. With this structure in place, prospects can try out applications in an isolated environment. This offering is designed to help improve security and uptime while helping environments meet spikes in traffic.

MSPs, Business Partners and IBM Power Systems Virtual Server

It’s always vital for MSPs, resellers and Business Partners to grow and discover new opportunities and revenue paths. IBM Power Systems Virtual Server is an excellent way to extend Power Systems offerings in a hybrid cloud environment for current and new customers while being supported and covered by the IBM brand. Moreover, we take infrastructure-as-a-service (IaaS) management responsibilities from our MSPs and take care of them ourselves, including maintenance, data center floor space, updates and the costs of running a data center. Handling the maintenance of infrastructure not only helps cut costs by eliminating data center expenses, but also helps increase revenue margins and maximize uptime and availability.

As an IBM partner, you now have the opportunity to build partner-driven solutions on top of IBM Power Systems Virtual Server. We work alongside MSPs looking to grow in the enterprise space and shift their focus to services, such as disaster recovery (DR), migration, end-to-end management or reselling the offering. IBM Power Systems Virtual Server helps the clients of our Business Partners reach new geographies, regardless of their location, to further achieve globalization. Clients also have quicker access to new Power Systems features and functions.

At the end of the day, it’s all about making a better experience for the end-user. Plus, being the only on-premises and off-premises certified Power solution for SAP HANA and SAP NetWeaver platform workloads, IBM can help enterprises meet the deadline to migrate to SAP S/4 HANA by 2027.

Success through collaboration

A leader in hosting and managed services, Connectria has years of demonstrated and recognized experience serving IBM i customers. Connectria is now offering IBM Power Systems Virtual Server solutions to both its install base and new prospects after seeing strong demand for such a virtualized Power solution.

The Connectria team was excited to work with us, embracing opportunities to access a new market of high-end P30-tier licensing clients, as well as several other IBM i software tiers. We provided a variety of managed services and licensing specifically tailored to Connectria’s goals, growth objectives and current offerings. Now, Connectria is proud to offer our hybrid cloud infrastructure portfolio solutions bundled together alongside its management solutions. Connectria is also extending and shifting current clients’ infrastructure, including older on-premises Power Systems, to IBM Power Systems Virtual Server. As a result, Connectria clients can stay modernized by staying virtual.

Driven by the benefits of this latest offering combined with competitive pricing, Connectria’s sales team is proposing IBM Power Systems Virtual Server solutions worldwide. Clients can now experience a new level of performance, experience and peace of mind.

A look at our roadmap

We are always pushing to add new capacities and capabilities to IBM Power Systems Virtual Server. In addition to upgrading our storage to IBM FlashSystem® 9200, here’s a high-level look at what end-users are finding beneficial. We’ve recently implemented virtual private network as a service (VPNaaS) solutions into this offering, as well as new network automation features. Coming soon is a whole new way to manage your IBM Power Systems Virtual Server, with IBM Cloud credit management. This credit system creates a simplified and efficient way for you to get the most out of your virtualized solution.

Getting started

Let’s work together to see how IBM Power Systems Virtual Server can drive success for you and your clients. Speak to an expert today to see how it fits into your strategy. If you’re in the US or Canada, you can call us at 1-866-872-3902 to learn more about the new IBM Power Systems Virtual Server offering.

 

