How I ended up at IBM

I started at IBM in January of 1982 as a co-op, which is like an intern. IBM was not my first choice… I was a biology major at Rensselaer Polytechnic Institute in Troy NY. RPI was a technical college like MIT but located in New York and less well known to the general public. Technical people recognized RPI as a top-notch engineering school. I went there because their computer center was in a former church… really.

I was doing Biology because my Dad was a dentist, and his father was a dentist… it seemed like a good profession to be in, and a way to carry on the family legacy. Any chance I got though… I would be found writing code. Back then you got some time on a system (a time-sharing system) for each class you took that required it. But it was hard to get more time… unless you bartered with people: “I’ll write your project for you if you give me your computer time”. I was one of those people who actually read the manual from front to back and remembered most of it. I could do things on MTS (Michigan Terminal System) that others had no idea you could do.

Being a Biology major it turned out I didn’t need quite as many credits as some other majors. I decided to apply for a semester internship using the RPI co-op office. I found the coolest job doing computer simulation of cell chemistry that was intended to lead up to simulating a cell. This was at Lawrence Livermore Lab in California. I did a phone interview, was accepted and I was getting ready to pack. I got a call that the budget was cut and they had to rescind the offer. Now I was kind of stuck, I hadn’t picked any classes for the spring semester… The co-op office jumped in and said that IBM might have something interesting. So I did a call with a group in East Fishkill and accepted the job.

Before I even got the next details, they handed me to another group in Poughkeepsie. They claimed to have an even better internship. Another friend of mine, Rich, was also doing a co-op, and I agreed to share an apartment in Poughkeepsie with him. My recollection was I moved in on Sunday January 3rd, and that evening we had an ice storm and I couldn’t get my car out on that first Monday… it may have been that co-ops didn’t start until Tuesday.

After filling out paperwork, I got to my building, a carriage house about 3 miles from the Poughkeepsie main site. My boss was Charlie Daniele and the area was site safety and chemical control. Charlie was the product safety manager and the manager of the Industrial Hygiene department. I was assigned to industrial hygiene. I got a pager, and would put sound and radiation dosimeters on people, put air samplers out for asbestos and other things. If there was a chemical spill I was supposed to go there and determine if the area needed to be evacuated. After 2 weeks this didn’t excite me.

I thought about going back to RPI… but Charlie asked me what I might want to do. He had 3-4 people working on computerizing various tasks in the area. I said I would do that. Well they said we can have you help Lorraine do data input… and if you do that… we will see if you can help write some code… CODE… this excited me. They dropped a listing (green stripe paper) about 8 inches high with 60 or so lines per page… and each entry had to be put into a new system.

So starting with page one I typed in each line of information… and it took a long time… between Lorraine and me it might take 3 months. Not so much fun. One of the programmers, Barry, pointed out that I could update more than one row at a time… which I started doing. I estimated we might be able to finish in a month.

Then a fateful event happened… I accidentally omitted the equivalent of the ‘when’ clause and updated every entry in the database to ‘Smooth Black Paint Spec 25’ or something like that… ‘Oh crap’… this was a huge problem… since the database was used by others. I went to Frank, the team leader and he said there was no backup. At this point I had a choice… go back to RPI… or stay and fix this problem…
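That blunder is the classic update-without-a-filter mistake. The database product and schema from 1982 are long gone, so here is a purely illustrative modern sketch (using Python's sqlite3, with an invented table and column names) of how omitting that one clause rewrites every row:

```python
import sqlite3

# Toy stand-in for the chemical-spec database in the story;
# the table and its contents are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE specs (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO specs (name) VALUES (?)",
    [("Red Primer Spec 7",), ("Clear Coat Spec 12",), ("Degreaser Spec 3",)],
)

# The intended update touches one row...
conn.execute("UPDATE specs SET name = 'Smooth Black Paint Spec 25' WHERE id = 1")

# ...but omitting the WHERE clause (the 'when' clause in the story)
# rewrites every row in the table.
conn.execute("UPDATE specs SET name = 'Smooth Black Paint Spec 25'")

rows = [r[0] for r in conn.execute("SELECT name FROM specs")]
print(rows)  # every entry is now 'Smooth Black Paint Spec 25'
```

The lesson generalizes to any bulk-update interface: the broader the statement's reach, the more a missing qualifier costs.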

I decided to stay and fix it. As it turns out the data management team had a recent backup and we didn’t have to recover too much. I worked on entering data. In no time we had it done.

They then started to try and teach me MIS360… an IBM file management product. Most of the programmers in IBM in the IT area were retrained machinists and the like… I took the books home and read them. I built my first application in about a week (others might have taken 3-4 months). I built 3 more of those applications during my co-op. I also learned about 3270 terminals, printers and controllers and ordered and installed 40 or so across 4 buildings. I learned APL and built a report tool on chemical authorization.

Needless to say I redeemed myself, and was offered a permanent job on my exit interview.

So there are a couple lessons, such as when one door closes another opens. (LLNL to IBM)… Sometimes when something seems like a bad situation there is a silver lining (data entry to programming)… and maybe the most important is you only have your personal integrity (sticking around to repair the mess I created).

Next I will talk about how I ended up in MVS development.

z/OS 3.1 Generally Available

z/OS 3.1 has become generally available. This is the first new z/OS version in 10 years. The focus of version 3 is going to be around AI, initially a new AI framework used to improve WLM-managed initiators. You will use a z/OSMF application to train the AI code on your batch usage; it will then use that information to predict initiator usage and needs. The result is a model that should be better aligned with your usage, making z/OS more responsive and requiring less manual operation.
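IBM has not published the internals of that model, so as a purely illustrative sketch of the idea (predicting initiator needs from observed batch history), here is a toy per-hour average predictor in Python; the function names and numbers are all invented:

```python
import math
from collections import defaultdict

def train(history):
    """history: list of (hour, active_initiators) samples from batch logs."""
    totals, counts = defaultdict(int), defaultdict(int)
    for hour, active in history:
        totals[hour] += active
        counts[hour] += 1
    # Learned model: average concurrent initiators observed per hour of day.
    return {h: totals[h] / counts[h] for h in totals}

def predict(model, hour, default=2):
    # Round up so the predicted pool never starves the observed demand;
    # fall back to a default for hours with no training data.
    return math.ceil(model.get(hour, default))

model = train([(9, 4), (9, 6), (14, 12), (14, 10), (22, 3)])
print(predict(model, 14))  # 11
```

The real facility is of course far richer, but the shape is the same: train on your own workload, then size resources from the learned pattern instead of static tuning.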

z/OS 3.1 also of course has all the incremental improvements you’ve come to look for in a new z/OS release from improvements in JES2, SDSF, RACF, RMF etc.

A couple of my favorites are the module fetch monitor in SDSF which tracks loads and helps you to identify modules that might be better placed in LLA. Another is the RMF Grafana user interface which brings a high function graphical user interface to RMF.

I also like the JES2 policy support, which starts to provide default policies that will help you avoid out-of-space situations on spool.

Announced – OpenShift for zCX

I am very excited that we have just announced zCX Foundation for Red Hat OpenShift. This is an IBM offering of OpenShift Container Platform for the z/OS operating system. It provides all the material to install and run OpenShift on the z/OS zCX virtualization engine.

Purchasing this offering will entitle the client to a single-core license to Red Hat’s OpenShift product. This is slightly different packaging from other OpenShift offerings: this one comes from IBM, like the limited OpenShift entitlement in an IBM Cloud Pak. Red Hat will provide backend support for the IBM offering but is not going to list the IBM offering on their site or sell the offering directly.

A typical OpenShift on zCX setup would be 5 zCX servers: 3 control plane nodes and 2 worker nodes. There are other requirements such as DNS, a load balancer, and file services. Once you have a cluster you can run many different kinds of workloads in that cluster through containers.

IBM’s strategic OpenShift Container Platform now even runs on z/OS!

To learn more, see https://www.ibm.com/docs/en/zcxrhos/1.1.0

z/OSMF Tidbits

z/OSMF, or z/OS Management Facility, is a component of z/OS. It has been expanded every release of z/OS since z/OS V1.11. The latest version, V2.5, has introduced a desktop user interface. Through the desktop UI we are giving customers an alternative to the 3270 user interface for many parts of z/OS management and administration. The idea is to provide a base component that gives the tools required to manage z/OS. It can also act as a pluggable interface to connect to other tools, whether on z/OS or on a non-z/OS platform.

In my opinion, z/OSMF will never be able to provide all the management tools that customers want or need. This is why it is pluggable. Should a customer want to connect to other browser based tools they can save links in z/OSMF and publish them for others in their installation to use as well.

Zowe is just one UI that could be linked to from z/OSMF. z/OSMF is intended to be present in every z/OS deliverable, and it is highly recommended that customers configure at least one instance in each sysplex. A customer can configure Zowe to link to z/OSMF as well; they might choose to make Zowe their go-to interface. There really is no problem doing that either. Here, though, I want to talk about what you can do from the desktop UI.

First when you login you will see a desktop. There may be some icons on the desktop. If you right click on the desktop you can create a folder. Folders can contain objects. One type of object is an application icon. Another is a link which is like a bookmark stored on z/OS. Links are private to a particular user. You can give folders names, you can arrange them on the desktop.

If you go to the action bar on the bottom, you will see icons on the left that can take you to additional information about z/OSMF. One of the shortcuts will enable you to explore the z/OSMF services (REST APIs) through their OpenAPI specification, commonly known as Swagger documentation. The API explorer function provides a simple interface to even try out the APIs.
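The services the API explorer documents are plain REST endpoints. As an illustration, this sketch only constructs the request URL for the z/OSMF jobs service; the host name is a placeholder and nothing is actually sent:

```python
from urllib.parse import urlencode

def zosmf_jobs_url(host, owner="*", prefix="*"):
    """Build the query URL for the z/OSMF REST jobs service.

    'host' is a placeholder; a real call would also need credentials
    and the X-CSRF-ZOSMF-HEADER request header.
    """
    query = urlencode({"owner": owner, "prefix": prefix})
    return f"https://{host}/zosmf/restjobs/jobs?{query}"

url = zosmf_jobs_url("zosmf.example.com", owner="IBMUSER", prefix="MYJOB*")
print(url)
```

The API explorer lets you try exactly these requests from the browser, with the parameters and responses documented next to the form.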

On the bottom right side is a magnifying glass. This will let you look for datasets or files. It is like ISPF 3.4: you can search for datasets by name. When you find one you can click on it and it will bring up a member list, and clicking on a member will bring up a browse window. You can also save a shortcut to either a dataset, member, or file on the desktop or into a folder. You can toggle the browse window to edit mode to edit a dataset. If the type of data is REXX, JCL, XML, or HTML then a syntax highlighter is enabled. If it sees a pattern that appears to be a dataset or a file path it will enable hot linking on that name, which will browse that object.
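The exact pattern z/OSMF uses for that detection isn't documented here, but a rough approximation of spotting dataset names for hot linking looks like this (a simplified regex; real dataset rules also cover member names in parentheses and the 44-character overall limit):

```python
import re

# Each qualifier: 1-8 chars, starting with a letter or national
# character (@ # $), joined by dots. Simplified for illustration.
DSN = re.compile(r"\b[A-Z@#$][A-Z0-9@#$]{0,7}(?:\.[A-Z@#$][A-Z0-9@#$]{0,7})+\b")

text = "See SYS1.PROCLIB and USER.JCL.LIB for the procs"
print(DSN.findall(text))  # ['SYS1.PROCLIB', 'USER.JCL.LIB']
```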

JCL objects can be submitted to the local z/OS system to run; this causes a basic spool browser to be started in another window. Through that interface you can monitor jobs and view the output.

So the desktop not only provides access to the various z/OSMF functions, it also provides replacement function for ISPF 1 and 2 (browse, edit) and part of ISPF 3. You can allocate datasets, search for datasets etc. You can also do output processing, ISPF 3.8.

What function in ISPF would you like to see added to the browser interface?

z/OS Simplification

I could write about aspects of simplification for z/OS for a week… but I will try and at least start this topic with a couple insights.

z/OS has been around for over 50 years. It was originally developed as an overarching replacement for the system ‘monitors’ that were around up to 1964. The ‘monitors’ that were around at the time would hardly be considered much more than a simple wrapper in front of the hardware. You can think of them kind of like the BIOS on a PC today.

Prior to OS/360, IBM computers required human operators. They would take your deck of punched cards and put them into a card reader. The cards represented a program, a set of commands, or a collection of data. Frequently the cards were copied to a reel-to-reel tape to make them easier to handle. Then the operator would run a program by issuing a set of commands at the operator console. The program might select customers who are located in New York so that you could run another program to generate a printed report. An operator would read messages issued by the system to see what was happening; the operators were integral to recovery procedures as well.

One stated goal of OS/360 was automating the physical operators (people) out of the system: in theory you could run a job without operators. Frankly, nothing could be further from the truth. One aspect of z/OS complexity is that its history is steeped in OS/360. That is good and bad. The system is very stable and predictable, but it often seems to be telling the wrong person about aspects of a problem that they aren’t prepared to deal with.

As an example, when a program fails hard in z/OS, the system will issue an ABEND (abnormal end) and take a dump of memory. The message the job gets includes the internal state of the computer, including the registers at the time of error. The result is that, rather than a simple message such as ‘job failed’, you get a lot of odd information that tells someone what happened, but typically not a low-skilled programmer or user.

Sometimes making z/OS easier is just interpreting these kinds of results so that a normal technician doesn’t have to puzzle them out. I mean, we all ‘know’ that an abend 822 is out of memory, an abend 047 is an authorization failure, and a 106 is a module not found. Ok, maybe we don’t all know that. This is just one small aspect of simplification that we are working to make better.
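Interpreting those codes is mostly a lookup problem. A trivial sketch of the idea, with paraphrased (not official) descriptions for the codes mentioned above:

```python
# Hints are paraphrased from the text above, not official IBM message text.
ABEND_HINTS = {
    "822": "could not obtain the requested region (out of memory)",
    "047": "unauthorized use of a restricted service (authorization failure)",
    "106": "module could not be loaded (module not found)",
}

def explain(abend_code):
    """Translate a raw abend code into a plain-language hint."""
    hint = ABEND_HINTS.get(abend_code)
    if hint is None:
        return f"Abend {abend_code}: no hint available"
    return f"Abend {abend_code}: {hint}"

print(explain("822"))
```

The real work, of course, is curating thousands of such explanations and surfacing them where the failure is reported, rather than the table lookup itself.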

We have been using multiple strategies to simplify z/OS: improving messages, converting return and reason codes to messages, giving you tools to look up the messages, and eliminating messages or even entire tasks.

Another aspect of z/OS is that it traditionally assumes upper-case text… if you type in a lower-case character it is folded to upper case in some interfaces. However, when you deal with standards-based interfaces you are more likely to see case sensitivity, as that is more common in newer APIs. This split personality is another aspect of complexity that we need to work on to simplify the z/OS experience.
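Python happens to ship an EBCDIC codec (code page 037, one of the code pages commonly used on z/OS), so the split personality is easy to demonstrate: the same folded text has different byte values under EBCDIC and ASCII, even though the characters agree.

```python
s = "Start jes2"

folded = s.upper()               # what an upper-case-only interface sees
ebcdic = folded.encode("cp037")  # EBCDIC bytes (z/OS-style code page)
ascii_ = folded.encode("ascii")  # ASCII bytes for the same text

print(folded)            # 'START JES2'
print(ebcdic != ascii_)  # True: same characters, different byte values
```

Every standards-based interface that crosses this boundary has to pick an encoding and decide whether case is significant, which is exactly where the complexity leaks through to users.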

In recent years we have delivered a local Knowledge Center that mimics the one on the internet. This local version is called KC4Z. You can run it on your z/OS system in cases where you can’t easily get to the internet. There are clients who don’t allow internet access… but you need the documentation. This provides a solution there. Additionally if you have a disaster and can’t access the internet for some reason but you need to see the documentation this can be a critical component. The KC4Z runs in a WebSphere Liberty runtime and requires you to maintain the documentation by downloading from an internet site. A few weeks back IBM rolled out a new KC on the internet called IBM Docs. While it has a lot of good features it will be a little while until we can replace KC4Z.

Other aspects, such as the 3270 UI and some of the terminology, are also complexities people have to deal with. The 3270 UI, once you become used to it, is actually very efficient. These days people don’t want to learn a single-platform skill like that, so we are working to provide web-based alternatives such as z/OSMF and Zowe. While it won’t fundamentally make z/OS easier, it will eliminate one of the skill requirements. The terminology complexity: for example, what others call memory or RAM, z/OS calls central storage; we call disk DASD; we call CPUs general purpose and specialty engines; and so on. In software, SMP/E is used to install software and has things like FMIDs, HOLDDATA, global zones, etc. Some of this we can simplify just by using the names everyone else does. However, in some places the names are very specific and helpful in expressing problems or solutions.

In a future blog I will discuss more about z/OSMF.

z/OS 2.5 Simplification by Removal

For z/OS 2.5, which we recently announced, there have been a number of statements of direction around changes in the functional content of z/OS. I will focus on the planned removals in this blog.

First, and by far the most significant, is the removal of JES3 (Job Entry Subsystem 3). z/OS has had two JES implementations that grew up from customer code: JES2 (originally HASP) and JES3 (originally ASP). For years JES2 has dominated the customer base in number of sites, operating system images, and even size of image. The largest customers run JES2 these days. Years ago, before Parallel Sysplex, it was common for large customers to run JES3. Without any encouragement by IBM, customers have largely moved to JES2. The features being added to the JES components have therefore been focused on JES2. As a result, IBM announced in 2017 that JES3 would be removed from z/OS in the future. In 2019 this was reiterated in another announcement. The z/OS 2.5 preview reiterated this again and indicated that 2.5 would be the last release to include JES3.

JES3 customers have the option of migrating to JES2, which is included in z/OS, or moving to JES3plus, a JES3 derivative product offered by Phoenix Software. Running JES3plus on z/OS would maintain all the existing exits, JCL/JECL, operator commands, and operational procedures, while a JES2 migration requires customers to change all of those items. The migration to JES2 can be time consuming, but there are vendors who can do a lot of the migration for a fee.

In addition to JES3, we also announced that the Bulk Data Transfer (BDT) feature will be removed in a future deliverable; in fact, BDT is only planned to be retained in z/OS through V2.5. BDT comes in two priced features: SNA/NJE and File to File (FTF). The BDT SNA/NJE version is only applicable to JES3 customers; JES2 customers have SNA/NJE built into JES2. BDT FTF is applicable to both JES2 and JES3 customers. But BDT is a very old technology base; it helps customers copy and move datasets from one system to another. An astute reader will recognize that moving datasets could be done today with FTP, with SSH, with MQ file transfer, with Sterling Connect:Direct, or any number of vendor options.

So again here BDT customers have the option of migrating their functional requirement to one of the other IBM options or they might want to look at the Phoenix offering announced for BDT similar to JES3Plus.

Beyond this, we removed the need to specify MAXSHAREPAGES. This setting was put in place to control the consumption of common storage that used to accompany the use of shared memory. The real storage manager no longer needs any common storage for using shared memory, so the option becomes obsolete.

We’ve also removed some HFS options (as opposed to zFS) since we have deprecated HFS. There are also some removals in the communication area: we changed the SSL configurations of TN3270, FTP, and DCAS to use AT-TLS policy. This not only unifies the configuration but also brings the code up to the most recent level of capability.

Lastly in ISPF, the workstation agent has been removed as well as the support in ISPF for HFS.

While there are quite a few removals these should reduce the need to know about these features or capabilities as they are essentially no longer needed.

z/OS 2.5 Preview

I am excited to be part of today’s announcement of z/OS 2.5, the next release of z/OS. This release contains a number of new capabilities and a sweeping up of many continuous delivery items that clients should appreciate.

I have been directly involved in selecting content for z/OS for the last 13 years. This is the 8th major z/OS release that I have worked on recently. Some of you may know that I worked on MVS, MVS/ESA, and OS/390 before I ventured out to work in WebSphere.

This release of z/OS includes hardware support, resiliency improvements, security improvements, system management improvements, performance improvements, application improvements and simplification improvements. The chart deck that I present on this is located here. https://github.com/IBM/IBM-Z-zOS/tree/main/zOS-Education/zOS-V2.5-Education

I pre-recorded my SHARE version of this deck for Thursday’s presentation, and I pre-recorded another version for use outside of SHARE.

A couple of highlights I will point out in V2.5. ServerPac will now come in z/OSMF Portable Software Instance format, which will bring a browser-based experience to installation and configuration of z/OS. This is complemented by our software management component, which brings a browser-based experience to installation of service. With these two changes we have really moved the bar in the capability of the browser UI. While it likely won’t convince diehard ISPF users… it should tip the scale for some newer-tenure clients to make the browser UI their primary interface.

Both the upgrade workflows and the ICSF component will now ship as z/OS parts and be serviced using typical service tooling, rather than being downloaded from GitHub (in the case of the workflows) or from a web server (in the case of ICSF). These kinds of changes reduce the variation in delivery model, which should result in a more consistent experience.

z/OSMF has moved to the desktop view only. This is a better user interface, providing more screen area for applications, and the opportunity for multi-tasking is now present: you can have two or more windows active at the same time. Newer-tenure system programmers find the desktop more intuitive. From my perspective it is no different than a KDE desktop in Linux, the Windows desktop, or the MacOS desktop. Now we have a z/OS desktop. Clearly we will have a different approach, as z/OS is not a personal operating system. But you can have folders, objects in folders, etc. Some of the objects can be datasets, Unix directories and files, or PDS(E) members, and you can create/delete, browse/edit, rename, copy, etc. These objects are really aliases, or virtual links to the real objects. We also support jobs this way: you can submit them, check status, get output, and so on.

Let me stop here and talk about some other new capabilities in my next post.

Started Jobs

In z/OS we have this entity called a started task. It is like a process in Unix. It isn’t something anyone can do; it requires operator authority. As a console operator you issue the command Start xxx, where xxx is the name of a previously prepared JCL procedure. A JCL ‘proc’ is a procedure in JCL, a callable piece of function, so we use the term ‘start a proc’. A procedure includes a program or series of programs that get invoked with optional parameters and whatever data is required for that program.

The security is first through the ability to access the operator console, which we call console authority, and second through your access to the Start command at all. The OPERCMDS class in SAF secures commands by name, so you would need permission to the Start command. Next is your ability to search and update a ‘proclib’. There are some procs that have well-known names and are located in the procedure library. Other procs you might have to guess, but if you can read the proclib then you can see the existing procs. The procedure library is defined in 2 places. One place is in MSTRJCL, the master JCL. This is the proc that is used to start the master scheduler, which is up all the time z/OS is running. There are DDs (data definitions) in the master JCL for the master proclib (IEFPDSI). This proclib is used whenever a proc is started that is directed to, or defaults to, the master scheduler. For example, S xxx,SUB=MSTR will direct a start command to the master scheduler. Primary subsystems such as JES2, when started, will use the MSTRJCL IEFPDSI DD to locate the proc of JES2 itself. Once the primary job scheduler is up, it takes on the role of the primary scheduler and all start commands are directed to it unless otherwise directed.

When you issue a start command once JES2 is up, the proclib used will be dictated by the STCCLASS statement in the JES2 parms. Those parms are located via the JES2 proc found in the MSTRJCL. Typically the proclib concatenation has “SYS1.PROCLIB”, “SYS1.USER.PROCLIB”, etc. To make a new proc you just place it into one of the libraries that you have permission to update. You need to be careful when updating proclibs to do so safely.
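Conceptually, resolving a proc name against a proclib concatenation is a first-match search down the ordered libraries. A sketch of that semantics, with invented library contents:

```python
# Ordered proclib concatenation: (library name, member names it contains).
# The member lists here are invented stand-ins for illustration.
PROCLIB_CONCAT = [
    ("SYS1.USER.PROCLIB", {"MYAPP", "BACKUP"}),
    ("SYS1.PROCLIB", {"JES2", "BACKUP", "LLA"}),
]

def find_proc(name):
    """Return the first library in the concatenation containing the proc."""
    for library, members in PROCLIB_CONCAT:
        if name in members:
            return library
    return None

print(find_proc("BACKUP"))  # 'SYS1.USER.PROCLIB': earlier library wins
print(find_proc("JES2"))    # 'SYS1.PROCLIB'
```

This first-match behavior is also why updating proclibs calls for care: placing a member early in the concatenation shadows a same-named member later in it.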

A Start command will invoke a proc, and a proc defines the started task. Normally that’s all you need to know. The JOB statement and the EXEC statement are constructed by the Start command to invoke the proc. There isn’t much configurability to the JOB statement. Further, if you want to add a JCLLIB statement or modify the output class for a particular proc, this isn’t easily done.

A Started Job is the same as a started task, but it replaces the JOB statement created by the Start command with one you build. This can be very helpful. The key thing to note is that the MSTRJCL has a DD called IEFJOBS; the system will look there for started jobs.

z/OS Container Extensions and SMU

z/OS Container Extensions has continued to demonstrate our vision of being able to run Linux applications on z/OS. Our first customer is in production with Service Management Unite – SMU. Let me give you a brief outline of why that is so important.

For many years, when a group was developing code that was truly platform neutral, they would exclude z/OS. Why? Well, because z/OS uses EBCDIC encoding, it implements some Unix functions in a compliant way but differently from Linux or Windows, it can be hard to locate a z/OS machine to do development and testing, etc.

Our Unix branded facility, called Unix System Services, was a faithful implementation of the POSIX standard, but the POSIX standard isn’t everything you need to implement to be the same as Linux. To be fair, we have had a lot of very successful ports of applications from other Unix platforms to z/OS.

With zCX, we have a Linux operating system running right inside a z/OS address space. It is close by in the sense that the TCP/IP communication is low latency and high performance because we move data using cross memory instructions. Operationally it is just like any other started task, you start and stop these with operator commands. That means that your automation can easily control the zCX servers.

Service Management Unite is a product that produces a dashboard on top of select IBM systems management products. The SMU implementation uses sockets to gather information about the systems it is displaying. It was written to a Linux platform programming model. The team that owns it in IBM had made it work on Linux x86 and Linux on z. To run SMU on z/OS’s zCX they merely had to create a docker image and bring it over. It runs in a binary compatible manner.

The customer used SMU on a Linux x86 platform. They say that it took them some time to get the SMU server defined by the distributed team and set up so that the communication worked and was secure. It was not the worst experience they ever had, but they did notice a lack of interest from the folks running the x86 platform with their server. It was the last to get security updates, and during disaster recovery it was one of the last to be recovered. As a consequence, the z/OS team became less reliant on it.

The move to zCX did a couple of things. One is that the z/OS team now runs the SMU servers; there are two servers on two systems in a sysplex for redundancy. Further, it was very easy to define the data used by the zCX servers on replicated volumes. The SMU server is now recovered with the rest of the sysplex and started again as part of it. The team can rely more heavily on it now.

To my mind this is just one of the many new uses that zCX brings to z/OS, I look forward to outlining some of the other ones in a future posting.