What’s the difference between MQ Web and z/OS Connect MQ support?

With MQ Web

  1. You can issue commands to administer MQ, for example display, define, and delete MQ objects.
  2. You can put and get messages to and from a queue.  The message is what you specify – typically a character string.

With z/OS Connect MQ support

  1. You can put and get messages to and from a queue, and do transformations on the message, for example mapping a COBOL structure to JSON.
  2. You can do field validation.
  3. You can convert HTTP code “200” to “great it worked”.

What is common?

They both use z/OS WebSphere Liberty to provide the basic web server.

Planning your Liberty – this is not an escape plan

With the web being the new front end to z/OS, most z/OS products are using Liberty as their web server to deliver web content.  Each product seems to be documented as if it were the only Liberty instance on the z/OS image; for example, they all default to using HTTP port 9080.

This blog post helps identify what planning you need to do before you can configure a Liberty instance, be it z/OSMF, MQWEB, CICS, or z/OS Connect EE.

Roles

Often the person installing and configuring the server is part of the “installation team”, and may not be familiar with the detailed use of the product, or product-based tooling, for example the Eclipse-based z/OS tooling.  This person’s role is to configure the server so it meets the enterprise requirements; the configuration within the server is down to the team who requested it.

Update proclib

You need to decide if each instance will need its own procedure in proclib, or one procedure can start multiple servers.

You will need an Angel process – there can be only one unnamed (default) Angel started task across all of your Liberty instances on an LPAR, though recent Liberty levels also support additional named Angels.  Ensure it is at the highest service level.

If you want isolation you may want to set up a test instance with a different started task userid from the production instance.

Where do the product ZFS libraries go

Usually the product code goes in /usr/lpp/IBM/product_name. Typically you make a directory /usr/lpp/IBM/product_name, then mount the product file system over this directory.  Sometimes this file system needs to be mounted read/write during customisation.  When you upgrade you can just mount the new file system instead of the old one.
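A minimal sketch of the TSO commands (the data set name and mount point are my examples):

MKDIR '/usr/lpp/mqm/V9R1M3' MODE(7,5,5)
MOUNT FILESYSTEM('MQM.V9R1M3.ZFS') MOUNTPOINT('/usr/lpp/mqm/V9R1M3') TYPE(ZFS) MODE(READ)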

Where do the configuration files go?

The Liberty configuration files can go in /var/… or /u/… . If you intend the server to be started on another LPAR, then the configuration files need to be available on that LPAR, so having a variable with the LPAR name as part of the directory will not work.  The configuration files are located via the WLP_USER_DIR environment variable. You may want shell scripts which define this variable.  For example the shell script prodmqweb could have export WLP_USER_DIR=/u/mqweb/production/MQPA.  You then use . prodmqweb (note the leading dot, so the variable is set in your current shell) to define the variable, or pass commands to it, such as sh prodmqweb dspmqweb, so you can be sure you are using the right configuration files.
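A minimal sketch of such a script (the path is my example):

# prodmqweb - set WLP_USER_DIR for the production MQPA server,
# and optionally run a command with the variable set
export WLP_USER_DIR=/u/mqweb/production/MQPA
if [ $# -gt 0 ]; then
  "$@"
fi

The "$@" at the end runs any command you pass to the script, with the variable already set.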

UNIX Directory List Utility ISPF 3.17 has space for 56 characters in the default directory name, but only 40 when working with files, such as new file or rename.   You may want to have a short prefix, or use an alias. For example /u/zoscA/servers/stockManager/server.xml instead of /MVSA/var/zosconnect/servers/stockManager/server.xml

What configuration files will be shared?

If you already have Liberty running in your environment you may be able to reuse some files, and include them in your server.xml. For example, if your keystores are defined in a file you could use <include location="…./keystore.xml"/> .

If you are providing duplicate servers for availability, you can put your common definitions in one file and share it, with server-specific definitions in a different file.

It is easier to manage the configuration files if you provide small, function-specific files, for example saf.xml, trace.xml, applications.xml, and keystores.xml – see the sketch below.
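A minimal sketch of a server.xml built this way (the file names are those above):

<server>
    <include location="saf.xml"/>
    <include location="trace.xml"/>
    <include location="applications.xml"/>
    <include location="keystores.xml"/>
</server>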

What TCPIP ports will be used

Each product will need its own ports, typically one port for HTTP and another port for HTTPS.  You can define multiple ports with different characteristics: using httpEndpoint definitions you can specify, for example, that one port logs this information here, and another port logs different information to a different place.
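A sketch of two endpoints with different logging (the ids, ports and file names are my examples):

<httpEndpoint id="internalEndpoint" host="*" httpPort="9080" httpsPort="9443">
    <accessLogging filePath="${server.output.dir}/logs/internal_access.log"/>
</httpEndpoint>
<httpEndpoint id="externalEndpoint" host="*" httpPort="9081" httpsPort="9444">
    <accessLogging filePath="${server.output.dir}/logs/external_access.log"/>
</httpEndpoint>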

If you want to have two instances running on an LPAR using the same port, the port needs to be defined with SHAREPORT.

You may want to have the same port defined on each LPAR, so that no matter which LPAR is used, clients can connect to port 9443.

You may want to use VIPA, so you have one external IP address into your sysplex, and z/OS Sysplex Distributor to distribute connection requests to available servers. If you want to do this you will need a configured VIPA TCPIP address.

You may need to specify which TCPIP instance to use if you have more than one TCPIP instance on an LPAR.

What keystores will be used

You need two keystores

  1. To identify the server – the keystore
  2. To validate certificates passed into the instance – the trust store.  Typically this has Certificate Authority certificates, and any self-signed certificates.

These can be file based or SAF based using z/OS keyrings.  For example

<keyStore filebased="false" id="racfKeyStore"
   location="safkeyring://START1/KEY" password="password" readOnly="true" type="JCERACFKS"/>

You might have enterprise keystores available to every one, or provide isolation so you have keystores for bank1 servers, and different keystores for bank2 servers.

Defining the APPL and SERVER resource

The Liberty default APPL definition is BBGZDFLT. This allows people to access the server (the front door).  If you already have Liberty installed then you may be able to use the existing definition.

If you want isolation, for example test and production, or between two major applications you will need to select and define different APPL resources.

You will need

<safCredentials profilePrefix="ZZZZDFLT"

and

RDEFINE APPL ZZZZDFLT UACC(NONE) 
PERMIT ZZZZDFLT CLASS(APPL) ACCESS(READ) ID(START1) 
PERMIT ZZZZDFLT CLASS(APPL) ACCESS(READ) ID(GRALL) 
SETROPTS RACLIST(APPL) REFRESH 

RDEFINE SERVER BBG.SECPFX.ZZZZDFLT UACC(NONE) 
PERMIT BBG.SECPFX.ZZZZDFLT CLASS(SERVER) ACCESS(READ) ID(START1) 
SETROPTS RACLIST(SERVER,APPL) REFRESH

   /* for z/OS Connect
RDEFINE EJBROLE ZZZZDFLT.zos.connect.access.roles.zosConnectAccess   +
   UACC(NONE) 
PERMIT ZZZZDFLT.zos.connect.access.roles.zosConnectAccess + 
   CLASS(EJBROLE) ACCESS(READ) ID(IBMUSER) 
SETROPTS RACLIST(EJBROLE) REFRESH 

   /* for MQ Web
RDEFINE EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin UACC(NONE)

These tend to be mixed case, so take care when defining them.

Starting the instance

If you use

S BAQSTRT,PARMS='server1'
S BAQSTRT,PARMS='server2'

If you use STOP BAQSTRT then both servers will stop.

If you use

S BAQSTRT.BAQ1,PARMS='server1' 
S BAQSTRT.BAQ2,PARMS='server2'

You can use STOP BAQ1 to stop just server1.

You can also use

S BAQSTRT,PARMS='server1',JOBNAME=BAQ1
S BAQSTRT,PARMS='server2',JOBNAME=BAQ2

Then you can use P BAQ1, and also have WLM give BAQ1 and BAQ2 different service classes, and so give them different priorities.

Set up monitoring

There may be SMF data available, which you can collect if you enable the SMF collection classes.
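For example, Liberty on z/OS can write SMF 120 subtype 11 request records if you enable the request logging feature – a sketch, assuming your product’s Liberty supports it:

<featureManager>
    <feature>zosRequestLogging-1.0</feature>
</featureManager>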

You may have data from JMX which you collect and report.

Accessing the server

You will need to set up a profile and give permission to groups of people, just to be able to use the server.

You may need to protect individual applications, for example the ability to start or stop applications, or to invoke the application.  This can be done once the basic setup has been done, and the system handed over.

For example, in server.xml for z/OS Connect

<zosconnect_service> 
  <service name="stockquery" 
      serviceDescription="stockQueryServiceDescriptionColin" 
      id="stockQueryService" 
      adminGroup="a3Admin,a4Admin" 
      invokeGroup="a3Invoke" 
      operationsGroup="a3Ops" 
      readerGroup="a3Reader" 
  /> 
</zosconnect_service>

It is good policy to grant access only to groups, and not individual ids, as it simplifies userid administration.  You would define a3Admin, a4Admin, a3Invoke as groups.


HA Liberty web server – implementing VIPA with distributed connections.

Overview of VIPA solutions

You can implement VIPA, where you give your application its own IP address, across multiple TCPIP images.   This solves the problem of certificates not matching the host IP address.

You can have

  • One TCPIP image processing the connection requests. You have multiple TCPIP images – but only one TCPIP image at a time processes the connections.   If the TCPIP image stops, another can take over.
  • Multiple TCPIP stacks can process connection requests. A front end TCPIP image takes the connection requests and distributes them to TCPIP instances where the application is running.  You can load balance across multiple TCPIP images, distributing the connections using techniques such as Round Robin or Hot Standby.  This is based on Sysplex Distributor technology.

This blog post discusses the second case.

To provide background information, I have included some background sections below.

Sysplex Distributor background

Sysplex Distributor is like having a router inside your TCPIP on z/OS; it can route traffic transparently to other TCPIP images in the environment.  The Sysplex Distributor can distribute connection requests for a VIPA address to TCPIP images where the application is running.  This set up is called Distributed VIPA (DVIPA).

It took me about 2 weeks to get a Sysplex DVIPA working – about a week understanding the documentation on VIPA, and the other week trying to understand why it didn’t work – and the simple configuration error I had.

I’ll break it down into simple stages which should help you understand the documentation.

The scenario

The scenario I used was going from my laptop, to z/OS running under zPDT on my laptop.  In effect there was an OSA connection between Ubuntu and my z/OS LPAR.  If you do not know what an OSA is, think of it as an Ethernet connection which can plug into multiple z/OS LPARs.

  1. My Ubuntu had an address of 10.1.1.1 over the tunnel connection
  2. I had one LPAR  with three TCPIP images
    1. TCPIP, host address 10.1.1.2 for the primary “front end”
    2. TCPIP2, host address 10.1.1.3 for the backup “front end” and where a server instance was running
    3. TCPIP3, host address 10.1.1.4 with a server instance running.
  3. I used a VIPA address of 10.1.3.10

The steps are

  1. Connect from Ubuntu to the z/OS
  2. Configure the LPAR(s)
  3. Define the VIPA configuration
  4. Define the “routing” to where the server was running.
  5. Getting the server to use the VIPA
  6. Commands to see what is going on (or not as the case may be)

Connect from Ubuntu to the z/OS

I had an existing tunnel connection from Ubuntu to z/OS.

I used the ip route command

sudo ip r add 10.1.3.10 dev tap0

to define the route to 10.1.3.10 via the tunnelling device tap0 .  This looks like an OSA connection into z/OS.

Configure the LPAR(s)

Sysplex Distributor uses XCF communications between LPARs and the TCPIP images on the LPARs.

You configure each TCPIP with a statement

IPCONFIG DYNAMICXCF 172.1.1.x

My “front end” TCP/IP had IPCONFIG DYNAMICXCF 172.1.1.1; the other two TCP/IP images had 172.1.1.2 and 172.1.1.3.

The 172.1.1.x addresses can be any addresses not used by your enterprise.  They are internal to the sysplex configuration.

Define the VIPA configuration

You define the configuration once, in the front end TCPIP.  It is visible from the other TCP/IP images because the information is shared via the DYNAMICXCF.

You define the VIPA in the front end TCP/IP image with a VIPADEFINE statement giving the netmask and address.  I used

VIPADYNAMIC
  VIPADEFINE 255.255.255.0 10.1.3.10
...
ENDVIPADYNAMIC

You can define VIPABACKUP in another TCPIP image, so if the main front end TCP/IP is not available then a backup can take the traffic and distribute it to the other TCP/IP stacks.

When the main front end TCP/IP image is restarted, you can have it take back the routing.

Define the “routing” to where the server was running.

You can define a variety of ways of routing the work

  • A Hot Standby – where the input to the front end TCP/IP image is routed to a single “backend” application’s TCP/IP image.  If this fails, the work is routed to a running backup application.
  • A Round Robin – where requests are routed in turn to each TCPIP with an active application.
  • Routing depending on WLM or other load characteristics.

This routing is done by the VIPADISTRIBUTE command in the front end TCP/IP image.

The definitions for the front end TCPIP

IPCONFIG SYSPLEXROUTING 
    DYNAMICXCF 172.1.1.1 255.255.255.0 3 

VIPADYNAMIC 
   VIPADEFINE 255.255.255.0 10.1.3.10 

   VIPADISTRIBUTE DEFINE DISTM ROUNDROBIN 10.1.3.10 PORT 8443 
      DESTIP 
         172.1.1.2 
         172.1.1.3 
ENDVIPADYNAMIC 

This routes connection requests to the TCPIP images with the DYNAMICXCF of 172.1.1.2 (TCPIP2) and 172.1.1.3 (TCPIP3)

The definitions for TCPIP2

IPCONFIG SYSPLEXROUTING 
DYNAMICXCF 172.1.1.2 255.255.255.0 3 

The definitions for TCPIP3

IPCONFIG SYSPLEXROUTING 
DYNAMICXCF 172.1.1.3 255.255.255.0 3

Use of VIPABACKUP

If the backup TCPIP front end image is used, it can have its own VIPADISTRIBUTE statement, or just use the same statement shared from the main front end TCP/IP image.  It is better to have the VIPADISTRIBUTE statements, for the case when the backup TCPIP is started before the front end TCPIP.  The backup needs the VIPADISTRIBUTE statements. (These statements can be put into a PDS, and included using the INCLUDE dataset(member) statement in both primary and backup environments.)

To define TCPIP2 as a backup I used

VIPADYNAMIC 
    VIPABACKUP MOVEABLE IMMEDIATE 255.255.255.0 10.1.3.10 
    VIPADISTRIBUTE DEFINE DISTM ROUNDROBIN 10.1.3.10 PORT 8443 
        DESTIP 
        172.1.1.2 
        172.1.1.3 
ENDVIPADYNAMIC 

Getting the server to use the VIPA

The TCP/IP images hosting the applications just have the IPCONFIG DYNAMICXCF aa.bb.cc.dd statement.  They do not have any VIPADYNAMIC … ENDVIPADYNAMIC statements unless they are the main or backup front end TCP/IP images.

The application can connect using the VIPA address, for example creating its SSL socket listener passing the VIPA address.  You can also configure TCP/IP so that when a port is used, it binds to a particular IP address, for example

PORT 8443 TCP * BIND 10.1.3.10

So an application using port 8443 to listen will get IP address 10.1.3.10 – which in my case is a VIPA address.

You can use

PORT
9443 TCP * SHAREPORT BIND 10.1.3.7

to allow the port to be shared by many applications on a TCPIP instance.
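Alternatively, you can make Liberty itself listen on the VIPA address through its httpEndpoint – a sketch, using the VIPA address and port from above:

<httpEndpoint id="defaultHttpEndpoint" host="10.1.3.10" httpPort="9080" httpsPort="8443"/>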

How are the connections distributed?

The VIPADISTRIBUTE statement has many routing options. I used Hot Standby and Round Robin.

With RoundRobin, I had

  • the front end TCPIP
  • TCPIP2 with two Liberty servers
  • TCPIP3 with  one Liberty server

I ran some workload and found that the server on TCPIP3 had half the requests, and each of the two servers on TCPIP2 had a quarter of the overall requests.  This shows the routing is done at the TCPIP image level – not by the number of servers.

HA Liberty web server – implementing VIPA using the simpler VIPARANGE technology

Overview of VIPA solutions

You can implement VIPA, where you give your application its own IP address, across multiple TCPIP images.   This solves the problem of certificates not matching the host IP address.

  • One TCPIP image processes the connection requests. You have multiple TCPIP images – but only one TCPIP image at a time processes the connections.   If the TCPIP image stops, another can take over.
  • Multiple TCPIP stacks can process connection requests. This uses Sysplex Distributor;  a front end TCPIP image takes the connection requests and distributes them to TCPIP instances where the application is running.   You can use load balancing such as Round Robin, or Hot Standby.

This blog post discusses the first case.

To provide background information, I have included some background sections below.

Using VIPARANGE configuration

The technique uses the VIPARANGE configuration statement.

The concept is that many LPARs can be attached to an OSA adapter; one, and only one, TCPIP stack (I do not know which of the available images) takes the connection requests and passes them on to the application on that TCPIP image.

You allocate a range of TCPIP addresses for your applications, with the same network prefix, for example 9.4.6.x.  Allocate a host id to a Liberty instance, for example 9.4.6.7.  The Liberty instance keeps this address wherever it runs.  You configure your routers so that 9.4.6.* addresses are routed to the OSA adapter.

For each TCPIP image where you want to run Liberty, add to the  TCPIP startup configuration (or to an OBEY file)

VIPADYNAMIC 
   VIPARANGE DEFINE 255.255.255.0 9.4.6.7 
ENDVIPADYNAMIC

The 255.255.255.0 is the  subnet mask.  If your organisation uses a different subnet mask, it affects the IP addresses used.

This statement defines a range of IP addresses on this LPAR, from 9.4.6.1 to 9.4.6.254.

If an application connects to TCPIP, and the bind specifies a value in this range (9.4.6.1 to 9.4.6.254) then it is considered a VIPA address.  If the application used 9.4.6.7 this would count as a VIPA address.

When your application (Liberty) connects to TCPIP and uses an address in the VIPARANGE, the TCPIP instance will create a dynamic IP address.  When I started my server application, I got a z/OS console message

EZD1205I DYNAMIC VIPA 9.4.6.7 WAS CREATED USING BIND BY jobname ON TCPIP2.

When I shut down the server I got

EZD1298I DYNAMIC VIPA  9.4.6.7 DELETED FROM TCPIP2
EZD1207I DYNAMIC VIPA 9.4.6.7 WAS DELETED USING CLOSE API BY jobname ON TCPIP2

If the VIPA address is active on more than one TCPIP image, just one image will get all of the requests.  If you stop this image, another TCPIP image can take over.

If you have a different server using the same IP address but a different port number, the same LPAR will process its requests, because the routing is based on the IP address.

With VIPARANGE you do not get connections distributed to more than one TCPIP image.

In your browser use the address 9.4.6.7:9443.  The network router routes this to the OSA, one TCPIP stack captures it and passes it to the application (Liberty).  As part of the handshake Liberty sends down its certificate, which has a SAN of 9.4.6.7; this matches the IP address, so this works.

On another day, when a different z/OS image is capturing the VIPA address connections, the TCPIP address is 9.4.6.7 as before, so this still matches the SAN in the certificate.

Testing it

In a test I used “ping -R 9.4.6.7” to the VIPA address.
This reported it was sent to the TCPIP stack with address 10.1.1.2. When I shut this TCPIP image down, ping reported the request was sent to 10.1.1.3.  It did this with no manual intervention.


Setting up a highly available web server is a “yes – but” problem

I’ve been setting up a Liberty web server, as used in MQWEB, z/OS Connect, z/OSMF and so on, and was looking into how to make this available, so I could move the web server to a different LPAR or TCP/IP instance.

Moving it should be easy – it is – but there are things you need to think about. It is a bit like going around a maze trying to find the solution.

How do I get to the failover system?

You start the web server on a different LPAR in the sysplex. How can you support this so that your browser gets to the back end without changing the URL?

You have two choices.

  1. You change your DNS lookup, or your router, so your request goes via a different connection (think different bit of wire) to the failover LPAR. These changes can be automated to some extent.
  2. Multiple z/OS images can listen for an IP address.

These work but…

The certificate sent down from the web server contains the address of the LPAR as part of the SAN.  When the browser processes it, it compares the address in the certificate with the address it used for the connection.  If they do not match the browser produces an error message.

How do I get over the certificate SAN and the IP address difference?

You have a couple of choices

  1. Use a unique certificate on each LPAR.  Yes this works, but there is more administrative work to set up.  You could set up two web servers and only use one at a time.  This works, but it is unnecessary work.
  2. Use a Virtual IP address.  In TCP networking the end of every connection is a “device” or system with its unique IP address.  You can give the web server its own IP address, which is “virtual” as it is not a device or a system.  With this, when you start your web server on a different LPAR, it has the same IP address.  To use this you have to configure z/OS to support it.  You can set this up
    1. To support multiple web servers, and distribute the work to them
    2. Have a hot standby
    3. To route traffic to where the web server has started.

Yes, these work, but – it is not easy to set up.  I’ll be blogging how to do this.

Defining a second TCPIP stack on z/OS on zPDT

I wanted a second TCPIP stack on my z/OS because I wanted to test it with MQWEB.  There is good documentation, but it is hidden away and not all in one place.
This took me about half a day to set up – including several IPLs – but I am on my own z/OS zPDT image so this was not a problem.  It takes a while to understand the definitions – it is another one of “this points to that, which points to something else…”.  You need to be able to copy a definition rather than use the books to create it from nothing.

I’ll describe setting up TCPIP2.

Overall I was surprised at how easy this bit was to set up.

The work breaks into

  • setting up the connectivity from Linux to z/OS
  • setting up the second TCP stack
    • Configure the SYS1.PARMLIB member and IPL
    • Define the new TCPIP procedure
    • Configure the new TCPIP configuration
    • Allowing people to use the TCPIP stack

Both of these need an IPL of z/OS, so you could do all of the customising and IPL once at the end.

I’ll cover sharing an existing OSA adapter and setting up a new OSA adapter.

Sharing an existing OSA adapter.

Copy ADCD.Z24A.VTAMLST(OSATRL2) to USER.Z24A.VTAMLST(OSATRL2) and make the changes described below (the DATAPATH value, and commenting out the second TRLE)

OSATRL1 VBUILD TYPE=TRL
OSATRL1E TRLE LNCTL=MPC,READ=(0400),WRITE=(0401),                      X
               DATAPATH=(0402,0404,0406),                              X
               PORTNAME=PORTA,                                         X
               MPCLEVEL=QDIO
*SATRL2E TRLE LNCTL=MPC,READ=(0404),WRITE=(0405),DATAPATH=(0406),
*              PORTNAME=PORTB,
*              MPCLEVEL=QDIO

I changed

  • DATAPATH=(0402) to DATAPATH=(0402,0404,0406) – note every other address.  With 0402,0403 etc in the list, the second TCPIP stack failed to work, with messages like
    • EZZ4310I ERROR: CODE=80100040 REPORTED ON DEVICE PORTA. DIAGNOSTIC CODE: 03
    • EZZ4309I ATTEMPTING TO RECOVER DEVICE PORTA
    • IST1222I DATA DEVICE 0403 IS INOPERATIVE, NAME IS PORTA
    • IST1578I DEVICE INOP DETECTED FOR PORTA BY ISTTSCMA CODE = 104
  • Commented out/deleted the second TRLE definition

The zPDT devmap needs to have OSA definitions for these

name awsosa 0009 --path=A0 --pathtype=OSD --tunnel_intf=y # QDIO mode
device 400 osa osa --unitadd=0
device 401 osa osa --unitadd=1
device 402 osa osa --unitadd=2
device 403 osa osa --unitadd=3
device 404 osa osa --unitadd=4
device 405 osa osa --unitadd=5
device 406 osa osa --unitadd=6

I created a file USER.Z24A.TCPPARMS(T2OSA)

DEVICE PORTA  MPCIPA 
LINK ETH1  IPAQENET PORTA 
START PORTA 
HOME 10.1.1.3 ETH1 

and put

include USER.Z24A.TCPPARMS(T2OSA)

into my tcpip2 startup.

Putting the definitions in a PDS member means I can use

V TCPIP,TCPIP2,OBEY,USER.Z24A.TCPPARMS(T2OSA)

to activate them.

I re-IPLed the system to pick up the VTAM changes.

Once I had started TCPIP and TCPIP2, the command d net,id=OSATRL1E gave

D NET,ID=OSATRL1E
IST097I DISPLAY ACCEPTED
IST075I NAME = OSATRL1E, TYPE = TRLE 466
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1954I TRL MAJOR NODE = OSATRL2
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = PORTA LINKNUM = 0 OSA CODE LEVEL = 7617
IST2337I CHPID TYPE = OSD CHPID = A0 PNETID = **NA**
IST1577I HEADER SIZE = 4096 DATA SIZE = 0 STORAGE = ***NA***
IST1221I WRITE DEV = 0401 STATUS = ACTIVE STATE = ONLINE
IST1577I HEADER SIZE = 4092 DATA SIZE = 0 STORAGE = ***NA***
IST1221I READ DEV = 0400 STATUS = ACTIVE STATE = ONLINE
IST924I -------------------------------------------------------------
IST1221I DATA DEV = 0403 STATUS = ACTIVE STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIP ULP INTERFACE = PORTA
...
IST1221I DATA DEV = 0404 STATUS = ACTIVE STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIP2 ULP INTERFACE = PORTA
IST2310I ACCELERATED ROUTING DISABLED
IST924I -------------------------------------------------------------
IST1221I DATA DEV = 0405 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST924I -------------------------------------------------------------
IST1221I DATA DEV = 0406 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST924I -------------------------------------------------------------
IST1500I STATE TRACE = OFF

Setting up the connectivity from Linux to z/OS using a second OSA adapter

You need to set up an interface from Linux to z/OS via an Open Systems Adapter (OSA).

TCP/IP Interfaces are used to tunnel from Linux to z/OS.  These have names like tap0, tap1;  they tie up with z/OS paths and devices.  The Linux device drivers implement the QDIO protocol, a simpler and faster protocol than traditional z/OS channels.

Identify the path and devices to be used.

The zPDT find_io command gave me

 FIND_IO for "colin@colin-ThinkCentre-M920s" 

      I/face Cur           MAC      IPv4         IPv6 
Path  Name   State         Address  Address      Address 
----  ------ ----------- --------- ------------ ----------------
F0    eno1    UP, RUNNING 00:d8:... 10.1.0.3     fe80:...%eno1 
F1    wlxd..  UP, RUNNING d0:37:... 192.168.1.67 2a00:...6cab 
.  
A0   tap0     UP, RUNNING 9e:30:... 10.1.1.1     fe80:... %tap0 
A1   tap1     UP, RUNNING 7e:66:... 10.1.2.1     fe80:... %tap1 
A2   tap2   DOWN 02:a2:a2:a2:a2:a2  *            *

We can see from this the IP addresses being used;  channel paths A0, A1 are in use by tunneling; channel path A2 is available.

In the zPDT devmap I set up

[manager] # tap0 define network adapter (OSA) for communication with Linux
name awsosa 0009 --path=A0 --pathtype=OSD --tunnel_intf=y # QDIO mode
device 400 osa osa --unitadd=0
device 401 osa osa --unitadd=1
device 402 osa osa --unitadd=2

[manager] # tap1 define network adapter (OSA) for communication with Linux
name awsosa 0010 --path=A1 --pathtype=OSD --tunnel_intf=y --tunnel_ip=10.1.2.1 --tunnel_mask=255.255.255.0 # QDIO mode
device 408 osa osa --unitadd=0
device 409 osa osa --unitadd=1
device 40a osa osa --unitadd=2

Where the paths tie up with the output from the find_io.

Each connection needs 3 consecutive devices, for example 408,409,40a.

On z/OS use the command D U,CTC to find which devices are available.  I think (I am not sure) that the first device has to end in 0 or 8.

I have

UNIT TYPE STATUS 
0400 OSA A-BSY 
0401 OSA A 
0402 OSA A-BSY 
0403 OSA OFFLINE 
0404 OSA OFFLINE 
0405 OSA OFFLINE 
0406 OSA OFFLINE 
0407 OSA OFFLINE 
0408 OSA A-BSY 
0409 OSA A 
040A OSA A-BSY

Once you have selected the OSA addresses to use, and configured the devmap file, you will need to restart zPDT with the updated devmap – but you also need to customise z/OS and IPL – so do not IPL just yet.

z/OS work for setting up the second TCP stack

Some basic terminology and concepts.

  • There is a network domain AF_INET which programmers use via sockets to communicate with the network.  (There is another network domain, AF_UNIX, for Unix programming.)
  • You have to configure the domain, for example how many concurrent sessions it can support.
  • Originally you could have only one TCP stack in the environment.   This used an interface called INET.  This did not support more than one TCP/IP stack.
  • A new interface was developed, Common INET (CINET). Conceptually this sits in front of TCP/IP and routes packets to the TCPIP subsystems.
  • To be able to use multiple stacks, CINET needs to be used instead of INET.
  • These are customised in SYS1.PARMLIB(BPXPRMxx).

Customise the SYS1.PARMLIB(BPXPRMxx) member

For example

FILESYSTYPE TYPE(INET) ENTRYPOINT(EZBPFINI) 

SUBFILESYSTYPE NAME(TCPIP) 
     TYPE(INET) 
     ENTRYPOINT(EZBPFINI) 

NETWORK DOMAINNAME(AF_INET) 
     DOMAINNUMBER(2) 
     MAXSOCKETS(64000) 
     TYPE(INET) 
     INADDRANYPORT(5555) 
     INADDRANYCOUNT(1000)

Change TYPE(INET) to TYPE(CINET) in all three places, and on the FILESYSTYPE statement change ENTRYPOINT(EZBPFINI) to ENTRYPOINT(BPXTCINT).
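A sketch of the result, based on the example above:

FILESYSTYPE TYPE(CINET) ENTRYPOINT(BPXTCINT) 

SUBFILESYSTYPE NAME(TCPIP) 
     TYPE(CINET) 
     ENTRYPOINT(EZBPFINI) 

NETWORK DOMAINNAME(AF_INET) 
     DOMAINNUMBER(2) 
     MAXSOCKETS(64000) 
     TYPE(CINET) 
     INADDRANYPORT(5555) 
     INADDRANYCOUNT(1000)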

Add the new TCPIP address space

SUBFILESYSTYPE NAME(TCPIP2) 
     TYPE(CINET) 
     ENTRYPOINT(EZBPFINI)

This change needs an IPL to activate (or possibly a SETOMVS RESET=(xx)).  I do not know what else the change from INET to CINET affects, so check with IBM before implementing it.

Define the TCPIP2 procedure

  • I defined a new profile in the STARTED class to map TCPIP2 to a userid.   I used the same userid as for TCPIP.
  • I copied the TCPIP procedure from TCPIP to TCPIP2.
  • The TCPIP procedure refers to TCP configuration,
    • //PROFILE  … DSN=SYS1.TCPPARMS(PROF) and
    • //SYSTCPD …. DSN=SYS1.TCPPARMS(TCPDATA).
  • Create your own copies of these, for example copy them to USER.TCPPARMS, and rename the members to PROF2, and TCPDATA2

Create the VTAM definition for the tunnelling connection for a second OSA adapter


If you are using a second OSA adapter, you need to create a VTAM member to map from the OSA device to a TCP/IP name using MPC.  This is Multi Protocol Channel, using the QDIO protocol, which is simpler and faster than traditional z/OS channels.

Create a member in VTAMLST for a Transport Resource List major node, for example OSACOLIN.

----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
OSA5 VBUILD TYPE=TRL 
OSATRL5E TRLE LNCTL=MPC,READ=(0408),WRITE=(0409),DATAPATH=(040A),      X
               PORTNAME=PORTZ,                                         X
               MPCLEVEL=QDIO

Note the format, continuation ‘x’ in column 72, and continuation text in column 16.

You can use V NET,ACT,ID=OSACOLIN to activate it. If you use D NET,ID=OSACOLIN,E it should find it, and report it is active.

You can use D NET,TRL  to display the status of the  links.

Configure TCPPARMS(PROF2)

I used a copy of SYS1.TCPPARMS(PROF) as my starting configuration.

I commented out all of the lines between AUTOLOG and ENDAUTOLOG.

I went down to the DEVICE and BEGINROUTE section and used

DEVICE PORTZ MPCIPA
LINK ETHZ IPAQENET PORTZ
; end of link and device definitions
;
HOME 10.1.2.2 ETHZ
;
BEGINRoutes
;            Destination SubnetMask FirstHop LinkName Size
ROUTE        DEFAULT               10.1.2.1  ETHZ     MTU 1492
ENDRoutes
; start it when TCP/IP starts
START PORTZ

Where

  • DEVICE PORTZ MPCIPA – MPCIPA says this is an OSA QDIO device, and it uses the PORTZ definition.  PORTZ was defined above in the VTAMLST member.
  • LINK ETHZ IPAQENET PORTZ – this creates a LINK ETHZ associated with the DEVICE PORTZ in the line above. It uses the interface type IPAQENET, which is for IP V4 and OSA QDIO devices.   (There is IPAQENET6 for IP V6 for OSA QDIO.)
  • HOME 10.1.2.2 ETHZ – for traffic coming in over ETHZ (via PORTZ, and back to the tap1 which was defined with --tunnel_ip=10.1.2.1).   A ping 10.1.2.2 should come in over this interface.  For the first OSA adapter this had 10.1.1.2.
  • ROUTE DEFAULT 10.1.2.1 ETHZ MTU 1492 – any traffic not otherwise routed goes via first hop 10.1.2.1 over link ETHZ, and uses a packet size of 1492 bytes.
  • START PORTZ – get it working

Edit the TCPDATA

For sharing an OSA or using a new OSA, I edited the TCPDATA2 file and added

TCPIPJOBNAME TCPIP2
S0W1: HOSTNAME S0W1COL
DOMAINORIGIN COLIN.HOST.COM
DATASETPREFIX TCPIP
NSPORTADDR 53
RESOLVEVIA UDP
LOOKUP LOCAL
ALWAYSWTO YES

I do not know which of these are important.  I changed the TCPIPJOBNAME and HOSTNAME lines to match my names.

RACF profile changes

You have to set up a security profile before an application can connect to TCPIP and listen on a socket.  Without it, MQWEB got EDC5112I Resource temporarily unavailable. (errno2=0x74610296)

rdefine SERVAUTH EZB.INITSTACK.*.TCPIP2  from(EZB.INITSTACK.*.TCPIP)

Using the from(…) above copies the permissions from the base profile.   You can allow more users using

permit EZB.INITSTACK.*.TCPIP2 class(SERVAUTH) id(START1) access(READ)

The “*” is for any system in the sysplex, so you could have EZB.INITSTACK.MVSA.TCPIP2 and allow access to TCPIP2 on system MVSA, but not from another MVS system.

You can protect TCPIP2 resources, for example the NETSTAT command

RDEFINE SERVAUTH (EZB.NETSTAT.*.TCPIP2.*) UACC(NONE)
PERMIT (EZB.NETSTAT.*.TCPIP2.*) ACCESS(READ) CLASS(SERVAUTH) ID(TCPADMIN)
SETROPTS GENERIC(SERVAUTH) REFRESH 

Check it out

You can use the Linux netstat -i command to display the interfaces defined to Linux.  On my Linux  I got

colin@colin-ThinkCentre-M920s:/home/zPDT$ netstat -i 
Kernel Interface table
Iface     MTU   RX-OK ... Flg
eno1     1500   84758 ... BMRU
lo      65536  188855 ... LRU
tap0     1500       6 ... BMRU
tap1     1500      25 ... BMRU
wlxd0374 1500   10545 ... BMRU

z/OS commands

D TCPIP – displays the TCP address spaces in the LPAR

D tcpip,tcpip2,netstat,home gave
EZZ2500I NETSTAT CS V2R4 TCPIP2 540
HOME ADDRESS LIST:
ADDRESS LINK FLG
10.1.2.2 ETHZ P
127.0.0.1 LOOPBACK

Using TCPIP2 from Liberty web server

I added

_BPXK_SETIBMOPT_TRANSPORT=TCPIP2

to the server.env file, and restarted Liberty

I connected from my web browser to MQWEB using 10.1.2.2:9443, and got the messages

Your connection is not private
Attackers may be trying to steal your information from 10.1.2.2
NET::ERR_CERT_COMMON_NAME_INVALID

The NET::ERR_CERT_COMMON_NAME_INVALID message is because the certificate had a Subject Alternative Name with a different IP address, 10.1.1.2.  The traffic was sent from 10.1.2.2.

This was what I expected.

Customising MQWEB Liberty on z/OS – things the documentation does not tell you about

This post covers the customising you need to consider for enterprise use of the Liberty MQWEB server.  It covers

  • Set up the USS path and define an alias for the MQ executable’s directory
  • Do you have common configuration across mqwebuser.xml files?
  • Decide if you want to use setmqweb.
  • Setting up the server’s certificate and keyring
  • Setting up the trust store
  • Setting up the Angel process(es)
  • Reserving the TCP/IP Port number
  • Customising the jvm.options
    • To prevent the web server coming up if the Angel process is missing
    • Setting the time zone
  • Customising the mqwebuser.xml
    • SAF definitions
    • Setting the log sizes so the logs can be viewed
  • Letting requests in from outside of the LPAR
  • dspmqweb/setmqweb – which instance to use?
  • Selecting which IP stack to use
  • Customising ISPF option 3.17 – Unix Directory List

Set up the USS path and define an alias for the MQ executable’s directory

To be able to use the dspmqweb and setmqweb commands you need to point to the command location.

You can add the statement below to the user’s .profile file, or to /etc/profile

export PATH=/usr/lpp/mqm/V9R1M1/web/bin:$PATH

If you have multiple releases of MQ in your environment you could set up shell commands like v913dspmqweb.sh

/usr/lpp/mqm/V9R1M3/web/bin/dspmqweb "$@"

But this causes extra work when you need to migrate to a new release.  It might be better to set up a symbolic link

ln -s  /usr/lpp/mqm/V9R1M3/web/bin /v913
ln -s  /usr/lpp/mqm/V9R1M3/web/bin /mqcur

so you just need to type /v913/dspmqweb or /mqcur/setmqweb

As part of the migration to a newer release you just change the symbolic link.

Do you have common configuration across mqwebuser.xml files?

If you have multiple mqweb instances, either because you have multiple LPARs in a sysplex, or because you have to support different releases of MQ concurrently, you may want to put common configuration in an include file. For example, create a file common.xml to hold the configuration and put

<include location="common.xml" optional="false"/>

in the mqwebuser.xml file.

Decide if you want to use setmqweb.

You can update your *.xml configuration files, or use setmqweb to update mqwebuser.xml for you.

Some organisations do not allow manual changes to configuration. You have to change a configuration file, have it reviewed, and use automation to deploy it.

For test systems it may be ok to use the setmqweb command and change things dynamically.

If you make a change using setmqweb, it updates the mqwebuser.xml file for you, by adding or changing a <variable name="…" value="…"/> statement.
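For example – a sketch, where ltpaExpiry is one of the documented mqweb properties:

setmqweb properties -k ltpaExpiry -v 720

adds <variable name="ltpaExpiry" value="720"/> to the mqwebuser.xml.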

If you are using SAF authentication and certificate authentication

You will need a keyring with the certificate to identify the server (the key store).  You will need a keyring to identify the certificates you trust (the trust store).  You could use the same keyring for both – but this is not good practice.

The server’s certificate and key store keyring

You need to decide if the MQWEB server uses the same certificate as CICS, WAS and z/OS Connect etc. on the same LPAR.  You could have a common certificate to simplify administration. The certificate needs a Subject Alternative Name to identify the machine the certificate came from. This can be the DNS name or the dotted address (9.20.4.6), depending on your set up.  It might be easier to define both. Note that the RACF command

RACDCERT .. ALTNAME(IP(10.1.1.2) IP(10.1.1.3) DOMAIN('WWW.ME.COM') DOMAIN('WWW.LAST.COM'))…

accepts multiple entries, but only uses the last one. The above command produced a certificate with

Subject's AltNames: 
IP: 10.1.1.3 
Domain: WWW.LAST.COM

This means you may be able to use the certificate only on the LPAR that was defined (if you move the server to a different LPAR, it will have a different IP address, and your clients will complain – see below).   You may be able to do something clever with VIPA (Virtual IP addressing), where your sysplex has one IP address and this maps to different IP addresses on each LPAR.

If you have the wrong IP or Domain then the browser gets a message like “Your connection is not private. Attackers may try to steal your information from 10.1.1.2.  NET::ERR_CERT_COMMON_NAME_INVALID”

The trust store keyring.

The trust store keyring has the certificates to authenticate what has been sent from the client, for example a copy of any self-signed certificate, or the Certificate Authority certificates used to sign the web browser’s certificate.

This keyring could be sysplex wide, and shared by CICS, WAS, z/OS Connect etc – assuming they have the same people connecting to them.

The certificates may have been configured with owner CERTAUTH rather than a userid.

My definitions

<sslDefault sslRef="defaultSSLConfig"/> 
<ssl id="defaultSSLConfig" 
   sslProtocol="TLSv1.2" 
   keyStoreRef="racfKeyStore" 
   trustStoreRef="racfTrustStore" 
   clientAuthenticationSupported="true" 
   clientAuthentication="true" 
   serverKeyAlias="MYMQWEB"/> 

<keyStore filebased="false" id="racfKeyStore" 
   location="safkeyring://START1/KEY" 
   password="password" 
   readOnly="true" 
   type="JCERACFKS"/> 

 <keyStore filebased="false" id="racfTrustStore" 
   location="safkeyring://START1/TRUST" 
   password="password" 
   readOnly="true" 
   type="JCERACFKS"/> 

<webAppSecurity allowFailOverToBasicAuth="false"/>
  • The sslDefault  points to the ssl with the same ID
  • The ssl points to
    • the key store with the server’s certificate, with the id racfKeyStore
    • the trust store to validate connecting clients, with the id racfTrustStore

Create an angel

You need an Angel process to handle the SAF (RACF) security requests – the MQ documentation tells you this.

Typically the Angel started task is started at IPL, and shut down at system shut down.
All instances of the Liberty web server running on an LPAR can use the same Angel, for example the z/OSMF angel IZUANG1.

You cannot shut down the Angel process if it is in use, but if you cancel it, the servers using it will stop working (hang) and may abend.

You may want to consider more than one Angel process, and not share it.

When the Angel process has started, it uses no CPU, as the Web Servers execute code within the  Angel address space, on the Web Server’s threads – just like MQ, DB2 etc.

Customise  jvm.options

Stop if there is no Angel  process

If the Angel process is not running at Liberty startup,  then the Web Server may continue to come up.  People will not be authorised to access it, but the Web Server will be running.   This is pretty useless.

You can specify an option so the Liberty server (MQWEB) does not start if the Angel task is not running.

I use

-Dcom.ibm.ws.zos.core.angelRequired=true
#-Dcom.ibm.ws.zos.core.angelName=MYANGEL

-Dcom.ibm.ws.zos.core.angelRequired=true

If the Angel process is not available then MQWEB stops when it detects this.

#-Dcom.ibm.ws.zos.core.angelName=MYANGEL

If you are using a named Angel, uncomment this and specify the Angel name.

If you are using the unnamed Angel, leave this commented.

Set the time zone

The time zone is picked up from TZ in /etc/profile, but you can override it by specifying

-Duser.timezone=Europe/London

This sets the time-zone of the messages in the message.log and trace.log files.

Reserve the TCP/IP port number

It is a good idea to talk to the networking team and get them to update the TCP/IP configuration, for example

PORT 
    20 TCP OMVS NOAUTOLOG ; FTP Server 
    21 TCP OMVS ; FTP Server 
    22 TCP SSHD* ; port for sshd daemon 
    23 TCP TN3270 ; Telnet 3270 Server 
    ...
    1414 TCP CSQ9CHIN ; CSQ9 MQ TCP Listener  
    ...
    9443 TCP MQWEB ; Colin Paice MQWEB


Customise mqwebuser.xml

Message log and trace file settings

If the trace or message files are too big, you cannot view them. You have to use edit to look at them, but if the file is too large, browse is substituted, and browse does not do code page conversion, so you are looking at raw ASCII characters in an EBCDIC browser.

<variable name="maxTraceFileSize" value="20"/>
<variable name="maxTraceFiles" value="20"/>
<variable name="maxMsgTraceFileSize" value="20"/>
<variable name="maxMsgTraceFiles" value="20"/>

The file size values are in MB.

You should consider keeping your messages.log files for a week or so, so make the number of files large enough.

SAF – Access to RACF

If you are using SAF (RACF or other z/OS security manager) to manage access and authorisation you will have a default entry like

<!-- 
Example SAF Registry 
--> 
<safAuthorization racRouteLog="NONE" id="saf" /> 

<safRegistry id="saf" /> 
<safCredentials unauthenticatedUser="WSGUEST" profilePrefix="MQWEB" 
suppressAuthFailureMessages="false" /> 

I use <safAuthorization racRouteLog="ASIS"… to get RACF violation messages on the joblog during set up.  See here.

<safCredentials suppressAuthFailureMessages="false"…  prints out violation messages.  See here.

Let requests in from outside z/OS

For this to work you have to edit the mqwebuser.xml file and uncomment

<variable name="httpHost" value="*"/> 
<!-- 
-->

By default it only allows requests from the same z/OS system – so browsers elsewhere cannot get access.

dspmqweb/setmqweb – which instance to use?

This page  says you must use

export WLP_USER_DIR=WLP_user_directory

This is fine when you have one mqweb instance on one LPAR.  If you have more, you might want a shell program to set this every time.  For example, the program disMQPAweb.sh

export WLP_USER_DIR=/u/mqmweb/MQPA

Then you can use /usr/lpp/mqm/V9R1M1/web/bin/dspmqweb as before.

If you have multiple releases of MQ in your environment you might want to point to the command in the script, so dspMQPA.sh might have

export WLP_USER_DIR=/u/mqmweb/MQPA
/usr/lpp/mqm/V9R1M1/web/bin/dspmqweb "$@"

Though it might be better to have a shell script mq911 with an optional queue manager parameter.

Selecting which IP stacks to use.

There is an article from IBM which gives two ways of configuring it: changing the httpEndpoint, or specifying an environment variable.
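A sketch of the two approaches (the stack name and address are from the earlier sections):

# in server.env - bind the whole server to one stack
_BPXK_SETIBMOPT_TRANSPORT=TCPIP2

<!-- in mqwebuser.xml - listen only on an address owned by one stack -->
<variable name="httpHost" value="10.1.2.2"/>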

Customise ISPF z/OS UNIX Directory List

In the MQWEB directory are message logs and trace logs.  When the file fills up, it renames the old file to include the date and time, for example messages_20.07.29_16.49.29.0.log, and creates a new messages.log or trace.log.

If you are using ISPF 3.17 (z/OS UNIX Directory List) to look at the files, it only displays the first 15 characters of the file name, so you get lots of files with a name like “messages_20.07.” where 20 is the year, and 07 is the month.

The z/OS UNIX Directory List displays by default some unhelpful fields.  You can rearrange the fields (but not make the filename field wider).
If you go to OPTIONS on the top line, and select “2. Directory List Column Arrangement… ” you can change which fields are displayed, and the order.  I set the widths of all fields to 0, except for

  • Type 04
  • Modified 19 (if you specify a smaller value you only get the YYYY-MM…  not the time)
  • Size 10

The documentation says

  • Modified The date and time the file was last changed.
  • Changed The date and time the status of the file was last changed.

I do not know the difference between these two.

Controlling what is displayed

In the directory list you can use sort commands

  • sort file A
  • sort mod D

Looking at a log or trace file

If you sort by Modified D the newest files will be at the top, so you can look at the “modified” column to find the time a file was created, and so get the order of the files.

You can use the line command / to display the options.

You can use e to edit, or v to view (edit in browse mode).

Browse displays a mess because it does not do conversion.


Liberty on z/OS: Mapping an incoming certificate to a z/OS userid for client certificate authentication – and don’t forget the cookies!

I thought I understood how this worked; I found I didn’t, and then had a few days hunting around for the problem.

The basics

You can use a digital certificate from a web browser (or curl, or other tools) to authenticate to z/OS.  You need to map the certificate to a userid.

A certificate coming in can have a Distinguished Name like CN=adcdd.O=cpwebuser.C=GB (note the ‘.’ not ‘,’ between elements).

Your userid needs to have SPECIAL defined to be able to use the RACDCERT command (SPECIAL, not just GROUP-SPECIAL).

You will need a definition like (see here for the command)

RACDCERT MAP ID(ADCDD ) - 
    SDNFILTER('CN=adcdd.O=cpwebuser.C=GB') - 
    WITHLABEL('adcdd')

or a general definition for those certificates with O=cpwebuser.C=GB, ignoring the CN part

RACDCERT MAP ID(ADCDB ) - 
   SDNFILTER('O=cpwebuser.C=GB') - 
   WITHLABEL('cpwebusergb') 

or using the Issuing Distinguished Name (the Certificate Authority)

IDNFILTER('CN=TESTCA.OU=SSSCA.C=GB')

Using a generic

SDNFILTER('CN=a*.O=cpwebuser.C=GB')

does not work.

If you attempt to use a certificate which is not mapped you get

ICH408I USER(START1 ) GROUP(SYS1 ) NAME(COLIN)
DIGITAL CERTIFICATE IS NOT DEFINED. CERTIFICATE SERIAL NUMBER(0163)  SUBJECT(CN=adcdd.O=cpwebuser.C=GB) ISSUER(CN=SSCA8.OU=CA.O=SSS.C=GB).

It is worth defining these using JCL, because if you try to add a map and it already exists, you get a message saying it exists already.  If you know the userid, you can list the maps associated with it.   If you do not know the userid, there is no practical way of finding out – you have to logon with the certificate and display the userid from the web browser, or extract the list of all users and use LISTMAP on all of them.

Once you have set up the userid, you can connect them to the group to give them access to the EJBROLE profiles.  For example use group names

  • MQPAWCO MQPAMQWebAdminRO Console Read Only.
  • MQPAWCU MQPAMQWebUser  Console User only.  The request operates under the signed on userid authority.
  • MQPAWCA MQPAMQWebAdmin Console Admin.

for queue manager MQPA, the web console (rather than REST), and the access level.
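A sketch of the RACF commands (the userid is an example, and I have assumed the default MQWEB profile prefix):

CONNECT ADCD1 GROUP(MQPAWCA)
PERMIT MQWEB.com.ibm.mq.console.MQWebAdmin CLASS(EJBROLE) ID(MQPAWCA) ACCESS(READ)
SETROPTS RACLIST(EJBROLE) REFRESH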

You may want to set up userids solely for client authentication.  If the userid has NOPASSWORD, it cannot be used to logon with userid and password, and of course the lack of password means the password will not expire.
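A sketch (the userid, name and group are my examples):

ADDUSER ADCD1 NAME('CERT ONLY USER') NOPASSWORD DFLTGRP(TEST)
ALTUSER ADCD2 NOPASSWORD

ADDUSER … NOPASSWORD creates a protected userid; ALTUSER … NOPASSWORD removes the password from an existing one.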

Having a set of userids just for certificate access makes it easier to manage the RACDCERT MAPping.    You have a job with

RACDCERT ID(adcd1) LISTMAP
RACDCERT ID(adcd2) LISTMAP
etc

and search the output for the certificate of interest.

It gets more complicated…

Often the user’s certificate is in the form CN=Colin Paice,o=SSS,C=GB, so if you want to allow all people in the MQADMIN team access, you will need to specify them individually.  It would be easier if the DN had CN=Colin Paice,OU=MQADMIN,o=SSS,C=GB; then you could filter on the OU=MQADMIN.   These could map to a userid MQADM1.

It gets more complicated if someone can work with MQ, and CICS or z/OS Connect, and you have to decide on a userid – MQADM1 or CICSADM1?

Setting up a one to one mapping may be the best solution, so CN=Colin Paice,o=SSS,C=GB maps to CPAICE (or GB070594).   This userid is then added to the appropriate RACF groups to give access to the EJBROLEs, and so to the servers.

How do I tell what is being used?

I could not get Liberty to record an audit record for the logon/matching.   I tried altering the userid to have UAUDIT – but that did not work either.

If you have audit defined on the EJBROLE class profile MQWEB.com.ibm.mq.console.MQWebUser, you will get an audit record in SMF.   This has many fields including

  • Date
  • Time
  • ACCESS
  • SUCCESS – or INSACC (INSufficient Access)
  • ADCDC – userid being used
  • READ – Requested access
  • READ – permitted access
  • EJBROLE – the class
  • MQWEB.com.ibm.mq.console.MQWebUser – the profile
  • CN=adcdd.O=cpwebuser.C=GB – the Distinguished Name of the certificate
  • CN=SSCA8.OU=CA.O=SSS.C=GB – the Issuers (Certificate Authority) of the certificate

From this you can see the userid being used, ADCDC, and the certificate DN CN=adcdd.O=cpwebuser.C=GB.

And to make it more complicated

I deleted the RACDCERT MAP entry, but the web browser continued to work with the user.  I had a cup of tea and a cookie, and the web browser stopped working.   Was this problem connected to a cup of tea and a cookie?

Setting up the initial handshake is expensive.  The system has to do a logon with the certificate to get the userid from the RACDCERT mapping.  It then checks the userid has access to the SERVER profile, then it checks to see if the userid has access to MQWebAdmin, MQWebAdminRO, or MQWebUser.

Once it has done this, it takes the userid and information, encrypts it, and creates the LTPA cookie.   This is sent down to the web browser.

The next time the web browser sends some data, it also sends the cookie. The MQWEB server decrypts the cookie, checks the time stamp to make sure the information is current, and if so, uses it.  The timeline I had was

  • create the RACDCERT mapping from certificate DN to userid
  • use browser to logon to mqweb, using the certificate with the DN
  • it works, mqweb sends down the cookie
  • delete the RACDCERT mapping for the DN
  • restart the browser, logon to mqweb, using the certificate with the DN.  The cookie is passed up – the logon works
  • clear the browser’s cookies – and retry the logon.  It fails as expected.

So ensure the browser cookie is cleared if you change the mapping or EJBROLE access for the user.

Tracing Liberty logon on z/OS – is difficult

I had a few problems logging on to the MQWEB server using certificates, and found there was no documentation to help resolve problems.  The debug information you can get is often not very helpful!

As an extra twist, a userid having no access and getting a “not authorised to the resource” is OK.  For example my userid may have access to MQWebAdmin, but not to MQWebAdminRO – it may be wrong to have access to both, so you will get at least one “not authorised to the resource” violation.

I looked at

  • MQWEB traces – not enough information provided
  • RACF traces – looks wrong to me
  • RACF audit data in SMF – this is all you need

The only way of getting out the data is to enable RACF audit for the profile, and set an option in the mqwebuser.xml file.

To make it even more difficult to resolve problems: when a request arrives at the web server, it encrypts the information, and sends down an LTPA token.    The next request from the browser sends this token, and bypasses some of the initial checks, so you will not see trace entries.  After the LTPA token has expired, the next request will do the full logon again.
To prevent this from happening, clear your browser history and cache before retesting.

MQWEB trace provides information – none of it usable.

I used the trace string

traceSpecification="*=info:zos.native.03=fine"

I also included :UserRegistry=all:Credentials=all which gave more information, not all of it useful.

This provides information like

Description: Entry: checkAuthorizationFast 
serviceResults: 00000050868103e7 
suppresMessages: 0 
logOption: 3 
requestor: 
raco_cb: 0000005082d08290 
acee: 0000000000000000 
accessLevel: 2 
applName: MQWEB 
className: EJBROLE 
entityName: MQWEB.com.ibm.mq.console.MQWebAdminRO 
...
Description: RACROUTE REQUEST=FASTAUTH return 
   returnCode: 0 
   safReturnCode: 0 
   racfReturnCode: 0 
   racfReasonCode: 0

But does not tell you which userid the request was being made for!

Sometimes it gives you full control blocks; other times they are truncated, like MQWEB.com.ibm.mq, so you do not know if this is for Admin, AdminRO, or User.

MQWEB safAuthorization racRouteLog

I enabled RACF AUDIT for the MQWEB.com.ibm.mq.console.MQWebAdminRO and MQWEB.com.ibm.mq.console.MQWebAdmin.

In the mqwebuser.xml you can have the following to display audit messages about EJBROLE checks

<safAuthorization racRouteLog="NONE" id="saf"
reportAuthorizationCheckDetails="false" />

See here – which says

Specifies the types of access attempts to log.

  • ASIS Records the event in the manner specified in the profile that protects the resource, or by other methods such as the SETROPTS option.
  • NOFAIL If the authorization check fails, the attempt is not recorded. If the authorization check succeeds, the attempt is recorded as in ASIS.
  • NONE The attempt is not recorded.
  • NOSTAT The attempt is not recorded. No logging occurs and no resource statistics are updated.

With AUDIT enabled, and with racRouteLog="ASIS", I got the following “failures” (every 10 seconds) due to the web server doing auto-refresh. The checks to MQWebUser worked, and were not recorded in the joblog.

  • 15.51.54 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdmin CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )
  • 15.51.54 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdmin CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )
  • 15.51.55 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdminRO CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )
  • 15.51.55 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdminRO CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )
  • 15.51.55 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdmin CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )
  • 15.51.55 ICH408I USER(ADCDC ) GROUP(TEST ) NAME(ADCDC ) MQWEB.com.ibm.mq.console.MQWebAdminRO CL(EJBROLE ) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )

When I changed it to racRouteLog="NONE" the messages were not produced on the joblog.   There were still records produced in the SMF audit data for the profiles having AUDIT ALL(READ) specified.

I think you should usually run with racRouteLog="NONE", and change it to racRouteLog="ASIS" when you have a problem – but be careful not to generate a flood of messages.

To display SAF messages about other violations use

<safCredentials unauthenticatedUser="WSGUEST" profilePrefix="MQWEB"
suppressAuthFailureMessages="false" mapDistributedIdentities="false"/>

RACF Trace

This gives data about what was accessed, but it reports the userid of the web server, not the userid being checked – so it is not very useful.

I used the command

  • #set trace(RACROUTE(ALL),JOBNAME(CSQ9WEB))
  • traced to GTF with USRP(F44)
  • formatted it with the IPCS command GTFTRACE USR(ALL)

This had data like

  • Trace Type: RACFPOST – this is the “AFTER” request
  • Service number: 00000002 – this is RACROUTE 2, verify
  • RACF Return code: 00000008
  • RACF Reason code: 00000004
  • MQWEB.com.ibm.mq.console.MQWebAdmin – this profile
  • EJBROLE – this class
  • MQWEB – this application
  • ACEE (userid block) with userid=START1, Group=SYS1, Jobname=CSQ9WEB.   This had the userid START1 (from the job), but the userid being tested, ADCDC, was not in the control blocks – so it is no good for telling you which userid had access or not.

So all we can tell is that, for the EJBROLE profile MQWEB.com.ibm.mq.console.MQWebAdmin, someone got a 'no access' return code.

Turn off the RACF trace using

#SET TRACE(NORACROUTE,NOJOBNAME)

RACF Auditing – this worked and gave me most of the information

I turned on RACF auditing using

  • RALTER EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin AUDIT(ALL(READ))
  • RALTER EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO AUDIT(ALL(READ))
  • RALTER EJBROLE MQWEB.com.ibm.mq.console.MQWebUser AUDIT(ALL(READ))
  • SETROPTS RACLIST(EJBROLE) REFRESH

This writes a record to SMF for ALL (failed and successful) requests at READ or above.

I used the RACF-provided exits (IRRADU00 and IRRADU86) to the SMF dump program (IFASMFDP).   This produces a readable file with all of the data.

I wrote an ISPF edit macro in REXX to take the data and extract the key fields; a sketch of the approach follows.
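
Below is a minimal sketch of the approach, not my actual macro: the field names and column positions in the PARSE templates are placeholders, so adjust them to match the layout of the unloaded records on your system.

/* REXX - a minimal sketch of the sort of edit macro I used.       */
/* The column positions in the PARSE templates are placeholders,   */
/* not the real record layout - adjust them to match the unloaded  */
/* records on your system.                                         */
ADDRESS ISREDIT
"MACRO"
"(LAST) = LINENUM .ZLAST"           /* number of lines in the file */
DO I = LAST TO 1 BY -1              /* work backwards so DELETE    */
                                    /* does not renumber the lines */
                                    /* still to be processed       */
  "(REC) = LINE" I
  PARSE VAR REC TYPE 10 RESULT 19 USERID 28 REST
  IF STRIP(TYPE) <> "ACCESS" THEN   /* keep only access events     */
    "DELETE" I
  ELSE DO                           /* rewrite with the key fields */
    NEWREC = LEFT(RESULT,8) LEFT(USERID,8) STRIP(REST)
    "LINE" I "= (NEWREC)"
  END
END
EXIT 0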

Below are the records produced for

  • an SSL connection using a certificate which mapped to userid ADCDB,
  • a logon with userid ADCDC and a password

RESULT  USERID  WANT ALLOWED CLASS   RESOURCE
SUCCESS START1  READ /READ   SERVER  BBG.SECPFX.MQWEB 
SUCCESS ADCDB   READ /UPDATE APPL    MQWEB 
SUCCESS ADCDB   READ /UPDATE APPL    MQWEB 
SUCCESS ADCDC   READ /READ   APPL    MQWEB 
SUCCESS WSGUEST READ /READ   APPL    MQWEB 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 
SUCCESS ADCDC   READ /READ   EJBROLE MQWEB.com.ibm.mq.console.MQWebUser 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 
SUCCESS ADCDC   READ /READ   EJBROLE MQWEB.com.ibm.mq.console.MQWebUser 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 
SUCCESS ADCDC   READ /READ   EJBROLE MQWEB.com.ibm.mq.console.MQWebUser 
SUCCESS ADCDC   READ /READ   EJBROLE MQWEB.com.ibm.mq.console.MQWebUser 
SUCCESS ADCDC   READ /READ   EJBROLE MQWEB.com.ibm.mq.console.MQWebUser 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdmin 
INSAUTH ADCDC   READ /NONE   EJBROLE MQWEB.com.ibm.mq.console.MQWebAdminRO 

From the RESULT column, the userid ADCDC had

  • only READ access to MQWEB.com.ibm.mq.console.MQWebUser,
  • no access to MQWEB.com.ibm.mq.console.MQWebAdmin and MQWEB.com.ibm.mq.console.MQWebAdminRO, as we can see from the INSufficientAUTHority in the RESULT column.  The request wanted READ – but the userid had NONE.    This is OK, as we only want the MQWebUser permissions.

From SUCCESS ADCDB READ /UPDATE APPL MQWEB  we can see that userid ADCDB (from the certificate) wanted READ access, and had UPDATE access, to MQWEB in the class APPL.

The code does not look very optimised; the logic appears to be:

  • hmm it looks like this userid does not have access to this EJBROLE, let me check again
  • and again
  • and again
  • and again
  • and again
  • and again
  • OK give up, and try another resource
  • hmm it looks like you do have access to this other resource -let me check again..
  • and again…
  • … ok  you still have access – let’s go with this.

I expect this is because a high-level Java program called a class to do some work, which checked the access; that class called another class which did its own checks, and so on.  Understandable, but not efficient coding.

This all worked – I could see all of the access requests – but sadly I did not get a record saying "this certificate…. was mapped to userid ADCDB".

You need a SAF Angel for Liberty Web Server on z/OS

I spent a week trying to get MQWEB on z/OS to work with digital certificates, using RACF as the repository, and had lots of frustrations and some successes. I found that to get security working you need an Angel process.

Some parts are well documented, other parts are not, so I'll try to fill some of the gaps.

How does MQWEB do security?

The Liberty server has a couple of ways of doing authentication and authorisation.

  • The basic repository, where you define users, passwords and their role (MQWebAdmin, MQWebAdminRO or MQWebUser) within the xml configuration file (a sketch follows this list).   This provides the most basic level of protection – for example the MQ administrator has to create and maintain the passwords.  It is easy to set up, but not very secure.
  • Using the System Authorization Facility (SAF) interface.  This is an interface which applications can use to get to the security back end.  There is a choice of back ends, for example RACF from IBM, and Top Secret from CA.  This can be configured to be very secure, but needs more administrator work to set up.
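
For comparison, here is a sketch of the basic repository approach, based on the shape of the basic_registry.xml sample shipped with MQ – the user name, password and realm are illustrative, so check the sample at your level of MQ:

<basicRegistry id="basic" realm="defaultRealm">
    <user name="mqadmin" password="passw0rd"/>
</basicRegistry>
<enterpriseApplication id="com.ibm.mq.console">
    <application-bnd>
        <security-role name="MQWebAdmin">
            <user name="mqadmin" realm="defaultRealm"/>
        </security-role>
    </application-bnd>
</enterpriseApplication>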

How does SAF work?

There are RACF profiles which control access to restricted facilities.  If you have access to the profile you can perform the function.  For example, when you logon to the server, the server thread has to run as your userid, to allow your userid access to resources.  You set up a profile, and explicitly give the server access to that profile, permitting the server to "run on behalf of another userid".
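
For MQWEB, with profilePrefix MQWEB, the profile checked is BBG.SECPFX.MQWEB in the SERVER class – you can see this check succeed for the server userid START1 in the SMF audit data above.  A sketch of the RACF definitions, assuming your server also runs as START1:

RDEFINE SERVER BBG.SECPFX.MQWEB UACC(NONE)
PERMIT BBG.SECPFX.MQWEB CLASS(SERVER) ID(START1) ACCESS(READ)
SETROPTS RACLIST(SERVER) REFRESH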

There is a C function on z/OS that allows my userid to check to see if your userid is authorised to a resource.    If my userid has access to the correct profile, my userid can get the information about your userid.

Some “system-like” functions assume the requester is authorised, and bypass some of the checks.  For example the RACROUTE FASTAUTH check request.   These have another level of control in that they need to be in a restricted, Supervisor (kernel) state, to be able to issue the requests.

What is the angel process?

Instead of giving Java programs access to this restricted Supervisor mode, there is an Angel address space which can execute the restricted requests, and the Java program can call the code running in the Angel address space.  (Under the covers there is a Program Call to execute the code, in the same way that MQ and DB2 requests do.)   Of course, there is a RACF profile to control which address spaces can access the Angel, and other profiles to configure what restricted functions the Angel process can execute.

If the Angel process was not running, I could not logon to MQWEB.   There were messages saying my userid did not have access to MQWebAdmin, MQWebAdminRO or MQWebUser.   The checking is done within Liberty and is very simple and restrictive – and does not work, because it looks for group names MQWebAdmin etc., but MQWebAdmin is too long for a z/OS group name, which can have at most 8 characters.

Setting up the Angel process.

  • The Angel process can be shared by all of the web servers on the LPAR.
  • In a highly available environment you need more than one Angel on an LPAR.
  • MQ ships Angel code, but it may be at a different level to other Liberty servers' Angel code.  You should run the latest level of code, but I expect that if they are reasonably current (within a year) they should all work OK.
  • This gives a good overview of the Angel.
  • This also gives a good overview and instructions
  • See here for configuration instructions.
  • I think the only Angel interfaces used by MQWEB are SAFCRED (SAF Credentials) and PRODMGR, but it does no harm having access to services you do not use.
  • You cannot stop an Angel process if it has servers “connected” to it.
  • If you cancel the Angel, the Web Server stops working; it may get CEE3250C The system or user abend S0D6 R=00000027 was issued in the message.log file, and abend.
  • For availability you may want two queue managers on an LPAR, so you need two Angel processes.
  • There is no configuration you do to an Angel, so the only reason why you may want to shut down the Angel during normal running is to apply fixes.
  • You need to ensure the Angel process is started before the Web Server is started, as the Web Server only connects at start up, see below.
  • Each Angel running in an LPAR needs a unique name.  You can have a default, unnamed Angel, or give your Angel a name.

I called my angel process ANGEL.   WAS and z/OS Connect talk about their Angels with names like BBZANGL and BAQZANGL, which are hard to remember and easy to mistype.

You start an angel process (once the JCL has been configured) using a command like

  • S ANGEL
  • S ANGEL,NAME=ANGEL1
  • S ANGEL.ANGEL1,NAME=ANGEL1
  • S ANGEL.ANGEL2,NAME=ANGEL2

The first command starts the default, unnamed, Angel.

Using the S ANGEL.name command is useful if you have more than one angel, as it means you can use the stop command, P name, against that particular instance, as shown below.
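
For example, to start, and later stop, the instance named ANGEL1:

  • S ANGEL.ANGEL1,NAME=ANGEL1
  • P ANGEL1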

Configure your web server to end if the angel is not available

You can configure your Liberty server to end if the Angel process is not running.    Without it, the Web Server would start, and requests to use it would fail, because the unauthorised interface would be used. Using this option means you find out Sunday night and not when your business starts at 0900 on Monday morning!

You configure it by editing the server's jvm.options file and adding

-Dcom.ibm.ws.zos.core.angelRequired=true

If the Angel is not available, this statement prevents the Web Server from starting.

You can specify the angel name using

-Dcom.ibm.ws.zos.core.angelName=MYANGEL

If you are using the default angel name, comment this out:

# -Dcom.ibm.ws.zos.core.angelName=
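
Putting these together, a jvm.options file for a server using an Angel named MYANGEL would have one option per line, for example

-Dcom.ibm.ws.zos.core.angelRequired=true
-Dcom.ibm.ws.zos.core.angelName=MYANGEL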

When your Web Server starts you should get messages in the message.log  like

CWWKB0103I: Authorized service group SAFCRED is available.

Tracing violations.

You can start an Angel using

S ANGEL,SAFLOG=YES

I gave the MQWEB server userid no access to SAFCRED, and restarted MQWEB.  With SAFLOG=YES, I got the following on the joblog

ICH408I USER(START1 ) GROUP(SYS1 ) NAME(####################)
BBG.AUTHMOD.BBGZSAFM.SAFCRED CL(SERVER )
INSUFFICIENT ACCESS AUTHORITY
ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )

and in the MQWEB directory/logs/message.log file

CWWKB0104I: Authorized service group SAFCRED is not available.

With SAFLOG=NO, there was no message on the joblog, but the same message appeared in the MQWEB directory/logs/message.log file

CWWKB0104I: Authorized service group SAFCRED is not available.

From this we can see that you should always specify SAFLOG=YES.

Angel commands

You can use

F ANGEL,DISPLAY,SERVERS

This gives a message like

CWWKB0052I ACTIVE SERVER ASID 3f JOBNAME CSQ9WEB

You can use

F ANGEL,TRACE=Y

This writes a trace to //STDOUT in the Angel job.

The output is like

Trace: 2020/07/25 15:38:07.877974 t=8D5E88 key=S2 (04002008)
Description: write_to_operator_location, entry
message_p: CWWKB0057I WEBSPHERE FOR Z/OS ANGEL PROCESS ENDED NORMALLY

This is not very helpful, as it traces what the Angel process is doing, rather than what the Web Servers using it are doing.