Have a good REST and save a fortune in CPU with Python

Following on from Have a good REST and save a fortune in CPU, this post gives some guidance on reducing the costs of using Liberty based servers from a Python program.

Certificate set up

I used certificate authentication from Linux to z/OS:

  • I defined a certificate on Linux using OpenSSL.
  • I sent the Linux CA certificate to z/OS and imported it to the TRUST keyring.
  • I created a certificate on z/OS and installed it into the KEY keyring.
  • I exported the z/OS CA certificate, sent it to Linux, and created a file called tempca.pem.

Python set up

Define the names of the user certificate private key, and certificate

cf = "colinpaicesECp256r1.pem"
kf = "colinpaicesECp256r1.key.pem"
cpcert = (cf, kf)

Define the name of the certificate for validating the server’s certificate

v = 'tempca.pem'

Set up a cookie jar to hold the cookies sent down from the server

import requests

jar = requests.cookies.RequestsCookieJar()

Define the URL and request

geturl = "https://10.1.1.2:9443/ibmmq/rest/v1/admin/qmgr/"

Define the headers

import base64

# the HTTP Basic authentication header needs the "Basic " scheme prefix
useridPassword = b"Basic " + base64.b64encode(b"colin:passworm")
my_header = {
    'Content-Type': 'application/json',
    'Authorization': useridPassword,
    'ibm-mq-rest-csrf-token': ' '
}
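As an aside, the requests package can build the Basic authentication header for you, avoiding the hand-coded base64 step (a sketch, using the same credentials as above):

from requests.auth import HTTPBasicAuth

# equivalent to sending 'Authorization: Basic <base64 of colin:passworm>'
auth = HTTPBasicAuth('colin', 'passworm')
# then pass auth=auth on the get() call instead of the Authorization header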

An example flow of two requests, using two connections

For example using python

s = requests
response1 = s.get(geturl,headers=my_header,verify=v,cookies=jar,cert=cpcert)
response2 = s.get(geturl,headers=my_header,verify=v,cookies=jar,cert=cpcert)

creates two sessions; each one does a TLS handshake, issues a request, gets a response, and ends.

An example of two requests using one session

For example using python

s = requests.Session()
response1 = s.get(geturl,headers=my_header,verify=v,cookies=jar,cert=cpcert1)
response2 = s.get(geturl,headers=my_header,verify=v,cookies=jar,cert=cpcert2)

The initial request has one expensive TLS handshake; the second request reuses the session.

Reusing the session means there was only one expensive Client Hello/Server Hello exchange for the whole conversation.

Even though the second request specified a different set of certificates, the certificates from when the session was established, using cpcert1 were used. (No surprise here as the certificates are only used when the session is established).

For authentication, in both cases the first request received an LtpaToken2 cookie.

When this cookie was passed up on successive requests, the userid information from the first request was used.
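You can see what the server sent down by inspecting the cookie jar after the first request (a sketch, reusing the jar defined above):

# after response1 the jar holds the LtpaToken2 cookie the server sent down
for cookie in jar:
    print(cookie.name, cookie.domain, cookie.path)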

What is the difference?

I ran a workload of a single thread doing 200 requests. The ratios are important, not the absolute values.

                      Shared session   One session per request
TCP flows to server   1                11
CPU cost              1                5
Elapsed time          1                6
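For reference, the measurement loop was of this shape (a sketch, not the actual test harness; it reuses geturl, my_header, v, jar and cpcert from the snippets above):

import time
import requests

def run(make_client, n=200):
    # make_client returns either the requests module (a new connection per
    # request) or a shared Session (one TLS handshake, then reused)
    client = make_client()
    start = time.time()
    for _ in range(n):
        client.get(geturl, headers=my_header, verify=v, cookies=jar, cert=cpcert)
    return time.time() - start

print("shared session     :", run(requests.Session))
print("session per request:", run(lambda: requests))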

What they don’t tell you about using a REST interface.

After I stumbled on a change to my Python program which gave 10 times the throughput to a web server, I realised that I knew only a little about using REST. It is the difference between the knowledge to get a Proof Of Concept working and the knowledge to run properly in production; the difference between one request a minute and 100 requests a second.

This blog post compares REST and traditional client server and suggests ways of using REST in production. The same arguments also apply to long running classical client server applications.

A REST request is a stateless, self-contained request which you send to the back-end server, getting one response back. It is also known as a one-shot request. Traditional client server applications can send several requests to the back-end as part of a unit of work.

In the table below I compare an extreme REST transaction with an extreme traditional client server application.

Connection
  • REST: Create a new connection for every request.
  • Client Server: Connect once, stay connected all day, reuse the session, disconnect at end of day.

Workload balancing
  • REST: The request can select from any available server, so on average requests will be spread across all servers. If a new server is added, it will get used.
  • Client Server: The application connects to a server and stays connected. If the session ends and restarts, it may select a different server. If a new server is added, it may not be used.

Authentication
  • REST: Each request needs authentication. If the userid is invalidated, the request will fail. Note that servers cache userid information, so it may take minutes before the request is re-authenticated.
  • Client Server: Authentication is done as part of the connection. If the userid is invalidated during the day, the application will carry on working until it restarts.

Identification
  • REST: Both userid+password and client certificate can be used to give the userid.
  • Client Server: Both userid+password and client certificate can be used to give the userid. If you want to change which identity is used, you should disconnect and reconnect.

Cost
  • REST: It is very expensive to create a new connection, and even more expensive when using TLS, because of the generation of the secret key. As a result it is very, very expensive to use one-shot REST requests.
  • Client Server: The expensive connection creation is done once, at start of day. Successive requests do not have this overhead, so are much cheaper.

Renew TLS session key
  • REST: Because there is only one transfer per connection, you do not need to renew the encryption key.
  • Client Server: Using the same session key for a whole day is weak, as it makes it easier to break. Renewing the session key after an amount of data has been processed, or after a time period, is good practice.

Request
  • REST: Some requests are suitable for packaging in one request, for example where just one server is involved.
  • Client Server: This can support more complex requests, for example DB2 on system A, and MQ on system B.

Number of connections
  • REST: The connection is active only when it is used.
  • Client Server: The connection is active even though it may not have been used for a long time. This can waste resources, and prevent other connections from being made to the server.

Statistics
  • REST: You get an SMF record for every request. Creating an SMF record costs CPU.
  • Client Server: You get one SMF record for a collection of work, so reducing the overall costs. The worst case is one SMF record for the whole day.

What are good practices for using REST (and Client Server) in production?

Do not have a new connection for every request. Create a session which can be reused for perhaps 50 requests or ten minutes, depending on workload. This has the following advantages:

  • You reduce the cost of creating a new connection for every request, by reusing the session.
  • You get workload balancing. With the connection ending and being recreated periodically, you will get the requests spread across all available servers. You should randomise the time a connection is active for, so you do not get a lot of time-out activity occurring at the same time.
  • You get regular re-authentication.
  • The TLS key is renewed periodically.
  • You avoid long running connections doing nothing.
  • For a REST request you may get fewer SMF records; for a Client-Server you get more SMF records, and so more granular data.

How can I do this?

With Java you can open a connection, and have the client control how long it is open for.

With Python and the requests package, you can use

s = requests.Session()
res = s.get(geturl, headers=my_header, verify=v, cookies=jar, cert=cpcert)
...
res = s.get(geturl, headers=my_header, verify=v, cookies=jar, cert=cpcert)
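To recycle the session periodically, as suggested above, you could wrap the get in something like the following (a sketch; the recycle threshold and class name are made up, and geturl, my_header, v and cpcert come from the earlier snippets):

import random
import requests

MAX_REQUESTS = 50  # illustrative recycle threshold

class RecycledSession:
    """Reuse a Session, but rebuild it after a randomised number of
    requests so re-balancing, re-authentication and TLS re-keying happen."""
    def __init__(self):
        self.session = None
        self.budget = 0

    def get(self, url, **kwargs):
        if self.session is None or self.budget <= 0:
            if self.session is not None:
                self.session.close()  # drop the old TLS connection
            self.session = requests.Session()
            # randomise the lifetime so all clients do not recycle at once
            self.budget = random.randint(MAX_REQUESTS // 2, MAX_REQUESTS)
        self.budget -= 1
        return self.session.get(url, **kwargs)

s = RecycledSession()
res = s.get(geturl, headers=my_header, verify=v, cert=cpcert)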

With Curl you can reuse the session.

Do I need to worry if my throughput is low?

No. If you are likely to have only one request to a server, and so cannot benefit from having multiple requests per connection, you might just as well stay with a “one shot” and not use any of the tuning suggestions.

Looking at the performance of Liberty products and what you can do with the information

The Liberty Web Server on z/OS can produce information on the performance of “transactions”. It may not be as you expect, and it may not be worth collecting it. I started looking into this to see why certain transactions were slow on z/OSMF.

This blog post covers

  • How does the web server work?
  • How can I see what is going on?
  • Access logs
  • SMF 120
  • WLM
  • Do I want to use this in production?

How does it work?

A typical Web Server transaction might use a database to debit your account, and to credit my account. I think of this as a fat transaction because the application does a lot of work.

z/OSMF runs on top of Liberty, and allows you to run SDSF and ISPF within a web browser, and has tools to help automate work. You can also use REST services, so you can send a self-contained HTTP request to z/OSMF, for example to request information about data sets, or jobs running on the system. Both of these send a request to z/OSMF, which might send a request to a TSO userid, get the response back and pass the response back to the requester. I think of this as a thin transaction because the application running on the web server is mainly acting as a router or broker. What the end user sees as a “transaction” may be many micro services – each of which is a REST request.

How can I see what is going on?

You can see what “transactions” or requests have been issued from

  • the access log – a flat file of requests
  • SMF 120 records
  • WLM reports.

Access logs

In the <httpEndpoint … accessLoggingRef="accessLogging"…>, the accessLogging is a reference to an <httpAccessLogging…> statement. This creates a flat file containing a record of all inbound HTTP client requests. If you have multiple httpEndpoint statements, you can have multiple accessLogging files. You can control the number and size of the files.

This has information with fields like

  • 10.1.0.2 – the IP address the request came in from
  • COLIN – the userid
  • {16/May/2021:14:47:40 +0000} – the date, time and time zone of the event
  • PUT /zosmf/tsoApp/tso/ping/COLIN-69-aabuaaac – the request
  • HTTP/1.1 – the type of HTTP request
  • 200 – the HTTP return code
  • 78 – the number of bytes sent back.

You can have multiple httpEndpoint definitions, for example specifying different IP address and port combinations. These definitions can point to a shared or individual httpAccessLogging statement, and so can share (or not) the flat files. This allows you to specify that one port will use the accessLogging, and another port will not have access logging.

The SMF 120 records.

The request logging writes a record to SMF for each request. You enable it with:

<server>
  <featureManager>
    <feature>zosRequestLogging-1.0</feature>
  </featureManager>
</server>

I have not been able to find a formatter for these records from IBM, so I have written my own; it is available on GitHub.

It produces output like

Server version      :2                                                              
System              :S0W1                                                           
Syplex              :ADCDPL                                                         
Server job id       :STC02771                                                       
Server job name     :IZUSVR1                                                        
config dir          :/var/zosmf/configuration/servers/zosmfServer/                  
codelevel           :19.0.0.3  
                                                     
Start time          :2021/05/16 14:42:33.955288                                     
Stop  time          :2021/05/16 14:42:34.040698                                     
Duration in secs    : 0.085410                                                      
WLMTRan             :ZCI4                                                           
Enclave CPU time    : 0.042570                                                      
Enclave ZIIP time   : 0.042570                                                      
WLM Resp time ratio :10.000000                                                      
userid long         :COLIN                                                          
URI                 :/zosmf/IzuUICommon/externalfiles/sdsf/index.html               
CPU Used Total      : 0.040111                                                      
CPU Used on CP      : 0.000000                                                      
CPU Delta Z**P      : 0.040111 
                                                     
WLM Classify type   :URI                                                            
WLM Classify by     :/zosmf/IzuUICommon/externalfiles/sdsf/index.html
WLM Classify type   :Target Host                            
WLM Classify by     :10.1.1.2                               
WLM Classify type   :Target Port                            
WLM Classify by     :10443   
                               
Response bytes      :7003                                   

Origin port         :57706                                  
Origin              :10.1.0.2                                              

This has information about

  • The Liberty instance, where it is running and configuration information
  • Information about the transaction, start time, stop time, duration and CPU time used
  • WLM information about the request. It was classified as
    • URI:…index.html
    • Target Host:10.1.1.2
    • Target port:10443
  • 7003 bytes were sent to the requester
  • the request came from 10.1.0.2 port 57706

The SMF formatter program also summarises the records and this shows there are records for

  • /zosmf/IzuUICommon/externalfiles/sdsf/js/ui/theme/images/zosXcfsCommand_enabled_30x30.png
  • /zosmf/IzuUICommon/externalfiles/sdsf/js/ui/theme/sdsf.css
  • /zosmf/IzuUICommon/externalfiles/sdsf/sdsf.js
  • /zosmf/IzuUICommon/externalfiles/sdsf/IzugDojoCommon.js
  • /zosmf/IzuUICommon/persistence/user/com.ibm.zos.sdsf/JOBS/JOBS

This shows there is a record for each part of the web page: icons, JavaScript files and style sheets.

Starting up SDSF within z/OSMF created 150 SMF records! Refreshing the data just created 1 SMF record. The overhead of creating all the SMF records for one “business transaction” may be too high for production use.

As far as I can tell this configuration is server wide. You cannot enable it for a specific IP address and port combination.

WLM reports

Much of the data produced in the records above can be passed to WLM. This can be used to give threads appropriate priorities, and can produce reports.

You enable WLM support using

<featureManager>
  <feature>zosWlm-1.0</feature>
</featureManager>
<zosWorkloadManager collectionName="MOPZCET"/>
<wlmClassification>
  <httpClassification transactionClass="ZCI6" resource="/zosmf/desktop/"/>
  <httpClassification transactionClass="ZCI1" resource="/**/sdsf/**/*"/>
  <httpClassification transactionClass="ZCI3" resource="/zosmf/webispf//"/>
  <httpClassification transactionClass="ZCI4" resource="/**/*"/>
  <httpClassification transactionClass="ZCI2" resource="IZUGHTTP"/>
  <httpClassification transactionClass="ZCI5" port="10443"/>
</wlmClassification>

Here, httpClassification maps a z/OSMF resource to an 8 character transaction class. The records are processed from the top until there is a match. For example, port=10443 would not be used, because the generic resource="/**/*" definition matches first.
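The first-match behaviour can be pictured like this (a Python sketch of the idea, not Liberty's implementation; fnmatch is only an approximation of the /**/* wildcard syntax):

import fnmatch

# rules are processed from the top until one matches, mirroring the
# order of the httpClassification statements above
rules = [
    ("ZCI6", "/zosmf/desktop/"),
    ("ZCI1", "/*sdsf*"),
    ("ZCI3", "/zosmf/webispf/*"),
    ("ZCI4", "/*"),
]

def classify(uri):
    for tclass, pattern in rules:
        if fnmatch.fnmatch(uri, pattern):
            return tclass
    return "default"

# matched by the sdsf rule before the generic /* rule is reached
print(classify("/zosmf/IzuUICommon/externalfiles/sdsf/index.html"))  # ZCI1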

These need to be mapped into the WLM configuration…

WLM configuration

You can configure WLM through the WLM configuration panels.

option 6. Classification rules.

option 3 to modify CB(Component broker)

          -------Qualifier--------                 -------Class--------  
 Action   Type      Name     Start                  Service     Report   
                                          DEFAULTS: STCMDM      TCI2
  ____  1 CN        MOPZCET  ___                    ________    THRU
  ____  2   TC        ZCI1     ___                  SCI1        RCI1
  ____  2   TC        ZCI2     ___                  SCI2        RCI2
  ____  2   TC        ZCI3     ___                  SCI3        RCI3
  ____  2   TC        ZCI4     ___                  SCI4        RCI4
  ____  2   TC        ZCI5     ___                  SCI5        RCI5
  ____  2   TC        ZCI6     ___                  SCI6        RCI6
  ____  2   TC        THRU     ___                  THRU        THRU

For the Type=CN, Name=MOPZCET entry, the value ties up with the <zosWorkloadManager collectionName="MOPZCET"/> above. Type=CN is for CollectionName.
The subtype 2 entries, such as TC Name ZCI4, are for TransactionClass, which ties up with an httpClassification transactionClass statement.

The service class SCI* controls the priority of the work; the report class RCI* allows you to produce a report by this name.

If you make a change to the WLM configuration you can save it from the front page of the WLM configuration panels, Utilities-> 1. Install definition, and activate it using Utilities-> 3. Activate service policy.

If you change the statements in Liberty or z/OSMF I believe you have to restart the server.

How to capture the data

The data is written out to SMF records on a timer, or on the SMF end-of-interval broadcast. If you change the interval, SMF sends an end-of-interval broadcast and writes the records to SMF. For example, on my dedicated test system I use the operator command setsmf intval(10) to change the interval to 10 minutes. After the next test, I use setsmf intval(15), etc.

The data is kept in SMF buffers, and you may have to wait for a time before the data is written out to external storage. If SMF data is being produced on a regular basis, it will be flushed out.

How to report the data

I copy the SMF data to a temporary data set

//IBMURMF  JOB 1,MSGCLASS=H,RESTART=POST 
//* DUMP THE SMF DATASETS 
// SET SMFPDS=SYS1.S0W1.MAN1 
// SET SMFSDS=SYS1.S0W1.MAN2 
//* 
//SMFDUMP  EXEC PGM=IFASMFDP 
//DUMPINA  DD   DSN=&SMFPDS,DISP=SHR,AMP=('BUFSP=65536') 
//DUMPINB  DD   DSN=&SMFSDS,DISP=SHR,AMP=('BUFSP=65536') 
//* MPOUT  DD   DISP=(NEW,PASS),DSN=&RMF,SPACE=(CYL,(1,1)) 
//DUMPOUT  DD   DISP=SHR,DSN=IBMUSER.RMF,SPACE=(CYL,(1,1)) 
//SYSPRINT DD   SYSOUT=* 
//SYSIN  DD * 
  INDD(DUMPINA,OPTIONS(DUMP)) 
  INDD(DUMPINB,OPTIONS(DUMP)) 
  OUTDD(DUMPOUT,TYPE(70:79)) 
  DATE(2020316,2021284) 
  START(1539) 
  END(2359) 
/* 

and display the report classes

//POST EXEC PGM=ERBRMFPP 
//* PINPUT DD DISP=SHR,DSN=*.SMFDUMP.DUMPOUT 
//MFPINPUT DD DISP=SHR,DSN=IBMUSER.RMF 
//SYSIN DD * 
SYSRPTS(WLMGL(RCPER)) 
/* 

The output in //PPXSRPTS was

REPORT CLASS=RCI4                                                      
-TRANSACTIONS--  TRANS-TIME HHH.MM.SS.FFFFFF  
AVG        0.01  ACTUAL                69449  
MPL        0.01  EXECUTION             68780  
ENDED       160  QUEUED                  668  
END/S      0.13  R/S AFFIN                 0  
#SWAPS        0  INELIGIBLE                0  
EXCTD         0  CONVERSION                0  
                 STD DEV              270428  
                                              
----SERVICE----   SERVICE TIME  ---APPL %---  
IOC           0   CPU    2.977  CP      0.05  
CPU        2551   SRB    0.000  IIPCP   0.05  
MSO           0   RCT    0.000  IIP     0.19  
SRB           0   IIT    0.000  AAPCP   0.00  
TOT        2551   HST    0.000  AAP      N/A  
/SEC          2   IIP    2.338                
ABSRPTN     232   AAP      N/A                
TRX SERV    232                               
                                              

There were 160 “transactions” within the time period, or 0.13 per second. The average response time was 69449 microseconds, with a standard deviation of 270428. This is a very wide standard deviation, so there was a mixture of long and short response times.

The total CPU for this report class was 2.977 seconds of CPU, or 0.019 seconds per “transaction”.

Do I want to use this in production?

I think that the amount of data produced is manageable for a low usage system. For a production environment where there is a lot of activity, the amount of data produced, and the cost of producing it, may be excessive. This could be a case where the cost of collecting the data is much larger than the cost of running the workload.

As z/OSMF acts as a broker, passing requests between end points, you may wish just to use your existing reporting structures.

I used z/OSMF to display SDSF data, and to set up an ISPF session within the web browser. This created two TSO tasks for my userid. If you include my traditional TSO session and the two from z/OSMF, this is three TSO sessions in total running on the LPAR.

Setting up a highly available web server is a “yes – but” problem

I’ve been setting up a Liberty web server, as used in MQWEB, z/OS Connect, z/OSMF and so on, and was looking into how to make this available, so I could move the web server to a different LPAR or TCP/IP instance.

Moving it should be easy – and it is – but there are things you need to think about. It is a bit like going around a maze trying to find the solution.

How do I get to the fail over system?

You start the web server on a different LPAR in the sysplex. How can you support this to allow your browser to get to the backend, without changing the URL?

You have two choices.

  1. You change your DNS look up, or router, so your request goes via a different connection (think different bit of wire) to the failover LPAR. These changes can be automated to some extent.
  2. Multiple z/OS images can listen for an IP address.

These work but…

The certificate sent down from the web server contains the address of the LPAR as part of the SAN. When the browser processes it, it compares the address in the certificate's SAN with the address it used to connect. If they do not match, the browser produces an error message.

How do I get over the certificate SAN and the IP address difference?

You have a couple of choices

  1. Use a unique certificate on each LPAR. Yes, this works, but there is more administrative work to set up. You could set up two web servers and only use one at a time. This works, but it is unnecessary work.
  2. Use a virtual IP address. In TCP networking the end of every connection is a “device” or system with its unique IP address. You can give the web server its own IP address, which is “virtual” as it is not a device or system. With this, when you start your web server on a different LPAR, it has the same IP address. To use this you have to configure z/OS to support it. You can set this up
    1. To support multiple web servers, and distribute the work to them
    2. Have a hot standby
    3. To route traffic to where the web server has started.

Yes, these work, but – it is not easy to set up. I’ll be blogging how to do this.

mqweb – who did what to what, when, and how long did it take?

You can provide an audit trail of the HTTP requests coming into your server. This is described in the base Liberty documentation, and works in mqweb.

Within the httpEndpoint tag you can add

<httpEndpoint host="${httpHost}" httpPort="${httpPort}" httpsPort="${httpsPort}" id="defaultHttpEndpoint">
  <httpOptions removeServerHeader="false"/>
  <accessLogging enabled="true" filePath="${server.output.dir}/logs/http_access.log"
    logFormat='a:%a A:%A D:%D h:%h HeaderHost:%{Host}i HeaderOrigin:%{Origin}i m:%m R:%{R}W t:%{t}W u:%u U:%U X:%{X}W r:"%r" s:%s'
    maxFileSize="20"
    maxFiles="0"
  />
</httpEndpoint>

See here for information on the accessLogging,  here for the syntax of the <accessLogging…> and here for the logFormat format options.

From the logFormat page

  • %a: Remote IP address
  • %A: Local IP address
  • %b: Response size in bytes excluding headers
  • %B: Response size in bytes excluding headers; 0 is printed instead of – if no value is found
  • %{CookieName}C: The request cookie specified within the brackets, or if the brackets are not included, prints all of the request cookies
  • %D: The elapsed time of the request – millisecond accuracy, microsecond precision
  • %h: Remote host
  • %{HeaderName}i: HeaderName header value from the request
  • %m: Request method
  • %{HeaderName}o: HeaderName header value from the response
  • %q: Output the query string with any password escaped
  • %r: First line of the request
  • %{R}W: Service time of the request from the moment the request is received until the first set of bytes of the response is sent – millisecond accuracy, microsecond precision
  • %s: Status code of the response
  • %t: NCSA format of the start time of the request
  • %{t}W: The current time when the message to the access log is queued to be logged, in normal NCSA format
  • %u: Remote user according to the WebSphere Application Server specific $WSRU header
  • %U: URL Path, not including the query string
  • %{X}W: Cross Component Tracing (XCT) Context ID

So

logFormat='a:%a A:%A D:%D h:%h HeaderHost:%{Host}i HeaderOrigin:%{Origin}i m:%m R:%{R}W t:%{t}W u:%u U:%U r:"%r" s:%s'

gave me

  • a:127.0.0.1 remote IP address
  • A:127.0.0.1 local host
  • D:225960 the elapsed time of the request, in microseconds
  • h:127.0.0.1 remote host
  • HeaderHost:127.0.0.1:9443 the Host header
  • HeaderOrigin:- the Origin header (missing in this case)
  • m:GET request method
  • R:186930 Service time of the request from the moment the request is received until the first set of bytes of the response is sent
  • t:[27/Feb/2020:17:04:27 +0000] NCSA format of the start time of the request
  • u:colinpaice remote user
  • U:/ibmmq/console url path
  • r:”GET /ibmmq/console HTTP/1.1″ First line of the request
  • s:302 Status code of the response

There are other log formatting options available, I picked those I thought were most useful.

Note when using the MQ Console from a browser, the interface is chatty. I had 20 requests to refresh one window.

Other ways of formatting the data

  • I separated each field with a ‘,’ and could read it into a spreadsheet (see the sketch after this list).
  • You could configure your log format string to produce the output in JSON format, to make it easier to post process.
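For example, a sketch of post-processing such a comma-separated log in Python (the file name and the field prefixes follow the logFormat used above; both are assumptions):

import csv

# each line looks like: a:127.0.0.1,A:127.0.0.1,D:225960,...,s:302
with open("http_access.log") as f:
    for row in csv.reader(f):
        fields = dict(item.split(":", 1) for item in row if ":" in item)
        # print userid, URL path and elapsed time for each request
        print(fields.get("u"), fields.get("U"), fields.get("D"))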

Is mqweb secure enough? Depends on what you mean by secure enough.

When doing research into the MQ Console and REST interface, I was sometimes surprised at how “insecure” mqweb was, but with a bit more thought it seemed more reasonable.

One of the questions I had was, is authentication with certificates better than using userid and password?  See here.

What do I mean by secure?

You can buy a safe with a combination lock to secure your valuables.    It will not be 100% secure.   It will have a statement like “This safe will provide protection from fire for up to 1 hour”.  The number of combinations is limited, and so it will take someone some hours to try them all.   They may be lucky and guess it first go!  An insecure safe would have one combination wheel.  The more wheels, the longer it should take to find the combination.

Digital certificates are considered a good way of providing “secure” communication. However, if you are careless with your private information, then other people have easy access to your secrets. The private information for some encryption types is based on multiplying two very large prime numbers together, the theory being that it will take you a long time to factorize the product. If I were a government, I would take the “book of prime numbers”, go through each number, multiply it by each other number in the book, and generate a reverse dictionary of factor1 * factor2 -> public key. You need a lot of computing power as the numbers are very large, but computing power is cheap, and you only need to do it once. Take the public key, and look up the factors. Of course it is not that simple; you might need a dictionary for each encryption type, as well as a lot of CPU power. These certificates give some security, but are not totally secure.

You can have locks on your house, but if you leave the doors unlocked – it is easy to break in.  People often lock the front door, but leave the back door or a window open.

Security is down to how long it will take someone to access the information.  All components have to be secure; you cannot be “half secure”.

What do I want to secure?

This is where it starts to get murky.  Some of the HTTP protocol protection depends on “state change”, so changing a field in a database is a state change, creating an object is a state change, but browsing a record is not a state change.

With the MQ REST API, an HTTP GET maps to a browse; an HTTP POST is a change or put; and an HTTP DELETE is a destructive get. With the standard HTTP protection, POST and DELETE have more protection than GET.
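In requests terms, the mapping looks like this (a sketch; the queue manager and queue names are the ones used later in this post, and the TLS and authentication details are omitted):

import requests

s = requests.Session()
msgurl = "https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message"
headers = {"ibm-mq-rest-csrf-token": ""}  # needed for state-changing requests

browse = s.get(msgurl, headers=headers)      # browse: no state change
removed = s.delete(msgurl, headers=headers)  # destructive get: a state change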

I consider being able to browse a queue which has financial transaction information, as insecure.  You can obtain bank account details, which could be used in a different sort of attack.   In mqweb, the protection for browse is weaker than for POST and DELETE.

Surprise number 1 – which certificate should the browser use?

My first surprise when using digital certificates for identification was how the browser decides which certificate to use.

I said here

If client authentication is used, the certificates are read from the server’s trust store. The DN of the self signed certificates and the DN of the CA certificates are extracted and sent down to the browser.

The browser takes this list and checks through the “user” certificates in its key store, for those self signed certificates with a matching DN, or those certificates which have been signed by the CA.

The list of matching certificates is displayed for the user to select from.

From then on, communication to that https://url:port will use that certificate.  If you create a new tab within the browser instance, and work with the same site, the selected certificate will be used – without prompting.

You can only change the certificate by restarting the browser and selecting a different certificate.  You cannot logoff.

Surprise number 2 – it is easy to browse the queue with no additional security.

In an HTML page, I used the HTML link <a href="https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message"> direct link https ></a>.

If I had already accessed the server then this displayed the content of the next message on the queue, with no further security checks.

If I opened another tab within the browser, displayed the page and clicked on the link, I also got the next message content displayed.

I used a file on my laptop with the link in it – and it displayed the message content.  It was easy to create a web page in an editor and so display the content of the queue.

The CSRF checks are not applicable – as these only occur when the request is a change of state, POST, DELETE etc.

Using Curl, and changing the GET request to a DELETE I received (as expected)

{"error": [{
  "action": "Resubmit the request with the correct CSRF header.",
  "completionCode": 0,
  "explanation": "The REST API request cannot be completed because the CSRF header was omitted.",
  "message": "MQWB0100E: The CSRF header 'ibm-mq-rest-csrf-token' was omitted from the request.",
  "msgId": "MQWB0100E",
  "reasonCode": 0,
  "type": "rest"
 }]}

This is not a Cross Origin request – as it is all in one page – so the CORS checks are not performed. Putting the backend request into JavaScript did cause the CORS checking, as expected.

Turning off client Certificate Authentication helped

I changed the <ssl… clientAuthenticationSupported="true"> to "false" to use userid and password.

With my <a href="https://localhost:9445/ib…"> direct link https ></a> I got

{"error": [{
  "action": "Provide credentials using a client certificate, LTPA security token, or username 
   and password via HTTP basic authentication header. 
   On z/OS, if the mqweb server has been configured for SAF authentication, 
   check the messages.log file for messages indicating that SAF authentication is not available. 
   Start the Liberty angel process if it is not already running. 
   You might need to restart the mqweb server for any changes to take effect.",
  "completionCode": 0,
  "explanation": "The REST API request cannot be completed because credentials were omitted from the request. 
   On z/OS, if the mqweb server has been configured for SAF authentication, 
   this can be caused by the Liberty angel process not being active.",
  "message": "MQWB0104E: The REST API request to 
  'https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message' is not authenticated.",
  "msgId": "MQWB0104E",
  "reasonCode": 0,
  "type": "rest"
}]}

I tried putting my userid and password in the URL, as in
<a href="https://colinpaice:mypassword@localhost:9445/ib…"> direct link https ></a>. This gave me the same message. In particular the message says Provide credentials using … or username and password via HTTP basic authentication header. So it needs credentials in a header, not inline.

As raw html cannot insert HTTP headers, it means this is more secure.   To insert headers you need a script, and then the CORS processing will capture it.

I used the logon URL https://127.0.0.1:9445/ibmmq/console/login.html; this prompted me for userid and password, then displayed the console.

So is using userid and password the right thing to use?

Yes and No.

  1. Yes – you know when you are accessing the back end, but then it logs you off, and you have to logon again after ltpaExpiration minutes.
  2. With long running displays, such as the monitors in the operations room – you do not want to be logging on every hour.  In this case certificates are the right answer.

See is it better to use certificate authentication or userid and password? for information.


Making it more secure.

I have shown that if people are authorised to process messages on a queue as part of their normal job, they could easily write a web page to browse the queue.

If the queue was accessed as part of a client program accessing the queue over a channel, the end user could write or obtain a program to process the queue – but this would be harder than just writing a web page. This is the difference between a combination safe with one wheel and one with multiple wheels – you need a lot more effort.

You cannot secure mqweb for message access to a subset of queues – it is all or nothing.

I would disable end users’ access to message data – and just use the REST API for administration.

This means that in the mqwebuser.xml the <security-role name="MQWebUser"> … </security-role> has no users who are allowed to process sensitive queues. You need to prevent them using curl and other tools, so

<enterpriseApplication id="com.ibm.mq.rest">… and <enterpriseApplication id="com.ibm.mq.console">… should have no users in <security-role name="MQWebUser">… </security-role>

Ban the special subject.

I would also explicitly specify the groups of users – and not use the <special-subject type="ALL_AUTHENTICATED_USERS"/>.

Best practice says not to specify user names in the security roles, but to put the user names in groups and specify the groups. It makes it much easier to manage. For example, the manager in charge of the Payroll team can administer the PayrollGroup, and the MQ Admin team does not need to edit the mqwebuser.xml file.

mqweb- is it better to use certificate authentication or userid and password?

Like many questions, the answer is it depends.

You can set up your mqweb environment

  1. Certificates are required – you cannot use userid and passwords
  2. If a valid certificate is found use that, else prompt for userid and password
  3. Do not use certificates – just use userid and passwords.

This is described in the Liberty documentation.

It says for the <ssl…> tag

  1. If you specify clientAuthentication="true", the server requests that a client sends a certificate. However, if the client does not have a certificate, or the certificate is not trusted by the server, the handshake does not succeed.
  2. If you specify clientAuthenticationSupported="true", the server requests that a client sends a certificate. However, if the client does not have a certificate, or the certificate is not trusted by the server, the handshake might still succeed.
  3. If you do not specify either clientAuthentication or clientAuthenticationSupported, or you specify clientAuthentication="false" or clientAuthenticationSupported="false", the server does not request that a client send a certificate during the handshake.

No matter how you logon, the web server creates an LTPA* cookie with encrypted information about the logon. The mqwebuser.xml attribute ltpaExpiration says how long this cookie is valid for. If the time is exceeded you have to logon again, and it generates a new cookie.

Logging on

Certificate authorisation

If you are using certificates, the server sends down a list of valid self signed certificates or CA certificates, and the browser compares this list with the certificates in its store. If a certificate in the browser’s store is acceptable, it is added to a list.

The list is displayed, and the end user selects a certificate. For subsequent requests with the same https://url:port combination, in the same or a different tab within the browser window, the same certificate is used, and the browser does not prompt.

If the ltpa cookie expires then the same certificate is used, invisibly to the end user. A new cookie is generated and sent down.

If there are no valid certificates in the list, then depending on the clientAuthentication settings, it may prompt for a userid and password.

You cannot logoff from the page as there is no option to do so.

Userid and password.

When userid and password are passed up in a header (not in the URL as some web servers accept) the ltpa token is returned.  When the ltpa token expires, the user is prompted to re-logon.   They need to enter the userid and password again.  You can also logoff from the web page.

What are the implications of certificates and userid with password

Certificate logon

  • The web browser can logon, and although the ltpa token expires, it is automatically logged on again.   This is useful for a “big screen monitor” in the operations room, where you want the display active all day.
  • Someone can create a file with some HTML in it to browse a queue  (if authorised).  So if they are authorised to the queue for their normal job, they could easily create a web page to display the queue contents.  This could be the end user, or an evil person saying “click this link”.

userid logon

  • If the user is using the mqconsole web pages, then periodically they will get logged off – and they will have to logon again.
  • If they try to access a queue from an HTML page, they will be asked for userid and password. Depending on what the end user is doing, this may give them notification that there is some unusual activity. If they are hacking, they will enter the userid and password anyway.
  • The person may have access to the same information without using the mqweb console interface, so there is no change to the threat level.

What do I recommend?

Tricky.

  • I think long running monitors need to use certificates
  • End users should logon with userid and password, with an expiry period of about an hour or less.
  • You may want to have a Certificate Authority just for the mqweb browsers, to stop your corporate certificates from being used to access the mqweb server.

Because it is easy to create an html page and browse a queue, I would allow administration over mqrest, and not allow processing messages over REST.

As I said in another posting

  • The admin API authorisation is managed through <security-role name="MQWebAdmin"> and <security-role name="MQWebAdminRO"> sections in the mqwebuser.xml file.
  • The messaging API authorisation is managed through <security-role name="MQWebUser"> sections.

To stop messaging over the REST interface, have no entries in the <security-role name="MQWebUser"> section. This applies to both certificates, and userid and password.

(Personally I would have called <security-role name="MQWebUser"> <security-role name="MQWebMessaging"> to make the roles clearer.)

Update from Gwydion…

The MQWebAdmin, MQWebAdminRO and MQWebUser roles can all be used for the admin REST API. That’s why MQWebUser is not called MQWebMessaging – it’s not just for messaging. The difference between them is the user ID that’s checked by the qmgr for operations performed via the REST API.
  • Operations performed by users in the MQWebAdmin* roles take place under the context of the mqweb server user ID.
  • Operations performed by users in the MQWebUser role take place under the context of the user logged into the REST API.
To stop messaging over the REST API completely, the mqRestMessagingEnabled variable should be set to false in the mqweb server configuration.

Can evil websites get to your mqweb – using java script to get to the backend server

With HTML and scripting, people would write scripts and get you to execute them, and so access your personal information and steal your money. Over time security has been improved to make this harder, and now you have to explicitly say which web sites can run scripts which use the mqweb interface to access MQ to put and get messages.

One way of protecting the access is using Cross Origin Resource Sharing, or CORS. I explained the basics of CORS here. I struggled to get it to work with a web browser.

  • Browsers have been improved, and what worked last year may no longer work now; the documentation does not reflect the newer browsers.
  • Chrome carefully changes your hand crafted HTTP headers, so what is sent up may not be what you expected.

I’ll go through three examples, and then show how it gets more difficult, and what you can do to identify problems.

Note: If you use a web page from a file:// URL then the origin is ‘null’, and this will fail any checks in the backend, as the checks compare the passed origin to the list of acceptable URLs.

I used Dev Tools in Chrome (Alt+Ctrl+i) to display information including the headers flowing.

Simple HTML

With the following within your web page,

<a href="https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/…/message"
> direct link ></a>

It issues the REST request, returns the data from the queue, and displays it.

From the headers we see

Simple HTML: Request headers

The important point about the request headers was that there was no Origin header.

Simple HTML: Response headers

The response headers have

  • HTTP/1.1 200 OK
  • ibm-mq-md-expiry: unlimited
  • ibm-mq-md-messageId: 414d5120514d412020202020202020204c27165d04a98a25
  • ibm-mq-md-persistence: persistent
  • Content-Length: 1024
  • Set-Cookie: LtpaToken2_1 ….

There are no Access-Control-Allow-* headers, so this indicates the response is not a cross origin response. The request came from one page, so there was no cross origin request.

You can see the MQ attributes returned, ibm-mq*.

Invoking the request from JavaScript.

My HTML page had

<html>
<head>
<script src="http://localhost:8884/src.js" type="text/javascript"></script>
</head>
<body>
<button onclick="src()">Get message</button>
</body>
</html>

I tried the src() script both downloaded from http://localhost:8884/src.js, and inline within the HTML file. Both worked.

function src()
{
  fetch("https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message",
  { 
     method :'DELETE',
     headers: {
                'Origin': 'https://localhost:888499'
                ,'Access-Control-Request-Headers' : 'Content-Type' 
                ,'ibm-mq-rest-csrf-token' : '99'
              } 
   }
  ) // end of fetch
   .then((response) => response.text() )
   .then(x => {document.write("OUTPUT:"+x); } 
  ) 
  .catch(error =>  {console.log("Error from src:" + error);});
}

When the button was pressed, the script was executed; there was an OPTIONS request (and response), and a DELETE request (and response). It returned a message and displayed it.

In more detail, the flows were:

JavaScript OPTIONS request

The OPTIONS request headers showed a preflight check: the page says it intends to issue a DELETE request from Origin http://localhost:8889, the URL of the web page.

JavaScript OPTIONS response headers

The OPTIONS response headers were the same as before but with additional ones

  • Access-Control-Allow-Credentials: true
  • Access-Control-Allow-Origin: http://localhost:8889
  • Access-Control-Allow-Max-Age: 91
  • Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE
  • Access-Control-Allow-Headers: Accept-Charset, Accept-Language, Authorization, Content-Type, ibm-mq-rest-csrf-token, ibm-mq-md-correlationId, ibm-mq-md-expiry, ibm-mq-md-persistence, ibm-mq-md-replyTo, ibm-mq-rest-mft-total-transfers

This means the script is allowed to issue a request from http://localhost:8889 using a request method in the list {GET, POST, PATCH, PUT, DELETE}, and the valid headers that can be used are those in the list of headers.

The Access-Control-Allow-Max-Age: 91 came from my mqwebuser.xml file, <variable name="mqRestCorsMaxAgeInSeconds" value="91"/>.

After this there was a DELETE request to get the message.

JavaScript DELETE request and response headers

The DELETE request was then issued; its response included the CORS headers, as well as the headers from the non-CORS situation

  • HTTP/1.1 200 OK
  • Access-Control-Allow-Credentials: true
  • Access-Control-Allow-Origin: http://localhost:8889
  • Access-Control-Expose-Headers: Content-Language, Content-Length, Content-Type, Location, ibm-mq-qmgrs, ibm-mq-md-messageId, ibm-mq-md-correlationId, ibm-mq-md-expiry, ibm-mq-md-persistence, ibm-mq-md-replyTo, ibm-mq-rest-mft-total-transfers
  • Content-Type: text/plain; charset=utf-8
  • Content-Language: en-GB
  • ibm-mq-md-expiry: unlimited
  • ibm-mq-md-messageId: 414d5120514d412020202020202020204c27165d04a98a25
  • ibm-mq-md-persistence: persistent
  • Content-Length: 1024
  • Set-Cookie: LtpaToken2_….

Because the Access-* headers are present, this is a CORS response.

The message content was displayed in a browser window.

Link to another page

I set up a link <a href="http://localhost:8884/page.html">localhost 8884</a> to execute a page on another web server. When this executed, it issued the JavaScript request as before. The Origin was Origin: http://localhost:8884 – the page where the script executed.

What happens if CORS is not set up?

If http://localhost:8889 is not in the list in the mqwebuser.xml file, no data is displayed. The Chrome browser Developer tools (Alt+Ctrl+i) display a message

Access to fetch at ‘ https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message ‘ from origin ‘http://localhost:8889 ‘ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

The OPTIONS Request header has, as before

  • HTTP/1.1 200 OK
  • Host: localhost:9445
  • Access-Control-Request-Method: DELETE
  • Origin: http://localhost:8889
  • Access-Control-Request-Headers: ibm-mq-rest-csrf-token

but the OPTIONS response headers have no Access-* headers.

No DELETE request was issued.

The HTTP/1.1 200 OK is the same for all cases – it means the request was successful.

Trying to be clever

I read the documentation on the web, and much of it was very helpful, but some of it is no longer true. It was hard to get it working, because everything has to be right for it to work.

Unsupported header.

I added an extra header, which is a valid CORS thing to do – but the back end has to support it.  With hindsight it makes no sense to add headers that will be ignored by the server.

headers: {
     'Origin': 'https://localhost:888499'
     , 'Access-Control-Request-Headers' : 'Content-Type' 
     , 'colin':'TRUE' 
     ,'ibm-mq-rest-csrf-token' : '99'
}

This sent up a header with

Access-Control-Request-Headers: colin,ibm-mq-rest-csrf-token

Header “colin” is not in the list of accepted headers, header ibm-mq-rest-csrf-token is in the list:

Access-Control-Allow-Headers: Accept-Charset, Accept-Language, Authorization, Content-Type, ibm-mq-rest-csrf-token, ibm-mq-md-correlationId, ibm-mq-md-expiry, ibm-mq-md-persistence, ibm-mq-md-replyTo, ibm-mq-rest-mft-total-transfers
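The browser's preflight check amounts to a case-insensitive subset test against that list (a sketch of the idea, not the browser's actual code):

allowed = {h.strip().lower() for h in (
    "Accept-Charset, Accept-Language, Authorization, Content-Type, "
    "ibm-mq-rest-csrf-token, ibm-mq-md-correlationId, ibm-mq-md-expiry, "
    "ibm-mq-md-persistence, ibm-mq-md-replyTo, "
    "ibm-mq-rest-mft-total-transfers").split(",")}
requested = {"colin", "ibm-mq-rest-csrf-token"}

# False: "colin" is not in the allowed list, so the preflight fails
print(requested <= allowed)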

The Developer tools message was the same as before,

Access to fetch at ‘https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message ‘ from origin ‘http://localhost:8889 ‘ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

I removed the unsupported header and it worked.

Other headers do not work

The request header ‘Access-Control-Allow-Method’ : ‘DELETE’ is a valid header.

When this was used, the request headers included

  • Access-Control-Request-Headers: access-control-allow-method,ibm-mq-rest-csrf-token
  • Access-Control-Request-Method: DELETE

As before, access-control-allow-method is not in the list of Access-Control-Allow-Headers, so the request fails.

The Access-Control-Request-Method: DELETE is not needed, as the method: DELETE defines what will be used.

Using Access-Control-Request-Headers to add more headers does not work, if the header is not in the list of valid parameters.

The true origin is used

I tried using the header ‘Origin’: ‘https://localhost:888499’ – as I did with curl – and this was ignored. The true Origin from the web page was used (I am glad to say, otherwise it would make the whole protection scheme worthless).

Some options ignored

I found the “fetch” options credentials:, redirect:, and cache: were all ignored by Chrome.

Some cookies were used.

The LtpaToken2_… cookie was sent up and down.

Problems in mqwebuser.xml file

I misconfigured the configuration file with <variable name="mqRestCorsMaxAgeInSeconds" value="AA91"/>

This gave me the familiar message

Access to fetch at ‘https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message ‘ from origin ‘http://localhost:8889 ‘ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

So if you get this message it is worth checking the configuration file for problems.

Can I trace it?

I used <variable name="traceSpec" value="*=audit:CorsService=finest"/> in the mqwebuser.xml file to provide information about the CORS processing: what was coming in, and what was being checked.

Can evil websites get to your mqweb – understanding CORS

In the beginning was the html, and the html was good; then we had html and scripts which could only do things on the page, which was also good; then we had scripts which could reach out to other websites – and that’s where the problems began. It was easy for evil developers to get you to click on an innocent looking page which had a script which jumped into a different tab of your browser, where you had your banking window open, or to execute a script, and steal all your money.

The browsers were improved to stop evil scripts from accessing a server, and then they were improved again so the server could say “stuff coming from this list of web sites is OK, I trust them”. One implementation is called CORS. There is a good description here. It says

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin, access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.

I do not trust things that “just work”; I like to see evidence of it. This lack of trust comes from working with a group of young testers who came to IBM Hursley to test their code. All of their tests ran cleanly – and they thought they had a few days spare to go sightseeing in London. I stopped the server they were meant to be using (without telling them) and the tests carried on running successfully! It turned out they had been testing their code with a dummy program acting as the server. They removed this code, reran their tests, and most of them failed – they had to stay an extra week.

I also remember changing a config file, and being surprised when my change worked first time. After a cup of tea (an invaluable thinking aid) I put some spelling mistakes in the file, and it carried on running. Why? I was using the wrong config file.

I played with CORS and wanted to get things to fail, as well as to work.   This was a good choice, as I had many failures.

I’ll document how I got curl to work and demonstrate CORS, and then how I got a web browser to work – a real challenge.

mqweb implements CORS, so you can configure mqweb to give a list of websites which may access your server.

The documentation  is not very clear.  It says

where allowedOrigins specifies the origin that you want to allow cross-origin requests from. You can use an asterisk surrounded by double quotation marks, “*”, to allow all cross-origin requests. You can enter more than one origin in a comma-separated list, surrounded by double quotation marks. To allow no cross-origin requests, enter empty quotation marks as the value for allowedOrigins.

My observations,

  • You cannot use generics, so http://127.0.0.1:* is the same as “*” – allow all cross-origin requests
  • You must specify {scheme://address:port}, so http or https, the URL with // at the front, and the specific port number
  • The match is a string equality test, so the case, spacing and values must be the same
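Because the match is a plain string comparison, the check behaves like a simple membership test (a sketch of the idea, not the actual Liberty code):

# the configured mqRestCorsAllowedOrigins list, split on commas
allowed_origins = ["https://9999.0.0.1:19442", "http://localhost:8884"]

def origin_allowed(origin):
    # "*" allows everything; otherwise scheme, host, port, case and
    # spacing must all match exactly - no wildcards within an entry
    return "*" in allowed_origins or origin in allowed_origins

print(origin_allowed("http://localhost:8884"))   # True
print(origin_allowed("http://LOCALHOST:8884"))   # False - case differs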

How does an HTTP request work?

When you click on a web page, data is sent to the back end server.   The following data is exchanged

  • the request
  • request headers
  • your data going to the server
  • response headers
  • response data – such as the content of web page.

In more detail…

Request Headers

  • accept: for example  text/html, application/xml
  • accept-languages: en-GB
  • dnt:  1  this is “do not track me”
  • user-agent:  Chromium
  • cookie: bcookie=”….”

Response headers

  • status: 200
  • status: 200 OK
  • set-cookie ….
  • server: nginx
  • ….

Response data

This might be the web page to be displayed.  It can include script, images etc.

What is the origin of the page?

If your web page invokes a script, for example from clicking a button, an “Origin” header is added, for example Origin: http://localhost:8884, which is the address of the web server hosting the page. The backend server checks to see if this header is present, and looks up the site in the authorised list.

If the Origin is acceptable, it sends down additional response headers (CORS Headers), so the browser (or your program) knows and can use the web site.

As part of the handshake, the browser can send up an “OPTIONS” request (instead of a get/delete etc), with a header saying the browser will be doing a GET/DELETE etc from this origin. If there is a positive response, where the additional CORS headers are sent back, then the get/delete is allowed. If the CORS headers are not present in the response, then the request will not be permitted. This is called a preflight check – just like having your boarding pass checked at the gate before you get on the plane.

Does it work?

If there is no Origin header in the request, the backend server thinks it is all same domain, and does no CORS checks.

This CORS support is really aimed at web browsers, as the web browsers will automatically add headers for you. If you are using curl or other tools to create your own request, you specify exactly the headers you want, so if you omit the Origin header, the server will not check.

My mqwebuser.xml file had <variable name="mqRestCorsAllowedOrigins" value="https://9999.0.0.1:19442,http://localhost:8884"/>

So origins for https://9999.0.0.1:19442 and http://localhost:8884 are permitted.

I used a configuration file for curl (because passing the headers as command parameters did not work), and had in curl.parms

cacert ./cacert.pem
cert ./colinpaice.pem:password
key ./colinpaice.key.pem
cookie cookie.jar.txt
cookie-jar cookie.jar.txt
request OPTIONS
header "Access-Control-Request-Method: DELETE"
header "Access-Control-Request-Headers: Content-Type"
header "Origin: https://9999.0.0.1:19442"
header "ibm-mq-rest-csrf-token : COLINCSRF"
include

I used the command

curl --verbose --config curl.parms --url https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message

The headers were displayed, and I had

OPTIONS /ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message HTTP/1.1
> Host: localhost:9445
> User-Agent: curl/7.58.0
> Accept: */*
> Cookie: LtpaToken2_….
> Access-Control-Request-Method: DELETE
> Access-Control-Request-Headers: Content-Type
> Origin: https://9999.0.0.1:19442
> ibm-mq-rest-csrf-token : COLINCSRF

The response headers were

< HTTP/1.1 200 OK
< X-Powered-By: Servlet/3.1
< X-XSS-Protection: 1;mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; connect-src 'self'; img-src 'self'; style-src 'self' 'unsafe-inline'; font-src 'self'
< Cache-Control: no-cache, no-store, must-revalidate
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Origin: https://9999.0.0.1:19442
< Access-Control-Allow-Max-Age: 90
< Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE
< Access-Control-Allow-Headers: Accept-Charset, Accept-Language, Authorization, Content-Type, ibm-mq-rest-csrf-token, ibm-mq-md-correlationId, ibm-mq-md-expiry, ibm-mq-md-persistence, ibm-mq-md-replyTo, ibm-mq-rest-mft-total-transfers

The response header Access-Control-Allow-Origin: https://9999.0.0.1:19442 shows that requests with origin https://9999.0.0.1:19442 are acceptable.

When I used a different "Origin", I did not get any Access-Control-Allow-* headers. From their absence, I could tell the request was not supported from that origin.

The Access-Control-Allow-Headers value is the list of request header names which can be sent. The header ibm-mq-rest-csrf-token : COLINCSRF is valid because ibm-mq-rest-csrf-token is in the list; a header such as COLIN: value is not valid, because COLIN is not in the list.

The Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE says my application can use any of the methods in the list.
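The same preflight can be sent from Python with the requests package. A sketch, reusing the URL and headers from the curl example above, and the PEM certificate and key files from earlier in this post (requests does not prompt for a key password, so an unencrypted key file is assumed; adjust the names to your own setup):

import requests

url = "https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message"
my_header = {
    "Origin": "https://9999.0.0.1:19442",
    "Access-Control-Request-Method": "DELETE",
    "Access-Control-Request-Headers": "Content-Type",
    "ibm-mq-rest-csrf-token": "COLINCSRF",
}
# Send the preflight OPTIONS request with the client certificate.
response = requests.options(
    url,
    headers=my_header,
    verify="./cacert.pem",                              # CA to validate the server
    cert=("./colinpaice.pem", "./colinpaice.key.pem"),  # client certificate and key
)
# If the origin is acceptable, the CORS headers are in the response.
print(response.status_code)
print(response.headers.get("Access-Control-Allow-Origin"))
print(response.headers.get("Access-Control-Allow-Methods"))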

Using a web browser.

If you use HTML in a local file, the "Origin" is null, and so does not match any element in the authorised list. I had to set up my own web server, which was easy to do using Python (see the sketch below).
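A minimal sketch using only the Python standard library, serving the directory containing the web page on port 8884 (the port used in this post):

# Serve the current directory on http://localhost:8884 so the browser
# sends Origin: http://localhost:8884 rather than a null origin.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 8884), SimpleHTTPRequestHandler)
server.serve_forever()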

Using the web page at the bottom of this posting, I pointed my web browser at the web server. It displayed a button; I pressed it, and it invoked a script. Using developer mode (Alt+Ctrl+i) in Chrome, I could see the network flows etc.

The request headers had

  • Host: localhost:9445 (the mqweb server hosting the REST API)
  • ibm-mq-rest-csrf-token : 99 (the header value I specified)
  • Origin: http://localhost:8884 (the browser set this, overriding the value I had specified in the headers)

The response headers included

  • Access-Control-Allow-Credentials: true
  • Access-Control-Allow-Origin: http://localhost:8884
  • Access-Control-Expose-Headers: Content-Language, Content-Length, Content-Type, Location, ibm-mq-qmgrs, ibm-mq-md-messageId, ibm-mq-md-correlationId, ibm-mq-md-expiry, ibm-mq-md-persistence, ibm-mq-md-replyTo, ibm-mq-rest-mft-total-transfers
  • ibm-mq-md-expiry: unlimited
  • ibm-mq-md-messageId: 414d5120514d412020202020202020204c27165d04a98a25
  • ibm-mq-md-persistence: persistent

From these we can see

  • the Access-Control-* headers show the request was validated for origin http://localhost:8884
  • the Access-Control-Expose-Headers value lists the response headers the script is allowed to read
  • ibm-mq-md-persistence: persistent shows the returned message was a persistent message.

The web page used

<HTML>
<HEAD>
<TITLE>Call a mqweb rest API</TITLE>
<script>
  function local()
  {
    // fetch takes (url, options); note the comma after the URL.
    fetch("https://localhost:9445/ibmmq/rest/v1/messaging/qmgr/QMA/queue/DEEPQ/message",
      {
         method: 'DELETE',
         headers: {
                    // The browser sets the real Origin header itself,
                    // overriding this value (see the request headers above).
                    'Origin': 'https://localhost:888499'
                    , 'Access-Control-Request-Headers': 'Content-Type'
                    , 'ibm-mq-rest-csrf-token': '99'
                  }
      }
    )
    .then((response) => response.text())
    .then(x => { document.write("OUTPUT:" + x); })
    .catch(error => { console.log("Booo:" + error); });
  }
</script>
</HEAD>
<BODY>
<button onclick="local()">press me</button>
</BODY>
</HTML>


For information about “fetch(…,…)” see here.

For information about the “.then(…) ” see here.

Use the Liberty REST API to access the JMX data in Liberty

Rather than setting up traditional JMX, where you specify the JMX port etc., you can use the REST support provided in Liberty to access the JMX data. The REST support is easier to set up.

The Liberty documentation recommends that you do not have the native JMX support (configured in jvm.options) and the Liberty REST support for JMX configured at the same time.

The REST request to get the statistics worked and was easy to use. I could not get the "traditional JMX interface" tools, such as jconsole, to work with the REST interface. See below.

Configure the server:  mqwebuser.xml

In mqwebuser.xml  add the support

<featureManager>
  <feature>restConnector-2.0</feature>
</featureManager>

Set up authorisation with

<administrator-role>
   <user>colinpaice</user>
   <group>MQADMIN</group>
</administrator-role>

<reader-role>
  <user>John</user>
</reader-role>

As the mqconsole and REST statistics are read-only, it may be better to set up every user with the reader-role.

As with the MQ support, it will use the userid if specified, or the Common Name from the digital certificate.

Using curl and the REST API

I used

curl --cacert ./cacert.pem --cert-type P12 \
  --cert colinpaice.p12:password \
  --url https://localhost:9443/IBMJMXConnectorREST/mbeans/...

Where … was WebSphere:name=com.ibm.mq.console.javax.ws.rs.core.Application,type=ServletStats/attributes/ResponseTimeDetails. This gives the JMX statistics for the mq.console.
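The same request can be made from Python. A sketch: it assumes the PEM certificate and key files from earlier in this post (requests does not read a .p12 file directly), and URL-escapes the MBean name, matching the escaped form shown later; the unescaped form used with curl above may also work:

import requests
from urllib.parse import quote

# MBean name and attribute for the mq.console servlet statistics.
mbean = ("WebSphere:name=com.ibm.mq.console.javax.ws.rs.core.Application,"
         "type=ServletStats")
url = ("https://localhost:9443/IBMJMXConnectorREST/mbeans/"
       + quote(mbean, safe="")
       + "/attributes/ResponseTimeDetails")

response = requests.get(
    url,
    verify="./cacert.pem",                              # CA to validate the server
    cert=("./colinpaice.pem", "./colinpaice.key.pem"),  # client certificate and key
)
print(response.json())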

The data comes back as JSON (as you might expect), for example

"name": "ResponseTimeDetails",
  "value": {
    "value": {
      "count": "99",
      "description": "Average Response Time for servlet",
      "maximumValue": "3183755861",
      "mean": "1.116053166969697E8",
      "minimumValue": "1777114",
      "reading": {
        "count": "99",
        "maximumValue": "3183755861",
        "mean": "1.116053166969697E8",
        "minimumValue": "1777114",
        "standardDeviation": "4.360373971884932E8",
        "timestamp": "1580819294060",
        "total": "1.1048926353E10",
        "unit": "ns",
        "variance": "1.90128611746915776E17"
      },
      "standardDeviation": "3.218674128849494E8",
      "total": "1.1048926353E10",
      "unit": "ns",
      "variance": "1.63102693370991648E17"
    ...

As well as the data I have covered before, you also get the timestamp value. This is the time in milliseconds since the Unix epoch (midnight, 1 January 1970 UTC).

I used Python to convert the timestamp (1580819294060) to a date and time

import datetime

# The timestamp is in milliseconds; fromtimestamp() expects seconds.
s = 1580819294060 / 1000.0
print(datetime.datetime.fromtimestamp(s).strftime('%Y-%m-%d %H:%M:%S.%f'))

which gives 2020-02-04 12:28:14.060000.

Which URL to use for traditional JMX?

The IBM documentation says the URL to access the JMX data using "traditional JMX" is in the file /var/mqm/web/installations/Installation1/servers/mqweb/log/state/com.ibm.ws.jmx.rest.address. For me this was service:jmx:rest://localhost:9443/IBMJMXConnectorREST.

Client set up: Using the “traditional JMX interface” did not work for me

The Configuring secure JMX connection to Liberty page says you can use this URL in jconsole and other tools. I could not get this to work; I got messages like Error connecting to JMX endpoint: Unsupported protocol: rest. This page gives a lot of information on JMX, and towards the end of the second page it says Error connecting to JMX endpoint: Unsupported protocol: xxxx is likely to be a problem with the class path. I used -verbose:class, and did not see the jar file being loaded.

What else can you do with the REST interface?

Show what is available

If you request the …/IBMJMXConnectorREST/mbeans URL without naming a specific MBean, you get back a list of the available MBeans. You may get data like

WebSphere%3Aname%3Dcom.ibm.mq.console.javax.ws.rs.core.Application%2Ctype%3DServletStats

The punctuation has been “escaped” so you need to change

  • %3A to :
  • %3D to =
  • %2C to ,

and the string becomes the familiar

WebSphere:name=com.ibm.mq.console.javax.ws.rs.core.Application,type=ServletStats
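Rather than translating by hand, Python's standard library can undo the escaping for you, as a quick sketch:

from urllib.parse import unquote

escaped = ("WebSphere%3Aname%3Dcom.ibm.mq.console.javax.ws.rs.core."
           "Application%2Ctype%3DServletStats")
print(unquote(escaped))
# WebSphere:name=com.ibm.mq.console.javax.ws.rs.core.Application,type=ServletStats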