I’m sorry I haven’t a clue…

As well as being a very popular British comedy, it is how I sometimes feel about what is happening inside Liberty web servers, and products built on them like z/OSMF, z/OS Connect and MQWEB. It feels like a spacecraft in a cartoon – there are usually only two controls – start and stop.

One reason for this is that the developers often do not have to use the product in production, and have not sat there, head in hand, saying “what is going on?”.

In this post I’ll cover

What data values to expose

As a concept, if you give someone a lever to pull – you need to give them a way of seeing the effect of pulling the lever.

If you give someone a tuning parameter, they need to know the impact of using it. For example:

  • you implement a pool of blocks of storage.
  • you can configure the maximum number of blocks in the pool.
  • if a thread needs some storage, and there is a free block in the pool, the block is assigned to the thread. When the thread has finished with it, the block goes back into the pool.
  • if all the blocks in the pool are in use, allocate a block outside the pool. When the thread has finished with the block – free it.
  • if you specify a very large number of blocks it could cause a storage shortage.

The big question with this example is “how big do you make the pool?”

To be able to specify the correct pool size you need to know information like

  • What was the maximum number of blocks used – in total
  • How many times were additional blocks allocated (and freed)
  • What was the total number of blocks requested.

You might decide that the pool is big enough if less than 1% of requests had to allocate a block.

If you find that the maximum value used was 1% of the size of the pool, you can make the pool much smaller.

If you find that 99% of the requests had to allocate (and free) a block, this indicates the pool is much too small and you need to increase its size.
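The pool just described can be sketched in Java, with the counters needed to answer these questions. This is a minimal sketch – BlockPool, acquire and release are my own illustrative names, not from any product.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A sketch of the block pool described above, instrumented with the
// statistics the text says you need to size it.
public class BlockPool {
    private final int poolSize;                 // configured maximum pooled blocks
    private final Deque<byte[]> free = new ArrayDeque<>();
    private int inUse = 0;

    // statistics
    private int maxUsed = 0;                    // maximum number of blocks in use at once
    private long overflowAllocations = 0;       // times a block was allocated outside the pool
    private long totalRequests = 0;             // total number of block requests

    public BlockPool(int poolSize, int blockSize) {
        this.poolSize = poolSize;
        for (int i = 0; i < poolSize; i++) free.push(new byte[blockSize]);
    }

    public synchronized byte[] acquire(int blockSize) {
        totalRequests++;
        inUse++;
        if (inUse > maxUsed) maxUsed = inUse;
        if (!free.isEmpty()) return free.pop();
        overflowAllocations++;                  // pool empty: allocate (and later free) a block
        return new byte[blockSize];
    }

    public synchronized void release(byte[] block) {
        inUse--;
        if (free.size() < poolSize) free.push(block);
        // else: drop the block and let the garbage collector free it
    }

    public synchronized int getMaxUsed()              { return maxUsed; }
    public synchronized long getOverflowAllocations() { return overflowAllocations; }
    public synchronized long getTotalRequests()       { return totalRequests; }
}
```

With these three counters you can answer the questions above: overflowAllocations/totalRequests gives the percentage of requests which had to allocate a block, and maxUsed against poolSize tells you if the pool is oversized.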

For other areas you could display

  • The number of authentication requests that were userid+ password, or were from a certificate.
  • The number of authentication requests which failed.
  • The list of userid names in the userid cache.
  • How many times each application was invoked.
  • The number of times a thread had to wait for a resource.
  • The elapsed time waiting for a resource, and what the resource was.

What attributes to expose

You look at the data to ask

  • Do I have a problem now?
  • Will I have a problem in the future? You need to collect information over time and look at trends.
  • When we had a problem yesterday, did this component contribute to it? You need to have historical data.

It is not obvious what data attributes you should display.

  • The “value now” is easy to understand.
  • The “average value” is harder. Is this from the start of the application (6 months ago), or a weighted average, (99 * previous average + current value)/100? With the weighted average, a change since the previous value indicates the trend.
  • The maximum value is hard – from when? There may have been a peak at startup, and small peaks since then will not show up. Having a “reset command” can be useful, or have it reset on a timer – such as display and reset every 10 minutes.
  • If you “reset” the values and display the value before any activity, what do you display? “0”s for all of the values, or the values when the reset command was issued.

Resetting values can make it easier to understand the data. Comparing two 8 digit numbers is much harder than comparing two 2 digit numbers.
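The weighted average above can be sketched in a few lines. The 99:1 weighting is the example from the text, not a recommendation; SmoothedAverage is an illustrative name.

```java
// A sketch of the weighted ("exponentially smoothed") average described above:
// (99 * previousAverage + currentValue) / 100.
public class SmoothedAverage {
    private double average;
    private boolean first = true;

    // Feed in each new observation; returns the updated average.
    public double update(double value) {
        if (first) {
            average = value;        // seed with the first observation
            first = false;
        } else {
            average = (99.0 * average + value) / 100.0;
        }
        return average;
    }

    public double get() { return average; }
}
```

Because recent values carry only 1% of the weight, a jump in the reported average is itself a signal that the current behaviour differs from the long-term trend.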

How to expose data

Java has a Java Management eXtension (JMX) for reporting management information. It looks very well designed, is easy to use, and very compact! There is an extensive document from Oracle here.

I found Basic Introduction to JMX by Baeldung was an excellent article, with code samples on GitHub. I got these working in Eclipse within an hour!

The principle behind JMX is …

For each field you want to expose you have a get… method.

You define an interface named className + “MBean” which defines all of the methods for displaying the data.

public interface myClassMBean {
public String getOwner();
public int getMaxSize();
}

You define the class and the methods to expose the data.

public class myClass implements myClassMBean{

// and the methods to expose the data

public String getOwner() {
return fileOwner;
}

public int getMaxSize() {
return fileSize;
}

}

And you register it with JMX

myClass myClassInstance = new myClass(); // create the instance of myClass

MBeanServer server = ManagementFactory.getPlatformMBeanServer();
ObjectName objectName =….
server.registerMBean(myClassInstance, objectName);

Where myClassInstance is a class instance. The JMX code extracts the name of the class from the object, and can then identify all the methods defined in the className + “MBean” interface. Tools like jconsole can then query these methods, and invoke them.

ObjectName is an object like

ObjectName objectName = new ObjectName("ColinJava:type=files,name=onefile");

Where “ColinJava” is a high level element, “type” is a category, and “name” is the description of the instance.

That’s it.
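Putting the pieces together, a complete, minimal registration looks like this. It is a sketch: JmxDemo, register() and the values returned by the getters are my own illustrative names; the ObjectName follows the example above.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// A self-contained sketch of the MBean pattern described above.
public class JmxDemo {

    // The interface name must be the class name + "MBean".
    public interface MyClassMBean {
        String getOwner();
        int getMaxSize();
    }

    public static class MyClass implements MyClassMBean {
        public String getOwner()   { return "colin"; }   // illustrative value
        public int    getMaxSize() { return 42; }        // illustrative value
    }

    // Register an instance; jconsole then shows it under ColinJava -> files -> onefile.
    public static ObjectName register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName objectName = new ObjectName("ColinJava:type=files,name=onefile");
        server.registerMBean(new MyClass(), objectName);
        return objectName;
    }
}
```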

When you use jconsole (or other tools) to display it, you get a tree of the registered names and their attributes.

You could have

MBeanServer server = ManagementFactory.getPlatformMBeanServer();

ObjectName bigPoolName = new ObjectName("ColinJava:type=threadpool,name=BigPool");
server.registerMBean(bigpoolInstance, bigPoolName);

ObjectName medPoolName = new ObjectName("ColinJava:type=threadpool,name=MedPool");
server.registerMBean(medpoolInstance, medPoolName);

ObjectName smPoolName = new ObjectName("ColinJava:type=threadpool,name=SmallPool");
server.registerMBean(smallpoolInstance, smPoolName);

This would display the statistics for the three pools

  • ColinJava
    • threadpool
      • BigPool
      • MedPool
      • SmallPool

And so build up a tree like

  • ColinJava
    • threadpool
      • BigPool
      • MedPool
      • SmallPool
    • Userids
      • Userid+password
      • Certificate
    • Applications
      • Application 1
      • Application 2
    • Errors
      • Applications
      • Authentication

You can also have set…() methods to set values, but you need to be more careful: checking authorities, and possibly synchronising updates with other concurrent activity.

You can also have methods like resetStats() which show up within jconsole as Operations.
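As a sketch: an operation is simply a method on the MBean interface that is not a get…/set… method. PoolStats, record() and the counter are illustrative names; for real registration the MBean interface must be public and follow the naming convention above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an MBean with an attribute and a resetStats() operation.
interface PoolStatsMBean {
    long getAllocations();   // attribute: appears under "Attributes" in jconsole
    void resetStats();       // operation:  appears under "Operations" in jconsole
}

public class PoolStats implements PoolStatsMBean {
    // AtomicLong gives the synchronisation the text warns about,
    // since jconsole invokes resetStats() on a different thread.
    private final AtomicLong allocations = new AtomicLong();

    public void record()         { allocations.incrementAndGet(); }  // called by the application
    public long getAllocations() { return allocations.get(); }
    public void resetStats()     { allocations.set(0); }
}
```

Resetting via an operation like this is what makes the “compare two 2 digit numbers” approach from the previous section practical.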

How do I build up the list of what is needed?

It is easy to expose data values which have little value. I remember MQ had a field in the statistics “Number of times the hash table changed”. I never found a use for this. Other times I thought “If only we had a count of ……”

You can collect information from problems reported to you. “It was hard to diagnose because… if we had the count of … the end user could have fixed it without calling us”.

Your performance team is another good source of candidate fields. Part of the performance team’s job is to identify statistics which make it easier to tune the system and reduce the resources used. It is not just about identifying hot spots.

Before you implement the collection of data, you could present to your team how the data will be used, and produce some typical graphs. You should get some good feedback, even if it is “I don’t understand it”.

What can I use to display the data

There are several ways of displaying the data.

  • jconsole – which comes as part of Java, can display the data in a window.
  • python – you can issue a query and capture the data. I have this set up to capture the data every 10 seconds.
  • other tools using the standard interfaces.

Running a python rest application on z/OS

I installed Python and co-requisite packages on my z/OS system, described here. I wanted to run a REST workload into z/OSMF. I could have used Liberty, z/OS Connect or MQWEB as the backend.

It makes use of the python requests package.

Initial script

#!/usr/bin/env python3 
import requests 
from timeit import default_timer as timer 
import urllib3 
my_header = {
  'Connection': 'keep-alive',
  'Content-Type': 'application/json',
  'Cache-Control': 'max-age=0',
  'Authorization': 'Basic Y395aW46cGFu67GhlMG4=',
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
  'Accept-Language': 'en-GB,en;q=0.5',
  'DNT': '1',
  'Upgrade-Insecure-Requests': '1'
}
urllib3.disable_warnings() 
geturl ="https://10.1.1.2:29443/zosmf/rest/mvssubs?ssid=IZUG" 
jar = requests.cookies.RequestsCookieJar() 
start=timer() 
res = requests.get(geturl,headers=my_header,verify=False,cookies=jar) 
end=timer() 
print("duration=",end-start) 
if res.status_code != 200: 
  print(res.status_code) 
print("Output=",res.text) 
jar=  res.cookies  

Comments on the python script

  • ‘Authorization’: ‘Basic Y395aW46cGFu67GhlMG4=’ is the base64 encoding of the userid and password, which is trivial(!) to decode.
  • urllib3.disable_warnings() Without this set you get a message InsecureRequestWarning: Unverified HTTPS request is being made to host ‘10.1.1.2’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings. This is because the certificate sent down from the server has not been validated.
  • jar= res.cookies says save the cookies into the jar dictionary, for future use

The output was

duration= 1.210928201675415
Output= {"items":[
{"subsys":"IZUG", "active":true, "dynamic":true, "funcs":[10]}
],"JSONversion":1}

Verifying TLS certificate

The urllib3.disable_warnings() statement suppresses the warnings about certificate problems which you otherwise get on every request.

In the statement res = requests.get(geturl,headers=my_header,verify=False,cookies=jar), verify is either False or the name of a CA .pem file containing the CA certificates. I used verify="ABC" and got

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:

I got the error because “ABC” is not a valid file, and the verification could not be done.

I exported the CA certificate used by the server using

RACDCERT CERTAUTH   EXPORT(LABEL('TEMP-CA')) -         
       DSN('IBMUSER.CERT.TEMP.CA.PEM')   -             
       FORMAT(CERTB64) -                               
       PASSWORD('password')                            

I could only get verify=…. to work with a USS file, so I had to copy the dataset IBMUSER.CERT.TEMP.CA.PEM into a USS file CACert.pem. Then when I used

res = requests.get(geturl,headers=my_header,verify="CACert.pem",cookies=jar)

it worked fine.

Using a client certificate.

You cannot use a RACF certificate with the requests package, because the underlying code does not support it. You have to use a .pem style certificate.

The support does not allow you to specify a password for the private key, so this is not very secure.

You define

cf = "colinpaicesECp256r1.pem"
kf = "colinpaicesECp256r1.key.pem"
cpcert = (cf, kf)

where

  • cf is the name of the file with the certificate in it
  • kf is the name of the file with the private key in it
  • cpcert is the python tuple.

If your certificate file also includes the private key, you do not need the kf, just use cpcert=cf.

You use it

res = requests.get(geturl,headers=my_header,verify="CACert.pem",cookies=jar,cert=cpcert)

I tried exporting a certificate from z/OS using RACDCERT EXPORT … format(PKCS12B64), copying it to a uss file, and using that, but it did not work. The file could not be read.

I tried creating a private key with a password (to make it more secure) but when I used it I got the message

urllib3.exceptions.SSLError: Client private key is encrypted, password is required

There is a package requests_pkcs12 which provides support for a password on the certificate: https://github.com/m-click/requests_pkcs12. I did not use this; I recreated my certificate and private key without a password.

I tried running on Linux using my Hardware Security Module (which plugs into a USB socket). This also failed as I could not enter the PIN for the device.

Compare the response time to running across the network.

I ran the same python script on z/OS and on Linux. The round trip time of the REST request was

  • 1.41 seconds on z/OS
  • 0.92 seconds on Linux.

I think I’ll run my tests from Linux in future.

I cut the CPU cost of doing nothing.

I was running z/OSMF and saw that the CPU costs were high when it was sitting there doing nothing. I managed to reduce the CPU costs by more than half. This would apply to other Liberty based web servers, such as MQWEB and z/OS Connect.

I could see from the MVS system trace that there was a lot of activity creating and deleting threads, with a lot of cost associated with these activities, such as allocating and freeing storage.

I increased the number of threads so that this create-thread/delete-thread activity disappeared.

In the xml configuration file (based from server.xml) was the default

<executor name="LargeThreadPool" id="default" coreThreads="100"
maxThreads="0" keepAlive="60s" stealPolicy="STRICT"
rejectedWorkPolicy="CALLER_RUNS" />

I changed this to

<executor name="LargeThreadPool" id="default"
coreThreads="300" maxThreads="600" keepAlive="60s"
stealPolicy="STRICT" rejectedWorkPolicy="CALLER_RUNS" />

and restarted the server.

The options are documented here. There is an option keepAlive which defaults to 60 seconds. If a thread has been idle for this time, the thread is a candidate to be freed, to reduce the pool back to coreThreads size.

I was alerted to this problem when I looked at an MVS system trace. This is described here.

There is a discussion how sun thread pools work in this post. It is not obvious. This may or may not be how this executor works.

What value should you use?

This is a hard question, as Liberty does not provide this information directly.

I used the Health Center, which connects from Eclipse to the JVM and extracts information about the JVM and applications.

This showed that at rest there was a lot of activity. I increased the pool to 250 threads and restarted the server: better, but still some activity. I increased it to 300 threads, and the graph was flat.

I set up USER.Z24A.PROCLIB(CEEOPT) with

RPTSTG(ON),
RPTOPTS(ON)

in my z/OSMF job I had

//CEEOPTS DD DISP=SHR,DSN=USER.Z24A.PROCLIB(CEEOPT)

This printed out a lot of useful information about the stack and heap usage. At the bottom it said

Largest number of threads concurrently active: 397

The number of threads includes threads from the pool I had specified, plus other threads that z/OSMF creates. The Health Center showed there were 372 threads, even though coreThreads was set to “300”.

I also used jconsole to display information about the highest thread usage. The URL was service:jmx:rest://10.1.1.2:10443/IBMJMXConnectorREST. It displays peak threads and live threads.

Security

I found the security of both jconsole and the Health Center was weak (userid and password). I was unable to successfully set up TLS certificate logon to the server.

The information from rptstg was only available at shutdown.

Why does increasing the number of threads reduce the CPU when idle?

The thread pool has logic to remove unused threads and shrink the pool to the coreThreads size. If the pool size is too small it has to create and delete threads according to the load. See here. The keepAlive mentioned above is how long a thread can be idle before it is considered a candidate for deletion.

Summary

Monitor the CPU used when idle and see if increasing the threadpool to 300 helps.

Some of the mysteries of Java shared classes

When Java executes a program it reads in the jar file, breaks it into the individual classes, converts the byte codes into instructions, and when executing may replace instructions with more efficient instructions (JITting). It can also convert the byte codes into instructions ahead of time, so called Ahead Of Time (AOT) compilation.

With shared classes, the converted byte codes, any JITted code, and any AOT code can be saved in a data space.

  • When the Java program runs a second time, it can reuse the data in the data space, avoiding the overhead of reading the jar file from the file system and converting the byte codes into instructions.
  • The data space can be hardened to a file, and restored to a data space, so it can be used across system IPLs.

Using this reduced the start-up time of my program by over 20 seconds on my slow zPDT system. The default size of the cache is 16MB – one of my applications needed 100MB, so most of the benefit of the shared classes could not be exploited if the defaults were used.

This blog post describes more information about this, and what tuning you can do.

You have to use the same level of Java to display information as is used by the program. For example

export PATH="/usr/lpp/java/new/J8.0_64/bin:$PATH"
/usr/lpp/java/new/J8.0_64/bin/java -Xshareclasses:cacheDir=/tmp,printStats,cache=rseapi

If the levels are different it may just report JVMSHRC005I No shared class caches available.

Issuing commands to manage the shared classes cache

Commands to manage the shared classes cache are issued like

java -Xshareclasses:cacheDir=/tmp,name=client6,printStats

which can be done using JCL

// SET V='listAllCaches'
// SET V='printStats'
// SET C='/tmp'
// SET N='client6'
//S1 EXEC PGM=BPXBATCH,REGION=0M,
// PARM='SH java -Xshareclasses:cacheDir=&C,name=&N,verbose,&V'
//STDERR DD SYSOUT=*
//STDOUT DD SYSOUT=*

Enabling share classes

You specify -Xshareclasses as a parameter to the program, for example on the command line or in a jvm.options file.

To use the shared classes capability you have to specify all of the parameters on one line, like

-Xshareclasses:verbose,name=client6,cacheDirPerm=0777,cacheDir=/tmp

Having it like

-Xshareclasses:name=client6,cacheDirPerm=0777,cacheDir=/tmp
-Xshareclasses:verbose

means the name etc. all take their defaults; only -Xshareclasses:verbose would be used.

Changing share classes parameters

You can have more than one cache; you specify a name for each. You specify a directory where an image is stored when the cache is hardened to disk.

Some of the options, like name= and cacheDir=, are picked up when the JVM starts. Other parameters, like cacheDirPerm, are only used when the cache is (re-)created.

You can delete the cache in two ways.

Delete the cache from your Java program

When you are playing around, you can add reset to the end of the -Xshareclasses string to cause the cache to be deleted and recreated. This gives output like

JVMSHRC010I Shared cache "client6" is destroyed
JVMSHRC158I Created shared class cache "client6"
JVMSHRC166I Attached to cache "client6", size=20971328 bytes

This was especially useful when tuning the storage allocations.

Delete the cache independently

java -Xshareclasses:cacheDir=/tmp,name=client6,destroy

How to allocate the size of the cache

You specify the storage allocations using the -Xsc… options (where sc stands for shared classes).

If you have -Xshareclasses:verbose… specified, then when the JVM shuts down you get

JVMSHRC168I Total shared class bytes read=11660. Total bytes stored=5815522
JVMSHRC818I Total unstored bytes due to the setting of shared cache soft max is 0.
Unstored AOT bytes due to the setting of -Xscmaxaot is 1139078.
Unstored JIT bytes due to the setting of -Xscmaxjitdata is 131832.

This shows the values of maxaot and maxjitdata were too small. They were

-Xscmx20m
-Xscmaxaot2k
-Xscmaxjitdata2k

When the values were big enough I got

JVMSHRC168I Total shared class bytes read=12960204. Total bytes stored=8885038
JVMSHRC818I Total unstored bytes due to the setting of shared cache soft max is 0.
Unstored AOT bytes due to the setting of -Xscmaxaot is 0.
Unstored JIT bytes due to the setting of -Xscmaxjitdata is 0.

How big a cache do I need?

If you use -Xshareclasses:verbose… it will display messages

for example

JVMSHRC166I Attached to cache "client6", size=2096960 bytes
JVMSHRC269I The system does not support memory page protection

JVMSHRC096I Shared cache "client6" is full. Use -Xscmx to set cache size.
JVMSHRC168I Total shared class bytes read=77208. Total bytes stored=2038042

Message JVMSHRC096I Shared cache "client6" is full. Use -Xscmx to set cache size tells you the cache is full – but gives no information about how big it needs to be.

You can use

java -Xshareclasses:cacheDir=/tmp,name=client6,printStats

to display statistics like

[-Xshareclasses persistent cache disabled]
[-Xshareclasses verbose output enabled]                                            
JVMSHRC159I Opened shared class cache "client6"                                    
JVMSHRC166I Attached to cache "client6", size=2096960 bytes                        
JVMSHRC269I The system does not support memory page protection                     
JVMSHRC096I Shared cache "client6" is full. Use -Xscmx to set cache size.          
                                                                                   
Current statistics for cache "client6": 
cache size                           = 2096592                       
softmx bytes                         = 2096592                       
free bytes                           = 0                             
ROMClass bytes                       = 766804                        
AOT bytes                            = 6992                          
Reserved space for AOT bytes         = -1                            
Maximum space for AOT bytes          = 1048576                       
JIT data bytes                       = 212                           
Reserved space for JIT data bytes    = -1                            
Maximum space for JIT data bytes     = 1048576                       
Zip cache bytes                      = 1131864                       
Startup hint bytes                   = 0                             
Data bytes                           = 13904                         
Metadata bytes                       = 12976                         
Metadata % used                      = 0%                            
Class debug area size                = 163840                        
Class debug area used bytes          = 119194                        
Class debug area % used              = 72%

Cache is 100% full  
                                                                             

This shows the cache is 100% full, and how much space is used for AOT and JIT. The default value of -Xscmx I had was almost 16MB. I made it 200MB and this was large enough.

I could not find a way of getting my program to issue printStats.

How do I harden the cache?

You can use the

java -Xshareclasses:cacheDir=/tmp,name=zosmf,verbose,snapshotCache

command to create the cache on disk. Afterwards the listAllCaches command gave

Cache name level        cache-type     feature 
client6    Java8 64-bit non-persistent cr
client6    Java8 64-bit snapshot       cr

This shows the non-persistent data space, and the snapshot file.

You can use restoreFromSnapshot to restore from the file to the data cache, before you start your Java program. You would typically do this after an IPL.

How can I tell what is going on and if shared classes is being used?

The Java option -verbose:dynload,class

reports on the

  • dynamic loading of the files, and processing them,
  • what classes are being processed.

For example

<Loaded java/lang/reflect/AnnotatedElement from /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar>
< Class size 3416; ROM size 2672; debug size 0>
< Read time 1196 usec; Load time 330 usec; Translate time 1541 usec>
class load: java/lang/reflect/AnnotatedElement from: /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar
class load: java/lang/reflect/GenericDeclaration from: /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar

dynload gave

<Loaded java/lang/reflect/AnnotatedElement from /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar>
< Class size 3416; ROM size 2672; debug size 0>
< Read time 1196 usec; Load time 330 usec; Translate time 1541 usec>

This tells you a jar file was read from the file system, and how long it took to process it.

class gave

class load: java/lang/reflect/AnnotatedElement from: /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar
class load: java/lang/reflect/GenericDeclaration from: /Z24A/usr/lpp/java/J8.0_64/lib/rt.jar

This shows two classes were extracted from the jar file.

In a perfect system you will get the class load entries, but not the <Loaded… entries.

Even when I had a very large cache size, I still got dynload entries. These tended to be loading class files rather than jar files.

For example there was a dynload entry for com/ibm/tcp/ipsec/CaApplicationProperties. This was file /usr/lpp/zosmf./installableApps/izuCA.ear/izuCA.war/WEB-INF/classes/com/ibm/tcp/ipsec/CaApplicationProperties.class

If you can make these into a .jar file you may get better performance. (But you may not get better performance, as it may take more time to load a large jar file).

I also noticed that there was dynload for com/ibm/xml/crypto/IBMXMLCryptoProvider which is in /Z24A/usr/lpp/java/J8.0_64/lib/ext/ibmxmlcrypto.jar, so shared classes has some deeper mysteries!

What happens if the .jar file changes?

As part of the class load, Java checks the signature of the file on disk against the signature in the data space. If they are different, the data space will be updated.

Why do they ship java products on z/OS with the handbrake on? And how to take the brake off.

I noticed that it takes seconds to start MQ on my little z/OS machine, but minutes (feels like days) to start anything with the Liberty web server. This includes MQWEB, z/OSMF, and z/OS Connect. I mentioned this to an IBM colleague who asked if I was using Java shared classes. These get loaded into z/OS shared pages.

When I implemented it, my Liberty server came up in half the time!

I found this blog post which was very helpful, and showed me where to look for more information.  I subsequently found this document (from 2006!)

The kindergarten overview of how Java works.

  • You start with a program written in the Java language.
  • When you run this, Java converts it into byte codes
  • These byte codes get converted to native instructions – so a byte code “push onto the stack” may become eight 390 assembler instructions.
  • This code can be optimised, for example code which is executed frequently can have the assembler instructions rewritten to go faster.  It might put code inline instead of out in a subroutine.
  • If you are using Java shared classes, this code can be written out and reused by other applications, or, if you restart the server, it can reuse what it created before. Reusing the shared classes means that programs benefit because the byte codes have already been converted into native code, and optimisations have been done on the hot code.

What happens on z/OS?

By default, z/OS writes the code to virtual memory and does not save anything to disk. If you restart your Java application within the same IPL, it can exploit the shared classes which have been converted to native code and optimised – great, good design. I found the second time I started the web server it took half the time.

However, I IPL once a day, and start my web server once a day, so I did not benefit from having it start faster a second time. By default, when you re-IPL, the shared classes code is discarded, so next time you need the code it has to be converted to native instructions again, and any optimisation which had been done is lost.

What is the solution?

It is two easy steps:

  1. Tell Java to write the information from memory to disk – to take a snapshot.
  2. After IPL tell Java to load memory from the disk image – to restore a snapshot.

It is as simple as that.

Background.

It is all to do with the java -Xshareclasses.

With your application you tell Java where to store information about the shared classes. It defaults to cacheDir=/tmp/ and name=javasharedresources.

In my jvm.options I overrode the defaults and specified

-Xshareclasses:nonFatal 
-Xshareclasses:groupAccess
-Xshareclasses:cacheDirPerm=0777
-Xshareclasses:cacheDir=/tmp,name=mqweb

If you give each application a name (such as mqweb) you can isolate the cache to an application, and not disrupt another JVM if you change the cache. For example, if you restore from a snapshot, only users of that “name” will be affected.

List what is in the cache

You can use the USS command,

java -Xshareclasses:cacheDir=/tmp/,listAllCaches

I used a batch job to do the same thing.

//IBMJAVA  JOB  1 
// SET V='listAllCaches' 
// SET C='/tmp/' 
//S1       EXEC PGM=BPXBATCH,REGION=0M, 
// PARM='SH java -Xshareclasses:cacheDir=&C,&V' 
//STDERR   DD   SYSOUT=* 
//STDOUT   DD   SYSOUT=*            

The output below shows the cache name is mqweb. Once you have created a snapshot there is an entry for it too.

Listing all caches in cacheDir /tmp/                                                                          
                                                                                                              
Cache name       level         cache-type      feature         OS shmid       OS semid 
mqweb            Java8 64-bit  non-persistent  cr              8197           4101 

For a different product I got

Incompatible shared caches                                     
rseapi                  Java8 32-bit  non-persistent  default  

The Incompatible shared caches looks like it means you are using 64 bit Java – but there is a cache using 32 bit Java.

For MQWEB the default parameters are -Xshareclasses:cacheDir=/u/mqweb/servers/.classCache,name=liberty-%u, where /u/mqweb is the WLP directory where my parameters are defined, and %u is the userid the server is running under, so in my case liberty-START1.

When I had /u/mqweb/servers/.classCache, the total command line was too long for BPXBATCH. (Putting it into STDPARM gave me IEC020I 001-4 on the instream STDPARM because the resolved line was greater than 80 characters.) I resolved this by adding -Xshareclasses:cacheDir=/u/mqweb,name=cache to the jvm.options file.

To take a snapshot


//IBMJAVA  JOB  1
// SET C='/tmp/'
// SET N='mqweb'
// SET V='restoreFromSnapshot'
// SET V='listAllCaches'
// SET V='snapshotCache'
//S1       EXEC PGM=BPXBATCH,REGION=0M,
// PARM='SH java -Xshareclasses:cacheDir=&C,name=&N,&V'
//STDERR   DD   SYSOUT=*
//STDOUT   DD   SYSOUT=*
//

This job took a few seconds to run.

I believe you have to take the snapshot while your Java application is executing – but I do not know for certain.

Restore a snapshot

To restore a snapshot just use restoreFromSnapshot in the above JCL. This took a few seconds to run. 

How to use it.

If you put the restoreFromSnapshot JCL at the start of the web server job, it will preload the cache whenever you use your server.

If you take a snapshot every day before shutting down your server, you will get a copy with the latest optimisations.  If you do not take a new snapshot it continues to use the old one.

If you no longer want to use the shared cache, you can get rid of it using the destroySnapshot command.

Is my cache big enough?

If you use the printStats request you get information like

Current statistics for cache "mqweb":                                                
...                                                                                     
cache size                           = 104857040                                     
softmx bytes                         = 104857040                                     
free bytes                           = 70294788 
...
Cache is 32% full                                     
                                                      
Cache is accessible to current user = true                                                 

The documentation says

When you specify -Xshareclasses without any parameters and without specifying either the -Xscmx or -XX:SharedCacheHardLimit options, a shared classes cache is created with a default size, as follows:

  • For 64-bit platforms, the default size is 300 MB, with a “soft” maximum limit for the initial size of the cache (-Xscmx) of 64MB, …

I had specified -Xscmx100m  which matches the value reported.

What is in the cache?

You can use the printAllStats command.  This displays information like

Classpath

1: 0x00000200259F279C CLASSPATH
/usr/lpp/java/J8.0_64/lib/s390x/compressedrefs/jclSC180/vm.jar
/usr/lpp/java/J8.0_64/lib/se-service.jar
/usr/lpp/java/J8.0_64/lib/math.jar

Methods for a class
  • 0x00000200259F24A4 ROMCLASS: java/util/HashMap at 0x000002001FF7AEB8.
  • ROMMETHOD: size Signature: ()I Address: 0x000002001FF7BA88
  • ROMMETHOD: put Signature: (Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; Address: 0x000002001FF7BC50

This shows

  • there is a class HashMap. 
  • It has a method size() with no parameters, returning an int. It is at … in memory.
  • There is another method put(Object o1, Object o2), returning an Object. It is at … in memory.
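The Signature strings use the standard JVM descriptor encoding: ()I means “no parameters, returns int”, and L…; wraps a class name. A minimal Python sketch of the decoding (simplified – it handles primitives and class types only, not arrays, which is enough for the two examples above):

```python
# Map JVM primitive descriptor characters to readable names
PRIMITIVES = {"I": "int", "V": "void", "Z": "boolean", "J": "long",
              "B": "byte", "C": "char", "S": "short", "F": "float", "D": "double"}

def decode_type(sig, i):
    """Return (readable type, next index) for the type starting at sig[i]."""
    if sig[i] == "L":                      # object type: Lpkg/Class;
        end = sig.index(";", i)
        return sig[i + 1:end].split("/")[-1], end + 1
    return PRIMITIVES[sig[i]], i + 1

def decode_method(sig):
    """Decode a (params)return descriptor into 'return (p1, p2, ...)'."""
    assert sig[0] == "("
    i, params = 1, []
    while sig[i] != ")":
        t, i = decode_type(sig, i)
        params.append(t)
    ret, _ = decode_type(sig, i + 1)
    return f"{ret} ({', '.join(params)})"

print(decode_method("()I"))  # int ()
print(decode_method("(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;"))
```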
Other stuff

There are sections with JITHINTS and other performance related data.

Understanding z/OS Connect SMF 123 subtype 2 records

Introduction to the z/OS Connect SMF records

z/OS Connect can provide two types of SMF record

  1. SMF 120 subtype 11, provided by the base Liberty support. This gives information on the URL used to access Liberty, and the CPU used to perform requests. This is enabled at the server level – so you can have records for all requests, or no requests. There is one SMF record for each web server request.
  2. SMF 123 provides information about the API and service used, and about the “pass through” services. It provides the elapsed time of the request, and of the “pass through” requests. It does not provide CPU usage figures. It can be configured to produce records depending on the http host and port used to access z/OS Connect. One SMF record can hold data for multiple web server requests. The SMF records are produced when the SMF record is full – or the server is shut down.

The SMF 120-11 and SMF 123 records are produced independently, and there is no correlating field between them. They both have a URI field, and time stamps, so at low volumes it may be possible to correlate the SMF data.

I’ll document the fields which I think are interesting. If you think other fields are useful please let me know and I’ll update this document.

I have written an SMF formatter in C which prints out interesting data, and summarises it.

SMF 123 subtype 2 fields

  • You get the standard date and time the record was produced, and with the LPAR. You can use PGM=IFASMFDP with the following to filter which records are copied
DATE(2020282,2020282)
START(1000)
END(2359)
  • There is server version (3), system(SOW1), SYSPLEX(ADCDPLEX) and job id(STC04774) which are not very interesting
  • Server job name(SM3) is more interesting. I started the server with s baqstrt,parms=’MQTEST’,jobname=sm3
  • The config dir (/var/zosconnect/servers/MQTEST/) is boring – as is server level (open beta)
  • The HTTP code, for example 200 OK, 403 Forbidden. You may want to report requests with errors in a separate file
    1. So you know you have errors and can fix them
    2. Your statistics, such as average response time do not have dirty data in them.
  • A 1-character HTTP flag – this has always been 00 for me. I cannot find it documented.
  • The Client IP address. 10.1.1.1
  • You get userid information.
    • I used a certificate to authenticate. The DN from the certificate is not available. You only get the userid from the RACF mapping of DN to userid. This mapped userid was in the 64 byte field. The 8 byte userid field was empty for me. The lack of certificate DN, and having the userid in the wrong field, feel like a couple of buglets. If you use LDAP, I think the long ID is stored in the long field, and the z/OS userid in the short field – so the two cases are inconsistent.
  • You get the URL(URI) used /stockmanager/stock/items/999999. I treat this as a main key for processing and summarising data. You may need to process this value as the partnumber (999999) will be different for each query. You may want to have standards which say the first two elements (/stockmanager/stock) are useful for reporting. The remaining elements should be ignored when summarising records.
  • The start and stop times (2020/10/08 09:18:19.852360 and 2020/10/08 09:18:22.073946) are interesting. You can calculate the duration – which is the difference between them.
  • Request type API, Service, Admin. An Admin request is like using an URL like /zosConnect/services/stockQuery to get information about the stockQuery service.
  • The API name and version – stockmanager 1.0.0
  • The service name and version – stockQuery 1.0.0. You get the version information. If you do an online display of the service the version is not available.
  • Method GET/POST etc
  • The service provider. This is the code which does the real work of connection to CICS, MQ, or passing the request through. IBM MQ for z/OS, restclient-1.0
  • Request id starts at 1 and increments for the life of the server. If you restart the server it will restart from 1. I do not think this is very useful.
  • For “pass through” requests, z/OS Connect confusingly calls the back end service the Statement of Record (SOR). (MQ is a transport, not a Statement of Record.) The “pass through” service definition is built from a parameter file by the zconbt program. The reported data is
    • SOR ID: the host and port, 10.1.3.10:19443. These are from the host and port values of the <zosconnect_zosConnectServiceRestClientConnection…> definition.
    • SOR Reference: restClientServiceY. This is from the connectionRef= in the parameter file and the <zosconnect_zosConnectServiceRestClientConnection…> definition.
    • SOR Resource zosConnect/apis/through. This is from the uri= in the parameter file.
    • Time of entry and time of return of the SOR service.
    • From the times calculate the difference to get the duration of the remote request.
  • It would be useful to have this “pass through time” for services calling MQ, CICS etc, so we could get a true picture of the time spent processing the requests.
  • The size of the data payload (0), and the size of the response (94), excluding any headers.
  • A tracking token. This hex 64 byte string is passed to some backends (CICS and “pass through”) so that the request can be correlated across systems. It is not passed to MQ. See X-Correlation-ID below for an example. This field is nulls for Admin requests. When a request was “passed through” to another z/OS Connect server which processed the request, the tracking token was not reported in the SMF data of the second system. I don't know if the CICS SMF data records this token, but it is of little use for MQ, or for “pass through”.
  • You get four request header and four response header fields. They were blank in my measurements, even though headers were passed to the pass through service. Looking at the http traffic, the request coming in had a header “Content-Type:application/json”. The request passed through to the back end included
    • User-Agent: IBM_zOS_Connect_REST_SP/open beta/20200803-0918
    • X-Correlation-ID: BAQ1wsHYAQAYwcTDxNfTQEDi8ObxQEBAQNikjpk+1klAAA==
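As noted in the field list, the duration is just the difference between the start and stop timestamps. A small Python sketch using the timestamp format shown above:

```python
from datetime import datetime

# Timestamp format as it appears in the formatted SMF data above
FMT = "%Y/%m/%d %H:%M:%S.%f"

def duration_seconds(start: str, stop: str) -> float:
    """Return stop - start in seconds for SMF-style timestamps."""
    return (datetime.strptime(stop, FMT) - datetime.strptime(start, FMT)).total_seconds()

print(duration_seconds("2020/10/08 09:18:19.852360",
                       "2020/10/08 09:18:22.073946"))  # 2.221586
```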

What can you do with the data?

Do I need to use the SMF data?

From a performance perspective these records do not provide much information, as they are lacking information about CPU usage. From an audit perspective they have some useful information – but the records are missing information which would provide useful audit information. There is an overlap between the information in the SMF 123 records and the …/servers/…logs/http_access.log file which provides, date time, userid, URI, HTTP code.

What do I want to report on?

Decide what elements of the URI you want to report on. For example the URI /stockmanager/stock/items/999999 includes the stock part number, which may be different for each request. You might decide to summarise API usage on just the first two elements /stockmanager/stock/. You may have to treat each API individually to extract the key information.

I’ll use the term key for the interesting part of the URI – for example /stockmanager/stock.
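A minimal sketch of extracting and summarising such a key in Python – the two-element rule and the sample URIs are just illustrations, and real processing may need per-API rules as described above:

```python
from collections import Counter

def uri_key(uri: str, elements: int = 2) -> str:
    """Reduce a URI to its first `elements` path elements, the reporting key."""
    parts = [p for p in uri.split("/") if p]
    return "/" + "/".join(parts[:elements])

# Sample URIs for illustration
uris = [
    "/stockmanager/stock/items/999999",
    "/stockmanager/stock/items/2033",
    "/zosConnect/services/stockQuery",
]
print(Counter(uri_key(u) for u in uris))
```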

What reports are interesting?

I think typical questions are:

  1. Which is the most popular API key?
  2. Is the usage of an API key increasing?
  3. How many API key requests were unsuccessful? This can show set-up problems, or penetration attempts.
  4. What is the response time profile of the requests? Are you meeting the business response time criteria?
  5. Which sites are sending in most of the requests? You cannot charge back on CPU used, as you do not know the CPU usage. You could do charge back at a fixed cost per API request, with each API having a different rate.
  6. Which userids are sending in most of the requests? You may want to provide more granular certificate to userid mapping to give you more detailed information.
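Charge back at a fixed cost per API request then reduces to a multiplication over the summarised counts. A sketch with made-up rates and counts (all the numbers and key names here are hypothetical):

```python
# Hypothetical per-request rates and per-key request counts
rates = {"/stockmanager/stock": 0.002, "/zosConnect/services": 0.001}
counts = {"/stockmanager/stock": 1500, "/zosConnect/services": 300}

# Charge = requests * rate; unknown keys are charged at zero
charges = {key: n * rates.get(key, 0.0) for key, n in counts.items()}
for key, charge in charges.items():
    print(f"{key}: {charge:.2f}")
```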

Understanding z/OS Connect SMF 120 subtype 11 data

z/OS Connect can provide two types of SMF record

  1. SMF 120 subtype 11, provided by the base Liberty support. This gives information on the URL used to access Liberty, and the CPU used to perform requests. This is enabled at the server level – so you can have records for all requests, or no requests. There is one SMF record for each web server request. Would I use this to report CPU used? No – see the bottom of this blog.
  2. SMF 123 provides information about the API and service used, and about the “pass through” services. It provides the elapsed time of the request, and of the “pass through” requests. It does not provide CPU usage figures. It can be configured to produce records depending on the http host and port used to access z/OS Connect. One SMF record can hold data for multiple web server requests. The SMF records are produced when the SMF record is full – or the server is shut down.

The SMF 120-11 and SMF 123 records are produced independently, and there is no correlating field between them. They both have a URI field, and time stamps, so at low volumes it may be possible to correlate the SMF data.

I’ll document the fields which I think are interesting. If you think other fields are useful please let me know and I’ll update this document.

I have written an SMF formatter in C which prints out interesting data, and summarises it.

SMF 120-11

  • You get the standard date and time the record was produced, and with the LPAR. You can use PGM=IFASMFDP with the following to filter which records are copied
DATE(2020282,2020282)
START(1000)
END(2359)
  • There is server version (3), system(SOW1), and job id(STC04774) which are not very interesting
  • Server job name(SM3) is more interesting. I started the server with s baqstrt,parms=’MQTEST’,jobname=sm3
  • The config dir (/var/zosconnect/servers/MQTEST/) is boring – as is code level (20.0.0.6)
  • The start and stop times (2020/10/08 09:18:19.852360 and 2020/10/08 09:18:22.073946) are interesting as is the duration – which is the difference between them.
  • You get userid information.
    • I used a certificate to authenticate. The DN from the certificate is not available. You only get the userid from the RACF mapping of DN to userid. This mapped userid was in the 64 byte field. The 8 byte userid field was empty for me. The lack of certificate DN, and having the userid in the wrong field feels like a couple of buglets.
  • You get the URL used, /stockmanager/stock/items/999999. I treat this as a main key for processing and summarising data. If you want to summarise the data, you may want to summarise it just on /stockmanager/stock/. The full URI contains the part number – and so I would expect a large number of parts.
  • You can configure your requests to WLM. For example
<wlmClassification>
<httpClassification transactionClass="TCI1" method="GET" 
    resource="/zosConnect/services/stockQuery"/>
</wlmClassification>

This produced in the SMF record

WLMTRan :TCI1
WLM Classify type :URI :/zosConnect/services/stockQuery
WLM Classify type :Target Host :10.1.3.10
WLM Classify type :Target Port :19443

This means that the URL, the host, and the port were passed to WLM to classify.

If you get the WLM classification you also get CPU figures when the enclave request ended (was deleted).

  • You get the ports associated with the request.
    • Which port was used on the server – Target Port :9443
    • Where did the request come from? Origin :10.1.1.1 and port :36786
  • The number of bytes in the response Response bytes :791
  • CPU figures for the CPU used on the TCB. See discussion below on the usefulness of this number. You get the CPU figures before the request, and after the request – so you have to calculate the difference yourself! The values come from the timeused facility. You can calculate the delta and get
    • CPU Used Total : 0.967417
    • CPU Used on CP : 0.026327
    • and subtract one from the other to get CPU Used on z**P : 0.941090. This is the CPU offloaded to zIIP or zAAP.
  • If you had the URI classified with WLM, you get Enclave data, see below for a discussion on what the numbers mean.
    • Enclave CPU time : 0.148803
    • Enclave CPU service : 0.000000
    • Enclave ZIIP time : 0.148803
    • Enclave ZIIP Service : 0.000000
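Calculating the deltas yourself, as described above, is just subtraction. A sketch using the example values (the function and variable names are mine, not field names from the record):

```python
def cpu_deltas(before_total, after_total, before_cp, after_cp):
    """Derive the per-request CPU figures from the before/after
    timeused values in the SMF record."""
    total = after_total - before_total   # CPU Used Total
    on_cp = after_cp - before_cp         # CPU Used on CP
    return total, on_cp, total - on_cp   # last value: offloaded to zIIP/zAAP

# Treat the "before" values as zero so the deltas match the example figures
total, on_cp, offload = cpu_deltas(0.0, 0.967417, 0.0, 0.026327)
print(f"{offload:.6f}")  # 0.941090
```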

What do the CPU numbers mean?

Typically a transaction flow is as follows

  1. A listening thread listens on the HTTP(s) host and port.
  2. When a request arrives, it passes the request to a worker thread, and goes back to listening
    1. The worker thread may do some work and send the response back
    2. The worker thread may need to call another thread to do some work. For example to issue an MQ request,
      1. the MQ code looks for a thread in a pool with a matching queue manager and userid. If it finds one it uses that thread and issues the MQ request.
      2. If it does not find a matching thread it may allocate a new thread, and issue an MQCONN to connect to MQ. These are both expensive operations, which is why having a pool of threads, keyed by queue manager and userid, is a good way of saving CPU.
      3. The work is done
      4. The thread is put back into the MQ pool, and the application returns to the worker thread
      5. The worker thread sends the response back to the originator

A thread can ask the operating system, how much CPU time it(the thread) has used. What usually happens is

  1. the thread requests how much CPU it has used
  2. the thread does some work
  3. the thread requests how much CPU it has used,
  4. the thread calculates the difference between the two CPU values and reports this delta.
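The same read-before / read-after pattern can be demonstrated in Python, where time.thread_time() returns the CPU time of the calling thread (the z/OS timeused service plays this role for a TCB):

```python
import time

def thread_cpu_of(work):
    """Measure the CPU this thread uses while running work(),
    using the read / work / read / subtract pattern above."""
    before = time.thread_time()   # CPU used by this thread so far
    work()
    after = time.thread_time()
    return after - before

used = thread_cpu_of(lambda: sum(i * i for i in range(100_000)))
print(f"{used:.6f} CPU seconds")
```

Note that any CPU burned on other threads during work() is invisible here, which is exactly the limitation of the worker-thread figure in the SMF 120 record.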

The SMF 120 record records the CPU from just the worker thread – and no other thread.

Enclaves

When there are more than one thread involved it gets more complex, as you could have a CICS transaction issuing an MQ request, then a DB2 request, and then an IMS request. You can set up z/OS WorkLoad Manager(WLM) to say “these CICS transactions in this CICS region are high priority”.

With some subsystems you can pass a WLM token into a request. The thread being invoked can tell WLM that it is now working on behalf of this token. The thread does some work, and then tells WLM that it has finished the work. WLM can manage the priority of the threads to achieve the best throughput, for example making a thread high or low priority. WLM can manage a unit of work running in multiple LPARs across a sysplex!

WLM records the CPU used by the thread while performing the work, accumulates and reports this.

This use of multiple threads for a business transaction across one or more address spaces is known as an enclave.

What happens with enclaves?

  1. A request arrives at the listener thread.
  2. Liberty looks up the URI in the <wlmClassification httpClassification…> definitions. It compares the server’s host, the server’s port, the URI resource (/stockmanager…) and the method (GET), and finds the best match for the transactionClass.
    1. If there is a transactionClass,
      1. the server calls WLM with the Subsystem type of CB, the specified collectionName, and the transactionClass.
      2. WLM looks for these parameters and if WLM has a matching definition then WLM will manage the priority of the work,
      3. WLM returns a WLM token.
      4. This WLM token is passed to threads which are set up for enclaves.
    2. If there is no transaction class specified in Liberty, or WLM does not have the subsystem, collectionName, and transactionClass defined, then there is no WLM token (or a null one).
    3. The work continues as before.
    4. If another thread is used, the WLM token is passed to it. If the code is set up for WLM tokens, it reports “work started” when it begins the work, and “work ended” when it has finished.

What happens if the request is not known to WLM?

The worker thread calculates the CPU used for just its own work, and reports this. The CPU used by any other thread is not reported. The figures reported are the CPUTotal timeused values. You have to calculate the difference yourself.

What happens if the request is known to WLM?

You get the timeused CPU for the worker thread – as with the case where the request is not known to WLM.

From RMF (or other products) you get reports for an interval with

  1. The number of requests in the interval
  2. The rate of requests in the interval
  3. The amount of time on a CP engine in seconds
  4. The amount of time on a ZIIP engine in seconds
  5. The amount of time on a ZAAP in seconds.
  6. Over the interval, what percentage of time was CP on CP engines, zAAP on zAAP engines, zAAP on CP engines, zIIP on zIIP engines.

From the SMF 120 records you get

Enclave CPU time
Enclave ZAAP time
Enclave ZIIP time

Example Enclave figures.

For 100 API requests, these are the figures reported by SMF 120-11, averaged per request.

  1. Average CPU(1) 0.023
  2. Average CPU(2) 0.0008
  3. Enclave CPU 0.029
  4. Enclave ZAAP 0
  5. Enclave ZIIP 0.028

The figures reported by RMF per request

  1. CPU 0.031
  2. ZIIP 0.039
  3. ZAAP 0.000
  4. Total 0.070 seconds of CPU per transaction

These figures tie up – the Enclave CPU, ZIIP, and ZAAP are similar.

The CPU used by the server address space was

  1. CPU 30.1 seconds
  2. ZIIP 28.7 seconds
  3. ZAAP 0 seconds.
  4. Total 58.8.

Each request took 0.070 seconds, and there were 100 requests – so 7 seconds of CPU were reported.

The difference (about 51 seconds) is not reported in the transaction costs. It looks like the “timeused” value is less than 1% of the CPU value, and the enclave figures are under 2% of the grand total.
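The arithmetic behind that gap, as a quick check using the numbers above:

```python
per_request = 0.070   # CPU seconds per request (timeused + enclave figures)
requests = 100
address_space = 58.8  # total CPU seconds used by the server address space

attributed = per_request * requests
unaccounted = address_space - attributed
print(attributed, unaccounted)  # roughly 7 seconds attributed, 51.8 unaccounted
```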

Looking at the trace in a dump, I can see many hot TCBs using much more CPU than is reported by WLM and RMF. I expect many TCBs are used in a request, but they do not have the enclave support in them. Overall – pretty useless for charge back and understanding the cost per transaction.

What’s the difference between MQ Web, and z/OS Connect MQ support?

With MQ Web

  1. You can issue commands to administer MQ, for example display, define, and delete MQ objects.
  2. You can put and get messages to and from a queue.  The message is what you specify – typically a character string.

With Z/OS Connect MQ support

  1. You can put and get messages to and from a queue, and do transformations on the message.  For example mapping a COBOL structure to JSON.  
  2. You can do field validation.
  3. You can convert HTTP code “200” to “great it worked”.

What is common?

They both use z/OS WebSphere Liberty to provide the basic web server.

A practical path to installing Liberty and z/OS Connect servers – 10 use the MQ sample

Introduction

I’ll cover the instructions to install z/OS Connect, but the instructions are similar for other products. The steps are to create the minimum server configuration and gradually add more function to it.

The steps below guide you through

  1. Overview
  2. planning to help you decide what you need to create, and what options you have to choose
  3. initial customisation and creating a server,  creating defaults and creating function specific configuration files,  for example a file for SAF
  4. starting the server
  5. enable logon security and add SAF definitions
  6. add keystores for TLS, and client authentication
  7. adding an API and service application
  8. protecting the API and service applications
  9. collecting monitoring data including SMF
  10. use the MQ sample
  11. using WLM to classify a service

With each step there are instructions on how to check the work has been successful.

Use the MQ sample

You need to have installed the service, and protected it.

You need to configure the server to include the MQ support, and tell JMS where the MQ libraries are

<server> 
<!-- Enable features --> 
<featureManager> 
    <feature>zosconnect:mqService-1.0</feature> 
</featureManager> 
                                                                                                         
<wmqJmsClient nativeLibraryPath="/Z24A/usr/lpp/mqm/V9R1M1/java/lib"/> 

<variable name="wmqJmsClient.rar.location"
   value="/Z24A/usr/lpp/mqm/V9R1M1/java/lib/jca/wmq.jmsra.rar"/> 
</server> 

You could configure a variable for the MQ directory so you only specify it once, and use

<variable name="wmq" value="/Z24A/usr/lpp/mqm/V9R1M1/java/lib/"/>
<wmqJmsClient nativeLibraryPath="${wmq}"/>
<variable name="wmqJmsClient.rar.location"
   value="${wmq}jca/wmq.jmsra.rar"/>

You could also pass the mq location as a variable in STDENV, and so pass it in through JCL.

Configure the jms to define the queue manager and queues

<server> 
 <jmsConnectionFactory jndiName="jms/cf1" 
     connectionManagerRef="ConMgr1"> 
    <properties.wmqJms transportType="BINDINGS" 
         queueManager="CSQ9"/> 
 </jmsConnectionFactory> 
                                                                                                      
 <connectionManager id="ConMgr1" maxPoolSize="5"/> 
                                                                                                      
 <!-- A queue definition where request messages 
      for the stock query application are sent. --> 
 <jmsQueue jndiName="jms/stockRequestQueue"> 
    <properties.wmqJms 
       baseQueueName="STOCK_REQUEST" 
       targetClient="MQ"/> 
 </jmsQueue> 
                                                                                                      
 <!-- A queue definition where response messages from 
      the stock query application are sent. --> 
 <jmsQueue jndiName="jms/stockResponseQueue"> 
    <properties.wmqJms baseQueueName="STOCK_RESPONSE" targetClient="MQ"/> 
 </jmsQueue> 
</server>

and include these in the server.xml file.

You need to compile and run the back end service.  See here.  Take care if using cut and paste, as there are long lines which wrap and cause compilation errors.

Because the MQ path name is long I used

export HLQ="/usr/lpp/mqm/V9R1M1/java/lib"

java -cp $HLQ/com.ibm.mq.allclient.jar:. -Djava.library.path=$HLQ TwoWayBackend CSQ9 STOCK_REQUEST STOCK_RESPONSE

I set up a job to run this in back ground, so I could free up my TSO terminal.

Use the API

Once installed you should be able to use the API. For example

curl --insecure -i --cacert cacert.pem --cert adcdd.pem:password --key adcdd.key.pem https://10.1.3.10:9443/stockmanager/stock/items/999999

If the back end application was working I got

{"SQRESP":{"ITEM_ID":999999,"ITEM_DESC":"A description. 00050","ITEM_COST":45,"ITEM_COUNT":0}}

If the back end application was not working I got back an empty response.

The back-end application runs until Ctrl+c is used.  On my USS, the ESCape key is cent symbol ¢ (Unicode 00a2) which I do not have on my default keyboard.    See  x3270 – where’s the money key? for guidance on how to set it.

 

Use the Service

To use the API I used a web browser with

https://10.1.3.10:9443/stockmanager/stock/items/999999

and got back

{"SQRESP":{"ITEM_ID":999999,"ITEM_DESC":"A description. 00050","ITEM_COST":45,"ITEM_COUNT":0}}

or curl with

 curl --insecure -X POST -i --cacert cacert.pem \
  -H "Content-Type:application/json" \
  --cert adcdd.pem:password --key adcdd.key.pem \
  --data '{"STOCKQRYOperation": {"sqreq" : { "item_id": 2033}}}' \
  "https://10.1.3.10:9443/zosConnect/services/stockQuery?action=invoke"

A practical path to installing Liberty and z/OS Connect servers – 11 WLM classifying a service

Introduction

I’ll cover the instructions to install z/OS Connect, but the instructions are similar for other products. The steps are to create the minimum server configuration and gradually add more function to it.

The steps below guide you through

  1. Overview
  2. planning to help you decide what you need to create, and what options you have to choose
  3. initial customisation and creating a server,  creating defaults and creating function specific configuration files,  for example a file for SAF
  4. starting the server
  5. enable logon security and add SAF definitions
  6. add keystores for TLS, and client authentication
  7. adding an API and service application
  8. protecting the API and service applications
  9. collecting monitoring data including SMF
  10. use the MQ sample
  11. using WLM to classify a service

With each step there are instructions on how to check the work has been successful.

Classify the services with WLM to give them the right priority

You can configure an API or service, for example https://10.1.3.10:9443/stockmanager/stock/items/999999, so it gets an appropriate service class in WLM, for example high priority.

You can use RMF to report on service classes to see the response time profile, and if the service class is meeting its performance goals.

See here for a good article.

This page gives the example

<wlmClassification>
  <httpClassification transactionClass="CLASS001" 
      host="127.0.0.1" 
      port="9080" 
      method="GET"
      resource="/testResource" />
</wlmClassification>

You can classify the traffic depending on the IP address and port of the server, as well as the resource name.