This blog post gives an example of zFS statistics, and my interpretation of what they mean.
Related posts
- zFS on z/OS concepts, from a performance perspective
- How to collect zFS statistics
- Example of zFS statistics
- zFS performance reports I would like to use on z/OS (but can’t)
I IPLed my z/OS system to give a clean starting point.
I used a batch job to read all of the files in a directory and throw away the output.
sh cat /usr/lpp/java/J8.0_64/lib/ext/* 1> /dev/null
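One way to run this as a batch job is with BPXBATCH; a minimal sketch (the job name, classes and output DDs are just placeholders):

//CATJOB   JOB ,'READ FILES',MSGCLASS=H
//READ     EXEC PGM=BPXBATCH,
// PARM='SH cat /usr/lpp/java/J8.0_64/lib/ext/* 1> /dev/null'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*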
The command
du -ka /usr/lpp/java/J8.0_64/lib/ext/
gave 16728 KB, and there were 30 files in the directory.
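The number of files in the directory can be checked from the shell, for example

ls /usr/lpp/java/J8.0_64/lib/ext/ | wc -l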
The interface layer
The command
query -knpfs
gave
------------- ---------- ---------- ---------- ----------
Operation          Count    XCF req   Avg Time      Bytes
------------- ---------- ---------- ---------- ----------
zfs_opens             37          0      0.053
zfs_closes            37          0      0.024
zfs_reads           4160          0      0.080    16.234M
zfs_getattrs          86          0      0.036
zfs_accesses         377          0      0.027
There were 4160 read requests of 4096 bytes each, roughly 16 MB in total, which matches the 16.234M bytes reported.
There were 30 opens, one for each file in the directory.
There was also an open for ‘/’, ‘/usr’, ‘/usr/lpp’ and so on down to the ext directory (seven directories in all), so 37 opens in total. At the end, each of these objects was closed, which accounts for the 37 closes.
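One way of capturing these counters around a test, assuming the zfsadm query shell command is available (the MODIFY ZFS console command can be used instead, see the post on how to collect zFS statistics; the file names are placeholders):

zfsadm query -knpfs > /tmp/knpfs.before    # counters before the test
# run the workload, for example the cat job above
zfsadm query -knpfs > /tmp/knpfs.after     # counters after the test
diff /tmp/knpfs.before /tmp/knpfs.after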
The interface layer calls the buffer manager
The command
query -usercache
gave the User File (VM) Caching System Statistics report. It had
External requests
-----------------
Reads          4160    Fsyncs         0    Schedules      0
Writes            0    Setattrs       0    Unmaps         0
Asy Reads      4126    Getattrs     153    Flushes        0
This says there were 4160 read requests, which matches the zfs_reads count from the interface layer.
There were 4126 requests from the interface layer which had read-ahead set. This tells the buffer manager to get the pages and, if they are not already in the cache, to start reading them in from disk. The Asy Reads count does not tell you how many reads actually went to disk.
When I repeated the test I had Reads 4160 and Asy Reads 4120, with 0 reads from disk (as expected, because the data was already in the cache).
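A sketch of the repeated test from the shell, again assuming zfsadm query is available (the counters are cumulative, so it is the increase between the two queries that matters):

zfsadm query -usercache                            # counters before the rerun
cat /usr/lpp/java/J8.0_64/lib/ext/* 1> /dev/null   # data is already in the cache
zfsadm query -usercache                            # Reads and Asy Reads increase; disk reads should not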
File System Reads:
------------------
Reads Faulted      34    (Fault Ratio 0.817%)
Writes Faulted      0    (Fault Ratio 0.000%)
Read Waits         34    (Wait Ratio  0.817%)
Total Reads       276
This shows there were 276 reads from a file system, of which 34 requests had to wait for I/O.
I interpret this as saying there were 34 get-page requests which required disk I/O. The remaining 276 – 34 = 242 reads were started for read-ahead, so the application did not have to wait for them. I think the first page of each file was not in the cache, so there was an I/O to read the first segment (64 KB) of records in. There were 30 files, so 34 is close enough. The first request also started a read-ahead to read the next segment in.
Page Management (Segment Size = 64K) (Page Size = 8K)
-----------------------------------------------------
Total Pages          121725    Free              118843
Segments                395
Steal Invocations         0    Waits for Reclaim      0
Before the test the number of free pages was 120933, so the delta is 2090 pages. Each page is 8KB, so the amount of storage used is 2090 * 8KB, about 16.3 MB. The amount of data read from disk was 16.234MB, so these numbers are comparable.
Steal Invocations is the number of 64KB segments released to make space in the cache. In another test I used a very small cache (10MB), read 25636 KB of data in, and repeated the reads. Steal Invocations was 404, and 404 * 64 KB = 25856 KB, which is close to the amount of data processed. Note: the documentation is incorrect; it says the value is a count of 4KB pages, when it is actually a count of 64KB segments.
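The user cache size can be changed dynamically, which is how a small-cache test can be set up. A sketch, assuming the zfsadm config interface (the size can also be set with user_cache_size in IOEFSPRM; 10M is just the size I used for that test):

zfsadm configquery -user_cache_size    # display the current user cache size
zfsadm config -user_cache_size 10M     # set the user cache to 10 MB for the test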
Data level
I/O Summary By Type
-------------------
     Count      Waits    Cancels     Merges  Type
---------- ---------- ---------- ----------  --------------------
        75         61          0          0  File System Metadata
         0          0          0          0  Log File
       276         51          0          0  User File Data
This shows there were 75 I/O requests for metadata about the files, and 276 I/O requests to read the files themselves. Reading the documentation, I think the Waits column indicates that an I/O request was delayed before its I/O was started, for example because there was already an I/O outstanding.
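I believe this summary is part of the zFS I/O count statistics; assuming the zfsadm interface, it can be displayed with something like

zfsadm query -iocounts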
zFS I/O by Currently Attached Aggregate

DASD   PAV
VOLSER IOs  Mode   Reads  K bytes  Writes  K bytes  Dataset Name
------ ---  ----  ------  -------  ------  -------  ------------
...
A4PRD3   1  R/O      302    16780       0        0  JVB800.ZFS
...
------ ---  ----  ------  -------  ------  -------  ------------
                     337    17104      14       56  *TOTALS*
This shows there was I/O to the data set containing the Java file system. There were 302 reads, and it read 16780 KB of data.
I’ve omitted the other file systems, which between them had 35 reads and 14 writes.
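I believe this per-aggregate report can be displayed with the I/O-by-DASD query; again a sketch, assuming zfsadm:

zfsadm query -iobydasd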
These counts do not quite tie up. There were 276 reads of User File Data and 75 reads of File System Metadata, a total of 351, but the zFS read count by DASD was 337.