So I just found ZFS on my test Ubuntu Linux system, and gave my perf-tools (https://github.com/brendangregg/perf-tools) a spin.
Per-second counts of zfs* kernel function calls:
# ./funccount -Ti 1 'zfs*'
Tracing "zfs*"... Ctrl-C to end.
Tue Aug 5 00:51:41 UTC 2014
FUNC COUNT
zfs_getattr_fast 12
zfs_open 12
zfs_close 13
zfs_dirty_inode 13
zfs_sa_upgrade_txholds 13
zfsctl_is_node 1002
zfs_inode_update 1002
zfs_is_readonly 1002
zfs_range_lock 1002
zfs_range_unlock 1002
zfs_read 1002
zfs_tstamp_update_setup 1002
Tue Aug 5 00:51:42 UTC 2014
FUNC COUNT
zfs_getattr_fast 12
zfs_open 12
zfs_close 13
zfs_dirty_inode 13
zfs_sa_upgrade_txholds 13
zfsctl_is_node 1039
zfs_inode_update 1039
zfs_is_readonly 1039
zfs_range_lock 1039
zfs_range_unlock 1039
zfs_read 1039
zfs_tstamp_update_setup 1039
^C
Tue Aug 5 00:51:43 UTC 2014
FUNC COUNT
zfs_close 3
zfs_dirty_inode 3
zfs_getattr_fast 3
zfs_open 3
zfs_sa_upgrade_txholds 3
zfsctl_is_node 232
zfs_inode_update 232
zfs_is_readonly 232
zfs_range_lock 232
zfs_range_unlock 232
zfs_read 232
zfs_tstamp_update_setup 232
Ending tracing...
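funccount's per-interval output is easy to post-process with standard text tools. A minimal sketch (my own one-liner, not part of perf-tools; the sample data is pasted from the output above) pulling out the per-interval zfs_read counts:

```shell
# Post-process saved funccount output to extract per-interval zfs_read
# counts. In practice you'd run: ./funccount -Ti 1 'zfs*' | tee funccount.out
cat > funccount.out <<'EOF'
Tue Aug 5 00:51:41 UTC 2014
FUNC                              COUNT
zfs_open                             12
zfs_read                           1002
Tue Aug 5 00:51:42 UTC 2014
FUNC                              COUNT
zfs_open                             12
zfs_read                           1039
EOF
# Print the count column for zfs_read lines, one per interval:
awk '/^zfs_read/ { print $2 }' funccount.out
```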
Per-second counts of ARC (adaptive replacement cache) function calls:
# ./funccount -Ti 1 'arc*'
Tracing "arc*"... Ctrl-C to end.
Tue Aug 5 00:51:48 UTC 2014
FUNC COUNT
arc_adjust 1
arch_report_meminfo 2
arc_tempreserve_clear 6
arc_tempreserve_space 6
arch_vma_name 8
arch_dup_task_struct 14
arch_randomize_brk 15
arch_setup_additional_pages 15
arch_align_stack 30
arch_pick_mmap_layout 30
arch_unmap_area_topdown 121
arch_get_unmapped_area_topdown 497
arc_access.isra.5 505
arc_buf_add_ref 505
arc_buf_eviction_needed 505
arc_buf_freeze 505
arc_buf_remove_ref 505
arc_released 505
Tue Aug 5 00:51:49 UTC 2014
FUNC COUNT
arc_adjust 1
arch_report_meminfo 2
arch_vma_name 8
arc_tempreserve_clear 10
arc_tempreserve_space 10
arch_randomize_brk 16
arch_setup_additional_pages 16
arch_dup_task_struct 17
arch_align_stack 32
arch_pick_mmap_layout 32
arch_unmap_area_topdown 165
arch_get_unmapped_area_topdown 436
arc_access.isra.5 829
arc_buf_add_ref 829
arc_buf_eviction_needed 829
arc_buf_freeze 829
arc_buf_remove_ref 829
arc_released 829
^C
Tue Aug 5 00:51:49 UTC 2014
FUNC COUNT
arc_adjust 1
arc_buf_free 1
arch_irq_stat 1
arch_irq_stat_cpu 1
arc_buf_evict 2
arc_buf_size 2
arc_hdr_destroy 2
arch_report_meminfo 2
arc_buf_alloc 4
arc_buf_destroy 4
arc_get_data_buf.isra.19 4
arc_tempreserve_clear 5
arc_tempreserve_space 5
arc_space_consume 6
arc_space_return 6
arch_vma_name 8
arch_randomize_brk 10
arch_setup_additional_pages 10
arch_dup_task_struct 11
arc_freed 19
arch_align_stack 20
arch_pick_mmap_layout 20
arc_set_callback 26
arc_cksum_compute.isra.15 34
arc_write 34
arc_write_done 34
arc_write_ready 34
arc_release 36
arc_cksum_verify.isra.11 38
arc_buf_thaw 43
arch_unmap_area_topdown 63
arc_change_state.isra.4 71
arch_get_unmapped_area_topdown 185
arc_buf_add_ref 467
arc_buf_eviction_needed 469
arc_buf_remove_ref 472
arc_buf_freeze 488
arc_released 491
arc_access.isra.5 501
Ending tracing...
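Note that the 'arc*' glob also matched a pile of unrelated kernel arch_* functions. A quick sketch (my own, on sample lines pasted from the output above) of filtering saved output down to the ARC functions only:

```shell
# The glob 'arc*' matches both ZFS ARC functions (arc_*) and unrelated
# kernel architecture functions (arch_*); keep only the ARC ones.
cat > arcfuncs.out <<'EOF'
arc_adjust 1
arch_report_meminfo 2
arc_tempreserve_clear 6
arch_vma_name 8
EOF
# arch_* lines lack the underscore right after "arc", so this is safe:
grep '^arc_' arcfuncs.out
```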
Dynamic tracing of arc_buf_add_ref(), showing stacks:
# ./kprobe -s p:arc_buf_add_ref | head -30
Tracing kprobe arc_buf_add_ref. Ctrl-C to end.
cksum-11501 [000] 3123658.777201: arc_buf_add_ref: (arc_buf_add_ref+0x0/0x140 [zfs])
cksum-11501 [000] 3123658.777207: <stack trace>
=> dbuf_hold_impl
=> dbuf_hold
=> dmu_buf_hold_array_by_dnode
=> dmu_buf_hold_array
=> dmu_read_uio
=> zfs_read
=> zpl_read_common
=> zpl_read
=> vfs_read
=> sys_read
=> system_call_fastpath
cksum-11501 [000] 3123658.777672: arc_buf_add_ref: (arc_buf_add_ref+0x0/0x140 [zfs])
cksum-11501 [000] 3123658.777675: <stack trace>
=> dbuf_hold_impl
=> dbuf_hold
=> dmu_buf_hold_array_by_dnode
=> dmu_buf_hold_array
=> dmu_read_uio
=> zfs_read
=> zpl_read_common
=> zpl_read
=> vfs_read
=> sys_read
=> system_call_fastpath
cksum-11501 [000] 3123658.778140: arc_buf_add_ref: (arc_buf_add_ref+0x0/0x140 [zfs])
cksum-11501 [000] 3123658.778143: <stack trace>
=> dbuf_hold_impl
Return value of zfs_read():
# ./kprobe 'r:zfs_read rval=$retval' | head
Tracing kprobe zfs_read. Ctrl-C to end.
cksum-13979 [000] 3123849.996390: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.996855: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.997380: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.997841: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.998304: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.998764: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.999225: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123850.007806: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123850.008267: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
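All returns were 0: zfs_read() returns 0 on success and an error code on failure, unlike the VFS read path, which returns a byte count. A sketch (my own, on sample lines pasted from the output above) of summarizing the rval column from saved kprobe output:

```shell
# Summarize return values from saved kprobe output.
cat > zfs_read.out <<'EOF'
cksum-13979 [000] 3123849.996390: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
cksum-13979 [000] 3123849.996855: zfs_read: (zpl_read_common+0x52/0x80 [zfs] <- zfs_read) rval=0
EOF
# Count occurrences of each distinct return value:
awk -F'rval=' '{ print $2 }' zfs_read.out | sort | uniq -c
```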
Checking for slow reads:
# ./funcslower -P zfs_read 1000
Tracing "zfs_read" slower than 1000 us... Ctrl-C to end.
0) cksum-17054 | ! 8144.082 us | } /* zfs_read */
0) cksum-17056 | ! 8160.678 us | } /* zfs_read */
0) cksum-17060 | ! 8137.766 us | } /* zfs_read */
0) cksum-17061 | ! 8138.776 us | } /* zfs_read */
0) cksum-17063 | ! 8159.633 us | } /* zfs_read */
0) cksum-17066 | ! 8135.308 us | } /* zfs_read */
^C
Ending tracing...
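Saved funcslower output can be summarized with awk; a sketch (my own, on sample lines pasted from the output above; field 5 is the duration in microseconds):

```shell
# Average the latencies reported by funcslower.
cat > slow.out <<'EOF'
 0) cksum-17054 | ! 8144.082 us | } /* zfs_read */
 0) cksum-17056 | ! 8160.678 us | } /* zfs_read */
EOF
# Field 5 is the latency in us; print the count and average:
awk '{ sum += $5; n++ } END { printf "%d reads, avg %.1f us\n", n, sum/n }' slow.out
```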
Over 8 ms? I thought this workload would be fully cached. Are we really issuing disk I/O?
Tracing stack traces when block device I/O is issued:
# ./tpoint -s block:block_rq_insert
Tracing block:block_rq_insert. Ctrl-C to end.
z_ioctl_iss/0-9678 [000] 3124122.009945: block_rq_insert: 202,16 WS 0 () 2540456 + 8 [z_ioctl_iss/0]
z_ioctl_iss/0-9678 [000] 3124122.009948: <stack trace>
=> blk_flush_plug_list
=> blk_finish_plug
=> generic_writepages
=> do_writepages
=> __filemap_fdatawrite_range
=> filemap_write_and_wait_range
=> ext3_sync_file
=> vfs_fsync
=> vn_fsync
=> vdev_file_io_start
=> zio_vdev_io_start
=> zio_execute
=> taskq_thread
=> kthread
=> kernel_thread_helper
z_ioctl_iss/0-9678 [000] 3124122.014818: block_rq_insert: 202,16 FWS 0 () 0 + 0 [z_ioctl_iss/0]
z_ioctl_iss/0-9678 [000] 3124122.014822: <stack trace>
=> blk_queue_bio
=> generic_make_request.part.50
=> generic_make_request
=> submit_bio
=> blkdev_issue_flush
=> ext3_sync_file
=> vfs_fsync
=> vn_fsync
=> vdev_file_io_start
=> zio_vdev_io_start
=> zio_execute
=> taskq_thread
=> kthread
=> kernel_thread_helper
^C
Ending tracing...
Hell freezes over: ext3 and ZFS in the same stack! Well, that's what I get for using a file on ext3 as the backing store...
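The "WS" and "FWS" fields in the block_rq_insert output above are the request's rwbs flags, per the kernel's blktrace-style encoding: F flush, R read, W write, D discard, S synchronous. A small helper (my own, not part of perf-tools) to decode them:

```shell
# Decode a block tracepoint rwbs flags string, e.g. "WS" -> "write sync".
decode_rwbs() {
  s=$1; out=""
  case $s in *F*) out="$out flush";; esac
  case $s in *R*) out="$out read";; esac
  case $s in *W*) out="$out write";; esac
  case $s in *D*) out="$out discard";; esac
  case $s in *S*) out="$out sync";; esac
  echo "${out# }"          # strip the leading space
}
decode_rwbs WS    # -> write sync
decode_rwbs FWS   # -> flush write sync
```

So the two inserts above were a synchronous write followed by a flush.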