Is ZFS L2ARC required if primary data is already on SSD?

I'm trying to tune ZFS on Linux for my workload (Postgres and a fileserver on the same physical machine [1]), and want to understand whether I really need an L2ARC or not.



If the information given at https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/ (written in 2010, when, I guess, SSDs were still expensive) is correct, shouldn't I be disabling the L2ARC? If there is a cache miss in the ARC, reading from the L2ARC and reading from the main data set will take the same time (both will be on SSD). Is my understanding correct?



A related question -- how do I check the summary of L2ARC? I don't think arc_summary gives any information about L2ARC, right?




The L2ARC is the second level adaptive replacement cache. The L2ARC is often called “cache drives” in the ZFS systems.



[..]



These cache drives are physically MLC style SSD drives. These SSD drives are slower than system memory, but still much faster than hard drives. More importantly, the SSD drives are much cheaper than system memory.



[..]



When cache drives are present in the ZFS pool, the cache drives will cache frequently accessed data that did not fit in ARC. When read requests come into the system, ZFS will attempt to serve those requests from the ARC. If the data is not in the ARC, ZFS will attempt to serve the requests from the L2ARC. Hard drives are only accessed when data does not exist in either the ARC or L2ARC.




[1] Hardware config: https://www.hetzner.com/dedicated-rootserver/px61-nvme




  • Two 512 GB NVMe Gen3 x4 SSDs

  • 64 GB DDR4 ECC RAM

  • Intel® Xeon® E3-1275 v5 Quad-Core Skylake processor (4-core / 8 thread)


Output of zpool status



  pool: firstzfs
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        firstzfs     ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors


Output of arc_summary



ZFS Subsystem Report                            Wed Jan 30 09:26:07 2019
ARC Summary: (HEALTHY)
Memory Throttle Count: 0

ARC Misc:
Deleted: 43.56k
Mutex Misses: 0
Evict Skips: 0

ARC Size: 65.51% 20.54 GiB
Target Size: (Adaptive) 100.00% 31.35 GiB
Min Size (Hard Limit): 6.25% 1.96 GiB
Max Size (High Water): 16:1 31.35 GiB

ARC Size Breakdown:
Recently Used Cache Size: 86.54% 16.66 GiB
Frequently Used Cache Size: 13.46% 2.59 GiB

ARC Hash Breakdown:
Elements Max: 4.64m
Elements Current: 89.55% 4.16m
Collisions: 83.96m
Chain Max: 8
Chains: 721.73k

ARC Total accesses: 985.94m
Cache Hit Ratio: 95.94% 945.94m
Cache Miss Ratio: 4.06% 40.00m
Actual Hit Ratio: 93.33% 920.18m

Data Demand Efficiency: 87.42% 313.82m
Data Prefetch Efficiency: 100.00% 25.94m

CACHE HITS BY CACHE LIST:
Anonymously Used: 2.72% 25.76m
Most Recently Used: 27.97% 264.53m
Most Frequently Used: 69.31% 655.65m
Most Recently Used Ghost: 0.00% 0
Most Frequently Used Ghost: 0.00% 0

CACHE HITS BY DATA TYPE:
Demand Data: 29.00% 274.35m
Prefetch Data: 2.74% 25.94m
Demand Metadata: 68.21% 645.27m
Prefetch Metadata: 0.04% 379.71k

CACHE MISSES BY DATA TYPE:
Demand Data: 98.68% 39.47m
Prefetch Data: 0.00% 0
Demand Metadata: 1.32% 527.28k
Prefetch Metadata: 0.00% 0


DMU Prefetch Efficiency: 865.60m
Hit Ratio: 9.64% 83.45m
Miss Ratio: 90.36% 782.14m



ZFS Tunable:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 104857600
dbuf_cache_max_shift 5
dmu_object_alloc_chunk_shift 7
ignore_hole_birth 1
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_noprefetch 1
l2arc_norw 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 524288
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
metaslabs_per_vdev 200
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_verify_data 1
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
spa_slop_shift 5
zfetch_array_rd_sz 1048576
zfetch_max_distance 8388608
zfetch_max_streams 8
zfetch_min_sec_reap 2
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 10
zfs_admin_snapshot 1
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 0
zfs_arc_meta_adjust_restarts 4096
zfs_arc_meta_limit 0
zfs_arc_meta_limit_percent 75
zfs_arc_meta_min 0
zfs_arc_meta_prune 10000
zfs_arc_meta_strategy 1
zfs_arc_min 0
zfs_arc_min_prefetch_lifespan 0
zfs_arc_p_aggressive_disable 1
zfs_arc_p_dampener_disable 1
zfs_arc_p_min_shift 0
zfs_arc_pc_percent 0
zfs_arc_shrink_shift 0
zfs_arc_sys_free 0
zfs_autoimport_disable 1
zfs_compressed_arc_enabled 1
zfs_dbgmsg_enable 0
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_deadman_checktime_ms 5000
zfs_deadman_enabled 1
zfs_deadman_synctime_ms 1000000
zfs_dedup_prefetch 0
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync 67108864
zfs_dmu_offset_next_sync 0
zfs_expire_snapshot 300
zfs_flags 0
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_max_blocks 100000
zfs_free_min_time_ms 1000
zfs_immediate_write_sz 32768
zfs_max_recordsize 1048576
zfs_mdcomp_disable 0
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_mg_fragmentation_threshold 85
zfs_mg_noalloc_threshold 0
zfs_multihost_fail_intervals 5
zfs_multihost_history 0
zfs_multihost_import_intervals 10
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 30
zfs_prefetch_disable 0
zfs_read_chunk_size 1048576
zfs_read_history 0
zfs_read_history_hits 0
zfs_recover 0
zfs_resilver_delay 2
zfs_resilver_min_time_ms 3000
zfs_scan_idle 50
zfs_scan_min_time_ms 1000
zfs_scrub_delay 4
zfs_send_corrupt_data 0
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 5
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_top_maxinflight 32
zfs_txg_history 0
zfs_txg_timeout 5
zfs_vdev_aggregation_limit 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_cache_bshift 16
zfs_vdev_cache_max 16384
zfs_vdev_cache_size 0
zfs_vdev_max_active 1000
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl [fastest] original scalar sse2 ssse3 avx2
zfs_vdev_read_gap_limit 32768
zfs_vdev_scheduler noop
zfs_vdev_scrub_max_active 2
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_write_gap_limit 4096
zfs_zevent_cols 80
zfs_zevent_console 0
zfs_zevent_len_max 128
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zil_replay_disable 0
zil_slog_bulk 786432
zio_delay_max 30000
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_taskq_batch_pct 75
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 32
zvol_volmode 1









zfs zfsonlinux zfs-l2arc

asked Jan 30 at 5:42 by Saurabh Nanda (edited Jan 30 at 8:27)



2 Answers


Answer by shodanshok (answered Jan 30 at 8:00, edited Jan 30 at 8:37):

L2ARC is only useful when it sits on a device faster than the main pool vdevs, and it is only active when you explicitly attach a cache device to the pool.
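
(For reference, attaching and detaching a cache device is a pool-level operation; a minimal sketch, assuming a hypothetical spare partition at /dev/nvme1n1p3, so the device name would need to be adapted to this machine:)

  # Attach an L2ARC (cache) device to the existing pool
  zpool add firstzfs cache /dev/nvme1n1p3

  # The device then shows up under a "cache" section in zpool status
  zpool status firstzfs

  # It can be detached again if it brings no measurable benefit
  zpool remove firstzfs /dev/nvme1n1p3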



arc_summary does report L2ARC stats, but obviously only if you have attached a cache device to the pool.



          If you see no L2ARC stats, it means you have no L2 cache right now. To be sure, please post the output of zpool status



EDIT: zpool status confirms you have no L2ARC. The arc_summary output shows no signs of L2ARC either; the only references are the l2arc_* tunables, which in this case have no effect.
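
(One way to double-check this from the kernel's own counters; a sketch assuming the standard ZFS on Linux kstat file:)

  # All L2ARC counters live in the arcstats kstat; with no cache device
  # attached, l2_size, l2_hits and l2_misses simply stay at 0.
  grep '^l2_' /proc/spl/kstat/zfs/arcstats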
































          • I have added the output of zpool status and arc_summary to my question. Btw, I noticed that arc_summary has something related to L2ARC, but I don't understand what it means.

            – Saurabh Nanda
            Jan 30 at 8:28











          • @SaurabhNanda I've updated my answer.

            – shodanshok
            Jan 30 at 8:37



















Answer by Mikael H (answered Jan 30 at 8:49):

          To add to @shodanshok's answer:



          Using an L2ARC with ZFS doesn't unequivocally make things faster. There are plenty of discussions on various forums where the background is explained in detail, but basically you a) want to keep the L2ARC small to keep read latency down, and b) probably don't want to use one at all unless you have a lot of RAM, since every block cached in the L2ARC also needs a header kept in the ARC itself. You say you have 64 GB of memory in the server, and according to several of those discussions that is about the lowest amount at which an L2ARC may make sense.



          In other words: implementing a ZFS L2ARC should be the result of realistic testing under your own workload, and it may not be needed at all.
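
          (A rough way to gather that data is to watch ARC behaviour while a representative workload runs; a sketch assuming the arcstat utility bundled with ZFS on Linux, installed as arcstat or arcstat.py depending on the release:)

            # Sample ARC statistics every 5 seconds during the test run; persistently
            # low hit rates on demand data would be the signal that more caching
            # (a larger ARC, or an L2ARC) might actually help.
            arcstat 5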





