EDIT: SOLUTION:

Never mind, I am an idiot. As @ClickyMcTicker pointed out, it’s the client side that was causing the trouble. His comment made me re-check my testing procedure. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about ten times. Everything is fine now. Maybe this can help some other dumbass like me in the future. Thanks everyone!
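
For posterity, the lesson is to benchmark the source drive too, not just the NAS. A quick client-side read check could look something like this (the file path is a placeholder; add iflag=direct if you want to bypass the page cache):

    # Rough sequential read test of the source drive on the client
    dd if=/path/to/some_large_file of=/dev/null bs=1M status=progress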

Hello there.

I’m trying to set up a NAS on Proxmox. For storage, I’m using a single 2TB Samsung 870 Evo (backups will be done anyway, no need for RAID). To do this, I set up a Debian 12 container, installed Cockpit and the tools needed to share via SMB. I set everything up and transferred some files: about 150 MB/s with huge fluctuations. Not great, not terrible. iperf reaches around 2.25 Gbit/s, so something is off. Let’s do some testing. I started with the filesystem. This whole setup is for testing anyway.
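
For context, the share itself is nothing exotic. I clicked it together in Cockpit, but it boils down to a minimal smb.conf section like this (share name and path are placeholders, not my exact config):

    # /etc/samba/smb.conf -- minimal share section (illustrative only,
    # the real one was generated through Cockpit's file sharing plugin)
    [nas]
        path = /mnt/nas        # mount point passed into the container
        read only = no
        browseable = yes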

  1. Storage via creating a directory with EXT4, then adding a mount point to the container. This is what gave me the speeds mentioned above. Okay, not good. –> 150 MB/s, speed fluctuates
  2. Let’s do ZFS, which I want to use anyway. I created a ZFS pool with ashift=12, atime=off, compression=lz4, xattr=sa and a 1M recordsize. I did “some” research and this is what I came up with, please correct me (a sketch of the commands follows after this list). Mount to container, and go. –> 170 MB/s, stable speed
  3. Tried OpenMediaVault and used EXT4 with ZFS as the base for the VM drive. –> around 200 MB/s
  4. LVM-Thin using the Proxmox GUI, then mount to container. –> 270 MB/s, which is pretty much what I’m reaching with iperf.
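
For reference, step 2 boils down to something like this (pool/dataset names and the device path are placeholders, not my exact setup):

    # Create the pool with 4K sector alignment, then set the properties
    # from step 2 -- names and device are placeholders
    zpool create -o ashift=12 tank /dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB
    zfs create tank/nas
    zfs set atime=off compression=lz4 xattr=sa recordsize=1M tank/nas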

So where is my mistake when using ZFS? Disable compression? A different record size? Any help would be appreciated.

  • MangoPenguin@lemmy.blahaj.zone · 7 months ago

    Have you benchmarked the disk locally, directly on the Proxmox host? We need to figure out whether this is an IO limitation, a CPU limitation, or something else.

    • Pete90@feddit.de (OP) · 7 months ago

      Good point. I used fio with different block sizes:

      fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/sda
      
      4K = IOPS=41.7k, BW=163MiB/s (171MB/s)
      8K = IOPS=31.1k, BW=243MiB/s (254MB/s)
      32K = IOPS=13.2k, BW=411MiB/s (431MB/s)
      512K = IOPS=809, BW=405MiB/s (424MB/s)
      1M = IOPS=454, BW=455MiB/s (477MB/s)
      

      I’m gonna be honest though, I have no idea what to make of these values. Seemingly, the drive is capable of maxing out my network. The CPU shouldn’t be the problem either, it’s an i7-10700.
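
      In case anyone wants to reproduce the sweep: it’s the same command as above with --bs varied, roughly like this:

      # Same fio flags as above, looping over the tested block sizes
      for bs in 4K 8K 32K 512K 1M; do
          fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=$bs \
              --numjobs=1 --iodepth=1 --runtime=60 --time_based \
              --name seq_read_$bs --filename=/dev/sda
      done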

      • MangoPenguin@lemmy.blahaj.zone · 7 months ago

        Basically you’re getting 477MB/s for a sequential read, which is spot on for a SATA SSD.

        What size were the files you were transferring when you only got 150? Also, did you mean Mb/s or MB/s? There’s an 8x difference between the two.

        • Pete90@feddit.de (OP) · 7 months ago

          I meant megabytes (I hope that’s correct, I always mix them up). I transferred large video files, both when the filesystem was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500 MB and 1.5 GB in size.

          • NeoNachtwaechter@lemmy.world · 7 months ago

            ZFS compression is costing some CPU power for sure. How many cores/threads does your CPU have?

            And if it is mostly video files: they are already compressed heavily, so you don’t gain anything with another layer of compression.

            • Pete90@feddit.de (OP) · 7 months ago

              It’s videos, pictures, music and other data as well. I’ll try playing around with compression today and see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
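
              For the test: compression is a per-dataset property and only affects new writes (dataset name is a placeholder):

              # Turn compression off for a test run; existing blocks stay as written
              zfs set compression=off tank/nas
              # Check how much lz4 was saving so far
              zfs get compressratio tank/nas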

          • ClickyMcTicker@hachyderm.io · 7 months ago

            @Pete90 @MangoPenguin Bytes (B) are used for storage, bits (b) are used for network. 1B=8b.
            2.5Gbps equals 312.5MBps.
            With that in mind, there are a lot of moving parts to diagnose, assuming you want to reach that speed for a transfer. Can the storage of both machines reach that speed? I believe I saw the NAS’s disk tested and clocked at 470ish MBps, but can the client side keep up? I saw the iperf test, but what was the exact command used? Did you multithread it?
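
            For comparison, multithreading in iperf3 is just the -P flag (address is a placeholder):

            # One stream vs. four parallel streams
            iperf3 -c 192.168.1.10
            iperf3 -c 192.168.1.10 -P 4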

            • Pete90@feddit.de (OP) · 7 months ago

              Both machines are easily capable of reaching around 2.2 Gbps. I can’t reach the full 2.5 Gbps even with iperf. I tried some tuning but that didn’t help, so it’s fine for now. I used iperf3 -c xxx.xxx.xxx.xxx, nothing else.

              The slowdown MUST be related to ZFS, since LVM as the storage base can reach the “full” 2.2 Gbps when used as an SMB share.

            • Pete90@feddit.de (OP) · 7 months ago

              Never mind, I am an idiot. Your comment made me re-check my testing procedure. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about ten times. Everything is fine now. THANKS!