  • Not that I want to push ZFS or anything (mdraid/LVM/XFS is a fine setup), but for informational purposes: ZFS can absolutely expand onto larger disks. I wasn’t aware of this until recently. If all the disks of an existing pool are replaced with larger ones, the pool can expand onto the newly available space. E.g. a RAIDz1 of 4x 4T disks has 12T of usable space. Replace all four with 8T disks (one after another, so it can be done on the fly) and the pool grows to 24T. Replace those with 16T and you get 48T, and so on. You can also expand a pool by adding another redundant topology, just like with LVM and mdraid, e.g. 4x 4T RAIDz1 + 3x 8T RAIDz2 + 2x 16T mirror for a total of 36T usable. Finally, expanding an existing RAIDz by adding disks to it has recently landed too. A rough sketch of all three paths is below.

    And now for pushing ZFS: I did file-based replication on a large dataset for many years. Just walking the hundreds of thousands of dirs and files took over an hour on my setup, and only then came the diff transfer. Think rsync or Syncthing. That’s how I did it on my old mdraid/LVM/Ext4 setup, and that’s how I kept doing it on the newer ZFS setup. Recently I switched to ZFS send/receive, which operates within the filesystem itself (sketch below). It completely eliminates the walk-and-stat phase, since the filesystem already knows all the metadata, so replication is reduced to just the diff transfer. What used to take over an hour now takes seconds to minutes, depending on how much data changed. I can now replicate multiple times per hour without significant load on the system; previously it was only feasible overnight because the walk robbed the system of IOPS for over an hour.
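    For reference, a rough sketch of the three expansion paths described above. Pool and disk names (tank, sda, sdx, …) are placeholders, and the last command assumes an OpenZFS release with RAIDz expansion (2.3+):

    ```sh
    # 1) Grow in place: swap each disk for a larger one, one at a time.
    zpool set autoexpand=on tank       # let vdevs claim the extra space automatically
    zpool replace tank sda sdx         # resilver onto the bigger disk; repeat per disk
    zpool online -e tank sdx           # or trigger the expansion manually

    # 2) Grow by adding another redundant vdev alongside the existing ones.
    zpool add tank raidz2 sdq sdr sds

    # 3) RAIDz expansion: widen an existing raidz vdev by one more disk.
    zpool attach tank raidz1-0 sdt
    ```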
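    And a minimal sketch of the send/receive replication; dataset names, snapshot names and backuphost are placeholders:

    ```sh
    # Seed the replica once with a full send.
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | ssh backuphost zfs receive -u backup/data

    # Every later run snapshots again and sends only the blocks that changed
    # between the two snapshots: no directory walk, no per-file stat calls.
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data
    ```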

  • The thing is that I’m already at the last couple of leaves in the investigation tree, and I’m not willing to change anything above the USB driver level. That’s why there isn’t much point in getting people to spin their wheels on solutions I can’t or won’t apply. If I were completely unable to get the data corruption and disconnects under control, I’d scrap the system and replace it with Intel. Fortunately a PCIe add-in USB controller seems to work well, so I’ve avoided the most costly option. At this point I don’t actually need the motherboard ports to work, but I’m curious to follow the signalling rabbit hole, because I’m not the only one having this problem and it doesn’t affect just this one use case. If I find a workaround like an in-line 5Gb USB hub (reduces the data rate), using USB-C ports instead of USB-A (reduces noise), or using one kind of cable instead of another, I could offer it as a cheaper fix in this ZFS thread and elsewhere. The PCIe cards work but aren’t cheap.

  • Unfortunately it won’t, because the transfers happen between ZFS and the hardware storing the data, so I can’t control the data rate at the application level (there are many different applications) or even at the ZFS level. That’s why in this particular case I’m stuck with a hardware-level workaround. I could do something silly like configuring a suboptimal recordsize in ZFS, but there could still be rate spikes, and I’d rather get the hardware to stop losing bits than hope ZFS catches it when it does. Decreasing the data rate is a generally accepted strategy for dealing with signalling issues, provided the reduced rate is still usable for the application at hand. In my case it is.

  • I am trying to transfer data over USB at high speed without data corruption, silent resets, and occasional device disconnects. Those happen because the USB controllers on my motherboard, made by AMD with some help from ASMedia, don’t function correctly at the speed they advertise. Given that, the right solution is a firmware or hardware fix for these controllers, but that’s unlikely to happen, so I’m looking for workarounds. I already have one (a PCIe add-in card), but now I’m also testing running the bad controllers at half speed, which looks successful so far, and I was wondering whether there’s a way to do that in software. Right now I’m bottlenecking the links by putting 5Gb hubs between the controllers and the devices (see the sketch below for checking what each link actually negotiated).
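    To be clear about the half-speed testing: as far as I can tell there’s no generic kernel knob to force a 10Gb port down to 5Gb, but you can at least verify what each link actually negotiated. A quick sketch using stock lsusb and sysfs, nothing exotic:

    ```sh
    # Topology plus the speed each device negotiated (480M, 5000M, 10000M).
    lsusb -t

    # The same per-device negotiated speed straight from sysfs, in Mbps.
    grep -H . /sys/bus/usb/devices/*/speed
    ```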

  • Great question. In short: garbage AMD USB controllers. I recently switched to a newer AMD board and got hit with the same issues faced by these poor sods. I’ve been testing over the last week with different combinations of ports, cables, loads, and add-in PCIe USB controllers. The add-in cards behave well, which is one way the folks in that thread solved their problems; the other was switching to Intel-based systems. Yesterday, however, I watched a TI intro on USB redrivers, which covered the various signalling issues that can occur and how redrivers help. That led me to the hypothesis that what I’m seeing might be signalling related, i.e. that this combination of controllers/ports/cables simply can’t handle 10Gbps: noise from these devices or surrounding ones causes signal loss at 10Gbps, a speed the setup does otherwise negotiate. To test that, I placed the DAS boxes behind a 5Gb hub plugged into a port that had previously failed. So far it’s stable (I’m watching the kernel log for resets while loading it, sketch below). This is why I was wondering whether there’s some magic in the kernel that would let a 10Gb port be configured to run at 5Gb.
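    In case it helps anyone reproducing this, the stability check itself is nothing fancy: keep a sustained transfer going against the DAS and watch the kernel log for resets and disconnects. Device name is a placeholder:

    ```sh
    # In one terminal: follow the kernel log for USB/xhci resets and disconnects.
    journalctl -k -f | grep -iE 'usb|xhci|reset'

    # In another: sustained read load against the DAS (read-only, safe).
    dd if=/dev/sdX of=/dev/null bs=1M status=progress
    ```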