• addie@feddit.uk · 4 months ago

    Assuming these have a fairly impressive 100 MB/s sustained write speed, it’s going to take about 93 hours to write the whole contents of the disk - basically four days. That’s a long time to replace a failed drive in a RAID array; you’d need to consider multiple disks of redundancy just in case another one fails while you’re resilvering the first.
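
    A rough sketch of that arithmetic in Python (the drive capacity isn’t stated here, so it’s left as a parameter; ~93 hours at 100 MB/s works out to roughly 33-34 TB):

    ```python
    def rebuild_hours(capacity_tb: float, write_mb_per_s: float) -> float:
        """Hours to sequentially rewrite an entire drive at a sustained rate."""
        capacity_mb = capacity_tb * 1_000_000   # decimal units: 1 TB = 1,000,000 MB
        return capacity_mb / write_mb_per_s / 3600

    print(rebuild_hours(33.5, 100))   # ~93 hours, just under four days
    print(rebuild_hours(33.5, 180))   # ~52 hours at the 180 MB/s quoted below
    ```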

    • DaPorkchop_@lemmy.ml · 4 months ago

      My 16TB ultrastars get upwards of 180 MB/s sustained read and write; these will presumably be faster than that, since the density is higher.

      • frezik@midwest.social · 4 months ago

        I’m guessing that only works if the file is smaller than the RAM cache of the drives. Transfer a file that’s bigger than that and it will go fast at first, but once the cache fills the rate starts to drop closer to 100 MB/s.

        My data hoarder drives are a pair of WD ultrastar 18TB SAS drives on RAID1, and that’s how they tend to behave.
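
        A minimal sketch of that behaviour, modelling the drive as a DRAM write cache in front of slower platters (the cache size, burst rate, and media rate are illustrative assumptions, not specs for any particular drive):

        ```python
        def average_write_mb_s(transfer_mb, cache_mb=256, burst_mb_s=400, media_mb_s=100):
            """Average write rate under a simple 'fast until the cache fills' model."""
            # While the cache has room, the host writes at the burst rate while the
            # platters drain at the media rate, so the cache fills at the difference.
            fill_time = cache_mb / (burst_mb_s - media_mb_s)
            absorbed = fill_time * burst_mb_s     # data accepted before the cache is full
            if transfer_mb <= absorbed:
                return burst_mb_s                 # transfer finishes before the cache fills
            total_time = fill_time + (transfer_mb - absorbed) / media_mb_s
            return transfer_mb / total_time

        for size_mb in (100, 1_000, 100_000):
            print(size_mb, round(average_write_mb_s(size_mb), 1))
        # 100 -> 400.0, 1000 -> 134.4, 100000 -> 100.3: big transfers converge on the media rate
        ```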

        • DaPorkchop_@lemmy.ml · 4 months ago

          This is for very long sustained writes, like 40 TiB at a time. I can’t say I’ve ever noticed any slowdown, but I’ll keep a closer eye on it next time I do another huge copy. I’ve also never seen any kind of noticeable slowdown on my four 8TB SATA WD golds, although they only get to about 150 MB/s each.

          EDIT: The effect would be obvious pretty fast at even moderate write speeds; I’ve never seen a drive with more than a GB of cache. My 16TB drives have 256 MB, and the 8TB drives only 64 MB of cache.
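
          For a sense of scale (the cache sizes are the ones quoted above; the surplus rates are assumptions), a cache that small fills in seconds, so it couldn’t prop up throughput over a 40 TiB copy anyway:

          ```python
          # Seconds until a drive's DRAM cache fills, for a given surplus of
          # incoming data over what the platters can absorb (surplus figures
          # are illustrative, not measured).
          for cache_mb in (256, 64):               # cache sizes quoted above
              for surplus_mb_s in (50, 100, 200):  # assumed incoming rate minus media rate
                  seconds = cache_mb / surplus_mb_s
                  print(f"{cache_mb} MB cache, +{surplus_mb_s} MB/s: {seconds:.1f} s to fill")
          ```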