Hello! I’ve been using an old laptop as my home server for a year, and I think now is a good time to upgrade to something better, since it feels a bit too slow.

I was thinking of buying a Synology, but I would prefer something custom, because I hate that manufacturers sometimes decide to abandon support or change all their terms of service.

My budget is about $1000 USD. I’m looking for at least 20TB, and the option to add a graphics card later would be nice.

What do you recommend buying? Also, what software do you recommend? And could this work with an N100 mini PC?

I’ve been using Ubuntu Server with Docker containers for several services, but I mainly use it for Nextcloud.

  • hendrik@palaver.p3x.de
    29 days ago

    Well, if you want a proper upgrade, 40TB plus redundancy and space for a GPU, I’d say you don’t want a mini PC but a full-blown one. I built my server myself from components. It’s hard to find good numbers on power consumption, and that was one of my main concerns. I had a look at some PC magazines and what kind of mainboards they recommend for a home server; I figured I wanted 6 SATA ports and started from that. Unfortunately, said magazine doesn’t have a good article right now, so I don’t know what to recommend. Another way is to look for refurbished PCs: if they’re from a brand like Lenovo or Dell, you’ll find the specs online. With an N100 mini PC, I’m not so sure that’s a big step up from your current setup… I don’t think they have more internal hard-drive ports or slots for GPUs than your current laptop.

  • scholar@lemmy.world
    29 days ago

    I built a server a few years ago in a Fractal Design Node case (a big square box), which has four 6TB drives in RAID 5 for 18TB of storage and a 6-core AMD CPU. It cost around £1200, and half of that was the hard drives.

    It’s been really good, so if you’re looking to build one yourself I’d recommend having a look at the case and the price of drives.
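    The RAID 5 arithmetic above checks out; as a quick sketch of how usable capacity falls out of the drive count:

```python
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 5 dedicates one drive's worth of capacity to parity,
    so usable space is (n - 1) * drive size."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_tb

raid5_usable_tb(4, 6)  # the build above: four 6TB drives -> 18 TB usable
```

    Going from four drives to six of the same size would give 30TB usable, at the cost of longer rebuilds.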

    • JustEnoughDucks@feddit.nl
      28 days ago

      This is a good way to do it.

      I went one size smaller with the Node 304, which can only fit 4 HDDs with a GPU inserted. Going used for a consumer desktop CPU is the most powerful play for the money, I think.

      This is a good path forward, OP, for a pretty powerful server:

      • Node 804
      • Used AM4 motherboard (micro-ATX B550, can be around 150€)
      • Used 5700X or similar (seen as low as 100€)
      • New 500W power supply
      • 32GB of DDR4-3200 RAM in 16GB sticks
      • WD Red Plus 10TB, helium-filled, for a balance of noise, performance, and price. My 10TB drives are as quiet as my 4TB ones. My scheme is a ZFS mirror of two 4TB drives for important docs, plus the 10TB drives for non-critical data. Drives are by far the most expensive part unless you get good second-hand ones
      • If you want to run a Jellyfin media server, pick up an Arc A310

  • Possibly linux@lemmy.zip
    29 days ago

    The best bang for your buck is business workstations. $1000 is a fairly big budget and is likely a bit overkill. Get 3 decently specced workstations, put storage and fast networking in them, cluster them, and then set up high availability. Depending on your setup, you could also modify one to double as a NAS: get a SATA or SAS card and put some drives in the chassis. You may need to get your hands dirty, but that’s the fun part.

  • lemming741@lemmy.world
    29 days ago

    I think the N100-type CPUs are limited on PCIe lanes. You end up with fewer NVMe slots, fewer SATA ports, and usually no expansion slots.

    You can find X570 AM4 boards for less than $100 now: two NVMe slots, 8 SATA ports, 2 big slots, and 2 small ones.

    But all of that flexibility and expandability is going to cost you in power. My 7700X with an A380 and 3 HDDs draws 125 watts 24/7, about $10 a month on my power bill. I think those N100 mini PCs only have a 35W brick and idle at less than 15W.
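    That $10/month figure is consistent with a typical US electricity rate; a small sketch (the $0.11/kWh default is an assumption, plug in your own rate):

```python
def monthly_power_cost_usd(watts: float, usd_per_kwh: float = 0.11) -> float:
    """Continuous draw in watts -> approximate monthly energy cost.
    Assumes ~730 hours in a month; the default rate is a hypothetical average."""
    kwh_per_month = watts / 1000 * 730
    return kwh_per_month * usd_per_kwh

monthly_power_cost_usd(125)  # the 7700X box above: roughly $10/month
monthly_power_cost_usd(15)   # an idling N100 mini PC: just over $1/month
```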

  • qaz@lemmy.world
    29 days ago

    An N100 would be fine; I use one for my own server. Despite benchmarking about the same as an i5-6500T overall, Quick Sync makes a big difference when encoding video with e.g. Jellyfin. I “upgraded” from an i5-6500T to a custom-built N100 server and the performance improved a lot. However, if you plan on hosting game servers, it probably won’t be enough.

  • Friend of DeSoto@startrek.website
    29 days ago

    I purchased a case, a SilverStone CS382 8-bay, for around $200-225.

    Bought used parts off eBay:

    Asus P8Z77-M LGA 1155 DDR3 SDRAM desktop motherboard: $75

    32GB DDR3-1333: $35

    LSI 6Gbps SAS HBA 9200-8i, IT mode, P20 firmware: $35

    Nvidia Quadro P620 2GB GDDR5, 4x Mini DisplayPort: $70

    I have six 12TB drives (Seagate Exos), purchased refurbished from serverpartdeals.com; I’ve had great luck with them and their support. I found them through Reddit’s DataHoarder sub.

    I run TrueNAS: 4 drives for primary storage and 2 drives to back up the first 4. I also have a QNAP 4-bay dumb RAID box as a third backup, using old drives I had. That’s my paranoia, not really related to the NAS.

    Anyway, it’s possible, and I enjoy what I built. Also, that case is loud; get a fan controller too.

      • Friend of DeSoto@startrek.website
        29 days ago

        I was limited by the processor and some existing ram which basically dictated my purchases to save money.

        You’re completely right though, a more modern system would be similar in price and more capable.

        I blew my budget on drives and a hot swap case. The rest is easy to upgrade when the time comes.

  • foremanguy@lemmy.ml
    29 days ago

    One of the best choices is an old enterprise tower-form-factor server, but it has some downsides: it’s a bit power hungry, and it won’t work if you can’t tolerate noise at all (tower servers are not loud, but not silent either). The positive is that it’s really cheap for its power (I got mine for $120 with 3TB of storage, 12 vCores, and 32GB of DDR4 RAM).

    EDIT: buy some used HDDs; you can easily get 20TB for around $300.

  • Nutbolt@lemmy.world
    29 days ago

    I just asked a very similar question over here: https://reddthat.com/post/29255208

    I put together a proposed self-build there and am looking for feedback on it.

    I’ve been running Unraid for a few years and it’s been great and really user friendly as well.

  • thirdBreakfast@lemmy.world
    29 days ago

    There are lots of ways to skin this particular cat. My current approach is a low-powered Synology (j series) for mass storage, plus 1-litre PCs running Proxmox for my compute, using their NVMe drives for storage, all backed up to the Synology.

    • pezhore@infosec.pub
      29 days ago

      This is basically my homelab: a Synology DS1618+ plus 3x Lenovo M920q systems with 1TB NVMe drives. I upgraded to a 10Gb fibre switch, so they run Proxmox + Ceph, with the Synology offering additional storage over its add-on 10Gb fibre card.

      That’s probably a few steps up from what the OP is asking for.

      Splitting out storage and compute is definitely a good first step to increase optimization and failure resiliency.

      • voracitude@lemmy.world
        29 days ago

        Splitting out storage and compute is definitely a good first step to increase optimization and failure resiliency.

        Exactly why I’ve been considering doing it this way for my new setup! I had to leave my last one on the other side of the planet and have felt positively cramped with just a couple of TB of internal drives. I can’t wait to properly spread out again.

      • ddh@lemmy.sdf.org
        29 days ago

        I’m interested in how you like Ceph.

        My setup is similar, using a DS1522+ volume as shared block storage for an iSCSI SAN for three Proxmox nodes. Two nodes are micro PCs and the third is running on the 1522+. There’s a DS216j for backups.

        • pezhore@infosec.pub
          28 days ago

          Ceph is… fine. I feel like I don’t know it well enough to properly maintain it. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about too many placement groups:

          1 pools have too many placement groups
          
          Pool tank has 128 placement groups, should have 32
          

          Aside from that, and the occasional falling-over of monitors, it’s been relatively quiet. I’m tempted to use the Synology for all the storage and carve the 10GbE network up for VM traffic instead. Right now I’m using bonded USB 1GbE copper, and it’s kind of sketchy.

          • nickwitha_k (he/him)@lemmy.sdf.org
            28 days ago

            I maintained a Ceph cluster a few years back. I can verify that speeds under 10GbE will cause a lot of weird issues. Ideally, you’ll even want a dedicated 10GbE link purely for Ceph to do its automatic maintenance without impacting storage clients.

            The PGs are a separate issue. Each PG is like a disk partition. There’s some funky math, and there are guidelines to calculate the ideal number for each pool based on disks, OSDs, capacity, replicas, etc. Basically, more PGs means more (but smaller) places for Ceph to store data, which makes balancing over a larger number of nodes and drives easier; it also means more metadata to track. So, really, it’s a bit of a balancing act.
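            The old rule-of-thumb from the Ceph docs (target roughly 100 PGs per OSD, divided by the replication factor, spread across pools, rounded to a power of two) can be sketched like this. It’s an illustration of the guideline, not a recommendation for any specific cluster, and modern Ceph’s pg_autoscaler does this job for you:

```python
import math

def guideline_pg_num(num_osds: int, replicas: int, pools: int = 1) -> int:
    """Classic Ceph placement-group guideline: ~100 PGs per OSD,
    divided by the replica count, split across pools, and rounded
    to the nearest power of two."""
    raw = (num_osds * 100) / (replicas * pools)
    return max(1, 2 ** round(math.log2(raw)))

guideline_pg_num(3, 3)            # a small 3-OSD, 3-replica homelab
guideline_pg_num(12, 3, pools=4)  # more pools spread the PG budget thinner
```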

      • mipadaitu@lemmy.world
        29 days ago

        Any idea what your power consumption is for the 1618? I currently have a 720, but with only two drives it’s kind of limiting for HDD upgrades.

        • pezhore@infosec.pub
          29 days ago

          Unfortunately, no, not specifically. I want to get a Kill A Watt meter at some point. The best I can do is share my UPS’s reported power output, currently around 202-216W, but that includes both the DS1618 and the DS415+, along with my Ubiquiti NVR and two of my Lenovo M920qs.

          I should probably look at what adding the 5-bay external expansion would take power-wise, and maybe decommission the very aged 415.

          Edit: this is also my annual reminder to finally hook up the USB port on my UPSs to… something. I really want some smart “oh no, there’s a power outage and we’re running low on reserves, intelligently and gracefully shut things off in this order” handling, but I never got around to it.
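          Network UPS Tools (NUT) covers exactly that use case: a monitor daemon watches the USB-attached UPS and triggers a graceful shutdown when the battery runs low. A minimal sketch, assuming a USB HID UPS; the name `myups` and the password are placeholders:

```
# /etc/nut/ups.conf -- declare the USB-attached UPS
[myups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf -- shut this host down cleanly on low battery
MONITOR myups@localhost 1 upsmon secretpass primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

          The matching `upsmon` user goes in `upsd.users`, and other machines fed by the same UPS can MONITOR it as `secondary` so they shut down before the primary does, which gets you the ordered shutdown you describe.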

          • mipadaitu@lemmy.world
            29 days ago

            If you’re running Home Assistant, you can put some inline power-monitoring plugs in. I like the Third Reality ones because you can set them to “default on” or “default off” after a power failure, and they run on a local Zigbee network without requiring internet access.

          • mipadaitu@lemmy.world
            29 days ago

            Oof, that’s a lot of juice.

            I’m running a UPS, a Syno 720+, an old gaming laptop as a Portainer host, my WiFi, router, cable modem, and switches, and that’s only about 50W for everything. I’m pretty sure the Synology uses the bulk of that power, but I don’t have data to back it up.

            I’d like to upgrade a few things, but I’m really trying to keep it below 75w. Ideally below 50w if I can. I think my old laptop is good for now, just want more flexibility in my NAS if I can do it without bumping up the power budget.

            • pezhore@infosec.pub
              29 days ago

              To be fair, both Synologies are running big spinny NAS drives. I could reduce my capacity and my power usage by going with SSDs, but shockingly, I can’t seem to figure out what to cull from the 35TB of combined storage.

              I’m debating moving my Vault cluster from a ClusterHAT to pods on my fresh Kubernetes deployment, and if I virtualize Pi-hole, that would also reduce some power consumption. Admittedly, I’m going overboard on my “homelab”; it’s more of a full-blown SMB at this point, with a Palo Alto firewall and a Brocade 48-port switch. I do infosec for a living, though, and there’s reason to most of my madness.

  • lorentz@feddit.it
    28 days ago

    I got a TerraMaster NAS and I’m super happy with it: https://www.terra-master.com/global/f4-5067.html

    The main reason to choose it is that it’s just a PC in the form factor of a NAS: you can boot it from a pendrive and install your favourite operating system. I had a QNAP before, and while it was great to start with, self-hosting wasn’t the best experience on their OS.

    It’s a small form factor and should be low power consumption (I’ve never measured to confirm), and it supports both NVMe and SATA drives. Currently I have an NVMe drive for the OS and two SATA drives for storage. The CPU is powerful enough to run Home Assistant, a VPN, Pi-hole, CommaFeed, and a bunch of other Docker images. I just plan to increase the RAM soon-ish because the stock amount feels a little constrained.

  • phucyall@lemmynsfw.com
    29 days ago

    I have a Synology and I love it, but if you’re on a budget, build one server and use it both for storage and for hosting all your stuff.

    Use PCPartPicker and build yourself a full desktop tower, something like https://pcpartpicker.com/list/gHLHxg. You can get a lot for your money on the used market, but it will use way more power and will be slower.

    For the above build I picked low-to-mid-range components, but adjust for what matters most to you. Maybe get a CPU with more cores and less storage to start, and add more storage later. Or do the opposite if you don’t care about the CPU but want more storage now.

    Some hardware notes: do get an AMD CPU and stay away from Intel; the last two years of their CPUs have been plagued with major issues. Also get DDR5 RAM and whatever motherboard supports it. Get a fast NVMe drive for your OS; 1TB should be plenty.

    Finally, don’t install Ubuntu directly on it. Two options for the OS: if you want to use it as a NAS, go with TrueNAS Scale; otherwise, use Proxmox. You can then create a virtual machine on either of those and install Ubuntu in it if you still want to. You can also run containers on both.

    • Nutbolt@lemmy.world
      28 days ago

      You mention getting an AMD CPU, and I’ve heard similar stories about Intel quality lately. However, I’ve also heard in the past that AMD CPUs aren’t very good at idling at low power. Electricity is expensive, and I want it to idle as low as possible. Plus, for my build I’d certainly make use of Quick Sync on an Intel CPU.

      https://uk.pcpartpicker.com/user/Nutbolt/saved/#view=rrchkL

      Any thoughts? I’m looking for opinions on Intel vs AMD, but also on my proposed build. Thanks!

  • Thoralf Will@discuss.tchncs.de
    29 days ago

    I run a Proxmox cluster with 2x Dreamquest Pro machines, each with 2x 10TB HDDs in an IcyBox enclosure, plus an external Raspberry Pi with a 12TB disk for backup.

    The disks are refurbished to keep costs down, but they run in a mirror setup, so if one fails I’m fine. I use an old laptop as a third node, and the main nodes replicate their load, so I’m fine even when one node is dead.

  • Moreless@lemmy.world
    29 days ago

    With Synology you’re not getting the latest and greatest hardware; you’re basically buying the DSM operating system.

    DSM is a really nice one stop shop though.

    Unless you know you’re doing something DSM can’t support, it’s hard to go wrong with Synology.

    Just make sure whatever version you buy has access to the DSM apps. For instance, you said you use docker, so make sure the Synology device you’re interested in works with Container Management.

    • Owljfien@lemm.ee
      29 days ago

      The rat dogs have it locked on mine, others with exact same SOC have it, which makes me very unhappy

  • iggy@lemmy.world
    28 days ago

    I have a couple of Aoostar R7s (4x in a hyper-converged Ceph + cloud-hypervisor + k0s cluster, but that’s overkill for most). They have been rock solid. They also make an N100 version with less storage expansion if you don’t need it. My nodes probably idle at about 20W fully loaded with drives (2x NVMe, 1x SATA SSD, 1x SATA HDD), running ~15 containers and a VM or two. You should easily be able to get one (plus memory and drives) for $1000. Throw Proxmox and/or some NAS OS on it and you’re good to go.