Hey, I want to dip my toes into self-hosting, but I find the hardware side of things very daunting. I want to self-host a Minecraft server (shocking, I know), and I've actually done this before, both on my own PC and through server hosts. I'd like to run a Plex server as well (Jellyfin is the champ now, it sounds like? So maybe that instead), but I imagine the Minecraft server is going to be the much more intensive side of things, so if the hardware can handle that, Plex/Jellyfin will be no issue.
The issue is, I can't seem to find good resources on the hardware side of building a server. I'm finding it very difficult to map out what I need: I don't want to skimp and end up with something much less powerful than what I need, but I also don't want to spend thousands of dollars on something extremely overkill. I looked through the sidebar, but it seems to mostly cover the software side of things. Are there any good resources on this?
Buy yourself a new gaming rig, and use your old gaming rig as a server. That’s what I usually do.
Or, see if you can get an old office PC for a couple hundred bucks on eBay. Anything that’s around 5 years old (10 is pushing it) and has decent specs (maybe an i7 and 16GB of RAM) should work fine as a Minecraft and Plex server. Then you can get a cheap (ideally less than $200) graphics card and be good to go.
Bottom line, a “server” is just a PC that’s serving things. You don’t need enterprise grade hardware. If you’re new to hosting, I’d advise you to start cheap and then upgrade to better hardware in a few years when you KNOW what you need. No need to get something really nice and expensive right now.
I have something 10 years old running Jellyfin only (plus other light stuff, nothing important), and it handles it fine. No hardware acceleration, but the CPU can keep up for just me and one friend using it. I got it for 50 bucks on eBay and it rocks. I don't know about Minecraft servers though.
Edit: It didn’t come with drives. Don’t ever trust old drives.
It depends on whether or not you’re transcoding, how many users you have, and your resolution. If you’re just direct streaming 720p/1080p content to a couple of people then even a Raspberry Pi is fine. But if you’re sending transcoded 4K streams to several people simultaneously, you need some horsepower.
Might I suggest Server Part Deals for drives? Excellent track record and very responsive. They are my go-to for refurbished enterprise drives and have never let me down.
Thanks. I don’t currently have any raid or backup set up, so I should probably do that before it becomes a problem.
> Buy yourself a new gaming rig, and use your old gaming rig as a server. That's what I usually do.
Seconded. A few years ago I upgraded my CPU, which also required me to swap the motherboard and RAM. The old Mobo / CPU / RAM combo was sitting around in my closet. I just bought a decent case, power supply, and a few hard drives, and bam. Instant server.
As far as the graphics card goes, I would go with something cheaper unless you have a specific reason not to. If your CPU has a built-in graphics processor, that's probably good enough. My CPU didn't, so I had to throw a $30 card in.
Modded Minecraft servers are heavily dependent on single-threaded performance. For more vanilla servers, Paper helps a lot. For Forge I highly recommend trying Mohist. It isn't compatible with all Forge mods, but it works well enough that you can just replace the server jar in many modpacks and see a large performance boost.
The biggest thing that slows down Minecraft servers in my experience is world gen. Pre-generating the world and adding a world border can help a lot.
I've not done a larger-scale Fabric server, so I can't offer much advice on optimizing it, but the client-side speedups available through Fabric look very impressive.
If you are running a server without world borders, or with a lot of simultaneous players, I'd look closely at which SSD you're saving the world to. You want DRAM cache, and random write speeds are way more important than sequential. If you can find an Intel Optane for cheap, they are pretty amazing. Still, the SSD is less important than your CPU and having enough RAM to run the server.
Generally an older gaming PC is better than an older server; again, you are targeting single-threaded performance. If you are purchasing hardware, it might make more sense to go with lower-end new hardware than higher-end old hardware. It's all about trade-offs for your use case and budget. For a long time I just used my main PC to play games and host servers (RAM is cheaper than another PC), but I tinker too much to keep good 'server' uptime.
Transcoding can get pretty taxing on a system, but any semi-modern quad core can handle a few 1080p streams or a 4K stream. Plus you can use a GPU for transcoding. The nice thing is it scales with core count pretty well, so older server or workstation hardware works well.
The only thing I'd add to this is that the people who make the Paper Minecraft server are working on Folia, a multi-threaded server. It's probably worth looking into if you're starting from scratch :)
Not terribly useful unless you have a lot of players, though.
Fabric has some amazing open source projects dedicated to performance.
I don't know if any of them multithread it yet, but it's my current go-to for low-end systems.
Don't get server hardware; regular desktop/laptop machines will be more than enough for you. Server hardware is way more expensive and won't give you any advantage. If you're looking to buy, you can even get very good 9th/10th-gen Intel CPUs and motherboards that are perfect for running servers (very high performance) but that people don't want because they aren't good for playing the latest games. This hardware is also way more power efficient, and sometimes even more powerful, than any server hardware you might get for the same price. Get this hardware for cheap and enjoy.
I've got enterprise-level hardware, rack-mountable and all that jazz.
Between the cost of power and the heat it generates (which means more AC and thus more power), it's not feasible to run it.
I'm looking into clustering some Raspberry Pis as a more power- (and heat-) efficient setup for my next project. I've barely scratched the surface of the research though.
So hey, if anyone has any tips or links, it would be much appreciated.
Just out today, but probably overpriced: https://www.cnx-software.com/2023/10/20/mixtile-cluster-box-supports-four-rockchip-rk3588-sbcs-connected-over-pcie/
Why not get a few HP / Dell mini computers and cluster them? Small, nice and power efficient (because there are models with “mobile” CPUs).
Cost and a personal bias; also, I've found the communities of Linux and FOSS advocates more helpful than trying to deal with a big brand.
I’ve done a lot of IT stuff in my life, even before working in IT.
I've seen too many issues from big brands, and it's usually caused by the company itself.
I have a Pi 2 from way back. I've thrown so many distros at that thing over time, and without fail, the only problems I run into are ones I created myself while learning, or through human error.
I understand all too well that those big brands have support for businesses, warranties, etc. It makes them cost effective long term for business. At a personal level I just don’t see the benefits outweighing the negatives.
Again, personal bias. It's the same core reason I avoid Apple products: bias, though I mainly dislike Apple's cost combined with their closed-off, well, everything.
ARM is great, yes, but compared to server hardware it's shit when it comes to performance and reliability. If you come from server hardware and you really max it out, you're going to have a poor experience.
Also, I personally like to avoid the Raspberry Pi and their stuff as much as possible. They've done good things for the community, but they've also pulled some predatory tactics and shenanigans that aren't cool. Here are a few examples of what people usually fail to see:
- Requires a special tool to flash. In the past it was all about grabbing an image and using Etcher, dd, or whatever to flash it to a card; now they're pushing people to use Raspberry Pi Imager. Without it you can't easily disable telemetry and/or set up network login out of the box;
- Includes telemetry;
- No alternative open Debian based OS such as Armbian (only the Ubuntu variant);
- The Raspberry Pi 5 finally has PCIe, but instead of doing the right thing they decided to include a proprietary bullshit connector that requires yet another board made by them. For those who are unaware, other SBC manufacturers simply include a standard PCIe slot OR a standard NVMe M.2 slot. Both are great options, since hardware for them is common and cheap;
- It is overpriced and behind times.
For what it's worth, the NanoPi M4 released in 2018 with the RK3399 already had a PCIe interface, 4GB of RAM, and whatnot, and it was cheaper than the Raspberry Pi 3 Model B+ from the same year, which had its Ethernet shared with the USB bus.
If you don’t want those big brands (I only suggested them because they’re cheap second hand) build something yourself on consumer hardware or pick a Chinese brand.
Those big brands are cheap, though: for 100€ you can get an HP Mini with an 8th-gen i5 + 16GB of RAM + 256GB NVMe that obviously has a case, a LOT of I/O, and PCIe (M.2), comes with a power adapter, and, more importantly, outperforms an RPi 5 in every possible way. Note that the 8GB RPi 5 will cost you 80€ + case + power adapter + bullshit PCIe adapter + SD card + whatever other money grab.
Side note on alternative brands: HP Mini units are reliable, the BIOS is good, and things work. The trendy MINISFORUM is cool, but their BIOSes come out of the factory with weird bugs and the hardware isn't as reliable - missing ESD protection on USB in some models and whatnot.
Performance isn’t key. But I like performance, lol. I also wasn’t aware of their more recent practices. So thank you.
I'll have to check out the HP Mini. As I said, I've just barely scratched the surface on researching this, and it's more of a thought than a project at the moment, lol.
I just can't afford (or cool) enterprise-level stuff at home. It was free (to me), so no big loss other than buying a better used CPU for ~50 bucks. I've spent more on worse ideas lol.
I was just trying to share a bit of my experience. I too have datacenter/server hardware experience and have dealt with a ton of mini computers, and those ARM boards and Chinese brands aren't what one usually expects when it comes to the most fundamental details.
Hardware wise, you just need a good PC. One thing to note is that graphics are almost irrelevant for servers. In your case, it would help to have AV1 encoding, so you could go with a $110 Intel A380 or A310.
The most important thing is RAM. The more server applications you start putting on there, the more RAM you’ll need. 16GB is fine for what you need right now, but make sure your mobo has two extra slots so you can up it to 32 if needed.
Storage, it’s really up to you. If you want everything on an NVMe, great! If you want everything in a RAID array, expensive, but great! Using mdadm for RAID arrays is fairly easy, just a lot of reading. Make sure you have enough SATA ports to support all the disks you need if you want that.
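To make the mdadm part less mysterious, here's roughly what a basic mirror setup looks like, as a minimal sketch (wrapped in Python just to keep it in one runnable script). The disk names are placeholders for whatever spare drives you have, so double-check yours with lsblk first:

```python
#!/usr/bin/env python3
"""Minimal sketch of creating a two-disk mdadm mirror (RAID 1).
The device names below are placeholders -- check yours with `lsblk`
first, because mdadm will happily wipe the wrong disk. Run as root."""
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc"]   # hypothetical spare data disks
ARRAY = "/dev/md0"

def run(cmd):
    print("+", cmd if isinstance(cmd, str) else " ".join(cmd))
    subprocess.run(cmd, shell=isinstance(cmd, str), check=True)

# Build the mirror: --level=1 is RAID 1, i.e. two full copies of the data.
run(["mdadm", "--create", ARRAY, "--level=1", "--raid-devices=2", *DISKS])

# Put a filesystem on it and persist the array definition so it assembles
# on boot (the config path is /etc/mdadm/mdadm.conf on Debian/Ubuntu).
run(["mkfs.ext4", ARRAY])
run("mdadm --detail --scan >> /etc/mdadm/mdadm.conf")
```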
CPU: you can skip integrated graphics to save cost, unless you're buying one whose integrated graphics does AV1 encoding, in which case you don't need the A380 at all.
deleted by creator
You don’t need to buy server hardware, although it is nice. Depending on where you live you might be able to buy some decent second hand server hardware.
If it were me, I would buy new desktop hardware. Here is a fairly decent server that will do almost anything: go for around a 16- or 24-core CPU with a high clock speed per core, and 64GB or 128GB of DDR5 RAM. Your most important factor will be storage speed, so go with NVMe drives. You have some choices here:
- JBOD: one or more independent M.2 drives.
- Software RAID: use your CPU to manage the RAID configuration.
- Hardware RAID: use a RAID controller / HBA card to manage the RAID (faster, but a single point of failure).
Use RAID 1 for data protection (you can lose one drive and still have all your data), RAID 0 to double the speed of your drives, or RAID 10 for the best of both (but it needs double the drives). Choose a motherboard that suits your choices.
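To make those trade-offs concrete, here's a quick back-of-envelope sketch of usable capacity per RAID level; the 4 x 2TB drive numbers are just an example:

```python
# Back-of-envelope usable capacity for the RAID levels mentioned above,
# assuming identical drives. The 4 x 2 TB figures are example numbers only.
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    if level == "RAID0":    # striping: all capacity, no redundancy
        return drives * size_tb
    if level == "RAID1":    # mirroring: capacity of a single drive
        return size_tb
    if level == "RAID10":   # striped mirrors: half the total, needs an even count >= 4
        return drives * size_tb / 2
    raise ValueError(level)

for level in ("RAID0", "RAID1", "RAID10"):
    print(level, usable_tb(level, drives=4, size_tb=2), "TB usable")
# RAID0 8.0 TB, RAID1 2.0 TB, RAID10 4.0 TB
```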
Things to take into account: if you go with a RAID controller card, make sure the PCIe lanes it uses can handle the full speed of your RAID configuration, or you might be bottlenecked there. Choosing an Intel or AMD CPU doesn't make much difference. If you are not good with Linux distros and don't want a learning curve, stick with something like Ubuntu 22.04 LTS Server. You most likely won't need any graphics card, but it depends what you want to do.
You can run a Minecraft server on an old laptop, so these specs might be overkill; I just listed what I would get, and it will do almost anything you want to do with it. An 8-core CPU, 16GB of RAM, and one NVMe drive will also cover all your described needs just fine.
I just use my old gaming PC, GPU and all. I self host quite a few services on it and I have yet to find something that puts it into high usage.
I don’t have any better hardware suggestions than you’ve already been given, but I would recommend avoiding Plex.
I get problems with it pretty much every day. Usually it’s the client stuttering and crashing, but I often get the client connecting to the server, but not getting a list of media back.
I’ve bought a lifetime licence, but I’m looking into switching to Jellyfin because it’s so frustrating.
Honestly for what you’re trying to accomplish, any PC built in the past 10 years would suffice.
I'd say the bigger issue would be what server operating system you want to run. Personally I use UnRaid and I love it; all of the apps you mention and more are available as pre-made Docker templates in the Community Apps plugin. I've tried Windows and FreeNAS before, but I find UnRaid just so user friendly and reliable.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- LTS: Long Term Support software version
- LXC: Linux Containers
- NAS: Network-Attached Storage
- NVMe: Non-Volatile Memory Express interface for mass storage
- PCIe: Peripheral Component Interconnect Express
- PSU: Power Supply Unit
- Plex: Brand of media server package
- RAID: Redundant Array of Independent Disks for mass storage
- SATA: Serial AT Attachment interface for mass storage
- SBC: Single-Board Computer
- SSD: Solid State Drive mass storage
Media servers can be pretty demanding, particularly when doing on-the-fly transcoding. Look for refurbished servers; big companies routinely toss perfectly good hardware as part of product lifecycle management. A favorite of mine is called 'techmikeny', although their site and search are pretty janky.
I/O performance needs to be considered along with the number of processing threads, which really comes into play if you have a lot of virtual machines/containers running. For less than $1000 upfront you can get well more than you think you need, and have room to grow. I'd say focus on CPU first; it's easy to add memory and storage later if you buy a big enough box to have extra slots open, but adding CPUs is more of a pain.
Electricity and noise should be a thought too. My largest box is using about 240 watts right now and if you go with actual rack servers they tend to be loud with a half dozen fans running at 6000 rpm or so. If you can stash it somewhere out of your living space all the better.
deleted by creator
The power is only needed for transcoding. Multiple 4K streams should be little more demanding than directly serving up the files to the client machine (like your TV), which consumes very few resources. You should avoid transcoding 4K down to 1080p or 720p by either avoiding 4K content, grabbing only stuff that is directly compatible, or keeping duplicate copies of stuff in 4K and 1080p so that the 1080p file gets transcoded if needed.
Many of us have separate 4K libraries on our servers to prevent any possibility of transcoding them (like for remote streams when you don't have the upload speed to stream 4K directly). For example, I have about a dozen family members using my server remotely, but I don't share my 4K libraries with them since the best upload I can get with Comcast is 12Mbps. In the Plex settings I have everyone limited to 3-4Mbps so that I can handle 3-4 people watching remotely at once, which leads to these streams getting transcoded down to 720p.
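If you want to sanity-check your own connection the same way, the math is just division; the numbers below are from my setup, so plug in your own:

```python
# The remote-streaming math from above: how many simultaneous remote
# viewers fit in a given upload pipe at a given per-stream bitrate cap.
# 12 Mbps up and a 3 Mbps cap are my numbers; swap in yours.
upload_mbps = 12
per_stream_cap_mbps = 3

max_remote_streams = upload_mbps // per_stream_cap_mbps
print(f"{max_remote_streams} remote streams at {per_stream_cap_mbps} Mbps each")
# -> 4 remote streams at 3 Mbps each
```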
deleted by creator
That was just an example of when you might need to transcode multiple streams at once. Typically you shouldn’t need to transcode anything especially if you’re just watching at home. In that case you can have dozens of streams in any resolution running at once without the computer sweating at all.
Yes, exactly. Quick Sync has been on Intel CPUs (i5 and up) since Sandy Bridge, though I've heard it only really became worthwhile around 4th gen.
I would recommend a used SFF PC for docker, and a separate NAS like a Qnap for file storage.
I'd say it's transient rather than constant. I use Emby, and when something is streaming to a Roku in a format that's not native, it ends up using around 80% of the allocated power. I don't use the throttling option though, so it's actually working well ahead of the stream and finishes up a full movie in a few minutes rather than going along in realtime.
So yeah, it could be heavily mitigated, but I'd rather just have it done than hope it's smart enough to plan ahead.
I used to run a Minecraft server with PaperMC on an RPi4, and I would only give the java environment 2G of RAM. It never crashed except when I overloaded it with plugins. The same Pi was also hosting Pihole and Ubiquiti UNMS. As long as you aren’t planning on hosting hundreds of players at the same time, you should be fine with whatever (and assuming you’re doing this at home on residential internet, your network would be the bottleneck anyway). I do recommend PaperMC, it improves the performance and stability of Minecraft and it’s a fork of Spigot so it’s compatible with most plugins.
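For reference, capping the heap like that is just a couple of JVM flags. Here's a minimal launcher sketch; the jar filename is a placeholder for whatever Paper build you download:

```python
#!/usr/bin/env python3
"""Tiny launcher sketch for a PaperMC server capped at 2 GB of heap,
like the setup described above. The jar name is a placeholder --
use whatever Paper build you actually downloaded."""
import subprocess

HEAP = "2G"
JAR = "paper.jar"   # placeholder filename

subprocess.run([
    "java",
    f"-Xms{HEAP}", f"-Xmx{HEAP}",  # fixed heap so Java never grabs more than 2 GB
    "-jar", JAR,
    "--nogui",                     # skip the server GUI; use the console instead
], check=True)
```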
Also /u/ShellMonkey is correct about used server hardware. You can pick up a Dell PowerEdge for about $200.
I've gone with Unraid and consumer-level hardware (an Intel i3-12100 and 16GB of standard DDR4 RAM). The only "server hardware" I have is an LSI HBA card flashed to IT mode so I can connect more HDDs.
I've even used SMR drives in my array; just use a good CMR drive for parity and the biggest SSD you can get for your cache drive, and you'll be good to go.
I started on a raspberry pi and would highly recommend. Great for tons of things to self host.
It’ll run Plex no problem, just forget transcoding.
Hosting a Minecraft server on a Pi is ambitious.
I tried it. Not great.
This is a repost from my suggestion some weeks ago:
I went for the ASRock J5040 board, 16GB of RAM, a 500GB M.2 as the system drive (using a PCIe adapter), 2x4TB IronWolf drives as a ZFS mirror pool, and a 350W power supply, all in the Fractal Node 304 case, for 550 euro altogether.
It runs Proxmox as the hypervisor for VMs and containers, with 6 LXCs running motionEye, Plex, pyLoad with OpenVPN, Syncthing, rclone cloud backup, and openbookshelf.
Typical power usage is around 20W
That said it could also run on PicoPSU
There are a few things I’d consider:
- How many users are going to be on the MC server? MC is pretty notorious for eating RAM, and since most of my home server adventures include multiple VMs, I would look for something with at least 32GB of RAM.
- For Plex (I'm guessing the same will be the case for Jellyfin): how many users do you expect to support concurrently, and how good are you at grabbing formats that your clients can direct play? Most remote Plex users are going to require transcoding because of bandwidth limits, but if most of your local clients can direct play, or you have good upload and don't have to transcode 3+ streams at a time, you're probably fine with just about anything from the last 10 years in terms of CPU.
- Also re: Plex, do you have any idea of your storage requirements? Again, if you're just getting started with < 10TB of storage in mind, you can get by with most computers. (There's a rough sizing sketch after this list.)
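Here's the kind of back-of-envelope math I'd run against those three questions; every number in it is a made-up placeholder, so swap in your own:

```python
# Quick sizing sketch tying the three questions above together.
# All the numbers are example placeholders -- swap in your own player
# counts, VM counts, and library estimate.
mc_heap_gb = 4            # heap for a small vanilla/modded MC server
os_and_services_gb = 4    # OS, Plex/Jellyfin, a few containers
vm_count, gb_per_vm = 2, 4

ram_needed = mc_heap_gb + os_and_services_gb + vm_count * gb_per_vm
print(f"RAM: ~{ram_needed} GB -> 16GB works today, leave slots free to go to 32GB")

movies, gb_per_movie = 300, 8     # rough 1080p average file size
misc_tb = 2                       # TV, music, backups, etc.
storage_tb = movies * gb_per_movie / 1000 + misc_tb
print(f"Storage: ~{storage_tb:.1f} TB -> a couple of big drives is plenty")
```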
Anyway, to give you an idea, I run both of these and quite a few other things besides on a Dell R710 I bought like 4 years ago and never really have any issue.
My suggestion would be to grab basically any old computer lying around, or hit up eBay for some ~$100-$200 used server (be careful about 1Us, or rack mounts in general, if noise is a concern; you can get normal tower-case servers as well) and start by running your services on that. That's probably just about what all of us have done at some point. Honestly, your needs are pretty slim unless you're talking about hosting those services for hundreds of people; if you're just hosting for you and a few friends or immediate family, pretty much any computer will do.
I wanted to keep things very budget conscious, so I have the R710 paired with a Rackable 3016 JBOD bay. The R710 and the Rackable were both about $200, and then I had to buy an HBA card to connect them, so another $90 there. The R710 has 64GB of RAM and I think dual Xeons, plus eight 2.5" slots. The Rackable has sixteen 3.5" slots, so what this means is I basically don't have to decommission drives until they die. I run unRAID on the server, which also means I can easily get a decent level of protection against drive failure, and I don't have to worry about matching up drives and all that. I put a couple of cheap SSDs in the R710 for cache drives and to run things I wanted to be a little more performant (the MC server, though tbh I never really had an issue running it on spinning disks), and this setup has been more or less rock solid for about 5 years now hosting these services for about 10 people.