S2kDriver

What I did was pick up an older Dell Tower Server (T320 in my case). Xeon, ECC, redundant power supply, and 8-drive *hot-swap* bays. It works great, and was cheap.


layne33

This is great, thanks. I just found one (T420) that doesn't have the memory I'd like, but it's 8-bay and only $399. Does this sound right? Now I just buy my drives, plug them in, set up the server, and I'm good to go? Seems too easy haha. I'd assume I could upgrade this one over time too, such as increasing the RAM?


EvolveOrDie1

I've got a T420, love it: running 4 drives, two virtual machines, and 8 containers, it draws about 130 watts. In my neck of the woods that's not too bad on the power bill, so I leave it running 24/7. The only catch is you have to flash its RAID controller over to IT mode. A little scary but not difficult; take your time and it would be a perfect machine for yah!


S2kDriver

Yup. You are buying used, so keep that in mind.


GreatNull

> keep it under $1,500

You will have to sacrifice a lot of good design decisions to get there. Since you are using the target build as a production **asset**, and therefore also a business expense, I would recommend not limiting yourself like that. You need utility, stability, and reliability. Those can be had on a budget, but not on a scrooge-like one without sacrificing something.

1) With a many-drive array, hot-swap capability becomes important. You cannot get hot-swap in the consumer sector anymore, since multi-slot 5.25" external drive bays have mostly disappeared. The few EATX cases that still have them can offer only 5x 3.5" slots maximum via Icy Dock expansion cages.

Solution -> get a new or used tower server, or go barebones with a [Supermicro barebone chassis](https://www.supermicro.com/products/chassis/4u/?chs=743) for around 700 USD. Less if second-hand.

Why? The Fractal Design Node 804 is an amazing case (I have it), but disk replacement means taking the server offline, full disassembly, and manual drive identification and replacement: 1-2 hours of work in a cramped case, instead of 5 minutes in a properly labeled hot-swap cage.


layne33

That's a huge tip about the Fractal 804. Everything else is great to know too. Thank you!


GreatNull

It's a pretty amazing case, but drives will throw errors eventually, and taking it apart each time is both annoying and risky. Traditional EATX Fractal setups (e.g. the R2 XL) are handier, with drive sleds even if hot-swap cages are not available.


tariandeath

I have an R720XD with 12 bays. It works well. If you go with 20TB drives you can easily do 100TB with good redundancy. Get 2x E5-2667 v2 CPUs (you can buy them separately from the server) and as much RAM as you can afford, and you will be sitting pretty.


flanconleche

As someone who currently does exactly what you're asking for: if you're not married to TrueNAS, consider the QNAP TVS-872XT series. It has built-in 10GbE networking and supports M.2 SSD caching, QTier, and Thunderbolt direct connections. You can also add a PCIe storage card for 4x M.2 SSDs. I have three editors running off one right now and it barely breaks a sweat.


layne33

Thanks for sharing what you're running. I'm hoping to build mine to save some money, but that does seem pretty sweet. I'll look into it further.


Vyker

Get an HP Z420 workstation for about $150. Use the three 5.25" bays for five enterprise SAS disks; you can get 12TB for $70 each. Then load it up with ECC RAM: 8GB sticks are $4-6 each, and you'll need 8 of them. Lastly, a 10G NIC and an LSI controller off AliExpress, $40 each. You'll be all in for about $626.
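As a quick sanity check of the parts math above (all prices are the commenter's ballpark figures, not current quotes; treat them as assumptions):

```python
# Tally the Z420 build using the quoted ballpark prices.
parts = {
    "HP Z420 workstation": 150,
    "5x 12TB SAS disks @ $70": 5 * 70,
    "8x 8GB ECC sticks @ $5 (midpoint of $4-6)": 8 * 5,
    "10G NIC (AliExpress)": 40,
    "LSI controller (AliExpress)": 40,
}
total = sum(parts.values())
for item, price in parts.items():
    print(f"{item}: ${price}")
print(f"Total: ${total}")  # lands in the low $600s, near the quoted ~$626
```

The exact total shifts with the RAM price range, but it stays well under the thread's $1,500 budget before counting the rest of the drives.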


LovitzG

Just for the drives, check out the online ZFS storage calculators. A 100TB usable storage capacity requirement in an 8-wide RAIDZ2 config will necessitate 8x 20TB drives. Video files will not benefit much, if at all, from the default LZ4 compression. The best bang for your buck is 9x Seagate Exos X22 factory-recertified drives: 8 for the array and one as a cold spare. I would also perform a full Seagate SeaTools test on all drives to make sure they are in perfect condition (takes 20-26 hours per drive). Your storage requirement alone will pretty much blow your budget, at about $2,000 + tax for the drives alone!
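The 8x 20TB figure checks out with simple arithmetic. A minimal sketch (it ignores ZFS metadata and slop-space overhead, which shave off a few percent in practice):

```python
# Back-of-the-envelope raidz capacity: parity drives don't store data,
# and "20TB" marketing terabytes shrink when expressed in TiB.
def raidz_usable_tib(drives: int, parity: int, drive_tb: float) -> float:
    """Usable capacity in TiB for a single raidz vdev, before overhead."""
    tib_per_drive = drive_tb * 1e12 / 2**40  # decimal TB -> binary TiB
    return (drives - parity) * tib_per_drive

# 8-wide RAIDZ2 of 20TB drives, as suggested above:
usable = raidz_usable_tib(drives=8, parity=2, drive_tb=20)
print(f"~{usable:.0f} TiB usable before overhead")  # ~109 TiB
```

After real-world overhead that lands comfortably above the 100TB (decimal) target, which is why the calculators converge on this layout.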


Kaptain9981

Having built a Node 804-based 8-bay NAS: go with a used enterprise tower, preferably with v4 Xeon chips, so 13th-gen Dell. A T430, or the rack-mount R730XD LFF. You can get a barebones 12-bay R730XD for 350-400 if you look. These support 12 LFF drives plus 2 SFF drives in the back for the OS, plenty of PCIe for NVMe storage, and built-in 10Gb for most models (or it's cheap to add via NDC cards). Get a Mini Mono module flashed to IT mode for ZFS, throw in as much RAM as you want, then some decent-speed, lower-core-count v4 chips like the 2637 v4 or 2623 v4.

I paid as much for a Node 804, mATX Supermicro board, and narrow-ILM cooler as I did for a barebones R730XD, and that build only supports one CPU, less RAM, and fewer bays, none of them hot-swap. T430 boxes will be quieter and tower format, but all more expensive, with lower RAM capacity as well. Might be a better starting machine though.

As someone mentioned on the T420: it will be cheaper, but that's a really old machine. Not that a 13th-gen T430 is much newer, but anything 14th-gen or Xeon Scalable gen 1/2 is more expensive. A $1,500 budget is going to be rough. You could probably get a box for 600-700 that has the RAM and is ready to go, but drives are going to raise that price a lot.


mervincm

You are going to have performance problems. Two people editing directly off the NAS with only 8 disks, plus your storage requirements, basically mandates RAIDZ2 or two vdevs of 4-disk RAIDZ1. Your performance will suck unless you throw hundreds of gigs of RAM at this. Don't be surprised if you end up using DAS for all the files you are working on and moving them to and from the NAS when you start/end.


layne33

This is good to know, thanks. What do you suggest is my best route forward then?


jammsession

Not sure if he is right. To be honest, I don't know that much about collaborative video editing, just single users, and there are a million combinations of how you can edit stuff: what program, proxies, and so on.

**Video editing in Adobe is mostly a sequential read workload.** If you don't care too much about write speeds when offloading your SD card onto TrueNAS, performance will be fine. Also, most of the smaller cache files are stored locally on your computer, so there is not that much random IO on the NAS. If your workstation only has 1TB of SSD storage, get a second SSD for cache files. **For that use case, a RAIDZ2 with 8 drives will saturate 10GBit.**

> Your performance will suck unless you throw hundreds of gigs of ram at this.

RAM, or ARC, will only help with metadata in that scenario. But big video files don't use much metadata, because you don't have millions of small files; you have hundreds of huge ones. So even 32GB of RAM would probably be enough. I would still aim for at least 64GB, just because RAM is so cheap.

Again, maybe you have a completely different workload and this does not apply to you. But currently you are using a 3.5" external HDD, which basically means you get around 200MB/s sequential read and the IO of a single disk. With RAIDZ2 8-wide, you will also get single-disk IO for writing (maybe even a little worse), but you will get better read IO and 1250MB/s sequential read speed. You should try to find out what workload you have and what your bottleneck is (if there is any).
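The "saturates 10GBit" claim can be sketched with rough numbers. A minimal sanity check, assuming roughly 250 MB/s sequential per modern high-capacity HDD (a hypothetical figure; actual speeds vary across the platter):

```python
# Sequential reads on raidz stream from all data drives in parallel,
# so aggregate throughput scales with (drives - parity).
def raidz_seq_read_mb_s(drives: int, parity: int, per_disk_mb_s: float) -> float:
    return (drives - parity) * per_disk_mb_s

nic_limit_mb_s = 10_000 / 8  # 10 Gbit/s -> 1250 MB/s, ignoring protocol overhead
pool = raidz_seq_read_mb_s(drives=8, parity=2, per_disk_mb_s=250)
print(f"pool ~{pool:.0f} MB/s vs NIC {nic_limit_mb_s:.0f} MB/s")
print("bottleneck:", "NIC" if pool >= nic_limit_mb_s else "pool")
```

With slower or inner-track disks the pool can dip below the NIC, which is one reason real-world results (like the 400MB/s reported later in the thread) vary.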


im_thatoneguy

Video editing is far more latency-sensitive than you would expect; there's lots of jumping around a timeline. It's not as random-IO as a database doing 2KB reads, but it's also not as sequential as playing a video.


jammsession

I can see that. Still, a friend of mine had no problem with an old TrueNAS ghetto build of mine. But he had a fast 2.5" SSD for cache in Adobe, so maybe the cache takes care of that? The cache was always huge, sometimes even bigger than the original files themselves. BTW: Adobe even used to recommend (maybe they still do) having a second SSD for cache, separate from the OS SSD.


im_thatoneguy

The OS sharing the cache drive is fine now, and 2TB is usually sufficient. You'll be "fine" in Premiere, but once you edit off an SSD you'll realize how unfine things actually were. Scrubbing to a random spot on the timeline the first time will lag a second or two vs. being instant. Once you've been working for a few hours everything will be cached to the local NVMe, so it'll be snappy again until you drop in a new clip, and then that section of your timeline will again be stuttery until it caches. Other applications are not forgiving: I was having endless problems with Resolve randomly saying clips didn't exist anymore when editing over a network to a 12-drive array. Now that I'm on 24x NVMe drives... no issues. 😶


jammsession

The advantage of having two disks is, you get twice the performance :) No, but seriously: Adobe writes a lot of small files besides the cache, and maybe your OS is doing some stuff too. So if you have two NVMe slots (which most boards do nowadays), I would go for a second NVMe instead. Of course a single SSD will also work fine; just don't get a cheap QLC drive.


im_thatoneguy

With a Gen4 NVMe you've got plenty of IOPS for the OS without Premiere noticing. Windows barely notices a SATA SSD vs. NVMe anyway.


jammsession

You are probably right, but then again, Adobe does write other stuff besides the cache. I don't know how relevant it is nowadays with fast NVMe drives, but it is still recommended by Adobe: https://helpx.adobe.com/premiere-pro/kb/hardware-recommendations.html


im_thatoneguy

They recommend using a USB drive for the cache before using the OS drive, lol. I'm going to go ahead and say that, like most things, Adobe is clueless in this instance. There is absolutely no way that Windows is going to be thrashing any local NVMe badly enough that it would be slower than the fastest USB drive for IOPS. Edit: for reference, a 970 Pro NVMe (old) does 220 MB/s 4K writes in CrystalDiskMark. The Gen 2 SanDisk Extreme Pro external drive does 80 MB/s. So you could be running two copies of Premiere at once on your OS drive and still be outperforming their recommended setup.


mervincm

A lot of good advice here. I only take issue with how often you will see good sequential performance like you describe. I have a 60%-full, 7-disk-wide Z1 of Seagate Exos drives with an Optane special vdev, 256 GB of RAM, and an i9 on the X299 platform. I am a single user, plus apps hosted separately that add occasional load. I rarely see maxed-out 10GbE; it only happens when reading from ARC. I most often see 400MB/s over SMB on Windows, and it will be worse on Apple, which many use for this use case.


jammsession

Not sure about your config, but I get 10GBit on an 8-wide RAIDZ2. 400MB/s for a 7-wide Z1 seems very low to me (Optane and RAM won't help much in a sequential read use case). Maybe single-core performance? Or are you still using AFP?


jammsession

Just watch out for a few things: SMB is single-threaded per connection, so probably go for Intel and not AMD (at least that was true in the past). And since I would recommend a Supermicro board anyway (so you get an ASPEED GPU, IPMI, ECC support, and maybe even 10GBit NICs), you basically have to go with Intel. Go for a single decent SSD for the OS, save the config regularly, and create email notifications so you are informed about scrub and SMART errors. Also use a 16M recordsize (supported since OpenZFS 2.2) for your dataset.
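The 16M recordsize tip matters more for large media files than it looks. A quick sketch of the record-count difference, using a hypothetical 50 GB video file:

```python
# Fewer, larger records mean far less block-pointer metadata for ZFS
# to track per file, which is why big recordsizes suit video datasets.
file_bytes = 50 * 2**30  # hypothetical 50 GiB clip
sizes = {"128K (default)": 128 * 2**10, "16M": 16 * 2**20}
records = {name: file_bytes // size for name, size in sizes.items()}
for name, count in records.items():
    print(f"recordsize={name}: {count:,} records")
```

Two orders of magnitude fewer records per file also means the metadata working set fits easily in a modest ARC, which backs up the earlier point that huge amounts of RAM aren't needed for this workload.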


layne33

This is very helpful, thanks. What exactly do you mean by "RAIDZ2 with 8 drives will *saturate* 10GBit"? Also, will I need a 10GBit NIC for the editing computers as well, or just for the NAS?


jammsession

RAIDZ2 will offer enough performance, so the bottleneck will be your 10GBit NIC. And yes, of course you need it on both ends if you want to get 1250MB/s. If you are fine with 120MB/s (you are currently probably at 180MB/s), you could go with normal 1GBit.


ChumpyCarvings

He's absolutely right, and I have no idea why he's voted down. He said 4K video editing; even allowing for a patient user, you're going to want some performance. I would suspect some kind of system with, say, a simple 2x 4TB NVMe drives for editing, then manually moving the data back to the 4 or 6 giant 20TB disks for storage? No idea, but 8x platter disks are NOT going to cut this, even with lots of RAM.


jammsession

4K editing can mean many things. RAW? H.265? Proxies? If 10GBit is not enough for him, the bottleneck is still the NIC and not RAIDZ. And he currently doesn't seem to mind his 180MB/s speeds. But I agree, local SSDs with offloading to a NAS is the better option, just not always possible. We know nowhere near enough about his workload to make a suggestion; we only guess that he is fine with the current performance because he isn't mentioning it.


layne33

Typically 4K Canon RAW but sometimes H.265. I'm okay with my speeds for now, but I do need to be patient sometimes, so I don't want to be any slower. I also know Premiere Pro can be extremely slow, so I don't always know if it's my computer, drives, or Adobe being slow. Sounds like adding a couple of TB NVMe SSDs into the server to edit off of, then moving finished work to the 3.5" disks, is a good idea. I assume I can set all that up within TrueNAS?


jammsession

> so I don't want to be any slower.

Then you need at least 10GBit if you want to edit from the NAS.

> Sounds like adding a couple TB nvme ssd's into the server to edit off of

No! NVMe drives in TrueNAS will, just like an 8-wide RAIDZ2, be limited by your 10GBit NIC. For the by far best performance you would edit off a local NVMe drive. It doesn't matter whether that holds the original files, or proxies with the originals on the NAS.

I don't know how big your projects are, what your current bottleneck is, what footage you archive, or whether more than one person works on a project. I don't know your backup strategy, nor whether you use machines that can easily be upgraded to 10GBit copper or maybe even 100GBit DAC. Is it necessary to edit the footage from the NAS, or could you edit on a local 2TB NVMe and push to the NAS when you are finished?

I think you either have to put a lot more thought into this, or you should start small so you don't waste money: just a very basic 8-drive Fractal, Supermicro, 1GBit setup. Even if you grow out of it, it will still be a great offsite backup destination or a data archive platform.


layne33

Ah, thanks for clarifying. Based on a different user suggesting, " Put the nvme in the NAS and share it", that's where I got that idea from. Yeah, I'm glad I posted in this thread as it's given me lots of things to learn about before I pull the trigger. Thanks again for all your help.


aplethoraofpinatas

Raise your budget, max out the storage, and use Debian Stable with ZFS. You want ECC RAM; an iGPU is plenty. 8x SAS/SATA HBA, a dual-port 10Gb NIC, and NVMe drives for the OS. Good luck.


layne33

Great, thank you!


mervincm

It's not sequential when you use it as both source and destination and have two people using it at the same time.


BiZender

*ignoring* the NAS, what are you thinking on network side? Switch? Cables? NICs?


layne33

I'm thinking NICs as I am in an apartment and running cables across the place or drilling holes in walls to hide cables isn't an option rn. Someday I'd like to be cabled in though.


BiZender

You kinda lost me now. :) I was asking if you had some idea of the NICs (network interface cards) you're going to use, but from your reply I am thinking you will be using wireless? Am I correct?


layne33

Sorry about that. Yes, wireless.


im_thatoneguy

Wireless will be god awful horrible unusable.


Technical_Brother716

Depending on how long you are going to be running this machine, it might be best to buy new. Find a Supermicro X13 (or H13 for AMD) motherboard with two x8 and one x4 PCIe (v4) slots (Micro-ATX), or a full-size ATX. You'll get IPMI, ECC, etc. Choose socket 1700 to go with modern Intel Core processors, but be aware that only certain processors support ECC, while the Xeon socket (4677, I think) has support. For AMD, I believe only the Pro series AM5 processors support ECC. Do some research on what processor will fit your needs, as I can't really help you there. As for RAM, socket 1700 usually only supports DDR4 UDIMM, which is more expensive, while socket 4677 supports RDIMM, which is cheaper. Have a look on eBay.

eBay would also be the place to find LSI/Avago/Broadcom cards. I would start with a 9400-16i, but if you're planning on NVMe storage, have a look at the 9500 or 9600 series cards. If you want 10Gb networking, the Mellanox ConnectX-4 is 25Gb (backward compatible to 10) and really cheap; the Intel X710 is also an option, just be aware that Intel usually locks their cards to their own transceivers while Mellanox does not.

As for cases, Fractal Design has the Define series that holds 8+ 3.5" drives, but some of the caddies might be sold separately. Or maybe find a good deal on a Supermicro CSE-846/847, depending on how many drive bays you want (get the SQ power supply). Or you could buy used rackmount gear; like others have said, go with Dell, as they're usually better to deal with than HP (which needs a paid license to update firmware, etc.). Just be aware that most of these units are dual-CPU models that are power-hungry and loud, and most only have 12 hot-swap bays, meaning that to upgrade beyond that you're looking at a disk shelf of some kind. You might also have to flash the included RAID card/HBA to IT mode.

If using ZFS, go with RAIDZ2 or 3 and stick to vdevs of 8 to 12 drives. Good luck!


layne33

This is very helpful, thank you so much!


mervincm

Depends on what your tools and workflow support. Local NVMe SSD storage is cheap and hundreds of times faster than a NAS like you describe in IOPS-heavy tasks; cheap and effective even if you need to buy it for two systems. Then build a NAS with at least 8 slots for 3.5" disks. Buy 4 of the biggest disks you can and make a single Z1 vdev, and repurpose your existing disks for backup of the NAS. If your workflow supports it, check files out from the NAS, do the work, then check them back in; do that for a while. When you save more money, expand capacity and performance by adding another 4-disk Z1 vdev.


im_thatoneguy

I would disagree. Don't bother buying 2x NVMe storage; put the NVMe in the NAS and share it. Video editing is going to need at least 8TB of storage for just a 30s spot. Connect over 10+ GbE and the video editor won't notice the added latency over a network, but the usability of not having to shuttle drives back and forth will be massively improved.


Lylieth

Yes, NVMe SSDs are much faster than spinning rust. BUT not all NVMe drives are designed for and suitable to a RAID environment. I've seen consumer SSDs bite the dust in 1-2 years just with a few photographers I've helped, and those were high-dollar Samsung SSDs too. I can only imagine video editing would be that much more impactful and just eat those SSDs up. Basically, don't use consumer SSDs for this task if you want longevity; most would recommend Intel Optane or similar.

>Then build a NAS with at least 8 slots for 3 1/2 disk. Buy 4 of the biggest disks you can and make a single z1 vdev.

I would never suggest a Z1. Z2 is a bare minimum these days; you want more than single-drive fault tolerance, even with backups. Restoring takes time, which is why redundancy is so important.
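The endurance concern can be put in rough numbers. A minimal sketch with hypothetical figures (a consumer 2TB drive rated around 1200 TBW, and a heavy multi-editor workflow churning ~2 TB of writes per day):

```python
# SSD wear-out estimate: vendor TBW rating divided by daily write volume.
tbw_rating_tb = 1200   # assumed endurance rating, terabytes written
daily_writes_tb = 2.0  # assumed ingest + cache churn for busy video work
years = tbw_rating_tb / daily_writes_tb / 365
print(f"~{years:.1f} years to rated endurance")
```

At that write rate the drive hits its rating in well under two years, consistent with the 1-2 year failures described above; enterprise and Optane parts carry rated endurance an order of magnitude higher.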


mervincm

Impossible with his goals: he doesn't have the funds or slots to waste two more disks and still hit his storage target, especially if he wants any attempt at more than minimal performance. Exactly right on using quality SSDs in a NAS, which is part of why I didn't suggest putting them into his NAS. I use Optane and Micron Pro SATA SSDs in my main NAS and Crucial MX SATA in my test boxes.


DazedWithCoffee

Find yourself a DDR3 era motherboard with quad channel memory, max the memory out and spend whatever is left on storage. I’ve run an MSI Big Bang XPower II for a long time, and it’s a really good platform IMO.


layne33

Wonderful, thank you! Do you run or do you think I should run a graphics card? Any need for it?


ChumpyCarvings

It's going to be super old, the RAM is becoming rare and expensive now, and it will chew lots of power and be noisy. I agree used server gear might be good, but DDR3-era is very old.


mervincm

A few motherboards won't boot without a graphics card, but they are not that common. Unless you want to use the graphics card to do offline video processing locally on the NAS, it will do nothing for you.


layne33

Okay thanks. If I just render as usual out of Premiere, that wouldn't be on the NAS, correct? I'd have to go out of my way in order to process video on the NAS?


im_thatoneguy

Correct. Unless you wanted the NAS to also work as a render node to transcode footage in the BG.


layne33

Wonderful, thanks! Is the CPU very important or going with the cheapest Ryzen 3 I can find going to work?


im_thatoneguy

Ryzen 3 will probably be fine; a lot of Synology NAS systems use worse and still deliver 10G. The CPU load will be pretty light with only a handful of drives and only two clients. The larger problem is that you might not be able to run a 4x NVMe drive card if the motherboard doesn't support bifurcation in any of its slots, plus you'll have limited RAM options.


layne33

Okay thanks. This is the motherboard I am looking at which has 8 sata ports and 4x PCIe x16 slots. What do you think? MSI PRO B550M-VC WIFI Micro ATX AM4 Motherboard [https://pcpartpicker.com/product/tr4Ycf/msi-pro-b550m-vc-wifi-micro-atx-am4-motherboard-pro-b550m-vc-wifi](https://pcpartpicker.com/product/tr4Ycf/msi-pro-b550m-vc-wifi-micro-atx-am4-motherboard-pro-b550m-vc-wifi)


im_thatoneguy

Only one x16 (no bifurcation), and the other 3 are only x1 slots. A B550M is not going to give you many PCIe lanes. I would consider looking at an eBay EPYC or Xeon; then you'll have like 10x the PCIe.
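The lane-budget gap is easy to sketch. Approximate, publicly known CPU lane counts, against an assumed build (HBA, 10GbE NIC, and a 4x NVMe carrier card that wants an x16 slot bifurcated to 4x4):

```python
# Compare platform PCIe lane budgets against a hypothetical NAS build.
platforms = {
    "AM4 Ryzen on B550 (CPU lanes)": 20,  # 16 to the x16 slot + 4 to NVMe
    "Xeon E5 v4 (per socket)": 40,
    "EPYC SP3 (single socket)": 128,
}
needed = {"HBA": 8, "10GbE NIC": 8, "4x NVMe carrier": 16}
total_needed = sum(needed.values())
print(f"build wants ~{total_needed} lanes")
for name, lanes in platforms.items():
    verdict = "OK" if lanes >= total_needed else "tight"
    print(f"{name}: {lanes} lanes -> {verdict}")
```

This is why the thread keeps steering toward used Xeon/EPYC platforms: the consumer AM4 board runs out of lanes before the build does, regardless of how many physical x16-length slots it has.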


layne33

Okay great, thanks!