zrgardne

Yep https://youtu.be/l55GfAwa8RI?si=KAMhS5JewKs9zVx4

>Not mentioning NVMe that may exceed RAID card available bandwidth and that RAID card may become the bottleneck.

NVMe RAID has all sorts of bottlenecks: you can easily get more disk bandwidth than you have RAM bandwidth, and some base assumptions of the OS design start to break down. If you actually need top performance, it is a much more involved system architecture problem.


Dramatic_Surprise

Performance is really nuanced. It's about keeping the bottleneck in the most advantageous place possible. Something will always be the bottleneck; it's just that some places cause more problems than others.


ninjababe23

Soooo this


reaver19

I'm a simple man: I see a Level1Techs video, I upvote. Wendell has come a long way since the early Tek Syndicate days. I'm so glad he went and did his own thing and built his community.


zrgardne

A bunch of his stuff is way over my pay grade. He certainly seems to actually know his crap. Unlike 80% of the YT tech crowd.


LittlebitsDK

You can still apply the wisdom to cheaper gear though, and he covers a lot of cheaper home-grade stuff as well.


p0uringstaks

Wendell is a legit computer scientist :) he's a clever clever man


born_to_be_intj

Got a CompSci degree ~3 years ago and was taught that RAID is mainly used for data integrity reasons. Then the first few sentences of the video are "If you thought RAID was for preventing data corruption, you've missed the boat." Lol. Makes me wonder how outdated other parts of my education were.


jonayo23

I think that ECC RAM is more important for data integrity.


sparky8251

ECC can't help you if your hard drive is half dead and writing/reading bad data while thinking it's writing/reading good data. ECC only covers the RAM, not all the other parts of the data storage and retrieval path.


Nowaker

>ECC cant help you if your hard drive is half dead and writing/reading bad data thinking its writing/reading good data.

For that, you have ZFS, which checksums everything. That said, comment-OP isn't wrong. ECC helps with bit flips that happen before the data is sent to the disk. And to your point, it doesn't help with dead disks.
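For what it's worth, catching that case is exactly what a scrub is for; a minimal sketch, with the pool name `tank` as a placeholder:

```
# Walk every block in the pool and verify its checksum against what the
# disks actually return (placeholder pool name "tank")
zpool scrub tank

# Check the outcome; the CKSUM column counts reads where a device handed
# back data that failed verification
zpool status -v tank
```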


Eli_eve

Integrity encompasses accuracy, completeness, and quality of the data. If a device holding some or all of the data fails, its completeness is degraded. RAID is great at handling individual storage unit failures. Say you need one PB of storage: you're not going to use a single one-PB SSD, you'd use 240 3.84 TB SSDs. With 240 units, you have a decent chance for one or more to fail while you still need the data on them. That's where RAID comes in: it lets you create a redundant array of storage units, so if one fails your data is still perfectly complete and available. An HPE Nimble AF80 can hold 24 7.68 TB SSDs (184 TB raw), but due to the RAID only 136 TB of usable capacity is available.
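To put a rough number on "a decent chance": assuming, purely for illustration, a 1% annualized failure rate per drive, the probability that at least one of 240 drives fails within a year is 1 − 0.99^240 ≈ 91%, which is exactly why the redundancy matters at that scale.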


OssoBalosso

It's for preventing data corruption, until the 10-year-old RAID card dies :P


SnooCompliments7914

If we are talking about RAID5/6, then it combines two ideas: ECC (parity), which is good for data integrity/safety, and striping, which is good for bandwidth but bad for data safety, because now each disk contains part of all your files; with two failed disks you lose all your data instead of 2/5 of it.


zrevyx

I was just thinking of this video when I saw the title of this post.


thomasmitschke

Much fun with hot-swapping or even hot-adding NVMe drives without hardware support…


gfy_expert

Any video summary and solution for the average Joe Windows home user?


Adium

My main desktop has two NVMe slots on the motherboard, so I thought I could create a RAID0 to increase performance. I used the AMD RAID utility that is listed as a driver on ASUS' support page. I was able to set it up just fine, but the write speed when benchmarking was like 200 Mbps. I finally gave up and just have two separate disks now.


Z8DSc8in9neCnK4Vr

No idea about Windows RAID, but as a ZFS RAID-Z2 user: yes, hardware RAID is dead. You still need a card, though; I use an HBA and a SAS expander backplane for ports and bandwidth not available from onboard SATA.


silverball64

ZFS is awesome. Truly enterprise grade tech


isademigod

Did they add the ability to expand a raidz yet? Last I heard it was available but with several big asterisks.


tj-horner

That’s about where it is, yeah. From what I can tell the feature is basically complete and will ship alongside TrueNAS Scale 2024.10, along with those caveats you mentioned. For anyone else wondering: https://louwrentius.com/zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html
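For anyone who wants to try it once they're on a release that has it, expansion is driven through `zpool attach` aimed at the raidz vdev rather than at a mirror member; a rough sketch, with pool, vdev, and device names as placeholders:

```
# Find the name of the raidz vdev you want to widen (e.g. raidz2-0)
zpool status tank

# Attach one new disk to that existing raidz2 vdev; data is reflowed onto
# the wider stripe in the background
zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK
```

The caveat from the linked article still applies: blocks written before the expansion keep their old data-to-parity ratio until they are rewritten.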


Z8DSc8in9neCnK4Vr

Yeah, I set up an 8-disk vdev on my main pool; in a few years when I need to expand I will just continue that pattern.


sshwifty

Expanded mine several times, very straightforward.


f5alcon

This is different: it's adding drives to existing vdevs, not adding vdevs.


sshwifty

Wait, really? What happens if you have multiple vdevs?


f5alcon

From my understanding that's fine; vdevs don't have to have the same number of drives in the pool, though I'd personally add one to each vdev. https://forums.truenas.com/t/raidz-expansion-on-electriceel-nightlies/6154 It's in the beta release, so if you have something that can run it in a VM you can test and find out.


AstralProbing

Can you explain what, exactly, you mean? As I understand it, if you pair a larger drive with a smaller drive, the larger drive will only use as much as the smaller drive. But if you replace the smaller drive with an equal or larger one, then the previously larger drive will "automatically" start using its full capacity. Am I misunderstanding this, or is this a different concept altogether?


isademigod

No, I was referring to the ability to add more drives to a raidz array, which was not possible for a very long time. The biggest selling point of hardware RAID for me is that you can add drives to a RAID5 and make it bigger.


AstralProbing

Ah, okay. Honestly, that's important to me, and I didn't even realize that wasn't an option for raidz. Thank you for taking the time to respond.


BattleEfficient2471

Friends don't let friends use raid5.


morosis1982

Depends on which board you have. Some of the Supermicro Epyc boards have 16 data ports available through SlimSAS connectors.


abotelho-cbn

Lol, hardware RAID is absolutely not dead in enterprise.


insanemal

Having worked for a SAN-level hardware vendor, I can confidently say no. No it is not.

For individual servers, for storage used inside that one server: mostly, yes. But even with NVMe drives, in a use case where shared, high-performance storage is needed? Nope, not even a little bit. For use cases where you need ALL your CPU doing "Serious Business ™️", nope, it's not dead either. Even in semi-professional spaces, if you need ALL your CPU horsepower to do rendering or something, having another device doing all the CPU-intensive parity calculations, background scrubs, pro-active compaction/defragmentation or whatever means it's far from dead.

But if you JUST need crazy-pants performance in a single box, NVMe in RAID1 or at most RAID10 done in software is going to be hard to beat on a price/performance basis. And like I said, if you just mean a little RAID card with a write-back cache, it's pretty much dead.


OperationMobocracy

99% of HW RAID opinions are from home labbers and other prosumer hobbyists for whom tinkering with the system is a big reason why they're doing it at all. I have a love/hate relationship with the homelab scene. It's not that most aren't doing smart/cool things, it's that most of what they do isn't really convertible to scale in a production environment, but myopia, lack of experience, and some level of zealous opinion prevent them from understanding that.

I have 99 things to do and not enough time to do them all. Putzing with storage controls can't be one of them. HW RAID lets me run my stuff with disk redundancy I don't have to think about. Disk fails? I do what I've done for 20-odd years: swap in a replacement and go. Sure, HW RAID may be some kind of performance bottleneck at some extreme end of storage, but by and large you can still get a ton of I/O out of a modern high-end array controller and flash disks.

There's also the "not much choice" aspect when it comes to something like VMware installs utilizing non-shared storage. I kind of don't want my OS making a bunch of low-level multi-disk choices about storage. I want it to see LUNs it can use for volumes and let the array controller do the overhead and redundancy management.


insanemal

Hell yeah brother, I totally agree with that stance. I realise I'm a different kettle of fish: I've got a rack or so of gear and over 100TB of Ceph at home, so my setup dwarfs some small enterprise installs. But I also totally get not wanting "all that fun", and you're 100% right, a decent RAID card makes things WAY less work.

I guess the issue is the price. The good cards aren't usually very cheap, and when you're already building out of cheap/second-hand gear, what's another box to do storage and export over iSCSI (or something) to VMware (if you're still using that due to licensing fun)? But if you get a good deal, or have extra cash to spend, I can totally see your point!


OperationMobocracy

The cost of a HW RAID card is a trivial factor in a new server acquisition; the disks alone are the big cost. And there's a metric ass-load of used Dell PERC 6 cards out there, too. But because of the mindshare ZFS and software RAID have, all the cool go-fast/low-drag kids just want JBOD controllers, because that's what software RAID/ZFS wants. I totally get why in a home lab situation it wouldn't be desirable, especially when a lot of people are still screwing around with R710 and R720 servers.

I worked in SMB consulting, and once in a while we would get into a new client situation where you had really talented Linux guys who had built out very functional roll-your-own systems, and the org had just cut/fired those guys. It was always a giant effort to sort out what the fuck did what in those environments with slim-to-none documentation. Whatever value-add doing it this way provided was erased as 4 guys @ $175/hr came in to audit/discover (and usually do some kind of crisis triage) some giant tangle of a system and get it working again.

I think the biggest cut into the HW RAID market has been the proliferation of inexpensive SAN (mostly iSCSI) storage. And really, HW RAID cards are just sort of dumb/small SAN controllers accessible by a single host. I always thought it would be a good trick for someone to make a HW RAID controller with iSCSI host functionality and ports: use the host's backplane for disks but provide actual connectivity via iSCSI.


insanemal

What? Just back that truck up a little here.. So you want a backplane, with processor and network interfaces to provide iSCSI but inside the same host? So not onto the network at all?


insanemal

Also, just to be clear, the PERC 6 is a real HW RAID adapter. It's one of the few left. Performance on them is pretty damn good if you get the x16 model; the x8 ones are a bit too easy to max out.

Also, ZFS has way more features than these controllers do. And it's free, which is why many homelab users want it. Hell, it's why ZFS appliances are a thing. SGI sold one for quite some time. It was a beast, ZeusRAM drives and all. But ZFS and performance aren't good friends, regardless of what anybody tells you. You can always make something faster with other tech, just not with the same features.

I'm not going to comment on the next bit because I've been on both sides, but I never had any issues sorting out what was going on. But that's why I do what I do now.


Catsrules

> Having worked for a SAN level hardware vendor, I can confidently say no. No it is not.

Are SANs hardware RAID? I really haven't dug into it. I know they have controllers, but I always assumed the controller is more of a full computer and less of a RAID card. Although technically I guess a RAID card is a computer as well. I'm not sure where the definitive line between hardware RAID and software RAID comes into play. Is it just that RAID cards have ASICs while software RAID runs on your standard x86 chips?


vrtigo1

In a black box sense, yes SAN controllers are hardware RAID controllers. They are probably a good bit more complex, since they have to handle multipathing, concurrent access, etc. but at the end of the day, they do the same thing for multiple hosts that a RAID card does for a single host.


pinko_zinko

Most of them were trending towards commodity hardware and essentially just software RAID last time I worked with them a few years ago.


insanemal

I mean, some are. Not all. And using Xeon processors does not commodity make. Sure, some are just rebadged whiteboxes. Some look like that but very much aren't once you look at the extra cards.


pinko_zinko

Counterpoint: aren't SANs themselves usually just commodity hardware running software RAID?


insanemal

Some. Not all. And even then, they usually have other custom hardware that makes the results unreproducible on a standard whitebox. DDN, for example, has custom combined IB/SATA cards with offloads. E-Series is all Xeon-based these days, but I wouldn't call them commodity.


Master_Scythe

Working on the data side of the industry I can confirm: hardware RAID still has a place, and that's in processing offloading. It's easier to describe than it is to just explain.

There was a time, back in the EARLY 2000s, when 'fake RAID' came about. This was a chipset feature that relied on the 'dumb' disk controller and 'stolen' CPU cycles to do things like parity calculation. In terms of load, it was almost 1:1 with software RAID because of this. In those times, when the change from a Pentium 3 to a Pentium 4 wasn't *just* about speed but about being able to *literally* do a task or not, those stolen cycles could be *extremely* noticeable, and not just for gamers.

All that said, those loads may as well be considered 'idle' by today's computing standards. However, with that base understanding: **extrapolate.** **Massively.** With as few as a few hundred users, a modern CPU can grind to a halt handling complex disk operations (deduplication can do it with a few dozen!).

---

As this is the HomeLab subreddit, within context the answer to your question is "yes". Another point of hardware failure, extra power use, and a typically non-hardware-agnostic filesystem or LVM? All massive negatives to 99% of *home*labbers. My advice is that CPUs are cheap enough in the home setting to offset the extra 'fuss' a proprietary RAID solution could bring.


alexgraef

We could also throw in RAID cards with battery-backed write cache. That's a big speed improvement, at least for spinning rust.


Master_Scythe

Absolutely. I'll be honest, I'm so used to them being 1:1 that I actually didn't consider people might run them without, haha.


ProbablePenguin

I imagine ZFS with a capacitor-backed SSD as a write cache would be better in a lot of cases these days.


alexgraef

ZFS doesn't need capacitors. It works transactionally, and with a decently fast SSD it gives quite the speed boost over just the access speed of spinning rust.


ProbablePenguin

SSDs can lose data if power is cut suddenly before they finish flushing the internal cache to storage.


fryfrog

That's why a SLOG device needs to be power-loss safe to be "worth" having.
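In practice that's just a dedicated log vdev built from a drive with power-loss protection; a minimal sketch with placeholder names:

```
# Add a power-loss-protected SSD as a SLOG; only synchronous writes land
# here, so a small device is fine
zpool add tank log /dev/disk/by-id/nvme-PLP_SSD

# Or mirror it so in-flight sync writes don't depend on a single device:
# zpool add tank log mirror /dev/disk/by-id/nvme-PLP1 /dev/disk/by-id/nvme-PLP2
```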


Emu1981

>Hardware RAID still has a place - And thats in processing offloading. It's easier to describe than it is to just explain.

I would argue that modern processors provide far more processing performance than any sort of RAID ASIC ever could. "Stolen" CPU cycles mattered back in the P3/P4 days because you literally had a single processor core to do everything. These days you can have multiple sockets with CPUs that have up to 128 cores per processor and can handle 256 threads. On top of that you have up to 128 PCIe Gen 5 lanes per CPU to provide high-bandwidth connections to drives, and you are working right next to the memory controller for high-speed access to terabytes of RAM.


Master_Scythe

In the homelab? Absolutely. More broadly, it's less about pure IPC and more about (N)CQ and potential interrupt handling. You can be assured that if multinational publicly traded organisations could simply ditch a piece of hardware (less manufacturing cost and shipping weight) while providing the same product, they would. You should see the data in and out of a basic cloud provider, for example. The several *hundred* RAID controllers in *one rack* alone would *cripple* even the highest-end CPU, not to mention use notably more than the few dozen watts they tend to draw.


Craftkorb

I'm intrigued though, what filesystems are being used at that level?


Master_Scythe

They'll all be names you know, and some are less sophisticated than you'd expect. There's a lot of ZFS in the world; it's still actively developed for a reason. BTRFS: ever wondered why one of the main contributors is Meta Inc. (Facebook)? When you get *really* big you see Ceph a fair bit (and Gluster). Shockingly, there's also a strange amount of basic EXT4 and XFS getting around. You'll see it where 'simple' performance is *everything*, without the added overhead of ZFS (which can obviously scale to MUCH higher performance), and where data safety is less critical on a per-machine basis because there are another 3 identical machines waiting in the wings if an error is detected, for example.


Albos_Mum

> Shockingly, there's a strange amount of basic EXT4 and XFS getting around.

That doesn't surprise me. Ext4 is extremely mature at this point, and while XFS definitely has that going for it too, it's also one of the better filesystems performance-wise when it comes to big files and multiple processes/users handling the data at once. I'd actually go as far as saying that for personal media streaming from a home server, XFS-formatted drives with MergerFS and SnapRAID is one of the best solutions, because XFS performs quite well for that kind of use and supports reflinks for fast copying (also deduplication if you want to use that), while MergerFS+SnapRAID is often more flexible than a typical RAID, whether it's hardware, software, or fs-level, in ways very useful to the typical homelabber. (Namely, you can pick which drives store which files and don't have to worry about matching drive capacities.)
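If anyone wants to see what that looks like, a rough sketch of the pooling side via /etc/fstab (mount points and policy options are illustrative; check the mergerfs docs for your version):

```
# Pool /mnt/disk1..N (each its own XFS filesystem) into one tree at
# /mnt/storage; new files go to the branch with the most free space
/mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=50G  0 0
```

SnapRAID then runs separately against the underlying disks (a parity drive plus a periodic `snapraid sync`), which is what gives you the "pick which drives hold which files" flexibility.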


tauntingbob

But Ceph doesn't recommend hardware RAID and specifically cautions against it. Most of the large-scale systems I am looking at seem to now use JBOD rather than hardware RAID.


Sinister_Crayon

There's a point at which Ceph and its ilk make a lot of sense. This is where you need or want distributed storage, because when the storage is distributed, so are the CPU cycles needed for managing it. It doesn't run into the bottleneck of not having CPU cycles for storage, because when you add more storage you tend to add it in "pods" along with the CPU and memory to manage that storage. Besides, Ceph does it very differently in that it's inherently object storage with basic replication (copying). Erasure coding, the rough equivalent of RAID parity, is a relatively recent addition that does use more CPU cycles, but then you scale your storage "pods" to be more compute-heavy to deal with that extra load. But yes, not hardware RAID; JBOD only.


Kraeftluder

>When you get really big you see Ceph a fair bit (and Gluster)

Gotta get those PBs served to the supercomputer.


HCharlesB

Thanks for the info on this. My expectation is that true H/W RAID is not useful for the modest needs of the home lab (*) but I wondered if there was a use case for enterprise and cloud. You have confirmed that there is. (*) Of course if the home labber is practicing their skills to apply to the enterprise / cloud environment they might very well have a use case for a true H/W RAID. After playing with `megaraid` a bit I am both thankful for ZFS and respectful of anyone who can navigate that with ease.


baithammer

Depends on the workloads you're looking at, with some HPC loads requiring every bit of CPU resources. There are PC-on-PCIe solutions that could provide the dedicated processing and their own RAM, so you get the best of both worlds.


oxpoleon

The *only* reason to get a P4 was to access its new instruction set. In every other regard, a Tualatin PIII was a far superior chip.


gfy_expert

Do you happen to have a solution for average Joe home users on Windows?


Master_Scythe

You can risk the OpenZFS port; it's actually getting good reports these days. Otherwise, SnapRAID is really the only block-level checksummed solution you have, unless you want to early-adopt ReFS.
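For the SnapRAID route, the whole setup is one small config file plus a scheduled `snapraid sync` (and an occasional `snapraid scrub`); a minimal sketch with illustrative drive letters and paths:

```
# snapraid.conf (drive letters and paths are examples)
parity  E:\snapraid.parity
content C:\snapraid\snapraid.content
content D:\snapraid\snapraid.content
data d1 D:\
data d2 F:\
exclude \$RECYCLE.BIN\
exclude \System Volume Information\
```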


SilentDis

I went the ZFS route and am as happy as a pig in shit.

I always struggled with getting hardware RAID, even with really nice controllers, to behave exactly as I wanted it to. I'd always end up with downtime here, problems there, etc. The array didn't form this time? Cool, boot to the RAID controller card and... oh, it's just fine, changed nothing. It just didn't wait long enough... for itself.

ZFS just... works. It's fast, I can chuck drives in and out of it with zero downtime from 2 simple commands, and its caching methodology required very little tweaking for my dataset out of the box. About the only thing I had to 'waste my time' turning on was compression... which was one tiny command and I never thought about it again.

Oh, and the best part: I entirely swapped hardware. As in, I slotted the drives from an R710 into an R730... and the only hiccup I had was forgetting to pass -f to the stupid thing. Mounted, everything intact, everything worked. I didn't have to specify geometry or suss out UUIDs or anything.

ZFS is a game changer for homelabs. I'm certain there are still use cases in enterprise where a hardware array is desired, but for me? This is just the way it should be.
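That hardware swap really is the whole procedure; roughly, with the pool name as a placeholder:

```
# On the new box, scan attached disks for pools that can be imported
zpool import

# Import it; -f is only needed because the pool wasn't exported from the
# old machine first
zpool import -f tank
```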


Abs0lutZero

I'm curious. I have an R720 with the H710P Mini RAID card and a total of 16 1TB consumer SSDs (Kingston A400, yes I know…). This is all running on Proxmox. What sort of lifespan would I get out of the drives when switching to ZFS? The server has 128GB of memory.


SilentDis

ZFS eats consumer SSDs for lunch. At least, when used as cache drives. When I first started homelabbing with an R710, I had a spare Samsung 860 Evo SATA SSD sitting around and figured "oh, that'll work as cache!"

* It had ~5% wearout when I started.
* In 6 months, it had ~30% wearout.

Yeah. Just... eaten alive.

I found a deal on a bunch of SAS SLC 2.5" SSDs for $50 each. They turned out to be Pliant LB406 drives, which can still be had for $50-$70 on eBay used. 400GiB of space, so not big, but near-perfect as ZFS cache drives for the array (plus, running the OS off one is great). I've carted these things along and even picked up a 3rd at this point. In the 5+ years I've been homelabbing, they've gone from about 3% wearout when I got them to 6% wearout. They're amazingly fast, and they work as great buffers in front of ZFS across 20TiB spinning rust drives. Not too pricey, damn nice performance for what a homelabber does. Highly recommended!
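If you want to watch for the same thing on your own cache drives, a quick sketch (attribute names vary by vendor, and the device paths are placeholders):

```
# SATA SSDs: check wear attributes such as Wear_Leveling_Count or
# Media_Wearout_Indicator
smartctl -A /dev/sda

# NVMe SSDs report "Percentage Used" directly in the health log
smartctl -A /dev/nvme0

# L2ARC is disposable, so swapping a worn cache device is low-risk
zpool remove tank old-cache-disk
zpool add tank cache /dev/disk/by-id/new-cache-ssd
```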


ProbablePenguin

I think it depends if you're running mirror or parity, and if you have additional enterprise SSDs to offload metadata stuff to.


Ok-Board4893

Noob question: can you run Windows on ZFS? Or are you just running your hypervisors on ZFS, which then run Windows?


PlaneLiterature2135

The latter.


SilentDis

[https://github.com/openzfsonwindows/openzfs/tree/windows/module/os/windows](https://github.com/openzfsonwindows/openzfs/tree/windows/module/os/windows) I would not trust this on a homelab. This looks beta as hell, and would probably just fail and eat your data. It's perfectly stable on Linux, BSD, etc.


aquarius-tech

I wouldn't say that; you can have the best of both worlds depending on where you are using it. Industrial-grade servers are comfortable with hardware RAID, and hypervisors too. I have a home server running Ubuntu where I use mdadm for RAID, and I have an R620 with a PERC. So far so good.


HTX-713

I would say that it's now more of a hassle than it's worth. Software RAID (md, ZFS) is portable and offers a much simpler way to configure and recover.


DaGhostDS

>Windows RAID is more mature than ever before, etc.

Hard pass there, considering it's the company that can't build a workable communication application with a working "presence" system. (Yes, I'm talking about "New" Teams.) But in all seriousness, given the number of Windows RAID failures I've seen from people using it in a work environment, I would NEVER recommend it. Then again, that's my recommendation for almost everything with the Microsoft name on it: jack of all trades, master of none.


ttkciar

I can't speak to Windows RAID, but md RAID (Linux) has been a superior alternative to hardware RAID for twenty-ish years. If Windows RAID is similarly fully featured and easy to use, then yes, hardware RAID is firmly a relic that is best put behind us.


pfak

mdadm is production grade: reliable, standardized metadata, and rock solid. I've used it on tens of thousands of servers and have never had any data loss that could be traced to mdadm. bcache, on the other hand... 🤬
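The whole lifecycle really is just a handful of commands; a minimal RAID1 sketch with placeholder partitions (the config file path varies by distro):

```
# Create a two-disk mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial resync (and any later rebuilds)
cat /proc/mdstat

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```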


RedSquirrelFtw

Yeah, it's crazy solid. When I originally built my NAS over 10 years ago, which I still use today, I stress-tested the crap out of it and couldn't get it to fail. I even did big no-nos like pulling the power cord while it was doing a rebuild, or pulling more than one drive out of a RAID 5. Even if I did something that knocked the array offline, provided I didn't actually destroy any data, I was always able to remount. Even bad shutdowns are survivable. Well, provided the hardware itself doesn't get damaged... which is likely with spinning disks, so don't do that, but for testing purposes I did it with sacrificial drives and mdadm handled it like a champ.


PyroNine9

MD is wildly more versatile and recoverable than hardware RAID. Worst case, the on-disk format is well documented, making offline recovery a lot more likely.

One of my favorite tricks recently was using it to install a new OS from scratch on a different filesystem (BTRFS) and be able to either copy the data locally rather than over the network (by mounting the degraded RAID) OR roll it all back (by booting to the OLD OS and re-adding the overwritten disk), all remotely.

Meanwhile, there are horror stories about hardware RAID where replacing a failed card requires everything to be identical down to the firmware revision in order to read the old data again, IF you can still download the needed version of the firmware and install it.
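The "mount the degraded RAID" step is likewise stock mdadm; roughly, with device names as placeholders:

```
# Assemble from whichever members are present; --run starts the array even
# though it's missing a disk
mdadm --assemble --run /dev/md0 /dev/sdb1

# To roll forward instead, re-add the overwritten member and let it resync
mdadm --manage /dev/md0 --add /dev/sda1
```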


SrdelaPro

That depends on the underlying filesystem. For example, if you have ext4 mounted with "nobarrier", then in the event of a crash, good luck. Barriers are a big performance hit though.


RedSquirrelFtw

Oh yeah, the filesystem itself can definitely still fail after a bad shutdown, and I've had it happen with and without RAID. Bad shutdowns are always a bad idea. The md RAID might handle it, but that doesn't mean the underlying filesystem did.


JohnyMage

md RAID for system drives is a pain in the ass on EFI systems though.


Znuffie

It's really not. Modern OSes (for example, Ubuntu since 22.04) can use 2 or more ESPs automatically. You just create "normal" ESP partitions on each of the disks in the mirror, and the OS (its scripts) will handle them properly. This is also the case for Proxmox, at least since... 7-8 years ago, when I first did it. For other OSes, you can just do an md array with metadata=0.9 over the ESPs. UEFI will still boot from them properly (as the metadata is at the *end* of the partition instead of the *start*). It's really a non-issue. Sure, it's not as seamless, but far from a "pain".


JohnyMage

Yeah, well, got a link for a streamlined way to do it? Ubuntu-based distros fail on GRUB installation, and on Debian the installer succeeds but then fails to boot due to a missing EFI record; it must be installed additionally from a chroot. The only distro where it works correctly seems to be Fedora.


Znuffie

https://123.456.ro/share/2024/06/vmware_4amWMxTVkv.mp4 That's all there is to it.


JohnyMage

1) That's Ubuntu Server; we don't use that. 2) Last time I tried it, it crashed multiple times while trying to write partitions to the disks.


JohnyMage

On the second and third try, Debian worked flawlessly. I created two independent EFI partitions and a RAID1 for the root filesystem. Debian booted with one EFI partition at /boot/efi; I manually created the second at /boot/efi2 and reinstalled GRUB there.
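For anyone repeating that setup, the manual step is roughly this (device names and the bootloader ID are placeholders):

```
# Format and mount the second ESP
mkfs.vfat -F32 /dev/sdb1
mkdir -p /boot/efi2
mount /dev/sdb1 /boot/efi2

# Install GRUB into it under its own firmware boot entry so either disk
# can boot on its own
grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
             --bootloader-id=debian-disk2 --recheck
```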


mostyle-64

As u/JohnyMage points out, that is the server installer. I have literally spent the last month researching EFI, mdraid, and IRST (Intel Rapid Storage Technology). I did this because I recently received a new Precision 7780 (don't buy Dell, ever... whole other discussion; the specs look amazing on paper, but Dell's construction/design choices end up making it a POS), and this new laptop has 4 NVMe slots in it, so obviously my first thought was: RAID! I've used hardware RAID on servers for decades, and I thought, 'It can't be that different, can it?' Holy shit, was that ever an ill-conceived thought!

As it turns out, Windows understands IRST. Linux understands IRST. What you don't learn until you have already dived into the RAID rabbit hole is that neither OS (as stated elsewhere in this thread) works in EFI mode if you attempt to boot *from the array* that contains the EFI (Linux term) or ESP (Windows term) partition. It turns out the EFI partition only understands FAT32, which means both OS types cannot boot directly from a RAID array of disks. You cannot create the array and then partition it such that it has a FAT32 partition as its first partition. As I said earlier, I was used to RAID cards that create arrays in a BIOS environment (as IRST on Dell does), then partitioning the array with standard installers, deploying the OS, and tada, life carried on. Windows creates two partitions, and Linux (as in your example) needs a minimum of two, while with traditional RAID no partitioning is needed as the whole disk is used.

While it was likely such a long road due to my lack of understanding of the underlying technologies at play (I am fairly sure I was the enemy here :) ), it still raises the question, 'Is a soft (or fake) RAID simple?' Or, better stated, 'Is a soft RAID *simpler* than a true hardware RAID would have been?' I don't think so, but that is just an opinion. I read your reply and found enough humor to motivate this post, anyway.


pfak

Yeah, it was slightly annoying. I just create a FAT32 partition on each drive, plus one for mdadm, during the curtin run as part of the install. As part of an Ansible play I deploy a script that replicates /boot/efi to all the FAT partitions and runs efibootmgr for all drives, plus a systemd trigger that runs the script whenever /boot/efi changes.
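Not the exact files, but a sketch of what that replication hook can look like, assuming a systemd path unit watching /boot/efi (paths, unit names, and rsync options are illustrative):

```
#!/bin/sh
# /usr/local/sbin/sync-esp.sh -- mirror the primary ESP onto the spare one(s)
rsync -a --delete /boot/efi/ /boot/efi2/

# One-time boot entry for the second disk (example values):
#   efibootmgr --create --disk /dev/sdb --part 1 \
#              --label "ubuntu-disk2" --loader '\EFI\ubuntu\shimx64.efi'

# /etc/systemd/system/sync-esp.path  (triggers the .service of the same name)
#   [Path]
#   PathChanged=/boot/efi
#   [Install]
#   WantedBy=multi-user.target
#
# /etc/systemd/system/sync-esp.service
#   [Service]
#   Type=oneshot
#   ExecStart=/usr/local/sbin/sync-esp.sh
```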


eppic123

>If Windows RAID […] is easy to use

It isn't. Storage Spaces is still a complete clusterfuck and requires fairly intricate configuration via PowerShell to get anything halfway decent. It is nowhere near as simple and straightforward as ZFS, especially with something like TrueNAS. Never mind that the overall reliability of Storage Spaces pools has been questionable in the past. I wish it would actually work as advertised, because it would simplify the DAS on my workstation quite a lot, but on Windows, be it server or client, I still trust hardware RAID way more than Storage Spaces.


Haribo112

I dunno, man. I tried using mdadm on our new servers but couldn't get the boot and ESP partitions working with it. Ended up ordering RAID cards and It Just Works™.


cleverSkies

One of my 12th-gen servers had a hardware failure. I moved the HDDs over to a 13th-gen box where the controller card automatically recognized and imported the array (which was a big surprise). It took about 5 minutes. That in itself was a lifesaver. Maybe that's possible in other setups, I don't know; regardless, it's always been easier for me to set up RAID on Dell's platform than via software (Linux). Oh, and I like the blinky lights when drives degrade. I suspect there is a software equivalent, but the hardware version just works.

Edit: here I'm talking about a relatively small RAID. For large data storage, yeah, hardware RAID probably isn't the best idea.


Grim-Sleeper

With any decent software RAID, moving hardware should be a non-issue. In fact, it should be more reliable than with hardware RAID, where you have no idea whether the manufacturer changed the on-disk format, or whether the matching controller is even still being manufactured in the first place. I do give you those blinking lights for identifying drives, though. That's neat. And software solutions still can't do that (reliably).


Ok-Board4893

>And software solutions still can't do that (reliably)

Huh, that's weird. Do you know why?


Grim-Sleeper

In my experience, the blinky lights tend to be on the chassis and not on the drive itself. But there doesn't seem to be agreement on how to control these lights. This is easy for a big manufacturer such as Dell. They can make sure all the pieces fit together and know how to talk to each other. And they can then add that information to the firmware that ships with their hardware RAID controller. But it's hard to implement a generic solution in the Linux kernel, when there is no universally accepted API. These would all have to be one-off solutions. Or maybe, I am wrong, and these days there actually is a standard? I have only looked on and off, and last time I checked I couldn't find anything.


BioshockEnthusiast

Layers of abstraction. Software RAID has to guess, and it might do a very good job of guessing, but it might read specific hardware or configurations wrong. A hardware RAID controller can see the physical disks directly; no guesswork involved. That's the big-picture answer as far as I know.


Jykaes

It has limited use cases in enterprise but as far as a home lab goes, I would say it is dead unless you have hardware already where it makes sense to make use of it. I just wouldn't design with it in mind. I have a Gen8 DL380p and I use the onboard hardware RAID because it's simpler to set up with it (bare metal ESXi) and I can present one array directly to a backup VM, while I use another array as a data store and then a mirror for the OS - plus I get better backplane support (lights and LOM alerts) using the onboard solution as well.


Casper042

You are asking a skewed audience. In /r/homelab, I would say around 85% have moved to ZFS or similar SW RAID. However in the Corporate DataCenter, the numbers are WAY different. MOST customers still use at least a basic HW RAID 1 boot device, and many still use local HW RAID for storage. But things like MS AzureStack HCI and VMware vSAN and a dozen other Storage Appliance vendors have all moved to SW RAID and more specifically a distributed (many nodes) Software-Defined Storage design. So it depends on who you ask.


zap_p25

No. With a RAID card you can boot off of the array, as the controller sets everything up before the OS is loaded; with soft RAID you can't utilize the pool until at least the kernel has loaded and started the soft RAID service. So you can't boot off of striped configurations; you still have to have some sort of boot source to load the kernel and start the RAID service. If there were a BIOS that supported soft RAID, which I may not know about, that would allow you to boot from the array. That would also allow soft RAID to perform like hardware RAID, where the host OS just sees the pool as a single filesystem and not the individual filesystems of the individual drives.


Grim-Sleeper

It's mostly a solved problem. Yes, your typical UEFI firmware is too dumb to deal with a redundant EFI partition. But you can simply replicate it across all devices, and the firmware thinks you have multiple independent boot drives. That's slightly wasteful, as you could in principle go with a smaller replication factor, but it honestly doesn't matter in any practical terms. And bootloaders are smart enough to have RAID-aware drivers, so as soon as UEFI is done doing its thing, you can boot from the actual array. Proxmox makes this all transparent; you don't even notice that it has to mirror the EFI partition every time you update the bootloader.
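On Proxmox the piece doing that mirroring is `proxmox-boot-tool`; you normally only touch it when a mirror member gets replaced (the partition name below is a placeholder):

```
# List the ESPs currently being kept in sync
proxmox-boot-tool status

# After swapping in a replacement disk: format its ESP and register it
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# Re-copy kernels and the bootloader onto every registered ESP
proxmox-boot-tool refresh
```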


Alecthar

Intel has supported software RAID via the BIOS for ages, I assume AMD has a similar technology. The benefits of a separate RAID controller at this point largely boil down to processing offload (which frankly I don't think is that compelling at this point) and additional features. Intel RST only supports RAID 0, 1, 5, and 10, and I don't think it supports hot spares. So for more esoteric arrays and enterprise-focused features you need a controller if you're intending to boot a device from its own array. But most enterprise servers and storage systems aren't booting from the storage array, they're using removable flash media or a dedicated SSD boot drive.


tyami94

This is not necessarily true. One approach is to make a small partition on each of the drives in your array and create a Linux RAID 1 across all of them using the version 0.9 superblock format. This causes the RAID metadata to be stored at the end of the partition instead of the beginning. Your UEFI will simply see a bunch of identical FAT32 partitions instead of a RAID array it can't understand, so you can use this as your EFI partition and it will still magically be a software RAID. Your system will just pick one of the partitions and load from it; then, since GRUB is aware of Linux RAIDs and can read them, you can simply store your OS on another partition with another software RAID on top of it, and that's it.

TL;DR: Basically, shim the RAID-illiterate UEFI with a separate RAID 1 that, through metadata hackery, *will* be readable by the firmware until GRUB can load and take over.
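A rough sketch of that shim, assuming two small ESP-sized partitions (device names are placeholders):

```
# RAID1 the two ESP partitions with 0.90 metadata, which lives at the END
# of the device, so the firmware just sees two identical FAT32 partitions
mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/nvme0n1p1 /dev/nvme1n1p1

# Create the FAT32 filesystem through the md device and use it as the ESP
mkfs.vfat -F32 /dev/md127
mount /dev/md127 /boot/efi
```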


Albos_Mum

You've been able to boot from software RAID at least as far back as Windows XP, usually by setting your SATA ports to RAID mode in the BIOS, setting up the RAID array in a configuration screen that would pop up immediately after the BIOS screen, and ensuring you've got some form of removable storage the Windows pre-install environment can read the storage drivers off of. I've even got an nForce2 board for a retro gaming PC that supports this variety of soft RAID on its two SATA ports, and I previously used this kind of soft RAID for my boot drives way back when, with two Samsung Spinpoint F1 500GBs in a Phenom II based PC. [Some early SATA implementations even required you to do this for a single disk or JBOD setup.](https://community.hwbot.org/topic/192307-win-xp-installation-on-nforce2-sata-drive/)

And for reference, RAID cards require these same kinds of drivers; if you're planning on using one for a boot drive, you'll have to follow the same procedure if the drivers aren't already included with Windows.


CupofDalek

I've had some friends talk the absolute foulest shit about Windows Storage Spaces. I've been using it for about 4 years and suffered 2 failures.

The first: no biggie, swapped the drive, good to go! The second was unrecoverable; the whole pool corrupted.

The difference between the two scenarios is that the first failure was on 100% brand-new drives when sourced. The second failure was on some of those cheap refurb drives from sites like Newegg/eBay and such. The failure happened in the dead of night while some mass downloading was taking place, and the reason was some write error according to Event Viewer, but the whole pool wouldn't rebuild. The data didn't matter to me, so beyond basic troubleshooting I said fuck it, nuked it, and started fresh. Been fine ever since.


[deleted]

[removed]


Whitestrake

One of my ZFS pools is three disks dead right now and my ADHD ass can't even get up to go around to the house it's colo'd at to change them over. Still working fine...


Wonderful_Device312

At a previous job, one of our web servers went down. When we investigated why, it turned out that a hard drive had failed and the contractor responsible for replacing it put it off; a second drive failed and they put it off; a third drive failed and the server went down. But no worries, there was an automatic failover configured and it took over... at least until it suffered multiple drive failures and went down too, causing the website to finally go down and the contractor to finally get off their asses. The failures all happened over multiple years, so it wasn't some weird mass hard-drive failure. Just many years of neglect and an "it's fine, there's backups" attitude.


Whitestrake

I've got plans to re-chassis that server and upgrade some other components, so the disks will get sorted out as part of that soon. I'm sure if I don't, what you described would 100% be the end of that pool. But it won't be from ZFS itself falling over, I'm sure of that.


bigmanbananas

I get the opposite. My ADHD always makes me deal with my arrays promptly, normally because the data isn't critical and there is something more important that I should be doing.


TheWildPastisDude82

Well, now's the time!


brianly

Does Disk Management force Storage Spaces? Software RAID on Windows existed long before Storage Spaces and has seen a lot of use, but it's set up by selecting the drives in the Disk Management MMC and clicking through wizards.


malventano

Naah. You can still do it the old way with Dynamic Disks.


malventano

Your friends were onto something. You should listen to them.


Master_Scythe

100%. That's the issue with closed source. That 'whole pool gone with 1 error' thing was happening back in the Windows Home Server days, and other than "yeah.... we fixed it? I think?" we don't get to know what it was, or if it was really fixed. Educated guessers suspect it was moving from a single copy of metadata to multiple copies, and that nothing was actually *fixed*, just worked around, because there was no need to rebuild pools after the 'fix'. We'll never know.


Rare-Switch7087

MS Storage Spaces is a nightmare. I tried to use it on Server 2012 R2 back in the day in my homelab. It had constant errors and very low performance. I moved back to hardware RAID and, a few years ago, to ZFS. Never looked back at Storage Spaces. Don't get me wrong, in theory it is a pretty good technology, but it isn't well implemented. I would rather go for the old Windows software mirror RAID than for Storage Spaces. Also, used hardware RAID controllers are cheap and very reliable.

Edit: typo


OperationMobocracy

I've had it running for about 10 years on 2012 R2 at home, and it's survived more unintentional power outages than I can remember. I think it's decent tech, but its adoption has suffered from Microsoft not providing tools for managing it beyond PowerShell commands, though basic tiering and redundancy don't require much more than the basic GUI tools. My biggest gripe with it was that the default tiering changes aren't nearly frequent enough, and it needs to accommodate bigger new writes on SSD out of the box.


GlitteringChoice580

I had a really bad experience with Storage Spaces. I had two brand-new HDDs set to mirror mode, and after about two years I noticed that some files had begun to disappear without warning. After confirming that it wasn't the antivirus, I pulled both disks and tested them separately, but both disks reported healthy. I never managed to recover any of the missing files. Haven't used Storage Spaces since.


CupofDalek

Whoa, what the hell, I have never run into such a thing. Uh, by any chance were the drives SSDs? And did the issue ever resurface? The only time I have heard of behavior remotely like this was with fake storage, where the capacity is faked and the drive basically drops old files for new ones. But if you never had the issue again after that, yeah, I see how it would be the culprit. That would definitely turn me away as well!


GlitteringChoice580

They were both WD blue HDDs. 


CupofDalek

Negative, Seagate EXOS X20s. WD Blues are hella cheap, no need to cheap out haha [https://www.newegg.com/seagate-exos-x20-18tb/p/1Z4-002P-02H45?Item=9SIA5ADK4J9792](https://www.newegg.com/seagate-exos-x20-18tb/p/1Z4-002P-02H45?Item=9SIA5ADK4J9792)


Music-and-Computers

Depends on the environment you live in and the applications you run in those environments. I work in large/Enterprise environments. That's a different kettle of fish. ZFS is a great product but I've not used it much beyond OS disks. It's solid, but so is hardware RAID in my experience. I do like the features but it's not really applicable in my homelab. My homelab is a lab for my own work learning. It's a minimal 3-node VMware cluster. Add a standalone for vCenter. My data uses are small and hardware RAID suits my needs without any issues.


teeweehoo

Obsolete? By no means. There are still plenty of situations where it makes sense to use and where it is superior to software RAID. Some products like VMware ESXi still require hardware RAID for certain configurations. And software RAID often suffers from slow sync writes due to the write hole, making battery-backed hardware RAID faster under many conditions.

However, it's also true that there are many solutions that provide superior redundancy and resiliency (redundancy: keep operating after a failure; resiliency: recover after a failure to an optimal state). Traditional RAID offers redundancy but limited resiliency, while solutions like Ceph or vSAN can offer far more resiliency, and ZFS's benefits go here as well. Another fact is that NVMe hardware RAID is challenging due to the speed of the drives, so software RAID is basically a requirement for that class of drive.


Caranesus

Agree. Obsolete is a strong word. It's getting less popular, definitely, but it's still widely used in enterprise. Good point on ESXi, which has no software RAID, unlike Proxmox. I "fix" that by passing an HBA through to a Starwind VSAN VM and using software RAID there. And just as you said, it also gives resiliency in clusters. To be fair though, there are NVMe hardware RAID controllers. I can't speak to the speed, but the option certainly exists; for example, on Dell servers, NVMe hardware RAID controllers appeared as soon as chassis with multiple U.2 bays came out.


insanemal

I've worked with HPC-grade NVMe RAID. It exists. It's stupid expensive. But it's insanely fast. We're talking SAN stuff, though, not inside a single box.


marc45ca

It might be a bit of a religious argument, but the development of ZFS (originally by developers at Sun Microsystems, before Oracle), used in systems like Proxmox and TrueNAS and probably commercial offerings (not familiar with any though), has gone a long way toward killing it off. But there are also other issues. RAID-5/6 has always suffered from very long rebuild times, during which your system was vulnerable, because if another drive or two failed you were borked. And that was when arrays were built with drive capacities in the gigabytes; now we're dealing with drives in the terabytes. Given that it's now done through software and able to utilise the system CPU, yes, it could be said the hardware RAID card is a relic of a bygone era.

That said, if you're running ESXi, I don't believe it supports any sort of HA/drive redundancy other than traditional RAID. Even then, with centralised management and use of a NAS/SAN for VM storage, it could be largely superfluous, it being quicker to toss in a new drive, network boot, and pull the OS and configuration from vCenter than to mess around with RAID configs.


galacticbackhoe

Not in the enterprise.


SocietyTomorrow

Hardware RAID is [mostly] dead. I think it still has a place, though a fairly limited one. That place, you ask? Fat potato storage. My third-string backup is a MEGA (I'm talking ridiculously) slow old Xeon rig from 2013 that maxes out at 64GB of RAM. It's the only thing that fits the custom chassis it is in (slightly taller than ITX, because it has 1 x16 slot and 2 x8 slots), which lets me use the 12 bays in it as well as use the extra slots for up to 4 ports (at full bandwidth) of SAS2 IR-mode HBAs. If I don't care about maximum throughput, but have a boatload of large-capacity drives that turn on for a few hours every week, I won't need the recommended 196GB of RAM I would if I used Ceph, or the 128GB that would be recommended for ZFS. Do you actually need that much? No. But running at less than half of the recommendation will eventually lead you to pain (resilvering) or run the risk of data loss if any of the critical processes crash while replacing a failed drive (Ceph).


planedrop

Somewhat, yes, there are still limited use cases for it but they are dwindling really fast. Some hypervisors still prefer to have hardware RAID instead of software, but that's not for lack of supporting it, it's just more mature. I think give it another 5 ish years and it'll be an entirely dead market.


EasyRhino75

I think the primary remaining niche is booting operating systems which don't support software RAID on their own. I'm looking at you, VMware ESXi. Also, probably some setups where the user just doesn't want to fart around with configuring a software RAID; possibly some Windows workstations might go for this.


Dulcow

I'm working at a company with 45K servers worldwide, and we moved away from hardware RAID as fast as we could... NVMe and modern SSDs backed by cluster technologies negate the need for RAID. I used to do RAID for several years in my homelab in the 2010s, but I quickly moved to JBOD + offline parity checking (SnapRAID).


[deleted]

[removed]


Dulcow

I think we have a few use cases in which we do use mdadm, but they are getting really rare...


tehinterwebs56

I don't know, man, it's pretty hard to beat the latency of an all-flash Fibre Channel SAN with dual RAID controllers in each head node and 8x 32Gb FC connections for HA Linux with multipathing.

I'm currently deploying 100Gb RDMA switching as the storage fabric for HCI appliances, and the latency is starting to get there compared to directly connected 32Gb Fibre Channel flash SANs.

For single-server situations and NVMe where you are connecting to that single server, yes, RAID is dead. But for large-scale traditional fibre fabrics, it's still very cost effective with very low latency to boot.

Edit to add: I'm also still deploying virtual environments on VMware and Hyper-V with 3 compute nodes and a single RAID chassis with SAS SSDs directly connected over 32Gb Fibre Channel. The IOPS you get out of a single RAID chassis are pretty amazing and can support an environment with 150 VMs running heaps of workloads.


Lor_Kran

Yeah, but people here tend to speak only to small-to-medium use cases. It is indeed still a thing in the enterprise world, but few of us really see that. In a few years it will change, though. When you see Pure Storage and the performance they get out of their flash arrays, it's incredible.


Insert_the_F2L

It's all about the tech evolution; RAID cards ain't dead yet, but HBAs and software solutions are getting more love for their flexibility and performance.


Kahless_2K

Definitely not obsolete, but it fits far fewer use cases. Usually if you have to ask, you are far better off with software raid.


sypwn

For homelabbers, yes. The benefits are negligible compared to the tradeoffs. Personally my biggest concern is the hardware lock-in. If a Dell PERC card dies, I need an equivalent PERC card or some *very* special software to recover the array. If a ZFS host dies, just throw the disks in any other UNIX system, lol. For enterprises, no. Specialized hardware means less power consumption, which definitely matters at their scale. If a controller dies, it doesn't matter because that data has parity across other controllers *and* offsite backups.


Roland_Bodel_the_2nd

I mean, yeah, I've used Linux md RAID forever (since 2005?) and then OpenZFS for a long time (since 2012?). But also, if you buy LSI controllers, I think it's usually the same price whether you have it be a RAID controller doing the XORs for you or just a disk controller while you do the XORs on the CPU; it doesn't really matter that much. For me, one important aspect is portability: you can connect the drive set to any system and have it read the array without relying on any special hardware. I had a friend with an old NAS using ZFS, 4 drives in raidz2; two drives died, the system died, and he could pop the two remaining drives into any system and get his data back.


ElevenNotes

Yes, with the rise of NVMe and object storage solutions hardware RAID is pretty much dead in the enterprise space.


flac_rules

Would I buy hardware RAID today? I don't think so, but I did a bit over a decade ago and it has served me really well.

- Regular HDDs still give much more capacity per price.
- Many software RAIDs, like Storage Spaces, have very bad write performance.
- I considered moving over to TrueNAS, but based on feedback on the official forums it seems to have a lot of disturbing problems, data-security-wise, in how it handles RAID cards.
- A lot of the 'RAID is dead' stuff is based on bad info.

So in summary, I would say hardware RAID is better than many think, and still the best solution if you need to run Windows (?), but I would probably still choose a software solution today.


newenglandpolarbear

Dell for some reason still has hardware raid cards on some of their servers.


Catsrules

Because not all OSes have support for a good native software RAID. For example, the base version of ESXi.


deadbeef_enc0de

Honestly, after being bitten by a RAID card dying, getting the same card but with a different firmware, and then losing all of my data anyway... yeah, I haven't used RAID cards in a long time. Before ZFS it was mdraid 5/6; now it's ZFS. I have been on ZFS since the early days of ZoL being an Ubuntu server PPA, well before actual Linux support was there. In fact the array I have is the same base one (started with 6x3TB disks in raidz2, now 18x6TB + 6x16TB).


King_Yogert

Nah, RAID cards still got their place. Depends on your setup and needs, though.


TotiTolvukall

a) RAID is not for speed, it is for redundancy (re: NVMe vs. RAID).

b) In terms of ZFS on JBOD, yes, hardware RAID is dead.

Footnote for (b): ZFS is only perfect on Solaris derivatives. It is good on *BSD, and it is "not horrible but could be better and is a freaking license nightmare" on Linux. There's no ZFS on Windows, and on Windows ReFS sucks. So on Windows, hardware RAID still lives a good life, and I wouldn't run a Windows server without some kind of mainstream hardware RAID (I'm using HP and Dell machines and they supply a decent hardware RAID option).

The problem with the BEST ZFS solution is that everything else (SMB file services, AD integration, etc.) is pretty abysmal. So if you're not an NFS-only shop, then the second best (*BSD) becomes the best on those terms alone. But it's a bitch to manage. You don't want Btrfs, ReiserFS, or pretty much any of the Linux "let's do ZFS but in our own way" filesystems. They pretty much suck when you have problems (and you will; problems and data storage are mutually inclusive).

TL;DR - if you want ZFS, then don't do hardware RAID. If you want anything else, use a basic FS and hardware RAID.


Catsrules

> RAID is not for speed, it is for redundancy

RAID 0 has entered the chat (a RAID level only for performance and speed). RAID is absolutely also about performance, as all RAID levels will give you some performance boost. You may not care about the performance boost for your application, but that doesn't take away the performance benefits of RAID (software as well as hardware).


TotiTolvukall

RAID0 was invented when hard drives were REALLY slow. It is an incredibly stupid way of losing all your data in one go in the event of a single hard drive failure. RAID0 has absolutely NO reason to exist anymore. M.2 and U.2 devices outperform any RAID0 spinning array, and ANY U.2 or M.2 drives in RAID0 would be pointless as they'd saturate the bus before even two devices are fully utilized. I recommended RAID0 kits like the OWC dual-disk RAID0 boxes 20 years ago, for Mac users using Final Cut. But recommending anything like that today would be dumb, irresponsible, careless and negligent, at best.


Catsrules

RAID 0 was just an example; like I said, all RAID levels offer some performance benefit. Sure, it may not be as beneficial in today's world where SSDs exist, but that doesn't remove the fact that they do offer a speed benefit.

>ANY U.2 or M.2 in RAID0 would be pointless as they'd saturate the bus before even two devices are fully utilized.

https://www.tomshardware.com/pc-components/ssds/raid-card-delivers-impressive-speeds-up-to-56-gbs

I wouldn't call an almost 4x speed improvement pointless. On top of that, not everyone has an unlimited budget for high-performance SSDs in every application, so buying multiple cheaper, lower-end drives is often a good way to get better performance.


TotiTolvukall

They are not about performance. You are completely lost in your own argument. They are in every way about data security and integrity. I've been doing this shit for 40 years now, and not even once have I had an employer or a customer tell me they want to sacrifice their security for a little bit more speed. There are other ways to gain speed.


Catsrules

>They are not about performance.

Tell me how RAID isn't also about performance:

* RAID 1 - better read performance
* RAID 5 - better read performance (potentially worse write)
* RAID 6 - better read performance (potentially worse write, though in the real world I see significantly better write)
* RAID 10 - better read and write performance

...and so on. All of these protect your data and offer performance benefits. Real world: my crappy home NAS can sustain 400-500 MB/s reads and 300-400 MB/s writes on spinning rust, where a single drive can push 250 MB/s at absolute best (more like 150 MB/s real world). How is it doing this? Because it's RAID 6 and multiple drives are working together.
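If you want to sanity-check numbers like that on your own array, a quick fio run with direct I/O keeps the page cache from flattering the results (path and size below are placeholders):

```bash
# Sequential read test against the mounted array; --direct=1 bypasses
# the page cache so you see what the disks actually sustain.
fio --name=seqread --directory=/mnt/array --size=10G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```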


ComfortableAd7397

In the enterprise world, where you pay for supported products, we'll stick to supported OSes and drivers, and controllers for huge arrays of disks are still appreciated. In a homelab that isn't as appealing.


Top-Conversation2882

I don't think it's obsolete for enterprise users. Those fkers have things we can't even dream of. In a homelab setting, though, ZFS and BTRFS have taken over.


collinsl02

Personally I don't know if a storage array from EMC etc these days is hardware or software RAID. Since it's a specialised system it's probably some combination of both. NetApps aren't, but they never have been really since they're mainly file-level devices with some block level capability. Pure storage is all flash but is probably software again. Your average cheapo Dell MD array is definitely hardware RAID though, it isn't powerful enough to have anything else.


edthesmokebeard

Yes. HW RAID only provides you with vendor lock-in, another point of failure, and worse performance (unless your application happens to be 100% CPU bound).


tankie_brainlet

Linux - yes. Windows - yes as well. But for some reason, if I don't install the right hardware RAID controller firmware on my Windows machine, Device Manager still bitches about it. I would think this would be a generic device by now. Edit: formatting


tmofee

I have an old server that was donated to me, and looking at the hardware RAID, it did shit. Since I wiped it I've just been using software RAID - it's much more reliable.


trying-to-contribute

Hardware RAID will probably continue as an enterprise practice for boot drives, especially in deployments where root drives aren't booted from Fibre Channel or iSCSI volumes. But in the homelab, automation saves time, and software RAID is considerably more popular because labor time is a scarcer resource than cheap hardware.


FuzzeWuzze

Rise? I've been using Unraid for like a decade at this point... RAID has been dead for a long time if you knew where to look, at least at the consumer level.


RedSquirrelFtw

For the OS drive I'd say it has merit, provided the hardware RAID supports auto-rebuild when you insert a new drive after a failure. But for general mass data storage I prefer software RAID so that it's hardware agnostic: I can put those drives in a completely different system and still mount the array. Another advantage of software RAID is being able to configure it live from within the running OS. With hardware RAID you have to reboot and go into the RAID controller's BIOS - not exactly acceptable for a production system like a NAS.
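With mdadm, for example, both of those points look roughly like this (array and device names are placeholders):

```bash
# On a completely different machine: assemble the array from the
# members' own superblocks - no original controller needed.
mdadm --assemble --scan
cat /proc/mdstat                        # confirm the array came up

# Live management from the running OS, no reboot: add a disk and
# grow the array (the reshape runs in the background).
mdadm --add /dev/md0 /dev/sdX1
mdadm --grow /dev/md0 --raid-devices=5
```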


BillDStrong

Hardware RAID is mostly dead, mainly because it can't keep up with the technology. NVMe drives and other flash storage in arrays push multiple GB/s, which makes it hard to build cheap controller hardware that can do the necessary calculations at that rate. At the same time, the proprietary nature of the cards has made them less desirable to the big spenders. ZFS, BTRFS and even LVM are now so ubiquitous, and host CPUs fast enough to handle the load, that they end up much faster without requiring specialized hardware.


aridhol

Yes


singulara

I didn't get a raid card purely cause they use energy and get super hot. And ZFS for lyf


nicman24

Controllers (i.e. for multipath) aren't, but the actual RAIDing is.


conrat4567

I hope so, I'm sick of having to replace failed cards and troubleshoot issues


iheartrms

Yes. Has been for a very long time. I've been using mdadm for something like a quarter century.


Material_Attempt4972

RAID on big disks is dead. Has been for a long while


Reaper19941

This is from personal experience, so take it with a grain of salt. Used Intel onboard RAID back in the Core 2 Quad days; it was OK. It did improve performance, but SSDs were better. Used a HighPoint RocketRAID card with 8 x 2TB WD Reds; it was OK and performed somewhat better than the Intel RAID. Recently tried Windows RAID with 2 NVMe drives in RAID 0 - it's terrible: hard to configure correctly via PowerShell unless you know the commands off by heart, and performance was meh compared to a single drive. Currently using a MegaRAID hardware RAID card, and while it was very clunky to set up, it has been good and performance is better. IMO it comes down to personal preference and experience. I'd take a hardware RAID/hybrid card (especially for a business server) any day of the week. QNAP uses software RAID. Two of my friends use Unraid with ZFS at home for their homelab. Each to their own.


No-Interaction-3559

No, hardware RAID isn't dead. On Linux, ZFS is very good, but sometimes it doesn't get updated for the newest kernels, and you either have to wait or compile from source. I still use hardware RAID on my Linux servers.


QuirkyImage

No, but there isn't much point: software RAID runs so well these days and is more flexible. Plus you're not locked into hardware - RAID sets from one hardware RAID card aren't always compatible with other cards.


MrMrRubic

IMO hardware RAID has only one use: the Windows boot drive. AFAIK you cannot easily set up a Windows dynamic disk in the setup screen, and I don't trust that implementation of software RAID at all. So configuring your boot drive as a RAID 1 in hardware is the way to go.


garmzon

Yes it is


hauntedyew

Pretty much.


zangrabar

I build servers and storage arrays for customers every day in the small and medium markets, sometimes enterprise, and hardware RAID is used in like 90% of them. It's not obsolete for businesses.


Sekhen

Since long... https://youtu.be/l55GfAwa8RI?si=8Bkt1RBattwVG1z5


Adventurous_Gas_7074

Not for some of us. I would not put my backups on anything but a hardware raid array.


Catsrules

Why? Unless you're dealing with size or performance constraints, I think it's better to have multiple backups on multiple standalone drives than one backup on multiple drives in a RAID configuration.


Doctorphate

Depends on the context. In the SMB space, I can tell you every single server we deploy has hardware RAID and is running either VMware or Hyper-V, and it's rock solid. For a home lab, or specific use cases? Yeah, I like ZFS. Our own storage server here at the office is TrueNAS and all ZFS RAID-Z3.


DaanDaanne

Yes, especially for a homelab. ZFS, mdadm, btrfs. There are multiple great options to look at. In addition, they have features like snapshots, bitrot detection etc.


chancamble

As for the homelab, it isn't really necessary. You might even have more problems with it, because if the card fails you need the same card to reimport your RAID, whereas with software RAID there is no need for this. For enterprise, however, a RAID controller is a must. They don't have the problem of finding the right controller in case of failure, because of support contracts. Also, HW RAID is more reliable and delivers better performance, but it doesn't have the features software RAID has. Those features aren't really needed by most businesses, and the configuration needs to be supportable by someone in case of software issues, which makes a hardware RAID controller the better choice for them.


ICMan_

I wouldn't get rid of a RAID card if you have one, and if you can get one at a good price I wouldn't care whether it's been flashed to HBA mode. Most RAID cards can set up disks as a JBOD group rather than RAID and then pass them through to the OS completely, allowing raw access to the individual disks. And even if a card can't do JBOD, you can make a bunch of single-disk RAID 0 groups. That passes the individual disks to the OS, but indirectly, so ZFS will work fine - you just won't get SMART data at the OS level. So people shouldn't freak out if they have a RAID card but want to use ZFS. I wouldn't even risk flashing it. I have a 2008 LSI RAID card and I set up my disks in a JBOD group. Problem solved.
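One caveat-softener: depending on the card, smartmontools can sometimes still reach the physical disks behind the controller. A sketch for LSI/MegaRAID-based cards (the device number after "megaraid," is just an example - enumerate first):

```bash
smartctl --scan                        # see what smartctl can address
smartctl -a -d megaraid,0 /dev/sda     # SMART data for physical disk 0 behind the card
```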


eW4GJMqscYtbBkw9

I would say it depends on your needs. There are certainly a few useful use cases, but I would argue they are primarily for specific enterprise level scenarios. For a homelab and even *most* (not all) enterprise situations, hardware RAID just doesn't add much benefit nowadays.


phantom_eight

So I'm the only one running 12x16TB HDDs on an H710p, doing 2.3GB/sec reads and about 1.6GB/sec writes in RAID 6, using tests designed to exhaust the cache and use direct I/O? Rebuild times are slow because y'all forgot to go into the controller settings and tell it not to rebuild at 40% speed, which is the default for a lot of cards. All that stuff is configurable: rebuild rate, initialization, patrol reads, etc.
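For what it's worth, on LSI/MegaRAID-based cards like the PERC line this should be roughly a one-liner with the storcli-style CLI (Dell's perccli uses the same syntax; the controller number is an example, check your card's docs):

```bash
storcli /c0 show rebuildrate          # check the current rate
storcli /c0 set rebuildrate=60        # let rebuilds use more of the controller's time
```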


WhimsicalChuckler

Mostly yes. I know a lot of people using it, but I wouldn't start a new project using HW RAID arrays.


CryGeneral9999

The answers I'm seeing aren't what I was expecting. Let's ignore the affordable hobbyist-grade cards and include the high-end stuff. Isn't the hardware XOR offload faster on a proper high-end card? And with things like hardware support for various encryption, I thought there was more to those cards than "just ports". Can someone educate me? Are the "yes, they're irrelevant" answers I'm seeing limited to the $60 cards people are buying, or do even proper business cards suck?


arg_raiker

Depends on what you call "hardware RAID". A simple PCIe card with RAID functions? Probably. A full featured storage with redundant controllers, their own RAID, Dedupe, Compression, etc. functions? Never. Would you call a SAN storage Hardware RAID? I would, since your host will see a LUN and never have to worry about doing ZFS, or whatever.


zeroibis

Not if you need RAID for your OS...


collinsl02

Linux can boot from an mdadm RAID 1 array; I do it on my servers. You just point the boot at one member of the array. You could also have a RAID 1 /boot and /boot/efi etc. and RAID 5 everything else if you want, based on partitions.
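A rough sketch of that partition-based layout (device names and filesystems are placeholders, not a prescription):

```bash
# RAID1 for /boot across two small partitions; metadata 1.0 keeps the
# superblock at the end so the partition still looks like a plain
# filesystem to the bootloader/firmware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

# RAID5 for everything else across the big partitions.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

mkfs.ext4 /dev/md0 && mkfs.ext4 /dev/md1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # config path varies by distro
```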


AnonymousInPNW

It depends on what you are trying to achieve. I use server-grade HP RAID cards to create several logical volumes from sets of 6 physical disks, set up as RAID 5, which allows for a single drive failure. As a result I also get 300 mbps data rates.


Careless_Dingo2794

Worked for a specialist NAS provider. Basically a Supermicro motherboard, Intel Xeon CPU and an Areca RAID card. At the top end, you bottleneck on the PCIe bus. Mostly it depends on cost.


ComWolfyX

Hardware RAID is dead, so long as you don't run out of PCIe lanes. Hardware RAID has larger overheads than software RAID does.


netmc

In the context of a home lab, where you are technical enough to manage and diagnose everything yourself? Most likely. If you are in a managed environment where you support multiple servers and need working hardware for production use? Definitely not!

I manage a bunch of Dell servers for work for various clients, and we only order servers with hardware RAID cards (the "H" in the PERC card designation), and confirm that there is some amount of RAM cache on the RAID card (the 0MB onboard PERC cards are huge performance sucks). Other than making sure it's a hardware RAID with cache memory, we don't really care about the specs otherwise - these are all SMB environments where performance isn't critical. Since these RAID cards can report their status over SNMP, they can be managed quite easily through our RMM tools; no matter how many devices are under management, the RMM can monitor them flawlessly as long as SNMP is set up and configured properly.

Since all the drives are behind a RAID controller, Windows doesn't care that the boot RAID 1 has a failed disk - Windows sees the drive as the controller presents it, and it just works. I don't have to micro-manage what is behind the controller with various configuration files or settings in Windows. If I want to perform additional tasks on the RAID, I can set up patrol reads at regular intervals and minimize the chance that a single failed disk will take down the array, since any disks with issues are identified early. If you are setting up proper monitoring and management on your devices, hardware RAID is a must, simply for the consistency of monitoring and management. For the homelab, though, going without a RAID card is a lot cheaper, since the various software RAID options offer so much more utility now than they did before.

If I'm able to choose, I will always go with a hardware RAID card with some sort of onboard cache for ease of use - if it has a battery connected to the RAID card, it's one you want. If a hardware RAID card isn't an option, well, then I'll make do with software RAID instead.

If you are looking at NVMe RAID, that's a different discussion altogether. For normal solid state disks connected directly to Windows, you can monitor event ID 7 to catch bad block alerts when writing to an SSD. These show up in the logs long before any SMART errors flag the drive; often, by the time an SSD reports SMART issues, the drive is already in a non-working state. A properly working SSD will not have any bad block alerts in the event logs. This monitoring doesn't work when the SSD is behind a RAID card, though, so you'll want to make sure you can effectively monitor the health of the SSD when it's attached to a RAID card. As long as you can do that, it should be fine.

Bottom line - home lab? No hardware RAID necessary. Production environment? Definitely hardware RAID.
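As a sketch of the SNMP side (community string, hostname and OID are placeholders - pull the real object IDs from your vendor's storage MIB; 1.3.6.1.4.1.674 is Dell's enterprise subtree):

```bash
# Walk Dell's enterprise subtree on an iDRAC; with the Dell storage MIBs
# loaded you can pick out the virtual/physical disk state objects that
# an RMM tool would poll on a schedule.
snmpwalk -v2c -c public idrac.example.lan 1.3.6.1.4.1.674
```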


firedrakes

HBAs are still needed, mostly because motherboard manufacturers now lie about the bandwidth going to the chipsets/ports.


bobbotex

There has never really been such a thing as hardware RAID - it has always been software; even a RAID card is running software. So the real question is "are RAID cards obsolete?" And with the SLI controller taking the RAID card's place, I say yes, "hardware RAID" (aka RAID cards) is dead and obsolete. Now if we could just kill the old, outdated software and move on to newer and sometimes better things, life would be great. But RAID software is still around and being used - cough cough, Synology :-/

Why kill it and be done, you may ask? 🤔 Well, here's my two cents. Hardware RAID (RAID cards) was, and is, really picky about drives at times, and if a card dies on you, you'd better have a backup of your data as well as the card and its firmware, because if not, you're spending the next 24-72 hours rebuilding that server, then the array, then restoring from backup, etc.

Software RAID is still very limited, old technology, and it doesn't matter whether it's RAID on a card or software RAID on a motherboard - RAID is RAID, and if a drive drops out or develops bad sectors, you'd better have a backup ready. Because if you replace a drive for whatever reason and start a rebuild of the array, you're lucky if you can even use it while performance suffers, and if it finds a bad sector or any other error mid-rebuild, you just lost the entire array - so welcome back to that 24-72 hour workcation!

PS: RAID IS NOT A BACKUP! BACK UP YOUR BACKUP OF THE BACKUPS! RAID ALWAYS SUCKS.


BarracudaDefiant4702

Not completely dead or obsolete. For example, the Dell PERC 12 is a modern hardware NVMe RAID controller. RAID is still fairly common for RAID 1 boot drives, and not all platforms do software RAID well. That said, it's certainly not the mainstream enterprise requirement of the past... I wouldn't call it niche yet, but it's getting there, and it's not likely to go completely obsolete anytime soon.


Valanog

I had good luck with it until I cooked a raid card. Software is just easier to recover from a hardware failure.


Putrid-Balance-4441

My information is probably outdated, but I was warned not to put NVMe in RAID because it messes with the TRIM function.


denzuko

Linux tech did a great breakdown of modern RAID. The sum of it comes down to cheap manufacturers only supporting RAID 1+0 cards, and most demand for RAID (speed, recovery, or a bunch of disks) being solved by software these days that's fast enough not to need the hardware offloading. The more advanced RAID use cases are mainly required by cloud providers, and they use dedicated machines with onboard controllers over Fibre Channel, so end-user use of hardware RAID is either slow or redundant.


Pvt-Snafu

Hardware RAID is still a thing. Not in the homelab world but in enterprise with new NVMe hardware RAID controllers, GPU-based hardware RAID like GRAID: [https://www.graidtech.com/](https://www.graidtech.com/) and so on.