

2. Why RAID?

There can be many good reasons for using RAID. A few are: the ability to combine several physical disks into one larger ``virtual'' device, performance improvements, and redundancy.

2.1 Technicalities

Linux RAID can work on most block devices. It doesn't matter whether you use IDE or SCSI devices, or a mixture. Some people have also used the Network Block Device (NBD) with more or less success.

Be sure that the bus(ses) to the drives are fast enough. You shouldn't have 14 UW-SCSI drives on one UW bus if each drive can deliver 10 MB/s and the bus can only sustain 40 MB/s; those 14 drives could demand 140 MB/s between them, far more than the bus can move. Also, you should only have one device per IDE bus. Running disks as master/slave is horrible for performance; IDE is really bad at accessing more than one drive per bus. Of course, all newer motherboards have two IDE busses, so you can set up two disks in RAID without buying more controllers.
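If you are not sure what a single drive can actually deliver, a timed read test gives you a rough number to plug into this kind of calculation. This is just a sketch; it assumes the hdparm utility is installed and that the drive in question is /dev/hda (substitute your own device name):

hdparm -t /dev/hda      # timed device reads: a rough measure of sustained drive throughput
hdparm -T /dev/hda      # timed cache reads: buffer-cache throughput, useful as a baseline

hdparm is mainly aimed at IDE drives, so treat the numbers as ballpark figures only.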

The RAID layer has absolutely nothing to do with the filesystem layer. You can put any filesystem on a RAID device, just like any other block device.
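For instance, once a RAID device such as /dev/md0 exists, a minimal sketch of putting an ext2 filesystem on it and mounting it could look like this (the mount point /mnt/raid is just a made-up example):

mke2fs /dev/md0           # create an ext2 filesystem on the RAID device
mkdir -p /mnt/raid        # create a mount point; any directory will do
mount /dev/md0 /mnt/raid  # mount it, just like any other block device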

2.2 Terms

Throughout this HOWTO, the word ``RAID'' means ``Linux Software RAID''. Hardware RAID is not treated here at all.

When describing setups, it is useful to refer to the number of disks and their sizes. At all times the letter N is used to denote the number of active disks in the array (not counting spare-disks). The letter S denotes the size of the smallest drive in the array, unless otherwise mentioned. The letter P denotes the performance of one disk in the array, in MB/s. When P is used, we assume that the disks are equally fast, which may not always be true. For example, a (hypothetical) RAID-5 array built from four active disks whose smallest member is 9 GB has N=4 and S=9 GB, and, as described below, gives a device of size (N-1)*S = 27 GB.

Note that the words ``device'' and ``disk'' mean about the same thing. Usually the devices used to build a RAID device are partitions on disks, not necessarily entire disks. But since combining several partitions on one disk rarely makes sense, the words device and disk simply mean ``partitions on different disks''.

2.3 The RAID levels

Here's a short description of what is supported in the Linux RAID patches. Some of this information is absolutely basic RAID info, but I've added a few notices about what's special in the Linux implementation of the levels. Just skip this section if you know RAID. Then come back when you are having problems :)

The current RAID patches for Linux support the following levels:

  • Linear mode
    • Two or more disks are combined into one larger virtual device. The disks are ``appended'' to each other, so writing to the RAID device will fill up disk 0 first, then disk 1 and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here :)
    • There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be lucky and recover some data, since the filesystem will just be missing one large consecutive chunk of data.
    • The read and write performance will not increase for single reads/writes. But if several users use the device, you may be lucky enough that one user is effectively using only the first disk while another user is accessing files that happen to reside on the second disk. If that happens, you will see a performance gain.
  • RAID-0
    • Also called ``stripe'' mode. Like linear mode, except that reads and writes are done in parallel to the devices. The devices should have approximately the same size. Since all access is done in parallel, the devices fill up equally. If one device is much larger than the others, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone during writes to the high end of your RAID device, which of course hurts performance.
    • Like linear, there's no redundancy in this level either. Unlike linear mode, you will not be able to rescue any data if a drive fails. If you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data, it will be filled with small holes all over the device. e2fsck will probably not be able to recover much from such a device.
    • The read and write performance will increase, because reads and writes are done in parallel on the devices. This is usually the main reason for running RAID-0. If the busses to the disks are fast enough, you can get very close to N*P MB/sec.
  • RAID-1
    • This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare-disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size; if one disk is larger than another, your RAID device will only be the size of the smallest disk.
    • If up to N-1 disks are removed (or crash), all data are still intact. If there are spare disks available, and if the system (e.g. the SCSI drivers or IDE chipset) survived the crash, reconstruction of the mirror will begin immediately on one of the spare disks, after detection of the drive fault.
    • Write performance is slightly worse than on a single device, because identical copies of the data must be written to every disk in the array. Read performance is usually pretty bad because of an oversimplified read-balancing strategy in the RAID code. However, a much improved read-balancing strategy has been implemented, which might be available for the Linux-2.2 RAID patches (ask on the linux-kernel list), and which will most likely be in the standard 2.4 kernel RAID support.
  • RAID-4
    • This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0 like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that the S in the (N-1)*S formula is the size of the smallest drive in the array.
    • If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.
    • The reason this level is not more frequently used is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk will become a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and a very fast one, this RAID level can be very useful.
  • RAID-5
    • This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks and still maintain some redundancy. RAID-5 can be used on three or more disks, with zero or more spare-disks. The resulting RAID-5 device size will be (N-1)*S, just as for RAID-4. The big difference between RAID-5 and RAID-4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem of RAID-4.
    • If one of the disks fails, all data are still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data are lost; RAID-5 can survive one disk failure, but not two or more.
    • Both read and write performance usually increase, but it's hard to predict how much.

Spare disks

Spare disks are disks that do not take part in the RAID set until one of the active disks fails. When a device failure is detected, that device is marked as ``bad'' and reconstruction is immediately started on the first available spare-disk.

Thus, spare disks add a nice extra safety margin, especially to RAID-5 systems that are perhaps hard to get to (physically). One can allow the system to run for some time with a faulty device, since all redundancy is preserved by means of the spare disk.
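To make the terms from section 2.2 concrete, here is a minimal sketch of how a RAID-5 array with four active disks and one spare disk might be described in /etc/raidtab, the configuration file used by the raidtools. The partition names are purely hypothetical, and options such as the chunk size are placeholders only:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          1
        persistent-superblock   1
        chunk-size              32
        parity-algorithm        left-symmetric
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3
        device                  /dev/sde1
        spare-disk              0

Here N is 4 (the spare disk is not counted), so the resulting device will hold (N-1)*S = 3*S, where S is the size of the smallest of the four active partitions. The spare partition /dev/sde1 sits idle until one of the active disks fails.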

You cannot be sure that your system will survive a disk crash. The RAID layer should handle device failures just fine, but SCSI drivers could be broken in their error handling, the IDE chipset could lock up, or a lot of other things could go wrong.

2.4 Swapping on RAID

There's no reason to use RAID for swap performance. The kernel itself can stripe swapping over several devices, if you just give them the same priority in the fstab file.

A nice fstab looks like:

/dev/sda2       swap           swap    defaults,pri=1   0 0
/dev/sdb2       swap           swap    defaults,pri=1   0 0
/dev/sdc2       swap           swap    defaults,pri=1   0 0
/dev/sdd2       swap           swap    defaults,pri=1   0 0
/dev/sde2       swap           swap    defaults,pri=1   0 0
/dev/sdf2       swap           swap    defaults,pri=1   0 0
/dev/sdg2       swap           swap    defaults,pri=1   0 0
This setup lets the machine swap in parallel on seven SCSI devices. No need for RAID, since this has been a kernel feature for a long time.

Another reason to use RAID for swap is high availability. If you set up a system to boot from e.g. a RAID-1 device, the system should be able to survive a disk crash. But if the system has been swapping on the now faulty device, you are surely going down. Swapping on the RAID-1 device would solve this problem.

There has been a lot of discussion about whether swap is stable on RAID devices. This is a continuing debate, because it depends highly on other aspects of the kernel as well. As of this writing, it seems that swapping on RAID should be perfectly stable, except while the array is reconstructing (e.g. after a new disk is inserted into a degraded array). When 2.4 comes out this issue will most likely be addressed fairly quickly, but until then, you should stress-test the system yourself until you are either satisfied with the stability or conclude that you won't be swapping on RAID.

You can put swap in a swap file on a filesystem on your RAID device, or you can use a RAID device directly as a swap partition, as you see fit. As usual, the RAID device is just a block device.
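As a sketch of the latter variant, assuming the array /dev/md0 already exists and is meant to be used entirely for swap:

mkswap /dev/md0           # put a swap signature on the RAID device
swapon /dev/md0           # start swapping on it

To have the swap space activated automatically at boot, add a line for /dev/md0 to /etc/fstab, in the same style as the example above.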

