RAID Configuration on Linux
With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. This replication makes Amazon EBS volumes ten times more reliable than typical commodity disk drives. For more information, see Amazon EBS Availability and Durability in the Amazon EBS product detail pages.
If you need to create a RAID array on a Windows instance, see RAID Configuration on Windows in the Amazon EC2 User Guide for Microsoft Windows Instances.
RAID Configuration Options
The following table compares the common RAID 0 and RAID 1 options.
| Configuration | Use | Advantages | Disadvantages |
|---|---|---|---|
| RAID 0 | When I/O performance is more important than fault tolerance; for example, as in a heavily used database (where data replication is already set up separately). | I/O is distributed across the volumes in a stripe. If you add a volume, you get the straight addition of throughput. | Performance of the stripe is limited to the worst performing volume in the set. Loss of a single volume results in a complete data loss for the array. |
| RAID 1 | When fault tolerance is more important than I/O performance; for example, as in a critical application. | Safer from the standpoint of data durability. | Does not provide a write performance improvement; requires more Amazon EC2 to Amazon EBS bandwidth than non-RAID configurations because the data is written to multiple volumes simultaneously. |
Important
RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes. Depending on the configuration of your RAID array, these RAID modes provide 20-30% fewer usable IOPS than a RAID 0 configuration. Increased cost is a factor with these RAID modes as well; when using identical volume sizes and speeds, a 2-volume RAID 0 array can outperform a 4-volume RAID 6 array that costs twice as much.
Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume. A RAID 1 array offers a "mirror" of your data for extra redundancy. Before you perform this procedure, you need to decide how large your RAID array should be and how many IOPS you want to provision.
The resulting size of a RAID 0 array is the sum of the sizes of the volumes within it, and the bandwidth is the sum of the available bandwidth of the volumes within it. The resulting size and bandwidth of a RAID 1 array is equal to the size and bandwidth of the volumes in the array. For example, two 500 GiB Amazon EBS volumes with 4,000 provisioned IOPS each will create a 1000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 640 MB/s of throughput or a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 320 MB/s of throughput.
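The sizing arithmetic above can be sketched in a few lines of shell. The values (2 volumes, 500 GiB, 4,000 IOPS each) mirror the example and are illustrative, not limits:

```shell
#!/bin/sh
# Sketch of the RAID 0 vs. RAID 1 sizing arithmetic from the example above.
# Values (2 volumes of 500 GiB at 4,000 IOPS each) are illustrative only.
volumes=2
size_gib=500
iops=4000

# RAID 0: capacity and IOPS are the sums across all member volumes.
echo "RAID 0: $((volumes * size_gib)) GiB, $((volumes * iops)) IOPS"
# RAID 1: capacity and IOPS equal those of a single member volume.
echo "RAID 1: ${size_gib} GiB, ${iops} IOPS"
```

Running this prints `RAID 0: 1000 GiB, 8000 IOPS` and `RAID 1: 500 GiB, 4000 IOPS`, matching the example.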
This documentation provides basic RAID setup examples. For more information about RAID configuration, performance, and recovery, see the Linux RAID Wiki at https://raid.wiki.kernel.org/index.php/Linux_Raid.
Creating a RAID Array on Linux
Use the following procedure to create the RAID array. Note that you can get directions for Windows instances from Creating a RAID Array on Windows in the Amazon EC2 User Guide for Microsoft Windows Instances.
To create a RAID array on Linux
Create the Amazon EBS volumes for your array. For more information, see Creating an Amazon EBS Volume.
Important
Create volumes with identical size and IOPS performance values for your array. Make sure you do not create an array that exceeds the available bandwidth of your EC2 instance. For more information, see Amazon EC2 Instance Configuration.
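One way to check an instance type's EBS bandwidth limits before sizing the array is the AWS CLI. This is a sketch only: it assumes configured AWS credentials, and `m5.xlarge` is an arbitrary example instance type:

```shell
# Sketch: look up the EBS-optimized performance limits for an instance type
# so you can keep the array's combined IOPS and throughput under them.
# Requires configured AWS credentials; m5.xlarge is only an example.
aws ec2 describe-instance-types \
  --instance-types m5.xlarge \
  --query 'InstanceTypes[0].EbsInfo.EbsOptimizedInfo' \
  --output json
```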
Attach the Amazon EBS volumes to the instance that you want to host the array. For more information, see Attaching an Amazon EBS Volume to an Instance.
Use the mdadm command to create a logical RAID device from the newly attached Amazon EBS volumes. Substitute the number of volumes in your array for `number_of_volumes` and the device names for each volume in the array (such as `/dev/xvdf`) for `device_name`. You can also substitute `MY_RAID` with your own unique name for the array.

Note

You can list the devices on your instance with the lsblk command to find the device names.

(RAID 0 only) To create a RAID 0 array, execute the following command (note the `--level=0` option to stripe the array):

```
[ec2-user ~]$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2
```

(RAID 1 only) To create a RAID 1 array, execute the following command (note the `--level=1` option to mirror the array):

```
[ec2-user ~]$ sudo mdadm --create --verbose /dev/md0 --level=1 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2
```

Create a file system on your RAID array, and give that file system a label to use when you mount it later. For example, to create an `ext4` file system with the label `MY_RAID`, execute the following command:

```
[ec2-user ~]$ sudo mkfs.ext4 -L MY_RAID /dev/md0
```

Depending on the requirements of your application or the limitations of your operating system, you can use a different file system type, such as ext3 or XFS (consult your file system documentation for the corresponding file system creation command).

Create a mount point for your RAID array.

```
[ec2-user ~]$ sudo mkdir -p /mnt/raid
```

Finally, mount the RAID device on the mount point that you created:

```
[ec2-user ~]$ sudo mount LABEL=MY_RAID /mnt/raid
```

Your RAID device is now ready for use.
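Before putting data on the array, you may want to confirm that it assembled cleanly. A minimal check, assuming the array was created as `/dev/md0` as in the commands above:

```shell
# Show all active md arrays as the kernel sees them (level, members, state).
cat /proc/mdstat

# Show detailed status for the new array, including the initial sync
# progress that runs after creation (assumes /dev/md0 from the steps above).
sudo mdadm --detail /dev/md0
```

For RAID 1, `mdadm --detail` reports an initial resync after creation; the array is usable during this time, but redundancy is not complete until the resync finishes.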
(Optional) To mount this Amazon EBS volume on every system reboot, add an entry for the device to the `/etc/fstab` file.

Create a backup of your `/etc/fstab` file that you can use if you accidentally destroy or delete this file while you are editing it.

```
[ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig
```

Open the `/etc/fstab` file using your favorite text editor, such as nano or vim. Add a new line to the end of the file for your volume using the following format:

```
device_label mount_point file_system_type fs_mntops fs_freq fs_passno
```

The last three fields on this line are the file system mount options, the dump frequency of the file system, and the order of file system checks done at boot time. If you don't know what these values should be, then use the values in the example below (`defaults,nofail 0 2`). For more information about `/etc/fstab` entries, see the fstab manual page (by entering man fstab on the command line).

Note

If you ever intend to boot your instance without this volume attached (for example, so this volume could move back and forth between different instances), you should add the `nofail` mount option, which allows the instance to boot even if there are errors in mounting the volume. Debian derivatives, such as Ubuntu, must also add the `nobootwait` mount option.

For example, to mount the ext4 file system on the device with the label `MY_RAID` at the mount point `/mnt/raid`, add the following entry to `/etc/fstab`:

```
LABEL=MY_RAID /mnt/raid ext4 defaults,nofail 0 2
```

After you've added the new entry to `/etc/fstab`, you need to check that your entry works. Run the sudo mount -a command to mount all file systems in `/etc/fstab`.

```
[ec2-user ~]$ sudo mount -a
```

If the previous command does not produce an error, then your `/etc/fstab` file is OK and your file system will mount automatically at the next boot. If the command does produce any errors, examine the errors and try to correct your `/etc/fstab`.

Warning

Errors in the `/etc/fstab` file can render a system unbootable. Do not shut down a system that has errors in the `/etc/fstab` file.

(Optional) If you are unsure how to correct `/etc/fstab` errors, you can always restore your backup `/etc/fstab` file with the following command:

```
[ec2-user ~]$ sudo mv /etc/fstab.orig /etc/fstab
```
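The `/etc/fstab` entry handles mounting, but on some distributions the array itself can be reassembled under a different device name (for example, `/dev/md127`) after a reboot unless its definition is recorded. As a hedge, you can capture the array in mdadm's configuration file; the path shown below is for Amazon Linux, and Debian derivatives typically use `/etc/mdadm/mdadm.conf` instead:

```shell
# Append the array definition so it reassembles under a stable name at boot.
# Path shown is for Amazon Linux; Debian/Ubuntu use /etc/mdadm/mdadm.conf.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
```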

