Entries Tagged as 'Redundant Array of Inexpensive Disks (RAID)'

Affordable RAID5 NAS

What a difference a year makes in the storage market… 1TB drives cost under $150 each and Network Attached Storage devices are almost consumer grade.

For about $300 you can purchase a Promise Technology SmartStore NS4300N; put up to four SATA-II hard drives in it and have yourself a fault tolerant storage device that your Windows, Mac, and *nix computers can access via their native file sharing protocols, and that you can manage via your browser.

The device is derived from an Intel reference design, obviously using Intel technology.  It's got relatively good performance, is very easy to use, and gives anyone, whatever their level of computer ability, a simple fault tolerant storage device of up to 3TB of usable space (assuming you buy four 1TB drives and configure it for RAID5).

The technology of this device is very similar to the 16-channel SATA-II RAID5/6 controllers I use in my servers, and the device is somewhat like the Infrant ReadyNAS 600s that I was quite fond of (Infrant was acquired by NetGear, and since then they have been slow to innovate, and maintained what I would say is an outdated pricing model).

There's a host of reasons beyond just having a fault tolerant storage device that make something like this a potential buy.  You don't need to keep the computers you're using powered on to access data (that can be important if you have multiple computers), you don't need to worry about backing up your data if you need to re-install your operating system, and you don't need to worry about how to share data between Windows and Mac.

The only downside I've found to the Promise versus the Infrant devices is that Promise botched the implementation of spin-down, so the device keeps the drives spinning all the time.  Yeah, it would save a little power to spin down the drives when they weren't being accessed (at the cost of taking longer to access data once they've spun down), but with today's drives we're not talking about that much power — and when purchasing drives you have the option of ones with "green" / high-efficiency ratings.

For both small business and personal use, if you depend on your storage I highly recommend you consider a device like this.

 Promise SmartStore NS4300N

Originally posted 2008-05-15 22:11:53.

Disk Drive Temperature / Airflow

I upgraded both of my workstations (one Windows, one Linux) to have a mirror pair as the secondary drive…  which added a third drive to each of the cases (the cases are set up so that you can have five 3.5″ internal drives and four 5.25″ external units)… the 400GB SATA-2 drive in the Windows machine kept producing SMART warnings that it was getting close to the recommended maximum temperature, and I decided it likely had to do with the fact that the power management of the motherboard slowed down the main case fan, which reduced the airflow.

The case actually had two cutouts for fans in front of the disk drive array, so I wired up a fan for each cutout off a single power connector, put the fans in, and now the drives are running cooler (the 3TB SATA-3 drives in the mirror in the Windows machine are much newer and run much cooler).

Keep in mind that the cooler your drives run, the longer they'll probably last and the fewer problems you're going to have — plus when you run drives close to their maximum recommended temperature you're going to see thermal re-calibrations, which will make your computer look like it's hanging, or at least stuttering.
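If you want to keep an eye on drive temperature yourself, the smartmontools package can report it from the command line (a minimal sketch, assuming smartmontools is installed; the device name is just an example, and the exact attribute name varies by drive vendor):

  • smartctl -A /dev/sda | grep -i temperature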

While I don't think you should go crazy with fans, you should ensure that any location in the case with a heat producing component has airflow — and many fans come with speed adjustments, so you can run them at their lowest setting and provide enough airflow while minimizing the fan noise (which can be deafening if you have lots of fans).

One last thing — make sure when you buy fans you buy good quality ball-bearing fans — if you don’t, you’re just wasting money and asking for a fan failure (plus way too much noise).

Originally posted 2013-07-10 08:00:16.

Online Capacity Expansion

Well…

  • Call me old fashioned…
  • Call me conservative…
  • Call me a doubting “Thomas”…
  • Call me tickled pink…
  • Call me surprised…

I just finished adding four additional spindles to one of my virtual hosts; when I originally built it out I only had four spindles available, and didn’t want to buy more since I knew I would be freeing up smaller spindles for it soon.

The first task was to have the RAID software add the new spindles to the array, then to “expand” the array container… the first step took only a few moments, the second step took about 20 hours for the array controller to rebuild / expand the array.

The second task was to get Windows to actually use the added space by expanding the volume; that was a simple matter of using diskpart.exe (you can search Microsoft's Knowledge Base for the details) and only took a few moments.
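For reference, the diskpart sequence is roughly the following (a sketch; the volume number is only an example, so check the output of "list volume" first):

  • diskpart
  • list volume
  • select volume 3
  • extend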

The incredible thing about this was that my virtual host and virtual machines were online for the entire 20 hours — with absolutely no service interruption.

This particular machine used a Dell / LSI controller; but the Promise controllers also support dynamic capacity expansion, as do 3Ware controllers.  I believe the Intel Matrix pseudo-RAID controller also supports dynamic capacity expansion; but as with other RAID and pseudo-RAID controllers you should check the documentation specific to it and consult the manufacturer's web site for errata and updates before proceeding.

The bottom line is Windows and RAID arrays have come a long way, and it’s quite possible that you will be able to expand the capacity of your array without taking your server down; however, if the data on the server is irreplaceable, I recommend you consider backing it up (at least the irreplaceable data).

Originally posted 2008-12-01 12:00:56.

Ubuntu – Creating A Disk Mirror

A disk mirror, or RAID1, is a fault tolerant disk configuration in which every block of one drive is mirrored on a second drive; this provides the ability to lose one drive (or have damaged sectors on one drive) and still retain data integrity.

RAID1 will have lower write performance than a single drive, but will likely have slightly better read performance than a single drive.  Other types of RAID configurations will have different characteristics; but RAID1 is simple to configure and maintain (and conceptually it's easy for most anyone to understand the mechanics), and it's the topic of this article.

Remember, all these commands will need to be executed with elevated privileges (as super-user), so they’ll have to be prefixed with ‘sudo’.

First step, select two disks — preferably identical (but as close to the same size as possible) — that don't have any data on them (or at least don't have any important data on them).  You can use Disk Utility (GUI) or gparted (GUI) or cfdisk (CLI) or fdisk (CLI) to confirm that the disks have no data and change (or create) the partition type to "Linux raid autodetect" (type "fd") — also note the devices that correspond to the drives; they will be needed when building the array.
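If you go the fdisk route, the interactive session looks roughly like this (a sketch; /dev/sde is just the first of the example devices used below):

  • fdisk /dev/sde
  • (within fdisk: n creates a new partition, t changes its type to fd, p prints the table to verify, and w writes the changes and exits)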

Check to make sure that mdadm is installed; if not you can use the GUI package manager to download and install it; or simply type:

  • apt-get install mdadm

For this example, we’re going to say the drives were /dev/sde and /dev/sdf.

Create the mirror by executing:

  • mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 missing
  • mdadm --manage /dev/md0 --add /dev/sdf1

Now you have a mirrored drive, /dev/md0.

At this point you could set up an LVM volume, but we're going to keep it simple (and for most users, there's no real advantage to using LVM).

Now you can use Disk Utility to create a partition (I’d recommend a GPT style partition) and format a file system (I’d recommend ext4).
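If you'd rather stay on the command line, here's a minimal sketch that skips the partition table and formats the array directly (the /mirror mount point matches the fstab example below):

  • mkfs.ext4 /dev/md0
  • mkdir /mirror
  • mount /dev/md0 /mirror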

You will want to decide on the mount point.

You will probably have to add an entry to /etc/fstab and /etc/mdadm/mdadm.conf if you want the volume mounted automatically at boot (I’d recommend using the UUID rather than the device names).

Here’s an example mdadm.conf entry

  • ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d84d477f:c3bcc681:679ecf21:59e6241a

And here’s an example fstab entry

  • UUID=00586af4-c0e8-479a-9398-3c2fdd2628c4 /mirror ext4 defaults 0 2

You can use mdadm to get the UUID of the mirror (RAID) container

  • mdadm --examine --scan

And you can use blkid to get the UUID of the file system

  • blkid

You should probably make sure that you have SMART monitoring installed on your system so that you can monitor the status (and predictive failure) of drives.  To get information on the mirror you can use the Disk Utility (GUI) or just type

  • cat /proc/mdstat
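If smartmontools isn't already installed, it's an easy add; something like the following (the device name is just an example) gives you a quick health summary for each member drive:

  • apt-get install smartmontools
  • smartctl -H /dev/sde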

There are many resources on setting up mirrors on Linux; for starters you can simply look at the man pages for the mdadm command.

NOTE: This procedure was developed and tested using Ubuntu 10.04 LTS x64 Desktop.

Originally posted 2010-06-28 02:00:37.

Ubuntu – Creating A RAID5 Array

A RAID5 array is a fault tolerant disk configuration which uses a distributed parity block; this provides the ability to lose one drive (or have damaged sectors on one drive) and still retain data integrity.

RAID5 will likely have slightly lower write performance than a single drive, but will likely have significantly better read performance than a single drive.  Other types of RAID configurations will have different characteristics.  RAID5 requires a minimum of three drives, and may have as many drives as desired; however, at some point RAID6 with multiple parity blocks should be considered because of the potential for an additional drive failure during a rebuild.

The following instructions will illustrate the creation of a RAID5 array with four SATA drives.

Remember, all these commands will need to be executed with elevated privileges (as super-user), so they’ll have to be prefixed with ‘sudo’.

First step, select your disks — preferably identical (but as close to the same size as possible), at least three of them, and four in this example — that don't have any data on them (or at least don't have any important data on them).  You can use Disk Utility (GUI) or gparted (GUI) or cfdisk (CLI) or fdisk (CLI) to confirm that the disks have no data and change (or create) the partition type to "Linux raid autodetect" (type "fd") — also note the devices that correspond to the drives; they will be needed when building the array.

Check to make sure that mdadm is installed; if not you can use the GUI package manager to download and install it; or simply type:

  • apt-get install mdadm

For this example, we’re going to say the drives were /dev/sde /dev/sdf /dev/sdg and /dev/sdh.

Create the RAID5 by executing:

  • mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd{e,f,g,h}1

Now you have a RAID5 fault tolerant drive sub-system, /dev/md1 (the defaults for chunk size, etc. are reasonable for general use).
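If you do want to override those defaults, mdadm accepts them at creation time; for example (the chunk size shown is only an illustration, not a recommendation):

  • mdadm --create /dev/md1 --level=5 --raid-devices=4 --chunk=512 /dev/sd{e,f,g,h}1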

At this point you could set up an LVM volume, but we're going to keep it simple (and for most users, there's no real advantage to using LVM).

Now you can use Disk Utility to create a partition (I’d recommend a GPT style partition) and format a file system (I’d recommend ext4).

You will want to decide on the mount point.

You will probably have to add an entry to /etc/fstab and /etc/mdadm/mdadm.conf if you want the volume mounted automatically at boot (I’d recommend using the UUID rather than the device names).

Here’s an example mdadm.conf entry

  • ARRAY /dev/md1 level=raid5 num-devices=4 UUID=d84d477f:c3bcc681:679ecf21:59e6241a

And here’s an example fstab entry

  • UUID=00586af4-c0e8-479a-9398-3c2fdd2628c4 /mirror ext4 defaults 0 2

You can use mdadm to get the UUID of the RAID5 container

  • mdadm --examine --scan

And you can use blkid to get the UUID of the file system

  • blkid

You should probably make sure that you have SMART monitoring installed on your system so that you can monitor the status (and predictive failure) of drives. To get information on the RAID5 container you can use the Disk Utility (GUI) or just type

  • cat /proc/mdstat

There are many resources on setting up RAID5 sub-systems on Linux; for starters you can simply look at the man pages for the mdadm command.

NOTE: This procedure was developed and tested using Ubuntu 10.04 LTS x64 Desktop.

Originally posted 2010-06-29 02:00:15.

Ubuntu – RAID Creation

I think learning how to use mdadm (/sbin/mdadm) is a good idea, but in Ubuntu Desktop you can use Disk Utility (/usr/bin/palimpsest) to create most any of your RAID (“multiple disk”) configurations.

In Disk Utility, just access "File->Create->Raid Array…" on the menu and choose the options.  Before doing that, you might want to clear off the drives you're going to use (I generally create a fresh GPT partition table to ensure the drive is ready to be used as a component of the RAID array).

Once you’ve created the container with Disk Utility; you can even format it with a file system; however, you will still need to manually add the entries to /etc/mdadm/mdadm.conf and /etc/fstab.

One other minor issue I noticed.

I gave my multiple disk containers names (mirror00, mirror01, …) and Disk Utility will show them mounted on device /dev/md/mirror00 — in point of fact, you want to use device names like /dev/md0, /dev/md1, … in the /etc/mdadm/mdadm.conf file.  Also, once again, I highly recommend that you use the UUID for the array configuration (in mdadm.conf) and for the file system (in fstab).
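One way to generate the mdadm.conf entries rather than typing them by hand is to append the scan output and then edit it to taste (a sketch; run with elevated privileges, and you'll probably want to rebuild the initramfs afterwards so the array assembles at boot):

  • mdadm --examine --scan >> /etc/mdadm/mdadm.conf
  • update-initramfs -u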

Originally posted 2010-07-12 02:00:33.

Disk Bench

I’ve been playing with Ubuntu here of late, and looking at the characteristics of RAID arrays.

What got me on this is that when I formatted an ext4 file system on a four drive RAID5 array created using an LSI 150-4 [hardware RAID] controller, I noticed that it took longer than I thought it should; and while most readers probably won't be interested in whether or not to use the LSI 150 controller they have in their spare parts bin to create a RAID array on Linux, the numbers below are interesting just in deciding what type of array to create.

These numbers are obtained from the disk benchmark in Disk Utility; this is only a read test (write performance is going to be quite a bit different, but unfortunately the write test in Disk Utility is destructive, and I’m not willing to lose my file system contents at this moment; but I am looking for other good benchmarking tools).
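For a quick, non-destructive read check from the command line, hdparm works in a pinch (a rough sketch; it only measures sequential buffered reads, and the device name is just an example):

  • hdparm -t /dev/md1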

Controller / config     Drives   Avg access   Min read      Max read      Avg read
ICH8 single                1      17.4 ms     14.2 MB/s      23.4 MB/s     20.7 MB/s
ICH8 RAID1 (mirror)        2      16.2 ms     20.8 MB/s      42.9 MB/s     33.4 MB/s
ICH8 RAID5                 4      18.3 ms     17.9 MB/s     221.2 MB/s    119.1 MB/s
SiI3132 RAID5              4      18.4 ms     17.8 MB/s     223.6 MB/s    118.8 MB/s
LSI150-4 RAID5             4      25.2 ms     12.5 MB/s      36.6 MB/s     23.3 MB/s

All the drives used are similar class drives: Seagate Momentus 120GB 5400.6 (ST9120315AS) for the single drive and RAID1 (mirror) tests, and Seagate Momentus 500GB 5400.6 (ST9500325AS) for all the RAID5 tests.  Additionally, all drives show that they are performing well within acceptable operating parameters.

Originally posted 2010-06-30 02:00:09.

Linux Server

I’ve been experimenting with a Linux server solution for the past couple months — I was prompted to look at this when my system disk failed in a Windows Server 2008 machine.

First, I'm amazed that after all these years Microsoft doesn't have a standard module for monitoring the health of a system — at the very least the SMART data from disk drives.

I do have an Acronis image of the server from when I first installed it, but it would be a pain to reconfigure everything on that image to be as it was — and I guess I just haven’t been that happy with Windows Server 2008.

I personally find Windows Server 2008 needlessly complicated.

I'm not even going to start ranting on Hyper-V (I've done that enough, comparing it head-to-head with other technology… all I will say is it's a good thing their big competitor is VMware, or else Microsoft would really have to worry about having such a pathetic virtualization offering).

With a Linux distribution it's a very simple thing to install a basic server. I actually tried Ubuntu, CentOS, and Fedora. I looked at the Xen distribution as well, but that wasn't really of interest for a general purpose server.

Personally I found CentOS (think Red Hat) to be a little too conservative on their releases/features; I found Fedora to be a little too bleeding edge on their releases/features (plus there's no long term support commitment); so I was really just left with Ubuntu.

I didn’t really see any reason to look exhaustively at every Debian based distribution — Ubuntu was, in my mind, the best choice of that family; and I didn’t want to look at any distribution that wasn’t available at no cost, nor any distribution that didn’t have a good, stable track record.

With Ubuntu 10.04 LTS (10.04 is a Long Term Support release – which makes it a very good choice to build a server on) you could choose the Desktop or the Server edition — the main difference with the Server versus the Desktop is that the Server edition does not install the X server and graphical desktop components (you can add them).

The machine I was installing on had plenty of memory and processor to support a GUI, and I saw no reason not to install the Desktop version (I did try out the Server version on a couple of installs — and perhaps if you have an older machine, or a machine with very limited memory, or a machine that will be taxed to its limits, or a machine where you want the absolute smallest attack surface, you'd want the Server edition — though almost all those requirements would probably make me shift to CentOS rather than Ubuntu).

My requirements were fairly simple — I wanted to replace the failed Windows 2008 Server with a machine that could perform my DNS, DHCP, web server, file store (home directories — served via CIFS/Samba), and active P2P downloads.

Additionally, the server would have to have fault-tolerant file systems (as did the Windows server).

Originally my testing focused on just making sure all the basic components worked, and worked reasonably well.

Then I moved on to getting all the tools I had written working (I converted all the C# code to PHP).

My final phase involved evaluating fault tolerant options. Initially I’d just used the LSI 150-4 RAID controller I had in the Windows Server 2008 (Linux supported it with no real issues — except that Linux was not able to monitor the health of the drives or the array).

I didn’t really see much need to use RAID5 as I had done with Windows Server 2008; so I concentrated on just doing RAID1 (mirroring) — I tried basic mirrors just using md, as well as using lvm (over md).

My feeling was that lvm added an unnecessary level of complexity on a standalone server (that isn't to say that lvm doesn't have features that some individuals might want or need). So my tests focused primarily on just simple mirrors using md.

I tested performance of my LSI 150-4 RAID5 SATA1 PCI controller (with four SATA2 drives) against RAID1 SATA2 using Intel ICH9 and SiI3132 controllers (with pairs of SATA1 or SATA2 drives). I’d expected that the LSI 150-4 would outperform the md mirror with SATA1 drives on both read and write, but that with SATA2 drives I’d see better reads on the md mirror.

I was wrong.

The md mirrors actually performed better across the board (though negligibly better with SATA1 drives attached) — and the amazing thing was that CPU utilization was extremely low.

Now, let me underscore here that the LSI 150-4 controller is a PCI-X (64-bit) controller that I’m running as PCI (32-bit); and the LSI 150-4 represents technology that’s about six years old… and the LSI 150-4 controller is limited to SATA1 with no command set enhancements.

So this comparison wouldn’t hold true if I were testing md mirrors against a modern hardware RAID controller — plus the other RAID controllers I have are SAS/SATA2 PCIe and have eight and sixteen channels (more spindles means more performance).

Also, I haven’t tested md RAID5 performance at all.

My findings at present are that you can build a fairly high performance Linux based server for a small investment. You don’t need really high end hardware, you don’t need to invest in hardware RAID controllers, and you don’t need to buy software licenses — you can effectively run a small business or home office environment with confidence.

Originally posted 2010-06-24 02:00:09.

SFF-8484 to 4 x SATA Cables

I just purchased a Dell PERC 5/i (basically an LSI 8404) RAID card off eBay and I needed to purchase two SFF-8484 cables to connect it to my SATA hot swap bays.

There seems to be a great deal of confusion on eBay from vendors that have these cables — many of the vendors just don’t know what they have; and it’s important to know, since there are two different cables fitting the general description — and they are not interchangeable.

The cable I needed could be identified by a Tripp Lite part number S502-01M, an Adaptec part number 2167000-R (discontinued), or a StarTech part number SAS84S450.

The description should contain the key phrase that the cable is used to attach a SAS (or SATA) HBA (Host Bus Adapter) to individual SATA drives.  The description should not mention anything about hooking up a SATA controller to a SATA/SAS back plane.

What’s the difference in the cables???

Well, the SAS controller to SATA device cable is straight through; the SATA controller to SAS backplane cable has the RX and TX pairs swapped… and generally speaking there's not a lot of call for the SATA controller to SAS backplane cable, so those will be the least expensive, and the most prevalent on eBay.

The sellers who do know what they have, and advertise it as such, want a phenomenal price for the cables (they're only $19.99 on Amazon; buy the two you'll need and they ship free)…

Do your homework and ask your questions before you commit to buy on eBay — particularly if it's from China or Hong Kong (it'll take several weeks to get the item, and returning it will cost half the price you paid).  While Amazon's gone downhill a great deal recently, it's still easy to return, and in the long run you might save both time and money.

SFF-8484
Tripp Lite S502-01M

Originally posted 2010-11-13 01:00:28.