web apps | arfore dot com

There are many really nice web apps out there now. Some are designed for pure entertainment, others for tracking personal information, and still others serve a clear practical purpose.

Here are a couple that I like:

  • My Mile Marker – a nice app that helps you track your car’s MPG over time.
  • Wufoo – an online HTML form builder. They have a number of pre-designed templates that you can choose from and alter.
  • Typetester – an online font comparison app that helps you see what your online content will look like in various fonts.

What web apps are out there that you use or find particularly interesting?

Sungevity is a company that does residential solar panel installations. They have a cool web app that lets you enter your address and then determines how much solar capacity you will need. They use satellite imagery to help design the system.

When you’re ready to see how much solar your home needs, Sungevity makes it easy. Simply enter your address, and we’ll design a system for your roof remotely, using satellite images. We’ll get back to you with the systems that will fit on your roof – all online and free.

Pretty cool, but unfortunately it is for California residents only.

The Digital Divide Hits the Airwaves | arfore dot com

This next week the Senate is expected to vote on legislation to delay the transition of broadcast television in the United States from analog signals to digital signals.

The initial deadline was to have been February 17, 2009; however, some in Congress, as well as President Obama, argued that more time was needed because evidence showed that consumers were not prepared. The new legislation sets the deadline for the switch at June 12, 2009, though broadcasters can switch over to digital before that date if they choose.

While I am sure that not enough money was provided to fund the coupon program, and that the whole information campaign has been bungled from the beginning, I don’t think that extending the deadline is really going to do any good. Consumers have been seeing ads from both the federal government and their local cable providers explaining the transition and what they can do about it; moving the date will not change that.

No matter when the deadline is moved to, some consumers are going to be left out in the cold. Sometimes you have to pull the band-aid off fast in order to lessen the long-term pain.

ReCaptcha, SSL, and PHP | arfore dot com

So in the process of applying the new SSL cert here at work, I discovered an issue with the reCAPTCHA service.

The problem was that I was getting errors saying that my forms were only partially encrypted.  This was due to my use of the reCAPTCHA library, which by default doesn’t use an SSL connection to grab the challenge HTML.

The documentation at the reCAPTCHA site has a section on this. Specifically it says:

In order to avoid getting browser warnings, if you use reCAPTCHA on an SSL site, you should replace http://api.recaptcha.net with https://api-secure.recaptcha.net.

Ref: http://recaptcha.net/apidocs/captcha/client.html

The example it uses shows how to change the JavaScript itself. While this was nice to know, it didn’t help much in my particular case. To solve this when using the reCAPTCHA PHP library, all you need to do is change the value of a single variable. In the file recaptchalib.php, look for the function recaptcha_get_html and change the declaration to read as follows:

function recaptcha_get_html ($pubkey, $error = null, $use_ssl = true)

This will force all calls to be transmitted over an SSL connection, thus eliminating the dialog box in Internet Explorer and the slashed-lock in Firefox.

However, because of an issue with our website editing/management system, Adobe Contribute, I am not encrypting the entire site by default, so I had to do a bit more than just update the boolean default. Since some of my forms are encrypted and some are not, I added the following code to the function referenced above:

// force SSL whenever the form itself was requested over HTTPS
if ($_SERVER['SERVER_PORT'] == 443) {
    $use_ssl = true;
}

This needs to be added just above the check for the value of the variable $use_ssl in the function recaptcha_get_html. Once you do this you can use the same copy of the recaptchalib.php file for both secure and non-secure forms.
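
To confirm the change took effect, you can fetch one of your HTTPS form pages and check that the generated markup references the secure API host. This is just a quick sanity check of my own; the URL below is a placeholder for one of your own secure form pages.

curl -s https://www.example.edu/contact-form.php | grep 'api-secure.recaptcha.net'

If grep finds lines referencing api-secure.recaptcha.net, the challenge is being loaded over SSL and the mixed-content warnings should be gone.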

Apple TV cares about your FPS | arfore dot com

In the process of transferring my DVD collection to a digital media server, I discovered that the Apple TV software is smarter than I thought.

I have been ripping my DVD collection using Handbrake on my Mac and transferring the files to a Windows box, which is shared out over my internal-only network to the Apple TV using iTunes. I use the built-in Apple TV profile in Handbrake, which sets the encoder’s frame rate option to “Same As Source”. It turns out that if your rip has a final frame rate (fps) greater than 30, the resulting movie will not be available in the list of Shared Movies on the Apple TV.
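
If you want to check a rip’s frame rate before it ever reaches iTunes, something like the following works. This is a sketch that assumes you have the ffmpeg tools installed (ffprobe ships with them); Movie.m4v is a placeholder filename.

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 Movie.m4v

The command prints the video stream’s frame rate as a fraction (for example 30000/1001, roughly 29.97 fps); anything that works out to more than 30 is a candidate for the missing-movie problem.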

openfiler | arfore dot com

As many of you know, I have a Plex-based Mac Mini media center setup which has enabled me to cut the cord with the cable monopoly. Part of this setup is the Openfiler NAS that I use to store all of the digital copies of my DVD collection.

Lately I have been wishing that I had configured the NAS with a RAID setup instead of just using an all-eggs-in-one-basket approach with three drives in an LVM configuration. In the process of cleaning out a bunch of videos that I was never going to watch, I managed to free enough space to disband the existing volume group and set up my two 1TB drives in a RAID 1 set. The happy result was that I was also able to re-purpose the 500GB drive for my ever-growing iTunes library. (Thanks, Apple and Steve Jobs, for making all my music DRM free!)

The problem I ran into was that Openfiler will not let you create a RAID set in a degraded state, which is exactly what I needed in order to work with the drives I already owned and not spend additional funds. After discovering this I began investigating the possibility of doing the RAID configuration by hand and completing the rest of the setup in the Openfiler web console. Here is the process I used.

A few steps were left out, mainly the pieces revolving around moving the data from one logical volume to the other and then pruning the old volume from the volume group. Except for the copy process, all of this can be easily accomplished in the Openfiler web console. The only piece that I couldn’t easily determine from the console interface was which physical devices were used in which logical volume. That information can be found with the following command:

[root@mangrove ~]# lvdisplay -m
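
If you want a broader overview of the existing LVM layout before tearing it down, the standard LVM reporting commands are handy as well. This is an optional sanity check of my own, not part of the original procedure:

[root@mangrove ~]# pvs
[root@mangrove ~]# vgs
[root@mangrove ~]# lvs

Together these list the physical volumes, volume groups, and logical volumes, so you can see exactly what you are about to disband.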

Create the RAID 1 set with a missing drive and prepare the physical volume

Step 1: SSH as root into your Openfiler setup

arfore$ ssh root@mangrove

Step 2: As root, partition the RAID member using fdisk. You will want to create a single, primary partition. Accept the defaults on the partition size so that it uses the whole drive.

[root@mangrove ~]# fdisk /dev/sda
 
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601):
Using default value 121601

Next change the partition type to be Linux raid autodetect (Note: the hex code is fd)

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
 
Now exit saving the changes. This will write the new partition table to the drive.
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Step 3: Create the RAID 1 set using your newly partitioned drive. Normally when creating a RAID 1 set you would specify two drives, since that is the minimum number for a RAID 1 set in a clean, non-degraded state. However, in our case we need to start out with a set containing one drive, which will show up in Openfiler as a RAID 1 set in a clean, degraded state.

[root@mangrove ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

If it all worked, you should see the following result from the mdadm command:

mdadm: array /dev/md0 started.

To check the status of your newly created RAID 1 set, execute the following command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like this:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0]
      976759936 blocks [2/1] [U_]

unused devices: <none>

What this means is that your system supports RAID levels 1, 4, 5, 6, and 10. By extension this means it also supports RAID level 0. The [2/1] entry on the md0 line means that your RAID set is configured for two devices, but only one device exists. In a clean, non-degraded state, a RAID 1 set would show [2/2].

Step 4: Initialize a partition (in this case the RAID 1 partition) for use in a volume group.

[root@mangrove ~]# pvcreate /dev/md0

If the command completed successfully, you should see the following result:

Physical volume "/dev/md0" successfully created
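
Before heading to the web console, it doesn’t hurt to confirm that LVM really does see the new array as a physical volume. This verification step is my own addition, not part of the original procedure:

[root@mangrove ~]# pvdisplay /dev/md0

The output should show /dev/md0 as a new physical volume that is not yet allocated to any volume group.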

Create the Volume Group and Logical Volume

Note: The next parts are done in the Openfiler web console.

Step 1: First check the status of your RAID set by selecting the Volumes tab then clicking on Software RAID in the Volumes section.

Step 2: On the Software Raid Management screen you will see your new RAID 1 set listed with a state of Clean & degraded. Normally, this would indicate a possible drive failure, however in this instance it is expected.

Step 3: The next step is to create a new volume group using the physical volume created during the previous command line steps. Click on the Volume Groups link in the Volumes section.

Create your new Volume Group and select the physical volume to be added to the group. Then click the Add volume group button.

Notice that now your new volume group shows up in the Volume Group Management section.

Step 4: Next you need to add a new volume to your newly created volume group. You can do this on the Add Volume screen. Click on the Add Volume link in the Volumes section.

On the Add Volume screen, set up your new volume and click the Create button. The suggestion from Openfiler is to keep the default filesystem (which currently is XFS). Also, make sure to set the volume size (Required Space) to the size you want for the volume. In my case I selected the maximum available space.

The system now sends you to the Manage Volumes screen which will show you that you now have a volume group containing one volume.

At this point you can copy all your data over from the old LVM setup into the new LVM-on-RAID 1 setup. This may take some time. In my case, given that my hardware is far from the latest and greatest, it took just over an hour to copy 900GB of data.
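
The original write-up doesn’t show the copy itself, so here is one way to do it, assuming both the old and the new logical volumes are mounted on the NAS; the mount points below are placeholders for your own paths:

[root@mangrove ~]# rsync -avh --progress /mnt/old_volume/ /mnt/new_volume/

rsync preserves permissions and timestamps, and it can be re-run to pick up where it left off if the copy gets interrupted.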

Adding the empty drive to the RAID 1 set

The next steps assume that you have migrated your data from the last remaining volume group of the old setup into your degraded RAID 1 set. At this point we are going to add the free disk into the RAID set and let it start syncing the disks.

Step 1: The first thing you need to do is make sure that the partitioning structure of the new disk matches the structure of the existing RAID 1 member. In a RAID set the partition structures of the members need to match.

This can be easily accomplished using the following command:

[root@mangrove ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

The output from the command should look something like this:

Checking that no-one is using this disk right now ...
OK
 
Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
  Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 121600 121601- 976760001 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
  Device Boot Start End #sectors Id System
/dev/sdb1 63 1953520064 1953520002 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 Empty
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
 
Re-reading the partition table ...
 
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

As long as the process doesn’t throw any wonky error messages then you are good to move on to the next step.
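
If you want to double-check the result before touching the array, you can list both partition tables side by side; this is just an optional verification step of my own:

[root@mangrove ~]# fdisk -l /dev/sda /dev/sdb

Both drives should now show a single partition of type fd (Linux raid autodetect) covering the whole disk.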

Step 2: Now we need to add the disk /dev/sdb into the RAID 1 set /dev/md0.

[root@mangrove ~]# mdadm --manage /dev/md0 --add /dev/sdb1

If the command completes successfully, you will get the following result:

mdadm: added /dev/sdb1

Step 3: At this point the system should automatically begin syncing the disks. To check on this, run the command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like so:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[2] sda1[0]
      976759936 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (535488/976759936) finish=182.3min speed=89248K/sec

unused devices: <none>

Notice that the RAID 1 set is now in recovery mode, which means it is in the process of being synchronized. During this process the RAID Management screen in the Openfiler web console will show the state of the RAID as Clean & degraded & recovering, along with the progress of the synchronization.
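
If you would rather keep an eye on the resync from the shell instead of the web console, a simple watch loop works; this is optional and not part of the original steps:

[root@mangrove ~]# watch -n 30 cat /proc/mdstat

This re-runs cat /proc/mdstat every 30 seconds so you can see the recovery percentage and the estimated finish time tick along.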

Step 4: Once the RAID set has finished syncing the mdstat results will look as follows:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976759936 blocks [2/2] [UU]

unused devices: <none>

As you can see, the mdstat output now lists both members of the set, /dev/sda1 and /dev/sdb1, and shows that the two disks are fully synced, indicated by [2/2] [UU].

In the Openfiler web console the RAID Management screen now shows the state of the array as Clean with a sync status of Synchronized.

Final Steps

Step 1: Now that you have a completely clean RAID 1 set, you will need to ensure that the /etc/mdadm.conf file has the correct information concerning the array. Normally this is created automatically for you by the Openfiler administration tool, however since we built the array by hand we will need to add this information to the existing file. (Note: back up the existing mdadm.conf file first!)

This can be accomplished by the following commands:

[root@mangrove ~]# cp /etc/mdadm.conf /etc/mdadm.conf_orig
[root@mangrove ~]# mdadm --examine --scan >> /etc/mdadm.conf

The mdadm.conf file now contains the following (Note: the UUID entry will be unique to your system):

#
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was auto-generated
# by Openfiler. Please do not modify it.
#
# Generated at: Sat Jul 24 19:43:42 EDT 2010
#
 
DEVICE partitions
PROGRAM /opt/openfiler/bin/mdalert
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=542fa4dc:c920dae9:c062205a:a8df35f1

Testing the RAID set

Testing the RAID set is the next step that I would recommend. Now, if something really bad(TM) goes wrong here, you could end up losing data, so if you are worried, make sure you have a backup of it all somewhere else. On the other hand, if you don’t actually test the mirror, you don’t know that it works, and you could be relying on a setup that will fail you when you need it the most.

Step 1: Get the details of your existing RAID set before testing the removal of a device, so that you have something to compare against afterwards.

[root@mangrove ~]# mdadm --detail /dev/md0

This should yield some valuable data on the health of the array.

/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jul 24 19:55:44 2010
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul 25 13:41:07 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 542fa4dc:c920dae9:c062205a:a8df35f1
         Events : 0.3276

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Step 2: The next step is to actually fail a device in the array. Since /dev/sda was the initial disk I put into the set, I am going to fail it and see what happens.

To mark the device as failed, use the following command:

[root@mangrove ~]# mdadm --manage --fail /dev/md0 /dev/sda1

You should receive the following result:

mdadm: set /dev/sda1 faulty in /dev/md0

Step 3: Check the status of the array by using the following command:

[root@mangrove ~]# cat /proc/mdstat

You will see that the device /dev/sda1 is now marked as failed and that the status of the array is degraded:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[2](F)
      976759936 blocks [2/1] [_U]

unused devices: <none>

Also, in the RAID Management screen in the Openfiler web console you will see the state of the array change to be Clean & degraded.

Step 4: The next step is to test the mirroring of the data. Remove the drive that you marked as failed, then add it back into the mirror. In a real-world scenario you would also replace the physical drive itself, but that is not necessary for this test. If you were replacing the actual drive, you would need to repeat the duplication of the partitioning structure from the earlier step.

First, remove the failed drive from the set /dev/md0 using the following command:

[root@mangrove ~]# mdadm --manage --remove /dev/md0 /dev/sda1

The result that comes back will show a hot remove, since the command was executed while the system was live. In an enterprise environment where the storage array supports hotplug devices, you could then replace the failed drive without shutting the system down.

Next, add the new drive into the array like so:

[root@mangrove ~]# mdadm --manage --add /dev/md0 /dev/sda1

The result from this command will be:

mdadm: re-added /dev/sda1

Step 5: If nothing has gone horribly wrong with the test, the array will now begin the mirroring process again. As you can see from the output of mdstat, the recovery process will be indicated in the same manner as the initial mirror was:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[2] sdb1[1]
      976759936 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (834432/976759936) finish=175.4min speed=92714K/sec

unused devices: <none>

Note: this is not a smart recovery process. When you break the mirror, the entire mirroring process has to complete, even in a test where the data never actually disappeared. As you can see, the recovery is going to take about the same length of time that the initial build of the mirror did.

Also, in the RAID Management screen, the state of the array will now be shown as Clean & degraded & recovering, just as before when we built the mirror in the first place.

Step 6: Once the test rebuild has completed, the results of mdstat should look similar to the following:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976759936 blocks [2/2] [UU]

unused devices: <none>

If you had actually replaced a drive, you might need to update mdadm.conf to reflect the changes. At the very least it is wise to run the scan command again after the rebuild has completed, to ensure that the UUID of the array still matches the file.
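
A quick way to compare the live array against the config file is to print both and eyeball the ARRAY lines; this check is my suggestion rather than part of the original write-up:

[root@mangrove ~]# mdadm --examine --scan
[root@mangrove ~]# grep ^ARRAY /etc/mdadm.conf

If the UUIDs differ, replace the stale ARRAY line in /etc/mdadm.conf with the one reported by the scan.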

Final Thoughts

One of the big lessons that I learned from this whole process, both with Openfiler and with handling storage arrays in general, is that more thought and planning up front can save you a lot of tedium later. Had I planned my storage setup properly, I would have configured the drives as a RAID 1 set from the beginning. Another thing I learned is that while managing your NAS with an appliance-type setup (whether you use ClarkConnect, FreeNAS or Openfiler) is a great convenience, it doesn’t give you the insight and understanding that can be gained by doing everything, at least once, the manual way. I now have a much better understanding of how the software RAID functions work in Linux and of the LVM process as a whole.

References

This whole process would have been much more difficult without the input of a friend as well as a series of postings on the Internet. Thanks, Joe, for your help on this; it was my first foray into software RAID on Linux.

RAID | arfore dot com

As many of your know, I have a Plex-based Mac Mini media center setup which has enabled me to to the cord with the cable monopoly. Part of this setup is the Openfiler NAS that I use to store all of the digital copies of my dvd collection.

Lately I have been wishing that I had configured the NAS with a RAID setup instead of just using an all-eggs-in-one-basket approach with three drives in a LVM configuration. In the process of cleaning out a bunch of videos that I was never going to watch I managed to free enough space to be able to disband the existing volume group and setup my two 1TB drives in a RAID 1 set. The happy result was that I was also able to re-purpose the 500GB drive for use with my ever growing iTunes library. (Thanks, Apple and Steve Jobs for making all my music DRM free!)

The problem I ran into was that Openfiler will not allow you to create a RAID set in a degraded state. This was necessary to enable me to work with the drives I owned and not spend additional funds. After discovering this I began investigating the possibility of doing the RAID configuration by hand and completing the rest of the setup in the Openfiler web console. Here is the process I used.

A few steps were left out, mainly the pieces revolving around moving the data from one logical volume to the other then pruning the volume group of the old volume. Excpet for the copy process, all of this can be easily accomplished in the Openfiler web console. The only piece that I couldn’t easily determine from the console interface was which physical devices were used in which logical volume. That information can be easily found using the following command:

[root@mangrove ~]# lvdisplay -m

Create the RAID 1 set with a missing drive and prepare the physical volume

Step 1: SSH as root into your Openfiler setup

arfore$ ssh [email protected]

Step 2: As root, partition the RAID member using fdisk. You will want to create a single, primary partition. Accept the defaults on the partition size so that it uses the whole drive.

[root@mangrove ~]# fdisk /dev/sda
 
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601):
Using default value 121601

Next change the partition type to be Linux raid autodetect (Note: the hex code is fd)

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
 
Now exit saving the changes. This will write the new partition table to the drive.
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Step 3: Create the RAID 1 set using your newly partitioned drive. Normally when creating a RAID 1 set you would specify two drive since this minimum number for a RAID 1 set in a clean, non-degraded state. However in our case we need to start out with a set containing one drive, which will show up in Openfiler as a RAID 1 set in a clean, degraded state.

[root@mangrove ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

If it all worked, you should see the following result from the mdadm command:

mdadm: array /dev/md0 started.

To check the status of your newly created RAID 1 set, execute the following command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like this:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] 976759936 blocks [2/1] [U_]
 
unused devices:

What this means is that your system supports RAID levels 1, 4, 5, 6, and 10. By extension this means it also supports RAID level 0. The [2/1] entry on the md0 line means that your RAID set is configured for two devices, but only one device exists. In a clean, non-degraded state, a RAID 1 set would show [2/2].
Step 4: Initialize a partition (in this case the RAID 1 partition) for use in a volume group.

[root@mangrove ~]# pvcreate /dev/md0

If the command completed successfully, you should see the following result:

Physical volume "/dev/md0" successfully created

Create the Volume Group and Logical Volume

Note: The next parts are done in the Openfiler web console.

Step 1: First check the status of your RAID set by selecting the Volumes tab then clicking on Software RAID in the Volumes section.

Step 2: On the Software Raid Management screen you will see your new RAID 1 set listed with a state of Clean & degraded. Normally, this would indicate a possible drive failure, however in this instance it is expected.

Step 3: The next step is to create a new volume group using the physical volume created during the previous command line steps. Click on the Volume Groups link in the Volumes section.

Create your new Volume Group and select the physical volume to be added to the group. Then click the Add volume group button.

Notice that now your new volume group shows up in the Volume Group Management section.

Step 4: Next you need to add a new volume to your newly create volume group. You can do this on the Add Volume screen. Click on the Add Volume link in the Volumes section.

On the Add Volume screen, setup your new volume and click the create button. The suggestion from Openfiler is to keep the default filesystem (which currently is xfs). Also, make sure to increase the volume size or Required Space to create the desired volume. In my case I am going to select the maximum available space.

The system now sends you to the Manage Volumes screen which will show you that you now have a volume group containing one volume.

At this point you can copy all your data over from the old LVM setup into the new LVM with RAID 1 setup. This may take some time. In my case, given that may hardware is far from the latest and greatest, it took just over an hour to copy 900GB of data.

Adding the empty drive to the RAID 1 set

The next steps assume that you have migrated your data from the last remaining volume group from the old setup into you degraded RAID 1 set. At this point what we are going to do is to add the free disk into the RAID set and start it syncing the disks.

Step 1: The first thing you need to do is to make sure that the partitioning structure of the new disk matches the structure of the RAID 1 member. In a RAID set the parition structure needs to match.

This can be easily accomplished using the following command:

[root@mangrove ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

The output from the command should look something like this:

Checking that no-one is using this disk right now ...
OK
 
Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
  Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 121600 121601- 976760001 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
  Device Boot Start End #sectors Id System
/dev/sdb1 63 1953520064 1953520002 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 Empty
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
 
Re-reading the partition table ...
 
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

As long as the process doesn’t throw any wonky error messages then you are good to move on to the next step.

Step 2: Now we need to add the disk /dev/sdb into the RAID 1 set /dev/md0.

[root@mangrove ~]# mdadm --manage /dev/md0 --add /dev/sdb1

If the command completes successfully, you will get the following result:

mdadm: added /dev/sdb1

Step 3: At this point the system show automatically begin syncing the disks. To check on this run the command:

[root@mangrove ~]# cat /proc/mdstat

The result show look like so:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[2] sda1[0] 976759936 blocks [2/1] [U_] [>....................] recovery = 0.0% (535488/976759936) finish=182.3min speed=89248K/sec
 
unused devices:

Notice that it now shows that the RAID 1 set is in recovery mode. This indicates that the set is in the process of being synchronized. During this process the data shown on the RAID Management screen in the Openfiler web console will show that the state of the RAID is Clean & degraded & recovering. It will also show the progress of synchronization.

Step 4: Once the RAID set has finished syncing the mdstat results will look as follows:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[0] 976759936 blocks [2/2] [UU]
 
unused devices:

As you can see, the RAID identity now shows that there are 2 volumes in the set with /dev/sda1 being the primary and /dev/sdb1 being the mirror. Also, it now shows that there are 2 fully synced disks, indicated by [2/2] [UU].

In the Openfiler web console the RAID Management screen now shows the state of the array as Clean with a sync status of Synchronized.

Final Steps

Step 1: Now that you have a completely clean RAID 1 set you will need to ensure that the /etc/mdadm.conf file has the correct information concerning the array. Normally this is created automatically for you by the Openfiler administration tool, however since we built the array by hand we will need to add this information into the existing file. (Note: backup the existing mdadm.conf file!)

This can be accomplished by the following commands:

[root@mangrove ~]# cp /etc/mdadm.conf /etc/mdadm.conf_orig
[root@mangrove ~]# mdadm --examine --scan >> /etc/mdadm.conf

The mdadm.conf file now contains the following (Note: the UUID entry will be unique to your system):

#
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was auto-generated
# by Openfiler. Please do not modify it.
#
# Generated at: Sat Jul 24 19:43:42 EDT 2010
#
 
DEVICE partitions
PROGRAM /opt/openfiler/bin/mdalert
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=542fa4dc:c920dae9:c062205a:a8df35f1

Testing the RAID set

Testing the RAID set is the next step that I would recommend. Now, if something really bad(TM) goes wrong here, then you could end up losing data. If you are extremely worried, make sure that you have a backup of it all somewhere else, however if you don’t actually test the mirror then you don’t know that it all works and then you could be relying on a setup that will fail you when you need it the most.

Step 1: Get the details of your existing RAID set first prior to testing the removal of a device, that way you will have something to compare it to.

[root@mangrove ~]# mdadm –detail /dev/md0

This should yield some valuable data on the health of the array.

/dev/md0: Version : 00.90.03 Creation Time : Sat Jul 24 19:55:44 2010 Raid Level : raid1 Array Size : 976759936 (931.51 GiB 1000.20 GB) Used Dev Size : 976759936 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 2
Preferred Minor : 0 Persistence : Superblock is persistent
  Update Time : Sun Jul 25 13:41:07 2010 State : clean Active Devices : 2
Working Devices : 2 Failed Devices : 0 Spare Devices : 0
  UUID : 542fa4dc:c920dae9:c062205a:a8df35f1 Events : 0.3276
  Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1

Step 2: The next step is to actually remove a device from the array. Now, since /dev/sda was the initial disk I put into my set, I am going to remove it and see what happens.

To remove the device from the array and mark it as failed, use the following command:

[root@mangrove ~]# mdadm --manage --fail /dev/md0 /dev/sda1

You should receive the following result:

mdadm: set /dev/sda1 faulty in /dev/md0

Step 3: Check the status of the array by using the following command:

[root@mangrove ~]# cat /proc/mdstat

You will see that the device /dev/sda1 is now marked as failed and that the status of the array is degraded:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[2](F) 976759936 blocks [2/1] [_U]
 
unused devices:

Also, in the RAID Management screen in the Openfiler web console you will see the state of the array change to be Clean & degraded.

Step 4: The next step is to test the mirroring of the data. What you should do now is to remove the drive that you marked as failed, then add the drive back into the mirror. In a real world scenario you would also replace the actual drive itself, however that is not necessary in this test. Also, if you were replacing the actual drive you would need to repeat the duplication of the partitioning structure.

First, remove the failed drive from the set /dev/md0 using the following command:

[root@mangrove ~]# mdadm --manage --remove /dev/md0 /dev/sda1

The result that you get back will show a hot remove since this command was executed while the system was live. In an enterprise environment where the storage array supported hotplug devices you could then replace the failed drive without shutting the system down.

Next, add the new drive into the array like so:

[root@mangrove ~]# mdadm --manage --add /dev/md0 /dev/sda1

The result from this command will be:

mdadm: re-added /dev/sda1

Step 5: If nothing has gone horribly wrong with the test, the array will now begin the mirroring process again. As you can see from the output of mdstat, the recovery process will be indicated in the same manner as the initial mirror was:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[2] sdb1[1] 976759936 blocks [2/1] [_U] [>....................] recovery = 0.0% (834432/976759936) finish=175.4min speed=92714K/sec
 
unused devices:

Note: this is not a smart recovery process. When you break the mirror, the entire mirroring process has to complete, even if it was a test where the data had not actually disappeared. As you can see, the recovery process is going to take about the same length of time that initial build of the mirror did.

Also, in the RAID Management screen, the state of the array will now be shown as Clean & degraded & recovering, just as before when we built the mirror in the first place.

Step 6: Once the mirroring process of the testing has completed, the results of mdstat should look similar to the following:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] sdb1[1] 976759936 blocks [2/2] [UU]
 
unused devices:

If you had actually replaced a drive you might need to update mdadm.conf to reflect the changes. At the very least it is wise to run the scan command again to ensure that the UUID of the array matches the file after the rebuild has completed.

Final Thoughts

One of the big lessons that I learned from this whole process, both using Openfiler and with handling storage arrays in general, is that more thought and planning on the frontend can save you a lot of tedium later. Had I initially planned my storage setup, I would have configured the drives in a RAID 1 set in the beginning. Another thing I learned is that while managing your NAS with an appliance type of setup (whether you use ClarkConnet, FreeNAS or Openfiler) is a great convenience, it doesn’t give you the insight and understanding that can be gained by doing everything, at least once, the manual way. I now have a much better understanding of how the software raid functions work in Linux and of the LVM process as a whole.

References

This whole process would have been much more difficult had it not been for the input of a friend as well as a series of postings on the Internet. Thanks, Joe for your help on this, given that it was my first foray into software raid on Linux.

LVM | arfore dot com

As many of your know, I have a Plex-based Mac Mini media center setup which has enabled me to to the cord with the cable monopoly. Part of this setup is the Openfiler NAS that I use to store all of the digital copies of my dvd collection.

Lately I have been wishing that I had configured the NAS with a RAID setup instead of just using an all-eggs-in-one-basket approach with three drives in a LVM configuration. In the process of cleaning out a bunch of videos that I was never going to watch I managed to free enough space to be able to disband the existing volume group and setup my two 1TB drives in a RAID 1 set. The happy result was that I was also able to re-purpose the 500GB drive for use with my ever growing iTunes library. (Thanks, Apple and Steve Jobs for making all my music DRM free!)

The problem I ran into was that Openfiler will not allow you to create a RAID set in a degraded state. This was necessary to enable me to work with the drives I owned and not spend additional funds. After discovering this I began investigating the possibility of doing the RAID configuration by hand and completing the rest of the setup in the Openfiler web console. Here is the process I used.

A few steps were left out, mainly the pieces revolving around moving the data from one logical volume to the other then pruning the volume group of the old volume. Excpet for the copy process, all of this can be easily accomplished in the Openfiler web console. The only piece that I couldn’t easily determine from the console interface was which physical devices were used in which logical volume. That information can be easily found using the following command:

[root@mangrove ~]# lvdisplay -m

Create the RAID 1 set with a missing drive and prepare the physical volume

Step 1: SSH as root into your Openfiler setup

arfore$ ssh [email protected]

Step 2: As root, partition the RAID member using fdisk. You will want to create a single, primary partition. Accept the defaults on the partition size so that it uses the whole drive.

[root@mangrove ~]# fdisk /dev/sda
 
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601):
Using default value 121601

Next change the partition type to be Linux raid autodetect (Note: the hex code is fd)

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
 
Now exit saving the changes. This will write the new partition table to the drive.
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Step 3: Create the RAID 1 set using your newly partitioned drive. Normally when creating a RAID 1 set you would specify two drive since this minimum number for a RAID 1 set in a clean, non-degraded state. However in our case we need to start out with a set containing one drive, which will show up in Openfiler as a RAID 1 set in a clean, degraded state.

[root@mangrove ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

If it all worked, you should see the following result from the mdadm command:

mdadm: array /dev/md0 started.

To check the status of your newly created RAID 1 set, execute the following command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like this:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] 976759936 blocks [2/1] [U_]
 
unused devices:

What this means is that your system supports RAID levels 1, 4, 5, 6, and 10. By extension this means it also supports RAID level 0. The [2/1] entry on the md0 line means that your RAID set is configured for two devices, but only one device exists. In a clean, non-degraded state, a RAID 1 set would show [2/2].
Step 4: Initialize a partition (in this case the RAID 1 partition) for use in a volume group.

[root@mangrove ~]# pvcreate /dev/md0

If the command completed successfully, you should see the following result:

Physical volume "/dev/md0" successfully created

Create the Volume Group and Logical Volume

Note: The next parts are done in the Openfiler web console.

Step 1: First check the status of your RAID set by selecting the Volumes tab then clicking on Software RAID in the Volumes section.

Step 2: On the Software Raid Management screen you will see your new RAID 1 set listed with a state of Clean & degraded. Normally, this would indicate a possible drive failure, however in this instance it is expected.

Step 3: The next step is to create a new volume group using the physical volume created during the previous command line steps. Click on the Volume Groups link in the Volumes section.

Create your new Volume Group and select the physical volume to be added to the group. Then click the Add volume group button.

Notice that now your new volume group shows up in the Volume Group Management section.

Step 4: Next you need to add a new volume to your newly create volume group. You can do this on the Add Volume screen. Click on the Add Volume link in the Volumes section.

On the Add Volume screen, setup your new volume and click the create button. The suggestion from Openfiler is to keep the default filesystem (which currently is xfs). Also, make sure to increase the volume size or Required Space to create the desired volume. In my case I am going to select the maximum available space.

The system now sends you to the Manage Volumes screen which will show you that you now have a volume group containing one volume.

At this point you can copy all your data over from the old LVM setup into the new LVM with RAID 1 setup. This may take some time. In my case, given that may hardware is far from the latest and greatest, it took just over an hour to copy 900GB of data.

Adding the empty drive to the RAID 1 set

The next steps assume that you have migrated your data from the last remaining volume group from the old setup into you degraded RAID 1 set. At this point what we are going to do is to add the free disk into the RAID set and start it syncing the disks.

Step 1: The first thing you need to do is to make sure that the partitioning structure of the new disk matches the structure of the RAID 1 member. In a RAID set the parition structure needs to match.

This can be easily accomplished using the following command:

[root@mangrove ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

The output from the command should look something like this:

Checking that no-one is using this disk right now ...
OK
 
Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
  Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 121600 121601- 976760001 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
  Device Boot Start End #sectors Id System
/dev/sdb1 63 1953520064 1953520002 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 Empty
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
 
Re-reading the partition table ...
 
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

As long as the process doesn’t throw any wonky error messages then you are good to move on to the next step.

Step 2: Now we need to add the disk /dev/sdb into the RAID 1 set /dev/md0.

[root@mangrove ~]# mdadm --manage /dev/md0 --add /dev/sdb1

If the command completes successfully, you will get the following result:

mdadm: added /dev/sdb1

Step 3: At this point the system show automatically begin syncing the disks. To check on this run the command:

[root@mangrove ~]# cat /proc/mdstat

The result show look like so:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[2] sda1[0] 976759936 blocks [2/1] [U_] [>....................] recovery = 0.0% (535488/976759936) finish=182.3min speed=89248K/sec
 
unused devices:

Notice that it now shows that the RAID 1 set is in recovery mode. This indicates that the set is in the process of being synchronized. During this process the data shown on the RAID Management screen in the Openfiler web console will show that the state of the RAID is Clean & degraded & recovering. It will also show the progress of synchronization.

Step 4: Once the RAID set has finished syncing the mdstat results will look as follows:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[0] 976759936 blocks [2/2] [UU]
 
unused devices:

As you can see, the RAID identity now shows that there are 2 volumes in the set with /dev/sda1 being the primary and /dev/sdb1 being the mirror. Also, it now shows that there are 2 fully synced disks, indicated by [2/2] [UU].

In the Openfiler web console the RAID Management screen now shows the state of the array as Clean with a sync status of Synchronized.

Final Steps

Step 1: Now that you have a completely clean RAID 1 set you will need to ensure that the /etc/mdadm.conf file has the correct information concerning the array. Normally this is created automatically for you by the Openfiler administration tool, however since we built the array by hand we will need to add this information into the existing file. (Note: backup the existing mdadm.conf file!)

This can be accomplished by the following commands:

[root@mangrove ~]# cp /etc/mdadm.conf /etc/mdadm.conf_orig
[root@mangrove ~]# mdadm --examine --scan >> /etc/mdadm.conf

The mdadm.conf file now contains the following (Note: the UUID entry will be unique to your system):

#
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was auto-generated
# by Openfiler. Please do not modify it.
#
# Generated at: Sat Jul 24 19:43:42 EDT 2010
#
 
DEVICE partitions
PROGRAM /opt/openfiler/bin/mdalert
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=542fa4dc:c920dae9:c062205a:a8df35f1

Testing the RAID set

Testing the RAID set is the next step that I would recommend. Now, if something really bad(TM) goes wrong here, then you could end up losing data. If you are extremely worried, make sure that you have a backup of it all somewhere else, however if you don’t actually test the mirror then you don’t know that it all works and then you could be relying on a setup that will fail you when you need it the most.

Step 1: Get the details of your existing RAID set first prior to testing the removal of a device, that way you will have something to compare it to.

[root@mangrove ~]# mdadm –detail /dev/md0

This should yield some valuable data on the health of the array.

/dev/md0: Version : 00.90.03 Creation Time : Sat Jul 24 19:55:44 2010 Raid Level : raid1 Array Size : 976759936 (931.51 GiB 1000.20 GB) Used Dev Size : 976759936 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 2
Preferred Minor : 0 Persistence : Superblock is persistent
  Update Time : Sun Jul 25 13:41:07 2010 State : clean Active Devices : 2
Working Devices : 2 Failed Devices : 0 Spare Devices : 0
  UUID : 542fa4dc:c920dae9:c062205a:a8df35f1 Events : 0.3276
  Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1

Step 2: The next step is to actually remove a device from the array. Now, since /dev/sda was the initial disk I put into my set, I am going to remove it and see what happens.

To remove the device from the array and mark it as failed, use the following command:

[root@mangrove ~]# mdadm --manage --fail /dev/md0 /dev/sda1

You should receive the following result:

mdadm: set /dev/sda1 faulty in /dev/md0

Step 3: Check the status of the array by using the following command:

[root@mangrove ~]# cat /proc/mdstat

You will see that the device /dev/sda1 is now marked as failed and that the status of the array is degraded:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[2](F) 976759936 blocks [2/1] [_U]
 
unused devices:

Also, in the RAID Management screen in the Openfiler web console you will see the state of the array change to be Clean & degraded.

Step 4: The next step is to test the mirroring of the data. What you should do now is to remove the drive that you marked as failed, then add the drive back into the mirror. In a real world scenario you would also replace the actual drive itself, however that is not necessary in this test. Also, if you were replacing the actual drive you would need to repeat the duplication of the partitioning structure.

First, remove the failed drive from the set /dev/md0 using the following command:

[root@mangrove ~]# mdadm --manage --remove /dev/md0 /dev/sda1

The result that you get back will show a hot remove since this command was executed while the system was live. In an enterprise environment where the storage array supported hotplug devices you could then replace the failed drive without shutting the system down.

Next, add the new drive into the array like so:

[root@mangrove ~]# mdadm --manage --add /dev/md0 /dev/sda1

The result from this command will be:

mdadm: re-added /dev/sda1

Step 5: If nothing has gone horribly wrong with the test, the array will now begin the mirroring process again. As you can see from the output of mdstat, the recovery process will be indicated in the same manner as the initial mirror was:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[2] sdb1[1] 976759936 blocks [2/1] [_U] [>....................] recovery = 0.0% (834432/976759936) finish=175.4min speed=92714K/sec
 
unused devices:

Note: this is not a smart recovery process. When you break the mirror, the entire mirroring process has to complete, even if it was a test where the data had not actually disappeared. As you can see, the recovery process is going to take about the same length of time that initial build of the mirror did.

Also, in the RAID Management screen, the state of the array will now be shown as Clean & degraded & recovering, just as before when we built the mirror in the first place.

Step 6: Once the mirroring process of the testing has completed, the results of mdstat should look similar to the following:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] sdb1[1] 976759936 blocks [2/2] [UU]
 
unused devices:

If you had actually replaced a drive you might need to update mdadm.conf to reflect the changes. At the very least it is wise to run the scan command again to ensure that the UUID of the array matches the file after the rebuild has completed.

Final Thoughts

One of the big lessons that I learned from this whole process, both using Openfiler and with handling storage arrays in general, is that more thought and planning on the frontend can save you a lot of tedium later. Had I initially planned my storage setup, I would have configured the drives in a RAID 1 set in the beginning. Another thing I learned is that while managing your NAS with an appliance type of setup (whether you use ClarkConnet, FreeNAS or Openfiler) is a great convenience, it doesn’t give you the insight and understanding that can be gained by doing everything, at least once, the manual way. I now have a much better understanding of how the software raid functions work in Linux and of the LVM process as a whole.

References

This whole process would have been much more difficult had it not been for the input of a friend as well as a series of postings on the Internet. Thanks, Joe for your help on this, given that it was my first foray into software raid on Linux.

storage | arfore dot com

As many of your know, I have a Plex-based Mac Mini media center setup which has enabled me to to the cord with the cable monopoly. Part of this setup is the Openfiler NAS that I use to store all of the digital copies of my dvd collection.

Lately I have been wishing that I had configured the NAS with a RAID setup instead of just using an all-eggs-in-one-basket approach with three drives in a LVM configuration. In the process of cleaning out a bunch of videos that I was never going to watch I managed to free enough space to be able to disband the existing volume group and setup my two 1TB drives in a RAID 1 set. The happy result was that I was also able to re-purpose the 500GB drive for use with my ever growing iTunes library. (Thanks, Apple and Steve Jobs for making all my music DRM free!)

The problem I ran into was that Openfiler will not allow you to create a RAID set in a degraded state. This was necessary to enable me to work with the drives I owned and not spend additional funds. After discovering this I began investigating the possibility of doing the RAID configuration by hand and completing the rest of the setup in the Openfiler web console. Here is the process I used.

A few steps were left out, mainly the pieces revolving around moving the data from one logical volume to the other and then pruning the old volume out of the volume group. Except for the copy process, all of this can be easily accomplished in the Openfiler web console. The only piece that I couldn’t easily determine from the console interface was which physical devices were used in which logical volume. That information can be easily found using the following command:

[root@mangrove ~]# lvdisplay -m
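
The -m flag shows how each logical volume maps onto the underlying physical devices. As for the pruning steps I glossed over above, once the data is safely copied off the old logical volume they boil down to removing the logical volume, shrinking the volume group, and wiping the LVM label from the freed drive. A rough sketch, using made-up names (vg_old for the old volume group and /dev/sdc1 for the drive being freed), would look like this:

[root@mangrove ~]# lvremove /dev/vg_old/media   # delete the old logical volume (only after the data is copied off)
[root@mangrove ~]# vgreduce vg_old /dev/sdc1    # pull the freed drive out of the volume group
[root@mangrove ~]# pvremove /dev/sdc1           # wipe the LVM label so the drive can be re-used

As I said, everything except the actual copy can also be done from the Openfiler web console.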

Create the RAID 1 set with a missing drive and prepare the physical volume

Step 1: SSH as root into your Openfiler setup

arfore$ ssh [email protected]

Step 2: As root, partition the RAID member using fdisk. You will want to create a single, primary partition. Accept the defaults on the partition size so that it uses the whole drive.

[root@mangrove ~]# fdisk /dev/sda
 
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601):
Using default value 121601

Next change the partition type to be Linux raid autodetect (Note: the hex code is fd)

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
 
Now exit saving the changes. This will write the new partition table to the drive.
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

Step 3: Create the RAID 1 set using your newly partitioned drive. Normally when creating a RAID 1 set you would specify two drives, since that is the minimum number for a RAID 1 set in a clean, non-degraded state. However, in our case we need to start out with a set containing one drive, which will show up in Openfiler as a RAID 1 set in a clean, degraded state.

[root@mangrove ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

If it all worked, you should see the following result from the mdadm command:

mdadm: array /dev/md0 started.

To check the status of your newly created RAID 1 set, execute the following command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like this:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0]
      976759936 blocks [2/1] [U_]

unused devices: <none>

What this means is that your system currently has the RAID 1, 4, 5, 6, and 10 personalities loaded. The [2/1] entry on the md0 line means that your RAID set is configured for two devices, but only one device is currently present. In a clean, non-degraded state, a RAID 1 set would show [2/2].

Step 4: Initialize the RAID device as an LVM physical volume so that it can be used in a volume group.

[root@mangrove ~]# pvcreate /dev/md0

If the command completed successfully, you should see the following result:

Physical volume "/dev/md0" successfully created
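
If you want to double-check the new physical volume before heading over to the web console, pvdisplay will show it; at this point it will not belong to any volume group yet:

[root@mangrove ~]# pvdisplay /dev/md0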

Create the Volume Group and Logical Volume

Note: The next parts are done in the Openfiler web console.

Step 1: First check the status of your RAID set by selecting the Volumes tab then clicking on Software RAID in the Volumes section.

Step 2: On the Software Raid Management screen you will see your new RAID 1 set listed with a state of Clean & degraded. Normally, this would indicate a possible drive failure, however in this instance it is expected.

Step 3: The next step is to create a new volume group using the physical volume created during the previous command line steps. Click on the Volume Groups link in the Volumes section.

Create your new Volume Group and select the physical volume to be added to the group. Then click the Add volume group button.

Notice that now your new volume group shows up in the Volume Group Management section.

Step 4: Next you need to add a new volume to your newly created volume group. You can do this on the Add Volume screen. Click on the Add Volume link in the Volumes section.

On the Add Volume screen, set up your new volume and click the create button. The suggestion from Openfiler is to keep the default filesystem (which is currently XFS). Also, make sure to increase the volume size (the Required Space field) to the size you want for the volume. In my case I selected the maximum available space.

The system now sends you to the Manage Volumes screen which will show you that you now have a volume group containing one volume.
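
For reference, the same volume group and volume could be created from the shell instead of the console. A rough equivalent, with made-up names (vg_media for the volume group and media for the volume), would be:

[root@mangrove ~]# vgcreate vg_media /dev/md0               # volume group on top of the RAID device
[root@mangrove ~]# lvcreate -l 100%FREE -n media vg_media   # logical volume using all of the available space
[root@mangrove ~]# mkfs.xfs /dev/vg_media/media             # format it with the default xfs filesystem

I stuck with the web console for these steps, but it is handy to know what is actually going on under the hood.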

At this point you can copy all your data over from the old LVM setup into the new LVM-on-RAID 1 setup. This may take some time. In my case, given that my hardware is far from the latest and greatest, it took just over an hour to copy 900GB of data.
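
The copy itself is just a straight file copy between the two mounted volumes. I would suggest something like rsync so that it can be resumed or re-run; a minimal sketch, assuming the old volume is mounted at /mnt/old and the new one at /mnt/new (both paths are just examples):

[root@mangrove ~]# rsync -avP /mnt/old/ /mnt/new/   # archive mode, verbose, with progress and partial-transfer support

The trailing slashes matter with rsync: they copy the contents of /mnt/old into /mnt/new rather than creating a /mnt/new/old directory.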

Adding the empty drive to the RAID 1 set

The next steps assume that you have migrated your data from the last remaining volume group of the old setup into your degraded RAID 1 set. At this point we are going to add the now-free disk into the RAID set and start the disks syncing.

Step 1: The first thing you need to do is make sure that the partition structure of the new disk matches that of the existing RAID 1 member, since in a RAID set the partition structure of the members needs to match.

This can be easily accomplished using the following command:

[root@mangrove ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

The output from the command should look something like this:

Checking that no-one is using this disk right now ...
OK
 
Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
  Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 121600 121601- 976760001 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
  Device Boot Start End #sectors Id System
/dev/sdb1 63 1953520064 1953520002 fd Linux raid autodetect
/dev/sdb2 0 - 0 0 Empty
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
 
Re-reading the partition table ...
 
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

As long as the process doesn’t throw any wonky error messages then you are good to move on to the next step.
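
If you want to eyeball the result, listing the partition tables of both drives should now show an identical layout, with a single Linux raid autodetect partition on each:

[root@mangrove ~]# fdisk -l /dev/sda
[root@mangrove ~]# fdisk -l /dev/sdb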

Step 2: Now we need to add the disk /dev/sdb into the RAID 1 set /dev/md0.

[root@mangrove ~]# mdadm --manage /dev/md0 --add /dev/sdb1

If the command completes successfully, you will get the following result:

mdadm: added /dev/sdb1

Step 3: At this point the system should automatically begin syncing the disks. To check on this, run the command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like so:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[2] sda1[0]
      976759936 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (535488/976759936) finish=182.3min speed=89248K/sec

unused devices: <none>

Notice that it now shows that the RAID 1 set is in recovery mode, which indicates that the set is in the process of being synchronized. During this process the RAID Management screen in the Openfiler web console will show the state of the array as Clean & degraded & recovering, along with the progress of the synchronization.
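
If you would rather keep an eye on the progress from the shell instead of the console, wrapping the mdstat check in watch (assuming watch is installed, which it usually is) will refresh it every couple of seconds:

[root@mangrove ~]# watch cat /proc/mdstat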

Step 4: Once the RAID set has finished syncing the mdstat results will look as follows:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976759936 blocks [2/2] [UU]

unused devices: <none>

As you can see, the array now lists two devices in the set, /dev/sda1 and /dev/sdb1, and both are fully synced, as indicated by [2/2] [UU].

In the Openfiler web console the RAID Management screen now shows the state of the array as Clean with a sync status of Synchronized.

Final Steps

Step 1: Now that you have a completely clean RAID 1 set you will need to ensure that the /etc/mdadm.conf file has the correct information concerning the array. Normally this is created automatically for you by the Openfiler administration tool, however since we built the array by hand we will need to add this information to the existing file. (Note: back up the existing mdadm.conf file first!)

This can be accomplished by the following commands:

[root@mangrove ~]# cp /etc/mdadm.conf /etc/mdadm.conf_orig
[root@mangrove ~]# mdadm --examine --scan >> /etc/mdadm.conf

The mdadm.conf file now contains the following (Note: the UUID entry will be unique to your system):

#
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
# This configuration file was auto-generated
# by Openfiler. Please do not modify it.
#
# Generated at: Sat Jul 24 19:43:42 EDT 2010
#
 
DEVICE partitions
PROGRAM /opt/openfiler/bin/mdalert
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=542fa4dc:c920dae9:c062205a:a8df35f1
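
One thing to watch out for: since the scan command above appends to the file, running it more than once (or running it when Openfiler has already written an ARRAY line for the array) will leave duplicate ARRAY entries. A quick grep shows what is in there so you can delete any stale lines by hand:

[root@mangrove ~]# grep ARRAY /etc/mdadm.conf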

Testing the RAID set

Testing the RAID set is the next step that I would recommend. Now, if something really bad(TM) goes wrong here, then you could end up losing data. If you are extremely worried, make sure that you have a backup of it all somewhere else. However, if you don’t actually test the mirror, you don’t know that it works, and you could end up relying on a setup that will fail you when you need it the most.

Step 1: Get the details of your existing RAID set prior to testing the removal of a device; that way you will have something to compare against afterwards.

[root@mangrove ~]# mdadm --detail /dev/md0

This should yield some valuable data on the health of the array.

/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jul 24 19:55:44 2010
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul 25 13:41:07 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 542fa4dc:c920dae9:c062205a:a8df35f1
         Events : 0.3276

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Step 2: The next step is to actually remove a device from the array. Now, since /dev/sda was the initial disk I put into my set, I am going to remove it and see what happens.

To mark the device as failed (it will actually be removed from the array in step 4), use the following command:

[root@mangrove ~]# mdadm --manage --fail /dev/md0 /dev/sda1

You should receive the following result:

mdadm: set /dev/sda1 faulty in /dev/md0

Step 3: Check the status of the array by using the following command:

[root@mangrove ~]# cat /proc/mdstat

You will see that the device /dev/sda1 is now marked as failed and that the status of the array is degraded:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[2](F)
      976759936 blocks [2/1] [_U]

unused devices: <none>

Also, in the RAID Management screen in the Openfiler web console you will see the state of the array change to be Clean & degraded.

Step 4: The next step is to test the mirroring of the data. What you should do now is remove the drive that you marked as failed, then add it back into the mirror. In a real-world scenario you would also replace the actual drive itself, however that is not necessary in this test. Also, if you were replacing the actual drive you would need to repeat the partition-table duplication from earlier.

First, remove the failed drive from the set /dev/md0 using the following command:

[root@mangrove ~]# mdadm --manage --remove /dev/md0 /dev/sda1

The result that you get back will show a hot remove, since this command was executed while the system was live. In an enterprise environment where the storage array supports hotplug devices, you could then replace the failed drive without shutting the system down.

Next, add the new drive into the array like so:

[root@mangrove ~]# mdadm --manage --add /dev/md0 /dev/sda1

The result from this command will be:

mdadm: re-added /dev/sda1

Step 5: If nothing has gone horribly wrong with the test, the array will now begin the mirroring process again. As you can see from the output of mdstat, the recovery process will be indicated in the same manner as the initial mirror was:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[2] sdb1[1]
      976759936 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (834432/976759936) finish=175.4min speed=92714K/sec

unused devices: <none>

Note: this is not a smart recovery process. When you break the mirror, the entire resync has to complete, even if it was only a test where the data never actually disappeared. As you can see, the recovery is going to take about the same length of time that the initial build of the mirror did.

Also, in the RAID Management screen, the state of the array will now be shown as Clean & degraded & recovering, just as before when we built the mirror in the first place.

Step 6: Once the test rebuild has completed, the output of mdstat should look similar to the following:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976759936 blocks [2/2] [UU]

unused devices: <none>

If you had actually replaced a drive you might need to update mdadm.conf to reflect the changes. At the very least it is wise to run the scan command again to ensure that the UUID of the array matches the file after the rebuild has completed.
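
A quick way to check is to run the scan again and compare its output to the file; the UUID on the two ARRAY lines should match:

[root@mangrove ~]# mdadm --examine --scan
[root@mangrove ~]# grep ARRAY /etc/mdadm.conf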

Final Thoughts

One of the big lessons that I learned from this whole process, both with Openfiler and with handling storage arrays in general, is that more thought and planning up front can save you a lot of tedium later. Had I planned my storage setup from the start, I would have configured the drives in a RAID 1 set in the beginning. Another thing I learned is that while managing your NAS with an appliance type of setup (whether you use ClarkConnect, FreeNAS or Openfiler) is a great convenience, it doesn’t give you the insight and understanding that can be gained by doing everything, at least once, the manual way. I now have a much better understanding of how the software RAID functions work in Linux and of the LVM process as a whole.

References

This whole process would have been much more difficult had it not been for the input of a friend as well as a series of postings on the Internet. Thanks, Joe, for your help on this; it was my first foray into software RAID on Linux.

iPad for the sysadmin | arfore dot com

A few weekends ago I had the privilege of being assigned by my boss to evaluate an iPad for use as a support tool. (Thanks, Ike!)

The first order of business was to figure out some basic tasks that we would need to accomplish as sysadmins that we could realistically use the iPad for.

Remote control via ssh for a unix server

For ssh I already had the iSSH application by Zinger-Soft [iTunes]. Fortunately they have updated it to be a universal application for both the iPhone and the iPad. I had used it with a fair amount of success on my iPhone in the past to reboot several servers over both WiFi and 3G, most notably when I needed to reboot a MySQL server while on the way to Atlanta on I-75.

I was pleased with the changes that they made for the expanded screen real estate of the iPad. The split screen function when in portrait mode is quite useful when you need to juggle two connections at the same time, even if it can be a bit confusing at first.

The ability to handle X11 forwarding is also a nice touch, because there are some administration activities that require the GUI even on a unix system (think that favorite Oracle installer that we all know and love).
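
For comparison, the X11 forwarding that iSSH offers is the same mechanism you would use from a desktop: connect with forwarding enabled and launch the graphical tool from that shell. The user and host below are just placeholders:

arfore$ ssh -X oracle@dbserver.example.com

Anything graphical started from that session (the Oracle installer, for example) is then displayed back on the device you connected from.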

Remote access via RDP to Windows servers and desktops

Generally this is an easier task to sort out, due to the number of RDP clients that exist for the iPad. There are more clients out there to handle this than you can shake a stick at, however they don’t all have the same features. The fly in the ointment with RDP support is finding a client that works with the widest variety of server and desktop OS installations, encryption options, etc. The large majority of them did state that they supported Server 2003 and 2008 as well as Windows XP, Vista and 7.

What took some doing was finding a client that would work in our security environment. Currently we require that all off-site RDP connections be tunneled through SSH, and it turns out that none of the RDP clients out there support this yet. One of the most promising from this standpoint looks to be iTap RDP by Honeder Lacher Wallner Softwareentwicklung OEG [iTunes]. This client supports FIPS and NLA, and it has a nice compression algorithm that makes the connection work well even on a 3G network. While it doesn’t currently support RDP over SSH tunnels, this is a planned feature in a future release.

Another possibility, depending on where we go with our VDI initiative, is Wyse PocketCloud by Wyse Technology [iTunes]. PocketCloud for iPad supports both VMware View connections and standard RDP connections. This is the application I ended up testing, and I must say, I was pretty happy with it. The manner in which it handles the mouse functionality is superb. The support for the application seems a little subpar, but there is a fairly active forum.

Currently the only solution I was able to find for our tunneling requirement was to use iSSH for a tunneled VNC connection, since iSSH supports this. Of course, this means that you will need to install a VNC server on your desktop or server, but in my testing it did seem to work fairly well, if a little sluggishly. One advantage to this approach is that Mac OS X includes a VNC server by default, making connections to Mac servers and clients fairly easy to accomplish. With Windows 2008 it was a little more challenging due to the security changes added by the UAC system from MS. I was unsuccessful in getting the RealVNC Enterprise trial to work properly, however the beta of TightVNC worked nicely. The latest version of iSSH also supports general SSH tunnels; when you combine this with the multitasking support in iOS 4, you then have the ability to access a remote machine through a perimeter firewall without the need for a VNC server. Unfortunately, this support is useless on the iPad until we get iOS 4, but it is nice to know that it is there.
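
For anyone not familiar with the approach, the tunnel that iSSH builds is the same idea as a local port forward from a desktop OpenSSH client: forward a local port through the SSH gateway to the VNC (or RDP) port on the target machine, then point the client at localhost. The host names and ports below are just placeholders:

arfore$ ssh -L 5901:server.example.com:5900 user@gateway.example.com

With that running, a VNC client connecting to localhost:5901 ends up on server.example.com port 5900, with everything between you and the gateway encrypted.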

Access to various web-based support services

This is not really much of a challenge, however it is worth mentioning that there are a number of web-based systems that don’t cooperate easily with Mobile Safari for various reasons. Some of them are Flash-based, which obviously won’t work, others are just not designed to work properly on a touchscreen device. Your mileage may vary.

Password storage

As is the case with most sysadmins, I have far more passwords to keep up with than I can easily remember. When you combine that with the necessity of locking accounts after a certain number of failed attempts, a secure method of carrying passwords with me quickly becomes a necessity.

On my iPhone I have been using Lockbox Pro by GEE! Technologies [iTunes] for a while now, however in investigating an app for the iPad I spent a fair amount of time playing around with SplashID by SplashData [iTunes]. (Also, it looks like GEE! Technologies is having issues, since the company website link for their app in the App Store doesn’t work and the support website looks fairly similar to the myriad of web-squatter websites that are out there.) Now if you use password managers, you most likely have run into SplashID before. One of the major points in its favor is its use of 256-bit Blowfish encryption. New for the iPad version is the ability to use a swipe pattern to unlock the application, similar to the process that you can use to unlock some Android-based devices. It also supports numeric and alpha passwords for unlocking the database.

One of my favorite features of Lockbox Pro is the ability to have a large number of additional fields for an entry, not just a username and password. SplashID also has this feature. Another great advantage to SplashID is the availability of a desktop application (both Mac and Windows) that you can sync your mobile device to. Not only does SplashID support the iPhone, iPad and iPod Touch, they also have clients for Android, WebOS, PalmOS, Blackberry and Series 60. The simple fact that I can sync my password data between multiple devices as well as my desktop makes this an ideal application. SplashID also supports auto-fill for websites, if that is your thing. Of course, if you want it all on your iPhone, iPad and desktop, you are going to have to fork out a lot of money, since each application is a separate charge.

Access to notes, procedures and documentation

As an admin, one of the most useful applications is one that allows me to have notes, procedures and documentation available when I need them. It can be difficult to juggle a keyboard, a serial cable and a big, fat, dead-tree manual when in a datacenter, so having the essential docs on hand in a mobile environment is a must.

I think there are actually more possibilities in this particular category than any other I researched for this post. I have been a big fan of Evernote by Evernote Corp [iTunes] since it was released. It syncs to both the iPhone and iPad, as well as to the client on my desktop. Combine those abilities with web-clipping functionality in both Safari and Firefox on the desktop and you have a great tool for support.

Of course, sometimes you will need to store large documents, and unless you feel like paying for storage with Evernote, it might not work to upload the entire Solaris 10 reference or the latest edition of the PHP function reference. To begin with I started searching for the perfect sysadmin application in the App Store; then I realized that I already had it: iBooks [iTunes]. With iBooks 1.1, Apple made PDF storage easy. Just drag the PDF into the Books section in iTunes and sync. Voila! Of course, to make the docs more useful they need to be converted into eBook format so that you can use the highlighting and search features, but in a pinch a raw PDF is quite handy.

I wish I could do that

There are still some things that I wish I could do with the iPad, however I doubt I will get them. One item on my wishlist would be a mechanism to allow me to use the iPad as a serial terminal. Frequently I have to use a laptop with a serial port (or a USB-to-serial adapter) to connect to a server in order to access the console. It would be really nice to be able to do this from the iPad. Another feature that would be nice would be something along the lines of the certificate management that you have in the Keychain Access application on the Mac. I can see where it could come in handy to be able to import and export SSL certs from the device.