December | 2006 | arfore dot com

When you set up a website to be managed by Contribute, the name that shows up in the client for the connection description is generated from the text of the title tag of the index page of the website.

For setups where you are only managing a single site, this may not matter, but if you have a system with both a development server and a production server, you may want the connection description to indicate which server the user is editing and publishing to.

In order to change the text of the description you have to alter some text in a few files on the CPS management server.

In the installation directory of CPS, there is a database directory. This is the location of the files that are specific to the individual websites that are being managed.

The files for each site being managed are in a “numbered” directory; the number reflects the order in which the sites were set up. If you remove a particular site, its number is not reused.

Each of the site directories contains an XML settings file, and the attribute in that file holding the connection description is the text that shows up in the Contribute client on the start page.

Once this is updated and a user logs into the Contribute server, the client connections are updated.

While this may be enough on its own to effect the desired change, the original name is still referenced in the cthub file for each individual site. I changed the name in both locations for completeness.

AWStats is a great open-source stats analysis program.

In the process of setting up a replacement webserver at VSU I was investigating replacing analog, another nice open-source stats program.

The problem with analog is that the people requesting stats were confused by the technical output. These people aren’t computer folks; they just want to know who or what is visiting the pages they maintain, and when.

The default setup of AWStats is fairly easy to accomplish, but if you want to generate separate stats for individual directories off the web root, it takes a little know-how.

In the configuration file for AWStats, there is a section that allows you to limit the processing to a given set of files. Interestingly enough the variable controlling this is called OnlyFiles. The relevant section of the default configuration file is:

# Include in stats, only accesses to URLs that match one of following entries.
# For example, if you want AWStats to filter access to keep only stats that
# match a particular string, like a particular directory, you can add this
# directory name in this parameter.
# The opposite parameter of "OnlyFiles" is "SkipFiles".
# Note: Use space between each value. This parameter is or not case sensitive
# depending on URLNotCaseSensitive parameter.
# Note: You can use regular expression values writing value with REGEX[value].
# Change : Effective for new updates only
# Example: "REGEX[marketing_directory] REGEX[office\/.*\.(csv|sxw)$]"
# Default: ""
#

OnlyFiles=""

What you have to do is change the value to a regular expression that represents your desired subset. For instance, if you wanted to limit the stats generation to the directory foobarbaz, then the entry would be:

OnlyFiles="REGEX[^\/foobarbaz]"

To get per-directory stats without losing the stats for the entire site, you will need to duplicate the master configuration file for AWStats and alter the entry in the copy as described above. Make sure to provide a unique name for the config file, such as:

awstats.foobarbaz.conf
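
On a typical AWStats install the whole process would look roughly like the following; the paths and the awstats.model.conf template name are placeholders for wherever your installation keeps these files:

cp /etc/awstats/awstats.model.conf /etc/awstats/awstats.foobarbaz.conf
# edit OnlyFiles in the new config as shown above, then build the stats for it
perl /usr/local/awstats/cgi-bin/awstats.pl -config=foobarbaz -update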

At VSU, we are implementing the Adobe (formerly Macromedia) Contribute Publishing Server and CMS.

This is a two-piece system that involves a client-side component (Contribute) and a server-side component (CPS).

We are running this on a Linux server, so we needed an easy way to start the service up should a system reboot occur.

Now Macromedia included a very simple shell script that made a call to the included OEM JRun binary to start the server. The only problem is that it had no facility to plug into the chkconfig tool that can be used to manage services in the various runlevels.

So I wrote a very simple one:

#!/bin/bash
#
# CPS Startup script for the Macromedia Contribute Publishing Server
#
# chkconfig: 2345 80 20
# description: The CPS is the backend to manage the Macromedia Contribute CMS.
# processname: jrun -nohup -start contribute-wps
# pidfile: none noticed

# Source function library.
. /etc/rc.d/init.d/functions

jrun=${JRUN-/usr/Macromedia_CPS/jrun4/bin/jrun}
prog=CPS
lockfile=${LOCKFILE-/var/lock/subsys/macromedia_cps}

RETVAL=0

start() {
    echo -n $"Starting $prog: "
    $jrun -nohup -start contribute-wps
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch ${lockfile}
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    $jrun -stop contribute-wps
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f ${lockfile}
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        $jrun status
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|status}"
        exit 1
esac

Note that the lockfile referenced was an invention on my part, since the standard startup of jrun included with CPS doesn’t appear to create either a standard lockfile or pidfile.

After creating the file in the /etc/init.d directory, you will need to run the following command:
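
# assuming the init script was saved as /etc/init.d/cps; substitute the name you used
chkconfig --add cps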

This will add your new script to the service list maintained for use with chkconfig. At this point all the standard chkconfig commands can be used to manage this.
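
For example, still assuming the script was named cps, you could enable it for its default runlevels and verify the result with something like:

chkconfig cps on
chkconfig --list cps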

For more on chkconfig, check out the chkconfig online man page at LinuxCommand.

In the process of troubleshooting the LDAP user problems I was experiencing, I found that logging of info and debug messages is turned off by default for the OEM JRun install that is part of Contribute Publishing Server 3.11.

In order to enable these logging levels you have to edit the server configuration XML file. This file should be located in the configuration directory of your jrun4 server’s WEB-INF folder.

The name of the file is:

Open this file in your favorite editor and look for the following section:

<logger_settings>
  <out>
    <file>/usr/Macromedia_CPS/logs/out.log</file>
  </out>
  <err>
    <file>/usr/Macromedia_CPS/logs/err.log</file>
  </err>
  <show>
    <debug>true</debug>
    <info>true</info>
    <error>true</error>
  </show>
</logger_settings>

By default the logger is only set to show error messages. This section also shows the location of the error logs and the output logs.

Note that enabling these levels will give you larger log files, since every connection to the LDAP server from the Contribute clients you have installed will be logged. You may find it necessary to open the admin console and set a maximum log file size to control this.

One of the annoyances I have found with the Contribute Client is that in the Administration interface, when adding a user, the menu of roles is not sorted for you. The list that appears when you reassign a user or create a new role, however, is sorted.

Each time that a new role is added, the client updates the hub file, adding the new role to the end of the list.

In investigating this I found that the hub file is just an XML file, stored in the root _mm folder of the website that you are managing. Note that this file is connection specific, so if you are managing multiple websites, the location of this control file will vary. If you are managing a large deployment with multiple sites under a single directory structure, you will have a different _mm folder and hub file for each site; even though they physically live on the same server, the software logic treats them separately.

November | 2009 | arfore dot com

While working on a method to allow the VSU Communications Unit to add or change the stories in the rotation on the main VSU webpage, I ran into a problem that turned out to be a known Safari issue with file uploads.

I don’t regularly create forms that allow file uploads; however, I don’t like to store binary data in the MySQL database either. Allowing the files to be uploaded makes creating pages that use them a whole lot easier, since I don’t have to “create” the image from the binary data; I just pass off a file location and let the browser do the rest.

The symptoms exhibited were that when submitting the form, Safari would hang about 30-40% of the time. No error messages or timeout messages were displayed. Zip, zilch, nada!

April | 2008 | arfore dot com

Today we have a trio of live performances. Two of them are from concerts and the third is from the Late Show with David Letterman. The three artists are: the Counting Crows, the Cure, and Maroon 5.

Raining in Baltimore
Counting Crows
Live – September 1997

The Cure
Live in Wembley Arena in ’91

Won’t Go Home Without You
Maroon 5
Live on David Letterman 1/14/08

There are many really nice web apps out there now. Some of them are designed for pure entertainment, others are designed for tracking personal information, and still others serve a clear design purpose.

Here are a couple that I like:

  • My Mile Marker – a nice app that helps you track your car’s mpg over time.
  • Wufoo – an online html form builder. They have a number of pre-designed templates that you can choose from and alter.
  • Typetester – an online font comparison app that helps you see what your online content will look like in various fonts.

What web apps are out there that you use or find particularly interesting?

Sungevity is a company that does residential solar panel installations. They have this cool web app that lets you enter your address and then determines how much energy you will need. They use satellite imagery to help design the system.

When you’re ready to see how much solar your home needs, Sungevity makes it easy. Simply enter your address, and we’ll design a system for your roof remotely, using satellite images. We’ll get back to you with the systems that will fit on your roof – all online and free.

Pretty cool, but unfortunately it is for California residents only.

Today we have a trio of various artists. The first artist is Kathleen York (aka Bird York) with her song In The Deep from the album Wicked Little High. Many people might recognize this song from the movie Crash. Then we have the video of Sting’s song Desert Rose. We close out today with a live performance of Sarah Brightman performing her cover of the Hooverphonic song Eden. This performance is from the 1998 Goldene Europa Award ceremony.

In The Deep
Bird York – Wicked Little High
EMI (2006)

Desert Rose
Sting – Brand New Day
A&M Records (1999)

Eden
Sarah Brightman – Eden
Angel Records

Today we have a quartet of house/electronica/dance music.

We have two live performances and two videos. First we have a live performance of Sasha and John Digweed in Buenos Aires. Next up is a video of the song From Paris to Berlin from the group Infernal. Then we have a live performance from Ultra Music Festival 2004 of Paul Oakenfold spinning Southern Sun. The last video in the lineup is Future Sound of London’s Amoeba.

Sasha and John Digweed
Live in Buenos Aires

Infernal
From Paris to Berlin

Paul Oakenfold
Southern Sun – Live at Ultra Music Festival 2004

Future Sound of London
Amoeba

February | 2007 | arfore dot com

The Gypsy Violin
by Munda

The compelling violin lures
With an irresistible yearn
Dance, dance, please dance for me
I can no longer adjourn!

Ethereal notes float from its strings
Caressing like a lover’s hand
Sensual music, Angel’s touch
Leading the way to wonderland

Embracing with utter delight
Craving, beckoning me
Tempting my lonely heart
Dance, dance on my melody!

Faster, faster the music escapes
Without compassion to body or soul
Seducer of lonely hearts
Until dancing is my only goal

Faces gyrate while I dance on passion
Flashes of fire in the corner of my eyes
The violin plays like never before
Until I become one and loneliness dies

With a final cry and a final touch
The violin stops, the music ends
Leaving behind an emptiness
We’ll meet again, my violin friend

When You Are Old
by William Butler Yeats

When you are old and gray and full of sleep
And nodding by the fire, take down this book,
And slowly read, and dream of the soft look
Your eyes had once, and of their shadows deep;

How many loved your moments of glad grace,
And loved your beauty with love false or true;
But one man loved the pilgrim soul in you,
And loved the sorrows of your changing face.

And bending down beside the glowing bars,
Murmur, a little sadly, how love fled
And paced upon the mountains overhead,
And hid his face amid a crowd of stars.

ref. url: When You Are Old

So, according to a story on Reuters, the Free Software Foundation (FSF) is evaluating whether or not to ban Novell from distributing future versions of their Linux OS.

“The community of people wants to do anything they can to interfere with this deal and all deals like it. They have every reason to be deeply concerned that this is the beginning of a significant patent aggression by Microsoft,” Eben Moglen, the Foundation’s general counsel, said on Friday.

Apparently they might use their lock on the intellectual property rights to key pieces of the open-source OS to achieve this.

My questions:

  1. Exactly how are they going to achieve this, if the software is open-source?
  2. Which version of the GPL are they going to claim that permits this?
  3. How does this action promote the goals of the FSF which according to their About Us page include: “our worldwide mission to preserve, protect and promote the freedom to use, study, copy, modify, and redistribute computer software, and to defend the rights of all free software users.”?

Has Richard Stallman lost his ever-loving mind? He wants people to use Linux. And not just as the OS they run on their servers, but as an everyday OS. He wants people to stop using DRM on their electronically available downloads, as evidenced by the campaign to stomp out DRM.

If Novell wants to enter into a business agreement that results in commercial support and interoperability with the non-free software juggernaut Microsoft, then how is this bad for Linux?

Just because RS doesn’t like Billy Gates and his commercial giant doesn’t mean that he needs to resort to the very tactics he stands against when someone gets in bed with MS and Linux at the same time.

Shame on you, RS; put your money where your values are. If you want people to use open source, then don’t use bullying tactics to keep it from happening.

UPDATE: according to a story at Linux-Watch, the Reuters story is misleading. Apparently the patent agreement is completely legal under GPL v2, but the FSF is working on language for the next GPL v3 draft that would make it a violation of the license. I say again: why is this MS/Novell deal bad for Linux? And as the Linux-Watch story points out, the current Linux kernel developers don’t like GPL v3 and apparently have no plans to move from GPL v2.

June | 2012 | arfore dot com

Editor’s Note: This article is part of the Tales of A Linux Switcher series.

In my search to make the complete switch from the Mac OS (see Tales of a Linux Switcher – Part 1), the biggest research effort has been finding applications that accomplish the same tasks in Linux.  Some of these tasks are pretty obvious, e.g., web browsing or email, while others are not quite so ordinary, e.g., filesystem encryption or software development.

So, with all of that in mind, the subject of this particular post is going to be a discussion of some of the common tasks that I set out to handle and the application I chose to fit the bill.

When everything is said and done, the important part of using any desktop (or server, really) OS is getting done what you need to do.  The tasks can be office productivity or software development or just casual web surfing.

The arguments about which OS is better, more secure, more extensible, or more “free” are all great and wonderful, but in the end what matters is getting it done.  There are some people that believe that software being free is top priority, while others (like myself) are not as concerned over whether the software is free, cheap, open source, or proprietary, as long as it works to get from point a to point b.

Don’t get me wrong, I like open source software, and it’s even better when it’s FOSS (free, open source software), but when it all shakes out I want a computer setup that I can rely on from day-to-day to do what I need it to do.

So in my quest to get to point b, I have found that there are generally any number of applications in Linux to accomplish the tasks that I did in the Mac OS ecosystem.

Some of the application choices were easy options, like LibreOffice in place of MS Office 2011, while others required more research to replace, e.g., iTunes, 1Password, etc.  With each choice I have tried to find an alternative that gave me the closest experience in terms of usability and feature set of the application being replaced.

When looking for alternatives I used Google for basic searching, but I also found the following sites to be of use:

Using those sites in combination with various forum posts and basic searches, I have been able to find software to do most everything I was doing on Mac OS X.  Bear in mind that sometimes it’s not quite as easy to set everything up, but I took that as a challenge.  There are some instances that presented particular challenges.  I will be posting on those individually as time permits.

To see the list I have personally come up with, have a gander at my Linux Switcher Software Choices spreadsheet.

So here at work we are running SGHE’s Banner Student Information System.  Part of the integration with the eFollett online bookstore isn’t working quite the way we want due to a bug that will not be fixed until Banner release 8.5 which we won’t have until sometime after classes start.

Due to the desire to find the books based on a class now, we had to create a system that would allow us to build the correct URLs for the eFollett system in a programmatic fashion.

The way we did it was to include an anchor tag as custom text within the Banner module.  The href attribute of the anchor tag contains an inline Javascript function that is used to pull the querystring parameters from the current Banner URL and pass that off to a separate system that will handle the redirection to the appropriate eFollett URL.

Too bad you have to be logged into the Banner account for it to work, since the query string is only available to an authenticated user.

The inspiration for this was the blog post Read URL GET variables with JavaScript by Ashley Ford.

May | 2012 | arfore dot com

As some of you will no doubt have noticed over the years, I am a die-hard Macintosh fan.  I have run Windows desktops and servers, as well as Linux desktops and servers over the years, but my true love has always been the Apple Macintosh computers.  So it is with some trepidation that I have faced the situation that I no longer have any Macintosh computers of my own.

While the situation was not anticipated, I have faced it head on and am rapidly on my way to filling all my computing needs with the Linux desktop that I have.  This is the first of several posts where I will document that process and the solutions that I have come up with to achieve the same goals in my personal computing experience with Linux that I did with the Mac.

Chapter 1 – Choosing a distribution

As a long-time Linux user, dating all the way back to running a specialized distribution of RedHat on the 486 PC card in my PowerPC 6100 like some other folks, I am well acquainted with the passionate arguments that can arise among Linux aficionados when the topic of choosing a distribution comes up.

In the beginning many of the arguments centered around the needs of various kernel configurations and packaging systems.  Do you compile your kernel by hand?  Do you go modular or monolithic?  Is RPM a better choice than deb?  Do you go hard core and start a stage 1 Gentoo install where you have to bootstrap the kernel just to compile and install?

Some of these decisions will be familiar to you and some won’t be.  Many of the old arguments don’t apply anymore due to major improvements over the years.  Ofttimes the new arguments center around free vs. non-free, Gnome 2 vs. Gnome 3, Gnome vs. KDE, etc.

With all of this in mind, I developed a rather simple set of criteria based on my personal experience with the philosophy Apple has espoused in its ad campaigns of “it just works.”  Here’s the list I came up with:

  1. Community involvement
    With any OS choice, it is very important that there be a large community of users, comprised of multiple skill levels, that can provide innovative solutions and workarounds for usability problems that can be encountered.
  2. Multiple update tracks
    While having a stable only release track makes sense for a production-level environment, as a tech-enthusiast and a geek it is great to have access to testing and unstable release tracks when you want to try something on the bleeding edge.
  3. Robust driver support
    It was important that recent hardware support be available. I don’t want to have to wait until a major point release to get something as important as a network card working.
  4. Eye candy
    Yes, I know that to a lot of die-hard UNIX guys, the concept of eye candy being a major bullet item for picking a distribution is nuts, but coming from the Macintosh environment, which is arguably one of the most visually appealing, it was important.

After doing a large amount of research and testing numerous live CDs, I settled on Linux Mint 13 with the Cinnamon desktop environment.  Linux Mint is an Ubuntu-based distribution, which means it traces its genealogy back to the grand old distribution of Debian.

Ubuntu is known for having an extremely active community base, and it has become the distribution of choice for many hardware vendors outside of the server market that are looking to pull Linux users into their product lines.

Being an Ubuntu/Debian-based distribution, there are lots of opportunities for bleeding-edge development when you want to go there.  For example, Oracle’s Java 7 Update 4 is available as a package through a PPA repo.

Also, since Linux Mint 13 is a Gnome 3 base with the sleek, modern-looking Cinnamon environment on top, there is plenty of eye candy to go around.

References

  1. Fischba, S. (1997, June 06). Running linux on ppc/486 card?. Retrieved from http://www.linuxmisc.com/7-freebsd/2fd450d75fd55344.htm
  2. Lagna, G. (2010, April 23). Apple’s ad campaign, a brief history… Retrieved from http://www.macgasm.net/2010/04/23/apples-ad-campaign-a-brief-history/
  3. Linux Mint – from freedom came elegance. Ubuntu-based Linux distribution. http://www.linuxmint.com/
  4. Cinnamon – Love your Linux, Feel at Home, Get things Done! Window manager for Linux. http://cinnamon.linuxmint.com/
  5. Andrei, A. (2012, January 17). Install oracle java 7 in ubuntu via ppa repository. Retrieved from http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html

October | 2011 | arfore dot com

An excerpt of the poem Halloween by Robert Burns.

Upon that night, when fairies light
On Cassilis Downans dance,
Or owre the lays, in splendid blaze,
On sprightly coursers prance;
Or for Colean the route is ta’en,
Beneath the moon’s pale beams;
There, up the cove, to stray and rove,
Among the rocks and streams
To sport that night.

Among the bonny winding banks,
Where Doon rins, wimplin’ clear,
Where Bruce ance ruled the martial ranks,
And shook his Carrick spear,
Some merry, friendly, country-folks,
Together did convene,
To burn their nits, and pou their stocks,
And haud their Halloween

The lasses feat, and cleanly neat,
Mair braw than when they’re fine;
Their faces blithe, fu’ sweetly kythe,
Hearts leal, and warm, and kin’;
The lads sae trig, wi’ wooer-babs,
Weel knotted on their garten,
Some unco blate, and some wi’ gabs,
Gar lasses’ hearts gang startin’

It’s that wonderful time of year again, Halloween! A time of magic, mystery, fun, and all manner of good-natured tomfoolery!

As a kid growing up, Halloween meant going trick-or-treating.  We had costumes, oftentimes homemade, which were pretty cool.  I remember a tiger somewhere in there.

We got to harass our neighbors for food that wasn’t good for us, we got to play tricks on other kids at school, we got to have parties with other kids, and we could get away with some things that we couldn’t at other times of the year.  It was a time for scary stories, for scary movies, and for a generally boo-riffic good time.

As I got older and went to college, Halloween was still a time for parties, but of a more adult nature. And now, instead of going to other people’s houses to get treats, parents were bringing their kids to mine.  I went through many bags of mini-size candy bars, candy corn, and other not-so-healthy goodies.

Then there are the Halloween TV specials.  “It’s the Great Pumpkin, Charlie Brown” is an annual favorite.  Other shows that I have enjoyed watching, either by myself or with others (and/or their kids), included the Halloweentown series on the Disney Channel and A Disney Halloween, among others.  There have always been the classic horror movies as well, such as The Mummy and Frankenstein.

So no matter what memories you may have of Halloween or how you choose to celebrate it, I wish you all a wonderful All Hallow’s Eve!

On Wednesday, October 5, 2011, the world lost a true visionary.

It was with great sadness that I heard of the passing of Steve Jobs, after a long struggle with pancreatic cancer.  He was an impresario in the world of computers and technology, constantly pushing the boundaries, always coming up with “one more thing.”

Seldom has there been an individual that has shaped the course of the technological world in the way that Steve has done.  While everyone may not agree with the way in which he guided Apple, Inc., there can be little doubt in anyone’s mind that his vision of the future has left an indelible mark on the fabric of society.  From the garage of a small house came the seed that sparked a revolution in computers.

What a computer is to me is the most remarkable tool that we have ever come up with. It’s the equivalent of a bicycle for our minds.
– Steve Jobs, 1991

I offer his family and his wife Laurene my condolences in their time of loss.

August | 2010 | arfore dot com

As many of you know, I have a Plex-based Mac Mini media center setup which has enabled me to cut the cord with the cable monopoly. Part of this setup is the Openfiler NAS that I use to store all of the digital copies of my DVD collection.

Lately I have been wishing that I had configured the NAS with a RAID setup instead of just using an all-eggs-in-one-basket approach with three drives in an LVM configuration. In the process of cleaning out a bunch of videos that I was never going to watch, I managed to free enough space to be able to disband the existing volume group and set up my two 1TB drives as a RAID 1 set. The happy result was that I was also able to re-purpose the 500GB drive for use with my ever growing iTunes library. (Thanks, Apple and Steve Jobs, for making all my music DRM free!)

The problem I ran into was that Openfiler will not allow you to create a RAID set in a degraded state, which was necessary if I wanted to work with the drives I owned and not spend additional funds. After discovering this I began investigating the possibility of doing the RAID configuration by hand and completing the rest of the setup in the Openfiler web console. Here is the process I used.

A few steps were left out, mainly the pieces revolving around moving the data from one logical volume to the other and then pruning the volume group of the old volume. Except for the copy process, all of this can be easily accomplished in the Openfiler web console. The only piece that I couldn’t easily determine from the console interface was which physical devices were used in which logical volume. That information can be easily found using the following command:

[root@mangrove ~]# lvdisplay -m

Create the RAID 1 set with a missing drive and prepare the physical volume

Step 1: SSH as root into your Openfiler setup

arfore$ ssh root@mangrove

Step 2: As root, partition the RAID member using fdisk. You will want to create a single, primary partition. Accept the defaults on the partition size so that it uses the whole drive.

[root@mangrove ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601):
Using default value 121601

Next change the partition type to be Linux raid autodetect (Note: the hex code is fd)

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Now exit saving the changes.  This will write the new partition table to the drive.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Step 3: Create the RAID 1 set using your newly partitioned drive. Normally when creating a RAID 1 set you would specify two drives, since that is the minimum number for a RAID 1 set in a clean, non-degraded state. However, in our case we need to start out with a set containing one drive, which will show up in Openfiler as a RAID 1 set in a clean, degraded state.

[root@mangrove ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

If it all worked, you should see the following result from the mdadm command:

mdadm: array /dev/md0 started.

To check the status of your newly created RAID 1 set, execute the following command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like this:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0]
      976759936 blocks [2/1] [U_]

unused devices: <none>

What this means is that your system supports RAID levels 1, 4, 5, 6, and 10. By extension this means it also supports RAID level 0. The [2/1] entry on the md0 line means that your RAID set is configured for two devices, but only one device exists. In a clean, non-degraded state, a RAID 1 set would show [2/2].
Step 4: Initialize the RAID 1 device (/dev/md0) as a physical volume for use in a volume group.

[root@mangrove ~]# pvcreate /dev/md0

If the command completed successfully, you should see the following result:

Physical volume "/dev/md0" successfully created

Create the Volume Group and Logical Volume

Note: The next parts are done in the Openfiler web console.

Step 1: First check the status of your RAID set by selecting the Volumes tab then clicking on Software RAID in the Volumes section.

Step 2: On the Software Raid Management screen you will see your new RAID 1 set listed with a state of Clean & degraded. Normally, this would indicate a possible drive failure, however in this instance it is expected.

Step 3: The next step is to create a new volume group using the physical volume created during the previous command line steps. Click on the Volume Groups link in the Volumes section.

Create your new Volume Group and select the physical volume to be added to the group. Then click the Add volume group button.

Notice that now your new volume group shows up in the Volume Group Management section.

Step 4: Next you need to add a new volume to your newly created volume group. You can do this on the Add Volume screen. Click on the Add Volume link in the Volumes section.

On the Add Volume screen, setup your new volume and click the create button. The suggestion from Openfiler is to keep the default filesystem (which currently is xfs). Also, make sure to increase the volume size or Required Space to create the desired volume. In my case I am going to select the maximum available space.

The system now sends you to the Manage Volumes screen which will show you that you now have a volume group containing one volume.
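
For reference, the rough command-line equivalent of these console steps is sketched below; the volume group and logical volume names (vg_media, lv_media) are made up here, and Openfiler will use its own naming when you work through the web console:

vgcreate vg_media /dev/md0
lvcreate -l 100%FREE -n lv_media vg_media
mkfs.xfs /dev/vg_media/lv_media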

At this point you can copy all your data over from the old LVM setup into the new LVM-on-RAID-1 setup. This may take some time. In my case, given that my hardware is far from the latest and greatest, it took just over an hour to copy 900GB of data.
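
One straightforward way to handle the copy, assuming the old and new volumes are mounted at the hypothetical paths /mnt/old_vol and /mnt/new_vol, is an rsync run along these lines:

rsync -avH /mnt/old_vol/ /mnt/new_vol/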

Adding the empty drive to the RAID 1 set

The next steps assume that you have migrated your data from the last remaining volume group of the old setup into your degraded RAID 1 set. At this point we are going to add the freed-up disk into the RAID set and start syncing the disks.

Step 1: The first thing you need to do is to make sure that the partitioning structure of the new disk matches the structure of the existing RAID 1 member. In a RAID set the partition structure needs to match.

This can be easily accomplished using the following command:

[root@mangrove ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

The output from the command should look something like this:

Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+ 121600  121601- 976760001   fd  Linux raid autodetect
/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1            63 1953520064 1953520002  fd  Linux raid autodetect
/dev/sdb2             0         -          0   0  Empty
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

As long as the process doesn’t throw any wonky error messages then you are good to move on to the next step.

Step 2: Now we need to add the disk /dev/sdb into the RAID 1 set /dev/md0.

[root@mangrove ~]# mdadm --manage /dev/md0 --add /dev/sdb1

If the command completes successfully, you will get the following result:

mdadm: added /dev/sdb1

Step 3: At this point the system should automatically begin syncing the disks. To check on this, run the command:

[root@mangrove ~]# cat /proc/mdstat

The result should look like so:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[2] sda1[0]
      976759936 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (535488/976759936) finish=182.3min speed=89248K/sec

unused devices: <none>

Notice that it now shows that the RAID 1 set is in recovery mode. This indicates that the set is in the process of being synchronized. During this process the data shown on the RAID Management screen in the Openfiler web console will show that the state of the RAID is Clean & degraded & recovering. It will also show the progress of synchronization.

Step 4: Once the RAID set has finished syncing the mdstat results will look as follows:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976759936 blocks [2/2] [UU]

unused devices: <none>

As you can see, the RAID identity now shows that there are 2 volumes in the set with /dev/sda1 being the primary and /dev/sdb1 being the mirror. Also, it now shows that there are 2 fully synced disks, indicated by [2/2] [UU].

In the Openfiler web console the RAID Management screen now shows the state of the array as Clean with a sync status of Synchronized.

Final Steps

Step 1: Now that you have a completely clean RAID 1 set, you will need to ensure that the /etc/mdadm.conf file has the correct information concerning the array. Normally this is created automatically for you by the Openfiler administration tool; however, since we built the array by hand, we will need to add this information into the existing file. (Note: back up the existing mdadm.conf file!)

This can be accomplished by the following commands:

[root@mangrove ~]# cp /etc/mdadm.conf /etc/mdadm.conf_orig
[root@mangrove ~]# mdadm --examine --scan >> /etc/mdadm.conf

The mdadm.conf file now contains the following (Note: the UUID entry will be unique to your system):

#
# PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!
#   This configuration file was auto-generated
#   by Openfiler. Please do not modify it.
#
# Generated at: Sat Jul 24 19:43:42 EDT 2010
#

DEVICE partitions
PROGRAM /opt/openfiler/bin/mdalert
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=542fa4dc:c920dae9:c062205a:a8df35f1

Testing the RAID set

Testing the RAID set is the next step that I would recommend. Now, if something really bad(TM) goes wrong here, then you could end up losing data. If you are extremely worried, make sure that you have a backup of it all somewhere else. However, if you don’t actually test the mirror, then you don’t know that it all works, and you could be relying on a setup that will fail you when you need it the most.

Step 1: Get the details of your existing RAID set prior to testing the removal of a device; that way you will have something to compare it to.

[root@mangrove ~]# mdadm --detail /dev/md0

This should yield some valuable data on the health of the array.

/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jul 24 19:55:44 2010
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul 25 13:41:07 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 542fa4dc:c920dae9:c062205a:a8df35f1
         Events : 0.3276

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Step 2: The next step is to actually remove a device from the array. Now, since /dev/sda was the initial disk I put into my set, I am going to remove it and see what happens.

To remove the device from the array and mark it as failed, use the following command:

[root@mangrove ~]# mdadm --manage --fail /dev/md0 /dev/sda1

You should receive the following result:

mdadm: set /dev/sda1 faulty in /dev/md0

Step 3: Check the status of the array by using the following command:

[root@mangrove ~]# cat /proc/mdstat

You will see that the device /dev/sda1 is now marked as failed and that the status of the array is degraded:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sdb1[1] sda1[2](F)
      976759936 blocks [2/1] [_U]

unused devices: <none>

Also, in the RAID Management screen in the Openfiler web console you will see the state of the array change to be Clean & degraded.

Step 4: The next step is to test the mirroring of the data. What you should do now is to remove the drive that you marked as failed, then add the drive back into the mirror. In a real world scenario you would also replace the actual drive itself, however that is not necessary in this test. Also, if you were replacing the actual drive you would need to repeat the duplication of the partitioning structure.

First, remove the failed drive from the set /dev/md0 using the following command:

[root@mangrove ~]# mdadm --manage --remove /dev/md0 /dev/sda1

The result that you get back will show a hot remove since this command was executed while the system was live. In an enterprise environment where the storage array supported hotplug devices you could then replace the failed drive without shutting the system down.

Next, add the new drive into the array like so:

[root@mangrove ~]# mdadm --manage --add /dev/md0 /dev/sda1

The result from this command will be:

mdadm: re-added /dev/sda1

Step 5: If nothing has gone horribly wrong with the test, the array will now begin the mirroring process again. As you can see from the output of mdstat, the recovery process will be indicated in the same manner as the initial mirror was:

[root@mangrove ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[2] sdb1[1]
      976759936 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (834432/976759936) finish=175.4min speed=92714K/sec

unused devices: <none>

Note: this is not a smart recovery process. When you break the mirror, the entire mirroring process has to complete, even if it was a test where the data had not actually disappeared. As you can see, the recovery process is going to take about the same length of time that initial build of the mirror did.

Also, in the RAID Management screen, the state of the array will now be shown as Clean & degraded & recovering, just as before when we built the mirror in the first place.

Step 6: Once the mirroring process of the testing has completed, the results of mdstat should look similar to the following:

Personalities : [raid6] [raid5] [raid4] [raid10] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976759936 blocks [2/2] [UU]

unused devices: <none>

If you had actually replaced a drive you might need to update mdadm.conf to reflect the changes. At the very least it is wise to run the scan command again to ensure that the UUID of the array matches the file after the rebuild has completed.
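
A quick way to do that check, assuming the default /etc/mdadm.conf location, is to compare a fresh scan against the ARRAY line already in the file:

[root@mangrove ~]# mdadm --examine --scan
[root@mangrove ~]# grep ARRAY /etc/mdadm.conf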

Final Thoughts

One of the big lessons that I learned from this whole process, both using Openfiler and handling storage arrays in general, is that more thought and planning on the front end can save you a lot of tedium later. Had I initially planned my storage setup, I would have configured the drives in a RAID 1 set from the beginning. Another thing I learned is that while managing your NAS with an appliance type of setup (whether you use ClarkConnect, FreeNAS or Openfiler) is a great convenience, it doesn’t give you the insight and understanding that can be gained by doing everything, at least once, the manual way. I now have a much better understanding of how the software RAID functions work in Linux and of the LVM process as a whole.

References

This whole process would have been much more difficult had it not been for the input of a friend as well as a series of postings on the Internet. Thanks, Joe for your help on this, given that it was my first foray into software raid on Linux.

May | 2010 | arfore dot com

Lately I have been taken with watching foreign films.  Some of them are ones I have seen before, but most of them are new to my collection.

Unfortunately many foreign films are not available in Region 1 (or NTSC) format.  If you are lucky enough to know how to rip a DVD and watch it on your computer then you are able to watch them, however most DVD players sold in the United States are region locked.  If you use a Mac then you can change the region encoding on the DVD player application, but only five times, which makes it damned inconvenient.  If watching them on your computer isn’t working for you then take a look here for some region-free DVD players.

Here are a few of the ones that I have screened over the last few weeks:

What are some of your favorite foreign films?

September | 2009 | arfore dot com

I noticed something today in the new iTunes Store interface.  When you hover over a song in the store you are presented with a nifty play icon that replaces the track number in the album listing.  This is quite similar to the iTunes Store interface functionality on the iPhone/iPod Touch OS.  Clicking on the play icon or double-clicking on the song title starts the 30-sec preview of the track.

Just like the iPhone version, the new iTunes Store desktop interface then displays a round blue icon containing the ubiquitous stop square, with the progress of the 30-sec preview rotating around it in a contrasting blue color.

Clicking on the stop square does not always stop the playback of the preview.  What should happen when you click on the stop icon is that the preview ceases to play and the icon goes away, to be replaced once again by the track number.  On some albums in the store this works.  On other albums the icon does revert back to the track number, but the preview continues to play until it finishes or until you hit the pause button in the iTunes window.  Also, when you let the preview play out to the end, the stop icon does not disappear either; to release the icon you must click the stop button even though the preview has completed.

This definitely seems like a bug in the interface.  I have confirmed this in both the Mac OS X and Windows versions of iTunes 9 running on Snow Leopard and Windows Vista, respectively.

Finding the right case for your iPhone can be a challenging and somewhat frustrating process.  Not only do you have to contend with the sheer number of case types, but you also have to balance the needs of your particular listening and working environments.  If you are like me, you may have found that you actually need more than one type of case.  While it would be nice to have the ultimate iPhone case that I could comfortably and easily use in any situation, I have yet to discover it.

Recently I purchased an Otterbox iPhone 3G Defender case for use with my iPhone 3GS.  The main motivation behind this particular purchase was the ruggedness of the case.  Next summer I am going to be riding a self-supported bike tour with a couple of friends in Pittsburgh, so I was in the market for a case that could handle the shocks, drops and dust that I would encounter both on the tour and while training for it (man, do I ever need to start the training).

My daily driver of a case to this point has been a red and black (Goooo Dawgs!) iFrogz Luxe.  This is a very nice case that adds minimal bulk to the iPhone design while providing a basic level of protection from scuffs and bumps that can occur during average daily use.

While the iFrogz Luxe turned out to be great for a daily case, it became rapidly apparent that it was not going to withstand the rigors of an extended bike tour and the training process.  After determining this, I turned to the Otterbox.  Otterbox is known for making very rugged cases, waterproof cases, and waterproof equipment boxes.

Otterbox states that the iPhone 3G Defender is not intended for protection against water intrusion, due to its openings for the microphones and speakers of the iPhone 3G design.  That being said, a friend who also has one said that it will protect your phone from an occasional spill, like when someone knocks over a coke on the table at a meeting.  I can personally attest to the drop and bump protection, having purposefully dropped my encased phone onto a concrete sidewalk from a height of three feet.  (Not recommended for the faint of heart!)

I really liked the additional grip that the case provides.  Sometimes the slick plastic back of the iPhone 3G and 3GS can be a little hazardous.  The buttons are fairly easy to operate even while encased in the polycarbonate shell and silicone rubber cushioning.  All of the ports, with the exception of the speakers and microphone, are firmly covered with silicone rubber flaps that interlock into the plastic shell when not in use.  This is great, since the water sensors on the 3G and 3GS are located in the headphone jack and inside the dock connector port.  With the openings firmly covered and protected, it is possible to fudge a little on reporting water damage when attempting to get a warranty or AppleCare replacement.

If you want to dock your phone while in the 3G Defender, however, you may be out of luck depending on the dock connector design.  Due to the nature of the case design, there is a fairly deep recess that has to be navigated in order to connect anything to the dock connector.  A cable or two won’t be a problem, but if you use a device like the iHome or a car mount then you will most likely be out of luck, unless you buy something like the iStubz from CableJive.

Another problem you may run into has to do with the sheer extra bulk added by the case.  I frequently use my iPhone while in my 2007 Toyota Tundra, both for music and for navigation.  I mounted my iPhone on the console in place of the ashtray using a mount and device holder combination from ProClip.  While the combination is a bit pricey, I like their product choices.  Fortunately my device holder is adjustable enough to hold the 3G Defender case, but unfortunately the dock connector plug does not extend high enough to connect with the iPhone while in the case.

Beyond those two issues, which are fairly easy to overcome, I am still having trouble getting used to the confinement of the screen itself.  The 3G Defender enclosure leaves all of the screen usable, but some functionality is tricky when using the onscreen keyboard and sliders.  This will be especially noticeable to those of us who don’t trim our fingernails all the way to the quick.  I know that many of my female friends, as well as some males, will find that the edges of the case get in the way.  The one application feature I am having the most trouble with is the address bar in mobile Safari.  When using Safari and trying to get the browser to re-display the address bar, I find myself having to use the sides of my fingertips instead of the ends of my fingers.

I would judge that the 3G Defender is a great case for use in a physically demanding environment.  I am not completely sold on its use in an average daily environment that doesn’t involve lots of physical abuse.

Pros

  • shock protection
  • dust protection
  • better overall grip (especially for individuals with larger hands)

Cons

  • dock connector recessed farther than desired
  • added bulk may make accessories unusable without additional cabling
  • some on-screen functionality can be impaired due to the side of the case surrounding the screen

Overall I would say this is an excellent case and well worth the price being charged for it.  Paying $50 to protect your $400 investment is a no-brainer.