Interesting aspects to Fringe | arfore dot com

Fringe is a new series on the Fox network that uses a new format Fox dubs Remote-Free TV.

The episodes are longer than your average sitcom, and the Wikipedia entry on the show states that the series will have fewer commercials and promos.

While this is true, one of the interesting bits is that before each commercial they tell you how many seconds should pass before the next segment starts.

Another interesting aspect of the series is how each location is identified during an episode.  Most shows just print plain text on the screen; Fringe uses 3D text that is oriented differently depending on the scene.

Customizing the Client Episode I “Welcome Page” | arfore dot com

As with many software packages that we deploy, system admins quite often get called upon to customize the application to suit the needs and brand of the enterprise they work for.

When I started deploying the Adobe Contribute client, customization wasn’t really on the radar. But as we got further into our deployment and support phases, it was clear that the default screens just weren’t cutting it when it came to clarity of use.

For instance, most of our users connect to the enterprise Contribute Publishing Server. In order to do this they have to enter a custom connection string after opening the client for the first time after installation. As is normal with this sort of deployment, some users didn’t read the instructions provided to them, or failed to remember the special connection string used in the training classes.

Well, on the default welcome page in Contribute there is a handy button labeled Create Connection. This would be fine if we weren’t using the Publishing Server to maintain user permissions and privileges, but it doesn’t cooperate with our normal setup.

By customizing the welcome page I was able to remove the nice large link to creating your own connection.

Here’s what you need to know in order to do this. The welcome page is built on the fly from a Dreamweaver template.

Mac File Location

  • /Applications/Adobe Contribute CS3/Configuration/Content/CCWelcome

Windows File Location

  • C:\Program Files\Adobe\Adobe Contribute CS3\Configuration\Content\CCWelcome
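
Before making any edits, it is a good idea to keep a copy of the stock template so you can get back to Adobe’s defaults. A minimal sketch for the Mac side, assuming the default install path above (the template filename inside CCWelcome can vary by Contribute version, so list the directory to find it):

# Keep an untouched copy of the stock welcome page content before customizing it
SRC="/Applications/Adobe Contribute CS3/Configuration/Content/CCWelcome"
cp -R "$SRC" "$HOME/ccwelcome-backup-$(date +%Y%m%d)"

# Find the Dreamweaver template file(s) to edit
ls -l "$SRC"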

software | arfore dot com

While re-loading the OS and apps on my iMac at work, I ran into major issues whilst updating MS Office 2008.  When running the first update, Office 2008 SP1 (12.1.0) I had no problems, however none of the other updates would run.  I kept getting the error “You cannot install Office 2008 Updates on this volume. A version of the software required to install this update was not found on this volume.”

At first I thought that this might be due to some permissions shenanigans revolving around my AD/OD setup, since the logged in user was not a local admin, but had been granted administrator privileges through a nested group trick.

After more searching I ran across a post on the MacRumors.com forums pointing out problems when running updates on an Office 2008 install that had been altered by using Monolingual or XSlimmer.

Both of these programs were developed to slim down the sizes of binary applications on OS X.  Monolingual strips the “additional languages” from OS X programs and operating system files, while XSlimmer is designed to remove both the extra language information and the unused binary code in a fat binary.  I have never used either of these programs, since I was not concerned about the amount of disk space they utilize.

After more searching, I ran across a post in the Entourage Help Pages discussing troubleshooting Office 2008 installations.  While this page also mentioned issues with installations being altered by Monolingual and XSlimmer, it also pointed out an issue with a workaround created to handle a bug in how Safari deals with the docx file extension.  While the Automator workflow mentioned does not appear to affect anything other than the names of files, it did jog my memory about something else Safari-related that occurred when installing Adobe CS 4 earlier the same day.

While installing CS 4 and the available updates, I was prompted to not only quit Safari, but also to quit XMarks for Safari.  For those that don’t know, XMarks is a great service for synchronizing your browser bookmarks between multiple machines, platforms, and browsers.

On a hunch I quit XMarks for Safari, as well as the browser itself.  No dice, I still got the error.  Knowing how easy it would be to reinstall the helper application, I uninstalled XMarks.  Eureka!  The Office updaters now ran without a hitch.  So, if you are having this problem, try deactivating or removing any plugins that affect the default behavior of Safari.

So this morning I fired up my iMac at work to continue on with editing this PHP form I have been working on.  Now I usually use TextMate for my daily editor, since it is very lightweight.

Since I hadn’t actually created a TextMate project file, I just selected all the files and opened them using “Open With” in the context menu.  Normally I ignore the fact that the Opera browser is listed in the menu, but this time I saw it twice, so I decided to find out where those copies live.

It turns out that the current versions of Adobe Device Central CS4 and Adobe Bridge CS4 ship with Opera inside their application bundles.  Opera 9.27 is inside the Adobe Bridge CS4 application bundle, while Opera 9.20 is inside the Device Central CS4 application bundle.

I can understand why Adobe might need to ship Opera inside their application bundles to make their apps work, but I really wish that the Mac OS wouldn’t see them as usable outside the Adobe usage.
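
If you are curious whether your own install has the same embedded copies, a quick search of the bundles will turn them up. This is just a sketch; the paths are assumptions based on a default /Applications install of CS4:

# Look for Opera.app copies embedded inside the Adobe CS4 application bundles
find "/Applications/Adobe Bridge CS4" "/Applications/Adobe Device Central CS4" -type d -name "Opera*" 2>/dev/null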

UPDATE (2009-04-26 7:06PM EDT): Apparently I was mistaken.  When poking through the preferences of ClamXav in order to restructure my watch folders, I noticed a checkbox that I had overlooked.  Apparently you can add the login item from within the main application.  However, it still doesn’t start the Sentry app when adding the item.  You have to manually click the “Save settings & Launch Sentry” button.

Recently I have bowed to the necessity of installing antivirus software on my Mac, both at work and at home.

In investigating the possibilities I decided to try out the open source antivirus solution ClamAV.  While I tend to gravitate towards commercially supported security products when possible, I currently don’t have the extra money to spend on the Intego VirusBarrier product, and the budget at work is quite strained, as are budgets for most people.

I like the ClamXav frontend for the ClamAV engine.  I know that I can do all the scanning functions from the command line, but I am a fan of GUI frontends because they are often more user-friendly.

ClamXav is a nice frontend.  The only problem I have with it is that there is no built-in mechanism to launch the Sentry program at user login.  The ClamXav Sentry application is contained in the Resources folder inside the Contents of the ClamXav application bundle.  Below are the steps to add the application as a login item.

Adding ClamXav Sentry as Login Item

1. Open System Preferences from the Apple Menu

Open System Preferences

2. Open Accounts Preference Pane

System Preferences

3. Select Login Items

Login Items

4. Click the plus sign button at the bottom of the Login Items list.

5. When the dialog window comes up, hit the Command + Shift + G keyboard combo.

6. In the window type the following:

/Applications/ClamXav.app/Contents/Resources/

then click the Go button.

Enter the file path to the Resources of the ClamXav bundle

7. Select ClamXavSentry.app from the list and click the Add button.

Select the Sentry app

8. Congratulations, you have successfully added the ClamXav Sentry as a login item.

Login Item Added

I also wrote an AppleScript application that will add the login item for you.  The benefit of using my utility is that it launches ClamXav Sentry after adding the login item.  You can download a zip file containing both the application and the script file.
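
If you would rather script the whole thing yourself, the same result can be had from a terminal by driving System Events with osascript. This is a minimal sketch, assuming ClamXav is installed in /Applications and the Sentry app lives at the path used in the steps above:

# Register ClamXav Sentry as a login item via System Events, then launch it now
SENTRY="/Applications/ClamXav.app/Contents/Resources/ClamXavSentry.app"
osascript -e "tell application \"System Events\" to make new login item at end with properties {path:\"$SENTRY\", hidden:false}"

# Start Sentry immediately instead of waiting for the next login
open "$SENTRY"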

My last two posts, Starting NRPE via launchd and Nagios NRPE on OS X Server 10.5, concerned getting NRPE to run on OS X Server 10.5 and having it startup at system boot.

However, this is only part of the battle.  Once you have Nagios monitoring set up on your server, you also need some good options for checking the availability of your running services.

Tim Wilson from the Savvy Technologist wrote an NRPE plugin that helps out with this.  The check_osx_services plugin does an excellent job of checking the status of many services running on 10.5 Server.

The documentation on the plugin at the NagiosExchange site is pretty thorough.  One thing that is not mentioned is that you will need to run the check_osx_services script as the superuser, since it calls the system-level command serveradmin, which must be run as root.
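
One way to satisfy the root requirement without running the whole NRPE daemon as root is to call the plugin through sudo. The snippet below is only a sketch; the nagios user name and the plugin path are assumptions that depend on how NRPE was installed on your server:

# /etc/sudoers (edit with visudo): let the nagios user run the plugin as root
nagios ALL=(root) NOPASSWD: /usr/local/libexec/nagios/check_osx_services

# nrpe.cfg: call the plugin through sudo so serveradmin runs with root privileges
command[check_osx_services]=/usr/bin/sudo /usr/local/libexec/nagios/check_osx_services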

One of the small annoyances I have with Firefox is the default URL used for the Google search plug-in.  While I generally just type in a search term and hit enter, I do sometimes just hit enter without a corresponding search term just to get sent to the main Google page.  Why do I do this?  Mainly so that I can view the updated Google logos when they change for holidays.

With a default installation of Firefox the default Google page is the Mozilla Firefox Start Page.  While this is nice from a corporate branding sense, this special page does not have links to either iGoogle or the Google Accounts login page, nor does it feature the often-customized Google logo.  Also, none of the other search plug-ins that I have tested in Firefox exhibit a similar “feature”; they all dump you at the default page for that particular service.
Here’s how to change all of that.

Firefox 2.x for Mac OS X

  1. Quit Firefox.
  2. In the Finder, navigate to /Applications
  3. Right-click (or control-click) on Firefox.app and select Show Package Contents from the context menu
  4. In the window that comes up, navigate to Contents -> MacOS -> searchplugins
  5. Open the file named google.xml in your favorite text editor
  6. Change the value of the XML element named SearchForm as follows:

    Default:  http://www.google.com/firefox
    Changed: http://www.google.com

  7. Save the file and start Firefox.

Firefox 2.x for Windows

  1. Quit Firefox.
  2. In Windows Explorer open the following directory C:\ -> Program Files -> Mozilla Firefox -> searchplugins
  3. Open the file named google.xml in your favorite text editor
  4. Change the value of the XML element named SearchForm as follows:

    Default:  http://www.google.com/firefox
    Changed: http://www.google.com

Voila!  Now the Google search plugin for Firefox behaves the way many of my friends would logically expect it to.

Note that this mod will have to be reapplied after each successive update of the Firefox application, so it may not be to your taste.
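
Since the edit has to be redone after every update, it is handy to keep it in a small script. Here is a minimal sketch for the Mac path described above, assuming the stock SearchForm value is still present (it makes a backup copy of google.xml first):

cd /Applications/Firefox.app/Contents/MacOS/searchplugins
cp google.xml google.xml.bak
sed -i "" "s|http://www.google.com/firefox|http://www.google.com|" google.xml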

Passing Browser Check on Luminis 3 with Firefox 2 | arfore dot com

Those of you out there who are running an installation of SCT Luminis 3 may have noticed that the browser check always comes up warning you that the browser is unsupported when using Firefox 2, even though all the features seem to be completely supported.

This is because the browser-check JavaScript does not know about the new user agent string that was introduced with Firefox 2. Generally a new release or service pack for Luminis fixes this for newer browsers.

In order to change this you will need to alter a couple of files in your Luminis install.

The two files that need to be altered are:

  1. webapps/luminis/js/clientsniffer.js
  2. /webapps/luminis/WEB-INF/templates/portal/browserchk.thtml

clientsniffer.js

In this file you will need to alter the conditional of the big if-statement that follows the assignment for the variable is_nav5.

The problem is that the if checks for the existence of a revision number of 1.8. What you need to do is add an additional check for a revision number of 1.8.1.6. So the if-statement conditional becomes:

if (is_nav5 || agt.indexOf("rv:1.7.12") != -1 || agt.indexOf("rv:1.8") != -1 || agt.indexOf("rv:1.8.1.6") != -1)

The next thing to do is to add an additional Firefox variable that is set to true if the major number is 2. I added this after the existing variable is_fox1_5.

var is_fox2 = (is_fox && (is_major == 2));

browserchk.thtml

In the browserchk.thtml file you need to alter the if-statement that sets the variable supported to a true value. This if-statement should follow immediately after the one that checks whether Java is enabled in your browser.

What you need to add is an additional OR check, so that the if-statement conditional looks like the following:

if ((is_nav8) || (is_nav7) || (is_moz1_7) || (is_win && is_ie5up) || (is_win && is_ie6) || (is_saf1_3) || (is_fox1_5) || (is_fox2) || (is_win && is_fox1))

I have tested this change with Firefox 2.0.0.6 on the following operating systems:

  • Mac OS X 10.4.10
  • Windows XP SP2
  • Windows Vista
  • Ubuntu 6.10


geeky | arfore dot com | Page 2

I have been updating my wallpaper to a new monthly desktop wallpaper from the Smashing Magazine site for several months.

With earlier versions of Mac OS X it was easy to update all spaces at once because the default change action affected all the spaces. With the advent of Mac OS X 10.7 (aka Lion) each space is capable of having a unique wallpaper. While this is a neat feature, there is no option to apply the change to all the spaces. One workaround is to manually change each space.  Another workaround is to remove all your spaces, make the change, then add the spaces back.

Neither of these options suits me. The first is fairly cumbersome, and the second will undo my application-to-space bindings. To solve this problem I have written a script that handles it.

Here’s the script:

#!/bin/bash

# Simple Script to update the desktop wallpaper
# background for all desktop spaces in Mac OS X 10.7
#
# Usage: update_desktop_wallpaper.sh old_wallpaper_path new_wallpaper_path
#
# Andy Fore
# http://arfore.com

# Check for command line arguments
if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Usage: update_desktop_wallpaper.sh old_wallpaper_path new_wallpaper_path"
    exit 1;
else
    # Change location to the active user preferences directory
    cd ~/Library/Preferences

    # Backup the original plist file
    echo "Making a backup of the original plist file..."
    cp com.apple.desktop.plist com.apple.desktop.plist_backup

    # Convert the desktop plist from binary to xml
    echo "Converting plist file to text format..."
    plutil -convert xml1 com.apple.desktop.plist

    # Update the desktop wallpaper file location/name
    echo "Editing the file..."
    sed -i "" "s/$1/${2}/g" com.apple.desktop.plist

    # Convert the desktop plist back to binary format
    echo "Converting plist file back to binary format..."
    plutil -convert binary1 com.apple.desktop.plist

    # Restart the Dock process so it reloads the plist
    echo "Sending the kill signal to the Dock process to force reload of plist"
    killall -HUP Dock

    # Display completion message
    echo "Operation now complete."
fi

Example

$ ./update_desktop_wallpaper.sh June2012_Calendar.jpg July2012_Calendar.jpg

Note that I only used a filename in the example. This is because all of my calendar wallpapers are saved in the same directory path, making the unique part just the filename itself. It also keeps the sed expression simple, since a full path containing slashes would conflict with the / delimiter used in the script.

Editor’s Note: This article is part of the Tales of A Linux Switcher series.

In my search to make the complete switch from the Mac OS (see Tales of a Linux Switcher – Part 1), the biggest research effort has been finding applications that accomplish the same tasks in Linux.  Some of these tasks are pretty obvious, e.g., web browsing or email, while others are not quite so ordinary, e.g., filesystem encryption or software development.

So, with all of that in mind, the subject of this particular post is going to be a discussion of some of the common tasks that I set out to handle and the application I chose to fit the bill.

Chapter 2 – Getting it done

When everything is said and done, the important part of using any desktop (or server really) OS is getting what you need to do accomplished.  The tasks can be office productivity or software development or just casual web surfing.

The arguments about which OS is better, more secure, more extensible, or more “free” are all great and wonderful, but in the end what matters is getting it done.  There are some people that believe that software being free is top priority, while others (like myself) are not as concerned over whether the software is free, cheap, open source, or proprietary, as long as it works to get from point a to point b.

Don’t get me wrong, I like open source software, and it’s even better when it’s FOSS (free, open source software), but when it all shakes out I want a computer setup that I can rely on from day-to-day to do what I need it to do.

Chapter 3 – It’s all about the apps

So in my quest to get to point b, I have found that there are generally a number of application choices in Linux to accomplish the tasks I handled in the Mac OS ecosystem.

Some of the application choices were easy options, like LibreOffice in place of MS Office 2011, while others required more research to replace, e.g., iTunes, 1Password, etc.  With each choice I have tried to find an alternative that gave me the closest experience in terms of usability and feature set to the application being replaced.

When looking for alternatives I used Google for basic searching, but I also found the following sites to be of use:

Using those sites in combination with various forum posts and basic searches, I have been able to find software to do most everything I was doing on Mac OS X.  Bear in mind that sometimes it’s not quite as easy to set everything up, but I took that as a challenge.  There are some instances that presented particular challenges.  I will be posting on those individually as time permits.

To see the list I have personally come up with, have a gander at my Linux Switcher Software Choices spreadsheet.

So here at work we are running SGHE’s Banner Student Information System.  Part of the integration with the eFollett online bookstore isn’t working quite the way we want due to a bug that will not be fixed until Banner release 8.5, which we won’t have until sometime after classes start.

Since we wanted the ability to look up books for a class now, we had to create a system that builds the correct URLs for the eFollett system programmatically.

The way we did it was to include an anchor tag as custom text within the Banner module.  The href attribute of the anchor tag contains an inline Javascript function that is used to pull the querystring parameters from the current Banner URL and pass that off to a separate system that will handle the redirection to the appropriate eFollett URL.

Too bad you have to be logged into the Banner account for it to work, since the query string is only available to an authenticated user.

The inspiration for this was the blog post Read URL GET variables with JavaScript by Ashley Ford.

As some of you will no doubt have noticed over the years, I am a die-hard Macintosh fan.  I have run Windows desktops and servers, as well as Linux desktops and servers over the years, but my true love has always been the Apple Macintosh computers.  So it is with some trepidation that I have faced the situation that I no longer have any Macintosh computers of my own.

While the situation was not anticipated, I have faced it head on and am rapidly on my way to filling all my computing needs with the Linux desktop that I have.  This is the first of several posts where I will document that process and the solutions that I have come up with to achieve the same goals in my personal computing experience with Linux that I did with the Mac.

Chapter 1 – Choosing a distribution

As a long-time Linux user, dating all the way back to running a specialized distribution of Red Hat on the 486 PC card in my PowerPC 6100 like some other folks, I am well acquainted with the passionate arguments that can arise among Linux aficionados when the topic of choosing a distribution comes up.

In the beginning many of the arguments centered around the needs of various kernel configurations and packaging systems.  Do you compile your kernel by hand?  Do you go modular or monolithic?  Is RPM a better choice than deb?  Do you go hard core and start a stage 1 Gentoo install where you have to bootstrap the kernel just to compile and install?

Some of these decisions will be familiar to you and some won’t be.  Many of the old arguments don’t apply anymore due to major improvements over the years.  Ofttimes the new arguments center around free vs. non-free, Gnome 2 vs. Gnome 3, Gnome vs. KDE, etc.

With all of this in mind, I developed a rather simple set of criteria based on my personal experience with the philosophy Apple has espoused in its ad campaigns of “it just works.”  Here’s the list I came up with:

  1. Community involvement
    With any OS choice, it is very important that there be a large community of users, comprised of multiple skill levels, that can provide innovative solutions and workarounds for usability problems that can be encountered.
  2. Multiple update tracks
    While having a stable only release track makes sense for a production-level environment, as a tech-enthusiast and a geek it is great to have access to testing and unstable release tracks when you want to try something on the bleeding edge.
  3. Robust driver support
    It was important that recent hardware support be available. I don’t want to have to wait until a major point release to get something as important as a network card working.
  4. Eye candy
    Yes, I know that to a lot of die-hard UNIX guys, the concept of eye candy being a major bullet item for picking a distribution is nuts, but coming from the Macintosh environment, which is arguably one of the most visually appealing, it was important.

After doing a large amount of research and testing numerous live CDs, I settled on Linux Mint 13 with the Cinnamon desktop environment.  Linux Mint is an Ubuntu-based distribution, which means it traces its genealogy back to the grand old distribution of Debian.

Ubuntu is known for having an extremely active community base, and it has become the distribution of choice for many hardware vendors outside of the server market that are looking to pull Linux users into their product lines.

Being an Ubuntu/Debian-based distribution, there are lots of opportunities to run bleeding-edge software when you want to.  For example, Oracle’s Java 7 Update 4 is available as a package through a PPA repo.

Also, since Linux Mint 13 is Gnome 3-based with the sleek, modern-looking Cinnamon environment on top, there is plenty of eye candy to go around.

References

  1. Fischba, S. (1997, June 06). Running linux on ppc/486 card?. Retrieved from http://www.linuxmisc.com/7-freebsd/2fd450d75fd55344.htm
  2. Lagna, G. (2010, April 23). Apple’s ad campaign, a brief history… Retrieved from http://www.macgasm.net/2010/04/23/apples-ad-campaign-a-brief-history/
  3. Linux Mint – from freedom came elegance. Ubuntu-based Linux distribution. http://www.linuxmint.com/
  4. Cinnamon – Love your Linux, Feel at Home, Get things Done! Window manager for Linux. http://cinnamon.linuxmint.com/
  5. Andrei, A. (2012, January 17). Install oracle java 7 in ubuntu via ppa repository. Retrieved from http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html

Recently I ran into an issue with several websites and their functionality, or lack thereof, on Mobile Safari in iOS 4.3.3 on the iPad.

Mobile Safari doesn’t give you much in the way of native debug tools.  There is a debug console, which will display, at least in theory, any CSS, HTML or Javascript errors.

The only problem is that it won’t actually display all HTML errors.  For instance, the problem I ran into was an HTML tag mismatch between an opening H2 and a closing H3.  Mobile Safari on iOS 5.1 displayed the page as designed; however, on iOS 4.3.3 the bad closing tag was omitted, which meant that all the children of that H2 had the CSS style “hidden” applied to them due to a class assignment.

You would think that this might trigger an error code in the debug console, however no such error occurred, and using the Safari iOS 4.3.3 – iPad user agent in desktop Safari on Mac OS X did not exhibit the error.

In searching for a tool to assist with debugging this problem natively on the iPad I ran across a great bookmarklet by Mark Perkins, called Snoopy.

This bookmarklet gives you all kinds of nifty information about the page you are looking at, including a view of the generated source.  Thanks to this tool I was able to find out exactly what was breaking the display on the iPad.

ubuntu | arfore dot com

Editor’s Note: This article is part of the Tales of A Linux Switcher series.

As part of my on-going switch to Ubuntu 12.04 from Mac OS X, I ran into an issue where my cdrom device was not being mapped properly in the OS.

Everything works as desired except for one little thing: the eject key on the Apple Aluminum USB keyboard was not triggering the eject sequence of the built-in slot loading SuperDrive.

I assumed that there would be a device mapped to the actual drive using a link to /dev/cdrom.  This didn’t turn out to be the case.  When using the eject command from a terminal I received the following:

$ eject
eject: unable to find or open device for: `cdrom'

When I did a directory list to find any applicable cdrom device entries in the udev root (/dev) I got the following:

$ udevadm info --root
/dev
root@foreandy-iMac:~# ls -l /dev/*cd*
ls: cannot access /dev/*cd*: No such file or directory

In order to determine exactly which device was being used for the optical drive, I looked at the output from the system’s cdrom device entry:

$ cat /proc/sys/dev/cdrom/info
CD-ROM information, Id: cdrom.c 3.20 2003/12/17

drive name: sr0
drive speed: 24
drive # of slots: 1
Can close tray: 1
Can open tray: 1
Can lock tray: 1
Can change speed: 1
Can select disk: 0
Can read multisession: 1
Can read MCN: 1
Reports media changed: 1
Can play audio: 1
Can write CD-R: 1
Can write CD-RW: 1
Can read DVD: 1
Can write DVD-R: 1
Can write DVD-RAM: 0
Can read MRW: 0
Can write MRW: 0
Can write RAM: 1

The next step was to create the symbolic link in the device root to map cdrom to the appropriate device as listed in the above output:

$ sudo ln -s /dev/sr0 /dev/cdrom
$ ls -l /dev/*cd*
lrwxrwxrwx 1 root root 8 Jul 30 09:58 /dev/cdrom -> /dev/sr0

Now I can use both command line utilities to work with the optical drive as well as the built-in eject key on my keyboard.

If you want a lot more detail on this issue check out this bug comment.  While not specifically dealing with a Mac, the issues and solution are the same.
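
One caveat worth noting: since /dev is rebuilt at boot on Ubuntu 12.04, a symlink created by hand may not survive a restart. A udev rule is a more permanent way to get the same mapping. This is only a sketch, and the rule file name is arbitrary; sr0 comes from the /proc/sys/dev/cdrom/info output above:

# /etc/udev/rules.d/99-cdrom-symlink.rules
KERNEL=="sr0", SUBSYSTEM=="block", SYMLINK+="cdrom"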

Editor’s Note: This article is part of the Tales of A Linux Switcher series.

If you are a graphic designer or developer, or you just have a need to edit images, a mainstay of your Linux toolbox is likely to be the Gimp.

If you are coming from the Mac or Windows world, it is probable that you have used Adobe’s Photoshop program to achieve your image editing needs in the past.  Having used Photoshop and Gimp extensively over the past decade, I can tell you that one of the features I liked about the Photoshop environment on Windows has been the unified window.  All the palettes, toolbars and editing windows exist inside a single, unified window.

I always missed this when using Gimp on Linux (or on the other OSes as well, since Gimp is available for all three).  One of the main draws for me to the latest Gimp release, version 2.8, was this single line in the release notes:

GIMP 2.8 introduces an optional single-window mode.

Awesome! Of course, Gimp 2.8 is not in the current Ubuntu 12.04 repository (note: Ubuntu 12.10 has version 2.8 listed in the repository!):

$ apt-cache policy gimp
gimp:
 Installed: (none)
 Candidate: 2.6.12-1ubuntu1
 Version table:
 2.6.12-1ubuntu1 0
 500 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

Not to fear! Using the following set of commands you can successfully obtain the Gimp 2.8 software as well as a compatible version of the plugin registry:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo apt-get update

As you see from a policy check, after adding the repository and updating the cache, you will now be receiving the Gimp package and the updated plugin-registry from the new PPA:

$ apt-cache policy gimp
gimp:
 Installed: (none)
 Candidate: 2.8.0-1ubuntu0ppa6~precise
 Version table:
 2.8.0-1ubuntu0ppa6~precise 0
 500 http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ precise/main amd64 Packages
 2.6.12-1ubuntu1 0
 500 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
$ apt-cache policy gimp-plugin-registry
gimp-plugin-registry:
 Installed: (none)
 Candidate: 5.20120523-2ubuntu0ppa9~precise
 Version table:
 5.20120523-2ubuntu0ppa9~precise 0
 500 http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ precise/main amd64 Packages
 3.5.4-1 0
 500 http://us.archive.ubuntu.com/ubuntu/ precise/universe amd64 Packages

To install it now enter the following:

sudo apt-get install gimp gimp-plugin-registry

Now you have the most recent release!


Editor’s Note: This article is part of the Tales of A Linux Switcher series.

One of the things I have always hated about using Linux is the difference in the base font collection.  Many web designers still use the defaults of Arial, Verdana and Georgia.  The reason for this is that these fonts are available on the two main commercial operating systems, Mac OS and Microsoft Windows.

Until the majority of websites support webfonts like Google Web Fonts or Monotype’s fonts.com service, we still need access to the standard MS fonts. For more on this situation, check out the article A Web Designer’s Guide to Linux Fonts by Six Revisions.

Fortunately, these fonts are available for installation on Linux.  You can download them directly from the Sourceforge repository or look for the package in your particular distribution.

In Ubuntu you can install them from the Ubuntu Software Center by searching for the package named ttf-mscorefonts-installer or by using the following on the command line (Note: if you install from the command line then you will be prompted to accept the license agreement in an ncurses interface.):

sudo apt-get install ttf-mscorefonts-installer

Either way, you will end up with the following additional fonts:

  • Andale Mono
  • Arial Black
  • Arial (bold, italic, bold italic)
  • Comic Sans MS (bold)
  • Courier New (bold, italic, bold italic)
  • Georgia (bold, italic, bold italic)
  • Impact
  • Times New Roman (bold, italic, bold italic)
  • Trebuchet (bold, italic, bold italic)
  • Verdana (bold, italic, bold italic)
  • Webdings
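
Once the package finishes installing you can confirm that the new fonts are registered with fontconfig. A quick check from the terminal, assuming the fc-list utility is present (it ships with fontconfig on Ubuntu):

fc-list | grep -i "times new roman"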

To see the difference in the display of websites after the installation, check out the following screenshots from this website.

[Screenshots: website rendering before and after installing the MS core fonts]

As you can see, the use of these fonts definitely makes a difference.  Happy surfing!

rhel5 | arfore dot com

Recently, while configuring two new RHEL5 webserver nodes at VSU, I decided to install the Linux server component for the iStat program.  iStat is an iPhone app that allows you to view vital stats on your remote servers.  It also gives you a nice interface for both the ping and traceroute network tools.

The installation of the istatd daemon, an open source server for Linux, Solaris and FreeBSD, was quite easy.  The configuration file is easily understood and edited for your particular installation needs, allowing you to change the defaults when necessary.  For example, I did not want to add an additional user, so I configured istatd to use the monitor account I had created for use with Nagios.

One thing that was missing from istatd was an init script to allow for easily controlling the startup and shutdown of the daemon as well as determining what runlevels the daemon should be active in.

To solve this I wrote an init.d script.  These scripts are fairly self-explanatory.  I used the script that starts the xinetd service as my base, since I knew that this one checks to ensure that the networking service is active before it starts the service.

The location of both the binary and the configuration file may vary depending on the installation itself. When I built istatd I used the following configure command:

./configure --sysconfdir=/etc

Here’s how to install the script:

  1. Copy the file into /etc/init.d/
    cp istatd /etc/init.d/
  2. Make a symbolic link to this file in the rc.d directories for each runlevel specified in the script (note: this may differ based on the runlevels you use in the script; alternatively, let chkconfig create the links for you as shown below.)
    ln -s /etc/init.d/istatd /etc/rc3.d/S99istatd
    ln -s /etc/init.d/istatd /etc/rc4.d/S99istatd
    ln -s /etc/init.d/istatd /etc/rc5.d/S99istatd
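
Since the script carries a chkconfig header (runlevels 345, start priority 99, stop priority 01), you can also let chkconfig create and manage the runlevel symlinks for you instead of linking by hand. Assuming the script was copied to /etc/init.d/istatd as in step 1:

chkconfig --add istatd
chkconfig --list istatd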

Here is the script in its entirety, and you can download it as well.

#!/bin/bash
#
# istatd        This starts and stops istatd.
#
# chkconfig: 345 99 01
# description: istatd is a daemon serving statistics to your \
#              iStat iPhone application from Linux, Solaris & FreeBSD. \
#              istatd collects data such as CPU, memory, network and \
#              disk usage and keeps the history. Once connecting from \
#              the iPhone and entering the lock code this data will be \
#              sent to the iPhone and shown in fancy graphs.
#
# processname: /usr/local/bin/istatd
# config: /etc/istat.conf
# pidfile: /var/run/istat/istatd.pid

# Source function library.
. /etc/init.d/functions

# Get config.
test -f /etc/sysconfig/network && . /etc/sysconfig/network
test -f /etc/istat.conf

# Check that we are root ... so non-root users stop here
[ `id -u` = 0 ] || exit 1

# Check that networking is up.
[ "${NETWORKING}" = "yes" ] || exit 0

[ -f /usr/local/bin/istatd ] || exit 1
[ -f /etc/istat.conf ] || exit 1

RETVAL=0

prog="/usr/local/bin/istatd"

start(){
    echo -n $"Starting istatd: "

    $prog -d --pid=/var/run/istat/istatd.pid
    RETVAL=$?
    echo
    touch /var/lock/subsys/istatd
    return $RETVAL

}

stop(){
    echo -n $"Stopping istatd: "
    killproc $prog
    RETVAL=$?
    echo
    rm -f /var/lock/subsys/istatd
    return $RETVAL

}

reload(){
    stop
    start
}

restart(){
    stop
    start
}

# See how we were called.
case "$1" in
    start)
     start
     ;;
    stop)
     stop
     ;;
    status)
     status $prog
     ;;
    restart)
     restart
     ;;
    reload)
     reload
     ;;
    *)
     echo $"Usage: $0 {start|stop|status|restart|reload}"
     RETVAL=1
esac

exit $RETVAL

If you need any assistance creating your own scripts, you might find this link from Novell’s Cool Solutions site useful: Creating Custom init Scripts.

Image Credit: Ohio State University

Yesterday in my post on Solaris 10 Password Policy Enforcement, I outlined the steps necessary to implement the password requirements that have been decided upon in my system environment.  This post will outline the same process on the RHEL5 systems that I admin.  While the policy requirements are the same, the implementation is vastly different.

Desired Policy

To re-cap, here is the policy that is to be applied to normal users:

  • at least 8 characters in length
  • no more than 20 characters in length
  • contain at least one letter
  • contain at least one number
  • forced to change at least every 180 days
  • 15 minute lockout after 5 unsuccessful attempts

Implementation Differences with Solaris 10

While there were a couple of pieces of the desired password policy that I was unable to implement on Solaris 10, the ease with which the others were configured wins the game hands down.  The PAM module setup on Solaris makes it dead simple to update the policy.  All you have to do is change the various tunable settings, and they are all listed in fairly understandable verbiage with no complex or arcane options.

On the RHEL5 systems I had to delve into the vagaries of PAM module attributes and ordering.  As always, it is important to make backups of any files to protect yourself and allow for disaster recovery. To implement the requirements, I had to edit two files on the system:

  1. /etc/login.defs
  2. /etc/pam.d/system-auth

Implementation Process

It is important during this process to recognize that if you set the PAM requirements incorrectly you can get burned to the point that the root user will be unable to log in, forcing you to boot into single-user mode to recover or to boot the system from a live CD and revert the authentication files.

Setting the password expiration requirement and length setting

Before we get into this, please note the warning from the login.defs manpage on a RHEL5 system:

Much of the functionality that used to be provided by the shadow password suite is now handled by PAM. Thus, /etc/login.defs is no longer used by programs such as: login(1), passwd(1), su(1). Please refer to the corresponding PAM configuration files instead.

It is still important to configure the password length in the login.defs file so that we can account for legacy codebases.

  1. Open /etc/login.defs in your favorite editor
  2. Set the attribute of PASS_MAX_DAYS to be 180
  3. Set the attribute of PASS_MIN_LEN to be 9 (the resulting entries are shown in the excerpt below)
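
For reference, after those edits the relevant lines in /etc/login.defs should look something like the excerpt below; everything else in the file is left at its defaults:

# /etc/login.defs (excerpt)
PASS_MAX_DAYS   180
PASS_MIN_LEN    9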

Setting the password complexity requirements

Now here is where the going gets really interesting.  Before we look at /etc/pam.d/system-auth, a strong caution:

Back up the file before you alter it, and open a backup terminal session as the root user before continuing.  If you put the wrong attributes in place or put the PAM directives in the wrong order you will lock yourself, root user and all, out of the system.  At that point you have two options: single-user mode recovery from the console, or booting the machine from a live CD and reverting to the backup after mounting the filesystem.  Oh, and it is wise to give yourself a delay with either GRUB or LILO, because without the delay you won’t be able to change the boot option to allow single-user mode recovery.

So, the file involved in this process is /etc/pam.d/system-auth and before I go into some of the nitty gritty, here’s the configuration I ended up using:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_tally2.so deny=6 unlock_time=900
auth        sufficient    pam_unix.so nullok try_first_pass nodelay
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so
account     required      pam_tally2.so per_user

password    required      pam_passwdqc.so min=disabled,disabled,12,9,9 max=20 similar=deny enforce=users retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

The lockout requirement (failed attempts and unlock time) is implemented using the following line:

auth        required      pam_tally2.so deny=6 unlock_time=900

The complexity and length requirements are implemented using the following line:

password    required      pam_passwdqc.so min=disabled,disabled,12,9,9 max=20 similar=deny enforce=users retry=3

The following line is set to ensure that the retries count is maintained even if the counter for the pam_tally2 module is corrupted:

account     required      pam_tally2.so per_user

References

Rather than go into the details of each individual attribute and how they interact, here are the resources used to develop this ruleset.  They contain a large amount of valuable information.

During the migration of a production system from Solaris 10 to RedHat Enterprise Linux 5, I discovered that I had a problem with a couple of my LDAP scripts.  The commands being run were standard ldapsearch and ldapmodify commands in a format similar to the following:

ldapsearch -h hostname.domain.com -p 389 -b o=organisation -D cn=admin -w password cn=foobar
ldapmodify -h hostname.domain.com -p 389 -b o=organisation -D cn=admin -w password -f updates.ldif

Each time I ran the commands I got the following error:

SASL/EXTERNAL authentication started
ldap_sasl_interactive_bind_s: Unknown authentication method (-6)
        additional info: SASL(-4): no mechanism available:

It turns out that the versions of the ldapsearch and ldapmodify commands that come with RHEL5 are based on the standard OpenLDAP code.  The OpenLDAP tools default to attempting SASL authentication against the server.  Given that the LDAP server I am connecting to is an iPlanet 5.1 LDAP server, it is not configured to understand the SASL authentication mechanisms.

The solution is to add the -x option to the commands:

ldapsearch -x -h hostname.domain.com -p 389 -b o=organisation -D cn=admin -w password cn=foobar
ldapmodify -x -h hostname.domain.com -p 389 -b o=organisation -D cn=admin -w password -f updates.ldif

This command option specifies that the command should be executed using simple authentication instead of SASL.

When doing system administration it is often more convenient to connect to a server through some sort of remote connection setup rather than having to sit at a console in a datacenter.  The comfort of one’s office (or living-room) is often far superior in terms of noise and temperature than the environs of the datacenter itself.

When setting up the RHEL5 server this week here at VSU, I was forced to use the Sun iLOM connection to do the initial install of the server.  While I generally use command-line-only tools, the ease of use one gains from GUI tools can often make some tasks much simpler.  Towards this end I decided to set up the server and my client to allow XDMCP sessions so that I could have full access to the GUI when necessary.

On the server there are a couple of things that you need to configure in order to make this work:

  1. Firewall ports
  2. GDM configuration options

On the client you will need to configure the OS X firewall, as well as use the correct Xephyr connection syntax.
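
Since the full walkthrough continues past this excerpt, here is a condensed sketch of the pieces involved on both sides. The details are assumptions based on a stock RHEL5 GDM setup and a Mac client with Xephyr installed, so adjust for your environment:

# Server side (RHEL5): enable XDMCP in GDM (/etc/gdm/custom.conf)
#   [xdmcp]
#   Enable=true
# then open the XDMCP port in the firewall (XDMCP listens on UDP 177)
iptables -I INPUT -p udp --dport 177 -j ACCEPT

# Client side (Mac OS X): start a nested X session against the server
Xephyr :1 -query server.example.com -screen 1280x1024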

WD Caviar and Sabrent SRD4 SATA card | arfore dot com

A couple of  weekends ago I was rebuilding my home NAS to replace the Openfiler installation with CentOS 5.5.

After adding my two new Western Digital Green Caviar 1TB drives I booted the computer up to find that only the Hitachi drive was being found. After a little troubleshooting I determined that the problem was with the add-in SATA controller, a Sabrent SRD4 4-port controller.

It turns out that the existing firmware for the Silicon Image 3114CTU chipset did not support the drives. I found a post online from someone with a different model of drive whose problems were resolved with a firmware update.

After researching the chipset and controller I found that while Sabrent didn’t have a firmware update available, there was one available on the Silicon Image website.

After figuring out how to go about running the updater from a boot cd, since this is a Linux machine with no floppy drive, I was able to successfully use my new drives as additional storage!

vsu | arfore dot com

Recently I had to build a custom form for VSU’s implementation of R25 by CollegeNet.  The form was designed to allow individuals to schedule an event at VSU using our facilities and equipment.  The form is a multi-part form that branches off at the third page based on prior answers.

One of the hurdles in the form creation was the necessity of validating the form input on a page before proceeding to the next part of the form.  While this fairly routine process can be accomplished by using a self-referencing form and validating the contents of the $_POST superglobal, the number of form elements made it somewhat cumbersome.

Enter the PHP Form Validation Script.  While searching for ways to make the validation less painful to code, I ran across a nifty PHP script at the HTML Form Guide website.  It is an object-oriented PHP script that makes it much easier to do validation on HTML form elements.  There are quite a few pre-defined validation descriptors, plus a method that allows for overriding the DoValidate function to create your own custom descriptor.

There is one thing that I would like the script to handle natively:

  1. use of a “pretty” or “friendly” name in the validation error messages; currently it displays the element name

There is also an undocumented validation descriptor in the script.  The pre-defined selone descriptor is used for a select/option element.  According to the code, the default error message is “Please select an option for %s”, and it checks whether the value for the element is unset or is less than or equal to zero.  If either check is true, the error message is displayed.

Last week my friend Lindsay and I were making the rounds of the various thrift stores and used furniture stores in Valdosta.  It is quite interesting to see what people get rid of and to think of ways to use some of it.

At one of the thrift shops we found three VSU glasses.  One of them is a glass that was given out to the VSU student employees at the Student Awards banquet last year.  Another one was from the Faculty and Staff Campaign from 2002.  There were tons of promo glassware from all kinds of companies, restaurants and schools.  I wonder just how much of that kind of stuff is bought and given out only to end up in some landfill or thrift shop.