One of the things that many system administrators encounter in the quest to keep servers up to date is the need to apply regular maintenance releases. With some operating systems, Mac OS X for instance, patches are released in two forms:
- a delta update, which contains only the changes necessary to bring the system up-to-date from the current running release level
- a combo (cumulative, full, etc.) update, which contains all changes for the current release branch
If you are lucky enough to be using an OS that gives you delta updates, you may never run into a situation where you don't have enough internal drive space to update the OS. However, if you are running an OS, like Solaris, that uses cumulative patch clusters, things get more interesting.
One situation I recently encountered was a need to patch a Solaris 10 Sparc system that did not have sufficient internal drive space to store the unzipped patch cluster for patching the system in single-user mode. (You are patching in single-user mode right?)
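As an aside, getting a Sparc system into single-user mode for patching is typically just a matter of booting with the -s flag from the OpenBoot prompt, or using init from a running system; the lines below are a general illustration rather than the exact console session from this box.
From the OpenBoot prompt:
ok boot -s
Or from a running system:
# init S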
The most obvious question would be: why not add another drive? Another obvious question might be: why not patch from CD/DVD? Well, adding a new drive to this system was not a viable solution since there were no spare drives available to install. Installing from DVD would have been possible if the patches had been unzipped and burned to disc before the maintenance window.
The next available option was to install the patches over the network. When patching a machine in single-user mode this becomes a little more problematic, since network resources and services are not generally available unless the server has been brought up in a multi-user mode.
After bringing the server up in single-user mode, the next step was to start SSH and NFS so that the patch cluster could be installed over an NFS share. Normally with Solaris 10 all you would need to do is enable the SSH and NFS client services:
# svcadm enable svc:/network/ssh:default
# svcadm enable svc:/network/nfs/client:default
Unfortunately, in single-user mode this will fail, since the dependent services are not started automatically. To make it work in single-user mode you need to add the -r flag, which instructs svcadm to enable the service and recursively enable its dependencies. If you want a little more checking, also add the -s flag, which tells svcadm to wait for each service to enter the online or degraded state before returning. Below are the commands for starting SSH and NFS along with the output of a service check showing the state after each command was executed.
SSH
# svcadm enable -rs svc:/network/ssh:default
Reading ZFS config: done.
# svcs -a | grep ssh
online         15:49:26 svc:/network/ssh:default
NFS
# svcadm enable -rs svc:/network/nfs/client:default
# svcs -a | grep nfs
disabled       15:11:34 svc:/network/nfs/cbd:default
disabled       15:11:34 svc:/network/nfs/mapid:default
disabled       15:11:35 svc:/network/nfs/server:default
online         15:50:35 svc:/network/nfs/status:default
online         15:50:35 svc:/network/nfs/nlockmgr:default
online         15:50:35 svc:/network/nfs/client:default
uninitialized  15:11:37 svc:/network/nfs/rquota:default
After this was done, all that was left was to mount the exported file system and run the patch cluster installation script. Since the cluster was not local to the system the installation took a little longer, but other than that everything went smoothly.
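For completeness, the tail end of the process looked roughly like the sketch below. The server name, export path, and mount point are placeholders, and the name of the unzipped cluster directory and its installation script vary between cluster releases (older clusters shipped install_cluster, newer patchsets ship installcluster and require a passcode documented in the cluster README), so treat this as an outline rather than the exact commands:
# mount -F nfs patchserver:/export/patches /mnt
# cd /mnt/10_Recommended
# ./installcluster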