Desktop Google Chrome Reader Mode

If you are a Safari user, then you are likely used to the "reader mode" which disables all the extra graphical stuff and focuses the view on the content of the article.  Thanks to a tip from Google Plus user Francois Beaufort, here's how to enable it on the desktop (in Windows at the very least; I haven't tried it in any other OS).

If you're on the desktop, playing with it is as easy as running Chrome with the --enable-dom-distiller switch. Once Chrome is running with that flag, you'll notice a new "Distill page" menu item.
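
On Windows, for example, you can start Chrome from a command prompt or append the switch to a shortcut's target. The path below is just an assumption based on a default install location, so adjust it for your system:

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --enable-dom-distiller

On Linux or macOS the same switch can be appended when launching the browser from a terminal, e.g. google-chrome --enable-dom-distiller.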

Hopefully this will make it to mainstream with a nice icon.

Checking your password expiration date

While logging into one of the Linux jump boxes at work today, it occurred to me that while I recently got notified about my password expiration from our Active Directory farm, I had no idea when my Linux password would expire or what the password lifetime was.

You can easily find this information with the chage command.

Here is what the output looks like:

[code language="bash"]
[user@host ~]$ chage -l user
Last password change                                    : Apr 09, 2015
Password expires                                        : Jul 08, 2015
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 1
Maximum number of days between password change          : 90
Number of days of warning before password expires       : 7
[/code]

It may seem like such a simple thing to do, but knowing when your password expires can be a lifesaver on occasion.
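
If you would rather get a reminder automatically, here is a rough sketch that could go in ~/.bash_profile. It assumes GNU date and the English chage output shown above, so treat it as a starting point rather than a finished script:

[code language="bash"]
# print the days remaining until password expiry at login (sketch only)
expiry=$(chage -l "$USER" | awk -F': *' '/^Password expires/ {print $2}')
if [ -n "$expiry" ] && [ "$expiry" != "never" ]; then
    echo "Password expires in $(( ($(date -d "$expiry" +%s) - $(date +%s)) / 86400 )) day(s), on $expiry"
fi
[/code]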

Disable subscription pop-up in Proxmox v.5.1-3

If you have just installed the most recent release of the virtualization platform Proxmox, you might have noticed that the steps to disable the subscription pop-up dialog have changed (well, other than actually purchasing a subscription, I suppose).  I have chosen not to purchase a subscription for the same reason I don't have one for VMware's vSphere Hypervisor: I am not running this in a production setting that requires paid support or premium features. The following steps will disable the subscription pop-up.

Back up the JavaScript file

The pop-up contents, and whether or not they are displayed, are controlled by a function in a JavaScript file.  The first step should always be to make a backup, just in case Murphy rings your doorbell.

root@host:~# cd /usr/share/pve-manager/js/
root@host:/usr/share/pve-manager/js# cp -p pvemanagerlib.js pvemanagerlib.js_backup

Edit the JavaScript file

Open the pvemanagerlib.js file in your favorite editor.  If this is a vanilla, unmodified installation, skip to line 850.  If this is not the first time that you have edited the file, search for the first occurrence of the following snippet, which will be in the function that we need to alter:

gettext('No valid subscription')

Alter the conditional for the check in that function so that it reads as follows:

before

if (data.status !== 'Active') {

after

if (false) {
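
If you prefer not to open an editor at all, a sed one-liner along these lines should make the same substitution and keep a backup copy. This is just a sketch of the same edit, so diff the result against the backup before trusting it:

root@host:/usr/share/pve-manager/js# sed -i.orig "s/data.status !== 'Active'/false/g" pvemanagerlib.js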

Notes

As I stated in the opening paragraph, these specifics apply to v.5.1-3, and the location of the file has changed from previous versions.  A good way to find the file is to use the locate command, which you will have to install first:

root@host:~# apt-get update
Ign:1 http://ftp.us.debian.org/debian stretch InRelease
Get:2 http://security.debian.org stretch/updates InRelease [94.3 kB]
Get:3 http://ftp.us.debian.org/debian stretch Release [118 kB]
Get:4 http://ftp.us.debian.org/debian stretch Release.gpg [2,434 B]
Get:5 http://security.debian.org stretch/updates/main amd64 Packages [374 kB]
Get:6 http://ftp.us.debian.org/debian stretch/main amd64 Packages [7,122 kB]
Get:7 http://security.debian.org stretch/updates/main Translation-en [165 kB]
Get:8 http://security.debian.org stretch/updates/contrib amd64 Packages [1,776 B]
Get:9 http://security.debian.org stretch/updates/contrib Translation-en [1,759 B]
root@host:~# apt-get install mlocate
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  mlocate
0 upgraded, 1 newly installed, 0 to remove and 85 not upgraded.
Need to get 96.5 kB of archives.
After this operation, 495 kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org/debian stretch/main amd64 mlocate amd64 0.26-2 [96.5 kB]
Fetched 96.5 kB in 0s (315 kB/s)
Selecting previously unselected package mlocate.
(Reading database ... 40185 files and directories currently installed.)
Preparing to unpack .../mlocate_0.26-2_amd64.deb ...
Unpacking mlocate (0.26-2) ...
Setting up mlocate (0.26-2) ...
update-alternatives: using /usr/bin/mlocate to provide /usr/bin/locate (locate) in auto mode
Adding group `mlocate' (GID 115) ...
Done.
Processing triggers for man-db (2.7.6.1-2) ...
root@host:~# updatedb
root@host:~# locate pvemanagerlib.js
/usr/share/pve-manager/js/pvemanagerlib.js
/usr/share/pve-manager/js/pvemanagerlib.js_backup

As you can see, the mlocate package makes finding the file much easier.
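
One last note: the web interface is served by the pveproxy service, so after saving the change you may need to restart it and do a hard refresh in your browser (the old JavaScript is often cached) before the pop-up actually goes away:

root@host:~# systemctl restart pveproxy.service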

Windows Tip of the Week: Find your account password expiration date in an AD environment


In many cases your enterprise Active Directory will not involve too many domains; in fact, it is quite common for an Active Directory implementation to include only one domain.  In some cases, however, when you have the unfortunate situation of having a username in multiple domains with differing password expiration policies, it is useful to know when your password, or that of another user, will expire.  Here is an easy way to accomplish this from the command line.

For the current active user

[code language="bash"]
net user %USERNAME% /domain
[/code]

For a different user

[code language="bash"]
net user _username_here_ /domain
[/code]

Here is an example of the output:

[code language="bash"]
User name                    afore
Full Name                    Andrew Fore
Comment
User's comment
Country code                 000 (System Default)
Account active               Yes
Account expires              Never

Password last set            1/29/2015 4:38:37 PM
Password expires             4/29/2015 4:38:37 PM
Password changeable          1/29/2015 4:38:37 PM
Password required            Yes
User may change password     Yes

Workstations allowed         All
Logon script
User profile
Home directory
Last logon                   3/18/2015 3:27:55 PM

Logon hours allowed          All

Local Group Memberships
Global Group memberships     *VMWare Admins        *Domain Users
                             *Staff
[/code]

As you can see, there is a lot of useful information about the user account here, but of particular interest in my situation was the value of Password expires.  I was trying to make sure I reset my password before the policy deadline so that I would not find myself locked out over a weekend when I was on call and the Helpdesk would be closed.
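
If all you care about is the expiration date itself, you can pipe the output through findstr to cut it down to the single line of interest:

[code language="bash"]
net user %USERNAME% /domain | findstr /C:"Password expires"
[/code]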

Setting package publisher in Solaris 11

During the installation and setup of my new Solaris 11 Automated Installer host, I ran into a situation where, even though I was specifying both the origin to remove AND the origin to add, the OS refused to let me perform both operations in the same command.  While you should be able to do this, I ended up having to remove the default system-configured publisher and then add the new local IPS repository as the publisher.

This is what the default publisher was configured for:

root@host:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
root@host:~# pkg publisher solaris
            Publisher: solaris
                Alias: 
           Origin URI: http://pkg.oracle.com/solaris/release/
              SSL Key: None
             SSL Cert: None
          Client UUID: 
      Catalog Updated: October  6, 2015 02:41:00 PM 
              Enabled: Yes

Here is the command from the Oracle guide How to Get Started Customizing and Configuring Systems Using the Automated Installer in Oracle Solaris 11.1, which didn't work for me:

root@host:~# pkg set-publisher -G '*' -g http://10.202.46.80 solaris
pkg set-publisher: only one publisher name may be specified
Usage:
        pkg set-publisher [-Ped] [-k ssl_key] [-c ssl_cert]
            [-g origin_to_add|--add-origin=origin_to_add ...]
            [-G origin_to_remove|--remove-origin=origin_to_remove ...]
            [-m mirror_to_add|--add-mirror=mirror_to_add ...]
            [-M mirror_to_remove|--remove-mirror=mirror_to_remove ...]
            [-p repo_uri] [--enable] [--disable] [--no-refresh]
            [--reset-uuid] [--non-sticky] [--sticky]
            [--search-after=publisher]
            [--search-before=publisher]
            [--search-first]
            [--approve-ca-cert=path_to_CA]
            [--revoke-ca-cert=hash_of_CA_to_revoke]
            [--unset-ca-cert=hash_of_CA_to_unset]
            [--set-property name_of_property=value]
            [--add-property-value name_of_property=value_to_add]
            [--remove-property-value name_of_property=value_to_remove]
            [--unset-property name_of_property_to_delete]
            [--proxy proxy to use]
            [publisher]

I tried several variations of the one-line command, but I was met with the same lack of success. To achieve the desired result, where the local IPS repository is set up under the publisher name solaris, I had to unset the existing publisher and then set it again against my new repo.

root@host:~# pkg unset-publisher solaris
Updating package cache                           1/1
root@host:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
root@host:~# pkg set-publisher -g http://10.202.46.80 solaris
root@host:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://10.202.46.80/
root@host:~# pkg publisher solaris
            Publisher: solaris
                Alias: 
           Origin URI: http://10.202.46.80/
              SSL Key: None
             SSL Cert: None
          Client UUID: 
      Catalog Updated: October  6, 2015 07:45:07 PM 
              Enabled: Yes
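
As a quick sanity check that the client is really pulling from the new repository (and not just displaying the publisher entry), you can force a full catalog refresh and list a package from it; "entire" is just the stock Solaris incorporation, used here as an example:

root@host:~# pkg refresh --full solaris
root@host:~# pkg list -a entire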

Dear Comcast: You Suck

Editorial Note: Apparently Comcast would really prefer that people not use the term data cap when referring to the limitations being placed on their customers' data usage, and would much rather we use the term data usage plan or data threshold; however, I don't really care. 🙂

Dear Comcast,

I would like to go on record as saying that you suck.  I recognize that you are a for-profit company and that you would like to make a profit on the services that you provide.  I even think that your company making a profit is a good thing, because that enables you to pay your employees so that they can put food on their tables and afford the fees for their children to participate in Little League baseball and other such childhood activities.

Your data usage cap system is bogus.  According to the data available on your own website, you have eight (8) different trial markets where you have rolled out data caps since 2012:

  • August 1, 2012: Nashville, Tennessee – 300GB cap
  • October 1, 2012: Tucson, Arizona – 3 tiers (300GB, 350GB, 600GB)
  • August 22, 2013: Fresno, California – Economy Plus option added
  • September 1, 2013: Savannah, Georgia; Central Kentucky; Jackson, Mississippi – 300GB
  • October 1, 2013: Mobile, Alabama; Knoxville, Tennessee.
  • November 1, 2013: Huntsville, Alabama; Augusta, Georgia; Tupelo, Mississippi; Charleston, South Carolina; Memphis, Tennessee – 300GB cap
  • December 1, 2013: Atlanta, Georgia; Maine – 300GB cap
  • October 1, 2015: Fort Lauderdale, the Keys and Miami, Florida – 300GB cap plus a $30 option to move to unlimited

In fact, your page on this even refers to these as "trial start dates," which to a reasonably minded person would imply that they have an end date as well.  However, to the best of my knowledge (as well as by the comments made by a customer support representative), there is no plan to end these trials, nor any plan to collapse them into a single cohesive policy that applies to your entire service area.

Now before I get into the real meat of my complaint with your pricing plans, let me go on record as saying that I have no real problem with a metered usage system for data bandwidth.  I pay for metered electricity usage, as do most consumers (unless maybe you can afford to own a skyscraper with your name on it).  If my usage of power goes up, then so does my bill.  If my usage goes down, then so does my bill.  However, my power bill is not likely to increase at the rate my cable bill will for my bandwidth usage.

The problem I have with my data cap (or, as you would say, data usage plan) is that I have no choice in plans.  If all you offer is one option, then quit calling it a choice.  If you want to call it a choice, then give me one!  I am fine with paying an additional $30 a month for the option to not have a cap.  Will I use less than my current 300GB cap some months, thus giving you extra money in your pocket? Sure I will, but I am OK with that.  I currently pay for 20GB of wireless data on my cell phone plan and most months I don't even hit 10GB of usage, but I am happy to pay for the extra every month so that in the months when I am traveling and need to use my phone as a hotspot, I won't suddenly find an additional $80-$100 hit on my bill.  Give me the option to pay for a plan that will allow me to meet my usage needs at the high end and budget for that, and I will happily pay it.

Oh, and the idea that 300GB of data is actually a reasonable place to start your cap is laughable.  With more and more consumers, even in the rural South where I live, moving to services like Netflix and Hulu for media consumption, your insistence that 300GB is a good median limit is just making your service ripe for competition.  Take a look at places where there is actual competition and you will see what I am talking about (of course, the fact that Google and AT&T apparently don't care about consumers living in rural areas puts the lie to their claim of offering competition).

On October 1, 2015, you flipped the billing switch that allows customers in three markets in Florida to pay $30 more and have no data cap.  Why not just flip that switch for the whole country?  Better yet, why not just up my bill by $30 and remove the cap completely?  Want to switch to a completely metered plan?  Fine, then do it, but while you are at it, make the price per gigabyte reasonable.  A recent summer 2015 survey from the Georgia PSC showed that I paid roughly 11.7 cents per kWh to my rural electric membership corporation, Satilla REMC, for a 1000 kWh block, and I think that they probably have a higher cost per kWh than you do per gigabyte.  If I break down my cable bill, I pay roughly 25 cents per gigabyte for my data.  When I exceed my data cap, I get charged an overage fee of $10 for every 50GB chunk, which means that whether I use 1GB extra or 50GB extra, I get charged the extra $10.  That breaks down to 20 cents per gigabyte.  If you can charge me $0.20 for the overage data, then why not charge me $0.20 for the original 300GB?  Instead of my data portion costing me $76, it would cost $60.  And while we all know that I am being overcharged per gigabyte, let's just be honest and up front and try not to gouge the only customers you have.  For example, a 2011 article in the Globe and Mail talked about this very issue in Canada and determined that while Bell was selling data to wholesale customers at $4.25 per 40GB block (that's Canadian dollars, which breaks down to approximately $0.10 Canadian per gigabyte), the same service was costing consumers $1 or more per gigabyte.  I haven't seen numbers for the per-gigabyte costs in the US, but I am willing to bet that it's not that much more.

So why don't you do everyone a favor and treat all your customers the same?  Quit dithering around on the usage cap terms and give us, the consumers that you claim to care about, an actual choice in data plans.  It's a crazy thing, but when you start treating your customers like people who are worth something, it's just possible that you might not be vilified in the press every day.

And while we are at it, thanks oh so much for silently increasing my download rate from 50Mbps to 75Mbps.  I am sure that at some point in the future you will just up my rate to make up for the speed increase without actually changing my data cap. So yeah, thanks a lot for that.

And not that it matters, but yeah, that FCC complaint?  That was me.

Sincerely,

Andrew Fore

RHEL7 and ncat changes

One of the tools that I use on a regular basis to test network connectivity is the "-z" option of netcat.  Apparently, when Red Hat rolled out the latest release of Red Hat Enterprise Linux (RHEL), they decided to ship the nmap-ncat package instead of the nc package.  The command options are very different.

So when attempting to test a single port like I would have under previous releases, I now use the following syntax:

# echo | nc -w1 $host $port >/dev/null 2>&1 ;echo $?

If the result that is returned is a zero, then you have successfully connected to the remote host on the desired port. This also applies to CentOS 7, since it is a "clone" rebuilt from the RHEL7 sources.
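
When I need to check a handful of ports at once, I wrap the same test in a small shell function. This is just a convenience sketch around the one-liner above; the host and ports in the example are made up:

[code language="bash"]
# usage: check_ports host port [port ...]
check_ports() {
    local host=$1; shift
    for port in "$@"; do
        if echo | nc -w1 "$host" "$port" >/dev/null 2>&1; then
            echo "$host:$port open"
        else
            echo "$host:$port closed or filtered"
        fi
    done
}

# example invocation against a hypothetical host
check_ports webserver01 22 80 443
[/code]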

Solaris Tip of the Week: a better du experience


In my day job as a Systems Engineer I frequently find myself switching between different UNIX and Linux distributions.  While many of the commands exist on both sides of the aisle, I often find vast differences in the command line parameters that can be consumed by a given command when used in, for example, Linux vs Solaris.

Recently I came upon this again with the need to easily ferret out the majority consumer of drive space on a Solaris 10 system.  While we did have the xpg4 specification support available, the du command was still missing my favorite option, "--max-depth".

In Linux I use this to limit the output to only the current directory level, so that I don't have to face the possibility of wading through a tremendously large listing of sub-directories to find the largest directory in the level I am in.  Unfortunately, in Solaris, even with xpg4, the du command doesn't have this option, so my solution was to pipe the results through egrep and use that to filter out the sub-directories.

Here is some example output from a RedHat Linux 5.11 server:

[code language="bash" gutter="false"]
[root@host var]# du -h
8.0K    ./games
8.0K    ./run/saslauthd
8.0K    ./run/lvm
8.0K    ./run/setrans
8.0K    ./run/ppp
8.0K    ./run/snmpd
4.0K    ./run/mysqld
8.0K    ./run/pm
8.0K    ./run/dbus
8.0K    ./run/nscd
8.0K    ./run/console
8.0K    ./run/sudo
8.0K    ./run/netreport
176K    ./run
8.0K    ./yp/binding
24K     ./yp
8.0K    ./lib/games
8.0K    ./lib/mysql
4.0K    ./lib/nfs/statd/sm.bak
8.0K    ./lib/nfs/statd/sm
24K     ./lib/nfs/statd
8.0K    ./lib/nfs/v4recovery
0       ./lib/nfs/rpc_pipefs/statd
0       ./lib/nfs/rpc_pipefs/portmap
0       ./lib/nfs/rpc_pipefs/nfs/clntf
0       ./lib/nfs/rpc_pipefs/nfs/clnt5
0       ./lib/nfs/rpc_pipefs/nfs/clnt0
0       ./lib/nfs/rpc_pipefs/nfs
0       ./lib/nfs/rpc_pipefs/mount
0       ./lib/nfs/rpc_pipefs/lockd
0       ./lib/nfs/rpc_pipefs
40K     ./lib/nfs
8.0K    ./lib/dhclient
8.0K    ./lib/iscsi/isns
[/code]

Here is the same example output from the RedHat server using the --max-depth option:

[code language="bash" gutter="false"]
[root@host var]# du -h --max-depth=1
8.0K    ./games
176K    ./run
24K     ./yp
22M     ./lib
32K     ./empty
1.5G    ./log
12K     ./account
236K    ./opt
24K     ./db
8.0K    ./nis
2.9M    ./tmp
8.0K    ./tmp-webmanagement
40K     ./lock
8.0K    ./preserve
8.0K    ./racoon
16K     ./lost+found
1.4M    ./spool
8.0K    ./net-snmp
83M     ./cache
8.0K    ./local
1.6G    .
[/code]

Here is the command example run without my egrep mod in Solaris 10:

[code language="bash" gutter="false"]
[root@host log]# /usr/xpg4/bin/du -h
25K     ./webconsole/console
26K     ./webconsole
1K      ./pool
1K      ./swupas
2K      ./ilomconfig
1K      ./current/ras1_sfsuperbatchb
1K      ./current/od1_atl4sfsuperbatchb
4.3G    ./current/ras1_atl4sfsbatchb
2.1G    ./current/od1_atl4sfsbatchb
560K    ./current/avs
2K      ./current/ebaps/output
9.3M    ./current/ebaps
4.0M    ./current/psh
3.1M    ./current/autoresponder
5K      ./current/fdms_download
29K     ./current/fdms_server
109K    ./current/fmt
5K      ./current/paris/output
653K    ./current/paris
1K      ./current/od1_sfsuperbatchb
28K     ./current/ccTemplateLoader
633K    ./current/ccTemplateLoaderLegacy
15M     ./current/whinvoices
1K      ./current/appmonitor.prod.netsol.com
132M    ./current/chase
6.6G    ./current
160K    ./archive/ccTemplateLoader
1K      ./archive/od1_atl4sfsuperbatchb
4.9M    ./archive/avs
1K      ./archive/ebaps/output
26M     ./archive/ebaps
881M    ./archive/psh
1014M   ./archive/autoresponder
1K      ./archive/fdms_download
6.8M    ./archive/fdms_server
21M     ./archive/paris
1K      ./archive/ccTemplateLoaderLegacy
4.1G    ./archive/ras1_atl4sfsbatchb
3.1G    ./archive/od1_atl4sfsbatchb
5.9G    ./archive/chase
102M    ./archive/whinvoices
15G     ./archive
22G     .
[/code]

And here is the improved command output using my egrep mod on the same Solaris server:

[code language="bash" gutter="false"]
[root@host log]# /usr/xpg4/bin/du -hx | egrep -v '.*/.*/.*'
26K     ./webconsole
1K      ./pool
1K      ./swupas
2K      ./ilomconfig
6.6G    ./current
15G     ./archive
22G     .
[/code]
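
The same trick extends to deeper levels by adding another /.* to the pattern. For example, to keep two directory levels, a rough equivalent of GNU du's --max-depth=2:

[code language="bash" gutter="false"]
/usr/xpg4/bin/du -hx | egrep -v '.*/.*/.*/.*'
[/code]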

QuickReview: LIFX White 800 WiFi LED Smart Bulb

Lately I have been dipping my toe into the pool of home automation and smarthome technologies.  I have been interested in having a smarthome ever since I watched my first few episodes of the SyFy channel show Eureka, and my interest was advanced even further by Google I/O 2016 and the demo of Google Assistant.

So a few months ago I ventured into this new world of technology (new for me at least) cautiously by purchasing a pair of the LIFX White 800 smart bulbs that were on sale at Walmart due to the release of the LIFX Generation 3 A19.

I found that the Android app was very easy to configure, and that I could easily add the light bulbs to multiple Android devices.  I was disappointed to find that they were not immediately compatible with Siri on my wife's iPhone due to the lack of a suitable HomeKit bridge/hub.  This was remedied easily enough by configuring the open-source Node.js server homebridge with a plugin (homebridge-lifx-lan or homebridge-lifx) to connect the light bulbs to the Apple Home application.
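
For reference, getting homebridge running is mostly an npm exercise; the commands below are a sketch that assumes Node.js and npm are already installed on whatever box sits on the same LAN as the bulbs:

[code language="bash"]
# install homebridge and the LIFX LAN plugin globally
sudo npm install -g homebridge homebridge-lifx-lan
# homebridge reads its configuration from ~/.homebridge/config.json,
# which is where the plugin and bulbs get configured
homebridge
[/code]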

Adding the light bulbs to the LIFX app on my Pixel was fairly straightforward and went off without a hitch.

I have found the light bulbs easy enough to manage.  The hue range and brightness are quite suitable for the application, namely the nightstand lights in the master bedroom, and I would definitely recommend these to anyone that doesn't need more than plain white LED light bulbs.

Privacy and cookie policy

As the whole GDPR craze has hit my place of employment, I decided to look into an easy method to add a privacy policy and cookie policy to my WordPress site.

Adding a privacy policy is pretty simple: all you have to do is create a new page with the policy on it.  Simple, right? Well, it turns out that the devil is in the details, as usual.  After a few minutes of research, I found a nice site with a template-based policy generator.  The site, Privacy Policy Template, is pretty self-explanatory, and the code for the template is very easy to edit after you generate it.  This provides a great starting point for sites that need more information, but also a nice final product for blogs like this one that don't really deal with any user data other than the owner's.

The other shoe is a cookie policy.  Toward this end I used the CookieBot service and the accompanying plugin.  It is a paid service if you have a large number of pages, but if you have a site with 100 sub-pages or fewer, the service is free.  Be aware that for the purposes of billing and scanning, CookieBot treats each post or page as a separate item.  The pricing structure is really quite reasonable given the capabilities of the service.  For example, this site has a total of 121 pages and posts, so they automatically upgraded my account from free to a 30-day free trial.  Until I hit 500 pages/posts, the cost is only $10/month, which in the overall scheme of things is not bad at all.

Screenshot of the slide-down panel with default settings.

The initial scan can take as long as 24 hours to complete, though that depends on the size of the site being scanned.  The cookie categories and reports are automatically created for you; however, you can also customize the panel with custom cookies and categories, as well as customize the CSS to control the look and feel of the panel.  I chose to start out with defaults only.  The image to the right is what my site looks like after the initial report was generated.

I also received a report in the inbox configured for the account with details on what was found in the scan and which cookies were treated as falling into the pre-defined categories.  For example, the scan found two cookies that were designated as necessary for the site to function.  There were also items found in the preferences and statistics categories.

While I don't really feel that it is necessary for me to even have a privacy policy or cookie consent feature, I have decided to do it anyway, simply out of an attitude of prevention, given the political climate surrounding personal data collection and the like. (Thanks, Mark Zuckerberg, as well as your friends at Cambridge Analytica.)

While these two services are by no means the only ones out there, I found them to be the easiest and simplest to implement.

(The featured image is copyright of HarperCollins Publishers)