Clean install HP ENVY 8 Note – the foremind


Today I embarked on my voyage to refurbish and restore to working order all my unused technology gadgets that I have let gather dust in my closet.  The first of these is my HP ENVY 8 Note Windows Tablet.

I purchased this quite some time ago, back when I was using Windows 10 Mobile as my phone platform and had converted almost everything in my personal arsenal over to Windows land.

At some point I installed Ubuntu, and later Arch Linux, on the tablet, but today I wanted to restore it back to its original OS.  Thanks to a Smays Micro USB Ethernet Adapter and USB hub combo, I was able to connect a USB keyboard, mouse and USB thumb drive to it in order to perform a normal clean install of Windows 10 onto the tablet.

After installing all the available OS updates and the available app updates from the Microsoft Store, the tablet is as good as new.  In fact, it actually works better than it did with the original install, which is likely a testament to the continued development by the coders at MS.

Massive Numbers of Chrome Helper Messages in system logs – the foremind

Today, while attempting to figure out why Google Hangouts would not start on my Mac after the application was re-enabled following a permissions change, I noticed a large number of messages like the following:

6/10/15 10:20:14.000 AM kernel[0]: Google Chrome He (map: 0xffffff804da160f0) triggered DYLD shared region unnest for map: 0xffffff804da160f0, region 0x7fff99a00000->0x7fff99c00000. While not abnormal for debuggers, this increases system memory footprint until the target exits.

After some research I found that this is a reported issue in the bug tracker for Chromium.  At first I thought that maybe this was the cause of the problem I was having, but that turned out not to be the case; simply removing the Hangouts app in Chrome and re-adding it fixed my issue.  However, the sheer number of these errors makes the log a bit unwieldy.  It turns out that there is a way to hide all these messages (thanks to the commenter in the Chromium bug thread!):

sudo sysctl -w vm.shared_region_unnest_logging=0
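
Note that a value set with sysctl -w does not survive a reboot, and you can turn the logging back on at any time (the default, as far as I have seen, is 1):

sudo sysctl -w vm.shared_region_unnest_logging=1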

While it doesn’t help at all with Chrome’s memory issues or other UI issues on Mac OS X, it is rather nice to hide all those spurious messages from the system log.

Desktop Google Chrome Reader Mode – the foremind

If you are a Safari user then you are likely used to "reader mode", which hides all the extra graphical stuff and focuses the view on the content of the article.  Thanks to a tip from Google Plus user Francois Beaufort, here's how to enable a similar feature in desktop Chrome (in Windows at the very least; I haven't tried any other OS).

If you're on desktop, playing with it is as easy as running chrome with the --enable-dom-distiller switch. Once it's done, you'll notice a new "Distill page" menu item.
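
I have only tried this on Windows, but the presumed equivalents on other platforms would look like the following (note that Chrome must not already be running, since a freshly launched process is what picks up the switch):

# macOS, assuming Chrome is installed in /Applications
open -a "Google Chrome" --args --enable-dom-distiller
# Linux, assuming the google-chrome wrapper is on the PATH
google-chrome --enable-dom-distiller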

Hopefully this will make it to mainstream with a nice icon.

Disable subscription pop-up in Proxmox v.5.1-3 – the foremind

If you have just installed the most recent release of the virtualization platform Proxmox, you might have noticed that the steps to disable the subscription pop-up dialog have changed (other than actually purchasing a subscription, I suppose).  I have chosen not to purchase a subscription for the same reason I don't have one for VMware's vSphere Hypervisor: I am not running this in a production setting that requires paid support or premium features. The following steps will disable the subscription pop-up.

Back up the javascript file

The pop-up contents, and whether or not they are displayed, are controlled by a function in a javascript file.  The first step should always be to make a backup, just in case Murphy rings your doorbell.

# cd /usr/share/pve-manager/js/
# cp -p pvemanagerlib.js pvemanagerlib.js_backup

Edit the javascript file

Open the pvemanagerlib.js file in your favorite editor.  If this is a vanilla, unmodified installation, skip to line 850.  If this is not the first time that you have edited the file, search for the first occurrence of the following snippet, which will be in the function that we need to alter:

gettext('No valid subscription')
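
If you would rather not trust the line number, grep can tell you exactly where the check lives:

grep -n "No valid subscription" /usr/share/pve-manager/js/pvemanagerlib.js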

The conditional in that function should be altered so that the check reads as follows:

before

if (data.status !== 'Active') {

after

if (false) {
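
If you would rather script the edit, a sed one-liner along these lines should do the same thing.  This is just a sketch: it assumes a stock 5.1-3 file where the conditional appears exactly once, and the dots in the pattern stand in for the quote characters to keep the shell quoting simple (sed keeps a copy of the original with an .orig suffix).  Afterwards, force-refresh the web UI so your browser picks up the modified file.

sed -i.orig 's/data.status !== .Active./false/' /usr/share/pve-manager/js/pvemanagerlib.js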

Notes

As I stated in the opening paragraph, the specifics apply to v. 5.1-3; the location of the file has changed from previous versions.  A good way to find the file is to use the locate command, which you will have to install first:

# apt-get update
Ign:1 http://ftp.us.debian.org/debian stretch InRelease
Get:2 http://security.debian.org stretch/updates InRelease [94.3 kB]
Get:3 http://ftp.us.debian.org/debian stretch Release [118 kB]
Get:4 http://ftp.us.debian.org/debian stretch Release.gpg [2,434 B]
Get:5 http://security.debian.org stretch/updates/main amd64 Packages [374 kB]
Get:6 http://ftp.us.debian.org/debian stretch/main amd64 Packages [7,122 kB]
Get:7 http://security.debian.org stretch/updates/main Translation-en [165 kB]
Get:8 http://security.debian.org stretch/updates/contrib amd64 Packages [1,776 B]
Get:9 http://security.debian.org stretch/updates/contrib Translation-en [1,759 B]
# apt-get install mlocate
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  mlocate
0 upgraded, 1 newly installed, 0 to remove and 85 not upgraded.
Need to get 96.5 kB of archives.
After this operation, 495 kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org/debian stretch/main amd64 mlocate amd64 0.26-2 [96.5 kB]
Fetched 96.5 kB in 0s (315 kB/s)
Selecting previously unselected package mlocate.
(Reading database ... 40185 files and directories currently installed.)
Preparing to unpack .../mlocate_0.26-2_amd64.deb ...
Unpacking mlocate (0.26-2) ...
Setting up mlocate (0.26-2) ...
update-alternatives: using /usr/bin/mlocate to provide /usr/bin/locate (locate) in auto mode
Adding group `mlocate' (GID 115) ...
Done.
Processing triggers for man-db (2.7.6.1-2) ...
# updatedb
# locate pvemanagerlib.js
/usr/share/pve-manager/js/pvemanagerlib.js
/usr/share/pve-manager/js/pvemanagerlib.js_backup

As you can see, the mlocate package makes finding the file so much easier.
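
If you would rather not install an extra package at all, plain old find (already on the system) will turn up the same answer, just a bit more slowly:

find / -name pvemanagerlib.js 2>/dev/null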

Setting package publisher in Solaris 11 – the foremind

During the installation and setup of my new Solaris 11 Automated Installer host, I ran into a situation where even though I was specifying both the origin to remove AND the origin to add, the OS refused to allow me to perform both options in the same command.  While you should be able to do this, I ended up having to remove the default system-configured publisher and then add the new local IPS repository as the publisher.

This is what the default publisher was configured for:

# pkg publisher
PUBLISHER                             TYPE     STATUS P LOCATION
solaris                               origin   online F http://pkg.oracle.com/solaris/release/
# pkg publisher solaris

            Publisher: solaris
                Alias: 
           Origin URI: http://pkg.oracle.com/solaris/release/
              SSL Key: None
             SSL Cert: None
          Client UUID: 
      Catalog Updated: October  6, 2015 02:41:00 PM 
              Enabled: Yes

Here is the command from the Oracle guide How to Get Started Customizing and Configuring Systems Using the Automated Installer in Oracle Solaris 11.1, which didn't work for me:

# pkg set-publisher -G '*' -g http://10.202.46.80 solaris
pkg set-publisher: only one publisher name may be specified
Usage:
        pkg set-publisher [-Ped] [-k ssl_key] [-c ssl_cert]
            [-g origin_to_add|--add-origin=origin_to_add ...]
            [-G origin_to_remove|--remove-origin=origin_to_remove ...]
            [-m mirror_to_add|--add-mirror=mirror_to_add ...]
            [-M mirror_to_remove|--remove-mirror=mirror_to_remove ...]
            [-p repo_uri] [--enable] [--disable] [--no-refresh]
            [--reset-uuid] [--non-sticky] [--sticky]
            [--search-after=publisher]
            [--search-before=publisher]
            [--search-first]
            [--approve-ca-cert=path_to_CA]
            [--revoke-ca-cert=hash_of_CA_to_revoke]
            [--unset-ca-cert=hash_of_CA_to_unset]
            [--set-property name_of_property=value]
            [--add-property-value name_of_property=value_to_add]
            [--remove-property-value name_of_property=value_to_remove]
            [--unset-property name_of_property_to_delete]
            [--proxy proxy to use]
            [publisher]

I tried several different variations of the one-line command; however, I was met with the same lack of success. In order to achieve the desired result, where the local IPS repository was set up under the publisher name solaris, I had to unset the existing publisher and then set a new one pointing at my repo.

# pkg unset-publisher solaris
Updating package cache                           1/1
# pkg publisher
PUBLISHER                             TYPE     STATUS P LOCATION
# pkg set-publisher -g http://10.202.46.80 solaris
# pkg publisher
PUBLISHER                             TYPE     STATUS P LOCATION
solaris                               origin   online F http://10.202.46.80/
# pkg publisher solaris

            Publisher: solaris
                Alias: 
           Origin URI: http://10.202.46.80/
              SSL Key: None
             SSL Cert: None
          Client UUID: 
      Catalog Updated: October  6, 2015 07:45:07 PM 
              Enabled: Yes

Dear Comcast: You Suck – the foremind

Editorial Note: Apparently Comcast really would prefer that people not use the term data cap when referring to the limitations being placed on their customers' data usage, and would much rather we use the term data usage plan or data threshold; however, I don't really care. 🙂

Dear Comcast,

I would like to go on record as saying that you suck.  I recognize that you are a for-profit company and that you would like to make a profit on the services that you provide.  I even think that your company making a profit is a good thing, because that enables you to pay your employees so that they can put food on their tables and afford the fees for their children to participate in Little League baseball and other such childhood activities.

Your data usage cap system is bogus.  According to the data available on your own website, you have eight (8) different trial markets where you have rolled out data caps since 2012:

  • August 1, 2012: Nashville, Tennessee – 300GB cap
  • October 1, 2012: Tucson, Arizona – 3 tiers (300GB, 350GB, 600GB)
  • August 22, 2013: Fresno, California – Economy Plus option added
  • September 1, 2013: Savannah, Georgia; Central Kentucky; Jackson, Mississippi – 300GB
  • October 1, 2013: Mobile, Alabama; Knoxville, Tennessee.
  • November 1, 2013: Huntsville, Alabama; Augusta, Georgia; Tupelo, Mississippi; Charleston, South Carolina; Memphis, Tennessee – 300GB cap
  • December 1, 2013: Atlanta, Georgia; Maine – 300GB cap
  • October 1, 2015: Fort Lauderdale, the Keys and Miami, Florida – 300GB cap plus $30 option for increasing to unlimited

In fact, your page on this even refers to these as "trial start dates", which to a reasonably minded person would imply that they have an end date as well; however, to the best of my knowledge (as well as the comments made by a customer support representative), there is no plan to end these trials OR any plan to actually collapse them into a single cohesive plan that applies to your entire service.

Now before I get into the real meat of my complaint with your pricing plans, let me go on record as saying that I have no real problem with a metered usage system for data bandwidth.  I pay for a metered system with electricity usage, as do most consumers (unless maybe you can afford to own a skyscraper with your name on it).  If my usage of power goes up, then so does my bill.  If my usage goes down, then so does my bill.  However, my power bill is not likely to increase at the rate my cable bill will for my bandwidth usage.

The problem I have with my data cap (or as you would say, data usage plan) is that I have no choice in plans.  If all you get is one option, then quit calling it a choice.  If you want to call it a choice, then give me one!  I am fine with paying an additional $30 a month for the option to not have a cap.  Will I use less than my current 300GB cap some months, thus giving you extra money in your pocket? Sure I will, but I am ok with that.  I currently pay for 20GB of wireless data on my cell phone plan and most months I don't even hit 10GB of usage, but I am happy to pay for the extra every month so that in the months when I am travelling and need to use my phone as a hotspot I won't suddenly find an additional hit of $80-$100 on my bill.  Give me the option to pay for a plan that will allow me to meet my usage needs at the high end and budget for that, and I will happily pay it.

Oh, and the idea that 300GB of data is actually a reasonable place to start your cap is laughable.  With more and more consumers, even in the rural South where I live, moving to services like Netflix and Hulu for media consumption, your insistence that 300GB is a good median limit is just making your service ripe for competition.  Take a look at places where there is actual competition and you will see what I am talking about (of course, the fact that Google and AT&T apparently don't care about consumers living outside of a metro area puts the lie to their claim of offering competition).

On October 1, 2015, you flipped the billing switch that allows customers in three markets in Florida to pay $30 more and have no data cap.  Why not just flip that switch for the whole country?  Better yet, why not just up my bill by $30 and remove the cap completely?  Want to just switch to a completely metered plan?  Fine, then do it, but while you are at it make the price per gigabyte reasonable.  A recent summer 2015 survey from the Georgia PSC showed that I paid roughly 11.7 cents per kWh to my rural electric membership corporation, Satilla REMC, for a 1000 kWh block, and I think that they probably have a higher cost per kWh than you do per gigabyte.  If I break down my cable bill, I pay roughly 25 cents per gigabyte for my data.  When I exceed my data cap, I get charged an overage fee of $10 for every 50GB chunk, which means that whether I use 1GB extra or 50GB extra, I get charged the extra $10.  That breaks down to 20 cents per gigabyte.  If you can charge me $.20 for the overage data, then why not charge me $.20 for the original 300GB?  Instead of my data portion costing me $76, it would cost $60.  And while we all know that I am being overcharged per gigabyte even at that rate, let's just be honest and up front and try not to gouge the only customers you have.  For example, a 2011 article in the Globe and Mail talked about this very issue in Canada and determined that while Bell was selling data to wholesale customers at $4.25 per 40GB block (that's Canadian dollars, and it breaks down to approximately $0.10 Canadian per gigabyte), the same service was costing consumers $1 or more per gigabyte.  I haven't seen numbers for the costs in the US per gigabyte, but I am willing to bet that it's not that much more.

So why don't you do everyone a favor and treat all your customers the same?  Quit dithering around on the usage cap terms and give us, the consumers that you claim to care about, actual choice in data plans.  It's a crazy thing, but when you start treating your customers like people that are worth something, then it's just possible that you might not be vilified in the press every day.

And while we are at it, thanks oh so much for silently increasing my download rate from 50Mbps to 75Mbps.  I am sure that at some point in the future you will just up my rate to make up for the speed increase without actually changing my data cap. So yeah, thanks a lot for that.

And not that it matters, but yeah, that FCC complaint?  That was me.

Sincerely,

Andrew Fore

QuickReview: LIFX White 800 WiFi LED Smart Bulb – the foremind

Lately I have been dipping my toe into the pool of home automation and smarthome technologies.  I have been interested in having a smarthome ever since I watched my first few episodes of the SyFy channel show Eureka, and my interest was advanced even more by Google I/O 2016 and the demo of Google Assistant.

So a few months ago I cautiously ventured into this new world of technology (new for me, at least) by purchasing a pair of the LIFX White 800 smart bulbs that were on sale at Walmart due to the release of the LIFX Generation 3 A19.

I found that the Android app was very easy to configure, and that I could easily add the light bulbs to multiple Android devices.  I was disappointed to find that they were not immediately compatible with Siri on my wife's iPhone due to the lack of a suitable HomeKit bridge/hub.  This was remedied easily enough by configuring the open-source NodeJS server homebridge and a plugin (homebridge-lifx-lan or homebridge-lifx) to connect the light bulbs to the Apple Home application.
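
For anyone wanting to replicate that setup, the broad strokes look something like this; it's a sketch that assumes a machine with a working Node.js/npm install to host the bridge:

# install homebridge and one of the LIFX plugins globally
npm install -g homebridge homebridge-lifx-lan
# add the plugin's platform entry to ~/.homebridge/config.json, then start the bridge
homebridge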

Adding the lightbulbs to the LIFX app on my Pixel was fairly straightforward and went off without a hitch.

I have found the light bulbs easy enough to manage.  The hue range and brightness are quite suitable for the application, namely the nightstand lights in the master bedroom, and I would definitely recommend these to anyone that doesn't need more than plain white LED light bulbs.

Privacy and cookie policy – the foremind

As the whole GDPR craze has hit my place of employment, I decided to look into an easy method to add a privacy policy and cookie policy to my WordPress site.

Adding a privacy policy is pretty simple: all you have to do is create a new page with the policy on it.  Simple, right? Well, it turns out that the devil is in the details, as usual.  After a few minutes of research, I found a nice site with a template-based policy generator.  The site, Privacy Policy Template, is pretty self-explanatory.  The code for the template is very easy to edit after you generate it.  This provides a great starting point for sites that need more information, but also a nice final product for blogs like this one that don't really deal with any user data other than the owner's.

The other shoe is a cookie policy.  Towards this end I used the CookieBot service and the accompanying plugin.  It is a paid service if you have a large number of pages, but if you have a site with 100 sub-pages or fewer, the service is free.  Be aware that for the purposes of billing and scanning, CookieBot treats each post or page as a separate item.  The pricing structure is really quite reasonable given the capabilities of the service.  For example, this site has a total of 121 pages and posts, so they automatically upgraded my account from free to a 30-day free trial.  Until I hit 500 pages/posts, the cost is only $10/month, which in the overall scheme of things is not bad at all.

Screenshot of the slide-down panel with default settings.

The initial scan can take as long as 24 hours, though that all depends on the size of the site being scanned.  The cookie categories and reports are automatically created for you; however, you can also customize the panel with custom cookies and categories, as well as customize the CSS to control the look and feel of the panel.  I chose to start out with defaults only.  The image to the right is what my site looked like after the initial report was generated.

I also received a report in the inbox configured for the account with details on what was found in the scan and which cookies were treated as falling into the pre-defined categories.  For example, the scan found two cookies that were designated as necessary for the site to function.  There were also items found in the preferences and statistics categories.

While I don't really feel that it is necessary for me to even have a privacy policy or cookie consent feature, I have decided to do it anyway simply out of an attitude of prevention, given the political climate surrounding personal data collection and the like. (Thanks, Mark Zuckerberg, as well as your friends at Cambridge Analytica.)

While these two services are by no means the only ones out there, I found them to be the easiest and simplest to implement.

(The featured image is copyright of HarperCollins Publishers)

Configuring OpenDNS on the Ubiquiti EdgeRouter X – the foremind

Recently I acquired an EdgeRouter X from Ubiquiti Networks to handle the routing and firewall functions of my home network.  This was prompted by a desire to separate each of my network functions onto individual components and to get a better piece of equipment than the run-of-the-mill Comcast rental gear.

After configuring the equipment and updating to the latest firmware, I decided to also configure my network DNS to flow through OpenDNS instead of Comcast DNS.  This also allowed me to configure content filtering so that my grandchildren wouldn't accidentally get shuffled off to some crazy website instead of Disney Junior.

The steps to configure this are not quite as simple as on some other setups.  OpenDNS didn't have any instructions for it and sent inquiring users to the Ubiquiti Community Forums.  Here is the method that I used:

Step One – Open main system configuration

In the main window of the web interface for the EdgeRouter X, click on the System button towards the bottom left of the window. This will bring up the main system configuration screen.

Step Two – Configure the System Name Server values

Add the first OpenDNS IP address (208.67.222.222) in the visible field.  Click the Add New button to add a second field, then enter the second OpenDNS IP address (208.67.220.220) into that field.  Scroll down to the bottom of the System settings and click the Save button.

Step Three – Login to the command line interface

In the upper right section of the admin interface, click on the CLI button to open a window to the command line interface (aka the cli).  When the window opens, log in using the same username and password you use for the web interface (security tip: please take the time to change the password from the default…)

Step Four – Update the DNS Forwarding

After logging into the cli, you need to enter the following commands:

configure
set service dns forwarding system
commit
save
exit
exit

What this does is alter the built-in DNS forwarding service so that it uses the system name server values instead of the values handed out by your ISP connection (in my case an Arris SB6190 cable modem connected to Comcast).
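
If you want to verify the change, EdgeOS should (given its Vyatta lineage, though I'm going from memory here) let you list the forwarding name servers from operational mode:

show dns forwarding nameservers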

After you have completed the above steps, you can easily control the content filtering on your network using the OpenDNS tools.