Monitoring SSL Certificate Expiry Dates with Nagios

It is good practice to separate the Nagios check that your web server is available from the check of its SSL certificate expiry. The latter only needs to run once per day, and it should not add unnecessary noise to the more immediately important alert that the web service itself has failed.

To use check_http to monitor SSL certificate expiry dates, first ensure you have a service template with a daily check interval – let’s call it service-daily (a sketch of such a template is given at the end of this section). Now create two check command definitions as follows:

define command{
    command_name check_cert
    command_line /usr/lib/nagios/plugins/check_http -S \
        -I $HOSTADDRESS$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

define command{
    command_name check_named_cert
    command_line /usr/lib/nagios/plugins/check_http -S \
        -I $ARG3$ -w 5 -c 10 -p $ARG1$ -C $ARG2$
}

The second is useful for checking named certificates on additional IP addresses on web servers serving multiple SSL domains.

We can use these to check SSL certificates for POP3, IMAP, SMTP and HTTP:

define service{
    use service-daily
    host_name mailserver
    service_description POP3 SSL Certificate
    check_command check_cert!995!21
}

define service{
    use service-daily
    host_name mailserver
    service_description IMAP SSL Certificate
    check_command check_cert!993!21
}

define service{
    use service-daily
    host_name mailserver
    service_description SMTP SSL Certificate
    check_command check_cert!465!21
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.com
    check_command check_named_cert!443!21!www.example.com
}

define service{
    use service-daily
    host_name webserver
    service_description SSL Cert: www.example.net
    check_command check_named_cert!443!21!www.example.net
}
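
For reference, the service-daily template that these definitions inherit from might look something like this – a minimal sketch that assumes a generic-service parent template; the exact intervals and notification settings will depend on your setup:

define service{
    name                    service-daily
    use                     generic-service    ; assumed parent template
    check_interval          1440               ; once per day (in minutes)
    retry_interval          60
    max_check_attempts      3
    notification_interval   1440
    register                0                  ; template only, not a real service
}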

Residential Property Price Register

Ireland finally has a residential property price register thanks to the newly formed Property Services Regulatory Authority.

Unfortunately their interface to this data is extremely cumbersome with a constant CAPTCHA challenge.

We knocked out our own version here: http://www.opensolutions.ie/ppr.

Kudos to the PSRA for opening up the information as they explain here.

All our code is licensed under BSD and is available here on GitHub.

Nagios Plugin for Checking Backups via rsnapshot

We’ve just added a check_rsnapshot.php script to our nagios-plugins bundle on Github. This script will verify rsnapshot backups via Nagios using a number of checks / tests:

  • minfiles – checks the number of files in a snapshot against a minimum expected number;
  • minsize – checks the size of a snapshot against a minimum expected size;
  • log – parses the rsnapshot log to ensure the most recent runs for each retention period completed successfully;
  • timestamp – checks for files created server side containing a timestamp and thus ensuring snapshots are succeeding;
  • rotation – checks that retention directories are being rotated; and
  • dir-creation – checks that retention directories are being created.

Please see this Github wiki page for more information including instructions.

Analysing MySQL Slow Query Logs

MySQL has a really useful feature that allows it to log slow queries, where slow means any query taking longer than a threshold you define (in seconds, with microsecond resolution in recent versions). It helps a lot in diagnosing website outages or slow-responsiveness issues after the fact.
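
For reference, enabling the slow query log looks something like this in my.cnf – a sketch only, as the exact variable names vary by MySQL version (older releases use log-slow-queries rather than slow_query_log):

[mysqld]
# log queries taking longer than half a second (fractional values
# give microsecond resolution in MySQL 5.1.21 and later)
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 0.5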

Unfortunately I couldn’t find any nice graphical tools for analysing these but there are a few command line tools:

mysqldumpslow

MySQL’s own tool, mysqldumpslow, aggregates similar queries and allows you to sort them by query time or average query time, lock time or average lock time, rows sent or average rows sent, or the number of queries.
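
A typical invocation (the log path is just an example) sorts by average query time and shows the top ten offenders:

# -s at: sort by average query time; -t 10: show the top 10 entries
mysqldumpslow -s at -t 10 /var/log/mysql/mysql-slow.log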

Percona’s MySQL Slow Query Log Analyser

Back in 2006, Percona’s Peter Zaitsev wrote about their own version of a slow query log analyser (local copy), which has given me good results. Note that their microsecond-resolution patch has since been incorporated into mainstream MySQL.

One of the main differences from MySQL’s own tool is that, as well as printing the aggregated query (with number and string literals wildcarded), it also prints a real example of the query, allowing you to copy and paste it into MySQL and run it with EXPLAIN.

Example output with query details redacted:

### 230 Queries 
### Total time: 4708.948293, Average time: 20.4736882304348
### Taking 0.093420 to 203.693466 seconds to complete
### Rows analyzed 0 - 141008
SET timestamp=XXX;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE XXX AND C.item_lang = 'XXX' AND ... 
    ORDER BY CATALOG.item_sort LIMIT XXX;

SET timestamp=1348032761;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE 1 AND C.item_lang = '1' AND ... 
    ORDER BY C.item_sort LIMIT 1;


So it’s finally happened…

RIPE put out a press release today:

RIPE NCC Begins to Allocate IPv4 Address Space From the Last /8

14 Sep 2012

On Friday 14 September, 2012, the RIPE NCC, the Regional Internet Registry (RIR) for Europe, the Middle East and parts of Central Asia, distributed the last blocks of IPv4 address space from the available pool.

This means that we are now distributing IPv4 address space to Local Internet Registries (LIRs) from the last /8 according to section 5.6 of “IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region”.

This section states that an LIR may receive one /22 allocation (1,024 IPv4 addresses), even if they can justify a larger allocation. This /22 allocation will only be made to LIRs if they have already received an IPv6 allocation from an upstream LIR or the RIPE NCC. No new IPv4 Provider Independent (PI) space will be assigned.

It is now imperative that all stakeholders deploy IPv6 on their networks to ensure the continuity of their online operations and the future growth of the Internet.

In other words, for all intents and purposes, Europe (and Central Asia and the Middle East) is out of IPv4 addresses. Funnily enough, I’m actually happy that this long predicted day has arrived and we can start the next phase of IPv6 deployment.

Thunderbird 15 Released with Chat – But Shite Error Handling

I’ve just moved a couple of my chat and IRC accounts over to Thunderbird 15 to see how they look. After all, one communications application is better than three. Mozilla have a good instruction set here and they work well for standard chat accounts.

However, many corporations use in-house XMPP servers with self-signed certificates, and these fail with no error message whatsoever. It was by fluke I found this post: How to Make Thunderbird Chat Work with Most XMPP/Jabber Accounts. Thanks for doing the legwork, Rod – this helped a lot.


Anonymous Objects in PHP

A little-known but often useful feature of PHP’s object-oriented functionality is anonymous objects, which are generally used as value stores. Here’s an example:

$valueStore = (object) array(
    'name' => 'John Smith',
    'address' => array(
        '1 Some Street',
        'Some Town, Post Code',
        'Ireland'
    )
);

If you were to dump the resultant object, you’d get:

object(stdClass)#1 (2) {
    ["name"] => string(10) "John Smith"
    ["address"] => array(3) {
        [0] => string(13) "1 Some Street"
        [1] => string(20) "Some Town, Post Code"
        [2] => string(7) "Ireland"
    }
}

You can now add new items ($valueStore->item = 'qwerty'), check if items are set (isset()), remove items (unset()), and retrieve items ($valueStore->item).

But why? Well, one reason I frequently use these anonymous objects is when I need to pass a value store around between different functions, objects, registries and so forth. Standard arrays are passed by value, meaning you end up making copies, and also that changes to one copy won’t affect the others.

If we use anonymous objects then these are passed by handle – effectively by reference – and no copies are created. "We could use the reference operator!" I hear you scream. Yes, we could. But that is more prone to forgetful errors and is not as elegant as the above!
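
Here’s a minimal sketch of what that buys us – addCountryCode() is just a made-up helper – showing that a change made inside a function is visible to the caller without any reference operator:

function addCountryCode($store)
{
    // no &$store needed: the object handle is copied, but both
    // handles point at the same stdClass instance
    $store->countryCode = 'IE';
}

$valueStore = (object) array('name' => 'John Smith');

addCountryCode($valueStore);

echo $valueStore->countryCode; // prints "IE"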

If you want to take it further, you can add functionality via anonymous functions and closures.
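
For example (assuming PHP 5.3+ for closures), note that a closure stored in a property can’t be called as $valueStore->greet() – PHP would look for a method on stdClass – so call_user_func() is used instead:

$valueStore = (object) array('name' => 'John Smith');

// attach a closure to the value store
$valueStore->greet = function () use ($valueStore) {
    return 'Hello, ' . $valueStore->name;
};

echo call_user_func($valueStore->greet); // prints "Hello, John Smith"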

Try Git In Your Browser

I know, I know. The fourth Git post in a row…

But, in case you missed it, or if you’d like to try Git without installing any software, check out http://try.github.com/

Today [GitHub are] launching a unique and easy way, in the format of a Code School interactive course, for new Git and GitHub users to try both the tool and the service without a single bit of software installation.

If you know of a developer, designer, or other knowledge worker that would benefit from using Git but that hasn’t made the leap just yet, send them over to try.github.com when they have a spare 15 minutes.

Two Git Branching Models

In current projects, we tend to float between two branching models depending on the requirements of the customer / project and the planned deployment process.

git-flow

This model was first described in the post A successful Git branching model, with an extremely detailed overview and instructions here. It has since spawned a project to help make using the model easier – git-flow.

This is a great branching model for a team of people who work towards creating planned versioned deployments (whether it be time based such as every second Tuesday, or milestone based).

In essence, this model uses two main branches:

  • origin/master – the main branch where the source code of HEAD always reflects a production-ready state.
  • origin/develop – the branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.

Then developers create supporting branches as part of their development process:

  • feature branches – branched off origin/develop for active development of a new feature / enhancement;
  • hotfix branches – a branch off origin/master to apply a critical fix to production code;
  • release branches – short lived branches off origin/develop which are used for final testing and patched before being merged to origin/master.
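
If you use the git-flow tool itself, the feature and release workflow described above maps onto commands along these lines (branch and version names are just examples):

git flow init                    # set up master/develop and branch prefixes
git flow feature start login     # branches feature/login off develop
# ... hack, commit, hack ...
git flow feature finish login    # merges back into develop and deletes the branch

git flow release start 1.2.0     # branches release/1.2.0 off develop
git flow release finish 1.2.0    # merges into master, tags it, and merges back into develop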

The model works very well in practice, and the document linked above is an excellent read on Git branching practices in general as well as git-flow in particular.

Github Flow

This is the model used at Github and discussed by Github developer Scott Chacon here.

This is suited for projects that deploy continuously rather than around the concept of releases and so it’s less constrained than git-flow. These are two very different models and Scott puts forth his arguments in favour of the Github model in his post linked above.

In essence, Github flow works as follows:

  • origin/master is always deployable. Always.
  • Similarly to git-flow, new features are done in their own branch – but off of origin/master in this case.
  • Now, when you believe your new feature is complete, a pull request is opened so that someone else can review and check your work. This is the QA process.
  • Once someone else signs off, you merge back into origin/master.
  • This is now deployable and can and will be deployed at any time.
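
In plain git commands, one iteration of this flow looks roughly like the following (branch names are examples):

git checkout -b new-feature origin/master   # branch off master
# ... commit work ...
git push origin new-feature                 # push the branch and open a pull request
# once the pull request has been reviewed and signed off:
git checkout master
git merge new-feature
git push origin master                      # master is deployable again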


Specifying Specific SSH Keys for Git Remotes

When using Git via SSH with services such as GitHub and Gitorious, it can be useful to use a specific SSH key for a given remote rather than your default.

This is accomplished with an entry such as the following in your ~/.ssh/config:

Host some-alias
    IdentityFile /home/username/.ssh/id_rsa-some-alias
    IdentitiesOnly yes
    HostName git.example.com
    User git

You then specify this remote as follows in .git/config:

url = some-alias:project/repository.git
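
Equivalently, you can set this from the command line (the project path is just an example):

git remote add origin some-alias:project/repository.git
# or, for a fresh checkout:
git clone some-alias:project/repository.git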