Translating SNMP OIDs Using MIB Files

I keep getting caught out trying to remember this, and there’s a really useful tutorial on it at the Net-SNMP website: Using and loading MIBS.

If you’re using Ubuntu, also check the comments in /etc/snmp/snmp.conf, which (in 13.04) contains:

As the snmp packages come without MIB files due to license reasons, loading of MIBs is disabled by default. If you added the MIBs you can reenable loading them by commenting out the following line.

Also, run the following:

apt-get install snmp-mibs-downloader

which will download some basic MIBs as part of the installation.
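
With the MIBs downloaded, snmptranslate is a quick way to confirm they are actually being loaded, translating in either direction between numeric OIDs and symbolic names:

# numeric OID to symbolic name:
snmptranslate .1.3.6.1.2.1.1.1.0
# => SNMPv2-MIB::sysDescr.0

# and back to the numeric form:
snmptranslate -On SNMPv2-MIB::sysDescr.0
# => .1.3.6.1.2.1.1.1.0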

Nagios / Icinga Alerts via Pushover

I came across Pushover recently, which makes it easy to send real-time notifications to your Android and iOS devices. And easy it is. It also allows you to set up applications with logos so that you can have multiple Nagios installations shunting alerts to you via Pushover, with each one easily identifiable. After just a day playing with this, it’s much nicer than SMS.

So, to set up Pushover with Nagios, first register for a free Pushover account. Then create a new application for your Nagios instance. I set the type to Script and also upload a logo. After this, you will be armed with two crucial pieces of information: your application API token/key ($APP_KEY) and your user key ($USER_KEY).

To get the notification script, clone this GitHub repository or just download this file – notify-by-pushover.php.

You can test this immediately with:

echo "Test message" | \
    ./notify-by-pushover.php HOST $APP_KEY $USER_KEY RECOVERY OK

The parameters are:

USAGE: notify-by-pushover.php <HOST|SERVICE> <$APP_KEY> \
    <$USER_KEY> <NOTIFICATIONTYPE> <STATE>

Now, set up the new notifications in Nagios / Icinga:

# 'notify-by-pushover-service' command definition
define command{
    command_name notify-by-pushover-service
    command_line /usr/bin/printf "%b" "$NOTIFICATIONTYPE$: \
        $SERVICEDESC$@$HOSTNAME$: $SERVICESTATE$           \
        ($SERVICEOUTPUT$)" |                               \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        SERVICE $APP_KEY $CONTACTADDRESS1$                 \
        $NOTIFICATIONTYPE$ $SERVICESTATE$
}

# 'notify-by-pushover-host' command definition
define command{
  command_name notify-by-pushover-host
  command_line /usr/bin/printf "%b" "Host '$HOSTALIAS$'    \
        is $HOSTSTATE$: $HOSTOUTPUT$" |                    \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        HOST $APP_KEY $CONTACTADDRESS1$ $NOTIFICATIONTYPE$ \
        $HOSTSTATE$
}

Then, in your contact definition(s) add / update as follows:

define contact{
  contact_name ...
  ...
  service_notification_commands ...,notify-by-pushover-service
  host_notification_commands ...,notify-by-pushover-host
  address1 $USER_KEY
}

Make sure you break something to test that this works!
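
Alternatively, rather than actually breaking something, you can have Nagios fire a custom notification through its external command file. A sketch – the command file path and the host / service names here are illustrative, so adjust for your install:

now=$(date +%s)
/usr/bin/printf "[%lu] SEND_CUSTOM_SVC_NOTIFICATION;somehost;someservice;0;tester;Pushover test\n" $now \
    > /usr/local/nagios/var/rw/nagios.cmd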

Recovering MySQL Master-Master Replication

MySQL Master-Master replication is a common practice and is implemented by having the auto-increment on primary keys increase by n, where n is the number of master servers. For example (in my.cnf):

auto-increment-increment = 2
auto-increment-offset = 1
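
The second master then gets the complementary offset so the two can never allocate the same ID – the first server takes the odd values and the second the even ones:

auto-increment-increment = 2
auto-increment-offset = 2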

This article is not about implementing this but rather about recovering from it when it fails. A word of caution – this form of master-master replication is little more than a useful hack that tends to work. It is typically used to implement hot stand-by master servers along with a VRRP-like protocol on the database IP. If you implement this with a high volume of writes, or with the expectation of writing to both masters without the application being aware of this, you can expect a world of pain!

It’s also essential that you use Nagios (or another tool) to monitor the replication on all masters so you know when an issue crops up.

So, let’s assume we have two master servers and one has failed. We’ll call these the Good Server (GS) and the Bad Server (BS). It may be the case that replication has failed on both and then you’ll have the nightmare of deciding which to choose as the GS!

1) You will need the BS to stop processing any queries from here on in. This may already be the case in a FHRP (e.g. VRRP) environment; but if not, use combinations of stopping services, firewalls, etc. to stop / block access to the BS. It is essential that the BS does not process any queries besides our own during this process.
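
For example, a crude iptables approach might be to reject MySQL connections from everything except a single admin host (192.0.2.10 below is purely illustrative):

iptables -I INPUT -p tcp --dport 3306 ! -s 192.0.2.10 -j REJECT

Whatever you use, remember to undo it in step 11 below.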

2) On the BS, execute STOP REPLICA to prevent it replicating from the GS during the process.

3) On the GS, execute:

STOP REPLICA; (to stop it taking replication information from the BS);
FLUSH TABLES WITH READ LOCK; (to stop it updating for a moment);
SHOW MASTER STATUS; (and record the output of this);

4) Switch to the BS and import all the data from the GS via something like:

$ mysqldump -h GS -u root -psoopersecret --all-databases --quick \
--lock-all-tables | mysql -h BS -u root -psoopersecret;

Note that I am assuming that you are replicating all databases here. Change as appropriate if not.

5) You can now switch back to the GS and execute UNLOCK TABLES to allow it to process queries again.

6) On the BS, set the master status with the information you recorded from the GS via:

CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.xxxxxx', SOURCE_LOG_POS=yy;

7) Then, again on the BS, execute START REPLICA. The BS should now be replicating from the GS again and you can verify this via SHOW REPLICA STATUS.
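
As a quick sanity check (assuming a MySQL version with the newer REPLICA terminology), both replication threads should be running and the lag should be falling:

mysql -e 'SHOW REPLICA STATUS\G' | \
    egrep 'Replica_IO_Running:|Replica_SQL_Running:|Seconds_Behind_Source:'
# both *_Running fields should read Yes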

8) We now need to have the GS replicate from the BS again. On the BS, execute SHOW MASTER STATUS and record the information. Remember that we have stopped the execution of queries on the BS in step 1 above. This is essential.

9) On the GS, using the information just gathered from the BS, execute: 

CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.xxxxxx', SOURCE_LOG_POS=yy;

10) Then, on the GS, execute START REPLICA. You should now have two-way replication again and you can verify this via SHOW REPLICA STATUS on the GS.

11) If necessary, undo anything from step 1 above to put the BS back into production.

There is a --master-data switch for mysqldump which would remove the requirement to lock the GS above, but in our practical experience there are various failure modes for the BS, and the --master-data method does not work for them all.

Irish Radio Stations on Linux (2013)

This updates an older article from October 2010. While Linux has come a long way since then for playing back various types of media (and new services such as TuneIn make it easier again), I still like to just play the radio from the command line.

The following are updated, working aliases:

alias newstalk='cvlc http://newstalk.fmstreams.com:8008/listen.pls'
alias rteradio1='cvlc http://av.rasset.ie/av/live/radio/radio1.m3u'
alias rteradio1extra='cvlc http://av.rasset.ie/av/live/radio/radio1extra.m3u'
alias 2fm='cvlc http://av.rasset.ie/av/live/radio/2fm.m3u'
alias todayfm='cvlc http://audiostore.todayfm.com/audio/todayfmIRL_64K.asx'

MySQL 5.6 – Memcached / NoSQL Support and More

MySQL 5.6 has been released with some interesting new features and performance increases:

  • What’s New in MySQL 5.6
  • DBA and Developer Guide to MySQL 5.6
  • InnoDB Integration with memcached: MySQL 5.6 includes a NoSQL interface, using an integrated memcached daemon that can automatically store data and retrieve it from InnoDB tables, turning the MySQL server into a fast “key-value store” for single-row insert, update, or delete operations. You can still also access the same tables through SQL for convenience, complex queries, bulk operations, application compatibility, and other strengths of traditional database software.

    With this NoSQL interface, you use the familiar memcached API to speed up database operations, letting InnoDB handle memory caching using its buffer pool mechanism. Data modified through memcached operations such as ADD, SET, INCR are stored to disk, using the familiar InnoDB mechanisms such as change buffering, the doublewrite buffer, and crash recovery. The combination of memcached simplicity and InnoDB durability provides users with the best of both worlds.

  • Multi-threaded Slaves
  • Improved IPv6 Support – both in the bind-address option and the new INET6_ATON() and INET6_NTOA() functions.
  • Replication improvements.

All in all, some nice new features. Especially the memcached integration.
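
For a taste of the memcached integration, the 5.6 documentation has you load the supplied configuration schema and then the daemon_memcached plugin, after which MySQL talks the memcached protocol on the standard port (11211). A rough sketch – the script’s path will vary by distribution:

mysql -u root -p < /usr/share/mysql/innodb_memcached_config.sql
mysql -u root -p -e 'INSTALL PLUGIN daemon_memcached SONAME "libmemcached.so";'

# store and fetch a value using the memcached protocol:
printf 'set greeting 0 0 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc localhost 11211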

That said, MariaDB seems to be making inroads on MySQL, with some distributions considering a switch; the MariaDB project’s own site makes for some interesting reading on this.

For OpenVPN Fans – Optimising for GE Networks

I came across an article today which discusses OpenVPN optimisations for Gigabit Ethernet networks:

http://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux

It also shows the phenomenal improvement that can be made thanks to the AES-NI instruction set on newer Intel and AMD chips.
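
As a quick check on your own hardware, you can see whether the CPU advertises AES-NI and what it does for OpenSSL (which OpenVPN uses for its crypto):

# prints 'aes' if the CPU advertises AES-NI:
grep -m 1 -o aes /proc/cpuinfo

# benchmark AES via the EVP interface, which uses AES-NI where available:
openssl speed -evp aes-256-cbc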

Apple OS X as an NFS Server (with Linux Clients)

For a customer, I had to set up a Linux-based virtualised environment on a MacBook Pro using VirtualBox. This environment included making a couple of 8TB external hard drives available over NFS to the Linux hosts.

In all fairness, what better use can one put OS X to than to virtualise Linux?!?  Just kidding fanboys… well, sort of 😉

Let’s begin with a quick description of the environment:

  • A MacBook Pro (MBP) with OS X 10.8.2
  • VirtualBox with its own network (MBP: 192.168.56.1/24) for NFS as well as bridged adapters for general Internet access;
  • Multiple external HDDs – for simplicity, let’s just do one here which is mounted under /Volumes/DATA-1.

We want to export the DATA-1 volume to the Linux clients. That bit’s actually not too hard (see below); the main issue is that we needed to match what on Linux is called no_root_squash – i.e. so the root user on the Linux clients would have root access to the NFS shares. That bit was harder.

I’ll assume root access / sudo use in the following commands.

To configure NFS, we edit / create /etc/exports (e.g. nano /etc/exports) with an entry such as:

/Volumes/DATA-1 -maproot=root:wheel -network 192.168.56.0 -mask 255.255.255.0

In other words:

  • export /Volumes/DATA-1
  • map the client's root user to the local root user and the client's root group to the local group wheel (gid = 0)
  • allow the export to be accessed by any host on the private VirtualBox network.

With that entry, NFS can be enabled at boot and started via:

nfsd enable
nfsd start
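
At this point, the export should be visible from the Linux side (assuming the showmount tool – nfs-common on Debian / Ubuntu – is installed):

# list the exports served by the MBP:
showmount -e 192.168.56.1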

On a Linux client, this can then be mounted at boot with an /etc/fstab entry:

192.168.56.1:/Volumes/DATA-1 /mnt/data-1 nfs defaults 0 0

The problem was that no matter what variation of options I used, I could not get root access from the Linux clients.

The answer came by chance when I noticed an odd mount option on the external HDD:

/dev/disk2s2 on /Volumes/DATA-1 (hfs, NFS exported, local, nodev, nosuid, journaled, noowners)

noowners? What, pray tell, is this? The internet provided some insight:

In Leopard, due to an unfortunate design decision by Apple, “admin” authentication is now required to make this change (no noowners) and non-admin users are no longer able to use “Get Info” to change this setting, even on devices they own and have mounted themselves.

An unfortunate design decision indeed. The temporary solution is to execute:

mount -u -o owners /Volumes/DATA-1

Thereafter, I have root access / an effective UID of root from the Linux clients. This of course needs to be entered each time the volume is mounted – if someone has a more permanent solution, I’m all ears (see below for a cron script I have implemented for this).

Just as an aside, we have a lot of NFS activity, which required some tuning. First, I added additional NFS threads by putting nfs.server.nfsd_threads=16 in /etc/nfs.conf (execute nfsd restart after that). I’ve also added the following line to /etc/rc.local:

sysctl -w kern.aiomax=64 kern.aioprocmax=32 kern.aiothreads=4

Cron Script for Automatically Removing noowners

As mentioned above, removing this mount option every time you connect these HDDs is damn annoying at best and error-prone at worst. I now have a script for this, located in /var/root/bin/mount-check.sh:

#! /bin/bash

# Count mount entries for /Volumes/DATA-1 that include the noowners option:
NOOWNERS=`/sbin/mount | grep "/Volumes/DATA-1" | grep noowners | wc -l`

# If it is mounted with noowners, remount it in place with owners enabled:
if [[ "X${NOOWNERS//[[:space:]]/}X" = "X1X" ]]; then
    /sbin/mount -u -o owners /Volumes/DATA-1;
fi

This is then executed via a new line in /etc/crontab:

* * * * *    root    /var/root/bin/mount-check.sh


Nagios Plugin for Checking Backups via rsnapshot

We’ve just added a check_rsnapshot.php script to our nagios-plugins bundle on Github. This script will verify rsnapshot backups via Nagios using a number of checks / tests:

  • minfiles – checks the number of files in a snapshot against a minimum expected number;
  • minsize – checks the size of a snapshot against a minimum expected size;
  • log – parses the rsnapshot log to ensure the most recent runs for each retention period completed successfully;
  • timestamp – checks for files created server side containing a timestamp and thus ensuring snapshots are succeeding;
  • rotation – checks that retention directories are being rotated; and
  • dir-creation – checks that retention directories are being created.

Please see this Github wiki page for more information including instructions.

Analysing MySQL Slow Query Logs

MySQL has a really useful feature that allows it to log slow queries, where slow means exceeding a minimum execution time defined by you (with microsecond resolution). It helps a lot in diagnosing website outages or slow-responsiveness issues after the fact.
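
If it's not already enabled, a minimal my.cnf sketch for this would be (the log file location is just an example; since 5.1, long_query_time accepts fractional seconds):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 0.5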

Unfortunately I couldn’t find any nice graphical tools for analysing these, but there are a few command-line tools:

mysqldumpslow

MySQL’s own tool, mysqldumpslow, aggregates queries and allows you to sort them by: query time or average query time; lock time or average lock time; rows sent or average rows sent; or the number of queries.
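
For example, to show the top ten queries sorted by average query time:

mysqldumpslow -s at -t 10 /var/log/mysql/mysql-slow.log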

Percona’s MySQL Slow Query Log Analyser

Back in 2006, Percona’s Peter Zaitsev wrote about their own slow query log analyser (local copy), which has given me good results. Note that their microsecond-resolution patch has since been incorporated into mainstream MySQL.

One of the main differences over MySQL’s own version is that, as well as printing the aggregated query (with number and string literals wildcarded), it also prints a real example of the query, allowing a copy and paste into MySQL for execution with EXPLAIN.

Example output with query details redacted:

### 230 Queries 
### Total time: 4708.948293, Average time: 20.4736882304348
### Taking 0.093420 to 203.693466 seconds to complete
### Rows analyzed 0 - 141008
SET timestamp=XXX;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE XXX AND C.item_lang = 'XXX' AND ... 
    ORDER BY CATALOG.item_sort LIMIT XXX;

SET timestamp=1348032761;
SELECT ... FROM ... AS A 
        INNER JOIN ... AS C ON C.item_id = A.item_id 
    WHERE 1 AND C.item_lang = '1' AND ... 
    ORDER BY C.item_sort LIMIT 1;


Migrating SVN with Branches and Tags to Git

Following my love affair with Git, I’ve also started using a local install of Gitorious for private and commercial projects at Open Solutions. Before Gitorious, this meant setting up authentication and Apache aliases for each new Git repository which meant we were pretty disinclined to create repositories when we should have.

With Gitorious, it’s just a couple of clicks and we have internal public repositories, team repositories or individual developer private repositories. It’s grrrrrrreat!

Last night and this morning, I’ve started a process of finding the many SVN repositories I / we have scattered around and importing them into Git (with all branches and tags). Here’s the process:

Importing Subversion Repositories with Branches and Tags to Git

1. Create a users file so you can correctly map SVN commit usernames to Git users (git svn expects a full name and email address for each). For example, users.txt:

user1 = First Last Name <user1@example.com>
user2 = First Last Name <user2@example.com>
...

2. Now clone the Subversion repository:

git svn clone -s -A users.txt svn://hostname/path dest_dir-tmp
cd dest_dir-tmp
git svn fetch --fetch-all

3. Now we have the entire SVN repository. We need to create Git tags to match the SVN tags:

git branch -r | sed -rne 's, *tags/([^@]+)$,\1,p' | \
while read tag; do \
 echo "git tag $tag 'tags/${tag}^'; git branch -r -d tags/$tag"; \
done | sh

4. Now we need to create matching branches:

git branch -r | grep -v tags | sed -rne 's, *([^@]+)$,\1,p' | \
 while read branch; \
 do echo "git branch $branch $branch"; \
done | sh

5. To help speed up a remote push, we’ll compact the repository:

git repack -d -f -a --depth=50 --window=100

6. Then we remove the meta-data that was used by git-svn:

git config --remove-section svn-remote.svn
git config --remove-section svn
rm -r .git/svn

7. And finally, we push it to our own Gitorious server:

git remote add origin git@example.com:zobie/my_great_app.git
git push --all && git push --tags

References:

  1. http://stackoverflow.com/questions/3239759/checkout-remote-branch-using-git-svn
  2. http://stackoverflow.com/questions/79165/how-to-migrate-svn-with-history-to-a-new-git-repository
  3. http://blog.zobie.com/2008/12/migrating-from-svn-to-git/