Pastebin Alternatives

Update 2019: we now use PrivateBin a lot. It is a self-hosted, minimalist, open source online pastebin where the server has zero knowledge of pasted data.


Pastebin has been a valuable tool for years – to the extent that “pastebin” has entered the common lexicon of sysadmins, network engineers and developers.

There are, however, a few notable alternatives:

  • GitHub Gists – what’s particularly cool about these is that each Gist (which is just pasted text) is also a fully fledged Git repository with versioning and the ability to fork. There’s also syntax highlighting and a nice UI. If you’re a GitHub user, your own Gists are also linked to your account. (A command line sketch using the GitHub API follows at the end of this section.)
  • p.ip.fi – this scores big points for its pure simplicity. You could argue that a pastebin doesn’t really need a complex UI, and p.ip.fi is laudable in its complete lack of one. Just paste and hit Ctrl-S and you’re done. Very nice. (Credit to Nick for pointing this out).
  • sprunge.us – this is a command line pastebin which should appeal directly to sysadmins and network engineers. (Credit to dnolan for leading me to this one). This is best demonstrated via an epic traceroute:
$ traceroute -m 60 216.81.59.173 | curl -F 'sprunge=<-' http://sprunge.us
http://sprunge.us/THie

Check out the result at http://sprunge.us/THie.
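In a similar command line spirit, you can create Gists via the GitHub API. Here is a minimal sketch, assuming a personal access token in $GITHUB_TOKEN (the endpoint and JSON structure are those of GitHub’s REST API):

curl -s -X POST https://api.github.com/gists \
    -H "Authorization: token $GITHUB_TOKEN" \
    -d '{"public": false, "files": {"paste.txt": {"content": "Hello, world"}}}'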

Enabling External Commands in Nagios / Ubuntu

I get caught by the following quite often (too many Nagios installations!):

Error: Could not stat() command file ‘/var/lib/nagios3/rw/nagios.cmd’!

The external command file may be missing, Nagios may not be running, and/or Nagios may not be checking external commands. An error occurred while attempting to commit your command for processing.

The correct way to fix this in Ubuntu (using dpkg-statoverride so the permission changes survive package upgrades) is:

service nagios3 stop
# 2710 sets the setgid bit and lets group www-data (the web server)
# traverse into the directory holding the nagios.cmd command FIFO
dpkg-statoverride --update --add nagios www-data 2710 /var/lib/nagios3/rw
dpkg-statoverride --update --add nagios nagios 751 /var/lib/nagios3
service nagios3 start
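You can confirm the overrides are registered at any time with:

dpkg-statoverride --list '/var/lib/nagios3*'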

Translating SNMP OIDs Using MIB Files

I get caught trying to remember this a lot and there’s a really useful tutorial on this at the Net-SNMP website: Using and loading MIBS.

If you’re using Ubuntu, also consider checking the comments in /etc/snmp/snmp.conf which (in 13.04) contains:

As the snmp packages come without MIB files due to license reasons, loading of MIBs is disabled by default. If you added the MIBs you can reenable loading them by commenting out the following line.

Also, run the following:

apt-get install snmp-mibs-downloader

which will download some basic MIBs as part of the installation.
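Once the MIBs are installed and loading is re-enabled (by commenting out the mibs : line in /etc/snmp/snmp.conf as described above), you can check that translation works in both directions with snmptranslate:

$ snmptranslate .1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysDescr.0
$ snmptranslate -On SNMPv2-MIB::sysDescr.0
.1.3.6.1.2.1.1.1.0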

Nagios / Icinga Alerts via Pushover

I recently came across Pushover, which makes it easy to send real-time notifications to your Android and iOS devices. And easy it is. It also allows you to set up applications with logos so that you can have multiple Nagios installations shunting alerts to you via Pushover, with each one easily identifiable. After just a day playing with this, it’s much nicer than SMS.

So, to set up Pushover with Nagios, first register for a free Pushover account. Then create a new application for your Nagios instance. I set the type to Script and also uploaded a logo. After this, you will be armed with two crucial pieces of information: your application API token/key ($APP_KEY) and your user key ($USER_KEY).
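Before wiring anything into Nagios, you can sanity-check both keys directly against Pushover’s messages API with curl:

curl -s --form-string "token=$APP_KEY" \
    --form-string "user=$USER_KEY" \
    --form-string "message=Test from the command line" \
    https://api.pushover.net/1/messages.json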

To get the notification script, clone this GitHub repository or just download this file – notify-by-pushover.php.

You can test this immediately with:

echo "Test message" | \
    ./notify-by-pushover.php HOST $APP_KEY $USER_KEY RECOVERY OK

The parameters are:

USAGE: notify-by-pushover.php <HOST|SERVICE> <$APP_KEY> \
    <$USER_KEY> <NOTIFICATIONTYPE> <STATE>

Now, set up the new notifications in Nagios / Icinga:

# 'notify-by-pushover-service' command definition
define command{
    command_name notify-by-pushover-service
    command_line /usr/bin/printf "%b" "$NOTIFICATIONTYPE$: \
        $SERVICEDESC$@$HOSTNAME$: $SERVICESTATE$           \
        ($SERVICEOUTPUT$)" |                               \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        SERVICE $APP_KEY $CONTACTADDRESS1$                 \
        $NOTIFICATIONTYPE$ $SERVICESTATE$
}

# 'notify-by-pushover-host' command definition
define command{
  command_name notify-by-pushover-host
  command_line /usr/bin/printf "%b" "Host '$HOSTALIAS$'    \
        is $HOSTSTATE$: $HOSTOUTPUT$" |                    \
      /usr/local/nagios-plugins/notify-by-pushover.php     \
        HOST $APP_KEY $CONTACTADDRESS1$ $NOTIFICATIONTYPE$ \
        $HOSTSTATE$
}

Then, in your contact definition(s) add / update as follows:

define contact{
  contact_name ...
  ...
  service_notification_commands ...,notify-by-pushover-service
  host_notification_commands ...,notify-by-pushover-host
  address1 $USER_KEY
}

Make sure you break something to test that this works!

Recovering MySQL Master-Master Replication

MySQL Master-Master replication is a common practice and is implemented by having the auto-increment on primary keys increase by n where n is the number of master servers. For example (in my.cnf):

auto-increment-increment = 2
auto-increment-offset = 1
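The corresponding configuration on the second master would then be the following, so that one server generates odd auto-increment values and the other even ones:

auto-increment-increment = 2
auto-increment-offset = 2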

This article is not about implementing this but rather about recovering from it when it fails. A word of caution – this form of master-master replication is little more than a useful hack that tends to work. It is typically used to implement hot stand-by master servers along with a VRRP-like protocol on the database IP. If you implement this with a high volume of writes, or with the expectation of writing to both masters without the application being aware of this, you can expect a world of pain!

It’s also essential that you use Nagios (or another tool) to monitor the replication on all masters so you know when an issue crops up.
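As a minimal sketch of such a check (credentials are assumed to come from a ~/.my.cnf; a dedicated plugin such as check_mysql_health is a more robust option):

#!/bin/bash
# Nagios-style replication check: exit 0 = OK, 2 = CRITICAL
STATUS=$(mysql -e 'SHOW REPLICA STATUS\G')
IO=$( echo "$STATUS" | awk '/Replica_IO_Running:/  {print $2}')
SQL=$(echo "$STATUS" | awk '/Replica_SQL_Running:/ {print $2}')
if [ "$IO" = "Yes" ] && [ "$SQL" = "Yes" ]; then
    echo "OK: replication running"
    exit 0
else
    echo "CRITICAL: replication not running (IO=$IO, SQL=$SQL)"
    exit 2
fi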

So, let’s assume we have two master servers and one has failed. We’ll call these the Good Server (GS) and the Bad Server (BS). It may be the case that replication has failed on both and then you’ll have the nightmare of deciding which to choose as the GS!

1) You will need the BS to not process any queries from here on in. This may already be the case in a FHRP (e.g. VRRP) environment; but if not, use combinations of stopping services, firewalls, etc. to stop / block access to the BS. It is essential that the BS does not process any queries besides our own during this process.

2) On the BS, execute STOP REPLICA to prevent it replicating from the GS during the process.

3) On the GS, execute:

STOP REPLICA; (to stop it taking replication information from the BS);
FLUSH TABLES WITH READ LOCK; (to stop it updating for a moment);
SHOW MASTER STATUS; (and record the output of this);
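The output of SHOW MASTER STATUS will look something like the following (the values here are purely illustrative); File and Position are what you will later feed into SOURCE_LOG_FILE and SOURCE_LOG_POS:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000042 |     1234 |              |                  |
+------------------+----------+--------------+------------------+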

4) Switch to the BS and import all the data from the GS via something like:

$ mysqldump -h GS -u root -psoopersecret --all-databases --quick \
--lock-all-tables | mysql -h BS -u root -psoopersecret

Note that I am assuming that you are replicating all databases here. Change as appropriate if not.

5) You can now switch back to the GS and execute UNLOCK TABLES to allow it to process queries again.

6) On the BS, set the master status with the information you recorded from the GS via:

CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.xxxxxx', SOURCE_LOG_POS=yy;

7) Then, again on the BS, execute START REPLICA. The BS should now be replicating from the GS again and you can verify this via SHOW REPLICA STATUS.

8) We now need to have the GS replicate from the BS again. On the BS, execute SHOW MASTER STATUS and record the information. Remember that we have stopped the execution of queries on the BS in step 1 above. This is essential.

9) On the GS, using the information just gathered from the BS, execute: 

CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.xxxxxx', SOURCE_LOG_POS=yy;

10) Then, on the GS, execute START REPLICA. You should now have two way replication again and you can verify this via SHOW REPLICA STATUS on the GS.

11) If necessary, undo anything from step 1 above to put the BS back into production.

There is a --master-data switch for mysqldump which would remove the requirement to lock the GS above, but in our practical experience there are various failure modes for the BS, and the --master-data method does not work for them all.

Dovecot – checkpassword – bash

The data from Dovecot’s checkpassword authentication mechanism can be read from a bash script via:

# Dovecot supplies the credentials NUL-separated on file descriptor 3
read -d $'\0' -r -u 3 USER
read -d $'\0' -r -u 3 PASS
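For context, here is a minimal sketch of a complete checkpassword script built around those reads. The my_auth_check function is hypothetical and stands in for whatever your site uses to verify credentials; Dovecot invokes the script with the path of the checkpassword-reply binary as its first argument, which is exec’d on success:

#!/bin/bash
# credentials arrive NUL-separated on file descriptor 3
read -d $'\0' -r -u 3 USER
read -d $'\0' -r -u 3 PASS

if my_auth_check "$USER" "$PASS"; then  # hypothetical site-specific check
    export USER
    exec "$1"                           # hand over to checkpassword-reply
else
    exit 1                              # non-zero exit = authentication failure
fi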

Irish Radio Stations on Linux (2013)

This is updating an older article from October 2010. While Linux has come a long way since then for playing back various types of media (and new services such as tunein make it easier again), I still like to just play the radio from the command line.

The following are updated, working aliases:

alias newstalk='cvlc http://newstalk.fmstreams.com:8008/listen.pls'
alias rteradio1='cvlc http://av.rasset.ie/av/live/radio/radio1.m3u'
alias rteradio1extra='cvlc http://av.rasset.ie/av/live/radio/radio1extra.m3u'
alias 2fm='cvlc http://av.rasset.ie/av/live/radio/2fm.m3u'
alias todayfm='cvlc http://audiostore.todayfm.com/audio/todayfmIRL_64K.asx'

Synchronising Microsoft Exchange to Another IMAP Server

This post is less a detailed how-to and more a collection of useful links. We were tasked with sync’ing about 1,000 MS Exchange mailboxes to a Dovecot server. This needed to be done via an administrator account on the Exchange end, as individual user passwords were not available.

The tool of choice for this is imapsync. Unfortunately, there is no single formula that will work for everyone, as it can depend on the Exchange configuration and version as well as the use of domains on the Exchange and Active Directory servers.

To help understand the various combinations of logins for Exchange, I found the following invaluable: Understanding login strings with POP3/IMAP.

Also invaluable is the imapsync FAQ – just search for mentions of Exchange.

In the end, the following worked for me (but your mileage will most definitely vary!):

./imapsync --host1 exchange-server \
    --user1 'domain/adminuser/user' --password1 'admin-password' \
    --authmech1 LOGIN \
    --host2 dovecot-server --user2 user@example.com \
    --password2 userpassword

One key element here is that when logging into Exchange as an individual user I had to use --authmech1 NTLM, but if you use this auth method with the above user string, you will always end up logging into the admin’s mailbox, not the user’s. That, at least, was my experience.
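To run this for many mailboxes, a simple wrapper loop is enough. This sketch assumes a hypothetical users.txt with one username per line; how you handle the Dovecot-side credentials (--password2) will depend on your setup – a Dovecot master user is one option:

while read -r user; do
    ./imapsync --host1 exchange-server \
        --user1 "domain/adminuser/$user" --password1 'admin-password' \
        --authmech1 LOGIN \
        --host2 dovecot-server --user2 "$user@example.com" \
        --password2 'userpassword'
done < users.txt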

Adventures with LDAP (OpenLDAP) – SSL, Multi-Master Replication and Monitoring

In my career to date, I successfully managed to avoid all but peripheral engagement with OpenLDAP. Until recently, that is – we had to build a Microsoft Exchange like environment with open source software in a way that was closely integrated and easily managed. But, more on that another time. For anyone else diving into OpenLDAP, here are some articles on my experiences that I have penned:

Securing LDAP with TLS / SSL

This is a continuation of a previous post, Creating an LDAP Addressbook / Directory, in which we now add SSL encryption to the directory.

In our case, we used a signed Unified Communications Certificate (UCC) (also known as a Subject Alternative Names (SAN) Certificate) from GoDaddy. The following will work for those as well as standard signed certificates. I have not tested with wildcard certificates. If you want to use a self-signed certificate, see the TLS and SSL section of Ubuntu’s OpenLDAP documentation as well as notes at the end of this document.

GoDaddy (or any other signing authority) will, when presented with a CSR (Certificate Signing Request), return a signed certificate as well as their own CA cert. You will already have your private key which you used to generate the CSR. With this information, prepare a file called tls.ldif with (for example):

dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/gd_bundle.crt
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/webmail.opensolutions.ie.crt
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/webmail.opensolutions.ie.key

And apply the change via:

ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif

On Ubuntu (your own distribution may vary here), you need to add the SSL service by editing /etc/default/slapd and updating the SLAPD_SERVICES line to read:

SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

and then restart the server (/etc/init.d/slapd restart). You should now consider firewalling the standard port (389) to force users to use the encrypted SSL port.
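You can verify the LDAPS listener with openssl (to inspect the certificate chain it presents) and with an anonymous base search over ldaps://; the hostname should match your certificate subject name (abook.opensolutions.ie in the example that follows):

openssl s_client -connect abook.opensolutions.ie:636 -showcerts < /dev/null
ldapsearch -H ldaps://abook.opensolutions.ie -x -b '' -s base namingContexts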

Following our example with Thunderbird, you can now update your LDAP directory configuration by setting the hostname to match the subject name in your UCC / certificate (e.g. abook.opensolutions.ie) and the port to 636.

Notes for Self Signed Certificates

If you are using a self-signed certificate, you need to ensure a couple of things. Let’s assume you created a self-signed certificate for abook.opensolutions.ie. Clients need a special configuration parameter for untrusted / self-signed certificates. First, copy your self-signed certificate (e.g. /etc/ssl/webmail.opensolutions.ie.crt above) to the client machine(s) – say /etc/ssl/certs/abook.crt.

Now, on the client machine, add the following line to /etc/ldap/ldap.conf:

TLS_CACERT /etc/ssl/certs/abook.crt

Second, the hostname you use to access the LDAP server must also match the certificate subject name – i.e. use abook.opensolutions.ie in this example rather than an IP address / alternative hostname.