Migrating Legacy Web Applications to Laravel

Originally published in php[architect] Magazine, March 2019 issue [PDF]. Discussed in Building Bridges, php[podcast] – The Official Podcast of php[architect], March 25th, 2019 [Official Link] [Local Copy].


Thanks to Taylor Otwell’s Laravel framework, PHP is reclaiming its rightful place as the go-to language for web application development. For those of us maintaining and developing applications using legacy frameworks, the grass certainly looks greener on Laravel’s side. In this article, I will show you how to do an in-place migration from a legacy framework to Laravel.

Introduction

IXP Manager is an open source tool we developed at INEX for managing IXPs (internet exchange points – network switching centers which facilitate the regional exchange of internet traffic between different networks). It has run on Zend Framework V1 (ZF1) since 2008.

As most readers will know, ZF1 went end-of-life in 2016, but its obituary was written a couple of years before that. In 2015, we released V4 of IXP Manager, which was a framework transition release. Over the course of nine minor releases of V4, we migrated from ZF1 to Laravel, finally completing the project with V4.9, released in January 2019.

Admittedly, a two-and-a-half-year transition sounds like a long time, but this was an in-place migration where Laravel handled new and migrated controllers while anything still to be migrated fell back on ZF1. You should also note the IXP Manager project has a single full-time developer, plus me when time allows.

The Approach

There are two possible approaches to migrating your application to Laravel: a flag-day or an in-place/side-by-side migration.

Your gut feeling may lean towards a flag day – “let’s just get this done” – but it is the more drastic path. It means pausing all feature development and rewriting the application completely. In any project, commercial or open source, this is a very difficult argument to make. For a commercial project, it puts a real cost on the migration: ( number of developers * monthly salary * n months ) + the opportunity cost of the development freeze, where n will realistically be six months at an absolute minimum. This will be very difficult to get approved by the higher-ups! Plus, have you ever met a development project that finished on time? That six months will creep to a year and even beyond very quickly.

With the in-place migration, we add Laravel to our application so that it has the first opportunity to service a request (route); otherwise, it hands off to the legacy framework. This has two immediate advantages: you can develop all new features immediately on Laravel, and you can use Laravel features and facades within the legacy framework. It also means you can migrate legacy controllers on a case-by-case basis as time and resources allow. Migrating the smaller/easier legacy controllers is also an excellent project for interns, students on work experience, or new hires getting up to speed. The true cost is buried in day-to-day development, there’s no promised flag-day deadline to miss, and there’s no frustrating feature freeze.

Making the Case

Part of making the case to fellow developers and decision makers in your organization is being able to reference that Laravel is now the number one web application framework on GitHub – across all languages. Other important arguments include:

  1. Prevent developer apathy: or, better phrased for management, retain key employees and attract more developers. Let’s face it, as developers, we prefer to engage in projects that use current frameworks and which support modern versions of PHP (i.e., greater than or equal to 7.1).
  2. You will have to eventually: this is a corollary of the above point. If you do not migrate to a modern framework, you will inevitably face the following consequences. You will hemorrhage employees/developers. Your code will grow more outdated, making the eventual upgrade ever more difficult and costly. You will be running frameworks that are past end-of-life and end-of-support, which means security holes will be discovered but remain unpatched. And you will be forced to run older operating systems to get the older versions of PHP your framework requires, yielding yet more known but unpatched security holes.
  3. Develop with modern techniques and services: Laravel makes it incredibly easy to use modern features such as job queues, an integrated command line interface, broadcasting, caching, events with listeners, scheduling, modern templating engine, database abstraction and ORM, and more.
  4. Reference applications: refer to projects that successfully demonstrate an in-place migration including IXP Manager which supports critical internet infrastructure in 70 locations around the world and has successfully completed the migration, and LibreNMS, a hugely popular network monitoring system with thousands of installations that is also well along the path of replacing a custom framework with Laravel.

Prerequisites

Before you start the process of integrating Laravel for an in-place migration, you need to ensure your existing application is ready for it.

Your legacy application needs to use Composer, the de facto dependency manager for PHP. If you are not using it already, you will need to integrate it into your application, using its autoloaders (classmap, psr-0/psr-4) to register your existing namespaces (whether modern PHP namespaces or Zend_-style class prefixes).
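For example, if your ZF1-era classes live in a library/ directory (the paths and the MyApp_ prefix here are hypothetical), a minimal composer.json autoload section might look like this:

{
    "autoload": {
        "psr-0": {
            "Zend_": "library/",
            "MyApp_": "library/"
        },
        "classmap": [
            "application/models/"
        ]
    }
}

Run composer dump-autoload after adding this, and retire any legacy require_once-based autoloader so Composer’s autoloader takes over.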

Your application should have a single point of entry (e.g., index.php). If it doesn’t, you can create an index.php which (carefully and securely) examines the $_REQUEST object and runs the requested script.
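A minimal sketch of such a front controller follows – the pages/ directory and the page parameter are hypothetical, so adapt them to your own URL scheme:

<?php
// index.php - a single point of entry for a legacy app

// whitelist the requested script name: letters, digits,
// dashes and underscores only - no slashes or dots:
$page = $_REQUEST['page'] ?? 'home';
if( !preg_match( '/^[a-z0-9_-]+$/i', $page ) ) {
    http_response_code( 400 );
    exit( 'Bad request' );
}

$script = __DIR__ . '/pages/' . $page . '.php';
if( !is_file( $script ) ) {
    http_response_code( 404 );
    exit( 'Not found' );
}

require $script;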

Your application entry point should exist in a dedicated subdirectory such as public/ – i.e., the framework and other PHP files should not be exposed by your web server. This should be fairly easy to retrofit if not already in place.

The Migration

Step One: Install Laravel

The first step is to install the Laravel application base files alongside your existing application files. Begin by installing Laravel into a separate directory, following its own documentation, and then move the files over to your application root directory in a piecemeal fashion.

You will need to resolve any filename or directory conflicts, and you should do this by moving your own files out of the way and renaming or refactoring them rather than altering Laravel’s files. The level of effort here will be framework-dependent, but the good news is it was very easy for ZF1. I also looked at the file and directory structures for CodeIgniter and Symfony, and neither seems like it should pose any significant problems. Lastly, if you are running a custom or non-application framework (LibreNMS was in this category), you will still be able to use the technique I am demonstrating here. Continue reading and pay particular attention to moving (but keeping) your index.php in Step Two below.

When you complete the file moves as shown by example in Listing 1, examine any files remaining in the Laravel directory and move them if necessary/desired. Also, note the example was based on Laravel v5.7 so your mileage may vary for other versions.

# Get the Laravel files from GitHub:
git clone https://github.com/laravel/laravel.git

# Switch to the version of Laravel you want to migrate to:
cd laravel
git checkout vx.y.z

# Assuming you are in the new Laravel app directory above
# and your legacy application is located at ../legacyapp

# You can start to move the files as follows (and feel free
# to break this into smaller steps if there are conflicts):
mv app/ artisan bootstrap/ config/ database/ package.json \
   phpunit.xml resources/ routes/ server.php storage/     \
   tests/ webpack.mix.js    ../legacyapp

mv .env.example ../legacyapp/.env
mv public/js/app.js ../legacyapp/public/js
mv public/css/app.css ../legacyapp/public/css

# For now, we ignore public/index.php and we do not need
# any of composer.json, readme.md, vendor/ or CHANGELOG.md

As well as the base Laravel files, you also need the actual Laravel framework and supporting packages. Integrate the lines shown in Listing 2 into your composer.json file (ensuring you match this to your version of Laravel).

{
    "require": {
        "fideloper/proxy": "^4.0",
        "laravel/tinker": "^1.0",
        "laravel/framework": "5.7.*"
    },
    "require-dev": {
        "beyondcode/laravel-dump-server": "^1.0",
        "filp/whoops": "^2.0",
        "fzaninotto/faker": "^1.4",
        "mockery/mockery": "^1.0",
        "nunomaduro/collision": "^2.0",
        "phpunit/phpunit": "^7.0"
    },
    "autoload": {
        "psr-4": {
            "App\\": "app/"
        },
        "classmap": [
            "database/seeds",
            "database/factories"
        ]
    },
    "autoload-dev": {
        "psr-4": {
            "Tests\\": "tests/"
        }
    }
}

You should now run composer update to install Laravel and its dependencies. You should also examine the other sections of Laravel’s composer.json file including the config, extra, and scripts sections and copy them across.

Before you proceed any further, you should check that your legacy application continues to work as expected. While we have installed Laravel’s files and supporting libraries, we have not changed index.php so your application should run as it always has. If you have integration tests, they can really shine here. If you don’t, consider writing them as you port functionality over to Laravel. Diagnose and fix any issues now.

Step Two: Activate Laravel as the Default Framework

First, verify you successfully completed Step One: move your index.php out of the way (e.g., mv index.php legacy_index.php) and copy over Laravel’s own index.php to replace it. Ensure Laravel starts up instead of your legacy application – if it works, you will see the standard Laravel application welcome page. If it does not, diagnose and fix those issues now.

When finished, leave Laravel’s index.php in place. The handoff to the legacy framework will happen within the Laravel application and not index.php.
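For reference, Laravel v5.7’s stock public/index.php is short enough to reproduce here (comments trimmed – check the copy shipped with your version):

<?php

define( 'LARAVEL_START', microtime( true ) );

require __DIR__ . '/../vendor/autoload.php';

$app = require_once __DIR__ . '/../bootstrap/app.php';

$kernel = $app->make( Illuminate\Contracts\Http\Kernel::class );

$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$response->send();

$kernel->terminate( $request, $response );

Because the entry point lives in public/, the relative ../ paths resolve to your application root, where Step One placed bootstrap/ and where Composer created vendor/.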

Step Three: Hand Off to Legacy Framework

I have seen two ways of handing off to the legacy framework in use: the way we did it with IXP Manager, via a 404 error handler, and the way LibreNMS did it, using a catch-all route. I will show you both methods here, and you can choose which suits you.

Using a 404 Handler

In Laravel, if a route does not exist to handle a request, it throws a 404 exception. In Laravel v5.7, this gets handled in app/Exceptions/Handler.php:

class Handler extends ExceptionHandler {
    // ...

    public function render($request, Exception $exception)
    {
        return parent::render($request, $exception);
    }
}

We augment this render() function to handle 404 exceptions differently by handing them off to the legacy framework – here’s a skeleton example.

use Symfony\Component\HttpKernel\Exception\{
    NotFoundHttpException };

public function render($request, Exception $exception) {
   if( $exception instanceof NotFoundHttpException ) {
      // pass to legacy framework - contents of index.php
      die();
   }
   // all other exceptions render as normal:
   return parent::render($request, $exception);
}

Before we fill in the detail of “pass to legacy framework – contents of index.php” above, we need to decide how to actually hand off. We could just jam in the contents of legacy_index.php and it would work fine. But as we migrate more and more elements to Laravel, we’ll find various complications that make this unwieldy. A better way to handle the legacy framework within Laravel is to treat it as a service provider. For example, we could create a file app/Providers/ZendFrameworkServiceProvider.php as shown in Listing 3.

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class ZendFrameworkServiceProvider extends ServiceProvider{
   protected $defer = true;

   public function register() {
      $this->app->singleton("ZendFramework",function($app){

         // here are the contents of the legacy index.php:
         require_once "Zend/Application.php";
         $zf = new Zend_Application(
            $app->environment(), $this->createOptions()
         );

         return $zf->bootstrap();
      });
   }

   // a deferred provider must declare the services it provides:
   public function provides() {
      return [ "ZendFramework" ];
   }
}

IXP Manager’s actual production version of this can be seen here in our v4.8 GitHub tree. You should note we have completely removed Zend’s own configuration INI files at this point and instead take the configuration directly from Laravel’s config/ files. This is then passed into the legacy framework as an array. Our application only has one configuration mechanism (more on this later).

Also, to make require_once "Zend/Application.php" work, we installed the ZF1 library via Composer. As mentioned above, you can use classmaps, psr-0, or psr-4 within Composer to ensure Laravel can resolve your legacy application’s namespace.

Do not forget to register the new service provider in config/app.php:

   'providers' => [
      // ...
      App\Providers\ZendFrameworkServiceProvider::class,
      // ...
   ],

Now that we have our legacy framework service provider, we can return to the 404 exception handler’s (app/Exceptions/Handler.php) render() function and fill in the missing piece:

// Render an exception into a HTTP response
public function render( $request, Exception $exception ) {
  if( $exception instanceof NotFoundHttpException ) {
    // pass to legacy framework
    App::make("ZendFramework")->run();
    die(); // prevent Laravel sending a 404 response
  }
  return parent::render( $request, $exception );
}

There are some great advantages to using a service provider and putting Laravel first:

  • You can use all of Laravel’s facades immediately in your legacy code (e.g., Cache::, Queue::, Mail::, etc.) – see the sketch after this list.
  • You can migrate code on an action-by-action basis rather than controller by controller, or even have Laravel handle new action-based requests for existing legacy controllers.
  • You can eventually cleanly and simply remove the legacy framework by removing the 404 handler lines, the entry in config/app.php, legacy-related packages from composer.json, and the legacy service provider.
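For example, once Laravel boots first, a legacy ZF1 controller action can lean on Laravel’s cache immediately. A sketch, with a hypothetical cache key and legacy model call:

// within a legacy ZF1 controller action - the leading backslash
// reaches Laravel's facade aliases from the legacy code:
$members = \Cache::remember( 'members.list', 5, function() {
    // hypothetical expensive legacy query, cached for 5 minutes
    // (Laravel <= 5.7 takes minutes here; 5.8+ takes seconds):
    return MemberTable::getAllMembers();
});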

Using a Default Route

This is how the LibreNMS project handled the side-by-side migration. At the end of Laravel’s routes/web.php file, they added:

// Legacy Framework Routes
Route::any( "/{path?}", "LegacyController@index" )
    ->where( "path", ".*" );

This catches all routes not having a specific previous match in Laravel, in the same way the 404 handler does. They then hand off to the legacy framework in a controller (app/Http/Controllers/LegacyController.php) as follows:

namespace App\Http\Controllers;
class LegacyController extends Controller {
    public function index($path = "") {
        ob_start();
        include base_path("html/legacy_index.php");
        return response( ob_get_clean() );
    }
}

This will also work, but be aware you’ve entered Laravel’s HTTP kernel handling, and loaded and run all middleware associated with the web routes. This can be useful in some circumstances, but the 404 handler method will generally be more efficient.

Continuing the Migration

You can now proceed with the migration on a controller by controller basis (or action by action) along with the views and models as necessary.

Other Considerations

Parallel Configurations

For our own project, our users download, install, and maintain it themselves. As the in-place migration went on for two years, it would have been completely unreasonable – and downright confusing – to ask those users to configure and maintain settings in two different places and using two different methods (ZF1’s application.ini file and Laravel’s .env).

Instead, we chose to configure everything in Laravel from the beginning. In our ZendFrameworkServiceProvider, we then build an array using Laravel’s config() function in the same format ZF1 would have built when reading the application.ini file. This array is then passed as a parameter when instantiating Zend_Application within the service provider. We already provided a link to the production version of this file in GitHub above.
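As a sketch of the idea (the exact keys depend on what your ZF1 bootstrap expects; ours are in the production file linked above), createOptions() simply reshapes Laravel config values into the array Zend_Application would have parsed out of application.ini:

private function createOptions(): array
{
    // rebuild the structure ZF1 expects, sourcing everything
    // from Laravel's config/ files instead of application.ini:
    return [
        'phpSettings' => [
            'display_errors' => config( 'app.debug' ) ? 1 : 0,
        ],
        'resources' => [
            'db' => [
                'adapter' => 'pdo_mysql',
                'params'  => [
                    'host'     => config( 'database.connections.mysql.host' ),
                    'username' => config( 'database.connections.mysql.username' ),
                    'password' => config( 'database.connections.mysql.password' ),
                    'dbname'   => config( 'database.connections.mysql.database' ),
                ],
            ],
        ],
    ];
}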

If your application is an in-house enterprise system or a cloud-based hosted service, this may not be an issue for you. But if you expect your end users to install and configure the application, switching to use Laravel configuration only and passing that to the legacy framework is definitely the developer-friendly choice.

Session Management

I was quite worried about this one from the outset and had nightmares of the legacy framework and Laravel tripping over each other in PHP’s default session management system. Then, I discovered these comments within Laravel’s session middleware framework files:

// If a session driver has been configured, we will need to
// start the session here so that the data is ready for an
// application. Note that the Laravel sessions do not make
// use of PHP "native" sessions in any way since they are
// crappy.

I won’t start an argument on whether that statement is true or not, but from a migration point of view, it’s a really useful position for Laravel to take. Because Laravel implements its own cookie-based session management system, there are no conflicts with any other legacy framework’s sessions. It just works.

If you need to access the Laravel session in your legacy code, you can use the Session:: facade.

User Authentication

Frameworks typically handle user authentication using sessions. As Laravel has its own session management system, our goal is to ensure when a user has logged into one framework, they are logged into the other framework (and same for logging out).

We chose to leave the migration of the authentication controller until last – there was no particular reason for this, but it was either going to be the first thing we did or the last. In the end, we just felt it was one of the more complex systems, and it would be easier to start with some of the simpler controllers. This meant we needed to ensure we logged into Laravel if we were logged into ZF1 (and logged out as appropriate).

There are a few ways (and places) to handle this. We chose to add a block of temporary code to the top of routes/web.php as it is executed on every request and it is a file that is edited regularly so we could be confident we would also remember to remove it when the migration was complete. It looked like this:

if( php_sapi_name() !== "cli" ) {
    $auth = Zend_Auth::getInstance();
    if( $auth->hasIdentity() && Auth::guest() ) {
        // assumes the stored ZF1 identity yields the user's primary key:
        Auth::login( App\User::findOrFail( $auth->getIdentity()->getId() ) );
    } else if( !$auth->hasIdentity() && !Auth::guest() ) {
        Auth::logout();
    }
}

First, we do not run the code if we are running on the CLI (e.g., an Artisan command).

The if() statement says if we are logged into ZF1 but not Laravel, then log into Laravel. Conversely, the else if() says if we are not logged into ZF1, then ensure we are also logged out of Laravel.

When the time comes to plan the migration of the authentication system, it is an opportune moment to consider other enhancements including:

  • integrate Laravel Socialite which allows users to log in with OAuth providers such as Facebook, Twitter, LinkedIn, Google, GitHub, GitLab, and many more;
  • add 2-factor authentication;
  • add Log in As functionality which is useful for diagnosing issues as end users see them (see the viacreative/sudo-su package for a good example of this);
  • and, of course, upgrade password hashing to bcrypt/Argon2.

Duplication of Views/Templates

One of the more significant headaches of an in-place migration is having to duplicate your layout views (menus, headers, footers, etc.) and maintain both versions during the process. When you do this, you will want to keep the new Laravel view template layouts as close to the legacy ones as possible. This ensures your end users will not realize two frameworks are running the backend.

This doesn’t mean you can’t modernize the frontend libraries. For example, you could still upgrade from Bootstrap v2 to Bootstrap v4 and smooth out the differences with custom CSS.

Also, as you migrate actions and controllers, don’t forget to update links in both sets of views.

ORM/Database Model Migration

Laravel has a very nice ORM called Eloquent. It also has its own DBAL (database abstraction layer) on which Eloquent is built. As you migrate the legacy application, you will also need to consider how best to migrate the legacy database code.

If you have been using PHP’s mysql_* functions directly or have built up a custom library to wrap the usage of these functions, you should just bite the bullet and move to Eloquent as you migrate.
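As a sketch of that before and after (table, column, and model names hypothetical):

// legacy, direct mysql_* usage:
$result = mysql_query( 'SELECT * FROM customer WHERE id = ' . intval( $id ) );
$customer = mysql_fetch_assoc( $result );

// Eloquent equivalent, assuming an App\Customer model (add
// protected $table = 'customer'; if your table name is singular):
$customer = App\Customer::find( $id );

// and a simple filtered query:
$active = App\Customer::where( 'status', 'active' )
    ->orderBy( 'name' )
    ->get();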

Our situation with IXP Manager was a little different, as we had migrated to Doctrine2 in 2012, so we were already using a high-performance, modern ORM library. Rather than try to migrate this, we were fortunate that the Laravel Doctrine project provides a drop-in Doctrine2 implementation for Laravel 5+. This allowed us to use our Doctrine entities and repositories in both the legacy and Laravel frameworks in parallel.

Tracking Progress

Tracking progress is straightforward: watch the number of legacy controllers (or files) still to be migrated reduce with each iteration. As each action/controller is migrated, the legacy code should be removed – accompanied by a nice endorphin release when you commit and push deletions of legacy code!

The typical decision to migrate a controller was either:

  • no pressing feature requests so pick off the next controller and migrate it; or,
  • there was a required new feature in a legacy controller, so the feature was implemented in Laravel as part of the process of migrating the controller.

About 18 months into the IXP Manager migration project, we estimated we were about 75 percent of the way to removing ZF1. The remaining legacy controllers were mostly static code that was rarely touched. To bring it over the line, we put two months of concentrated effort into finishing the migration, while still not neglecting other smaller improvements, bug reports, and feature requests.

Summary

This article is a write-up of a talk I gave at the Laravel Live UK conference in 2018. Let me close with some encouragement: while migrating a large application to a new framework is a daunting and time-consuming task, it is possible. IXP Manager has roughly 85k lines of PHP code, and we got through it with a single full-time developer in a little over two years while still adding and improving features.

Please feel free to reach out to me on @barryo79 with comments and questions.

INEX’s Shiny New Route Servers

Copy of an article I wrote on INEX’s own blog, reproduced here for longevity – originally published here on April 10th, 2019.


In this article, we talk about the new route servers that we deployed across all three peering platforms at INEX during February 2019 and, particularly, RPKI support.

Most route server instances at internet exchanges (IXPs) perform prefix filtering based on route/route6 objects published in internet routing registries (IRRDBs). INEX members would be used to creating these through RIPE’s database. However, there are many other registries, and the data quality of some of these IRRDB objects is often poor, with problems relating to missing, stale, and incorrectly duplicated information.

A typical IRRDB entry would resemble the following:

route:          192.0.2.0/24
descr:          Example IPv4 route object
origin:         AS65500
created:        2004-12-06T11:43:57Z
last-modified:  2016-11-16T22:19:51Z
source:         SOME-IRRDB

RPKI

RPKI is a public key infrastructure framework designed to secure the internet’s routing infrastructure in a way that replaces IRRs with a database where trust is assigned by the resource holder. The equivalent of a route object in RPKI is called a ROA (Route Origin Authorisation). It is a cryptographically secure triplet of information which represents a route, the AS that originates it and the maximum prefix length that may be advertised. An example of an IPv4 and an IPv6 ROA would be:

( Origin AS,  Prefix,         Max Length )
( AS65500,    2001:db8::/32,  /48        )
( AS65501,    192.0.2.0/24,   /24        )

ROAs are typically created through your own RIR (so, RIPE for most INEX members). These RIRs are called trust anchors in RPKI. RIPE have created an extremely easy wizard for creating ROAs through the LIR Portal.

To implement RPKI in a router, the router needs to build and maintain a table of verified ROAs from the five RIRs/trust anchors. The easiest way of doing this is to use a local cache server which pulls and validates the ROAs from the trust anchors and uses a new protocol called RPKI-RTR to feed that information to routers. Currently there are three validators: RIPE’s RPKI Validator v3; Routinator 3000 from NLnetLabs; and Cloudflare’s GoRTR. INEX currently uses the first two.

RPKI validation of a route against the table of ROAs yields one of three possible results:

  • VALID: a ROA exists for the route and both the prefix length is within the allowed range and the origin ASN matches.
  • INVALID: a ROA exists for the route but either (or both) the prefix length is outside the allowed range and/or the origin ASN is different.
  • UNKNOWN: no ROA exists for the route.

UNKNOWN is a common response, as the RPKI database has only a fraction of the prefix coverage that the IRR databases have. We are now in a multi-year transition from IRR to RPKI route validation while ROAs are created.
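To make the three states concrete, here is a toy PHP sketch of origin validation against a ROA table (IPv4 only, and nothing to do with INEX’s production configuration, which implements this in Bird):

// classify a route against an array of ROAs, each of the form
// [ 'asn' => int, 'prefix' => string, 'len' => int, 'maxlen' => int ]
function rpkiValidate( string $prefix, int $len, int $originAsn, array $roas ): string
{
    $covered = false;
    foreach( $roas as $roa ) {
        // does this ROA's prefix cover the announced route?
        if( $len < $roa['len'] || !sameNetwork( $prefix, $roa['prefix'], $roa['len'] ) ) {
            continue;
        }
        $covered = true;
        // VALID requires a matching origin AND an acceptable length:
        if( $roa['asn'] === $originAsn && $len <= $roa['maxlen'] ) {
            return 'VALID';
        }
    }
    // covered by ROA(s) but none matched => INVALID; no ROA => UNKNOWN
    return $covered ? 'INVALID' : 'UNKNOWN';
}

// true if $ip falls within $network/$len (IPv4 only):
function sameNetwork( string $ip, string $network, int $len ): bool
{
    $mask = $len === 0 ? 0 : ( ~0 << ( 32 - $len ) ) & 0xFFFFFFFF;
    return ( ip2long( $ip ) & $mask ) === ( ip2long( $network ) & $mask );
}

// e.g., with only the ROA ( AS65501, 192.0.2.0/24, /24 ) loaded:
// rpkiValidate( '192.0.2.0', 24, 65501, $roas ) => 'VALID'
// rpkiValidate( '192.0.2.0', 24, 65999, $roas ) => 'INVALID'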

Bird V2

As well as RPKI support, we have also upgraded all route servers to Bird v2.

This is a significant rewrite to Bird which, for v1, maintained separate code and daemons for IPv4 and IPv6. Bird v2 merges these code bases and also introduces support for new SAFIs such as l3vpns / mpls.

Overall, the configuration changes required were minimal, and INEX continues to run separate Bird v2 daemons for IPv4 and IPv6. Route servers are CPU intensive, and separate daemons allow for maximum stability, keep the configuration clean, and fit into the existing deployment processes we have built up with IXP Manager.

Route Server Filtering Flow

Our work on the new route servers will be released to the community as part of IXP Manager v5 shortly. The new filtering flow is enumerated below. One of the key new features is that if any route fails a step, we use internal large community tagging to indicate this and the specific reason to our members through the IXP Manager looking glass (more on that later).

  1. Filter small prefixes (>/24 for IPv4, >/48 for IPv6).
  2. Filter martian / bogon ranges.
  3. Sanity check to ensure the AS path has at least one ASN and no more than 64.
  4. Sanity check to ensure the peer ASN is the same as the first ASN in the prefix’s AS path.
  5. Prevent next-hop hijacking (where a member advertises a route but puts the next hop as another member’s router rather than their own). We do allow same-AS’s to specify their other router(s).
  6. Filter known transit networks.
  7. Ensure that the origin AS is in the set of ASNs from the member’s AS-SET. See below for some additional detail on this.
  8. RPKI validation. If it is RPKI VALID, accept the route. If it is RPKI INVALID then filter it.
  9. If the route is RPKI UNKNOWN, revert to standard IRRDB filtering.

Regarding step 7 above, an AS-SET is another type of IRRDB database entry where a network which also acts as a transit provider for other networks can enumerate the AS numbers of those downstream networks. This is something RPKI does not yet support but it is being worked on – see AS-Cones.

Lastly, we have enhanced the BGP large community support to allow our members to request as-path prepends on announcements to specific members over the route servers. For these and more supported communities, see the INEX route server page here.

Bird’s Eye and the Looking Glass

As well as IXP Manager, INEX has also written and open sourced a secure microservice for querying Bird daemons called Bird’s Eye. IXP Manager uses this to provide a web-based looking glass into our route collectors and servers. We have recently released v1.2.1 of Bird’s Eye, which adds support for Bird v2.

We have greatly enhanced IXP Manager’s looking glass to support both Bird v2 and the large communities we use to tag filtered reasons. You can explore any of INEX’s route servers to see this yourself – for example this is route server #1 for IPv4 on INEX LAN1. When members log into IXP Manager they will also find a new Filtered Prefixes tool which will summarise any filtered routes across all 12 of INEX’s route server instances.

More Information

We have spoken about this at a number of conferences recently.

UI Tests with Laravel Dusk for IXP Manager

We use standard PHPUnit tests in IXP Manager for some mission-critical aspects. These take data from a test database filled with known sample data (representing a range of different member configurations). The tests then use this information to generate configurations and compare these against known-good configurations.

This way, we know that for a given set of data, we will get a predictable output so long as we haven’t accidentally broken anything in development.

But, as an end user, how do you know that what you stick in a web-based form gets put into the database correctly? And conversely, how do you know that form represents the database data correctly when editing?

This is an issue we ran into recently around some checkbox logic and a dropdown showing the wrong selected element. Such bugs are every bit as dangerous to mission-critical elements as the problems caught by the output tests we do with PHPUnit.

To test the frontend, we turn to Laravel Dusk – an expressive, easy-to-use browser automation and testing API. What this actually means is that we can write code like this:

$this->browse( function( $browser ) use ( $user ) {
    $browser->visit('/login')
        ->type( 'username', $user->email )
        ->type( 'password', 'secret' )
        ->press('Login')
        ->assertPathIs('/home')
        ->assertSee( 'Welcome to IXP Manager ' . $user->name );
});

We have now added Dusk tests for UI elements that involve adding, editing and deleting all aspects of a member interface, and all aspects of adding, editing and deleting a router object. Here’s an example of the latter:

[Animated GIF: Laravel Dusk running the router UI tests]

Doctrine2 with GROUP_CONCAT and non-related JOIN

Doctrine2 ORM is a fantastic and powerful object relational mapper (ORM) for PHP. We use it for IXP Manager to great effect, and as we only support MySQL, our hands are not tied to pure DQL-supported functions. We also use the excellent Laravel Doctrine project with the Beberlei extensions.

Sometimes time is against you as a developer, and when the documentation (and StackOverflow!) lacks the obvious solution you need, you end up solving what could be a single elegant query very inefficiently in code with iterative database queries. Yuck.

I spent a bit of time last night trying to unravel one very bad example of this where the solution would require DQL that could:

  1. group / concatenate multiple column results from a one-to-many relationship;
  2. join a table without a relationship;
  3. ensure the joining of the table without the relationship would not exclude results where the joint table had no matches;
  4. provide a default value for (3).

Each of these was solved as follows:

  1. via MySQL’s GROUP_CONCAT() aggregator. The specific example here is that a MAC address associated with a virtual interface can be visible on multiple switch ports. We want to present the switch ports to the user, and GROUP_CONCAT() allows us to aggregate these as a comma-separated concatenated string (e.g. "Ethernet1,Ethernet8,Ethernet9").
  2. Normally with Doctrine2, all relationships would be well-defined with foreign keys. This is not always practical and sometimes we need to join tables on the result of some equality test. We can do this using a DQL construct such as: JOIN Entities\OUI o WITH SUBSTRING( m.mac, 1, 6 ) = o.oui.
  3. This is as simple as ensuring you LEFT JOIN.
  4. The COALESCE() function is used for this: COALESCE( o.organisation, 'Unknown' ) AS organisation.

We have not yet pushed the updated code into the IXP Manager mainline, but the above referenced function / code is now replaced with the DQL query:

SELECT m.mac AS mac, vi.id AS viid, m.id AS id, 
    m.firstseen AS firstseen, m.lastseen AS lastseen,  
    c.id AS customerid, c.abbreviatedName AS customer,
    s.name AS switchname, 
    GROUP_CONCAT( sp.name ) AS switchport, 
    GROUP_CONCAT( DISTINCT ipv4.address ) AS ip4, 
    GROUP_CONCAT( DISTINCT ipv6.address ) AS ip6,
    COALESCE( o.organisation, 'Unknown' ) AS organisation

FROM Entities\MACAddress m
    JOIN m.VirtualInterface vi
    JOIN vi.VlanInterfaces vli
    LEFT JOIN vli.IPv4Address ipv4
    LEFT JOIN vli.IPv6Address ipv6
    JOIN vi.Customer c
    LEFT JOIN vi.PhysicalInterfaces pi
    LEFT JOIN pi.SwitchPort sp
    LEFT JOIN sp.Switcher s
    LEFT JOIN Entities\OUI o WITH SUBSTRING( m.mac, 1, 6 ) = o.oui

GROUP BY m.mac, vi.id, m.id, m.firstseen, m.lastseen, 
    c.id, c.abbreviatedName, s.name, o.organisation

ORDER BY c.abbreviatedName ASC

 

Evaluating zsh

I’ve always been a bash user, but I’ve recently decided to give zsh a whirl. It has some pretty useful features, such as path expansion and replacement (see this slideshare). And yes, I’m well aware of bash-completion, thank you very much.

It also has a nice ecosystem of expansions, including oh-my-zsh, with which I’m using plugins for git, composer (php), laravel5, brew, bower, vagrant, node and npm. I went with the agnoster theme, and for iTerm2 (my terminal application of choice) I installed the Solarized Dark and Light themes. Both work well with the agnoster theme. I also installed and use the Meslo font.

One issue I did find immediately is that things like file type colourisation with ls were not as good as bash. To resolve this, I installed the warhol plugin (as well as brew install zsh-syntax-highlighting grc). Now I find the output of ls, ping, traceroute, etc., all nicely coloured.

We use Dropbox with work and to keep my work and home office laptops in sync, I moved the configs into Dropbox and symlinked to them:

cd ~
mv .oh-my-zsh Dropbox/ 
mv .zshrc Dropbox
ln -s Dropbox/.oh-my-zsh
ln -s Dropbox/.zshrc

This all works really well. My bash aliases are fully compatible so I just pull them in at the end of .zshrc (source ~/.bash_aliases). Lastly – to prevent the prompt including my username and hostname on my local laptop, I set the following in .zshrc:

export DEFAULT_USER=barryo

So far, so happy.

PhpStorm and Xdebug – macOS / Homebrew

After many years of Sublime Text and, latterly, Atom, I’ve decided to give an integrated IDE another look – this time PhpStorm. I’ve always dropped them in the past as they tended to crash (hello Zend Studio) and were slow as hell (hello again Zend Studio). But so far so good – I’m only a couple of days into an evaluation license, but it’s fast (admittedly I have fast laptops – Intel i7s with four cores, PCIe SSD and 16GB RAM) and it’s yet to crash.

One of the key advantages of IDEs is integrated debugging, and this was ridiculously easy to set up with PhpStorm. I use Homebrew for PHP:

$ brew ls --full-name
 homebrew/completions/composer-completion
 homebrew/php/composer
 homebrew/php/php71
 homebrew/php/php71-intl
 homebrew/php/php71-mcrypt
 homebrew/php/php71-xdebug

I’ve then configured xdebug as follows:

$ cat /usr/local/etc/php/7.1/conf.d/ext-xdebug.ini
[xdebug]
zend_extension="/usr/local/opt/php71-xdebug/xdebug.so"
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9001
xdebug.remote_autostart=1
xdebug.idekey=PHPSTORM

If you’re not using Laravel’s Valet for local development then you should check it out immediately: https://laravel.com/docs/5.3/valet. If you are using it, issue a valet restart.

Port 9001 was chosen above as Valet tends to use 9000 itself. We now need to reconfigure PhpStorm to listen on this port: open preferences, type xdebug into the search box, find Languages & Frameworks -> PHP -> Debug in the left-hand navigation pane, and change the port to 9001.

That’s pretty much it for PhpStorm. They really mean zero-configuration debugging. When editing a project in the IDE, there’s a Start Listening for PHP Debug Connections toggle icon in the top left – it looks like a phone. Just turn it on.

The last thing we need is an easy way to enable Xdebug when testing our applications in the browser. Chrome has a very useful plugin for this: Xdebug-helper. Just install it, edit its options, and change the IDE from Eclipse to PhpStorm. You can now use it to start a debug session from within Chrome to your listening PhpStorm IDE.

Oh, just found this useful resource also covering similar topics with a CGI/CLI xdebug split.

Disk Usage on Windows under Windows\Installer – Useful Tools

Windows usage is mostly an occupational hazard for access to tools such as Microsoft Visio, VMware vSphere, proprietary VPN software, XenCenter, etc. We tend to use it on an as-needed basis via Parallels on macOS.

For a virtual machine with very little software, the assigned 60GB drive was full. Windows doesn’t display folder sizes in its File Explorer and so TreeSize Free proved a very useful tool for locating the space hogs.

This led me to a system (and hidden) directory: \Windows\Installer. It had amassed 30GB of orphaned update files – 50% of my allocated space. To clear this up, PatchCleaner worked a charm.

Good hunting.

Linux (Ubuntu 16.04), PHP and MS SQL

In the many years I’ve been using the traditional LAMP stack, I’ve successfully managed to avoid having anything to do with MS SQL Server. Until 2016. This year I’ve had to work quite a bit with it – administration, backups and, now, scripted queries from Linux with PHP.

I suspect (a) I’m lucky I haven’t had to do this before now; and (b) that Azure has pushed Microsoft into greater Linux-based support for MS SQL. The evidence? This open source Microsoft repository with an MS SQL PHP binary driver for Linux, released just a few months ago.

NB: installing the Microsoft PHP driver is different to installing the Microsoft ODBC driver for SQL Server on Linux. These may even be incompatible.

For me, I just took a standard Ubuntu 16.04 install (64-bit, obviously) with PHP 7.0 and downloaded the latest MS PHP SQL extension (at the time of writing, this was 4.0.6). When you untar the Ubuntu16.tar file, copy the .so files to /usr/lib/php/20151012/ and then create a /etc/php/7.0/mods-available/msphpsql.ini file with the contents:

extension=php_pdo_sqlsrv_7_nts.so
extension=php_sqlsrv_7_nts.so

Note that the tar also contains two ‘ts’ versions of these files. Trying to use those resulted in errors. Link this for Apache2 / CLI as required. E.g. for PHP CLI:

cd /etc/php/7.0/cli/conf.d/
ln -s ../../mods-available/msphpsql.ini 20-msphpsql.ini

You can confirm it’s working via:

$ php -i | grep sqlsrv
Registered PHP Streams => https, ftps, compress.zlib, php, file, glob, data, http, ftp, sqlsrv, phar
PDO drivers => sqlsrv
pdo_sqlsrv
pdo_sqlsrv support => enabled
pdo_sqlsrv.client_buffer_max_kb_size => 10240 => 10240
pdo_sqlsrv.log_severity => 0 => 0
sqlsrv
sqlsrv support => enabled
sqlsrv.ClientBufferMaxKBSize => 10240 => 10240
sqlsrv.LogSeverity => 0 => 0
sqlsrv.LogSubsystems => 0 => 0
sqlsrv.WarningsReturnAsErrors => On => On

And, finally, for using it, following the sample scripts from the repository worked a charm.
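For completeness, a minimal PDO connection sketch (the server name and credentials are obviously placeholders):

<?php
// connect via the pdo_sqlsrv driver installed above:
$db = new PDO(
    'sqlsrv:Server=mssql.example.com;Database=testdb',
    'username',
    'password'
);
$db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

// T-SQL uses TOP rather than LIMIT:
foreach( $db->query( 'SELECT TOP 5 name FROM sys.tables' ) as $row ) {
    echo $row['name'], "\n";
}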

A Brief History of IXP Manager

For another INEX project, I was asked to put together a timeline for IXP Manager – an open source application for managing Internet eXchange Points. Reproduced here:

IXP Manager was originally a web portal written in PHP by Nick Hilliard in 2005. It was a basic database frontend that just did fairly simple CRUD (Create, Read, Update, Delete) operations and allowed our members to log in and view their traffic usage graphs.

Around this was a ton of Perl scripts that sucked that data out of the database and created configuration files for route collectors, graphing, monitoring, etc.

The major achievement of Nick’s original system was the database design (the schema). The core of that schema is still the core of IXP Manager over 10 years later.

I joined INEX in 2007 and started to expand IXP Manager using what was becoming a more modern web development paradigm – Model/View/Controller with a framework called Zend Framework.

There wasn’t a grand plan here – it was just “as we needed” organic growth over the coming years.

In 2010 we decided what we had was actually pretty good and could be very useful for other IXPs. We got committee approval to open source the software, and we released IXP Manager V2 in 2010 under the GPL2 license (GNU General Public License v2).

This license essentially means anyone can use the software free of charge but also that they should contribute back improvements that they may make. The idea being that INEX would eventually benefit from other IXPs contributing to the project.

Open sourcing a project doesn’t mean it’ll be successful though! What we didn’t do in 2010 was put infrastructure around it such as: presentations at IXP conferences, mailing lists for user support, decent documentation, etc.

We corrected all that and re-released an updated version called IXP Manager v3 in 2012. This time it took off! We also started collaborating with ISOC (The Internet Society) around this time to help start-up IXPs (mainly eastern Europe and Africa) use IXP Manager.

Some established IXPs also contributed money towards development of missing features – most notably LONAP in the UK – and these new features fed back into INEX.

We’ve worked hard on v3 and it’s developed well since with many new features and improvements. Sometime in late 2016 – maybe even this month – we’ll release v4 which is a major leap forward again and should hopefully attract new users and developers.

INEX is very well regarded in the IXP community as an exchange that is well run and both operates and teaches best practice. All of what we’ve learnt running a good exchange has fed into IXP Manager and it helps those IXPs that use it to implement those same good practices. IXP Manager has helped raise INEX’s reputation even further.

Lately we’ve begun to realise that as a small team we can’t do it all ourselves – the more exchanges that use it, the more requests for help and features we receive and as a result, new developments take a back seat.

To try and improve this, we launched a new website in 2016 – http://www.ixpmanager.org/ – and issued a call for sponsorship so we could hire a full-time developer. The ‘we’ here, by the way, is mine and Nick’s own company – Island Bridge Networks. We’re doing this on a purely cost-recovery basis. I’m delighted to say we’ve just about reached our funding goal, with three top-line sponsors all contributing about €20k each – ISOC, Netflix and SwissIX. The hiring process has now begun!

I’m also delighted to say that there are 33 exchanges around the world using IXP Manager – that we know of!

Personal Profile for INEX

I was asked to write a personal profile for INEX in <= 300 words. Reproduced here.

If you want to confuse Barry, ask him where he’s from: born in Cork, he spent his formative years in Galway and married into Dublin. He got an honours degree in Maths, his first love, from NUI Galway in 2001 and went on to do four years of research in information theory in UCD’s Computer Science department.

In 2005 he took a job with imag!ne to help build their ADSL broadband service from scratch to a position where it supported tens of thousands of subscribers. Barry branched out on his own in 2007 when he formed his own consultancy business, Open Solutions, which continued working with imag!ne as well as building up a portfolio of network, VoIP and web application development customers.

It was 2008 when Nick Hilliard, INEX’s CTO, approached Barry to provide a couple of days’ operational support to INEX. Little did Barry realise what a huge part of his life INEX would become – both here in Ireland, supporting INEX’s infrastructure and membership, and as part of the larger European and international IXP community.

Barry is the lead developer of IXP Manager (http://www.ixpmanager.org/) – a full-stack management system for IXPs which includes an administration and customer portal; provides end-to-end provisioning; and both teaches and implements best practice. INEX is very proud to say that this project is now in use at over 33 IXPs and has grown legs of its own, with the wider community sponsoring a full-time developer.

INEX has always been happy to help other IXPs and, through our relationship with the Internet Society (ISOC), RIPE and Euro-IX, Barry has travelled to countries with a less developed internet infrastructure to advise on best practice, has delivered a number of IXP Manager workshops and contributes to policy development.