So you need to log changes to some / all database records: who made them, when, and what was added / changed / deleted? An easy and effective way to do this with Laravel Eloquent models is via a custom Observable trait. On any model that you wish to track changes for, you then just add the trait to the model (and an optional static function to set the message format):
use App\Traits\Observable;
Let’s start with the migration we need for a table in the database to record all these changes:
public function up() {
    Schema::create('logs', function (Blueprint $table) {
        $table->id();
        $table->unsignedBigInteger('user_id')->nullable();
        $table->string('model', 100);
        $table->string('action', 7);
        $table->text('message');
        $table->json('models');
        $table->timestamps();
        $table->foreign('user_id')->references('id')->on('users');
    });
}
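The trait below passes an array for the models json column, so the Log model itself needs mass-assignment and cast settings to suit. A minimal sketch of such a model (the $fillable and $casts properties here are my assumptions – adapt as needed):

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Log extends Model
{
    // allow the trait to mass-assign these columns via Log::create()
    protected $fillable = [ 'user_id', 'model', 'action', 'message', 'models' ];

    // serialise the models payload to / from the json column
    protected $casts = [ 'models' => 'array' ];
}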
The Observable trait looks like:
namespace App\Traits;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Auth;
use App\Models\Log;

trait Observable
{
    // bootObservable() is called automatically when the model boots
    public static function bootObservable() {
        static::saved(function (Model $model) {
            // create or update?
            if( $model->wasRecentlyCreated ) {
                static::logChange( $model, 'CREATED' );
            } else {
                if( !$model->getChanges() ) {
                    return;
                }
                static::logChange( $model, 'UPDATED' );
            }
        });

        static::deleted(function (Model $model) {
            static::logChange( $model, 'DELETED' );
        });
    }

    public static function logChange( Model $model, string $action ) {
        Log::create([
            'user_id' => Auth::check() ? Auth::user()->id : null,
            'model'   => static::class,
            'action'  => $action,
            'message' => static::logSubject($model),
            'models'  => [
                'new'     => $action !== 'DELETED' ? $model->getAttributes() : null,
                'old'     => $action !== 'CREATED' ? $model->getOriginal()   : null,
                'changed' => $action === 'UPDATED' ? $model->getChanges()    : null,
            ]
        ]);
    }

    /**
     * String to describe the model being updated / deleted / created
     * Override this in the model class
     * @return string
     */
    public static function logSubject(Model $model): string {
        return static::logImplodeAssoc($model->attributesToArray());
    }

    public static function logImplodeAssoc(array $attrs): string {
        $l = '';
        foreach( $attrs as $k => $v ) {
            $l .= "{ $k => $v } ";
        }
        return $l;
    }
}
So, again, just use this trait in any model and you have full logging of changes to the database.
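For example, on a hypothetical Customer model (both the model and the logSubject() override here are purely illustrative):

namespace App\Models;

use App\Traits\Observable;
use Illuminate\Database\Eloquent\Model;

class Customer extends Model
{
    use Observable;

    // optional: override the default message format for this model's log entries
    public static function logSubject(Model $model): string {
        return sprintf( "Customer [id:%d] %s", $model->id, $model->name );
    }
}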
You’ll find complete files for the above and an example usage with the User class on this GitHub Gist.
We call IXP Manager’s statistics and graphing architecture Grapher. It’s a backend-agnostic way to collect and present data. Out of the box, we support MRTG for standard interface graphs, sflow for peer-to-peer and per-protocol graphs, and Smokeping for latency / packet loss graphs. You can see some of this in action on INEX’s public statistics section.
Internet Exchange Points (IXPs) play a significant role in national internet infrastructures and IXP Manager is used in nearly 100 of these IXPs worldwide. In the last couple of weeks we have received a number of queries from those IXPs asking for suggestions on how they can extract traffic data to address queries from their national governments, regulators, media and members. We just published our own analysis of this for traffic over INEX here.
Grapher has a basic API interface (documented here) which we use to help those IXP Manager users address the queries they are getting. What we have provided to date are mostly quick rough-and-ready solutions but we will pull all these together over the weeks (and months) to come to see which of them might be useful permanent features in IXP Manager.
How to Use These Examples
The code snippets below are expected to be placed in a PHP file in the base directory of your IXP Manager installation (e.g. /srv/ixpmanager) and executed on the command line (e.g. php myscript.php).
Each of these scripts needs the following header, which is not included below for brevity:
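A representative sketch of that header – it pulls the log of traffic samples from the Grapher API as JSON (the exact endpoint and query parameters here are indicative only; consult the Grapher API documentation for the full set of options):

<?php

require 'vendor/autoload.php';

use Carbon\Carbon;

// pull a year of aggregate traffic samples from the Grapher API as JSON
// (indicative parameters - see the Grapher API documentation)
$api  = 'https://www.inex.ie/ixp/grapher/ixp?period=year&type=log&category=bits';
$data = json_decode( file_get_contents( $api ), true );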
We’ve placed a working API endpoint for INEX above – change this for your own IXP / scenario.
Data Volume Growth
An IXP was asked by their largest national newspaper to provide daily statistics of traffic growth due to COVID-19. For historical reasons linked to MRTG graph images, the periods in IXP Manager for this data are such that: day is the last 33.3 hours; week is the last 8.33 days; month is the last 33.33 days; and year is the last 366 days.
This is fine within IXP Manager when comparing averages and maximums as we are always comparing like with like. But if we’re looking to sum up the data exchanged in a proper 24hr day then we need to process this differently. For that we use the following loop:
$start  = new Carbon('2020-01-01 00:00:00');
$bits   = 0;
$last   = $data[0][0];
$startu = $start->format('U');
$end    = $start->copy()->addDay()->format('U');

foreach( $data as $d ) {
    // if the row is before our start time, skip
    if( $d[0] < $startu ) { $last = $d[0]; continue; }

    if( $d[0] > $end ) {
        // if the row is for the next day, break out and print the data
        echo $start->format('Y-m-d') . ','
            . $bits/8 / 1024/1024/1024/1024 . "\n";

        // and reset for the next day
        $bits   = $d[1] * ($d[0] - $last);
        $startu = $start->addDay()->format('U');
        $end    = $start->copy()->addDay()->format('U');
    } else {
        $bits += $d[1] * ($d[0] - $last);
    }

    $last = $d[0];
}
The output is comma-separated (CSV) with the date and the data volume exchanged in that 24 hour period (in TBs via the /8 /1024/1024/1024/1024 divisors – bits to bytes to terabytes). This can, for example, be pasted into Excel to create a simple graph.
The elements of the $d[] array mirror what you would expect to find in an MRTG log file (but the data unit depends on the API request – e.g. bits/sec, pkts/sec, etc.):
$d[0] – the UNIX timestamp of the data sample.
$d[1] and $d[2] – the average incoming and outgoing transfer rate in bits per second. This is valid for the time between the $d[0] value of the current entry and the $d[0] value of the previous entry. For an IXP where traffic is exchanged, we expect to see $d[1] roughly the same as $d[2].
$d[3] and $d[4] – the maximum incoming and outgoing transfer rate in bits per second for the current interval. This is calculated from all the updates which have occurred in the current interval. If the current interval is 1 hour, and updates have occurred every 5 minutes, it will be the biggest 5 minute transfer rate seen during the hour.
Traffic Peaks
The above snippet uses the average traffic values and the time between samples to calculate the overall volume of traffic exchanged. If you just want to know the traffic peaks in bits/sec on a daily basis, you can do something like this:
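A minimal sketch of such a script (it reuses $data and Carbon from the header above; the exact variable names and structure here are my own rather than the original snippet):

$start = new Carbon('2020-01-01 00:00:00');
$peak  = 0;
$end   = $start->copy()->addDay()->format('U');

foreach( $data as $d ) {
    // skip samples from before our start time
    if( $d[0] < $start->format('U') ) { continue; }

    if( $d[0] > $end ) {
        // day boundary crossed - print this day's peak and reset
        echo $start->format('Y-m-d') . ',' . $peak / 1000/1000/1000 . "\n";
        $peak = 0;
        $start->addDay();
        $end = $start->copy()->addDay()->format('U');
    }

    // track the largest in / out maximum seen so far today
    $peak = max( $peak, $d[3], $d[4] );
}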
The output is comma-separated (CSV) with the date and the peak traffic rate seen in that 24 hour period (in Gbps via the /1000/1000/1000 divisor). This can also be pasted into Excel to create a simple graph.
Import to Carbon / Graphite / Grafana
Something that is on our development list for IXP Manager is to integrate Graphite as a Grapher backend. Using this stack, we could create much more visually appealing graphs as well as time-shift comparisons. In fact, this is how we created the graphs for this article on INEX’s website.
To create this, we need to get the data into Carbon (Graphite’s time-series database). Carbon accepts data via UDP so we used a script of the form:
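A script of roughly this shape does the job, writing each sample using Carbon’s plaintext protocol ("metric.path value timestamp") – the metric path, host and port here are placeholders (2003 is Carbon’s default plaintext port, and the UDP listener must be enabled in Carbon’s config; PHP’s sockets extension is required):

<?php

// send each traffic sample to Carbon as a plaintext-protocol UDP datagram
$sock = socket_create( AF_INET, SOCK_DGRAM, SOL_UDP );

foreach( $data as $d ) {
    // "metric.path value unix-timestamp\n"
    $msg = "ixp.inex.bits_in " . $d[1] . " " . $d[0] . "\n";
    socket_sendto( $sock, $msg, strlen( $msg ), 0, '127.0.0.1', 2003 );
}

socket_close( $sock );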
The Carbon / Graphite / Grafana stack is quite complex so unless you are familiar with it, this option for graphing could prove difficult. To get up and running quickly, we used the docker-grafana-graphite Docker image. Beware that the default graphite/storage-schemas.conf in this image limits data retention to only 7 days.
There’s a very interesting package called calebporzio/sushi for Laravel that allows one to use arrays as Eloquent drivers / sources of data. @calebporzio posted his own example of using this to front API results here.
It’s a very interesting proof of concept for this use case (it probably needs more work and more knobs for production use). So interesting that I had a quick look myself with a bare-bones Laravel app:
$ laravel new test-sushi
$ cd test-sushi
$ composer require calebporzio/sushi
$ composer require kitetail/zttp
$ php artisan make:model IxpdbProviders
The only interesting part of the model, IxpdbProviders, is the getRows() function:
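In sketch form, it looks something like this (the IXPDB endpoint URL is indicative – check the IXPDB API documentation for the correct one):

public function getRows()
{
    // assumes "use Zttp\Zttp;" at the top of the model
    $providers = Zttp::get( 'https://api.ixpdb.net/v1/provider/list' )->json();

    // Sushi requires flat rows, so strip any sub-arrays from each record
    return array_map( function( $provider ) {
        return array_filter( $provider, function( $value ) {
            return !is_array( $value );
        });
    }, $providers );
}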
The array_map() is required to remove sub-arrays (sub-objects) within the response, as Sushi requires flat rows.
I used Zttp out of curiosity rather than using Guzzle directly.
Sushi then takes the array of IXPs (the result of the API call) and stores these in a dedicated in-memory SQLite database for the duration of the request.
We can now query this as if it were a typical database table:
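For example, via artisan tinker (assuming the default App namespace; the where() clause is illustrative and depends on the columns the API actually returns):

$ php artisan tinker
>>> App\IxpdbProviders::count();
>>> App\IxpdbProviders::where( 'name', 'like', '%INEX%' )->first();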
We’ve just released IXP Manager v5.3.0. The headline features in this release are two-factor authentication (2fa) and user session management. This blog post gives an overview of the PHP elements of how we did that.
While IXP Manager is a Laravel framework application, it uses Doctrine ORM as its database layer via the Laravel Doctrine bridge. For those curious, this really is a carry over from when IXP Manager was a Zend Framework application. For the migration, we concentrated on the controller and view elements of the MVC stack leaving the model layer on Doctrine. Over time we’ll probably migrate the model layer over to Laravel’s Eloquent.
Before reading on, it would be useful to first read the official documentation we have written around 2fa and user session management.
Hopefully this description of how we did it will be useful for anyone else in the same boat, or for anyone just trying to understand the Laravel authentication stack.
Two factor authentication (2fa) strengthens access security by requiring two methods (also referred to as factors) to verify your identity. Two factor authentication protects against phishing, social engineering and password brute force attacks and secures your logins from attackers exploiting weak or stolen credentials.
User session management allows a user to be logged in and remembered from multiple browsers / devices and to manage those sessions from within IXP Manager.
For 2fa, we used the antonioribeiro/google2fa-laravel package which is built on antonioribeiro/google2fa. If we were 100% within Laravel’s ecosystem this would have been easier, but because we use Doctrine, we needed to override a number of classes.
Structurally we need a database table to indicate if a user has 2fa enabled and to hold their 2fa secret – for this we created Entities\User2FA. Similarly, we have a controller to handle the UI interaction of enabling, configuring and disabling 2fa: User2FAController – this also includes generating QR codes for the typical 2fa activation process.
On the user session management side, we created Entities\UserRememberToken to hold multiple tokens per user (rather than Laravel’s default single token in a column of the user’s database entry). For the frontend UI, UserRememberTokenController allows a user to view their active sessions and invalidate (delete) them if required.
The actual mechanism of enforcing 2fa is via middleware: IXP\Http\Middleware\Google2FA. This is added, as appropriate, to web routes via the RouteServiceProvider. This will check the user’s session and if 2fa is enabled but has not been completed, then the middleware will enforce 2fa before granting access to any routes covered by it.
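Conceptually, such middleware boils down to something like the following sketch (the method name, session key and route name here are illustrative only, not IXP Manager’s actual implementation):

namespace IXP\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class Google2FA
{
    public function handle( Request $request, Closure $next )
    {
        $user = $request->user();

        // if 2fa is enabled for this user but this session has not yet
        // completed it, divert to the one-time password form
        if( $user && $user->is2faEnabled() && !$request->session()->get( '2fa_complete' ) ) {
            return redirect()->route( '2fa.form' );
        }

        return $next( $request );
    }
}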
Note that because we also implemented user session management via long-lived cookies, and because the fact that a user has passed 2fa or not is held in the session, we need to persistently store that fact in the user’s specific remember token database entry. This is done via the Google2FALoginSucceeded listener. It is then later checked in the SessionGuard – where, if we log a user in via the long-lived cookie, we also mark them as having passed 2fa if so set.
Speaking of the SessionGuard, this was one of the bigger changes we had to make – we overrode the Illuminate\Auth\SessionGuard as we needed to replace a few functions to make 2fa and user session management work. We have kept these to a minimum:
The user() function – Laravel’s long-lived session uses a single token, but we require a token per device / browser. We also need to side-step 2fa for existing sessions as discussed above, and allow for features such as letting a user delete other long-lived sessions and having those sessions expire.
The above constitutes the bulk of the changes. Because 2fa can be enforced via middleware, it doesn’t really touch the core Laravel authentication process. The user session management was more invasive and responsible for most of the changes required in the DoctrineUserProvider and SessionGuard.
While Vue.js’ popularity continues to sky-rocket, there are alternatives for when you want to keep the declarative style but Vue.js is far too heavyweight for smaller requirements.
Stimulus is a JavaScript framework with modest ambitions. It doesn’t seek to take over your entire front-end—in fact, it’s not concerned with rendering HTML at all. Instead, it’s designed to augment your HTML with just enough behavior to make it shine. Stimulus pairs beautifully with Turbolinks to provide a complete solution for fast, compelling applications with a minimal amount of effort.
A very recent new framework is Alpine.js, which uses the tag-line “think of it like Tailwind for JavaScript” – which, as a huge Tailwind fan, I find very intriguing.
Alpine.js offers you the reactive and declarative nature of big frameworks like Vue or React at a much lower cost.
I’ve just finished Something in the Water – How Skibbereen Rowing Club Conquered the World by Kieran McCarthy. It’s excellent.
You’d see this book on the shelf and be a little put off – how much do we really need to know about Paul and Gary O’Donovan? But this book is only partly about them – it’s about the club, the town and its people, and how they built a club and an environment that could produce an Olympic medal-winning crew.
The book weaves the story of Skibbereen Rowing Club from its humble beginnings to the powerhouse in Irish rowing that it is now. The author does this by moving back and forth over the time line in a way that kept me enthralled throughout.
Kudos to Mercier Press as well – as book covers go, this one is beautiful. I do most of my reading on Kindle these days but I was given the ‘real’ book at Christmas and it’ll have a place of pride on the bookshelf. I look forward to dipping back in again in the future.
I rowed for ‘the Bish’ – my secondary school rowing club, formally known as St Joseph’s Patrician College, Galway – from 1992 to ’97. The Irish Junior National Championships of 1997 feature in this book because it was the first time that Skibb won a national junior championship with a crew of 8 – the premier junior title. It was nice to relive it – but it also stung: we came fourth in that race, just outside of medal contention. We thought we were going to win it – but then I guess every crew thinks that.
Kieran probably didn’t realise it, but my partner in the bow pair of that 8 was Alan Martin. Alan – who, besides being an incredible athlete, is one of the most genuine and nicest people you’ll ever meet – gets a number of mentions in the book, as he rowed in mixed crews that included Skibb rowers and was also the sub for the Irish heavyweight 4 at the Beijing Olympics.
The book captures the joy and pain of rowing superbly. How beautiful and calm it can look from the bank, while the rowers’ muscles can be burning and their lungs ready to explode inside the boat. I’m probably not painting the best picture there but it’s a truly wonderful sport. In looking around for some of my old races while writing this, I came across a draft history of the Bish club which included a quote from a former member:
Lest people should think that rowing is all about winning I hasten to disabuse them of that idea. Winning is sweet and is usually only the just return on investment in hard work and discipline.
Secretly I believe that what rowing is all about is being on the river on a flat calm day in early Summer, the boat is sitting up well, the calls of the water birds all about, the smells of growing things in the nostrils, and being part of that camaraderie forged of mutual dependence and trust that is reserved for oarsmen.
Frank Cooke
Thanks Kieran for the trip down memory lane.
Some other resources I found while looking back:
Irish Rowing Archives – lots of treasures here including scanned in copies of programmes.
As installed on Ubuntu 19.10, Kamailio v5.3 will not work out of the box with MySQL 8 due to changes in the way in which users are created and privileges granted between MySQL 5.x and 8.
To fix this, edit /usr/lib/x86_64-linux-gnu/kamailio/kamctl/kamdbctl.mysql as follows:
# diff /usr/lib/x86_64-linux-gnu/kamailio/kamctl/kamdbctl.mysql.orig /usr/lib/x86_64-linux-gnu/kamailio/kamctl/kamdbctl.mysql
163,164c163,166
< sql_query "" "GRANT ALL PRIVILEGES ON $1.* TO '${DBRWUSER}'@'$DBHOST' IDENTIFIED BY '$DBRWPW';
< GRANT SELECT ON $1.* TO '${DBROUSER}'@'$DBHOST' IDENTIFIED BY '$DBROPW';"
---
> sql_query "" "CREATE USER '$DBRWUSER'@'$DBHOST' IDENTIFIED BY '$DBRWPW';
> CREATE USER '$DBROUSER'@'$DBHOST' IDENTIFIED BY '$DBROPW';
> GRANT ALL PRIVILEGES ON $1.* TO '${DBRWUSER}'@'$DBHOST';
> GRANT SELECT ON $1.* TO '${DBROUSER}'@'$DBHOST';"
172,173c174,177
< sql_query "" "GRANT ALL PRIVILEGES ON $1.* TO '$DBRWUSER'@'localhost' IDENTIFIED BY '$DBRWPW';
< GRANT SELECT ON $1.* TO '$DBROUSER'@'localhost' IDENTIFIED BY '$DBROPW';"
---
> sql_query "" "CREATE USER '$DBRWUSER'@'localhost' IDENTIFIED BY '$DBRWPW';
> CREATE USER '$DBROUSER'@'localhost' IDENTIFIED BY '$DBROPW';
> GRANT ALL PRIVILEGES ON $1.* TO '$DBRWUSER'@'localhost';
> GRANT SELECT ON $1.* TO '$DBROUSER'@'localhost';"
181,182c185,188
< sql_query "" "GRANT ALL PRIVILEGES ON $1.* TO '$DBRWUSER'@'$DBACCESSHOST' IDENTIFIED BY '$DBRWPW';
< GRANT SELECT ON $1.* TO '$DBROUSER'@'$DBACCESSHOST' IDENTIFIED BY '$DBROPW';"
---
> sql_query "" "CREATE USER '$DBRWUSER'@'$DBACCESSHOST' IDENTIFIED BY '$DBRWPW';
> CREATE USER '$DBROUSER'@'$DBACCESSHOST' IDENTIFIED BY '$DBROPW';
> GRANT ALL PRIVILEGES ON $1.* TO '$DBRWUSER'@'$DBACCESSHOST';
> GRANT SELECT ON $1.* TO '$DBROUSER'@'$DBACCESSHOST';"
The above worked fine for me but do note:
Make sure the database and users do not already exist on the database (or delete them if they do).
Use a different username for the read-only and read-write users.
MySQL 8 has a bug so issue FLUSH PRIVILEGES if you have trouble manually removing a user.
I had the pleasure of giving a talk at HEAnet’s National Conference 2019 last Friday on Ireland’s internet history as seen from INEX’s perspective. HEAnet is a founding member of INEX and one of our greatest supporters. They were the first to order a 10Gb port back when those were new and shiny; and again the first to order a 100Gb port when they became available in 2015. Both of these were collaborative efforts allowing us each to get familiar with the new technology.
Ireland’s internet history – especially the dial-up era – has many fascinating stories. I was of school-going age when this all kicked off, but there are some excellent recent projects covering the era which are well worth a bedtime read.
The History of the Irish Internet – internethistory.ie – by Niall Richard Murphy. As well as telling his own story, Niall sat down with luminaries of that era including INEX’s own Nick Hilliard and Barry Rhodes.
The TechArchives project which collects stories about Ireland’s long and convoluted relationship with information technology and preserves them. This is done through personal testimonies and includes people such as Barry Flanagan who formed one of Ireland’s first dial-up ISPs from his garage in Galway and gave me my start in the ISP industry; and Barry Rhodes whose history with Ireland’s internet starts long before INEX.
For INEX’s 20th anniversary, we undertook a project to record the history of the exchange which can be found here – it also includes some personal reflections from those involved in its early days.
Single-page applications (SPAs) are web-based applications that rewrite the current browser DOM rather than doing full page reloads. They look and feel responsive and crisp but are pretty complex to write – or at least differently complex: the balance of developer knowledge moves from backend templates and view logic to fairly heavy frontend JavaScript. It’s also quite hard to migrate traditional web-based applications to the SPA model.
Some of the more popular SPA frameworks include Vue.js with Vue Router; Ember.js; and AngularJS. For anyone coming across this for the first time, Vue.js looks really interesting.
There’s a new framework called Inertia.js that works with Laravel and tries to bridge the gap between the traditional full-page-reload model and the new SPA model. Jonathan Reinink’s stated goal with it is:
I wanted to blend the best parts of classic server-side apps (routing, controllers, and ORM database access) with the best parts of single-page apps (JavaScript rendering and no full page reloads).
There’s also a second new framework in this between-two-houses mould, but still quite different: Livewire. It’s best to look at the code to see how it works – it really is different, but also very interesting.