I’ve been rewriting my Auroras.live app in Ionic 4, and I decided to give the @ionic/vue beta a try. The documentation is severely lacking, which is understandable given that it’s a beta. I’ve also not used Ionic since whatever version was available in 2016, when I wrote version 1 of my app, so I had to learn a whole bunch of new things.

One thing I had a LOT of difficulty with was tabs. I could get tabs to show up, but as soon as I clicked on one, the whole page, tab bar and all, would be replaced with whatever tab I had clicked on.

After much hair-tearing, I came across a solution in the Ionic Conference demo app. It turns out any tab pages need to be children of my tabs page.

So this wouldn’t work:

export default new IonicVueRouter({
  mode: 'history',
  base: process.env.BASE_URL,
  routes: [
    {
      path: '/',
      name: 'home',
      component: Home
    },
    {
      path: 'about',
      name: 'about',
      component: About
    }
  ]
});

But this would:

export default new IonicVueRouter({
  mode: 'history',
  base: process.env.BASE_URL,
  routes: [
    {
      path: '/',
      name: 'home',
      component: Home,
      children: [
        {
          path: 'about',
          name: 'about',
          component: About
        }
      ]
    }
  ]
});

Now, to build the rest!

I’m looking into BotMan to revamp my existing Messenger-based chatbot, and I had a lot of trouble getting BotMan to listen. When using BotMan Studio it’s easy, but I didn’t want to create a whole new Laravel app just to use the software, so I followed BotMan’s integration documentation.

I installed the package, followed the instructions, but nothing was happening. Then after much fiddling, I came across a few things that got it working:

  • The DialogFlow token should be your Client Access Token, which you can find by going into the settings for your DialogFlow agent:
The settings button for DialogFlow
  • You need to add the Laravel Cache to the BotMan Factory:

namespace App\Http\Controllers;

use Illuminate\Http\Request;

use BotMan\BotMan\BotMan;
use BotMan\BotMan\BotManFactory;
use BotMan\BotMan\Middleware\DialogFlow;
use BotMan\BotMan\Cache\LaravelCache; // Add this

class AuroraBotController extends Controller
{
  public function handleIntent(Request $request)
  {
    // And change this line, so the factory gets the Laravel cache
    $botman = BotManFactory::create(config('aurorabot.botman', []), new LaravelCache(), app()->make('request'));
    $dialogflow = DialogFlow::create(config('aurorabot.apikey', null))->listenForAction();
    $botman->middleware->received($dialogflow);

    $botman->hears('get_current_kp', function (BotMan $bot) {
      // Handle the intent here
    })->middleware($dialogflow);

    $botman->listen();
  }
}


Recently I deployed my Laravel app to a droplet on DigitalOcean using the LAMP image. I was scratching my head for a bit because after deployment, I couldn’t get the site to work. I’d just continue to see the default site.

I then realised that I had to disable the default site, as it was conflicting with the new site I’d enabled. So I ran

sudo a2dissite 000-default

And the issue was resolved.

I also found that Laravel would throw errors about not being able to write to logs when attempting to access static resources (e.g. storage/images/test.png) or pages like /login, so I needed to:

  • Generate an application key to get the login page to load via php artisan key:generate
  • Assign the new user I had created for the site to the www-data group
  • Run php artisan storage:link
  • chmod the storage folder to 755

Once I had those set up, the site worked as normal. Next step: setting up mod_pagespeed.

The other day I bought a 5v USB relay from eBay for a project I’m working on. The ultimate goal is to use it on a Raspberry Pi in a build where the GPIO pins are already being used by a touch screen. For just shy of $5 AUD with free shipping, it was a good buy I reckon.

The eBay listing says it comes with a Windows DLL for controlling it, but it never did (just the device in one of those silvery electrostatic bags and no download links in the listing). Plus the listing seemed to suggest the DLL was closed source and who needs that kind of negativity in their life? So I looked around to see if anyone else had worked with this.

Turns out they’re a fairly common device manufactured by “www.dcttech.com” and several people have written libraries that control these. The relay is basically a HID device. You send a feature report along with the relay you wish to activate and the state, plus some padding bytes, and you can control it without some unknown DLL file.

And turns out there’s an excellent cross-platform node.js library called node-hid. Give it the path or VID / PID of your device and you can read, write, set feature reports, get feature reports, and so on. Brilliant!

But no matter what I tried, I couldn’t get the damn thing to operate. In the end I “brute forced” it by writing a for loop that incremented some of the bytes until the relay clicked on. Slowing it down with a setInterval told me which bytes to send to turn it on and off. To turn it on, send 0xFE (254) to Report 0. To turn it off, send 0xFC (252) to Report 0. The 8th byte of Report 0, when you request the feature report, is 0 for off and 3 for on. In other words, zero = off, nonzero = on.

So here’s a short demo node.js app that will find the first USBRelay device and toggle it every second.

Note that this has only been tested on the one-relay board. However, byte 3 when sending should be the relay number you wish to control.
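A sketch of what that demo can look like, using the node-hid library and the bytes above. The product-string match, the 9-byte report length, and the exact byte layout are assumptions based on the description, not the original code:

```javascript
// Build the feature report for the board. As described above: 0xFE = on,
// 0xFC = off, sent to report 0. Byte 3 carries the relay number, which
// should matter on multi-relay boards (assumption: only the one-relay
// model was tested). The 9-byte report length is also an assumption.
function relayReport(on, relayNumber = 1) {
  const report = new Array(9).fill(0x00); // report[0] is report ID 0
  report[1] = on ? 0xFE : 0xFC;           // command byte
  report[2] = relayNumber;                // byte 3: relay to switch
  return report;
}

// Find the first USBRelay board and toggle it every second
function main() {
  const HID = require('node-hid'); // npm install node-hid
  const info = HID.devices().find(d => (d.product || '').startsWith('USBRelay'));
  if (!info) throw new Error('No USBRelay board found');
  const relay = new HID.HID(info.path);
  let on = false;
  setInterval(() => {
    on = !on;
    relay.sendFeatureReport(relayReport(on));
    console.log(on ? 'Relay on' : 'Relay off');
  }, 1000);
}

// Call main() with the board plugged in:
// main();
```

sendFeatureReport and devices() come straight from node-hid; if your board doesn’t enumerate with a USBRelay product string, match on vendorId / productId instead.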

At work, we have a captive portal that non-domain-joined machines must authenticate to at least once a day in order to use the internet. Our firewall lets you set exceptions to that rule, but it’s cumbersome and doesn’t always work.

This poses a problem when setting up a new Raspberry Pi, because Raspbian Jessie Lite doesn’t have lynx installed by default, and you can’t easily use cURL to log in (because of session tokens and such). The solution is to proxy the internet connection from the Pi to the computer and authenticate on the computer.

This’ll be a short post, mostly for my own reference later, but I’ll be using PuTTY for this.

  1. Connect to the Pi over SSH. Go into the connection settings for PuTTY. Expand Connection > SSH and click on Tunnels.
  2. For the source port, type 1080. Leave destination blank, but set the radio buttons to “Dynamic” and “Auto”. Click “Add”
  3. Open up Internet Explorer. Go to Tools > Internet Options > Connections > LAN Settings
  4. Tick “Use a Proxy”. Clear out the address and port, and click “Advanced”
  5. Untick “Use the same proxy server for all protocols”
  6. Under the Socks section, set the address to localhost and the port to 1080
  7. In Internet Explorer, try and load a website. Authenticate to the captive portal
  8. Back in PuTTY, connect to your Pi and check that the internet is working by doing something like installing a package, wget-ting a file and so forth.


I’ve just installed GitLab on a server at work, in an attempt to unify all our separate projects that are stored in various locations. I’ve been a big fan of Git (or rather, GitHub) since I created my first project about 4 years ago. Now everything I do, including various parts of my Aurora Hunting Website, is maintained via Git. Deploying new code to servers is dead simple when you throw git into the fray.

I work in a department that is a mix of programmers, technical-but-not-programmers and definitely-non-technical people. This makes it a bit tricky to explain git in a way that non-technical people can understand. So I’m going to offer an EXTREMELY basic and naive view of what git is. Most things here are going to be anger-inducingly wrong, but this is for someone who has almost zero technical experience.

I’m impatient! Give me the too long; didn’t read version!

Git is a system that tracks who changes files, when they were changed and what changed in them. Changes are sent to a git server that keeps a record. People can make copies of those files, make changes, and send those changes back to be included in the record. It’s mostly used to keep track of who changed code and to allow better collaboration

So what is git?

Imagine you’re walking down the road and decide to keep track of all the neat things you find along the way. This is git. Git watches a folder or a series of folders and takes notice of what files have changed.

When you’ve seen a few cool things (or made a few changes to files), you might want to write them down. This is where a git server comes in.

How do I keep track of the stuff I’ve seen?

You’ll need a place to log the cool stuff you’ve seen so far, so you buy a notebook and write everything in it. It’s a permanent log of what you saw and when.

This is what a git server is. When you’ve lined up enough changes to files, you can push those details to the server for permanent storage. Git servers keep a record of who changed what file and when, plus what was changed in it, and offer a centralised place to store those changes.

What if I go for a walk elsewhere?

Sure you could use the same book again, but it’d make sense to have more than one book. Maybe one for walks through the forest, and maybe one for walks through the city? In the git world, this is a repository, and you’d create a new one for each project you were working on.

Makes sense. So why would I want to use git?

Say you’ve got three people working on a single project. What happens if someone changes a file and everything falls into a crashing heap? Who did it? When? What did they change to break it all? Because the git server keeps a record of all this, you can see what was changed by who and when.

Git isn’t just for code either. It can keep track of things like photos and documents, though it won’t be able to show you what changed (because photos and documents with complex formatting and such aren’t as easy to compare as text is).

So multiple people can change files?

Yes! Once those files are on the git server, people with the right permissions can come in and make a copy of those files. There are a few ways to make a copy:


Cloning

Imagine you’ve invited a friend to join you on your walk. Each person is looking out for cool stuff, but you share the same notebook. When you clone, you make a copy of the files, but each change you send back is recorded under that repository.


Branching

When you make a new branch, you give your friend a new section in your notebook, and they take notes on their own page when they walk. At any point in time, you can stop and copy what they have, into the main part of your book. When you branch, you’re still operating from the same repository. You’d typically use branches to test new things and fix issues, then when you’re done testing and fixing, you can pull the changes from that branch, back into the main (master) branch.


Forking

This is where git gets good. You’re walking down the road, and your friend decides to walk on a different path. You go one way, your friend goes the other. You make an exact copy of your book, and give it to them. Along the way you see some cool things, and you write them down in your separate books.

When you fork on a git server, you’re creating a separate repository. It contains all the changes made up until the point you forked it, but any new changes made go into that forked repository (meaning any changes made to the original repository are NOT added to your change list, and any changes you make are not added to the original change list).

What happens if my friend wants to rejoin me?

Your friend is done walking down their path, so they skip across the field and rejoin you. They want to share with you all the cool stuff they’ve seen so you can write it in your (master) book. In git terms, this is a “pull request”, because the forker or the brancher wants the original repository to pull in the changes they’ve made. The owner of the original repository can say yes, no or ask that changes be made before it’s accepted.

After a pull request is done, the fork can be deleted, or used to make more changes and do another pull request in the future.

Pull requests can be done between branches too, so in order to get your friend’s section of the book merged into the main section, they need to ask you to do it, because it’s your book.

Note that a pull request is one-way. So when your friend gives you all their new information, they don’t automatically get your new information. They would need to explicitly request that. This means that if your goals and their goals change (e.g. they want to focus on cars instead of birds, while you want to focus on birds only), they can give you information without fear of getting a ton of info about birds that they don’t want.

And what about conflicts?

So you’ve met back up and compared notes, but there’s a problem. You both saw the same bird, but your descriptions of it vary. They say the bird was white with blue spots, but you say the bird was black with no spots. Who is right? There needs to be a compromise.

Similar things happen on the git server. If you make a change to file A, and someone makes a change to file A in their fork or branch and tries to merge, which version is correct?

When there is a conflict, git changes the file to include BOTH of the changes. The owner of the original repository needs to go through and resolve the conflict by changing the file so only one of the two changes is in there. Once all conflicts have been resolved, the merge can go ahead.

In the case of our bird, you might put your foot down and say “no, the bird was definitely black”, but you might agree that you made a mistake, and that it had blue spots, and so you amend your notes. Now that that’s resolved, you can copy their info into your book.

Ignoring important stuff

While you’re out walking, you might not want to log everything you see. Maybe you’re only interested in birds. Or maybe you want to log all animals, except those sitting in a person’s yard. In git, you’d create a special file that tells it what to ignore. So if you have a file with passwords in it, you can tell git to ignore that file. Or entire folders, or a mix of both.
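In practice, that ignore list lives in a file called .gitignore at the top of the repository. A minimal sketch (the file and folder names here are just examples):

```
# Ignore a single file
passwords.txt

# Ignore an entire folder
secret-stuff/

# Ignore every log file, wherever it is
*.log
```

Git stops reporting changes to anything that matches these patterns, so secrets and clutter never make it into the record.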

Final Thoughts

Hopefully this has been useful for you. If you need to explain git to a non-technical person, this can help. If I’ve missed something, feel free to leave a comment!

I’m writing a Facebook Messenger bot and needed to send the degrees character to the user. The multitude of different ways to do it (\u00b0 etc.) didn’t work for me, but here’s a way that did:

// The rest of the messenger code here
$strToSend = "It's 13" . utf8_encode("°") . "C outside";
// And your code to send that message on

A short post, but hopefully it helps someone.

My app, auroras.live, has been out in the app stores for about two months now. There are two versions available, a free and a paid version. Previously I was maintaining three GitHub branches — Master, Free and Paid. I’d make the changes in master, then make a PR to sync free and paid, then edit the config.xml in the respective repos so the app would detect and use the appropriate version.

After a while, this got tedious because I’d have to ensure all three branches were in sync, except for the config.xml file (which got reformatted each time a plugin was added), so I gave up on the idea. Gulp seemed like a great fit for all of this, so I whipped up a quick gulpfile that does a few things for me:

  • Sets the app name (e.g. Auroras.live Free or Auroras.live)
  • Sets the app ID (e.g. live.auroras.app.free or live.auroras.app)
  • Copies the correct icon file, then runs ionic resources to generate the proper icons
  • Builds the production version of the app
  • Signs the APK with jarsigner, then runs zipalign.

All I need to do is call gulp build-android-free or gulp build-android-paid and it’s all done. No more manually editing config files, no more copying files around. It’s easy! Want this for your own app? The code is below:

All you need to do is:

  • Run npm install --save xmldoc in addition to the other dependencies for Ionic’s default gulpfile
  • Edit gulpfile.js and replace the defaults at the top of the file with your own.
  • Go into your resources folder and make two icons: icon_free.png and icon_paid.png.
  • Call either gulp build-android-free --storepass mykeystorepassword or gulp build-android-paid --storepass mykeystorepassword
  • You can also call this script with a few parameters:
    • --packageid – Sets the package ID
    • --packagename – Sets the package name
    • --jarsigner – Path to jarsigner
    • --zipalign – Path to zipalign
    • --keystore – Path to your keystore file
    • --keystorealias – The alias of your keystore


I’m writing my first Cordova / Ionic Android and iOS app, and ran into an issue when submitting to the iOS app store. In order to submit your app, you need screenshots. And not the “minimum of 320px on the longest side” type of screenshot where you can submit almost anything, but the “must be exactly X and Y dimensions” type where you need (up to) 5 screenshots per device type.

This was a problem for me, because while I have a Mac, it’s slow as a wet week. It’s a 2009 unibody Mac that I bought off eBay. I went for the cheapest Mac I could find because I just needed it to deploy to my test device (an iPhone 4 I also bought off eBay) and fire off production builds when I was ready.

Because it’s so slow, running the simulator on it is nigh on impossible. It takes forever to start up and click on stuff, so I ruled that out. I then came across snapshot, part of the fastlane set of tools. This would let me create a new UI test, then edit the code to take snapshots when I needed to. It still relied on the simulator in the end, but it was automatic, so I could let it run overnight.

But I had to rule that out as well, because Cordova apps aren’t fully supported. I’d click on some elements in my app while recording a UI test, and they’d just get logged as generic ‘element.tap()’ events that did nothing when played back. Plus it required changes to the xcode project that would have just been overwritten when I next built the app, unless I wrote a hook to make this stuff for me. With no (easy) way around all that, I turned to “faking” the screenshots using a browser.

Google Chrome (and in fact, almost every other browser out there) has a set of developer tools that can emulate various screen sizes. The iPhone 4 emulation is nearly pixel perfect compared to an actual iPhone 4, so I decided to go for that. I fired up Ionic’s “ionic serve” command (or Cordova’s equivalent) and went to town.

The app in two sizes: iPhone 4 on the left, custom size on the right. Whoops!

The problem is, the size required for the 3.5″ screens is 640×920 and Chrome’s iPhone 4 preset gave me screenshots that were half that size. So I added a manual preset that was 640×920. But then my previews were off, because media queries kicked in and it was showing my app differently to how it really was.

Zooming did nothing in Chrome, and overriding the media queries was going to be a not-worth-the-hassle type of difficult. So I turned to Firefox.

Firefox gave me some success, because I could set a custom size, then zoom the window in 200% so in theory, I’d have a window size of 640×920, but the content doubled in size to negate the media queries. But when I clicked the screenshot button, I got a screenshot that didn’t honor the zoom settings in the way I expected, so I was left with a screenshot that was 320×460.

After literally hitting my head against the desk and trying six different screenshot tools, and thinking I’d have to resort to using Windows’ screenshot button and stitching the results in Photoshop, I finally nailed it.

There’s a second, hidden screenshot button in Firefox’s dev tools. You have to turn it on under the settings menu, but it gives you a screenshot of the viewport as it actually appears. I finally had a way to get screenshots at the correct resolution AND the correct view!

Now I was left with one last minor issue: How do I quickly sort out the screenshots?

So I did what I do best: write a script to automate that shit!



Just run that in node.js (in the same folder as your downloads) and start taking screenshots. Anything it finds, it’ll compare the image dimensions and shove everything into the right folders.

A few minutes later, I had all the screenshots I needed and I was able to submit my app for approval by the end of the night. Ahh progress!