Bicycle Dashcam Part 4: New Hardware

I was reading an article about Oak Vision Modules on Hackaday, and thought, wow, this is the PERFECT platform for my bicycle dashcam. The Oak Vision Module is a Kickstarter project combining camera modules, depth mapping using stereo vision, and a processor designed to accelerate machine vision (the Intel Movidius Myriad X), all in one package for US$149 – see https://www.kickstarter.com/projects/opencv/opencv-ai-kit/

Then, at the 3:55 mark in the marketing video, I see the board mounted to a bicycle saddle, which is EXACTLY what I want to do:

Luxonis OAK-D vision module prototype as seen in their Kickstarter video

I went to see what I could find about the developers, and read about them on TechCrunch:
“The actual device and onboard AI were created by Luxonis, which previously created the CommuteGuardian, a sort of smart brake light for bikes that tracks objects in real time so it can warn the rider. The team couldn’t find any hardware that fit the bill so they made their own, and then collaborated with OpenCV to make the OAK series as a follow-up.”

This is pretty exciting – CommuteGuardian is the first project I’ve come across with goals similar to mine: preventing and deterring car–bicycle accidents. I exchanged a few emails with Brandon Gilles, the Luxonis CEO, and he shared some background – they also checked out OpenALPR and started work on mobile phone implementations, but decided to move to a custom board when the Myriad X processor was launched.

You can read more about CommuteGuardian here:
https://luxonis.com/commuteguardian and
https://discuss.luxonis.com/d/8-it-works-working-prototype-of-commute-guardian

I decided to back Luxonis’ Oak project. I’ll have to learn some new tools, but this board will be much faster than the Pi for image analysis (far better than the 1 frame per 8 seconds I’m getting now!). The stereo vision capabilities of the Oak-D will allow for depth mapping, a capability I had previously been considering adding a LIDAR sensor for. I’m looking forward to receiving my Oak-D, hopefully in December. In the interim, I’ll continue to experiment with different license plate recognition systems, read more about the tooling I can use with the Oak-D, and perhaps try a different camera module on the Pi.

Bicycle Dashcam Part 3: More field testing

On a sunny mid-June Saturday, I took my bike for a ride down Yonge St to Lake Ontario with my bicycle dashcam, testing my latest changes (May 18th). Over the course of a 2 hour ride, taking a photo about every 10 seconds:

  • Reviewing the photos with my own eyes, I count about 45 images with readable plates (not every image was usable or had a car in it)
  • Of these 45, OpenALPR can make out about 10

Sign misidentified as license plate
OpenALPR picked out this dry cleaning sign as license plate “2 H0UR”

I’m going to try running these photos through alternate ALPR engines, and compare results.

On this run, I tested the Pi Camera V2’s various sensor modes: the streaming modes at 1920×1080 30 fps, 3280×2464 15 fps, 1640×1232 40 fps, 1640×922 40 fps, 1280×720 41 fps, and 1280×720 60 fps, as well as the still mode at 3280×2464. Further testing is likely still required, but I continue to get the best results from still mode – all of the successful matches were shot using it.

Dashcam Successfully Recognizes License Plate

I’m getting better results than I had on previous runs, as a result of tweaking the pi-camera-connect NodeJS library (see the sketch after this list) to:

  • use a 5 second capture delay, which allows exposure time, gain, and white balance to be determined
  • set the exposure mode to sports, which reduces motion blur by preferentially increasing gain rather than exposure time
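
Since pi-camera-connect is ultimately a front end for raspistill, the tweaks above boil down to two extra command line flags. Here’s a minimal sketch of the idea as a direct raspistill invocation from Node – the resolution and file path are just for illustration, not my actual code:

const { execFile } = require('child_process');

// Capture a still with the two tweaks described above: a 5 second --timeout
// so exposure, gain, and white balance can settle, and --exposure sports,
// which favours gain over shutter time to reduce motion blur.
function captureStill(path) {
  return new Promise((resolve, reject) => {
    execFile('raspistill', [
      '--width', '3280',
      '--height', '2464',
      '--timeout', '5000',
      '--exposure', 'sports',
      '--output', path
    ], (err) => (err ? reject(err) : resolve(path)));
  });
}

captureStill('/tmp/capture.jpg').then((file) => console.log('Captured', file));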

However, the images are still not as good as I would like.

Rolling shutter issues are apparent in some photos taken by the Pi Camera V2 while in motion

The Plate Recognizer service has an excellent article on Camera Setup for ALPR. It highlights many of the challenges I’m seeing with my setup:

  • Angle, lighting
  • 8 MP is suggested for highway or street monitoring – the Pi Camera is sufficient in this regard
  • Zoom – I think this is a challenge in my setup – I liked the idea of capturing everything around me, but I think I have to reduce the area I capture to get a more detailed view of the plate. Perhaps focus on my “7 o’clock” rather than capturing everything behind me.
  • At 30 mph (~48 km/h), which probably covers most bike riding in traffic, they suggest capturing at least 3 to 5 frames of the plate, at 15-25 frames per second.

I might order the latest Pi camera with the zoom lens and see if I get better results.

Reviewing the data from my ride, there’s also an issue with my code that pulls the GPS coordinates from the phone, which I didn’t see when testing at home. I suspect the phone is locking while I ride and no longer running the JavaScript – I’ll try the NoSleep.js library before my next test run.
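
If it is the lock screen, NoSleep.js should be a small fix. A minimal sketch, assuming NoSleep.js is loaded via a script tag and wired to a hypothetical start button – browsers only honour wake locks from a user gesture:

// Assumes NoSleep.js has been loaded via a <script> tag, exposing the
// NoSleep global. enable() must be called from a user gesture, so it is
// hooked to a button tap before starting the ride.
const noSleep = new NoSleep();

document.getElementById('start-ride').addEventListener('click', () => {
  noSleep.enable();  // screen stays awake until noSleep.disable() is called
});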

Bicycle Dashcam Part 2: Camera upgrade + GPS

In February, I saw Robert Lucian’s Raspberry Pi based license plate reader project on Hackaday. His project is different, in that he wrote his own license plate recognition algorithm, which runs in the cloud – the Pi feeds images to the cloud for processing. He got great results – 30 frames per second, with 1 second of latency. This is awesome, but I want to process on the device – I want to avoid cellular data and cloud charges. Once I get this working, I’ll look at improving performance with a more capable processor, like the NVIDIA Jetson or the Intel Neural Compute Stick.

In any case, Robert was getting great images from his Pi – so I asked him how he did it.  He wrote me back with a few suggestions – he is using the Pi camera in stream mode 5 at 30 fps.

I wondered if one of the issues was my Pi Camera (v1), so I ordered a version 2 camera (just weeks before the HQ camera came out!). The images I was getting still weren’t great. I’m using the pi-camera-connect package; here’s what I’ve learned so far:

  • The best documentation I’ve seen for the Pi camera can be found at https://picamera.readthedocs.io/en/release-1.13/
  • Some of the modes are capable of higher frame rates, but results may be poor. Start with 30 fps
  • In stream mode, streamCamera.startCapture() must be called 2-3 seconds before streamCamera.takeImage() (see the sketch after this list)
  • There are a number of parameters not exposed by the pi-camera-connect package, but ultimately, this package is just a front end for raspistill and raspivid. All the settings can be tweaked in the source. Specifically, increase the --timeout delay for still images. I also want to experiment with the --exposure sports setting.
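
Putting the stream-mode notes together, here’s roughly what a capture looks like with pi-camera-connect. This is a sketch from my reading of the library’s README – double-check the option and codec names; in particular, I believe individual frames can only be grabbed from an MJPEG stream:

const { StreamCamera, Codec } = require('pi-camera-connect');

async function grabFrame() {
  const streamCamera = new StreamCamera({
    width: 1640,
    height: 922,   // sensor mode 5 on the V2 camera
    fps: 30,       // start with 30 fps, per the note above
    codec: Codec.MJPEG
  });

  await streamCamera.startCapture();

  // Give the camera 2-3 seconds to settle before grabbing a frame
  await new Promise((resolve) => setTimeout(resolve, 3000));

  const image = await streamCamera.takeImage();  // a Buffer with the JPEG
  await streamCamera.stopCapture();
  return image;
}

grabFrame().then((image) => console.log('Got', image.length, 'bytes'));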

I still have more tweaking to do with the Pi Camera 2.  If after a few more runs, I don’t get the images I need, I’ll try the HQ camera or try interfacing with an action camera.

Finally, I’ve added GPS functionality. If you access the application with your phone while you’re riding, the application will associate your phone’s GPS coordinates with each capture.
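
The browser side of this is small. A sketch of the idea using the HTML Geolocation API – the /location endpoint is made up for illustration, not the actual route in my code:

navigator.geolocation.getCurrentPosition((position) => {
  // Send the phone's coordinates back to the Pi to tag the capture
  fetch('/location', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      latitude: position.coords.latitude,
      longitude: position.coords.longitude,
      timestamp: position.timestamp
    })
  });
}, (err) => console.error('Geolocation error:', err.message));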

Source: https://github.com/raudette/plates

Bicycle Dashcam Part 1

iOS Safari’s WebSockets implementation doesn’t work with self-signed certs

I’m building a Node application, hosted on a Raspberry Pi, that will not be connected to the internet. A user interfaces with the application through the browser on their phone. The application asks the browser for its GPS coordinates using the HTML Geolocation API.

In iOS, the HTML Geolocation API only works for HTTPS sites. I found an excellent post on Stack Overflow about creating a self-signed cert that works in most browsers. I created the cert and added it to my desktop and phone. HTTPS worked great.

I first tried the Node ws WebSocket library, with the Node application calling out to the browser to fetch GPS coordinates when it needed them.

The application worked great in Firefox and Chrome, but not in the iOS browser. If I dropped to HTTP (vs HTTPS) and WS (vs WSS), it worked fine. For some reason, the iOS browser accepted the cert for HTTPS, but not for WSS. Unfortunately, I needed HTTPS to use Geolocation.

I couldn’t get it to work, so I ended up moving my application to Socket.IO, which falls back to HTTPS polling if a websocket connection cannot be established. This worked for my scenario. If you need websocket-like capability and have to use self-signed certificates on iOS, try Socket.IO.
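
For reference, here’s a minimal sketch of the server side – Socket.IO wrapped around a Node HTTPS server using the self-signed cert. The file names, port, and event names are placeholders, not my actual code:

const fs = require('fs');
const https = require('https');

// Socket.IO negotiates the best transport it can. On iOS Safari with a
// self-signed cert, the websocket upgrade fails and it falls back to
// HTTPS long-polling automatically.
const server = https.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
});
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  socket.emit('getLocation');  // ask the browser for its GPS coordinates
  socket.on('location', (coords) => console.log('Got coordinates:', coords));
});

server.listen(8443);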

Building A TV Remote

After only a couple of uses, I decided the Alexa-powered TV remote I built earlier this year was not very useful. In my experience, there are few cases where voice control makes sense, and powering on my TV is not one of them. So I set out to build my own physical remote.

I built my own Arduino on a prototype board, using a circuit I had used before: the RRRRRRRRRRBBA “really bare bones Arduino” design. As I want it to last for a while on a set of batteries, I wired the buttons into an interrupt line. The microcontroller sleeps until a button is pressed; it then wakes up and sends the corresponding signal to the infrared LED. It will be interesting to see how long the batteries last – I’m hoping at least 6 months!

TV Remote Circuit

I laugh at the idea that my remote control runs at 16 MHz – 16x as fast as the C64 of my childhood.

TV Remote Circuit Board

I picked out a project box and some interesting buttons at my local electronics shop. The buttons are quite deep, which doesn’t lend itself to a thin remote. My remote has 4 controls: Power, Mute, Volume Up, Volume Down.

Completed TV Remote

You can download the code here: ToshibaRemote.ino

Controlling an older TV with Alexa

Background

I have an old TV, acquired used, without a remote, and its power button has become a little finicky. Rather than going out and buying a new TV, or a universal remote, I thought it would be fun to build one. I had a Sparkfun ESP8266 dev board that wasn’t being used, and an Amazon Echo Dot in the same room as my TV, so I decided to make an Amazon Alexa-controlled remote rather than build a physical one with buttons.

As I mentioned in my last post, I’ve built Alexa skills before; to do this, you need a web service that Amazon’s servers can reach – something on the public Internet that can handle HTTPS. I wanted that service, the one handling the Alexa calls, to control the ESP8266 in my home. I don’t like opening up ports on my home firewall, so I had to figure out how to control the ESP8266 on my home network the way off-the-shelf IoT devices do: have the device make an outbound connection to the service and wait for commands. I hadn’t done this before, but I knew it was possible.

There seem to be a number of services that make this easy – I came across Blynk and PubNub. Blynk, in particular, seemed like a good choice, as there’s sample code for the ESP8266 micro-controller I had decided to use. But in the end, I decided to use web sockets.

Building the solution

Alexa IR Remote System Diagram

Most of the Alexa Skills tutorials are built on Amazon’s Lambda serverless compute service. But using websockets with Lambda seems to require using Amazon’s API Gateway, and I decided that was too much to take on for this particular project.

I set up the new skill on the Alexa portal, configured to call a VM I built out on Azure (I had free credits available there!). On this VM, I hosted a Node program built with Alexa’s ask-sdk-core library and the websockets/ws library, which provides the web socket server handling the connection from the ESP8266.
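
Here’s a hedged sketch of that bridge – the intent name, command strings, and port are illustrative, not the actual values from my skill:

const WebSocket = require('ws');

// Web socket server the ESP8266 connects out to from my home network
const wss = new WebSocket.Server({ port: 8081 });

// Send a command to every connected client (see the authentication
// caveat below - all clients receive all commands)
function broadcast(command) {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(command);
  });
}

// ask-sdk-core style intent handler, registered with the skill builder
// via skillBuilder.addRequestHandlers(TurnOnTvHandler)
const TurnOnTvHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'TurnOnTvIntent';
  },
  handle(handlerInput) {
    broadcast('POWER');  // the ESP8266 maps this to the TV's IR power code
    return handlerInput.responseBuilder
      .speak('Turning on the TV')
      .getResponse();
  }
};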

On the ESP8266, I used the ArduinoWebSockets library to build the web socket client. Getting the ESP8266 to send IR commands to the TV was super simple. I connected an infrared LED to pin 4 of the ESP8266. An article I read suggested using a transistor to increase the current supplied to the LED, so I did. I used the IRremoteESP8266 library for IR, and found the IR codes for my TV on http://irdb.tk/.

ESP8266 IR Remote Circuit
Alexa IR Remote on Breadboard

There is no authentication mechanism built into web sockets. Since I’m not scaling the service and am just playing around, I decided not to worry about it – if you use the code below, beware! The side effect: every web socket client connected to the service receives every command sent through the skill.

Solution in action

Here are some videos of the solution in action:

Turn on TV
Turn up volume

If you’re interested in trying something similar, you can check out the code here: https://github.com/raudette/AlexaTVRemote

Alexa skill, written in Node JS, using Express, with ask-sdk-express-adapter

In 2018, after reading an article on Hackaday, I picked up an Amazon Echo Dot to experiment with building voice interfaces. It was surprisingly easy, and with no experience, I got something up and running in a couple hours.

I haven’t looked at this in a while, and had another project in mind. Looking at the Alexa development documentation today, all the examples leverage Amazon’s Lambda compute service. For my project, I didn’t want to use Lambda – I just wanted to use Express on Node JS. Amazon has an NPM library for this, ask-sdk-express-adapter, but I couldn’t find ANY end-to-end example, and I struggled for a bit to get it to work. I think it took me longer the 2nd time around!

SO – here’s a simple example, hopefully it’s got the right keywords for anyone who’s stumbling on the same problem. Keywords:

  • node js
  • javascript
  • ask-sdk-express-adapter
  • express
  • sample code
  • example code
  • alexa
const express = require('express');
const { ExpressAdapter } = require('ask-sdk-express-adapter');
const Alexa = require('ask-sdk-core');
const app = express();
const skillBuilder = Alexa.SkillBuilders.custom();

// Environment variables are case-sensitive: use PORT, not port
const PORT = process.env.PORT || 8080;

// Responds to the skill's LaunchRequest ("Alexa, open <skill name>")
const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speechText = 'Hello World - Your skill has launched';

        return handlerInput.responseBuilder
            .speak(speechText)
            .reprompt(speechText)
            .withSimpleCard('Hello World', speechText)
            .getResponse();
    }
};

skillBuilder.addRequestHandlers(
    LaunchRequestHandler
);

const skill = skillBuilder.create();

// The two booleans disable request signature and timestamp verification -
// handy for testing, but set them to true for anything real
const adapter = new ExpressAdapter(skill, false, false);

// Note: no body-parser middleware here - the adapter handles the raw body
app.post('/', adapter.getRequestHandlers());

app.listen(PORT);

Hope that helps!

Playing around with Hugo and different ways of hosting content

When I initially built out this blog, I:

  • wanted a content management tool. I didn’t want to be writing pages in HTML
  • wanted to host it myself. Geocities came and went. I wanted ownership of my hosting.
  • wanted a VM on the Internet anyway. I wanted something always up, that I could host services on. I had hosted PCs on the Internet at home, but with cloud services, I just didn’t need this anymore
  • wanted very low costs
  • needed to support extremely low readership.

So, I built out a tiny VM on AWS I can deploy services on, and it costs next to nothing.

But my content is static. It really makes more sense to host the files on S3, and use a static content generator. It’s much more secure, I don’t have to worry about keeping OSs and applications patched, and it could scale if ever required.

So over Christmas break, I built https://articles.hotelexistence.ca/ with Hugo, hosted on S3 and fronted by CloudFront, which seemed to be the only way to host content from S3 on my domain with HTTPS. With Hugo (as with any other static site generator), you create your content, and it applies a template and creates the links – it reminds me of working with Fog Creek Software’s defunct CityDesk almost 20 years ago. This AWS Hugo Hosting article was really helpful for the AWS setup. I still can’t figure out how to use Hugo’s Image Processing features, but I didn’t need them. The new site is accessible from the ‘Articles’ section up top. I’m not sure if I’ll move everything over, or what I’ll do moving forward.

Detect web skimming with web automation

I was listening to the Darknet Diaries Magecart episode before the holidays and was thinking, “Magecart attacks should be pretty easy to detect with web automation”, so I wrote up how I would do it. If you run a web property that processes sensitive data, it might be of interest. Check it out here: https://articles.hotelexistence.ca/posts/browserautomationtodetectwebskimming/

I have been thinking about changing how I host this site, and decided to try it out for this article – more on this later.

Nano Cidery

In September, I went apple picking with the kids and picked up some cider to try fermenting it – something I’ve been wanting to do ever since reading about the process in Make Magazine years ago. I don’t usually drink hard cider, but I’ve long wanted to try making it.

I ended up following guidance from these sites:
https://www.midwestsupplies.com/blogs/specialty/instructions-on-how-to-make-hard-cider
https://howtomakehardcider.com/

I really like the idea of working at a small scale – it works really well for our apartment, and limits waste while experimenting. Startup costs were really low – apart from the cider, I picked up everything at Toronto Brewing:
– Cider
– Starsan sanitizer
– Lalvin D47 yeast (recommended by store staff for cider)
– Bottles (I re-used Grolsch swing-top bottles, emptied prior to the exercise, for bottling, and a wine bottle for fermenting)
– A food grade hose for decanting
– An airlock and stopper

My first batch was a bust. It turns out Downey’s Farm adds potassium sorbate, a preservative that inhibits yeast, to their cider, so it didn’t ferment.

For my second batch, I went to our local Loblaws grocery store and bought their house brand cider. I added some brown sugar at fermentation time to increase alcohol content, and then added some dextrose at bottling time for carbonation. I fermented for two weeks, bottled, and tried my first bottle two weeks after bottling. The carbonation was perfect – lightly carbonated, tiny bubbles. But the cider was mostly flavorless – it wasn’t terrible, but didn’t taste great. I tried another bottle today, after 4 weeks – it was still flavorless, but somehow much better.

My third batch is currently in a second stage of fermentation. I fermented for two weeks, decanted, and have let it sit for two weeks. I plan to bottle it tomorrow. Should be ready to try around Christmas!
