In February, I saw Robert Lucian’s Raspberry Pi-based license plate reader project on Hackaday. His project is different in that he wrote his own license plate recognition algorithm, which runs in the cloud – the Pi feeds the images to the cloud for processing. He had great results – 30 frames per second, with 1 second of latency. This is awesome, but I want to process on the device – I want to avoid cellular data and cloud charges. Once I get this working, I’ll look at improving performance with a more capable processor, like the NVIDIA Jetson or the Intel Neural Compute Stick.
In any case, Robert was getting great images from his Pi – so I asked him how he did it. He wrote me back with a few suggestions – he is using the Pi camera in stream mode 5 at 30 fps.
I wondered if one of the issues was my Pi Camera (v1), so I ordered a version 2 camera (just weeks before the HQ camera came out!). The images I was getting still weren’t great. I’m using the pi-camera-connect package; here’s what I’ve learned so far:
Some of the modes are capable of higher frame rates, but the results may be poor. Start with 30 fps.
In stream mode, streamCamera.startCapture() must be called 2-3 seconds before streamCamera.takeImage() – see the sketch after this list.
There are a number of parameters not exposed by the pi-camera-connect package, but ultimately, this package is just a front end for raspistill and raspivid. All the settings can be tweaked in the source. Specifically, increase the --timeout delay for still images. I also want to experiment with the --exposure sports setting.
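Here’s a minimal sketch of that capture sequence with pi-camera-connect (the 3-second warm-up and the output filename are my own choices, not gospel):

```javascript
const { StreamCamera, Codec } = require("pi-camera-connect");
const fs = require("fs");

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function capture() {
  const streamCamera = new StreamCamera({ codec: Codec.MJPEG, fps: 30 });

  await streamCamera.startCapture();
  await delay(3000); // give the sensor 2-3 seconds to settle before grabbing a frame

  const image = await streamCamera.takeImage(); // resolves to a JPEG Buffer
  fs.writeFileSync("frame.jpg", image);

  await streamCamera.stopCapture();
}

capture();
```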
I still have more tweaking to do with the Pi Camera v2. If, after a few more runs, I don’t get the images I need, I’ll try the HQ camera or try interfacing with an action camera.
Finally, I’ve added GPS functionality. If you access the application with your phone while you’re riding, the application will associate your phone’s GPS coordinates with each capture.
I’m building a Node application hosted on a Raspberry Pi that will not be connected to the internet. A user interfaces with the application through the browser on their phone, and the application asks the browser for its GPS coordinates using the HTML Geolocation API.
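On the browser side, the call looks roughly like this (a sketch – the endpoint name and payload shape are illustrative):

```javascript
// Fetch the phone's GPS fix and post it back to the Pi
navigator.geolocation.getCurrentPosition(
  position => {
    fetch("/api/capture-location", { // hypothetical endpoint on the Pi
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        latitude: position.coords.latitude,
        longitude: position.coords.longitude,
        accuracy: position.coords.accuracy
      })
    });
  },
  error => console.error("Geolocation failed:", error.message),
  { enableHighAccuracy: true }
);
```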
In iOS, the HTML Geolocation API only works for HTTPS sites. I found an excellent post on Stack Overflow about creating a self-signed cert that works in most browsers. I created the cert and added it to my desktop and phone. HTTPS worked great.
I first tried the Node ws websocket library: the application would call out to the browser to fetch GPS coordinates when it needed them.
The application worked great in Firefox and Chrome, but it would not work in the iOS browser. If I dropped to HTTP (vs HTTPS) and WS (vs WSS), it worked fine. For some reason, the iOS browser accepted the cert for HTTPS, but not WSS. Unfortunately, I needed HTTPS to use Geolocation.
I couldn’t get it to work, so I ended up moving my application to Socket.IO, which falls back to HTTPS polling if a websocket connection cannot be established. This worked for my scenario. If you need websocket-like capability and have to use self-signed certificates on iOS, try Socket.IO.
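A minimal sketch of the server side (assuming Socket.IO v4; the cert paths and event names are mine):

```javascript
const fs = require("fs");
const https = require("https");
const { Server } = require("socket.io");

// HTTPS server using the self-signed cert
const server = https.createServer({
  key: fs.readFileSync("server.key"),
  cert: fs.readFileSync("server.crt")
});

// Socket.IO starts on HTTPS long-polling and only upgrades to a
// websocket when it can – so iOS keeps working when it rejects WSS
const io = new Server(server);

io.on("connection", socket => {
  socket.emit("request-gps"); // ask the phone for coordinates when needed
  socket.on("gps", coords => console.log("Received coordinates:", coords));
});

server.listen(443);
```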
After only a couple of uses, I decided the Alexa-powered TV remote I built earlier this year was not very useful. In my experience, there are few cases where voice control makes sense, and powering on my TV is not one of these cases. So I set out to build my own remote.
I built my own Arduino on a prototype board, using a circuit I had used before, the RRRRRRRRRRBBA “really bare bones Arduino” design. As I want it to last for a while on a set of batteries, I wired the buttons into an interrupt line. The microcontroller is programmed to stay in sleep mode until a button is pressed; it then wakes up and sends the corresponding signal to the infrared LED. It will be interesting to see how long the batteries last – I’m hoping at least 6 months!
I laugh at the idea that my remote control runs at 16 MHz – 16x as fast as the C64 of my childhood.
I picked out a project box and some interesting buttons at my local electronics shop. The buttons are quite deep, which doesn’t lend itself to a thin remote. My remote has 4 controls: Power, Mute, Volume Up, Volume Down.
I have an old TV which was acquired used, without a remote, and its power button has become a little finicky. Rather than going out and buying a new TV or a universal remote, I thought it would be fun to build one. I had a SparkFun ESP8266 dev board that wasn’t being used, and an Amazon Echo Dot in the same room as my TV, so I decided to make an Amazon Alexa-controlled remote rather than build a physical one with buttons.
As I mentioned in my last post, I’ve built Alexa skills before, and to do this, you need to build a web service that Amazon’s servers can reach – something on the public Internet that can handle HTTPS. I wanted this service handling the Alexa calls to control the ESP8266 in my home. I don’t like opening up ports on my home firewall, so I had to figure out a way to control the ESP8266 on my home network from the cloud, the way every off-the-shelf IoT device does. I hadn’t done this before, but I knew it was possible.
There seem to be a number of services that make this easy – I came across Blynk.io and PubNub. Blynk, in particular, seems like a good choice, as there’s sample code for the ESP8266 microcontroller I had decided to use. But in the end, I decided to use web sockets.
Building the solution
Most Alexa Skills tutorials are built on Amazon’s Lambda serverless compute service. But using websockets with Lambda seems to require Amazon’s API Gateway, and I decided that was too much to take on for this particular project.
I set up the new skill on the Alexa portal and configured it to call a VM I built on Azure (I had free credits available there!). On this VM, I hosted a Node program built with Alexa’s ask-sdk-core library, using the websockets/ws library for the web socket server that handles the connection from the ESP8266.
On the ESP8266, I used the ArduinoWebSockets library to build the web socket client. Getting the ESP8266 to send IR commands to the TV was super simple. I connected an infrared LED to pin 4 of the ESP8266; an article I read suggested using a transistor to increase the current supplied to the LED, so I did. I used the IRremoteESP8266 library for the IR signalling, and found the IR codes for my TV on http://irdb.tk/.
There is no authentication mechanism built into web sockets. As I’m not scaling the service and am just playing around, I decided not to worry about it – if you use the code below, beware! The side effect is that every web socket client connected to the service receives every command sent through the skill.
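A minimal sketch of that unauthenticated broadcast pattern with websockets/ws (the port and command strings are illustrative):

```javascript
const WebSocket = require("ws");

// Every ESP8266 that connects becomes a client of this server
const wss = new WebSocket.Server({ port: 8080 });

// Called from the Alexa intent handlers – note there is no check on
// who is connected: every open client receives every command
function broadcastCommand(command) {
  wss.clients.forEach(client => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(command); // e.g. "POWER", "MUTE", "VOLUME_UP"
    }
  });
}
```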
In 2018, after reading an article on Hackaday, I picked up an Amazon Echo Dot to experiment with building voice interfaces. It was surprisingly easy, and with no experience, I got something up and running in a couple hours.
I haven’t looked at this in a while, and had another project in mind. Looking at the Alexa development documentation today, all the examples leverage Amazon’s Lambda compute service. For my project, I didn’t want to use Lambda – I just wanted to use Express on Node.js. Amazon has an NPM library for this, ask-sdk-express-adapter, but I couldn’t find ANY end-to-end example, and I struggled for a bit to get it to work. I think it took me longer the 2nd time around!
SO – here’s a simple example; hopefully it’s got the right keywords for anyone who’s stumbling on the same problem.
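A minimal end-to-end sketch (the route, port, and speech text are placeholders):

```javascript
const express = require("express");
const Alexa = require("ask-sdk-core");
const { ExpressAdapter } = require("ask-sdk-express-adapter");

// One simple handler so the skill responds when opened
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === "LaunchRequest";
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak("Hello from Express!")
      .getResponse();
  }
};

const skill = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .create();

// true, true = verify Alexa's request signature and timestamp
const adapter = new ExpressAdapter(skill, true, true);

const app = express();
// No express.json() on this route – the adapter needs the raw body
// to verify that requests really come from Amazon
app.post("/", adapter.getRequestHandlers());
app.listen(3000);
```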
When I set up this site, I:
wanted a content management tool. I didn’t want to be writing pages in HTML
wanted to host it myself. Geocities came and went. I wanted ownership of my hosting.
wanted a VM on the Internet anyway. I wanted something always up, that I could host services on. I had hosted PCs on the Internet at home, but with cloud services, I just didn’t need this anymore
wanted very low costs
needed to support extremely low readership.
So, I built out a tiny VM on AWS I can deploy services on, and it costs next to nothing.
But my content is static. It really makes more sense to host the files on S3 and use a static site generator: it’s much more secure, I don’t have to worry about keeping OSs and applications patched, and it could scale if ever required.
So over Christmas break, I built https://articles.hotelexistence.ca/ with Hugo, hosted on S3 and fronted by CloudFront, which seemed to be the only way to serve content from S3 on my own domain with HTTPS. With Hugo (as with any other static site generator), you create your content, and it applies a template and creates the links – it reminds me of working with Fog Creek Software’s defunct CityDesk almost 20 years ago. This AWS Hugo Hosting article was really helpful for the AWS setup. I still can’t figure out how to use Hugo’s Image Processing features, but I didn’t need them. The new site is accessible from the ‘Articles’ section up top. I’m not sure if I’ll move everything over, or what I’ll do there moving forward.
In September, I went out apple picking with the kids and decided to pick up some cider to try to ferment it. I don’t usually drink hard cider, but I’ve been wanting to try making it ever since reading about the process in Make Magazine years ago.
I really like the idea of working at a small scale – it works really well for our apartment, and limits waste while experimenting. Startup costs were really low – apart from the cider, I picked up everything at Toronto Brewing:
Cider
Star San sanitizer
Lalvin D47 yeast (recommended by store staff for cider)
Bottles (for bottling, I re-used Grolsch swing-top bottles emptied prior to the exercise; for fermenting, a wine bottle)
A food-grade hose for decanting
An airlock and stopper
My first batch was a bust. It turns out Downey’s Farm adds potassium sorbate – a preservative that inhibits yeast – to their cider, so it didn’t ferment.
For my second batch, I went to our local Loblaws grocery store and bought their house brand cider. I added some brown sugar at fermentation time to increase alcohol content, and then added some dextrose at bottling time for carbonation. I fermented for two weeks, bottled, and tried my first bottle two weeks after bottling. The carbonation was perfect – lightly carbonated, tiny bubbles. But the cider was mostly flavorless – it wasn’t terrible, but didn’t taste great. I tried another bottle today, after 4 weeks – it was still flavorless, but somehow much better.
My third batch is currently in a second stage of fermentation. I fermented for two weeks, decanted, and have let it sit for two weeks. I plan to bottle it tomorrow. Should be ready to try around Christmas!
I love my bike – a mid-1990s hybrid, it’s a workhorse I can park anywhere. After years of limited maintenance, in the past year I’ve had to replace a tire, the cassette, and all the cables, pads, grips, and shifters. I’ve also just upgraded my headlight and taillight – the improvements in bicycle lighting over the last 15 years have been incredible.
I’m using my bike more this year – my downtown office recently moved to a building with badge-access indoor bicycle parking and showers, with towel service, for cyclists – what a cool perk. So I’ve been biking to work for the first time since I started at this company in 2006: 17 km down Yonge St in Toronto, about twice a week since June.
My rides have been great. Drivers along my route leave a lot of space. But it’s hard to assess risk. The City of Toronto keeps detailed data on cyclists killed or seriously injured. There have been 11 KSIs on my route since 2008. But how do I compare that against, say, the risk of the 30 km drive to my Mississauga office? I’ve been rear-ended 3 times since 2010 commuting by car to Mississauga, but all have been at low speeds, only resulting in damage to my car – the consequences of getting hit on my bike are far more severe.
I was trying to think about what I could do beyond riding cautiously and ensuring I am visible. And, I have to say, a part of me is just always on the lookout for small, fun projects.
Envision a bicycle dashcam
Bicycle dashcams have been done before, by Cycliq and others. But I envision something different, a bicycle dashcam that could:
Recognize the license plates of the cars around you. From a picture, it would look at the plates on all the cars, and then associate a plate number with the picture
Record the speed of the cars around you
Record the proximity of the cars around you
Present a driver-readable display, e.g. “Driver ABCD1234, your current speed is 45” – like a mobile Toronto Watch Your Speed program sign. Would a driver allow a cyclist more space if they were aware their actions were being logged?
Log this data on a remote server
Share this data with a group. Perhaps aggregate “near miss” data from many cyclists, and identify troublesome areas, or troublesome cars.
Introducing my Bicycle Dashcam, Mark I
My Mark I dashcam consists of a Raspberry Pi 3 with a Pi Camera (v1.3) and a battery pack, running a small Node application that takes pictures and tries to recognize license plates with OpenALPR, controlled through a phone-friendly web interface.
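One way to call OpenALPR from Node is to shell out to its alpr command-line tool – a sketch (the country code and file path are assumptions):

```javascript
const { execFile } = require("child_process");

// Run OpenALPR on one image; -j asks for JSON output, -c sets the plate style
function recognizePlates(imagePath) {
  return new Promise((resolve, reject) => {
    execFile("alpr", ["-c", "us", "-j", imagePath], (err, stdout) => {
      if (err) return reject(err);
      const { results } = JSON.parse(stdout);
      resolve(results.map(r => ({ plate: r.plate, confidence: r.confidence })));
    });
  });
}

recognizePlates("frame.jpg").then(plates => console.log(plates));
```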
Testing and Results So Far
On the Pi 3, it takes between 8 and 800 ms to capture a photo with the Pi Camera, and another 7-8 seconds to run the OpenALPR license plate recognition process. I haven’t looked into optimizing this, but I would be curious to see how fast this could get by adding a processor optimized for these tasks, like an Intel Neural Compute Stick.
I’ve taken my prototype on a few drives, and a 5 minute bicycle ride. I don’t know why I even tried using a Lego frame to mount the dashcam to my bike – it only held together for a few minutes of riding, and completely fell apart – I’ll have to come up with something better for bicycle testing.
In the car, over a 30-minute drive (~120 photos) in traffic, about 15 license plates were identified. OpenALPR works exceptionally well – it can pick out plate numbers even when it would be hard for a human to do so from the same photo. The limiting factor is the Pi Camera. At a stop, the pictures are fine, and OpenALPR will recognize the plates.
However, as soon as the car is in motion, the image is washed out.
I have spent some time tweaking the photos taken by the Pi camera, trying out different modes. So far, I haven’t been able to get great results.
To take this further, I’ll look at other Pi camera options, run further tests on my bicycle, and perhaps move the project to a mobile phone app, as my phone’s camera is significantly better than the Pi’s. I may also explore inexpensive LED matrix screens for the driver-readable display.
I’m not sure where we got the idea, and the solution we proposed was gimmicky even at the time, but the exercise was more about design process – my team did fine. Imagine my surprise when, browsing for something else recently on AliExpress (and on Amazon), I found that some company builds and sells a device similar to our proposed design.
As automakers have added lane-following systems and basic autopilots to their cars over the last ten years, they’ve also invested in systems that ensure drivers remain alert enough to supervise these systems and are ready to take over. Tesla’s systems have sensors to ensure hands remain on the steering wheel; Cadillac’s Super Cruise has a camera that ensures the driver’s eyes are focused on the road ahead. What seemed like a silly idea is now a little industry…