All posts by raudette

Controlling an older TV with Alexa

Background

I have an old TV which was acquired used, without a remote. The power button has become a little finicky. Rather than going out and buying a new TV, or a universal remote, I thought it would be fun to build one. I had a Sparkfun ESP8266 dev board that wasn’t currently being used, and an Amazon Echo Dot in the same room as my TV, so I decided to make an Amazon Alexa-controlled remote rather than build a physical one with buttons.

As I mentioned in my last post, I’ve built Alexa skills before, and to do this, you need to build a web service that Amazon’s servers can reach – something on the public Internet that can handle HTTPS. I wanted this service that handles the Alexa calls to control the ESP8266 in my home. I don’t like opening ports on my home firewall, so I had to figure out how to control the ESP8266 on my home network without inbound connections – the way off-the-shelf IoT devices do it. I hadn’t done this before, but I knew it was possible.

There seem to be a number of services that make this easy – I came across Blynk.io and PubNub. Blynk, in particular, seemed like a good choice, as there’s sample code for the ESP8266 micro-controller I had decided to use. But in the end, I decided to use web sockets.

Building the solution

Alexa IR Remote System Diagram
Alexa IR Remote System Diagram

Most of the Alexa Skills tutorials are built on Amazon’s Lambda serverless compute service. But using web sockets with Lambda seems to require Amazon’s API Gateway, and I decided that was too much to take on for this particular project.

I set up the new skill on the Alexa portal and configured it to call a VM I built out on Azure (I had free credits available there!). On this VM, I hosted a Node program built with Alexa’s ask-sdk-core library and the websockets/ws library, the latter providing the web socket server that handles the connection from the ESP8266.

On the ESP8266, I used the ArduinoWebSockets library to build the web socket client. Getting the ESP8266 to send IR commands to the TV was super simple. I connected an infrared LED to pin 4 of the ESP8266. An article I read suggested using a transistor to increase the current supplied to the LED, so I did. I used the IRremoteESP8266 library for IR, and found the IR codes for my TV on http://irdb.tk/.

ESP8266 IR Remote Circuit
ESP8266 IR Remote Circuit
Alexa IR Remote on Breadboard
Alexa IR Remote on Breadboard

There is no authentication mechanism built into web sockets. Since I’m not scaling the service and I’m just playing around, I decided not to worry about it – if you use the code below, beware! The side effect is that every web socket client connected to the service receives every command sent through the skill.
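
To illustrate the broadcast behaviour, here’s a rough sketch of the server-side bridge logic. The intent and command names are hypothetical placeholders, not the actual skill’s, and this is just the pattern – the real code is in the repo linked below.

```javascript
// Sketch: an Alexa intent handler looks up the IR command for the spoken
// intent and broadcasts it to every connected web socket client.
// Intent names and command strings here are made up for illustration.

// readyState value for an open connection (the ws library uses 1 for OPEN)
const OPEN = 1;

// Map a hypothetical intent name to the command string the ESP8266 expects
function commandForIntent(intentName) {
  const commands = {
    PowerIntent: 'POWER',
    VolumeUpIntent: 'VOLUME_UP',
    VolumeDownIntent: 'VOLUME_DOWN',
  };
  return commands[intentName] || null;
}

// Broadcast a command to every open client of a websockets/ws Server.
// With no authentication, every client receives every command.
function broadcast(wss, command) {
  wss.clients.forEach((client) => {
    if (client.readyState === OPEN) {
      client.send(command);
    }
  });
}

module.exports = { commandForIntent, broadcast };
```

An Alexa intent handler would then just call something like `broadcast(wss, commandForIntent('PowerIntent'))` – which is exactly why any connected client, trusted or not, sees the command.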

Solution in action

Here are some videos of the solution in action:

Turn on TV
Turn up volume

If you’re interested in trying something similar, you can check out the code here: https://github.com/raudette/AlexaTVRemote

Alexa skill, written in Node JS, Using Express, with ask-sdk-express-adapter

In 2018, after reading an article on Hackaday, I picked up an Amazon Echo Dot to experiment with building voice interfaces. It was surprisingly easy, and with no experience, I got something up and running in a couple hours.

I hadn’t looked at this in a while, and had another project in mind. Looking at the Alexa development documentation today, all the examples leverage Amazon’s Lambda compute service. For my project, I didn’t want to use Lambda – I just wanted to use Express on Node JS. Amazon has an NPM library for this, ask-sdk-express-adapter, but I couldn’t find ANY end-to-end example, and I struggled for a bit to get it to work. I think it took me longer the 2nd time around!

SO – here’s a simple example, hopefully it’s got the right keywords for anyone who’s stumbling on the same problem. Keywords:

  • node js
  • javascript
  • ask-sdk-express-adapter
  • express
  • sample code
  • example code
  • alexa

const express = require('express');
const { ExpressAdapter } = require('ask-sdk-express-adapter');
const Alexa = require('ask-sdk-core');
const app = express();
const skillBuilder = Alexa.SkillBuilders.custom();

const PORT = process.env.PORT || 8080;

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speechText = 'Hello World - Your skill has launched';

        return handlerInput.responseBuilder
            .speak(speechText)
            .reprompt(speechText)
            .withSimpleCard('Hello World', speechText)
            .getResponse();
    }
};

skillBuilder.addRequestHandlers(
    LaunchRequestHandler
)

const skill = skillBuilder.create();

// The two false arguments disable request signature and timestamp
// verification - fine for experimenting, but enable them in production
const adapter = new ExpressAdapter(skill, false, false);

app.post('/', adapter.getRequestHandlers());

app.listen(PORT);
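
Because the two false arguments to ExpressAdapter turn off signature verification, you can exercise the endpoint locally with a plain HTTP POST. Here’s a sketch of building a minimal LaunchRequest envelope for that – note it’s a stripped-down envelope containing only the fields this handler inspects, not a complete Alexa request:

```javascript
// Sketch: a minimal Alexa LaunchRequest envelope for local testing.
// This only works because the handler above just checks request.type,
// and because signature verification is disabled in the adapter.
function launchRequestEnvelope() {
  return {
    version: '1.0',
    session: { new: true, sessionId: 'amzn1.echo-api.session.test' },
    context: {},
    request: {
      type: 'LaunchRequest',
      requestId: 'amzn1.echo-api.request.test',
      timestamp: new Date().toISOString(),
      locale: 'en-US',
    },
  };
}

module.exports = { launchRequestEnvelope };
```

POST that JSON to http://localhost:8080/ with Content-Type: application/json, and the response should contain the “Hello World” output speech.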

Hope that helps!

Playing around with Hugo and different ways of hosting content

When I initially built out this blog, I:

  • wanted a content management tool. I didn’t want to be writing pages in HTML
  • wanted to host it myself. Geocities came and went. I wanted ownership of my hosting.
  • wanted a VM on the Internet anyway. I wanted something always up, that I could host services on. I had hosted PCs on the Internet at home, but with cloud services, I just didn’t need this anymore
  • wanted very low costs
  • needed to support extremely low readership.

So, I built out a tiny VM on AWS I can deploy services on, and it costs next to nothing.

But my content is static. It really makes more sense to host the files on S3, and use a static content generator. It’s much more secure, I don’t have to worry about keeping OSs and applications patched, and it could scale if ever required.

So over Christmas break, I built https://articles.hotelexistence.ca/ with Hugo, hosted on S3 and fronted by CloudFront, which seemed to be the only way to host content from S3 on my domain with HTTPS. With Hugo (and any other static site generator), you create your content, and it applies a template and creates the links – it reminds me of working with Fog Creek Software’s defunct CityDesk almost 20 years ago. This AWS Hugo Hosting article was really helpful for the AWS setup. I still can’t figure out how to use Hugo’s Image Processing features, but I didn’t need them. The new site is accessible from the ‘Articles’ section up top. I’m not sure if I’ll move everything over, or what I’ll do moving forward.

Detect web skimming with web automation

I was listening to the Darknet Diaries Magecart episode before the holidays and was thinking, “Magecart attacks should be pretty easy to detect with web automation”, so I wrote up how I would do it. If you run a web property that processes sensitive data, it might be of interest. Check it out here: https://articles.hotelexistence.ca/posts/browserautomationtodetectwebskimming/

I have been thinking about changing how I host this site, and decided to try it out for this article – more on this later.

Nano Cidery

In September, I went out apple picking with the kids and decided to pick up some cider to try fermenting it. I don’t usually drink hard cider, but I’ve been wanting to try making it ever since reading about the process in Make Magazine years ago.

I ended up following guidance from these sites:
https://www.midwestsupplies.com/blogs/specialty/instructions-on-how-to-make-hard-cider
https://howtomakehardcider.com/

I really like the idea of working at a small scale – it works really well for our apartment, and limits waste while experimenting. Startup costs were really low – apart from the cider, I picked up everything at Toronto Brewing:
– Cider
– Starsan sanitizer
– Lalvin D47 yeast (recommended by store staff for cider)
– Bottles (I re-used Grolsch swing-top bottles I started drinking prior to the exercise for bottling, and a wine bottle for fermenting)
– A food grade hose for decanting
– An airlock and stopper

My first batch was a bust. It turns out Downey’s farm adds potassium sorbate to their cider, and it didn’t ferment.

For my second batch, I went to our local Loblaws grocery store and bought their house brand cider. I added some brown sugar at fermentation time to increase alcohol content, and then added some dextrose at bottling time for carbonation. I fermented for two weeks, bottled, and tried my first bottle two weeks after bottling. The carbonation was perfect – lightly carbonated, tiny bubbles. But the cider was mostly flavorless – it wasn’t terrible, but didn’t taste great. I tried another bottle today, after 4 weeks – it was still flavorless, but somehow much better.

My third batch is currently in a second stage of fermentation. I fermented for two weeks, decanted, and have let it sit for two weeks. I plan to bottle it tomorrow. Should be ready to try around Christmas!

Bicycle Dashcam Mark I

I love my bike – it is a workhorse I can park anywhere, a mid-1990s hybrid. After years of limited maintenance, in the past year, I’ve had to replace a tire, cassette, all the cables, pads, grips, and shifters. I’ve also just upgraded my headlight and taillight – the improvements that have been made in bicycle lighting over the last 15 years have been incredible.

I’m using my bike more this year – my downtown office recently moved to a building with badge access indoor bicycle parking and showers, with towel service, for cyclists – what a cool perk. So, I’ve been biking to work for the first time since I started at this company in 2006, 17 km down Yonge St in Toronto, about twice a week since June.

My rides have been great. Drivers along my route leave a lot of space. But it’s hard to assess risk. The City of Toronto keeps detailed data on cyclists killed or seriously injured. There have been 11 KSIs on my route since 2008. But how do I compare that against, say, the risk of the 30 km drive to my Mississauga office? I’ve been rear-ended 3 times since 2010 commuting by car to Mississauga, but all have been at low speeds, only resulting in damage to my car – the consequences of getting hit on my bike are far more severe.

I was trying to think about what I could do beyond riding cautiously and ensuring I am visible. And, I have to say, a part of me is just always on the lookout for small, fun projects.

Envision a bicycle dashcam

Bicycle dashcams have been done before, by Cycliq and others. But I envision something different – a bicycle dashcam that could:

  • Recognize the license plates of the cars around you. From a picture, it would read the plates on all the cars, and associate each plate number with the picture
  • Record the speed of the cars around you
  • Record the proximity of the cars around you
  • Provide a driver-readable display, ie: “Driver ABCD1234, your current speed is 45”. Like a mobile Toronto Watch Your Speed program sign. Would a driver allow a cyclist more space if they were aware their actions are being logged?
  • Log this data on a remote server
  • Share this data with a group. Perhaps aggregate “near miss” data from many cyclists to identify troublesome areas, or troublesome cars.

Introducing my Bicycle Dashcam, Mark I

Bicycle Dashcam Mark 1
Bicycle Dashcam Mark I

My Mark I dashcam consists of a Raspberry Pi 3 with a Pi Camera (v1.3) and a battery pack, running a small Node application that takes pictures, tries to recognize license plates with OpenALPR, and is controlled through a phone-friendly web interface.

Bicycle Dashcam Dashboard
Bicycle Dashcam Dashboard

Testing and Results So Far

On the Pi 3, it takes between 8 and 800 ms to capture a photo with the Pi Camera, and another 7-8 seconds to run the OpenALPR license plate recognition process. I haven’t looked into optimizing this, but I would be curious to see how fast this could get by adding a processor optimized for these tasks, like an Intel Neural Compute Stick.

I’ve taken my prototype on a few drives, and a 5 minute bicycle ride. I don’t know why I even tried using a Lego frame to mount the dashcam to my bike – it only held together for a few minutes of riding, and completely fell apart – I’ll have to come up with something better for bicycle testing.

In the car, over a 30 minute drive (~120 photos) in traffic, about 15 license plates were identified. OpenALPR works exceptionally well – it can pick out plate numbers even when it would be hard for a human to do so from the same photo. The limiting factor is the Pi Camera. At a stop, the pictures are fine, and OpenALPR will recognize the plates.

Pi Camera image quality sufficient for OpenALPR when stopped
Pi Camera image quality sufficient for OpenALPR when stopped

However, as soon as the car is in motion, the image is washed out.

Photo from Pi Dashcam while car is moving.
Just a blur. Photo from Pi Dashcam while car is moving.

I have spent some time tweaking the photos taken by the Pi camera, trying out different modes. So far, I haven’t been able to get great results.

As I look to take this further, I’ll look at other Pi camera options, run further tests on my bicycle, perhaps move the project to a mobile phone app, as my phone’s camera is significantly better than the Pi’s. Also, I may explore inexpensive LED matrix screens for the driver readable display.

Source code: https://github.com/raudette/plates

What seemed like a silly idea

Throughout University, we had these Engineering Design courses, where we would go through a defined process to design something.

In my second year, my team submitted “A System for Maintaining Driver Alertness <link to pdf>“.

A System for Maintaining Driver Alertness
A System for Maintaining Driver Alertness

I’m not sure where we got the idea, and the solution we proposed was gimmicky, even at the time, but the exercise was more about the design process – my team did fine. Imagine my surprise when, browsing for something else recently on AliExpress (and on Amazon), I found that some company builds and sells a device similar to our proposed design.

Commercial Driver Alertness Device
Commercial Driver Alertness Device – As Seen on Amazon

As automakers have added lane following systems and basic autopilots to their cars over the last ten years, they’ve also invested in systems that ensure drivers remain alert to supervise these systems and are ready to take over. Tesla’s systems have sensors to ensure hands remain on the steering wheel; Cadillac’s Super Cruise has a camera that ensures the driver’s eyes are focused on the road ahead. What seemed like a silly idea is now a little industry…

RC Sailboat Version 2

Six years ago, I built a wifi-controlled pop bottle sailboat. Smartphone control wasn’t great, so I turned my decommissioned weather station into a remote control.

RC Pop Bottle Sailboat, V2
RC Pop Bottle Sailboat, V2

My re-used weather station project board is a homemade Arduino board, with an APC220 transceiver radio. I added two rotary potentiometers for rudder and sail control. I removed the Raspberry Pi in the boat, and connected another APC220 transceiver to the Arduino Uno that controlled the sail and rudder servos.

We drove to Downsview Park and launched the boat.

RC Sailboat Launch
RC Sailboat Launch

Control still wasn’t great:

  • Controlling the sail and the rudder is fine, but with the boat just floating on the pop bottles, the rudder has very little effect. Our boat design itself needs improvement – I think this is currently the greatest issue.
  • My transmitter and receiver code could use some optimization – while troubleshooting at home, I had limited updates to sampling only once a second, so the controls seemed “laggy”.
  • I’m using very inexpensive TowerPro MG995 servos, which many advise against. They were fine for playing around with interfacing, but they are slow, they seem to have a hard time holding their position, and they don’t consistently reach their programmed position.

RC Boat Halfway Across The Pond
RC Boat Halfway Across The Pond

I did write my phone number on the boat in case it got stuck in the middle and someone eventually found it. In the end, it wasn’t required. We just played with the controls as the wind carried the boat to the other side – probably about 100 m.

Downsview Park Test Run
Downsview Park Test Run

Maybe some time over the next 6 years, I’ll optimize the RC code, install better servos, and improve the boat design by adding a keel.

Finally, a reason and time to play with an ESP8266 WiFi-capable microcontroller

Ever since I read about the ESP8266 in Make magazine in 2015, I’ve been wanting to build something with it. I picked up a Sparkfun ESP8266 Thing Dev board at Creatron, probably a year ago, and let it gather dust.

Enter 35 degree weather. I have a window air conditioner that I install in a metal sleeve built into our wall. For some reason, the sleeve is sloped such that water flows INSIDE. When the A/C runs on humid days, the water it collects from dehumidifying can leak inside, creating an unpredictable, annoying mess that has to be cleaned up.

I could pick up a commercial leak sensor, but that’s not fun: the mobile app is probably not very good, it probably sends more information than needed to its cloud service, it will never receive updates, and it seems like we’re always reading about IoT device vulnerabilities.

So, I bought a water sensor ($2.20!) in June, connected it to the dev board, and started writing a client for the ESP8266 in the Arduino environment, and the server in Node.js. Then summer happened. Today, it’s August, it’s only 24 degrees outside, the A/C is off, and I’m done! The client reads the sensor every 10 seconds and calls the server with a standard web service call; the server checks the sensor reading and sends an alert by email if a water leak is detected.

ESP 8266 Water Sensor
ESP 8266 Water Sensor

The code is simple, but I had challenges getting the ESP8266 to HTTP POST a JSON payload. It seemed every example I found used different libraries or versions than the ones I had installed. I eventually got it working.

In the end, we didn’t have any leaks this summer in any case, and I don’t expect to make use of this project. If you’re interested in checking it out, you can download the code here: https://www.hotelexistence.ca/projects/watersensor.zip

Playing with tools instead of getting stuff done and other useless pursuits

This website is running WordPress on an Amazon EC2 instance.

If I were looking to keep a blog, this is not how I would do things – I’d just use a service. The micro EC2 instance is slow, and I have to ensure Linux is patched, WordPress is patched, etc… But playing around with the server is as much fun as writing the blog.

Here are a few changes to the site recently:

  • I run the EFF Privacy Badger on my browser at home, and I couldn’t believe how many trackers were running on my self-hosted site, given that I don’t track and I don’t have ads. I dropped the Youtube videos, which got rid of many (I just link to Youtube now instead of embedding). I can’t remember what else I did, but now I’m down to just Google Fonts, used by the template.
  • The site now defaults to HTTPS. With default settings, Qualys rates this site’s Ubuntu 18.04 LTS Apache HTTPS setup as an A. It’s funny how many important companies struggle to get this right on their sites, given how easy it is.
  • Recently updated the site to Ubuntu 18.04 LTS – the latest version of WordPress didn’t like the version of PHP on the previous LTS release I had been running (not sure which that was). This is the third VM on which this site has been hosted.
  • I hadn’t been resizing photos and the site got REALLY slow.  I’ve resized the largest ones – it’s not painfully slow anymore.  I may eventually move the image hosting to S3, but keep the server/DB on EC2 – I expect the site would run faster without increasing costs.

Update October 5th, 2019:

  • Google Lighthouse ranks the site load speed at 100
  • Finally got the fonts loading locally with the OMGF WordPress plugin. The site no longer has any external trackers!