Category Archives: computing

1-Click Passwords

I was recently presented with a situation where I would have to regularly enter a 48-character random password, for a month or more, to log in to a computer assigned to me. Since I couldn’t possibly memorize this string, and the computer is reasonably physically secure, I decided to build a device to do it for me.

I had previously used an Arduino to emulate a gamepad for a homemade Dance Dance Revolution mat. This time, I needed to emulate a keyboard. A search for “HID Arduino” returned the Arduino HID page, which suggested an Arduino with an Atmel ATmega32U4 microcontroller. A search for “Arduino 32u4” on Amazon returned the KeeYees Pro Micro clone, which I ordered.

Arduino Pro Micro Clone, button wired to I/O 4

When it arrived, I soldered a button to I/O pin 4 and uploaded the following code:

#include "Keyboard.h"
#include "Bounce2.h"

const int buttonPin = 4;  // button wired to I/O 4
Bounce bounceTrigger = Bounce();

void setup() {
  bounceTrigger.attach(buttonPin, INPUT_PULLUP);
  Keyboard.begin();
}

void loop() {
  bounceTrigger.update();
  // rose() returns true once per debounced low-to-high transition
  if (bounceTrigger.rose()) {
    Keyboard.println("I put my password here");
  }
}

Now, every morning, instead of copying 48 characters from a Post-it, I just click the button.

It should be said that this defeats the purpose of the password, and the password isn’t stored securely on the microcontroller. But the technique is great any time you need to automate a sequence of keystrokes.

iOS Safari’s WebSockets implementation doesn’t work with self-signed certs

I’m building a Node application, hosted on a Raspberry Pi, that will not be connected to the internet. Users interact with the application through the browser on their phone. The application asks the browser for its GPS coordinates using the HTML Geolocation API.

In iOS, the HTML Geolocation API only works on HTTPS sites. I found an excellent Stack Overflow post on creating a self-signed cert that works in most browsers. I created the cert and installed it on my desktop and phone. HTTPS worked great.
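For anyone reproducing this, a self-signed cert of the general shape that worked for me can be generated with a single openssl invocation. This is a sketch with placeholder hostname, IP, and file names, not the exact recipe from the Stack Overflow post; the subjectAltName line matters, since modern browsers ignore the CN:

```shell
# Self-signed cert for local testing; hostname and IP are placeholders.
# -addext requires OpenSSL 1.1.1 or newer.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=raspberrypi.local" \
  -addext "subjectAltName=DNS:raspberrypi.local,IP:192.168.0.10"
```

You then install server.crt as trusted on each client device, and point your Node HTTPS server at the key and cert files.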

I first tried the Node ws WebSocket library; the Node application would call out to the browser to fetch GPS coordinates whenever it needed them.

The application worked great in Firefox and Chrome, but not in the iOS browser. If I dropped down to HTTP (vs. HTTPS) and WS (vs. WSS), it worked fine. For some reason, the iOS browser accepted the cert for HTTPS, but not for WSS. Unfortunately, I needed HTTPS to use Geolocation.

I couldn’t get it to work, so I moved my application to Socket.IO, which falls back to HTTPS polling when a websocket connection cannot be established. This worked for my scenario. If you need websocket-like capability and have to use self-signed certificates on iOS, try Socket.IO.
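The general shape of what saved me here can be sketched in a few lines: try the websocket transport first, and fall back to polling if the handshake fails. This is an illustration of the behavior, not Socket.IO’s actual API:

```javascript
// Illustrative only: try a websocket transport first, fall back to polling.
// Both arguments are async functions that attempt to open a connection.
async function connectWithFallback(tryWebSocket, tryPolling) {
  try {
    return { transport: 'websocket', conn: await tryWebSocket() };
  } catch (err) {
    // e.g. iOS Safari rejecting a self-signed cert on the WSS handshake
    return { transport: 'polling', conn: await tryPolling() };
  }
}

// Simulated usage: the websocket attempt rejects, so we fall back
connectWithFallback(
  async () => { throw new Error('WSS handshake failed'); },
  async () => 'polling-connection'
).then(result => console.log(result.transport)); // prints "polling"
```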

Alexa skill, written in Node JS, using Express, with ask-sdk-express-adapter

In 2018, after reading an article on Hackaday, I picked up an Amazon Echo Dot to experiment with building voice interfaces. It was surprisingly easy, and with no experience, I got something up and running in a couple hours.

I hadn’t looked at this in a while, and had another project in mind. Looking at the Alexa development documentation today, all the examples leverage Amazon’s Lambda compute service. For my project, I didn’t want to use Lambda; I just wanted to use Express on Node JS. Amazon has an NPM library for this, ask-sdk-express-adapter, but I couldn’t find ANY end-to-end example, and I struggled for a bit to get it to work. I think it took me longer the second time around!

So – here’s a simple example. Hopefully it has the right keywords for anyone stumbling on the same problem. Keywords:

  • node js
  • javascript
  • ask-sdk-express-adapter
  • express
  • sample code
  • example code
  • alexa
const express = require('express');
const { ExpressAdapter } = require('ask-sdk-express-adapter');
const Alexa = require('ask-sdk-core');

const app = express();
const skillBuilder = Alexa.SkillBuilders.custom();

// Environment variables are case-sensitive; the conventional name is PORT
const PORT = process.env.PORT || 8080;

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speechText = 'Hello World - Your skill has launched';

        return handlerInput.responseBuilder
            .speak(speechText)
            .reprompt(speechText)
            .withSimpleCard('Hello World', speechText)
            .getResponse();
    }
};

skillBuilder.addRequestHandlers(
    LaunchRequestHandler
);

const skill = skillBuilder.create();

// The two false flags disable request signature and timestamp verification -
// convenient for local testing, but enable them for a production skill
const adapter = new ExpressAdapter(skill, false, false);

// Don't attach a JSON body parser to this route; the adapter needs the raw body
app.post('/', adapter.getRequestHandlers());

app.listen(PORT);

Hope that helps!

Playing around with Hugo and different ways of hosting content

When I initially built out this blog, I:

  • wanted a content management tool. I didn’t want to be writing pages in HTML
  • wanted to host it myself. Geocities came and went. I wanted ownership of my hosting.
  • wanted a VM on the Internet anyway. I wanted something always up, that I could host services on. I had hosted PCs on the Internet at home, but with cloud services, I just didn’t need this anymore
  • wanted very low costs
  • needed to support extremely low readership.

So, I built out a tiny VM on AWS I can deploy services on, and it costs next to nothing.

But my content is static. It really makes more sense to host the files on S3 and use a static site generator. It’s much more secure, I don’t have to worry about keeping OSs and applications patched, and it could scale if ever required.

So over Christmas break, I built https://articles.hotelexistence.ca/ with Hugo, hosted on S3 and fronted by CloudFront, which seemed to be the only way to serve content from S3 on my domain with HTTPS. With Hugo (and any other static site generator), you create your content, and it applies a template and creates the links – it reminds me of working with Fog Creek Software’s defunct CityDesk almost 20 years ago. This AWS Hugo Hosting article was really helpful for the AWS setup. I still can’t figure out how to use Hugo’s Image Processing features, but I didn’t need them. The new site is accessible from the ‘Articles’ section up top. I’m not sure if I’ll move everything over or what I’ll do moving forward.

Detect web skimming with web automation

I was listening to the Darknet Diaries Magecart episode before the holidays and was thinking, “Magecart attacks should be pretty easy to detect with web automation”, so I wrote up how I would do it. If you run a web property that processes sensitive data, it might be of interest. Check it out here: https://articles.hotelexistence.ca/posts/browserautomationtodetectwebskimming/

I have been thinking about changing how I host this site, and decided to try it out for this article – more on this later.

Fixing ink blobs on Epson XP-830 prints

Black ink blobs dropped randomly on pages

My Epson XP-830 started dropping black ink globs on my prints, which would smudge and wreck photos. Having recently installed $150 worth of ink, I didn’t want to just go out and buy a new printer. I liked this printer’s compact format, but I wouldn’t buy the same model again, as it was starting to look like a doorstop after its second set of cartridges. I wasn’t worried about breaking the printer at this point, because I was ready to throw it out.

I managed to resolve the issue, and I’ve decided to write up what I did – perhaps someone will find this article and I’ll save a few printers from an early trip to the landfill. I expect this will work for any Epson XP printer.

First, I ordered a print head cleaning kit from Amazon (kit, Amazon link). In hindsight, I don’t think the print heads were actually the problem, but I did a number of things all at once, so I don’t know exactly which step resolved the issue. I recommend watching their video before ordering the kit.

The first step was getting the print head out of its right-side dock. Go to the menu, click maintenance, and then click Ink Cartridge Replacement.

Click proceed.

At this point, the print head will have moved to its change cartridge position. Disconnect the power.

I used card stock and paper towels to clean all of the ink I saw in the areas identified by red arrows

At this point, I took out the cartridges, and I wrapped them in plastic wrap, following the guidance of the Print Head Hospital.

I did clean the heads as instructed in the Print Head Hospital video, but I think what really made the difference for the black ink globs was this: using cheap paper towels and card stock, I cleaned up all the ink in the areas highlighted by arrows in the image above. I cleaned under the print head by cutting a ~1″ piece of card stock, wrapping it with a paper towel, and running it underneath the assembly as shown at the 3:40 mark in the video, repeating until the paper towel came out clean.

I plugged the printer back in, re-installed the cartridges, ran the regular print head cleaning cycle 3 times (until the test page came out fine), and am now getting perfect prints.

Good luck – hope this helps.

Code like it’s 1981

In my primary school years, I’d read my Dad’s “Compute!” magazines. Recently, I discovered they’ve been published on Archive.org (https://archive.org/details/compute-magazine), and I browsed through a few issues.

I came across this ad in a 1981 issue:

Ad for SORT, an EPROM with a sorting algorithm for Apple and Commodore PET owners.
SORT algorithm on EPROM for Apple and Commodore Pet

It’s a sorting algorithm, written in assembler, distributed on an EPROM chip, mounted on a circuit board, that you’d plug into your Commodore PET or Apple II computer and call from your BASIC program.

A few things I find interesting about this ad:

  • How big was the market in 1981, for people who were writing BASIC programs, couldn’t write a sorting algorithm, and would pay $55 per seat for one?
  • Anyone looking to sell a program they had built would have had to bundle this SORT product with it
  • At some point, sorting routines simply became built into languages and their standard libraries
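For contrast, the capability that cost $55 per seat in 1981 is a one-liner today – for example, in JavaScript:

```javascript
// Sorting is built into the language now. Note the comparator:
// the default sort compares elements as strings, so numbers need one.
const nums = [40, 7, 100, 2];
nums.sort((a, b) => a - b);
console.log(nums); // [2, 7, 40, 100]
```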

I actually found documentation for this product online:
http://mikenaberezny.com/wp-content/uploads/2013/10/sort-installation.pdf
http://mikenaberezny.com/wp-content/uploads/2013/10/sort-user-instructions.pdf
http://mikenaberezny.com/wp-content/uploads/2013/10/sort-review-compute-dec-1981.pdf

Creating Turing-test passing chatbots is getting easier

“The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”
Wikipedia

I heard a neat story recently, on the More or Less Human episode of Radiolab, about how it has become easier to write a chatbot that passes the Turing test, because how we communicate has changed.

Over the last 5-10 years, most of our chat clients (e.g. Messages, WhatsApp, Android Messages) have added auto-complete and canned responses. Often, when you try to type something unique, the chat client suggests something else.

And data entry on mobile devices, whether by keyboard or voice, makes it challenging to write natural human sentences.

Even when a human is writing, because of auto-correct, suggestions, poor data entry on mobile devices, and canned responses, our writing has become a lot more bot-like. Our expectations are for bot-like communication.

So, even without AI advances, it has become easier to write a Turing-test-passing bot. Conversely, a human from 30 years ago would probably identify a present-day human using a chat client on a mobile phone as a machine.

Antifragile – Hidden Benefit of Chaotic Systems

Although it isn’t about IT, the following book has ideas worth considering as we think about our systems:
Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb

“Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. What Taleb has identified and calls “antifragile” is that category of things that not only gain from chaos but need it in order to survive and flourish.”

The idea is that a collection of unaligned, disorganized systems suffers many small, recoverable failures that make the whole more resilient. Large, organized, homogeneous systems may suffer fewer small failures, but they are susceptible to larger ones that can lead to catastrophe.

I have seen some evidence of these patterns at work. I worked for years on a product which required an Intel server running RedHat acting as a proxy, an Intel server running Windows handling connectivity with other systems, a Sun server running the SunONE application server, and a Sun server running Oracle (all before Oracle bought Sun!). When I worked in support, a “Severity 1” server down alert might mean an issue with the application server, and a single client would be out of service.

In 2012, significant upgrades were made to our infrastructure. All of those Intel servers for all of our lenders were consolidated onto VMWare clusters. All of our Sun servers were consolidated onto larger Sun servers. Significant savings were realized in infrastructure expenses, and systems became easier to manage. The number of incidents decreased.

But as we consolidated our infrastructure, an outage now had much greater scale. A “Severity 1” server down alert now meant that multiple customers were out of service. As we consolidated our servers, we also consolidated our incidents. A Sev 1 became bigger and more complex. If we were using the number of Sev 1 incidents as a performance metric, were we counting the same thing?

As we look to the cloud, the potential scale is even bigger. What happens when all applications are hosted by Amazon AWS, Microsoft Azure, and Google Cloud? When every server runs Linux on Intel?

Given the choice, I don’t think anyone wants to manage an impossible patchwork of thousands of systems, unsupported by vendors, that no one understands, each running different versions of everything. However, the dangers of homogeneous systems should be considered as we design and assess our systems – there can be strength in disorder!

Hiring for Potential and Building The Amiga Team

I spent a good portion of my childhood in front of a Commodore Amiga 500, an amazing home computer for the late 1980s. I purchased mine used, after having saved months of hard-earned income delivering newspapers.

When author Brian Bagnall launched a Kickstarter campaign in 2015 to fund Commodore: The Amiga Years, a book about the history of the Amiga, I backed it. As Kickstarter projects go, I received it two years later (now you can buy it on Amazon).

The Amiga was a really neat computer with great capabilities for its price point, much of it enabled by a number of custom chips. The design of these chips was led by Jay Miner, a former Atari engineer. I was surprised to learn that for one of the chips, Miner hired Glenn Keller, an oceanographic engineer who was visiting California looking for work in submarine design and had no prior experience in chip design.

From The Amiga Years:
The engineer who would end up designing the detailed logic in Portia seemed like an unlikely candidate to design a disk controller and audio engine, considering he had no prior experience with either and didn’t even use computers.

In 1971, MIT accepted his application and he embarked on a masters in ocean engineering, graduating in 1976. As an oceanic engineer, Keller hoped to design everything from submersible craft to exotic instruments used in ocean exploration. “I’m the guy that builds all those weird things that the oceanographers use, and ships, and stuff,” he says.
When the oil crisis hit in 1973, Western powers began looking for alternative sources of energy. One of those potential sources was the power of ocean waves. The project caught Keller’s eye while he was attending MIT, and in 1977 he moved to Scotland to work for Stephen Salter, the inventor of “Salter’s duck”, a bobbing device that converted wave energy into electrical power.
The British government created the UK Wave Energy program and in turn, the University of Edinburgh received funds for the program. This resulted in them hiring Keller to work for the university.
The experience allowed Keller to develop skills in areas of analog electronics (with the study of waves playing an important role), digital electronics, and working with large water tanks to experiment with waves. “That resulted in some actual power generated from ocean waves,” he says. “It was a lot of fun.”
In March 1982, with oil prices returning to normal, the UK government shut down the Wave Energy program and Keller returned to the United States ready to continue his career in oceanographic engineering. He soon landed in California, where much of the development of submersibles was occurring. “I was up in the North Bay looking for oceanography jobs and ocean engineering jobs,” he recalls.

Soon, Keller was boarding a train for what would become a life changing experience. When he exited the train he was greeted by Jay Miner, wearing one of his trademark Hawaiian T-shirts. “I go to Sunnyvale, I show up at the train station, and there is this guy in a Lincoln Continental with a little dog sticking out,” laughs Keller.

One doubt Keller had was his lack of experience in the computer industry, or with personal computers of any sort. This was 1983, after all, and millions of personal computers had already permeated homes across North America. “I had done programming but I didn’t understand the world of personal computers or indeed the world of Silicon Valley,” he explains. “I hadn’t been there.”
Once at Koll Oakmead Park, Miner brought him into the shared office space with the whiteboards and block diagrams. Although Miner hoped the proposed system would have a great impact on Keller, he failed to get it. “I didn’t really understand why the architecture was so great in a general sense, because I didn’t know that much about where computers were at that point,” says Keller.
Instead, he hoped his diverse electronics background would give him enough skills for the job. “I had done a lot of electronics but no chips,” he says. “But I liked Jay and I always liked pretty colored wires. I had done a lot of different kinds of electronics. Being in ocean engineering, you do everything: digital, analog, interfaces, all that stuff. Even software. You do the whole thing. So I had a pretty broad base even though I hadn’t done chip design.”
Decades later, Keller sounds mystified as to why Miner would hire an oceanographic engineer into a computer company. “He hired me for some reason,” he says, musing the reason might be because, “I guessed correctly the difference between a flip flop and a latch.”
Most likely, Miner knew all he needed was an engineer with a good understanding of both analog and digital electronics for Portia. He could bridge the gap of chip design by mentoring a junior engineer.

A great story about a successful hire based on an assessment of someone’s potential to learn and grow.

Incidentally, in my high school years, that Amiga 500 landed me my first part-time job at Dantek Computers, a small store that assembled IBM PC clones. By this time, around 1994, the Amiga was obsolete, and parent company Commodore was bankrupt. At my interview, Dan of Dantek looked at my resume, saw “Amiga”, and said in French:
“Amiga – ça c’est un signe de bon goût” (“Amiga – now that’s a sign of good taste”). I started the next Thursday at 4 PM. I worked there after school for 2 years, and saved enough to pay for a good chunk of my engineering degree.