Category Archives: computing

Creating Turing-test passing chatbots is getting easier

“The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”
Wikipedia

I heard this neat story recently, on the More or Less Human episode of Radiolab, about how it has become easier to write a chatbot that passes the Turing test, because how we communicate has changed.

Over the last 5-10 years, most of our chat clients (e.g. Messages, WhatsApp, Android Messages) have gained auto-complete and canned responses. Often, when you try to type something unique, the chat client suggests something else.

AND, data entry on mobile devices, whether by keyboard or voice, makes it challenging to write human sentences.

Even when a human is writing, because of auto-correct, suggestions, poor data entry on mobile devices, and canned responses, our writing has become a lot more bot-like. Our expectations are for bot-like communication.

So, even without AI advances, it has become easier to write a Turing-test passing bot. Put another way, a human from 30 years ago would probably identify a human using a chat client on a mobile phone as a machine.

Antifragile – Hidden Benefit of Chaotic Systems

Although not related to IT, the following book contains ideas worth considering as we think about our systems:
Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb

“Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. What Taleb has identified and calls “antifragile” is that category of things that not only gain from chaos but need it in order to survive and flourish.”

The idea is that a collection of un-aligned, disorganized systems suffers many small, recoverable failures, which make the whole more resilient. Large, organized, homogeneous systems may suffer fewer small failures, but they are susceptible to larger failures which can lead to catastrophe.

I have seen some evidence of these patterns at work. I worked for years on a product which required an Intel server running RedHat acting as a proxy, an Intel server running Windows handling connectivity with other systems, a Sun server running the SunONE application server, and a Sun server running Oracle (all before Oracle bought Sun!). When I worked in support, a “Severity 1” server down alert might mean an issue with the application server, and a single client would be out of service.

In 2012, significant upgrades were made to our infrastructure. All of those Intel servers for all of our lenders were consolidated onto VMWare clusters. All of our Sun servers were consolidated onto larger Sun servers. Significant savings were realized in infrastructure expenses, and systems became easier to manage. The number of incidents decreased.

But as we consolidated our infrastructure, an outage now had much greater scale. A “Severity 1” server down alert now meant that multiple customers were out of service. As we consolidated our servers, we also consolidated our incidents. A Sev 1 became bigger and more complex. If we were using the number of Sev 1 incidents as a performance metric, were we counting the same thing?

As we look to the cloud, the potential scale is even bigger – here are a few examples:

What happens when all applications are hosted by Amazon AWS, Microsoft Azure and Google Cloud? When every server runs Linux on Intel?

Given the choice, I don’t think anyone wants to manage an impossible patchwork of thousands of systems that no one understands, unsupported by vendors, with different versions of everything. However, the dangers of homogeneous systems should be considered as we design and assess our systems – there can be strength in disorder!

Hiring for Potential and Building The Amiga Team

I spent a good portion of my childhood in front of a Commodore Amiga 500, an amazing home computer for the late 1980s. I purchased mine used, after having saved months of hard-earned income delivering newspapers.

When author Brian Bagnall launched a Kickstarter campaign in 2015 to fund Commodore: The Amiga Years, a book about the history of the Amiga, I backed it. As Kickstarter projects go, I received it 2 years later (now you can buy it on Amazon).

The Amiga was a really neat computer with great capabilities for its price point, much of it enabled by a number of custom chips. The design of these chips was led by Jay Miner, a former Atari engineer. I was surprised to learn that for one of the chips, Jay Miner hired Glenn Keller – an oceanographic engineer visiting California looking for work in submarine design, with no prior experience in chip design.

From The Amiga Years:
The engineer who would end up designing the detailed logic in Portia seemed like an unlikely candidate to design a disk controller and audio engine, considering he had no prior experience with either and didn’t even use computers.

In 1971, MIT accepted his application and he embarked on a masters in ocean engineering, graduating in 1976. As an oceanic engineer, Keller hoped to design everything from submersible craft to exotic instruments used in ocean exploration. “I’m the guy that builds all those weird things that the oceanographers use, and ships, and stuff,” he says.
When the oil crisis hit in 1973, Western powers began looking for alternative sources of energy. One of those potential sources was the power of ocean waves. The project caught Keller’s eye while he was attending MIT, and in 1977 he moved to Scotland to work for Stephen Salter, the inventor of “Salter’s duck”, a bobbing device that converted wave energy into electrical power.
The British government created the UK Wave Energy program and in turn, the University of Edinburgh received funds for the program. This resulted in them hiring Keller to work for the university.
The experience allowed Keller to develop skills in areas of analog electronics (with the study of waves playing an important role), digital electronics, and working with large water tanks to experiment with waves. “That resulted in some actual power generated from ocean waves,” he says. “It was a lot of fun.”
In March 1982, with oil prices returning to normal, the UK government shut down the Wave Energy program and Keller returned to the United States ready to continue his career in oceanographic engineering. He soon landed in California, where much of the development of submersibles was occurring. “I was up in the North Bay looking for oceanography jobs and ocean engineering jobs,” he recalls.

Soon, Keller was boarding a train for what would become a life changing experience. When he exited the train he was greeted by Jay Miner, wearing one of his trademark Hawaiian T-shirts. “I go to Sunnyvale, I show up at the train station, and there is this guy in a Lincoln Continental with a little dog sticking out,” laughs Keller.

One doubt Keller had was his lack of experience in the computer industry, or with personal computers of any sort. This was 1983, after all, and millions of personal computers had already permeated homes across North America. “I had done programming but I didn’t understand the world of personal computers or indeed the world of Silicon Valley,” he explains. “I hadn’t been there.”
Once at Koll Oakmead Park, Miner brought him into the shared office space with the whiteboards and block diagrams. Although Miner hoped the proposed system would have a great impact on Keller, he failed to get it. “I didn’t really understand why the architecture was so great in a general sense, because I didn’t know that much about where computers were at that point,” says Keller.
Instead, he hoped his diverse electronics background would give him enough skills for the job. “I had done a lot of electronics but no chips,” he says. “But I liked Jay and I always liked pretty colored wires. I had done a lot of different kinds of electronics. Being in ocean engineering, you do everything: digital, analog, interfaces, all that stuff. Even software. You do the whole thing. So I had a pretty broad base even though I hadn’t done chip design.”
Decades later, Keller sounds mystified as to why Miner would hire an oceanographic engineer into a computer company. “He hired me for some reason,” he says, musing the reason might be because, “I guessed correctly the difference between a flip flop and a latch.”
Most likely, Miner knew all he needed was an engineer with a good understanding of both analog and digital electronics for Portia. He could bridge the gap of chip design by mentoring a junior engineer.

A great story about a successful hire based on an assessment of someone’s potential to learn and grow.

Incidentally, in my high school years, that Amiga 500 landed me my first part-time job at Dantek Computers, a small store that assembled IBM PC clones. By this time, around 1994, the Amiga was obsolete, and parent company Commodore was bankrupt. At my interview, Dan of Dantek looked at my resume, saw “Amiga”, and said in French:
“Amiga – ça c’est un signe de bon goût” (“Amiga – now that’s a sign of good taste”). I started the next Thursday at 4 PM – I worked there after school for 2 years, and saved enough to pay for a good chunk of my engineering degree.

Security: Not a new problem

Here’s an OLD story about famous scientist Richard Feynman, who had fun cracking the safes of all his fellow scientists working on the Manhattan project in WW2:
http://www.cs.virginia.edu/cs588/safecracker.pdf (this is a long read best left for an evening at home).

What’s interesting is how easily you can draw parallels to the security issues we face today.  You could almost swap the word “safe” with “web application”, and “atom bomb design” with “financial data”, and the story would carry over to today. These safes/filing cabinets contained documents relating to the atomic bomb (i.e. something worth protecting).

To break the safes, he used:

  • social techniques
  • default safe codes
  • known design defects

Sound familiar?  What’s funny is that the reaction to his activities was not to improve security, but to try to keep him out of the rooms and pretend the problem didn’t exist.

Building SIO2Arduino to enable an Atari 800XL to use SD Cards

Last winter, I built an SIO2Arduino circuit – an adapter that enables the Atari to use disk images loaded onto a regular SD card.

My build of the SIO2Arduino SD Card Adapter

To the Atari, the SD card works just like a floppy drive.  It was built following the instructions found here:
http://whizzosoftware.com/sio2arduino/
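As I understand the SIO protocol such an adapter speaks, the Atari sends 5-byte command frames (device ID, command, two aux bytes, then a checksum), where the checksum is an 8-bit sum with any carry folded back in. A minimal sketch, with illustrative frame values:

```python
def sio_checksum(data):
    """Atari SIO checksum: 8-bit sum of all bytes, with any
    carry folded back into the low byte (end-around carry)."""
    total = 0
    for b in data:
        total += b
        total = (total & 0xFF) + (total >> 8)  # fold the carry back in
    return total

# Illustrative command frame: "read sector 1 from drive D1:"
# device 0x31 (D1:), command 0x52 ('R'), aux1/aux2 = sector number lo/hi
frame = [0x31, 0x52, 0x01, 0x00]
print(hex(sio_checksum(frame)))  # → 0x84
```

An SIO2Arduino-style device validates this checksum on every incoming frame before acting on the command, and appends the same style of checksum to the data frames it sends back.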

With a program called SDRIVE, I can select a disk image on the SD card, and then load it:

Selecting an Atari image on the SD card using the SDRIVE program

I never did get the adapter working perfectly – I can load certain disk images, such as Ballblazer, but not others, like Karateka.  I think it would take a lot more investigation, and perhaps digging into the code, to figure out how to fix this issue.

Ballblazer running on the Atari from the SD card

Until I get a suitable TV, this is likely as far as I’m taking this particular project.

Fragile Media

Am I the only one who worries about data?


In grade 5 (this was in the 80s), I was one of perhaps two kids who typed up their projects in a word processor.  The night before a project was due, WordPerfect 4.2 froze on me, the 286 I was running it on wouldn’t boot again, and my work was lost.  I had actually printed it, and handed in a marked-up draft.  But an important lesson was learned, very early – keep backups.  Note how resilient the daisy-wheel printed draft was, and how fragile the PC was.

Flash forward to the current day.  Most of the work I produce belongs to my employer – I see that data as their responsibility.  What they lose costs them; they can back up the work I produce as they choose.

But of interest to me in my personal life is MY data.  And I have lots.  I have lost my fair share of hard drives, floppies, and CDs/DVDs over the years.  But I’ve been lucky – apart from:
1) MS Basic 2 games written as a primary school student on a C64
2) email from pre-2001, lost due to silly university data retention policies (and they were silly – I bet the Alumni department wishes they could reach me by email now)
I haven’t lost anything, thanks to reasonable backups.  But I’ve never dealt with fire, loss, or theft.

I have little use for physical media.  I live in an apartment, with little storage space for prints.  So I’ve prioritized – I have little work or correspondence to back up – all of the physical stuff sits in the top drawer of a small filing cabinet.  I hope for the best for the important documents.

Photos of events are shared online – not much use for prints.  Or is there?  I had been copying camera flash cards to a hard drive and optical media.  The optical media is brand name and stored in a dark place, but I still don’t trust it.  I’ve seen photos of myself as a child – I think my daughter deserves the same.  “Oh, Nameless Manufacturer put out a bunch of drives with buggy firmware in 2009 which ate your first step photos” isn’t going to cut it for me as an excuse.

What about music?  I’m still pretty old school – the indie record store across the street shut its doors last year – but I still buy CDs.  Prior to a 2009 hard drive crash, I never backed up my MP3s, thinking, “I’ve got the original on CD”.  But I don’t have the time I did as a student – ripping 100s of CDs takes A LOT OF TIME.  As for iTunes – I had fun as a teen flipping through my dad’s LPs – I don’t think any child born in 2010 will get the chance to hear the music of their parents’ youth (I can imagine “I bought that album BEFORE Apple removed DRM” or “the Avril Lavigne hard drive must have crashed in 2005”).

Hard drives are pretty cheap these days – the last time I bought a drive, I bought two, and set one up to copy over to the other once a week.  The stuff on the 2nd drive never gets deleted – so if I accidentally delete a folder on my working drive, I can get it back from the 2nd (point-in-time backups).  I run Linux at home, and use a little app called Back In Time to accomplish this task.  The last time I looked at this from the Windows side, there was an application called “Dantz Retrospect” that seemed to get a lot of these things right – it seemed like the perfect solution for a small office (I think they’ve since been bought out by EMC).
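The point-in-time idea can be sketched in a few lines of Python. This is a hypothetical toy, not how Back In Time actually works internally (my understanding is it builds on rsync-style hard-link snapshots), but the trick is the same: each snapshot looks like a full copy of the tree, while unchanged files are hard-linked to the previous snapshot so they take no extra disk space:

```python
import os
import shutil

def snapshot(source, backup_root, stamp):
    """Copy the `source` tree into backup_root/stamp.  Files unchanged
    since the most recent snapshot (same size and mtime) are hard-linked
    to the old copy instead of duplicated, so every snapshot is a full
    point-in-time view but unchanged files share disk space."""
    snaps = sorted(os.listdir(backup_root)) if os.path.isdir(backup_root) else []
    prev = os.path.join(backup_root, snaps[-1]) if snaps else None
    dest = os.path.join(backup_root, stamp)
    for dirpath, _dirs, files in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        dst_dir = dest if rel == "." else os.path.join(dest, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            old = os.path.join(prev, rel, name) if prev else None
            st = os.stat(src)
            if old and os.path.exists(old):
                ost = os.stat(old)
                if st.st_size == ost.st_size and int(st.st_mtime) == int(ost.st_mtime):
                    os.link(old, dst)   # unchanged: share the previous copy
                    continue
            shutil.copy2(src, dst)      # new or changed: full copy, keeps mtime
    return dest
```

Restoring is then just copying a folder back.  The catch still applies, of course: a second drive in the same apartment protects against drive failure and accidental deletion, not fire or theft.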

Of course, as important as making backups is making sure you can restore them.  I have to say, I’m not that forward-thinking.  I think I need my own IT employee at home to think this through.

Backup is actually something I think Apple does best for home users (but, perhaps, still not well) – I think Time Machine is probably the best thing to happen to the masses who don’t really have the time or interest to worry about this stuff.

This still doesn’t cover loss due to fire or theft.  Fortunately, online storage is pretty inexpensive these days – cheap enough for the data I create, but still too expensive and slow for the music and video I consume.  On Linux, I use s3fs to back up my work to Amazon’s S3 online storage service – it costs me a couple of dollars a month.  On the Windows side, I haven’t tried either of these solutions, but previous searches turned up JungleDisk for Amazon S3 backups and mozy.com.

What I don’t understand is why this is still a problem.  Thirty years of personal computing, and no one has fixed it.  We’ve got a long way to go.  Anyone think the “cloud” will solve this? 🙂