Hours of Fun Creating Visual Art with Prompts

“koala bear eating eggs benedict in a bistro” is the prompt I entered into OpenAI’s DALL·E system to generate this image. I have been reading articles about AI image generation since DALL·E 2 launched earlier this year, and I have been experimenting with it hands-on since I received access earlier this week. It is lots of fun, but it doesn’t always generate the results you might expect. I’ve been trying to describe artist Henri Julien’s La Chasse-galerie – a drawing of 8 voyageurs flying in a canoe at night – to it, and DALL·E struggles with the flying canoe. For each prompt, DALL·E initially creates 4 image variations – I’ve selected the most interesting one for each below.

La Chasse-galerie by Henri Julien
4 men in a flying canoe at night, generated by DALL·E
4 men paddling a flying canoe through the sky at night, generated by DALL·E
4 men in a canoe in the sky at night, generated by DALL·E
lumberjacks flying in a canoe past the moon, by DALL·E. I like how they are chopping the tree while flying the canoe.

Generating Images on your PC

There are models you can run on your home PC. I’ve checked out min-dalle (used by https://www.craiyon.com/) and stable-diffusion (demo). They can both create imagery that roughly matches my prompts, which is amazing. I found the output from min-dalle a bit crude, with distorted features. The output I’ve generated from stable-diffusion is much better.
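If you want to try stable-diffusion at home and have a recent GPU, a minimal sketch using the Hugging Face diffusers package (my assumption – the stable-diffusion repository linked above also ships its own scripts) looks something like this:

import torch
from diffusers import StableDiffusionPipeline

# Download the model weights (requires accepting the model licence on Hugging Face)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # needs a CUDA-capable GPU with enough VRAM

# Generate a single image from a text prompt and save it
image = pipe("a painting of a panda climbing a skyscraper").images[0]
image.save("panda.png")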

For me, quality really makes a difference in how much I get out of the tool. Generating low-quality images isn’t as much fun. Perhaps this is a reflection of my artistic skills – I can make a crappy drawing of a panda climbing with a ballpoint pen while watching a PowerPoint in a random meeting. But I personally don’t have the skill to render my ideas as well as DALL·E – maybe that’s what makes it so fascinating.

A painting of a panda climbing a skyscraper, generated by DALL·E mini
A painting of a panda climbing a skyscraper, generated by stable-diffusion
A painting of a panda climbing a skyscraper, generated by DALL·E

Not all fun and games

As much fun as this is, there is a lot of controversy about the implications of making tools like this accessible.

NBC News actually has a pretty good article about the biases exhibited by DALL·E. For example, ‘a flight attendant’ only produces images of women. In this respect, it is very much like other tools currently in the marketplace – I was curious, and searched Getty Images for ‘flight attendant’ stock photos, and found only women in the top results. Image generation tools continue to propagate the bias problems we see everywhere.

An Atlantic columnist raises some other interesting points about the backlash he faced when he used generated images in his newsletter, as opposed to professional illustration that you might see in a magazine feature. Here are some further thoughts on the ideas he presented:

  • Using a computer program to illustrate stories takes away work that would go to a paid artist: “AI art does seem like a thing that will devalue art in the long run.” – I almost wonder here if the computer program just becomes another tool in the artist’s arsenal. Is the value of an illustrator in the mechanics of creating an image, or in visually conveying an idea? What if we think of a program like DALL·E as a creative tool, like Photoshop? Did Photoshop devalue photography?
  • There is no compensation or disclosure to the artists who created the imagery used to train these art tools – “DALL-E is trained on the creative work of countless artists, and so there’s a legitimate argument to be made that it is essentially laundering human creativity in some way for commercial product.” – At primary school age, my children brought home art created using pointillism techniques, without compensating or disclosing inspiration from Georges Seurat and Paul Signac. Popular image editing packages have had Van Gogh filters for years. We all learn and build on the work of those who preceded us. Once a style or idea is presented to the world, who owns it? Is the use case of training an algorithm a separate right, different from training an artist?

There are also challenges with image generation tools facilitating the creation of offensive imagery and disinformation, making it easier to cause harm than it is today with existing tools like Photoshop.

These tools will continue to progress, and will create change in ways, and on a scale that are hard to predict. It remains to be seen if we collectively decide to put new controls in place to address these challenges. In the meantime, I’ll be generating imagery of pandas and koalas in urban environments.

CRTC publishes Rogers’ Response

I have been in IT “war room” type situations a number of times, working to get service to a production system restored. It was with professional interest that I followed the Rogers outage on July 8th, 2022.

For anyone looking for more information than what was covered in the media, Rogers’ response to the CRTC’s questions about the incident was published and can be downloaded from the CRTC (the .DOCX link on the July 22, 2022 post).

Rogers’ response has been redacted, and is light on specifics (eg: no information about their network). However, there were some interesting details, such as how Rogers issues alternate mobile SIMs from competing mobile carriers to some of its technical teams to maintain contact in the event of an outage like this one.

Photo: Diefenbunker Situation Room, CC BY-SA 3.0, by Wikipedia User Z22

Exploring Bluetooth Trackers at GeekWeek 7.5

I recently participated in GeekWeek 7.5, the Canadian Centre for Cyber Security’s (CCCS) annual cybersecurity workshop. I was assigned to a team of peers in banking, telecom, government and academia. We were to work together on analyzing how Bluetooth item trackers (eg: Apple AirTags, Tile) can be covertly used for malicious purposes, and developing processes and tools to detect them.

It was my first time attending the event, and I wasn’t sure what to expect. Here’s how it worked. The CCCS builds teams from the pool of GeekWeek applicants, based on interests and skills identified by the applicant in the application process. Leading up to the event, CCCS appoints a team lead, who defines goals for the team.

A map with a drive to Vaughan that was tracked at multiple points.
Testing a homemade AirTag clone

I was appointed as a team co-lead and assigned a team. My co-lead and I were to define a challenge in the IoT space. Inspired by recent headlines concerning stalking facilitated by AirTags, my co-lead suggested analyzing how item trackers can be covertly used for malicious purposes, and developing processes and tools to detect them.

The team worked in 5 streams:

Collecting Data

An existing baseline data collection tool was modified to log Bluetooth LE data for further analysis.
https://github.com/raudette/geekweek-7.5_1.3_loggingble
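The repository has the actual tool we used; purely as an illustration of the approach, a minimal sketch using the bleak library (assuming a recent bleak, and assuming the Apple “Find My” advertisement format documented by the OpenHaystack project – Apple company ID 0x004C, payload type 0x12) might look like this:

import asyncio
import csv
import time
from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C  # manufacturer ID used in Apple BLE advertisements

log = csv.writer(open("ble_log.csv", "w", newline=""))
log.writerow(["timestamp", "address", "rssi", "payload_hex"])

def on_advertisement(device, adv):
    # keep only Apple "Find My" (offline finding) advertisements
    data = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if data and data[0] == 0x12:
        log.writerow([time.time(), device.address, adv.rssi, data.hex()])

async def main():
    async with BleakScanner(on_advertisement):
        await asyncio.sleep(600)  # log everything seen over 10 minutes

asyncio.run(main())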

Detect Stealth AirTag Clones

“Can you find my stealth AirTag clone in this data?” we asked the data scientist on our team.
🍰 <piece of cake>, he responded.

As various news outlets reported about the malicious use of these trackers, manufacturers implemented anti-stalking measures which were quickly defeated by research efforts such as find-you. The find-you tracker avoids detection by rotating through identifiers – to an iPhone, it appears no differently from walking past other people with iPhones.

The team built find-you stealth trackers which rotated through 4 keys and went to work. We wanted to see if we could develop a technique for detecting and identifying the trackers. Data was collected by stalking ourselves, walking through busy urban areas with find-you trackers, and logging all Apple Find My Bluetooth advertisements.

Our hypothesis was that we could identify the find-you tracker based on signal strength patterns. We believed that the signal strength of the find-you tracker we were wearing would be consistent over time, as its location relative to our data logger wouldn’t change, while the signal strength of other Find My devices would vary as we walked past other pedestrians carrying them.
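To make that concrete, here is a rough sketch of the idea (not the model the team actually built) – given a log of (timestamp, key, RSSI) readings, flag the keys whose signal strength barely varies:

import statistics
from collections import defaultdict

def flag_candidate_trackers(readings, max_rssi_stddev=3.0, min_samples=20):
    # readings: list of (timestamp, key, rssi) tuples collected while walking
    by_key = defaultdict(list)
    for _, key, rssi in readings:
        by_key[key].append(rssi)
    candidates = []
    for key, rssis in by_key.items():
        # a key seen many times at a nearly constant signal strength is
        # likely travelling with us, not a passer-by's phone
        if len(rssis) >= min_samples and statistics.stdev(rssis) <= max_rssi_stddev:
            candidates.append((key, statistics.mean(rssis), statistics.stdev(rssis)))
    return sorted(candidates, key=lambda c: -c[1])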

A team member assessed the data and built a model for identifying the keys based on signal strength.

Stealth Tag Easily Identified

The stealth tag, with its four keys, could easily be picked out in the data.

We experimented with randomizing the transmit power and time between key rotations of the stealth tracker.

Stealth Tag with Varying Transmit Power

Even at its lowest transmit power, the stealth tag’s four keys could still easily be picked out in the data.

Enhancing Stealth AirTag Clones

We created a “SneakyAirTag”, based on Positive Security’s Find-You tag (https://github.com/positive-security/find-you), which itself was built on the excellent work of https://github.com/seemoo-lab/openhaystack.

It tries to improve the stealthiness as follows:

  • Rotates keys on a random interval, between 30 and 60 seconds (Find-You rotates every 30 seconds)
  • Transmits at random power levels, in an attempt to avoid detection based on a consistent signal strength.

In our testing, even with these changes, we were still able to identify stealth tags in an area with other Apple Find My devices. Even at the ESP32’s lowest power level, the stealth tracker can be identified, as its signal strength is higher than that of other devices in the vicinity of the tracked subject.

Further areas for exploration would be reducing signal strength with shielding material or some other means, and adding an accelerometer such that the device only transmitted Bluetooth advertisements when the target subject is in motion.

Building a Clone Tile Tracker

The team was inspired by Seemoo Lab‘s work on the AirTag. Could we do similar things with the Tile tracker?

And we figured out some things, but got stumped. Tile uses the Bluetooth MAC address to track a Tile. We got our clone to the point where we could take the MAC of a Tile registered to our account, load our firmware with that MAC onto an ESP32, and walk around with our Tile app tracking it.

But it seemed like if we sent our Tile’s MAC address to someone else to load onto their ESP32 and track with their Tile app, it wouldn’t report the location. And, although Tile reports thousands of users in my area, even a genuine Tile didn’t seem to get picked up by other Tile users as we walked through office food courts, malls, or transit hubs. As a result, at the end of two weeks, many questions remained unanswered.

We also experimented with altering transmit power and rotating through MAC addresses as a means of avoiding detection. Our work can be found here:
https://github.com/raudette/geekweek-7.5_1.3_tileclone

Blueprints

One of our team members had access to a few HackRF One Software Defined Radios (SDRs), and set about learning how to use them. They wanted to duplicate the results of a paper demonstrating that, with an SDR, one can identify individual Bluetooth devices based on differences in how they transmit that result from minute manufacturing imperfections. The team member called their Bluetooth device fingerprint a Blueprint, and documented their work here: https://github.com/raudette/geekweek-7.5_1.3_blueprint

Overall Experience

The experience of stepping away from the day-to-day, learning about different technologies and building things was very similar to my previous experiences participating in corporate hackathons.

What made GeekWeek exciting for me was leading a team of people from different companies, with different professional backgrounds. At a corporate hackathon, everyone already knows their teammates and what they’re capable of; they share the same corporate culture, typically work on the same software stack, and have the same collaboration tools pre-loaded on their laptops. At GeekWeek, it was interesting just breaking down the problem and finding out who had what tools, who had what experience, and who was best suited to take on each task. It was also interesting to hear about everyone’s professional work – some were really pushing boundaries in their fields.

I hope to participate again in the future!

Virtual Hackintosh

Ever since I first read about Hackintoshes, I’ve thought about building one. A friend of mine edits all of his video on a purpose-built Hackintosh. I never did build one – for myself, I like to run Linux, I don’t really need a Mac for anything, and I find that off-lease corporate-grade laptops are the best value in computing. But every once in a while, I have something I want to build on my iPhone, and a Mac is like a dongle that makes it possible.

Simple things, like finding out why the site I’ve built with the HTML5 geolocation API will work on my PC, but not on my iPhone, are just not possible without a Mac. To solve these types of problems in the past, I’ve just borrowed a Mac from the office.

Recently, another project came up, and I decided to try to build a virtual Hackintosh with OSX-KVM – a package that simplifies running OSX in a virtual machine. Years ago, I tried running OSX in VMware Player – this would have been on an Intel Core 2 system at the time – and in my opinion it was interesting, but unusable. My experience with OSX-KVM has been much better.

Just following the instructions, within a couple of hours I had OSX Big Sur running in a VM. OSX-KVM does almost everything, including downloading the OS from Apple. My PC is pretty basic – a Ryzen 3400g, an SSD, 16 GB of RAM. I assigned the VM a couple of CPU cores and 12 GB of RAM, and it’s usable.

In a few hours of dabbling around, I’ve come across four issues:

  1. First, I couldn’t “pass through” my iPhone from the host PC to the guest OSX OS. The best overview of this issue I could find is logged on Github. There are configuration related workarounds, but I decided the easiest way to solve it was to acquire a PCIe USB controller and allocate it to the VM. PCIe cards with the FL1100 and AS3142 chipsets are said to work with OSX – I ended up buying this AS3142 card, as it was the one I could get my hands on quickest, and it seems to work fine – I can see my iPhone in OSX now, and run the dev tools for Safari iOS.
  2. Second, I can’t log into Apple ID, and as a result, I can’t compile iOS apps in Xcode. It looks like this is solvable.
  3. Chrome doesn’t display properly. I wonder if this is related to graphics acceleration. I don’t need Chrome to work, but it’s a reminder to me that this exercise is really just a big hack – I don’t think I would count on this to do any real work.
  4. Finally, I seem to lose keyboard/mouse control of the VM if it goes to sleep, and I can’t seem to get it back. I’m sure this is solvable, but I addressed it by turning sleep mode off.

Presumably, Hackintoshes will eventually stop working as Apple moves its platform to ARM processors, but for now, it’s definitely worth trying out.

Update (2022/03/03):
The Apple ID issue was addressed by following the previously linked instructions, and the Chrome issue was resolved by turning off GPU acceleration (--disable-gpu).

Update (2022/03/26): I needed Bluetooth. I bought an Asus BT-400 adapter, and it seems to work fine. AirDrop functionality doesn’t seem to work – I haven’t investigated. And – not a Hackintosh issue, but a Mac issue – the CH9102F serial chip on one of my ESP32 microcontroller boards isn’t natively supported by Big Sur, but I found a third-party driver. Apple Maps doesn’t work, as it requires a GPU, and the OpenHaystack project that I’m playing with requires Maps.

Smart Dashcam for Bicycles – Part 7: Training A Vision Model

I wanted to build my own vision model for a few reasons:

  1. I wanted to learn how
  2. In my limited experience with OpenALPR, it looked like it was missing some license plates that seemed fairly readable to my eyes – could I possibly do better training my own model?
  3. Just because of the way it is built, I knew I wouldn’t be able to get OpenALPR to run faster on my Pi by offloading image processing to a VPU like the Myriad X in my Oak-D camera.
  4. The gen2-license-plate-recognition example provided by Luxonis, built from Intel’s Model Zoo, does not work well with Ontario license plates

The first step was building a library of images to train a model with. I sorted through hundreds of images I’d taken on rides in September, and selected 65 where the photos were clear and there were license plates in the frame. As this was my first attempt, I wasn’t going to worry about sub-optimal situations (out of focus, low light, over-exposed, etc.). I then had to annotate the images – draw boxes around the license plates in the photos and “tag” them as plates. I looked at a couple of tools – I started with Microsoft’s VoTT, but ended up using labelImg. labelImg was efficient, with great keyboard shortcuts, and used the mouse scroll wheel to control zoom, which was great for labeling small details in larger photos.
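If the labels are saved in YOLO format (labelImg supports both Pascal VOC XML and YOLO text files; YOLO is what darknet-based training expects), each image gets a matching text file with one line per box: a class id, then the box centre and size, all normalized to the image dimensions. A hypothetical single-plate annotation might look like:

0 0.512 0.634 0.081 0.027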

I then tried one tutorial after another, and struggled to get them to work. Many examples were set up to run on Google Colab. I found that when I was following these instructions and got to the part where I was actually training the model, Colab would time out. Colab is only intended for short interactive sessions – perhaps it wouldn’t work for me because I was working with higher resolution images, which take more computing time.

What I ended up doing was manually running the steps from the Train_YoloV3.ipynb notebook from pysource, straight in the console. As my home PCs don’t have dedicated GPUs, I set up a p3.2xlarge Amazon EC2 instance to run the training. If memory serves, training against those 65 images, using the settings from that tutorial, took a couple of hours.

I took the model I created from my September rides, and then tested it against images from my October rides – I’m surprised how well it worked.

My Yolov3 model running on Oak-D

Since training that model, I’ve been on the lookout for an NVIDIA video card I can use for training at home. It’s hard to know for sure, but it seems it wouldn’t take long to recoup the cost of a GPU versus training on an EC2 instance in the cloud, and I can always resell a GPU. I’ve tried a few times with the fastest CPU I have in the house (a Ryzen 3400g), and it just doesn’t seem feasible. I haven’t seen a cheap GPU option, and prices have only gone higher since I started looking in November.

I don’t have usable code or a useful model to share yet – at this point, I’m mostly learning and trying to figure out the process.

Printing and Binding an ePub eBook

I wanted a hard copy of an eBook I had that is out of print. There are many resources out there for binding books. Many recommend using acid free PVA glue. I can’t speak to how it compares to other glues, but “Aleene’s Tacky Glue” is a PVA glue, available acid free, which was available at craft stores in my area.

This post will focus on prepping an eBook for print. US Letter is the common paper size here, but it’s too big for a book, so I decided to print 4 pages per US Letter sheet – 2 pages per side, each 5.5″ wide by 8.5″ tall.

First, I loaded the book into Calibre, opened it, and printed it to PDF. For this exercise, I’ve used Ian Fleming’s Casino Royale, which is out of copyright in Canada.

Calibre Print to PDF

Next, I had to re-arrange the pages. If we just print 2 pages per side, duplex, page 4 will end up on the back of page 1. We want page 2 on the back of page 1 – so we want to reorder the PDF following the pattern 1, 3, 4, 2, 5, 7, 8, 6… This LibreOffice spreadsheet might help: Pages.ods

Illustration of required page ordering
Pages have to be re-ordered for regular duplex printing – page 2 has to be on the back of page 1!

PDFTK is a great tool for re-ordering PDFs. I have re-ordered the book, skipping the first page, with PDFTK as follows:
pdftk Casino\ Royale.pdf cat 2 4 5 3 6 8 9 7 10 12 13 11 14 16 17 15 18 20 21 19 22 24 25 23 26 28 29 27 30 32 33 31 34 36 37 35 38 40 41 39 42 44 45 43 46 48 49 47 50 52 53 51 54 56 57 55 58 60 61 59 62 64 65 63 66 68 69 67 70 72 73 71 74 76 77 75 78 80 81 79 82 84 85 83 86 88 89 87 90 92 93 91 94 96 97 95 98 100 101 99 102 104 105 103 106 108 109 107 110 112 113 111 114 116 117 115 118 120 121 119 122 124 125 123 126 128 129 127 130 132 133 131 134 136 137 135 138 140 141 139 142 144 145 143 146 148 149 147 150 152 153 151 154 156 output collated.pdf
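If you don’t want to maintain a spreadsheet, a short script can produce that page order. A rough sketch (a hypothetical helper, assuming the first page is skipped as above; any leftover pages at the very end may still need manual adjustment):

def duplex_order(first_page, last_page):
    # each group of 4 consecutive pages (p, p+1, p+2, p+3) is emitted as
    # p, p+2, p+3, p+1 so that page p+1 lands on the back of page p
    order = []
    p = first_page
    while p <= last_page:
        for n in (p, p + 2, p + 3, p + 1):
            if n <= last_page:
                order.append(n)
        p += 4
    return order

# prints "2 4 5 3 6 8 9 7 ..." – paste into the pdftk cat arguments
print(" ".join(str(n) for n in duplex_order(2, 156)))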

Next, I used a tool called pdfjam to fit 2 pages per side:
pdfjam collated.pdf -o collated-2perpagealternate.pdf --nup 2x1 --landscape

I sent this PDF to my local printer and had them cut the pages in half for me. With this output, I bound the book, roughly following a YouTube tutorial. My book turned out OK, but it feels like it would take me a few more attempts to get a book as sturdy as a commercially bound one.

Smart Dashcam for Bicycles – Part 6: Experimenting With A New Camera Platform

One of the features I have in mind for my bicycle dashcam is license plate recognition. In parts 1, 2 and 3, I experimented with the OpenALPR license plate recognition library and a couple of different Pi cameras. I encountered a few challenges:

  • Image quality challenges: out-of-focus images, warped images due to the “rolling shutter” of the Pi camera
  • Field of view: capturing more than just the license plate
  • Speed: Only able to process 1 image every 8 seconds on my Pi 3

I acquired the Luxonis Oak-D AI-accelerated camera to experiment: its different image sensors could potentially address my image quality challenges, its stereo vision/depth sensing offered interesting capabilities, and its AI acceleration could increase processing speed. This spring, I mounted it to my bike and started capturing images on my rides.

I had issues with my Pi 3 – it would stop running reliably after a minute or two. I suspect it had been damaged by vibration on previous rides, strapped to my bike rack. I acquired a new Pi 4, and was up and running again.

Initially, with the Oak-D setup, I had a lot of the same image quality problems I was having with the Pi 1 and 2 cameras – lots of out-of-focus images, as the camera just kept trying to focus, which is a hard problem when taking photos of moving traffic on a bumpy bicycle ride. My application would also crash – this turned out to be due to filling buffers, as I was writing more data to my USB thumb drive than it could handle. I ended up getting acceptable results by reducing my capture speed to 2 fps, recording at 4056×3040, turning auto focus off, locking the focus at its 120 setting, and setting the scene mode to Sports, in the DepthAI API as follows:

rgb.setFps(2)  # capture at 2 fps so the USB thumb drive can keep up
rgb.initialControl.setManualFocus(120)  # lock focus rather than hunting on a bumpy ride
rgb.initialControl.SceneMode(dai.CameraControl.SceneMode.SPORTS)
rgb.initialControl.setAutoFocusMode(dai.RawCameraControl.AutoFocusMode.OFF)  # disable autofocus

With these settings, images are focused in the narrow range where it’s possible to read a license plate – when cars are too far back, the plates are impossible to read anyway, and it doesn’t matter if that’s out of focus. Luxonis will soon launch a model with fixed focus cameras, which should further improve image quality in high vibration environments. I hope to try this out in the future.

I wanted to build a library of images I could later use to test against various machine vision models, and potentially to train my own. I posted the question on the Luxonis Discord channel – their team directed me to their gen2-record-replay code sample. This code allows you to record imagery and later play it back against a model – it was exactly what I needed. So I started to collect imagery on my next few rides.

Lilygo TTGO, TFT_eSPI, and the Dino/T-Rex Game

Ever since Espressif’s ESP8266 Wi-Fi-capable microcontroller launched, I’ve been thinking about all the possibilities for low-cost network-connected devices. Nothing world-changing, but I have used it to build a data-logging CO2 monitor and a device to control my old TV with Alexa.

I have been thinking of new possibilities as I see development boards built around the ESP8266’s successor, the ESP32, with a small screen, for less than $20 CAD shipped from AliExpress. What can I build with a really tiny internet-connected dashboard? So I ordered a Lilygo TTGO.

A day after it arrived, Hackaday published an article about a re-creation of Google Chrome’s T-Rex game for this TTGO dev board. Getting that loaded onto the board seemed like a good test. I downloaded TRexTTGOdisplay, installed Lilygo’s TFT_eSPI driver, compiled, and…

undefined reference to `TFT_eSprite::pushToSprite(TFT_eSprite*, int, int, unsigned short)'

Hmm. I search around, and I see a hint in the comments of the author’s YouTube video: “You will need to update tft library”. I find the source of the TFT_eSPI library, review it a bit, and see that it is designed for a number of microcontrollers and screen controllers – so I copy the User_Setup_Select.h from the Lilygo repository to Bodmer’s most recent TFT_eSPI library. For anyone doing this now, this will fix the TFT_eSprite::pushToSprite issue and just work… but I got:

TFT_eSPI/TFT_eSPI.cpp: In member function 'virtual void TFT_eSPI::drawPixel(int32_t, int32_t, uint32_t)':
TFT_eSPI/TFT_eSPI.cpp:3289:21: error: 'SPI_X' was not declared in this scope
while (spi_get_hw(SPI_X)->sr & SPI_SSPSR_BSY_BITS) {};


I take a look at what’s happening around line 3289 in TFT_eSPI.cpp, and it appears to be optimization code for the RP2040 – it shouldn’t be compiled in… Taking a look at line 3285:

// Temporary solution is to include the RP2040 optimised code here
#elif (defined (ARDUINO_ARCH_RP2040) || !defined (ARDUINO_ARCH_MBED)) && !defined(TFT_PARALLEL_8_BIT)

See that exclamation point? Everywhere else in the code where there are RP2040 optimizations, I see:

// Temporary solution is to include the RP2040 optimised code here
#elif (defined (ARDUINO_ARCH_RP2040) || defined (ARDUINO_ARCH_MBED)) && !defined(TFT_PARALLEL_8_BIT)

Cool, I’ll submit a patch. So I fork the code, and… I don’t see the bug – it’s already been fixed.

I was hoping for another successful contribution to open source, but I was beaten to the punch – if I had started this project a day later, it just would have worked with the latest TFT_eSPI library. In any case, the important thing is, I got TRexTTGOdisplay running. My next project for this dev board will be a little internet connected dashboard.

Smart Dashcam for Bicycles – Part 5: Blindspot Detection

I continue to experiment with how a dashcam can assist urban cyclists. This time, I’ve started a fresh design with a different idea, a new camera, new models, and new code, which I’m submitting as an entry for the Toronto ♥️’s Bikes Make-a-Thon.

I enjoy biking from my home in North York, near Mel Lastman Square, to my office near Union Station during the week. The most harrowing part of this ride is the Yonge-401 interchange, which requires two lane changes in fast-moving traffic from the 401 on- and off-ramps.

Yonge – Hwy 401 Interchange

As cyclists in the city, we all have “scary spots” like these on our routes. I would like to present you with a Smart Dashcam for Bicycles as a tool for these challenges. A dashcam could:

  • increase safety
  • collect evidence in the event of an incident
  • gather data

For the purposes of the Make-a-Thon, I have built a smart dashcam with blind spot detection, similar to what you would see in modern cars. The IIHS says that in cars, this feature lowers the rate of all lane-change crashes by 14 percent.

My prototype consists of a laptop, a USB AI accelerated camera from Luxonis mounted to my bicycle seat post, and my smartphone as a display. It’s a few hundred lines of Python code that builds on a freely available AI vehicle recognition model from the Intel Open Model Zoo. I’ve built on the license plate recognition and MJPEG video streaming sample code from Luxonis that was supplied with the OAK-D camera. I tether the laptop to the smartphone using wifi, and I use an iOS app called IPCams to view the video stream.
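The Luxonis sample code handles the streaming in my prototype; just to illustrate the general idea of serving MJPEG over HTTP to a phone, a generic sketch (using Flask and OpenCV – not my actual code) looks roughly like this:

import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # stand-in for frames coming out of the OAK-D pipeline

def mjpeg_frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        # each part of the multipart response is one JPEG frame
        yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n"

@app.route("/stream")
def stream():
    return Response(mjpeg_frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

app.run(host="0.0.0.0", port=8080)  # view http://<laptop-ip>:8080/stream on the phone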

Bicycle Dashcam with Smartphone Display

The vehicles are recognized and identified. The video is streamed over wifi to the smartphone. A caution alert is added to the video when a vehicle is detected.

Phone screen shot. Caution, car approaching
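The overlay itself is simple. A rough sketch of the idea with OpenCV (a hypothetical helper, not my exact code), given detections as pixel-coordinate boxes from whatever vehicle model is in use:

import cv2

def annotate_frame(frame, detections, threshold=0.5):
    # detections: list of (xmin, ymin, xmax, ymax, confidence) in pixel coordinates
    vehicle_seen = False
    for xmin, ymin, xmax, ymax, conf in detections:
        if conf < threshold:
            continue
        vehicle_seen = True
        cv2.rectangle(frame, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 0, 255), 2)
    if vehicle_seen:
        cv2.putText(frame, "CAUTION: vehicle approaching", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return frame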

A demo video can be found here: https://youtu.be/zMTRDsA6uJM

In this proof of concept, the dashcam is just a fancy, complicated, expensive rear-view mirror. A final version would expand on this functionality by integrating features such as:

  • Sounding an audible alert when danger is detected
  • Recording the speed and proximity of the cars around you
  • Integrated GPS
  • Cloud and social features for sharing data with the city and fellow cyclists
  • A car driver readable display, eg: “Driver ABCD1234, your current speed is 45”. Like a mobile Toronto Watch Your Speed program sign. Would a driver allow a cyclist more space if they were aware their actions are being logged?

Parts:
Luxonis OAK-D Camera

Bike Phone Mount
Bike Camera Mount
IPCams for iOS (to watch MJPEG stream)

Source:
https://github.com/raudette/SmartDashcamForBikes

Previous Articles:
See my previous articles on bicycle dashcams:
Part 4, Part 3, Part 2, Part 1

UPDATE: This project was featured on Hackaday, November 1, 2021.
