Last updated: June 30, 2022.
This week I spoke with Karthik Kannan, cofounder and CTO of Envision, a company that builds on top of Google Glass and uses the augmented reality features of phones to help visually impaired people better sense the environment and objects around them.
Their software and devices are pretty popular, and as you'll hear in this conversation, they've been on a real journey to get to where they are now.
In particular, I really enjoyed the parts where Karthik explained their development and deployment process in detail. It's not often that you get a deep dive into the workflows and stacks of an embedded computer vision company, so I think you're really going to enjoy this one.
In this clip, Karthik explains how they had to build their own image datasets suited to the specific purposes and use cases of visually impaired users. Pre-trained models and some open datasets were useful, he says, but they don't deliver best-in-class performance if the thing you're building doesn't use similar kinds and qualities of data.
As always, full show notes and links are available on our dedicated podcast page.