One of the problems I have is datasets. I can make more, sure, but that's hard, so when you're pre-money it's easier to stick with the ones others have made. I have great datasets for tracking humans, anything to do with self-driving cars, cups, cameras, shoes, and chairs. I also have the general Google machine learning dataset, which can recognize basically an infinite number of things Google Search can see, but only in 2D.
Luckily, Target has a whole aisle dedicated to selling cups, so we can capture those and do things to them. Well, capture and Photoshop, and boy, can we capture cups good. So tomorrow I'm planning an outing, my first out-of-the-house trip in over a week besides walking the dog, to take pictures of Target's cup aisle.
I once trained numerous datasets for Magic Leap while trying to convince them to implement a system called VisualIQ, which allowed objects to carry URLs and tied into a markup language called OpenML. The idea was that everything would first be classified into a category (cup, shoe, chair), then a second pass would check whether that specific instance had a unique URL associated with it, and if so, display an interactive open-markup overlay around it in virtual 3D. We would make this freely available across all platforms. You could see a movie poster and tap on it for more info, or see a picture of your daughter and connect to her directly. It was a contextless and open operating system, a complete 180 from the closed-ecosystem direction they were going in.
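For the curious, here's a minimal sketch of how that classify-then-resolve pipeline could work. Everything in it is hypothetical: the function names, the registry, and the example URL are stand-ins I made up for illustration, not actual VisualIQ or OpenML code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    category: str      # coarse class: "cup", "shoe", "chair", "poster", ...
    instance_id: str   # fingerprint of this specific object

def classify(frame) -> list:
    """Stage 1: classify everything in view into a category.
    A real system would run an object detector here; this stub
    pretends the frame contains one movie poster."""
    return [Detection("poster", "dune-2021")]

REGISTRY = {  # hypothetical URL registry keyed by (category, instance)
    ("poster", "dune-2021"): "https://example.com/openml/dune-2021",
}

def lookup_url(det: Detection) -> Optional[str]:
    """Stage 2: check whether this specific instance has a unique URL."""
    return REGISTRY.get((det.category, det.instance_id))

def render_overlay(det: Detection, url: str) -> None:
    """Stage 3: fetch the markup at the URL and anchor it in 3D.
    Printing stands in for the actual rendering."""
    print(f"anchoring markup from {url} around the {det.category}")

def visual_iq(frame) -> None:
    for det in classify(frame):
        url = lookup_url(det)
        if url is not None:  # only registered instances get an overlay
            render_overlay(det, url)

visual_iq(frame=None)  # demo: prints the overlay line for the fake poster
```

The point of the two-stage lookup is that the classifier only needs to know broad categories; the registry is what turns an anonymous poster into *your* poster with its own URL.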
This was all because Rony had tasked me with the question, "What can I get on stage and say that will make people quit their jobs to work on this platform?" VisualIQ / OpenML was my answer.
I of course no longer work at Magic Leap.
Graeme.