Taking on giants: a QA with Matic co-founder Mehul Nariyawala

VentureBeat presents: AI Unleashed – An exclusive executive event for enterprise data leaders. Network and learn with industry peers. Learn More

They’re so commonplace now that they are scarcely worth mentioning, but robotic vacuum cleaners were at one point a revolutionary new device. The idea of a vacuum that could move around a home independently and suck up dust and debris reliably without a human guiding it seemed like sci-fi come to life, back when MIT AI researchers formed the company iRobot in 1990, and again when they debuted the Roomba back in 2002.

“Roomba” has since become a widely recognizable brand name up there with Kleenex, Tylenol and Band-Aid, and many other brands have jumped in to offer competing products at higher and lower price points, including vacuum stalwart Dyson and Anker with its Eufy brand. Despite that, some believe the technology is far from as advanced as it should be, and that there is room for disruption from the high-end.

“We wanted ‘Rosey the Robot’ [from The Jetsons] and all we got were these disc robots bumbling around,” said Mehul Nariyawala, co-founder of a new entrant in the space, Matic, which just this week emerged from stealth with nearly $30 million in funding from heavy hitters at Nest, Stripe, and GitHub, and its own combination robot vacuum/mop product. It’s now available for pre-order in the U.S. for $1,495 through the end of this year (after which the price jumps to $1,795), with shipping expected in early 2024.

Matic, which promises to reinvent not just cleaning but the entire space of indoor robotics by going back to first principles, has been in the works since 2017, when Nariyawala left Google’s Nest division where he was the lead Product Manager for the Nest Cams portfolio. Prior to that, he worked as a product manager at Google and co-founded Flutter.



While the robot vacuum market is maturing, it doesn’t show signs of slowing or plateauing yet: researchers project compound annual growth rates between 12.3% and 17.87%, putting the market at anywhere from $9.12 billion to as high as $17.9 billion by 2028. This growth is driven by increasing demand for automated cleaning solutions and the time savings of smart appliances.

So, having worked for both startups and tech giants, why does Nariyawala think he can make a dent in the robot vacuum market and ultimately build a more intelligent home robot that is closer to the “Rosey the Robot” of our retrofuturistic dreams? Read our Q&A to find out.

The following has been edited and condensed for clarity.

VentureBeat: Where are you from, originally?

Mehul Nariyawala: Originally, I grew up in India, went to high school in Florida, went to undergrad at the University of Maryland and graduated at the height of the first [tech] bubble [in the 2000s]. I went straight to a startup and it was a spectacular failure — we burned through $30 million in 11 months.

Tell me about the product [Matic]?

The genesis of the idea was actually me getting a golden retriever and having lots of hair to clean. So, my wife told me to go get a robot.

I knew Roomba sucks. I ended up getting a Dyson 360 robotic vacuum, which had launched in 2016.

It turned out it was probably one of the worst robots I’ve used, because that thing just kept failing to find its own dock nine out of 10 times. Suction-wise, all Dysons are great, but robot-wise, it was really sort of not that great.

So that piqued our curiosity. We were at Nest at the time, and we thought, “wait a minute, why isn’t anyone really innovating in this space?”

There are 200-plus self-driving car startups, 200-plus industrial automation startups, but no one in the home space. We just have these sort of “disc robots,” and that’s about it. So what’s going on?

At a very high level, we came to the conclusion that the entire space of indoor robotics is built a bit upside down. It’s like putting the cart before the horse. And what I mean by that is, imagine trying to build self-driving cars without having Google Maps or GPS. No matter how smart the car is, if it doesn’t know where the road is going or where it’s located on the road, it’s useless, right?

And what we realized based on this experience is that these [existing disc] robots don’t actually know whether they’re on the right side of the couch, the left side, or the top of it; whether they’re in the kitchen, or in the nook of the dining area or in the dining room. All these things are critical information for you to navigate precisely.

And that’s the point: the entire indoor robotics space is still focused on building actuators and sensors and adding to them, when the real bottlenecks are really the SLAM (simultaneous localization and mapping) and perception.

And this is where our background was: we had been working in computer vision since 2005. So we just felt we could take an algorithms-first approach and add the brains to the robot.

This is where we thought floor cleaning was still the best place to start. The reason is that, by definition, if you’re cleaning floors, you will explore every inch of an indoor surface and build a map. And floors get dirty multiple times a day, so you have to go through them again and again and self-update the map. We can give the robot the same ability we [humans] have: we go into an indoor space, we walk around and we build a mental map.

If you go through it once, you don’t remember everything. But if you go through 10 times you actually remember very precisely where things are.

So in this same exact way, the robot can self-learn over time and get more and more precise in each home environment. If we can do that, that’s a huge value proposition.
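The self-updating map Nariyawala describes can be illustrated with a toy sketch (this is purely hypothetical and not Matic’s actual code): treat each cleaning pass as a set of noisy sightings of landmarks, and fold each sighting into a running mean, so the map’s position estimates tighten with every pass.

```python
# Toy illustration of a map that gets more precise with repeated passes:
# each noisy (x, y) sighting of a landmark is folded into a running mean.
from collections import defaultdict


class SelfUpdatingMap:
    def __init__(self):
        self.estimates = {}           # landmark -> (x, y) running mean
        self.counts = defaultdict(int)  # landmark -> number of sightings

    def observe(self, landmark, x, y):
        """Fold one noisy observation into the landmark's running mean."""
        n = self.counts[landmark]
        if n == 0:
            self.estimates[landmark] = (x, y)
        else:
            ex, ey = self.estimates[landmark]
            self.estimates[landmark] = (
                (ex * n + x) / (n + 1),
                (ey * n + y) / (n + 1),
            )
        self.counts[landmark] += 1


# Repeated noisy sightings of the couch pull the estimate toward
# its true position; averaging shrinks the noise with each pass.
import random

random.seed(0)
m = SelfUpdatingMap()
for _ in range(10):
    m.observe(
        "couch",
        3.0 + random.uniform(-0.2, 0.2),
        5.0 + random.uniform(-0.2, 0.2),
    )
```

Averaging independent noise is the simplest version of the effect he describes: one pass gives a rough map, ten passes give a precise one.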

Floor cleaning was also a great place to start because these are still the only robots accepted in our homes. Most importantly, there were many customers like me who had tried robotic vacuums and just didn’t like them. When we looked at the category, the net promoter score is negative one, and for female customers it’s negative 18. They’re worse than Comcast, which is negative 10, and which I think is everyone’s favorite company to hate in the United States.

So for us, this was the idea that here’s the intense problem that no one is paying attention to.

I totally get it and I share your frustration with the disc robots. You guys approach this from a completely different starting point looking at computer vision and SLAM — to your knowledge, that’s not what the competitors are doing?

The very first generation of disc robots were just this algorithm where they would bounce their way through the home. Then, there were some versions that came out that just used single-pixel LIDAR, which just has one laser pointer and if it’s too high or low, it doesn’t see anything. So it just sees walls, and beyond that, it struggles. And lately, they have been starting to add cameras and there is some basic visual SLAM there. But the best way to describe this is like a touch interface pre-iPhone and post-iPhone. Yes, they were around, but the fidelity was so bad you had to jab your finger all the way through it to make it work.

Initially, when we started out, to be entirely honest, we didn’t think SLAM would be the biggest hurdle we’d have to cross. But what we realized as we started digging into it is that even though theoretically it has been considered a solved problem since the mid-1980s, in practice, nobody has implemented it in a precise manner ever. It just doesn’t exist.

And if you’re going to solve fully autonomous indoor robots as a category, this is the most important thing because robots have to know where they are. If they don’t know where they are, if they don’t understand the precise location, everything is useless. And that includes all kinds of robots, whether it’s industrial robots, warehouses, factories, humanoids — you have to know where you are. If you don’t, then it’s like us with a blindfold. We’re not going to be all that useful if we have a blindfold on.

What do you guys do differently? You said you take an algorithmic approach — this idea of the robot learning. I think a lot of people, myself included, hope that’s what our robots are doing already. It’s already done this task a hundred times; it should gain experience every time I run it.

The best way to think about it is that for fully autonomous indoor robots, hardware is not the problem — complex actuators have been around for a long time. It’s really 3D perception and SLAM; those are the bottlenecks.

Within 3D perception and SLAM, the approach the industry has sometimes taken is very similar to the self-driving car debate: do you start with a bunch of sensors, or do you just use cameras?

What’s different about us is we decided to take a very Tesla-like approach in the sense that we’re just using cameras and software, that’s it. [5 RGB cameras, to be specific.]

The reason is that we just felt the indoor space specifically is built by humans, for humans, using the human perception system.

So, if we’re going to bring in a robot that does the same thing as we do, [vacuuming and mopping] on our behalf in an indoor space, they need a similar system to us.

The second thing is, we humans don’t need to go to the cloud to make a decision, right? We don’t have a hive mind or any of that. We’re actually just making decisions and learning things, each of us on our own, in that space, in that time, in that situation.

We came to the conclusion that if you’re going to bring cameras into an indoor space, privacy becomes an issue. Latency becomes an issue. You want to learn on-device because the indoor world is quite dynamic.

In 2017, it was obvious that edge devices were coming and edge compute was going to skyrocket. And all these self-supervised learning algorithms were emerging and would have a huge impact, even in the vision space. So we made a bet that these two trends would actually help us quite a bit. Everything we do is on-device, and once you’re on the device, that’s when you can predict without jeopardizing users’ privacy.

So now we have this robot with a self-learning algorithm. And the good thing about our robot is that it is going to sit on the dock at least eight hours a day. During that time, it’s like a server: it can collect data without ever sending it to the cloud. On-device, it can just keep learning and keep getting better. So in the context of a floor-cleaning robot, we are actually enabling embodied AI. That’s the approach: it is purely vision-based — see what happens, predict, trial and error. The robot says, “Let me predict, let me try to go down here, and I’ll see if it works.”
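The “predict, try, see if it works” loop can be sketched in miniature (a hypothetical illustration, not Matic’s implementation): the robot keeps per-action success statistics on-device and prefers the actions that have worked before, trying untried ones optimistically.

```python
# Toy sketch of on-device trial-and-error learning: track how often
# each action succeeds and pick the one with the best observed rate.
class TrialAndErrorPolicy:
    def __init__(self, actions):
        self.successes = {a: 0 for a in actions}
        self.attempts = {a: 0 for a in actions}

    def predict_best(self):
        # Untried actions get an optimistic rate of 1.0 so they get
        # explored at least once before the stats take over.
        def rate(a):
            n = self.attempts[a]
            return 1.0 if n == 0 else self.successes[a] / n
        return max(self.successes, key=rate)

    def record(self, action, worked):
        # "See if it works" step: update the stats after trying.
        self.attempts[action] += 1
        if worked:
            self.successes[action] += 1
```

All the state lives in the object itself, mirroring the point about learning without a cloud round-trip: nothing here leaves the device.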

Is the underlying AI and machine learning (ML) based on existing frameworks, did you have to write a lot of code yourselves, are you pulling together a lot of open source stuff, or what’s the mix behind-the-scenes of what you’re using to put it all together?

I think across the board, no one had approached fully autonomous indoor robots in a very Tesla-centric way. So we had to push the needle beyond the state of the art and write our own new code.

The reason for that is there is a huge difference between building something in a lab and publishing papers and actually implementing it so that hundreds of thousands of users can access it.

You can have a drug in a lab but manufacturing it for millions of users is a whole different thing.

The way we go about doing this, almost always, comes from my partner Navneet Dalal’s fundamental perspective, which has always been “don’t bet against nature.” Nature has had four billion years, and it gave us two eyes and a bunch of algorithms; there is a method to the madness. Let’s use that: let’s start with the product and work backwards.

What does this product need? It needs precision, it needs privacy, and most importantly, it needs affordability. If you just combine a lot of open source systems, they’re not all that efficient. That forced us into writing some code ourselves. We had to engineer it so that it just works at an affordable price point. You can build a $30,000 robot that is fully autonomous, but no one’s gonna buy it.

Do you see competition in this space of home robotics intensifying as you see things like the Tesla Optimus (humanoid robot, currently in development)? You compared yourself favorably to Tesla — do you think you will have to go head-to-head with them at some point?

There are many, many, many different approaches to this problem. We fundamentally believe that the blocker is not the hardware, it’s more of a software and SLAM and perception problem. So the approach we take is “let’s solve SLAM and perception first, and then maybe we’ll solve other problems.”

In terms of consumer versus enterprise, it boils down to whether those robots are affordable or not. Can we get to a point where we’d really buy a $20,000 robot the way we buy a car? I don’t know the answer to that question. My assumption at the moment is no. So affordability becomes a big piece of the puzzle.

And my third point is really about comfort. At least in your home, you want something that’s friendly, a robot that people, kids and pets are not afraid of. We always imagined that if there is a home robot, it’s going to be a little more like Big Hero 6: soft and cuddly, something you want to hug rather than a big scary humanoid.
