
R3plica

The Evolution of Machine Vision

Robotics, Machine Vision, 3D Vision · 4 min read

Uri Dubin

Meet Uri Dubin

CEO & Co-Founder @ RobotAI

Haifa, Israel

Uri Dubin is the CEO & Co-Founder of RobotAI. RobotAI develops proprietary and patented algorithms for indoor navigation, arm-vision control, and object manipulation based on advanced machine learning tools and hands-on experience.

He has experience in multi-disciplinary project and system development and management, primarily in applications spanning the medical, industrial, and military domains.


What inspired you to pursue a career in robotics?

I worked in the Israeli defense industry for 15 years, making “smart” machines. It was a corporation where I gained a lot of experience and learned from many people.

I wanted to experience startup life, to do something more personal, something you can be a big part of. So in 2012, I joined two founders and started a company in the telemedicine space called TytoCare.

We started in a small apartment with a team of 5 people, building a smartphone-like remote monitoring device.

I remember one day we came to the “office” and found that our apartment had been robbed and all the computers were gone.

We recovered, received FDA clearance, and so far, TytoCare has raised around $150M.

I had also received my Ph.D. in Neuroscience, and I felt that I needed to do something else by myself. It is a feeling you get inside; it is hard to explain.

Describe the story and vision of RobotAI?

Robotics is an essential step in evolution. Instead of passive devices, like monitors, that give people commands, we are going to the next step.

The next step is the ability of machines to apply forces and do work in the physical world.

So the question was: what kind of robotic application is helpful, not yet available on the market, and free of heavy regulation?

I started to think about other ideas that could be useful, challenging, and new. For example, in 2017, the autonomous-car hype was at its peak, and people were rushing into that area. I understood that a driving robot must perform a very complex cognitive task.

I suspected it would take a lot of time and resources to bring something reliable to the market. Health-related products usually require a lot of your time and FDA clearance. The same goes for flying robots.

The idea of a toilet-cleaning robot then came to mind: a one-armed mobile platform that can clean public toilets.

It is a job that no one wants to do. It is B2B, you can map the areas in advance, human interaction is limited, high precision is not required, and, most importantly, it cannot kill anyone.

I found two partners, and we started a company called C-Robotics, received some small funding, and even built a prototype.

We purchased a robot and worked on teaching it to recognize its position in space, recognize objects (toilet sinks, bowls), and manipulate them.

Unfortunately, however, we failed to raise more funding for this project.

What is Monocular 3D Vision?

Robots that operate in the human environment need to adapt to changing conditions.

For example, if you want to grab a toilet brush (C-Robotics), you need to “see” it. However, it is not always in the same place.

The same goes for navigation when the robot is not sure about its position in the room.

We used different sensors to find objects around us: LIDARs, laser scanners, RealSense depth cameras. All of them had limitations in range, precision, size, and power consumption (essential for battery-operated platforms).

In addition, they required a lot of processing, and the algorithms were slow.

Moreover, the reflective surfaces in toilet rooms were not suitable for laser-based active vision. This is an example of the depth sensor output and the artifacts it creates.

In the example above, you can see that the RGB data is reliable. We understood that humans with one eye closed can still grab a cup, drive a car, and do all those tasks.

It means that a single camera carries enough information to understand the 3D world around us.
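The intuition above can be sketched with the standard pinhole camera model: when an object's real-world size is known, its apparent size in a single image is enough to recover its distance. This is a minimal textbook illustration with hypothetical numbers, not RobotAI's actual algorithm:

```python
# Pinhole camera model: an object of known physical width W appears w pixels
# wide in an image taken with focal length f (expressed in pixels).
# Similar triangles give the distance along the optical axis:
#   Z = f * W / w

def depth_from_known_width(focal_px: float, real_width_m: float,
                           pixel_width: float) -> float:
    """Estimate the distance to an object of known physical width
    from a single image observation."""
    return focal_px * real_width_m / pixel_width

# Hypothetical example: an object 0.25 m wide appears 125 px wide to a
# camera with an 800 px focal length.
z = depth_from_known_width(focal_px=800.0, real_width_m=0.25,
                           pixel_width=125.0)
print(round(z, 2))  # → 1.6  (meters)
```

The same geometric relationship is why passive, single-camera depth is possible at all: the missing scale that a lone image cannot provide is supplied by prior knowledge of the object, which is exactly the "known object" setting the interview describes.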

Describe the RobotAI 3D vision-based robotic applications and their primary purpose?

To operate in the environment around us, we need to identify the objects and understand their positions.

Based on this information, we can build situational awareness and make decisions.

This is very similar to what has been done in the autonomous car industry. For example, Tesla already switched to RGB-only cameras, giving up on LIDARs and other sensors.

The primary purpose of our technology is to simplify the detection of objects and the measurement of their position and orientation in the environment around the robot.

In addition, we provide tools to teach robots to recognize different things - since there is a myriad of objects that we want robots to work with.

This ability underlies manufacturing tasks like pick and place, bin picking, and assembly.

One goal in the next five years in the robotics industry for RobotAI?

The term CoBots stands for Collaborative Robots. We want to coin a new name:

Cognitive Robots: robots that can adapt to the environment, respond to changes, and be flexible. This would greatly simplify deployment, reduce cost, and increase flexibility.

We want to be an out-of-the-box vision solution for robotics. The Mobileye for robotics.

Explain one advantage of RobotAI technology over other 3D measurement devices?

We use software and a single 2D camera (like a webcam) to extract the position and orientation (6DoF) of any known object.

Thus, we mimic an essential brain capability. The solution does not require lasers, light projection, or multiple cameras. As a result, it outperforms active vision solutions in speed, precision, cost, weight, and form factor.
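To make the 6DoF idea concrete: with the pinhole model, once an object's depth is known (for example, from its known size), a single pixel observation also fixes its lateral position, so one camera yields a full 3D location. Full pose estimation additionally solves for orientation, typically by fitting several known model points to their image projections (the Perspective-n-Point problem); the sketch below covers only the translational part and uses hypothetical camera parameters, not RobotAI's implementation:

```python
# Back-projection under the pinhole model: a pixel (u, v) plus a depth Z
# maps to a 3D point in the camera frame:
#   X = (u - cx) * Z / f,   Y = (v - cy) * Z / f
# where (cx, cy) is the principal point and f the focal length in pixels.

def backproject(u: float, v: float, z: float,
                f: float, cx: float, cy: float) -> tuple:
    """Map an image pixel plus a known depth to a 3D point (X, Y, Z)
    in the camera coordinate frame."""
    return ((u - cx) * z / f, (v - cy) * z / f, z)

# Hypothetical 640x480 camera, f = 800 px, principal point at the center.
# An object seen at pixel (480, 240) at 1.6 m depth lies 0.32 m to the right.
x, y, z = backproject(u=480.0, v=240.0, z=1.6, f=800.0, cx=320.0, cy=240.0)
print(tuple(round(v, 2) for v in (x, y, z)))  # → (0.32, 0.0, 1.6)
```

Recovering the remaining three degrees of freedom (the object's rotation) requires matching multiple known 3D model points against their 2D detections, which is what PnP-style solvers do; the key point of the interview stands either way: no laser or second camera is needed.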

One challenge robotics is facing in its use and development?

The eye-hand connection in humans is impressive, and it is challenging to replicate in robotics. The hardest thing is to imitate human fingers.

Grasping different objects is not yet a solved problem. There is no universal solution yet.

One case where RobotAI is implemented today with success?

RobotAI is attracting a lot of interest from many small and big companies.

We have completed several POCs and integrated our product with Denso robots, and we are finishing POCs with Whirlpool and Ford Otosan.

In addition, we created a solution for Deutsche Telekom. We have ongoing POCs and integrations with ABB, Techman Robotics, and Rockwell Automation.