

A Vision For Robotics

Robotics, Computer Vision · 6 min read

Aman Virmani

Meet Aman Virmani

Robotics Research Assistant @ University of Maryland

College Park, Maryland 

Aman Virmani is a Graduate Research Assistant in the Perception and Robotics Group at the University of Maryland, where he is pursuing an M.Eng. in Robotics. Lately, he has been immersed in computer vision and robotics research.

Aman previously worked at Samsung R&D as a Senior Engineer in the Camera Design Team.

What inspired you to pursue a career in robotics?

I have always been fascinated by things I can build with my own hands and make work. During my undergraduate studies in electrical engineering, I worked on various projects involving robotics and computer vision.

Apart from this, I was part of the CMOS Image Sensor Design team at Samsung R&D, where I worked on several camera sensor design projects for smartphones and the automotive industry.

This made me even more intrigued by computer vision and its power to automate work across industries.

I then decided to pursue a Master of Engineering in Robotics at the University of Maryland and align my career with my passion.

I am proud of this decision, since I have been part of various robotics projects that have strengthened not only my knowledge and skills but also my love for robotics.

What is IC design, and what are its importance and function in camera sensors?

IC stands for Integrated Circuit. The IC design process is quite different from the standard circuit design we study in undergraduate Analog and Digital Circuits courses.

IC design starts with designing the building blocks of the complete logic and converting them into transistor-level logic after a series of checks that verify functionality and other design constraints.

After logic synthesis (described above) comes the layout phase, which converts the transistor-level logic into a physical circuit built on silicon wafers. This starts with floor planning, i.e. placing the major blocks and IO pads on the layout, followed by placement of the logic cells.

When all the logic has been placed, detailed routing is performed to connect the logic cells, keeping in mind physical design constraints such as timing and power. After this phase, complete physical verification is required to ensure the layout was done properly.

The file produced at the output of the layout phase is used by the foundry to fabricate the silicon, so the layout must follow the foundry's design rules.
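The back-end flow above can be pictured as a pipeline of stages. The sketch below is a deliberately simplified Python model of that pipeline; the block names, cell names, and checks are made-up illustrations, since real flows run on commercial EDA tools, not code like this.

```python
def floorplan(netlist):
    """Place the major blocks and IO pads on the die outline."""
    return {"blocks": netlist["blocks"], "io_pads": netlist["io_pads"], "routed": False}

def place_cells(layout, cells):
    """Give every standard logic cell a legal position inside its block."""
    layout["cells"] = cells
    return layout

def route(layout):
    """Connect the placed cells, respecting timing and power constraints."""
    layout["routed"] = True
    return layout

def physical_verification(layout):
    """Final sign-off check before the layout file goes to the foundry."""
    return layout["routed"] and "cells" in layout

# Hypothetical netlist for a tiny sensor chip: two blocks, three IO pads.
netlist = {"blocks": ["pixel_array", "adc"], "io_pads": ["vdd", "gnd", "data"]}
layout = route(place_cells(floorplan(netlist), ["inv1", "nand2"]))
ok = physical_verification(layout)
```

The point of the sketch is the ordering: each stage consumes the previous stage's output, and verification only passes once placement and routing are both complete.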

IC design drives progress in the camera industry, so it remains of the utmost importance in manufacturing camera sensors.

The latest cameras are not only smaller but also better in terms of resolution and other parameters, thanks to advances in IC design methodology and technology.

What differentiates computer vision from human vision?

Even though both computer vision and human vision perform the same task of converting the light signals into meaningful information, they are still quite different in many aspects.

Human vision is very good at inferring data from images about objects, people’s faces, poses, intentions, actions, etc.

Computer vision has made many advances in recent years but is still not as good as human vision at the generalized tasks of recognizing images and inferring useful information from them.

Computer vision is quite good at identifying and segmenting specific objects from images, even better than humans in some cases.

However, human vision has the advantage of evolution: it has learned to combine image data with other sensory inputs, such as touch and sound, to draw meaningful connections across all of them. Computer vision systems still struggle to do so.

Human vision is also specifically tuned to have better color vision in the center of the field of view and better low-light vision in the periphery. Computer vision, on the other hand, treats every pixel of the image uniformly.

What challenges have you faced in the development and implementation of deep learning algorithms?

One of the major challenges I have faced in developing and implementing planning and deep learning algorithms is that deep learning models take a long time to train.

This means that tuning a deep learning model requires much more time per iteration.

To overcome this challenge, I generally fire off multiple runs with different hyper-parameter settings based on the results of previous iterations, and I monitor the runs at regular intervals to see whether training is producing better results.

If not, it is better to kill that run and save system resources for further runs.
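This launch-monitor-kill strategy can be sketched in a few lines. The "training" below is a toy stand-in (a fake validation score that peaks at a mid-range learning rate); in practice each run would be a separate training job, and the learning rates and schedule here are illustrative assumptions.

```python
def train_step(lr, step):
    # Toy proxy for validation accuracy: best for a mid-range learning rate,
    # improving as training progresses. Stands in for a real training job.
    return 1.0 - abs(lr - 0.01) * 10 - 1.0 / (step + 1)

def tune(candidate_lrs, total_steps=50, check_every=10):
    """Run all candidates in parallel; periodically kill the weakest run."""
    alive = {lr: 0.0 for lr in candidate_lrs}
    for step in range(1, total_steps + 1):
        for lr in alive:
            alive[lr] = train_step(lr, step)       # advance every live run
        if step % check_every == 0 and len(alive) > 1:
            worst = min(alive, key=alive.get)      # monitoring checkpoint:
            del alive[worst]                       # free resources early
    return max(alive, key=alive.get)

best_lr = tune([0.001, 0.01, 0.1])
```

Killing clearly-lagging runs at each checkpoint frees GPU time for fresh hyper-parameter guesses instead of letting doomed runs finish.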

Describe a problem and solution you encountered while developing IC designs for phone camera sensors.

During a project at Samsung R&D, I found that the IC's power consumption was exceeding the project specifications.

When I debugged the issue further, I observed that some blocks of the IC did not have sufficient power stripes because too little area was available for power routing.

To solve this problem, I had to change the positions of IO pads that supply power to the IC and redesign the power grid for an efficient power supply.

I am proud of this work because it let me contribute to the world's smallest 20MP camera, and I even received the President Award for my work on this project at Samsung R&D.

Is there a robotics project you are working on whose experience you would like to share with us?

Currently, I am working on a robot that can avoid moving obstacles to reach its target location. I am working on different approaches to solve this problem and identify the benefits and limitations of each approach.

I have implemented a dynamic programming approach to move the robot from source to target while avoiding obstacles dynamically. This approach depends on a known map of the environment, which can be generated beforehand.
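The dynamic-programming idea on a known map can be sketched as a two-pass grid planner: a backward pass that fills in the cost-to-go from the goal, then a forward pass that greedily descends that cost table. The grid, start, and goal below are illustrative assumptions, not the actual project map.

```python
from collections import deque

def plan(grid, start, goal):
    """DP planner on a known occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    cost = [[INF] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:                       # backward pass: cost-to-go from the goal
        r, c = q.popleft()
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] == INF):
                cost[nr][nc] = cost[r][c] + 1
                q.append((nr, nc))
    if cost[start[0]][start[1]] == INF:
        return None                # no free path exists
    path, cur = [start], start
    while cur != goal:             # forward pass: follow decreasing cost
        r, c = cur
        cur = min(((r + dr, c + dc) for dr, dc in moves
                   if 0 <= r + dr < rows and 0 <= c + dc < cols),
                  key=lambda p: cost[p[0]][p[1]])
        path.append(cur)
    return path

grid = [[0, 0, 0],
        [1, 1, 0],   # middle row mostly blocked
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))
```

Because the cost table covers the whole map, replanning after the robot is displaced is just a lookup, which is the main appeal of the DP formulation on a known environment.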

I have also tried reinforcement learning-based approaches such as Q-Learning and DQN. These algorithms perform better but take a long time to train. I am working on other approaches as well that might perform better still.
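To make the Q-Learning idea concrete, here is a toy tabular version reduced to a one-dimensional corridor: the agent starts at cell 0 and must reach cell 4. The state space, rewards, and hyper-parameters are illustrative assumptions, far simpler than the actual obstacle-avoidance task.

```python
import random

random.seed(0)
N, GOAL = 5, 4
ACTIONS = (-1, 1)                          # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)     # clamp to corridor bounds
        r = 1.0 if s2 == GOAL else -0.1    # small per-step cost shapes the path
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: the best action in each non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
```

Even this tiny example shows the training-time issue mentioned above: the table needs many episodes of trial and error before the greedy policy consistently heads toward the goal, and the cost grows quickly with the size of the state space.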

Explain, with a brief example, how robotic control systems can be beneficial.

Robotic control systems can benefit human society in more than one way. They not only automate jobs but do so with increased safety and efficiency.

They can even perform tasks that humans cannot do or that are dangerous for them. There are several examples, like drones that are being used effectively to find and rescue disaster victims.

Robots have reached space and send back data that advances research about the universe. Self-driving cars are already starting to appear on roads to assist drivers.

How do you perceive the growth of the robotics market in the next 10 years?

Given the current growth rate in various sectors, I believe industrial robotics will grow to a larger scale, with better and more autonomous robots covering the complete pipeline.

Also, people in industries might accept collaborative robots that increase productivity while working alongside humans. At the same time, I believe robots will substantially increase productivity in the agriculture sector.

In the social sector, AI agents, with the help of IoT, might control most of the electronics around us, backed by better speech recognition and natural language processing.

However, the most immediate impact I believe would be in the automotive industry with the arrival of self-driving vehicles on roads with higher autonomy levels (4, maybe even 5).

Explain the function of a camera sensor in robotics.

Many robots do not have a camera sensor yet still perform well. However, with the latest advances in computer vision, robots with one or more camera sensors can build a map of their surroundings and localize themselves relative to the world.

Camera sensors are cheaper than other sensors like LIDAR and also provide color information, which helps in applications such as self-driving vehicles detecting traffic signals.
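As a toy illustration of that color advantage, a single RGB pixel already carries enough information to tell a red signal from a green one. The thresholds below are made-up assumptions; a real detector would use learned models over whole image regions, not one pixel.

```python
def classify_signal(r, g, b):
    """Crude RGB rule of thumb: dominant red or green channel wins."""
    if r > 180 and g < 100 and b < 100:
        return "red"
    if g > 180 and r < 100 and b < 100:
        return "green"
    return "unknown"

state = classify_signal(220, 40, 30)   # a bright-red pixel
```

A depth-only sensor such as LIDAR could locate the traffic light housing but would have no way to make this distinction.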

With advances in camera sensor technology, cameras keep improving in cost, size, and resolution.

Is there a robotics book that has recently caught your attention?

I have recently started reading “Bio-inspired Artificial Intelligence: Theories, Methods, and Technologies” by Dario Floreano and Claudio Mattiussi.

This book not only covers the traditional AI approaches that mimic human brain functions but also introduces newer approaches inspired by other biological structures, such as immune systems, bio-robotics, and swarm intelligence.

The book is really interesting because it connects the biological designs that have sustained life forms for millions of years to the improvements needed in our technological designs.

The book covers concepts on evolutionary computation and electronics; cellular systems; neural systems, including neuromorphic engineering; developmental systems, behavioral systems (behavior-based, bio-mimetic, epigenetic, and evolutionary robots), and collective systems (swarm robotics as well as cooperative and competitive co-evolving systems).

For more of Aman Virmani's projects, visit AmanVirmani on GitHub.