PEOPLE
Yashovardhan Chaturvedi, a senior ML engineer at TORC Robotics, focuses on real-time perception and decision intelligence for autonomous systems, innovating with semi-supervised learning.
As autonomous systems move out of research labs and into front-line deployment, expectations of machine learning models are changing. Modern AI must not only recognize patterns but also understand context, decide in the moment, and perform reliably under pressure in highly unpredictable situations. Autonomy, perception, and decision intelligence are the domain of Yashovardhan Chaturvedi, a senior machine learning engineer at TORC Robotics: to him, this convergence of expertise is not an abstraction, but the point where innovation and real-world impact intersect.
Chaturvedi has worked across robotics, computer vision, and edge intelligence, applying machine learning in demanding, safety-critical domains such as autonomous platforms and smart asset surveillance. Through his technical leadership, and particularly in moving earlier research toward field-ready prototypes, he is helping redefine what autonomous robots can sense and accomplish.
Learning in Context: From Frames to Fluid Situations
Perception over time remains one of the hardest problems in robotics. Early computer vision systems relied on static image analysis, but an autonomous system must operate under fluid, dynamic conditions. For a car negotiating a foggy landscape or a mobile robot tracking small, moving objects, timing matters as much as recognition.
A representative example comes from Chaturvedi's earlier work at Pano AI. There, he pioneered an advanced multi-frame ensemble detection system, replacing conventional single-frame image processing with a video-based paradigm that tracks subtle visual cues as they change over time. Such temporal classification frameworks are increasingly relevant in robotics, where situational awareness must graduate into context-aware intelligence through sequence awareness.
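The article describes this multi-frame approach only at a high level. As an illustrative sketch, not Pano AI's actual system, the core idea of confirming a detection from sustained evidence across frames rather than from a single image can be expressed with a simple sliding-window aggregator (the class name, window size, and threshold here are all hypothetical):

```python
from collections import deque

class TemporalDetector:
    """Aggregate per-frame detection confidences over a sliding window,
    confirming a detection only when evidence persists across consecutive
    frames rather than spiking in a single image."""

    def __init__(self, window_size: int = 5, threshold: float = 0.6):
        self.scores = deque(maxlen=window_size)  # most recent per-frame confidences
        self.threshold = threshold               # windowed mean needed to fire

    def update(self, frame_score: float) -> bool:
        """Feed one frame's raw detector confidence; return True once the
        window is full and its average crosses the threshold."""
        self.scores.append(frame_score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough temporal evidence yet
        return sum(self.scores) / len(self.scores) >= self.threshold


detector = TemporalDetector(window_size=3, threshold=0.6)
# A single noisy spike does not fire; sustained evidence does.
print(detector.update(0.9))  # False (window not yet full)
print(detector.update(0.1))  # False
print(detector.update(0.1))  # False (mean 0.37 below threshold)
print(detector.update(0.8))  # False
print(detector.update(0.8))  # False
print(detector.update(0.8))  # True (mean 0.8 sustained)
```

The design choice is the point: by trading a few frames of latency for temporal confirmation, the system suppresses the one-frame false positives that plague single-image classifiers in hazy or cluttered scenes.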
"It is not perception," he says, "but prioritization. In robotics you are not just recognizing objects. You decide whether to react to something, how soon, and with what degree of confidence." He recently presented Navigating Bottlenecks: Infrastructure Lessons from AV ML Systems at the ADAS & Autonomous Vehicle Technology Summit, an overview of the technical challenges and infrastructure design solutions involved in deploying machine learning in autonomous vehicles.
This thinking has become the foundation of his current role at TORC, where real-time operation and low-latency model performance are needed to support safe autonomous decision making on the road.
Semi-Supervised Learning for Scalable Autonomy
Edge cases are inevitable in autonomous systems, and engineers cannot anticipate them all. To be resilient, models must be trained on very large and diverse datasets that are infeasible to annotate manually. This is where Chaturvedi's contributions to semi-supervised learning become essential.
At Pano AI, he launched a new training pipeline four times larger in data volume with only minimal human involvement, built on self-training methods that kept training efficient and preserved quality at scale. These techniques, now spreading throughout the robotics industry, enable faster iteration and better performance in long-tail settings.
This allowed Chaturvedi's data pipelines to scale as well as learn, a key requirement for any in-the-wild autonomous system.
His latest academic publication, Evaluating the Performance of SQL-Based vs. Python-Based Data Processing in Cloud Computing for Machine Learning Applications, examines the trade-offs between data processing paradigms in cloud-based ML systems. Although the paper is focused on infrastructure, its findings on performance, scalability, and data handling transfer directly to robotics workflows, where smooth data transport and processing form the foundation of real-time intelligence.
Engineering for the Edge
From deploying PyTorch-based detection models in remote wildfire environments to building real-time pipelines that power autonomous trucks, Chaturvedi’s work illustrates the full lifecycle of machine learning for robotics: from data strategy and model design to deployment and feedback integration.
Chaturvedi, who also served as Session Chair at the IEEE International Conference on Augmented Reality, Intelligent Systems, and Industrial Automation (ARIIA 2024), navigates the complexity of robotics applications, whether environmental, industrial, or mobility-focused. That range positions him at the vanguard of a fast-moving discipline where theory must translate into reliability.
Because in robotics, there is no margin for guesswork. The AI must work. And it must work now.