About Me: Control Engineer

Irfan Ahmad Ganie

I am a PhD candidate and Graduate Research Assistant at Missouri University of Science and Technology, USA, working under the guidance of Dr. Sarangapani. My research explores the intersection of continual lifelong online deep reinforcement learning and safety control mechanisms for nonlinear systems, with a focus on multitasking environments that include Human-Robot Interaction (HRI) and Human-Swarm Interaction (HSI). My work spans a wide range of applications, including robotics, mobile robots, and unmanned aerial and ground vehicles, emphasizing systems that continuously learn and adapt from new data throughout their operational lifetime.

Research Interests:

  • Reinforcement Learning: Exploring dynamic strategies and algorithms.
  • Optimal Control: Developing methods to achieve the best performance under given constraints.
  • Deep Neural Networks: Harnessing deep learning architectures for complex problem-solving.
  • Artificial Intelligence Applications: Implementing AI in nonlinear systems for enhanced functionality.
  • Machine Learning in Control Systems: Integrating ML techniques to improve control in nonlinear environments.
  • Robotics and Autonomous Vehicles: Advancing automation in robotics and vehicle systems.
  • Human-Robot Interaction (HRI): Fostering effective communication and cooperation between humans and robots.
  • Safety and Security in Nonlinear Systems: Ensuring robustness and resilience in critical system operations.
  • Energy Systems: Innovating within microgrids and smart grids to optimize energy distribution and consumption.

Academic Background and Research Initiatives

I completed my M.Tech degree in Cyber-Physical Systems at IIT Jodhpur, India, where I refined my expertise in problem formulation and experimental techniques using tools such as MATLAB, dSPACE, and OPAL-RT. My thesis focused on developing advanced control mechanisms for power electronic converters and microgrids, using the Integral Sliding Mode Control technique to reduce power fluctuations. This research was funded by the Department of Science and Technology (DST), India.

During my time at IIT Jodhpur, I worked on several projects in cyber-physical systems and autonomous cars, using image-based inputs through technologies such as OpenCV and YOLO. My passion for cutting-edge research led me to Missouri University of Science and Technology, USA, where my work now centers on deep learning-based control for nonlinear, multitasking environments. Here, I focus on the safety and control of robotic and unmanned vehicles (both ground and aerial) that involve extensive human interaction. My research toolkit has expanded to include MATLAB, ROS2, Gazebo, and MoveIt for robotics simulation, and SLAM for real-time mapping and navigation.

Additionally, I have hands-on experience with Hardware-in-the-Loop (HIL) simulation, specifically with differential-drive robots and Quanser platforms such as the QCar, QBot, and QDrone. These projects have significantly deepened my practical understanding of dynamic system behavior and real-time control challenges.

My research has been funded by the Army Research Office, the Office of Naval Research, and the Intelligent Systems Center. I have contributed to numerous international conferences and journals; my publications, which have garnered over 300 citations, cover a broad spectrum of topics, including cyber-physical systems, nonlinear control, safety, machine learning, deep learning, lifelong learning, and robotics.

I am eagerly looking forward to advancing my research and exploring new partnerships in this dynamic field of technology.

Software Proficiency & Skills
  • MATLAB
  • ROS2
  • Python
  • C++
  • C
  • Embedded assembly language
  • MATLAB: Formulated and implemented adaptive control and estimation algorithms, published at the American Control Conference and in journals.
  • Python: Devised learning-based algorithms (deep learning and reinforcement learning using image data for autonomous vehicles), published in conferences.
  • Robot Operating System (ROS2): Working knowledge of ROS2, which allows me to efficiently create complex, robust robot behavior across a wide variety of robotic platforms.
  • Network Simulator 2: Applied to smart field monitoring and vehicular ad hoc networks, published in conferences.
  • dSPACE/HIL (power electronics): Integrated Simscape Electrical™ models in a dSPACE hardware-in-the-loop (HIL) environment for testing microgrid converters.
  • C++: Working knowledge of C++, which simplifies integrating hardware with software.
  • MoveIt: Used for robotic manipulation in ROS2.
  • SLAM: Used for navigation and mapping with differential-drive and wheeled robots in ROS2.
  • Gazebo: Used to create simulation environments in ROS2 for various projects (see the Projects section).

Work Experience

Missouri University of Science and Technology, Rolla, USA

Position: Research Assistant.

August 2021 - Present


Tenure: Aug 2021 - Aug 2022

Project: Online Safe Adaptive Lifelong Deep Learning (LDL) System for Tracking Control

During my initial year at Missouri University of Science and Technology, I was actively involved in developing this system, designed for multitasking environments and applied to controlling n-link robot manipulators, mobile robots, and unmanned surface vehicles. The project, supported by the Army Research Office, the Office of Naval Research, and the Intelligent Systems Center, focused on safety assurance through time-varying barriers that let the system adapt safely to changing conditions and enhance reliability.

Responsibilities and Achievements:
  • Engineered a safe nonlinear control system leveraging lifelong deep learning to enable robust operation across multitasking environments. The method was tested on systems in control-affine form, strict-feedback form, and related structures.
  • Integrated time-varying barriers to ensure continuous adaptability and safety, tested on Quanser platforms to meet rigorous safety standards.
  • Demonstrated the system’s capability in practical settings, leading to significant contributions to scholarly journals and international conference proceedings.
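To illustrate the time-varying barrier idea (a minimal sketch under assumed parameters, not the controller used in the project): a log-type barrier Lyapunov function (BLF) stays finite only while the tracking error remains inside a time-varying bound, and grows without limit as the error approaches that bound, so a controller that keeps the BLF bounded keeps the error inside the safe set.

```python
import math

def time_varying_bound(t, k0=2.0, k_inf=0.5, a=1.0):
    """Exponentially shrinking error bound k_b(t): starts at k0, converges to k_inf.
    (Illustrative parameter choices, not values from the project.)"""
    return (k0 - k_inf) * math.exp(-a * t) + k_inf

def barrier_lyapunov(e, k_b):
    """Log-type BLF: defined only for |e| < k_b, and -> infinity as |e| -> k_b,
    heavily penalizing any approach to the constraint boundary."""
    assert abs(e) < k_b, "error outside the safe set"
    return 0.5 * math.log(k_b**2 / (k_b**2 - e**2))

k_b = time_varying_bound(t=0.0)       # bound is 2.0 at t = 0
print(barrier_lyapunov(0.1, k_b))     # small value: error well inside the bound
print(barrier_lyapunov(1.9, k_b))     # large value: error close to the bound
```

Because the bound shrinks over time, the same error that is safe early in a task can become unsafe later, which is what forces the learned controller to keep tightening its tracking performance.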
Videos
  • Leader-follower arc-shaped trajectory. Watch Now

  • Circular formation control. Watch Now

  • Formation structure change in multi-environment scenarios. Watch Now

  • Formation control and obstacle avoidance. Watch Now

  • UAV tracking in multi-environment scenarios along different paths using deep continual optimal tracking. Watch Now

  • Formation control using continual deep optimal tracking. Watch Now


Tenure: Aug 2022 - Jul 2023

Project: Deep Continual Optimal Reinforcement Learning (DCORL) for Tracking Control

In this period, I contributed significantly to a project aimed at nonlinear control systems operating in multitasking environments. The focus was on safety, achieved through the implementation of time-varying constraints. The applications of this project extended to various domains including n-link robot manipulators, mobile robots, unmanned aerial vehicles, and unmanned surface vehicles.

Responsibilities and Achievements:
  • Developed a novel adaptive tracking control scheme using deep continual optimal reinforcement learning for nonlinear control systems. A multilayer SVD-based neural network observer estimated the states for output-feedback control.
  • Implemented time-varying constraints to ensure safety and adaptability in multitasking environments, applicable to n-link robot manipulators, mobile robots, unmanned aerial, and surface vehicles.
  • Tested the approach on Quanser-based hardware, validating the system’s effectiveness through simulations and practical deployments.
  • The project’s outcomes contributed to multiple high-profile publications and presentations at international conferences.
Videos
  • Barrier Lyapunov function (BLF) preventing violation of constraints. Watch Now

  • Formation control: lifelong learning with barrier, first trajectory. Watch Now

  • Tracking in multi-environment scenarios along different paths. Watch Now


Tenure: Aug 2023 - Present

Project: Human Interaction with Robots in Multi-Environment Scenarios Using Online Deep Continual Learning

Currently, I am deeply involved in this cutting-edge project that explores human-robot interaction within multi-environment scenarios. The project utilizes admittance-based controllers, explainable AI, and deep reinforcement learning in an online framework to enhance human-swarm interaction, aiming to develop systems that are intuitive and safe for human operators in complex settings.
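The admittance-control idea behind this work can be sketched as follows (a minimal illustration with assumed virtual parameters, not the project's actual framework): the measured human interaction force drives a virtual mass-damper-spring model, M·ẍ + D·ẋ + K·x = F_ext, whose solution becomes the compliant reference motion handed to the robot's low-level controller.

```python
def admittance_step(x, v, f_ext, dt, M=1.0, D=8.0, K=16.0):
    """One semi-implicit Euler step of the virtual dynamics M*a + D*v + K*x = f_ext.
    M, D, K are illustrative virtual inertia/damping/stiffness values.
    Returns the updated reference position and velocity."""
    a = (f_ext - D * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# A constant human push of 4 N settles the reference near f_ext / K = 0.25 m:
x, v = 0.0, 0.0
for _ in range(5000):          # simulate 5 s at 1 kHz
    x, v = admittance_step(x, v, f_ext=4.0, dt=0.001)
print(round(x, 3))             # ~0.25
```

Tuning M, D, and K trades off how "heavy" or "springy" the robot feels to the operator, which is one of the knobs this project adapts online.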

Videos
  • Hardware-in-the-loop simulation of multiple drones using the proposed control framework. Watch Now

  • Formation control for a leader and followers using deep continual learning optimal tracking-based control in multi-environment scenarios. Watch Now

Indian Institute of Technology Jodhpur, India

Position: Teaching Assistant.

Tenure: July 2019 - July 2021

During my tenure at IIT Jodhpur, I was deeply involved in my master's thesis project, a DST-funded initiative to develop a nonlinear Integral Sliding Mode Control system for ripple mitigation in microgrids.

My key responsibilities in this major project included:
  • Theoretical design: Conceptualizing and developing a nonlinear controller for microgrids aimed at reducing power ripples.
  • Mathematical proofs: Providing rigorous mathematical proofs of the controller's stability.
  • Hardware integration: Designing the necessary hardware and ensuring seamless software-hardware integration using dSPACE and OPAL-RT.
  • Testing and validation: Conducting extensive testing to ensure the project's objectives were successfully met and documented.
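The core of the integral sliding mode approach can be sketched in a few lines (a toy scalar example with assumed gains, not the thesis controller): an integral sliding surface s = e + λ∫e dt is driven to zero by a switching term, which rejects the periodic ripple disturbance; a tanh boundary layer replaces the discontinuous sign function to attenuate chattering.

```python
import math

def ismc_control(e, e_int, k=5.0, lam=2.0, phi=0.05):
    """Integral sliding surface s = e + lam * integral(e), with a tanh
    boundary layer (width phi) standing in for sign(s) to reduce chattering.
    Gains k, lam, phi are illustrative, not thesis values."""
    s = e + lam * e_int
    return -k * math.tanh(s / phi)

# Drive a scalar error model e_dot = d(t) + u against a sinusoidal "ripple":
dt, e, e_int = 1e-3, 1.0, 0.0
for i in range(20000):                          # 20 s at 1 kHz
    t = i * dt
    d = 0.5 * math.sin(2 * math.pi * 10 * t)    # 10 Hz ripple disturbance
    u = ismc_control(e, e_int)
    e += (d + u) * dt
    e_int += e * dt
print(abs(e) < 0.05)  # True: error held in a small band despite the ripple
```

The switching gain k must dominate the disturbance bound (here 5 > 0.5) for the sliding mode to be reached; the integral term removes the steady-state offset a plain sliding surface would leave.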

Alongside my thesis, I also engaged in several side projects:
  • Real-Time Autonomous Car Development Using Internet of Things and Image Processing: Played an instrumental role in the design and implementation of a real-time autonomous car using Image Processing and IoT technologies.
  • Responsibilities:
    • Developed algorithms for object detection using YOLO and Canny Edge Detection, integral for the autonomous car’s navigation.
    • Implemented IoT connectivity to allow the car to interact with other devices and systems over the internet, enhancing operational efficiency and safety.
    • Verified and validated all systems to ensure the car performed reliably under various operational scenarios.

  • Smart Field Monitoring Using Cyber-physical System: Contributed to a project focused on enhancing agricultural practices through Smart Field Monitoring using Cyber-Physical Systems.
  • Responsibilities:
    • Led the development of a cyber-physical system for intelligent agricultural monitoring to mitigate food loss and environmental impact.
    • Utilized Network Simulator 2 and ToxTrac for designing and validating an intelligent surveillance wireless sensor network.
    • The project significantly advanced the field of smart agriculture through innovative technology applications.

  • Algorithm Development and Validation: Wrote and validated various control algorithms using MATLAB and Keil Embedded Development Tools for ARM Cortex-M, incorporating technologies like YOLO, OpenCV, and Network Simulator 2.

In addition to my project responsibilities, I also taught undergraduate lab courses in Circuits and Systems, providing practical knowledge and hands-on experience to students.

Projects

Funding and Awards

Conferences Attended

Publications

[1] Irfan Ahmad and Karunakar Pothuganti, "Smart Field Monitoring using ToxTrac: A Cyber-Physical System Approach in Agriculture," in 2020 International Conference on Smart Electronics and Communication (ICOSEC).
[2] Irfan Ahmad and Karunakar Pothuganti, "Design implementation of real time autonomous car by using image processing IoT," in 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT).
[3] Muhammad Shabir, Jamalud Din, and Irfan Ahmad Ganie, "Multigranulation roughness based on soft relations," in Journal of Intelligent & Fuzzy Systems.
[4] Elias Aklilu and Irfan Ahmad (2021), "Predicting Factors of Vehicular Accidents using Machine Learning Algorithms," in International Journal of Emerging Trends in Engineering Research.
[5] Irfan Ganie and Sarangapani Jagannathan, "Adaptive Control of Robotic Manipulators using Deep Neural Networks," in 6th IFAC Conference on Intelligent Control and Automation Sciences (ICONS 2022).
[6] Irfan Ganie and Sarangapani Jagannathan, "Continual Optimal Adaptive Tracking of Uncertain Nonlinear Continuous-time Systems using Multilayer Neural Networks," in American Control Conference (ACC 2023).
[7] Irfan Ganie and Sarangapani Jagannathan, "Lifelong Learning Control of Nonlinear Systems with Constraints Using Multilayer Neural Networks with Application to Mobile Robot Tracking," in IEEE CCTA (2023).
[8] Irfan Ganie and S. Jagannathan, "Lifelong learning-based multilayer neural network control of nonlinear continuous-time strict-feedback systems," in International Journal of Robust and Nonlinear Control.
[9] Irfan Ganie and S. Jagannathan, "Lifelong deep learning-based control of robot manipulators," in International Journal of Adaptive Control and Signal Processing.
[10] Irfan Ganie and S. Jagannathan, "Lifelong Learning-based Optimal Trajectory Tracking of Constrained Nonlinear Affine Systems using Deep Neural Networks," in IEEE Transactions on Cybernetics (2024).
[11] Irfan Ganie and S. Jagannathan, "Optimal Trajectory Tracking of Uncertain Nonlinear Continuous-time Strict-Feedback Systems with Dynamic Constraints," in International Journal of Control (2024).
[12] Irfan Ganie and S. Jagannathan, "Continual online learning-based optimal tracking control of nonlinear strict-feedback systems: application to unmanned aerial vehicles," in Complex Engineering Systems (2024).
[13] Irfan Ganie and S. Jagannathan, "Online Continual Safe Reinforcement Learning-based Optimal Control of Mobile Robot Formations," in IEEE CCTA (2024).
[14] "Safety Assist Lifelong Reinforcement Learning Tracking Control of Nonlinear Strict-Feedback Systems Using Multilayer Neural Networks," in Neurocomputing (2024).