SMARTVISION - Road Pothole Detection
DOI: https://doi.org/10.15662/IJEETR.2026.0802098
Meenakshi N, Premkumar C, Swaminathan B, Muthukumaran P
Associate Professor, Department of Electronics and Communication Engineering, Meenakshi Sundararajan Engineering College, Chennai, India
Department of Electronics and Communication Engineering, Meenakshi Sundararajan Engineering College, Chennai, India
Department of Electronics and Communication Engineering, Meenakshi Sundararajan Engineering College, Chennai, India
Department of Electronics and Communication Engineering, Meenakshi Sundararajan Engineering College, Chennai, India
ABSTRACT: Road potholes are a leading cause of traffic accidents, vehicle damage, and congestion, especially in developing regions. The problem is most acute during the monsoon season, when standing water completely hides potholes from view. Conventional detection methods rely on manual inspection or basic image processing, both of which are slow, imprecise, and difficult to scale. This paper introduces SMARTVISION, a hybrid system that detects potholes on both dry and flooded roads. A vehicle-mounted camera captures real-time road images, which the YOLOv8 deep learning model analyzes for visual cues such as edges, texture, and irregular shapes. Because visual detection is unreliable on flooded roads, the system also employs an underwater SONAR sensor that measures road-depth variations using ultrasonic wave reflection. By combining computer vision with sensor-based depth analysis, the proposed system maintains strong detection performance across diverse environmental conditions. Experimental results show higher accuracy, real-time operation, and more robust performance than traditional methods. The system improves transportation safety and supports the development of intelligent transportation systems and smart-city infrastructure.
KEYWORDS: SMARTVISION, Road Safety, Pothole Detection, Computer Vision, AI Surveillance, Smart Cities, Image Processing
I. INTRODUCTION
The transportation system depends on road infrastructure for the safe and efficient movement of people and goods. Potholes are among the most frequent and dangerous road defects, affecting all types of roads in both urban and rural locations. When they go undetected, these surface irregularities cause vehicle damage, traffic disruptions, and major collisions.
Traditional pothole detection relies on extensive manual inspections, which consume excessive time and workforce resources and struggle to cover large road networks. With the technological progress of intelligent transportation systems (ITS), there is growing demand for systems that can discover and track road events automatically and in real time.
Recent advances in computer vision and deep learning enable systems to identify potholes through image recognition. The YOLO family of models performs well for pothole detection in clear weather [8], [9]. However, these systems face major difficulties detecting potholes that become hidden in low-light conditions and heavy rainfall.
This research introduces SMARTVISION, a detection system that combines vision-based deep learning with sensor-based depth measurement. By integrating YOLOv8 with SONAR sensing, the system achieves dependable pothole detection in both visible and non-visible environments, improving performance in real-world situations.
II. RELATED WORK
The literature on pothole detection comprises two main approaches, built on either visual systems or sensor technologies. Vision-based methods apply image processing or deep learning to identify potholes in road images. Early methods used edge detection and texture analysis, which proved vulnerable to changes in lighting conditions [5]. Recent studies employ deep learning systems, including convolutional neural networks (CNNs) and YOLO models, to achieve better accuracy and real-time detection [1], [2].
Sensor-based approaches, on the other hand, use accelerometers, ultrasonic sensors, and vibration analysis to detect road irregularities. These methods successfully detect changes in the road surface; however, their performance degrades when dynamic vehicle movements cause false detections [3].
Some studies have explored smartphone-based data collection combined with deep learning for road damage detection [6]. Most existing systems use either vision-based or sensor-based methods as separate solutions, and very few address the challenge of detecting potholes under flooded conditions.
A significant research gap therefore exists: no unified system detects road conditions in both dry and submerged environments. The proposed SMARTVISION system addresses this limitation by combining deep learning-based visual detection with SONAR-based depth sensing, ensuring reliable performance across diverse environmental conditions.
III. PROPOSED SYSTEM ARCHITECTURE
The proposed SMARTVISION system is a hybrid adaptive system that detects potholes by combining vision-based and sensor-based methods. Its architecture contains multiple operational modules that work together to deliver accurate detection across conditions ranging from dry streets to flooded areas. The system consists of three main components: the vision-based detection module, the sensor-based detection module, and the decision-making module. These components are interconnected to enable real-time data handling and intelligent detection.
Vision-Based Detection Module
The vision-based detection module identifies potholes by analyzing live road images. A vehicle-mounted camera continuously records the road surface, and the frames are processed by the YOLOv8 deep learning model. YOLOv8 is a single-stage object detector that combines the speed and accuracy required for immediate use.
The model examines each video frame for visual evidence of potholes such as irregular shapes, cracks, and texture changes. It draws bounding boxes around detected areas and pairs them with confidence scores indicating how likely each detection is correct, enabling exact positioning of potholes on the roadway. The vision-based module performs well in daylight with full visibility and detects objects with low latency, making it suitable as a real-time tool. Its performance degrades, however, in dim light, in shadow, and when water fills potholes, because visual details become harder to identify.
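As a minimal illustration of this stage, the sketch below filters YOLO-style detections by confidence score. The box format and threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical post-processing of YOLO-style detections: each detection is
# (x1, y1, x2, y2, confidence). Boxes below the confidence threshold are
# discarded before any alert is raised.

def filter_detections(detections, conf_threshold=0.5):
    """Keep only bounding boxes whose confidence meets the threshold."""
    return [d for d in detections if d[4] >= conf_threshold]

# Example: two candidate potholes, one too uncertain to report.
raw = [(120, 340, 260, 420, 0.91),   # clear pothole
       (400, 310, 450, 350, 0.22)]   # likely noise
kept = filter_detections(raw)
print(len(kept))  # 1 confident detection remains
```

In a deployed system this thresholding would run on each video frame before the decision-making module is consulted.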
Sensor-Based Detection Module
The sensor-based detection module complements the vision system by detecting potholes through physical measurements that do not depend on visual data. An underwater SONAR sensor installed at the base of the vehicle uses ultrasonic waves to measure the distance between the vehicle and the road surface via the time-of-flight method. When a pothole is present, there is a sudden increase in the measured distance due to the depression in the road surface. This module becomes essential during heavy rain or flooded road conditions, which make camera systems ineffective.
The system uses two further sensors alongside the SONAR: a rain sensor, which detects water present on the road and indicates flooded conditions, and an accelerometer (MPU6050), which detects vibrations and sudden movements when the vehicle passes over uneven surfaces, providing additional confirmation of pothole presence.
By analyzing data from these various sensors, the module improves detection results while retaining the ability to detect hidden potholes.
Decision-Making and Mode Selection Module
The decision-making module functions as the main control system of SMARTVISION. It combines the outputs of the vision-based and sensor-based modules to produce the final detection result.
The system operates in two modes depending on environmental conditions. In Dry Road Mode, the YOLOv8 model is the primary tool, detecting potholes from visual data. When the rain sensor detects water or visibility is poor, the system switches to Flooded Road Mode and relies on SONAR and the other sensor inputs.
The decision algorithm examines the confidence scores from the vision model together with depth-variation and vibration sensor data. A pothole is confirmed, and an alert generated, when the combined evidence reaches sufficient confidence.
This hybrid decision scheme improves accuracy, decreases false detections, and maintains dependable performance across varied road and weather conditions.
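The two-mode logic described above can be sketched as follows. All thresholds here are illustrative assumptions, not the deployed values.

```python
# Sketch of the hybrid decision logic: in Dry Road Mode the vision
# confidence dominates; when the rain sensor reports water, the system
# relies on SONAR depth change and vibration instead.

def detect_pothole(vision_conf, depth_change_cm, vibration_g, rain_detected):
    """Return True when the active mode reaches sufficient confidence."""
    if rain_detected:
        # Flooded Road Mode: sensor-based evidence only.
        return depth_change_cm > 5.0 or (depth_change_cm > 3.0 and vibration_g > 1.5)
    # Dry Road Mode: vision first, sensors as confirmation.
    return vision_conf >= 0.6 or (vision_conf >= 0.4 and vibration_g > 1.5)

print(detect_pothole(0.85, 0.0, 0.2, rain_detected=False))  # True  (vision)
print(detect_pothole(0.0, 7.2, 0.3, rain_detected=True))    # True  (SONAR)
print(detect_pothole(0.3, 0.0, 0.2, rain_detected=False))   # False
```

The design choice of gating on the rain sensor first keeps the unreliable modality (vision under water) out of the decision entirely rather than merely down-weighting it.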
IV. WORKING PRINCIPLE
At its core, the SMARTVISION system combines visual detection methods with depth-sensing technologies.
The camera system begins operation by capturing the road surface in real time. The YOLOv8 model processes the images to identify potholes from visual elements such as unusual shape patterns and texture details, producing bounding boxes together with confidence scores.
The SONAR sensor performs two functions: it transmits ultrasonic waves and calculates distance through time-of-flight measurement:

D = (v × t) / 2 (1)

where:
• D = distance between sensor and road
• v = speed of sound
• t = time taken for echo to return

A sudden increase in distance indicates a pothole. Additional sensors such as rain sensors and accelerometers provide supporting information regarding environmental conditions and vehicle movement.
The system integrates outputs from both modules to make a final decision, ensuring accurate detection under all conditions.
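The time-of-flight relation D = v·t/2 and the resulting depth estimate can be worked through numerically. The speed of sound and the sample echo times are illustrative assumptions.

```python
# Worked example: distance from echo time, and a pothole flagged by the
# excess distance over the normal road reading.

SPEED_OF_SOUND = 343.0  # m/s in air (underwater propagation would use ~1480 m/s)

def distance_m(echo_time_s, v=SPEED_OF_SOUND):
    """D = v * t / 2 — the wave travels to the surface and back."""
    return v * echo_time_s / 2.0

def pothole_depth_m(d_pothole, d_normal):
    """Depth is the measured distance at the pothole minus the normal distance."""
    return d_pothole - d_normal

d_normal = distance_m(0.00233)   # ~0.40 m to an intact road surface
d_hole = distance_m(0.00262)     # echo returns later over a depression
depth = pothole_depth_m(d_hole, d_normal)
print(round(depth, 3))           # about 0.05 m -> pothole roughly 5 cm deep
```

The division by two is the essential detail: the echo time covers the round trip, so using v·t directly would double every distance.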
Intersection over Union (IoU)
The IoU metric evaluates detection accuracy:

IoU = Area_Overlap / Area_Union (2)

Pothole Depth Calculation

Depth_pothole = D_pothole − D_normal (3)

where:
• D_pothole = measured distance at pothole
• D_normal = normal road distance

Pothole Severity Estimation

Severity = α × Area_pothole + β × Depth_pothole (4)

where α and β are weighting parameters.
V. HARDWARE IMPLEMENTATION
The hardware design of the SMARTVISION system concentrates on three essential elements: simple design, efficient operation, and real-time performance for in-vehicle use. The system combines multiple sensors with processing units to detect potholes reliably across various weather conditions.
The system uses an ESP32 microcontroller as its primary processing unit, which handles all system control functions. It gathers and processes sensor information before sending data to the vision-based system. The use of embedded controllers such as the ESP32 enables efficient real-time data handling in smart transportation applications [3].
A camera module is mounted at the front of the vehicle to continuously capture images of the road surface. The YOLOv8 deep learning model processes these images for pothole detection. Deep learning-based vision approaches have demonstrated exceptional accuracy for detecting road damage under typical operational conditions [1], [9]. To enhance detection in challenging scenarios, an underwater SONAR sensor is installed at the bottom of the vehicle. The sensor uses ultrasonic waves to measure the distance between the vehicle and the road surface. These distance measurements enable the detection of potholes that remain hidden under accumulated water. Sensor-based detection techniques have been widely used to detect road irregularities through physical measurements [5], [6].
The system includes a rain sensor which detects water presence on the road. The system switches to sensor-based methods for detection when flooded conditions are detected, because this approach provides better results during times of low visibility. The MPU6050 accelerometer detects vibrations and sudden movements that occur when the vehicle drives over uneven road surfaces, providing additional evidence of pothole presence and improving detection accuracy. An RCWL motion sensor detects vehicle movement, restricting detection activity to periods of vehicle operation and decreasing false-alarm rates. The system components receive power from a regulated supply, which maintains consistent operation, and data transmission between the sensors and the microcontroller is achieved through proper interfacing. The hardware setup remains small and low-cost while operating effectively in real-world intelligent transportation scenarios.
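For illustration, the IoU and severity formulas given earlier, Eq. (2) and Eq. (4), can be computed as follows. The paper does not specify α and β, so the weights below are purely illustrative.

```python
# Sketch of the evaluation metrics: IoU for detection accuracy and a
# weighted severity score combining pothole area and depth.

def iou(box_a, box_b):
    """Eq. (2): IoU = overlap area / union area for (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    overlap = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return overlap / (area_a + area_b - overlap)

def severity(area_m2, depth_m, alpha=1.0, beta=10.0):
    """Eq. (4): Severity = alpha * Area + beta * Depth (illustrative weights)."""
    return alpha * area_m2 + beta * depth_m

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150, about 0.333
print(severity(0.25, 0.05))                  # 0.25 + 0.5 = 0.75
```

Weighting depth more heavily than area (β > α) reflects that a deep, narrow pothole is typically more hazardous than a wide, shallow one; the actual weights would be tuned from field data.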
SOFTWARE DESIGN
The SMARTVISION system software processes real-time data while detecting objects and integrating sensors with high effi- ciency. The YOLOv8 model is used for vision-based pothole detection. It is trained using annotated road images containing potholes under different conditions. The model processes input images and detects potholes by generating bounding boxes and confidence scores.
The ESP32 microcontroller is programmed using the Arduino IDE to read sensor data from the ultrasonic sensor, rain sensor, accelerometer, and motion sensor. The firmware continuously monitors sensor readings and identifies abnormal patterns such as sudden depth changes and vibrations.
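The firmware's abnormal-pattern check can be mimicked in a few lines of Python; the actual firmware runs as C++ on the ESP32 via the Arduino IDE, and the jump threshold here is an assumed value.

```python
# Illustrative version of the firmware's monitoring rule: scan successive
# ultrasonic readings and flag a sudden depth increase as an abnormal pattern.

def abnormal_jumps(readings_cm, jump_cm=5.0):
    """Indices where the distance rises sharply versus the previous sample."""
    return [i for i in range(1, len(readings_cm))
            if readings_cm[i] - readings_cm[i - 1] > jump_cm]

samples = [40.1, 40.3, 40.2, 47.9, 40.4]  # one pothole-like spike at index 3
print(abnormal_jumps(samples))  # [3]
```

Comparing consecutive samples rather than a fixed baseline keeps the rule robust to slow drift, such as vehicle pitch changes, while still catching the abrupt step a pothole produces.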
The decision-making algorithm combines results from the vision and sensor modules to make its determination: the system confirms a pothole and creates an alert when either module detects one.
Sensor data and detection outcomes are displayed through a serial-monitor interface, enabling users to observe system performance in real time and debug the system.
Because the software runs with low latency, the system detects potholes accurately in real time.
VI. RESULTS AND ANALYSIS
This section evaluates the performance of the SMARTVISION pothole detection system, which combines YOLOv8-based vision detection with ultrasonic, rain, motion, and vibration sensors. The hybrid system is tested under two road conditions: dry and flooded.
The YOLOv8 model detects potholes in real time by identifying visual characteristics such as abnormal shapes and surface breaks. Its output of bounding boxes with confidence scores is shown in Fig. 1. Deep learning approaches such as YOLO have proven effective for real-time object detection tasks [8], [9].
Fig.1. YOLOv8 detecting a pothole with bounding box and confidence score
Fig.2. Training performance showing reduction in loss and improvement in precision, recall, and mAP
The training curves demonstrate how the model learns over the course of training. The loss curves decline continuously, showing that the model successfully acquires the relevant features, while precision, recall, and mAP continue to rise, indicating improving detection ability. Comparable progress has been reported for other deep learning-based road damage detection systems [1], [6].
Fig.3. Confusion matrix representing classification performance
The confusion matrix in Fig. 3 highlights the classification performance of the model. A high number of true positive detections and relatively low false detections indicate that the model performs well in distinguishing potholes from normal road surfaces.
Fig.4. Precision-Recall curve showing trade-off between precision and recall
The precision-recall curve demonstrates the balance between detection accuracy and completeness. The model maintains high precision across different recall values, indicating that it can detect potholes reliably without generating excessive false positives.
Fig.5. F1-score vs confidence curve indicating optimal detection threshold
The F1-score curve helps determine the optimal confidence threshold for detection. It reflects a balanced trade-off between precision and recall, ensuring that the system performs effec- tively under varying conditions.
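Reading an optimal threshold off an F1 curve, as in Fig. 5, amounts to computing F1 at each sampled confidence value and taking the maximum. The precision/recall points below are made-up values, not the paper's measurements.

```python
# Picking the confidence threshold that maximizes F1 = 2PR / (P + R).

def f1(p, r):
    """Harmonic mean of precision and recall (0 when both are 0)."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# (threshold, precision, recall) sampled along a hypothetical curve:
# raising the threshold trades recall for precision.
curve = [(0.2, 0.70, 0.95), (0.4, 0.82, 0.90),
         (0.6, 0.90, 0.80), (0.8, 0.96, 0.55)]
best = max(curve, key=lambda t: f1(t[1], t[2]))
print(best[0])  # threshold with the highest F1
```

With these sample points, the middle thresholds win because F1 penalizes whichever of precision or recall collapses at the extremes.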
Fig.6. Sample pothole detection results using YOLOv8 with bounding boxes
These detection results establish YOLOv8 as an effective detector: the system identifies potholes on various road types and provides confidence ratings, demonstrating its suitability for real-time operation.
Vision-Based Detection Analysis
The vision-based system was tested on actual street footage captured under various weather and daylight conditions. The YOLOv8 model detected potholes with high accuracy, generating bounding boxes and confidence scores.
The results show that the model performs well in dry conditions, achieving high precision and low detection latency. However, its performance decreases where visibility is poor, such as in low lighting or when potholes are filled with water. This limitation is shared by all vision-based systems, which depend on visual cues to function [2].
Sensor-Based Detection Analysis
The sensor-based module was tested with data from four sensors: the ultrasonic sensor, the rain sensor, the MPU6050 accelerometer, and the RCWL motion sensor. The ultrasonic sensor effectively captured depth variations in the road surface, while the accelerometer detected vibrations caused by uneven terrain. In flooded conditions, the rain sensor detected water presence while the ultrasonic sensor showed irregular depth variations, allowing the system to detect potholes that were invisible to the camera. Sensor-based approaches are known to be effective in detecting hidden road irregularities [5].
Although the sensor-based system can generate false positives when the vehicle moves suddenly or road conditions change abruptly, it enhances overall detection reliability under difficult environmental conditions.
Fig.7. ESP32 serial monitor showing confirmed pothole detection
Fig. 7 shows the real-time output of the system, where pothole detection is confirmed based on sensor data.
Comparative Analysis
A comparison between different detection methods is presented in Table I.
The comparison shows that the SMARTVISION system achieves superior accuracy while processing data in real time, and that it detects potholes in flooded areas, which current methods cannot do.
TABLE I PERFORMANCE COMPARISON OF DETECTION METHODS
Overall Analysis
The hybrid method brings substantial improvements to pothole detection. The vision-based module delivers precise detection under standard conditions, while the sensor-based module provides dependable detection in difficult conditions, including flooded roads.
By connecting the two subsystems, SMARTVISION forms a robust solution applicable to multiple situations. It shows better accuracy, shorter response times, and greater dependability, making it ready for use in real-world intelligent transportation systems.
ADVANTAGES
The SMARTVISION pothole detection system offers several important advantages over traditional detection methods. It achieves better accuracy and reliability by combining deep learning-based vision techniques with sensor-based depth analysis, outperforming existing standalone methods [1], [3]. Unlike traditional systems that rely on visual detection alone, SMARTVISION operates under both dry and flooded road conditions [2], [5]. The YOLOv8 model enables immediate detection with minimal delay, as transportation systems require [9], and its visual feature analysis accurately identifies potholes from edges, texture, and surface irregularities [1]. The addition of sensor-based detection strengthens the system further, maintaining performance in low-light and underwater situations. The design is cost-efficient, using low-cost components such as the ESP32 and standard sensors, which enables large-scale deployment. Integrating multiple sensing methods also decreases false positives and improves decision-making [3]. Finally, by identifying potholes at an early stage, SMARTVISION enhances road safety and reduces the risk of accidents and vehicle damage.
VII. LIMITATIONS
Although the SMARTVISION system performs effectively, its limitations must be considered. Its accuracy depends on sensor performance, which environmental factors such as noise, water turbulence, and improper calibration can disrupt [5]. Sudden vehicle movements and road irregularities can trigger false positives in sensor-based detection. The vision-based module is highly accurate under standard conditions, but poor lighting, shadows, and reflections on the road surface reduce its performance [2]. The YOLOv8 model requires high-quality, diverse training datasets to perform at its best; limited data leads to detection problems [1], [9]. Real-time processing at large scale also imposes hardware limitations, since it demands substantial computational resources. Addressing these limitations will guide future research toward a more robust and scalable system.
VIII. FUTURE SCOPE
Although the SMARTVISION system performs effectively, several improvements would enhance its capabilities and real-world operation. Integrating a GPS module would let the system record pothole locations and create maps that help road maintenance teams identify damaged areas for efficient repair. Training the YOLOv8 model on larger and more diverse datasets would improve detection across varied lighting, weather, and road situations [1], [9]. Deploying the system on embedded edge-AI platforms such as the Raspberry Pi or Jetson Nano would make it more compact for real-time vehicle integration. Adding cloud-based data storage and communication would allow pothole data to be collected from multiple vehicles, enabling extensive road-condition monitoring [3]. The system could also be extended to detect other road anomalies, including cracks, speed breakers, and obstacles, turning it into a complete road monitoring system. Integration with smart-city infrastructure would enable automated road maintenance and traffic-management improvements that support intelligent transportation system development [2]. Finally, by analyzing pothole occurrence, depth changes, and long-term surface deterioration, the system could be extended to estimate or predict the remaining life of a road surface [10].
IX. CONCLUSION
The hybrid pothole detection system combines camera vision with depth-sensing sensors to achieve accurate detection across different road conditions. On dry roads, the YOLOv8 deep learning model detects potholes through real-time image analysis, extracting visual features of shape, texture, and surface irregularities [1], [9].
To overcome the problems that vision-based techniques face in flooded areas, the system includes an underwater SONAR sensor. Using ultrasonic wave reflection to measure road-depth changes, it detects potholes that are not visible. Combining depth measurement with motion and vibration data further improves detection performance [5], [6].
The experimental results show that the hybrid method provides better detection performance and reliability than traditional single-method systems. The vision-based module performs well in clear weather, while the sensor-based module maintains detection capability in low-visibility and flooded situations.
The SMARTVISION system thus provides a practical, efficient solution for real-time pothole detection. It improves road safety, decreases vehicle damage, and facilitates the development of intelligent transportation systems and smart infrastructure [2], [3].
REFERENCES
1. M. Y. Manu, M. J. Prasanna Kumar, K. Anand, and S. V. Shashikala, “Pothole Detection Using Deep Learning Methods,” in Proceedings of the IEEE Bangalore Humanitarian Technology Conference (B-HTC), IEEE, 2025.
2. R. S. Sandhya Devi, A. Jeni Santina, S. Swathi, and S. K. Tamilselvan, “Edge-Enhanced YOLOv8 for Adaptive Real-Time Pothole Detection in Smart Road Networks,” in Proceedings of the IEEE International Con- ference on Smart Technologies, Communication and Robotics (STCR), IEEE, 2025.
3. D. Jyothirmai, S. Sai Charan Reddy, N. Sai Karthikeya, S. Dinesh Reddy, P. Jagadeesh, and R. Pitchai, “Pothole Detection and Enhanced Road Safety Using Machine Learning,” in Proceedings of the IEEE In- ternational Conference on Electronics and Sustainable Communication Systems (ICESC), IEEE, 2024.
4. S. S. Gadekar, J. Musale, R. Gaikwad, S. M. Nalawade, S. M. Kambale, and A. A. Phatak, “Real-Time Pothole Detection Using Deep Learning,” SSRN Electronic Journal, Elsevier, 2024.
5. T. Nienaber, M. Booysen, and R. Kroon, “Detecting Potholes Using Sim- ple Image Processing Techniques and Real-Time Data,” IEEE Intelligent Transport Systems Applications, IEEE, 2015.
6. H. Maeda, Y. Sekimoto, T. Seto, T. Kashiyama, and H. Omata, “Road Damage Detection and Classification Using Deep Neural Networks with Smartphone Images,” Computer-Aided Civil and Infrastructure Engineering, Wiley, 2018.
7. S. Zhang, Y. Wen, X. Bian, Z. Lei, and S. Z. Li, “Single-Shot Refinement Neural Network for Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2018.
8. J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,”
arXiv preprint arXiv:1804.02767, Cornell University Library, 2018.
9. G. Jocher, A. Chaurasia, and J. Qiu, “YOLOv8: Next-Generation Real-Time Object Detection Model,” Ultralytics Documentation and Research, Ultralytics Inc., 2023.
10. C. Nagarajan and M. Madheswaran, “Stability Analysis of Series Parallel Resonant Converter with Fuzzy Logic Controller Using State Space Techniques,” Electric Power Components and Systems, Taylor & Francis, Vol. 39 (8), pp. 780-793, May 2011. DOI: 10.1080/15325008.2010.541746
11. C. Nagarajan and M. Madheswaran, “Experimental Verification and Stability State Space Analysis of CLL-T Series Parallel Resonant Converter,” Journal of Electrical Engineering, Vol. 63 (6), pp. 365-372, Dec. 2012. DOI: 10.2478/v10187-012-0054-2
12. C. Nagarajan and M. Madheswaran, “Performance Analysis of LCL-T Resonant Converter with Fuzzy/PID Using State Space Analysis,” Electrical Engineering, Springer, Vol. 93 (3), pp. 167-178, September 2011. DOI: 10.1007/s00202-011-0203-9
13. S. Tamilselvi, R. Prakash, and C. Nagarajan, “Solar System Integrated Smart Grid Utilizing Hybrid Coot-Genetic Algorithm Optimized ANN Controller,” Iranian Journal of Science and Technology - Transactions of Electrical Engineering, 2025. DOI: 10.1007/s40998-025-00917-z
14. S. Tamilselvi, R. Prakash, and C. Nagarajan, “Adaptive Sliding Mode Control of Multilevel Grid-Connected Inverters Using Reinforcement Learning for Enhanced LVRT Performance,” Electric Power Systems Research, Vol. 253, 112428, 2026. DOI: 10.1016/j.epsr.2025.112428
15. S. Thirunavukkarasu and C. Nagarajan, “Performance Investigation on OCF and SCF Study in BLDC Machine Using FTANN Controller,” Journal of Electrical Engineering and Technology, Vol. 20, pp. 2675-2688, 2025. DOI: 10.1007/s42835-024-02126-w
16. C. Nagarajan, M. Madheswaran, and D. Ramasubramanian, “Development of DSP Based Robust Control Method for General Resonant Converter Topologies Using Transfer Function Model,” Acta Electrotechnica et Informatica, Vol. 13 (2), pp. 18-31, April-June 2013. DOI: 10.2478/aeei-2013-0025
17. C. Nagarajan and M. Madheswaran, “DSP Based Fuzzy Controller for Series Parallel Resonant Converter,” Frontiers of Electrical and Electronic Engineering, Springer, Vol. 7 (4), pp. 438-446, Dec. 2012. DOI: 10.1007/s11460-012-0212-0
18. C. Nagarajan and M. Madheswaran, “Experimental Study and Steady State Stability Analysis of CLL-T Series Parallel Resonant Converter with Fuzzy Controller Using State Space Analysis,” Iranian Journal of Electrical & Electronic Engineering, Vol. 8 (3), pp. 259-267, September 2012.
19. C. Nagarajan and M. Madheswaran, “Analysis and Simulation of LCL Series Resonant Full Bridge Converter Using PWM Technique with Load Independent Operation,” in Proceedings of ICTES'08, an IEEE/IET International Conference organized by M.G.R. University, Chennai, Vol. 1, pp. 190-195, Dec. 2007.
20. Suganthi Mullainathan, Ramesh Natarajan, “An SPSS and CNN modelling based quality assessment using ceramic materials and membrane filtration techniques”, Revista Materia (Rio J.) Vol. 30, 2025, DOI: https://doi.org/10.1590/1517-7076-RMAT-2024-0721
21. M Suganthi, N Ramesh, “Treatment of water using natural zeolite as membrane filter”, Journal of Environmental Protection and Ecology, Volume 23, Issue 2, pp: 520-530,2022
22. K. Gopalakrishnan, S. Khaitan, A. Choudhary, and A. Agrawal, “Deep Convolutional Neural Networks with Transfer Learning for Computer Vision-Based Pavement Distress Detection,” Construction and Building Materials, Elsevier, 2017.
23. Anand, L. (2023). An Intelligent AI and ML–Driven Cloud Security Framework for Financial Workflows and Wastewater Analytics. International Journal of Humanities and Information Technology, 5(02), 87-94.
24. Murugeshwari, B., Sudharson, K., Panimalar, S. P., Shanmugapriya, M., & Abinaya, M. (2020). SAFE–Secure Authentication in Federated Environment using CEG Key code.
25. Sugumar, R. (2025). Cyber-Secure Cloud Architecture Integrating Network and API Controls for Risk-Aware SAP Healthcare Data Platforms. International Journal of Humanities and Information Technology, 7(4), 53-60.
26. Sharma, K. P., Kumar, I., Singh, P. P., Anbazhagan, K., Albarakati, H. M., Bhatt, M. W., ... & Rana, A. (2024). Advancing spacecraft rendezvous and docking through safety reinforcement learning and ubiquitous learning principles. Computers in Human Behavior, 153, 108110.
27. Anand, L., Tyagi, R., & Mehta, V. (2024, January). Food recognition using deep learning for recipe and restaurant recommendation. In Proceedings of Eighth International Conference on Information System Design and Intelligent Applications (pp. 269-279). Springer Nature Singapore.
28. Soundappan, S. J. (2020). Big Data Analytics in Healthcare: Applications for Pandemic Forecastin. International Journal of Advanced Research in Computer Science & Technology (IJARCST), 3(1), 2248-2253.
29. Anbazhagan, K. (2024). Trustworthy and Adaptive AI Systems for Enterprise Analytics Cybersecurity and Decision Optimization Using API-First and Cloud-Native Architectures. International Journal of Technology, Management and Humanities, 10(03), 65-74.
30. Mathew, A. (2021). Edge Computing and its convergence with blockchain in 6G: Security challenges. Int. J. Comput. Sci. Mob. Comput, 10(8), 8-14.
31. Gopinathan, V. R. (2025). AI-Powered Kubernetes Orchestration for Complex Cloud-Native Workloads. International Journal of Research Publications in Engineering, Technology and Management (IJRPETM), 8(6), 13215-13225.
32. Mathew, A. (2023). Learning Metaverse Powered by Artificial Intelligence. Recent Progress in Science and Technology Vol. 4, 4, 134-141.
33. Garg, V. K., Soundappan, S. J., & Kaur, E. M. (2020). Enhancement in intrusion detection system for WLAN using genetic algorithms. South Asian Research Journal of Engineering and Technology, 2(6), 62–64. https://doi.org/10.36346/sarjet.2020.v02i06.003
34. Sugumar, R. (2025). Secure and Explainable AI Systems in Cloud-Based Applications: Bridging Trust and Performance. International Journal of Engineering & Extended Technologies Research (IJEETR), 7(4), 10328-10335.
35. Mathew, A., & Alex, H. (2023). From Code to Cure: The Role of AI in Accelerating Drug Discovery. Advances and Challenges in Science and Technology Vol. 2, 94-102.
36. Vimal, V. R., John Justin Thangaraj, S., Narayanan, L. K., Alagu Thangam, S., Loganayagi, S., & Balakrishnan, S. (2025, April). Enhanced Phishing Detection and Classification Using an Ensemble Machine Learning Approach for URL Analysis. In International Conference on Information and Communication Technology for Intelligent Systems (pp. 229-239). Springer Nature Singapore.
37. Mathew, A. (2021). Obfuscation Techniques for Magecart Detection and Prevention. International Journal of Computer Science and Mobile Computing, 10(2), 39-44.
38. Soundappan, S. J. (2026). Building Trustworthy AI: Explainability and Security in Modern Cloud-Native Data-Driven Ecosystem Platforms. International Journal of Engineering & Extended Technologies Research (IJEETR), 8(2), 570-579.