IJIRTM, Volume 10, Special Issue (2026) | Impact Factor: 5.86

1

AI-Powered Incident Detection for Smart City Surveillance Using Computer Vision and Automated Emergency Response

πŸ‘₯ Bhavya Kaushik, Archit Kumar, Shreyash Bhardwaj, Dr.Anju Saini

πŸ“™ Abstract : Rapid urbanization has made conventional surveillance systems inefficient for real-time emergency response. This paper presents an AI-based smart city incident detection system that processes images or videos to detect incidents, estimate their severity, locate nearby hospitals or police stations, and suggest immediate actions. The system comprises a YOLOv8n-based detection model, rule-based severity classification, OpenStreetMap-based geolocation, PDF report generation, and automated email alerts. Evaluated on 9,355 samples across six incident categories, the system achieves 92.1% accuracy with an average latency of 1.8 seconds. The results demonstrate its effectiveness for real-time deployment in smart city environments.

πŸ”– Keywords :️ Smart City; Incident Detection; Computer Vision; YOLOv8; Severity Classification; Emergency Response; Geolocation; FastAPI; OpenStreetMap; Automated Alerting.
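The rule-based severity stage described above can be sketched as a mapping from detector outputs to a severity level; the incident class names and the 0.5 confidence cutoff below are illustrative assumptions, not the paper's actual rules.

```python
# Hypothetical rule-based severity classifier: maps detected incident
# classes and confidences to a severity level. Class sets and the
# confidence threshold are illustrative, not the paper's values.

CRITICAL = {"fire", "collision"}
MODERATE = {"flood", "fight"}

def classify_severity(detections):
    """detections: list of (class_name, confidence) pairs from the detector."""
    level = "low"
    for cls, conf in detections:
        if cls in CRITICAL and conf >= 0.5:
            return "critical"          # any confident critical class wins
        if cls in MODERATE and conf >= 0.5:
            level = "moderate"
    return level

print(classify_severity([("flood", 0.8), ("fire", 0.9)]))  # critical
```

A real deployment would derive the class sets from the six incident categories the detector is trained on.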

2

Evolution of Computer Vision: From Traditional Techniques to Sequential Deep Learning

πŸ‘₯ Mayank Paul, Dr.Preeti Malhotra

πŸ“™ Abstract : Most existing CCTV setups today are fundamentally reactive: they only record crimes for later review instead of inhibiting them as they happen. Dependence on human operators for continuous monitoring is also impractical, because the rapid onset of fatigue leads to missed threats. To overcome this, our paper examines the shift towards proactive security, notably through a hybrid framework we call the 'Falcon AI defence system'. Contrary to standard systems that only detect objects, our approach employs two distinct deep learning pipelines in parallel: an Identity Module and a Behavior Module. While the first manages facial recognition, the second uses MediaPipe for skeletal landmark extraction paired with a Long Short-Term Memory (LSTM) network. By assessing 30-frame sequences each second, the system can reliably classify complex actions such as fighting or falling. Our findings indicate that merging identity tracking with real-time behavioral alerts provides a cost-effective, scalable defense for smart city infrastructures, effectively turning passive cameras into intelligent security tools.

πŸ”– Keywords :️ Smart-Surveillance, Action Recognition, Proactive Surveillance Systems, Deep-Learning, LSTM, MediaPipe.
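The Behavior Module's 30-frame sequencing can be sketched as a sliding window that accumulates per-frame landmarks and emits a full window for classification; the window size comes from the abstract, while the landmark vector shape is an illustrative assumption.

```python
from collections import deque

# Sliding 30-frame window of skeletal landmarks, as would feed an LSTM.
# The window size (30) is from the abstract; the landmark format is
# illustrative (e.g. flattened (x, y) coordinates per pose landmark).

WINDOW = 30

class SequenceBuffer:
    def __init__(self, window=WINDOW):
        self.frames = deque(maxlen=window)

    def push(self, landmarks):
        """Append one frame of landmarks; return a full window or None."""
        self.frames.append(landmarks)
        if len(self.frames) == self.frames.maxlen:
            return list(self.frames)   # ready for sequence classification
        return None

buf = SequenceBuffer()
for _ in range(30):
    seq = buf.push([0.0] * 66)         # e.g. 33 (x, y) pose landmarks
print(len(seq))  # 30
```

Because the deque is bounded, the buffer naturally slides: frame 31 would evict frame 1 and yield the next overlapping window.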

3

Curevia: An Integrated AI Framework for Early Detection of Neurological Disorders, Brain Tumours, and Ocular Diseases Using EEG, MRI, and Retinal Imaging

πŸ‘₯ Lakshay, Ritika, Nancy, Param, Rajni

πŸ“™ Abstract : The exponential growth of neurological and ophthalmic disorders worldwide has intensified the demand for diagnostic frameworks that are precise, scalable, and accessible beyond the confines of specialist-intensive clinical settings. This paper presents Curevia, a multi-modal artificial intelligence platform engineered to deliver early, automated identification of brain tumors, ocular pathologies, and neurological anomalies by harnessing three complementary data streams: Magnetic Resonance Imaging (MRI), Optical Coherence Tomography (OCT) retinal scans, and Electroencephalogram (EEG) signals. The architecture employs Convolutional Neural Networks for spatial feature extraction from volumetric brain scans and retinal images, while Recurrent Neural Networks and Long Short-Term Memory networks decode temporal patterns embedded in EEG time-series data. A data fusion mechanism integrates predictions across all three modalities, yielding a holistic diagnostic profile that surpasses the capability of any single-modality approach. Experimental evaluation demonstrates tumor segmentation accuracy of 94%, retinal disease classification accuracy of 92%, and per-study processing times of six to ten seconds, substantially outperforming conventional baseline CNN models in both accuracy and processing efficiency. The platform further incorporates an interactive, web-based interface built with Streamlit, enabling seamless image and signal uploads, real-time result visualization, and personalized clinical recommendations. This work establishes Curevia as a clinically viable, computationally efficient, and ethically compliant diagnostic assistant capable of bridging the accessibility gap between specialist expertise and patient need.

πŸ”– Keywords :️ Convolutional Neural Networks, EEG Signal Processing, Brain Tumor Detection, Ocular Disease Classification, Multi-modal Medical AI, Deep Learning Healthcare.
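The cross-modality fusion step described above can be sketched as a weighted average of per-modality class probabilities; the modality weights and two-class setup below are illustrative assumptions, not Curevia's actual fusion rule.

```python
# Illustrative late-fusion step: per-modality class probabilities
# (MRI, OCT, EEG) are combined by a weighted average. Weights and
# class count are assumptions for the sketch.

def fuse(preds, weights):
    """preds: dict modality -> list of class probabilities."""
    n = len(next(iter(preds.values())))
    fused = [0.0] * n
    for modality, probs in preds.items():
        w = weights[modality]
        for i, p in enumerate(probs):
            fused[i] += w * p
    total = sum(weights.values())
    return [f / total for f in fused]

preds = {"mri": [0.7, 0.3], "oct": [0.6, 0.4], "eeg": [0.5, 0.5]}
out = fuse(preds, {"mri": 0.5, "oct": 0.3, "eeg": 0.2})
print([round(x, 2) for x in out])  # [0.63, 0.37]
```

Normalizing by the weight sum keeps the fused output a valid probability distribution even if the weights do not sum to one.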

4

Corona Virus Tracking System for Infected Patients

πŸ‘₯ Shruti, Sahil, Dishita, Khushi, Harkesh Kumar

πŸ“™ Abstract : The recent coronavirus disease outbreak (COVID-19) has been recognized as one of the major health crises of the 21st century. India is an overcrowded nation with diverse demographic and socio-economic conditions, which makes managing the outbreak challenging. In this research paper, we therefore use a multi-dimensional data analytical approach to explore the impacts of, and control measures for, the COVID-19 pandemic in India. Multiple data analytic approaches are employed using quantitative data including the infection rate, mortality rate, recovery rate, reproduction number (R₀), and doubling time, among others. Moreover, an analysis of lockdown and non-pharmaceutical interventions in controlling disease transmission has also been conducted. The findings show that factors including urbanization, population density, and economic activity were some of the critical determinants of virus transmission. The lockdown was effective in the initial stage; however, its impact could not be sustained owing to insufficient public cooperation.

πŸ”– Keywords :️ COVID-19, Data Analysis, Pandemic, India, Machine Learning, Visualization.
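Two of the metrics named above can be computed directly from a cumulative case series using the standard exponential-growth formulas (doubling time = ln 2 / growth rate); the sample figures below are made up for illustration, not data from the study.

```python
import math

# Daily exponential growth rate and doubling time from cumulative
# cases. Standard epidemiological formulas; the sample data is
# illustrative, not from the paper.

def growth_rate(cases):
    """Mean daily exponential growth rate from cumulative case counts."""
    rates = [math.log(cases[i] / cases[i - 1]) for i in range(1, len(cases))]
    return sum(rates) / len(rates)

def doubling_time(cases):
    return math.log(2) / growth_rate(cases)

cases = [100, 120, 144, 173]           # roughly 20% daily growth
print(round(doubling_time(cases), 1))  # 3.8 (days)
```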

5

Tradesense - AI-Powered Stock Analysis and Trade Simulation Platform

πŸ‘₯ Vansh, Manvi, Abhishek Gulia

πŸ“™ Abstract : Financial markets have developed rapidly, and the increasing involvement of retail investors has created high demand for smart, user-friendly decision support systems. Conventional financial analysis tools generally demand substantial domain expertise and do not offer intuitive interaction for non-expert users. To address this, this paper proposes TradeSense, an AI-based financial analytics system that combines Large Language Models (LLMs), real-time financial data APIs, and cloud computing to enable investment analysis from natural language input. Users type financial queries in plain language, which a high-performance LLM deployed through the Groq API interprets to understand intent and produce formatted analytical queries. The solution gathers real-time financial data through the Yahoo Finance API, chosen for its flexibility and permissive rate limits, and applies predictive analytics to estimate possible investment performance based on historical data. TradeSense runs on a scalable architecture that uses Firebase for authentication, database, and frontend hosting, and Hugging Face as the backend server. The system also keeps a history of user queries to support ease of use and personalization. The combination of natural language processing, real-time data retrieval, and predictive modeling within a single system distinguishes TradeSense from current solutions and can make financial analytics more approachable, interactive, and efficient for a broad spectrum of users.

πŸ”– Keywords :️ TradeSense, Financial Analytics, Stock Market Prediction, Large Language Models (LLMs), Natural Language Processing (NLP), Groq API, Yahoo Finance API, Firebase, Cloud Computing, Machine Learning, Investment Analysis, Decision Support System.

6

Flow Sync: A Workflow Automation System

πŸ‘₯ Shubhaam, Sanyam, Hardik, Satyam, Harkesh

πŸ“™ Abstract : Manual coordination of disconnected digital tools is repetitive, time-consuming, and introduces avoidable errors into day-to-day work. FlowSync is a SaaS workflow automation platform designed to eliminate this overhead. It integrates Google Drive, Slack, Notion, and Discord through a webhook-based backend, allowing users to build event-driven pipelines with a drag-and-drop interface, without writing a single line of code. When a file arrives in Google Drive, for example, the system can automatically send a Slack notification and create an entry in Notion within seconds, without human intervention. The platform is built on Next.js, Node.js, Express.js, PostgreSQL, and Prisma ORM, with Clerk providing authentication. Experiments under various automation conditions demonstrated reliable trigger detection and fast action execution. The current implementation does not include artificial intelligence, which is planned as a future extension for workflow suggestions.

πŸ”– Keywords :️ Workflow Automation, Trigger-Action Systems, SaaS Platform, Webhook Integration, OAuth 2.0, API Integration, Node.js, Drag-and-Drop Interface.
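The trigger-action pattern behind such pipelines can be sketched as a registry that fans a webhook event out to every bound action; the event name and action payloads below are illustrative, not FlowSync's actual API.

```python
# Minimal trigger-action registry in the spirit of an event-driven
# pipeline: a webhook event fires every action registered for it.
# Event and action names are illustrative stand-ins.

class Workflow:
    def __init__(self):
        self.pipelines = {}            # trigger name -> list of actions

    def on(self, trigger, action):
        self.pipelines.setdefault(trigger, []).append(action)

    def fire(self, trigger, payload):
        """Run every action bound to the trigger; return their results."""
        return [action(payload) for action in self.pipelines.get(trigger, [])]

wf = Workflow()
wf.on("drive.file_created", lambda p: f"slack: new file {p['name']}")
wf.on("drive.file_created", lambda p: f"notion: logged {p['name']}")
print(wf.fire("drive.file_created", {"name": "report.pdf"}))
```

A production system would run the actions asynchronously and persist failures for retry, but the dispatch shape is the same.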

7

Gesture to Voice: Translating the Unspoken Using Real-Time Indian Sign Language Recognition

πŸ‘₯ Ishu, Jatin, Pranav, Sourabh, Dr.Anju Saini

πŸ“™ Abstract : Communication between deaf and hearing individuals remains a significant challenge due to the limited understanding of sign language among the general population. Existing sign language recognition systems primarily focus on isolated gesture recognition or text-based translation, which restricts natural and real-time interaction. Additionally, many approaches suffer from limitations such as dependency on controlled environments, sensitivity to background variations, and lack of multilingual support. This paper presents a novel real-time gesture-to-voice communication system designed to enable seamless and natural interaction between sign language users and non-signers. The proposed system utilizes advanced deep learning and computer vision techniques to recognize continuous sign language gestures and convert them directly into speech output in real time. Unlike traditional methods, the system is capable of handling sentence-level continuous conversation, making it more suitable for practical communication scenarios. The proposed model supports multi-language datasets, including Indian Sign Language (ISL), enhancing its adaptability and inclusivity across diverse user groups. Furthermore, the system is robust to variations in background, lighting conditions, and user differences, ensuring reliable performance in real-world environments. The architecture is optimized for low latency and efficient processing, making it suitable for real-time deployment. Experimental results demonstrate that the proposed system achieves high accuracy and stability in continuous gesture recognition while maintaining real-time responsiveness. The integration of gesture recognition with speech generation significantly improves usability compared to existing text-based systems. This work contributes toward bridging the communication gap and provides a scalable, efficient, and user-friendly solution for assistive communication technologies.

πŸ”– Keywords :️ Indian Sign Language; gesture recognition; real-time communication; deep learning; MediaPipe; text-to-speech synthesis; CNN-LSTM.

8

A Self-Healing MLOps Framework for Autonomous Detection and Recovery in Production Machine Learning Systems

πŸ‘₯ Moaksha, Jigyanshu, Animesh, Manan, Anisha

πŸ“™ Abstract : Data drift, model performance degradation, pipeline failures, and infrastructure instability are issues likely to be encountered by machine learning (ML) systems in production. Traditional Machine Learning Operations (MLOps) solutions rely on human supervision and intervention, which leads to longer response times and lower system reliability. In this paper, a self-healing MLOps system is presented, designed to autonomously detect and recover from such failures. The suggested framework incorporates continual monitoring, anomaly detection, and automated remediation into the MLOps lifecycle. Data validation, model performance tracking, and infrastructure health monitoring are the key components of the system. Anomaly detection methods identify aberrations in data distributions, model predictions, and system behaviour. When an issue is detected, the system initiates self-healing measures including automated model retraining, re-running pipelines, rolling back to the last known-stable model versions, and scaling resources. Feedback loops, model versioning, and continuous integration and deployment (CI/CD) pipelines make the design flexible and resilient. Experimental findings indicate that the suggested self-healing mechanism decreases system downtime, increases model stability, and boosts overall operational efficiency in comparison to conventional MLOps systems, contributing to the development of autonomous, self-managing production ML systems.

πŸ”– Keywords :️ Anomaly detection, automated retraining, CI/CD, MLOps, model drift, machine learning operations, self-healing systems.
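The monitoring half of such a loop can be sketched as a rolling z-score check on a performance metric that triggers a remediation hook when it fires; the 3-sigma threshold is a common convention, and the accuracy history below is made up, not the paper's data.

```python
import statistics

# Rolling z-score anomaly check on a model metric, triggering a
# remediation action when it fires. The 3-sigma threshold and the
# sample accuracy history are illustrative assumptions.

def detect_anomaly(history, value, z_threshold=3.0):
    """True if value deviates from history by more than z_threshold sigmas."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return std > 0 and abs(value - mean) / std > z_threshold

history = [0.91, 0.92, 0.90, 0.93, 0.91]   # rolling model accuracy
if detect_anomaly(history, 0.62):
    print("trigger: retrain")               # remediation hook fires
```

In a full framework the same check would guard data distributions and infrastructure metrics, with the hook choosing between retraining, rollback, and scaling.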

9

Intelligent Automation of Digital Workflows Using AI

πŸ‘₯ Bablu, Ajay, Rakhi, Param, Dr.Preeti Malhotra

πŸ“™ Abstract : Artificial Intelligence (AI) is steadily reshaping how digital work is carried out. Unlike traditional automation, which depends on predefined rules, modern AI systems can learn from data, adapt to changing inputs, and improve performance over time. This paper presents Taskify, a web-based platform designed to automate workflows using AI-driven logic and no-code principles. The system enables users to design, execute, and optimize workflows with minimal technical expertise. The study highlights the system architecture, methodology, and advantages of combining workflow orchestration with intelligent services. Results suggest improved efficiency, flexibility, and scalability in managing digital processes.

πŸ”– Keywords :️ AI automation, workflow orchestration, no-code platforms, large language models, Taskify.

10

AI-Based Hand Gesture Controlled Cursor System

πŸ‘₯ Harsh Vardhan, Vansh Gupta, Ashish, Nishant, Richa Singh

πŸ“™ Abstract : This paper presents a new approach to controlling mouse movement with a live webcam. Two common ways to extend a mouse are to add more buttons or to change its tracking mechanism; instead of redesigning the hardware, we propose using a camera and computer vision technology to control mouse functions (scrolling and clicking), and we show that this can perform all the functions provided by current mouse devices. The creation of such a mouse control system is demonstrated in this project. Virtual mice have also been developed to boost the performance of virtual reality (VR) and augmented reality (AR) applications; their potential lies in the captivating, real-world experience they provide users, which traditional mouse-and-keyboard input cannot match.

πŸ”– Keywords :️ Hand Gesture Recognition, Virtual Mouse, Finger Movement Detection, AI-Based Input System, Image Processing, Hand Landmark Detection.

11

Multimodal Deepfake Detection Using Visual, Audio, and Synchronization-Based Features

πŸ‘₯ Yuvraj Sharma, Sanyam Malhotra, Chirag Sharma, Akshat Chawla, Vijay Bharti

πŸ“™ Abstract : Deepfake technology has transformed how digital media can be created and manipulated. Deep learning has made it easy to produce images, audio, and video that appear completely real, often convincing enough that viewers cannot tell the difference no matter how hard they try. Such content is now widespread on social media, making fakes increasingly difficult to spot and posing a real problem for anyone concerned about truth and security online. This motivates the search for better deepfake detection methods. In this paper, we take a close look at deep learning-based deepfake detection methods developed to counter deepfake generation, grouping current approaches into four main categories: image-based, video-based, audio-based, and multimodal approaches that combine media types.

πŸ”– Keywords :️ Deepfake Detection, Deep Learning, Image-based Detection, Video-based Detection, Audio-based Detection, Multimodal Analysis.

12

ChronoCampus – AI Powered College Resource Optimization System

πŸ‘₯ Ashika, Gungun, Payal Kharb, Shally Nagpal

πŸ“™ Abstract : Managing academic activities efficiently involves proper scheduling, resource utilization, and access control. Conventional ways of generating timetables are lengthy and prone to conflicts, and current systems tend to be unscalable and rigid. In this paper, the authors propose ChronoCampus, a full-stack Smart Classroom Management System based on artificial intelligence and role-based design that automates academic processes. The system generates complete weekly schedules in a single inference with a Large Language Model (Google Gemini), drawing on institutional information including courses, faculty, and room availability while meeting specific constraints. A four-level Role-Based Access Control (RBAC) scheme secures access, supported by JSON Web Token (JWT) authentication and middleware-based authorization. Moreover, an AI-enabled chatbot responds to academic inquiries in real time, improving usability. The system is scalable and responsive, built with React, Node.js, and MongoDB. Experimental results show rapid timetable generation with a high degree of constraint accuracy, making the system effective for modern academic institution management.

πŸ”– Keywords :️ Educational resource management, role-based access, academic scheduling, constraint based optimization, Genetic Algorithm.
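A four-level RBAC check of the kind described above can be sketched as an ordered role hierarchy where each role inherits the access of the roles below it; the role names and their ordering here are assumptions, not ChronoCampus's actual roles.

```python
# Illustrative four-level RBAC check: a role may access anything at
# or below its own level. Role names and ordering are assumptions.

LEVELS = ["student", "faculty", "coordinator", "admin"]

def can_access(user_role, required_role):
    """True if user_role is at or above required_role in the hierarchy."""
    return LEVELS.index(user_role) >= LEVELS.index(required_role)

print(can_access("coordinator", "faculty"))  # True
print(can_access("student", "admin"))        # False
```

In the real system this comparison would sit in middleware, running after the JWT is verified and its role claim extracted.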

13

Personalised Learning App for Special Children

πŸ‘₯ Ishita, Anshika, Tarak, Hemlata

πŸ“™ Abstract : Special children, such as those with cerebral palsy or similar motor disabilities, are often able to write but struggle with the synchronization between the brain and motor organs such as the hands or legs. Writing is therefore used in schools as a therapy to train these motor skills, but teachers cannot focus on every child individually because they assign common homework to the whole class. This app personalizes that process: each student receives a separate learning strategy, a separate learning platform, and a separate analysis of whether their motor skills are improving or where they are lagging behind. Teachers can also interact with an AI feature, querying it against the student data stored in the database we have built.

πŸ”– Keywords :️ Learning, Children, Softmax.

14

IntelliML: An AutoML Platform for End-to-End Data Science Lifecycle with Explainable AI and Conversational Interface

πŸ‘₯ Anil Paneru, Mahesh Karki, Rahul Mishra, Ritika

πŸ“™ Abstract : Complex preprocessing, feature engineering, and model opacity are barriers to machine learning adoption. Unlike current AutoML platforms, which are limited to model search, IntelliML autonomously runs the complete ML pipeline via a modular three-tier architecture and a Model-Context Protocol in a multi-model coordination setup. Key innovations include a composite ranking algorithm that penalizes overfitting and latency, multi-layer security middleware, fault-tolerant circuit breaking, instance-level explainability based on SHAP, class-imbalance handling via SMOTE, live training feedback over WebSockets, and a voice-enabled conversational assistant. Experiments show that IntelliML trains, ranks, and explains a variety of models with minimal expert effort, confirming the convergence of AutoML, XAI, and conversational AI in integrated ML workflows.

πŸ”– Keywords :️ Automated Machine Learning, Explainable AI, SHAP, Conversational Interface, Machine Learning Workflow.
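A composite ranking score of the kind the abstract describes can be sketched as validation accuracy penalized by the train/validation gap (overfitting) and by inference latency; the penalty weights and model figures below are illustrative assumptions, not IntelliML's actual formula.

```python
# Sketch of a composite model-ranking score: validation accuracy minus
# penalties for overfitting (train/val gap) and latency. Weights and
# model numbers are illustrative assumptions.

def composite_score(val_acc, train_acc, latency_s,
                    overfit_w=0.5, latency_w=0.1):
    overfit_penalty = max(0.0, train_acc - val_acc)
    return val_acc - overfit_w * overfit_penalty - latency_w * latency_s

models = {
    "gbm":  composite_score(0.90, 0.99, 0.20),   # accurate but overfit
    "tree": composite_score(0.88, 0.90, 0.05),
}
print(max(models, key=models.get))  # tree
```

The point of the penalty terms is visible in the example: the nominally more accurate model loses the ranking because its generalization gap and latency are worse.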

15

Real-Time Emotion Recognition Using Multimodal Deep Learning

πŸ‘₯ Mridul, Amit, Monty, Rajni

πŸ“™ Abstract : Emotion recognition is used in many ways in human-computer interaction, mental health, intelligent agents, and adaptive AI systems. In this paper we propose a real-time multimodal emotion recognition system that uses facial expressions, speech signals, body language, and textual inputs with deep learning. It is based on a combination of CNN and LSTM networks, transformer-based NLP models, and pose estimation, and uses a late fusion approach for cross-modal predictions. Experimental results show that the proposed system achieves an average accuracy of 91.4% with consistent detection under varied environmental conditions. The system is Python-based and designed for real-time deployment.

πŸ”– Keywords :️ Emotion Recognition, Multimodal Learning, Computer Vision, NLP, Speech Processing, Deep Learning, Python.

16

FarmLens: Cattle Breed Recognition and Disease Detection

πŸ‘₯ Garv Kumar Sharma, Dev Sharma, Surya Pratap Singh, Pankaj Kumar, Vijay Bharti

πŸ“™ Abstract : While conducting our research on this topic, we found that conventional cattle monitoring systems, such as visual inspection and laboratory examination, are inefficient and hard to implement on farms. These processes are very time-consuming and may delay decision-making. To solve this problem, our approach uses machine learning and computer vision for breed classification and disease detection from images. We examined various models, from simple algorithms such as SVM and k-NN to more advanced deep learning architectures such as CNN, ResNet, EfficientNet, and YOLO. In our analysis, YOLO was more effective for real-time detection due to its speed, whereas CNN-based models achieved better disease classification accuracy. However, we faced limitations such as a lack of data, varying environmental factors, and the inability to test these models in actual farm scenarios. This publication thus presents both the advantages and the practical constraints of applying AI to livestock monitoring.

πŸ”– Keywords :️ Cattle Breed Recognition, Disease Detection, Computer Vision, Machine Learning, CNN, YOLO.

17

Impact of Database Management Systems on Industrial Performance

πŸ‘₯ Kartik, Sushant, Shubham Panghal, Dr.BK Verma

πŸ“™ Abstract : The majority of manufacturing enterprises manage operational data in separate spreadsheets, paperwork, and standalone software. Such segregation leads to time-consuming processes, data duplication, and a continuous divide between factory floor activity and management oversight. This article outlines the development and deployment of FactoryFlow, a full-stack web application built to analyze the impact of a purpose-built relational DBMS on organizational performance in manufacturing. The application employs Next.js 15, PostgreSQL 16 on Neon Cloud, and Prisma ORM to unify employee records, batch production, departmental structures, authorization, and accounting information in a normalized database. The FactoryFlow AI Command Center, powered by the Groq API (LLaMA 3.3 70B), incorporates four analytical modules: predictive maintenance, resource optimization, quality control analysis, and performance benchmarking. Experiments showed that core DBMS operations completed in under 200 ms, while AI analyses returned within three seconds. The findings provide substantial evidence that a DBMS coupled with AI analytics improves data accessibility, decision-making pace, and operational visibility in industrial enterprises.

πŸ”– Keywords :️ Database Management Systems, Industrial Performance, PostgreSQL, Prisma ORM, Groq API, Next.js, KPI Dashboard, Predictive Analytics, Production Tracking, Audit Logging.

18

AI-Powered Phishing Protection and URL Threat Analysis System

πŸ‘₯ Avnish Kumar, Manjeet, Paras, Nirmal, Dr.Shally

πŸ“™ Abstract : Phishing attacks that compromise login credentials and financial information remain one of the most serious cyber-security threats. With attackers continuously finding ways around detection, simple URL validation is no longer enough for protection. Threat Scan was developed for this reason: an application made to detect malicious URLs and prevent social engineering techniques from working on users. The system operates in two layers: the first checks the page against global blacklists, and the second applies machine learning models for deeper analysis. The user simply enters a suspected URL and receives an instant evaluation of how safe it is.

πŸ”– Keywords :️ Phishing Detection; URL Threat Analysis; Machine Learning; Cyber-security; Feature Extraction; Global API Blacklists; Zero-Day Threats; Dual-Layer Security; Real-Time Processing.
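The two-layer flow can be sketched as a fast blacklist lookup followed by lightweight lexical feature scoring; the blacklist entries, feature set, and thresholds below are illustrative stand-ins for the real API blacklists and ML model.

```python
from urllib.parse import urlparse

# Two-layer URL check sketch: (1) blacklist lookup, (2) lexical
# features (length, digit count, '@' usage) feeding a simple scoring
# rule. Blacklist contents and thresholds are illustrative.

BLACKLIST = {"evil.example.com"}

def url_features(url):
    return {
        "length": len(url),
        "digits": sum(c.isdigit() for c in url),
        "has_at": "@" in url,
    }

def scan(url):
    if urlparse(url).hostname in BLACKLIST:
        return "malicious"                      # layer 1: blacklist hit
    f = url_features(url)                       # layer 2: lexical scoring
    suspicious = f["length"] > 75 or f["digits"] > 8 or f["has_at"]
    return "suspicious" if suspicious else "likely safe"

print(scan("http://evil.example.com/login"))    # malicious
print(scan("https://example.com/docs"))         # likely safe
```

A trained classifier would replace the hand-set thresholds, but the same features (and many more) would be its input.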

19

AI-Based Missing Person Identification System

πŸ‘₯ Aryan, Saloni Sharma, Ritesh Verma, Archita, Mitu Sehgal

πŸ“™ Abstract : The swift rise in the number of missing persons globally highlights the acute need for technology-driven solutions to support faster identification and recovery. Traditional search methods often face delays that reduce the chances of safe recovery. In this work, we propose a web-based system that integrates deep learning-based facial recognition with advanced computer vision techniques to detect missing individuals. The system employs MediaPipe for efficient real-time facial landmark detection and DeepFace for robust facial recognition and feature embedding. Developed on the Streamlit framework, the application enables families, citizens, and authorities to upload photographs, which are then compared against a secure database of reported missing persons. When a match is identified, alerts are sent to stakeholders in real time. Experimental evaluation shows reliable performance across varied conditions, establishing this approach as a promising step toward deploying AI-based solutions for social good. The system shows strong potential to speed up search operations, enhance collaboration among communities, and reduce human error in identification tasks.

πŸ”– Keywords :️ Facial recognition, computer vision, missing person detection, AI, surveillance systems, image processing.
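The matching step can be sketched as comparing an uploaded face embedding against stored embeddings by cosine similarity, with a match firing above a threshold; the 0.7 cutoff and tiny 4-d vectors below are illustrative, since real face embeddings are far higher-dimensional.

```python
import math

# Embedding match sketch: cosine similarity between a query embedding
# and a database of stored embeddings. The threshold, case IDs, and
# 4-d vectors are illustrative assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def find_match(query, database, threshold=0.7):
    """Return the best-matching person id above threshold, else None."""
    best_id, best_sim = None, threshold
    for person_id, emb in database.items():
        sim = cosine(query, emb)
        if sim >= best_sim:
            best_id, best_sim = person_id, sim
    return best_id

db = {"case-17": [0.9, 0.1, 0.3, 0.2], "case-42": [0.1, 0.9, 0.2, 0.1]}
print(find_match([0.88, 0.12, 0.28, 0.22], db))  # case-17
```

Returning None below the threshold is what keeps the system from raising a false alert for faces not in the database.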

20

Real-Time Deepfake Detection System Using Convolutional Neural Networks

πŸ‘₯ Anurag Pandey, Amit Rawal, Harshvardhan, Vinay Raj Vats, Mitu Sehgal

πŸ“™ Abstract : The proliferation of deep learning has led to exponential growth in deepfake media, presenting challenges to digital security and media authenticity. This study presents a real-time deepfake detection system leveraging Convolutional Neural Networks (CNNs) to identify manipulated photographs and videos. We employ an augmented dataset with transformations focusing on facial changes, brightness variations, and temporal artifacts. A binary classification approach distinguishes genuine and fake media using deep feature extraction. For videos, we propose a frame-selection strategy capturing temporal discrepancies with aggregated predictions. The system achieves 96% accuracy, outperforming traditional methods by 14%. A user-friendly Streamlit interface enables seamless media uploads and real-time analysis. This work contributes to reliable AI-driven tools for media verification and combating digital fraud.

πŸ”– Keywords :️ Deep Learning, Deepfake Detection, GANs, Digital Forensics, CNNs, Temporal Analysis, Media Authentication.
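The prediction-aggregation step for videos can be sketched as averaging per-frame CNN scores into one verdict; the 0.5 decision threshold is the usual binary-classification default, assumed here rather than taken from the paper.

```python
# Aggregating per-frame fake-probability scores from sampled video
# frames into a single verdict. The 0.5 threshold is the standard
# binary-classification default, assumed for this sketch.

def aggregate(frame_scores, threshold=0.5):
    """frame_scores: per-frame probabilities that the frame is fake."""
    mean = sum(frame_scores) / len(frame_scores)
    return ("fake" if mean >= threshold else "real", round(mean, 2))

print(aggregate([0.91, 0.87, 0.40, 0.95, 0.88]))  # ('fake', 0.8)
```

Averaging makes the verdict robust to a few frames the per-frame model misjudges, which is the point of aggregating over a selection of frames.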

21

Quantum Computing for Tumour-Grade Classification

πŸ‘₯ Komal, Kirti, Dr.BK Verma

πŸ“™ Abstract : This paper presents a dual-pipeline framework for brain glioma grading that combines deep learning image segmentation with variational quantum classification. Pipeline 1 employs a ResNet50-UNet with attention gates to segment tumour regions from 3,929 MRI scans across 110 low-grade glioma (LGG) patients, achieving a Dice coefficient of 85.71% and an IoU of 82.30%, followed by a 4-qubit variational quantum circuit (VQC) for Grade 1 vs Grade 2 sub-classification. The quantum model's 54% grading accuracy is consistent with the WHO 2021 finding that LGG sub-grades are molecularly distinct and not visually separable from MRI texture features alone. Pipeline 2 addresses this limitation by classifying LGG versus glioblastoma multiforme (GBM) using five genomic mutation features (IDH1, Age, PTEN, EGFR, ATRX) extracted from 862 TCGA patients. We train and compare four VQC architectures and six classical models under identical 5-fold stratified cross-validation. Our proposed VQC-2 architecture, employing a RyRz+CZ feature map and an Ry+CNOT ansatz over 5 qubits, achieves 84.81% accuracy and 93.67% recall, the highest recall among all ten models evaluated. The best classical model (Decision Tree) achieves 86.08% accuracy but only 90.63% recall; the 1.27% accuracy gap is not statistically significant (McNemar p>0.05). Our VQC-1 replication surpasses the reference benchmark by 8.96 percentage points (82.96% vs 74%). A clinical-grade web application with an adjustable detection threshold is deployed on Streamlit Community Cloud. These results demonstrate that quantum entanglement effectively captures genomic co-mutation patterns, and that recall-optimised quantum classifiers are clinically superior to accuracy-optimised classical models for cancer detection.

πŸ”– Keywords :️ variational quantum circuit; brain tumour grading; glioma; ResNet50-UNet; LGG; GBM; quantum machine learning; PennyLane; TCGA; medical image segmentation.