
Impact Factor - 5.445



IJIRTM: Volume-6, Issue-4, 2022

Paper Title : AMC Using Statistical Signal Processing and Machine Learning Algorithms in the form of Cognitive Radios
Author Name : Ramdas, Prof. Anoop Kumar Khambra, Prof. Jitendra Mishra
Keywords : Automatic modulation classification, Statistical Signal Processing, Physical Layer, QAM, QPSK, Cognitive radio.
Abstract :

Software defined radio (SDR) systems have attracted considerable attention in recent years for their affordability and ease of hands-on experimentation. They can be used to implement dynamic spectrum allocation (DSA) algorithms on a cognitive radio (CR) platform. There has been extensive research on DSA algorithms in both the machine learning and signal processing paradigms, yet these CRs are still unable to decide which algorithm suits a specific scenario. A comparison between spectrum sensing algorithms using machine learning techniques and statistical signal processing techniques is needed to determine which algorithm fits best in resource-constrained environments for CRs and spectrum observatories. Two challenges are chosen: multi-transmitter detection and automatic modulation classification (AMC). Novel machine learning based and statistical signal processing based multi-transmitter detection algorithms are proposed and used in the comparison. On comparing accuracy for multi-transmitter detection, the machine learning algorithm achieves 70% and 80% accuracy for 2- and 5-user systems, respectively, while the statistical signal processing algorithm achieves 50% for both 2- and 5-user systems. For AMC, both the signal processing and machine learning algorithms reach ideal accuracy beyond 10 dB for 100 test samples (64-QAM being an exception); however, for 1000 test samples the machine learning algorithm outperforms the signal processing algorithm. Timing comparison showed that the signal processing algorithms, in both cases, take a fraction of the time required by the machine learning algorithms. Hence, it is recommended to use machine learning techniques where accuracy is important and the signal processing approach where timing is important. The process of selecting between the algorithms can be viewed as a trade-off between accuracy and time.
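The statistical signal processing side of AMC can be illustrated with a small sketch. This is an assumption for illustration, not the paper's actual algorithm (whose details are not given in the abstract): the normalized fourth-order cumulant C42 takes different theoretical values for different constellations (-1.0 for QPSK, about -0.619 for 64-QAM), so a noise-free symbol stream can be classified by the nearest theoretical value.

```python
import numpy as np

rng = np.random.default_rng(0)

def c42(x):
    # Normalized fourth-order cumulant C42 of a symbol stream:
    # E|x|^4 - |E[x^2]|^2 - 2 (E|x|^2)^2, after unit-power normalization.
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    return np.mean(np.abs(x) ** 4) - np.abs(np.mean(x ** 2)) ** 2 - 2.0

# Theoretical C42 values for the two constellations considered here
THEORY = {"QPSK": -1.0, "64-QAM": -0.619}

def classify(x):
    c = c42(x)
    return min(THEORY, key=lambda m: abs(THEORY[m] - c))

# Simulated symbol streams (noise-free, for brevity)
qpsk = (rng.choice([-1.0, 1.0], 4000)
        + 1j * rng.choice([-1.0, 1.0], 4000)) / np.sqrt(2)
levels = np.array([-7.0, -5.0, -3.0, -1.0, 1.0, 3.0, 5.0, 7.0])
qam64 = (rng.choice(levels, 4000)
         + 1j * rng.choice(levels, 4000)) / np.sqrt(42)
```

In a full system the cumulant would be estimated from noisy received samples, and the decision margin shrinks with SNR, which is consistent with the accuracy-versus-SNR behaviour the abstract reports.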

Download IJIRTM-6-4-0604202201
Paper Title : New Education Policy-2020: A Critical Analysis & Overview
Author Name : Ankit Singh Bisen, Dr. DD Bedia
Keywords : New Education Policy, Kendriya Vidyalaya, Navodaya Vidyalaya, 2020, Modi Ji 2.0.
Abstract :

It took the nation 34 years to arrive at the New Education Policy 2020. The drafting committee presented its final draft for approval to the Union Cabinet on July 29, 2020, where it was accepted and authorised. The new policy aims to pave the way for transformative reforms in the country's primary and secondary education sectors, and it is one of the most significant steps taken to overhaul the nation's education system. The objective of this study is to assess the poor condition of the areas where the policy has recommended action. It is inconceivable for a policy alone to bring a complete infrastructure into being. A fundamental reorganisation and a paradigm shift must be considered during the execution of this policy. Since education is, as is well known, a concurrent subject, the implementation of the New Education Policy 2020 depends on future legislation enacted by the Centre and the states.

Download IJIRTM-6-4-0604202202
Paper Title : Analyze the Performance of Breast Cancer Detection using Neural Network Techniques
Author Name : Kumari Deepshikha, Prof. Chetan Agrawal, Prof. Himanshu Yadav
Keywords : Diseases diagnosis, Classification, Machine learning, Image segmentation, CAD.
Abstract :

In the human body there are several types of tissue, each formed by a multitude of cells. Disordered and runaway growth of these cells can produce a tumour, which may be benign or malignant, the latter giving rise to cancer. Breast cancer is the cancer that most affects women; there is a small possibility of it occurring in men, although this is very unusual: according to statistics, for every man diagnosed with the disease, roughly 100 women present it. Breast cancer accounts for more than 1 in 10 new cancer diagnoses each year and is the leading cause of cancer death in women. More and more techniques are being developed for its early and efficient diagnosis. The feed-forward back-propagation neural network is one of the best-known neural network approaches. In this work we present a feed-forward neural network classifier for breast cancer detection and improve the accuracy rate over the previous approach.
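A minimal sketch of a feed-forward back-propagation classifier of the kind described can look as follows. The data here is synthetic two-class Gaussian data standing in for extracted tumour features; the paper's actual dataset, features, and network architecture are not specified in the abstract, so everything below is an illustrative assumption.

```python
import numpy as np

# Synthetic two-class data standing in for extracted tumour features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(+1.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by plain gradient-descent back-propagation
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
    P = sigmoid(H @ W2 + b2)            # output probability
    dZ2 = (P - y) / len(X)              # cross-entropy gradient at the output
    dW2, db2 = H.T @ dZ2, dZ2.sum(0)
    dZ1 = dZ2 @ W2.T * H * (1 - H)      # back-propagate through hidden layer
    dW1, db1 = X.T @ dZ1, dZ1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

P = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
acc = float(((P > 0.5).astype(int) == y).mean())
```

On real mammography features the same loop applies unchanged; only the input matrix and its dimensionality differ.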

Download IJIRTM-6-4-0604202203
Paper Title : A study on Image Compression and Encryption Algorithm based on Chaotic System and Compressive Sensing
Author Name : Kuldeep Shakya, Prof. Devendra Patle
Keywords : Image Encryption and Compression, Image Encryption, Chaotic system, Compressive sensing.
Abstract :

The wavelet transform is a widely used signal-processing method in image processing and pattern recognition, and it has become an important feature for texture classification. In this work, an attempt is made to propose an efficient, low-complexity image compression algorithm that would be suitable for various applications and for low-bit-rate image transmission across different devices. The proposed algorithm aims to produce a good-quality image for a given bit rate and to accomplish this in an embedded fashion, i.e., such that all encodings of the same image at lower bit rates are embedded at the beginning of the bit stream for the target bit rate. This is helpful in many applications, such as medical imaging and other real-life uses. The input images considered are subjected to the mentioned transform coding techniques, simulated in MATLAB R2013a on an Intel Core i3 processor; for the lossy image compression algorithms, comparison metrics are calculated and tabulated for the considered input images. The algorithms considered for analysis include Particle Swarm Optimization methods, and for each input image performance parameters such as compression rate, PSNR, and elapsed time are obtained.
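The reported metrics can be illustrated with a toy wavelet transform-coding sketch: one 2-D Haar analysis level, hard thresholding of the detail bands, and computation of PSNR and a crude compression rate (fraction of retained coefficients). This is an illustrative assumption, not the paper's PSO-driven scheme.

```python
import numpy as np

def haar_level(x):
    # One 2-D Haar analysis level: approximation LL and details LH, HL, HH
    lo, hi = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar_level(ll, lh, hl, hh):
    # Exact inverse of haar_level (perfect reconstruction when unthresholded)
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2], x[1::2] = lo + hi, lo - hi
    return x

rng = np.random.default_rng(1)
img = rng.normal(128.0, 20.0, (64, 64))   # stand-in for a grayscale image
ll, lh, hl, hh = haar_level(img)
t = 5.0                                   # hard threshold on detail bands
lh, hl, hh = [np.where(np.abs(c) >= t, c, 0.0) for c in (lh, hl, hh)]
rec = ihaar_level(ll, lh, hl, hh)

mse = np.mean((img - rec) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)  # peak signal-to-noise ratio, dB
kept = sum(np.count_nonzero(c) for c in (ll, lh, hl, hh)) / img.size
```

Raising the threshold trades PSNR against the fraction of coefficients kept, which is exactly the rate-distortion trade-off the tabulated metrics capture.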

Download IJIRTM-6-4-0604202204
Paper Title : A study on Image Dehazing by using Depth Estimation and Fusion with Guided Filter
Author Name : Manish Mehta, Prof. Devendra Patle
Keywords : Fog, Haze, Air light, Transmission Map, Dark Channel, Fusion with Guided filter.
Abstract :

Haze is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes, mostly because atmospheric particles absorb and scatter light. This work presents a novel single-image method that improves the visibility of such degraded images. Our general dehazing method determines the atmospheric light and produces a transmission map from the colour channels. The atmospheric light is estimated from the densest pixel. We use edge information to represent the relative depth of neighbouring pixels; with this relative depth information we can constrain the equivalent atmospheric light to suppress edge halos. We build the transmission map by estimating the atmospheric light everywhere except in continuous regions that carry no edge information. The method performs a per-pixel manipulation, which is straightforward to implement, and then applies a guided filter to improve image quality. The experimental results demonstrate that the method yields results comparable to, and sometimes better than, more complex state-of-the-art techniques, with the advantage of being suitable for real-time applications.
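The underlying atmospheric scattering model, I = J·t + A·(1 - t), can be sketched with a simple dark-channel transmission estimate. This is a baseline illustration under stated simplifications (atmospheric light A taken as the per-channel maximum, no edge-based refinement, no guided filter), not the paper's actual method.

```python
import numpy as np

def dark_channel(img, patch=3):
    # Per-pixel minimum over colour channels, then a local minimum filter
    dc = img.min(axis=2)
    pad = patch // 2
    p = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    for i in range(dc.shape[0]):
        for j in range(dc.shape[1]):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out

def dehaze(I, omega=0.95, t0=0.1):
    A = I.reshape(-1, 3).max(axis=0)        # crude atmospheric-light estimate
    t = 1.0 - omega * dark_channel(I / A)   # transmission map estimate
    t = np.maximum(t, t0)                   # lower bound avoids amplification
    J = (I - A) / t[..., None] + A          # invert I = J t + A (1 - t)
    return np.clip(J, 0.0, 1.0), t

# Synthetic demo: haze a random scene with known A and t, then recover it
rng = np.random.default_rng(3)
J_true = rng.uniform(0.0, 1.0, (32, 32, 3))
J_true[:8, :8] = 0.02                       # a dark region (prior holds there)
I_hazy = J_true * 0.6 + 1.0 * (1 - 0.6)     # true t = 0.6, true A = 1
J_rec, t_est = dehaze(I_hazy)
err_hazy = float(np.abs(I_hazy - J_true).mean())
err_rec = float(np.abs(J_rec - J_true).mean())
```

The paper's contribution, replacing the crude estimates above with edge-aware depth cues and guided filtering, addresses exactly the halo artefacts this naive version produces near depth discontinuities.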

Download IJIRTM-6-4-0604202205
Paper Title : A Study on PAPR Reduction in OFDM Systems Using Peak Windowed Selective Mapping
Author Name : Shailendra Kumar Yadav, Prof. Devendra Patle
Keywords : Orthogonal Frequency Division Multiplexing (OFDM), Peak to Average Power Ratio (PAPR), Bit Error Rate (BER), Selective Mapping (SLM).
Abstract :

Orthogonal frequency division multiplexing (OFDM) has emerged as a key enabler of high-speed digital communication, with applications in LTE, WLANs, WiMAX systems, and more. It is far more efficient than plain frequency-division multiplexing. High peak-to-average power ratio (PAPR), however, is a major challenge that arises from the intrinsic way subcarriers add coherently within an OFDM symbol. A high PAPR prevents the power amplifier from being operated within its piecewise-linear region, so non-linear distortion is introduced before the signal reaches the receiver. Non-linear distortion raises the bit error rate (BER) of the system, pushing it into poor QoS conditions and yielding undesirable, detrimental effects in OFDM, so PAPR reduction methods are resorted to. Among the several PAPR reduction techniques, selective mapping (SLM) is one of the most potent, with a high PAPR reduction capability. In the proposed technique, a weighted selective mapping scheme with a windowed weighting function is used for peak cancellation. In the proposed work, the original OFDM signal, clipping, selective phase mapping (SLM), and the weighted mapping (W-SLM) are analyzed using the complementary cumulative distribution function (CCDF). The results show that the system outperforms conventional selective mapping by a clear margin: the proposed system attains a PAPR 13 dB lower than conventional SLM at CCDF values of 10^-1 and 10^-2, which is a substantial reduction in PAPR.
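The baseline SLM idea the proposal builds on can be sketched in a few lines: multiply the frequency-domain data by several candidate phase sequences, take the IFFT of each, and transmit the candidate whose time-domain waveform has the lowest PAPR. This illustrates plain SLM only; the paper's windowed-weighting W-SLM is not reproduced here, and Nyquist-rate sampling slightly underestimates the true analogue PAPR.

```python
import numpy as np

rng = np.random.default_rng(7)
N, U = 64, 8                                  # subcarriers, SLM candidates

def papr_db(x):
    # Peak-to-average power ratio of a time-domain signal, in dB
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# One OFDM symbol of random QPSK data on N subcarriers
data = (rng.choice([-1.0, 1.0], N)
        + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

# SLM: rotate the data by U candidate phase sequences, keep the lowest-PAPR one
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (U, N)))
phases[0, :] = 1.0                            # candidate 0 = unmodified symbol
candidates = np.fft.ifft(data * phases, axis=1)
paprs = np.array([papr_db(c) for c in candidates])
best = int(paprs.argmin())
```

Because the unmodified symbol is kept as candidate 0, the selected waveform can never have a higher PAPR than the original; the candidate index must be signalled to the receiver as side information.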

Download IJIRTM-6-4-0604202206
Paper Title : A Structure for Detecting Document Plagiarism Using the Rabin Karp Algorithm
Author Name : Atul Kumar Chaudhary, Prof. Chetan Agrawal, Prof. Pooja Meena
Keywords : Plagiarism, N Gram, Semantic Analysis, Document Sampling, Data Redundancy, Compressive Sensing.
Abstract :

Replicating another author's thoughts or words, as well as copying and pasting research that has been published, is considered plagiarism. Common kinds of plagiarism include cheating, failing to attribute sources, and patchwork writing. Plagiarism has carried over from mediaeval history into the current era, in research, academic culture, the workplace, student settings, and beyond. With the expanding use of the internet, countless online articles are freely accessible. However, it is possible to counter this with a technique that detects plagiarism, so that a paper can be certified as unplagiarised. Word matching, string matching, and semantic- and knowledge-based text classification are some of the many algorithms developed for such document processing to detect similarity, and they all operate on the same basic premise. Previous methods have limitations when the data analysis activity uses only a single level of processing. A sophisticated and novel method, compressive sensing based Rabin-Karp (CS-RKP), is recommended for the system under consideration. The approach uses a sampling module to reduce the dataset size, together with an additional cost function for identifying document redundancy, and it computes both syntactic and semantic features to locate similarities throughout the text. The efficiency of the suggested approach is shown by evaluating computation time and similarity measure with respect to n-grams for varying values of N.
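The Rabin-Karp core of such a detector can be sketched as follows: a rolling hash fingerprints every character n-gram of a document in linear time, and the Jaccard overlap of two fingerprint sets gives a similarity score. The sampling module, cost function, and semantic stage of CS-RKP are not reproduced here; this shows only the syntactic n-gram matching layer.

```python
def ngram_hashes(text, n=5, base=256, mod=(1 << 31) - 1):
    # Rabin-Karp rolling hash over all character n-grams of the text
    if len(text) < n:
        return set()
    h = 0
    for ch in text[:n]:
        h = (h * base + ord(ch)) % mod
    hashes = {h}
    power = pow(base, n - 1, mod)          # weight of the outgoing character
    for i in range(n, len(text)):
        h = ((h - ord(text[i - n]) * power) * base + ord(text[i])) % mod
        hashes.add(h)
    return hashes

def similarity(a, b, n=5):
    # Jaccard similarity of the two documents' n-gram fingerprint sets
    ha, hb = ngram_hashes(a, n), ngram_hashes(b, n)
    if not ha or not hb:
        return 0.0
    return len(ha & hb) / len(ha | hb)

original = "machine learning methods for plagiarism detection in documents"
partial = "this text reuses machine learning methods for plagiarism detection verbatim"
unrelated = "completely different subject matter with no overlap at all"
```

Varying n trades sensitivity against robustness, which is the effect the abstract's evaluation over different values of N measures.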

Download IJIRTM-6-4-0604202207
Paper Title : A Framework for Recognition of Hate Speech on Social Media Based on Machine Learning Algorithms
Author Name : Bishnu Gupta, Prof. Chetan Agrawal, Prof. Pawan Meena
Keywords : Dimensionality Reduction, Hate Speech, Social media, Machine Learning, CNN, BERT.
Abstract :

One of the most important challenges that arose during the inquiry into hate speech in online life is separating hate speech from other instances of offensive language. Lexical detection strategies are, as a rule, not very effective, since they treat any message containing particular words as hate speech without comprehending the surrounding conversation. In this work, a hate speech lexicon was compiled and used to collect tweets containing hate speech terms. In addition, a dimensionality reduction approach was employed to increase classification accuracy. This research classifies tweets into three categories: those containing hate speech, those containing offensive language, and those containing neither, and it recognizes all of these classifications within a single multi-class model. Logistic regression with a dimensionality reduction approach was implemented and achieved 83% accuracy, which is superior to existing alternatives such as Naive Bayes at 71.33% and SVMs at 80.56%.
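A minimal sketch of the pipeline shape described, a multi-class regression classifier on reduced-dimension text features, is shown below. The data is a placeholder toy corpus (deliberately sanitized tokens, not real tweets), and feature hashing stands in for the paper's unspecified dimensionality reduction; both are assumptions for illustration.

```python
import numpy as np
import zlib

# Toy three-class corpus with placeholder tokens standing in for the
# hate / offensive / neither tweet classes (illustrative, not the paper's data)
texts = [
    "hateterm targeted abuse at the group", "group hateterm threats online",
    "hateterm slur against group members", "targeted hateterm group attack",
    "you are stupid and annoying", "this post is rubbish and dumb",
    "what a stupid useless take", "annoying dumb rubbish argument",
    "lovely weather this morning", "great match yesterday evening",
    "the new cafe opened today", "my morning run felt great",
]
labels = np.array([0] * 4 + [1] * 4 + [2] * 4)

def hash_features(text, d=64):
    # Feature hashing: a simple dimensionality reduction for bags of words
    v = np.zeros(d)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % d] += 1.0
    return v

X = np.array([hash_features(t) for t in texts])
K, (n, d) = 3, X.shape
W = np.zeros((d, K))
onehot = np.zeros((n, K)); onehot[np.arange(n), labels] = 1.0
for _ in range(1000):                      # batch softmax (logistic) regression
    Z = X @ W
    Z -= Z.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(Z); P /= P.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (P - onehot) / n

pred = (X @ W).argmax(axis=1)
acc = float((pred == labels).mean())
```

On a real corpus the same model would be evaluated on held-out tweets; here training accuracy merely demonstrates that the hashed features remain separable after the dimensionality reduction.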

Download IJIRTM-6-4-0604202208