Online ISSN: 2515-8260

Keywords: Deep learning

Implementation of Deep Learning for Automatic Classification of Covid-19 X-Ray Images

Muhammad Shofi Fuad; Choirul Anam; Kusworo Adi; Muhammad Ardhi Khalif; Geoff Dougherty

European Journal of Molecular & Clinical Medicine, 2021, Volume 8, Issue 2, Pages 1650-1662

Background: Reading radiographic images for Covid-19 identification by an expert radiologist requires significant time; the development of an automated analysis system to assist in and save time when diagnosing Covid-19 is therefore important.
Objective: The purpose of this study is to implement the GoogleNet architecture with various numbers of epochs in the hope of achieving a higher level of accuracy in Covid-19 detection.
Methods: We retrospectively used 813 images, i.e. 409 images indicating Covid-19 and 404 normal images. A deep transfer learning (TL) model with the GoogleNet architecture was implemented. The training was carried out several times to get the best acquisition value, with a learning rate of 0.0001 for all levels. The network training was carried out with different numbers of epochs, i.e. 12, 18, and 24 epochs, each epoch comprising 65 iterations.
Results: It was found that accuracy was determined by the number of epochs. The classification accuracy was 96.9% at 12 epochs, 98.2% at 18 epochs, and 99.4% at 24 epochs.
Conclusion: An increase in the number of epochs increases the accuracy of Covid-19 detection. In this study, the accuracy of the method reached 99.4%. These results are promising for the automation of Covid-19 detection from radiographic images.
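
The reported schedule (12, 18 and 24 epochs at 65 iterations each) is consistent with simple minibatch arithmetic. A minimal sketch follows; the train split (~650 of the 813 images) and minibatch size of 10 are our assumptions, not values stated in the abstract:

```python
import math

def iterations_per_epoch(n_train, batch_size):
    """Minibatch iterations needed for one full pass over the training set."""
    return math.ceil(n_train / batch_size)

# Assumed values: roughly 80% of the 813 images used for training (~650)
# with a minibatch size of 10.
iters = iterations_per_epoch(650, 10)                 # 65 iterations per epoch
total_updates = {e: e * iters for e in (12, 18, 24)}  # weight updates per run
```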


D.Raghu Raman; D. Saravanan; R. Parthiban; Dr.U. Palani; Dr.D.Stalin David; S. Usharani; D. Jayakumar

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 2531-2557

In today's world, digitization plays an extremely prominent role in day-to-day applications. Its future deployment needs an Internet of Things (IoT) to embrace automation, remote monitoring and predictive analysis. An IoT device is connected to the internet and combines embedded technology, including actuator and sensor devices. IoT also encompasses wired and wireless communication devices, and real-world physical objects connected to the internet. IoT is used in diversified fields such as smart classrooms, smart banking, smart homes, smart agriculture, smart healthcare applications, etc. Typically, IoT requires intelligence to achieve the automation process efficiently in many applications. Artificial Intelligence (AI) paves the way to make IoT smarter and more efficient. Due to the enormous amount of data generated in various applications, IoT combined with Machine Learning (ML) and Deep Learning (DL) models is used to enhance functionality in complex applications. In this survey, the application of AI, ML and DL models deployed in IoT is deeply explored.

Segmentation on Brain Cancer Disease using Deep Learning Techniques

J. Josphin Mary; R. Charanya; V. Shanthi; G. Sridevi; Meda Srinivasa Rao

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 1439-1446
DOI: 10.31838/ejmcm.07.09.153

Segmenting brain tumors is a major challenge in medical image analysis. In order to maximize care outcomes and increase the hospital success rate, early detection of brain tumors plays an important part. The manual segmentation of brain tumors from the large quantities of MRI images produced in clinical routine is a challenging and time-consuming job; automatic brain tumor segmentation is possible. This article aims to analyze strategies for MRI-based segmentation of brain tumors. Automatic segmentation using deep learning approaches has recently become popular because these approaches achieve state-of-the-art results and solve this problem much better than other methods. Deep learning approaches may also provide effective analysis and unbiased interpretation of vast volumes of MRI-based image evidence. There are many papers on MRI-based brain tumor segmentation which focus on traditional methods. Different from others, we concentrate on the recent trend in the field of deep learning. First, brain tumors and techniques for segmenting them are introduced. Then, the new architectures are explored with an emphasis on the current development in deep learning methods. Finally, an evaluation is introduced and further improvements are discussed to standardize MRI-based brain tumor segmentation procedures in day-to-day clinical practice.


Dr. C. Annadurai; Dr. I. Nelson

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 11, Pages 5228-5241

Underwater image processing has always been a promising and challenging task: due to the natural conditions and lighting effects, taking an image requires good artificial lights. While taking underwater images, photographers face many difficulties such as shadows, non-uniform lighting, color shading, etc. Recognizing objects underwater is very difficult owing to the environmental conditions. Man-made object recognition with underwater optical sensors to capture underwater images has gained more attention from users. Deep learning methods have demonstrated impressive performance in object recognition tasks on natural images. However, it is hard to collect enough labelled underwater optical images for training a model, whereas it is possible to acquire labelled in-air images. Based on the assumption that sufficient labeled in-air images can be acquired, the proposed work leverages a combination of deep learning and transfer learning to develop a novel recognition system for man-made objects in underwater optical images. The features extracted by the proposed network have high representative power and demonstrate robustness in both in-air and underwater imaging modalities. Therefore, our proposed framework has the ability to recognize underwater man-made objects using only labeled in-air images. The results of experiments on simulated data demonstrate that the proposed method outperforms traditional deep learning methods in the task of underwater man-made object recognition.

Automatic Classification of the Severity of COVID-19 Patients Based on CT Scans and X-rays Using Deep Learning

Sara Bhatti; Dr. Asif Aziz; Dr. Naila Nadeem; Irfan Usmani; Prof. Dr. Muhammad Aamir; Dr. Irum Khan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 1436-1455

The 2019 novel coronavirus (COVID-19), which originated in China, has been declared a pandemic by the World Health Organization (WHO) as it has surpassed eighty-three million cases worldwide, with nearly two million deaths. The unexpected exponential increase in positive cases and the limited number of ventilators, personal safety equipment and COVID-19 test kits, especially in Low to Middle Income Countries (LMIC), have put undue pressure on medical staff, first responders and the overall health care systems. The Real-Time Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test is the decisive test for the diagnosis of COVID-19, but a significant percentage of positive tests return a false negative result. For patients in LMICs, the availability and affordability of routine Computerized Tomography (CT) scanning and chest X-rays are better compared to an RT-PCR test, especially in rural areas. Chest X-rays and CT scans can aid in the prognosis and management of COVID-19 positive patients, but are not recommended for diagnostic purposes. Using Deep Convolutional Neural Networks (CNN), three pre-trained models (AlexNet, GoogleNet and ResNet50) were used for the automatic classification of positive COVID-19 chest X-rays and CT scans based on their severity into three classes: normal, mild/moderate and severe. This classification can aid health care workers in performing expeditious analysis of large numbers of thoracic CT scans and chest X-rays of COVID-19 positive patients, and aid in their prognosis and management. The images were obtained from public repositories, and were verified and classified by a trained and highly experienced radiologist from Aga Khan University Hospital prior to simulations. The images were augmented, the models were trained, and ResNet50 was found to achieve the highest accuracy. This research can be extended to other lung infections, and can aid the authorities in preparations for future pandemics.


S. Praveen; Dr. R. Priya

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 2832-2847

Text clustering is an important method for effectively organising, summarising, and navigating text information. The purpose of clustering is to distinguish and classify the similarity among text instances as labels. However, in the absence of labels, the text data to be clustered cannot be used to train a deep-learning-based text representation model, as it contains high-dimensional data with complex latent distributions. To address this problem, a new unified deep learning framework for text clustering based on deep representation learning is proposed, using deep adaptive fuzzy clustering to provide a soft partition of the data. Initially, reconstruction of the original data into a feature space is carried out using the word embedding process of deep learning. The word embedding process is a learnt representation of the text or sentence, mapped towards clustering into a vector containing words, characters and N-grams of words. Further clustering of the feature vector is carried out with a max pooling layer to determine inter-cluster separability and intra-cluster compactness. Learning of the feature space is processed with gradient descent, and the feature vector is fine-tuned on the basis of discriminant information using hyperparameter optimization with fewer epochs. Finally, representation learning and soft clustering are achieved using deep adaptive fuzzy clustering, and quantum-annealing-based optimization is employed. The results demonstrate that the clustering approach is more stable and accurate than the traditional FCM clustering algorithm when evaluated with k-fold validation. The experimental results demonstrate that the proposed technique outperforms state-of-the-art approaches in terms of set-based measures such as Precision, Recall and F-measure, and rank-based measures such as Mean Average Precision and Cumulative Gain.
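
The plain FCM baseline the paper compares against alternates soft-membership updates with fuzzily weighted center updates. A compact 1-D sketch of that classic algorithm (the illustrative data points and initial centers are our assumptions; the paper's deep adaptive variant additionally learns the feature space):

```python
def fuzzy_c_means(points, centers, m=2.0, iters=50):
    """Classic fuzzy c-means on 1-D data: alternate soft membership updates
    with membership^m-weighted center updates."""
    eps = 1e-12  # guard against zero distance to a center
    memberships = []
    for _ in range(iters):
        memberships = []
        for x in points:
            d = [abs(x - c) + eps for c in centers]
            # u[i] = 1 / sum_j (d_i / d_j)^(2/(m-1)); each row sums to 1
            memberships.append([
                1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                          for j in range(len(centers)))
                for i in range(len(centers))])
        # each center becomes the membership^m-weighted mean of all points
        centers = [
            sum(memberships[k][i] ** m * points[k] for k in range(len(points))) /
            sum(memberships[k][i] ** m for k in range(len(points)))
            for i in range(len(centers))]
    return centers, memberships

# Two well-separated 1-D "embedding" blobs (illustrative data):
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers, u = fuzzy_c_means(pts, centers=[1.0, 4.0])
```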

Survey On Aspect Based Sentiment Analysis Using Machine Learning Techniques

Syam Mohan E; R. Sunitha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 1664-1684

Web 2.0 facilitates the expression of views through diverse Internet applications which serve as a rich source of information. The textual expressions carry latent information that, when processed and analysed, reveals the sentiment of the user. This is known as sentiment analysis, the process of computationally extracting opinions and viewpoints from textual data; it is also known as opinion mining, review mining or attitude mining. Aspect-level sentiment analysis is one of the three main types of sentiment analysis, where granular-level processing takes place and the different aspects of entities are harnessed to identify sentiment orientations. The emergence of machine learning and deep learning techniques has made a striking mark on aspect-oriented sentiment analysis. This paper presents a survey and review of different works from the recent literature on aspect-based sentiment analysis done using machine learning techniques.


J. Josphin Mary; R. Charanya; V. Shanthi; G. Sridevi

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 1447-1453
DOI: 10.31838/ejmcm.07.09.154

Glaucoma is a persistent, permanent eye disease that leads to loss of vision and quality of life. In this paper we build a deep learning system for the automatic diagnosis of glaucoma with a convolutional neural network (CNN). Deep learning algorithms such as CNNs infer a hierarchical representation of images to differentiate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six layers: four convolutional layers and two fully connected layers. Dropout and data augmentation strategies were implemented to further improve glaucoma diagnosis. Extensive validation on the ORIGA and SCES databases is carried out. The findings show that the area under the receiver operating characteristic curve (AUC) is significantly higher than state-of-the-art algorithms in glaucoma identification, at 0.831 and 0.887 on the two databases. The method may be used for the detection of glaucoma.
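
The AUC figures reported above (0.831 and 0.887) measure how often a randomly chosen glaucoma case is scored above a randomly chosen non-glaucoma case. A minimal sketch of that empirical definition, with made-up scores for illustration:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: probability that a randomly chosen positive case is
    scored above a randomly chosen negative case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```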

A Study Of Preprocessing Techniques And Features For Ovarian Cancer Using Ultrasound Images

Ms. Arathi Boyanapalli; Dr. Shanthini A

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 293-303

Ovarian cancer is the third leading cancer among women in India. The early detection rate of ovarian cancer is very low [1]. Transvaginal ultrasound is the most common screening test to detect the presence of tumors, but adnexal masses are very common in patients, and the challenging part is to discriminate whether the masses are benign or malignant. This distinction is very essential for optimal surgical management, but reliable pre-surgical differentiation is sometimes difficult using clinical features, ultrasound examination, or tumor markers alone [2]. Recent trends in medical imaging facilitate the detection of most cancers at a very initial stage. Still, ovarian cancer diagnosis is not accurate, and the patient has to undergo painful practices such as biopsies or surgeries, even with benign nodules. Applying deep learning techniques to ultrasound images of ovarian cysts helps diagnose whether a cyst is benign or malignant at a very early stage without any surgeries. This method not only cuts the medical expenses of the patient but also reduces the patient's mental stress.

An Empirical Study of Deep Learning Strategies for Spatial Data Mining

K. Sivakumar; A.S. Prakaash

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5124-5132

Growing volumes of collected data have triggered the emergence of scalable machine learning frameworks to efficiently analyse these data and derive valuable insights from them. Large spatial data frameworks cover a wide variety of applications, including tracking of infectious diseases, simulation of climate change, etc. Conventional mining techniques, especially statistical frameworks for handling these data, are becoming exhausted due to the rise in the number, volume and quality of spatio-temporal data sets. Various machine learning tasks have recently shown efficiency gains with the development of deep learning methods. We therefore present in this paper a detailed survey of important contributions in the application of deep learning techniques to spatial data mining.

Video Based Fall Detection Using Deep Convolutional Neural Network

Gangireddy Prabhakar Reddy; M. Kalaiselvi Geetha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5542-5551

Falling often causes deadly conditions such as unconsciousness and related injuries among the elderly population if the fallen person is not provided with aid by caretakers nearby. In this context, an automatic fall monitoring system gains popularity by solving this problem with immediate prompting, alerting caretakers and other persons with an alarm message. It assists older adults in living without fear of falling and being independent in society. In recent decades, vision-based fall monitoring has been receiving attention among research communities for its diversified features. It helps identify the human in the intended regions, and by using the phenomena collected from the area, it trains the fall recognition classifiers. However, human detection errors and the lack of massive-scale datasets make vision-based fall monitoring face challenges of robustness and efficiency when generalizing to unseen regions. Hence a robust learning and classification system is needed to combat these challenges. In this proposed system, automatic fall detection using deep learning is modeled using RGB images gathered from a single-camera source. More significantly, it protects the sensitive details present in the original images and ensures privacy, widely considered important for safety and protection. Various experiments are carried out using real-world fall data sets. The results show that the system enhances fall recognition awareness and achieves a high F-score by performing highly accurate fall detection in real-world environments.


Gouri Nandan; Dr. Neeba E A

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 8, Pages 5467-5475

Sign languages are languages that solely utilize gestures to convey meaning. Communication based on sign language is a mix of manual explanations and non-manual elements. A sign language recognition framework positively supports communication between a person who is hard of hearing and the world around them. It also helps in communicating with machines. One of the most utilized forms of gesture-based communication is American Sign Language (ASL). In the proposed work, letters are detected from a video frame using a convolutional neural network (CNN) and then converted into speech using Google Text-to-Speech (gTTS). The system is trained with 75% of the images and tested with 25% of the images from the database.
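
The 75/25 train/test partition described above can be sketched as a shuffled split; the fixed seed is our addition for reproducibility, not a detail from the paper:

```python
import random

def train_test_split(items, train_frac=0.75, seed=0):
    """Shuffle a dataset and split it into training and test portions."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(range(100))  # 75 training, 25 test items
```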

Helmet Violation Detection Using Deep Learning

Sherin Eliyas; K. Swaathi; Dr.P. Ranjana; A. Harshavardhan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5173-5178

Road accidents are among the significant causes of human death. The majority of deaths in such accidents are due to head injuries to motorcycle riders. Among the various types of road accidents, motorcycle accidents are common and cause severe injuries. To reduce the risk to motorcycle riders, it is essential to wear a helmet; the helmet is the motorcyclist's primary protection. Many countries require the use of helmets by motorcyclists, but many individuals fail to comply with the law for various reasons. We present the development of a framework using deep convolutional neural networks (CNNs) for detecting motorcyclists who are violating helmet rules. The system involves motorcycle detection, helmet vs. no-helmet classification, and motorcycle counting. A Faster R-CNN model with a ResNet-50 backbone is implemented for motorcycle detection, and a CNN classification model is proposed to classify helmet vs. no-helmet. Finally, an alarm sound is produced to alert the officer and help prevent motorcycle accidents. We evaluate the framework in terms of accuracy and speed.
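
Detection pipelines like the Faster R-CNN stage above are typically evaluated by matching predicted boxes to ground truth via intersection-over-union (IoU). A minimal sketch of that standard metric (not code from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```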

IoT based urban surveillance using Raspberry Pi and deep learning with a MobileNet pre-trained model

Sathya Vignesh R; Vaishnavi.R. G; G. Aravind; G. SreeHarsha; B. HariKrishnaReddy; Yogapriya J

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 2473-2477

Object detection is required for stronger protection in surveillance areas. Some surveillance systems use CCTV cameras to monitor an area, which requires someone to check the output for that area without rest. This is a difficult process for people who have to secure distant areas like fields, homes, roads and restricted areas, which cannot be monitored continuously by a person. Object detection using a Raspberry Pi and deep learning with a pre-trained model can secure the place even without a person. It continuously monitors the area, identifies any unwanted presence, and immediately sends an alert message to the respective device. The setup is fed with many sample images of classes such as person, dog, cat, etc. The system compares detected objects to the sample images using MobileNet single-shot detection by determining the accuracy of common features. Thus it helps detect unwanted presence with higher accuracy than previous existing systems.
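
The alerting step described above amounts to filtering detector output by class and confidence. A minimal sketch under stated assumptions: the 0.5 threshold and the watched-class list are illustrative values, and the `(label, confidence)` pairs stand in for typical MobileNet-SSD post-processing output; none of these details are given in the abstract:

```python
WATCHED_CLASSES = {"person", "dog", "cat"}  # assumed class whitelist

def filter_detections(detections, threshold=0.5):
    """Keep only watched-class detections above the confidence threshold.
    `detections` is a list of (label, confidence) pairs."""
    return [(label, conf) for label, conf in detections
            if label in WATCHED_CLASSES and conf >= threshold]

def should_alert(detections, threshold=0.5):
    """Trigger an alert message when any watched object survives filtering."""
    return bool(filter_detections(detections, threshold))
```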

Moving Towards Non-AI To AI

Nargis A Vakil; S.B. Goyal

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5638-5646

A large amount of research has been conducted in the field of AI. This paper is about the enhancements made in this popular field. Making a machine that is able to understand the background ideas of words is very essential, as it can increase the chances of better translation and enable conversations as humans conduct them. In particular, this paper states the difference between AI and non-AI tasks. The work is intended for newcomers to the area of AI, and some issues related to AI are also discussed.

Glioma Tumor Detection Through Faster Region-Based Convolutional Neural Networks Using Transfer Learning

Shrwan Ram; Anil Gupta

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 4789-4815

Glioma tumors are generally found in the brain and spinal cord. These tumors begin in the glial cells that cover the nerve cells and support their function. Glioma tumors are classified based on the glial cells involved in their formation. The tumor affects the normal activity of patients, causing loss of memory, difficulties in speech, confusion in identifying objects, and difficulties maintaining the balance of the body. Early detection of a glioma tumor helps healthcare practitioners suggest a suitable treatment for the disease. Detecting a glioma tumor is a challenging task, and many approaches have been proposed by researchers and academicians for accurate detection; accurately detecting a brain tumor is still a big challenge. Because of recent advances in image processing and computer vision, healthcare professionals are using sophisticated diagnostic tools for disorder/disease prediction. Neurosurgeons and neurophysicians use magnetic resonance imaging to identify multiple brain tumors. Computer vision approaches play a significant role in the automated identification of different brain tumors. This research paper explores the Faster R-CNN approach to glioma tumor detection using four pre-trained deep networks: AlexNet, ResNet18, ResNet50, and GoogleNet. The proposed approach to object detection is more efficient and accurate than other R-CNN approaches, with higher precision, and detects the glioma tumor with 99.9% accuracy. Compared to the AlexNet, ResNet18, and GoogleNet deep networks, the ResNet50 pre-trained network performed best, with higher detection accuracy.

Vision Based Alert System for Road Signs Detection

K. Hemalatha; D.Uma Nandhini; Karthika S

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1872-1877

Over recent years there has been a huge increase in road accidents, which makes us take more surveillance actions to reduce them. Recently, due to research, there has been huge improvement in the fields of deep learning and computer vision. Our project is mainly focused on developing a vision-based alert system for drivers. We built the model with the help of convolutional neural networks, a subfield of deep learning and computer vision. We took road sign data and trained the model to detect 32 different road signs. The data was collected from the German road sign dataset, which consists of 20,000 images. We developed the learning model with the Keras framework, a high-level API. Keras works on the TensorFlow backend, which is developed by Google. The Keras framework enables us to build a state-of-the-art model to detect road signs. For developing the model and pre-processing the images we used the Python language, which has a vast number of libraries for image computation and building deep neural networks. The main aim of our project is to develop a vision-based alert system for drivers which will help improve road safety. Our model will also help new learners improve their driving experience.
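
When designing a CNN such as the one described above, each convolution or pooling layer shrinks the spatial size according to a standard formula. A small sketch of that arithmetic; the 32x32 input and the layer sizes traced below are illustrative assumptions, not the paper's exact architecture:

```python
def conv_output_size(n, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

# Tracing an assumed 32x32 road-sign input through a small stack:
size = conv_output_size(32, kernel=3)              # 3x3 conv, no padding -> 30
size = conv_output_size(size, kernel=2, stride=2)  # 2x2 max-pool -> 15
```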


Dr. Vikas Jain; Dr.S. Kirubakaran; Dr.G. Nalinipriya; Binny. S; Dr.M. Maragatharajan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 3294-3301

Brain-computer interface (BCI) decoding connects the human nervous system to the external world by translating people's brain signals into commands that computer devices can detect. Interest in the performance of brain-computer interface systems has recently increased. In this article, we systematically investigate brain signal types for BCI and explore relevant deep learning concepts for brain signal analysis. We compare different traditional classification algorithms with new deep learning methods. We explore two different types of deep learning methods, i.e., convolutional neural network architectures and recurrent neural networks with Long Short-Term Memory. We check the classification accuracy on a recent 5-class Steady-State Visual Evoked Potentials dataset. The results demonstrate that the deep learning methods outperform traditional classification methods.

Identification and Detection of Abnormal Human Activities using Deep Learning Techniques


European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 408-417

In recent years, it has become common to use surveillance cameras for continuous monitoring of public and private spaces because of increasing crime. Most current surveillance systems need a human operator to constantly watch them and are ineffective as the amount of video data increases day by day. Surveillance cameras would be more useful tools if, instead of passively recording, they generated warnings or real-time actions when unusual activity is detected. But recognizing and classifying human activity as normal or abnormal from a live video stream is a challenging job in the field of computer vision. There is a need for a smart surveillance system for the automatic identification of abnormal human behaviour in a specific scene. The present paper gives an overview of different machine learning methods used in recent years to develop such a model. It also gives exposure to recent works in the field of anomaly detection in surveillance video and its applications.

Assessment of Patient Health Condition based on Speech Emotion Recognition (SER) using Deep Learning Algorithms

Dr. DNVSLS Indira; B. Lakshmi Hari Prasanna; Chunduri Pavani; Ganta Vandana

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1135-1147

Human emotion detection, whether through the face or speech, has become a relatively nascent research area. Speech Emotion Recognition concerns the task of recognizing a speaker's emotions from their speech recordings. Recognizing emotions from speech can go a long way in determining an individual's physical and mental state of well-being. These emotions can be used for further assessment of a patient's status for better diagnosis. This paper aims to categorize emotions in speech into four different categories: happy, sad, angry and neutral. For this analysis, four different algorithms are developed: the Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Random Forest (RF) and a one-dimensional Convolutional Neural Network (CNN-1D). Detecting emotion through an individual's speech can be difficult because of the dynamic changes in the voice signal of the same person within a very subtle period of time. So, features such as MFCC, chroma, tonnetz, spectral contrast and mel were extracted and given to the models in order to detect the emotions. Those features were given as input to the algorithms, and the empirical results indicate that the CNN-1D performs comparatively well. The RAVDESS database is chosen for the categorization. A good recognition rate of 89% was obtained from the CNN-1D.
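
A 4-class classifier such as the CNN-1D described above typically ends with a softmax layer mapping logits to a probability over the emotion labels. A minimal, hedged sketch of that final step (the logit values are made up; the label ordering is our assumption):

```python
import math

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label order

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(logits):
    """Map final-layer logits of a 4-class classifier to an emotion label."""
    probs = softmax(logits)
    return EMOTIONS[probs.index(max(probs))]
```
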

A Comparative Study On Performance Of Pre-Trained Convolutional Neural Networks In Tuberculosis Detection

Ms. Sweety Bakyarani E; Dr. H. Srimathi; Dr. P. J. Arul Leena Rose

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 3, Pages 4852-4858

India accounts for 26% of the world's tuberculosis population. The WHO's Global TB Program states that in India, the number of people newly diagnosed with TB increased by 74% compared to other countries, from 1.2 million to 2.2 million, between 2013 and 2019. Tuberculosis was and still remains a disease that causes high death rates in the country. Many of these deaths can be easily prevented if diagnosed at an early stage. The easiest, most cost-effective and non-invasive method of detecting tuberculosis is through a frontal chest X-ray (CXR). But this requires a radiologist to manually examine and analyse each X-ray, and considering the heavy patient count this puts a great burden on the resources available. A computer-aided diagnosis system can easily mitigate this problem and can greatly help in reducing the cost. In recent times deep learning has made great progress in image classification and has produced remarkable results in various domains, but there still remains scope for improvement when it comes to tuberculosis detection. The aim of this study is to apply pre-trained convolutional neural networks with a proven record in image classification to a publicly available CXR dataset, classify CXRs that manifest tuberculosis, and compare their performances. The CNN models used on our CXR image dataset as part of this study are VGG-16, VGG-19, AlexNet, Xception and ResNet-50. Visualization techniques have also been applied to help understand the features whose weights played a role in the classification process. With the help of this system, we can easily classify CXRs that have active TB and even CXRs that show mild abnormalities, thus ensuring that high-risk patients get the help they require on time.
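
Comparing classifier performances, as this study does, usually comes down to per-class precision and recall from a confusion matrix. A minimal sketch of that computation; the 2-class counts below are made-up illustrations, not results from the paper:

```python
def precision_recall(confusion, cls):
    """Per-class precision and recall from a confusion matrix where
    confusion[true][pred] holds counts."""
    classes = range(len(confusion))
    tp = confusion[cls][cls]
    fp = sum(confusion[t][cls] for t in classes) - tp  # predicted cls, wrongly
    fn = sum(confusion[cls][p] for p in classes) - tp  # true cls, missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative 2-class (normal vs. TB) matrix with made-up counts:
cm = [[90, 10],
      [5, 95]]
p, r = precision_recall(cm, cls=1)  # metrics for the TB class
```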



European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 3, Pages 2271-2285

COVID-19 is a rapidly growing pandemic infectious disease caused by a coronavirus. It was first identified in Wuhan in December 2019, expanded its circle all over the world, and finally spread to India. The whole world is fighting against the spread of this deadly disease; cases in India have also been gradually increasing day by day since May, after lockdown. This article shows how machine learning and deep learning models can be utilized to understand the disease's everyday exponential behaviour and to predict the future reachability of COVID-19 across nations by utilizing real-time information from Johns Hopkins. This paper studies the COVID-19 dataset and explores the data through visualization with different libraries available in Python. The paper also discusses the current situation in India in tackling the COVID-19 pandemic, and how ongoing developments in AI and ML have significantly improved treatment, medication, screening tests, prediction, forecasting, contact tracing, and the drug/vaccine development process for the COVID-19 pandemic, reducing human intervention in medical practice. However, most of the models are not deployed enough to show their real-world operation, but they are still up to the mark. Within this paper, we present Exploratory Data Analysis, Data Preprocessing, Data Cleaning and Manipulation, Machine Learning Algorithms, a Pandemic Analyzing Engine GUI, and Deep Learning. We performed linear regression, Decision Tree, SVM and Random Forest, and for forecasting we used FBProphet and the ARIMA model to predict the pandemic situation over the next 15 days.
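
The simplest of the forecasting baselines mentioned above, linear regression, fits a trend line and extrapolates it forward. A minimal closed-form sketch; the daily counts are made-up numbers for illustration, not real COVID-19 data:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x (closed-form normal equations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Illustrative daily counts (made-up numbers):
days = [0, 1, 2, 3, 4]
cases = [100, 120, 140, 160, 180]
a, b = fit_line(days, cases)
forecast_day_15 = a + b * 15  # extrapolate 15 days ahead
```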

Multi-Stage Classification Technique for Breast Cancer Detection in Histopathology Images using Deep Learning

Nagamani Gonthina; C. Jagadeeswari; Prabhavathi V; Sneha B

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1104-1110

Over the past decade, substantial improvement in computational ability and betterment of image-analysis algorithms have gained vast fame in resolving challenges in the area of medical diagnosis. Subsequently, computerized tissue histopathology is at present becoming tractable for digitized image analysis and deep learning methods. Cancer is a cluster of disorders involving irregular cell maturation with the capability to invade or proliferate to other organs of the body. Detection of cancer in the earlier stages is an exacting task, due to which many people are prone to death. Treatment of cancer benefits from the pace and perfection of deep-learning-aided diagnostic practice. Deep learning techniques are utilized to diagnose the features of progressed carcinoma with enhanced perfection compared to an individual pathologist. This paper suggests a deep convolutional neural network that first categorizes a tissue as malignant, thereafter segregates the tissue, and ultimately performs multi-class detection and classification of breast cancer and its stages in histopathology images.
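The multi-stage decision flow described above (binary malignant/benign gate, then a multi-class stage classifier) can be sketched with hypothetical stand-in classifiers; the threshold, scoring, and four-stage labelling below are illustrative only, standing in for the paper's trained CNN heads:

```python
import numpy as np

def binary_stage(features: np.ndarray) -> bool:
    """Stage 1 stand-in: flag tissue as malignant (True) or benign (False)."""
    return float(features.mean()) > 0.5

def multiclass_stage(features: np.ndarray) -> int:
    """Stage 2 stand-in: pick the highest-scoring of four hypothetical stages."""
    return int(np.argmax(features[:4]))

def classify(features: np.ndarray) -> str:
    """Multi-stage pipeline: only tissue flagged malignant in stage 1
    is forwarded to the multi-class stage classifier."""
    if not binary_stage(features):
        return "benign"
    return f"stage-{multiclass_stage(features) + 1}"

print(classify(np.array([0.9, 0.8, 0.95, 0.7])))  # stage-3
```

The gating design means the multi-class network only ever sees tissue the binary network considers malignant, which simplifies its learning problem.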

Deep Learning in Tuberculosis Diagnosis: A Survey

B. Sandhiya; Dr.R. Punniyamoorthy; Saravanan. B; Vijay Prabhu. R; Subhash. V

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 2736-2740

Tuberculosis is a contagious disease that leads to deaths worldwide. In the majority of developing countries, access to diagnostic tools and test usage is relatively poor. Recent advancements in the field of artificial intelligence may help fill this technology gap. Computer-aided detection and diagnosis helps in diagnosing the disease through clinical symptoms as well as patients' X-ray images. Nowadays many strategies are formulated to increase the classification accuracy of tuberculosis diagnosis using AI and deep learning approaches. Our survey paper focuses on describing the wide range of AI and deep learning approaches employed in the diagnosis of tuberculosis.

Improved Convolution Neural Network For Detecting Covid-19 From X-Ray Images

Sankara Sai Sumanth Kota; Anthony Rajesh Reddy Yeruva; Rohit Desai; Venubabu Rachapudi

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 3, Pages 1221-1230

Coronavirus disease 2019 (COVID-19) is a communicable disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first identified in Wuhan, Hubei, China in December 2019 and has contributed to a continuing pandemic. As of early July 2020, more than 10.6 million cases throughout 188 countries around the world had been identified, culminating in more than 516,000 deaths. To prevent COVID-19 from spreading among people, an automated detection system needs to be introduced as a fast alternative diagnostic method. Machine learning algorithms based on radiographic images can be used as a mechanism to support decision-making and help radiologists speed up the diagnostic process. This work introduces a new model for automatic detection of COVID-19 using raw chest X-ray images. The proposed model, with 4 convolutional layers, 2 max-pooling layers and dropout, is designed to provide reliable diagnostics for binary (COVID vs. no findings) and multi-class (COVID vs. no findings vs. pneumonia) classification. Our model achieves 98.9% binary classification accuracy and 85% multi-class classification accuracy.
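The architecture described (convolutional layers, max pooling, dropout) rests on three building blocks. As a minimal NumPy sketch of each operation in isolation (the full model would of course be built and trained in a deep learning framework):

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(img: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling: keep the largest value in each block."""
    h, w = img.shape
    return (img[:h - h % size, :w - w % size]
            .reshape(h // size, size, w // size, size)
            .max(axis=(1, 3)))

def dropout(x: np.ndarray, rate: float = 0.5, seed: int = 0) -> np.ndarray:
    """Inverted dropout (training-time): zero activations with probability
    `rate` and rescale the survivors so expected magnitude is unchanged."""
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

Stacking four `conv2d` feature extractors with nonlinearities, two `maxpool2d` reductions, and `dropout` regularization reproduces the layer pattern the abstract describes.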

Human Activity Recognition using SVM and Deep Learning

V. Parameswari; S. Pushpalatha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1984-1990

Human activity recognition is one of the foremost emerging technologies. Principal components extracted from the body-part regions are utilized for human activity recognition to reduce dimensionality, and a multi-scale representation of the human action is computed to preserve the discriminative information before that reduction. This paper presents a human activity recognition system for identification of a person: it takes as input a video of COVID-19 patients and searches for a match within the stored images. The method is based on Gabor feature extraction using Gabor filters. For feature extraction, the input image is convolved with Gabor filters, and a personal sample generation algorithm is then employed to select a set of informative and non-redundant Gabor features. A deep neural network (DNN) is used to match the input human-action image to the stored images. This method can be used in hospital management applications for detecting COVID-19 patient activity from surveillance cameras. Using SVM and deep learning, the human activity is recognized with the MATLAB tool.
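The Gabor features mentioned above come from convolving the image with Gabor kernels: a Gaussian envelope modulating a sinusoid at a chosen orientation and wavelength. A minimal NumPy sketch of the real-valued kernel (parameter values are illustrative; the paper's pipeline is in MATLAB):

```python
import numpy as np

def gabor_kernel(ksize: int = 15, sigma: float = 3.0, theta: float = 0.0,
                 lambd: float = 8.0, gamma: float = 0.5,
                 psi: float = 0.0) -> np.ndarray:
    """Real part of a Gabor filter: Gaussian envelope times an oriented
    sinusoid (orientation `theta`, wavelength `lambd`, phase `psi`)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the filter's orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / lambd + psi)

k = gabor_kernel()
print(k.shape)  # (15, 15)
```

A bank of such kernels over several orientations and wavelengths, each convolved with the input frame, yields the feature responses from which the informative, non-redundant subset is selected.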