Education
· Ph.D. in Biomedical Engineering (Bio-electric Engineering), 2006, Amirkabir University of Technology
· M.Sc. in Biomedical Engineering (Bio-electric Engineering), 2000, Amirkabir University of Technology
· B.Sc. in Electrical Engineering (Electronic Engineering), 1997, Isfahan University of Technology
O. Mehdizadeh Dastjerdi, M. Bakhtiarnia, M. Yazdchi, K. Maghooli, F. Farokhi, K. Jadidi, September 2023, Heliyon. Keratoconus (KC) is a common disorder characterized by progressive corneal thinning and steepening. Intracorneal ring implantation has become a successful surgical procedure for correcting the vision of KC patients, and identifying suitable candidates for this surgical option is among the chief concerns of ophthalmologists. To reduce their burden and improve treatment, this research aims to predict the ocular condition of KC patients after corneal ring implantation, focusing on post-surgical corneal topographic indices and visual characteristics. The study applies an artificial neural network approach to predict these ocular features 6 and 12 months after implanting KeraRing and MyoRing, based on the accumulated data. The datasets comprise corneal topographic maps and visual characteristics recorded from KC patients before and after ring implantation. The visual characteristics under study are uncorrected visual acuity (UCVA), sphere (SPH), astigmatism (Ast), astigmatism orientation (Axe), and best corrected visual acuity (BCVA). In addition, statistical data from multiple KC subjects were recorded, including three effective corneal topographic indices (Ast, K-reading, and pachymetry) before and after ring implantation. The results demonstrate the value of the trained models for estimating the ocular features of KC subjects following implantation: the corneal topographic indices and visual characteristics were estimated with mean errors of 7.29% and 8.60%, respectively, and errors of 6.82% and 7.65% were obtained for the visual characteristics and corneal topographic indices, respectively, when the predictions were assessed with the leave-one-out cross-validation (LOOCV) procedure. These results confirm the strong potential of neural networks to guide ophthalmologists in choosing appropriate surgical candidates and their specific intracorneal rings by predicting post-implantation ocular features.
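As a rough illustration of the prediction setup described in this abstract, here is a minimal sketch of a neural-network regressor evaluated with leave-one-out cross-validation (LOOCV). The feature layout, network size, and synthetic data are assumptions for illustration only; the paper's actual architecture and dataset are not reproduced.

```python
# Minimal sketch: neural-network regression of a post-operative index,
# evaluated with LOOCV. Feature layout and data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 40
# Assumed pre-operative features, e.g. [Ast, K-reading, pachymetry, UCVA, SPH]
X = rng.normal(size=(n_patients, 5))
# Assumed post-operative target, e.g. K-reading at 6 months
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=n_patients)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)

# LOOCV: train on all patients but one, predict the held-out patient.
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    errors.append(abs(model.predict(X[test_idx])[0] - y[test_idx][0]))

print(f"LOOCV mean absolute error: {np.mean(errors):.3f}")
```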
Mehdi Bazargani, Amir Tahmasebi, Mohammadreza Yazdchi, Zahra Baharlouei, October 2023, JMSS. An Emotion Recognition Embedded System using a Lightweight Deep Learning Model. Diagnosing emotional states would make human-computer interaction (HCI) systems more effective in practice. Correlations between electroencephalography (EEG) signals and emotions have been shown in various studies; therefore, EEG-based methods are the most accurate and informative. In this study, three Convolutional Neural Network (CNN) models appropriate for processing EEG signals, EEGNet, ShallowConvNet, and DeepConvNet, are applied to diagnose emotions. We use baseline-removal preprocessing to improve classification accuracy. Each network is assessed in two settings: subject-dependent and subject-independent. We optimize the selected CNN model to be lightweight and implementable on a Raspberry Pi processor. The emotional states are recognized for every three-second epoch of received signals on the embedded system, which enables real-time use in practice. Average classification accuracies of 99.10% for valence and 99.20% for arousal in the subject-dependent setting, and 90.76% for valence and 90.94% for arousal in the subject-independent setting, were achieved on the well-known DEAP dataset. Comparison with related works shows that a highly accurate and implementable model has been achieved for practical use.
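For illustration, the sketch below shows a compact EEGNet-style convolutional classifier over fixed three-second EEG epochs, followed by conversion to TensorFlow Lite as one would do before deploying to an embedded board such as a Raspberry Pi. The channel count, sampling rate, layer sizes, and random training data are assumptions, not the configuration used in the paper.

```python
# Sketch: compact CNN for 3-second EEG epochs (assumed 32 channels at 128 Hz),
# exported to TensorFlow Lite for an embedded target.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_channels, n_samples = 32, 3 * 128          # 3-second epoch at an assumed 128 Hz

model = models.Sequential([
    layers.Input(shape=(n_channels, n_samples, 1)),
    layers.Conv2D(8, (1, 64), padding="same", use_bias=False),   # temporal filters
    layers.BatchNormalization(),
    layers.DepthwiseConv2D((n_channels, 1), use_bias=False),     # spatial filters
    layers.BatchNormalization(),
    layers.Activation("elu"),
    layers.AveragePooling2D((1, 4)),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),                       # e.g. high/low valence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data; real inputs would be preprocessed DEAP epochs.
X = np.random.randn(64, n_channels, n_samples, 1).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=1, batch_size=16, verbose=0)

# Convert to TFLite so the classifier can run on the embedded processor.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("emotion_cnn.tflite", "wb").write(tflite_model)
```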
Dr. Mohammadreza Yazdchi (Ph.D.)
Associate Professor
I obtained my B.Sc. in Electrical Engineering from the Isfahan University of Technology in 1997 and received my M.Sc. and Ph.D. in Biomedical Engineering from Amirkabir University of Technology in 2000 and 2006, respectively. Since 2007, I have been with the Department of Biomedical Engineering at the University of Isfahan, where I am currently an Associate Professor. My research interests include biomedical signal and medical image processing for automated diagnostic and recognition systems usable in advanced therapeutic and diagnostic devices; processing of electrocardiogram signals acquired with external and implantable devices, for implementation on wearable, non-portable, and implantable devices that enable automatic patient monitoring; EEG signal processing for the detection of neurocognitive, neurological, and neurodevelopmental disorders and neurodegenerative diseases; and medical image processing for automated diagnostic and recognition systems.
|
Interests
· Biomedical Signal Processing
· Medical Image Processing
· Biomedical Instrumentation
· Biologically Inspired Computing
· Bio-Inspired Engineering
About me
I have taught various courses at different universities since 2000, but recently I have focused on the courses listed below.
|
Teaching
Featured Publications
Negin Alamatsaz, Leyla Tabatabaei, Mohammadreza Yazdchi, Hamidreza Payan, Nima Alamatsaz, Fahimeh Nasimi, April 2024, BSPC. A lightweight hybrid CNN-LSTM explainable model for ECG-based arrhythmia detection. The electrocardiogram (ECG) is the most frequent and routine diagnostic tool for monitoring the heart's electrical signals and evaluating its functionality. The human heart can suffer from a variety of diseases, including cardiac arrhythmias. An arrhythmia is an irregular heart rhythm that in severe cases can lead to stroke and can be diagnosed from ECG recordings. Since early detection of cardiac arrhythmias is of great importance, computerized and automated classification and identification of these abnormal heart signals have received much attention over the past decades. This paper introduces a light Deep Learning (DL) approach for high-accuracy detection of 8 different cardiac arrhythmias and normal rhythm. To employ DL techniques, the ECG signals were preprocessed using resampling and baseline wander removal. Classification was performed by an 11-layer network combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). To evaluate the proposed technique, ECG signals were chosen from two PhysioNet databases, the MIT-BIH Arrhythmia Database and the Long-Term AF Database. The proposed DL framework based on the combination of CNN and LSTM showed more promising results than most state-of-the-art methods, reaching a mean diagnostic accuracy of 98.24%. A trained model for arrhythmia classification using diverse ECG signals was successfully developed and tested. This study presents a lightweight classification technique with high diagnostic accuracy compared to other notable methods, making it a potential candidate for implementation in Holter monitor devices for arrhythmia detection. Finally, we used SHapley Additive exPlanations (SHAP), the most popular Explainable Artificial Intelligence (XAI) method, to understand how our model makes predictions. The results indicate that the features (ECG samples) that contributed the most to a prediction are consonant with clinicians' decisions. The use of interpretable models therefore increases clinicians' trust in AI and thus helps reduce misdiagnoses of cardiovascular diseases.
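To make the architecture concrete, here is a minimal sketch of a hybrid 1D-CNN + LSTM classifier for fixed-length ECG segments. The segment length, layer sizes, and nine-class output are assumptions for illustration; the paper's exact 11-layer configuration and preprocessing are not reproduced, and the SHAP attribution step would be applied to the trained model afterwards and is not shown.

```python
# Sketch: hybrid CNN-LSTM classifier for fixed-length single-lead ECG segments.
# Segment length, layer sizes, and the 9-class output are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

seg_len, n_classes = 3600, 9            # e.g. 10 s at 360 Hz; 8 arrhythmias + normal

model = models.Sequential([
    layers.Input(shape=(seg_len, 1)),
    layers.Conv1D(32, kernel_size=7, strides=2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                    # temporal dependencies over CNN features
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; real inputs would be resampled, baseline-corrected segments.
X = np.random.randn(32, seg_len, 1).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```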
G. Zafaripour, M. Yazdchi, As'ad Alizadeh, M. Ghadiri Nejad, D. Abasi Dehkordi, D.T. Semirumi, August 2023, MSE: B. Acute wounds include surgical wounds, first-degree burns, and graft sites, whereas chronic wounds have a gradual onset and a healing process that stalls because of factors such as poor blood flow, local pressure, and diabetes. In this study, a novel wound dressing made of carboxymethyl chitosan compounded with titanium nanoparticles (TiNP) was fabricated using the freeze-drying technique. The wound dressing contained a pH-sensitive mini-emulsion in which time and temperature are controlled and adjusted via pH sensors. A heater was connected to the wound temperature sensor so that, when the wound temperature dropped below the specified range, it automatically turned on and raised the temperature back to the upper limit. The mechanical strength and biological behavior of the tissue were evaluated to find an optimized sample. The prepared wound dressing was characterized using scanning electron microscopy (SEM), and the biodegradation rate was evaluated by weight-change analysis. The obtained stress-strain results show that the sample containing polyvinyl alcohol (PVA) had the lowest strength and the sample with TiNP had the highest; the results indicate that the TiNP sample has the highest mechanical strength among the samples, owing to its inherent antibacterial properties. Moreover, fuzzy logic modeling was used to build a linguistic model of the experimental data. After developing the model, its outputs were compared to the experimental results, showing that the developed method is efficient and can be used to predict the outcome for given inputs before performing any test in practice.
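As a small illustration of the linguistic (fuzzy) modeling mentioned at the end of this abstract, the sketch below builds a Mamdani-style fuzzy system in scikit-fuzzy that maps a formulation variable to a predicted strength. The variable names, universes, membership functions, and rules are invented placeholders, not the rule base derived in the study.

```python
# Sketch: Mamdani fuzzy model relating an additive content to predicted strength.
# Variables, ranges, membership functions, and rules are placeholders.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

additive = ctrl.Antecedent(np.arange(0, 11, 1), "additive_content")   # assumed wt%
strength = ctrl.Consequent(np.arange(0, 101, 1), "strength")          # arbitrary units

additive.automf(3)            # auto-generates 'poor', 'average', 'good' terms
strength["low"] = fuzz.trimf(strength.universe, [0, 0, 50])
strength["medium"] = fuzz.trimf(strength.universe, [25, 50, 75])
strength["high"] = fuzz.trimf(strength.universe, [50, 100, 100])

rules = [
    ctrl.Rule(additive["poor"], strength["low"]),
    ctrl.Rule(additive["average"], strength["medium"]),
    ctrl.Rule(additive["good"], strength["high"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["additive_content"] = 6.5
sim.compute()
print(f"Predicted strength: {sim.output['strength']:.1f}")
```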
Masoumeh Sharafi, Mohammadreza Yazdchi, Javad Rasti, February 2023, IPRIA. Audio-Visual Emotion Recognition Using K-Means Clustering and Spatio-Temporal CNN. Emotion recognition is a challenging task because of the emotional gap between subjective feeling and low-level audio-visual characteristics; developing a feasible approach for high-performance emotion recognition can therefore enhance human-computer interaction. Deep learning methods have improved the performance of emotion recognition systems compared with other current methods. In this paper, a multimodal deep convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) network are proposed, fusing the audio and visual cues in a deep model. The spatial and temporal features extracted from video frames are fused with short-time Fourier transform (STFT) features extracted from audio signals. Finally, a Softmax classifier assigns inputs to seven groups: anger, disgust, fear, happiness, sadness, surprise, and neutral. The proposed model is evaluated on the Surrey Audio-Visual Expressed Emotion (SAVEE) database with an accuracy of 95.48%. Our experimental study reveals that the suggested method is more effective than existing algorithms at emotion recognition on this dataset.
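As a small illustration of the audio branch, the sketch below extracts a log-magnitude short-time Fourier transform (STFT) spectrogram from an audio clip with SciPy, the kind of image-like input a spatio-temporal CNN could consume. The sampling rate, window length, and hop size are assumptions; the video branch and the K-means step are not shown.

```python
# Sketch: STFT log-magnitude spectrogram for the audio branch of an
# audio-visual emotion model. Sampling rate and window/hop sizes are assumptions.
import numpy as np
from scipy.signal import stft

fs = 16000                                   # assumed sampling rate (Hz)
audio = np.random.randn(3 * fs)              # stand-in for a 3-second clip

f, t, Z = stft(audio, fs=fs, nperseg=512, noverlap=256)
spectrogram = np.log1p(np.abs(Z))            # shape: (freq_bins, time_frames)

print(spectrogram.shape)                     # image-like input for a 2D CNN
```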
Neda Abdollahpour, Mohammadreza Yazdchi, Zahra Baharlouei, December 2022, ICBME. EEG Artifact Removal Based on Brain Dipoles' Regions Using ICA and Dipfit in Motor Imagery Tasks. In this article, a new semi-automatic electroencephalogram (EEG) artifact removal method is proposed for Motor Imagery (MI) tasks to improve system performance. Eight reference clusters are defined, whose locations were calculated from the precise coordinates of a large number of brain dipoles acquired from many users performing MI tasks. In this method, called 8-Ref-Clusters, a Blind Source Separation (BSS) algorithm together with the DIPFIT plugin of the EEGLAB platform plays a decisive role. The eight clusters indicate which dipoles are brain sources and which are artifacts to be eliminated. To improve performance for a particular subject, we defined a specific threshold that can alter the size of the clusters in three dimensions; this threshold is user-dependent. Comparing results before and after applying 8-Ref-Clusters on the BCI Competition IV 2a datasets, the average performance increased by roughly 4%, which is promising given that the datasets used in these evaluations had only been band-pass filtered between 8 and 30 Hz before applying Independent Component Analysis (ICA). A comparison between the results of the proposed method and those of other CSP-based methods shows that applying the proposed artifact removal before CSP alone can significantly enhance system performance, by about 15.6% in the case of the mCSP method. Overall, the proposed artifact removal method is semi-automatic, computationally fast, and able to detect various types of artifacts such as heartbeat, muscle and head movements, eye blinks, and line noise.
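The dipole-cluster selection itself relies on EEGLAB's DIPFIT and is not reproduced here; the sketch below only illustrates the surrounding pipeline in MNE-Python: band-pass filtering to 8-30 Hz, ICA decomposition, marking components as artifacts, and reconstructing the cleaned EEG. The synthetic recording, channel names, and the manually chosen excluded components are placeholders.

```python
# Sketch: band-pass filter, ICA decomposition, and removal of components judged
# to be artifacts (indices chosen by hand as a placeholder for the cluster rule).
import numpy as np
import mne

# Stand-in raw EEG: 22 channels, 10 s at 250 Hz (BCI Competition IV 2a-like layout).
info = mne.create_info(ch_names=[f"EEG{i:02d}" for i in range(22)],
                       sfreq=250.0, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(22, 2500) * 1e-5, info)

raw.filter(l_freq=8.0, h_freq=30.0)          # motor-imagery band of interest

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)

ica.exclude = [0, 3]                         # placeholder: components flagged as artifacts
clean = ica.apply(raw.copy())                # reconstruct EEG without excluded sources
```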
Neda Abdollahpour, Mohammadreza Yazdchi, Zahra Baharlouei, December 2022, ICBME. In this article, a new framework is proposed to address multi-class Motor Imagery Brain-Computer Interface (MI-BCI) problems in which only a small portion of the dataset is labeled. In this framework, the combination of Independent Component Analysis (ICA), multi-class Common Spatial Patterns (CSP), and a functional Application Programming Interface (API) model plays a pivotal role. In the feature extraction stage, a concatenated signal modified by spatial weights is constructed for each trial in three frequency ranges. This distribution of features provides suitable feature maps for augmentation, prepares the data for deep learning analysis, and emphasizes distinguishable features of the MI classes. In the classification stage, spatial and temporal features are learned by an effective combination of a one-dimensional Convolutional Neural Network (CNN) and a two-stage Bidirectional Long Short-Term Memory (BLSTM) network in three branches covering different frequency distributions; the model thus simultaneously learns past-to-future and future-to-past patterns in two stages. Experimental results on the BCI Competition IV 2a dataset show that the proposed method is reliable, practical, and more competitive than the other popular methods discussed in this paper. Overall, the proposed framework can alleviate the problem of small labeled datasets in MI tasks.
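For a concrete, much-reduced example of the CSP stage, the sketch below extracts CSP features from motor-imagery epochs with MNE and feeds them to a simple linear classifier. It is binary and uses random stand-in data; the paper's multi-class CSP over three frequency ranges and its CNN-BLSTM classifier are not reproduced.

```python
# Sketch: CSP spatial features from MI epochs, paired here with a simple
# classifier as a stand-in for the paper's deep model. Data are random placeholders.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in epochs: 80 trials x 22 channels x 500 samples, two MI classes.
X = np.random.randn(80, 22, 500)
y = np.random.randint(0, 2, size=80)

clf = make_pipeline(CSP(n_components=6, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```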
Masoumeh Sharafi, Mohammadreza Yazdchi, Reza Rasti, Fahimeh Nasimi, September 2022, BSPC. A novel spatio-temporal convolutional neural framework for multimodal emotion recognition. A practical method for high-performance emotion recognition could facilitate human-computer interaction, and among existing methods, deep learning techniques have improved the performance of emotion recognition systems. In this work, a new multimodal neural design is presented in which audio and visual data are combined as the input to a hybrid network comprising a bidirectional long short-term memory (BiLSTM) network and two convolutional neural networks (CNNs). The spatial and temporal features extracted from video frames are fused with Mel-Frequency Cepstral Coefficients (MFCCs) and energy features extracted from the audio signals and with the BiLSTM network outputs. Finally, a Softmax classifier assigns inputs to the set of target categories. The proposed model is evaluated on the Surrey Audio-Visual Expressed Emotion (SAVEE), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Ryerson Multimedia Research Lab (RML) databases. Experimental results prove the effectiveness of the proposed model, which achieves accuracies of 99.75%, 94.99%, and 99.23% for the SAVEE, RAVDESS, and RML databases, respectively. Our experimental study reveals that the suggested method is more effective than existing algorithms at emotion recognition on these datasets.
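As an illustration of the audio features named above, the sketch below extracts MFCC and energy (RMS) features with librosa and passes them through a small bidirectional LSTM. The visual CNN branches and the fusion used in the paper are not reproduced; the sampling rate, layer sizes, and seven-class output are assumptions.

```python
# Sketch: MFCC + energy features for the audio branch, fed to a small BiLSTM.
# Sizes and the 7-class output are illustrative assumptions.
import numpy as np
import librosa
from tensorflow.keras import layers, models

sr = 22050
audio = np.random.randn(3 * sr).astype(np.float32)       # stand-in 3-second clip

mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)   # (13, frames)
energy = librosa.feature.rms(y=audio)                    # (1, frames)
features = np.concatenate([mfcc, energy], axis=0).T      # (frames, 14), time-major

n_frames, n_feats = features.shape
model = models.Sequential([
    layers.Input(shape=(n_frames, n_feats)),
    layers.Bidirectional(layers.LSTM(64)),                # past-to-future and back
    layers.Dense(7, activation="softmax"),                # 7 emotion categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.predict(features[np.newaxis], verbose=0).shape)   # (1, 7)
```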
Sahar Karimian, Mohammadreza Yazdchi, Reza Hajian, April 2022, JMSS. One of the most prevalent methods of noninvasive cuff-based blood pressure (BP) measurement is the oscillometric method, which uses two different types of deflation: linear and step deflation. In this work, in addition to designing a novel algorithm based on the step-deflation method, a sample module was constructed and validated in clinical tests at different hospitals. Method: By controlling the valve, the cuff pressure is deflated through optimized steps, and real-time processing of the pressure-sensor signal extracts the pulses at each step; then, in offline mode, the mean arterial pressure is estimated by curve fitting. Result: A BP simulator, various modules, and an auscultatory method were used to validate the algorithm and its results. During the clinical tests, 80 people (men and women) aged 17–85 years participated: 11 dialysis patients and 69 non-dialysis subjects (healthy or with other diseases). Compared with the BP simulator, the obtained results fall within the standard range defined by the British Hypertension Society (BHS) and the US Association for the Advancement of Medical Instrumentation (AAMI), the global standards of comparison in this field.
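A minimal sketch of the curve-fitting idea: oscillation amplitudes measured at each deflation step are fitted with a smooth envelope, and the mean arterial pressure is read off at the envelope peak. The Gaussian envelope shape and the synthetic amplitudes are assumptions; the paper's actual envelope model, step control, and systolic/diastolic estimation are not reproduced.

```python
# Sketch: fit a smooth envelope to step-deflation oscillation amplitudes and take
# mean arterial pressure (MAP) as the cuff pressure at the envelope peak.
import numpy as np
from scipy.optimize import curve_fit

def envelope(p, a, map_mmhg, width):
    """Assumed Gaussian envelope of oscillation amplitude vs. cuff pressure."""
    return a * np.exp(-((p - map_mmhg) ** 2) / (2 * width ** 2))

# Cuff pressure at each deflation step (mmHg) and simulated pulse amplitudes.
cuff_pressure = np.arange(160, 40, -8, dtype=float)
amplitudes = envelope(cuff_pressure, 1.0, 93.0, 22.0)
amplitudes += 0.03 * np.random.default_rng(0).normal(size=cuff_pressure.size)

params, _ = curve_fit(envelope, cuff_pressure, amplitudes, p0=(1.0, 100.0, 25.0))
print(f"Estimated MAP: {params[1]:.1f} mmHg")
```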
Fahimeh Nasimi, Mohammadreza Yazdchi, February 2022, PLOS ONE. LDIAED: a lightweight deep learning algorithm implementable on automated external defibrillators. Differentiating between shockable and non-shockable electrocardiogram (ECG) signals would increase the success of resuscitation by Automated External Defibrillators (AEDs). In this study, a Deep Neural Network (DNN) is used to promptly distinguish shockable from non-shockable 1.4-second ECG segments. The proposed technique is frequency-independent and is trained on signals from diverse patients extracted from the MIT-BIH database, the MIT-BIH Malignant Ventricular Ectopy Database (VFDB), and the Creighton University Ventricular Tachyarrhythmia Database (CUDB), resulting in an accuracy of 99.1%. Finally, an optimized version of the model is loaded onto a Raspberry Pi minicomputer. Testing the implemented model on this processor with unseen ECG signals resulted in an average latency of 0.845 seconds, meeting the IEC 60601-2-4 requirements. According to the evaluated results, the proposed technique could be used in AEDs.
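To illustrate the embedded-inference side, the sketch below builds a tiny 1D CNN for 1.4-second ECG segments, converts it to TensorFlow Lite, and times a single inference with the TFLite interpreter, as one would on a Raspberry Pi. The sampling rate, layers, and untrained weights are placeholders, not the model from the paper.

```python
# Sketch: tiny 1D CNN for 1.4-second ECG segments, converted to TFLite and timed
# with the TFLite interpreter. Sampling rate and layer sizes are assumptions.
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

seg_len = int(1.4 * 250)                     # 1.4 s at an assumed 250 Hz

model = models.Sequential([
    layers.Input(shape=(seg_len, 1)),
    layers.Conv1D(16, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),   # shockable vs. non-shockable
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

segment = np.random.randn(1, seg_len, 1).astype(np.float32)
interpreter.set_tensor(inp["index"], segment)
start = time.perf_counter()
interpreter.invoke()
latency_ms = (time.perf_counter() - start) * 1000
prob = interpreter.get_tensor(out["index"])[0, 0]
print(f"p(shockable) = {prob:.2f}, latency = {latency_ms:.1f} ms")
```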
Research Topics
· Biomedical signal and medical image processing to achieve automated diagnostic and recognition systems usable in advanced therapeutic and diagnostic devices
· Electrocardiogram signal processing acquired with external and implantable devices, for implementation on wearable, non-portable, and implantable devices to achieve automatic patient monitoring
· EEG signal processing in the detection of neurocognitive, neurological, and neurodevelopmental disorders and neurodegenerative diseases
· Medical image processing to achieve automated diagnostic and recognition systems
|
|
· Biomedical Signal Processing
· Digital Signal Processing
· Biomedical Instrumentation
· Fuzzy Control Systems
· Signals and Systems
· Electrical Circuits I
· Electrical Circuits II
· Electronics I
· Electronics II
· Generic Equipment of Medical Centers
· Digital and Pulse Circuits
· Logic Circuits
· Introduction to Computational and Biological Intelligence
· Microprocessor 1