M. Ali Akcayol, PhD
Department of Computer Engineering, Gazi University
akcayol@gazi.edu.tr,  maakcayol@gmail.com




SHORT BIO (Curriculum vitae)
He received his BS, MS, and PhD degrees from Gazi University in 1993, 1998, and 2002, respectively. In his dissertation, he worked on applying fuzzy logic and artificial neural networks to dynamic systems.

He was at Michigan State University, USA, in 2004 for postdoctoral research.

He is currently the director of the Big Data and Artificial Intelligence Laboratory, which receives both national and international funding. The projects on which he has served as principal investigator have been funded by various public and private sector organizations, including Gazi University, TÜBİTAK, the Ministry of Industry and Technology, the International Bank for Reconstruction and Development, Havelsan, TUSAŞ, KoçSistem, and Huawei.

He has served as an editor, field editor, and reviewer for numerous international and national journals, and as a session chair, organizing committee member, and scientific committee member at prestigious conferences around the world. He is a professional member of the ACM.

His research interests include artificial intelligence, deep learning, big data analytics, recommender systems, intelligent optimization systems, mobile wireless networks, and smart buildings.
           


RECENT PROJECTS (All projects)
CAPE: Cognitively smart assistant in phygital environment
EUREKA ITEA4 22017. (Consultant)
SINTRA: Security of critical infrastructure by multi-modal dynamic sensing and AI
EUREKA ITEA4 22006. (Consultant)
Artificial intelligence based identity and access management system
TÜBİTAK 1507. (Consultant)
A model for integration of urban heat island effect mitigation into planning processes: Local climate zone based morphological approach
TÜBİTAK 1001. (Researcher)
Data collection, verification and querying from heterogeneous data sources on the Internet
TÜBİTAK BİDEB 2244 Industrial PhD Program 118C127, Huawei Technologies Co. Ltd. (Principal Investigator)


LATEST PUBLICATIONS (All publications)
Spread patterns of COVID-19 in European countries: Hybrid deep learning model for prediction and transmission analysis
Utku A., Akcayol M.A.
Neural Computing and Applications, DOI: 10.1007/s00521-024-09597-y, 2024.
Abstract | pdf
The COVID-19 pandemic has profoundly impacted healthcare systems and economies worldwide, leading to the implementation of travel restrictions and social measures. Efforts such as vaccination campaigns, testing, and surveillance have played a crucial role in containing the spread of the virus and safeguarding public health. There needs to be more research exploring the transmission dynamics of COVID-19, particularly within European nations. Therefore, the primary objective of this research was to examine the spread patterns of COVID-19 across various European countries. Doing so makes it possible to implement preventive measures, allocate resources, and optimize treatment strategies based on projected case and mortality rates. For this purpose, a hybrid prediction model combining CNN and LSTM models was developed. The performance of this hybrid model was compared against several other models, including CNN, k-NN, LR, LSTM, MLP, RF, SVM, and XGBoost. The empirical findings revealed that the CNN-LSTM hybrid model exhibited superior performance compared to alternative models in effectively predicting the transmission of COVID-19 within European nations. Furthermore, examining the peak of case and death dates provided insights into the dynamics of COVID-19 transmission among European countries. Chord diagrams were drawn to analyse the inter-country transmission patterns of COVID-19 over 5-day and 14-day intervals.
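As a rough illustration of the hybrid architecture described in this abstract, the sketch below stacks a 1-D convolutional block in front of an LSTM layer for next-step prediction on a univariate case series; the window length, layer sizes, and the synthetic series are illustrative assumptions, not the configuration used in the paper.
```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

def make_windows(series, window=14):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Illustrative stand-in: replace with a country's cumulative confirmed-case series.
series = np.cumsum(np.random.poisson(50, size=365)).astype("float32")
series /= series.max()                      # simple scaling to [0, 1]
X, y = make_windows(series, window=14)

model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu", input_shape=(14, 1)),
    MaxPooling1D(pool_size=2),              # CNN block extracts local patterns
    LSTM(64),                               # LSTM block models longer dependencies
    Dense(1),                               # next-day (scaled) cumulative cases
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
```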
Hybrid deep learning model based advanced AI-driven identity and access management system for enhanced security and efficiency
Demirsoy H.B., Köse E.N., Aydoğan F., Ezgin M.H., Akcayol M.A.
8th International Symposium on Innovative Approaches in Smart Technologies, İstanbul, Türkiye, 6-7 December 2024.
Abstract | pdf
Identity and access management (IAM) systems are essential for securing enterprise environments by ensuring that only authorized users can access critical resources. However, traditional IAM systems often fail to address the complexity of evolving cyber threats. This paper introduces an AI-driven IAM system that enhances security protocols through real-time anomaly detection. By leveraging a hybrid architecture consisting of convolutional neural networks (CNN) and long short-term memory (LSTM) layers, the system provides real-time analysis of user behavior to detect identity-related anomalies. The data was collected from real-world environments using a .NET worker service and preprocessing involved user-specific normalization techniques. The proposed model achieved test accuracy of 85.44%, precision of 87.95%, recall of 85.44%, and area under curve (AUC) score of 0.8578. These results demonstrate the model’s ability to provide scalable and adaptive solutions for modern IAM challenges.
Automated test case output generation using Seq2Seq models
Özer E., Akcayol M.A.
3rd International Conference on Software and Information Engineering, Derby, UK, 2-4 December 2024.
Abstract | pdf
The aim of this paper is to present a creative approach to generate test case outputs for a given input automatically for software testing. Sequence-to-sequence (seq2seq) model is applied. Our approach aims to address the challenge of creating meaningful test case outputs for input variations in software testing, improving efficiency and accuracy in test automation. With the help of natural language processing techniques, the model is trained on an original dataset of test inputs and their corresponding outputs, predicting the output for a given test case input. We employ evaluation metrics including BLEU, ROUGE, and JACCARD similarity scores to assess the quality of generated outputs, comparing them against reference outputs. Our initial results show that the seq2seq model has a huge potential of producing accurate test case outputs, significantly reducing manual effort in test case generation. This work demonstrates the potential for integrating Recurrent Neural Network techniques into software testing and providing a scalable solution for automated test case output generation.
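The sketch below shows the general shape of such a seq2seq setup: a character-level encoder-decoder LSTM trained with teacher forcing on toy input/output pairs. The toy pairs, vocabulary handling, and dimensions are assumptions for illustration, and greedy or beam-search decoding at inference time is omitted.
```python
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding

# Toy pairs of (test input, expected output); a real dataset would be much larger.
pairs = [("add 2 3", "5"), ("add 10 1", "11"), ("upper abc", "ABC")]

# Shared character vocabulary with start ("\t") and end ("\n") tokens; 0 is padding.
chars = sorted({c for a, b in pairs for c in a + b}) + ["\t", "\n"]
idx = {c: i + 1 for i, c in enumerate(chars)}
max_in = max(len(a) for a, _ in pairs)
max_out = max(len(b) for _, b in pairs) + 2

def encode(text, length):
    v = [idx[c] for c in text]
    return v + [0] * (length - len(v))

enc_in = np.array([encode(a, max_in) for a, _ in pairs])
dec_in = np.array([encode("\t" + b, max_out) for _, b in pairs])              # teacher forcing
dec_out = np.array([encode(b + "\n", max_out) for _, b in pairs])[..., None]  # shifted targets

vocab = len(idx) + 1
e_in = Input(shape=(max_in,))
d_in = Input(shape=(max_out,))
enc_emb = Embedding(vocab, 32, mask_zero=True)(e_in)
_, h, c = LSTM(64, return_state=True)(enc_emb)                 # encoder summarizes the input
dec_emb = Embedding(vocab, 32, mask_zero=True)(d_in)
d_seq = LSTM(64, return_sequences=True)(dec_emb, initial_state=[h, c])
out = Dense(vocab, activation="softmax")(d_seq)                # per-step character prediction

model = Model([e_in, d_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit([enc_in, dec_in], dec_out, epochs=50, verbose=0)
```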
CNN based automatic speech recognition: A comparative study
Ilgaz H., Akkoyun B., Alpay Ö., Akcayol M.A.
Advances in Distributed Computing and Artificial Intelligence Journal, DOI: 10.14201/adcaij.29191, 2024.
Abstract | pdf
Recently, one of the most common approaches used in speech recognition is deep learning. The most advanced results have been obtained with speech recognition systems created using convolutional neural networks (CNN) and recurrent neural networks (RNN). Since CNNs can capture local features effectively, they are applied to tasks with relatively short-term dependencies, such as keyword detection or phoneme-level sequence recognition. This paper presents the development of a deep learning based speech command recognition system. The Google Speech Commands Dataset has been used for training. The dataset contains 65,000 one-second-long utterances of 30 short English words; 80% of the dataset has been used for training and 20% for testing. The one-second voice commands have been converted into spectrograms and used to train different artificial neural network (ANN) models, including several variants of CNN. The accuracy of the proposed model has reached 94.60%.
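A minimal sketch of a spectrogram-based CNN classifier of the kind described above is given below, assuming fixed-size spectrogram inputs and 30 command classes; the input shape, layer sizes, and random stand-in data are illustrative, not the paper's configuration.
```python
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 30             # 30 short English command words, as in the dataset
INPUT_SHAPE = (124, 129, 1)  # spectrogram height x width x channels (assumed values)

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Illustrative stand-in for spectrograms computed from one-second audio clips.
X = np.random.rand(256, *INPUT_SHAPE).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
```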
Advanced AI-driven identity and access management system for enhanced security and efficiency
Demirsoy H.B., Köse E.N., Aydoğan F., Ezgin M.H., Akcayol M.A.
6th International Conference on Artificial Intelligence and Applied Mathematics in Engineering, Warsaw, Poland, September 26–28, 2024.
Abstract | pdf
Identity and Access Management (IAM) systems are essential for securing enterprise environments, ensuring that only authorized users can access critical resources. However, traditional IAM systems often fall short in addressing the evolving complexity of cyber threats, making it challenging for organizations to maintain robust security measures. This paper introduces an AI-driven IAM system designed to tackle these challenges by enhancing security protocols and optimizing user authentication processes through real-time anomaly detection. The proposed system leverages advanced machine learning algorithms to identify and mitigate identity-related anomalies, preventing unauthorized access and potential security breaches. Initial evaluations reveal that the system significantly improves detection accuracy, demonstrating its effectiveness across various sectors, including finance, healthcare, and technology. The model not only addresses the limitations of traditional IAM systems but also provides a robust, adaptive, and scalable solution tailored for modern enterprises. In experimental studies, the model achieved an accuracy of 0.73, a precision of 0.70, and a recall of 0.80. These results indicate the system's strong capability to accurately detect anomalies, reinforcing its potential to redefine IAM practices and enhance security measures in dynamic enterprise environments. By integrating AI-driven anomaly detection, the developed IAM system offers a forward-looking and innovative approach to managing access controls.
Multiple attention-based deep learning model for MRI captioning
Maraş B., Karatorak S., Özdem Karaca K., Gedik A.O., Akcayol M.A.
Muş Alparslan University Journal of Science, Accepted.
Abstract | pdf
In recent years, the use of artificial intelligence in medicine, as in many other fields, has begun to increase considerably. Creating magnetic resonance (MR) reports manually by medical doctors is a very difficult, time-consuming, and potentially error-prone process. In order to address these problems, a deep learning-based image captioning model is proposed in this study to automatically generate reports from brain MRIs. In the developed model, image processing, natural language processing, and deep learning methods are used together to produce text for the content and diagnoses in the medical image. First, pre-processing, such as rotating at random angles, changing size, cropping, changing brightness and contrast, adding shadows, and mirroring, were performed for MR images. Then, a model that generates reports was developed by utilizing the Bootstrapping Language Image Pre-Training (BLIP) model and the transformer architecture of the model. The experimental studies showed that the proposed model had successful results; the produced reports were highly similar to the original reports and could be used as a supplementary tool in medicine.
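For orientation only, the snippet below runs an off-the-shelf BLIP captioning checkpoint from the Hugging Face transformers library on a single image; the checkpoint and the file name are placeholders, and the fine-tuning on MR report data described in the paper is not reproduced here.
```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Generic image-captioning checkpoint, not a model trained on brain MR reports.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("brain_mri_slice.png").convert("RGB")   # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```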
Anomaly detection on servers using log analysis
Özelgül S.B., Saygılı M.İ., Öztürk İ.S., Karaca K.Ö., Gedik M.O., Akcayol M.A.
IEEE 8th International Artificial Intelligence and Data Processing Symposium, Malatya, Türkiye, September 21–22, 2024.
Abstract | pdf
Increases in data volume and complexity make log analysis mandatory for security and performance management in server systems. In this new era, where traditional manual methods are insufficient, the automatic log analysis potential of artificial intelligence and deep learning techniques comes to the fore. In this study, a deep learning model is developed to detect anomalies by analyzing log data collected from servers and devices. This log anomaly detection model, developed using a Convolutional Neural Network (CNN), uses structured log data processed with the Drain log parsing algorithm and effectively classifies anomalies by extracting features from this data. In the experimental studies conducted on Hadoop Distributed File System (HDFS) log data, it is observed that the model reaches accuracy rates of up to 99% and improves both debugging processes and operating efficiency.
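A minimal sketch of the classification stage is given below, assuming each server session has already been parsed into a fixed-length sequence of event IDs by a Drain-style parser and labeled as normal or anomalous; the event count, sequence length, and stand-in data are assumptions.
```python
import numpy as np
from tensorflow.keras import layers, models

# Assumed setup: each session is a padded/truncated sequence of event IDs produced
# by a Drain-style log parser (e.g., HDFS block sessions), labelled 0 (normal) or 1.
NUM_EVENT_TYPES = 50
SEQ_LEN = 100

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(NUM_EVENT_TYPES + 1, 16),      # event-ID embeddings (0 = padding)
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # local event patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),           # anomaly probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Illustrative stand-in data; real inputs would come from parsed server logs.
X = np.random.randint(1, NUM_EVENT_TYPES + 1, size=(512, SEQ_LEN))
y = np.random.randint(0, 2, size=512)
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```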
Real time malicious drone detection using deep learning on FANETs
Yapıcıoğlu C., Demirci M., Akcayol M.A.
IEEE International Black Sea Conference on Communications and Networking, Tbilisi, Georgia, June 24–27, 2024.
Abstract | pdf
Lately, unmanned aerial vehicles, especially drones, have been used mainly for transportation, communication, and military purposes. Using only one drone to accomplish a mission leads to a solution that is costly and has low error tolerance. For this reason, a network structure called Flying Ad-Hoc Networks (FANET) has been created, consisting of organized drones with lower costs and a task sharing mechanism. However, these networks remain vulnerable to various attacks due to vulnerabilities such as the use of civilian drones, the use of unencrypted GPS signals, and physical attacks using malicious drones. Although efforts are being made to find solutions to these attacks, an effective solution cannot be produced due to the limited memory and computation capabilities of drones. Encryption of drone communications is important for security. However, the computational costs associated with encryption cause a decrease in drone battery life. In the literature, the Elliptic Curve Cryptography (ECC) algorithm is mostly used due to its low computational cost. Even though the algorithm has a low computational cost compared to other cryptographic algorithms, it also increases power consumption. In this study, drone detection and the subsequent classification of malicious drones, which are potential threats for man-in-the-middle or physical attacks on the network, were implemented using frames taken in real time from the drone camera. The YOLO (You Only Look Once) detection algorithm was used in the drone detection phase and a Convolutional Neural Network (CNN) was used in the classification phase. While communication between drones is normally carried out unencrypted, communication is switched to ECC-based encryption after a malicious drone is detected. Thus, the aim is to increase drone battery life and switch to encrypted communication only in case of doubt. In the study, a dataset consisting of 4 classes, namely Yuneec Typhoon, DJI Tello, DJI Phantom 4, and other types of drones, was created using internet resources and YouTube videos, and the classification success was measured as 88.78%.
Artificial intelligence and its areas of use in healthcare
Bostancı S.D., Karaca K.Ö., Akcayol M.A., Bani M.
Journal of Gazi University Health Sciences Institute, DOI: 10.59124/guhes.1453052, 2024.
Abstract | pdf
Artificial intelligence (AI) refers to computer systems that can perform tasks requiring human intelligence, and it builds on data-driven approaches such as machine learning, deep learning, and artificial neural networks. With the increase in data collection and the ability to store large amounts of data, the use of AI in the field of health has been growing rapidly. AI is being used more and more frequently for its features that help physicians in diagnosis, treatment planning, prognosis prediction, and the application of treatments. This review aims to describe AI and its areas of use in the healthcare system.
Neural network based a comparative analysis for customer churn prediction
Utku A., Akcayol M.A.
Muş Alparslan University Journal of Science, DOI: 10.18586/msufbd.1466246, 2024.
Abstract | pdf
Customer churn refers to the disconnection of a customer from a business. The cost of customer churn includes both lost revenue and the marketing costs of acquiring new customers. Reducing customer churn is a primary goal for every business. Customer churn prediction can contribute to the development of strategies that enable businesses to retain customers with a high risk of loss. Nowadays, the importance of customer churn prediction models is increasing day by day. In this study, a multi-layer perceptron (MLP) based model has been developed to predict customer churn using the dataset of an anonymous telecommunications company. The developed model has been compared extensively with k-Nearest Neighbors (kNN), logistic regression (LR), naive Bayes (NB), random forest (RF), and support vector machine (SVM). The experimental results have shown that the developed MLP-based model is more successful than the others with respect to accuracy, precision, recall, sensitivity, balanced classification rate, Matthews correlation coefficient, and area under the ROC curve (AUC).
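A compact scikit-learn sketch of this kind of comparison is given below, training an MLP alongside the baseline classifiers on stand-in churn data and reporting accuracy, precision, recall, and AUC; the features, labels, and hyperparameters are placeholders rather than the study's setup.
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

# Illustrative stand-in for customer features (tenure, charges, usage, ...) and churn labels.
X = np.random.rand(1000, 12)
y = np.random.randint(0, 2, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42),
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=500),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred, zero_division=0),
          recall_score(y_te, pred), roc_auc_score(y_te, proba))
```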
Hybrid ConvLSTM model for evaluating the performance of SMEs in the software sector
Utku A., Sevinç A., Akcayol M.A.
Naturengs, Vol.5(1), 2024.
Abstract | pdf
SME is a term used for businesses based on the number of employees, size and turnover. SMEs form the basis of the economy and are indispensable organizations of business life around the world. In this study, ConvLSTM model was created to evaluate the financial performance of SMEs operating in the software sector in Turkey. The motivation of the study is to analyze the performance of SMEs operating in the software sector in Turkey. In the study, data from the Turkish Small and Medium Enterprises Development and Support Institution for the period 2018-2022 was used. ConvLSTM was compared with LR, LSTM, SVM, CNN, RF and MLP. Experiments showed that ConvLSTM outperformed other models, with performance above 0.8 R2 for all parameters.
Log anomaly detection in application servers using deep learning
Alagöz E., Şahin Y.M., Özdem K., Gedik A.O., Akcayol M.A.
Innovative Methods in Computer Science and Computational Applications in the Era of Industry 5.0, Vol.1(1), pp.258-268, 2024. (Selected from ICAIAME 2023)
Abstract | pdf
Log anomaly detection is vital in managing the large-scale and distributed systems used today. Log analysis must be done in a short time and with high accuracy to be beneficial. As attacks on systems become more and more complex, traditional log anomaly detection methods have become cumbersome, unsuccessful, and impractical. In this study, a deep learning-based model has been developed for anomaly detection using log data from application servers in large-scale systems. First, pre-processing was carried out on the log data, and then parsing and grouping were carried out. The Drain method was used to parse the log files. The obtained data were divided into two groups, and the training and testing of the developed deep learning model were carried out. In the feature extraction phase, log data were converted into vectors and used as input for the developed model. The developed model learns normal and abnormal behavior in the data set and then detects abnormal behavior. The results obtained from the experimental studies showed that the developed model successfully detected 93% of the anomaly data. It has been observed that the level of success at the data labeling stage is very effective in training the model and detecting anomalies.
Hybrid deep learning model for earthquake time prediction
Utku A., Akcayol M.A.
Gazi University Journal of Science, DOI: 10.35378/gujs.1364529, 2024.
Abstract | pdf
Earthquakes are one of the most dangerous natural disasters that have constantly threatened humanity in the last decade. Therefore, it is extremely important to take preventive measures against earthquakes. Time estimation in these dangerous events is becoming more specific, especially in order to minimize the damage caused by earthquakes. In this study, a hybrid deep learning model is proposed to predict the time of the next earthquake to potentially occur. The developed CNN+GRU model was compared with RF, ARIMA, CNN and GRU. These models were tested using an earthquake dataset. Experimental results show that the CNN+GRU model performs better than others according to MSE, RMSE, MAE and MAPE metrics. This study highlights the importance of predicting earthquakes, providing a way to help take more effective precautions against earthquakes and potentially minimize loss of life and material damage. This study should be considered an important step in the methods used to predict future earthquakes and supports efforts to reduce earthquake risks.
EMACrawler: Web search engine database freshness optimization
Alanoğlu Z., Akcayol M.A.
Journal of Polytechnic, DOI: 10.2339/politeknik.1347054, 2024.
Abstract | pdf
In today's information and technology age, search engines have become an important part of our lives. Although search engines are the first place users turn to access information, old and unnecessary information is included in the content offered to users. In terms of providing up-to-date data, today's search engines often cannot offer the desired success. In order to keep the data presented by search engines up-to-date, the time of return visits must be accurately estimated. In this study, EMACrawler, based on the exponential moving average, is proposed to determine the revisit times, which are the most important feature affecting the performance of search engines. The proposed method is tested using precision, total coverage, and efficiency metrics. It has been seen that EMACrawler obtains the current data on web pages in an accurate and quick manner. As a result of the experimental studies, it has been seen that EMACrawler is more successful than other methods in obtaining up-to-date data and maintaining the freshness of the search engine database.
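The snippet below illustrates the underlying idea of an exponential-moving-average revisit estimate per URL, not the EMACrawler algorithm itself; the smoothing factor and the example intervals are assumed values.
```python
def ema_update(prev_estimate, observed_interval, alpha=0.3):
    """Exponential moving average of the time between observed page changes."""
    if prev_estimate is None:
        return observed_interval
    return alpha * observed_interval + (1 - alpha) * prev_estimate

class RevisitScheduler:
    """Keeps one EMA of the change interval per URL and schedules the next visit."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = {}          # url -> estimated change interval (hours)

    def record_change(self, url, observed_interval):
        self.estimate[url] = ema_update(self.estimate.get(url), observed_interval, self.alpha)

    def next_visit_in(self, url, default=24.0):
        return self.estimate.get(url, default)

# Example: a page whose content changed after 10, 6 and 8 hours.
sched = RevisitScheduler()
for hours in (10.0, 6.0, 8.0):
    sched.record_change("https://example.com/news", hours)
print(round(sched.next_visit_in("https://example.com/news"), 2))
```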
Disruptive effects of Earth's orbit environmental conditions on spacecraft electronic systems
Tarakcıoğlu O., Aydemir M.T., Akcayol M.A.
International Journal for Multidisciplinary Research, DOI: 10.36948/ijfmr.2023.v05i05.6285, 2023.
Abstract | pdf
The impact of disruptive factors originating from the space environment on spacecraft is highly significant in terms of design, operations, and safety. Factors such as solar wind, ionizing radiation, atomic corrosion, particle collisions, extreme vacuum, low gravity, and temperature changes affect spacecraft. Space agencies and research institutes have been working for years to model these effects. In the context of Fault-Tolerant Control (FTC) in spacecraft, numerous preventive measures are taken against disruptive factors of the space environment, aiming for spacecraft to perform their missions reliably for extended periods in space. This study presents the effects of the space environment in Earth's orbit and the precautions taken against these effects during the development and operation of spacecraft systems.
Analysis of cervical neoplasia with artificial intelligence
Zergeroğlu S., Sarı M.E., Taplamacıoğlu M.C., Alpay Ö., Akcayol M.A.
5th International Conference on Artificial Intelligence and Applied Mathematics in Engineering, Antalya, Türkiye, November 03-04, 2023.
Abstract | pdf
Aim: Cervical cancer is the second most common cancer causing death in women, after breast cancer. Nowadays, it is known that Human Papilloma Virus (HPV) must exist for the development of cervical cancer, and most cervical cancers (99.7%) are associated with HPV. HPV-16 and HPV-18 are positive in 70% of patients. In addition, cervical cancer is a type of genital cancer that can be prevented by early diagnosis through screening tests. In this study, it is planned to analyze, using Artificial Intelligence (AI) techniques, data that retrospectively investigated the cervical cancer-HPV relationship and the rates of dysplasia and cancer development.
Method: This study has been carried out between 2018 and June 2020 with 1147 people selected from a total of 2850 patients between the ages of 20 and 59 who applied to the Department of Obstetrics and Gynecology of Ankara Education and Research Hospital at the University of Health Sciences. The clinical information of the selected patients is questioned, all the cases are examined by the same obstetrician, and a pap test is performed. The pap test is reassessed by the same pathologist using the Bethesda (2001, modified 2014) system. In addition, the results of the HrHPV DNA (HPV types 16, 18, 31, 33, 35, 39, 4, 51, 52, 56, 58, 59, 66, 68) tests applied to the patients in the study are reviewed. The pap test is interpreted with AI techniques using Low Grade SIL (LGSIL), High Grade SIL (HGSIL), cancer findings, HPV DNA analysis results, and other data contained in the file information.
Results: Of the 1147 patients, 147 have LGSIL, 165 have HGSIL, and 97 have cervical cancer. The highest incidence of LGSIL, HGSIL, and cancer is observed in the 30-39 age group, while the rate of these diseases in the 50-59 age group is lower than in the other groups. All patients with SIL and cancer are smokers. 231 of the 1147 patients tested positive for HPV DNA. HrHPV DNA positivity is most frequent in the 30-39 age group, with 90 patients. Of the 97 patients with cervical cancer, 71 are reported as HrHPV DNA positive. In addition, the obtained results showed that the accuracy value is 95%, the precision value is 100%, the recall value is 92%, and the F1-score value is 96%.
Deep learning based classification for hoverflies (Diptera: Syrphidae)
Utku A., Ayaz Z., Çiftçi D., Akcayol M.A.
Journal of the Entomological Research Society, DOI: 10.51963/jers.v25i3.2445, 2023.
Abstract | pdf
Syrphidae is essential in pollinating many flowering plants and cereals and is a family with high species diversity in the order Diptera. These family species are also used in biodiversity and conservation studies. This study proposes an image-based CNN model for easy, fast, and accurate identification of Syrphidae species. Seven hundred twenty-seven hoverfly images were used to train and test the developed deep-learning model. Four hundred seventy-nine of these images were allocated to the training set and two hundred forty-eight to the test dataset. There are a total of 15 species in the dataset. With the CNN-based deep learning model developed in this study, accuracy 0.96, precision 0.97, recall 0.96, and f-measure 0.96 values were obtained for the dataset. The experimental results showed that the proposed CNN-based deep learning model had a high success rate in distinguishing the Syrphidae species.
Artificial intelligence approaches in social network analysis
Editors: Parlar T., Esen F.S.
Chapter: Data collection, indexing and content originality in social networking applications
Alanoğlu Z., Akcayol M.A., pp.75-90.
Nobel Academic Publishing, ISBN: 978-625-427-965-2, March 2023.
Improving QoS in real-time mobile multimedia streaming with SCTP-PQ
Huseynli A., Şimşek M., Akcayol M.A.
Acta Polytechnica, DOI: 10.14311/AP.2023.63.0347, 2023.
Abstract | pdf
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) dominate the Internet infrastructure. While TCP provides strict sequencing and reliable delivery, UDP provides fast transmission without reliability. Stream Control Transmission Protocol (SCTP) supports multihoming and multi-streaming applications and has a congestion mechanism like TCP. Media streaming is composed of different types of frames with different levels of importance; for example, I frames carry more information than B frames in Moving Picture Experts Group (MPEG) streams. Usually, MPEG frames are processed using the First-In-First-Out (FIFO) algorithm. In this paper, a four-level priority queue integrated protocol named SCTP-PQ has been developed to reduce jitter and delay in real-time multimedia streaming for mobile devices. The developed protocol has been tested and compared with SCTP extensively. The results have shown that SCTP-PQ is more successful than the standard SCTP in terms of jitter and delay.
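As a sketch of the queueing idea only (not the SCTP protocol integration), the snippet below implements a four-level priority queue that dequeues I frames before P and B frames while keeping FIFO order within each level; the mapping of frame types to levels is an assumption, not the SCTP-PQ specification.
```python
import heapq
import itertools

# Assumed priority mapping: lower number is sent first.
PRIORITY = {"I": 0, "P": 1, "B": 2, "other": 3}

class FourLevelQueue:
    """Priority queue that dequeues I frames before P, P before B, B before other,
    preserving FIFO order within each level."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()    # tie-breaker keeps per-level FIFO order

    def push(self, frame_type, payload):
        heapq.heappush(self._heap, (PRIORITY.get(frame_type, 3), next(self._counter), payload))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = FourLevelQueue()
for ftype, data in [("B", "b1"), ("I", "i1"), ("P", "p1"), ("B", "b2"), ("I", "i2")]:
    q.push(ftype, data)
print([q.pop() for _ in range(5)])   # i1, i2, p1, b1, b2
```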
Continuous knowledge graph refinement with confidence propagation
Huseynli A., Akcayol M.A.
IEEE Access, DOI: 10.1109/ACCESS.2023.3283925, 2023.
Abstract | pdf
Although Knowledge Graphs (KGs) are widely used, they suffer from hosting false information. In the literature, many studies have been carried out to eliminate this deficiency. These studies correct triples, relations, relation types, and literal values or enrich the KG by generating new triples and relations. The proposed methods can be grouped as closed-world approaches that take into account the KG itself or open-world approaches using external resources. The recent studies also considered the confidence of triples in the refinement process. The confidence values calculated in these studies affect either the triple itself or the ground rule for rule-based models. In this study, a propagation approach based on the confidence of triples has been proposed for the refinement process. This method ensures that the effect of confidence spreads over the KG without being limited to a single triple. This makes the KG continuously more stable by strengthening strong relationships and eliminating weak ones. Another limitation of the existing studies is that they handle refinement as a one-time operation and do not give due importance to process performance. However, real-world KGs are live, dynamic, and constantly evolving systems. Therefore, the proposed approach should support continuous refinement. To measure this, experiments were carried out with varying data sizes and rates of false triples. The experiments have been performed using the FB15K, NELL, WN18, and YAGO3-10 datasets, which are commonly used in refinement studies. Despite the increase in data size and false information rate, an average accuracy of 90% and an average precision of 98% have been achieved across all datasets.
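A deliberately simplified sketch of the general idea of spreading triple confidence through shared entities is shown below; it is not the paper's propagation algorithm, and the toy triples, damping factor, and update rule are assumptions.
```python
from collections import defaultdict

# Toy knowledge graph: (head, relation, tail) -> initial confidence of the triple.
confidence = {
    ("ankara", "capital_of", "turkey"): 0.9,
    ("ankara", "located_in", "turkey"): 0.8,
    ("ankara", "capital_of", "france"): 0.3,   # weakly supported, likely false
}

def propagate(confidence, damping=0.5, iterations=3):
    """Pull each triple's confidence toward the average confidence of triples that
    share one of its entities, so well-supported neighbourhoods reinforce each other
    while isolated, weakly supported triples stay comparatively low."""
    conf = dict(confidence)
    for _ in range(iterations):
        by_entity = defaultdict(list)
        for (h, _, t), c in conf.items():
            by_entity[h].append(c)
            by_entity[t].append(c)
        conf = {
            (h, r, t): (1 - damping) * c
            + damping * (sum(by_entity[h] + by_entity[t]) / len(by_entity[h] + by_entity[t]))
            for (h, r, t), c in conf.items()
        }
    return conf

for triple, c in propagate(confidence).items():
    print(triple, round(c, 3))
```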
Examining knowledge extraction processes from heterogeneous data sources
Sarıkoz S.K., Akcayol M.A.
Journal of Brilliant Engineering, DOI: 10.36937/ben.2023.4798, 2023.
Abstract | pdf
In the last 20 years, the transfer of information over the web through e-mail, instant messaging, documents, blogs, news, and other text communication has caused a dramatic increase in the amount of data in digital environments, which has increased the importance of studies in the field of knowledge extraction from unstructured data. Since the 2000s, one of the primary goals of researchers in the field of artificial intelligence has been to extract knowledge from heterogeneous data sources on the World Wide Web, including real-life entities and the semantic relationships between entities, and to represent it in machine-readable format. Advances in natural language processing and information extraction have increased the importance of large-scale knowledge bases in complex applications, enabling scalable information extraction from semi-structured and unstructured heterogeneous data sources on the Web and the detection of entities and relationships. This has enabled the automatic creation of prominent knowledge bases such as DBpedia, YAGO, NELL, Freebase, Probase, Google Knowledge Vault, and IBM Watson, which contain millions of semantic relationships between hundreds of thousands of entities, and the representation of the created information in machine-readable format. Within the scope of this article, web-scale (end-to-end) knowledge extraction from heterogeneous data sources is examined, and methods, challenges, and opportunities are presented.
Forecasting the spread of COVID-19 using deep learning and big data analytics methods
Kiganda C., Akcayol M.A.
Springer Nature Computer Science, DOI: 10.1007/s42979-023-01801-5, 2023.
Abstract | pdf
To contain the spread of the COVID-19 pandemic, there is a need for cutting-edge approaches that make use of existing technology capabilities. Forecasting its spread in a single or multiple countries ahead of time is a common strategy in most research. There is, however, a need for all-inclusive studies that cover all regions of the African continent. This study closes this gap by conducting a wide-ranging investigation and analysis to forecast COVID-19 cases and identify the most critical countries in terms of the COVID-19 pandemic in all five major African regions. The proposed approach leveraged both statistical and deep learning models that included the autoregressive integrated moving average (ARIMA) model with a seasonal perspective, the long short-term memory (LSTM), and Prophet models. In this approach, the forecasting problem was considered as a univariate time series problem using confirmed cumulative COVID-19 cases. The model performance was evaluated using seven performance metrics that included the mean-squared error, root mean-square error, mean absolute percentage error, symmetric mean absolute percentage error, peak signal-to-noise ratio, normalized root mean-square error, and the R2 score. The best-performing model was selected and used to make future predictions for the next 61 days. In this study, the long short-term memory model performed the best. Mali, Angola, Egypt, Somalia, and Gabon from the Western, Southern, Northern, Eastern, and Central African regions, with an expected increase of 22.77%, 18.97%, 11.83%, 10.72%, and 2.81%, respectively, were the most vulnerable countries with the highest expected increase in the number of cumulative positive cases.
Deep convolutional neural network-the evaluation of cervical vertebrae maturation
Akay G., Akcayol M.A., Özdem K., Güngör K.
Oral Radiology, DOI: 10.1007/s11282-023-00678-7, 2023.
Abstract | pdf
Objectives: This study aimed to automatically determine the cervical vertebral maturation (CVM) processes on lateral cephalometric radiograph images using a proposed deep learning-based convolutional neural network (CNN) model and to test the success rate of this CNN model in detecting CVM stages using precision, recall, and F1-score.
Methods: A total of 588 digital lateral cephalometric radiographs of patients with a chronological age between 8 and 22 years were included in this study. CVM evaluation was carried out by two dentomaxillofacial radiologists. CVM stages in the images were divided into 6 subgroups according to the growth process. A convolutional neural network (CNN) model was developed in this study. Experimental studies for the developed model were carried out in the Jupyter Notebook environment using the Python programming language and the Keras and TensorFlow libraries.
Results: As a result of the training, which lasted 40 epochs, 58% training and 57% test accuracy were obtained. The model obtained results on the test data that were very close to those on the training data. On the other hand, it was determined that the model showed the highest success in terms of precision and F1-score in CVM Stage 1 and the highest success in recall in CVM Stage 2.
Conclusion: The experimental results have shown that the developed model achieved moderate success and it reached a classification accuracy of 58.66% in CVM stage classification.
Effective seed URL selection and scope extension algorithm for web crawler
Alanoğlu Z., Akcayol M.A.
International Journal of Advances in Engineering and Pure Sciences, Vol.35(1), pp.27-38, 2023.
Abstract | pdf
The web is a huge data source which is rapidly growing and which keeps all kinds of data. Users use search engines to get the data they want from this data source. Search engines obtain these data through web crawlers. Web crawlers retrieve, parse, and index data on all pages they reach by tracking uniform resource locators (URL) on web pages. The most important issues in the web crawling process are which URLs to start from, and the scope of the crawl. In this study, seed URL selection and scope expansion methods of a general web crawler were presented. In the selection of seed URLs, three different seed URL sets were created based on the daily hours spent by the visitors in 102 different countries, the number of daily page views per visitor, the percentage of traffic from the search, and the total number of affiliate sites, and their performance was analyzed thoroughly. Furthermore, a new search algorithm based on link score was proposed to expand the scope quickly, searches were made, compared, and detailed analyzes were performed using seed URL sets.
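A toy sketch of a best-first crawl frontier ordered by a link score is shown below; the scoring weights are invented for illustration and do not reproduce the scoring formula proposed in the paper.
```python
import heapq
from urllib.parse import urlparse

def link_score(url, depth, inlinks):
    """Toy scoring: prefer frequently referenced, shallow, short URLs.
    The weights are illustrative, not the paper's link-score formula."""
    path = urlparse(url).path.strip("/")
    path_len = len(path.split("/")) if path else 0
    return 2.0 * inlinks - 1.0 * depth - 0.5 * path_len

class Frontier:
    """Best-first crawl frontier: pop the highest-scoring discovered URL next."""
    def __init__(self):
        self._heap, self._seen = [], set()

    def add(self, url, depth, inlinks=1):
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (-link_score(url, depth, inlinks), url, depth))

    def pop(self):
        _, url, depth = heapq.heappop(self._heap)
        return url, depth

frontier = Frontier()
for seed in ("https://example.org", "https://example.com/blog/archive/2023"):
    frontier.add(seed, depth=0)
print(frontier.pop())   # the shallower, higher-scoring seed is crawled first
```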
Deep learning based model for predicting the contribution of SMEs to the economy
Utku A., Sevinç A., Akcayol M.A.
Fırat University Journal of Engineering Science, Vol.35(2), pp.865-874, 2023.
Abstract | pdf
Small and Medium-sized Enterprises (SMEs) are private sector enterprises whose capital, workforce and assets are below the thresholds determined according to national regulations. SMEs play an important role in the economy of most countries in the world, especially in developing countries. SMEs, which make up approximately 90% of enterprises worldwide, provide more than 50% of employment. Estimating the contribution of SMEs to the economy at the country level is very important in terms of planning and investment. In this study, a deep learning-based model was developed to predict the contribution of SMEs to the economy. The developed LSTM-based deep learning model was compared with RF, SVM, CNN, GRU, MLP and RNN. Experimental results showed that the developed model had a better prediction performance than other models compared with 2.169 MSE, 1.473 RMSE, 1.175 MAE, and 0.959 R2 values.
Seed URL selection and performance analysis in Web crawlers: A comprehensive review
Alanoğlu Z., Akcayol M.A.
Duzce University Journal of Science and Technology, Vol.11(3), pp.1399-1423, 2023.
Abstract | pdf
The web is a data repository containing various types of information posted on the internet. Structures that contain this information and are connected to each other by hyperlinks are called web pages. Web crawlers are programs that browse the web and download pages using the hyperlinks on web pages. The performance of a search engine also depends on the performance of its web crawler. Performance metrics, scope, and seed URL selection methods of web crawlers are the most important factors affecting performance. In this study, a comprehensive review and analysis of the performance, scope, and seed URL usage methods of web crawlers, classified into six categories as general, focused, incremental, hidden, mobile, and distributed, was carried out. In addition, the performance criteria of each crawler in various studies were compared.
Deep learning based video event classification
Gençaslan S., Utku A., Akcayol M.A.
Journal of Polytechnic, Vol.26(3), pp.1155-1165, 2023.
Abstract | pdf
In recent years, due to the growth of digital libraries and video databases, automatic detection of activities from videos and obtaining patterns from large datasets have come to the fore. Object detection from image is used as a tool for various applications and is the basis of video classification. Objects in videos are more difficult to identify than in a single image, as the information in videos has a time continuity constraint. Following the developments in the field of computer vision, the use of open source software packages for machine learning and deep learning and the developments in hardware technologies have enabled the development of new approaches. In this study, a deep learning-based classification model has been developed for the classification of sports branches on video. In the model developed using CNN, transfer learning has been applied with VGG-19. Experimental studies on 32827 frames using CNN and VGG-19 models showed that VGG-19 has a more successful classification performance than CNN with an accuracy rate of 83%.
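A minimal transfer-learning sketch along these lines is shown below: a frozen ImageNet-pretrained VGG-19 base with a small classification head for per-frame sport prediction; the number of classes and the head layers are assumptions, not the configuration used in the paper.
```python
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

NUM_CLASSES = 5   # number of sport classes is assumed for illustration

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # per-frame sport prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would use frames extracted from the videos, e.g.
# model.fit(train_frames, train_labels, validation_data=(val_frames, val_labels), epochs=10)
```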


SELECTED PUBLICATIONS (All publications)
Continuous knowledge graph refinement with confidence propagation
Huseynli A., Akcayol M.A.
IEEE Access, DOI: 10.1109/ACCESS.2023.3283925, 2023.
Abstract | pdf
Although Knowledge Graphs (KGs) are widely used, they suffer from hosting false information. In the literature, many studies have been carried out to eliminate this deficiency. These studies correct triples, relations, relation types, and literal values or enrich the KG by generating new triples and relations. The proposed methods can be grouped as closed-world approaches that take into account the KG itself or open-world approaches using external resources. The recent studies also considered the confidence of triples in the refinement process. The confidence values calculated in these studies affect either the triple itself or the ground rule for rule-based models. In this study, a propagation approach based on the confidence of triples has been proposed for the refinement process. This method ensures that the effect of confidence spreads over the KG without being limited to a single triple. This makes the KG continuously more stable by strengthening strong relationships and eliminating weak ones. Another limitation of the existing studies is that they handle refinement as a one-time operation and do not give due importance to process performance. However, real-world KGs are live, dynamic, and constantly evolving systems. Therefore, the proposed approach should support continuous refinement. To measure this, experiments were carried out with varying data sizes and rates of false triples. The experiments have been performed using the FB15K, NELL, WN18, and YAGO3-10 datasets, which are commonly used in refinement studies. Despite the increase in data size and false information rate, an average accuracy of 90% and an average precision of 98% have been achieved across all datasets.
A new topic modeling based approach for aspect extraction in aspect based sentiment analysis: SS-LDA
Özyurt B., Akcayol M.A.
Expert Systems with Applications, Vol.168, 114231, April 2021.
Abstract | pdf
With the widespread use of social networks, blogs, forums and e-commerce web sites, the volume of user generated textual data is growing exponentially. User opinions in product reviews or in other textual data are crucial for manufacturers, retailers and providers of the products and services. Therefore, sentiment analysis and opinion mining have become important research areas. In user reviews mining, topic modeling based approaches and Latent Dirichlet Allocation (LDA) are significant methods that are used in extracting product aspects in aspect based sentiment analysis. However, LDA cannot be directly applied on user reviews and on other short texts because of data sparsity problem and lack of co-occurrence patterns. Several studies have been published for the adaptation of LDA for short texts. In this study, a novel method for aspect based sentiment analysis, Sentence Segment LDA (SS-LDA) is proposed. SS-LDA is a novel adaptation of LDA algorithm for product aspect extraction. The experimental results reveal that SS-LDA is quite competitive in extracting products aspects.
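As a rough illustration of the granularity aspect-extraction methods such as SS-LDA operate on, the sketch below runs standard LDA (via gensim) over sentence segments of short reviews rather than whole documents; it is not the SS-LDA algorithm itself, and the toy reviews and topic count are assumptions.
```python
import re
from gensim import corpora, models

reviews = [
    "The battery life is great but the screen is too dim.",
    "Screen quality is excellent, although the battery drains fast.",
    "Great camera, decent battery, the screen could be brighter.",
]

# Treat each comma/period-separated sentence segment as one short document.
segments = [seg.strip().lower() for r in reviews for seg in re.split(r"[.,;]", r) if seg.strip()]
texts = [[w for w in seg.split() if len(w) > 2] for seg in segments]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary, passes=20, random_state=1)

# Each topic's top words hint at a product aspect (battery, screen, camera, ...).
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```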
Thermal infrared colorization using deep learning
Çiftçi O., Akcayol M.A.
IEEE International Conference on Electrical and Electronics Engineering, Antalya, Turkey, April 9-11, 2021.
Abstract | pdf
Day by day, the usage of infrared cameras has been increasing in the world. With the increasing use of thermal infrared cameras and images, especially in the military, security, and medicine, the need for coloring thermal infrared images into the visible spectrum has arisen. In this study, a deep learning based model has been developed to generate visible spectrum images (RGB - Red Green Blue) from thermal infrared (TIR) images. In the proposed model, an encoder-decoder architecture with skip connections has been used to generate RGB images. The KAIST-MS (Korea Advanced Institute of Science and Technology-Multispectral) dataset was used to train and test the developed model. The experimental results were evaluated extensively using Least Absolute Deviations (L1), Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM).
A weighted multi-attribute-based recommender system using extended user behavior analysis
Akcayol M.A., Utku A., Aydoğan E., Mutlu B.
Electronic Commerce Research and Applications, Vol.28, pp.86-93, 2018.
BibTeX | Abstract | pdf
A new weighted multi-attribute based recommender system (WMARS) has been developed using extended user behavior analysis. WMARS obtains data from the number of clicked items in the recommendation list, the sequence of the clicked items in the recommendation list, the duration of tracking, the number of times the same item is tracked, likes/dislikes, association rules of clicked items, and remarks for items. WMARS has been applied to a movie web site. The experimental results have been obtained from a total of 567 heterogeneous users, including employers in different sectors, different demographic groups, and undergraduate and graduate students. Using different weighted sets of the attributes' parameters, WMARS has been tested and compared extensively with collaborative filtering. The experimental results show that WMARS is more successful than collaborative filtering for the data set that was used.
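A minimal sketch of weighted multi-attribute scoring in this spirit is shown below; the attribute names, weights, and example values are placeholders, not the attribute set or weight sets evaluated in the paper.
```python
# Illustrative behaviour attributes per user-item pair, scaled to [0, 1] beforehand.
WEIGHTS = {
    "clicked": 0.20, "click_rank": 0.15, "view_duration": 0.25,
    "repeat_views": 0.15, "liked": 0.15, "remark_sentiment": 0.10,
}

def preference_score(behaviour):
    """Weighted sum of normalized implicit/explicit feedback signals."""
    return sum(WEIGHTS[k] * behaviour.get(k, 0.0) for k in WEIGHTS)

user_item_behaviour = {
    "movie_42": {"clicked": 1.0, "click_rank": 0.8, "view_duration": 0.6, "liked": 1.0},
    "movie_7":  {"clicked": 1.0, "click_rank": 0.2, "view_duration": 0.1},
}
ranked = sorted(user_item_behaviour,
                key=lambda m: preference_score(user_item_behaviour[m]), reverse=True)
print(ranked)   # items ordered by inferred preference
```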
Calculation of electron energy distribution functions from electron swarm parameters using artificial neural network in SF6 and Argon
Tezcan S.S., Akcayol M.A., Ozerdem O.C., Dincer, M.S.
IEEE Transactions on Plasma Science, Vol.38(9), pp.2332-2339, 2010.
Abstract | pdf
This paper proposes an artificial neural network (ANN) to obtain the electron energy distribution functions (EEDFs) in SF6 and argon from the following: 1) mean energies; 2) the drift velocities; and 3) other related swarm data. In order to obtain the required swarm data, the electron swarm behavior in SF6 and argon is analyzed over the range of the density-reduced electric field strength E/N from 50 to 800 Td from a Boltzmann equation analysis based on the finite difference method under a steady-state Townsend condition. A comparison between the EEDFs calculated by the Boltzmann equation and by ANN for various values of E/N suggests that the proposed ANN yields good agreement of EEDFs with those of the Boltzmann equation solution results.
An educational tool for fuzzy logic controlled BDCM
Akcayol M.A., Çetin A., Elmas Ç.
IEEE Transactions on Education, Vol.45(1), pp.33-42, 2002.
BibTeX | Abstract | pdf
Fuzzy logic controllers (FLC) have gained popularity in the past few decades with successful implementation in many areas, including electrical machines’ drive control. Many colleges are now offering fuzzy logic courses due to successful applications of FLCs in nonlinear systems. However, teaching students a fuzzy logic controlled drive system in a laboratory, or training technical staff, is time consuming and may be an expensive task. This paper presents an educational tool for fuzzy logic controlled brushless direct current motor (BDCM), which is a part of a virtual electrical machinery laboratory project. The tool has flexible structure and graphical interface. Motor and controller parameters of the drive system can be changed easily under different operating conditions.
Application of adaptive neuro-fuzzy controller for SRM
Akcayol M.A.
Advances in Engineering Software, Vol.35(3-4), pp.129-137, 2004.
Abstract | pdf
In this paper, an adaptive neuro-fuzzy inference system (ANFIS) is presented for speed control of a switched reluctance motor (SRM). SRMs have become an attractive alternative in variable speed drives due to their advantages such as structural simplicity, high reliability, high efficiency, and low cost. However, SRM performance often degrades with machine parameter variations. The SRM converter is difficult to control due to its nonlinearities and parameter variations. In this study, to tackle these problems, an adaptive neuro-fuzzy controller is proposed. Heuristic rules are derived with the membership functions, and then the parameters of the membership functions are tuned by ANFIS. The algorithm has been implemented on a digital signal processor (TMS320F240), allowing great flexibility for various real-time applications. Experimental results demonstrate the effectiveness of the proposed ANFIS controller under different operating conditions of the SRM.
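For intuition only, the sketch below evaluates three hand-written fuzzy rules over the speed error with triangular membership functions and weighted-average defuzzification; unlike ANFIS, no neural tuning of the membership parameters is performed, and all ranges and consequents are assumed values.
```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_duty_change(speed_error):
    """Map speed error (rpm) to a duty-cycle change via three hand-written rules."""
    neg = tri(speed_error, -200, -100, 0)      # error negative -> decrease duty
    zero = tri(speed_error, -100, 0, 100)      # error near zero -> hold
    pos = tri(speed_error, 0, 100, 200)        # error positive -> increase duty
    # Rule consequents (crisp singletons) and weighted-average defuzzification.
    outputs = {-0.05: neg, 0.0: zero, 0.05: pos}
    total = sum(outputs.values())
    return sum(v * w for v, w in outputs.items()) / total if total else 0.0

for err in (-150, -20, 0, 60, 180):
    print(err, round(fuzzy_duty_change(err), 4))
```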
NEFCLASS-based neuro fuzzy controller for SRM drive
Akcayol M.A., Elmas Ç.
Engineering Applications of Artificial Intelligence, Vol.18(5), pp.595-602, 2005.
BibTeX | Abstract | pdf
Switched reluctance motors (SRM) are increasingly employed in industrial applications where variable speed is required because of their simple construction, ease of maintenance, low cost, and high efficiency. However, SRM performance often degrades with machine parameter variations. The SRM converter is difficult to control due to its nonlinearities and parameter uncertainties. In this paper, to overcome this problem, a neuro-fuzzy controller (NFC) is proposed. Heuristic rules are derived with the membership functions of the fuzzy variables tuned by a neural network (NN). The algorithm is implemented on a digital signal processor (TMS320F240), allowing great flexibility for various real-time applications. Experimental results demonstrate the effectiveness of the NFC under various working conditions of the SRM.