Mohammad Ansari Shiri; Najme Mansouri
Abstract
The topic of feature selection has become one of the most active subjects in machine learning in recent years. Evolutionary algorithms have shown promising results alongside standard feature selection algorithms. For K-Nearest Neighbor (KNN) classification, this paper presents a hybrid filter-wrapper algorithm based on Equilibrium Optimization (EO). The filter model is based on a composite measure of the relevance and redundancy of the selected feature subset. The wrapper model consists of a binary Equilibrium Optimization (BEO). The hybrid algorithm is called filter-based BEO (FBBEO). By combining filters and wrappers, FBBEO achieves a unique combination of efficiency and accuracy. In the experiments, 11 standard datasets from the UCI repository were used. The results indicate that the proposed method improves classification accuracy and selects optimal feature subsets with the fewest features.
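Wrapper fitness functions in hybrid approaches like this typically weigh classification error against subset size, and binary variants of continuous optimizers map positions to bits with a transfer function. A minimal sketch of both ingredients (the weight `alpha` and the sigmoid transfer function below are common illustrative choices, not the paper's exact formulation):

```python
import math

def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Weighted wrapper fitness: classification error vs. subset size.
    Lower is better. alpha is an assumed trade-off weight."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

def binarize(position):
    """Map a continuous EO position to a bit via a sigmoid transfer
    function (a common binarization choice, assumed here)."""
    prob = 1.0 / (1.0 + math.exp(-position))
    return 1 if prob > 0.5 else 0

# A subset with slightly worse accuracy but far fewer features can
# still win under this fitness (lower is better).
full = fitness(error_rate=0.100, n_selected=20, n_total=20)
small = fitness(error_rate=0.105, n_selected=4, n_total=20)
# small < full, so the 4-feature subset is preferred
```

The large `alpha` reflects the usual design choice that accuracy dominates, with subset size acting as a tie-breaker.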
Najme Mansouri; Amir Mohammad Sharafaddini
Abstract
The selection of features is a crucial step in the analysis of high-dimensional data in machine learning and data mining. The Gannet Optimization Algorithm (GOA) is a recently proposed metaheuristic algorithm that has not yet been investigated in terms of its capacity to solve feature selection problems. A new wrapper feature selection approach based on GOA is proposed to extract the best features. GOA is a robust metaheuristic algorithm that can deal with higher dimensions. A fitness function is used that accounts for the entropy of the sensitivity and specificity, as well as the accuracy of the classifier and the fraction of features selected. Additionally, the proposed algorithm is compared with four recent algorithms in this paper. The experimental results show that the proposed algorithm obtains fewer features with higher classification accuracy.
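The fitness described above draws on sensitivity, specificity, and an entropy term. A minimal sketch of those ingredients from confusion-matrix counts (the counts and any weighting are illustrative assumptions, not the paper's formulation):

```python
import math

def sensitivity(tp, fn):
    # true-positive rate: fraction of positives correctly identified
    return tp / (tp + fn)

def specificity(tn, fp):
    # true-negative rate: fraction of negatives correctly identified
    return tn / (tn + fp)

def binary_entropy(p):
    # Shannon entropy of a Bernoulli(p) outcome, in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Confusion counts from a hypothetical classifier run.
tp, fn, tn, fp = 45, 5, 40, 10
sens = sensitivity(tp, fn)   # 0.9
spec = specificity(tn, fp)   # 0.8
```

Entropy terms like this reward balanced performance across classes rather than raw accuracy alone, which matters on imbalanced datasets.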
Reyhane Ghafari; Najme Mansouri
Abstract
Cloud computing is a high-performance computing environment that can remotely provide services to customers using a pay-per-use model. The principal challenge in cloud computing is task scheduling, in which tasks must be effectively allocated to resources. The mapping of cloud resources to customer requests (tasks) is a well-known Nondeterministic Polynomial-time (NP)-Complete problem. Although task scheduling is a multi-objective optimization problem, most task scheduling algorithms cannot provide an effective trade-off between makespan, resource utilization, and energy consumption. Therefore, this study introduces a Priority-based task scheduling algorithm using the Harris Hawks Optimizer (HHO), called PHHO. The proposed algorithm first prioritizes tasks using a hierarchical process based on length and memory. Then, the HHO algorithm is used to optimally assign tasks to resources. The PHHO algorithm aims to decrease makespan and energy consumption while increasing resource utilization and throughput. To evaluate its effectiveness, the PHHO algorithm is compared with other well-known meta-heuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Whale Optimization Algorithm (WOA), the Salp Swarm Algorithm (SSA), and Moth-Flame Optimization (MFO). The experimental results show the effectiveness of the PHHO algorithm compared to the other algorithms in terms of makespan, resource utilization, throughput, and energy consumption.
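The two-stage structure, prioritizing tasks and then mapping them to resources, can be illustrated with a deliberately simplified greedy sketch (PHHO itself uses HHO for the assignment step; the longest-task-first priority and earliest-finish rule below are stand-in assumptions):

```python
def schedule(tasks, speeds):
    """tasks: list of (length, memory); speeds: VM processing speeds.
    Stage 1: prioritize by length (then memory), longest first.
    Stage 2: greedily assign each task to the VM that finishes it
    earliest. Returns (makespan, task -> VM mapping)."""
    order = sorted(range(len(tasks)),
                   key=lambda i: (tasks[i][0], tasks[i][1]),
                   reverse=True)
    finish = [0.0] * len(speeds)   # current finish time per VM
    mapping = {}
    for i in order:
        length, _mem = tasks[i]
        # pick the VM with the earliest completion time for this task
        vm = min(range(len(speeds)),
                 key=lambda v: finish[v] + length / speeds[v])
        finish[vm] += length / speeds[vm]
        mapping[i] = vm
    return max(finish), mapping

tasks = [(4, 1), (8, 2), (6, 1), (2, 1)]   # (length, memory)
makespan, mapping = schedule(tasks, speeds=[1.0, 2.0])
# makespan → 7.0 for this instance
```

A metaheuristic like HHO searches over such mappings globally instead of committing greedily, which is what lets it trade off makespan against energy and utilization at once.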
Aboozar Zandvakili; Najme Mansouri; Mohammad Masoud Javidi
Abstract
Task scheduling in cloud computing plays an essential role for service providers in enhancing their quality of service. The Grasshopper Optimization Algorithm (GOA) is an evolutionary computation technique developed by emulating the swarming behavior of grasshoppers searching for food. GOA is easy to implement, but it does not make full use of every iteration and risks falling into a local optimum. This paper proposes a suitable approach for adjusting the comfort zone parameter based on fuzzy signatures, called signature GOA (SGOA), to balance exploration and exploitation. Then, we propose task scheduling based on SGOA that considers different objectives such as completion time, delay, and load balancing across the machines. Finally, different algorithms such as Particle Swarm Optimization (PSO), Simulated Annealing (SA), Tabu Search (TS), and a multi-objective genetic algorithm are used for comparison. The results show that, among all the algorithms, SGOA succeeds in far fewer iterations.
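For context, the comfort zone coefficient that SGOA adjusts is, in standard GOA, a coefficient decreased linearly over the iterations to shift the swarm from exploration to exploitation. A sketch of that baseline schedule (the fuzzy-signature adjustment SGOA substitutes for it is not reproduced here; the bounds are the commonly used defaults):

```python
def comfort_zone(iteration, max_iter, c_max=1.0, c_min=0.00001):
    """Standard GOA comfort-zone coefficient: decreases linearly from
    c_max to c_min as iterations progress. Large c widens the search
    (exploration); small c shrinks it around good solutions
    (exploitation). SGOA replaces this fixed linear schedule with a
    fuzzy-signature-based adjustment."""
    return c_max - iteration * (c_max - c_min) / max_iter

early = comfort_zone(0, 100)     # → 1.0 (full exploration)
late = comfort_zone(100, 100)    # → 0.00001 (full exploitation)
```

Because the linear schedule is blind to search progress, an adaptive rule such as SGOA's can keep exploring when the swarm stagnates and exploit sooner when it converges, which is how iterations stop being wasted.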
Najme Mansouri; Mohammad Masoud Javidi
Abstract
A data-intensive computing platform, encountered in some grid and cloud computing applications, includes numerous tasks that process, transfer, or analyze large data files. In such environments, many geographically distributed users need these huge data sets. Data management is one of the main challenges of a distributed computing environment, since data plays a pivotal role. Dynamic data replication techniques have been widely applied to improve data access and availability. In order to introduce an appropriate data replication algorithm, four important problems must be solved: 1) Which file should be replicated? 2) How many new replicas should be created? 3) Where should the new replicas be placed? 4) Which replica should be deleted to make room for new copies? In this paper, we focus particularly on the replica replacement issue, which makes a significant difference in the efficiency of a replication algorithm. We survey replica replacement approaches (from 2004 to 2018) developed for both grid and cloud environments. The presented review examines the replica replacement problem from a technological perspective and differs significantly from previous reviews in its comprehensiveness and integrated discussion. We present the different parameters involved in the replacement process and highlight the key points of recent algorithms with a tabular representation of all those factors. We also report open issues and new challenges in the area.
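The fourth question, which replica to delete, is typically answered by scoring replicas on parameters of the kind this survey tabulates, such as access frequency, recency of access, and size. A minimal value-based eviction sketch (the weights and the specific scoring formula are illustrative assumptions, not drawn from any particular surveyed algorithm):

```python
def evict_candidate(replicas, w_freq=0.5, w_recency=0.3, w_size=0.2,
                    now=100.0):
    """replicas: dict name -> (access_count, last_access_time, size_gb).
    Score each replica and return the lowest-valued one as the eviction
    candidate. Higher access count and more recent use raise the value;
    larger size lowers it (bigger files free more space when evicted)."""
    def value(stats):
        count, last, size = stats
        recency = 1.0 / (1.0 + (now - last))   # newer accesses score higher
        return w_freq * count + w_recency * recency - w_size * size
    return min(replicas, key=lambda name: value(replicas[name]))

replicas = {
    "hot.dat":  (50, 99.0, 2.0),   # frequently and recently accessed
    "cold.dat": (2, 10.0, 5.0),    # rarely accessed, large
}
# → "cold.dat" is the eviction candidate
```

Most of the algorithms surveyed differ precisely in which parameters enter this value function and how they are weighted, which is why a tabular comparison of those factors is useful.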