Maedeh Mehravaran; Fazlollah Adibnia; Mohammad-Reza Pajoohan
Abstract
In the real world, organizations' requirements for high-performance resources and high-capacity storage devices encourage them to use resources in public clouds. While a private cloud provides security and low cost for workflow scheduling, public clouds offer greater scale but potentially expose data and computation to the risk of breaches and incur monetary costs. Task scheduling is therefore one of the most important problems in cloud computing. In this paper, a new security-aware scheduling method is proposed for workflow applications in a hybrid cloud. Recent works have considered the sensitivity of tasks; we, however, consider security requirements for data and security strength for resources. The proposed scheduling method is implemented with the Particle Swarm Optimization (PSO) algorithm. Our algorithm minimizes the security distance, that is, it maximizes the security similarity between data and resources, while satisfying time and budget constraints. Analysis of the experimental results shows that the proposed algorithm selects the resources with the highest security similarity while the user constraints are satisfied.
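The scheduling idea described in the abstract can be illustrated with a minimal discrete-PSO sketch: particles encode a task-to-resource assignment, and the fitness sums the |requirement − strength| security gaps, with a penalty when the schedule exceeds the budget. The representation, fitness function, penalty weight, and all parameter names below are assumptions for illustration, not the paper's actual formulation.

```python
import random

def pso_schedule(task_req, res_strength, res_cost, budget,
                 n_particles=20, iters=100, seed=0):
    """Illustrative PSO: assign each task a resource, minimizing the summed
    |security requirement - security strength| distance, penalizing schedules
    whose total cost exceeds the budget. Hypothetical sketch, not the paper's
    exact method."""
    rng = random.Random(seed)
    n_tasks, n_res = len(task_req), len(res_strength)

    def decode(pos):
        # Map continuous particle positions to discrete resource indices.
        return [min(n_res - 1, max(0, int(round(p)))) for p in pos]

    def fitness(pos):
        assign = decode(pos)
        dist = sum(abs(task_req[t] - res_strength[r]) for t, r in enumerate(assign))
        cost = sum(res_cost[r] for r in assign)
        penalty = 1000.0 * max(0.0, cost - budget)  # budget constraint as penalty
        return dist + penalty

    # Initialize swarm positions, velocities, and personal/global bests.
    pos = [[rng.uniform(0, n_res - 1) for _ in range(n_tasks)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(n_res - 1, max(0.0, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f
```

A real implementation would add a makespan term for the time constraint; here only the budget is enforced, to keep the sketch short.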
Najme Mansouri; Mohammad Masoud Javidi
Abstract
A data-intensive computing platform, encountered in some grid and cloud computing applications, includes numerous tasks that process, transfer, or analyze large data files. In such environments, many geographically distributed users need these huge data sets. Data management is one of the main challenges of distributed computing environments, since data plays a central role. Dynamic data replication techniques have been widely applied to improve data access and availability. To design an appropriate data replication algorithm, four important problems must be solved: (1) which file should be replicated; (2) how many suitable new replicas should be created; (3) where the new replicas should be placed; and (4) which replica should be deleted to make room for new copies. In this paper, we focus particularly on the replica replacement issue, which makes a significant difference in the efficiency of a replication algorithm. We survey replica replacement approaches (from 2004 to 2018) developed for both grid and cloud environments. The presented review examines the replica replacement problem from a technological perspective and differs significantly from previous reviews in its comprehensiveness and integrated discussion. We present the different parameters involved in the replacement process and summarize the key points of the recent algorithms in a tabular comparison of all those factors. We also report open issues and new challenges in the area.
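The replacement decision at the heart of such algorithms (problem 4 above) can be sketched as a value-based eviction policy: score each replica from parameters like access frequency, file size, and recency, then delete the lowest-valued replicas until the new copy fits. The value function, its weights, and all names below are hypothetical, chosen only to illustrate how surveyed algorithms combine such factors; they do not reproduce any specific approach from the review.

```python
def replica_value(freq, size, last_access, now, w_age=0.5):
    """Score a replica: frequently and recently accessed replicas are worth
    more; larger files free more space, so they score lower per byte.
    Hypothetical weighting for illustration only."""
    age = now - last_access
    return freq / (size * (1.0 + w_age * age))

def evict_for(new_size, capacity, used, replicas, now):
    """Delete lowest-value replicas until new_size fits on the storage node.

    replicas: dict name -> (access_freq, size, last_access_time)
    Returns the list of evicted replica names.
    """
    evicted = []
    # Rank replicas from least to most valuable.
    ranked = sorted(replicas,
                    key=lambda n: replica_value(replicas[n][0], replicas[n][1],
                                                replicas[n][2], now))
    for name in ranked:
        if used + new_size <= capacity:
            break  # enough free space for the incoming replica
        used -= replicas[name][1]
        evicted.append(name)
    return evicted
```

With a full 300-unit store and three equal-size replicas, the policy evicts only the replica that is both rarely and long-ago accessed, leaving the popular ones in place.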