Reyhane Ghafari; Najme Mansouri
Abstract
Cloud computing is a high-performance computing environment that can remotely provide services to customers using a pay-per-use model. The principal challenge in cloud computing is task scheduling, in which tasks must be effectively allocated to resources. The mapping of cloud resources to customer requests (tasks) is a well-known Nondeterministic Polynomial-time (NP)-Complete problem. Although task scheduling is a multi-objective optimization problem, most task scheduling algorithms cannot provide an effective trade-off between makespan, resource utilization, and energy consumption. Therefore, this study introduces a Priority-based task scheduling algorithm using the Harris Hawks Optimizer (HHO), called PHHO. The proposed algorithm first prioritizes tasks using a hierarchical process based on length and memory. Then, the HHO algorithm is used to assign tasks to resources optimally. The PHHO algorithm aims to decrease makespan and energy consumption while increasing resource utilization and throughput. To evaluate its effectiveness, the PHHO algorithm is compared with other well-known meta-heuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Whale Optimization Algorithm (WOA), the Salp Swarm Algorithm (SSA), and Moth-Flame Optimization (MFO). The experimental results show the effectiveness of the PHHO algorithm compared to the other algorithms in terms of makespan, resource utilization, throughput, and energy consumption.
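As a rough illustration of the objectives involved (not the authors' implementation; the task lengths, VM speeds, power figures, and weights below are invented), a task-to-VM assignment can be scored on makespan, utilization, and energy as follows:
\begin{verbatim}
lengths = [4000, 12000, 6000, 9000]      # task lengths (million instructions)
mips    = [1000, 2000]                   # processing speed (MIPS) of each VM
p_busy, p_idle = 200.0, 90.0             # assumed power draw (watts) per VM
assign  = [0, 1, 1, 0]                   # task i runs on VM assign[i]

vm_time = [0.0] * len(mips)
for task, vm in enumerate(assign):
    vm_time[vm] += lengths[task] / mips[vm]

makespan    = max(vm_time)
utilization = sum(vm_time) / (len(mips) * makespan)
energy      = sum(t * p_busy + (makespan - t) * p_idle for t in vm_time)

# one possible scalarization a scheduler could minimize (weights are arbitrary)
fitness = 0.5 * makespan + 0.3 * energy / 1000 + 0.2 * (1 - utilization)
print(makespan, round(utilization, 3), round(energy, 1), round(fitness, 3))
\end{verbatim}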
A.M.S. Ramasamy; R Ponraj
Abstract
In this paper, an alternative proof is provided for a theorem of R. L. Graham concerning Chebyshev polynomials. While studying the properties of a double star, R. L. Graham [2] proved a theorem concerning the Chebyshev polynomials of the first kind ${T_n (x)}$. The purpose of this paper is to provide an alternative proof for his theorem. Our method is based on the divisibility properties of the natural numbers. One may observe that the Chebyshev polynomials evaluated at the integers considered by R. L. Graham match the solutions of Pell's equation for a general square-free $D \in \mathbb{N}$.
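The connection can be checked numerically. Assuming the standard fact that, for the fundamental solution $(x_1, y_1)$ of $x^2 - Dy^2 = 1$, the $x$-component of the $n$-th solution equals $T_n(x_1)$ (this sketch is ours, not the paper's), a few lines of Python verify it for the sample value $D = 2$:
\begin{verbatim}
def chebyshev_T(n, x):
    """T_n(x) via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    t0, t1 = 1, x
    for _ in range(n):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

D, x1, y1 = 2, 3, 2            # fundamental solution of x^2 - 2y^2 = 1
x, y = x1, y1
for n in range(1, 6):
    assert x * x - D * y * y == 1
    assert chebyshev_T(n, x1) == x
    # next solution: (x + y*sqrt(D)) * (x1 + y1*sqrt(D))
    x, y = x * x1 + D * y * y1, x * y1 + y * x1
print("T_n(x1) matches the Pell x-values for n = 1..5")
\end{verbatim}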
Hashem Ezzati; Mahmood Amintoosi; Hashem Tabasi
Abstract
Graph matching is one of the most important problems in graph theory and combinatorial optimization, with many applications in various domains. Although meta-heuristic algorithms have performed well on many NP-Hard and NP-Complete problems, no superior solutions have been reported for the graph matching problem by this sort of algorithm. The reason for this inefficiency has not been investigated yet. In this paper it is shown that Simulated Annealing (SA), as an instance of a meta-heuristic method, is unlikely to come even close to the optimal solution for this problem. Mathematical and experimental results show that the chance of reaching even a partial solution is very low, even for a small number of true matches. In addition to the theoretical discussion, the experimental results also verify our idea; for example, in two sample graphs with $10000$ vertices, the probability of reaching a solution with at least three correct matches is about $0.02$ with simulated annealing.
Amin Ghodousian; Babak Sepehri Rad; Oveys Qodousian
Abstract
In this paper, a linear programming problem is investigated in which the feasible region is formed as the intersection of fuzzy relational equalities and the harmonic mean operator is considered as the fuzzy composition. Theoretical properties of the feasible region are derived. It is proved that the feasible solution set is comprised of one maximum solution and a finite number of minimal solutions. Furthermore, some necessary and sufficient conditions are presented to determine the feasibility of the problem. Moreover, an algorithm is presented to find the optimal solutions of the problem, and finally an example is described to illustrate the algorithm.
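For readers unfamiliar with this setting, the problem can be written (assuming the usual max-composition form of fuzzy relational equalities; our notation, which may differ from the paper's) as
\begin{center}
$\min\ c^{T}x \quad \text{subject to} \quad \displaystyle\max_{1\le j\le n}\frac{2\,a_{ij}\,x_{j}}{a_{ij}+x_{j}}=b_{i}\ \ (i=1,\dots,m), \qquad x\in[0,1]^{n},$
\end{center}
where $\frac{2\,a_{ij}\,x_{j}}{a_{ij}+x_{j}}$ is the harmonic mean of $a_{ij}$ and $x_{j}$.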
Reza Habibi
Abstract
The Kalman-Bucy filter has been studied under different scenarios for the observation and state equations; however, an important question is how this filter may be applied to detect change points. In this paper, using the Bayesian approach, a modified version of this filter is studied which has good and justifiable properties and is applied to change point analysis.
Yaser Shokri Kalandaragh
Abstract
Dealing with constraints is very common in real-world implementation issues, and search algorithms for real problems are no exception. Because of the constraints in search problems (named Constraint Satisfaction Problems (CSPs)), their main solving algorithm is presented in backtracking form. The constraint propagation algorithm is an auxiliary tool for avoiding constraint violations as well as reducing search options. This algorithm has been presented in almost seven versions so far. In this paper, we update the third version of this algorithm, presented under the title AC-3, in five aspects and increase its capabilities. The most important feature of our proposed algorithm is its low time complexity. This feature is made possible by introducing two auxiliary criteria for detecting the more critical binary constraints. Faster investigation of critical constraints leads to early detection of dead ends in the search path, so that the search in that direction stops early.
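For reference, a minimal Python sketch of the plain AC-3 baseline that the paper extends is given below; the tiny CSP and the arc bookkeeping are illustrative only, and the five proposed improvements are not reproduced here.
\begin{verbatim}
from collections import deque

def ac3(domains, constraints):
    """Plain AC-3 for binary CSPs.
    domains: dict var -> set of values
    constraints: dict (x, y) -> predicate(vx, vy); both orderings present."""
    queue = deque(constraints.keys())           # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                    # dead end detected
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))        # revisit neighbours of x
    return True

# Tiny example: X < Y < Z over {1, 2, 3}
doms = {v: {1, 2, 3} for v in "XYZ"}
cons = {("X", "Y"): lambda a, b: a < b, ("Y", "X"): lambda a, b: a > b,
        ("Y", "Z"): lambda a, b: a < b, ("Z", "Y"): lambda a, b: a > b}
print(ac3(doms, cons), doms)   # True, X={1}, Y={2}, Z={3}
\end{verbatim}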
Mehdi Shams; Gholamreza Hesamian
Abstract
Statistical methods based on the cumulative distribution function are a starting point for many parametric and nonparametric statistical inferences. However, there are many practical problems that require dealing with observations/parameters that are inherently imprecise. Hesamian and Taheri (2013) extended the concept of the cumulative distribution function to a fuzzy cumulative distribution function: applying a common notion of fuzzy random variables, they introduced a fuzzy version of the cumulative distribution function. However, the main properties of the proposed method have not yet been considered in the fuzzy environment. This paper aims to extend the classical properties of the cumulative distribution function to the fuzzy environment.
Abolfazl Poureidi
Abstract
Let $G=(V,E)$ be a graph. A double Roman dominating function (DRDF) on $G$ is a function $f:V\to\{0,1,2,3\}$ such that for every vertex $v\in V$: if $f(v)=0$, then either there is a vertex $u$ adjacent to $v$ with $f(u)=3$ or there are vertices $x$ and $y$ adjacent to $v$ with $f(x)=f(y)=2$; and if $f(v)=1$, then there is a vertex $u$ adjacent to $v$ with $f(u)\geq 2$. A DRDF $f$ on $G$ is a total DRDF (TDRDF) if for any $v\in V$ with $f(v)>0$ there is a vertex $u$ adjacent to $v$ with $f(u)>0$. The weight of $f$ is the sum $f(V)=\sum_{v\in V}f(v)$. The minimum weight of a TDRDF on $G$ is the total double Roman domination number of $G$. In this paper, we give a linear algorithm to compute the total double Roman domination number of a given tree.
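A direct check of these conditions is straightforward; the following sketch (ours, not the paper's linear-time tree algorithm) verifies whether a labeling is a TDRDF on a small path:
\begin{verbatim}
def is_tdrdf(adj, f):
    """adj: dict vertex -> set of neighbours, f: dict vertex -> {0,1,2,3}."""
    for v, nbrs in adj.items():
        if f[v] == 0:
            ok = any(f[u] == 3 for u in nbrs) or \
                 sum(1 for u in nbrs if f[u] == 2) >= 2
            if not ok:
                return False
        if f[v] == 1 and not any(f[u] >= 2 for u in nbrs):
            return False
        if f[v] > 0 and not any(f[u] > 0 for u in nbrs):
            return False                        # totality condition
    return True

# Path v1-v2-v3-v4 with every vertex labeled 2 (weight 8, not claimed minimum):
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
f = {1: 2, 2: 2, 3: 2, 4: 2}
print(is_tdrdf(adj, f), sum(f.values()))        # True 8
\end{verbatim}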
Mohammad Zeynali Azim; Saeid Alikhani; Babak Anari; Bagher Zarei
Abstract
This paper is about producing a new kind of pairs, which we call MS-pairs. To produce these pairs, we use an algorithm for dividing a natural number $x$ by two, applied to two arbitrary numbers, and consider their related graphs. We present some applications of these pairs that show their interesting properties such as unpredictability, irreversibility, aperiodicity, and chaotic behavior.
Amin Ghodousian; Sara Zal
Abstract
In this paper, a linear programming problem is investigated in which the feasible region is formed as a special type of fuzzy relational equalities (FRE). In this type of FRE, fuzzy composition is considered as the weighted power mean operator (WPM). Some theoretical properties of the feasible region are derived and some necessary and sufficient conditions are also presented to determine the feasibility of the problem. Moreover, two procedures are proposed for simplifying the problem. Based on some structural properties of the problem, an algorithm is presented to find the optimal solutions and finally, an example is described to illustrate the algorithm.
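In our notation (which may differ from the paper's), the weighted power mean of $a_{ij}$ and $x_{j}$ with weight $w\in[0,1]$ and exponent $p\neq 0$ is
\begin{center}
$\varphi_{p,w}(a_{ij},x_{j})=\bigl(w\,a_{ij}^{\,p}+(1-w)\,x_{j}^{\,p}\bigr)^{1/p},$
\end{center}
so that, under the usual max-composition, the $i$-th equality reads $\displaystyle\max_{1\le j\le n}\varphi_{p,w}(a_{ij},x_{j})=b_{i}$.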
Ferya Fathi; Mohammad Ali Fariborzi Araghi; Seyed Abolfazl Shahzadeh Fazeli
Abstract
Inverse eigenvalue problems (IEPs) of tridiagonal matrices are among the most popular IEPs due to the widespread application of this class of matrices. In this paper, two different IEPs with different eigen-information, including eigenvalues and eigenvectors, are presented on nonsymmetric tridiagonal matrices. A recursive relation between the characteristic polynomials of the leading principal submatrices of the required matrix is presented to solve the problems. The application of the problems in graph theory and perturbation theory is studied. The necessary and sufficient conditions for solvability of the problems are obtained. Algorithms and numerical examples are given to show the applicability of the proposed scheme.
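The recursive relation referred to above is, in the usual form for a nonsymmetric tridiagonal matrix with diagonal $a_{i}$, superdiagonal $b_{i}$ and subdiagonal $c_{i}$ (our notation, not necessarily the paper's), $p_{0}=1$, $p_{1}(x)=x-a_{1}$ and $p_{k}(x)=(x-a_{k})\,p_{k-1}(x)-b_{k-1}c_{k-1}\,p_{k-2}(x)$. A short numerical check against the determinants of the leading principal submatrices:
\begin{verbatim}
import numpy as np

def leading_char_polys(a, b, c, x):
    """p_k(x) for k = 1..n via the three-term recurrence above."""
    p = [1.0, x - a[0]]
    for k in range(1, len(a)):
        p.append((x - a[k]) * p[k] - b[k - 1] * c[k - 1] * p[k - 1])
    return p[1:]

a, b, c, x = [2.0, -1.0, 3.0], [1.0, 4.0], [5.0, -2.0], 0.7   # sample data
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
for k, pk in enumerate(leading_char_polys(a, b, c, x), start=1):
    det = np.linalg.det(x * np.eye(k) - T[:k, :k])
    assert abs(pk - det) < 1e-9
print("recurrence matches det(xI - T_k) for k = 1, 2, 3")
\end{verbatim}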
Reza Habibi
Abstract
Lying can threaten a system's self-worth by preventing it from seeing itself as a good system, and it can generally erode trust in society. Lying may be considered a game. This paper is concerned with the effect of lying in a system containing two agents, using game theory. Based on a repeated measurement model, two agents (which constitute a system) play a global game, and it is seen that, during a repeated game, the system is destroyed by an agent's lying. The probability of a failed negotiation is derived and its behavior is studied under different scenarios. Simulation results are presented to support the theoretical results. Finally, concluding remarks are given.
Maedeh Mehravaran; Fazlollah Adibnia; Mohammad-Reza Pajoohan
Abstract
In the real world, organizations' requirements for high-performance resources and high-capacity storage devices encourage them to use resources in public clouds. While a private cloud provides security and low cost for scheduling workflows, public clouds provide higher scale, are potentially exposed to the risk of data and computation breaches, and require payment for usage. Task scheduling, therefore, is one of the most important problems in cloud computing. In this paper, a new scheduling method is proposed for workflow applications in a hybrid cloud considering security. Sensitivity of tasks has been considered in recent works; we, however, consider security requirements for data and security strength for resources. The proposed scheduling method is implemented with the Particle Swarm Optimization (PSO) algorithm. Our proposed algorithm minimizes the security distance, that is, it maximizes the similarity of security between data and resources, while following time and budget constraints. Analysis of the experimental results shows that the proposed algorithm selects resources with the highest security similarity while user constraints are satisfied.
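A toy illustration of the "security distance" idea described above (the security levels and placement are invented; the PSO machinery and the time and budget terms are not shown):
\begin{verbatim}
# Score an assignment by how far each datum's required security level is
# from the security strength of the resource it is placed on.
data_requirement  = {"d1": 0.9, "d2": 0.4, "d3": 0.7}    # invented levels
resource_strength = {"private_vm": 0.95, "public_vm": 0.5}
placement = {"d1": "private_vm", "d2": "public_vm", "d3": "public_vm"}

security_distance = sum(abs(data_requirement[d] - resource_strength[r])
                        for d, r in placement.items())
print(round(security_distance, 2))   # smaller is better; a scheduler would
                                     # minimize this alongside time and cost
\end{verbatim}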
Saleh Hatami Sharif Abadi; Hasan Hosseini Nasab; Mohammad Bagher Fakhrzad; Hasan Khademi Zarei
Abstract
We can apply any method for organizing a supply chain, but contracting is more viable. Among the many contracts that do so, the insurance contract is more efficient. The problem is tuning the contract's parameters (for a two-level, two-period supply chain with one supplier and one retailer) to achieve the optimum point, where there is more gain for everyone separately and all predictable risks have been covered. The insurance contract covers every predictable risk that the downstream party is facing. In return, the retailer gives the supplier some money as a side payment (premium). So, the contract has two main parameters: first, the fraction ($\beta$) of every predictable loss incurred by the retailer that the supplier must pay, and second, the side payment (M) that the retailer must pay. We find the best $\beta$ for a one-supplier, one-retailer supply chain with two main sale periods. To reach the optimum state among all the insurance contract's possible forms, we designed mathematical models based on scenarios. We then optimize these stochastic models to find the best contract possible for 1000 initial scenarios. Every scenario indicates one possible number for price and one for demand in each period (generating 4000 possible numbers for the independent demand and price). We needed to know the maximum possible profit for the whole supply chain, so we first designed a centralized supply chain, where the maximum profit is possible, and then a decentralized supply chain to know the minimum of what is possible. With the two ends of this range, we can design the final model with an insurance contract applied. In this model, we insert $\beta$ and M into the model as the insurance contract does. First, we reduce the number of scenarios to 20 with a novel method. Then we find the optimum point by solving the final model for each $\beta$. The result was best at $\beta=0.25$. In the next step, by supposing equal negotiating power for the two sides, we split the extra money into two equal parts, and M was measured as half of the extra money that the contract can generate. The extra money was at 0.9893 of our range, which means the contract earns 99.97\% of what is possible.
Mina Moosapour; Ahmad Bagheri; Mohammad Javad Mahmoodabadi
Abstract
The imperialist competitive algorithm (ICA) is developed based on the socio-political process of imperialist competition. It is an efficient approach for single-objective optimization problems. However, this algorithm fails to optimize multi-objective problems (MOPs) with conflicting objectives. This paper presents a modification of the ICA for multi-objective problems. To improve the algorithm's performance and adapt it to the characteristics of MOPs, the Sigma method was used to establish the initial empires, the weighted sum approach (WSum) was employed for the empire competition, and an adaptive elimination approach was used as the external archiving strategy. The results indicated that the suggested algorithm had higher performance compared to other algorithms in terms of diversity and convergence characteristics.
Dariush Salimi; Mohaddese Salimi; Ali Moieni
Abstract
The demand for extracting sophisticated features from DNA or RNA sequences, capable of effectively predicting gene interaction networks, has increased in computational biology. Epigenetic modifications, along with their patterns, have been widely recognized as appealing features affecting gene interaction. However, studies of sequence-based features highly correlated with this key element have remained limited. In this paper, the classification of 23 genes in the PPAR signaling pathway associated with muscle fat tissue in humans is proposed based on the statistical distributions of the abundance of specified RNA 5-mers. We suggest that these 5-mers, highly correlated with epigenetic modifications, can efficiently categorize different gene interactions, particularly co-expression interaction and physical interaction. Our results, evaluated against the GeneMania web interface, show that the geometric distribution of 5-mers in the epigenetic modification regions indicates the proportion of most physical interactions, and the Poisson distribution the proportion of most co-expression interactions between genes.
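A minimal sketch of the kind of sequence feature used above, counting 5-mer occurrences in an RNA sequence (the sequence is a toy example; fitting the geometric and Poisson models to the counts is not reproduced):
\begin{verbatim}
from collections import Counter

def kmer_counts(seq, k=5):
    """Count overlapping k-mers in an RNA sequence (DNA input is converted)."""
    seq = seq.upper().replace("T", "U")
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = kmer_counts("AUGGCUAUGGCUAAGGCUAUGG")
print(counts.most_common(3))
\end{verbatim}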
Vahid Heidari; Dara Moazzami
Abstract
In this paper we show that, if $NP\neq ZPP$, for any $\epsilon > 0$, the tenacity of a graph with $n$ vertices is not approximable in polynomial time within a factor of $\frac{1}{2} \left( \frac{n-1}{2} \right) ^{1-\epsilon}$.
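For context, the tenacity referred to here is the standard vertex version of the vulnerability measure (cf. the edge-tenacity defined later in this issue):
\begin{center}
$T(G)=\displaystyle \min_{S\subset V(G)}\{\frac{\mid S\mid +\tau(G-S)}{\omega(G-S)}\}$
\end{center}
where the minimum is taken over all vertex cutsets $S$ of $G$, $\tau(G-S)$ is the order of a largest component of $G-S$, and $\omega(G-S)$ is the number of components of $G-S$.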
R Ponraj; A Gayathri; S Somasundaram
Abstract
\noindent Let $G = (V, E)$ be a $(p,q)$ graph. Define
\begin{equation*}
\rho =
\begin{cases}
\frac{p}{2}, & \text{if $p$ is even}\\
\frac{p-1}{2}, & \text{if $p$ is odd}
\end{cases}
\end{equation*}
and let $L = \{\pm1, \pm2, \pm3, \cdots, \pm\rho\}$ be called the set of labels. Consider a mapping $f : V \longrightarrow L$ that assigns different labels in $L$ to the different elements of $V$ when $p$ is even, and different labels in $L$ to $p-1$ elements of $V$, repeating a label for the remaining one vertex, when $p$ is odd. The labeling defined above is said to be a pair difference cordial labeling if, with each edge $uv$ of $G$ labeled $\left|f(u) - f(v)\right|$, we have $\left|\Delta_{f_1} - \Delta_{f_1^c}\right| \leq 1$, where $\Delta_{f_1}$ and $\Delta_{f_1^c}$ respectively denote the number of edges labeled with $1$ and the number of edges not labeled with $1$. A graph $G$ for which there exists a pair difference cordial labeling is called a pair difference cordial graph. In this paper we investigate the pair difference cordial labeling behavior of some snake and butterfly graphs.
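A tiny checker for this definition (our example on the path $P_4$, not taken from the paper):
\begin{verbatim}
def is_pair_difference_cordial(edges, f):
    """edges: list of (u, v); f: dict vertex -> label from L."""
    d1  = sum(1 for u, v in edges if abs(f[u] - f[v]) == 1)
    d1c = len(edges) - d1
    return abs(d1 - d1c) <= 1

# p = 4, so rho = 2 and L = {-2, -1, 1, 2}; vertices get distinct labels.
edges = [(1, 2), (2, 3), (3, 4)]              # the path v1 v2 v3 v4
f     = {1: -2, 2: -1, 3: 1, 4: 2}
print(is_pair_difference_cordial(edges, f))   # True: Delta_1 = 2, Delta_1c = 1
\end{verbatim}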
Kishore Krisna S; Manav R Samant; Raaj Khishorre K R; Sreeharan B N
Abstract
Selecting a suitable journal is one of the most essential steps before publishing any manuscript. Finding the relevant journal is a key factor in proving one's work valuable to society, and the final output and performance of one's research are ultimately validated only if the paper is published in the right journal. One of the greatest mistakes that authors make is submitting their manuscript to an unsuitable journal. Some authors consider only the acceptance rate and the time to first decision and publication as their main criteria; however, an author should consider all six parameters: scope, CiteScore, impact factor, acceptance rate, time to first decision, and time to publication. An algorithm named DEAR is used in this work, which can consider all these parameters to find the right journal among the various alternatives. This DEAR method serves as a user-friendly method for selecting the best journal.
Behnam Iranfar; Mohammad Farshi
Abstract
Given a point set $S\subset \mathbb{R}^d$, the $\theta$-graph of $S$ is defined as follows: for each point $s\in S$, draw cones with apex at $s$ and angle $\theta$, and connect $s$ to the point in each cone whose projection onto the bisector of the cone is closest to $s$. One can define the $\theta$-graph on an uncertain point set, i.e. a point set where each point $s_i$ exists with an independent probability $\pi_i \in (0,1]$. In this paper, we propose an algorithm that computes the expected weight of the $\theta$-graph on a given uncertain point set. The proposed algorithm takes $O(n^2\alpha(n^2,n)^{2d})$ time and $O(n^2)$ space, where $n$ is the number of points, $d$ and $\theta$ are constants, and $\alpha$ is the inverse of Ackermann's function.
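A naive $O(n^2)$ construction of the $\theta$-graph in the plane, following the definition above with cone angle $\theta = 2\pi/k$ (deterministic points only; the expected-weight computation for uncertain points is the paper's contribution and is not reproduced here):
\begin{verbatim}
import math

def theta_graph(points, k):
    theta = 2 * math.pi / k
    edges = set()
    for i, (sx, sy) in enumerate(points):
        best = {}                                    # cone index -> (proj, j)
        for j, (tx, ty) in enumerate(points):
            if i == j:
                continue
            ang = math.atan2(ty - sy, tx - sx) % (2 * math.pi)
            cone = int(ang // theta)
            bisector = (cone + 0.5) * theta
            # length of the projection of (t - s) onto the cone bisector
            proj = math.hypot(tx - sx, ty - sy) * math.cos(ang - bisector)
            if cone not in best or proj < best[cone][0]:
                best[cone] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return edges

pts = [(0, 0), (2, 1), (1, 3), (-1, 2), (3, -1)]
print(sorted(theta_graph(pts, k=6)))
\end{verbatim}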
Mostafa Akhavan-Safar; Babak Teimourpour; Mahboube Ayyoubi
Abstract
One of the important topics in oncology for treatment and prevention is the identification of genes that initiate cancer in cells. These genes are known as cancer driver genes (CDGs). Identifying driver genes is important both for a basic understanding of cancer and for helping to find new therapeutic targets or biomarkers. Several computational methods for finding cancer driver genes have been developed from genome data. However, most of these methods look for key mutations in genomic data to predict cancer driver genes; they are dependent on mutation and genomic data and often have a high rate of false positives in their results. In this study, we propose a network-based method, GeneIC, which can detect cancer driver genes without the need for mutation data. This method uses the concept of influence maximization and the independent cascade model. First, a cancer gene regulatory network is created using regulatory interactions and gene expression data. Then we run an independent cascade propagation algorithm on the network to calculate the coverage of each gene. Finally, the genes with the highest coverage are introduced as driver genes. The results of our proposed method were compared with 19 previous computational and network-based methods based on the F-measure metric and the number of detected drivers. The results showed that the proposed method outperforms the other methods. In addition, more than 25.49\% of the driver genes reported by GeneIC are new driver genes that have not been reported by any other computational method.
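A minimal sketch of the coverage idea described above: simulate the independent cascade model from each node and average how many nodes it reaches (the toy network and activation probabilities are invented; GeneIC itself builds the network from regulatory interactions and expression data):
\begin{verbatim}
import random

def cascade(graph, seed, rng):
    """graph: dict node -> list of (neighbour, activation_probability)."""
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def coverage(graph, node, runs=1000, seed=0):
    rng = random.Random(seed)
    return sum(cascade(graph, node, rng) for _ in range(runs)) / runs

net = {"g1": [("g2", 0.4), ("g3", 0.6)], "g2": [("g4", 0.5)], "g3": [("g4", 0.3)]}
ranking = sorted(net, key=lambda g: coverage(net, g), reverse=True)
print(ranking)   # nodes with the largest expected spread come first
\end{verbatim}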
Saeid Sadeghi; Kooroush Manochehri; Mohsen Jahanshahi
Abstract
There are several techniques for implementing an authentication system for computers, the most common of which uses clear-text passwords. One of the security problems with text passwords is that users avoid choosing complicated passwords for fear of forgetting them, leaving passwords guessable and retrievable by attackers. One way to move beyond text passwords is graphical password authentication systems, which reduce password forgetting by using images instead of text characters. In this paper, the security challenges of using graphical passwords are discussed. Then, a method is explained for using digital image watermarking in the authentication process, together with an algorithm suitable for watermarking that enhances the security of the graphical password authentication system, and its quantitative and qualitative security parameters are examined.
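To make the watermarking idea concrete, here is a textbook least-significant-bit embedding on a grayscale pixel array; this is a generic technique shown only for illustration, not the algorithm proposed in the paper:
\begin{verbatim}
def embed(pixels, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

image = [120, 35, 240, 88, 17, 200, 64, 129]   # toy 8-pixel "image"
mark  = [1, 0, 1, 1]                           # bits identifying the user
stego = embed(image, mark)
assert extract(stego, len(mark)) == mark
print(stego)
\end{verbatim}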
Mahdieh Hasheminezhad
Abstract
An edge coloring of a digraph $D$ is called a $P_3$-rainbow edge coloring if the edges of any directed path of $D$ with length 2 are colored with different colors. It is proved that for a $P_3$-rainbow edge coloring of a digraph $D$, at least $\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are necessary and $2\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are enough. One can determine in linear time whether a digraph has a $P_3$-rainbow edge coloring with 1 or 2 colors. In this paper, it is proved that determining whether a digraph has a $P_3$-rainbow edge coloring with 3 colors is an NP-complete problem even for planar digraphs. Moreover, it is shown that $\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are necessary and sufficient for a $P_3$-rainbow edge coloring of a transitive orientation digraph $D$.
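The property is easy to check directly; the sketch below (ours) verifies a 2-coloring of the transitive tournament on three vertices, for which $\chi(D)=3$ and hence $\left\lceil{\log_2{\chi(D)}}\right\rceil = 2$ colors are needed:
\begin{verbatim}
def is_p3_rainbow(edges, colour):
    """edges: list of directed edges (u, v); colour: dict (u, v) -> colour."""
    for (u, v) in edges:
        for (x, w) in edges:
            # directed path u -> v -> w (distinct vertices) must be rainbow
            if x == v and w != u and colour[(u, v)] == colour[(v, w)]:
                return False
    return True

edges  = [(0, 1), (1, 2), (0, 2)]     # transitive tournament on {0, 1, 2}
colour = {(0, 1): "a", (1, 2): "b", (0, 2): "a"}
print(is_p3_rainbow(edges, colour))   # True: two colours suffice here
\end{verbatim}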
Mahboubeh Shamsi; Abdolreza Rasouli Kenari; Roghayeh Aghamohammadi
Abstract
On a graph with a negative cost cycle, the shortest path is undefined, but the number of edges of the shortest negative cost cycle can be computed; this number is called the Negative Cost Girth (NCG). The NCG problem arises in many optimization issues such as scheduling and model verification. The existing polynomial algorithms suffer from high computation and memory consumption. In this paper, a powerful Map-Reduce framework is implemented to find the NCG of a graph. The proposed algorithm runs in $O(\log{k})$ parallel time with $O(n^3)$ work on each Hadoop node, where $n$ and $k$ are the size of the graph and the value of the NCG, respectively. The Hadoop implementation of the algorithm shows that the total execution time is reduced by 50\% compared with polynomial algorithms, especially in large networks, as the number of Hadoop nodes increases. The results prove the efficiency of the approach for solving the NCG problem and processing big data in a parallel and distributed way.
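One standard way to realize rounds of $O(n^3)$ work for this problem is repeated min-plus matrix squaring: a negative closed walk with at most $k$ edges exists exactly when the $k$-th min-plus power of the adjacency matrix has a negative diagonal entry, and one can binary-search on $k$. The sequential sketch below illustrates that idea on a toy graph; it is ours, not the authors' Map-Reduce implementation, and it does not match their exact round count.
\begin{verbatim}
INF = float("inf")

def minplus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def negative_closed_walk_within(P1, k):
    """True iff some closed walk with at most k edges has negative weight.
    P1[i][j] is the cheapest edge i -> j (0 on the diagonal, INF if absent)."""
    result, base, e = P1, P1, k - 1
    while e:                                  # result becomes P1^k (min-plus)
        if e & 1:
            result = minplus(result, base)
        base, e = minplus(base, base), e >> 1
    return any(result[i][i] < 0 for i in range(len(P1)))

def ncg(P1):
    n = len(P1)
    if not negative_closed_walk_within(P1, n):
        return None                           # no negative cycle at all
    lo, hi = 2, n                             # no self-loops, so girth >= 2
    while lo < hi:                            # binary search on the girth
        mid = (lo + hi) // 2
        if negative_closed_walk_within(P1, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Toy graph: the only negative cycle is 0 -> 1 -> 2 -> 0 (3 edges, weight -1).
P1 = [[0, 1, INF, 5],
      [INF, 0, 1, INF],
      [-3, INF, 0, 2],
      [INF, INF, INF, 0]]
print(ncg(P1))   # 3
\end{verbatim}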
Dara Moazzami
Abstract
The edge-tenacity $T_e(G)$ of a graph $G$ was defined as
\begin{center} $T_e(G)=\displaystyle \min_{F\subset E(G)}\{\frac{\mid F\mid +\tau(G-F)}{\omega(G-F)}\}$\end{center}
where the minimum is taken over all edge cutsets $F$ of $G$. We define $G-F$ to be the graph induced by the edges of $E(G)-F$, $\tau(G-F)$ is the number of edges in the largest component of the graph induced by $G-F$, and $\omega(G-F)$ is the number of components of $G-F$. A set $F\subset E(G)$ is said to be a $T_e$-set of $G$ if
\begin{center} $T_e(G)=\frac{\mid F\mid+\tau(G-F)}{\omega(G-F)}$\end{center}
Each component has at least one edge. In this paper we introduce a new invariant, the edge-tenacity, for graphs; it is another vulnerability measure. We present several properties and bounds on the edge-tenacity. We also compute the edge-tenacity of some classes of graphs.