Bertrand Teguia Tabuguia
Abstract
Linear and homogeneous recurrence equations having polynomial coefficients are said to be holonomic. These equations are useful for proving and discovering combinatorial and hypergeometric identities. Given a field $\mathbb{K}$ of characteristic zero, $a_n$ is a hypergeometric term with respect to $\mathbb{K}$ if the ratio $a_{n+1}/a_n$ is a rational function over $\mathbb{K}$. Two algorithms, by Marko Petkov\v{s}ek (1993) and Mark van Hoeij (1999), were proposed to compute hypergeometric term solutions of holonomic recurrence equations. The latter algorithm is more efficient and was implemented by its author in the Computer Algebra System (CAS) Maple through the command \texttt{LREtools[hypergeomsols]}. We describe
a variant of van Hoeij's algorithm that achieves the same efficiency without following certain recommendations of the original version. We implemented our algorithm in the CASs Maxima and Maple. In some particular cases, our code also finds results where \texttt{LREtools[hypergeomsols]} fails.
Our implementation is part of the \texttt{FPS} software, which can be downloaded at \url{http://www.mathematik.uni-kassel.de/~bteguia/FPS_webpage/FPS.htm}. The command is \texttt{HypervanHoeij} for Maxima 5.44 and \texttt{rectohyperterm} for Maple 2021.
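For illustration, the kind of closed forms such algorithms return can be reproduced with SymPy's general-purpose \texttt{rsolve}, which also finds hypergeometric term solutions of holonomic recurrences. This is only a sketch of the concept, not the \texttt{HypervanHoeij} or \texttt{rectohyperterm} implementations, and the recurrence below is just a small example.

```python
# Sketch only: SymPy's rsolve (not the paper's code) returns hypergeometric term
# solutions of a linear recurrence with polynomial coefficients.
from sympy import Function, rsolve
from sympy.abc import n

y = Function('y')
rec = (n - 1)*y(n + 2) - (n**2 + 3*n - 2)*y(n + 1) + 2*n*(n + 1)*y(n)

sol = rsolve(rec, y(n))
print(sol)  # a combination of the hypergeometric terms 2**n and factorial(n),
            # whose consecutive ratios 2 and n + 1 are rational functions of n
```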
Mahmood Amintoosi
Abstract
A popular research topic in Graph Convolutional Networks (GCNs) is speeding up the training of the network. The main bottleneck in training a GCN is the exponential growth of the computations. Cluster-GCN exploits the fact that each node and its neighbors are usually grouped in the same cluster: it considers the clustering structure of the graph and expands each node's neighborhood only within its cluster when training the GCN. The main assumption of Cluster-GCN is that the relations between clusters are weak, which does not hold for all graphs. Here we extend their approach by using overlapping clustering instead of the crisp clustering used in Cluster-GCN. This is achieved by allowing the marginal nodes to contribute to training in more than one cluster. The proposed method is evaluated through experiments on several benchmark datasets. The experimental results show that the proposed method is, on average, more efficient than Cluster-GCN.
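A rough sketch of the overlapping-cluster construction described above; the function, its boundary rule, and the toy graph are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: turn a crisp partition into overlapping clusters by letting
# "marginal" (boundary) nodes join every cluster that contains one of their neighbors.
import networkx as nx
from collections import defaultdict

def overlapping_clusters(G: nx.Graph, crisp: dict) -> dict:
    """crisp maps node -> cluster id; returns cluster id -> set of nodes."""
    clusters = defaultdict(set)
    for v, c in crisp.items():
        clusters[c].add(v)
    for u, v in G.edges():
        if crisp[u] != crisp[v]:          # edge crossing the partition
            clusters[crisp[v]].add(u)     # u also contributes to v's cluster
            clusters[crisp[u]].add(v)     # and vice versa
    return dict(clusters)

# Tiny usage example on a path graph split into two halves.
G = nx.path_graph(6)                       # nodes 0..5
crisp = {v: (0 if v < 3 else 1) for v in G}
print(overlapping_clusters(G, crisp))      # nodes 2 and 3 appear in both clusters
```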
R Ponraj; A Gayathri; S Somasundaram
Abstract
Let $G = (V, E)$ be a $(p,q)$ graph. Define \begin{equation*}\rho =\begin{cases}\frac{p}{2}, & \text{if $p$ is even}\\ \frac{p-1}{2}, & \text{if $p$ is odd}\end{cases}\end{equation*} and let $L = \{\pm1, \pm2, \pm3, \cdots, \pm\rho\}$ be the set of labels. Consider a mapping $f : V \longrightarrow L$ that assigns different labels in $L$ to the different elements of $V$ when $p$ is even, and different labels in $L$ to $p-1$ elements of $V$ while repeating one label for the remaining vertex when $p$ is odd. Such a labeling is said to be a pair difference cordial labeling if, after assigning to each edge $uv$ of $G$ the label $\left|f(u) - f(v)\right|$, we have $\left|\Delta_{f_1} - \Delta_{f_1^c}\right| \leq 1$, where $\Delta_{f_1}$ and $\Delta_{f_1^c}$ respectively denote the number of edges labeled with $1$ and the number of edges not labeled with $1$. A graph $G$ that admits a pair difference cordial labeling is called a pair difference cordial graph. In this paper we investigate the pair difference cordial labeling behavior of the planar grid and Mongolian tent graphs.
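To make the definition concrete, a given labeling can be checked mechanically; the sketch below uses a hypothetical labeling of the path $P_4$ (so $p = 4$, $\rho = 2$, $L = \{\pm1, \pm2\}$).

```python
# Illustrative sketch: verify the pair difference cordial condition
# |Delta_f1 - Delta_f1^c| <= 1 for a given vertex labeling f.
def is_pair_difference_cordial(edges, f):
    labeled_one = sum(1 for u, v in edges if abs(f[u] - f[v]) == 1)
    return abs(labeled_one - (len(edges) - labeled_one)) <= 1

# Hypothetical example: path P4 with vertices 0..3.
edges = [(0, 1), (1, 2), (2, 3)]
f = {0: 1, 1: 2, 2: -1, 3: -2}
print(is_pair_difference_cordial(edges, f))   # True: edge labels are 1, 3, 1
```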
Abolfazl Javan; Majid Javan; M. Jafarpour; Dara Moazzami; Ali Moieni
Abstract
We introduce a new invariant vulnerability parameter for graphs, named “J-Tightness” and denoted “J(G)”. As a stability measure, we present its properties along with comparisons to other parameters of a graph. We show how it is calculated for complete graphs and cycles. We show that J-Tightness better fits the properties of vulnerability measures and can be used with more confidence to assess the vulnerability of various classes of graphs.
Gholamreza Hesamian; Mehdi Shams
Abstract
Statistical methods based on the cumulative distribution function are a starting point for many parametric and nonparametric statistical inferences. However, many practical problems require dealing with observations/parameters that are inherently imprecise. Hesamian and Taheri (2013) extended the concept of the cumulative distribution function to this setting: applying a common notion of fuzzy random variables, they introduced a fuzzy cumulative distribution function. However, the main properties of the proposed notion have not yet been considered in the fuzzy environment. This paper aims to extend the classical properties of the cumulative distribution function to the fuzzy cumulative distribution function in a fuzzy environment.
Davood Bakhshesh
Abstract
Let $S$ be a set of points in the plane that are in convex position. Let~$\cal O$ be a set of simple polygonal obstacles whose vertices are in $S$. The visibility graph $Vis(S,{\cal O})$ is the graph which is obtained from the complete graph of $S$ by removing all edges intersecting some obstacle of $\cal O$. In this paper, we show that there is a plane $5.19$-spanner of the visibility graph $Vis(S,{\cal O})$ of degree at most 6. Moreover, we show that there is a plane $1.88$-spanner of the visibility graph $Vis(S,{\cal O})$. These improve the stretch factor and the maximum degree of the previous results by A. van Renssen and G. Wong ({\em Theoretical Computer Science, 2021}) in the context of points in convex position.
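A rough sketch of how the visibility graph $Vis(S,{\cal O})$ itself can be constructed, using the Shapely library; the blocking rule below (an edge is removed when it passes through an obstacle's interior) is a simplification that ignores degenerate cases, and the points and obstacle are hypothetical.

```python
# Illustrative sketch: build Vis(S, O) by dropping edges of the complete graph on S
# that pass through the interior of some obstacle. Simplified visibility rule only.
from itertools import combinations
from shapely.geometry import LineString, Polygon

def visibility_graph(points, obstacles):
    polys = [Polygon(o) for o in obstacles]
    edges = []
    for p, q in combinations(points, 2):
        seg = LineString([p, q])
        blocked = any(seg.crosses(poly) or seg.within(poly) for poly in polys)
        if not blocked:
            edges.append((p, q))
    return edges

# Hypothetical example: four points in convex position, one triangular obstacle.
S = [(0, 0), (4, 0), (4, 4), (0, 4)]
O = [[(0, 0), (4, 0), (4, 4)]]
print(visibility_graph(S, O))   # every edge except the (4, 0)-(0, 4) diagonal survives
```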
Faranak Tohidi; Mohammad Reza Hooshmandasl; Manoranjan Paul
Abstract
Confirming the integrity of transmitted sensitive digital content is a significant issue due to the evolution of communication technologies and the accessibility of image processing tools. Watermarking has recently been a successful method of authentication and integrity verification. However, several significant problems remain, such as withstanding serious attacks and recovering after high tampering rates. We propose a hybrid method that enables an image to be recovered successfully after a range of attacks. A blind watermarking approach is adopted that includes fragile authentication data but robust recovery references. This is performed by embedding a verification code as part of the watermarked data, along with key features of the original image, into a location that is resistant to the attack. To combat different kinds of attacks, the areas of the image have been investigated to find which area is more likely to be affected by each specific type of attack.
Reyhane Ghafari; Najme Mansouri
Abstract
Cloud computing is a high-performance computing environment that can remotely provide services to customers using a pay-per-use model. The principal challenge in cloud computing is task scheduling, in which tasks must be effectively allocated to resources. The mapping of cloud resources to customer requests (tasks) is a well-known Nondeterministic Polynomial-time (NP)-Complete problem. Although task scheduling is a multi-objective optimization problem, most task scheduling algorithms cannot provide an effective trade-off between makespan, resource utilization, and energy consumption. Therefore, this study introduces a Priority-based task scheduling algorithm using the Harris Hawks Optimizer (HHO), called PHHO. The proposed algorithm first prioritizes tasks using a hierarchical process based on length and memory. Then, the HHO algorithm is used to optimally assign tasks to resources. The PHHO algorithm aims to decrease makespan and energy consumption while increasing resource utilization and throughput. To evaluate the effectiveness of the PHHO algorithm, it is compared with other well-known meta-heuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Whale Optimization Algorithm (WOA), the Salp Swarm Algorithm (SSA), and Moth-Flame Optimization (MFO). The experimental results show the effectiveness of the PHHO algorithm compared to the other algorithms in terms of makespan, resource utilization, throughput, and energy consumption.
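A minimal sketch of the kind of objective evaluation such a scheduler optimizes; the task lengths, VM speeds, and power figures below are hypothetical, and this is not the exact fitness function of PHHO.

```python
# Illustrative sketch: evaluate a task-to-VM assignment by makespan, a simple energy
# estimate, and resource utilization. All figures are hypothetical.
def evaluate(assignment, task_len, vm_speed, busy_power=0.3, idle_power=0.1):
    """assignment[i] = index of the VM that runs task i."""
    finish = [0.0] * len(vm_speed)
    for i, vm in enumerate(assignment):
        finish[vm] += task_len[i] / vm_speed[vm]
    makespan = max(finish)
    # Each VM is busy for finish[vm] time units and idle for the rest of the schedule.
    energy = sum(busy_power * t + idle_power * (makespan - t) for t in finish)
    utilization = sum(finish) / (makespan * len(vm_speed))
    return makespan, energy, utilization

# Tiny usage example: 4 tasks on 2 VMs.
print(evaluate([0, 1, 0, 1], task_len=[8, 4, 6, 2], vm_speed=[2.0, 1.0]))
```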
KISHORE KRISNA S; Manav R Samant; RAAJ KHISHORRE K R; Sreeharan B N
Abstract
Selecting a suitable journal is one of the most essential steps before publishing any manuscript. Finding the relevant journal is a key factor in making one's work valuable to society. The final output and the performance of one's research are ultimately validated only if the paper is published in the right journal. One of the greatest mistakes that authors make is submitting their manuscript to an unsuitable journal. Some authors consider only the acceptance rate and the time to first decision and publication as their main criteria; instead, the author should consider all six parameters: scope, CiteScore, impact factor, acceptance rate, time to first decision, and time to publication. An algorithm named DEAR is used in this work to take all of these parameters into account and find the right journal among the various alternatives. The DEAR method serves as a user-friendly method for selecting the best journal.
Mahdieh Hasheminezhad
Abstract
An edge coloring of a digraph $D$ is called a $P_3$-rainbow edge coloring if the edges of any directed path of $D$ of length 2 are colored with different colors. It is proved that for a $P_3$-rainbow edge coloring of a digraph $D$, at least $\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are necessary and $2\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are enough. One can determine in linear time whether a digraph has a $P_3$-rainbow edge coloring with 1 or 2 colors. In this paper, it is proved that determining whether a digraph has a $P_3$-rainbow edge coloring with 3 colors is an NP-complete problem, even for planar digraphs. Moreover, it is shown that $\left\lceil{\log_2{\chi(D)}}\right\rceil$ colors are necessary and sufficient for a $P_3$-rainbow edge coloring of a transitive orientation digraph $D$.
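For concreteness, the defining condition can be verified directly on an edge-colored digraph; the checker and the example triangle below are illustrative only.

```python
# Illustrative sketch: an edge coloring is P3-rainbow if every directed path u -> v -> w
# uses two distinct colors on its edges.
def is_p3_rainbow(edges, color):
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    for u, v in edges:                      # first edge of a potential length-2 path
        for w in out.get(v, []):            # second edge v -> w
            if color[(u, v)] == color[(v, w)]:
                return False
    return True

# Hypothetical example: a directed triangle, which needs three colors.
edges = [(0, 1), (1, 2), (2, 0)]
print(is_p3_rainbow(edges, {(0, 1): 'a', (1, 2): 'a', (2, 0): 'b'}))  # False
print(is_p3_rainbow(edges, {(0, 1): 'a', (1, 2): 'b', (2, 0): 'c'}))  # True
```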
Mahboubeh Shamsi; Abdolreza Rasouli Kenari; Roghayeh Aghamohammadi
Abstract
On a graph with a negative cost cycle, the shortest path is undefined, but the number of edges of the shortest negative cost cycle can be computed. This quantity is called the Negative Cost Girth (NCG). The NCG problem is applied in many optimization issues such as scheduling and model verification. The existing polynomial algorithms suffer from high computation time and memory consumption. In this paper, a powerful Map-Reduce framework is implemented to find the NCG of a graph. The proposed algorithm runs in $O(\log{k})$ parallel time with $O(n^3)$ work on each Hadoop node, where $n$ and $k$ are the size of the graph and the value of the NCG, respectively. The Hadoop implementation of the algorithm shows that the total execution time is reduced by 50\% compared with polynomial algorithms, especially on large networks, as the number of Hadoop nodes increases. The results prove the efficiency of the approach for solving the NCG problem when processing big data in a parallel and distributed way.
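A sequential sketch of the underlying idea (without the Map-Reduce distribution): minimum walk costs under an edge-count bound are computed by min-plus matrix products, and because the "negative diagonal" test is monotone in the bound, it can be searched with logarithmically many checks. The example graph is hypothetical.

```python
# Illustrative sketch: D_m[i][j] = minimum cost of a walk from i to j using at most
# m edges; a negative cycle with at most m edges exists iff some diagonal entry of
# D_m is negative. Each min-plus product costs O(n^3).
INF = float('inf')

def min_plus(A, B):
    n = len(A)
    return [[min(A[i][t] + B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def has_neg_diag(M):
    return any(M[i][i] < 0 for i in range(len(M)))

def walks_at_most(base, m):
    """Min-plus power of the 'at most one edge' matrix, by repeated squaring."""
    result, p, m = base, base, m - 1
    while m:
        if m & 1:
            result = min_plus(result, p)
        p = min_plus(p, p)
        m >>= 1
    return result

def negative_cost_girth(cost):
    n = len(cost)
    base = [[min(cost[i][j], 0) if i == j else cost[i][j] for j in range(n)]
            for i in range(n)]
    if not has_neg_diag(walks_at_most(base, n)):
        return None                              # no negative cycle at all
    lo, hi = 1, n                                # smallest m with a negative diagonal
    while lo < hi:
        mid = (lo + hi) // 2
        if has_neg_diag(walks_at_most(base, mid)):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Hypothetical example: the 3-cycle 0 -> 1 -> 2 -> 0 has total cost -1, so NCG = 3.
g = [[INF, 2, INF], [INF, INF, 1], [-4, INF, INF]]
print(negative_cost_girth(g))                    # 3
```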
Mohammad Mirabi; Mohammad Reza Zare Banadkouki
Abstract
Selecting an efficient project portfolio among available projects is a vital decision for any project manager. The main question is which projects can have more long-term benefit for the manager or organization. Due to the complexity of this field of research, many approaches have been developed for project selection. Computation time and the quality of the result are two main criteria that almost all researchers have concentrated on. In this research, a new hybrid genetic algorithm with a new heuristic mutation and crossover is developed for choosing a good portfolio from the available projects. The presented algorithm is fast and effective in reaching a good result in reasonable time. Finding a good starting point for the initial population and using good operators, namely a heuristic mutation and crossover, are the main points of our algorithm. To check the quality of the results, we compare the developed algorithm with some recent ones in the literature; the comparison studies and statistical calculations demonstrate the efficiency of the new genetic algorithm for selecting a good portfolio.
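As a rough illustration of the underlying optimization only (the paper's specific heuristic mutation and crossover operators are not reproduced here), a binary-encoded portfolio GA with hypothetical benefits, costs, and budget might look like this.

```python
# Illustrative sketch: a chromosome is a 0/1 vector marking selected projects,
# fitness is total benefit, and over-budget portfolios are penalized.
import random

BENEFIT = [12, 7, 9, 4, 15]   # hypothetical project benefits
COST    = [ 6, 3, 5, 2, 10]   # hypothetical project costs
BUDGET  = 14

def fitness(chrom):
    cost = sum(c for c, g in zip(COST, chrom) if g)
    benefit = sum(b for b, g in zip(BENEFIT, chrom) if g)
    return benefit if cost <= BUDGET else benefit - 10 * (cost - BUDGET)

def mutate(chrom, rate=0.2):
    return [1 - g if random.random() < rate else g for g in chrom]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in BENEFIT] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # simple elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
best = max(population, key=fitness)
print(best, fitness(best))
```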