A. Ghodousian; Fatemeh Elyasimohammadi
Abstract
The Dombi family of t-norms is a parametric family of continuous strict t-norms whose members are increasing functions of the parameter. This family covers the whole spectrum of t-norms as the parameter ranges from zero to infinity. In this paper, we study a nonlinear optimization problem in which the constraints are defined as fuzzy relational equations (FRE) with the Dombi family of t-norms. We first investigate the resolution of the feasible solution set when it is defined by max-Dombi composition and present some necessary and sufficient conditions for determining feasibility. Also, some procedures are presented for simplifying the problem. Since the feasible solution set of FREs is non-convex, conventional nonlinear programming methods may not be directly employed to solve the problem. Based on some theoretical properties of the problem, a genetic algorithm is presented which preserves the feasibility of newly generated solutions. Moreover, a method is presented to generate feasible max-Dombi FREs as test problems for evaluating the performance of our algorithm. The proposed method has been compared with some related works. The obtained results confirm the high performance of the proposed method in solving such nonlinear problems.
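As a rough illustration of the constraint structure (not the paper's algorithm), the sketch below uses the standard Dombi t-norm with parameter lam > 0 and checks whether a candidate vector x satisfies a max-Dombi fuzzy relational equation A ∘ x = b; the matrix A, the vector b, and the tolerance are hypothetical example data.

```python
import numpy as np

def dombi_tnorm(x, y, lam=2.0):
    """Dombi t-norm with parameter lam > 0; returns 0 if either argument is 0."""
    if x == 0.0 or y == 0.0:
        return 0.0
    return 1.0 / (1.0 + (((1.0 - x) / x) ** lam + ((1.0 - y) / y) ** lam) ** (1.0 / lam))

def max_dombi_compose(A, x, lam=2.0):
    """Max-Dombi composition: (A o x)_i = max_j T_D(a_ij, x_j)."""
    return np.array([max(dombi_tnorm(a, xj, lam) for a, xj in zip(row, x)) for row in A])

def is_feasible(A, b, x, lam=2.0, tol=1e-6):
    """Check whether x satisfies the max-Dombi FRE A o x = b (up to a tolerance)."""
    return bool(np.all(np.abs(max_dombi_compose(A, x, lam) - b) <= tol))

# Hypothetical example data (not from the paper).
A = np.array([[0.7, 0.4], [0.5, 0.9]])
x = np.array([0.8, 0.6])
b = max_dombi_compose(A, x, lam=2.0)   # constructed so that x is feasible by design
print(is_feasible(A, b, x))            # True
```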
Arif Mehmood; Fawad Nadeem; Choonkil Park; Giorgio Nordo; Humaira Kalsoom; Muhammad Rahim Khan; Naeem Abbas
Abstract
In this paper, the notion of generalized neutrosophic soft open set (GNSOS) in neutrosophic soft topological structures relative to neutrosophic soft points is introduced. The concept of generalized neutrosophic soft separation axioms in neutrosophic soft topological spaces with respect to soft points is also studied. Several related properties and structural characteristics are investigated. Then the convergence of sequences in neutrosophic soft topological spaces is defined and its uniqueness in generalized neutrosophic soft Hausdorff spaces (GNSHS) relative to soft points is examined. Neutrosophic monotonous soft functions and their characteristics are examined through several results. Lastly, generalized neutrosophic soft product spaces with respect to crisp points are addressed.
Saeed Jafaripour; Zahra Nilforoushan; Keivan Borna
Abstract
Deciding whether a musical rhythm is good or not depends on many factors, such as the geographical conditions of a region, culture, the mood of society, how the rhythm has been viewed over the years, and so on. In this paper, we approach the identification of bad rhythms from a scientific point of view, using geometric features of rhythms. Researchers investigating the relationship between geometry and music will recognize a significant gap in this regard: computers have not been used to decide whether a rhythm is good or bad. Here, using computer programming and applying geometric features to more than four thousand rhythms, we identify the bad musical rhythms, and we present algorithms for deciding about bad rhythms using geometric features.
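As a purely illustrative sketch (the abstract does not name the paper's actual geometric features), a common geometric representation places a rhythm's onsets on a circle and measures how far the resulting polygon deviates from a perfectly even one; the "evenness" feature below and its interpretation are assumptions for illustration only.

```python
import math

def evenness(onsets, n_pulses):
    """Hypothetical geometric feature: mean angular deviation of a rhythm's onsets
    (given as pulse indices in a cycle of n_pulses) from a perfectly even polygon."""
    k = len(onsets)
    angles = sorted(2 * math.pi * p / n_pulses for p in onsets)
    ideal = [angles[0] + 2 * math.pi * i / k for i in range(k)]
    return sum(abs(a - b) for a, b in zip(angles, ideal)) / k

# The well-known son clave pattern on a 16-pulse cycle (example input only).
print(evenness([0, 3, 6, 10, 12], 16))
```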
Anuj Kapoor
Abstract
Hash- or B-Tree-based composite indexes are the two most commonly used techniques for searching and retrieving data from memory. Although these techniques have a serious memory limitation that restricts the freedom to search by any combination of the single key/data attributes that comprise the composite search key, they are still accepted considering the trade-offs with better performance on insert and update operations. But when the data is semi-static, i.e. it does not change often, there is a need and scope for a better technique that provides the flexibility and freedom to efficiently search by any possible key without creating any composite index. This paper explains such an algorithmic technique along with its data structures.
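The abstract does not spell out the paper's data structures, so the following is only a plausible sketch of the general idea of searching semi-static data by any attribute combination: one inverted index per attribute, with queries answered by intersecting posting sets. The class, attribute names, and records are hypothetical, not the paper's technique.

```python
from collections import defaultdict

class AttributeSearch:
    """Hypothetical sketch: one inverted index per attribute over semi-static records,
    so a query may combine any subset of attributes without a composite index."""

    def __init__(self, records):
        self.records = records                      # list of dicts, built once
        self.indexes = defaultdict(lambda: defaultdict(set))
        for rid, rec in enumerate(records):
            for attr, value in rec.items():
                self.indexes[attr][value].add(rid)

    def search(self, **criteria):
        sets = [self.indexes[a].get(v, set()) for a, v in criteria.items()]
        if not sets:
            return []
        hits = set.intersection(*sets)
        return [self.records[rid] for rid in sorted(hits)]

data = [{"city": "Pune", "tier": 1}, {"city": "Pune", "tier": 2}, {"city": "Agra", "tier": 2}]
s = AttributeSearch(data)
print(s.search(city="Pune", tier=2))   # [{'city': 'Pune', 'tier': 2}]
```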
Peyman Nasehpour
Abstract
In this paper, we compute the asymptotic average of the decimals of some real numbers. With the help of this computation, we prove that if a real number cannot be represented as a finite decimal and the asymptotic average of its decimals is zero, then it is irrational. We also show that the asymptotic average of the decimals of simply normal numbers is 9/2.
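For concreteness, the asymptotic average referred to here can be written out as follows (a restatement of the standard definitions rather than the paper's proof; the notation $\operatorname{AA}(x)$ is assumed here). If $x$ has decimal digits $d_1 d_2 d_3 \dots$ after the decimal point, then
\[
  \operatorname{AA}(x) \;=\; \lim_{n\to\infty} \frac{d_1 + d_2 + \cdots + d_n}{n},
\]
and for a simply normal number in base 10 each digit $k \in \{0,1,\dots,9\}$ occurs with asymptotic frequency $1/10$, so
\[
  \operatorname{AA}(x) \;=\; \sum_{k=0}^{9} \frac{k}{10} \;=\; \frac{45}{10} \;=\; \frac{9}{2}.
\]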
Mostafa Boskabadi; Mahdi Doostparast; Majid Sarmad
Abstract
Cox proportional hazards models are the most common modelling framework for prediction and evaluation of covariate effects in time-to-event analyses. These models usually do not account for the relationships among covariates, which may have an impact on survival times. In this article, we introduce regression tree models for survival analyses by incorporating dependencies among covariates. Various properties of the proposed model are studied in detail. To assess the accuracy of the proposed model, a Monte Carlo simulation study is conducted. A real data set from an assay of serum free light chain is also analysed to illustrate the advantages of the proposed method in medical investigations.
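As context only (the regression tree model itself is not specified in the abstract), a minimal Cox proportional hazards baseline of the kind such tree models are typically compared against can be fitted with the `lifelines` package; the column names and data below are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical time-to-event data: duration, event indicator, and two covariates.
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 14, 7, 11],
    "event": [1, 0, 1, 1, 0, 1, 1, 0],
    "age":   [61, 54, 70, 48, 66, 59, 72, 50],
    "flc":   [3.2, 1.1, 4.8, 0.9, 2.7, 3.9, 5.1, 1.4],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # estimated covariate effects (hazard ratios)
```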
Abolfazl Poureidi
Abstract
Let $G=(V,E)$ be a graph. A double Roman dominating function (DRDF) on $G$ is a function $f:V\to\{0,1,2,3\}$ such that for every vertex $v\in V$, if $f(v)=0$, then either there is a vertex $u$ adjacent to $v$ with $f(u)=3$ or there are vertices $x$ and $y$ adjacent to $v$ with $f(x)=f(y)=2$, and if $f(v)=1$, then there is a vertex $u$ adjacent to $v$ with $f(u)\geq 2$. A DRDF $f$ on $G$ is a total DRDF (TDRDF) if for any $v\in V$ with $f(v)>0$ there is a vertex $u$ adjacent to $v$ with $f(u)>0$. The weight of $f$ is the sum $f(V)=\sum_{v\in V}f(v)$. The minimum weight of a TDRDF on $G$ is the total double Roman domination number of $G$. In this paper, we give a linear algorithm to compute the total double Roman domination number of a given tree.
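The definitions above translate directly into a brute-force validity check; the sketch below (not the paper's linear-time algorithm) verifies whether a labelling f is a total double Roman dominating function of a graph built with networkx.

```python
import networkx as nx

def is_tdrdf(G, f):
    """Check whether f: V -> {0,1,2,3} is a total double Roman dominating function of G."""
    for v in G.nodes:
        nbr_vals = [f[u] for u in G.neighbors(v)]
        if f[v] == 0:
            # needs a neighbour labelled 3, or two neighbours labelled 2
            if 3 not in nbr_vals and nbr_vals.count(2) < 2:
                return False
        elif f[v] == 1:
            # needs a neighbour labelled at least 2
            if not any(x >= 2 for x in nbr_vals):
                return False
        if f[v] > 0 and not any(x > 0 for x in nbr_vals):
            # totality: every positive vertex needs a positive neighbour
            return False
    return True

T = nx.path_graph(4)                      # a small tree: 0-1-2-3
f = {0: 2, 1: 2, 2: 2, 3: 2}
print(is_tdrdf(T, f), sum(f.values()))    # True, weight 8
```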
Maedeh Mehravaran; Fazlollah Adibnia; Mohammad-Reza Pajoohan
Abstract
In the real world, organizations' requirements for high-performance resources and high-capacity storage devices encourage them to use resources in public clouds. While a private cloud provides security and low cost for scheduling a workflow, public clouds provide higher scale, are potentially exposed to the risk of data and computation breaches, and incur usage costs. Task scheduling, therefore, is one of the most important problems in cloud computing. In this paper, a new scheduling method is proposed for workflow applications in a hybrid cloud considering security. The sensitivity of tasks has been considered in recent works; we, however, consider security requirements for data and security strength for resources. The proposed scheduling method is implemented in a Particle Swarm Optimization (PSO) algorithm. Our proposed algorithm minimizes the security distance, that is, it maximizes the similarity of security between data and resources, while following time and budget constraints. Through analysis of experimental results, it is shown that the proposed algorithm selects resources with the most security similarity while user constraints are satisfied.
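The "security distance" mentioned above suggests a fitness term of the following shape; the sketch below is only an assumed formalisation (the security levels, the one-sided penalty, and the assignment encoding are hypothetical), showing how such a term could enter a PSO fitness function.

```python
def security_distance(assignment, data_requirement, resource_strength):
    """Assumed security-distance term: sum over tasks of how far the assigned
    resource's security strength falls short of the data's security requirement."""
    total = 0.0
    for task, resource in assignment.items():
        gap = data_requirement[task] - resource_strength[resource]
        total += max(gap, 0.0)          # only penalise insufficient security
    return total

# Hypothetical task->resource assignment, requirements and strengths in [0, 1].
assignment = {"t1": "private_vm", "t2": "public_vm"}
requirement = {"t1": 0.9, "t2": 0.3}
strength = {"private_vm": 0.95, "public_vm": 0.5}
print(security_distance(assignment, requirement, strength))   # 0.0
```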
Behnam Iranfar; Mohammad Farshi
Abstract
Given a point set $S\subset \mathbb{R}^d$, the $\theta$-graph of $S$ is defined as follows: for each point $s\in S$, draw cones with apex at $s$ and angle $\theta$, and connect $s$ to the point in each cone whose projection on the bisector of the cone is closest to $s$. One can define the $\theta$-graph on an uncertain point set, i.e. a point set where each point $s_i$ exists with an independent probability $\pi_i \in (0,1]$. In this paper, we propose an algorithm that computes the expected weight of the $\theta$-graph on a given uncertain point set. The proposed algorithm takes $O(n^2\alpha(n^2,n)^{2d})$ time and $O(n^2)$ space, where $n$ is the number of points, $d$ and $\theta$ are constants, and $\alpha$ is the inverse of the Ackermann function.
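For intuition, the construction described above can be sketched in the plane ($d=2$) as follows; this is a direct reading of the definition with hypothetical input points and cone count, not the paper's expected-weight algorithm.

```python
import math

def theta_graph(points, k):
    """Build the theta-graph of a 2D point set with k cones of angle 2*pi/k per point:
    in each cone, connect the apex to the point whose projection onto the cone
    bisector is closest to the apex (assumes k >= 4 so the projection is positive)."""
    theta = 2 * math.pi / k
    edges = set()
    for i, (px, py) in enumerate(points):
        best = {}                                   # cone index -> (proj_dist, j)
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            ang = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(ang // theta)
            bisector = (cone + 0.5) * theta
            proj = math.hypot(qx - px, qy - py) * math.cos(ang - bisector)
            if cone not in best or proj < best[cone][0]:
                best[cone] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return edges

pts = [(0, 0), (2, 1), (1, 3), (-1, 2)]             # hypothetical input
print(theta_graph(pts, k=6))
```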
Dara Moazzami
Abstract
The edge-tenacity $T_e(G)$ of a graph $G$ was defined as
\begin{center} $T_e(G)=\displaystyle \min_{F\subset E(G)}\{\frac{\mid F\mid +\tau(G-F)}{\omega(G-F)}\}$ \end{center}
where the minimum is taken over all edge cutsets $F$ of $G$. We define $G-F$ to be the graph induced by the edges of $E(G)-F$, $\tau(G-F)$ is the number of edges in the largest component of the graph induced by $G-F$, and $\omega(G-F)$ is the number of components of $G-F$. A set $F\subset E(G)$ is said to be a $T_e$-set of $G$ if
\begin{center} $T_e(G)=\frac{\mid F\mid+\tau(G-F)}{\omega(G-F)}$ \end{center}
and each component has at least one edge. In this paper, we introduce edge-tenacity as a new invariant for graphs; it is another vulnerability measure. We present several properties of, and bounds on, the edge-tenacity, and we also compute the edge-tenacity of some classes of graphs.
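The definition above lends itself to a brute-force computation on small graphs, which can help in checking bounds of the kind discussed; the enumeration below follows the formula directly and is exponential in the number of edges (an illustration, not a practical algorithm).

```python
from itertools import combinations
import networkx as nx

def edge_tenacity(G):
    """Brute-force T_e(G) = min over edge cutsets F of (|F| + tau(G-F)) / omega(G-F),
    where G-F is the graph induced by the edges E(G)-F."""
    edges = list(G.edges())
    best = None
    for r in range(1, len(edges)):
        for F in combinations(edges, r):
            H = nx.Graph()
            H.add_edges_from(e for e in edges if e not in F)
            comps = list(nx.connected_components(H))
            omega = len(comps)
            if omega < 2:                 # F must be an edge cutset
                continue
            tau = max(H.subgraph(c).number_of_edges() for c in comps)
            value = (len(F) + tau) / omega
            if best is None or value < best:
                best = value
    return best

print(edge_tenacity(nx.cycle_graph(4)))   # small example: C_4
```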
Elham Daadmehr; Reza Habibi
Abstract
To check financial stability, it is important to warn of the possibility of a future potential financial crisis. In the literature, the early warning system (EWS) is designed to warn of the occurrence of a financial crisis before it happens. This tool strengthens managers' ability to make efficient policy in real economic activities. Hyperinflation, as a financial crisis, is an uncommon and harmful phenomenon in any economy. It quickly erodes the real value of the local currency, as the prices of all goods increase. This causes people to minimize their holdings in that currency as they usually switch to more stable foreign currencies, often the US Dollar. Hence, designing an EWS for detecting hyperinflation is a valuable task. In the current paper, Iran's monthly inflation is modeled by a first-order autoregressive moving average model (ARMA) with two-state Markov switching (MS), i.e., \( MS \left( 2 \right) -ARMA \left( 1,1 \right) \). Based on this model, a logistic EWS is proposed. From the empirical results, it is seen that, in Iran, the low inflation state is more probable than the high inflation state. Besides this, the time spent in the low inflation state is almost 9 times longer than in the high inflation state. To check the validity of the results and control prediction errors, it is seen that at least 89 percent of the future states of inflation are correctly predicted with a low noise-to-signal ratio discrepancy measure.
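As a rough computational analogue only: statsmodels does not provide MS-ARMA(1,1) directly, so the sketch below fits a two-regime Markov-switching AR(1) as a stand-in, on a simulated series rather than Iran's inflation data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in series with a calm regime followed by a high-mean regime.
rng = np.random.default_rng(0)
y = pd.Series(np.concatenate([rng.normal(1.0, 0.5, 120), rng.normal(4.0, 1.0, 60)]))

# Two-regime Markov-switching AR(1): regimes differ in mean and variance.
model = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=1, switching_variance=True)
result = model.fit()

print(result.summary())
# Smoothed probability of the second (high-mean) regime; a logistic-type EWS
# would raise a warning when this probability crosses a chosen threshold.
print(result.smoothed_marginal_probabilities[1].head())
```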
Mahdi Imanparast; Mehdi Kazemi Torbaghan
Abstract
A new algorithm for the point-inclusion test in convex polygons is introduced. The proposed algorithm answers the point-inclusion test for convex polygons in $\mathcal{O}(\log n)$ time without any preprocessing and with $\mathcal{O}(n)$ space. The proposed algorithm is extended to the point-inclusion test for convex polyhedra in three-dimensional space. This algorithm can solve the point-inclusion test for convex 3D polyhedra in $\mathcal{O}(\log n)$ time with $\mathcal{O}(n)$ preprocessing time and $\mathcal{O}(n)$ space.
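Since the abstract does not describe the algorithm itself, the sketch below shows only the standard binary-search ("wedge") test for a convex polygon given by vertices in counter-clockwise order, as a point of comparison; it is not necessarily the paper's method.

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive when o->a->b turns left."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_convex(poly, p):
    """Standard O(log n) point-inclusion test for a convex polygon whose vertices
    are given in counter-clockwise order (boundary counts as inside)."""
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False
    # Binary search for the wedge poly[0], poly[lo], poly[lo+1] containing p.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]           # CCW vertices
print(inside_convex(square, (2, 2)), inside_convex(square, (5, 1)))   # True False
```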