Masoud Rabbani; Reza Tavakkoli Moghaddam; Yasaman Khodadadegan; Mo'eed Hagh Nevis
Abstract
Nowadays, flexibility is of essential importance to any organization, and its relationship with organizational efficiency and performance deserves investigation. In real-world situations, decision making is often based on linguistic and subjective data, so applying fuzzy logic in modeling is convenient from this point of view. In this paper, a fuzzy approach is applied to a developed organizational flexibility model that considers both organizational efficiency and flexibility. The fuzzy environment brings the model closer to real conditions. The problem is solved under both crisp and fuzzy conditions, and the associated results are presented.
Shekoufeh Hakim; Mehdi Nekoumanesh Haghighi; Mehrdad Agha'ee Niat
Seyyed Asghar Arjmandi; Vahid Lotfi
Abstract
The Ritz method is one of the techniques for reducing the number of degrees of freedom, and its efficiency depends on the vectors used. Unlike the modal method, the Ritz method uses load-dependent vectors, and for this reason it is expected to give better results than the modal method; this is its main advantage. It is worth mentioning that mode shapes are independent of the loading, whereas vectors calculated from the spatial distribution of the loads are load-dependent. Humar et al. introduced a new type of vector that depends not only on the spatial distribution of the loads but also on their frequency content; these are called frequency-dependent vectors. In this paper a new class of vectors is proposed that can replace both previous types, and it is shown that these vectors are linearly independent. Various two- and three-dimensional examples utilizing the new Ritz vectors are analyzed, and the method is observed to be effective and efficient.
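The recurrence behind load-dependent Ritz vectors can be sketched on a tiny system: the first vector is the static response to the load, and each successive vector is the static response to the inertia forces of the previous one, orthogonalized with respect to the mass matrix. The 2-DOF matrices below are illustrative values, not taken from the paper.

```python
# Minimal sketch: load-dependent Ritz vectors for a 2-DOF system.
# K, M and the load vector r are illustrative, not from the paper.

def solve2(K, b):
    """Solve a 2x2 system K x = b by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(b[0] * K[1][1] - b[1] * K[0][1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

def m_dot(M, a, b):
    """Inner product a^T M b."""
    return sum(a[i] * M[i][j] * b[j] for i in range(2) for j in range(2))

def ritz_vectors(K, M, r):
    """First vector: static response to r; second: K^-1 M x1,
    M-orthogonalized against x1 and M-normalized."""
    x1 = solve2(K, r)
    n1 = m_dot(M, x1, x1) ** 0.5
    x1 = [v / n1 for v in x1]
    y = solve2(K, [sum(M[i][j] * x1[j] for j in range(2)) for i in range(2)])
    c = m_dot(M, y, x1)
    x2 = [y[i] - c * x1[i] for i in range(2)]
    n2 = m_dot(M, x2, x2) ** 0.5
    return x1, [v / n2 for v in x2]

K = [[2.0, -1.0], [-1.0, 1.0]]   # stiffness (illustrative)
M = [[1.0, 0.0], [0.0, 1.0]]     # mass (illustrative)
x1, x2 = ritz_vectors(K, M, [0.0, 1.0])
```

The resulting vectors are M-orthonormal by construction, which is the property that makes the reduced system well-conditioned.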
Navid Khabbazi; Keyvan Sadeghi
Abstract
In the present work, the effect of the fluid's elasticity on the hydrodynamic instability of the Blasius flow was investigated. To determine the critical Reynolds number as a function of the elasticity number, a two-dimensional linear temporal stability analysis was invoked. The viscoelastic fluid is assumed to obey the Walters' B fluid model, for which base-flow velocity profiles are available in the literature. Neglecting terms nonlinear in the perturbation quantities, a generalized Orr-Sommerfeld equation was obtained that incorporates an elastic term. The resulting eigenvalue problem was solved numerically using the Chebyshev collocation method. Based on the results obtained in this work, the fluid's elasticity is predicted to have a destabilizing effect on the Blasius flow.
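The building block of a Chebyshev collocation solver of this kind is the differentiation matrix on the Gauss-Lobatto points. A minimal sketch of the standard construction (after Trefethen) is below; it is generic, not the paper's own Orr-Sommerfeld code, and is checked here by differentiating a polynomial exactly.

```python
from math import cos, pi

def cheb(N):
    """Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/N) and the
    (N+1)x(N+1) spectral differentiation matrix, the standard
    ingredient of collocation eigenvalue solvers."""
    x = [cos(j * pi / N) for j in range(N + 1)]
    c = [2.0] + [1.0] * (N - 1) + [2.0]   # endpoint weights
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    # diagonal by the "negative sum" trick (rows sum to zero)
    for i in range(N + 1):
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

x, D = cheb(8)
f = [xi ** 2 for xi in x]                 # sample f(x) = x^2
df = [sum(D[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]
```

Since the matrix differentiates polynomials of degree up to N exactly, `df` reproduces 2x at every collocation point, which is a quick sanity check before assembling an Orr-Sommerfeld operator from powers of D.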
Charles Colbourn; Jose Torres-Jimenez
Abstract
Covering arrays of strength two have been widely studied as combinatorial models of software interaction test suites for pairwise testing. While numerous algorithmic techniques have been developed for the generation of covering arrays with few columns (factors), the construction of covering arrays with many factors and few tests by these techniques is problematic. Random generation techniques can overcome these computational difficulties, but for strength two they do not appear to yield a number of tests that is competitive with the fewest known.
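The strength-two coverage property itself is easy to state in code: every pair of columns must exhibit every ordered pair of symbols. A small checker, with a classical 4-test array for three binary factors as the example:

```python
from itertools import combinations

def is_covering_array(rows, levels):
    """Strength-2 check: for every pair of columns, all levels*levels
    ordered symbol pairs from {0..levels-1} must appear in some row."""
    k = len(rows[0])
    for c1, c2 in combinations(range(k), 2):
        seen = {(row[c1], row[c2]) for row in rows}
        if len(seen) < levels * levels:
            return False
    return True

# CA(4; 2, 3, 2): 4 tests suffice for 3 binary factors at strength two.
ca = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
```

Checkers like this are the verification half of the problem; the hard part the abstract discusses is generating arrays with few rows when the number of factors is large.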
Dara Moazzami
Abstract
In this paper we discuss tenacity and its properties in stability calculations. We indicate relationships between tenacity and connectivity, tenacity and binding number, and tenacity and toughness. We also give good lower and upper bounds for tenacity. Since we are primarily interested in the case where disruption of the graph is caused by the removal of a vertex or vertices (and the resulting loss of all edges incident with the removed vertices), we restrict our discussion to vertex stability measures. In the interest of completeness, however, we have included several related measures of edge stability.
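The usual definition of tenacity, T(G) = min over disconnecting vertex sets S of (|S| + τ(G−S)) / ω(G−S), where τ is the order of the largest remaining component and ω the number of components, can be computed by brute force on small graphs. This is a generic illustration of the definition, not an algorithm from the paper:

```python
from itertools import combinations

def components(adj, removed):
    """Connected components after deleting the vertices in `removed`."""
    left = set(adj) - removed
    comps = []
    while left:
        stack = [left.pop()]
        comp = {stack[0]}
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in left:
                    left.discard(w)
                    comp.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def tenacity(adj):
    """T(G) = min (|S| + tau(G-S)) / omega(G-S) over vertex sets S whose
    removal disconnects G (exhaustive search, small graphs only)."""
    verts = list(adj)
    best = None
    for r in range(len(verts) - 1):
        for S in combinations(verts, r):
            comps = components(adj, set(S))
            if len(comps) > 1:
                val = (len(S) + max(len(c) for c in comps)) / len(comps)
                best = val if best is None else min(best, val)
    return best

# Path on 4 vertices: removing a middle vertex leaves two components.
path4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

For the path P4 every disconnecting choice gives the same ratio, so T(P4) = 3/2.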
R. Jamunarani; P. Jeyanthi; M. Velrajan
Abstract
In this paper, we introduce λκ−closed sets and study their properties in generalized topological spaces.
Peyman Nasehpour
Abstract
In this paper, we introduce a family of graphs which generalizes zero-divisor graphs and compute an upper bound for the diameter of such graphs. We also investigate their cycles and cores.
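The base object being generalized is concrete enough to compute: the zero-divisor graph of Z_n joins two nonzero zero-divisors a and b whenever ab ≡ 0 (mod n), and diameter bounds like the one in the abstract can be checked by breadth-first search. A small sketch (the family in the paper is more general than this):

```python
def zero_divisor_graph(n):
    """Zero-divisor graph of Z_n: vertices are the nonzero zero-divisors,
    with an edge a-b whenever a*b is congruent to 0 mod n."""
    verts = [a for a in range(1, n)
             if any(a * b % n == 0 for b in range(1, n))]
    adj = {a: set() for a in verts}
    for a in verts:
        for b in verts:
            if a != b and a * b % n == 0:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def diameter(adj):
    """Largest BFS eccentricity over all vertices (connected graph)."""
    best = 0
    for src in adj:
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for v in frontier:
                for w in adj[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        nxt.append(w)
            frontier = nxt
        best = max(best, max(dist.values()))
    return best
```

For Z_12 the zero-divisors are {2, 3, 4, 6, 8, 9, 10} and the diameter is 3, which matches the classical bound diam ≤ 3 for zero-divisor graphs.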
G. Marimuthu; S. Stalin Kumar
Abstract
An H-magic labeling of an H-decomposable graph G is a bijection f : V(G) ∪ E(G) → {1, 2, ..., p + q} such that for every copy H in the decomposition, Σ_{v∈V(H)} f(v) + Σ_{e∈E(H)} f(e) is constant. f is said to be H-E-super magic if f(E(G)) = {1, 2, ..., q}. A family of subgraphs H1, H2, ..., Hh of G is a mixed cycle-decomposition of G if every subgraph Hi is isomorphic to some cycle Ck with k ≥ 3, E(Hi) ∩ E(Hj) = ∅ for i ≠ j, and ∪_{i=1}^{h} E(Hi) = E(G). In this paper, we prove that K2m,2n is mixed cycle-E-super magic decomposable for m ≥ 2, n ≥ 3, with the help of the results found in [1].
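The mixed cycle-decomposition condition in the definition is mechanical to verify: the parts must each be a cycle of length at least 3, be pairwise edge-disjoint, and together cover every edge. A small checker (illustrative, not from the paper), tested on two triangles sharing a vertex:

```python
def is_cycle(edges):
    """Edge set forms a single cycle: every vertex has degree 2 and
    the edge set is connected."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    if any(d != 2 for d in deg.values()):
        return False
    verts = set(deg)
    seen = {next(iter(verts))}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if (u in seen) != (v in seen):
                seen |= {u, v}
                changed = True
    return seen == verts

def is_mixed_cycle_decomposition(graph_edges, parts):
    """Parts must be edge-disjoint cycles of length >= 3 covering E(G)."""
    norm = lambda e: tuple(sorted(e))
    covered = [norm(e) for part in parts for e in part]
    return (all(is_cycle(p) and len(p) >= 3 for p in parts)
            and len(covered) == len(set(covered))
            and set(covered) == {norm(e) for e in graph_edges})

# Two triangles sharing vertex 3 ("bowtie"): a valid mixed decomposition.
bowtie = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)]
parts = [[(1, 2), (2, 3), (3, 1)], [(3, 4), (4, 5), (5, 3)]]
```

Dropping one cycle from `parts` leaves edges uncovered, so the check fails, as expected.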
Fatemeh Ganji; Zahrasadat Zamani
Abstract
Optimization of inventory costs is the most important goal in industry, but in many models the constraints are considered simple and relaxed. More realistic constraints arise when combinatorial production and purchase models are considered in a multi-product environment. The purpose of this article is to improve the efficiency of inventory management and find the economic order quantity and economic production quantity that minimize inventory cost while maintaining customer satisfaction. In this study, models with these targets for combined production and purchase systems under warehouse and budget constraints are proposed. Since solving the problem with an exact method requires a long time, we develop a genetic algorithm. To evaluate the efficiency of the proposed algorithm, test problems of different sizes, in the range from 1 to 2000 jobs, are generated. The results show that the genetic method is efficient for determining economic order and production quantities. The computational results demonstrate that the average error of the solution is 10.93%.
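For reference, the unconstrained textbook formulas that the constrained, multi-product problem generalizes are one-liners; the constrained combinatorial version in the abstract is what forces a heuristic such as a genetic algorithm. The parameter values below are illustrative only.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Classical economic order quantity: sqrt(2*D*S / H)."""
    return sqrt(2 * demand * order_cost / holding_cost)

def epq(demand, setup_cost, holding_cost, production_rate):
    """Economic production quantity; requires demand < production_rate,
    since items accumulate only at the net rate (1 - D/P)."""
    return sqrt(2 * demand * setup_cost /
                (holding_cost * (1 - demand / production_rate)))
```

With D = 500, S = 10 and H = 4 the EOQ is exactly 50, and the EPQ is always larger than the EOQ for the same cost data because holding accrues more slowly during production.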
R Ponraj; J Maruthamani
Abstract
Let $G$ be a $(p,q)$ graph. Let $f:V(G)\to\{1,2, \ldots, k\}$ be a map where $k \in \mathbb{N}$ and $k>1$. For each edge $uv$, assign the label $\gcd(f(u),f(v))$. $f$ is called a $k$-total prime cordial labeling of $G$ if $\left|t_{f}(i)-t_{f}(j)\right|\leq 1$ for $i,j \in \{1,2, \ldots,k\}$, where $t_{f}(x)$ denotes the total number of vertices and edges labelled with $x$. A graph with a $k$-total prime cordial labeling is called a $k$-total prime cordial graph. In this paper we investigate the $4$-total prime cordial labeling of some graphs such as the prism, helm, dumbbell and sunflower graphs.
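The balance condition in this definition can be checked directly from a candidate labeling: count how often each value in {1, ..., k} occurs among vertex labels and induced edge labels, then compare the extremes. A small generic checker (the example graph is illustrative, not one from the paper):

```python
from math import gcd

def is_k_total_prime_cordial(vertex_labels, edges, k):
    """vertex_labels: dict v -> label in {1..k}; each edge uv gets the
    induced label gcd(f(u), f(v)). Check |t_f(i) - t_f(j)| <= 1."""
    counts = {i: 0 for i in range(1, k + 1)}
    for lab in vertex_labels.values():
        counts[lab] += 1
    for u, v in edges:
        counts[gcd(vertex_labels[u], vertex_labels[v])] += 1
    vals = list(counts.values())
    return max(vals) - min(vals) <= 1
```

On the single edge 1-2 with f(1) = 1 and f(2) = 2, the edge gets gcd(1, 2) = 1, so t_f(1) = 2 and t_f(2) = 1 and the labeling is 2-total prime cordial; labeling both endpoints 1 breaks the balance.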
D. Moazzami
Abstract
The stability of a communication network composed of processing nodes and communication links is of prime importance to network designers. As the network begins losing links or nodes, eventually there is a loss in its effectiveness. Thus, communication networks must be constructed to be as stable as possible, not only with respect to the initial disruption, but also with respect to the possible reconstruction of the network. For any fixed integers n,p with p ≥ n + 1, Harary constructed classes of graphs Hn,p that are n-connected with the minimum number of edges. Thus Harary graphs are examples of graphs with maximum connectivity. This property makes them useful to network designers and thus it is of interest to study the behavior of other stability parameters for the Harary graphs. In this paper we study the toughness of the third case of the Harary graphs.
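In the simplest of Harary's three construction cases (n even), H_{n,p} is the circulant graph in which each of the p vertices is joined to its n/2 nearest neighbours on each side of a cycle; the other cases, including the third one studied in the paper, add extra chords to this base. A sketch of the even-n construction only:

```python
def harary(n, p):
    """Harary graph H_{n,p} for even n: vertex i on a cycle of p vertices
    is joined to its n/2 nearest neighbours on each side. (The odd-n
    cases add further chords and are not built here.)"""
    assert n % 2 == 0 and p >= n + 1
    adj = {i: set() for i in range(p)}
    for i in range(p):
        for d in range(1, n // 2 + 1):
            adj[i].add((i + d) % p)
            adj[i].add((i - d) % p)
    return adj

g = harary(4, 8)
```

The construction is n-regular with np/2 edges, the minimum possible for an n-connected graph, which is exactly the optimality property the abstract refers to.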
P. Jeyanthi; T. Saratha Devi
Abstract
Let G be a (p,q) graph. An injective map f : E(G) → {±1, ±2, ..., ±q} is said to be an edge pair sum labeling if the induced vertex function f* : V(G) → Z − {0} defined by f*(v) = Σ_{e∈Ev} f(e) is one-one, where Ev denotes the set of edges in G that are incident with a vertex v, and f*(V(G)) is either of the form {±k1, ±k2, ..., ±k_{p/2}} or {±k1, ±k2, ..., ±k_{(p−1)/2}} ∪ {±k_{(p+1)/2}} according as p is even or odd. A graph with an edge pair sum labeling is called an edge pair sum graph. In this paper we prove that the graphs GL(n), the double triangular snake D(Tn), Wn, Fln, <Cm, K1,n> and <Cm * K1,n> admit edge pair sum labeling.
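The conditions on the induced vertex sums, nonzero, pairwise distinct, and pairing up as {±k} with at most one unpaired value when p is odd, can be verified mechanically for a candidate labeling. A small checker with the path u-v-w as an illustrative example (not a graph from the paper):

```python
def is_edge_pair_sum(p, edge_labels, incidence):
    """edge_labels: dict edge -> nonzero integer label (assumed injective
    with |label| <= q). incidence: dict v -> list of incident edges.
    Check that the induced vertex sums f*(v) are nonzero, distinct, and
    occur in +/- pairs with exactly (p mod 2) unpaired values."""
    sums = [sum(edge_labels[e] for e in incidence[v]) for v in incidence]
    if 0 in sums or len(set(sums)) != len(sums):
        return False
    unpaired = sum(1 for s in sums if -s not in sums)
    return unpaired == (p % 2)

# Path u - v - w with edges e1 = uv, e2 = vw.
inc = {'u': ['e1'], 'v': ['e1', 'e2'], 'w': ['e2']}
```

With f(e1) = 1 and f(e2) = −2 the induced sums are {1, −1, −2}, i.e. {±1} plus one unpaired value, which is the required form for odd p = 3; the labeling {1, 2} instead yields sums {1, 3, 2} and fails.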
Fatemehzahra Saberifar; Ali Mohades; Mohammadreza Razzazi; Jason J. M. O'Kane
Abstract
Combinatorial filters, which are discrete representations of estimation processes, have been the subject of increasing interest from the robotics community in recent years. This paper considers automatic reduction of combinatorial filters to a given size, even if that reduction necessitates changes to the filter's behavior. We introduce an algorithmic problem called improper filter reduction, in which the input is a combinatorial filter F along with an integer k representing the target size. The output is another combinatorial filter F' with at most k states, such that the difference in behavior between F and F' is minimal. We present two methods for measuring the distance between pairs of filters, describe dynamic programming algorithms for computing these distances, and show that improper filter reduction is NP-hard under these methods. We then describe two heuristic algorithms for improper filter reduction, one greedy sequential approach, and one randomized global approach based on prior work on weighted improper graph coloring. We have implemented these algorithms and analyze the results of three sets of experiments.
Ali Asghar Atai; Masoud Shariat Panahi; Kave Ebrahimi
Abstract
Cables have always been under consideration as a structural element because of important features such as large strength-to-weight ratios and long spans. Their equilibrium analysis is an important issue in this regard, but it involves highly nonlinear equations arising from large deformations and material nonlinearity. Different methods have been in use for the numerical analysis of such structures. Using the principle of minimum total potential energy is one of the common methods of finding the equilibrium configuration of a structure. In this method, which is an alternative to the direct solution of the nonlinear equations of equilibrium, analytically or numerically by finite elements for example, the total potential energy of the structure is minimized using optimization techniques, and the forces and deformations corresponding to the equilibrium configuration are computed. In cable structures, which under the ideal assumption cannot withstand compressive forces, the potential energy functional has a discontinuous derivative; thus the classic methods of optimization, which make use of the derivative of the objective function, cannot be used. Usually, energy considerations are used as the basis for obtaining the equilibrium equations of the structure, but rarely is the energy used as a function whose numerical minimization gives the equilibrium configuration. In this paper a new method of solving the nonlinear equations of elastic equilibrium of cable structures is presented. First, the potential energy functional of the cable structure with large deformations is established. Then Powell's optimization algorithm, which does not depend on the derivatives of the objective function, is applied, and the equilibrium configuration is obtained as the minimizer of the functional.
The proposed method can determine the force in each cable, the displacements of the cable junctions (nodes) and the slack cables (cables with no tension) with great speed and accuracy compared to the classic methods. This work consists of a brief review of the different methods currently in use for analyzing cable structures. Then the potential energy of a single cable is obtained and generalized to a cable network, after which Powell's minimization technique is explained. In the examples section, a very simple structure for which the analytical result is available is considered as a validating example, followed by several illustrative examples. The results are in good agreement with previously published ones.
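The essential idea, a non-smooth energy (slack cables store no energy) minimized by a derivative-free search, can be sketched on a single node hung from two cables. The numbers are illustrative, and a simple pattern search stands in for Powell's method, which builds conjugate directions but shares the derivative-free character:

```python
def cable_energy(x, y, anchors=((0.0, 0.0), (10.0, 0.0)),
                 rest_len=6.0, stiffness=100.0, weight=50.0):
    """Potential energy of a weight hung from two cables at fixed anchors.
    A cable stores energy only when stretched (slack cables carry no
    compression), so the functional has a discontinuous derivative."""
    e = weight * y  # gravity potential of the node (y measured upward)
    for ax, ay in anchors:
        length = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
        stretch = max(0.0, length - rest_len)
        e += 0.5 * stiffness * stretch ** 2
    return e

def coordinate_search(f, x, y, step=1.0, tol=1e-6):
    """Derivative-free pattern search (a simple stand-in for Powell's
    conjugate-direction method): try axis moves, halve the step on failure."""
    best = f(x, y)
    while step > tol:
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            val = f(x + dx, y + dy)
            if val < best:
                x, y, best, moved = x + dx, y + dy, val, True
                break
        if not moved:
            step /= 2
    return x, y, best

x, y, e = coordinate_search(cable_energy, 5.0, 0.0)
```

By symmetry the node stays at x = 5 and sags until the cable stretch balances the weight, around y ≈ −4 for these parameters, all without ever evaluating a derivative of the kinked energy.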
Sohrab Ali Ghorbanian; Ali Reza Salehpour; Saeed Reza Radpour
Asghar Gharedaghi; Seyyed Ali Asghar Akbari Mousavi
Homa Keshavarzi Shirazi; Ayyoub Torkian; Ali Akbar Azimi; Naser Mehrdadi
Abstract
The performance of a pilot biofiltration system in removing triethylamine (TEA) vapor from an air stream was evaluated in this study. Experiments were conducted with two 6-L three-section biofilters containing a mixture of compost (60%) and wood chips (40%). The systems were operated at 20±2 and 30±1 °C. Municipal activated sludge was added initially to promote microbial growth, and the systems were started after an initial adaptation period of 40 days. Various loading rates (8-130 g/m3.hr) and detention times (40-60 seconds) were studied to evaluate their effect on the performance of the biofilter for TEA removal. Results indicated a significant decrease in elimination capacity (EC) for HRT < 48 s, but negligible differences were observed between 60 and 48 s. TEA removal in section one was significantly higher than in the other two. A maximum EC of 61 g/m3.hr at an HRT of 48 seconds, humidity of 50-55%, and loading rate of 90.6 g/m3.hr was observed for reactor A. A maximum EC of 72 g/m3.hr at an HRT of 48 s, humidity of 50-55%, and loading rate of 114.4 g/m3.hr was observed for reactor B, which had the higher temperature.
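The performance quantities quoted above follow from standard biofilter mass balances: EC = Q(C_in − C_out)/V and loading = Q·C_in/V. A quick sketch with illustrative numbers (a 6-L bed at 48 s HRT implies Q = V/HRT = 0.45 m³/hr; the concentrations below are made up, not the paper's data):

```python
def elimination_capacity(flow_m3_hr, c_in, c_out, bed_volume_m3):
    """EC (g/m^3.hr) = Q * (C_in - C_out) / V, concentrations in g/m^3."""
    return flow_m3_hr * (c_in - c_out) / bed_volume_m3

def loading_rate(flow_m3_hr, c_in, bed_volume_m3):
    """Inlet mass loading (g/m^3.hr) = Q * C_in / V."""
    return flow_m3_hr * c_in / bed_volume_m3

def removal_efficiency(c_in, c_out):
    """Percent of inlet contaminant removed."""
    return 100.0 * (c_in - c_out) / c_in
```

EC can never exceed the loading rate, and the two are equal only at 100% removal, which is why the abstract reports both figures for each reactor.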
Ali Dehghani Ahmadabadi; Ziaeddin Pourkarimi; Mohammad Noparast
Abstract
Due to the importance of material classification in grinding circuits, the performance of the primary and secondary hydrocyclones in the grinding circuit of the Esfordi phosphate plant was investigated by sampling the feed, overflow and underflow of these hydrocyclones over 7 days. First, the d80 values of the above streams were calculated by screening analysis. The results showed that the average d80 values of the feed, overflow and underflow of the primary hydrocyclone were 238.7, 236.5 and 100.9 microns, respectively, and 100.9 and 94.1 microns for the feed and underflow of the secondary hydrocyclone. For the primary hydrocyclone, by plotting the distribution (partition) curves, the actual and corrected cut points were calculated as 35 and 45 microns, respectively, whereas they were equal to 100 microns according to the design documents. Given this undesirable performance of the primary hydrocyclone, its operation was investigated using simulation software. The results indicated a cut point of 80 microns and a pressure drop of 192.30 kPa in particle classification. In addition, the circulating load was found to be 931.84% (in the primary mill design this amount was equal to 150%), and the high feed rate to the hydrocyclone was one of the major reasons for this high circulating load. To solve this problem, adding one more hydrocyclone to the circuit, in parallel or in series, was considered, and the results were studied using the BMCS software. Adding a parallel hydrocyclone gave a cut point of 86 microns with a pressure drop of 62.91 kPa, which was suitable for the circuit.
Seyyed Amir Reza Beyabanaki; Ahmad Jafari
Abstract
In this paper, the validity of 3-D DDA is first examined by comparing its solution for dynamic block displacement with the analytical solution. The displacement of a single block on an inclined plane subjected to dynamic loading is studied analytically with respect to the frictional resistance offered by the inclined slope, and 3-D DDA is found to predict the analytical displacements accurately. The modification of point-to-face contact constraints is also studied. In the original 3-D DDA method, block contact constraints are enforced using the penalty method. This approach is quite simple, but may lead to inaccuracies, which may be large for small values of the penalty number. The penalty method also creates block contact overlap, which violates the physical constraints of the problem. These two limitations are overcome by using the Augmented Lagrangian Method, which has been programmed in VC++; its implementation into 3-D DDA is presented through an illustrative example. This method has been found to model block contact quite well.
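The analytical benchmark used for this kind of validation is the classic rigid block sliding from rest on an incline with Coulomb friction: constant acceleration g(sin θ − μ cos θ) while tan θ > μ, hence a quadratic displacement in time. A sketch with illustrative parameter values:

```python
from math import sin, cos, radians

def sliding_displacement(t, theta_deg, mu, g=9.81):
    """Analytical displacement of a rigid block sliding from rest on an
    incline of angle theta under gravity with Coulomb friction mu:
        d(t) = 0.5 * g * (sin(theta) - mu*cos(theta)) * t^2,
    valid while tan(theta) > mu; otherwise the block does not slide."""
    th = radians(theta_deg)
    a = g * (sin(th) - mu * cos(th))
    return 0.5 * a * t * t if a > 0 else 0.0
```

For a 30-degree slope with μ = 0.2 the block travels about 1.603 m in the first second, the kind of closed-form value a DDA displacement history can be compared against.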
Zahra Mousavi; Behzad Vosoghi
Abstract
A new and significant source of information for earthquake studies has been provided by space geodesy. The data gathered by the various techniques of space geodesy can quantify the potential of seismic activity in a region of interest. To this end, the main advantage of extra-terrestrial geodetic data over conventional data from geology and seismology is its ability to portray the present kinematics of an area whose faults are unknown, too slowly slipping, or too deeply buried. The seismic moment rate, which can be calculated from geological fault data and the historical earthquake catalogue, measures the accumulation of earthquake potential in a region. Space geodetic data are used in deformation analysis to quantify the strain rate tensor. The strain rate tensor is related to the moment rate tensor according to the Kostrov formula, first presented in 1974. In 1994, Ward calculated the geodetic seismic moment rate by means of the eigenvalues of the strain rate tensor. The seismic moment rates calculated from these three disciplines, namely geodesy, geology and seismology, can give a comprehensive view of the seismic potential of a region. Since 1999 the National Cartographic Center (NCC) of Iran has established and maintained a high-precision geodetic network in the region of Iran. The network has 28 stations on two of the main plates, namely the Eurasian and Arabian plates. This is a long-term project, with planned re-observations every two years. The first epoch of GPS measurements was made in 1999 and the second GPS campaign was carried out in 2001.
This paper focuses on these data to derive geodetic seismic moment rates. Our results show that the south-east region and central Alborz present the largest values of the seismic moment rates in comparison to other parts of Iran. The moment rates per unit area are 5.7659×10^15 and 2.0147×10^15 N m^-1 yr^-1 over the predefined seismic regions of the south-east and central Alborz, respectively. The derived magnitudes of the seismic moment rate show the smallest value in the north-east region, where the determined rate is equal to 1.0832×10^15 N m^-1 yr^-1.
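One common form of the Ward-style calculation mentioned above takes the eigenvalues of the 2x2 horizontal strain-rate tensor and scales the largest by an assumed rigidity and seismogenic thickness. The sketch below uses that form with illustrative values of μ and H (the paper's actual constants are not given in the abstract):

```python
from math import sqrt

def principal_strain_rates(exx, exy, eyy):
    """Eigenvalues of the symmetric 2x2 horizontal strain-rate tensor."""
    mean = 0.5 * (exx + eyy)
    r = sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return mean + r, mean - r

def moment_rate(exx, exy, eyy, area_m2, mu=3.0e10, thickness_m=15e3):
    """One common geodetic form (after Ward 1994, via Kostrov):
        Mdot = 2 * mu * H * A * max(|e1|, |e2|),
    with strain rates per year, giving N*m per year. mu (Pa) and the
    seismogenic thickness H (m) are assumed, illustrative values."""
    e1, e2 = principal_strain_rates(exx, exy, eyy)
    return 2.0 * mu * thickness_m * area_m2 * max(abs(e1), abs(e2))
```

For a uniaxial strain rate of 10^-8 /yr over a 10^10 m² region this gives 9×10^16 N·m/yr, illustrating how fairly modest geodetic strain rates translate into large regional moment budgets.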
Hossein Ahmadi Kia; Ahmad Reza Pishevar
Abstract
The effect of altitude on the unsteady separation of multi-stage rockets is discussed. The axisymmetric, unsteady, turbulent Navier-Stokes equations are solved numerically. The governing equations are split into a hyperbolic inviscid part and a parabolic diffusion part; the hyperbolic part is solved by an explicit Godunov-type scheme that is second-order accurate in time and space. Moving mesh and moving boundary algorithms are used in the numerical simulation. Separation of the missile stages is simulated at altitudes of 10, 20, 30 and 40 kilometers and at a Mach number of 10. The flow physics of the injection of a secondary supersonic jet into a hypersonic turbulent primary core over a missile configuration is studied numerically. The injection occurs opposite to the free-stream direction. It is shown that the intense interactions between the jet flow and the main free stream have a noticeable effect on the aerodynamic loads. The high momentum of the jet injected into the main flow of the rocket causes interaction of shock waves and consequently changes the flow pattern. The aerodynamic forces can be changed significantly by the intense jet flow interactions. The results show that the drag forces on the head and body of the rocket are irregularly periodic, and the amplitude of the drag force increases with altitude. In the early phase of separation, the rate of displacement of the head and body also behaves very differently at the various altitudes.
Abdorreza Safari; Yahya Jamour; Meisam Ghanizadeh
Mohammad Reza Ghasemi; Mohammad Reza Mostakhdemin Hosseini
Abstract
Due to the probabilistic nature and uncertainties of structural parameters, reliability-based optimization enables engineers to account for the safety of structures and supports decision making; thus, reliability-based design can substitute for the deterministic rules of codes of practice. Space structures have an exceedingly wide range of applications in civil engineering, so optimization of such structures, with their great number of members, is economically wise. For this purpose, the optimization process can be carried out using various mathematical models. One such model is to minimize weight while considering the failure probabilities of the elements as constraints. Another is to minimize weight while regarding the reliability of the whole structural system as the constraint. A third is to minimize both the failure probability and the weight, while taking the structural system reliability as the constraint. In this research each of these forms was studied and the results were compared. Apart from the reliability of the members, the reliability of the nodes was also taken into account; node failure means that the node displacement in at least one direction exceeds the allowable value. The effects of various stochastic parameters such as load, yield stress, modulus of elasticity and cross section were also studied. The stochastic parameters discussed in this study are statistically independent and follow standardized normal distributions. To avoid local convergence during the optimization process, a genetic algorithm is used as the means of optimization. This study shows that as the admissible failure probability of the members or the system increases, the optimum weight of the structure increases, and that the optimum weight also increases with the coefficient of variation of the load or the yield stress.
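For independent normal parameters like those assumed above, the failure probability of a single limit state follows from the reliability index: Pf = Φ(−β), with β = (μ_R − μ_S)/√(σ_R² + σ_S²) for a margin M = R − S between normal resistance and load. A minimal sketch using only the standard normal CDF:

```python
from math import erf, sqrt

def failure_probability(beta):
    """Pf = Phi(-beta) for a normally distributed safety margin, using
    the standard normal CDF Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

def margin_beta(mean_r, std_r, mean_s, std_s):
    """Reliability index of M = R - S for independent normal
    resistance R and load S."""
    return (mean_r - mean_s) / sqrt(std_r ** 2 + std_s ** 2)
```

With a resistance of 100 ± 10 and a load of 60 ± 5 (illustrative units) the index is β ≈ 3.58; a constraint like Pf ≤ 10⁻³ in the weight-minimization models translates directly into a lower bound on β.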
Soheila Aslani; Abbas Bahroudi; Jalal Karami; Amir Khodras Haghighi
Abstract
The Sarbisheh mineral occurrence is located in the vicinity of the Tourshab village, about 50 km south of Birjand, the center of Southern Khorassan province, east Iran. The predominant geologic characteristics of the region are intense tectonic deformation and intrusive (granodioritic-dioritic) to volcanic (dacitic-andesitic) rocks associated with broad alteration. In the northern part of the studied area, an extensive variety of alteration, from kaolinitic and silicified to chloritic, can be observed clearly in the field and in remotely sensed data. According to previous geologic work, these alteration zones apparently formed along NW-trending extensional trends. This feature is more obvious in the southeast part of the area, where the main NW-SE-trending tectonic features delimit the southern border of the kaolinitic alteration adjacent to the Torshab spring. Based on previous geologic surveying and preliminary mineral prospecting, the area apparently has convincing mineral potential, especially for copper and gold, encouraging governmental organizations and private companies to launch more detailed exploration studies. The main goal of the present study was to introduce an integrated method of mineral exploration for the copper-gold prospects using remote sensing and field observations, and to produce different alteration maps. To distinguish between the different types of alteration developed within the studied area, ASTER satellite data were collected and analyzed using various methods. In this manner, argillaceous alteration haloes and iron-oxide zones representing possible high-potential extents were extracted successfully by applying the Crosta method. Meanwhile, other remote sensing techniques were also applied, such as color composite maps, principal component analysis, supervised classification, least-squares fitting, band ratioing and MF methods. By applying the above methods, this study produced accurate alteration maps of the area.
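Band ratioing, one of the techniques listed above, is a pixel-wise division of two spectral bands followed by a threshold to flag anomalous pixels. A minimal generic sketch on a toy 2x2 image (the reflectance values are invented, not real ASTER data, and which band pair best separates a given alteration type depends on the target mineralogy):

```python
def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two bands (lists of rows); high ratios can
    highlight spectrally distinctive targets such as altered rocks.
    eps guards against division by zero in dark pixels."""
    return [[a / (b + eps) for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

def threshold_mask(image, t):
    """Binary anomaly mask: 1 where the ratio exceeds the threshold."""
    return [[1 if v > t else 0 for v in row] for row in image]

# Toy 2x2 reflectance grids (illustrative values only).
b4 = [[0.40, 0.20], [0.35, 0.10]]
b6 = [[0.10, 0.20], [0.10, 0.10]]
mask = threshold_mask(band_ratio(b4, b6), 2.0)
```

The two left-column pixels, where the numerator band is several times brighter than the denominator band, are flagged as anomalous; in practice the threshold is chosen from the ratio-image histogram rather than fixed in advance.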