Mehdi Ghassemieh; Mehdi Jalalpour
Abstract
Connections play an important role in steel frames subjected to lateral seismic loading, and the steel extended end-plate moment connection in particular can play an important role when such frames are subjected to seismic loadings. In the past, this type of connection was investigated mainly under monotonic loading. After recent major earthquakes and the failure of a number of traditionally practiced moment connections in the US, it appears that extended end-plate moment connections can qualify as a suitable semi-rigid connection type for use in heavy steel frames.
In this article, the seismic behavior of the extended end-plate moment connection is investigated. Using nonlinear finite element analysis together with yield line analysis, the behavior of the connection is obtained, with emphasis on the effects of column depth and end-plate thickness on the panel zone and on the connection as a whole, and the results are compared with those obtained from tests by other researchers. Plastic moment versus plastic rotation hysteresis, stress contours of the pertinent parts of the connection, and displacements form part of the results of this study. The difficulties that arise in modeling and analyzing such connections are also discussed.
Bahareh Bafandeh Mayvan
Abstract
The edge tenacity Te(G) of a graph G is defined as Te(G) = min {[|X| + τ(G-X)]/[ω(G-X) - 1] : X ⊆ E(G) and ω(G-X) > 1}, where the minimum is taken over every edge-cutset X that separates G into ω(G - X) components, and τ(G - X) denotes the order of a largest component of G - X. The objective of this paper is to determine this quantity for split graphs. Let G = (Z, I, E) be a noncomplete connected split graph with minimum vertex degree δ(G). We prove that if δ(G) ≥ |E(G)|/[|V(G)|-1], then its edge tenacity is |E(G)|/[|V(G)|-1].
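For small graphs the definition above can be checked directly by brute force. The following sketch is illustrative only (not from the paper; the function names are my own) and enumerates every edge set X with ω(G-X) > 1:

```python
# Brute-force edge tenacity for small graphs, directly from the definition:
# Te(G) = min over edge sets X with omega(G-X) > 1 of (|X| + tau(G-X)) / (omega(G-X) - 1).
from itertools import combinations

def components(n, edges):
    """Connected components of a graph on vertices 0..n-1 (union-find)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def edge_tenacity(n, edges):
    best = None
    for k in range(1, len(edges) + 1):
        for X in combinations(edges, k):
            comps = components(n, [e for e in edges if e not in X])
            omega = len(comps)
            if omega > 1:  # X is an edge-cutset
                tau = max(len(c) for c in comps)  # order of a largest component
                value = (len(X) + tau) / (omega - 1)
                best = value if best is None else min(best, value)
    return best

# Path on 4 vertices: the minimum is attained by removing all three edges,
# giving (3 + 1) / (4 - 1) = 4/3.
print(edge_tenacity(4, [(0, 1), (1, 2), (2, 3)]))
```

Exponential enumeration, of course, is only a sanity check of the definition; the point of the paper is the closed-form value for split graphs.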
Dara Moazzami
Abstract
The tenacity of a graph $G$, $T(G)$, is defined by $T(G) = \min\{\frac{\mid S\mid +\tau(G-S)}{\omega(G-S)}\}$, where the minimum is taken over all vertex cutsets $S$ of $G$. We define $\tau(G - S)$ to be the number of vertices in the largest component of the graph $G-S$, and $\omega(G-S)$ to be the number of components of $G-S$. In this paper we consider the relationship between the minimum degree $\delta (G)$ of a graph and the complexity of recognizing if a graph is $T$-tenacious. Let $T\geq 1$ be a rational number. We first show that if $\delta(G)\geq \frac{Tn}{T+1}$, then $G$ is $T$-tenacious. On the other hand, for any fixed $\epsilon>0$, we show that it is $NP$-hard to determine if $G$ is $T$-tenacious, even for the class of graphs with $\delta(G)\geq(\frac{T}{T+1}-\epsilon )n$.
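The vertex-cutset minimum in this definition can likewise be evaluated by exhaustive search on small instances. The sketch below is my own illustration (not from the paper) of the definition of $T(G)$:

```python
# Brute-force tenacity T(G) from the definition:
# T(G) = min over vertex cutsets S of (|S| + tau(G-S)) / omega(G-S).
from itertools import combinations

def tenacity(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = None
    for k in range(0, n - 1):
        for S in combinations(range(n), k):
            remaining = set(range(n)) - set(S)
            seen, comps = set(), []
            for v in remaining:                      # DFS components of G - S
                if v not in seen:
                    stack, comp = [v], []
                    seen.add(v)
                    while stack:
                        u = stack.pop()
                        comp.append(u)
                        for w in adj[u] & remaining:
                            if w not in seen:
                                seen.add(w)
                                stack.append(w)
                    comps.append(comp)
            if len(comps) > 1:                       # S is a vertex cutset
                value = (len(S) + max(len(c) for c in comps)) / len(comps)
                best = value if best is None else min(best, value)
    return best  # None when no cutset exists (complete graphs)

# The 5-cycle: every cutset yields (|S| + tau) / omega = 2, so T(C5) = 2.
print(tenacity(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```

Note that for $C_5$ with $T = 2$ the hypothesis $\delta(G) \geq \frac{Tn}{T+1} = \frac{10}{3}$ fails ($\delta = 2$), yet $T(C_5) = 2$: the degree condition is sufficient but not necessary.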
M.K.Karthik Chidambaram; S. Athisayanathan; R. Ponraj
Abstract
Let $G$ be a $(p,q)$ graph and $A$ be a group. We denote the order of an element $a \in A$ by $o(a)$. Let $f:V(G)\rightarrow A$ be a function. To each edge $uv$ assign the label $1$ if $(o(f(u)),o(f(v)))=1$, that is, if the orders of $f(u)$ and $f(v)$ are relatively prime, and the label $0$ otherwise. $f$ is called a group $A$ Cordial labeling if $|v_f(a)-v_f(b)| \leq 1$ and $|e_f(0)-e_f(1)|\leq 1$, where $v_f(x)$ and $e_f(n)$ respectively denote the number of vertices labelled with an element $x$ and the number of edges labelled with $n$ $(n=0,1)$. A graph which admits a group $A$ Cordial labeling is called a group $A$ Cordial graph. In this paper we define group $\{1 ,-1 ,i ,-i\}$ Cordial graphs and characterize the graphs $C_n + K_m$ $(2 \leq m \leq 5)$ that are group $\{1 ,-1 ,i ,-i\}$ Cordial.
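The two balance conditions are easy to verify mechanically for a given labeling. The checker below is a sketch of my own (the $C_4$ labeling is an illustrative example, not one from the paper), using the element orders $o(1)=1$, $o(-1)=2$, $o(i)=o(-i)=4$ in the multiplicative group $\{1,-1,i,-i\}$:

```python
# Checker for the group {1, -1, i, -i} Cordial conditions described above.
from math import gcd

ORDER = {1: 1, -1: 2, 1j: 4, -1j: 4}  # element orders under multiplication

def is_group_cordial(vertices, edges, f):
    # Vertex condition: counts of each group element differ by at most 1.
    counts = {g: 0 for g in ORDER}
    for v in vertices:
        counts[f[v]] += 1
    if max(counts.values()) - min(counts.values()) > 1:
        return False
    # Edge condition: uv gets label 1 iff gcd(o(f(u)), o(f(v))) = 1,
    # and the counts of 0- and 1-labelled edges differ by at most 1.
    e1 = sum(1 for u, v in edges if gcd(ORDER[f[u]], ORDER[f[v]]) == 1)
    e0 = len(edges) - e1
    return abs(e0 - e1) <= 1

# Cycle C4 labelled with one copy of each group element.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
f = {0: 1, 1: -1, 2: 1j, 3: -1j}
print(is_group_cordial(vertices, edges, f))  # True: e1 = e0 = 2
```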
P. Titus; S. Santha Kumari
Abstract
A chord of a path $P$ is an edge joining two non-adjacent vertices of $P$. A path $P$ is called a monophonic path if it is a chordless path. A longest $x-y$ monophonic path is called an $x-y$ detour monophonic path. A detour monophonic graphoidal cover of a graph $G$ is a collection $\psi_{dm}$ of detour monophonic paths in $G$ such that every vertex of $G$ is an internal vertex of at most one detour monophonic path in $\psi_{dm}$ and every edge of $G$ is in exactly one detour monophonic path in $\psi_{dm}$. The minimum cardinality of a detour monophonic graphoidal cover of $G$ is called the detour monophonic graphoidal covering number of $G$ and is denoted by $\eta_{dm}(G)$. In this paper, we find the detour monophonic graphoidal covering number of the corona product of a wheel with some standard graphs.
Dara Moazzami
Abstract
If we think of a graph as modeling a network, vulnerability measures the resistance of the network to disruption of operation after the failure of certain stations or communication links. Many graph-theoretical parameters have been used to describe the vulnerability of communication networks, including connectivity, integrity, toughness, binding number, and tenacity. In this paper we discuss tenacity and its properties in vulnerability calculations.
A. Ghodousian; Abolfazl Javan; Asieh Khoshnood
Abstract
The Yager family of t-norms is a parametric family of continuous nilpotent t-norms and is also one of the most frequently applied ones. This family is strictly increasing in its parameter and covers the whole spectrum of t-norms as the parameter varies from zero to infinity. In this paper, we study a nonlinear optimization problem whose feasible region is formed by a system of fuzzy relational equations (FRE) defined by the Yager t-norm. We first investigate the resolution of the feasible region when it is defined with max-Yager composition and present some necessary and sufficient conditions for determining feasibility, together with some procedures for simplifying the problem. Since the feasible solution set of FREs is non-convex and finding all minimal solutions is an NP-hard problem, conventional nonlinear programming methods may involve high computational complexity. For these reasons, a method is used which preserves the feasibility of newly generated solutions. The proposed method does not need to find the minimal solutions initially, nor does it need to check feasibility after generating new solutions. Moreover, we present a technique to generate feasible max-Yager FREs as test problems for evaluating the performance of the current algorithm. The proposed method has been compared with Lu and Fang's algorithm. The obtained results confirm the high performance of the proposed method in solving such nonlinear problems.
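The standard form of the Yager t-norm is $T_p(a,b) = \max(0,\, 1 - ((1-a)^p + (1-b)^p)^{1/p})$ for $p > 0$. The following pure-Python sketch (illustrative only; the function names and the small matrix are not from the paper) shows the t-norm and the row-wise max-Yager composition that defines an FRE system $A \circ x = b$:

```python
# The Yager t-norm and the max-Yager composition underlying the FRE system.
def yager(a, b, p):
    """Yager t-norm T_p(a, b) = max(0, 1 - ((1-a)^p + (1-b)^p)^(1/p)), p > 0."""
    return max(0.0, 1.0 - ((1.0 - a) ** p + (1.0 - b) ** p) ** (1.0 / p))

def max_yager(A, x, p):
    """Row-wise max-Yager composition: (A o x)_i = max_j T_p(A[i][j], x[j])."""
    return [max(yager(aij, xj, p) for aij, xj in zip(row, x)) for row in A]

# p = 1 reduces to the Lukasiewicz t-norm max(0, a + b - 1):
print(yager(0.7, 0.8, 1.0))  # 0.5
# As p grows, T_p approaches the minimum t-norm:
print(round(yager(0.7, 0.8, 200.0), 3))
```

The two limiting prints illustrate the "whole spectrum" claim above: small $p$ pushes the family toward the drastic product, while large $p$ approaches $\min(a, b)$.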
Ali Haghighi Asl; Akbar Shahsavand; Mohammad Reza Ghorbani
Abstract
In the present article the energy distribution function of a heterogeneous solid was estimated. The energy distribution function is an important characterization of a heterogeneous adsorbent. The overall adsorption quantity of a heterogeneous solid is usually expressed by a Fredholm equation of the first kind, which contains the unknown distribution function and the local adsorption isotherm as its kernel. The calculation of this distribution function is an ill-posed problem. The current article shows that the difficulties arising from the ill-posed nature of the adsorption equation can be overcome with the linear regularization method and inverse theory. The performance of the regularization method for calculating multipeak energy distribution functions was examined in the present work. The results are presented in different charts for several levels of random error. The results showed that the linear regularization method is very convenient for predicting the energy distribution function of heterogeneous solids. Furthermore, if a large amount of data at low pressure is available, the performance of this method is very suitable. Therefore, even in cases where some data have a large random error (about 50%), the regularization method can predict the energy distribution function satisfactorily.
Mirhamed Mousavi; Parisa Khadivparsi; Seyyed Mohammad Ali Mousavian
Abstract
In this research, the effect of a bicomponent mixed surfactant on the drop-interface coalescence phenomenon was studied at ambient temperature. The first basic chemical system was water and toluene with 0.01 g of sodium dodecyl sulfate (SDS), and the second basic system was water and toluene with 0.01 g of cetyl trimethyl ammonium bromide (CTAB). Various weight fractions of a second surfactant, 2-heptanol (non-ionic) or aniline (cationic), were added to each of these systems in order to study the effect of the mixed surfactant on coalescence time. It was shown that chemical systems containing a nonionic/anionic surfactant mixture increased the coalescence time, whereas systems containing a nonionic/cationic mixture decreased it. For a mixture of anionic/cationic surfactants, depending on whether the cationic surfactant was weak or strong, the coalescence time decreased in the latter case; for a cationic/cationic mixture, the coalescence time increased at small drop diameters and decreased at larger ones. The effect of sodium chloride (NaCl) on selected systems, including 40% CTAB in the basic water-toluene-SDS system, 40% 2-heptanol in the basic water-toluene-CTAB system, and 40% 2-heptanol in the basic water-toluene-SDS system, was considered, and it was shown that in general the coalescence time decreases.
Mehdi Hassanlou; Mohammad Reza Serajian
Abstract
Oceanographic images obtained from environmental satellites by a wide range of sensors allow natural phenomena to be characterized through different physical measurements. For instance, Sea Surface Temperature (SST) images, altimetry data, and ocean color data can be used to characterize currents and vortex structures in the ocean. The purpose of this study is to derive a relatively complete framework for processing large dynamic oceanographic image sequences in order to detect global displacements, such as oceanographic streams, and to localize particular structures such as currents, vortices, and fronts. These characterizations will help in initializing particular processes in a global monitoring system. Using area-based algorithms, two least squares methods have been used to solve for the apparent motion: Least Squares Matching (LSM) and Hierarchical least squares Lucas and Kanade (HLK). SST images of the Caspian Sea taken by the MODIS sensor on board the Terra satellite have been used in this study. Three daily SST images with a 24-hour time interval are used as input data. The LSM technique, a flexible technique for most data matching problems, offers an optimum spatial solution for the motion estimation. The algorithm allows simultaneous local (i.e., template) radiometric correction and local geometrical image orientation estimation; the correspondence between two image templates is modeled both geometrically and radiometrically. In order to implement a weighted least squares fit of local first-order optical flow constraints in each spatial neighborhood, the HLK method has been used. This method locates water currents using a coarse-to-fine strategy to track motion in Gaussian pyramids of the SST images. It allows the detection of large motion in the coarse resolution layer and guides the estimate to the correct result in the finer layers.
The method used in this study presents a more efficient and robust solution, compared to traditional motion estimation schemes, for extracting water currents.
Mehdi Ashja'ee; Touraj Yousefi; Hossein Shokouhmand
Abstract
An experimental and numerical study of free convection heat transfer from a channel consisting of a vertical sinusoidal wavy surface and a vertical flat plate has been carried out. The vertical wavy surface was maintained at a constant temperature, while the flat plate was adiabatic. A Mach-Zehnder interferometer was used to determine the local heat transfer coefficients of the sinusoidal wavy surface, and the FLUENT code was used for the numerical simulation. The numerical results are in good agreement with the experimental data. The amplitude-wavelength ratio is kept constant in this investigation. The effects of Rayleigh number and wall spacing are investigated as well: experiments were carried out for eight different Rayleigh numbers and thirteen different wall spacings. Results indicate that the frequency of the local heat transfer rate is the same as that of the wavy surface. The average heat transfer coefficient increases as the Rayleigh number increases. For each Rayleigh number there is an optimum wall spacing at which the heat transfer rate from the wavy sinusoidal surface reaches its maximum value. This optimum wall spacing depends on the Rayleigh number and decreases as the Rayleigh number increases.
Ali Babaee Zadeh; Hashem Mehrazin
Abstract
When the degree of saturation at an intersection approaches one, Webster's optimum cycle length equation becomes inapplicable: the cycle length becomes very large as the degree of saturation approaches one, and entirely unrealistic when the degree of saturation exceeds one. This is not a problem for the HCM2000 method, but the optimum cycle length calculation in that method has no specific equation and is based on trial and error to minimize delay time. That method also requires many input parameters, which makes it expensive. In this paper, new modified Webster optimum cycle length equations, based on the HCM2000 method, are presented for some specific geometric and phasing situations; these equations do not suffer from the problem described above. The purpose of this paper is to make use of the "total lost time within the cycle (L)" and "sum of critical phase flow ratios (Y)" parameters and to create a new minimum cycle length equation based on the HCM2000 method. Given that intersection geometry and phasing are related to the optimum cycle length, four intersection situations have been considered. After this stage, the following step-by-step procedure was used:
- start with low traffic volume and low "L"
- use the "HICAP2000" software to calculate the optimum cycle length
- also use Webster's equation to calculate the optimum cycle length
- increase the traffic volume and repeat the above steps
- continue the above steps until the degree of saturation at the intersection approaches one
- increase "L" and repeat the above steps
- by repeatedly increasing "L" and repeating the above steps, the optimum cycle length is obtained for many values of "L" and traffic volume at a specific intersection
After this procedure, the "SPSS" software was used to model the new relationship between "L" and "Y", and finally new equations are presented for the four intersection situations. This method can be extended to intersections with other geometries and phasings.
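The breakdown motivating this work is easy to see in Webster's classical optimum cycle length formula, $C_0 = (1.5L + 5)/(1 - Y)$, which diverges as $Y \to 1$. A minimal sketch (my own illustration; the paper's modified equations are not reproduced here):

```python
# Webster's classical optimum cycle length, C0 = (1.5 L + 5) / (1 - Y).
def webster_optimum_cycle(L, Y):
    """L: total lost time within the cycle (s); Y: sum of critical phase flow ratios."""
    if Y >= 1.0:
        raise ValueError("Webster's formula is undefined for Y >= 1 (oversaturation)")
    return (1.5 * L + 5.0) / (1.0 - Y)

# With L = 12 s, the cycle length blows up as Y approaches one:
for Y in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(Y, round(webster_optimum_cycle(12.0, Y), 1))
```

For L = 12 s the formula gives 46 s at Y = 0.5 but 2300 s at Y = 0.99, which is the unrealistic growth the paper's modified equations are designed to avoid.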
Habibollah Zolfkhani; Jalil Rashed Mohassel; Farrokh Hojjat Kashani
Abstract
Modern microwave and millimeter wave phased array antennas are attractive because of their ability to steer wave beams in space without physically moving the antenna elements. A typical phased array antenna may have several thousand elements, each fed by a phase shifter, which can steer the resulting array beam to different directions. Low-loss, low-cost, and lightweight phase shifters are therefore important for the design of phased array antennas. Ferrite phase shifters have low insertion loss and can handle significantly higher powers, but they are complex in nature and have a high fabrication cost. While semiconductor phase shifters using PIN diodes or FETs are less expensive and smaller in size than ferrite ones, their application is limited by high insertion losses. Recently, other types of phase shifters using MEMS technology have been investigated to overcome the above limitations.
This paper presents the analysis and design of distributed MEMS phase shifters for Ka-band communication systems. The phase shift is obtained by changing MEMS bridge capacitors located periodically over the transmission line. Simulation results of phase shifters with various structural parameters are analyzed to develop optimized designs. It is observed that the distributed microelectromechanical transmission-line (DMTL) phase shifter can be accurately modeled using a combination of full-wave electromagnetic and microwave circuit analysis. The full-wave electromagnetic simulation of the unit cell is done by the finite element method using the Ansoft High Frequency Structure Simulator (HFSS). After the full-wave analysis is performed, S-parameters are extracted in the frequency range from 26 to 40 GHz for different widths and heights of the MEMS bridge. The S-parameter representation of phase shifters is very important in computer aided design (CAD). Finally, the S-parameters are combined to obtain the overall phase shifter performance over the Ka-band. This phase shifter offers the potential for building a low-loss device for a variety of phased arrays and radars. The average insertion loss, return loss, and phase errors of our phase shifter are compared with MEMS phase shifters reported in various references.
The overall performance of an n-bit phase shifter is obtained using S-parameters and microwave circuit theory. Using phase shifts versus numbers of cells, it is shown that the n-bit phase shifter can be obtained with a suitable combination of one-bit phase shifters of 11.25, 22.5, 45, 90, and 180 degrees. Insertion losses, return losses, and phase shifts were obtained for the 32 states over the frequency range 26-40 GHz. An average insertion loss of -1.68 dB, a return loss of -11.94 dB, and phase errors of 2.33 degrees were obtained at 33 GHz for the 4-bit phase shifter. The results are in good agreement with the reported MEMS phase shifters.
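The combination principle is simple to enumerate: cascading the five one-bit sections listed above, each switched on or off, yields $2^5 = 32$ distinct states in 11.25-degree steps. A small sketch of my own (not the paper's circuit model):

```python
# Phase states reachable by cascading the one-bit sections
# 11.25, 22.5, 45, 90, and 180 degrees, each on or off.
from itertools import product

ONE_BIT_SHIFTS = (11.25, 22.5, 45.0, 90.0, 180.0)  # degrees

def phase_states(bits=ONE_BIT_SHIFTS):
    """All phase shifts obtainable by switching each one-bit section on or off."""
    return sorted(sum(b * s for b, s in zip(bits, sel))
                  for sel in product((0, 1), repeat=len(bits)))

states = phase_states()
print(len(states), states[1], states[-1])  # 32 states, from 0 up to 348.75 degrees
```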
Maryam Haj Fath'alian; Mahmoud Nili Ahmadabadi; Mohammad Habibi Parsa; Tahereh Shah Hosseini; Hannaneh Ghadirian; Tahereh Hosseinzadeh Nik
Abstract
Equiatomic nickel-titanium (NiTi or Nitinol) has the ability to return to a former shape when subjected to an appropriate thermo-mechanical regime. Pseudoelastic and shape memory effects are among the behaviors presented by these alloys. The unique properties of these alloys have encouraged many investigators to look for biomedical applications of NiTi. One of the most successful applications of Nitinol is the orthodontic archwire. The best feature of these wires is superelasticity, the phenomenon that allows easy engagement (loading conditions). Superelastic Nitinol wires deliver a clinically desired light continuous force during deactivation (unloading conditions), enabling effective tooth movement with minimal damage to periodontal tissues. Superelasticity is characterized by a load-deflection plot with a horizontal region (plateau) during unloading, implying that a constant force may be exerted over that particular range of tooth movement. It is known that the NiTi alloy wire undergoes a phase transformation from an austenitic to a martensitic phase as the load increases during the loading process. Metallurgical studies have attributed these characteristics to a reversible phase transformation from the body-centered cubic structure to the monoclinic structure of nickel-titanium when the stress reaches a certain level during deformation. The increasing amount of energy stored inside the NiTi wire during this process is consumed during the unloading process as the transformation is reversed and the martensite structure reverts to austenite. Superelasticity is only exhibited by wires showing high endothermic energy in the reverse transformation from the martensitic phase to the parent phase and with low load/deflection ratios. These wires show nearly constant forces in the unloading process, a desirable physiological property for orthodontic tooth movement. NiTi archwires have gained acceptance by orthodontists as initial alignment wires.
Most of the information about the behavior of orthodontic wires is based on mechanical laboratory three-point bending tests used to study load-deflection characteristics. A three-point bending test allows reproducible load-deflection curves. Variations in model design have been shown to affect load-deflection plots: the load-deflection performance of NiTi wires depends on the design of the test model. A modified three-point bending test that simulates the wire force on the teeth under oral conditions gives more accurate results than an ordinary three-point bending test. The purpose of this study is to investigate the load-deflection characteristics of superelastic nickel-titanium wires with a new model design through modified bending tests. In this research, a new three-point bending fixture was designed to determine the superelastic property under clinical conditions, with the wire samples held in the fixture as in the oral cavity. By means of this instrument, the three-point bending test simulates the wire force on the teeth in the oral configuration. The lower section of the fixture is a rail fabricated from steel, with a special movable base with a curved canal assembled over the rail. The upper section was designed to simulate the teeth arrangement and curvature: a stainless steel disk (316L, Ø80 mm, h = 10 mm) was selected, and twelve rods (316L, Ø5 mm, h = 10 mm) were welded to the points representing the centers of the teeth on the disk. The center points of the teeth were located on the medium upper standard arc. The distance between the center points of the teeth (interbracket distance) was similar to the Wilkinson model. To achieve deflections of 1, 2, and 4 mm, teeth 5 and 3 (right) and 2 (left) were selected, respectively, and the rods at these points were movable. A fillet face was machined on the rod surface parallel to the standard arc. Brackets were fixed on the flat face of each rod by superglue and the orthodontic wire was attached to the fixed appliance. The superelastic behavior has been investigated through load-deflection tests.
Navid Shad Manaman; Morteza Eskandari Ghadi
Abstract
The existing theories for wave propagation through a soil layer are not compatible with real soil layers: in the theory the layers are flat and the sub-layers parallel, while in reality soil layers are not flat and may not be parallel. Wave propagation through a corrugated interface is therefore important. In this paper, two-dimensional SH-wave propagation through a corrugated interface between two linear transversely isotropic half-spaces is assessed. To do this, Lord Rayleigh's method is adopted to express the non-flat surface by a Fourier series. In this way, the amplitudes of the reflected and transmitted waves are analytically determined in terms of the incident SH-wave amplitude. It is shown that, in addition to the regular reflected and refracted waves, some irregular reflected and refracted waves exist, and the amplitudes of these waves vary with the angle and frequency of the incident wave, the equation of the surface, and the material properties of the domains. Numerical computations for several cases of different amplitude/wave-length ratio of the interface are performed. This work is an extension of Asano's paper (1960) to a more complicated interface, where more non-zero coefficients are considered in expressing the equation of the surface as a Fourier series. The analytical results for some simpler cases of isotropic domains collapse onto Asano's results (1960), and the numerical evaluation is in good agreement with Asano's.
Mohammad Ali Lotfollah Yaghin; Ali Reza Mojtahedi
Abstract
Using statistical characteristics is one way to describe the random nature of ocean waves. Probability functions are used to facilitate the study of random wave parameters, such as the surface elevation, height, and period of the waves. Since ocean wave forces are the prevalent principal forces on offshore structures, the determination of significant structural responses, such as the applied inline and transverse forces as well as the total moment on the piles, is the main step in the design and construction of such structures.
In this paper, random waves and their effects on a cylindrical pile are investigated. These waves were recorded during tests which are described separately. The oscillation of some linear and nonlinear responses of the pile is discussed statistically; spectral analysis is then performed, the statistics of the extreme amplitudes interpreted, and the prevailing probability functions determined.
Mohammad Sa'eed Jabal Ameli; Ayat Reza'ee far; Ali Cha'ee Bakhsh Langaroudi
Jafar Razmi; Mohsen Sadegh Amal Nik; Mehdi Hashemi
Hooman Naimi; Mehrdad Raisee
Abstract
The present paper deals with the prediction of three-dimensional fluid flow and heat transfer in rib-roughened ducts of square cross-section. Such flows are of direct relevance to the internal cooling systems of modern gas turbine blades. The main objective is to assess how a recently developed variant of a cubic non-linear model (proposed by Craft et al. (1999)), which has been shown to produce reliable thermal predictions through axisymmetric and plane two-dimensional ribbed passages (Raisee et al. (2004)), can predict flow and heat transfer characteristics through more complex three-dimensional ribbed ducts. To fulfil this objective, the present paper discusses turbulent air flow and heat transfer through two different configurations, namely: (I) a square duct with "in-line" ribs normal to the flow direction and (II) a square duct with normal ribs in a "staggered" arrangement. In this paper the flow and thermal predictions of the linear model (EVM) are also included as a set of baseline predictions. Both turbulence models have been used with the form of length-scale correction term to the dissipation rate originally proposed by 'Yap', and also with a differential version of this term, 'NYP'. The mean flow predictions show that both linear and non-linear models can successfully reproduce most of the measured data for the stream-wise and cross-stream velocity components. Moreover, the non-linear model, which is sensitive to turbulence anisotropy, is able to produce better results for the turbulent stresses. As far as heat transfer predictions are concerned, it was found that both the EVM and NLEVM2, the more recent variant of the non-linear model, with the algebraic length-scale correction term, overestimate the measured Nusselt numbers for both geometries examined.
While the EVM with the differential length-scale correction term underestimates heat transfer levels, the Nusselt number predictions with the NLEVM2 and the 'NYP' term are in close agreement with the measured data. Comparisons with our earlier work, Iacovides and Raisee (1999), show that the NLEVM2 thermal predictions are of similar quality to those of a second-moment closure. This modified version of the non-linear model, which in earlier studies was shown to improve thermal predictions in axisymmetric and plane ribbed passages, is thus now found to also produce reasonable heat transfer predictions in three-dimensional ribbed ducts.
Hamid Doost Hosseini
Abstract
The tenacity of a graph G, T(G), is defined by T(G) = min{[|S| + τ(G - S)]/[ω(G - S)]}, where the minimum is taken over all vertex cutsets S of G, τ(G - S) denotes the number of vertices in a largest component of G - S, and ω(G - S) is the number of components of G - S. In this paper a lower bound for the tenacity T(G) of a graph with genus γ(G) is obtained in terms of the graph's connectivity κ(G). We then show that this bound is best possible for almost all toroidal graphs.
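As an illustration of the definition above (not part of the paper's results), tenacity can be computed for small graphs by brute-force enumeration of vertex cutsets. The following Python sketch, with our own function names and an exponential running time in the number of vertices, does exactly that:

```python
from itertools import combinations

def components(vertices, edges):
    """Return the connected components of the graph (vertices, edges)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def tenacity(vertices, edges):
    """T(G) = min over vertex cutsets S of (|S| + tau(G-S)) / omega(G-S)."""
    best = None
    vs = list(vertices)
    for r in range(1, len(vs)):
        for S in combinations(vs, r):
            rest = set(vs) - set(S)
            sub_edges = [(u, v) for u, v in edges if u in rest and v in rest]
            comps = components(rest, sub_edges)
            if len(comps) < 2:   # S must actually disconnect G
                continue
            tau = max(len(c) for c in comps)
            val = (len(S) + tau) / len(comps)
            best = val if best is None else min(best, val)
    return best
```

For example, the star K_{1,3} attains T = (1 + 1)/3 = 2/3 by removing the centre, and the path on four vertices attains T = 3/2.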
Ahmad Kahrobaian; Hamid Reza Malekmohammadi
Abstract
A new method for the optimization of linear parabolic solar collectors using exergy analysis is presented. A comprehensive mathematical model of the thermal and optical performance is developed, and geometrical and thermodynamic parameters are taken as the optimization variables. By applying a derived expression for the exergy efficiency, the exergy losses are evaluated and the optimum design and operating conditions are investigated. The objective function (exergy efficiency), along with the constraint equations, constitutes a four-degree-of-freedom optimization problem. Using the Lagrange multiplier method, the optimization procedure is applied to a typical collector and the optimum design point is extracted. The optimum values of the collector inlet temperature, oil mass flow rate, concentration ratio and glass envelope diameter are calculated simultaneously by numerical solution of a highly non-linear system of equations. To study the effect of changes in the optimization variables on the collected exergy, the sensitivity of the optimum to changes in collector parameters and operating conditions is evaluated and the variation of the exergy fractions at this point is studied.
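As a toy illustration of the Lagrange-multiplier procedure described above (the actual four-variable collector model is not reproduced here; the example problem and function names are ours, not the authors'), the sketch below applies Newton's method to the stationarity conditions of a simple constrained problem: maximize f(x, y) = xy subject to x² + y² = 8, whose optimum is x = y = 2 with multiplier λ = 1/2.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def newton_lagrange(x, y, lam, iters=50):
    """Newton iteration on the stationarity system of L = x*y - lam*(x**2 + y**2 - 8):
    dL/dx = y - 2*lam*x = 0,  dL/dy = x - 2*lam*y = 0,  constraint x**2 + y**2 = 8."""
    for _ in range(iters):
        F = [y - 2 * lam * x, x - 2 * lam * y, x * x + y * y - 8.0]
        J = [[-2 * lam, 1.0, -2 * x],    # Jacobian of F w.r.t. (x, y, lam)
             [1.0, -2 * lam, -2 * y],
             [2 * x, 2 * y, 0.0]]
        dx, dy, dl = solve3(J, [-F[0], -F[1], -F[2]])
        x, y, lam = x + dx, y + dy, lam + dl
    return x, y, lam
```

The collector problem follows the same pattern with four design variables and the exergy efficiency as the objective, which is why its stationarity conditions form a highly non-linear system that must be solved numerically.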
Najme Mansouri; Mohammad Masoud Javidi
Abstract
A data-intensive computing platform, encountered in some grid and cloud computing applications, includes numerous tasks that process, transfer, or analyze large data files. In such environments, there are many geographically distributed users that need these huge data sets. Data management is one of the main challenges of a distributed computing environment, since data plays a pivotal role. Dynamic data replication techniques have been widely applied to improve data access and availability. In order to introduce an appropriate data replication algorithm, four important problems must be solved: 1) which file should be replicated; 2) how many new replicas should be created; 3) where the new replicas should be placed; and 4) which replica should be deleted to make room for new copies. In this paper, we focus particularly on the replica replacement issue, which makes a significant difference to the efficiency of a replication algorithm. We survey replica replacement approaches (from 2004 to 2018) that have been developed for both grid and cloud environments. The presented review illustrates the replica replacement problem from a technological perspective and differs significantly from previous reviews in terms of comprehensiveness and integrated discussion. We present the different parameters involved in the replacement process and show the key points of the recent algorithms in a tabular representation of all those factors. We also report open issues and new challenges in the area.
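To make the fourth question concrete, one classical baseline that many surveyed algorithms refine is least-recently-used (LRU) replacement. The sketch below is only illustrative (the class name, capacities and file sizes are hypothetical, and LRU is just one of the policies discussed); it evicts the replica that has gone longest without a local access:

```python
from collections import OrderedDict

class LRUReplicaStore:
    """Illustrative LRU replica replacement for one storage node.
    capacity and file sizes are in the same (arbitrary) units."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.replicas = OrderedDict()  # file name -> size, oldest access first

    def access(self, name):
        """Record a local read; the replica becomes most recently used."""
        if name in self.replicas:
            self.replicas.move_to_end(name)
            return True
        return False  # replica not held locally

    def store(self, name, size):
        """Place a new replica, evicting least-recently-used ones if needed."""
        if size > self.capacity:
            return False  # file can never fit on this node
        while self.used + size > self.capacity:
            _, evicted_size = self.replicas.popitem(last=False)  # evict LRU
            self.used -= evicted_size
        self.replicas[name] = size
        self.used += size
        return True
```

More sophisticated policies surveyed in the paper replace the eviction criterion with weighted combinations of access frequency, file size, transfer cost and replica availability rather than recency alone.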
A. Javan; M. Jafarpour; D. Moazzami; A. Moieni
Abstract
In this paper, we introduce two novel parameters, Normalized Tenacity ($T_N$) and Normalized Toughness ($t_N$), obtained by modifying the existing Tenacity and Toughness parameters. These new parameters enable graphs of different orders to be compared with each other with regard to their vulnerability. The parameters are also reviewed and discussed for some special graphs.
Seyyed Mehdi Zahra'i; Asghar Vatani Osku'ee; Seyyed Farid Hashemi
Mahmood Shabankhah
Abstract
The analysis of vulnerability in networks generally involves questions about how the underlying graph is connected. One is naturally interested in studying the types of disruption in the network that may be caused by failures of certain links or nodes. In terms of a graph, the concept of connectedness is used in different forms to study many of the measures of vulnerability. When certain vertices or edges of a connected graph are deleted, one wants to know whether the remaining graph is still connected, and if so, what its vertex- or edge-connectivity is. If, on the other hand, the graph is disconnected, determining the number of its components or their orders is useful. Our purpose here is to describe and analyze the current status of the vulnerability measures, identify their more interesting variants, and suggest the most suitable measure of vulnerability.
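The deletion questions above can be made concrete on small graphs. The following brute-force sketch (illustrative only, exponential in the number of vertices, and not an algorithm proposed in the paper) deletes vertex subsets of increasing size until the remaining graph disconnects, which yields the vertex-connectivity:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first check that every vertex is reachable from the first one."""
    vs = list(vertices)
    if len(vs) <= 1:
        return True
    adj = {v: set() for v in vs}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {vs[0]}, [vs[0]]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == len(vs)

def vertex_connectivity(vertices, edges):
    """Smallest number of vertices whose removal disconnects G (brute force)."""
    vs = list(vertices)
    if not is_connected(vs, edges):
        return 0
    for k in range(1, len(vs) - 1):
        for S in combinations(vs, k):
            rest = [v for v in vs if v not in S]
            sub = [(u, v) for u, v in edges if u in rest and v in rest]
            if not is_connected(rest, sub):
                return k
    return len(vs) - 1  # no cutset exists: G is complete
```

Measures such as tenacity and toughness refine this picture by weighing, for each cutset, not just its size but also the number and the order of the components it leaves behind.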