ABSTRACT

The use of unstructured meshes for computational fluid dynamics problems has become widespread due to their ability to discretize arbitrarily complex geometries and due to the ease of enhancing solution accuracy and efficiency through adaptive refinement. While unstructured grids composed of geometric simplices, i.e., triangles in two dimensions and tetrahedra in three dimensions, are widely and routinely used for producing high-quality solutions for inviscid flow simulations, the development and application of simplicial unstructured grids for viscous flow simulations are far less popular, accepted, and successful. For high Reynolds number viscous flows, the velocity variations in the normal direction are much larger than those in the tangential direction in the boundary layer and wake regions. This requires the use of highly stretched meshes to efficiently resolve such viscous regions. For simplicial elements, this stretching invariably leads to the use of long and thin triangular cells. Numerical evidence, as reported by Aftosmis et al. [1], Haselbacher et al. [2], and Viozat et al. [3], indicates that for the discretization of the inviscid fluxes, the classic 1D-based upwind schemes using the median-dual finite volume approximation suffer from excessive numerical diffusion due to such skewing. This is mainly due to the presence of diagonal connections, where the right and left state data required for the upwind schemes are not well aligned with the normal to the interface.

Two general options exist for solving this problem: scheme-based methods and mesh-based methods. The former approach attempts to develop multidimensional upwind methods that are based more deeply on physical aspects of the Euler equations. Several efforts have been made in this direction [4-9], among them rotated upwind methodologies, multidimensional wave decomposition methods, and characteristic theory-based advection schemes. Despite much progress, numerical investigations have shown that none of these methods has yet reached a satisfactory level with respect to accuracy and robustness. The latter approach is based on improving the grids. A typical example of this approach is to use hybrid grids [10,11], where cell shapes that do not become skewed with stretching (e.g., hexahedra and prisms) are used in the viscous regions and tetrahedral cells are used away from the viscous regions. However, this is not completely satisfactory from the application point of view, as the existence of several cell types in a hybrid grid undoubtedly makes mesh generation and adaptation, the flow solver, and post-processing more complex. Alternatively, one could use a containment dual finite volume approximation [3,12], which significantly reduces the influence of the diagonal edges in the grid. In fact, one can show that the resulting control volume on an ideal triangulated quadrilateral grid using such a containment dual is essentially identical to the control volume on the quadrilateral grid using a median control volume. Although the concept of the containment dual-based finite volume method is not very new, its accuracy, efficiency, and capability to treat real engineering problems are relatively unexplored. One of the major objectives of the effort presented in this paper is to demonstrate that, using the containment dual-based finite volume approximation, simplicial unstructured grids can be applied to accurately compute high Reynolds number viscous flows.

Copyright © 2000 by the authors. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.
Development of computationally efficient algorithms for high Reynolds number viscous flow simulations on highly stretched unstructured grids remains one of the unresolved issues in computational fluid dynamics. Due to the large differences between the convective and diffusive time scales, high Reynolds number flows produce very stiff systems of equations. This presents a severe challenge to the time advancement procedure. By itself, explicit, multi-stage time advancement is not a computationally viable alternative. A multigrid strategy or an implicit temporal discretization is required in order to speed up convergence. In general, implicit methods can outperform their explicit counterparts by a factor of 10 or even more. Unfortunately, this performance is usually achieved at the expense of increased memory, as some efficient implicit methods may need up to an order of magnitude more storage than explicit methods. This is mainly because implicit methods usually require the storage of the left-hand-side Jacobian matrix. Any implicit method requiring the storage of the Jacobian matrix would be impractical, if not impossible, to use for turbulent simulations involving complex, realistic aerodynamic configurations, where hundreds of thousands of mesh points are necessary to represent such engineering-type configurations accurately.

The authors have developed a fast, matrix-free implicit method, GMRES+LU-SGS [13], for solving the compressible Euler and Navier-Stokes equations on unstructured grids. The GMRES+LU-SGS method has proven to be very effective for accelerating the convergence of both steady [13] and unsteady [14] flow problems. Another objective of this paper is to demonstrate that the matrix-free GMRES+LU-SGS method can also be extended and applied to efficiently compute high Reynolds number viscous flows on highly stretched unstructured grids.

The outline of this paper is as follows. In section 2, the Reynolds-averaged Navier-Stokes equations and the Spalart-Allmaras one-equation turbulence model [15] are described. The numerical method for solving the governing equations is presented in section 3, and numerical results are presented in section 4.
2. GOVERNING EQUATIONS

The Reynolds-averaged Navier-Stokes equations can be written in conservation form as

\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}_j}{\partial x_j} = \frac{\partial \mathbf{G}_j}{\partial x_j},   (2.1)

where the summation convention has been employed. The flow variable vector U, inviscid flux vector F, and viscous flux vector G are defined by

\mathbf{U} = \begin{pmatrix} \rho \\ \rho u_i \\ \rho e \end{pmatrix}, \quad \mathbf{F}_j = \begin{pmatrix} \rho u_j \\ \rho u_i u_j + p\,\delta_{ij} \\ u_j(\rho e + p) \end{pmatrix}, \quad \mathbf{G}_j = \begin{pmatrix} 0 \\ \sigma_{ij} \\ u_l \sigma_{lj} + q_j \end{pmatrix}.   (2.2)

Here ρ, p, and e denote the density, pressure, and specific total energy of the fluid, respectively, and u_i is the velocity of the flow in the coordinate direction x_i. The pressure can be computed from the equation of state

p = (\gamma - 1)\left(\rho e - \frac{1}{2}\rho u_i u_i\right),   (2.3)

which is valid for a perfect gas, where γ is the ratio of the specific heats. The components of the viscous stress tensor and the heat flux vector are given by

\sigma_{ij} = (\mu + \mu_t)\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3}(\mu + \mu_t)\,\frac{\partial u_k}{\partial x_k}\,\delta_{ij},   (2.4)

q_j = \frac{1}{\gamma - 1}\left(\frac{\mu}{Pr} + \frac{\mu_t}{Pr_t}\right)\frac{\partial T}{\partial x_j}.   (2.5)

In the above equations, T is the temperature of the fluid, Pr the laminar Prandtl number, which is taken as 0.7 for air, and Pr_t the turbulent Prandtl number, which is taken as 0.9. μ represents the molecular viscosity, which can be determined through Sutherland's law

\frac{\mu}{\mu_0} = \left(\frac{T}{T_0}\right)^{3/2}\frac{T_0 + S}{T + S}.   (2.6)

μ_0 denotes the viscosity at the reference temperature T_0, and S is a constant which for air assumes the value S = 110 K. The temperature of the fluid T is determined by

T = \frac{\gamma p}{\rho},   (2.7)

and μ_t denotes the turbulent eddy viscosity, which is computed using the Spalart-Allmaras one-equation turbulence model

\mu_t = \rho\,\tilde{\nu}\, f_{v1}.   (2.8)

The system is closed by including the governing equation for the working variable ν̃, which can be written as

\frac{D\tilde{\nu}}{Dt} = \frac{1}{\sigma}\left[\nabla\cdot\bigl((\nu + (1 + c_{b2})\,\tilde{\nu})\,\nabla\tilde{\nu}\bigr) - c_{b2}\,\tilde{\nu}\,\nabla^2\tilde{\nu}\right] + c_{b1}(1 - f_{t2})\,\tilde{S}\,\tilde{\nu} - \left(c_{w1} f_w - \frac{c_{b1}}{\kappa^2} f_{t2}\right)\left(\frac{\tilde{\nu}}{d}\right)^2.   (2.9)

The constants appearing in the above equations take on the standard values recommended in Ref. 15. In the sequel, we assume that Ω is the flow domain, Γ its boundary, and n the unit outward normal vector to the boundary.
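To make the thermodynamic closure concrete, the chain from conservative variables to pressure (2.3) and from temperature to molecular viscosity via Sutherland's law (2.6) can be sketched as follows. This is an illustrative sketch, not code from the paper; the dimensional reference values T_0 and μ_0 are assumed sample values for air.

```python
# Sketch of the thermodynamic relations (2.3) and (2.6); illustrative only.
GAMMA = 1.4      # ratio of specific heats for air
S = 110.0        # Sutherland constant [K], as in eq. (2.6)
T0 = 288.15      # assumed reference temperature [K]
MU0 = 1.789e-5   # assumed reference viscosity [kg/(m s)]

def pressure(rho, rho_u, rho_e):
    """Perfect-gas pressure, eq. (2.3): p = (gamma-1)(rho e - 1/2 rho u_i u_i)."""
    kinetic = 0.5 * sum(m * m for m in rho_u) / rho   # 1/2 rho u_i u_i
    return (GAMMA - 1.0) * (rho_e - kinetic)

def sutherland(T):
    """Molecular viscosity from Sutherland's law, eq. (2.6)."""
    return MU0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

# Example state: density, momentum vector, total energy per unit volume.
p = pressure(1.2, (120.0, 0.0, 0.0), 2.5e5)
mu = sutherland(T0)   # recovers MU0 at the reference temperature
```

Note that `sutherland(T0)` returns μ_0 exactly, which is a convenient sanity check when nondimensionalizing the viscosity.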
3. NUMERICAL METHOD
The numerical algorithms for solving the governing equations (2.1) and (2.9) are based on a loosely coupled approach, in which the flow and turbulence variables are updated separately. At each time step, the flow equations (2.1) are solved first using the previously updated turbulent eddy viscosity. The turbulence equation (2.9) is then solved using the newly updated flow variables to advance the turbulence variable in time. Such a loosely coupled approach allows for the easy interchange of new turbulence models. Another advantage of this approach is that all the working arrays used to solve the flow equations can be reused to solve the turbulence equation. As a result, no additional storage is needed for the present implementation.
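The loosely coupled update described above can be summarized in a short driver sketch. The function names `solve_flow` and `solve_turbulence` are hypothetical placeholders standing in for the actual solvers, not routines from the paper.

```python
# Sketch of one loosely coupled time step: the mean-flow equations (2.1) are
# advanced first with the eddy viscosity frozen, then the turbulence
# equation (2.9) is advanced using the newly updated flow variables.
# solve_flow / solve_turbulence are hypothetical stand-ins.

def loosely_coupled_step(flow, nu_tilde, solve_flow, solve_turbulence):
    # 1. Advance the mean flow using the previously updated eddy viscosity.
    flow = solve_flow(flow, nu_tilde)
    # 2. Advance the turbulence working variable using the new flow state.
    nu_tilde = solve_turbulence(nu_tilde, flow)
    return flow, nu_tilde

# Minimal demonstration with trivial stand-in "solvers":
flow, nu = loosely_coupled_step(
    1.0, 0.1,
    solve_flow=lambda f, n: f + 0.1 * n,                # dummy flow update
    solve_turbulence=lambda n, f: 0.5 * n + 0.01 * f,   # dummy turbulence update
)
```

The ordering (flow first, turbulence second) is what allows the turbulence solve to reuse the flow solver's working arrays, as noted above.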
3.1 HYBRID DISCRETE FORMULATION
Assuming Ω_h a classical triangulation of Ω, N_I a standard linear finite element shape function associated with a node I, and C_I a dual mesh cell associated with the node, the following hybrid finite volume and finite element formulation is used for discretization of the Navier-Stokes equations:

\int_{C_I} \frac{\partial \mathbf{U}_h}{\partial t}\, d\Omega + \oint_{\partial C_I} \mathbf{F}_j(\mathbf{U}_h)\, n_j\, d\Gamma = -\int_{\Omega_h} \frac{\partial N_I}{\partial x_j}\, \mathbf{G}_j(\mathbf{U}_h)\, d\Omega, \quad \mathbf{U}_h \in T_h,   (3.1)
where T_h is a discrete approximation space of suitably continuous functions. Inviscid fluxes are discretized using a cell-vertex finite volume formulation, where the control volumes are nonoverlapping dual cells. Due to its easy construction, the median dual control volume, which is constructed by connecting the centroids of the triangles to the midpoints of their sides as shown in Figure 1, is widely used for the discretization of the inviscid fluxes. It can also be shown that the finite volume approximation using such a median dual control volume is equivalent to the finite element approximation using linear elements. Numerical evidence, however, indicates that the median dual control volume suffers from excessive numerical diffusion on the highly stretched meshes required to resolve viscous regions.
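As a concrete illustration of the cell-vertex, edge-based finite volume idea (an illustrative sketch, not code from the paper), the accumulation of a first-order upwind flux over dual-cell interfaces can be written for a scalar 1D advection model; the edge list and unit face areas here are simplifying assumptions.

```python
# Edge-based residual accumulation over dual-cell interfaces, sketched for
# scalar advection u_t + a u_x = 0 on a 1D vertex-centered grid.
# Each edge (i, j) contributes one upwind interface flux to both vertices,
# mirroring the single loop over dual-cell faces in a cell-vertex scheme.

def accumulate_residual(u, edges, a=1.0):
    R = [0.0] * len(u)
    for i, j in edges:   # one pass over edges, not over cells
        # Upwind (Rusanov-type) interface flux: central part + |a| dissipation.
        f = 0.5 * (a * u[i] + a * u[j]) - 0.5 * abs(a) * (u[j] - u[i])
        R[i] += f        # flux leaves the dual cell of vertex i
        R[j] -= f        # and enters the dual cell of vertex j
    return R

u = [0.0, 1.0, 2.0, 3.0]
edges = [(0, 1), (1, 2), (2, 3)]
R = accumulate_residual(u, edges)
```

Because every interior face is visited exactly once, conservation is automatic: the flux added to one vertex is subtracted from its neighbor.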
The governing equations are advanced in time with the backward Euler scheme. Linearizing the right-hand-side residual about the current time level yields

\frac{V}{\Delta t}\,\Delta U = -\Bigl(R + \frac{\partial R}{\partial U}\,\Delta U\Bigr),   (3.5)

where R is the right-hand-side residual, which equals zero for a steady-state solution. Writing the equation for all nodes leads to the delta form of the backward Euler scheme

A\,\Delta U = -R,   (3.6)

where

A = \frac{V}{\Delta t}\,I + \frac{\partial R}{\partial U}.   (3.7)

Note that as Δt tends to infinity, the scheme reduces to the standard Newton's method for solving a system of nonlinear equations, which is known to have a quadratic convergence property. The term ∂R/∂U represents symbolically the Jacobian matrix; it involves the linearization of both the inviscid and viscous flux vectors. In order to obtain the quadratic convergence of Newton's method, the linearization of the numerical flux function must be virtually exact. Unfortunately, explicit formation of the Jacobian matrix resulting from the exact linearization of any second-order scheme is prohibitively expensive. Instead, the left-hand side is built from the first-order flux function

R_i^{\mathrm{inv}} = \sum_j \frac{1}{2}\,\bigl[\mathbf{F}(U_i;\mathbf{n}_{ij}) + \mathbf{F}(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,(U_j - U_i)\bigr]\,|s_{ij}|,   (3.8)

where s_ij is the area vector normal to the control-volume interface associated with the edge ij, n_ij = s_ij/|s_ij| its unit vector in the direction of s_ij, C the speed of sound, and the summation is over all neighboring vertices j of vertex i. Note that this flux function is derived by replacing Roe's matrix by its spectral radius in the well-known Roe's flux function [18],

|\lambda_{ij}| = |\mathbf{V}_{ij}\cdot\mathbf{n}_{ij}| + C_{ij} + \frac{2(\mu+\mu_t)_{ij}}{\rho_{ij}\,|\mathbf{x}_j - \mathbf{x}_i|}.   (3.9)

The linearization of flux function (3.8) yields

\Delta R_i = \sum_j \frac{1}{2}\,\bigl[(J(U_i;\mathbf{n}_{ij}) + |\lambda_{ij}|\,I)\,\Delta U_i + (J(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,I)\,\Delta U_j\bigr]\,|s_{ij}|   (3.10)

for the inviscid flux vector, and the viscous Jacobian matrix is simply approximated by its spectral radius in the above linearization process. The diagonal and off-diagonal blocks of the Jacobian are then

\frac{\partial R_i}{\partial U_i} = \sum_j \frac{1}{2}\,(J(U_i;\mathbf{n}_{ij}) + |\lambda_{ij}|\,I)\,|s_{ij}|,   (3.11)

\frac{\partial R_i}{\partial U_j} = \frac{1}{2}\,(J(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,I)\,|s_{ij}|,   (3.12)

where J = ∂F/∂U represents the Jacobian of the flux vector. The penalty for making these approximations in the linearization process is that the quadratic convergence of Newton's method can no longer be achieved, because of the mismatch and inconsistency between the right- and left-hand sides in equation (3.6). Although the number of time steps (Newton iterations, if Δt tends to infinity) may increase, the cost per time step is significantly reduced: it takes less CPU time to compute the Jacobian matrix, and the conditioning of the simplified Jacobian matrix is improved, thus reducing the computational cost of solving the resulting linear system.

As only a first-order representation of the numerical fluxes is considered, the number of nonzero entries in each row of the matrix is related to the number of edges incident to the node associated with that row. In other words, each edge ij will guarantee nonzero entries in the i-th column and j-th row and, similarly, the j-th column and i-th row. In addition, nonzero entries will be placed on the diagonal of the matrix. Using an edge-based data structure, the left-hand-side Jacobian matrix is stored in upper, lower, and diagonal forms, which can be expressed as

U_{ij} = \frac{1}{2}\,(J(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,I)\,|s_{ij}|, \quad j > i,   (3.13)

L_{ij} = \frac{1}{2}\,(J(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,I)\,|s_{ij}|, \quad j < i,   (3.14)

D_i = \frac{V}{\Delta t}\,I + \sum_j \frac{1}{2}\,(J(U_i;\mathbf{n}_{ij}) + |\lambda_{ij}|\,I)\,|s_{ij}|.   (3.15)

Note that U, L, and D represent the strict upper matrix, strict lower matrix, and diagonal matrix, respectively. Both the upper and lower matrices require a storage of nedge × neqns × neqns, and the diagonal matrix needs a storage of npoin × neqns × neqns, where npoin is the number of grid points, neqns (= 5 in 3D) the number of unknown variables, and nedge the number of edges. Note that in 3D, nedge ≈ 7 npoin. Clearly, the upper and lower matrices consume substantial amounts of memory, taking 93% of the storage required for the left-hand-side Jacobian matrix.

Equation (3.6) represents a system of linear simultaneous algebraic equations that needs to be solved at each time step. The most widely used methods to solve this linear system are iterative solution methods and approximate factorization methods. Recently, the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) method, developed first by Jameson and Yoon [19] on structured grids, has been successfully generalized and extended to unstructured meshes by several authors [20-22]. The LU-SGS method is attractive because of its good stability properties and competitive computational cost in comparison to explicit methods. In this method, the matrix A is split into three matrices: a strict lower matrix L, a diagonal matrix D, and a strict upper matrix U,

A\,\Delta U = \bigl[(D + L)\,D^{-1}\,(D + U) - L\,D^{-1}\,U\bigr]\,\Delta U = -R.   (3.16)

This system is approximately factored by neglecting the last term on the right-hand side of equation (3.16), yielding (D + L) D^{-1} (D + U) ΔU = -R. The resulting equation can be solved in the two steps shown in equations (3.17) and (3.18), each of them involving only simple block matrix inversions:

(D + L)\,\Delta U^* = -R,   (3.17)

(D + U)\,\Delta U = D\,\Delta U^*.   (3.18)

Both the lower and upper sweeps can be vectorized by appropriately reordering the grid points [22], resulting in a very efficient algorithm. It is found that the CPU cost of one LU-SGS step is approximately 50% of one three-stage Runge-Kutta explicit step.

It is clear that the above algorithm involves primarily Jacobian matrix-solution incremental vector products. Such an operation can be approximately replaced by computing increments of the flux vector F:

J\,\Delta U \approx \Delta\mathbf{F} = \mathbf{F}(U + \Delta U) - \mathbf{F}(U).   (3.19)

With this approximation, the two sweeps become

\Delta U_i^* = D^{-1}\Bigl[-R_i - \sum_{j<i} \frac{1}{2}\,\bigl(\Delta\mathbf{F}(U_j^*;\mathbf{n}_{ij}) - |\lambda_{ij}|\,\Delta U_j^*\bigr)\,|s_{ij}|\Bigr],   (3.20)

\Delta U_i = \Delta U_i^* - D^{-1} \sum_{j>i} \frac{1}{2}\,\bigl(\Delta\mathbf{F}(U_j;\mathbf{n}_{ij}) - |\lambda_{ij}|\,\Delta U_j\bigr)\,|s_{ij}|.   (3.21)

The most remarkable achievement of this approximation is that there is no need to store the upper and lower matrices U and L, which substantially reduces the memory requirements. It is found that this approximation does not compromise numerical accuracy, and the extra computational cost is negligible.

Although the LU-SGS method is more efficient than its explicit counterpart, a significant number of time steps is still required to achieve the steady-state solution, due to the nature of approximate factorization schemes. One way to speed up the convergence is to use iterative methods. In this work, the system of linear equations is solved by the Generalized Minimal RESidual (GMRES) method of Saad and Schultz [23]. This is a generalization of the conjugate gradient method for solving a linear system whose coefficient matrix is not symmetric and/or positive definite. The use of GMRES combined with different preconditioning techniques is becoming widespread in the CFD community for the solution of the Euler and Navier-Stokes equations [12,13,24-27]. GMRES minimizes the norm of the computed residual vector over the subspace spanned by a certain number of orthogonal search directions. It is well known that the speed of convergence of an iterative algorithm for a linear system depends on the condition number of the matrix A. GMRES works best when the eigenvalues of matrix A are clustered. The easiest and most common way to improve the efficiency and robustness of GMRES is to use preconditioning in an attempt to cluster the eigenvalues at a single value. The preconditioning technique involves solving an equivalent preconditioned linear system

\tilde{A}\,\Delta\tilde{U} = -\tilde{R},   (3.22)

P^{-1}\,A\,\Delta U = -P^{-1}\,R,   (3.23)

where the LU-SGS factorization serves as the preconditioner:

P = (D + L)\,D^{-1}\,(D + U).   (3.24)

In this way, the same matrix-free mechanism used in the LU-SGS method to compute fluxes can be used by the GMRES+LU-SGS. Overall, the extra storage of the GMRES+LU-SGS method is approximately 10% of the total memory requirements.

The turbulence model equation is integrated in time using an implicit method similar to that of the mean flow equations. The same GMRES+LU-SGS procedure is then used to solve the resulting linear system. No matrix-free approach is attempted for the turbulence equation, as the storage of the left-hand-side Jacobian for the single turbulence equation is not very demanding. Since the turbulence equation is solved separately from the mean flow equations, all the working arrays used to solve the flow equations can be used to solve the turbulence equation without any additional storage.
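The two-sweep structure of the LU-SGS solve can be illustrated on a small generic system. This is a sketch of the factorization idea only, not the paper's implementation: the matrix A is stored explicitly here purely for clarity, whereas the paper's point is precisely that L and U need never be stored.

```python
# Sketch of the LU-SGS two-sweep solve. The method replaces A = D + L + U by
# the factored operator P = (D + L) D^{-1} (D + U) and solves P x = b via the
# two triangular sweeps of eqs. (3.17)-(3.18). A is stored only for clarity.

def lu_sgs_sweeps(A, b):
    n = len(A)
    d = [A[i][i] for i in range(n)]          # diagonal D
    # Forward sweep, eq. (3.17): (D + L) x* = b
    x_star = [0.0] * n
    for i in range(n):
        s = sum(A[i][j] * x_star[j] for j in range(i))       # strict lower part
        x_star[i] = (b[i] - s) / d[i]
    # Backward sweep, eq. (3.18): (D + U) x = D x*
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))     # strict upper part
        x[i] = (d[i] * x_star[i] - s) / d[i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = lu_sgs_sweeps(A, b)   # one symmetric Gauss-Seidel step toward A x = b
```

The result solves P x = b exactly (P differs from A only by the neglected L D^{-1} U term), which is exactly what makes P a cheap preconditioner for GMRES in equation (3.24).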
4. NUMERICAL RESULTS
Fig. 4a: Unstructured mesh used for computing turbulent flow over a RAE2822 airfoil (nelem=25,172, npoin=12,741, nboun=310).

Fig. 4d: Computed eddy viscosity contours for turbulent flow over a RAE2822 airfoil (M∞=0.73, Re=6.5 million, α=2.8).
[Comparison of computed pressure coefficient (-Cp versus X/C) with experimental data.]

Fig. 4b: Computed Mach number contours for turbulent flow over a RAE2822 airfoil (M∞=0.73, Re=6.5 million, α=2.8).
Fig. 4f: Comparison of computed skin friction coefficient with experimental data for turbulent flow over a RAE2822 airfoil.
[Convergence histories, Log(resi) versus time steps and CPU seconds, for the explicit, matrix-free LU-SGS, and matrix-free GMRES+LU-SGS schemes.]

Fig. 5a: Unstructured mesh used for computing turbulent flow over a Douglas 3-element airfoil (nelem=42,059, npoin=21,343, nboun=633).
[Comparison of computed pressure coefficient (-Cp versus X/C) with experimental data.]

Fig. 5b: Computed Mach number contours for turbulent flow over a four-element airfoil (M∞=0.1, Re=10 million, α=10).
[Convergence histories, Log(resi) versus time steps, for the explicit, matrix-free LU-SGS, and matrix-free GMRES+LU-SGS schemes.]

Fig. 5c: Computed pressure contours for turbulent flow over a Douglas 3-element airfoil (M∞=0.1, Re=10 million, α=10).
[Convergence histories, Log(resi) versus CPU seconds, for the explicit, matrix-free LU-SGS, and matrix-free GMRES+LU-SGS schemes.]

Fig. 5d: Computed eddy viscosity contours for turbulent flow over a Douglas 3-element airfoil (M∞=0.1, Re=10 million, α=10).
law distribution. It can be seen that the solution using containment dual control volumes is more accurate than the one using median dual control volumes. Note that the median dual control volume solution looks fairly good, as the mesh does not contain very high stretching ratios.

Test Case 4. Turbulent flow past a NACA0012 airfoil
This test case computes turbulent flow over a NACA0012 airfoil at a Mach number of 0.3, a chord Reynolds number of 1.86 million, and an incidence of 3.59 degrees. This is a 3D simulation of a 2D problem. The mesh used in the computation, shown in Fig. 7a, contains 2,014,616 elements, 359,860 grid points, and 45,865 boundary points. The minimum normal spacing at the wall is about 0.94E-05.
[Computed velocity profile, Vx/Vinf versus eta-coordinate.]

Fig. 7d: Comparison of pressure coefficient distribution between the computational result and experimental data at M∞ = 0.3, α = 3.59, and Re=1,860,000.
Test Case 5. Transonic turbulent flow past an ONERA M6 wing
The fifth test is the well-documented case of transonic flow over an ONERA M6 wing configuration. The flow solutions are presented at a Mach number of 0.84, a Reynolds number of 18.2 million, and an angle of attack of 3.06 degrees. The mesh used in the computation consists of 2,953,333 elements, 504,790 grid points, and 24,166 boundary points. The minimum normal spacing at the wall is about 2.0E-06 and the biggest stretching ratio of the tetrahedra is 11,961.
[Each of Figs. 8c-8h compares Tetrahedra (Median), Tetrahedra (Containment), Hybrid, and Experiment, plotted as -Cp versus X/C.]

Fig. 8c: Comparison between experimental and computed pressure coefficient distributions for the wing section at 20% semispan.

Fig. 8d: Comparison between experimental and computed pressure coefficient distributions for the wing section at 44% semispan.

Fig. 8e: Comparison between experimental and computed pressure coefficient distributions for the wing section at 65% semispan.

Fig. 8f: Comparison between experimental and computed pressure coefficient distributions for the wing section at 80% semispan.

Fig. 8g: Comparison between experimental and computed pressure coefficient distributions for the wing section at 90% semispan.

Fig. 8h: Comparison between experimental and computed pressure coefficient distributions for the wing section at 95% semispan.
Fig. 9a: Surface mesh used for computing turbulent flow past the M5 airplane (nelem=2,104,803, npoin=368,122, nboun=33,234).