Agenda
Introduction
What is CFD?
Fluid Characteristics
Flow Modeling
Turbulence Modeling
CFD Workflow
CFD Optimisation
CFD Compute challenges
MPI & CFD
MDO
Example case
CFD 101
The study of dynamic fluid flows using numerical methods
CFD 101
Fluid flows encountered in everyday life include:
CFD 101
Example:
Wind Tunnels
Basic Idea
Traditionally all fluid simulation based on laws of
conservation, specifically:
Conservation of momentum
Conservation of mass
Conservation of energy
CFD 101
Reference Frame: temporal or spatial
Temporal (Lagrangian)
Reference to time t = 0
Reference frame follows the fluid parcel as it moves
Used for particle-based CFD and solid mechanics
Spatial (Eulerian)
Reference to time t = t
Reference frame is fixed in space; the fluid moves through it
Used for flow-based CFD
CFD 101
Generally, we use spatial coordinates to
determine fluid parcel properties at a
particular point, ie Eulerian system:
CFD 101
Classification:
Continuum
Particle
Scheme:
Navier-Stokes
Euler
SPH
LBM
Plus:
No topology
Handles complex boundaries
Guaranteed conservation of mass
Microscopic interactions
Pressure from kernel
Handles multiphase boundaries
Easy to implement
Scales well
Minus:
Needs a lot of particles (many will never be used)
Fluid Characteristics
Macroscopic Properties
ρ Density
μ Viscosity
p Pressure
T Temperature
v Velocity
Fluid Types
Newtonian or Non-Newtonian:
Newtonian: rate of deformation (strain rate) is
proportional to shear stress
ie water, air, petrol, beer
Non-Newtonian Types:
Viscosity
The resistance of a fluid to shear stress
Measured in Pascal-seconds (Pa·s), symbol μ
Technically, μ is the dynamic viscosity. For
convenience, we also sometimes use the
kinematic viscosity, ν, which is dynamic
viscosity divided by density, ie:
ν = μ/ρ
Note that ν is the greek letter nu, not the english
letter v
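As a quick check of the definition, a short sketch using typical textbook values for water at 20 °C (the values are approximate and are not from the slides):

```python
# Kinematic viscosity: nu = mu / rho, units m^2/s
mu = 1.0e-3    # dynamic viscosity of water, Pa.s (approx, 20 C)
rho = 998.0    # density of water, kg/m^3 (approx, 20 C)

nu = mu / rho  # ~1.0e-6 m^2/s
```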
Equation of motion in x
Equation of motion in y
Equation of motion in z
Continuity Equation
Energy Equation
Deriving Momentum
If forces are not equal, then the fluid parcel accelerates
As well as the gravitational force and pressure gradient, we also
have three additional terms: acceleration (∂v/∂t), convection and
anisotropic surface stresses:
Convection (convective acceleration) is acceleration w.r.t. space (eg a
tapered nozzle). It is defined as:
(vx ∂/∂x + vy ∂/∂y + vz ∂/∂z)v = (v·∇)v
Surface stresses are more complicated. These depend on the fluid
properties. In the general case, we can write them as ∇·T, where T is the
deviatoric stress tensor (see later)
Stress
Stress is defined as the average force per unit
area.
Symbol: σ
Definition: σ = F/A
Measured in Pascals (Pa): 1 Pa = 1 N/m²
Stress Modeling
Stress can be defined by 9 components:
3 orthogonal normal stresses (σ)
6 orthogonal shear stresses (τ)
Expressed as a stress tensor, σ:

    |σ11 σ12 σ13|   |σxx σxy σxz|   |σx  τxy τxz|
σ = |σ21 σ22 σ23| = |σyx σyy σyz| = |τyx σy  τyz|
    |σ31 σ32 σ33|   |σzx σzy σzz|   |τzx τzy σz |

Normal stress = stretching, ie for x: σxx = λ∇·v + 2μ(∂vx/∂x)
Shear stress = deformation, ie for xy: τxy = τyx = μ(∂vy/∂x + ∂vx/∂y)
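As a numeric illustration of the shear-stress formula, a minimal sketch for a simple shear flow; the velocity gradient is an invented example value:

```python
# Shear stress in a Newtonian fluid: tau_xy = mu * (dvy/dx + dvx/dy)
# Simple shear flow: vx varies linearly with y, vy = 0 everywhere.
mu = 1.0e-3        # dynamic viscosity of water, ~1e-3 Pa.s
dvx_dy = 100.0     # velocity gradient, 1/s (invented example)
dvy_dx = 0.0       # no cross-stream velocity

tau_xy = mu * (dvy_dx + dvx_dy)   # 0.1 Pa
```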
ρ(∂v/∂t + (v·∇)v) = −∇p + μ∇²v + ρg
Terms, left to right:
Unsteady acceleration
Convective acceleration
Pressure gradient
Viscosity
Other forces
Turbulence Modeling
Turbulence Models
DNS
RANS-based
Boussinesq: algebraic
Spalart-Allmaras: one eqn
k-ε: two eqns
Prandtl: Mixing length model
RSM
LES
DES (RANS + LES)
PDF (with particles)
Vortex Method (grid free,
uses FMM)
DNS
Direct Numerical Simulation: explicitly
solving Navier-Stokes
Must resolve all turbulence scales in the mesh
Mesh size N³ must be ≥ Re^(9/4) for spatial resolution
Memory required grows as Re^2.25
Timestep must be very fine for explicit integration
(CFL condition for convergence, Cmax = 1), so:
C = vxΔt/Δx + vyΔt/Δy + vzΔt/Δz ≤ Cmax
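To see how quickly DNS cost grows, a small sketch of the Re^(9/4) scaling argument; the Reynolds numbers used are arbitrary illustrative values:

```python
# Rough DNS cost model: total mesh points N^3 ~ Re^(9/4),
# so memory grows ~ Re^2.25 (per the scaling argument above).
def dns_points(re):
    """Estimated mesh point count needed to resolve all scales."""
    return re ** (9 / 4)

# Increasing Re by 10x multiplies the point count by 10^2.25 (~178x)
ratio = dns_points(1e5) / dns_points(1e4)
```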
Turbulence Modeling
RANS: Reynolds Averaged Navier-Stokes
Navier-Stokes provides the instantaneous velocity
field, v
RANS provides time-averaged equations
Have an average value over a period of time, v̄, plus a
fluctuating value, v′
Mean value determined by spatial or temporal
averaging
Fluctuating (turbulent) part is a stress tensor, called the
Reynolds stress tensor, τ:
τij = −ρ ⟨v′i v′j⟩
Turbulence Modeling
The problem with RANS
The term ⟨v′i v′j⟩ is non-linear, preventing closure
We have too many unknowns, not enough eqns
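The Reynolds decomposition can be illustrated numerically; a sketch using a synthetic velocity signal (the signal and its statistics are invented for illustration only):

```python
import numpy as np

# Synthetic velocity signal: a mean flow plus random fluctuations
rng = np.random.default_rng(0)
v = 2.0 + 0.3 * rng.standard_normal(100_000)

v_mean = v.mean()         # time-averaged value, v-bar
v_prime = v - v_mean      # fluctuating part, v'

# The fluctuation averages to (numerically) zero, but its
# self-correlation <v'v'> does not - this nonzero term is what
# appears in the Reynolds stress tensor and prevents closure.
reynolds_term = (v_prime * v_prime).mean()
```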
Turbulence Modeling
Spalart-Allmaras proposed a one equation
model for the turbulent viscosity for
aerodynamic flows (see AIAA paper or CFD
Wiki)
Prandtl added a mixing length to handle eddy
viscosity varying with distance from bounding
wall (boundary layer)
k-ε models turbulent kinetic energy (k) and
dissipation (ε) separately in two eqns
Turbulence Modeling
RSM Reynolds Stress Model
LES
Large Eddy Simulation
Fundamentally, a low-pass filtering to remove small
scales
Solution is a filtered flow velocity field
Can be explicit or implicit
Implicit filter based on the discretisation scheme
Good: Use full mesh, no computational cost for subfilter term
Bad: Shape of LES filter depends on grid, hard to determine,
truncation errors
LES
Parameters:
Filter type (usually box or gaussian)
Cutoff filter width (cutoff length), denoted Δ
Cutoff time scale, denoted τc
[Images: DNS with domain L³; LES with Δ = L/32; LES with Δ = L/16]
Images User:Charlesreid1 / Wikimedia Commons / CC-BY-SA-3.0
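The low-pass filtering idea behind LES can be sketched in one dimension with a box filter (a moving average); the test signal and filter width below are arbitrary choices for illustration:

```python
import numpy as np

def box_filter(u, width):
    """Box (top-hat) filter: moving average over `width` samples."""
    kernel = np.ones(width) / width
    return np.convolve(u, kernel, mode="same")

# Signal with a large-scale and a small-scale component
x = np.linspace(0, 2 * np.pi, 512)
u = np.sin(x) + 0.2 * np.sin(40 * x)

u_filtered = box_filter(u, width=25)   # resolved (large) scales
u_subfilter = u - u_filtered           # subfilter (small) scales
```

The filtered field keeps the slow sin(x) mode nearly intact while the fast mode is strongly attenuated, which is exactly the separation into resolved and subfilter scales described above.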
The Future?
Discretisation
Meshing
Temporal discretisation
Time steps
Equation discretisation
Numerical methods
Spatial Discretisation
Temporal Discretisation
Set start/stop times (range)
Steady-state: rule of thumb, run for ~10 flow-through times of the domain
Unsteady: as desired
CFL Number
For convergence of cases using explicit (time-marching)
solvers, the Courant-Friedrichs-Lewy
condition (Cmax = 1) should be satisfied:
C = |U|Δt/Δx ≤ Cmax
CFL Number
Large Δx
Small Δt
Both
CFL Number
Example:
Geometry 1 m × 1 m × 0.1 m
Fixed mesh 20 × 20 × 1 => Δx = 0.05 m
|U| = 2.0 m/s
For 10x domain, set end time = 5 s
Calculate Δt required:
Δt = CΔx / |U|
   = 1.0 × 0.05 / 2.0
   = 0.025 s
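The same worked example as a short sketch:

```python
# CFL-limited timestep for the example above:
# 1 m domain, 20 cells, |U| = 2 m/s, Cmax = 1.0
dx = 1.0 / 20          # cell size: 0.05 m
u = 2.0                # flow speed, m/s
c_max = 1.0            # Courant number limit for explicit solvers

dt = c_max * dx / u    # maximum stable timestep: 0.025 s
n_steps = 5.0 / dt     # 5 s end time (10 flow-throughs) -> 200 steps
```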
Discretisation Methods
Finite Differences (differential form of PDE)
Approximation of nodal derivatives
Simple and effective, easy to derive
Limited to structured meshes
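A minimal sketch of the nodal-derivative approximation that finite differences rely on; the test function and step size are arbitrary choices:

```python
import math

def central_difference(f, x, h):
    """Second-order central approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Approximate d/dx sin(x) at x = 1; the exact answer is cos(1)
approx = central_difference(math.sin, 1.0, 1e-5)
exact = math.cos(1.0)
```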
Computational Meshes
Three basic classifications:
Structured Meshes
Topologically equivalent to cartesian grid
Limited to simple domains
Can be periodic, nonperiodic, or periodic with
cusp
Computational Meshes
Block Structured Meshes
Multilevel subdivision of the domain
Special treatment required at block interfaces
Greater flexibility, blockwise local refinement
Unstructured Meshes
Suit arbitrary domains, amenable to AMR
techniques
Complex data structures, difficult to implement
Boundary Conditions:
Inlet (Γin, where v·n < 0)
Solver
We have a system of coupled nonlinear
algebraic equations to solve iteratively:
Outer loop: update coefficients using the previous
iteration
Inner loop: use iterative solvers (ie CG), as direct
solvers (eg Gaussian Elimination) are too
expensive
Solving Techniques
Generally too large for direct solvers
Use iterative solvers, eg:
SOR
Lanczos
CG, BiCG
GAMG
GMRES
BiCGStab
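To make the inner loop concrete, a minimal conjugate-gradient sketch (CG is one of the listed methods); the small 1D-Laplacian test system is an invented example of the kind of SPD matrix discretised PDEs produce:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p     # step along the search direction
        r -= alpha * Ap    # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system: a 1D Laplacian, typical of discretised PDEs
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```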
Post Processing
Important Variables:
Pressure -> Forces on objects
Velocity -> Flow Structures
Shear -> Erosion
Vorticity -> rotation, mixing
Turbulent K.E & Dissipation rate -> heat
transfer/mass transfer
Temperature -> evaporation, combustion
Radiation
Post Processing
Calculate derived quantities (ie vorticity)
Calculate integral parameters (ie lift)
Visualisation:
1D lines
2D Streamlines, contours, slices
3D cutlines, planes, isosurfaces, isovolumes
Arrow plots, particle trackers
Animations
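Computing a derived quantity such as vorticity from a velocity field can be sketched with finite differences; the field below is an invented solid-body rotation, chosen because its vorticity is known analytically to be 2ω:

```python
import numpy as np

# Solid-body rotation at angular speed omega:
# u = -omega*y, v = omega*x, so z-vorticity dv/dx - du/dy = 2*omega
omega = 1.5
y, x = np.mgrid[-1:1:64j, -1:1:64j]
u = -omega * y
v = omega * x

dy = y[1, 0] - y[0, 0]
dx = x[0, 1] - x[0, 0]
dvdx = np.gradient(v, dx, axis=1)
dudy = np.gradient(u, dy, axis=0)
vorticity = dvdx - dudy        # should be 2*omega everywhere
```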
Post Processing
Translation:
Velocity vectors/flowlines
streamlines, pathlines, streaklines
Deformation
Strain Rate
Rotation
Vorticity/Helicity
Post Processing
Force calculations:
Lift, Drag, Cf, etc
Integrate pressure field over surface area
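Integrating a pressure field over a surface can be sketched for a 2D cylinder, where the force per unit span is F = −∮ p n ds; the pressure distribution used below is an invented analytic stand-in, not a real CFD result:

```python
import numpy as np

# Pressure force on a 2D cylinder of radius R, per unit span
R = 0.5
theta = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
ds = R * (theta[1] - theta[0])             # arc length per segment

# Invented pressure distribution: uniform plus a cos(theta) part.
# Only the cos(theta) part produces a net force (in x).
p = 100.0 + 10.0 * np.cos(theta)

nx, ny = np.cos(theta), np.sin(theta)      # outward surface normal
fx = -np.sum(p * nx * ds)                  # net force in x (drag dir.)
fy = -np.sum(p * ny * ds)                  # net force in y (lift dir.)
```

Analytically fx = −10πR and fy = 0 here; the uniform 100 Pa component integrates to zero around a closed surface, which is why only pressure *differences* over the body matter.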
CFD Optimisation
CFD Challenges
Direct Numerical Simulation of Navier-Stokes is well
beyond present HPC capacity for real-world problems.
Demand for improved CFD simulations continues to grow, ie:
Grid Generation
Model must be discretised into a mesh in Euclidean
space
Structured vs Unstructured meshes
Triangles, quads, hexahedra
Grid generators
Large mesh pre-processing
10M Cells needs ~ 100GB of RAM
DMP Workload
Load balance between nodes
Communication between nodes
Shifting balance (dynamic mesh, AMR,
animated models)
Rotate the model, or the frame of reference?
Weak Scaling
Works well for CFD models as domains are readily
decomposed
..but, diminishing returns as we add more cores
Communication between cores becomes dominant,
eg at 512 cores, threads spend 1/3 of their time on compute,
2/3 on communications/wait
Interconnect bandwidth & Latency become critical
Inter-domain message size defined by size of domain
faces => Bandwidth
Multi-level solvers (ie multigrid) use lots of small
messages => latency
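The bandwidth point can be illustrated with a toy model: for a cubic mesh split across P ranks, each subdomain's face area (message size) shrinks more slowly than its volume (compute), so communication becomes relatively more expensive as cores are added. The mesh and core counts below are illustrative only:

```python
# Toy model of domain decomposition: an n^3 mesh split into p cubic
# subdomains. Compute scales with subdomain volume, communication
# with subdomain face area, so their ratio worsens as p grows.
def comm_to_compute(n, p):
    """Face-area to volume ratio of one subdomain (p a perfect cube)."""
    side = n / p ** (1 / 3)        # cells per subdomain edge
    volume = side ** 3             # compute work per rank
    faces = 6 * side ** 2          # halo cells exchanged per rank
    return faces / volume

# Increasing cores 8x halves the subdomain edge: the ratio doubles
r_64 = comm_to_compute(512, 64)
r_512 = comm_to_compute(512, 512)
```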
[Chart: MPI time breakdown — Compute, Allreduce, Waitall, Send, Recv, Barrier, Others — comparing allred vs b_allred runs]
Multi-Rail Networks
Using multiple network rails can provide
several advantages:
Separation of Communications and I/O traffic
Load sharing across rails
Splitting large messages across multiple rails
Redundancy in case of network port/cable failure
MDO Example
Supersonic Business Jet
Modeled as a coupled system:
Structures
Aerodynamics
Propulsion
Aircraft Range
Optimisation goal:
Maximum Range
S. Kodiyalam and J. S. Sobieski, "Alternate Sampling Methods for Multidisciplinary Design Optimization in a High Performance Computing Environment",
Proceedings, ASME Design Technical Conferences, Pittsburgh, PA, September 2001. ASME Paper Number: DETC2001/DAC-21081.
MDO Example
Commercial
Solvers
CFD++ (Metacomp)
CFD-ACE (ESI)
CFD-FASTRAN (ESI)
CFX (Ansys)
Fluent (Ansys)
PowerFLOW (Exa)
STAR-CD (CD-Adapco)
STAR-CCM+ (CD-Adapco)
Grid Generators
Gambit (Ansys)
HyperMesh (Altair)
ICEM CFD (Ansys)
Visualisation
CFD-VIEW (ESI)
Ensight (CEI)
HyperView (Altair)
Open Source
Solvers
CalculiX
ISAAC
OpenFOAM
OpenFVM
SU2
Grid Generators
GMSH
Gridgen
Netgen
Visualisation
Gnuplot
ParaView
Visit
vtk
[Demo screenshot: solver inputs — Pressure (psia), Temperature (R), Angle-of-Attack (deg), Angle-of-Sideslip (deg); values shown: 0.1, 6.0, 700.0, 0.0, 0.0; ν = 6.149×10⁻⁴ m²/s]