Reference:
[1] Bruce Jacob, Spencer W. Ng, David T. Wang, Memory Systems: Cache, DRAM, Disk
[2] Hideo Sunami, The invention and development of the first trench-capacitor DRAM cell, http://www.cmoset.com/uploads/4.1-08.pdf
[3] JEDEC Standard: DDR2 SDRAM Specification
[4] John P. Uyemura, Introduction to VLSI Circuits and Systems
[5] Benson, University Physics
Outline
• Preliminary
- parallel-plate capacitor
- RC circuit
- MOSFET
• DRAM cell
• DRAM device
• DRAM access protocol
• DRAM timing parameter
• DDR SDRAM
SRAM
Typical PC organization:
cache: uses SRAM
main memory: uses DRAM
DRAM cell
( 1T1C cell )
1 ps = 10^−12 s
Gaussian pillbox
Example: infinite conducting plate
Φ_E = ∮ E · dA = E_upper A_upper + E_down A_down = EA   (the face inside the conductor sees E = 0)
Q_enc = σA  ⇒  E = σ / ε0 just outside the surface of the conductor
(Figure: a pillbox straddling the surface of a charged conducting plate, with surface charge density σ on each face.)
Capacitor: parallel plate [2]
In most cases we don't care about the thickness of the plates. For simplicity we may assume each plate has no thickness (a flat charged sheet), with total charge Q on each plate. The definition of the surface charge density is then

surface charge density σ = Q / A

For a single sheet, the Gaussian pillbox has flux through both faces:
Φ_E = ∮ E · dA = E1 A1 + E2 A2 = Q_enc / ε0  ⇒  E1 = E2 = σ / ( 2 ε0 )

Between two oppositely charged sheets separated by distance d, the two contributions add:
E = σ / ε0
Capacitance [1]
Kirchhoff's voltage law: V = V_R + V_C, where V_R = I(t) · R, V_C = Q(t) / C, and I(t) = dQ(t)/dt
(Figure: source V in series with resistance R and capacitor C.)
For t ≫ RC, the capacitor holds its maximum charge: V_C = V and I(t) = 0, so the source doesn't charge the capacitor anymore.
Capacitance [2]
E = σ / ε0, V_C = E d

capacitance is defined by C = Q / V_C = ε0 A / d

1  C ∝ ε0, since from Gauss's law Φ_E = ∮ E · dA = Q_enc / ε0 we have E ∝ 1/ε0, hence V_C ∝ 1/ε0 and C = Q / V_C ∝ ε0
2  C ∝ 1/d, since if we fix the total charge Q and area A, then
   σ = Q/A is fixed ⇒ E = σ/ε0 is fixed ⇒ V = E d ∝ d
3  C ∝ A, since if we fix the potential difference V and spacing d, then
   E = V/d is fixed ⇒ σ is fixed due to E = σ/ε0 ⇒ Q = σ A ∝ A
Capacitance [3]
Suppose we add an insulator between the parallel metal plates; what happens to the capacitor?

Dipole: charges +q and −q separated by distance d form a dipole with dipole moment p = qd.
Polarization: P = dipole moment / unit volume.

With no charge on the capacitor, nothing happens. When charge is stored on the capacitor, its electric field separates the positive and negative charges inside the insulator (the insulator is polarized).

Without the insulator: E_ext = σ / ε0, V_C = E_ext d, C0 = Q / V_C = ε0 A / d
With the insulator: E = E_ext / ε_r, V_C = E d, C = Q / V_C = ε_r ε0 A / d = ε_r C0
Insulator (dielectric)
1  Keeping all geometrical parameters (area A and height d) fixed, we can add an insulator to increase the capacitance of the capacitor.
2  The insulator's induced polarization cancels part of the external field, so a smaller voltage gap can store the same charge. In other words, the capability of charge storage increases, so the capacitance also increases.
3  The design parameters of a capacitor are: area of the plates A, distance between the two plates d, and dielectric constant ε_r.
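The three design parameters above can be sketched numerically; the cell geometry below is purely illustrative, not a real DRAM capacitor:

```python
# Ideal parallel-plate capacitance from the slides: C = eps_r * eps0 * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, dist_m, eps_r=1.0):
    """Capacitance of an ideal parallel-plate capacitor."""
    return eps_r * EPS0 * area_m2 / dist_m

# Illustrative numbers only (not a real cell): 1 um^2 plate, 10 nm gap.
c0 = capacitance(1e-12, 10e-9)             # no dielectric: ~0.9 fF
c_hi = capacitance(1e-12, 10e-9, eps_r=4)  # adding a dielectric: C = eps_r * C0
assert abs(c_hi - 4 * c0) < 1e-25
```

A trench capacitor exploits the same formula: it enlarges A by folding the plates into the third dimension without enlarging the cell footprint.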
RC circuit
Kirchhoff's voltage law: V = V_R + V_C, where V_R = I(t) · R, V_C = Q(t) / C, and I(t) = dQ(t)/dt

First-order ODE: V = R dQ/dt + Q/C, with initial condition Q(0) = q

1  Charging: Q(0) = 0 with V = R dQ/dt + Q/C
   V_C = V ( 1 − exp( −t / RC ) )
2  Discharging: Q(0) = CV with 0 = R dQ/dt + Q/C
   V_C = V exp( −t / RC )

Charging drives the stored bit from x = 0 to x = 1; discharging drives it from x = 1 to x = 0 (current flows through R).
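The two solutions above can be checked numerically; the component values are illustrative:

```python
import math

def vc_charging(V, R, C, t):
    """Capacitor voltage while charging from Q(0) = 0: V_C = V * (1 - exp(-t/RC))."""
    return V * (1.0 - math.exp(-t / (R * C)))

def vc_discharging(V, R, C, t):
    """Capacitor voltage while discharging from Q(0) = C*V: V_C = V * exp(-t/RC)."""
    return V * math.exp(-t / (R * C))

# Illustrative values: 1 V source, 1 kOhm, 1 nF, so RC = 1 us.
V, R, C = 1.0, 1e3, 1e-9
tau = R * C
# After one time constant: ~63% charged, or ~37% remaining while discharging.
assert abs(vc_charging(V, R, C, tau) - (1 - math.exp(-1))) < 1e-12
assert abs(vc_discharging(V, R, C, tau) - math.exp(-1)) < 1e-12
```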
MOSFET (Metal-Oxide-Semiconductor) [1]
top view
polysilicon (poly)
SiO2
pFET
side view
http://ezphysics.nchu.edu.tw/prophys/electron/lecturenote/7_5.pdf
MOSFET [2]
nFET cross section pFET cross section
G (gate)
S (source) D (drain)
Typical gate capacitance C_G ∼ fF ( femtofarad, 10^−15 F )
MOSFET operation [1]
zero gate voltage → open switch
The n+ / p / n+ structure between source and drain forms two back-to-back pn junctions, so no current flows.
MOSFET operation [2]
positive gate voltage → closed switch
A positive gate voltage attracts electrons under the gate, forming an electron channel (width W) between the two n+ regions.
Outline
• Preliminary
• DRAM cell
• DRAM device
• DRAM access protocol
• DRAM timing parameter
• DDR SDRAM
DRAM cell
DRAM cell = cell transistor + storage capacitor
1  Charging: when the access transistor is on, the bitline charges the storage capacitor C through resistance R.
2  Leakage: the stored charge gradually leaks away, discharging the capacitor.
Scaling of memory cell and die size of DRAM
Storage capacitance should be kept constant despite the cell scaling to provide
adequate operational margin with sufficient signal-to-noise ratio
To solve the problems the capacitor faces during process scaling, i.e. to increase the parallel-plate area without increasing the cell size, two process families keep the capacitance above the acceptable value: the trench capacitor and the stacked capacitor.
Popular model of DRAM cell
Trench capacitor (深溝電容)
A scaling limit of capacitor structure
The dielectric film must be physically thin enough not to fill up the trench.
F: feature size. The film coats both sides of the trench opening, so 2 Ti < F, where Ti is the film thickness.
After K. Itoh, H. Sunami, K. Nakazato, and M. Horiguchi, ECS Spring Meeting, May 4, 1998
Objective: decrease feature size to increase density of DRAM cells
(Figure: sense amplifier and cell capacitor with the access transistor off and then open; opening the transistor shifts the bitline from Vref by ∆V, and the amplifier drives it toward Vdd.)
3 Data restoration
Question 3: what do you think of the claim "if the transistor is off, then the capacitor is isolated and no leakage current flows out"?
(Figure: sense amplifier drives the bitline to Vdd, restoring the charge in the cell capacitor.)
• Restores the value of cell after the voltage on the bitline is sensed
Signal EQ activates two transistors such that the source Vref = Vcc / 2 charges the two drains (bitlines).
4 steps of amplifier operation [2]
1 Vref + ∆V > Vcc/2 exceeds the threshold, so the corresponding transistor is turned on
2 signal SAN is set to GND (ground): SAN = 0
3 current from the bitline flows into SAN; the bitline voltage decreases until it reaches 0
4 V < Vcc/2 on the other bitline, so its complement exceeds the threshold and that transistor is turned on
With SAP = Vcc, the turned-on transistor pulls the high bitline up to Vcc while the low bitline stays at V = 0.
8 signal CSL (column-select line) is activated; the transistor is turned on and current flows into the output. After the voltage at the output is stable, CSL is deactivated and the transistor turns off; the data is then stored in the output (row buffer).
The cross-coupled sense amplifier is a bi-stable circuit.
Written into DRAM array
• Data written by the memory controller is buffered by the I/O buffer of the DRAM device and used to overwrite the sense amplifiers and DRAM cells.
• The time period required for the write data to overdrive the sense amplifiers and be written through into the DRAM cells is t_WR.
• The row cycle time of a DRAM device is write-cycle limited due to t_WR.
Outline
• Preliminary
• DRAM cell
• DRAM device
- DRAM SPEC
- input/output signal
- channel, rank, bank, row, column
NC : no connect
Spec of DDR (double data rate)
JEDEC document: http://www.jedec.org/Catalog/display.cfm
From http://shopping.pchome.com.tw/
FSB / CPU base-clock (外頻) reference table
North bridge
memory slots
CPU
DIMM
North bridge
Two channels
Nomenclature: rank
Memory system with 2 ranks of DRAM devices
A “rank” is a set of DRAM devices that operate in lockstep in response to a given command.
Chip-select signal is used to select appropriate rank of DRAM devices to respond to a given
command.
Nomenclature: bank
SDRAM device with 4 banks of DRAM arrays internally
Nomenclature: row
A "row" is a group of storage cells that are activated in parallel in response to a row activation command.
size of row = size of row of a DRAM device x # of DRAM devices in a given rank
Nomenclature: column
A column of data is the smallest addressable unit of memory
DIMMs are built using "x4" (by 4) or "x8" (by 8) memory chips, with 8 (or 9) chips per side. "x4" and "x8" refer to the data width of the DRAM chips in bits.
Example: a x4 DRAM indicates that the DRAM has at least four memory arrays in a single bank and that the column width is 4 bits.
Configuration of DRAM [2]
256-Mbit SDRAM device configuration
Device configuration 64 M x 4 32 M x 8 16 M x 16
Number of banks 4 4 4
Number of rows 8192 8192 8192
Number of columns 2048 1024 512
Data bus width 4 8 16
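Each column of the table describes the same 256-Mbit device; a quick consistency check using the table's numbers:

```python
# banks x rows x columns x data width must equal 256 Mbit for every configuration.
configs = {
    "64M x 4":  (4, 8192, 2048, 4),
    "32M x 8":  (4, 8192, 1024, 8),
    "16M x 16": (4, 8192, 512, 16),
}
for name, (banks, rows, cols, width) in configs.items():
    assert banks * rows * cols * width == 256 * 2**20, name  # 256 Mbit each
```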
From http://shopping.pchome.com.tw/
Outline
• Preliminary
• DRAM cell
• DRAM device
• DRAM access protocol
- pipeline-based resource usage model
- read / write operation
Command and data movement on a generic SDRAM device.
The DRAM memory-access protocol defines the commands and timing constraints that a DRAM memory controller uses to manage the movement of data between itself and the DRAM devices.
Resource usage model: at any given instant, four operations exist in four phases, and resources are not shared among these phases; this constitutes a 4-stage pipeline.
Generic DRAM command format
Timing parameter 1 is the minimum time between two commands whose relative timing is limited by the sharing of resources within a given bank of DRAM arrays.
Timing parameter 2 is the minimum time between two commands whose relative timing is limited by the sharing of resources by multiple banks of DRAM arrays within the same DRAM device.
parameter description
t_CMD Command transport duration. The time period that a command occupies on the
command bus as it is transported from the DRAM controller to the DRAM devices.
Row Access Command
Objective: move data from the cells in DRAM arrays to sense amplifiers and then
restore the data back into the cells in DRAM array.
parameter description
t_RCD Row to Column command Delay. The time interval between row access and data ready at
sense amplifiers.
The time required between RAS (Row Address Select) and CAS (Column Address Select).
t_RAS Row Access Strobe latency. The time interval between row access command and data
restoration in DRAM array. A DRAM bank cannot be precharged until at least t_RAS time
after the previous bank activation.
Column-Read Command [1]
Objective: move data from array of sense amplifiers through data bus back to
memory controller
parameter description
t_CAS ( t_CL ) Column Access Strobe latency. The time interval between column access
command and start of data return by DRAM devices.
Column-Read Command [2]
parameter description
t_BURST Data burst duration. The time period that data burst occupies on the data bus.
In DDR2 SDRAM, 4 beats of data occupy 2 full clock cycles.
parameter description
t_CCD Column-to-Column Delay. The minimum column command timing, determined by internal
burst (prefetch) length. Multiple internal bursts are used to form longer burst for column
read.
t_CCD is 2 beats (1 cycle) for DDR SDRAM
t_CCD is 4 beats (2 cycles) for DDR2 SDRAM
t_CCD is 8 beats (4 cycles) for DDR3 SDRAM
Column-Write Command [1]
Objective: move data from memory controller to sense amplifiers of targeted bank.
Note that the ordering of phases is reversed between column-read and column-write commands.
parameter description
t_CWD Column Write Delay. The time interval between issuance of column-write command and
placement of data on the bus by DRAM controller.
SDRAM: t_CWD = 0 cycle
DDR SDRAM: t_CWD = 1 cycle
DDR2 SDRAM: t_CWD = t_CAS – t_CMD cycles
DDR3 SDRAM : t_CWD = 5 ~ 8 cycles
Column-Write Command [2]
parameter description
t_WTR Write To Read delay time.
The minimum time interval between the end of a write data burst and the start of a
column-read command. I/O gating is released by write command.
Write command → read command
t_WR Write Recovery time.
The minimum time interval between the end of a write data burst and the start of a
precharge command. Allows sense amplifiers to restore data to cells.
Write command → precharge command
Precharge Command [1]
• Step1: row access command moves data from DRAM cells to sense amplifiers (data
is cached), then column access command moves data between DRAM device and
memory controller
• Step 2: precharge command completes the row access sequence as it resets the
sense amplifiers and bitlines and prepares them for another row access command to
the same DRAM array.
Precharge Command [2]
parameter description
t_RP Row Precharge.
The time interval that it takes for a DRAM array to be precharged (precharge bitline and
sense amplifiers) for another row access.
Switching between memory banks.
t_RC Row Cycle.
The time interval between accesses to different rows in a bank.
t_RC = t_RAS + t_RP
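The relation t_RC = t_RAS + t_RP can be expressed directly; the nanosecond values below are illustrative placeholders, not a specific device's datasheet:

```python
# Illustrative DRAM timing values in nanoseconds (placeholders, not a datasheet).
t_RAS = 45  # row access command until data restoration completes
t_RP  = 15  # precharge duration
t_RCD = 15  # row-to-column command delay
t_CAS = 15  # column access strobe latency

t_RC = t_RAS + t_RP               # minimum interval between row accesses to one bank
row_miss_latency = t_RCD + t_CAS  # open a row, then read a column

assert t_RC == 60
assert row_miss_latency == 30
```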
Refresh Command [1]
• Non-persistent charge storage in DRAM cells means that charge stored in capacitor
will gradually leak out through access transistors.
• To maintain data integrity, DRAM cells must be periodically read out and restored before the charge decays to an indistinguishable level.
parameter description
t_RFC ReFresh Cycle time.
The time interval between refresh and activation commands.
One refresh command may refresh 1, 2, 4, or 8 rows. The more rows are refreshed, the longer t_RFC is.
Refresh Command [2]
A refresh command refreshes DRAM cells in all banks, since all banks can operate independently.

device family | DRAM capacity | number of rows | refresh count | rows per refresh command | t_RC | t_RFC
DDR  | 512MB  | 8192  | 8192 | 1 | 55 ns | 70 ns
DDR2 | 512MB  | 16384 | 8192 | 2 | 55 ns | 105 ns
     | 4096MB | 65536 | 8192 | 8 |       | ~327.5 ns
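The "rows per refresh command" column is total rows divided by the refresh count. Assuming the common 64 ms retention window (an assumption; the slide does not state it), the interval between refresh commands also follows:

```python
RETENTION_MS = 64.0  # assumed JEDEC-style retention window, not from the slide

def refresh_schedule(total_rows, refresh_count):
    """Rows refreshed per REF command and the interval between REF commands (us)."""
    rows_per_cmd = total_rows // refresh_count
    interval_us = RETENTION_MS * 1000.0 / refresh_count
    return rows_per_cmd, interval_us

assert refresh_schedule(8192, 8192)  == (1, 7.8125)   # DDR 512MB
assert refresh_schedule(16384, 8192) == (2, 7.8125)   # DDR2 512MB
assert refresh_schedule(65536, 8192) == (8, 7.8125)   # 4096MB device
```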
Principle of spatial locality: a row access command fetches an entire row into the sense amplifiers.
A[i][j] → A[i][j+1]: same row, so only another column access is needed.
A[i][j] → A[i+1][j]: may fall on a different row, requiring row access, column access, precharge, row access, column access.
Read Cycle [2]
t_RC spans t_RAS followed by t_RP.
cmd & addr bus:     row acc → col read → prec. → row act
bank utilization:   data sense → bank access → data restore → array precharge
device utilization: I/O gating
data bus:           data burst (begins t_RCD + t_CAS after the row access, lasting t_BURST)
Row cycle time is limited by the duration of the write cycle, since the data path of a write is:
memory controller → data bus → I/O gating → MUX → sense amplifiers → DRAM cells
t_RC spans t_RAS followed by t_RP.
cmd & addr bus:     row acc → col write → prec. → row act
bank utilization:   data sense → write → data restore → array precharge
device utilization: I/O gating
data bus:           data burst (offsets t_RCD, t_CAS; duration t_BURST)
Precharge is not necessary, since one row of data has been latched in the sense amplifiers.
cmd & addr bus:     row acc → col read → col read → col read → …
bank utilization:   data sense → bank access → data restore
device utilization: I/O gating
data bus:           back-to-back data bursts, each lasting t_BURST
N consecutive column-reads need time t_RCD + t_CAS + N · t_BURST, not N · ( t_RCD + t_CAS + t_BURST )
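The pipelining claim above can be sketched as arithmetic; the cycle counts are illustrative:

```python
def pipelined_reads(n, t_rcd, t_cas, t_burst):
    """N back-to-back column reads to an open row: pay t_RCD + t_CAS only once."""
    return t_rcd + t_cas + n * t_burst

def unpipelined_reads(n, t_rcd, t_cas, t_burst):
    """Hypothetical fully serialized reads: pay the full latency every time."""
    return n * (t_rcd + t_cas + t_burst)

# Illustrative cycle counts: t_RCD = 3, t_CAS = 3, t_BURST = 2.
assert pipelined_reads(8, 3, 3, 2) == 22
assert unpipelined_reads(8, 3, 3, 2) == 64
```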
Consecutive reads to different rows of same bank
t RAS + t RP
cmd & addr row acc read 0 prec row acc read 1
bank “i” utilization data sense row x open -data restore bank i precharge data sense row y open-data restore
rank “m” utilization I/O gating I/O gating
data bus data burst data burst
t RAS t RP
A[i][j] → A[i+1][j] destroys spatial locality when A[i][j] and A[i+1][j] are on different rows; the second access then costs an extra t_RP + t_RCD.
Bank i and bank j may be open together, but if a read request to bank j targets a row different from the active row in its sense amplifiers, bank j must precharge its bitlines first. This is called a "bank conflict".
Consecutive reads to different ranks
t BURST + t RTRS
time
cmd & addr read 0 read 1
bank “i” of rank “m” bank i open
bank “j” of rank “n” bank j open
rank “m” utilization I/O gating
rank “n” utilization I/O gating
data bus data burst sync data burst
Later on, we will determine the values of the timing parameters and calculate the overhead explicitly.
Outline
• Preliminary
• DRAM cell
• DRAM device
• DRAM access protocol
• DRAM timing parameter
- CL value
- system calibration
• DDR SDRAM
CL value of commodity DDRx SDRAM
From http://shopping.pchome.com.tw/
• When DDR is read, a single read produces 64 bits of data from 8 chips, 8 bits per chip.
• When talking about the time between bits, we mean the time from the appearance of one group of bits (8 bits per chip) to the appearance of the next group.
• CAS latency only specifies the delay between the request and the first group.
• The remaining 7 groups are transferred at one group per beat of the data rate.
type Data rate ns/bit Command rate ns/cycle CL first word (ns) 8 word (ns)
DDR-400 400MHz 2.5 200MHz 5 3 15 32.5
DDR2-800 800MHz 1.25 400MHz 2.5 5 12.5 21.25
DDR2-1066 1066MHz 0.94 533MHz 1.88 5 9.4 15.98
DDR3-1333 1333MHz 0.75 666MHz 1.5 9 13.5 18.75
DDR3-1600 1600MHz 0.625 800MHz 1.25 8 10 14.375
Example: DDR-400
first word needs time CL × ( 1 / command rate ) = 3 × 5 ns = 15 ns
remaining 7 words need time 7 × ( 1 / data rate ) = 7 × 2.5 ns = 17.5 ns
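The table's columns follow from two formulas: first word = CL / (command rate), eight words = first word + 7 / (data rate). A sketch reproducing a few rows:

```python
def first_word_ns(cl, command_rate_mhz):
    """Time until the first word: CL command-clock cycles."""
    return cl * 1000.0 / command_rate_mhz

def eight_words_ns(cl, command_rate_mhz, data_rate_mhz):
    """First word plus 7 more beats at the data rate."""
    return first_word_ns(cl, command_rate_mhz) + 7 * 1000.0 / data_rate_mhz

assert first_word_ns(3, 200) == 15.0          # DDR-400, CL = 3
assert eight_words_ns(3, 200, 400) == 32.5
assert eight_words_ns(5, 400, 800) == 21.25   # DDR2-800, CL = 5
```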
CAS latency [2]
I/O bus clock
type Data rate ns/bit Command rate ns/cycle CL first word (ns) 8 word (ns)
DDR-400 400MHz 2.5 200MHz 5 3 15 32.5
DDR2-800 800MHz 1.25 400MHz 2.5 5 12.5 21.25
DDR2-1066 1066MHz 0.94 533MHz 1.88 5 9.4 15.98
DDR3-1333 1333MHz 0.75 666MHz 1.5 9 13.5 18.75
DDR3-1600 1600MHz 0.625 800MHz 1.25 8 10 14.375
type Data rate ns/bit Command rate ns/cycle CL first word (ns) 8 word (ns)
(memory clock)
DDR-400 400MHz 2.5 200MHz 5 3 15 32.5
DDR2-800 800MHz 1.25 200MHz 5 5 25 33.75
DDR2-1066 1066MHz 0.94 266MHz 3.75 5 18.75 25.33
DDR3-1333 1333MHz 0.75 166MHz 6 9 54 59.25
DDR3-1600 1600MHz 0.625 200MHz 5 8 40 44.375
Memory divider
from http://en.wikipedia.org/wiki/Memory_divider
• A memory divider is a ratio used to determine the operating clock frequency of computer memory in accordance with the front-side bus (FSB) frequency, if the memory system is dependent on the FSB clock speed.
• Ideally the FSB and system memory should run at the same clock speed, because the FSB connects the memory system to the CPU. But it is sometimes desirable to run the FSB and system memory at different clock speeds, e.g. when you overclock the FSB.
type Data rate ns/bit Command rate ns/cycle CL first word (ns) 8 word (ns)
(memory clock)
DDR2-800 800MHz 1.25 200MHz 5 5 25 33.75
DDR2-1066 1066MHz 0.94 266MHz 3.75 5 18.75 25.33
Motherboard: P5Q PRO with system clock 266MHz, FSB = 1066 MHz
Memory: DDR2-800 with CL = 5
http://www.lavalys.com/ : system calibration software
A system information tool that shows everything about the PC.
http://www.tweakers.fr/memset.html
Memory system
Memory timing parameter
Timing parameters:
DRAM RAS to CAS Delay | Write to Read Delay (S) | READ to PRE Delay
DRAM RAS Activate to Precharge time | READ to READ Delay (S) | ALL PRE to ACT Delay
RAS to RAS Delay | READ to READ Delay (D) | ALL PRE to REF Delay
CPU clock = base clock (外頻) × multiplier (倍頻)
Dual-channel: enabled
type Data rate ns/bit memory clock ns/cycle CL first word (ns) 8 word (ns)
multiplier (倍頻) = 9
base clock (外頻) = 267 MHz, so the CPU clock ≈ 2.4 GHz
Assume each MOS occupies a square of side L_max = 5L; with L = 65 nm,
area of MOS = ( 5 × 65 nm )^2 = ( 325 nm )^2 = 105625 nm^2
Area of die = 286 mm^2 = 286 × ( 10^6 nm )^2 = 286 × 10^12 nm^2
Maximum number of MOS in a die = area of die / area of MOS = 286 × 10^12 nm^2 / 105625 nm^2 ≈ 2700 million
Number of MOS in the CPU = 582 M, about 582 M / 2700 M ≈ 21.6% of the die
This means a large part of the die is reserved for other usage.
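The estimate can be reproduced directly; the (5L)^2 footprint per MOS is the slide's assumption, and L = 65 nm is inferred from 105625 nm^2 = (325 nm)^2:

```python
L_nm = 65                      # inferred feature size: (5 * 65)^2 = 105625
mos_area_nm2 = (5 * L_nm) ** 2
die_area_nm2 = 286 * 10**12    # 286 mm^2 in nm^2
max_mos = die_area_nm2 / mos_area_nm2
cpu_mos = 582e6

assert mos_area_nm2 == 105625
assert abs(max_mos - 2.7e9) < 1e7              # about 2700 million
assert abs(cpu_mos / max_mos - 0.215) < 0.005  # roughly 21.6% of the die
```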
EVEREST: motherboard information
Type Data rate ns/bit memory clock ns/cycle CL first word (ns) 8 word (ns)
DDR2-800 800MHz 1.25 200MHz 5 5 25 33.75
first word: 25 ns ≈ 60 CPU cycles; eight words: 33.75 ns ≈ 81 CPU cycles (at the 2.4 GHz CPU clock)
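Converting those latencies to CPU cycles uses the CPU clock from the earlier slide (267 MHz base clock × 9 ≈ 2.4 GHz):

```python
CPU_GHZ = 2.4  # 267 MHz base clock x 9 multiplier, rounded

def ns_to_cpu_cycles(ns):
    """Latency in nanoseconds expressed in CPU clock cycles."""
    return ns * CPU_GHZ

assert abs(ns_to_cpu_cycles(25.0) - 60) < 1e-9   # first word
assert abs(ns_to_cpu_cycles(33.75) - 81) < 1e-9  # all eight words
```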
EVEREST: SDRAM information
t_RTRS | Rank-To-Rank Switching time | 1 (memory clock)
t_BURST | Data burst duration | 2 (memory clocks)
prev | next | rank | bank | row | scheduling distance between column access commands (no command reordering) | Memory clocks | CPU clocks
R | R | s | s | - | t_BURST | 2 | 24
R | R | s | d | - | t_RP + t_RCD | 10 | 120
R | R | s | s | d | t_RAS + t_RP | 23 | 276
R | R | d | s/d | - | t_RTRS + t_BURST | 3 | 36
R | W | s | d | - | t_CAS + t_BURST + t_RTRS − t_CWD | 5 | 60
W | R | s | d | - | t_CWD + t_BURST + t_WTR | 16 | 192
W | W | s | s | - | t_BURST | 2 | 24
W | W | s | s | d | t_CWD + t_BURST + t_WR + t_RP + t_RCD | 29 | 348
W | W | d | s/d | - | t_OST + t_BURST | 3 | 36
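The CPU-clocks column is just the memory-clocks column scaled by the clock ratio (2.4 GHz CPU / 200 MHz memory clock = 12), which a quick check confirms:

```python
RATIO = 2400 // 200  # CPU MHz / memory-clock MHz = 12

# (memory clocks, CPU clocks) pairs from the scheduling-distance table.
rows = [(2, 24), (10, 120), (23, 276), (3, 36), (5, 60),
        (16, 192), (2, 24), (29, 348), (3, 36)]

for mem_clk, cpu_clk in rows:
    assert cpu_clk == mem_clk * RATIO
```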
dual-channel (雙通道): modules rated tCL = 5 and tCL = 7
tri-channel (三通道): modules rated tCL = 9 and tCL = 8
Objective: choose a memory module with a low CL value and a high clock speed.
Outline
• Preliminary
• DRAM cell
• DRAM device
• DRAM access protocol
• DRAM timing parameter
• DDR SDRAM
- DDR2-SDRAM, DDR3-SDRAM
- dual-channel, tri-channel
- memory controller
DDR SDRAM [1]
• An SDRAM device operates its data bus at the same data rate as the address and command buses.
• A DDR SDRAM device operates its data bus at twice the data rate of the address and command buses.
SDRAM
1 data out per cycle
DDR SDRAM
Two data out per cycle
DDR SDRAM [2]
The I/O bus clock runs 2 times faster than the memory clock, and data is sampled on both the rising and falling edges of the clock signal, so the I/O bus can transfer 4N data in one time unit.
1GB addressing
DDR2 SDRAM SPEC [2]
Simplified state diagram (not real)
DRAM controller
• Row-Buffer-Management Policy
- open-page policy
- close-page policy
• Address Mapping Scheme
- minimize bank address conflicts in temporal adjacent requests and maximize the
parallelism in memory system (parallelism of channels, ranks, banks, rows, and
columns)
- utilize dual-channel architecture
- flexibility for inserting/removing memory module
• DRAM Command Ordering Scheme
North-bridge on P5Q PRO motherboard
http://www.intel.com/products/desktop/chipsets/p45/p45-overview.htm
Quad data rate (or quad pumping) is a communication signaling technique wherein data is transmitted
at four points in the clock cycle: on the rising and falling edges, and at two intermediate points between
them. The intermediate points are defined by a 2nd clock that is 90° out of phase from the first.
Intel 82955X MCH (Memory Controller Hub)
http://www.d-cross.com/show_article.asp?page=2&article_id=693
(Figure: a rank of four DRAM devices, Device 0 to Device 3; each device contains Bank 0 to Bank 3, and each bank has 8192 rows. A row of the rank spans the same row in all four devices.)
col size per rank = col size per device x device count per rank = 2 x 4 = 8 (bytes)
Per-channel, per-rank address mapping scheme for
single/asymmetric channel mode
(Bit-field table, address bits 31..0: from the high bits down, the fields are rank, row, bank, column, and the 3-bit column offset marked X.)
Rank capacity (MB) | configuration: row count x bank count x col count x col size
128  | 8192 x 4 x 512 x 8
256  | 8192 x 4 x 1024 x 8
512  | 16384 x 4 x 1024 x 8
512  | 8192 x 8 x 1024 x 8
1024 | 16384 x 8 x 1024 x 8
The channel and rank addresses are mapped to the highest bit fields, so that each rank or channel is a contiguous block of memory.
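A sketch of such a mapping; the field widths follow the 1024-MB configuration (16384 rows × 8 banks × 1024 columns × 8-byte columns), and the exact bit order is chipset-specific, so this is illustrative only:

```python
# Highest bits select the rank, so each rank is a contiguous block of memory.
COL_SIZE_BITS = 3   # 8-byte column
COL_BITS      = 10  # 1024 columns
BANK_BITS     = 3   # 8 banks
ROW_BITS      = 14  # 16384 rows

def decode(addr):
    """Split a physical address into (rank, row, bank, column) fields."""
    addr >>= COL_SIZE_BITS                         # drop offset within a column
    col  = addr & ((1 << COL_BITS) - 1);  addr >>= COL_BITS
    bank = addr & ((1 << BANK_BITS) - 1); addr >>= BANK_BITS
    row  = addr & ((1 << ROW_BITS) - 1);  addr >>= ROW_BITS
    return addr, row, bank, col                    # remaining high bits = rank

# 3 + 10 + 3 + 14 = 30 bits per rank, i.e. 1 GB; rank 1 starts at 1 << 30.
assert decode(1 << 30) == (1, 0, 0, 0)
assert decode(0) == (0, 0, 0, 0)
```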
Per-rank address mapping scheme for
dual channel mode
(Bit-field table, address bits 31..0: same configurations as in the single-channel table, but one low-order bit inside the column field is used as the channel-select bit, so that consecutive blocks alternate between the two channels.)
Rank capacity (MB) | configuration: row count x bank count x col count x col size
128  | 8192 x 4 x 512 x 8
256  | 8192 x 4 x 1024 x 8
512  | 16384 x 4 x 1024 x 8
512  | 8192 x 8 x 1024 x 8
1024 | 16384 x 8 x 1024 x 8