AppNotes℠
Using Tessent MemoryBIST with Memory Shared Bus Interface
By: Luc Romain and Giri Podichetty
Last Modified: 10th May, 2012
Revision: 1.4
Abstract
This document gives an overview of the flow and hardware used to test memories behind a
shared bus interface. A shared bus interface is defined as a set of ports that provides access to a
number of memories inside a module. This provides scalability when adding memories inside a
module while preserving a fixed footprint at the module boundary for memory BIST access.
Introduction
This application note describes the flow, architecture and configuration files needed to generate,
insert and test memories using shared bus interfaces. Shared bus interfaces are used to provide a
common access port to a number of memories. Typical applications for shared bus interfaces
include the testing of memories inside processor core modules or external memories. The terms
memory cluster or cluster refer to a module that provides access to multiple memories using a
common shared bus interface. The memories that are accessed via the shared bus interface are
called logical memories. A logical memory is an address space that is composed of one or more
physical memories. The description of the memory cluster module, the shared bus interface ports,
logical and physical memory information is provided inside library files. This document
describes the steps required to create these library files as well as the tool flow that performs the
generation, insertion and verification of embedded test hardware.
Figure 1 shows an example design with one shared bus interface named I1. Four logical
memories named LM_0 through LM_3 are accessible using the common shared bus interface.
Each logical memory is represented with a light gray background and a dashed outline. The
address space that contains the
logical memory may be composed of one or more physical memories that are represented by the
blue boxes. The logical memories represent an address space that is accessible from the external
shared bus interface I1. This example uses a cluster module with a single shared bus interface but
a cluster module may have more than one.
Each shared bus interface provides access to the memory data, control and clock ports as well as
other control ports required to address specific memories inside the cluster module.
A single logical memory can be accessed at any time per shared bus interface. Each logical
memory is enabled by specifying its corresponding selection code on the array select port of the
shared bus interface. Once an array access code is specified on the shared bus interface, the
corresponding logical memory can be accessed externally through the clock, data and control
ports of the shared bus interface.
During the embedded test planning phase, one dedicated memory BIST controller is assigned per
cluster module. The memory BIST controller and interface logic is instantiated at the same level
as the cluster module as illustrated in Figure 2.
If the design contains standard memories, a different memory BIST controller will be assigned to
test these memories. A memory BIST controller that is assigned to a cluster module cannot be
used to test memories outside the cluster module. Multiple cluster modules can be instantiated
inside the design.
When running ETPlanner in genPlan mode on a design with cluster modules, a <design>.etplan
file is automatically generated with the appropriate assignment of memory BIST controllers for
memories and cluster modules in the design.
If you make changes to ETChecker generated files after running ETChecker (not a recommended
practice), ensure that the etCheckInfo/xxx.memLib file specifies internalScanLogic On for the
lv.MemoryModule constraint of the cluster module. E.g.:
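A sketch of the form this constraint can take inside the etCheckInfo/xxx.memLib file; the module name CLUSTER is a placeholder and the exact wrapper layout may differ between tool versions, so verify it against a generated memLib file:

```
lv.MemoryModule (CLUSTER) {
  internalScanLogic : On; // required for cluster modules
}
```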
Note: If you specify the lv.MemoryModule constraint with internalScanLogic On within the
.etchecker file, the property will be superseded by the setting in the etCheckInfo/xxx.memLib
file.
2.1. Limitations when applying logic test
All lv.testmode constraints used in the etchecker file which point to a location within the cluster
module will currently be ignored by ETPlanner. Instead, Lv.Assert should be used to sensitize
nodes inside the cluster module.
If the cluster module has logic test inserted as an ELTCore, it is still possible to run
MemoryBIST through the shared bus interface, but it will not be possible to repair memories
contained within the cluster because the BISR registers needed to control the repair ports cannot
be inserted.
The embedded test hardware generated for shared bus interfaces is very similar to what is
generated for standard memories. A memory BIST controller and memory interfaces are
generated and instantiated as usual. Extra modules such as memory emulation logic and
multiplexing logic are also generated and connected between the memory BIST interfaces and
the cluster module as shown in Figure 3.
The memory emulation logic blocks shown in green correspond to the logical memories inside
the cluster module. One memory emulation logic module is generated for each logical memory.
The multiplexing logic also shown in green handles the control and access logic between the
shared bus interface ports, the memory BIST controller and the memory emulation modules.
Together, the memory emulation modules and the multiplexing logic provide a virtual access to
all the logical memories. This enables the memory BIST controller to execute the BIST
algorithms and perform standard operations on all logical memories.
The BIST controller, memory interface modules, memory emulation modules and multiplexing
logic can optionally be grouped inside a wrapper module. Wrapping the BIST logic allows
cross-boundary area optimization during synthesis and reduces the loose logic left in the design
after synthesis, yielding improved logic optimization and a significant area reduction.
4. Flow overview
4.1. Library requirements
Two additional libraries are required in addition to the standard physical memory library files: a
memory cluster library file, which describes the cluster module and its shared bus interfaces, and
a logical memory library file, which describes the logical memories.
The syntax of these library files is described in the following sections.
4.1.1 Memory Cluster Library file syntax
The following section describes the library file syntax for memory cluster modules.
MemoryClusterTemplate (<clusterModuleName>) {
Port (<portName>) {
Direction : InOut | (Input) | Output;
Function : None;
SafeValue : 0 | 1;
}
MemoryBistInterface (<interfaceName>) { // repeatable
Port (<portName>) {
Direction : InOut | (Input) | Output;
Polarity : (ActiveHigh) | ActiveLow;
Function : <portFunction>;
}
MemoryGroupAddressDecoding (GroupAddress | Address[x:y]){
Code(<binaryValue>) : <memoryIDList>; // repeatable
}
LogicalMemoryToInterfaceMapping (<memoryID>) {
MemoryTemplate : <logicalTemplateName>;
ConfigurationData : <binaryValue>;
PipelineDepth : <int>;
PinMappings {
LogicalMemoryDataInput[<indexList>] : InterfaceDataInput[<indexList>];
LogicalMemoryDataOutput[<indexList>] : InterfaceDataOutput[<indexList>];
LogicalMemoryAddress[<indexList>] : InterfaceAddress[<indexList>];
LogicalMemoryWriteAddress[<indexList>] : InterfaceWriteAddress[<indexList>];
LogicalMemoryReadAddress[<indexList>] : InterfaceReadAddress[<indexList>];
LogicalMemoryGroupWriteEnable[<indexList>] :
InterfaceGroupWriteEnable[<indexList>];
}
}
}
}
Figure 4 : Memory Cluster Library syntax
Here are details on the wrappers and properties that are found in the MemoryClusterTemplate:
Port Wrapper
This wrapper is used to describe the ports on the cluster that are common to all shared bus
interfaces, such as clocks, resets or ports that must be held at a constant value during memory
BIST.
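For illustration, a common port that must be held at a constant value during memory BIST could be described as follows, using the Port wrapper syntax from Figure 4 (the port name scan_mode is hypothetical):

```
Port (scan_mode) {
  Direction : Input;
  Function : None;
  SafeValue : 0; // hold scan_mode low for the duration of memory BIST
}
```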
MemoryBistInterface Wrapper
This wrapper contains the information about ports, logical memories and access codes associated
with this interface. An arbitrary InterfaceName can be specified.
MemoryBistInterface::Port Wrapper
The Port wrapper inside the MemoryBistInterface wrapper describes the port properties
associated to this interface. The following port functions are specific to shared bus interfaces:
ReadAddress
WriteAddress
MemoryGroupAddress
InterfaceReset
ConfigurationData
ClockEnable
MemoryGroupAddressDecoding Wrapper
This wrapper specifies the access method and the decode value to access each logical memory.
There are two methods to select the logical memories:
GroupAddress decoding
The group address decoding method uses a dedicated array selection port (port function
MemoryGroupAddress).
Address[x:y] decoding
The address decoding method uses address bits to perform the logical memory selection.
If the cluster module has a port that is used to select the logical memories, then the
GroupAddress decoding method should be used.
If the cluster module selects the logical memories based on address bus ranges, then the
Address[x:y] decoding method should be used.
The specified decoding (GroupAddress or Address) method indicates which port to use to enable
the logical memories specified by the Code property.
If the GroupAddress decoding method is specified, the size of the Code property binary value
must match the width of the port with the MemoryGroupAddress port function.
MemoryGroupAddressDecoding::Code Property
When the GroupAddress decode method is used, the specified Code property value is applied on
the MemoryGroupAddress port to enable access to the list of logical memories. The number of
bits in the binary code must correspond to the size of the MemoryGroupAddress port, unless the
direct physical modeling method is used, in which case extra bits are added to the code value.
Please refer to section 5.2.2 Preparation of the MemoryClusterTemplate file for more details.
When the Address[x:y] decoding method is used, the specified Code property value is applied on
the bit range [x:y] of the address port to enable access to the logical memories. The number of
bits in the binary code must match the size of the address bus range [x:y].
Multiple memories can share the same code value. A comma separated list of logical memories
may be specified for each Code property. When multiple memories share the same code value,
the memories are accessed simultaneously through the shared bus interface. All memories share
the same control signals but use different data bits on the shared bus interface.
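As an illustration, assuming a cluster whose logical memories are selected by address bits [7:6] (all memory names here are hypothetical), both situations can be expressed in one wrapper:

```
MemoryGroupAddressDecoding (Address[7:6]) {
  Code(2'b00) : LM_A;       // LM_A occupies this address range alone
  Code(2'b01) : LM_B, LM_C; // LM_B and LM_C share the code and are accessed
                            // simultaneously on different data bits
}
```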
Refer to the cluster module datasheet to determine the logical memory access method.
LogicalMemoryToInterfaceMapping Wrapper
The MemoryID is the name given to the logical memory and corresponds to one of the
MemoryIDList values specified in the MemoryGroupAddressDecoding::Code property.
The MemoryTemplate property specifies the name of the MemoryTemplate wrapper that
corresponds to the logical MemoryID.
The optional ConfigurationData property specifies a binary value that must be applied on the
port with the ConfigurationData port function when the logical memory is selected. Refer to the
cluster module datasheet for more information on the configuration words required when
accessing the logical memories.
The PipelineDepth property specifies the total number of pipeline stages that surround the
logical memory. For example, if a memory has one stage of pipeline registers on the data inputs
and one stage of pipeline registers on the data outputs, then the PipelineDepth for this memory is
2. The PipelineDepth property value is dependent on the operation set. The above example is
true only if the operation set specifies the strobe position in a way that assumes a delay of at least
one cycle through the memory. This is the case of the built-in operation sets available in Tessent
MemoryBIST. However, if only custom operation sets are used for a cluster, it is possible to
specify PipelineDepth to represent the delay through all pipeline stages + 1 and adjust the
position of the strobe accordingly in the operation set. In some cases, this methodology may be
preferred since the documentation of the pipeline depth of some commercial IP cores includes
the delay of the memory itself.
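To make the prose example concrete, a memory with one register stage on its data inputs and one on its data outputs, tested with a built-in operation set, would be declared as follows; the memory and template names are placeholders:

```
LogicalMemoryToInterfaceMapping (LM_X) {
  MemoryTemplate : LM_64x16;
  PipelineDepth : 2; // 1 input stage + 1 output stage
}
```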
The PinMappings wrapper is used to specify the mappings between the logical memory ports
and the shared bus interface ports. There should be one mapping specified for each port declared
using the MemoryBistInterface::Port wrapper.
4.1.1.2 Example
MemoryClusterTemplate(CLUSTER) {
Port(clk) {
Function : clock;
Direction : Input;
}
MemoryBistInterface(I1) {
Port(I1_A[5:0]) {
Function : Address;
}
Port(I1_DI[15:0]) {
Function : Data;
Direction: Input;
}
Port(I1_DO[15:0]) {
Function : Data;
Direction: Output;
}
Port(I1_RE) {
Function : ReadEnable;
Direction: Input;
}
Port(I1_WE) {
Function : WriteEnable;
Direction: Input;
}
Port(I1_SEL[2:0]) {
Function : MemoryGroupAddress;
Direction: Input;
}
Port(nrst) {
Function : InterfaceReset;
Direction : Input;
}
MemoryGroupAddressDecoding(GroupAddress) {
code(3'b001) : LM_0;
code(3'b010) : LM_1;
code(3'b011) : LM_2;
code(3'b100) : LM_3;
}
LogicalMemoryToInterfaceMapping(LM_0) {
MemoryTemplate : LM_32x8;
PipelineDepth : 9;
PinMappings {
LogicalMemoryDataInput[7:0] : InterfaceDataInput[8:1];
LogicalMemoryDataOutput[7:0] : InterfaceDataOutput[8:1];
LogicalMemoryAddress[4:0] : InterfaceAddress[4:0];
}
}
}
}
Figure 5 : Memory Cluster Template wrapper example
Below are details of the library information provided in the above example.
MemoryClusterTemplate(CLUSTER) declares the cluster template library file for module
CLUSTER. The name specified in the parentheses must correspond to the actual memory cluster
module name.
Port(clk) declares the clk port. This port is common to all interfaces. Global signals such as
scan test mode and reset should also be defined here with SafeValue settings.
MemoryBistInterface(I1) declares the shared bus interface wrapper for I1. The Port wrappers
inside it declare the ports that are used by the I1 shared bus interface (I1_A[5:0], I1_DI[15:0],
I1_DO[15:0], I1_RE, I1_WE, I1_SEL[2:0] and nrst). The ports defined in this wrapper can later
be used inside the LogicalMemoryToInterfaceMapping wrapper.
4.1.2 Logical Memory Library file syntax
The following section describes the library file syntax for logical memories.
MemoryTemplate (<moduleName>) {
Port (<portName>) {
Direction : InOut | (Input) | Output;
Polarity : (ActiveHigh) | ActiveLow;
Function : <portFunction>;
}
AddressCounter {
Function (Address) {
LogicalAddressMap {
ColumnAddress[x:y]: Address[a:b];
RowAddress[x:y]: Address[a:b];
BankAddress[x:y]: Address[a:b];
}
}
Function (ColumnAddress | RowAddress | BankAddress) {
CountRange: [<lowRange>:<highRange>];
}
}
MemoryGroupAddressDecoding (Address[a:b]) {
Code(<binaryValue>) : <memoryIDList>;
}
PhysicalToLogicalMapping (<memoryID>) {
MemoryTemplate : <physicalTemplateName>;
PinMappings {
PhysicalMemoryDataInput[<indexList>] : LogicalMemoryDataInput[<indexList>];
PhysicalMemoryDataOutput[<indexList>] : LogicalMemoryDataOutput[<indexList>];
PhysicalMemoryAddress[<indexList>] : LogicalMemoryAddress[<indexList>];
PhysicalMemoryWriteAddress[<indexList>] :
LogicalMemoryWriteAddress[<indexList>];
PhysicalMemoryReadAddress[<indexList>] : LogicalMemoryReadAddress[<indexList>];
PhysicalMemoryGroupWriteEnable[<indexList>] :
LogicalMemoryGroupWriteEnable[<indexList>];
}
}
}
Figure 6: Logical Memory Library syntax
Here are details on the wrappers and properties that are found in the logical MemoryTemplate:
Port Wrapper
This wrapper is used to describe the ports used by the logical memory. Each port must have a
corresponding entry in the PinMappings wrapper of the MemoryClusterTemplate that associates
it to a shared bus interface port.
MemoryGroupAddressDecoding Wrapper
This wrapper contains the information about the physical memories that form the logical
memory. Multiple physical memories can be used to form the address space of the logical
memory. Each physical memory represents a portion of the address space of the logical memory.
The Code property is used to describe the portion of the address range of the logical memory
where the specified physical memory is located. The Address[a:b] notation identifies the address
bits that are used to enable the physical memories associated with each binary code of the Code
property.
If multiple physical memories are assembled to extend the number of IOs, then these memories
share the same code, and the MemoryIDs are listed in a comma-separated list for this code.
If multiple physical memories are assembled to extend the address space, then two Code
properties are defined with the corresponding physical MemoryIDs specified on the right-hand
side.
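A minimal sketch of the address-space case, assuming two physical memories split on logical address bit 5 (the MemoryIDs MEM_LOW and MEM_HIGH are placeholders):

```
MemoryGroupAddressDecoding (Address[5]) {
  Code(1'b0) : MEM_LOW;  // lower half of the logical address space
  Code(1'b1) : MEM_HIGH; // upper half of the logical address space
}
```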
PhysicalToLogicalMapping Wrapper
The PhysicalToLogicalMapping wrapper is used to specify the mappings from the physical
memory ports to the logical memory ports. One PhysicalToLogicalMapping wrapper is needed
for each physical memory that is used to form the logical memory. This is by default a one-to-one
mapping. Currently, this wrapper is ignored, but will be handled in a later release.
The MemoryTemplate property specifies the name of the MemoryTemplate wrapper that
corresponds to the physical MemoryID.
The PinMappings wrapper specifies the mappings of the physical memory pins to the logical
memory pins.
4.1.2.2 Example
MemoryTemplate(LM_32x8) {
Algorithm : SMarchCHKBcil;
OperationSet : SyncWR;
Port(A[4:0]) {
Function : Address;
Direction : Input;
}
Port(D[7:0]) {
Function : Data;
Direction : Input;
}
Port(Q[7:0]) {
Function : Data;
Direction : Output;
}
Port(RE) {
Function : ReadEnable;
Direction : Input;
}
Port(WE) {
Function : WriteEnable;
Direction : Input;
}
AddressCounter {
Function(ColumnAddress) {
LogicalAddressMap {
ColumnAddress[0] : Address[0];
ColumnAddress[1] : Address[1];
}
CountRange : [0:3];
}
Function(RowAddress) {
LogicalAddressMap {
RowAddress[0] : Address[2];
RowAddress[1] : Address[3];
RowAddress[2] : Address[4];
}
CountRange : [0:7];
}
}
PhysicalToLogicalMapping(MEM_0) {
MemoryTemplate : SYNC_1RW_32x4;
PinMappings {
PhysicalMemoryDataInput[3:0] : LogicalMemoryDataInput[3:0];
PhysicalMemoryDataOutput[3:0] : LogicalMemoryDataOutput[3:0];
PhysicalMemoryAddress[4:0] : LogicalMemoryAddress[4:0];
}
}
PhysicalToLogicalMapping(MEM_1) {
MemoryTemplate : SYNC_1RW_32x4;
PinMappings {
PhysicalMemoryDataInput[3:0] : LogicalMemoryDataInput[7:4];
PhysicalMemoryDataOutput[3:0] : LogicalMemoryDataOutput[7:4];
PhysicalMemoryAddress[4:0] : LogicalMemoryAddress[4:0];
}
}
}
Figure 7: Logical Memory Template example
The following describes the information provided in the logical MemoryTemplate above.
The PhysicalToLogicalMapping wrappers are used to specify the pin mappings between the
physical memories and the logical memory. One PhysicalToLogicalMapping wrapper is specified
for each physical memory that is located inside the logical memory. This example uses two
physical memories (SYNC_1RW_32x4) to form one logical memory with a size of 32x8. The
first physical memory (MEM_0) is aligned with the logical data IOs [3:0] and the second
physical memory (MEM_1) is aligned with the logical data IOs [7:4].
Since all logical memory address bits are used by both physical memories, the
MemoryGroupAddressDecoding wrapper is not needed and all physical memories are assumed
to be enabled when the logical memory is selected.
Table 2: Memory Template information
4.2. ETChecker
The cluster template files must be provided to ETChecker using the memLib command line
option. If there are standard memories in addition to the cluster, their memory library files must
also be provided to ETChecker using the memLib option. The logical memory templates are not
needed by ETChecker. Multiple instances of clusters and different types of clusters are supported
in the design. Once the cluster memory template files have been provided to ETChecker, the
ETChecker flow can be executed as usual.
4.3. ETPlanner
The cluster and logical memory template files must be provided to ETPlanner. One dedicated
memory BIST controller is assigned for each cluster found in the design. If other memories are
present in the design, the usual partitioning rules apply and separate memory BIST controllers
are assigned to them as usual.
In the .etplan file, the memory BIST controller assigned to the cluster module does not list all the
steps in which the logical memories are tested. The expansion of these steps will be done in the
ETAssemble flow.
The actual test time for each memory BIST controller associated with a cluster cannot be
calculated by ETPlanner; it is calculated in the ETAssemble flow only. All logical memories are
tested serially. Although the logical memories behind two shared bus interfaces could be tested
in parallel, the current implementation only assigns one logical memory per controller step.
The MemBistControllerOptions wrapper can be used to configure options of the memory BIST
controller assigned to the cluster module. However, the MemBistCollarOptions wrapper should
not be used for clusters. Any collar specific option related to testing of the logical memories
should be specified in the MemoryClusterTemplate or logical MemoryTemplate wrappers.
The timing of the signals between the cluster module and the memory BIST controller may be
pipelined using the two following MemBistControllerOptions wrapper properties:
MemBistControllerOptions(<ControllerNameRE>) {
PipelineClusterInputs : Yes | (No);
PipelineClusterOutputs : Yes | (No);
AssemblyWrapper : (Yes) | No;
}
Figure 8: MemBistControllerOptions syntax
When PipelineClusterInputs is enabled, a single pipeline stage is added in the multiplexing
logic on signals sent directly to the shared bus interface ports. Similarly, enabling
PipelineClusterOutputs pipelines signals received directly from the shared bus interface. The
AssemblyWrapper property sets whether a wrapper module is added around the BIST controller,
memory interface, memory emulation logic and muxing logic (as shown in Figure 3). All the
logic inside the wrapper except the BIST controller is flattened during synthesis to reduce the
hardware overhead. The wrapper module can be used as a layout grouping to manage placement
and timing of the memory BIST logic.
Only SoftProgrammable or HardProgrammable controller types are allowed when testing cluster
modules.
ETPlan ( WIRELESS_CORE ) {
CADEnvironment {
...
}
ICTechnology(MGC_Generic) {
...
}
DesignSpecification {
...
}
EmbeddedTest {
GlobalOptions {
}
ModuleOptions (.*) {
}
MemBistControllerOptions(WIRELESS_CORE_clk_MBIST1) {
ControllerType : HardProgrammable;
PipelineClusterInputs : Yes;
PipelineClusterOutputs : Yes;
}
MemBistStepOptions(.*:.*) {
LocalComparators : Off;
}
}
}
Figure 9: Example .etplan file for a design containing a cluster module
4.4. ETAssemble
No edits are needed inside the .etassemble file. The embedded test generation flow can be
executed normally. ETAssemble will generate and insert the memory BIST logic inside the
design. The actual expansion of the controller steps for each logical memory can be viewed once
the make embedded_test target has completed. The first step is to determine the name of the
controller that is associated to the cluster module. This can be done by viewing the
ETAssemble/LV_WORKDIR/<design>.EmbeddedTest file:
EmbeddedTest {
MemoryBist ( WIRELESS_CORE_clk_MBIST1 ) {
Step {
BitSliceWidth : 1;
NumberOfBistDataPipelineStages : 0;
TestUnusedAddressRange : OFF;
MemoryInstance : MEM1;
}
MemoryInstance(MEM1) {
MemoryTemplate : CLUSTER;
DisableSelfRepair : OFF;
DisableRedundancyAnalysis : OFF;
MemoryType : CLUSTER;
LocalComparators : OFF;
LocalAddressCounter : OFF;
PipelineSerialDataOut : OFF;
ObservationLogic : ON;
InstancePath : CLUSTERInst;
}
}
}
Figure 10: Example .EmbeddedTest file generated for a shared bus interface design
The design extraction step make designe can be executed immediately after the make
embedded_test target and is no different from the usual flow.
There are no changes to the make config_etSignOff target. This will generate a standard
ETVerify configuration file for the memory BIST controller assigned to the cluster module. If
logic test is also present, the make config_etEarlyver target will be available instead of
make config_etSignOff.
4.5. ETVerify
Before generating a memory BIST pattern for the cluster module, a proper initialization
sequence must be created. The cluster initialization sequence is driven by TAP user DR bits and
is executed before launching the memory BIST controller. An initialization sequence is
generally needed if the cluster module implements interface ports that use the InterfaceReset or
BistOn port functions. The initialization sequence must be added in a UserDefinedSequence
wrapper in the .etSignOff file. Tessent MemoryBIST creates a sample initialization sequence
file with the name xxx_clusterInitialization.userDefSeq_tpl, where xxx is the design name. You
can modify this sample initialization sequence to create your own initialization sequence.
Figure 11 shows a modified UserDefinedSequence wrapper based on the sample initialization
sequence. Note: This example is for a MemoryBIST controller being driven from a TAP
controller. If you are using a block flow, a WTAPSettings wrapper is required for each test step
of the user defined sequence as well as referring to UserIRBit(s) rather than UserBit(s).
UserDefinedSequence(InitBistInterface) {
TestStep(RequestAndReset) {
// Apply I1 request user DR bits to access I1 interfaces
UserBitAlias(I1_REQ_0) : 1'b1;
UserBitAlias(I1_REQ_1) : 1'b1;
// Apply I1 reset signals
UserBitAlias(I1_RESET) : 1'b0;
InitialWaitCycles : 16;
}
TestStep(ReleaseReset) {
InitialWaitCycles : 30;
// Release I1 interface resets
UserBitAlias(I1_RESET) : 1'b1;
}
}
Figure 11: Example UserDefinedSequence wrapper from a .etSignOff file for a shared bus interface design
The initialization sequence in this example is named InitBistInterface. Once the
UserDefinedSequence wrapper is created, the PostTapUserDefinedSequence : InitBistInterface
property must be added inside the .etSignOff file for all patterns that run the memory BIST
controller assigned to the cluster module:
membistPVerify(top_P1) {
PatternName : membistpv_P1_top;
SimulationScript : top_sim.script;
SetupRate : TCK;
TestClockSource : Functional;// clk
ClockPeriod : 10.0ns;
TckRatio : 8;
PostTapUserDefinedSequence : InitBistInterface;
TestStep ( RunTimeProg ) {
RunMode : RunTimeProg;
Controller ( BP0 ) {
CompareGoID : On;
CompareGo : On;
ReducedAddressCount : On;
}
}
}
Figure 12: Example membistPVerify wrapper from a .etSignOff file for a shared bus interface design
All other membistPVerify wrapper properties can be used for this memory BIST controller. By
default, the memory BIST controller will test all memories according to the controller steps
defined in the LV_WORKDIR/<controllerName>.membist file shown previously. However, the
following ETVerify properties are also supported when testing memories inside clusters:
Controller (BP#) {
ConfigurationData(<interfaceName>) : <binaryValue>;
FreezeStep : On | (Off);
FreezeStepNumber : <int>;
}
Figure 13 : ETVerify properties for memoryBist controllers testing memory clusters
The ConfigurationData property can be used to override the default ConfigurationData value
defined in the LogicalMemoryToInterfaceMapping wrapper of the MemoryClusterTemplate. The
configuration data value is automatically driven on the shared bus interface port with the
ConfigurationData port function when the logical memory is under test. This configuration data
value can be overridden at pattern generation time.
Logical memories can be tested individually using the FreezeStep and FreezeStepNumber
properties. The RunMode : RunTimeProg option is only available if FreezeStep is On or the
ConfigurationData value is the same for all memories. The actual step number corresponding to
each logical memory is found in the ETAssemble/LV_WORKDIR/<ControllerName>.membist
file shown previously. Here is an example memory BIST pattern where the configuration data
value is overridden for one logical memory using FreezeStep:
membistPVerify(top_P1) {
PatternName : membistpv_P1_top;
SimulationScript : top_sim.script;
SetupRate : TCK;
TestClockSource : Functional;// clk
ClockPeriod : 10.0ns;
TckRatio : 8;
PostTapUserDefinedSequence : InitBistInterface;
TestStep ( RunTimeProg ) {
RunMode : RunTimeProg;
Controller ( BP0 ) { //top_clk_MBIST1_LVISION_MBISTPG_CTRL
CompareGoID : On;
CompareGo : On;
FreezeStep : On;
FreezeStepNumber : 1; // Testing LM_1 only
ConfigurationData(I1) : 1'b1;
ReducedAddressCount : On;
}
}
}
Figure 14 : Example FreezeStep to test an individual logical memory
This section applies only if memories with redundancy are present inside the logical memories
and the automated repair capability of Tessent MemoryBIST is used.
Tessent MemoryBIST can insert the built-in repair analysis (BIRA) and built-in self-repair
(BISR) logic for memories accessed through a shared bus interface. By default, the BIRA logic is
placed outside the cluster module and the BISR logic inside the cluster module, near the memory
to be repaired. There are situations where the BISR and physical memories can be located
outside the cluster module, but this is not addressed in the current release (2012.1); it will be
addressed in a later release. The BIRA logic is instantiated inside the memory BIST controller
module as in a normal design. However, two instances of the BISR registers for each physical
memory are inserted inside the design. The first BISR instance is located near the memory
emulation logic and is used to capture the BIRA fuse information. The second BISR instance is
located near the physical memory instance inside the cluster module and is used to drive the
memory repair ports. This means that the core logic must be changed to support the Tessent
MemoryBIST repair feature. An overview of a design after BIRA and BISR insertion is
illustrated in Figure 15, where the BIRA and BISR logic is shown in red.
Note: Logical memories LM_1 and LM_2 do not contain memories with redundancy, so there is
no BISR module for those memories.
The BISR modules are serially connected together and are tied to a multiplexer that is used to
select the BISR chain to scan out. The multiplexer select signal is controlled by the BISR
controller. The external BISR registers located outside the cluster module are selected when the
fuse box controller performs the BIRA to BISR transfer. Once the BIRA to BISR transfer is
completed, the BISR chain is rotated and the values captured by the external BISR modules are
copied to the corresponding BISR modules inside the cluster module which drive the memory
repair ports. This process is identical to the existing BIRA and BISR flow and does not require
extra test steps to perform the memory BIST pre-repair, BISR programming and memory BIST
post-repair steps.
5.2. Design and library file prerequisites
The next sections describe the steps that must be performed in order to prepare the design and
memory library files before inserting the BIRA and BISR hardware into the design.
LogicalMemoryToInterfaceMapping (<LogicalMemoryName>) {
ParentModuleName : <LogicalLibraryName>;
MemoryInstanceName : <Path to Physical Memory within Cluster module>;
ConfigurationData: 1'b0;
MemoryTemplate: <PhysicalMemTemplateName>;
PinMappings {
...
}
...
}
Figure 17: LogicalMemoryToInterfaceMapping wrapper within MemoryClusterTemplate
In order to repair each physical memory, there must be an entry in the cluster memory library for
each physical memory. The approach differs depending on how the physical memories are
arranged in the logical memory.
When the physical memories are vertically stacked (extending the address space), a unique code is
assigned to each physical memory, and the code has more bits than the BIST interface port with
the MemoryGroupAddress port function.
When the physical memories are horizontally stacked, the memories share the same code. The
PinMappings wrapper allows the tool to distinguish the physical memories on the data bus; they
can be tested in parallel, or serially with the use of the DisableMemoryList option.
Let us consider a logical memory composed of 4 physical memories (most of the connections are
not shown for simplicity, but a more complete connection example can be found in Figure 22). The
RR0 and RR1 ports on the physical memory are for the mapping of the redundant rows (green
bars) and CR0 port is for the redundant column mapping (green column). The examples are
taken from the BISR testcase that accompanies this AppNote.
Without BISR, the logical memory would be described as a single entry in the cluster library
MemoryGroupAddressDecoding wrapper:
MemoryGroupAddressDecoding(GroupAddress) {
code(3'b001) : LM_0;
...
}
In order to enable repair on each individual physical memory, we need entries for each physical
memory in the MemoryGroupAddressDecoding wrapper:
MemoryGroupAddressDecoding(GroupAddress) {
code(4'b0001) : LM_0_LEFT_UPPER, LM_0_RIGHT_UPPER;
code(4'b1001) : LM_0_LEFT_LOWER, LM_0_RIGHT_LOWER;
...
}
As mentioned earlier, an extra bit is needed for the code value; this value is manipulated later in
the virtual memory logic to correctly address the appropriate memory during memory BIST. This
requirement will be removed in a future release.
The MemoryInstanceName property points to the physical memory that will be repaired. The
MemoryTemplate property references a dummy logical memory that matches the physical memory.
LogicalMemoryToInterfaceMapping(LM_0_LEFT_LOWER) {
MemoryTemplate : LM0_SYNC_1RW_16x4_RC_BISR;
ParentModuleName : LM_0;
MemoryInstanceName : LM_0_inst/SYNC_1RW_16x4_RC_BISRInst_MSB_l;
PipelineDepth : 9;
PinMappings {
LogicalMemoryDataInput[3:0] : InterfaceDataInput[8:5];
LogicalMemoryDataOutput[3:0] : InterfaceDataOutput[8:5];
LogicalMemoryAddress[3:0] : InterfaceAddress[3:0];
}
}
LogicalMemoryToInterfaceMapping(LM_0_RIGHT_LOWER) {
MemoryTemplate : LM0_SYNC_1RW_16x4_RC_BISR;
ParentModuleName : LM_0;
MemoryInstanceName : LM_0_inst/SYNC_1RW_16x4_RC_BISRInst_LSB_l;
PipelineDepth : 9;
PinMappings {
LogicalMemoryDataInput[3:0] : InterfaceDataInput[4:1];
LogicalMemoryDataOutput[3:0] : InterfaceDataOutput[4:1];
LogicalMemoryAddress[3:0] : InterfaceAddress[3:0];
}
}
For the 9.6 and 2012.1 releases, there are no changes required to the physical memory template file.
The _VM.vb files are located in the ETAssemble/outDir directory in the LV workspace. Below are
code segments of the _VM.vb for the LM_0 virtual memory right upper and right lower physical
memories (the left lower memory would also be edited in the same manner as the right lower).
WIRELESS_CORE_clk_MBIST1_LVISION_LM_0_RIGHT_LOWER_VM:
...
assign I1_A_toCore[0] = VIRTUAL_MEM_BIST_EN & A_VM[0];
assign I1_A_toCore[1] = VIRTUAL_MEM_BIST_EN & A_VM[1];
assign I1_A_toCore[2] = VIRTUAL_MEM_BIST_EN & A_VM[2];
assign I1_A_toCore[3] = VIRTUAL_MEM_BIST_EN & A_VM[3];
WIRELESS_CORE_clk_MBIST1_LVISION_LM_0_RIGHT_UPPER_VM:
...
assign I1_A_toCore[0] = VIRTUAL_MEM_BIST_EN & A_VM[0];
assign I1_A_toCore[1] = VIRTUAL_MEM_BIST_EN & A_VM[1];
assign I1_A_toCore[2] = VIRTUAL_MEM_BIST_EN & A_VM[2];
assign I1_A_toCore[3] = VIRTUAL_MEM_BIST_EN & A_VM[3];
assign I1_A_toCore[4] = 1'b0;
assign I1_A_toCore[5] = 1'b0;
...
There are memories that share the same data input and bit/byte write enable signals from the
shared interface and have dedicated data output signals going back to the shared interface.
The preferred solution is to model the memories as separate logical memories so that
SMarchCHKBvcd, or any custom algorithm controlling inputs with the GroupWriteEnable,
ReadEnable or Select port functions, is handled correctly. This solution:
- Addresses problems with the datapath and group write enable tests of the SMarchCHKBvcd
algorithm when the data word size is odd.
- Simplifies the modeling for BIRA/BISR.
- Allows sequential test of the memories (a future release will allow parallel test of identical
memories).
The alternative is to model them as one logical memory with a wider datapath, which will work
correctly if the data word size is even and BitGrouping=1.
Please also refer to the section RTL modifications for SMarchCHKBvcd Phases 3.6 and 3.7 in
Appendix B for additional changes needed to support the SMarchCHKBvcd algorithm.
Ordering of data/parity bits and mapping to bit/byte write enable
Parity bits (if present) are handled as extra data bits during MBIST. At the logical memory level,
all parity bits are packed into the MSB of the data input/output.
At the shared interface level, the parity bits must be interleaved next to the corresponding data
bits.
To allow for this, the data input/output mapping in the LogicalMemoryToInterface wrapper
needs to be reordered; this also addresses problems with the datapath and group write enable
tests for the SMarchCHKBvcd algorithm.
The A15 uses delays on all flops in the design; every flop assignment is coded with a delay macro.
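The delivered A15 RTL is not reproduced here; the following is a minimal sketch of the coding style, assuming only that every non-blocking flop assignment uses the DFF_DELAY macro described next.

```verilog
// Minimal sketch of the A15 flop coding style (assumed, not copied
// from the delivered RTL): the clock-to-Q delay comes from DFF_DELAY.
`define DFF_DELAY #1

module dff_sketch (
  input  wire clk,
  input  wire d,
  output reg  q
);
  always @(posedge clk)
    q <= `DFF_DELAY d;  // delay of one timescale unit
endmodule
```

With `timescale 1 ns / 1 ps, the `#1` delay is 1 ns, which is why the timescale change described below is needed when the clock period is also 1 ns.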
The delay is provided with a `define DFF_DELAY command that is located in the following
RTL file within the design database:
logical/shared/verilog/eag_header.v
The value for this variable in the delivered RTL is 1 ns, which is too high if the clock period is
also 1 ns. To address this, the timescale setting in the eag_header file can be changed as
follows:
Original:
`timescale 1 ns / 1 ps
New:
`timescale 1 ps / 1 ps
For the GHB logical memory on the L1 interface, the BE size is 8 and the data size is 48.
In order to address this, the input/output mapping in the LogicalMemoryToInterface wrapper has
to be re-ordered. This addresses problems with the datapath and group write enable tests for the
SMarchCHKBvcd algorithm (also referred to as VCD in the rest of this section).
Considerations for L1 logical memories
The GHB logical memory is similar to the other logical memories (except L2 TLB) behind the
L1 interface and it also uses bit/byte write enable. The following describes the library settings
that are applicable to most logical memories followed by the additional settings required for the
GHB logical memory.
Operation Set
As none of the L1 logical memories implement latency, the built-in operation set SyncWRvcd
can be used with the SMarchCHKBvcd algorithm; there is no need to create a custom operation
set for the L1 logical memories.
DataOutHoldWithInactiveReadEnable : Off;
MemoryHoldWithInactiveSelect : Off;
The requirements for phase 3.6 and 3.7 of the VCD algorithm are different for inputs with
ReadEnable and Select port functions. For ReadEnable, the data output must be preserved when
read enable is deasserted as indicated. However, the requirement for chip select is that it needs to
be controllable from the operation set. This is not the case for ARM cores in general. The
workaround is to change one line of Verilog in the controller RTL file as shown below:
File:
ETAssemble/outDir/<designName>_LVISION_MBISTPG_CTRL.vb
Before:
assign MEM_SELECT_REG_INT = MEM_SELECT_REG;
After:
assign MEM_SELECT_REG_INT = MEM_SELECT_REG & ~DISABLE_RD & ~DISABLE_CS;
Effectively, the memories will be masked during Phases 3.6 and 3.7 only to avoid any simulation
mismatches. In general, these modifications are only needed for algorithms trying to test read
enable and select inputs such as VCD.
The GHB logical memory is similar to the other logical memories (except L2 TLB) with
additional bit write enable (MBISTBE1) inputs. The data input size is 48 bits and the bit write
enable size is 8 bits. Internally, the BE input is broadcasted to all 6 bytes composing the 48 bit
data input. The mapping is such that MBISTBE1[0] controls data bits 0, 8, 16, 24, 32 and 40, and
MBISTBE1[1] controls data bits 1, 9, 17, 25, 33 and 41, and so on.
However, this mapping cannot be directly described to the tool. From the controller's perspective,
the 48-bit data word is evenly distributed among the 8 MBISTBE1 ports: MBISTBE1[0] controls
data bits 0 to 5, MBISTBE1[1] controls data bits 6 to 11, and so on.
To reconcile the difference, you can modify the PinMappings to reorder the data input and output
bits between the L1 interface and the GHB logical memory as shown below:
LogicalMemoryToInterfaceMapping (CPU0_IFGHB) {
ConfigurationData: 1'b0;
MemoryTemplate: CPU_IFGHB;
PipelineDepth: 11;
Latency: 0 ;
PinMappings {
LogicalMemoryDataOutput[2]: InterfaceDataOutput[16];
LogicalMemoryDataOutput[3]: InterfaceDataOutput[24];
LogicalMemoryDataOutput[4]: InterfaceDataOutput[32];
LogicalMemoryDataOutput[5]: InterfaceDataOutput[40];
LogicalMemoryDataOutput[6]: InterfaceDataOutput[1];
LogicalMemoryDataOutput[7]: InterfaceDataOutput[9];
LogicalMemoryDataOutput[8]: InterfaceDataOutput[17];
LogicalMemoryDataOutput[9]: InterfaceDataOutput[25];
LogicalMemoryDataOutput[10]: InterfaceDataOutput[33];
LogicalMemoryDataOutput[11]: InterfaceDataOutput[41];
LogicalMemoryDataOutput[12]: InterfaceDataOutput[2];
LogicalMemoryDataOutput[13]: InterfaceDataOutput[10];
LogicalMemoryDataOutput[14]: InterfaceDataOutput[18];
LogicalMemoryDataOutput[15]: InterfaceDataOutput[26];
LogicalMemoryDataOutput[16]: InterfaceDataOutput[34];
LogicalMemoryDataOutput[17]: InterfaceDataOutput[42];
LogicalMemoryDataOutput[18]: InterfaceDataOutput[3];
LogicalMemoryDataOutput[19]: InterfaceDataOutput[11];
LogicalMemoryDataOutput[20]: InterfaceDataOutput[19];
LogicalMemoryDataOutput[21]: InterfaceDataOutput[27];
LogicalMemoryDataOutput[22]: InterfaceDataOutput[35];
LogicalMemoryDataOutput[23]: InterfaceDataOutput[43];
LogicalMemoryDataOutput[24]: InterfaceDataOutput[4];
LogicalMemoryDataOutput[25]: InterfaceDataOutput[12];
LogicalMemoryDataOutput[26]: InterfaceDataOutput[20];
LogicalMemoryDataOutput[27]: InterfaceDataOutput[28];
LogicalMemoryDataOutput[28]: InterfaceDataOutput[36];
LogicalMemoryDataOutput[29]: InterfaceDataOutput[44];
LogicalMemoryDataOutput[30]: InterfaceDataOutput[5];
LogicalMemoryDataOutput[31]: InterfaceDataOutput[13];
LogicalMemoryDataOutput[32]: InterfaceDataOutput[21];
LogicalMemoryDataOutput[33]: InterfaceDataOutput[29];
LogicalMemoryDataOutput[34]: InterfaceDataOutput[37];
LogicalMemoryDataOutput[35]: InterfaceDataOutput[45];
LogicalMemoryDataOutput[36]: InterfaceDataOutput[6];
LogicalMemoryDataOutput[37]: InterfaceDataOutput[14];
LogicalMemoryDataOutput[38]: InterfaceDataOutput[22];
LogicalMemoryDataOutput[39]: InterfaceDataOutput[30];
LogicalMemoryDataOutput[40]: InterfaceDataOutput[38];
LogicalMemoryDataOutput[41]: InterfaceDataOutput[46];
LogicalMemoryDataOutput[42]: InterfaceDataOutput[7];
LogicalMemoryDataOutput[43]: InterfaceDataOutput[15];
LogicalMemoryDataOutput[44]: InterfaceDataOutput[23];
LogicalMemoryDataOutput[45]: InterfaceDataOutput[31];
LogicalMemoryDataOutput[46]: InterfaceDataOutput[39];
LogicalMemoryDataOutput[47]: InterfaceDataOutput[47];
LogicalMemoryAddress[9:0]: InterfaceAddress[9:0] ;
LogicalMemoryGroupWriteEnable[7:0]:InterfaceGroupWriteEnable[7:0];
}
}
Figure 20: Logical to Interface mapping for GHB Logical Memory
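The reordering in Figure 20 follows a regular pattern: logical data bit b corresponds to interface bit (b mod 6) * 8 + b / 6 (integer division), which interleaves the 8 byte-enable groups across the 6 bytes. As a hypothetical illustration only (PinMappings is library syntax, not RTL, and the signal names below are placeholders), the same permutation can be expressed as a Verilog generate loop:

```verilog
// Hypothetical illustration of the Figure 20 permutation, not RTL
// generated by the tool. Logical data bit b corresponds to interface
// bit (b % 6) * 8 + b / 6 for the 48-bit GHB data bus, e.g. logical
// bit 2 -> interface bit 16, logical bit 6 -> interface bit 1.
genvar b;
generate
  for (b = 0; b < 48; b = b + 1) begin : remap_ghb
    assign iface_do[(b % 6) * 8 + b / 6] = logical_do[b];
  end
endgenerate
```

This form makes it easy to check that every entry listed in Figure 20 matches the stated bit write enable grouping.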
Note: The mapping proposed will cause SiliconInsight to report an incorrect IO number during
diagnosis UNLESS the connections between the logical and physical memory are modified to
undo the mappings described in the LogicalMemoryDataInput wrapper. This is not verified by
the tools. It is up to the user to make these connection changes.
TLB logical memory
The TLB memory has an asymmetric group write enable scheme which is not supported. The
data input size is 100 bits and the bit write enable (MBISTBE1) is 2 bits. The mapping is such
that MBISTBE1[0] controls data bits 0 to 97 and MBISTBE1[1] controls data bits 98 and 99.
Testing the TLB memory with SMarchCHKBvcd requires the following steps. The idea is to
control all data bits with a single group write enable signal.
A. In the logical memory template, modify the existing 2-bit port with GroupWriteEnable
function to become a 1-bit bus.
MemoryTemplate(CPU_L2TLB) {
//Port (MBISTBE1[1:0]) {
Port (MBISTBE1[0]) {
Function : GroupWriteEnable;
Polarity : ActiveHigh;
}
C. The following RTL edit to the virtual memory module is no longer required with the 2012.1
release; it is needed only when using version 9.6 or earlier of the MemoryBIST tools.
In the virtual memory module, modify the RTL to broadcast the single bit group write
enable signal to the shared interface input MBISTBE1[7:0]. Make sure to preserve the
assignments of bits 2 to 7 since TLB only uses the 2 least significant bits.
File:
ETAssemble/outDir/*_L2TLB_VM.vb
Before:
assign MBISTBE1_toCore[0] =0;
assign MBISTBE1_toCore[1] =0;
assign MBISTBE1_toCore[2] =0;
assign MBISTBE1_toCore[3] =0;
assign MBISTBE1_toCore[4] =0;
assign MBISTBE1_toCore[5] =0;
assign MBISTBE1_toCore[6] =0;
assign MBISTBE1_toCore[7] =0;
After:
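The "After" code is not reproduced in this document. Based on the description above, the edit broadcasts the single group write enable bit to bits 0 and 1 while leaving bits 2 to 7 tied off. The sketch below assumes the controller-side signal is named MBISTBE1_VM[0]; the actual name may differ in the generated file.

```verilog
// Hypothetical sketch of the edit (the signal name MBISTBE1_VM is an
// assumption): broadcast the single group write enable bit to the two
// BE bits the TLB uses, preserving the tie-offs on bits 2 to 7.
assign MBISTBE1_toCore[0] = MBISTBE1_VM[0];
assign MBISTBE1_toCore[1] = MBISTBE1_VM[0];
assign MBISTBE1_toCore[2] = 0;
assign MBISTBE1_toCore[3] = 0;
assign MBISTBE1_toCore[4] = 0;
assign MBISTBE1_toCore[5] = 0;
assign MBISTBE1_toCore[6] = 0;
assign MBISTBE1_toCore[7] = 0;
```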
For the L2 logical memories, the BE mapping is uniform, meaning each MBISTBE2[x] port
controls a contiguous range of data bits. No special re-mapping is required in the cluster library.
Read and write latency
For the L2 memories with latency, a custom operation set must be coded to account for the
latency during read and write operations. One custom operation set is needed for each latency
value. The following example shows the differences between the standard Read operation and a
Read operation with a latency of 2.
Tick {
ReadEnable : On; // Initiate second read operation
StrobeDataOut; // Compare result of first read operation
// Strobe is actually delayed by the combined use of
// PipeliningStages (StrobeDataOut) and
// PipelineDepth=n
}
Tick { // Latency 1
ReadEnable : Off;
}
Tick { // Latency 2
ReadEnable : Off;
}
You can use the standard operation sets as templates for modification. The syntax files can be
found in the Tessent install tree.
Testcase 1 overview:
The generic core testcase has one shared bus interface named I1. Four logical memories, LM_0
through LM_3, can be accessed via the shared bus interface. Table 4: Logical memory details
describes the access codes, latency, pipelining and memory size for each logical memory.
In Table 4 : Logical memory details, latency corresponds to the extra number of clock cycles
required for each memory operation. For example, a memory with a latency of 0 can have one
operation per clock cycle. A memory with a latency of 2 means that the control signals must
remain stable for 2 clock cycles after the operation is launched. For example, during a write
operation, the memory Address and Data ports must remain stable for 2 clock cycles after the
WriteEnable pulse. Compatible operation sets must be created for memories with latency
values. The following example shows an operation set used for a memory with a latency of 2.
The Tick wrappers for each latency cycle are labeled with comments that indicate the latency
cycle. The first latency cycle corresponds to the cycle when the ReadEnable/WriteEnable control
signal is turned off.
Tick {
}
}
Operation (Write) {
Tick {
Select: On;
WriteEnable: On;
}
Tick { // Latency 1
WriteEnable: Off;
}
Tick { // Latency 2
}
Tick {
}
Tick {
}
Tick {
}
}
Operation (Read) {
Tick {
Select : On;
WriteEnable : Off;
ReadEnable : On;
}
Tick { // Latency 1
ReadEnable : Off;
}
Tick { // Latency 2
}
Tick {
ReadEnable : On; // Initiate second read operation
StrobeDataOut; // Compare result of first read operation
}
Tick { // Latency 1
ReadEnable : Off;
}
Tick { // Latency 2
ReadEnable : Off;
}
}
Operation (ReadModifyWrite) {
Tick {
Select: On;
ReadEnable: On;
}
Tick { // Latency 1
ReadEnable: Off;
}
Tick { // Latency 2
StrobeDataOut;
}
Tick {
WriteEnable: On;
}
Tick { // Latency 1
WriteEnable: Off;
}
Tick { // Latency 2
}
}
Each logical memory is composed of one or more physical memories. The next figures illustrate
the physical memory stacking specific to each logical memory. The gray boxes identified with
P(<INTEGER>) refer to the pipelining stages on the data signals.
Testcase 2 overview:
Package: SharedBus_Generic_BISR
Design name : WIRELESS_CORE
Cluster module name : CLUSTER
MemoryClusterTemplate file : MEM/CLUSTER.lvlib
Logical memory templates: MEM/Logical.lvlib
Path to workspace : ETCreate/WIRELESS_CORE_LVWS
The design follows the same logical structure as testcase 1, the exception being that LM_0 and
LM_1 have row and column redundancy and use a repairable SYNC_1RW_16x4 memory
instead of the non-repairable SYNC_1RW_32x4. Snippets of the library syntax for this testcase
are used in section 5, Using Repair with Cluster Module.