
ADVANCED COMPUTER ARCHITECTURE TWO MARK QUESTIONS

UNIT I INSTRUCTION LEVEL PARALLELISM TWO MARKS

1. List the various data dependences.
Data dependence, name dependence, control dependence.

2. What is instruction-level parallelism?
Pipelining is used to overlap the execution of instructions and improve performance. This potential overlap among instructions is called instruction-level parallelism (ILP), since the instructions can be evaluated in parallel.

3. Give an example of control dependence.
if p1 {S1;}
if p2 {S2;}
S1 is control dependent on p1, and S2 is control dependent on p2.

4. What is the limitation of the simple pipelining technique?
These techniques use in-order instruction issue and execution. Instructions are issued in program order, and if an instruction is stalled in the pipeline, no later instructions can proceed.

5. Briefly explain the idea behind using reservation stations.
A reservation station fetches and buffers an operand as soon as it is available, eliminating the need to get the operand from a register.

6. Give an example of data dependence.
Loop: L.D    F0,0(R1)
      ADD.D  F4,F0,F2
      S.D    F4,0(R1)
      DADDUI R1,R1,#-8
      BNE    R1,R2,Loop

7. Explain the idea behind dynamic scheduling.
In dynamic scheduling the hardware rearranges the instruction execution to reduce the stalls while maintaining data flow and exception behavior.

8. Mention the advantages of using dynamic scheduling.
It enables handling some cases where dependences are unknown at compile time, and it simplifies the compiler. It allows code that was compiled with one pipeline in mind to run efficiently on a different pipeline.

9. What are the possibilities for imprecise exceptions?
The pipeline may have already completed instructions that are later in program order than the instruction causing the exception. The pipeline may not yet have completed some instructions that are earlier in program order than the instruction causing the exception.

10. What are multilevel branch predictors?
These predictors use several levels of branch-prediction tables together with an algorithm for choosing among the multiple predictors.

11. What are branch-target buffers?
To reduce the branch penalty we need to know what address to fetch from by the end of IF (instruction fetch). A branch-prediction cache that stores the predicted address for the next instruction after a branch is called a branch-target buffer or branch-target cache.

12. Briefly explain the goal of a multiple-issue processor.
The goal of multiple-issue processors is to allow multiple instructions to issue in a clock cycle. They come in two flavors: superscalar processors and VLIW processors.

13. What is speculation?
Speculation allows execution of instructions before control dependences are resolved.

14. Mention the purpose of using a branch-history table.
It is a small memory indexed by the lower portion of the address of the branch instruction. The memory contains a bit that says whether the branch was recently taken or not.

15. What are superscalar processors?
Superscalar processors issue varying numbers of instructions per clock and are either statically scheduled or dynamically scheduled.

16. Mention the idea behind hardware-based speculation.
It combines three key ideas: dynamic branch prediction to choose which instructions to execute, speculation to allow the execution of instructions before control dependences are resolved, and dynamic scheduling to deal with the scheduling of different combinations of basic blocks.

17. What are the fields in the ROB?
Instruction type, destination field, value field, ready field.

18. How many branch-selected entries are in a (2,2) predictor that has a total of 8K bits in the prediction buffer?
2^2 x 2 x number of prediction entries selected by the branch = 8K
Number of prediction entries selected by the branch = 1K.

19. What is the advantage of using the instruction type field in the ROB?
The instruction type field specifies whether the instruction is a branch, a store, or a register operation.

20. Mention the advantage of using tournament-based predictors.
The advantage of a tournament predictor is its ability to select the right predictor for the right branch.

UNIT II MULTIPLE ISSUE PROCESSORS

1. What is loop unrolling?
A simple scheme for increasing the number of instructions relative to the branch and overhead instructions is loop unrolling. Unrolling simply replicates the loop body multiple times, adjusting the loop termination code.

2. When are static branch predictors used?
They are used in processors where the expectation is that the branch behavior is highly predictable at compile time. Static predictors are also used to assist dynamic predictors.

3. Mention the different methods to predict branch behavior.
Predict the branch as taken.
Predict on the basis of branch direction (either forward or backward).
Predict using profile information collected from earlier runs.

4. Explain the VLIW approach.
It uses multiple, independent functional units. Rather than attempting to issue multiple, independent instructions to the units, a VLIW packages the multiple operations into one very long instruction.

5. Mention the techniques to compact the code size of instructions.
Use encoding techniques.
Compress the instructions in main memory and expand them when they are read into the cache or are decoded.

6. Mention the advantages of using a multiple-issue processor.
They are less expensive. They have a cache-based memory system. More parallelism.

7. What is a loop-carried dependence?
Analysis focuses on determining whether data accesses in later iterations are dependent on data values produced in earlier iterations; such a dependence is called a loop-carried dependence. E.g.
for (i = 1000; i > 0; i = i - 1)
    x[i] = x[i] + s;

8. Mention the tasks involved in finding dependences in instructions.
Good scheduling of code.
Determining which loops might contain parallelism.
Eliminating name dependences.

9. Use the GCD test to determine whether a dependence exists in the following loop:
for (i = 1; i <= 100; i = i + 1)
    X[2*i + 3] = X[2*i] * 5.0;
Solution: a = 2, b = 3, c = 2, d = 0.
GCD(a, c) = 2 and d - b = -3.
Since 2 does not divide -3, no dependence is possible.

10. What is software pipelining?
Software pipelining is a technique for reorganizing loops such that each iteration in the software-pipelined code is made from instructions chosen from different iterations of the original loop.

11. What is global code scheduling?
Global code scheduling aims to compact a code fragment with internal control structure into the shortest possible sequence that preserves the data and control dependences. Finding the shortest possible sequence means finding the shortest sequence for the critical path.

12. What is a trace?
Trace selection tries to find a likely sequence of basic blocks whose operations will be put together into a smaller number of instructions; this sequence is called a trace.

13. Mention the steps followed in trace scheduling.
Trace selection, trace compaction.

14. What is a superblock?
Superblocks are formed by a process similar to that used for traces, but are a form of extended basic block, restricted to a single entry point but allowing multiple exits.

15. Mention the advantages of predicated instructions.
Remove control dependences.
Maintain the data flow enforced by branches.
Reduce the overhead of global code scheduling.

16. Mention the limitations of predicated instructions.
They are useful only when the predicate can be evaluated early. Predicated instructions may have a speed penalty.

17. What is a poison bit?
Poison bits are a set of status bits attached to the result registers written by a speculated instruction when the instruction causes an exception. The poison bits cause a fault when a normal instruction attempts to use the register.

18. What are the disadvantages of supporting speculation in hardware?
Complexity.
Additional hardware resources required.

19. Mention the methods for preserving exception behavior.
Ignore exceptions.
Use only instructions that never raise exceptions.
Use poison bits.
Use hardware buffers.

20. What is an instruction group?
It is a sequence of consecutive instructions with no register data dependences among them. All the instructions in the group can be executed in parallel. An instruction group can be arbitrarily long.

UNIT IV MEMORY AND I/O

1. What are a cache hit and a cache miss?
When the CPU finds a requested data item in the cache, it is called a cache hit. When the CPU does not find a data item it needs in the cache, a cache miss occurs.

2. What are write-through and write-back caches?
Write through: the information is written both to the block in the cache and to the block in the lower-level memory.
Write back: the information is written only to the block in the cache. The modified cache block is written to main memory only when it is replaced.

3. What are miss rate and miss penalty?
Miss rate is the fraction of cache accesses that result in a miss. Miss penalty is the cost per miss; total memory stall cycles depend on the number of misses and the clock cycles per miss.

4. Give the equation for average memory access time.
Average memory access time = Hit time + Miss rate x Miss penalty.

5. What is striping?
Spreading data over multiple disks is called striping, which automatically forces accesses to several disks.

6. Mention the problems with disk arrays.
As the number of devices increases, dependability decreases.
Without redundancy, a disk array becomes unusable after a single failure.

7. What is a hot spare?
Hot spares are extra disks that are not used in normal operation. When a failure occurs, an idle hot spare is pressed into service. Thus, hot spares reduce the MTTR.

8. What is mirroring?
Disks in the configuration are mirrored, or copied, to another disk. With this arrangement, data on a failed disk can be recovered by reading it from the mirrored disk.

9. Mention the drawbacks of mirroring.
Writing onto the disk is slower.
Since the disks are not synchronized, seek times will differ.
It imposes a 50% space penalty and is hence expensive.

10. Mention the factors that I/O performance measures cover.
Diversity, capacity, response time, throughput, interference of I/O with CPU execution.

11. What is transaction time?
The sum of entry time, response time, and think time is called transaction time.

12. State Little's law.
Little's law relates the average number of tasks in the system, the average arrival rate of new tasks, and the average time to perform a task.

13. Give the equation for the mean number of tasks in the system.
Mean number of tasks in the system = Arrival rate x Mean response time.

14. What is server utilization?
The mean number of tasks being serviced divided by the service rate:
Server utilization = Arrival rate / Service rate.
The value should be between 0 and 1; otherwise there would be more tasks arriving than could be serviced.

15. What are the steps to design an I/O system?
Naive cost-performance design and evaluation.
Availability of the naive design.
Response time.
Realistic cost-performance design and evaluation.
Realistic design for availability and its evaluation.

16. Briefly discuss the classification of buses.
I/O buses: these buses are lengthy and have many types of devices connected to them.
CPU-memory buses: they are short and generally of high speed.

17. Explain bus transactions.
Read transaction: transfers data from memory.
Write transaction: writes data to memory.

UNIT III MULTIPROCESSORS AND THREAD LEVEL PARALLELISM

1. What are multiprocessors? Mention the categories of multiprocessors.
Multiprocessors are used to increase performance and improve availability. The different categories are SISD, SIMD, MISD, MIMD.

2. What are threads?
Multiple processes executing a single program and sharing the code and most of their address space. When multiple processes share code and data in this way, they are often called threads.

3. What is the cache coherence problem?
Two different processors can have two different values for the same memory location.

4. What are the protocols to maintain coherence?
Directory-based protocol, snooping protocol.

5. What are the ways to maintain coherence using a snooping protocol?
Write invalidate protocol.
Write update (or write broadcast) protocol.

6. What are write invalidate and write update?
Write invalidate provides exclusive access to a cache. This exclusive access ensures that no other readable or writable copies of an item exist when the write occurs.
Write update updates all cached copies of a data item when that item is written.

7. What are the disadvantages of using symmetric shared memory?
Compiler mechanisms are very limited.
Larger latency for remote memory access.
Fetching multiple words in a single cache block increases the cost.

8. Mention the information in the directory.
It keeps the state of each block that is cached.
It keeps track of which caches have copies of the block.

9. What operations does a directory-based protocol handle?
Handling a read miss.
Handling a write to a shared, clean cache block.

10. What are the states of a cache block?
Shared, uncached, exclusive.

11. What is a bus master?
Bus masters are devices that can initiate a read or write transaction. E.g. the CPU is always a bus master. The bus can have many masters when there are multiple CPUs and when input devices can initiate bus transactions.

12. Mention the advantage of using bus masters.
They offer higher bandwidth by using packets, as opposed to holding the bus for a full transaction.

13. What is a split transaction?
The idea is to split the bus into requests and replies, so that the bus can be used in the time between the request and the reply.

14. What are the uses of having a bit vector?
When a block is shared, the bit vector indicates whether each processor has a copy of the block. When a block is in the exclusive state, the bit vector keeps track of the owner of the block.

15. When do we say that a cache block is exclusive?
When exactly one processor has the copy of the cached block, and it has written the block. That processor is called the owner of the block.

16. Explain the types of nodes between which messages are sent in a directory protocol.
Local node: the node where the request originates.
Home node: the node where the memory location and directory entry of the address reside.
Remote node: a third node that has a copy of the block.

17. What is consistency?
Consistency says in what order a processor must observe the data writes of another processor.

18. Mention the models that are used for consistency.
Sequential consistency, relaxed consistency.

19. What is sequential consistency?
It requires that the result of any execution be the same as if the memory accesses executed by each processor were kept in order and the accesses among different processors were interleaved.

20. What is the relaxed consistency model?
The relaxed consistency model allows reads and writes to be executed out of order. The three sets of orderings are:
W -> R ordering.
W -> W ordering.
R -> W and R -> R ordering.

21. What is multithreading?
Multithreading allows multiple threads to share the functional units of a single processor in an overlapping fashion.

22. What is fine-grained multithreading?
It switches between threads on each instruction, causing the execution of multiple threads to be interleaved.

23. What is coarse-grained multithreading?
It switches threads only on costly stalls. Thus it is much less likely to slow down the execution of an individual thread.

UNIT V MULTI CORE ARCHITECTURE

1. What is multithreading?
Multithreading allows multiple threads to share the functional units of a single processor in an overlapping fashion.

2. What is fine-grained multithreading?
It switches between threads on each instruction, causing the execution of multiple threads to be interleaved.

3. What is coarse-grained multithreading?
It switches threads only on costly stalls. Thus it is much less likely to slow down the execution of an individual thread.

4. Advantages of multithreading.
If a thread gets a lot of cache misses, the other threads can continue, giving faster execution.
If a thread cannot use all the computing resources of the CPU, running another thread permits not leaving these idle.
If several threads work on the same set of data, they can actually share their cache, leading to better cache usage and synchronization on its values.

5. Write the disadvantages of multithreading.
Threads can interfere with each other when sharing hardware resources.
Execution times of a single thread are not improved but can be degraded.
Hardware support for multithreading is more visible to software, thus requiring more changes to both application programs and operating systems than multiprocessing.

6. What is a CMP?
CMP stands for chip multiprocessor, also called a multi-core microprocessor. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism using existing techniques.

7. What is SMT?
SMT means simultaneous multithreading. It is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.

8. What are the disadvantages of SMT?
It cannot improve performance if any of the shared resources are limiting bottlenecks for performance.
Some applications run slower when multithreading is enabled.
Critics argue that it is a considerable burden to put on software developers that they have to test whether simultaneous multithreading is good or bad for their application and insert extra logic to turn it off if it decreases performance.

9. What are the types of multithreading?
Block multithreading, interleaved multithreading.

10. Advantages of CMP.
Smaller, less complex cores on a single chip.
Dynamically switching between cores and powering down unused cores.
Increased throughput performance by exploiting parallelism.

11. What is multi-core?
It is a design in which a single physical processor contains the core logic of more than one processor. It is as if an Intel Xeon processor were opened up and inside were packaged all the circuitry and logic for two Intel Xeon processors. The multi-core design takes several processor cores and packages them as a single physical processor. The goal of this design is to enable a system to run more tasks simultaneously and thereby achieve greater overall system performance.

UNIT I INSTRUCTION LEVEL PARALLELISM 16 MARKS

FIRST PART

1. What is instruction-level parallelism? Explain in detail the various dependences and hazards caused in ILP.
Define ILP.
Dependences: 1. data dependence, 2. name dependence, 3. control dependence.
Hazards: 1. definition, 2. types of hazards, with an explanation of each.

2. Explain in detail the hardware approaches.
Define reservation stations.
Explain Tomasulo's algorithm and its stages.
Define the reservation station components.

3. Discuss Tomasulo's algorithm to overcome data hazards using dynamic scheduling.
Dynamic scheduling with an example.
How a data hazard is caused.
Architecture; reservation stations.
Examples of Tomasulo's algorithm.

4. Explain how hardware-based speculation is used to overcome control dependence.
Ideas in hardware-based speculation: dynamic scheduling, speculation, dynamic branch prediction.
Architecture and example; reorder buffer.

SECOND PART

5. Explain how to reduce branch cost with dynamic hardware prediction.
Basic branch prediction and prediction buffers.
Correlating branch predictors.
Tournament-based predictors.

6. Explain compiler techniques for exposing ILP.
Basic pipeline scheduling and loop unrolling.
Summary of the loop unrolling and scheduling technique.
Example.

UNIT II MULTIPLE ISSUE PROCESSORS

FIRST PART

1. Discuss the VLIW approach.
Basic idea of the VLIW approach.
Limitations and principle of VLIW.
Architecture of VLIW.
Advantages and disadvantages of VLIW.

2. Explain the different techniques to exploit and expose more parallelism using compiler support.
Loop-level parallelism.
Classification of data dependences in loops.
Finding affine indices and multidimensional indices, with examples.
Software pipelining.
Global code scheduling.
Trace scheduling and superblocks.

3. Explain hardware support for exposing more parallelism at compile time.
Conditional or predicated instructions.
Compiler speculation with hardware support.

SECOND PART

4. Differentiate hardware and software speculation mechanisms.
Control flow.
Exception conditions.
Bookkeeping.

5. Explain the Intel IA-64 architecture in detail with suitable reference to the Itanium processor.
Intel IA-64 instruction set architecture.
Instruction format and support for exploiting parallelism.
Predication and speculation support.
Itanium processor: functional units and instruction issue.

6. Explain the limitations of ILP.
Hardware model.
Window size and maximum issue count.
Realistic branch and jump prediction.
Effects of finite registers.
Effects of imperfect alias analysis.

UNIT III MULTIPROCESSORS AND THREAD LEVEL PARALLELISM

FIRST PART

1. Explain the taxonomy of parallel architectures.
Idea and classification of the taxonomy of parallel architectures.
Flynn's taxonomy.
Classification based on the memory arrangement.
Classification based on communication.
Classification based on the kind of parallelism: data parallel, functional parallel.

2. Explain the snooping protocol (write invalidate protocol, centralized protocol, symmetric shared memory architecture) with a state diagram.
Basic schemes for enforcing cache coherence.
Cache coherence definition.
Snooping protocol.
An example protocol state diagram.

3. Explain the directory-based protocol (distributed shared memory architecture) with a state diagram.
Directory-based cache coherence protocol basics.
Example directory protocol.

4. With relevant graphs, discuss the performance of symmetric shared memory multiprocessors.
Two types of coherence misses, with examples.
Three application workloads.
Performance measurements: commercial workloads.
Multiprogramming and OS workload.
Performance of the multiprogramming and OS workload.

5. With relevant graphs, discuss the performance of distributed shared memory multiprocessors.
Directory-based performance.
Miss rate vs. number of processors.
Miss rate vs. cache size.
Data miss rate vs. block size.

SECOND PART

6. Define synchronization and explain the different mechanisms employed for synchronization among processors.
Basic hardware primitives.
Implementing locks using coherence.
Barrier synchronization.
Hardware and software implementations.

7. Discuss the different models for memory consistency.
Sequential consistency model.
Relaxed consistency model.

8. How is multithreading used to exploit thread-level parallelism within a processor?
Multithreading definition.
Fine-grained multithreading.
Coarse-grained multithreading.
Simultaneous multithreading.

UNIT IV MEMORY AND I/O

FIRST PART

1. Explain in detail cache performance.
Average memory access time and processor performance.
Miss penalty and out-of-order execution processors.
Improving cache performance.

2. Explain the different techniques to reduce cache miss penalty.
Multilevel caches.
Critical word first and early restart.
Giving priority to read misses over writes.
Merging write buffers.
Victim caches.

3. Explain the different techniques to reduce miss rate.
Larger block size.
Larger caches.
Higher associativity.
Way prediction and pseudo-associative caches.
Compiler optimizations.

4. Explain the different techniques to reduce miss rate.
The 3 Cs of cache misses.
Division of conflict misses.
Techniques to reduce miss rate.

5. Discuss how main memory is organized to improve performance.
Wider main memory.
Simple interleaved memory.
Independent memory banks.

SECOND PART

6. Explain the various levels of RAID.
No redundancy.
Mirroring.
Bit-interleaved parity.
Block-interleaved parity.
P+Q redundancy.

7. Explain the various ways to measure I/O performance.
Throughput versus response time.
Little's queuing theory.

8. Explain the various types of storage systems.
Magnetic devices.
Optical disks.
Magnetic tapes.
Automated tape libraries.
Flash memory.

9. Examples of I/O design.
Performance measures.
Naive design and cost-performance.
Calculate the poor performance of the naive design and calculate MTTF.
Value of the naive design.
Calculate the performance time for the first step.

UNIT V MULTICORE ARCHITECTURE

FIRST PART

1. Explain in detail hardware multithreading techniques.
Fine-grained multithreading.
Coarse-grained multithreading.
Simultaneous multithreading.

2. Explain in detail the CMP architecture and SMT architecture.
Design challenges in SMT.
Potential performance advantages from SMT.

3. Explain in detail the issues of CMP and SMT architectures.
Why are design issues important?
Why multithreading today?
Design challenges of SMT.
Reasons for loss of throughput, and design challenges.
Transient faults and fault detection via SMT.
Simultaneous and Redundantly Threaded processor (SRT) and SRT design challenges.
Transient fault detection in CMPs.
Transient fault recovery for CMPs.
Maximizing SMT performance.

SECOND PART

4. Explain in detail the Intel multi-core architecture.
5. Describe in detail the Sun CMP architecture.
6. Discuss in detail heterogeneous multi-core processors.
7. Explain the IBM Cell processor.
