
Summary of the paper: Advanced Computer Architecture

Nowadays, multi-core processors are a better option than high-performance single-core processors, because single-core scaling runs into several constraints, including thermal, energy, power, and design complexity. These processors are used in many application domains. Real-time systems are systems in which a result is considered correct only if it is both logically correct and delivered on time. Thus, in these systems we must be able to guarantee the worst-case execution time (WCET) of each task. However, using a cache in real-time systems causes unpredictability, since on a cache miss the data must be fetched from main memory. One might suggest bounding every access by cache_access_time + main_memory_access_time, but in that case the cache makes the worst-case bound even worse than having no cache at all. The paper claims that WCET analysis for single-core processors is a solved problem (it does not say how), but that for multi-core processors it is hard and may not be solvable in general. So, the authors adopt restrictive mechanisms that ease the analysis but have some effect on performance.

Cache locking and cache partitioning are the two mechanisms used to ease the analysis. Cache partitioning divides the cache among cores or tasks and comes in three flavors: core-based, task-based, and no partitioning. In core-based partitioning each core uses a portion of the cache, while in task-based partitioning a portion of the cache is assigned to each task. Cache locking allows the user to load content into the cache and lock it, meaning it cannot be replaced at run time. Cache locking can be static or dynamic: with static locking the locked cache content remains the same at run time, while with dynamic locking it can change. Hence, there are 8 different combinations, of which the paper considers and compares 4. For the comparison they use the Q-core multiprocessor utilization U_Q = (SUM_i c_i/p_i)/Q, where c_i is the WCET and p_i the period of task i; the smaller, the better. The four methods are:

1. Static locking, no partitioning: a cache block can be locked anywhere in the cache (flexible). Performance degrades if the code size of the tasks far exceeds the cache size, and idle tasks still occupy cache space.
2. Static locking, core-based partitioning: an idle core does not use the cache. Partition reloading and locking cost is paid at every preemption. Simpler cache management.
3. Dynamic locking, task-based partitioning: reloading is intra-task. A large number of tasks can be a problem, because the cache space per task becomes small.
4. Dynamic locking, core-based partitioning: reloading is supported within a task in addition to reloading at preemptions. More cache space per task.
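The comparison metric above is easy to compute. A minimal sketch, where the task set and per-scheme WCET values are invented purely for illustration (they are not from the paper's experiments):

```python
# Q-core multiprocessor utilization: U_Q = (sum_i c_i / p_i) / Q,
# where c_i is task i's WCET under a given locking/partitioning scheme
# and p_i is its period. Lower U_Q means more schedulability headroom.

def utilization(tasks, q_cores):
    """tasks: list of (wcet, period) pairs; returns U_Q."""
    return sum(c / p for c, p in tasks) / q_cores

# Hypothetical WCETs (ms) for the same 3 tasks under two schemes;
# a scheme that locks hot regions more effectively yields smaller WCETs.
static_no_part = [(4.0, 10.0), (6.0, 20.0), (9.0, 30.0)]
dynamic_core   = [(3.0, 10.0), (5.0, 20.0), (8.0, 30.0)]

Q = 2  # number of cores
u_sn = utilization(static_no_part, Q)
u_dc = utilization(dynamic_core, Q)
print(f"U_Q (static, no partitioning): {u_sn:.3f}")
print(f"U_Q (dynamic, core-based):     {u_dc:.3f}")
```

The scheme with the smaller U_Q is the better one under the paper's metric.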

Finally, based on the experimental results, they offer guidelines for designers: 1. If the designer wants to use static scheduling, core-based partitioning is the best choice. 2. DC (dynamic, core-based) is better than DT (dynamic, task-based) for small shared cache sizes. 3. Dynamic cache locking beats static cache locking only for tasks with a larger number of hot regions and for smaller shared cache sizes.
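The three guidelines can be read as a small decision procedure. A hedged sketch, where the function name and the numeric thresholds (`small_cache_kb`, `many_hot_regions`) are hypothetical placeholders, not values given in the paper:

```python
def suggest_scheme(static_schedule, shared_cache_kb, hot_regions_per_task,
                   small_cache_kb=8, many_hot_regions=4):
    """Return a (locking, partitioning) suggestion per the summarized
    guidelines. Thresholds are illustrative, not from the paper."""
    if static_schedule:
        # Guideline 1: with static scheduling, core-based wins.
        return ("static", "core-based")
    small_cache = shared_cache_kb <= small_cache_kb
    many_hot = hot_regions_per_task >= many_hot_regions
    # Guideline 3: dynamic locking pays off only with many hot
    # regions AND a small shared cache.
    locking = "dynamic" if (small_cache and many_hot) else "static"
    # Guideline 2: DC beats DT when the shared cache is small.
    partitioning = "core-based" if small_cache else "task-based"
    return (locking, partitioning)

print(suggest_scheme(True, 16, 2))   # static scheduling case
print(suggest_scheme(False, 4, 8))   # small cache, many hot regions
```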
