Computer Science

Managing Contention for Shared Resources on Multicore Processors

1. Modern multicore processors are organized as clusters of cores that share several hardware structures, such as last-level caches (LLCs, typically L2 or L3), memory controllers, interconnects, and prefetching hardware. A cluster of cores together with its shared resources forms a memory domain, and the shared resources act as part of the memory hierarchy. Threads running on cores within the same memory domain compete for these shared resources, and this contention can degrade performance (Bischof, 2008). To illustrate the effect of contention for shared resources on application performance, consider four applications: Soplex, Sphinx, Gamess, and Namd, taken from the SPEC (Standard Performance Evaluation Corporation) CPU 2006 benchmark suite (Balasubramonian, Jouppi, & Muralimanohar, 2011). They were run in parallel on an Intel Quad-Core Xeon system under three different schedules, each pairing the applications differently within the two memory domains, and each schedule was run several times. The three pairing permutations were (Fedorova, Blagodurov, & Zhuravlev, 2010):

  • Soplex and Sphinx ran in a memory domain, while Gamess and Namd shared another memory domain.
  • Sphinx was paired with Gamess, while Soplex shared a domain with Namd.
  • Sphinx was paired with Namd, while Soplex ran in the same domain with Gamess.
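These three schedules are exactly the distinct ways of splitting four applications into two pairs, one pair per memory domain. A minimal sketch of that enumeration (application names are plain strings; this is an illustration, not code from the article):

```python
def domain_pairings(apps):
    """Enumerate the distinct ways to split four applications into
    two pairs, where each pair shares one memory domain."""
    first, rest = apps[0], apps[1:]
    pairings = []
    for partner in rest:
        # Fix the first application and choose its domain partner;
        # the remaining two applications form the other domain.
        pair_a = (first, partner)
        pair_b = tuple(a for a in apps if a not in pair_a)
        pairings.append((pair_a, pair_b))
    return pairings

for pairing in domain_pairings(["Soplex", "Sphinx", "Gamess", "Namd"]):
    print(pairing)
```

With four applications this always yields exactly three unique schedules, matching the list above.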

2. The main goal is to capture the essence of memory-reuse profiles in a simple metric, and then to find a way to approximate that metric using only information readily available to a thread scheduler at runtime. Memory-reuse profiles are successful at modeling cache contention largely because they capture two key qualities of a thread: its sensitivity and its intensity. Sensitivity describes how much a thread suffers when it shares the cache with other threads; intensity describes how much a thread hurts other threads with which it shares the cache. Because most of the contention-relevant information in memory-reuse profiles is captured by these two qualities, sensitivity and intensity can serve as the basis for modeling cache contention between threads. Both quantities are derived from metrics in the memory-reuse profiles, and both can be approximated using online performance data (Fedorova, Blagodurov, & Zhuravlev, 2010).
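As a rough illustration of the two qualities (these are simplified stand-ins, not the authors' exact formulas): sensitivity can be read off a memory-reuse (stack-distance) profile, since hits that need many cache ways are the first ones a co-runner's evictions destroy, while intensity can be approximated by how often the thread touches the cache. The profile values and cache geometry below are hypothetical.

```python
def sensitivity(reuse_profile, cache_ways):
    """Estimate cache sensitivity from a stack-distance profile.

    reuse_profile[d] = number of accesses with reuse distance d,
    i.e., accesses that hit only if the thread keeps at least d+1
    cache ways to itself. Hits deep in the cache are lost first
    when a co-runner takes cache space away, so a thread whose
    hits mostly need many ways is more sensitive.
    """
    total = sum(reuse_profile)
    if total == 0:
        return 0.0
    weighted = sum(count * (d + 1) / cache_ways
                   for d, count in enumerate(reuse_profile))
    return weighted / total

def intensity(cache_accesses, instructions):
    # Approximate intensity as cache accesses per instruction: a
    # thread that touches the cache often displaces others' data.
    return cache_accesses / instructions
```

For example, a thread whose hits all sit at the deepest reuse distance of a 16-way cache gets sensitivity 1.0, while one whose hits all sit at distance 0 gets 1/16.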

3. When assessing new models of cache contention, the key objective is a model's ability to generate contention-minimizing thread schedules: a model is useful if it helps find the best schedules or avoid the worst ones. The models are therefore evaluated by the merit of the schedules they construct. The Pain metric is used by a scheduler that searches for the best schedule. For instance, consider a system with two caches, each shared by a pair of cores (the method also extends to more cores per cache). In this scenario, the scheduler must identify the best schedule for four threads. It enumerates all permutations of threads that are unique in terms of how threads are paired within the memory domains. For four threads A, B, C, and D, the unique schedules are: (1) {(A,B), (C,D)}; (2) {(A,C), (B,D)}; and (3) {(A,D), (B,C)}, where the notation (A,B) means that threads A and B are co-scheduled in the same memory domain. For every schedule, the scheduler computes the Pain of each pair; for example, for the schedule {(A,B), (C,D)} it computes Pain(A,B) and Pain(C,D) using the equations presented earlier and sums them to obtain the Pain of the whole schedule. The schedule with the lowest Pain value is selected as the 'estimated best schedule'. The Pain metric can be computed either from actual memory-reuse profiles or by approximating it with online data.
To evaluate how good the estimated best schedule is, it is run on real hardware and its performance is compared with that of the 'actual best schedule'. The actual best schedule is found in the most direct way possible: by running all possible schedules on real hardware and picking the one that performs best. This exhaustive approach is feasible only for small workloads, since the number of possible schedules grows very quickly with the number of threads (Fedorova, Blagodurov, & Zhuravlev, 2010).
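The enumerate-and-score procedure above can be sketched as follows. The sensitivity and intensity numbers are hypothetical stand-ins for values derived from memory-reuse profiles; Pain(A,B) is modeled, as in the article, as Pain(A|B) + Pain(B|A), with Pain(A|B) taken to be the sensitivity of A times the intensity of B.

```python
def mutual_pain(a, b):
    # Pain(A,B) = Pain(A|B) + Pain(B|A), where Pain(A|B) is A's
    # cache sensitivity scaled by B's cache intensity.
    return a["sens"] * b["inten"] + b["sens"] * a["inten"]

def best_schedule(threads):
    """Enumerate the unique pairings of four threads into two memory
    domains and return the schedule with the lowest total Pain."""
    names = sorted(threads)
    first, rest = names[0], names[1:]
    best, best_pain = None, float("inf")
    for partner in rest:
        pair1 = (first, partner)
        pair2 = tuple(n for n in names if n not in pair1)
        total = (mutual_pain(threads[pair1[0]], threads[pair1[1]])
                 + mutual_pain(threads[pair2[0]], threads[pair2[1]]))
        if total < best_pain:
            best, best_pain = (pair1, pair2), total
    return best, best_pain

threads = {
    "A": {"sens": 0.9, "inten": 0.8},  # cache-hungry and aggressive
    "B": {"sens": 0.8, "inten": 0.9},  # cache-hungry and aggressive
    "C": {"sens": 0.1, "inten": 0.1},  # mostly CPU-bound
    "D": {"sens": 0.2, "inten": 0.1},  # mostly CPU-bound
}
schedule, total = best_schedule(threads)
print(schedule, total)
```

With these illustrative numbers the estimated best schedule separates the two aggressive threads into different domains, which is the behavior the Pain metric is designed to encourage.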

4. This answer discusses the causes of contention on multicore systems. A number of experiments were performed to determine the main cause of contention, measuring the degree of contention and attributing it separately to the different kinds of shared resources: the cache, the memory controller, the front-side bus, and the prefetching hardware (Das, Thankachan, & Debnath, 2011). The detailed experimental setup is described in the study. The first focus was contention for the shared cache, in which competing threads evict one another's cache lines. Surprisingly, this turned out not to be the main cause of performance degradation on multicore systems. The main sources of degradation were contention for the

  • Front-side bus
  • Prefetching resources
  • Memory controller

It is difficult to isolate the effects of contention for prefetching hardware, because the influence of prefetching reflects the combined effect of all three other factors as well as the prefetching hardware itself. These findings mean that the long-standing memory-reuse models built to model cache contention are not sufficient: a model that addresses only shared-cache contention did not perform well when applied in a real system in the presence of the other kinds of contention. In contrast, the LLC miss rate turned out to be an excellent predictor of contention for the memory controller, the prefetching hardware, and the front-side bus (Fedorova, Blagodurov, & Zhuravlev, 2010). In the experiments, each application was co-scheduled with the Milc application in order to create contention. Applications with many cache misses occupy the memory controller and the front-side bus, hurting the other applications that use the same hardware. An application with a high LLC miss rate also makes aggressive use of the prefetching hardware, since the data it requests beyond the cache is counted via cache misses and triggers prefetch requests. Thus, a high miss rate is an excellent indicator of heavy use of the memory controller, the front-side bus, and the prefetching hardware (Fedorova, Blagodurov, & Zhuravlev, 2010).
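This observation suggests a simple scheduling heuristic, along the lines the authors discuss (the function and the miss-rate values below are illustrative, not taken from the article): measure each thread's LLC miss rate online with hardware performance counters, then pair the most memory-intensive thread with the least intensive one, so that the heavy consumers of the memory controller, front-side bus, and prefetch bandwidth are spread across memory domains.

```python
def spread_by_miss_rate(miss_rates):
    """Pair threads so that high-miss-rate (memory-intensive) threads
    share a memory domain with low-miss-rate threads.

    miss_rates: mapping of thread name -> LLC misses per instruction
    (hypothetical values, measured online via performance counters).
    """
    ranked = sorted(miss_rates, key=miss_rates.get, reverse=True)
    pairs = []
    while len(ranked) >= 2:
        # The most intensive remaining thread gets the least
        # intensive remaining thread as its domain partner.
        pairs.append((ranked.pop(0), ranked.pop(-1)))
    return pairs

print(spread_by_miss_rate(
    {"milc": 0.020, "soplex": 0.012, "namd": 0.002, "gamess": 0.001}))
```

Here Milc, the heaviest cache misser, is kept away from the other memory-intensive application, so neither memory domain carries two aggressive threads at once.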


Balasubramonian, R., Jouppi, N. P., & Muralimanohar, N. (2011). Multi-core cache hierarchies. Morgan & Claypool.

Bischof, C. (2008). Parallel computing: Architectures, algorithms, and applications. IOS Press.

Das, V. V., Thankachan, N., & Debnath, N. C. (2011). Advances in power electronics and instrumentation engineering: Second International Conference, PEIE 2011, Nagpur, Maharashtra, India, April 21-22, 2011, Proceedings. Springer.

Fedorova, A., Blagodurov, S., & Zhuravlev, S. (2010). Managing contention for shared resources on multicore processors. Communications of the ACM, 53(2), 49-57.