
Semiconductor IP News and Trends Blog
How Does Intel Interconnect Many Integrated Cores (MIC)?
MIC seems to be a promising architecture for supercomputing programs that need run-time parallelization across many cores. But I learned little that was new about the IP interconnect fabric.
Having just visited the “Many Core Application Research Community (MARC)” booth in the Intel Labs pavilion at IDF2011, I was primed with questions for the professionals manning the Many Integrated Cores (MIC) project booth. I had previously learned that MIC was more about hardware direction and architectures, while MARC focused on parallel software development. Unfortunately, this wasn’t really the case.
My original quest was to learn more about Intel On-Chip System Fabric (IOSF), a recently announced System-on-Chip (SoC) interconnect technology. The architectures used to connect semiconductor IP on a single die are a critical factor for chip architects meeting ever more stringent power, performance and size requirements.
While I learned little new information about Intel’s IOSF architecture at IDF2011, I did find out about several other types of interconnection techniques that Intel uses to tie its processors (not necessarily SoCs) together on a single die.
At the Many Integrated Cores (MIC) booth, I met with a young computer science professional. She was running a simulation of particle impacts, part of a high-energy physics endeavor conducted at CERN openlab. (CERN is the European Organization for Nuclear Research.) The simulation showed where particles hit sensors on various impact planes. Using “Knights Ferry” – the codename for the first MIC platform – CERN scientists migrated existing C++ applications to the new architecture in just a few days.
Unfortunately, as a software engineer, the booth host knew very little about particle physics or the on-chip interconnect fabric used in the MIC project. She did know that each MIC processor runs at about 1 GHz – not the fastest clock speed, but impressive when many such processors are integrated together.

Figure 1: Demonstration of CERN’s particle impact measurements using Many Integrated Cores (MIC) at IDF 2011.
She highlighted this last point with a convincing demonstration of Intel’s latest Parallel Studio XE 2011, a software tool that parallelizes C/C++ code to optimize run-time performance (see Figure 2). For example, on a single core of a Xeon processor, the code runs in the traditional serialized way. If parallelism is possible, e.g., on a platform running 30 MIC processors (as in the demonstration), then the same code runs noticeably faster. The amazing thing is that these serial-versus-parallel decisions are made at run time, producing impressive performance results.

Figure 2: Using the Parallel Studio XE suite, C++ applications can be parallelized across several many-core platforms at run time.
A bit of research on the web revealed the relationship between the Many Integrated Cores (MIC) and MARC projects (see “IP Core Interconnect leads to MARC at IDF2011”). The MIC project grew out of three earlier research streams: the 80-core Tera-scale research chip program, the Single-chip Cloud Computer initiative (MARC), and the Larrabee many-core visual computing project. What ties all of these platforms together is that they use the same tools, compilers, libraries and IA technology – most recently embodied in the company’s Xeon processor product line.
I’m still no closer to understanding how the processors on a MIC platform are connected. I suspect that, since MIC is a processor-centric platform, it does not use Intel’s relatively new SoC-based IOSF interconnect architecture. My quest for answers continues.
This entry was posted in General and tagged CERN, Intel, Knights Corner, Many Integrated Cores, MARC, MIC, quantum physics, simulation. Bookmark the permalink.
View all posts by John Blyler