Unlike today’s inductive charging systems, radio-frequency energy harvesters provide ultra-low power over greater distances. And new feedback techniques promise greater energy efficiency.
Imagine pulling energy out of thin air, at a significant distance from its source. This is one of the energy-harvesting technologies being studied at Imec.
Radio-frequency (RF) energy harvesters differ from typical induction charging systems, such as the popular “power mats” and electric toothbrushes, which require a nearby power source. In those systems, the power source must be in close proximity (usually within millimeters) to ensure an efficient transfer of power.
In contrast, RF energy harvesters work over a much larger distance—from about 10 to 15 meters—and at much higher frequencies. (Typically, they are in the GSM mobile-radio band.) The key to successful RF power transfer lies in the design of the antenna system, according to Jan Van der Kam, program director for the Sensors and Energy Harvester Group at Imec.
“We are looking at antennas that direct the RF energy much more efficiently instead of spreading it in all directions,” Van der Kam said at the recent Imec Technology Forum (ITF) held in Leuven, Belgium. Imec is not only designing more efficient antennas, but also developing a feedback mechanism for the energy receptors. This feedback approach should improve the efficiency of the energy transfer at even greater distances.
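To get a feel for why antenna directivity matters so much, the standard Friis transmission equation can be used to estimate received power in free space. The sketch below uses illustrative numbers (the transmit power, antenna gains, and 900 MHz GSM-band frequency are my own assumptions, not Imec’s figures):

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Free-space received power (dBm) via the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    # Free-space path loss in dB: 20 * log10(4 * pi * d / wavelength)
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

# Illustrative case: a 30 dBm (1 W) source in the 900 MHz GSM band,
# received at 10 m.
omni = friis_received_power_dbm(30, 0, 0, 900e6, 10)        # isotropic antennas
directive = friis_received_power_dbm(30, 12, 6, 900e6, 10)  # directive antennas
print(f"omnidirectional: {omni:.1f} dBm, directive: {directive:.1f} dBm")
```

Note that antenna gain adds dB-for-dB at both ends of the link, which is why directing the beam instead of radiating in all directions pays off directly in harvested power.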
Application areas for this technology would include everything from smart buildings (where the energy could be generated locally and transferred without wires) to recharging a hearing-aid battery. As a proof of concept, Imec acquired a small weather station and successfully powered it at a distance using RF energy harvesters.
Energy harvesting, or scavenging, has become a lucrative topic of late and a source of new intellectual-property (IP) patents. Combined with the added functionality and performance requirements of today’s mobile devices, the slow improvement in battery technology (estimated at about 3% to 5% per year) has created the need for two things: architectural changes to improve system-on-a-chip (SoC) efficiency, and new ways to generate energy. The architectural changes can be difficult and expensive to implement. While voltage islands and dark silicon are well discussed, companies are just beginning to work with more exotic approaches like near-threshold computing. Being able to generate energy could postpone the need to move to those techniques while saving costs.
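The scale of that gap can be sketched with rough compounding arithmetic. The 4%-per-year battery figure is taken from the estimate above; the two-year transistor-doubling rate is my own Moore’s-law-style assumption:

```python
# Rough compounding sketch (illustrative rates, not measured data):
# battery energy density improving ~4% per year versus a transistor
# budget that doubles roughly every two years.
years = 10
battery_gain = 1.04 ** years          # roughly 1.5x over a decade
transistor_gain = 2 ** (years / 2)    # 32x over the same decade
print(f"battery: {battery_gain:.2f}x, transistors: {transistor_gain:.0f}x")
```

Even with generous assumptions, batteries improve by about half over a decade while transistor budgets grow thirty-fold, which is why designers keep looking for energy beyond the battery.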
Most approaches focus on movement or light to generate that energy. But pulling energy out of thin air isn’t a new concept. At the turn of the 20th century, Nikola Tesla proposed using wireless power and even set up a laboratory in Wardenclyffe, N.Y., to turn that idea into reality. More than a century later, his idea appears to have some merit.
As chip design moves into the realm of three-dimensional transistor structures and even MEMS, virtual-reality simulators may prove a necessity for both architects and educators.
They say that travel broadens the mind. That has certainly been the case with my recent visits to some of the leading semiconductor and electronics tool companies and research organizations in Europe, including ASML, Dassault Systemes, and Imec. Each of these entities offers technologies that are pertinent to the IP community, which I’ll cover in the coming weeks.
For now, let me whet your interest with a short video clip from Dassault Systemes’s virtual-reality development and deployment system, which is called 3DVIA. Think of it as a super-fast and detailed simulation program expanded into three dimensions.
Such a system might seem like overkill for the world of chip design. After all, EDA-IP tool providers already offer sophisticated modeling and simulation tools for every aspect of chip design – from virtual software platforms to hardware-based prototyping. Still, as the chip community moves into an era of 3D structures, through-silicon vias (TSVs), stacked die, and microelectromechanical-systems (MEMS) devices, the potential benefits of virtual-reality (VR) simulation become more tangible. Such VR simulations could be used to visualize the effects of evolving transistor structures, such as FinFETs, or to improve the modeling of thermal flows around stacked die. Plus, 3D and virtual-reality models have proven invaluable as teaching aids to both novice and seasoned designers across a wide range of engineering disciplines.
Translating the complex interactions of today’s systems-on-a-chip (SoCs) into a virtual-reality program requires serious processing and graphics hardware. Each 3D display in the 3DVIA system requires at least one server and a GPU cluster. Fortunately, advances in server and GPU technology have made such systems commercially available.
Like the EDA industry itself, chip design must become part of a larger system-design process – spanning both disciplines (EE, CS, ME) and domains (chips, boards, modules). The move toward a system view necessitates additional simulation and modeling tools. Virtual reality may have a strong role to play in this evolving world.
The latest results from the annual CDT survey point to changes in the reasons behind ASIC prototyping – from hardware, software, and systems to IP.
This year’s Chip Design Trends (CDT) “ASIC/ASSP FPGA-based Prototyping” (2012) survey reinforced past trends while providing a few surprises. The survey yielded much data, so let’s start with a high-level overview.
In 2012, hardware-software co-design and co-verification were again the number-one reason for ASIC designers to use FPGA-based prototypes (see Figure 1). Not surprisingly, hardware chip verification was the second leading driver, followed by software and then system verification.
A surprise came when designers were asked about future planned projects. All of the current motivators were still present, but respondents indicated that software development would fall behind IP development and verification in importance. This probably means that IP development and verification has proven to be a sore spot for today’s designers.
How do these trends for 2012 compare with years past? Hardware-software co-design and co-verification remains the biggest reason for the FPGA prototyping of ASICs, followed by hardware-chip verification (see Figure 2). In 2012, software development continued to climb as an important driver while system-integration issues fell. IP development and verification shows mixed results, suggesting that this factor requires further investigation. I’ll try to cross-correlate the IP trend with other data in a future article.
Was fate the cause of a delayed Russian space launch that enabled SpaceX’s first cargo mission? Or was a contaminated electronics supply chain to blame?
One of the recurring challenges for chip-design companies is the cost of semiconductor intellectual-property (IP) theft and related patent litigation. In addition to huge economic expenses, these challenges often result in unpredictable opportunity costs for end-user markets further up the supply chain. Here’s but one example of this latter cost:
“A glitch with a Russian Soyuz TMA-06M spacecraft has helped clear the way for a private capsule’s first contracted cargo flight to the International Space Station early next month, NASA officials say.”
Counterfeit chips are a growing problem in the design of electronic systems. Such counterfeits are part of the shadow supply chain. Earlier this year, one case involved $16 million in counterfeit chips from China and Hong Kong that were sold under major brand names to almost all segments of the U.S. electronics industry – from mission-critical military and medical systems to consumer goods.
In February 2012, I covered the likelihood that counterfeit parts caused the ultimate loss of the Russian Phobos-Grunt unmanned space mission. Today’s story about a technical “glitch” that delayed a manned Soyuz Russian space launch made me wonder about the cause of the problem. Could it once again be counterfeit electronic chips – perhaps even another faulty memory chip, as was suspected in the earlier failed mission?

Shadow Supply Chain Demands System-Level Verification
To be fair, I note that these two incidents involve a different spacecraft and mission: Russia’s Phobos-Grunt unmanned space mission vs. the Soyuz TMA-06M manned spacecraft. Still, the parts supply chain for all of Russia’s space activities is probably the same. The critical quote in today’s news story does little to clarify the source of the recent problems: “But the Soyuz’s liftoff will be delayed by about a week while technicians install a replacement part to fix a technical issue, Russian space officials announced Sunday.”
Has the earlier contamination of the semiconductor supply chain again affected the Russian space program? It’s just too early to be sure. But a contaminated supply chain may take many years to clean up.
On a personal note, the launch vehicle for the SpaceX program reminds me of my earlier engineering career at Rocketdyne (now part of Boeing). The primary structure of the Falcon’s first stage is a blend of legacy Atlas and Delta rockets – tried and true workhorses of the US space program now long gone. Further, most of SpaceX’s Falcon launch vehicles will use historic Titan rocket launchpads at Vandenberg Air Force Base (VAFB), among other locations. My father worked on Titan rockets as a Launch Engineer for many of the flights in the late ’50s and early ’60s. I can even remember watching a few launches from Vandenberg when I was very young. http://www.vandenberg.af.mil/
John and Sean talk about the Hot Chips show, the decline of Silicon IPOs, Wright vs. Moore’s Law, and spaceports in the desert.
Industry Trends and Experts
A wide range of processor types – from datacenter servers to smartphones – should enable the accelerated growth of software applications for Intel-based devices.
Once again, the opening keynote at the Intel Developer’s Forum (IDF) was a visually dazzling event. But something was missing. To understand what, you need to compare this year’s event with the previous one.
Last year – at IDF 2011 – Intel CEO Paul Otellini talked about the ongoing transformations in transistor technology. Mainly, he focused on the growing consumer market for embedded products, where transformations have been driven by the ever-increasing availability of transistors and by device improvements, such as 3D structures and ever-shrinking process geometries.
This year – at IDF 2012 – Intel’s Architecture Group VP and GM, David “Dadi” Perlmutter, explained how computing is shaping the future across the spectrum, from datacenter cloud computing to mobile devices. He showcased Intel’s ongoing efforts with developers to create applications, from the cloud to intelligent systems, that would “touch everyone on Mother Earth.” Connecting global users in this way requires a wide spectrum of processor technology, from the mobile-based Medfield “Atom” (millions of transistors) to the server-grade Xeon (billions of transistors).
Today, both of these devices are in production. Medfield-based smartphones are available in Asia and Europe. Xeon E5 servers are found in many of today’s datacenters. Interestingly, during the post-keynote “question and answer” session, Perlmutter emphasized that the Xeon E5 wasn’t intended as a replacement for Intel’s high-performance-computing (HPC) Itanium processor.
A common thread between IDF 2011 and IDF 2012 is the Ultrabook. These very thin, low-power laptops are powered by Intel’s Core processors. One of the more impressive demonstrations benchmarked the third-generation Core processors (22 nm), or Ivy Bridge, against the upcoming fourth-generation Core processors (also 22 nm), based on the Haswell microarchitecture (see Figure).
One device missing from this year’s event was the Claremont, an experimental prototype processor. This Near Threshold Voltage (NTV) processor uses a novel, ultra-low-voltage circuit powered by a postage-stamp-sized solar cell. The Claremont was demonstrated during the 2011 keynote. This class of processor operates close to the transistor’s turn-on or threshold voltage – hence the NTV name.
Several weeks ago, in mid-August 2012, Intel Labs presented an update of a Claremont-based processor prototype at the Hot Chips forum. The speaker talked about the energy benefits of NTV computing using Intel’s IA-32, 32-nm CMOS processor technology. An important goal for the Claremont prototype was to extend the processor’s dynamic performance – from NTV to higher, more common computing voltages (as in the smartphone-based Medfield) while maintaining energy efficiency.
This year’s keynote theme was about the wide range of products – from smartphones to datacenter servers – being connected by a spiral of software. Developers were encouraged to make a difference to the world by creating useful products based on this range of technology.
What do these tongue-twisting technical phrases have in common? They were all part of the morning session on the last day of the Hot Chips forum.
The catch-all title of “Technology and Scalability” was appropriate for the morning session of the last day at the Hot Chips forum. Michael Parker from Altera began the session by highlighting advances in the floating-point accuracy of field-programmable gate array (FPGA) devices. FPGAs are inherently better at fixed-point calculations, in part due to their routing architecture. Accurate floating-point calculations, by contrast, depend on multiplier density, given the extensive use of adders, multipliers, and trigonometric functions. Often, these functions are pulled from libraries, resulting in an inefficient multiplier implementation.
According to Parker, Altera took a different approach by using a new floating-point fused datapath implementation instead of the existing IEEE-based method. The datapath approach removes the typical normalization and de-normalization steps required in the multiplier-based IEEE representation.
However, the datapath approach achieves this high floating-point accuracy only on smaller matrix functions (like FFTs), where GFLOPS-per-watt efficiency and low latency – thanks to sufficient on-chip memory – are the primary requirements.
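As a rough illustration of why skipping per-operation normalization and rounding helps accuracy (an analogy only, not Altera’s actual fused-datapath hardware), the sketch below emulates IEEE binary32 arithmetic in Python and compares rounding after every addition against keeping a wider intermediate and rounding only once at the end:

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

term = f32(0.1)  # 0.1 as stored in binary32 (slightly above 0.1)

# Discrete IEEE-style pipeline: round to binary32 after every addition.
stepwise = f32(0.0)
for _ in range(100_000):
    stepwise = f32(stepwise + term)

# Fused-datapath analogy: keep a wider intermediate and round once at
# the end (double precision stands in for the wider internal datapath).
wide = 0.0
for _ in range(100_000):
    wide += term
fused = f32(wide)

exact = 100_000 * term  # reference sum carried in double precision
print(abs(stepwise - exact), abs(fused - exact))
```

The per-addition rounding errors accumulate across the whole chain, while the wide-intermediate version pays only a single final rounding cost – the same intuition behind removing intermediate normalization steps from a fused datapath.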
Next up was Gregory Ruhl, who shared Intel Lab’s efforts to develop a Claremont-based processor prototype. He talked about the energy benefits of Near Threshold Voltage (NTV) computing using Intel’s IA-32, 32-nm CMOS processor technology.
Readers may remember the NTV processor (code-named “Claremont”) from last year’s Intel Developer Forum. My tweet from that show referenced a solar-powered demonstration in which the Claremont ran a short video clip of a playful kitten.
The Claremont relies on an ultra-low-voltage circuit to greatly reduce energy consumption. This class of processor operates close to the transistor’s turn-on or threshold voltage – hence the NTV name. Threshold voltages vary with transistor type. Typically, though, they are low enough to be powered by a postage-stamp-sized solar cell.
The other goal for the Claremont prototype was to extend the processor’s dynamic performance – from NTV to higher, more common computing voltages – while maintaining energy efficiency.
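The energy appeal of NTV operation follows from the roughly quadratic dependence of dynamic switching energy on supply voltage. A back-of-the-envelope sketch, using illustrative voltages rather than Intel’s published figures:

```python
# Dynamic switching energy per operation scales roughly with C * V^2,
# so cutting supply voltage from a nominal ~1.0 V to a near-threshold
# ~0.45 V cuts energy per operation by about 5x, at the cost of a much
# lower clock frequency. (The 1.0 V and 0.45 V values are assumptions
# for illustration only.)
def relative_energy_per_op(v_supply, v_nominal=1.0):
    """Dynamic energy per op relative to nominal: ~ (V / Vnom)^2."""
    return (v_supply / v_nominal) ** 2

nominal = relative_energy_per_op(1.0)   # baseline
ntv = relative_energy_per_op(0.45)      # near-threshold operation
print(f"NTV energy per op: {ntv:.2f}x of nominal")
```

That quadratic win is also why extending the dynamic range from NTV up to normal voltages matters: the chip can sprint at full voltage when needed and drop near threshold when energy efficiency dominates.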
Ruhl’s results showed that the technology works for ultra-low-power applications that require only modest performance – from SoCs and graphics to sensor hubs and many-core CPUs. Reliable NTV operation was achieved using unique, IA-based circuit-design techniques for logic and memories.
Further development is needed to create standard NTV circuit libraries for common, low-voltage CAD methodologies. Such NTV designs apparently require a re-characterized, constrained standard-cell library to achieve these low corner voltages.
Finishing the session on “Technology and Scalability” was a presentation by Robert Rogenmoser from SuVolta, a semiconductor company focused on reducing CMOS power consumption. Rogenmoser talked about ways to reduce transistor variability for low-power, high-performance chips.
Transistor variability at today’s lower process geometries comes from the typical sources of wafer yield variations and local transistor-to-transistor differences. Such variability has forced the semiconductor industry to look at new transistor technologies, especially for lower-power chips.
What is the solution? Rogenmoser discussed the pros and cons of three transistor alternatives: FinFET or TriGate transistors; fully depleted (FD) silicon-on-insulator (SOI); and deeply depleted channel (DDC) transistors (see Figure 3). FinFETs or TriGates promise high drive current, but face manufacturing, cost, and intellectual-property (IP) challenges. The latter point refers to the IP changes required to support the new 3D transistor-gate structures.
According to Rogenmoser, FD-SOI transistor technology enjoys the benefits of undoped channels, but it lacks multi-voltage capability and has a limited supply chain. DDC transistors, he argued, were the best solution. This process offers straightforward insertion into bulk planar CMOS – especially from 90 nm down to 20 nm and below. In terms of performance, DDC transistors are less variable, with tighter corners, and they require simpler manufacturing steps. Equally important was the ease of migrating existing IP to the DDC process, he explained.
Rogenmoser concluded by explaining how DDC technology can bring common low-power tools back to lower nodes (e.g., dynamic voltage and frequency scaling, body biasing, and low-voltage operation).
Next week: The Rest of the Story
This year’s Stanford event covers many-core to server-grade processors, integration issues, and everything in between.
Sponsored by the IEEE in cooperation with the ACM, the “Hot Chips” symposium is an annual event typically held on the Stanford University campus. Due to renovations at Stanford, this year’s event has been moved to the Flint Center for the Performing Arts in Cupertino.
Hot Chips focuses on the latest developments in the design of high-performance microprocessors and system-on-a-chip (SoC) devices, associated software, and systems.
Today’s (Tuesday, Aug. 28) events include two keynotes by the CTOs from AMD and Alcatel-Lucent and several sessions covering microprocessors, interconnects, many cores, GPUs, multimedia, and integration.
I’ll cover tomorrow’s (Wednesday, Aug. 29) events, which include a keynote by Pat Gelsinger, COO at EMC, titled “Cloud Transforms IT, Big Data Transforms Business.”
See you there!
This “Dog Days of Summer” technology list is not to be read indoors!
It’s the last of the “Dog Days of Summer” – down Sirius, down! I’m not referring to the satellite-based radio station or even Apple’s Siri voice-recognition system – but you can ask Siri for help.
Anyone reading this blog should really be outside enjoying the last of summer before schools start and people in the Northern Hemisphere return earnestly to work.
Are you still here, trying to figure out where I’m going with this story? I’m going nowhere but outside. You should do the same. So grab your favorite tablet, find a shady tree to sit under, and select one of these fascinating stories for your reading pleasure. I’ll be doing the same. TGIF!
Suggested readings (in no particular order):