Test cost is an increasingly significant portion of device cost today, as device complexity and size continue to grow. Gigabit memory chips today, for example, are probably no more expensive than megabit chips were ten years ago; but the proportion of test cost to device cost has increased steadily. Overall, test time has grown with ever larger devices; and test time is practically synonymous with test cost today, as devices still rely heavily on premium ATE for all tests – from wafers to chips to modules.
The current approach of depending entirely on ATE for testing chips has created a complex device-test industry involving many different players and many highly specialized and sophisticated technologies. Test cost is a category of its own when a chip is produced. For most devices, the cost of outsourcing test tasks – and sharing a considerable portion of the profit with other businesses in the test industry – is inescapable under today's testing regimes. Keeping test time as short as possible is, apparently, the only available strategy for limiting test cost. However, even with the greatly improved ATPG techniques of the present day, 100% test coverage is still elusive for many chips, and simply cost-prohibitive for even more. For many chips, test cost must be tightly budgeted, and therefore test time must be tightly budgeted. The market value of a device is finite in most cases, and test cost is often the only cost with some flexibility for reduction. But for many chips, reducing test time also means reducing test quality.
If test time can be eliminated, test cost is practically eliminated as well. So what if test time could be decoupled from device size? What if test time – and hence test cost – could be a minuscule, fixed amount for any device size?
The various present BIST techniques have been worthy attempts to reduce test time by implementing test generation partially on-chip. But BIST technologies have been based largely on simple LFSR/NLFSR techniques, which are limited in capability. Hence, BIST still serves mostly as an assistive test-generation component: chips either implement BIST only for simpler blocks or for partial test generation, and the bulk of testing still requires high-premium ATE. Furthermore, even for simple chips that can be tested entirely with present-day BIST technologies, the test responses can still only be verified reliably with ATE.
The bulk of the test time required for chips today is for testing memory and logic – the parts of chips whose phenomenal growth Moore's Law so rightly predicted. Remove the test time for memory and logic, and the test time for many devices practically vanishes; the test time for everything else – such as analog blocks and pads – is practically negligible for most devices.
Removing test time for a device requires not only that the device generate test patterns on-chip, but also that it verify the test responses – or at least the bulk of them – on-chip.
Virtual Verification – as the term implies – is verifying test responses on-chip, virtually. If raw test responses can be processed on-chip, and a very short – even fixed-length – piece of data generated to represent the test result with a high degree of reliability, then Virtual Verification is achieved. The test result data would be immensely shorter than the raw test response data, making the time required to read it out practically negligible compared with reading the raw responses; thus, instead of requiring the premium performance of conventional premium-cost ATE, much less sophisticated and lower-cost apparatus can verify the test result just as well.
One practical way to achieve Virtual Verification is to apply a cryptographic-grade signature generation function on the device. With such a function, a very short signature of the entire test response stream can be generated on-chip, and the test result can be represented with a high degree of reliability by the resulting cryptographic-grade test result signature.
In 2004, Yen Liu patented a method that can use any of the present-day premier cryptographic hash functions to compute a signature of the entire test response stream on-chip; at the same time, the same hash function used to compute the test result signature also generates high-quality, virtually random test patterns on-chip for testing memory and logic.
Figure 1: AMBIST example
Cryptographic hash functions such as the SHS algorithms (FIPS PUB 180-3) – which are applied for signature generation in the DSS (FIPS PUB 186-3) – are the best hash functions available for signature generation. Yen Liu applied the SHS in his AMBIST IP to generate test patterns at a rate of under 100 clock cycles per block of either 512 or 1024 bits; at the same time, test responses are processed with the same hash function, at the same rate, to produce a 256-bit or 512-bit test result signature.
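One plausible software sketch of this dual use – emphatically not the patented AMBIST pipeline, just a generic hash-chain illustration with assumed names (`ambist_style_cycle`, the writer/reader callbacks) – chains SHA-256 outputs so each digest serves as the next pseudorandom pattern, while a second running hash folds the responses into the result signature:

```python
import hashlib

def ambist_style_cycle(seed, memory_writer, memory_reader, blocks=4):
    """Hash-chain sketch: each SHA-256 output is both the next
    pseudorandom 256-bit test pattern and the chaining input, while a
    second running hash folds responses into the result signature."""
    state = seed
    result = hashlib.sha256()
    for i in range(blocks):
        state = hashlib.sha256(state).digest()   # next 256-bit pattern
        memory_writer(i, state)                  # apply pattern
        response = memory_reader(i)              # capture response
        result.update(response)                  # fold into signature
    return result.digest()

# Toy fault-free "memory": reads back exactly what was written
mem = {}
sig = ambist_style_cycle(b"seed", mem.__setitem__, mem.__getitem__)

# A stuck-at-0 fault corrupts the responses and hence the signature
bad = ambist_style_cycle(b"seed", mem.__setitem__,
                         lambda i: b"\x00" * 32)
print(sig != bad)  # faulty responses yield a different signature
```

In hardware, one hash core plays both roles, which is why the pattern-generation and signature rates quoted above are the same.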
The AMBIST meets both requirements – generating test patterns entirely on-chip and virtually verifying all test responses on-chip – thus practically eliminating test time for memory and logic. The virtually random test patterns generated by the AMBIST, based on the SHS, are of far higher quality than other BIST methods based on simple LFSRs/NLFSRs can ever achieve; the probability that AMBIST-generated test patterns ever repeat is extremely low. With such qualities, AMBIST-generated test patterns are much more likely to achieve high coverage and good efficiency on-chip. The SHS-based test result signature is also of the highest quality present-day cryptology can offer: the probability that an erroneous set of test responses fails to produce an erroneous test result signature is extremely low.
The scan methodology now most commonly implemented still relies entirely on ATE for loading test vectors into chips and reading test responses out. For many chips, test time is a real cost and is therefore budgeted; even with highly tuned ATPG, chips are hard-pressed to achieve 100% coverage. The AMBIST can be coupled with the existing on-chip internal scan-chain method to test logic, and can run far more test patterns, much faster – at speed – with practically no ATE test time, or test cost, to worry about. It is quite possible for chips to achieve higher coverage running large numbers of AMBIST patterns than running a budgeted number of patterns scanned in from external ATE.
With the AMBIST on-chip, all dies on an entire wafer can start testing simultaneously – with simple initialization and power supplied to all chips. Testing is also easy at any stage: whether the same chips are at the wafer, package, or module stage, the AMBIST can perform tests without external equipment. It would even be easy to let end users start tests on AMBIST-enabled devices in their own systems.
AMBIST is naturally suited to testing memory. For each block of virtually random test pattern generated, the bit-inverted pattern of the same block can also be applied; every memory bit is thus tested with both logic-1 and logic-0, plus the transitions between them.
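The pattern/inverse pairing can be illustrated as follows – a generic sketch, not the patented generator, with `pattern_and_inverse` as an assumed name. Because each bit that is 0 in a pattern is 1 in its inverse, the pair together drives every cell to both values:

```python
import hashlib

def pattern_and_inverse(seed, n_blocks):
    """Yield each pseudorandom 256-bit pattern followed by its bitwise
    inverse, so every memory cell sees logic-1, logic-0, and a
    transition between them."""
    state = seed
    for _ in range(n_blocks):
        state = hashlib.sha256(state).digest()
        yield state
        yield bytes(b ^ 0xFF for b in state)

pats = list(pattern_and_inverse(b"seed", 2))
# Each pattern/inverse pair together covers every bit with both values
covered = all(p | q == 0xFF
              for pat, inv in zip(pats[::2], pats[1::2])
              for p, q in zip(pat, inv))
print(covered)  # True
```

Inverting a pattern costs essentially nothing in hardware, so full 0/1 coverage of every cell comes for free on top of the random patterns.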
A small sample of the AMBIST-generated test patterns has recently been implemented in Memtest86+CTP – a modified version of the popular software memory testing utility Memtest86+, released under the GNU General Public License. Field tests of this sample of AMBIST patterns in Memtest86+CTP indicate that the AMBIST test is at least as effective on memory as the original Memtest86+ tests.
With the ability to fully test memory and logic on-chip, and to virtually verify all test responses on-chip by generating a test result signature, the AMBIST can be a practical solution for eliminating test time for many chips. In addition, external test access into chips can be greatly reduced with AMBIST, enhancing security and saving the area of the test I/O and logic that older, ATE-centered test methodologies would otherwise require.
More than just test!
For all the potential and cost savings offered by the on-chip AMBIST, there is no doubt that some area overhead is still required to implement the feature. Even though the overhead is not really very much considering the typical size of modern chips – easily millions of gates – and even though it can be reduced further by trading off performance, dedicating a piece of chip real estate purely to testing may still seem excessive to some purists. There is good news for those purists, however: the bulk of the chip real estate occupied by the AMBIST has further uses.
SEA is a cryptographic method patented by Yen Liu in 2004. The method utilizes premier cryptographic hash functions – with their many desirable security characteristics – to generate an integral cipher-and-signature pair with an intrinsic one-time-signature-cipher (OTSC) characteristic. The method therefore not only secures transmitted data: the data can also be verified and authenticated, with intrinsic defense against many types of attack – such as replay attacks or code grabbing. All of these capabilities are accomplished at the same time with a single solution, comprehensively satisfying the most essential information and communication security requirements.
The SEA and the AMBIST share a common element: the cryptographic hash function, which constitutes the bulk of the hardware requirement for both. There is no significant area increase in making an on-chip AMBIST block also function as an accelerator or processor for the SEA. The fully autonomous on-chip self-test-and-verification IP block can therefore also serve as efficient cryptographic hardware with virtually no area penalty. The SEA function, coupled with the same SHS hash hardware in the AMBIST, can process each 256-bit or 512-bit block of a message in under 100 clock cycles, producing both the cipher and the signature at the same time. Hence, the combined AMBIST/SEA IP block can be used not just in test mode but in normal operating mode as well – as a very high-performance and efficient cryptographic engine.
With this additional crypto capability, the AMBIST/SEA IP block supports even more innovative uses than the presently common practices of chip testing and data security. The IP block can readily function as a Chip Authentication Technology (CAT), using the highly secure and efficient authentication capability provided by the SEA. Implemented in chips, the CAT can be a viable countermeasure against the counterfeiting of electronic devices.
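The authentication idea behind such a scheme can be illustrated with a generic challenge-response exchange – this is a standard keyed-hash (HMAC) sketch, not the SEA/CAT protocol itself, and names like `chip_respond` are illustrative. A chip holding a secret key answers a fresh random challenge; a counterfeit without the key cannot produce the right response, and fresh challenges defeat replay:

```python
import hashlib
import hmac
import os

CHIP_KEY = os.urandom(32)  # secret provisioned into the genuine chip

def chip_respond(key, challenge):
    """The chip proves possession of its key by keyed-hashing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(expected_key, challenge, response):
    """The verifier recomputes the expected response and compares in
    constant time."""
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                        # fresh nonce per session
genuine = chip_respond(CHIP_KEY, challenge)
fake = chip_respond(os.urandom(32), challenge)    # counterfeit, wrong key
print(verifier_check(CHIP_KEY, challenge, genuine))  # True
print(verifier_check(CHIP_KEY, challenge, fake))     # False
```

Since the same on-chip hash hardware performs the keyed hashing, authentication comes at essentially no extra area beyond the AMBIST/SEA block itself.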
MAOW is an IP combining the features of the AMBIST, the SEA crypto engine, and the CAT. MAOW is presently offered by YFL Elite – a company founded by Yen Liu, the inventor of the AMBIST and SEA technologies.
Yen Liu founded YFL Elite in 2009 and is presently CEO and Chief Design Engineer at the company. He has over ten years' experience in the semiconductor industry and holds several patents in semiconductor testing and cryptographic security.