
Thursday, April 12, 2012

Beautiful Tamil quotes

Victory is something you gain;
failure is something you learn from.
Let us first learn, and then we will gain.


If you do not like life, end it.
But,
if you have the courage to end your own life,
try living instead.


We who desire victory do not have the heart to bear defeat.
If we had the heart to bear defeat, that itself would be a victory.


Wiping away a tear is not friendship;
keeping the next tear from falling is true friendship.


You cannot love everyone you see often;
you cannot often see everyone you love.


The words you speak are understood by everyone.
But,
the silence you keep is understood only by those who love you.


There is no person without faults;
one who does not know how to lessen them is no person at all.
- Buddha.


Do not strive to live happily;
strive to live peacefully,
and your whole life will be happy.


Where self-praise ends,
dignity begins.


One who dares nothing
can expect nothing.


It is wrong enough to strike one who has grown above your head;
to cut it down with an axe is worse.
-- Yours truly, the tree.



As your eyelids flutter,
watching for your beloved's arrival,
your eyes refuse to close;
your heart opens
and your spirit sinks;
your lips run dry;
the night flowers you shook loose
lie crushed beneath your feet,
and your welcome melts away.


Wednesday, March 31, 2010

VLSI DESIGN:
Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have grown in complexity to billions of transistors.
The first semiconductor chips held two transistors each. Subsequent advances added more and more transistors, and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. This stage, now known retrospectively as small-scale integration (SSI), was followed by improvements in technique that led to devices with hundreds of logic gates, known as medium-scale integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and billions of individual transistors.
At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use. Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are VLSI or better.
As of early 2008, billion-transistor processors are commercially available, an example of which is Intel's Montecito Itanium chip. This is expected to become more commonplace as semiconductor fabrication moves from the current generation of 65 nm processes to the next 45 nm generations (while experiencing new challenges such as increased variation across process corners). Another notable example is Nvidia's 280 series GPU. This GPU is notable in that almost all of its 1.4 billion transistors are used for logic, in contrast to the Itanium, whose large transistor count is largely due to its 24 MB L3 cache. Current designs, as opposed to the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks like the SRAM cell, however, are still designed by hand to ensure the highest efficiency (sometimes by bending or breaking established design rules to obtain the last bit of performance by trading stability).
CDAC:
Starting from its initial mission on building indigenous supercomputers, Centre For Development of Advanced Computing (C-DAC) has progressively grown to build an eco-system and institutional framework for innovation, technology development, skills, delivery plans, collaboration, partnership and market orientation in a number of niche areas of national importance and market relevance in ICT and Electronics.
Through in-house research, technology and product development efforts, and in collaboration with academia, research labs and industry in India and abroad, it endeavours to identify promising ideas, nurture them into competencies, and convert many of them into practical tools, technologies, products and services to meet the needs of SMEs and other industrial players in the country, intermediate players, and end-users in science and engineering, manufacturing and service sectors, government, health, development and strategic sectors.
Of special relevance to C-DAC are innovation and the development of solutions that impact the public at large in the Indian context, or those where technology and innovative approaches can make a big difference in cost or performance, offer new functionalities, or contribute to a better quality of life.
C-DAC’s focus has been on emerging as a leader in chosen enabling technology areas and work towards integration of these in end-to-end solutions in various verticals/domains including infrastructure. The latter is undertaken oftentimes by C-DAC itself but equally or more often in conjunction/collaboration with other public and private agencies through a consortium and partnership mode. Institutional innovation to support scaling up process of such efforts is also one of the priority objectives.
The focal areas in terms of enabling technologies as outlined above would be:
High Performance Computing & Grid Computing
Language Computing
Software Technologies with special reference to Free/Open Source Software
Professional Electronics including VLSI and Embedded Systems
Cyber Security
Health Informatics
Education & Training with special reference to Finishing School and areas of specialized skills
Run-length-based coding schemes have been very effective for test data compression in current-generation SoCs with a large number of IP cores. The first part of the paper presents a survey of the run-length-based codes. The compression of any partially specified test data depends upon how the unspecified bits are filled with 1s and 0s. In the second part of the paper, five different approaches for "don't care" bit filling, based on the nature of the runs, are proposed to predict the maximum compression based on entropy. The various run-length-based schemes are then compared with the maximum data compression limit set by these entropy bounds, and the actual compression figures claimed by the authors are also compared. For various ISCAS circuits, it is shown that when the X filling is done considering runs of zeros followed by a one as well as runs of ones followed by a zero (i.e., Extended FDR), it provides the maximum data compression. In the third part, it is shown that the average test power and peak power are minimum when the don't care bits are filled to make long runs of 0s as well as 1s.

1. Introduction
As a result of the emergence of new fabrication technologies and design complexities, standard stuck-at scan tests are no longer sufficient. The number of tests, and the corresponding data volume and test time, increase with each new fabrication process technology just to maintain test quality requirements.

Conventional external testing involves storing all test vectors and test responses on the ATE (automatic test equipment). But these testers have limited speed, memory, and I/O channels. Testing cannot proceed any faster than the time required to transfer the test data:

Test time ≥ (amount of test data on tester) / (number of tester channels × tester clock rate) [1].
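For a rough, hypothetical illustration: with 1 Gbit of test data, 8 tester channels, and a 50 MHz tester clock, test time ≥ 10^9 / (8 × 50 × 10^6) = 2.5 seconds per device, no matter how quickly the circuit itself could be tested.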
As a result, some companies are looking for compression well beyond 100X tester cycle reduction [2–4].

The paper is organized as follows. Section 2 describes the test data compression techniques and the qualities of a good technique. Section 3 presents existing run-length-based codes. Section 4 introduces the different methods of don't care bit filling for run-length-based codes. Section 5 introduces entropy. Sections 6 and 7 present the experimental results of test data compression and test power with different methods of X filling. Section 8 analyzes the nature of the test data on the basis of the various experimental results. Section 9 compares the actual data compression claimed in the literature for various methods with the maximum possible compression predicted on the basis of entropy. Finally, conclusions and future work are presented in Section 10.

2. Code-Based Data Compression Techniques
Test data compression involves adding some additional on-chip hardware before and after the scan chains. This additional hardware decompresses the test stimulus coming from the tester. This permits storing the test data in a compressed form on the tester. With test data compression, the tester still applies a precise deterministic (ATPG-generated) test set to the circuit under test (CUT).

The quantity of test data rapidly increases, while, at the same time, the inner nodes of dense SoCs become less accessible than the external pins. The testing problem is further exacerbated by the use of intellectual property (IP) cores, since their structure is often hidden from the system integrator. In such cases, no modifications can be applied to the cores or their scan chains, and neither automatic test pattern generation nor fault simulation tools can be used. Only precomputed test sets are provided by the core vendors, and these must be applied to the cores during testing. In this context, code-based test data compression techniques seem more attractive. One more advantage is that by generating difference patterns and reordering test patterns, higher compression can be achieved in some cases.

Code-based schemes use data compression codes to encode the test cubes. This involves partitioning the original data into symbols, and then replacing each symbol with a code word to form the compressed data. To perform decompression, a decoder simply converts each code word in the compressed data back into the corresponding symbol.

A few important quality factors [5] to be considered for any compression technique are as follows.

(i) The amount of compression possible.
(ii) The area overhead of the decoding architecture.
(iii) The reduction in test time. Transferring compressed test vectors takes less time than transferring the full vectors at a given bandwidth. However, in order to guarantee a reduction in the overall test time, the decompression process should not add much delay.
(iv) The scalability of compression with various design sizes, scan channels, and design types.
(v) The test compression method should effectively use don't cares for compression as well as power reduction.
(vi) The robustness in the presence of X states (can the design maintain compression while handling X states without losing coverage?).
(vii) The ability to perform diagnostics of failures when applying compressed patterns.
(viii) The type of decoder: data-independent or data-dependent.

Over the years, researchers have developed a large number of variants of these schemes.

3. Run-Length-Based Codes
The first data compression codes that researchers investigated for compressing scan vectors were codes that encode runs of repeated values.

3.1. Simple-Run-Length Code
In Table 1, a variable number of bits is encoded by a fixed number of bits. Jas and Touba [6] used this scheme to encode runs of 0s. To increase the prevalence of runs of 0s, the scheme uses a cyclical scan architecture to allow the application of difference vectors, where the difference vector between successive test cubes is their bitwise XOR. Careful ordering of the test cubes maximizes the number of 0s in the difference vectors, thereby improving the effectiveness of run-length coding.
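As a rough illustration (not the authors' implementation), the Python sketch below encodes a fully specified difference vector with a fixed 3-bit run-length code in the spirit of Table 1. The exact convention is an assumption: runs of 0s of length 0 to 6 terminated by a '1' map to the 3-bit binary value of the run length, and seven consecutive 0s with no terminating '1' map to '111'.

# Illustrative sketch only -- assumed variant of a fixed 3-bit run-length code
# (Table 1 style): runs of 0s of length 0..6 ending in '1' -> run length in 3 bits;
# seven consecutive 0s with no terminating '1' -> '111'.
def rle3_encode(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
            if run == 7:             # full block of zeros, no terminating '1'
                out.append('111')
                run = 0
        else:                        # a '1' terminates the current run
            out.append(format(run, '03b'))
            run = 0
    if run:                          # trailing zeros (assumed coded as a short run)
        out.append(format(run, '03b'))
    return ''.join(out)

# Difference vector of two test cubes (bitwise XOR), as used in [6]:
def diff_vector(prev: str, curr: str) -> str:
    return ''.join('1' if a != b else '0' for a, b in zip(prev, curr))

print(rle3_encode('00010000001'))    # runs of 3 and 6 -> '011' '110'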

Table 1: 3-bit run-length code [6].

3.2. Golomb Codes
As shown in Table 2, Chandra and Chakrabarty [7, 9] and Li and Chakrabarty [8] proposed a technique based on Golomb codes that encodes runs of 0s with variable-length code words. The code words are divided into groups of equal size m (where m is a power of 2). The k-th group is assigned a group prefix of (k - 1) 1s followed by a 0, and since each group contains m uniquely identifiable symbols, the final code word consists of a group prefix and a log2(m)-bit tail which identifies the member within the group. The use of variable-length code words allows for efficient encoding of longer runs, although it requires a synchronization mechanism between the tester and the chip. This scheme is applied to difference vectors derived in the same way as in [6].

Table 2: Golomb code for group size m = 4 [7]. The encoded sequence corresponding to 001 0000001 001 is 010 1010 010.
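A minimal Python sketch of the Golomb encoding described above, assuming group size m = 4 as in Table 2: the quotient of the run length by m is sent in unary terminated by a '0', and the remainder forms a log2(m)-bit tail. It reproduces the worked sequence given with the table.

# Illustrative sketch of Golomb coding of runs of 0s (group size m, m a power of 2).
from math import log2

def golomb_codeword(run_len: int, m: int = 4) -> str:
    q, r = divmod(run_len, m)
    prefix = '1' * q + '0'                   # group prefix: q ones, then a zero
    tail = format(r, f'0{int(log2(m))}b')    # log2(m)-bit tail identifies the member
    return prefix + tail

def golomb_encode(bits: str, m: int = 4) -> str:
    out, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
        else:                                # a '1' terminates the run of 0s
            out.append(golomb_codeword(run, m))
            run = 0
    return ' '.join(out)

print(golomb_encode('001' '0000001' '001'))  # -> '010 1010 010', matching the Table 2 example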

3.3. Frequency-Directed Run-Length Codes
Original test data: 001 1 1 1 1 00001 1 1 1 1 1
Run lengths (runs of 0s terminated by a 1): 2, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0
Encoded test data: 1000 00 00 00 00 1010 00 00 00 00 00

In 2001, Chandra and Chakrabarty [10, 11] proposed a new scheme based on the observation that the frequency of runs of 0s with run length less than 20 is high, and that even within the range 0 to 20, the frequency of runs of length l decreases rapidly with increasing l. Test data compression can therefore be more efficient if runs of 0s with shorter run lengths are mapped to shorter code words. This further optimization is achieved using frequency-directed run-length (FDR) codes. FDR is similar to the Golomb code, but the group size is variable: the k-th group contains 2^k members and covers run lengths 2^k - 2 through 2^(k+1) - 3. Table 3 lists the code words for different run lengths, and the example above shows FDR encoding of a sample data stream.
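A small Python sketch of FDR codeword construction under the grouping just described (prefix of (k - 1) ones followed by a zero, then a k-bit offset within group A_k); it reproduces the codewords used in the example above.

# Illustrative sketch of FDR codeword generation for a run of 0s of a given length.
def fdr_codeword(run_len: int) -> str:
    k = 1
    while run_len > 2 ** (k + 1) - 3:        # find group A_k: 2^k - 2 <= run_len <= 2^(k+1) - 3
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(run_len - (2 ** k - 2), f'0{k}b')   # k-bit offset within the group
    return prefix + tail

for r in (0, 1, 2, 4, 6):
    print(r, fdr_codeword(r))    # 0->'00', 1->'01', 2->'1000', 4->'1010', 6->'110000'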

Table 3: Frequency-directed run-length code [10].

3.4. Extended FDR
The FDR code is very efficient for compressing data that has few 1s and long runs of 0s, but inefficient for data streams composed of both runs of 0s and runs of 1s. Generally, test vectors contain 0s and 1s in groups, that is, a run of 1s followed by a run of 0s and vice versa. El-Maleh and Al-Abaji proposed an extension of FDR (EFDR) [12]. Here, a run of 0s followed by the bit "1" and a run of 1s followed by the bit "0" are coded the same way as in FDR, but with an extra bit added at the beginning of the FDR code word to indicate the run type. Code words for this method are shown in Table 4.
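A sketch of how an EFDR codeword might be formed from the FDR codeword: a run-type bit followed by the ordinary FDR codeword. The polarity of the type bit chosen here is an assumption for illustration; Table 4 fixes the actual assignment.

# Illustrative sketch only: an EFDR codeword as run-type bit + FDR codeword.
# Type-bit polarity ('0' = run of 0s, '1' = run of 1s) is assumed, not taken from Table 4.
def efdr_codeword(run_len: int, run_of_ones: bool) -> str:
    k = 1
    while run_len > 2 ** (k + 1) - 3:                   # locate FDR group A_k
        k += 1
    fdr = '1' * (k - 1) + '0' + format(run_len - (2 ** k - 2), f'0{k}b')
    return ('1' if run_of_ones else '0') + fdr

print(efdr_codeword(4, run_of_ones=False))   # '0' + '1010' -> '01010'
print(efdr_codeword(3, run_of_ones=True))    # '1' + '1001' -> '11001'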

Table 4: Extended FDR code [12].

3.5. Alternating Run-Length Code
Generally, the test set is composed of alternating runs of zeros and runs of ones. The alternating run-length code is also a variable-to-variable-length code. An additional parameter associated with this code is the alternating binary variable a. The encoding produced by the alternating run-length code for a given run length depends on the value of a. If a = 0, the run length is treated as a run of 0s; if a = 1, it is treated as a run of 1s. Note that the values of a for the different runs are not added to the encoded data stream. The value of a is inverted after each run is encoded, so it keeps alternating between 0 and 1. The default initial value is a = 0, that is, the input data stream is assumed to start with a run of 0s. The following example shows the encoded data obtained using this code for a data stream composed of interleaved runs of 0s and 1s [13].

Original test data: 001 11110 0001 111110
Run lengths: 2, 4, 3, 5
Value of a:  0, 1, 0, 1
Encoded test data: 1000 1010 1001 1011
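A sketch of the alternating parsing in Python (illustrative only): a starts at 0 and flips after every run, so the run-type values never appear in the output, and each run length is coded with an ordinary FDR codeword, matching the example above.

# Illustrative sketch of alternating run-length encoding: runs of 0s ending in '1'
# alternate with runs of 1s ending in '0'; the alternation variable a stays implicit.
def fdr_codeword(run_len: int) -> str:          # same helper as in the FDR sketch above
    k = 1
    while run_len > 2 ** (k + 1) - 3:
        k += 1
    return '1' * (k - 1) + '0' + format(run_len - (2 ** k - 2), f'0{k}b')

def alternating_encode(bits: str) -> str:
    out, i, a = [], 0, '0'                      # a = expected run value, starts at 0
    while i < len(bits):
        run = 0
        while i < len(bits) and bits[i] == a:   # count bits equal to a
            run += 1
            i += 1
        i += 1                                  # skip the follower bit (complement of a)
        out.append(fdr_codeword(run))           # run length coded as an FDR codeword
        a = '1' if a == '0' else '0'            # a flips after every encoded run
    return ' '.join(out)

print(alternating_encode('001' '11110' '0001' '111110'))   # -> '1000 1010 1001 1011'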
3.6. Shifted Alternating FDR Code (SAFDR)
One more scheme for alternating runs of 1s and 0s has been proposed by Hellebrand and Würtenberger [14]. Each symbol is made up of only 1s or only 0s. The first bit of the encoded data indicates the type of the first run (0 or 1), and each subsequent code word gives the run length of the alternating runs. Since, in this alternating case, a run length of 0 never occurs, the code word for run length 0 is assigned to run length 1; in this way, each code word is shifted one position higher. This helps in achieving higher compression compared to alternating FDR. The following example shows the coding for alternating as well as shifted alternating FDR.

Original test data: 00 11111 0000 111111
Run lengths: 2, 5, 4, 6
Encoded test data (alternating FDR): 0 1000 1011 1010 110000
Encoded test data (shifted alternating FDR): 0 01 1010 1001 1011

3.7. Variable-Length Input Huffman Code (VIHC)
In order to decompress an FDR code, the on-chip decoder has to identify the group prefix as well as the tail. Because the FDR code does not have a fixed group size as Golomb codes do, the decoder has to detect the length of the prefix in order to decode the tail, so the FDR code requires a more complicated decoder with higher area overhead. A mix of Huffman and FDR coding was therefore proposed which, instead of using only fixed-length patterns, uses variable-length patterns as input to the Huffman algorithm (VIHC) [15, 16]. Here, the compression ratio is retained because of FDR and the area overhead is reduced using selective Huffman coding.

3.8. Split VIHC
The Split VIHC [16, 17] approach demonstrates that if, before applying VIHC, the test file is divided into two or more equal parts and the vectors are reordered in a specific way, the compression ratio can be improved further.

3.9. Modified Frequency-Directed Run-Length Code (MFDR)
One more scheme, based on the probability of 0s and on FDR, is MFDR (Modified Frequency-Directed Run-length) [18]. In this scheme, the groups of FDR are modified in such a way that the code gives a better compression ratio than FDR if the probability of 0s in the test set is greater than 0.8565. Table 5 presents the code words for this method.

Table 5: Modified FDR code [18].

3.10. Selective Relaxation of Bits with FDR Code
In 2003, Kajihara et al. [19] proposed a scheme based on selectively relaxing some of the bits of a test vector before encoding it using the FDR or Golomb code. By changing a specified bit with value 1 to a don't care, two consecutive runs of 0s in the test sequence can be concatenated into a longer run of 0s, thereby facilitating run-length coding. This procedure retains the fault coverage of the test set. Since the increase in compression depends on the lengths of the two runs that are concatenated with each bit relaxation, a lookup table, referred to as the gain table, is precomputed and used during the test set relaxation procedure to maximize the likelihood of increasing the amount of test data compression. The gain table is used to pinpoint the bit positions with value 1 which, when relaxed to don't cares, will yield the maximum compression.

3.11. Data-Independent Pattern Run-Length Code
Ruan and Katti [20] have proposed data-independent run-length coding. This scheme exploits the don't care bits in test patterns. It transmits the first segment of a pattern as it is and then compares every subsequent segment with the first segment, deciding whether the next segment is equal to the first segment or is its complement. If the segment is equal, it sends "0"; if it is the complement, it sends "11." The code word is terminated with "10."

3.12. Run-Based Reordering with EFDR
Stuck-at-fault-based test patterns can be reordered without any loss of fault coverage. The test patterns are reordered based on the minimum Hamming distance between them. The run-based reordering approach [21] reorders the test patterns to produce longer run lengths of 0s. As these longer runs are then coded with Extended FDR, it gives better compression than plain Extended FDR.

3.13. Fixed-Plus-Variable-Length Code
In 2007, Zhan et al. [22] proposed a test data compression scheme based on fixed-plus-variable-length (FPVL) coding. This scheme divides the code word into two parts: a fixed-length head section and a variable-length tail section. The width of the head section is determined by the maximum possible run length, and the value represented by the tail is twice the length of the corresponding run in the original test data. In order to obtain further compression, the highest bit of the tail section is dropped from the code words, because the highest bit of the tail section is always "1." The code words for different run lengths are given in Table 6.

Table 6: Fixed-plus-variable-length coding [22].

3.14. Overlapped Vectors with FDR
In 2008, Sheng et al. [23] observed that, in a given test set, there exists a vector from which parts of each of the other test vectors can be obtained. Based on this, a vector named the overlapped vector, which contains parts of each test vector and is shorter than the sum of the individual test vector lengths, is determined. The overlapped test vectors are then further compressed using Frequency-Directed Run-Length (FDR) coding.

4. Don't Care Bit Filling for Run-Length-Based Codes
To get the maximum compression, the don't care bits should be filled so as to produce longer runs. How the don't care bits should be filled with 1s and 0s depends upon the nature of the code. The run-length codes can be broadly classified into the following categories.

(1) Codes considering runs of zeros followed by a one, that is, the simple run-length code, Golomb code, FDR code, and MFDR code.
(2) Codes considering runs of zeros followed by a one as well as runs of ones followed by a zero, that is, Extended FDR.
(3) Codes considering alternating runs of zeros followed by a one and runs of ones followed by a zero, that is, alternating FDR.
(4) Codes considering alternating runs of only zeros and only ones without any follower bit, that is, shifted alternating FDR.
(5) Codes considering runs of ones followed by a zero. This is a hypothetical case only; no such code is proposed in the literature, but we have taken this case to compare it with the above four styles and analyze the results.

Considering the above five cases, the filling of don't care bits of partially specified test vectors is done as per the following schemes.

4.1. X Filling for Codes Considering Runs of Zeroes Only
For codes like Golomb [7], FDR [10], or MFDR [18], the symbols are made of runs of 0s followed by the bit "1." So we have applied the simple technique of replacing all the Xs with 0s. The overall length of the runs of 0s will therefore increase, the number of symbols will decrease, hence the entropy will decrease and the data compression will increase.
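A trivial Python sketch of this filling rule (illustrative only, not the authors' MATLAB code):

# Trivial sketch: for codes built on runs of 0s, map every don't care to '0'.
def fill_x_with_zeros(cube: str) -> str:
    return cube.replace('X', '0')

print(fill_x_with_zeros('X01XX0X1'))   # -> '00100001'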

4.2. X Filling for Codes Considering Runs of Zeros Followed by One As Well As Runs of Ones Followed by Zero
The Extended FDR code [12] accepts runs of 0s as well as runs of 1s. Here, each symbol is a run of 0s followed by the bit "1" or a run of 1s followed by the bit "0." If the last symbol is a run of 0s without a follower bit "1" (or a run of 1s without a follower bit "0"), it is counted as a symbol whose run length equals the number of 0s (1s). The X filling is done in such a way that it maximizes the run length without introducing any new symbol. While filling an X, the logic is as follows: if the previous symbol has just ended at the position of the X, the X is filled with reference to the next symbol; but if a symbol is still continuing at the position of the X, the X is filled so as to increase the run length of the current symbol. The proposed algorithm needs to do backward as well as forward tracking.
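A simplified Python sketch of this idea (an assumption of mine, not the authors' implementation): each X takes the value of the nearest specified bit looking backward, and, failing that, forward, so that it extends an existing run rather than starting a new symbol. The real algorithm's backward/forward tracking around symbol boundaries is more involved.

# Illustrative sketch: fill each 'X' so it extends the run it sits in.
# Strategy (assumed, simpler than the proposed algorithm): copy the nearest specified
# bit to the left; for leading Xs, copy the first specified bit to the right;
# an all-X vector defaults to '0'.
def fill_x_for_runs(vector: str) -> str:
    bits = list(vector)
    last = None
    for i, b in enumerate(bits):             # backward-looking pass
        if b in '01':
            last = b
        elif last is not None:
            bits[i] = last
    nxt = None
    for i in range(len(bits) - 1, -1, -1):   # forward-looking pass for leading Xs
        if bits[i] in '01':
            nxt = bits[i]
        else:
            bits[i] = nxt if nxt is not None else '0'
    return ''.join(bits)

print(fill_x_for_runs('XX10XXX01XX'))   # -> '11100000111'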

4.3. X Filling for Codes Considering Alternating Runs of Zeros Followed by One and Runs of Ones Followed by Zero
For a code like alternating FDR [12], the symbols are made of alternating runs of 0s followed by a one and runs of 1s followed by a zero. Here, the first run must be of zero type, so if there is an X at the first bit position, it is replaced by 0. If the first bit is "1," then the first run has length 0 and the stream starts with 1s. After that, all the X bits are filled with the last non-X value.

4.4. X Filling for Codes Considering Runs of Ones Only
This is a hypothetical case introduced to analyze the compression results for VLSI test data. The symbols are made of runs of 1s followed by bit “0.” Here all Xs are replaced by 1s. So the runs of 1s will increase.

Considering the above cases, we can divide these methods into two categories: those considering runs of one type only, that is, either runs of 0s or runs of 1s, and those considering runs of both types, that is, runs of 0s as well as 1s. For the second category, after X filling there may be runs of the same run length but of different run type. While identifying the unique symbols, such runs are treated as two different symbols in this paper.

5. Entropy
Entropy is an important concept in data compression. The entropy of a symbol is the minimum number of bits needed to encode that symbol. The entropy of the test set is calculated from the probabilities of occurrence of the unique symbols using the formula

H = - sum_{i=1..S} p_i log2(p_i),

where p_i is the probability of occurrence of symbol x_i in the test set and S is the total number of unique symbols. In the case of a fixed symbol length L, the maximum compression that can be achieved is given by (L - H)/L x 100%, and in the case of variable symbol length, the maximum compression is equal to (L_avg - H)/L_avg x 100%, where the average symbol length is computed as

L_avg = sum_{i=1..S} p_i l_i,

with p_i the probability of occurrence of symbol x_i, l_i the length of symbol x_i, and S the total number of unique symbols [24]. Mathematically, it can be shown that the following formula for maximum compression is valid for fixed as well as variable symbol length:

% compression = (T - N x H) / T x 100,

where T is the total number of bits in the original uncoded test data, N is the total number of symbols to be encoded, and H is the entropy. For all further discussions, this formula for % compression is used.
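As a small illustration of the bound (not the authors' code), the Python sketch below takes a hypothetical list of symbols produced by some X-filling scheme and computes the entropy H and the resulting maximum % compression using the last formula above.

# Illustrative sketch: entropy-based upper bound on % compression for a symbol list.
from collections import Counter
from math import log2

def max_compression(symbols):
    counts = Counter(symbols)
    n = len(symbols)                                     # total number of symbols N
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())   # H in bits/symbol
    total_bits = sum(len(s) for s in symbols)            # T: bits in the uncoded test data
    return entropy, 100.0 * (total_bits - n * entropy) / total_bits  # (H, % compression)

# e.g. runs of 0s terminated by '1' taken as symbols (hypothetical data):
syms = ['001', '1', '1', '00001', '1', '001', '1', '1']
H, pct = max_compression(syms)
print(f'entropy = {H:.3f} bits/symbol, max compression = {pct:.1f}%')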

6. Experimental Results with X Filling for Maximum Test Data Compression
We have implemented all the X filling techniques in MATLAB 7.0. The experiments were conducted on a workstation with a 3.0 GHz Pentium IV processor and 1 GB of memory. Experiments were performed with X filling to calculate the theoretical limit on test data compression for the dynamically compacted test cubes generated by MINTEST for the largest ISCAS89 benchmark circuits. These are the same test sets used for the experiments in [7, 9, 11, 13, 17]. The compression values in Table 7 are predicted from the exact values of entropy generated after the X filling. As can be seen in Table 7, the achievable percentage compression is maximum when runs of both types, that is, runs of 0s and runs of 1s, are considered. Note, however, that these entropy bounds would be different for a different test set for these circuits. If reordering or any other method is used to change the location or the number of don't cares, the entropy can be different. However, given any test set, the proposed method can be used to determine the corresponding entropy bound for it.

Table 7: Comparison of the total number of symbols needed to be encoded and % compression for various schemes of don't care bit filling.

7. Experimental Results with X Filling for Minimum Test Power
The goal of X filling was to reduce the number of runs. As the number of runs decreases, the number of transitions should be reduced, which should lower the test power. In this paper, the widely used weighted transitions metric (WTM) introduced in [25] is used to estimate the average and peak power consumption. The test data has p patterns, each n bits long, and t_{j,i} denotes the i-th bit of the j-th pattern. The weighted transitions metric, and from it the average test power and peak power, are estimated as per the formulae in [26]:

WTM_j = sum_{i=1..n-1} (t_{j,i} XOR t_{j,i+1}) x (n - i),    (1)
P_avg = (1/p) sum_{j=1..p} WTM_j,    P_peak = max_{1<=j<=p} WTM_j.

Intuitively, the average power and peak power for test data should be minimum when there are long runs of ones as well as zeros. This is borne out by Table 11: peak power and average power are minimum when the X filling is done for alternating runs of 0s and 1s without any follower bit.
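A Python sketch of the WTM estimate as reconstructed above (a plausible reading of [25, 26], not a verified transcription): per-pattern weighted transitions, then the average and peak over all patterns.

# Illustrative sketch of the weighted transitions metric (WTM) estimate.
def wtm(pattern: str) -> int:
    n = len(pattern)
    # a transition between bits i and i+1 (1-based) is weighted by (n - i)
    return sum((n - i) for i in range(1, n)
               if pattern[i - 1] != pattern[i])

def avg_and_peak_power(patterns):
    scores = [wtm(p) for p in patterns]
    return sum(scores) / len(scores), max(scores)     # (average WTM, peak WTM)

pats = ['0011111000011111', '0000000011111111']       # hypothetical scan patterns
print(avg_and_peak_power(pats))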

8. Method of Don't Care Bit Filling, Total Number of Symbols, and Nature of Test Data
For a given test set, the different methods of X filling give different numbers of total symbols, different entropies, and hence different compression. For the ISCAS89 benchmark circuits, Table 7 compares the total number of symbols needed to be encoded and the % compression for the various X filling methods. When the X filling is done to make runs of zeros followed by "1" as well as runs of ones followed by "0," the total number of symbols needed to be encoded is minimum, and hence the % compression is maximum. But, as shown in Table 8, for the same X filling method the entropy is maximum; the reason for the higher entropy is the higher number of unique symbols. It can be concluded that for the don't care filling method of "runs of zeros followed by 1 as well as runs of ones followed by 0," in spite of the higher number of unique symbols and the higher value of entropy, the total number of runs is minimum, and this results in the maximum compression.

This comparison can be explored further to investigate the nature of the test data. If the partially specified test data had mostly zeros, the "runs of only zeros" method would give the maximum compression, and conversely, if the test data had mostly ones, the "runs of only ones" method would give the maximum compression. So the first conclusion is that, since the "runs of zeros" method has given better compression in approximately 2/3 of the total cases, the probability of "0" appears to be higher than the probability of "1." If the "alternating runs of 0s and 1s" method gives better compression, it can be said that in the test data 1s and 0s are distributed in groups, like 11100001110011 and so on. If the "runs of 0s and runs of 1s both" method gives better compression, it can be said that 1s and 0s are not distributed in groups but rather, say, with a lone zero sitting within a group of ones or vice versa, like 111011100001000 and so on. It means there may be a large number of cases where a "1" is both followed and preceded by a group of 0s, and a "0" is both followed and preceded by a group of 1s.

The comparison of the "alternating runs of 0s and runs of 1s" method with the "runs of 0s and runs of 1s both" method shows that the second method gives more compression. So it can be concluded that in the test data the probability of a 1 (0) sitting between groups of 0s (1s) is high. This conclusion is further reinforced by the results shown in columns 3 to 7 of Table 7, where the number of symbols needed to be encoded is minimum in the case of the "runs of 0s and runs of 1s both" method.

Table 8: Comparison of entropy and total number of unique symbols for various schemes of don't care bit filling.

9. Comparison of Compression Based on Entropy with Actual Compression Claimed in Literature
The Golomb, FDR, and MFDR schemes are based on runs of 0s. EFDR is based on runs of 0s followed by "1" and runs of 1s followed by "0." Alternating FDR is based on alternating runs of 0s followed by "1" and runs of 1s followed by "0." Table 9 compares the % compression claimed in the literature for each of these categories of coding with its theoretical upper limit predicted by entropy. It should be noted that the % compression given here as the upper bound predicted by entropy is achieved after filling all don't care bits with the appropriate bit-filling method, but without applying any technique such as reordering of test vectors or difference vectors. Considering % compression, MFDR appears to give the best compression among codes based on runs of zeros. It can also be seen that, because of reordering and other techniques, in some of the EFDR cases the % compression achieved is even higher than that predicted by entropy.

Table 9: Comparison of % compression predicted by entropy with the corresponding actual compression claimed in literature.

Table 10 compares the % compression for the ISCAS89 benchmark circuits for the different coding schemes described in the literature. Run-based reordering used with Extended FDR gives a better compression ratio for the ISCAS circuits than the other schemes.

Table 10: Comparison of actual compression claimed in literature for various run-based coding schemes.

Table 11: Comparison of peak power and average power for various methods of don't care bit filling.

10. Conclusion
In this survey paper, we have covered a wide variety of test data compression techniques based on the run-length scheme and its variants. Five different techniques of don't care bit filling, based on the nature of the runs, are used to increase the run length and hence the % compression. The entropy-based % compression for each of these five techniques is calculated, and the analysis shows that a run-length-based code which includes runs of ones followed by a zero as well as runs of zeros followed by a one gives the best compression for VLSI test data. The same conclusion is further emphasized by the comparison of actual compression claimed in the literature, where EFDR gives the maximum compression. Run-based reordering and other techniques used to lengthen the runs improve compression still further, as demonstrated by the Run-Based Reordering with Extended FDR scheme. Researchers can start with this method and explore the possibilities of further compression, taking into consideration the area overhead of the on-chip decoder and the overall test time.

Sunday, March 28, 2010

VLSI Design

Bangalore:
1. Benns Technologies, Part time/Full Time course in VLSI (4 months)
2. TTM Institute of Information Technology, with University of California Extension, and Cadence Design Systems India, Certificate program in VLSI design engineering (Physical Design or Logic Design)
3. UTL Technologies, Advanced Post-Graduate Diploma in VLSI Design (6 months)
4. Centre for Development of Advanced Computing (C-DAC), Advanced Computing Training School, Diploma in VLSI
5. Sandeepani, Post Graduate Diploma in VLSI Design, (14 weeks)
6. Emblitz, Advance Diploma in VLSI Design
7. Silicon Labs, SIMS
8. Kiona, Diploma in VLSI Design (6 months)
9. M.S. Ramaiah School of Advanced Studies, VLSI System Design (50 weeks)

Chennai:

1. Accel Technologies: 'Post Graduate Diploma in VLSI Design' (4 months)
2. Signals and Systems (SANDS), Diploma in VLSI Design (3 months), Certificate courses in Verilog and VHDL (1 month)
3. Horizon Semiconductors: (4 months)

Hyderabad:

1. TTM Institute of Information Technology, with University of California Extension, and Cadence Design Systems India, Certificate program in VLSI design engineering
(Physical Design or Logic Design)
2. VEDA IIT (VLSI Engineering and Design Automation), Advanced Diploma in VLSI Engineering,
3. Sandeepani, (20 weeks)


Pune:

1. University of Pune, Department of Electronic Science and Integrated Circuit and Information Technology (ICIT), Certificate Course in VLSI Design (6 months)
2. Bit Mapper Integration Technologies, Advanced Certificate Course in VLSI Design
3. Centre for Development of Advanced Computing (C-DAC), DVLSI - Diploma in VLSI Design

Calicut:
1. DOEACC Centre, PG Diploma in VLSI Design

Chandigarh:
1. VEDANT (VLSI Design Education And Training), Semi-Conductor Laboratory (SCL), Advanced Post-Graduate Diploma in VLSI Design (6 months)

Kolkata, Nagpur, Mumbai:

1. Centre for Development of Advanced Computing (C-DAC), DVLSI - Diploma in VLSI Design
Courses in Digital Signal Processing, VLSI Design, Embedded Systems and Industrial Automation


Embedded Systems
Bangalore:
1. UTL Technologies
2. Cranes Varsity, Advanced Diploma in Real Time Operating Systems (6 months)
3. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)
4. Waveaxis, Advanced Diploma in Embedded System
5. M.S. Ramaiah School of Advanced Studies, Advanced certificate course in Embedded System Design.
6. UTL Technologies, Diploma in Embedded System Design (6 months)
7. Sandeepani, PG Diploma in Embedded System Design (20 weeks)
8. Embedded Competency Centre, Mistral, Advanced Diploma in Real-time and Embedded Systems (4 months)
9. ei labz, embedded systems training - beginner, intermediate and advanced levels
10. Emblitz, Advance Diploma in Embedded System Design, crash courses (Embedded Linux, RTOS, VxWorks and ARM), online course on Embedded Systems
11. Silicon Labs, SIMS
12. Chip Integration Technologies, Embedded courses
13. Kiona, Diploma in Embedded Systems (6 months)
14. Emertxe, certificate course in Embedded Systems (5 months)



Chennai:

1. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)
2. Signals and Systems (SANDS), PG Diploma in Embedded system (4 months), Diploma in Embedded microcontrollers and processors (3 months), Certificate courses in Real Time Operating System, Microcontrollers and Embedded C (1 month)
Hyderabad:
1. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)
2. Cranes Varsity
3. UTL Technologies
4. VEDA IIT (VLSI Engineering and Design Automation), Advanced Diploma in Embedded System Design
5. Sandeepani, PG Diploma in Embedded System Design (14 weeks)
6. Sigma Solutions, Embedded systems training
7. Autarchy Technologies, Training on Embedded System Development

Pune:

1. University of Pune, Certificate Course in Embedded Systems Design (6 months)
2. Oasis, Certificate Course in Advanced Embedded Systems Design
3. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)

Calicut:

1. DOEACC Centre, PG Diploma course in embedded system design, (6 months, twice a year, during February and August)

Cochin:

1. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)

Mohali, Kolkata, Noida, Trivandrum:
1. Centre for Development of Advanced Computing (C-DAC), DESD - Diploma in Embedded Systems Design, (24 weeks)