> Even with current low cost silicon, there is still a high rejection 
> rate. That, as was said, cannot be afforded with a bigger chip.

Actually, bigger chips increase the number of rejects dramatically. The
reason is very simple: in most cases a chip only works if every single
component on it is operational, i.e. there is no fault tolerance. There are
some exceptions, though; see below.

Take a silicon wafer. Now imagine there are specks of unusable silicon,
say a couple of um across, randomly distributed with a density of one per
2 square cm. Two factors govern the yield:
1) Size of the geometry - if the features are much larger than the anomaly,
there is a good chance an anomaly will only produce a degraded component,
not a completely faulty one. However, today almost all geometry in use is
far smaller than the anomalies, so we run into the second problem:
2) Size of the chip. Obviously, if the chip area is greater than about
2 square cm, statistically every chip will contain a fault, i.e. the yield
will be close to 0. As the chip gets smaller, somewhere around half that
area, the yield suddenly climbs. For very small chips the percentage of
failures approaches (total anomaly area / wafer area) * 100%, i.e. the
yield becomes virtually 100%. This is why small-signal transistors, having
a very small die, cost pennies, while a CPU with a die 100 times the size
does not cost 100 pennies - all sorts of additional processing are
necessary just to get a non-zero yield on these, and that has to be paid for.
In reality, anomalies in the silicon itself are not the only problem; there
is a vast number of contaminants that can affect the process of making a
chip, but the basic behaviour is the same. This is why chip prices are
extremely dependent on chip size, and why all the manufacturers try to
squeeze the die down as much as possible.
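
To put rough numbers on this, here is a minimal sketch. It assumes a
Poisson defect model, which the text above does not name but which is the
standard first-order approximation: if anomalies land randomly and
independently with density D per unit area, the chance that a die of area A
catches none of them is exp(-D*A). The die sizes below are made-up
illustrative figures; only the 'one anomaly per 2 square cm' density comes
from the example above.

    # A minimal yield sketch. Assumption (not from the text above): anomalies
    # land randomly and independently, so the chance that a die of area A
    # contains none is exp(-D * A) - the usual first-order Poisson yield model.
    import math

    DEFECT_DENSITY = 1 / 2.0        # one anomaly per 2 square cm, as above

    def die_yield(die_area_cm2, defect_density=DEFECT_DENSITY):
        """Expected fraction of dies that catch no anomaly at all."""
        return math.exp(-defect_density * die_area_cm2)

    # Hypothetical die sizes, just to show the trend.
    for name, area_cm2 in [("small-signal transistor", 0.002),
                           ("mid-sized logic chip",    0.25),
                           ("large CPU die",           2.0)]:
        print(f"{name:24s} {area_cm2:6.3f} cm^2  yield ~ {die_yield(area_cm2):5.1%}")

The exponential is just a smooth version of the all-or-nothing picture
above, but it shows the same behaviour: yield collapses steeply once the
die area approaches the 'one defect per die' area, and the only real knobs
are die size and defect density.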

This problem first became evident with memory, as memories were
traditionally the largest chips. For a long time it held back the jump from
64k bits to 256k bits for dynamic RAM. Finally, someone figured out that
providing extra, programmable RAM 'rows' would add a dose of fault
tolerance. However, this came at a price: traditionally, a shrinkage of
geometry brings a corresponding shrinkage of delays, i.e. the speed
increases. But since the 256kb DRAM needed 'programmable' rather than fixed
row decoders, some of that speed benefit was lost, so the 256kb DRAM chips
ended up in the same speed grade as the 64kb chips available earlier, and
one speed grade jump was 'missed'. Today all memory produced has some fault
tolerance; it is tested and then appropriately programmed at the factory.
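
As a toy illustration of that scheme (everything here is hypothetical -
real DRAMs program fuses or antifuses in the row decoder at test time, not
a Python dictionary), the idea boils down to a few spare rows plus a
programmable map from bad rows to spares:

    # Toy sketch of row redundancy. All names here are invented: real DRAMs
    # blow fuses/antifuses in the row decoder at factory test, but the logic
    # is the same - keep spare rows and steer accesses to bad rows onto them.
    class RedundantRowArray:
        def __init__(self, num_rows, num_spares):
            self.spares = list(range(num_rows, num_rows + num_spares))  # physical spare rows
            self.remap = {}   # logical row -> spare row, 'programmed' at test time

        def mark_defective(self, logical_row):
            """Factory test found a bad row: steer it to a spare, if one is left."""
            if not self.spares:
                return False              # out of spares - the whole chip is a reject
            self.remap[logical_row] = self.spares.pop(0)
            return True

        def physical_row(self, logical_row):
            # The extra lookup in the 'programmable' decoder is the small
            # speed penalty mentioned above.
            return self.remap.get(logical_row, logical_row)

With, say, four spare rows per array, a die with up to four defective rows
is still fully usable instead of being scrapped, which is exactly the fault
tolerance described above.
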
Furthermore, even CPUs have a dose of fault tolerance. It is most evident
with chips that come in different flavours, such as different speed grades
or cache sizes (an aside: being memory, most caches in today's big CPUs
also have 'extra' memory cells to provide fault tolerance, or, in some
cases, error detection and correction schemes).
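
As an illustration of the 'error detection and correction' part of that
aside, here is a minimal single-error-correcting Hamming(7,4) code. Real
caches and memories use wider codes (typically SECDED over 64-bit words),
so this is only the principle, not what any particular CPU implements:

    # Minimal sketch of error correction: a Hamming(7,4) code, 4 data bits
    # plus 3 parity bits, able to correct any single flipped bit.
    def hamming74_encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4                     # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                     # covers positions 2,3,6,7
        p4 = d2 ^ d3 ^ d4                     # covers positions 4,5,6,7
        return [p1, p2, d1, p4, d2, d3, d4]   # codeword positions 1..7

    def hamming74_decode(code):
        c = list(code)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        error_pos = s1 + 2 * s2 + 4 * s4      # 0 means no single-bit error found
        if error_pos:
            c[error_pos - 1] ^= 1             # flip the offending bit back
        return [c[2], c[4], c[5], c[6]]       # recovered data bits

    word = hamming74_encode(1, 0, 1, 1)
    word[4] ^= 1                              # simulate one faulty cell
    assert hamming74_decode(word) == [1, 0, 1, 1]
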
This same problem reared its head in the manufacture of active matrix LCDs,
which are the most extreme form of a chip - the size of the whole screen.
Cost-effective sizes finally jumped from about 9" diagonal to more once
they figured out how to make the displays line by line: the lines are
produced on a drum and 'stuck onto' the glass one by one. They are tested
as they are applied, and if a line is found defective it is scrubbed off
and replaced by a new one from the drum, instead of the whole screen being
thrown away.

In parallel with this technology, material technology also advances, so as
time goes by and products mature, they actually move down the technology
chain. For instance, to get the first 15" LCDs, the drum technology was
required. Advances in materials now make it possible to produce 15" screens
using the traditional technology, while, combined with the drum process,
19" screens can be made.

Nasta
