The flaw in the argument is that productivity here is measured irrespective of
the integrity of the information being processed. The same formula measures
the "productivity" of noise as of signal. IT generates a lot of noise. Because
eliminating that noise is labour intensive, IT generates countervailing noise
to mask the noise.

People use spreadsheets in their everyday work. People work on different
platforms. People exchange files to do collaborative work. People cut and
paste from one file to another. People don't know that x-file uses the 1904
date system and y-file the 1900 date system (did you know?). June 8, 2001
becomes June 7, 1997. Data is corrupt. Nobody realizes. That is noise.
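The shift is easy to demonstrate. The two Excel date systems number days from
epochs 1,462 days apart (four years, including a leap day, plus one day), so a
serial number reinterpreted in the other system lands exactly that far off. A
minimal sketch, the direction of the shift depending on which way the file
moves:

```python
from datetime import date, timedelta

# The 1900 and 1904 date systems number days from epochs 1462 days apart:
# four years (1461 days, including one leap day) plus one day.
OFFSET_DAYS = 1462

original = date(2001, 6, 8)
# A 1900-system serial reinterpreted under the 1904 epoch slides back:
corrupted = original - timedelta(days=OFFSET_DAYS)
print(corrupted)  # 1997-06-07
```

The same serial moved the other way would land four years and a day late
instead of early; either way, the number still looks like a plausible date.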

Sometime in the past month you may have read a number based at least in part
on a date that was off by precisely four years and a day. You may have cited
that number in an analysis you did. You will never know which one it was. It
may have affected your conclusions.

Imagine all these little cut-and-paste four-years-and-a-day errors migrating
into actuarial tables, pharmaceutical drug trial data, equipment
maintenance logs, etc., etc., etc.

What a difference [four years and] a day makes 
[1462 times] twenty-four little hours . . .

Now stop imagining.

Childress forwarded by Saylor:

>By including the time needed  to complete a workflow process in the formula,
>it then becomes possible to arrive at a number that measures relative
>productivity of the workflow process itself using the formula:
>
>Productivity = (nTask_KU /nWorker_KU) + Hours
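Taken literally, the quoted formula can be sketched as below; the names and
figures are hypothetical, not from Childress's text. Note that nothing in it
inspects the integrity of what is processed, which is the objection above:
corrupted knowledge units count the same as sound ones.

```python
def productivity(n_task_ku: float, n_worker_ku: float, hours: float) -> float:
    """Literal reading of the quoted formula:
    Productivity = (nTask_KU / nWorker_KU) + Hours.
    Knowledge units are counted, not validated, so noise scores
    exactly like signal."""
    return (n_task_ku / n_worker_ku) + hours

# Hypothetical figures: 10 KU per task, 2 KU per worker, 8 hours.
print(productivity(10, 2, 8))  # 13.0
```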

Tom Walker
Bowen Island, BC
604 947 2213
