Stef,

I'm doing some work in a W3C working group where one of the deliverables is 
a set of test cases, i.e. a set of machine-processable files that give 
some kind of before-and-after indication of how certain constructs may be 
processed.

These are used (a) as discussion points for building consensus about 
exactly what is intended in some circumstance, (b) as a way to go and find 
out what existing code actually does, and (c) as a secondary document to 
back up what the primary specification is trying to state.
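Such before-and-after test cases can be sketched as data plus a tiny checking
harness. The format and names below are hypothetical, a minimal illustration
rather than any group's actual deliverable:

```python
# Hypothetical sketch: each test case is a (before, expected_after) pair;
# "process" is the implementation under test.
def run_test_cases(cases, process):
    """Return the cases where the implementation's output differs
    from the expected 'after' result."""
    failures = []
    for before, expected in cases:
        actual = process(before)
        if actual != expected:
            failures.append((before, expected, actual))
    return failures

# Toy construct under test: whitespace normalization.
cases = [("a  b", "a b"), ("  x ", "x")]
normalize = lambda s: " ".join(s.split())
assert run_test_cases(cases, normalize) == []  # a conforming implementation
```

A file of such pairs serves all three purposes at once: a concrete discussion
point, a probe to run against existing code, and machine-checkable backing for
the prose specification.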

#g
--


At 02:11 PM 2/2/02 -0800, Einar Stefferud wrote:
>I keep working on Keeping It Simple in honor of Stupid;-)...  (KISS)
>
>In keeping with this, and still seeking some progress, you might note that 
>my position is reasonably fluid, since the solution(s) do not seem to be 
>obvious from the beginning.
>
>It is extremely difficult to do what is needed in the form of Enforcement, 
>which requires Punishment Consequences and trial courts and all such.  All 
>of which we all agree, should not be mounted or provided by IETF.
>
>But, let's suppose that someone assembled some documented test cases for 
>Interoperability, such as were used for the first pair of
>implementations that justified moving a standard from Proposed Standard to
>Draft Standard.
>
>At levels above IP/TCP I suspect that there is very little code required 
>to do the testing.  What is required instead of code is scenarios for 
>sending this, that, and the other thing in both directions between 
>interworking systems.
>
>I am assuming that such a test was performed at least once, whether 
>documented or not.  I further assume that this could plausibly be used as 
>an initial Public Standard for testing.  This is the specification of the 
>test, not the code for the test: what kinds of objects are to be 
>exchanged successfully before that first pair can be accepted as proof of 
>interworking between that first justifying pair of independent applications.
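The test scenarios described above could be captured as plain data rather than
code. A minimal sketch, with hypothetical names and fields:

```python
# Hypothetical sketch: an interop test scenario as data.  Each step names
# a direction and the object to be exchanged; any harness (or a human
# tester) can replay it against a pair of independent implementations.
scenario = {
    "name": "basic-exchange",
    "steps": [
        {"from": "A", "to": "B", "object": "sample message"},
        {"from": "B", "to": "A", "object": "acknowledgement"},
    ],
}

def directions_covered(scenario):
    """Interworking requires successful exchange in both directions."""
    return {(s["from"], s["to"]) for s in scenario["steps"]}

assert directions_covered(scenario) == {("A", "B"), ("B", "A")}
```

Because the scenario is data, publishing it costs nothing beyond not throwing
it away, which is exactly the seed value argued for above.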
>
>I suggest that the first thing to do is stop tossing those test specs in 
>the trash after they are used, as though they have no further value.
>They in fact have the value of a seed that can grow into a valuable long 
>term testing protocol for all that care about interworking, such that any 
>customer seeking to buy the most interoperable systems can use the 
>published test suite protocol to do in-house testing on the systems 
>offered by bidding vendors.
>
>So, what I propose is to do something that will give the customers a tool 
>for protecting themselves from careless or heedless or even dishonest vendors.
>
>As things are now, we, the end users and customers, are basically 
>defenseless in the face of what appear to be hostile vendors who face 
>no checks and balances from crippled customers.
>
>If nothing else, our customer community should be interested in founding 
>an operation that will supply interoperability test scenarios for 
>themselves.  To hell with expecting the vendors to protect the 
>customers.  If the testing tools are not in the hands of the customers, 
>who can you trust?
>
>Don't tell me that we should trust the Marketing Droids;-)...
>How much testing do those Droids do?
>
>I suspect they mostly test market savvy, not product reliability.
>
>But, being suspicious is not a useful thing without some tools to use for 
>seeking truth.
>
>I prefer to Trust, but Verify!
>This is the power in customer emPOWERment.
>
>BTW, I do not expect much help from vendors in this strategy.
>Though one or two might find some advantage in helping out.
>
>Especially if they offer real interworking systems;-)...
>
>Cheers..Stef
>
>
>At 12:22 -0500 30/01/02, Mark Adam wrote:
>>Since interoperability on a one-to-many scale would be a problem,
>>perhaps approaching it from the many-to-one point of view would be
>>better.
>>
>>Einar's ideas are good, but still difficult to implement. What happens
>>when a company fails to find every device it should be tested against?
>>It almost seems that what we need is the concept of a reference
>>platform.
>>
>>Having a reference platform allows for a single point of contact for
>>everyone wanting "IETF Certification".
>>
>>I would also suggest that the task of implementing such a platform
>>should be up to the WGs creating the standards or the companies
>>authoring the standard. This would also give you a group that could
>>administer the platform. Of course there would have to be some rules of
>>conduct so that nobody could be excluded from performing their
>>interoperability testing. (Do I smell a BOF here?) I'm sure groups
>>holding reference platforms could find some way to make money off of
>>this without breaking the rules.
>>
>>I'm not saying this would be easy to implement, but it might be worth a
>>thought.
>>
>>mark---------------
>>
>>At 00:25 1/29/02, Einar Stefferud wrote:
>>  >Well now, an idea blinked on here;-)...
>>  >
>>  >As Paul Hoffman noted, it costs a small fortune for an entire set of
>>  >vendor products to be tested against all other interworking products
>>  >(N**2 pairs is the estimate) and there is no proffered business model
>>  >for doing this for the entire involved industry.
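As a rough check of that estimate (my sketch, not from the thread: counting
unordered pairs gives N*(N-1)/2, the same order of growth as N**2), full
pairwise testing grows quadratically, while testing every product against a
single reference platform grows only linearly:

```python
# Sketch: full pairwise interop testing vs. a single reference platform.
def pairwise_tests(n):
    # each unordered pair of distinct products tested once: n*(n-1)/2
    return n * (n - 1) // 2

def reference_tests(n):
    # each product tested once against the reference platform
    return n

# For 100 products: 4950 pairwise test runs vs. 100 reference runs.
assert pairwise_tests(100) == 4950
assert reference_tests(100) == 100
```

That gap is the arithmetic behind both the "no business model" complaint and
the reference-platform suggestion made elsewhere in this thread.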
>>  >
>>  >But, maybe someone can devise a business model for testing one
>>  >product against all the others that claim to conform to the standard
>>  >under test.
>>  >
>>  >I know that HP did this once for their Internet products by hiring a
>>  >person to do it from one of their customer's sites on the Internet.
>>  >It does not matter here who or where it was done.
>>  >
>>  >But, this puts the burden on the vendors that wish to be able to
>>  >claim inter-workability with all others, or with some subset of their
>>  >choice.
>>  >
>>  >Or they can identify those that do not interwork for the benefit of
>>  >those that want to know such stuff.
>>  >
>>  >This then becomes an individual company decision, and does not
>>  >require massed agreement, or require synchronized work schedules.
>>  >Just put your system on the net and find someone out there to test
>>  >against.  Doing it on the real net is just fine for this testing
>>  >model.
>>  >
>>  >Of course, the vendors that do this can brag or not, as they wish.
>>  >
>>  >And there is no great concern for whether every vendor does it or not.
>>  >
>>  >And the market can make up its mind by itself.
>>  >
>>  >For my view, I have trouble believing that all those vendors are not
>>  >vitally interested in inter-working among their products.
>>  >
>>  >And, in addition, I would hope that someone might mount an open
>>  >discussion mailing list for people to use to post their private
>>  >experiences with what does or does not work.
>>  >
>>  >And last:  This is no longer a useful IETF discussion;-)...\Stef
>>  >
>>  >
>>  >At 09:01 -0800 28/01/02, John  W Noerenberg II wrote:
>>  >>At 10:19 PM -0500 1/26/02, [EMAIL PROTECTED] wrote:
>>  >>>
>>  >>>I have in my bedroom a night light, which I purchased at a local
>>  >>>grocery store.  It has a UL logo on it, which doesn't tell me much
>>  >>>about its suitability as a night light (I can't tell if it's bright
>>  >>>enough, or if it's too bright, or what its power consumption is),
>>  >>>but it *does* tell me 2 things:
>>  >>>
>>  >>>1) It has been *tested* and found free of any known safety design 
>> problems.
>>  >>>It may not *work* as a night light, but it won't shock me when I go to
>>  >>>throw it in the trash can because it's not suitable.
>>  >>>
>>  >>>2) A high enough percentage of night light manufacturers get UL listed
>>  >>>that I can afford to be suspicious of any company that doesn't have
>>  >>>the logo on their product.
>>  >>
>>  >>Underwriters Laboratories, Inc.  is a non-profit corporation that
>>  >>was founded in 1894.  This
>>  >><http://www.ul.com/about/otm/otmv3n2/labdata.htm>article describes
>>  >>the process UL uses for developing their standards.  Many UL
>>  >>standards receive ANSI certification.  According to the article, UL
>>  >>relies on information from a number of sources while developing a
>>  >>standard.
>>  >>
>>  >>UL tests products submitted by its customers for *conformance* to
>>  >>its standards.  UL's reputation depends on the rigor and
>>  >>independence of their testing.  I don't know how much it costs to submit
>>  >>a product for testing, but obtaining UL certification isn't free.
>>  >>UL's certification program is successful, because when consumers
>>  >>like Valdis (and me) see a UL label, they believe in its value.  As
>>  >>Valdis points out, the value of the label has limits.
>>  >>
>>  >>Certification isn't the work of a volunteer organization like the
>>  >>IETF.  It could be the work of an organization like Underwriters
>>  >>Labs.  This would be a good thing for Internet standards, imho.
>>  >>
>>  >>One idea proposed multiple times in this meandering discussion is
>>  >>that those advocating testing should put up or shut up -- create a
>>  >>testing organization or move on to other topics.  I concur with both
>>  >>those suggestions.  I'm sure you'll all be pleased this is my last
>>  >>word on the topic.
>>  >>
>>  >>best,
>>  >>--
>>  >>
>>  >>john noerenberg
>>  >>[EMAIL PROTECTED]
>>  >>   ----------------------------------------------------------------------
>>  >>   While the belief we  have found the Answer can separate us
>>  >>   and make us forget our humanity, it is the seeking that continues
>>  >>   to bring us together, that makes and keeps us human.
>>  >>   -- Daniel J. Boorstin, "The Seekers", 1998
>>  >>   ----------------------------------------------------------------------
>>  >
>
>-
>This message was passed through [EMAIL PROTECTED], which
>is a sublist of [EMAIL PROTECTED] Not all messages are passed.
>Decisions on what to pass are made solely by Raffaele D'Albenzio.

------------
Graham Klyne
[EMAIL PROTECTED]
