Re: presenting vs discussion in WG meetings (was re:Remote Participation Services)

2013-02-16 Thread Brian E Carpenter
On 15/02/2013 20:57, Keith Moore wrote:
...
> But this makes me realize that there's a related issue.   An expectation
> that WG meetings are for presentations, leads to an expectation that
> there's lots of opportunity to present suggestions for new work to do.  
> WG time scheduled for considering new work can actually take away time
> for discussion of ongoing work.   And once the time is scheduled and
> people have made commitments to travel to meetings for the purpose of
> presenting new work, chairs are understandably reluctant to deny them
> their allotted presentation time.

This is closely related to a well-known problem at academic conferences.
Many people can only get funded to travel if they are presenting a paper.
It's common practice, therefore, to have either a poster session (which
allows massively parallel presentations) or hot-topics sessions (with a
strict and very short time-limit). We tend to throw the hot-topics sessions
into WG meetings, which is not ideal.

Why not have a poster session as part of Bits-n-Bites? It would give
new ideas a chance to be seen without wasting WG time. Make it official
enough that people can use it in their travel requests.

Brian


Re: presenting vs discussion in WG meetings (was re:Remote Participation Services)

2013-02-16 Thread Abdussalam Baryun
+1

AB

On 16/02/13 08:04, Brian Carpenter wrote:
> On 15/02/2013 20:57, Keith Moore wrote:
> ...
>> But this makes me realize that there's a related issue.   An expectation
>> that WG meetings are for presentations, leads to an expectation that
>> there's lots of opportunity to present suggestions for new work to do.
>> WG time scheduled for considering new work can actually take away time
>> for discussion of ongoing work.   And once the time is scheduled and
>> people have made commitments to travel to meetings for the purpose of
>> presenting new work, chairs are understandably reluctant to deny them
>> their allotted presentation time.
>
> This is closely related to a well-known problem at academic conferences.
> Many people can only get funded to travel if they are presenting a paper.
> It's common practice, therefore, to have either a poster session (which
> allows massively parallel presentations) or hot-topics sessions (with a
> strict and very short time-limit). We tend to throw the hot-topics sessions
> into WG meetings, which is not ideal.
>
> Why not have a poster session as part of Bits-n-Bites? It would give
> new ideas a chance to be seen without wasting WG time. Make it official
> enough that people can use it in their travel requests.
>
> Brian
>


Re: presenting vs discussion in WG meetings (was re:Remote Participation Services)

2013-02-16 Thread Keith Moore

On 02/16/2013 03:04 AM, Brian E Carpenter wrote:

On 15/02/2013 20:57, Keith Moore wrote:
...

But this makes me realize that there's a related issue.   An expectation
that WG meetings are for presentations, leads to an expectation that
there's lots of opportunity to present suggestions for new work to do.
WG time scheduled for considering new work can actually take away time
for discussion of ongoing work.   And once the time is scheduled and
people have made commitments to travel to meetings for the purpose of
presenting new work, chairs are understandably reluctant to deny them
their allotted presentation time.

This is closely related to a well-known problem at academic conferences.
Many people can only get funded to travel if they are presenting a paper.
It's common practice, therefore, to have either a poster session (which
allows massively parallel presentations) or hot-topics sessions (with a
strict and very short time-limit). We tend to throw the hot-topics sessions
into WG meetings, which is not ideal.

Why not have a poster session as part of Bits-n-Bites? It would give
new ideas a chance to be seen without wasting WG time. Make it official
enough that people can use it in their travel requests.

That sounds like a great idea to me.

Keith



Re: [IETF] back by popular demand - a DNS calculator

2013-02-16 Thread Warren Kumari


Sent from my iPad

On Feb 16, 2013, at 2:02 AM, Patrik Fältström  wrote:

> 
> On 15 feb 2013, at 23:45, Warren Kumari  wrote:
> 
>> Sure -- the DNS protocol *cannot* "handle any value in the octets" -- in 
>> fact, there are an *infinite* number of values it cannot handle *in the 
>> octets*. For example, it cannot handle 257. It also cannot handle 321, nor 
>> 19.3...
> 
> Ok, it is obviously Friday... somewhere...
> 
> Once when being on IESG way back when I was tasked to write the response to 
> the letter we got with a suggestion on an alternative solution for the 
> "running out of IPv4 addresses" problem.
> 
> The proposal was to not stop counting at 255 in each of the four numbers 
> separated by periods, but continue to (at least) 999.
> 

That's also a solved problem -- there is even a draft about it: 
http://tools.ietf.org/html/draft-terrell-math-quant-ternary-logic-of-binary-sys-01

You just use ternary logic instead of binary and all your problems are 
solved... or something... I get a little lost during the proof of Fermat's...

W


>   Patrik
> 
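
The octet arithmetic behind the joke: an IPv4 address is a fixed 32-bit value, four fields of 8 bits each, so no dotted-decimal field can exceed 255, and counting to 999 would need 10 bits per field. A minimal sketch (the parser below is illustrative, not from the thread):

```python
def parse_ipv4(text: str) -> int:
    """Parse a dotted-quad IPv4 address into a 32-bit integer,
    rejecting any field that does not fit in one octet."""
    fields = text.split(".")
    if len(fields) != 4:
        raise ValueError("expected four dotted-decimal fields")
    value = 0
    for field in fields:
        n = int(field)
        if not 0 <= n <= 255:  # each field is exactly one octet
            raise ValueError(f"{n} does not fit in an octet")
        value = (value << 8) | n
    return value

print(hex(parse_ipv4("192.0.2.1")))  # 0xc0000201
```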


Re: [therightkey] LC comments on draft-laurie-pki-sunlight-05

2013-02-16 Thread Phillip Hallam-Baker
Sorry for the delay but I have been thinking of CT and in particular the
issues of

* Latency for the CA waiting for a notary server to respond
* Business models for notary servers

As a rule open source software works really well as the marginal cost of
production is zero. Open source services tend to sux because even though
the marginal cost of a service is negligible, large numbers times
negligible adds up to big numbers. Running a DNS server for a university
department costs very little, running it for the whole university starts to
cost real money and running a registry like .com with 99.% reliability
ends up with $100 million hardware costs.

So the idea that I plug my business into a network of notary servers being
run by amateurs or as a community service is a non-starter for me. We have
to align the responsibility for running any server that the CA has a
critical dependency on with a business model.

Looking at the CT proposal, it seems to me that we could fix the business
model issue and remove a lot of the CA operational issues as follows:

1) Each browser provider that is interested in enforcing a CT requirement
stands up a meta-notary server.

2) Each CA runs their own notary server, and this is the only resource that
needs a check-in at certificate issuance.

3) Each CA notary server checkpoints to one or more meta-notary servers
every 60 minutes. As part of the check-in process it uploads the full
information for all the certificates issued in that time interval.

4) Meta-Notaries deliver tokens that assert that the CA notaries are
current every 60 minutes. Note here that 'current' is according to the
criteria set by the meta notary. This is an intentional piece of 'slop' in
the system.

5) The OCSP tokens delivered by the CA contain the information necessary to
checkpoint the certificate to the Meta-Notaries.

6) A browser enforcing CT disclosure pulls a list of anchor points from its
chosen meta-notary every 60 minutes and uses them to validate the CT
assertions delivered in certs.
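
A minimal sketch of steps 2 to 4 (all class and field names here are illustrative assumptions, not part of the CT draft or any existing API):

```python
import hashlib
import time

CHECKPOINT_INTERVAL = 60 * 60  # steps 3 and 4: every 60 minutes

class CANotary:
    """Per-CA notary server (step 2); the only check-in at issuance."""
    def __init__(self, name):
        self.name = name
        self.pending = []  # certificates issued since the last checkpoint

    def record_issuance(self, cert_der: bytes):
        self.pending.append(cert_der)

    def checkpoint(self, meta_notary):
        """Step 3: upload this interval's issuances, get a currency token."""
        batch, self.pending = self.pending, []
        return meta_notary.accept_checkpoint(self.name, batch)

class MetaNotary:
    """Browser-operated meta-notary (steps 1 and 4)."""
    def accept_checkpoint(self, ca_name, batch):
        digest = hashlib.sha256(b"".join(batch)).hexdigest()
        # Step 4: a token asserting the CA notary is 'current' this interval.
        return {"ca": ca_name, "batch_digest": digest,
                "valid_until": time.time() + CHECKPOINT_INTERVAL}
```

In this scheme the token would be signed by the meta-notary and carried alongside the OCSP response (step 5), so that a browser can validate it against its chosen meta-notary's anchor list (step 6).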


The 'slop' introduced at the meta-notary can of course be removed if we
want to ensure that the system is robust even if there is a collusion
between the CA and the meta-notary. But since the whole point of the scheme
is transparency, the meta-notary operation can be audited by third parties
in any event.


Re: [therightkey] LC comments on draft-laurie-pki-sunlight-05

2013-02-16 Thread Paul Hoffman
On Feb 16, 2013, at 10:22 AM, Phillip Hallam-Baker  wrote:

> Looking at the CT proposal, it seems to me that we could fix the business 
> model issue and remove a lot of the CA operational issues as follows:
> 
> 1) Each browser provider that is interested in enforcing a CT requirement 
> stands up a meta-notary server.
> 
> 2) Each CA runs their own notary server and this is the only resource that 
> needs to have a check in at certificate issue.
> 
> 3) Each CA notary server checkpoints to one or more meta-notary servers every 
> 60 minutes. As part of the check in process it uploads the whole information 
> for all the certificates issued in that time interval.
> 
> 4) Meta-Notaries deliver tokens that assert that the CA notaries are current 
> every 60 minutes. Note here that 'current' is according to the criteria set 
> by the meta notary. This is an intentional piece of 'slop' in the system. 
> 
> 5) The OCSP tokens delivered by the CA contain the information necessary to 
> checkpoint the certificate to the Meta-Notaries.
> 
> 6) A browser enforcing CT disclosure pulls a list of anchor points from its 
> chosen meta-notary every 60 minutes and uses them to validate the CT 
> assertions delivered in certs.

Are you saying that those six items should be added to the experimental RFC as 
requirements, or are you just discussing what might happen operationally after 
the RFC is published? 

--Paul Hoffman

Re: [therightkey] LC comments on draft-laurie-pki-sunlight-05

2013-02-16 Thread Ben Laurie
On 16 February 2013 10:22, Phillip Hallam-Baker  wrote:
> Sorry for the delay but I have been thinking of CT and in particular the
> issues of
>
> * Latency for the CA waiting for a notary server to respond
> * Business models for notary servers
>
> As a rule open source software works really well as the marginal cost of
> production is zero. Open source services tend to sux because even though the
> marginal cost of a service is negligible, large numbers times negligible
> adds up to big numbers. Running a DNS server for a university department
> costs very little, running it for the whole university starts to cost real
> money and running a registry like .com with 99.% reliability ends up
> with $100 million hardware costs.
>
> So the idea that I plug my business into a network of notary servers being
> run by amateurs or as a community service is a non-starter for me. We have
> to align the responsibility for running any server that the CA has a
> critical dependency on with a business model.

Note that we do not expect CAs to talk to _all_ log servers, only
those that are appropriately responsive - and also note that a CA can
fire off a dozen log requests in parallel and then just use the first
three that come back, which would deal with any temporary log issues.

We should probably add this ability to the open source stack at some point.
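
The submit-to-many, keep-the-first-three pattern described above can be sketched like this (the task callables stand in for per-log add-chain submissions; this helper is illustrative, not part of any existing CT client stack):

```python
import concurrent.futures

def first_k_responses(tasks, k=3, timeout=5.0):
    """Run all tasks in parallel and keep the first k successful results.

    `tasks` is a list of zero-argument callables, e.g. one add-chain
    submission per log server; slow or failing logs are simply ignored.
    """
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max(len(tasks), 1)) as pool:
        futures = [pool.submit(task) for task in tasks]
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            try:
                results.append(fut.result())
            except Exception:
                continue  # one misbehaving log does not block issuance
            if len(results) >= k:
                break
    return results
```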

> Looking at the CT proposal, it seems to me that we could fix the business
> model issue and remove a lot of the CA operational issues as follows:
>
> 1) Each browser provider that is interested in enforcing a CT requirement
> stands up a meta-notary server.
>
> 2) Each CA runs their own notary server and this is the only resource that
> needs to have a check in at certificate issue.

Isn't this part the only part that's actually needed? The
meta-notaries seem like redundant extra complication (and also sound
like they fulfil essentially the same role as monitors).

I assume, btw, that by "notary server" you mean "log server"?

Also, if a CA only uses its own log, what happens when it screws up
and gets its log struck off the list of trusted logs? This is why we
recommend some redundancy in log signatures.

> 3) Each CA notary server checkpoints to one or more meta-notary servers
> every 60 minutes. As part of the check in process it uploads the whole
> information for all the certificates issued in that time interval.
>
> 4) Meta-Notaries deliver tokens that assert that the CA notaries are current
> every 60 minutes. Note here that 'current' is according to the criteria set
> by the meta notary. This is an intentional piece of 'slop' in the system.
>
> 5) The OCSP tokens delivered by the CA contain the information necessary to
> checkpoint the certificate to the Meta-Notaries.
>
> 6) A browser enforcing CT disclosure pulls a list of anchor points from its
> chosen meta-notary every 60 minutes and uses them to validate the CT
> assertions delivered in certs.
>
>
> The 'slop' introduced at the meta-notary can of course be removed if we want
> to ensure that the system is robust even if there is a collusion between the
> CA and the meta-notary. But since the whole point of the scheme is
> transparency, the meta-notary operation can be audited by third parties in
> any event.
>


proceedings not in archival format

2013-02-16 Thread Michael Richardson

I could not recall the name of a vendor who was at the Bits'n'Bites in
Atlanta.   So I went to
https://www.ietf.org/meeting/85/bits-n-bites.html, but the list wasn't
there.  Since it said that they'd get an acknowledgement in the plenary, I 
went to: 
  https://www.ietf.org/proceedings/85/technical-plenary.html

looking for the right slide.  

Why are there .pptx files there?
a) It's not an archival-quality format (HTML or PDF/A)
b) According to our OpenStand principles it's hardly a standard.

-- 
]   Never tell me the odds! | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works| network architect  [ 
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[ 






Re: proceedings not in archival format

2013-02-16 Thread Alexa Morris
Michael,

You're quite correct -- our proceedings documents are supposed to be maintained 
in PDF. I'm not sure what the issue is here but I can assure you that I'm 
looking into it and that we will get this addressed as quickly as possible.

Regards,
Alexa

On Feb 16, 2013, at 11:20 AM, Michael Richardson wrote:

> 
> I could not recall the name of a vendor who was at the Bits'n'Bites in
> Atlanta.   So I went to
> https://www.ietf.org/meeting/85/bits-n-bites.html, but the list wasn't
> there.  Since it said that they'd get an acknowledgement in the plenary, I 
> went to: 
>  https://www.ietf.org/proceedings/85/technical-plenary.html
> 
> looking for the right slide.  
> 
> Why are there .pptx files there?
> a) It's not an archival quality format (HTML or PDF/A)
> b) According to our openstand principles it's hardly a standard.
> 
> -- 
> ]   Never tell me the odds! | ipv6 mesh networks [ 
> ]   Michael Richardson, Sandelman Software Works| network architect  [ 
> ] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[ 
> 
> 

--
Alexa Morris / Executive Director / IETF
48377 Fremont Blvd., Suite 117, Fremont, CA  94538
Phone: +1.510.492.4089 / Fax: +1.510.492.4001
Email: amor...@amsl.com

Managed by Association Management Solutions (AMS)
Forum Management, Meeting and Event Planning
www.amsl.com 



Re: [therightkey] LC comments on draft-laurie-pki-sunlight-05

2013-02-16 Thread Phillip Hallam-Baker
On Sat, Feb 16, 2013 at 1:55 PM, Ben Laurie  wrote:

> On 16 February 2013 10:22, Phillip Hallam-Baker  wrote:
> > Sorry for the delay but I have been thinking of CT and in particular the
> > issues of
> >
> > * Latency for the CA waiting for a notary server to respond
> > * Business models for notary servers
> >
> > As a rule open source software works really well as the marginal cost of
> > production is zero. Open source services tend to sux because even though
> the
> > marginal cost of a service is negligible, large numbers times negligible
> > adds up to big numbers. Running a DNS server for a university department
> > costs very little, running it for the whole university starts to cost
> real
> > money and running a registry like .com with 99.% reliability ends up
> > with $100 million hardware costs.
> >
> > So the idea that I plug my business into a network of notary servers
> being
> > run by amateurs or as a community service is a non-starter for me. We
> have
> > to align the responsibility for running any server that the CA has a
> > critical dependency on with a business model.
>
> Note that we do not expect CAs to talk to _all_ log servers, only
> those that are appropriately responsive - and also note that a CA can
> fire off a dozen log requests in parallel and then just use the first
> three that come back, which would deal with any temporary log issues.
>
> We should probably add this ability to the open source stack at some point.
>
> > Looking at the CT proposal, it seems to me that we could fix the business
> > model issue and remove a lot of the CA operational issues as follows:
> >
> > 1) Each browser provider that is interested in enforcing a CT requirement
> > stands up a meta-notary server.
> >
> > 2) Each CA runs their own notary server and this is the only resource
> that
> > needs to have a check in at certificate issue.
>
> Isn't this part the only part that's actually needed? The
> meta-notaries seem like redundant extra complication (and also sound
> like they fulfil essentially the same role as monitors).
>
> I assume, btw, that by "notary server" you mean "log server"?
>
> Also, if a CA only uses its own log, what happens when it screws up
> and gets its log struck off the list of trusted logs? This is why we
> recommend some redundancy in log signatures.


That is the reason for checkpointing against meta-notaries.

Otherwise a CA might not actually release the logs.

-- 
Website: http://hallambaker.com/


The IETF is coming to New Delhi!

2013-02-16 Thread Ole Jacobsen

Sorry about the late announcement:

http://www.ietfindia.in/

... looks like it ends today, oh well.

:-)


Ole J. Jacobsen
Editor and Publisher,  The Internet Protocol Journal
Cisco Systems
Tel: +1 408-527-8972   Mobile: +1 415-370-4628
E-mail: o...@cisco.com  URL: http://www.cisco.com/ipj
Skype: organdemo



Re: The IETF is coming to New Delhi!

2013-02-16 Thread Eric Burger
They've got us beat by 10 years. Hope they didn't register the trademark.

On Feb 16, 2013, at 8:17 PM, Ole Jacobsen  wrote:

> 
> Sorry about the late announcement:
> 
> http://www.ietfindia.in/
> 
> ... looks like it ends today, oh well.
> 
> :-)
> 
> 
> Ole J. Jacobsen
> Editor and Publisher,  The Internet Protocol Journal
> Cisco Systems
> Tel: +1 408-527-8972   Mobile: +1 415-370-4628
> E-mail: o...@cisco.com  URL: http://www.cisco.com/ipj
> Skype: organdemo
> 



draft-ruoska-encoding-00.txt

2013-02-16 Thread Bill McQuillan
Since there is no author email address in the draft, I'm sending
this to the IETF Discussion list.


Issues:

Section 2.1:
"integer idenfier" -> "integer identifier"

Section 2.1, para 2:
"Implemenations" -> "Implementations"

Section 3.1, 3rd from last para:
"These bits determines" -> "These bits determine"

Section 3.2.3, para 2:
"sub braches but braches" -> "sub branches but branches"

Section 3.2.3, para 3:
"orginal" -> "original"

Section 3.3.2:
This section should mention the format for negative numbers (2's
comp, 1's comp, signed magnitude,...)

Section 4:
No values are given for designating the 4 types of Identifier.

Section 3.4:
Definition of Extended Frame does not allow an Identifier for
*any* new data type frame. Is this reasonable?

Section 4.2, para 2:
"make document less" -> "make the document less"
"from resource point of view" -> "from a resource point of view"
"string idenfiers" -> "string identifiers"
"Downside is" -> "The downside is"
"not good in remembering" -> "not good at remembering"
"semantics bind to" -> "semantics bound to"
"maid need" -> "may need"
"a look at table" -> "a lookup table"
"integer idenfiers" -> "integer identifiers"

Section 4.3:
Ascii art diagram split across page break.

Section 5, para 2:
"Implemenations" -> "Implementations"

Section 5.1, last para:
"String indentifier" -> "String identifier"

Section 7, para 1:
"Implemenations" -> "Implementations"

Section "Author's Address":
No email address given for Jukka-Pekka Makela.

-- 
Bill McQuillan 



Re: presenting vs discussion in WG meetings (was re:Remote Participation Services)

2013-02-16 Thread joel jaeggli

On 2/16/13 12:04 AM, Brian E Carpenter wrote:

On 15/02/2013 20:57, Keith Moore wrote:
...

But this makes me realize that there's a related issue.   An expectation
that WG meetings are for presentations, leads to an expectation that
there's lots of opportunity to present suggestions for new work to do.
WG time scheduled for considering new work can actually take away time
for discussion of ongoing work.   And once the time is scheduled and
people have made commitments to travel to meetings for the purpose of
presenting new work, chairs are understandably reluctant to deny them
their allotted presentation time.
In v6ops we require that material scheduled for discussion in the 
meeting has been aired on the mailing list first. That is a rather 
good filter for which discussions about new work should be accepted. The 
time between meetings is ultimately a lot more scalable and less 
precious than the time during meetings. While there is certainly the 
opportunity for WG discussion to go where it needs to, one does not plan 
a trip to the IETF on the basis of impromptu mic time.

This is closely related to a well-known problem at academic conferences.
Many people can only get funded to travel if they are presenting a paper.
It's common practice, therefore, to have either a poster session (which
allows massively parallel presentations) or hot-topics sessions (with a
strict and very short time-limit). We tend to throw the hot-topics sessions
into WG meetings, which is not ideal.
There is literally no editorial barrier to the submission of an internet 
draft. Moreover, the typical basis for the discussion of an idea on a 
mailing list or in a WG meeting is that it exists as an internet draft.


Why not have a poster session as part of Bits-n-Bites? It would give
new ideas a chance to be seen without wasting WG time. Make it official
enough that people can use it in their travel requests.

 Brian