Thiago,

I think you give application developers too much credit (present company
excluded, of course). :)  I've heard about cases where chatty applications
have brought down a wireless network because they asked "Are we there yet?"
every few milliseconds when every few minutes would have been sufficient for
the application.  If you give developers a lever, some can and will pull it
as often as they can.  It's cheap for a client with a big fat battery to
poll a device, while it's relatively expensive for a constrained device to
listen to and respond to all of those requests.
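
Just to make the "lever" concrete: the difference between a well-behaved
client and a chatty one is often nothing more than the interval a developer
types into a loop like the one below (a purely hypothetical C++ sketch, not
tied to any OIC API):

    #include <chrono>
    #include <functional>
    #include <thread>

    // Hypothetical client-side poll loop.  Nothing but the developer's
    // choice of 'interval' keeps this from hammering the device.
    void pollForever(const std::function<void()> &askAreWeThereYet,
                     std::chrono::seconds interval)
    {
        for (;;) {
            askAreWeThereYet();                     // one "Are we there yet?" request
            std::this_thread::sleep_for(interval);  // minutes, not milliseconds
        }
    }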

Regarding power, I slightly disagree that it's completely out of scope for
the OIC.  Other technologies (again, citing BTLE) have gone to great lengths
to make their protocols as efficient as possible from a bandwidth and power
perspective.  If we intend to compete with them in the "low energy" space,
we need to have a compelling solution, right?  Or do we not consider BTLE a
competitor?

So, I think we should be making all of these seemingly micro decisions with
the macro in mind, and yes, all of these little decisions will have an effect
on the bigger picture.  With that said, I'm definitely not the right person
to provide answers, but I can always be counted on to ask a lot of questions.

Mitch

-----Original Message-----
From: Thiago Macieira [mailto:[email protected]] 
Sent: Thursday, February 04, 2016 2:07 PM
To: Mitch Kettrick
Cc: 'Subramaniam, Ravi'; 'Maloor, Kishen'; oswg at openinterconnect.org;
iotivity-dev at lists.iotivity.org
Subject: Re: [oswg] Re: [dev] Default interface signaling

On Thursday, 4 February 2016 13:50:24 PST Mitch Kettrick wrote:
> Hi Thiago,
> 
> Thank you for your reply.
> 
> Regarding bandwidth and power consumption, I've never implemented a 
> protocol in HW so you may be right that sending extra payload won't have
> an effect.
> We all come to this with our direct experience and our assumptions and 
> oftentimes we're wrong no matter how right we think we are. :)

Hello Mitch

You're right, and at this point we're both speculating. In any case, the
power itself is not the issue here, so let's table it.

> Regarding the fact that in the end, the Client dictates the interface 
> used, I agree.  But, many people who write client applications won't 
> know that if the Default is oic.if.baseline, they could save power by 
> adding an oic.if.s query to their request; otherwise the Server will 
> be forced to operate in a less efficient way.
> 
> If we don't agree on giving Servers the flexibility to set their own 
> Default Interface based on the application, one solution, as I said 
> before, is to make the Interface that uses the fewest number of bits 
> as the Default Interface wherever possible to ensure that if Servers 
> have to send "the whole package" it's because the Client explicitly asked
> for it.

I agree on having multiple interfaces, and I agree on making sure that
application developers know about the more efficient ones and choose to use
them whenever applicable.

I don't agree that setting the default is a way to achieve the above. At
best, I think it has zero benefit or impact, since it will never be used in
decision-making. At worst, it's a red herring and confusing, leading to
poorly written applications failing to communicate when the default in a
device is unexpected.

(hint: add this to the certification testing; the default should *always* be
a nonsensical interface that no one should be using)
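
To make that concrete: a client that cares about efficiency should say so in
every request rather than lean on whatever the server's default happens to
be.  Roughly like the sketch below; it's from memory of the C++ client API,
so treat the exact class and method names as assumptions, and /a/light as a
made-up resource path:

    #include <memory>
    #include "OCApi.h"
    #include "OCPlatform.h"

    // 'resource' is an OC::OCResource the client discovered earlier.
    // Asking for oic.if.s explicitly (the equivalent of appending
    // ?if=oic.if.s to the request URI, e.g. /a/light?if=oic.if.s) requests
    // the small sensor view regardless of the server's default interface.
    void readSensorView(const std::shared_ptr<OC::OCResource> &resource)
    {
        OC::QueryParamsMap query;
        query["if"] = "oic.if.s";

        resource->get(query,
            [](const OC::HeaderOptions &, const OC::OCRepresentation &rep,
               int errorCode)
            {
                // handle the trimmed-down representation in 'rep' here
            });
    }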

--
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel Open Source Technology Center

