Re: [Server-devel] Testing and comparing wireless setups.

2013-06-25 Thread David Farning
On Mon, Jun 24, 2013 at 6:24 PM, James Cameron qu...@laptop.org wrote:
 [Warning: this author is on coffee for the first time in months]

 On Mon, Jun 24, 2013 at 07:14:11AM -0500, David Farning wrote:
 As part of the XSCE, we would like to start offering wireless setup
 guidelines.

 Large deployments might be able to afford a site assessment. Small
 deployments are left scratching their heads and making decisions based
 on anecdotal references. This results in a bottleneck. Smart and
 motivated people are spending time, early in a deployment's lifecycle,
 on wireless, which means other critical tasks are left undone.

 This topic is challenging because it is complex. Every workload is
 different and every electromagnetic environment is different.

 My idea is to start very simply by assigning a numerical score to
 various wireless devices based on simple criteria like throughput,
 connection count, overheating, and range (a toy example of such a
 score is sketched after the list below). My _very_ naive experience
 is that, among consumer-grade devices:
 1. Some devices can handle more connections than others before they
 start dropping connections.
 2. Some devices can handle higher throughput - several kids watching
 YouTube while I run a download.
 3. Some devices overheat and reset more frequently than others.
 4. Some devices have better range than others.
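
For illustration only, the kind of rolled-up score described above might
look something like the toy sketch below; the criteria weights, the
normalisation targets, and the 1-10 scale are invented for the example,
not measured or agreed values.

    # Toy composite score for a wireless access point, purely illustrative.
    # The weights and normalisation targets below are invented for this
    # sketch; they are not measured or agreed values.

    def composite_score(max_clients, throughput_mbps, resets_per_day, range_m):
        """Roll four rough criteria into a single score from 1 to 10."""
        # Normalise each criterion to 0..1 against an assumed "good enough" target.
        clients = min(max_clients / 30.0, 1.0)            # assume 30 stable clients is the target
        throughput = min(throughput_mbps / 20.0, 1.0)     # assume 20 Mbit/s aggregate is the target
        stability = max(1.0 - resets_per_day / 5.0, 0.0)  # 5 or more resets a day scores zero
        coverage = min(range_m / 50.0, 1.0)               # assume 50 m of usable range is the target

        # Equal weights, scaled onto 1..10.
        return round(1 + 9 * (clients + throughput + stability + coverage) / 4.0, 1)

    print(composite_score(max_clients=25, throughput_mbps=12, resets_per_day=1, range_m=40))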

 I think this is overly simplistic, yet I know a simple heuristic is
 what is needed.  So I suggest coming at it from a different angle.

 Does this information seem valuable to deployments? Does the general
 approach seem sane?

 The stack is deep, so deep that anecdote can be inaccurate and
 misleading.  The phase space has a large number of dimensions.  It may
 be better to accumulate test reports so that people can form their own
 opinions.  The test report should include:

 - the wireless access point manufacturer, model, serial number, and
   firmware version,

 - the XSCE version,

 - the XO OS version, model, and wireless card,

 - the measured capability of the internet service, in terms of latency
   and bandwidth, measured with ping and wget,

 - the number of access points deployed adjacent to the XSCE,

 - the number of XO users active during the test,

 - individual user XO performance observations, in terms of latency,
   bandwidth, and packet loss, such as ping, wget, and curl POST,
   rolled up into a total performance score in the range 1 to 10
   (a rough collection script is sketched below this list).
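
For concreteness, the per-XO measurements in that list could be collected
with a small script along the lines of the sketch below: latency and
packet loss from ping, bandwidth from a timed download (wget or a curl
POST would serve the same purpose). The target host, test URL, and report
field names are placeholders made up for the example.

    #!/usr/bin/env python3
    # Rough sketch of a per-XO test-report collector.  Latency and packet
    # loss come from the ping utility, bandwidth from a timed HTTP fetch.
    # The host, URL, and field names are placeholders for this example.

    import json
    import re
    import subprocess
    import time
    import urllib.request

    PING_HOST = "8.8.8.8"                     # placeholder latency target
    TEST_URL = "http://example.com/1MB.bin"   # placeholder download target

    def ping_stats(host, count=10):
        """Return (average round-trip time in ms, packet loss in %)."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        loss = float(re.search(r"([\d.]+)% packet loss", out).group(1))
        avg = float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))
        return avg, loss

    def download_kbps(url):
        """Time one download of the test file and return kbit/s."""
        start = time.time()
        data = urllib.request.urlopen(url, timeout=60).read()
        return len(data) * 8 / 1000.0 / (time.time() - start)

    avg_ms, loss_pct = ping_stats(PING_HOST)
    report = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "ping_avg_ms": avg_ms,
        "packet_loss_pct": loss_pct,
        "download_kbps": download_kbps(TEST_URL),
    }
    print(json.dumps(report, indent=2))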

 Then, abstract from the collection of reports a list of access points,
 user counts, and total performance score.  Link each line to the
 actual report.
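
Assuming each test report were saved as a small JSON file (the directory
layout and field names below are my own guess, not an agreed format),
that roll-up could be as simple as:

    # Sketch of the roll-up step: read per-test JSON reports and print one
    # summary line per report, keeping a link back to the source file.
    # Directory layout and field names are assumptions for this example.

    import glob
    import json

    rows = []
    for path in sorted(glob.glob("reports/*.json")):
        with open(path) as f:
            r = json.load(f)
        rows.append((r["access_point"], r["active_xo_users"], r["total_score"], path))

    print("%-30s %5s %5s  %s" % ("access point", "users", "score", "report"))
    for ap, users, score, path in sorted(rows):
        print("%-30s %5d %5d  %s" % (ap, users, score, path))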

 This ensures that the claims the community make about the access
 points can be substantiated ... which benefits the community, the
 deployers, and the manufacturers of the devices.

 (With enough data, of the order of ten or so reports, the workload and
 radiofrequency environment aspects can be reduced in importance.  I
 think those aspects are better handled by site design guidelines based
 on what we find is common to all working deployments.)

 --
 James Cameron
 http://quozl.linux.org.au/


James,

Let's agree to agree on the type of data that is desirable. Yours is
better than mine :) My next question is how we collect, analyze, and
publish that data, i.e. the organization.

Our challenge, as XSCE, is that we don't _currently_ have the
knowledge or resources to implement the complete testing you propose.
As such, we need to bootstrap the process
(http://en.wikipedia.org/wiki/Bootstrapping).

As XSCE, we are more likely to add value to the ecosystem by breaking
the big problem down into a series of incremental problems which
provide a foundation for understanding and solving other problems.
Working on this series incrementally enables us to build the
credibility to attract additional knowledge and resources to work on
the issue.

So, to slightly shift the conversation: is it possible to break down
the process of getting to your endpoint into a series of iterative
steps?
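
(To make "iterative" concrete, one possible first step, offered only as
a straw man: ask deployments to record by hand the few fields from
James's list that need no tooling at all, and add the measured latency
and bandwidth numbers in a later iteration. The field names below are
invented for the example.)

    import json

    # Straw-man "iteration one" report: only the fields a deployment can
    # fill in by hand, drawn from James's list.  Field names are invented
    # here; measured latency/bandwidth would be added in a later iteration.
    minimal_report = {
        "access_point": "manufacturer / model / firmware version",
        "xsce_version": "",
        "xo_os_version": "",
        "adjacent_access_points": 0,
        "active_xo_users": 0,
        "total_score": 0,   # overall impression, 1 to 10
    }

    print(json.dumps(minimal_report, indent=2))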

--
David Farning
Activity Central: http://www.activitycentral.com
___
Server-devel mailing list
Server-devel@lists.laptop.org
http://lists.laptop.org/listinfo/server-devel

