On Mon, May 19, 2014 at 9:32 PM, Joshua Cranmer 🐧 <pidgeo...@gmail.com> wrote:

> On 5/19/2014 6:10 PM, Rik Cabanier wrote:
>
>> Other platforms offer an API for the number of CPUs, and they are able
>> to use it successfully (see the tens of thousands of examples on
>> GitHub). I don't see why the web platform is special here; we should
>> trust that authors can do the right thing.
>>
>
> By this argument, we should offer shared-memory multithreading to the web.
> But we don't, and I gather that we'd resist any attempts to do so. Why not?
> Because shared-memory multithreading, while extremely powerful, is a giant
> footgun to most developers, web platform or native. With the web, we have a
> chance to design APIs correctly the first time around.
>

You are building a straw man.
Let's stay on topic.


> If I may draw a more direct analogy: 20 or 25 years ago, exposing
> something like RDTSC might have seemed to make sense for more or less the
> same reasons you're giving here: it's extremely simple to implement,
> everyone uses it, the alternative (building a real monotonic clock
> source) is painful and annoying, and it's good enough. Yet on modern
> computers, RDTSC in its "simple" incarnation is completely useless:
> adaptive frequency scaling and processor migration basically guarantee
> that your results are worthless. As many people have repeatedly pointed
> out, hardwareConcurrency suffers from the same
> newer-hardware-makes-it-meaningless problem: asymmetric multicore
> systems will render the underlying principle questionable,


That is unlikely. The OS scheduler (which I assume will still exist) will
take care of that problem. In the end, more work will get done, which is
all we're after.


> and the potential rise of more exotic architectures as separate cores
> (e.g., GPGPU or specialized chip offloading) introduces new complications
> for which this kind of API is not generalizable.


True. It doesn't apply to GPU processing, but that's not something we're
trying to solve here.


> So why should we standardize on an API which in all likelihood will be
> more or less useless in 5 or 10 years, instead of an API which can be made
> useful today and still have relevance in 20 years?


I don't think anyone can predict what will happen in 20 years. Technology
shifts all the time.
I'm more interested in the problems we have today and how we can solve
them in the simplest way possible.


>> I don't really follow. Yes, this is not a very important feature, which is
>> a reason to provide a simple API that people can use as they see fit, not a
>> reason to come up with a complex one.
>>
>
> The API you propose is simple yet surprisingly meaningless. Using it
> requires assuming a lot of things about hardware and software which, quite
> simply, are only barely true today and won't be true in a few years. Yet
> the bad answers that people write on sites like StackOverflow will
> remain--and show up first in search results--for years to come. Why promote
> bad coding when you can promote good coding from the start?


Are you saying that basing a workload on the number of CPU cores is bad
coding?
If so, I think I can point you to many applications that are using this
successfully.
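
The pattern those applications use translates directly; something like this
(again with the proposed attribute, a conservative fallback, a cap, and a
hypothetical "task-worker.js"):

  // Size the pool by core count; cap it and fall back to a small constant
  // when the attribute is unavailable. "task-worker.js" is hypothetical.
  const cores = navigator.hardwareConcurrency || 2;
  const poolSize = Math.min(cores, 8);
  const pool = Array.from({ length: poolSize }, () => new Worker("task-worker.js"));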


>> I don't think this is a problem in the real world because you can just
>> spin up a large number of threads and rely on the OS scheduler to sort it
>> out.
>>
>
> Spoken like someone who has never worked on high-concurrency applications.


You should not make assumptions like this. Let's keep this thread positive.
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
