Hi Matthew,

> Will the 'basic' run be standard for online reporting?  The reason I ask is
> I think it would be useful to be able to gauge various hardware setups I
> don't have access to and see where things stack up.


Thanks for bringing that up. It's another reason to make benchmark
modes independent. The basic mode would use the default, unchangeable
benchmark settings that would help compare one's result with other results.
It wouldn't be a fair comparison if you could tweak your basic mode
settings to get better results, now would it?


> Side note, I can probably get Anandtech.com to use this as a standard
> benchmark for compute if there is a reproducible standard test which can be
> produced.


That would be fantastic!


> Also, if we do go that route, can we use a larger (more difficult) sparse
> test?  It completes in the blink of an eye right now.  Might be nice to run
> a few different matrices so we can see how it scales with size, like the
> other benchmarks.


The matrix it currently uses is 50 by 50 (I was doing some testing and
forgot to increase it back to a larger value).

Using multiple sizes for a single run could be a good idea.
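
As a rough sketch of what a multi-size run could look like (the helper name
is hypothetical, and the ~1k to ~10M range is taken from Karl's suggestion
further down the thread), the sizes could be generated geometrically, like
the list comprehension quoted below:

```python
def geometric_sizes(base=1.5, lo=1_000, hi=10_000_000):
    """Geometrically spaced problem sizes: dense coverage at small
    sizes, wider spacing as sizes grow (an a**x law, not a*x)."""
    sizes = []
    s = float(lo)
    while s <= hi:
        sizes.append(int(s))
        s *= base
    return sizes

print(geometric_sizes()[:3])  # [1000, 1500, 2250]
```

Each size would then be benchmarked in turn, so the report shows how
performance scales rather than a single point.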


Regards, Namik



On Tue, Aug 12, 2014 at 2:27 PM, Matthew Musto <[email protected]>
wrote:

> Will the 'basic' run be standard for online reporting?  The reason I ask
> is I think it would be useful to be able to gauge various hardware setups I
> don't have access to and see where things stack up.
>
> Side note, I can probably get Anandtech.com to use this as a standard
> benchmark for compute if there is a reproducible standard test which can be
> produced.
>
> Also, if we do go that route, can we use a larger (more difficult) sparse
> test?  It completes in the blink of an eye right now.  Might be nice to run
> a few different matrices so we can see how it scales with size, like the
> other benchmarks.
>
> Thanks,
> Matt
> On Aug 12, 2014 4:33 AM, "Karl Rupp" <[email protected]> wrote:
>
>> Hi again,
>>
>>
>> > It's actually important to have finer-grained data for small vectors,
>> > and more widely spaced points as the data grows bigger: this is why it
>> > is better to choose the sizes according to an a^x law rather than an
>> > a*x one. You can experiment with other values than 2 for a, if you
>> > want. If I were you, I'd probably go with something like:
>> > [int(1.5**x) for x in range(30,45)]
>> >
>> > That is, an increment factor of 1.5 from ~190,000 to ~55,000,000
>>
>> Hmm, 55M elements is a bit too much for the default mode; it would
>> exceed the RAM available on a bunch of mobile GPUs. I'd rather suggest
>> the range ~1k to ~10M elements so that the latency at small vector sizes
>> is also captured.
>>
>> Best regards,
>> Karli
>>
>>
>>
>> ------------------------------------------------------------------------------
>> _______________________________________________
>> ViennaCL-devel mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/viennacl-devel
>>
>
