Hello,

Distributing the data means that each processing core has its own
portion of the data to analyze.

Therefore, distributing the data reduces the amount of work each core
has to do. These computations have data dependencies -- that is why
messages are sent and received between cores.
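As a rough sketch of the idea -- illustrative Python, not Ray's actual
C++/MPI code -- each core receives its own slice of the reads and
computes on it locally; the final merge stands in for the messages that
cores exchange to satisfy data dependencies:

```python
# Hypothetical sketch (not Ray's implementation): split the reads across
# "cores", do local k-mer counting on each slice, then combine.
from collections import Counter

def split_evenly(items, num_cores):
    """Give each core a contiguous slice of the data."""
    chunk = (len(items) + num_cores - 1) // num_cores
    return [items[i * chunk:(i + 1) * chunk] for i in range(num_cores)]

def count_kmers(reads, k=3):
    """Local computation: each core only touches its own reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTAC", "GTACGT", "TTACGA", "CGTACG"]
partitions = split_evenly(reads, num_cores=2)

# Merging the partial counts plays the role of the messages sent and
# received between cores when data dependencies cross partitions.
total = Counter()
for local_counts in map(count_kmers, partitions):
    total.update(local_counts)
```

The merged result is identical to counting everything on one core; the
point of the distribution is only that each core touched less data.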

Most of the steps in Ray scale well. The seed extension step, however,
scales with the amount of data you provide; beyond that, redundant
work is done.


See p.28 of this document for a good definition of scalability:

Introduction to Parallel Computing Issues
by Prof. Laxmikant Kale
http://www.ks.uiuc.edu/Training/SumSchool/materials/lectures/6-11-Parallel-Computing/Kale.pdf
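To make the intuition concrete, here is a small illustrative
calculation of speedup and parallel efficiency using Amdahl's law. The
serial fraction used below is made up for illustration -- it is not a
measurement of Ray:

```python
# Illustrative only: classic scalability metrics with an assumed
# (made-up) serial fraction, not numbers measured from Ray.
def speedup(serial_fraction, cores):
    """Amdahl's law: speedup achievable when part of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

def efficiency(serial_fraction, cores):
    """Fraction of ideal (linear) speedup actually achieved."""
    return speedup(serial_fraction, cores) / cores

# Even a 5% serial part caps speedup well below linear as cores grow:
for p in (2, 8, 64):
    print(p, round(speedup(0.05, p), 2), round(efficiency(0.05, p), 2))
```

This is why steps with data dependencies (and therefore serial or
communication overhead) dominate scalability as the core count grows.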



Sébastien

nikos ioannidis a écrit :
> Hello,
>
> I want to ask, when ray is working in multiple nodes the point is to
> distribute memory only
> or to make the algorithm run faster as well?
>


_______________________________________________
Denovoassembler-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/denovoassembler-users
