Talking yesterday at the workshop with the other committers and users, it
seems obvious that we should really get rid of the munge headache ASAP. I
wish I had a solution, though; branching probably looks best right now.
On Thursday, June 7, 2012, Avery Ching wrote:
> Regarding the netty securen
Won't this just postpone the pain?
On Thursday, June 7, 2012, David Garcia wrote:
> Based upon what you have mentioned, I think you are getting heap errors
> because every vertex in your graph will be loaded into memory prior to
> superstep one. So if you have a large graph, with lots of state
Based upon what you have mentioned, I think you are getting heap errors because
every vertex in your graph will be loaded into memory prior to superstep one.
So if you have a large graph, with lots of state, you probably have memory
issues from the very beginning. A simple way to mitigate the
Regarding the netty secureness, we can of course add this to netty (i.e.
ssl is built into netty).
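To illustrate what wiring SSL into the transport would involve: the engine setup below uses only the JDK's javax.net.ssl (no Netty dependency); handing the engine to Netty's SslHandler is shown as a comment, since the exact handler wiring in the Giraph code is an assumption here, not something from this thread.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class SslEngineSetup {
    // Builds a server-mode SSLEngine from the JDK's default SSLContext.
    // In a Netty pipeline this engine would be wrapped roughly as:
    //   pipeline.addLast("ssl", new SslHandler(engine));
    // (Real key material would be needed before an actual handshake.)
    public static SSLEngine serverEngine() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLEngine engine = ctx.createSSLEngine();
        engine.setUseClientMode(false); // server side of the RPC connection
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = serverEngine();
        System.out.println("client mode: " + engine.getUseClientMode());
    }
}
```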
Avery
On 5/31/12 10:53 AM, Jakob Homan wrote:
While I see no problem in replacing the Hadoop RPC with the Netty implementation
Avery contributed, I am not 100% sure about the implications in rel
No article or book, but here's a few tips.
1) Use aggregators! This can drastically reduce the amount of
memory used by combining messages on the server side.
2) -Dmapred.child.java.opts="-Xss128k" or some other value (should
affect the RPC threads or netty threads)
3) You'll want to minimi
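To make tip 1 concrete, here is a toy, framework-free sketch of the idea (the actual Giraph combiner API is not shown, since class names vary by version): instead of queueing every incoming message per target vertex, fold each message into a single running value, e.g. a minimum for shortest-paths-style algorithms.

```java
import java.util.HashMap;
import java.util.Map;

public class MinCombiner {
    // One combined value per target vertex instead of a list of messages.
    private final Map<Long, Double> combined = new HashMap<Long, Double>();

    // Fold an incoming message into the running minimum for its target.
    public void receive(long targetVertex, double message) {
        Double current = combined.get(targetVertex);
        if (current == null || message < current) {
            combined.put(targetVertex, message);
        }
    }

    public Double valueFor(long targetVertex) {
        return combined.get(targetVertex);
    }

    public static void main(String[] args) {
        MinCombiner c = new MinCombiner();
        c.receive(7L, 3.0);
        c.receive(7L, 1.5);
        c.receive(7L, 2.0);
        // Three messages to vertex 7 collapse into a single stored value.
        System.out.println(c.valueFor(7L));
    }
}
```

Tip 2 attacks a different cost: -Xss128k shrinks the per-thread stack, which matters when many RPC or netty threads are alive at once.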
Hi Aljoscha!
Sounds fun. Let us know how it goes!
Avery
On 6/6/12 4:45 AM, Aljoscha Krettek wrote:
Hi,
I'm one of the students of Sebastian Schelter (TU Berlin) who will
implement a graph algorithm on top of Giraph. The algorithm in
question is the effective closeness algorithm from this
I've just started playing with Giraph and also have the issue with
Eclipse and munging, though I've not looked deeply into if there is a
solution.
I'm using Hadoop 1.0.3 locally, but would also like to ensure Giraph stays
compatible with AWS Elastic MapReduce, which only uses Hadoop 0.20.x at
present.
It's certainly worth getting a survey of this. With the new RPC we're
munging mainly for the differences between 0.20 and 1+. If we drop
0.20 we can probably stop munging pretty quickly.
On Wed, Jun 6, 2012 at 6:13 AM, Paolo Castagna wrote:
> Sebastian Schelter wrote:
>> AFAIK 0.20.x is the cur
Sebastian Schelter wrote:
> AFAIK 0.20.x is the current stable version most people run on, so I
> think it would not be a good idea...
Are those 'most people' using Giraph too? Or not?
If they are, they should/could reply to this email and say so. :-)
Paolo
AFAIK 0.20.x is the current stable version most people run on, so I
think it would not be a good idea...
sebastian
On 06.06.2012 14:58, Paolo Castagna wrote:
> Hi,
> (perhaps, a stupid question but...) would it be a problem dropping Giraph
> support for Hadoop v0.20.x?
>
> If this is possible,
Hi,
(perhaps, a stupid question but...) would it be a problem dropping Giraph
support for Hadoop v0.20.x?
If this is possible, we might be able to simplify the whole munging situation
or even get rid of it, which would simplify life for some of the developers
and/or users who might want to crea
Hi,
I'm one of the students of Sebastian Schelter (TU Berlin) who will
implement a graph algorithm on top of Giraph. The algorithm in question is
the effective closeness algorithm from this paper:
http://www.cs.cmu.edu/~ukang/papers/CentralitySDM2011.pdf
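For context, effective closeness approximates classic closeness centrality, which for a vertex v on an unweighted graph can be taken as (n - 1) divided by the sum of BFS distances from v. A small exact baseline of that quantity (the adjacency-array representation is just for illustration; the paper's algorithm estimates this at scale without materializing all distances):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class Closeness {
    // Exact closeness of `source`: (n - 1) / sum of BFS distances
    // to every vertex reachable from `source`.
    static double closeness(int[][] adj, int source) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, -1);
        dist[source] = 0;
        Queue<Integer> queue = new ArrayDeque<Integer>();
        queue.add(source);
        long sum = 0;
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v : adj[u]) {
                if (dist[v] == -1) {
                    dist[v] = dist[u] + 1;
                    sum += dist[v];
                    queue.add(v);
                }
            }
        }
        return sum == 0 ? 0.0 : (n - 1) / (double) sum;
    }

    public static void main(String[] args) {
        // Path graph 0-1-2-3: distances from 0 are 1 + 2 + 3 = 6, n - 1 = 3.
        int[][] path = { {1}, {0, 2}, {1, 3}, {2} };
        System.out.println(closeness(path, 0));
    }
}
```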
Regards,
Aljoscha Krettek