Re: How to analyze space usage of Flink algorithms

2016-12-19 Thread otherwise777
Thank you for your reply,
I'm afraid I still don't understand it; the part I don't understand is how
to actually analyze it. It's OK if I can just analyze the system instead of
the actual job, but how would I actually do that?
I don't have any function in my program that extends RichFunction, afaik,
so how would I call getRuntimeContext() to print or store it?
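For anyone reading along: getRuntimeContext() only exists inside rich functions, so the usual route is to swap one of the job's plain functions for a rich variant. A minimal sketch, assuming the Flink 1.1 metrics API; the CountingMapper class and the counter name are invented:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Hypothetical identity mapper that exists only to expose the runtime context.
public class CountingMapper extends RichMapFunction<Long, Long> {

    private transient Counter seen;

    @Override
    public void open(Configuration parameters) {
        // The runtime context is available from open() onwards.
        seen = getRuntimeContext().getMetricGroup().counter("elementsSeen");
    }

    @Override
    public Long map(Long value) {
        seen.inc();  // reported through whatever metric reporters are configured
        return value;
    }
}
```

Inserting `.map(new CountingMapper())` at a point of interest in the plan should be enough to get the metric reported; it does not change the data flowing through.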



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-analyze-space-usage-of-Flink-algorithms-tp10555p10686.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.


Re: How to analyze space usage of Flink algorithms

2016-12-16 Thread otherwise777
Hey Fabian,

Thanks for the quick reply, 
I was looking through the Flink metrics [1], but I couldn't find anything in
there about how to analyze the environment from start to finish, only for
functions that extend RichMapFunction.

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html#list-of-all-variables



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-analyze-space-usage-of-Flink-algorithms-tp10555p10661.html


How to analyze space usage of Flink algorithms

2016-12-09 Thread otherwise777
Currently I'm doing some analysis of some algorithms that I use in Flink;
I'm interested in the space and time it takes to execute them. For the time
I used getNetRuntime() on the ExecutionEnvironment, but I have no idea how
to analyze the amount of space an algorithm uses.
Space can mean different things here, like heap space, disk space, overall
memory, or allocated memory. I would like to analyze some of these.
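For the heap part specifically, one crude but easy option (an assumption on my part, not a Flink facility: it only sees the JVM the code runs in, e.g. a local execution, and none of Flink's managed memory accounting) is to sample the JVM heap around the job with the standard management beans:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {

    // Currently used heap of this JVM, in bytes.
    public static long usedHeapBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        // ... run the algorithm here, e.g. env.execute() on a local environment ...
        long after = usedHeapBytes();
        System.out.println("heap delta in bytes: " + (after - before));
    }
}
```

Sampling before and after only gives a rough bound, since the garbage collector can run in between.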



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-analyze-space-usage-of-Flink-algorithms-tp10555.html


Gelly simple program but: java.lang.RuntimeException: Memory ran out

2016-11-30 Thread otherwise777
I have a similar problem as this topic [1]; that problem was caused by a bug
in the software, so afaik my problem isn't the same.

*The error*: Caused by: java.lang.RuntimeException: Memory ran out.
Compaction failed. numPartitions: 18 minPartition: 4 maxPartition: 5 number
of overflow segments: 54 bucketSize: 39 Overall memory: 5963776 Partition
memory: 2785280 Message: Index: 5, Size: 4
Full error [2], dataset [3], code [4]

It happens when I run a simple iteration algorithm like *PageRank*, *SSSP*, or
*label propagation*.

In the topic [1] they talk about a workaround; although it works, it
prevents other Flink features.

So far I've tried changing a bunch of configuration options in Flink and
increasing the memory allocation for Java for the job, but the error stays the
same, indicating that CompactingHashTable.java has too little memory to work
with, and I'm out of ideas about where to look for what I'm doing wrong.

[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/RuntimeException-Gelly-API-Memory-ran-out-Compaction-failed-td866.html
[2] http://paste.thezomg.com/19947/14805185/
[3] http://konect.uni-koblenz.de/networks/munmun_twitter_social
[4]
https://github.com/otherwise777/Temporal_Graph_library/blob/master/src/main/java/Tgraphs/memoryranouterror.java
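For what it's worth, the workaround discussed in [1] is to take the delta iteration's solution set out of Flink's managed memory so it lives on the JVM heap instead. If the algorithm is run as a scatter-gather iteration directly, that might look roughly like this; the class and method names follow my reading of the Gelly 1.1 API, and VertexUpdateFn/MessagingFn stand in for the algorithm's own UDFs, so treat this as a sketch rather than a confirmed fix:

```java
// Sketch of the workaround referenced in [1]: keep the delta-iteration
// solution set on the JVM heap instead of in Flink's managed memory.
ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
parameters.setSolutionSetUnmanagedMemory(true);

Graph<Long, Double, Double> result =
    graph.runScatterGatherIteration(new VertexUpdateFn(), new MessagingFn(),
                                    maxIterations, parameters);
```

The trade-off mentioned in [1] applies here too: the solution set then grows on the heap, so heap sizing becomes the limit instead.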



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Gelly-simple-program-but-java-lang-RuntimeException-Memory-ran-out-tp10389.html


Re: Executing graph algorithms on Gelly that are larger than memory

2016-11-28 Thread otherwise777
Small addition: I'm currently running the programs via my IDE, IntelliJ.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Executing-graph-algorithms-on-Gelly-that-are-larger-then-memmory-tp10358p10359.html


Executing graph algorithms on Gelly that are larger than memory

2016-11-28 Thread otherwise777
I read somewhere that Flink and Gelly should be able to handle graph
algorithms that require more space than the available memory; I'm currently
getting a Java OutOfMemoryError (heap space), and if Flink were using disk
space that wouldn't happen.
Currently my algorithms use dense graphs with 10m edges. The algorithms that
I use are on my GitHub [1]. I got the same heap space errors when using
the Gelly algorithms, and I also tried playing with the parallelism setting,
but it mostly gives the same error.

Can anyone help me with some methods on how to solve this so that I can
scale up to bigger graphs?

[1] https://github.com/otherwise777/Temporal_Graph_library/



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Executing-graph-algorithms-on-Gelly-that-are-larger-then-memmory-tp10358.html


Re: Type of TypeVariable 'K' in 'class <> could not be determined

2016-11-17 Thread otherwise777
The one that's currently in my GitHub will give you the error.

In my other file I made a really ugly workaround by adding the element to an
ArrayList as a single item.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-K-in-class-could-not-be-determined-tp10173p10184.html


Re: Type of TypeVariable 'K' in 'class <> could not be determined

2016-11-17 Thread otherwise777
Sorry, I already pushed a new update,
but in testclass.java, if you change line 266 to:
tempgraphdoubles.run(new
SingleSourceShortestTemporalPathEAT3(maxIterations)).print();

and then run testclass.java, you should get the error.




--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-K-in-class-could-not-be-determined-tp10173p10181.html


Re: Type of TypeVariable 'K' in 'class <> could not be determined

2016-11-17 Thread otherwise777
Hey Vasia,

I made this simple mapper to illustrate the problem; the file I'm working on
is here:
https://github.com/otherwise777/Temporal_Graph_library/blob/master/src/main/java/Tgraphs/SingleSourceShortestTemporalPathEATBetweenness.java

It uses a Tuple3<K, Double, ArrayList<K>>.

Anyhow, you can ignore the scatter-gather UDFs from the file that I posted.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-K-in-class-could-not-be-determined-tp10173p10177.html


Re: Type of TypeVariable 'K' in 'class <> could not be determined

2016-11-17 Thread otherwise777
Hello Timo,

The whole project is on GitHub:
https://github.com/otherwise777/Temporal_Graph_library
The TGraphAlgorithm is here:
https://github.com/otherwise777/Temporal_Graph_library/blob/master/src/main/java/Tgraphs/TGraphAlgorithm.java

I just updated Flink and Gelly to 1.1.3 with Maven, but the problem still
occurs.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-K-in-class-could-not-be-determined-tp10173p10175.html


Type of TypeVariable 'K' in 'class <> could not be determined

2016-11-17 Thread otherwise777
I get this error:

*Exception in thread "main"
org.apache.flink.api.common.functions.InvalidTypesException: Type of
TypeVariable 'K' in 'class
Tgraphs.SingleSourceShortestTemporalPathEAT3$InitVerticesMapper' could not
be determined. This is most likely a type erasure problem. The type
extraction currently supports types with generic variables only in cases
where all variables in the return type can be deduced from the input
type(s).*

When I run my function:

*tempgraphdoubles.run(new
SingleSourceShortestTemporalPathEAT3("A",maxIterations)).print();*

The function is here: http://paste.thezomg.com/19924/93864391/

I've narrowed it down to the Tuple2 on line 44; apparently it
doesn't like the K being there in a Tuple2<>.

I couldn't find out why, though, and how would I work around this
when I want to use this scenario?
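A common way around this class of InvalidTypesException, when the generic type genuinely cannot be deduced, is to state the return type explicitly at the call site with a type hint. A sketch under the assumption that K is String here; InitVerticesMapper and the tuple layout are placeholders for the real ones:

```java
// Sketch: pin down 'K' for the type extractor with an explicit hint.
DataSet<Tuple2<String, Long>> initialized = vertices
    .map(new InitVerticesMapper<String>("A"))
    .returns(TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {}));
```

The anonymous TypeHint subclass survives erasure, which is what lets the extractor recover the concrete types.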



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-K-in-class-could-not-be-determined-tp10173.html


Re: 33 segments problem with configuration set

2016-11-16 Thread otherwise777
Hello Vasia,

thank you for your fast reply,

I am aware that determining the betweenness is very demanding; however, I
still want to give it a try to a certain extent in Flink. Not using Flink
is currently not an option, since my project is partly about Flink.

I will rethink my logic; I guess it's back to the drawing board.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/33-segments-problem-with-configuration-set-tp10144p10151.html


Re: 33 segments problem with configuration set

2016-11-16 Thread otherwise777
Some additional information I just realized: it crashes on this line of code:
collectionDataSet.print();

I tried placing it inside the loop; it crashes at the 7th iteration now.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/33-segments-problem-with-configuration-set-tp10144p10149.html


33 segments problem with configuration set

2016-11-16 Thread otherwise777
Hello Community,

I'm trying to make a function to determine the betweenness of the vertices
in a graph. I'm using Gelly for this and a custom shortest-path function.
This is my input graph: http://prntscr.com/d7y51y

What I've done is use collect() on the vertex values and loop over the list
to determine the shortest path from those nodes to the rest of the nodes; in
the loop I use a union to put everything in one big DataSet called
"collectionDataSet".
Here's the code: http://paste.thezomg.com/19919/79295357/

After the 9th iteration I get the error: Too few memory segments provided.
Hash Table needs at least 33 memory segments.
I had this problem before, and it was fixed by increasing
TASK_MANAGER_NETWORK_NUM_BUFFERS_KEY. Currently that's at 16000 after
increasing it a couple of times, but the error keeps popping up after the
9th iteration.
When I don't use the union, the error doesn't pop up.

The full stack trace can be found here:
http://paste.thezomg.com/19921/79296084/

I tried different methods, like using a join or using a reduce right after the
union, but it didn't change anything about the result.
Are there other settings I need to adjust? And why exactly is this happening?





--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/33-segments-problem-with-configuration-set-tp10144.html


Re: Retrieving values from a dataset of datasets

2016-11-15 Thread otherwise777
It seems what I tried did indeed not work.
Can you explain to me why that doesn't work, though?



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-values-from-a-dataset-of-datasets-tp10108p10128.html


Retrieving values from a dataset of datasets

2016-11-14 Thread otherwise777
Hey There,

I'm trying to calculate the betweenness in a graph with Flink and Gelly; the
way I tried this was by calculating the shortest path from every node to the
rest of the nodes. This results in a DataSet of vertices which each have a
DataSet of their own with all the other vertices and their paths.

Next I used the reduce function on the inner DataSets so every inner DataSet
has one value.

Now I have a DataSet of DataSets with one value each, but how do I efficiently
transform this into a single DataSet of values? I could do a mapping on
the DataSet and use collect(), but I think that would be very costly.
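One way to avoid the nesting altogether (a sketch of an alternative shape, not of the existing code; the field layout below is invented) is to keep every path in one flat DataSet tagged with its source vertex and reduce per key, so no collect() is needed:

```java
// Flat layout: (sourceId, targetId, pathCost) for every computed path.
DataSet<Tuple3<Long, Long, Double>> allPaths = env.fromElements(
        new Tuple3<>(1L, 2L, 3.0),
        new Tuple3<>(1L, 3L, 5.0),
        new Tuple3<>(2L, 3L, 1.0));

// One value per source vertex, e.g. the summed cost, entirely inside the plan.
DataSet<Tuple2<Long, Double>> perSource = allPaths
    .<Tuple2<Long, Double>>project(0, 2)  // keep (sourceId, pathCost)
    .groupBy(0)
    .sum(1);
```

The grouped reduce plays the role of the "reduce on the inner DataSet", but on one flat DataSet instead of a DataSet of DataSets.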





--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-values-from-a-dataset-of-datasets-tp10108.html


Re: Retrieving a single element from a DataSet

2016-11-10 Thread otherwise777
Hey there,

I don't really understand what broadcast does; does it in a way export the
elements from a DataSet into a Collection? Because then it might be what I'm
looking for.

When implementing algorithms in Flink Gelly, I keep getting stuck on what I
cannot do with DataSets. For example, if I want to determine the shortest
path I need the source vertex; this can be inserted manually, but it
becomes a problem when you want to determine it for all the vertices in a
graph, which is what you need when determining certain metrics. Another
example is looping over a DataSet: I found a topic which suggests using a
mapping to loop/iterate over a DataSet, but since it needs to be
serializable, it cannot add elements to another data structure inside the
mapping, which makes it useless if you want to, for example, export some
elements to another data structure.

For the first example I could use collect(), but afaik it's more of a
workaround in Flink to use it.

About APSP: this could become a problem for me, since those are some of
the things I originally wanted to do in my project. I haven't thought about
how I'm going to implement it yet.
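On the broadcast question above: a broadcast set does not export the DataSet, but it does hand every parallel instance of another operator the full contents as an ordinary java.util.List at runtime, which is often enough for small lookup sets. A sketch with invented names (`edges`, `vertexIds`):

```java
// Sketch: make the contents of `vertexIds` readable inside a map over `edges`.
DataSet<Tuple2<Long, Long>> result = edges
    .map(new RichMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {

        private List<Long> ids;

        @Override
        public void open(Configuration parameters) {
            // Full materialized copy of the broadcast DataSet, per parallel instance.
            ids = getRuntimeContext().getBroadcastVariable("vertexIds");
        }

        @Override
        public Tuple2<Long, Long> map(Tuple2<Long, Long> edge) {
            // `ids` is a plain List here; read-only use is the intended pattern.
            return edge;
        }
    })
    .withBroadcastSet(vertexIds, "vertexIds");
```

This only scales while the broadcast set fits in each task's memory, so it suits small lookup sets rather than whole large graphs.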



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-a-single-element-from-a-DataSet-tp9731p10027.html


Re: Retrieving a single element from a DataSet

2016-11-04 Thread otherwise777
Cool, thanks for that.

I tried searching for it in the GitHub but couldn't find it; do you have the
URL by any chance?
I'm going to try to implement such an algorithm for temporal graphs.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-a-single-element-from-a-DataSet-tp9731p9894.html


Re: Looping over a DataSet and accesing another DataSet

2016-11-03 Thread otherwise777
I just found out that I am able to use arrays in tuple values; never mind that
question.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Looping-over-a-DataSet-and-accesing-another-DataSet-tp9778p9850.html


Re: Looping over a DataSet and accesing another DataSet

2016-11-02 Thread otherwise777
I did mean the iteration, yes. I currently solved the problem by rewriting the
algorithm in Gelly's Gather-Sum-Apply model; thanks for the tips.

I had another question regarding the original message, about appending items
to a list: how would I do that? Because afaik it's not possible to put a
list or an array in a Tuple element, right?





--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Looping-over-a-DataSet-and-accesing-another-DataSet-tp9778p9843.html


Re: Looping over a DataSet and accesing another DataSet

2016-10-31 Thread otherwise777
Thank you for your reply; this is new information for me.

Regarding the algorithm: I gave it a better look, and I don't think it will
work with joining. When looping over the edge set (u,v) we need to be able
to read and write A[u] and A[v]. If I join them, it will create a new
instance of that value, and it doesn't matter if it's changed in one
instance.

For example i have the following edges:
 u v
 1 2
 1 3

With vertices and values:
 1 a
 2 b
 3 c

If i join them i get:
 u v u' v'
 1 2 a b
 1 3 a c

If I loop over the joined set and change the u' value of the first instance
to "d", then in my next loop step it will be 'a'.




--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Looping-over-a-DataSet-and-accesing-another-DataSet-tp9778p9784.html


Re: Looping over a DataSet and accesing another DataSet

2016-10-31 Thread otherwise777
Thank you for your reply and explanation. I think there is one issue with
your method, though: you said that I should join the key-value
pair A on v with the edge set (u,v). This would work; however, I not
only need to access A[v] in one iteration but also A[u], so if I join on v
that won't be possible.

Did I understand that correctly?



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Looping-over-a-DataSet-and-accesing-another-DataSet-tp9778p9782.html


Looping over a DataSet and accesing another DataSet

2016-10-30 Thread otherwise777
Currently I'm trying to implement this algorithm [1], which requires me to
loop over one DataSet (the edges) and access another DataSet (the vertices).
For this loop I use a mapping (I'm not sure if this is the correct way of
looping over a DataSet), but I don't know how to access the elements of
another DataSet while I'm looping over one.

I know Gelly also has iterative support for these kinds of things, but they
loop over the vertices and not the edges.

[1] http://prntscr.com/d0qeyd
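The join-based approach discussed in the replies above can be sketched as two joins, one per endpoint; the field meanings are invented here (edges are (u, v), vertex values are (id, value)):

```java
// Join (u, v) with (id, value) on u == id, keeping (u, v, A[u]).
DataSet<Tuple3<Long, Long, String>> withU = edges
    .join(vertexValues).where(0).equalTo(0)
    .projectFirst(0, 1).projectSecond(1);

// Join again on v == id to also attach A[v], keeping (u, v, A[u], A[v]).
DataSet<Tuple4<Long, Long, String, String>> withBoth = withU
    .join(vertexValues).where(1).equalTo(0)
    .projectFirst(0, 1, 2).projectSecond(1);
```

As the later reply in this thread points out, the joined copies are snapshots: writes to A[u] are not visible in the next step unless the updated values are fed back, e.g. through a Flink iteration.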



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Looping-over-a-DataSet-and-accesing-another-DataSet-tp9778.html


Re: Retrieving a single element from a DataSet

2016-10-26 Thread otherwise777
That is indeed not the nicest way to do it, because it will create an
execution plan just to get that value, but it does work, so thanks for that.

A more concrete example of what I want:
in Gelly you have the SingleSourceShortestPaths algorithm, which requires the
sourceVertexId. Now I want to compute a metric called betweenness, which
requires me to execute the algorithm on each of the nodes. If I need the id
of every node, that means I need to collect() all those nodes as well, which
means for a graph of 1000 nodes I have 1000+ execution plans.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-a-single-element-from-a-DataSet-tp9731p9734.html


Retrieving a single element from a DataSet

2016-10-26 Thread otherwise777
I'm currently making a shortest-path algorithm in Gelly using DataSets;
here's a piece of the code I've started with:

public DataSet<Vertex<K, Long>> ShortestPathsEAT(K startingnode) {
    DataSet<Vertex<K, Long>> results = this.getVertices().distinct()
        .map(new MapFunction<Vertex<K, Long>, Vertex<K, Long>>() {
            @Override
            public Vertex<K, Long> map(Vertex<K, Long> value) throws Exception {
                // equals(), not ==, since K is a reference type
                if (startingnode.equals(value.getId())) {
                    return new Vertex<K, Long>(value.getId(), 0L);
                } else {
                    return new Vertex<K, Long>(value.getId(), Long.MAX_VALUE);
                }
            }
        });
    return results;
}

For this I need the index of the starting node. Now, I could just pass this
as a value, but I want to execute this algorithm on the first element of
my DataSet. I know I can get this element with first(1), but that function
returns a DataSet and not the actual element.

I looked for a similar question and got to the following topic [1], which
kind of gave me the feeling that it's not possible at all to retrieve a single
element from a set; is that correct?
What would be a good way to fix this issue?

[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Get-1-element-of-DataSet-td688.html




--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Retrieving-a-single-element-from-a-DataSet-tp9731.html


Re: Flink error: Too few memory segments provided

2016-10-21 Thread otherwise777
Thank you so much, it worked immediately.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-error-Too-few-memory-segments-provided-tp9657p9669.html


Re: Flink error: Too few memory segments provided

2016-10-21 Thread otherwise777
I tried increasing taskmanager.network.numberOfBuffers to 4k and later to
8k. I'm not sure if my configuration file is even read; it's stored inside
my IDE as follows: http://prntscr.com/cx0vrx
I build the Flink program from the IDE and run it. I created several config
files in different places to see if that helped, but nothing changed about
the error.

Afaik I'm using Flink 1.1.2 and Gelly 1.2-SNAPSHOT; here's my pom.xml:
http://paste.thezomg.com/19868/41341147/
I see that the document I linked to points to an older config file; this is
probably because it's the first hit on Google. Thanks for pointing it out.




--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-error-Too-few-memory-segments-provided-tp9657p9667.html


Flink error: Too few memory segments provided

2016-10-20 Thread otherwise777
I got this error in Gelly, which I believe is a result of Flink:

Exception in thread "main"
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.IllegalArgumentException: Too few memory segments
provided. Hash Table needs at least 33 memory segments.
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:206)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:191)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.initCompactingHashTable(IterationHeadTask.java:175)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:272)
    at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
    at java.lang.Thread.run(Thread.java:745)

I found a related topic:
http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4KJ9TB4yJ=bcnwsozbooxwdb7hm9qvwoa1p9hk-gb...@mail.gmail.com%3E
but I don't think the problem is the same.

The code is as follows:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSource<Tuple2<Long, Long>> twitterEdges = env
    .readCsvFile("./datasets/out.munmun_twitter_social")
    .fieldDelimiter(" ")
    .ignoreComments("%")
    .types(Long.class, Long.class);
Graph<Long, Long, NullValue> graph =
    Graph.fromTuple2DataSet(twitterEdges, new testinggraph.InitVertices(), env);
DataSet<Vertex<Long, Long>> verticesWithCommunity =
    graph.run(new LabelPropagation<>(1));
System.out.println(verticesWithCommunity.count());

And it has only a couple of edges.

I tried adding a config file to the project to set a couple of the settings
found here:
https://ci.apache.org/projects/flink/flink-docs-release-0.8/config.html,
but that didn't work either.

I have no idea how to fix this at the moment. It's not just LabelPropagation
that goes wrong; all Gelly methods give this exact error if they use an
iteration.
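When the job runs from the IDE, a flink-conf.yaml on the classpath is often not picked up at all; a more reliable way to raise the limits in a local run is to hand a Configuration to the local environment. Key names as in the Flink 1.1 ConfigConstants, and the values are examples only:

```java
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;

Configuration conf = new Configuration();
// More network buffers for the shuffle/iteration plumbing...
conf.setInteger(ConfigConstants.TASK_MANAGER_NETWORK_NUM_BUFFERS_KEY, 16384);
// ...and a fixed amount of managed memory (in MB) for hash tables and sorters.
conf.setInteger(ConfigConstants.TASK_MANAGER_MEMORY_SIZE_KEY, 2048);

ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);
```

This only affects local executions created this way; a real cluster still takes its limits from its own flink-conf.yaml.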





--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-error-Too-few-memory-segments-provided-tp9657.html


Re: question about making a temporal Graph with Gelly

2016-10-13 Thread otherwise777
Hello Greg,

So far I've added a Tuple3 to the value field of an edge, and that seems to
work. However, in the end I want to make a library on top of Gelly that
supports temporal graphs altogether. For that I want to add a temporal
edge class to use in the graph, but I didn't succeed in doing that; I made a
post about it on Stack Overflow:
http://stackoverflow.com/questions/40007325/flink-gelly-extending-edge-class-and-using-it-in-dataset

I would also like to extend the Gelly Graph, but I noticed the constructor
is private, which prevents me from doing so.
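For reference, the Tuple3-in-the-edge-value shape from the first paragraph can be spelled out like this; the (startTime, endTime, weight) meaning of the three fields is my assumption:

```java
// An edge whose value carries (startTime, endTime, weight); no Edge subclass needed.
Edge<Long, Tuple3<Long, Long, Double>> edge =
    new Edge<>(1L, 2L, new Tuple3<>(0L, 10L, 1.0));

// Graph.fromCollection over edges alone yields NullValue vertex values.
Graph<Long, NullValue, Tuple3<Long, Long, Double>> graph =
    Graph.fromCollection(Collections.singletonList(edge), env);
```

Since the temporal fields live in the generic edge value rather than a subclass, Flink's tuple serialization and Gelly's Graph construction work unchanged.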



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/question-about-making-a-temporal-Graph-with-Gelly-tp9502p9520.html