[jira] [Created] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration

2018-01-11 Thread flora karniav (JIRA)
flora karniav created FLINK-8414:


 Summary: Gelly performance seriously decreases when using the 
suggested parallelism configuration
 Key: FLINK-8414
 URL: https://issues.apache.org/jira/browse/FLINK-8414
 Project: Flink
  Issue Type: Bug
  Components: Configuration, Documentation, Gelly
Reporter: flora karniav
Priority: Minor


I am running Gelly examples with different datasets on a cluster of 5 machines 
(1 JobManager and 4 TaskManagers) with 32 cores each.

The number of task slots per TaskManager is set to 32 (as suggested) and 
parallelism.default to 128 (32 cores * 4 TaskManagers).
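
For reference, the relevant part of my flink-conf.yaml looks roughly like this 
(a sketch; the JobManager address is just a placeholder):

jobmanager.rpc.address: <jobmanager-host>
taskmanager.numberOfTaskSlots: 32
parallelism.default: 128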

I observe a vast performance degradation with these suggested settings compared 
to setting parallelism.default to 16, for example, where the same job completes 
in 37 seconds versus 140 seconds in the 128-parallelism case.

Is there something wrong in my configuration? Should I decrease the parallelism 
and, if so, will this inevitably decrease CPU utilization?
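
For individual runs I can also override the default parallelism without touching 
the cluster configuration, using the -p option of the CLI (the remaining 
arguments are the usual ones of the Gelly examples jar):

./bin/flink run -p 16 examples/gelly/flink-gelly-examples_*.jar --algorithm PageRank ...

where -p sets the parallelism for that job only and takes precedence over 
parallelism.default.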

Another matter that may be related to this is the number of partitions of the 
data. Is this somehow related to parallelism? How many partitions are created 
in the case of parallelism.default=128? 
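
My understanding is that in the DataSet API (on which Gelly is built) each 
parallel subtask of an operator processes one partition of its data, so the 
number of partitions effectively equals the operator parallelism. A minimal 
standalone Java sketch of what I mean (not one of the Gelly examples):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ParallelismProbe {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // With parallelism 16, each operator's data is split into 16 partitions,
        // one per parallel subtask.
        env.setParallelism(16);
        DataSet<Long> numbers = env.generateSequence(1, 1_000_000);
        System.out.println("job parallelism = " + env.getParallelism());
        System.out.println("elements = " + numbers.count()); // count() triggers execution
    }
}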





[jira] [Created] (FLINK-8403) Flink Gelly examples hanging without returning result

2018-01-10 Thread flora karniav (JIRA)
flora karniav created FLINK-8403:


 Summary: Flink Gelly examples hanging without returning result
 Key: FLINK-8403
 URL: https://issues.apache.org/jira/browse/FLINK-8403
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 1.3.2
 Environment: CentOS Linux release 7.3.1611
Reporter: flora karniav


Hello, I am currently running and measuring Flink Gelly examples (Connected 
Components and PageRank algorithms) with different SNAP datasets. When running 
with the Twitter dataset, for example 
(https://snap.stanford.edu/data/egonets-Twitter.html), which has 81,306 vertices, 
everything executes and finishes OK and I get the reported job runtime. On the 
other hand, executions with datasets that have a larger number of vertices, e.g. 
https://snap.stanford.edu/data/com-Youtube.html with 1,134,890 vertices, hang 
with no result and no reported time, even though at the same time I get "Job 
execution switched to status FINISHED."

I thought that this could be a memory issue, so I went as far as assigning 
125 GB of RAM to my TaskManagers (and JobManager), but still no luck.

The exact command I am running is:

./bin/flink run examples/gelly/flink-gelly-examples_*.jar --algorithm PageRank 
--directed false  --input_filename hdfs://sith0:9000/user/xx.txt --input CSV 
--type integer --input_field_delimiter $' ' --output print
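
To rule out the client-side print of roughly a million results as the cause, I 
could also try writing the output to HDFS instead. A sketch, assuming the 
examples jar accepts a csv output with flags mirroring the input ones (I have 
not verified the exact flag names, and the output path is illustrative):

./bin/flink run examples/gelly/flink-gelly-examples_*.jar --algorithm PageRank 
--directed false --input_filename hdfs://sith0:9000/user/xx.txt --input CSV 
--type integer --input_field_delimiter $' ' --output csv 
--output_filename hdfs://sith0:9000/user/pagerank-result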



