Pulsar/Flink Error: PulsarAdminException$NotFoundException: Topic not exist

2022-05-12 Thread Jason Kania
 Hi,

I am attempting to upgrade from 1.12.7 to 1.15.0. One of the issues I am 
encountering is the following exception when attempting to submit a job from 
the command line:
switched from INITIALIZING to FAILED with failure cause:
org.apache.pulsar.client.admin.PulsarAdminException$NotFoundException: Topic not exist
    at org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:230)
    at org.apache.pulsar.client.admin.internal.TopicsImpl$7.failed(TopicsImpl.java:529)
    at org.apache.pulsar.shade.org.glassfish.jersey.client.JerseyInvocation$1.failed(JerseyInvocation.java:882)
    at org.apache.pulsar.shade.org.glassfish.jersey.client.JerseyInvocation$1.completed(JerseyInvocation.java:863)
    at org.apache.pulsar.shade.org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:229)
    at org.apache.pulsar.shade.org.glassfish.jersey.client.ClientRuntime.access$200(ClientRuntime.java:62)
    at org.apache.pulsar.shade.org.glassfish.jersey.client.ClientRuntime$2.lambda$response$0(ClientRuntime.java:173)
    at org.apache.pulsar.shade.org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.apache.pulsar.shade.org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
Unfortunately, the exception does not name the specific topic that does not
exist. Additionally, I would expect the topic to be created automatically, as
it has been in the past when using Pulsar.
Can someone confirm whether topics must now be created manually? If not, what
parameter must be set to have the topic created automatically? I am likely
missing it, but could not find it.
Thanks
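
For anyone hitting this: one possible cause is the Pulsar broker running with
allowAutoTopicCreation disabled in broker.conf, in which case topics have to
exist before the connector touches them. Below is a minimal, untested sketch
of pre-creating the topic with the Pulsar admin client; the admin URL and
topic name are placeholders, not values from this thread.

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.admin.PulsarAdminException;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        // Placeholder admin endpoint; point this at your broker's HTTP port.
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();
        try {
            // Create the topic the Flink connector expects before submitting the job.
            admin.topics().createNonPartitionedTopic("persistent://public/default/my-topic");
        } catch (PulsarAdminException.ConflictException e) {
            // Topic already exists; nothing to do.
        } finally {
            admin.close();
        }
    }
}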


  

Re: Apache Flink - Error on creating savepoints using REST interface

2020-05-25 Thread M Singh
 Hi Chesnay:
The SavepointTriggerRequestBody indicates a defaultValue for the cancel-job
attribute, so is it not being honored?
https://github.com/apache/flink/blob/release-1.6/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/savepoints/SavepointTriggerRequestBody.java

Thanks
On Saturday, May 23, 2020, 10:17:27 AM EDT, Chesnay Schepler 
 wrote:  
 
  You also have to set the boolean cancel-job parameter.
  
  On 22/05/2020 22:47, M Singh wrote:
  
 
 Hi:
I am using Flink 1.6.2 and trying to create a savepoint using the curl
command below, based on these references:
(https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_api.html)
and
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java/org/apache/flink/runtime/rest/handler/job/savepoints/SavepointHandlers.html

curl -v -H "Content-Type: application/json" -XPOST
http:///jobs//savepoints -d
'{"target-directory":"s3://mys3bucket/savepointdirectory/testMay22-sp1/"}'

But I am getting the following error:
{"errors":["Request did not match expected format SavepointTriggerRequestBody."]}

Can you please let me know what I could be missing?
Thanks

 
   

Re: Apache Flink - Error on creating savepoints using REST interface

2020-05-23 Thread Chesnay Schepler

You also have to set the boolean cancel-job parameter.
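
For example, an untested sketch of the full request; the JobManager host,
port, and job ID are placeholders:

curl -v -H "Content-Type: application/json" -XPOST \
  http://<jobmanager-host>:8081/jobs/<job-id>/savepoints \
  -d '{"target-directory":"s3://mys3bucket/savepointdirectory/testMay22-sp1/","cancel-job":false}'

Set cancel-job to true if the job should be cancelled once the savepoint
completes.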

On 22/05/2020 22:47, M Singh wrote:

Hi:

I am using Flink 1.6.2 and trying to create a savepoint using the curl
command below, based on these references:
(https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_api.html)
and
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java/org/apache/flink/runtime/rest/handler/job/savepoints/SavepointHandlers.html



curl -v -H "Content-Type: application/json" -XPOST 
http:///jobs//savepoints -d 
'{"target-directory":"s3://mys3bucket/savepointdirectory/testMay22-sp1/"}'

But I am getting the following error:
{"errors":["Request did not match expected format 
SavepointTriggerRequestBody."]}

Can you please let me know what I could be missing?
Thanks





Apache Flink - Error on creating savepoints using REST interface

2020-05-22 Thread M Singh
Hi:
I am using Flink 1.6.2 and trying to create a savepoint using the curl
command below, based on these references:
(https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_api.html)
and
https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java/org/apache/flink/runtime/rest/handler/job/savepoints/SavepointHandlers.html
curl -v -H "Content-Type: application/json" -XPOST 
http:///jobs//savepoints -d 
'{"target-directory":"s3://mys3bucket/savepointdirectory/testMay22-sp1/"}' 

But I am getting the following error:
{"errors":["Request did not match expected format 
SavepointTriggerRequestBody."]}

Can you please let me know what I could be missing?
Thanks






Re: Flink error;

2020-05-09 Thread Sivaprasanna S
It is working as expected. If I'm right, the print operator simply
calls `.toString()` on the input element. If you want to visualize your
payload in JSON format, override toString() in the `SensorData` class with
code that renders the payload as a JSON representation, using ObjectMapper,
the Lombok plugin, or something similar.
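
A minimal sketch of that, assuming Jackson is on the classpath (the
SensorData fields here are made up; substitute your own):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SensorData {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Illustrative fields only; not taken from the original post.
    public String sensorId;
    public double temperature;

    @Override
    public String toString() {
        try {
            // Serializes the POJO, e.g. {"sensorId":"s1","temperature":21.5}
            return MAPPER.writeValueAsString(this);
        } catch (JsonProcessingException e) {
            // Fall back to the default Object#toString() representation.
            return super.toString();
        }
    }
}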


On Sun, May 10, 2020 at 8:43 AM Aissa Elaffani 
wrote:

> Hello Guys,
> I hope you are well. I am trying to build a pipeline with Apache Kafka and
> Apache Flink. I am sending some data to a Kafka topic; the data is
> generated in JSON format. Then I try to consume it, so I tried to
> deserialize the message, but I think there is a problem, because when I
> want to print the deserialized results, I get some weird syntax:
> "sensors.SensorData@49820beb". I am going to show you some pictures
> from the project, and I hope you can figure it out.
>
>




Re: Flink error handling

2019-07-03 Thread Steven Nelson
We ended up using side outputs for now, basically implementing our own
map/flatMap that internally uses a ProcessFunction; a sketch of the idea is below.
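
A minimal sketch of that pattern (the parse logic and names are illustrative,
not our production code):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputErrorHandling {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("1", "2", "not-a-number", "3");

        // Anonymous subclass so Flink can extract the side output's type.
        final OutputTag<String> errorTag = new OutputTag<String>("parse-errors") {};

        // A "map" written as a ProcessFunction: failures go to the side output
        // instead of failing the job.
        SingleOutputStreamOperator<Long> parsed = input.process(
                new ProcessFunction<String, Long>() {
                    @Override
                    public void processElement(String value, Context ctx, Collector<Long> out) {
                        try {
                            out.collect(Long.parseLong(value));
                        } catch (NumberFormatException e) {
                            ctx.output(errorTag, value); // route the bad record
                        }
                    }
                });

        parsed.print();                         // good records
        parsed.getSideOutput(errorTag).print(); // failed records, e.g. for an error sink
        env.execute("side-output-error-handling");
    }
}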

Sent from my iPhone

> On Jul 3, 2019, at 6:02 AM, Halfon, Roey  wrote:
> 
> Hi,
> Do you have any progress with that?
> 
> -Original Message-
> From: Steven Nelson  
> Sent: Tuesday, June 18, 2019 7:02 PM
> To: user@flink.apache.org
> Subject: Flink error handling
> 
> 
> Hello!
> 
> We are internally having a debate on how best to handle exceptions within our
> operators. Some advocate wrapping maps/flatMaps inside a ProcessFunction
> and sending the error to a side output. Another option is returning a custom
> Either that gets filtered and mapped into different sinks.
> 
> Are there any recommendations or standard solutions for this?
> 
> -Steve


RE: Flink error handling

2019-07-03 Thread Halfon, Roey
Hi,
Do you have any progress with that?

-Original Message-
From: Steven Nelson  
Sent: Tuesday, June 18, 2019 7:02 PM
To: user@flink.apache.org
Subject: Flink error handling


Hello!

We are internally having a debate on how best to handle exceptions within our
operators. Some advocate wrapping maps/flatMaps inside a ProcessFunction
and sending the error to a side output. Another option is returning a custom
Either that gets filtered and mapped into different sinks.

Are there any recommendations or standard solutions for this?

-Steve


Flink error handling

2019-06-18 Thread Steven Nelson


Hello!

We are internally having a debate on how best to handle exceptions within our
operators. Some advocate wrapping maps/flatMaps inside a ProcessFunction
and sending the error to a side output. Another option is returning a custom
Either that gets filtered and mapped into different sinks.

Are there any recommendations or standard solutions for this?

-Steve

Re: Flink error reading file over network (Windows)

2019-01-03 Thread Chesnay Schepler

Yes, you'll need to create your own InputFormat that understands SMB.
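
Roughly along these lines; an untested skeleton assuming the classic jcifs
library (1.x API) for the SMB access, with URL, credentials, and input
splitting left out:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;
import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.core.io.GenericInputSplit;

// Non-parallel InputFormat that emits one String per line of an SMB file.
public class SmbLineInputFormat extends GenericInputFormat<String> {

    private final String smbUrl; // e.g. "smb://host/share/file.csv" (placeholder)
    private transient BufferedReader reader;
    private transient String nextLine;

    public SmbLineInputFormat(String smbUrl) {
        this.smbUrl = smbUrl;
    }

    @Override
    public void open(GenericInputSplit split) throws IOException {
        super.open(split);
        reader = new BufferedReader(new InputStreamReader(
                new SmbFileInputStream(new SmbFile(smbUrl))));
        nextLine = reader.readLine();
    }

    @Override
    public boolean reachedEnd() {
        return nextLine == null;
    }

    @Override
    public String nextRecord(String reuse) throws IOException {
        String current = nextLine;   // each record is one raw CSV line
        nextLine = reader.readLine();
        return current;
    }

    @Override
    public void close() throws IOException {
        if (reader != null) {
            reader.close();
        }
    }
}

env.createInput(new SmbLineInputFormat(url)) should then give you a
DataSet<String> whose lines you can split into fields yourself.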

On 03.01.2019 08:26, miki haiat wrote:

Hi,

I'm trying to read a CSV file from a Windows shared drive.
I tried a number of options, but I failed.

I can't find an option to use SMB, so I'm assuming that creating my own
input format is the way to achieve that?

What is the correct way to read a file from a Windows network share?

Thanks,

Miki






Flink error reading file over network (Windows)

2019-01-02 Thread miki haiat
Hi,

I'm trying to read a CSV file from a Windows shared drive.
I tried a number of options, but I failed.

I can't find an option to use SMB, so I'm assuming that creating my own
input format is the way to achieve that?

What is the correct way to read a file from a Windows network share?

Thanks,

Miki


Flink Error - Remote system has been silent for too long

2018-10-24 Thread Anil
The Flink jobs are deployed in a YARN cluster. I am seeing the following log
for some of my jobs in the JobManager. I'm using Flink 1.4. The job has
taskmanager.exit-on-fatal-akka-error=true,
but I don't see the task manager being restarted.

I made the following observations:
1. One job does a join on two Kafka topics. One of the streams didn't have
any data in the last 24 hours.
2. Two jobs have the same log in JobManager.out but are working fine, and
records are being generated.

{"debug_level":"ERROR","debug_timestamp":"2018-10-24
05:23:25,092","debug_thread":"flink-akka.actor.default-dispatcher-20","debug_file":"MarkerIgnoringBase.java",
"debug_line":"161","debug_message":"Association to
[akka.tcp://flink@ip-*-*-*-*.ap-southeast-1.compute.internal:58208] with UID
[930934199] irrecoverably failed. Quarantining address.", "job_name":
"eb99e094-74c9-4036-aa08-d379d62b7ff2" }
java.util.concurrent.TimeoutException: Remote system has been silent for too
long. (more than 48.0 hours)
at
akka.remote.ReliableDeliverySupervisor$$anonfun$idle$1.applyOrElse(Endpoint.scala:375)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at 
akka.remote.ReliableDeliverySupervisor.aroundReceive(Endpoint.scala:203)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


I read about other people having the same kind of issue, for whom the
taskmanager.exit-on-fatal-akka-error setting worked. I'm not sure why I'm
seeing this issue, and why the stream keeps working fine, without a restart,
despite the error. Will appreciate any help. Thanks!






Re: Flink Error/Exception Handling

2017-03-03 Thread Tzu-Li (Gordon) Tai
Hi Sunil,

There has recently been some effort to allow `DeserializationSchema#deserialize()`
to return `null` in cases like yours, so that an invalid record can simply be
skipped instead of throwing an exception from the deserialization schema.
Here are the related links that you may be interested in:
- JIRA: https://issues.apache.org/jira/browse/FLINK-3679
- PR: https://github.com/apache/flink/pull/3314

This means, however, that this isn't available until Flink 1.3.
For the time being, a possible workaround for dealing with invalid records is
explained by Robert in the first comment of
https://issues.apache.org/jira/browse/FLINK-3679.

Cheers,
Gordon
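
For reference, a rough sketch of what that looks like once the change is in;
MyEvent is a made-up POJO, and the AbstractDeserializationSchema import path
has moved between Flink versions:

import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;

public class SkippingDeserializationSchema
        extends AbstractDeserializationSchema<SkippingDeserializationSchema.MyEvent> {

    // Illustrative event type; replace with your own class.
    public static class MyEvent {
        public String id;
        public long timestamp;
    }

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public MyEvent deserialize(byte[] message) throws IOException {
        try {
            return MAPPER.readValue(message, MyEvent.class);
        } catch (IOException e) {
            return null; // invalid record: skipped instead of failing the job
        }
    }
}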


On March 3, 2017 at 9:15:40 PM, raikarsunil (rsunil...@gmail.com) wrote:

Hi,

Scenario:

I am reading data from Kafka. The FlinkKafkaConsumer reads the data.
There is some application-specific logic to check whether the data is
valid or invalid. When I receive an invalid message, I throw a custom
exception, and it is handled in that class. But the problem is that Flink
always retries the same invalid message, and the job keeps restarting.

Can anyone let me know how the error/exception handling can be done without
the Flink job breaking?

Thanks,
Sunil



-  
Cheers,  
Sunil Raikar  


Flink Error/Exception Handling

2017-03-03 Thread raikarsunil
Hi,

Scenario:

I am reading data from Kafka. The FlinkKafkaConsumer reads the data.
There is some application-specific logic to check whether the data is
valid or invalid. When I receive an invalid message, I throw a custom
exception, and it is handled in that class. But the problem is that Flink
always retries the same invalid message, and the job keeps restarting.

Can anyone let me know how the error/exception handling can be done without
the Flink job breaking?

Thanks,
Sunil



-
Cheers,
Sunil Raikar


Re: Flink error: Too few memory segments provided

2016-10-21 Thread otherwise777
Thank you so much, it worked immediately.





Re: Flink error: Too few memory segments provided

2016-10-21 Thread Vasiliki Kalavri
Hi,

On 21 October 2016 at 11:17, otherwise777  wrote:

> I tried increasing taskmanager.network.numberOfBuffers to 4k and later to
> 8k. I'm not sure if my configuration file is even read; it's stored inside
> my IDE as follows: http://prntscr.com/cx0vrx
> I build the Flink program from the IDE and run it. I created several config
> files in different places to see if that helped, but nothing changed in the
> error.
>

That's correct: if you're running your application through your IDE, the
config file is not read.
For passing configuration options to the local environment, please refer
to [1]. Alternatively, you can start Flink from the command line and submit
your job as a jar using the bin/flink command or using the web interface.
In that case, the configuration options that you set in flink-conf.yaml
will be taken into account. Please refer to [2] for more details.


I hope this helps!
-Vasia.


[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/local_execution.html
[2]:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/local_setup.html
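
In code, option [1] boils down to something like this untested snippet (the
buffer count is just an example value):

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

// flink-conf.yaml is ignored inside the IDE, so pass the option programmatically.
Configuration conf = new Configuration();
conf.setInteger("taskmanager.network.numberOfBuffers", 8192); // example value
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);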



>
> AFAIK I'm using Flink 1.1.2 and Gelly 1.2-SNAPSHOT; here's my pom.xml:
> http://paste.thezomg.com/19868/41341147/
> I see that the document I linked to points to an older config file; this is
> probably because it's the first hit on Google. Thanks for pointing it out.
>


Re: Flink error: Too few memory segments provided

2016-10-21 Thread otherwise777
I tried increasing taskmanager.network.numberOfBuffers to 4k and later to
8k. I'm not sure if my configuration file is even read; it's stored inside
my IDE as follows: http://prntscr.com/cx0vrx
I build the Flink program from the IDE and run it. I created several config
files in different places to see if that helped, but nothing changed in the
error.

AFAIK I'm using Flink 1.1.2 and Gelly 1.2-SNAPSHOT; here's my pom.xml:
http://paste.thezomg.com/19868/41341147/
I see that the document I linked to points to an older config file; this is
probably because it's the first hit on Google. Thanks for pointing it out.






Re: Flink error: Too few memory segments provided

2016-10-20 Thread Vasiliki Kalavri
Also pay attention to the Flink version you are using. The configuration
link you provided points to an old version (0.8). Gelly wasn't part of
Flink then :)
You probably need to look at [1].

Cheers,
-Vasia.

[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/config.html

On 20 October 2016 at 17:53, Greg Hogan  wrote:

> By default Flink only allocates 2048 network buffers (64 MiB at 32
> KiB/buffer). Have you increased the value for 
> taskmanager.network.numberOfBuffers
> in flink-conf.yaml?
>
> On Thu, Oct 20, 2016 at 11:24 AM, otherwise777 
> wrote:
>
>> I got this error in Gelly, which is a result of Flink (I believe):
>>
>> Exception in thread "main"
>> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
>>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>>     at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>> Caused by: java.lang.IllegalArgumentException: Too few memory segments
>> provided. Hash Table needs at least 33 memory segments.
>>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:206)
>>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:191)
>>     at org.apache.flink.runtime.iterative.task.IterationHeadTask.initCompactingHashTable(IterationHeadTask.java:175)
>>     at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:272)
>>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
>>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>>     at java.lang.Thread.run(Thread.java:745)
>>
>> I found a related topic:
>> http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4KJ9TB4yJ=BcNwsozbOoXwdB7HM9qvWoa1P9HK-Gb-Dg@mail.gmail.com%3E
>> But I don't think the problem is the same.
>>
>> The code is as follows:
>>
>> ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>> DataSource twitterEdges = env.readCsvFile("./datasets/out.munmun_twitter_social").fieldDelimiter(" ").ignoreComments("%").types(Long.class, Long.class);
>> Graph graph = Graph.fromTuple2DataSet(twitterEdges, new testinggraph.InitVertices(), env);
>> DataSet verticesWithCommunity = (DataSet)graph.run(new LabelPropagation(1));
>> System.out.println(verticesWithCommunity.count());
>>
>> And it has only a couple of edges.
>>
>> I tried adding a config file to the project to add a couple of settings
>> found here:
>> https://ci.apache.org/projects/flink/flink-docs-release-0.8/config.html
>> but that didn't work either.
>>
>> I have no idea how to fix this at the moment; it's not just LabelPropagation
>> that goes wrong, all Gelly methods give this exact error if they use an
>> iteration.
>>
>
>


Re: Flink error: Too few memory segments provided

2016-10-20 Thread Greg Hogan
By default Flink only allocates 2048 network buffers (64 MiB at 32
KiB/buffer). Have you increased the value for
taskmanager.network.numberOfBuffers in flink-conf.yaml?
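
For example, the flink-conf.yaml entry would look like this (8192 is just an
illustrative value):

taskmanager.network.numberOfBuffers: 8192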

On Thu, Oct 20, 2016 at 11:24 AM, otherwise777 
wrote:

> I got this error in Gelly, which is a result of Flink (I believe):
>
> Exception in thread "main"
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>     at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.IllegalArgumentException: Too few memory segments
> provided. Hash Table needs at least 33 memory segments.
>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:206)
>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:191)
>     at org.apache.flink.runtime.iterative.task.IterationHeadTask.initCompactingHashTable(IterationHeadTask.java:175)
>     at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:272)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>     at java.lang.Thread.run(Thread.java:745)
>
> I found a related topic:
> http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4KJ9TB4yJ=BcNwsozbOoXwdB7HM9qvWoa1P9HK-Gb-Dg@mail.gmail.com%3E
> But I don't think the problem is the same.
>
> The code is as follows:
>
> ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
> DataSource twitterEdges = env.readCsvFile("./datasets/out.munmun_twitter_social").fieldDelimiter(" ").ignoreComments("%").types(Long.class, Long.class);
> Graph graph = Graph.fromTuple2DataSet(twitterEdges, new testinggraph.InitVertices(), env);
> DataSet verticesWithCommunity = (DataSet)graph.run(new LabelPropagation(1));
> System.out.println(verticesWithCommunity.count());
>
> And it has only a couple of edges.
>
> I tried adding a config file to the project to add a couple of settings
> found here:
> https://ci.apache.org/projects/flink/flink-docs-release-0.8/config.html
> but that didn't work either.
>
> I have no idea how to fix this at the moment; it's not just LabelPropagation
> that goes wrong, all Gelly methods give this exact error if they use an
> iteration.
>


Flink error: Too few memory segments provided

2016-10-20 Thread otherwise777
I got this error in Gelly, which is a result of Flink (I believe):

Exception in thread "main"
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.IllegalArgumentException: Too few memory segments
provided. Hash Table needs at least 33 memory segments.
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:206)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.<init>(CompactingHashTable.java:191)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.initCompactingHashTable(IterationHeadTask.java:175)
    at org.apache.flink.runtime.iterative.task.IterationHeadTask.run(IterationHeadTask.java:272)
    at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
    at java.lang.Thread.run(Thread.java:745)

I found a related topic:
http://mail-archives.apache.org/mod_mbox/flink-dev/201503.mbox/%3CCAK5ODX4KJ9TB4yJ=BcNwsozbOoXwdB7HM9qvWoa1P9HK-Gb-Dg@mail.gmail.com%3E
But I don't think the problem is the same.

The code is as follows:

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSource twitterEdges = env.readCsvFile("./datasets/out.munmun_twitter_social").fieldDelimiter(" ").ignoreComments("%").types(Long.class, Long.class);
Graph graph = Graph.fromTuple2DataSet(twitterEdges, new testinggraph.InitVertices(), env);
DataSet verticesWithCommunity = (DataSet)graph.run(new LabelPropagation(1));
System.out.println(verticesWithCommunity.count());

And it has only a couple of edges.

I tried adding a config file to the project to add a couple of settings
found here:
https://ci.apache.org/projects/flink/flink-docs-release-0.8/config.html
but that didn't work either.

I have no idea how to fix this at the moment; it's not just LabelPropagation
that goes wrong, all Gelly methods give this exact error if they use an
iteration.




