Stephan, it is exactly the same exception - UnknownHost, bla bla.
In JBoss, for example, the external IPs are also not working, only 0.0.0.0 -
this is AWS NAT.
We will proceed with VPC and then I will update you about what we get.
Thanks for your help.
On Sun, Aug 30, 2015 at 6:05 PM, Stephan Ewen wrote:
Why are the external IPs not working? Any kind of exception you can share?
On Sun, Aug 30, 2015 at 5:02 PM, Alexey Sapozhnikov
wrote:
It will not help, since the internal IPs in AWS change from time to
time and you should use only the Public IP, which is not recognizable by Flink.
That's why all app servers, for example JBoss or even Flume, use
"0.0.0.0".
On Sun, Aug 30, 2015 at 5:53 PM, Stephan Ewen wrote:
What you can do as a temporary workaround is to actually enter the IP
address for "jobmanager.rpc.address" - that circumvents the DNS.
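As a sketch, that workaround would look like this in conf/flink-conf.yaml (the address below is a placeholder; use the actual IP of the JobManager machine):

```yaml
# Placeholder IP: replace with the JobManager machine's real address.
# Using the raw IP here avoids the DNS lookup that fails on this setup.
jobmanager.rpc.address: 10.0.0.12
jobmanager.rpc.port: 6123
```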
Just saw that Akka 2.4 (to be released in the near future) apparently
introduces an option to listen on all network interfaces.
On Sun, Aug 30, 2015 at 4:44 P
Fully understand.
1. My suggestion is to drop Akka and take something else, since this issue
is really big.
2. Neither the hostname nor the endpoint is working; we are clarifying the VPC
topic now.
On Sun, Aug 30, 2015 at 5:41 PM, Stephan Ewen wrote:
Not being able to bind to 0.0.0.0 is an Akka issue. It is sometimes
annoying, but I have not found a good way around this.
The problem is that the address to bind to and the address used by others to
send messages to the node are the same. (
https://groups.google.com/forum/#!topic/akka-user/cRZmf8u_v
Hi.
First off - many thanks for your efforts and prompt help.
We will try to find out how to do this with a DNS server on the VPC.
However, the absence of "0.0.0.0" support is definitely a huge bug - just
think about the current situation: if I don't have a VPC, I can't invoke the
Flink functionality remotely on Amazon.
We
Weird, the root cause seems to be "java.net.UnknownHostException:
ip-172-36-98: unknown error"
Flink does not do anything more special than
"InetAddress.getByName(hostname)".
Is it that you can either not resolve the hostname "ip-172-36-98" (maybe
add the fully qualified domain name), or is there
From this blog post, it seems that this hostname is not resolvable:
https://holtstrom.com/michael/blog/post/401/Hostname-in-Amazon-Linux.html
Can you easily activate a DNS server in the VPC?
0.0.0.0 is not supported because of some requirements of the Akka framework.
But you should be able to use
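Flink's lookup can be reproduced outside of Flink with a few lines of plain Java. A quick sketch for checking what "InetAddress.getByName(hostname)" sees on the machine (the default hostname below is the one from the exception; the class name is illustrative):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    public static void main(String[] args) {
        // "ip-172-36-98" is the hostname from the exception above;
        // pass your own host as the first argument to test it instead.
        String host = args.length > 0 ? args[0] : "ip-172-36-98";
        try {
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolves to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // This is the same failure Flink surfaces as
            // java.net.UnknownHostException at startup.
            System.out.println(host + " is not resolvable: " + e.getMessage());
        }
    }
}
```

Running this on the EC2 instance itself shows whether the OS resolver (not Flink) is the component that fails.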
Here is the exception from the moment we tried to put the hostname of the
machine, which is ip-172-36-98, into jobmanager.rpc.address - it
looks like it doesn't recognize this address.
Why doesn't it support "0.0.0.0"?
13:43:14,805 INFO org.apache.flink.runtime.jobmanager.JobManager
-
Flink uses Akka internally, and Akka requires exact host/IP
addresses to bind to. Maybe that is the crash you see.
Having the exact exception would help.
On Sun, Aug 30, 2015 at 3:57 PM, Robert Metzger wrote:
How is Flink crashing when you start it on the Linux machine in Amazon?
Can you post the exception here?
On Sun, Aug 30, 2015 at 3:48 PM, Alexey Sapozhnikov
wrote:
Hello Stephan.
We run this Linux machine on Amazon, which, I predict, most people
will do.
We tried to put "0.0.0.0" or the Public IP of the machine - Flink crashes on
start; it doesn't recognize itself.
It is very strange that it doesn't work with 0.0.0.0 - basically this is the
way in Java to make a server listen on all interfaces.
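To illustrate the point: binding a plain Java server socket to the wildcard address makes it reachable on every network interface, which is exactly what Akka's configuration did not allow here. A minimal sketch (class name is illustrative; port 0 just lets the OS pick a free port for the demo):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class WildcardBind {
    public static void main(String[] args) throws IOException {
        // "0.0.0.0" is the wildcard address: the socket accepts connections
        // arriving on any of the machine's interfaces (internal or public IP).
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress("0.0.0.0", 0));
        System.out.println("Bound to " + server.getLocalSocketAddress());
        server.close();
    }
}
```

This is the pattern JBoss and Flume rely on; the limitation discussed in the thread is that Akka uses one configured address both for binding and for telling remote peers where to send messages.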
Do you start Flink via YARN? In that case the "jobmanager.rpc.address" is
not used, because YARN assigns containers/nodes.
If you start Flink in "standalone" mode, this should be the address of the
node that runs the JobManager. It will be used as the host/IP that Flink
binds to. The same host sho
Hello all.
Firstly - thank you for your valuable advice.
We did some very fine-tuned pinpoint tests and came to the following conclusions:
1. We run Flink for Hadoop 2.7 on Ubuntu 14.
2. Once we copied our Java client program directly to the machine and ran it
directly there, it worked very well.
The program
The output of the YARN session should look like this:
Flink JobManager is now running on quickstart.cloudera:39956
JobManager Web Interface:
http://quickstart.cloudera:8088/proxy/application_1440768826963_0005/
Number of connected TaskManagers changed to 1. Slots available: 1
On Sun, Aug 30, 2
The only thing I can think of is that you are not using the right host/port
for the JobManager.
When you start the YARN session, it should print the host where the
JobManager runs. You also need to take the port from there, as in YARN, the
port is usually not 6123. YARN starts many services on one
Hello.
Let me clarify the situation.
1. We are using Flink 0.9.0 for Hadoop 2.7. We connected it to HDFS 2.7.1.
2. Locally, our program is working: once we run Flink via ./start-local.sh,
we are able to connect and run the createRemoteEnvironment and execute
methods.
3. Due to our architecture and ba
Can you try not to manually create a "RemoteExecutionEnvironment", but to
simply use the recommended way of doing this:
Please use "ExecutionEnvironment.getExecutionEnvironment()" if you run the
program through the command line anyway.
On Fri, Aug 28, 2015 at 1:04 PM, Hanan Meyer wrote:
Hi
I'm running with a formal server IP, but for security reasons I can't share
the real IP with you.
I put "FLINK_SERVER_URL" in my post only to replace the actual IP.
Hanan Meyer
On Fri, Aug 28, 2015 at 10:27 AM, Robert Metzger
wrote:
Hi,
in the exception you've posted earlier, you can see the following root
cause:
Caused by: akka.actor.ActorNotFound: Actor not found for:
ActorSelection[Anchor(akka.tcp://flink@FLINK_SERVER_URL:6123/),
Path(/user/jobmanager)]
This string "akka.tcp://flink@FLINK_SERVER_URL:6123/" usually looks
Hi
I'm currently using Flink 0.9.0, which via Maven supports Hadoop 1.
By using flink-clients-0.7.0-hadoop2-incubating.jar with the executePlan(Plan
p) method instead, I'm getting the same exception.
Hanan
On Fri, Aug 28, 2015 at 8:35 AM, Hanan Meyer wrote:
Hi
1. I have restarted the Flink service via stop/start-local.sh - it restarted
successfully, no errors in the log folder.
2. The default Flink port is 6123.
I am getting this via the Eclipse IDE:
Thanks
org.apache.flink.client.program.ProgramInvocationException: Failed to
resolve JobManager
at org.apache.
I guess you are getting an entire exception after the
"org.apache.flink.client.program.ProgramInvocationException: Failed to
resolve JobManager".
Can you post it here to help us understand the issue?
On Thu, Aug 27, 2015 at 6:55 PM, Alexey Sapozhnikov
wrote:
Hello all.
Some clarification: locally everything works great.
However once we run our Flink on remote linux machine and try to run the
client program from our machine, using create remote environment- Flink
JobManager is raising this exception
On Thu, Aug 27, 2015 at 7:41 PM, Stephan Ewen wrote:
Please subscribe to the mailing list. All your mails are held back and need
to be manually approved.
On Thu, Aug 27, 2015 at 6:49 PM, Alexey Sapozhnikov
wrote:
On Thu, Aug 27, 2015 at 7:41 PM, Stephan Ewen wrote:
If you start the job via the "bin/flink" script, then simply use
"ExecutionEnvironment.getExecutionEnvironment()" rather than creating a
remote environment manually.
That way, hosts and ports are configured automatically.
On Thu, Aug 27, 2015 at 6:39 PM, Robert Metzger wrote:
Hi,
Which values did you use for FLINK_SERVER_URL and FLINK_PORT?
Every time you deploy Flink on YARN, the host and port change, because the
JobManager is started on a different YARN container.
On Thu, Aug 27, 2015 at 6:32 PM, Hanan Meyer wrote:
Hello All
When using the Eclipse IDE to submit Flink to a YARN single-node cluster, I'm
getting:
"org.apache.flink.client.program.ProgramInvocationException: Failed to
resolve JobManager"
Using Flink 0.9.0
The jar copies a file from one location in HDFS to another and works fine
while executed locally o