Hi Niraj,
Welcome to the Flink community ;)
I'm really excited that you want to contribute to our project, and since
you've asked for something in the security area, I actually have something
very concrete in mind.
We recently added some support for accessing (Kerberos) secured HDFS
clusters in
Hello,
Also, for guidelines on how to implement a graph algorithm in Gelly, you
can
use the provided examples:
https://github.com/apache/flink/tree/master/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example
Have fun!
Andra
On Thu, Feb 26, 2015 at 5:31 PM, Fabian Hueske
Hej,
I was busy with other stuff for a while but I hope I will have more time to
work on Flink and Graphs again now.
I need to do some basic analytics on a large graph set (stuff like degree
distribution, triangle count, component size distribution, etc.)
Is there anything implemented in Gelly
Hi Martin,
as a start, there is a PR with Gelly documentation:
https://github.com/vasia/flink/blob/gelly-guide/docs/gelly_guide.md
Cheers, Fabian
2015-02-26 17:12 GMT+01:00 Martin Neumann mneum...@spotify.com:
Hej,
I was busy with other stuff for a while but I hope I will have more time to
Hi,
It’s great to help out. :)
Setting 127.0.0.1 instead of “localhost” in jobmanager.rpc.address
helped establish the connection to the jobmanager. Apparently localhost
resolution differs between the webclient and the jobmanager. I think it’s good to
set jobmanager.rpc.address:
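For reference, the workaround described above would look like this in conf/flink-conf.yaml (a sketch; the port shown is just Flink's default, not a value taken from this thread):

```yaml
# Use the explicit loopback IP instead of the hostname "localhost",
# since localhost resolves differently in the webclient and the jobmanager.
jobmanager.rpc.address: 127.0.0.1
# 6123 is Flink's default RPC port; adjust if your setup differs (assumption).
jobmanager.rpc.port: 6123
```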
On 25 Feb 2015, at 16:35, Till Rohrmann trohrm...@apache.org wrote:
The reason for this behaviour is the following:
The log4j-test.properties is not a standard log4j properties file. It is
only used if it is explicitly given to the executing JVM by
-Dlog4j.configuration. The parent pom
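To illustrate how such a property is typically handed to test JVMs: a parent pom can wire it up through the surefire plugin. This is a minimal sketch; the argLine value and plugin configuration here are assumptions, not copied from Flink's actual parent pom:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Pass the test-only log4j config to the forked test JVM explicitly,
         since log4j-test.properties is not picked up by default. -->
    <argLine>-Dlog4j.configuration=log4j-test.properties</argLine>
  </configuration>
</plugin>
```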
Alexander Alexandrov created FLINK-1613:
---
Summary: Cannot submit to remote ExecutionEnvironment from IDE
Key: FLINK-1613
URL: https://issues.apache.org/jira/browse/FLINK-1613
Project: Flink
Hi Dulaj!
Thanks for helping to debug.
My guess is that you are seeing now the same thing between JobManager and
TaskManager as you saw before between JobManager and JobClient. I have a
patch pending that should help the issue (see
https://issues.apache.org/jira/browse/FLINK-1608), let's see if
Hi Flink Dev,
I am looking to contribute to Flink, especially in the area of security. In the
past, I have contributed to Pig, Hive and HDFS. I would really appreciate it if I
could get some work assigned to me. Looking forward to hearing back from the
Flink development community.
Thanks
Niraj
Thanks for clarifying Marton!
I was on the latest build already. However, my local maven repository
contained old jars. After removing all flink-jars from my local maven
repository it works!
Why does Maven not automatically update the local repository?
-Matthias
On 02/26/2015 09:20 AM,
To update the local repository, you have to execute the install goal.
I recommend always doing a mvn clean install
On Thu, Feb 26, 2015 at 10:11 AM, Matthias J. Sax
mj...@informatik.hu-berlin.de wrote:
Thanks for clarifying Marton!
I was on the latest build already. However, my local
If the streaming-examples module uses the classifier tag to add the
test-core dependency, then we should change it into a type tag as
recommended by Maven [1]. Otherwise build failures may occur if the
install lifecycle is not executed.
The dependency import should look like:
<dependency>
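Spelled out, such a test-jar dependency would presumably look like the following, using the flink-streaming-core coordinates mentioned elsewhere in this thread (the version and scope are placeholders, not taken from the actual pom):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-core</artifactId>
  <version>${project.version}</version>
  <!-- "type" (rather than "classifier") is the Maven-recommended way
       to depend on a module's test-jar. -->
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```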
Dear Matthias,
Thanks for reporting the issue. I have successfully built
flink-streaming-examples with Maven; you can depend on test classes, the
following in the pom does the trick:
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-core</artifactId>
Hi Niraj,
Thanks for your interest in Apache Flink. The quickest way is to just give
Flink a spin and figure out how it works.
This would get you started on how it works before actually doing work on Flink =)
Please do visit the Flink how-to-contribute page [1] and subscribe to the dev
mailing list [2] to
If we were to drop CDH4 / Hadoop 2.0.0-alpha, would this mean we do
not even need to shade the hadoop fat jars, or would we still need to
support 1.x?
- Henry
On Thu, Feb 26, 2015 at 8:57 AM, Robert Metzger rmetz...@apache.org wrote:
Hi,
I'm currently working on