Vijay created FLINK-34009:
-
Summary: Apache Flink: Checkpoint restoration issue on Application
Mode of deployment
Key: FLINK-34009
URL: https://issues.apache.org/jira/browse/FLINK-34009
Project: Flink
Vijay created FLINK-33944:
-
Summary: Apache Flink: Process to restore more than one job on job
manager startup from the respective savepoints
Key: FLINK-33944
URL: https://issues.apache.org/jira/browse/FLINK-33944
Vijay created FLINK-33943:
-
Summary: Apache Flink: Issues after configuring HA (using
zookeeper setting)
Key: FLINK-33943
URL: https://issues.apache.org/jira/browse/FLINK-33943
Project: Flink
Issue:
The job is in "FAILED" state, and hence Flink HA removed the job graph from
ZooKeeper along with the state.
One thing to note is that we can't completely rely on Flink HA for state restoration;
it only works as long as the job hasn't FAILED.
If you want to recover the job even after failure, you should do the following:
a) Use the Ret
till the batch size is reached)
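The truncated item (a) above most likely refers to Flink's restart strategies. As a hedged sketch, a fixed-delay restart strategy can be configured in conf/flink-conf.yaml; the attempt count and delay below are illustrative, not values from the original thread:

```yaml
# Retry the job up to 3 times, waiting 10 s between attempts,
# before the job is declared FAILED and its graph is removed from HA.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```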
Appreciate your inputs.
Thanks
Vijay
On Monday, December 16, 2019, 08:20:31 PM PST, Vijay Srinivasaraghavan wrote:
Hello,
I would like to understand options available to design an ingestion pipeline to
support the following requirements.
1) Events are coming from various sources and depending on the type of the
events it will be stored in specific Kafka topics (say we have 4 topics)
2) The events that are par
Hello,
I have a need to process events in near real-time that are generated from
various upstream sources and are currently stored in Kafka. I want to build a
pipeline that reads the data as a continuous stream, enriches the events, and
finally stores them in both ClickHouse and Kafka sinks.
To get a
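A rough sketch of such a pipeline using Flink's DataStream API. The topic names, broker address, `EnrichmentFunction`, and `ClickHouseSink` are all hypothetical placeholders (a ClickHouse sink would typically go through JDBC or a third-party connector), and the snippet assumes the Flink Kafka connector is on the classpath:

```java
// Sketch only: not a definitive implementation.
Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "kafka:9092"); // hypothetical address

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000); // checkpoint every minute for fault tolerance

// Read the events as a continuous stream from Kafka.
DataStream<String> raw = env.addSource(
    new FlinkKafkaConsumer<>("events-in", new SimpleStringSchema(), kafkaProps));

// Enrich each event; EnrichmentFunction is a hypothetical MapFunction.
DataStream<String> enriched = raw.map(new EnrichmentFunction());

// Fan out to both sinks: Kafka plus a hypothetical ClickHouse sink.
enriched.addSink(
    new FlinkKafkaProducer<>("events-enriched", new SimpleStringSchema(), kafkaProps));
enriched.addSink(new ClickHouseSink());

env.execute("enrichment-pipeline");
```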
+1 from me
Regards
Bhaskar
On Thu, Oct 31, 2019 at 11:42 AM Gyula Fóra wrote:
> +1 from me, this is a great addition to Flink!
>
> Gyula
>
> On Thu, Oct 31, 2019, 03:52 Yun Gao wrote:
>
> > +1 (non-binding)
> > Very thanks for bringing this to the community!
> >
> >
> > ---
Congratulations Becket
Regards
Bhaskar
On Tue, Oct 29, 2019 at 1:53 PM Danny Chan wrote:
> Congratulations :)
>
> Best,
> Danny Chan
> On 29 Oct 2019 at 4:14 PM +0800, dev@flink.apache.org wrote:
> >
> > Congratulations :)
>
i.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/asyncio.html).
In your implementation, did you have a need to use this API?
Regards
Vijay
On Sunday, October 13, 2019, 08:11:06 PM PDT, JingsongLee
wrote:
Hi Vijay:
I developed an append stream sink for Mongo internally,
Hello,
Do we know how much support we have for Mongo? The documentation page
points to a connector repo that is very old (last updated 5 years ago), and it
looks like that was just sample code to showcase the integration.
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/con
13 December 2018 at 5:59:34 PM, Chesnay Schepler (ches...@apache.org)
> wrote:
>
> Specifically which connector are you using, and which Flink version?
>
> On 12.12.2018 13:31, Vijay Bhaskar wrote:
> > Hi
> > We are using flink elastic sink which streams at the rate of 1000
Hi
We are using the Flink Elasticsearch sink, which streams at a rate of 1000
events/sec, as described in
https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html
.
We are observing a connection leak of Elasticsearch connections. After a few
minutes, all the open connections are exceedi
that window.
If Flink had checkpointed event-time watermarks, this problem would
not have occurred.
So, I am wondering if there is a way to enforce checkpointing of event-time
watermarks in Flink...
Vijay Kansal
Software Development Engineer
LimeRoad
On Tue, Feb 27, 2018 at 6:
Hi All
Is there a way to checkpoint event-time watermarks in Flink?
I tried searching for this, but could not figure it out...
Vijay Kansal
Software Development Engineer
LimeRoad
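Flink does not checkpoint watermarks out of the box, but one workaround is to track the current watermark yourself in checkpointed operator state. A hedged sketch (the class name and state name are illustrative, and restoring the value back into the watermark mechanism is left out):

```java
// Sketch: a ProcessFunction that snapshots the last observed watermark
// into managed operator state on every checkpoint.
public class WatermarkTracker<T> extends ProcessFunction<T, T>
        implements CheckpointedFunction {

    private transient ListState<Long> watermarkState; // checkpointed state
    private long lastWatermark = Long.MIN_VALUE;

    @Override
    public void processElement(T value, Context ctx, Collector<T> out) {
        // Observe the operator's current event-time watermark.
        lastWatermark = ctx.timerService().currentWatermark();
        out.collect(value);
    }

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        watermarkState.clear();
        watermarkState.add(lastWatermark); // persisted with each checkpoint
    }

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        watermarkState = ctx.getOperatorStateStore()
            .getListState(new ListStateDescriptor<>("last-watermark", Long.class));
        for (Long wm : watermarkState.get()) {
            lastWatermark = wm; // restored after a failure
        }
    }
}
```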
Vijay Kansal created FLINK-8470:
---
Summary: DelayTrigger and DelayAndCountTrigger in Flink Streaming
Window API
Key: FLINK-8470
URL: https://issues.apache.org/jira/browse/FLINK-8470
Project: Flink
d exclusions that were too aggressive.
Best,
Aljoscha
> On 3. Dec 2017, at 21:58, Vijay Srinivasaraghavan
> wrote:
>
> The issue is reproducible with 2.7.1 as well.
>
> My understanding is from 1.4.0 we don't include Hadoop dependencies by
> default but only when we
ependencies are getting included and I did not see the error that I have
reported earlier.
Regards,
Vijay
Sent from my iPhone
> On Dec 3, 2017, at 11:23 AM, Stephan Ewen wrote:
>
> Hmmm, this is caused by a missing dependency (javax.servlet).
> Could be that this dependency is n
Hello,
I am trying to build and run Flink from the 1.4.0-rc2 branch with Hadoop binary
2.7.0 compatibility.
Here are the steps I followed to build (I have Maven 3.3.9).
===
cd $FLINK_HOME
mvn clean install -DskipTests -Dhadoop.vers
I think it may not work in a scenario where Hadoop security is enabled and each
HCFS setup is configured differently, unless there is a way to isolate the
Hadoop configurations used in this case?
Regards,
Vijay
Sent from my iPhone
> On Aug 24, 2017, at 2:51 AM, Stephan Ewen wrote:
>
configured with one of these HCFS as state backend
store.
Hope this helps.
Regards
Vijay
On Wednesday, August 23, 2017 11:06 AM, Ted Yu wrote:
Would HDFS-6584 help with your use case ?
On Wed, Aug 23, 2017 at 11:00 AM, Vijay Srinivasaraghavan <
vijikar...@yahoo.com.invalid>
Hello,
Is it possible for a Flink cluster to use multiple HDFS repositories (HDFS-1 for
managing the Flink state backend, HDFS-2 for syncing results from user jobs)?
The scenario can be viewed in the context of running some jobs that are meant
to push the results to an archive repository (cold storage)
Hello,
I would like to know whether there are any latency requirements to consider when
choosing an appropriate state backend.
For example, if an HCFS implementation is used as the Flink state backend
(instead of stock HDFS), are there any performance implications that one needs
to be aware of?
- Frequency o
Hello,
I am seeing the below error when I try to use the ElasticsearchSink. It complains
about serialization, and it looks like it leads to the "IndexRequestBuilder"
implementation. I have tried the suggestion mentioned in
http://stackoverflow.com/questions/33246864/elasticsearch-sink-seralizability
(c
05 PM, Till Rohrmann wrote:
> Hi Zhangrucong,
>
> I don't exactly know what's needed to use the ACL of ZooKeeper. I'm pulling
> Vijay in who implemented this feature. He probably knows more about it.
>
> Cheers,
> Till
>
> On Thu, Mar 2, 2017 at 3:09 AM, Zhangru
FS) in terms of
the scenarios/use cases that need to be tested? Is there any general guidance on
this?
Regards
Vijay
On Wednesday, February 15, 2017 11:28 AM, Vijay Srinivasaraghavan
wrote:
Hello,
Regarding the Filesystem abstraction support, we are planning to use a
distributed file s
Hello,
Regarding the Filesystem abstraction support, we are planning to use a
distributed file system which complies with Hadoop Compatible File System
(HCFS) standard in place of standard HDFS.
According to the documentation
(https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals
Vijay Srinivasaraghavan created FLINK-5593:
--
Summary: Modify current dcos-flink implementation to use runit
service
Key: FLINK-5593
URL: https://issues.apache.org/jira/browse/FLINK-5593
of conflicts, and also we have
already spent cycles reviewing and fixing the rest of the authorization changes
for other layers.
Regards,
Vijay
Sent from my iPhone
> On Dec 12, 2016, at 8:43 AM, Stephan Ewen wrote:
>
> Hi Vijay!
>
> The workaround you suggest may be doable, but I a
,
Vijay
Sent from my iPhone
> On Dec 12, 2016, at 3:38 AM, Ufuk Celebi wrote:
>
> On 12 December 2016 at 12:30:31, Maximilian Michels (m...@apache.org) wrote:
>>> It seems like we lack the resources for now to properly take
>> care
>> of your pull request befo
On FLINK-3930, almost all of the feedback has been addressed. The only pending
review is the Netty cookie authorization part, in which I have moved the cookie
validation from the message level to a separate channel handler. I have just
rebased the code with master for the final review.
Regards,
Vijay
Sent
>>Secure Data Access (FLINK-3930)
The PR for the work is still under review and I hope this could be included in
the release.
Regards,
Vijay
Sent from my iPhone
> On Dec 6, 2016, at 11:51 AM, Robert Metzger wrote:
>
> UNRESOLVED Secure Data Access (FLINK-3930)
Vijay Srinivasaraghavan created FLINK-4950:
--
Summary: Add support to include multiple Yarn application entries
in Yarn properties file
Key: FLINK-4950
URL: https://issues.apache.org/jira/browse/FLINK
Vijay Srinivasaraghavan created FLINK-4919:
--
Summary: Add secure cookie support for the cluster deployed in
Mesos environment
Key: FLINK-4919
URL: https://issues.apache.org/jira/browse/FLINK-4919
Vijay Srinivasaraghavan created FLINK-4918:
--
Summary: Add SSL support to Mesos artifact server
Key: FLINK-4918
URL: https://issues.apache.org/jira/browse/FLINK-4918
Project: Flink
Vijay Srinivasaraghavan created FLINK-4826:
--
Summary: Add keytab based kerberos support to run Flink in Mesos
environment
Key: FLINK-4826
URL: https://issues.apache.org/jira/browse/FLINK-4826
results in occasional (seldom) periods of heavy restart retries, until all
files are visible to all participants.
If you run into that issue, it may be worthwhile to look at Flink 1.2-SNAPSHOT.
Best,
Stephan
On Tue, Oct 11, 2016 at 12:13 AM, Vijay Srinivasaraghavan
wrote:
Hello,
Per documentat
?
2) Does checkpoint/savepoint work properly?
Regards
Vijay
Vijay Srinivasaraghavan created FLINK-4667:
--
Summary: Yarn Session CLI not listening on correct ZK namespace
when HA is enabled to use ZooKeeper backend
Key: FLINK-4667
URL: https://issues.apache.org
Vijay Srinivasaraghavan created FLINK-4637:
--
Summary: Address Yarn proxy incompatibility with Flink Web UI when
service level authorization is enabled
Key: FLINK-4637
URL: https://issues.apache.org/jira
Vijay Srinivasaraghavan created FLINK-4635:
--
Summary: Implement Data Transfer Authentication using shared
secret configuration
Key: FLINK-4635
URL: https://issues.apache.org/jira/browse/FLINK-4635
Regards
Vijay
On Monday, September 5, 2016 7:28 AM, Maximilian Michels
wrote:
Hi Vijay,
The test fails when a NodeReport with used resources set to null is
retrieved. The test assumes that a TaskManager is always exclusively
running in one Yarn NodeManager which doesn't have to be true as o
c.Server - Auth successful for appattempt_1473023926997_0001_01 (auth:SIMPLE)
14:18:56,723 INFO org.apache.flink.yarn.YarnTestBase - Found expected output in redirected streams
14:18:56,730 INFO org.apache.flink.yarn.YARNSessionCapacitySchedulerIT
Hello,
I am seeing a "timeout" issue for one of the Yarn test cases
(YarnSessionCapacitySchedulerITCase>YarnTestBase.checkClusterEmpty) and noticed
similar references in FLINK-2213 (https://github.com/apache/flink/pull/1588)
I have tested with the latest master code. Is anyone seeing this issue?
RegardsV
Yes, I will take care of it as part of another JIRA that I am working on.
Regards
Vijay
On Wednesday, June 29, 2016 2:40 AM, Maximilian Michels
wrote:
Hi Vijay,
Glad we solved the problem.
Good catch with the FlinkYarnSessionCli. We should clean up the
properties file after running the
nkYarnSessionCli.java#L79)
is not deleted, and the test code picks up the wrong job manager address. I had
to manually delete the /tmp/.yarn-properties- file before running the test
code again.
Regards
Vijay
On Tuesday, June 28, 2016 1:53 AM, Maximilian Michels
wrote:
Hi Vijay,
Hasn
xtended command from above?
Cheers,
Aljoscha
On Mon, 27 Jun 2016 at 14:57 Vijay Srinivasaraghavan
wrote:
I am on Ubuntu 16.x, Java OpenJDK 1.8.0_91.
Can you try the below commands and see if it's working with the latest trunk code.
mvn clean verify -pl flink-yarn-tests -Pinclude-yarn-tests
-Dtest=
, Aljoscha Krettek
wrote:
Hi,
I just ran a "mvn clean verify" and it passed on my machine (latest master,
OS X El Capitan, Java 1.8.0_40, Maven 3.3.9). What's your environment?
Cheers,
Aljoscha
On Fri, 24 Jun 2016 at 16:47 Vijay Srinivasaraghavan
wrote:
I am seeing below failu
I am seeing the below failure consistently with the latest trunk code when I run
"mvn clean verify". Is anyone seeing a similar error in your environment?
Failed tests:
LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers:166
Thread Thread[initialSeedUniquifierGenerator,5,ma
ime classpath does not include the
"flink-shaded-include-yarn-tests-1.1-SNAPSHOT.jar" from
"flink-shaded-include-yarn-tests/target" location. The classpath points to
local repository jar
(/home/vijay/.m2/repository/org/apache/flink/flink-shaded-include-yarn-tests/1.1-SNAPSHOT)
Is the same true on your machine?
On Tue, Jun 21, 2016 at 5:36 PM, Vijay Srinivasaraghavan
wrote:
Hi Rob,
You need to include the below lines in the pom.xml to resolve the dependency
chain error.
org.apache.felix
maven-bundle-plugin true
true
Regards
Vijay
On Tuesda
local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced -> [Help 1]
[ERROR]
How did you do it?
Regards,
Robert
On Tue, Jun 21, 2016 at 5:06 PM, Vijay Srinivasaraghavan
wrote:
Hi Rob,
Yes I checked but the result jar does not
Hi Rob,
Yes I checked but the result jar does not contain the classes from
hadoop-minikdc package.
Regards
Vijay
On Tuesday, June 21, 2016 7:59 AM, Robert Metzger
wrote:
Hi Vijay,
did you check if the artifact produced by the "flink-shaded-include-yarn-tests"
module co
Hello,
I was trying to include "hadoop-minikdc" component to Yarn test framework by
adding the dependency in "flink-shaded-include-yarn-tests" pom.xml file.
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minikdc</artifactId>
    <version>${hadoop.version}</version>
</dependency>
The dependency inclusion seems to be working from IDE. IntelliJ picked up the
Thanks Max. I am able to run the test now.
Regards,
Vijay
Sent from my iPhone
> On Jun 14, 2016, at 6:31 AM, Maximilian Michels wrote:
>
> Hi Vijay,
>
> Please try `mvn verify -pl flink-yarn-tests -Pinclude-yarn-tests`. The
> additional profile switch will include the
yarn integration-test" did not fail, but no test case is
associated with the module.
Could someone please let me know how to run the "yarn" integration tests.
Regards
Vijay
Folks,
I am making some code changes to "flink-runtime" module. I am able to
test/debug my changes from IDE (IntelliJ) using remote debug option (test Job
Manager/Task Manager runtime/startup).
Steps followed...
1) git clone flink...
2) Import Flink maven project from IntelliJ
3) Made some code ch
h the script you tampered with and run the
debugger. Please note that if you set "suspend=y" Flink won't start until
the debugger is attached to the process. Also beware that if the machine
running Flink is far away from the remote debugger you may suffer from
increased latency whe
How do I attach a remote debugger to a running Flink cluster from IntelliJ?
I'd appreciate it if anyone could share the steps.
Regards
Vijay
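For reference, remote debugging is usually enabled by passing JDWP options to the JVM; in Flink this can go through `env.java.opts` in conf/flink-conf.yaml (the port is an arbitrary choice), after which IntelliJ attaches via a "Remote" run configuration pointed at that host and port:

```yaml
# flink-conf.yaml: start the JM/TM JVMs with a debug agent on port 5005.
# As noted later in the thread, suspend=y makes the process wait until
# the debugger attaches; suspend=n lets Flink start immediately.
env.java.opts: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
```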
oscha
> On 23 Mar 2016, at 15:28, Vijay wrote:
>
> Yes, I have updated on all cluster nodes and restarted entire cluster.
>
> Do you see any problems with the steps that I followed?
>
> Regards,
> Vijay
>
> Sent from my iPhone
>
>> On Mar 23, 2016, at 7:
Yes, I have updated on all cluster nodes and restarted entire cluster.
Do you see any problems with the steps that I followed?
Regards,
Vijay
Sent from my iPhone
> On Mar 23, 2016, at 7:18 AM, Aljoscha Krettek wrote:
>
> Hi,
> did you update the log4j.properties file on all nod
(including
RollingSink changes) to the job jar file. The modified changes include both
System.out as well as logger statements.
Updated the log4j property file to DEBUG.
Regards,
Vijay
Sent from my iPhone
> On Mar 23, 2016, at 6:48 AM, Aljoscha Krettek wrote:
>
> Hi,
> what were the steps you
I have changed the properties file but it did not help.
Regards,
Vijay
Sent from my iPhone
> On Mar 23, 2016, at 5:39 AM, Aljoscha Krettek wrote:
>
> Ok, then you should be able to change the log level to DEBUG in
> conf/log4j.properties.
>
>> On 23 Mar 2016, a
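The change Aljoscha describes is a one-line edit to conf/log4j.properties; a sketch, assuming the file appender layout that Flink 1.x shipped by default:

```properties
# conf/log4j.properties: raise verbosity from INFO to DEBUG
log4j.rootLogger=DEBUG, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
```

This must be updated on every node (as the thread goes on to confirm), since each JM/TM reads its own local copy.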
ilable write a
.valid-length file)
3. Move pending files that were part of the checkpoint to their final location
4. Clean up any leftover pending/in-progress files
Cheers,
Aljoscha
> On 22 Mar 2016, at 10:08, Vijay Srinivasaraghavan
> wrote:
>
> Hello,
> I have enabled checkpoint and I am
understand the flow during a "failover" scenario?
P.S.: Functionally the code appears to be working fine, but I am trying to
understand the underlying implementation details.
public void restoreState(BucketState state)
Regards
Vijay
If I start a Flink job on YARN with the below option, does the Flink (JM & TM)
service get killed after the job execution is complete? In other words, what is
the lifetime of the Flink service after the job is complete?
Run a single Flink job on YARN
The documentation above describes how to start a Fl
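For context, the per-job mode in question is launched with a command along these lines (the container counts, memory sizes, and jar path are illustrative for Flink 1.x); in this mode the YARN application, including the JM and TMs, is torn down once the job finishes:

```shell
# Per-job YARN deployment: the cluster lives only for this job's duration.
./bin/flink run -m yarn-cluster -yn 2 -yjm 1024 -ytm 2048 ./examples/batch/WordCount.jar
```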