Re: datadog http reporter metrics

2020-03-16 Thread Yitzchak Lieberman
Schepler wrote: > Do you see anything in the logs? In another thread a user reported that > the datadog reporter could stop working when faced with a large number of > metrics since datadog was rejecting the report due to being too large. > > On 15/03/2020 12:22, Yitzchak Lieberman wr
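
For anyone hitting the oversized-report problem described above: later Flink releases added a chunking option to the Datadog reporter that splits one report across several HTTP requests. A config sketch; the option name is an assumption to verify against the docs for your Flink version ("dghttp" is just the reporter name used in the setup sketch further down):

    # Assumed option; present in newer Flink releases, not in 1.8.x:
    metrics.reporter.dghttp.maxMetricsPerRequest: 2000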

Re: datadog http reporter metrics

2020-03-15 Thread Yitzchak Lieberman
Anyone? On Wed, Mar 11, 2020 at 11:23 PM Yitzchak Lieberman < yitzch...@sentinelone.com> wrote: > Hi. > > Did anyone encounter a problem with sending metrics with the Datadog HTTP > reporter? My setup is Flink version 1.8.2 deployed on k8s with 1 job manager and 10 >

datadog http reporter metrics

2020-03-11 Thread Yitzchak Lieberman
Hi. Did anyone encounter a problem with sending metrics with the Datadog HTTP reporter? My setup is Flink version 1.8.2 deployed on k8s with 1 job manager and 10 task managers. On every version deploy I see metrics on my dashboard, but after a few minutes they stop being sent from all task managers
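
For context, a minimal flink-conf.yaml sketch of the setup described above, using the option names from the Flink Datadog reporter docs (the API key and tags are placeholders):

    metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
    metrics.reporter.dghttp.apikey: <your-datadog-api-key>
    metrics.reporter.dghttp.tags: env:test,owner:team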

Re: kinesis consumer metrics user variables

2019-10-07 Thread Yitzchak Lieberman
you prefer? Without the stream name and shard id you'd > end up with name clashes all over the place. > > Why can you not aggregate them? Surely Datadog supports some way to define > a wildcard when defining the tags to aggregate. > > On 03/10/2019 09:09, Yitzchak Lieberman wrote: >

kinesis consumer metrics user variables

2019-10-03 Thread Yitzchak Lieberman
Hi. I would like to have the ability to control the metric group of the Flink Kinesis consumer: as written below, it creates a metric identifier for each stream name and shard id (in our case more than 1000 metric identifiers), so they cannot be aggregated in a Datadog graph. private static
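
As an aside, a small sketch of the mechanism being discussed in the reply above, written against the generic MetricGroup API (the group keys and metric name here are illustrative, not the consumer's actual constants): addGroup(key, value) registers the value as a user variable, which a reporter such as Datadog can surface as a tag, so the metric name itself stays aggregatable:

    import org.apache.flink.metrics.MetricGroup;

    public final class ShardMetricsSketch {
        static void register(MetricGroup root, String stream, String shard) {
            // Key/value groups become user variables (tags in Datadog),
            // not part of the metric name, so values can be aggregated
            // across streams and shards:
            MetricGroup g = root
                    .addGroup("stream", stream)
                    .addGroup("shardId", shard);
            g.counter("recordsRead"); // hypothetical metric name
        }
    }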

Re: timeout error while connecting to Kafka

2019-08-25 Thread Yitzchak Lieberman
What is the topic replication factor? How many Kafka brokers do you have? I was facing the same exception when one of my brokers was down and the topic had no replica (replication_factor=1). On Sun, Aug 25, 2019 at 2:55 PM Eyal Pe'er wrote: > BTW, the exception that I see in the log is: ERROR >
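
For readers who want to check this quickly, a sketch using Kafka's AdminClient; the broker address and topic name are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class CheckReplication {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                TopicDescription desc = admin
                        .describeTopics(Collections.singleton("my-topic"))
                        .all().get().get("my-topic");
                // Each partition lists its replicas; size 1 means no redundancy,
                // so losing that one broker makes the partition unavailable.
                desc.partitions().forEach(p ->
                        System.out.println("partition " + p.partition()
                                + " replicas=" + p.replicas().size()));
            }
        }
    }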

Re: timeout exception when consuming from kafka

2019-07-28 Thread Yitzchak Lieberman
Hi. It turned out that the cause was non-replicated (replication factor = 1) topics in Kafka. On Wed, Jul 24, 2019 at 4:20 PM Yitzchak Lieberman < yitzch...@sentinelone.com> wrote: > Hi. > > Do we have an idea for this exception? > > Thanks, > Yitzchak. > > On Tue, J

Re: timeout exception when consuming from kafka

2019-07-24 Thread Yitzchak Lieberman
be interesting to know. > > Maybe Gordon (in CC) has an idea of what's going wrong here. > > Best, Fabian > > On Tue, Jul 23, 2019 at 08:50 Yitzchak Lieberman < > yitzch...@sentinelone.com> wrote: > >> Hi. >> >> Another question - what will happ

Re: timeout exception when consuming from kafka

2019-07-23 Thread Yitzchak Lieberman
Hi. Another question - what will happen during a triggered checkpoint if one of the Kafka brokers is unavailable? I will appreciate your insights. Thanks. On Mon, Jul 22, 2019 at 12:42 PM Yitzchak Lieberman < yitzch...@sentinelone.com> wrote: > Hi. > > I'm running a Flink appli

timeout exception when consuming from kafka

2019-07-22 Thread Yitzchak Lieberman
Hi. I'm running a Flink application (version 1.8.0) that uses FlinkKafkaConsumer to fetch topic data and perform transformations on the data, with the state backend set up as below: StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.enableCheckpointing(5_000,
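
The snippet above is cut off mid-call; for readers, a sketch of what a setup along these lines typically looks like. The checkpointing mode, state backend, and bucket path are assumptions for illustration, not the poster's actual values:

    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointSetupSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // A common completion of the truncated call:
            env.enableCheckpointing(5_000, CheckpointingMode.EXACTLY_ONCE);
            // State backend assumed for illustration (bucket is a placeholder):
            env.setStateBackend(new FsStateBackend("s3://some-bucket/checkpoints"));
            // ... source, transformations, sink, then env.execute(...)
        }
    }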

Re: failed checkpoint with metadata timeout exception

2019-07-18 Thread Yitzchak Lieberman
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 6 ms. On Thu, Jul 18, 2019 at 3:49 PM miki haiat wrote: > Can you share your logs > > > On Thu, Jul 18, 2019 at 3:22 PM Yitzchak Lieberman < > yitzch...@sentinelone.com> wrote: >

failed checkpoint with metadata timeout exception

2019-07-18 Thread Yitzchak Lieberman
Hi. I have a Flink application that produces to Kafka with 3 brokers. When I add 2 brokers that are not up yet, it fails the checkpoint (a key in s3) due to a timeout error. Do you know what can cause that? Thanks, Yitzchak.
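
One general observation on the "Failed to update metadata" timeout quoted in the reply above: it comes from the Kafka producer waiting on cluster metadata, which blocks for up to max.block.ms (60000 ms by default). A sketch of bounding that wait so an unreachable broker fails fast instead of stalling a checkpoint; the values are placeholders and this is a general Kafka knob, not a confirmed fix for this case:

    import java.util.Properties;

    public class ProducerTimeoutSketch {
        static Properties producerProps() {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers",
                    "broker1:9092,broker2:9092,broker3:9092"); // placeholders
            // Bound the metadata wait (default 60000 ms):
            props.setProperty("max.block.ms", "10000");
            return props; // pass to the FlinkKafkaProducer constructor
        }
    }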

Re: File Naming Pattern from HadoopOutputFormat

2019-07-02 Thread Yitzchak Lieberman
Regarding option 2 for Parquet: implementing a bucket assigner won't set the file name, as getBucketId() defines the directory for the files when partitioning the data, for example: /day=20190101/part-1-1. There is an open issue for that: https://issues.apache.org/jira/browse/FLINK-12573 On
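
For anyone following along, a minimal sketch of the bucket-assigner approach being discussed, against the Flink 1.8 API. The element type MyEvent and its getDay() accessor are hypothetical; the point is that the assigner controls only the directory (the bucket id), which is exactly the limitation tracked in FLINK-12573:

    import org.apache.flink.core.io.SimpleVersionedSerializer;
    import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;

    // MyEvent is a hypothetical element type with a getDay() accessor.
    public class DayPartitionAssigner implements BucketAssigner<MyEvent, String> {
        @Override
        public String getBucketId(MyEvent element, Context context) {
            // Determines the directory, e.g. day=20190101,
            // not the part-file name inside it.
            return "day=" + element.getDay();
        }

        @Override
        public SimpleVersionedSerializer<String> getSerializer() {
            return SimpleVersionedStringSerializer.INSTANCE;
        }
    }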

partition columns with StreamingFileSink

2019-06-19 Thread Yitzchak Lieberman
Hi. I'm using the StreamingFileSink for writing partitioned data to S3. The code is below: StreamingFileSink sink = StreamingFileSink.forBulkFormat(new Path("s3a://test-bucket/test"), ParquetAvroFactory.getParquetWriter(schema, "GZIP")) .withBucketAssigner(new
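
The code above is truncated; a sketch of the same shape using the stock ParquetAvroWriters factory. ParquetAvroFactory appears to be the poster's own helper, so the compression handling is not reproduced here, and the bucket assigner is the stock one because the message cuts off at the custom one:

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

    public class SinkSketch {
        static StreamingFileSink<GenericRecord> build(Schema schema) {
            return StreamingFileSink
                    .forBulkFormat(new Path("s3a://test-bucket/test"),
                            ParquetAvroWriters.forGenericRecord(schema))
                    // Stock time-based assigner; the original used a custom one:
                    .withBucketAssigner(new DateTimeBucketAssigner<>())
                    .build();
        }
    }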

Re: StreamingFileSink in version 1.8

2019-06-11 Thread Yitzchak Lieberman
> hdfs://xxx protocol. > > Another is that you’re in classpath hell, and your job jar contains an > older version of the Hadoop jars. > > — Ken > > > On Jun 11, 2019, at 12:16 AM, Yitzchak Lieberman < > yitzch...@sentinelone.com> wrote: > > Hi. > > I'm

StreamingFileSink in version 1.8

2019-06-11 Thread Yitzchak Lieberman
Hi. I'm a bit confused: when launching my Flink streaming application on EMR release 5.24 (which has Flink 1.8) that writes Kafka messages to S3 Parquet files, I get the exception below, but when I install Flink 1.8 on EMR myself it works. What could be the difference