Hi Jing,
Could you also send the invite to me?
zain.hai...@retailo.co
On Mon, 6 Jun 2022 at 7:04 AM Jing Ge wrote:
> Hi Xiao,
>
> Just done, please check. Thanks!
>
> Best regards,
> Jing
>
>
> On Mon, Jun 6, 2022 at 3:59 AM Xiao Ma wrote:
>
>> Hi Jing,
>>
>> Could you please add me to the
Hi,
We are using the Table API to integrate and transform data sources and then
converting the tables to a DataStream. To inspect the data format we added a
.print() sink to the DataStream, but the .out files do not show any
output.
We do see records coming in from the metrics in the Flink UI, though.
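For context, the question maps onto the Table-to-DataStream bridge added in Flink 1.13. A minimal, hedged sketch (the query and job name are placeholders, not from this thread); note that print() writes to the stdout of whichever TaskManager runs the sink, so output lands in that machine's .out file rather than the client console:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class PrintSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        Table result = tableEnv.sqlQuery("SELECT * FROM source_table"); // placeholder query
        DataStream<Row> stream = tableEnv.toDataStream(result);

        stream.print(); // goes to the TaskManager's stdout (.out), not the client console
        env.execute("print-sink-check");
    }
}
```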
dev and user mailing
> list. As these are all "user" related questions, can you please focus them
> on the user ML only? Separating user & development (the Flink
> contributions) threads into separate lists allows the community to work more
> efficiently.
>
> Best,
> D.
Hi,
Which port does the Flink UI run on in application mode?
If I am running 5 YARN jobs in application mode, would the UI be on the same
port for each, or on a different port for each?
Hi,
I'm getting this error in YARN application mode when submitting my job.
Caused by: java.lang.ClassCastException: cannot assign instance of
org.apache.commons.collections.map.LinkedMap to field
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit
of type
Hi Folks,
I have data coming in this format:
{
  "data": {
    "oid__id": "61de4f26f01131783f162453",
    "array_coordinates": "[ { \"speed\" : \"xxx\", \"accuracy\" :
\"xxx\", \"bearing\" : \"xxx\", \"altitude\" : \"xxx\", \"longitude\" :
\"xxx\", \"latitude\" : \"xxx\",
tted job
>>> with the JobID?
>>>
>>> Best,
>>> Shengkai
>>>
>>> On Fri, May 20, 2022 at 10:23, Weihua Hu wrote:
>>>
>>>> Hi,
>>>> You can get the logs from Flink Web UI if job is running.
>>>> Best,
>>>> Weihua
>>>>
>>>> On May 19, 2022 at 10:56 PM, Zain Haider Nemati wrote:
>>>>
>>>> Hey All,
>>>> How can I check logs for my job when it is running in application mode
>>>> via YARN?
>>>>
>>>>
>>>>
tor? Simultaneously, could you leave the Kinesis Producer
> configuration settings (apart from queue limit) at their defaults? This
> will give a good baseline from which to improve.
>
>
>
> Jeremy
>
>
>
> *From: *Zain Haider Nemati
> *Date: *Wednesday, May 18, 2022
Hey All,
How can I check logs for my job when it is running in application mode via
YARN?
Hi Folks,
Would appreciate it if someone could help me out with this!
Cheers
On Thu, May 19, 2022 at 1:49 PM Zain Haider Nemati
wrote:
> Hi,
> I'm running a Flink application on a YARN cluster and it is giving me this error;
> it works fine on a standalone cluster. Any idea what could b
Hi,
I'm running a Flink application on a YARN cluster and it is giving me this
error; it works fine on a standalone cluster. Any idea what could be causing this?
Exception in thread "main" java.lang.NoSuchMethodError:
ecord every 5 seconds. This seems very low, is it
> expected?
>
> Thanks,
> Danny Cranmer
>
> On Mon, May 16, 2022 at 10:57 PM Zain Haider Nemati <
> zain.hai...@retailo.co> wrote:
>
>> Hey Danny,
>> Thanks for having a look at the issue.
>> I am using a
Hi,
We are using Flink 1.13 with a Kafka source and a Kinesis sink with
a parallelism of 3.
On submitting the job I get this error:
Could not copy native binaries to temp directory
/tmp/amazon-kinesis-producer-native-binaries
followed by "permission denied", even though all the permissions
Hi,
I am using a Kafka source with a Kinesis sink, and the rate of data coming
in is not the same as the rate of data flowing out, hence the need to
configure a relatively large queue to hold the data before backpressuring.
Which memory configuration corresponds to this that I'll need to configure?
Hi,
I'm running a job on a local Flink cluster, but the metrics (bytes
received, records received, bytes sent, backpressure) all show 0 in the
Flink UI even though data is arriving at the sink.
Do I need to configure something additional for these metrics to update in
real time?
Hi,
I am trying to run a job on my local cluster and am facing this issue.
org.apache.flink.client.program.ProgramInvocationException: The main method
caused an error: Failed to execute job 'Tracer Processor'.
at
Hi,
I'm fetching data from Kafka topics, converting it into chunks of <= 1 MB,
and sinking them to a Kinesis data stream.
The streaming job is functional; however, I see bursts of data in the
Kinesis stream with intermittent dips where the data received is 0. I'm
attaching the configuration parameters for
Hi,
I have been running a streaming job which prints data to .out files; the
file has gotten really large and is choking the root volume of
my VM. Is it OK to delete the .out files? Would that affect any other
operation or functionality?
Hi,
I have a comma-separated string of the format
x,y,z \n
a,b,c ...
I want to split the string on '\n' and insert each resulting string into
the data stream separately using *flatmap*. Any example of how to do that?
If I add the chunks into an ArrayList or List and return that from the
flatmap
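In the DataStream API a flatMap can emit one element per line through its Collector, so no intermediate list is needed. A minimal sketch of the splitting logic, shown here as a plain static method so it runs standalone (the class and method names are illustrative, not from this thread):

```java
import java.util.ArrayList;
import java.util.List;

public class LineSplitter {
    // Mirrors what a Flink FlatMapFunction<String, String> would do:
    // emit each '\n'-separated row individually instead of returning a list.
    public static List<String> splitRows(String value) {
        List<String> out = new ArrayList<>();
        for (String row : value.split("\n")) {
            if (!row.trim().isEmpty()) {
                out.add(row.trim()); // in Flink: collector.collect(row.trim());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (String row : splitRows("x,y,z\na,b,c")) {
            System.out.println(row);
        }
    }
}
```

In the actual job this logic would sit inside a `FlatMapFunction<String, String>`, calling `out.collect(row)` for each line; each collected element then flows through the stream as its own record.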
/flink/flink-docs-release-1.13/docs/connectors/datastream/kinesis/#kinesis-producer
> [2]
> https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kinesis/#kinesis-streams-sink
> [3]
> https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
>
Hi,
I am using a Kinesis sink with Flink 1.13.
The amount of data is in the millions, and it chokes on the 1 MB record cap
for Kinesis data streams.
Is there any way to send data to the Kinesis sink in batches of less than
1 MB, or any other workaround?
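Since Kinesis caps a single record at 1 MB, oversized payloads have to be split before they reach the sink. A minimal sketch of the chunking step, independent of Flink (the class name and the exact byte budget are illustrative; for text payloads you would split on record boundaries instead, to avoid cutting a multi-byte character in half):

```java
import java.util.ArrayList;
import java.util.List;

public class PayloadChunker {
    static final int MAX_BYTES = 1_000_000; // stay under the 1 MB Kinesis record cap

    // Splits a payload into byte-bounded chunks; each chunk can then be
    // emitted as its own record (e.g. from a flatMap in front of the sink).
    public static List<byte[]> chunk(byte[] payload) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += MAX_BYTES) {
            int len = Math.min(MAX_BYTES, payload.length - off);
            byte[] piece = new byte[len];
            System.arraycopy(payload, off, piece, 0, len);
            chunks.add(piece);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[2_500_000];
        System.out.println(chunk(payload).size()); // 3 chunks for 2.5 MB
    }
}
```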
Thu, 12 May 2022 at 08:41, yu'an huang wrote:
>
>> Hi,
>>
>> Your code works fine on my computer. Which Flink version are
>> you using?
>>
>>
>>
>>
>> On 12 May 2022, at 3:39 AM, Zain Haider Nemati
>> wrote:
>>
>
Hi Folks,
Getting this error when sinking data to a Firehose sink; would really
appreciate some help!
DataStream<String> inputStream = env.addSource(new
FlinkKafkaConsumer<>("xxx", new SimpleStringSchema(), properties));
Properties sinkProperties = new Properties();
Hi,
I am working on writing a Flink processor which has to send transformed
data to Redshift/S3.
I cannot find any documentation for PyFlink on how to send data to
Firehose, S3, or Redshift. Would appreciate some help here.