Thank you Alexey for the response.
We are using Beam 2.41.0 with Spark 3.3.0 cluster.
We did not run into any issues.
Is it because in Beam 2.41.0, compatibility tests were run against Spark 3.3.0?
https://github.com/apache/beam/blob/release-2.41.0/runners/spark/3/build.gradle
If so, since
Hi John,
Can you please point us to the code where Thread-2 will be able to recreate the
state directory once the cleaner is done?
Also, we see that in https://issues.apache.org/jira/browse/KAFKA-6122, the retries
around locks are removed. Please let us know why the retry mechanism was removed?
Also can you
We are using kafka streams version 1.1.0.
We made some changes to the kafka streams code. We are observing the following
sequence of events in our production environment. We want to understand if the
same sequence of events is possible in version 1.1.0 as well.
time T0
StreamThread-1 : got assigned 0_1,
Hi John,
I see in https://github.com/apache/kafka/pull/3653
there is discussion around the swallowing of LockException and the absence of a
retry.
But dguy replied saying "The retry doesn't happen in this block of code.
It will happen the next time the runLoop executes."
But the state of the thread is
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli resolved KAFKA-6645.
---
Resolution: Information Provided
> Host Affinity to facilitate faster resta
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422107#comment-16422107
]
Giridhar Addepalli commented on KAFKA-6645:
---
[~guozhang] & [~mjsax]
Thank you for comm
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398148#comment-16398148
]
Giridhar Addepalli commented on KAFKA-6645:
---
Thank you for your reply [~mjsax]
Can you please
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli updated KAFKA-6645:
--
Description:
Since Kafka Streams applications have a lot of state in the stores
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli updated KAFKA-6645:
--
Summary: Host Affinity to facilitate faster restarts of kafka streams
applications
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli updated KAFKA-6645:
--
Summary: Sticky Partition Assignment to facilitate faster restarts of kafka
streams
[
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli updated KAFKA-6645:
--
Issue Type: New Feature (was: Bug)
> Sticky Partition Assignment across Kafka Stre
Giridhar Addepalli created KAFKA-6645:
-
Summary: Sticky Partition Assignment across Kafka Streams
application restarts
Key: KAFKA-6645
URL: https://issues.apache.org/jira/browse/KAFKA-6645
Hi,
I am a newbie to Kafka Streams.
I tried the example below:
https://github.com/confluentinc/kafka-streams-examples/blob/4.0.x/src/main/java/io/confluent/examples/streams/interactivequeries/WordCountInteractiveQueriesExample.java
http://localhost:7070/state/instances
[
{
"host": "localhost",
Hi,
Thank you for providing a comparison between Samza and Spark Streaming,
Mupd8, and Storm.
It looks like there is a new player in the field: Kafka Streams (
https://docs.confluent.io/current/streams/index.html).
It would be good to have a comparison between Samza and Kafka Streams as well.
From a high-level
Hi,
I am new to Samza.
We are evaluating using Samza in Standalone mode.
I was able to run "Hello Samza" using the Zookeeper Deployment Model, on a single
machine:
http://samza.apache.org/learn/tutorials/latest/hello-samza-high-level-zk.html
We are wondering how to run a Samza job using Zookeeper
Thank you so much Stefan.
Your reply is very helpful.
On Thu, Sep 1, 2016 at 4:09 PM, Stefan Klein <st.fankl...@gmail.com> wrote:
> Hi,
>
> 2016-09-01 12:22 GMT+02:00 Giridhar Addepalli <giridhar1...@gmail.com>:
>
> > Hi,
> >
> > Function
Hi,
The function declaration for reduce functions looks like
function(keys, values, rereduce) {
}
My question is regarding the 'keys' parameter.
Can multiple keys be passed to a single invocation of a custom reduce function?
In the Hadoop world, only one key is passed to a reduce function along with all
the
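To make the calling convention concrete, here is a sketch of a summing reduce function and how it gets invoked; the driver at the bottom is illustrative only, not CouchDB's actual code. On the first pass, keys is an array of [key, docid] pairs, one per row, so yes, multiple keys are passed to a single invocation; on rereduce, keys is null and values holds previously reduced results.

```javascript
// A typical CouchDB-style reduce function: sums its values.
// First pass: `keys` is an array of [key, docid] pairs, `values` the row values.
// Rereduce:   `keys` is null, `values` holds earlier reduce outputs.
function sumReduce(keys, values, rereduce) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Illustrative driver simulating the two-level reduction (NOT CouchDB code):
var firstPass = sumReduce([["a", "doc1"], ["b", "doc2"]], [1, 2], false); // 3
var second = sumReduce([["c", "doc3"]], [4], false);                      // 4
var total = sumReduce(null, [firstPass, second], true);                   // rereduce
console.log(total); // 7
```

This is why a CouchDB reduce must be written so that its output can be fed back in as a value, unlike a Hadoop reducer, which sees one key with all of its values exactly once.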
Hi All,
I am going through the Quorum Journal Design document.
It is mentioned in Section 2.8, the Accept Recovery RPC section:
If the current on-disk log is missing, or a *different length* than the
proposed recovery, the JN downloads the log from the provided URI,
replacing any current copy of the
Hi All,
I am a newbie to mongoose.
I am trying to have an embedded mongoose server in my C++ application.
My use case is to have an HTTP server which will look at the request and,
using an object of a class from my application (say 'myclass'), figure out
whether the request is valid and figure out
At Daemon level.
Thanks,
Giridhar.
On Fri, Jul 4, 2014 at 11:03 AM, Vijaya Narayana Reddy Bhoomi Reddy
vijay.bhoomire...@gmail.com wrote:
Vikas,
Its main use is to keep one process at a time... like one datanode at
any host - Can you please elaborate in more detail?
What is meant
Hi,
We are trying to understand the Quorum Journal Protocol (HDFS-3077).
We came across a scenario in which the active namenode was terminated and the
standby namenode took over as the new active namenode. But we could not
understand why the active namenode got terminated in the first place.
Scenario :
We have 3
.
On Wed, Jun 18, 2014 at 10:08 PM, Giridhar Addepalli giridhar1...@gmail.com
wrote:
Hi,
We are trying to understand Quorum Journal Protocol (HDFS-3077)
Came across a scenario in which active name node is terminated and standby
namenode took over as new active namenode. But we could
Hi All,
For monitoring purposes we want to know the number of workflows
submitted/finished/failed in the last 5 minutes.
We are thinking of publishing these metrics to Ganglia.
I know that there is a way to get a specified number of workflows satisfying a
given condition (say SUCCEEDED):
oozie jobs -oozie
Hi all,
Recently many jobs submitted by Oozie have been stuck in pending state for very
long times.
We have shell actions in our workflows.
Before this shell action, there is one custom synchronous action that writes an
event into a MySQL DB. This action is completing.
But the oozie-launcher's single
Hi Mahir,
do you have your job.properties at /oozie-examples/job.properties on your
local filesystem?
From the exception it looks like job.properties is not present at the path you
mentioned in your oozie job command.
Thanks,
Giridhar.
On Fri, Nov 22, 2013 at 11:22 AM, Mahin Khan
Hi All,
I have workflow with shell action.
When I kill the workflow job (via the CLI), the launcher job gets killed but not
the shell script itself.
Is there any direct/indirect way to kill the shell script too when the workflow
job gets killed?
Thanks,
Giridhar.
Hi All,
We are using version 3.3.0 of Oozie.
We are trying to run the map-reduce app from the example apps that get shipped
with Oozie.
We are getting the following error:
2013-11-18 23:20:41,156 INFO ActionStartXCommand:539 - USER[gaddepa]
GROUP[-] TOKEN[] APP[map-reduce-wf]
and it may
succeed.
Thanks,
Puru.
On 11/6/13 8:12 PM, Giridhar Addepalli giridhar1...@gmail.com wrote:
Hi All,
Is there any way to increase the length of strings that can be used in a
workflow definition?
Thanks,
Giridhar.
On Tue, Nov 5, 2013 at 4:58 PM, Giridhar
Hi All,
Is there any way to increase the length of strings that can be used in a
workflow definition?
Thanks,
Giridhar.
On Tue, Nov 5, 2013 at 4:58 PM, Giridhar Addepalli
giridhar1...@gmail.comwrote:
Hi All,
I have a very long string (of length 69390 bytes) in a workflow definition
file
Hi All,
I have a very long string (of length 69390 bytes) in a workflow definition
file.
OozieClient is throwing an exception when I try to submit this workflow.
E0803 : E0803: IO error, java.lang.RuntimeException:
java.io.UTFDataFormatException: encoded string too long: 69390 bytes
at
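The limit here comes from Java serialization rather than from Oozie itself: DataOutputStream.writeUTF stores the encoded string length in an unsigned 16-bit prefix, so any string whose modified-UTF-8 encoding exceeds 65535 bytes throws UTFDataFormatException. A standalone sketch reproducing the failure (the class and method names are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;

public class WriteUtfLimit {

    // Returns the UTFDataFormatException message writeUTF produces for an
    // over-long string, or null when the string fits the 16-bit length field.
    static String tryWriteUtf(String s) throws IOException {
        DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
        try {
            out.writeUTF(s); // length prefix is an unsigned 16-bit field (max 65535)
            return null;
        } catch (UTFDataFormatException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) throws IOException {
        String big = "a".repeat(69390); // 69390 ASCII chars -> 69390 encoded bytes
        System.out.println(tryWriteUtf(big)); // mirrors the Oozie error message
    }
}
```

So any workflow string that must round-trip through writeUTF has a hard ceiling of 65535 encoded bytes, regardless of Oozie configuration.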
Giridhar Addepalli created OOZIE-1571:
-
Summary: Getting Optimistic Lock Exceptions when trying to query
for status of Job using client api
Key: OOZIE-1571
URL: https://issues.apache.org/jira/browse/OOZIE
[
https://issues.apache.org/jira/browse/OOZIE-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Giridhar Addepalli updated OOZIE-1571:
--
Attachment: oozie.log-2013-09-22-15
Please follow the logs created for the oozie job
Hi Mohammad,
Please find attached oozie.log
Please let me know if you need further information.
Thanks,
Giridhar.
,
Mohammad
From: Giridhar Addepalli giridhar1...@gmail.com
To: user@oozie.apache.org
Sent: Monday, September 23, 2013 2:19 AM
Subject: Occasional exceptions while polling for job status using java
client API
http://archive.cloudera.com/cdh/3/oozie
Hi,
I am trying to add an arbitrary parameter to the jobconf, i.e., all the MR
jobs started by Oozie actions should have that parameter in their JobConf.
I tried the below two approaches:
1) oozieClient.createConfiguration().setProperty("firstname", "Giridhar")
2) on the command line -D
.,
mapred.job.firstname, Giridhar.
On Thu, Sep 19, 2013 at 4:22 PM, Giridhar Addepalli
giridhar1...@gmail.comwrote:
Hi,
I am trying to add some arbitrary parameter to jobconf , i.e; all the MR
jobs started by oozie actions should have that parameter in their
JobConf.
I tried using below two
Hi,
I am following the below steps (as part of an RPM) to add a custom action
executor to the Oozie server:
step 1) stop the Oozie server
step 2) modify oozie-site.xml using a shell script to add the new custom action
executor and xsd file
step 3) call oozie-setup.sh with the new custom action executor jar
step 4) start
Hi All,
Please let me know if you need more information from my side.
Thanks,
Giridhar.
On Mon, Aug 19, 2013 at 5:59 PM, Giridhar Addepalli
giridhar1...@gmail.comwrote:
Hi All,
I am trying to create a custom action.
I followed the steps on InfoQ.
Here is my xsd for the new action:
xs:schema
Hi All,
I am trying to create a custom action.
I followed the steps on InfoQ.
Here is my xsd for the new action:
xs:schema xmlns:xs=http://www.w3.org/2001/XMLSchema;
xmlns:publishhamakeevent=uri:oozie:publishhamakeevent-action:0.1
elementFormDefault=qualified
Hi,
As of now, the primary namenode and secondary namenode are running on the
same machine in our configuration.
As both are RAM-heavy processes, we want to move the secondary namenode to
another machine in the cluster.
What does this move involve?
Please refer me to an article which
Hi,
How do I generate CRCs for files on HDFS?
I copied files from HDFS to a remote machine, and I want to verify the integrity
of the files (using the copyToLocal command; I tried the -crc option too,
but it looks like the CRC files do not exist on HDFS).
How should I proceed?
Thanks,
Giridhar.
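One workaround, independent of HDFS's internal per-block checksum files, is to compute your own checksum on both copies after the transfer and compare the values. A minimal sketch using the JDK's CRC32 (the FileCrc class name is made up for illustration; this is not HDFS's own checksum mechanism):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class FileCrc {

    // Computes a CRC32 over a stream's contents. Run this over the original
    // file (e.g. streamed out of HDFS) and over the copied file, then compare.
    public static long crc32(InputStream in) throws IOException {
        CheckedInputStream cin = new CheckedInputStream(in, new CRC32());
        byte[] buf = new byte[8192];
        while (cin.read(buf) != -1) {
            // just drain the stream; CheckedInputStream updates the CRC
        }
        return cin.getChecksum().getValue();
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            System.out.printf("%s: %08x%n", args[0], crc32(in));
        }
    }
}
```

If both sides produce the same CRC32, the copy is byte-identical with very high probability; for stronger guarantees an MD5 or SHA digest can be substituted the same way.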
Hi,
The setup() method, present in the mapreduce API, is called once at the start of
each map/reduce task.
Is it the same with the configure() method present in the mapred API?
Thanks,
Giridhar.
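For what it's worth, both are per-task initialization hooks: in the new mapreduce API the hook is Mapper.setup(Context), and in the old mapred API it is JobConfigurable.configure(JobConf), each invoked once per task attempt before any records are processed. A simplified model of that lifecycle (the TaskLifecycle and SimpleMapper names below are hypothetical, not Hadoop's actual classes):

```java
import java.util.List;

// Simplified model of a map task's lifecycle; NOT Hadoop's real classes.
// It only illustrates that the init hook runs once per task, not per record.
public class TaskLifecycle {
    static int setupCalls = 0;

    static class SimpleMapper {
        void setup() { setupCalls++; }           // once per task attempt
        void map(String record) { /* once per record */ }
        void cleanup() { /* once, after the last record */ }
    }

    static void runTask(SimpleMapper mapper, List<String> split) {
        mapper.setup();                          // called exactly once
        for (String record : split) {
            mapper.map(record);
        }
        mapper.cleanup();
    }

    public static void main(String[] args) {
        runTask(new SimpleMapper(), List.of("r1", "r2", "r3"));
        System.out.println("setup calls: " + setupCalls); // prints "setup calls: 1"
    }
}
```

So per-task state (opening a connection, loading a side file) belongs in the hook, while per-record work stays in map().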
Hi,
I am using the hadoop-0.20.2 version of Hadoop, and I want to use Oozie for
managing workflows.
All of my mapreduce programs use the 'mapreduce' API instead of the deprecated
'mapred' API.
I downloaded oozie-2.2.1+78, and I see the examples in here are using the
'mapred' API.
Is there any version of Oozie which
Hi,
I am using hadoop 0.20.2.
The Mapreduce framework by default writes output to part-r- etc.
I want to write to a file with a different name.
I am trying to override the getDefaultWorkFile method in the TextOutputFormat
class.
I am getting the following error:
Hi,
I am trying to write output to a MySQL DB.
I am getting the following error:
java.io.IOException
at
org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:180)
at
PM, Giridhar Addepalli
giridhar.addepa...@komli.com wrote:
Hi,
I am trying to write output to MYSQL DB,
I am getting following error
java.io.IOException
at
org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:180