help in specifying
>> filter (where clause).
>>
>> Regards,
>> Ikram
>>
>>
>> On Fri, May 3, 2024 at 8:12 AM Ron Johnson
>> wrote:
>>
>>> On Thu, May 2, 2024 at 8:28 PM Amit Sharma wrote:
>>>
>>>> Hello
Hello,
Has anyone tried delta/incremental data migration for Oracle to PostgreSQL
using Ora2pg? Or what are the best options to run delta migration for
Oracle to PostgreSQL?
Thanks
Amit
management like add, resize or remove disks while PostgreSQL services are
up?
Is equal data distribution a challenge on LVM/ZFS disks?
Thanks
Amit
On Tue, Jan 23, 2024 at 9:49 AM Scot Kreienkamp <
scot.kreienk...@la-z-boy.com> wrote:
> El lun, 22 ene 2024 18:44, Amit Sharma escribió:
>
>
Hi,
We are building new VMs for PostgreSQL v15 on RHEL 8.x for a large database
of 15TB-20TB.
I would like to know from the experts whether it is a good idea to create
LVMs to manage storage for the database?
Or are there any other better options/tools for disk groups in PostgreSQL,
similar to ASM?
There is some issue with dealii-9.4.0 under Spack. I uninstalled everything
and installed dealii-9.3.1, which works fine. Details are below if anyone
needs them.
spack v0.17
cmake-3.16.3
spack install dealii@9.3.1 target=x86_64
Best Regards,
Amit Sharma
On Friday, April 7, 2023 at 5:51:45 PM UTC+5:30
>>
>> error: cannot find -lhdf5-shared
>> collect2: error: ld returned 1 exit status
>> make[3]: *** [CMakeFiles/step-1.dir/build.make:223: step-1] Error 1
>> make[2]: *** [CMakeFiles/Makefile2:272: CMakeFiles/step-1.dir/all] Error 2
>> make[1]: *** [CMakeFil
.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:252: CMakeFiles/run.dir/rule] Error 2
make: *** [Makefile:196: run] Error 2
Thanks,
Amit Sharma
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
Hi Team,
I am trying to create a playbook to add the allowed users and hostname
dynamically in the SSHD config. The Ansible playbook should check for the
entry and add it if it does not exist.
I have written the below code, but it duplicates the same content and is
not idempotent.
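A common way to make such an edit idempotent is Ansible's `lineinfile` module with a `regexp`, so an existing AllowUsers line is replaced in place rather than appended again. A minimal sketch (the variable name `allowed_users` and the handler for restarting sshd are assumptions, not from the original playbook):

```yaml
# Hypothetical sketch: keep exactly one AllowUsers line in sshd_config.
# `regexp` is what makes this idempotent: if a matching line exists it is
# replaced; otherwise the line is appended once. `validate` runs sshd's
# config check before the file is written.
- name: Ensure AllowUsers entry in sshd_config
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^AllowUsers\s'
    line: "AllowUsers {{ allowed_users | join(' ') }}"
    validate: 'sshd -t -f %s'
  # A handler to restart the sshd service would normally be notified here.
```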
/Spark versions. Use
> mvn dependency:tree or equivalent on your build to see what you actually
> build in. You probably do not need to include json4s at all as it is in
> Spark anway
>
> On Fri, Feb 4, 2022 at 2:35 PM Amit Sharma wrote:
>
>> Martin Sean, changed it to 3.
PM Sean Owen wrote:
>
>> You can look it up:
>> https://github.com/apache/spark/blob/branch-3.2/pom.xml#L916
>> 3.7.0-M11
>>
>> On Thu, Feb 3, 2022 at 1:57 PM Amit Sharma wrote:
>>
Hello, everyone. I am migrating my spark stream to Spark version 3.1. I
also upgraded the json4s version as below:
libraryDependencies += "org.json4s" %% "json4s-native" % "3.7.0-M5"
While running the job I am getting an error for the below code, where I am
serializing the given inputs.
implicit val
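The snippet is cut off at `implicit val`; with json4s this is typically the implicit `Formats` instance that serialization requires. A hedged sketch of what such code usually looks like (the `Request` case class and its fields are made up for illustration):

```scala
import org.json4s.{Formats, NoTypeHints}
import org.json4s.native.Serialization
import org.json4s.native.Serialization.{read, write}

// The implicit Formats instance json4s needs for (de)serialization.
implicit val formats: Formats = Serialization.formats(NoTypeHints)

case class Request(id: String, count: Int) // hypothetical payload

val json = write(Request("a", 1)) // serialize to a JSON string
val back = read[Request](json)   // deserialize it back
```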
xplicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Fri, 28 Jan 2022 at 22:14, Amit Sharma wrote:
>
>> Hello everyone, we have spark streaming application. We send request to
Hello everyone, we have a spark streaming application. We send requests to
the stream through an Akka actor using a Kafka topic. We wait for the
response, as it is real time. Just want a suggestion: is there any better
option, like Livy, where we can send and receive requests to spark
streaming?
Thanks
Amit
I am upgrading my Cassandra Java driver version to the latest 4.13. I have
a Cassandra cluster using Cassandra version 3.11.11.
I am getting the below runtime error while connecting to Cassandra.
Before version 4.13 I was using version 3.9, and things were working fine.
ark-mllib" % sparkVersion ,
"com.datastax.spark" %% "spark-cassandra-connector" % "3.1.0", //
this includes cassandra-driver
"org.apache.spark" %% "spark-hive" % sparkVersion,
"org.apache.spark" %% "spark-streaming-ka
tion: com.codahale.metrics.JmxReporter
at
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
On Thu, Jan 20, 2022 at 5:1
Hello, I am trying to upgrade my project from Spark 2.3.3 to Spark 3.2.0.
While running the application locally I am getting the below error.
Could you please let me know which version of the Cassandra connector I
should use? I am using the shaded connector below, but I think that is
causing the issue.
Hello, everyone. I am replacing log4j with log4j2 in my Spark streaming
application. When I deployed my application to the Spark cluster it is
giving me the below error:
" ERROR StatusLogger Log4j2 could not find a logging implementation. Please
add log4j-core to the classpath. Using SimpleLogger to
...@gmail.com wrote:
> Amit, ping! Any news?
>
> On Monday, September 27, 2021 at 8:19:14 AM UTC+3 Shlomi Fish wrote:
>
>> Hi Amit!
>>
>> On Sun, 26 Sep 2021 20:14:10 -0700 (PDT)
>> Amit Sharma wrote:
>>
>> > Hi there -
>> > T
Hi there -
Trying to install an older version of the SDK on an M1 ARM Mac.
./emsdk install sdk-1.40.0-64bit
The installation fails here:
Installing tool
'releases-upstream-edf24e7233e0def312a08cc8dcec63a461155da1-64bit'..
*Error: Downloading URL
about fixing this error.
thank you
Am
On Sunday, September 26, 2021 at 9:07:14 AM UTC-4 Shlomi Fish wrote:
> Hi Amit,
>
> On Sat, 25 Sep 2021 07:38:38 -0700 (PDT)
> Amit Sharma wrote:
>
> > Hi Folks -
> >
> > I am unable to compile a library using emscripte
Hi Folks -
I am unable to compile a library using emscripten on an M1 ARM Mac.
On my older Mac (Intel-based), I didn't have any issues.
The issue is around unresolved linker errors.
The specific library that I am trying to compile here is openFrameworks.
Can anyone please point me on how to go
)
at
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
Can anyone help here and let me know why it happened and what the
resolution for this is?
--
Thanks & Regards,
Amit Sharma
Hi, I am using spark 2.7 with Scala. I am calling a method as below:
1. val rddBacklog = spark.sparkContext.parallelize(MAs) // MAs is a list
of, say, cities
2. rddBacklog.foreach(ma => doAlloc3Daily(ma, fteReview.forecastId,
startYear, endYear))
3. The doAlloc3Daily method is just doing a database
Hi, we are planning to migrate to Spark 3.x. Just want to confirm whether
there are any challenges with Livy interacting with Spark 3.x.
Thanks
Amit
[
https://issues.apache.org/jira/browse/YUNIKORN-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17343464#comment-17343464
]
Amit Sharma commented on YUNIKORN-656:
--
Thanks [~wilfreds]. Based on the comments, I will convert
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342910#comment-17342910
]
Amit Sharma edited comment on YUNIKORN-649 at 5/11/21, 11:01 PM
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342910#comment-17342910
]
Amit Sharma commented on YUNIKORN-649:
--
Thanks [~wwei] for approving this. Considering this closed
[
https://issues.apache.org/jira/browse/YUNIKORN-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342901#comment-17342901
]
Amit Sharma commented on YUNIKORN-651:
--
Thanks [~wwei] for approving the changes. Considering
[
https://issues.apache.org/jira/browse/YUNIKORN-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma updated YUNIKORN-667:
-
Component/s: release
> Update user label key using helm-cha
[
https://issues.apache.org/jira/browse/YUNIKORN-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma updated YUNIKORN-667:
-
Description: User label key is customizable using defined in
[YUNIKORN-650]. This should
[
https://issues.apache.org/jira/browse/YUNIKORN-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma updated YUNIKORN-667:
-
Description: User label key is customizable using defined in (was: Define
user identity
Amit Sharma created YUNIKORN-667:
Summary: Update user label key using helm-charts
Key: YUNIKORN-667
URL: https://issues.apache.org/jira/browse/YUNIKORN-667
Project: Apache YuniKorn
Issue
[
https://issues.apache.org/jira/browse/YUNIKORN-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332516#comment-17332516
]
Amit Sharma commented on YUNIKORN-651:
--
Sure so instead of limiting to User, will include
[
https://issues.apache.org/jira/browse/YUNIKORN-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331734#comment-17331734
]
Amit Sharma commented on YUNIKORN-656:
--
[~wilfreds] Thanks for the response.
Keeping
[
https://issues.apache.org/jira/browse/YUNIKORN-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma updated YUNIKORN-656:
-
Description:
LDAP resolution is a popular method to resolve group memberships. It allows
Amit Sharma created YUNIKORN-656:
Summary: LDAP resolver for group resolution
Key: YUNIKORN-656
URL: https://issues.apache.org/jira/browse/YUNIKORN-656
Project: Apache YuniKorn
Issue Type
[
https://issues.apache.org/jira/browse/YUNIKORN-652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331421#comment-17331421
]
Amit Sharma commented on YUNIKORN-652:
--
[~wwei] Changes are required in more than 1 place
[
https://issues.apache.org/jira/browse/YUNIKORN-651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331419#comment-17331419
]
Amit Sharma commented on YUNIKORN-651:
--
[~wwei]
For the information to be added to doc, here
[
https://issues.apache.org/jira/browse/YUNIKORN-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331417#comment-17331417
]
Amit Sharma commented on YUNIKORN-650:
--
Sure I am happy I could help:)
> Retrieve u
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331092#comment-17331092
]
Amit Sharma edited comment on YUNIKORN-649 at 4/23/21, 11:19 PM
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331092#comment-17331092
]
Amit Sharma commented on YUNIKORN-649:
--
[~wwei] For this, do you want me to open a separate Jira
[
https://issues.apache.org/jira/browse/YUNIKORN-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17330929#comment-17330929
]
Amit Sharma commented on YUNIKORN-638:
--
[~yuchaoran2011] & [~wwei], I can pick this up.
Ques
[
https://issues.apache.org/jira/browse/YUNIKORN-638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma reassigned YUNIKORN-638:
Assignee: Amit Sharma
> Make placeholder image configura
[
https://issues.apache.org/jira/browse/YUNIKORN-592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma reassigned YUNIKORN-592:
Assignee: Amit Sharma
> Move test code out of utils
[
https://issues.apache.org/jira/browse/YUNIKORN-592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17330916#comment-17330916
]
Amit Sharma commented on YUNIKORN-592:
--
[~wilfreds] Just to clarify, all 4 items need to be moved
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17330368#comment-17330368
]
Amit Sharma commented on YUNIKORN-649:
--
Thanks [~wwei].
I included the prefix as part
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326807#comment-17326807
]
Amit Sharma commented on YUNIKORN-649:
--
Also, the choice between Labels & Annotations, I thin
[
https://issues.apache.org/jira/browse/YUNIKORN-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Sharma updated YUNIKORN-649:
-
Description:
The Kubernetes metadata does not carry user information by design. This can
Amit Sharma created YUNIKORN-649:
Summary: Require improved methodology for determining k8s user
Key: YUNIKORN-649
URL: https://issues.apache.org/jira/browse/YUNIKORN-649
Project: Apache YuniKorn
Hi, can we write unit tests for Spark code? Is there any specific framework?
Thanks
Amit
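One common approach (not the only one) is ScalaTest with a local-mode SparkSession; libraries such as spark-testing-base build on the same idea. A minimal sketch, with the test data and column names invented for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

class WordCountSpec extends AnyFunSuite {
  // A local[*] session is enough to exercise transformations in a unit test.
  private lazy val spark = SparkSession.builder()
    .master("local[*]")
    .appName("unit-test")
    .getOrCreate()

  test("counts words") {
    import spark.implicits._
    val counts = Seq("a", "b", "a").toDF("word")
      .groupBy("word").count()
      .collect()
      .map(r => r.getString(0) -> r.getLong(1))
      .toMap
    assert(counts("a") == 2L)
    assert(counts("b") == 1L)
  }
}
```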
[
https://issues.apache.org/jira/browse/YUNIKORN-556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304351#comment-17304351
]
Amit Sharma commented on YUNIKORN-556:
--
Hi [~kmarton],
I can help if Huang is not available. I
I believe it's a Spark UI issue which does not display the correct value.
I believe it is resolved in Spark 3.0.
Thanks
Amit
On Fri, Jan 8, 2021 at 4:00 PM Luca Canali wrote:
> You report 'Storage Memory': 3.3TB/ 598.5 GB -> The first number is the
> memory used for storage, the second one is the
any suggestion please.
Thanks
Amit
On Fri, Dec 4, 2020 at 2:27 PM Amit Sharma wrote:
> Is there any memory leak in spark 2.3.3 version as mentioned in below
> Jira.
> https://issues.apache.org/jira/browse/SPARK-29055.
>
> Please let me know how to solve it.
>
> Thanks
>
performance when
> compared to cache because Spark will optimize for cache the data during
> shuffle.
>
>
>
> *From: *Amit Sharma
> *Reply-To: *"resolve...@gmail.com"
> *Date: *Monday, December 7, 2020 at 12:47 PM
> *To: *Theodoros Gkountouvas , "
> user
at 1:01 PM Sean Owen wrote:
> No, it's not true that one action means every DF is evaluated once. This
> is a good counterexample.
>
> On Mon, Dec 7, 2020 at 11:47 AM Amit Sharma wrote:
>
>> Thanks for the information. I am using spark 2.3.3 There are few more
>> questi
d I suspect that
> DF1 is used more than once (one time at DF2 and another one at DF3). So,
> Spark is going to cache it the first time and it will load it from cache
> instead of running it again the second time.
>
>
>
> I hope this helped,
>
> Theo.
>
>
>
Hi All, I am using caching in my code. I have DataFrames like:
val DF1 = read csv
val DF2 = DF1.groupBy().agg().select(...)
val DF3 = read csv .join(DF1).join(DF2)
DF3.save
If I do not cache DF2 or DF1 it takes a longer time. But I am doing only
one action, so why do I need to cache?
Thanks
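The reason a single action can still benefit from caching is that DF1 appears twice in DF3's lineage, once directly and once via DF2, so without `cache()` its CSV scan is recomputed for each branch. A sketch of the same shape (paths and the join column `_c0` are placeholders):

```scala
// Sketch: DF1 feeds both DF2 and the final join, so without cache() its
// CSV scan runs twice even though only one action (save) is executed.
val df1 = spark.read.csv("/data/in1.csv").cache()
val df2 = df1.groupBy("_c0").count()
val df3 = spark.read.csv("/data/in2.csv").join(df1, "_c0").join(df2, "_c0")
df3.write.save("/data/out")
df1.unpersist() // release the cached blocks once the action has finished
```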
Is there any memory leak in spark 2.3.3 version as mentioned in below Jira.
https://issues.apache.org/jira/browse/SPARK-29055.
Please let me know how to solve it.
Thanks
Amit
On Fri, Dec 4, 2020 at 1:55 PM Amit Sharma wrote:
> Can someone help me on this please.
>
>
> Thanks
>
Can someone help me on this please.
Thanks
Amit
On Wed, Dec 2, 2020 at 11:52 AM Amit Sharma wrote:
> Hi , I have a spark streaming job. When I am checking the Excetors tab ,
> there is a Storage Memory column. It displays used memory /total memory.
> What is used memory. Is it memor
Hi, I have a spark streaming job. When I am checking the Executors tab,
there is a Storage Memory column. It displays used memory / total memory.
What is used memory? Is it memory in use, or memory used so far? How would
I know how much memory is unused at one point in time?
Thanks
Amit
Please find attached the screenshot showing no active tasks but memory
still used.
On Sat, Nov 21, 2020 at 4:25 PM Amit Sharma wrote:
> I am using df.cache and also unpersisting it. But when I check spark Ui
> storage I still see cache memory usage. Do I need to do any
I am using df.cache and also unpersisting it. But when I check Spark UI
storage I still see cache memory usage. Do I need to do anything else?
Also, in the Executors tab on the Spark UI, memory used/total memory for
each executor always displays some used memory; not sure, if there are no
requests on the streaming job, then
amit sharma created SPARK-33506:
---
Summary: t 130038ERROR ContextCleaner:91 - Error cleaning broadcast
Key: SPARK-33506
URL: https://issues.apache.org/jira/browse/SPARK-33506
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-25316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17236473#comment-17236473
]
amit sharma commented on SPARK-25316:
-
I am using spark 2.3.3 with 16 workers with 30 cores each. I
Russell, I increased the RPC timeout to 240 seconds, but I am still getting
this issue once in a while, and after it my Spark streaming job gets stuck
and does not process any requests, so I need to restart it every time. Any
suggestion please?
Thanks
Amit
On Wed, Nov 18, 2020 at 12:05 PM Amit
please help.
Thanks
Amit
On Mon, Nov 9, 2020 at 4:18 PM Amit Sharma wrote:
> Please find below the exact exception
>
> Exception in thread "streaming-job-executor-3" java.lang.OutOfMemoryError:
> Java heap space
> at java.util.Array
Please help.
Thanks
Amit
On Wed, Nov 18, 2020 at 12:05 PM Amit Sharma wrote:
> Hi, we are running a spark streaming job and sometimes it throws below
> two exceptions . I am not understanding what is the difference between
> these two exception for one timeout is 120 seconds an
Hi, we are running a spark streaming job and sometimes it throws the below
two exceptions. I am not understanding what the difference between these
two exceptions is, since one timeout is 120 seconds and the other is 600
seconds. What could be the reason for these?
Error running job streaming job
Hi, I have a few questions, as below:
1. The Spark UI storage tab displays 'storage level', 'size in memory' and
'size on disk'. It shows RDD ID 16 with memory usage 76 MB; I am not sure
why it does not go back to 0 once a request for Spark streaming is
completed. I am caching some RDD
nfun$map$1.apply(Try.scala:237)
at scala.util.Try$.apply(Try.scala:192)
at scala.util.Success.map(Try.scala:237)
On Sun, Nov 8, 2020 at 1:35 PM Amit Sharma wrote:
> Hi , I am using 16 nodes spark cluster with below config
> 1. Executor memory 8 GB
> 2. 5 cores per executor
Can you please help.
Thanks
Amit
On Sun, Nov 8, 2020 at 1:35 PM Amit Sharma wrote:
> Hi , I am using 16 nodes spark cluster with below config
> 1. Executor memory 8 GB
> 2. 5 cores per executor
> 3. Driver memory 12 GB.
>
>
> We have streaming job. We do not see problem
Hi , I am using 16 nodes spark cluster with below config
1. Executor memory 8 GB
2. 5 cores per executor
3. Driver memory 12 GB.
We have a streaming job. We do not see a problem, but sometimes we get an
executor-1 heap memory exception. I am not understanding it, if the data
size is the same and this job
Hi, I have a question: when we are reading from Cassandra, should we use
only the partition key in the where clause from a performance perspective,
or does it not matter from the Spark perspective because it always allows
filtering?
Thanks
Amit
Hi, I have 20 node clusters. I run multiple batch jobs. In the spark-submit
file, driver memory=2g and executor memory=4g, and I have an 8 GB worker.
I have the below questions:
1. Is there any way I know in each batch job which worker is the driver
node?
2. Will the driver node be part of one of the
Can you keep the field as an Option in your case class?
Thanks
Amit
On Thu, Aug 13, 2020 at 12:47 PM manjay kumar
wrote:
> Hi ,
>
> I have a use case,
>
> where i need to merge three data set and build one where ever data is
> available.
>
> And my dataset is a complex object.
>
> Customer
> - name -
Any help is appreciated. I have a Spark batch job; based on a condition I
would like to start another batch job by invoking a .sh file. Just want to
know: can we achieve that?
Thanks
Amit
On Fri, Aug 7, 2020 at 3:58 PM Amit Sharma wrote:
> Hi, I want to write a batch job which would call another ba
Hi, I want to write a batch job which would call another batch job based on
a condition. Can I call one batch job from another in Scala, or can I do it
only via a Python script? An example would be really helpful.
Thanks
Amit
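One way to launch a shell script from Scala is `scala.sys.process`; the script path and the condition below are placeholders, not from the original job:

```scala
import scala.sys.process._

// Launch the follow-up job's script only when the condition holds.
// `!` runs the command and returns its exit code.
val shouldRunNext = true // placeholder for the real condition
if (shouldRunNext) {
  val exitCode = Seq("/bin/sh", "/path/to/next-job.sh").!
  if (exitCode != 0)
    sys.error(s"next-job.sh failed with exit code $exitCode")
}
```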
Hi, I have table A in Cassandra cluster-1 in one data center. I have table
B in cluster-2 in another data center. I want to copy the data from one
cluster to another using Spark. I faced the problem that I cannot create
two Spark sessions, as we need a Spark session per cluster.
Hi All, sometimes I get this error in the Spark logs. I notice a few
executors are shown as dead in the Executors tab during this error,
although my job succeeds. Please help me find the root cause of this issue.
I have 3 workers with 30 cores each and 64 GB RAM each. My job uses 3 cores
per executor
Please help on this.
Thanks
Amit
On Fri, Jul 17, 2020 at 2:34 PM Amit Sharma wrote:
> Hi All, i am running the same batch job in my two separate spark clusters.
> In one of the clusters it is showing GC warning on spark -ui under
> executer tag. Garbage collection is taking lo
Please help on this.
Thanks
Amit
On Fri, Jul 17, 2020 at 9:10 AM Amit Sharma wrote:
> Hi, sometimes my spark streaming job throw this exception Futures timed
> out after [300 seconds].
> I am not sure where is the default timeout configuration. Can i increase
> it.
Hi All, I am running the same batch job in my two separate Spark clusters.
In one of the clusters it is showing a GC warning on the Spark UI under the
Executors tab: garbage collection is taking a longer time, around 20%,
while in the other cluster it is under 10%. I am using the same
configuration in my
Hi, sometimes my spark streaming job throws this exception: Futures timed
out after [300 seconds].
I am not sure where the default timeout configuration is. Can I increase
it? Please help.
Thanks
Amit
Caused by: java.util.concurrent.TimeoutException: Futures timed out after
[300 seconds]
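A 300-second Futures timeout in Spark often comes from spark.sql.broadcastTimeout (whose default is 300 seconds) or from spark.network.timeout; which one applies depends on the stack trace. Both can be raised at submit time; the values below are illustrative, not recommendations:

```shell
# Sketch: raise the likely timeouts at submit time.
spark-submit \
  --conf spark.sql.broadcastTimeout=600 \
  --conf spark.network.timeout=600s \
  your-app.jar
```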
Hi, I have to delete certain rows from Cassandra during my spark batch
process. Is there any way to delete rows using the Spark Cassandra
connector?
Thanks
Amit
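The Spark Cassandra Connector's RDD API does expose a delete helper, `deleteFromCassandra`, via its implicits. A hedged sketch, with the keyspace, table, and key type invented for illustration:

```scala
import com.datastax.spark.connector._

// Hypothetical primary-key type for the target table.
case class Key(id: String)

// deleteFromCassandra issues a DELETE for each key in the RDD,
// using the SparkContext's configured Cassandra connection.
val keysToDelete = sc.parallelize(Seq(Key("a"), Key("b")))
keysToDelete.deleteFromCassandra("my_keyspace", "my_table")
```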
Hi, I have a scenario where I have to read certain rows from a table,
truncate the table, and store those rows back to the table. I am doing the
below steps:
1. reading certain rows into DF1 from Cassandra table A
2. saving into Cassandra as overwrite in table A
The problem is when I truncate the
I have set 5 cores per executor. Is there any formula to determine the best
combination of executors, cores, and memory per core for better
performance? Also, when I am running a local Spark instance in my web jar I
get better speed than running in the cluster.
Thanks
Amit
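There is no exact formula, but a widely used rule of thumb is roughly 5 cores per executor, with executor memory set to (node memory minus OS overhead) divided by executors per node, leaving headroom for spark.executor.memoryOverhead. Illustrative submit flags for a hypothetical 16-core / 64 GB worker (the numbers are examples, not recommendations):

```shell
# Example sizing: 3 executors x 5 cores per node, ~19g heap each,
# leaving a core and some memory for the OS and per-executor overhead.
spark-submit \
  --executor-cores 5 \
  --num-executors 3 \
  --executor-memory 19g \
  --conf spark.executor.memoryOverhead=2g \
  your-app.jar
```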
Hi there, kindly accept my invitation from snv.amitsharma.
On Fri, Nov 22, 2019 at 1:30 PM Motaz Hejaze wrote:
> Lets start working on it togather ..
> Contact me on skype..
>
> Skype : m3tz-hjze
>
> On Fri, 22 Nov 2019, 9:33 am amit sharma, wrote:
>
>> Hi the
Hi there, if you want I can work for you and build inventory management as
per your needs. Work will be as per the guidelines provided by you, plus it
would be very cost efficient.
On Fri, Nov 22, 2019 at 4:36 AM Ibrahim Abdelaty
wrote:
> Hello guys,
>
> Iam in urgent need for help with building
Hi Team,
Do we have any reference document to configure Impala with HDP in
distributed mode?
Thanks & Regards
Amit Sharma
Thanks Gabor, it's working now.
Thanks & Regards
Amit Sharma
On Thu, Oct 31, 2019 at 2:45 PM Gabor Kaszab wrote:
> This usually happens when the python env vars aren't sourced to the
> environment.
> . bin/set-pythonpath.sh
> This should help.
>
> Gabor
>
> On Thu,
Hi Team,
Finally the build succeeded, but when I try to start Impala it's throwing
an error:
[root@clusternode104 impala]# ${IMPALA_HOME}/bin/start-impala-cluster.py
/usr/bin/env: impala-python: No such file or directory
can someone help me on this.
Thanks & Regards
Amit Sharma
On Sat, Oc
an
as per centos 7.
On Sat, Oct 19, 2019 at 9:25 AM Amit Sharma
wrote:
> yes I am getting error on below code
>
> [root@clusternode104 ~]# if [[ "$UBUNTU" == true ]]; then
> > "$@"
> >fi
> [root@clusternode104 ~]# }
> -bash: syntax error n
mmand not found
Anyway, I have commented out the Ubuntu script code and re-ran; currently I
am in the middle of executing the command "./buildall.sh -notests -so" and
will share the results.
Thanks & Regards
Amit Sharma
On Sat, Oct 19, 2019 at 8:14 AM Vincent Tran wrote:
> That is kinda weird.
Please update me if anyone knows about it.
Thanks
Amit
On Thu, Oct 10, 2019 at 3:49 PM Amit Sharma wrote:
> Hi , we have spark streaming job to which we send a request through our UI
> using kafka. It process and returned the response. We are getting below
> error and this
Hi, we have a spark streaming job to which we send a request through our UI
using Kafka. It processes it and returns the response. We are getting the
below error, and this streaming job is not processing any requests.
Listener StreamingJobProgressListener threw an exception
java.util.NoSuchElementException: key