integration is a big effort, I would guess.
On Tue, Apr 10, 2018 at 12:46 AM, Ashutosh Chauhan <hashut...@apache.org>
wrote:
> Hi Amit,
>
> Yes, only MySQL and Postgres are supported for Druid metadata storage.
> That's because Druid only supports these. You mentioned t
Hive Druid Integration:
I have Hive and Druid working independently,
but I am having trouble connecting the two.
I don't have Hortonworks.
I have Druid using sqlserver as metadata store database.
When I try setting this property in Beeline,
set hive.druid.metadata.db.type=sqlserver;
I get
Thanks & Regards,
Amit Kumar,
Scientist B,
Mob: 9910611621
(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
INFO cli.LlapServiceDriver: LLAP service driver finished
Thanks & Regards,
Amit Kumar,
Mob: 9910611621
On Sat, Jul 22, 2017 at 5:00 PM, Amit Kumar <delhiam...@gmail.com> wrote:
> Hi,
>
> I have installed hadoop 2
mpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Thanks & Regards,
Amit Kumar,
Mob: 9910611621
$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Query:
insert into table tableB select col1, col2, col3, col4, col5, col6, col7, col8
from tableA
Thanks
Amit
Legal Disclaimer:
The information contained in this message may be privileged and confidential
Hi,
I am trying to understand how Hive reads its configuration from
hive-site.xml: where the structure of the XML file is defined, and which code
reads hive-site.xml.
Thanks
Amit
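For reference, hive-site.xml follows Hadoop's generic configuration layout (a `<configuration>` root of `<property>` name/value pairs); in Hive itself it is loaded by the HiveConf class on top of Hadoop's Configuration. A minimal sketch of reading that layout, with sample property values that are illustrative only:

```python
import xml.etree.ElementTree as ET

# hive-site.xml uses Hadoop's generic configuration layout: a <configuration>
# root containing <property> elements, each with <name> and <value> children.
# The two properties below are sample values for illustration only.
SAMPLE = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>mr</value>
  </property>
</configuration>"""

def parse_hive_site(xml_text):
    """Return a dict mapping property names to values."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

props = parse_hive_site(SAMPLE)
```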
e.printStackTrace();
}
From: Markovitz, Dudu [mailto:dmarkov...@paypal.com]
Sent: Friday, August 05, 2016 3:04 PM
To: user@hive.apache.org
Subject: RE: Error running SQL query through Hive JDBC
Can you please share the query?
From: Amit Bajpai [mailto:amit.baj...@flex
Hi,
I am getting the below error when running a SQL query through Hive JDBC. Can
you suggest how to fix it?
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement:
FAILED: SemanticException UDF = is not allowed
at
You need to increase the value of the Hive property below in Ambari:
hive.server2.tez.sessions.per.default.queue
If this does not fix the issue, then you need to update the capacity scheduler
property values.
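For reference, outside Ambari the same property can also be set directly in hive-site.xml; the value of 4 below is only an example, not a recommendation:

```xml
<property>
  <name>hive.server2.tez.sessions.per.default.queue</name>
  <value>4</value>
</property>
```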
From: Raj hadoop [mailto:raj.had...@gmail.com]
Sent: Wednesday, August 03, 2016
s "SHOW PROCESSLIST"
(and equivalent commands in most other databases).
From: Amit Bajpai [mailto:amit.baj...@flextronics.com]
Sent: Thursday, July 14, 2016 10:22 PM
To: user@hive.apache.org<mailto:user@hive.apache.org>
Subject: Yarn Application ID for Hive query
Hi,
I am usin
        user='amit',
        password='amit',
        database='default') as conn:
    with conn.cursor() as cur:
        # Execute query
        cur.execute("SELECT COMP_ID, COUNT(1) FROM tableA GROUP BY COMP_ID")
        # Fetch table results
        for i in cur.fetch():
ccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
I am not sure why it is failing on the server. If anyone could kindly point it
out, that would be great.
Thanks,
Amit
i am using CDH 5.2.1,
Any pointers will be of immense help.
Thanks
On Fri, May 15, 2015 at 9:43 AM, amit kumar ak3...@gmail.com wrote:
Hi,
After re-creating my account in Hue, I receive “User matching query does
not exist” when attempting to run a Hive query.
The query succeeds
This is related to Django.
See this on how to clear sessions from Django:
http://www.opencsw.org/community/questions/289/how-to-clear-the-django-session-cache
On Fri, May 15, 2015 at 12:24 PM, amit kumar ak3...@gmail.com wrote:
Yes, it is happening for Hue only. Can you please suggest how I
sessions from the server. (This may
clear all users' active sessions from Hue, so be careful while doing it.)
On Fri, May 15, 2015 at 11:31 AM, amit kumar ak3...@gmail.com wrote:
Hi,
After re-creating my account in Hue, I receive “User matching query does not
exist” when attempting to run a Hive query.
The query succeeds in the Hive command line.
Please advise.
Thank you
Amit
What error are you getting after specifying javaXML in place of kryo?
On Wed, May 6, 2015 at 12:44 AM, Bhagwan S. Soni bhgwnsson...@gmail.com
wrote:
Please find attached error log for the same.
On Tue, May 5, 2015 at 11:36 PM, Jason Dere jd...@hortonworks.com wrote:
Looks like you are
Jason,
The last comment is "This has been fixed in the 0.14 release. Please open a new
JIRA if you see any issues."
Is this issue resolved in Hive 0.14?
On Tue, May 5, 2015 at 11:36 PM, Jason Dere jd...@hortonworks.com wrote:
Looks like you are running into
)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:8553)
After rolling those same changes out, the problem resolved itself.
On Tue, May 5, 2015 at 4:28 AM, Moore, Douglas
douglas.mo...@thinkbiganalytics.com wrote:
Hi Amit,
We've seen the same error on MoveTask with Hive 0.14 / HDP 2.2 release
:36 AM, amit kumar ak3...@gmail.com wrote:
Hi Doug,
I am using CDH 5.2.1.
Disable ACLs on Name Nodes
Set Enable Access Control Lists = False
Save Changes
Restart Hadoop Cluster
Stack trace:
2015-05-04 10:38:18,820 INFO [main]: exec.Task
(SessionState.java:printInfo(537)) - Moving
Doug,
Do I need any configuration changes, or anything else, to resolve this issue?
Thanks
On Tue, May 5, 2015 at 4:46 AM, amit kumar ak3...@gmail.com wrote:
Do you have any suggestions to resolve this issue?
I am looking for a resolution.
On Tue, May 5, 2015 at 4:42 AM, Moore, Douglas
!
- Douglas
From: amit kumar ak3...@gmail.com
Reply-To: user@hive.apache.org
Date: Tue, 5 May 2015 04:40:18 +0530
To: user@hive.apache.org
Subject: Re: Unable to move files on Hive/Hdfs
Hi Doug,
I am using CDH 5.2.1.
I performed the task below and am getting the error, but after rolling
While moving data from Hive/HDFS we get the error below.
Please advise.
Moving data to:
hdfs://nameservice1/tmp/hive-srv-hdp-edh-d/hive_2015-05-04_10-02-39_841_5305383954203911235-1/-ext-1
Failed with exception Unable to move
Hi users,
I want to know the difference in query execution time in Hive between using
SSDs and HDDs for HDFS storage.
Thanks,
Amit
2014-11-26 20:21:53,923 Stage-1 map = 99%, reduce = 10%, Cumulative CPU
29516.21 sec
2014-11-26 20:22:53,935 Stage-1 map = 99%, reduce = 10%, Cumulative CPU
29552.95 sec
Please help me find a solution.
Thanks
Amit
FAIL
Total MapReduce CPU Time Spent: 10 minutes 25
seconds 190 mse
Please help me fix the issue.
Thanks
Amit
Hi Daniel,
This stack trace is the same for other queries.
On different runs I get slave7, sometimes slave8...
I have also registered all machine IPs in /etc/hosts.
Regards
Amit
On Mon, Nov 24, 2014 at 10:22 PM, Daniel Haviv
daniel.ha...@veracity-group.com wrote:
It seems that the application
I did not modify it on all the slaves, except slave
Will that be a problem?
For small data (a table up to 20 GB) it runs, but for a 300 GB table
only count(*) runs, sometimes succeeding and sometimes failing.
Thanks
Amit
On Mon, Nov 24, 2014 at 10:37 PM, Daniel Haviv
daniel.ha...@veracity-group.com
* except slave6, slave7, slave8
On Mon, Nov 24, 2014 at 10:56 PM, Amit Behera amit.bd...@gmail.com wrote:
.
Daniel
On 24 Nov 2014, at 19:26, Amit Behera amit.bd...@gmail.com wrote:
Hi Daniel,
Thank you, it's running fine.
*Another question:*
Could you please tell me what to do if I get a *Shuffle Error*?
I once got this type of error while running a join query on 300 GB of data
with 20 GB of data.
Thanks
Amit
On Mon, Nov 24, 2014 at 11:13 PM, Daniel Haviv
daniel.ha
Hi users,
I have Hive set up on a multi-node Hadoop cluster.
I want to run multiple queries on a table from different machines.
Please help me achieve concurrent access to Hive so I can run multiple
queries simultaneously.
Thanks
Amit
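HiveServer2 handles many simultaneous client sessions, so the usual pattern is one connection (and one query) per thread. A minimal sketch of the fan-out, where run_query is a hypothetical stand-in for a real client call (e.g. a pyhs2/JDBC execute):

```python
from concurrent.futures import ThreadPoolExecutor

# HiveServer2 accepts many concurrent client sessions, so the usual pattern
# is one connection per worker thread. run_query is a hypothetical stand-in
# for a real client call (open a connection, execute sql, fetch results).
def run_query(sql):
    return f"ran: {sql}"

queries = [
    "SELECT COUNT(*) FROM tableA",
    "SELECT COMP_ID, COUNT(1) FROM tableA GROUP BY COMP_ID",
]

# Each query runs on its own worker thread; results come back in input order.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_query, queries))
```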
Hi Devopam,
Thank you for replying.
I am using Hue on top of Hive. Can you please explain how Oozie
would help me, and how I can integrate Oozie with this setup?
Thanks
Amit
On Fri, Nov 7, 2014 at 7:58 PM, Devopam Mittra devo...@gmail.com wrote:
hi Amit,
Please try to see if Hive CLI
Subject: Want to Join this
From: Amit Behera amit.bd...@gmail.com
To: user-subscr...@hive.apache.org
://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
Thanks
On Tue, Sep 9, 2014 at 8:53 PM, Amit Dutta amitkrdu...@outlook.com wrote:
Thanks a lot for your reply. I changed the following parameters from Cloudera
Manager:
mapred.tasktracker.map.tasks.maximum = 2 (it was 1 before
Hi, I have only 604 rows in the Hive table.
When using A = LOAD 'revenue' USING org.apache.hcatalog.pig.HCatLoader(); DUMP
A; it starts printing "heart beat" repeatedly and never leaves this state. Can
someone please help? I am getting the following exception:
2014-09-09 17:27:45,844 [JobControl]
Hi,
Does anyone know how to increase the MapReduce slots? I am
getting an infinite heartbeat when I run a Pig script from Hue (Cloudera CDH 5.1).
Thanks, Amit
Make sure there are no primary key clashes. HBase will overwrite the row if you
upload data with the same primary key. That's one reason you can end up with
fewer rows than you uploaded.
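The overwrite behavior described above can be modeled with a dict keyed by row key; this is only an illustration of the semantics, not HBase code:

```python
# HBase keeps a single latest version per (row key, column) by default, so a
# load containing duplicate row keys silently collapses them: the last write
# wins. A dict keyed by row key models the same semantics.
rows = [
    ("key1", "a"),
    ("key2", "b"),
    ("key1", "c"),  # same row key as the first line: overwrites "a"
]
table = {}
for key, value in rows:
    table[key] = value  # later puts replace earlier ones, as in HBase
```

Three input lines end up as only two rows, which is the undercount the reply describes.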
Sent from my mobile device, please excuse the typos
On May 1, 2014, at 3:34 PM, Kennedy, Sean C.
to multiples I was
hoping. I changed the striping to 4 MB, tried creating an index every 10k rows,
inserted 6 million rows, and ran many different types of queries. Any ideas,
folks, what I might be missing?
Amit
Sent from my mobile device, please excuse the typos
On Apr 4, 2014, at 8:21 PM, Bryan
the step below.
Any pointers would be appreciated.
Amit
I have a single node setup with minimal settings. JPS output is as follows
$ jps
9823 NameNode
12172 JobHistoryServer
9903 DataNode
14895 Jps
11796 ResourceManager
12034 NodeManager
*Running Hadoop 2.2.0 with YARN.*
Step1
CREATE TABLE pokes
storage..
Any pointers would be very helpful.
Amit
Error: java.lang.RuntimeException: Hive Runtime Error while closing
operators
at
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:240)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61
are not used
and the table names somehow might map to similar hashcode values?
Also, is changing the alias the only workaround for this problem, or is
any other workaround possible?
Thanks,
Amit
On Sun, Aug 11, 2013 at 9:22 PM, Navis류승우 navis@nexr.com wrote:
Hi,
Hive is notorious
On Thu, Sep 27, 2012 at 10:56 AM, Amit Sangroya sangroyaa...@gmail.comwrote:
Hello everyone,
I am finding that Hive 0.9.0 works with Hadoop 0.20.0 only in the
default scheduling mode. But when I try to use the Fair Scheduler with
this configuration, I see that MapReduce jobs do not progress
behaviour? Should
it not create these hash maps on the corresponding mappers in parallel?
Thanks,
Amit
On Thu, Jan 19, 2012 at 9:22 AM, Bejoy Ks bejoy...@yahoo.com wrote:
Hi Avrila,
AFAIK the bucketed map join is not the default in Hive; it happens
only when the value is set to true.
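The flags being referred to are presumably these (hive.optimize.bucketmapjoin for the plain bucketed map join, plus a second one for the sort-merge variant), set per session or in hive-site.xml:

```sql
-- Bucketed map join is off by default; it only kicks in when this is set
-- and both tables are bucketed on the join key.
SET hive.optimize.bucketmapjoin = true;
-- Additionally required for the sort-merge bucket map join:
SET hive.optimize.bucketmapjoin.sortedmerge = true;
```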
as the User with which the hive server is running, and connects to
the default database.
What version of Toad for Cloud are you using?
Thanks,
Amit
On Tue, Jan 31, 2012 at 10:59 AM, Sriram Krishnan skrish...@netflix.comwrote:
I was under the impression that Toad uses JDBC – and AFAIK
Do you know any way in which this can be done in Hive Server?
Amit
On Tue, Aug 23, 2011 at 11:21 AM, Chinna Rao Lalam 72745
chinna...@huawei.com wrote:
Hi Amit,
Please check issue HIVE-1405; it will help you. This issue targets the same
scenario.
Thanks
Chinna Rao Lalam
Hi Chinna,
That worked, thanks a lot. Once the JAR is picked up, is there a way
to create a temporary function that is retained even if I quit the
interactive shell and start it again? Or do I have to use the CREATE
command to register the function every time?
Thanks.
Amit
On Mon, Aug 22
/development/depot/dataeng/hive/dist/value/property
Amit
Hi,
I am also trying the same but don't know the exact build steps. Could someone
please share them?
-regards
Amit
From: Jean-Charles Thomas jctho...@autoscout24.com
To: Hive mailing list user@hive.apache.org
Sent: Tue, 22 March, 2011 11:40:18 AM
Subject: Hive
And hive replaces the MIN_AGE parameter automatically.
-amit
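Assuming this refers to Hive's variable substitution (e.g. --hivevar MIN_AGE=21 with ${hivevar:MIN_AGE} in the query text), the replacement happens textually before the query is compiled. A toy Python mimic of that rewrite:

```python
import re

# Hive's variable substitution rewrites ${hivevar:NAME} references in the
# query text before compilation. substitute() mimics that textual rewrite.
def substitute(sql, hivevars):
    def repl(match):
        return hivevars[match.group(1)]
    return re.sub(r"\$\{hivevar:(\w+)\}", repl, sql)

query = "SELECT * FROM users WHERE age >= ${hivevar:MIN_AGE}"
expanded = substitute(query, {"MIN_AGE": "21"})
```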