Hi,
I am trying to use beeline to access Hive/Hadoop.
The Hive server is running on the Linux node where Hadoop is installed, started as
follows:
hduser@rhes564::/home/hduser/jobs> hive --service hiveserver 10001 -v &
[1] 30025
hduser@rhes564::/home/hduser/jobs> Starting Hive Thrift Server
The connection via beeline is established but I never
get the beeline prompt.
Looks like you are trying to connect to HiveServer v1. Beeline is the CLI
client that is supported to work with HiveServer2. You can start HiveServer2
like this: hive --service hiveserver2 and then connect to it using beeline.
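The two steps can be sketched as follows (the host, port and credentials are placeholders, not taken from this thread):

```shell
# start HiveServer2 in the background (it listens on port 10000 by default)
$HIVE_HOME/bin/hive --service hiveserver2 &

# then connect to it from beeline over JDBC
beeline -u jdbc:hive2://localhost:10000/default -n hduser -p <password>
```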
Thanks,
-Vaibhav
From: Mich Talebzadeh [mailto:m...@peridale.co.uk]
Sent: 06 March 2015 20:02
To: user@hive.apache.org
Subject: RE: Connecting to hiveserver via beeline is established but I never
get beeline prompt
Thanks Vaibhav
Hi,
When I log in to hive via beeline and do show databases, I get the following
errors!
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://rhes564:1/ashehadoop> show databases;
org.apache.thrift.TApplicationException: Internal error processing
FetchResults
I restarted the Hive server and it now works!
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Publications due shortly:
Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
Coherence Cache
NOTE: The information in this email is proprietary and confidential
Hi all,
I am trying to use an Oracle DB as the metastore for Hive.
I have created all the users and connections, and added the Oracle JDBC jar file
ojdbc6.jar to /usr/lib/hive/lib. I have also renamed the Derby jar file
derby-10.10.1.1.jar to derby-10.10.1.1.jar_ori so that it does not use the
embedded Derby database.
From: Mich Talebzadeh [mailto:m...@peridale.co.uk]
Sent: 09 March 2015 09:28
To: user@hive.apache.org
Subject: configuring Hive metastore to use Oracle DB ignored
Hi all,
As you may be aware, the Hive metastore stores Hive metadata in a relational
database. The recommendation is to have this database in a remote container.
Currently this database can be created "remotely" on MySQL and PostgreSQL,
plus Oracle and, I believe, MS SQL Server.
I have now adapted the relevan
From the SQL code, do you know which table and primary key it is complaining
about? I have used the script for Oracle 11g and modified one to work with SAP
Sybase ASE 15.7, and they both worked fine.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
OK, not surprising: the script does not have a means to drop the object before
creating it, so the script is recreating the object again!
So your best bet is to drop the metastore DB, recreate it as empty, and run your
script again.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi,
I have set up hiveserver2 running on port 10010 and I can use beeline from a
remote host to connect to the server host:
beeline -u jdbc:hive2://rhes564:10010/default
-d org.apache.hive.jdbc.HiveDriver -n hduser -p x
scan complete in 5ms
Connecting to jdbc:hive2://rhes564:10010/default
Hi,
I am using a tool to move data from Oracle to HDFS via a hiveserver2 setup.
The tool is supposed to produce a text file that Hive is supposed to load into a
table.
I am getting the following error from hiveserver2:
Loading data to table asehadoop.rs_lastcommit partition (p_orig=0, p_c
Have you truncated the table before dropping it?
Truncate table
Drop table
Mich Talebzadeh
http://talebzadehmich.wordpress.com
You can use the *set* command in Hive to get the current settings. You can also
do the same through beeline.
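For example (the property below is only an illustration, not one from this thread):

```sql
-- show the current value of a single configuration property
set hive.execution.engine;

-- list all properties and their current values
set -v;
```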
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
. At the consumer level, what tools are you going to use? Do you have a
proprietary tool with the correct drivers to access the data?
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
OK,
This is the way I read it. Create table t1 .. partitioned by date will use
horizontal partitioning, as is common with any RDBMS, say Oracle.
The view I will create as follows:
hive> create view v1 as select * from t1;
OK
Time taken: 0.073 seconds
hive> analyze table t1 par
https://cwiki.apache.org/confluence/display/Hive/PartitionedViews
- Douglas
From: Mich Talebzadeh
Reply-To:
Date: Sun, 15 Mar 2015 19:01:57 +
To: , 'cobby cohen'
Subject: RE: view over partitioned table
OK,
This is the way I read it. Create table t1 .. partitioned by date will use
horizontal partitio
To: "user@hive.apache.org"; 'cobby cohen'
Sent: Monday, March 16, 2015 4:19 PM
Subject: Re: view over partitioned table
Mich,
What version of Hive are you running?
Have you seen this?
https://cwiki.apache.org/confluence/display/Hive/PartitionedViews
- Douglas
From: Mich Talebzadeh
_ui on table t
(object_id) as 'COMPACT' WITH DEFERRED REBUILD;
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
4. I also see that there are errors being propagated back to the
application. Is there some configuration to get detailed logs too?
Not sure if I am missing something here.
Any pointers or assistance will be of great help.
Regards,
Amal
From: Mich Tale
. Pretty easy. You will need to download the ASE driver, jconn4.jar, and place
it in the $HIVE_HOME/lib directory.
If these are relevant then let me know.
Thanks,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
-d -b 25 hqlfile
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Me too :)
Mich Talebzadeh
http://talebzadehmich.wordpress.com
not update stats. Failed with exception Buffer underflow.
org.apache.hive.com.esotericsoftware.kryo.KryoException: Buffer underflow.
Appreciate any feedback on this
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
From: Mich Talebzadeh [mailto:m...@peridale.co.uk]
Sent: 24 March 2015 20:43
To: user@hive.apache.org
Subject: Analyze stats on 4 million rows table fails with exception Buffer
underflow
I do not know for sure, but you can always create a view on the table,
excluding the columns that you don't want this particular application to see.
Rather tedious.
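A sketch of that approach, with hypothetical table and column names:

```sql
-- expose only the columns this application is allowed to see
CREATE VIEW app_view AS
SELECT id, name, created_date
FROM base_table;
```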
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
. What you are mentioning as temp is (if I gather correctly) what is referred to
as staging in DW. However, there is now a requirement for DW to receive
replicated data from transactional databases through SAP Replication Server or
Oracle GoldenGate.
HTH,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Elliot,
How do you determine a record in a partition has changed? Are you relying on
timestamp or something like that?
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
691 seconds, Fetched: 50 row(s)
Trying to understand the above: does keys: object_id (type: double) refer to the
use of the index here? I dropped that index and the same plan was produced!
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Have you seen this article? It looks a bit dated, though.
Adding ACID to Apache Hive
<http://hortonworks.com/blog/adding-acid-to-apache-hive/>
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Thanks for that Elliot.
As a matter of interest what is the source of data in this case. Is the data
delivered periodically including new rows and deltas?
Cheers,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Sure Daniel. Apologies.
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Many thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi gurus,
The STDDEV function can be used for both aggregates and analytics. Fortunately
it has been implemented in Hive, which is great. I have a simple question on
this, if I may.
I would expect the in-built function STDDEV to run pretty efficiently in
most databases as they are system defined f
479751
I tried a Google search and there seem to be different suggestions. Maybe I have
to rewrite the code?
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
[truncated beeline result set]
9 rows selected (209.699 seconds)
Regards,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Gents,
Hive, as I see it, does not support ORDER BY column position. It only supports
ORDER BY column name.
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
ect?
Thanks,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
I added the following to hive-site.xml:
<property>
  <name>hive.groupby.orderby.position.alias</name>
  <value>true</value>
  <description>Enables using Column Position Alias in GROUP BY and ORDER BY
  clauses of queries.</description>
</property>
And ran the above query without the session-level setting, and it worked.
Regards,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
623.160777
So maybe the point goes beyond the Hive documentation. The value provided by
STDDEV in Hive does not appear to match the industry standard.
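One possible explanation (an assumption on my part, not confirmed in this thread): Hive's stddev is an alias for stddev_pop, the population deviation that divides by n, whereas Oracle's STDDEV returns the sample deviation that divides by n-1. The two can be compared directly (table and column names hypothetical):

```sql
SELECT stddev_pop(amount)  AS population_sd,  -- what Hive's stddev returns
       stddev_samp(amount) AS sample_sd       -- what Oracle's STDDEV returns
FROM sales;
```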
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
one who looks after
DDL in production (drop, create, truncate tables etc.) is the administrator, who
does releases through approved processes.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
ger to the default
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
and recycled again, and all worked!
Sounds like concurrency does not work, or there is something extra I need to do?
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
the metastore
Mich Talebzadeh
http://talebzadehmich.wordpress.com
that virtual memory error has gone. Any
ideas appreciated.
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
mapreduce.reduce.memory.mb = 2048
mapreduce.map.java.opts = -Xmx3072m
mapreduce.reduce.java.opts = -Xmx6144m
yarn.app.mapreduce.am.resource.mb = 400
I did not touch anything in mapred-site! I recycled Hadoop and it is now
working.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
[truncated SHOW LOCKS output: a SHARED_WRITE lock in ACQUIRED state on table
oraclehadoop.txtest, held by hduser on rhes564]
The question I have is: if we delete from the whole table, it seems that
"only" one lock is applied to the whole table
Hi,
I will try to have a go at your points but I am sure there are many experts
around.
As you may know already, in an RDBMS, partitioning (conceptually dividing a very
large table into sub-tables) is deployed to address three areas:
1. Availability -- each partition can reside on a
last save
It will behave much like versioning in an RDBMS.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Author of the books "A Practitioner's Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7.
co-author "Sybase Transact SQL Guide
Hi Grant,
Thanks for insight.
You mentioned and I quote
" Acid tables have been a real pain for us. We dont believe they are
production ready.. "
Can you please elaborate on this?
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi,
I believe partitioning followed by hash cluster allows only up to 32 buckets
within a single partition?
HTH,
Mich
limits on the number of buckets within a partition
and will get back on that.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Suresh,
I guess you are also using MySQL as your Hive metastore?
What configuration have you set for stats collection?
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Gary,
Is your hiveserver2 running OK? How did you start it?
$HIVE_HOME/bin/hiveserver2 &
What do you get when you run the command below? I assume that your hiveserver is
running on port 1?
netstat -alnp|egrep 'Local|1|9083'
HTH
Mich Taleb
is '^]'.
Mich Talebzadeh
http://talebzadehmich.wordpress.com
uct) CLUSTERED
BY (id) INTO 32 buckets;
OK
Time taken: 0.787 seconds
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
module
NOSASL: Raw transport
Mich Talebzadeh
http://talebzadehmich.wordpress.com
es that
provide good functionality, is there any reason why one should deploy a
columnar database such as HBase or Cassandra if Hive can do the job as well?
Thanks,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Thanks John,
I have already registered my interest on development work for Hive. So
hopefully I may be able to contribute at some level.
Regards,
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Thanks John for the link
Mich Talebzadeh
http://talebzadehmich.wordpress.com
/home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job -kill
job_1429714224771_0041
java.io.IOException: org.apache.hadoop.hive.ql.lockmgr.LockException: No
record of lock could be found, may have timed out
Has anyone seen this error, please?
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
, may have timed out
What lock or transaction manager are you using?
Alan.
> Mich Talebzadeh <mailto:m...@peridale.co.uk>
> April 23, 2015 at 8:19
>
> Hi all,
>
> Trying to do a direct load from RDBMS to Hive (not using Sqoop).
>
> It sends data in files of rows
quit;
hduser@rhes564::/home/hduser/dba/bin> hdfs dfs -ls /xyz/pqr/
15/04/24 20:43:30 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x - hduser supergroup 0 2015-04-24 20:42
01_HDD = 100 log on logdev01_HDD = 50
2> go
Msg 156, Level 15, State 2:
Server 'SYB_157', Line 1:
Incorrect syntax near the keyword 'table'.
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Eugene,
Was this with regard to the following thread of mine?
org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be
found, may have timed out
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
r etc!
Thanks
Mich Talebzadeh
http://talebzadehmich.wordpress.com
n the axiom of Hadoop with HDFS +
MapReduce.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
identifier for each row in each
table?
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
enough.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
previous version and 0.14 and see whether anything
has changed in the DDL? Do you have any records in that table? Mine is empty.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
kill -9 26085
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Harsha,
Have you updated the stats on table1 after adding the partition? In other words,
is it possible that the optimiser is not aware of that partition yet?
analyze table table1 partition (dt=201501) compute statistics;
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
[truncated result set comparing a timestamp column (e.g. 2015-05-01 12:42:51.0)
with its bigint value (1430480571)]
Is this expected? In other words, to equate timestamp columns do we need to
cast them to bigint or numeric?
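If casting is the way to go, a sketch of the comparison (table and column names hypothetical; in Hive, casting a timestamp to bigint yields epoch seconds):

```sql
SELECT t1.*
FROM t1
JOIN t2
  ON CAST(t1.ts AS bigint) = CAST(t2.ts AS bigint);
```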
Thanks,
Mich
so. A row in an RDBMS is created once,
updated many times and deleted once. So the prime interest would be to look for
inserts and updates.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
_user_id) > 0
) rs
This may work.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Hi Marc,
Regardless of whether you rebuild an index or not, I came across this when
checking whether indexes are used in Hive. As far as I know, indexes are not
fully implemented in Hive and Hive does not use the index.
See the attached emails.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Try putting the query in a file called name.hql; don't forget to add exit; at
the bottom.
Then run it as:
hive -f name.hql > name.log
The output will be streamed to that file.
In general you should see both the execution and the result; if you do it the
way I mentioned, you would only see the result from the query.
Great news thanks for the heads up.
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Where is your hive metastore? If it is on a database instance, is the instance
hosting metastore running?
Mich Talebzadeh
http://talebzadehmich.wordpress.com
, DAY(op_time) AS Day
, count(*) AS Total_Rows
FROM t
GROUP BY
op_type
, YEAR(op_time)
, MONTH(op_time)
, DAY(op_time)
) rs
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
CLASSPATH:$i
done
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Do you have any issue with the Hive metastore? Where is the metastore situated?
Hive will crash if connections to the metastore are lost or there is a network
issue!
Mich Talebzadeh
http://talebzadehmich.wordpress.com
that can be used effectively to
get the data from Hive to visualisations tools like Tableau.
I thought of using Oracle TimesTen in-memory database to get the data out of
Hive/Hadoop and keep the most frequently used data in memory. What are other
alternatives around?
Thanks,
Mich
0: jdbc:hive2://rhes564:10010/default> select count(1) from t, smallt where
t.object_id = smallt.object_id;
+--+--+
| _c0 |
+--+--+
| 100 |
+--+--+
1 row selected (68.978 seconds)
You can see the results and judge for yourself
HTH
Mich Talebzadeh
$CLASSPATH
HTH,
Mich Talebzadeh
Sybase ASE 15 Gold Medal Award 2008
A Winning Strategy: Running the most Critical Financial Data on ASE 15
<http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf>
http://login.sybase.com/files/Product_Overviews/ASE-W
this?).
You need to work out the selectivity of the column you are using for bucketing,
using the above formula or something similar, then decide on the number of
buckets, say clustered by (object_id) into 256 buckets.
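A sketch of the two steps (the column name object_id comes from the thread; the table names and DDL details are assumptions):

```sql
-- estimate the selectivity of the candidate bucketing column
SELECT COUNT(DISTINCT object_id) AS distinct_ids,
       COUNT(*)                  AS total_rows
FROM t;

-- then size the bucket count accordingly
CREATE TABLE t_bucketed (object_id DOUBLE)
CLUSTERED BY (object_id) INTO 256 BUCKETS;
```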
HTH
Mich Talebzadeh
.0.1:56797
ESTABLISHED 2943/java
HTH
Mich Talebzadeh
access
I believe that will resolve the issue
HTH
Mich Talebzadeh
.95 sec
2015-04-29 22:38:01,262 Stage-1 map = 80%, reduce = 0%, Cumulative CPU 11.8 sec
2015-04-29 22:38:02,295 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 13.28
sec
2015-04-29 22:38:03,335 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
14.52 sec
OK
500000
I really need to g
to this problem is to restart Hadoop, Hive and YARN. It will
then work. However, I am not sure about the cause. Sounds like resources are
not released for one reason or another!
Mich Talebzadeh
better explanation I would be interested.
Mich Talebzadeh
mapreduce.map) and their correlation to each other
mapreduce.reduce.memory.mb = 8192
mapreduce.map.java.opts = -Xmx3072m
mapreduce.reduce.java.opts = -Xmx6144m
Can you please verify if these settings are correct and how they relate to
each other?
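For what it is worth, the usual guidance (an assumption here, not stated in the thread) is that each *.java.opts heap (-Xmx) should fit inside its matching *.memory.mb container, commonly around 80% of it, because the container limit also has to cover non-heap JVM overhead. The reducer settings quoted above (8192 MB container, -Xmx6144m heap) follow that pattern; expressed in mapred-site.xml form:

```xml
<!-- reducer container size in MB -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<!-- reducer JVM heap, kept below the container size (~75-80%) -->
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>
```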
Thanks
Mich Talebzadeh
Thank you. Very helpful
Mich Talebzadeh
Many thanks Gopal.
Mich Talebzadeh
127.0.0.1:21693
ESTABLISHED 20598/java
If you see it is running, just do
kill -9 20598
to get rid of it, and start it again.
HTH
Mich Talebzadeh
What type of metastore are you using?
Mich Talebzadeh
You need to pass a username and password. For example, assuming the OS username
is hduser and the password is
beeline -u jdbc:hive2://rhes564:10010/default
-d org.apache.hive.jdbc.HiveDriver -n hduser -p
Mich Talebzadeh
Where are you running the beeline client?
From another host, or on the same host where Hive is installed?
What do you have in your xml file for:
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>false</value>
  <description>If true, the metastore Thrift interface will be secured with
  SASL. Clients must authenticate with Kerberos.</description>
</property>
Mich Talebzadeh
No locks are held in metastore (Oracle in my case) as well.
Thanks
Mich Talebzadeh
the locks are held until
rollback is complete. Killing a process itself will not release the locks!
Regards,
Mich Talebzadeh