On Aug 1, 2016 1:57 PM, "Dmitry Tolpeko" wrote:
> Hi Emerson,
>
> I did not commit TestHplsqlDb.java since the Apache pre-commit test starts
> executing it, and I did not manage to make it pass (the
a Path from an empty string
You are creating a Path from an empty string.
--
*Thanks & Regards *
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/
It's reading from the intermediate output
> of the previous job. It's a valid HDFS path, so not sure why the child finds
> an empty string.
>
> Any pointers on what else we can debug?
>
> Thanks,
>
> Viral
>
>
-ConfigurationValuestoSetforCompaction
On Tue, Dec 2, 2014 at 2:46 PM, vic0777 wrote:
>
>
> The compact operation will merge the data, and then the blocks may be
> reused.
>
>
>
>
> At 2014-12-02 17:10:43, "unmesha sreeveni" wrote:
>
> So that block will not be reused, right?
t), and you have to explicitly mark
> the table as transactional. You must also bucket the table. For example:
>
> create table HiveTest (...) clustered by (_col_) into _num_ buckets stored
> as orc tblproperties('transactional' = 'true');
>
> Alan.
>
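Putting the quoted advice together: a minimal sketch of an ACID-ready table and a row-level update against it. The table and column names below are illustrative, not from Alan's message; the required pieces are the bucketing clause, ORC storage, and the transactional table property.

```sql
-- Bucketed, ORC-backed table explicitly marked transactional
-- (all three are required before UPDATE/DELETE will work).
create table HiveTest (employeeid int, salary int)
clustered by (employeeid) into 2 buckets
stored as orc
tblproperties ('transactional' = 'true');

-- With the table transactional, row-level updates are accepted:
update HiveTest set salary = 50000 where employeeid = 19;
```

Without the tblproperties clause, Hive treats the table as non-transactional and rejects UPDATE and DELETE statements.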
> not
> written to the same block.
>
> Wantao
>
>
>
>
> At 2014-12-02 16:58:26, "unmesha sreeveni" wrote:
>
> Why is Hive "UPDATE" not reusing the blocks?
> The update is not written to the same block; why is that?
>
>
> On Tue, Dec 2, 2014 at
Why is Hive "UPDATE" not reusing the blocks?
The update is not written to the same block; why is that?
On Tue, Dec 2, 2014 at 10:50 AM, unmesha sreeveni
wrote:
> I tried to update my record in a previous Hive version and also tried
> update in Hive 0.14.0. The newer version wh
ct. I am
> wondering where the base directory is.
>
> Any help is appreciated.
>
> Thanks,
> Wantao
>
>
>
>
>
understanding is correct?
Any pointers?
On Mon, Dec 1, 2014 at 12:31 PM, yogendra reddy
wrote:
> WARN metastore.RetryingMetaStoreClient: MetaStoreClient lost connection.
>
It looks like hive-metastore service is not running.
Can you please check the same?
Any pointers would be appreciated.
On Mon, Dec 1, 2014 at 11:27 AM, unmesha sreeveni
wrote:
>
> On Mon, Dec 1, 2014 at 11:15 AM, yogendra reddy
> wrote:
>
>> hive --orcfiledump
>
>
> Hi yogendra
>
> shows
> Exception in thread "main" java.io.I
org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:80)
> ... 17 more
>
> p.s : Before setting the transaction specific properties I was able to run
> hive queries successfully
>
> Thanks,
> Yogendra
>
insert into table HiveMB select
employeeid,firstname,designation,salary,department;
e to retrieve 1 such bucket.
When I did a -cat, it is not in a human-readable format.
How can I see the data stored in each bucket?
While creating a partitioned table, when an update is performed, is the
partition deleted and updated with the new value, or is the entire block
deleted and written once again?
Where would be a good place to gather this knowledge?
Hi
Hope this link helps those who are trying to practise ACID
operations in Hive 0.14.
http://unmeshasreeveni.blogspot.in/2014/11/updatedeleteinsert-in-hive-0140.html
> hive.compactor.worker.threads
> 1
>
>
> *Make sure your table creation supports the ACID output format. Create it
> like the following*.
>
> create table test(id int, name varchar(128)) clustered by (id) into 2
> buckets stored as orc TBLPROPERTIES ('transactional' = 'true');
operations.
hive>
Am I doing anything wrong?
ion in the Hive table.
>>
>>
>> INSERT OVERWRITE TABLE tablename SELECT col1,col2,col3 from tabx where
>> col2='abc';
>>
>> Does the above work? Please advise.
>>
>>
>>
>
>
> --
> Nitin Pawar
>
Hi
This is a blog on Hive updates for an older version (Hive 0.12.0):
http://unmeshasreeveni.blogspot.in/2014/11/updating-partition-table-using-insert.html
Hope it helps someone.
Hi,
This is a blog on Hive partitioning.
http://unmeshasreeveni.blogspot.in/2014/11/hive-partitioning.html
Hope it helps someone.
_Trail;
What I tried to do in the query is: in the partition with department = A,
update employeeid 19's salary to 5.
Is that query statement wrong? Also, the update should not affect
departments B and C.
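For Hive versions without ACID UPDATE, a common workaround is to rewrite only the affected partition with INSERT OVERWRITE, keeping every row but substituting the new value. A sketch, assuming the HiveMB column layout quoted elsewhere in this thread (employeeid, firstname, designation, salary, department) and that department is the partition column:

```sql
-- Rewrite only partition department='A'; partitions B and C are untouched.
insert overwrite table HiveMB partition (department = 'A')
select employeeid,
       firstname,
       designation,
       case when employeeid = 19 then 5 else salary end as salary
from HiveMB
where department = 'A';
```

Because the partition is named statically in the PARTITION clause, the select list omits the department column; only rows of partition A are read and rewritten.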
*I created a non-partitioned Hive table, and using a select query
I inserted data into a partitioned Hive table.*
On Mon, Nov 17, 2014 at 10:00 AM, unmesha sreeveni
wrote:
> I created a Hive table with a *partition* and inserted data into a
> partitioned Hive table.
>
> Refered s
es displayed; while updating the last column it is fine.
Am I doing anything wrong?
Please suggest.
ts and perform
> the join.
>
> On Mon, Oct 6, 2014 at 8:49 PM, unmesha sreeveni
> wrote:
>
>> What I feel like is
>>
>> For question
>> 5
>> it says, the weblogs are already in HDFS (so no need to import
>> anything). Also these are log files, NOT d
ata and Cloud
>
> [image: View my profile on LinkedIn]
> <http://in.linkedin.com/in/adarshdeshratnam>
>
> On Mon, Oct 6, 2014 at 2:25 PM, unmesha sreeveni
> wrote:
>
>> Hi
>>
>> For the 5th question, can it be Sqoop?
>>
>> On Mon, Oct 6, 2014 at 1:24
Hi
For the 5th question, can it be Sqoop?
On Mon, Oct 6, 2014 at 1:24 PM, unmesha sreeveni
wrote:
> Yes
>
> On Mon, Oct 6, 2014 at 1:22 PM, Santosh Kumar
> wrote:
>
>> Are you preparing for the Cloudera certification exam?
>>
>>
>>
>>
>>
>
http://www.unmeshasreeveni.blogspot.in/2014/09/what-do-you-think-of-these-three.html
Reduce 100%
>
> Hi ,
>
> My MapReduce program takes almost 10 minutes to finish the job after it
> reaches map 100% reduce 100% ..
>
> Thanks
> Karthik
>
ion/integration_hive_partition_html
Let me know your thoughts.
Hi
http://www.unmeshasreeveni.blogspot.in/2014/04/how-to-create-tables-in-hive.html
This is a blog on creating tables in Hive for beginners.
Please post your comments.
Let me know your thoughts.
analyse user activity on a per-day basis.
>
> Thanks
> Shushant
>
>
>
>
as
> MAP data type and then get the value of appid etc.
>
>
>
> Thanks,
>
> Moiz
>
>
>
> *From:* unmesha sreeveni [mailto:unmeshab...@gmail.com]
> *Sent:* Wednesday, April 30, 2014 3:39 PM
> *To:* User - Hive
> *Subject:* Re: Create hive table to support m
Reduce starts only after all map tasks finish. Reducers pull data from
mappers, but processing is done only after all maps have finished.
It is better to look at the JobTracker UI instead of the console.
There you can see that the reducer starts only after map reaches 100%.
sql and
> will be processed there.
>
> Which data warehouse is suitable for approaches 1 and 2, and why?
>
> Thanks
> Shushant
>
>
tc, please help me to create a Hive table,
>
>
> |0|{\x22appid\x22:\x228\x22,\x22appname\x22:\x22CONVX-0008\x22,\x22bundleid\x22:\x22com.zeptolab.timetravel.free.google\x22}|14|
>
> -- Thanks,
>
>
> *Kishore *
>
6
30 6
30 6
30 6
30 6
30 6
If we need to practise the same thing in Hive, what would be the option?
Do we have the same in Hive?
Please suggest.
Thanks in Advance.