Hi
I am trying to import data from Sybase to HDFS but am getting a ZipException.
It looks like some of the jars are not getting downloaded, but I am not able
to trace what is going wrong.
Thanks.
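A ZipException during an import usually means one of the downloaded jars is truncated or corrupt. One way to trace which jar is bad is to try opening each one with `java.util.zip.ZipFile`; a sketch in plain Java (the `lib` directory path is an assumption, point it at whichever lib dir the job actually uses):

```java
import java.io.File;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

public class JarCheck {
    // Returns true if the file cannot be opened as a valid zip/jar archive.
    public static boolean isCorrupt(File jar) {
        try (ZipFile zf = new ZipFile(jar)) {
            return false; // opened cleanly
        } catch (ZipException e) {
            return true;  // truncated or corrupt archive
        } catch (Exception e) {
            return true;  // unreadable for some other reason
        }
    }

    public static void main(String[] args) {
        // "lib" is an assumed default; pass the real jar directory as args[0].
        File[] jars = new File(args.length > 0 ? args[0] : "lib")
                .listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars == null) return;
        for (File j : jars) {
            if (isCorrupt(j)) System.out.println("corrupt: " + j.getName());
        }
    }
}
```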
--
Regards,
Vikas
or any problem while connecting to
> Job tracker…
>
>
> Thanks
>
> Devaraj
>
>
> *From:* Vikas Jadhav [mailto:vikascjadha...@gmail.com]
> *Sent:* 13 June 2013 12:22
> *To:* user@hadoop.apache.org
> *Subject:* JobTracker UI shows onl
I have set up a Hadoop cluster on two nodes, but the JobTracker UI Cluster
Summary shows only one node.
The Namenode shows Live Nodes: 2, but data is always put on the same master
node, not on the slave node.
On the master node, jps shows all processes running.
On the slave node, jps shows tasktracker and datanode running.
i ha
es in the web, eg
> http://riccomini.name/posts/hadoop/2009-11-13-sort-reducer-input-value-hadoop/
> .
>
> Have a nice day,
> Sofia
>
> --
> *From:* Vikas Jadhav
> *To:* user@hadoop.apache.org
> *Sent:* Tuesday, April 23, 2013 8:44 AM
> *Subject:* Sort
Hi
How do I sort values in Hadoop using Hadoop's standard sorting algorithm
(i.e. the sorting facility provided by Hadoop)?
Requirement:
1) Values should be sorted depending on some part of the value.
For example (KEY, VALUE):
(0,"BC,4,XY")
(1,"DC,1,PQ")
(2,"EF,0,MN")
The sorted sequence should be reached at the reducer.
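In Hadoop this is the classic secondary-sort pattern: pull the part of the value you want to sort on into a composite key and give the job a sort comparator (via job.setSortComparatorClass) that orders by that part. The comparison itself, shown here in plain Java on the example records above (sorting on the numeric middle field is an assumption about which part of the value matters):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ValueSort {
    // Orders records like "BC,4,XY" by their numeric middle field; this is
    // the comparison a Hadoop sort comparator would apply to a composite key
    // that carries that field.
    public static final Comparator<String> BY_MIDDLE_FIELD =
            Comparator.comparingInt((String s) -> Integer.parseInt(s.split(",")[1]));

    public static List<String> sorted(List<String> records) {
        return records.stream().sorted(BY_MIDDLE_FIELD).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sorted(Arrays.asList("BC,4,XY", "DC,1,PQ", "EF,0,MN")));
    }
}
```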
On Sun, Apr 21, 2013 at 9:53 AM, Azuryy Yu wrote:
> You could look at the ChainReducer javadoc, which meets your requirement.
>
> --Send from my Sony mobile.
> On Apr 20, 2013 11:43 PM, "Vikas Jadhav" wrote:
>
>> Hello,
>> Can anyone help me in follow
Hello,
Can anyone help me with the following issue:
writing intermediate (key,value) pairs to a file and reading them back again.
Let us say I have to write each intermediate pair received at the reducer to a
file, then read it back as (key,value) pairs and use it for further processing.
I found IFile.java, which has a reader
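IFile is Hadoop's internal intermediate format, and its Reader/Writer are not really meant for user code. A simpler route is to write the pairs yourself with DataOutputStream and read them back, which is roughly what IFile does minus compression and checksums. A minimal sketch (the count-prefixed layout is my own choice, not IFile's actual on-disk format):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PairFile {
    // Writes (key, value) string pairs in a length-prefixed binary form.
    public static void write(File f, List<String[]> pairs) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            out.writeInt(pairs.size());
            for (String[] kv : pairs) {
                out.writeUTF(kv[0]);
                out.writeUTF(kv[1]);
            }
        }
    }

    // Reads the pairs back in the order they were written.
    public static List<String[]> read(File f) throws IOException {
        List<String[]> pairs = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            int n = in.readInt();
            for (int i = 0; i < n; i++) {
                pairs.add(new String[] { in.readUTF(), in.readUTF() });
            }
        }
        return pairs;
    }
}
```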
aying something like for every row in X, join it to all of the
> rows in Y where Y.a < something?
>
> Is that what you are suggesting?
>
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
>
> On Apr 10, 2013, at 9:11 AM, Vikas Jadhav
> wrote:
&g
lly in the reducer you would have your key and then the set of
>> rows that match the key. You would then perform the cross product on the
>> key's result set and output them to the collector as separate rows.
>>
>> I'm not sure why you would need the reduce context.
.
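The cross product described above is just two nested loops over the rows grouped under the key; in a real reducer each joined row would go to context.write, here it is collected in a list instead (the comma-joined row format is an assumption for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class CrossProduct {
    // Joins every row from the left side with every row from the right side,
    // as a reducer would for all rows sharing the same join key.
    public static List<String> join(List<String> left, List<String> right) {
        List<String> out = new ArrayList<>();
        for (String l : left) {
            for (String r : right) {
                out.add(l + "," + r); // in a reducer: context.write(key, joined)
            }
        }
        return out;
    }
}
```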
>>>>>> anyway,thank you.
>>>>>>
>>>>>>
>>>>>> 2013/3/12 samir das mohapatra
>>>>>>
>>>>>>> Through the RecordReader and FileStatus you can get it.
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Mar 12, 2013 at 4:08 PM, Roth Effy wrote:
>>>>>>>
>>>>>>>> Hi everyone,
>>>>>>>> I want to join the k-v pairs in Reduce(), but how do I get the record
>>>>>>>> position?
>>>>>>>> What I thought of is to save the context status, but class Context
>>>>>>>> doesn't implement a clone constructor.
>>>>>>>>
>>>>>>>> Any help will be appreciated.
>>>>>>>> Thank you very much.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>
>
--
Thanx and Regards,
Vikas Jadhav
Hello
I have a use case where I want to shuffle the same (key,value) pair to more
than one reducer.
Has anyone tried this, or can anyone suggest how to implement it?
I have created a JIRA for the same:
https://issues.apache.org/jira/browse/MAPREDUCE-5063
Thank you.
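One workaround that needs no framework change is to replicate the pair in the mapper: emit it once per target reducer with the reducer number tagged onto the key, and use a custom Partitioner that routes each tagged key by its tag. A sketch of the two pieces in plain Java (the `#` tag separator is an assumption; in a real job the second method would be the body of Partitioner.getPartition):

```java
import java.util.ArrayList;
import java.util.List;

public class MultiShuffle {
    // Mapper side: emit one tagged copy of the key per target reducer.
    public static List<String> replicate(String key, int numReducers) {
        List<String> tagged = new ArrayList<>();
        for (int r = 0; r < numReducers; r++) {
            tagged.add(key + "#" + r);
        }
        return tagged;
    }

    // Partitioner side: route a tagged key by its tag instead of its hash.
    public static int partitionFor(String taggedKey, int numReducers) {
        int sep = taggedKey.lastIndexOf('#');
        return Integer.parseInt(taggedKey.substring(sep + 1)) % numReducers;
    }
}
```

With this scheme the reducers must strip the tag back off the key before using it, and the shuffled data volume grows by the replication factor.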
--
Thanx and Regards,
Vikas Jadhav
For the first job, ANT downloads jars from the internet.
How do I build offline using ANT?
--
Thanx and Regards,
Vikas Jadhav
t to go to
> only one but in a random fashion ?
>
> AFAIK, the 1st is not possible. Someone on the list can correct me if I am wrong.
> 2nd is possible by just implementing your own partitioner which randomizes
> where each key goes (not sure what you gain by that).
>
>
> On Wed, Mar 1
> reducer to output.
>
>
> On Wed, Mar 13, 2013 at 2:15 PM, Vikas Jadhav wrote:
>
>> Hello,
>>
>> Since by default the Hadoop framework can shuffle a (key,value) pair to
>> only one reducer,
>>
>> I have a use case where I need to shuffle the same (key,value) pa
information in
> the usrlogs files.
> How do I go about the modification? I am new to Hadoop. Shall I simply
> open the appropriate file under src/mapred in Eclipse, modify, and save?
> Will that help?
>
> Thank you
>
> Regards,
> Preethi Ganeshan
>
--
Thanx and Regards,
Vikas Jadhav
5 SOME DATA FROM RDBMS and SOME DATA FROM HDFS then do filter and
> load into HDFS : *JDBC WITH Map/Reduce program*
>
>
> Note: Can anyone suggest, if I am wrong and we need to do something
> other than this, what would be easier to do?
>
>
> Regards,
>
> samir.
>
>
>
>
--
Thanx and Regards,
Vikas Jadhav
tioner you can either
> write multiple Partitioner implementations or simply one partitioner
> handling all different cases.
>
> Harsh, please correct me if I am wrong.
>
> Best,
> Mahesh Balija,
> Calsoft Labs.
>
>
> On Mon, Mar 4, 2013 at 8:32 PM, Vikas Jad
--
Thanx and Regards,
Vikas Jadhav
t; need a custom written "high level" partitioner and combiner that can create
> multiple instances of sub-partitioners/combiners and use the most likely
> one based on their input's characteristics (such as instance type, some
> tag, config., etc.).
>
>
help. It only sets the mapper class on a per-dataset basis.
2) Also, I am looking at the MapTask.java source file;
I just want to know where the mapper, partitioner, and combiner classes are
set for a particular file split
while executing a job.
Thank You
--
Thanx and Regards,
Vikas Jadhav
-- Forwarded message --
From: Vikas Jadhav
Date: Thu, Jan 31, 2013 at 11:14 PM
Subject: Re: Issue with Reduce Side join using datajoin package
To: user@hadoop.apache.org
***source
public class MyJoin extends Configured implements Tool {
    public static class MapClass extends DataJoinMapperBase {
        protected Text generateInputTag(String inputFile) {
            System.out.println("Starting generateInputTag() : " + inputFile);
about it in detail.
>
> HTH
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Thu, Jan 31, 2013 at 11:56 AM, Vikas Jadhav
> wrote:
>
>> Hi
>> I have one Windows machine and one Linux machine.
>> My Eclipse
-- Forwarded message --
From: Vikas Jadhav
Date: Tue, Jan 22, 2013 at 5:23 PM
Subject: Bulk Loading DFS Space issue in Hbase
To: u...@hbase.apache.org
Hi
I am trying to bulk load 700m of CSV data with 31 columns into HBase.
I have written a MapReduce program for it, but when I run my program
use for the namenode
> process.
>
> I hope that helps.
>
> Regards,
> Robert
>
> On Tue, Jan 22, 2013 at 3:54 AM, Vikas Jadhav wrote:
>
>>
>>
>> --
>> Thanx and Regards,
>> Vikas Jadhav
>>
>
>
--
Thanx and Regards,
Vikas Jadhav
-- Forwarded message --
From: Vikas Jadhav
Date: Sat, Jan 19, 2013 at 10:58 PM
Subject: new join algorithm using mapreduce
To: user@hadoop.apache.org
I am writing a new join algorithm using Hadoop
and want to do a multi-way join in a single MapReduce job.
map --> processes
ve from Hadoop any knowledge
>> of its prior existence -- do I have to manually delete files with OS
>> commands (what do I remove?) or is there some type of "bin/hadoop namenode
>> -delete" command that undoes the "-format" command?
>>
>> Thanks,
>> Glen
>>
>> --
>> Glen Mazza
>> Talend Community Coders - coders.talend.com
>> blog: www.jroller.com/gmazza
>>
>>
>
>
> --
> Glen Mazza
> Talend Community Coders - coders.talend.com
> blog: www.jroller.com/gmazza
>
>
--
Thanx and Regards,
Vikas Jadhav