Sqoop Import Problem ZipException

2014-07-11 Thread Vikas Jadhav
Hi, I am trying to import data from Sybase to HDFS but am getting a ZipException. It looks like some of the jars are not getting downloaded, but I am not able to trace what is going wrong. Thanks. -- * Regards,* * Vikas *

Re: JobTracker UI shows only one node instead of 2

2013-06-13 Thread Vikas Jadhav
or any problem while connecting to > Job tracker… > > Thanks > > Devaraj > > *From:* Vikas Jadhav [mailto:vikascjadha...@gmail.com] > *Sent:* 13 June 2013 12:22 > *To:* user@hadoop.apache.org > *Subject:* JobTracker UI shows onl

JobTracker UI shows only one node instead of 2

2013-06-12 Thread Vikas Jadhav
I have set up a Hadoop cluster on two nodes, but the JobTracker UI Cluster Summary shows only one node. The NameNode shows Live Nodes: 2, but data is always put on the same master node, never on the slave node. On the master node, jps shows all processes running; on the slave node, jps shows the tasktracker and datanode running. i ha

Re: Sorting Values sent to reducer NOT based on KEY (Depending on part of VALUE)

2013-04-23 Thread Vikas Jadhav
es in the web, eg > http://riccomini.name/posts/hadoop/2009-11-13-sort-reducer-input-value-hadoop/ > . > > Have a nice day, > Sofia > > -- > *From:* Vikas Jadhav > *To:* user@hadoop.apache.org > *Sent:* Tuesday, April 23, 2013 8:44 AM > *Subject:* Sort

Sorting Values sent to reducer NOT based on KEY (Depending on part of VALUE)

2013-04-22 Thread Vikas Jadhav
Hi, how can I sort values in Hadoop using Hadoop's standard sorting facility (i.e. the sorting the framework already provides)? Requirement: 1) Values should be sorted depending on some part of the value. For example (KEY,VALUE): (0,"BC,4,XY") (1,"DC,1,PQ") (2,"EF,0,MN") Sorted sequence @ reduce reached
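What is being asked for here is the classic "secondary sort" pattern: fold the value field you want sorted into a composite key, partition and group on the natural key only, and let Hadoop's shuffle sort do the work. A minimal Python sketch of those semantics (the names are illustrative, not Hadoop's API):

```python
# Sketch of Hadoop "secondary sort": make values arrive at the reducer
# ordered by a field of the value, by promoting that field into the
# sort key. Illustrative only -- in real Hadoop this is a composite
# WritableComparable plus a partitioner/grouping comparator.

def secondary_sort(pairs, value_field=1):
    """pairs: iterable of (key, value) where value is 'BC,4,XY'-style CSV.
    Returns the pairs in the order a reducer would see them when the
    chosen value field is part of the composite sort key."""
    def composite_key(kv):
        key, value = kv
        fields = value.split(",")
        return (key, int(fields[value_field]))  # natural key first, then sort field
    return sorted(pairs, key=composite_key)

pairs = [(0, "BC,4,XY"), (0, "DC,1,PQ"), (0, "EF,0,MN")]
print(secondary_sort(pairs))
# values for key 0 now arrive ordered by the middle field: 0, 1, 4
```

In actual Hadoop the same effect needs three pieces: the composite key class, `setPartitionerClass` hashing only the natural key, and `setGroupingComparatorClass` comparing only the natural key, so all composites for one natural key reach the same reduce call already sorted.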

Re: Writing intermediate key,value pairs to file and read it again

2013-04-22 Thread Vikas Jadhav
. On Sun, Apr 21, 2013 at 9:53 AM, Azuryy Yu wrote: > you would look at chain reducer java doc, which meet your requirement. > > --Send from my Sony mobile. > On Apr 20, 2013 11:43 PM, "Vikas Jadhav" wrote: > >> Hello, >> Can anyone help me in follow

Writing intermediate key,value pairs to file and read it again

2013-04-20 Thread Vikas Jadhav
Hello, can anyone help me with the following issue: writing intermediate key,value pairs to a file and reading them again. Let us say I have to write each intermediate pair received @ the reducer to a file, then read that back as key,value pairs again and use them for processing. I found IFile.java, which has a reader
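Hadoop's own spill files (IFile, SequenceFile) are essentially streams of length-prefixed key/value records. As a rough illustration of that idea only -- the byte layout below is made up for the sketch; the real formats add checksums, compression, and sync/EOF markers:

```python
import struct, io

# Toy version of what Hadoop's IFile/SequenceFile do: write key-value
# pairs to a byte stream with 4-byte big-endian length prefixes, then
# read the same pairs back.

def write_pairs(stream, pairs):
    for key, value in pairs:
        for part in (key.encode(), value.encode()):
            stream.write(struct.pack(">i", len(part)))  # length prefix
            stream.write(part)

def read_pairs(stream):
    out = []
    while True:
        header = stream.read(4)
        if not header:               # clean EOF: no more records
            return out
        klen = struct.unpack(">i", header)[0]
        key = stream.read(klen).decode()
        vlen = struct.unpack(">i", stream.read(4))[0]
        out.append((key, stream.read(vlen).decode()))

buf = io.BytesIO()
write_pairs(buf, [("a", "1"), ("b", "2")])
buf.seek(0)
print(read_pairs(buf))  # [('a', '1'), ('b', '2')]
```

In practice, rather than hand-rolling a format, a SequenceFile.Writer/Reader pair in the reducer would be the idiomatic way to persist and re-read intermediate pairs.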

Re: How can I record some position of context in Reduce()?

2013-04-10 Thread Vikas Jadhav
aying something like for every row in X, join it to all of the > rows in Y where Y.a < something? > > Is that what you are suggesting? > > > Sent from a remote device. Please excuse any typos... > > Mike Segel > > On Apr 10, 2013, at 9:11 AM, Vikas Jadhav > wrote: &g

Re: How can I record some position of context in Reduce()?

2013-04-10 Thread Vikas Jadhav
lly in the reducer you would have your key and then the set of >> rows that match the key. You would then perform the cross product on the >> key's result set and output them to the collector as separate rows. >> >> I'm not sure why you would need the reduce context.

Re: How can I record some position of context in Reduce()?

2013-04-08 Thread Vikas Jadhav
. >> anyway, thank you. >> 2013/3/12 samir das mohapatra wrote: >> Through the RecordReader and FileStatus you can get it. >> On Tue, Mar 12, 2013 at 4:08 PM, Roth Effy wrote: >> Hi everyone, I want to join the k-v pairs in Reduce(), but how do I get the record position? What I thought of is to save the context status, but class Context doesn't implement a clone constructor. Any help will be appreciated. Thank you very much. -- * * * Thanx and Regards* * Vikas Jadhav*

shuffling one intermediate pair to more than one reducer

2013-03-24 Thread Vikas Jadhav
Hello, I have a use case where I want to shuffle the same pair to more than one reducer. Has anyone tried this, or can anyone suggest how to implement it? I have created a JIRA for it: https://issues.apache.org/jira/browse/MAPREDUCE-5063 Thank you. -- * * * Thanx and Regards* * Vikas Jadhav*
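Absent framework support (which is what MAPREDUCE-5063 asks for), the usual workaround is for the mapper to emit one copy of the pair per target reducer, with the reducer index folded into the key, and a partitioner that routes on that index. A hedged Python sketch of the idea (not Hadoop code):

```python
# Stock Hadoop sends each (key, value) to exactly one reducer. A common
# workaround is map-side replication: emit one tagged copy per target
# reducer and partition on the tag. Names below are illustrative.

NUM_REDUCERS = 3

def map_replicate(key, value, num_reducers=NUM_REDUCERS):
    # emit ((reducer_id, key), value) once for every reducer that
    # should see this pair
    return [((r, key), value) for r in range(num_reducers)]

def partition(composite_key, num_reducers):
    reducer_id, _natural_key = composite_key
    return reducer_id % num_reducers  # route purely on the tag

pairs = map_replicate("k1", "v1")
print([partition(k, NUM_REDUCERS) for k, _ in pairs])  # [0, 1, 2]
```

The cost is obvious: shuffle traffic multiplies by the replication factor, which is presumably why a framework-level mechanism was proposed instead.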

How to build hadoop code using ANT offline

2013-03-14 Thread Vikas Jadhav
For the first build, ANT downloads jars from the internet. How can I build offline using ANT? -- * * * Thanx and Regards* * Vikas Jadhav*

Re: How to shuffle (Key,Value) pair from mapper to multiple reducer

2013-03-13 Thread Vikas Jadhav
t to go to > only one but in a random fashion ? > > AFAIK, 1st is not possible. Someone on the list can correct if I am wrong. > 2nd is possible by just implementing your own partitioner which randomizes > where each key goes (not sure what you gain by that). > > > On Wed, Mar 1

Re: How to shuffle (Key,Value) pair from mapper to multiple reducer

2013-03-13 Thread Vikas Jadhav
> reducer to output. > > > On Wed, Mar 13, 2013 at 2:15 PM, Vikas Jadhav wrote: > >> Hello, >> >> As by default Hadoop framework can shuffle (key,value) pair to only one >> reducer >> >> I have use case where i need to shufffle same (key,value) pa

Re:

2013-03-11 Thread Vikas Jadhav
information in > the usrlogs files. > How do i go about the modification? I am new to Hadoop. Shall i simply > open the src .mapred . appropriate file in eclipse modify and save? > Will that help? > > Thank you > > Regards, > Preethi Ganeshan > -- * * * Thanx and Regards* * Vikas Jadhav*

Re:

2013-03-11 Thread Vikas Jadhav
ify and save? > Will that help? > > Thank you > > Regards, > Preethi Ganeshan > -- * * * Thanx and Regards* * Vikas Jadhav*

Re: How to solve one Scenario in hadoop ?

2013-03-06 Thread Vikas Jadhav
5 SOME DATA FROM RDBMS and SOME DATA FROM HDFS then do filter and > load into HDFS : *JDBC WITH Map/Reduce program* > > > Note: Can any one suggest me, if I am wrong and we need to do something > other then this, which will be easy to do . > > > Regards, > > samir. > > > > -- * * * Thanx and Regards* * Vikas Jadhav*

Re: mapper combiner and partitioner for particular dataset

2013-03-06 Thread Vikas Jadhav
tioner you can either > write mulitple Partitioner implementations or simply one partitioner > handling all different cases. > > Harsh, please correct me if I am wrong. > > Best, > Mahesh Balija, > Calsoft Labs. > > > On Mon, Mar 4, 2013 at 8:32 PM, Vikas Jad

Re: Hadoop cluster setup - could not see second datanode

2013-03-06 Thread Vikas Jadhav
-- * * * Thanx and Regards* * Vikas Jadhav*

Re: mapper combiner and partitioner for particular dataset

2013-03-04 Thread Vikas Jadhav
t; need a custom written "high level" partitioner and combiner that can create > multiple instances of sub-partitioners/combiners and use the most likely > one based on their input's characteristics (such as instance type, some > tag, config., etc.). > >

Fwd: mapper combiner and partitioner for particular dataset

2013-03-03 Thread Vikas Jadhav
help. It only sets the mapper class in a per-dataset manner. 2) Also, I am looking at the MapTask.java file from the source code; I just want to know where the mapper, partitioner, and combiner classes are set for a particular filesplit while executing a job. Thank You -- * * * Thanx and Regards* * Vikas Jadhav

Fwd: Issue with Reduce Side join using datajoin package

2013-02-01 Thread Vikas Jadhav
-- Forwarded message -- From: Vikas Jadhav Date: Thu, Jan 31, 2013 at 11:14 PM Subject: Re: Issue with Reduce Side join using datajoin package To: user@hadoop.apache.org ***source public class MyJoin extends Configured implements Tool

Re: Issue with Reduce Side join using datajoin package

2013-01-31 Thread Vikas Jadhav
***source public class MyJoin extends Configured implements Tool { public static class MapClass extends DataJoinMapperBase { protected Text generateInputTag(String inputFile) { System.out.println("Starting generateInputTag() : "+inputFile)
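For context, the reduce-side join that the datajoin package implements works by tagging each record with its source (the string `generateInputTag()` returns), grouping records by join key in the shuffle, and emitting the cross product of the per-source groups in the reducer. A simplified Python model of those semantics (not the actual DataJoin API):

```python
from collections import defaultdict
from itertools import product

# Model of the reduce-side join behind DataJoinMapperBase /
# DataJoinReducerBase: mappers tag rows with their source, the shuffle
# groups by join key, and the reducer crosses the tagged groups.
# Function and field names here are illustrative.

def reduce_side_join(tagged_records):
    """tagged_records: iterable of (join_key, source_tag, row)."""
    by_key = defaultdict(lambda: defaultdict(list))
    for key, tag, row in tagged_records:
        by_key[key][tag].append(row)
    joined = []
    for key, groups in by_key.items():
        if len(groups) < 2:   # key seen on only one side: inner join drops it
            continue
        for combo in product(*groups.values()):
            joined.append((key,) + combo)
    return joined

records = [
    (1, "customers", "Alice"), (1, "orders", "book"),
    (1, "orders", "pen"), (2, "customers", "Bob"),
]
print(reduce_side_join(records))
# [(1, 'Alice', 'book'), (1, 'Alice', 'pen')]
```

All rows sharing a join key must fit in the reducer's memory in this scheme, which is the usual scalability caveat for reduce-side joins.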

Re: Issue with running hadoop program using eclipse

2013-01-31 Thread Vikas Jadhav
about it in detail. > > HTH > > Warm Regards, > Tariq > https://mtariq.jux.com/ > cloudfront.blogspot.com > > > On Thu, Jan 31, 2013 at 11:56 AM, Vikas Jadhav > wrote: > >> Hi >> I have one windows machine and one linux machine >> my eclipse

Fwd: Bulk Loading DFS Space issue in Hbase

2013-01-23 Thread Vikas Jadhav
-- Forwarded message -- From: Vikas Jadhav Date: Tue, Jan 22, 2013 at 5:23 PM Subject: Bulk Loading DFS Space issue in Hbase To: u...@hbase.apache.org Hi, I am trying to bulk load 700m CSV data with 31 columns into HBase. I have written a MapReduce program for it, but when I run my program

Re: What is Heap Space in Hadoop Heap Size is 222.44 MB / 888.94 MB (25%)

2013-01-22 Thread Vikas Jadhav
use for the namenode > process. > > I hope that helps. > > Regards, > Robert > > On Tue, Jan 22, 2013 at 3:54 AM, Vikas Jadhav wrote: > >> >> >> -- >> * >> * >> * >> >> Thanx and Regards* >> * Vikas Jadhav* >> > > -- * * * Thanx and Regards* * Vikas Jadhav*
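As the reply notes, the Heap Size figure on the NameNode web UI is the JVM heap of the namenode process itself. In Hadoop 1.x it is controlled from conf/hadoop-env.sh; the values below are examples, not recommendations:

```shell
# conf/hadoop-env.sh (Hadoop 1.x) -- example values only
# Default maximum heap for all Hadoop daemons, in MB:
export HADOOP_HEAPSIZE=1000

# Or raise the heap for the namenode alone via its daemon-specific opts:
export HADOOP_NAMENODE_OPTS="-Xmx2g $HADOOP_NAMENODE_OPTS"
```

The "used / total" numbers on the UI are then just the JVM's current heap usage against that configured maximum.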

What is Heap Space in Hadoop Heap Size is 222.44 MB / 888.94 MB (25%)

2013-01-22 Thread Vikas Jadhav
-- * * * Thanx and Regards* * Vikas Jadhav*

Fwd: new join algorithm using mapreduce

2013-01-20 Thread Vikas Jadhav
-- Forwarded message -- From: Vikas Jadhav Date: Sat, Jan 19, 2013 at 10:58 PM Subject: new join algorithm using mapreduce To: user@hadoop.apache.org I am writing a new join algorithm using Hadoop and want to do a multi-way join in a single MapReduce job. map --> processes

Re: DFS filesystem used for Standalone and Pseudo-Distributed operation

2013-01-17 Thread Vikas Jadhav
ve from Hadoop any knowledge of its prior existence -- do I have to manually delete files with OS commands (what do I remove?) or is there some type of "bin/hadoop namenode -delete" command that undoes the "-format" command? > Thanks, > Glen > -- > Glen Mazza > Talend Community Coders - coders.talend.com > blog: www.jroller.com/gmazza > -- * * * Thanx and Regards* * Vikas Jadhav*