Re: Getting error from sqoop2 command

2013-10-08 Thread samir das mohapatra
Dear Sqoop user/dev, I am facing an issue, given below. Do you have any idea why I am facing this error and what the problem could be? sqoop:000 show connector --all Exception has occurred during processing command Exception: com.sun.jersey.api.client.UniformInterfaceException Message:

Facing issue using Sqoop2

2013-10-08 Thread samir das mohapatra
Dear All, I am getting an error like the one mentioned below; has anyone hit this with sqoop2? Error: sqoop:000 set server --host hostname1 --port 8050 --webapp sqoop Server is set successfully sqoop:000 show server -all Server host: hostname1 Server port: 8050 Server webapp: sqoop sqoop:000 show version --all

Migrating from Legacy to Hadoop.

2013-10-08 Thread Jitendra Yadav
Hi All, We are planning to consolidate our 3 existing warehouse databases into a Hadoop cluster. In our testing phase we have designed the target environment and transferred the data from source to target (not in sync, but almost completed). These legacy systems were using traditional ETL/replication

Re: Migrating from Legacy to Hadoop.

2013-10-08 Thread Jitendra Yadav
Hi Bertrand, Thanks for your reply. As per my understanding, the mentioned open source tools do not support procedural language (PL) flexibility, right? I was looking for some other alternatives so that we can migrate our existing code rather than creating Java UDFs etc. So handling complex ETL business

Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
Hello, My name is Indrashish Basu and I am a Masters student in the Department of Electrical and Computer Engineering. Currently I am doing my research project on a Hadoop implementation on an ARM processor and am facing an issue while trying to run a sample Hadoop source code on the same. Every

Modifying Grep to read Sequence/Snappy files

2013-10-08 Thread Xuri Nagarin
Hi, I am trying to get the Grep example bundled with CDH to read Sequence/Snappy files. By default, the program throws errors when trying to read them: java.io.EOFException: Unexpected end of block in input stream at
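[Editor's note: the usual fix, as a minimal sketch rather than the stock Grep driver, is to point the job at SequenceFileInputFormat instead of the default text input format; it handles Snappy-compressed blocks transparently when the codec is on the classpath. Class and path handling below are illustrative, assuming the newer org.apache.hadoop.mapreduce API.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class GrepSeqSketch {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "grep-over-sequencefiles");
        // Read SequenceFiles; Snappy block compression is decoded by the
        // input format as long as the Snappy codec is available.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /in/seq
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /out/grep
        // Mapper/reducer wiring for the actual grep is omitted in this sketch.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }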

Re: Error putting files in the HDFS

2013-10-08 Thread Jitendra Yadav
As per your dfs report, the available DataNodes count is ZERO in your cluster. Please check your datanode logs. Regards Jitendra On 10/8/13, Basu,Indrashish indrash...@ufl.edu wrote: Hello, My name is Indrashish Basu and I am a Masters student in the Department of Electrical and Computer

Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
Hi Jitendra, This is what I am getting in the datanode logs: 2013-10-07 11:27:41,960 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted. 2013-10-07 11:27:41,961 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...

Re: Error putting files in the HDFS

2013-10-08 Thread Mohammad Tariq
You don't have any space left in your HDFS. Delete some old data or add additional storage. Warm Regards, Tariq cloudfront.blogspot.com On Tue, Oct 8, 2013 at 11:47 PM, Basu,Indrashish indrash...@ufl.edu wrote: Hi, Just to update on this, I have deleted all the old logs and files

Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
Hi, Just to update on this: I have deleted all the old logs and files from the /tmp and /app/hadoop directories and restarted all the nodes. I now have 1 datanode available, as per the information below: Configured Capacity: 3665985536 (3.41 GB) Present Capacity: 24576 (24 KB) DFS

Re: Error putting files in the HDFS

2013-10-08 Thread Jitendra Yadav
Yes. Thanks Jitendra On 10/8/13, Basu,Indrashish indrash...@ufl.edu wrote: Hi, Just to update on this, I have deleted all the old logs and files from the /tmp and /app/hadoop directory, and restarted all the nodes, I have now 1 datanode available as per the below information: Configured

Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
Hi Tariq, Thanks a lot for your help. Can you please let me know the path where I can check the old files in the HDFS and remove them accordingly? I am sorry to bother you with these questions; I am absolutely new to Hadoop. Thanks again for your time and patience. Regards, Indrashish

Accessing a secure cluster from another

2013-10-08 Thread Bhooshan Mogal
Hi, What's the recommended way to access a secure cluster from another (both are configured to use the same Kerberos realm)? For example, can I run a map-reduce job with input on a secure cluster and output on another? Do I have to change any configurations or add specific credentials for the
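[Editor's note: one commonly sketched approach, purely illustrative and assuming both namenodes share the realm as described (host names, ports, and paths below are hypothetical): address each cluster by its full hdfs:// URI in one job, and list both namenodes so delegation tokens are obtained for each.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CrossClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask the job client to fetch delegation tokens for both namenodes
        // (Hadoop 2.x property; a kinit'ed ticket or keytab login is assumed).
        conf.set("mapreduce.job.hdfs-servers",
            "hdfs://nnA.example.com:8020,hdfs://nnB.example.com:8020");
        Job job = Job.getInstance(conf, "cross-cluster-sketch");
        FileInputFormat.addInputPath(job,
            new Path("hdfs://nnA.example.com:8020/data/in"));
        FileOutputFormat.setOutputPath(job,
            new Path("hdfs://nnB.example.com:8020/data/out"));
        // Mapper/reducer wiring omitted.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }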

Hadoop Fault Injection examples

2013-10-08 Thread Felipe Gutierrez
Hi all, I am new to Hadoop and I am researching fault injection. There is this web site with some examples: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/FaultInjectFramework.html but it doesn't say how to implement more fault injection examples using AspectJ. Does
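[Editor's note: for the flavor of what such an aspect looks like, here is a minimal annotation-style AspectJ sketch. It is not taken from the FI framework itself; the target pointcut and probability are illustrative, and AspectJ weaving of the HDFS classes is assumed to be configured.]

    import java.io.IOException;
    import java.util.Random;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    // Illustrative fault-injection aspect: with small probability, throw an
    // IOException before a DataNode method runs. The pointcut target is an
    // example; pick any method whose failure you want to simulate.
    @Aspect
    public class DiskErrorInjector {
      private static final Random RAND = new Random();

      @Before("execution(* org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(..))")
      public void injectFault() throws IOException {
        if (RAND.nextFloat() < 0.01f) { // inject on ~1% of calls
          throw new IOException("FI: injected disk error");
        }
      }
    }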

Re: Error putting files in the HDFS

2013-10-08 Thread Mohammad Tariq
You are welcome Basu. Not a problem. You can use bin/hadoop fs -lsr / to list all the HDFS files and directories. See which files are no longer required and delete them using bin/hadoop fs -rm /path/to/the/file Warm Regards, Tariq cloudfront.blogspot.com On Tue, Oct 8, 2013 at 11:59

Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
Hi Tariq, Thanks for your help again. I tried deleting the old HDFS files and directories as you suggested, then did the reformatting and started all the nodes. However, after running the dfsadmin report I am again seeing that no datanode is generated.

RE: MapReduce task-worker assignment

2013-10-08 Thread John Lilley
Thanks! From: Arun C Murthy [mailto:a...@hortonworks.com] Sent: Monday, October 07, 2013 6:07 PM To: user@hadoop.apache.org Subject: Re: MapReduce task-worker assignment Short version: MR provides all the info it can (about all its tasks' locations) and the YARN scheduler deals with

Re: Migrating from Legacy to Hadoop.

2013-10-08 Thread Peyman Mohajerian
I wonder if a JDBC driver over Hive could help you, if your legacy ETL job can talk to a JDBC driver. It is a slow way of writing to HDFS and I don't have any experience doing it, e.g.: http://doc.cloveretl.com/documentation/UserGuide/index.jsp?topic=/com.cloveretl.gui.docs/docs/hive-connection.html
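[Editor's note: for a sense of the idea, a minimal sketch of writing through HiveServer2's standard JDBC driver (org.apache.hive.jdbc.HiveDriver); the host, database, and table names are hypothetical. Row-at-a-time JDBC writes are slow, as the poster notes; bulk LOAD DATA is the usual path.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HiveJdbcSketch {
      public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // HiveServer2 driver
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://hiveserver.example.com:10000/default");
             Statement stmt = conn.createStatement()) {
          // Move staged rows into a Hive-managed table over JDBC.
          stmt.execute("INSERT INTO TABLE sales_stage SELECT * FROM sales_raw");
        }
      }
    }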

how to use ContentSummary

2013-10-08 Thread kun yan
Hi all, In org.apache.hadoop.fs I found ContentSummary, but I am not sure how to use it. Who can help me? Thanks a lot -- In the Hadoop world I am just a novice, exploring the entire Hadoop ecosystem. I hope one day I can contribute my own code. YanBit yankunhad...@gmail.com

Re: Migrating from Legacy to Hadoop.

2013-10-08 Thread Frank
A low-cost alternative ETL is Syncsort DMX-h ETL, which extends Hadoop MapReduce. Sent from my iPad On Oct 8, 2013, at 10:16 PM, Peyman Mohajerian mohaj...@gmail.com wrote: I wonder if a JDBC driver over Hive could help you. If your legacy ETL job can talk to a JDBC driver, it is a slow way of

RE: how to use ContentSummary

2013-10-08 Thread Brahma Reddy Battula
Please check the following for the same: DistributedFileSystem dfs = new DistributedFileSystem(); dfs.initialize(URI.create("hdfs://hacluster"), conf); ContentSummary cnSum = dfs.getContentSummary(new Path(dirName)); cnSum.getQuota(); cnSum.getSpaceQuota();
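[Editor's note: spelled out as a self-contained sketch; the URI and path are hypothetical, and FileSystem.get is the more usual entry point than instantiating DistributedFileSystem directly.]

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ContentSummaryDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        // Summarize a directory tree: counts, total size, and quotas.
        ContentSummary sum = fs.getContentSummary(new Path("/user/data"));
        System.out.println("files: " + sum.getFileCount());
        System.out.println("dirs: " + sum.getDirectoryCount());
        System.out.println("bytes: " + sum.getLength());
        System.out.println("name quota: " + sum.getQuota());
        System.out.println("space quota: " + sum.getSpaceQuota());
      }
    }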