Neo4j and Hadoop

2017-01-10 Thread unmesha sreeveni
Hi, I have my input file in HDFS. How can I store that data into a Neo4j DB? Is there any way to do the same? -- *Thanks & Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

.doc Custom Format for Hadoop

2016-01-04 Thread unmesha sreeveni
Is there a custom .doc InputFormat for Hadoop that is already built?

Re: Doubt in DoubleWritable

2015-11-23 Thread unmesha sreeveni
ing()); } if (Double.parseDouble(value[5].toString()) != 0) { total_records_Windspeed = total_records_Windspeed + 1; sumvalueWindspeed = sumvalueWindspeed + Double.parseDouble(value[4].toString()); } } ​Attaching the code​ -- *Thanks & Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer*

Permutations and combination in mapreduce

2015-11-04 Thread unmesha sreeveni
Hi, has anyone implemented permutations and combinations in MapReduce? -- *Thanks & Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Not able to copy one HDFS data to another HDFS location using distcp

2015-09-07 Thread unmesha sreeveni
tem.currentTimeMillis();* *DistCp distCp=new DistCp(config,null);* *distCp.run(args); * * return System.currentTimeMillis() - st;* *}* *}* Am I doing anything wrong? Please suggest. -- *Thanks & Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* http://www.unmeshasreeveni.blogspot.in/

Copy Data From HDFS to FTP

2015-08-23 Thread unmesha sreeveni
Hi How can I copy my HDFS data to an FTP server? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Copy Data From HDFS to FTP

2015-08-23 Thread unmesha sreeveni
showing -cp: The value of property fs.ftp.password.MYIP must not be null On Mon, Aug 24, 2015 at 10:52 AM, Chinnappan Chandrasekaran chiranchan...@jos.com.sg wrote: hadoop fs -cp ftp://userid@youipaddress/directory *From:* unmesha sreeveni [mailto:unmeshab...@gmail.com] *Sent

Build Failure - SciHadoop

2015-05-06 Thread unmesha sreeveni
: directory not found: /installSCID/git/thredds/udunits/target/classes [ERROR] Usage: javac options source files [ERROR] use -help for a list of possible options [ERROR] - [Help 1] [ERROR] Has anyone come across the same? I suspect I am wrong somewhere. -- *Thanks Regards * *Unmesha Sreeveni U.B

Re: Connect c language with HDFS

2015-05-04 Thread unmesha sreeveni
://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/LibHdfs.html -- Alexander Alten-Lorenz m: wget.n...@gmail.com b: mapredit.blogspot.com On May 4, 2015, at 10:57 AM, unmesha sreeveni unmeshab...@gmail.com wrote: Hi Can we connect c with HDFS using cloudera hadoop distribution

Re: Connect c language with HDFS

2015-05-04 Thread unmesha sreeveni
be within /opt/cloudera/parcels/ CDH/lib64/ (or similar). Or just use linux' locate (locate libhdfs.so*) to find the library. -- Alexander Alten-Lorenz m: wget.n...@gmail.com b: mapredit.blogspot.com On May 4, 2015, at 11:39 AM, unmesha sreeveni unmeshab...@gmail.com wrote: thanks alex

Re: How to stop a mapreduce job from terminal running on Hadoop Cluster?

2015-04-12 Thread unmesha sreeveni
example.jar inputpath outputpath If the job is time-consuming and we want to stop it midway, which command should be used? Or is there any other way to do that? Thanks, -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa

Does cleanup() in Hadoop result in aggregation over the whole file or not?

2015-02-27 Thread unmesha sreeveni
I have an input file whose last column is the class label: 7.4 0.29 0.5 1.8 0.042 35 127 0.9937 3.45 0.5 10.2 7 1 10 0.41 0.45 6.2 0.071 6 14 0.99702 3.21 0.49 11.8 7 -1 7.8 0.26 0.27 1.9 0.051 52 195 0.9928 3.23 0.5 10.9 6 1 6.9 0.32 0.3 1.8 0.036 28 117 0.99269 3.24 0.48 11 6 1
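A key point in this thread is that cleanup() runs once per task, after the last record of that task's split, so with several splits each mapper emits only a per-split aggregate and a reducer is still needed to combine them. Below is a minimal plain-Java sketch (no Hadoop dependencies) of the accumulate-in-map, emit-in-cleanup pattern; class and method names are illustrative, not Hadoop's API.

```java
public class CleanupAggregate {
    private double sum = 0;   // accumulated across all map() calls of one task
    private long count = 0;

    // called once per record, like Mapper.map(); aggregates the first column
    public void map(String line) {
        String[] cols = line.trim().split("\\s+");
        sum += Double.parseDouble(cols[0]);
        count++;
    }

    // called once per task, after the last record, like Mapper.cleanup()
    public double[] cleanup() {
        return new double[] { sum, count };
    }

    public static void main(String[] args) {
        CleanupAggregate task = new CleanupAggregate();
        for (String rec : new String[] { "7.4 0.29", "6.2 0.071", "7.8 0.26" }) {
            task.map(rec);
        }
        double[] out = task.cleanup();
        System.out.println("sum=" + out[0] + " count=" + (long) out[1]);
    }
}
```

If the job runs with more than one split, each task produces one such partial (sum, count) pair, and a reducer (or a single-reducer second job) must add them up to get the whole-file aggregate.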

Re: Get method in Writable

2015-02-22 Thread unmesha sreeveni
, TreeInfoWritable.class);* I think just use gson.fromJson, not toJson(setupData is already json string, i think). Is this right ? Drake 민영근 Ph.D kt NexR On Sat, Feb 21, 2015 at 4:55 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Am I able to get the values from writable of a previous job. ie I

Get method in Writable

2015-02-20 Thread unmesha sreeveni
Can I get the values from a Writable of a previous job? i.e., I have 2 MR jobs. *MR 1:* I need to pass 3 elements as values from the reducer, and the key is NullWritable. So I created a custom Writable class to achieve this. * public class TreeInfoWritable implements Writable{* * DoubleWritable

Re: writing mappers and reducers question

2015-02-19 Thread unmesha sreeveni
You can write MapReduce jobs in Eclipse for testing purposes. Once done, you can create a jar and run it on your single-node or multi-node cluster. But please note that when running in such IDEs with only the Hadoop dependencies, there will be no input splits, multiple mappers, etc.

Re: How to get Hadoop's Generic Options value

2015-02-19 Thread unmesha sreeveni
Try implementing your program public class YourDriver extends Configured implements Tool { main() run() } Then supply your file using -D option. Thanks Unmesha Biju

Delete output folder automatically in CRUNCH (FlumeJava)

2015-02-17 Thread unmesha sreeveni
Hi, I am new to FlumeJava. I ran wordcount in it. But how can I automatically delete the output folder in the code block, instead of going back and deleting the folder manually? Thanks in advance. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security

Neural Network in hadoop

2015-02-12 Thread unmesha sreeveni
propagation in the above-mentioned gradient descent neural network algorithm? Or is it fine with this implementation? 4. What is the termination condition mentioned in the algorithm? Please help me with some pointers. Thanks in advance. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata

Re: Neural Network in hadoop

2015-02-12 Thread unmesha sreeveni
of perceptron let's find the error: (oj*(1-oj)(tj-oj)). Check if the error is less than the threshold; then the delta weight is not updated, else update the delta weight. Is it like that? On Thu, Feb 12, 2015 at 5:14 PM, unmesha sreeveni unmeshab...@gmail.com wrote: I am trying to implement Neural Network

Re: How to partition a file to smaller size for performing KNN in hadoop mapreduce

2015-01-20 Thread unmesha sreeveni
/data Drake 민영근 Ph.D On Wed, Jan 21, 2015 at 2:12 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Yes I tried the same Drake. I dont know if I understood your answer. Instead of loading them into setup() through cache I read them directly from HDFS in map section. and for each

Re: How to partition a file to smaller size for performing KNN in hadoop mapreduce

2015-01-20 Thread unmesha sreeveni
cache. They use file directly from HDFS with short circuit local read. Like a shared storage method, but almost every node has the data with high-replication factor. Drake 민영근 Ph.D On Wed, Jan 21, 2015 at 1:49 PM, unmesha sreeveni unmeshab...@gmail.com wrote: But stil if the model is very

Re: How to partition a file to smaller size for performing KNN in hadoop mapreduce

2015-01-15 Thread unmesha sreeveni
Is there any way? Waiting for a reply. I have posted the question everywhere, but no one is responding. I feel this is the right place to ask doubts, as some of you may have come across the same issue and gotten stuck. On Thu, Jan 15, 2015 at 12:34 PM, unmesha sreeveni unmeshab...@gmail.com wrote

How to partition a file to smaller size for performing KNN in hadoop mapreduce

2015-01-14 Thread unmesha sreeveni
, 2nd record Distance parttition1,partition2,... This is what came to my thought. Is there any further way. Any pointers would help me. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http

Re: How to partition a file to smaller size for performing KNN in hadoop mapreduce

2015-01-14 Thread unmesha sreeveni
pointers can help me. On Thu, Jan 15, 2015 at 12:17 PM, Ted Dunning ted.dunn...@gmail.com wrote: have you considered implementing using something like spark? That could be much easier than raw map-reduce On Wed, Jan 14, 2015 at 10:06 PM, unmesha sreeveni unmeshab...@gmail.com wrote: In KNN

Re: How to run a mapreduce program not on the node of hadoop cluster?

2015-01-14 Thread unmesha sreeveni
Your data won't get split, so your program runs as a single mapper and a single reducer, and your intermediate data is not shuffled and sorted. But you can use this for debugging. On Jan 14, 2015 2:04 PM, Cao Yi iridium...@gmail.com wrote: Hi, I write some mapreduce code in my project *my_prj*.

Re: Write and Read file through map reduce

2015-01-05 Thread unmesha sreeveni
. What is the best way to store file1 and file2 in HDFS so that they could be used in third map reduce job. Thanks, Hitarth -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http

[Blog] My experience on Hadoop Certification

2015-01-02 Thread unmesha sreeveni
http://unmeshasreeveni.blogspot.in/2015/01/cloudera-certified-hadoop-developer-ccd.html -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: FileNotFoundException in distributed mode

2014-12-22 Thread unmesha sreeveni
computer. Any ideas? Thanks -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Run a c++ program using opencv libraries in hadoop

2014-12-17 Thread unmesha sreeveni
++ programs using opencv libraries. Thanks in Advance. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Split files into 80% and 20% for building model and prediction

2014-12-12 Thread unmesha sreeveni
to check if the reducer gets filled with 80% of the data. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/
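One common way to get an approximate 80/20 split without counting lines first is to make a per-record random decision with a fixed seed (in MapReduce this decision could be taken in the mapper, routing each record to a "train" or "test" output). A minimal plain-Java sketch of that idea, with illustrative names and no Hadoop dependency:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TrainTestSplit {
    // Splits records into [train, test]; same seed gives a reproducible split.
    public static List<List<String>> split(List<String> records, double trainFrac, long seed) {
        Random rnd = new Random(seed);
        List<String> train = new ArrayList<>();
        List<String> test = new ArrayList<>();
        for (String r : records) {
            if (rnd.nextDouble() < trainFrac) {
                train.add(r);   // ~80% of records on average
            } else {
                test.add(r);    // ~20% of records on average
            }
        }
        return List.of(train, test);
    }

    public static void main(String[] args) {
        List<String> data = new ArrayList<>();
        for (int i = 0; i < 1000; i++) data.add("row" + i);
        List<List<String>> parts = split(data, 0.8, 42L);
        System.out.println("train=" + parts.get(0).size() + " test=" + parts.get(1).size());
    }
}
```

The split is probabilistic, so sizes are only approximately 80/20; an exact split would need a line count first (e.g. the `wc -l` approach suggested later in the thread).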

Re: Split files into 80% and 20% for building model and prediction

2014-12-12 Thread unmesha sreeveni
% and 20% for building model and prediction Simple solution.. Copy the HDFS file to local and use OS commands to count no of lines cat file1 | wc -l and cut it based on line number. On 12/12/14, unmesha sreeveni unmeshab...@gmail.com wrote: I am trying to divide my HDFS file into 2 parts

Re: DistributedCache

2014-12-11 Thread unmesha sreeveni
= fs.globStatus(cachefile); for (FileStatus status : list) { DistributedCache.addCacheFile(status.getPath().toUri(), conf); } Hope this link helps [1] http://unmeshasreeveni.blogspot.in/2014/10/how-to-load-file-in-distributedcache-in.html -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop

Detailing on how UPDATE is performed in Hive

2014-11-27 Thread unmesha sreeveni
. While creating a partitioned table and an update is performed, is the partition deleted and updated with the new value, or is the entire block deleted and written once again? Where would be a good place to gather this knowledge? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata

[blog] How to do Update operation in hive-0.14.0

2014-11-25 Thread unmesha sreeveni
Hi, hope this link helps those who are trying to practice ACID properties in Hive 0.14. http://unmeshasreeveni.blogspot.in/2014/11/updatedeleteinsert-in-hive-0140.html -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa

Re: Fwd: Values getting duplicated in Hive table(Partitioned)

2014-11-18 Thread unmesha sreeveni
Thanks it worked. On Nov 17, 2014 3:32 PM, unmesha sreeveni unmeshab...@gmail.com wrote: -- Forwarded message -- From: unmesha sreeveni unmeshab...@gmail.com Date: Mon, Nov 17, 2014 at 10:49 AM Subject: Re: Values getting duplicated in Hive table(Partitioned) To: User - Hive

[Blog] Hive Partitioning

2014-11-18 Thread unmesha sreeveni
Hi, This is a blog on Hive partitioning. http://unmeshasreeveni.blogspot.in/2014/11/hive-partitioning.html Hope it helps someone. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http

[Blog] Updating Partition Table using INSERT In HIVE

2014-11-18 Thread unmesha sreeveni
Hi This is a blog on Hive updating for older version (hive -0.12.0) http://unmeshasreeveni.blogspot.in/2014/11/updating-partition-table-using-insert.html Hope it helps someone. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita

Showing INFO ipc.Client: Retrying connect to server once hadoop is upgraded to cdh5.2.0

2014-11-17 Thread unmesha sreeveni
version. Did I miss anything during installation? Why is it so? Please advise. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Values getting duplicated in Hive table(Partitioned)

2014-11-16 Thread unmesha sreeveni
last column it is fine. Am I doing anything wrong? Please suggest. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Fwd: Values getting duplicated in Hive table(Partitioned)

2014-11-16 Thread unmesha sreeveni
Subject: Re: Values getting duplicated in Hive table(Partitioned) To: u...@hive.apache.org Can you check your select query to run on non partitioned tables. Check if it's giving correct values. Same as for dept. B On Nov 17, 2014 10:03 AM, unmesha sreeveni unmeshab...@gmail.com wrote: ***I

Re: Can add a regular check in DataNode on free disk space?

2014-10-19 Thread unmesha sreeveni
:584) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440) Thanks! -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa

[Blog] Doubts On CCD-410 Sample Dumps on Ecosystem Projects

2014-10-06 Thread unmesha sreeveni
http://www.unmeshasreeveni.blogspot.in/2014/09/what-do-you-think-of-these-three.html -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: [Blog] Doubts On CCD-410 Sample Dumps on Ecosystem Projects

2014-10-06 Thread unmesha sreeveni
Hi, can the 5th question be SQOOP? On Mon, Oct 6, 2014 at 1:24 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Yes On Mon, Oct 6, 2014 at 1:22 PM, Santosh Kumar skumar.bigd...@hotmail.com wrote: Are you preparing for the Cloudera certification exam? Thanks and Regards, Santosh

Re: [Blog] Doubts On CCD-410 Sample Dumps on Ecosystem Projects

2014-10-06 Thread unmesha sreeveni
profile on LinkedIn] http://in.linkedin.com/in/adarshdeshratnam On Mon, Oct 6, 2014 at 2:25 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Hi 5 th question can it be SQOOP? On Mon, Oct 6, 2014 at 1:24 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Yes On Mon, Oct 6, 2014 at 1:22

Re: [Blog] Doubts On CCD-410 Sample Dumps on Ecosystem Projects

2014-10-06 Thread unmesha sreeveni
...) to Hadoop, this is what Sqoop is for (SQL to Hadoop) I'm not certain certification guys are happy with their exam questions ending up on blogs and mailing lists :-) Ulul Le 06/10/2014 13:54, unmesha sreeveni a écrit : what about the last one? The answer is correct. Pig. Is nt

Re: [Blog] Doubts On CCD-410 Sample Dumps on Ecosystem Projects

2014-10-06 Thread unmesha sreeveni
the join. On Mon, Oct 6, 2014 at 8:49 PM, unmesha sreeveni unmeshab...@gmail.com wrote: What I feel like is For question ​ 5​ it says, the weblogs are already in HDFS (so no need to import anything).Also these are log files, NOT database files with a specific schema. So ​ I think​ Pig

Re: toolrunner issue

2014-09-01 Thread unmesha sreeveni
* *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Hadoop on Safe Mode because Resources are low on NameNode

2014-08-26 Thread unmesha sreeveni
Shi,* -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Create HDFS directory fails

2014-07-29 Thread unmesha sreeveni
On Tue, Jul 29, 2014 at 1:13 PM, R J rj201...@yahoo.com wrote: java.io.IOException: Mkdirs failed to create Check ​if ​ you have permissions to mkdir this directory (try it from the command line) ​​ -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center

Re: Sqoop command syntax

2014-07-29 Thread unmesha sreeveni
) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229) at org.apache.sqoop.Sqoop.main(Sqoop.java:238) -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

ListWritable In Hadoop

2014-07-10 Thread unmesha sreeveni
Hi, do we have a ListWritable in Hadoop? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer*

Re: hadoop directory can't add and remove

2014-07-02 Thread unmesha sreeveni
. --- -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: WholeFileInputFormat in hadoop

2014-06-29 Thread unmesha sreeveni
caught heapspace .. Please correct me if I am wrong. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: WholeFileInputFormat in hadoop

2014-06-29 Thread unmesha sreeveni
as many times as the total records. Can anyone suggest a better way to do this? Hope the use case is understandable; else please tell me and I will explain further. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http

WholeFileInputFormat in hadoop

2014-06-28 Thread unmesha sreeveni
Hi A small clarification: does WholeFileInputFormat take the entire input file as input, or each record (input split) as a whole? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Centre for Cyber Security | Amrita Vishwa Vidyapeetham* http

Finding maximum value in reducer

2014-06-24 Thread unmesha sreeveni
emits the key/value. The reducer finds the max of the key. But again I am stuck: that cannot be done when we try to get the id, because only the id is unique; values are not unique. How to solve this? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita
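Since ids are unique but values are not, one answer to the question above is to track the running maximum value and keep every id that ties for it, which is exactly what a reducer over (value, id) pairs could do. A plain-Java sketch of that reduce-side logic, with illustrative names and no Hadoop dependency:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MaxWithIds {
    // input: id -> value; output: the ids of the maximum value (ties preserved)
    public static List<String> idsOfMax(Map<String, Double> records) {
        double max = Double.NEGATIVE_INFINITY;
        List<String> ids = new ArrayList<>();
        for (Map.Entry<String, Double> e : records.entrySet()) {
            if (e.getValue() > max) {
                max = e.getValue();   // new maximum: reset the id list
                ids.clear();
                ids.add(e.getKey());
            } else if (e.getValue() == max) {
                ids.add(e.getKey());  // duplicate value: keep this id too
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        Map<String, Double> m = Map.of("id1", 5.0, "id2", 9.0, "id3", 9.0);
        System.out.println(idsOfMax(m));  // id2 and id3 both tie for the max
    }
}
```

In an actual job the mapper would emit the value as the key and the id as the value, and a single reducer would apply this logic over all pairs.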

Re: Map reduce query

2014-06-20 Thread unmesha sreeveni
) himnshu.shrivast...@ge.com wrote: How can I give input to a mapper from the command line? The -D option can be used, but what are the corresponding changes required in the mapper and the driver program? Regards, -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center

Re: Map reduce query

2014-06-20 Thread unmesha sreeveni
].getPath()); BufferedReader bf = new BufferedReader(new InputStreamReader(fs.open(getPath))); On Fri, Jun 20, 2014 at 1:12 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Hi You can directly use this right? FileInputFormat.setInputPaths(job,new Path(args[0])); FileOutputFormat.setOutputPath

Re: Counters in MapReduce

2014-06-12 Thread unmesha sreeveni
{* *FileInputFormat.addInputPath(job1, out5); * * }* * FileOutputFormat.setOutputPath(job1,out1);* * job1.waitForCompletion(true);* On Thu, Jun 12, 2014 at 10:29 AM, unmesha sreeveni unmeshab...@gmail.com wrote: I tried out by setting an enum to count no. of lines in output file

Counters in MapReduce

2014-06-09 Thread unmesha sreeveni
continue an iteration - job 3's output should be the input for job 1, and the iteration should continue until the input file is empty. How to accomplish this? Will counters do the work? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita

Need advice for Implementing classification algorithms in MapReduce

2014-05-26 Thread unmesha sreeveni
? Please suggest -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Issue with conf.set and conf.get method

2014-05-21 Thread unmesha sreeveni
me a workaround. Regards Unmesha -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Are mapper classes re-instantiated for each record?

2014-05-16 Thread unmesha sreeveni
processing the last record, my cleanup() method will execute. In other words, my setup() and cleanup() methods will only execute 1 time each. Thanks for the help! -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa

Re: writing multiple files on hdfs

2014-05-11 Thread unmesha sreeveni
unauthorized use or distribution is prohibited. Please consider the environment before printing this email. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

[Blog] Map-only jobs in Hadoop for beginners

2014-05-05 Thread unmesha sreeveni
Hi, http://www.unmeshasreeveni.blogspot.in/2014/05/map-only-jobs-in-hadoop.html This is a post on map-only jobs in Hadoop for beginners. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http

[Blog] Chaining Jobs In Hadoop for beginners.

2014-05-03 Thread unmesha sreeveni
* *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Which database should be used

2014-05-02 Thread unmesha sreeveni
On Fri, May 2, 2014 at 1:51 PM, Alex Lee eliy...@hotmail.com wrote: hive ​HBase is better.​ -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: Wordcount file cannot be located

2014-05-01 Thread unmesha sreeveni
Try this along with your MapReduce source code: Configuration config = new Configuration(); config.set("fs.defaultFS", "hdfs://IP:port/"); FileSystem dfs = FileSystem.get(config); Path path = new Path("/tmp/in"); Let me know your thoughts. -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop

[Blog] Code For Deleting Output Folder If Exist In Hadoop MapReduce Jobs

2014-05-01 Thread unmesha sreeveni
-- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: [Blog] Code For Deleting Output Folder If Exist In Hadoop MapReduce Jobs

2014-05-01 Thread unmesha sreeveni
Please see this link: http://www.unmeshasreeveni.blogspot.in/2014/04/code-for-deleting-existing-output.html On Fri, May 2, 2014 at 8:52 AM, unmesha sreeveni unmeshab...@gmail.com wrote: Hi This is the sample code for Deleting Output Folder(If Exist) In Hadoop MapReduce Jobs for beginners

Re: hadoop.tmp.dir directory size

2014-04-30 Thread unmesha sreeveni
Try *hadoop fs -rmr /tmp* -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: What configuration parameters cause a Hadoop 2.x job to run on the cluster

2014-04-30 Thread unmesha sreeveni
Kirkland, WA 98033 206-384-1340 (cell) Skype lordjoe_com -- Steven M. Lewis PhD 4221 105th Ave NE Kirkland, WA 98033 206-384-1340 (cell) Skype lordjoe_com -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa

Re: How do I get started with hadoop

2014-04-30 Thread unmesha sreeveni
/ You can find similar for almost all versions. Regards, Shahab On Fri, Apr 25, 2014 at 2:26 AM, 破千 997626...@qq.com wrote: Hi, I'm new in hadoop, can I get some useful links about hadoop so I can get started with it step by step. Thank you very much! -- *Thanks Regards * *Unmesha

Re: How do I get started with hadoop on windows system

2014-04-30 Thread unmesha sreeveni
/2014/04/hadoop-installation-for-beginners.html On Fri, Apr 25, 2014 at 11:47 AM, 破千 997626...@qq.com wrote: Hi everyone, I have subscribed hadoop mail list this morning. How do I get started with hadoop on my windows 7 PC. Thanks! -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop

Re: Using Eclipse for Hadoop code

2014-04-30 Thread unmesha sreeveni
Are you asking about standalone mode, where we run Hadoop using the local fs? -- *Thanks Regards * *Unmesha Sreeveni U.B* *Hadoop, Bigdata Developer* *Center for Cyber Security | Amrita Vishwa Vidyapeetham* http://www.unmeshasreeveni.blogspot.in/

Re: hadoop.tmp.dir directory size

2014-04-30 Thread unmesha sreeveni
, but I just want confirmation that the data Hadoop writes into /tmp/hadoop-df/nm-local-dir (df being my user name) can be discarded while the job is being executed. On Wed, Apr 30, 2014 at 6:40 AM, unmesha sreeveni unmeshab...@gmail.com wrote: Try *hadoop fs -rmr /tmp

Re: how to customize hadoop configuration for a job?

2014-04-01 Thread unmesha sreeveni
. -- *Thanks Regards* Unmesha Sreeveni U.B Hadoop, Bigdata Developer Center for Cyber Security | Amrita Vishwa Vidyapeetham http://www.unmeshasreeveni.blogspot.in/

Re: error in hadoop hdfs while building the code.

2014-03-12 Thread unmesha sreeveni
-- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Re: GC overhead limit exceeded

2014-03-10 Thread unmesha sreeveni
attempt_1394160253524_0003_m_01_0 3 Container killed on request. Exit code is 143 at last, the task failed. Thanks for any help! -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Binning for numerical dataset

2014-02-04 Thread unmesha sreeveni
into 'n' buckets. Say, n=5. Bucket Width = (Max - Min) / n. Eg: Sepal Length = (7.9-4.3)/5 = 0.72. So the intervals will be as follows: 4.3 - 5.02, 5.02 - 5.74, 5.74 - 6.46, 6.46 - 7.18, 7.18 - 7.9. Continue likewise for all attributes. How to do the same in MapReduce? -- *Thanks Regards* Unmesha
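The bucketing formula in the question is straightforward to sketch in plain Java; in MapReduce the min/max would come from an earlier pass (or from the mapper, as the follow-up message suggests). Names here are illustrative.

```java
public class EqualWidthBinning {
    // width = (max - min) / n, as in the thread's example
    public static double bucketWidth(double min, double max, int n) {
        return (max - min) / n;
    }

    // bucket i covers [min + i*width, min + (i+1)*width);
    // the maximum value is clamped into the last bucket
    public static int bucketIndex(double v, double min, double max, int n) {
        int i = (int) ((v - min) / bucketWidth(min, max, n));
        return Math.min(i, n - 1);
    }

    public static void main(String[] args) {
        // Sepal length from the thread: (7.9 - 4.3) / 5 = 0.72
        System.out.println(bucketWidth(4.3, 7.9, 5));       // 0.72 (up to float error)
        System.out.println(bucketIndex(5.0, 4.3, 7.9, 5));  // falls in 4.3 - 5.02
    }
}
```

A binning mapper would then emit (attribute, bucketIndex) for each value once the per-attribute min and max are known.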

Re: Binning for numerical dataset

2014-02-04 Thread unmesha sreeveni
To do binning in MapReduce we need to find the min and max in the mapper, let the mapper pass the min/max values to the reducer, and then the reducer calculates the buckets. Is that the best way? -- *Thanks Regards* Unmesha Sreeveni U.B

Pre-processing in hadoop

2014-01-29 Thread unmesha sreeveni
Are we able to do preprocessing such as 1. binning, 2. discretization in Hadoop? Some of the reviews say it is difficult. Is that right? Please share some links; that would help me a lot. -- *Thanks Regards* Unmesha Sreeveni U.B

Re: HIVE+MAPREDUCE

2014-01-21 Thread unmesha sreeveni
. Thanks in advance Ranjini R -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Re: Sorting a csv file

2014-01-17 Thread unmesha sreeveni
Are we able to sort multiple columns dynamically as the user suggests? i.e., the user requests to sort col1 and col2, then the user requests to sort 3 cols. I am not able to find any of this through googling. On Thu, Jan 16, 2014 at 4:03 PM, unmesha sreeveni unmeshab...@gmail.com wrote: yes i did

Merge files

2014-01-17 Thread unmesha sreeveni
How to merge two files using MapReduce code? I am aware of the -getmerge and cat commands. Thanks in advance. -- *Thanks Regards* Unmesha Sreeveni U.B
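As a sketch of what the merge amounts to: in MapReduce it would be an identity map with a single reducer (so all records land in one output file), while -getmerge simply concatenates the part files in order. The plain-Java analogue below shows only the concatenation itself; file paths and names are illustrative.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class MergeFiles {
    // Concatenates the input files, in order, into the output file.
    public static void merge(List<Path> inputs, Path output) throws IOException {
        try (BufferedWriter out = Files.newBufferedWriter(output)) {
            for (Path in : inputs) {
                for (String line : Files.readAllLines(in)) {
                    out.write(line);
                    out.newLine();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("part1", ".txt");
        Path b = Files.createTempFile("part2", ".txt");
        Files.write(a, List.of("line1", "line2"));
        Files.write(b, List.of("line3"));
        Path merged = Files.createTempFile("merged", ".txt");
        merge(List.of(a, b), merged);
        System.out.println(Files.readAllLines(merged)); // [line1, line2, line3]
    }
}
```

Note that with a single reducer the records are also shuffle-sorted by key, so a pure concatenation (preserving original order) is really what -getmerge gives, not what a reduce job gives.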

Re: Sorting a csv file

2014-01-16 Thread unmesha sreeveni
(args[1]));* *job.waitForCompletion(true);* * }* On Thu, Jan 16, 2014 at 10:26 AM, unmesha sreeveni unmeshab...@gmail.com wrote: Thanks for your reply Ramya. OK :) So should I transpose the entire .csv file in order to get the entire col 2 data? On Thu, Jan 16, 2014 at 10:11 AM

Sorting in descending order

2014-01-16 Thread unmesha sreeveni
Are we able to sort a csv file in descending order? -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Combination of MapReduce and Hive

2014-01-16 Thread unmesha sreeveni
with mean. Which is the better way: 1. writing only MapReduce code, or 2. using a combination of Hive and MapReduce? -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Sorting a csv file

2014-01-15 Thread unmesha sreeveni
How to sort a csv file? I know that between map and reduce, shuffle and sort take place. But how do I sort a csv file by a given column? -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/
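The usual MapReduce answer is to emit the chosen column as the map output key and let the shuffle/sort phase order it (with a reversed comparator for descending order, as the related thread asks). A minimal plain-Java sketch of the same ordering, with an illustrative column index and sample data:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CsvColumnSort {
    // Sorts csv rows numerically by the given column, ascending or descending.
    public static List<String> sortByColumn(List<String> rows, int col, boolean descending) {
        Comparator<String> byCol =
            Comparator.comparingDouble(r -> Double.parseDouble(r.split(",")[col]));
        if (descending) {
            byCol = byCol.reversed();   // the MR analogue is a reversed sort comparator
        }
        List<String> sorted = new ArrayList<>(rows);
        sorted.sort(byCol);
        return sorted;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a,3", "b,1", "c,2");
        System.out.println(sortByColumn(rows, 1, false)); // [b,1, c,2, a,3]
        System.out.println(sortByColumn(rows, 1, true));  // [a,3, c,2, b,1]
    }
}
```

Sorting "each column independently" (as opposed to sorting rows by one column) breaks row integrity, which is why the thread's replies steer toward keying on one column and carrying the rest of the row as the value.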

Re: Sorting a csv file

2014-01-15 Thread unmesha sreeveni
[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); job.waitForCompletion(true); } }* On Wed, Jan 15, 2014 at 2:50 PM, unmesha sreeveni unmeshab...@gmail.comwrote: How to sort a csv file I know , between map and reduce shuffle and sort is taking place. But how do i sort

Re: Sorting a csv file

2014-01-15 Thread unmesha sreeveni
From: unmesha sreeveni [mailto:unmeshab...@gmail.com] Sent: Wed 1/15/2014 4:11 PM To: User Hadoop Subject: Re: Sorting a csv file I did a map only job for sorting a txt file by editing wordcount program. I only need the key . How to set value to null. public class SortingCsv

Adding file to HDFs

2014-01-14 Thread unmesha sreeveni
How to add a *csv* file to HDFS using MapReduce code? Using hadoop fs -copyFromLocal /local/path /hdfs/location I am able to do it, but I would like to write MapReduce code. -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

Re: Adding file to HDFs

2014-01-14 Thread unmesha sreeveni
(stream); outStream.close(); On Tue, Jan 14, 2014 at 2:04 PM, unmesha sreeveni unmeshab...@gmail.com wrote: How to add a *csv* file to hdfs using Mapreduce Code Using hadoop fs -copyFromLocal /local/path /hdfs/location i am able to do . BUt i would like to write mapreduce code

Re: Adding file to HDFs

2014-01-14 Thread unmesha sreeveni
I tried to copy a 2.5 GB file to HDFS; it took 3-4 min. Are we able to reduce that time? On Tue, Jan 14, 2014 at 3:07 PM, unmesha sreeveni unmeshab...@gmail.com wrote: Thank you sudhakar On Tue, Jan 14, 2014 at 2:51 PM, sudhakara st sudhakara...@gmail.com wrote: Read file from local file

Segregation in MapReduce

2014-01-12 Thread unmesha sreeveni
Can we do segregation in MapReduce? If we have an employee dataset which contains emp id, emp name, emp type, are we able to group the employees based on different types, say emp type A in one file and emp type B in another file? -- *Thanks Regards* Unmesha Sreeveni U.B Junior
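Yes: the core idea is to key each record by its type so all type-A records land together and all type-B records land together; in Hadoop that would typically be MultipleOutputs (one output file per type) or a custom partitioner. A plain-Java sketch of the grouping itself, with the thread's "id,name,type" layout and made-up data:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SegregateByType {
    // Groups records by their type column (3rd csv field), one bucket per type.
    public static Map<String, List<String>> groupByType(List<String> records) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String rec : records) {
            String type = rec.split(",")[2];   // emp type is the 3rd column
            groups.computeIfAbsent(type, k -> new ArrayList<>()).add(rec);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<String> emps = List.of("1,alice,A", "2,bob,B", "3,carol,A");
        System.out.println(groupByType(emps)); // type A bucket has 2 records, B has 1
    }
}
```

In an actual job, each bucket would become a separate output file (e.g. via MultipleOutputs named after the type), rather than an in-memory map.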

Re: Find max and min of a column in a csvfile

2014-01-11 Thread unmesha sreeveni
and numbers. On Fri, Jan 10, 2014 at 12:36 AM, unmesha sreeveni unmeshab...@gmail.com wrote: Need help How to find the maximum element and min element of a col in a csv file .What will be the mapper output. -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http

Re: what all can be done using MR

2014-01-11 Thread unmesha sreeveni
/8/2014 2:41 AM, unmesha sreeveni wrote: Can we do aggregation within Hadoop MR, like finding the min, max, sum, avg of a column in a csv file? -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/ -- *Thanks Regards* Unmesha Sreeveni U.B

Re: what all can be done using MR

2014-01-11 Thread unmesha sreeveni
and put the rest in the value Chris On Jan 11, 2014 10:11 AM, unmesha sreeveni unmeshab...@gmail.com wrote: What about sorting? Actually it is done by MapReduce itself. But if we are giving a csv file as input and trying to sort one/multiple columns... whether the corresponding columns also

Expressions in MapReduce

2014-01-11 Thread unmesha sreeveni
) -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/

FAILED EMFILE: Too many open files

2014-01-07 Thread unmesha sreeveni
(UserGroupInformation.java:1408) at org.apache.hadoop.mapred.Child.main(Child.java:262) Why is it so? -- *Thanks Regards* Unmesha Sreeveni U.B Junior Developer http://www.unmeshasreeveni.blogspot.in/
