Re: FW: WritableComparable value changing between Map and Reduce

2012-06-26 Thread Stan Rosenberg
You are using the default timezone. If the timezones differ between the nodes, that could account for the discrepancy. Based on your (de)serialization code, the long value should be the same; only its interpretation differs. stan On Tue, Jun 26, 2012 at 10:20 AM, Dave Shine < dave.sh...@channelintelligence.com> wrote:
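Stan's point can be demonstrated with plain Java (a minimal sketch; the timestamp value is invented for illustration): the stored epoch-millis long is identical on every node, while its formatted rendering depends on whichever timezone is applied when it is displayed.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class EpochTimezoneDemo {
    public static void main(String[] args) {
        long epochMillis = 1340720400000L; // arbitrary example timestamp

        Date d = new Date(epochMillis);
        // The long itself is timezone-free and survives unchanged.
        System.out.println(d.getTime() == epochMillis); // prints true

        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));
        SimpleDateFormat est = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        est.setTimeZone(TimeZone.getTimeZone("America/New_York"));

        // Only the rendered interpretation differs between timezones.
        System.out.println(utc.format(d));
        System.out.println(est.format(d));
    }
}
```

This is why comparing formatted dates between map and reduce tasks on machines with different default timezones can look like the value "changed" even though the underlying long did not.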

FW: WritableComparable value changing between Map and Reduce

2012-06-26 Thread Dave Shine
After about a week of researching, logging, etc., I have finally discovered what is happening, but I have no idea why. I have created my own WritableComparable object so I can emit it as the key from my Mapper. The object contains several Longs, one String, and one Date property. The following
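A minimal sketch of the serialization pattern this thread converges on, with invented field names (the original post's actual fields are not shown): a plain-java.io analogue of Hadoop's write/readFields contract in which the Date is stored as its epoch-millis long, so the serialized bytes carry no timezone at all.

```java
import java.io.*;
import java.util.Date;

// Hypothetical key class (names invented for illustration) mirroring
// Hadoop's Writable serialization contract without the Hadoop dependency:
// a Long, a String, and a Date, with the Date persisted as epoch millis.
public class EventKey {
    long id;
    String name;
    Date when;

    public void write(DataOutput out) throws IOException {
        out.writeLong(id);
        out.writeUTF(name);
        out.writeLong(when.getTime()); // epoch millis: timezone-free
    }

    public void readFields(DataInput in) throws IOException {
        id = in.readLong();
        name = in.readUTF();
        when = new Date(in.readLong());
    }

    public static void main(String[] args) throws IOException {
        EventKey k = new EventKey();
        k.id = 7L;
        k.name = "example";
        k.when = new Date(1340720400000L);

        // Serialize then deserialize, as happens between map and reduce.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        k.write(new DataOutputStream(bos));
        EventKey back = new EventKey();
        back.readFields(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));

        // The long survives the round trip bit-for-bit.
        System.out.println(back.when.getTime() == 1340720400000L); // prints true
    }
}
```

In a real Hadoop job the class would implement WritableComparable and also provide compareTo, hashCode, and equals; this sketch isolates only the (de)serialization round trip relevant to the thread.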

Re: Understanding job completion in other nodes

2012-06-26 Thread Hamid Oliaei
Hi Christoph, I didn't consider waitForCompletion. I'll try using it and hope my workflow won't need any additional method. Thanks a lot. Hamid Oliaei oli...@gmail.com

Re: Understanding job completion in other nodes

2012-06-26 Thread Christoph Schmitz
Hi Hamid, I'm not sure if I understand your question correctly, but I think this is exactly what the standard workflow in a Hadoop application looks like: Job job1 = new Job(...); // setup job, set Mapper and Reducer, etc. job1.waitForCompletion(...); // at this point, the cluster will run job
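Christoph's point is that waitForCompletion already provides the barrier Hamid asks for: the driver blocks until job1 has finished on the whole cluster before job2 is submitted. The same blocking semantics can be illustrated with plain java.util.concurrent (an analogy to the driver pattern, not Hadoop's API; the jobs here are trivial stand-in tasks):

```java
import java.util.concurrent.*;

public class SequentialJobsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService cluster = Executors.newFixedThreadPool(4);

        // "job1": submit work, then block until it completes,
        // just as job1.waitForCompletion(true) blocks the driver.
        Future<Integer> job1 = cluster.submit(() -> 21);
        int out1 = job1.get(); // barrier: nothing below runs before job1 is done

        // "job2" can now safely consume job1's output.
        Future<Integer> job2 = cluster.submit(() -> out1 * 2);
        System.out.println(job2.get()); // prints 42

        cluster.shutdown();
    }
}
```

No per-node signaling is needed in the Hadoop case: the framework tracks completion of every task of job1 across all nodes and only returns control to the driver afterwards.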

Understanding job completion in other nodes

2012-06-26 Thread Hamid Oliaei
Hi, I want to run a job on all of the nodes, and once the job on one node completes, that node must wait until the jobs on the other nodes finish. For that, every node must signal all the others, and only when every node has received a signal from every other one should the next job run. How can I handle that in Hadoop? Is there a