I'm pretty sure, as I mentioned before, that the issue isn't that a connection is being closed; it's that it is in fact not closed. Threads like these discuss it:
http://search-hadoop.com/m/JFj52oETZn
http://search-hadoop.com/m/Wxcn42PBN9g2

J-D

On Fri, Apr 22, 2011 at 12:16 PM, Pete Tyler <[email protected]> wrote:
> One job, then a scan. Both from the same JVM. I do want to run multiple jobs
> from the same client JVM, and those tests are failing too.
>
> I'm currently trying to figure out why the job is closing the connection and
> how I can stop it from doing so.
>
> From my iPhone
>
> On Apr 22, 2011, at 12:05 PM, Jean-Daniel Cryans <[email protected]> wrote:
>
>> Which HTable instantiation is giving you the error here?
>>
>> Are you starting multiple jobs from the same JVM?
>>
>> J-D
>>
>> On Fri, Apr 22, 2011 at 11:16 AM, Pete Tyler <[email protected]>
>> wrote:
>>>
>>> Is it possible my use of map reduce has been rendered invalid/outdated by
>>> the upgrade? It appears to create the expected result but causes follow-on
>>> logic in the client to fail as described above.
>>>
>>> CLIENT:
>>>
>>> HBaseConfiguration conf = new HBaseConfiguration();
>>>
>>> Job job = new Job(conf);
>>> job.setJobName("My Native MapReduce");
>>>
>>> Scan scan = new Scan();
>>>
>>> String tableNameIn =
>>>     MyHBaseUtils.getDomainTableName(Publisher.class.getName());
>>> String tableNameOut =
>>>     MyHBaseUtils.getDomainTableName(Result.class.getName());
>>>
>>> TableMapReduceUtil.initTableMapperJob(tableNameIn, scan,
>>>     NativeMapper.class, ImmutableBytesWritable.class,
>>>     ImmutableBytesWritable.class, job);
>>> TableMapReduceUtil.initTableReducerJob(tableNameOut,
>>>     NativeReducer.class, job);
>>>
>>> job.setOutputFormatClass(TableOutputFormat.class);
>>> job.setNumReduceTasks(1);
>>>
>>> job.waitForCompletion(true);
>>>
>>>
>>> REDUCE:
>>>
>>> public class NativeReducer extends TableReducer<Writable, Writable,
>>>     Writable> {
>>>
>>>     @Override
>>>     public void reduce(Writable key, Iterable<Writable> values,
>>>             Context context) throws IOException, InterruptedException {
>>>
>>>         String city = Bytes.toString(((ImmutableBytesWritable) key).get());
>>>         int i = 0;
>>>         for (Writable value : values) {
>>>             i++;
>>>         }
>>>
>>>         long nextId = this.getNextId(
>>>             "MAPREDUCE_RESULT",
>>>             MyHBaseUtils.getSequenceTableName()
>>>         );
>>>         Put p = new Put(Bytes.toBytes(nextId));
>>>
>>>         p.add(Constants.DEFAULT_DATA_FAMILY, Bytes.toBytes("job_id"),
>>>             Bytes.toBytes(jobParams.getJobId()));
>>>         p.add(Constants.DEFAULT_DATA_FAMILY,
>>>             Bytes.toBytes("persisted_key"), this.toByteArray(city));
>>>         p.add(Constants.DEFAULT_DATA_FAMILY,
>>>             Bytes.toBytes("persisted_value"), this.toByteArray(i));
>>>
>>>         long version = 0;
>>>         p.add(Constants.DEFAULT_CONTROL_FAMILY,
>>>             Constants.DEFAULT_VERSION_QUALIFIER,
>>>             Bytes.toBytes(version));
>>>
>>>         context.write(key, p);
>>>     }
>>> }
>>> --
>>> View this message in context:
>>> http://old.nabble.com/Exception-after-upgrading-to-90.1-tp31457860p31458091.html
>>> Sent from the HBase User mailing list archive at Nabble.com.
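
For anyone who lands on this thread later: the advice that comes up in the linked threads is to stop sharing one Configuration (and with it one cached HConnection) between the MapReduce job and the follow-on client work. Below is a minimal sketch of that idea against the 0.90-era API. The per-Configuration connection caching in HConnectionManager is my reading of how 0.90.1 behaved (the keying was later reworked, around HBASE-3777, if memory serves), and tableNameOut is just the variable from Pete's CLIENT code above, so treat this as a sketch rather than a verified fix:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.mapreduce.Job;

    // Give the MapReduce job its own Configuration instance.
    // HBaseConfiguration.create() picks up hbase-site.xml from the classpath.
    Configuration jobConf = HBaseConfiguration.create();
    Job job = new Job(jobConf);
    job.setJobName("My Native MapReduce");
    // ... initTableMapperJob / initTableReducerJob exactly as in Pete's code ...
    job.waitForCompletion(true);

    // The follow-on scan gets a separate Configuration, and therefore its own
    // cached HConnection, so whatever the job tore down on completion cannot
    // close the connection this scan is using.
    Configuration scanConf = HBaseConfiguration.create();
    HTable table = new HTable(scanConf, tableNameOut);
    ResultScanner scanner = table.getScanner(new Scan());

The point of the two create() calls is that each distinct Configuration maps to its own connection in the client's cache, so the job and the scan no longer have a shared connection for one of them to close out from under the other.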
