T Vinod Gupta <tvinod@...> writes:

> 
> I am badly stuck and can't find a way out. I want to change my rowkey
> schema while copying data from one table to another, but a map-reduce
> job to do this won't work because of large row sizes (responseTooLarge
> errors). So I am left with a two-step process: exporting to HDFS files
> and importing from them into the second table. I wrote a custom
> exporter that changes the rowkey to newRowKey when doing
> context.write(newRowKey, result), but when I import these new files
> into the new table, it fails with this exception in Put: "The row in
> the recently added ... doesn't match the original one ....".
> 
> Is there no way out for me? Please help.
> 
> thanks
> 

I know this is old, but here is a solution:

You need to pass the new key to the Put constructor and also rebuild each 
KeyValue with the new key. Put.add(KeyValue) checks that every KeyValue's row 
matches the Put's row, so KeyValues copied straight out of the Result (which 
still carry the old row) trigger exactly the exception you saw. Here is a 
helper method I use to do this:

    import java.io.IOException;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;

    // Rebuilds every KeyValue from the Result under the new row key so
    // that Put.add() accepts it. Note that the original cell timestamps
    // are not copied; each cell gets a fresh timestamp on write.
    public static Put resultToPut(byte[] newKey, Result result) throws IOException {
        Put put = new Put(newKey);
        for (KeyValue kv : result.raw()) {
            KeyValue kv2 = new KeyValue(newKey, kv.getFamily(),
                    kv.getQualifier(), kv.getValue());
            put.add(kv2);
        }
        return put;
    }
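
For context, here is a rough sketch of how the helper might be wired into an 
Import-style mapper that reads the exported sequence files, assuming the old 
org.apache.hadoop.hbase.mapreduce API. RekeyImportMapper and makeNewKey are 
hypothetical names; makeNewKey is a placeholder for whatever rowkey 
transformation your schema change needs (a byte-reversal is shown purely as an 
example):

    import java.io.IOException;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;

    public class RekeyImportMapper
            extends TableMapper<ImmutableBytesWritable, Put> {

        @Override
        protected void map(ImmutableBytesWritable oldKey, Result result,
                Context context) throws IOException, InterruptedException {
            // Derive the new rowkey, then emit a Put whose row AND
            // KeyValues all carry that new key.
            byte[] newKey = makeNewKey(oldKey.get());
            context.write(new ImmutableBytesWritable(newKey),
                    resultToPut(newKey, result));
        }

        // Placeholder transformation: reverse the old key bytes.
        private static byte[] makeNewKey(byte[] oldKey) {
            byte[] reversed = new byte[oldKey.length];
            for (int i = 0; i < oldKey.length; i++) {
                reversed[i] = oldKey[oldKey.length - 1 - i];
            }
            return reversed;
        }

        // The helper from above, included so the class is self-contained.
        private static Put resultToPut(byte[] newKey, Result result)
                throws IOException {
            Put put = new Put(newKey);
            for (KeyValue kv : result.raw()) {
                put.add(new KeyValue(newKey, kv.getFamily(),
                        kv.getQualifier(), kv.getValue()));
            }
            return put;
        }
    }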

--Asher
