[ http://issues.apache.org/jira/browse/HADOOP-302?page=comments#action_12423739 ]
Bryan Pendleton commented on HADOOP-302:
----------------------------------------
Started using this in my code... looks like there are still some bugs. Here's a
testcase that shouldn't fail. (I have no idea how UTF works, or I'd try to
offer an actual solution):
Index: hadoop/src/test/org/apache/hadoop/io/TestText.java
===================================================================
--- hadoop/src/test/org/apache/hadoop/io/TestText.java (revision 425795)
+++ hadoop/src/test/org/apache/hadoop/io/TestText.java (working copy)
@@ -87,7 +87,11 @@
   public void testCoding() throws Exception {
-    for (int i = 0; i < NUM_ITERATIONS; i++) {
+    String badString = "Bad \t encoding \t testcase.";
+    Text testCase = new Text(badString);
+    assertTrue(badString.equals(testCase.toString()));
+
+    for (int i = 0; i < NUM_ITERATIONS; i++) {
       try {
         // generate a random string
         String before;
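For reference, the round trip the new assertions expect can be sketched with plain JDK UTF-8 (this is standard java.nio.charset behavior, not Hadoop's Text; a correct Text should behave the same way, since UTF-8 preserves every character, tabs included):

```java
import java.nio.charset.StandardCharsets;

// Minimal sketch of the encode/decode round trip. The class name is made up
// for illustration; Text.toString() after new Text(s) should be equivalent.
public class RoundTrip {
    public static String roundTrip(String s) {
        byte[] encoded = s.getBytes(StandardCharsets.UTF_8); // encode as UTF-8
        return new String(encoded, StandardCharsets.UTF_8);  // decode back
    }
}
```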
> class Text (replacement for class UTF8) was: HADOOP-136
> -------------------------------------------------------
>
> Key: HADOOP-302
> URL: http://issues.apache.org/jira/browse/HADOOP-302
> Project: Hadoop
> Issue Type: Improvement
> Components: io
> Reporter: Michel Tourn
> Assigned To: Hairong Kuang
> Fix For: 0.5.0
>
> Attachments: text.patch, VInt.patch
>
>
> Just to verify: which length-encoding scheme are we using for class Text
> (aka LargeUTF8)?
> a) the "UTF-8/Lucene" scheme (the highest bit of each byte is an extension
> bit, which I think is what Doug is describing in his last comment), or
> b) the record-IO scheme in o.a.h.record.Utils.java:readInt?
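For concreteness, scheme (a) can be sketched in a few lines of Java: each byte carries 7 payload bits and the high bit says whether another byte follows. This is a hedged illustration of the high-bit-extension idea, not the actual o.a.h.record.Utils or Lucene code; the class and method names are made up:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch of scheme (a): 7 payload bits per byte, high bit is
// an extension flag. Small values cost one byte; no magic constants.
public class VIntSketch {

    public static void writeVInt(DataOutput out, int value) throws IOException {
        while ((value & ~0x7F) != 0) {             // more than 7 bits remain
            out.writeByte((value & 0x7F) | 0x80);  // payload + extension bit
            value >>>= 7;
        }
        out.writeByte(value);                      // final byte, high bit clear
    }

    public static int readVInt(DataInput in) throws IOException {
        byte b = in.readByte();
        int value = b & 0x7F;
        for (int shift = 7; (b & 0x80) != 0; shift += 7) {
            b = in.readByte();
            value |= (b & 0x7F) << shift;
        }
        return value;
    }

    // Convenience round-trip helpers for demonstration only.
    public static byte[] encode(int value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            writeVInt(new DataOutputStream(bos), value);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static int decode(byte[] bytes) {
        try {
            return readVInt(new DataInputStream(new ByteArrayInputStream(bytes)));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```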
> Either way, note that:
> 1. UTF8.java and its successor Text.java need to read the length in two ways:
> 1a. consume 1+ bytes from a DataInput, and
> 1b. parse the length within a byte array at a given offset
> (1b is used for the "WritableComparator optimized for UTF8 keys").
> o.a.h.record.Utils only supports the DataInput mode.
> It is not clear to me what the best way is to extend this Utils code when
> you need to support both reading modes.
> 2. Methods like UTF8's WritableComparator are meant to be low overhead; in
> particular, there should be no Object allocation.
> For the byte-array case, the varlen-reader utility needs to be extended to
> return both the decoded length and the length of the encoded length
> (so that the caller can do offset += encodedLength).
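Point 2 above can be sketched as two allocation-free static methods over a byte array. For the high-bit-extension encoding of scheme (a), the encoded length is recoverable by scanning extension bits, so a comparator can do offset += encodedLength without allocating anything. Names are illustrative, not the actual Hadoop API:

```java
// Illustrative sketch of reading mode 1b with no Object allocation: decode
// a high-bit-extension vint directly from a byte array at an offset, and
// separately report how many bytes the encoding occupied.
public class VIntBytes {

    // The decoded value starting at buf[offset].
    public static int readVInt(byte[] buf, int offset) {
        int value = buf[offset] & 0x7F;
        for (int shift = 7; (buf[offset] & 0x80) != 0; shift += 7) {
            offset++;
            value |= (buf[offset] & 0x7F) << shift;
        }
        return value;
    }

    // The number of bytes the encoded vint at buf[offset] occupies,
    // so the caller can advance: offset += encodedLength(buf, offset).
    public static int encodedLength(byte[] buf, int offset) {
        int len = 1;
        while ((buf[offset] & 0x80) != 0) {  // extension bit set: keep going
            offset++;
            len++;
        }
        return len;
    }
}
```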
>
> 3. A String length does not need (small) negative integers.
> 4. One advantage of a) is that it is standard (or at least well-known and
> natural) and there are no magic constants (like -120, -121, -124).
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira