Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
        CHANGES.txt
        src/java/org/apache/cassandra/tools/SSTableExport.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eea547c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eea547c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eea547c6

Branch: refs/heads/cassandra-2.1.0
Commit: eea547c62097ef645a4683026169d770216019e5
Parents: e0c4c6e f3f69cb
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Thu Aug 7 16:18:24 2014 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Thu Aug 7 16:18:24 2014 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |   1 +
 .../apache/cassandra/tools/SSTableExport.java   | 131 +++++++++++--------
 2 files changed, 74 insertions(+), 58 deletions(-)
----------------------------------------------------------------------
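
The SSTableExport.java change below closes the KeyIterator, the data file reader and the SSTableScanner in finally blocks, so they are released even when an export aborts early with an exception; that is the sstable2json leak tracked as CASSANDRA-7709. A minimal, self-contained sketch of the same try/finally cleanup pattern, using a hypothetical KeySource resource and enumerateKeys method rather than the real Cassandra classes:

    import java.io.Closeable;
    import java.io.IOException;
    import java.io.PrintStream;

    // Sketch of the cleanup pattern applied in this patch: a resource opened by an
    // export method is closed in a finally block, so an exception thrown mid-export
    // (e.g. the "Key out of order!" IOException) no longer leaks it.
    // CleanupSketch, KeySource and enumerateKeys are hypothetical, not Cassandra classes.
    public class CleanupSketch
    {
        interface KeySource extends Closeable
        {
            boolean hasNext();
            String next();
        }

        static void enumerateKeys(KeySource iter, PrintStream outs) throws IOException
        {
            try
            {
                String lastKey = null;
                while (iter.hasNext())
                {
                    String key = iter.next();
                    // validate key order, as the real export code does
                    if (lastKey != null && lastKey.compareTo(key) > 0)
                        throw new IOException("Key out of order! " + lastKey + " > " + key);
                    lastKey = key;
                    outs.println(key);
                }
            }
            finally
            {
                iter.close(); // runs even if the order check above throws
            }
        }
    }

If the underlying resources implement AutoCloseable, a Java 7 try-with-resources block would express the same cleanup more compactly; the patch uses explicit finally blocks, consistent with the surrounding code.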


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eea547c6/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index ecc4da2,4392159..1043016
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,70 -1,17 +1,71 @@@
 -2.0.10
 +2.1.0-final
 + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687)
 + * Fix binding null values inside UDT (CASSANDRA-7685)
 + * Fix UDT field selection with empty fields (CASSANDRA-7670)
 + * Bogus deserialization of static cells from sstable (CASSANDRA-7684)
 +Merged from 2.0:
+  * Minor leak in sstable2json (CASSANDRA-7709)
   * Add cassandra.auto_bootstrap system property (CASSANDRA-7650)
 - * Remove CqlPagingRecordReader/CqlPagingInputFormat (CASSANDRA-7570)
 - * Fix IncompatibleClassChangeError from hadoop2 (CASSANDRA-7229)
 - * Add 'nodetool sethintedhandoffthrottlekb' (CASSANDRA-7635)
   * Update java driver (for hadoop) (CASSANDRA-7618)
 - * Fix truncate to always flush (CASSANDRA-7511)
 + * Remove CqlPagingRecordReader/CqlPagingInputFormat (CASSANDRA-7570)
 + * Support connecting to ipv6 jmx with nodetool (CASSANDRA-7669)
 +
 +
 +2.1.0-rc5
 + * Reject counters inside user types (CASSANDRA-7672)
 + * Switch to notification-based GCInspector (CASSANDRA-7638)
 + * (cqlsh) Handle nulls in UDTs and tuples correctly (CASSANDRA-7656)
 + * Don't use strict consistency when replacing (CASSANDRA-7568)
 + * Fix min/max cell name collection on 2.0 SSTables with range
 +   tombstones (CASSANDRA-7593)
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS (CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use "keyspace updated" schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata (CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries (CASSANDRA-7626)
 + * Fix scrub against collection type (CASSANDRA-7665)
 +Merged from 2.0:
 + * Set gc_grace_seconds to seven days for system schema tables (CASSANDRA-7668)
 + * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
 + * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASSANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications (CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh (CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eea547c6/src/java/org/apache/cassandra/tools/SSTableExport.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/tools/SSTableExport.java
index 41e9fdc,f8b85c3..3572296
--- a/src/java/org/apache/cassandra/tools/SSTableExport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableExport.java
@@@ -228,20 -252,26 +228,26 @@@ public class SSTableExpor
      throws IOException
      {
          KeyIterator iter = new KeyIterator(desc);
-         DecoratedKey lastKey = null;
-         while (iter.hasNext())
+         try
          {
-             DecoratedKey key = iter.next();
- 
-             // validate order of the keys in the sstable
-             if (lastKey != null && lastKey.compareTo(key) > 0)
-                 throw new IOException("Key out of order! " + lastKey + " > " + key);
-             lastKey = key;
- 
-             outs.println(bytesToHex(key.getKey()));
-             checkStream(outs); // flushes
+             DecoratedKey lastKey = null;
+             while (iter.hasNext())
+             {
+                 DecoratedKey key = iter.next();
+ 
+                 // validate order of the keys in the sstable
+                 if (lastKey != null && lastKey.compareTo(key) > 0)
+                     throw new IOException("Key out of order! " + lastKey + " > " + key);
+                 lastKey = key;
+ 
 -                outs.println(bytesToHex(key.key));
++                outs.println(bytesToHex(key.getKey()));
+                 checkStream(outs); // flushes
+             }
+         }
+         finally
+         {
+             iter.close();
          }
-         iter.close();
      }
  
      /**
@@@ -257,47 -287,59 +263,53 @@@
      {
          SSTableReader sstable = SSTableReader.open(desc);
          RandomAccessReader dfile = sstable.openDataReader();
+         try
+         {
+             IPartitioner<?> partitioner = sstable.partitioner;
  
-         IPartitioner<?> partitioner = sstable.partitioner;
+             if (excludes != null)
+                 toExport.removeAll(Arrays.asList(excludes));
  
-         if (excludes != null)
-             toExport.removeAll(Arrays.asList(excludes));
+             outs.println("[");
  
-         outs.println("[");
+             int i = 0;
  
-         int i = 0;
+             // last key to compare order
+             DecoratedKey lastKey = null;
  
-         // last key to compare order
-         DecoratedKey lastKey = null;
+             for (String key : toExport)
+             {
+                 DecoratedKey decoratedKey = partitioner.decorateKey(hexToBytes(key));
  
-         for (String key : toExport)
-         {
-             DecoratedKey decoratedKey = partitioner.decorateKey(hexToBytes(key));
+                 if (lastKey != null && lastKey.compareTo(decoratedKey) > 0)
+                     throw new IOException("Key out of order! " + lastKey + " > " + decoratedKey);
  
-             if (lastKey != null && lastKey.compareTo(decoratedKey) > 0)
-                 throw new IOException("Key out of order! " + lastKey + " > " + decoratedKey);
+                 lastKey = decoratedKey;
  
-             lastKey = decoratedKey;
+                 RowIndexEntry entry = sstable.getPosition(decoratedKey, SSTableReader.Operator.EQ);
+                 if (entry == null)
+                     continue;
  
-             RowIndexEntry entry = sstable.getPosition(decoratedKey, SSTableReader.Operator.EQ);
-             if (entry == null)
-                 continue;
+                 dfile.seek(entry.position);
+                 ByteBufferUtil.readWithShortLength(dfile); // row key
 -                if (sstable.descriptor.version.hasRowSizeAndColumnCount)
 -                    dfile.readLong(); // row size
+                 DeletionInfo deletionInfo = new DeletionInfo(DeletionTime.serializer.deserialize(dfile));
 -                int columnCount = sstable.descriptor.version.hasRowSizeAndColumnCount ? dfile.readInt()
 -                        : Integer.MAX_VALUE;
 -
 -                Iterator<OnDiskAtom> atomIterator = sstable.metadata.getOnDiskIterator(dfile, columnCount,
 -                        sstable.descriptor.version);
  
-             dfile.seek(entry.position);
-             ByteBufferUtil.readWithShortLength(dfile); // row key
-             DeletionInfo deletionInfo = new DeletionInfo(DeletionTime.serializer.deserialize(dfile));
-             Iterator<OnDiskAtom> atomIterator = sstable.metadata.getOnDiskIterator(dfile, sstable.descriptor.version);
++                Iterator<OnDiskAtom> atomIterator = sstable.metadata.getOnDiskIterator(dfile, sstable.descriptor.version);
+                 checkStream(outs);
  
-             checkStream(outs);
+                 if (i != 0)
+                     outs.println(",");
+                 i++;
+                 serializeRow(deletionInfo, atomIterator, sstable.metadata, decoratedKey, outs);
+             }
  
-             if (i != 0)
-                 outs.println(",");
-             i++;
-             serializeRow(deletionInfo, atomIterator, sstable.metadata, decoratedKey, outs);
+             outs.println("\n]");
+             outs.flush();
+         }
+         finally
+         {
+             dfile.close();
          }
- 
-         outs.println("\n]");
-         outs.flush();
      }
  
      // This is necessary to accommodate the test suite since you cannot open a Reader more
@@@ -309,36 -351,39 +321,39 @@@
          if (excludes != null)
              excludeSet = new HashSet<String>(Arrays.asList(excludes));
  
- 
          SSTableIdentityIterator row;
          SSTableScanner scanner = reader.getScanner();
+         try
+         {
+             outs.println("[");
  
-         outs.println("[");
+             int i = 0;
  
-         int i = 0;
+             // collecting keys to export
+             while (scanner.hasNext())
+             {
+                 row = (SSTableIdentityIterator) scanner.next();
  
-         // collecting keys to export
-         while (scanner.hasNext())
-         {
-             row = (SSTableIdentityIterator) scanner.next();
 -                String currentKey = bytesToHex(row.getKey().key);
++                String currentKey = bytesToHex(row.getKey().getKey());
  
-             String currentKey = bytesToHex(row.getKey().getKey());
+                 if (excludeSet.contains(currentKey))
+                     continue;
+                 else if (i != 0)
+                     outs.println(",");
  
-             if (excludeSet.contains(currentKey))
-                 continue;
-             else if (i != 0)
-                 outs.println(",");
+                 serializeRow(row, row.getKey(), outs);
+                 checkStream(outs);
  
-             serializeRow(row, row.getKey(), outs);
-             checkStream(outs);
+                 i++;
+             }
  
-             i++;
+             outs.println("\n]");
+             outs.flush();
+         }
+         finally
+         {
+             scanner.close();
          }
- 
-         outs.println("\n]");
-         outs.flush();
- 
-         scanner.close();
      }
  
      /**
