Updated Branches:
  refs/heads/cassandra-1.1.0 68032e940 -> e05a327e2

Merge branch 'cassandra-1.0' into cassandra-1.1.0

Conflicts:
        src/java/org/apache/cassandra/cql/DeleteStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e05a327e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e05a327e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e05a327e

Branch: refs/heads/cassandra-1.1.0
Commit: e05a327e2cbbbc508bd53dee813de894faf6a272
Parents: 68032e9 b0dfb4c
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Thu Mar 29 16:32:52 2012 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Thu Mar 29 16:32:52 2012 +0200

----------------------------------------------------------------------
 CHANGES.txt                                        |    1 +
 .../org/apache/cassandra/cql/DeleteStatement.java  |    7 ++++---
 .../cassandra/cql3/statements/DeleteStatement.java |    1 -
 3 files changed, 5 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e05a327e/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index ae2b0f1,e4d207c..55d17f7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -47,96 -13,9 +47,97 @@@ Merged from 1.0
   * fix race leading to super columns assertion failure (CASSANDRA-3957)
   * ensure that directory is selected for compaction for user-defined
     tasks and upgradesstables (CASSANDRA-3985)
+  * fix NPE on invalid CQL delete command (CASSANDRA-3755)
  
  
 +1.1-beta1
 + * (cqlsh)
 +   + add SOURCE and CAPTURE commands, and --file option (CASSANDRA-3479)
 +   + add ALTER COLUMNFAMILY WITH (CASSANDRA-3523)
 +   + bundle Python dependencies with Cassandra (CASSANDRA-3507)
 +   + added to Debian package (CASSANDRA-3458)
 +   + display byte data instead of erroring out on decode failure 
 +     (CASSANDRA-3874)
 + * add nodetool rebuild_index (CASSANDRA-3583)
 + * add nodetool rangekeysample (CASSANDRA-2917)
 + * Fix streaming too much data during move operations (CASSANDRA-3639)
 + * Nodetool and CLI connect to localhost by default (CASSANDRA-3568)
 + * Reduce memory used by primary index sample (CASSANDRA-3743)
 + * (Hadoop) separate input/output configurations (CASSANDRA-3197, 3765)
 + * avoid returning internal Cassandra classes over JMX (CASSANDRA-2805)
 + * add row-level isolation via SnapTree (CASSANDRA-2893)
 + * Optimize key count estimation when opening sstable on startup
 +   (CASSANDRA-2988)
 + * multi-dc replication optimization supporting CL > ONE (CASSANDRA-3577)
 + * add command to stop compactions (CASSANDRA-1740, 3566, 3582)
 + * multithreaded streaming (CASSANDRA-3494)
 + * removed in-tree redhat spec (CASSANDRA-3567)
 + * "defragment" rows for name-based queries under STCS, again (CASSANDRA-2503)
 + * Recycle commitlog segments for improved performance 
 +   (CASSANDRA-3411, 3543, 3557, 3615)
 + * update size-tiered compaction to prioritize small tiers (CASSANDRA-2407)
 + * add message expiration logic to OutboundTcpConnection (CASSANDRA-3005)
 + * off-heap cache to use sun.misc.Unsafe instead of JNA (CASSANDRA-3271)
 + * EACH_QUORUM is only supported for writes (CASSANDRA-3272)
 + * replace compactionlock use in schema migration by checking CFS.isValid
 +   (CASSANDRA-3116)
 + * recognize that "SELECT first ... *" isn't really "SELECT *" (CASSANDRA-3445)
 + * Use faster bytes comparison (CASSANDRA-3434)
 + * Bulk loader is no longer a fat client, (HADOOP) bulk load output format
 +   (CASSANDRA-3045)
 + * (Hadoop) add support for KeyRange.filter
 + * remove assumption that keys and token are in bijection
 +   (CASSANDRA-1034, 3574, 3604)
 + * always remove endpoints from delivery queue in HH (CASSANDRA-3546)
 + * fix race between cf flush and its 2ndary indexes flush (CASSANDRA-3547)
 + * fix potential race in AES when a repair fails (CASSANDRA-3548)
 + * Remove columns shadowed by a deleted container even when we cannot purge
 +   (CASSANDRA-3538)
 + * Improve memtable slice iteration performance (CASSANDRA-3545)
 + * more efficient allocation of small bloom filters (CASSANDRA-3618)
 + * Use separate writer thread in SSTableSimpleUnsortedWriter (CASSANDRA-3619)
 + * fsync the directory after new sstable or commitlog segment are created (CASSANDRA-3250)
 + * fix minor issues reported by FindBugs (CASSANDRA-3658)
 + * global key/row caches (CASSANDRA-3143, 3849)
 + * optimize memtable iteration during range scan (CASSANDRA-3638)
 + * introduce 'crc_check_chance' in CompressionParameters to support
 +   a checksum percentage checking chance similarly to read-repair (CASSANDRA-3611)
 + * a way to deactivate global key/row cache on per-CF basis (CASSANDRA-3667)
 + * fix LeveledCompactionStrategy broken because of generation pre-allocation
 +   in LeveledManifest (CASSANDRA-3691)
 + * finer-grained control over data directories (CASSANDRA-2749)
 + * Fix ClassCastException during hinted handoff (CASSANDRA-3694)
 + * Upgrade Thrift to 0.7 (CASSANDRA-3213)
 + * Make stress.java insert operation to use microseconds (CASSANDRA-3725)
 + * Allows (internally) doing a range query with a limit of columns instead of
 +   rows (CASSANDRA-3742)
 + * Allow rangeSlice queries to be start/end inclusive/exclusive (CASSANDRA-3749)
 + * Fix BulkLoader to support new SSTable layout and add stream
 +   throttling to prevent an NPE when there is no yaml config (CASSANDRA-3752)
 + * Allow concurrent schema migrations (CASSANDRA-1391, 3832)
 + * Add SnapshotCommand to trigger snapshot on remote node (CASSANDRA-3721)
 + * Make CFMetaData conversions to/from thrift/native schema inverses
 +   (CASSANDRA-3559)
 + * Add initial code for CQL 3.0-beta (CASSANDRA-3781, 3753)
 + * Add wide row support for ColumnFamilyInputFormat (CASSANDRA-3264)
 + * Allow extending CompositeType comparator (CASSANDRA-3657)
 + * Avoids over-paging during get_count (CASSANDRA-3798)
 + * Add new command to rebuild a node without (repair) merkle tree calculations
 +   (CASSANDRA-3483, 3922)
 + * respect not only row cache capacity but caching mode when
 +   trying to read data (CASSANDRA-3812)
 + * fix system tests (CASSANDRA-3827)
 + * CQL support for altering row key type in ALTER TABLE (CASSANDRA-3781)
 + * turn compression on by default (CASSANDRA-3871)
 + * make hexToBytes refuse invalid input (CASSANDRA-2851)
 + * Make secondary indexes CF inherit compression and compaction from their
 +   parent CF (CASSANDRA-3877)
 + * Finish cleanup up tombstone purge code (CASSANDRA-3872)
 + * Avoid NPE on aborted stream-out sessions (CASSANDRA-3904)
 + * BulkRecordWriter throws NPE for counter columns (CASSANDRA-3906)
 + * Support compression using BulkWriter (CASSANDRA-3907)
 +
 +
  1.0.8
   * fix race between cleanup and flush on secondary index CFSes (CASSANDRA-3712)
   * avoid including non-queried nodes in rangeslice read repair

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e05a327e/src/java/org/apache/cassandra/cql/DeleteStatement.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/cql/DeleteStatement.java
index 9de571e,1b33a01..19cbc42
--- a/src/java/org/apache/cassandra/cql/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql/DeleteStatement.java
@@@ -64,13 -65,17 +64,15 @@@ public class DeleteStatement extends Ab
          return keys;
      }
  
 -    /** {@inheritDoc} */
 -    public List<IMutation> prepareRowMutations(String keyspace, ClientState clientState) throws InvalidRequestException
 +    public List<IMutation> prepareRowMutations(String keyspace, ClientState clientState, List<ByteBuffer> variables) throws InvalidRequestException
      {
 -        return prepareRowMutations(keyspace, clientState, null);
 +        return prepareRowMutations(keyspace, clientState, null, variables);
      }
  
 -    /** {@inheritDoc} */
 -    public List<IMutation> prepareRowMutations(String keyspace, ClientState clientState, Long timestamp) throws InvalidRequestException
 +    public List<IMutation> prepareRowMutations(String keyspace, ClientState clientState, Long timestamp, List<ByteBuffer> variables) throws InvalidRequestException
      {
+         CFMetaData metadata = validateColumnFamily(keyspace, columnFamily);
+ 
          clientState.hasColumnFamilyAccess(columnFamily, Permission.WRITE);
          AbstractType<?> keyType = Schema.instance.getCFMetaData(keyspace, columnFamily).getKeyValidator();
  
@@@ -78,21 -83,20 +80,20 @@@
  
          for (Term key : keys)
          {
-             rowMutations.add(mutationForKey(key.getByteBuffer(keyType, variables), keyspace, timestamp, clientState,variables));
 -            rowMutations.add(mutationForKey(key.getByteBuffer(keyType), keyspace, timestamp, clientState, metadata));
++            rowMutations.add(mutationForKey(key.getByteBuffer(keyType, variables), keyspace, timestamp, clientState, variables, metadata));
          }
  
          return rowMutations;
      }
  
-     public RowMutation mutationForKey(ByteBuffer key, String keyspace, Long timestamp, ClientState clientState, List<ByteBuffer> variables)
 -    /** {@inheritDoc} */
 -    public RowMutation mutationForKey(ByteBuffer key, String keyspace, Long timestamp, ClientState clientState, CFMetaData metadata) throws InvalidRequestException
++    public RowMutation mutationForKey(ByteBuffer key, String keyspace, Long timestamp, ClientState clientState, List<ByteBuffer> variables, CFMetaData metadata)
 +    throws InvalidRequestException
      {
          RowMutation rm = new RowMutation(keyspace, key);
  
-         CFMetaData metadata = validateColumnFamily(keyspace, columnFamily);
          QueryProcessor.validateKeyAlias(metadata, keyName);
  
 -        AbstractType comparator = metadata.getComparatorFor(null);
 +        AbstractType<?> comparator = metadata.getComparatorFor(null);
  
          if (columns.size() < 1)
          {

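For context: the hunk above hoists the validateColumnFamily() call out of mutationForKey() and up into prepareRowMutations(), threading the resulting CFMetaData down as an extra parameter; the CHANGES.txt entry in this merge ties the change to the NPE fixed in CASSANDRA-3755. Below is a minimal, self-contained sketch of that validate-before-use pattern. All names here (ValidateBeforeUseSketch, prepareDelete, the stub CFMetaData and schema map) are hypothetical stand-ins for illustration, not the Cassandra API.

import java.util.HashMap;
import java.util.Map;

public class ValidateBeforeUseSketch
{
    // Hypothetical stand-ins; not the real Cassandra classes.
    static class CFMetaData
    {
        final String name;
        CFMetaData(String name) { this.name = name; }
    }

    static class InvalidRequestException extends Exception
    {
        InvalidRequestException(String msg) { super(msg); }
    }

    private static final Map<String, CFMetaData> schema = new HashMap<String, CFMetaData>();
    static { schema.put("users", new CFMetaData("users")); }

    // Returns metadata or throws a user-facing error; never returns null.
    static CFMetaData validateColumnFamily(String cfName) throws InvalidRequestException
    {
        CFMetaData cfm = schema.get(cfName);
        if (cfm == null)
            throw new InvalidRequestException("unconfigured columnfamily " + cfName);
        return cfm;
    }

    // Mirrors the shape of the change: validate once up front, then pass the
    // metadata down instead of re-resolving it wherever a null would blow up later.
    static void prepareDelete(String cfName) throws InvalidRequestException
    {
        CFMetaData metadata = validateColumnFamily(cfName);
        System.out.println("building delete mutations for " + metadata.name);
    }

    public static void main(String[] args) throws InvalidRequestException
    {
        prepareDelete("users");
        try
        {
            prepareDelete("no_such_cf");
        }
        catch (InvalidRequestException e)
        {
            // Rejected with a clear request error rather than an NPE.
            System.out.println("rejected: " + e.getMessage());
        }
    }
}

Resolving the metadata once in prepareRowMutations() also means mutationForKey() no longer has to look the column family up again for every key.
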
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e05a327e/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 7e322a6,0000000..1e04474
mode 100644,000000..100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@@ -1,166 -1,0 +1,165 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *   http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing,
 + * software distributed under the License is distributed on an
 + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 + * KIND, either express or implied.  See the License for the
 + * specific language governing permissions and limitations
 + * under the License.
 + */
 +package org.apache.cassandra.cql3.statements;
 +
 +import java.nio.ByteBuffer;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.HashMap;
 +import java.util.Map;
 +
 +import org.apache.cassandra.cql3.*;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.db.IMutation;
 +import org.apache.cassandra.db.RowMutation;
 +import org.apache.cassandra.db.filter.QueryPath;
 +import org.apache.cassandra.db.marshal.AbstractType;
 +import org.apache.cassandra.service.ClientState;
 +import org.apache.cassandra.thrift.InvalidRequestException;
 +import org.apache.cassandra.thrift.ThriftValidation;
 +
 +/**
 + * A <code>DELETE</code> parsed from a CQL query statement.
 + */
 +public class DeleteStatement extends ModificationStatement
 +{
 +    private CFDefinition cfDef;
 +    private final List<ColumnIdentifier> columns;
 +    private final List<Relation> whereClause;
 +
 +    private final Map<ColumnIdentifier, List<Term>> processedKeys = new HashMap<ColumnIdentifier, List<Term>>();
 +
 +    public DeleteStatement(CFName name, List<ColumnIdentifier> columns, List<Relation> whereClause, Attributes attrs)
 +    {
 +        super(name, attrs);
 +
 +        this.columns = columns;
 +        this.whereClause = whereClause;
 +    }
 +
 +    public List<IMutation> getMutations(ClientState clientState, List<ByteBuffer> variables) throws InvalidRequestException
 +    {
- 
 +        // Check key
 +        List<Term> keys = processedKeys.get(cfDef.key.name);
 +        if (keys == null || keys.isEmpty())
 +            throw new InvalidRequestException(String.format("Missing mandatory PRIMARY KEY part %s", cfDef.key.name));
 +
 +        ColumnNameBuilder builder = cfDef.getColumnNameBuilder();
 +        CFDefinition.Name firstEmpty = null;
 +        for (CFDefinition.Name name : cfDef.columns.values())
 +        {
 +            List<Term> values = processedKeys.get(name.name);
 +            if (values == null || values.isEmpty())
 +            {
 +                firstEmpty = name;
 +                // For sparse, we must either have all component or none
 +                if (cfDef.isComposite && !cfDef.isCompact && builder.componentCount() != 0)
 +                    throw new InvalidRequestException(String.format("Missing mandatory PRIMARY KEY part %s", name));
 +            }
 +            else if (firstEmpty != null)
 +            {
 +                throw new InvalidRequestException(String.format("Missing PRIMARY KEY part %s since %s is set", firstEmpty, name));
 +            }
 +            else
 +            {
 +                assert values.size() == 1; // We only allow IN for keys so far
 +                builder.add(values.get(0), Relation.Type.EQ, variables);
 +            }
 +        }
 +
 +        List<IMutation> rowMutations = new ArrayList<IMutation>();
 +
 +        for (Term key : keys)
 +        {
 +            ByteBuffer rawKey = key.getByteBuffer(cfDef.key.type, variables);
 +            rowMutations.add(mutationForKey(cfDef, clientState, rawKey, builder, variables));
 +        }
 +
 +        return rowMutations;
 +    }
 +
 +    public RowMutation mutationForKey(CFDefinition cfDef, ClientState clientState, ByteBuffer key, ColumnNameBuilder builder, List<ByteBuffer> variables)
 +    throws InvalidRequestException
 +    {
 +        QueryProcessor.validateKey(key);
 +        RowMutation rm = new RowMutation(cfDef.cfm.ksName, key);
 +
 +        if (columns.isEmpty() && builder.componentCount() == 0)
 +        {
 +            // No columns, delete the row
 +            rm.delete(new QueryPath(columnFamily()), getTimestamp(clientState));
 +        }
 +        else
 +        {
 +            for (ColumnIdentifier column : columns)
 +            {
 +                CFDefinition.Name name = cfDef.get(column);
 +                if (name == null)
 +                    throw new InvalidRequestException(String.format("Unknown identifier %s", column));
 +
 +                // For compact, we only have one value except the key, so the only form of DELETE that make sense is without a column
 +                // list. However, we support having the value name for coherence with the static/sparse case
 +                if (name.kind != CFDefinition.Name.Kind.COLUMN_METADATA && name.kind != CFDefinition.Name.Kind.VALUE_ALIAS)
 +                    throw new InvalidRequestException(String.format("Invalid identifier %s for deletion (should not be a PRIMARY KEY part)", column));
 +            }
 +
 +            if (cfDef.isCompact)
 +            {
 +                    ByteBuffer columnName = builder.build();
 +                    QueryProcessor.validateColumnName(columnName);
 +                    rm.delete(new QueryPath(columnFamily(), null, columnName), getTimestamp(clientState));
 +            }
 +            else
 +            {
 +                // Delete specific columns
 +                Iterator<ColumnIdentifier> iter = columns.iterator();
 +                while (iter.hasNext())
 +                {
 +                    ColumnIdentifier column = iter.next();
 +                    ColumnNameBuilder b = iter.hasNext() ? builder.copy() : 
builder;
 +                    ByteBuffer columnName = b.add(column.key).build();
 +                    QueryProcessor.validateColumnName(columnName);
 +                    rm.delete(new QueryPath(columnFamily(), null, columnName), getTimestamp(clientState));
 +                }
 +            }
 +        }
 +
 +        return rm;
 +    }
 +
 +    public ParsedStatement.Prepared prepare() throws InvalidRequestException
 +    {
 +        CFMetaData metadata = ThriftValidation.validateColumnFamily(keyspace(), columnFamily());
 +        cfDef = metadata.getCfDef();
 +        AbstractType[] types = new AbstractType[getBoundsTerms()];
 +        UpdateStatement.processKeys(cfDef, whereClause, processedKeys, types);
 +        return new ParsedStatement.Prepared(this, Arrays.<AbstractType<?>>asList(types));
 +    }
 +
 +    public String toString()
 +    {
 +        return String.format("DeleteStatement(name=%s, columns=%s, consistency=%s keys=%s)",
 +                             cfName,
 +                             columns,
 +                             cLevel,
 +                             whereClause);
 +    }
 +}

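For readers following the getMutations() loop in the cql3 DeleteStatement above: primary-key components taken from the WHERE clause must form a contiguous prefix; once one component is absent (firstEmpty), any later component being set is rejected. Below is a small self-contained sketch of that check, simplified from the firstEmpty loop. The names used (PrimaryKeyPrefixCheckSketch, checkContiguousPrefix, plain String components) are hypothetical simplifications, not the Cassandra classes.

import java.util.Arrays;
import java.util.List;

public class PrimaryKeyPrefixCheckSketch
{
    // values.get(i) holds the WHERE-clause value bound to component names.get(i), or null if absent.
    static void checkContiguousPrefix(List<String> names, List<String> values)
    {
        String firstEmpty = null;
        for (int i = 0; i < names.size(); i++)
        {
            if (values.get(i) == null)
            {
                // Remember the first missing component; later components must also be missing.
                firstEmpty = names.get(i);
            }
            else if (firstEmpty != null)
            {
                throw new IllegalArgumentException(String.format(
                        "Missing PRIMARY KEY part %s since %s is set", firstEmpty, names.get(i)));
            }
        }
    }

    public static void main(String[] args)
    {
        List<String> names = Arrays.asList("k", "c1", "c2");

        // Contiguous prefix (k, c1 set; c2 absent): accepted.
        checkContiguousPrefix(names, Arrays.asList("1", "a", null));

        // Gap (c1 missing while c2 is set): rejected.
        try
        {
            checkContiguousPrefix(names, Arrays.asList("1", null, "b"));
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}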