Oh man, you know what my problem was: I was not specifying the keyspace
after nodetool status. After specifying the keyspace, I get the 100%
ownership like I would expect.

nodetool status discussions
ubuntu@prd-usw2b-pr-01-dscsapi-cadb-0002:~$ nodetool status discussions
Datacenter: us-east-1
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.198.4.80    2.02 MB    256     100.0%            e31aecd5-1eb1-4ddb-85ac-7a4135618b66  use1d
UN  10.198.2.20    132.34 MB  256     100.0%            3253080f-09b6-47a6-9b66-da3d174d1101  use1c
UN  10.198.0.249   1.77 MB    256     100.0%            22b30bea-5643-43b5-8d98-6e0eafe4af75  use1b
Datacenter: us-west-2
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.198.20.51   1.2 MB     256     100.0%            6a40b500-cff4-4513-b26b-ea33048c1590  usw2c
UN  10.198.16.92   1.46 MB    256     100.0%            01989d0b-0f81-411b-a70e-f22f01189542  usw2a
UN  10.198.18.125  2.14 MB    256     100.0%            aa746ed1-288c-414f-8d97-65fc867a5bdd  usw2b
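In case it helps anyone else, my understanding of the difference (keyspace
name is the one from this thread):

  nodetool status              # without a keyspace, "Owns" is just raw token ownership (~16% per node on a 6 node cluster)
  nodetool status discussions  # with a keyspace, "Owns (effective)" is computed against that keyspace's replication settings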


As for the counts being off: running "nodetool repair discussions", which
you're supposed to do after changing the replication factor, fixed that.
After running it on all 6 nodes in my cluster, that one column family is
returning a count of 60 on each node.
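
For anyone following along, the change boils down to something like this (a
sketch; I'm assuming NetworkTopologyStrategy with the datacenter names from
the status output above, so adjust to your topology):

  cqlsh> ALTER KEYSPACE discussions WITH replication =
     ...   {'class': 'NetworkTopologyStrategy', 'us-east-1': 3, 'us-west-2': 3};
  $ nodetool repair discussions    # run on each of the 6 nodes so the new replicas get populated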

Thanks for all the help here. I've only been working with Cassandra for a
couple of months now and there is a lot to learn.

Thanks,
Rob


On Sun, Jan 5, 2014 at 11:55 PM, Or Sher <or.sh...@gmail.com> wrote:

> RandomPartitioner was the default in versions < 1.2.*
> It looks like since 1.2 the default is Murmur3..
> Not sure that's your problem if you say you've upgraded from 1.2.*..
>
>
> On Mon, Jan 6, 2014 at 3:42 AM, Rob Mullen <robert.mul...@pearson.com> wrote:
>
>> Do you know if the default changed?   I'm pretty sure I never changed
>> that setting in the config file.
>>
>> Sent from my iPhone
>>
>> On Jan 4, 2014, at 11:22 PM, Or Sher <or.sh...@gmail.com> wrote:
>>
>> Robert, is it possible you've changed the partitioner during the upgrade?
>> (e.g. from RandomPartitioner to Murmur3Partitioner ?)
>>
>>
>> On Sat, Jan 4, 2014 at 9:32 PM, Mullen, Robert <robert.mul...@pearson.com
>> > wrote:
>>
>>> The nodetool repair command (which took about 8 hours) seems to have
>>> sync'd the data in us-east, all 3 nodes returning 59 for the count now.
>>>  I'm wondering if this has more to do with changing the replication factor
>>> from 2 to 3 and how 2.0.2 reports the % owned rather than the upgrade
>>> itself.  I still don't understand why it's reporting 16% for each node when
>>> 100% seems to reflect the state of the cluster better.  I didn't find any
>>> info in those issues you posted that would relate to the % changing from
>>> 100% ->16%.
>>>
>>>
>>> On Sat, Jan 4, 2014 at 12:26 PM, Mullen, Robert <
>>> robert.mul...@pearson.com> wrote:
>>>
>>>> from cql
>>>> cqlsh> select count(*) from topics;
>>>>
>>>>
>>>>
>>>> On Sat, Jan 4, 2014 at 12:18 PM, Robert Coli <rc...@eventbrite.com> wrote:
>>>>
>>>>> On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert <
>>>>> robert.mul...@pearson.com> wrote:
>>>>>
>>>>>> I have a column family called "topics" which has a count of 47 on one
>>>>>> node, 59 on another and 49 on another node. It was my understanding with
>>>>>> a replication factor of 3 and 3 nodes in each ring that the nodes should
>>>>>> be equal so I could lose a node in the ring and have no loss of data.
>>>>>> Based upon that I would expect the counts across the nodes to all be 59
>>>>>> in this case.
>>>>>>
>>>>>
>>>>> In what specific way are you counting rows?
>>>>>
>>>>> =Rob
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Or Sher
>>
>>
>
>
> --
> Or Sher
>
