[ https://issues.apache.org/jira/browse/CASSANDRA-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254190#comment-13254190 ]

Vijay edited comment on CASSANDRA-4150 at 4/15/12 12:07 AM:
------------------------------------------------------------

For the record, the comments from the issue filed against the CLHM library:
{quote}
I think a long capacity is fine, but I'm not actively working on a next release 
to roll this into soon. If this is critical then it could be a patch release. 
You are of course welcome to fork if neither of those options are okay.

I helped my former colleagues at Google with Guava's CacheBuilder (formerly 
MapMaker), which could be considered the successor to this project. There the 
maximum weight is a long.
{quote}
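
For context, a minimal sketch of the two builder APIs side by side (assuming 
the CLHM 1.2-era builder and Guava's CacheBuilder; the String/byte[] types, the 
byte-length weighers, and the CapacityComparison class are illustrative only):

{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;

public class CapacityComparison
{
    public static void main(String[] args)
    {
        // CLHM today: maximumWeightedCapacity is int-valued, so a byte-based
        // weigher caps the cache at Integer.MAX_VALUE bytes (~2 GB).
        ConcurrentLinkedHashMap<String, byte[]> clhm =
            new ConcurrentLinkedHashMap.Builder<String, byte[]>()
                .maximumWeightedCapacity(Integer.MAX_VALUE)
                .weigher(new Weigher<byte[]>()
                {
                    // CLHM requires weights >= 1
                    public int weightOf(byte[] value) { return Math.max(1, value.length); }
                })
                .build();

        // Guava CacheBuilder: maximumWeight takes a long, so the same
        // byte-based weigher is not limited to 2 GB.
        Cache<String, byte[]> guava = CacheBuilder.newBuilder()
            .maximumWeight(64L * 1024 * 1024 * 1024) // e.g. 64 GB of weight
            .weigher(new com.google.common.cache.Weigher<String, byte[]>()
            {
                public int weigh(String key, byte[] value) { return value.length; }
            })
            .build();
    }
}
{code}

The int vs long difference between the two maximum-weight setters is exactly 
the 2 GB ceiling this ticket is about.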

IRC: Guava doesn't support descendingKeySetWithLimit.
We could fork the CLHM code into the Cassandra code base, or drop the hot-keys 
method and use Guava (that's the only limitation I see for now).
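
A minimal sketch of the hot-keys capability that would be lost by switching to 
Guava, assuming CLHM's descendingKeySetWithLimit; the HotKeys helper and its 
names are hypothetical:

{code}
import java.util.Set;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class HotKeys
{
    // Returns the n most-recently-used keys, hottest first. This ordered
    // view is what hot-key sampling relies on; Guava's Cache exposes no
    // equivalent access-ordered key set.
    public static <K, V> Set<K> hottest(ConcurrentLinkedHashMap<K, V> map, int n)
    {
        return map.descendingKeySetWithLimit(n);
    }
}
{code}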
                
      was (Author: vijay2...@yahoo.com):
    For the record, the comments from the issue filed against the CLHM library:
{quote}
I think a long capacity is fine, but I'm not actively working on a next release 
to roll this into soon. If this is critical then it could be a patch release. 
You are of course welcome to fork if neither of those options are okay.

I helped my former colleagues at Google with Guava's CacheBuilder (formerly 
MapMaker), which could be considered the successor to this project. There the 
maximum weight is a long.
{quote}

IRC: Guava doesn't support descendingKeySetWithLimit.

Looks like the minimum overhead of serializing a ColumnFamily is 22 bytes:

{code}
// sentinel flag
dos.writeBoolean(cf instanceof RowCacheSentinel);

// CF exists or not
dos.writeBoolean(true);
// CF id
dos.writeInt(columnFamily.id());

// CF deletion info
dos.writeInt(columnFamily.getLocalDeletionTime());
dos.writeLong(columnFamily.getMarkedForDeleteAt());

// column count
dos.writeInt(count);

// 1 + 1 + 4 + 4 + 8 + 4 = 22 bytes
{code}

So for this ticket, we can set the maximumWeightedCapacity to capacity/22, 
which will allow us to go up to ~44 GB (Integer.MAX_VALUE * 22 bytes) in 1.1, 
and we can work through the alternative in another ticket for 1.2? Possibly 
fork the CLHM code into the Cassandra code base, or drop the hot-keys method 
and use Guava (that's the only limitation I see for now).
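
A minimal sketch of that interim scheme, assuming a CLHM weigher that charges 
one weight unit per 22 bytes of serialized data; the SerializedWeigher class, 
its names, and the divisor handling are hypothetical, not the actual patch:

{code}
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;

public class SerializedWeigher
{
    // Minimum serialized overhead per ColumnFamily, per the breakdown above.
    private static final int MIN_ENTRY_SIZE = 22;

    public static ConcurrentLinkedHashMap<Object, byte[]> build(long capacityInBytes)
    {
        // Charge one weight unit per 22 bytes, so an int-valued capacity of
        // Integer.MAX_VALUE units covers roughly 2G * 22 = ~44 GB of data.
        int weightedCapacity =
            (int) Math.min(capacityInBytes / MIN_ENTRY_SIZE, Integer.MAX_VALUE);
        return new ConcurrentLinkedHashMap.Builder<Object, byte[]>()
            .maximumWeightedCapacity(weightedCapacity)
            .weigher(new Weigher<byte[]>()
            {
                public int weightOf(byte[] serialized)
                {
                    // Every entry costs at least one unit (22 bytes is the floor).
                    return Math.max(1, serialized.length / MIN_ENTRY_SIZE);
                }
            })
            .build();
    }
}
{code}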
                  
> Looks like Maximum amount of cache available in 1.1 is 2 GB
> -----------------------------------------------------------
>
>                 Key: CASSANDRA-4150
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4150
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.1.0
>            Reporter: Vijay
>            Assignee: Vijay
>
> The problem is that capacity is an Integer, which can hold a maximum of 2 GB.
> I will post a fix to CLHM; in the meantime we might want to remove the 
> maximumWeightedCapacity code path (at least for the serializing cache) and 
> implement it in our code.
