Rob,

Yes, you can use the provided iso8601.js file to parse ISO 8601 dates. (It is 
only generated for you in a Rails 3 project, unfortunately, but you can grab it 
from inside the gem or on GitHub.) You'll need to set js_source_dir in 
app.config to point to the directory where it lives.
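For example, the riak_kv section of app.config would gain an entry like the 
following (the path is only an example — point it at whichever directory you 
put iso8601.js in):

```erlang
%% app.config -- illustrative fragment; the path is an assumption
{riak_kv, [
    %% directory containing iso8601.js (and any other custom JS sources)
    {js_source_dir, "/etc/riak/js_source"}
]}.
```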

If we upgrade erlang_js to a later version of SpiderMonkey in the future, the 
iso8601.js file will no longer be needed (1.8.5 supports ISO 8601 natively). 
You can also choose to use RFC 822-style dates by setting Ripple.date_format = :rfc822.
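For reference, the workaround such a helper performs can be sketched roughly as 
follows — a hand-rolled ISO 8601 parser for Date constructors that predate 
1.8.5 (this is an illustrative sketch, not the actual contents of iso8601.js; 
the function name is made up):

```javascript
// Older SpiderMonkey Date constructors reject ISO 8601 strings
// ("Invalid Date"), so split the string apart and build the Date manually.
function parseISO8601(str) {
  var m = str.match(
    /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.\d+)?(Z|[+-]\d{2}:?\d{2})?$/
  );
  if (!m) { return null; }
  // Treat the fields as UTC first, then correct for any numeric offset.
  var d = new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]));
  if (m[7] && m[7] !== "Z") {
    // "+05:00" means local is ahead of UTC, so subtract to get UTC.
    var sign = m[7].charAt(0) === "-" ? 1 : -1;
    var digits = m[7].replace(":", "");
    var offsetMinutes = 60 * (+digits.slice(1, 3)) + (+digits.slice(3, 5));
    d = new Date(d.getTime() + sign * offsetMinutes * 60000);
  }
  return d;
}
```

With that in scope, `parseISO8601("2011-06-01T09:37:00Z")` gives a usable Date 
object where `new Date("2011-06-01T09:37:00Z")` would fail on old engines.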
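On the bucket-count question below: rather than listing keys, a common approach 
is a trivial map/reduce over the bucket — the map phase emits a 1 for every 
object and the reduce phase sums them (Riak ships a built-in Riak.reduceSum for 
the latter). A hedged sketch of what those phase functions look like (names are 
illustrative):

```javascript
// Map phase: emit 1 for each object in the bucket.
function mapCountOne(value, keyData, arg) {
  return [1];
}

// Reduce phase: sum the emitted 1s; equivalent in spirit to the
// built-in Riak.reduceSum.
function reduceSum(values, arg) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return [total];
}
```

Note this still touches every object in the bucket, so it is expensive — but 
unlike a key listing it returns an actual count.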

Sean Cribbs <[email protected]>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Jun 1, 2011, at 9:37 AM, Dingwell, Robert A. wrote:

> 
> 
> Thanks,
> 
> I was trying to use the number of keys as an indication of how many objects 
> there were in a bucket, but I see that is not the smartest approach.  I 
> didn't see anything that indicates the number of items in a bucket; is there 
> anything in Riak like this, or do I need to create a map/reduce job for it?
> 
> 
> This is totally off the original topic, but another question I have is 
> related to JavaScript Date objects.  In Ripple, if I have a field that is a 
> Time field, it serializes to Riak as an ISO 8601 string representation of 
> the object.  When performing a map/reduce job on the bucket for my Ripple 
> class, if I attempt to create a JavaScript Date object from the string in 
> the stored object, I get an Invalid Date error.  Is this just due to a 
> limitation in the version of SpiderMonkey being used?
> 
> Thanks
> 
> Rob
> 
> On May 31, 2011, at 8:44 PM, Sean Cribbs wrote:
> 
>> Robert,
>> 
>> What Keith said is misleading -- that key cache was solely in the Ruby 
>> client driver and not part of Riak itself.
>> 
>> In Riak, deletes have two phases; in the first, so-called "tombstones" are 
>> written to the partitions that own replicas of the key.  The tombstone has 
>> special metadata marking it as such and an empty value, but has a descendant 
>> vector clock from the last known value. In the second phase, the tombstones 
>> are read back from the replicas, and iff they all are tombstones (that is, 
>> all replicas respond, and all are tombstones), a reaping command is sent 
>> such that they will be cleared from the backend.
>> 
>> In your case, what may have occurred is that the replica chosen for 
>> key-listing did not receive the tombstone write (only 1/n_val of all 
>> partitions are consulted for key-lists), or had not yet received the reaping 
>> command. When you read the key again, you obviously get a "not found" 
>> because the other replicas will resolve to a tombstone. Eventually your read 
>> requests will invoke read-repair, updating the stale partition and causing 
>> the value to be reaped.
>> 
>> The moral of the story here is, again, don't rely on key-listings for strong 
>> indications of cluster state.
>> 
>> Sean Cribbs <[email protected]>
>> Developer Advocate
>> Basho Technologies, Inc.
>> http://basho.com/
>> 
>> On May 31, 2011, at 8:12 PM, Keith Bennett wrote:
>> 
>>> Robert -
>>> 
>>> Until a source code change a few days ago, Riak would by default cache the 
>>> keys reported to be in a bucket, so after fetching them once they would 
>>> not be updated after deletions, additions, etc.  The key is indeed gone, 
>>> but the keys API did not report the change.
>>> 
>>> If you go to the message archive at 
>>> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-May/thread.html,
>>>  and search for "Riak cleint resources", you'll see the ruckus that I 
>>> started a week and a half ago about this very subject. ;)
>>> 
>>> There is an option to force the reloading of keys, but I forget what it 
>>> is, and in any case it is now gone from the current code base since the 
>>> strategy was changed.  Be warned that, as Sean Cribbs pointed out to me, 
>>> using the keys method is in general an awful idea and should almost always 
>>> be avoided.  This is because it's a very expensive operation: to 
>>> accomplish it, every key in the data store must be accessed.
>>> 
>>> My guess is that testing for the exception you encountered is probably the 
>>> best way to test for existence/absence of a key, but hopefully those more 
>>> knowledgeable than I will enlighten us on that.
>>> 
>>> - Keith
>>> 
>>> On May 31, 2011, at 7:23 PM, Dingwell, Robert A. wrote:
>>> 
>>>> Hi,
>>>> 
>>>> When deleting a key from a bucket, I'm noticing that the object 
>>>> associated with the key is gone but the key itself is still sticking 
>>>> around.  I loop through all of the keys in a bucket and call delete on 
>>>> each one; the object for each key is then gone, so if I try to get the 
>>>> object for that key I get a 404 as expected.  But if I look at the bucket 
>>>> in the browser with the keys=true parameter, all of the keys are still 
>>>> there.  Is this normal, and if so, how do I get rid of the keys?
>>>> 
>>>> Thanks
>>>> 
>>>> _______________________________________________
>>>> riak-users mailing list
>>>> [email protected]
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>>> 
>> 
> 


