I haven't seen the server from the inside (I've only written a client), so maybe this is a noob question, but

why do we have to treat the key as a "string"? Can't it just be treated as an array of bytes (which cannot contain some of the values listed earlier)? Then we wouldn't have to care about code pages and encodings and such.
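
Something like this byte-level check is what I have in mind (a quick Python sketch, the function name is made up; the 250-byte limit and the no-whitespace/no-control-characters rule are the ones from the text protocol docs):

    # Sketch: would this raw byte string be a legal key under the text
    # protocol's rules (<= 250 bytes, no whitespace, no control chars)?
    def is_valid_text_key(key: bytes) -> bool:
        if not key or len(key) > 250:
            return False
        # 0x00-0x20 covers the control characters plus the space
        # delimiter; 0x7F is DEL. Everything else, including arbitrary
        # UTF-8 bytes, passes.
        return all(b > 0x20 and b != 0x7F for b in key)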



a.



On Dec 20, 2007, at 5:51 PM, Kieran Benton wrote:

Rakesh,
Just our 2 cents, but I think artificially restricting keys to ASCII when there is no technical reason to do so (i.e. as long as it works with both the text and binary protocols) is a bit short-sighted. Having the whole UTF-8 range available helps you avoid having to pre-hash your keys when they contain non-ASCII characters (e.g. delimiters).

-Kieran

From: [EMAIL PROTECTED] [mailto:memcached-[EMAIL PROTECTED]] On Behalf Of Rakesh Rajan
Sent: 20 December 2007 16:43
To: Dustin Sallings
Cc: [email protected]
Subject: Re: What is a valid key?

Dustin, just to clarify the bug report that I emailed you, the problem was with the "value" and not the "key".

Since you brought up the key issue with UTF-8, I think it is acceptable to force users to use ASCII keys, but allow values to be UTF-8.

-Rakesh


On Dec 20, 2007 9:15 AM, Steven Grimm <[EMAIL PROTECTED]> wrote:
On Dec 19, 2007, at 7:43 PM, Dustin Sallings wrote:
>> For the binary protocol I think none of this should matter at all.
>> A key has a key length, so the question of valid characters should
>> not be relevant.
>
> That's true, but it'd be really nice to not have different rules
> based on protocol.
In particular, I think it's unacceptable to be able to set a key/value
pair with the binary protocol that you can't retrieve with the text
protocol.
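
To make the hazard concrete, here is a rough sketch (Python, not actual server code) of how the text protocol's space-delimited framing mangles a key that the length-prefixed binary protocol would happily accept:

    # The text protocol splits a request line on spaces, so a key like
    # "my key" (storable via the binary protocol, where the key carries
    # an explicit length) can never be named as a single argument here.
    line = b"get my key\r\n"
    command, *keys = line.rstrip(b"\r\n").split(b" ")
    # command == b"get", keys == [b"my", b"key"]: two lookups, not one.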

-Steve

