Re: 1.4.19 bug?

2017-10-30 Thread Matt Ingenthron
Note that it could also be that your webapp's threads were all just stuck
waiting to time out on one node.  Depending on the client you're using and
how your app is written, it may appear that everything is 'offline', when
really your app is just spending a large amount of time waiting for one
down node to time out.

One solution to this is to implement a circuit breaker pattern.
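For illustration, a minimal circuit breaker might look like the following (a sketch only; the class and parameter names are mine, not from any particular client library):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures, short-circuit calls for `reset_timeout` seconds instead
    of waiting on a dead node."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open; node presumed down")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

Wrapping each per-node cache call in a breaker means that once a node is deemed down, requests against it fail fast instead of tying up webapp threads for the full timeout.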

Hope that helps!

Matt

On Mon, Oct 30, 2017 at 9:25 AM dormando  wrote:

> Hey,
>
> You're saying the process was just suspended and you resumed it with a
> signal? Or did it crash and you had to restart it?
>
> I don't recall any major bugs, but that version is very old (Dec 2013), so
> support will be limited.
>
> -Dormando
>
> On Mon, 30 Oct 2017, satoshi fujioka wrote:
>
> > Hi.
> > We are running 16 memcached servers, with a Tomcat web application as
> > the client. Memcached suddenly hung at 11 o'clock on October 22, 2017.
> > When the event occurred, all web applications that access memcached
> > stopped responding.
> >
> > The event was resolved by resuming the memcached process. Are there
> > any already known problems?
> > Please tell us what information would help with the analysis.
> >
> > Current statistics:
> >
> > Memcached Version: 1.4.16
> > Start Time: 2017/10/22 14:08:56
> > Uptime: 4 days, 5 hours and 49 minutes
> > Memcached Server Version: 1.4.16
> > Used Cache Size: 79.4 MBytes
> > Max Cache Size: 1.0 GBytes
> > Current Connections Count: 250
> > Total Connections So Far: 588
> > Flush CMD Count: 28
> > Get CMD Count: 388652096
> > Set CMD Count: 8367177
> > Items Evicted So Far: 0
> > Bytes Read So Far: 10.8 GBytes
> > Bytes Written So Far: 1009.4 GBytes
> > Threads: 4
> >
> > Software versions we use:
> >
> > Apache: 2.2.34
> > Tomcat: 7.0.79
> > JDK: 8u144
> > memcached: 1.4.16
> > kernel: 2.6.32-696.6.3.el6.x86
> >
> > Regards.
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google
> Groups "memcached" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to memcached+unsubscr...@googlegroups.com.
> > For more options, visit https://groups.google.com/d/optout.
> >
>



Re: Python Memcached

2017-08-28 Thread Matt Ingenthron
Curious, what kind of protocol problems?  There were a couple Trond and I
were looking at with enabling out-of-order responses and some unsolicited
responses.

On Fri, Aug 25, 2017 at 2:53 PM dormando  wrote:

> I have some thoughts but need to finish a few other things first.
>
> I kind of want something a bit more barebones that speaks both client and
> server so I can embed it into the server and help prevent bitrot. Then
> another wrapper around that which becomes the external client.
>
> At the same time I'd like to fix some protocol problems... Been bothering
> me for a long time. Might be a good opportunity to kill a flock of birds.
>
> On Fri, 25 Aug 2017, Brian Aker wrote:
>
> > I am not really sure where to go with libmemcached right now.
> >
> > Its design shows the age in which it was built.
> >
> > > On Aug 25, 2017, at 00:52, dormando  wrote:
> > >
> > > Pretty sure Frank was asking about server implementations of memcached
> in
> > > python.
> > >
> > > As per that, I've only ever seen people create those for educational
> > > purposes. Not aware of anything anyone runs in production. What do you
> > > need it for?
> > >
> > > As per libmemcached being idle; I've noticed and am hoping to be able
> to
> > > work on it this year.
> > >
> > > -Dormando
> > >
> > > On Thu, 24 Aug 2017, Min-Zhong "John" Lu wrote:
> > >
> > >> We use pylibmc. Fairly stable, with built-in handling of multi-server
> > >> key distribution. The disadvantage of pylibmc (and many other client
> > >> libraries) lies in its dependency on libmemcached, which has seen
> > >> little activity for quite a while, which in turn hinders use of new
> > >> memcached commands; IIRC one example is the GAT command.
> > >> Frankly, the situation is the same for any Python client library that
> > >> depends on libmemcached for the actual underlying protocol/network
> > >> communication. I believe there's essentially no way to work around it
> > >> except by patching/building your own libmemcached.
> > >>
> > >> Cheers,
> > >> - Mnjul
> > >>
> > >> On Thursday, August 24, 2017 at 2:55:13 PM UTC-7, Frank Wang wrote:
> > >>  Hi,
> > >> I know there are a bunch of Python versions of memcached clients.
> Does anyone know of any good/stable Python implementations of the memcached
> > >> server?
> > >>
> > >> Thanks,
> > >> Frank
> > >>
> > >>
> > >
> >
> >
>
>



Re: Can memcached be used for true clustering...still unclear...

2015-10-29 Thread Matt Ingenthron
It's off topic for this list, but that's in effect what Couchbase does
along with persistence.

- Matt

(full disclosure, I work for Couchbase)

On Mon, Oct 26, 2015 at 6:35 AM, Jonas Steinberg 
wrote:

> I've been reading the web for an hour now and I'm still unclear as to
> whether memcached can actually be used to create a cluster without
> essentially having to write a ton of code.  I'm using it currently in some
> webapps, but only as a caching store for db load balancing.  I'd like to
> know if I can add additional memcached nodes to these servers such that if
> one goes down all of my key/value pairs, tickets, whatever, will still be
> accessible.  Basically, does memcached support this functionality like
> ehcache, hazelcast or infinispan do?
>
> Thanks!
>
>



Re: memcached fails unexpectedly

2013-08-11 Thread Matt Ingenthron
Best to start with a pstack to see what state it is in.  Use gdb to print a
stacktrace for all threads if your OS has no pstack command.

The other thing you may do is strace -c -f on the pid.  This will let you
know what it's trying to ask the OS for.

Finally, maybe try reducing the threads to 1 to see if it continues to get
in this state. That'd be good to know too.
On Aug 11, 2013 8:35 PM, Patrick Rynhart p.rynh...@massey.ac.nz wrote:

 Hello,

 We run memcached version 1.4.4-3.el6 for session caching with our Moodle
 environment.  We are finding that memcached suddenly stops working after a
 couple of weeks, and we are keen to try and work towards resolving this.

 When in the failed state:

 1. memcached is still running.
 2. There is nothing untoward in dmesg or syslog.
 3. We are able to connect to the memcache socket (TCP Port 11211) using
 telnet.  However, the telnet session hangs (indefinitely) after entering
 ‘stats’ then pressing return
 4. No swap is being used by machine running memcached

 No evictions have been logged. Cache hit ratio averages 98%. % used
 storage of memcache averages 3.1% with a max value of 7.5%.
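As an aside, the hanging-'stats'-over-telnet symptom can be probed with a hard timeout, so that monitoring notices a wedged server instead of blocking forever. A minimal sketch (function name and defaults are mine):

```python
import socket

def probe_stats(host="127.0.0.1", port=11211, timeout=5.0):
    """Issue a 'stats' command with a hard timeout, so a wedged
    memcached (accepting connections but never answering) raises a
    timeout instead of hanging the probe indefinitely like telnet."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:          # server closed the connection
                break
            data += chunk
    return data.decode("ascii", "replace")
```

Run from cron or a health check, a socket.timeout from this probe pinpoints the moment the daemon stops responding.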

 We also have lots of XYMon metrics / stats available - however, these
 don’t show anything unusual at the time that memcached fails.

 How do we go about troubleshooting this?  (The failed condition only
 occurs in our production Moodle instance - i.e. we haven't been able to
 replicate it in testing/dev.)

 Thanks,

 Patrick Rynhart

 Systems Engineer
 Infrastructure Support Section
 Information Technology Services
 Massey University
 Palmerston North
 NEW ZEALAND
 T: +64 6 356 9099 ext 81075





Re: Is memcached unable to handle large number of connections?

2013-07-26 Thread Matt Ingenthron
It is in response to a relatively rare, good problem to have.  Imagine you
have so many processes, each with a connection, on so many servers that
you're in the tens of thousands of connections.  Then a proxy/mux makes
sense over persistent connections.

Matt
On Jul 26, 2013 6:01 AM, Ryan Chan ryanchan...@gmail.com wrote:

 I've actually been using memcached for years and didn't have any problem,
 but I found a new memcached proxy called twemproxy, and it says:

  - Maintains persistent server connections.
  - Keeps connection count on the backend caching servers low.

 What's actually wrong with memcached on the above two points?
 Anyone have experience to share?

 Thanks.









Re: Memcached cluster

2013-03-14 Thread Matt Ingenthron
It's really a discussion for another mailing list, but if you could
elaborate to me directly or to couchb...@googlegroups.com, I'd be
interested in why you say Couchbase is much slower.  I've not seen it that
way.

Full disclosure, I'm a couchbase person.  I also do a lot of work on the
spymemcached client.

Thanks,

Matt


On Thu, Mar 14, 2013 at 4:47 AM, Oleksandr Drach luckyred...@gmail.com wrote:

 Thanks, Henrik!
 I will look onto Cassandra later.

 BTW repcached 1.2.x may fit our needs..
 Anyone has used it in production? What are your feedbacks?


 On Monday, March 11, 2013 1:48:23 PM UTC+2, Henrik Schröder wrote:

 Memcached is a cache, not storage, you really shouldn't use it as such.
 When you set a value in memcached, you have no guarantees whatsoever that
 you'll be able to get the value back afterwards. You're guaranteed to get
 the latest value set if you get something, and you're guaranteed to not get
 a value if it's been deleted or has expired. But there are a lot of factors
 that can cause a value to be spontaneously evicted, to say nothing of the
 fact that you lose everything if you restart it or if the machine crashes.

 Also note that any replication functionality can lead to inconsistency,
 since there are no built-in mechanisms for resolving that, you can just
 hope that your failover server has the same data as the original one.

 If you want storage, then get a piece of software that actually offers
 storage, there are plenty to choose from. But memcached is probably the
 wrong choice for you. If you only need key-value storage, then I suggest
 you check out Cassandra, it scales pretty linearly in that scenario. Or you
 could check out hstore in Postgresql, but you probably need to make your
 own sharding for that.


 /Henrik
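Henrik's look-aside model above — the cache as an optimization over authoritative storage, never as the system of record — can be sketched like this (`mc` and `db` are hypothetical stand-ins, not any specific library's API):

```python
def cached_get(mc, db, key, ttl=300):
    """Look-aside read: try the cache first, fall back to the
    authoritative store on a miss, then repopulate the cache."""
    value = mc.get(key)
    if value is not None:
        return value                 # cache hit
    value = db[key]                  # authoritative read; may raise KeyError
    mc.set(key, value, ttl)          # best-effort cache fill
    return value
```

Because every miss falls through to the real store, an eviction, expiry, or restart of memcached costs only latency, not data.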



On Mon, Mar 11, 2013 at 9:57 AM, Oleksandr Drach lucky...@gmail.com wrote:

 Hello Dormando!
 Thanks for your reply.

 Description and requirements are:
 - Memcached will be used a primary storage (i.e. not for caching MySQL).
 - It should have failover feature (in case if one server fails all
 connections goes to another server).

 Basically something like Master-Slave will be sufficient, but
 Master-Master architecture is more acceptable.
 Thanks!


 On Sunday, March 10, 2013 1:58:16 AM UTC+2, Dormando wrote:

  Dear memcached community,
  It would be really good to build a failover memcached cluster.
  As I know, this functionality is not provided by default.
  Hence, as options, you may use Couchbase Server or something like
 Repcached.
  Both of them have disadvantages: Couchbase Server is much slower, and
 Repcached works with the legacy memcached version 1.2.8.
 
  Based on your experience, what is the best way to build a cluster of
 memcached servers?
 
  Thanks in advance!

 Hi,

 This depends on why you need that second cluster and what the
 constraints
 are for it.

 You can do client side replication via libmemcached which will handle a
 lot of potential use cases. Though oftentimes people who are attempting
 to
 do this are doing so because they don't understand normal memcached
 clustering very well.

 So it'd be useful to state all of your requirements up front. Then we
 can
 make a real recommendation/etc.













Re: Asynchronous notifications

2012-08-04 Thread Matt Ingenthron
Adding TAP with a filter on key and TAP operation type would be cool indeed.


Re: security for memcached on shared hosting

2012-08-04 Thread Matt Ingenthron
Why not run multiple instances?

There is also the open source bucket engine (written for
membase/Couchbase), but it requires SASL and 1.6.  It could work,
depending on your environment.  The bucket engine is used by, and was
written for, Heroku's memcached add-on by Couchbase.


Re: Warning from spymemcached

2012-08-04 Thread Matt Ingenthron
You're likely overrunning the input queue.  If you configure it for a bit
of blocking, it'll work better for you.  It's similar to the bulk loading
description here:
http://www.couchbase.com/wiki/display/couchbase/Couchbase+Java+Client+Library

The tuning here is spymemcached, just extended.


Re: Inserts to memcache very slow

2012-07-30 Thread Matt Ingenthron
Which client are you using?  Any info in that area you can share?  Also, is
there possibly a different amount of network traffic each time?

Actually, one cause could be setting memcached up with more memory than is
available on your system.  How much physical memory do you have?
On Jul 30, 2012 6:22 PM, rik lijo.k.a...@gmail.com wrote:


 I am a newbie and am using memcache to store about 6 million entries. My
 process which updates memcache sometimes completes in 9 minutes (initially)
 and sometimes takes more than one hour.
 Can someone please shed a little light on why the insert time varies?

 For updating the cache I always use the set command. Before updating the
 cache I also call the flush command. Is the insert time greater because I
 am always using the set command instead of replace?

 I don't know which data may be useful, hence copy-pasting the stats
 command output.

 Thank you in advance.

 Thanks,
 Rik

 stats
 STAT pid 22845
 STAT uptime 647374
 STAT time 1343697228
 STAT version 1.4.1
 STAT pointer_size 64
 STAT rusage_user 374.297098
 STAT rusage_system 840.344248
 STAT curr_connections 11
 STAT total_connections 395
 STAT connection_structures 30
 STAT cmd_get 9
 STAT cmd_set 60296238
 STAT cmd_flush 9
 STAT get_hits 2
 STAT get_misses 7
 STAT delete_misses 0
 STAT delete_hits 0
 STAT incr_misses 0
 STAT incr_hits 0
 STAT decr_misses 0
 STAT decr_hits 0
 STAT cas_misses 0
 STAT cas_hits 0
 STAT cas_badval 0
 STAT bytes_read 5068593281
 STAT bytes_written 482391298
 STAT limit_maxbytes 1073741824
 STAT accepting_conns 1
 STAT listen_disabled_num 0
 STAT threads 5
 STAT conn_yields 0
 STAT bytes 871357730
 STAT curr_items 6699582
 STAT total_items 60296238
 STAT evictions 0
 END
 stats slabs
 STAT 2:chunk_size 120
 STAT 2:chunks_per_page 8738
 STAT 2:total_pages 1
 STAT 2:total_chunks 8738
 STAT 2:used_chunks 596
 STAT 2:free_chunks 1
 STAT 2:free_chunks_end 8141
 STAT 2:mem_requested 71467
 STAT 2:get_hits 0
 STAT 2:cmd_set 5364
 STAT 2:delete_hits 0
 STAT 2:incr_hits 0
 STAT 2:decr_hits 0
 STAT 2:cas_hits 0
 STAT 2:cas_badval 0
 STAT 3:chunk_size 152
 STAT 3:chunks_per_page 6898
 STAT 3:total_pages 967
 STAT 3:total_chunks 6670366
 STAT 3:used_chunks 6667495
 STAT 3:free_chunks 1
 STAT 3:free_chunks_end 2870
 STAT 3:mem_requested 866361263
 STAT 3:get_hits 2
 STAT 3:cmd_set 60007455
 STAT 3:delete_hits 0
 STAT 3:incr_hits 0
 STAT 3:decr_hits 0
 STAT 3:cas_hits 0
 STAT 3:cas_badval 0
 STAT 4:chunk_size 192
 STAT 4:chunks_per_page 5461
 STAT 4:total_pages 6
 STAT 4:total_chunks 32766
 STAT 4:used_chunks 31491
 STAT 4:free_chunks 1
 STAT 4:free_chunks_end 1274
 STAT 4:mem_requested 4925000
 STAT 4:get_hits 0
 STAT 4:cmd_set 283419
 STAT 4:delete_hits 0
 STAT 4:incr_hits 0
 STAT 4:decr_hits 0
 STAT 4:cas_hits 0
 STAT 4:cas_badval 0
 STAT active_slabs 3
 STAT total_malloced 1021235264
 END




Re: how do I not store the keys?

2011-10-24 Thread Matt Ingenthron
On 10/24/11 12:46 PM, dormando wrote:
  I'm using the last spymemcached release 2.7.3.
  This is caused by validateKey method in MemcachedClient:
 
    for (byte b : keyBytes) {
        if (b == ' ' || b == '\n' || b == '\r' || b == 0) {
            throw new IllegalArgumentException(
                "Key contains invalid characters: ``" + key + "''");
 
  What java client do you recommend?
 There should be a flag telling the client to not validate the key if
 you're in binprot mode. Dustin? Anyone?

There's nothing like that currently.  Last discussion I remember is that
we decided against allowing binary keys at the client because we don't
know what other clients may expect when trying to get that item.

We can certainly reconsider that, but it's not been needed thus far.

I might ask, are you doing sha1/md5 because you really need the sum of
something, or are you doing it to simplify what you use for your key?

Matt
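If the goal is simply to avoid the characters the client rejects (space, '\n', '\r', NUL), hashing the raw key to a hex digest sidesteps validation entirely, at the cost of losing human-readable keys (a sketch; the helper name is mine):

```python
import hashlib

def safe_key(raw: bytes) -> str:
    """Map an arbitrary byte string to a protocol-safe memcached key.

    A SHA-1 hex digest is 40 printable ASCII characters: well under
    the 250-byte key limit and free of space/newline/NUL bytes."""
    return hashlib.sha1(raw).hexdigest()
```

The usual caveat applies: all clients reading the data must apply the same hashing, and the original key is no longer recoverable from the cache itself.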


Re: how do I not store the keys?

2011-10-24 Thread Matt Ingenthron
On 10/24/11 3:48 PM, dormando wrote:
 There's nothing like that currently.  Last discussion I remember is that
 we decided against allowing binary keys at the client because we don't
 know what other clients may expect when trying to get that item.

 We can certainly reconsider that, but it's not been needed thus far.
 What the hell? I thought 50% of the whole point of the binary protocol was
 to make binary keys possible. It's a flag in most other clients. You know,
 like, that whole utf8 argument? Are you absolutely sure about this?

Calm down.  It clearly wasn't 50% of the use cases given that it's just
now come up.  :)

I'm not absolutely sure, but I do remember something about removing it
and discussing it with Dustin at some point.  I doubt if either of us
remember the conversation exactly.  Maybe Dustin will pop up and call me
a liar.  I doubt that though.

I'd surely take a patch/issue to add a configuration flag to ignore this
check, but there's not one currently. 

In my personal opinion, I think we should allow binary keys.  It is
useful. 


 I might ask, are you doing sha1/md5 because you really need the sum of
 something, or are you doing it to simplify what you use for your key?
 He's trying to reduce the bytes of the item to the absolute minimum.

Sorry, I'd not read the whole thread, but I have now.  Given the
'collisions are okay', it could be just as simple to strip out any
illegal characters.  I'd probably rather add a switch to flip though.

Matt


Re: how do I not store the keys?

2011-10-24 Thread Matt Ingenthron
On 10/24/11 4:33 PM, dormando wrote:
 Nobody fucking does that. Get over it, yo. People read the minimum amount
 of crap they have to read until it works. Everyone else doesn't have a
 hard time finding work.

 Also; because when you don't, people switch to to other systems because
 they believe it's easier, or they complain in IRC or on the mailing list
 and waste my time.
 For what it's worth; on one side of the aisle I have people flaming me
 because we don't support utf8 or long keys or binary whatever or CAS on
 everything, then on the other side people make shitty defaults or don't
 support these things or whatever.

 I just wish you guys could whine at each other *directly* instead of at me
 on different days.

Well, my email was to the list and I'd intended to be interacting with
Miguel.  I don't think either Miguel or I were whining directly to you. 
Or through you.

In this particular case, it's Dustin who had done most of the server
side work and nearly all of the client side work.  There may even be
some logic to why it is the way it is.

Then again, it could just be a bug that the check hasn't been removed
when speaking binary protocol. 

Miguel: I've filed some stuff to track this, and we'll get to the bottom
of it. 
http://code.google.com/p/spymemcached/issues/detail?id=213
http://www.couchbase.org/issues/browse/SPY-63

You may want to track it yourself on either bug tracker too.

Matt


release of spymemcached 2.7.2

2011-10-14 Thread Matt Ingenthron
Hi all,

I'm pleased to announce the release of spymemcached 2.7.2.  All are recommended 
to upgrade.

The master branch is under development, and will likely become the 2.8 release 
in the next month or two.  Further development on the 2.7 release is on the 
refresh branch on the source repository.

From the tag (which also serves as release notes):

Release of 2.7.2

The 2.7.2 release is a patch update, including a number of new
features.  

The 2.7.1 release introduced a required dependency update of
Apache commons codec.  A more elegant solution has been added
to this release so we can now use commons-codec 1.3, 1.4 or 1.5.
There is also a more elegant solution to the unfortunate
incompatibility introduced in Netty, where 3.2 changed the
signature of one method.  Thanks to Martin Grotzke for this fix.

Some notable bugs fixed in this release include:
* BaseSerializingTranscoder resource leak (issue 190)
* Operation class is used in un-threadsafe, unsynchronized manner
  (issue 195)
* decodeLong() would decode incorrectly and prematurely wrap
  (issue 202)

The development time experience is better starting in 2.7.1.  Classic
buildr test will work just as expected, but it will run only the
subset of tests that are appropriate for memcached.  Using some
environment variables, tests can be run against Membase either locally
or remotely or against a remote memcached.

Examples:

Run tests against a Membase instance on the localhost:

  buildr test SPYMC_SERVER_TYPE=membase


Run memcached IPv4 tests and try to run IPv6 tests against the specified server:

  buildr test SPYMC_TEST_SERVER_V4=10.2.1.58


Run a test against the specified IPv4 and IPv6 servers:

  buildr test SPYMC_TEST_SERVER_V4=10.2.1.58 SPYMC_TEST_SERVER_V6=
some_ipv6_addr


Summary of changes since the 2.6 series:
(see the 2.7 release notes for more details)

The 2.7 series gains significant new capabilities when using
binary protocol with Membase or forthcoming updates to
memcached.

Starting with the 2.7 release, it is now possible to
instantiate a MemcachedClient from the REST interface
supported by Membase.  What this means is that if you have
one or more buckets in Membase, you can create
MemcachedClient objects to work with the buckets.  Furthermore,
if the cluster topology changes (i.e. a node is added or
removed), the client will automatically adjust to the new
topology.

This updated client also has support for other new Membase
operations, some of which will likely be in memcached
as well in a future release:
  touch - extend the expiration for a given item
  get and touch (a.k.a. gat) - get and touch an item
  getl - get and lock an item, with a lock expiration time

The majority of contributions to this release were
funded by Couchbase, Inc.  Thanks to the 2.7.2 contributors:

Daniel Martin (2):
  Fix concurrent access to operations objects, especially near timeouts
  Use direct buffers in TCPMemcachedNodeImpl

Martin Grotzke (1):
  Add compatibility with netty 3.2.0+.

Matt Ingenthron (2):
  No need for old debugging string in test.
  Revert SPY-37  SPY-38: Fixed redistribution performance issue

Mike Wiederhold (21):
  Operations can't timeout when writing to the write buffer.
  SPY-125: Significant performance issue large number of sets
  Improved performance of write queue processing during timeouts
  Add support for commons-codec 1.3, 1.4, and 1.5
  Remove assertions that assert a completed op isn't timed out
  SPY-49: BaseSerializingTranscoder does not close resources.
  Removed unused variables in GetOperationImpl
  Change getBytes() to getData() in CASOperation
  SPY-39: Added toString() to operation heirarchy
  Added toString() functions to ConnectionFactory classes.
  SPY-47: Client object should have toString().
  SPY-54: getBulk() shouldn't log a warning when a key is not found
  Send an ack for all tap opaque messages
  Made cmd variable a byte for binary operations
  Removed a print line statement from TestConfig
  Removed extra variables in tapCustom header
  Flush the PrintWriter in TapMessagePrinter
  Don't reconnect when a tap connection finishes.
  Made vbmap in MultiKey operation synchronized
  SPY-37  SPY-38: Fixed redistribution performance issue
  Refactored tap message classes.

sanada0670 (1):
  SPY-51: Bug in OperationImpl's decodeLong(2)


Bugs fixed/closed in 2.7.2:
http://code.google.com/p/spymemcached/issues/detail?id=125
http://code.google.com/p/spymemcached/issues/detail?id=166
http://code.google.com/p/spymemcached/issues/detail?id=190
http://code.google.com/p/spymemcached/issues/detail?id=193 (post release)
http://code.google.com/p/spymemcached/issues/detail?id=195
http://code.google.com/p/spymemcached/issues/detail?id=196
http://code.google.com/p/spymemcached/issues/detail?id=201
http://code.google.com/p/spymemcached/issues/detail?id=202


Bugs fixed/closed in 2.7.1:
http

release of spymemcached 2.7.1 - Java client for memcached (and Membase)

2011-08-23 Thread Matt Ingenthron
  Added unit tests for touch
  Add touch, get and touch, and get and lock to MemcachedClientIF
  Fixed broken get and touch test
  ASCII get operations now return a false operation status on failure
  Add visibility into operations (status).
  Add visibility into operations (key)
  Added all memcached error codes to spymemcached.
  Removed unused import from ConfigurationProviderHTTP
  Added serial ID's to exceptions.
  Fixed issue regarding connecting to a non-existent bucket
  Removed unused imports in VBucketCacheNodeLocatorTest
  Added constructor to MemcachedClient that takes a ConnectionFactory
  Added generic to SingleElementFiniteIterator in MemcachedClient.
  Added source folder for manuel tests to Eclipse config file
  Made SyncGetTest failures less sporadic
  Changed the value size of items used in LongClientTest
  Made operation timeout longer for QueueOverflowTest
  Added tap client
  Removed unused variables in testcases.
  Refactored Operations to improve correctness of vbucket aware ops
  Made an addOperation function private in MemcachedConnection
  Fixed a bug where multi-gets didn't work with vb aware constructor
  Added a command line parameter for specifying server type
  Issue 96: ClassPathException fix
  Excluded Non-memcached tests when testing memcached
  TapOperation's shouldn't be KeyedOperations.
  Made TapTest only run against Membase.
  Made EINTERNAL and ERR2BIG errors throw an exception
  Added the ability to specify the ip address of the testing server
  Fixed issue with flags not being added properly to tap messages
  Added README.markdown.
  Added ability to do tap dump
  Tap streams now pause every 10,000 messages.

Matt Ingenthron (7):
  Adding a warmup state for nodes.
  Also check for RETRY during clone.
  Encode with commons codec more correctly.
  Ensure nodesMap updates are safe when topology changes.
  VBucketNodeLocator should not implement getSequence()
  Log warnings when retrying due to not my vbucket.
  Update commons-codec to 1.5 in .classpath for Eclipse.

Dustin Sallings (3):
  Fix dumb thing compiler warning was pointing out
  Fixed some shadowing parameter warnings.
  Compiler pointed out ignored exception. :(

Nelz Carpentier (1):
  Adding the repository needed to download netty.

Paul Burnstein (1):
  Spymemcached Issue 134: Performance fix

Vitaly Rudenya (1):
  All NodeLocator's can be reconfigured.

Bugs fixed/closed in 2.7.1:
http://code.google.com/p/spymemcached/issues/detail?id=96
http://code.google.com/p/spymemcached/issues/detail?id=134
http://code.google.com/p/spymemcached/issues/detail?id=152
http://code.google.com/p/spymemcached/issues/detail?id=171
http://code.google.com/p/spymemcached/issues/detail?id=187

Bugs fixed/closed in 2.7:
http://code.google.com/p/spymemcached/issues/detail?id=153
http://code.google.com/p/spymemcached/issues/detail?id=172
http://code.google.com/p/spymemcached/issues/detail?id=165

With others which can be listed here:
http://code.google.com/p/spymemcached/issues/list



Re: Memcached running very slow

2011-08-17 Thread Matt Ingenthron
On 8/17/11 8:44 AM, Neeraj Agarwal wrote:
 I installed memcached on Ubuntu box. Installed MongoDB too on the same
 machine to compare the performance for these two.

By default, mongodb doesn't check for responses at all.  It just sends
requests over.  That *could* be playing a role here.

The read numbers make even less sense, though.  Are you verifying
that you actually read the data back?

Something seems broken for sure with 220 records in 85 seconds.


 Storing in MongoDB  0.0940001010895  for  220  records
 Storing in memcached  83.203687
 Reading from MongoDB  0.0309998989105
 Reading from memcached  85.358951

 All time in seconds.

 I'm using the Python library 
 http://www.tummy.com/Community/software/python-memcached/
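
One way to make a comparison like this fair is to time the round trips and verify every read, rather than trusting fire-and-forget writes. Below is a minimal, client-agnostic sketch; the commented `memcache.Client` usage shows how it could be wired to python-memcached, and the host/port there is just an example:

```python
import time

def bench(setter, getter, n=220):
    """Time n set/get round trips and verify every value reads back."""
    t0 = time.perf_counter()
    for i in range(n):
        assert setter("key%d" % i, "value%d" % i)  # fail fast on a bad write
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(n):
        assert getter("key%d" % i) == "value%d" % i  # verify read-back
    read_s = time.perf_counter() - t0
    return write_s, read_s

# Against a real server, e.g. with python-memcached (example address):
#   import memcache
#   mc = memcache.Client(["127.0.0.1:11211"])
#   write_s, read_s = bench(mc.set, mc.get)
```

If the store never acknowledges writes (as with unacknowledged MongoDB inserts), the write loop measures little more than local buffering, which is why the two systems are not directly comparable without this kind of verification.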





Re: does memcached cause any disk io?

2011-06-21 Thread Matt Ingenthron
On 6/21/11 1:37 AM, Mark Maggelet wrote:
 I'm hoping someone can explain this to me.
 I turned on fragment caching with memcached for my rails app recently
 and unless I'm tripping it made the disk io part of my ec2 bill go up.
 The memcached server is on a different host than the rails app if that
 means anything.

Since memory is a shared resource, it's possible that setting up a
memcached server on a system that otherwise had stuff in memory would
make those other memory pages move to and from swap.  This could even be
filesystem caching.  You don't say if there are other things on that
other box or if you have swap enabled.

That said, if it's IO you're noticing on the bill, it's probably not swap.

Hope that helps,

Matt


Re: maximum number of keys in a get operation

2011-06-20 Thread Matt Ingenthron
On 6/20/11 11:44 AM, nt wrote:
 What's the upper bound on the number of keys a client can get from
 the server?  Info regarding both the ASCII and binary protocols would
 be greatly appreciated.

There's no protocol-level cap on the count; with keys of up to 250
printable characters the keyspace is astronomically large, and you can
also request the same key more than once.  The practical limit is
whatever request size your client and server will buffer.

The binary protocol follows the ASCII protocol's restrictions on keys.
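
In practice, clients often bound each multi-get themselves by batching the key list. A small sketch (the batch size of 1000 is an arbitrary example, and the commented `get_multi` call is python-memcached's API):

```python
def chunked(keys, size=1000):
    """Split a key list into batches so each multi-get request stays bounded."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# e.g. with python-memcached (hypothetical client instance `mc`):
#   results = {}
#   for batch in chunked(all_keys, 1000):
#       results.update(mc.get_multi(batch))
```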




Re: memcache 1.4.5 crashed under high load (200-400mbps on the network interface)

2011-06-19 Thread Matt Ingenthron
Hi Nickolay,

Couple things missing are OS and libevent version.  :)

You may want to try preloading libSegFault; that would give us a bit more
information.  Also verify you're on the latest 1.4.x release of libevent.

Otherwise there's not much useful info I see.

Matt

On 6/19/11 2:52 AM, Kutovoy Nickolay wrote:
 Unfortunately I can't reproduce it as the load is gone now, also - no
 logs, almost nothing. It's not the first time this happened.

 [root@memcache1 ~]# memcached -h
 memcached 1.4.5


 ps axf: memcached -d -p 11211 -u nobody -c 4096 -m 7168



 [root@memcache1 ~]# free
              total     used     free   shared  buffers   cached
 Mem:       6099872  1257096  4842776        0   149932   665644
 -/+ buffers/cache:   441520  5658352
 Swap:      8159224    17452  8141772

 from what can I see in dmesg:

 possible SYN flooding on port 11211. Sending cookies.
 possible SYN flooding on port 11211. Sending cookies.
 possible SYN flooding on port 11211. Sending cookies.

 cpuinfo: 8 cores

 processor   : 7
 vendor_id   : GenuineIntel
 cpu family  : 6
 model   : 26
 model name  : Intel(R) Xeon(R) CPU   L5520  @ 2.27GHz
 stepping: 5
 cpu MHz : 2261.060
 cache size  : 8192 KB
 physical id : 1
 siblings: 8
 core id : 3
 cpu cores   : 4
 apicid  : 23
 fpu : yes
 fpu_exception   : yes
 cpuid level : 11
 wp  : yes
 flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
 mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall
 nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est
 tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
 bogomips: 4521.97
 clflush size: 64
 cache_alignment : 64
 address sizes   : 40 bits physical, 48 bits virtual
 power management: [8]


 [root@memcache1 ~]# top
 top - 09:51:50 up 67 days, 12:57,  1 user,  load average: 0.01, 0.02, 0.00
 Tasks: 126 total,   1 running, 125 sleeping,   0 stopped,   0 zombie
 Cpu(s):  0.2%us,  0.3%sy,  0.0%ni, 98.9%id,  0.0%wa,  0.0%hi,  0.6%si,  0.0%st
 Mem:   6099872k total,  1256848k used,  4843024k free,   149956k buffers
 Swap:  8159224k total,    17452k used,  8141772k free,   665656k cached

   PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 25427 nobody 15   0  433m 321m  400 S  4.7  5.4 112:29.40 memcached
  2854 root   18   0 10232  612  592 S  0.3  0.0   9:58.64 hald-addon-stor



Crosspost: spymemcached 2.7 release (was: Fwd: release of 2.7 and roadmap)

2011-06-09 Thread Matt Ingenthron
Hi,

For those on this list who are interested, spymemcached 2.7 has been
released.  A message I'd sent out to the spy list is below.

Thanks,

Matt

 Original Message 
Subject:release of 2.7 and roadmap
Date:   Thu, 9 Jun 2011 12:15:15 -0700
To: spymemcac...@googlegroups.com spymemcac...@googlegroups.com



Hi all,

Spymemcached 2.7 has been released!

= Release =

The release notes are in the tag and in the download on the site.  The
big feature is new capabilities for Membase.  This means spymemcached is
now able to bootstrap and react to changes in Membase cluster topology. 
It also has support for some of the new commands proposed for memcached
(TOUCH/GAT) and some of the new commands that Membase has added (SYNC,
along with TOUCH/GAT).  See the release notes for more information.

I'll probably generate new HTML changelog soon as well, and put that on
the site as a link.

Some of you may be wondering, why a 2.7 so shortly after the release of 2.6?

Well, in the ideal case, 2.6 would have been just an update to 2.5, but
to get some of the changes right we had to extend some interfaces.
That was the cause of the version bump there.  To get to 2.7, we needed
to take on some new dependencies and make some additional interface
changes.  Thus the call for another version bump.

I expect we'll probably have a sustaining tail off of both for a little
while depending on what comes up, but core development will continue in 2.7.

What do we recommend?  Run the latest of course.  Spymemcached 2.7
should be a drop-in replacement for most people.  If you had implemented
your own NodeLocator or other changes, it'll be a bit harder to update,
but not dramatically so.

Downloads are, as always, on the Google code site.  Shortly after the
2.6 intro, a new Maven repository was set up as well by Couchbase.  This
also has a mirror of the downloads and new documentation.  Find all of
this at http://www.couchbase.org/products/sdk/membase-java

= Roadmap =

There are a few items still coming in 2.7.  Most of these are also
related to the development of Membase.  For instance, Mike Weiderhold
has written a TapClient class, which can create TAP connections to
either the proposed update to memcached or Membase.  He's also written
new SYNC commands.  Both of these are up for code review right now.

To facilitate experimentation with them and get more eyeballs on them,
we've posted a preview build to the Couchbase (formerly Membase) wiki:
http://www.couchbase.org/wiki/display/membase/prerelease+spymemcached+vbucket

This is new development and may change before it comes out in an update
to 2.7, but it's usable today.

Please send over any questions or issues.

Thanks,

Matt

-- 
Matt Ingenthron
Couchbase, Inc.




Re: Membase bucket creation.

2011-06-06 Thread Matt Ingenthron
Hi Aditya,

You should take this to memb...@googlegroups.com  or the membase
forums.  This list is about memcached.

Regards,

Matt

On 6/5/11 7:32 PM, Aditya Kumar wrote:

 How do I use Membase buckets?
 I have 4 tables that need to be created, so 4 buckets.

 Initially I had the default bucket set to all of the memory
 space (2.39GB).
 Then I modified the default bucket to 590MB,
 then created one more bucket of size 500MB and saved.

 =
 After that I continuously get:

 memcached.exe Application Error:
 The instruction at oxo34b8bea referenced memory at 0x474e55a3.
 The memory could not be read.
 Click OK to terminate the program.
 Click Cancel to debug the program.
 ==
 and
 Memcached.exe encountered a problem and needs to be closed.
 Send Error Report / Don't Send.
 ==

 If I delete that bucket, I don't see this error in Windows.

 Can you please guide me in creating Membase buckets?


 2) Also, what conditions need to be taken into consideration when
 creating a bucket (memcached vs. membase)?

 I want to know what things need to be considered.




Re: a c memcached client

2011-05-28 Thread Matt Ingenthron
Hi Tony,


On May 28, 2011, at 4:36 AM, tony Cui wrote:

   I wrote a C memcached client. The reason I wrote it is
 because spymemcached has some problems, e.g. connection reset by peer. The
 problems have driven me crazy, so an idea came up: what about writing a client?


I'm never one to fault someone for writing more stuff they release for others 
to use, but I do personally believe it's better to be part of helping fix 
the software commons.  I have to say, connection reset by peer sure sounds more 
like a network issue or the server shutting the connection down than a 
broken client.

Have you filed any issues against spymemcached?  Have you posted to the mailing 
list?

There are a great number of people who use spymemcached quite successfully; 
it's probably not necessary to tear it down just because you decided to 
write your own.  I can say with a bit of experience, dealing with all of the 
possible connection issues takes some effort.

Good luck with it,

Matt


spymemcached Java client 2.6 release

2011-05-13 Thread Matt Ingenthron
Hi,

I'm happy to announce the release of spymemcached 2.6.  For your
convenience, release notes are posted below.

Code is posted to the regular download site:
http://code.google.com/p/spymemcached/downloads/list

And I've also set up a maven 2 repository:
http://files.couchbase.com/maven2/

I may make some changes to the maven repo yet, so I'd appreciate any
feedback.  We do plan to consolidate other builds there too.

From the 2.6 tag/release notes:

Release of 2.6

Changes since the 2.5 series:

The main change in 2.6 is logic with respect to timeout handling.
Timeout default is now 2500ms, rather than the old default of
1000ms.  The reason for this change is that a single network
retransmit could cause a longer than 1000ms wait, causing an
exception in the client application.

Additionally, some changes were integrated that help the client to
recover faster when a network connection problem does occur.  These
changes required interface changes, which is why the version has
been bumped to 2.6.

This change also allows client code to be able to tell if an
operation has timed out after being sent over the network or if
it has timed out before even being written to the network.

See issue 136 for more discussions on this topic.

Another feature relating to timeouts is that the continuous
timeout feature introduced in 2.5 has been moved to the connection
level, rather than for an entire client.  The default is that after
1000 timeout operations, a connection will be dropped and
reestablished if possible.  This addresses situations where a server
may fail, but the connection is not reset.

There are also performance improvements and new transcoder abilities
with asyncGetBulk now included.

One other significant bug fixed in this release was a problem with
finding an active node when there is a failed node and the
KetamaNodeLocator is in use.  With only two nodes, the previous
implementation had a 25% chance of not finding a working node
and failing with a timeout.  This has been changed such that now
there is a less than 1% chance it will not find a working node.

It should be noted that spy's algorithm here may be different
than libketama.  This is tracked under issue 117.  This is not a
new change however, and it's never been reported from running in
production.
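
The node-search behavior described above can be illustrated with a minimal ketama-style ring. This is a sketch for explanation only; it is not hash-compatible with spymemcached's KetamaNodeLocator or with libketama, and the 160 virtual points per node is just a conventional choice:

```python
import hashlib
from bisect import bisect

class KetamaRing:
    """Toy consistent-hash continuum: MD5-derived points on a ring,
    walked clockwise so a dead node's keys fall to the next node."""
    def __init__(self, nodes, vnodes=160):
        self._ring = sorted(
            (int(hashlib.md5(("%s-%d" % (n, i)).encode()).hexdigest()[:8], 16), n)
            for n in nodes for i in range(vnodes)
        )
        self._points = [p for p, _ in self._ring]

    def locate(self, key, skip=()):
        """Find the node for key, skipping any nodes known to be down."""
        h = int(hashlib.md5(key.encode()).hexdigest()[:8], 16)
        start = bisect(self._points, h) % len(self._ring)
        for off in range(len(self._ring)):
            node = self._ring[(start + off) % len(self._ring)][1]
            if node not in skip:
                return node
        return None  # every node was skipped
```

Searching the whole continuum (rather than probing a fixed number of points) is what guarantees a working node is found whenever one exists, which is the essence of the two-node fix described above.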

Thanks much to all of the new contributors to this release!  Thanks
in particular to Taso of Concur who spent much time writing tests
to isolate issue 175.

Andrey Kartashov (1):
  Move continuous timeout counter to individual connections.

Blair Zajac (9):
  Use a private static final byte array for \r\n instead of always
converting the string into a byte array.
  No need to call MessageDigest#reset() on a brand new
MessageDigest.
  Use a faster method to get a MD5 MessageDigest instance.
  Delete a duplicate unit test.
  Fix compilation with JDK 1.5.
  Add an iterator that returns a single element forever.
  Allow per-key transcoders to be used with asyncGetBulk().
  Minor performance improvement for bulk gets.
  Tiny performance improvement.

Boris Partensky (3):
  return partial data from timed out getBulk
  plug potential file descriptor leak
  support timeout based disconnects for bulk ops

Dustin Sallings (5):
  Some minor fixes to make eclipse happy with the code again.
  Some import cleanups.
  Avoid potential NPE as reported by eclipse.
  Compilation fix after spring de-generification.
  Removed a bit of dead test code.

Eran Harel (1):
  Spring FactoryBean support.

Luke Lappin (1):
  Do not use generics with Spring Factory Bean, be 2.5 compatible.

Matt Ingenthron (19):
  Changed ports in tests for non-listening to something higher.
  Do not write timedout operations to the MemcachedNode.
  Increased default timeout to 2500ms.
  Test for timeout from operation epoch.
  Test fixes after adding new timeout logic.
  Fix for stats sizes test.
  Recognize operation may be null at times.  e.g.: flush
  Add a TIMEDOUT state to ops and make callbacks correct.
  Fixes to testSyncGetTimeouts.
  Catch RuntimeException instead.
  Changed transcoder logging to more appropriate defaults.
  Warn when redistribute cannot find another node.
  Fixed small log typo.
  Added ability to see if op unsent but timedout.
  Fixed minor comment formatting.
  Fixed cancellation issue.
  Separate the KetamaIterator for future dynamic configuration.
  Search more with the KetamaIterator.
  Increase the maximum size allowed.  Issue 106.

ddlatham (1):
  Support

Re: spymemcached Java client 2.6 release

2011-05-13 Thread Matt Ingenthron
On 5/13/11 4:10 AM, Alexey Zlobin wrote:
 It looks like the module name in the POM was changed from memcached to
 spymemcached. Maybe the wiki page about the maven repository should be
 updated?

Yes, it should.  Will do.

The changed module name seems more correct to me.

Does anyone have any opinions on that?

Thanks,

Matt


Re: client behavior with large sizes

2011-04-28 Thread Matt Ingenthron
On 4/28/11 10:16 PM, Matt Sellars wrote:

 How does one configure this value?

There is a -I parameter that was added around 1.4.3 I think. 

The param is not to be used lightly, as you may have to do some slab
tuning.  See the release notes for details.

Matt


 On Apr 28, 10:50 pm, Dustin dsalli...@gmail.com wrote:
 On Apr 28, 6:29 pm, Matt Sellars ibbum...@gmail.com wrote:

 Is the maximum check necessary?
 Since the size limitation ultimately resides in the cache
 implementation storage(Membase, Memcached, specially compiled
 Memcached, etc) this should at least be configurable with a system
 property if it has to exist.  I'm not sure the max enforcement is
 needed though.  If you are considering raising it for Membase support
 you're not properly enforcing it for Memcached so why bother at all?
   It's configurable in memcached, and in the client.  This is really
 about raising the default for the client and figuring out what to
 advise for all client authors.

   While it's not strictly necessary to check the size on the client,
 it's certainly a lot nicer to know that an item is too big before we
 even consider putting it on the wire than to send a large object and
 then have the server refuse it.



client behavior with large sizes

2011-04-25 Thread Matt Ingenthron
Hi,

I'd recently been trying to fix an issue with spymemcached when the
server side allows larger than 1MiB.  I'm fairly certain that other
clients may have similar issues, so I wanted to kind of collect how
clients should behave in this area.

With the change to allow max item size to be a parameter, clients SHOULD
NOT assume a maximum size of 1MiB.  Clients SHOULD allow the maximum
size they enforce to be configurable. 

That said, clients MAY want to have some default maximum size.  Perhaps
this size is the existing 1MiB, but it could be something like 20MiB. 
It should always be possible to override it with configuration.

Optionally, a client COULD (generally one that keeps long-lived
connections) get item_size_max from stats settings to set its
maximum item size.  Even then, it SHOULD be possible to override
this via client configuration.

Obviously, the server should always enforce whatever limit it's
configured for by either returning an error or dropping an offending
connection.
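
As a concrete illustration of the "read it from stats settings" option, here is a small parser for the server's response; the `send_command` transport in the comment is hypothetical, but the `STAT <name> <value>` line format and the `item_size_max` field are what memcached actually returns:

```python
def parse_stats_settings(text):
    """Parse a 'stats settings' response into a dict (values kept as strings)."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            out[parts[1]] = parts[2]
    return out

# A client with a long-lived connection might do (hypothetical transport):
#   settings = parse_stats_settings(send_command("stats settings"))
#   max_item = int(settings.get("item_size_max", 1024 * 1024))
```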

Any thoughts?

Matt


spymemcached 2.6 release candidate

2011-04-25 Thread Matt Ingenthron
Hi all,

I'm just crossposting that there is a 2.6rc2 of spymemcached posted to
its site.  2.6rc1 had been around for a bit and was well received, but
one critical issue was found before its release, so I've spun an rc2.

I'd appreciate any feedback you can offer before the final.

Thanks,

Matt
---BeginMessage---
Hi all,

I've just posted a 2.6rc2 for download:
http://code.google.com/p/spymemcached/downloads/list

Between here and final there are a few minor things I expect to
change/fix (formatting, item sizes, maven artifacts), but it's otherwise
pretty well ready to go.  I would appreciate it if anyone can offer
feedback.

Thanks,

Matt

On 4/19/11 12:58 AM, Matt Ingenthron wrote:
 After the review, I'll kick out a short rc2, followed by a final.  The
 plan is also to get this in a public maven repo and also do some
 whitespace reformatting (sorry tab formatting fans) to make this a bit
 more approachable.

---End Message---


small hackathon, March 17

2011-03-17 Thread Matt Ingenthron
Hi all,

Since a few of the core folks are in the same place at the same time,
we're going to hold a small hackathon... this time really hacking... to
work on some merging of a couple of branches together to push to the
next release.

If you are interested in participating and already have some experience
here, we'd love to have you.  If you're just looking to learn how to use
memcached or work on a client, this is probably not the place since we
aim to be pretty focused on the server innards.

This is in the bay area.  Specifically:
Couchbase, Inc. (may still say Membase on the signs!)
200 W. Evelyn Ave. Suite 110
Mountain View, CA 94041

Please let me know if you'd like to come in so we can be ready to
accommodate you.

I'm sure you're all interested in what will come out of it, so I'll aim
to send out some notes afterwards!

Matt

p.s.: sorry for the last minute notice


Re: small hackathon, March 17

2011-03-17 Thread Matt Ingenthron
On 3/16/11 11:56 PM, dormando wrote:
 what time does it start? ;)

Tentatively, 3pm-ish.  That's admittedly an early start, but I suspect
it won't really get going until about 6pm. 

Ideally, it ends when we've gotten something done. :)

 On Wed, 16 Mar 2011, Matt Ingenthron wrote:

 Hi all,

 Since a few of the core folks are in the same place at the same time,
 we're going to hold a small hackathon... this time really hacking... to
 work on some merging of a couple of branches together to push to the
 next release.

 If you are interested in participating and already have some experience
 here, we'd love to have you.  If you're just looking to learn how to use
 memcached or work on a client, this is probably not the place since we
 aim to be pretty focused on the server innards.

 This is in the bay area.  Specifically:
 Couchbase, Inc. (may still say Membase on the signs!)
 200 W. Evelyn Ave. Suite 110
 Mountain View, CA 94041

 Please let me know if you'd like to come in so we can be ready to
 accommodate you.

 I'm sure you're all interested in what will come out of it, so I'll aim
 to send out some notes afterwards!

 Matt

 p.s.: sorry for the last minute notice




Re: Open Source Projects using Memcache?

2011-03-08 Thread Matt Ingenthron
On 3/7/11 4:29 PM, Fuzzpault wrote:
 Anyone know of any open source projects (preferably in php) which use
 memcache at its core, or that is greatly accelerated with it?  

MediaWiki has built-in support for memcached (likely added by
occasional contributor Domas).  It's also very important in Wikipedia's
deployment of MediaWiki.

Hope that helps,

Matt


Re: Problems when one of the memcached server down

2011-03-07 Thread Matt Ingenthron
Hi Lior,


On 3/3/11 1:53 PM, Evil Boy 4 Life wrote:
 Hi,
 I use the .NET clients and 2 servers on 2 different machines.
 When one of the memcached servers is down, I try to set an item to the
 cache (according to the client's hashing algorithm, this item should be
 stored on the inactive server!) and the item gets stored at the active
 server.
 After I set the item, the second server comes back up. Now if I
 try to get this item I won't succeed, because the client will search for
 the item on the second server (according to the hashing algorithm).

Are you referring to the Enyim client?  I believe there's a separate
mailing list for that client, but I am not sure.
 What can I do to solve this problem?
 (Setting the item again on the second server isn't a solution, because
 I don't want to store any item on 2 servers.)

It sounds like something more fundamental is wrong, or I'm missing the
question.

Keep in mind, having two servers isn't a situation where they are
active/inactive, it's a collection of two servers.  Each adds to the
pool of cache available.  It's normal for a given key to hash to one
server or another, depending on the key.

Are you saying that in a normal, no failures kind of situation, you
cannot get and then set the same key?

Matt


Re: Load balancing in memcached

2011-03-07 Thread Matt Ingenthron
On 3/4/11 3:55 AM, Priya wrote:
 Does memcached load balance, i.e., what if most of the keys hash to a
 particular server, does memcached take some actions to load balance
 it?

Generally speaking, since responses come from RAM and memcached can
pretty easily saturate network interfaces with very low CPU overhead,
it's quite rare that the imbalance of a pool/cluster of memcached
servers becomes an issue.

If you do have this kind of situation, it's best resolved by getting the
cache closer to the user.  By that, I mean possibly give up a bit of
consistency and cache for 1 sec or less in the application and/or get
the caching all the way down to the end user (i.e., browser).
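
The "cache for 1 sec or less in the application" idea above can be sketched as a tiny in-process cache in front of the hot key; the class name and 1-second default are illustrative, and the `loader` callback stands in for the real memcached fetch:

```python
import time

class ShortTTLCache:
    """Tiny in-process cache: trade up to `ttl` seconds of staleness
    for far fewer hits on a hot memcached key."""
    def __init__(self, ttl=1.0):
        self.ttl = ttl
        self._data = {}          # key -> (expires_at, value)

    def get(self, key, loader):
        now = time.monotonic()
        hit = self._data.get(key)
        if hit and hit[0] > now:
            return hit[1]        # still fresh; skip the network round trip
        value = loader(key)      # e.g. a real memcached fetch
        self._data[key] = (now + self.ttl, value)
        return value
```

With a 1-second TTL, a key read thousands of times per second puts roughly one request per second per app server on the hot memcached node instead of the full load.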

Hope that helps,

Matt


Re: Replication of key-value pairs

2011-03-07 Thread Matt Ingenthron
On 3/4/11 3:54 AM, Priya wrote:
 Are the key-value pairs replicated on different nodes or does the pair
 remain with only on one RAM?


With memcached, key/value pairs remain only in one node in RAM.  Have a
look at the wiki and the list archives for more info.

Good luck!

Matt


Re: Memcached-1.4.5 Solaris 10 Error

2011-03-03 Thread Matt Ingenthron
Hi Felipe,

Do note that the memcached build on Solaris works best when using Sun's
cc, not gcc.  Last I checked, it was still freely available (but
required registration).

Which version of libevent did you use?  Solaris 10 has libevent built in
over in the SFW repository, and that one has been well tested.  It
should be compatible with 1.4.5.  I'd really recommend using it if at
all possible. 

Also, note that just to get you going, you can set different envvars to
have libevent use different event mechanisms with the OS.  I believe
both event ports and /dev/poll are supported with Solaris.

Hope that helps,

Matt

On 3/3/11 9:33 AM, Felipe Cerqueira wrote:
 Hi All,

 After some problems, I have succeeded in compiling memcached on Solaris
 SPARC.

 libmemcached was very hard to get working too. I have a version here
 with some patches to detect and implement unsupported features on
 Solaris, like getopt_long, etc.

 Now it compiles successfully and works well on Solaris SPARC 9.

 On Solaris SPARC 10, I'm getting the following error:

 -bash-3.00$ uname -a
 SunOS server 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V490
 -bash-3.00$ ./memcached
 [warn] ioctl: DP_POLL: Invalid argument
 [warn] ioctl: DP_POLL: Invalid argument
 [warn] ioctl: DP_POLL: Invalid argument
 [warn] ioctl: DP_POLL: Invalid argument
 [warn] ioctl: DP_POLL: Invalid argument
 -bash-3.00$

 -bash-3.00$ ulimit -a
 core file size(blocks, -c) unlimited
 data seg size (kbytes, -d) unlimited
 file size (blocks, -f) unlimited
 open files(-n) 256
 pipe size  (512 bytes, -p) 10
 stack size(kbytes, -s) 8192
 cpu time (seconds, -t) unlimited
 max user processes(-u) 29995
 virtual memory(kbytes, -v) unlimited
 -bash-3.00$

 -bash-3.00$ truss ./memcached
 ...
 /1:   setsockopt(39, SOL_SOCKET, SO_SNDBUF, 0xFFBFF4FC, 4, SOV_DEFAULT)
 = 0
 /1:   setsockopt(39, SOL_SOCKET, SO_SNDBUF, 0xFFBFF4FC, 4, SOV_DEFAULT)
 Err#132 ENOBUFS
 /1:   bind(39, 0x0003C0A0, 16, SOV_SOCKBSD)   = 0
 /1:   write(9, \0, 1)   = 1
 /1:   write(16, \0, 1)  = 1
 /1:   write(23, \0, 1)  = 1
 /1:   write(30, \0, 1)  = 1
 /1:   pwrite(3, \0\0\004\001\0\0\0\0\0 $.., 24, 0)  = 24
 /1:   ioctl(3, DP_POLL, 0xFFBFF490)   Err#22 EINVAL
 /1:   mmap(0x0001, 65536, PROT_READ|PROT_WRITE|PROT_EXEC,
 MAP_PRIVATE|MAP_ANON|MAP_ALIGN, -1, 0) = 0xFEFA
 [/1:  write(2,  [, 1)   = 1
 warn/1:   write(2,  w a r n, 4) = 4
 ] /1: write(2,  ]  , 2) = 2
 ioctl: DP_POLL: Invalid argument/1:   write(2,  i o c t l :   D P _ P
 O.., 32) = 32

 /1:   write(2, \n, 1)   = 1
 /1:   lwp_unpark(6)   = 0
 /6:   lwp_park(0x, 0) = 0
 /6:   lwp_sigmask(SIG_SETMASK, 0xFFBFFEFF, 0xFFF7) = 0xFFBFFEFF
 [0x]
 /6:   lwp_exit()
 /1:   lwp_wait(6, 0xFFBFF584) = 0
 _exit(0)


 I have look around for some problems on libevent on solaris 10 but
 cant find any solution.

 Thanks in advance



Re: Replication ?

2011-03-03 Thread Matt Ingenthron
Hi Nathan,

On 3/3/11 1:42 PM, Nathan Nobbe wrote:
 Hi all,

 I know I'll get blasted for not googling enough, but I have a quick
 question.

Here's a dime.  Get yourself a web browser and bring me back $0.10
change.  :)   (said jokingly...)

 I was under the impression memcached servers replicated data, such
 that if i have 2 servers and one machine goes down the data would all
 still be available on the other machine.  this with the understanding
 that some data may not yet have been replicated as replication isn't
 instantaneous.


There are a few memcached related things that do replication, but the
core memcached server itself does not replicate.

One is something related (forked?) from memcached called repcache.  It
does it on the server side as clustered pairs.

One is something called Membase, which uses the memcached core (kind of
a fork, aiming to merge back) and adds special hashing called vbucket
hashing.  This can be transparent to the client, though.

One is that libmemcached does replication from the client, but has
admittedly lots of interesting potential consistency issues depending on
what fails when and how much it actually fails.

I'll let you go further with that browser you just downloaded.  :)

Matt

p.s.: full disclosure: I'm pretty heavily involved in Membase


Re: Memcached Benchmarks

2011-01-21 Thread Matt Ingenthron
On 1/21/11 6:19 AM, vickycc wrote:
 Besides brutis and memslap, do you know of any other well-known
 benchmarks for memcached?

I don't know if it's well known, but:
https://github.com/ingenthr/memcachetest

(let me know if you want a dist tarball, that'd be easier to build)

There's also:
https://github.com/dustin/mc-hammer

Also, I'd guide you to think about what you want the benchmark to model,
rather than just look for tools.  What is it your app would do, and then
pick a tool like that.

Hope that helps,

Matt



Re: Require Windows build of Memcache

2011-01-04 Thread Matt Ingenthron
Hi Vitold,

On 12/31/10 3:54 PM, Vitold S wrote:
 Hello,

 I am doing web development on the Windows platform and I want to use
 memcached, but I can't use the latest version with all the interesting
 functions... Please make a Windows build or port available at
 memcached.org...

memcached.org provides only source. 

Trond Norbye posted instructions on how to build on windows on his blog:
http://trondn.blogspot.com/2010/03/building-memcached-windows.html

The trunk code has changed a little, but that should get you most of the
way there.

Matt


Re: Install Memcached

2011-01-04 Thread Matt Ingenthron
On 1/4/11 6:06 AM, Gustavo Paixão wrote:
 HI,

 I'm new to memcached. I'd like to know how to install it on my server,
 RHEL 5 with cPanel. I'm trying to install via cPanel; I found this module:

 memcached 1.0.2 stable PHP extension for interfacing with
 memcached via libmemcached library

 but when I install I get this message below. Do you know what is wrong
 and what I need to do?

From the log below, it looks as if you're missing a prerequisite:
libmemcached.  You'll need to install that before this pecl extension
will work. 

You may want to read any docs for that extension though, as in my
experience, the version of libmemcached it relies upon may be somewhat
specific.


 downloading memcached-1.0.2.tgz ...
 Starting to download memcached-1.0.2.tgz (22,724 bytes)
 done: 22,724 bytes
 4 source files, building
 running: phpize
 Configuring for:
 PHP Api Version: 20090626

 Zend Module Api No:  20090626
 Zend Extension Api No:   220090626
 building in /root/tmp/pear-build-root/memcached-1.0.2
 running: /root/tmp/pear/memcached/configure
 checking for egrep... grep -E
 checking for a sed that does not truncate output... /bin/sed

 checking for cc... cc
 checking for C compiler default output file name... a.out
 checking whether the C compiler works... yes
 checking whether we are cross compiling... no
 checking for suffix of executables...

 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether cc accepts -g... yes
 checking for cc option to accept ANSI C... none needed
 checking how to run the C preprocessor... cc -E

 checking for icc... no
 checking for suncc... no
 checking whether cc understands -c and -o together... yes
 checking for system library directory... lib
 checking if compiler supports -R... no
 checking if compiler supports -Wl,-rpath,... yes

 checking build system type... i686-pc-linux-gnu
 checking host system type... i686-pc-linux-gnu
 checking target system type... i686-pc-linux-gnu
 checking for PHP prefix... /usr/local
 checking for PHP includes... -I/usr/local/include/php 
 -I/usr/local/include/php/main -I/usr/local/include/php/TSRM 
 -I/usr/local/include/php/Zend -I/usr/local/include/php/ext 
 -I/usr/local/include/php/ext/date/lib

 checking for PHP extension directory... 
 /usr/local/lib/php/extensions/no-debug-non-zts-20090626
 checking for PHP installed headers prefix... /usr/local/include/php
 checking if debug is enabled... no
 checking if zts is enabled... no

 checking for re2c... re2c
 checking for re2c version... invalid
 configure: WARNING: You will need re2c 0.13.4 or later if you want to 
 regenerate PHP parsers.
 checking for gawk... gawk
 checking whether to enable memcached support... yes, shared

 checking for libmemcached... yes, shared
 checking whether to enable memcached session handler support... yes
 checking whether to enable memcached igbinary serializer support... no
 checking for ZLIB... yes, shared

 checking for zlib location... /usr
 checking for session includes... /usr/local/include/php
 checking for memcached session support... enabled
 checking for memcached igbinary support... disabled
 checking for libmemcached location... configure: error: memcached support 
 requires libmemcached. Use --with-libmemcached-dir=
 to specify the prefix where libmemcached headers and library are
 located ERROR: `/root/tmp/pear/memcached/configure' failed The
 memcached.so object is not in
 /usr/local/lib/php/extensions/no-debug-non-zts-20090626






Re: Memcache doesn't work when used in Magento

2011-01-04 Thread Matt Ingenthron
On 1/4/11 6:13 AM, Shabam wrote:
 I have installed Magento in a web cluster (with two web servers), and
 Magento works fine on each server independently.

 In order to improve performance and share sessions between multiple
 web servers, I configured Magento to save sessions to the memcache
 server. But now I find I can't log into the Magento admin panel (after
 entering the username and password and clicking OK, it returns to the
 admin login page with no error message) and can't add items to the
 cart (it tells us the cart is empty).

 Can someone help me diagnose or tell me how to go about
 troubleshooting this problem?  I've posted it on the Magento forums
 but no one is responding.

I'm sorry, I don't know anything about Magento session handling with
memcached, but I may have a general idea for you.

If you can, run memcached from the command line in another terminal with
-vvv.  Then try your login.  This should show you what, if anything, is
done with memcached when you try to log in. 

If I had to guess though, I think the problem you're running into is
likely somewhere else... possibly getting the connection going.

Regards,

Matt
 [r...@localhost etc]# cat local.xml
 <?xml version="1.0"?>
 <config>
 <global>
 <install>
 <date><![CDATA[Fri, 17 Dec 2010 09:34:20 +]]></date>
 </install>
 <crypt>
 <key><![CDATA[helloisme]]></key>
 </crypt>
 <disable_local_modules>false</disable_local_modules>
 <session_save><![CDATA[memcache]]></session_save> <!-- db / memcache / empty=files -->
 <session_save_path><![CDATA[tcp://192.168.1.68:11211?persistent=1&weight=2&timeout=10&retry_interval=10]]></session_save_path>
 <session_cache_limiter><![CDATA[must-revalidate,public]]></session_cache_limiter>
 <cache>
 <backend>memcached</backend><!-- apc / memcached / empty=file -->
 <memcached><!-- memcached cache backend related config -->
 <servers><!-- any number of server nodes can be included -->
 <server>
 <host><![CDATA[192.168.1.68]]></host>
 <port><![CDATA[11211]]></port>
 <persistent><![CDATA[1]]></persistent>
 </server>
 </servers>
 <compression><![CDATA[0]]></compression>
 <cache_dir><![CDATA[]]></cache_dir>
 <hashed_directory_level><![CDATA[]]></hashed_directory_level>
 <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
 <file_name_prefix><![CDATA[]]></file_name_prefix>
 </memcached>
 </cache>
 <resources>
 <db>
 <table_prefix><![CDATA[]]></table_prefix>
 </db>
 <default_setup>
 <connection>
 <host><![CDATA[192.168.1.239]]></host>
 <username><![CDATA[magento]]></username>
 <password><![CDATA[asdf000!]]></password>
 <dbname><![CDATA[magento]]></dbname>
 <active>1</active>
 </connection>
 </default_setup>
 </resources>
 <session_save><![CDATA[files]]></session_save>
 </global>
 <admin>
 <routers>
 <adminhtml>
 <args>
 <frontName><![CDATA[admin]]></frontName>
 </args>
 </adminhtml>
 </routers>
 </admin>
 </config>



Re: Require Windows build of Memcache

2011-01-04 Thread Matt Ingenthron
Hi Marc,

On 1/4/11 1:46 PM, Marc Bollinger wrote:
 Wasn't NorthScale working on an informal Windows build awhile ago? Did
 that just fizzle due to other priorities (e.g. releasing membase)?

NorthScale (now known as Membase, Inc.) did have a straight compilation
of memcached for Windows available as a binary.  Since Membase now has
support for a memcached bucket type and there were missing pieces that
weren't going to be added, Membase, Inc. isn't providing binaries any
longer.

Membase's memcached bucket type is generally more along the lines of
what people were asking for, since it runs as a Windows service
properly, gives you a way to change the amount of memory an instance is
using, etc. 

With the compiled binaries we distributed, we kept getting emails asking
for that kind of functionality but only had plans to add it in with the
memcached bucket type.  The way it was being done with the Windows
service wasn't really right.

There are some other differences too, which is why I didn't think I
should just point to it on this list (though it is available for free). 
Since you brought it up though...

By the way, along with Trond, Dustin, etc., I'm one of the Membase guys.

Regards,

Matt


 On Tue, Jan 4, 2011 at 10:43 AM, Matt Ingenthron ingen...@cep.net wrote:
 Hi Vitold,

 On 12/31/10 3:54 PM, Vitold S wrote:
 Hello,

 I am doing web development on the Windows platform and I want to use
 Memcache, but I can't use the latest version with all the interesting
 functions... Please make a build available at memcache.org or port it
 to Windows... memcached.org provides only source.

 Trond Norbye posted instructions on how to build on windows on his blog:
 http://trondn.blogspot.com/2010/03/building-memcached-windows.html

 The trunk code has changed a little, but that should get you most of the
 way there.

 Matt




Re: Install Memcached

2011-01-04 Thread Matt Ingenthron
On 1/4/11 10:07 PM, Gustavo Paixão wrote:
 Hi Matt,

 I was looking for libmemcached in the "Install a Perl Module" page at
 WHM and I found this *Memcached::libmemcached*. I tried to install it
 but I got some errors. The error message is attached. Do you know if I
 did something wrong or if I need to install something first?

Just a guess from looking at the log, but I don't know for certain, is
that there might be something a bit too old with your toolchain on
CentOS 5, depending on the update version.  Perhaps an older one would
build cleanly?

That's just a guess though... it'd require more digging to see why.  I
doubt libmemcached-0.44 is broken in general.

Perhaps try an older one and/or try just building the lib directly
rather than getting it through a perl module.

Hope that helps some,

Matt


Re: can't set entry in memcached database

2010-12-07 Thread Matt Ingenthron
Hi Jason,

On 12/7/10 2:05 AM, jason wrote:
 We are running memcached version 1.2.8 on Solaris 10 118833-36. We are
 running with the following options:

 memcached -c 256 -u nobody -d -m 256 -l 127.0.0.1 -p 11211

 We are seeing the following errors:

 RlsMemGet: cannot get object, error: SYSTEM ERROR
 RlsMemSet: can't set entry in memcached database, error: SYSTEM ERROR

 Anyone seen these errors before and know what causes them?

I just went back and looked at the code for 1.2.8, and unless it's being
looked up somewhere else, there is nothing in memcached that would say
"SYSTEM ERROR".  It looks as if these errors are coming from the app
using memcached, not memcached itself, so you may want to start your
investigation there.

Also, you may want to do some stats sampling to see if one of the stats
there will tell you something.

Hope that helps,

Matt


Re: Memcache::set() [memcache.set]: Server 127.0.0.1 (tcp 11211) failed with: SERVER_ERROR unauthorized, null bucket (0)

2010-12-02 Thread Matt Ingenthron
That appears to be using the bucket engine.  Are you sure you're using
memcached, and not membase?

membase has a different mailing list: memb...@googlegroups.com

Hope that helps,

Matt

On 12/2/10 11:52 AM, sush wrote:
 Hi everyone,
 I am using a WAMP server and trying to run memcache, but it didn't
 work.
 The steps I followed: I put the memcached.dll file in Apache as well
 as in PHP.
 The same process worked on Windows Vista, but on Windows 7 it didn't
 work.
 I don't understand whether that is because of the operating system
 or something else.

 The error i came up with is:
 Memcache::set() [memcache.set]: Server 127.0.0.1 (tcp 11211) failed
 with: SERVER_ERROR unauthorized, null bucket (0) 

 Please help me if i am wrong with the installation steps.

 Thank You

 Sush



Re: Memcached on dual quad-core server with 32 gig ram

2010-10-29 Thread Matt Ingenthron
On 10/29/10 1:36 AM, Obeyon wrote:
 Dear memcached experts,

 I was wondering what the best configuration of memcached is on a
 server with a high amount of ram (in this case 20 gb for memcached)
 and multiple cores (in this case 8).

 Would it be better to use multiple smaller memcached instances or a
 single large instance, for example in multi-thread mode (flag -t)?

You'll get the best efficiency by default by using a single large
instance of memcached with multiple threads.  Even the default number
of threads is probably okay, since many of the calls are async to the
OS, which will then have its own threads doing I/O on behalf of the
memcached threads.

I'd start there unless you have a reason to change anything else.

- Matt


Re: next planned release of memcached

2010-10-19 Thread Matt Ingenthron
 Hi Pavel,

On 10/19/10 12:00 AM, Pavel Kushnirchuk wrote:
 Folks,

 Does anybody know when the next planned release of memcached will be?
 I am especially interested in a release of the Win32 branch.


As dormando said, we're aiming to put some effort into this soon.  I'm
curious though, what about Win32 are you looking for updates for?  Just
want to be sure it's something we're thinking about already.

- Matt


Re: Enhancement of memcached

2010-10-18 Thread Matt Ingenthron
 On 10/18/10 6:35 AM, Pavel Kushnirchuk wrote:
 Sorry, maybe my question is a bit stupid.
 I need to extend memcached with very simple additional functionality
 (allow memcached to listen on multiple IP addresses).
 Could I make all the necessary changes in the Win32 branch? Will my
 changes go into any of the next releases?

By default, memcached will listen on INADDR_ANY, which is all IPs on the
system.

Just looking through the code, it appears that it already listens to
multiple IPs if they're specified.  I've not tested this anywhere
though, including Windows.

As far as accepting a patch is concerned, I think the answer is sure. 
If it's a good patch and tests well (please include tests if possible),
there'd be no reason to not incorporate it.

- Matt



Re: compiling memcache without threads on multi-core system

2010-09-24 Thread Matt Ingenthron

 On 9/24/10 12:10 AM, Paul Lindner wrote:
A good rule of thumb is one thread per core, but you should run your 
own benchmarks on your hardware.


+1

This is a good general rule of thumb overall, though the old general 
rule is to go up to 4x the number of CPUs if tuning for throughput, and 
stay at or slightly below the core count if tuning for latency.  You'll 
get both pretty well out of the box with one thread per core.


However I can pretty much guarantee you'll run into other bottlenecks 
before you have to worry about your CPU usage.


Likely network.




On Thu, Sep 23, 2010 at 11:59 PM, manoher tadakokkula 
manohe...@gmail.com wrote:


one more question, what is the recommendation for number of worker
threads for single-core and dual-core systems ?

-manoher


On Wed, Sep 22, 2010 at 8:37 PM, Trond Norbye
trond.nor...@gmail.com wrote:

The last thread is the thread running the clock and accepting
new connections.

Cheers

Trond

Sent from my iPhone

On 22. sep. 2010, at 16:50, Paul Lindner lind...@inuus.com wrote:


memcache only compiles in a threaded mode these days.  The
docs are out of date.

The 5th thread you see is probably a supervisor, the other 4
are worker threads.

On Wed, Sep 22, 2010 at 6:30 AM, manoher tadakokkula
manohe...@gmail.com wrote:

Hi,

I am trying to compile the source to install memcached. I have a
dual-core system.
When I did ./configure && make && make install, I think it installed
the threaded version.

Running the memcached daemon, pstree displays 5*memcached, hence I
think it's running 5 threads.

My questions are:
The docs file threads.txt says that by default it is compiled as a
single-threaded application; how come I got the threaded version?
How can I compile the source without threads?
In the threaded version, why am I seeing 5 threads? The docs say the
-t default value is 4; what am I missing?

thanks in advance,
Manoher T





-- 
Paul Lindner -- lind...@inuus.com

-- linkedin.com/in/plindner






--
Paul Lindner -- lind...@inuus.com -- linkedin.com/in/plindner




Re: 1TB memcached

2010-09-22 Thread Matt Ingenthron

 On 9/22/10 6:12 AM, ligerdave wrote:

MongoDB is actually a cached DB, meaning that most of its records are
in memory.

I think there is also a memcached-and-DB hybrid which comes with a
persistence option. I think it's called memcachedb, which runs an
in-memory DB (like MongoDB). It shares most of the common API with
memcached, so you don't have to change code very much.


membase is compatible with memcached protocol, has a 20MByte default 
object size limit, lets you define memory and disk usage across nodes in 
different buckets.


memcacheDB is challenging to deploy for a few reasons, one of which is 
that the topology is fixed at deployment time.


- Matt

p.s.: full disclosure: I'm one of the membase guys


Re: 1TB memcached

2010-09-22 Thread Matt Ingenthron

 On 9/22/10 10:23 AM, Les Mikesell wrote:

On 9/22/2010 11:59 AM, Matt Ingenthron wrote:

On 9/22/10 6:12 AM, ligerdave wrote:

MongoDB is actually cached db, meaning that, most of its records are
in memory.

I think there is also a memcached and DB hybrid which comes w/ a
persistent option. i think it's called memcachedDB, which runs a in-
memory db(like mongodb). this shares most of common api w/ memcached
so you dont have to change code very much


membase is compatible with memcached protocol, has a 20MByte default
object size limit, lets you define memory and disk usage across nodes in
different buckets.

memcacheDB is challenging to deploy for a few reasons, one of which is
that the topology is fixed at deployment time.


Does anyone know how these would compare to 'riak', a distributed 
database that can do redundancy with some fault tolerance and knows 
how to rebalance the storage across nodes when they are added or 
removed? (Other than the different client interface...).


This is a very detailed question, but...

Without going too much into advocacy (I'd defer you to the membase 
list/site), membase does have redundancy, fault tolerance and can 
rebalance when nodes are added and removed.  The interface to membase is 
memcached protocol.  It does so by making sure there is an authoritative 
place for any given piece of data at any given point in time.  That 
doesn't mean data's not replicated or persisted, just that there are 
rules about the state changes for a given piece of data based on vbucket 
hashing and a shared configuration.


This was actually inspired by similar concepts that were in memcached's 
codebase up through the early 1.2.x releases, but are not in use anywhere 
that I'm familiar with.


riak is designed more around eventual consistency and tuning of 
W+R>N, meaning that it is designed to always take writes and deal 
with consistency for reads by doing multiple reads.  This is different 
from memcached in that memcached expects one and only one location for a 
given piece of data with a given topology.  If the topology changes 
(node failures, additions), things like consistent hashing dictate a new 
place, but there aren't multiple places to write to.


Any time you accept concurrent writes in more than one place, you have 
to deal with conflict resolution.  In some cases this means dealing with 
it at the application level.


I don't know it well, but it's my understanding that MemcacheDB is 
really just memcached with disk (BDB, IIRC) in place of memory on the 
back end.  This has been done a few different times and in a few 
different ways.  Topology changes are the killers here.  Consistent 
hashing can't really help you deal with changes in this kind of deployment.


- Matt



Re: Quick question on the Timezone

2010-09-22 Thread Matt Ingenthron

 On 9/22/10 1:41 PM, Ravi Chittari wrote:

We have memcached instances up and running in production. We want to
change our systems from one timezone ( Say EST) to UTC.
Does memcached instance automatically pick up new Timezone and what
happens to the cached items?


IIRC, times are tracked as relative to the local clock, so if you change 
the system clock things will either expire late or early.  It's probably 
best to restart the cache server for this sort of thing.
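The relative-time behavior can be illustrated with a small sketch: a TTL is turned into a deadline against the local clock at set time, so a clock jump makes every existing item expire early or late relative to wall time. This is an illustration of the idea only, not memcached's internals.

```java
public class ExpirySketch {
    // Expiration stored as an absolute deadline derived from the local
    // clock at set time; if the system clock jumps, existing deadlines
    // are effectively early or late. Illustration only.
    static long deadline(long nowSeconds, long ttlSeconds) {
        return nowSeconds + ttlSeconds;
    }

    static boolean expired(long deadline, long nowSeconds) {
        return nowSeconds >= deadline;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        long d = deadline(now, 3600);                 // item set with a 1-hour TTL
        System.out.println(expired(d, now + 100));    // well within TTL
        // a clock change of -5 hours (e.g. an EST -> UTC adjustment)
        // makes the item live hours longer than intended
        System.out.println(expired(d, now + 100 - 5 * 3600));
    }
}
```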


Re: Lot of misses

2010-08-18 Thread Matt Ingenthron

marrra wrote:

Hi,

I'm using memcached for caching search suggest box.

I've done this:
Set lifetime for all items to 30 hours and restart memcached.
After 24 hours, about 87,000 items are stored in memcached, and in the
last 2 hours only a few items were added.
When I look at graf (bijk.com) I see cache hits and misses are almost
equal now.
  


Did you notice your evictions?


Why? I think there would now be almost only hits.

Here is stats:
STAT pid 13863
STAT uptime 88895
STAT time 1282116614
STAT version 1.2.2
STAT pointer_size 64
STAT rusage_user 12.132758
STAT rusage_system 32.578036
STAT curr_items 87184
STAT total_items 176452
STAT bytes 56215815
STAT curr_connections 2
STAT total_connections 385143
STAT connection_structures 48
STAT cmd_get 385139
STAT cmd_set 176452
STAT get_hits 229838
STAT get_misses 155301
STAT evictions 67890
STAT bytes_read 90153265
STAT bytes_written 522373778
STAT limit_maxbytes 67108864
STAT threads 1
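Stats like these make the problem visible arithmetically; here is a minimal sketch of the derived numbers, using the values from the output above (the arithmetic is an illustration, not part of any memcached client API):

```java
public class StatsCheck {
    // Hit ratio = get_hits / (get_hits + get_misses), guarding divide-by-zero.
    static double hitRatio(long hits, long misses) {
        return (hits + misses == 0) ? 0.0 : (double) hits / (hits + misses);
    }

    public static void main(String[] args) {
        // values copied from the stats output above
        long hits = 229838, misses = 155301, evictions = 67890;
        System.out.printf("hit ratio: %.2f%n", hitRatio(hits, misses));
        // non-zero evictions with limit_maxbytes at 64 MB is the real clue:
        // items are being pushed out before they expire
        System.out.println("evicting: " + (evictions > 0));
    }
}
```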
  




Re: Lot of misses

2010-08-18 Thread Matt Ingenthron

marrra wrote:

I am new to memcached.

If I understand it well, does evictions mean memcached is out of memory?
  


Yes.  memcached is an LRU(ish) cache.  Therefore, when you go beyond the 
memory available, it'll find an old item to evict to store the new item 
you're asking it to store.
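The LRU(ish) behavior described above can be sketched in a few lines. This is an illustration only, not memcached's actual implementation, which keeps per-slab-class LRU queues:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: evicts the least-recently-used entry when
// capacity is exceeded, similar in spirit to memcached's LRU eviction.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true);  // accessOrder=true gives LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;  // evict the eldest once over capacity
    }

    public static void main(String[] args) {
        LruSketch<String, String> cache = new LruSketch<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");                      // touch "a" so "b" becomes eldest
        cache.put("c", "3");                 // evicts "b"
        System.out.println(cache.keySet());  // [a, c]
    }
}
```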


Since you're new, I'd highly recommend looking at this:
http://code.google.com/p/memcached/wiki/NewServerMaint


I've changed memory to 256MB.
I hope it will work.
  


Chances are this will lower your evictions, and you'll have more hits. 

There are a few other things that could be done to make it more 
efficient, but the first step it seems in your case would be to add more 
memory.  Since you've done that, I bet it'll help.


- Matt


On Aug 18, 6:41 pm, Matt Ingenthron ingen...@cep.net wrote:
  

marrra wrote:


Hi,
  
I'm using memcached for caching search suggest box.
  
I've done this:

Set lifetime for all items to 30 hours and restart memcached.
After 24hours are stored about 87000 items in memcached and in last 2
hours was added only few items.
When I look at graf (bijk.com) I see cache hits and misses are almost
equal now.
  

Did you notice your evictions?



Why? I think that now would by almost only hits.
  
Here is stats:

STAT pid 13863
STAT uptime 88895
STAT time 1282116614
STAT version 1.2.2
STAT pointer_size 64
STAT rusage_user 12.132758
STAT rusage_system 32.578036
STAT curr_items 87184
STAT total_items 176452
STAT bytes 56215815
STAT curr_connections 2
STAT total_connections 385143
STAT connection_structures 48
STAT cmd_get 385139
STAT cmd_set 176452
STAT get_hits 229838
STAT get_misses 155301
STAT evictions 67890
STAT bytes_read 90153265
STAT bytes_written 522373778
STAT limit_maxbytes 67108864
STAT threads 1
  




Re: is there a way to tell if a memcached client is still alive / connected?

2010-08-18 Thread Matt Ingenthron

Chad wrote:

I am trying to find a way to check if the memcachedclient is still
alive or not but it seems to be there is no public api for me to do
so. Can someone clarify  this?


Not really.  You should be able to use system-level tools (e.g. netstat 
-a | grep 11211) to see connections in the ESTABLISHED state. 

What would you do with the information?  I just wonder if there is a way 
to address what you're trying to do underneath.
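If you do want to check from code, attempting a TCP connect with a timeout is the closest equivalent of the netstat check. A sketch, with the caveat that this only proves the port accepts connections; it says nothing about the health of a particular client object:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connect to host:port succeeds within
    // timeoutMs. Only shows the server accepts connections.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(reachable("127.0.0.1", 11211, 500));
    }
}
```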


- Matt


Re: is there a way to tell if a memcached client is still alive / connected?

2010-08-18 Thread Matt Ingenthron

dormando wrote:

sto
  


I'm pretty sure he sent it only once.  I think this is a problem with 
Google's SMTP and MTAs if I recall correctly.  This happened on a 
majordomo based list I am on, and it died down after a little bit.  I 
can't immediately find the reference though.


Does yelling at it help?  ;)

- Matt

On Wed, 18 Aug 2010, Chad wrote:

  

I am trying to find a way to check if the memcachedclient is still
alive or not but it seems to be there is no public api for me to do
so. Can someone clarify  this?






Re: Also failing tests of 1.4.5 on Solaris 9 Sparc

2010-08-12 Thread Matt Ingenthron

Dagobert Michelsen wrote:

Hi Dustin,

Am 12.08.2010 um 18:25 schrieb Dustin:

 We don't have anyone working on sparc at the moment.  Would love
fixes.  :)


You haven't? What happened to Trond Norbye?


Trond is still doing Solaris development, but has been working on other 
stuff of late.  Still on Solaris though.  :)


We do want to get all  of the builders up to snuff.

Anyway, if anyone is interested
in developing memcached on Solaris, I can offer an account on the OpenCSW
buildfarm, which is equipped with Solaris 8/9/10 Sparc and x86 and Sun
Studio 11/12/12u1 + GCC3/GCC4 compilers for compatibility testing.
I myself am mainly working on packaging for Solaris, and elaborate
porting work is usually not possible in my timeframe.
  http://www.opencsw.org/extend-it/signup/to-upstream-maintainers/


Best regards

  -- Dago




Re: LRU mechanism question

2010-07-06 Thread Matt Ingenthron

Hi Sergei,

For various reasons (performance, avoiding memory fragmentation), 
memcached uses a memory allocation approach called slab allocation.  The 
memcached flavor of it can be found here:


http://code.google.com/p/memcached/wiki/MemcachedSlabAllocator

Chances are, your items didn't fit into the slabs defined.  There are 
some stats to see the details and you can potentially do some slab tuning.
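The growth of slab class chunk sizes can be sketched: sizes start at a base and grow by a factor (memcached's -f option, default 1.25), with each chunk size aligned up to 8 bytes. The base size used here is an assumption; the exact value depends on version, item header size, and build.

```java
public class SlabSketch {
    // Compute approximate slab class chunk sizes: start at `base`,
    // multiply by `factor` each step, align up to 8 bytes, stop at `max`.
    static int[] slabSizes(int base, double factor, int max) {
        java.util.List<Integer> sizes = new java.util.ArrayList<>();
        int size = base;
        while (size <= max) {
            int aligned = (size + 7) / 8 * 8;  // 8-byte alignment
            sizes.add(aligned);
            size = (int) (aligned * factor);
        }
        return sizes.stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        // An item is stored in the smallest class whose chunk fits it;
        // a 512K value only fits if some class is that large.
        for (int s : slabSizes(96, 1.25, 1024 * 1024)) {
            System.out.println(s);
        }
    }
}
```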


Hope that helps,

- Matt

siroga wrote:

Hi,
I just started playing with memcached. While doing very basic stuff I
found one thing that confused me a lot.
I have memcached running with default settings - 64M of memory for
caching.
1. Called flushAll to clean the cache.
2. Inserted 100 byte arrays of 512K each - this should consume about 51M
of memory, so I should have enough space to keep all of them - and to
verify that, I called get() for each of them - as expected, all arrays
are present.
3. I called flushAll again - so the cache should be clear.
4. Inserted 100 arrays of smaller size (256K). I also expected that I
have enough memory to store them (overall I need about 26M), but
surprisingly to me, when calling get() only the last 15 were found in the
cache!!!

It looks like memcached still holds the memory occupied by the first
100 arrays.
Memcache-top says that only 3.8M out of 64 is used.

Any info/explanation on memcached memory management details is very
welcome. Sorry if it is a well-known feature, but I did not find much
on the wiki that would suggest an explanation.

Regards,
Sergei

Here is my test program (I got the same result using both danga and
spy.memcached. clients):

MemCachedClient cl;

@Test
public void strange() throws Throwable
{
byte[] testLarge = new byte[1024*512];
byte[] testSmall = new byte[1024*256];
int COUNT = 100;
cl.flushAll();
Thread.sleep(1000);
for (int i = 0; i < COUNT; i++)
{
cl.set("largekey" + i, testLarge, 600);
}
for (int i = 0; i < COUNT; i++)
{
if (null != cl.get("largekey" + i))
{
System.out.println("First not null " + i);
break;
}
}
Thread.sleep(1000);
cl.flushAll();
Thread.sleep(1000);
for (int i = 0; i < COUNT; i++)
{
cl.set("smallkey" + i, testSmall, 600);
}
for (int i = 0; i < COUNT; i++)
{
if (null != cl.get("smallkey" + i))
{
System.out.println("First not null " + i);
break;
}
}

}
  




Re: Is this really Distributed?

2010-06-10 Thread Matt Ingenthron

Dilip wrote:

We read that memcached is a "Free & open source, high-performance,
distributed memory object caching system", whereas "distributed" is
not part of the memcached servers. We have to have some client which
knows about all memcached servers and uses some hash based on the key
to determine a server.

Is my understanding correct? If it is correct, we should remove
"Distributed" from the above definition.
  


It is true that the server doesn't know anything about the distribution 
of the key space or other servers.  Having said that, the server has 
never really existed without the client.


HTTP servers know nothing about each other, but the clients make 
requests to distributed servers based on human interaction and links in 
the content served.  I think most people would consider the web 
distributed.  In fact, some of those properties are why the web is 
inherently scalable, just like memcached.


I think 'distributed' is perfectly appropriate.
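The client-side distribution described in this thread amounts to hashing the key to pick a server; a minimal sketch follows. The modulo scheme is for illustration only: real clients prefer consistent hashing (e.g. ketama) so that adding or removing a server remaps only a fraction of the keys.

```java
public class ServerPicker {
    // Pick a server by hashing the key; plain modulo for illustration.
    // The client, not the server, holds the full server list.
    static String pick(String key, String[] servers) {
        int idx = Math.floorMod(key.hashCode(), servers.length);
        return servers[idx];
    }

    public static void main(String[] args) {
        String[] servers = {"10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"};
        // the same key always maps to the same server for a fixed topology
        System.out.println(pick("user:42", servers));
        System.out.println(pick("user:42", servers).equals(pick("user:42", servers)));
    }
}
```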

- Matt


release of spymemcached 2.5

2010-04-14 Thread Matt Ingenthron

Hi all,

I've just posted the 2.5 release of spymemcached.  See the summary of 
changes below.


- Matt



Release of 2.5

Changes since the 2.4 series:

The main new feature of the 2.5 release is support for SASL auth
to go along with the same feature in the memcached 1.4.3 and later
servers.

There is also a new feature which can be used in conjunction with the
failure cancel mode to close all connections to memcached servers if
there are timeouts against a server.  This helps to recover from
situations where a memcached server in the list may hard fail.

Also notable is this commit:
cba26c1 If the String value of the socket address starts with a /, 
remove it.


This does affect Ketama hashing, as it was found there could be some
occasional mismatch with libketama.  A much larger test was added
and to a pretty large degree, compatibility with libketama's hashing is
assured.  Proceed with caution if you'll be running a mixed environment
of 2.5 and pre-2.5 spymemcached clients.  To get the old behavior by
default, a custom interface is easily implemented.

Additionally, there have been a number of bug fixes, comment cleanups
some new tests, and some consistency fixes.

Changes since 2.4.2:

Ahn Seong Hwa (2):
New TCP connection timeout feature; if server fails completely, d/c.
fix for useless check statement that is for continuous timeout 
exception counter


Blair Zajac (3):
Be more generous in the strings that AddrUtil#getAddresses() will 
parse.

Fix AddrUtilTest#testIPv6Host() hostname assertion.
Fix consistency issue in ConnectionFactoryBuilder.isDaemon()

Dustin Sallings (19):
Beginnings of SASL support.
A slightly better model for SASL auth.
Authentication should allow specification of a mechanism.
Refactored broadcast to allow for node selection.
Working multi-step auth.
Refactored SASL auth for greater reuse.
Added support for listing SASL mechanisms.
Reformatted callback handler.
Don't throw away an exception.
Use the socket address as the realm.
Better auth API, handles connection drops.
Log the bug that causes reconnection on first connect.
Replaced Long nanos with long millis for op queue block offer timeout.
Ensure the factory builder can be used to specify enqueue block size.
Get rid of special constructors for op enqueue timeouts.
Do blocking inserts from the cache loader test.
Auth fix for mechanisms that have an initial response.
If the String value of the socket address starts with a /, remove it.
A larger libketama extract for compatibility testing.

Greg Kim (1):
Implementing read-only methods in MemcachedNodeROImpl - issue86

Kristian Eide (1):
Allow user-specified wait time for availability of queue space.

Matt Ingenthron (14):
Invert the ConnectionFactoryBuilderTest to go with new logic.
Document unexpected incr/decr behavior.  Issue 48.
Various Javadoc completeness.
Docs for path to FailureModes on DefaultConnectionFactory. Issue 115.
Clarify FutureBoolean, issue 63.
Clarify what is planned after a disconnect.
Enhance MemcachedNode to know whether auth should happen.
Changed AuthTest description to match reality.
Manual test to ensure correct connection handling with SASL.
Enhanced ConnectionFactoryBuilder test for auth.
Minor fixes to SASL reconnect test.
Handle auth failures more gracefully; maximum failures.
Log operation failures as potential auth failures.
Actually use the args to SASLConnectReconnect; shutdown nicely.

Bugs fixed/closed:
http://code.google.com/p/spymemcached/issues/detail?id=48
http://code.google.com/p/spymemcached/issues/detail?id=115
http://code.google.com/p/spymemcached/issues/detail?id=63
http://code.google.com/p/spymemcached/issues/detail?id=112
http://code.google.com/p/spymemcached/issues/detail?id=111
http://code.google.com/p/spymemcached/issues/detail?id=109
http://code.google.com/p/spymemcached/issues/detail?id=78
http://code.google.com/p/spymemcached/issues/detail?id=104

With others which can be listed here:
http://code.google.com/p/spymemcached/issues/list


--
To unsubscribe, reply using "remove me" as the subject.


BoF at MySQL Conf (was Re: Memcached release 1.4.5)

2010-04-08 Thread Matt Ingenthron

dormando wrote (among other things):

A number of us will be at MySQLConf (http://en.oreilly.com/mysql2010/)
next week. 



I proposed a BoF at the MySQL Conference to get the memcached users, 
client and server authors together.  It has been accepted so Tuesday the 
13th, we'll have a BoF and look forward to having some good discussion 
about approaches to using memcached, what the clients are capable of, 
how to approach certain application needs, etc.


A number of the client/server authors/contributors are already planning 
to attend.  Come and meet some of your users if you're a client/server 
developer.


Tuesday April 13, 7pm.
http://en.oreilly.com/mysql2010/public/schedule/detail/14627

- Matt




Re: Get Id of largest objects

2010-03-17 Thread Matt Ingenthron

dormando wrote:

Is there anyway to get the keys of the largest objects currently in my
memcached? I'm not concerned about locking up the server. I just wish
to be able to run a command one time which will give me the key of the
biggest objects that are currently stored.

Thanks



You could use peep or something to walk the largest slab classes... Or gdb
pause it and poke around.
  


If you're going to gdb it, you can just generate a core with gcore 
even.  Then you can poke around as much as you'd like.




Re: objects deleted from cache after 20min

2010-03-12 Thread Matt Ingenthron

Hi,

Alexandre Ladeira wrote:

I'm using memcached to store these data:
- user object - an java object consisting of username, password,
city...;
- user status info;
- counter,

so for every user login, I will make three memcached put operations.
After making 10,000 logins, I have these stats via memcached stats:
bytes - ~1MB
counter - 9980; as this is a stress test, the server doesn't reply to
every request.

The problem is that after 20 min, memcached deleted all login info -
the counter decreased to 0, even if I set the expire time (example:
3600 or 60*60*24*29 or 0). Have you ever faced a problem like that?
  


I don't think we've had any reports of things disappearing without any 
load going on.  What are you referring to with counter decreasing to 
0?  Are you referring to the curr_items or total_items statistics?


- Matt



Re: Memcached set is too slow

2010-03-01 Thread Matt Ingenthron

Chaosty wrote:

It runs on the same server; PHP memcached connects to it via 127.0.0.1.

It's close but not the same. I did kill and restart a clean memcached
server; each time it was close to 0.1 sec.
  


In that case, I totally agree this is suspicious. 

Just to toss out another idea, it sounds like maybe the library/code 
checking the response time doesn't have enough resolution, and it's 
rounding up.  1/10th of a second is an odd place for resolution to stop 
at, though.  Bad multiplying or rounding?


I've seen people claim lots of 1ms responses when it turned out all of 
the responses were in microseconds, they just didn't have enough resolution.



I will do strace for memcached later and post the results.

On Mar 2, 3:50 am, dormando dorma...@rydia.net wrote:
  

What about memcached? Is it running on localhost or over the network?

Any chance you could get an strace from the memcached side? Have your test
app talk to a separate test instance or something.

Is it exactly or close to 0.10s each time? That's suspicious.







Re: memcached + SASL

2010-02-23 Thread Matt Ingenthron

Hi Yakuza,

yakuza wrote:

I have compiled memcached with SASL support for testing, but none of the
plugins for WordPress or vBulletin support it.
It's a newer feature; has nobody written support for it yet?
  


Two of the most popular clients, libmemcached and spymemcached, have 
SASL support in their development branches and are both planning to have 
releases soon.  Being in libmemcached, many of the high-level languages 
should be able to add support without too much complexity.


Have a look at:
https://bugs.launchpad.net/libmemcached/+bug/462250
and
http://github.com/dustin/java-memcached-client

Has anyone tested SASL authentication, or made any code patch
to use it?
  


I have, but not from wordpress.


Another question: if I have 10 WordPress sites on a single server, do I
need to run 10 memcached daemons, one per WP installation, or can I use
one big daemon for all the sites, or would that create any problem with
the data cache?
  


That's pretty application dependent, but if you can set a prefix or 
something like that, you can use a single daemon.  If you're using SASL 
auth (which probably takes other code changes), you can potentially use 
the bucket engine here:

http://github.com/northscale/bucket_engine

But... I have to say right now that's a complex thing for one to go 
assemble on their own.  That'll change soon. 

For right now, with off the shelf wordpress, the best thing is probably 
to run multiple instances on different ports.
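The prefix idea above can be sketched with a hypothetical wrapper (the dict stands in for one shared memcached daemon; `PrefixedCache` and the site names are illustrative, not a real client API):

```python
class PrefixedCache:
    """Illustrative wrapper: one shared cache, one key namespace per site."""
    def __init__(self, backend, prefix):
        self.backend = backend
        self.prefix = prefix

    def set(self, key, value):
        self.backend[f"{self.prefix}:{key}"] = value

    def get(self, key):
        return self.backend.get(f"{self.prefix}:{key}")

shared = {}  # stands in for a single memcached daemon
site_a = PrefixedCache(shared, "wp_site_a")
site_b = PrefixedCache(shared, "wp_site_b")
site_a.set("post_1", "A's post")
site_b.set("post_1", "B's post")
# Same logical key, but the prefixes keep the two sites from colliding.
```

Many real PHP/WordPress memcached plugins expose a similar key-prefix or salt setting, which is what makes the single-daemon approach workable.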


Good luck,

- Matt


Re: does memcached work on windows 7

2010-02-12 Thread Matt Ingenthron

tvinod wrote:

im trying to run memcached on my windows 7 machine within a rails
project and it errors out..

936 connection closed.
940 new client connection
940 set mem-development:c222efbcfc08b4850198cd1eea610f76 0 0 217
  

940 STORED


Failed to write, and not due to blocking: Resource temporarily
unavailable
940 connection closed.
  


Which release are you running on Windows, and did you build it yourself 
or did you get binaries?  What was the source?


I looked through the code briefly; the only places where that may pop up 
is where memcached tries to write to the socket and gets an error back 
from the OS.  The error handling in the current code appears to print 
more information out, so I suspect you have an older build.

the same rails project works perfectly fine on vista and linux. i saw
somewhere it could be due to firewall issue. but i checked to make
sure memcached is allowed in windows firewall.


Yes, that is listed in older memcached issues, but I know of a couple of 
folks who have tried recent code on Windows 7 without seeing this 
issue.  There could well be some firewall (from Windows or some other 
utility) causing an issue.  If you can confirm it's something recent, 
maybe there is a way to get more info.  One way would be with a 
debugger, but it may also just make sense to get you a binary with some 
more diagnostics.


Regards,

- Matt


Re: memcached and access control

2010-01-06 Thread Matt Ingenthron

Aaron Stone wrote:

On Wed, Jan 6, 2010 at 2:48 AM, pub crawler pubcrawler@gmail.com wrote:
  


(snip...)

Needless to say, permissions and authentication is a feature set that
is going to be re-requested for addition now and in the future.  It opens
the door for someone to create a memcached variation with such a
feature set - anyone?



Please don't. Please nobody even think of doing that. Really. Don't.

  


Authentication, at least, has been in the community memcached release 
for the last two micro releases:

http://code.google.com/p/memcached/wiki/ReleaseNotes143

If you need security labels and such, running separate processes would 
seem to be the way to go.  Of course, security labels can also be 
applied to network traffic, meaning that the authentication features are 
redundant.  :)


- Matt


Re: memcached and access control

2010-01-06 Thread Matt Ingenthron

Aaron Stone wrote:

Authentication, at least, has been in the community memcached release for
the last two micro releases:
http://code.google.com/p/memcached/wiki/ReleaseNotes143



It's a reasonable effort at keeping someone from writing to your
cache. Surely they can still sniff the network to get your data.
memcached with encryption is like a Ferrari with monster truck tires.

  


Oops, I see I brought up a few things that were already in the thread, 
sorry!


I think I agree here, but it may depend.  In my opinion, with most 
things security related, the level of controls inserted should be 
proportional to the anticipated threat.  The SASL implementation was put 
into memcached (and libmemcached and spymemcached) with a use case in mind.


The very real use case that some people were trying to deal with in 
cloud deployments (and possibly some cloudy enterprise deployments) is a 
situation where it'd be hard to sniff the network, but it would be 
useful to ensure you know who someone is when they connect.  This was 
covered back when it was released:


http://code.google.com/p/memcached/wiki/SASLHowto
http://blog.northscale.com/northscale-blog/2009/11/sasl-memcached-now-available.html
http://blogs.sun.com/trond/entry/sasl_support_in_libmemcached

There are some who run on clouds (public or private) who may need this 
kind of thing.  There are others who run on pretty well controlled 
networks that don't need any of it.  Right now, memcached provides both.


I could see situations where one may need additional controls, but the 
kind of security controls that started the thread off (namely SELinux) 
which to me means security labels, usually means you don't trust 
non-audited programs and instead have the platform look after them.  I 
guess KaiGai has taken that further with other application components 
though.


One would think, though, that getting down to the item level would be 
tough to accept due to the overhead involved.  There may be some ways 
of getting closer to access control without going all the way down to 
the item level.


- Matt


Re: How to get data after execute flush_all?

2009-12-21 Thread Matt Ingenthron

Stephen wrote:

Hi, everyone:
In my project, I executed the flush_all command carelessly;
however, the data in the memcache is important to me, and it has not
yet been saved into the database.
And I read the doc; it says "flush_all doesn't actually free all
the memory taken up by existing items; that will happen gradually as
new items are stored". Since memcached has used just half of the
memory allocated, I think the data which is not yet saved into the
database may still be in MEMORY, so can anyone tell me how I could
get the data?
  


There is no method of doing so designed for users, but there are a 
couple of things underneath.  It sounds like you're in a data-recovery 
type situation.  That being the case, I might recommend you use gcore to 
get a core of the running memcached process(es) so you'll have something 
point-in-time stable that you can later pull the data out of with a debugger.


If I misunderstand and you're looking to do this on a regular basis or 
as part of some recovery plan for your database, please give us some 
more info on what you want your app to do and we may be able to help 
point you to a solution that doesn't require getting to all of the items 
in the cache.


Hope that helps,

- Matt


Re: mutex on memcached

2009-11-24 Thread Matt Ingenthron

Hi JB,

Just a couple of quick thoughts, and others will likely jump in as well.

juanbackson wrote:

Hi,

I have two distributed clients that need to write stuff to memcached,
but each write will involve two records.   These two records must be
sync.  So, they must be changed together.
  


You may be stretching the limits of what memcached is designed for 
here.  By being simple, memcached is very, very fast and quite scalable, 
but doesn't provide features like distributed locks.



Is there anyway that I can lock on those two records and write them,
so there won't be issue with two clients writing at the same time?
  


There are no locks available to clients of memcached, but with the use 
case you describe it should be possible to put both records into the 
same item stored in memcached.  Since that item will always be 
consistent, you can be sure no two clients will write at the same time.


You may also look into the CAS operations which are part of memcached.  
With CAS, memcached allows one to implement some lock-free algorithms 
which can ensure, for instance, that a client will know that no other 
client has updated an item since that client last fetched it.  It's not 
the same, but locking has overhead and you can accomplish a lot of 
things lock-free with CAS.  It could fit your case, depending on what 
the relationship between those two items is.
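To make the CAS idea concrete, here is a hedged sketch using an in-memory stand-in for a client's gets/cas operations (`FakeCache` and `update_pair` are invented for illustration; real clients expose equivalent calls under varying names). Both records live in one JSON-encoded item, and a conflicting writer simply forces a retry:

```python
import json

class FakeCache:
    """In-memory stand-in for a memcached client with gets/cas semantics."""
    def __init__(self):
        self._data = {}
        self._token = {}

    def gets(self, key):
        # Return the value plus a CAS token identifying this version.
        return self._data.get(key), self._token.get(key, 0)

    def cas(self, key, value, token):
        # Store only if nobody changed the item since `token` was issued.
        if self._token.get(key, 0) != token:
            return False
        self._data[key] = value
        self._token[key] = token + 1
        return True

def update_pair(cache, key, mutate, retries=10):
    """Read-modify-write both records as one item, retrying on CAS conflict."""
    for _ in range(retries):
        raw, token = cache.gets(key)
        records = json.loads(raw) if raw else {"a": 0, "b": 0}
        if cache.cas(key, json.dumps(mutate(records)), token):
            return True
    return False
```

Because both records are serialized into one value, a reader always sees a consistent pair; CAS only has to arbitrate between concurrent writers.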


Beyond that, there are some other distributed lock managers which one 
could use alongside memcached, but you're going further and further away 
from the common memcached deployments if we start talking about DLM. 


Hope that helps,

- Matt


Re: Windows 2003 64 Bit memory utilization?

2009-11-03 Thread Matt Ingenthron


Hi David,

David H. wrote:

I notice that the standard way to build a Win64 port is to use MinGW/
Cygwin.  I am interested in a native Win64 port which makes the
necessary calls directly, and I'm willing to create and maintain such
a port, so long as I get support from the owners to maintain it with
the main branch.
  


Thanks for the offer.  More below...


I see that one of the requirements is to create a minimal diff.  This
is certainly understandable, but creating thread library portability
entails touching every reference to pthreads in the codebase.  Since
pthreads and Windows threads have similar structure, it seems that we
need only create a very thin abstraction layer and change the main
codebase to use that instead (we could probably get away with macros
to eliminate any abstraction penalty).  Would this be an acceptable
way to proceed?
  


I think the challenge in the past was memcached is C99 based, and the 
'native' compilers on Windows don't support C99.  This means, 
potentially, a bunch of conditional compilation, macros or 
reorganization to keep the code relatively clean.


Have you run into this kind of thing before and been able to do this 
with a thin layer as you recommend?


Also, do you have concerns with the MinGW approach?  Since some 
time/effort has been put into that already, if there are concerns/issues 
I'm sure folks would like to hear about them.


Thanks!

- Matt


Re: Memcached user warnings

2009-10-06 Thread Matt Ingenthron


dormando wrote:

Yo,

I'm debating a small patch for 1.4.2 that would add (more) commandline
warnings for memcached. You'd of course only see most of them if you fire
it up without the -d option, but hopefully in the course of testing
someone tries that.

For example, if someone sets `-m 12` they'd get a warning saying the
amount of memory is too small, and they're likely to get out of memory
errors.

And if they set `-t 50` it would warn that having threads > num_cpus is
wasteful. I think a few of our frequent issues can be solved by having
memcached audibly warn when configured weird. I also want to add more
twiddles that can be used for foot-shooting.

Thoughts?
  


I think this is great.  Anything which helps users be self-sufficient 
and not shoot themselves in the foot is a good idea.  In fact, you could 
even say 'refuse to start' unless there is a --overridesafety type thing 
for, say, '-m 12' or '-t 200'.
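A hedged sketch of what such startup checks might look like (the thresholds and the `check_config` function are invented for illustration; memcached's actual implementation is C and its limits may differ):

```python
import os

def check_config(memory_mb, threads):
    """Collect warnings for configurations likely to cause trouble."""
    problems = []
    if memory_mb < 48:
        # Tiny cache: slab allocator overhead leaves little usable space.
        problems.append("memory limit is very small; expect out-of-memory errors")
    if threads > (os.cpu_count() or 1):
        # Worker threads beyond the CPU count just add contention.
        problems.append("more worker threads than CPUs is wasteful")
    return problems
```

A daemon could print these at startup, or refuse to start unless an override flag is given, which is essentially the trade-off being debated above.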


It could even make sense (someday) to add some intelligence in the mix.  
i.e. -t figureitoutforme. 


- Matt


Re: Range Spec Posted to Wiki

2009-09-17 Thread Matt Ingenthron


dormando wrote:

That's *kind* of what I thought. I'm unaware of anyone having real
authoritative specifications on wikipedia?

Is there something to show otherwise?
  


Not that my opinion means that much, but Wikipedia, like the 
encyclopedias they mirror in the treeware world (as opposed to webware I 
guess), is, to me, a secondary source.  I generally wouldn't assume it 
to be authoritative for much of anything, but would probably expect it 
to point to something authoritative.


I also agree we really don't need more sites.  Having said that, I think 
it'd be great to use the wiki as the authoritative doc rather than 
something in the doc/ directory.  This is already partially true of 
documenting contributors and the changelog; although they live in the source 
anyway, it's more in the public record on the web/GitHub, right?


- Matt



On Wed, 16 Sep 2009, Adam Lee wrote:

  

Am I confused or is it actually being proposed that documentation
exist on Wikipedia?

I see no problem with the current wiki and there's no way that
Wikipedia would allow that...

http://en.wikipedia.org/wiki/Wikipedia:NOT#Wikipedia_is_not_a_manual.2C_guidebook.2C_textbook.2C_or_scientific_journal

--
awl






Re: How can I connect to memcached using PERL ?

2009-09-09 Thread Matt Ingenthron


daugia247 wrote:

I have installed memcached from Danga, but I cannot connect to
memcached using Perl.
Do I need to install another package or something else? Can anyone give
instructions?


You will need a client.  There is a full list with links on the server's 
project site:

http://code.google.com/p/memcached/wiki/Clients

Hope that helps,

- Matt


release updates and roadmap discussion

2009-08-19 Thread Matt Ingenthron


Hi all,

Since a number of memcached contributors and client authors were in one 
place at the Drizzle conference this week in Seattle we had a short 
discussion about the release roadmap for stable and development 
branches.  I captured much of the discussion and it's posted on the wiki:

http://code.google.com/p/memcached/wiki/DevelopmentRoadmap

It is prefaced with dormando's proposed release cycle, which I think 
myself and Dustin have replied to.  The roadmap items are also in a 
proposal state, so if you have any thoughts, work you have in the 
pipeline or things you'd like to see, please jump into the conversation 
here on the list.


Thanks,

- Matt


Re: RAM type

2009-08-15 Thread Matt Ingenthron


jim wrote:

Which RAM type(buffered, un-registered or FB DIMM) is better for
memcached server?
  


I'd venture to say for 99%+ deployments, it doesn't really matter.  The 
network is the largest limiting factor in both latency and throughput 
for memcached performance.  $/GB is probably your best metric.



What's max RAM size that memcached server can support?
  


memcached is 64-bit (pass --enable-64bit at configure time), so then 
it's down to whatever limits the OS/platform you're running on imposes.  
64GB has often been deployed and tested.  I've personally 
seen even more in lab environments. 

Back to what I said above though, the $/GB tends to go up with higher 
densities per system.  The operational cost can be a factor here too... 
consider DDR2 power consumption compared to FB-DIMM.


Is there a more specific problem you're trying to solve?

- Matt


Re: 1.2.6 to 1.2.8 or 1.4.0?

2009-08-05 Thread Matt Ingenthron


Jay Paroline wrote:

Hello,
We've been having some intermittent memcached connection issues and I
noticed that we are a couple of releases behind. Our current version
is 1.2.6.

Before I nag our admin about upgrading, is there any reason why it
might be more wise to go to 1.2.8 rather than make the leap to 1.4.0?
  


1.4.0 is in production in some large sites (as was 1.3) and it does have 
some miles on it.  Whether you make the move to 1.4 or not is very 
subjective to environment, etc. but I'd say if you're going to make the 
move going to 1.4.0 gives you a few more client options, and that side 
of things likely iterates faster.  I'm personally a fan of more options.


I have to say, I don't think going to 1.2.8 or 1.4.0 has anything which 
will help with connection issues... but then I don't really know what 
your issues are.

FWIW the clients we use are PHP primarily and a couple of lower
traffic Java apps. Should 1.4.0 work with any clients that were
working with 1.2.6?
  


Yes, 1.4.0 should support all of your clients which are working with 
1.2.6.  As aforementioned, there are clients which give you features 
only available with 1.4.0, but the 1.4.0 server is backward compatible 
with existing clients.


Hope that helps,

- Matt


Re: Issue tracker now set to spam the mailing list

2009-08-03 Thread Matt Ingenthron


dormando wrote:

Yo,

Updates to the issue tracker should now all hit the mailing list. The
traffic over there is relatively low, but only a small handful of us
actively check it - I'd prefer all information go to the same place. 


I think this is a good change...


A few
threads which should have been mailing list discussions have instead been
held over there. This solves that problem :)

If it gets too spammy I'll separate it out into a -bugs mailing list.
Since we used to do all bug discussions on the ML anyway, this should be
no big deal.

-Dormando
  




Re: Is clearing out the cache necessary?

2009-07-29 Thread Matt Ingenthron


blazah wrote:

What do folks do with the objects stored in memcached when a new
version of the software is deployed?  There is the potential that the
data could be stale depending on the code changes so do people
typically just flush the cache?
  


The most common approach here is to add something to the key prefix 
which matches the version of the application.  If you work this out, you 
can actually even do 'rolling upgrades' of the application (assuming 
users have long lived sessions).  As things roll over from the old 
prefix to the new one, the old objects will just naturally expire or LRU 
out.


If you have a heavy workload, a flush_all could be pretty bad for your 
users.
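The rolling-upgrade idea above can be sketched like this (the version strings and `versioned_key` helper are hypothetical; the dict stands in for memcached):

```python
def versioned_key(version, key):
    """Prefix every cache key with the deployed application version."""
    return f"{version}:{key}"

cache = {}  # stands in for memcached
cache[versioned_key("v1", "user:42")] = "old-format object"

# After deploying v2, the app reads and writes only v2-prefixed keys.
# The v1 entries are never touched again and expire or LRU out naturally.
cache[versioned_key("v2", "user:42")] = "new-format object"
stale = cache.get(versioned_key("v2", "user:7"))  # miss: repopulate from DB
```

During a rolling upgrade, old and new application servers simply operate in disjoint key namespaces, so no flush_all (and no thundering herd of cache misses) is needed.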



Is there information on best practices and/or how memcached is used in
production?


There's quite a bit on the FAQ:
http://code.google.com/p/memcached/wiki/FAQ

Hope that helps,

- Matt



Re: Memcached crash

2009-07-27 Thread Matt Ingenthron


lessmian2 wrote:

Thanks for reply. I have 32-bit kernel. I have no memory limits for
single process. Ulimit say:
  


You can look at ulimit as soft limits.  There are always hard limits 
based upon your system's architecture and the features of that 
architecture you're using.  One of these is address space.  If you have 
32-bit only hardware or you have 64-bit capable hardware but are running 
a 32-bit kernel (which implies a 32-bit userspace) it doesn't matter 
what ulimit says.  There's a maximum to how much memory an individual 
process can address.


Have you tried dormando's suggestions??

Hope that helps,

- Matt


ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 71680
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 71680
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

Lesmian2

On 23 Lip, 09:09, dormando dorma...@rydia.net wrote:
  

Hey,

Are you running a 32-bit or 64-bit OS? If you are 32-bit and your process
size limit is 2G, you should set memcached to a limit of 1800 or so, which
allows for process overhead. If it's 4G or 3.5G, same idea but subtrack a
few hundred megs.

-Dormando



On Tue, 21 Jul 2009, lessmian2 wrote:



Hello.
  
I have a problem with memcached. I have three web servers (Apache2) with

PHP. I have installed memcached on all of these servers. Every few
days, memcached crashes with errors like these:
'/usr/bin/memcached: invalid option -- 2 Illegal argument ?'
and:
'kernel: [2967687.232682] memcached[6431]: segfault at ccbc3706 ip
08049ea1 sp b7ce5200 error 7 in memcached[8048000+c000]'
  
My hardware (all three servers):

Intel Xeon X3210  @ 2.13GHz
8 GB RAM
RAID 1, 2 x 160GB HDD
  
My software:

Debian Etch 4.0
Apache 2.2.9
PHP 5.2.6
Memcached 1.2.8-1
PHP memcache client 3.0.1-1
  
I run memcached like this:

'/usr/bin/memcached -vv -m 4096 -p 11211 -u nobody'
  
Can anybody help me?
  
Thanks
  




Re: Memcached 1.4.0 (stable) Released!

2009-07-20 Thread Matt Ingenthron


Dustin wrote:


On Jul 19, 4:29 pm, Gary Z g...@ironplanet.com wrote:
  

In order to make it work for Solaris10 x86 (not opensolaris), I had to
make the following minor changes; two issues:



  Hey, we have a Solaris 10 sparc builder and OpenSolaris x86
builders.  Apparently that's not enough to cover it.
  


I believe our SPARC builder is using Sun Studio, not gcc.  With the 
first one, it could be something gcc is catching (correctly?) which 
studio is not.  The second one shouldn't cause a difference with 
compilers though.


Looking at the man page for syscalls (man -s 2 intro), I see the following:

62 ETIME    Timer expired

The timer set for a STREAMS ioctl(2)
call  has expired. The cause of this
error is device-specific  and  could
indicate   either   a   hardware  or
software  failure,  or   perhaps   a
timeout  value that is too short for
the specific operation.  The  status
of the ioctl() operation is indeter-
minate. This is also returned in the
case  of  _lwp_cond_timedwait(2)  or
cond_timedwait(3C).

This doesn't seem to apply in the current context, does it?  Did you get 
some sort of error during compilation, or were you just looking at 
errno.h for something you thought was equivalent?


Hmm, but looking deeper... there it is in port_getn().  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/event_port.c#96


This defines errno as

ETIME The time  interval  expired  before  the  expected
  number  of  events  have  been  posted to the port
  (original value in nget), or nget is updated  with
  the  number of returned port_event_t structures in
  list[].

But... that return is to libevent from a port_getn() which seems to 
handle the return value: 
http://src.opensolaris.org/source/xref/sfw/usr/src/lib/libevent/libevent-1.3e/evport.c#355  
Sun would generally configure it to use /dev/poll rather than event port 
completion since it was an interface with more miles on it and it 
supports both.  Event port completion is more useful if you're writing 
multithreaded code and don't want to have to dance around /dev/poll with 
your threads.


The section of code you're referring to has just a read(2)... and the 
man page for read(2) says nothing about returning ETIME.  How did you 
see a need for this?
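For context on the EAGAIN path the quoted diff extends, here is a minimal Python sketch of a non-blocking read that would block (Solaris's ETIME return from event ports has no direct analogue in Python; on Linux, EWOULDBLOCK equals EAGAIN):

```python
import errno
import socket

# A connected pair of sockets; nothing has been written yet.
a, b = socket.socketpair()
b.setblocking(False)
try:
    data = b.recv(4096)  # would block, so the OS returns an error instead
except BlockingIOError as e:
    # This is the errno the memcached read loop checks before breaking
    # out; the Solaris patch under discussion adds ETIME alongside it.
    assert e.errno in (errno.EAGAIN, errno.EWOULDBLOCK)
    data = None
a.close()
b.close()
```

The C code in memcached does the equivalent check on the return of read(2)/write(2), treating "would block" as "come back when libevent says the socket is ready".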




  ... or are you talking about something that's not breaking any of
our current tests?  If it doesn't, then it'd be really good to get a
test case for this.

  Any chance you can create a zone or something to run a buildbot
slave so we can keep this platform properly supported?  :)

  

1. using standard C prototype

2. solaris seems to return ETIME instead of EWOULDBLOCK for non-
blocking read
In errno.h:
#define EAGAIN  11  /* Resource temporarily unavailable */
#define EWOULDBLOCK EAGAIN
#define ETIME   62  /* timer expired*/

With following changes, it built with GNU Make 3.80, SFW gcc 3.4.3 on
Solaris 10-u7-ga-x86, linked with libevent-1.4.6-stable.

diff -U2 ./solaris_priv.c ../memcached-1.4.0-sol10/./solaris_priv.c
--- ./solaris_priv.cThu Jul  9 09:43:42 2009
+++ ../memcached-1.4.0-sol10/./solaris_priv.c   Wed Jul 15 21:49:01
2009
@@ -3,4 +3,5 @@
 #include <stdio.h>

+extern void drop_privileges(void);
 /*
  * this section of code will drop all (Solaris) privileges including

diff -U2 ./memcached.h ../memcached-1.4.0-sol10/./memcached.h
--- ./memcached.h   Thu Jul  9 10:16:24 2009
+++ ../memcached-1.4.0-sol10/./memcached.h  Wed Jul 15 21:48:07
2009
@@ -465,5 +465,5 @@

 #if HAVE_DROP_PRIVILEGES
-extern void drop_privileges();
+extern void drop_privileges(void);
 #else
 #define drop_privileges()

diff -U2 ./memcached.c ../memcached-1.4.0-sol10/./memcached.c
--- ./memcached.c   Thu Jul  9 10:16:24 2009
+++ ../memcached-1.4.0-sol10/./memcached.c  Wed Jul 15 22:14:20
2009
@@ -3016,5 +3016,5 @@
 }
 if (res == -1) {
-if (errno == EAGAIN || errno == EWOULDBLOCK) {
+if (errno == EAGAIN || errno == EWOULDBLOCK || errno ==
ETIME) {
 break;
 }
@@ -3115,5 +3115,5 @@
 return TRANSMIT_INCOMPLETE;
 }
-if (res == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
+if (res == -1 && (errno == EAGAIN || errno == EWOULDBLOCK ||
errno == ETIME)) {
 if (!update_event(c, EV_WRITE | EV_PERSIST)) {
 if (settings.verbose > 0)
@@ -3267,5 +3267,4 @@
 }
 }
-
 /*  now try reading from the socket */
   

Re: at OSCON? get together?

2009-07-20 Thread Matt Ingenthron


Brian Moon wrote:


I may not be at the Gearman BoF but want to meet up with you guys.  Do 
we know the where part yet?


Yep, all confirmed. 

Wednesday 7/22, we have a spot reserved at Gordon Biersch, 8pm.  33 E 
San Fernando St., San Jose, CA 95113.  I believe it's between 1st and 
2nd street.  For all those who are going to the Gearman BoF, just head 
on over after it's ended.


- Matt


Re: Release schedule for 1.4.1 and beyond

2009-07-18 Thread Matt Ingenthron



Thanks for kicking off the discussion on this.

dormando wrote:

I'd like to propose something mildly similar to the linux kernel schedule,
but inverted a little:

- 3 weeks after each new stable release, we release -rc1 for the next
release.
- New -rc's will be kicked out daily or bidaily if there are fixes.
- After 1 week in RC, unless there are still bug reports coming in, stable
is released.

So we should have a good stable release roughly once per month. Exceptions
can be made, as usual. Major bug finds, emergencies, of course warrant
earlier releases. Cycles with large code changes all at once might warrant
an earlier cut to -rc1 and a 2-3 week -rc cycle (still trying to round it
out at a month overall).
  


It could be because my backlog of goals and ideas isn't as aggressive as 
others, but to me this looks great other than the frequency.  It feels a 
bit too frequent to me, but I'm only one voice.  It feels more like 6 
weeks or so makes sense (unless there's a security issue to deal with of 
course).  The linux kernel probably needs the frequency because of the 
larger number of things going in. 


Development should stay the way it is. All of us making code changes
should be cc'ing as much as possible to the mailing list. I've noticed
some of this has dropped off a little in recent days, but it's easy to
pick that back up again. I do realize it's easier for people to just
follow us on github, but lets stay open for the sake of discussion.
  


+1 

This could also head off duplication of work... or worse... work that is 
incompatible.



Since OSCON is coming up, I'd like to give an extra week for the first
cycle. Tentative date for 1.4.1-rc1 will be july 30th, with the stable
release of 1.4.1 being on august 6th.
  


Makes sense to me!

- Matt


web cleanup request for danga.com

2009-07-16 Thread Matt Ingenthron


Based on a discussion on the IRC channel and the mailing list, it seems 
people aren't finding the release notes.  I believe (partially because 
one guy told me it was the case)  this is because the release notes are 
not linked from the danga.com/memcached/news.bml page.


When walking around, danga.com/memcached/apis.bml also seems to point to 
only the old protocol.txt in subversion.  It's not up to date with the 
1.4.0 release and it doesn't mention the protocol-binary.xml which 
should probably be cooked and hosted somewhere.  Perhaps I'll do that 
for 1.4.0 and put it on the wiki for a danga.com pointer.



- Matt


Re: web cleanup request for danga.com

2009-07-16 Thread Matt Ingenthron


Brian Aker wrote:


Why not just update the wiki on code.google.com? A greater number of 
people can access/keep that up to date.


I probably agree with you there.  I'm one of those people (though I 
can't seem to be able to edit the summary).  code.google.com does have 
the release notes and such, but not necessarily the most user friendly 
summary page.  Users are looking for what it is, what's the latest 
release, and where to get it.  It's all there on danga.com (but 
incomplete) and one has to dig for it on code.google.com.


Reality is a bunch of things on danga.com point to code.google.com 
already.  I'm not sure what the right way forward is but even if there 
were a decision to do that, someone would need to update danga.com. 

We can probably come up with a better long term plan in some kind of 
face-to-face discussion at OSCON, but for the short term just getting 
the release notes link on the news page on danga.com will probably help.


- Matt




On Jul 16, 2009, at 9:53 AM, Matt Ingenthron wrote:



Based on a discussion on the IRC channel and the mailing list, it 
seems people aren't finding the release notes.  I believe (partially 
because one guy told me it was the case)  this is because the release 
notes are not linked from the danga.com/memcached/news.bml page


When walking around, danga.com/memcached/apis.bml also seems to point 
to only the old protocol.txt in subversion.  It's not up to date with 
the 1.4.0 release and it doesn't mention the protocol-binary.xml 
which should probably be cooked and hosted somewhere.  Perhaps I'll 
do that for 1.4.0 and put it on the wiki for a danga.com pointer.



- Matt






Re: Enabling large-page-allocations

2009-07-15 Thread Matt Ingenthron


Hi Mike,

Mike Lambert wrote:

Trond, any thoughts?
  


Trond is actually on vacation, but I did steal a few cycles of his time 
and asked about this.

I'd like to double-check that there isn't a reason we can't support
preallocation without getpagesizes() before attempting to manually
patch memcache and play with our production system here.
  


There's no reason you can't do that.  There may be a slightly cleaner 
integration approach Trond and I talked through.  I'll try to code that 
up here in the next few days... but for now you may try your approach to 
see if it helps alleviate the issue you were seeing.


Incidentially, how did the memory fragmentation manifest itself on your 
system?  I mean, could you see any effect on apps running on the system?




Thanks,
Mike

On Jul 13, 8:38 pm, Mike Lambert mlamb...@gmail.com wrote:
  

On Jul 10, 1:37 pm, Matt Ingenthron ingen...@cep.net wrote:







Mike Lambert wrote:
  

Currently the -L flag is only enabled if
HAVE_GETPAGESIZES && HAVE_MEMCNTL. I'm curious what the motivation is
for something like that? In our experience, for some memcache pools we
end up fragmenting memory due to the repeated allocation of 1MB slabs
around all the other hashtables and free lists going on. We know we
want to allocate all memory upfront, but can't seem to do that on a
Linux system.


The primary motivation was more about not beating up the TLB cache on
the CPU when running with large heaps.  There are users with large heaps
already, so this should help if the underlying OS supports large pages.  
TLB cache sizes are getting bigger in CPUs, but virtualization is more

common and memory heaps are growing faster.
  
I'd like to have some empirical data on how big a difference the -L flag

makes, but that assumes a workload profile.  I should be able to hack
one up and do this with memcachetest, but I've just not done it yet.  :)
  

To put it more concretely, here is a proposed change to make -L do a
contiguous preallocation even on machines without getpagesizes tuning.
My memcached server doesn't seem to crash, but I'm not sure if that's
a proper litmus test. What are the pros/cons of doing something like
this?


This feels more related to the -k flag, and that it should be using
madvise() in there somewhere too.  It wouldn't be a bad idea to separate
these necessarily.   I don't know that the day after 1.4.0 is the day to
redefine -L though, but it's not necessarily bad. We should wait for
Trond's response to see what he thinks about this since he implemented
it.  :)
  

Haha, yeah, the release of 1.4.0 reminded me I wanted to send this
email. Sorry for the bad timing.

-k keeps the memory from getting paged out to disk (which is a very
good thing in our case.)
-L appears to me (who isn't aware of what getpagesizes does) to be
related to preallocation with big allocations, which I thought was
what I wanted.

If you want, I'd be just as happy with a -A flag that turns on
preallocation, but without any of getpagesizes() tuning. It'd force
one big slabs allocation and that's it.
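The one-big-allocation idea can be sketched in a few lines of C. This is a hypothetical illustration under the assumption of a fixed cache limit; the names are made up for the example and are not memcached's actual slab allocator:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of upfront preallocation: grab the whole cache
 * limit in one contiguous allocation and carve fixed 1MB slabs out of
 * it, instead of calling malloc() once per slab as demand grows. */

#define SLAB_SIZE (1024 * 1024)

struct slab_pool {
    char *base;    /* start of the contiguous region */
    size_t limit;  /* total bytes preallocated */
    size_t used;   /* bytes handed out so far */
};

static int pool_init(struct slab_pool *p, size_t limit) {
    p->base = malloc(limit);   /* one big allocation, done once */
    if (p->base == NULL)
        return -1;
    p->limit = limit;
    p->used = 0;
    return 0;
}

/* Carve the next slab out of the preallocated region; NULL means the
 * pool is exhausted and the cache should evict rather than allocate. */
static void *pool_next_slab(struct slab_pool *p) {
    if (p->used + SLAB_SIZE > p->limit)
        return NULL;
    void *slab = p->base + p->used;
    p->used += SLAB_SIZE;
    return slab;
}
```

Since the slabs never go back to malloc(), the general-purpose allocator only ever sees the small hashtable and free-list allocations, which is the fragmentation win under discussion here.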



Also, I did some testing with this (-L) some time back (admittedly on
OpenSolaris) and the actual behavior will vary based on the memory
allocation library you're using and what it does with the OS
underneath.  I didn't try Linux variations, but that may be worthwhile
for you.  IIRC, default malloc would wait for page-fault to do the
actual memory allocation, so there'd still be risk of fragmentation.
  

We do use Linux, but haven't tested in production with my modified -L
patch. What I *have* noticed is that when we allocate a 512MB
hashtable, that shows up in linux as mmap-ed contiguous block of
memory. From http://m.linuxjournal.com/article/6390: "For very
large requests, malloc() uses the mmap() system call to find
addressable memory space. This process helps reduce the negative
effects of memory fragmentation when large blocks of memory are freed
but locked by smaller, more recently allocated blocks lying between
them and the end of the allocated space."

I was hoping to get the same large mmap for all our slabs, out of the
way in a different address space in a way that didn't interfere with
the actual memory allocator itself, so that the linux allocator could
then focus on balancing just the small allocations without any page
waste.
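For the curious, glibc exposes that mmap()-for-large-requests cutoff as a runtime tunable. A hedged sketch (glibc-specific; the 1MB value is just an example matching the slab size discussed above):

```c
#include <assert.h>
#include <malloc.h>   /* glibc-specific: mallopt(), M_MMAP_THRESHOLD */
#include <stdlib.h>

/* glibc's malloc() satisfies requests at or above M_MMAP_THRESHOLD
 * (128KB by default) with a private mmap() instead of growing the
 * brk() heap, which is why a 512MB hashtable shows up as its own
 * contiguous mapping. Lowering the threshold pushes 1MB slab-sized
 * allocations into their own mappings too, which the kernel can
 * munmap() cleanly on free() rather than leaving holes in the heap. */
static int mmap_slab_sized_requests(void) {
    return mallopt(M_MMAP_THRESHOLD, 1024 * 1024); /* nonzero on success */
}
```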

Thanks,
Mike





at OSCON? get together?

2009-07-14 Thread Matt Ingenthron


Hi all,

In the IRC channel the other day, it came up that a number of us will be 
at OSCON.  There are a number of IRC and mailing list regulars who 
probably haven't met in person.


Is anyone interested in meeting up one evening?  This would be a social 
event (though we'll no doubt informally talk tech).


Just to take a stab at a day/time, it looks like Wednesday evening may 
be the best time for such an event.  How about Wednesday, July 22nd at 
7:30pm? 

Anyone aware of any (major) conflicts?  If not, sound like a deal?  Drop 
a note to the list (or to me directly) if you'd like to join in.  
Presuming I'll get enough takers, I'll get a place organized near the 
conference.


- Matt


Re: at OSCON? get together?

2009-07-14 Thread Matt Ingenthron


Eric Day wrote:

For those of us also working on Gearman, there is a Gearman BoF at
7pm on Wednesday. I know at least a small handful of folks will
probably be there.

We could meet after, like 8:30ish? Or just meet at the BoF and go
from there... :)
  


Yes, there's definite overlap with those working with/on Gearman, so 
let's target 8:30 instead.  Those who want to proceed from the Gearman 
BoF can, and those who may be doing another BoF or something else 
interesting will just know where/when to meet us.


Sound good?


On Tue, Jul 14, 2009 at 12:17:05AM -0700, Matt Ingenthron wrote:
  

Hi all,

In the IRC channel the other day, it came up that a number of us will be  
at OSCON.  There are a number of IRC and mailing list regulars who  
probably haven't met in person.


Is anyone interested in meeting up one evening?  This would be a social  
event (though we'll no doubt informally talk tech).


Just to take a stab at a day/time, it looks like Wednesday evening may  
be the best time for such an event.  How about Wednesday, July 22nd at  
7:30pm? 

Anyone aware of any (major) conflicts?  If not, sound like a deal?  Drop  
a note to the list (or to me directly) if you'd like to join in.   
Presuming I'll get enough takers, I'll get a place organized near the  
conference.


- Matt





Re: Enabling large-page-allocations

2009-07-10 Thread Matt Ingenthron


Mike Lambert wrote:

Currently the -L flag is only enabled if
HAVE_GETPAGESIZES && HAVE_MEMCNTL. I'm curious what the motivation is
for something like that? In our experience, for some memcache pools we
end up fragmenting memory due to the repeated allocation of 1MB slabs
around all the other hashtables and free lists going on. We know we
want to allocate all memory upfront, but can't seem to do that on a
Linux system.
  


The primary motivation was more about not beating up the TLB cache on 
the CPU when running with large heaps.  There are users with large heaps 
already, so this should help if the underlying OS supports large pages.  
TLB cache sizes are getting bigger in CPUs, but virtualization is more 
common and memory heaps are growing faster.


I'd like to have some empirical data on how big a difference the -L flag 
makes, but that assumes a workload profile.  I should be able to hack 
one up and do this with memcachetest, but I've just not done it yet.  :)



To put it more concretely, here is a proposed change to make -L do a
contiguous preallocation even on machines without getpagesizes tuning.
My memcached server doesn't seem to crash, but I'm not sure if that's
a proper litmus test. What are the pros/cons of doing something like
this?
  


This feels more related to the -k flag, and that it should be using 
madvise() in there somewhere too.  It wouldn't be a bad idea to separate 
these necessarily.   I don't know that the day after 1.4.0 is the day to 
redefine -L though, but it's not necessarily bad. We should wait for 
Trond's response to see what he thinks about this since he implemented 
it.  :)


Also, I did some testing with this (-L) some time back (admittedly on 
OpenSolaris) and the actual behavior will vary based on the memory 
allocation library you're using and what it does with the OS 
underneath.  I didn't try Linux variations, but that may be worthwhile 
for you.  IIRC, default malloc would wait for page-fault to do the 
actual memory allocation, so there'd still be risk of fragmentation.
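One way to see, and work around, that page-fault-on-first-touch behaviour on Linux is to touch every page immediately after allocating. A minimal sketch of the idea:

```c
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>

/* malloc() on Linux typically hands back address space, not physical
 * pages: the kernel maps each page lazily on first touch. Writing one
 * byte per page forces those faults up front, so a "preallocation" is
 * actually backed by memory at startup rather than during serving. */
static void prefault(char *base, size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    for (size_t off = 0; off < len; off += page)
        base[off] = 0;           /* first write faults the page in */
    if (len > 0)
        base[len - 1] = 0;       /* make sure the tail page is touched */
}
```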


- Matt


Re: memcached MySQL dotorg booth

2009-04-17 Thread Matt Ingenthron


hachi wrote:

Hi Jruiz,

I'll be at the booth for the entire time next week.

I had been planning this via dormando (who sent a message similar to
yours on the list a short while ago), and I have some supplies for the
booth already.
  


I manned the booth for a bit last year, and I'd be glad to this year as 
well.  Should we get more formal about it and schedule things so you can 
step away for a session/break or two?  Or is it better to just keep it 
informal?


- Matt

See you there :)

--hachi

On Apr 16, 12:35 pm, Jruiz joaquin.r...@gmail.com wrote:
  

Hey guys,

Would love to see more volunteers for the memcached booth at the
dotorg pavilion at the MySQL conference next week.  If you are
contributing to memcached and would like to answer questions, please
show up at booth TT9 on Tuesday and/or Wednesday (21st and 22nd)
between 9AM and 5PM.

If you volunteer, please post back relevant comments/feedback/
questions/learnings that you hear to this group.

Saludos

Joaquin





Re: why -k dangerous?

2009-04-13 Thread Matt Ingenthron


Qiangning Hong wrote:

The manpage says:
   -k    Lock down all paged memory. This is a somewhat dangerous option
         with large caches, so consult the README and memcached homepage
         for configuration suggestions.

README says:
Also, be warned that the -k (mlockall) option to memcached might be
dangerous when using a large cache.  Just make sure the memcached machines
don't swap.  memcached does non-blocking network I/O, but not disk.
(it should never go to disk, or you've lost the whole point of it)

Why is locking pages in memory considered dangerous?  It can avoid
swapping, and so keep a good response time.  Isn't that a good
feature?
  


It's likely considered dangerous due to the impact it can have on other 
applications running on the same system, or components of the OS.  As 
you say, it is a good feature if used correctly. 

If you have, for instance, your application running on the same system 
and have a peak in user requests, it can lead to a situation where that 
application has memory needs and cannot get any more memory.  Even 
though your cache could have a low response time, the overall user 
experience can be quite poor.


I've recommended this flag to others in the past, and I'd just recommend 
starting conservatively (without creating too large a cache), monitoring 
the environment, and tuning from there.
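For reference, the -k flag boils down to a single mlockall() call. A minimal sketch; the exact privilege and rlimit details vary by OS:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/mman.h>

/* What -k does under the hood: pin all current and future pages in
 * RAM so the cache can never be paged out to swap. Without sufficient
 * privilege (on Linux, CAP_IPC_LOCK or a large enough RLIMIT_MEMLOCK)
 * this fails with EPERM or ENOMEM -- one more reason to start with a
 * conservative cache size and tune upward. */
static int lock_all_memory(void) {
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}
```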


Hope that helps,

- Matt


Re: memcached not freeing up ram?

2009-03-22 Thread Matt Ingenthron


tongueroo wrote:

That's a good point.  Um, maybe swap is what is happening.
  


Note there is a -k flag to memcached which, depending on underlying 
OS, can lock down the pages so they won't page out to swap space.  Since 
your mongrel memory usage will be variable, you may be able to use that 
flag to avoid a situation where memory pressure evicts memcached's pages.


Also, I don't know cruby's internals all that well, but running a quick 
rails app here locally seems to show that all of the memory allocations 
are in heap.  Since those won't be given back until the process exits, 
if you're running lots of mongrels without ever exiting you're likely 
not as efficient about memory usage as you could be.  I only see
a very small amount of memory mmap()'d.  If things are evenly 
distributed and cruby (and the underlying memory allocator you're using) 
are efficient about memory allocation, it may not be too bad.


One solution for this would be to recycle the mongrels on a regular 
basis.  Maybe you're already doing that?  There are some other solutions 
too.




Though I don't think that we are swapping, that would make the most
sense.
Below are our 9 memcached servers
1. how much free ram as listed by free -m
2. memcached version number
3. how much RAM is being used by memcached right now; it's near full
again (full at 1GB)

https://gist.github.com/e886d958a4bc8e103810

Right now it doesn't appear that we have swapping.
However, we do run our memcached instances on the same slices as our
app servers where our mongrels live.  Perhaps
spikes in the mongrels are causing it to swap.

Do those free numbers look good?
From what I've been told, the second row of free is the more
important one:

-/+ buffers/cache:   XXX   XXX

That's how much actual free RAM we have before we start swapping.  A gig
on each slice seems a-plenty.

Thanks again for all the helpful feedback and responses thus far.

Tung



On Mar 18, 8:04 pm, Dustin dsalli...@gmail.com wrote:
  

On Mar 18, 8:00 pm, tongueroo tongue...@gmail.com wrote:



memcached reads are reported as very slow. 10+ seconds.
  

  Are you giving it more RAM than you actually have?  I would expect
that behavior if it were fetching from swap.





Re: facebook memcached on github (redux)

2008-12-20 Thread Matt Ingenthron


Toru Maesaka wrote:

oops, clicked send before I finished the email :(

AFAIK, Trond and Matt have been experimenting quite a bit with what
I've mentioned above.
  


True, more Trond than I.  I've been working on a method of testing, and 
have a good workload generator.  We're actually not using mmap() at the 
moment but we'd talked about it before.  I don't think it would be a 
complex change.


Also, Trond had some thoughts on how to do this with less locking which 
he'd sketched out, but hasn't told me about yet.  I think he was working 
on resynching everything with the latest engine code.


- Matt

On Sat, Dec 20, 2008 at 9:42 PM, Toru Maesaka tmaes...@gmail.com wrote:
  

Hi!

I've quickly glanced through the repo and even though I think the idea
of the flat allocator is neat and how it uses mmap (/me loves mmap), I
think the flat allocator itself should belong outside the codebase.
So, what I'm trying to say is, shouldn't this be a separate storage
engine?

Trond and I are currently experimenting on reshaping the codebase by
changing the item structure to be a minimal and generic 32/64bit
aligned structure, which the storage engine will handle. If you're
interested I can write more about this.

So what I'd like to know is, what exactly is it that you guys are
concerned about with dynamic linking at startup? If the performance loss
(if we could even call it that) is going to become a huge issue for
the memcached userbase, we could discuss/plan static linking too.

Personally, I think working on things like improving memcached's space
efficiency (more bang for buck) seem more important than the dynamic
linking issue.

Cheers,
Toru

Opinions?
AFAIK,