Re: Memcache as session server with high cache miss?

2010-03-10 Thread moses wejuli
I agree memcache isn't expected - or designed - to persistently keep your
data, and is hence unsuitable for critical session data storage. That neatly
introduces an idea for our *beloved* memcache developers: would it not be
nice to have an option (just an option) to cache data with no possibility of
eviction, with, for example, a precondition that such cached data cannot
exceed *n%* of the total memory allocated to memcached on each server (or
quite simply with no preconditions)?

This way, large (or small) sets of critical data, e.g. sessions (or a cross
section of session data, e.g. login data), could be kept in memcache without
worrying about evictions or expirations.

My hunch is that the memcache devs have probably considered this idea but
never went ahead with it. I would be interested to know why.

On 10 March 2010 16:45, Les Mikesell  wrote:

> On 3/10/2010 10:28 AM, Brian Hawkins wrote:
>
>> Explain how using memcached opens one self to a DOS attack?
>>
>
> If you are expecting it to be a persistent store by itself, simply
> exceeding capacity will drop data before you expect it to expire.  Of course
> if your sessions are tied to logins this would be harder to cause, and if
> you have a backing DB of critical entries then it's not a problem until you
> overwhelm the DB.
>
> Memcache itself isn't a problem unless you expect it to always have your
> data, which it isn't intended to do.
>
> --
>  Les Mikesell
>   lesmikes...@gmail.com
>
>


Re: Memcache as session server with high cache miss?

2010-03-10 Thread moses wejuli
Yes, good point as far as service interruption is concerned, but I don't see
how this affects performance, especially if the programmer carefully uses
this feature for tiny amounts of data per user. I just think, given how far
memcache has matured, the time is probably right to include such a feature
(to be used with some care, of course).

On 10 March 2010 23:04, Dean Harding  wrote:

> > My hunch is that the memcache devs have probably
> > considered this idea but never went ahead with
> > it. I would be interested to know why?
>
> I'm not one of the devs, but I can guess why they wouldn't want to do this.
> It's pretty simple: it's not just expiration and expulsion that can cause a
> memcache node to lose keys. If you need to reboot a server, the data will
> be
> lost. If you have a hard disk crash and the server dies, you lose your
> data.
>
> Of course, you can reboot database servers and so on as well, but they're
> usually replicated so that there's no interruption of service and no data
> lost.
>
> Besides, this feature could affect performance, and memcache is ALL about
> performance.
>
> For us, our session data is usually mostly static. When you log on, we
> create the session data and it doesn't change much until you log off. So we
> actually store it in the database AND in memcache. The database then is
> mostly write-only and only needed if a memcache node goes down and the key
> is lost.
>
> Dean.
>
>
>
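
A rough sketch of the dual-write approach Dean describes, assuming the
pecl/memcached and PDO extensions; the table name, key prefix and TTL are
illustrative placeholders rather than details from his actual setup:

    <?php
    // Write-through session store: every write goes to the database (the
    // durable copy) and to memcached (the fast copy); reads try memcached
    // first and fall back to the database only when the key has been lost.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);
    $db = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

    function save_session(Memcached $mc, PDO $db, $sid, array $data) {
        $blob = serialize($data);
        $stmt = $db->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        $stmt->execute(array($sid, $blob));      // durable copy
        $mc->set("sess:$sid", $blob, 3600);      // cached copy, 1h TTL
    }

    function load_session(Memcached $mc, PDO $db, $sid) {
        $blob = $mc->get("sess:$sid");
        if ($blob === false) {                   // evicted, expired or node lost
            $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute(array($sid));
            $blob = $stmt->fetchColumn();
            if ($blob === false) {
                return null;                     // no such session anywhere
            }
            $mc->set("sess:$sid", $blob, 3600);  // repopulate the cache
        }
        return unserialize($blob);
    }

Because the database sees almost nothing but writes during normal operation,
it stays "mostly write-only", exactly as described above.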


Re: Memcache as session server with high cache miss?

2010-03-10 Thread moses wejuli
it's not so much a *"theoretically reliable storage mechanism"* but only
storing data for as long as the programmer chooses or in this case, for only
as long as the user is logged on (data is expelled when the user chooses to
log out or is forcibly logged out for one reason or another).

data miscalculation? i think you'd have to take care as a *dev* for this not
to happen -- infact this is presently the case (think max slab size)

On 10 March 2010 23:34, Les Mikesell  wrote:

> On 3/10/2010 5:14 PM, moses wejuli wrote:
>
>> Yes, good point as far as service interruption is concerned but I  don't
>> see how this affects performance, especially if the programmer carefully
>> uses this feature for tiny amounts of data per user I just think,
>> given how far memcache has metamorphosed, the time is probably right to
>> include such a feature (to be used with some care of course)
>>
>
> You can't count on nothing ever going wrong.  How should this theoretically
> reliable storage mechanism respond to a network glitch to one or more of the
> servers?  Or just a miscalculation in terms of how much data you throw at
> it?
>
> --
>  Les Mikesell
>   lesmikes...@gmail.com
>
>
>


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-14 Thread moses wejuli
there seems to be a general phobia out there of storing sessions in the DB
-- i know this because i had it once, but overcame it by realizing (on my own
and through this forum) that we really should use memcached for what it's
good at and NOT as a persistent data store.

unless you have abysmally poor server specs, i think you shouldn't worry
about performance issues with DB-based session handling -- enhanced with
memcached!

On 14 March 2010 13:27, Martin Grotzke wrote:

> Yes, what you described is similar to the situation with the sessions not
> updated in memcached as they were only read by the application issue.
>
> Still, I think if there's enough memory for all active sessions only
> sessions should be dropped that are in fact expired. For this a simplified
> slab configuration (one slab for all sessions) would be helpful AFAICS.
>
> Cheers,
> Martin
>
>
> On Sun, Mar 14, 2010 at 8:54 AM, Peter J. Holzer  wrote:
>
>> On 2010-03-12 17:07:25 -0800, dormando wrote:
>> > Now, it should be obvious that if a user session has reached a point
>> > where it would be evicted early, it is because you did not have enough
>> > memory to store *all active sessions anyway*. The odds of it evicting
>> > someone who has visited your site *after* me are highly unlikely. The
>> > longer I stay off the site, the higher the odds of it being evicted
>> > early due to lack of memory.
>> >
>> > This does mean, by way of painfully describing how an LRU works, that
>> > the odds of you finding sessions in memcached which have not been
>> > expired, but are being evicted from the LRU earlier than expired
>> > sessions, is very unlikely.
>> [...]
>> > The caveat is that memcached has one LRU per slab class.
>> >
>> > So, lets say your traffic ends up looking like:
>> >
>> > - For the first 10,000 sessions, they are all 200 kilobytes. This ends
>> > up having memcached allocate all of its slab memory toward something
>> > that will fit 200k items.
>> > - You get linked from the frontpage of digg.com and suddenly you have a
>> > bunch of n00bass users hitting your site. They have smaller sessions
>> > since they are newbies. 10k items.
>> > - Memcached has only reserved 1 megabyte toward 10k items. So now all
>> > of your newbies share a 1 megabyte store for sessions, instead of 200
>> > megabytes.
>>
>> There's another caveat (I think Martin may have been referring to this
>> scenario, but he wasn't very clear):
>>
>> Suppose you have two kinds of entries in your memcached, with different
>> expire times. For example, in addition to your sessions with 3600s, you
>> have some alert box with an expiry time of 60s. By chance, both items are
>> approximately the same size and occupy the same slab class(es).
>>
>> You have enough memory to keep all sessions for 3600 seconds and enough
>> memory to keep all alert boxes for 60 seconds. But you don't have enough
>> memory to keep all alert boxes for 3600 seconds (why should you, they
>> expire after 60 seconds).
>>
>> Now, when you walk the LRU chain, the search for expired items will only
>> return expired alert boxes which are about as old as your oldest session.
>> As soon as there are 50 (not yet expired) sessions older than the oldest
>> (expired) alert box, you will evict a session although you still have a
>> lot of expired alert boxes which you could reuse.
>>
>> The only workaround for this problem I can see is to use different
>> memcached servers for items of (wildly) different expiration times.
>>
>> > However the slab out of balance thing is a real fault of ours. It's a
>> > project on my plate to have automated slab rebalancing done in some
>> > usable fashion within the next several weeks. This means that if a slab
>> > is out of memory and under pressure, memcached will decide if it can
>> > pull memory from another slab class to satisfy that need. As the size
>> > of your items change over time, it will thus try to compensate.
>>
>> That's good to hear.
>>
>>hp
>>
>> --
>>   _  | Peter J. Holzer| Openmoko has already embedded
>> |_|_) | Sysadmin WSR   | voting system.
>> | |   | h...@hjp.at | Named "If you want it -- write it"
>> __/   | http://www.hjp.at/ |  -- Ilja O. on commun...@lists.openmoko.org
>>
>>
>>
>
>
> --
> Martin Grotzke
> http://www.javakaffee.de/blog/
>
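
A minimal sketch of the workaround Peter mentions -- pointing items with
wildly different expiration times at separate memcached pools so they never
share an LRU -- assuming the pecl/memcached client; the addresses, key names
and TTLs are placeholders:

    <?php
    // Two independent client instances, each with its own set of servers, so
    // short-lived items never compete with long-lived sessions in the same LRU.
    $sessions = new Memcached();
    $sessions->addServer('10.0.0.10', 11211);   // pool reserved for sessions (~3600s items)

    $volatile = new Memcached();
    $volatile->addServer('10.0.0.20', 11211);   // pool for short-lived items (~60s items)

    $sessions->set('sess:abc123', serialize(array('user_id' => 42)), 3600);
    $volatile->set('alertbox:frontpage', '<div>Maintenance at 22:00 UTC</div>', 60);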


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-27 Thread moses wejuli
thnx

On 27 March 2010 13:50, Martin Grotzke wrote:

> Great, thanx!
>
> Cheers,
> Martin
>
>
> On Mon, Mar 22, 2010 at 6:04 PM, Adam Lee  wrote:
> > On Sat, Mar 20, 2010 at 7:31 PM, Martin Grotzke
> >  wrote:
> >> Ok, thanx for sharing your experience. Do you have some app online
> >> implemented like this I can have a look at?
> >
> > http://www.fotolog.com/
> >
> > --
> > awl
> >
> >
>
>
>
> --
> Martin Grotzke
> http://www.javakaffee.de/blog/
>
>



Re: PostgreSQL, Caching and replication

2010-06-10 Thread moses wejuli
thanks but i'm unavailable at the moment.

On 10 June 2010 09:45, Simon Riggs  wrote:

>
> fyi, PostgreSQL CHAR(10) conference has detailed coverage of
> * PostgreSQL and memcache integration
> * Latest PostgreSQL 9.0 replication features
> * Slony and pgpool updates
> * other technologies for clustering and cloud computing
>
> CHAR(10) is being held in Oxford, UK on July 2-3.
>
> You can register and/or pay online at
> http://www.char10.org/
>
> Please register in next two weeks to avoid late booking fees.
>
> See you there!
>
> --
>  Simon Riggs   www.2ndQuadrant.com
>  PostgreSQL Development, 24x7 Support, Training and Services
>
>


Re: Help Needed - Rails Cache - Time Based

2010-08-27 Thread moses wejuli
thanks but i'm unavailable at the moment.

On 11 July 2010 10:43, Ajmal  wrote:

> Hello Sir,
>
> We are from India a small Web development company... I was search in
> the web for time based caching concept in rails.
>
> We have a small issue...
>
> what we are having is a home page with few iframes(with stock data
> from other websites)
>
> the loading time is high since data is got from other websites and
> displayed.
>
> We would like to cache the page as a pure html and display every say 5
> min
>
> ie the the rails home page is cached and stored as html so the output
> for normal users will be a plain html page which will be real fast.
>
> and every 5 min a new cache page is created deleting the old page...
>
> Could you help us and give an idea how we can achieve them..
>
> Thanks


Re: using memcached for php session management - high load site

2010-10-11 Thread moses wejuli
memcache memcached connection connections

On 8 October 2010 05:32, Brian Moon  wrote:

> Read this and make sure it fits your use case.
> http://code.google.com/p/memcached/wiki/NewProgrammingFAQ#Why_is_memcached_not_recommended_for_sessions?_Everyone_does_it!
>
>
>  how many concurrent connections can memcached handle? how do i check
>> to see if in fact this is the bottleneck?
>>
>
> my memcached servers handle 8k connections each. I know of others that have
> blown this number away.  listen_disabled_num tells you how many times you
> have hit your connection limit.
>
>
>  any other information regarding what settings / stat values to look
>> for would be helpful.
>>
>
> evictions and measuring hits vs. misses will tell you if your cache is
> being used efficiently. There are a ton of other stats. Use something like
> Cacti to graph the data. There are a couple of sets of templates already
> built for Cacti. See http://code.google.com/p/memcached/wiki/MonitorLinks
>
>
>  I'm considering using a memcaced server for each php app server but
>> then it wouldn't be a centralized situation.
>>
>
> I personally run memcached on every web server I have. Read
> http://code.google.com/p/memcached/wiki/NewConfiguringClient
>
> --
>
> Brian.
> 
> http://brian.moonspot.net/
>
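
To watch the counters Brian mentions from PHP, something along these lines
works with the pecl/memcached client (the server address is a placeholder,
and which stat fields appear depends on the client and server versions --
the raw "stats" command over telnet shows the same numbers):

    <?php
    // Pull a few health counters out of each server's stats output.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    foreach ($mc->getStats() as $server => $stats) {
        $hits   = isset($stats['get_hits']) ? $stats['get_hits'] : 0;
        $misses = isset($stats['get_misses']) ? $stats['get_misses'] : 0;
        $ratio  = ($hits + $misses) > 0 ? $hits / ($hits + $misses) : 0;

        printf("%s: hit ratio %.2f, evictions %s, listen_disabled_num %s\n",
            $server,
            $ratio,
            isset($stats['evictions']) ? $stats['evictions'] : 'n/a',
            isset($stats['listen_disabled_num']) ? $stats['listen_disabled_num'] : 'n/a');
    }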


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-13 Thread moses wejuli
... or you could use a concatenation of your server ID/timestamp/query/unique
client variable(s)/session etc. (all hashed) as part of your (hashed) key...
there are countless ways to make your key unique... even in your situation!

On 13 October 2010 19:11, Adam Lee  wrote:

> Yeah, we also have used this as a sort of crude locking mechanism on a site
> under fairly heavy load and have never seen any sort of inconsistency-- as
> dormando said, I'd make sure your configuration is correct.  Debug and make
> sure that they're both indeed setting it on the same server.  Or, if that's
> not possible, whip up a small script that iterates through all of your
> servers and see if the key exists on multiple servers.
>
> On Wed, Oct 13, 2010 at 1:47 PM, dormando  wrote:
>
>> > Hi everyone,
>> >
>> > we have the following situation: due to massive simultaneous inserts
>> > in mysql on possibly identical primary keys, we use the atomic
>> > memcache add() as a semaphore. In a few cases we observed the
>> > behaviour, that two simultaneous add() using the same key from
>> > different clients both returned true (due to consistent hashing the
>> > key has to be on the same machine).
>> >
>> > Is it now possible, that the multithreaded memcached does return true
>> > on two concurrent add() on the same key, if the requests are handled
>> > by two different threads on the same machine?
>>
>> It should not be possible, no. Be sure you've disabled the client
>> "failover" code.
>>
>
>
>
> --
> awl
>
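
For reference, the add()-as-a-lock pattern under discussion looks roughly
like this with the pecl/memcached client; the key name, TTL and fallback
behaviour are only illustrative:

    <?php
    // add() succeeds for exactly one caller while the key is absent, so it can
    // act as a crude lock around a critical section (here, a guarded insert).
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $lockKey = 'lock:orders:12345';

    if ($mc->add($lockKey, getmypid(), 30)) {   // 30s TTL: a crashed worker cannot hold the lock forever
        // ... perform the insert that must not run concurrently ...
        $mc->delete($lockKey);                  // release the lock when done
    } else {
        // another client holds the lock: back off and retry, or give up
    }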


Re: Replication ?

2011-03-04 Thread moses wejuli
...or better still, when one of your cache servers goes down, you hit the
database (or the other cache servers) until the broken one is fixed!

On 4 March 2011 01:42, dormando  wrote:

> > Hi all,
> > I know I'll get blasted for not googling enough, but I have a quick
> question.
> >
> > I was under the impression memcached servers replicated data, such that
> if i have 2 servers and one machine goes down the data would all still be
> > available on the other machine.  this with the understanding that some
> data may not yet have been replicated as replication isn't instantaneous.
> >
> > Can you clarify for me?
> >
> > thx,
> >
> > -nathan
>
> I sound like a broken record about this, but I like restating things
> nobody cares about;
>
> - memcached doesn't do replication by default
> - because not replicating your cache gives you 2x cache space
> - and when you have 10 memcached servers and one fails...
> - ... you get some 10% miss rate.
> - and may cache 2x more crap in the meantime.
>
> if your workload really requires cache data never disappear, you're
> looking more for a database (mysql, NoSQL, or otherwise).
>
> the original point (and something I still see as a feature) is the ability
> to elastically add/remove cache space in front of things which don't scale
> as well or take too much time to process.
>
> For everything else there's
> mastercard^Wredis^Wmembase^Wcassandra^Wsomeotherproduct
>
> -Dormando


Re: Replication ?

2011-03-04 Thread moses wejuli
guys, the creators of this much loved tool -- viz. memcached -- designed it
with one goal in mind: CACHING!!

using sessions with memcache only makes sense from a CACHING standpoint, i.e.
cache the session values in your memcache server and, if the caching fails
for one reason or another, hit your permanent storage system: RDBMS or
NoSQL... obviously, your caching server specs (and supporting environment
like interconnect fabrics, network bandwidth, server durability, etc.) should
reflect your user load + the importance of the data, among other factors.. i
generally use memcache (+ PHP) out of the box with this in mind and have
never found any earth-moving issues... for sessions particularly, i never
found any issues.

I think it's vitally important to keep in mind what memcache is for ... a
CACHING TOOL.. and not a permanent storage system (also it's a Friday
evening here in England so please excuse the language.. and any typos ;) )

Moses.

On 4 March 2011 23:38, Dustin  wrote:

>
> On Mar 4, 9:11 am, Nathan Nobbe  wrote:
>
> > i know its OT, but .. thoughts? :)
>
>   He captured thoughts about this a while back in a blog post that's
> worth a read either way:
>
>  http://dormando.livejournal.com/495593.html
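
A bare-bones sketch of the pattern moses describes -- treat memcached
strictly as a cache and fall through to the permanent store on any miss --
assuming the pecl/memcached client; the helper name, TTL and loader callback
are placeholders:

    <?php
    // Generic read-through helper: try the cache first; on any miss (eviction,
    // expiry, or a dead node) rebuild the value from the permanent store.
    function cache_get_or_load(Memcached $mc, $key, $ttl, $loadFromStore) {
        $value = $mc->get($key);
        if ($value === false && $mc->getResultCode() !== Memcached::RES_SUCCESS) {
            $value = $loadFromStore();       // hit the RDBMS / NoSQL store instead
            $mc->set($key, $value, $ttl);    // best effort; ignore failures
        }
        return $value;
    }

    // Usage ($db and $id come from elsewhere in the application):
    // $user = cache_get_or_load($mc, "user:$id", 600, function () use ($db, $id) {
    //     $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
    //     $stmt->execute(array($id));
    //     return $stmt->fetch(PDO::FETCH_ASSOC);
    // });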


Re: Drupal

2011-04-15 Thread moses wejuli
thanks but i'm unavailable at the moment.

On 14 April 2011 19:38, Brian Moon  wrote:

> There is nothing about memcached or PHP that requires this. However, Drupal
> may have some limitations there. I know they tie things to versions of PHP
> in really horrible, dumb ways.
>
> Brian.
> http://brian.moonspot.net
>
>
> On 4/14/11 1:26 PM, Brad Fisher wrote:
>
>> I have a server admin that is tellling me that for memcached to work
>> with drupal that php 5.3 needs to be present. Currently I am running
>> php 5.2.17 and I don't want to upgrade. Is he correct?
>>
>


recommended maximum number of nodes in memcache cluster (server pool)

2011-11-25 Thread moses wejuli
hi guys,

not sure if this has been asked (and answered) before, but thought i might
ask away anyway...

what would be the recommended maximum number of nodes in a memcached server
pool (cluster)? i'm thinking you cannot go on indefinitely adding nodes
without some sort of performance penalty -- a 100-node homogeneous cluster
will probably hash faster than a 2000-node homogeneous cluster, with
additional network issues thrown in for good measure?

any pointers would be very helpful!

oh, and what would be the optimal node specs in such a case (particularly
CPU cores)?

thanks,

-m.


Re: recommended maximum number of nodes in memcache cluster (server pool)

2011-11-26 Thread moses wejuli
Thanks Henrik. Thanks Arjen.

On 26 November 2011 13:15, Arjen van der Meijden  wrote:

> Wouldn't more servers become increasingly (seen from the application)
> slower as you force your clients to connect to more servers?
>
> Assuming all machines have enough processing power and network bandwidth,
> I'd expect performance of the last of these variants to be best:
> 16x  1GB machines
>  8x  2GB machines
>  4x  4GB machines
>  2x  8GB machines
>  1x 16GB machines
>
> In the first one you may end up with 16 different tcp/ip-connections per
> client. Obviously, connection pooling and proxies can alleviate some of
> that overhead. Still, a multi-get might actually hit all 16 servers.
>
> Obviously, the last variant offers much lower availability.
>
> Best regards,
>
> Arjen
>
>
> On 26-11-2011 12:47 Henrik Schröder wrote:
>
>> The only limits are when you've saturated your internal network or hit
>> the max number of TCP connections that your underlying OS can handle.
>> The amount of nodes make absolutely no difference.
>>
>> Yes, part of the server selection algorithm gets slower the more nodes
>> you have, but that part is insignificant compared to the part where you
>> actually compute the hash for each key, and that in turn is
>> insignificant compared to the time it takes to talk to a server over the
>> network, so in effect there is no maximum amount of nodes.
>>
>> The memcached server itself consumes very little CPU, don't worry about
>> that. In the typical case you don't build a separate cluster for that,
>> you just use whatever servers you already have that have some spare RAM.
>>
>>
>> /Henrik
>>
>> On Sat, Nov 26, 2011 at 06:05, moses wejuli  wrote:
>>
>>hi guys,
>>
>>not sure if this has been asked (and answered) before, but thought i
>>might ask away anyway...
>>
>>what would be the recommended maximum number of nodes in a memcached
>>server pool (cluster) ...? am thinking u cannot go on indefinitely
>>adding nodes without some sort of performance penalty  -- a 100-node
>>homogeneous cluster will probably hash faster than a 2000-node
>>homogeneous cluster??! with additional network issues for good
>>measure??
>>
>>any pointers would be very helpful!!
>>
>>oh, and what wud be the optimal node specs in such a case
>>(particularly CPU cores)?
>>
>>thanks,
>>
>>-m.
>>
>>
>>
>>


Re: recommended maximum number of nodes in memcache cluster (server pool)

2011-11-26 Thread moses wejuli
@Les, you make a clear and concise point. thnx.

In this thread, i'm really keen on exploring a theoretical possibility (one
that could become very practical for very large installations):

-- at what node count (for a given pool) might we start to experience
performance problems (server, network or even client), assuming a near
perfect hardware/network set-up?
-- if a memcached client were to pool say, 2,000 or 20,000 connections
(again, theoretical but not entirely impractical given the rate of internet
growth), would that not inject enough overhead -- connection or otherwise --
on the client side to, say, warrant a direct fetch from the database instead?
in such a case, we would have established a *theoretical* maximum number of
nodes in a pool for that given client in near perfect conditions.
-- also, i would think the hashing algorithm would deteriorate after a given
number of nodes.. admittedly, that number could be very large indeed, and i
know this is unlikely in probably 99.999% of cases, but it would be great to
factor in the maths behind it.

Just saying

-m.

On 26 November 2011 18:28, Les Mikesell  wrote:

> On Sat, Nov 26, 2011 at 7:15 AM, Arjen van der Meijden  wrote:
> > Wouldn't more servers become increasingly (seen from the application)
> > slower as you force your clients to connect to more servers?
> >
> > Assuming all machines have enough processing power and network bandwidth,
> > I'd expect performance of the last of these variants to be best:
> > 16x  1GB machines
> >  8x  2GB machines
> >  4x  4GB machines
> >  2x  8GB machines
> >  1x 16GB machines
> >
> > In the first one you may end up with 16 different tcp/ip-connections per
> > client. Obviously, connection pooling and proxies can alleviate some of
> > that overhead. Still, a multi-get might actually hit all 16 servers.
>
> That doesn't make sense.  Why would you expect 16 servers acting in
> parallel to be slower than a single server?  And in many/most cases
> the application will also be spread over multiple servers so the load
> is distributed independently there as well.
>
> --
>   Les Mikesell
> lesmikes...@gmail.com
>


Re: recommended maximum number of nodes in memcache cluster (server pool)

2011-11-26 Thread moses wejuli
@dormando, great response -- this is almost exactly what i had in mind, i.e.
grouping all of your memcached servers into logical pools so as to avoid
hitting all of them for every request. in fact, a reasonable design for a
very large server installation would be to aim to hit say 10-20% of the
nodes for any one request (or even less if you can manage it).

so with the facebook example, we know there's a point where a high node
count means all sorts of problems -- in this case it was 800, i think
(correct me if i'm wrong) -- proving the point that logical groupings should
be the way to go for large pools. in fact i would suggest groupings with
varying levels of granularity, as long as your app keeps a simple and clear
means by which to zap down to a small cross section of servers without
losing any intended benefits of caching in the first place..

in short, most of my anxieties have been well addressed (both theoretical
and practical)...

+1 for posting this in a wiki Dormando.

thanks @dormando @henrik @les (oh, and @arjen)

-m.

On 26 November 2011 22:34, dormando  wrote:

> > @Les, you make a clear and concise point. thnx.
> >
> > In this thread, i'm really keen on exploring a theoretical possibility
> > (that could become very practical for very large installations):
> >
> > -- at what node count (for a given pool) may/could we start to
> > experience problems related to performance (server, network or even
> > client) assuming a near perfect hardware/network set-up?
> I think the really basic theoretical response is:
>
> - If your request will easily fit in the TCP send buffer and immediately
> transfer out the network card, it's best if it hits a single server.
> - If your requests are large, you can get lower latency responses by not
> waiting on the TCP socket.
> - Then there's some fiddling in the middle.
> - Each time a client runs "send" that's a syscall, so more do suck, but
> keep in mind the above tradeoff: A little system cpu time vs waiting for
> TCP Ack's.
>
> In reality it doesn't tend to matter that much. The point of my response
> to the facebook "multiget hole" is that you can tell clients to group keys
> to specific or subsets of servers, (like all keys related to a particular
> user), so you can have a massive pool and still generally avoid contacting
> all of them on every request.
>
> > -- if a memcacached client were to pool say, 2,000 or 20,000
> > connections (again, theoretical but not entirely impractical given the
> > rate of internet growth), wud that not inject enough overhead --
> > connection or otherwise -- on the client side to, say, warrant a direct
> > fetch from the database? in such a case, we wud have established a
> > *theoretical* maximum number nodes in a pool for that given client in
> > near perfect conditions.
>
> The theory depends on your setup, of course:
>
> - Accessing the server hash takes no time (it's a hash), calculating it
> is the time consuming one. We've seen clients misbehave and seriously slow
> things down by recalculating a consistent hash on every request. So long
> as you're internally caching the continuum the lookups are free.
>
> - Established TCP sockets mostly just waste RAM, but don't generally slow
> things down. So for a client server, you can calculate the # of memcached
> instances * number of apache procs or whatever * the amount of memory
> overhead per TCP socket compared to the amount of RAM in the box and
> there's your limit. If you're using persistent connections.
>
> - If you decide to not use persistent connections, and design your
> application so satisfying a page read would hit at *most* something like 3
> memcached instances, you can go much higher. Tune the servers for
> TIME_WAIT reuse, higher local ports, etc, which deals with the TCP churn.
> Connections are established on first use, then reused until the end of the
> request, so the TCP SYN/ACK cycle for 1-3 (or even more) instances won't
> add up to much. Pretending you can have an infinite number of servers on
> the same L2 segment you would likely be limited purely by bandwidth, or
> the amount of memory required to load the consistent hash for clients.
> Probably tens of thousands.
>
> - Or use UDP, if your data is tiny and you tune the fuck out of it.
> Typically it doesn't seem to be much faster, but I may get a boost out of
> it with some new linux syscalls.
>
> - Or (Matt/Dustin correct me if I'm wrong) you use a client design like
> spymemcached. The memcached binary protocol can actually allow many client
> instances to use the same server connections. Each client stacks commands
> in the TCP sockets like a queue (you could even theoretically add a few
> more connections if you block too long waiting for space), then they get
> responses routed to them off the same socket. This means you can use
> persistent connections, and generally have one socket per server instance
> for an entire app server. Many thousands should scale okay.
>
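
One concrete way to get the key grouping dormando describes ("all keys
related to a particular user" landing on the same node) is the *ByKey family
of calls in the pecl/memcached client, where everything sharing a server key
is hashed to the same server; the key names and addresses are placeholders:

    <?php
    // Group all of one user's keys onto the same server by hashing on a shared
    // "server key" instead of on each individual item key.
    $mc = new Memcached();
    $mc->addServers(array(
        array('10.0.0.10', 11211),
        array('10.0.0.11', 11211),
        array('10.0.0.12', 11211),
    ));

    $serverKey = 'user:42';    // decides which node holds this user's data

    $mc->setByKey($serverKey, 'user:42:profile', array('name' => 'moses'), 3600);
    $mc->setByKey($serverKey, 'user:42:friends', array(7, 19, 23), 3600);

    // A multi-get scoped to the same server key only talks to one node,
    // instead of fanning out across the whole pool.
    $values = $mc->getMultiByKey($serverKey, array('user:42:profile', 'user:42:friends'));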

Re: (tcp 11211) failed with: Connection timed out (110)

2011-11-28 Thread moses wejuli
hi brian,

i notice this thread was last active over a year ago.. so i thought it was
about time for a recap (in light of the fact that documentation for
pecl/memcache and pecl/memcached is not always up-to-date) --

1) has the persistent connections bug in pecl/memcached been fixed, and if
so, is it now stable (i.e. would you recommend using it in a production
environment)? in what release/version?
2) in the past, you've recommended using pecl/memcache 2.2.x for persistent
connections with a workaround -- is this still the case now? is the
workaround still necessary? does that workaround now address the maxing out
of memory?
3) persistent connections are generally recommended but can cause problems
with buggy client implementations, as is evident with the php clients.. do
you still recommend them?
4) from your tests, what is the performance gap between persistent and
non-persistent connections in either client? is it large enough to warrant
trading the larger memory footprint of persistent connections for the
performance gain? put another way: would you absolutely *not* recommend
non-persistent connections from a performance standpoint?
5) lastly, and slightly off topic (apologies) but relevant: knowing as i do
that you use mysql, would you apply similar logic to connections in mysql
using PDO? feel free not to bother with this as it's not the right forum
:).. just thought i would drop it in..

cheers,

-m.

On 9 September 2010 16:48, Brian Moon  wrote:

> On 9/9/10 10:20 AM, the_fonz wrote:
>
>> We reverted back to 1.4.5. Still get continuous  timed out connections
>> with no increase in listen_disabled_num
>>
>> I read http://brian.moonspot.net/php-memcached-issues
>>
>> I see that PECL/memcached is buggy for persistent connections but am
>> wondering if I should enable persistent connections with
>> php-pecl-memcache-2.2.3, would this speed things up?
>>
>> Is it worth trying PECL/memcached without persistent connections?
>>
>
> I run PECL/memcache with persistent connections always.
>
> If you dig through the comments on my blog post there is a work around for
> the persistent connection issue in PECL/memcached. But, there are other
> issues with that library that will hopefully be fixed soon.
>
>
> --
>
> Brian.
> 
> http://brian.moonspot.net/
>
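
For context on what persistent connections look like in the two clients
being compared -- a minimal sketch, assuming pecl/memcached and
pecl/memcache 2.2.x, with placeholder addresses; whether it is advisable
depends on the bugs discussed above:

    <?php
    // pecl/memcached: a persistent_id makes the instance (and its connections)
    // survive across requests served by the same PHP worker process.
    $mcd = new Memcached('app_pool');
    if (!count($mcd->getServerList())) {     // add servers only once per worker
        $mcd->addServer('10.0.0.10', 11211);
    }

    // pecl/memcache: pconnect() keeps the socket open between requests,
    // where connect() would open and close it on every request.
    $mc = new Memcache();
    $mc->pconnect('10.0.0.10', 11211);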


Re: recommended maximum number of nodes in memcache cluster (server pool)

2011-12-08 Thread moses wejuli
i think the notes are well summarised... cheers.

On 8 December 2011 22:27, dormando  wrote:

> > @dormando, great response this is almost exctly what i had in mind,
> > i.e. grouping all of your memcached servers into logical pools so as to
> > avoid hitting all of them for every request. infact, a reasonable design,
> > for a very large server installation base, would be to aim for say 10-20%
> > node hit for every request (or even less if u can manage it).
> >
> > so with the facebook example, we know there's a point you get to where a
> > high node count means all sorts of problems, in this case, it was 800, i
> > think (correct me if am wrong) proving the point that logical groupings
> > should be the way to go for large pools -- infact i would suggest
> > groupings with varying canopies of granularity as long as your app kept
> > a simple and clear means by which to zap down to a small cross section
> > of servers without losing any intended benefits of caching in the fisrt
> > place..
> >
> > in short, most of my anxities have been well addressed (both theoretical
> > and practical)...
> >
> > +1 for posting this in a wiki Dormando.
>
> http://code.google.com/p/memcached/wiki/NewPerformance added to the bottom
> of this wiki page... tried to summarize the notes better, but speak up
> with edits if something's missing or unclear.


Re: Memcached 1.4.6 64 bit for windows

2014-10-29 Thread moses wejuli
@srividya, do you use windows in your production environment or is this
just for your local development environment?

thanks,

-moses

On 29 October 2014 14:21, sekhar mekala  wrote:

> You can download a version like *Memcached 1.4.5 Windows (64-bit)*.
>
> See the link below for more information and to download memcached-amd64.
>
>
> http://blog.elijaa.org/index.php?post/2010/08/25/Memcached-1.4.5-for-Windows
>
> Regards, Sekhar
>
>
> On Monday, July 18, 2011 3:55:08 PM UTC+5:30, srividhya wrote:
>>
>> Hi,
>>
>> I am planning to use memcached for my development in Windows 64 bit
>> environment.
>> Can someone give me pointers on where to find the latest windows
>> binaries.
>> Is there any plan for an official memcached binary relase in the near
>> future.
>>
>> Appreciate any pointer or help.
>>
>> Thanks.
>> Srividhya
>>



Re: Memcached user warnings

2009-10-06 Thread moses wejuli
+1 here...

2009/10/6 Clint Webb 

> +1 too
>
>
> On Tue, Oct 6, 2009 at 4:15 PM, Trond Norbye  wrote:
>
>>
>> Toru Maesaka wrote:
>>
>>> +1
>>>
>>> Warnings are good.
>>>
>>>
>> as long as they doesn't come from your code ;-)
>>
>> Cheers,
>>
>> Trond
>>
>>
>
>
> --
> "Be excellent to each other"
>


Re: memcached failover solution?

2009-10-17 Thread moses wejuli
Hi Henrik,

When you say:

"If your application requires that all memcached instances are up
and running all the time, it's pretty likely that you are doing something
wrong, that you are using memcached in a way it was not intended for."

Does this mean you cannot use memcache for session management in PHP,
because then you are relying on it for a key facet of your application --
one you cannot do without?

Please advise... Adi's concerns particularly resonate with mine.

Cheers.



2009/10/16 Henrik Schröder 

> Hi Adi,
>
> Why do you want failover? It's just a cache, so your application should be
> able to run ok if part of the cache cluster is unavailable, you would
> experience a slightly higher cache miss ratio.
>
> If your application requires that all memcached instances are up and
> running all the time, it's pretty likely that you are doing something wrong,
> that you are using memcached in a way it was not intended for. Please tell
> us a bit about your application and we can probably help you.
>
> That said, the BeITMemcached client supports consistent hashing, but there
> is no automatic failover, and no automatic recovery from failover, because
> those features would in the absolute majority of cases only cause subtle
> errors in the application, without offering any benefits whatsoever.
>
>
> /Henrik
>
>
> On Fri, Oct 16, 2009 at 13:55, Adi  wrote:
>
>>
>> Hi,
>> I am using memcached on windows server 2003 in a web cluster
>> environment setup through NLB (Network load balancer) and using two
>> different memcached server for caching, using BeIT Memcached client.
>>
>> I want to know if memcached doesn't provide failover explicitly than
>> is there any way to handle it?
>>
>> I had also check out the faqs, let me know please which windows client
>> provide consistent hashing if one memcached node is dead? Can i handle
>> it manually buy identifying the dead node and remove it from the
>> server list.
>>
>> Regards
>> Adeel Ahmed
>
>
>


Re: memcached failover solution?

2009-10-17 Thread moses wejuli
Thanks Dustin.

I guess what i'm really asking is: would you recommend using memcached for
session management in PHP? The PHP memcache extension has a facility for
managing sessions, and this behaviour (using memcache for session
management) can be turned on in the PHP ini file.

I know/believe C is your primary language of expression, but if you are at
all familiar with PHP, please let me know your thoughts. Sessions are pretty
non-trivial in PHP. I would presume that in case of a cache miss, PHP would
fall back to the default session store: the filesystem!

Really looking for someone particularly adept with this topic to shed some
much needed light on this...

Cheers all,

M/.

2009/10/18 Dustin 

>
>
> On Oct 17, 3:49 pm, moses wejuli  wrote:
>
> > Does this mean you cannot use memcache for session management in PHP, coz
> > then you are relying on it for a key, facet of your application -- one
> > you cannot do without!!
>
>   I recommend reading this:  http://dormando.livejournal.com/495593.html
>
>  In summary: memcached is a cache -- it provides fast access to data
> that may be stored in your primary (presumably slower) data store.  If
> you're using it for your primary data store, you are likely asking for
> trouble.
>
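
The php.ini switch being referred to (for the pecl/memcache extension; the
host and port are placeholders) can also be set at runtime, roughly like
this -- with the caveat from the article that memcached should then be
treated as a cache in front of a durable session store, not as the store
itself:

    <?php
    // Point PHP's native session machinery at memcached instead of the
    // filesystem. Equivalent php.ini lines:
    //   session.save_handler = memcache
    //   session.save_path    = "tcp://127.0.0.1:11211"
    ini_set('session.save_handler', 'memcache');
    ini_set('session.save_path', 'tcp://127.0.0.1:11211');

    session_start();
    $_SESSION['user_id'] = 42;   // now stored in memcached under the session id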


Re: memcached failover solution?

2009-10-17 Thread moses wejuli
the article is very good, incisive and convincing -- in summary, it says to
treat sessions like any other data: stored in the DB and cached in memcache,
and so on.

i suppose i'll have to implement it first before i can come up with better
questions -- for the time being though, many thanks Dustin.

M/.

2009/10/18 Dustin 

>
>
> On Oct 17, 4:22 pm, moses wejuli  wrote:
> > Thanks Dustin.
> >
> > I guess what i'm really askin is: would you recommend using memcached for
> > session management in PHP.. the PHP extension for memcache has got a
> > facility for manging sessions. This behaviour (using memcache for session
> > mgmt) can be turned on in the PHP ini file.
> >
> > I know/believe C is your primary language of expression but if you are at
> > all familiar with PHP, please let me know your thoughts. Sessions are
> > pretty non-trivial in PHP. I would presume in case of a cache miss, PHP
> > would look to the default session store: the filesystem!
> >
> > Really looking for someone particularly adept with this topic to shed
> > some much needed light on this...
>
>   This isn't a language issue.  It's about deciding how you want your
> application to behave.
>
>  The article describes a lot of the trade offs.  Was there a specific
> part you disagreed with?
>
>  I would, in particular, *not* recommend filesystem based storage
> unless it's well-abstracted and you've only got one web server.  In
> general, I like to pretend like filesystems don't exist when writing
> application code.