Re: CAS slows down dramatically for larger records?

2013-08-20 Thread Marc Bollinger
Jumbo frames FTW

- Marc


On Tue, Aug 20, 2013 at 8:34 PM, Henrik Schröder  wrote:

> 1400 bytes (plus overhead) sounds suspiciously close to the very common
> 1500 bytes MTU setting, so something weird probably happened when you went
> from one packet to two in that specific environment.
>
>
> /Henrik
>
>
> On Tue, Aug 13, 2013 at 12:06 PM, Karlis Zigurs wrote:
>
>> Hello,
>>
>> Never mind - must have been some interplay between the network and VM.
>> Once memcached was deployed on a physical box (an MBAir, in fact) it
>> works a treat, keeping 3-4 ms response times when CAS'ing 10k records
>> over local physical network.
>>
>> Regards,
>> Karlis
>>
>>
>> On Tue, Aug 13, 2013 at 2:49 PM, Karlis Zigurs 
>> wrote:
>> > Hello,
>> >
>> > I am currently playing around with memcached and have noted rather
>> > worrying behaviour around using CAS when the stored record starts to
>> > exceed circa 1400 bytes: while performing the CAS operations from a
>> > single-threaded java client (netspy 2.9.1), anything that exceeds the
>> > size threshold suddenly raises the response time from circa 2-3ms to
>> > circa 300-400ms (with no linear increase in between; in fact it appears
>> > that I get an extra 300ms for every 1400 bytes afterwards).
>> > I have noted a couple of references on the web referring to a possible
>> > UDP-related limit, but I would never expect such a drastic increase
>> > even if the protocol is doing full round trips.
>> >
>> > Version: 1.4.15 (built from scratch on Centos 5 running in VM)
>> > Command line# memcached -vv -u nobody -m 256 -U 0 -p 11211 -l 192.168.x.xxx
>> > Client: netspy java lib, 2.9.1 (single threaded test harness)
>> >
>> > Is this something that is inherent in the current implementation (has
>> > anybody else noticed similar behaviour), or should I proceed with
>> > firing up wireshark and start investigating wire/environment issues?
>> > Possibly some build flags I should be aware of?
>> >
>> > CAS itself is perfect for the use case (managing the occasional
>> > addition/removal from a master list that further points to a large
>> > number of client-group-specific records - treating the core list as a
>> > low-contention lock with perhaps < 5 write operations per second
>> > expected, while the rest of the system would be handling 10k+
>> > reads/writes distributed across the whole estate).
>> >
>> > Regards,
>> > Karlis
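For what it's worth, the ~1400-byte threshold Karlis observed sits right where a request stops fitting in a single Ethernet frame. A back-of-the-envelope sketch; all the numbers here (1500-byte MTU, 40 bytes of IP/TCP headers, roughly 60 bytes of command framing) are illustrative assumptions, not measurements from this thread:

```python
import math

MTU = 1500                   # common Ethernet default
IP_TCP_HEADERS = 40          # IPv4 + TCP with no options
MSS = MTU - IP_TCP_HEADERS   # 1460 bytes of payload per segment

def segments_for_cas(value_size, framing=60):
    """Segments needed to carry one ascii cas command (line + value + CRLF)."""
    wire_bytes = framing + value_size + 2
    return math.ceil(wire_bytes / MSS)

# Around ~1400 bytes of value, the request tips from one segment into two.
for size in (1200, 1398, 1500, 2800):
    print(size, segments_for_cas(size))
```

The jump from one segment to two should normally cost microseconds, not 300ms; a step that large usually points at something in the path (VM network stack, delayed ACKs, retransmits), which is consistent with the problem disappearing on physical hardware.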
>> >
>> > --
>> >
>> > ---
>> > You received this message because you are subscribed to a topic in the
>> > Google Groups "memcached" group.
>> > To unsubscribe from this topic, visit
>> > https://groups.google.com/d/topic/memcached/zdO2Av4Oj84/unsubscribe.
>> > To unsubscribe from this group and all its topics, send an email to
>> > memcached+unsubscr...@googlegroups.com.
>> > For more options, visit https://groups.google.com/groups/opt_out.
>> >
>> >
>>
>>



Re: How to determine if memcache is full

2011-05-30 Thread Marc Bollinger
> In fact I'm using the latest stable version from the Debian repositories
> (http://packages.debian.org/lenny/memcached).

Then also upgrade your OS to Squeeze (latest stable) :)
http://packages.debian.org/squeeze/memcached

- Marc

On Mon, May 30, 2011 at 6:12 AM, Eduardo Silvestre  wrote:
> Hello Dormando,
>
>  thanks for your feedback. Why will it no longer accept new connections?
> Can I determine the cause based on stats?
>
> I'm collecting data from memcache with cacti templates, and can't find any
> reason for this situation.
>
> Best Regards,
>
> On Mon, May 30, 2011 at 8:12 AM, dormando  wrote:
>>
>> > Hello everyone,
>> >
>> > every week my memcached server stops accepting new connections. Today,
>> > before restarting the daemon, I checked stats.
>> >
>> > stats
>> > STAT pid 30026
>> > STAT uptime 938964
>> > STAT time 1306667508
>> > STAT version 1.2.2
>>
>> Please upgrade to a newer version :) That one has grown a lot of hair;
>> memcached does not stop accepting connections when it gets full. It's
>> supposed to do more useful things instead.
>>
>> -Dormando
>
>


Re: What's new in memcached (part 2)

2011-04-11 Thread Marc Bollinger
I was actually going to ask if this was a draft of a changelog going on
github/memcache.org/etc., because it already seems too well-formatted to
be a one-off email, and it would be useful to point others to without
forwarding.

- Marc

On Mon, Apr 11, 2011 at 3:30 PM, Adam Lee  wrote:

> is there somewhere i can copy edit this document?
>
> a bit nitpicky, i know, but i found a few mistakes just while browsing
> it... section 2.1 both "suites" should be "suits," section 3.4 "it's" should
> be "its," etc.
>
> awl
> On Apr 11, 2011 3:05 PM, "Trond Norbye"  wrote:
> > What's new in memcached
> > ===
> >
> > (part two - new feature proposals)
> >
> > Table of Contents
> > =
> > 1 Protocol
> > 1.1 Virtual buckets!
> > 1.2 TAP
> > 1.3 New commands
> > 1.3.1 VERBOSITY
> > 1.3.2 TOUCH, GAT and GATQ
> > 1.3.3 SET_VBUCKET, GET_VBUCKET, DEL_VBUCKET
> > 1.3.4 TAP_CONNECT
> > 1.3.5 TAP_MUTATION, TAP_DELETE, TAP_FLUSH
> > 1.3.6 TAP_OPAQUE
> > 1.3.7 TAP_VBUCKET_SET
> > 1.3.8 TAP_CHECKPOINT_START and TAP_CHECKPOINT_END
> > 2 Modularity
> > 2.1 Engines
> > 2.2 Extensions
> > 2.2.1 Logger
> > 2.2.2 Daemon
> > 2.2.3 ASCII commands
> > 3 New stats
> > 3.1 Stats returned by the default stats command
> > 3.1.1 libevent
> > 3.1.2 rejected_conns
> > 3.1.3 stats related to TAP
> > 3.2 topkeys
> > 3.3 aggregate
> > 3.4 settings
> > 3.4.1 extension
> > 3.4.2 topkeys
> >
> >
> > 1 Protocol
> > ~~~
> >
> > Intentionally, there is no significant difference in protocol over
> > 1.4.x. There is one minor change, but it should be transparent to
> > most users.
> >
> > 1.1 Virtual buckets!
> > =
> >
> > We don't know who originally came up with the idea, but we've heard
> > rumors that it might be Anatoly Vorobey or Brad Fitzpatrick. In lieu
> > of a full explanation on this, the concept is that instead of mapping
> > each key to a server we map it to a virtual bucket. These virtual
> > buckets are then distributed across all of the servers. To ease the
> > introduction of this we've assigned the two reserved bytes in the
> > binary protocol for specifying the vbucket id, which allowed us to
> > avoid protocol extensions.
> >
> > Note that this change should allow for complete compatibility if the
> > clients and the server are not aware of vbuckets. These should have
> > been set to 0 according to the original binary protocol specification,
> > which means that they will always use vbucket 0.
> >
> > The idea is that we can move these vbuckets between servers such that
> > you can "grow" or "shrink" your cluster without losing data in your
> > cache. The classic memcached caching engine does _not_ implement
> > support for multiple vbuckets right now, but it is on the roadmap to
> > create a version of the engine in memcached to support this (it is a
> > question of memory efficiency, and there are currently not many
> > clients that support them).
> >
> > Defining this now will allow us to start moving down the path to
> > vbuckets in the default_engine and allow other engine implementors to
> > consider vbuckets in their design.
> >
> > You can read more about the mechanics of it here:
> > [http://dustin.github.com/2010/06/29/memcached-vbuckets.html]
> >
> > However, you _cannot_ use a mix of clients that are vbucket aware and
> > clients who don't use vbuckets, but then again it doesn't make sense
> > to use a vbucket aware backend if your clients don't know how to
> > access them. This is why we believe a protocol change isn't
> > warranted.
> >
> > Defining this now will allow us to start moving down the path to
> > vbuckets in the default_engine and allow other engine implementors to
> > consider vbuckets in their design.
> >
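The key-to-vbucket-to-server mapping described in this section can be sketched in a few lines. This is a toy illustration of the concept only; the hash function, bucket count, and server names are my own assumptions, not what any real client uses:

```python
import zlib

NUM_VBUCKETS = 16   # real deployments use many more (e.g. 1024)
servers = ["cache-a:11211", "cache-b:11211"]

# vbucket -> server table. "Growing" the cluster means moving vbuckets by
# rewriting entries in this table; keys never need to be rehashed.
vbucket_map = [servers[v % len(servers)] for v in range(NUM_VBUCKETS)]

def vbucket_for(key):
    """Map a key to a stable vbucket id (carried in the binary header)."""
    return zlib.crc32(key.encode()) % NUM_VBUCKETS

def server_for(key):
    return vbucket_map[vbucket_for(key)]

print(vbucket_for("user:42"), server_for("user:42"))
```

A vbucket-unaware client effectively leaves the two reserved header bytes at 0, which is why it keeps working against a server that only has vbucket 0.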
> > 1.2 TAP
> > 
> >
> > In order to facilitate vbucket transfers, among other use cases where
> > people want to see what's inside the server, we added to the binary
> > protocol a set of commands collectively called TAP. The intention is
> > to allow "clients" to receive a stream of notifications whenever data
> > change in the server. It is solely up to the backing store to
> > implement this, so it can make decisions about what resources are used
> > to implement TAP. This functionality is commonly needed enough though
> > that the core is aware of it, leaving specific implementation to
> > engines.
> >
> > 1.3 New commands
> > =
> >
> > There are a few new commands available. The following sections
> > provides a brief description of them. Please check protocol_binary.h
> > for the implementation details.
> >
> > 1.3.1 VERBOSITY
> > 
> >
> > We did not have an equivalent of the verbosity command in the textual
> > protocol. This command allows the user to change the verbosity level
> > on a running server by using the binary protocol. Why do we need
> > this? There is a command line option you may use to disable the ascii
> > protocol, so we need this command in order to change the logging level
> > i

Re: cant install memcahed into cpanel showing error

2011-04-05 Thread Marc Bollinger
Tough night, dormando?

On Tue, Apr 5, 2011 at 8:17 PM, dormando  wrote:
> sigh.
>
> yum install --skip-broken memcached
>
> or whatever combo actually works.
>
> On Wed, 6 Apr 2011, smiling dream wrote:
>
>> can you give me the rpm for installing memcached
>>
>> On 4/6/11, dormando  wrote:
>> > What the ... urgh.
>> >
>> > I have no idea where you're getting that RPM of memcached, but it looks
>> > like the packager didn't remove the deps for the "damemtop" script I
>> > shoved in the scripts/ directory. Yum is being helpful and trying to
>> > install a ton of useless perl dependencies.
>> >
>> > If you just tell it to force the install it'll work fine (--skip-broken or
>> > whatever). I'll make sure the RPM specs we supply will not try to pull in
>> > deps for damemtop.
>> >
>> > On Mon, 4 Apr 2011, onel0ve wrote:
>> >
>> >> root@srv [~]# yum install memcached
>> >> Loaded plugins: fastestmirror
>> >> Loading mirror speeds from cached hostfile
>> >>  * addons: mirror.denit.net
>> >>  * base: mirror.denit.net
>> >>  * epel: mirrors.nl.eu.kernel.org
>> >>  * extras: mirror.denit.net
>> >>  * rpmforge: ftp-stud.fht-esslingen.de
>> >>  * updates: mirror.denit.net
>> >> Excluding Packages in global exclude list
>> >> Finished
>> >> Setting up Install Process
>> >> Resolving Dependencies
>> >> --> Running transaction check
>> >> ---> Package memcached.i386 0:1.4.5-1.el5.rf set to be updated
>> >> --> Processing Dependency: perl(AnyEvent) for package: memcached
>> >> --> Processing Dependency: perl(AnyEvent::Socket) for package:
>> >> memcached
>> >> --> Processing Dependency: perl(AnyEvent::Handle) for package:
>> >> memcached
>> >> --> Processing Dependency: libevent-1.1a.so.1 for package: memcached
>> >> --> Processing Dependency: perl(YAML) for package: memcached
>> >> --> Processing Dependency: perl(Term::ReadKey) for package: memcached
>> >> --> Running transaction check
>> >> ---> Package compat-libevent-11a.i386 0:3.2.1-1.el5.rf set to be
>> >> updated
>> >> ---> Package memcached.i386 0:1.4.5-1.el5.rf set to be updated
>> >> --> Processing Dependency: perl(AnyEvent) for package: memcached
>> >> --> Processing Dependency: perl(AnyEvent::Socket) for package:
>> >> memcached
>> >> --> Processing Dependency: perl(AnyEvent::Handle) for package:
>> >> memcached
>> >> --> Processing Dependency: perl(YAML) for package: memcached
>> >> --> Processing Dependency: perl(Term::ReadKey) for package: memcached
>> >> --> Finished Dependency Resolution
>> >> memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
>> >>   --> Missing Dependency: perl(YAML) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
>> >>   --> Missing Dependency: perl(Term::ReadKey) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
>> >>   --> Missing Dependency: perl(AnyEvent::Socket) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
>> >>   --> Missing Dependency: perl(AnyEvent::Handle) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
>> >>   --> Missing Dependency: perl(AnyEvent) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> Error: Missing Dependency: perl(Term::ReadKey) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> Error: Missing Dependency: perl(AnyEvent) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> Error: Missing Dependency: perl(YAML) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> Error: Missing Dependency: perl(AnyEvent::Handle) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >> Error: Missing Dependency: perl(AnyEvent::Socket) is needed by package
>> >> memcached-1.4.5-1.el5.rf.i386 (rpmforge)
>> >>  You could try using --skip-broken to work around the problem
>> >>  You could try running: package-cleanup --problems
>> >>                         package-cleanup --dupes
>> >>                         rpm -Va --nofiles --nodigest
>> >>
>> >>
>> >> how to fix this
>> >>
>> >
>>
>>
>> --
>> Ultimate Download Center
>>
>> www.smilng-dream.info
>>
>


Re: Open Source Projects using Memcache?

2011-03-07 Thread Marc Bollinger
"Yes, Drupal does have a memcache module, but from my own
investigation it doesn't speed things up much, if at all."

From the FAQ:
http://code.google.com/p/memcached/wiki/NewProgrammingFAQ#Can_using_memcached_make_my_application_slower?

I'm not sure how broad your investigation was, but if you're just
running Drupal on one machine, benchmarking a few hundred requests per
second, it's not likely to be much faster, no. A not-that-stressed
MySQL instance will have no problem handling basic CMS queries.
memcached was really designed to reduce unnecessary load on the
database servers, and if you're not seeing slowdown due either to
load or to complex joins/triggers/etc., you're not going to see
gains; turning on APC for Drupal would have a better yield for one
machine. Apologies if that's not the case, but it comes up
not-infrequently.

- Marc

On Mon, Mar 7, 2011 at 4:42 PM, Boris Partensky
 wrote:
> Not php, but subversion server uses memcached for some internal FSFS caching.
>
> On Mon, Mar 7, 2011 at 7:29 PM, Fuzzpault  wrote:
>> Anyone know of any open source projects (preferably in php) which use
>> memcache at its core, or that is greatly accelerated with it?
>> I've got some ideas for speeding up memcache and reducing network traffic
>> and am in need of some app to stress.  I'd be great if I had a running site
>> with actual usage data, but without it I'm forced to simulate.  More
>> specifically, changes to the client, not memcached itself.
>> Yes, Drupal does have a memcache module, but from my own investigation it
>> doesn't speed things up much, if at all.
>>
>> Thanks for your insights!
>


Re: Where do I find the invalid characters for a memcached key?

2011-02-08 Thread Marc Bollinger
Ah, awesome. Yeah, I noticed the key stuff wasn't in protocol/commands,
which is why I went digging in the first place. Good point on the
distribution though; I was staring at the document in the repo without
thinking, "this obviously _comes_ with memcached."

- Marc

On Tue, Feb 8, 2011 at 10:13 AM, dormando  wrote:

> protocol.txt also comes with the software... so technically it's
> memcached.org -> tarball -> spend 1.75 seconds doing `ls`
>
> Or typing "memcached key" into google gets you enough results.
>
> Or wiki -> protocol/commands (two clicks!) though the key stuff should be
> repeated at the top there (just fixed that). Probably should be repeated
> somewhere else too, which we'll improve next time.
>
> On Tue, 8 Feb 2011, Marc Bollinger wrote:
>
> > I understand why the 'official' protocol.txt is where it is, but to
> get there from memcached.org, you go from
> > memcached.org -> code.google.com -> github
> >
> > which just seems kind of janky.
> >
> > - Marc
> >
> > On Tue, Feb 8, 2011 at 9:55 AM, dormando  wrote:
> >   > Where do I find the invalid characters for a memcached key?
> >
> > Buried in the wiki somewhere + protocol.txt.
> >
> > in short, for ascii; no spaces or newlines.
> >
> >
> >
> >
> >
>



-- 
Marc Bollinger
mbollin...@gmail.com


Re: Where do I find the invalid characters for a memcached key?

2011-02-08 Thread Marc Bollinger
I understand why the 'official' protocol.txt is where it is, but to get
there from memcached.org, you go from
memcached.org -> code.google.com -> github

which just seems kind of janky.

- Marc

On Tue, Feb 8, 2011 at 9:55 AM, dormando  wrote:

> > Where do I find the invalid characters for a memcached key?
>
> Buried in the wiki somewhere + protocol.txt.
>
> in short, for ascii; no spaces or newlines.
>
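Putting that rule together with the 250-byte key limit from protocol.txt, a minimal client-side check might look like this (the exact printable-ASCII range is my reading of protocol.txt, not something stated in this thread):

```python
MAX_KEY_LEN = 250  # from protocol.txt

def is_valid_key(key: bytes) -> bool:
    """True if key is usable with the ascii protocol: non-empty, at most
    250 bytes, free of spaces, newlines, and other control characters."""
    if not key or len(key) > MAX_KEY_LEN:
        return False
    return all(33 <= b <= 126 for b in key)

print(is_valid_key(b"user:42:profile"),   # True
      is_valid_key(b"bad key"),           # False: space
      is_valid_key(b"bad\r\nkey"),        # False: newline
      is_valid_key(b"x" * 251))           # False: too long
```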


Re: features - interesting!!

2011-02-04 Thread Marc Bollinger
I get why you might strike out on your own and write an extension to
memcached that does exactly what you want (kind of; there _are_ better
tools, as mentioned); that's precisely what open source is about. I do
not get why you're so persistent about getting changes committed to the
trunk that the community and maintainers have said several times are not
the direction memcached is headed. As Dustin just said, write your own
engine. Put it on Github, show it off here, publicize it, write blogs
about it; that's totally great! What's wrong with that?

- Marc

On Fri, Feb 4, 2011 at 9:30 AM, Roberto Spadim  wrote:
> I can't replace the PIC; I have more than 2,000 units of PIC here (a lot
> of money). After that I can use ARM with Linux, but for now I can just
> use PIC.
> OK, no help from the forum; I will try to code it myself. If the code
> looks good, could I send it to the memcached mailing list and try to
> make it 'default' in the source code?
>
>
> 2011/2/4 Dustin :
>>
>> On Feb 4, 8:37 am, Roberto Spadim  wrote:
>>
>>> put at main memcache code, and we can port to others memcache forks,
>>> got the quest?
>>
>>  Our goals are quite far from this.  It would be trivial for you to
>> create an extension in the new engine branch to add lock support if
>> you feel it solves your exact problem.  You can do this today (right
>> now!) without even reading the server code.  It may be a misuse of
>> memcached, but it would be a misuse of memcached contained within your
>> environment.
>>
>>> easy? with it i can have 3 or more send/receive removed from my
>>> pic18f4550 code (save a lot of rom for me)
>>
>>  Are you sure you're not looking for a job server (gearmand,
>> beanstalkd, etc...)?  Perhaps your mistake is in doing any of the
>> processing on your PIC.
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>


Re: Require Windows build of Memcache

2011-01-04 Thread Marc Bollinger
Wasn't NorthScale working on an informal Windows build a while ago? Did
that just fizzle due to other priorities (e.g. releasing membase)?

- Marc

On Tue, Jan 4, 2011 at 10:43 AM, Matt Ingenthron  wrote:
> Hi Vitold,
>
> On 12/31/10 3:54 PM, Vitold S wrote:
>> Hello,
>>
>> I am doing web development on the Windows platform and I want to use
>> memcached, but I can't use the latest version with all its interesting
>> functions... Please make a build at memcached.org or a port to Windows...
>
> memcached.org provides only source.
>
> Trond Norbye posted instructions on how to build on windows on his blog:
> http://trondn.blogspot.com/2010/03/building-memcached-windows.html
>
> The trunk code has changed a little, but that should get you most of the
> way there.
>
> Matt
>


Re: REST API

2010-07-29 Thread Marc Bollinger
Along those lines, I just did some digging and it looks like there's a
third-party nginx plugin that supports using REST to address the cache at
the proxy level: http://wiki.nginx.org/NginxHttpMemcModule, and I agree with
the others that's where you'd want to place something like this. Note: I
just found the above link now, and am in no way advocating its use in
particular, just that there are already efforts to do this in more
appropriate layers.

- Marc
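If someone did build the kind of proxy discussed below, its core is just a translation layer from HTTP verbs to protocol commands. A sketch of that translation; the /mc/ URL scheme echoes the example given in this thread, while the function name and verb mapping are my own assumptions:

```python
def http_to_memcached(method, path, body=b""):
    """Translate an HTTP request into an ascii-protocol command line."""
    if not path.startswith("/mc/"):
        raise ValueError("expected /mc/<key>")
    key = path[len("/mc/"):].encode()
    if method == "GET":
        return b"get %s\r\n" % key
    if method == "PUT":
        # flags=0, exptime=0 kept fixed for simplicity
        return b"set %s 0 0 %d\r\n%s\r\n" % (key, len(body), body)
    if method == "DELETE":
        return b"delete %s\r\n" % key
    raise ValueError("unsupported method: " + method)

print(http_to_memcached("GET", "/mc/object/id1"))
```

As John notes, the partitioning logic would live in the real memcached client embedded beneath a layer like this, so HTTP callers never need to know about server distribution.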

On Thu, Jul 29, 2010 at 10:17 AM, John Reilly  wrote:

> You could easily develop an http-to-memcached proxy to allow this.  All the
> partitioning logic could exist in the memcache client embedded in your
> proxy.  This might make some sense because then you would not have to
> implement the partitioning logic into your clients.  I would say the answer
> to the question is no (memcached does not need to support http).
>
>
> On Thu, Jul 29, 2010 at 6:54 AM, j.s. mammen  wrote:
>
>> Folks, let's not get bogged down by REST as defined by Roy Fielding in
>> 2000.
>>
>> My question was simple.
>> Here it is again, rephrased.
>>
>> Do we need to implement a memcached layer whereby we can access the
>> cached objects by using HTTP protocol. Here is an example of getting a
>> cached object from a server
>> GET [server]/mc/object/id1
>>
>> Hope the question is clearer now?
>>
>> On Jul 29, 4:30 pm, Henrik Schröder  wrote:
>> > I would assume he's talking about making memcached expose some sort of
>> > simple web service api over http.
>> >
>> > Although you could argue that both the ascii protocol and the binary
>> > protocol are restful; they sure seem to me to fit the definition
>> > pretty closely.
>> >
>> > /Henrik
>> >
>> >
>> >
>> > On Thu, Jul 29, 2010 at 12:56, Aaron Stone  wrote:
>> > > What's a ReST protocol? ReST is a model.
>> >
>> > > On Wed, Jul 28, 2010 at 8:42 PM, jsm  wrote:
>> > > > What I meant was to add a REST protocol to memcached layer, just
>> like
>> > > > you have a binary protocol and ascii.
>> > > > Its up to the user to decide which protocol to use when accessing
>> > > > memcached objects.
>> > > > Regards,
>> > > > J.S.Mammen
>> >
>> > > > On Jul 29, 1:49 am, Aaron Stone  wrote:
>> > > >> On Wed, Jul 28, 2010 at 8:37 AM, jsm  wrote:
>> >
>> > > >> > On Jul 28, 8:02 pm, Rajesh Nair 
>> wrote:
>> > > >> >> Gavin,
>> >
>> > > >> >> If you go by the strict sense of word, HTTP protocol is not a
>> > > pre-requisite
>> > > >> >> for REST service.
>> > > >> >> It requires a protocol which supports linking entities through
>> URIs.
>> > >  It is
>> > > >> >> very much possible to implement a RESTful service by coming up
>> with
>> > > own URI
>> > > >> >> protocol for memcached messages
>> >
>> > > >> >> something like :
>> > > >> >> mc:///messages/
>> >
>> > > >> >> and the transport layer can be pretty much the same TCP to not
>> add
>> > > any
>> > > >> >> overhead.
>> >
>> > > >> >> JSM,
>> >
>> > > >> >> What is the value-add you are looking from the RESTful version
>> of the
>> > > >> >> memcached API?
>> >
>> > > >> > Basically to be able to use without binding to any particular
>> > > >> > language.
>> >
>> > > >> I read this as requesting memcached native support for structured
>> > > >> values (e.g. hashes, lists, etc.) -- is that what you meant?
>> >
>> > > >> Aaron
>> >
>> > > >> >> Regards,
>> > > >> >> Rajesh Nair
>> >
>> > > >> >> On Wed, Jul 28, 2010 at 8:13 PM, Gavin M. Roy <
>> g...@myyearbook.com>
>> > > wrote:
>> >
>> > > >> >> > Why add the HTTP protocol overhead?  REST/HTTP would add
>> ~75Mbps of
>> > > >> >> > additional traffic at 100k gets per second by saying there's a
>> > > rough 100
>> > > >> >> > byte overhead per request over the ASCII protocol.  I base the
>> 100
>> > > bytes by
>> > > >> >> > the HTTP GET request, minimal request headers and minimal
>> response
>> > > >> >> > headers. The binary protocol is very terse in comparison to
>> the
>> > > ASCII
>> > > >> >> > protocol.  In addition netcat or telnet works as good as curl
>> for
>> > > drop dead
>> > > >> >> > simplicity.  Don't get me wrong, it would be neat, but
>> shouldn't be
>> > > >> >> > considered in moderately well used memcached environments.
>> >
>> > > >> >> > Regards,
>> >
>> > > >> >> > Gavin
>> >
>> > > >> >> > On Wed, Jul 28, 2010 at 8:43 AM, jsm 
>> wrote:
>> >
>> > > >> >> >> Anyone writing or planning to write a REST API for memcached?
>> > > >> >> >> If no such plan, I would be interested in writing a REST API.
>> > > >> >> >> Any suggestions, comments welcome.
>>
>
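Gavin's ~75Mbps figure upthread is easy to sanity-check; using his rough assumptions of 100 bytes of extra overhead per request at 100k gets per second:

```python
overhead_bytes = 100          # rough per-request HTTP overhead (assumed)
requests_per_sec = 100_000

mbps = overhead_bytes * requests_per_sec * 8 / 1_000_000
print(mbps)  # 80.0 Mbit/s, the same ballpark as the ~75Mbps quoted
```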


Re: REST API

2010-07-28 Thread Marc Bollinger
"and some of them have inherent http capability and aren't used enough to
care about that last 10% efficiency when it means rewriting a bunch of code
with new libraries to get it."

But you're..._adding_ support for memcached to that system. If this were a
system already using memcached, it'd be...speaking memcached. If it were
using a different, external caching system, you'd probably expect some
measure of integration code, or at least testing it if, like Tyrant, it did
speak the same protocol you were expecting. The only other case would be if
you weren't using some external cache already, in which event you're going
to be adding logic, regardless.

- Marc

On Wed, Jul 28, 2010 at 11:19 AM, Adam Lee  wrote:

> That seems an odd case to me, to be honest.  One of the key benefits of
> memcached is its ultra-low latency, which is negated somewhat by using a
> much heavier protocol.  Also, writing a simple client library for the text
> protocol is seriously achievable in an afternoon.
>
> Anyway, there's nothing to say that you can't change your "hashing"
> strategy to work with HTTP/URIs.  Just hash the URI or use characters from
> it to select a server, for example...  As long as all clients agree, it
> doesn't matter how you shard the data.
>
> Again, I'd say you should take a look at TokyoTyrant for a fast, simple
> key-value store that speaks both memcached and HTTP.
>
> On Wed, Jul 28, 2010 at 2:11 PM, Les Mikesell wrote:
>
>> There's no argument that embedding a locally-configured memcache client
>> library for the appropriate language into your program would be more
>> efficient, but consider the case where you have many programs in many
>> languages sharing the cache data and some of them have inherent http
>> capability and aren't used enough to care about that last 10% efficiency
>> when it means rewriting a bunch of code with new libraries to get it.
>> However, I still think the http interface would have to be a separate
>> standalone piece, sitting over a stock client that knows about the local
>> distributed servers or you'd need a special client library anyway.
>>
>>  -Les
>>
>>
>>
>> On 7/28/2010 12:53 PM, Adam Lee wrote:
>>
>>> memcached's protocol is, as has been pointed out, already language
>>> agnostic and much more efficient than trying to do HTTP.  If you're
>>> saying RESTful in the "not necessarily HTTP" sense, though, then I'd say
>>> that memcached's text protocol is basically already as RESTful as you're
>>> going to get- think of commands as verbs ('get,' 'set,' 'add,' 'delete,'
>>> etc...) and the key as a URI and you're basically in an analogous
>>> situation that I think basically meets the criteria as much as you can
>>> (hard to have a stateless cache)...
>>> http://en.wikipedia.org/wiki/Representational_State_Transfer#Constraints
>>>
>>> If you want a key-value datastore with an HTTP interface, though, might
>>> I recommend Tokyo Tyrant?  It speaks memcached and its own binary
>>> protocol as well: http://1978th.net/tokyotyrant/spex.html#protocol
>>>
>>> On Wed, Jul 28, 2010 at 12:03 PM, Les Mikesell >> > wrote:
>>>
>>>On 7/28/2010 10:16 AM, jsm wrote:
>>>
>>>Gavin,
>>>You are right about the overhead and also saw that API's exist for
>>>most of the languages as well.
>>>I thought REST API would make memcached language agnostic.
>>>I would like to hear from the community if the REST API should be
>>>pursued or not?
>>>
>>>
>>>I'm not quite sure how a rest api could deal with the distributed
>>>servers without a special client anyway.  But, it might be handy to
>>>have a web service that mapped a rest api as directly as possible to
>>>memcache operations where the http side would use traditional load
>>>balance/fail over methods and handle the http 1.1 connection
>>>caching.  I'm sure there would be places where this could be used by
>>>components that have/want data in a cache shared by more efficient
>>>clients.
>>>
>>>--
>>>  Les Mikesell
>>>lesmikes...@gmail.com 
>>>
>>>
>>>
>>>
>>> --
>>> awl
>>>
>>
>>
>
>
> --
> awl
>


Re: LRU mechanism question

2010-07-06 Thread Marc Bollinger
Sergei,

One more tidbit that doesn't appear in either of those links (though
I'm not sure it'd necessarily be super-appropriate in either) and that
may throw off new users: `flush`-based commands only invalidate
objects, they do _not_ clear the data store. The above links should be
enough to get you rolling, though.
- Marc

On Tue, Jul 6, 2010 at 4:32 PM, dormando  wrote:

> Here's a more succinct and to the point page:
>
> http://code.google.com/p/memcached/wiki/NewUserInternals
> ^ If your question isn't answered here ask for clarification and I'll
> update the page.
>
> Your problem is about the slab preallocation I guess.
>
> On Tue, 6 Jul 2010, Matt Ingenthron wrote:
>
> > Hi Sergei,
> >
> > For various reasons (performance, avoiding memory fragmentation),
> memcached
> > uses a memory allocation approach called slab allocation.  The memcached
> > flavor of it can be found here:
> >
> > http://code.google.com/p/memcached/wiki/MemcachedSlabAllocator
> >
> > Chances are, your items didn't fit into the slabs defined.  There are
> some
> > stats to see the details and you can potentially do some slab tuning.
> >
> > Hope that helps,
> >
> > - Matt
> >
> > siroga wrote:
> > > Hi,
> > > I just started playing with memcached. While doing very basic stuff I
> > > found one thing that confused me a lot.
> > > I have memcached running with default settings - 64M of memory for
> > > caching.
> > > 1. Called flushALL to clean the cache.
> > > 2. Insert 100 byte arrays of 512K each - this should consume about 51M
> > > of memory, so I should have enough space to keep all of them - and to
> > > verify that, call get() for each of them - as expected, all arrays are
> > > present.
> > > 3. I call flushAll again - so cache should be clear
> > > 4. Insert 100 arrays of a smaller size (256K). I also expected to
> > > have enough memory to store them (overall I need about 26M), but
> > > surprisingly, when calling get() only the last 15 were found in the
> > > cache!!!
> > >
> > > It looks like memcached still holds the memory occupied by the first
> > > 100 arrays.
> > > Memcache-top says that only 3.8M out of 64 used.
> > >
> > > Any info/explanation on memcached memory management details is very
> > > welcomed. Sorry if it is a well known feature, but I did not find much
> > > on a wiki that would suggest explanation.
> > >
> > > Regards,
> > > Sergei
> > >
> > > Here is my test program (I got the same result using both danga and
> > > spy.memcached. clients):
> > >
> > > MemCachedClient cl;
> > >
> > > @Test
> > > public void strange() throws Throwable
> > > {
> > >     byte[] testLarge = new byte[1024 * 512];
> > >     byte[] testSmall = new byte[1024 * 256];
> > >     int COUNT = 100;
> > >     cl.flushAll();
> > >     Thread.sleep(1000);
> > >     for (int i = 0; i < COUNT; i++)
> > >     {
> > >         cl.set("largekey" + i, testLarge, 600);
> > >     }
> > >     for (int i = 0; i < COUNT; i++)
> > >     {
> > >         if (null != cl.get("largekey" + i))
> > >         {
> > >             System.out.println("First not null " + i);
> > >             break;
> > >         }
> > >     }
> > >     Thread.sleep(1000);
> > >     cl.flushAll();
> > >     Thread.sleep(1000);
> > >     for (int i = 0; i < COUNT; i++)
> > >     {
> > >         cl.set("smallkey" + i, testSmall, 600);
> > >     }
> > >     for (int i = 0; i < COUNT; i++)
> > >     {
> > >         if (null != cl.get("smallkey" + i))
> > >         {
> > >             System.out.println("First not null " + i);
> > >             break;
> > >         }
> > >     }
> > > }
> > >
> >
> >
>
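[A rough sketch of why Sergei saw this. Assumptions: the 1.4.x defaults of a ~96-byte smallest chunk, 1.25 growth factor, and 1 MB pages; the real progression depends on the -n and -f options, and the real chunk size also covers the key and item header. A 512K value lands in a size class whose chunk is so large that only one fits per 1 MB page:]

```python
def slab_classes(chunk_min=96, factor=1.25, page_size=1024 * 1024):
    """Approximate memcached's default slab class size progression."""
    sizes, size = [], chunk_min
    while size <= page_size // factor:
        size = (size + 7) & ~7          # 8-byte alignment, like the C code
        sizes.append(size)
        size = int(size * factor)
    sizes.append(page_size)             # final class: one chunk per page
    return sizes

classes = slab_classes()

def chunk_size_for(item_bytes):
    """Smallest class whose chunk fits the item (value plus overhead)."""
    return next(s for s in classes if s >= item_bytes)

big = chunk_size_for(512 * 1024 + 80)    # ~512K values
small = chunk_size_for(256 * 1024 + 80)  # ~256K values need a smaller class
print("512K item -> chunk", big, "->", 1024 * 1024 // big, "chunk(s) per page")
print("256K item -> chunk", small, "->", 1024 * 1024 // small, "chunks per page")
```

Once the 512K items have claimed most pages for their class, flushAll frees the items but not the pages, so the later 256K items, which map to a different class, only get whatever pages were never claimed.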


Re: Distributed != Replicated. Another memcached vocabulary lesson

2010-06-18 Thread Marc Bollinger
It didn't seem to me that Brian was talking about a quorum of people
thinking that, but rather that some small portion of people conflate the
two, one of whom started the long thread about memcached not being
distributed. I agree that most people probably wouldn't trip over that
wording.


- Marc

On Fri, Jun 18, 2010 at 6:04 AM, Simon Riggs  wrote:

> On Fri, 2010-06-11 at 11:54 -0500, Brian Moon wrote:
> > After the recent thread and reading some comments on the memcached wiki
> > I think I know what is wrong. People see the word distributed and think
> > it means replicated.
>
> I'm not really sure there's a quorum of people that think that. I didn't
> worry too much when I read that previously.
>
> --
>  Simon Riggs   www.2ndQuadrant.com
>  PostgreSQL Development, 24x7 Support, Training and Services
>
>


Re: housekeeping and expire times

2010-03-25 Thread Marc Bollinger
In truth, I took that verbatim from the wiki page I pointed him to,
but looking at items.c (
http://github.com/dustin/memcached/blob/master/items.c ) it walks
the first 50 items in the LRU tail and will reclaim the first expired
item it finds, but doesn't do any more exhaustive searching beyond
that.

- Marc

On Thu, Mar 25, 2010 at 5:25 PM, Perry Krug  wrote:
> Can you explain how the memcache server knows which items are expired in
> order to evict them first?
> It was my understanding that memcache does not keep a list of keys (expired
> or otherwise) nor does it track expired keys.
> Thus, expired keys would not necessarily be the first to get evicted when
> space is needed.
> Am I incorrect?
> Thanks!
> On Thu, Mar 25, 2010 at 10:10 AM, Marc Bollinger 
> wrote:
>>
>> There's no actual 'housecleaning' per se, just lazy expiration and
>> eviction. If an item's expiration time is reached, the next time its
>> key is requested, the lookup fails. If the server is out of memory,
>> it'll evict expired items first, but if nothing's expired, will start
>> evicting things in a least-recently-used fashion. More info is on the
>> Google Code wiki:
>>
>>
>> http://code.google.com/p/memcached/wiki/FAQ#When_do_expired_cached_items_get_deleted_from_the_cache?
>>
>> - Marc
>>
>> On Thu, Mar 25, 2010 at 9:50 AM, Jens  wrote:
>> > Hi, memcached is really great and easy to start working with. Maybe I
>> > haven't fully understood your concept, but I don't understand how
>> > housekeeping is performed. As it is possible to set a memory limit, I
>> > assume that some kind of housekeeping which keeps the number of
>> > entries under the memory limit must be performed at some point. Is
>> > this solely done on the basis of expire times? Is the expire time
>> > reset on retrieval? Is there a detailed description available
>> > anywhere?
>> >
>> > To unsubscribe from this group, send email to
>> > memcached+unsubscribe@googlegroups.com or reply to this email with the words
>> > "REMOVE ME" as the subject.
>> >
>>
>
>

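[The lazy-expiry-plus-LRU behaviour described in this thread can be modelled in a few lines. This is a toy sketch, not memcached's actual code; the 50-item tail walk mirrors the items.c behaviour mentioned above.]

```python
import time
from collections import OrderedDict

class LazyLRU:
    """Toy model of memcached housekeeping: expired items are only noticed
    on get(), and an out-of-memory set() checks just the first few LRU-tail
    items for something expired before falling back to plain LRU eviction."""
    TAIL_SEARCH = 50  # memcached walks up to 50 tail items (see items.c)

    def __init__(self, max_items):
        self.max_items = max_items
        self.items = OrderedDict()      # least recently used first

    def set(self, key, value, ttl):
        if len(self.items) >= self.max_items and key not in self.items:
            self._evict_one()
        self.items[key] = (value, time.time() + ttl)
        self.items.move_to_end(key)     # most recently used at the end

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() >= expires:      # lazy expiry: only noticed now
            del self.items[key]
            return None
        self.items.move_to_end(key)
        return value

    def _evict_one(self):
        now = time.time()
        # Prefer reclaiming an expired item near the LRU tail...
        for key in list(self.items)[:self.TAIL_SEARCH]:
            if now >= self.items[key][1]:
                del self.items[key]
                return
        # ...otherwise evict the least recently used item outright.
        self.items.popitem(last=False)

c = LazyLRU(max_items=2)
c.set("a", 1, ttl=60)
c.set("b", 2, ttl=60)
c.set("c", 3, ttl=60)   # cache full, nothing expired: "a" (LRU) is evicted
print(c.get("a"), c.get("b"), c.get("c"))   # → None 2 3
```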


Re: housekeeping and expire times

2010-03-25 Thread Marc Bollinger
There's no actual 'housecleaning' per se, just lazy expiration and
eviction. If an item's expiration time is reached, the next time its
key is requested, the lookup fails. If the server is out of memory,
it'll evict expired items first, but if nothing's expired, will start
evicting things in a least-recently-used fashion. More info is on the
Google Code wiki:

http://code.google.com/p/memcached/wiki/FAQ#When_do_expired_cached_items_get_deleted_from_the_cache?

- Marc

On Thu, Mar 25, 2010 at 9:50 AM, Jens  wrote:
> Hi, memcached is really great and easy to start working with. Maybe I
> haven't fully understood your concept, but I don't understand how
> housekeeping is performed. As it is possible to set a memory limit, I
> assume that some kind of housekeeping which keeps the number of
> entries under the memory limit must be performed at some point. Is
> this solely done on the basis of expire times? Is the expire time
> reset on retrieval? Is there a detailed description available
> anywhere?
>
>



Re: new release of the "enyim" .net memcached client

2010-03-17 Thread Marc Bollinger
Hah! We're currently using the BeIT client, and were wondering if
there was a timeline for binary protocol and NorthScale support,
actually.


Congrats and thanks to Attila on pushing out the Enyim release!

- Marc

On Wed, Mar 17, 2010 at 12:29 PM, Henrik Schröder  wrote:
> Nice!
>
> I'm feeling the pressure on me now to update my client as well. :-D
>
>
> /Henrik
>
> On Wed, Mar 17, 2010 at 00:43, a.  wrote:
>>
>> Hi,
>>
>> After some silence I've released the 2.0 beta version of my memcached
>> client.
>>
>> The most notable new features are
>>
>> - binary protocol (CAS coming soon)
>> - SASL
>> - spymemcached compatible KetamaNodeLocator
>> - NorthScale Memcached Server support
>>
>> Sources and binaries are available at GitHub:
>>
>> http://github.com/enyim/EnyimMemcached/downloads
>>
>>
>>
>> a.
>>
>>
>> --
>> http://memcached.enyim.com
>
>


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Marc Bollinger
Part of the disconnect is that, "how do I have to run memcached to
'store' these sessions in memcached," is not a concrete question. It's
wibbly wobbly at best to try and achieve this behavior, and, "You
can't," _is_ concrete in that there is no way to do this in a
mathematically provable way. The best you're going to get is something
that anecdotally works, contingent on the existence of roughly
homogeneous object sizes, and that you're allocating enough memory to
memcached. The scheme you described above (maybe tweak the growth
factor downward to taste?) will probably work, but that assumes a
limit of 1000 users, which is unrealistic for the scale that memcached
was really designed for in the first place.

- Marc

On Fri, Mar 12, 2010 at 3:56 PM, Martin Grotzke
 wrote:
> Hi Brian,
> you're making a very clear point. However it would be nice if you'd provide
> concrete answers to concrete questions. I want to get a better understanding
> of memcached's memory model and I'm thankful for any help I'm getting here
> on this list. If my intro was not supporting this please forgive me...
> Cheers,
> Martin
>
> On Sat, Mar 13, 2010 at 12:27 AM, Brian Moon  wrote:
>>
>> The resounding answer you will get from this list is: You don't, can't and
>> won't with memcached. That is not its job. It will never be its job. Perhaps
>> when storage engines are done, maybe. But then you won't get the performance
>> that you get with memcached. There is a trade off for performance.
>>
>> Brian.
>> 
>> http://brian.moonspot.net/
>>
>> On 3/12/10 3:02 PM, martin.grotzke wrote:
>>>
>>> Hi,
>>>
>>> I know that this topic is rather burdened, as it was said often enough
>>> that memcached never was created to be used like a reliable datastore.
>>> Still, there are users interested in some kind of reliability, users
>>> that want to store items in memcached and be "sure" that these items
>>> can be pulled from memcached as long as they are not expired.
>>>
>>> I read the following on memcached's memory management:
>>>   "Memcached has two separate memory management strategies:
>>> - On read, if a key is past its expiry time, return NOT FOUND.
>>> - On write, choose an appropriate slab class for the value; if it's
>>> full, replace the oldest-used (read or written) key with the new one.
>>> Note that the second strategy, LRU eviction, does not check the expiry
>>> time at all." (from "peeping into memcached", [1])
>>>
>>> I also found "Slabs, Pages, Chunks and Memcached" ([2]) a really good
>>> explanation of memcached's memory model.
>>>
>>> Having this as background, I wonder how it would be possible to get
>>> more predictability regarding the availability of cached items.
>>>
>>> Assume that I want to store sessions in memcached. How could I run
>>> memcached so that I can be sure that my sessions are available in
>>> memcached when I try to "get" them? Additionally assume that I expect
>>> to have at most 1000 sessions at a time in one memcached node (and that
>>> I can control/limit this in my application). Another assumption is
>>> that sessions are between 50kb and 200kb.
>>>
>>> The question now is how do I have to run memcached to "store" these
>>> sessions in memcached?
>>>
>>> Would it be an option to run memcached with a minimum slab size of
>>> 200kb? Then I would know that for each session a 200kb chunk is used.
>>> When I have 1000 session between 50kb and 200kb this should take 200mb
>>> in total. When I run memcached with more than 200mb memory, could I be
>>> sure, that the sessions are alive as long as they are not expired?
>>>
>>> What do you think about this?
>>>
>>> Cheers,
>>> Martin
>>>
>>>
>>> [1]
>>> http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
>>> [2]
>>> http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/
>
>
>
> --
> Martin Grotzke
> http://www.javakaffee.de/blog/
>
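[Martin's sizing scheme can be checked with quick arithmetic. Assumptions: 1 MB slab pages, and a slab class tuned via -n/-f so that one chunk holds the largest 200kb session. Even then this only removes eviction pressure for that class; it is a sketch of the capacity math, not a reliability guarantee.]

```python
# One chunk per session, sized for the worst case.
chunk = 200 * 1024
page = 1024 * 1024
sessions = 1000

chunks_per_page = page // chunk                  # sessions per 1 MB page
pages_needed = -(-sessions // chunks_per_page)   # ceiling division
memory_needed_mb = pages_needed * page // (1024 * 1024)
print(chunks_per_page, "per page;", pages_needed, "pages;",
      memory_needed_mb, "MB total")              # → 5 per page; 200 pages; 200 MB
```

So running the node with comfortably more than 200 MB (plus per-item overhead and headroom for other slab classes) is what makes "1000 sessions of up to 200kb" fit without LRU eviction of live sessions.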


Re: Memcache as session server with high cache miss?

2010-03-11 Thread Marc Bollinger
>  And I'm not sure what happens if the keys
> have hash collisions.

Like many key-value stores, last write wins.

- Marc


Re: Memcache as session server with high cache miss?

2010-03-10 Thread Marc Bollinger
> the memcached to expell valid data. If your application relies on data
> to be present on the memcached to work properly, you have a problem.

Now you have two problems.

-- 
Marc Bollinger
mbollin...@gmail.com


Re: Updating memcache frequently?

2010-02-24 Thread Marc Bollinger
If the scale is actually only hundreds per minute, even sequentially,
I wouldn't worry about memcached writes being the bottleneck, even on
a small machine.

-- 
Marc Bollinger
mbollin...@gmail.com

On Wed, Feb 24, 2010 at 8:08 AM, mnenchev  wrote:
> Hi,
>
> I have an MDB (message-driven bean) that processes hundreds of messages
> per minute. I want to update memcache on every message - this is very
> frequent. But if I make my MDB a singleton it will process messages
> sequentially, so the next update will be done when the previous is
> finished. Is this a problem?
> Regards.
>


Re: Memcached set is too slow

2010-02-24 Thread Marc Bollinger
And are you using persistent connections? There have been a handful of
threads recently, discussing setting up persistent connections with
PECL::memcached.

- Marc

On Wed, Feb 24, 2010 at 9:41 AM, Adam Lee  wrote:
> What kind of hardware and software configurations are you using on the
> client and server sides?
> We have servers doing like 5M/s in and 10M/s out without even breaking a
> sweat...
>
> On Wed, Feb 24, 2010 at 7:35 AM, me from  wrote:
>>
>> We use memcached php extension, (http://pecl.php.net/package/memcached)
>>
>> On Wed, Feb 24, 2010 at 12:02 PM, Juri Bracchi  wrote:
>>>
>>> the latest memcache php extension version is 2.2.5
>>>
>>> http://pecl.php.net/package/memcache
>>>
>>>
>>>
>>>
>>>
>>> On Wed, 24 Feb 2010 05:09:36 +0300, me from wrote:
>>> > No. Sorry for misunderstanding, its my bad. Its php extension (PECL)
>>> > of version 1.0.0.
>>> >
>>> > Memcached is 1.4.4
>>> >
>>> > On Wed, Feb 24, 2010 at 4:42 AM, Eric Lambert
>>> >  wrote:
>>> >>>
>>> >>> PHP5.3, libmemcached 0.35, memcached 1.0.0
>>> >>
>>> >> Is this really the version of the memcached server you are using
>>> >> (1.0.0) If so, that is certainly out-of-date. Latest version is
>>> >> 1.4.*.
>>> >>
>>> >> Eric
>>> >>
>>> >>
>>> >> Chaosty wrote:
>>> >>> We have found that Memcached::set takes 0.10-0.11 seconds to store
>>> >>> items of around 100-200KB, which is too slow. Compression is turned
>>> >>> off. Any suggestions?
>>> >>>
>>> >>> PHP5.3, libmemcached 0.35, memcached 1.0.0
>>> >>>
>>> >>
>>> >
>>
>
>
>
> --
> awl
>


Re: Assertion Failed for Items > 1MB

2010-02-09 Thread Marc Bollinger
Awesome, thanks for the usage info! We're going to do some testing then, and
start integrating it into some not-as-essential clusters. Matt, we'll
let you know
if we run into any stability or other issues, and thanks to your team for putting
together the release!

- Marc

On Tue, Feb 9, 2010 at 4:16 AM,   wrote:
> I have been using it for development use for over a month now and I have
> not experienced any problems whatsoever.  We don't use it in our production
> environment, however, so I can't say how stable it is for production use.
>
> I hope that helps.
>
>
> Brandon Ramirez | Office: 585.214.5013 | Fax: 585.295.4848
> Software Engineer II | Element K | www.elementk.com
>
>
>
>
>
>
>  From:       Matt Ingenthron 
>
>  To:         memcached@googlegroups.com
>
>  Date:       02/09/2010 04:11 AM
>
>  Subject:    Re: Assertion Failed for Items > 1MB
>
>  Sent by:    memcac...@googlegroups.com
>
>
>
>
>
>
> Hi Marc,
>
> Marc Bollinger wrote:
>> Just curious, Northscale guys: how stable would you consider the build
>> on your website currently (e.g. are you using it in production, or
>> would you recommend using it in production)?
>>
>
> There will be a release which we consider thoroughly tested and ready
> for any production environment.  As far as whether the build which is
> there is ready for your production environment, that varies.
>
> Except for an issue that appears to be related to dynamic linking
> restrictions on Microsoft's Azure, we aren't aware of any outstanding
> issues.  We haven't encountered any stability or functional issues, and
> haven't had any reports of any yet.
>
> There are a number of people with it deployed in developer environments,
> more on desktops, and we've not heard of any issues there.  I'm pretty
> sure there's at least one deployment someone considers "production",
> though that designation may be applied to very different deployments.
>
> In the mean time, if you have any feedback, we'd love to hear about it.
>
> Thanks!
>
> - Matt
>
>> On Tue, Jan 12, 2010 at 2:35 PM, Patrick Galbraith  wrote:
>>
>>> Hi there again!
>>>
>>> The updated, tested binary is
>>> http://downloads.northscale.com/memcached-win32-1.4.4-54-g136cb6e.zip
>>>
>>> regards,
>>>
>>> Patrick
>>>
>>> nkranes wrote:
>>>
>>>> I'm getting an Access Denied error.
>>>>
>>>> On Jan 12, 2:48 pm, Patrick Galbraith  wrote:
>>>>
>>>>
>>>>> Hi there!
>>>>>
>>>>> I built a new binary that you can try out if you like at :
>>>>>
>>>>> http://downloads.northscale.com/memcached-win32-1.4.4-53-g0b7694c.zip
>>>>>
>>>>> We're in the process of testing this right now, but if you want to
> give
>>>>> it a shot, please feel free!
>>>>>
>>>>> --Patrick
>>>>>
>>>>> nkranes wrote:
>>>>>
>>>>>
>>>>>> I am using the Windows version of memcached 1.2.5 in production
>>>>>> without issue.  I am attempting to upgrade to 1.4.4 (from
>>>>>> http://labs.northscale.com/memcached-packages/) to take advantage of
>>>>>> the configurable item size limitation.  Whenever I try to store an
>>>>>> item > 1MB I get the following error:
>>>>>>      Assertion failed: it->nbytes < (1024 * 1024), file items.c, line
>>>>>> 284
>>>>>>      The line of code in items.c is:
>>>>>> assert(it->nbytes < (1024 * 1024));  /* 1MB max size */
>>>>>>      So how could this possibly work?  I can remove that assertion,
> but
>>>>>> was
>>>>>> curious to see if anyone knew why it was there considering the item
>>>>>> limit *should* now be configurable.
>>>>>>      Thanks!
>>>>>>
>>>>>>
>>>
>
>
>
>
>


Re: Assertion Failed for Items > 1MB

2010-02-08 Thread Marc Bollinger
Just curious, Northscale guys: how stable would you consider the build
on your website currently (e.g. are you using it in production, or
would you recommend using it in production)?

Thanks!

- Marc

On Tue, Jan 12, 2010 at 2:35 PM, Patrick Galbraith  wrote:
> Hi there again!
>
> The updated, tested binary is
> http://downloads.northscale.com/memcached-win32-1.4.4-54-g136cb6e.zip
>
> regards,
>
> Patrick
>
> nkranes wrote:
>>
>> I'm getting an Access Denied error.
>>
>> On Jan 12, 2:48 pm, Patrick Galbraith  wrote:
>>
>>>
>>> Hi there!
>>>
>>> I built a new binary that you can try out if you like at :
>>>
>>> http://downloads.northscale.com/memcached-win32-1.4.4-53-g0b7694c.zip
>>>
>>> We're in the process of testing this right now, but if you want to give
>>> it a shot, please feel free!
>>>
>>> --Patrick
>>>
>>> nkranes wrote:
>>>

 I am using the Windows version of memcached 1.2.5 in production
 without issue.  I am attempting to upgrade to 1.4.4 (from
 http://labs.northscale.com/memcached-packages/) to take advantage of
 the configurable item size limitation.  Whenever I try to store an
 item > 1MB I get the following error:
      Assertion failed: it->nbytes < (1024 * 1024), file items.c, line
 284
      The line of code in items.c is:
 assert(it->nbytes < (1024 * 1024));  /* 1MB max size */
      So how could this possibly work?  I can remove that assertion, but
 was
 curious to see if anyone knew why it was there considering the item
 limit *should* now be configurable.
      Thanks!

>
>


Constant evictions after ~22,000 items stored?

2009-10-16 Thread Marc Bollinger

Hi All,

We've been testing a write-heavy scenario (hence the get/set
discrepancy) using the Jellycan build of 1.2.6 on Windows Server 2003.
There's a staging server throwing live data at the memcached server
(items are roughly 200-300 bytes apiece), and we're seeing the number of
curr_items stagnate at exactly 22539, at which point everything else
is evicted.

First `stats` (abbreviated output):
STAT version 1.2.6
STAT pointer_size 32
STAT curr_items 22539
STAT total_items 5238076
STAT bytes 14729824
STAT curr_connections 26
STAT total_connections 110
STAT connection_structures 30
STAT cmd_get 63
STAT cmd_set 5238076
STAT get_hits 22
STAT get_misses 41
STAT evictions 5121775
STAT bytes_read 2157723502
STAT bytes_written 44197122
STAT limit_maxbytes 536870912

A few seconds later:

STAT version 1.2.6
STAT pointer_size 32
STAT curr_items 22539
STAT total_items 5238076
STAT bytes 14729824
STAT connection_structures 30
STAT cmd_get 63
STAT cmd_set 5238076
STAT get_hits 22
STAT get_misses 41
STAT evictions 5121775
STAT bytes_read 2157723518
STAT bytes_written 44198050
STAT limit_maxbytes 536870912

The result of `stats slabs` shows that there's only one slab that
still has free chunks (out of 20), but even if there were no chunks
available period, it should be able to allocate more than 20 slabs, as
according to the bytes stat we're only using 14MB, and the limit is set
to 512MB. My first thought would be that the homogeneity of the data
sizes is throwing off chunk allocation and lots of memory is being
wasted... but that wouldn't explain why only 14MB is being used. Any
thoughts?

Thanks!

- Marc

`stats slabs` output:

stats slabs
STAT 3:chunk_size 144
STAT 3:chunks_per_page 7281
STAT 3:total_pages 1
STAT 3:total_chunks 7281
STAT 3:used_chunks 7281
STAT 3:free_chunks 0
STAT 3:free_chunks_end 6793
STAT 4:chunk_size 184
STAT 4:chunks_per_page 5698
STAT 4:total_pages 1
STAT 4:total_chunks 5698
STAT 4:used_chunks 5698
STAT 4:free_chunks 0
STAT 4:free_chunks_end 5285
STAT 5:chunk_size 232
STAT 5:chunks_per_page 4519
STAT 5:total_pages 1
STAT 5:total_chunks 4519
STAT 5:used_chunks 4519
STAT 5:free_chunks 0
STAT 5:free_chunks_end 3869
STAT 6:chunk_size 296
STAT 6:chunks_per_page 3542
STAT 6:total_pages 1
STAT 6:total_chunks 3542
STAT 6:used_chunks 3542
STAT 6:free_chunks 0
STAT 6:free_chunks_end 2821
STAT 7:chunk_size 376
STAT 7:chunks_per_page 2788
STAT 7:total_pages 1
STAT 7:total_chunks 2788
STAT 7:used_chunks 2788
STAT 7:free_chunks 0
STAT 7:free_chunks_end 2236
STAT 8:chunk_size 472
STAT 8:chunks_per_page 2221
STAT 8:total_pages 1
STAT 8:total_chunks 2221
STAT 8:used_chunks 2221
STAT 8:free_chunks 0
STAT 8:free_chunks_end 1880
STAT 9:chunk_size 592
STAT 9:chunks_per_page 1771
STAT 9:total_pages 1
STAT 9:total_chunks 1771
STAT 9:used_chunks 1771
STAT 9:free_chunks 0
STAT 9:free_chunks_end 1504
STAT 10:chunk_size 744
STAT 10:chunks_per_page 1409
STAT 10:total_pages 13
STAT 10:total_chunks 18317
STAT 10:used_chunks 18317
STAT 10:free_chunks 0
STAT 10:free_chunks_end 0
STAT 11:chunk_size 936
STAT 11:chunks_per_page 1120
STAT 11:total_pages 1
STAT 11:total_chunks 1120
STAT 11:used_chunks 1120
STAT 11:free_chunks 0
STAT 11:free_chunks_end 927
STAT 12:chunk_size 1176
STAT 12:chunks_per_page 891
STAT 12:total_pages 1
STAT 12:total_chunks 891
STAT 12:used_chunks 891
STAT 12:free_chunks 0
STAT 12:free_chunks_end 717
STAT 13:chunk_size 1472
STAT 13:chunks_per_page 712
STAT 13:total_pages 1
STAT 13:total_chunks 712
STAT 13:used_chunks 712
STAT 13:free_chunks 0
STAT 13:free_chunks_end 601
STAT 14:chunk_size 1840
STAT 14:chunks_per_page 569
STAT 14:total_pages 1
STAT 14:total_chunks 569
STAT 14:used_chunks 569
STAT 14:free_chunks 0
STAT 14:free_chunks_end 519
STAT 15:chunk_size 2304
STAT 15:chunks_per_page 455
STAT 15:total_pages 1
STAT 15:total_chunks 455
STAT 15:used_chunks 455
STAT 15:free_chunks 0
STAT 15:free_chunks_end 422
STAT 16:chunk_size 2880
STAT 16:chunks_per_page 364
STAT 16:total_pages 1
STAT 16:total_chunks 364
STAT 16:used_chunks 364
STAT 16:free_chunks 0
STAT 16:free_chunks_end 343
STAT 17:chunk_size 3600
STAT 17:chunks_per_page 291
STAT 17:total_pages 1
STAT 17:total_chunks 291
STAT 17:used_chunks 291
STAT 17:free_chunks 0
STAT 17:free_chunks_end 285
STAT 18:chunk_size 4504
STAT 18:chunks_per_page 232
STAT 18:total_pages 1
STAT 18:total_chunks 232
STAT 18:used_chunks 232
STAT 18:free_chunks 0
STAT 18:free_chunks_end 223
STAT 19:chunk_size 5632
STAT 19:chunks_per_page 186
STAT 19:total_pages 1
STAT 19:total_chunks 186
STAT 19:used_chunks 186
STAT 19:free_chunks 0
STAT 19:free_chunks_end 0
STAT 20:chunk_size 7040
STAT 20:chunks_per_page 148
STAT 20:total_pages 239
STAT 20:total_chunks 35372
STAT 20:used_chunks 7
STAT 20:free_chunks 35365
STAT 20:free_chunks_end 7
STAT 21:chunk_size 8800
STAT 21:chunks_per_page 119
STAT 21:total_pages 253
STAT 21:total_chunks 30107
STAT 21:used_chunks 97
STAT 21:free_chunks 30010
STAT 21:free_chunks_end 95
STAT 22:chunk_size 11000
STAT 22:chunks_per_page 95
STAT 22:total_pages 1
STAT
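[A back-of-the-envelope reading of the `stats slabs` dump above, assuming 1 MB slab pages (the 1.2.x default). Pages, once assigned to a slab class, are never handed back, so the pages grabbed by classes 20 and 21 stay pinned even though almost all of their chunks are now free:]

```python
# total_pages / used_chunks / total_chunks, copied from the stats dump above
page_mb = 1
total_pages = {20: 239, 21: 253}
used_chunks = {20: 7, 21: 97}
total_chunks = {20: 35372, 21: 30107}

pinned_mb = sum(total_pages.values()) * page_mb
idle = sum(total_chunks[c] - used_chunks[c] for c in total_chunks)
print(f"classes 20+21 pin {pinned_mb} MB of the 512 MB limit, "
      f"with {idle} free chunks that no other class can use")
```

That accounts for the paradox in the post: `bytes` only counts live item data (14 MB), but nearly the whole 512 MB of pages is already parceled out, so classes that still have demand can only evict, never grow.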

Re: memcached on Win 2003 Server

2009-06-25 Thread Marc Bollinger
We've got a couple of small-but-high-throughput installations running on
Win2k3 servers with little issue. Are there any particular problems you're
seeing? Are both the servers and clients on Win2k3, and are the errors
client errors or server errors? What versions are you using, etc.?
Best,

- Marc

On Thu, Jun 25, 2009 at 3:12 AM, Mayank Jha  wrote:

> Hi,
>
> We tried using memcached on windows and it works for win xp but the same
> code does not work on Win 2003 platform. Anyone aware of any problems on
> windows platform, or any pointers that would help us.
>
> Thanks
> Mayank
>
> --
> ___
> "Only those who dare to fail greatly can ever
> achieve greatly."
> - Robert F. Kennedy
> ___
> http://www.mayankjha.com/
> http://www.linkedin.com/in/mayankjha/
> ___
>


Re: How does memcached load balance or does it?

2009-05-30 Thread Marc Bollinger
The key is run through a hash function (which one depends on the client
lib), and the request is sent to the server whose list index is that hash
value mod the number of servers. It doesn't spread load _perfectly_, but
at the scale most memcached installations run, it's close enough.




On May 30, 2009, at 9:34 AM, Andy  wrote:

> Third, since all keys are hashed, you'll find that your items will  
be evenly distributed so unless your nodes have different amounts of  
memory, they will fill up at approximately the same pace.


Can you tell me a little bit more about how this part works?  What  
does hashing the key against the list of servers mean?  How does  
hashing allow the servers to fill up at approximately the same pace?


Thanks


Date: Sat, 30 May 2009 10:16:46 +0200
Subject: Re: How does memcached load balance or does it?
From: skro...@gmail.com
To: memcached@googlegroups.com

Well, first of all, it doesn't ensure that. There's no communication  
between memcached nodes in a cluster, which is what makes it scale  
linearly.


Second, you shouldn't worry about a node filling up. It's a cache,  
not a data-store, the least recently used items will be evicted when  
you fill up a node.


Third, since all keys are hashed, you'll find that your items will  
be evenly distributed so unless your nodes have different amounts of  
memory, they will fill up at approximately the same pace.



/Henrik

On Sat, May 30, 2009 at 07:05, scranthdaddy   
wrote:


The documentation states "When doing a memcached lookup, first the
client hashes the key against the whole list of servers. Once it has
chosen a server, the client then sends its request, and the server
does an internal hash key lookup for the actual item data. "

My question is how then does memcache ensure that a node in the
cluster doesn't fill up?


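[The mod-N selection Marc describes can be sketched as follows. MD5 is illustrative only — the actual hash depends on the client library — and the server addresses are placeholders.]

```python
import hashlib
from collections import Counter

# Hypothetical server list; any stable ordering works.
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key):
    """Classic (non-consistent) selection: hash the key, mod server count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

# A decent hash spreads keys near-uniformly, which is why nodes fill at
# roughly the same pace (assuming similar item sizes and equal node memory).
counts = Counter(server_for(f"user:{i}") for i in range(30000))
print(dict(counts))
```

The flip side, raised elsewhere on this list, is that adding or removing a server changes `len(servers)` and remaps most keys, which is what consistent hashing avoids.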


Re: LOVE DATING & ROMANCE WITH HOT MODELS LIVE

2009-04-15 Thread Marc Bollinger
Searching "memcached in:spam" yields only emails I've already flagged (and
not, for instance, this one), so it looks like the false positive rate is
pretty low in my limited sample size. On the other hand, I've been flagging
group spam as spam all day, and it doesn't seem like it's caught anything on
its own, either.


- Marc

On Wed, Apr 15, 2009 at 7:19 PM, Clint Webb  wrote:

> Does anyone know what the "Report spam" in gmail will do to spam on the
> googlegroups list?   I would love to press it on these spams, but fear that
> I might be labeling the group as spam instead.   Does anyone else hit that
> button in gmail for these?
>
> On Thu, Apr 16, 2009 at 1:54 AM, Hanna Moodaeu wrote:
>
>> LOVE DATING & ROMANCE WITH HOT MODELS LIVE
>> Love Dating and Romance: http://www.onlinedating4u.co.cc
>> Love Dating And Romance - All you need to know and not know about Love
>> Dating And Romance.
>> How to Create Everyday Romance – Relationship Tips For Keeping Love
>> Fresh. It seems like so many people wait for one day in the calendar
>> year to get around ...
>
>
>
>
> --
> "Be excellent to each other"
>


Re: memcache Vs memcached

2009-04-14 Thread Marc Bollinger
PECL extensions are C-based, unlike PEAR extensions, but you're right that
the PECL memcached client is basically a PHP wrapper over libmemcached,
while PECL's memcache client is hand-rolled. I don't keep track of issues
with either client, but the 'beta' designation wouldn't hold me back from
using the former because of its association with libmemcached.

- Marc

2009/4/14 NICK VERBECK 

>
> Doesn't the memcached version use libmemcached where the other is more pure
> php?
>
> 2009/4/14 张志坚 :
> > Memcached needs PHP 5.2.0 or newer (below 6.0.0); Memcache needs PHP
> > 4.3.11 or newer and PEAR Installer 1.4.0b1 or newer.
> >
> > I suggest using Memcache; Memcached is still a beta version.
> > 2009/4/15 Paras 
> >>
> >> Hello all,
> >>
> >> I am just begining to use memcached. I came accross pecl and there I
> >> saw 2 caching extensions. memcached and memcache. Which one should I
> >> use ???
> >>
> >> Thanks,
> >> Paras
> >
> >
> > --
> > 2009 | A new journey
> >
>
>
>
> --
> Nick Verbeck - NerdyNick
> 
> NerdyNick.com
> SkeletalDesign.com
> VivaLaOpenSource.com
> Coloco.ubuntu-rocks.org
>


Re: memcached not freeing up ram?

2009-03-17 Thread Marc Bollinger
Funny you should mention that! I've been having this problem, see...

On Tue, Mar 17, 2009 at 6:26 PM, Jose Celestino  wrote:

>
> Words by Tadeu Alves [Tue, Mar 17, 2009 at 07:09:48PM -0300]:
> > hello there guys, i was looking for some help to configurate my memcached
> > api into a mysql server with innodb tables.
> >
>
> OMG, this must be the thread highjacking week.
>
> --
> Jose Celestino | http://japc.uncovering.org/files/japc-pgpkey.asc
> 
> "One man’s theology is another man’s belly laugh." -- Robert A. Heinlein
>



-- 
Marc Bollinger
mbollin...@gmail.com


Re: Webconsole debugging tool built into memcached

2009-03-13 Thread Marc Bollinger
>
> However, wouldn't it be simple enough to make a UI using PHP or
> something that was totally portable and you could list multiple
> servers and such? This says "built in" to memcached. Maybe that's what
> scares me... is this actually bolted into memcached itself or am I
> just getting stuck on the words :)
>

Just by briefly reading his email, it sounds like he's adding his own
callbacks to libevent and intercepting events, then storing them for display
via the in-process webserver. I'm guessing this was a performance
consideration, but PHP really doesn't offer the sort of capability to hook
in that way--you'd wind up at the same point we're currently at, where
people complain about not being able to dump out the contents of memcached.
I'm not saying don't be scared (though this shouldn't be going into
production, anyway) ;-)


Re: High load on web servers when using consistent hashing enabled with PHP's Memcache module

2009-02-27 Thread Marc Bollinger
We definitely saw this exact same behavior with the PHP PECL module, using
consistent
hashing. Weirdly enough, I was just digging around before responding, and
version 3.0.4 was released a few days ago with the changelog statement:
"Improved performance of consistent hash strategy." You might check that out
if you're using a version more than a week old, but it sounds like you've
decided against consistent hashing. We initially thought it might be a good
idea because we're running memcached instances on EC2, but realized even
there, the servers are static enough that having additional overhead for
each read isn't worth it.

- Marc

2009/2/27 Pavel Aleksandrov 

>
> I used "naive" for the standard method, because it's described as such
> in many places where they talk about these algorithms. As I said in the
> previous message, we don't expect the instances to go much up or down,
> so using the standard hashing may be OK for what we need. My question
> was about the overhead - apparently the module recalculates each time
> where everything should go and this involves a lot of hashing for each
> server, and that translates into CPU load on the web nodes.
>
> On 27 Feb, 17:08, Brian Moon  wrote:
> > On 2/27/09 9:00 AM, Pavel Aleksandrov wrote:
> >
> > > Never mind the PHP, it's a topic I don't want to discuss :)
> >
> > > About the changes - the only change that made this impact was changing
> > > the hash distribution method. We are currently using the new memcache
> > > instances, but with the standard, naive method and there are no
> > > negative effects on the load of the web nodes. The moment we switch to
> > > the consistent method the load jumps.
> >
> > Well, I don't use consistent hashing.  I guess I am naive.  I also
> > have not heard of this problem before however.
> >
> > --
> >
> > Brian.
>
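The overhead Pavel describes comes from rebuilding the consistent-hash ring on every request. A minimal ketama-style sketch (in Python rather than PHP, with invented server addresses) shows the shape of the work: building the ring means hashing many points per server, so it should be done once and reused, while a lookup against a built ring is just a binary search.

```python
import bisect
import hashlib

def build_ring(servers, replicas=160):
    """Ketama-style ring: many hash points per server, sorted once."""
    ring = []
    for server in servers:
        for i in range(replicas):
            point = int(hashlib.md5(f"{server}-{i}".encode()).hexdigest()[:8], 16)
            ring.append((point, server))
    ring.sort()
    return ring

def lookup(ring, key):
    """Walk clockwise to the first hash point at or after the key's hash."""
    h = int(hashlib.md5(key.encode()).hexdigest()[:8], 16)
    idx = bisect.bisect(ring, (h,)) % len(ring)
    return ring[idx][1]

# Invented server addresses for illustration.
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
ring = build_ring(servers)  # the expensive part: do it once, not per request
print(lookup(ring, "user:42"))
```

If a client library instead calls something like `build_ring` on every read, the per-request hashing cost scales with servers times replicas, which matches the CPU jump reported in this thread.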


Re: Watch variable Proposal

2009-02-16 Thread Marc Bollinger
Henrik is completely right about your idea being beyond the scope of what
memcached is, but what you might want to look at is something like a
Javascript XMPP library (http://blog.jwchat.org/jsjac/ for example). What
you're talking about is basically a publish/subscribe model in the browser,
and using XMPP would allow you to avoid reinventing the wheel in a
couple of places.

- Marc

On Mon, Feb 16, 2009 at 6:34 AM, Henrik Schröder  wrote:

> Memcached is good because it does one thing, and that one thing very well.
> What you want is something different, and the best way to solve your problem
> is to find a piece of software that does that other thing. In your case, you
> should probably be looking for a message queue instead of trying to convert
> a distributed cache into something it is not.
>
>
> /Henrik
>
>
> On Mon, Feb 16, 2009 at 10:24, luka8088  wrote:
>
>>
>> Hi, I have an idea: for programs (scripts) that run for a very long
>> time and need to check the same variable on the memcached server
>> multiple times, it would help to be able to tell the server: notify
>> me when a certain variable changes...
>>
>> For example, if we want to build an Ajax chat server, we could make
>> the Ajax request hang in the background until someone writes
>> something, i.e. until some variable changes. Right now that requires
>> a loop repeatedly getting the same variable; implementing this option
>> in the protocol would cut traffic, because the client would not ask
>> the server for that variable's value over and over, but would instead
>> wait for the server to tell it...
>>
>> I think this would be the best solution for chats and similar cases.
>> If there is some other way that I don't know of, please let me know,
>> because I need it :)
>>
>> Thank you ! :)
>
>
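For reference, a rough Python sketch of the polling loop being proposed away. `FakeClient` is an invented in-memory stand-in for a memcached client (so the example runs without a server); its `gets` mimics reading a value together with a CAS token, which a poller can compare to detect a change.

```python
import time

# FakeClient is an invented stand-in for a memcached client, so this
# sketch runs without a live server; gets() mimics a CAS-token read.
class FakeClient:
    def __init__(self):
        self._data = {}
        self._cas = {}

    def gets(self, key):
        return self._data.get(key), self._cas.get(key, 0)

    def set(self, key, value):
        self._data[key] = value
        self._cas[key] = self._cas.get(key, 0) + 1

def wait_for_change(client, key, last_cas, timeout=1.0, interval=0.01):
    """Poll until the key's CAS token moves past last_cas, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value, cas = client.gets(key)
        if cas != last_cas:
            return value, cas
        time.sleep(interval)
    return None, last_cas

client = FakeClient()
client.set("chat:room1", "hello")
_, cas = client.gets("chat:room1")
client.set("chat:room1", "hello again")  # another writer updates the key
value, new_cas = wait_for_change(client, "chat:room1", cas)
```

Every iteration of that loop is a round trip to the server, which is exactly the traffic a message queue or XMPP-style publish/subscribe channel avoids.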


Re: Curious about the "right" way to spool up new instances of memcached in a cloud...

2009-02-10 Thread Marc Bollinger
Agreed. From experience, in all likelihood, this is _not_ what you want to
do (it sounds like you're talking about maintaining a memcached server on
each app server). If you're even thinking about scaling by CPU, you should
be able to afford at least one m1-small server lying around with memcached
using all of the memory you can throw at it, and bring up app servers
separately, as needed. If for one reason or another you absolutely, positively
need to have memcached running locally, you're almost certainly better off
having a tiered caching strategy utilizing a caching system native to
whatever framework you're using; there was a discussion here about that a
week or so ago.

On Tue, Feb 10, 2009 at 6:37 PM, a.  wrote:

>
> can't you just have a memcached node and an appserver node? appservers
> started later could use the same memcached instance.
>
>
>
> On Feb 11, 2009, at 3:35 AM, Travis Bell wrote:
>
>
>> Hey Dustin,
>>
>> Keep in mind I wouldn't keep these instances running... they would be
>> brought up and down as the load needed them to be, so maybe I am
>> missing a key step (which trust me, I most certainly could be) but I
>> am not sure how the new instance would even get used based on what you
>> said.
>>
>> Example 1: load gets high so a new EC2 instance is triggered. Once
>> it's up, I reload the config on my load balancer so requests are split
>> across 2 instances, instead of 1. The original instance is going to
>> have hundreds of thousands of items cached when the second (new)
>> instance does not. Whenever a request gets forwarded to this new
>> instance it will result in a cache miss and have to go fetch the item
>> again.
>>
>> It seems to me having to re-fetch the item is a bit of a waste since
>> it's already cached on the first server... this is what I am trying to
>> solve.
>>
>> Regarding saturating memcached, it's less about that and more about
>> all the other things this server is doing behind the scenes, so moving
>> memcache to a new instance can spare the first box when it is needed.
>>
>> Thanks in advance for any more info you guys can provide!
>>
>>
>>
>
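Travis's cache-miss concern is the standard argument for consistent hashing when nodes come and go. A small sketch (invented key names) of what plain modulo placement does when a fourth server is added: roughly three quarters of the keys change servers, whereas a consistent-hash ring would move only about the quarter of the keyspace claimed by the new node.

```python
import hashlib

def modulo_server(key, n_servers):
    """Naive placement: hash the key, take it modulo the server count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_servers

keys = [f"item:{i}" for i in range(10000)]    # invented key names
before = [modulo_server(k, 3) for k in keys]  # three servers
after = [modulo_server(k, 4) for k in keys]   # add a fourth
moved = sum(b != a for b, a in zip(before, after))
print(f"{moved / len(keys):.0%} of keys change servers")
```

Either way, the keys that move still start cold on their new server, so autoscaled cache nodes always cost some misses while they warm up.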


-- 
Marc Bollinger
mbollin...@gmail.com


Re: facebook memcached

2008-12-13 Thread Marc Bollinger
+2. Optimism is a welcome change to this discussion, and the proposed
merge-athon is good to hear, as well.

On Sat, Dec 13, 2008 at 12:45 PM, Aaron Stone  wrote:

>
> Branch friction aside, Paul Saab's description of the interesting
> problems is really something worth expanding upon and learning from in
> context of a real-world implementation going after C10K scalability.
>
> For example, Paul notes that with thousands of open TCP connections,
> memcached can start using several extra gigabytes of memory for
> buffers. Well, that's interesting! I want to know more about that.
> What can we learn here?
>
> Paul notes that they switched to using UDP for GETs with
> application-level flow control. Only GETs? App flow control? That's
> good to know -- IIRC, when memcached over UDP was last being discussed
> there were advocates for SET over UDP and for various approaches to
> flow control. (For my part, I advocated for a GET variant with an
> offset+length parameter.)
>
> So while the devs work out the details and disagreements of merging
> code bases, let's also keep the communication channels open about the
> engineering decisions and experiences in this work. At the end of the
> day, if we haven't collectively advanced the state of the art and
> updated the best practices in our field, we're doing something wrong.
>
> Aaron
>
>
> On Fri, Dec 12, 2008 at 1:34 PM, steve.yen  wrote:
> >
> > fyi, Paul Saab's note on facebook's memcached improvements and git
> > repo (originally pointed out to me by Dustin)
> >
> > http://www.facebook.com/note.php?note_id=39391378919
> >
> >
>
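For context on the UDP GETs Paul describes: memcached's UDP transport, per its protocol.txt, prefixes each datagram with a simple 8-byte frame header (request id, sequence number, datagram count, reserved, all 16-bit network byte order) ahead of the ordinary text command. A minimal Python sketch of framing a single-datagram get (the key name is invented):

```python
import struct

def udp_get_datagram(request_id, key):
    """Frame an ASCII 'get' in the 8-byte UDP header from protocol.txt:
    request id, sequence number, datagram count, reserved, each a
    16-bit big-endian field."""
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + f"get {key}\r\n".encode()

packet = udp_get_datagram(1, "user:42")
# packet[:8] is the frame header; the rest is the familiar text command
```

The request id is what lets a client match replies to requests over a connectionless socket; the application-level flow control Facebook mentions would sit on top of framing like this.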