On May 21, 5:24 pm, punkish wrote:
> Yeah, pretty good
>
> Benchmark: timing 1 iterations of query_dbh, query_mem...
> query_dbh: 5 wallclock secs ( 3.20 usr + 0.99 sys = 4.19 CPU) @
> 2386.63/s (n=1)
> query_mem: 6 wallclock secs ( 2.37 usr + 1.26 sys = 3.63 CPU) @
> 2754.82/s (n=1)
One of the memcached configuration parameters is
-r : maximize core file limit
My understanding is that, in the event of a server crash, memory is
dumped to a core file, and this setting raises the allowed size of that
file as far as possible.
My question is: if I kill the server, will it not be able to dum
On May 21, 6:58 pm, dormando wrote:
> Ha ha! I installed Cache::Memcached::Fast, which seems to be a C based
> drop in replacement for C::M, and now I get the following results
> (this includes the get_multi method to get many keys in one shot)
>
> query_dbh: 5 wallclock secs ( 3.31 usr + 1.03 sys = 4.34 CPU) @
> 2304.15/s (n=1)
On May 21, 5:23 pm, dormando wrote:
..
>
> > Benchmark: timing 1 iterations of query_dbh, query_mem...
> > query_dbh: 6 wallclock secs ( 3.29 usr + 1.03 sys = 4.32 CPU) @
> > 2314.81/s (n=1)
> > query_mem: 54 wallclock secs (30.02 usr + 8.24 sys = 38.26 CPU) @
> > 261.37/s (n=1)
On May 21, 5:38 pm, Dustin wrote:
..
>
> I can't imagine why it wouldn't be worse. On each iteration, you
> compile a SQL query then block on network activity 20 times in the
> memcached case. You're doing almost the entire SQLite workload in the
> memcached case and then mixing in a gang of
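The per-key round trips Dustin describes can be collapsed into a single request with get_multi, as mentioned later in the thread. A minimal sketch (the server address and key list are assumptions matching the rest of this thread):

```perl
use strict;
use warnings;
use Cache::Memcached::Fast;

# Assumed local test server, as used elsewhere in this thread.
my $memd = Cache::Memcached::Fast->new({ servers => ['localhost:11212'] });

my @ids = (1 .. 20);

# One network round trip for all 20 keys instead of 20 blocking get() calls.
# get_multi returns a hashref of key => value for the keys that were found.
my $found = $memd->get_multi(@ids);

for my $id (@ids) {
    my $str = $found->{$id};
    # Keys absent from $found are cache misses and must still
    # fall back to the database.
}
```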
On May 21, 3:08 pm, punkish wrote:
> ds> Minor nit: You're not opening the file every time.
> ds>
>
> Interesting. I didn't realize that, and still don't. Anyway, that is
> unimportant for this discussion.
You open the file one time before looping through your tests.
> My worry is that I am
> The funny thing is, while in real production, the queries are not this
> simple, in most web apps I make, the queries are really not all that
> complicated. They do retrieve data from large data stores, but the SQL
> itself is relatively straightforward. Besides, none of the web sites I
> make ar
On May 21, 5:23 pm, dormando wrote:
> timethese($count, {
> 'query_mem' => sub {
> my $sth = $dbh->prepare($sql);
> my @res = ();
> for (@ids) {
> my $str = $memd->get($_);
> unless ($st
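The quoted snippet cuts off at `unless ($st`. A runnable reconstruction of this cache-aside benchmark might look like the following; the SQL, the key list, and the miss-handling logic are assumptions pieced together from the rest of the thread:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);
use DBI;
use Cache::Memcached::Fast;

my $dbh  = DBI->connect('dbi:SQLite:dbname=mem.sqlite', '', '',
                        { RaiseError => 1 });
my $memd = Cache::Memcached::Fast->new({ servers => ['localhost:11212'] });

my $sql   = 'SELECT str FROM t WHERE id = ?';   # table from earlier in the thread
my @ids   = map { int(rand(20_000)) + 1 } 1 .. 20;
my $count = 1;

timethese($count, {
    'query_mem' => sub {
        my $sth = $dbh->prepare($sql);
        my @res = ();
        for (@ids) {
            my $str = $memd->get($_);
            unless ($str) {
                # Cache miss: fall back to SQLite, then populate the cache
                # so later iterations can skip the database.
                $sth->execute($_);
                ($str) = $sth->fetchrow_array;
                $memd->set($_, $str);
            }
            push @res, $str;
        }
    },
});
```

Note that with a per-key get() inside the loop, every iteration still pays one network round trip per id, which is the point Dustin makes above.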
On May 21, 2011, at 1:40 PM, Dustin wrote:
pk> On May 21, 10:07 am, punkish wrote:
pk>
pk> I consistently get results such as above, while I expected the
pk> memcache to slowly fill up and speed up the queries way faster
pk> than only accessing the file based db. Whatever I am doing, it is
pk>
On May 21, 10:07 am, punkish wrote:
> I consistently get results such as above, while I expected the
> memcache to slowly fill up and speed up the queries way faster than
> only accessing the file based db. Whatever I am doing, it is far
> faster to open up the SQLite database every time and que
On May 21, 5:28 am, Ashu gupta wrote:
> Hi All,
>
> I have implemented memcached in my system, but I am facing a speed
> problem. I am using spymemcached as a client. One of my values is
> around 25 KB in size, but it is taking around 30-40 milliseconds to
> retrieve that key's value. Please sugges
I am trying to learn memcached, so I installed it on my laptop and
fired it up. I also created a simple SQLite db 'mem.sqlite' like so,
and filled the table with 20_000 random strings:
CREATE TABLE t (id INTEGER PRIMARY KEY, str TEXT)
I started memcached with a simple `memcached -p 11212 -m 48` an
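The setup described above could be scripted roughly like this; the string length and generation method are assumptions, since the original post does not say how the random strings were built:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=mem.sqlite', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t (id INTEGER PRIMARY KEY, str TEXT)');

my @chars = ('a' .. 'z', 'A' .. 'Z', '0' .. '9');
my $sth = $dbh->prepare('INSERT INTO t (str) VALUES (?)');

$dbh->begin_work;    # one transaction, so 20_000 inserts stay fast
for (1 .. 20_000) {
    # 32 characters per string is an assumption for illustration.
    my $str = join '', map { $chars[int rand @chars] } 1 .. 32;
    $sth->execute($str);
}
$dbh->commit;
```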
Hi All,
I have implemented memcached in my system, but I am facing a speed
problem. I am using spymemcached as a client. One of my values is
around 25 KB in size, but it is taking around 30-40 milliseconds to
retrieve that key's value. Please suggest why it is taking so much
time and what could be possibl