Build failure using clang-15

2022-10-11 Thread David Bohman
Specifically, using clang-15.0.2.

There are missing prototype errors and one instance of an unused variable.

Would you like a PR? I have built the changes against both next and master.
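
For anyone carrying a local fix until a PR lands: clang 15 promotes several
previously quiet diagnostics, and if the build adds -Werror the missing
prototypes and the unused variable become hard errors. The changes involved
are mechanical; a minimal sketch with made-up names (not the actual memcached
patch):

#include <stdio.h>

/* A function used only inside one .c file becomes static, so no external
 * prototype is required; anything genuinely exported gets a declaration in
 * the matching header; an unused variable is simply deleted. */

static void dump_counters(void) {   /* file-local helper: add "static" */
    printf("counters\n");
}

void report_status(void);           /* exported: declare it (normally in a header) */
void report_status(void) {
    dump_counters();
}

int main(void) {
    report_status();
    return 0;
}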



Can’t select Memcached option W3 Total cache

2022-06-19 Thread David
I have WordPress and Memcached installed, but I can't enable the Memcached 
option in the W3 Total Cache plugin. Why does this happen?



Re: warm restart

2019-12-09 Thread David Karlsen
-e /cache_state/memory_file <-- fails
-e/cache_state/memory_file <-- OK
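
A plausible explanation (an assumption about the deployment, not something
confirmed in this thread): in a Kubernetes args list, an item like
"- -e /cache_state/memory_file" arrives as a single argv element, so a
getopt-style parser takes everything after -e in that element as the value,
leading space included, and the leading space then becomes part of the path
memcached tries to open. A stand-alone sketch of that behaviour:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* what a container args list hands over when flag and path share one
     * list item: a single argv element containing "-e /cache_state/..." */
    char *fake_argv[] = {"memcached-demo", "-e /cache_state/memory_file", NULL};
    int opt;

    while ((opt = getopt(2, fake_argv, "e:")) != -1) {
        if (opt == 'e')
            /* prints "[ /cache_state/memory_file]" -- note the leading
             * space, which makes the later open() fail with ENOENT */
            printf("memory file path: [%s]\n", optarg);
    }
    return 0;
}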

On Mon, 9 Dec 2019 at 22:58, dormando wrote:

> example?
>
> On Mon, 9 Dec 2019, David Karlsen wrote:
>
> > OK, found it, apparently -e needs to be followed by path w/o any whitespace

Re: warm restart

2019-12-09 Thread David Karlsen
OK, found it, apparently -e needs to be followed by path w/o any whitespace

On Mon, 9 Dec 2019 at 21:54, dormando wrote:

> I'm not sure offhand. From my perspective I'd triple check what the
> path/file it's trying to open is (add a printf or something), else you're
> in container/kube territory and that's a bit beyond me. I assume it logs
> something on policy violations, or can be made to do so.

Re: warm restart

2019-12-09 Thread David Karlsen
No. It is there and writable

On Mon, 9 Dec 2019 at 21:32, dormando wrote:

> Is the directory missing?


Re: warm restart

2019-12-09 Thread David Karlsen
No.

On Mon, 9 Dec 2019 at 21:32, dormando wrote:

> Is the directory missing?


Re: warm restart

2019-12-09 Thread David Karlsen
OK, I hacked it together apt-installing some shared libs.
With the patch applied I get:

k logs test-memcached-0
failed to open file for mmap: No such file or directory

Which is a bit strange - should not the file be created dynamically if it 
does not exist?

On Friday, 6 December 2019 at 23:51:38 UTC+1, Dormando wrote:
>
> It's going to use some caps (opening files, mmap'ing them, shared memory, 
> etc). I don't know what maps to which specific thing. 
>
> That error looks like an omission on my part.. 
>
> mmap_fd = open(file, O_RDWR|O_CREAT, S_IRWXU); 
> if (ftruncate(mmap_fd, limit) != 0) { 
> perror("ftruncate failed"); 
> abort(); 
> } 
>
> missing the error check after open. 
>
> Try adding a: 
>
> if (mmap_fd == -1) { 
>   perror("failed to open file for mmap"); 
>   abort(); 
> } 
>
> between the open and if (ftruncate) lines, which will give you the real 
> error. I'll get that fixed upstream. 
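
Putting the fragments quoted above together, the whole sequence looks roughly
like this (a sketch reconstructed from the snippets in this thread, not the
exact upstream code):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* O_CREAT means the file itself is created on demand, but the directory it
 * lives in must already exist -- hence ENOENT ("No such file or directory")
 * when the mount path is missing or mangled. */
static void *open_memory_file(const char *file, size_t limit) {
    int mmap_fd = open(file, O_RDWR | O_CREAT, S_IRWXU);
    if (mmap_fd == -1) {                       /* the check that was missing */
        perror("failed to open file for mmap");
        abort();
    }
    if (ftruncate(mmap_fd, (off_t)limit) != 0) {
        perror("ftruncate failed");
        abort();
    }
    void *base = mmap(NULL, limit, PROT_READ | PROT_WRITE, MAP_SHARED, mmap_fd, 0);
    if (base == MAP_FAILED) {
        perror("failed to mmap memory file");
        abort();
    }
    return base;
}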



Re: warm restart

2019-12-09 Thread David Karlsen
OK, so I compiled with that change (doing the same steps as in the 
.travis.yml) - but it seems to use shared libraries. Is there any way to 
compile this statically?
I also created a PR with the same change: 
https://github.com/memcached/memcached/pull/587

On Saturday, 30 November 2019 at 18:03:23 UTC+1, David Karlsen wrote:
>
> Reading https://github.com/memcached/memcached/wiki/WarmRestart it is a 
> bit unclear to me if the mount *has* to be tmpfs backed, or it can be a 
> normal filesystem like xfs.
> We are looking into running memcached through Kubernetes/containers - and 
> as a tmpfs volume would be wiped on pod-recreation
>



Re: warm restart

2019-12-06 Thread David Karlsen
Does memcached use any of these capabilities?
https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/policy/container-capabilities/




Re: warm restart

2019-12-06 Thread David Karlsen
So far I am stuck on:

k logs test-memcached-0 
ftruncate failed: Bad file descriptor


  - memcached
- -m 768m
- -I 1m
- -v
- -e /cache-state/memory_file

-vvv does not reveal anything interesting.
What could be the cause of this?

On Saturday, 30 November 2019 at 18:03:23 UTC+1, David Karlsen wrote:
>
> Reading https://github.com/memcached/memcached/wiki/WarmRestart it is a 
> bit unclear to me if the mount *has* to be tmpfs backed, or it can be a 
> normal filesystem like xfs.
> We are looking into running memcached through Kubernetes/containers - and 
> as a tmpfs volume would be wiped on pod-recreation
>



Re: warm restart

2019-12-04 Thread David Karlsen
Sure will 
Follow https://github.com/bitnami/charts/issues/1685

On Wed, 4 Dec 2019 at 08:20, dormando wrote:

> If you succeed you should share with the class :)

Re: warm restart

2019-12-01 Thread David Karlsen
Thank you - that explains it well. I'll look around if I can create a
"durable" tmpfs in k8s via a storageclass :)


On Sun, 1 Dec 2019 at 04:25, dormando wrote:

> The disk file is memory mapped; that is the actual memory, now external to
> memcached. There's no flush at shutdown, it just gracefully stops all
> in-flight actions and then does a fast data fixup on restart.
>
> So it does continually read/write to that file. As I said earlier you can
> create an "equivalent" to writing the file at shutdown by moving the file
> after shutdown :)


Re: warm restart

2019-11-30 Thread David Karlsen
Won’t the cache be written to file at shutdown and not continuously while
running?

On Sun, 1 Dec 2019 at 03:58, dormando wrote:

> Hey,
>
> It's only guaranteed to work in a ram disk. It will "work" on anything
> else, but you'll lose deterministic performance. Worst case it'll burn out
> whatever device is underlying because it's not optimized for anything but
> RAM.
>
> So, two options for this situation:
>
> 1) I'd hope there's some way to bind mount an underlying tmpfs. With
> almost all container systems there's some method of exposing an underlying
> path, though I have a low opinion of Kube so maybe not.
>
> 2) It does just create two normal files: the path you give it + .meta file
> that appears during graceful shutdown. After shutdown you can copy these
> (perhaps with pigz or something) to a filesystem then restore to in-pod
> tmpfs before starting up again. It'll increase the downtime but it'll
> work.
>
> I guess.. 3) For completeness it also works on a DCPMM dax mount, which
> survive reboots and act as "filesystems". You'd need to have the right
> system and memory and etc.
>
> -Dormando


warm restart

2019-11-30 Thread David Karlsen
Reading https://github.com/memcached/memcached/wiki/WarmRestart it is a bit 
unclear to me whether the mount *has* to be tmpfs backed, or whether it can be 
a normal filesystem like xfs.
We are looking into running memcached through Kubernetes/containers - and a 
tmpfs volume would be wiped on pod-recreation.



Re: memcached node js client

2015-10-07 Thread David Sheldon
On Wed, Oct 07, 2015 at 03:57:50AM -0700, nith...@cloudion.net wrote:
> I am using memcached node js client. I am not getting the JSON output when 
> performing the get operation on it. Here is my code
> 
> memcached.get(id, function( err, result ){
>   if( err ) console.error( err );
>   console.log( result );
> });
> and the output is 
> 
> a:2:{s:7:"passkey";s:40:"8c779538d8c4ed54cf8a89d1e59518557febf0ac";s:7:"userkey";s:7:"5b9ce84";}
> How do I get the userkey from this output, or how do I get the output as JSON?


Looks like your problem is that you haven't put JSON into memcached.
That looks a bit like PHP serialisation format. Maybe call json_encode
in your PHP. If you can't do that, it looks like people have created
some nasty node.js php unserializers. Like
https://github.com/naholyr/js-php-unserialize

David



Any options to set eviction order preference when memory is full?

2014-02-14 Thread David Carlos Manuelda

I am wondering if there is any option (or any idea to implement it) to
somehow set an eviction order when memory is full.

That would be useful, for example, to store session data in PHP +
memcached.

If I could specify that other data should be evicted BEFORE session data
(when possible) to make room for new items, that would be a very good
benefit.

Of course, there may be other useful use cases.

For example, it could be set as a parameter when setting the key-value,
along with the expire time.

Can it be done, or does it already exist?


Re: Stale data handling with memcached/consistent hashing

2012-12-24 Thread David Walter
On Mon, Dec 24, 2012 at 3:59 AM, Yoga howac...@gmail.com wrote:

 Hi,

 Assume I have two memcached nodes (*node A, B*) at the beginning, and
 when I added a new *node C*, portion of keys are remapped and thanks to
 consistent hashing, only some of them.

 Let assume a value with key *foo* originally at server A, is now being
 to mapped to server C.

 When I finally removed node C, the key should be mapped to node A again,
 but at that time, node A only contain stale data.

 So, is flushing the data only way to solve this issue?


 Maybe, but then, how do you know that the node insert and removal have
taken place? If you have strict requirements, quite likely your
application's integration with memcached will have a synchronization
protocol to manage them.

What's the definition of stale in your environment? If the objects have a
timeout [ say of 10 minutes ] and the insert and removal of node C is
greater than the timeout [ 11 minutes ] all get( foo ) calls would fail
on node A and, I assume, your application would re-create foo and
set(foo, data) in A.


Re: Issue 301 in memcached: link fails for GCC atomic functions

2012-12-18 Thread David Walter
I thought there are linux kernel headers with inline assembler for that.

They seemed to work in tests on 32 bit platforms.

Admittedly these aren't the gcc __sync* family but would they work?

http://www.takatan.net/lxr/source/include/asm-i386/atomic.h?v=2.4.21-47.EL



On Tue, Dec 18, 2012 at 3:01 PM, memcac...@googlecode.com wrote:


 Comment #3 on issue 301 by dorma...@rydia.net: link fails for GCC atomic
 functions
 http://code.google.com/p/memcached/issues/detail?id=301

 Take a look through the configure script, it's supposed to actually test
 and use the atomic call, and fail back if it fails.

 The problem is that 32bit x86 doesn't have 16bit atomics, so they have to
 be detected and disabled. Except some versions of GCC (the one you have)
 claims to have them anyway, and it'll fail at runtime. 32bit is stuck with
 the mutex, but given how much memory you're limited to, I doubt you'd ever
 notice the performance drop. It just won't be hugely faster than .11.

 I have no idea why setting march=i686 makes it work right, and I don't
 have your OS to test on :/ Since that's all pretty old legacy stuff, we
 might have to insist you compile with your workaround.

 I'd consider a patch to the autocrap if it doesn't degrade any of our
 supported platforms.
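
The situation is easy to reproduce outside the build system. Something along
these lines is the kind of compile-and-link probe configure relies on (a
sketch, not the actual configure test): on targets without real 16-bit
atomics, some compilers still accept the code but emit a call to an
out-of-line helper, and the link step fails exactly as in this report.

#include <stdio.h>

int main(void) {
    unsigned short counter = 0;   /* 16-bit, which is where the trouble comes from */
    __sync_add_and_fetch(&counter, 1);
    __sync_bool_compare_and_swap(&counter, 1, 2);
    printf("counter = %u\n", (unsigned)counter);
    return 0;
}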




Re: How is it possible to locate server based on key hash code

2012-10-27 Thread David Walter
I think you want to read up on how consistent hashing works; in brief,
it's not a simple modulo-arithmetic mapping.

http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html

libmemcached has an option:

memcached_server_instance_st memcached_server_by_key(const memcached_st *ptr,
                                                     const char *key,
                                                     size_t key_length,
                                                     memcached_return_t *error)

Your language's client probably has a similar "server for key" or "server by key" method.
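
For example, with libmemcached in C (a sketch assuming a ketama-configured
client; the host names are placeholders and error handling is trimmed):

#include <stdio.h>
#include <string.h>
#include <libmemcached/memcached.h>

int main(void) {
    memcached_st *ptr = memcached_create(NULL);

    /* same consistent-hashing distribution a Ketama locator uses */
    memcached_behavior_set(ptr, MEMCACHED_BEHAVIOR_DISTRIBUTION,
                           MEMCACHED_DISTRIBUTION_CONSISTENT_KETAMA);
    memcached_server_add(ptr, "cache1.example.com", 11211);
    memcached_server_add(ptr, "cache2.example.com", 11211);
    memcached_server_add(ptr, "cache3.example.com", 11211);

    const char *key = "user:42:session";
    memcached_return_t error;
    memcached_server_instance_st srv =
        memcached_server_by_key(ptr, key, strlen(key), &error);

    if (srv != NULL && error == MEMCACHED_SUCCESS)
        printf("%s -> %s:%u\n", key, memcached_server_name(srv),
               (unsigned)memcached_server_port(srv));

    memcached_free(ptr);
    return 0;
}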


On Sat, Oct 27, 2012 at 11:47 AM, Ravi ravikiranva...@gmail.com wrote:
 I have 3 Memcached balanced Servers running for the Application .

 I googled and found out that, using the hash key, it's possible to locate a key
 from any of the servers

 I am using KetamaMemcachedSessionLocator set up with my code .


 I tried this way , but i am not sure if its correct or not .

 int mod = MemcachedClient.getAvailableServers().size(); // This value is 3

 int keyid = key.hashCode();


 Could anybody please tell me how I can locate the first and second server with
 this approach?



Where can I find the memcached windows version?

2012-08-12 Thread David
I don't know how to build the memcached source code; I just want to use 
memcached on a Windows 2003 server.

Where can I find the memcached Windows version?

Thank you very much!


Some set() calls taking way too long

2012-07-17 Thread David Morel
hi memcached users/devvers,

I'm seeing occasional slowdowns (tens of milliseconds) in setting some
keys on some big servers (80GB RAM allocated to memcached) which contain
a large number of keys (many millions). The current version I use is
1.4.6 on RH6.

The thing is once I bounce the service (restart, not flush_all),
everything becomes fine again. So could a large number of keys be the
source of the issue (some memory allocation slowdown or something)?

I don't see that many evictions on the box, and anyway, evicting an
object to make room for another shouldn't take long, should it? Is there
a remote possibility the large number of keys is at fault and splitting
the daemons, like 2 or more instances per box, would fix it? Or is that
a known issue fixed in a later release?

Thanks for any insight.

David Morel
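
One way to narrow this down is to time bare set() calls from a trivial
standalone client on the affected box, so application and network effects are
ruled out. A sketch using libmemcached (the host, item size and 10 ms
threshold are placeholders, not values from this thread):

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <libmemcached/memcached.h>

int main(void) {
    memcached_st *mc = memcached_create(NULL);
    memcached_server_add(mc, "127.0.0.1", 11211);

    char key[64], val[448];
    memset(val, 'x', sizeof(val));

    for (int i = 0; i < 100000; i++) {
        struct timespec t0, t1;
        snprintf(key, sizeof(key), "bench:%d", i);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        memcached_set(mc, key, strlen(key), val, sizeof(val), 60, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        if (ms > 10.0)   /* report the "tens of milliseconds" outliers */
            printf("set %d took %.1f ms\n", i, ms);
    }
    memcached_free(mc);
    return 0;
}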


Re: Some set() calls taking way too long

2012-07-17 Thread David Morel


On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:

 Few things that may help understanding your problem:

 1. What is the status of your slabs allocation, is there enough room to 
 all slabes ?


This happens when the memory gets close to full; however, there is not a 
large number of evictions.
I would expect evictions to be made whenever needed, but not for the process of 
making room for one object to take half a second.
 

 2. Do you see increase in the requests rate when your Memcached memory is 
 becoming full with objects ?


I don't think so - why would that be the case? It's application dependent, 
not server dependent, right?
 

 3. How many threads are configured ?


the default 4
 



 On Tue, Jul 17, 2012 at 1:11 PM, David Morel david.mo...@amakuru.netwrote:

 hi memcached users/devvers,

 I'm seeing occasional slowdowns (tens of milliseconds) in setting some
 keys on some big servers (80GB RAM allocated to memcached) which contain
 a large number of keys (many millions). The current version I use is
 1.4.6 on RH6.

 The thing is once I bounce the service (restart, not flush_all),
 everything becomes fine again. So could a large number of keys be the
 source of the issue (some memory allocation slowdown or something)?

 I don't see that many evictions on the box, and anyway, evicting an
 object to make room for another shouldn't take long, should it? Is there
 a remote possibility the large number of keys is at fault and splitting
 the daemons, like 2 or more instances per box, would fix it? Or is that
 a known issue fixed in a later release?

 Thanks for any insight.

 David Morel




 -- 
 Yiftach Shoolman
 +972-54-7634621
  


Re: Some set() calls taking way too long

2012-07-17 Thread David Morel

On 17 Jul 2012, at 15:33, Yiftach Shoolman wrote:

> Re #2 - when more objects are stored in the cache, the hit ratio should be
> higher-- the application might run faster, i.e. with less DB accesses.

no relation here, I was really mentioning bare set() calls timing,
regardless of anything else.

> Anyway, it sounds to me more related to slabs allocation, as only reset
> solves it (not flush) if I understood u well.

all the slabs are already allocated, with a moderate rate of evictions.
so it's not slab allocation either, it's just one key now and then,
regardless (it seems) of the key itself or the size of the object.

> Does it happen on any object size or on specific object size range ?

anything, really. puzzling, hey?

thanks!



genhash

2011-09-27 Thread John David Duncan

(Resending from the gmail account so Google will forward to the list)

Hi Trond,


commit e70f5ace86dc71a2683b884182fa46d57965a25a
Author: Trond Norbye trond.nor...@gmail.com
Date:   Fri Sep 23 09:35:44 2011 +0200

   Removed topkeys implementation

   Measurements showed memcached only able to handle about 50% of the
   operations with top keys on vs. when it was off.



OK, that's interesting.  I see the motivation for that.

But I had assumed genhash.h was part of the public (utilities) API,  
and I was planning to use it. Suddenly it's gone!






Re: genhash

2011-09-27 Thread John David Duncan
Well, I work at a company with lots of lawyers.  The easiest & fastest  
thing for me to do will be to write a hashtable.




On Sep 27, 2011, at 1:13 PM, Trond Norbye wrote:

I'm not a lawyer, but I would assume that since the code has been  
released into memcached under that license, you should still be able  
to use it...


I nuked it from the memcached core since we don't use it anymore in  
the core. Anyway. Dustin Sallings wrote it, so you could just send  
him an email about using it.


Cheers,

Trond






Re: Overriding the size of each slab page

2011-07-26 Thread David Mitchell

On Jul 25, 4:38 pm, dormando dorma...@rydia.net wrote:
 Uhhh. Can you post the output of `memcached -vvv` with your -I
 adjustments? If you reduce the max page size it most certainly reduces the
 number of slabs. It will increase the number of slab *pages* available.
 Which doesn't affect anything.


  Also, I do not understand the warning, It is not recommended to raise
  this limit above 1MB due just to performance reasons.  What exactly
  are the performance issues?

  If my default chunk size is 480 bytes and if I am storing items in 416
  byte chunks and 448 byte chunks, then, I can store more chunks in 10
  megabytes pages than I can in 10 kilobyte pages.  So, why wouldn't I
  opt to store my chunks in 10 megabyte pages (rather than 10 kilobyte
  pages or even 1 megabyte pages)?  The vast majority of my chunks are
  448 byte chunks.  So, it seems to me that I can use my memory more
  efficiently by opting for 10 megabyte slab pages.  What, if anything,
  is behind the performance warning?

 IF ANYTHING. So ominous.

 Using a non-brokeassed build of memcached, start it with the following
 examples:

 memcached -vvv -I 128k
 memcached -vvv -I 512k
 memcached -vvv -I 1M
 memcached -vvv -I 5M
 memcached -vvv -I 10M
 memcached -vvv -I 20M

 You'll see that the slab sizes get further apart. You're misunderstanding
 what everything is inside the slabber.

 - slab page is set (1M by default)
 - slab classes are created by starting at a minimum, multiplying by a
 number (1.2 by default?) and repeating until the slab class size is equal
 to the page size (1M by default)
 - when you want to store an item:
   - finds slab class (item size 416 bytes would go into class 8, so long
 as the key isn't too long)
   - try to find memory in class 8. No memory? Pull in one *page* (1M)
   - divide that page into chunks of size 480 bytes (from class 8)
   - hand back one page chunk for the item
   - repeat
 - memory between your actual item size, and the slab chunk size, is wasted
 overhead
 - the further apart slab classes are, the more memory you waste (perf
 issue #1)

 If you give memcached a memory limit of 60 megs, and a max item size of
 10 megs, it only has enough pages to give one page to 6 slab classes.
 Others will starve (tho it's actually a little more complicated than
 that, but that's the idea).

 -Dormando
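
The class-size progression described above is easy to see with a few lines of
arithmetic (a sketch: the 96-byte starting size and 1.25 growth factor are
stand-ins for the defaults; the point is only that bigger pages mean more,
and more widely spaced, classes):

#include <stdio.h>

int main(void) {
    const double growth = 1.25;               /* -f, the growth factor */
    const unsigned page_size = 1024 * 1024;   /* -I, 1M by default */
    unsigned size = 96;                       /* smallest chunk class (assumed) */
    int klass = 1;

    while (size < page_size) {
        printf("class %2d: chunk size %u\n", klass++, size);
        size = (unsigned)(size * growth);
        if (size % 8)                         /* keep chunks 8-byte aligned */
            size += 8 - (size % 8);
    }
    printf("class %2d: chunk size %u (one item per page)\n", klass, page_size);
    return 0;
}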


Hi Dormando,

Thank you for your detailed explanation.

I am storing roughly the same size items.  All of the items get
written to slab class 8, which has a chunk size of 480 bytes.  Most of
my items are 448 bytes.  Some are 416 bytes.  I can store roughly 8
million items with 4 GB of total memory.

My point above was that when I store the same size items, namely 480
byte chunks, it does not matter very much what value that I assign to
my slab page size.  If I set my slab page size to 128K, I get 273
chunks (480 bytes per chunk) in slab class 8, and I have more slab
pages.  If I set the slab page size to 1MB, I get 2,184 chunks (480
bytes per chunk) in slab class 8, and I have fewer slab pages.

Test 1:
1M slab page size
3700 max mem
3376314121 bytes
3219.9 megabytes
8,080,800 curr_items
3762.7 MB of resident memory used

Test 2:
1K slab page size
3700 max mem
3382172689 bytes
3225.5 megabytes
8,082,772 curr items
3768.4 MB of resident memory used

Test 3:
10M slab page size
3690 max mem
3370641668 bytes
3214.5 megabytes
8,060,805 curr items
3754.8 MB of resident memory used

With a 1K slab page size (i.e., test 2), I only get 2 chunks per page,
so I have 4,041,386 total pages (8,082,772 total chunks).  With 1M
slab page size (i.e., test 1), I get 2,184 chunks per page, so I have
3,700 total pages (8,080,800 total chunks).

I did notice that in test 2 that the system was using more virtual
memory--I had 160 MB of swap spaced used in test 2, and no swap space
used in test 1.  So, I am assuming that a 1MB slab page size is more
efficient, but I was questioning the peformance warning.

So, if I am ONLY using slab class 8 (480 bytes), is there any
advantage in setting my slab page size to 1 KB or to 10 MB?  I am
seeing a very slight edge to a 10 MB slab page size.  In test 3 (10 MB
slab page size), memcached used 8 megabytes less resident memory than
in test 1 (1 MB slab page size), and test 3 stored only 10,000 fewer
items, which is about 4.6 megabytes of data.

David


Overriding the size of each slab page

2011-07-25 Thread David Mitchell
On memcached version 1.4.5-1ubuntu1, there are two entries for the ‘-I’
parameter in the memcached(1) man page.

-I Override the size of each slab page in bytes.  In mundane
words, it adjusts the maximum item size that memcached will accept.
You can use the suffixes K and M to  specify  the size as well, so use
200 or 2000K or 2M if you want a maximum size of 2 MB per object.
It is not recommended to raise this limit above 1MB due just to
performance reasons.  The default value is 1 MB.

-I size  Override the default size of each slab page. Default is
1mb. Default is 1m, minimum is 1k, max is 128m. Adjusting this value
changes  the  item  size  limit.  Beware that this also increases the
number of slabs (use -v to view), and the overall memory usage of
memcached.

It seems to me that the first entry is misleading.  The parameter does
not adjust the maximum item size; rather, the parameter adjusts the
slab page size, and the number of items stored in each slab page.
These two entries should be combined into one entry.

The second entry could be further clarified by saying that reducing
the page size below the 1 megabyte default page size will result in an
increased number of slabs.

By the way, '-I 10M' does not work.  Neither does '-I 10m'.  I
discovered that you have to specify the byte size, i.e., '-I
10485760'.

Please correct my understanding, if I am missing something.

Also, I do not understand the warning, It is not recommended to raise
this limit above 1MB due just to performance reasons.  What exactly
are the performance issues?

If my default chunk size is 480 bytes and if I am storing items in 416
byte chunks and 448 byte chunks, then, I can store more chunks in 10
megabytes pages than I can in 10 kilobyte pages.  So, why wouldn't I
opt to store my chunks in 10 megabyte pages (rather than 10 kilobyte
pages or even 1 megabyte pages)?  The vast majority of my chunks are
448 byte chunks.  So, it seems to me that I can use my memory more
efficiently by opting for 10 megabyte slab pages.  What, if anything,
is behind the performance warning?

Thank you for your help.

David


Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread David Mitchell
On Jul 21, 1:41 pm, Trond Norbye trond.nor...@gmail.com wrote:
 Are you running out of virtual memory ;)


Well, the problem is that memcached will use swap when it runs out of
resident memory.  When swap space fills up, memcached will crash under
load.

Yesterday, I had the -m option set to 3700 (or 3700 megabytes), since
I have a 4GB system.  But, I started getting evictions when the
dataset reached the size of 3219.9 megabytes.  As I mentioned above, I
started getting evictions at 3763 megabytes of RSS and 3834 megabytes
of VSZ.  Is the -m option the size of the dataset or is it the size of
resident memory?

Today, I increased the -m option to 8000 (or 8000 megabytes) to see
what would happen.  I only have 3954 megabytes total memory in the
system.  Now, memcached is filling up the swap space.  I assume that I
will start getting evictions when the virtual memory is full.

It seems to me that I should avoid touching the swap space, since
memcached can become unstable when using swap space.  But, last week,
I got into trouble because I set the -m option close to the total
available memory on the system, and I guess that I had the value set
too high, since the swap space filled up and memcached crashed.
Today, I am trying to duplicate the issue that I saw last week.
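
For what it's worth, the numbers the server itself reports can be
watched directly; a rough sketch with python-memcached (the address is
a placeholder):

import memcache

# "limit_maxbytes" is the -m value; "bytes" is what the server counts as
# stored item data. -m caps the slab memory used for items, not the
# process's resident size, so RSS can sit above -m while "bytes" stays
# below it.
mc = memcache.Client(["127.0.0.1:11211"])
for server, stats in mc.get_stats():
    used = int(stats["bytes"])
    limit = int(stats["limit_maxbytes"])
    print("%s: %d / %d bytes (%.1f%%), evictions=%s"
          % (server, used, limit, 100.0 * used / limit, stats["evictions"]))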

David


Re: Get multi error - too many keys returned

2010-07-22 Thread David Morel
On 21 juil, 10:59, Guille -bisho- bishi...@gmail.com wrote:
 On Wed, Jul 21, 2010 at 10:37, dormando dorma...@rydia.net wrote:
  I'm having an issue with my memcached farm. I'm using 3 memcached
  servers (v1.2.6) and python-memcached client (v1.43).

  When using the get_multi function, memcached sometimes returns keys that I
  wasn't asking for. Memcached can return less data than expected (if a
  key-value does not exist), but the documentation says nothing about
  returning more data than it should. What can cause this?

 We have experienced similar behaviours with php memcache under heavy
 load. Sometimes the clients seem to mix up replies from different
 requests.

This happens when your client forks and forgets to reopen a
connection, but re-uses the one that was open before the fork.
As a result, several clients use the same socket, which ends up
interleaving the traffic.
This is completely evil, and not really about memcache (it will happen
just the same with any network protocol); it is solved by
disconnecting and reconnecting.
Also, it is only visible under heavy load because this is when you
have a chance of seeing actually concurrent requests despite the very
short roundtrip of a typical memcached query.
Solutions: #1 make very sure you open the connection after the fork,
or #2 have a simple check in your memcached layer that checks the PID
hasn't changed. If it has, do a disconnect_all() or whatever it's
called in your language, and then run your queries. There is some
overhead when you adopt solution #2, but #1 is not always applicable
if you have spaghetti code and/or lax coding policies.
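
A minimal sketch of solution #2 with python-memcached (the wrapper and
its names are made up; adapt it to whatever layer owns your client):

import os
import memcache

class ForkSafeClient(object):
    # Reconnect whenever the process ID changes, so a forked child never
    # keeps using the socket it inherited from its parent.
    def __init__(self, servers):
        self.servers = servers
        self.pid = os.getpid()
        self.client = memcache.Client(servers)

    def _check_pid(self):
        if os.getpid() != self.pid:
            self.client.disconnect_all()
            self.client = memcache.Client(self.servers)
            self.pid = os.getpid()

    def get(self, key):
        self._check_pid()
        return self.client.get(key)

    def set(self, key, value, time=0):
        self._check_pid()
        return self.client.set(key, value, time)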

David


Re: Using PCIe SSDs instead of RAM

2010-07-14 Thread David Raccah
Can you also send me your patch?  We have been waiting for the storage
engine, but we are not close to maxing out our systems yet.

Thanks,
David

On 7/13/10, Mitch gmi...@gmail.com wrote:
 Hi Marten!

 I have developed a patch for memcached 1.4.x that splits memcached's
 slab store into metadata and data bits, so that the key/values can
 live on flash without a tremendous performance penalty.  Ultimately, I
 predict the best solution will be to use the storage engine branch and/or
 Northscale's membase, but for the time being the patch works pretty
 well.  I'll send you a private email with more info.

 thanks!
 Mitch (from Fusion-io)

 On Jul 9, 10:01 am, Marten Lehmann coolcoyo...@googlemail.com wrote:
 Hello,

 I know that memcached is designed to get its speed from the fast
 access to RAM. But RAM is still very expensive - even with the amount
 of RAM you get for the same money increasing every year.

 When I thought of using PCIe SSDs instead of RAM I wasn't doing this
 with regard to persistence of objects. I just noticed that Fusion-io's
 ioDrives are working with near-RAM speed, having the PCIe bus as
 the only bottleneck in speed (don't mix them up with SATA SSDs). An
 ioDrive 160 GB with SLC memory is available for less than $6,000 and
 is capable of performing more than 100,000 random IOPS (read and write),
 whereas with ECC RAM you'd have to pay a multiple of that amount to
 get the same resources.

 I don't know of any way to use a block device (like the ioDrive) as
 RAM; you can only use RAM as a block device (which doesn't help in
 this situation). So for the emerging market of PCIe SSDs (many high
 performance databases are using this as replacement for RAID 10 arrays
 and large RAM) it would be necessary to extend or branch memcached to
 support SSD block devices.

 Has someone started on that? Is this possibly already on the roadmap,
 or did the maintainers refuse to extend memcache with this option for
 a reason?

 Btw.: We are using memcached in conjunction with nginx as a web proxy
 to our backend webservers to cache images and other static files,
 which improves performance a lot. But 64 GB of RAM is much more
 expensive than 160 GB of an ioDrive PCIe SSD.

 Kind regards
 Marten


Problems with memcache running on virtual ethernet interface

2010-06-11 Thread David Novakovic
Hey,

We have two VPS running on linode with a virtual ethernet connection
between them. The ifconfig info looks like the following.. (note the
name of the interface)

eth0:0Link encap:Ethernet  HWaddr fe:fd:ad:e6:95:fb
  inet addr:192.168.141.205  Bcast:192.168.255.255  Mask:
255.255.128.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  Interrupt:28

Memcached seems to run OK when it is listening on 127.0.0.1, but as soon
as I switch to the interface above it becomes unbearably slow!
This means I can't run my memcached servers in a cluster.
Even when I telnet in to run a stats command, sometimes the connection is
closed by memcached after a long pause, before it even sends a
response.

When I did have each server running its own memcache instance on
127.0.0.1 (i.e. not clustered), I noticed the following coming up in the
log right before I had to restart memcache to get it to work again. Does
it mean anything to anyone? I notice some people have interesting ways
to deal with this issue, like the following node.js snippet I found
on github: http://gist.github.com/322407

r...@li159-251:~# tail -f /var/log/memcached.log
event_add: No such file or directory
event_add: No such file or directory


Thanks in advance for any insights!

David


Re: Problems with memcache running on virtual ethernet interface

2010-06-11 Thread David Novakovic
Sorry yeah, IP alias.

I actually think I may have found the source of my problem. I was
using a connection pooler that had about max_conns connections per
worker process! So despite not getting any traffic to the box the
memcached connections would have been saturated.

This doesn't explain the errors in my log though...

I'll keep updating with my progress thanks.

D

On Jun 12, 12:23 am, Vladimir Vuksan vli...@veus.hr wrote:
 Can you send the command you use to invoke memcached? eth0:0 is
 not really a virtual interface but an IP alias.

 Vladimir



 On Fri, 11 Jun 2010, David Novakovic wrote:
  We have two VPS running on linode with a virtual ethernet connection
  between them. The ifconfig info looks like the following.. (note the
  name of the interface)

  eth0:0    Link encap:Ethernet  HWaddr fe:fd:ad:e6:95:fb
           inet addr:192.168.141.205  Bcast:192.168.255.255  Mask:
  255.255.128.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           Interrupt:28

  Memcached seems to run OK when it is listening on 127.0.0.1, but as soon
  as I switch to the interface above it becomes unbearably slow!
  This means I can't run my memcached servers in a cluster.
  Even when I telnet in to run a stats command, sometimes the connection is
  closed by memcached after a long pause, before it even sends a
  response.

  When I did have each server running its own memcache instance on
  127.0.0.1 (i.e. not clustered), I noticed the following coming up in the
  log right before I had to restart memcache to get it to work again. Does
  it mean anything to anyone? I notice some people have interesting ways
  to deal with this issue, like the following node.js snippet I found
  on github: http://gist.github.com/322407

  r...@li159-251:~# tail -f /var/log/memcached.log
  event_add: No such file or directory
  event_add: No such file or directory

  Thanks in advance for any insights!

  David


Re: how to run the two memcache in same windows xp machine

2010-03-26 Thread David Morel
On 25 mar, 06:44, Senthil Raja nsenthilraj...@gmail.com wrote:

 I am not getting a new service, even though I ran the following command.

 sc create memcached1 binPath=c:\memecached\memcached.exe -d runservice -p
 11311 start=auto DisplayName=memcached server 2

 please help me.


c:\memecached\memcached.exe ... your directory is memEcached, typo ?

David



Re: Windows 2003 64 Bit memory utilization?

2009-11-04 Thread David H.

On Nov 4, 12:00 am, Trond Norbye trond.nor...@sun.com wrote:
 [...]
 Other problems you may encounter is:

 * Declarations of variables when they are used, and not in the beginning
 of the scope (this includes inside for () etc)
 * Some places in the code use // for comments
 * uint64_t and the use of PRIu64 in printf's
 * bool as a datatype
 * nameless unions
 * some struct initializers where we address arrays and members directly..

 Do your compiler support that?

* Declarations - not supported(!!!)
* C99 comment style - supported
* uint64_t - supported (even in C mode, I think; for sure in C++ mode,
or this would be a showstopper for me)
* bool - can be faked
* anonymous unions - supported
* offsetof(), etc. - not sure (requires more research)

Of course, the major show-stopper here is the inline declarations.
However, the more interesting question is whether we really need to
compile memcached as C code under VC9.  I'm fairly sure most of these
issues will just go away if we compile as C++ code (that is, create a
VC9 project which tells Visual Studio alone to compile as C++, which
it prefers anyway).  I'm sure some people will howl and holler about
exceptions, vtables, RTTI, multiple-inheritance, and a whole raft of
other completely irrelevant issues, but it seems to me that this will
make the code *just work*.  Of course, I will have to play with it
quite a bit more to justify that claim.  My guess is that VC9 can be
made to shut up about scary uses of struct member access and 0-length
arrays with some strategically placed warning suppression pragmas (for
instance, VC9 is quite happy to both identify 0-length array members
as non-standard and to allow their use in C++ mode, while it
considers them totally kosher in C mode).  Is this a viable way
forward?

Note that I am relying on VC9 here and have no interest in VC8's
shortcomings (apologies to anyone who can't afford to upgrade to VC9).

Dave


Re: Windows 2003 64 Bit memory utilization?

2009-11-04 Thread David H.

On Nov 4, 3:16 am, Robert Buck buck.rober...@gmail.com wrote:
 Related to porting to C++, back when I helped port a large (8+ million
 line) code base to C++ we had tougher issues than what is faced within
 memcached, and the product was provably the better for it -- customers
 noticed the increased level of stability that resulted from addressing a
 host of latent bugs that C helped to hide. It is not terribly hard to
 port to C++, but it takes commitment from all parties involved. As for
 benefits, there were plenty of latent issues that ended up being resolved
 as a result. It was a painful few months, but even the die-hard C-fans
 appreciated the benefits.

I would also like to point out that memcached could easily remain a C-
within-C++ app which avoids all the C++ features while taking
advantage of the wider availability of conforming C++0x compilers than
C99 compilers.  There are no uses of 'class', and even the void*->T*
conversions can be trivially fixed with explicit C-style casts (and
one could argue that from a documentation perspective, this is A Good
Thing(TM)).  The only major C99 feature that might be missed is VLAs,
and I don't recall seeing any in the code.

Dave


Re: pylibmc vs python-libmemcached vs cmemcache

2009-09-23 Thread David Stanek

On Wed, Sep 23, 2009 at 3:59 PM, Jehiah Czebotar jeh...@gmail.com wrote:

 It seems there are 3 memcached libraries for Python now that wrap the
 C libmemcached for use in Python.

 pylibmc - http://lericson.blogg.se/code/category/pylibmc.html
 python-libmemcached - http://code.google.com/p/python-libmemcached/
 cmemcache - http://gijsbert.org/cmemcache/index.html

 Is anyone using these in a heavy production environment, are any more
 reliable than any others? I haven't seen much discussion about any of
 these in the list archives.

 cmemcache lists some known problems (even important things like
 crashing on disconnects)
 pylibmc seems newer and appears to have the most active development

 thoughts?


We (at my day job) have been happily using python-memcached[0] in
production for at least a year now. Our sites get a pretty good amount
of traffic. What is your definition of heavy?

[0] http://www.tummy.com/Community/software/python-memcached/

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: pylibmc vs python-libmemcached vs cmemcache

2009-09-23 Thread David Stanek

On Wed, Sep 23, 2009 at 4:27 PM, Jehiah Czebotar jeh...@gmail.com wrote:

 I've been happily using python-memcached as well for a long time and
 have contributed code to it; I am, however, specifically asking about
 python-libmemcached, the one based on libmemcached, not the pure Python
 client. (It is unfortunate that those two names are so similar.)


Any reason you are contemplating moving to a C-based library?


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: Clobbering updates CAS

2009-08-10 Thread David Sheldon

On Fri, Aug 7, 2009 at 4:37 AM, Dustin dsalli...@gmail.com wrote:
 On Aug 6, 6:26 am, Ren jared.willi...@ntlworld.com wrote:
 If the CAS operation succeeds, then it's the currently running task's
 responsibility to refresh the data in the cache; if it fails with
 RES_DATA_EXISTS, then someone else is taking care of the update.

  That's an interesting approach.  I'd only fear a failure to actually
 update the value causing stale data to stick around longer than you'd
 like.  Probably best to run those updates through your favorite job
 queue.

I see this as a good idea. I think that the failure to update should be
on the step where it puts back the original data with the extended
expiry. If it cannot do this, then starting again will either read the
stale data with extended expiry, or updated data. Either way, that
thread will not update it.

Can you tell it not to use CAS (i.e. force the write) when it writes the
updated data with new expiry after regeneration? If so, then it's
unlikely that stale data will hang around longer.
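
A rough sketch of that claim-then-refresh idea, using python-memcached's
gets()/cas() (the client remembers the CAS token after gets(); the
cache_cas flag, the tuple layout and the helper name are all assumptions
here, and other clients expose CAS differently):

import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"], cache_cas=True)

def read_through(key, regenerate, ttl=300, grace=60):
    entry = mc.gets(key)                      # also records the CAS token
    if entry is None:
        entry = (regenerate(), time.time() + ttl)
        mc.set(key, entry)
        return entry[0]
    value, soft_expiry = entry
    if time.time() > soft_expiry:
        # Stale: claim the refresh by re-writing the stale entry with a
        # pushed-out soft expiry. Only one caller's cas() succeeds; the
        # others keep serving the stale value while the winner regenerates.
        if mc.cas(key, (value, time.time() + grace)):
            value = regenerate()
            mc.set(key, (value, time.time() + ttl))   # plain set, as suggested
    return value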

David


Re: Key check feature request

2009-08-10 Thread David Sheldon

On Mon, Aug 10, 2009 at 11:05 AM, Stefan de Konink ste...@konink.de wrote:
 For a webserver plugin I am looking for something so trivial that I
 wonder why such a check doesn't exist. I would like to know if a
 specific key is in use. I don't want to write into memcached, and I
 don't want to alter data to check if the key is present. That could
 cause a race condition.

Can you explain what you want to use the key check for? I can't see
any use for a key check, as the key may expire between the check and
whatever you want to do with it, which produces a race
condition.
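
To make the race concrete, a small python-memcached sketch; the safe
version simply fetches once and branches on the miss:

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

# Racy: the key can expire or be evicted between these two calls.
if mc.get("some_key") is not None:
    value = mc.get("some_key")       # may already be None again

# Race-free: fetch once, branch on the result.
value = mc.get("some_key")
if value is None:
    value = "regenerated"            # stand-in for the real rebuild step
    mc.set("some_key", value)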

David


Re: memory allocation, does memcached play fair ?

2009-06-11 Thread David Rolston
I'm not following your argument.  First off, sessions in general are not a
good candidate for caching.  Caching is in my opinion best reserved for data
that is primarily static, or has a high read-to-write ratio.  Memcache, when
used to front MySQL for example, prevents the overhead and contention
that can occur in the database when you have a lot of selects repeatedly
returning the same result set.  When you're reading much more frequently
than you're writing or changing data, then memcache works great.  If you
have volatile data, as sessions tend to be, then caching becomes an
impediment and adds complexity.  In the not too distant past, I did
implement a custom session handler in PHP that stores the session data in
memcache.  While this worked well and was reliable, the problem is that
if memcached is restarted, all the sessions are lost.

If the sessions are inherently file based, a NetApp appliance and a good
network infrastructure is a great way to go --- the simplicity of NFS with
the reliability of RAID 6, and you can get performance limited only by the
money you want to spend on the networking --- anywhere from commodity stuff
to Fibre Channel or an iSCSI SAN.

On Wed, Jun 10, 2009 at 12:08 PM, tbs theblues...@gmail.com wrote:

 
 I hear you, I hear you, and that is what I wanted to do too. However,
 it was impossible to get that past the pointy-headed bosses. In fact,
 even getting to use memcached in the first place was a difficult
 enough battle. Our sessions currently are flatfile over NFS (with
 10 million page views a month) and NFS has a hotline to tears and 911!
 A pure DB solution, with our growth path, would have only been a temp
 fix and ended up in the same place as NFS. Using memcached+db is
 perfect, and if I could have pinched off a small chunk of memory from
 all the front end servers and created a nice little pool, things would
 have been sweet; however, 'da management' has [unqualified]
 reservations, and so I am forced to do this 'all on one box' method
 just to get us going. When it works out perfectly and I am proven
 right, then I get some leverage to spread the memory load across
 multiple servers.
 Oh, the joy of engineering versus office politics :(

 Richard


  Brian.


Re: Memcached don't serve my ads

2009-05-06 Thread David Stanek

On Tue, May 5, 2009 at 9:47 AM, simpsone...@googlemail.com
simpsone...@googlemail.com wrote:

 Hi, I'm using memcached 1.1.12 and Drupal 5.x with the ad module.
 This was working very well, but since I made a mistake and deleted my
 vhost.conf file, memcached doesn't show my blocks with ads in them.
 Are there any important settings that have to be set, like open_basedir
 or something else, that could be preventing my ads from showing?

 Are there any hints ?


You may be better off asking on the Drupal list since memcached
doesn't serve ads.

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: memcache-top

2009-04-23 Thread David Stanek

2009/4/23 Jose Celestino j...@co.sapo.pt:

 On Qua, 2009-04-22 at 13:00 -0700, gf wrote:
 Hi. It's a really good idea, but your code is not good, IMHO.

 What's wrong with his code? Can you elaborate on that?


It uses sane indents and whitespace. What crap.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: Maintainer?

2009-04-07 Thread David Stanek

Why not just use this list?

On 4/7/09, Joseph S. Testa II jte...@positronsecurity.com wrote:

 Hi,

 I am interested in contacting the maintainer(s) of memcached.  A
 week ago, I sent e-mail to Brad Fitzpatrick, Anatoly Vorobey, and
 Steven Grimm (as they are listed in the packaged AUTHORS /
 CONTRIBUTORS files), but I have not received a response.  Are they
 still the maintainers, or did the responsibility pass to someone else?

 Thanks,
 - Joe


-- 
Sent from my mobile device

David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: Memcached crashing under load

2009-03-15 Thread David Stanek

On Sun, Mar 15, 2009 at 8:49 PM, meppum mmep...@gmail.com wrote:

 The script:
 try:
        import cmemcache as memcache
 except ImportError:
        import memcache

 c = memcache.Client(['127.0.0.1:11211'])
 c.set('abc', '123')
 c.disconnect_all()

 for i in range(2):
        if i % 1000 == 0:
                print 'iteration: %s' % i

        c = memcache.Client(['127.0.0.1:11211'])
        c.get('abc')
        c.disconnect_all()


This script will not run multiple memcached requests in parallel. Is
that what you were going for?
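
For comparison, a sketch that does overlap requests -- one client (and
therefore one socket) per thread, with the thread and iteration counts
picked arbitrarily:

import threading
import memcache

def worker(n):
    # One client per thread so each thread has its own socket.
    c = memcache.Client(["127.0.0.1:11211"])
    for _ in range(n):
        c.get("abc")
    c.disconnect_all()

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()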

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


Re: UNSUBSCRIBE

2009-01-30 Thread David Rolston
Who could possibly differ with such a logical argument?  The xenophobia was
an especially nice touch, I thought.

It's just so darn hard to use google, and obtain information about
unsubscribing from a google group.  I really struggled until I came up with
this magical phrase:

unsubscribe from google groups

I put it into this crazy American thing called google.com.  There was a
humming, and the lights dimmed, and lo and behold a webpage was returned
with a list of links.  The first one was:

http://groups.google.com/support/bin/answer.py?hl=enanswer=46608



On Thu, Jan 29, 2009 at 10:03 PM, Gaurav Arora gaura...@tyroo.com wrote:

  I'm sorry. Please forgive me. I forgot that I wasn't l33t enough to join
 a group that talks about a second level caching solution.



 Let me take a guess, you're American right? Heh. No wonder.



 --

 Regards,

 Gaurav Arora
   --

  From: memcached@googlegroups.com [mailto:memcac...@googlegroups.com] On
  Behalf Of David Rolston
  Sent: Friday, January 30, 2009 10:50 AM
  To: memcached@googlegroups.com
  Subject: Re: UNSUBSCRIBE



 Why do people who are too stupid/lazy to read the simplest directions join
 groups like this.  boggle

 On Thu, Jan 29, 2009 at 9:02 PM, NICK VERBECK nerdyn...@gmail.com wrote:


 Unsubscribe here:

 http://groups.google.com/group/memcached/subscribe



 On Thu, Jan 29, 2009 at 9:52 PM, Gaurav Arora gaura...@tyroo.com wrote:
  I have been wondering how to do the same too …
 
 
 
  --
 
  Regards,
 
  Gaurav Arora
 
  
 
  From: memcached@googlegroups.com [mailto:memcac...@googlegroups.com] On
  Behalf Of Ben Standefer
  Sent: Friday, January 30, 2009 5:54 AM
  To: memcached@googlegroups.com
  Subject: UNSUBSCRIBE
 
 
 
  UNSUBSCRIBE


   --
 Nick Verbeck - NerdyNick
 
 NerdyNick.com
 SkeletalDesign.com
 VivaLaOpenSource.com
 Coloco.ubuntu-rocks.org





Re: UNSUBSCRIBE

2009-01-29 Thread David Rolston
Why do people who are too stupid/lazy to read the simplest directions join
groups like this.  boggle

On Thu, Jan 29, 2009 at 9:02 PM, NICK VERBECK nerdyn...@gmail.com wrote:


 Unsubscribe here:

 http://groups.google.com/group/memcached/subscribe


 On Thu, Jan 29, 2009 at 9:52 PM, Gaurav Arora gaura...@tyroo.com wrote:
  I have been wondering how to do the same too …
 
 
 
  --
 
  Regards,
 
  Gaurav Arora
 
  
 
  From: memcached@googlegroups.com [mailto:memcac...@googlegroups.com] On
  Behalf Of Ben Standefer
  Sent: Friday, January 30, 2009 5:54 AM
  To: memcached@googlegroups.com
  Subject: UNSUBSCRIBE
 
 
 
  UNSUBSCRIBE



 --
 Nick Verbeck - NerdyNick
 
 NerdyNick.com
 SkeletalDesign.com
 VivaLaOpenSource.com
 Coloco.ubuntu-rocks.org



Re: PHP Memcache extension

2008-11-03 Thread David Rolston
Yes, AFAIK the default for memcached is 1024 connections.  I graph
connections using a munin plugin, and they have never been higher than 400.
Additionally, when this has occurred it has only affected one Apache in the
cluster at a time.  I forgot to mention that the only way we have found to
correct this is to restart Apache.  It does not appear to autocorrect.  My
best guess at what is happening is that the connections are stale, but the
client believes that the connections are still active.
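
One way to watch the server-side counters while reproducing this (a rough
sketch in Python simply because the client is handy; the address is a
placeholder):

import memcache

# curr_connections is what the server sees right now; compare it with the
# -c limit the daemon was actually started with (1024 is only the usual
# default).
mc = memcache.Client(["127.0.0.1:11211"])
for server, stats in mc.get_stats():
    print("%s: curr_connections=%s total_connections=%s"
          % (server, stats["curr_connections"], stats["total_connections"]))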

On Mon, Nov 3, 2008 at 11:18 PM, dormando [EMAIL PROTECTED] wrote:


 You sure you're not hitting the connection limit for memcached?

 you should do a little more troubleshooting to discover exactly what's
 going on there...

 -Dormando

 On Mon, 3 Nov 2008, [EMAIL PROTECTED] wrote:

 
  We are using memcache in a couple different ways, using the PHP client
  with mod_php.  Once in a while it seems that something gets hinky in the
  apache/mod_php/memcache client using persistent connections, and calls
  to $memcache->getServerStatus() will return false, which in our code
  causes an exception.
 
  I have a few questions:
 
  -Is there a way to configure the number of persistent connections to
  maintain?
  -Any other settings related to persistent connections?
 
  We have a small farm of 3-4 webservers currently going against a
  single memcached, and when I look for a connection count using lsof or
  netstat I get a number in the 255ish range -- is this a coincidence,
  or indicative of some built-in setting?
 
  I know there are a lot of variables here, our basic config:
 
  Centos5
  httpd-2.0.52-41, running worker
  mod PHP 5.2.6
 
  For now, I've disabled persistent connections, but I'm curious if this
  is going to significantly limit our scalability as time goes on.
  Recommendations or advice of others using memcache and php is
  appreciated.
 
 
 
 
 
 
 
 
 



Re: Memcache serving 24gig a node?

2008-10-31 Thread David Stanek

Spreading across more boxes also makes you more fault tolerant. If one
or two go down, your database (or other expensive resource) would still
be OK.
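
For illustration, the client side of that is just a longer server list --
keys are hashed across whatever is listed, so losing one node only costs
that node's share of the cache (the addresses are made up):

import memcache

mc = memcache.Client([
    "10.0.0.1:11211",   # hypothetical nodes; use your own host:port pairs
    "10.0.0.2:11211",
    "10.0.0.3:11211",
])
mc.set("user:42", "some cached value")
print(mc.get("user:42"))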


On Fri, Oct 31, 2008 at 8:47 AM, Stephen Johnston
[EMAIL PROTECTED] wrote:
 I think the major point of consideration is this: if memcached had a must-
 have upgrade tomorrow, what would the impact of taking down one of those
 24g instances to upgrade be? If that makes you cringe, then you should
 probably reduce the size of each instance, even if you are just running
 them on the same machine.

 -Stephen
 On Thu, Oct 30, 2008 at 11:40 PM, Andy Hawkins [EMAIL PROTECTED] wrote:

 I've got around 200 gigs of RAM; I'm running 6 nodes, all set around
 24 gigs each.

 Is this appropriate or should I cluster them out?

 ~@


-- 
David
http://www.traceback.org