Re: Beginner issue

2019-03-27 Thread dormando
Yo,

-m doesn't take units, and apparently doesn't error if you give it a bad
string.

>  b'limit_maxbytes': 67108864,

Sadly, it's gone to its default of 64 megabytes of memory. Sorry about
that :(

Try "-m 64000" - you should be able to confirm with 'ps' that, as things
stand, you're not actually using the memory.
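
(As a quick check, a minimal pymemcache sketch - the same client used
later in this thread - to confirm the limit after a restart; the expected
numbers assume -m is parsed as megabytes:)

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))
limit = int(client.stats()[b'limit_maxbytes'])
print(limit, "bytes =", limit / 1024**3, "GiB")
# default (unparsed "-m 64G" string): 67108864 bytes = 0.0625 GiB
# with "-m 64000":                    67108864000 bytes = 62.5 GiB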

On Wed, 27 Mar 2019, Jerome Kieffer wrote:

> Hi Dormando,
>
> Thanks for your prompt feedback.
>
> [...]

Re: Beginner issue

2019-03-27 Thread Jerome Kieffer
Hi Dormando,

Thanks for your prompt feedback.

On Tue, 26 Mar 2019 11:06:10 -0700 (PDT)
dormando  wrote:

> Seems like this is a borderline use case, but it might still work for you.

From what I read on the internet, it looks like we are misusing the
tool ... but on the other hand I don't see why it shouldn't work ...

> How did you verify you found the cause? Can you share snapshots from
> "stats items" and "stats slabs" output after your test was run?
> 
> Memory isn't evenly distributed; it's assigned where objects actually
> exist, so either you've filled the server with other objects or
> there's been a miscalculation. It's possible you're hitting client
> timeouts or something.  

This is how I performed the test, using one of the many Python bindings (I
tested 3 of them, all with the same effect).


On the server side:
memcached -m 64G -I 16m -vv

On the client side:
Generate the data:

import numpy
shape = (2048, 2048)
nframes, scan = 1024, 0
data = [numpy.random.randint(0, 65530, numpy.prod(shape)).reshape(shape).astype("uint16")
        for i in range(nframes)]
print(len(data[0].tostring()), type(data[0].tostring()))

--> 8388608

Connection:
from pymemcache.client.base import Client
client = Client(('127.0.0.1', 11211))

The test for writing:
%time scan += 1; res = [client.set('scan%d_frame%d' % (scan, idx), frame.tobytes()) for idx, frame in enumerate(data)]
--> 8.5 s for 8 GB of data
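
(To see the loss directly, a small read-back sketch reusing client, scan,
and nframes from the code above; the keys follow the write test's naming
scheme:)

survivors = sum(
    client.get('scan%d_frame%d' % (scan, idx)) is not None
    for idx in range(nframes)
)
print(survivors, "of", nframes, "frames still cached")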

client.stats()
{b'pid': 12731,
 b'uptime': 390,
 b'time': 1553627843,
 b'version': b'1.5.6',
 b'libevent': b'2.0.21-stable',
 b'pointer_size': 64,
 b'rusage_user': 0.174296,
 b'rusage_system': 2.338119,
 b'max_connections': 1024,
 b'curr_connections': 1,
 b'total_connections': 2,
 b'rejected_connections': 0,
 b'connection_structures': 2,
 b'reserved_fds': 20,
 b'cmd_get': 0,
 b'cmd_set': 1024,
 b'cmd_flush': 0,
 b'cmd_touch': 0,
 b'get_hits': 0,
 b'get_misses': 0,
 b'get_expired': 0,
 b'get_flushed': 0,
 b'delete_misses': 0,
 b'delete_hits': 0,
 b'incr_misses': 0,
 b'incr_hits': 0,
 b'decr_misses': 0,
 b'decr_hits': 0,
 b'cas_misses': 0,
 b'cas_hits': 0,
 b'cas_badval': 0,
 b'touch_hits': 0,
 b'touch_misses': 0,
 b'auth_cmds': 0,
 b'auth_errors': 0,
 b'bytes_read': 8589977522,
 b'bytes_written': 0,
 b'limit_maxbytes': 67108864,
 b'accepting_conns': 1,
 b'listen_disabled_num': 0,
 b'time_in_listen_disabled_us': 0,
 b'threads': 4,
 b'conn_yields': 0,
 b'hash_power_level': 16,
 b'hash_bytes': 524288,
 b'hash_is_expanding': False,
 b'slab_reassign_rescues': 0,
 b'slab_reassign_chunk_rescues': 2,
 b'slab_reassign_evictions_nomem': 0,
 b'slab_reassign_inline_reclaim': 0,
 b'slab_reassign_busy_items': 0,
 b'slab_reassign_busy_deletes': 0,
 b'slab_reassign_running': False,
 b'slabs_moved': 1,
 b'lru_crawler_running': 0,
 b'lru_crawler_starts': 1020,
 b'lru_maintainer_juggles': 5931,
 b'malloc_fails': 0,
 b'log_worker_dropped': 0,
 b'log_worker_written': 0,
 b'log_watcher_skipped': 0,
 b'log_watcher_sent': 0,
 b'bytes': 58720774,
 b'curr_items': 7,
 b'total_items': 1024,
 b'slab_global_page_pool': 0,
 b'expired_unfetched': 0,
 b'evicted_unfetched': 1017,   <==
 b'evicted_active': 0,
 b'evictions': 1017,
 b'reclaimed': 0,
 b'crawler_reclaimed': 0,
 b'crawler_items_checked': 6,
 b'lrutail_reflocked': 4,
 b'moves_to_cold': 1024,
 b'moves_to_warm': 0,
 b'moves_within_lru': 0,
 b'direct_reclaims': 1197,
 b'lru_bumps_dropped': 0}
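
(A quick sanity check: the counters above are consistent with the 64 MB
default rather than 64 GB:)

limit = 67108864          # limit_maxbytes from the stats above (64 MB)
frame = 8388608           # one 2048x2048 uint16 frame, as printed earlier
print(limit // frame)     # -> 8: only ~7-8 frames fit at once
print(1024 - 7)           # -> 1017: matches 'evictions', with curr_items = 7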

client.stats("items")
{b'items:39:number': 7,
 b'items:39:number_hot': 0,
 b'items:39:number_warm': 0,
 b'items:39:number_cold': 7,
 b'items:39:age_hot': 0,
 b'items:39:age_warm': 0,
 b'items:39:age': 71,
 b'items:39:evicted': 1017,    <==
 b'items:39:evicted_nonzero': 0,
 b'items:39:evicted_time': 0,
 b'items:39:outofmemory': 0,
 b'items:39:tailrepairs': 0,
 b'items:39:reclaimed': 0,
 b'items:39:expired_unfetched': 0,
 b'items:39:evicted_unfetched': 1017,  <==
 b'items:39:evicted_active': 0,
 b'items:39:crawler_reclaimed': 0,
 b'items:39:crawler_items_checked': 6,
 b'items:39:lrutail_reflocked': 4,
 b'items:39:moves_to_cold': 1024,
 b'items:39:moves_to_warm': 0,
 b'items:39:moves_within_lru': 0,
 b'items:39:direct_reclaims': 1197,
 b'items:39:hits_to_hot': 0,
 b'items:39:hits_to_warm': 0,
 b'items:39:hits_to_cold': 0,
 b'items:39:hits_to_temp': 0}

client.stats("slabs")
{b'2:chunk_size': 120,
 b'2:chunks_per_page': 8738,
 b'2:total_pages': 1,
 b'2:total_chunks': 8738,
 b'2:used_chunks': 7,
 b'2:free_chunks': 8731,
 b'2:free_chunks_end': 0,
 b'2:mem_requested': 840,
 b'2:get_hits': 0,
 b'2:cmd_set': 0,
 b'2:delete_hits': 0,
 b'2:incr_hits': 0,
 b'2:decr_hits': 0,
 b'2:cas_hits': 0,
 b'2:cas_badval': 0,
 b'2:touch_hits

Re: Beginner issue

2019-03-26 Thread dormando
Seems like this is a borderline use case, but it might still work for you.

How did you verify you found the cause? Can you share snapshots from
"stats items" and "stats slabs" output after your test was run?

Memory isn't evenly distributed; it's assigned where objects actually
exist, so either you've filled the server with other objects or there's
been a miscalculation. It's possible you're hitting client timeouts or
something.
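
(For reference, a minimal pymemcache sketch that captures both snapshots;
"items" and "slabs" are the standard stats subcommands, as used elsewhere
in this thread:)

from pymemcache.client.base import Client

client = Client(('127.0.0.1', 11211))
for section in ('items', 'slabs'):
    print('--- stats', section, '---')
    for key, value in sorted(client.stats(section).items()):
        print(key.decode(), value)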

On Tue, 26 Mar 2019, Jerome Kieffer wrote:

> Hi,
>
> I am completely new to memcached but it looks like the right tool for what I 
> need to do:
> I would like to do temporary storage of some data coming from a detector at a 
> pretty high data-rate and share them with other computers
> via memcached. The image size varies from 512x512 to 4096x4096 which makes 
> the raw data size from 0.5 to 32 MB (hence compatible with
> memcached). I wish to use some fast (nvme) SSD as extra memory as well.
>
> I set up a quick benchmark, initially to measure speeds (memcached 1.5.8 from
> debian10):
> * Memcached is set up to accept objects up to 128m and use up to 64G of RAM
> (optional SSD storage coming later ...)
> * Store 1024 images of 2048x2048 as uint16 which represents ~8G of RAM.
>
> Actually, only a limited number of frames were still available in the cache
> after the end of the write (between 7 and 120 frames).
>
> I found the cause: all frames have the same size and hence fall into the same
> slab (the one for the largest objects), which is of limited size because slab
> chunks are evenly distributed over object sizes.
>
> This raises 2 questions:
> * How can I assign "most" of the available memory to a given object size, to
> avoid dropping frames? (I know in advance what the size of the objects will
> be.)
> * Or maybe memcached is not the right tool, and you could point me to another
> tool better adapted to this use case?
>
> Thanks in advance for your help.
>
> Cheers,
>
> Jérôme

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.