Re: [ZODB-Dev] Sharing ZEO client blobs directory with ZEO server ?

2012-07-13 Thread Andreas Gabriel
On 13.07.2012 17:46, Thierry Florac wrote:
> I have a Zope3 application which is deployed in multi-process mode via
> Apache and mod_wsgi, connected to a local ZEO server.
> Is it possible to share the blobs directory in read/write mode between the
> ZEO server and ZEO clients?

Hi,

there is a recipe for this in ZODB's documentation directory:

http://svn.zope.org/ZODB/trunk/doc/HOWTO-Blobs-NFS.txt?logsort=rev&rev=82268&view=markup
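The HOWTO above boils down to exporting the server's blob directory (e.g. over
NFS) and mounting it at the same path on every client, then telling the client
storage that the directory is shared. A minimal client-side sketch, assuming
ZEO's shared-blob-dir ClientStorage option and made-up host/path names:

```
<zodb>
  <zeoclient>
    server zeo-server.example.com:8100
    # NFS mount shared read/write with the ZEO server
    blob-dir /mnt/zeo-blobs
    # do not copy blobs into a private client cache
    shared-blob-dir true
  </zeoclient>
</zodb>
```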

best regards
Andreas

-- 
Dr. Andreas Gabriel, Hochschulrechenzentrum, http://www.uni-marburg.de/hrz
Hans-Meerwein-Str., 35032 Marburg,  fon +49 (0)6421 28-23560  fax 28-26994
--- Philipps-Universitaet Marburg ---


___
For more information about ZODB, see http://zodb.org/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev


Re: [ZODB-Dev] zeo.memcache

2011-10-09 Thread Andreas Gabriel
On 08.10.2011 22:34, Shane Hathaway wrote:
> I could adapt the cache code in RelStorage for ZEO.  I don't think it
> would be very difficult.  How many people would be interested in such a
> thing?

+1 from me too!

Kind regards,
Andreas




Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Andreas Gabriel
Hi,

On 07.10.2011 11:18, Vincent Pelletier wrote:
> On Friday, 7 October 2011 10:15:34, Andreas Gabriel wrote:
>> self._update() in the while loop is called (it indirectly calls the memcache
>> "query" method, a synonym for "get") before the "cas" method is called.
> 
> In my understanding from "pydoc memcache", there is "get", which loads, and
> "gets", which loads and supposedly does some magic needed by "cas".
> Maybe on any "cas"-supporting memcache implementation "get" just does that
> magic too.

You are right. There is a bug in my code: it depends on lovely.memcached,
which does not support 'cas' :(. I had forgotten that this code path was never
tested. Sorry!
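For readers unfamiliar with the protocol being discussed: "gets" remembers a
version token that a later "cas" checks, so the write only succeeds if nobody
changed the value in between. A rough illustration against an in-memory
stand-in (the class below is hypothetical and only mimics python-memcached's
gets/cas semantics, not its wire protocol):

```python
class FakeCasClient:
    """In-memory stand-in mimicking python-memcached's gets/cas pair:
    cas() succeeds only if the key is unchanged since the last gets()."""

    def __init__(self):
        self._data = {}   # key -> (value, version)
        self._seen = {}   # key -> version observed by the last gets()

    def set(self, key, value):
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)

    def gets(self, key):
        if key not in self._data:
            return None
        value, version = self._data[key]
        self._seen[key] = version     # remember the token cas() will check
        return value

    def cas(self, key, value):
        _, version = self._data.get(key, (None, 0))
        if self._seen.get(key) != version:
            return False              # another writer got there first
        self._data[key] = (value, version + 1)
        return True


mc = FakeCasClient()
mc.set('lock', 'owner-a')
assert mc.gets('lock') == 'owner-a'   # gets() must precede cas()
assert mc.cas('lock', 'owner-b')      # nobody interfered: succeeds
assert mc.gets('lock') == 'owner-b'
mc.set('lock', 'owner-c')             # a concurrent writer intervenes
assert not mc.cas('lock', 'owner-d')  # our token is stale: fails
```

A plain get() that skipped the token bookkeeping would make the later cas()
fail, which is exactly the bug being discussed in this thread.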

However, is your implementation thread safe? Maybe I am blind ;). That was
the reason I used lovely.memcached as the memcached connector: each thread has
its own connection and namespace for storing keys, so the locks held by one or
more ZEO clients with multiple threads were distinguishable.
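The per-thread-connection approach mentioned above can be sketched with
threading.local (the class and names are illustrative; lovely.memcached wires
this up differently):

```python
import threading


class PerThreadClient:
    """Lazily create one connection object per thread, so no two threads
    ever share a socket - the thread-safety strategy described above."""

    def __init__(self, factory):
        self._factory = factory           # e.g. lambda: memcache.Client([...])
        self._local = threading.local()   # thread-local storage

    @property
    def connection(self):
        if not hasattr(self._local, 'conn'):
            self._local.conn = self._factory()
        return self._local.conn
```

Each thread sees its own `connection`; repeated access from the same thread
reuses the cached object instead of opening a new one.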

Kind regards
Andreas







Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Andreas Gabriel
Hi,

On 07.10.2011 01:57, Vincent Pelletier wrote:
> On Thursday, 6 October 2011 21:18:39, Andreas Gabriel wrote:
> I couldn't resist writing my own version inspired by your code:
>   https://github.com/vpelletier/python-memcachelock

That's no problem :)

> It lacks any integration with ZODB.
> It drops support for non-"cas" memcached. I understand your code relies on 
> ZODB conflicts as last resort, but I wanted to scratch an itch :) .
> It drops support for timeouts (not sure what they are used for, so it's
> actually more a "left aside" than a drop).

This feature supports falling back from pessimistic locking to ZODB's standard
optimistic locking (if all locks are lost because of a restart of memcached,
etc.)
 -> details: http://pypi.python.org/pypi/unimr.memcachedlock
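A minimal sketch of that fallback idea (the function name and the dict-based
stand-in below are made up for illustration; unimr.memcachedlock's real API
differs): try to take the shared lock for a bounded time, and on timeout or
cache failure hand control back to ZODB's optimistic conflict handling.

```python
import time


class FakeMemcache(dict):
    """Dict-based stand-in for a memcached client's atomic add()."""

    def add(self, key, value):
        # memcached 'add' semantics: succeeds only if the key is absent
        if key in self:
            return False
        self[key] = value
        return True


def acquire_or_fall_back(cache, key, timeout=1.0, interval=0.05):
    """Try to take a pessimistic lock stored in `cache`.  Return True if
    the lock was acquired; False means: give up the pessimistic path and
    rely on ZODB's optimistic conflict resolution instead."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if cache.add(key, 'locked'):
                return True           # pessimistic lock held
        except Exception:             # memcached restarted / unreachable
            break
        time.sleep(interval)
    return False                      # fall back to ZODB ConflictErrors
```

The point is that losing memcached (or every lock in it) degrades the system
to plain ZODB conflict handling instead of deadlocking the writers.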

> I admit this is my first real attempt at using "cas", and the documentation
> mentions gets must be called before calling cas for it to succeed. I don't
> see gets calls in your code, so I wonder if there wouldn't be a bug... Or
> maybe it's just my misunderstanding.

self._update() in the while loop is called (it indirectly calls the memcache
"query" method, a synonym for "get") before the "cas" method is called.

> As the README states: it's not well tested. I only did stupid sanity checks
> (2 instances in a single Python interactive interpreter, one guy on the
> keyboard - and a slow one, because it's late) and a pylint run.

Please continue your development, because this will be an important
feature/enhancement for big Zope sites with many ZEO clients under heavy load.

kind regards
Andreas



Re: [ZODB-Dev] zeo.memcache

2011-10-06 Thread Andreas Gabriel
Hi,

On 06.10.2011 19:59, Vincent Pelletier wrote:
> synchronisation. Supporting such setup requires using the test-and-set 
> memcached operation, plus some sugar. I just don't think this was intended to 
> be supported in the original code.

Maybe this code will help as an example for the shared locking problem:

https://svn.plone.org/svn/collective/unimr.memcachedlock/trunk/unimr/memcachedlock/memcachedlock.py

Kind regards
Andreas



Re: [ZODB-Dev] Practical experience - ZEO-Server & limited size of file descriptors

2010-07-19 Thread Andreas Gabriel
On 19.07.2010 17:34, Jim Fulton wrote:
>>>>> I advise against serving more than one storage per server.
>>>> Why?
>>>
>>> Because Python programs can't use much more than 1 CPU and multiple
>>> storage servers in the same process will be slower than in separate
>>> processes, assuming the machine has multiple cores.
>>>
>>>
>> On Mon, Jul 19, 2010 at 11:26 AM, Andreas Jung  wrote:
>> Sure but I am not aware of serious performance problems related to this
>> reason on large scale installations with heavy read and write operations.
> 
> I am.

We are also not aware of serious performance problems on the server side.
Nonetheless, the configuration allows more than one storage per ZEO server,
and there is still the problem that no error message appears in the ZEO
server log when the file descriptor limit is reached.

Andreas



[ZODB-Dev] Practical experience - ZEO-Server & limited size of file descriptors

2010-07-19 Thread Andreas Gabriel
Hello,

we are hosting a big ZEO-based Zope site and ran into a problem with the
limited number of file descriptors (FDs) available to the ZEO server.

Scenario

  36 ZEO clients (each 2 threads)
  1  ZEO server  (serving 10 storages)

It was not possible to connect all ZEO clients to the ZEO server.
After a short time, the following events occurred in the event.log
of the ZEO clients:

[snip]

2010-02-08T14:03:25 PROBLEM(100) zrpc:21615 CW: error connecting to
('zeo-server.dummy.de', 1): ECONNREFUSED

[snip]

and simultaneously the ZEO server hangs and the whole site goes down.
Unfortunately, there was no hint in the ZEO server logs. After some Googling
we found the following hint

  http://comments.gmane.org/gmane.comp.web.zope.plone.user/101892

that each ZEO client connection consumes three file descriptors on the ZEO
server side. With this information, the theoretically required number of FDs
can be calculated:

  75 (base) + 36 (ZEO clients) x 10 (storages) x 3 (FDs per connection) = 1155

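Spelled out as a quick sanity check (3 FDs per client/storage connection, as
the linked thread suggests):

```python
# FD budget for the scenario above: every client opens one connection
# per storage, and each connection costs three descriptors server-side.
base_fds = 75
clients = 36
storages = 10
fds_per_connection = 3

total_fds = base_fds + clients * storages * fds_per_connection
assert total_fds == 1155      # comfortably above the default 1024 limit
```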
We tried to open as many connections as possible to the ZEO server with a
simple script (see attachment) and counted the number of open FDs of the ZEO
server using "lsof". The result was that the ZEO server hung at 1025 open FDs.
We therefore assumed that the OS (here Linux) limits the available number of
FDs to 1024 by configuration. Using "ulimit" (hard/soft) we increased the
number of allowed open FDs to 2048. However, there was no way to open more
than 329 (instead of 360) connections (= 1025 FDs) to the ZEO server :(

Looking at the sources, the ZEO server uses the asyncore library to manage
incoming connections. After *intensive* Googling we had to conclude that
Python's asyncore library has a hard, compiled-in limit on the number of open
FDs (namely 1024). The limit is defined as the macro __FD_SETSIZE in a header
file of the libc6 library:

/usr/include/bits/typesizes.h

It was therefore unfortunately necessary to change the limit in that header
file to

  #define __FD_SETSIZE 2048

and to recompile Python from source to overcome the problem. Our ZEO scenario
now works with the recompiled Python interpreter :)
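Whether a given process is anywhere near such limits can at least be inspected
from Python via the resource module (this shows only the ulimit side; the
select()/FD_SETSIZE ceiling discussed above is compiled in and not exposed
directly):

```python
import resource

# Per-process file-descriptor limits (what `ulimit -n` controls).
# Raising the soft limit beyond 1024 does not help a select()-based
# server such as asyncore unless FD_SETSIZE is raised and Python rebuilt.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

On Linux, `soft` is typically 1024 by default; `hard` is the ceiling up to
which an unprivileged process may raise its own soft limit.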

I hope you will find this information useful.
Kind regards
Andreas



#!/usr/bin/python2.3

"""Connect to a ZEO server and check for the maximal number of connections.

Usage: zeo-check-max-connections.py [options]

Options:

-p port -- port to connect to

-h host -- host to connect to (default is current host)

-U path -- Unix-domain socket to connect to

-S name -- comma-separated list of storage names (default is '1')

-c connections -- simultaneous connections per storage


You must specify either -p and -h or -U.

"""

import getopt
import socket
import sys
import time

from ZEO.ClientStorage import ClientStorage


def multiConnect(addr, storages, connections):
    cs = {}
    for s in storages:
        for i in range(0, connections):
            key = '%s-%s' % (s, i)
            print 'connecting storage %s' % key
            cs[key] = ClientStorage(addr, storage=s, wait=1, read_only=0)
            print '%s. connection established' % len(cs)

    # release the connections after 10 seconds
    time.sleep(10)
    for s in cs.keys():
        cs[s].close()


def usage(exit=1):
    print __doc__
    print " ".join(sys.argv)
    sys.exit(exit)


def main():
    host = None
    port = None
    unix = None
    storages = ['1']
    connections = 1

    try:
        opts, args = getopt.getopt(sys.argv[1:], 'p:h:U:S:c:')
        for o, a in opts:
            if o == '-p':
                port = int(a)
            elif o == '-h':
                host = a
            elif o == '-U':
                unix = a
            elif o == '-S':
                storages = a.split(',')
            elif o == '-c':
                connections = int(a)
    except Exception, err:
        print err
        usage()

    if unix is not None:
        addr = unix
    else:
        if host is None:
            host = socket.gethostname()
        if port is None:
            usage()
        addr = host, port

    multiConnect(addr, storages, connections)


if __name__ == "__main__":
    try:
        main()
    except Exception, err:
        print err
        sys.exit(1)
