[Terry Hancock]
> ...
> I realize you've probably already made a decision on this, but this sounds
> like a classic argument for using an *object DBMS*, such as ZODB: It
> certainly does support transactions, and "abstracting the data into tables"
> is a non-issue as ZODB stores Python objects more ...
On Wednesday 22 June 2005 04:09 pm, Eloff wrote:
> Hi Paul,
> >You're doing what every serious database implementation needs to do ...
> >Are you sure you don't want to just use an RDBMS?
>
> It was considered, but we decided that abstracting the data into tables
> to be manipulated with SQL queries ...
In article <[EMAIL PROTECTED]>,
Konstantin Veretennicov <[EMAIL PROTECTED]> wrote:
> On 22 Jun 2005 17:50:49 -0700, Paul Rubin
> <"http://phr.cx"@nospam.invalid> wrote:
>
> > Even on a multiprocessor
> > system, CPython (because of the GIL) doesn't allow true parallel
> > threads, ... .
>
> Please excuse my ignorance, ...
On 22 Jun 2005 17:50:49 -0700, Paul Rubin
<"http://phr.cx"@nospam.invalid> wrote:
> Even on a multiprocessor
> system, CPython (because of the GIL) doesn't allow true parallel
> threads, ... .
Please excuse my ignorance, do you mean that Python threads are always
scheduled to run on the same single CPU ...
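To make the point about the GIL concrete: a minimal sketch (not from the thread) showing the standard workaround. CPython threads cannot run Python bytecode in parallel, but `multiprocessing` gives each worker its own interpreter and its own GIL, so CPU-bound work can use multiple CPUs. The `count_primes` function is a hypothetical stand-in for CPU-bound work.

```python
# Sketch: CPython's GIL lets only one thread execute Python bytecode at a
# time, so CPU-bound work gains nothing from threads. multiprocessing
# sidesteps the GIL with separate interpreter processes.
from multiprocessing import Pool

def count_primes(limit):
    """CPU-bound stand-in: naive count of primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    with Pool(4) as pool:  # 4 worker processes, each with its own GIL
        results = pool.map(count_primes, [10_000] * 4)
    print(results)
```

I/O-bound threads are a different story: a thread blocked in a system call releases the GIL, which is why threaded network servers still work fine in CPython.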
Thanks for all of the replies, I'm glad I posted here, you guys have
been very helpful.
>Obviously, I only know what you've told us about your data, but 20-100
>queries? That doesn't sound right ... RDBMSes are well-
>studied and well-understood; they are also extremely powerful when used
>to the ...
Using an RDBMS is no cure-all for deadlocks: I can deadlock with an
RDBMS just as easily as with my own threads and locks, probably more
easily.
I try to pick up crumbs of knowledge from my co-workers, and one of the
smarter ones gave me this rubric for testing for deadlocks. You need 3
things to create a deadlock: ...
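The rubric itself is cut off above, but whatever its exact wording, the condition that is easiest to break in practice is the circular wait: if every thread acquires its locks in one agreed global order, no cycle of waiters can form. A minimal sketch (names and the `id()`-based ordering are illustrative assumptions, not from the thread):

```python
# Sketch: acquiring all locks in one global order (here, sorted by id())
# removes the "circular wait" condition, so two threads that want the same
# pair of locks cannot deadlock even if they name them in opposite order.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    """Every thread takes the same path through the locks."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()

def worker(first, second, results, tag):
    held = acquire_in_order(first, second)  # same global order in both threads
    try:
        results.append(tag)
    finally:
        release_all(held)

results = []
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))
```

Without the sort, `worker(lock_a, lock_b)` and `worker(lock_b, lock_a)` could each grab their first lock and block forever on the second.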
Eloff:
> So I think you would need multiple locks so clients only acquire what
> they need. This would let multiple threads access the data at once. But
> now I have to deal with deadlocks since clients will usually acquire a
> resource and then block acquiring another. It is very likely that one ...
"Eloff" <[EMAIL PROTECTED]> writes:
> >If the 100 threads are blocked waiting for the lock, they shouldn't
> >get awakened until the lock is released. So this approach is
> >reasonable if you can minimize the lock time for each transaction.
>
> Now that is interesting, because if 100 clients have ...
On 22 Jun 2005 14:09:42 -0700,
"Eloff" <[EMAIL PROTECTED]> wrote:
[Paul Rubin]
>> You're doing what every serious database implementation needs to do ...
>> Are you sure you don't want to just use an RDBMS?
> It was considered, but we decided that abstracting the data into
> tables to be manipulated with SQL queries ...
On 6/23/05, Steve Horsley <[EMAIL PROTECTED]> wrote:
> It is my understanding that Python's multithreading is done at the
> interpreter level and that the interpreter itself is single
> threaded. In this case, you cannot have multiple threads running
> truly concurrently even on a multi-CPU machine ...
Hi Steve,
The backup thread only holds the lock long enough to create an
in-memory representation of the data. It writes to disk on its own
time after it has released the lock, so this is not an issue.
If you're saying what I think you are, then a single lock is actually
better for performance than ...
Eloff wrote:
> Hi Paul,
>
>>If the 100 threads are blocked waiting for the lock, they shouldn't
>>get awakened until the lock is released. So this approach is
>>reasonable if you can minimize the lock time for each transaction.
>
>
> Now that is interesting, because if 100 clients have to go through ...
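The single-global-lock approach Paul describes can be sketched directly with the stdlib: threads blocked in `Lock.acquire()` sleep until the holder releases the lock, so the scheme is workable exactly to the extent that each transaction's critical section is short. A minimal sketch (the store and transaction are hypothetical):

```python
# Sketch: 100 client threads share one global lock. Waiters sleep inside
# Lock.acquire() and are woken on release, so throughput is governed by
# how little work happens while the lock is held.
import threading

store_lock = threading.Lock()
store = {"hits": 0}  # hypothetical shared state

def transaction():
    update = 1                # slow preparation belongs *outside* the lock
    with store_lock:          # keep the locked region minimal
        store["hits"] += update

threads = [threading.Thread(target=transaction) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store["hits"])  # 100
```

With one lock there is also nothing to deadlock against, which is the trade discussed upthread: simpler correctness in exchange for serialized access.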
Hi Paul,
>Do you mean a few records of 20+ MB each, or millions of records of a
>few dozen bytes, or what?
Well they're objects with lists and dictionaries and data members and
other objects inside of them. Some are very large, maybe bigger than
20MB, while others are very numerous and small (a h ...
"Eloff" <[EMAIL PROTECTED]> writes:
> I have a shared series of objects in memory that may be > 100MB. Often
> to perform a task for a client several of these objects must be used.
Do you mean a few records of 20+ MB each, or millions of records of a
few dozen bytes, or what?
> However imagine wh ...
This is not really Python specific, but I know Python programmers are
among the best in the world. I have a fair understanding of the
concepts involved, enough to realize that I would benefit from the
experience of others :)
I have a shared series of objects in memory that may be > 100MB. Often
to perform a task for a client several of these objects must be used. ...