I'm trying to create a class that makes one of its members appear, depending
on how it's accessed, as either a simple value or an object with its own
members. The nature of the member would depend on call syntax like so:
1. x = obj.member #x becomes the "simple" value contained in member
2. x = obj.member.another_member #member behaves like an object exposing another_member
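One way to get close to both syntaxes (a sketch, not the only approach) is to make the member's value a subclass of its "simple" type, so it behaves like the plain value in arithmetic and comparisons while also exposing extra attributes. The names `SmartInt` and `Obj` are illustrative; `another_member` is taken from the syntax above:

```python
class SmartInt(int):
    """An int subclass that also carries extra members."""
    def __new__(cls, value, another_member=None):
        obj = super(SmartInt, cls).__new__(cls, value)
        obj.another_member = another_member
        return obj

class Obj:
    def __init__(self):
        # Looks like a plain int to callers, but also has .another_member
        self.member = SmartInt(42, another_member="extra")

obj = Obj()
x = obj.member                  # usable as the simple value 42
y = obj.member.another_member   # usable as an object: "extra"
```

The catch is that this only works for types that can be subclassed, and assigning a new "simple" value to `obj.member` replaces the smart object entirely unless a property intercepts the assignment.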
Norberto,
While certainly useful, this kind of functionality contradicts the way
today's "string" libraries work.
What you are proposing isn't dict self-referencing, but rather strings
referencing other external data (in this case, other strings from the
same dict).
When you write code like
config
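For what it's worth, the standard library already offers this kind of string-to-string referencing for configuration data through configparser's interpolation (shown with Python 3's `configparser`; the section and key names below are made up for illustration):

```python
from configparser import ConfigParser  # the module is named ConfigParser on Python 2

cfg = ConfigParser()
cfg.read_string("""
[paths]
home = /home/user
data = %(home)s/data
""")

# %(home)s is resolved against the other keys in the same section
print(cfg["paths"]["data"])  # -> /home/user/data
```

This stays within one section (plus DEFAULT), so it is a restricted form of the idea rather than general cross-dict referencing.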
On Jun 21, 9:32 am, OdarR wrote:
>
> Do you think multiprocessing could seriously help you?
> Can you benefit from multiple CPUs?
>
> Did you try to enhance your code with numpy?
>
> Olivier
> (installed a backported multiprocessing on his 2.5.1 Python, but need
> installation of Xcode first)
On Jun 21, 9:43 am, Чеширский Кот wrote:
> 1. Tell me, how many DBF files are there?
> 2. Why DBF?
It was just a test. It was the most compatible format I could get
between Python and the business application I work with without using
SQL servers and such.
Otherwise it's of no consequence. The final application
Look, guys, here's the thing:
In the company I work at we decided to rewrite our MRP system in
Python. I was one of the main proponents of it since it's nicely cross
platform and allows for quite rapid application development. The
language and its built-in functions are simply great. The opposition
Add:
Carl, Olivier & co. - You guys know exactly what I wanted.
Others: Going back to C++ isn't what I had in mind when I started
initial testing for my project.
--
http://mail.python.org/mailman/listinfo/python-list
On Jun 20, 1:36 am, a...@pythoncraft.com (Aahz) wrote:
>
> You should put up or shut up -- I've certainly seen multi-core speedup
> with threaded software, so show us your benchmarks!
> --
Sorry, no intent to offend anyone here. Flame wars are not my thing.
I have shown my benchmarks. See my first post.
On Jun 19, 11:59 pm, Jesse Noller wrote:
> On Fri, Jun 19, 2009 at 12:50 PM, OdarR wrote:
> > On 19 juin, 16:16, Martin von Loewis wrote:
> >> If you know that your (C) code is thread safe on its own, you can
> >> release the GIL around long-running algorithms, thus using as many
> >> CPUs as you have available.
On Jun 19, 11:45 pm, OdarR wrote:
> On 19 juin, 21:05, Christian Heimes wrote:
>
> > I've seen a single Python process using the full capacity of up to 8
> > CPUs. The application is making heavy use of lxml for large XSL
> > transformations, a database adapter and my own image processing library
Sorry, just a few more thoughts:
Does anybody know why the GIL can't be made more fine-grained? I mean,
use different locks for different parts of the code.
This way there would be far less blocking, and the plugin interface
could remain the same (the interpreter would know which lock it used
for the plugin, so the correct one could be released).
Thanks guys, for all the replies.
They were some very interesting reading / watching.
It seems to me Unladen Swallow might in time produce code that
lessens this problem a bit. Their roadmap suggests at least
modifying how the GIL works, if not removing it entirely. On top of
this, they
See here for introduction:
http://groups.google.si/group/comp.lang.python/browse_thread/thread/370f8a1747f0fb91
Digging through my problem, I discovered Python isn't exactly thread
safe, and to work around that there's this Global Interpreter Lock
(GIL) in place.
Effectively, this causes the interpreter to run only one thread at a time.
Digging further, I found this:
http://www.oreillynet.com/onlamp/blog/2005/10/does_python_have_a_concurrency.html
Looking up on this info, I found this:
http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock
If this is correct, no amount of threading would ever help in
CPU-bound Python code.
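This is easy to see with the classic demonstration: a pure-Python busy loop gains nothing from threads under the GIL. A quick, unscientific sketch (absolute timings vary by machine):

```python
import threading
import time

def count(n):
    # Pure-Python busy loop; the GIL is held for the whole computation
    while n > 0:
        n -= 1

N = 2_000_000

start = time.perf_counter()
count(N)
count(N)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# Under the GIL, `threaded` is typically no faster than `serial`,
# and on multi-core machines it is often slower due to lock contention.
print(serial, threaded)
```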
I've done some further testing on the subject:
I also added some calculations in the main loop to see what effect
they would have on speed. Of course, I also added the same
calculations to the single threaded functions.
They were simple summary functions, like average, sum, etc. Almost no
interaction between the threads.
Thanks for the suggestions.
I've been looking at the source code of threading support objects and
I saw that non-blocking requests in queues use events, while blocking
requests just use InterlockedExchange.
So plain old put/get is much faster, and I've managed to confirm this
today with further testing.
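The difference shows up in the Queue API itself: the default calls block, while `block=False` (or the `*_nowait` variants) fail fast with an exception instead of waiting. A small sketch:

```python
import queue  # the module is named Queue on Python 2

q = queue.Queue()
q.put("item")       # default: blocking put
value = q.get()     # default: blocking get

try:
    q.get(block=False)  # non-blocking: raises queue.Empty when nothing is queued
except queue.Empty:
    value_missing = True
```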
Hi,
I'm pretty new to Python (2.6) and I've run into a problem I just
can't seem to solve.
I'm using dbfpy to access DBF tables as part of a little test project.
I've programmed two separate functions: one that reads the DBF in the
main thread, and another that reads the DBF asynchronously in a
separate thread.
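The asynchronous variant can follow the usual worker-thread pattern: the reader runs in its own thread and hands the records back through a Queue. `read_records` below is a placeholder standing in for the dbfpy call, since the actual dbfpy code isn't shown in this post:

```python
import threading
import queue

def read_records(path):
    # Placeholder for the dbfpy read; returns dummy rows here
    return [{"id": i, "path": path} for i in range(3)]

def read_async(path, out):
    # Worker: read the table, then hand the rows back via the queue
    out.put(read_records(path))

out = queue.Queue()
worker = threading.Thread(target=read_async, args=("table.dbf", out))
worker.start()
rows = out.get()   # main thread blocks here until the records arrive
worker.join()
```

Note that because reading a local DBF file is mostly CPU-and-disk work in pure Python, this structure adds responsiveness but not raw throughput under the GIL.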