New submission from mrjbq7:
If you have a simple module (say "foo.py"):
$ cat foo.py
bar = 1
You get weird errors when trying to deep copy it (which I did by accident,
not while intentionally trying to deep copy a module):
Python 2.7.2:
>>> import foo
>>> import copy
>>> copy.deepcopy(foo)
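For anyone trying to reproduce this, here is a self-contained sketch; the
exact failure varies across interpreter versions, so it just reports
whatever deepcopy does (the module is built with types.ModuleType instead
of a separate foo.py file):

import copy
import types

# Build a throwaway module object equivalent to the foo.py above.
foo = types.ModuleType("foo")
foo.bar = 1

try:
    dup = copy.deepcopy(foo)
except Exception as exc:
    # On the versions discussed here, deepcopy of a module fails deep
    # inside the copy/pickle machinery with a confusing error.
    print("deepcopy failed: %r" % (exc,))
else:
    print("deepcopy succeeded; same object: %s" % (dup is foo))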
mrjbq7 added the comment:
> Richard was saying that you shouldn't serialize such a large array,
> that's just a huge performance bottleneck. The right way would be
> to use a shared memory.
Gotcha, for clarification, my original use case was to *create* them
in the other processes.
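For what it's worth, here is a sketch of the shared-memory route suggested
above, using the multiprocessing.shared_memory module that was added later,
in Python 3.8 (the shape matches the arrays discussed in this thread;
shrink it on a smaller machine):

import numpy as np
from multiprocessing import Process, shared_memory

def fill(name, shape, dtype):
    # Attach to the existing block and build the array in place in the
    # worker, instead of pickling ~3.6 GB back to the parent.
    shm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr[:] = 1.0
    shm.close()

if __name__ == "__main__":
    shape, dtype = (5000, 90000), np.float64
    nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    shm = shared_memory.SharedMemory(create=True, size=nbytes)
    try:
        p = Process(target=fill, args=(shm.name, shape, dtype))
        p.start()
        p.join()
        arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        print(arr[0, :5])  # data written by the child, no copy made
    finally:
        shm.close()
        shm.unlink()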
mrjbq7 added the comment:
On a machine with 256 GB of RAM, it makes more sense to send arrays of this
size than, say, on a laptop...
New submission from mrjbq7:
I ran into a problem using multiprocessing to create large data objects (in
this case numpy float64 arrays with 90,000 columns and 5,000 rows) and return
them to the original Python process.
It breaks in both Python 2.7 and 3.3, using numpy 1.7.0.
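A minimal sketch of the pattern that breaks, to make the report easier to
reproduce; returning the result forces the whole array through pickle and
the multiprocessing connection layer, which is where the affected versions
fell over (a current Python should handle it, memory permitting):

import numpy as np
from multiprocessing import Pool

def make_array(_):
    # 5,000 rows x 90,000 columns of float64 is roughly 3.6 GB, so the
    # pickled result is far past the 2 GB mark that older versions of
    # multiprocessing could not move across a connection.
    return np.zeros((5000, 90000), dtype=np.float64)

if __name__ == "__main__":
    pool = Pool(1)
    try:
        [arr] = pool.map(make_array, [0])
        print(arr.shape)
    finally:
        pool.close()
        pool.join()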
mrjbq7 added the comment:
I noticed a Reddit post [1] today pointing out that the docstring should
read:
product(*iterables[, repeat]) --> product object
instead of:
product(*iterables) --> product object
---
[1]
http://www.reddit.com/r/Python/comments
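For context, the repeat keyword that the corrected docstring mentions is
already part of the documented itertools.product API, so the current
docstring undersells the signature:

from itertools import product

# product('AB', repeat=2) is equivalent to product('AB', 'AB')
print(list(product('AB', repeat=2)))
# [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]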
Changes by mrjbq7:
--
components: +Interpreter Core
versions: +Python 2.6
New submission from mrjbq7:
There are a couple of arithmetic operations that are idempotent, where the
returned Python object is the same Python object as the input.
For example, given a number:
>>> x = 12345
The abs() builtin returns the same number object if it is already a
positive number.
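A quick demonstration of the claim; note the object identity here is a
CPython implementation detail rather than a documented guarantee:

x = 12345

print(abs(x) is x)    # True: x is already positive, so abs() hands
                      # back the very same int object
print(+x is x)        # True: unary plus on an int behaves the same way
print(abs(-x) is x)   # False: negating created a new (equal) object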