On Tue, 7 Mar 2023 at 16:53, Stephen Tucker wrote:
>
> Hi again,
>
> I tried xrange, but I got an error telling me that my integer was too big
> for a C long.
>
> Clearly, xrange in Py2 is not capable of dealing with Python (that is,
> possibly very long) integers.
That's because Py2 has two different integer types: int, which is bounded
by the platform's C long, and long, which is arbitrary-precision. xrange
only accepts the former.
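A minimal Python 2 sketch of the workaround (not from the thread): a
generator with xrange's shape that counts with arbitrary-precision longs,
so huge bounds never touch a C long.

def long_range(stop):
    # counts with Python longs, so `stop` can exceed any C long
    i = 0
    while i < stop:
        yield i
        i += 1

for i in long_range(10**30):   # constant memory, however large the bound
    if i >= 3:
        break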
Hi,
The range function in Python 2.7 (and yes, I know that it is now
superseded) provokes a MemoryError when asked to deliver a very long
list of values.
I assume that this is because the function produces a list which it then
iterates through.
1. Does the range function in Python 3.x behave in the same way?
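For reference, Python 3 answers this by making range a lazy sequence over
arbitrary-precision integers; no list is ever built. A quick illustration
(Python 3):

r = range(10**30)     # constant memory; nothing is materialized
print(10**29 in r)    # True -- membership is computed arithmetically
print(r[10**20])      # indexing also works without iterating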
Memory Error while working with pandas dataframe.
Description of environment: Windows 7, Python 3.4.2 (32-bit), pandas 0.16.0
We are running into the error described below. Any help provided will be
sincerely appreciated.
We are able to read a 300MB CSV file into a dataframe using the read_csv function.
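A hedged workaround for the 32-bit address-space squeeze (not from the
thread): parse the CSV in chunks with read_csv's chunksize option and
reduce each piece before combining. The file name and grouping column are
placeholders.

import pandas as pd

pieces = []
for chunk in pd.read_csv("data.csv", chunksize=100000):   # 100k rows at a time
    pieces.append(chunk.groupby("key").size())            # shrink each chunk first
result = pd.concat(pieces).groupby(level=0).sum()         # merge partial counts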
Jamie Mitchell writes:
> ...
> I then get a memory error:
>
> Traceback (most recent call last):
> File "", line 1, in
> File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py",
> line 2409, in pearsonr
> x = np.asarray(x)
On 03/24/2014 04:32 AM, Jamie Mitchell wrote:
Hello all,
I'm afraid I am new to all this so bear with me...
I am looking to find the statistical significance between two large netCDF data
sets.
Firstly I've loaded the two files into python:
swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/cont
standard_name: significant_height_of_wind_and_swell_waves
long_name: significant_wave_height
units: m
add_offset: 0.0
scale_factor: 0.002
_FillValue: -32767
missing_value: -32767
unlimited dimensions: time
current shape = (86400, 350, 227)
Then to perform the Pearson correlation:
from scipy.stats.stats import pearsonr
pearsonr(hs,hs_2050s)
I then get a memory error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py",
line 2409, in pearsonr
    x = np.asarray(x)
MemoryError
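A hedged sketch of a lower-memory route (not from the thread): accumulate
the sums that Pearson's r needs over slices of the time dimension, so the
full (86400, 350, 227) arrays never coexist in memory. hs and hs_2050s are
the netCDF variables from the post.

import numpy as np

n = sx = sy = sxx = syy = sxy = 0.0
for t in range(0, hs.shape[0], 1000):           # 1000 time steps per slice
    a = np.asarray(hs[t:t+1000], dtype=np.float64).ravel()
    b = np.asarray(hs_2050s[t:t+1000], dtype=np.float64).ravel()
    n += a.size
    sx += a.sum();  sy += b.sum()
    sxx += (a * a).sum();  syy += (b * b).sum()
    sxy += (a * b).sum()

r = (n * sxy - sx * sy) / np.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))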
On 23 January 2013 17:33, Isaac Won wrote:
> On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
>> On 23 January 2013 14:57, Isaac Won wrote:
>>
>> > On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
>>
>> Unless I've misunderstood how this function is supposed to work.
On 23 January 2013 14:57, Isaac Won wrote:
> On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
>> On 23 January 2013 14:28, Isaac Won wrote:
>>
[SNIP]
>
> Following is the full error message after I adjusted things following Ulrich's advice:
>
> interp = interp1d(indices[not_nan], x[not_
On 23 January 2013 14:28, Isaac Won wrote:
> On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
>
> To Oscar
> My actual error message is:
> File
> "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py",
> line 311, in __init__
> se
On 23.01.2013 05:06, Isaac Won wrote:
I have tried to use different interpolation methods with Scipy. My
code seems just fine with linear interpolation, but shows a memory
error with quadratic. I am a novice at Python. I will appreciate any
help.
>
#code
f = open(filin, "r")
Hi all,
I have tried to use different interpolation methods with Scipy. My code seems
just fine with linear interpolation, but shows a memory error with quadratic. I
am a novice at Python. I will appreciate any help.
#code
f = open(filin, "r")
for columns in ( raw.strip().split() for raw in f ):
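A hedged low-memory alternative (not what the thread settled on): np.interp
does 1-D linear interpolation in O(n) memory and can fill the NaNs without
the system of equations that interp1d(kind='quadratic') builds. Names follow
the interp1d call quoted above.

import numpy as np

x = np.asarray(x, dtype=float)     # the 1-D series from the post
indices = np.arange(len(x))
not_nan = ~np.isnan(x)
x[~not_nan] = np.interp(indices[~not_nan], indices[not_nan], x[not_nan])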
The largest contiguous area returned by malloc() is often lowered by
memory fragmentation. If your program tries to malloc 100 MB of memory
but the largest contiguous area is just 98 MB, you'll get a memory
error, too.
Christian
On 07/24/2012 04:06 AM, Sammy Danso wrote:
> Hello Experts,
> I am having a 'memory error',
Please post the actual error message.
> which suggests that I
> have run out of memory, but I am not sure this is the case as I have
> a considerable amount of memory unused
Hello Experts,
I am having a 'memory error', which suggests that I have run out of
memory, but I am not sure this is the case, as I have a considerable
amount of memory unused on my computer.
A little search suggests this is a limitation of 32-bit Python, and an
option is to move to a 64-bit build.
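Two standard-library one-liners settle whether the interpreter is a
32-bit build (Python 2 syntax):

import struct, sys
print struct.calcsize("P") * 8    # pointer width in bits: 32 or 64
print sys.maxsize > 2**32         # True only on a 64-bit interpreter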
Hello All,
I am still having trouble with memory errors when I try to process many
netcdf files.
Originally I would get the memory error as mentioned in the previous post
but when I added gc.collect() after each for loop I receive the error:
GEOS_ERROR: bad allocation
with no additional
Hello All,
I keep coming across a memory error when processing many netcdf files. I
assume it has something to do with how I loop things and maybe need to close
things off properly.
In the code below I am looping through a bunch of netcdf files (each file is
hourly data for one month) and within
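A hedged sketch of the 'close things off properly' idea: recent netCDF4
versions let Dataset act as a context manager, so each file is released at
the end of its iteration. The paths, variable name, and handler are
placeholders.

import netCDF4

for path in monthly_files:                 # placeholder list of hourly-month files
    with netCDF4.Dataset(path) as ds:      # closed automatically each iteration
        data = ds.variables["swh"][:]      # hypothetical variable name
        process(data)                      # hypothetical per-month work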
Date: Mon, 25 Jul 2011 17:14:46 +0200
Subject: Re: How to catch an memory error in Windows?
On 25/07/11 16:55, António Rocha wrote:
> Greetings
>
> I'm using subprocess module to run an external Windows binary. Due to
> some limitations, sometimes all memory is consumed in this process.
On 25/07/11 16:55, António Rocha wrote:
> Greetings
>
> I'm using subprocess module to run an external Windows binary. Due to
> some limitations, sometimes all memory is consumed in this process. How
> can I catch this error?
> Antonio
>
How is this relevant to the Python part?
Also, "no memory
Greetings
I'm using subprocess module to run an external Windows binary. Due to some
limitations, sometimes all memory is consumed in this process. How can I
catch this error?
Antonio
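A hedged sketch of what the parent can do: the allocation failure happens
inside the child process, so Python cannot catch it as an exception and
can only inspect the exit status afterwards. The command is a placeholder.

import subprocess

p = subprocess.Popen(["some_binary.exe", "input.dat"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if p.returncode != 0:
    print "child failed (possibly out of memory):", p.returncode
    print err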
>
> But-- the image does say Pythonwin... are you running this from the
> Pythonwin editor/IDE? Does this script crash out if you run it through the
> normal 'python'(or pythonw) commands? If not, are you attempting to do any
> sort of GUI work in this script? That rarely works within Pythonwin
From: python-list-bounces+shahmed=sfwmd@python.org
[mailto:python-list-bounces+shahmed=sfwmd@python.org] On Behalf Of Stephen
Hansen
Sent: Thursday, December 03, 2009 10:22 PM
To: python-list@python.org
Subject: Re: memory error
On Thu, Dec 3, 2009 at 5:51 AM, Ahmed, Shakir
On Thu, Dec 3, 2009 at 5:51 AM, Ahmed, Shakir wrote:
> I am getting a memory error while executing a script. Any idea is highly
> appreciated.
>
> Error message: "The instruction at "0x1b009032" referenced memory at
> "0x0804:, The memory could not be "written""
On 12/4/2009 12:51 AM, Ahmed, Shakir wrote:
I am getting a memory error while executing a script. Any idea is highly
appreciated.
Error message: " The instruction at "0x1b009032" referenced memory at
"0x0804:, The memory could not be "written"
This error i
On Mon, 13 Jul 2009 14:20:13 -0700, Aaron Scott wrote:
>> BTW, you should derive all your classes from something. If nothing
>> else, use object.
>> class textfile(object):
>
> Just out of curiosity... why is that? I've been coding in Python for a
> long time, and I never derive my base classes. What's the advantage
> to deriving them?
> BTW, you should derive all your classes from something. If nothing
> else, use object.
> class textfile(object):
Just out of curiosity... why is that? I've been coding in Python for
a long time, and I never derive my base classes. What's the advantage
to deriving them?
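The short answer, illustrated (Python 2): only classes that derive from
object get the new-style machinery -- properties, descriptors, super(),
__slots__, and a meaningful type().

class Old:                 # old-style: no base class
    pass

class New(object):         # new-style: derives from object
    @property
    def x(self):
        return 42

print type(Old())          # <type 'instance'> -- all old-style instances look alike
print type(New())          # <class '__main__.New'>
print New().x              # 42; property only works reliably on new-style classes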
phoebe> I have a big text file with size more than 2GB. It turned out
phoebe> memory error when reading in this file. Here is my python
phoebe> script, the error occurred at line -- self.fh.readlines().
phoebe> import math
phoebe> import time
Hi All,
I have a similar problem that many new python users might encounter. I would
really appreciate it if you could help me fix the error.
I have a big text file, more than 2GB in size, and I get a memory error
when reading in this file. Here is my python script; the error occurred at
the line self.fh.readlines().
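A minimal sketch of the usual fix: iterate the file object instead of
calling readlines(), so only one line lives in memory at a time. The path
and handler are placeholders.

with open("big_file.txt") as fh:
    for line in fh:          # the file object yields lines lazily
        process(line)        # hypothetical per-line work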
for j in range(1,len(refSeqIDsinTransPro)):
>     found = re.search(l,refSeqIDsinTransPro[j])
>     if found:
>         """promoterSequencesinTransPro[j] """
>         print l
>
> line = fileInputHandler.readline()
>
> fileInputHandler.close()
> The error that I
On Nov 11, 8:47 am, [EMAIL PROTECTED] wrote:
> import linecache
Why???
> reader2 = csv.reader(open(sys.argv[2],"rb"))
> reader2_list = []
> reader2_list.extend(reader2)
>
> for data2 in reader2_list:
>     refSeqIDsinTransPro.append(data2[3])
> for data2 in reader2_list:
>     promoterSequencesinT
On Tue, Nov 11, 2008 at 7:47 AM, <[EMAIL PROTECTED]> wrote:
> refSeqIDsinTransPro = []
> promoterSequencesinTransPro = []
> reader2 = csv.reader(open(sys.argv[2],"rb"))
> reader2_list = []
> reader2_list.extend(reader2)
Without testing, this looks like you're reading the _ENTIRE_
input stream into memory.
The error that I get is as follows:
Traceback (most recent call last):
  File "RefSeqsToPromoterSequences.py", line 31, in <module>
    reader2_list.extend(reader2)
MemoryError
I understand that the issue is a MemoryError and it is caused
by the line reader2_list.extend(reader2). Is there any other
alternative method of reading the file?
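A hedged sketch of such an alternative: iterate the reader directly and
keep only the needed column, instead of extending the whole stream into a
list. Column index 3 comes from the original code.

import csv, sys

refSeqIDsinTransPro = []
for row in csv.reader(open(sys.argv[2], "rb")):   # "rb" as in the original
    refSeqIDsinTransPro.append(row[3])            # one row in memory at a time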
I am using the cPickle module for serialization and de-serialization of a heavy
Python object (80 MB). When I try to save the object it gives a MemoryError.
Can anyone help me out of this problem?
I am pickling the object as:
def savePklFile(pickleFile, data):
pickledFile
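A hedged sketch of the low-memory route (consistent with the advice below):
stream the object straight to disk with dump() instead of building the whole
pickle string with dumps(), and use a binary protocol.

import cPickle

def save_pkl_file(path, data):
    f = open(path, "wb")           # binary mode matters on Windows
    try:
        cPickle.dump(data, f, 2)   # protocol 2: compact binary pickle
    finally:
        f.close()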
Thank you very much Martin. It worked like a charm.
> I didn't have the problem with dumping as a string. When I tried to
> save this object to a file, memory error pops up.
That's not what the backtrace says. The backtrace says that the error
occurs inside pickle.dumps() (and it is consistent with the functions
being called, s
I didn't have the problem with dumping as a string. When I tried to
save this object to a file, memory error pops up.
I am sorry for the mention of size for a dictionary. What I meant by
65000X50 is that it has 65000 keys and each key has a list of 50
tuples.
I was able to save a dicti
Nagu wrote:
> I am trying to save a dictionary of size 65000X50 to a local file and
> I get the memory error problem.
What do you mean by this size specification? When I interpret X as
multiplication, I can't see a problem: the code
import pickle
d = {}
for i in xrange(65000*50):
I am trying to save a dictionary of size 65000X50 to a local file and
I get the memory error problem.
How do I go about resolving this? Is there way to partition the pickle
object and combine later if this is a problem due to limited resources
(memory) on the machine (it is 32 bit machine Win XP
I wrote:
> Here's a small example of a ZipFile subclass (tested a bit this time)
> that implements two generator methods:
Argh, not quite tested enough - one fix needed, change:
if bytes[-1] not in ('\n', '\r'):
partial = lines.pop()
to:
if bytes[-1] not
David Bolen <[EMAIL PROTECTED]> writes:
> If you are going to read the file data incrementally from the zip file
> (which is what my other post provided) you'll prevent the huge memory
> allocations and risk of running out of resource, but would have to
> implement your own line ending support if
mcl <[EMAIL PROTECTED]> writes:
> pseudo code
>
> zfhdl = zopen(zip, filename)    # open file in zip archive for reading
>
> while True:
>     ln = zfhdl.readline()       # get next line of file
>     if not ln:                  # EOF
>         break
>     dealwithline(ln)
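That pseudocode maps almost directly onto ZipFile.open(), available since
Python 2.6, which returns a file-like object that decompresses on demand --
a hedged sketch with placeholder names:

import zipfile

zf = zipfile.ZipFile("archive.zip")    # placeholder archive name
fh = zf.open("member.txt")             # streams; member is never fully in memory
while True:
    ln = fh.readline()                 # get next line of file
    if not ln:                         # EOF
        break
    dealwithline(ln)                   # hypothetical handler from the pseudocode
fh.close()
zf.close()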
I am trying to unzip an 18mb zip containing just a single 200mb file
and I get a Memory Error. When I run the code on a smaller file 1mb
zip, 11mb file, it works fine.
I am running on a hosted Apache web server
I am using some code I found on the web somewhere.
def unzip_file_into_dir(file, dir):
> "lisa" == lisa engblom <[EMAIL PROTECTED]> writes:
lisa> Hi, I am using matplotlib with python to generate a bunch of
lisa> charts. My code works fine for a single iteration, which
lisa> creates and saves 4 different charts. The trouble is that
lisa> when I try to run it fo
It is hard to know what is wrong when we do not know how the
wrapper around the function works. The error could also be in
ConstructFigName or ConstructFigPath. Also please send the
specific error message when asking for help as that significantly
helps in tracking down the error.
Cheers
Tommy
Hi,
I am using matplotlib with python to generate a bunch of charts. My
code works fine for a single iteration, which creates and saves 4
different charts. The trouble is that when I try to run it for the
entire set (about 200 items) it can run for 12 items at a time. On the
13th, I get an error.
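A hedged guess at the usual culprit and fix: pyplot keeps every figure
alive until it is explicitly closed, so a batch loop should close each
figure after saving. Names below are placeholders.

import matplotlib
matplotlib.use("Agg")                      # off-screen rendering for batch jobs
import matplotlib.pyplot as plt

for i, item in enumerate(items):           # the ~200 items from the post
    fig = plt.figure()
    plt.plot(item)                         # placeholder plotting call
    fig.savefig("chart_%03d.png" % i)      # hypothetical naming scheme
    plt.close(fig)                         # frees the figure; otherwise they pile up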
Hari Sekhon wrote:
> I've seen people using everything from zip to touch, either out of
> laziness or out of the fact it wouldn't work very well in python, this
> zip case is a good example.
so based on a limitation in one library, and some random code you've
seen on the internet, you're makin
Hari Sekhon wrote:
> I take it that it's still a work in progress to be able to pythonify
> everything, and until then we're just gonna have to rely on shell and
> those great C coded coreutils and stuff like that. Ok, I'm rather fond
> of Bash+coreutils, highest ratio of code lines to work I'v
Hari Sekhon wrote:
> Is it me or is having to use os.system() all the time symptomatic of a
> deficiency/things which are missing from python as a language?
it's you.
The basic problem is that the zipfile interface only reads and writes
whole files, so it may perform poorly or fail on huge files. At one
time I implemented a patch to allow reading files in chunks. However I
believe that the current interface has too many problems to solve by
incremental patching,
Sion Arrowsmith wrote:
> Hari Sekhon <[EMAIL PROTECTED]> wrote:
(snip)
>>The python zipfile module is obviously broken...
>
> This isn't at all obvious to me.
zipfile.read() does not seem to take full advantage of zlib's
decompressobj's features. This could perhaps be improved (left as an
exercise for the reader).
al memory
>(swap), but it could also be a symptom of a libc bug, a bad RAM chip, etc.
>"""
There's another possibility, which I ran into recently. Which is a
problem with physical+virtual memory exceding the space addressable by
a process. So I've got 2G physi
Hari Sekhon <[EMAIL PROTECTED]> wrote:
>import zipfile
>zip=zipfile.ZipFile('d:\somepath\cdimage.zip')
>zip.namelist()
>['someimage.iso']
[ ... ]
>B) content=zip.read('someimage.iso')
>
>Traceback (most recent call last):
> File "", line 1, in ?
> File "D:\u\Python24\lib\zipfile.py", line 357
[EMAIL PROTECTED] wrote:
> Take a look at the pywin32 extension, which I believe has some lower
> level memory allocation and file capabilities that might help you in
> this situation.
But then the solution would not be portable, which would be a shame
since the zlib module (on which ZipFile relies) is portable.
Take a look at the pywin32 extension, which I believe has some lower
level memory allocation and file capabilities that might help you in
this situation. If I'm completely wrong, someone please tell me XD.
Of course, you could just make the read() a step process, reading, oh,
let's say, 8192 bytes at a time.
> Hari Sekhon <[EMAIL PROTECTED]> writes:
> Traceback (most recent call last):
> File "", line 1, in ?
> File "D:\u\Python24\lib\zipfile.py", line 357, in read
> bytes = dc.decompress(bytes)
> MemoryError
Looks like the .iso file is huge. Even if it's only a CD image (approx
650MB), r
I do
import zipfile
zip=zipfile.ZipFile('d:\somepath\cdimage.zip')
zip.namelist()
['someimage.iso']
then either of the two:
A) file('someimage.iso','w').write(zip.read('someimage.iso'))
or
B) content=zip.read('someimage.iso')
but both result in the same error:
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "D:\u\Python24\lib\zipfile.py", line 357, in read
    bytes = dc.decompress(bytes)
MemoryError
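A hedged sketch of a streaming extraction (ZipFile.open() is Python 2.6+,
newer than this thread): copy the member to disk in fixed-size chunks so
the decompressed image never sits in memory, writing in binary mode.

import zipfile, shutil

zf = zipfile.ZipFile(r"d:\somepath\cdimage.zip")
src = zf.open("someimage.iso")               # decompresses on demand
dst = open("someimage.iso", "wb")            # binary mode, unlike the 'w' above
shutil.copyfileobj(src, dst, 64 * 1024)      # 64 KB at a time
dst.close(); src.close(); zf.close()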
like magic it did the trick :D
This should be applied to future Python releases. Thanks.
Fredrik Lundh wrote:
> Jean-Paul Calderone wrote:
> if you look at the debug output (which you may already have done),
> it's an obvious case of fragmentation-inducing behaviour. any malloc-
> based system ma
Jean-Paul Calderone wrote:
> >On a second trial, it's also failed on Python 2.3.5 for Windows, Python
> >2.3.3 for Windows, and Python 2.2.3 for Windows. So this seems to me as
> >a Windows system related bug, not a particular version of Python bug.
>
> Arguably, it's a bug in Python's imaplib module.
Noah wrote:
> This looks like a bug in your build of Python 2.4.2 for Windows.
> Basically it means that C's malloc() function in the Python interpreter
> failed.
>
On a second trial, it's also failed on Python 2.3.5 for Windows, Python
2.3.3 for Windows, and Python 2.2.3 for Windows. So this seems to me as
a Windows system related bug, not a particular version of Python bug.
Fredrik Lundh wrote:
> try adding a print statement to lib/imaplib.py, just before that read
> statement,
>
> print size, read, size-read
> data = self.sslobj.read(size-read)
>
> and let us know what it prints.
14130601 0 14130601
14130601 16353 14114248
14130601 32737 14097864
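A hedged reconstruction of the kind of change being tested here: cap each
SSL read so no single call asks for the whole 14 MB remainder at once,
collecting bounded chunks instead.

def read(self, size):
    # gather `size` bytes in bounded chunks rather than one huge request
    data = []
    done = 0
    while done < size:
        chunk = self.sslobj.read(min(size - done, 16384))
        if not chunk:
            break
        done += len(chunk)
        data.append(chunk)
    return ''.join(data)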
This looks like a bug in your build of Python 2.4.2 for Windows.
Basically it means that C's malloc() function in the Python interpreter
failed.
You can catch this exception to try to recover. Here is an example:
try:
typ, data = M.fetch(num, '(RFC822)')
except MemoryError, e:
More details: this occurs on Python 2.4.2 on Windows, but not on Python
2.3.4 under Cygwin or the Python 2.3.5 Windows binary.
Hi, I encountered a Memory Error Exception on using IMAP4 just like in
Python documentation example, on a specially large email (10 MB). Any
idea how to fix/circumvent this?
>>> typ, data = M.fetch(89, '(RFC822)')
Traceback (most recent call last):
File "&q