Fillmore wrote:

Hi there, Python newbie here.

I am working with large files. For this reason I figured that I would
capture the large input into a list and serialize it with pickle for
later (faster) usage.

Everything had worked beautifully until today, when the large (1GB) data
file caused a MemoryError :(

Question for experts: is there a way to refactor this so that data may
be filled/written/released as the script goes and avoid the problem?
Code below.
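The code the post refers to is not included here. A minimal sketch of the
list-then-pickle pattern being described might look like the following
(file names and the per-line handling are placeholders, not the poster's
actual code):

import pickle

# Read the whole input into a list. This is where the memory problem
# starts: every record is held in RAM at the same time.
data = []
with open("input.txt") as f:
    for line in f:
        data.append(line.rstrip("\n"))

# Serialize the list for later reuse.
with open("data.pkl", "wb") as out:
    pickle.dump(data, out)

# In a later run, load the whole list back in one go.
with open("data.pkl", "rb") as f:
    data = pickle.load(f)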
On Tue, 22 Nov 2016 11:40 am, Peter Otten wrote:

> Fillmore wrote:
>
>> I am working with large files. For this reason I figured that I would
>> capture the large input into a list and serialize it with pickle for
>> later (faster) usage.
>
> But is it really faster? If the pickle is, let's say, twice as large as
> the original [...]
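Whether the pickle really loads faster than simply re-reading the text file
can be measured; a rough timing sketch, assuming both files from the sketch
above already exist:

import pickle
import time

t0 = time.perf_counter()
with open("data.pkl", "rb") as f:
    data = pickle.load(f)
print("pickle.load:", time.perf_counter() - t0, "seconds")

t0 = time.perf_counter()
with open("input.txt") as f:
    data = [line.rstrip("\n") for line in f]
print("plain re-read:", time.perf_counter() - t0, "seconds")

If the pickle file turns out to be much larger than the original input, the
extra bytes read from disk can eat into whatever parsing time it saves,
which seems to be the point Peter is raising.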
On Mon, Nov 21, 2016 at 3:43 PM, John Gordon wrote:

> Fillmore writes:
>
>> Question for experts: is there a way to refactor this so that data may
>> be filled/written/released as the script goes and avoid the problem?
>> Code below.
>
> That depends on how the data will be read. Here is one way to do it:
>
> fileObject = open(filename, "w")
> for [...]
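John's message is cut off above, so the following is not his code; it is a
sketch of the incremental idea he appears to be describing: write each
record out as soon as it is produced, and read records back lazily, so the
full list never has to sit in memory (function and file names are made up
for illustration):

def write_items(items, filename):
    # Write records to disk one at a time as they are produced;
    # the complete list is never built in memory.
    with open(filename, "w") as out:
        for item in items:
            out.write(item + "\n")

def read_items(filename):
    # Generator for the later "fast usage" side: yields one record
    # at a time instead of loading everything at once.
    with open(filename) as f:
        for line in f:
            yield line.rstrip("\n")

If the records are not plain strings, the same shape works with pickle
itself: call pickle.dump(item, f) once per record on an open binary file
and read them back with repeated pickle.load(f) calls until EOFError is
raised, so neither side ever needs the whole list at once.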