Re: Gigantic file size processing error

2014-01-03 Thread kurtz le pirate
In article 
1388676082.98276.yahoomail...@web193403.mail.sg3.yahoo.com,
 mani_nm...@yahoo.com (mani kandan) wrote:

 Hi,

We have a file of huge size (500MB). We need to manipulate the file, do some 
 replacement and then write the file. I have used File::Slurp and it works for 
 a file size of 300MB (thanks Uri), but for this huge 500MB file it does not 
 process and comes out with an error. I have also used the Tie::File module, 
 same case, it does not process. Any guidance?

regards
Manikandan


Hi,


have you tried this kind of command:

  perl -p -i -e 's/oneThing/otherThing/g' yourFile

does it hang or not?


and, 500MB is not a gigantic file :)


-- 
klp

-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Jan Gruber
Hi List,

On Friday, January 03, 2014 10:57:13 AM kurtz le pirate wrote:

 have you tried this kind of command:
   perl -p -i -e 's/oneThing/otherThing/g' yourFile

I was about to post the same thing. My suggestion: Create a backup file just 
in case something goes wrong.

perl -pi.bak -e 's/oneThing/otherThing/g' yourFile

This creates a backup named yourFile.bak prior to processing yourFile.

 does it hang or not?
I have processed files > 2G this way, no problems encountered.

Regards,
Jan
-- 
When a woman marries again it is because she detested her first husband. 
When a man marries again, it is because he adored his first wife. -- Oscar 
Wilde
- 


-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Janek Schleicher

On 02.01.2014 18:08, David Precious wrote:


Oh, I was thinking of a wrapper that would:

(a) open a new temp file
(b) iterate over the source file, line-by-line, calling the provided
coderef for each line
(c) write $_ (potentially modified by the coderef) to the temp file
(d) finally, rename the temp file over the source file

Of course, it's pretty easy to write such code yourself, and as it
doesn't slurp the file in, it could be considered out of place in
File::Slurp.  I'd be fairly surprised if such a thing doesn't already
exist on CPAN, too.  (If it didn't, I might actually write such a
thing, as a beginner-friendly "here's how to easily modify a file, line
by line, with minimal effort" offering.)


A short look at CPAN turns up https://metacpan.org/pod/File::Inplace,
which looks like it does what the OP wants.

Honestly, I have never used it, and it may also have a performance 
problem, but I at least looked at its source code and it is implemented 
via a temporary file, without holding the whole file in memory.
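
For reference, its synopsis is roughly along these lines (reproduced from 
memory, so treat the exact method names as approximate and check the 
module's POD rather than this sketch):

  use File::Inplace;

  # edits go to a temporary file; the original is only replaced on commit
  my $editor = File::Inplace->new(file => 'yourFile.txt');

  while (my ($line) = $editor->next_line) {
      $line =~ s/oneThing/otherThing/g;
      $editor->replace_line($line);
  }

  $editor->commit;   # or $editor->rollback to discard the changes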



Greetings,
Janek

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread mani kandan
Hi,

Thanks for all your guidance. The error was "Perl Command Line Interpreter 
has encountered a problem and needs to close".

I also increased the virtual memory, no use. My system configuration: OS XP 
SP3, Intel Core 2 Duo with 2 GB RAM.


regards
Manikandan N





-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/

Re: Gigantic file size processing error

2014-01-03 Thread Uri Guttman

On 01/03/2014 10:22 AM, Janek Schleicher wrote:



A short look at CPAN turns up https://metacpan.org/pod/File::Inplace,
which looks like it does what the OP wants.

Honestly, I have never used it, and it may also have a performance
problem, but I at least looked at its source code and it is implemented
via a temporary file, without holding the whole file in memory.


i haven't seen that before, but it was last touched in 2005. its api 
requires method calls to get each line, another method call to replace a 
line, and such. i would call that somewhat clunky compared to 
edit_file_lines, whose primary arg is a code block that modifies $_. likely 
it will be much slower for typical files as well.


now for very large files, we can't tell. we still haven't heard back 
from the OP about the actual error. my conjecture of a resource limit 
still feels right. neither perl nor file::slurp would have any errors on 
a large file other than limited resources. and that can be fixed with a 
ulimit call or similar.


uri



--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Uri Guttman

On 01/03/2014 12:10 PM, mani kandan wrote:

Hi,

Thanks for all your guidance. The error was "Perl Command Line
Interpreter has encountered a problem and needs to close".


that isn't the real error. you need to run this in a command window that 
won't close after it fails so you can see the real error message.


I also increased the virtual memory, no use. My system configuration: OS
XP SP3, Intel Core 2 Duo with 2 GB RAM.


that isn't a lot of ram for a 500MB file to be slurped. increasing the 
virtual ram won't help as it will likely be mostly in swap. i don't know 
windows much so i can't say how to really check/set the virtual size of 
a process. try doing this on linux or on a box with much more ram. 
otherwise use a perl -p one liner loop and it should work.
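
since you are on windows, note that cmd.exe does not treat single quotes 
specially, so a -p one-liner there needs double quotes around the code, and 
in-place editing on win32 generally wants a backup suffix, e.g.:

  perl -p -i.bak -e "s/oneThing/otherThing/g" yourFile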


uri

--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Shawn H Corey
On Fri, 03 Jan 2014 12:22:48 -0500
Uri Guttman u...@stemsystems.com wrote:

 i haven't seen that before but it was last touched in 2005.

That means it has no bugs. A better metric of a module's quality is how
many outstanding bugs there are. See
https://rt.cpan.org//Dist/Display.html?Queue=File-Inplace


-- 
Don't stop where the ink does.
Shawn

-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Uri Guttman

On 01/03/2014 12:48 PM, Shawn H Corey wrote:

On Fri, 03 Jan 2014 12:22:48 -0500
Uri Guttman u...@stemsystems.com wrote:


i haven't seen that before but it was last touched in 2005.


That means it has no bugs. A better metric of a module's quality is how
many outstanding bugs there are. See
https://rt.cpan.org//Dist/Display.html?Queue=File-Inplace


it also means it may be rotting on the vine. or no one uses it to report 
bugs. or it is an orphan module. or no requests for new features 
(popular modules always get that). stable doesn't always mean it is 
good. considering i wrote edit_file_lines and never heard of that module 
until now, it says the module isn't known or used. in fact metacpan says it 
has no reverse dependencies (not one module or distribution uses it). 
not bragging but file::slurp has over 600 reverse dependencies. that 
means i get feature requests, more bug reports, etc.


you may need to look at the whole picture before you decide to use a 
module. if this module was so useful, why isn't it being used by anyone 
since 2005? i think a major negative is the very odd api which i already 
mentioned. you have to do a lot of work to use it and it doesn't gain 
much because of that. it does have a commit/rollback thing but again, 
that is easy to code up yourself. just write to a temp file and either 
rename it or delete it. not much of a win there.


uri


--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Rob Dixon

On 02/01/2014 15:21, mani kandan wrote:

Hi,

We have a file of huge size (500MB). We need to manipulate the file, do some
replacement and then write the file. I have used File::Slurp and it works
for a file size of 300MB (thanks Uri), but for this huge 500MB file it does
not process and comes out with an error. I have also used the Tie::File
module, same case, it does not process. Any guidance?


Slurping entire files into memory is usually overkill, and you should
only do it if you can afford the memory and *really need* random access
to the entire file at once. Most of the time a simple sequential
read/modify/write is appropriate, and Perl will take care of buffering
the input and output files in reasonable amounts.

According to your later posts you have just 2GB of memory, and although
Windows XP *can* run in 500MB I wouldn't like to see a program that
slurped a quarter of the entire memory.

I haven't seen you describe what processing you want to do on the file.
If the input is a text file and the changes can be done line by line,
then you are much better off with a program that looks like this

use strict;
use warnings;

open my $in,  '<', 'myfile.txt' or die $!;
open my $out, '>', 'outfile.txt' or die $!;

while (<$in>) {
  s/from string/to string/g;
  print $out $_;
}

__END__

But if you need more, then I would guess that Tie::File is your best
bet. You don't say what problems you are getting using this module, so
please explain.
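
For illustration, a minimal Tie::File sketch of the same substitution would
look like this (Tie::File presents the file's lines as an array, and changes
to the array elements are written back to the file):

  use strict;
  use warnings;
  use Tie::File;

  tie my @lines, 'Tie::File', 'myfile.txt' or die "cannot tie: $!";

  for (@lines) {
      s/from string/to string/g;
  }

  untie @lines;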

Rob






--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-03 Thread Uri Guttman

On 01/03/2014 02:28 PM, Rob Dixon wrote:

On 02/01/2014 15:21, mani kandan wrote:

Hi,

We have a file of huge size (500MB). We need to manipulate the file, do some
replacement and then write the file. I have used File::Slurp and it works
for a file size of 300MB (thanks Uri), but for this huge 500MB file it does
not process and comes out with an error. I have also used the Tie::File
module, same case, it does not process. Any guidance?


Slurping entire files into memory is usually overkill, and you should
only do it if you can afford the memory and *really need* random access
to the entire file at once. Most of the time a simple sequential
read/modify/write is appropriate, and Perl will take care of buffering
the input and output files in reasonable amounts.



of course i differ on that opinion. slurping is almost always faster and 
in many cases the code is simpler than line by line i/o. also you can do 
much easier parsing and processing of whole files in a single scalar than 
line by line. and reasonable size has shifted dramatically over the 
decades. in the olden days line by line was mandated due to small 
amounts of ram. the typical file size (code, configs, text, markup, 
html, etc) has not grown much since then but ram has gotten so large and 
cheap. slurping is the way to go today other than for genetics, logs and 
similar super large files.
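
as a concrete illustration of what i mean (a minimal sketch using read_file 
and write_file from File::Slurp, with a made-up substitution):

  use strict;
  use warnings;
  use File::Slurp qw(read_file write_file);

  my $file = 'myfile.txt';

  # whole file in one scalar; the regex can match across line boundaries,
  # which a line-by-line loop cannot do
  my $text = read_file($file);
  $text =~ s/from string/to string/g;
  write_file($file, $text);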




According to your later posts you have just 2GB of memory, and although
Windows XP *can* run in 500MB I wouldn't like to see a program that
slurped a quarter of the entire memory.

I haven't seen you describe what processing you want to do on the file.
If the input is a text file and the changes can be done line by line,
then you are much better off with a program that looks like this

use strict;
use warnings;

open my $in,  '<', 'myfile.txt' or die $!;
open my $out, '>', 'outfile.txt' or die $!;

while (<$in>) {
   s/from string/to string/g;
   print $out $_;
}

__END__

But if you need more, then I would guess that Tie::File is your best
bet. You don't say what problems you are getting using this module, so
please explain.


tie::file will be horrible for editing a large file like that. your line 
by line or similar code would be much better. tie::file does so much 
seeking and i/o, much more than linear access buffering would do. when 
lines wrap over block boundaries (much more likely than not), tie::file 
does extra amounts of i/o.


uri

--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread David Precious
On Thu, 2 Jan 2014 23:21:22 +0800 (SGT)
mani kandan mani_nm...@yahoo.com wrote:

 Hi,
 
 We have a file of huge size (500MB). We need to manipulate the file, do
 some replacement and then write the file. I have used File::Slurp and it
 works for a file size of 300MB (thanks Uri), but for this huge 500MB file
 it does not process and comes out with an error. I have also used the
 Tie::File module, same case, it does not process. Any guidance?

Firstly, be specific - "come out with error" doesn't help us - what is
the error?

Secondly - do you need to work on the file as a whole, or can you just
loop over it, making changes, and writing them back out?  In other
words, do you *need* to hold the whole file in memory at one time?
More often than not, you don't.

If it's per-line changes, then File::Slurp::edit_file_lines should work
- for e.g.:

  use File::Slurp qw(edit_file_lines);
  my $filename = '/tmp/foo';
  edit_file_lines(sub { s/badger/mushroom/g }, $filename);

The above would of course replace every occurrence of 'badger' with
'mushroom' in the file.

Cheers

Dave P


-- 
David Precious (bigpresh) dav...@preshweb.co.uk
http://www.preshweb.co.uk/ www.preshweb.co.uk/twitter
www.preshweb.co.uk/linkedin   www.preshweb.co.uk/facebook
www.preshweb.co.uk/cpan   www.preshweb.co.uk/github



-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread Uri Guttman

On 01/02/2014 10:39 AM, David Precious wrote:

On Thu, 2 Jan 2014 23:21:22 +0800 (SGT)
mani kandan mani_nm...@yahoo.com wrote:


Hi,

We have a file of huge size (500MB). We need to manipulate the file, do some
replacement and then write the file. I have used File::Slurp and it works
for a file size of 300MB (thanks Uri), but for this huge 500MB file it does
not process and comes out with an error. I have also used the Tie::File
module, same case, it does not process. Any guidance?


Firstly, be specific - "come out with error" doesn't help us - what is
the error?

Secondly - do you need to work on the file as a whole, or can you just
loop over it, making changes, and writing them back out?  In other
words, do you *need* to hold the whole file in memory at one time?
More often than not, you don't.

If it's per-line changes, then File::Slurp::edit_file_lines should work
- for e.g.:

   use File::Slurp qw(edit_file_lines);
   my $filename = '/tmp/foo';
   edit_file_lines(sub { s/badger/mushroom/g }, $filename);

The above would of course replace every occurrence of 'badger' with
'mushroom' in the file.


if there is a size issue, that would be just as bad as slurping in the 
whole file and it would use even more storage as it will be an array of 
all the lines internally. slurping in 500MB is not a smart thing unless 
you have many gigs of free ram. otherwise it will just be going to disk 
on the swap and you don't gain much other than simpler logic.


but i agree, knowing the error message and who is generating it will be 
valuable. it could be a virtual ram limitation on the OS which can be 
changed with the ulimit utility (or BSD::Resource if you have that module).
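
a minimal sketch of checking and raising that limit from perl with 
BSD::Resource (assuming a unix-ish box; RLIMIT_AS is the address-space limit, 
and the soft limit can only be raised up to the hard limit) would be 
something like:

  use strict;
  use warnings;
  use BSD::Resource;   # exports getrlimit, setrlimit and the RLIMIT_* constants

  # report the current virtual memory (address space) limits
  my ($soft, $hard) = getrlimit(RLIMIT_AS);
  printf "address space limit: soft=%s hard=%s\n", $soft, $hard;

  # raise the soft limit up to the hard limit so a big slurp has room
  setrlimit(RLIMIT_AS, $hard, $hard) or die "setrlimit failed: $!";

the shell equivalent is roughly "ulimit -v unlimited" before running the 
script, if the hard limit allows it.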


uri


--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread David Precious
On Thu, 02 Jan 2014 11:18:31 -0500
Uri Guttman u...@stemsystems.com wrote:
 On 01/02/2014 10:39 AM, David Precious wrote:

  Secondly - do you need to work on the file as a whole, or can you
  just loop over it, making changes, and writing them back out?  In
  other words, do you *need* to hold the whole file in memory at one
  time? More often than not, you don't.
 
  If it's per-line changes, then File::Slurp::edit_file_lines should
  work
  - for e.g.:
 
 use File::Slurp qw(edit_file_lines);
 my $filename = '/tmp/foo';
 edit_file_lines(sub { s/badger/mushroom/g }, $filename);
 
  The above would of course replace every occurrence of 'badger' with
  'mushroom' in the file.
 
 if there is a size issue, that would be just as bad as slurping in
 the whole file and it would use even more storage as it will be an
 array of all the lines internally.

Oh - my mistake, I'd believed that edit_file_lines edited the file
line-by-line, writing the results to a temporary file and then
renaming the temporary file over the original at the end.

In that case, I think the docs are a little unclear:

These subs read in a file into $_, execute a code block which should
modify $_ and then write $_ back to the file. The difference between
them is that edit_file reads the whole file into $_ and calls the code
block one time. With edit_file_lines each line is read into $_ and the
code is called for each line...

and 

These subs are the equivalent of the -pi command line options of
Perl...

... to me, that sounds like edit_file_lines reads a line at a time
rather than slurping the whole lot - but looking at the code, it does
indeed read the entire file contents into RAM.  (I probably should have
expected anything in File::Slurp to, well, slurp the file... :) )

Part of me wonders if File::Slurp should provide an in-place (not
slurping into RAM) editing feature which works like edit_file_lines but
line-by-line using a temp file, but that's probably feature creep :)

OP - what didn't work about Tie::File?



-- 
David Precious (bigpresh) dav...@preshweb.co.uk
http://www.preshweb.co.uk/ www.preshweb.co.uk/twitter
www.preshweb.co.uk/linkedin   www.preshweb.co.uk/facebook
www.preshweb.co.uk/cpan   www.preshweb.co.uk/github



-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread Uri Guttman

On 01/02/2014 11:48 AM, David Precious wrote:

On Thu, 02 Jan 2014 11:18:31 -0500
Uri Guttman u...@stemsystems.com wrote:

On 01/02/2014 10:39 AM, David Precious wrote:



Secondly - do you need to work on the file as a whole, or can you
just loop over it, making changes, and writing them back out?  In
other words, do you *need* to hold the whole file in memory at one
time? More often than not, you don't.

If it's per-line changes, then File::Slurp::edit_file_lines should
work
- for e.g.:

use File::Slurp qw(edit_file_lines);
my $filename = '/tmp/foo';
edit_file_lines(sub { s/badger/mushroom/g }, $filename);

The above would of course replace every occurrence of 'badger' with
'mushroom' in the file.


if there is a size issue, that would be just as bad as slurping in
the whole file and it would use even more storage as it will be an
array of all the lines internally.


Oh - my mistake, I'd believed that edit_file_lines edited the file
line-by-line, writing the results to a temporary file and then
renaming the temporary file over the original at the end.

In that case, I think the docs are a little unclear:

These subs read in a file into $_, execute a code block which should
modify $_ and then write $_ back to the file. The difference between
them is that edit_file reads the whole file into $_ and calls the code
block one time. With edit_file_lines each line is read into $_ and the
code is called for each line...



good point. i should emphasize that it does slurp in the file. tie::file 
only reads in chunks and moves around as you access elements. 
edit_file_lines slurps into an array and loops over those elements 
aliasing each one to $_. it definitely eats its own dog food!
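
roughly, the pattern is this (a simplified sketch of the idea, not the 
module's actual code):

  use File::Slurp qw(read_file write_file);

  sub edit_lines_sketch {
      my ($code, $filename) = @_;

      my @lines = read_file($filename);   # slurp every line into memory
      for (@lines) {                       # for aliases $_ to each element
          $code->();                       # the caller's sub edits $_ in place
      }
      write_file($filename, @lines);       # write the modified lines back
  }

  # e.g. edit_lines_sketch(sub { s/badger/mushroom/g }, '/tmp/foo');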



and

These subs are the equivalent of the -pi command line options of
Perl...

... to me, that sounds like edit_file_lines reads a line at a time
rather than slurping the whole lot - but looking at the code, it does
indeed read the entire file contents into RAM.  (I probably should have
expected anything in File::Slurp to, well, slurp the file... :) )


as i said, dog food is good! :)

i wrote edit_file and edit_file_lines as interesting wrappers around 
read_file and write_file. i assumed it was obvious they used those slurp 
functions.





Part of me wonders if File::Slurp should provide an in-place (not
slurping into RAM) editing feature which works like edit_file_lines but
line-by-line using a temp file, but that's probably feature creep :)


that IS tie::file which i didn't want for efficiency reasons. it has to 
read/write back and forth every time you modify an element. edit_file 
(and _lines) are meant to be fast and simple to use for common editing 
of files. as with slurping, i didn't expect them to be used on .5GB 
files! :)


uri



--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread David Precious
On Thu, 02 Jan 2014 11:56:26 -0500
Uri Guttman u...@stemsystems.com wrote:

  Part of me wonders if File::Slurp should provide an in-place (not
  slurping into RAM) editing feature which works like edit_file_lines
  but line-by-line using a temp file, but that's probably feature
  creep :)  
 
 that IS tie::file which i didn't want for efficiency reasons. it has
 to read/write back and forth every time you modify an element.
 edit_file (and _lines) are meant to be fast and simple to use for
 common editing of files. as with slurping, i didn't expect them to be
 used on .5GB files! :)

Oh, I was thinking of a wrapper that would:

(a) open a new temp file
(b) iterate over the source file, line-by-line, calling the provided
coderef for each line
(c) write $_ (potentially modified by the coderef) to the temp file
(d) finally, rename the temp file over the source file

Of course, it's pretty easy to write such code yourself, and as it
doesn't slurp the file in, it could be considered out of place in
File::Slurp.  I'd be fairly surprised if such a thing doesn't already
exist on CPAN, too.  (If it didn't, I might actually write such a
thing, as a beginner-friendly "here's how to easily modify a file, line
by line, with minimal effort" offering.)
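
A minimal sketch of such a wrapper, using File::Temp and a hypothetical 
helper name (edit_file_loop, as Uri suggests elsewhere in the thread), might 
look like this:

  use strict;
  use warnings;
  use File::Temp qw(tempfile);

  sub edit_file_loop {
      my ($code, $filename) = @_;

      # temp file in the current directory so rename() stays on one filesystem
      my ($tmp_fh, $tmp_name) = tempfile(DIR => '.', UNLINK => 0);

      open my $in, '<', $filename or die "cannot read $filename: $!";
      while (<$in>) {
          $code->();              # the coderef edits $_ in place
          print {$tmp_fh} $_;
      }
      close $in;
      close $tmp_fh or die "cannot write $tmp_name: $!";

      rename $tmp_name, $filename
          or die "cannot rename $tmp_name over $filename: $!";
  }

  # e.g. edit_file_loop(sub { s/badger/mushroom/g }, 'myfile.txt');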


-- 
David Precious (bigpresh) dav...@preshweb.co.uk
http://www.preshweb.co.uk/ www.preshweb.co.uk/twitter
www.preshweb.co.uk/linkedin   www.preshweb.co.uk/facebook
www.preshweb.co.uk/cpan   www.preshweb.co.uk/github



-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread Uri Guttman

On 01/02/2014 12:08 PM, David Precious wrote:

On Thu, 02 Jan 2014 11:56:26 -0500
Uri Guttman u...@stemsystems.com wrote:


Part of me wonders if File::Slurp should provide an in-place (not
slurping into RAM) editing feature which works like edit_file_lines
but line-by-line using a temp file, but that's probably feature
creep :)


that IS tie::file which i didn't want for efficiency reasons. it has
to read/write back and forth every time you modify an element.
edit_file (and _lines) are meant to be fast and simple to use for
common editing of files. as with slurping, i didn't expect them to be
used on .5GB files! :)


Oh, I was thinking of a wrapper that would:

(a) open a new temp file
(b) iterate over the source file, line-by-line, calling the provided
coderef for each line
(c) write $_ (potentially modified by the coderef) to the temp file
(d) finally, rename the temp file over the source file

Of course, it's pretty easy to write such code yourself, and as it
doesn't slurp the file in, it could be considered out of place in
File::Slurp.  I'd be fairly surprised if such a thing doesn't already
exist on CPAN, too.  (If it didn't, I might actually write such a
thing, as a beginner-friendly "here's how to easily modify a file, line
by line, with minimal effort" offering.)




it wouldn't be a bad addition to file::slurp. call it something like 
edit_file_loop. if you write it, i will add it to the module. you can 
likely steal the code from edit_file_lines and modify that. i would 
document it as an alternative to edit_file_lines for very large files.


it will need pod, test files and good comments for me to add it. credit 
will be given :)


thanx,

uri

--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread Uri Guttman

On 01/02/2014 12:33 PM, David Precious wrote:

On Thu, 02 Jan 2014 12:19:16 -0500
Uri Guttman u...@stemsystems.com wrote:

On 01/02/2014 12:08 PM, David Precious wrote:

Oh, I was thinking of a wrapper that would:

(a) open a new temp file
(b) iterate over the source file, line-by-line, calling the provided
coderef for each line
(c) write $_ (potentially modified by the coderef) to the temp file
(d) finally, rename the temp file over the source file


[...]

it wouldn't be a bad addition to file::slurp. call it something like
edit_file_loop. if you write it, i will add it to the module. you can
likely steal the code from edit_file_lines and modify that. i would
document it as an alternative to edit_file_lines for very large files.

it will need pod, test files and good comments for me to add it.
credit will be given :)


Righto - I'll add it to my list of things awaiting tuit resupply :)



who is your tuit supplier? i am looking for a better and cheaper one.

uri


--
Uri Guttman - The Perl Hunter
The Best Perl Jobs, The Best Perl Hackers
http://PerlHunter.com

--
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/




Re: Gigantic file size processing error

2014-01-02 Thread David Precious
On Thu, 02 Jan 2014 12:19:16 -0500
Uri Guttman u...@stemsystems.com wrote:
 On 01/02/2014 12:08 PM, David Precious wrote:
  Oh, I was thinking of a wrapper that would:
 
  (a) open a new temp file
  (b) iterate over the source file, line-by-line, calling the provided
  coderef for each line
  (c) write $_ (potentially modified by the coderef) to the temp file
  (d) finally, rename the temp file over the source file

[...]
 it wouldn't be a bad addition to file::slurp. call it something like 
 edit_file_loop. if you write it, i will add it to the module. you can 
 likely steal the code from edit_file_lines and modify that. i would 
 document it as an alternative to edit_file_lines for very large files.
 
 it will need pod, test files and good comments for me to add it.
 credit will be given :)

Righto - I'll add it to my list of things awaiting tuit resupply :)



-- 
David Precious (bigpresh) dav...@preshweb.co.uk
http://www.preshweb.co.uk/ www.preshweb.co.uk/twitter
www.preshweb.co.uk/linkedin   www.preshweb.co.uk/facebook
www.preshweb.co.uk/cpan   www.preshweb.co.uk/github



-- 
To unsubscribe, e-mail: beginners-unsubscr...@perl.org
For additional commands, e-mail: beginners-h...@perl.org
http://learn.perl.org/