On Monday 24 December 2007 09:49, Afan Pasalic wrote:
> hi,
> I have a 1.6 GB text file and I have to find out whether a specific
> word is in the file. Every time I try
> $> grep -i "word" file.txt
> I get the message: "grep: memory exhausted".

Try fgrep. It doesn't use regular-expression matching (and your "word" is a
simple fixed string, so it will work for that).
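For example, a minimal sketch using the file name and search word from the
message above (grep -F is the standard equivalent of invoking fgrep):

  $> fgrep -i "word" file.txt       # fixed-string, case-insensitive search
  $> grep -F -i "word" file.txt     # same thing: -F turns off regex matching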
> How can I do that?
> Is there any way I can split the file into several files and then do
> the search?

% apropos split |fgrep '(1)' |sort
csplit (1)        - split a file into sections determined by context lines
ogmsplit (1)      - Split OGG/OGM files into sevaral smaller OGG/OGM files
pnmsplit (1)      - see http://netpbm.sourceforge.net/doc//pnmsplit.html
ppmtoyuvsplit (1) - see http://netpbm.sourceforge.net/doc//ppmtoyuvsplit.html
split (1)         - split a file into pieces
split2po (1)      - Creates a po file from two DocBook XML files
splitdiff (1)     - separate out incremental patches
tiffsplit (1)     - split a multi-image TIFF into single-image TIFF files
xml_merge (1)     - merge back XML files split with C<xml_split>
xml_split (1) [xml_split] - cut a big XML file into smaller chunks
yuvsplittoppm (1) - see http://netpbm.sourceforge.net/doc//yuvsplittoppm.html
zipsplit (1) [zip] - package and compress (archive) files

A few of these are potentially useful for you: split and csplit are the most
likely candidates (a sketch of that approach follows below the message).

> thanks for any help.
>
> -afan

Randall Schulz
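As a sketch of the split-then-search idea, assuming GNU split; the chunk size
and the "chunk_" prefix are illustrative choices, not from the original
message:

  $> split -l 10000000 file.txt chunk_     # cut file.txt into pieces of 10 million lines: chunk_aa, chunk_ab, ...
  $> for f in chunk_*; do fgrep -i -l "word" "$f"; done   # print the names of the pieces that contain the word
  $> rm chunk_*                            # remove the pieces afterwards

Splitting on line boundaries (-l) rather than bytes (-b) keeps an occurrence
of the word from being cut in half at a chunk boundary.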