I will make the measurements on the initial machine, work with
different files, and post the results.

The code I used is a little different; the while loop allocates new
memory for each chunk:


   /* buffer must already be allocated before the first read() */
   char *buffer = malloc(chunk_size);

   while ((res = read(fd, buffer, chunk_size)) > 0)
   {
      buffer = malloc(chunk_size);   /* fresh memory for the next chunk */
      count++;
   }

I removed the malloc line to compare, but the memory allocation does not
influence the timing much, at least when I work with relatively large
chunks (> 5000 bytes).
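
For reference, a minimal self-contained version of the test could look
like this (the command-line handling, the fixed 5000-byte chunk size,
and the final printf are my additions, not the original test code):

   #include <fcntl.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>

   int main(int argc, char **argv)
   {
      /* file name and chunk size are placeholders, not the
         values from the original test */
      const char *path = (argc > 1) ? argv[1] : "testfile";
      size_t chunk_size = 5000;

      int fd = open(path, O_RDONLY);
      if (fd < 0) {
         perror("open");
         return 1;
      }

      long count = 0;
      ssize_t res;
      char *buffer = malloc(chunk_size);    /* first chunk */

      while (buffer != NULL
             && (res = read(fd, buffer, chunk_size)) > 0) {
         buffer = malloc(chunk_size);       /* fresh memory per chunk */
         count++;
      }

      printf("read %ld chunks of %lu bytes\n",
             count, (unsigned long)chunk_size);
      close(fd);
      return 0;
   }

Each iteration deliberately keeps the previous chunk allocated, so by
the end of the run the whole file sits in the program's private memory,
which is roughly what vim has to do anyway.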


Misi




On Jun 17, 12:47 pm, "Antonio Colombo" <[EMAIL PROTECTED]> wrote:
> Hi Misi,
>
> your tests still are not in the "ideal" conditions, unless there
> are reboots after the creation of each new test file. The creation
> of the file of course leaves a lot of buffers in memory.
> In my tests I used existing files I had not been using at all
> from boot time.
>
> Apart from that, your program and "grep" do not really need
> to have in their private memory (not in the Unix buffer) more
> than one "little piece" of the file. Of course vim needs and uses
> its private memory to hold the whole file. To make things even,
> you should modify your program in order to read each new
> chunk of the file in a different piece of memory, thus having
> them all in memory at the same time at the end of the run.
> I expect the difference to be noticeable (unless the memory
> of the HP-UX server is gigantic).
>
> Cheers, Antonio
>
>  PS Off Topic: I like more "ciao" as a throw away file name.  ;-)
>
> --
>    /||\    | Antonio Colombo
>   / || \   | [EMAIL PROTECTED]
> /  ()  \  | [EMAIL PROTECTED]
> (___||___) |   [EMAIL PROTECTED]
