Re: Copying large files eats all of the RAM

2013-12-02 Thread Andreas Mohr
On Sat, Nov 30, 2013 at 11:59:07AM +0530, venkata koppula wrote:
> Thanks for your replies.
> 
> Yeah, I understand that we need to utilize the resources as much as we
> can. At the same time the user should not feel that the system is slow,
> and should never have to wait for a copy operation to complete before
> launching another application.
> 
> If the user is a system administrator or a programmer, he/she
> understands the problem and will try to tune the kernel based on
> his/her requirements. For an application user (a desktop user doesn't
> worry about such optimizations, and may not even know what they are :)),
> faster response is what matters.

I'm afraid you're damn right - the standard configuration ought to be
pretty close to the optimum, without any tuning, given that most people
don't (know to) do manual tuning.
And even in recent times the kernel has not managed to achieve that goal
for several use cases. But I'm quite certain we're getting closer :)

BTW, a quite likely related and possibly helpful topic is
"[patch 7/9] mm: thrash detection-based file cache sizing" 
https://lkml.org/lkml/2013/12/2/483

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.


Re: Copying large files eats all of the RAM

2013-12-02 Thread Andreas Mohr
On Sat, Nov 30, 2013 at 11:59:07AM +0530, venkata koppula wrote:
 Thanks for you replies.
 
 Yeah, I understand that we need to utilize the resources as much as we
 can. At the same time user should not
 feel that system is slow and user should never wait for the copying
 operation should complete to launch another
 application meanwhile.
 
 If the user is a system administrator or a programmer, he/she
 understands the problem and will try to tune the kernel
 based on his/her requirements. As an application user(A desktop user
 doesn't worry about the optimizations,
 even doesn't know what it is:)) faster response is important.

I'm afraid you're damn right - standard configuration ought to be pretty
similar to the optimum, without any tuning, given that most people don't
(know to) do manual tuning.
And even in recent times the kernel did not manage to achieve that goal,
for several use cases. But I'm quite certain we're getting closer :)

BTW, a quite likely related and possibly helpful topic is
[patch 7/9] mm: thrash detection-based file cache sizing 
https://lkml.org/lkml/2013/12/2/483

HTH,

Andreas Mohr

-- 
GNU/Linux. It's not the software that's free, it's you.
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Copying large files eats all of the RAM

2013-11-30 Thread Henrique de Moraes Holschuh
On Sat, 30 Nov 2013, venkata koppula wrote:
> Yeah, I understand that we need to utilize the resources as much as we
> can. At the same time the user should not feel that the system is slow,
> and should never have to wait for a copy operation to complete before
> launching another application.

http://thread.gmane.org/gmane.linux.kernel.mm/108708/focus=1588604

Can also help when you have a mix of slow-as-heck and fast devices.
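
For the really slow devices there are also per-device writeback limits
in sysfs; a rough sketch (the 8:0 / 8:16 device numbers here are only
examples, check ls /sys/class/bdi on your box):

ls /sys/class/bdi                        # one entry per backing device (MAJOR:MINOR)
cat /sys/class/bdi/8:0/max_ratio         # default 100 (% of dirtyable memory)
echo 5 > /sys/class/bdi/8:16/max_ratio   # cap a slow disk at 5% of dirtyable memory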

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh


Re: Copying large files eats all of the RAM

2013-11-30 Thread Henrique de Moraes Holschuh
On Sat, 30 Nov 2013, venkata koppula wrote:
 Yeah, I understand that we need to utilize the resources as much as we
 can. At the same time user should not
 feel that system is slow and user should never wait for the copying
 operation should complete to launch another
 application meanwhile.

http://thread.gmane.org/gmane.linux.kernel.mm/108708/focus=1588604

Can also help when you have a mix of slow-as-heck and fast devices.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Copying large files eats all of the RAM

2013-11-29 Thread venkata koppula
Thanks for your replies.

Yeah, I understand that we need to utilize the resources as much as we
can. At the same time the user should not feel that the system is slow,
and should never have to wait for a copy operation to complete before
launching another application.

If the user is a system administrator or a programmer, he/she
understands the problem and will try to tune the kernel based on
his/her requirements. For an application user (a desktop user doesn't
worry about such optimizations, and may not even know what they are :)),
faster response is what matters.

As you said, maybe the problem is with my hardware. Mine is an Acer
laptop with an Intel Core i5 processor, 4GB RAM and a 500GB HDD (SATA
controller; I didn't check who the hard disk manufacturer is). The copy
is within the same hard disk.

I will check whether there is any problem with my hardware, but
meanwhile here is the slabinfo (I took the part of slabinfo that I
think is important in this case).

Before the operation:

# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>

ext4_inode_cache   12469  12469    880   37    8 : tunables    0    0    0 : slabdata    337    337      0
ext4_io_page        5376   5376     16  256    1 : tunables    0    0    0 : slabdata     21     21      0
inode_cache         9309   9309    560   29    4 : tunables    0    0    0 : slabdata    321    321      0
dentry             65549  65919    192   21    1 : tunables    0    0    0 : slabdata   3139   3139      0
buffer_head        32526  32526    104   39    1 : tunables    0    0    0 : slabdata    834    834      0


When the copy is going on:

ext4_inode_cache   11298  12543    880   37    8 : tunables    0    0    0 : slabdata    339    339      0
ext4_io_page       29696  29696     16  256    1 : tunables    0    0    0 : slabdata    116    116      0
inode_cache         9309   9309    560   29    4 : tunables    0    0    0 : slabdata    321    321      0
dentry             24787  33054    192   21    1 : tunables    0    0    0 : slabdata   1574   1574      0
buffer_head       553839 553839    104   39    1 : tunables    0    0    0 : slabdata  14201  14201      0


I tried Austin S Hemmelgarn's suggestion as well; the results are as
follows.

Same stats after running
echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytes:

Before the copy operation:

ext4_inode_cache    5788  10434    880   37    8 : tunables    0    0    0 : slabdata    282    282      0
ext4_io_page        9734  11264     16  256    1 : tunables    0    0    0 : slabdata     44     44      0
inode_cache         8258   9251    560   29    4 : tunables    0    0    0 : slabdata    319    319      0
dentry             17400  24843    192   21    1 : tunables    0    0    0 : slabdata   1183   1183      0
buffer_head        14425  30966    104   39    1 : tunables    0    0    0 : slabdata    794    794      0


When the copy is going on:

ext4_inode_cache    4965   9805    880   37    8 : tunables    0    0    0 : slabdata    265    265      0
ext4_io_page       17708  19456     16  256    1 : tunables    0    0    0 : slabdata     76     76      0
inode_cache         8258   9251    560   29    4 : tunables    0    0    0 : slabdata    319    319      0
dentry             17052  24738    192   21    1 : tunables    0    0    0 : slabdata   1178   1178      0
buffer_head       458454 459810    104   39    1 : tunables    0    0    0 : slabdata  11790  11790      0


I think with this tuning the system response is fast, there are no
freezes, and I am able to launch applications.
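
In case it is useful, the setting can also be made to survive a reboot
(a sketch; the file name under /etc/sysctl.d/ is arbitrary, and
vm.dirty_background_bytes is the sysctl name for the same /proc knob):

echo 'vm.dirty_background_bytes = 16777216' >> /etc/sysctl.d/99-writeback.conf
sysctl -p /etc/sysctl.d/99-writeback.conf   # apply now, no reboot needed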

Thanks
Venkat

On Sat, Nov 30, 2013 at 5:12 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
> On 11/29/2013 04:56 PM, Andreas Mohr wrote:
>> Hi,
>>
>>> My laptop has 4GB of RAM; before I issue the command around 1.5GB of
>>> memory is used, and when I issue the cp command around 3.7GB of
>>> memory is used. And the cp command takes a lot of time to copy.
>>>
>>> I am not able to launch other applications (they take a lot of time),
>>> and even compiz freezes frequently. My laptop has Ubuntu installed on
>>> it.
>>>
>>> Is this a problem with only my system, or is it a common problem with
>>> Linux?
>>>
>>> Is there any way to stop a copy command from using all of my memory?
>>
>> The purpose of a good operating system is *exactly* to pursue the
>> "*all* RAM is used, *all* the time" optimum to the highest degree.
>> Or would you want to have your power supply used to power useless
>> memory that's sitting idle?
>>
>> It's probably a good idea to read up on the many sites which explain
>> important OS caching mechanisms.
>>
>> That said, there may of course be situations where too much
>> competition/contention for resources occurs, or where calculation of the
>> kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
>> amounts of memory available for immediate use.
>> But that should be a matter of optimizing core kernel algorithms
>> even more than they already are.
>>
>> And it's also known that for certain situations (e.g. trying to push
>> very large amounts of data over a lowly USB 1.1 USB stick 

Re: Copying large files eats all of the RAM

2013-11-29 Thread Austin S Hemmelgarn
On 11/29/2013 04:56 PM, Andreas Mohr wrote:
> Hi,
> 
>> My laptop has 4GB of RAM; before I issue the command around 1.5GB of
>> memory is used, and when I issue the cp command around 3.7GB of
>> memory is used. And the cp command takes a lot of time to copy.
>>
>> I am not able to launch other applications (they take a lot of time),
>> and even compiz freezes frequently. My laptop has Ubuntu installed on
>> it.
>>
>> Is this a problem with only my system, or is it a common problem with
>> Linux?
>>
>> Is there any way to stop a copy command from using all of my memory?
> 
> The purpose of a good operating system is *exactly* to pursue the
> "*all* RAM is used, *all* the time" optimum to the highest degree.
> Or would you want to have your power supply used to power useless
> memory that's sitting idle?
> 
> It's probably a good idea to read up on the many sites which explain
> important OS caching mechanisms.
> 
> That said, there may of course be situations where too much
> competition/contention for resources occurs, or where calculation of the
> kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
> amounts of memory available for immediate use.
> But that should be a matter of optimizing core kernel algorithms
> even more than they already are.
> 
> And it's also known that for certain situations (e.g. trying to push
> very large amounts of data over a lowly USB 1.1 USB stick connection),
> Linux does (or did?) tend to have issues with that cached data piling up
> in somewhat negative ways prior to getting flushed over the connection,
> thereby causing system performance to degrade (I'm not in the know of
> how much that still applies to very new Linux kernel versions).
> 
> But in your case that might simply be a problem of your particular
> hardware (IRQ issues, improperly implemented drivers, ...).
> Some benchmarking activities might be able to provide more details
> (e.g. hdparm -tT, bonnie++ results, memory performance tests, etc.).
> 
> cat /proc/slabinfo
> ought to provide an initial overview of which cache elements
> manage to keep the largest memory areas in service.
> 

I think it is more likely hardware related; the USB storage (and other
slow device) caching issues are related to the actual amount of memory
in the system (and how slow the device is).  Linux by default caches
writes up to 10% of system memory prior to starting write-back; on a 4G
system this is only 400M.  The easy way to check, though, is to try the
same operation after running:
echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytes
This will configure things to start write-back of the data much earlier.
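
(For reference, the current thresholds can be inspected first; note that
the _bytes and _ratio forms are mutually exclusive -- writing one zeroes
the other:

grep . /proc/sys/vm/dirty_background_{bytes,ratio} /proc/sys/vm/dirty_{bytes,ratio}

The dirty_ratio/dirty_bytes pair is the hard limit at which writers get
throttled; the background pair only controls when flushing starts.)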

On the other hand, if you are doing this between locations on an SSD,
that might also be part of the issue: write operations on SSDs take
much longer than reads, and most of them can't service a read while a
write operation is running (USB flash drives have similar issues, which
is part of why the caching issues are so evident with them).


Re: Copying large files eats all of the RAM

2013-11-29 Thread Andreas Mohr
Hi,

> My laptop has 4GB of RAM; before I issue the command around 1.5GB of
> memory is used, and when I issue the cp command around 3.7GB of memory
> is used. And the cp command takes a lot of time to copy.
> 
> I am not able to launch other applications (they take a lot of time),
> and even compiz freezes frequently. My laptop has Ubuntu installed on
> it.
> 
> Is this a problem with only my system, or is it a common problem with
> Linux?
> 
> Is there any way to stop a copy command from using all of my memory?

The purpose of a good operating system is *exactly* to pursue the
"*all* RAM is used, *all* the time" optimum to the highest degree.
Or would you want to have your power supply used to power useless
memory that's sitting idle?

It's probably a good idea to read up on the many sites which explain
important OS caching mechanisms.

That said, there may of course be situations where too much
competition/contention for resources occurs, or where calculation of the
kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
amounts of memory available for immediate use.
But that should be a matter of optimizing core kernel algorithms
even more than they already are.

And it's also known that for certain situations (e.g. trying to push very large
amounts of data over a lowly USB 1.1 USB stick connection),
Linux does (or did?) tend to have issues with that cached data piling up
in somewhat negative ways prior to getting flushed over the connection,
thereby causing system performance to degrade (I'm not in the know of
how much that still applies to very new Linux kernel versions).

But in your case that might simply be a problem of your particular
hardware (IRQ issues, improperly implemented drivers, ...).
Some benchmarking activities might be able to provide more details
(e.g. hdparm -tT, bonnie++ results, memory performance tests, etc.).
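
For example, something along these lines (a sketch; the device and
scratch file names are assumptions):

hdparm -tT /dev/sda   # cached vs. buffered sequential read speed (needs root)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync   # crude write test
rm /tmp/ddtest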

cat /proc/slabinfo
ought to provide an initial overview of which cache elements
manage to keep the largest memory areas in service.
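
For example, to rank the caches by approximate footprint (a sketch,
assuming the standard slabinfo 2.1 columns, where field 3 is num_objs
and field 4 is objsize):

awk 'NR > 2 { printf "%12.1f KiB  %s\n", $3 * $4 / 1024, $1 }' /proc/slabinfo | sort -rn | head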

HTH,

Andreas Mohr


Re: Copying large files eats all of the RAM

2013-11-29 Thread Andreas Mohr
Hi,

 My laptop has 4GB of ram, before I issue the command around 1.5GB of
 memory
 is used, when I issue the cp command around 3.7GB of memory is used.
 And the cp command takes a lot of time to copy.
 
 I am not able to launch other applications(take a lot of time) and
 even compiz freezes frequently. my laptop has Ubuntu installed on it.
 
 Is this the problem with only my system or it is a common problem with
 Linux?.
 
 Is there any way to stop any copy command to use all of my memory.

The purpose of a good operating system is *exactly* to optimize the
*all* RAM is used, *all* the time optimum to the highest degree.
Or would you want to have your power supply used to power useless
memory that's sitting idle?

It's probably a good idea to read up on the many sites which explain
important OS caching mechanisms.

That said, there may of course be situations where too much
competition/contention for resources occurs, or where calculation of the
kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
amounts of memory available for immediate use.
But that should be a matter of optimizing core kernel algorithms
even more than they already are.

And it's also known that for certain situations (e.g. trying to push very large
amounts of data over a lowly USB 1.1 USB stick connection),
Linux does (or did?) tend to have issues with that cached data piling up
in somewhat negative ways prior to getting flushed over the connection,
thereby causing system performance to degrade (I'm not in the know of
how much that still applies to very new Linux kernel versions).

But in your case that might simply be a problem of your particular
hardware (IRQ issues, improperly implemented drivers, ...).
Some benchmarking activities might be able to provide more details
(e.g. hdparm -tT, bonnie++ results, memory performance tests, etc.).

cat /proc/slabinfo
ought to provide an initial overview of which cache elements
manage to keep the largest memory areas in service.

HTH,

Andreas Mohr
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Copying large files eats all of the RAM

2013-11-29 Thread Austin S Hemmelgarn
On 11/29/2013 04:56 PM, Andreas Mohr wrote:
 Hi,
 
 My laptop has 4GB of ram, before I issue the command around 1.5GB of
 memory
 is used, when I issue the cp command around 3.7GB of memory is used.
 And the cp command takes a lot of time to copy.

 I am not able to launch other applications(take a lot of time) and
 even compiz freezes frequently. my laptop has Ubuntu installed on it.

 Is this the problem with only my system or it is a common problem with
 Linux?.

 Is there any way to stop any copy command to use all of my memory.
 
 The purpose of a good operating system is *exactly* to optimize the
 *all* RAM is used, *all* the time optimum to the highest degree.
 Or would you want to have your power supply used to power useless
 memory that's sitting idle?
 
 It's probably a good idea to read up on the many sites which explain
 important OS caching mechanisms.
 
 That said, there may of course be situations where too much
 competition/contention for resources occurs, or where calculation of the
 kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
 amounts of memory available for immediate use.
 But that should be a matter of optimizing core kernel algorithms
 even more than they already are.
 
 And it's also known that for certain situations (e.g. trying to push very 
 large
 amounts of data over a lowly USB 1.1 USB stick connection),
 Linux does (or did?) tend to have issues with that cached data piling up
 in somewhat negative ways prior to getting flushed over the connection,
 thereby causing system performance to degrade (I'm not in the know of
 how much that still applies to very new Linux kernel versions).
 
 But in your case that might simply be a problem of your particular
 hardware (IRQ issues, improperly implemented drivers, ...).
 Some benchmarking activities might be able to provide more details
 (e.g. hdparm -tT, bonnie++ results, memory performance tests, etc.).
 
 cat /proc/slabinfo
 ought to provide an initial overview of which cache elements
 manage to keep the largest memory areas in service.
 

I think it is more likely hardware related, the USB storage (and other
slow device) caching issues are related to the actual amount of memory
in the system (and how slow the device is).  Linux by default caches
writes up to 10% of system memory prior to starting write-back on a 4G
system this is only 400M,  the easy way to check though is to try the
same operation after running:
echo $((16*1024*1024))  /proc/sys/vm/dirty_background_bytes
this will configure things to start write-back of the data much earlier.

On the other hand, if you are doing this between locations on an SSD,
that might also be part of the issue, write operations on SSD's take
much longer than reads, and most of them can't do a read operation while
the write operation is running (USB flash drives have similar issues,
which is part of why the caching issues are so evident with them).
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Copying large files eats all of the RAM

2013-11-29 Thread venkata koppula
Thanks for you replies.

Yeah, I understand that we need to utilize the resources as much as we
can. At the same time user should not
feel that system is slow and user should never wait for the copying
operation should complete to launch another
application meanwhile.

If the user is a system administrator or a programmer, he/she
understands the problem and will try to tune the kernel
based on his/her requirements. As an application user(A desktop user
doesn't worry about the optimizations,
even doesn't know what it is:)) faster response is important.

As you said may be the problem is with my hardware. Mine is an acer
laptop with intel core I5 processor with
4GB RAM  and 500 GB HDD(SATA controller, didn't check who the hard
disk manufacturer is). Copying is within
the same hard disk.

I will check if there is any problem with my hardware, but meanwhile
this is the slabinfo (I took a part of slabinfo
what I think is important in this case).

Before the operation:

# name active_objs num_objs objsize objperslab pagesperslab
: tunables limit batchcount sharedfactor : slabdata
active_slabs num_slabs sharedavail

ext4_inode_cache   12469  12469880   378 : tunables00
  0 : slabdata337337  0
ext4_io_page5376   5376 16  2561 : tunables00
  0 : slabdata 21 21  0


inode_cache 9309   9309560   294 : tunables00
  0 : slabdata321321  0
dentry 65549  65919192   211 : tunables00
  0 : slabdata   3139   3139  0

buffer_head32526  32526104   391 : tunables00
  0 : slabdata834834  0


When the copy is going on:

ext4_inode_cache   11298  12543880   378 : tunables00
  0 : slabdata339339  0
ext4_io_page   29696  29696 16  2561 : tunables00
  0 : slabdata116116  0


inode_cache 9309   9309560   294 : tunables00
  0 : slabdata321321  0
dentry 24787  33054192   211 : tunables00
  0 : slabdata   1574   1574  0
buffer_head   553839 553839104   391 : tunables00
  0 : slabdata  14201  14201  0


I tried Austin S Hemmelgarn suggestion as well, results are as follows.

Same stats with the echo $((16*1024*1024)) 
/proc/sys/vm/dirty_background_bytes.

Before the copy operation:

ext4_inode_cache5788  10434880   378 : tunables00
  0 : slabdata282282  0
ext4_io_page9734  11264 16  2561 : tunables00
  0 : slabdata 44 44  0

inode_cache 8258   9251560   294 : tunables00
  0 : slabdata319319  0
dentry 17400  24843192   211 : tunables00
  0 : slabdata   1183   1183  0
buffer_head14425  30966104   391 : tunables00
  0 : slabdata794794  0


When the copy is going on:

ext4_inode_cache4965   9805880   378 : tunables00
  0 : slabdata265265  0
ext4_io_page   17708  19456 16  2561 : tunables00
  0 : slabdata 76 76  0

node_cache 8258   9251560   294 : tunables00
 0 : slabdata319319  0
dentry 17052  24738192   211 : tunables00
  0 : slabdata   1178   1178  0
buffer_head   458454 459810104   391 : tunables00
  0 : slabdata  11790  11790  0


I think with this tune system response is fast, no freezes and able to
launch Apps.

Thanks
Venkat

On Sat, Nov 30, 2013 at 5:12 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
 On 11/29/2013 04:56 PM, Andreas Mohr wrote:
 Hi,

 My laptop has 4GB of ram, before I issue the command around 1.5GB of
 memory
 is used, when I issue the cp command around 3.7GB of memory is used.
 And the cp command takes a lot of time to copy.

 I am not able to launch other applications(take a lot of time) and
 even compiz freezes frequently. my laptop has Ubuntu installed on it.

 Is this the problem with only my system or it is a common problem with
 Linux?.

 Is there any way to stop any copy command to use all of my memory.

 The purpose of a good operating system is *exactly* to optimize the
 *all* RAM is used, *all* the time optimum to the highest degree.
 Or would you want to have your power supply used to power useless
 memory that's sitting idle?

 It's probably a good idea to read up on the many sites which explain
 important OS caching mechanisms.

 That said, there may of course be situations where too much
 competition/contention for resources occurs, or where calculation of the
 kept-free-for-reuse memory amount is sub-optimal, leaving overly scarce
 amounts of memory available for immediate use.
 But that should be a matter of optimizing core kernel algorithms
 even more than they already are.

 And it's also known that for certain situations (e.g. trying to push very 
 large
 

Copying large files eats all of the RAM

2013-11-28 Thread venkata koppula
Hi,

I am using the cp command to copy a list of files, each around 1GB in
size (the copy is within the same hard disk).

My laptop has 4GB of RAM; before I issue the command around 1.5GB of
memory is used, and when I issue the cp command around 3.7GB of memory
is used. And the cp command takes a lot of time to copy.

I am not able to launch other applications (they take a lot of time),
and even compiz freezes frequently. My laptop has Ubuntu installed on
it.

Is this a problem with only my system, or is it a common problem with
Linux?

Is there any way to stop a copy command from using all of my memory?

Thanks
Venkat