Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Angelo McComis
Coly - is it appropriate to say that the fix will not be backported to
the 2.6.16.60.x.x kernels for SLES10 SP3?


- Angelo

{via mobile device}

On Jun 2, 2010, at 8:43 AM, Coly Li wrote:

> On 06/02/2010 08:19 PM, Tao Ma wrote:
>> Add Mark Fasheh and Coly Li to cc since they know what
>> ocfs2 kernel version SUSE uses.
>> Angelo McComis wrote:
>>>> On 01/06/10 22:34, Sunil Mushran wrote:
>>>>> The kernel is old. We fixed this issue in 2.6.30. We have also
>>>>> backported it to the 1.4 production tree.
>>>>>
>>>>> The problem was that the inodes being created did not have locality
>>>>> leading to a directory having inodes that were spaced far apart from
>>>>> each other. The one place where it really affected performance was "rm".
>>>>
>>>> Thank you for the reply!
>>>
>>> I've seen this rm problem in the production version of SLES 10.3
>>> +updates.
>>>
>>> Is this fix available in the SLES Enterprise kernels yet? 10.3.x or
>>> 11.1.x (11.1 officially releases today, btw).
>
> I guess you meant SLES10 SP3 and SLES11 SP1. The SLES11 SP1 kernel is
> 2.6.32 based, so the fix Sunil mentioned (which went into 2.6.30) should
> already be in the SLES11 SP1 kernel.
>
> --
> Coly Li
> SuSE Labs



Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Coly Li


On 06/02/2010 08:19 PM, Tao Ma wrote:
> Add Mark Fasheh and Coly Li to cc since they know what
> ocfs2 kernel version SUSE uses.
> Angelo McComis wrote:
>>> On 01/06/10 22:34, Sunil Mushran wrote:
>>>> The kernel is old. We fixed this issue in 2.6.30. We have also
>>>> backported it to the 1.4 production tree.
>>>>
>>>> The problem was that the inodes being created did not have locality
>>>> leading to a directory having inodes that were spaced far apart from
>>>> each other. The one place where it really affected performance was "rm".
>>>
>>> Thank you for the reply!
>>
>> I've seen this rm problem in the production version of SLES 10.3
>> +updates.
>>
>> Is this fix available in the SLES Enterprise kernels yet? 10.3.x or
>> 11.1.x (11.1 officially releases today, btw).

I guess you meant SLES10 SP3 and SLES11 SP1. The SLES11 SP1 kernel is
2.6.32 based, so the fix Sunil mentioned (which went into 2.6.30) should
already be in the SLES11 SP1 kernel.
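
A quick sanity check on a node is the running kernel version (the exact
string varies by maintenance update):

uname -r    # on SLES11 SP1 this should report a 2.6.32-based kernel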

-- 
Coly Li
SuSE Labs



Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Tao Ma
Add Mark Fasheh and Coly Li to cc since they know what
ocfs2 kernel version SUSE uses.
Angelo McComis wrote:
>> On 01/06/10 22:34, Sunil Mushran wrote:
>>> The kernel is old. We fixed this issue in 2.6.30. We have also
>>> backported it to the 1.4 production tree.
>>>
>>> The problem was that the inodes being created did not have locality
>>> leading to a directory having inodes that were spaced far apart from
>>> each other. The one place where it really affected performance was "rm".
>>
>> Thank you for the reply!
>
> I've seen this rm problem in the production version of SLES 10.3 +updates.
>
> Is this fix available in the SLES Enterprise kernels yet? 10.3.x or
> 11.1.x (11.1 officially releases today, btw).
>
> Thanks,
> Angelo




Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Tao Ma
Proskurin Kirill wrote:
> On 02/06/10 13:26, Tao Ma wrote:
>>> Thank you for the reply!
>>> Is it enough to update the kernel, or do the tools need to be updated too?
>> If you only want to use the old formatted volume, updating the kernel is
>> enough. But if you want to use some of the new features we added, better
>> update ocfs2-tools also.
>
> If I understand right, you bind the tools release to the kernel release.
> For the 2.6.32 kernel, which tools are preferred?
No, we don't actually bind them. We do have a 1.2 vs. 1.4 split for the
enterprise kernels, but for the mainline kernel there is no such
limitation. So building the latest ocfs2-tools from source is fine, or if
you want to use a prebuilt rpm, use 1.4.* since it is close to the latest.
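
For instance, assuming stock ocfs2-tools binaries on a Debian-style
system like the one in the original post, a quick way to confirm what you
are running:

uname -r               # running kernel version
mkfs.ocfs2 -V          # should print the installed ocfs2-tools version
dpkg -l ocfs2-tools    # package version on Debian/Ubuntu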

Regards,
Tao




Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Proskurin Kirill
On 02/06/10 13:26, Tao Ma wrote:
>> Thank you for the reply!
>> Is it enough to update the kernel, or do the tools need to be updated too?
> If you only want to use the old formatted volume, updating the kernel is
> enough. But if you want to use some of the new features we added, better
> update ocfs2-tools also.

If I understand right, you bind the tools release to the kernel release.
For the 2.6.32 kernel, which tools are preferred?

-- 
Best regards,
Proskurin Kirill



Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Angelo McComis
> On 01/06/10 22:34, Sunil Mushran wrote:
>> The kernel is old. We fixed this issue in 2.6.30. We have also
>> backported it to the 1.4 production tree.
>>
>> The problem was that the inodes being created did not have locality
>> leading to a directory having inodes that were spaced far apart from
>> each other. The one place where it really affected performance was "rm".
>
> Thank you for the reply!

I've seen this rm problem in the production version of SLES 10.3 +updates.

Is this fix available in the SLES Enterprise kernels yet? 10.3.x or 11.1.x
(11.1 officially releases today, btw).

Thanks,
Angelo

Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Proskurin Kirill
On 01/06/10 22:34, Sunil Mushran wrote:
> The kernel is old. We fixed this issue in 2.6.30. We have also backported
> it to the 1.4 production tree.
>
> The problem was that the inodes being created did not have locality
> leading to a directory having inodes that were spaced far apart from
> each other. The one place where it really affected performance was "rm".

Thank you for the reply!
Is it enough to update the kernel, or do the tools need to be updated too?

-- 
Best regards,
Proskurin Kirill



Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-02 Thread Tao Ma
Hi Proskurin,

On 06/02/2010 05:23 PM, Proskurin Kirill wrote:
> On 01/06/10 22:34, Sunil Mushran wrote:
>> The kernel is old. We fixed this issue in 2.6.30. We have also backported
>> it to the 1.4 production tree.
>>
>> The problem was that the inodes being created did not have locality
>> leading to a directory having inodes that were spaced far apart from
>> each other. The one place where it really affected performance was "rm".
>
> Thank you for the reply!
> Is it enough to update the kernel, or do the tools need to be updated too?
If you only want to use the old formatted volume, updating the kernel is
enough. But if you want to use some of the new features we added, better
update ocfs2-tools also.
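
To see which features a volume currently has enabled, the debugfs.ocfs2
invocation from the original post works; the feature lines below are the
ones from that stats dump:

debugfs.ocfs2 -R "stats" /dev/drbd0 | grep -i feature
#   Feature Compat: 1 BackupSuper
#   Feature Incompat: 16 Sparse
#   Feature RO compat: 1 Unwritten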

Regards,
Tao



Re: [Ocfs2-users] OCFS2 performance - disk random access time problem

2010-06-01 Thread Sunil Mushran
The kernel is old. We fixed this issue in 2.6.30. We have also backported
it to the 1.4 production tree.

The problem was that the inodes being created did not have locality
leading to a directory having inodes that were spaced far apart from
each other. The one place where it really affected performance was "rm".
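
A rough way to see this on a volume: on ocfs2, an inode number is the
block number where that inode lives, so sorting the inode numbers in a
single directory shows how far apart the inodes sit on disk. A sketch,
with a hypothetical maildir path:

ls -i /mnt/ocfs2/Maildir/cur | sort -n | head
# neighbouring inode numbers close together = good locality;
# numbers spread over millions of blocks = the seek-bound "rm" above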

On 05/30/2010 02:38 AM, Proskurin Kirill wrote:
> Hello.
>
> I plan to use OCFS2 + DRBD for an email server.
>
> Problem:
> I use "seeker" for testing:
> http://www.linuxinsight.com/how_fast_is_your_disk.html
>
> And I get this:
> Results: 65 seeks/second, 15.23 ms random access time
> Then I do an rm of many files and it falls to 10 seeks/second, and
> performance is terrible.
>
> What can I do to improve it? What's wrong?
>
> More info is below.
>
> What we have:
> Debian lenny 2.6.26-2-amd64
> ocfs2-tools 1.4.1-1
> drbd8-2.6.26-2-amd64  2:8.3.7-1~bpo50+1+2.6.26-21lenny4
> drbd8-source 2:8.3.7-1~bpo50+1
> drbd8-utils 2:8.3.7-1~bpo50+1
>
> debugfs.ocfs2 -R "stats" /dev/drbd0
>   Revision: 0.90
>   Mount Count: 0   Max Mount Count: 20
>   State: 0   Errors: 0
>   Check Interval: 0   Last Check: Fri Mar 26 11:54:41 2010
>   Creator OS: 0
>   Feature Compat: 1 BackupSuper
>   Feature Incompat: 16 Sparse
>   Tunefs Incomplete: 0 None
>   Feature RO compat: 1 Unwritten
>   Root Blknum: 17   System Dir Blknum: 18
>   First Cluster Group Blknum: 8
>   Block Size Bits: 12   Cluster Size Bits: 15
>   Max Node Slots: 2
>   Label: ocfs2_drbd0
>   UUID: 75507E99756D4AE986337FD09BC7B2E1
>   Cluster stack: classic o2cb
>   Inode: 2   Mode: 00   Generation: 140519344 (0x86027b0)
>   FS Generation: 140519344 (0x86027b0)
>   Type: Unknown   Attr: 0x0   Flags: Valid System Superblock
>   User: 0 (root)   Group: 0 (root)   Size: 0
>   Links: 0   Clusters: 27476584
>   ctime: 0x4baca081 -- Fri Mar 26 11:54:41 2010
>   atime: 0x0 -- Thu Jan  1 00:00:00 1970
>   mtime: 0x4baca081 -- Fri Mar 26 11:54:41 2010
>   dtime: 0x0 -- Thu Jan  1 00:00:00 1970
>   ctime_nsec: 0x00000000 -- 0
>   atime_nsec: 0x00000000 -- 0
>   mtime_nsec: 0x00000000 -- 0
>   Last Extblk: 0
>   Sub Alloc Slot: Global   Sub Alloc Bit: 65535
>
> /etc/init.d/o2cb status
> Driver for "configfs": Loaded
> Filesystem "configfs": Mounted
> Stack glue driver: Loaded
> Stack plugin "o2cb": Loaded
> Driver for "ocfs2_dlmfs": Loaded
> Filesystem "ocfs2_dlmfs": Mounted
> Checking O2CB cluster ocfs2: Online
> Heartbeat dead threshold = 31
> Network idle timeout: 15000
> Network keepalive delay: 2000
> Network reconnect delay: 2000
> Checking O2CB heartbeat: Active
>
> dmesg | grep -i ocfs2
> [   18.821010] OCFS2 Node Manager 1.5.0
> [   18.826429] OCFS2 DLM 1.5.0
> [   18.829955] ocfs2: Registered cluster interface o2cb
> [   18.849955] OCFS2 DLMFS 1.5.0
> [   18.849955] OCFS2 User DLM kernel interface loaded
> [  304.329154] OCFS2 1.5.0
>
> OCFS2 mount options:
> nodev,noauto,noatime,data=writeback
>
> Nodes are connected via a 1 Gbit/s LAN.
> HDD is a SATA Raptor, 10k RPM.
>
> Mail storage is 260 GB with 4 million files (maildir).
>
> cat /proc/meminfo
> MemTotal:        4058444 kB
> MemFree:         3014752 kB
> Buffers:          585544 kB
> Cached:            71036 kB
> SwapCached:            0 kB
> Active:           326432 kB
> Inactive:         553292 kB
> SwapTotal:       1951856 kB
> SwapFree:        1951164 kB
> Dirty:               172 kB
> Writeback:             0 kB
> AnonPages:        223136 kB
> Mapped:            15820 kB
> Slab:             100492 kB
> SReclaimable:      89588 kB
> SUnreclaim:        10904 kB
> PageTables:         4632 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:     3981076 kB
> Committed_AS:     457124 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed:      279904 kB
> VmallocChunk: 34359458171 kB
> HugePages_Total:       0
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB




[Ocfs2-users] OCFS2 performance - disk random access time problem

2010-05-30 Thread Proskurin Kirill
Hello.

I plan to use OCFS2 + DRBD for an email server.

Problem:
I use "seeker" for testing:
http://www.linuxinsight.com/how_fast_is_your_disk.html
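
For reference, seeker is a small C program from the page above; a typical
build and run (the device name here is just an example) looks like:

gcc -O2 seeker.c -o seeker
./seeker /dev/sda    # point it at the block device you want to measure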

And I get this:
Results: 65 seeks/second, 15.23 ms random access time
Then I do an rm of many files and it falls to 10 seeks/second, and
performance is terrible.

What can I do to improve it? What's wrong?

More info is below.

What we have:
Debian lenny 2.6.26-2-amd64
ocfs2-tools 1.4.1-1
drbd8-2.6.26-2-amd64  2:8.3.7-1~bpo50+1+2.6.26-21lenny4
drbd8-source 2:8.3.7-1~bpo50+1
drbd8-utils 2:8.3.7-1~bpo50+1

debugfs.ocfs2 -R "stats" /dev/drbd0
 Revision: 0.90
 Mount Count: 0   Max Mount Count: 20
 State: 0   Errors: 0
 Check Interval: 0   Last Check: Fri Mar 26 11:54:41 2010
 Creator OS: 0
 Feature Compat: 1 BackupSuper
 Feature Incompat: 16 Sparse
 Tunefs Incomplete: 0 None
 Feature RO compat: 1 Unwritten
 Root Blknum: 17   System Dir Blknum: 18
 First Cluster Group Blknum: 8
 Block Size Bits: 12   Cluster Size Bits: 15
 Max Node Slots: 2
 Label: ocfs2_drbd0
 UUID: 75507E99756D4AE986337FD09BC7B2E1
 Cluster stack: classic o2cb
 Inode: 2   Mode: 00   Generation: 140519344 (0x86027b0)
 FS Generation: 140519344 (0x86027b0)
 Type: Unknown   Attr: 0x0   Flags: Valid System Superblock
 User: 0 (root)   Group: 0 (root)   Size: 0
 Links: 0   Clusters: 27476584
 ctime: 0x4baca081 -- Fri Mar 26 11:54:41 2010
 atime: 0x0 -- Thu Jan  1 00:00:00 1970
 mtime: 0x4baca081 -- Fri Mar 26 11:54:41 2010
 dtime: 0x0 -- Thu Jan  1 00:00:00 1970
 ctime_nsec: 0x00000000 -- 0
 atime_nsec: 0x00000000 -- 0
 mtime_nsec: 0x00000000 -- 0
 Last Extblk: 0
 Sub Alloc Slot: Global   Sub Alloc Bit: 65535

/etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
  Network idle timeout: 15000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Active

dmesg | grep -i ocfs2
[   18.821010] OCFS2 Node Manager 1.5.0
[   18.826429] OCFS2 DLM 1.5.0
[   18.829955] ocfs2: Registered cluster interface o2cb
[   18.849955] OCFS2 DLMFS 1.5.0
[   18.849955] OCFS2 User DLM kernel interface loaded
[  304.329154] OCFS2 1.5.0

OCFS2 mount options:
nodev,noauto,noatime,data=writeback
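
A matching /etc/fstab entry would look something like this (the device is
the one from this setup; the mount point is an example):

/dev/drbd0  /var/mail  ocfs2  nodev,noauto,noatime,data=writeback  0  0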

Nodes are connected via a 1 Gbit/s LAN.
HDD is a SATA Raptor, 10k RPM.

Mail storage is 260 GB with 4 million files (maildir).

cat /proc/meminfo
MemTotal:        4058444 kB
MemFree:         3014752 kB
Buffers:          585544 kB
Cached:            71036 kB
SwapCached:            0 kB
Active:           326432 kB
Inactive:         553292 kB
SwapTotal:       1951856 kB
SwapFree:        1951164 kB
Dirty:               172 kB
Writeback:             0 kB
AnonPages:        223136 kB
Mapped:            15820 kB
Slab:             100492 kB
SReclaimable:      89588 kB
SUnreclaim:        10904 kB
PageTables:         4632 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     3981076 kB
Committed_AS:     457124 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      279904 kB
VmallocChunk: 34359458171 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

-- 
Best regards,
Proskurin Kirill


___
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users