I don't think that we have any caching options set. Below is our current config file. Each server has 12GB of RAM.

<Defaults>
        UnexpectedRequests 50
        EventLogging none
        EnableTracing no
        LogStamp thread
        BMIModules bmi_tcp
        FlowModules flowproto_multiqueue
        PerfUpdateInterval 1000
        ServerJobBMITimeoutSecs 30
        ServerJobFlowTimeoutSecs 30
        ClientJobBMITimeoutSecs 300
        ClientJobFlowTimeoutSecs 300
        ClientRetryLimit 5
        ClientRetryDelayMilliSecs 2000
        PrecreateBatchSize 0,32,512,32,32,32,0
        PrecreateLowThreshold 0,16,256,16,16,16,0
</Defaults>

<Aliases>
        Alias virtual1 tcp://172.16.47.1:4334
        Alias virtual2 tcp://172.16.47.2:4335
        Alias virtual3 tcp://172.16.47.3:4336
</Aliases>

<Filesystem>
        Name pvfs2-fs
        ID 623134090
        RootHandle 1048576
        FileStuffing yes
        <MetaHandleRanges>
                Range virtual1 3-1537228672809129302
                Range virtual2 1537228672809129303-3074457345618258602
                Range virtual3 3074457345618258603-4611686018427387902
        </MetaHandleRanges>
        <DataHandleRanges>
                Range virtual1 4611686018427387903-6148914691236517202
                Range virtual2 6148914691236517203-7686143364045646502
                Range virtual3 7686143364045646503-9223372036854775802
        </DataHandleRanges>
        <StorageHints>
                TroveSyncMeta yes
                TroveSyncData no
                TroveMethod alt-aio
        </StorageHints>
</Filesystem>

<ServerOptions>
        Server virtual1
        DataStorageSpace /panfs/stack1/data
        MetadataStorageSpace /panfs/stack1/meta
        LogFile /var/log/orangefs/orangefs-server-1.log
</ServerOptions>

<ServerOptions>
        Server virtual2
        DataStorageSpace /panfs/stack2/data
        MetadataStorageSpace /panfs/stack2/meta
        LogFile /var/log/orangefs/orangefs-server-2.log
</ServerOptions>

<ServerOptions>
        Server virtual3
        DataStorageSpace /panfs/stack3/data
        MetadataStorageSpace /panfs/stack3/meta
        LogFile /var/log/orangefs/orangefs-server-3.log
</ServerOptions>
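
Is the DB cache setting you mentioned something like the options below,
added to the StorageHints section? I'm guessing at the option names
(DBCacheSizeBytes / DBCacheType) and the value just assumes we could give
roughly 1GB of the 12GB on each server to the DB cache, so please correct
me if that isn't what you meant:

<StorageHints>
        TroveSyncMeta yes
        TroveSyncData no
        TroveMethod alt-aio
        # guessed cache options -- please confirm the names and sizing
        DBCacheSizeBytes 1073741824
        DBCacheType sys
</StorageHints>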

Thanks,
Mike

On 5/28/13 10:48 AM, Boyd Wilson wrote:
Yes, we did review it and thought that with your workload the overhead of
small I/O to PanFS would not impact you too much, and it seemed OK in your
testing. I personally did not think through the deletes when we talked
before. On a create, if the files are large enough, you don't see a large
percentage of time lost.

There is a cache setting you can add to the config file to increase the
DB cache, but that only helps on lookups (it does not help much with
deletes or creates). I think we have sent you a sample config that has it
set; if not, let us know how much memory each server node has and we can
help out.

If the above does not get the performance to acceptable levels, then yes,
DRBD on the volumes is how you would do HA.
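
Roughly, a DRBD resource for a metadata volume could look something like
the sketch below; the hostnames, disk devices, and ports are only
placeholders, not your actual layout:

resource orangefs_meta {
        protocol C;                  # synchronous replication for failover
        on ofs-node1 {               # placeholder hostname
                device    /dev/drbd0;
                disk      /dev/sdb1; # placeholder local disk for metadata
                address   172.16.47.1:7788;
                meta-disk internal;
        }
        on ofs-node2 {               # placeholder hostname
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   172.16.47.2:7788;
                meta-disk internal;
        }
}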

Again, sorry we missed this piece earlier.

-Boyd



On Tuesday, May 28, 2013, Michael Robbert wrote:

    That is disappointing to hear. I thought that we had cleared this
    design with Omnibond on a conference call to establish support. Can
    you explain more fully what the problem is? Panasas has told us in
    the past that their file system should perform well under small
    random access, which is what I thought metadata would be.
    We have local disk on the servers, but moving the metadata there
    would be a problem for our HA setup. Would you suggest DRBD if we
    need to go that route?

    Thanks,
    Mike


    On 5/25/13 4:26 AM, Boyd Wilson wrote:

        If the MD files are on PanFS, that is probably the issue; PanFS
        is not great for DB performance. Do you have local disks in the
        servers? If you can reconfigure and point there, performance
        should get better.

        -Boyd

        On Friday, May 24, 2013, Michael Robbert wrote:

             I believe so. Last week I was rsyncing some files to the
             file system. Yesterday I needed to delete a bunch of them,
             and that is when I noticed the problem. On closer inspection
             it looks like rsync still writes quickly with large files
             (100MB/s), but bonnie++ is quite a bit slower (20MB/s). So
             for now I'm just concerned with the MD performance.
             It is stored on the same PanFS systems as the data.

             Mike

             On 5/24/13 5:47 PM, Boyd Wilson wrote:

                 Michael,
                 You said slowdown; was it performing better before and
                 then slowed down?

                 Also, where are your MD files stored?

                 -b


                 On Fri, May 24, 2013 at 6:06 PM, Michael Robbert
                 <[email protected]> wrote:

                      We recently noticed a performance problem with our
                      OrangeFS server.

                      Here are the server stats:
                      3 servers, built identically with identical hardware

                      [root@orangefs02 ~]# /usr/sbin/pvfs2-server --version
                      2.8.7-orangefs (mode: aio-threaded)

                      [root@orangefs02 ~]# uname -r
                      2.6.18-308.16.1.el5.584g0000

                      4 core E5603 1.60GHz
                      12GB of RAM

                      OrangeFS is being served to clients using bmi_tcp
                      over DDR InfiniBand. Backend storage is PanFS with
                      2x10Gig connections on the servers.
                      Performance to the backend looks fine using bonnie++:
                      >100MB/sec write and ~250MB/s read to each stack,
                      and ~300 creates/sec.

                      The OrangeFS clients are running kernel version
                      2.6.18-238.19.1.el5.

                      The biggest problem I have right now is that deletes
                      are taking a long time: almost 1 sec per file.

                      [root@fatcompute-11-32 L_10_V0.2_eta0.3_wRes_______truncerr1e-11]# find N2/|wc -l
                      137
                      [root@fatcompute-11-32 L_10_V0.2_eta0.3_wRes_______truncerr1e-11]# time rm -rf N2

                      real    1m31.096s
                      user    0m0.000s
                      sys     0m0.015s

                      Similar results for file creates:

                      [root@fatcompute-11-32 ]# date;for i in `seq 1 50`;do touch file${i};done;date
                      Fri May 24 16:04:17 MDT 2013
                      Fri May 24 16:05:05 MDT 2013

                      What else do you need to know? Which debug flags?
                      What should we be looking at?
                      I don't see any load on the servers, and I've
                      restarted the server processes and rebooted the
                      server nodes.

                      Thanks for any pointers,
                      Mike Robbert
                      Colorado School of Mines



_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
