Hello everybody,

First, I would like to ask Edwin to tell the list which parameters of the long list he finally found helpful for his problem.
Second, does anybody know the corresponding incantations for Solaris?
(My current reading, http://docs.oracle.com/cd/E26576_01/doc.312/e24936/tuning-os.htm, is not very SunRay-specific.)
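
For what it's worth, a rough Solaris 11 counterpart to the Linux buffer tunables quoted below, assuming ipadm-managed protocol properties (property names from memory; please verify against your release before relying on them):

```shell
# Raise TCP buffer ceiling and defaults (Solaris 11, run as root).
# max_buf is the upper bound; send_buf/recv_buf are the defaults.
ipadm set-prop -p max_buf=16777216 tcp
ipadm set-prop -p send_buf=65536 tcp
ipadm set-prop -p recv_buf=87380 tcp

# UDP ceiling as well, since the SunRay display protocol runs over UDP.
ipadm set-prop -p max_buf=16777216 udp

# On Solaris 10 the legacy interface would be ndd, e.g.:
#   ndd -set /dev/tcp tcp_max_buf 16777216

# Verify the current setting:
ipadm show-prop -p max_buf tcp
```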

Regards,

Karl

On 10.07.14 20:50, Edwin Marqe wrote:
Thanks, Alejandro. I've been tuning some of those parameters, and it now works significantly faster. As Jim stated, the format/hardware limitations make it hard to speed things up much further, but I think it's acceptable now.

Thanks so much again guys!


2014-07-08 20:26 GMT+01:00 Alejandro Soler <[email protected]>:

    Hi Edwin

    I have these settings in my sysctl.conf; these parameters are
    suitable if you have a 1 Gb network. Apply them, run "sysctl -p",
    and see what happens.

    vm.swappiness = 10
    net.ipv4.ip_local_port_range = 10000 65000
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.rmem_default = 16777216
    net.core.wmem_default = 16777216
    net.core.optmem_max = 40960
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216

    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_sack = 1
    net.core.netdev_max_backlog = 50000
    net.ipv4.tcp_max_syn_backlog = 30000
    net.ipv4.tcp_max_tw_buckets = 2000000
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_fin_timeout = 10

    net.ipv4.tcp_slow_start_after_idle = 0
    net.ipv4.udp_rmem_min = 8192
    net.ipv4.udp_wmem_min = 8192
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv4.tcp_no_metrics_save = 1
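
A quick way to compare the live values before and after applying a list like the one above; a minimal sketch (reading is unprivileged, writing needs root):

```shell
# Show the current value of a few of the tunables before changing them.
for key in vm.swappiness net.core.rmem_max net.core.wmem_max \
           net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.netdev_max_backlog; do
    printf '%-30s %s\n' "$key" "$(sysctl -n "$key")"
done

# Load the new values from /etc/sysctl.conf (root required):
#   sudo sysctl -p

# Or set a single value temporarily without editing the file:
#   sudo sysctl -w net.core.rmem_max=16777216
```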

    Good luck.

    --

    ------------------------------------
    Alejandro Soler
    Systems Administrator
    Martina di Trento S.A.
    Tel.: (011) 4000-7243
    Av.Pedro de Mendoza 2555
    (C1169AAJ) Buenos Aires - Argentina


    ---------- Forwarded message ----------
    From: Edwin Marqe <[email protected]>
    To: SunRay-Users mailing list <[email protected]>
    Date: Tue, 8 Jul 2014 20:09:30 +0100
    Subject: Re: [SunRay-Users] utstoraged takes too long to mount a storage device
    Hi Alejandro,

    I have not customized any sysctl parameters yet, but I'd be glad
    if you could give me some hints on what you did to make it work
    well. I'll run some tests this week and see if I can improve
    things. If not, I'll probably just send a utwall message telling
    the users to be patient :-)

    Thank you again for your help!


    2014-07-04 19:17 GMT+01:00 Alejandro Soler <[email protected]>:

        Hi Edwin,

        I have one server with OEL 6.3 and SRS 5.4, and the mount
        times are acceptable.

        Did you change any sysctl.conf options (especially the
        network options)?


        --

        ------------------------------------
        Alejandro Soler
        Systems Administrator
        Martina di Trento S.A.
        Tel.: (011) 4000-7243
        Av.Pedro de Mendoza 2555
        (C1169AAJ) Buenos Aires - Argentina


        ---------- Forwarded message ----------
        From: Edwin Marqe <[email protected]>
        To: [email protected], SunRay-Users mailing list
        <[email protected]>
        Date: Fri, 4 Jul 2014 16:06:50 +0100
        Subject: Re: [SunRay-Users] utstoraged takes too long to
        mount a storage device
        Wow, thanks Jim for the accurate and detailed explanation!
        Indeed, this seems to have a lot to do with the NTFS/FUSE
        combination: when I try to mount those external USB drives,
        I can see a line in the log saying FUSE is being used. I
        also ran some tests with the iostat and iotop tools, and it
        seems that in the interval between the device being plugged
        in and it being opened as a folder on the client side, there
        is a 'nautilus --no-desktop <mountpoint>' process performing
        99% of the I/O operations on the server, so I guess this is
        the culprit.
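
For anyone else chasing this, a sketch of the kind of server-side I/O snooping described above, using the standard sysstat/iotop tools (per-process I/O accounting needs root):

```shell
# Device-level view: extended statistics every second, ten samples,
# while the stick is being plugged in and mounted.
iostat -x 1 10

# Per-process view: which process is generating the I/O.
# -o shows only processes actually doing I/O, -b is batch
# (log-friendly) mode, -n 10 limits the run to ten iterations.
sudo iotop -o -b -n 10
```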

        I also tried setting Alejandro's options, but it didn't seem
        to help.

        Any tips on why that is so expensive, and how to reduce it
        a little?

        Again thank you guys!


        2014-07-01 18:29 GMT+01:00 Jim Klimov <[email protected]>:

            One point in the question was the USB drive's filesystem.

            Typically one would see FAT (aka pcfs on Solaris hosts)
            or NTFS here. NTFS support on the Unixes is often
            provided by a userspace FUSE driver layer, which tends
            to be slow even with faster devices (i.e. HDDs). USB
            flash is rather slow as it is (raw), and tunneling USB
            over Ethernet does not help things much ;)
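
A quick way to confirm the FUSE path is involved, sketched for a Linux SRS host (the exact mount point and driver name may differ on your system):

```shell
# List mounted filesystems; a type of "fuseblk" or "ntfs-3g"
# indicates the userspace FUSE driver is in play.
mount | grep -i -e fuse -e ntfs

# Same information from the kernel's point of view:
grep -i fuse /proc/mounts

# Is an ntfs-3g userspace process running for the mount?
ps -ef | grep '[n]tfs-3g'
```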

            Of course it is valid to see ext* or ufs or whatever on
            flashes,
            but the other options are typically less portable and
            rarely used.

            Also, the "pcfs" in Solaris was historically also known
            to be sub-performant, so AFAIK both Solaris 11 and
            illumos had projects to rewrite it with a modern
            FAT-supporting driver. But with your OL hosts this part
            does not matter.

            I'd expect this to be a problem with NTFS/FUSE, primarily.
            See if "iostat" and similar server-side tools yield anything
            interesting regarding the disk traffic while it is being
            mounted?

            As a second option, it might be some networking speed
            mismatch resulting in pathological bandwidth, though
            this would probably also be visible in the interactive
            (graphics) part of your sessions. For example, some
            gigabit server switches with poor buffering settings or
            implementations were known to cause problems when
            downstream 10/100M DTUs were used; google for details
            if this rings a bell for your setup.

            HTH,
            //Jim Klimov


            On 2014-07-01 18:29, Edwin Marqe wrote:

                Hi Alejandro!

                I've indeed checked the log and it seems pretty
                normal apart from the speed issue. The device is
                detected a few seconds after plugging it in, but it
                only finishes mounting about 40 seconds later (after
                which the nautilus navigator with the mounted drive
                is shown automatically). I think this is a storage
                capacity problem: if I plug in any 1-2 GB USB drive,
                it gets mounted within a few seconds. I've also
                tried plugging in several >= 8 GB USB drives and it
                happens with all of them, so I guess it's "normal",
                but I'd still like to know whether there's a way to
                speed up the mounting.

                I'm using Oracle Linux 6.3 and SRS 5.4 here.

                Thanks for your help


                2014-07-01 13:39 GMT+01:00 Alejandro Soler
                <[email protected]>:


                    Hi Edwin

                    Not all devices and filesystems are well
                supported. Check the systems




--

            +============================================================+
            | Jim Klimov                                                 |
            | CTO, JSC COS&HT                                            |
            | +7-903-7705859 (cellular)   mailto:[email protected]       |
            | CC: [email protected], [email protected]               |
            +============================================================+
            | ()  ascii ribbon campaign - against html mail              |
            | /\                        - against microsoft attachments  |
            +============================================================+





            _______________________________________________
            SunRay-Users mailing list
            [email protected] <mailto:[email protected]>
            http://www.filibeto.org/mailman/listinfo/sunray-users












--
Karl Behler sen., Garching, Germany

