Re: [OpenIndiana-discuss] OI Crash

2013-01-18 Thread dormitionsk...@hotmail.com
Well, I don't think it's stressing the hardware all that much, when you 
consider our oldest server is 11 1/2 years old, with all its original hardware. 
 Our newest server is somewhere around 7 years old, without a hardware failure 
for at least five years.

I admit I'm not much of a system admin.  I've been forced into that role 
because there's nobody else here to do it. Our hosting provider situation is a 
similarly less than ideal situation, which we're working on.  Bosses kind of 
tend to get in the way of some of these things, too...

I have no idea about SPARC, or any of the real big server environments.  I 
can't even fathom working in an environment with thousands of servers, or why 
they would even need that many.  

And if you have the time and expertise to work through and find the problem so 
it can be resolved, that's obviously better.  But this archaic way of "dealing" 
with the problem actually works -- if a person can do it.  Like I said, it may 
not be practical for everyone's situation, though.  It's certainly not for big, 
professional admins.  For smaller environments, I believe it can be a 
reasonable option, though.

It's not being superstitious, or playing the victim.  It's simply trying to take 
the easy way out, and if it takes care of the problem, then you don't have to 
deal with it any more.  Or at least not right now.  If it doesn't, well, then, 
you have to fight your way through it.  

I think setting up periodic reboots is better as a preventive maintenance 
measure, than as a way of addressing a known issue.  But if nothing else, it 
might just buy you some time until you can work on it more at your convenience.

Oh, and I didn't make this reboot procedure up.  From what I understand, it 
used to be fairly common practice.  I figured some of the professionals would 
take exception to it.  But sometimes, older things can still be better than 
new.  

Unless, of course, you like fighting and beating your head against the wall 
trying to figure out why your system hangs, or whatever, instead of having a 
stable network and spending your time on less pressing and / or more mundane 
things... 

[]:-)

Cheers.

fp



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] OI Crash

2013-01-18 Thread Doug Hughes

On 1/18/2013 7:53 PM, dormitionsk...@hotmail.com wrote:

On Jan 17, 2013, at 8:47 PM, Reginald Beardsley wrote:


As far as I'm concerned, problems like this are a bottomless abyss.  Which is 
why I'm still putting up w/ my OI box hanging.  It's annoying, but not 
critical.  It's also why critical stuff still runs on Solaris 10.

Intermittent failures are the worst time sink there is. There is no assurance 
that devoting all your time to the problem will fix it even at very high skill 
levels w/ a full complement of the very best tools.

If you're getting crash dumps there is hope of finding the cause, so that's a 
big improvement.

Good luck,
Reg

BTW Back in the 80's there was a VAX operator in Texas who went out to his 
truck, got a .357 and shot the computer.  His employer was not happy.  But I 
can certainly understand how the operator felt.



 From 1992 to 1998, I worked at the Denver Museum of Natural History -- now 
the Denver Museum of Nature and Science.  We had two or three DEC VAXes and an 
AIX machine there.  It was their policy that once a week we had to power each 
of the servers all the way down to clear out any memory problems -- or 
whatever -- as preventive maintenance.

Since then, I've always had the habit of setting up a cron job to reboot my 
servers once a week.  It's not as good as a full power down, but it's better 
than nothing.  And in all these years, I've never had to deal with intermittent 
problems like this, except for a few brief times when I used Red Hat Linux ten 
plus years ago.  (I've tried most of Red Hat's versions since 6.2, and RHEL 6 
is the first version I've found that runs decent enough on our hardware, and 
that I'm happy enough with, for us to use.)

So, if you can do it, you might want to try setting up a cron job to reboot your 
server once a week -- or every night.  I reboot our LTSP thin client server 
every night just because it gets hit with running lots of desktop applications 
that I think give it a greater potential for these kinds of memory problems.
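For anyone who wants to try this, a minimal sketch of such a weekly-reboot cron
job on illumos/OI follows; the day, time, grace period and message are arbitrary
examples, not a recommendation:

```shell
# Root crontab entry (edit with "crontab -e" as root):
# reboot every Sunday at 03:30 via a clean shutdown through init,
# giving logged-in users a 60-second warning first.
30 3 * * 0 /usr/sbin/shutdown -y -i6 -g60 "weekly maintenance reboot"
```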

On the other hand, we have all of our websites hosted on one of our 
parishioner's servers -- and he doesn't reboot his machines periodically like I 
do -- and about every two months, I have to call him up and tell him something 
is wrong.  And he goes and powers down his system -- sometimes he has to even 
unplug it -- and then turn it back on, and everything works again.

I know there are system admins that just love to brag about how great their 
up-times are on their machines -- but this might just save you a lot of time 
and grief.

Of course, if you're running a real high-volume server, this might not be 
workable for you; but it only takes 2-5 minutes or so to reboot... Perhaps in 
the middle of the night you might be able to spare it being down that short 
time?

Just a friendly suggestion.

Shared experience.

I know others may tell you that that's no longer necessary in these more modern 
times; but my experience has been otherwise.

I hope it helps.

+Peter, hieromonk



Haven't we passed the days of mystical sysadminning without understanding 
and characterization? Keeping up tradition for tradition's sake, without 
understanding the underlying reasons, really doesn't do anybody a favor. 
If there are memory leaks, we possess the technology to find them. My 
organization has thousands of machines that run jobs sometimes for 
months at a time. If I had to reboot servers once a week, my users would 
be at the doors with pitchforks. The only time we take downtime is when 
there are reasons to do so, including OS updates, hardware failures, and 
user software run amok. They can run a very long time like this.


Not that memory leaks never happen. Of course they do, but they 
eventually get found and fixed, or the program causing them passes into 
obsolescence. Always.
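To make the "we possess the technology" point concrete: on illumos/Solaris one
standard leak-hunting workflow is libumem's debugging allocator plus mdb's
::findleaks dcmd. A rough sketch (the daemon name here is a placeholder, not
anything from this thread):

```shell
# Start the suspect process with libumem recording allocation metadata.
UMEM_DEBUG=default LD_PRELOAD=libumem.so /opt/local/bin/suspect_daemon &

# Later, ask mdb to report leaked buffers with their allocation stacks.
echo '::findleaks' | mdb -p $(pgrep -f suspect_daemon)
```

::findleaks only sees user-space leaks in that one process; kernel leaks need a
crash dump and the same dcmd against the dump instead.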


I encourage discovery rather than superstition, and diagnosis rather 
than repetition.


Be a knight, not a victim!




Re: [OpenIndiana-discuss] 10 GB Networking - any known gotchas/tips - stuff to avoid?

2013-01-18 Thread Ian Collins

Lou Picciano wrote:

You appear to have tagged onto an existing thread, not a good idea!


We're looking at building out some of our infrastructure in 10 GB ethernet 
land, then reevaluating our changing storage needs in context of best practices 
for leveraging these speeds. IE, iSCSI? Dedicated Storage arrays, etc.

First: Thanks to those of you who've updated our wiki re 10 GB ethernet 
adapters: http://wiki.openindiana.org/oi/Ethernet+Networking Is it generally 
safe to assume a 'chipset-level' compatibility, whether on adapter or mobo? 
(btw, many of those links to Intel are no longer working...)

Anyone have any specific experience running 10 GB on the newer SuperMicro 
boards? Yes, I realize we're generally safe with the Intel interfaces but, hey; 
doesn't hurt to ask, right?


I have been using an X9DRH-7TF and another host with an Intel X540 dual 
10G card (running SmartOS) for a while and they work very well.  If your 
storage pool can keep up, you can achieve excellent performance.


--
Ian.




Re: [OpenIndiana-discuss] OI Crash

2013-01-18 Thread Sašo Kiselkov
On 01/19/2013 01:53 AM, dormitionsk...@hotmail.com wrote:
> From 1992 to 1998, I worked at the Denver Museum of Natural History -- now 
> the Denver Museum of Nature and Science.  We had two or three DEC VAXes and 
> an AIX machine there.  It was their policy that once a week we had to power 
> each of the servers all the way down to clear out any memory problems -- or 
> whatever -- as preventive maintenance.  
> 
> Since then, I've always had the habit of setting up a cron job to reboot my 
> servers once a week.  It's not as good as a full power down, but it's better 
> than nothing.  And in all these years, I've never had to deal with 
> intermittent problems like this, except for a few brief times when I used Red 
> Hat Linux ten plus years ago.  (I've tried most of Red Hat's versions since 
> 6.2, and RHEL 6 is the first version I've found that runs decent enough on 
> our hardware, and that I'm happy enough with, for us to use.)

Nice anecdote, but I find this kind of policy very strange. Sure,
regular maintenance downtime windows are important, but doing so just to
preempt any problems in the OS seems strange... not to mention that a
power cycle needlessly stresses the electromechanical components of the
server (HDD motors, fans, etc.).

Also, I don't know about VAX, but boot on a typical SPARC machine can
easily take upwards of 10 minutes (or more, depending on the level of
checks you enabled). Sun E10ks were famous for booting over half an hour
(checking all of their complicated hardware took a lot of time).

> So, if you can do it, you might want to try setting up a cron job to reboot 
> your server once a week -- or every night.  I reboot our LTSP thin client 
> server every night just because it gets hit with running lots of desktop 
> applications that I think give it a greater potential for these kinds of 
> memory problems.  

How about just killing these apps (e.g. forced logout of users) rather
than rebooting the whole machine? Do you suspect memory problems in the
base OS services?
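The lighter-weight alternative suggested here -- killing the apps rather than
the box -- could look roughly like this (the username is a placeholder):

```shell
# Force-logout one user's session instead of rebooting the whole machine:
pkill -TERM -u someuser    # politely ask all of that user's processes to exit
sleep 10
pkill -KILL -u someuser    # then force any stragglers
```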

> On the other hand, we have all of our websites hosted on one of our 
> parishioner's servers -- and he doesn't reboot his machines periodically like 
> I do -- and about every two months, I have to call him up and tell him 
> something is wrong.

I suggest switching hosting providers, as your server admin apparently
has next to no idea of what he's doing. I've been running web servers
for years without any trouble. Only the most drastic changes should
warrant a reboot (e.g. kernel update).

> And he goes and powers down his system -- sometimes he has to even
> unplug it -- and then turn it back on, and everything works again.

What's up with this Windows 95-era powercycling voodoo? You are
obviously dealing with a serious issue and ignoring it.

> I know there are system admins that just love to brag about how great their 
> up-times are on their machines -- but this might just save you a lot of time 
> and grief.

Frequent rebooting and powercycling might have worked for you, but lots
of applications don't allow for that. Don't mistake an admin's pride of
a job well done for bragging.

> Of course, if you're running a real high-volume server, this might not be 
> workable for you; but it only takes 2-5 minutes or so to reboot... Perhaps in 
> the middle of the night you might be able to spare it being down that short 
> time?

This is just plastering over the problem - I've seen plenty of
"solutions" of this kind where the restart frequency of a service slowly
had to increase until it was no longer workable. In general, I'd
recommend doing what you say only as the absolute last option.

> Just a friendly suggestion.
> Shared experience.
> 
> I know others may tell you that that's no longer necessary in these more 
> modern times; but my experience has been otherwise.
> 
> I hope it helps.

When you do encounter these kinds of problems, try to capture a crash
dump, file an Illumos issue, and provide as much info on the problem as
possible to help debug it (that's what I recommended to David; he has
yet to respond). Nothing will improve if users keep issues to
themselves. I dealt with a serious (show-stopper) network load
problem in Illumos a while back, and after a little googling, mailing and
testing I managed to resolve it. Sticking one's head in the sand isn't a
good avenue of progress.
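As a reference for the capture step, a minimal illumos/OI sketch of making sure
a crash dump actually gets saved (run as root; paths and defaults vary by
install):

```shell
dumpadm               # show current dump device and savecore directory
dumpadm -y            # ensure savecore runs automatically on reboot

# For a hang rather than a panic, take a live dump of the running
# system (requires a dedicated dump device):
savecore -L
```

The resulting vmdump.N under the savecore directory is what you would attach to
(or reference in) the Illumos issue.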

Anyway, just my two cents..

Cheers,
--
Saso



Re: [OpenIndiana-discuss] OI Crash

2013-01-18 Thread dormitionsk...@hotmail.com
On Jan 17, 2013, at 8:47 PM, Reginald Beardsley wrote:

> As far as I'm concerned, problems like this are a bottomless abyss.  Which is 
> why I'm still putting up w/ my OI box hanging.  It's annoying, but not 
> critical.  It's also why critical stuff still runs on Solaris 10.
> 
> Intermittent failures are the worst time sink there is. There is no assurance 
> that devoting all your time to the problem will fix it even at very high 
> skill levels w/ a full complement of the very best tools.
> 
> If you're getting crash dumps there is hope of finding the cause, so that's a 
> big improvement.
> 
> Good luck,
> Reg
> 
> BTW Back in the 80's there was a VAX operator in Texas who went out to his 
> truck, got a .357 and shot the computer.  His employer was not happy.  But I 
> can certainly understand how the operator felt.


From 1992 to 1998, I worked at the Denver Museum of Natural History -- now the 
Denver Museum of Nature and Science.  We had two or three DEC VAXes and an AIX 
machine there.  It was their policy that once a week we had to power each of 
the servers all the way down to clear out any memory problems -- or whatever 
-- as preventive maintenance.  

Since then, I've always had the habit of setting up a cron job to reboot my 
servers once a week.  It's not as good as a full power down, but it's better 
than nothing.  And in all these years, I've never had to deal with intermittent 
problems like this, except for a few brief times when I used Red Hat Linux ten 
plus years ago.  (I've tried most of Red Hat's versions since 6.2, and RHEL 6 
is the first version I've found that runs decent enough on our hardware, and 
that I'm happy enough with, for us to use.)

So, if you can do it, you might want to try setting up a cron job to reboot your 
server once a week -- or every night.  I reboot our LTSP thin client server 
every night just because it gets hit with running lots of desktop applications 
that I think give it a greater potential for these kinds of memory problems.  

On the other hand, we have all of our websites hosted on one of our 
parishioner's servers -- and he doesn't reboot his machines periodically like I 
do -- and about every two months, I have to call him up and tell him something 
is wrong.  And he goes and powers down his system -- sometimes he has to even 
unplug it -- and then turn it back on, and everything works again.

I know there are system admins that just love to brag about how great their 
up-times are on their machines -- but this might just save you a lot of time 
and grief.

Of course, if you're running a real high-volume server, this might not be 
workable for you; but it only takes 2-5 minutes or so to reboot... Perhaps in 
the middle of the night you might be able to spare it being down that short 
time?

Just a friendly suggestion.

Shared experience.

I know others may tell you that that's no longer necessary in these more modern 
times; but my experience has been otherwise.

I hope it helps.

+Peter, hieromonk





[OpenIndiana-discuss] 10 GB Networking - any known gotchas/tips - stuff to avoid?

2013-01-18 Thread Lou Picciano
We're looking at building out some of our infrastructure in 10 GB ethernet 
land, then reevaluating our changing storage needs in context of best practices 
for leveraging these speeds. IE, iSCSI? Dedicated Storage arrays, etc.

First: Thanks to those of you who've updated our wiki re 10 GB ethernet 
adapters: http://wiki.openindiana.org/oi/Ethernet+Networking Is it generally 
safe to assume a 'chipset-level' compatibility, whether on adapter or mobo? 
(btw, many of those links to Intel are no longer working...)

Anyone have any specific experience running 10 GB on the newer SuperMicro 
boards? Yes, I realize we're generally safe with the Intel interfaces but, hey; 
doesn't hurt to ask, right?

Thanks, All.

Lou Picciano


Re: [OpenIndiana-discuss] Dell Precision T3600 with Openindiana 151a7 ?

2013-01-18 Thread Udo Grabowski (IMK)

On 11/10/2012 10:25, Udo Grabowski (IMK) wrote:

Hello, vendor wants us to buy Dell Precision T3600 with Xeon E5-1650 16 GB
ECC. processor (C600 series Chipset, Raid Card H310 PCIe, Intel 82579 Gbe
controller). Does anybody know if that works with OI151a7 ? Don't want to
return 3 large boxes ..



So thanks for all suggestions so far.
Boxes arrived, and, of course, we had HUGE PROBLEMS 

First, the BIOS had to be upgraded to version A07, otherwise
there's no chance to get it up at all (seems to be a homebrew
BIOS...).

Next problem: The 151a7 full live DVD does not boot, instead failed
with 'console login services cannot be run'. Booting with
the text-only live cd solved that. Maybe something wrong on
that DVD ?

Next problem: The @!#$... PERC310 card has no driver, so we
ripped that out and connected to one internal sata port.
Found that the imr_sas driver available from elsewhere supports
the pci id 1000,73 needed here, so we will try later to use
that one without reflashing the card (using the non-RAID BIOS
switch). mr_sas (even not the newer release available elsewhere)
does NOT work with that card (maybe after reflashing to a
different firmware/pci id, if flashable at all).

Next problem: After loading the KVM virtualization module, the
system freezes. A long search for the cause finally led us to
illumos bug #1723, which was just fixed 4 weeks ago. So 'set
apix_enable=0' in /etc/system fixed that problem; to boot with the
DVD you have to switch off virtualization support in the BIOS
(both VT and VT-d).

Next problem: Our older Syskonnect 9E81 pciex LC optical network
Gigabit HBA is not seen at all by that machine (not in prtconf on
OI, not in lspci in Ubuntu, switched slots, machines, cards, no
hope), that's probably because that BIOS relies on the newer AMT
mechanism to recognize that card. Called Dell, they may give an
answer to that problem the next days. In parallel, we ordered a
couple of Intel 82572GI based optical EXPI9400PFBLK cards, hope
they will work with OI and that system.

Next problem: Independent of the non-recognized network card
inside or not, we get these spurious, cryptic warnings spitting
into the boot process: 'pci_lcap_locate: unexpected pci header type:6d'
where the number is quite random ranging from 4 to ff. No
idea where that comes from, but it seems to have no visible impact
so far.

Next problem: ddu tells us that a couple of components still have
no driver:
Intel Corporation C600/X79 series chipset MEI Controller #1
NEC Corporation uPD720200 USB 3.0 Host Controller
Don't know if the first one harms us. USB 3 is known to be
missing from illumos.

Next problem: DVDs cannot be burned anymore (dumped 3 different
vendors, tried Brasero DVD creator as well as cdrecord, DVD is
not readable after burn). This seems to be a more general problem
not connected to the machine since ~151a7, I remember I burned an
a7 DVD on a5 (but had problems there too with non-matching md5sums).
A boot with such a DVD hangs somewhere near the beginning with
SCSI errors (maybe wrong burn parameters are used while creating).
This is a severe bug, cause yet unknown. There seem to be a few
reports about that problem already.

So far this was one of the harder setups we ever had. The
essential lessons to learn for OI are these:

1. The live DVD should work without having to fall back to the
   text DVD, and it should work for newer machines that trigger
   the apix driver. That means the live /etc/system should either
   have 'set apix_enable=0', or OI should quickly import the
   illumos bug fix #1723. This workaround (as well as the BIOS
   vt/vt-d workaround) should be published in the 'Release Notes',
   since that affects almost all new machines (>= Sandy Bridge / Ivy Bridge).

2. If imr_sas works, that should be included in OI, since the
   options for available workstations that really work are getting
   smaller and smaller, and the DELL is one of the few that are leftover.

3. OI should concentrate on importing working new drivers for
   new hardware; a lot of stuff is already out there in the
   illumos cloud and elsewhere and just needs to be packaged.
   Otherwise, we will quickly lose users, since no hardware will
   be left where OI will run without problems or hassles like we have
   now. This all reminds me of the early years of Linux... USB 3 should
   be of maximum priority now; e.g., shortly before Christmas it was
   really difficult to find any external HDDs that do NOT rely on USB 3.
   It seems that it will (or already does) completely dominate the market
   for external small-scale storage in less than a few months.

4. User visible problems (like the DVD write bug, missing USB 3) should
   get more priority than esoteric enhancements. Given the small maintainer/
   developer base, we should concentrate to have OI in good shape,
   that will attract users and, in the longer run, more developers.

--
Dr. Udo Grabowski    Inst. f. Meteorology a. Climate Research



Re: [OpenIndiana-discuss] Are there any known problems with OI and jumbo frames MTU 9000

2013-01-18 Thread Jim Klimov

On 2013-01-18 09:22, Flo wrote:

Why do you use a MTU of 9000? Only for the higher performance or for the
lower CPU load?


Basically, because we can ;-\
I don't remember really benchmarking before and after; it was just a
recommendation we followed when I was young and tuned that network:
bulk data and database IO which stayed inside the LAN were published
on both a legacy net and on a private jumbo vlan for hosts with shiny
new gigabit interfaces.


Hi Jim

thank you for your advice!

I don't know if the Broadcom NICs are powerful enough.
I tested the throughput with iperf with MTU 1500 and MTU 9000.
With 1500 I got 945 Mbit and with 9000 I got 985 Mbit.

With the newest Solaris version, there is no need to edit the driver
config file. Is this also possible with OIa7 or is it possible to
activate the new config file without a reboot?


FWIW, on a Solaris 10 Sun V240 I see this config tweak in bge.conf
(back then Broadcoms were the brand of choice for Sun):

# diff bge.conf-*
157,170d156
< # see http://www.opensolaris.org/jive/thread.jspa?messageID=48569
< # http://www.sunmanagers.org/pipermail/summaries/2003-December/004776.html
< # http://blogs.sun.com/shantnu/entry/opensolaris_project_brussels_unified_nic
<
< #default_mtu=9000;
< #default-mtu=9000;
< #default_mtu=8000;
< default_mtu=1500;
<
< # interface bge0
< #name="bge" parent="/pci@1f,70" unit-address="2" instance=0 default_mtu=1500;
< #name="bge" parent="/pci@1f,70" unit-address="2" instance=188000 default_mtu=9000;
<
< # interface bge3
< name="bge" parent="/pci@1d,70" unit-address="2,1" default_mtu=9000;

and in /etc/hostname.bge3 it spells "mtu 9000" after the node name.
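On OpenIndiana the MTU can also be set as a dladm link property, which -- for
drivers that expose the property -- sidesteps the driver .conf edit entirely. A
sketch, assuming link bge3 and a driver with mtu property support (the link may
need to be unplumbed before the property can change):

```shell
ifconfig bge3 unplumb                # release the link if it is in use
dladm set-linkprop -p mtu=9000 bge3  # persistent across reboots
dladm show-linkprop -p mtu bge3      # verify the effective value
ifconfig bge3 plumb up               # bring the interface back
```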



Re: [OpenIndiana-discuss] numpy for python 3.2

2013-01-18 Thread Milan Jurik

Hi,

yes, it would be possible, but it requires time which I do not have now 
:-(


Please open a ticket; I will try to look at it later.

Best regards,

Milan

On 17.01.2013 23:09, Kostas Oikonomou wrote:

Would it be possible to add numpy to the lib/vendor-packages of the
SFE python3.2 package?

Kostas






Re: [OpenIndiana-discuss] Are there any known problems with OI and jumbo frames MTU 9000

2013-01-18 Thread Flo

On 01/15/2013 01:00 AM, Jason Matthews wrote:
> I use this configuration with no issues. For an MTU of 9000 with VLANs on
> the host, make sure the switch MTU is set to 9014 or greater.
>
> Thanks,
> j.

Hi Jason,
do you also use Link Aggregation?
Why must I use 9014 on the switch? I thought that when the MTU is 9000, 
it would behave like MTU 1500.


Why do you use a MTU of 9000? Only for the higher performance or for the 
lower CPU load?



On 01/14/2013 04:43 PM, Jim Klimov wrote:

Likely yes, there should be a gain - although some experts on the list
have recently stated, that with modern hardware the difference should
be negligible. In the past it could have been greater due to slower
NIC processors, I suppose.

 From my experience with Jumbo on e1000 interfaces, there are flags
you should set in the driver config file (and reapply after each OS
upgrade which overwrites this file):

# diff /kernel/drv/e1000g.conf-orig /kernel/drv/e1000g.conf-jumbo
52c52,53
< MaxFrameSize=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
---
> #MaxFrameSize=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
> MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;


Or to test without LSO (dunno why we have this, maybe hunted for bugs?)

# diff /kernel/drv/e1000g.conf-orig /kernel/drv/e1000g.conf-jumbo-noLSO
52c52,53
< MaxFrameSize=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
---
> #MaxFrameSize=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
> MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;
84a86,88
>
> # Disable LSO in e1000g.conf by adding one line:
> lso_enable = 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;


On my OI boxes I see that the default flag value is still zero...


After you set these flags to 3 and reboot, you can use the increased
frame sizes in "ifconfig ... mtu" clauses.

Also take care to verify that all the NICs and switches and OSes on
each side do indeed support your chosen frame size (i.e. 9000 bytes),
because there were many different maximum "increased frame sizes"
supported over time. By using no more than the lowest common size
supported by all your gear, you'd avoid packet fragmentation and/or
errors and benefit from Jumbo.

Good luck,
//Jim


Hi Jim

thank you for your advice!

I don't know if the Broadcom NICs are powerful enough.
I tested the throughput with iperf with MTU 1500 and MTU 9000.
With 1500 I got 945 Mbit and with 9000 I got 985 Mbit.

With the newest Solaris version, there is no need to edit the driver 
config file. Is this also possible with OIa7 or is it possible to 
activate the new config file without a reboot?
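For reference, a throughput comparison like the one above can be run with
iperf2 roughly as follows (the host name is a placeholder):

```shell
# On the receiving machine:
iperf -s

# On the sending machine: a 60-second TCP test; -m reports the MSS
# actually negotiated, which shows whether jumbo frames are in effect
# end to end rather than only on the local link.
iperf -c server-host -t 60 -m
```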

