Hello!
The FreeBSD NFS client seems to have an ancient problem
(http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/57696) with readdir(2).
Is there any workaround at the system level? Running lockd does not help,
and attempting to increase the readdir read size does not help either.
Does 8-CURRENT NFS client
Hi all
I have a problem with my «first» server.
This server has one and only one purpose: NFS server.
Recently (last week) I replaced the old server with a new one (HP ProLiant
ML 350 G4); the data is on an MSA1000 RAID array attached by Fibre Channel
to my server.
The old server running FreeBSD
On Jul 26, 2005, at 11:38 AM, Dmitriy Kirhlarov wrote:
The problem is in the netmask. When I try 192.168.2.1, everything works fine.
How can I fix the problem?
You might have to restart portmap, and/or feed it the -h option:
-h      Specify specific IP addresses to bind to for UDP requests. This
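For example (a minimal sketch, assuming the stock rc.conf knobs; the
addresses are the ones appearing in this thread, and -h may be repeated
once per address, alias included):

    portmap_enable="YES"
    portmap_flags="-h 192.168.2.1 -h 192.168.2.3"

then kill and restart portmap so it rebinds.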
Hi, list
I can't mount a shared resource over NFS when using an alias on the network
interface:
$ sudo mount_nfs 192.168.2.3:/usr/local/cvsroot /var/cvsbackup
[udp] clh.cluster:/usr/local/cvsroot: NFSPROC_NULL: RPC: Timed out
My config:
$ cat /etc/exports
/usr -alldirs -mapall=nobody
$ ifconfig rl0
rl0:
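If the goal is to admit clients from the alias's subnet as well, a hedged
sketch of the exports(5) syntax (the network and mask are assumptions based
on the addresses above):

    /usr -alldirs -mapall=nobody -network 192.168.2.0 -mask 255.255.255.0

Remember that mountd only rereads /etc/exports on a HUP:

    kill -HUP `cat /var/run/mountd.pid`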
Polling should not produce any improvement over interrupts for EM0.
The EM0 card will aggregate 8-14+ packets per interrupt, or more,
which works out to only around 8000 interrupts/sec. I've got a ton of these
cards installed.
# mount_nfs -a 4 dhcp61:/home /mnt
# dd if=/mnt/x of=/d
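A fuller version of the same read test might look like this (a sketch:
dhcp61, /home and /mnt/x come from the commands above; the read size and
block size are assumptions):

    mount_nfs -a 4 -r 32768 dhcp61:/home /mnt   # -a readahead, -r read size
    dd if=/mnt/x of=/dev/null bs=64k            # sequential read; dd prints the rate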
"David G. Lawrence" wrote:
>
> > >>tests. With the re driver, no change except placing a 100BT setup with
> > >>no packet loss to a gigE setup (both linksys switches) will cause
> > >>serious packet loss at 20Mbps data rates. I have discovered the only
> > >>way to get good performance with no p
> > > ifnet and netisr queues. You could also try setting net.isr.enable=1
> > > to enable direct dispatch, which in the in-bound direction would reduce
> > > the number of context switches and queueing. It sounds like the device
> > > driver has a limit of 256 receive and transmit descriptor
> >>tests. With the re driver, no change except placing a 100BT setup with
> >>no packet loss to a gigE setup (both linksys switches) will cause
> >>serious packet loss at 20Mbps data rates. I have discovered the only
> >>way to get good performance with no packet loss was to
> >>
> >>1) Remove i
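The direct dispatch mentioned in the quote above is just a sysctl on 5.x
(assuming a kernel with the netisr direct-dispatch code):

    sysctl net.isr.enable=1   # deliver inbound packets in the interrupt
                              # thread instead of queueing to netisr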
Robert Watson wrote:
On Sun, 21 Nov 2004, Sean McNeil wrote:
I have to disagree. Packet loss is likely according to some of my
tests. With the re driver, no change except placing a 100BT setup with
no packet loss to a gigE setup (both linksys switches) will cause
serious packet loss at 20Mbps dat
:Increasing the interrupt moderation frequency worked on the re driver,
:but it only made it marginally better. Even without moderation,
:however, I could lose packets without m_defrag. I suspect that there is
:something in the higher level layers that is causing the packet loss. I
:have no expl
Hi John-Mark,
On Mon, 2004-11-22 at 13:31 -0800, John-Mark Gurney wrote:
> Sean McNeil wrote this message on Mon, Nov 22, 2004 at 12:14 -0800:
> > On Mon, 2004-11-22 at 11:34 +, Robert Watson wrote:
> > > On Sun, 21 Nov 2004, Sean McNeil wrote:
> > >
> > > > I have to disagree. Packet loss i
Sean McNeil wrote this message on Mon, Nov 22, 2004 at 12:14 -0800:
> On Mon, 2004-11-22 at 11:34 +, Robert Watson wrote:
> > On Sun, 21 Nov 2004, Sean McNeil wrote:
> >
> > > I have to disagree. Packet loss is likely according to some of my
> > > tests. With the re driver, no change except
On Mon, 2004-11-22 at 11:34 +, Robert Watson wrote:
> On Sun, 21 Nov 2004, Sean McNeil wrote:
>
> > I have to disagree. Packet loss is likely according to some of my
> > tests. With the re driver, no change except placing a 100BT setup with
> > no packet loss to a gigE setup (both linksys sw
I did a FastEthernet throughput test with Smartbits and SmartApp.
It's simpler than a TCP throughput measurement. :)
This Smartbits unit has some FastEthernet ports, but no GbE ports.
The router consists of a single Xeon 2.4GHz with HTT enabled and two
on-board em interfaces. The kernel is 5.3-RE
On Sun, 21 Nov 2004, Sean McNeil wrote:
> I have to disagree. Packet loss is likely according to some of my
> tests. With the re driver, no change except placing a 100BT setup with
> no packet loss to a gigE setup (both linksys switches) will cause
> serious packet loss at 20Mbps data rates. I
On Sun, 2004-11-21 at 20:42 -0800, Matthew Dillon wrote:
> : Yes, I knew that adjusting TCP window size is important to use up a link.
> : However, I wanted to show that adjusting the parameters of Interrupt
> : Moderation affects network performance.
> :
> : And I think packet loss was caused by enab
: Yes, I knew that adjusting TCP window size is important to use up a link.
: However, I wanted to show that adjusting the parameters of Interrupt
: Moderation affects network performance.
:
: And I think packet loss was caused by enabling Interrupt Moderation.
: The mechanism of the packet loss in this
Thank you, Matt.
>
> Very interesting, but the only reason you get lower results is simply
> because the TCP window is not big enough. That's it.
>
Yes, I knew that adjusting TCP window size is important to use up a link.
However, I wanted to show that adjusting the parameters of Interrup
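For reference, growing the window on a stock 5.x box is a matter of
sysctls (the names are the standard ones; the sizes below are arbitrary
assumptions, not values from this thread):

    sysctl kern.ipc.maxsockbuf=1048576     # raise the socket-buffer ceiling first
    sysctl net.inet.tcp.sendspace=262144   # default send window
    sysctl net.inet.tcp.recvspace=262144   # default receive window
    sysctl net.inet.tcp.rfc1323=1          # window scaling (on by default)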
: I did a simple benchmark at several settings.
:
: I used two boxes, each a single Xeon 2.4GHz with on-board em.
: I measured TCP throughput with iperf.
:
: These results show that the throughput of TCP increased if Interrupt
: Moderation is turned OFF. At least, adjusting these parameters affected
On Sun, 2004-11-21 at 21:27 +0900, Shunsuke SHINOMIYA wrote:
> Jeremie, thank you for your comment.
>
> I did a simple benchmark at several settings.
>
> I used two boxes, each a single Xeon 2.4GHz with on-board em.
> I measured TCP throughput with iperf.
>
> These results show that the thro
Jeremie, thank you for your comment.
I did a simple benchmark at several settings.
I used two boxes, each a single Xeon 2.4GHz with on-board em.
I measured TCP throughput with iperf.
These results show that the throughput of TCP increased if Interrupt
Moderation is turned OFF. At least, adj
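For anyone reproducing this, the iperf invocation would look something
like the following (a sketch: "testhost" is a placeholder, and the window
size is an assumption, not what was used above):

    iperf -s -w 256k                  # on the receiver
    iperf -c testhost -w 256k -t 30   # on the sender: 30-second TCP test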
> I changed cables and couldn't reproduce those bad results, so I changed cables
> back but also cannot reproduce them, especially the ggate write, formerly
> with 2,6MB/s now performs at 15MB/s, but I haven't done any polling tests
> anymore, just interrupt driven, since Matt explained that em do
In message:
"Daniel Eriksson" <[EMAIL PROTECTED]> writes:
: Finally, my question. What would you recommend:
: 1) Run with ACPI disabled and debug.mpsafenet=1 and hope that the mix of
: giant-safe and giant-locked (em and ahc) doesn't trigger any bugs. This is
: what I currently do.
:
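For reference, both knobs in option 1 are loader tunables, so it
corresponds to something like this in /boot/loader.conf (a sketch of
standard 5.x tunables):

    debug.mpsafenet="1"        # run the network stack without Giant
    hint.acpi.0.disabled="1"   # boot with ACPI off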
On Friday, 19 November 2004 13:56, Robert Watson wrote:
> On Fri, 19 Nov 2004, Emanuel Strobl wrote:
> > On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> > > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > > > I really love 5.3 in many ways but here're some unbelievable transfer
[...]
On Fri, 19 Nov 2004, Emanuel Strobl wrote:
> On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > > I really love 5.3 in many ways but here're some unbelievable transfer
>
> First, thanks a lot to all of you paying attention to my probl
On Thursday, 18 November 2004 13:27, Robert Watson wrote:
> On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > I really love 5.3 in many ways but here're some unbelievable transfer
First, thanks a lot to all of you paying attention to my problem again.
I'll use this as a cumulative answer to the m
> Hi, Jeremie, how is this?
> To disable Interrupt Moderation, sysctl hw.em?.int_throttle_valve=0.
Great, I would have called it "int_throttle_ceil", but that's a detail
and my opinion is totally subjective.
> However, because this patch was written just now, it is not fully tested.
I'll give it
Hi, Jeremie, how is this?
To disable Interrupt Moderation, sysctl hw.em?.int_throttle_valve=0.
However, because this patch was written just now, it is not fully tested.
> > if you think your computer has sufficient performance, please try to
> > disable or adjust the parameters of Interrupt Moderat
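With the patch above applied, per-interface use would presumably look
like this (hedged: this sysctl exists only with the posted patch, not in
stock em(4), and em0 is an assumed interface name):

    sysctl hw.em0.int_throttle_valve=0   # 0 = no interrupt throttling on em0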
> if you think your computer has sufficient performance, please try to
> disable or adjust the parameters of em's Interrupt Moderation.
Nice! It would be even better if there were a boot-time sysctl to
configure the behaviour of this feature, or something like the ifconfig
link0 option of the fxp(4) d
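For comparison, the fxp(4) mechanism referred to above is toggled with a
link flag (a sketch; see fxp(4) for the exact semantics of link0, which
controls the interrupt-mitigation microcode):

    ifconfig fxp0 link0    # load the mitigation microcode
    ifconfig fxp0 -link0   # back to one interrupt per packet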
Hi list,
if you think your computer has sufficient performance, please try to
disable or adjust the parameters of em's Interrupt Moderation.
In the case of my router (Xeon 2.4GHz and two on-board em interfaces),
it improves packet forwarding performance. I think the
interrupt delay by Interr
On Thu, 18 Nov 2004, Daniel Eriksson wrote:
> I have a Tyan Tiger MPX board (dual AthlonMP) that has two 64bit PCI
> slots. I have an Adaptec 29160 and a dual port Intel Pro/1000 MT
> plugged into those slots.
>
> As can be seen from the vmstat -i output below, em1 shares ithread with
> ahc0.
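Spotting the sharing is straightforward with the standard tools:

    vmstat -i          # cumulative interrupt counts per device/IRQ
    systat -vmstat 1   # live view of interrupt rates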
M. Warner Losh wrote:
> Also, make sure that you aren't sharing interrupts between
> GIANT-LOCKED and non-giant-locked cards. This might be exposing bugs
> in the network layer that debug.mpsafenet=0 might correct. Just
> noticed that our setup here has that setup, so I'll be looking into
> that
In message: <[EMAIL PROTECTED]>
Robert Watson <[EMAIL PROTECTED]> writes:
: (1) I'd first off check that there wasn't a serious interrupt problem on
: the box, which is often triggered by ACPI problems. Get the box to be
: as idle as possible, and then use vmstat -i or systat -vm
Andreas Braukmann said:
> --On Wednesday, 17 November 2004 20:48 -0500, Mike Jakubik
> <[EMAIL PROTECTED]> wrote:
>
>> I have two PCs connected together, using the em card. One is FreeBSD 6
>> from Fri Nov 5, the other is Windows XP. I am using the default mtu of
>> 1500, no polling, and I get
On Thu, 18 Nov 2004, Wilko Bulte wrote:
> On Thu, Nov 18, 2004 at 12:27:44PM +, Robert Watson wrote..
> >
> > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> >
> > > I really love 5.3 in many ways but here're some unbelievable transfer
> > > rates, after I went out and bought a pair of Intel G
On Thu, Nov 18, 2004 at 12:27:44PM +, Robert Watson wrote..
>
> On Wed, 17 Nov 2004, Emanuel Strobl wrote:
>
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet
> > Cards to solve my performance prob
On Thu, 18 Nov 2004, Pawel Jakub Dawidek wrote:
> On Wed, Nov 17, 2004 at 11:57:41PM +0100, Emanuel Strobl wrote:
> +> Dear best guys,
> +>
> +> I really love 5.3 in many ways but here're some unbelievable transfer
> +> rates, after I went out and bought a pair of Intel GigaBit Ethernet Card
On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> I really love 5.3 in many ways but here're some unbelievable transfer
> rates, after I went out and bought a pair of Intel GigaBit Ethernet
> Cards to solve my performance problem (*laugh*):
I think the first thing you want to do is to try and determ
On Wed, Nov 17, 2004 at 11:57:41PM +0100, Emanuel Strobl wrote:
+> Dear best guys,
+>
+> I really love 5.3 in many ways but here're some unbelievable transfer rates,
+> after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
+> my performance problem (*laugh*):
[...]
I done
--On Wednesday, 17 November 2004 20:48 -0500, Mike Jakubik <[EMAIL
PROTECTED]> wrote:
I have two PCs connected together, using the em card. One is FreeBSD 6
from Fri Nov 5, the other is Windows XP. I am using the default mtu of
1500, no polling, and I get ~21MB/s transfer rates via ftp. Im s
Emanuel Strobl said:
~ 15MB/s
> ...and with 1m blocksize:
> test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
> ^C61+0 records in
> 60+0 records out
> 62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
> ->
On Thursday, 18 November 2004 01:01, Chuck Swiger wrote:
> Emanuel Strobl wrote:
> [ ... ]
>
> > Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
> > Desktop adapter MT) connected directly without a switch/hub
>
> If filesharing via NFS is your primary goal, it's reason
ping only tests latency, *not* throughput, so it is not really a good test.
- aW
On Wed, Nov 17, 2004 at 07:01:24PM -0500, Chuck Swiger wrote:
Emanuel Strobl wrote:
[ ... ]
>Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
>D
Emanuel Strobl wrote:
[ ... ]
Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
Desktop adapter MT) connected directly without a switch/hub
If filesharing via NFS is your primary goal, it's reasonable to test that;
however, it would be easier to make sense of your results b
On Thursday, 18 November 2004 00:33, Scott Long wrote:
> Emanuel Strobl wrote:
> > Dear best guys,
> >
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet Cards
> > to solve my performance problem (*la
Emanuel Strobl wrote:
Dear best guys,
I really love 5.3 in many ways but here're some unbelievable transfer rates,
after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel GigaBit Ethern
On Thursday, 18 November 2004 00:17, Sean McNeil wrote:
> On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
> > Dear best guys,
> >
> > I really love 5.3 in many ways but here're some unbelievable transfer
> > rates, after I went out and bought a pair of Intel GigaBit Ethernet Cards
> >
On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
> Dear best guys,
>
> I really love 5.3 in many ways but here're some unbelievable transfer rates,
> after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
> my performance problem (*laugh*):
>
> (In short, see *** be
Dear best guys,
I really love 5.3 in many ways but here're some unbelievable transfer rates,
after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve
my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel GigaBit Ethernet cards (82547EI, 3
On Sun, 28 Dec 2003, Rob wrote:
> On the client (147.47.254.184) I do not get the proper response on:
> $ showmount -e 147.46.44.183
> RPC: Port mapper failure
> showmount: can't do exports rpc
Make sure portmap (or rpcbind if the server is 5.x) is allowed to talk to
the client
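If tcpwrappers is in the way, the relevant /etc/hosts.allow entries would
look something like this (a sketch using the thread's client address; the
daemon name is rpcbind instead of portmap on 5.x):

    portmap : 147.47.254.184 : allow
    portmap : ALL : deny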
On Mon, Dec 29, 2003 at 02:21:56PM +0900, Rob wrote:
> This is what I get.
>
> The NFS server is 147.46.44.183 with gateway 147.46.44.1
> The NFS client is 147.47.254.184 with gateway 147.47.254.1
>
> On the NFS server:
> $ sockstat | grep portmap
> daemon portmap 796933 udp4
Igor Pokrovsky wrote:
On Sun, Dec 28, 2003 at 09:20:41PM +0900, Rob wrote:
Hi,
I am running two FreeBSD-Stable PCs.
One is an NFS server and the other an NFS client. Everything used to
work fine until recently. I suspect that either the new kernel is the
problem (although there are no complaints on
:I'm seeing problems with NFS serving over UDP in a 4.5-PRERELEASE
:system. The problem seems to be in readdir or perhaps stat. I can't ls
:a directory that's mounted via UDP. I can read files. The problem
:goes away with TCP mounts. I had problems with both FBSD 4.4 and Solaris
:2.8 clients.
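Since the problem disappears with TCP mounts, forcing TCP is the obvious
workaround; mount_nfs takes -T for that ("server" and the paths below are
placeholders):

    mount_nfs -T server:/export /mnt   # same mount, over TCP instead of UDP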
ok...
buildworld builds just fine when /usr/src is on the local
disk. However, if I move that same src over to my NFS server
(in this case, an Origin 2000 with IRIX 6.5.4), it does not.
I have the following entry in my /etc/fstab:
fs2.servers.nat:/export/maint/freebsd/4.2-STABLE/src /usr/src nfs rw,nfsv3 2 2
w