problem cross-compiling 9.1

2012-10-03 Thread Daniel Braniss
reposting to hackers, maybe better luck here?

When using an amd64 host to 'make TARGET_ARCH=i386 buildworld', it seems that
the wrong cpp is used, at least when building ioctl.c via mkioctls in
usr.bin/kdump with WITHOUT_CPP set (*). This used to work with
previous releases.

...
=== usr.bin/kdump (depend)
env CPP=cpp  sh /r+d/stable/9/usr.bin/kdump/mkioctls 
/home/obj/rnd/alix/i386.i386/r+d/stable/9/tmp/usr/include > ioctl.c
...
cc -O2 -pipe  -I/r+d/stable/9/usr.bin/kdump/../ktrace 
-I/r+d/stable/9/usr.bin/kdump -I/r+d/stable/9/usr.bin/kdump/../.. -DNDEBUG 
-std=gnu99 -fstack-protector -Wno-pointer-sign -c ioctl.c
ioctl.c: In function 'ioctlname':
ioctl.c:1216: error: 'MPTIO_RAID_ACTION32' undeclared (first use in this 
function)
ioctl.c:1216: error: (Each undeclared identifier is reported only once
ioctl.c:1216: error: for each function it appears in.)
ioctl.c:1292: error: 'MPTIO_READ_EXT_CFG_HEADER32' undeclared (first use in 
this function)
ioctl.c:1632: error: 'MPTIO_READ_EXT_CFG_PAGE32' undeclared (first use in this 
function)
ioctl.c:1772: error: 'CCISS_PASSTHRU32' undeclared (first use in this function)
ioctl.c:2010: error: 'IPMICTL_RECEIVE_MSG_TRUNC_32' undeclared (first use in 
this function)
ioctl.c:2082: error: 'IPMICTL_RECEIVE_MSG_32' undeclared (first use in this 
function)
ioctl.c:2300: error: 'MPTIO_READ_CFG_PAGE32' undeclared (first use in this 
function)
ioctl.c:2870: error: 'MPTIO_READ_CFG_HEADER32' undeclared (first use in this 
function)
ioctl.c:2878: error: 'IPMICTL_SEND_COMMAND_32' undeclared (first use in this 
function)
ioctl.c:2938: error: 'MPTIO_WRITE_CFG_PAGE32' undeclared (first use in this 
function)
*** [ioctl.o] Error code 1

*: I'm compiling for an embedded system, and hence I want only the minimum stuff.

At the moment the workaround is to remove WITHOUT_CPP, but it got me
worried.
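
(For reference, a minimal sketch of the build knob involved — this assumes the
usual /etc/src.conf mechanism; the second line is just a hypothetical example
of another space-saving option:)

   # /etc/src.conf
   #WITHOUT_CPP=yes      # commenting this out is the workaround above
   WITHOUT_GAMES=yes     # hypothetical example of a similar knob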

any fix?

cheers,
danny




Zfs import issue

2012-10-03 Thread Ram Chander
Hi,

 I am importing a zfs snapshot to freebsd-9 from another host running
freebsd-9.  While the import is in progress, it locks the filesystem: df hangs
and the filesystem is unusable. Once the import completes, the filesystem
is back to normal and read/write works fine.  The same doesn't happen on
Solaris/OpenIndiana.

# uname -an
FreeBSD hostname 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30
UTC 2012 r...@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
amd64

Zfs ver: 28


Any inputs would be helpful. Is there any way to overcome this freeze?
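
(For reference, a sketch of the kind of transfer presumably involved — I am
assuming "importing a snapshot" here means a zfs send/receive between the two
hosts; the pool and dataset names below are placeholders:)

   # run on the sending freebsd-9 host; all names are hypothetical
   zfs send tank/data@monday | ssh recvhost zfs receive -F tank/data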


Regards,
Ram


Re: NFS server bottlenecks

2012-10-03 Thread Rick Macklem
Garrett Wollman wrote:
 [Adding freebsd-fs@ to the Cc list, which I neglected the first time
 around...]
 
 On Tue, 2 Oct 2012 08:28:29 -0400 (EDT), Rick Macklem
 rmack...@uoguelph.ca said:
 
  I can't remember (I am early retired now;-) if I mentioned this
  patch before:
  http://people.freebsd.org/~rmacklem/drc.patch
  It adds tunables vfs.nfsd.tcphighwater and vfs.nfsd.udphighwater that
  can be twiddled so that the drc is trimmed less frequently. By making
  these values larger, the trim will only happen once/sec until the
  high water mark is reached, instead of on every RPC. The tradeoff is
  that the DRC will become larger, but given memory sizes these days,
  that may be fine for you.
 
 It will be a while before I have another server that isn't in
 production (it's on my deployment plan, but getting the production
 servers going is taking first priority).
 
 The approaches that I was going to look at:
 
 Simplest: only do the cache trim once every N requests (for some
 reasonable value of N, e.g., 1000). Maybe keep track of the number of
 entries in each hash bucket and ignore those buckets that only have
 one entry even if it is stale.
 
Well, the patch I have does it when it gets too big. This made sense to
me, since the cache is trimmed to keep it from getting too large. It also
does the trim at least once/sec, so that really stale entries are removed.

 Simple: just use a separate mutex for each list that a cache entry
 is on, rather than a global lock for everything. This would reduce
 the mutex contention, but I'm not sure how significantly since I
 don't have the means to measure it yet.
 
Well, since the cache trimming is removing entries from the lists, I don't
see how that can be done with a global lock for list updates? A mutex in
each element could be used for changes (not insertion/removal) to an individual
element. However, the current code manipulates the lists and makes minimal
changes to the individual elements, so I'm not sure if a mutex in each element
would be useful or not, but it wouldn't help for the trimming case, imho.

I modified the patch slightly, so it doesn't bother to acquire the mutex when
it is checking if it should trim now. I think this results in a slight risk that
the test will use an out of date cached copy of one of the global vars, but
since the code isn't modifying them, I don't think it matters. This modified
patch is attached and is also here:
   http://people.freebsd.org/~rmacklem/drc2.patch

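(To illustrate the idea behind this check-before-locking change — a sketch of
the gating logic only, with made-up names, not the actual drc2.patch:)

   #include <sys/param.h>
   #include <sys/lock.h>
   #include <sys/mutex.h>
   #include <sys/time.h>

   /* All nfsrc_* names below are illustrative stand-ins. */
   static int nfsrc_cachesize, nfsrc_tcphighwater;
   static struct mtx nfsrc_mutex;          /* the global cache mutex */
   static void nfsrc_trimcache(void);      /* the existing trimming code */

   /*
    * Sketch: trim when the cache exceeds the highwater mark, and
    * otherwise at most once per second.  The unlocked reads of the
    * globals are racy, but a stale value merely delays one trim.
    */
   static void
   nfsrc_maybe_trim(void)
   {
           static time_t lasttrim;

           if (nfsrc_cachesize <= nfsrc_tcphighwater &&
               lasttrim == time_uptime)
                   return;                 /* no mutex taken on this path */
           mtx_lock(&nfsrc_mutex);
           lasttrim = time_uptime;
           nfsrc_trimcache();
           mtx_unlock(&nfsrc_mutex);
   }
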
 Moderately complicated: figure out if a different synchronization type
 can safely be used (e.g., rmlock instead of mutex) and do so.
 
 More complicated: move all cache trimming to a separate thread and
 just have the rest of the code wake it up when the cache is getting
 too big (or just once a second since that's easy to implement). Maybe
 just move all cache processing to a separate thread.
 
Only doing it once/sec would result in a very large cache when bursts of
traffic arrive. The above patch does it when it is too big or at least
once/sec.

I'm not sure I see why doing it as a separate thread will improve things.
There are N nfsd threads already (N can be bumped up to 256 if you wish)
and having a bunch more cache trimming threads would just increase
contention, wouldn't it? The only negative effect I can think of w.r.t.
having the nfsd threads doing it would be a (I believe negligible) increase
in RPC response times (the time the nfsd thread spends trimming the cache).
As noted, I think this time would be negligible compared to disk I/O and network
transit times in the total RPC response time?

Isilon did use separate threads (I never saw their code, so I am going by what
they told me), but it sounded to me like they were trimming the cache too
aggressively to be effective for TCP mounts. (ie. It sounded to me like they
had broken the algorithm to achieve better perf.)

Remember that the DRC is weird, in that it is a cache to improve correctness at
the expense of overhead. It never improves performance. On the other hand, turn
it off or throw away entries too aggressively and data corruption, due to
retries of non-idempotent operations, can be the outcome.

Good luck with whatever you choose, rick

 It's pretty clear from the profile that the cache mutex is heavily
 contended, so anything that reduces the length of time it's held is
 probably a win.
 
 That URL again, for the benefit of people on freebsd-fs who didn't see
 it on hackers, is:
 
  http://people.csail.mit.edu/wollman/nfs-server.unhalted-core-cycles.png.
 
 (This graph is slightly modified from my previous post as I removed
 some spurious edges to make the formatting look better. Still looking
 for a way to get a profile that includes all kernel modules with the
 kernel.)
 
 -GAWollman

Re: ule+smp: small optimization for turnstile priority lending

2012-10-03 Thread Andriy Gapon
on 20/09/2012 16:14 Attilio Rao said the following:
 On 9/20/12, Andriy Gapon a...@freebsd.org wrote:
[snip]
 The patch works well as far as I can tell.  Thank you!
 There is one warning with full witness enabled but it appears to be harmless
 (so far):
 
 Andriy,
 thanks a lot for your testing and reports you made so far.
 Unfortunately I'm going off for 2 weeks now and I won't work on
 FreeBSD for that timeframe. I will get back to those in 2 weeks then.
 If you want  to play more with this idea feel free to extend/fix/etc.
 this patch.

Unfortunately I haven't found time to work on this further, but I have some
additional thoughts.

First, I think that the witness message was not benign; it actually warns
about the same kind of deadlock that I originally had.
The problem is that sched_lend_prio() is called with the target thread's
td_lock held, which is a lock of the tdq on the thread's CPU.  Then, in your
patch, we acquire the current tdq's lock to modify its load.  But if two tdq
locks are to be held at the same time, then they must be locked using
tdq_lock_pair, so that lock order is maintained.  With the patch no tdq lock
order can be maintained (it can be arbitrary) and thus it is possible to run
into a deadlock.

I see two possibilities here, but don't rule out that there can be more.

1. Do something like my patch does.  That is, manipulate the current thread's
tdq in advance, before any other thread/tdq locks are acquired (or td_lock is
changed to point to a different lock and the current tdq is unlocked).  The
API can be made more generic in nature.  E.g. it can look like this:
void
sched_thread_block(struct thread *td, int inhibitor)
{
        struct tdq *tdq;

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        KASSERT(td == curthread,
            ("sched_thread_block: only curthread is supported"));
        tdq = TDQ_SELF();
        TDQ_LOCK_ASSERT(tdq, MA_OWNED);
        MPASS(td->td_lock == TDQ_LOCKPTR(tdq));
        TD_SET_INHIB(td, inhibitor);
        tdq_load_rem(tdq, td);
        tdq_setlowpri(tdq, td);
}


2. Try to do things from sched_lend_prio based on curthread's state.  This,
as it seems, would require completely lock-less manipulation of the current
tdq.  E.g. for tdq_load we could use atomic operations (it is already accessed
locklessly, but not modified).  But for tdq_lowpri a more elaborate trick
would be required, like having a separate field for a temporary value.
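
(To make option 2 concrete, a sketch of the lock-less load update — this
assumes ULE's struct tdq with tdq_load made, or treated as, a volatile u_int
so that the atomic(9) operations apply; it is an illustration, not a tested
patch:)

   #include <sys/types.h>
   #include <machine/atomic.h>

   /*
    * Sketch: adjust the current tdq's load without taking the tdq lock.
    * Readers already sample tdq_load locklessly, so the only new
    * requirement is that the modification itself be atomic.
    */
   static __inline void
   tdq_load_adj_unlocked(struct tdq *tdq, int delta)
   {
           atomic_add_int(&tdq->tdq_load, delta);
   }

tdq_lowpri is harder, as noted above, because it is a recomputed minimum
rather than a simple counter.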

In any case, I'll have to revisit this later.

-- 
Andriy Gapon


help me,my virtualbox run error.

2012-10-03 Thread cz li
My OS version is FreeBSD 9.0. I installed VirtualBox 4.0.2. The installation
gave no errors, but it can not run. The error information is:
 Failed to create the VirtualBox COM object.

  The application will now terminate.



  Callee RC: NS_ERROR_FACTORY_NOT_REGISTERED (0x80040154)


How can I solve this problem?


Thank you.

I eagerly look forward to your reply.


lichaozhong
2012-10-3


Re: help me,my virtualbox run error.

2012-10-03 Thread Tom Evans
On Wed, Oct 3, 2012 at 4:16 PM, cz li willing...@gmail.com wrote:
 My OS version is FreeBSD 9.0. I installed VirtualBox 4.0.2. The installation
 gave no errors, but it can not run. The error information is:
  Failed to create the VirtualBox COM object.

   The application will now terminate.



   Callee RC: NS_ERROR_FACTORY_NOT_REGISTERED (0x80040154)
 


https://www.virtualbox.org/ticket/2335


Cheers

Tom


kvm_proclist: ignore processes in PRS_NEW

2012-10-03 Thread Andriy Gapon

I believe that the following patch does the right thing, the same thing that
is done in a few other places.
I would like to ask for a review just in case.

commit cf0f573a1dcbc09cb8fce612530afeeb7f1b1c62
Author: Andriy Gapon a...@icyb.net.ua
Date:   Sun Sep 23 22:49:26 2012 +0300

kvm_proc: ignore processes in larvae state

diff --git a/lib/libkvm/kvm_proc.c b/lib/libkvm/kvm_proc.c
index 8fc415c..d1daf77 100644
--- a/lib/libkvm/kvm_proc.c
+++ b/lib/libkvm/kvm_proc.c
@@ -144,6 +144,8 @@ kvm_proclist(kvm_t *kd, int what, int arg, struct proc *p,
 		_kvm_err(kd, kd->program, "can't read proc at %p", p);
 		return (-1);
 	}
+	if (proc.p_state == PRS_NEW)
+		continue;
 	if (proc.p_state != PRS_ZOMBIE) {
 		if (KREAD(kd, (u_long)TAILQ_FIRST(&proc.p_threads),
 		    &mtd)) {

-- 
Andriy Gapon


kvm_getprocs: gracefully handle errors in kvm_deadprocs

2012-10-03 Thread Andriy Gapon

kvm_deadprocs returns -1 to signify an error.
Current kvm_getprocs code would pass this return code out via the 'cnt'
parameter and would not reset the return value to NULL.
This confuses some callers, most prominently procstat_getprocs, into believing
that kvm_getprocs was successful.  Moreover, the code tried to use cnt=-1 as an
unsigned number to allocate some additional memory.  As a result fstat -M could
try to allocate a huge amount of memory, e.g. when used with a kernel that
didn't match userland.

With the proposed change the error code should be handled properly.
Additionally, it should now be possible to enable the realloc code, which
previously contained a bug and was called even after a kvm_deadprocs error.

commit 6ddf602409119eded40321e5bb349b464f24e81a
Author: Andriy Gapon a...@icyb.net.ua
Date:   Sun Sep 23 22:52:28 2012 +0300

kvm_proc: gracefully handle errors in kvm_deadprocs, don't confuse callers

Plus fix a bug under 'notdef' (sic) section.

diff --git a/lib/libkvm/kvm_proc.c b/lib/libkvm/kvm_proc.c
index d1daf77..31258d7 100644
--- a/lib/libkvm/kvm_proc.c
+++ b/lib/libkvm/kvm_proc.c
@@ -593,9 +593,15 @@ liveout:
 
 	nprocs = kvm_deadprocs(kd, op, arg, nl[1].n_value,
 			      nl[2].n_value, nprocs);
+	if (nprocs <= 0) {
+		_kvm_freeprocs(kd);
+		nprocs = 0;
+	}
 #ifdef notdef
-	size = nprocs * sizeof(struct kinfo_proc);
-	(void)realloc(kd->procbase, size);
+	else {
+		size = nprocs * sizeof(struct kinfo_proc);
+		kd->procbase = realloc(kd->procbase, size);
+	}
 #endif
 }
 *cnt = nprocs;

P.S. It might make sense to change the 'count' parameter of procstat_getprocs
to a signed int so that it matches the kvm_getprocs interface.  Alternatively,
an intermediate variable could be used to insulate the signedness difference:

index 56562e1..11a817e 100644
--- a/lib/libprocstat/libprocstat.c
+++ b/lib/libprocstat/libprocstat.c
@@ -184,15 +184,18 @@ procstat_getprocs(struct procstat *procstat, int what, int arg,
 	struct kinfo_proc *p0, *p;
 	size_t len;
 	int name[4];
+	int cnt;
 	int error;
 
 	assert(procstat);
 	assert(count);
 	p = NULL;
 	if (procstat->type == PROCSTAT_KVM) {
-		p0 = kvm_getprocs(procstat->kd, what, arg, count);
-		if (p0 == NULL || count == 0)
+		*count = 0;
+		p0 = kvm_getprocs(procstat->kd, what, arg, &cnt);
+		if (p0 == NULL || cnt <= 0)
 			return (NULL);
+		*count = cnt;
 		len = *count * sizeof(*p);
 		p = malloc(len);
 		if (p == NULL) {



-- 
Andriy Gapon


Re: SMP Version of tar

2012-10-03 Thread John Nielsen
On Oct 2, 2012, at 12:36 AM, Yamagi Burmeister li...@yamagi.org wrote:

 On Mon, 1 Oct 2012 22:16:53 -0700
 Tim Kientzle t...@kientzle.com wrote:
 
 There are a few different parallel command-line compressors and 
 decompressors in ports; experiment a lot (with large files being read from 
 and/or written to disk) and see what the real effect is.  In particular, 
 some decompression algorithms are actually faster than memcpy() when run on 
 a single processor.  Parallelizing such algorithms is not likely to help 
 much in the real world.
 
 The two popular algorithms I would expect to benefit most are bzip2 
 compression and lzma compression (targeting xz or lzip format).  For 
 decompression, bzip2 is block-oriented so fits SMP pretty naturally.  Other 
 popular algorithms are stream-oriented and less amenable to parallelization.
 
 Take a careful look at pbzip2, which is a parallelized bzip2/bunzip2 
 implementation that's already under a BSD license.  You should be able to 
 get a lot of ideas about how to implement a parallel compression algorithm.  
 Better yet, you might be able to reuse a lot of the existing pbzip2 code.
 
 Mark Adler's pigz is also worth studying.  It's also license-friendly, and 
 is built on top of regular zlib, which is a nice technique when it's 
 feasible.
 
 Just a small note: There's a parallel implementation of xz called
 pixz. It's built atop liblzma and libarchive and is under a
 BSD-style license. See: https://github.com/vasi/pixz Maybe it's
 possible to reuse most of the code.


See also below, which has some bugfixes/improvements that AFAIK were never 
committed in the original project (though they were submitted).
https://github.com/jlrobins/pixz

JN



Re: SMP Version of tar

2012-10-03 Thread Richard Yao
On 10/02/2012 03:06 AM, Adrian Chadd wrote:
 .. please keep in mind that embedded platforms (a) don't necessarily
 benefit from it, and (b) have a very small footprint. Bloating out the
 compression/archival tools for the sake of possible SMP support will
 make me very, very sad.
 
 
 
 Adrian

Someone might want to ask if parallelizing tar is even possible. tar is
meant to serially write to tape. It should not be possible to
parallelize that operation.

I can imagine parallelizing compression algorithms, but I cannot imagine
parallelizing tar.





Re: kvm_proclist: ignore processes in PRS_NEW

2012-10-03 Thread John Baldwin
On Wednesday, October 03, 2012 12:37:07 pm Andriy Gapon wrote:
 
 I believe that the following patch does the right thing, the same thing that
 is done in a few other places.
 I would like to ask for a review just in case.
 
 commit cf0f573a1dcbc09cb8fce612530afeeb7f1b1c62
 Author: Andriy Gapon a...@icyb.net.ua
 Date:   Sun Sep 23 22:49:26 2012 +0300
 
 kvm_proc: ignore processes in larvae state

I think this is fine.

-- 
John Baldwin


Re: NFS server bottlenecks

2012-10-03 Thread Garrett Wollman
On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca said:

 Simple: just use a separate mutex for each list that a cache entry
 is on, rather than a global lock for everything. This would reduce
 the mutex contention, but I'm not sure how significantly since I
 don't have the means to measure it yet.
 
 Well, since the cache trimming is removing entries from the lists, I don't
 see how that can be done with a global lock for list updates?

Well, the global lock is what we have now, but the cache trimming
process only looks at one list at a time, so not locking the list that
isn't being iterated over probably wouldn't hurt, unless there's some
mechanism (that I didn't see) for entries to move from one list to
another.  Note that I'm considering each hash bucket a separate
list.  (One issue to worry about in that case would be cache-line
contention in the array of hash buckets; perhaps NFSRVCACHE_HASHSIZE
ought to be increased to reduce that.)
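
(A sketch of the shape the per-bucket locking might take — the struct and
field names here are illustrative, not the actual nfsrvcache code:)

   #include <sys/param.h>
   #include <sys/lock.h>
   #include <sys/mutex.h>
   #include <sys/queue.h>

   /*
    * Sketch: give each hash bucket its own mutex so nfsd threads
    * contend per bucket instead of on one global cache mutex.
    * struct drc_entry is a hypothetical stand-in for a cache entry.
    */
   struct drc_bucket {
           struct mtx              b_mtx;   /* protects b_head only */
           TAILQ_HEAD(, drc_entry) b_head;  /* entries hashing here */
   } __aligned(CACHE_LINE_SIZE);            /* avoid false sharing */

   static struct drc_bucket drc_table[NFSRVCACHE_HASHSIZE];

The __aligned() pads each bucket out to its own cache line, which addresses
the cache-line contention concern even without increasing
NFSRVCACHE_HASHSIZE.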

 Only doing it once/sec would result in a very large cache when bursts of
 traffic arrive.

My servers have 96 GB of memory so that's not a big deal for me.

 I'm not sure I see why doing it as a separate thread will improve things.
 There are N nfsd threads already (N can be bumped up to 256 if you wish)
 and having a bunch more cache trimming threads would just increase
 contention, wouldn't it?

Only one cache-trimming thread.  The cache trim holds the (global)
mutex for much longer than any individual nfsd service thread has any
need to, and having N threads doing that in parallel is why it's so
heavily contended.  If there's only one thread doing the trim, then
the nfsd service threads aren't spending time contending on the
mutex (it will be held less frequently and for shorter periods).
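
(For concreteness, a sketch of the single-trimmer wakeup pattern being
proposed — the names, condition variable, and helper functions are all
illustrative, not code from the NFS server; cv_init/mtx_init setup and
thread creation are omitted:)

   #include <sys/param.h>
   #include <sys/lock.h>
   #include <sys/mutex.h>
   #include <sys/condvar.h>
   #include <sys/kernel.h>

   static struct cv  drc_trim_cv;        /* hypothetical */
   static struct mtx drc_trim_mtx;       /* guards the condvar only */
   static int drc_cachesize, drc_highwater;
   static void drc_do_trim(void);        /* the actual trimming work */

   /* One dedicated trimming thread; trims when poked or once per second. */
   static void
   drc_trim_thread(void *arg)
   {
           for (;;) {
                   mtx_lock(&drc_trim_mtx);
                   cv_timedwait(&drc_trim_cv, &drc_trim_mtx, hz);
                   mtx_unlock(&drc_trim_mtx);
                   drc_do_trim();
           }
   }

   /* Called from the nfsd RPC path; cheap when no wakeup is needed. */
   static void
   drc_poke_trimmer(void)
   {
           if (drc_cachesize > drc_highwater)
                   cv_signal(&drc_trim_cv);
   }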

 The only negative effect I can think of w.r.t.  having the nfsd
 threads doing it would be a (I believe negligible) increase in RPC
 response times (the time the nfsd thread spends trimming the cache).
 As noted, I think this time would be negligible compared to disk I/O
 and network transit times in the total RPC response time?

With adaptive mutexes, many CPUs, lots of in-memory cache, and 10G
network connectivity, spinning on a contended mutex takes a
significant amount of CPU time.  (For the current design of the NFS
server, it may actually be a win to turn off adaptive mutexes -- I
should give that a try once I'm able to do more testing.)

-GAWollman


Re: NFS server bottlenecks

2012-10-03 Thread Rick Macklem
Garrett Wollman wrote:
 On Wed, 3 Oct 2012 09:21:06 -0400 (EDT), Rick Macklem
 rmack...@uoguelph.ca said:
 
  Simple: just use a separate mutex for each list that a cache entry
  is on, rather than a global lock for everything. This would reduce
  the mutex contention, but I'm not sure how significantly since I
  don't have the means to measure it yet.
 
  Well, since the cache trimming is removing entries from the lists,
  I don't see how that can be done with a global lock for list updates?
 
 Well, the global lock is what we have now, but the cache trimming
 process only looks at one list at a time, so not locking the list that
 isn't being iterated over probably wouldn't hurt, unless there's some
 mechanism (that I didn't see) for entries to move from one list to
 another. Note that I'm considering each hash bucket a separate
 list. (One issue to worry about in that case would be cache-line
 contention in the array of hash buckets; perhaps NFSRVCACHE_HASHSIZE
 ought to be increased to reduce that.)
 
Yea, a separate mutex for each hash list might help. There is also the
LRU list that all entries end up on, that gets used by the trimming code.
(I think? I wrote this stuff about 8 years ago, so I haven't looked at
 it in a while.)

Also, increasing the hash table size is probably a good idea, especially
if you reduce how aggressively the cache is trimmed.

  Only doing it once/sec would result in a very large cache when
  bursts of traffic arrive.
 
 My servers have 96 GB of memory so that's not a big deal for me.
 
This code was originally production tested on a server with 1Gbyte,
so times have changed a bit;-)

  I'm not sure I see why doing it as a separate thread will improve
  things.  There are N nfsd threads already (N can be bumped up to 256
  if you wish) and having a bunch more cache trimming threads would
  just increase contention, wouldn't it?
 
 Only one cache-trimming thread. The cache trim holds the (global)
 mutex for much longer than any individual nfsd service thread has any
 need to, and having N threads doing that in parallel is why it's so
 heavily contended. If there's only one thread doing the trim, then
 the nfsd service threads aren't spending time contending on the
 mutex (it will be held less frequently and for shorter periods).
 
I think the little drc2.patch, which will keep the nfsd threads from
acquiring the mutex and doing the trimming most of the time, might be
sufficient. I still don't see why a separate trimming thread will be
an advantage. I'd also be worried that the one cache trimming thread
won't get the job done soon enough.

When I did production testing on a 1Gbyte server that saw a peak
load of about 100RPCs/sec, it was necessary to trim aggressively.
(Although I'd be tempted to say that a server with 1Gbyte is no
 longer relevant, I recently recall someone trying to run FreeBSD
 on an i486, although I doubt they wanted to run the nfsd on it.)

  The only negative effect I can think of w.r.t. having the nfsd
  threads doing it would be a (I believe negligible) increase in RPC
  response times (the time the nfsd thread spends trimming the cache).
  As noted, I think this time would be negligible compared to disk I/O
  and network transit times in the total RPC response time?
 
 With adaptive mutexes, many CPUs, lots of in-memory cache, and 10G
 network connectivity, spinning on a contended mutex takes a
 significant amount of CPU time. (For the current design of the NFS
 server, it may actually be a win to turn off adaptive mutexes -- I
 should give that a try once I'm able to do more testing.)
 
Have fun with it. Let me know when you have what you think is a good patch.

rick

 -GAWollman


Re: help me,my virtualbox run error.

2012-10-03 Thread cz li
I did what was said above.

FreeBSDHos# ls -ld /tmp
drwxrwxrwt  20 root  wheel  2560 Oct  4 11:53 /tmp

 I log in as the ROOT user, but it still can not run.

 I installed GNOME on this machine. Very strange: I can only log on
normally as the ROOT user. When a new user logs on, the user interface is
not responding. I can only use the ROOT user.

Thank you

2012/10/3 Tom Evans tevans...@googlemail.com:
 On Wed, Oct 3, 2012 at 4:16 PM, cz li willing...@gmail.com wrote:
 My OS version is FreeBSD 9.0. I installed VirtualBox 4.0.2. The installation
 gave no errors, but it can not run. The error information is:
  Failed to create the VirtualBox COM object.

   The application will now terminate.



   Callee RC: NS_ERROR_FACTORY_NOT_REGISTERED (0x80040154)
 


 https://www.virtualbox.org/ticket/2335


 Cheers

 Tom


Re: SMP Version of tar

2012-10-03 Thread Tim Kientzle
 Someone might want to ask if parallelizing tar is even possible.

Answer:  Yes.  Here's a simple parallel version of tar:

   find . | cpio -o -H ustar | gzip > outfile.tgz

There are definitely other approaches.
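
(For instance, a variation on the same pipeline using the pbzip2 port
discussed earlier in this thread, so that the compression stage itself also
runs on all CPUs — the paths and archive names are just placeholders:)

   # each pipeline stage runs concurrently; pbzip2 additionally splits
   # the stream into blocks that are compressed on all cores
   tar -cf - /some/dir | pbzip2 -c > outfile.tar.bz2

   # decompression side (bzip2 blocks parallelize well, as noted above)
   pbzip2 -dc outfile.tar.bz2 | tar -xf -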


Tim
