Re: world's toolchain CPUTYPE

2006-03-01 Thread Ruslan Ermilov
On Wed, Mar 01, 2006 at 06:02:26AM +0300, Alex Semenyaka wrote:
 On Tue, Feb 28, 2006 at 10:19:11AM +0200, Ruslan Ermilov wrote:
   Isn't it reasonable to add corresponding optional functionality
   into the build process?
  No.
 
 Why? :)
 
I think I've explained this in the non-quoted here part.

   For example, if -DSTATIC_TOOLCHAIN (or
   pick any other name) is set, then:
   1) build toolchain statically linked
  This is already the case (${XMAKE} has -DNO_SHARED).
 
 Oh, great. Could we also add -DNO_MAKE_CONF then?
 Or at least -DTOOLCHAIN_NO_MAKE_CONF :)
 That would be enough.  Or am I missing something?
 
I fail to see what problem you are trying to attack.
-DNO_CPU_CFLAGS is already there, if that's what you mean:

BMAKE=  MAKEOBJDIRPREFIX=${WORLDTMP} \
${BMAKEENV} ${MAKE} -f Makefile.inc1 \
DESTDIR= \
BOOTSTRAPPING=${OSRELDATE} \
-DNO_HTML -DNO_INFO -DNO_LINT -DNO_MAN -DNO_NLS -DNO_PIC \
-DNO_PROFILE -DNO_SHARED -DNO_CPU_CFLAGS -DNO_WARNS
 ^^^

XMAKE=  TOOLS_PREFIX=${WORLDTMP} ${BMAKE} -DNO_FORTRAN -DNO_GDB
 

... and has the following effect:

$ make -V CFLAGS CPUTYPE=opteron 
-O2 -fno-strict-aliasing -pipe -march=opteron
$ make -V CFLAGS CPUTYPE=opteron -DNO_CPU_CFLAGS
-O2 -fno-strict-aliasing -pipe

But it doesn't really matter, since the host libraries that ARE
used to build the toolchain might themselves have been built with
optimized CFLAGS.  See other posts where people go into more detail
about what the conditions should be to allow NFS-mounted src/
installs.


Cheers,
-- 
Ruslan Ermilov
[EMAIL PROTECTED]
FreeBSD committer


pgpQOgnRb78KL.pgp
Description: PGP signature


Re: world's toolchain CPUTYPE

2006-03-01 Thread Ruslan Ermilov
On Mon, Feb 27, 2006 at 01:15:02PM +0300, Yar Tikhiy wrote:
  What's really fun is tricking the build system so you can cross build
  on one system, but native install on another from the same tree...
 
 I wondered, too, if it would be possible to cross-build install
 tools so that they could run on the target system, but I haven't
 investigated this way yet.  Do you have any ideas/recipes?  Thanks!
 
Well, the tools you want were already built, for the target host.
But you might not be able to install and run them (they may require
a new syscall, some new shared libraries, etc.).  The tools that
you intend to run on host I during the install should either be
compiled on this host (using its libraries, preferably statically
linked), or on a compatible host in a compatible build environment.
So it all depends on how similar the hosts B and I and their
build environments are.


Cheers,
-- 
Ruslan Ermilov
[EMAIL PROTECTED]
FreeBSD committer




KGDB not reading my crash dump.

2006-03-01 Thread Josef Karthauser
Hi guys,

I've got a crash dump that I'm trying to examine, but kgdb isn't
recognising it:

genius# kgdb /usr/obj/usr/src/sys/GENIUS2/kernel.debug ./vmcore.12
kgdb: cannot read PTD
genius# file vmcore.12
vmcore.12: ELF 32-bit LSB core file Intel 80386, invalid version (embedded)

Is this a known problem?  Any idea what's up?

Joe

ps. FreeBSD genius.tao.org.uk 6.1-PRERELEASE FreeBSD 6.1-PRERELEASE #26:
Fri Feb 17 12:26:21 GMT 2006 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENIUS2 i386




Re: Accessing address space of a process through kld!!

2006-03-01 Thread Andrey Simonenko
On Tue, Feb 28, 2006 at 01:33:47PM -0500, John Baldwin wrote:
 On Monday 27 February 2006 13:31, John-Mark Gurney wrote:
  Tanmay wrote this message on Mon, Feb 27, 2006 at 13:56 +0530:
   How do I access the address space, i.e. text, data, and stack, of a
   (user-level) process whose pid I know, from my kld?  E.g., suppose
   'vi' is running and I want to access its address space through my
   kld; how do I do it?
  
  You look up the process with pfind(9), and then you can use uio(9) to
  transfer data into kernel space...  Don't forget to PROC_UNLOCK the
  struct once you are done referencing it.
 
 You can use the proc_rwmem() function (it takes a uio and a struct proc)
 to do the actual I/O portion.  You can see example use in the ptrace()
 syscall.
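
For the archives, here is a rough sketch of the pfind(9) + proc_rwmem()
recipe quoted above (kernel code, not compiled here; the helper name
read_user_mem is made up, and the uio setup follows what the ptrace()
syscall does):

```
/* Read nbytes from pid's address space at addr into buf (kernel buffer). */
static int
read_user_mem(pid_t pid, vm_offset_t addr, void *buf, size_t nbytes)
{
        struct proc *p;
        struct uio uio;
        struct iovec iov;
        int error;

        if ((p = pfind(pid)) == NULL)   /* returns the proc locked */
                return (ESRCH);
        _PHOLD(p);                      /* keep the process around */
        PROC_UNLOCK(p);

        iov.iov_base = buf;
        iov.iov_len = nbytes;
        uio.uio_iov = &iov;
        uio.uio_iovcnt = 1;
        uio.uio_offset = addr;          /* target address in the process */
        uio.uio_resid = nbytes;
        uio.uio_segflg = UIO_SYSSPACE;  /* buf lives in kernel space */
        uio.uio_rw = UIO_READ;
        uio.uio_td = curthread;

        error = proc_rwmem(p, &uio);

        PROC_LOCK(p);
        _PRELE(p);
        PROC_UNLOCK(p);
        return (error);
}
```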

I have two questions about this function:

1.  vm_fault() does not guarantee that a (possibly) faulted-in page
will still be in the object or in one of its backing objects when
vm_fault() returns, because the page can become non-resident
again.  Why not wire the needed page in vm_fault() (by giving
a special flag to vm_fault())?

2.  When the object which owns the page is unlocked, which lock
guarantees that m will still point to the page?  I mean the m that
is used in vm_page_hold(m), called after VM_OBJECT_UNLOCK()
(i.e., in the window between the VM_OBJECT_UNLOCK() and
vm_page_lock_queues() calls).

Can you answer these two questions?  Thanks.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: UMA zone allocator memory fragmentation questions

2006-03-01 Thread Robert Watson


On Wed, 1 Mar 2006, Rohit Jalan wrote:

My problem is that I need to enforce a single memory limit on the total 
number of pages used by multiple zones.


The limit changes dynamically based on the number of pages being used by 
other non-zone allocations and also on the amount of available swap and 
memory.


I've tried to do the same in various ways with the stock kernel but I was 
unsuccessful due to reasons detailed below. In the end I had to patch the 
UMA subsystem to achieve my goal.


Is there a better method of doing the same? Something that would not involve 
patching the kernel. Please advise.


Currently, UMA supports limits on allocation by keg, so if two zones don't 
share the same keg, they won't share the same limit.  Supporting limits shared 
across kegs requires a change as things stand.


On the general topic of how to implement this -- I'm not sure what the best 
approach is.  Your approach gives quite a bit of flexibility.  I wonder, 
though, if it would be better to add an explicit accounting feature rather 
than a more flexible callback feature?  I.e., have a notion of a UMA 
accounting group which can be shared by one or more kegs to impose shared 
limits on multiple kegs?


Something similar to this might also be useful in the mbuf allocator, where we 
currently have quite a few kegs and zones floating around, making implementing 
a common limit quite difficult.


Robert N M Watson



--
TMPFS uses multiple UMA zones to store filesystem metadata.
These zones are allocated on a per mount basis for reasons described in
the documentation. Because of fragmentation that can occur in a zone due
to dynamic allocations and frees, the actual memory in use can be more
than the sum of the contained item sizes. This makes it difficult to
track and limit the space being used by a filesystem.

Even though the zone API provides scope for custom item constructors
and destructors, the necessary information (number of pages used) is
stored inside a keg structure, which itself is part of the opaque
uma_zone_t object.  One could include vm/uma_int.h and access
the keg information in a custom constructor, but it would require
messy code to calculate the delta, because one would have to
track the old value to see how many pages have been added or
removed.

The zone API also provides custom page allocation and free hooks.
These are ideal for my purpose, as they allow me to control
page allocation and frees effectively.  But the callback interface is
lacking: it does not allow one to specify an argument (as the
constructor and destructor callbacks do), making it difficult to
update custom information from within the uma_free callback, because
it is passed neither the zone pointer nor an argument.

Presently I have patched my private sources to modify the UMA API to
support passing an argument to the page allocation and free callbacks.
Unlike the constructor and destructor callback argument which is specified
on each call, the argument to uma_alloc or uma_free is specified
when setting the callback via uma_zone_set_allocf() or uma_zone_set_freef().
This argument is stored in the keg and passed to the callback whenever
it is called.

The scheme implemented by my patch imposes an overhead of
passing an extra argument to the uma_alloc and uma_free callbacks.
The uma_keg structure size is also increased by (2 * sizeof(void*)).

My patch changes the present custom alloc and free callback routines
(e.g., page_alloc, page_free, etc.) to accept an extra argument,
which is ignored.

The static page_alloc and page_free routines are made global and
renamed to uma_page_alloc and uma_page_free, respectively, so that
they may be called from other custom allocators, as is the case with
my code.

--

Patches:
 http://download.purpe.com/files/TMPFS_FreeBSD_7-uma-1.dif
 http://download.purpe.com/files/TMPFS_FreeBSD_7-uma-2.dif

Regards,

rohit --



On Tue, Feb 28, 2006 at 10:04:41PM +, Robert Watson wrote:

On Mon, 27 Feb 2006, Rohit Jalan wrote:


Is there an upper limit on the amount of fragmentation / wastage that can
occur in a UMA zone?

Is there a method to know the total number of pages used by a UMA zone at
some instance of time?


Hey there Rohit,

UMA allocates pages retrieved from VM as slabs.  Its behavior depends a
bit on how large the allocated object is, as it's a question of packing
objects into page-sized slabs for small objects, or packing objects into
sets of pages making up a slab for larger objects.  You can
programmatically access information on UMA using libmemstat(3), which
allows you to do things like query the current object cache size, total
lifetime allocations for the zone, allocation failure count, sizes of
per-cpu caches, etc.  You may want to take a glance at the source code for
vmstat -z and netstat -m for examples of it in use.  You'll notice, for
example, 
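
The libmemstat(3) interface mentioned above can be used along these
lines (a FreeBSD-only sketch, not compiled here; see the man page for
the exact API, and link with -lmemstat):

```
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>
#include <memstat.h>

int
main(void)
{
        struct memory_type_list *mtlp;
        struct memory_type *mtp;

        mtlp = memstat_mtl_alloc();
        if (mtlp == NULL)
                return (1);
        /* Snapshot UMA zone statistics via sysctl. */
        if (memstat_sysctl_uma(mtlp, 0) < 0)
                return (1);
        mtp = memstat_mtl_find(mtlp, ALLOCATOR_UMA, "mbuf");
        if (mtp != NULL)
                printf("mbuf zone: %ju allocs, %ju failures\n",
                    (uintmax_t)memstat_get_numallocs(mtp),
                    (uintmax_t)memstat_get_failures(mtp));
        memstat_mtl_free(mtlp);
        return (0);
}
```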

Re: Accessing address space of a process through kld!!

2006-03-01 Thread John Baldwin
On Wednesday 01 March 2006 09:06, Andrey Simonenko wrote:
 On Tue, Feb 28, 2006 at 01:33:47PM -0500, John Baldwin wrote:
  On Monday 27 February 2006 13:31, John-Mark Gurney wrote:
   Tanmay wrote this message on Mon, Feb 27, 2006 at 13:56 +0530:
How do I access the address space, i.e. text, data, and stack, of a
(user-level) process whose pid I know, from my kld?  E.g., suppose
'vi' is running and I want to access its address space through my
kld; how do I do it?
   
   You look up the process with pfind(9), and then you can use uio(9) to
   transfer data into kernel space...  Don't forget to PROC_UNLOCK the
   struct once you are done referencing it.
  
  You can use the proc_rwmem() function (it takes a uio and a struct proc)
  to do the actual I/O portion.  You can see example use in the ptrace()
  syscall.
 
 I have two questions about this function:
 
 1.  vm_fault() does not guarantee that a (possibly) faulted-in page
   will still be in the object or in one of its backing objects when
   vm_fault() returns, because the page can become non-resident
   again.  Why not wire the needed page in vm_fault() (by giving
   a special flag to vm_fault())?
 
 2.  When the object which owns the page is unlocked, which lock
   guarantees that m will still point to the page?  I mean the m that
   is used in vm_page_hold(m), called after VM_OBJECT_UNLOCK()
   (i.e., in the window between the VM_OBJECT_UNLOCK() and
   vm_page_lock_queues() calls).
 
 Can you answer these two questions?  Thanks.

Those are outside my realm of knowledge, unfortunately, but there are
some other folks you can ask, probably including truckman@ and [EMAIL PROTECTED]
 

-- 
John Baldwin [EMAIL PROTECTED]http://www.FreeBSD.org/~jhb/
Power Users Use the Power to Serve  =  http://www.FreeBSD.org


Re: UMA zone allocator memory fragmentation questions

2006-03-01 Thread Rohit Jalan
 
 On the general topic of how to implement this -- I'm not sure what the best 
 approach is.  Your approach gives quite a bit of flexibility.  I wonder, 
 though, if it would be better to add an explicit accounting feature rather 
 than a more flexible callback feature?  I.e., have a notion of a UMA 
 accounting group which can be shared by one or more Keg to impose shared 
 limits on multiple kegs?
 

The addition of an accounting feature would be nice, but
would that not increase lock contention?

Specifically, UMA would need to do locking if it manages
shared information, but some API users may not want to pay this
penalty if their shared-limit zones are accessed in an orderly
manner, or if the user for some reason locks the parent data
structures in which all the shared-limit zones are contained
before doing zone operations.

And even then, without a callback it would not be feasible
to implement a dynamic limit.

E.g.,

TMPFS_PAGES_MAX() is a macro that uses vmmeter and user-specified
parameters to provide a dynamic limit.

static void *
tmpfs_zone_page_alloc(uma_zone_t zone, int size, uint8_t *pflag, int wait,
    void *arg)
{
        void *m;
        struct tmpfs_mount *tmp = (struct tmpfs_mount *)arg;

        /* Refuse the allocation once the dynamic page budget is reached. */
        if (TMPFS_PAGES_MAX(tmp) <= tmp->tm_pages_used)
                return (NULL);

        m = uma_page_alloc(zone, size, pflag, wait, NULL);

        /*
         * For end cases the addition of size may breach the limit
         * if size > PAGE_SIZE, but that is an intentional trade-off.
         */
        if (m != NULL)
                tmp->tm_pages_used += size / PAGE_SIZE;

        return (m);
}


rohit --

Ps. Thank you for fxr.watson.org.


On Wed, Mar 01, 2006 at 04:57:10PM +, Robert Watson wrote:
 
 On Wed, 1 Mar 2006, Rohit Jalan wrote:
 
 My problem is that I need to enforce a single memory limit on the total 
 number of pages used by multiple zones.
 
 The limit changes dynamically based on the number of pages being used by 
 other non-zone allocations and also on the amount of available swap and 
 memory.
 
 I've tried to do the same in various ways with the stock kernel but I was 
 unsuccessful due to reasons detailed below. In the end I had to patch the 
 UMA subsystem to achieve my goal.
 
 Is there a better method of doing the same? Something that would not 
 involve patching the kernel. Please advise.
 
 Currently, UMA supports limits on allocation by keg, so if two zones don't 
 share the same keg, they won't share the same limit.  Supporting limits 
 shared across kegs requires a change as things stand.
 
 On the general topic of how to implement this -- I'm not sure what the best 
 approach is.  Your approach gives quite a bit of flexibility.  I wonder, 
 though, if it would be better to add an explicit accounting feature rather 
 than a more flexible callback feature?  I.e., have a notion of a UMA 
 accounting group which can be shared by one or more kegs to impose shared 
 limits on multiple kegs?
 
 Something similar to this might also be useful in the mbuf allocator, where 
 we currently have quite a few kegs and zones floating around, making 
 implementing a common limit quite difficult.
 
 Robert N M Watson
 
 
 --
 TMPFS uses multiple UMA zones to store filesystem metadata.
 These zones are allocated on a per mount basis for reasons described in
 the documentation. Because of fragmentation that can occur in a zone due
 to dynamic allocations and frees, the actual memory in use can be more
 than the sum of the contained item sizes. This makes it difficult to
 track and limit the space being used by a filesystem.
 
 Even though the zone API provides scope for custom item constructors
 and destructors, the necessary information (number of pages used) is
 stored inside a keg structure, which itself is part of the opaque
 uma_zone_t object.  One could include vm/uma_int.h and access
 the keg information in a custom constructor, but it would require
 messy code to calculate the delta, because one would have to
 track the old value to see how many pages have been added or
 removed.
 
 The zone API also provides custom page allocation and free hooks.
 These are ideal for my purpose, as they allow me to control
 page allocation and frees effectively.  But the callback interface is
 lacking: it does not allow one to specify an argument (as the
 constructor and destructor callbacks do), making it difficult to
 update custom information from within the uma_free callback, because
 it is passed neither the zone pointer nor an argument.
 
 Presently I have patched my private sources to modify the UMA API to
 support passing an argument to the page allocation and free callbacks.
 Unlike the constructor and destructor callback argument which is specified
 on each call, the argument to uma_alloc or uma_free is specified
 when setting the callback via uma_zone_set_allocf() or 
 uma_zone_set_freef().
 This argument is stored in the keg and passed to the callback whenever
 

Re: unversal watchdog

2006-03-01 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Alex Semenyaka [EMAIL PROTECTED] writes:
: On Mon, Feb 27, 2006 at 05:41:32PM +0700, Vitaliy Ovsyannikov wrote:
:  Hello, freebsd-hackers.
:    I've been stuck, unable to make watchdogs for daemons started via
:  startup rc scripts.  In Linux we can just put the process in
:  inittab.  Does FreeBSD contain an ability like this?
: 
: You can do it with /etc/ttys.  Actually this point is missed by many
: FreeBSD administrators; people just think of /etc/ttys in terms of
: terminals and stuff :) But if you open the man page you will find
: the following:
: 
:  The first field is normally the name of the terminal special file as it
:  is found in /dev.  However, it can be any arbitrary string when the asso-
:  ciated command is not related to a tty.
: 
: So you can perfectly well run any program there, and init will watch
: over it, just like in Linux.

The behavior goes back to at least 4.2 BSD (not FreeBSD 4.2, but 4.2
BSD).  I've used it on old SunOS 3.x boxes too...
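
For the archives, a minimal /etc/ttys entry of the kind described above
might look like this (the daemon name and path are invented; after
editing, a `kill -HUP 1` makes init(8) re-read the file):

```
# name   command                        type     status
mydaemon "/usr/local/sbin/mydaemon -f"  unknown  on
```

init(8) will then restart the command whenever it exits, which is
exactly the inittab-style respawn behavior asked about.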

Warner