Re: What is /boot/kernel/*.symbols?

2009-07-04 Thread Ian J Hart

Quoting "Patrick M. Hausen" :


Hi!

On Fri, Jul 03, 2009 at 05:05:08PM +0200, Dimitry Andric wrote:

E.g. the debug stuff is put into the .symbols files.  The kernel itself
still contains the function and data names, though:


Understood. Thanks. No, I don't want the kernel to be void
of any information ;-)
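For anyone curious, the split can be inspected directly. A sketch, assuming a stock 7.x layout under /boot/kernel (tool output shapes vary):

```shell
# The kernel binary keeps its symbol table (function and data names),
# while the heavyweight DWARF debug sections live in the separate
# .symbols file next to it.
nm /boot/kernel/kernel | head             # names are still in the kernel
ls -lh /boot/kernel/kernel.symbols        # the large debug companion file
objdump -h /boot/kernel/kernel.symbols | grep debug   # .debug_* sections
```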

Kind regards,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"



I just had an installworld fail due to this (/rescue).

Given that many people will have chosen the default root size offered  
by sysinstall, a different build default would seem prudent.


In any case, sysinstall needs to be updated (1GB?). Let's not put off  
new users any more than we have to.



--
ian j hart


This message was sent using IMP, the Internet Messaging Program.




Re: ZFS and df weirdness

2009-07-04 Thread Christian Walther
Hi Dan,

basically, the "Size" df shows is the filesystem's used space plus the
pool's current free space. This makes sense: since by default all disk
space in the pool is shared, and thus can be used by any filesystem,
every filesystem reports that shared space as free. The space a
filesystem has already used then has to be added on top to give its
total size.

In your setup, 1.5TB is available in the pool and tank/DATA uses 292GB
(roughly 300GB). The two values add up to 1.8TB, the overall size of
your pool. (The small remaining discrepancy comes from your other
filesystems, but they are rather small.)
If you, say, put 400GB into tank/home/jago, your df would look something like this:

> tank/DATA          1.4T    292G    1.1T    21%    /DATA
> tank/home/jago     1.5T    400G    1.1T    27%    /home/jago

It takes some time to get used to the way df displays this. IMO things
get easier once you remember that the OS actually treats every ZFS
filesystem like an individual device.
And BTW: the real fun starts when you add reservations and quotas to
some of your filesystems. ;-)
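The rule above (Size = a filesystem's used space plus the pool's shared free space) can be sketched in a few lines. This is an illustrative model with numbers taken from the poster's pool, not df's actual code:

```python
# Model of df's "Size" column on a shared ZFS pool:
#   size(fs) = used(fs) + pool_available
# Values are in GB and mirror the thread's example pool.

POOL_AVAIL_GB = 1536  # the ~1.5T still free in the pool


def df_size_gb(used_gb, pool_avail_gb=POOL_AVAIL_GB):
    """Size column = this filesystem's used space + shared free space."""
    return used_gb + pool_avail_gb


# tank/DATA uses 292G, so df reports about 292 + 1536 = 1828G (~1.8T),
# while an empty filesystem reports just the shared 1536G (~1.5T).
print(df_size_gb(292))  # tank/DATA
print(df_size_gb(0))    # tank/home
```

This is why only tank/DATA shows 1.8T: its 292G of used space is added on top of the shared free space that every filesystem reports.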

HTH
Christian


Re: ZFS and df weirdness

2009-07-04 Thread Dan Naumov
On Sun, Jul 5, 2009 at 2:26 AM, Freddie Cash wrote:
>
>
> On Sat, Jul 4, 2009 at 2:55 PM, Dan Naumov  wrote:
>>
>> Hello list.
>>
>> I have a single 2tb disk used on a 7.2-release/amd64 system with a
>> small part of it given to UFS and most of the disk given to a single
>> "simple" zfs pool with several filesystems without redundancy. I've
>> noticed a really weird thing regarding what "df" reports regarding the
>> "total space" of one of my filesystems:
>>
>> atom# df -h
>> Filesystem         Size    Used   Avail Capacity  Mounted on
>> /dev/ad12s1a        15G    1.0G     13G     7%    /
>> devfs              1.0K    1.0K      0B   100%    /dev
>> linprocfs          4.0K    4.0K      0B   100%    /usr/compat/linux/proc
>> tank/DATA          1.8T    292G    1.5T    16%    /DATA
>> tank/home          1.5T      0B    1.5T     0%    /home
>> tank/home/jago     1.5T    128K    1.5T     0%    /home/jago
>> tank/home/karni    1.5T      0B    1.5T     0%    /home/karni
>> tank/usr/local     1.5T    455M    1.5T     0%    /usr/local
>> tank/usr/obj       1.5T      0B    1.5T     0%    /usr/obj
>> tank/usr/ports     1.5T    412M    1.5T     0%    /usr/ports
>> tank/usr/src       1.5T    495M    1.5T     0%    /usr/src
>> tank/var/log       1.5T    256K    1.5T     0%    /var/log
>>
>> Considering that every single filesystem is part of the exact same
>> pool, with no custom options whatsoever used during filesystem
>> creation (except for mountpoints), why is the size of tank/DATA 1.8T
>> while the others are 1.5T?
>
> Did you set a reservation for any of the other filesystems?  Reserved space
> is not listed in the "general" pool.

no custom options whatsoever were used during filesystem creation
(except for mountpoints).

- Dan


Re: ZFS and df weirdness

2009-07-04 Thread Freddie Cash
On Sat, Jul 4, 2009 at 2:55 PM, Dan Naumov  wrote:

> Hello list.
>
> I have a single 2tb disk used on a 7.2-release/amd64 system with a
> small part of it given to UFS and most of the disk given to a single
> "simple" zfs pool with several filesystems without redundancy. I've
> noticed a really weird thing regarding what "df" reports regarding the
> "total space" of one of my filesystems:
>
> atom# df -h
> Filesystem         Size    Used   Avail Capacity  Mounted on
> /dev/ad12s1a        15G    1.0G     13G     7%    /
> devfs              1.0K    1.0K      0B   100%    /dev
> linprocfs          4.0K    4.0K      0B   100%    /usr/compat/linux/proc
> tank/DATA          1.8T    292G    1.5T    16%    /DATA
> tank/home          1.5T      0B    1.5T     0%    /home
> tank/home/jago     1.5T    128K    1.5T     0%    /home/jago
> tank/home/karni    1.5T      0B    1.5T     0%    /home/karni
> tank/usr/local     1.5T    455M    1.5T     0%    /usr/local
> tank/usr/obj       1.5T      0B    1.5T     0%    /usr/obj
> tank/usr/ports     1.5T    412M    1.5T     0%    /usr/ports
> tank/usr/src       1.5T    495M    1.5T     0%    /usr/src
> tank/var/log       1.5T    256K    1.5T     0%    /var/log
>
> Considering that every single filesystem is part of the exact same
> pool, with no custom options whatsoever used during filesystem
> creation (except for mountpoints), why is the size of tank/DATA 1.8T
> while the others are 1.5T?
>

Did you set a reservation for any of the other filesystems?  Reserved space
is not listed in the "general" pool.
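You can check (or experiment with) reservations directly. A sketch using zfs(8) syntax, with filesystem names taken from your listing:

```shell
# Show any reservations currently set on the filesystems in question.
zfs get reservation tank/DATA tank/home

# For illustration only: setting a reservation would carve that space
# out of what every *other* filesystem reports as available in df.
# zfs set reservation=100G tank/DATA
```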

-- 
Freddie Cash
fjwc...@gmail.com


ZFS and df weirdness

2009-07-04 Thread Dan Naumov
Hello list.

I have a single 2tb disk used on a 7.2-release/amd64 system with a
small part of it given to UFS and most of the disk given to a single
"simple" zfs pool with several filesystems without redundancy. I've
noticed a really weird thing regarding what "df" reports regarding the
"total space" of one of my filesystems:

atom# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  1.80T   294G  1.51T  15%  ONLINE  -

atom# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank              294G  1.48T    18K  none
tank/DATA         292G  1.48T   292G  /DATA
tank/home         216K  1.48T    21K  /home
tank/home/jago    132K  1.48T   132K  /home/jago
tank/home/karni    62K  1.48T    62K  /home/karni
tank/usr         1.33G  1.48T    18K  none
tank/usr/local    455M  1.48T   455M  /usr/local
tank/usr/obj       18K  1.48T    18K  /usr/obj
tank/usr/ports    412M  1.48T   412M  /usr/ports
tank/usr/src      495M  1.48T   495M  /usr/src
tank/var          320K  1.48T    18K  none
tank/var/log      302K  1.48T   302K  /var/log

atom# df
Filesystem       1K-blocks       Used       Avail Capacity  Mounted on
/dev/ad12s1a      16244334    1032310    13912478     7%    /
devfs                    1          1           0   100%    /dev
linprocfs                4          4           0   100%    /usr/compat/linux/proc
tank/DATA       1897835904  306397056  1591438848    16%    /DATA
tank/home       1591438848          0  1591438848     0%    /home
tank/home/jago  1591438976        128  1591438848     0%    /home/jago
tank/home/karni 1591438848          0  1591438848     0%    /home/karni
tank/usr/local  1591905024     466176  1591438848     0%    /usr/local
tank/usr/obj    1591438848          0  1591438848     0%    /usr/obj
tank/usr/ports  1591860864     422016  1591438848     0%    /usr/ports
tank/usr/src    1591945600     506752  1591438848     0%    /usr/src
tank/var/log    1591439104        256  1591438848     0%    /var/log

atom# df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ad12s1a        15G    1.0G     13G     7%    /
devfs              1.0K    1.0K      0B   100%    /dev
linprocfs          4.0K    4.0K      0B   100%    /usr/compat/linux/proc
tank/DATA          1.8T    292G    1.5T    16%    /DATA
tank/home          1.5T      0B    1.5T     0%    /home
tank/home/jago     1.5T    128K    1.5T     0%    /home/jago
tank/home/karni    1.5T      0B    1.5T     0%    /home/karni
tank/usr/local     1.5T    455M    1.5T     0%    /usr/local
tank/usr/obj       1.5T      0B    1.5T     0%    /usr/obj
tank/usr/ports     1.5T    412M    1.5T     0%    /usr/ports
tank/usr/src       1.5T    495M    1.5T     0%    /usr/src
tank/var/log       1.5T    256K    1.5T     0%    /var/log

Considering that every single filesystem is part of the exact same
pool, with no custom options whatsoever used during filesystem
creation (except for mountpoints), why is the size of tank/DATA 1.8T
while the others are 1.5T?


- Sincerely,
Dan Naumov


WiFi + Inputs and Outputs (Digital and Analog)

2009-07-04 Thread Exemys


Re: trap 12

2009-07-04 Thread vwe
On Fri, 3 Jul 2009, Ian J Hart wrote:
> Is this likely to be hardware? Details will
> follow if not.
> 
> [copied from a screen dump]
> 
> Fatal trap 12: page fault while in kernel mode
> cpuid = 1; apic id = 01
> fault virtual address = 0x0
> fault code = supervisor write data, page not present
> instruction pointer = 0x8:0x807c6c12
> stack pointer = 0x10:0x510e7890
> frame pointer = 0x10:0xff00054a6c90
> code segment = base 0x0, limit 0xf, type 0x1b
> = DPL 0, pres 1, long 1 def32 0, gran 1
> processor eflags = interrupt enabled, resume, IOPL = 0
> current process = 75372 (printf)
> trap number = 12
> panic: page fault
> cpuid = 1
> uptime: 8m2s
> Cannot dump. No dump device defined.
> 
> 

Ian,

it doesn't look like hardware. The message basically means that some
kernel code tried to write through a NULL pointer (which is by
definition an invalid pointer). Do you see that message often? Can you
reproduce it easily?

To tell you more we need a backtrace, so I'm wondering whether you can
manage to get a kernel dump written, so we can pull a stack backtrace
and more debugging information out of it.

If, by any chance, you're running a recent 7.x system, you may want to
enable textdump(4) (AFAIR introduced before 7.1; also look at ddb(8)).
For setting your system up for kernel crash dumps, please have a look at
the Handbook and savecore(8). For getting information out of a kernel
crash dump, please check the developers' handbook.
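A sketch of the usual crash-dump setup (per dumpon(8) and savecore(8); the swap device name here is a guess and must match your system):

```shell
# In /etc/rc.conf:
#   dumpdev="AUTO"        # pick a suitable swap device for dumps
#   dumpdir="/var/crash"  # where savecore(8) stores recovered dumps

# Apply without rebooting (device name is hypothetical):
dumpon /dev/ad12s1b

# After the next panic and reboot, savecore runs from rc and leaves
# vmcore.N plus info.N in /var/crash, ready for kgdb analysis.
```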

Please try to get us some debugging information; then we can most
likely suggest something.

HTH

Volker


Re: trap 12

2009-07-04 Thread Robert Watson


On Fri, 3 Jul 2009, Ian J Hart wrote:


Is this likely to be hardware? Details will follow if not.


This looks like a kernel NULL pointer dereference (faulting address 0x0), which 
means it is most likely a kernel bug, although it could be triggered by a 
hardware problem.  Is this early in the boot, or a diskless box, hence no dump 
device?


Robert N M Watson
Computer Laboratory
University of Cambridge



[copied from a screen dump]

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address = 0x0
fault code = supervisor write data, page not present
instruction pointer = 0x8:0x807c6c12
stack pointer = 0x10:0x510e7890
frame pointer = 0x10:0xff00054a6c90
code segment = base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1 def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 75372 (printf)
trap number = 12
panic: page fault
cpuid = 1
uptime: 8m2s
Cannot dump. No dump device defined.


--
ian j hart



This message was sent using IMP, the Internet Messaging Program.




Re: FreeBSD child process die for root [SOLVED]

2009-07-04 Thread Sagara Wijetunga

   Dear all,

   The issue is solved. It was a mistake on our side. In a modification
   we made to libutils, we executed the following line without checking
   whether the group's member list is empty. In our case it was empty,
   hence the crash:

   running = strdup(*(grp->gr_mem));

   Now login, su and cron work well.

   Thank you to all those who helped us.

   Best regards,
   Sagara