Re: dumpdev default

2012-01-17 Thread Jeremy Chadwick
On Tue, Jan 17, 2012 at 06:37:34PM +1100, Aristedes Maniatis wrote:
 The manual states that dumpdev AUTO is the default as of FreeBSD 6.0 [1]
 
 However:
 
 # uname -a
 FreeBSD xx 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30 UTC 
 2012 r...@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
 
 # grep dumpdev /etc/defaults/rc.conf
dumpdev="NO"  # Device to crashdump to (device name, AUTO, or NO).
savecore_flags="" # Used if dumpdev is enabled above, and present.
 
 
 It looks like NO is still the default. Is there a reason why this should not 
 be turned on even for production machines? I haven't read about any side 
 effects, but it seems to be off by default for some reason.

The Handbook is incorrect, and I filed a PR for this matter last year
(PR 159650):

http://lists.freebsd.org/pipermail/freebsd-doc/2011-August/018654.html

Worth reading:

http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063541.html
http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063542.html
http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063543.html

And the entire thread:

http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063535.html

-- 
| Jeremy Chadwick j...@parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, US |
| Making life hard for others since 1977. PGP 4BD6C0CB |



Re: dumpdev default

2012-01-17 Thread Aristedes Maniatis

On 17/01/12 7:10 PM, Jeremy Chadwick wrote:

On Tue, Jan 17, 2012 at 06:37:34PM +1100, Aristedes Maniatis wrote:

The manual states that dumpdev AUTO is the default as of FreeBSD 6.0 [1]

However:

# uname -a
FreeBSD xx 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30 UTC 2012 
r...@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64

# grep dumpdev /etc/defaults/rc.conf
dumpdev="NO"  # Device to crashdump to (device name, AUTO, or NO).
savecore_flags="" # Used if dumpdev is enabled above, and present.


It looks like NO is still the default. Is there a reason why this should not be 
turned on even for production machines? I haven't read about any side effects, 
but it seems to be off by default for some reason.


The Handbook is incorrect, and I filed a PR for this matter last year
(PR 159650):

http://lists.freebsd.org/pipermail/freebsd-doc/2011-August/018654.html

Worth reading:

http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063541.html
http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063542.html
http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063543.html

And the entire thread:

http://lists.freebsd.org/pipermail/freebsd-stable/2011-August/063535.html



Ahh!!! Someone give Jeremy a commit bit for the FreeBSD documentation already!

The commit you reference by mnag has only the following log:


--
- Change dumpdev default to NO. Only HEAD is set to AUTO

Discussed with: re
Approved by:re (scottl)
--

Not enough to know whether dumpdev is now considered a dangerous feature for 
production servers. Only scottl or mnag may know the answer.



I've also found another bug. man dumpon(8) shows:

  The dumpon utility will refuse to enable a dump device which is smaller
 than the total amount of physical memory as reported by the hw.physmem
 sysctl(8) variable.

However, I have found that

# dumpon -v /dev/ad4s1b
kernel dumps on /dev/ad4s1b

# sysctl hw.physmem
hw.physmem: 25744310272

# swapinfo
Device      1K-blocks     Used    Avail Capacity
/dev/ad4s1b   8388608        0  8388608     0%

So, 24Gb RAM, 8Gb swap device. No complaints from dumpon. Either it is silently 
failing (poor response from the app) or it is incorrectly setting the dump 
device to something too small.


Ari

--
--
Aristedes Maniatis
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001   fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A


Re: dumpdev default

2012-01-17 Thread Matthew Seaman
On 17/01/2012 08:43, Aristedes Maniatis wrote:

  The dumpon utility will refuse to enable a dump device which is smaller
  than the total amount of physical memory as reported by the hw.physmem
  sysctl(8) variable.
 
 However, I have found that
 
 # dumpon -v /dev/ad4s1b
 kernel dumps on /dev/ad4s1b
 
 # sysctl hw.physmem
 hw.physmem: 25744310272
 
 # swapinfo
 Device      1K-blocks     Used    Avail Capacity
 /dev/ad4s1b   8388608        0  8388608     0%
 
 So, 24Gb RAM, 8Gb swap device. No complaints from dumpon. Either it is
 silently failing (poor response from the app) or it is incorrectly
 setting the dump device to something too small.

We have minidumps nowadays: that's been the default since 7.0.
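
A minidump contains only the kernel's own pages rather than all of physical
memory, so the dump is normally far smaller than hw.physmem -- which would
explain why dumpon accepted the 8GB device. A quick check (a sketch; the
debug.minidump sysctl is the standard knob on a stock kernel):

# sysctl debug.minidump
debug.minidump: 1       (1, the default, means minidumps are enabled)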

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Alexander Pyhalov

Hello.
On my desktop I use Intel MatrixRAID RAID5 soft raid controller. RAID5 
is configured over 3 disks. FreeBSD 8.2 sees this as:


ar0: 953874MB Intel MatrixRAID RAID5 (stripe 64 KB) status: READY
ar0: disk0 READY using ad4 at ata2-master
ar0: disk1 READY using ad6 at ata3-master
ar0: disk2 READY using ad12 at ata6-master

Root filesystem is on /dev/ar0s1.
Today I've tried to upgrade to 9.0.
It doesn't see this disk array. Here is dmesg. When I load geom_raid, it 
finds something, but doesn't want to work with RAID:


GEOM_RAID: Intel-e922b201: Array Intel-e922b201 created.
GEOM_RAID: Intel-e922b201: No transformation module found for Volume0.
GEOM_RAID: Intel-e922b201: Volume Volume0 state changed from STARTING to 
UNSUPPORTED.

GEOM_RAID: Intel-e922b201: Disk ada2 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:2-ada2 state changed from 
NONE to ACTIVE.

GEOM_RAID: Intel-e922b201: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:1-ada1 state changed from 
NONE to ACTIVE.

GEOM_RAID: Intel-e922b201: Disk ada0 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:0-ada0 state changed from 
NONE to ACTIVE.

GEOM_RAID: Intel-e922b201: Array started.

No new devices appear in /dev.
How could I solve this issue?
--
Best regards,
Alexander Pyhalov,
system administrator of Computer Center of Southern Federal University


FB9-stable: bridge0 doesn't come up via rc

2012-01-17 Thread Denny Schierz
hi,

I have problems starting the bridge via rc.d:

rc.conf:

cloned_interfaces="bridge0"
ifconfig_bge0="up"
ifconfig_bridge0="addm bge0 up"
ifconfig_bridge0="inet 192.168.1.0 netmask 255.255.255.0 up"
defaultrouter="192.168.1.254"
gateway_enable="YES"

It doesn't work. After reboot I have to set up:

ifconfig bridge0 addm bge0

then it works.

Also a problem: /etc/rc.d/netif stop doesn't destroy bridge0 and 
/etc/rc.d/netif start gives errors, because bridge exists already.

any suggestions?

cu denny


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Matthias Gamsjager
Not sure if geom_raid is implemented with CAM. I remember a post a while
back about this issue happening when CAM became the default in 9. I did not
follow it, so I'm not sure whether something has been done about it.

On Tue, Jan 17, 2012 at 11:53 AM, Alexander Pyhalov a...@rsu.ru wrote:

 Hello.
 On my desktop I use Intel MatrixRAID RAID5 soft raid controller. RAID5 is
 configured over 3 disks. FreeBSD 8.2 sees this as:

 ar0: 953874MB Intel MatrixRAID RAID5 (stripe 64 KB) status: READY
 ar0: disk0 READY using ad4 at ata2-master
 ar0: disk1 READY using ad6 at ata3-master
 ar0: disk2 READY using ad12 at ata6-master

 Root filesystem is on /dev/ar0s1.
 Today I've tried to upgrade to 9.0.
 It doesn't see this disk array. Here is dmesg. When I load geom_raid, it
 finds something, but doesn't want to work with RAID:

 GEOM_RAID: Intel-e922b201: Array Intel-e922b201 created.
 GEOM_RAID: Intel-e922b201: No transformation module found for Volume0.
 GEOM_RAID: Intel-e922b201: Volume Volume0 state changed from STARTING to
 UNSUPPORTED.
 GEOM_RAID: Intel-e922b201: Disk ada2 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:2-ada2 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Disk ada1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:1-ada1 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Disk ada0 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:0-ada0 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Array started.

 No new devices appear in /dev.
 How could I solve this issue?
 --




Re: FB9-stable: bridge0 doesn't come up via rc

2012-01-17 Thread Volodymyr Kostyrko

Denny Schierz wrote:

hi,

I have problems starting the bridge via rc.d:

rc.conf:

cloned_interfaces="bridge0"
ifconfig_bge0="up"
ifconfig_bridge0="addm bge0 up"
ifconfig_bridge0="inet 192.168.1.0 netmask 255.255.255.0 up"


Remember that rc.conf is a plain shell script: the second ifconfig_bridge0
line above overwrites the first one (the "addm bge0 up" setting).



defaultrouter=192.168.1.254
gateway_enable=YES

It doesn't work. After reboot I have to set up:

ifconfig bridge0 addm bge0

then it works.


Use ifconfig_bridge0_alias0.
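
That is, keep a single ifconfig_bridge0 line and move the address into an
alias entry. A minimal sketch of the corrected rc.conf (address kept as in
the original post):

cloned_interfaces="bridge0"
ifconfig_bge0="up"
ifconfig_bridge0="addm bge0 up"
ifconfig_bridge0_alias0="inet 192.168.1.0 netmask 255.255.255.0"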


Also a problem: /etc/rc.d/netif stop doesn't destroy bridge0 and /etc/rc.d/netif 
start gives errors, because bridge exists already.

any suggestions?


--
Sphinx of black quartz judge my vow.


Re: FB9-stable: bridge0 doesn't come up via rc

2012-01-17 Thread Stefan Esser
Am 17.01.2012 12:57, schrieb Denny Schierz:
 hi,
 
 I have problems starting the bridge via rc.d:
 
 rc.conf:
 
 cloned_interfaces="bridge0"
 ifconfig_bge0="up"
 ifconfig_bridge0="addm bge0 up"
 ifconfig_bridge0="inet 192.168.1.0 netmask 255.255.255.0 up"

You forgot that rc.conf does not contain commands, but only variable
assignments. The latter of the last two lines overwrites the value
set in the former.

You may want to replace the first of these two lines by:

autobridge_bridge0="bge0" # add further interfaces as required

The parameter holds a space-separated list and may include wildcards
(e.g. "bge0 ixp*").
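
A minimal sketch of the resulting rc.conf, assuming the stock
/etc/rc.d/bridge script (autobridge_interfaces names the bridges the script
should manage):

cloned_interfaces="bridge0"
autobridge_interfaces="bridge0"
autobridge_bridge0="bge0"
ifconfig_bridge0="inet 192.168.1.0 netmask 255.255.255.0 up"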

 defaultrouter=192.168.1.254
 gateway_enable=YES
 
 It doesn't work. After reboot I have to set up:
 
 ifconfig bridge0 addm bge0
 
 then it works.

Yes, as explained above ...

 Also a problem: /etc/rc.d/netif stop doesn't destroy bridge0 and 
 /etc/rc.d/netif start gives errors, because bridge exists already.

This will be fixed if you use autobridge as explained above. See
/etc/rc.d/bridge for details.

Regards, STefan


Re: dumpdev default

2012-01-17 Thread Ken Smith
On Tue, 2012-01-17 at 18:37 +1100, Aristedes Maniatis wrote:
 The manual states that dumpdev AUTO is the default as of FreeBSD
 6.0 [1]
 
 However:
 
 # uname -a
 FreeBSD xx 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30
 UTC 2012 r...@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
 amd64
 
 # grep dumpdev /etc/defaults/rc.conf
 dumpdev="NO"  # Device to crashdump to (device name, AUTO, or NO).
 savecore_flags="" # Used if dumpdev is enabled above, and present.
 
 
 It looks like NO is still the default. Is there a reason why this
 should not be turned on even for production machines? I haven't read
 about any side effects, but it seems to be off by default for some
 reason.
 
 
 Please cc me on any responses since I'm not currently subscribed.
 
 Cheers
 Ari

If you use bsdinstall(8) to install a machine from scratch it explicitly
asks you about whether you want crash dumps enabled or not.

As long as you're aware that the crash dumps are happening and know that
you might need to clean up after them (remove stuff from /var, etc)
there are no dangers.  You just need to make sure wherever the crash
dumps will wind up going (/var by default) has enough space to handle
both the crash dumps and anything else the machines need to do.  We
currently have no provision for preventing crash dumps from filling up
the target partition.
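
For reference, the opt-in configuration is just (a sketch; the dumpdir
value shown is the stock default):

dumpdev="AUTO"          # in /etc/rc.conf; dump to the configured swap device
dumpdir="/var/crash"    # where savecore(8) writes vmcore files

After a panic it is worth checking "df /var" and pruning old
/var/crash/vmcore.* files by hand, precisely because of the
partition-filling issue described above.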

I keep advocating for the conservative side of this issue, preferring
that crash dumps be an opt-in setting until we have infrastructure in
place to prevent them from filling the target partition.  I still
picture there being people out there who don't know what crash dumps
are, wouldn't know they might need to clean up after them, and may
be negatively impacted if the target partition wound up full without
them knowing why.

-- 
Ken Smith
- From there to here, from here to  |   kensm...@buffalo.edu
  there, funny things are everywhere.   |
  - Theodor Geisel  |




Re: dumpdev default

2012-01-17 Thread Aristedes Maniatis

On 18/01/12 2:07 AM, Ken Smith wrote:

On Tue, 2012-01-17 at 18:37 +1100, Aristedes Maniatis wrote:

The manual states that dumpdev AUTO is the default as of FreeBSD
6.0 [1]

However:

# uname -a
FreeBSD xx 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30
UTC 2012 r...@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
amd64

# grep dumpdev /etc/defaults/rc.conf
dumpdev="NO"  # Device to crashdump to (device name, AUTO, or NO).
savecore_flags="" # Used if dumpdev is enabled above, and present.


It looks like NO is still the default. Is there a reason why this
should not be turned on even for production machines? I haven't read
about any side effects, but it seems to be off by default for some
reason.


Please cc me on any responses since I'm not currently subscribed.

Cheers
Ari


If you use bsdinstall(8) to install a machine from scratch it explicitly
asks you about whether you want crash dumps enabled or not.

As long as you're aware that the crash dumps are happening and know that
you might need to clean up after them (remove stuff from /var, etc)
there are no dangers.  You just need to make sure wherever the crash
dumps will wind up going (/var by default) has enough space to handle
both the crash dumps and anything else the machines need to do.  We
currently have no provision for preventing crash dumps from filling up
the target partition.

I keep advocating for the conservative side of this issue, preferring
that crash dumps be an opt-in setting until we have infrastructure in
place to prevent them from filling the target partition.  I still
picture there being people out there who don't know what crash dumps
are, wouldn't know they might need to clean up after them, and may
be negatively impacted if the target partition wound up full without
them knowing why.



Thanks Ken. That is very clear. If you have time, please update the
documentation with that answer too, since others are likely to be confused
by what I found there, which is incorrect and incomplete.

Also, for ZFS users, I assume that the first swap disk will be the default?
So this is another consideration when sizing swap partitions relative to the
amount of memory installed.


Thanks

Ari


--
--
Aristedes Maniatis
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001   fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A


ZFS / zpool size

2012-01-17 Thread Christer Solskogen
Hi!

I have a zpool called data, and I have some inconsistencies with sizes.

$ zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        3.32T   761G    516     50  56.1M  1.13M

$ zfs list -t all
NAME   USED  AVAIL  REFER  MOUNTPOINT
data  2.21T   463G  9.06G  /data

Can anyone throw any light on this?
Is not free the same as AVAIL?

I do not have any zfs snapshots, but some filesystems are compressed.

-- 
chs,


Re: ZFS / zpool size

2012-01-17 Thread Christer Solskogen
On Tue, Jan 17, 2012 at 4:47 PM, Christer Solskogen
christer.solsko...@gmail.com wrote:

snip

Oops, I forgot to say that this is on FreeBSD 9.0-RELEASE and all
filesystems are v28.


-- 
chs,


Re: ZFS / zpool size

2012-01-17 Thread Shawn Webb
The `zpool` command does not show all the overhead from ZFS. The `zfs`
command does. That's why the `zfs` command shows less available space
than the `zpool` command.

Thanks,

Shawn

On Tue, Jan 17, 2012 at 8:47 AM, Christer Solskogen
christer.solsko...@gmail.com wrote:
 Hi!

 I have a zpool called data, and I have some inconsistencies with sizes.

 $ zpool iostat
               capacity     operations    bandwidth
 pool        alloc   free   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 data        3.32T   761G    516     50  56.1M  1.13M

 $ zfs list -t all
 NAME                            USED  AVAIL  REFER  MOUNTPOINT
 data                           2.21T   463G  9.06G  /data

 Can anyone throw any light on this?
 Is not free the same as AVAIL?

 I do not have any zfs snapshots, but some filesystems are compressed.

 --
 chs,


Re: ZFS / zpool size

2012-01-17 Thread Christer Solskogen
On Tue, Jan 17, 2012 at 4:52 PM, Shawn Webb latt...@gmail.com wrote:
 The `zpool` command does not show all the overhead from ZFS. The `zfs`
 command does. That's why the `zfs` command shows less available space
 than the `zpool` command.


An overhead of almost 300GB? That seems a bit too much, don't you think?
The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.

-- 
chs,


Re: ZFS / zpool size

2012-01-17 Thread Shawn Webb
I don't think so. On an OpenIndiana server I run, it shows almost a
full 1TB difference:

shawn@indianapolis:~$ zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  4.06T  1.62T  2.44T    39%  1.00x  ONLINE  -
shawn@indianapolis:~$ zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  1.08T  1.58T  45.3K  /tank
shawn@indianapolis:~$ zpool iostat tank
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1.62T  2.44T      4     22   473K   165K

On Tue, Jan 17, 2012 at 9:00 AM, Christer Solskogen
christer.solsko...@gmail.com wrote:
 On Tue, Jan 17, 2012 at 4:52 PM, Shawn Webb latt...@gmail.com wrote:
 The `zpool` command does not show all the overhead from ZFS. The `zfs`
 command does. That's why the `zfs` command shows less available space
 than the `zpool` command.


 An overhead of almost 300GB? That seems a bit too much, don't you think?
 The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.

 --
 chs,


Re: ZFS / zpool size

2012-01-17 Thread Tom Evans
On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen
christer.solsko...@gmail.com wrote:
 An overhead of almost 300GB? That seems a bit too much, don't you think?
 The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.


Confused about your disks - can you show the output of "zpool status"?

If you have a raidz of N disks with a minimum size of Y GB, you can
expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
size of roughly (N-1)*Y.

So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
of 16.3 TB, and a zfs size of 13.3 TB.
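
Worked through in binary units (1.5 TB is roughly 1.36 TiB, and zpool/zfs
print TiB as "T"), that checks out:

  2 vdevs x 6 disks x 1.36T     = 16.3T  (zpool list: raw size)
  2 vdevs x (6-1) disks x 1.36T = 13.6T  (zfs list: ~13.3T after metadata
                                           overhead)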

Cheers

Tom


Re: ZFS / zpool size

2012-01-17 Thread Shawn Webb
shawn@indianapolis:~$ pfexec format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 WDC WD30-  WD-WCAWZ084341-0001-2.73TB
          /pci@0,0/pci-ide@11/ide@0/cmdk@0,0
       1. c1d1 WDC WD30-  WD-WCAWZ087742-0001-2.73TB
          /pci@0,0/pci-ide@11/ide@0/cmdk@1,0
       2. c2d0 Unknown-Unknown-0001 cyl 30397 alt 2 hd 255 sec 63
          /pci@0,0/pci-ide@11/ide@1/cmdk@0,0
       3. c2d1 ST315003-         9VS25XB-0001-1.36TB
          /pci@0,0/pci-ide@11/ide@1/cmdk@1,0
       4. c4d0 Unknown-Unknown-0001 cyl 30397 alt 2 hd 255 sec 63
          /pci@0,0/pci-ide@14,1/ide@1/cmdk@0,0
Specify disk (enter its number): ^C
shawn@indianapolis:~$ zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 4h53m with 0 errors on Tue Dec 20 12:03:36 2011
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0

errors: No known data errors

On Tue, Jan 17, 2012 at 9:18 AM, Tom Evans tevans...@googlemail.com wrote:
 On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen
 christer.solsko...@gmail.com wrote:
 An overhead of almost 300GB? That seems a bit too much, don't you think?
 The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.


 Confused about your disks - can you show the output of zpool status.

 If you have a raidz of N disks with a minimum size of Y GB, you can
 expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
 size of roughly (N-1)*Y.

 So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
 of 16.3 TB, and a zfs size of 13.3 TB.

 Cheers

 Tom


Re: ZFS / zpool size

2012-01-17 Thread Christer Solskogen
On Tue, Jan 17, 2012 at 5:18 PM, Tom Evans tevans...@googlemail.com wrote:
 On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen
 christer.solsko...@gmail.com wrote:
 An overhead of almost 300GB? That seems a bit too much, don't you think?
 The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.


 Confused about your disks - can you show the output of zpool status.


Sure!
$ zpool status
  pool: data
 state: ONLINE
 scan: scrub repaired 0 in 9h11m with 0 errors on Tue Jan 17 18:11:26 2012
config:

NAMESTATE READ WRITE CKSUM
dataONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
ada1ONLINE   0 0 0
ada2ONLINE   0 0 0
ada3ONLINE   0 0 0
logs
  gpt/slog  ONLINE   0 0 0
cache
  da0   ONLINE   0 0 0

$ dmesg | grep ada
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: Crucial CT32GBFAB0 MER1.01k ATA-6 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 31472MB (64454656 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: WDC WD15EARS-00MVWB0 51.0AB51 ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: ST3000DM001-9YN166 CC98 ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad8
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: WDC WD15EARS-00MVWB0 51.0AB51 ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10


 If you have a raidz of N disks with a minimum size of Y GB, you can
 expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
 size of roughly (N-1)*Y.


Ah, that explains it.
$ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  4.06T  3.33T   748G    82%  1.00x  ONLINE  -

So what "zpool iostat" shows is how much of the raw disk space is given to ZFS.


 So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
 of 16.3 TB, and a zfs size of 13.3 TB.


Yeap. I can see clearly now, thanks!


-- 
chs,


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Vinny Abello
I had something similar on a software based RAID controller on my Intel 
S5000PSL motherboard when I just went from 8.2-RELEASE to 9.0-RELEASE. After 
adding geom_raid_load=YES to my /boot/loader.conf, it still didn't create the 
device on bootup. I had to manually create the label with graid. After that it 
created /dev/raid/ar0 for me and I could mount the volume. Only thing which 
I've been trying to understand is the last message below about the integrity check 
failed. I've found other posts on this but when I dig into my setup, I don't 
see the same problems that are illustrated in the post and am at a loss for why 
that is being stated. Also, on other posts I think it was (raid/r0, MBR) that 
people were getting and trying to fix. Mine is (raid/r0, BSD) which I cannot 
find reference to. I have a feeling it has to do with the geometry of the disk 
or something. Everything else seems fine... I admittedly only use this volume 
for scratch space and didn't have anything important stored
 on it so I wasn't worried about experimenting or losing data. 
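
For the record, the manual step was along these lines (a sketch rather than
the exact command history; graid label takes the metadata format, an array
name, a RAID level and the member providers, and the two-disk mirror layout
is an assumption here):

# kldload geom_raid         (or geom_raid_load="YES" in /boot/loader.conf)
# graid label Intel ar0 RAID1 ada0 ada1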

ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
ada0: WDC WD4000YR-01PLB0 01.06A01 ATA-7 SATA 1.x device
ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus3 target 0 lun 0
ada1: WDC WD4000YR-01PLB0 01.06A01 ATA-7 SATA 1.x device
ada1: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6

GEOM_RAID: Intel-8c840681: Array Intel-8c840681 created.
GEOM_RAID: Intel-8c840681: Disk ada0s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:0-ada0s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Disk ada1s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:1-ada1s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Array started.
GEOM_RAID: Intel-8c840681: Volume ar0 state changed from STARTING to OPTIMAL.
GEOM_RAID: Intel-8c840681: Provider raid/r0 for volume ar0 created.
GEOM_PART: integrity check failed (raid/r0, BSD)

Any ideas on the integrity check anyone?

Thanks!

-Vinny

On 1/17/2012 6:57 AM, Matthias Gamsjager wrote:
 Not sure if geom_raid is implemented with cam. I remember a post a while
 back about this issue to happen with defaulting cam in 9. Did not follow it
 so not sure if something has been done about it.
 
 On Tue, Jan 17, 2012 at 11:53 AM, Alexander Pyhalov a...@rsu.ru wrote:
 
 Hello.
 On my desktop I use Intel MatrixRAID RAID5 soft raid controller. RAID5 is
 configured over 3 disks. FreeBSD 8.2 sees this as:

 ar0: 953874MB Intel MatrixRAID RAID5 (stripe 64 KB) status: READY
 ar0: disk0 READY using ad4 at ata2-master
 ar0: disk1 READY using ad6 at ata3-master
 ar0: disk2 READY using ad12 at ata6-master

 Root filesystem is on /dev/ar0s1.
 Today I've tried to upgrade to 9.0.
 It doesn't see this disk array. Here is dmesg. When I load geom_raid, it
 finds something, but doesn't want to work with RAID:

 GEOM_RAID: Intel-e922b201: Array Intel-e922b201 created.
 GEOM_RAID: Intel-e922b201: No transformation module found for Volume0.
 GEOM_RAID: Intel-e922b201: Volume Volume0 state changed from STARTING to
 UNSUPPORTED.
 GEOM_RAID: Intel-e922b201: Disk ada2 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:2-ada2 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Disk ada1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:1-ada1 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Disk ada0 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-e922b201: Subdisk Volume0:0-ada0 state changed from NONE
 to ACTIVE.
 GEOM_RAID: Intel-e922b201: Array started.

 No new devices appear in /dev.
 How could I solve this issue?
 --




Re: ZFS / zpool size

2012-01-17 Thread Stefan Esser
Am 17.01.2012 16:47, schrieb Christer Solskogen:
 Hi!
 
 I have a zpool called data, and I have some inconsistencies with sizes.
 
 $ zpool iostat
                capacity     operations    bandwidth
 pool        alloc   free   read  write   read  write
 ----------  -----  -----  -----  -----  -----  -----
 data        3.32T   761G    516     50  56.1M  1.13M
 
 $ zfs list -t all
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 data  2.21T   463G  9.06G  /data
 
 Can anyone throw any light on this?

The ZFS numbers are 2/3 of the ZPOOL numbers for alloc.

This looks like a raidz1 over 3 drives. The ZPOOL command shows
disk blocks available and used (disk drive view), the ZFS command
operates on the file-system level and shows blocks used to hold
actual data or available for actual data (does not account for
RAID parity overhead).

 Is not free the same as AVAIL?

Avail should be 2/3 of free, but I guess there is some overhead
that reduces the number of available blocks.
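
As a quick check against the numbers above: 3.32T alloc x 2/3 is about
2.21T, exactly the USED figure, while 761G free x 2/3 is about 507G versus
the 463G AVAIL -- the missing ~44G being that overhead/reservation.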

 I do not have any zfs snapshots, but some filesystems are compressed.

Compression affects the ZPOOL and ZFS numbers in the same way, but "zfs
list" and "df" will differ significantly for file-systems that contain
compressible data.

Regards, STefan


about thumper aka sun fire x4500

2012-01-17 Thread peter h
I have been beating on one of these for a few days; I have used FreeBSD 9.0
and 8.2. Both fail when I engage more than 10 disks: the system crashes and
the message "Hyper transport sync flood" will get into the BIOS errorlog
(but nothing will come to syslog since the reboot is immediate).

Using a zfs raidz of 25 disks and typing "zpool scrub" will bring the system
down in seconds.

Anyone using an x4500 that can confirm that it works? Or is this box broken?
-- 
Peter Håkanson   

There's never money to do it right, but always money to do it
again ... and again ... and again ... and again.
( Det är billigare att göra rätt. Det är dyrt att laga fel. )


Timekeeping in stable/9

2012-01-17 Thread Joe Holden

Hi guys,

Has anyone else noticed the tendency for 9.0-R to be unable to keep time
accurately?  I've got a couple of machines that have been upgraded from 8.2
that are struggling, in particular a VirtualBox guest that was fine on 8.2
but, now that it's been upgraded to 9.0, counts at anything from 2 to 20
seconds per 5-second sample; the result is similar with HPET, ACPI-fast and
TSC.
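
For reference, switching between those sources is done with the standard
timecounter sysctls (the available counters vary per machine):

# sysctl kern.timecounter.choice        # candidates and their quality values
# sysctl kern.timecounter.hardware      # the source currently in use
# sysctl kern.timecounter.hardware=HPET # switch at runtime to test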


I also have physical boxes which now seem to drift quite substantially;
ntpd cannot keep up, and as these boxes need to be able to report the
time relatively accurately, it is causing problems with log times and
such...


Any suggestions most welcome!

Thanks,
J


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Alexander Motin

On 17.01.2012 12:53, Alexander Pyhalov wrote:

On my desktop I use Intel MatrixRAID RAID5 soft raid controller. RAID5
is configured over 3 disks. FreeBSD 8.2 sees this as:

ar0: 953874MB Intel MatrixRAID RAID5 (stripe 64 KB) status: READY
ar0: disk0 READY using ad4 at ata2-master
ar0: disk1 READY using ad6 at ata3-master
ar0: disk2 READY using ad12 at ata6-master

Root filesystem is on /dev/ar0s1.
Today I've tried to upgrade to 9.0.
It doesn't see this disk array. Here is dmesg. When I load geom_raid, it
finds something, but doesn't want to work with RAID:

GEOM_RAID: Intel-e922b201: Array Intel-e922b201 created.
GEOM_RAID: Intel-e922b201: No transformation module found for Volume0.
GEOM_RAID: Intel-e922b201: Volume Volume0 state changed from STARTING to
UNSUPPORTED.
GEOM_RAID: Intel-e922b201: Disk ada2 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:2-ada2 state changed from
NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:1-ada1 state changed from
NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Disk ada0 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:0-ada0 state changed from
NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Array started.

No new devices appear in /dev.
How could I solve this issue?


ataraid(4) had mostly read-only support for RAID5, because it doesn't
update the parity data. I hadn't thought anybody was really using it in
that condition. That's why geom_raid doesn't support RAID5 at all right now.


--
Alexander Motin


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Alexander Motin

On 17.01.2012 19:03, Vinny Abello wrote:

I had something similar on a software based RAID controller on my Intel S5000PSL 
motherboard when I just went from 8.2-RELEASE to 9.0-RELEASE. After adding 
geom_raid_load=YES to my /boot/loader.conf, it still didn't create the device 
on bootup. I had to manually create the label with graid. After that it created 
/dev/raid/ar0 for me and I could mount the volume. Only thing which I've been trying to 
understand is the last message below about the integrity check failed. I've found other 
posts on this but when I dig into my setup, I don't see the same problems that are 
illustrated in the post and am at a loss for why that is being stated. Also, on other 
posts I think it was (raid/r0, MBR) that people were getting and trying to fix. Mine is 
(raid/r0, BSD) which I cannot find reference to. I have a feeling it has to do with the 
geometry of the disk or something. Everything else seems fine... I admittedly only use 
this volume for scratch space and didn't have anything important stored
on it so I wasn't worried about experimenting or losing data.

ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
ada0:WDC WD4000YR-01PLB0 01.06A01  ATA-7 SATA 1.x device
ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus3 target 0 lun 0
ada1:WDC WD4000YR-01PLB0 01.06A01  ATA-7 SATA 1.x device
ada1: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6

GEOM_RAID: Intel-8c840681: Array Intel-8c840681 created.
GEOM_RAID: Intel-8c840681: Disk ada0s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:0-ada0s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Disk ada1s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:1-ada1s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Array started.
GEOM_RAID: Intel-8c840681: Volume ar0 state changed from STARTING to OPTIMAL.
GEOM_RAID: Intel-8c840681: Provider raid/r0 for volume ar0 created.
GEOM_PART: integrity check failed (raid/r0, BSD)

Any ideas on the integrity check anyone?


It is not related to geom_raid, but to geom_part. There is something
wrong with your label. You may set the kern.geom.part.check_integrity
sysctl to zero to disable these checks. AFAIR it was mentioned in the 9.0
release notes.
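
For reference, a sketch of both forms (assuming the sysctl is writable at
runtime; it is also a loader tunable, so the persistent setting belongs in
/boot/loader.conf):

# sysctl kern.geom.part.check_integrity=0   # once, at runtime
kern.geom.part.check_integrity=0            # /boot/loader.conf, persistent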



On 1/17/2012 6:57 AM, Matthias Gamsjager wrote:

Not sure if geom_raid is implemented with cam. I remember a post a while
back about this issue to happen with defaulting cam in 9. Did not follow it
so not sure if something has been done about it.

On Tue, Jan 17, 2012 at 11:53 AM, Alexander Pyhalova...@rsu.ru  wrote:


Hello.
On my desktop I use Intel MatrixRAID RAID5 soft raid controller. RAID5 is
configured over 3 disks. FreeBSD 8.2 sees this as:

ar0: 953874MBIntel MatrixRAID RAID5 (stripe 64 KB)  status: READY
ar0: disk0 READY using ad4 at ata2-master
ar0: disk1 READY using ad6 at ata3-master
ar0: disk2 READY using ad12 at ata6-master

Root filesystem is on /dev/ar0s1.
Today I've tried to upgrade to 9.0.
It doesn't see this disk array. Here is dmesg. When I load geom_raid, it
finds something, but doesn't want to work with RAID:

GEOM_RAID: Intel-e922b201: Array Intel-e922b201 created.
GEOM_RAID: Intel-e922b201: No transformation module found for Volume0.
GEOM_RAID: Intel-e922b201: Volume Volume0 state changed from STARTING to
UNSUPPORTED.
GEOM_RAID: Intel-e922b201: Disk ada2 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:2-ada2 state changed from NONE
to ACTIVE.
GEOM_RAID: Intel-e922b201: Disk ada1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:1-ada1 state changed from NONE
to ACTIVE.
GEOM_RAID: Intel-e922b201: Disk ada0 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-e922b201: Subdisk Volume0:0-ada0 state changed from NONE
to ACTIVE.
GEOM_RAID: Intel-e922b201: Array started.

No new devices appear in /dev.
How could I solve this issue?


--
Alexander Motin


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Vinny Abello
On 1/17/2012 4:04 PM, Alexander Motin wrote:
 On 17.01.2012 19:03, Vinny Abello wrote:
 I had something similar on a software based RAID controller on my Intel 
 S5000PSL motherboard when I just went from 8.2-RELEASE to 9.0-RELEASE. After 
 adding geom_raid_load=YES to my /boot/loader.conf, it still didn't create 
 the device on bootup. I had to manually create the label with graid. After 
 that it created /dev/raid/ar0 for me and I could mount the volume. Only 
 thing which I've been trying to understand is the last message below about the 
 integrity check failed. I've found other posts on this but when I dig into 
 my setup, I don't see the same problems that are illustrated in the post and 
 am at a loss for why that is being stated. Also, on other posts I think it 
 was (raid/r0, MBR) that people were getting and trying to fix. Mine is 
 (raid/r0, BSD) which I cannot find reference to. I have a feeling it has to 
 do with the geometry of the disk or something. Everything else seems fine... 
 I admittedly only use this volume for scratch space and didn't have anything 
 important stored on it so I wasn't worried about experimenting or losing data.

 ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
 ada0:WDC WD4000YR-01PLB0 01.06A01  ATA-7 SATA 1.x device
 ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
 ada0: Command Queueing enabled
 ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
 ada0: Previously was known as ad4
 ada1 at ahcich1 bus 0 scbus3 target 0 lun 0
 ada1:WDC WD4000YR-01PLB0 01.06A01  ATA-7 SATA 1.x device
 ada1: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
 ada1: Command Queueing enabled
 ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
 ada1: Previously was known as ad6

 GEOM_RAID: Intel-8c840681: Array Intel-8c840681 created.
 GEOM_RAID: Intel-8c840681: Disk ada0s1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-8c840681: Subdisk ar0:0-ada0s1 state changed from NONE to 
 ACTIVE.
 GEOM_RAID: Intel-8c840681: Disk ada1s1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-8c840681: Subdisk ar0:1-ada1s1 state changed from NONE to 
 ACTIVE.
 GEOM_RAID: Intel-8c840681: Array started.
 GEOM_RAID: Intel-8c840681: Volume ar0 state changed from STARTING to OPTIMAL.
 GEOM_RAID: Intel-8c840681: Provider raid/r0 for volume ar0 created.
 GEOM_PART: integrity check failed (raid/r0, BSD)

 Any ideas on the integrity check anyone?
 
 It is not related to geom_raid, but to geom_part. There is something wrong 
 with your label. You may set kern.geom.part.check_integrity sysctl to zero to 
 disable these checks. AFAIR it was mentioned in 9.0 release notes.

Thanks for responding, Alexander. I also found that information about that 
sysctl variable, however I was trying to determine if something is actually 
wrong, how to determine what it is and ultimately how to fix it so it passes 
the check. I'd rather not ignore errors/warnings unless it's a bug. Again, I 
have no data of value on this partition, so I can do anything to fix it. Just 
not sure what to do or look at specifically.

Thanks!

-Vinny


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Alexander Motin

On 17.01.2012 23:35, Vinny Abello wrote:

On 1/17/2012 4:04 PM, Alexander Motin wrote:

On 17.01.2012 19:03, Vinny Abello wrote:

I had something similar on a software based RAID controller on my Intel S5000PSL 
motherboard when I just went from 8.2-RELEASE to 9.0-RELEASE. After adding 
geom_raid_load=YES to my /boot/loader.conf, it still didn't create the device 
on bootup. I had to manually create the label with graid. After that it created 
/dev/raid/ar0 for me and I could mount the volume. Only thing which I've been trying to 
understand is the last message below about the integrity check failed. I've found other 
posts on this but when I dig into my setup, I don't see the same problems that are 
illustrated in the post and am at a loss for why that is being stated. Also, on other 
posts I think it was (raid/r0, MBR) that people were getting and trying to fix. Mine is 
(raid/r0, BSD) which I cannot find reference to. I have a feeling it has to do with the 
geometry of the disk or something. Everything else seems fine... I admittedly only use 
this volume for scratch space and didn't have anything important stored
on it so I wasn't worried about experimenting or losing data.

ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
ada0:WDC WD4000YR-01PLB0 01.06A01   ATA-7 SATA 1.x device
ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus3 target 0 lun 0
ada1:WDC WD4000YR-01PLB0 01.06A01   ATA-7 SATA 1.x device
ada1: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6

GEOM_RAID: Intel-8c840681: Array Intel-8c840681 created.
GEOM_RAID: Intel-8c840681: Disk ada0s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:0-ada0s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Disk ada1s1 state changed from NONE to ACTIVE.
GEOM_RAID: Intel-8c840681: Subdisk ar0:1-ada1s1 state changed from NONE to 
ACTIVE.
GEOM_RAID: Intel-8c840681: Array started.
GEOM_RAID: Intel-8c840681: Volume ar0 state changed from STARTING to OPTIMAL.
GEOM_RAID: Intel-8c840681: Provider raid/r0 for volume ar0 created.
GEOM_PART: integrity check failed (raid/r0, BSD)

Any ideas on the integrity check anyone?


It is not related to geom_raid, but to geom_part. There is something wrong with 
your label. You may set kern.geom.part.check_integrity sysctl to zero to 
disable these checks. AFAIR it was mentioned in 9.0 release notes.


Thanks for responding, Alexander. I also found that information about that 
sysctl variable, however I was trying to determine if something is actually 
wrong, how to determine what it is and ultimately how to fix it so it passes 
the check. I'd rather not ignore errors/warnings unless it's a bug. Again, I 
have no data of value on this partition, so I can do anything to fix it. Just 
not sure what to do or look at specifically.


First thing I would check is that the partition is not bigger than the RAID
volume size. If the label was created before the RAID volume, that could be
the reason, because RAID cuts several sectors off the end of the disk to
store metadata.
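
A quick way to compare the two sizes with stock tools (a sketch; adjust the
provider names to the actual setup):

# diskinfo -v /dev/raid/r0 | grep mediasize   # size of the RAID volume
# gpart show raid/r0                          # the last partition must end
                                              # within that size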


--
Alexander Motin


Re: FreeBSD 9.0 and Intel MatrixRAID RAID5

2012-01-17 Thread Vinny Abello
On 1/17/2012 4:38 PM, Alexander Motin wrote:
 On 17.01.2012 23:35, Vinny Abello wrote:
 On 1/17/2012 4:04 PM, Alexander Motin wrote:
 On 17.01.2012 19:03, Vinny Abello wrote:
 I had something similar on a software based RAID controller on my Intel 
 S5000PSL motherboard when I just went from 8.2-RELEASE to 9.0-RELEASE. 
 After adding geom_raid_load=YES to my /boot/loader.conf, it still didn't 
 create the device on bootup. I had to manually create the label with 
 graid. After that it created /dev/raid/ar0 for me and I could mount the 
 volume. Only thing which I've been trying to understand is the last message 
 below about the integrity check failed. I've found other posts on this but 
 when I dig into my setup, I don't see the same problems that are 
 illustrated in the post and am at a loss for why that is being stated. 
 Also, on other posts I think it was (raid/r0, MBR) that people were 
 getting and trying to fix. Mine is (raid/r0, BSD) which I cannot find 
 reference to. I have a feeling it has to do with the geometry of the disk 
 or something. Everything else seems fine... I admittedly only use this 
 volume for scratch space and didn't have anything important stored
   on it so I wasn't worried about experimenting or losing data.

 ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
 ada0:WDC WD4000YR-01PLB0 01.06A01   ATA-7 SATA 1.x device
 ada0: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
 ada0: Command Queueing enabled
 ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
 ada0: Previously was known as ad4
 ada1 at ahcich1 bus 0 scbus3 target 0 lun 0
 ada1:WDC WD4000YR-01PLB0 01.06A01   ATA-7 SATA 1.x device
 ada1: 150.000MB/s transfers (SATA 1.x, UDMA6, PIO 8192bytes)
 ada1: Command Queueing enabled
 ada1: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
 ada1: Previously was known as ad6

 GEOM_RAID: Intel-8c840681: Array Intel-8c840681 created.
 GEOM_RAID: Intel-8c840681: Disk ada0s1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-8c840681: Subdisk ar0:0-ada0s1 state changed from NONE to 
 ACTIVE.
 GEOM_RAID: Intel-8c840681: Disk ada1s1 state changed from NONE to ACTIVE.
 GEOM_RAID: Intel-8c840681: Subdisk ar0:1-ada1s1 state changed from NONE to 
 ACTIVE.
 GEOM_RAID: Intel-8c840681: Array started.
 GEOM_RAID: Intel-8c840681: Volume ar0 state changed from STARTING to 
 OPTIMAL.
 GEOM_RAID: Intel-8c840681: Provider raid/r0 for volume ar0 created.
 GEOM_PART: integrity check failed (raid/r0, BSD)

 Any ideas on the integrity check anyone?

 It is not related to geom_raid, but to geom_part. There is something wrong 
 with your label. You may set kern.geom.part.check_integrity sysctl to zero 
 to disable these checks. AFAIR it was mentioned in 9.0 release notes.

 Thanks for responding, Alexander. I also found that information about that 
 sysctl variable, however I was trying to determine if something is actually 
 wrong, how to determine what it is and ultimately how to fix it so it passes 
 the check. I'd rather not ignore errors/warnings unless it's a bug. Again, I 
 have no data of value on this partition, so I can do anything to fix it. 
 Just not sure what to do or look at specifically.
 
 First thing I would check is that the partition is not bigger than the RAID 
 volume size. If label was created before the RAID volume, that could be the 
 reason, because RAID cuts several sectors off the end of disk to store 
 metadata.

OK, thanks for the suggestion. I will investigate.

-Vinny


Re: about thumper aka sun fire x4500

2012-01-17 Thread Patrick M. Hausen
Hi, all,

Am 17.01.2012 um 18:59 schrieb peter h pe...@hk.ipsec.se:

 I have been beating on one of these a few days, i have used freebsd 9.0 and 8.2
 Both fail when i engage more than 10 disks, the system crashes and messages:
 Hyper transport sync flood will get into the BIOS errorlog (but nothing
 will come to syslog since reboot is immediate)
 
 Using a zfs raidz of 25 disks and typing zpool scrub will bring the system
 down in seconds.
 
 Anyone using a x4500 that can confirm that it works? Or is this box broken?

Well, I hate to write that, but ... does it work with the vendor-supported [tm]
OS? If yes, you can rule out a hardware defect; I would at least try Solaris
for this reason. If no, the HW is broken and there is no need to look for a
fault on FreeBSD's side.

Kind regards,
Patrick


Re: about thumper aka sun fire x4500

2012-01-17 Thread Jeremy Chadwick
On Tue, Jan 17, 2012 at 06:59:08PM +0100, peter h wrote:
 I have been beating on one of these a few days, i have used freebsd 9.0 and 8.2
 Both fail when i engage more than 10 disks, the system crashes and messages:
 Hyper transport sync flood will get into the BIOS errorlog (but nothing
 will come to syslog since reboot is immediate)
 
 Using a zfs raidz of 25 disks and typing zpool scrub will bring the system
 down in seconds.
 
 Anyone using a x4500 that can confirm that it works? Or is this box broken?

I do not have one of these boxes / am not familiar with them, but
HyperTransport is an AMD thing.  The concept is that it's a bus that
interconnects different pieces of a system to the CPU (and thus the
memory bus).  ASCII diagram coming up:

+-----------------------+
|          RAM          |
+-----------+-----------+
            |
+-----------+-----------+
|  CPU (w/ on-die MCH)  |
+-----------+-----------+
            |
+-----------+-----------+   +-----------------------------+
| HyperTransport bridge +---+ PCI Express bus (VGA, etc.) |
+-----------+-----------+   +-----------------------------+
            |
+-----------+--------------+
| Southbridge (SATA, etc.) |
+--------------------------+

ZFS is memory I/O intensive.  Your controller, given that it consists of
25 disks, is probably sitting on the PCI Express bus, and thus is
generating an equally high amount of I/O.

Given this above diagram, I'm sure you can figure out how flooding
might occur.  :-)  I'm not sure what sync flood means (vs. I/O
flooding).

Googling turns up *tons* of examples of this on the web, except every
time they involve people doing overclocking or having CPU-level problems
pertaining to voltage.

There may be a BIOS option on your system to help curb this behaviour,
or at least try to limit it in some way.  I know on our AMD systems at
work the number of options in the Memory section of the BIOS is quite
large, many of which pertain to interactivity with HyperTransport.

If you want my advice?  Bring the issue up to Sun.  They will almost
certainly be able to assign the case to an engineer, who although may
not be familiar with FreeBSD, hopefully WILL be familiar with the bus
interconnects described above and might be able to help you out.

-- 
| Jeremy Chadwick j...@parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, US |
| Making life hard for others since 1977. PGP 4BD6C0CB |



Re: about thumper aka sun fire x4500

2012-01-17 Thread Ronald Klop

On Tue, 17 Jan 2012 18:59:08 +0100, peter h pe...@hk.ipsec.se wrote:

I have been beating on one of these a few days, i have used freebsd 9.0 and
8.2. Both fail when i engage more than 10 disks, the system crashes and
messages: Hyper transport sync flood will get into the BIOS errorlog (but
nothing will come to syslog since reboot is immediate)

Using a zfs raidz of 25 disks and typing zpool scrub will bring the
system down in seconds.

Anyone using a x4500 that can confirm that it works? Or is this box
broken?


Does it work if you make 3 raid groups of 8 disks and 1 spare?

Ronald.


Re: about thumper aka sun fire x4500

2012-01-17 Thread Rainer Duffner

Am 17.01.2012 um 23:09 schrieb Jeremy Chadwick:

 On Tue, Jan 17, 2012 at 06:59:08PM +0100, peter h wrote:
 I have been beating on one of these a few days, i have used freebsd 9.0 and 8.2
 Both fail when i engage more than 10 disks, the system crashes and messages:
 Hyper transport sync flood will get into the BIOS errorlog (but nothing
 will come to syslog since reboot is immediate)
 
 Using a zfs raidz of 25 disks and typing zpool scrub will bring the system
 down in seconds.
 
 Anyone using a x4500 that can confirm that it works? Or is this box broken?
 
 I do not have one of these boxes / am not familiar with them, but
 HyperTransport is an AMD thing.  The concept is that it's a bus that
 interconnects different pieces of a system to the CPU (and thus the
 memory bus).  ASCII diagram coming up:



Not exactly:

http://www.c0t0d0s0.org/archives/1792-Do-it-yourself-X4500.html


At the time, there was no similar board on the market, AFAIK.
I haven't looked, but I think it should be easier to get one today...




Re: about thumper aka sun fire x4500

2012-01-17 Thread peter h
On Tuesday 17 January 2012 23.15, Ronald Klop wrote:
 On Tue, 17 Jan 2012 18:59:08 +0100, peter h pe...@hk.ipsec.se wrote:
 
  I have been beating on one of these a few days, i have used freebsd 9.0 and
  8.2. Both fail when i engage more than 10 disks, the system crashes and
  messages: Hyper transport sync flood will get into the BIOS errorlog (but
  nothing will come to syslog since reboot is immediate)
 
  Using a zfs raidz of 25 disks and typing zpool scrub will bring the
  system down in seconds.
 
  Anyone using a x4500 that can confirm that it works? Or is this box
  broken?
 
 Does it work if you make 3 raid groups of 8 disks and 1 spare?
No, I did not test this.
I did some simple ones (5 disks in a raidz), but what I wanted this box
to do is more demanding work.  For smaller stuff I use simple hardware.

I guess I'll buy some Supermicro box instead.

 
 Ronald.
 ___
 freebsd-stable@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-stable
 To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
 
 

-- 
Peter Håkanson   

There's never money to do it right, but always money to do it
again ... and again ... and again ... and again.
( It is cheaper to do it right. It is expensive to fix mistakes. )
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: about thumper aka sun fire x4500

2012-01-17 Thread Bob Healey
The X4500s are oldish systems built around a pair of Opteron 290 chips
with 16 GB RAM and six PCI-X Marvell SATA controllers with 8 ports each,
supporting 48 drives in the machine.  Only the first and fourth drives
on (I think) the fourth controller are bootable.  Are you using the
latest firmware?  If not, you're going to have to pay Oracle for the
privilege of updating it, as there is no way the machines are still
under warranty.  I'd find a copy of the OpenSolaris Live CD and see if
it boots and supports all your drives.  Hope this helps you, or helps
someone else on the list with more knowledge of debugging older AMD
systems point you in the right direction.


Bob Healey
Systems Administrator
Biocomputation and Bioinformatics Constellation
and Molecularium
hea...@rpi.edu
(518) 276-4407


On 1/17/2012 5:09 PM, Jeremy Chadwick wrote:

On Tue, Jan 17, 2012 at 06:59:08PM +0100, peter h wrote:

I have been beating on one of these for a few days; I have used FreeBSD 9.0 and 8.2.
Both fail when I engage more than 10 disks: the system crashes and the
message Hyper transport sync flood will get into the BIOS error log
(but nothing will come to syslog, since the reboot is immediate).

Using a ZFS raidz of 25 disks and typing zpool scrub will bring the system
down in seconds.

Anyone using an x4500 who can confirm that it works?  Or is this box broken?

I do not have one of these boxes / am not familiar with them, but
HyperTransport is an AMD thing.  The concept is that it's a bus that
interconnects different pieces of a system to the CPU (and thus the
memory bus).  ASCII diagram coming up:

        +-------+
        |  RAM  |
        +---+---+
            |
+-----------+-----------+
|  CPU (w/ on-die MCH)  |
+-----------+-----------+
            |
+-----------+-----------+   +-----------------------------+
| HyperTransport bridge +---+ PCI Express bus (VGA, etc.) |
+-----------+-----------+   +-----------------------------+
            |
+-----------+--------------+
| Southbridge (SATA, etc.) |
+--------------------------+

ZFS is memory- and I/O-intensive.  Your controller, given that it is
driving 25 disks, is probably sitting on the PCI Express bus, and thus
is generating an equally high amount of I/O.
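
If you want to experiment with reducing the memory traffic ZFS
generates while testing, one knob worth knowing is the ARC size cap in
/boot/loader.conf.  A sketch only; the value is illustrative, and
whether it makes any difference to a hardware-level sync flood is an
open question:

  # /boot/loader.conf: cap the ZFS ARC at 4 GB (illustrative value)
  vfs.zfs.arc_max="4G"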

Given the diagram above, I'm sure you can figure out how flooding
might occur.  :-)  I'm not sure what sync flood means here (as opposed
to I/O flooding).

Googling turns up *tons* of examples of this on the web, except every
time they involve people doing overclocking or having CPU-level problems
pertaining to voltage.

There may be a BIOS option on your system to help curb this behaviour,
or at least try to limit it in some way.  I know on our AMD systems at
work the number of options in the Memory section of the BIOS is quite
large, many of which pertain to interaction with HyperTransport.

If you want my advice?  Bring the issue up to Sun.  They will almost
certainly be able to assign the case to an engineer who, although perhaps
not familiar with FreeBSD, hopefully WILL be familiar with the bus
interconnects described above and might be able to help you out.


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: about thumper aka sun fire x4500

2012-01-17 Thread Patrick M. Hausen
Hi all,

On 18.01.2012 at 00:14, peter h pe...@hk.ipsec.se wrote:
 On Tuesday 17 January 2012 23.15, Ronald Klop wrote:
 On Tue, 17 Jan 2012 18:59:08 +0100, peter h pe...@hk.ipsec.se wrote:
 
 I have been beating on one of these for a few days; I have used
 FreeBSD 9.0 and 8.2.
 Both fail when I engage more than 10 disks: the system crashes and
 the message Hyper transport sync flood will get into the BIOS error
 log (but nothing will come to syslog, since the reboot is immediate).
 
 Using a ZFS raidz of 25 disks and typing zpool scrub will bring the
 system down in seconds.
 
 Anyone using an x4500 who can confirm that it works?  Or is this box
 broken?
 
 Does it work if you make 3 raid groups of 8 disks and 1 spare?
 No, I did not test this.
 I did some simple ones (5 disks in a raidz), but what I wanted this box
 to do is more demanding work.  For smaller stuff I use simple hardware.
 
 I guess I'll buy some Supermicro box instead.

But Ronald is right. I apologize for not reading your initial post thoroughly 
and
jumping on your suspicion that the hardware might be to blame.

Did you really create one vdev of 25 disks?

This is strongly discouraged by the documentation provided by Sun/Oracle.

You should (IIRC) never have more than 9 disks in a single vdev.  Of course
you can join N vdevs of a type, say raidz2, into a single zpool.
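
Something like this, as a sketch with made-up device names:

  # two 9-disk raidz2 vdevs joined into one pool
  zpool create tank \
  raidz2 da0 da1  da2  da3  da4  da5  da6  da7  da8 \
  raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17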

See for example http://forums.freebsd.org/archive/index.php/t-4641.html

I'm writing from my iPad and cannot quickly find the link to the Sun
documentation.

Kind regards, HTH,
Patrick
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: about thumper aka sun fire x4500

2012-01-17 Thread Chuck Swiger
On Jan 17, 2012, at 2:09 PM, Jeremy Chadwick wrote:
 I do not have one of these boxes / am not familiar with them, but
 HyperTransport is an AMD thing.  The concept is that it's a bus that
 interconnects different pieces of a system to the CPU (and thus the
 memory bus).  

While that was a nice picture, it's not related to the bus architecture of a 
Sun 4500. :-)

An X or E 4500 is a highly fault-tolerant parallel minicomputer with 8
slots -- one was I/O, and you could put in up to 7 CPU boards with dual
UltraSPARC processors.  You could hot-plug CPU boards and memory in the
event of a failure and keep the rest of the system up.  They cost a
significant fraction of a million dollars circa y2k.

A check of some old docs suggests:

Hypertransport Sync Flood occurred on last boot: Uncorrectable ECC error caused 
the last reboot.

Regards,
-- 
-Chuck

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: about thumper aka sun fire x4500

2012-01-17 Thread Marcus Reid
On Tue, Jan 17, 2012 at 03:12:19PM -0800, Chuck Swiger wrote:
 On Jan 17, 2012, at 2:09 PM, Jeremy Chadwick wrote:
  I do not have one of these boxes / am not familiar with them, but
  HyperTransport is an AMD thing.  The concept is that it's a bus that
  interconnects different pieces of a system to the CPU (and thus the
  memory bus).  
 
 While that was a nice picture, it's not related to the bus
 architecture of a Sun 4500. :-)
 
 An X or E 4500 is a highly fault-tolerant parallel minicomputer with 8
 slots-- one was I/O, and you could put up to 7 CPU boards with dual
 UltraSPARC processors-- you could hot-plug CPU boards and memory in
 the event of a failure and keep the rest of the system up.  They cost
 a significant fraction of a million dollars circa y2k.

You're thinking of the E4500, which is as you describe.  The X4500 is
described here:

  http://en.wikipedia.org/wiki/Sun_Fire_X4500

Marcus
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: FB9-stable: bridge0 doesn't come up via rc

2012-01-17 Thread Denny Schierz

On 17.01.2012 at 14:23, Stefan Esser wrote:

 You forgot that rc.conf does not contain commands, but only variable
 assignments. The latter of the last two lines overwrites the value
 set in the former.
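
A minimal illustration of that point, with hypothetical interface names:

  # only the second assignment takes effect; em0 is silently dropped
  ifconfig_bridge0="addm em0 up"
  ifconfig_bridge0="addm em1 up"

  # combine everything into a single assignment instead
  cloned_interfaces="bridge0"
  ifconfig_bridge0="addm em0 addm em1 up"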

Ah, OK, that wasn't clear to me.  I tried several combinations, so I'll try
it again.  Thanks a lot @Stefan and @Volodymyr

cu denny
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Timekeeping in stable/9

2012-01-17 Thread Martin Sugioarto
On Tue, 17 Jan 2012 20:12:51 +,
Joe Holden li...@rewt.org.uk wrote:

 Hi guys,
 
 Has anyone else noticed the tendency for 9.0-R to be unable to
 keep time accurately?  I've got a couple of machines that have been
 upgraded from 8.2 that are struggling; in particular, a VirtualBox
 guest that was fine on 8.2 but, now that it's been upgraded to 9.0,
 counts anything from 2 to 20 seconds per 5-second sample.  The
 result is similar with HPET, ACPI-fast and TSC.

Hi Joe,

I can confirm this on VirtualBox.  I've been running WinXP inside
VirtualBox and measured network I/O during downloads.  It showed me very
high download rates (around 800 kB/s), while it's physically only
possible to download 200 kB/s through DSL here (Germany sucks at DSL,
even in the largest cities, btw!).

I correlated this behavior with high disk I/O on the host.  That means
the timer issues in the guest appear when I start a larger cp job on
the host.  I also immediately thought that this has something to do
with timers.

You can perhaps try it yourself and confirm whether it's the disk I/O
that influences the timers.  I somehow don't like the hard disk
behavior.  It makes the desktop unusable in some situations (the mouse
pointer skipping, applications locking up for several seconds).
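
A crude check (a sketch; time the interval against an external clock,
since date and sleep both use the guest's own timecounter):

  # run in the guest while a big cp job runs on the host; time the
  # interval with a stopwatch or against the host's clock
  date; sleep 60; date

  # list the available timecounters and force a different one
  sysctl kern.timecounter.choice
  sysctl kern.timecounter.hardware=HPET   # assuming HPET is in the list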

 I also have physical boxes which now seem to drift quite
 substantially; ntpd cannot keep up, and as these boxes need to be able
 to report the time relatively accurately, it is causing problems with
 log times and such...

Not sure about physical boxes. I have not taken a look at this, yet.

--
Martin

