MySQL performance concern

2010-10-02 Thread Rumen Telbizov
Hello everyone,

I am experimenting with MySQL running on FreeBSD and comparing it with another
(older) setup running on a Linux box.
My results show that performance on Linux is significantly better than on
FreeBSD, even though the Linux hardware is weaker.
I'd appreciate your comments and ideas.

Here's the setup:

1) FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a
SuperMicro machine with 2 x dual-core Xeon E5502 1.87GHz; 4 x 15K SAS disks
in a RAID10-style setup under ZFS (two mirrored pairs) and 2 x X25-E SSDs
partitioned as 8G for the ZIL and the rest for L2ARC; 16G of RAM with 8G of
it given to MySQL and plenty left free.

2) Linux Gentoo with 3 SATA disks in hardware RAID5 with similar
cpu/motherboard and same memory size.

The sole application that runs is a Python script which inserts a batch of
rows at a time. Only MyISAM is used as the storage engine.
Here's the problem: the Linux box manages to push around *5800* inserts/second
while the FreeBSD box only manages about *4000*/second.
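
For reference, the kind of load I mean is roughly equivalent to the following
(database and table names here are made up; the real script builds similar
batches from Python):

# hypothetical table; each INSERT below carries one batch of rows in a single round trip
mysql -u root -e "CREATE TABLE IF NOT EXISTS t (id INT, s VARCHAR(150)) ENGINE=MyISAM" test
mysql -u root -e "INSERT INTO t (id, s) VALUES (1,'a'), (2,'b'), (3,'c')" test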

MySQL version is 5.1.51

During this load the disk subsystem on FreeBSD is pretty much idle (both the
SSDs and the SAS disks). CPU utilization attributed to mysqld is only around
30%, so I am clearly heavily under-utilizing the hardware.
LinuxThreads support for 64-bit architectures is not available, so I couldn't
try that, but aside from that I tried recompiling MySQL with all the different
Makefile options available, without any effect.
Changing the recordsize in ZFS to 8K doesn't make any difference.
I also tried the Percona binary without any luck.

Let me know what additional information would be useful and I'll provide it
here.

Thank you in advance for your comments and suggestions.

Cheers,
Rumen Telbizov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: MySQL performance concern

2010-10-02 Thread Matthew Seaman
On 02/10/2010 08:06:52, Rumen Telbizov wrote:
 Hello everyone,
 
 I am experimenting with MySQL running on FreeBSD and comparing with another
 (older) setup running on a Linux box.
 My results show that performance on Linux is significantly better than
 FreeBSD although the hardware is weaker.
 I'd appreciate your comments and ideas.
 
 Here's the setup:
 
 1) FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a
 SuperMicro machine with 2 x Dual Core
 Xeon E5502 1.87Ghz ; 4 x SAS 15K in RAID10 setup under ZFS (two mirrored
 pairs) and 2 x SSD X25-E partitioned
 for: 8G for ZIL and the rest for L2ARC; 16G ram with 8 of them given to
 mysql and tons of free.
 
 2) Linux Gentoo with 3 SATA disks in hardware RAID5 with similar
 cpu/motherboard and same memory size.
 
 The sole application that runs is a python script which inserts a batch of
 lines at a time. Only myisam is used as a format.
 Here's the problem: On the Linux box it manages to push around
 *5800*inserts/second while on the FreeBSD box
 it's only *4000/*second.
 
 MySQL version is 5.1.51
 
 During this load the disk subsystem on FreeBSD is pretty much idle (both the
 SSDs and the SAS disks). CPU utilization
 contributed to mysqld is only around 30%. So I am clearly heavily
 under-utilizing the hardware.
 Linuxthreads support for 64bit architectures is not available so I couldn't
 try that but aside from that I tried recompiling
 mysql with all the different Makefile options available without any effect.
 Changing the recordsize in zfs to 8K doesn't make any difference.
 Tried percona binary without any luck.
 
 Let me know what additional information would be useful and I'll provide it
 here.
 
 Thank you in advance for your comments and suggestions.

Um... a fairly obvious point, but have you tuned the mysql configuration
appropriately on both machines?  I'd guess you have, but you didn't
mention it.  As I recall, the default configuration you get out of the
box with mysql is suitable for a machine with something like 64MB RAM.
Not at all appropriate nowadays where dedicated DB server hardware would
be more likely to have 64*G*B than 64*M*B...
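
A quick way to compare what each server is actually running with is something
along these lines (the variable names are just the usual MyISAM suspects):

mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"
mysql -e "SHOW VARIABLES LIKE '%buffer%'"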

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





60s boot hang on Xen running 8-STABLE when using ATA_CAM

2010-10-02 Thread Bruce Cran
I rebuilt my 8-STABLE kernel today using ATA_CAM to use the ada driver
on my Xen VPS and found that the boot hangs for 60s, apparently because
no CD image has been configured for the virtual CD-ROM drive. After the
"run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config"
message, the boot continues and the system appears to work fine.

Timecounter TSC frequency 285054 Hz quality 800
Timecounters tick every 10.000 msec
lo0: bpf attached
ata0: reset tp1 mask=03 ostat0=50 ostat1=00
ata0: stat0=0x50 err=0x01 lsb=0x00 msb=0x00
ata0: stat1=0x00 err=0x01 lsb=0xff msb=0xff
ata0: reset tp2 stat0=50 stat1=00 devices=0x1
(aprobe0:ata0:0:0:0): SIGNATURE: 
ata1: reset tp1 mask=03 ostat0=50 ostat1=41
ata1: stat0=0x50 err=0x01 lsb=0xff msb=0xff
ata1: stat1=0x00 err=0x01 lsb=0x14 msb=0xeb
ata1: reset tp2 stat0=50 stat1=00 devices=0x20001
(aprobe0:ata1:0:0:0): SIGNATURE: 
ata1: reset tp1 mask=03 ostat0=50 ostat1=00
ata1: stat0=0x50 err=0x01 lsb=0xff msb=0xff
ata1: stat1=0x00 err=0x01 lsb=0x14 msb=0xeb
ata1: reset tp2 stat0=50 stat1=00 devices=0x20001
(aprobe0:ata1:0:0:0): ATA_IDENTIFY. ACB: ec 00 00 00 00 40 00 00 00 00
00 00 (aprobe0:ata1:0:0:0): CAM status: Command timeout
(aprobe0:ata1:0:0:0): SIGNATURE: 
run_interrupt_driven_hooks: still waiting after 60 seconds for
xpt_config ata1: reset tp1 mask=03 ostat0=50 ostat1=00
ata1: stat0=0x50 err=0x01 lsb=0xff msb=0xff
ata1: stat1=0x00 err=0x01 lsb=0x14 msb=0xeb
ata1: reset tp2 stat0=50 stat1=00 devices=0x20001
(aprobe0:ata1:0:0:0): ATA_IDENTIFY. ACB: ec 00 00 00 00 40 00 00 00 00
00 00 (aprobe0:ata1:0:0:0): CAM status: Command timeout
(aprobe0:ata1:0:1:0): SIGNATURE: eb14
ada0 at ata0 bus 0 scbus0 target 0 lun 0
ada0: <QEMU HARDDISK 0.9.1> ATA-7 device
GEOM: new disk ada0
ada0: Serial Number QM1
ada0: 16.700MB/s transfers (WDMA2, PIO 8192bytes)
ada0: 10752MB (22020096 512 byte sectors: 16H 63S/T 16383C)

A full verbose dmesg is available at
http://www.cran.org.uk/~brucec/freebsd/dmesg.ATA_CAM.hang.txt

-- 
Bruce Cran
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: MySQL performance concern

2010-10-02 Thread Ronald Klop
On Sat, 02 Oct 2010 09:06:52 +0200, Rumen Telbizov telbi...@gmail.com  
wrote:



Hello everyone,

I am experimenting with MySQL running on FreeBSD and comparing with  
another

(older) setup running on a Linux box.
My results show that performance on Linux is significantly better than
FreeBSD although the hardware is weaker.
I'd appreciate your comments and ideas.

Here's the setup:

1) FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a
SuperMicro machine with 2 x Dual Core
Xeon E5502 1.87Ghz ; 4 x SAS 15K in RAID10 setup under ZFS (two mirrored
pairs) and 2 x SSD X25-E partitioned
for: 8G for ZIL and the rest for L2ARC; 16G ram with 8 of them given to
mysql and tons of free.

2) Linux Gentoo with 3 SATA disks in hardware RAID5 with similar
cpu/motherboard and same memory size.

The sole application that runs is a python script which inserts a batch  
of

lines at a time. Only myisam is used as a format.
Here's the problem: On the Linux box it manages to push around
*5800*inserts/second while on the FreeBSD box
it's only *4000/*second.

MySQL version is 5.1.51

During this load the disk subsystem on FreeBSD is pretty much idle (both  
the

SSDs and the SAS disks). CPU utilization
contributed to mysqld is only around 30%. So I am clearly heavily
under-utilizing the hardware.
Linuxthreads support for 64bit architectures is not available so I  
couldn't

try that but aside from that I tried recompiling
mysql with all the different Makefile options available without any  
effect.

Changing the recordsize in zfs to 8K doesn't make any difference.
Tried percona binary without any luck.

Let me know what additional information would be useful and I'll provide  
it

here.

Thank you in advance for your comments and suggestions.

Cheers,
Rumen Telbizov


Your app is single-threaded, I presume, so the multiple cores are not relevant
in this story.

Do you have the same indexes on the tables on both servers?
Do they both have the same way to connect with mysql? Unix sockets or  
localhost?
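
(A quick way to check on the FreeBSD side is something like the following;
with the stock client, connecting to "localhost" normally means the Unix
socket, while 127.0.0.1 forces TCP:)

sockstat -4 -l | grep 3306
sockstat -u | grep mysql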

Do they both run mysql 5.1.51, because you mention the Linux one is older?

Ronald.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


device names changes for adX.

2010-10-02 Thread Peter Ankerstål
Hi,

When I installed FreeBSD 8.1-RELEASE (freebsd-update) the adX devices changed
index number and the machine obviously didn't boot. Because of this I hesitate
to install 8.1 on my remote servers. How do I know if, and to what, the device
names will change?


--
Peter Ankerstål
pe...@pean.org
http://www.pean.org/

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: device names changes for adX.

2010-10-02 Thread Marat N.Afanasyev

Peter Ankerstål wrote:

Hi,

When I installed FreeBSD 8.1-RELEASE (freebsd-update) the adX devices changed 
index number and
the machine obviously didnt boot. Due to this I hesitate to install 8.1 on my 
servers remote. How do I know
if and to what the devices will change?


--
Peter Ankerstål
pe...@pean.org
http://www.pean.org/

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org

label your filesystems and mount them by label rather than by device 
name. see


man glabel
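
A rough sketch of the idea for a UFS filesystem (device and label names are
only examples, so check the man page before trusting this):

glabel label -v data0 /dev/ad12
newfs /dev/label/data0
# an existing (unmounted) UFS filesystem can get a label with tunefs instead,
# e.g. "tunefs -L data0 /dev/ad12s1a", which then appears as /dev/ufs/data0
# and the fstab entry becomes:
/dev/label/data0   /data   ufs   rw   2   2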

--
SY, Marat



Broken SASL/Kerberos authentication: openldap client GSSAPI authentication segfaults on FreeBSD 8.1 Release too

2010-10-02 Thread Martin Schweizer
Hello 

I use the system as a mail server (Cyrus Imapd) for which I authenticate against
Kerberos5 (Windows Active Directory) with Cyrus SASL (saslauthd -a kerberos5).
Here are the details:

cyrus-imapd-2.3.16_2 The cyrus mail server, supporting POP3 and IMAP4 protocols
cyrus-sasl-2.1.23   RFC  SASL (Simple Authentication and Security Layer)
cyrus-sasl-saslauthd-2.1.23 SASL authentication server for cyrus-sasl2

My system:
FreeBSD acsvfbsd04.acutronic.ch 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Thu Sep 30 
12:33:18 CEST 2010 
mar...@acsvfbsd04.acutronic.ch:/usr/obj/usr/src/sys/GENERIC i386

After I upgraded from 7.2 to 8.1 the SASL authentication (with Kerberos5) is
broken. See
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=301304+0+archive/2010/freebsd-stable/20100718.freebsd-stable
and the following threads. See also PR 147454.

I did what you suggested in different threads around July regarding the subject:

1. cvsup a fresh copy of RELEASE 8.1 in /usr/src
2. Now I apply the patch in /usr/src with patch -p1 -E < patch name
3. Now I run make buildworld && make buildkernel && make installkernel and I
get the following messages:


cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
acc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/ker
cc -fpic -DPIC -O2 -pipe  
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
-I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
-I/usr/src/kerb
make: don't know how to make /usr/obj/usr/src/tmp/usr/lib/libpthread.a. Stop
*** Error code 2
Stop in /usr/src.

What am I doing wrong?

Kind regards,

-- 

Martin Schweizer
off...@pc-service.ch

PC-Service M. Schweizer GmbH; Bannholzstrasse 6; CH-8608 Bubikon
Tel. +41 55 243 30 00; Fax: +41 55 243 33 22

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


out of HDD space - zfs degraded

2010-10-02 Thread Dan Langille
Overnight I was running a zfs send | zfs receive (both within the same
system / zpool).  The system ran out of space, a drive went offline,
and the system is degraded.


This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 
23:43:48 EDT 2010.


The following logs are also available at 
http://www.langille.org/tmp/zfs-space.txt - no line wrapping


This is what was running:

# time zfs send storage/bac...@transfer | mbuffer | zfs receive 
storage/compressed/bacula-mbuffer
in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100% full
cannot receive new filesystem stream: out of space
mbuffer: error: outputThread: error writing to stdout at offset
0x395917c4000: Broken pipe


summary: 3670 GByte in 10 h 40 min 97.8 MB/s
mbuffer: warning: error during output to stdout: Broken pipe
warning: cannot send 'storage/bac...@transfer': Broken pipe

real640m48.423s
user8m52.660s
sys 211m40.862s


Looking in the logs, I see this:

Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss 
4000 rs 4000 es  sts 801f0040 serr 

Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss 
4000 rs 4000 es  sts 801f0040 serr 

Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss 
4000 rs 4000 es  sts 801f0040 serr 

Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss 
4000 rs 4000 es  sts 801f0040 serr 

Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss 
4000 rs 4000 es  sts 801f0040 serr 

Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage 
path=/dev/gpt/disk06-live offset=270336 size=8192 error=6


Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize cache 
failed

Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry

Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage 
path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage 
path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6


$ zpool status
  pool: storage
 state: DEGRADED
 scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
config:

NAME STATE READ WRITE CKSUM
storage  DEGRADED 0 0 0
  raidz2 DEGRADED 0 0 0
gpt/disk01-live  ONLINE   0 0 0
gpt/disk02-live  ONLINE   0 0 0
gpt/disk03-live  ONLINE   0 0 0
gpt/disk04-live  ONLINE   0 0 0
gpt/disk05-live  ONLINE   0 0 0
gpt/disk06-live  REMOVED  0 0 0
gpt/disk07-live  ONLINE   0 0 0

$ zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
storage6.97T  1.91T  1.75G  /storage
storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql

$ sudo camcontrol devlist
Password:
Hitachi HDS722020ALA330 JKAOA28A  at scbus2 target 0 lun 0 (pass1,ada1)
Hitachi HDS722020ALA330 JKAOA28A  at scbus3 target 0 lun 0 (pass2,ada2)
Hitachi HDS722020ALA330 JKAOA28A  at scbus4 target 0 lun 0 (pass3,ada3)
Hitachi HDS722020ALA330 JKAOA28A  at scbus5 target 0 lun 0 (pass4,ada4)
Hitachi HDS722020ALA330 JKAOA28A  at scbus6 target 0 lun 0 (pass5,ada5)
Hitachi HDS722020ALA330 JKAOA28A  at scbus7 target 0 lun 0 (pass6,ada6)
ST380815AS 4.AAB at scbus8 target 0 lun 0 (pass7,ada7)
TSSTcorp CDDVDW SH-S223C SB01at scbus9 target 0 lun 0 (cd0,pass8)
WDC WD1600AAJS-75M0A0 02.03E02   at scbus10 target 0 lun 0 (pass9,ada8)

I'm not yet sure if the drive is fully dead or not.  This is not a 
hot-swap box.


I'm guessing the first step is to get ada0 back online and then back into the
zpool.  However, I'm reluctant to do a 'camcontrol scan' on this box, as it
froze up the system the last time I tried that:


  http://docs.freebsd.org/cgi/mid.cgi?4C78FF01.5020500
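
The sequence I have in mind is roughly the following (untested, and I'm wary
of the rescan given the earlier freeze; the new label name is made up):

camcontrol rescan all
zpool online storage gpt/disk06-live                   # if the same disk comes back healthy
zpool replace storage gpt/disk06-live gpt/disk06-new   # if it has to be swapped
zpool status storage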

Any suggestions for getting the drive back 

Re: device names changes for adX.

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 02:55:02PM +0200, Peter Ankerstål wrote:
 When I installed FreeBSD 8.1-RELEASE (freebsd-update) the adX devices changed 
 index number and
 the machine obviously didnt boot. Due to this I hesitate to install 8.1 on my 
 servers remote. How do I know
 if and to what the devices will change?

Please see this thread:

http://www.mail-archive.com/freebsd-stable@freebsd.org/msg112349.html

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Broken SASL/Kerberos authentication: openldap client GSSAPI authentication segfaults on FreeBSD 8.1 Release too

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 03:11:07PM +0200, Martin Schweizer wrote:
 [...] 
 3. Now  I make buildworld  make buidlkernel  make installkernel and I get 
 the following messages:
 [...] 
 
 cc -fpic -DPIC -O2 -pipe  
 -I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
 -I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
 -I/usr/src/kerb
 make: don't know how to make /usr/obj/usr/src/tmp/usr/lib/libpthread.a. Stop
 *** Error code 2
 Stop in /usr/src.
 
 What I'm doing wrong? 

Did you specify any -j flags during your make buildworld (ex. make
-j2 buildworld)?

If so, please remove them and restart the build.  Then you will see
where the actual compile/make error happens.  From the above output, it
doesn't look like it's related to the Kerberos or libgssapi stuff.
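
Something like this is what I mean; the rm is optional but rules out stale
objects left over from the 7.2 world (paths are the defaults):

cd /usr/src
rm -rf /usr/obj/usr/src
make buildworld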

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: out of HDD space - zfs degraded

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:
 Overnight I was running a zfs send | zfs receive (both within the
 same system / zpool).  The system ran out of space, a drive went off
 line, and the system is degraded.
 
 This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
 23:43:48 EDT 2010.
 
 The following logs are also available at
 http://www.langille.org/tmp/zfs-space.txt - no line wrapping
 
 This is what was running:
 
 # time zfs send storage/bac...@transfer | mbuffer | zfs receive
 storage/compressed/bacula-mbuffer
 in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
 fullcannot receive new filesystem stream: out of space
 mbuffer: error: outputThread: error writing to stdout at offset
 0x395917c4000: Broken pipe
 
 summary: 3670 GByte in 10 h 40 min 97.8 MB/s
 mbuffer: warning: error during output to stdout: Broken pipe
 warning: cannot send 'storage/bac...@transfer': Broken pipe
 
 real640m48.423s
 user8m52.660s
 sys 211m40.862s
 
 
 Looking in the logs, I see this:
 
 Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
 Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=270336 size=8192 error=6
 
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
 cache failed
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry
 
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6
 
 $ zpool status
   pool: storage
  state: DEGRADED
  scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
 config:
 
 NAME STATE READ WRITE CKSUM
 storage  DEGRADED 0 0 0
   raidz2 DEGRADED 0 0 0
 gpt/disk01-live  ONLINE   0 0 0
 gpt/disk02-live  ONLINE   0 0 0
 gpt/disk03-live  ONLINE   0 0 0
 gpt/disk04-live  ONLINE   0 0 0
 gpt/disk05-live  ONLINE   0 0 0
 gpt/disk06-live  REMOVED  0 0 0
 gpt/disk07-live  ONLINE   0 0 0
 
 $ zfs list
 NAMEUSED  AVAIL  REFER  MOUNTPOINT
 storage6.97T  1.91T  1.75G  /storage
 storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
 storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
 storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
 storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql
 
 $ sudo camcontrol devlist
 Password:
 Hitachi HDS722020ALA330 JKAOA28A  at scbus2 target 0 lun 0 (pass1,ada1)
 Hitachi HDS722020ALA330 JKAOA28A  at scbus3 target 0 lun 0 (pass2,ada2)
 Hitachi HDS722020ALA330 JKAOA28A  at scbus4 target 0 lun 0 (pass3,ada3)
 Hitachi HDS722020ALA330 JKAOA28A  at scbus5 target 0 lun 0 (pass4,ada4)
 Hitachi HDS722020ALA330 JKAOA28A  at scbus6 target 0 lun 0 (pass5,ada5)
 Hitachi HDS722020ALA330 JKAOA28A  at scbus7 target 0 lun 0 (pass6,ada6)
 ST380815AS 4.AAB at scbus8 target 0 lun 0 (pass7,ada7)
 TSSTcorp CDDVDW SH-S223C SB01at scbus9 target 0 lun 0 (cd0,pass8)
 WDC WD1600AAJS-75M0A0 02.03E02   at scbus10 target 0 lun 0 (pass9,ada8)
 
 I'm not yet sure if the drive is fully dead or not.  This is not a
 hot-swap box.

It looks to me like the disk labelled gpt/disk06-live literally stopped
responding to commands.  The errors you see are coming from the OS and
the siis(4) controller, and 

Re: device names changes for adX.

2010-10-02 Thread Peter Ankerstål


 Peter Ankerstål wrote:
 Hi,
 
 When I installed FreeBSD 8.1-RELEASE (freebsd-update) the adX devices 
 changed index number and
 the machine obviously didnt boot. Due to this I hesitate to install 8.1 on 
 my servers remote. How do I know
 if and to what the devices will change?
 
 
 --
 Peter Ankerstål
 pe...@pean.org
 http://www.pean.org/
 
 ___
 freebsd-stable@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-stable
 To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
 
 label your filesystems and mount them by label rather than by device name. see
 
 man glabel
 
 -- 
 SY, Marat
 

Thanks, I may try that. But how will this affect a ZFS raidz set up to use
adX drives?

Like this:

tank        ONLINE   0 0 0
  raidz1    ONLINE   0 0 0
    ad10s2  ONLINE   0 0 0
    ad12    ONLINE   0 0 0
    ad14    ONLINE   0 0 0
    ad16    ONLINE   0 0 0


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: RELENG_7 em problems (and RELENG_8)

2010-10-02 Thread Mike Tancsa


Hi Jack,
Two quick notes about the new driver.

On the server that was having NIC lockups, so far so good.  Saturday
AM, the box would take a lot of level 0 dumps as well as do about
70Mb/s of outbound rsync traffic.  By now, the NIC would normally have
wedged at least once.  So far so good!



On a different, new box, I decided to try HEAD with the new driver,
and ran into problems with the onboard NIC:


e...@pci0:0:25:0:class=0x02 card=0x00368086 
chip=0x10f08086 rev=0x06 hdr=0x00

vendor = 'Intel Corporation'
class  = network
subclass   = ethernet
cap 01[c8] = powerspec 2  supports D0 D3  current D0
cap 05[d0] = MSI supports 1 message, 64 bit enabled with 1 message
cap 13[e0] = PCI Advanced Features: FLR TP

em0: Intel(R) PRO/1000 Network Connection 7.0.5 port 0xf020-0xf03f 
mem 0xfe50-0xfe51,0xfe527000-0xfe527fff irq 20 at device 25.0 on pci0

em0: Using MSI interrupt
em0: [FILTER]
em0: Ethernet address: 70:71:bc:09:5e:aa

This is an Intel-branded desktop board:

acpi0: INTEL DH55TC on motherboard

I find I have to disable RX and TX checksum offload on the interface,
otherwise there are a lot of retransmits due to missed packets.  tcpdump
implies the packets are going out, but they seem never to actually make it
out.  The motherboard is at the office on an unmanaged switch right now, so
I don't have any stats from the switch, but tcpdump shows a lot of outbound
retransmits.  Turning off rxcsum and txcsum fixes the problem.
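
For the record, the workaround amounts to this (the DHCP keyword is just a
placeholder for however the interface is normally configured):

ifconfig em0 -rxcsum -txcsum
# and to make it persistent, in /etc/rc.conf:
ifconfig_em0="DHCP -rxcsum -txcsum"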


dev.em.0.%desc: Intel(R) PRO/1000 Network Connection 7.0.8
dev.em.0.%driver: em
dev.em.0.%location: slot=25 function=0 handle=\_SB_.PCI0.GBE_
dev.em.0.%pnpinfo: vendor=0x8086 device=0x10f0 subvendor=0x8086 
subdevice=0x0036 class=0x02

dev.em.0.%parent: pci0
dev.em.0.nvm: -1
dev.em.0.rx_int_delay: 0
dev.em.0.tx_int_delay: 66
dev.em.0.rx_abs_int_delay: 66
dev.em.0.tx_abs_int_delay: 66
dev.em.0.rx_processing_limit: 100
dev.em.0.link_irq: 0
dev.em.0.mbuf_alloc_fail: 0
dev.em.0.cluster_alloc_fail: 0
dev.em.0.dropped: 0
dev.em.0.tx_dma_fail: 0
dev.em.0.rx_overruns: 0
dev.em.0.watchdog_timeouts: 0
dev.em.0.device_control: 1074790976
dev.em.0.rx_control: 67141634
dev.em.0.fc_high_water: 8192
dev.em.0.fc_low_water: 6692
dev.em.0.queue0.txd_head: 15
dev.em.0.queue0.txd_tail: 17
dev.em.0.queue0.tx_irq: 0
dev.em.0.queue0.no_desc_avail: 0
dev.em.0.queue0.rxd_head: 843
dev.em.0.queue0.rxd_tail: 842
dev.em.0.queue0.rx_irq: 0
dev.em.0.mac_stats.excess_coll: 0
dev.em.0.mac_stats.single_coll: 0
dev.em.0.mac_stats.multiple_coll: 0
dev.em.0.mac_stats.late_coll: 0
dev.em.0.mac_stats.collision_count: 0
dev.em.0.mac_stats.symbol_errors: 0
dev.em.0.mac_stats.sequence_errors: 0
dev.em.0.mac_stats.defer_count: 0
dev.em.0.mac_stats.missed_packets: 0
dev.em.0.mac_stats.recv_no_buff: 0
dev.em.0.mac_stats.recv_undersize: 0
dev.em.0.mac_stats.recv_fragmented: 0
dev.em.0.mac_stats.recv_oversize: 0
dev.em.0.mac_stats.recv_jabber: 0
dev.em.0.mac_stats.recv_errs: 0
dev.em.0.mac_stats.crc_errs: 0
dev.em.0.mac_stats.alignment_errs: 0
dev.em.0.mac_stats.coll_ext_errs: 0
dev.em.0.mac_stats.xon_recvd: 80
dev.em.0.mac_stats.xon_txd: 0
dev.em.0.mac_stats.xoff_recvd: 82
dev.em.0.mac_stats.xoff_txd: 0
dev.em.0.mac_stats.total_pkts_recvd: 35697
dev.em.0.mac_stats.good_pkts_recvd: 35535
dev.em.0.mac_stats.bcast_pkts_recvd: 231
dev.em.0.mac_stats.mcast_pkts_recvd: 85
dev.em.0.mac_stats.rx_frames_64: 0
dev.em.0.mac_stats.rx_frames_65_127: 0
dev.em.0.mac_stats.rx_frames_128_255: 0
dev.em.0.mac_stats.rx_frames_256_511: 0
dev.em.0.mac_stats.rx_frames_512_1023: 0
dev.em.0.mac_stats.rx_frames_1024_1522: 0
dev.em.0.mac_stats.good_octets_recvd: 14878015
dev.em.0.mac_stats.good_octets_txd: 14051783
dev.em.0.mac_stats.total_pkts_txd: 45313
dev.em.0.mac_stats.good_pkts_txd: 45313
dev.em.0.mac_stats.bcast_pkts_txd: 3
dev.em.0.mac_stats.mcast_pkts_txd: 5
dev.em.0.mac_stats.tx_frames_64: 0
dev.em.0.mac_stats.tx_frames_65_127: 0
dev.em.0.mac_stats.tx_frames_128_255: 0
dev.em.0.mac_stats.tx_frames_256_511: 0
dev.em.0.mac_stats.tx_frames_512_1023: 0
dev.em.0.mac_stats.tx_frames_1024_1522: 0
dev.em.0.mac_stats.tso_txd: 2788
dev.em.0.mac_stats.tso_ctx_fail: 0
dev.em.0.interrupts.asserts: 48733
dev.em.0.interrupts.rx_pkt_timer: 0
dev.em.0.interrupts.rx_abs_timer: 0
dev.em.0.interrupts.tx_pkt_timer: 0
dev.em.0.interrupts.tx_abs_timer: 0
dev.em.0.interrupts.tx_queue_empty: 0
dev.em.0.interrupts.tx_queue_min_thresh: 0
dev.em.0.interrupts.rx_desc_min_thresh: 0
dev.em.0.interrupts.rx_overrun: 0
dev.em.0.wake: 0



At 08:00 PM 9/26/2010, Jack Vogel wrote:

The system I've had stress tests running on has 82574 LOMs, so I hope it
will solve the problem, will see tomorrow morning at how things have held
up...

Jack


On Sun, Sep 26, 2010 at 4:43 PM, Mike Tancsa m...@sentex.net wrote:

At 06:19 PM 9/26/2010, Jack Vogel wrote:
Your em1 is using MSI, not MSI-X, and thus can't have multiple queues. I'm
not sure what's broken from what you show here. I will try to get the new
driver out shortly for you 
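
(For what it's worth, a quick way to see which interrupt mode the driver ended
up with on a given box is to check the probe messages and the interrupt list;
with MSI-X and multiple queues there is more than one vector per interface:)

dmesg | grep -i 'em0.*MSI'
vmstat -i | grep em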

RE: if_rtdel: error 47 (netgraph or mpd issue?)

2010-10-02 Thread Mike Tancsa


FYI,
I disabled IPv6 in mpd as well as set ipv6_enable=NO, and
the box has been stable for 2 weeks now.  Previously, it would crash
every 5 days or so.  Something in inet6 or mpd?


---Mike


At 01:59 PM 9/17/2010, Mike Tancsa wrote:

At 12:51 PM 9/10/2010, Mike Tancsa wrote:



FYI, I enabled witness in the kernel and am seeing the following


uma_zalloc_arg: zone 128 with the following non-sleepable locks held:
exclusive rw ifnet_rw (ifnet_rw) r = 0 (0xc0b56ec4) locked @ 
/usr/src/sys/net/if.c:419



Hi,
Another crash. I had it break to the serial debugger this time


Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x24
fault code  = supervisor read, page not present
instruction pointer = 0x20:0xc64c79e4
stack pointer   = 0x28:0xe7c84864
frame pointer   = 0x28:0xe7c84a9c
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 1280 (mpd5)
[thread pid 1280 tid 100096 ]
Stopped at  ng_path2noderef+0x174:  testb   $0x1,0x24(%esi)
db bt
Tracing pid 1280 tid 100096 td 0xc58f7780
ng_path2noderef(cace4b80,cb0a5350,e7c84ab8,e7c84ab4,0,...) at 
ng_path2noderef+0x174
ng_address_path(cace4b80,c64d4400,cb0a5350,0,28885ba0,...) at 
ng_address_path+0x40

ngc_send(cb66db44,0,cb2f4500,cba946f0,0,...) at ngc_send+0x182
sosend_generic(cb66db44,cba946f0,e7c84bec,0,0,...) at sosend_generic+0x50d
sosend(cb66db44,cba946f0,e7c84bec,0,0,...) at sosend+0x3f
kern_sendit(c58f7780,8d,e7c84c60,0,0,...) at kern_sendit+0x107
sendit(0,cba946f0,7,e7c84c7c,1,...) at sendit+0xb1
sendto(c58f7780,e7c84cf8,c093d225,c091bcfe,282,...) at sendto+0x48
syscall(e7c84d38) at syscall+0x1da
Xint0x80_syscall() at Xint0x80_syscall+0x21
--- syscall (133, FreeBSD ELF32, sendto), eip = 0x284b13c7, esp = 
0xbf9fe4cc, ebp = 0xbf9fe4f8 ---

db where
Tracing pid 1280 tid 100096 td 0xc58f7780
ng_path2noderef(cace4b80,cb0a5350,e7c84ab8,e7c84ab4,0,...) at 
ng_path2noderef+0x174
ng_address_path(cace4b80,c64d4400,cb0a5350,0,28885ba0,...) at 
ng_address_path+0x40

ngc_send(cb66db44,0,cb2f4500,cba946f0,0,...) at ngc_send+0x182
sosend_generic(cb66db44,cba946f0,e7c84bec,0,0,...) at sosend_generic+0x50d
sosend(cb66db44,cba946f0,e7c84bec,0,0,...) at sosend+0x3f
kern_sendit(c58f7780,8d,e7c84c60,0,0,...) at kern_sendit+0x107
sendit(0,cba946f0,7,e7c84c7c,1,...) at sendit+0xb1
sendto(c58f7780,e7c84cf8,c093d225,c091bcfe,282,...) at sendto+0x48
syscall(e7c84d38) at syscall+0x1da
Xint0x80_syscall() at Xint0x80_syscall+0x21
--- syscall (133, FreeBSD ELF32, sendto), eip = 0x284b13c7, esp = 
0xbf9fe4cc, ebp = 0xbf9fe4f8 ---

db show locks
exclusive sx so_snd_sx (so_snd_sx) r = 0 (0xcb66dc64) locked @ 
/usr/src/sys/kern/uipc_sockbuf.c:148

db show alllocks
Process 1928 (sshd) thread 0xc6402a00 (100094)
exclusive sx so_rcv_sx (so_rcv_sx) r = 0 (0xc669a898) locked @ 
/usr/src/sys/kern/uipc_sockbuf.c:148

Process 1281 (ng_queue) thread 0xc58f6a00 (100057)
shared rw radix node head (radix node head) r = 0 (0xc56e1580) 
locked @ /usr/src/sys/net/route.c:362

Process 1280 (mpd5) thread 0xc58f7780 (100096)
exclusive sx so_snd_sx (so_snd_sx) r = 0 (0xcb66dc64) locked @ 
/usr/src/sys/kern/uipc_sockbuf.c:148

db call doadump()
Physical memory: 2032 MB
Dumping 274 MB: 259 243 227 211 195 179 163 147 131 115 99 83 67 51 35 19 3
Dump complete




panic:

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-marcel-freebsd...

Unread portion of the kernel message buffer:


Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x24
fault code  = supervisor read, page not present
instruction pointer = 0x20:0xc64c79e4
stack pointer   = 0x28:0xe7c84864
frame pointer   = 0x28:0xe7c84a9c
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 1280 (mpd5)
Physical memory: 2032 MB
Dumping 274 MB: 259 243 227 211 195 179 163 147 131 115 99 83 67 51 35 19 3

#0  doadump () at pcpu.h:231
231 pcpu.h: No such file or directory.
in pcpu.h
(kgdb) #0  doadump () at pcpu.h:231
#1  0xc04a5899 in db_fncall (dummy1=1, dummy2=0, dummy3=-1061510048,
dummy4=0xe7c84600 ) at /usr/src/sys/ddb/db_command.c:548
#2  0xc04a5c91 in db_command (last_cmdp=0xc09cf71c, cmd_table=0x0, dopager=1)
at /usr/src/sys/ddb/db_command.c:445
#3  0xc04a5dea in db_command_loop () at /usr/src/sys/ddb/db_command.c:498
#4  0xc04a7c6d 

Re: MySQL performance concern

2010-10-02 Thread Steven Hartland

When you say similar hardware, what's the actual spec?

What do you have set in my.cnf?

What config options are you using for zfs?

- Original Message - 
From: Rumen Telbizov telbi...@gmail.com

To: freebsd-stable@freebsd.org
Sent: Saturday, October 02, 2010 8:06 AM
Subject: MySQL performance concern



Hello everyone,

I am experimenting with MySQL running on FreeBSD and comparing with another
(older) setup running on a Linux box.
My results show that performance on Linux is significantly better than
FreeBSD although the hardware is weaker.
I'd appreciate your comments and ideas.

Here's the setup:

1) FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a
SuperMicro machine with 2 x Dual Core
Xeon E5502 1.87Ghz ; 4 x SAS 15K in RAID10 setup under ZFS (two mirrored
pairs) and 2 x SSD X25-E partitioned
for: 8G for ZIL and the rest for L2ARC; 16G ram with 8 of them given to
mysql and tons of free.

2) Linux Gentoo with 3 SATA disks in hardware RAID5 with similar
cpu/motherboard and same memory size.

The sole application that runs is a python script which inserts a batch of
lines at a time. Only myisam is used as a format.
Here's the problem: On the Linux box it manages to push around
*5800*inserts/second while on the FreeBSD box
it's only *4000/*second.

MySQL version is 5.1.51

During this load the disk subsystem on FreeBSD is pretty much idle (both the
SSDs and the SAS disks). CPU utilization
contributed to mysqld is only around 30%. So I am clearly heavily
under-utilizing the hardware.
Linuxthreads support for 64bit architectures is not available so I couldn't
try that but aside from that I tried recompiling
mysql with all the different Makefile options available without any effect.
Changing the recordsize in zfs to 8K doesn't make any difference.
Tried percona binary without any luck.

Let me know what additional information would be useful and I'll provide it
here.

Thank you in advance for your comments and suggestions.

Cheers,
Rumen Telbizov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org




This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. 


In the event of misdirection, illegible or incomplete transmission please 
telephone +44 845 868 1337
or return the E.mail to postmas...@multiplay.co.uk.

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: device names changes for adX.

2010-10-02 Thread Matthew Seaman
On 02/10/2010 15:29:49, Peter Ankerstål wrote:

 Peter Ankerstål wrote:

 When I installed FreeBSD 8.1-RELEASE (freebsd-update) the adX devices 
 changed index number and
 the machine obviously didnt boot. Due to this I hesitate to install 8.1 on 
 my servers remote. How do I know
 if and to what the devices will change?

 label your filesystems and mount them by label rather than by device name. 
 see

 man glabel

 Thanks, I may try that. But how will this affect ZFS raidz set up to use 
 ad-drives?
 
 Like this:
 
   tankONLINE   0 0 0
 raidz1ONLINE   0 0 0
   ad10s2  ONLINE   0 0 0
   ad12ONLINE   0 0 0
   ad14ONLINE   0 0 0
   ad16ONLINE   0 0 0

It actually shouldn't matter.  ZFS writes metadata about the zpools,
zdevs etc. it knows about onto the drives, and if it can read the drive
and see the metadata, it can reconstruct itself.  You can take disks out
of a ZFS setup, shuffle them, stick them back into the wrong slots, and
ZFS will still work.  Similarly, you can use glabel to name the disks,
and switch to using that, and everything should still work.
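
In the worst case an export/import is all that should be needed, since ZFS
re-finds its member disks from the labels it wrote on them (pool name as in
your example):

zpool export tank
zpool import tank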

In fact, if you glabel the disks while ZFS is still active, it should
instantly recognise all the new names and update the 'zpool status'
output on the fly.  Actually, that last is probably the right way to do
things -- you'll update the zpool.cache that way, which means that ZFS
will come up without any remedial action after reboot.

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





Re: device names changes for adX.

2010-10-02 Thread Bruce Cran
On Sat, 02 Oct 2010 16:13:28 +0100
Matthew Seaman m.sea...@infracaninophile.co.uk wrote:

 In fact, if you glabel the disks while ZFS is still active, it should
 instantly recognise all the new names and update the 'zpool status'
 output on the fly.  Actually, that last is probably the right way to
 do things -- you'll update the zpool.cache that way, which means that
 ZFS will come up without any remedial action after reboot.

Since glabel writes metadata to disk, won't doing this on a disk with a
filesystem corrupt something?

-- 
Bruce Cran
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Broken SASL/Kerberos authentication: openldap client GSSAPI authentication segfaults on FreeBSD 8.1 Release too

2010-10-02 Thread Martin Schweizer
Hello Jeremy

Am Sat, Oct 02, 2010 at 07:11:46AM -0700 Jeremy Chadwick schrieb:
 On Sat, Oct 02, 2010 at 03:11:07PM +0200, Martin Schweizer wrote:
  [...] 
  3. Now  I make buildworld  make buidlkernel  make installkernel and I 
  get the following messages:
  [...] 
  
  cc -fpic -DPIC -O2 -pipe  
  -I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi 
  -I/usr/src/kerberos5/lib/libgssapi/../../../crypto/heimdal/lib/gssapi/krb5 
  -I/usr/src/kerb
  make: don't know how to make /usr/obj/usr/src/tmp/usr/lib/libpthread.a. Stop
  *** Error code 2
  Stop in /usr/src.
  
  What I'm doing wrong? 
 
 Did you specify any -j flags during your make buildworld (ex. make
 -j2 buildworld)?
 
 If so, please remove them and restart the build.  Then you will see
 where the actual compile/make error happens.  From the above output, it
 doesn't look like it's related to the Kerberos or libgssapi stuff.

No, I did not use any flags when I started make buildworld.


Regards,
--
Martin Schweizer
off...@pc-service.ch

PC-Service M. Schweizer GmbH; Bannholzstrasse 6; CH-8608 Bubikon
Tel. +41 55 243 30 00; Fax: +41 55 243 33 22

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


utmp.h exists or not in RELENG_8?

2010-10-02 Thread Jeremy Messenger
My system is RELENG_8 and I checked it out via csup today. It shows
that utmp.h still exists in RELENG_8. But then I saw this PR:

http://www.freebsd.org/cgi/query-pr.cgi?pr=ports/149945

I decided to check
http://www.freebsd.org/cgi/cvsweb.cgi/src/include/?only_with_tag=RELENG_8
... It shows that utmp.h has been removed. But
http://sources.freebsd.org/RELENG_8/src/include/ shows a different
story, as it still exists there. I am confused... Is it supposed to have
been deleted in CVS when the SVN->CVS export was done? Or what? I don't
have svn installed on my system at the moment, so I can't check it now.

Please add me in the CC as I am not in the list.

Cheers,
Mezz


-- 
mezz.free...@gmail.com - m...@freebsd.org
FreeBSD GNOME Team
http://www.FreeBSD.org/gnome/ - gn...@freebsd.org
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: utmp.h exists or not in RELENG_8?

2010-10-02 Thread Jilles Tjoelker
On Sat, Oct 02, 2010 at 12:22:22PM -0500, Jeremy Messenger wrote:
 My system is RELENG_8 and I have checkout by via csup today. It shows
 that utmp.h still exists in RELENG_8. But when I see this PR:

 http://www.freebsd.org/cgi/query-pr.cgi?pr=ports/149945

 I have decided to check in the
 http://www.freebsd.org/cgi/cvsweb.cgi/src/include/?only_with_tag=RELENG_8
 ... It shows that utmp.h has been removed. But in the
 http://sources.freebsd.org/RELENG_8/src/include/ shows a different
 story as it exists. I am confusing... Is it supposed to be deleted in
 CVS when it did the SVN-CVS? Or what? I don't have svn installed in
 my system at the moment, so can't check it now.

utmp.h has been removed in HEAD (9.x) but is still present in 8.x and
earlier branches. It looks like cvsweb is buggy in this area.

The build error in ports/149945 may be caused by a stray utmpx related
file found by the configure process. Partly because the various unix
variant developers have made a mess of utmp/utmpx, the code to use it is
rather fragile.

-- 
Jilles Tjoelker
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: utmp.h exists or not in RELENG_8?

2010-10-02 Thread Sergey Kandaurov
On 2 October 2010 21:22, Jeremy Messenger mezz.free...@gmail.com wrote:
 My system is RELENG_8 and I have checkout by via csup today. It shows
 that utmp.h still exists in RELENG_8. But when I see this PR:

 http://www.freebsd.org/cgi/query-pr.cgi?pr=ports/149945

 I have decided to check in the
 http://www.freebsd.org/cgi/cvsweb.cgi/src/include/?only_with_tag=RELENG_8
 ... It shows that utmp.h has been removed. But in the
 http://sources.freebsd.org/RELENG_8/src/include/ shows a different
 story as it exists. I am confusing... Is it supposed to be deleted in
 CVS when it did the SVN-CVS? Or what? I don't have svn installed in
 my system at the moment, so can't check it now.

 Please add me in the CC as I am not in the list.


I suspect cvsweb.cgi handles such a case incorrectly, i.e. when a file
is removed in the MAIN branch but still exists in BRANCH_X, passing
only_with_tag=RELENG_X will show that file in the Attic.

-- 
wbr,
pluknet
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


is there a bug in AWK on 6.x and 7.x (fixed in 8.x)?

2010-10-02 Thread Miroslav Lachman
I think there is a bug in AWK in the base of FreeBSD 6.x and 7.x (tested on
6.4 i386 and 7.3 i386).


I have this simple test case, where I want 2 columns from a GeoIP CSV file:

awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv

It should produce output like this:

# awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

(above is taken from FreeBSD 8.1 i386)

On FreeBSD 6.4 and 7.3 it results in broken first line:

awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0,1.7.255.255,16777216,17301503,AU,Australia-
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

There are no errors in the CSV file; it doesn't matter if I delete the
affected first line from the file.


It is reproducible with handmade file:

# cat test.csv
1.9.0.0,1.9.255.255,17367040,17432575,MY,Malaysia
1.10.10.0,1.10.10.255,17435136,17435391,AU,Australia
1.11.0.0,1.11.255.255,17498112,17563647,KR,Korea, Republic of
1.12.0.0,1.15.255.255,17563648,17825791,CN,China
1.16.0.0,1.19.255.255,17825792,18087935,KR,Korea, Republic of
1.21.0.0,1.21.255.255,18153472,18219007,JP,Japan


# awk 'FS="," { print $1"-"$2 }' test.csv
1.9.0.0,1.9.255.255,17367040,17432575,MY,Malaysia-
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255
1.16.0.0-1.19.255.255
1.21.0.0-1.21.255.255


As it works in 8.1, can it be fixed in 7-STABLE?
(I don't know if it was purposely fixed or if it is a coincidence of the newer
version of AWK in 8.x.)


Should I file PR for it?

Miroslav Lachman
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: MySQL performance concern

2010-10-02 Thread Rumen Telbizov
Hello everyone,

Here's the requested information below:

FreeBSD mysql 5.1.51:

my.cnf:
skip-external-locking
key_buffer_size = 8192M
max_allowed_packet = 16M
table_open_cache = 2048
sort_buffer_size = 64M
read_buffer_size = 8M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 256M
thread_cache_size = 64
query_cache_size = 32M
thread_concurrency = 8
max_heap_table_size = 6G

hardware:
FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a
SuperMicro machine with X8DTU motherboard
and 2 x Dual Core Xeon E5502 1.87Ghz ; 4 x SAS 15K in RAID10 setup under ZFS
(two mirrored pairs) and 2 x SSD X25-E partitioned
for: 8G for ZIL and the rest for L2ARC; 16G RAM.  Disk controller is LSI 4Hi
in IT (Initiator Target) mode.

-- Linux Gentoo (2.6.18-164.10.1.el5.028stab067.4) mysql 5.1.50 --

my.cnf:
skip-external-locking
key_buffer  = 4G
max_heap_table_size = 6G
max_allowed_packet  = 1M
table_cache = 64
sort_buffer_size= 512K
net_buffer_length   = 8K
read_buffer_size= 256K
read_rnd_buffer_size= 512K
myisam_sort_buffer_size = 8M

Linux runs as an OpenVZ VE inside CentOS. It's the only VE and has all the
memory allocated to it.

hardware node:
2 x Xeon Quad E5410 @ 2.33GHz on SuperMicro X7DBU motherboard; 16G RAM; 4
SATA 1T disks in hardware raid 5 attached
to a 3ware controller; NO SSDs

Some other notes:
 * It is indeed a single thread which inserts into MySQL, so yes, it's
only one core which handles the application and another one for MySQL. What
is interesting here, like I mentioned, is that on FreeBSD the mysql process
doesn't get more than 30-40% CPU utilization, so it has a lot of headroom;
gstat also shows 0% disk load.
 * It is exactly the same database schema. In fact it's only one table
that's inserted into heavily. It is a partitioned table with only one HASH
index, which looks something like this: PRIMARY KEY
(`IntField`,`DateField`,`Varchar150Field`) USING HASH. The speed difference
is obvious right from the beginning. I don't have to wait for any data to
accrue to see the degradation, nor for more than 100'000 records to
be processed.
 * The application maintains only 1 local TCP connection to MySQL. They both
run on the same host.
 * As for the ZFS. Here's the pool configuration:

  pool: tank
config:

NAME   STATE READ WRITE CKSUM
tank   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
gpt/tank0  ONLINE   0 0 0
gpt/tank1  ONLINE   0 0 0
  mirror   ONLINE   0 0 0
gpt/tank2  ONLINE   0 0 0
gpt/tank3  ONLINE   0 0 0
logs   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
gpt/zil0   ONLINE   0 0 0
gpt/zil1   ONLINE   0 0 0
cache
  gpt/l2arc0   ONLINE   0 0 0
  gpt/l2arc1   ONLINE   0 0 0

  pool: zroot
config:

NAMESTATE READ WRITE CKSUM
zroot   ONLINE   0 0 0
  mirrorONLINE   0 0 0
gpt/zroot0  ONLINE   0 0 0
gpt/zroot1  ONLINE   0 0 0


zroot is a couple of small partitions from two of the same SAS disks. zil
and l2arc are 8 and 22G partitions from 32G SSDs

I pretty much have no ZFS tuning done, since from what I've found none should
be needed when running 8.1 on a 64-bit machine.
Let me know if you'd like me to experiment with any ...
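
The only knobs I can think of trying so far are dataset properties along these
lines (untested whether they help; the data currently lives directly on tank):

zfs set atime=off tank
zfs set recordsize=8K tank      # already tried above, no visible effect
zfs get recordsize,atime tank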

Some additional information:
# sysctl vm.kmem_size
vm.kmem_size: 5539958784
# sysctl vm.kmem_size_max
vm.kmem_size_max: 329853485875
# sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 4466216960

I think this answers all the questions so far.
Let me know what you think. I might be missing something obvious.

Thank you,
Rumen Telbizov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: utmp.h exists or not in RELENG_8?

2010-10-02 Thread Jeremy Messenger
Thanks all, it's a cvsweb bug then.

Cheers,
Mezz

On Sat, Oct 2, 2010 at 12:22 PM, Jeremy Messenger
mezz.free...@gmail.com wrote:
 My system is RELENG_8 and I have checkout by via csup today. It shows
 that utmp.h still exists in RELENG_8. But when I see this PR:

 http://www.freebsd.org/cgi/query-pr.cgi?pr=ports/149945

 I have decided to check in the
 http://www.freebsd.org/cgi/cvsweb.cgi/src/include/?only_with_tag=RELENG_8
 ... It shows that utmp.h has been removed. But in the
 http://sources.freebsd.org/RELENG_8/src/include/ shows a different
 story as it exists. I am confusing... Is it supposed to be deleted in
 CVS when it did the SVN-CVS? Or what? I don't have svn installed in
 my system at the moment, so can't check it now.

 Please add me in the CC as I am not in the list.

 Cheers,
 Mezz


 --
 mezz.free...@gmail.com - m...@freebsd.org
 FreeBSD GNOME Team
 http://www.FreeBSD.org/gnome/ - gn...@freebsd.org




-- 
mezz.free...@gmail.com - m...@freebsd.org
FreeBSD GNOME Team
http://www.FreeBSD.org/gnome/ - gn...@freebsd.org
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: is there a bug in AWK on 6.x and 7.x (fixed in 8.x)?

2010-10-02 Thread Miroslav Lachman

Damian Weber wrote:



On Sat, 2 Oct 2010, Miroslav Lachman wrote:


Date: Sat, 02 Oct 2010 21:58:27 +0200
From: Miroslav Lachman 000.f...@quip.cz
To: freebsd-stable freebsd-stable@freebsd.org
Subject: is there a bug in AWK on 6.x and 7.x (fixed in 8.x)?

I think there is a bug in AWK in base of FreeBSD 6.x and 7.x (tested on 6.4
i386 and 7.3 i386)

I have this simple test case, where I want 2 columns from GeoIP CSV file:

awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv

It should produce output like this:

# awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

(above is taken from FreeBSD 8.1 i386)

On FreeBSD 6.4 and 7.3 it results in broken first line:

awk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0,1.7.255.255,16777216,17301503,AU,Australia-
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255



Are you sure the command above contains a valid variable assignment?


I am not an AWK expert, so maybe you are right. I just found this
difference between 7.x and 8.x.


But if it works for other lines, why doesn't it work for the first line too?

Anyway, thank you for working examples, I will use them!

Another working example from 6.4 is:

awk -F , '{ print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255


The following works on both 7.3-STABLE and 8.1-STABLE

$ awk -v FS=, '{ print $1"-"$2; }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255


The following works as well

$ awk '{ print $1"-"$2; }' FS=, GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

Or, using a BEGIN section for assignment...

$ awk 'BEGIN {FS=","} { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

As a side note, gawk shows the following output on 7-STABLE and 8-STABLE
$ gawk 'FS="," { print $1"-"$2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0,1.7.255.255,16777216,17301503,AU,Australia-
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

... which means the new behaviour of awk on 8-STABLE seems to break
compatibility with gawk at that point.

-- Damian

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: is there a bug in AWK on 6.x and 7.x (fixed in 8.x)?

2010-10-02 Thread Damian Weber


On Sat, 2 Oct 2010, Miroslav Lachman wrote:

 Date: Sat, 02 Oct 2010 21:58:27 +0200
 From: Miroslav Lachman 000.f...@quip.cz
 To: freebsd-stable freebsd-stable@freebsd.org
 Subject: is there a bug in AWK on 6.x and 7.x (fixed in 8.x)?
 
 I think there is a bug in AWK in base of FreeBSD 6.x and 7.x (tested on 6.4
 i386 and 7.3 i386)
 
 I have this simple test case, where I want 2 columns from GeoIP CSV file:
 
 awk 'FS="," { print $1 "-" $2 }' GeoIPCountryWhois.csv
 
 It should produce output like this:
 
 # awk 'FS="," { print $1 "-" $2 }' GeoIPCountryWhois.csv | head -n 5
 1.0.0.0-1.7.255.255
 1.9.0.0-1.9.255.255
 1.10.10.0-1.10.10.255
 1.11.0.0-1.11.255.255
 1.12.0.0-1.15.255.255
 
 (above is taken from FreeBSD 8.1 i386)
 
 On FreeBSD 6.4 and 7.3 it results in broken first line:
 
 awk 'FS="," { print $1 "-" $2 }' GeoIPCountryWhois.csv | head -n 5
 1.0.0.0,1.7.255.255,16777216,17301503,AU,Australia-
 1.9.0.0-1.9.255.255
 1.10.10.0-1.10.10.255
 1.11.0.0-1.11.255.255
 1.12.0.0-1.15.255.255
 

Are you sure the command above contains a valid variable assignment?

The following works on both 7.3-STABLE and 8.1-STABLE

$ awk -v FS=, '{ print $1 "-" $2; }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255


The following works as well

$ awk '{ print $1 "-" $2; }' FS=, GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

Or, using a BEGIN section for assignment...

$ awk 'BEGIN {FS=","} { print $1 "-" $2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0-1.7.255.255
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

As a side note, gawk shows the following output on 7-STABLE and 8-STABLE
$ gawk 'FS="," { print $1 "-" $2 }' GeoIPCountryWhois.csv | head -n 5
1.0.0.0,1.7.255.255,16777216,17301503,AU,Australia-
1.9.0.0-1.9.255.255
1.10.10.0-1.10.10.255
1.11.0.0-1.11.255.255
1.12.0.0-1.15.255.255

... which means the new behaviour of awk on 8-STABLE seems to break 
compatibility with gawk at that point.
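
For what it's worth, here is a tiny, self-contained illustration of why the
first record comes out unsplit (a sketch with made-up sample input; nothing
here depends on the GeoIP file):

$ printf 'a,b\nc,d\n' | awk 'FS="," { print $1 "-" $2 }'
# gawk (and, per the above, the 6.x/7.x base awk) prints "a,b-" and then
# "c-d": a pattern like FS="," is evaluated only after the current record
# has already been split, so record 1 still uses the default whitespace FS.
# The portable forms set FS before the first record is split:
$ printf 'a,b\nc,d\n' | awk -F, '{ print $1 "-" $2 }'
$ printf 'a,b\nc,d\n' | awk 'BEGIN { FS = "," } { print $1 "-" $2 }'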

-- Damian

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: out of HDD space - zfs degraded

2010-10-02 Thread Dan Langille

On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:

Overnight I was running a zfs send | zfs receive (both within the
same system / zpool).  The system ran out of space, a drive went off
line, and the system is degraded.

This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
23:43:48 EDT 2010.

The following logs are also available at
http://www.langille.org/tmp/zfs-space.txt - no line wrapping

This is what was running:

# time zfs send storage/bac...@transfer | mbuffer | zfs receive
storage/compressed/bacula-mbuffer
in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100% full
cannot receive new filesystem stream: out of space
mbuffer: error: outputThread: error writing to <stdout> at offset
0x395917c4000: Broken pipe

summary: 3670 GByte in 10 h 40 min 97.8 MB/s
mbuffer: warning: error during output to <stdout>: Broken pipe
warning: cannot send 'storage/bac...@transfer': Broken pipe

real    640m48.423s
user    8m52.660s
sys     211m40.862s


Looking in the logs, I see this:

Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=270336 size=8192 error=6

Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
cache failed
Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry

Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6

$ zpool status
   pool: storage
  state: DEGRADED
  scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
config:

 NAME STATE READ WRITE CKSUM
 storage  DEGRADED 0 0 0
   raidz2 DEGRADED 0 0 0
 gpt/disk01-live  ONLINE   0 0 0
 gpt/disk02-live  ONLINE   0 0 0
 gpt/disk03-live  ONLINE   0 0 0
 gpt/disk04-live  ONLINE   0 0 0
 gpt/disk05-live  ONLINE   0 0 0
 gpt/disk06-live  REMOVED  0 0 0
 gpt/disk07-live  ONLINE   0 0 0

$ zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
storage6.97T  1.91T  1.75G  /storage
storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql

$ sudo camcontrol devlist
Password:
Hitachi HDS722020ALA330 JKAOA28A   at scbus2 target 0 lun 0 (pass1,ada1)
Hitachi HDS722020ALA330 JKAOA28A   at scbus3 target 0 lun 0 (pass2,ada2)
Hitachi HDS722020ALA330 JKAOA28A   at scbus4 target 0 lun 0 (pass3,ada3)
Hitachi HDS722020ALA330 JKAOA28A   at scbus5 target 0 lun 0 (pass4,ada4)
Hitachi HDS722020ALA330 JKAOA28A   at scbus6 target 0 lun 0 (pass5,ada5)
Hitachi HDS722020ALA330 JKAOA28A   at scbus7 target 0 lun 0 (pass6,ada6)
ST380815AS 4.AAB  at scbus8 target 0 lun 0 (pass7,ada7)
TSSTcorp CDDVDW SH-S223C SB01 at scbus9 target 0 lun 0 (cd0,pass8)
WDC WD1600AAJS-75M0A0 02.03E02at scbus10 target 0 lun 0 (pass9,ada8)

I'm not yet sure if the drive is fully dead or not.  This is not a
hot-swap box.


It looks to me like the disk labelled gpt/disk06-live literally stopped
responding to commands.  The errors you see are coming from the OS and
the siis(4) controller, and both indicate the actual hard 
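
If the drive does respond again after a reseat or power-cycle, a rough sketch
of what I would try (assumptions: the GPT label comes back as gpt/disk06-live,
the device re-attaches as ada0, and smartctl is installed from
sysutils/smartmontools):

# Re-probe the bus and see whether the device re-attaches:
camcontrol rescan all
camcontrol devlist

# If it is back, check its SMART health before trusting it:
smartctl -a /dev/ada0

# Then bring the vdev back online and let the pool resilver:
zpool online storage gpt/disk06-live
zpool status storage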

Re: MySQL performance concern

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 01:18:20PM -0700, Rumen Telbizov wrote:
 Hello everyone,
 
 Here's the requested information below:

The tunings between your Linux and FreeBSD instances differ severely,
and some of the variables don't even exist any longer (example:
table_cache is now known as table_open_cache as of MySQL 5.1.3, and
probably key_buffer vs. key_buffer_size too).

Can you please rule out MySQL tunings being responsible for the problem?
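
One quick way to see what each server actually ended up running with (just a
sketch -- adjust the credentials/socket to whatever you use) is to dump the
live variables on both boxes and compare them:

# Values the running server actually accepted (renamed or ignored my.cnf
# entries show up here with their defaults):
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
                     SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';"

# Or capture everything on each box and diff the two files:
mysql -u root -p -N -e "SHOW GLOBAL VARIABLES" | sort > vars-`hostname`.txt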

Here are your configuration bits, more sanely written:

                          FreeBSD   Linux
                          --------  --------
MySQL version             5.1.51    5.1.50
                          --------  --------

my.cnf tuning             FreeBSD   Linux
------------------------  --------  --------
key_buffer_size           8 GB
key_buffer                          4 GB
max_allowed_packet        16 MB     1 MB
table_open_cache          2048
table_cache                         64
sort_buffer_size          64 MB     512 KB
read_buffer_size          8 MB      256 KB
read_rnd_buffer_size      16 MB     512 KB
net_buffer_length                   8 KB
myisam_sort_buffer_size   256 MB    8 MB
thread_cache_size         64
query_cache_size          32 MB
thread_concurrency        8
max_heap_table_size       6 GB      6 GB
------------------------  --------  --------

Can you also please provide top output for the mysqld process on
FreeBSD?

  * As for the ZFS. Here's the pool configuration:

If you move things to UFS2, does the problem disappear?

You might not be seeing any disk I/O on the filesystem with gstat
because ZFS ARC could have all of the data in it.
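
A rough way to confirm that (a sketch; the counters are cumulative since
boot, so sample them twice a few seconds apart during the insert run):

# Current ARC size versus the configured ceiling:
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

# Hit/miss counters -- if hits climb while gstat stays idle, the working
# set is being served from the ARC rather than the disks:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses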

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: out of HDD space - zfs degraded

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 06:09:25PM -0400, Dan Langille wrote:
 On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:
 On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:
 Overnight I was running a zfs send | zfs receive (both within the
 same system / zpool).  The system ran out of space, a drive went off
 line, and the system is degraded.
 
 This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
 23:43:48 EDT 2010.
 
 The following logs are also available at
 http://www.langille.org/tmp/zfs-space.txt- no line wrapping
 
 This is what was running:
 
 # time zfs send storage/bac...@transfer | mbuffer | zfs receive
 storage/compressed/bacula-mbuffer
 in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
 fullcannot receive new filesystem stream: out of space
 mbuffer: error: outputThread: error writing tostdout  at offset
 0x395917c4000: Broken pipe
 
 summary: 3670 GByte in 10 h 40 min 97.8 MB/s
 mbuffer: warning: error during output tostdout: Broken pipe
 warning: cannot send 'storage/bac...@transfer': Broken pipe
 
 real640m48.423s
 user8m52.660s
 sys 211m40.862s
 
 
 Looking in the logs, I see this:
 
 Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
 Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=270336 size=8192 error=6
 
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
 cache failed
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry
 
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6
 
 $ zpool status
pool: storage
   state: DEGRADED
   scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
 config:
 
  NAME STATE READ WRITE CKSUM
  storage  DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
  gpt/disk01-live  ONLINE   0 0 0
  gpt/disk02-live  ONLINE   0 0 0
  gpt/disk03-live  ONLINE   0 0 0
  gpt/disk04-live  ONLINE   0 0 0
  gpt/disk05-live  ONLINE   0 0 0
  gpt/disk06-live  REMOVED  0 0 0
  gpt/disk07-live  ONLINE   0 0 0
 
 $ zfs list
 NAMEUSED  AVAIL  REFER  MOUNTPOINT
 storage6.97T  1.91T  1.75G  /storage
 storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
 storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
 storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
 storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql
 
 $ sudo camcontrol devlist
 Password:
 Hitachi HDS722020ALA330 JKAOA28A   at scbus2 target 0 lun 0 (pass1,ada1)
 Hitachi HDS722020ALA330 JKAOA28A   at scbus3 target 0 lun 0 (pass2,ada2)
 Hitachi HDS722020ALA330 JKAOA28A   at scbus4 target 0 lun 0 (pass3,ada3)
 Hitachi HDS722020ALA330 JKAOA28A   at scbus5 target 0 lun 0 (pass4,ada4)
 Hitachi HDS722020ALA330 JKAOA28A   at scbus6 target 0 lun 0 (pass5,ada5)
 Hitachi HDS722020ALA330 JKAOA28A   at scbus7 target 0 lun 0 (pass6,ada6)
 ST380815AS 4.AAB  at scbus8 target 0 lun 0 (pass7,ada7)
 TSSTcorp CDDVDW SH-S223C SB01 at scbus9 target 0 lun 0 (cd0,pass8)
 WDC WD1600AAJS-75M0A0 02.03E02at scbus10 target 0 lun 0 (pass9,ada8)
 
 I'm not yet sure if the drive is fully dead or not.  This is not a
 hot-swap box.
 
 It looks to me like the disk labelled 

Re: out of HDD space - zfs degraded

2010-10-02 Thread Dan Langille

On 10/2/2010 6:36 PM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 06:09:25PM -0400, Dan Langille wrote:

On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:

Overnight I was running a zfs send | zfs receive (both within the
same system / zpool).  The system ran out of space, a drive went off
line, and the system is degraded.

This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
23:43:48 EDT 2010.

The following logs are also available at
http://www.langille.org/tmp/zfs-space.txt- no line wrapping

This is what was running:

# time zfs send storage/bac...@transfer | mbuffer | zfs receive
storage/compressed/bacula-mbuffer
in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
fullcannot receive new filesystem stream: out of space
mbuffer: error: outputThread: error writing tostdout   at offset
0x395917c4000: Broken pipe

summary: 3670 GByte in 10 h 40 min 97.8 MB/s
mbuffer: warning: error during output tostdout: Broken pipe
warning: cannot send 'storage/bac...@transfer': Broken pipe

real640m48.423s
user8m52.660s
sys 211m40.862s


Looking in the logs, I see this:

Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=270336 size=8192 error=6

Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
cache failed
Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry

Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6

$ zpool status
   pool: storage
  state: DEGRADED
  scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
config:

 NAME STATE READ WRITE CKSUM
 storage  DEGRADED 0 0 0
   raidz2 DEGRADED 0 0 0
 gpt/disk01-live  ONLINE   0 0 0
 gpt/disk02-live  ONLINE   0 0 0
 gpt/disk03-live  ONLINE   0 0 0
 gpt/disk04-live  ONLINE   0 0 0
 gpt/disk05-live  ONLINE   0 0 0
 gpt/disk06-live  REMOVED  0 0 0
 gpt/disk07-live  ONLINE   0 0 0

$ zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
storage6.97T  1.91T  1.75G  /storage
storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql

$ sudo camcontrol devlist
Password:
Hitachi HDS722020ALA330 JKAOA28Aat scbus2 target 0 lun 0 (pass1,ada1)
Hitachi HDS722020ALA330 JKAOA28Aat scbus3 target 0 lun 0 (pass2,ada2)
Hitachi HDS722020ALA330 JKAOA28Aat scbus4 target 0 lun 0 (pass3,ada3)
Hitachi HDS722020ALA330 JKAOA28Aat scbus5 target 0 lun 0 (pass4,ada4)
Hitachi HDS722020ALA330 JKAOA28Aat scbus6 target 0 lun 0 (pass5,ada5)
Hitachi HDS722020ALA330 JKAOA28Aat scbus7 target 0 lun 0 (pass6,ada6)
ST380815AS 4.AAB   at scbus8 target 0 lun 0 (pass7,ada7)
TSSTcorp CDDVDW SH-S223C SB01  at scbus9 target 0 lun 0 (cd0,pass8)
WDC WD1600AAJS-75M0A0 02.03E02 at scbus10 target 0 lun 0 (pass9,ada8)

I'm not yet sure if the drive is fully dead or not.  This is not a
hot-swap box.


It looks to me like the disk labelled gpt/disk06-live literally stopped
responding 

Re: out of HDD space - zfs degraded

2010-10-02 Thread Jeremy Chadwick
On Sat, Oct 02, 2010 at 07:23:16PM -0400, Dan Langille wrote:
 On 10/2/2010 6:36 PM, Jeremy Chadwick wrote:
 On Sat, Oct 02, 2010 at 06:09:25PM -0400, Dan Langille wrote:
 On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:
 On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:
 Overnight I was running a zfs send | zfs receive (both within the
 same system / zpool).  The system ran out of space, a drive went off
 line, and the system is degraded.
 
 This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
 23:43:48 EDT 2010.
 
 The following logs are also available at
 http://www.langille.org/tmp/zfs-space.txt- no line wrapping
 
 This is what was running:
 
 # time zfs send storage/bac...@transfer | mbuffer | zfs receive
 storage/compressed/bacula-mbuffer
 in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
 fullcannot receive new filesystem stream: out of space
 mbuffer: error: outputThread: error writing tostdout   at offset
 0x395917c4000: Broken pipe
 
 summary: 3670 GByte in 10 h 40 min 97.8 MB/s
 mbuffer: warning: error during output tostdout: Broken pipe
 warning: cannot send 'storage/bac...@transfer': Broken pipe
 
 real640m48.423s
 user8m52.660s
 sys 211m40.862s
 
 
 Looking in the logs, I see this:
 
 Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
 Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
 Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
 4000 rs 4000 es  sts 801f0040 serr 
 Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=270336 size=8192 error=6
 
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
 cache failed
 Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry
 
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
 Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
 path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6
 
 $ zpool status
pool: storage
   state: DEGRADED
   scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
 config:
 
  NAME STATE READ WRITE CKSUM
  storage  DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
  gpt/disk01-live  ONLINE   0 0 0
  gpt/disk02-live  ONLINE   0 0 0
  gpt/disk03-live  ONLINE   0 0 0
  gpt/disk04-live  ONLINE   0 0 0
  gpt/disk05-live  ONLINE   0 0 0
  gpt/disk06-live  REMOVED  0 0 0
  gpt/disk07-live  ONLINE   0 0 0
 
 $ zfs list
 NAMEUSED  AVAIL  REFER  MOUNTPOINT
 storage6.97T  1.91T  1.75G  /storage
 storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
 storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
 storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
 storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql
 
 $ sudo camcontrol devlist
 Password:
 Hitachi HDS722020ALA330 JKAOA28Aat scbus2 target 0 lun 0 
 (pass1,ada1)
 Hitachi HDS722020ALA330 JKAOA28Aat scbus3 target 0 lun 0 
 (pass2,ada2)
 Hitachi HDS722020ALA330 JKAOA28Aat scbus4 target 0 lun 0 
 (pass3,ada3)
 Hitachi HDS722020ALA330 JKAOA28Aat scbus5 target 0 lun 0 
 (pass4,ada4)
 Hitachi HDS722020ALA330 JKAOA28Aat scbus6 target 0 lun 0 
 (pass5,ada5)
 Hitachi HDS722020ALA330 JKAOA28Aat scbus7 target 0 lun 0 
 (pass6,ada6)
 ST380815AS 4.AAB   at scbus8 target 0 lun 0 (pass7,ada7)
 TSSTcorp CDDVDW SH-S223C SB01  at scbus9 target 0 lun 0 (cd0,pass8)
 WDC WD1600AAJS-75M0A0 02.03E02 at scbus10 target 0 lun 0 
 

Re: MySQL performance concern

2010-10-02 Thread Steven Hartland
Your "similar hardware" specs are hardly similar: an 8-core 2.3GHz box vs. a
4-core 1.8GHz one, according to Intel's CPU comparison:
http://ark.intel.com/Compare.aspx?ids=37092,33080,

If you want to compare you really need to do so on the same hardware or all 
bets are off.
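
If putting both databases on identical hardware isn't practical, a synthetic
insert-only run at least takes the python application out of the picture.
mysqlslap ships with 5.1; a rough sketch (flags may need adjusting, and the
generated schema is mysqlslap's own, not your real table):

# Single-connection synthetic write load against a scratch database:
mysqlslap --user=root --password \
    --concurrency=1 --iterations=3 \
    --auto-generate-sql --auto-generate-sql-load-type=write \
    --number-of-queries=100000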

Regards
Steve
  - Original Message - 
  From: Rumen Telbizov 
  To: Steven Hartland 
  Cc: freebsd-stable@freebsd.org 
  Sent: Saturday, October 02, 2010 9:18 PM
  Subject: Re: MySQL performance concern


  Hello everyone,

  Here's the requested information below:

  FreeBSD mysql 5.1.51:

  my.cnf:
  skip-external-locking
  key_buffer_size = 8192M
  max_allowed_packet = 16M
  table_open_cache = 2048
  sort_buffer_size = 64M
  read_buffer_size = 8M
  read_rnd_buffer_size = 16M
  myisam_sort_buffer_size = 256M
  thread_cache_size = 64
  query_cache_size = 32M
  thread_concurrency = 8
  max_heap_table_size = 6G

  hardware:
  FreeBSD 8.1-STABLE amd64 (Tue Sep 14 15:29:22 PDT 2010) running on a 
SuperMicro machine with X8DTU motherboard
  and 2 x Dual Core Xeon E5502 1.87Ghz ; 4 x SAS 15K in RAID10 setup under ZFS 
(two mirrored pairs) and 2 x SSD X25-E partitioned
  for: 8G for ZIL and the rest for L2ARC; 16G RAM.  Disk controller is LSI 4Hi 
in IT (Initiator Target) mode.

  -- Linux Gentoo (2.6.18-164.10.1.el5.028stab067.4) mysql 5.1.50 --

  my.cnf:
  skip-external-locking
  key_buffer  = 4G
  max_heap_table_size = 6G
  max_allowed_packet  = 1M
  table_cache = 64
  sort_buffer_size= 512K
  net_buffer_length   = 8K
  read_buffer_size= 256K
  read_rnd_buffer_size= 512K
  myisam_sort_buffer_size = 8M

  Linux runs as an OpenVZ VE inside CentOS. It's the only VE and has all the
memory allocated to it.

  hardware node:
  2 x Xeon Quad E5410 @ 2.33GHz on SuperMicro X7DBU motherboard; 16G RAM; 4 
SATA 1T disks in hardware raid 5 attached
  to a 3ware controller; NO SSDs

  Some other notes:
   * It is indeed a single thread which inserts into mysql, so yes, it's only
one core which handles the application and another one for MySQL. What is
interesting here, like I mentioned, is that on FreeBSD the mysql process doesn't
get more than 30-40% CPU utilization, so it has a lot of headroom. gstat also
shows 0% disk load.
   * It is exactly the same database schema. In fact it's only one table that's
inserted into heavily. It is a partitioned table with only one HASH index, which
looks something like this: PRIMARY KEY
(`IntField`,`DateField`,`Varchar150Field`) USING HASH. The speed difference is
obvious right from the beginning. I don't have to wait for any data to accrue
to see a degradation; I don't wait for more than 100,000 records to be
processed.
   * The application maintains only 1 local TCP connection to mysql. They both run
on the same host.
   * As for ZFS, here's the pool configuration:

pool: tank
  config:

  NAME   STATE READ WRITE CKSUM
  tank   ONLINE   0 0 0
mirror   ONLINE   0 0 0
  gpt/tank0  ONLINE   0 0 0
  gpt/tank1  ONLINE   0 0 0
mirror   ONLINE   0 0 0
  gpt/tank2  ONLINE   0 0 0
  gpt/tank3  ONLINE   0 0 0
  logs   ONLINE   0 0 0
mirror   ONLINE   0 0 0
  gpt/zil0   ONLINE   0 0 0
  gpt/zil1   ONLINE   0 0 0
  cache
gpt/l2arc0   ONLINE   0 0 0
gpt/l2arc1   ONLINE   0 0 0

pool: zroot
  config:

  NAMESTATE READ WRITE CKSUM
  zroot   ONLINE   0 0 0
mirrorONLINE   0 0 0
  gpt/zroot0  ONLINE   0 0 0
  gpt/zroot1  ONLINE   0 0 0


  zroot is a couple of small partitions from two of the same SAS disks. The zil and
l2arc devices are 8G and 22G partitions from the 32G SSDs.

  I pretty much have no ZFS tuning done, since from what I've found none should be
needed when running 8.1 on a 64-bit machine.
  Let me know if you'd like me to experiment with any ...

  Some additional information:
  # sysctl vm.kmem_size
  vm.kmem_size: 5539958784
  # sysctl vm.kmem_size_max
  vm.kmem_size_max: 329853485875
  # sysctl vfs.zfs.arc_max
  vfs.zfs.arc_max: 4466216960

  I think this answers all the questions so far.
  Let me know what you think. I might be missing something obvious.

  Thank you,
  Rumen Telbizov



This e.mail is private and confidential between Multiplay (UK) Ltd. and the 
person or entity to whom it is addressed. In the event of misdirection, the 
recipient is prohibited from using, copying, printing or otherwise 
disseminating it or any information contained in it. 

In the event of 

Re: out of HDD space - zfs degraded

2010-10-02 Thread Dan Langille

On 10/2/2010 7:50 PM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 07:23:16PM -0400, Dan Langille wrote:

On 10/2/2010 6:36 PM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 06:09:25PM -0400, Dan Langille wrote:

On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:

Overnight I was running a zfs send | zfs receive (both within the
same system / zpool).  The system ran out of space, a drive went off
line, and the system is degraded.

This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
23:43:48 EDT 2010.

The following logs are also available at
http://www.langille.org/tmp/zfs-space.txt- no line wrapping

This is what was running:

# time zfs send storage/bac...@transfer | mbuffer | zfs receive
storage/compressed/bacula-mbuffer
in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
fullcannot receive new filesystem stream: out of space
mbuffer: error: outputThread: error writing tostdoutat offset
0x395917c4000: Broken pipe

summary: 3670 GByte in 10 h 40 min 97.8 MB/s
mbuffer: warning: error during output tostdout: Broken pipe
warning: cannot send 'storage/bac...@transfer': Broken pipe

real640m48.423s
user8m52.660s
sys 211m40.862s


Looking in the logs, I see this:

Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=270336 size=8192 error=6

Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
cache failed
Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing device entry

Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6

$ zpool status
   pool: storage
  state: DEGRADED
  scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
config:

 NAME STATE READ WRITE CKSUM
 storage  DEGRADED 0 0 0
   raidz2 DEGRADED 0 0 0
 gpt/disk01-live  ONLINE   0 0 0
 gpt/disk02-live  ONLINE   0 0 0
 gpt/disk03-live  ONLINE   0 0 0
 gpt/disk04-live  ONLINE   0 0 0
 gpt/disk05-live  ONLINE   0 0 0
 gpt/disk06-live  REMOVED  0 0 0
 gpt/disk07-live  ONLINE   0 0 0

$ zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
storage6.97T  1.91T  1.75G  /storage
storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
storage/compressed/bacula  2.25T  1.91T  42.7K  /storage/compressed/bacula
storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql

$ sudo camcontrol devlist
Password:
Hitachi HDS722020ALA330 JKAOA28A at scbus2 target 0 lun 0 (pass1,ada1)
Hitachi HDS722020ALA330 JKAOA28A at scbus3 target 0 lun 0 (pass2,ada2)
Hitachi HDS722020ALA330 JKAOA28A at scbus4 target 0 lun 0 (pass3,ada3)
Hitachi HDS722020ALA330 JKAOA28A at scbus5 target 0 lun 0 (pass4,ada4)
Hitachi HDS722020ALA330 JKAOA28A at scbus6 target 0 lun 0 (pass5,ada5)
Hitachi HDS722020ALA330 JKAOA28A at scbus7 target 0 lun 0 (pass6,ada6)
ST380815AS 4.AABat scbus8 target 0 lun 0 (pass7,ada7)
TSSTcorp CDDVDW SH-S223C SB01   at scbus9 target 0 lun 0 (cd0,pass8)
WDC WD1600AAJS-75M0A0 02.03E02  at scbus10 target 0 lun 0 (pass9,ada8)

I'm not yet sure if the drive is fully dead or 

Panic: attempted pmap_enter on 2MB page

2010-10-02 Thread Dave Hayes
What does the above mentioned panic mean? I'm booting from
an mfsroot off of a DVD with a loader.conf like this:

 autoboot_delay="5"
 mfsroot_load="YES"
 mfsroot_type="mfs_root"
 mfsroot_name="/mfsboot"
 vfs.root.mountfrom="ufs:md0"
 vfs.root.mountfrom.options="rw"
 kern.ipc.nmbclusters="32768"
 net.inet.tcp.tcbhashsize="16384"
 vm.pmap.pg_ps_enabled="1"
 vm.kmem_size="2G"
 accf_http_load="YES"
 net.inet.tcp.syncache.hashsize="1024"
 net.inet.tcp.syncache.bucketlimit="100"

This is FreeBSD 8.1-RELEASE amd64 running with the debugger installed
into the kernel. Thanks in advance for any insight provided. :)
-- 
Dave Hayes - Consultant - Altadena CA, USA - d...@jetcafe.org 
 The opinions expressed above are entirely my own 

No one is lazy except in the pursuit of another one's goal.


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: out of HDD space - zfs degraded

2010-10-02 Thread Steve Polyack

 On 10/2/2010 10:04 PM, Dan Langille wrote:

On 10/2/2010 7:50 PM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 07:23:16PM -0400, Dan Langille wrote:

On 10/2/2010 6:36 PM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 06:09:25PM -0400, Dan Langille wrote:

On 10/2/2010 10:19 AM, Jeremy Chadwick wrote:

On Sat, Oct 02, 2010 at 09:43:30AM -0400, Dan Langille wrote:

Overnight I was running a zfs send | zfs receive (both within the
same system / zpool).  The system ran out of space, a drive went 
off

line, and the system is degraded.

This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18
23:43:48 EDT 2010.

The following logs are also available at
http://www.langille.org/tmp/zfs-space.txt- no line wrapping

This is what was running:

# time zfs send storage/bac...@transfer | mbuffer | zfs receive
storage/compressed/bacula-mbuffer
in @  0.0 kB/s, out @  0.0 kB/s, 3670 GB total, buffer 100%
fullcannot receive new filesystem stream: out of space
mbuffer: error: outputThread: error writing tostdoutat offset
0x395917c4000: Broken pipe

summary: 3670 GByte in 10 h 40 min 97.8 MB/s
mbuffer: warning: error during output tostdout: Broken pipe
warning: cannot send 'storage/bac...@transfer': Broken pipe

real640m48.423s
user8m52.660s
sys 211m40.862s


Looking in the logs, I see this:

Oct  2 00:50:53 kraken kernel: (ada0:siisch0:0:0:0): lost device
Oct  2 00:50:54 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:54 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:54 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:55 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:55 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:55 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:56 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:56 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:56 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:57 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:57 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:57 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:58 kraken kernel: siisch0: Timeout on slot 30
Oct  2 00:50:58 kraken kernel: siisch0: siis_timeout is 0004 ss
4000 rs 4000 es  sts 801f0040 serr 
Oct  2 00:50:58 kraken kernel: siisch0: Error while READ LOG EXT
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=270336 size=8192 error=6

Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): Synchronize
cache failed
Oct  2 00:50:59 kraken kernel: (ada0:siisch0:0:0:0): removing 
device entry


Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187564032 size=8192 error=6
Oct  2 00:50:59 kraken root: ZFS: vdev I/O failure, zpool=storage
path=/dev/gpt/disk06-live offset=2000187826176 size=8192 error=6

$ zpool status
   pool: storage
  state: DEGRADED
  scrub: scrub in progress for 5h32m, 17.16% done, 26h44m to go
config:

 NAME STATE READ WRITE CKSUM
 storage  DEGRADED 0 0 0
   raidz2 DEGRADED 0 0 0
 gpt/disk01-live  ONLINE   0 0 0
 gpt/disk02-live  ONLINE   0 0 0
 gpt/disk03-live  ONLINE   0 0 0
 gpt/disk04-live  ONLINE   0 0 0
 gpt/disk05-live  ONLINE   0 0 0
 gpt/disk06-live  REMOVED  0 0 0
 gpt/disk07-live  ONLINE   0 0 0

$ zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
storage6.97T  1.91T  1.75G  /storage
storage/bacula 4.72T  1.91T  4.29T  /storage/bacula
storage/compressed 2.25T  1.91T  46.9K  /storage/compressed
storage/compressed/bacula  2.25T  1.91T  42.7K  
/storage/compressed/bacula

storage/pgsql  5.50G  1.91T  5.50G  /storage/pgsql

$ sudo camcontrol devlist
Password:
Hitachi HDS722020ALA330 JKAOA28A at scbus2 target 0 lun 0 
(pass1,ada1)
Hitachi HDS722020ALA330 JKAOA28A at scbus3 target 0 lun 0 
(pass2,ada2)
Hitachi HDS722020ALA330 JKAOA28A at scbus4 target 0 lun 0 
(pass3,ada3)
Hitachi HDS722020ALA330 JKAOA28A at scbus5 target 0 lun 0 
(pass4,ada4)
Hitachi HDS722020ALA330 JKAOA28A at scbus6 target 0 lun 0 
(pass5,ada5)
Hitachi HDS722020ALA330 JKAOA28A at scbus7 target 0 lun 0 
(pass6,ada6)
ST380815AS 4.AABat scbus8 target 0 lun 0 
(pass7,ada7)
TSSTcorp CDDVDW SH-S223C SB01   at scbus9 target 0 lun 0 
(cd0,pass8)
WDC WD1600AAJS-75M0A0 02.03E02  at scbus10 target 0 lun 0