Re: nginx + passenger = segv in _rtld_error on restart on FreeBSD 8.0?
On Tue, Dec 8, 2009 at 6:48 PM, Steven Hartland kill...@multiplay.co.uk wrote:

> I'm currently testing nginx + passenger on FreeBSD 8.0 and I'm seeing a
> strange segv which seems to indicate a core library error in _rtld_error.
> Could this be the case or is the stack just badly corrupted?
>
>     (gdb) bt
>     #0  0x0008005577dc in _rtld_error () from /libexec/ld-elf.so.1
>     #1  0x000800557c3f in _rtld_error () from /libexec/ld-elf.so.1
>     #2  0x000800557d5e in _rtld_error () from /libexec/ld-elf.so.1
>     #3  0x00080055851b in dladdr () from /libexec/ld-elf.so.1
>     #4  0x0008005585f3 in dladdr () from /libexec/ld-elf.so.1
>     #5  0x00080055576d in ?? () from /libexec/ld-elf.so.1
>     #6  0x0001 in ?? ()
>     #7  0x004117f8 in boost::detail::sp_counted_impl_p<Passenger::Application::StandardSession>::dispose (this=0x800768980) at sp_counted_impl.hpp:78
>     Previous frame inner to this frame (corrupt stack?)
>
> Regards
> Steve

Steve,

Did you figure this out? We're seeing something very similar with nginx +
passenger + FreeBSD 8.0.

Matt

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
Re: nginx + passenger = segv in _rtld_error on restart on FreeBSD 8.0?
On Wed, Dec 9, 2009 at 4:20 AM, Steven Hartland kill...@multiplay.co.uk wrote:

> ----- Original Message -----
> From: Kostik Belousov kostik...@gmail.com
> To: Steven Hartland kill...@multiplay.co.uk
> Cc: freebsd-hack...@freebsd.org; freebsd-stable@freebsd.org
> Sent: Wednesday, December 09, 2009 10:21 AM
> Subject: Re: nginx + passenger = segv in _rtld_error on restart on FreeBSD 8.0?
>
> This is the trace once world had been recompiled with:
>
>     CFLAGS=-pipe
>     WITH_CTF=1
>     DEBUG_FLAGS=-g
>
>     #0  0x000800c95eec in thr_kill () at thr_kill.S:3
>     #1  0x000800b22e9e in _thr_send_sig (thread=0x800f06600, sig=6) at /usr/src/lib/libthr/thread/thr_kern.c:92
>     #2  0x000800b1f878 in _raise (sig=6) at /usr/src/lib/libthr/thread/thr_sig.c:187
>     #3  0x000800d74003 in abort () at /usr/src/lib/libc/stdlib/abort.c:65
>     #4  0x0043b8a7 in Client::threadMain (this=0x800f9cf40) at ext/nginx/HelperServer.cpp:516
>     #5  0x00411302 in boost::_mfi::mf0<void, Client>::operator() (this=0x7fa45ea8, p=0x800f9cf40) at mem_fn_template.hpp:49
>     #6  0x00411651 in boost::_bi::list1<boost::_bi::value<Client*> >::operator()<boost::_mfi::mf0<void, Client>, boost::_bi::list0> (this=0x7fa45eb8, f...@0x7fa45ea8, a...@0x7fa45d7f) at bind.hpp:232
>     #7  0x00411696 in boost::_bi::bind_t<void, boost::_mfi::mf0<void, Client>, boost::_bi::list1<boost::_bi::value<Client*> > >::operator() (this=0x7fa45ea8) at bind_template.hpp:20
>     #8  0x004116bd in boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, boost::_mfi::mf0<void, Client>, boost::_bi::list1<boost::_bi::value<Client*> > >, void>::invoke (function_obj_p...@0x7fa45ea8) at function_template.hpp:158
>     #9  0x0042e73a in boost::function0<void, std::allocator<void> >::operator() (this=0x7fa45ea0) at function_template.hpp:825
>     #10 0x00435760 in oxt::thread::thread_main (fu...@0x7fa45ea0, da...@0x7fa45e90) at thread.hpp:107
>     #11 0x0041310e in boost::_bi::list2<boost::_bi::value<boost::function<void ()(), std::allocator<void> > >, boost::_bi::value<boost::shared_ptr<oxt::thread::thread_data> > >::operator()<void (*)(boost::function<void ()(), std::allocator<void> >, boost::shared_ptr<oxt::thread::thread_data>), boost::_bi::list0> (this=0x800f3ee80, f...@0x800f3ee78, a...@0x7fa45f0f) at bind.hpp:289
>     #12 0x00413196 in boost::_bi::bind_t<void, void (*)(boost::function<void ()(), std::allocator<void> >, boost::shared_ptr<oxt::thread::thread_data>), boost::_bi::list2<boost::_bi::value<boost::function<void ()(), std::allocator<void> > >, boost::_bi::value<boost::shared_ptr<oxt::thread::thread_data> > > >::operator() (this=0x800f3ee78) at bind_template.hpp:20
>     #13 0x004131b9 in boost::thread::thread_data<boost::_bi::bind_t<void, void (*)(boost::function<void ()(), std::allocator<void> >, boost::shared_ptr<oxt::thread::thread_data>), boost::_bi::list2<boost::_bi::value<boost::function<void ()(), std::allocator<void> > >, boost::_bi::value<boost::shared_ptr<oxt::thread::thread_data> > > > >::run (this=0x800f3ee00) at thread.hpp:130
>     #14 0x00443259 in thread_proxy (param=0x800f3ee00) at ext/boost/src/pthread/thread.cpp:127
>     #15 0x000800b1badd in thread_start (curthread=0x800f06600) at /usr/src/lib/libthr/thread/thr_create.c:288
>     #16 0x in ?? ()
>     Cannot access memory at address 0x7fa46000
>     Current language: auto; currently asm
>
> It seems that the passenger client threads call closeStream(), which
> throws when closing the socket fails with ENOTCONN:
>
>     virtual void closeStream() {
>         TRACE_POINT();
>         if (fd != -1) {
>             int ret = syscalls::close(fd);
>             fd = -1;
>             if (ret == -1) {
>                 if (errno == EIO) {
>                     throw SystemException("A write operation on the session stream failed", errno);
>                 } else {
>                     throw SystemException("Cannot close the session stream", errno);
>                 }
>             }
>         }
>     }
>
> This causes abort() to be called in the thread, which then crashes the
> app with the above stack trace, which seems really weird. Anyone got any
> ideas?
>
> Regards
> Steve

Steve,

The patch for PR 144061 works for us.

http://lists.freebsd.org/pipermail/freebsd-hackers/2010-February/030741.html
http://www.freebsd.org/cgi/query-pr.cgi?pr=144061

Matt
Re: booting off a ZFS pool consisting of multiple striped mirror vdevs
On Tue, Feb 16, 2010 at 12:38 AM, Dan Naumov dan.nau...@gmail.com wrote:

>> I don't know, but I plan to test that scenario in a few days.
>>
>> Matt
>
> Please share the results when you're done, I am really curious :)

Booting from a stripe of two raidz vdevs works:

    FreeBSD/i386 boot
    Default: doom:/boot/zfsloader
    boot: status
    pool: doom
    config:

        NAME              STATE
        doom              ONLINE
          raidz1          ONLINE
            label/doom-0  ONLINE
            label/doom-1  ONLINE
            label/doom-2  ONLINE
          raidz1          ONLINE
            label/doom-3  ONLINE
            label/doom-4  ONLINE
            label/doom-5  ONLINE

I'd guess a stripe of mirrors would work fine too. If I get a chance I'll
test that combo.

> If booting off a stripe of 3 mirrors should work assuming no BIOS bugs,
> can you explain why booting off simple stripes (of any number of disks) is
> currently unsupported? I haven't tested that myself, but everywhere I look
> seems to indicate that booting off a simple stripe doesn't work, or is
> everywhere also out of date after your changes? :)

It's probably unsupported in Solaris/OpenSolaris because of their
bootloader. Our bootloader is completely different from theirs and so is
not subject to those restrictions in the ZFS docs.

The bottom line is that I think FreeBSD can boot from pretty much any
configuration, except possibly from systems with huge numbers of disks.

Matt
Re: booting off a ZFS pool consisting of multiple striped mirror vdevs
On Thu, Feb 18, 2010 at 10:57 AM, Matt Reimer mattjrei...@gmail.com wrote:

> On Tue, Feb 16, 2010 at 12:38 AM, Dan Naumov dan.nau...@gmail.com wrote:
>
>>> I don't know, but I plan to test that scenario in a few days.
>>>
>>> Matt
>>
>> Please share the results when you're done, I am really curious :)
>
> Booting from a stripe of two raidz vdevs works:
>
>     FreeBSD/i386 boot
>     Default: doom:/boot/zfsloader
>     boot: status
>     pool: doom
>     config:
>
>         NAME              STATE
>         doom              ONLINE
>           raidz1          ONLINE
>             label/doom-0  ONLINE
>             label/doom-1  ONLINE
>             label/doom-2  ONLINE
>           raidz1          ONLINE
>             label/doom-3  ONLINE
>             label/doom-4  ONLINE
>             label/doom-5  ONLINE
>
> I'd guess a stripe of mirrors would work fine too. If I get a chance I'll
> test that combo.

A stripe of three-way mirrors works:

    FreeBSD/i386 boot
    Default: mithril:/boot/zfsloader
    boot: status
    pool: mithril
    config:

        NAME                 STATE
        mithril              ONLINE
          mirror             ONLINE
            label/mithril-0  ONLINE
            label/mithril-1  ONLINE
            label/mithril-2  ONLINE
          mirror             ONLINE
            label/mithril-3  ONLINE
            label/mithril-4  ONLINE
            label/mithril-5  ONLINE

Matt
Re: booting off a ZFS pool consisting of multiple striped mirror vdevs
On Thu, Feb 18, 2010 at 4:36 PM, Dan Naumov dan.nau...@gmail.com wrote:

> A stripe of 3-way mirrors, whoa. Out of curiosity, what is the system
> used for? I am not doubting that there exist some uses/workloads for a
> system that uses 6 disks with 2 disks' worth of usable space, but that's a
> bit of an unusual configuration. What are your system/disk specs and what
> kind of performance are you seeing from the pool?

It's for a reasonably busy webserver hosting a few hundred domains, which
tends to be somewhat seek-intensive. For this pool we had two main
criteria: speed and double-disk redundancy. A stripe of three two-way
mirrors would only give us single-disk redundancy in the worst case (i.e.
losing both disks in one of the mirrors), so we went with two three-way
mirrors instead. Even though we're only getting the capacity of two disks'
worth of space, we'll still have 6x the capacity of the array it's
replacing.

A simple-minded dd test gives me ~180MB/s writing a single long file, and
400-500MB/s reading.

Matt
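[Editor's note: the simple-minded dd test mentioned above can be sketched
as follows. The file path, block size, and count are placeholders, not
from the thread; point TESTFILE at a file on the pool being measured.]

```shell
# Hypothetical sketch of a simple sequential dd throughput test.
# TESTFILE should live on the pool under test; /tmp is only a stand-in.
# dd reports the achieved throughput on stderr when it finishes.
TESTFILE=/tmp/ddtest
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64   # sequential write
dd if="$TESTFILE" of=/dev/null bs=1048576            # sequential read
rm "$TESTFILE"
```

Note that a test like this measures only single-stream sequential
bandwidth; it says little about the seek-bound web workload described
above.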
Re: ZFS tuning [was: hardware for home use large storage]
On Mon, Feb 15, 2010 at 5:49 PM, jhell jh...@dataix.net wrote:

> It is funny that you guys are all of a sudden talking about this, as I
> was just working on some modifications to the arc_summary.pl script for
> some better formatting and inclusion of kmem statistics. My intent with
> the modifications is to make the output more usable to the whole community
> by revealing the relevant system information that can be included in an
> email to the lists for diagnosis by others.
>
> ...
>
> Example output:
>
> System Summary
>         OS Revision:            199506
>         OS Release Date:        703100
>         Hardware Platform:      i386
>         Processor Architecture: i386
>         Storage pool Version:   13
>         Filesystem Version:     3
>
> Kernel Memory Usage
>         TEXT:   8950208 KiB,    8 MiB
>         DATA:   206347264 KiB,  196 MiB
>         TOTAL:  215297472 KiB,  205 MiB

Above, did you really mean 8950208 B, not KiB, etc.?

Matt
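[Editor's note: the figures in the example output above support the
question. The second column only lines up if the first column is bytes:
8950208 KiB would be roughly 8.5 GiB, not 8 MiB. A quick sanity check with
shell arithmetic, using the values from the output:]

```shell
# Dividing the first column by 1 MiB (1048576 bytes) reproduces the
# second column exactly, so the first column is bytes mislabeled as KiB.
echo $((8950208 / 1048576))    # TEXT:  prints 8
echo $((206347264 / 1048576))  # DATA:  prints 196
echo $((215297472 / 1048576))  # TOTAL: prints 205
```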
Re: booting off a ZFS pool consisting of multiple striped mirror vdevs
On Sat, Feb 13, 2010 at 12:04 PM, Dan Naumov dan.nau...@gmail.com wrote:

> Hello
>
> I have successfully tested and used a full ZFS install of FreeBSD 8.0 on
> both single-disk and mirrored-disk configurations using both MBR and GPT
> partitioning. AFAIK, with the more recent -CURRENT and -STABLE it is also
> possible to boot off a root filesystem located on raidz/raidz2 pools. But
> what about booting off pools consisting of multiple striped mirror or
> raidz vdevs? Like this:
>
> Assume each disk looks like half of a traditional ZFS mirrored-root
> configuration using GPT:
>
>     1: freebsd-boot
>     2: freebsd-swap
>     3: freebsd-zfs
>
>     |disk1+disk2| + |disk3+disk4| + |disk5+disk6|
>
> My logic tells me that while booting off any of the 6 disks, the boot0
> and boot1 stages should obviously work fine, but what about the boot2
> stage? Can it properly handle booting off a root filesystem that's striped
> across 3 mirror vdevs, or is booting off a single mirror vdev the best
> that one can do right now?

I don't know, but I plan to test that scenario in a few days.

Matt
Re: ZFS on root, serial console install
On Thu, Feb 11, 2010 at 10:27 PM, Charles Sprickman sp...@bway.net wrote:

> Any hints on that one? I finally got around to setting up dhcp/tftp/nfs
> on an internal network to perform normal installs (and, with some pxelinux
> hackery, the ability to boot DOS or memtest86 disk images). Sysinstall in
> general is kind of an unwieldy beast over serial, but one thing I was not
> able to accomplish was to get a shell (no extra virtual consoles on
> serial) or attempt any mounting of fixit media.
>
> From my last install that put ZFS on root, I had to do quite a bit of
> tapdancing since I had no DVD or bootable USB media - lots of switching
> from the install disk to fixit, which brought me to many chicken-and-egg
> moments. I did it though... But remotely, I'm not seeing a good way to do
> this. If mfsroot were larger and had more tools, then I'd be in business.
> This is probably the direction I need to get shoved in. I've looked at
> some other options with pxelinux and perhaps booting the mini ISO, but
> I'm not sure that gets me anywhere.
>
> Any tips? This isn't a make-or-break situation, I live 15 minutes from
> the colo... It's more of a quest. :)

The way I do it is to boot over the network using pxeboot, configure the
partitions and the ZFS pool and filesystems mounted on /mnt, then install
using sysinstall, using the Options dialog to set the install directory to
/mnt.

I think I created the NFS filesystem using

    make installworld DESTDIR=/usr/nfs/freebsd

or something like that; this gives you all the tools you need.

Matt
Re: freebsd 8.0 stable amd64/x86 needs ~9min to bootup
2010/1/27 Zavam, Vinícius egyp...@gmail.com:

> noon, all you guys.
>
> well, I'm having some issues during the 8.0-stable bootup process. it
> takes ~9min to finish the entire boot process and show me the login:
> screen.

Are you using zfsloader? A month or so ago the ZFS code was updated to
probe all 128 possible GPT partitions instead of just four, resulting in a
slow-down, but probably not nine minutes' worth.

Matt
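[Editor's note: for readers unfamiliar with zfsloader, on a ZFS-root
FreeBSD 8 system of this era the boot chain is typically pmbr ->
gptzfsboot -> /boot/zfsloader -> kernel, with the ZFS pieces enabled in
loader.conf. An illustrative fragment follows; the pool/dataset name
"tank/root" is a placeholder, not taken from this thread.]

```shell
# /boot/loader.conf -- illustrative fragment for a ZFS root, assuming a
# placeholder pool/dataset named "tank/root"; adjust for your own pool.
zfs_load="YES"
vfs.root.mountfrom="zfs:tank/root"
```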
Re: booting off GPT partitions
On Wed, Jan 27, 2010 at 8:45 AM, Dan Naumov dan.nau...@gmail.com wrote:

> Hey
>
> I was under the impression that everyone and their dog is using GPT
> partitioning in FreeBSD these days, including for boot drives, and that I
> was just being unlucky with my current NAS motherboard (Intel D945GCLF2)
> having supposedly shaky support for GPT boot. But right now I am having
> an email exchange with Supermicro support (whom I contacted since I am
> pondering their X7SPA-H board for a new system), who are telling me that
> booting off GPT requires a UEFI BIOS, which is supposedly a very new
> thing, and that, for example, NONE of their current motherboards support
> this. Am I misunderstanding something or is the Supermicro support tech
> misguided?

I'm booting servers with SuperMicro X8STi-F motherboards just fine using
pmbr + GPT + ZFS.

Matt
Re: ZFS pool upgrade to v14 broke ZFS booting
On Wed, Jan 27, 2010 at 11:18 AM, Paul Mather p...@gromit.dlib.vt.edu wrote:

> I have a FreeBSD guest running under VirtualBox 3.1.2 on Mac OS X. It's
> running a recent 8-STABLE and is a ZFS-only install booting via
> gptzfsboot. I use this VirtualBox guest as a test install.
>
> A day or so ago I noticed zpool status report that my pool could be
> upgraded from v13 to v14. I did this, via zfs upgrade -a. Today, when
> attempting to fire up this FreeBSD guest in VirtualBox I get this on the
> console:
>
>     ZFS: unsupported ZFS version 14 (should be 13)
>     No ZFS pools located, can't boot
>
> and the boot halts at that point. I don't see the boot menu I normally
> see that lists the opportunity to boot single-user; disable ACPI; and so
> on.
>
> Has anyone else experienced this? Is this a mismatch between gptzfsboot
> and my current pool version? (Gptzfsboot includes the message I'm
> seeing.) Am I supposed to rebuild and replace gptzfsboot every time the
> pool version is updated? (There was no advisory in /usr/src/UPDATING
> concerning this, nor do I remember seeing it elsewhere.)

Yes, you're running a version of gptzfsboot that only knows how to handle
pool version 13 and below. The commit that brought in version 14 support
also bumped the version number for gptzfsboot, though it doesn't look like
any of the code changed; perhaps version 14 doesn't change anything that
gptzfsboot cares about.

Try rebuilding and reinstalling gptzfsboot and zfsloader to see if that
helps:

    cd /sys/boot
    make cleandir
    make cleandir
    make obj
    make depend
    make all
    make install
    gpart bootcode -p /boot/gptzfsboot -i 1 /dev/somedisk

Of course, adjust the gpart command for your setup.

Matt
Re: Also seeing 2 x quad-core system slower than 2 x dual core
On Nov 29, 2007 10:58 AM, Kris Kennaway [EMAIL PROTECTED] wrote:

> Pete French wrote:
>> On the dual core processors this takes about 20 seconds. On the quad
>> cores it takes about 3 minutes! This is true for both the 32 and 64 bit
>> versions of FreeBSD :-(
>
> That almost certainly has nothing to do with how many CPUs your system
> has, since rm -rf is a single process running on a single core.

I wonder if I'm seeing this too. Running super-smack on a 2 x quad core
1.6GHz Dell 1950 I get about 4 qps, whereas on a 2 x dual core 3.0GHz box
I've seen 8 qps. Is this expected?

Matt

    CPU: Intel(R) Xeon(R) CPU E5310 @ 1.60GHz (1595.94-MHz K8-class CPU)
      Origin = "GenuineIntel"  Id = 0x6f7  Stepping = 7
      Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
      Features2=0x4e33d<SSE3,RSVD2,MON,DS_CPL,VMX,TM2,SSSE3,CX16,xTPR,PDCM,DCA>
      AMD Features=0x20100800<SYSCALL,NX,LM>
      AMD Features2=0x1<LAHF>
      Cores per package: 4
    usable memory = 8577105920 (8179 MB)
    avail memory  = 8289787904 (7905 MB)

    CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.02-MHz K8-class CPU)
      Origin = "GenuineIntel"  Id = 0x6f6  Stepping = 6
      Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
      Features2=0x4e3bd<SSE3,RSVD2,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,b15,DCA>
      AMD Features=0x20100800<SYSCALL,NX,LM>
      AMD Features2=0x1<LAHF>
      Cores per package: 2
    usable memory = 17170817024 (16375 MB)
    avail memory  = 16629559296 (15859 MB)
Re: Also seeing 2 x quad-core system slower than 2 x dual core
On Nov 29, 2007 11:20 AM, Kris Kennaway [EMAIL PROTECTED] wrote:

> Matt Reimer wrote:
>> On Nov 29, 2007 10:58 AM, Kris Kennaway [EMAIL PROTECTED] wrote:
>>> Pete French wrote:
>>>> On the dual core processors this takes about 20 seconds. On the quad
>>>> cores it takes about 3 minutes! This is true for both the 32 and 64
>>>> bit versions of FreeBSD :-(
>>>
>>> That almost certainly has nothing to do with how many CPUs your system
>>> has, since rm -rf is a single process running on a single core.
>>
>> I wonder if I'm seeing this too. Running super-smack on a 2 x quad core
>> 1.6GHz Dell 1950 I get about 4 qps, whereas on a 2 x dual core 3.0GHz
>> box I've seen 8 qps.
>
> Please, let's try to stay focused :) rm -rf has nothing to do with
> super-smack and vice versa.

It's relevant to $subject.

>> Is this expected?
>
> It is not very surprising. super-smack is not a good SMP benchmark; it
> does stupid things like 1-byte I/O, so it is not very scalable nor a good
> model of real-world database activity. Accounting for your CPUs being
> twice as fast on the dual core, it roughly says that the benchmark is not
> scaling beyond 4 CPUs, which is in line with my own observations.

Is sysbench a better benchmark? It gives me 2362.99 on the 2 x dual-core
box vs 1327.26 on the 2 x quad-core box.

Matt
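[Editor's note: the scaling point can be made concrete with the figures in
this thread. Normalizing each sysbench score by (cores x GHz) is crude,
but it shows the quad-core box getting roughly half the per-cycle
throughput, consistent with the benchmark not using the extra cores. The
normalization itself is this editor's sketch, not from the thread.]

```shell
# Crude per-(core x GHz) normalization of the sysbench scores above.
# 2 x dual-core 3.0 GHz = 4 cores; 2 x quad-core 1.6 GHz = 8 cores.
awk 'BEGIN {
    printf "dual-core box: %.1f per core-GHz\n", 2362.99 / (4 * 3.0);
    printf "quad-core box: %.1f per core-GHz\n", 1327.26 / (8 * 1.6);
}'
# prints roughly 196.9 for the dual-core box and 103.7 for the quad-core
```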
Re: 2 x quad-core system is slower than 2 x dual core on FreeBSD
On Nov 19, 2007 8:03 AM, Alexey Popov [EMAIL PROTECTED] wrote:

> Ivan Voras wrote:
>> Also, did you try configuring and running pecl-APC for PHP?
>
> I'm using eAccelerator. Again, the same software works well on systems
> with fewer CPUs and on Linux.

FWIW, when playing with eaccelerator on RELENG_7 recently, I noticed that
it seems to chew a lot of extra system time (as seen in top) when used
with Apache+mod_fastcgi, but not when used with nginx. I didn't
investigate.

Matt
Re: Network throughput problems in RELENG_7
On 10/27/07, Abdullah Ibn Hamad Al-Marri [EMAIL PROTECTED] wrote:

> ----- Original Message -----
> From: Matthew Reimer [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Sent: Friday, October 26, 2007 7:10:55 PM
> Subject: Network throughput problems in RELENG_7
>
>> I'm seeing a problem where a much faster quad-core host running RELENG_7
>> serves many fewer netrate/http requests per second (175/sec) than an
>> old, busy, UP 6.0 host (828/sec). The problem seems to be related to
>> latency and connection setup, as it shows up dramatically over a link
>> with 50-60 ms latency. Can you help?
>
> ...
>
> Hello,
>
> When did you last csup and buildworld? I saw some changes in tcp a few
> days ago.

Last Friday:

    FreeBSD gandalf.vpop.net 7.0-BETA1 FreeBSD 7.0-BETA1 #1: Fri Oct 26 12:27:19 PDT 2007
    [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GANDALF amd64

Matt