Re: 9.2-RC1 sparc install via network problems
On 08/06/2013 03:53, Marius Strobl wrote:
> On Mon, Aug 05, 2013 at 07:36:40PM +0200, Michiel Boland wrote:
>> Hi. I have some problems installing 9.2-RC1 on sparc64 via the network.
>>
>> I have a dhcp server, and an NFS server that exports the disc1 ISO.
>>
>> Relevant portions of dhcpd.conf:
>>
>> filename "boot/loader";
>> option root-path ":/cdrom";
>>
>> The boot/loader was copied from the install image. The tftpboot directory is
>> otherwise empty. (So no loader.conf etc.)
>
> Do you also have 'next-server' in there? I'm baffled how TFTPing the
> loader could work without one ...

There is a next-server line; it was in another location in the dhcpd.conf, so I did not think to copy that, sorry.

[...]

> U10 are still supported, why should they have been dropped? :) In fact,
> the 9.2 images have been tested on a U10 before they were published.
> I also found an ISP 1040 card, which works just fine here. So this
> indeed could be a hardware problem causing a PCI access to fail, which
> typically causes very strange backtraces like the incomplete one above.

Ok. I'll take out the PCI cards - let's see if I can get it to boot then.

> So far, that's expected as you seem to also have copied over the
> /etc/fstab from disc1.

Right. I thought I could take a shortcut by just mdconfig-ing the install image and mounting and exporting that.

Cheers
Michiel
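For reference, a minimal ISC dhcpd host entry of the kind discussed above might look roughly like the following. The host name, MAC address, and all IP addresses are placeholders, not values from this thread; the root-path host would normally be the NFS server's address:

  host ultra10 {
    hardware ethernet 00:03:ba:00:00:01;    # client MAC address (example)
    fixed-address 192.168.1.50;             # address handed to the client
    next-server 192.168.1.1;                # TFTP server holding boot/loader
    filename "boot/loader";
    option root-path "192.168.1.1:/cdrom";  # NFS root exported by the server
  }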
Re: Problem with zfsloader on 9.2-BETA2
On Mon, Aug 5, 2013 at 8:01 PM, J David wrote:
> I should add that this is indeed the correct guid for the pool:
>
> $ zpool get guid
> NAME  PROPERTY  VALUE                SOURCE
> data  guid      2022708996989799150  default

After a full make buildworld installworld, I finally got the revised output:

ZFS: can't find pool by guid: 1C121B1DB1904AEE
ZFS: can't find pool by guid: 1C121B1DB1904AEE

(0x1C121B1DB1904AEE == 2022708996989799150)

Thanks!
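As a quick sanity check of that decimal/hexadecimal correspondence, bc(1) from the base system can do the conversion; the value below is simply the pool guid quoted above:

  # print the pool guid in hexadecimal (bc prints uppercase hex digits)
  echo 'obase=16; 2022708996989799150' | bc
  # expected output, per the equality quoted above: 1C121B1DB1904AEE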
Re: 9.2-RC1 sparc install via network problems
On Mon, Aug 05, 2013 at 07:36:40PM +0200, Michiel Boland wrote:
> Hi. I have some problems installing 9.2-RC1 on sparc64 via the network.
>
> I have a dhcp server, and an NFS server that exports the disc1 ISO.
>
> Relevant portions of dhcpd.conf:
>
> filename "boot/loader";
> option root-path ":/cdrom";
>
> The boot/loader was copied from the install image. The tftpboot directory is
> otherwise empty. (So no loader.conf etc.)

Do you also have 'next-server' in there? I'm baffled how TFTPing the
loader could work without one ...

> First I tried booting an Ultra10. That panicked immediately:
>
> isp0: port 0x1000-0x10ff mem 0x2808000-0x2808fff at device 4.0 on pci3
> isp0: invalid NVRAM header
> isp1: port 0x2000-0x20ff mem 0x290-0x2900fff at device 2.0 on pci2
> panic: trap: data access error (kernel)
> cpuid = 0
> KDB: stack backtrace:
> #0 0xc08588b4 at trap+0x554
> Uptime: 1s
>
> This may be a hardware thing so I did not pursue this further. Maybe Ultra10 is
> no longer supported, I don't know. (Ultra10s are crap anyway :)

U10 are still supported, why should they have been dropped? :) In fact,
the 9.2 images have been tested on a U10 before they were published.
I also found an ISP 1040 card, which works just fine here. So this
indeed could be a hardware problem causing a PCI access to fail, which
typically causes very strange backtraces like the incomplete one above.

> Next I tried netbooting a V-120. That at least did not panic, but instead of
> starting the installer it produced this message on the console:
>
> cd0: Attempt to query device size failed: NOT READY, Medium not present
> Trying to mount root from cd9660:/dev/iso9660/FREEBSD_INSTALL [ro]...
> mountroot: waiting for device /dev/iso9660/FREEBSD_INSTALL ...
> Mounting from cd9660:/dev/iso9660/FREEBSD_INSTALL failed with error 19.

So far, that's expected as you seem to also have copied over the
/etc/fstab from disc1.

> Trying to mount root from nfs: []...
>
> At which point all I get is a single-user shell.
>
> Is installing via the network supported at all with the new bsd installer?
> Any magic loader options I need to get this to work?

According to the /etc/rc.local that ends up on the ISO 9660 file system
of the release images, netbooted installing indeed was thought of when
these were switched to bsdinstall(8). Given that just copying over that
one file to a NFS root properly fires up bsdinstall(8) after booting
here, and given that your netboot environment apparently also works just
fine - except you refer to the mountroot prompt as a single-user shell -,
I've no idea what could be going wrong in your case ...

Marius
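As a concrete illustration of the /etc/rc.local approach described above, something along these lines could be done on the NFS server; /mnt/disc1 and /nfsroot are example paths for the mounted ISO and the exported root, not paths taken from this thread:

  # copy the installer hook from the mounted disc1 image into the exported NFS root,
  # so a netbooted client starts bsdinstall(8) on boot
  cp /mnt/disc1/etc/rc.local /nfsroot/etc/rc.local

  # a quick look at what the hook does (it should end up invoking bsdinstall)
  cat /nfsroot/etc/rc.local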
Re: Problem with zfsloader on 9.2-BETA2
On Mon, Aug 5, 2013 at 7:54 PM, J David wrote:
> OK show vfs.zfs.boot.primary_pool
> 2022708996989799150

I should add that this is indeed the correct guid for the pool:

$ zpool get guid
NAME  PROPERTY  VALUE                SOURCE
data  guid      2022708996989799150  default

Thanks!
Re: Problem with zfsloader on 9.2-BETA2
On Sat, Aug 3, 2013 at 3:16 PM, Andriy Gapon wrote:
> Very unusual. Would you be able to try 9.2 zfsloader again?

Surely.

> I would like to see values of loaddev, currdev and vfs.zfs.boot.primary_pool
> loader variables (if any are set). These can be obtained using 'show' command
> at loader prompt.

OK show loaddev
OK show currdev
OK show vfs.zfs.boot.primary_pool
2022708996989799150

> Also, output of lsdev -v.

OK lsdev -v
cd devices:
disk devices:
  disk0:   BIOS drive C:
    disk0p1: FreeBSD boot  64KB
    disk0p2: FreeBSD swap  2048MB
    disk0p3: FreeBSD ZFS   28GB
  disk1:   BIOS drive D:
    disk1p1: FreeBSD boot  64KB
    disk1p2: FreeBSD swap  2048MB
    disk1p3: FreeBSD ZFS   27GB
  disk2:   BIOS drive E:
  disk3:   BIOS drive F:
  disk4:   BIOS drive G:
  disk5:   BIOS drive H:
  disk6:   BIOS drive I:
  disk7:   BIOS drive J:
pxe devices:
zfs devices:
OK

> Also, if you are able to build custom 9.2 zfsloader, then it would be useful to
> modify the printf statement (in zfs_fmtdev(), sys/boot/zfs/zfs.c) to print
> dev->pool_guid.

This didn't produce the expected result. Slipped it into the 9.2-RC1 build, rebuilt, installed it, but nothing changed.

In order to get it to load, we removed zfsloader, storing the 8.4 version as zfsloader-8.4 and installing the 9.2-RC1 zfsloader as zfsloader-9.2. Then, when gptzfsboot can't find /boot/zfsloader, I entered zfsloader-9.2 at the prompt. It does the same thing, with the same message:

ZFS: can't find pool by guid

Even though I changed the source as you suggested:

Index: zfs.c
===================================================================
--- zfs.c (revision 253967)
+++ zfs.c (working copy)
@@ -645,7 +645,7 @@
 	} else
 		spa = spa_find_by_guid(dev->pool_guid);
 	if (spa == NULL) {
-		printf("ZFS: can't find pool by guid\n");
+		printf("ZFS: can't find pool by guid: %llX\n", dev->pool_guid);
 		return (buf);
 	}
 	if (dev->root_guid == 0 && zfs_get_root(spa, &dev->root_guid)) {

So I can't figure out why the message didn't change. Doing a full buildworld buildkernel right now in case the problem is just that I don't know how to selectively build this.

Thanks!
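On the "why didn't the message change" question: one common cause is that an older, previously built copy of the loader gets installed instead of the freshly patched one. A rough sketch of rebuilding just the boot code and verifying the result, assuming the stock 9.x source layout under /usr/src and the default /usr/obj object tree (both paths are assumptions, not taken from this thread):

  # rebuild only the boot blocks and loaders
  cd /usr/src/sys/boot
  make obj && make clean && make depend && make

  # confirm the freshly built binary really contains the modified string
  strings /usr/obj/usr/src/sys/boot/i386/zfsloader/zfsloader | grep 'find pool by guid'

  # then install it (or copy it by hand to /boot and re-check it with strings)
  make install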
unexpected idprio 31 behavior on 9.2-BETA2 and 9.2-RC1
I have an i5-2500 machine with 8 GB RAM, now running 9.2-RC1 amd64 with the GENERIC kernel.

Today, while still running 9.2-BETA2, I updated my source tree and started building world with idprio 31. When I looked back a while later, all the CPU cores and the disk were essentially idle, and hardly any progress had been made on the build. I stopped and restarted the build without the idle priority setting and it ran fine.

Has anybody else seen any of this? Does anybody know about any fairly recent changes that might account for it?

I did a "rm -rf /usr/src /usr/obj" and loaded a new source tree before going to RC1. I still see the odd behavior at RC1. Sometimes it works just like it should (i.e. compute-bound processes use most/all of the available CPU time), but a lot of the time both the CPU and disk are idle (e.g. CPU 97.8% idle, disk 1% busy per systat). I don't think I ever saw this behavior before while running "make buildworld -j4".

Can anyone else confirm/rebut my findings? Thanks.
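For anyone trying to reproduce this, the scenario boils down to roughly the following. The -j4 value simply mirrors the build command quoted above; the ps keyword list is an assumption about what is useful to watch, not something from the original report:

  # start the build at the lowest idle priority, as in the report
  cd /usr/src
  idprio 31 make -j4 buildworld

  # in another terminal, check whether the build processes actually get CPU time;
  # the rtprio keyword should show the idle class for the idprio'd processes
  ps -ax -o pid,pri,rtprio,%cpu,command | grep -w make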
9.2-RC1 sparc install via network problems
Hi. I have some problems installing 9.2-RC1 on sparc64 via the network.

I have a dhcp server, and an NFS server that exports the disc1 ISO.

Relevant portions of dhcpd.conf:

filename "boot/loader";
option root-path ":/cdrom";

The boot/loader was copied from the install image. The tftpboot directory is otherwise empty. (So no loader.conf etc.)

First I tried booting an Ultra10. That panicked immediately:

isp0: port 0x1000-0x10ff mem 0x2808000-0x2808fff at device 4.0 on pci3
isp0: invalid NVRAM header
isp1: port 0x2000-0x20ff mem 0x290-0x2900fff at device 2.0 on pci2
panic: trap: data access error (kernel)
cpuid = 0
KDB: stack backtrace:
#0 0xc08588b4 at trap+0x554
Uptime: 1s

This may be a hardware thing so I did not pursue this further. Maybe Ultra10 is no longer supported, I don't know. (Ultra10s are crap anyway :)

Next I tried netbooting a V-120. That at least did not panic, but instead of starting the installer it produced this message on the console:

cd0: Attempt to query device size failed: NOT READY, Medium not present
Trying to mount root from cd9660:/dev/iso9660/FREEBSD_INSTALL [ro]...
mountroot: waiting for device /dev/iso9660/FREEBSD_INSTALL ...
Mounting from cd9660:/dev/iso9660/FREEBSD_INSTALL failed with error 19.
Trying to mount root from nfs: []...

At which point all I get is a single-user shell.

Is installing via the network supported at all with the new bsd installer? Any magic loader options I need to get this to work?

Cheers
Michiel
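The shortcut described later in the thread - mdconfig-ing the install image and exporting it over NFS - might look roughly like the following on the NFS server. The ISO file name, md device, mount point, and network range are illustrative assumptions:

  # attach the ISO to a memory disk and mount it
  mdconfig -a -t vnode -f FreeBSD-9.2-RC1-sparc64-disc1.iso
  mkdir -p /cdrom
  mount -t cd9660 /dev/md0 /cdrom

  # export it read-only over NFS (example network), then have mountd reread /etc/exports
  echo '/cdrom -ro -alldirs -network 192.168.1.0 -mask 255.255.255.0' >> /etc/exports
  kill -HUP `cat /var/run/mountd.pid`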
Re: NFS deadlock on 9.2-Beta1
On Mon, Aug 5, 2013 at 12:06 PM, Mark Saad wrote:
> Are there any updates on this issue? Has anyone tested it or seen it happen
> on the release candidate?

It's a bit premature for that; the RC has been out for a few hours.

We put BETA2 on 25 nodes and only saw the problem on five after 24 hours. At that point we switched to a build that reverts the patch that causes the deadlock, and no node on which that was done (at this point, all of them) has had the problem since.

We'll get some machines on releng/9.2 today, but I didn't see anything in the release candidate announcement to indicate that relevant changes had been made. Is there anything in the release candidate that is believed to address this issue? If so, let us know which svn revision it's in and we'll try to accelerate test deployment.

Thanks!
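For anyone else checking their own machines for this, one low-impact way to spot the symptom described in this thread (sendfile()-using processes stuck on the newnfs wait channel) is something like the following; the pid is of course a placeholder:

  # list processes sleeping on an nfs-related wait channel
  ps -axlww | grep newnfs

  # dump the kernel stack of a suspect process (works on a GENERIC kernel)
  procstat -kk 1234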
Re: NFS deadlock on 9.2-Beta1
On Jul 29, 2013, at 10:48 PM, J David wrote:
> If it is helpful, we have 25 nodes testing the 9.2-BETA1 build and
> without especially trying to exercise this bug, we found
> sendfile()-using processes deadlocked in WCHAN newnfs on 5 of the 25
> nodes. The ones with highest uptime (about 3 days) seem most
> affected, so it does seem like a "sooner or later" type of thing.
> Hopefully the fix is easy and it won't be an issue, but it definitely
> does seem like a problem 9.2-RELEASE would be better off without.
>
> Unfortunately we are not in a position to capture the requested
> debugging information at this time; none of those nodes are running a
> debug version of the kernel. If Michael is unable to get the
> information as he hopes, we can try to do that, possibly over the
> weekend. For the time being, we will convert half the machines to
> rollback r250907 to try to confirm that resolves the issue.
>
> Thanks all! If one has to encounter a problem like this, it is nice
> to come to the list and find the research already so well underway!

All

Are there any updates on this issue? Has anyone tested it or seen it happen on the release candidate?

---
Mark Saad | mark.s...@longcount.org
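The "rollback r250907" mentioned above would typically be done as a reverse merge in an svn checkout of the tree before rebuilding. A sketch, assuming /usr/src is such a checkout; only the revision number comes from the message, everything else is an example:

  # reverse-merge the suspect revision into the working copy, review, then rebuild
  cd /usr/src
  svn merge -c -250907 .
  svn diff | less
  make buildkernel KERNCONF=GENERIC && make installkernel KERNCONF=GENERIC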
Re: Strange sendmail behaviour after upgrade to 9.1-BETA2
Another DNS (not MS) server answers NXDomain instead of SERVFAIL on an 'AAAA? kalmar.' query. And then sendmail asks for A. Email works.

There are two ways, obviously:
1. Change sendmail behaviour, i.e. make it ask for the A record just after the first query for AAAA. Like in 9.1-RELEASE.
2. Make MS DNS answer NXDomain instead of SERVFAIL on AAAA of kalmar. Is it possible at all? =)

2013/8/5 Pavel Timofeev :
> Hmm. The SERVFAIL comes because of a not fully qualified dns name: Kalmar.
> instead of kalmar.ocs.ru.
>
> 2013/8/5 Pavel Timofeev :
>> Got some new info. I read this
>> https://lists.isc.org/pipermail/bind-users/2006-August/063528.html
>> Perhaps, the problem is because our MS DNS server returns SERVFAIL on the
>> AAAA query.
[...]
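To separate the sendmail question from the DNS server question, the SERVFAIL behaviour can be reproduced directly with dig(1) (in the 9.x base system, or from the BIND tools port). The server and host names below are the anonymized ones from the captures in this thread, so substitute the real ones:

  # bare single-label query, as sendmail ends up sending it
  dig @hercules.xxx.ru kalmar. AAAA

  # fully qualified queries for comparison
  dig @hercules.xxx.ru kalmar.xxx.ru. AAAA
  dig @hercules.xxx.ru kalmar.xxx.ru. A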
Re: Strange sendmail behaviour after upgrade to 9.1-BETA2
So, sendmail is broken? Or the libraries which sendmail uses to resolve names?

2013/8/5 Pavel Timofeev :
> Hmm. The SERVFAIL comes because of a not fully qualified dns name: Kalmar.
> instead of kalmar.ocs.ru.
>
> 2013/8/5 Pavel Timofeev :
>> Got some new info. I read this
>> https://lists.isc.org/pipermail/bind-users/2006-August/063528.html
>> Perhaps, the problem is because our MS DNS server returns SERVFAIL on the
>> AAAA query.
[...]
Re: Strange sendmail behaviour after upgrade to 9.1-BETA2
Hmm. The SERVFAIL comes because of a not fully qualified dns name: Kalmar. instead of kalmar.ocs.ru.

2013/8/5 Pavel Timofeev :
> Got some new info. I read this
> https://lists.isc.org/pipermail/bind-users/2006-August/063528.html
> Perhaps, the problem is because our MS DNS server returns SERVFAIL on the
> AAAA query. It happens in case of frequent queries. It's 9.2-RC1:
[...]
Re: Strange sendmail behaviour after upgrade to 9.1-BETA2
Got some new info. I read this
https://lists.isc.org/pipermail/bind-users/2006-August/063528.html
Perhaps, the problem is because our MS DNS server returns SERVFAIL on the AAAA query. It happens in case of frequent queries. It's 9.2-RC1:

14:59:50.973445 IP reticulum.xxx.ru.29244 > hercules.xxx.ru.domain: 53419+ MX? xxx.ru. (24)
14:59:50.973754 IP hercules.xxx.ru.domain > reticulum.xxx.ru.29244: 53419* 1/0/1 MX kalmar.xxx.ru. 10 (63)
14:59:50.974061 IP reticulum.xxx.ru.56461 > hercules.xxx.ru.domain: 53420+ AAAA? kalmar.xxx.ru. (31)
14:59:50.974340 IP hercules.xxx.ru.domain > reticulum.xxx.ru.56461: 53420* 0/1/0 (82)
14:59:50.974570 IP reticulum.xxx.ru.28332 > hercules.xxx.ru.domain: 53421+ AAAA? kalmar.xxx.ru. (31)
14:59:50.974887 IP hercules.xxx.ru.domain > reticulum.xxx.ru.28332: 53421* 0/1/0 (82)
14:59:50.974919 IP reticulum.xxx.ru.21453 > hercules.xxx.ru.domain: 53422+ AAAA? kalmar. (24)
14:59:50.975290 IP hercules.xxx.ru.domain > reticulum.xxx.ru.21453: 53422 ServFail 0/0/0 (24)

The third time, MS DNS returned SERVFAIL. Why does sendmail ask for AAAA 3 times? Something wrong with the bind libraries? I think if there was no SERVFAIL on AAAA, it would ask for an A record then.

The opposite situation on 9.1-RELEASE. Here sendmail asks for AAAA only 1 time. Then it asks for A.
...
14:30:45.061950 IP 10.0.2.15.30979 > hercules.xxx.ru.domain: 62684+ MX? xxx.ru. (24)
14:30:45.063064 IP hercules.xxx.ru.domain > 10.0.2.15.30979: 62684* 1/0/1 MX kalmar.xxx.ru. 10 (63)
14:30:45.063624 IP 10.0.2.15.39212 > hercules.xxx.ru.domain: 62685+ AAAA? kalmar.xxx.ru. (31)
14:30:45.064460 IP hercules.xxx.ru.domain > 10.0.2.15.39212: 62685* 0/1/0 (82)
14:30:45.064766 IP 10.0.2.15.44381 > hercules.xxx.ru.domain: 62686+ A? kalmar.xxx.ru. (31)
14:30:45.065530 IP hercules.xxx.ru.domain > 10.0.2.15.44381: 62686* 1/0/0 A 192.168.31.190 (47)
...

I will continue investigation.

2013/8/4 Pavel Timofeev :
> Same thing on RELENG/9.2
>
> 2013/8/3 Pavel Timofeev :
>> 2013/8/2 Ruben de Groot :
>>> On Wed, Jul 31, 2013 at 04:54:08PM +0400, Pavel Timofeev typed:
 I've just installed new fresh 9.2-BETA2 amd64 on another machine.
 Same behaviour - sendmail asks DNS only for the AAAA record of the mx server.
 We don't use IPv6 in our company.

 2013/7/31 Pavel Timofeev :
 > I wanted to say that sendmail asks only for the AAAA (IPv6) record of the mx
 > server, but not A (IPv4).
 > Any ideas?
 >
 > 2013/7/31 Pavel Timofeev :
 >> Looks like my sendmail uses only IPv6 to resolve the mx server dns name to
 >> an ip address.
 >>
 >> 14:59:50.793338 IP reticulum.xxx.ru.19032 > hercules.xxx.ru.domain: 53417+ AAAA? xxx.ru. (24)
 >> 14:59:50.793662 IP hercules.xxx.ru.domain > reticulum.xxx.ru.19032: 53417* 0/1/0 (75)
 >> 14:59:50.793696 IP reticulum.xxx.ru.55299 > hercules.xxx.ru.domain: 53418+ A? xxx.ru. (24)
>>> Here it is definitely asking an A record.
>> And? It's not the A record of the mx server.
>>
>> 14:59:50.794087 IP hercules.xxx.ru.domain > reticulum.xxx.ru.55299: 53418* 7/0/0 A 192.168.2.11, A 192.168.2.12, A 192.168.41.4, A 192.168.14.12, A 192.168.34.100, A 192.168.34.110, A 192.168.44.19 (136)
>>> And here is the reply. The question is more why is it repeating the same
>>> query (below) over and over.
>> Yes, reply about xxx.ru, not about the mx server's IP, which is kalmar.xxx.ru
>>
>>> Ruben
>>>
>> 14:59:50.973445 IP reticulum.xxx.ru.29244 > hercules.xxx.ru.domain: 53419+ MX? xxx.ru. (24)
>> 14:59:50.973754 IP hercules.xxx.ru.domain > reticulum.xxx.ru.29244: 53419* 1/0/1 MX kalmar.xxx.ru. 10 (63)
>> 14:59:50.974061 IP reticulum.xxx.ru.56461 > hercules.xxx.ru.domain: 53420+ AAAA? kalmar.xxx.ru. (31)
>> 14:59:50.974340 IP hercules.xxx.ru.domain > reticulum.xxx.ru.56461: 53420* 0/1/0 (82)
>> 14:59:50.974570 IP reticulum.xxx.ru.28332 > hercules.xxx.ru.domain: 53421+ AAAA? kalmar.xxx.ru. (31)
>> 14:59:50.974887 IP hercules.xxx.ru.domain > reticulum.xxx.ru.28332: 53421* 0/1/0 (82)
>> 14:59:50.974919 IP reticulum.xxx.ru.21453 > hercules.xxx.ru.domain: 53422+ AAAA? kalmar. (24)
>> 14:59:50.975290 IP hercules.xxx.ru.domain > reticulum.xxx.ru.21453: 53422 ServFail 0/0/0 (24)
>> 14:59:50.975314 IP reticulum.xxx.ru.63038 > hercules.xxx.ru.domain: 53422+ AAAA? kalmar. (24)
>> 14:59:50.975674 IP hercules.xxx.ru.domain > reticulum.xxx.ru.63038: 53422 ServFail 0/0/0 (24)
>> 14:59:50.975749 IP reticulum.xxx.ru.38393 > hercules.xxx.ru.domain: 53423+ AAAA? kalmar.xxx.ru. (31)
>> 14:59:50.976105 IP hercules.xxx.ru.domain > reticulum.xxx.ru.38393: 53423* 0/1/0 (82)
>> 14:59:50.976176 IP reticulum.xxx.ru.45558 > hercules.xxx.ru.domain: 53424+ AAAA? kalmar. (24)
>> 14:59:50.976483 IP hercules.xxx.ru.domain > reticulum.xxx.ru.45558:
FreeBSD 9.2-RC1 now available
The first release candidate builds of the 9.2-RELEASE release cycle are now available on the FTP servers for the amd64, i386, ia64, powerpc, powerpc64, and sparc64 architectures. The image checksums follow at the end of this email.

ISO images and, for architectures that support it, the memory stick images are available here:

ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/9.2/

(or any of the FreeBSD mirror sites).

If you notice problems, you can report them through the normal GNATS PR system or here on the -stable mailing list.

If you would like to use SVN to do a source-based update of an existing system, use "releng/9.2". Please be aware that cvsup and CVS are both deprecated, and are not supported methods of updating the src/ tree.

Changes between -BETA2 and -RC1 include:

- Expand the list of devices claimed by cxgbe(4).
- Fix a panic in the racct code when munlock(2) is called with incorrect values.
- Remove ctl(4) from GENERIC. ctladm(8) now automatically loads the corresponding module as necessary. This reduces the default memory footprint and allows FreeBSD to work on i386 machines with 128 MB of RAM out of the box.
- Fix zfs send -D hang after processing, requiring a CTRL+C to interrupt.

The freebsd-update(8) utility supports binary upgrades of amd64 and i386 systems running earlier FreeBSD releases. Systems running earlier FreeBSD releases can upgrade as follows:

# freebsd-update upgrade -r 9.2-RC1

During this process, FreeBSD Update may ask the user to help by merging some configuration files or by confirming that the automatically performed merging was done correctly.

# freebsd-update install

The system must be rebooted with the newly installed kernel before continuing.

# shutdown -r now

After rebooting, freebsd-update needs to be run again to install the new userland components:

# freebsd-update install

It is recommended to rebuild and install all applications if possible, especially if upgrading from an earlier FreeBSD release, for example, FreeBSD 8.x.
Alternatively, the user can install misc/compat8x and other compatibility libraries; afterwards the system must be rebooted into the new userland:

# shutdown -r now

Finally, after rebooting, freebsd-update needs to be run again to remove stale files:

# freebsd-update install

Checksums:

amd64:
SHA256 (FreeBSD-9.2-RC1-amd64-bootonly.iso) = 57312363a8ce588c22c21b849af802ca2616e105da2e29b4ae6f549d41e9a32d
SHA256 (FreeBSD-9.2-RC1-amd64-disc1.iso) = bc30f5661304c65d14612d39e19f9651502fbdc467d9ad02402c58a99d6fe2da
SHA256 (FreeBSD-9.2-RC1-amd64-memstick.img) = 57af7a47604d2c138e4887a717526f7a48d90ba0e0f5e23425845a7b1a2ff131
MD5 (FreeBSD-9.2-RC1-amd64-bootonly.iso) = c36b97aec501a49f3a49b38303b53f0d
MD5 (FreeBSD-9.2-RC1-amd64-disc1.iso) = 5124086f31bf5efc40b770dd2439ce7a
MD5 (FreeBSD-9.2-RC1-amd64-memstick.img) = 2f2ea636cfc5fe74ea15a4c1cbd2f8cb

i386:
SHA256 (FreeBSD-9.2-RC1-i386-bootonly.iso) = 7926b57fdc9ce536bdf51722a524d9ae385a82bf02c42b3315604542672b0906
SHA256 (FreeBSD-9.2-RC1-i386-disc1.iso) = d33ec96a39e2b5a53b143397daac4d3778c30b3ec0592fccd5e8b84def69d03c
SHA256 (FreeBSD-9.2-RC1-i386-memstick.img) = 523899af78effdf7760791d91c3d56620a484b73c74bbfa82e9f21178979ddb8
MD5 (FreeBSD-9.2-RC1-i386-bootonly.iso) = 3500e3b375a955d91f194b8ab3a58c2f
MD5 (FreeBSD-9.2-RC1-i386-disc1.iso) = 4534a3675a5a38d475a7eff6651f4289
MD5 (FreeBSD-9.2-RC1-i386-memstick.img) = c4cbb5478c557dd9e2200da027da799c

ia64:
SHA256 (FreeBSD-9.2-RC1-ia64-bootonly.iso) = bd95b2480edd547fff06587a04f65ceceeaa312ac08e0fceccbced47f3df47ac
SHA256 (FreeBSD-9.2-RC1-ia64-disc1.iso) = 93795e3e9df2e3ea6163efadf0f25a70d8574f75868826a403721fbc8abec43e
SHA256 (FreeBSD-9.2-RC1-ia64-memstick.img) = d027d2965906c96798855adadf60ea622d87e19e28de15efe889086962d73774
MD5 (FreeBSD-9.2-RC1-ia64-bootonly.iso) = 3be3ec429b03facf5f6f29ac409dd2f6
MD5 (FreeBSD-9.2-RC1-ia64-disc1.iso) = 24f6a2c3749799337065ebc7bb8f7d92
MD5 (FreeBSD-9.2-RC1-ia64-memstick.img) = 8fa6c1736ea74bc440137079bf3c9fa1

powerpc:
SHA256 (FreeBSD-9.2-RC1-powerpc-bootonly.iso) = 0d0f2048ffe81ac7bc36a687ccd8dcd4368460c4dd7055cdabde5f70824062d0
SHA256 (FreeBSD-9.2-RC1-powerpc-disc1.iso) = 31544e513992da8c1df4ca043d22952b102ed2c1b9c184db6ddc3a2fd4aecedc
SHA256 (FreeBSD-9.2-RC1-powerpc-memstick.img) = 48306dd6f718d820ca2464eedfafe761b2fb5f211d1f23545e3cb1f6dfdbf0a8
MD5 (FreeBSD-9.2-RC1-powerpc-bootonly.iso) = 20d43ffe51a6d374fc358dd7e5c65a2a
MD5 (FreeBSD-9.2-RC1-powerpc-disc1.iso) = 8deb66e7d557734fd64709c8512a
MD5 (FreeBSD-9.2-RC1-powerpc-memstick.img) = 324373d1a647722f6b925038b00a9004

powerpc64:
SHA256 (FreeBSD-9.2-RC1-powerpc-powerpc64-bootonly.iso) = 27f4f23093067d20a92f85d7fb611c6576e7ceb4fdf2481154ca05ba1ddbf282
SHA256 (FreeBSD-9.2-RC1-powerpc-powerpc64-disc1.iso) = 008f42feb9c6378976d20cec227fbd5ab811f36be1519db9f90e0edd07df961c
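For completeness, a downloaded image can be compared against the published values with the base system's checksum tools; the amd64 disc1 image is used here purely as an example:

  # compute checksums of the downloaded image and compare with the list above
  sha256 FreeBSD-9.2-RC1-amd64-disc1.iso
  md5 FreeBSD-9.2-RC1-amd64-disc1.iso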