Re: gptboot broken when compiled with clang 6 and WITHOUT_LOADER_GELI -- clang 5 is OK

2018-04-25 Thread Dewayne Geraghty
Andre, you're not alone.  I think there's a problem with clang 6 on i386
FreeBSD 11.1X; refer to:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227552
https://forums.freebsd.org/threads/uptime-w-i386-breakage.65584/
and perhaps also on amd64; see
https://bugs.freebsd.org/bugzilla/buglist.cgi?quicksearch=clang_id=226390.

Without time to investigate, I've reverted userland:
FreeBSD 11.2-PRERELEASE  r332843M    amd64 1101515 1101509
FreeBSD 11.2-PRERELEASE  r332843M    i386 1101515 1101509

Apologies for my earlier email: I'd missed setting the content-type to
text/plain instead of text/html (the mailing list strips the latter's
content) -- too many management memos! ;)




Re: Problems with ifconfig when starting all jails after 10.3 -> 10.4 upgrade

2018-04-25 Thread Marc Branchaud

On 2018-04-06 02:26 PM, Marc Branchaud wrote:

On 2018-04-05 10:28 AM, Marc Branchaud wrote:

Hi all,

I just upgraded from 10.3 to 10.4, and "/etc/rc.d/jail start" is 
having problems starting all of my jails:


# /etc/rc.d/jail start
Starting jails:xipbuild_3_3: created
ifconfig:: bad value
jail: xipbuild_3_3_8: /sbin/ifconfig lo1 inet 10.1.1.38/32 alias: failed
xipbuild_3_4: created
ifconfig:: bad value
jail: xipbuild_4_0: /sbin/ifconfig lo1 inet 10.1.1.5/32 alias: failed
xipbuild: created
xipbuild_4_9: created
ifconfig:: bad value
jail: xipbuild9: /sbin/ifconfig lo1 inet 10.1.1.209/32 alias: failed
.


More info: Things work fine with jail_parallel_start="YES".

In 10.4, /etc/rc.d/jail now adds "-p1" to the jail command's arguments
when starting all jails with jail_parallel_start="NO".  It's definitely
this parameter that's causing my problems -- changing /etc/rc.d/jail to
not add it fixes things.


The problem stems from work for bug 209112, a patch for which (r302857) 
was MFC'd to stable-10.


I've added a comment to bug 209112.

M.



     M.


This worked fine in 10.3.  I can individually start each jail, e.g. 
"/etc/rc.d/jail start xipbuild9".


All the jails configure the same set of parameters.  Here's my jail.conf:

--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
xipbuild_3_3 {
   path="/usr/build-jails/jails/3.3";
   host.hostname="xipbuild_3_3";
   ip4.addr="10.1.1.3/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/3.3/usr/home nullfs rw 0 0";
   interface="lo1";
}
xipbuild_3_3_8 {
   path="/usr/build-jails/jails/3.3.8";
   host.hostname="xipbuild_3_3_8";
   ip4.addr="10.1.1.38/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/3.3.8/usr/home nullfs rw 0 
0";

   interface="lo1";
}
xipbuild_3_4 {
   path="/usr/build-jails/jails/3.4";
   host.hostname="xipbuild_3_4";
   ip4.addr="10.1.1.4/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/3.4/usr/home nullfs rw 0 0";
   interface="lo1";
}
xipbuild_4_0 {
   path="/usr/build-jails/jails/4.0";
   host.hostname="xipbuild_4_0";
   ip4.addr="10.1.1.5/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/4.0/usr/home nullfs rw 0 0";
   interface="lo1";
}
xipbuild {
   path="/usr/build-jails/jails/latest";
   host.hostname="xipbuild";
   ip4.addr="10.1.1.200/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/latest/usr/home nullfs rw 
0 0";

   interface="lo1";
}
xipbuild_4_9 {
   path="/usr/build-jails/jails/4.9";
   host.hostname="xipbuild_4_9";
   ip4.addr="10.1.1.90/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/4.9/usr/home nullfs rw 0 0";
   interface="lo1";
}
xipbuild9 {
   path="/usr/build-jails/jails/latest9";
   host.hostname="xipbuild9";
   ip4.addr="10.1.1.209/32";

   allow.chflags;
   allow.mount;
   mount.devfs;

   persist;

   mount="/usr/home  /usr/build-jails/jails/latest9/usr/home nullfs rw 
0 0";

   interface="lo1";
}
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---

I use ipnat to give the jails network access.  Here's ipnat.rules:

--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
map em0 10.1.1.0/24 -> 0/32 proxy port ftp ftp/tcp
map em0 10.1.1.0/24 -> 0/32
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
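
(For completeness, the rules are loaded and verified with stock
ipnat(8) -- standard flags only:)

# Flush existing NAT mappings and state, load the rules, then list the
# active mappings to confirm they took.
ipnat -CF -f /etc/ipnat.rules
ipnat -l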

And here's my rc.conf:

--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
# Generated by Ansible

# hostname must be FQDN
hostname="devastator.xiplink.com"

zfs_enable="False"

# FIXME: previously auto-created?
ifconfig_lo1="create"


ifconfig_em0="DHCP SYNCDHCP"

network_interfaces="em0"
gateway_enable="YES"

# Prevent rpc
rpcbind_enable="NO"

# Prevent sendmail to try to connect to localhost
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"

# Bring up sshd, it takes some time and uses some entropy on first startup

sshd_enable="YES"

netwait_enable="YES"
netwait_ip="10.10.0.35"
netwait_if="em0"

jenkins_swarm_enable="YES"
jenkins_swarm_opts="-executors 8"

# --- Build jails ---
build_jails_enable="YES"
jail_enable="YES"

# Set rules in /etc/ipnat.rules
ipnat_enable="YES"

# Set interface name for ipnat
network_interfaces="${network_interfaces} lo1"

# Each jail needs to specify its IP address and mask bits in ipv4_addrs_lo1

ipv4_addrs_lo1="10.1.1.1/32"

jail_chflags_allow="yes"

varmfs="NO"
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---

Any insight would be deeply appreciated!

 M.


gptboot broken when compiled with clang 6 and WITHOUT_LOADER_GELI -- clang 5 is OK

2018-04-25 Thread Andre Albsmeier
I have set up a new system disk for an i386 11.2-PRERELEASE box. I did the
usual

gpart create -s gpt $disk
gpart add -t freebsd-boot -s 984 $disk
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $disk
...

thing, only to find that the box wouldn't boot. It seems to hang where
stage 2 should run -- when the '\' should start spinning, the screen
turns white and the box hangs (tested on two machines, an Asus P5W and a
Supermicro A2SAV).

So I replaced gptboot on the new disk with the one from an older machine
and everything was fine. To find out what was actually causing the
issue, I updated the old sources in several steps and recompiled
/usr/src/stand each time.

Eventually it turned out that it depends on the compiler. When compiling
the latest /usr/src/stand with clang 5.0.1, the resulting gptboot works;
with 6.0.0 it doesn't. To be exact, it's gptboot.o that causes the
problem: a gptboot.o from a clang 5 system is OK, while a gptboot.o from
a clang 6 system fails.

To add more confusion: I usually have WITHOUT_LOADER_GELI in my make.conf.
When I remove it, the resulting gptboot works even when compiled with
clang 6...
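
For reference, my rebuild/test cycle is roughly the following (the
command-line define is just a stand-in for the make.conf entry, and
-i 1 assumes the freebsd-boot partition from the gpart commands above):

cd /usr/src/stand
make clean
make -DWITHOUT_LOADER_GELI    # omit the define for the working variant
make install
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $disk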

I can reproduce this in case someone wants me to do some tests...




Re: Ryzen issues on FreeBSD ? (with sort of workaround)

2018-04-25 Thread Mike Tancsa
On 4/24/2018 8:04 PM, Don Lewis wrote:
>>
>> I was able to lock it up with vbox. bhyve was just a little easier to
>> script and also I figured it would be good to get VBox out of the mix in
>> case it was something specific to VBox.  I don't recall if I tried it
>> with SMT disabled.  Regardless, on Intel-based systems I ran these tests
>> for 72hrs straight without issue.  I can sort of believe a hardware issue
>> or flaky motherboard BIOSes (2 ASUS MBs, 1 MSI MB, 3 Ryzen chips), but
>> the fact that two server-class MBs from SuperMicro along with an Epyc
>> chip also do the same thing makes me think it's something specific to
>> FreeBSD and this class of AMD CPU :(
> 
> It would be interesting to test other AMD CPUs.  I think AMD and Intel
> have some differences in the virtualization implementations.

I don't have any Opterons handy, unfortunately. However, I have a feeling
it's not so much due to virtualization, as I am pretty sure I had a crash
when I wasn't doing any VM testing as well.
Peter Grehan speculated it might have something to do with a lot of IPIs
being generated.  Are there any non-VM workloads that will generate many
IPIs?

---Mike


-- 
---
Mike Tancsa, tel +1 519 651 3400 x203
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada


Re: Two USB 4 disk enclosure and a panic

2018-04-25 Thread Nenhum_de_Nos
On Mon, April 23, 2018 23:18, Nenhum_de_Nos wrote:
> Hi,
>
> I would like to know how to debug this. I have two 4 disk enclosures:
>
> Mediasonic ProBox 4 Bay 3.5' SATA HDD Enclosure – USB 3.0 & eSATA
> (HF2-SU3S2)
> NexStar HX4 - NST-640SU3-BK
>
> and both have 4 disks in them, and not all the disks are equal.
>
> The issue comes when I plug the ProBox USB3 enclosure into the system. I
> can't even read /var/log/messages; it crashes very quickly.
>
> I can watch the boot process up to the point where the second enclosure
> gets loaded. The 4 disks are shown on the dmesg/console, and then a
> core dump happens, the boot process goes to the debugger screen, and a
> restart happens in a flash.
>
> The motherboard is an Intel® Desktop Board D525MW with 8GB RAM.
> All disks use ZFS, 4 or 5 zpools: one raidz, one mirror, and two or three
> single-disk pools.
> FreeBSD xxx 11.1-RELEASE-p7 FreeBSD 11.1-RELEASE-p7 #1 r330596: Thu Mar  8
> 06:45:59 -03 2018 root@xxx:/usr/obj/usr/src/sys/FreeBSD-11-amd64-PF
> amd64
>
> The kernel is a slightly modified GENERIC, just to have ALTQ.
>
> How can I debug this? I have no idea. I have to use two machines to run
> all those disks, and I would really like to have just one for it.
>
> Can it be the amount of RAM? The other box is an APU2 from PC Engines and
> has 4GB RAM. apu2 uname -a: FreeBSD yyy 11.1-RELEASE-p4 FreeBSD
> 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
> r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
>
> I tried plugging the Vantec hardware into the APU2 box; there it would
> not panic, but it wouldn't load all the Vantec disks either. I've really
> run out of ideas here :(
>
> thanks.
>
> --
> "We will call you Cygnus,
> the God of balance you shall be."

Hi,

I found some logs on the daily security output:

+ZFS filesystem version: 5
+ZFS storage pool version: features support (5000)
+panic: Solaris(panic): blkptr at 0xf8000b93c848 DVA 1 has invalid VDEV 1
+cpuid = 0
+KDB: stack backtrace:
+#0 0x80ab65c7 at kdb_backtrace+0x67
+#1 0x80a746a6 at vpanic+0x186
+#2 0x80a74513 at panic+0x43
+#3 0x82623192 at vcmn_err+0xc2
+#4 0x824a73ba at zfs_panic_recover+0x5a
+#5 0x824ce893 at zfs_blkptr_verify+0x2d3
+#6 0x824ce8dc at zio_read+0x2c
+#7 0x82445fb4 at arc_read+0x6c4
+#8 0x824636a4 at dmu_objset_open_impl+0xd4
+#9 0x8247eafa at dsl_pool_init+0x2a
+#10 0x8249b093 at spa_load+0x823
+#11 0x8249a2de at spa_load_best+0x6e
+#12 0x82496a81 at spa_open_common+0x101
+#13 0x824e2879 at pool_status_check+0x29
+#14 0x824eba3d at zfsdev_ioctl+0x4ed
+#15 0x809429f8 at devfs_ioctl_f+0x128
+#16 0x80ad1f15 at kern_ioctl+0x255
+CPU: Intel(R) Atom(TM) CPU D525   @ 1.80GHz (1800.11-MHz K8-class CPU)
+avail memory = 8246845440 (7864 MB)
+Timecounter "TSC" frequency 1800110007 Hz quality 1000
+GEOM_PART: integrity check failed (ada0s1, BSD)
+GEOM_PART: integrity check failed (diskid/DISK-5LZ0ZDBBs1, BSD)
+ugen1.2:  at usbus1
+ukbd0 on uhub0
+ukbd0:  on usbus1
+kbd2 at ukbd0
+ZFS filesystem version: 5
+ZFS storage pool version: features support (5000)
+re0: link state changed to DOWN
+uhid0 on uhub0
+uhid0:  on usbus1
+ums0 on uhub0
+ums0:  on usbus1
+ums0: 3 buttons and [XYZ] coordinates ID=0
+re0: promiscuous mode enabled
+re0: link state changed to UP

From what I can see, it may be ZFS-related.

If anyone has any hints, please tell :)

I kinda got curious about this:

+ZFS storage pool version: features support (5000)

How can I figure out whether my pools are from different versions, and
might that be the culprit here?

thanks,

matheus

-- 
"We will call you Cygnus,
the God of balance you shall be."



Re: Ryzen issues on FreeBSD ? (with sort of workaround)

2018-04-25 Thread Pete French



> It would be interesting to test other AMD CPUs.  I think AMD and Intel
> have some differences in the virtualization implementations.


I've been using AMD CPUs pretty extensively since the early 90s -- back
then I was running NeXTStep, and of the three Intel alternatives it was
the only one which ran properly. I've never seen anything like this
before now, though. In particular, this Ryzen replaces the CPU of an old
Phenom II -- i.e. it's the same disc and hence the same OS and VirtualBox
config -- and that ran fine.


So there's something odd about the Ryzen here.

I'm also not sure it's to do with the virtualisation -- I got my first
lockup with no virtualisation going on at all. With SMT disabled, the
system hasn't locked up, despite heavy VirtualBox use.


-pete.