At Mon, 17 Oct 2016 17:43:25 +0200,
Marko Cupać wrote:
>
> On Mon, 17 Oct 2016 15:55:07 +0200
> Oliver Peter wrote:
>
> > On Mon, Oct 17, 2016 at 03:37:08PM +0200, Marko Cupać wrote:
> > I have a 10.3 host which runs a dozen or so ezjail-based jails. I have
> > installed another 11.0 host, and I'd like to move jails to it.
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/429/
On 17/10/2016 22:50, Karl Denninger wrote:
I will make some effort on the sandbox machine to see if I can come up
with a way to replicate this. I do have plenty of spare larger drives
lying around that used to be in service and were retired due to
capacity -- but what I don't know is whether the system will misbehave
if the source is a
On 17/10/2016 20:52, Andriy Gapon wrote:
On 17/10/2016 21:54, Steven Hartland wrote:
You're hitting stack exhaustion; have you tried increasing the kernel stack
pages?
It can be changed in /boot/loader.conf
kern.kstack_pages="6"
Default on amd64 is 4 IIRC
Steve,
perhaps you can think of a more proper fix? :-)
Setting those values will only affect what's queued to the device, not
what's actually outstanding.
On 17/10/2016 21:22, Karl Denninger wrote:
Since I cleared it (by setting TRIM off on the test machine, rebooting,
importing the pool and noting that it did not panic -- pulled the drives,
re-inserted them into the production machine and ran the backup routine --
all was normal) it may be a while before I see it again (a week or so is usual).
It appear
It would be good to confirm it's not an infinite loop by giving it a good bump first.
On 17/10/2016 19:58, Karl Denninger wrote:
I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?
On 10/17/2016 13:54, Steven Hartland wrote:
You're hitting stack exhaustion; have you tried increasing the kernel
stack pages?
On 17/10/2016 21:54, Steven Hartland wrote:
> You're hitting stack exhaustion; have you tried increasing the kernel stack
> pages?
> It can be changed in /boot/loader.conf
> kern.kstack_pages="6"
>
> Default on amd64 is 4 IIRC
Steve,
perhaps you can think of a more proper fix? :-)
https://lis
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/428/
I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?
On 10/17/2016 13:54, Steven Hartland wrote:
> You're hitting stack exhaustion; have you tried increasing the kernel
> stack pages?
> It can be changed in /boot/loader.conf
> kern.kstack_pages="6"
You're hitting stack exhaustion; have you tried increasing the kernel
stack pages?
It can be changed in /boot/loader.conf
kern.kstack_pages="6"
Default on amd64 is 4 IIRC
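A minimal sketch of the change being suggested, assuming the stock spelling
of the loader tunable (the value 6 is the one quoted above, not a general
recommendation):

  # check the current kernel stack size, in pages per kernel thread
  sysctl kern.kstack_pages
  # raise it at boot time; kern.kstack_pages is a loader tunable, so add
  # it to /boot/loader.conf and reboot
  echo 'kern.kstack_pages="6"' >> /boot/loader.conf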
On 17/10/2016 19:08, Karl Denninger wrote:
The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA rotating rust drives (zmirror) with each provider geli-encrypted
(that is, the actual devices used for the pool create are the .eli's)
The machine generating the problem has both rotating rust devices *and*
SSDs, so I can't
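A minimal sketch of the layout described above, i.e. a ZFS mirror created
directly on geli providers; the device names, sector size and key handling
are illustrative only, not the actual configuration:

  # encrypt each backup disk (geli prompts for a passphrase by default)
  geli init -s 4096 /dev/ada1
  geli init -s 4096 /dev/ada2
  geli attach /dev/ada1
  geli attach /dev/ada2
  # the pool is built on the .eli providers, not the raw disks
  zpool create backup mirror ada1.eli ada2.eli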
What's your underlying media?
Warner
On Mon, Oct 17, 2016 at 10:02 AM, Karl Denninger wrote:
> Update from my test system:
>
> Setting vfs.zfs.vdev_trim_max_active to 10 (from default 64) does *not*
> stop the panics.
>
> Setting vfs.zfs.vdev.trim.enabled = 0 (which requires a reboot) DOES
> stop the panics.
On Mon, 17 Oct 2016 03:44:14 +0300
Rostislav Krasny wrote:
> First of all I faced an old problem that I reported here a year ago:
> http://comments.gmane.org/gmane.os.freebsd.stable/96598
> A completely new USB flash drive flashed with the
> FreeBSD-11.0-RELEASE-i386-mini-memstick.img file kills ever
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/427/
Marko Cupać wrote on 2016/10/17 17:43:
[...]
Now, to answer my own question: everything works fine after - of
course - reinstalling all the packages with `pkg-static upgrade -f'.
Reinstalling all packages?
So you didn't just move jails from one host to another but you upgraded
them
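For reference, a minimal sketch of the forced reinstall being referred to,
assuming it is run inside each migrated jail (pkg-static is the statically
linked binary, so it keeps working even if the installed packages were
built for the older release):

  # inside each jail: refresh the catalogue, then force reinstallation
  # of every installed package
  pkg-static update -f
  pkg-static upgrade -f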
Update from my test system:
Setting vfs.zfs.vdev_trim_max_active to 10 (from default 64) does *not*
stop the panics.
Setting vfs.zfs.vdev.trim.enabled = 0 (which requires a reboot) DOES
stop the panics.
I am going to run a scrub on the pack, but I suspect the pack itself
(now that I can actually
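For reference, a minimal sketch of the two knobs being compared; the OID
spellings below are assumed from stock 10.x/11.x and differ slightly from
the names quoted above, so verify them on the machine first:

  # list the TRIM-related tunables actually present on this system
  sysctl -a | grep -i trim
  # cap the number of TRIM requests active per vdev (runtime-settable)
  sysctl vfs.zfs.vdev.trim_max_active=10
  # or disable ZFS TRIM entirely; this one is a loader tunable, so put
  # it in /boot/loader.conf and reboot
  vfs.zfs.trim.enabled=0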
On Mon, 17 Oct 2016 15:55:07 +0200
Oliver Peter wrote:
> On Mon, Oct 17, 2016 at 03:37:08PM +0200, Marko Cupać wrote:
> > I have a 10.3 host which runs a dozen or so ezjail-based jails. I have
> > installed another 11.0 host, and I'd like to move jails to it.
>
> I would switch them to iocage+zfs; ezjail is sooo 90s. :)
> Index: head/usr.sbin/freebsd-update/freebsd-update.sh
> ===================================================================
> --- head/usr.sbin/freebsd-update/freebsd-update.sh (revision 279900)
> +++ head/usr.sbin/freebsd-update/freebsd-update.sh (revision 279901)
> @@ -1231,7 +1231,7 @@
On 10/17/16 11:02, George Mitchell wrote:
> [...]
> After setting hw.sdhci.debug=1 and hw.mmc.debug=1 in /etc/sysctl.conf
> and doing a verbose boot, then inserting and removing an SD card, all
> I get in "dmesg | egrep mmc\|sdhci" is:
>
> sdhci_pci0: mem 0xf0c6c000-0xf0c6c0ff irq 16 at device
>
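For reference, a compact version of the debug setup described above (the
sysctl names are the ones from the message; setting them at runtime instead
of /etc/sysctl.conf is an assumption, but it avoids a reboot):

  # enable SD host controller and MMC/SD stack debug output
  sysctl hw.sdhci.debug=1
  sysctl hw.mmc.debug=1
  # reinsert the card, then look for controller and card messages
  dmesg | egrep 'mmc|sdhci'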
Oliver Peter wrote:
>
> On Mon, Oct 17, 2016 at 03:37:08PM +0200, Marko Cupać wrote:
>> I have a 10.3 host which runs a dozen or so ezjail-based jails. I have
>> installed another 11.0 host, and I'd like to move jails to it.
>
> I would switch them to iocage+zfs; ezjail is sooo 90s. :)
> Have a look at the documentation:
On 10/16/16 17:40, George Mitchell wrote:
> On 10/16/16 14:16, Warner Losh wrote:
>> On Sun, Oct 16, 2016 at 12:08 PM, Warren Block wrote:
>>> On Sun, 16 Oct 2016, George Mitchell wrote:
>>>
> So not only is it (apparently) recognized, but the sdhci_pci driver
> attached to it! But insert
This is a situation I've had happen before, and reported -- it appeared
to be a kernel stack overflow, and it has gotten materially worse on
11.0-STABLE.
The issue occurs after some period of time (normally a week or so.) The
system has a mirrored pair of large drives used for backup purposes to
On Mon, Oct 17, 2016 at 03:37:08PM +0200, Marko Cupać wrote:
> I have a 10.3 host which runs a dozen or so ezjail-based jails. I have
> installed another 11.0 host, and I'd like to move jails to it.
I would switch them to iocage+zfs; ezjail is sooo 90s. :)
Have a look at the documentation:
h
On Mon, Oct 17, 2016 at 3:39 PM, Rostislav Krasny wrote:
> On Mon, Oct 17, 2016 at 3:31 PM, krad wrote:
>>
>> Does this just affect MBR layouts? If possible you might want to consider
>> UEFI booting for both Windows and other OSes. It's probably safer as you
>> don't need to play with partitions and bootloaders.
Hi,
I have a 10.3 host which runs a dozen or so ezjail-based jails. I have
installed another 11.0 host, and I'd like to move jails to it.
Can I just archive jails on 10.3, scp them to 11.0, and re-create them
there by restoring from archive (-a switch)?
Are there any additional actions I should pe
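A minimal sketch of the move being asked about, assuming ezjail's
archive/restore workflow; the jail name, IP and archive filename are
illustrative, and the exact ezjail-admin flags should be checked against
ezjail-admin(8) on both hosts:

  # on the 10.3 host: stop the jail and archive it
  ezjail-admin stop myjail
  ezjail-admin archive myjail
  scp /usr/jails/ezjail_archives/myjail-*.tar.gz newhost:/usr/jails/ezjail_archives/
  # on the 11.0 host: re-create the jail from the archive with -a
  ezjail-admin create -a /usr/jails/ezjail_archives/myjail-<timestamp>.tar.gz myjail 192.0.2.10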
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/426/
After rebasing some of my systems from r305866 to r307312
(plus local patches) I noticed that most of the ARC accesses
are counted as misses now.
Example:
[fk@elektrobier2 ~]$ uptime
2:03PM up 1 day, 18:36, 7 users, load averages: 0.29, 0.36, 0.30
[fk@elektrobier2 ~]$ zfs-stats -E
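For comparison, the raw counters that tools like zfs-stats derive their
efficiency figures from can be read directly; a minimal sketch, assuming
the usual kstat sysctl names on FreeBSD:

  # overall ARC hit/miss counters
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
  # demand data vs. metadata breakdown
  sysctl kstat.zfs.misc.arcstats.demand_data_hits \
         kstat.zfs.misc.arcstats.demand_data_misses \
         kstat.zfs.misc.arcstats.demand_metadata_hits \
         kstat.zfs.misc.arcstats.demand_metadata_misses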
On Mon, Oct 17, 2016 at 3:31 PM, krad wrote:
>
> Does this just affect MBR layouts? If possible you might want to consider
> UEFI booting for both Windows and other OSes. It's probably safer as you
> don't need to play with partitions and bootloaders.
This is an old computer that doesn't support
Does this just affect MBR layouts? If possible you might want to consider
UEFI booting for both Windows and other OSes. It's probably safer as you
don't need to play with partitions and bootloaders.
On 17 October 2016 at 13:11, Rostislav Krasny wrote:
> On 17.10.2016 11:57:16 +0500, Eugene M. Zheganin wrote:
On 17.10.2016 11:57:16 +0500, Eugene M. Zheganin wrote:
> Hi.
>
> On 17.10.2016 5:44, Rostislav Krasny wrote:
> > Hi,
> >
> > I've been using FreeBSD for many years. Not as my main operating
> > system, though. But anyway, several bugs and patches were contributed
> > and somebody even added my name
> On 17 Oct 2016, at 02:44, Rostislav Krasny wrote:
>
> Hi,
>
> First of all I faced an old problem that I reported here a year ago:
> http://comments.gmane.org/gmane.os.freebsd.stable/96598
> A completely new USB flash drive flashed with the
> FreeBSD-11.0-RELEASE-i386-mini-memstick.img file kills