Matthew Dillon wrote:
This issue is vexing a lot of people.
Heh... I can appreciate this. I would like someone to inform me that
this can't be guaranteed to be a ZFS problem... if I can get
confirmation that others have this issue aside from ZFS, I would feel
content.
Setting the
Laptop details :
HP Pavilion dv2000 (dv2422ca)
Specifications (taken from http://h10025.www1.hp.com/ewfrf/wc/document?cc=au&docname=c01070158&dlc=en&lc=en&jumpid=reg_R1002_AUEN ):
Product Name: dv2422ca
Product Number: GM039UA#ABC / GM039UA#ABL
Microprocessor: 1.8 GHz AMD Tur
Andrew Snow wrote:
From Western Digital's line of "enterprise" drives:
"RAID-specific time-limited error recovery (TLER) - Pioneered by WD,
this feature prevents drive fallout caused by the extended hard drive
error-recovery processes common to desktop drives."
Therefore I think the FreeBS
:...
:> and see if the problem reoccurs with just two drives.
:
:... I knew that was going to come up... my response is "I worked so hard
:to get this system with ZFS all configured *exactly* how I wanted it".
:
:To test, I'm going to flip to 30 as per Matthew's recommendation, and see
:how f
> From: "John Sullivan" <[EMAIL PROTECTED]>
> Date: Tue, 15 Jul 2008 10:58:19 +0100
> Sender: [EMAIL PROTECTED]
>
> I am experiencing 'random' reboots interspersed with panics whenever I put a
> newly installed system under load (make index in
> /usr/ports is enough). A sample panic is at the en
Jeremy Chadwick wrote:
On Tue, Jul 15, 2008 at 10:29:28PM -0400, Steve Bertrand wrote:
Is there anyone interested to the point where remote login would be helpful?
I believe my FreeBSD Wiki page documents what to do if your problem
is easily reproducible: contact Scott Long, who has offered to
Alex Trull wrote:
Don't want to give conflicting advice, and would suggest you certainly
try the 30 sec thing first. I'm already on 10 myself but haven't pushed
further.
What were you doing, and what did you notice when the problem started?
As much as it seems silly, I'm mostly interested in w
On Tue, Jul 15, 2008 at 10:29:28PM -0400, Steve Bertrand wrote:
> Is there anyone interested to the point where remote login would be helpful?
I believe my FreeBSD Wiki page documents what to do if your problem
is easily reproducible: contact Scott Long, who has offered to help
track down the sour
Matthew Dillon wrote:
:Went from 10->15, and it took quite a bit longer into the backup before
:the problem cropped back up.
Jumping right into it, there is another post after this one, but I'm
going to try to reply inline:
Try 30 or longer. See if you can make the problem go away enti
Matthew Dillon wrote:
Try that first. If it helps then it is a known issue. Basically
a combination of the on-disk write cache and possible ECC corrections,
remappings, or excessive remapped sectors can cause the drive to take
much longer than normal to complete a request. The
Jeremy Chadwick wrote:
On Tue, Jul 15, 2008 at 11:47:57AM -0400, Sven Willenberger wrote:
On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
ZFS's send/recv capability (over a network) is something I didn't have
time to experiment with, but it looked *very* promising. The method is
docu
On Tue, Jul 15, 2008 at 11:47:57AM -0400, Sven Willenberger wrote:
> On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
> > ZFS's send/recv capability (over a network) is something I didn't have
> > time to experiment with, but it looked *very* promising. The method is
> > documented in the
On Tue, Jul 15, 2008 at 07:10:05PM +0200, Kris Kennaway wrote:
> Wesley Shields wrote:
>> On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
>>> One of the "annoyances" to ZFS snapshots, however, was that I had to
>>> write my own script to do snapshot rotations (think incremental dum
Since upgrading to 7.0 Stable, I've noticed an occasional problem with
konqueror. I've been recompiling my ports for the past few weeks and have
noticed that some sites are complaining about cookies not being enabled.
Further investigation has revealed that if I start konqueror from the
termina
Don't want to give conflicting advice, and would suggest you certainly
try the 30 sec thing first. I'm already on 10 myself but haven't pushed
further.
In my own case I've not had any issue with zfs in particular since I
applied the ZFS zil/prefetch disable loader.conf tunables 10 hours ago.
I am
[EMAIL PROTECTED] wrote:
>> #9 0x8067d3ee in uma_zalloc_arg (zone=0xff00bfed07e0,
udata=0x0,
flags=-256) at /usr/src/sys/vm/uma_core.c:1835
From the frame #9, please do
p *zone
I am esp. interested in the value of the uz_ctor member.
It seems that it becomes corrupted; its valu
> Do the "frame 9" before "p *zone".
It's obvious now you say it ;-)
You are indeed right:
(kgdb) frame 9
#9 0x8067d3ee in uma_zalloc_arg (zone=0xff00bfed07e0, udata=0x0,
flags=-256) at /usr/src/sys/vm/uma_core.c:1835
1835 uma_dbg_alloc(zone
:Went from 10->15, and it took quite a bit longer into the backup before
:the problem cropped back up.
Try 30 or longer. See if you can make the problem go away entirely.
then fall back to 5 and see if the problem resumes at its earlier
pace.
--
It could be temperature rela
On Tue, Jul 15, 2008 at 08:47:03PM +0100, [EMAIL PROTECTED] wrote:
>
>
> >> #9 0x8067d3ee in uma_zalloc_arg (zone=0xff00bfed07e0,
> >>udata=0x0,
> >>flags=-256) at /usr/src/sys/vm/uma_core.c:1835
> >From the frame #9, please do
> >p *zone
> >I am esp. interested in the value of the
On Tue, Jul 15, 2008 at 08:19:15PM +0100, [EMAIL PROTECTED] wrote:
>
>
> > Please collect kgdb/ddb backtraces.
>
> kgdb backtrace:
>
> server251# kgdb -c /var/crash/vmcore.0
> kgdb: couldn't find a suitable kernel image
> server251# kgdb /boot/kernel/kernel /var/crash/vmcore.0
> kgdb: kvm
>> #9 0x8067d3ee in uma_zalloc_arg (zone=0xff00bfed07e0,
udata=0x0,
flags=-256) at /usr/src/sys/vm/uma_core.c:1835
From the frame #9, please do
p *zone
I am esp. interested in the value of the uz_ctor member.
It seems that it becomes corrupted; its value should be 0, as this see
[EMAIL PROTECTED] wrote:
(kgdb) backtrace
#0 doadump () at pcpu.h:194
#1 0xff0004742440 in ?? ()
#2 0x80477699 in boot (howto=260)
at /usr/src/sys/kern/kern_shutdown.c:409
#3 0x80477a9d in panic (fmt=0x104 )
at /usr/src/sys/kern/kern_shutdown.c:563
#4 0x8
Steve Bertrand wrote:
Matthew Dillon wrote:
If you are getting DMA timeouts, go to this URL:
http://wiki.freebsd.org/JeremyChadwick/ATA_issues_and_troubleshooting
Then I would suggest going into /usr/src/sys/dev/ata (I think, on
FreeBSD), locate all instances where request->ti
> Please collect kgdb/ddb backtraces.
kgdb backtrace:
server251# kgdb -c /var/crash/vmcore.0
kgdb: couldn't find a suitable kernel image
server251# kgdb /boot/kernel/kernel /var/crash/vmcore.0
kgdb: kvm_read: invalid address (0xff00010e5468)
[GDB will not be able to debug user-mode t
Matthew Dillon wrote:
If you are getting DMA timeouts, go to this URL:
http://wiki.freebsd.org/JeremyChadwick/ATA_issues_and_troubleshooting
Then I would suggest going into /usr/src/sys/dev/ata (I think, on
FreeBSD), locate all instances where request->timeout is set to 5,
Matthew Dillon wrote:
If you are getting DMA timeouts, go to this URL:
Yes, I am.
http://wiki.freebsd.org/JeremyChadwick/ATA_issues_and_troubleshooting
I fall under the category of "ATA/SATA DMA timeout issues".
Then I would suggest going into /usr/src/sys/dev/ata (I think, o
:Oliver Fromme wrote:
:
:> Yet another way would be to use DragonFly's "Hammer" file
:> system which is part of DragonFly BSD 2.0 which will be
:> released in a few days. It supports remote mirroring,
:> i.e. mirror source and mirror target can run on different
:> machines. Of course it is still
OK, will put on my todo list :)
On Tue, Jul 15, 2008 at 10:31 AM, <[EMAIL PROTECTED]> wrote:
> At Tue, 15 Jul 2008 10:07:22 -0700,
> Jack Vogel wrote:
>>
>> Oh, so the problem is if igb alone is defined?
>>
>
> Yes.
>
> Best,
> George
>
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
At Tue, 15 Jul 2008 10:07:22 -0700,
Jack Vogel wrote:
>
> Oh, so the problem is if igb alone is defined?
>
Yes.
Best,
George
Hi,
The problem started when I installed a Kodicom 4400 card and started to run
zoneminder.
Prior to that no problems with my machine, which now runs
FreeBSD panix.internal.net 7.0-RELEASE-p3 FreeBSD 7.0-RELEASE-p3 #3: Mon Jul 14
16:35:37 EEST 2008
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/GEN
:Hi everyone,
:
:I'm wondering if the problems described in the following link have been
:resolved:
:
:http://unix.derkeiler.com/Mailing-Lists/FreeBSD/stable/2008-02/msg00211.html
:
:I've got four 500GB SATA disks in a ZFS raidz pool, and all four of them
:are experiencing the behavior.
:
:The p
Wesley Shields wrote:
On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
One of the "annoyances" to ZFS snapshots, however, was that I had to
write my own script to do snapshot rotations (think incremental dump(8)
but using ZFS snapshots).
There is a PR[1] to get something like t
Oh, so the problem is if igb alone is defined?
On Tue, Jul 15, 2008 at 10:04 AM, <[EMAIL PROTECTED]> wrote:
> At Mon, 14 Jul 2008 14:53:16 -0700,
> Jack Vogel wrote:
>>
>> Just guessing, did someone change conf/files maybe??
>>
>
> If you build a STABLE kernel with igb AND em then things work an
At Mon, 14 Jul 2008 14:53:16 -0700,
Jack Vogel wrote:
>
> Just guessing, did someone change conf/files maybe??
>
If you build a STABLE kernel with igb AND em then things work and the
kernel uses em.
I'm not sure which thing needs to be changed in conf/files or
otherwise though.
Later,
George
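George's workaround above, building igb together with em so the kernel links and works, might look like the following kernel config fragment (a sketch based on the thread, not a tested configuration):

```
# Declaring both drivers avoids the build problem reported with igb alone;
# per George's report the resulting kernel uses em.
device  em      # Intel PRO/1000 gigabit
device  igb     # Intel 82575/82576 gigabit
```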
On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
> One of the "annoyances" to ZFS snapshots, however, was that I had to
> write my own script to do snapshot rotations (think incremental dump(8)
> but using ZFS snapshots).
There is a PR[1] to get something like this in the ports tre
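The rotation idea described above (incremental dump(8)-style retention, but with ZFS snapshots) can be sketched in a few lines of sh. The dataset and snapshot names below are made up, and DRY_RUN=1 makes the script print the zfs commands instead of executing them:

```shell
# Keep only the $KEEP newest snapshots of a dataset, destroying the rest.
DRY_RUN=1
KEEP=2
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Oldest-first snapshot list; in real use this would come from:
#   zfs list -H -t snapshot -o name -s creation -r tank/home
snaps="tank/home@mon
tank/home@tue
tank/home@wed
tank/home@thu"

total=$(printf '%s\n' "$snaps" | wc -l)
drop=$((total - KEEP))
printf '%s\n' "$snaps" | head -n "$drop" | while read -r s; do
    run zfs destroy "$s"
done
```

With the sample list above, the two oldest snapshots (@mon and @tue) are selected for destruction and the two newest are kept.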
Hi everyone,
I'm wondering if the problems described in the following link have been
resolved:
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/stable/2008-02/msg00211.html
I've got four 500GB SATA disks in a ZFS raidz pool, and all four of them
are experiencing the behavior.
The problem on
> You install a filer cluster with two nodes. Then there is
> no single point of failure.
Yes, that would be my choice too. Unfortunately it didn't get
done that way. Mind you, the solution we do have is something
I am actually pretty happy with - it's cheap and does the job.
We never wanted 100%
Ronald Klop wrote:
> I just upgraded a machine from FreeBSD 6 to 7. Very nice.
> But my portupgrade -fa failed after a while.
> How can I know which ports/packages are still from FreeBSD 6? Is there a
> date recorded somewhere, or the FreeBSD version of the port/package?
> The date of the fi
Oliver Fromme wrote:
Yet another way would be to use DragonFly's "Hammer" file
system which is part of DragonFly BSD 2.0 which will be
released in a few days. It supports remote mirroring,
i.e. mirror source and mirror target can run on different
machines. Of course it is still very new and exp
Pete French wrote:
> I am not the original poster, but I am doing something very similar and
> can answer that question for you. Some people get paranoid about the
> whole "single point of failure" thing. I originally suggested that we buy
> a filer and have identical servers so if one breaks
On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
> On Tue, Jul 15, 2008 at 10:07:14AM -0400, Sven Willenberger wrote:
> > 3) The send/recv feature of zfs was something I had not even considered
> > until very recently. My understanding is that this would work by a)
> > taking a snapshot of
Jo Rhett <[EMAIL PROTECTED]> wrote:
> About 10 days ago one of my personal machines started hanging at
> random. This is the first bit of instability I've ever experienced on
> this machine (2+ years running)
>
> FreeBSD triceratops.netconsonance.com 6.2-RELEASE-p11 FreeBSD 6.2-
> RELE
Sven Willenberger wrote:
> [...]
> 1) I have been using ggated/ggatec on a set of 6.2-REL boxes and find
> that ggated tends to fail after some time leaving me rebuilding the
> mirror periodically (and gmirror resilvering takes quite some time). Has
> ggated/ggatec performance and stability im
> However, I must ask you this: why are you doing things the way you are?
> Why are you using the equivalent of RAID 1 but for entire computers? Is
> there some reason you aren't using a filer (e.g. NetApp) for your data,
> thus keeping it centralised?
I am not the original poster, but I am doing
Jeremy Chadwick wrote:
Compared to UFS2 snapshots (e.g. dump -L or mksnap_ffs), ZFS snapshots
are fantastic. The two main positives for me were:
1) ZFS snapshots take significantly less time to create; I'm talking
seconds or minutes vs. 30-45 minutes. I also remember receiving mail
from someo
On Tue, Jul 15, 2008 at 10:07:14AM -0400, Sven Willenberger wrote:
> 3) The send/recv feature of zfs was something I had not even considered
> until very recently. My understanding is that this would work by a)
> taking a snapshot of master_data1 b) zfs sending that snapshot to
> slave_data1 c) via
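The a)/b)/c) steps Sven outlines map onto a short command sequence. Pool, dataset, and host names here are hypothetical, and DRY_RUN=1 prints the commands rather than running them:

```shell
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# a) take a snapshot of the master dataset
run zfs snapshot master_data1@backup1

# b)+c) send the snapshot and receive it on the slave over ssh
run sh -c "zfs send master_data1@backup1 | ssh slave zfs recv -F slave_data1"

# Later runs send only the delta between two snapshots (incremental):
run sh -c "zfs send -i backup1 master_data1@backup2 | ssh slave zfs recv slave_data1"
```

The incremental form is what makes this attractive for periodic master-to-slave replication: after the first full send, each cycle transfers only the blocks changed since the previous snapshot.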
With the introduction of zfs to FreeBSD 7.0, a door has opened for more
mirroring options so I would like to get some opinions on what direction
I should take for the following scenario.
Basically I have 2 machines that are "clones" of each other (master and
slave) wherein one will be serving up s
Hey,
Did you hear about Scour? It is the next gen search engine with
Google/Yahoo/MSN results and user comments all on one page. Best of all we get
paid for using it by earning points with every search, comment and vote. The
points are redeemable for Visa gift cards! It's like earning credit ca
On Mon, 2008-07-14 at 11:29 +0300, Danny Braniss wrote:
> > FreeBSD 7.0
> >
> > I have 2 machines with identical configurations/hardware, let's call them A
> > (master)
> > and B (slave). I have installed iscsi-target from ports and have set up 3
> > targets
> > representing the 3 drives I wis
John Sullivan wrote:
Can the system in question run memtest86+ successfully (no
errors) for an hour? It would help diminish (but not
entirely rule out) hardware (memory or chipset) issues.
Sorry, forgot to mention, I ran memtest overnight without any problem
reported. I ran Fedora 9 for
> Can the system in question run memtest86+ successfully (no
> errors) for an hour? It would help diminish (but not
> entirely rule out) hardware (memory or chipset) issues.
Sorry, forgot to mention, I ran memtest overnight without any problem
reported. I ran Fedora 9 for a month without a
On Tue, Jul 15, 2008 at 10:58:19AM +0100, John Sullivan wrote:
> I am experiencing 'random' reboots interspersed with panics whenever I put a
> newly installed system under load (make index in
> /usr/ports is enough). A sample panic is at the end of this email.
>
> I have updated to 7.0-RELEASE
I am experiencing 'random' reboots interspersed with panics whenever I put a
newly installed system under load (make index in
/usr/ports is enough). A sample panic is at the end of this email.
I have updated to 7.0-RELEASE-p2 using the GENERIC amd64 kernel and it is still
the same. The system