Oh and I had been doing this remotely, so I didn't notice the following error
before -
receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL PROTECTED]
cannot receive incremental stream: destination backup/shares has been modified since most recent snapshot
This is repo
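For what it's worth, the usual ways around that "destination has been
modified" error look roughly like the sketch below (the snapshot names are
placeholders, since the real ones were redacted above):

  # Either roll the destination back to its most recent snapshot before
  # receiving (this discards any local changes made since that snapshot):
  zfs rollback backup/shares@lastsnap

  # ...or let zfs receive force the rollback itself with -F:
  zfs send -i datapool/shares@snap1 datapool/shares@snap2 | \
      zfs receive -F backup/shares

  # Marking the destination read-only helps keep this from recurring,
  # since even atime updates count as modifications:
  zfs set readonly=on backup/shares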
Jens Elkner wrote:
On Tue, Oct 07, 2008 at 11:35:47AM +0530, Pramod Batni wrote:
The reason why the (implicit) truncation could be taking long might be due to
6723423 "UFS slow following large file deletion with fix for 6513858 installed".
To overcome this problem fo
Ok I'm taking a step back here. Forgetting the incremental for a minute (which
is the part causing the segmentation fault), I'm simply trying to use zfs send
-R to get a whole filesystem and all of its snapshots. I ran the following,
after creating a compressed pool called backup:
zfs send -
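The command got cut off above, but a full replication send of a filesystem
and all of its snapshots normally looks something like the sketch below
(dataset and snapshot names are placeholders, not the poster's actual ones):

  # -R packages the filesystem with its snapshots, properties and
  # descendants; -d on the receive side recreates the source layout
  # underneath the 'backup' pool.
  zfs snapshot -r datapool/shares@full
  zfs send -R datapool/shares@full | zfs receive -d backup

  # A later incremental replication of everything since @full:
  zfs snapshot -r datapool/shares@next
  zfs send -R -i @full datapool/shares@next | zfs receive -d backup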
Um... first of all, it depends on how many hard disks you have and also on how
important and necessary the data is. For my system I used 2 hard disks (160 GB
each); the first one has three pools: one for the system itself and one for my
documents, books and all my work files.
I tried some other solutions but no luck. One remaining solution was to upgrade
the system, so I took nv_96 and did a live upgrade, and everything seems to work
really nicely; the system performance is much faster. In the end I think the
cause of this whole problem is my SATA
On Wed, Oct 08, 2008 at 06:27:51PM -0400, Jim Dunham wrote:
>
> If one wants this type of mirror functionality on a single node, use
> host based or controller based mirroring software.
Is there mirroring software that can do async copies to a mirror?
-brian
Joe,
> Brian Hechinger
>> On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
>>>
>>> I wonder if an AVS-replicated storage device on the
>> backends would be appropriate?
>>>
>>> write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
>>>                                      \
>>>                                       +-is
comment below...
Janåke Rönnblom wrote:
> Hi!
>
> I have a problem with ZFS and most likely the SATA PCI-X controllers.
> I run
> opensolaris 2008.11 snv_98 and my hardware is Sun Netra x4200 M2 with
> 3 SIL3124 PCI-X controllers with 4 eSATA ports each, connected to 3 1U disk
> chassis which each hold 4 SATA
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run
opensolaris 2008.11 snv_98 and my hardware is Sun Netra x4200 M2 with
3 SIL3124 PCI-X controllers with 4 eSATA ports each, connected to 3 1U disk
chassis which each hold 4 SATA disks manufactured by Seagate, model ES.2
(500 and 750
Hello,
I haven't seen this discussed before. Any pointers would be appreciated.
I'm curious, if I have a set of disks in a system, is there any benefit
or disadvantage to breaking the disks into multiple pools instead of a
single pool?
Do multiple pools cause any additional overhead for ZFS,
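For illustration, the two layouts being compared look roughly like this
(device names are made up):

  # Single pool: all disks contribute to one pool and share free space
  zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0

  # Multiple pools: the same disks split into independent pools, each
  # with its own free space and failure domain
  zpool create data mirror c1t0d0 c1t1d0
  zpool create scratch mirror c2t0d0 c2t1d0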
How can I diagnose why a resilver appears to be hanging at a certain
percentage, seemingly doing nothing for quite a while, even though the
HDD LED is lit up permanently (no apparent head seeking)?
The drives in the pool are WD Raid Editions, thus have TLER and should
Tom Servo wrote:
>>> How can I diagnose why a resilver appears to be hanging at a certain
>>> percentage, seemingly doing nothing for quite a while, even though the
>>> HDD LED is lit up permanently (no apparent head seeking)?
>>>
>>> The drives in the pool are WD Raid Editions, thus have TLER and
Ross wrote:
> Hey folks,
>
> This might be a daft idea, but is there any way to shut down solaris / zfs
> without flushing the slog device?
>
panic, sudden power loss, sledgehammer to the motherboard :-)
> The reason I ask is that we're planning to use mirrored nvram slogs, and in
> the long
Tom Servo wrote:
>>> How can I diagnose why a resilver appears to be hanging at a certain
>>> percentage, seemingly doing nothing for quite a while, even though the
>>> HDD LED is lit up permanently (no apparent head seeking)?
>>>
>>> The drives in the pool are WD Raid Editions, thus have TLER and
>> How can I diagnose why a resilver appears to be hanging at a certain
>> percentage, seemingly doing nothing for quite a while, even though the
>> HDD LED is lit up permanently (no apparent head seeking)?
>>
>> The drives in the pool are WD Raid Editions, thus have TLER and should
>> time out on
Mario Goebbels wrote:
> How can I diagnose why a resilver appears to be hanging at a certain
> percentage, seemingly doing nothing for quite a while, even though the
> HDD LED is lit up permanently (no apparent head seeking)?
>
> The drives in the pool are WD Raid Editions, thus have TLER and shoul
I was using EMC's iorate for the comparison.
ftp://ftp.emc.com/pub/symm3000/iorate/
I had 4 processes running on the pool in parallel do 4K sequential writes.
I've also been playing around with a few other benchmark tools (I just had
results from another storage test with this same iorate tes
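For anyone who wants to approximate that workload without iorate, four
parallel 4K sequential writers can be sketched with plain dd (the mount
point, file names and sizes here are arbitrary, and this is only a rough
stand-in for the real tool):

  # Four parallel sequential writers, 4K blocks, 1 GB each.
  for i in 1 2 3 4; do
      dd if=/dev/zero of=/pool/iotest.$i bs=4k count=262144 &
  done
  wait
  # Divide total blocks written by elapsed time for a rough IOPS figure.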
On Wed, Oct 08, 2008 at 08:50:57AM -0400, Moore, Joe wrote:
>
> I've not worked with AVS other than looking at the basic concepts, but to me
> this looks like a don't-shoot-yourself-in-the-foot critical warning rather
> than an actual functionality restriction. Is there a -force option to
> ove
On Wed, Oct 8, 2008 at 10:29 AM, Ross <[EMAIL PROTECTED]> wrote:
> bounce
>
> Can anybody confirm how bug 6729696 is going to affect a busy system running
> synchronous NFS shares? Is the sync activity from NFS
> going to be enough to prevent resilvering from ever working, or have I
> mis-unders
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
>The big thing here is I ended up getting a MASSIVE boost in
>performance even with the overhead of the 1GB link, and iSCSI.
>The iorate test I was using went from 3073 IOPS on 90% sequential
>writes to 23953 IOPS w
Hey folks,
This might be a daft idea, but is there any way to shut down solaris / zfs
without flushing the slog device?
The reason I ask is that we're planning to use mirrored nvram slogs, and in the
long term hope to use a pair of 80GB ioDrives. I'd like to have a large amount
of that reserv
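As an aside, for anyone following along, a mirrored slog like the one
described is attached with something along these lines (pool and device
names are placeholders):

  # Add a mirrored separate intent log (slog) to an existing pool
  zpool add tank log mirror c3t0d0 c3t1d0

  # zpool status then shows the devices under a separate "logs" section
  zpool status tank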
Brian Hechinger
> On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
> >
> > I wonder if an AVS-replicated storage device on the
> backends would be appropriate?
> >
> > write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
> >                                      \
> >                                       +-iscsi-> ra
> Michael Hale wrote:
> 1. The Comments field asks that the core dump be made readable by our
> zfs group, and the CR was made incomplete until the person who saved the
> core does this.
Hi Michael,
I can reproduce a core dump by destroying a snapshot on our system. Look
at http://opensolaris
Hi,
I try to destroy a snapshot1 on opensolaris
SunOS storage11 5.11 snv_98 i86pc i386 i86pc
and my box reboots, leaving a crash file in /var/crash/storage11.
This is reproducible... for this one snapshot1; other
snapshots were destroyable (without a crash).
How can I help somebody to track down th
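A sketch of the usual first steps for turning that crash into something a
developer can act on (paths follow the defaults, so adjust to your dumpadm
settings; N is the dump number):

  # Make sure the dump has been extracted from the dump device
  savecore -v

  # Basic triage on the resulting crash dump
  cd /var/crash/storage11
  mdb -k unix.N vmcore.N
    > ::status      # panic string and summary
    > ::stack       # stack trace of the panicking thread
    > ::msgbuf      # console messages leading up to the panic

Attaching the ::status and ::stack output (or making the dump readable by
the zfs group, as requested in the CR) is usually what is needed.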
How can I diagnose why a resilver appears to be hanging at a certain
percentage, seemingly doing nothing for quite a while, even though the
HDD LED is lit up permanently (no apparent head seeking)?
The drives in the pool are WD Raid Editions, thus have TLER and should
time out on errors in just se
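A few commands that usually show whether a "stuck" resilver is actually
making progress (pool name is a placeholder; run them a couple of minutes
apart and compare):

  # Scan progress; the resilver line should show the percentage and the
  # amount examined creeping up between runs
  zpool status -v tank

  # Per-disk I/O; a drive retrying internally tends to show a high
  # asvc_t and %b while doing few actual reads/writes
  iostat -xnz 5

  # Any disk or checksum errors the fault management framework has seen
  fmdump -eV | tail -50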
bounce
Can anybody confirm how bug 6729696 is going to affect a busy system running
synchronous NFS shares? Is the sync activity from NFS going to be enough to
prevent resilvering from ever working, or have I mis-understood this bug?
thanks,
Ross