Richard,
> Applications can take advantage of this and there are services available
> to integrate ZFS snapshots with Oracle databases, Windows clients, etc.
Which services are you referring to?
Best regards.
Maurilio.
By the way,
there are more than fifty bugs logged against marvell88sx, many of them about
problems with DMA handling and/or driver behaviour under stress.
Could it be that I'm running into something along these lines?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6826483
Maurilio.
Richard,
it is the same controller used inside Sun's Thumpers; it could be a problem with
my unit (which is a couple of years old now), though.
Is there something I can do to find out if I owe you that steak? :)
Thanks.
Maurilio.
Richard,
thanks for the explanation.
So can we say that the problem is the disks losing a command now and then
under stress?
Best regards.
Maurilio.
Erratum:
they're ST31000333AS, not ST31000340AS.
Maurilio.
Carson,
they're Seagate ST31000340AS drives with firmware release CC1H which, from a quick
search, should have no known firmware bugs.
Anyway, limiting the queue depth (NCQ) to 1 with
# echo zfs_vdev_max_pending/W0t1 | mdb -kw
did not solve the problem :(
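For completeness, a sketch of how to check and persist that tunable (standard mdb and /etc/system conventions, nothing specific to this box):
# echo zfs_vdev_max_pending/D | mdb -k
prints the value currently in the kernel, and adding
set zfs:zfs_vdev_max_pending = 1
to /etc/system keeps the setting across reboots.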
Maurilio.
Milek,
here it is:
# iostat -En
c1t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA      Product: ST3808110AS      Revision: D    Serial No:
Size: 80,03GB <80026361856 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 91 Predictive F
> Possible, but less likely. I'd suggest running some
> disk I/O tests, looking at
> the drive error counters before/after.
>
These disks are only a few months old and are scrubbed weekly; no errors so far.
I did try smartmontools, but it can neither report the SMART logs nor start SMART
tests.
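For reference, this is roughly how I'd do the before/after comparison suggested above (the device name is just a placeholder for one of the pool disks):
# iostat -En > /tmp/err.before
# dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=10000
# iostat -En > /tmp/err.after
# diff /tmp/err.before /tmp/err.after
Any growth in the Soft/Hard/Transport error counters between the two snapshots would point at the drives or the path rather than at ZFS.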
Carson,
the strange thing is that this is happening on several disks (could it be that
they are all failing?).
Which controller bug are you talking about? I'm running snv_114 on this
PC, so it is fairly recent.
Best regards.
Maurilio.
Hi,
I have a PC with a Marvell-based AOC-SAT2-MV8 controller and a pool made up of six
disks in a raidz configuration with a hot spare.
-bash-3.2$ /sbin/zpool status
pool: nas
state: ONLINE
scrub: scrub in progress for 9h4m, 81,59% done, 2h2m to go
config:
NAME        STATE     READ WRITE CKSUM
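For context, a pool like this would have been created with something along these lines (the device names below are placeholders, not the real ones):
# zpool create nas raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 spare c1t6d0
i.e. a single six-disk raidz vdev plus one hot spare.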
> Neither.
> It'll send all necessary data (without having to
> promote anything) so
> that the receiving zvol has a working vol1, and it's
> not a clone.
Fajar,
thanks for clarifying; this is what I was calling 'promotion':
it is as if a "promotion" happened on the receiving side.
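To make it concrete, a small sketch with made-up names (pool/vol1 is a clone of pool/base@snap1; backup is the destination pool):
# zfs get origin pool/vol1                  (shows pool/base@snap1 on the sender)
# zfs snapshot pool/vol1@xfer
# zfs send pool/vol1@xfer | ssh otherhost zfs receive backup/vol1
# zfs get origin backup/vol1                (run on otherhost, shows "-")
A plain (non-incremental) send carries all the blocks the clone shares with its origin, so the received zvol stands on its own and is not a clone of anything.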
Maurilio.
Hi,
I have a question: let's say I have a zvol named vol1 which is a clone of a
snapshot of another zvol (its origin property is tank/my...@mysnap).
If I send this zvol to a different zpool through zfs send, does it send the
origin too? That is, does an automatic promotion happen, or do I end up
Robin,
the LSI 3041E-R and 3081E-R are PCI Express 4- and 8-port SATA cards; they are not
hot-swap capable, as far as I know, but they work very well in JBOD (I'm using
several of them) and they're not too expensive.
See this:
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas304
Hi,
I'd like to have it fixed as well; I'm having the same problem with 20 zvols
which are Windows XP images exported through iSCSI. They are auto-snapshotted
every hour/day/month; right now I've got nearly 1500 snapshots, and booting this
4-core Xeon with 8GB of RAM and 8 disks on four pairs o
I forgot to mention that this is a
SunOS biscotto 5.11 snv_111a i86pc i386 i86pc
system.
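In case it helps, this is roughly how I keep an eye on it (the auto-snapshot FMRI below is the stock one from Tim Foster's service; check svcs -a for the exact instance names on your build):
# zfs list -t snapshot -H -o name | wc -l
# svcs | grep auto-snapshot
# svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
Disabling an instance only stops new snapshots being taken; the nearly 1500 existing ones still have to age out or be destroyed before anything gets faster.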
Maurilio.
Hi,
I have a PC where a pool suffered a disk failure. I replaced the failed disk
and the pool resilvered, but after resilvering it was in this state:
mauri...@biscotto:~# zpool status iscsi
pool: iscsi
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
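For reference, the sequence I understand is usually suggested in this situation (a sketch; whether it applies depends on the full status output):
# zpool clear iscsi
# zpool scrub iscsi
# zpool status -x iscsi
zpool clear resets the error counters logged during the failure, and a clean scrub afterwards confirms the pool is healthy again.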
Hi,
I'd like to understand a thing or two ... :)
I have a zpool on which I've created a zvol; I then snapshotted the zvol and
created a clone from that snapshot.
Now, what happens if I do a
zfs send mycl...@mysnap > myfile?
I mean, is this stream enough to recover the clone (does i
Eric,
thanks for the hint.
Maurilio.
Hi,
I'm trying to expand a raidz pool made up of six drives by replacing them one at a
time with bigger disks and waiting for the resilver, on an snv_114 system.
While resilvering, fmd writes 8-10MB per second into
/var/fm/fmd/errlog
I had to disable it since it was filling up my boot disk.
Is this expected?
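In case it helps, a sketch of one way to look at (and temporarily stop) the flood, using standard fmdump/svcadm commands:
# fmdump -e | tail
# fmdump -eV | less
# svcadm disable -t svc:/system/fmd:default
# svcadm enable svc:/system/fmd:default
fmdump -e reads the same errlog, so it shows which ereports are eating the space; the -t flag makes the disable temporary (it does not survive a reboot).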
Tim,
I was really trying to get a full copy of my pool onto a different PC, so I
think I have to use -R, otherwise I would lose all the history (monthly,
weekly and daily snapshots) of my data, which is valuable to me.
That said, I fear that during a send -R the autosnapshot service sh
Hi,
I'm trying to send a pool (its filesystems) from one PC to another, so I first
created a recursive snapshot:
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
nas           840G   301G  3,28G  /nas
nas/drivers  12,6G   301G  12,6G  /nas/driver
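For the record, the overall procedure I'm attempting looks roughly like this (the snapshot name, hostname and destination pool are just examples; -F on the receive side overwrites the target):
# zfs snapshot -r nas@migrate
# zfs send -R nas@migrate | ssh otherpc zfs receive -Fd backup
The -R flag makes the stream carry all descendant filesystems, their snapshots and their properties.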
Hi,
I'm using Tim Foster's ZFS automatic snapshot service to keep copies of a ZFS
filesystem.
I keep 30 daily, seven weekly and 12 monthly snapshots; these get created at 00:00
every day, and when a new week (or a new month) starts there are two or three
snapshots that start at the same time.
Using zfs l
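For what it's worth, the overlapping snapshots can be seen with something like this (the filesystem name is a placeholder):
# zfs list -r -t snapshot -o name,creation -s creation nas/data
The daily, weekly and monthly snapshots taken at midnight all report the same creation time.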
Alan,
I'm using Nexenta Core RC4, which is based on Nevada build 81/82.
The ZFS casesensitivity property is set to 'insensitive'.
Best regards.
Maurilio.
Hi,
I'm testing a ZFS+CIFS server using Nexenta Core RC4; everything seems fine and
speed is also OK, but DOS programs don't see sub-directories (command.com sees them,
though).
I've set casesensitivity=insensitive on the ZFS filesystem that I'm sharing.
I've run this test using Windows2000, Windows20
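For reference, the share was set up along these lines (a sketch; names are placeholders, casesensitivity can only be set at creation time, and the in-kernel CIFS server has to be enabled):
# zfs create -o casesensitivity=insensitive -o sharesmb=on tank/winshare
# svcadm enable -r smb/server
DOS programs also depend on 8.3 short names, so name handling is another thing worth checking besides case sensitivity.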