Hello,
We use several CAD applications, and with one of them we have problems using
ZFS.
OS and hardware: SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the CAD
application is CATIA V4.
Several configuration and data files are stored on the server and shared
via NFS to Solaris and AIX clients.
Hi,
I have a few questions about the way a transaction group is created.
1. Is it possible to group transactions related to multiple operations
in the same group? For example, an rmdir foo followed by a mkdir bar:
can these end up in the same transaction group?
2. Is it possible for an operation (say
Atul,
Atul Vidwansa wrote:
Hi,
I have a few questions about the way a transaction group is created.
1. Is it possible to group transactions related to multiple operations
in the same group? For example, an rmdir foo followed by a mkdir bar:
can these end up in the same transaction group?
Each TXG is 5
Atul Vidwansa wrote:
Hi,
I have a few questions about the way a transaction group is created.
1. Is it possible to group transactions related to multiple operations
in the same group? For example, an rmdir foo followed by a mkdir bar:
can these end up in the same transaction group?
Yes.
2. Is it
Sanjeev,
Thanks for the response. The thread_ids I was talking about are
userland threads, not the ZFS transaction thread. Is it possible to
know, for each committed transaction group, which transactions (IDs) were
part of it and which syscall or userland thread initiated those
syscalls?
On 9-Apr-07, at 8:15 AM, Atul Vidwansa wrote:
Hi,
I have a few questions about the way a transaction group is created.
1. Is it possible to group transactions related to multiple operations
in the same group? For example, an rmdir foo followed by a mkdir bar:
can these end up in the same transaction
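The question of which txg a given operation lands in can be probed from outside with DTrace. The following is a minimal sketch, assuming an OpenSolaris/Nevada kernel where the fbt provider can see spa_sync(); the probe and output format are illustrative, not an official interface:

```shell
# Print each transaction group number as the pool syncs it.
# spa_sync(spa_t *spa, uint64_t txg) is the kernel routine that writes
# out a quiesced txg, so arg1 here is the txg number (a sketch; fbt
# probe availability depends on the kernel build).
dtrace -n 'fbt::spa_sync:entry { printf("syncing txg %d", arg1); }'
```

Correlating userland threads with a txg would additionally need probes at the syscall layer (e.g. on mkdir/rmdir entry) so the initiating thread and the subsequent sync can be matched up.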
Malachi,
The section on adding devices to a ZFS storage pool in the ZFS Admin
Guide, here, provides an example of adding to a raidz configuration:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view
I think I need to provide a summary of what you can do with
both raidz and mirrored
Here's the correct link:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
The same example exists on page 52 of the 817-2271 PDF posted on
the opensolaris.../zfs/documentation page.
Cindy
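For readers without access to the guide, a hedged summary of the relevant commands (pool and device names below are illustrative): `zpool add` grows a pool by adding a whole new top-level vdev, while `zpool attach` creates or widens a mirror; an existing raidz vdev cannot be widened.

```shell
# Add a second raidz vdev to an existing pool (a new top-level vdev;
# you cannot add a single disk into an existing raidz group):
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0

# Add another mirror pair to a mirrored pool:
zpool add tank mirror c3t0d0 c3t1d0

# Attach a disk to an existing device, turning it into (or widening) a mirror:
zpool attach tank c0t0d0 c4t0d0
```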
Malachi de Ælfweald wrote:
FYI, that page is not publicly viewable. It was the 817-2271 PDF I
Dirk Jakobsmeier wrote:
Hello,
We use several CAD applications, and with one of them we have problems using
ZFS.
OS and hardware: SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the CAD
application is CATIA V4.
Several configuration and data files are stored on the server and shared via
Matthew Ahrens wrote:
Joseph Barbey wrote:
Robert Milkowski wrote:
JB So, normally, when the script runs, all snapshots finish in maybe a minute
JB total. However, on Sundays, it continues to take longer and longer. On
JB 2/25 it took 30 minutes, and this last Sunday, it took 2:11. The
On Apr 9, 2007, at 2:20 AM, Dirk Jakobsmeier wrote:
Hello,
We use several CAD applications, and with one of them we have
problems using ZFS.
OS and hardware: SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the
CAD application is CATIA V4.
Several configuration and data files
Hello Ricardo,
Friday, April 6, 2007, 5:33:14 AM, you wrote:
RC Isn't it more likely that these are errors on data as well? I think zfs
RC retries read operations when there's a checksum failure, so maybe these
RC are transient hardware problems (faulty cables, high temperature..)?
RC This
Gino,
Can you send me the corefile from the zpool command? This looks like a
case where we can't open the device for some reason. Are you using a
multi-pathing solution other than MPXIO?
Thanks,
George
Gino wrote:
Today we lost another zpool!
Fortunately it was only a backup repository.
Gino,
Were you able to recover by setting zfs_recover?
Thanks,
George
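For context, zfs_recover is an unsupported debugging tunable, historically used together with aok (which downgrades assertion failures to warnings). A sketch of the usual way to set them, assuming Solaris /etc/system semantics:

```shell
# /etc/system fragment (takes effect after reboot); both are
# unsupported, last-resort debugging switches:
set zfs:zfs_recover = 1
set aok = 1
```

They can also be flipped on a live kernel with mdb -kw, at the same level of risk.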
Gino wrote:
Hi All,
here is another kind of kernel panic caused by ZFS that we found.
I have dumps if needed.
# zpool import
  pool: zpool8
    id: 7382567111495567914
 state: ONLINE
status: The pool is formatted using an
William D. Hathaway wrote:
I'm running Nevada build 60 inside VMware; it is a test rig with no data of value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc
I wanted to check out the FMA handling of a serious zpool error, so I did the
following:
2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
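After an experiment like this, FMA's view of the failure can be inspected with the standard observability commands; a sketch (output depends on the injected fault):

```shell
zpool status -x    # terse health summary: lists only pools with problems
fmdump             # faults diagnosed by fmd, with event times and UUIDs
fmdump -eV         # full detail of the underlying error reports (ereports)
fmadm faulty       # resources currently marked faulty
```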
Hello Basrt,
Thanks for your answer. The filesystems for different projects are sized between
20 and 400 GB. Those filesystem sizes were no problem on the earlier installation
(VxFS) and should not be a problem now. I can reproduce this error with the 20
GB filesystem.
Regards.