[zfs-discuss] Re: CAD application not working with zfs

2007-04-09 Thread Dirk Jakobsmeier
Hello Bart,

Thanks for your answer. The filesystems for the different projects are sized between
20 and 400 GB. Those filesystem sizes were no problem on the earlier installation
(VxFS) and should not be a problem now. I can reproduce this error with the 20 GB
filesystem.

Regards.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] misleading zpool state and panic -- nevada b60 x86

2007-04-09 Thread George Wilson

William D. Hathaway wrote:
I'm running Nevada build 60 inside VMware; it is a test rig with no data of value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc

I wanted to check out the FMA handling of a serious zpool error, so I did the 
following:

2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
2007-04-07.15:21:37 zpool scrub tank
(inserted some errors with dd on one device to see if it showed up, which it 
did, but healed fine)
2007-04-07.15:22:12 zpool scrub tank
2007-04-07.15:22:46 zpool clear tank c1d1
(added a single device without any redundancy)
2007-04-07.15:28:29 zpool add -f tank /var/500m_file
(then I copied data into /tank and removed the /var/500m_file, a panic 
resulted, which was expected)
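
A minimal sketch of the kind of dd error injection described above; the device path,
offset, and count here are hypothetical, and this should only ever be aimed at a
throwaway test pool:

   # Overwrite a stretch of one mirror member (well past the front vdev labels),
   # then scrub and check for checksum errors.
   dd if=/dev/zero of=/dev/rdsk/c1d1s0 bs=512 oseek=100000 count=2048
   zpool scrub tank
   zpool status -v tank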

I created a new /var/500m_file and then decided to destroy the pool and start 
over again.  This caused a panic, which I wasn't expecting.  On reboot, I ran
'zpool status -x', which shows:
  pool: tank
 state: ONLINE
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
tank  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c0d1  ONLINE   0 0 0
c1d1  ONLINE   0 0 0
  /var/500m_file  UNAVAIL  0 0 0  corrupted data

errors: No known data errors

Since there was no redundancy for the /var/500m_file vdev, I don't see how a 
replace will help (unless I still had the original device/file with the data 
intact).

When I try to destroy the pool with "zpool destroy tank", I get a panic with:
Apr  7 16:00:17 b60 genunix: [ID 403854 kern.notice] assertion failed: vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa.c, line: 2910
Apr  7 16:00:17 b60 unix: [ID 10 kern.notice]
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cd0c genunix:assfail+5a (f9e87e74, f9e87e58,)
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cd6c zfs:spa_sync+6c3 (da89cac0, 1363, 0)
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cdc8 zfs:txg_sync_thread+1df (d4678540, 0)
Apr  7 16:00:18 b60 genunix: [ID 353471 kern.notice] d893cdd8 unix:thread_start+8 ()
Apr  7 16:00:18 b60 unix: [ID 10 kern.notice]
Apr  7 16:00:18 b60 genunix: [ID 672855 kern.notice] syncing file systems...

My questions/comments boil down to:
1) Should the pool state really be 'online' after losing a non-redundant vdev?
  


Yeah, this seems odd and is probably a bug.

2) It seems like a bug if I get a panic when trying to destroy a pool (although 
this clearly may be related to #1).
  


This is a known problem and one that we're working on right now:

6413847 vdev label write failure should be handled more gracefully.

In your case we are trying to update the label to indicate that the pool
has been destroyed; this results in a label write failure and thus the
panic.


Thanks,
George

Am I hitting a known bug (or do I have misconceptions about how the pool should function)?
I will happily provide any debugging info that I can.

I haven't tried a 'zpool destroy -f tank' yet since I didn't know if there was 
any debugging value in my current state.

Thanks,
William Hathaway
www.williamhathaway.com
 
 
This message posted from opensolaris.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] other panic caused by ZFS

2007-04-09 Thread George Wilson

Gino,

Were you able to recover by setting zfs_recover?

Thanks,
George
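
For readers wondering how zfs_recover is set: it is a kernel tunable rather than a
zpool option. A hedged sketch of the usual ways to enable it for a recovery attempt
(remove the settings again afterwards):

   # Persistent, via /etc/system (takes effect after a reboot):
   set zfs:zfs_recover = 1
   set aok = 1              # often set alongside so failed assertions warn
                            # instead of panicking; use with care

   # Or on the running kernel (not persistent):
   echo "zfs_recover/W 1" | mdb -kw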

Gino wrote:

Hi All,
here is another kind of kernel panic caused by ZFS that we found.
I have dumps if needed.


#zpool import

pool: zpool8
id: 7382567111495567914
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

volume8  ONLINE
  c0t60001FE100118DB9119074440055d0  ONLINE


#zpool import zpool8



Apr  7 22:53:34 SERVER140 panic[cpu1]/thread=ff001807dc80: 
Apr  7 22:53:34 SERVER140 genunix: [ID 361072 kern.notice] zfs: freeing free segment (offset=17712545792 size=131072)
Apr  7 22:53:34 SERVER140 unix: [ID 10 kern.notice] 
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d380 genunix:vcmn_err+28 ()

Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d470 zfs:zfs_panic_recover+b6 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d500 zfs:space_map_remove+147 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d5a0 zfs:space_map_load+1f4 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d5e0 zfs:metaslab_activate+66 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d6a0 zfs:metaslab_group_alloc+1fb ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d770 zfs:metaslab_alloc_dva+17d ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d810 zfs:metaslab_alloc+6f ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d850 zfs:zio_dva_allocate+63 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d870 zfs:zio_next_stage+b3 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d8a0 zfs:zio_checksum_generate+6e ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d8c0 zfs:zio_next_stage+b3 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d930 zfs:zio_write_compress+202 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d950 zfs:zio_next_stage+b3 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d9a0 zfs:zio_wait_for_children+5d ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d9c0 zfs:zio_wait_children_ready+20 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807d9e0 zfs:zio_next_stage_async+bb ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807da00 zfs:zio_nowait+11 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807da80 zfs:dmu_objset_sync+180 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807dad0 zfs:dsl_dataset_sync+42 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807db40 zfs:dsl_pool_sync+a7 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807dbd0 zfs:spa_sync+1c5 ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807dc60 zfs:txg_sync_thread+19a ()
Apr  7 22:53:34 SERVER140 genunix: [ID 655072 kern.notice] ff001807dc70 unix:thread_start+8 ()
Apr  7 22:53:34 SERVER140 unix: [ID 10 kern.notice] 
Apr  7 22:53:34 SERVER140 genunix: [ID 672855 kern.notice] syncing file systems...

Apr  7 22:53:35 SERVER140 genunix: [ID 904073 kern.notice]  done
Apr  7 22:53:36 SERVER140 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c2t0d0s3, offset 1677983744, content: kernel
Apr  7 22:54:04 SERVER140 genunix: [ID 409368 kern.notice] 100% done: 644612 pages dumped, compression ratio 4.30, 
Apr  7 22:54:04 SERVER140 genunix: [ID 851671 kern.notice] dump succeeded


gino
 
 
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unbelievable. an other crashed zpool :(

2007-04-09 Thread George Wilson

Gino,

Can you send me the corefile from the zpool command? This looks like a 
case where we can't open the device for some reason. Are you using a 
multi-pathing solution other than MPXIO?


Thanks,
George
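
If no core file shows up for the failing zpool command, core dumps may simply not be
enabled; a hedged sketch of capturing one with coreadm before re-running the import
(directory and pattern are just examples):

   # Save global core files under /var/cores, named after program and pid.
   mkdir -p /var/cores
   coreadm -g /var/cores/core.%f.%p -e global
   zpool import zpool3        # reproduce the failure
   ls /var/cores              # a core.zpool.<pid> file should appear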

Gino wrote:
Today we lost another zpool!
Fortunately it was only a backup repository. 


SERVER144@/# zpool import zpool3
internal error: unexpected error 5 at line 773 of ../common/libzfs_pool.c

This zpool was a RAID 10 built from 4 HDS LUNs.

Trying to import it into snv_60 (recovery mode) doesn't work.

gino
 
 
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Re: ZFS checksum error detection

2007-04-09 Thread Robert Milkowski
Hello Ricardo,

Friday, April 6, 2007, 5:33:14 AM, you wrote:

RC> Isn't it more likely that these are errors on data as well? I think zfs
RC> retries read operations when there's a checksum failure, so maybe these
RC> are transient hardware problems (faulty cables, high temperature..)?

RC> This would explain the non-existence of unrecoverable errors.

Wouldn't ZFS retry for metadata also, then?

-- 
Best regards,
 Robert   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CAD application not working with zfs

2007-04-09 Thread eric kustarz


On Apr 9, 2007, at 2:20 AM, Dirk Jakobsmeier wrote:


Hello,

We use several CAD applications, and with one of those we have
problems using ZFS.

OS and hardware are SunOS 5.10 Generic_118855-36 on a Sun Fire X4200;
the CAD application is CATIA V4.

There are several configuration and data files stored on the server
and shared via NFS to Solaris and AIX clients. The application crashes
on the AIX client unless the server shares those files from a UFS
filesystem. Does anybody have information on this?


A snoop trace would be helpful to diagnose this.  Do you know why
the app on the AIX client is crashing (a particular error)?


eric
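
A hedged sketch of capturing such a trace on the Solaris NFS server; the interface
name and client hostname below are placeholders:

   # Capture NFS traffic between server and AIX client into a file,
   # then replay it verbosely to look for the failing operation or error.
   snoop -d e1000g0 -o /var/tmp/catia-nfs.snoop host aix-client
   snoop -i /var/tmp/catia-nfs.snoop -v rpc nfs | less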



Regards.


This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot issues.

2007-04-09 Thread Joseph Barbey

Matthew Ahrens wrote:

Joseph Barbey wrote:

Robert Milkowski wrote:

JB> So, normally, when the script runs, all snapshots finish in maybe a minute
JB> total.  However, on Sundays, it continues to take longer and longer.  On
JB> 2/25 it took 30 minutes, and this last Sunday, it took 2:11.  The only
JB> special thing about Sunday's snapshots is that they are the first
JB> ones created since the full backup (using NetBackup) on Saturday.  All
JB> other backups are incrementals.

Hm, do you have the atime property set to off?
Maybe you spend most of the time destroying snapshots due to a much
larger delta caused by atime updates? You can possibly also gain some
performance by setting atime to off.


Yep, atime is set to off for all pools and filesystems.  I looked
through the other possible properties, and nothing really looked like
it would affect things.


One additional weird thing.  My script hits each filesystem 
(email-pool/A..Z) individually, so I can run zfs list -t snapshot and 
find out how long each snapshot actually takes.  Everything runs fine 
until I get to around V or (normally) W.  Then it can take a couple of 
hours on the one FS.  After that, the rest go quickly.


So, what operation exactly is taking "a couple of hours on the one FS"? 
 The only one I can imagine taking more than a minute would be 'zfs 
destroy', but even that should be very rare on a snapshot.  Is it always 
the same FS that takes longer than the rest?  Is the pool busy when you 
do the slow operation?


I've now determined that renaming the previous snapshot seems to be the 
problem in certain instances.


What we are currently doing through the script is to keep 2 weeks of daily
snapshots of the various pool/filesystems.  These snapshots are named
{fs}.$Day-1, {fs}.$Day-2, and {fs}.snap.  Specifically, for our 'V'
filesystem, which is created under the email-pool, I will have the
following snapshots:


  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]

So, my script does the following for each FS (sketched below):
  Check for FS.$Day-2.  If it exists, destroy it.
  Check if there is a FS.$Day-1.  If so, rename it to FS.$Day-2.
  Check for FS.snap.  If so, rename it to FS.$Yesterday-1 (the day it was created).
  Create FS.snap
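
A minimal shell sketch of that rotation for a single filesystem; the dataset name and
the $DAY1/$DAY2/$YESTERDAY1 variables are placeholders for whatever the real script
computes, not the poster's actual code:

   #!/bin/ksh
   FS=email-pool/V              # hypothetical dataset

   # Destroy the oldest snapshot, shift the others down, take a new one.
   zfs list -H -t snapshot $FS@$DAY2 > /dev/null 2>&1 && zfs destroy $FS@$DAY2
   zfs list -H -t snapshot $FS@$DAY1 > /dev/null 2>&1 && zfs rename $FS@$DAY1 $FS@$DAY2
   zfs list -H -t snapshot $FS@snap  > /dev/null 2>&1 && zfs rename $FS@snap  $FS@$YESTERDAY1
   zfs snapshot $FS@snap

Timestamping each of those four commands, as the log below does, is what narrows the
stall down to the second rename.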

I added logging to a file, recording the action just run and the time that
it completed:


  Destroy email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:01:04 CDT 2007
  Rename email-pool/[EMAIL PROTECTED] email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:01:05 CDT 2007
  Rename email-pool/[EMAIL PROTECTED] email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:54:52 CDT 2007
  Create email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:54:53 CDT 2007

Looking at the above, Rename took from 00:01:05 until 00:54:52, so almost 
54 minutes.


So, any ideas on why a rename should take so long?  And again, why is this 
only happening on Sunday?  Any other information I can provide that might 
help diagnose this?


Thanks again for any help on this.

--
Joe Barbey   IT Services/Network Services
office: (715) 425-4357   Davee Library room 166C
cell:   (715) 821-0008   UW - River Falls
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CAD application not working with zfs

2007-04-09 Thread Bart Smaalders

Dirk Jakobsmeier wrote:

Hello,

We use several CAD applications, and with one of those we have problems
using ZFS.

OS and hardware are SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the
CAD application is CATIA V4.

There are several configuration and data files stored on the server and
shared via NFS to Solaris and AIX clients. The application crashes on the
AIX client unless the server shares those files from a UFS filesystem.
Does anybody have information on this?




What are the sizes of the filesystems being exported?  Perhaps the AIX
client cannot cope w/ large filesystems?

- Bart
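
For what it's worth, a quick way to compare what the server exports with what the AIX
client thinks it mounted (the pool and mount point names below are made up):

   # On the Solaris server: sizes of the shared ZFS filesystems
   zfs list -o name,used,available,mountpoint -r projects-pool
   share                      # confirm which paths are NFS-shared

   # On the AIX client: does df report the mounted filesystem sanely?
   df -k /mnt/projects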





--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: simple Raid-Z question

2007-04-09 Thread Cindy . Swearingen

Here's the correct link:

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view

The same example exists on page 52 of the 817-2271 PDF posted on
the opensolaris.../zfs/documentation page.

Cindy
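
As the rest of the thread spells out, the documented way to grow a raidz pool is to
add another whole raidz top-level vdev with 'zpool add'; single disks cannot be
attached to an existing raidz vdev. A sketch with made-up device names:

   # Existing pool built from one 5-disk raidz vdev
   zpool create rzpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

   # Capacity is grown by adding a second raidz vdev, not by attaching
   # individual disks to the first one.
   zpool add rzpool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
   zpool status rzpool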

Malachi de Ælfweald wrote:
FYI That page is not publicly viewable. It was the 817-2271 pdf I was 
looking at though.


Malachi

On 4/9/07, *[EMAIL PROTECTED]* <[EMAIL PROTECTED]> wrote:


Malachi,

The section on adding devices to a ZFS storage pool in the ZFS Admin
guide, here, provides an example of adding to a raidz configuration:

http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view

I think I need to provide a summary of what you can do with
both raidz and mirrored configs since you all had trouble finding
it.

Thanks for the feedback,

Cindy

Malachi de Ælfweald wrote:
 > Yeah, I am not sure what docs I was originally looking at...
 >
 > Although we may want to ensure that the ZFS Admin Guide is a bit more
 > clear on the matter:
 > Additional disks can be added similarly to a RAID-Z configuration.
 >
 > Malachi
 >
 >
 > On 4/8/07, *Frank Cusack* <[EMAIL PROTECTED]> wrote:
 >
 > [top-posting corrected]
 >
 > On April 8, 2007 1:43:48 PM -0700 Malachi de Ælfweald
 > <[EMAIL PROTECTED]> wrote:
 >  > On 4/7/07, Eric Haycraft <[EMAIL PROTECTED]> wrote:
 >  >>
 >  >> You cannot add 1 drive at a time to a raidz or raidz2. You need to
 >  >> add the same number of disks that were used per stripe. So, if you
 >  >> start with 5 disks, you would have to add 5 more in the future to
 >  >> add disk space. There is also a method of swapping each disk one at
 >  >> a time with a larger disk and performing a scrub in between each
 >  >> replacement to increase the pool size.
 >  >
 >  > Hmmm... I definitely missed this one... I thought the
 > documentation said
 >  > that using zpool attach would add a new drive to the existing
 > raidz(2) and
 >  > then it would start resilvering...
 >
 > zpool(1M):
 >
 >  zpool attach [-f] pool device new_device
 >
 >  Attaches new_device to an  existing  zpool  device.  The
 >  existing device cannot be part of a raidz
configuration.
 >
 > -frank
 >
 >
 >
 >




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: simple Raid-Z question

2007-04-09 Thread Cindy . Swearingen

Malachi,

The section on adding devices to a ZFS storage pool in the ZFS Admin 
guide, here, provides an example of adding to a raidz configuration:


http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view

I think I need to provide a summary of what you can do with
both raidz and mirrored configs since you all had trouble finding
it.

Thanks for the feedback,

Cindy

Malachi de Ælfweald wrote:

Yeah, I am not sure what docs I was originally looking at...

Although we may want to ensure that the ZFS Admin Guide is a bit more 
clear on the matter:

Additional disks can be added similarly to a RAID-Z configuration.

Malachi


On 4/8/07, *Frank Cusack* <[EMAIL PROTECTED]> wrote:


[top-posting corrected]

On April 8, 2007 1:43:48 PM -0700 Malachi de Ælfweald
<[EMAIL PROTECTED]> wrote:
 > On 4/7/07, Eric Haycraft <[EMAIL PROTECTED]> wrote:
 >>
 >> You cannot add 1 drive at a time to a raidz or raidz2. You need to add
 >> the same number of disks that were used per stripe. So, if you start
 >> with 5 disks, you would have to add 5 more in the future to add disk
 >> space. There is also a method of swapping each disk one at a time with
 >> a larger disk and performing a scrub in between each replacement to
 >> increase the pool size.
 >
 > Hmmm... I definitely missed this one... I thought the
documentation said
 > that using zpool attach would add a new drive to the existing
raidz(2) and
 > then it would start resilvering...

zpool(1M):

 zpool attach [-f] pool device new_device

 Attaches new_device to an  existing  zpool  device.  The
 existing device cannot be part of a raidz configuration.

-frank






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Contents of transaction group?

2007-04-09 Thread Toby Thain


On 9-Apr-07, at 8:15 AM, Atul Vidwansa wrote:


Hi,
   I have a few questions about the way a transaction group is created.

1. Is it possible to group transactions related to multiple operations
in the same group? For example, an "rmdir foo" followed by "mkdir bar",
can these end up in the same transaction group?


I began to wonder about doing this with Reiser's filesystems. It's  
certainly an interesting capability to have & I'm looking forward to  
this thread...


--Toby



2. Is it possible for an operation (say write()) to occupy multiple
transaction groups?

3. Is it possible to know the thread id(s) for every committed txg_id?

Regards,
-Atul


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Contents of transaction group?

2007-04-09 Thread Atul Vidwansa

Sanjeev,
Thanks for the response. The thread_ids I was talking about are
userland threads, not the ZFS transaction thread. Is it possible to
know, for each committed transaction group, which transactions (id) were
part of it and which syscall or userland thread initiated those
syscalls?

Cheers,
-Atul

On 4/9/07, Sanjeev Bagewadi <[EMAIL PROTECTED]> wrote:

Atul,

Atul Vidwansa wrote:
> Hi,
>    I have a few questions about the way a transaction group is created.
>
> 1. Is it possible to group transactions related to multiple operations
> in the same group? For example, an "rmdir foo" followed by "mkdir bar",
> can these end up in the same transaction group?
Each TXG is 5 seconds long (in normal cases, unless some operation
forcefully closes it). So it is quite possible that the 2 syscalls end up
in the same TXG, but it is not guaranteed.

If it has to be guaranteed, then this logic will have to be built into the
VNODE ops code, i.e. the ZPL code. However, that would be tricky, as rmdir
and mkdir are 2 different syscalls and I am not sure what locking issues
you would need to take care of.
>
> 2. Is it possible for an operation (say write()) to occupy multiple
> transaction groups?
Yes.
>
> 3. Is it possible to know the thread id(s) for every committed txg_id?
The TXG is always synced by the txg threads. Not sure why you want it.

Regards,
Sanjeev.

>
> Regards,
> -Atul


--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [zfs-code] Contents of transaction group?

2007-04-09 Thread Mark Maybee

Atul Vidwansa wrote:

Hi,
   I have a few questions about the way a transaction group is created.

1. Is it possible to group transactions related to multiple operations
in the same group? For example, an "rmdir foo" followed by "mkdir bar",
can these end up in the same transaction group?


Yes.


2. Is it possible for an operation (say write()) to occupy multiple
transaction groups?


Yes.  Writes are broken into transactions at block boundaries.  So it
is possible for a large write to span multiple transaction groups.


3. Is it possible to know the thread id(s) for every committed txg_id?


No.

-Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Contents of transaction group?

2007-04-09 Thread Sanjeev Bagewadi

Atul,

Atul Vidwansa wrote:

Hi,
   I have a few questions about the way a transaction group is created.

1. Is it possible to group transactions related to multiple operations
in the same group? For example, an "rmdir foo" followed by "mkdir bar",
can these end up in the same transaction group?

Each TXG is 5 seconds long (in normal cases, unless some operation
forcefully closes it). So it is quite possible that the 2 syscalls end up
in the same TXG, but it is not guaranteed.

If it has to be guaranteed, then this logic will have to be built into the
VNODE ops code, i.e. the ZPL code. However, that would be tricky, as rmdir
and mkdir are 2 different syscalls and I am not sure what locking issues
you would need to take care of.

2. Is it possible for an operation (say write()) to occupy multiple
transaction groups?

Yes.

3. Is it possible to know the thread id(s) for every committed txg_id?

The TXG is always synced by the txg threads. Not sure why you want it.
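
There is no per-txg record of the initiating userland threads, but the sync side is
easy to watch; a hedged DTrace sketch, assuming the fbt provider can see spa_sync
(which the panic stacks earlier in this digest suggest):

   # Print a line each time a transaction group is synced.
   dtrace -qn 'fbt::spa_sync:entry
       { printf("%Y  txg %u  (in %s)\n", walltimestamp, args[1], execname); }'

The execname it reports is the kernel sync thread (it shows up as sched), which is
the point above: the userland thread that generated the changes is not visible at
this level.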

Regards,
Sanjeev.



Regards,
-Atul



--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Contents of transaction group?

2007-04-09 Thread Atul Vidwansa

Hi,
   I have a few questions about the way a transaction group is created.

1. Is it possible to group transactions related to multiple operations
in the same group? For example, an "rmdir foo" followed by "mkdir bar",
can these end up in the same transaction group?

2. Is it possible for an operation (say write()) to occupy multiple
transaction groups?

3. Is it possible to know the thread id(s) for every committed txg_id?

Regards,
-Atul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] CAD application not working with zfs

2007-04-09 Thread Dirk Jakobsmeier
Hello,

We use several CAD applications, and with one of those we have problems
using ZFS.

OS and hardware are SunOS 5.10 Generic_118855-36 on a Sun Fire X4200; the
CAD application is CATIA V4.

There are several configuration and data files stored on the server and
shared via NFS to Solaris and AIX clients. The application crashes on the
AIX client unless the server shares those files from a UFS filesystem. Does
anybody have information on this?

Regards.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss