Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread dudekula mastan
Hi Mike,
   
  After a reboot, a UNIX machine (HP-UX/Linux/Solaris) mounts (or imports) only 
the file systems that were mounted (or imported) before the reboot.
   
  In your case the pool holding the ZFS file system tank/data was exported (or 
unmounted) before the reboot. That is why the zpool is not imported 
automatically afterwards.
   
  This is neither a problem nor a bug; the ZFS developers designed the import 
and export commands to work this way.
   
  This is not specific to ZFS: no file system mounts everything that happens to 
be available. A UNIX machine mounts only the file systems that were mounted 
before the reboot.
-Masthan  D
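
One possible workaround, sketched here with illustrative names (the script path, 
runlevel directory and pool name are assumptions, not from this thread): because 
the pool was exported from the install environment, the installed system's 
/etc/zfs/zpool.cache does not list it, so nothing imports it on the first boot. 
A jumpstart finish script could drop a run-once boot script like this onto the 
target root:

    #!/sbin/sh
    # /etc/rc3.d/S99zpoolimport -- hypothetical run-once first-boot script.
    POOL=tank

    # Import the pool unless the system already knows about it.
    zpool list "$POOL" > /dev/null 2>&1 || zpool import "$POOL"

    # Remove ourselves so this runs only on the first boot.
    rm -f /etc/rc3.d/S99zpoolimport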

Michael Goff <[EMAIL PROTECTED]> wrote:

  Hi,

When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script which 
does:

zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
zpool export -f tank

When jumpstart finishes and the node reboots, the pool is not imported 
automatically. I have to do:

zpool import tank

for it to show up. Then on subsequent reboots it imports and mounts 
automatically. What can I do to get it to mount automatically the first time? 
When I didn't have the zpool export, I would get a message that I needed to use 
zpool import -f because it wasn't exported properly from another machine. So it 
looks like the state of the pool created during the jumpstart install was lost.

BTW, I love using zfs commands to manage filesystems. They are so easy and 
intuitive!

thanks,
Mike




Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-10 Thread dudekula mastan
Hi All,
   
  Any update on this ?
   
  -Masthan D

dudekula mastan <[EMAIL PROTECTED]> wrote:
Hi Everybody,
   
  Over the last week many mails have been exchanged on this topic.
   
  I have a similar issue, and I would appreciate it if anyone could help me 
with it.
   
  I have an I/O test tool which writes data, reads it back, and then compares 
the read data with the written data. If the read data and the written data 
match, there is no corruption; otherwise there is a corruption.
   
  File data can be corrupted for any number of reasons, and one possible cause 
is the file system cache. If the file system cache has issues, it will return 
wrong data to user applications (wrong data meaning that the data actually on 
disk and the data the read call returns to the application do not match).
   
  When a corruption is found, to check for file system cache issues, my 
application bypasses the file system cache, re-reads the data from the same 
file, and then compares the re-read data with the written data.
   
  Is there a way to bypass the ZFS file system cache, or a way to do direct 
I/O on a ZFS file system?
   
  Regards
  Masthan D
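
For reference, a minimal sketch of the re-read step described above, assuming 
the tool is written in C (the helper name is hypothetical). directio(3C) is the 
usual request for unbuffered access and is honored by UFS; ZFS at this point 
offered no supported way to bypass its cache (the ARC) for a regular file, so 
the call is expected to fail there and the re-read will still come from cache:

    #include <sys/types.h>
    #include <sys/fcntl.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Re-read len bytes at offset off and compare them with expected. */
    int
    reread_and_compare(const char *path, const char *expected,
        size_t len, off_t off)
    {
            int fd = open(path, O_RDONLY);
            if (fd < 0)
                    return (-1);

            /* Ask for unbuffered I/O; ZFS is expected to reject the hint. */
            if (directio(fd, DIRECTIO_ON) != 0)
                    (void) fprintf(stderr, "directio not supported: %s\n",
                        strerror(errno));

            char *buf = malloc(len);
            if (buf == NULL) {
                    (void) close(fd);
                    return (-1);
            }
            ssize_t n = pread(fd, buf, len, off);
            int mismatch = (n != (ssize_t)len ||
                memcmp(buf, expected, len) != 0);

            free(buf);
            (void) close(fd);
            return (mismatch);      /* 0 means the re-read data matches */
    }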
   



Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-09 Thread dudekula mastan
Hi Everybody,
   
  Over the last week many mails have been exchanged on this topic.
   
  I have a similar issue, and I would appreciate it if anyone could help me 
with it.
   
  I have an I/O test tool which writes data, reads it back, and then compares 
the read data with the written data. If the read data and the written data 
match, there is no corruption; otherwise there is a corruption.
   
  File data can be corrupted for any number of reasons, and one possible cause 
is the file system cache. If the file system cache has issues, it will return 
wrong data to user applications (wrong data meaning that the data actually on 
disk and the data the read call returns to the application do not match).
   
  When a corruption is found, to check for file system cache issues, my 
application bypasses the file system cache, re-reads the data from the same 
file, and then compares the re-read data with the written data.
   
  Is there a way to bypass the ZFS file system cache, or a way to do direct 
I/O on a ZFS file system?
   
  Regards
  Masthan D
   

   


Re: [zfs-discuss] What would be the exact difference between import/mount and export/unmount ?

2007-10-09 Thread dudekula mastan
Hi All,
   
  Can anyone explain this?
   
  -Masthan D
  

dudekula mastan <[EMAIL PROTECTED]> wrote:
 
  Hi All,
   
  What exactly do the import and export commands do?
   
  Are they similar to mount and unmount?
   
  How does import differ from mount, and how does export differ from umount?
   
  If you run the zpool import command, it lists all the importable zpools and 
the devices that are part of those zpools. How exactly does the import command 
work?
   
  On my machine I have 10 to 15 zpools, and I am pumping I/O to them. The I/O 
pump tool works fine for 2 to 3 hours, but after that my machine somehow goes 
down.
  Can anyone explain why this is happening?
   
  I am not very familiar with the debugging tools. Can anyone explain how to 
debug the core dump?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan D
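
On the core dump question above, a minimal sketch of looking at a saved kernel 
crash dump with mdb(1); the directory and instance number are examples and 
assume savecore(1M) has already written the dump to the default location:

    # cd /var/crash/`uname -n`
    # mdb -k unix.0 vmcore.0
    > ::status        # panic string and dump details
    > ::msgbuf        # console messages leading up to the panic
    > ::stack         # stack trace of the panicking thread
    > $C              # stack trace with frame pointers and arguments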



Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-09 Thread dudekula mastan
Hi Jeyaram,
   
  Thanks for your reply. Can you explain more about this bug ?
   
  Regards
  Masthan D

Prabahar Jeyaram <[EMAIL PROTECTED]> wrote:
Your system seems to have hit a variant of bug:

6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218

This is fixed in Opensolaris Build 60 or S10U4.

--
Prabahar.


On Oct 8, 2007, at 10:04 PM, dudekula mastan wrote:

> Hi All,
>
> Has anyone had a chance to look into this issue?
>
> -Masthan D
>
> dudekula mastan wrote:
>
> Hi All,
>
> While pumping I/O on a ZFS file system my system is crashing/ 
> panicking. Please find the crash dump below.
>
> panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, 
> file: ../../common/fs/zfs/space_map.c, line: 125
> 02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d, 
> 183d800, 11ed400, 0)
> %l0-3:   011e7508 
> 03000744ea30
> %l4-7: 011ed400  0186fc00 
> 
> 02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20, 
> 2, 7b652000, 7b652400, 7b652400)
> %l0-3:  2b22 2b0ec600 
> 03000744ebc0
> %l4-7: 03000744eaf8 2b0ec000 7b652000 
> 2b0ec600
> 02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160, 
> 1000, 3000683e488, 2b00, 1)
> %l0-3: 0160 030006f5f000  
> 7b620ad0
> %l4-7: 7b62086c 7fff 7fff 
> 030006f5f128
> 02a100adeea0 zfs:metaslab_activate+3c (3000683e480, 
> 8000, c000, 24a998, 3000683e480, c000)
> %l0-3:  0008  
> 029ebf9d
> %l4-7: 704e2000 03000391e940 030005572540 
> 0300060bacd0
> 02a100adef50 zfs:metaslab_group_alloc+1bc (3fff, 
> 2, 8000, 7e68000, 30006766080, )
> %l0-3:  0300060bacd8 0001 
> 03000683e480
> %l4-7: 8000  03f34000 
> 4000
> 02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000, 
> 30006766080, 2, 30005572540, 1e910)
> %l0-3: 0001  0003 
> 03000380b6e0
> %l4-7:  0300060bacd0  
> 0300060bacd0
> 02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2, 
> 30006766080, 1, 1e910, 0)
> %l0-3: 009980001605 0016 1b4d 
> 0214
> %l4-7:   03000391e940 
> 0001
> 02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8, 
> 30006766080, 704e2508, 704e2400, 20001)
> %l0-3: 030005dd8a40 060200ff00ff 060200ff00ff 
> 
> %l4-7:  018a6400 0001 
> 0006
> 02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b, 
> 23e000, ff00ff, 2, 30006766080)
> %l0-3:  00ff 0100 
> 0002
> %l4-7:  00ff fc00 
> 00ff
> 02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2, 
> 1, 1e910)
> %l0-3:  7b6063c8 030006af2570 
> 0300060c5cf0
> %l4-7: 02a100adf538 0004 0004 
> 0300060c7a88
> 02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440, 
> 2b3ca, 2, 6, 1e910)
> %l0-3: 030005dd96c0  030006ae7750 
> 030006af2678
> %l4-7: 030006766080 0013 0001 
> 
> 02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440, 
> 30005ac8cc0, 2, 2)
> %l0-3: 030006af2570 030006ae77a8 030006ae7808 
> 030006ae7808
> %l4-7:  030006ae77a8 0001 
> 03000640ace0
> 02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0, 
> 30005dd97a0, 30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
> %l0-3: 704e84c0 704e8000 704e8000 
> 0001
> %l4-7:  704e4000  
> 030005dd9440
> 02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0, 
> 0, 0, 300060c5318, 1e910)
> %l0-3:  000f  
> 478d
> %l4-7: 030005dd97a0  030005dd97a0 
> 030005dd9820
> 02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0, 
> 30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
> %l0-3: 0001 0007 0300040c7e38 
> 
> %l4-7: 030006f36808   
> 000

Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-08 Thread dudekula mastan
Hi All,
   
  Has anyone had a chance to look into this issue?
   
  -Masthan D

dudekula mastan <[EMAIL PROTECTED]> wrote:

Hi All,
   
  While pumping I/O on a ZFS file system my system is crashing/panicking. 
Please find the crash dump below.
   
  panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, file: 
../../common/fs/zfs/space_map.c, line: 125
  02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d, 183d800, 
11ed400, 0)
  %l0-3:   011e7508 03000744ea30
  %l4-7: 011ed400  0186fc00 
  02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20, 2, 
7b652000, 7b652400, 7b652400)
  %l0-3:  2b22 2b0ec600 03000744ebc0
  %l4-7: 03000744eaf8 2b0ec000 7b652000 2b0ec600
  02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160, 1000, 
3000683e488, 2b00, 1)
  %l0-3: 0160 030006f5f000  7b620ad0
  %l4-7: 7b62086c 7fff 7fff 030006f5f128
  02a100adeea0 zfs:metaslab_activate+3c (3000683e480, 8000, 
c000, 24a998, 3000683e480, c000)
  %l0-3:  0008  029ebf9d
  %l4-7: 704e2000 03000391e940 030005572540 0300060bacd0
  02a100adef50 zfs:metaslab_group_alloc+1bc (3fff, 2, 
8000, 7e68000, 30006766080, )
  %l0-3:  0300060bacd8 0001 03000683e480
  %l4-7: 8000  03f34000 4000
  02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000, 30006766080, 2, 
30005572540, 1e910)
  %l0-3: 0001  0003 03000380b6e0
  %l4-7:  0300060bacd0  0300060bacd0
  02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2, 30006766080, 1, 
1e910, 0)
  %l0-3: 009980001605 0016 1b4d 0214
  %l4-7:   03000391e940 0001
  02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8, 30006766080, 
704e2508, 704e2400, 20001)
  %l0-3: 030005dd8a40 060200ff00ff 060200ff00ff 
  %l4-7:  018a6400 0001 0006
  02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b, 23e000, 
ff00ff, 2, 30006766080)
  %l0-3:  00ff 0100 0002
  %l4-7:  00ff fc00 00ff
  02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2, 1, 1e910)
  %l0-3:  7b6063c8 030006af2570 0300060c5cf0
  %l4-7: 02a100adf538 0004 0004 0300060c7a88
  02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440, 2b3ca, 2, 6, 
1e910)
  %l0-3: 030005dd96c0  030006ae7750 030006af2678
  %l4-7: 030006766080 0013 0001 
  02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440, 30005ac8cc0, 2, 2)
  %l0-3: 030006af2570 030006ae77a8 030006ae7808 030006ae7808
  %l4-7:  030006ae77a8 0001 03000640ace0
  02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0, 30005dd97a0, 
30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
  %l0-3: 704e84c0 704e8000 704e8000 0001
  %l4-7:  704e4000  030005dd9440
  02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0, 0, 0, 
300060c5318, 1e910)
  %l0-3:  000f  478d
  %l4-7: 030005dd97a0  030005dd97a0 030005dd9820
  02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0, 
30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
  %l0-3: 0001 0007 0300040c7e38 
  %l4-7: 030006f36808   
  02a100adf890 zfs:dsl_pool_sync+64 (300040c7d00, 1e910, 30006f36780, 
30005ac9640, 30005581a80, 30005581aa8)
  %l0-3:  03000391ed00 030005ac8cc0 0300040c7e98
  %l4-7: 0300040c7e68 0300040c7e38 0300040c7da8 030005dd9440
  02a100adf940 zfs:spa_sync+1b0 (3000391e940, 1e910, 0, 0, 2a100adfcc4, 1)
  %l0-3: 03000391eb00 03000391eb10 03000391ea28 030005ac9640
  %l4-7:  03000410f580 0300040c7d00 03000391eac0
  02a100adfa00 zfs:txg_sync_thread+134 (300040c7d00, 1e910, 0, 2a100adfab0, 
300040c7e10, 300040c7e12)
  %l0-3: 0300040c7e20 0300040c7dd0  0300040c7dd8
  %l4-7: 0300040c7e16 0300040c7e14 0300040c7dc8 0001e911
  syncing file systems... 

[zfs-discuss] ZFS file system is crashing my system

2007-10-08 Thread dudekula mastan

Hi All,
   
  While pumping I/O on a ZFS file system my system is crashing/panicking. 
Please find the crash dump below.
   
  panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, file: 
../../common/fs/zfs/space_map.c, line: 125
  02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d, 183d800, 
11ed400, 0)
  %l0-3:   011e7508 03000744ea30
  %l4-7: 011ed400  0186fc00 
  02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20, 2, 
7b652000, 7b652400, 7b652400)
  %l0-3:  2b22 2b0ec600 03000744ebc0
  %l4-7: 03000744eaf8 2b0ec000 7b652000 2b0ec600
  02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160, 1000, 
3000683e488, 2b00, 1)
  %l0-3: 0160 030006f5f000  7b620ad0
  %l4-7: 7b62086c 7fff 7fff 030006f5f128
  02a100adeea0 zfs:metaslab_activate+3c (3000683e480, 8000, 
c000, 24a998, 3000683e480, c000)
  %l0-3:  0008  029ebf9d
  %l4-7: 704e2000 03000391e940 030005572540 0300060bacd0
  02a100adef50 zfs:metaslab_group_alloc+1bc (3fff, 2, 
8000, 7e68000, 30006766080, )
  %l0-3:  0300060bacd8 0001 03000683e480
  %l4-7: 8000  03f34000 4000
  02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000, 30006766080, 2, 
30005572540, 1e910)
  %l0-3: 0001  0003 03000380b6e0
  %l4-7:  0300060bacd0  0300060bacd0
  02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2, 30006766080, 1, 
1e910, 0)
  %l0-3: 009980001605 0016 1b4d 0214
  %l4-7:   03000391e940 0001
  02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8, 30006766080, 
704e2508, 704e2400, 20001)
  %l0-3: 030005dd8a40 060200ff00ff 060200ff00ff 
  %l4-7:  018a6400 0001 0006
  02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b, 23e000, 
ff00ff, 2, 30006766080)
  %l0-3:  00ff 0100 0002
  %l4-7:  00ff fc00 00ff
  02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2, 1, 1e910)
  %l0-3:  7b6063c8 030006af2570 0300060c5cf0
  %l4-7: 02a100adf538 0004 0004 0300060c7a88
  02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440, 2b3ca, 2, 6, 
1e910)
  %l0-3: 030005dd96c0  030006ae7750 030006af2678
  %l4-7: 030006766080 0013 0001 
  02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440, 30005ac8cc0, 2, 2)
  %l0-3: 030006af2570 030006ae77a8 030006ae7808 030006ae7808
  %l4-7:  030006ae77a8 0001 03000640ace0
  02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0, 30005dd97a0, 
30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
  %l0-3: 704e84c0 704e8000 704e8000 0001
  %l4-7:  704e4000  030005dd9440
  02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0, 0, 0, 
300060c5318, 1e910)
  %l0-3:  000f  478d
  %l4-7: 030005dd97a0  030005dd97a0 030005dd9820
  02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0, 
30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
  %l0-3: 0001 0007 0300040c7e38 
  %l4-7: 030006f36808   
  02a100adf890 zfs:dsl_pool_sync+64 (300040c7d00, 1e910, 30006f36780, 
30005ac9640, 30005581a80, 30005581aa8)
  %l0-3:  03000391ed00 030005ac8cc0 0300040c7e98
  %l4-7: 0300040c7e68 0300040c7e38 0300040c7da8 030005dd9440
  02a100adf940 zfs:spa_sync+1b0 (3000391e940, 1e910, 0, 0, 2a100adfcc4, 1)
  %l0-3: 03000391eb00 03000391eb10 03000391ea28 030005ac9640
  %l4-7:  03000410f580 0300040c7d00 03000391eac0
  02a100adfa00 zfs:txg_sync_thread+134 (300040c7d00, 1e910, 0, 2a100adfab0, 
300040c7e10, 300040c7e12)
  %l0-3: 0300040c7e20 0300040c7dd0  0300040c7dd8
  %l4-7: 0300040c7e16 0300040c7e14 0300040c7dc8 0001e911
  syncing file systems... [1] 16 [1] 6 [1] [1] [1] [1] [1] [1] [1] [1] [1] [1] 
[1] [1] [1] [1] [1] [1] [1] [1] [1] [1] [1] done (not all i/o completed)
  dumping

[zfs-discuss] What would be the exact difference between import/mount and export/unmount ?

2007-09-27 Thread dudekula mastan
 
  Hi All,
   
  What exactly do the import and export commands do?
   
  Are they similar to mount and unmount?
   
  How does import differ from mount, and how does export differ from umount?
   
  If you run the zpool import command, it lists all the importable zpools and 
the devices that are part of those zpools. How exactly does the import command 
work?
   
  On my machine I have 10 to 15 zpools, and I am pumping I/O to them. The I/O 
pump tool works fine for 2 to 3 hours, but after that my machine somehow goes 
down.
  Can anyone explain why this is happening?
   
  I am not very familiar with the debugging tools. Can anyone explain how to 
debug the core dump?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan D
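
A brief illustration of the difference asked about above (pool and dataset 
names are examples, not from this thread): export/import operate on a whole 
pool, releasing or claiming its devices and recording the state on disk, while 
zfs mount/umount only attach or detach an individual file system of a pool that 
is already imported:

    # Pool level: export unmounts every dataset in the pool and marks the
    # pool exported on disk; import scans the devices and brings the whole
    # pool (and its file systems) back.
    zpool export mypool
    zpool import mypool

    # Dataset level: mount/umount touch a single file system and leave the
    # pool itself imported.
    zfs umount mypool/data
    zfs mount mypool/data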

   


[zfs-discuss] Query on Zpool mount point

2007-09-10 Thread dudekula mastan
Hi All,
   
  At zpool creation time, the user controls the pool's mount point with the 
"-m" option. Is there a way to change this mount point dynamically?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan D
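
The mount point is an ordinary ZFS property, so it can be changed after 
creation; a short sketch (pool name and paths are only illustrative):

    zpool create -m /data tank c0t0d0
    zfs get mountpoint tank
    zfs set mountpoint=/export/data tank   # the file system is remounted at the new path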

   


[zfs-discuss] Is it possible to create a ZPool on SVM volumes ?

2007-06-12 Thread dudekula mastan

Hi All,
   
  Is it possible to create a zpool on SVM volumes? What are the limitations of 
doing this?
   
  On a Solaris machine, how many zpools can we create? Is there any limit on 
the number of zpools per system?
   
  -Masthan
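
For what it's worth, a zpool can generally be built on any block device, 
including an SVM metadevice, although layering two volume managers on top of 
each other is usually discouraged. A hedged sketch (the metadevice and slice 
names are made up for illustration, and SVM state database replicas are assumed 
to exist already):

    metainit d101 1 1 c1t0d0s0            # simple one-slice concat
    metainit d100 -m d101                 # one-way mirror d100 on top of it
    zpool create svmpool /dev/md/dsk/d100
    zpool status svmpool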

   


Re: [zfs-discuss] file system full corruption in ZFS

2007-05-28 Thread dudekula mastan
At least in my experience, I have seen corruption when a ZFS file system was 
full. So far there is no way to check file system consistency on ZFS (to the 
best of my knowledge); the ZFS developers' position is that a ZFS file system 
is always consistent on disk and that no fsck command is needed. 
   
  >>>I cannot issue a ls -la (it hangs) but a ls  works fine
   
  Masthan: On a zpool it is always good to reserve about 20% of the pool space 
as headroom for metadata, and not to use that space for any other purpose. In 
your case, flush the data and then sync the zpool; if you still see the 
problem, reboot your machine. If things are so bad that even a reboot does not 
help, then increase the zpool's capacity.
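
One hedged way to keep that kind of headroom is to park a reservation on an 
otherwise empty dataset; the names and sizes below are purely illustrative:

    zfs create tank/reserve
    zfs set reservation=20G tank/reserve   # space no other dataset can consume
    # If the pool ever fills up, shrink the reservation to get space back:
    # zfs set reservation=10G tank/reserve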
   
  -Masthan 

Richard Elling <[EMAIL PROTECTED]> wrote:
  Michael Barrett wrote:
> Robert Milkowski wrote:
>> Hello Michael,
>>
>> Sunday, May 27, 2007, 5:13:39 AM, you wrote:
>>
>> MB> Does ZFS handle a file system full situation any better than UFS? I had
>> MB> a ZFS file system run at 100% full for a few days, deleted out the
>> MB> offending files to bring it back down to 75% full, and now in certain
>> MB> directories I cannot issue a ls -la (it hangs) but a ls works fine.
>> MB> Normally in UFS this meant one would need to run a fsck. What does one
>> MB> do for ZFS?
>>
>>
>> 1. I've never run into a bug where after ufs was 100% full you need to fsck -
>> perhaps there's another problem?
> 
> Normally if you have a ufs file system hit 100% and you have a very high 
> level of system and application load on the box (that resides in the 
> 100% file system) you will run into inode issues that require a fsck and 
> show themselves by not being able to long-list out all their attributes 
> (ls -la). Not a bug, just what happens.

I've done extensive testing of this condition and have never found a need
for fsck. Can you reproduce this? If so, then please file a bug!
-- richard


[zfs-discuss] How crazy ZFS import command................

2007-02-28 Thread dudekula mastan
Hi All,
   
  Today I created a zpool (named testpool) on c0t0d0.
   
  #zpool create -m /masthan testpool c0t0d0
   
  Then I wrote some data to the pool:
   
  #cp /usha/* /masthan/
   
  Then I destroyed the zpool:
   
  #zpool destroy testpool
   
  After that I created a UFS file system on the same device, i.e. on c0t0d0:
   
  #newfs -f 2048 /dev/rdsk/c0t0d0s2
  and then I mounted it, wrote some data to it, and unmounted it.
   
  But I am still able to see the ZFS file system on c0t0d0.
   
  The command #zpool import -Df testpool successfully imports the testpool, 
and it shows all the files I wrote earlier.
   
  What's wrong with the ZFS import command? Even after a new file system has 
been created on a ZFS disk, it recovers the old ZFS file system.
   
  Why is ZFS designed like that? How does it recover the old ZFS file system?
   
  -Masthan
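
A hedged explanation: ZFS keeps four copies of its vdev label, two near the 
start of the device and two in the last 512 KB, and newfs does not necessarily 
overwrite the trailing copies, so import -D can still reconstruct the destroyed 
pool from them. Surviving labels can be inspected, and wiped, roughly like this 
(device names are examples, and the dd command destroys data):

    zdb -l /dev/rdsk/c0t0d0s2        # dump whatever ZFS labels survive

    # Zero the leading label copies; the trailing two copies, in the last
    # 512 KB of the device, would need the same treatment at the matching
    # offset.
    dd if=/dev/zero of=/dev/rdsk/c0t0d0s2 bs=1024k count=1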
   

 


[zfs-discuss] How zpool import command works ?

2007-02-26 Thread dudekula mastan

Hi All,
   
  I have a zpool (named testpool) on /dev/dsk/c0t0d0. 
   
  The command $zpool import testpool imports the testpool (that is, mounts 
it).
   
  How does the import command come to know that testpool was created on 
/dev/dsk/c0t0d0?
   
  Also, the command $zpool import lists all the zpools we can import. How does 
it find them?
   
  Please take a look at the following sequence of commands:
   
  //Create testpool on /dev/dsk/c0t0d0 and destroy it
  #zpool create testpool /dev/dsk/c0t0d0
  #zpool destroy testpool
   
  //Create testpool on /dev/dsk/c0t0d1 and destroy it
  #zpool create testpool /dev/dsk/c0t0d1
  #zpool destroy testpool
   
  //Now list all the zpools that have been destroyed
  #zpool import -D
   
  The above command lists two testpools, one on c0t0d0 and another on c0t0d1.
   
  At any given time we can create (import) only one pool with a given name, 
but the above command lists two different pools with the same name. What's 
wrong with the import command?
   
  How does a ZFS system know which device belongs to which pool?
  Does the zpool import command read any information on the disk to learn 
which pool the disk belongs to?
   
  Your help is appreciated.
   
  Thanks & Regards
  -Masthan
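
A hedged sketch of what is going on (device names and the GUID below are 
illustrative): every vdev carries an on-disk ZFS label recording the pool name, 
the pool GUID and the vdev configuration; zpool import scans the devices and 
reads those labels, which is also why two destroyed pools can show up under the 
same name -- they are told apart by GUID:

    zpool import                  # scan /dev/dsk and list importable pools
    zpool import -d /dev/dsk      # same scan, with an explicit device directory

    # Dump the on-disk label of one device; it shows the pool name, pool
    # GUID and vdev tree that tie the device to its pool.
    zdb -l /dev/dsk/c0t0d0s0

    # A specific destroyed pool can be picked by its GUID (and optionally
    # renamed on import); the number here is made up:
    # zpool import -D 1234567890123456789 testpool_old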


Re: [zfs-discuss] How do i know which file system I am using?

2007-02-25 Thread dudekula mastan

>> I am using Solaris 10 and I am not a superuser. How do I know which 
>> file system I am using? 
   
  To find out the FS type of a device, use the "fstyp" command. For more 
information about this command, refer to its man page.
   
  >> And can I use a ZFS file system locally? I mean, in case my admin is not 
using it, can I test it locally?
   
  Yes, you can.
   
  -Masthan
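
A quick illustration of the fstyp suggestion (the device name is an example; 
fstyp needs read access to the device, so a non-root user may have to ask the 
admin to run it):

    fstyp /dev/dsk/c0t0d0s0       # reports the file system type on the device
    zfs list                      # lists any ZFS datasets visible on the system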




 


Re: [zfs-discuss] Is ZFS file system supports short writes ?

2007-02-17 Thread dudekula mastan
If a write call attempted to write X bytes of data, and the write call writes 
only x (where x < X) bytes, then it is a short write.

 wrote:
  Robert Milkowski wrote:
>
> Hello dudekula,
>
>
> Thursday, February 15, 2007, 11:08:26 AM, you wrote:
>
>
> >
>
> 
>
> Hi all,
>
> 
>
> Please let me know the ZFS support for short writes ?
>
> 
>
>
>
> And what are short writes?
>

http://www.pittstate.edu/wac/newwlassignments.html#ShortWrites :-P
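
A minimal sketch of handling short writes as defined above (plain POSIX C, 
nothing ZFS-specific is assumed, and the helper name is made up): keep calling 
write(2) until every byte is out or a real error occurs, since any file system 
is allowed to return a short count:

    #include <errno.h>
    #include <unistd.h>

    /* Write len bytes, retrying after short writes and EINTR. */
    ssize_t
    write_all(int fd, const void *buf, size_t len)
    {
            const char *p = buf;
            size_t left = len;

            while (left > 0) {
                    ssize_t n = write(fd, p, left);
                    if (n < 0) {
                            if (errno == EINTR)
                                    continue;   /* interrupted, retry */
                            return (-1);        /* real error, e.g. ENOSPC */
                    }
                    p += n;
                    left -= n;
            }
            return ((ssize_t)len);
    }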


 


[zfs-discuss] Is ZFS file system supports short writes ?

2007-02-15 Thread dudekula mastan
Hi all,
   
  Please let me know whether ZFS supports short writes.
   
  Thanks & Regards
  Masthan

 


Re: [zfs-discuss] VxVM volumes in a zpool.

2007-02-14 Thread dudekula mastan
I don't think it is a good idea to create a ZFS file system on VxVM volumes (I 
feel it may not even be possible to create a ZFS file system on VxVM volumes - 
I'm not sure).
   
  -Masthan
Mike Gerdts <[EMAIL PROTECTED]> wrote:
  On 1/18/07, Tan Shao Yi wrote:
> Hi,
>
> Was wondering if anyone had experience working with VxVM volumes in a
> zpool. We are using VxVM 5.0 on a Solaris 10 11/06 box. The volume is on a
> SAN, with two FC HBAs connected to a fabric.
>
> The setup works, but we observe a very strange message on bootup. The
> bootup screen is attached at the bottom of this e-mail.

I suspect that the problem is that
svc:/system/vxvm/vxvm-sysboot:default needs to be online before
svc:/system/filesystem/local:default. I bet that adding something
like the following to /var/svc/manifest/system/vxvm/vxvm-sysboot.xml:

<dependent
    name='vxvm-sysboot_filesystem-local'
    grouping='optional_all'
    restart_on='none'>
    <service_fmri value='svc:/system/filesystem/local' />
</dependent>
You can then run:

svccfg import /var/svc/manifest/system/vxvm/vxvm-sysboot.xml

On your next boot it should bring up (or try to...) vxconfigd (which
should make volumes available) before the first zfs commands are run.
I suspect that if /usr or /opt is a separate file system, you may have
issues with this dependency.

This is based upon 5 minutes of looking, not a careful read of all the
parts involved.

Mike

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] Meta data corruptions on ZFS.

2007-02-06 Thread dudekula mastan
Hi All,
   
  Does no one have any idea about this?
   
  -Masthan

dudekula mastan <[EMAIL PROTECTED]> wrote:
  
  Hi All,
   
  In my test setup, I have one zpool of size 1000 MB.
   
  On this zpool, my application writes 100 files, each of size 10 MB.
   
  The first 96 files were written successfully without any problem.
   
  But the 97th file was not written successfully; only 5 MB were written (per 
the return value of the write() call). 
   
  Since this is a short write, my application tried to truncate the file to 
5 MB, but ftruncate failed with an error message saying there is no space left 
on the device.
   
  Have you ever seen this kind of error message?
   
  After the ftruncate failure I checked the size of the 97th file, and it is 
strange: the size is 7 MB, but the expected size is only 5 MB.
   
  Your help is appreciated.
   
  Thanks & Regards
  Mastan
   



Re: [zfs-discuss] Which label a ZFS/ZPOOL device has ? VTOC or EFI ?

2007-02-04 Thread dudekula mastan
Hi Robert,
   
  Thanks for your quick reply.
   
  As far as I know (I am new to Solaris), the second slice of a disk (e.g. 
c0t0d0s2) covers exactly the same space as the whole disk (e.g. c0t0d0). That 
means that if we create a zpool on the second slice, it effectively uses the 
whole disk.
   
  For example, the following two commands do the same job:
   
   
  $zpool create -f mypool c1t0d0
   
  $zpool create -f mypool c1t0d0s2
   
  How does the label of the disk differ between the first case ($zpool create 
-f mypool c1t0d0) and the second case ($zpool create -f mypool c1t0d0s2)?
   
   
  Thanks & Regards
  Masthan
   
   
   
   
  

Robert Milkowski <[EMAIL PROTECTED]> wrote:
  Hello dudekula,
  

  Saturday, February 3, 2007, 8:31:24 AM, you wrote:
  

>
Hi All,
   
  The zpool/zfs commands write an EFI label on a device when we create a 
zpool/ZFS file system on it. Is that true?
   
  I formatted a device with a VTOC label and created a ZFS file system on it.
   
  Which label does the ZFS device have now? Is it the old VTOC or EFI?
  


  

  

  It depends on how you created the pool.
  If during pool creation you specified the entire disk (c0t0d0, without a 
slice), then ZFS put an EFI label on it. If you specified a slice, then ZFS 
just used that slice without changing the label at all.
  

  You can also check current label with format(1M) utility.
  

  

  

  

  -- 
  Best regards,
   Robertmailto:[EMAIL PROTECTED]
 http://milek.blogspot.com
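
A hedged way to check which label a disk ended up with (device names are 
examples): prtvtoc should print the current partition table under either label 
type, and the "verify" command inside format(1M) reports the label as well:

    prtvtoc /dev/rdsk/c1t0d0s2    # print the partition table / label information
    format                        # pick the disk, then run "verify" to see the label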


 


[zfs-discuss] Which label a ZFS/ZPOOL device has ? VTOC or EFI ?

2007-02-02 Thread dudekula mastan
Hi All,
   
  The zpool/zfs commands write an EFI label on a device when we create a 
zpool/ZFS file system on it. Is that true?
   
  I formatted a device with a VTOC label and created a ZFS file system on it.
   
  Which label does the ZFS device have now? Is it the old VTOC or EFI?
   
  After creating the ZFS file system on a VTOC-labeled disk, I am seeing the 
following warning messages:
   
  Feb  3 07:47:00 scoobyb Corrupt label; wrong magic number
  Feb  3 07:47:00 scoobyb scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/[EMAIL PROTECTED] (ss
d156):
   
  Any idea on this ?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan


 


[zfs-discuss] Meta data corruptions on ZFS.

2007-02-02 Thread dudekula mastan

  Hi All,
   
  In my test setup, I have one zpool of size 1000 MB.
   
  On this zpool, my application writes 100 files, each of size 10 MB.
   
  The first 96 files were written successfully without any problem.
   
  But the 97th file was not written successfully; only 5 MB were written (per 
the return value of the write() call). 
   
  Since this is a short write, my application tried to truncate the file to 
5 MB, but ftruncate failed with an error message saying there is no space left 
on the device.
   
  Have you ever seen this kind of error message?
   
  After the ftruncate failure I checked the size of the 97th file, and it is 
strange: the size is 7 MB, but the expected size is only 5 MB.
   
  Your help is appreciated.
   
  Thanks & Regards
  Mastan
   

 


[zfs-discuss] Need Help on device structure

2007-01-30 Thread dudekula mastan
Hi All,
   
   
  I don't know whether this is the right place to discuss my doubts.
   
  I opened a device (in raw mode) and filled the entire space (from the first 
block to the last block) with some random data. While writing the data, I see 
the following warning messages in the dmesg buffer:
   
  Jan 30 08:32:36 masthan scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/[EMAIL PROTECTED] (ssd175):
Jan 30 08:32:36 masthan Corrupt label; wrong magic number

  Any idea on this ?
   
  I suspect my application is corrupting the device structures (the disk 
label, partition table, etc.).
   
  On Linux, the first block of the device holds these structures. Do you know 
which blocks hold the device structures on Solaris?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan



 


[zfs-discuss] ftruncate is failing on ZFS

2007-01-29 Thread dudekula mastan
Hi All,
   
  In my test setup, I have one zpool of size 1000 MB, and it has only 30 MB of 
free space (970 MB is used for some other purpose). On this zpool I created one 
file (using the open() call) and attempted to write 2 MB of data to it (with 
the write() call), but the write failed: it wrote only 1.3 MB (the return value 
of the write() call), because of "No space left on device". After that I tried 
to truncate the file to 1.3 MB, but that is failing too. 
   
  Any clues on this?
   
  -Masthan

 


[zfs-discuss] ZFS direct IO

2007-01-04 Thread dudekula mastan
Hi All,
   
  As you all know, direct I/O is not supported by the ZFS file system. When 
will the ZFS team add direct I/O support to ZFS? What is the roadmap for ZFS 
direct I/O? If you have any idea about this, please let me know.
   
   
  Thanks & Regards
  Masthan



Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-13 Thread dudekula mastan
Hi Darren
   
  Thanks for your reply.
   
  Please take a closer look at the following command:
   
  $mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000
   
  The above command creates a VxFS file system on the first 2048000 blocks 
(each block 1024 bytes) of /dev/rdsk/c5t20d9s2.
   
  Is there a similar option to limit the size of a ZFS file system? If so, 
what is it and how does it work?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan

Darren Dunham <[EMAIL PROTECTED]> wrote:
  > Hi All,
> 
> Assume the device c0t0d0 size is 10 KB.
> I created ZFS file system on this
> $ zpool create -f mypool c0t0d0s2

This creates a pool on the entire slice.

> and to limit the size of ZFS file system I used quota property.
> 
> $ zfs set quota = 5000K mypool

Note that this sets a quota only on the default filesystem that was
created along with the zpool. There may be other filesystems created on
the pool with different quotas. You are not setting a quota on the pool
itself.

> Which 5000 K bytes are belongs (or reserved) to mypool first 5000KB
> or last 5000KB or random ?

All blocks belong to the pool. The /mypool filesystem may be allocated
any particular space there depending on other filesystems and layout.
Attempts to allocate space greater than 5000K will fail.

> UFS and VxFS file systems have options to limit the size of file
> system on the device (E.g. We can limit the size offrom 1 block to
> some nth block . Like this is there any sub command to limit the
> size of ZFS file system from 1 block to some n th block ?

I'm not sure what you're saying here. UFS and VxFS normally take the
entire space of a disk slice or volume. The pool creation does the same
thing.

Can you clarify what you mean by limiting the size of UFS or VxVS?

-- 
Darren Dunham [EMAIL PROTECTED]
Senior Technical Consultant TAOS http://www.taos.com/
Got some Dr Pepper? San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
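
To illustrate Darren's point with a hedged example (device, names and sizes are 
only illustrative): the quota lands on the dataset called mypool, the pool's 
top-level file system, and each descendant file system can carry its own quota; 
none of this caps "the pool" as a separate object:

    zpool create mypool c0t0d0s2
    zfs set quota=5000K mypool        # limits the top-level dataset and its descendants
    zfs create mypool/other
    zfs set quota=2M mypool/other     # an independent, tighter limit on the child
    zfs list -o name,quota,used,available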


[zfs-discuss] Need Clarification on ZFS quota property.

2006-12-12 Thread dudekula mastan

Hi All,
   
  Assume the size of the device c0t0d0 is 10 KB.
   
  I created a ZFS file system on it:
   
  $ zpool create -f mypool c0t0d0s2
   
  and to limit the size of the ZFS file system I used the quota property:
   
  $ zfs set quota = 5000K mypool
   
  Which 5000 KB belong to (or are reserved for) mypool: the first 5000 KB, the 
last 5000 KB, or random blocks?
   
  UFS and VxFS have options to limit the size of the file system on a device 
(e.g. we can limit it to span from block 1 to some nth block). Is there a 
similar subcommand to limit a ZFS file system to span from block 1 to some nth 
block?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan
   
   



[zfs-discuss] How to do DIRECT IO on ZFS ?

2006-12-12 Thread dudekula mastan
Hi All,
   
  We have the directio() call to do direct I/O on a UFS file system. Does 
anyone know how to do direct I/O on a ZFS file system?
   
  Regards
  Masthan

 


[zfs-discuss] Doubt on solaris 10 installation ..

2006-12-11 Thread dudekula mastan
Hi Everybody,
   
  I have some problems with the Solaris 10 installation. 
   
  After installing the first CD, I removed the CD from the CD-ROM drive; after 
that the machine keeps rebooting again and again. It never asks for the second 
CD to install.
   
  If you have any idea, please tell me.
   
  Thanks & Regards
  Masthan



[zfs-discuss] doubt on solaris 10

2006-12-10 Thread dudekula mastan
 
  Hi ALL,
   
  Is it possible to install Solaris 10 on an HP VISUALIZE XL-Class server?
   
  Regards
  Masthan




[zfs-discuss] Limitations of ZFS

2006-12-07 Thread dudekula mastan
Hi Folks,
   
  The zfs and zpool man pages clearly say that it is not recommended to use 
only a portion of a device for ZFS file system creation.
   
  Concretely, what are the problems if we use only some portion of the disk 
space for a ZFS file system?
   
or 
   
  Why can't I use one partition of a device for a ZFS file system and another 
partition for some other purpose?
   
  Will it cause any problems if I use one partition of the device for ZFS and 
another partition for some other purpose?
   
  Why does everyone strongly recommend using the whole disk (not part of the 
disk) when creating zpools / ZFS file systems?
   
  Your help is appreciated.
   
   
  Thanks & Regards
  Masthan

 


[zfs-discuss] need Clarification on ZFS

2006-12-04 Thread dudekula mastan
Hi All,
   
  I am new to Solaris. Please clarify the following questions for me.
   
  1) On Linux, to detect the presence of an ext2/ext3 file system on a device 
we use the tune2fs command. Is there a similar command to detect the presence 
of a ZFS file system on a device?
   
  2) When a device is shared between two machines, what our project does is 
(a ZFS version of this cycle is sketched after this message):
   
  -> Create an ext2 file system on the device 
  a) Mount the device on machine 1
  b) Write data to the device 
  c) Unmount the device from machine 1
  d) Mount the device on machine 2
  e) Read the data from the device
  f) Compare the read data with the previously written data and report the 
result
  g) Unmount the device from machine 2
  h) Go to step a.
   
  Can we share a ZFS file system between two machines in the same way? If so, 
please explain how.
   
  3) Can we create ZFS pools (or ZFS file systems) on VxVM volumes? If so, 
how?
   
  4) Can we share ZFS pools (ZFS file systems) between two machines?
   
  5) Like the fsck command on Linux, is there any command to check the 
consistency of a ZFS file system?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan
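
A hedged sketch of the two-machine cycle in ZFS terms, plus the closest thing 
to a consistency check; the pool name, device and reference-copy path are 
illustrative, and the pool must only be imported on one host at a time:

    # On machine 1:
    zpool create testpool c1t0d0          # the shared LUN
    cp /data/* /testpool/
    zpool export testpool

    # On machine 2:
    zpool import testpool
    diff -r /reference.copy /testpool     # compare with the reference copy

    # There is no fsck for ZFS; consistency is checked online with a scrub
    # while the pool is imported.
    zpool scrub testpool
    zpool status testpool                 # shows scrub progress and any errors

    zpool export testpool                 # release the pool before machine 1 takes it back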
   

 