[zfs-discuss] Re: Kernel panic at zpool import

2008-08-11 Thread Łukasz K
On 7-08-2008 at 13:20, Borys Saulyak wrote:
 Hi,
 
 I have a problem with Solaris 10. I know that this forum is for
 OpenSolaris, but maybe someone will have an idea.
 My box is crashing on any attempt to import a zfs pool. The first crash
 happened on an export operation, and since then I cannot import the pool
 anymore due to kernel panics. Is there any way of getting it imported or
 fixed? Removing zpool.cache did not help.
 
 Here are details:
 SunOS omases11 5.10 Generic_137112-02 i86pc i386 i86pc
 
 [EMAIL PROTECTED]:~[8]#zpool import
 pool: public
 id: 10521132528798740070
 state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
 public ONLINE
 c7t60060160CBA21000A5D22553CA91DC11d0 ONLINE
 
 pool: private
 id: 3180576189687249855
 state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
 private ONLINE
 c7t60060160CBA21000A6D22553CA91DC11d0 ONLINE
 


Try changing the uberblock:

http://www.opensolaris.org/jive/thread.jspa?messageID=217097

This might help.
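Before rewriting anything, it may help to dump the labels and uberblocks
read-only first. A minimal sketch, assuming the device name from your
zpool import output; zdb flags vary between Solaris builds, so treat
this as a starting point, not a recipe:

  # Print all four vdev labels (each holds an uberblock array):
  zdb -l /dev/dsk/c7t60060160CBA21000A5D22553CA91DC11d0s0

  # Dump uberblock details for the exported pool without importing it:
  zdb -e -uuu public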


--Lukas Karwacki






[zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.

As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Other backup systems (disk-to-disk or block-to-block) have the
same problem with mounting a ZFS pool.
I hope I'm wrong?

In case of any problem I want the backup pool to be operational
within 1 hour.

Do you know of any solution?

--Lukas






Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 16:11, Jim Dunham wrote:
 Łukasz K wrote:
 
  Hi
  I'm using ZFS on a few X4500s and I need to back them up.
  The data on the source pool keeps changing, so online replication
  would be the best solution.
 
  As far as I know, AVS doesn't support ZFS - there is a problem with
  mounting the backup pool.
 
 This is not true if replication is configured correctly.
 Where are you getting the information about the aforementioned problem?

I read it on zfs-discuss.

 
 Have you looked at the following?
 
 http://blogs.sun.com/avs
 http://www.opensolaris.org/os/project/avs/

I have seen that.

I want to configure X4500 A to replicate data to X4500 B
 - asynchronous replication - synchronous would block I/O on A.

Let's say I have a crash on A, and I want to use backup pool B.
The B pool can be mounted with the force option.
How much data will I lose, and
is there a guarantee that pool B is consistent?
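For what it's worth, the failover itself can be two commands. A minimal
sketch, assuming an SNDR group named 'private-group' (hypothetical)
covering every LUN in the pool - write ordering must be preserved across
the whole pool for the import to find a consistent state:

  # On host B, after A has died: stop replication so the secondary
  # volumes stop changing (puts the SNDR sets into logging mode).
  sndradm -g private-group -l

  # Force-import the pool from the replicated LUNs. ZFS comes up at
  # the last consistent state that reached B.
  zpool import -f private

The data lost is whatever was still queued on A at the moment of the
crash.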

 
 
  Other backup systems (disk-to-disk or block-to-block) have the
  same problem with mounting a ZFS pool.
  I hope I'm wrong?
 
  In case of any problem I want the backup pool to be operational
  within 1 hour.
 
  Do you know of any solution?
 






Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 17:45, eric kustarz wrote:
 On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
 
  Hi
  I'm using ZFS on a few X4500s and I need to back them up.
  The data on the source pool keeps changing, so online replication
  would be the best solution.
 
  As far as I know, AVS doesn't support ZFS - there is a problem with
  mounting the backup pool.
  Other backup systems (disk-to-disk or block-to-block) have the
  same problem with mounting a ZFS pool.
  I hope I'm wrong?
 
  In case of any problem I want the backup pool to be operational
  within 1 hour.
 
  Do you know of any solution?
 
 If it doesn't need to be synchronous, then you can use 'zfs send -R'.

I need an automated system. Right now I'm using zfs send, but it
takes too much manual effort to control it.
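For what it's worth, the bookkeeping can be scripted so cron does the
controlling. A minimal sketch, assuming a source pool 'tank', a target
host 'hostB' with a pool 'backup', and a state file path of my choosing
(all hypothetical), and that the first full send was already done by
hand:

#!/bin/sh
# Incremental zfs send/recv replication, meant to be run from cron.
# Assumes the state file was seeded with the name of an initial
# snapshot that has already been fully sent to the target.

POOL=tank                          # source pool (hypothetical)
REMOTE=hostB                       # receiving X4500 (hypothetical)
STATE=/var/run/last_repl_snap      # remembers the last sent snapshot

PREV=`cat $STATE` || exit 1
NOW=repl-`date +%Y%m%d%H%M`

# Take a new recursive snapshot of the whole pool.
zfs snapshot -r $POOL@$NOW || exit 1

# Send every snapshot between the previous one and the new one.
# -F on the receive side first rolls the backup pool back to the
# last received snapshot, discarding any local changes on B.
zfs send -R -I @$PREV $POOL@$NOW \
    | ssh $REMOTE zfs recv -F -d backup || exit 1

# Remember the new snapshot and prune the old one on the source.
echo $NOW > $STATE
zfs destroy -r $POOL@$PREV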

 
 eric






Re: [zfs-discuss] Slow file system access on zfs

2007-11-08 Thread Łukasz K


On 8-11-2007 at 7:58, Walter Faleiro wrote:

 Hi Lukasz,
 The output of the first script gives:

 bash-3.00# ./test.sh
 dtrace: script './test.sh' matched 4 probes
 CPU     ID                    FUNCTION:NAME
   0  42681                        :tick-10s

   0  42681                        :tick-10s

   0  42681                        :tick-10s

 and it goes on.

It means that you have free blocks :), or you do not have any I/O
writes. Run:
#zpool iostat 1
and
#iostat -zxc 1

 The second script gives:

 checking pool map size [B]: filer
 mdb: failed to dereference symbol: unknown symbol name
 423917216903435
Which Solaris version do you use? Maybe you should patch the kernel.
Also you can check if there are problems with the ZFS sync phase. Run:

#dtrace -n fbt::txg_wait_open:entry'{ stack(); ustack(); }'

and wait 10 minutes. Also give more information about the pool:

#zfs get all filer

I assume 'filer' is your pool name.

Regards
Lukas

On 11/7/07, Łukasz K [EMAIL PROTECTED] wrote:
Hi,

I think your problem is filesystem fragmentation. When available space
is less than 40%, ZFS might have problems with finding free blocks. Use
this script to check it:

#!/usr/sbin/dtrace -s

fbt::space_map_alloc:entry
{
        self->s = arg1;
}

fbt::space_map_alloc:return
/arg1 != -1/
{
        self->s = 0;
}

fbt::space_map_alloc:return
/self->s && (arg1 == -1)/
{
        @s = quantize(self->s);
        self->s = 0;
}

tick-10s
{
        printa(@s);
}

Run the script for a few minutes.

You might also have problems with space map size. This script will show
you the size of the space map on disk:

#!/bin/sh

echo '::spa' | mdb -k | grep ACTIVE \
  | while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"
  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
    | mdb -k \
    | nawk '{sub("^0t","",$3);sum+=$3}END{print sum}'
done

In memory the space map takes 5 times more. Not all of the space map is
loaded into memory all the time, but for example during snapshot removal
the whole space map might be loaded, so check if you have enough RAM
available on the machine. Check ::kmastat in mdb. The space map uses
kmem_alloc_40 (on Thumpers this is a real problem).

Workaround:
1. First you can change the pool recordsize:
     zfs set recordsize=64K POOL
   Maybe you will have to use 32K or even 16K.
2. You will have to disable the ZIL, because the ZIL always takes 128kB
   blocks.
3. Try to disable cache, tune the vdev cache. Check:
   http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

Lukas Karwacki
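A sketch of how items 2 and 3 were typically applied on Solaris 10 of
that era, assuming the zil_disable and zfs_vdev_cache_size tunables
present in those builds (both are system-wide, take effect after a
reboot, and the values are illustrations, not recommendations):

* /etc/system additions (reboot required):
* disable the ZIL for every pool on the host
set zfs:zil_disable = 1
* disable the vdev read-ahead cache
set zfs:zfs_vdev_cache_size = 0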
On 7-11-2007 at 1:49, Walter Faleiro wrote:

 Hi,
 We have a zfs file system configured using a Sunfire 280R with a 10T
 Raidweb array:

 bash-3.00# zpool list
 NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
 filer   9.44T  6.97T  2.47T   73%   ONLINE   -

 bash-3.00# zpool status
   pool: backup
  state: ONLINE
  scrub: none requested
 config:

         NAME        STATE     READ WRITE CKSUM
         filer       ONLINE       0     0     0
           c1t2d1    ONLINE       0     0     0
           c1t2d2    ONLINE       0     0     0
           c1t2d3    ONLINE       0     0     0
           c1t2d4    ONLINE       0     0     0
           c1t2d5    ONLINE       0     0     0

 The file system is shared via NFS. Of late we have seen that file
 system access slows down considerably. Running commands like find and
 du on the zfs system did slow it down, but the intermittent slowdowns
 cannot be explained. Is there a way to trace the I/O on the zfs so that
 we can list out heavy reads/writes to the file system that may be
 responsible for the slowness?

 Thanks,
 --Walter



Re: [zfs-discuss] ZFS Space Map optimization

2007-10-11 Thread Łukasz K
  Now space maps, intent log, spa history are compressed.
 
 All normal metadata (including space maps and spa history) is always
 compressed.  The intent log is never compressed.

Can you tell me where the space map is compressed?

The buffer is filled up with:

        *entry++ = SM_OFFSET_ENCODE(start) |
            SM_TYPE_ENCODE(maptype) |
            SM_RUN_ENCODE(run_len);

and later dmu_write is called.
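If you want to see the resulting on-disk sizes without the mdb walk,
zdb can print per-metaslab space map information; a minimal sketch,
assuming a pool named 'tank' (output detail varies between builds):

  # Print per-metaslab space map information for the pool:
  zdb -mm tank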

I want to propose a few optimizations here:
 - the space map block size should be dynamic (the 4KB buffer is a bug);
   my space map on a Thumper takes over 3.5 GB / 4kB = 855k blocks

 - the space map should be compressed before dividing:
   1. fill a larger block with data
   2. compress it
   3. divide it into blocks and then write

 - the other thing is memory usage: the space map uses kmem_alloc_40
   for allocating the space map in memory. During the sync phase after
   removing a snapshot, kmem_alloc_40 takes over 13GB of RAM and the
   system starts swapping.

My question is: when are you going to optimize the space map?
We are having big problems here with ZFS due to the space map and
fragmentation. We have to lower the recordsize and disable the ZIL.






[zfs-discuss] Re: zfs destroy takes long time

2007-08-24 Thread Łukasz K
On 23-08-2007 at 22:15, Igor Brezac wrote:
 We are on Solaris 10 U3 with relatively recent recommended patches
 applied.  zfs destroy of a filesystem takes a very long time; 20GB usage
 and about 5 million objects takes about 10 minutes to destroy.  zfs pool
 is a 2 drive stripe, nothing too fancy.  We do not have any snapshots.
 
 Any ideas?

Maybe your pool is fragmented and the pool space map is very big.

Run this script:

#!/bin/sh

echo '::spa' | mdb -k | grep ACTIVE \
  | while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"

  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
    | mdb -k \
    | nawk '{sub("^0t","",$3);sum+=$3}END{print sum}'
done

This will show the size of the pool space map on disk (in bytes).
To destroy a filesystem or snapshot on a fragmented pool, the kernel
will have to:
1. read the space map (in memory the space map will take
   4x more RAM)
2. make the changes
3. write the space map (the space map is kept on disk in 2 copies)

I don't know any workaround for this bug.
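The in-memory side of step 1 can at least be watched. A minimal check,
assuming nothing else on the box is a heavy user of the same kmem cache:

  # Run before and during the destroy; the kmem_alloc_40 buf-in-use
  # count growing by millions means the space map is being loaded.
  echo ::kmastat | mdb -k | grep kmem_alloc_40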


Lukas






Re: [zfs-discuss] Snapshots impact on performance

2007-07-27 Thread Łukasz K
On 26-07-2007 at 13:31, Robert Milkowski wrote:
 Hello Victor,
 
 Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
 
 VL Gino wrote:
  Same problem here (snv_60).
  Robert, did you find any solutions?
 
 VL A couple of weeks ago I put together an implementation of space
 VL maps which completely eliminates loops and recursion from the space
 VL map alloc operation, and allows different allocation strategies to
 VL be implemented quite easily (of which I put together 3 more). It
 VL looks like it works for me on a Thumper and my notebook with ZFS
 VL Root, though I have almost no time to test it more these days due
 VL to year end. I haven't done a SPARC build yet and I do not have a
 VL test case to test against.
 
 VL Also, it comes at a price - I have to spend some more time
 VL (logarithmic, though) during all other operations on space maps,
 VL and it is not optimized now.
 
 Lukasz (cc) - maybe you can test it and even help on tuning it?
 
Yes, I can test it. I'm building an environment to compile OpenSolaris
and test ZFS. I will be ready next week.

Victor, can you tell me where to look for your changes?
How do I change the allocation strategy?
I can see that by changing space_map_ops_t
I can declare different callback functions.

Lukas



