Re: possible zfs bug? lost all pools

2008-06-27 Thread JoaoBR
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:

...


 and if necessary /etc/rc.d/zfs should start hostid itself, or at least set a
 different REQUIRE and warn

...


 I've been in the same boat you are, and I was told the same thing.  I've
 documented the situation on my Wiki, and the necessary workarounds.

 http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

 so I changed the rcorder as you can see in the attached files

 http://suporte.matik.com.br/jm/zfs.rcfiles.tar.gz


I'm coming back to this because I am more convinced by zfs every day, and I'd
like to express my gratitude not only to those who made zfs but also, and
especially, to the people who brought it to FreeBSD - and: thank you guys for
making it public, this is really a step forward!

My zfs-related rc file changes (above) made my problems go away, and I'd like
to share some other experience here.
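
The gist of those changes is only the start ordering; a minimal sketch of the
idea (not the literal content of the tarball) in /etc/rc.d/zfs:

# PROVIDE: zfs
# REQUIRE: hostid mountcritlocal
#
# adding hostid to the REQUIRE line makes rcorder(8) run
# /etc/rc.d/hostid before the pools are touched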

As explained on Jeremy's page, I had similar problems with zfs, but it seems I
could get around them by setting vm.kmem_size* to 500, 1000 or 1500k,
depending on the machine's load. But it seems the main problem on FreeBSD is
the zfs recordsize: on ufs-like partitions I set it to 64k and I never got
panics any more, even with several zpools (which is said to be dangerous).
cache_dirs for squid or mysql partitions might need lower values to reach
their new and impressive peaks.
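
In command form the tuning amounts to something like this; the names and
values are examples only, not my exact settings:

# /boot/loader.conf (example values; tune to the machine's load):
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"

# smaller recordsize on a dataset; only files written after the change use it
zfs set recordsize=64K tank/data
zfs get recordsize tank/data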

This even seems to solve the panics when copying large files from nfs|ufs to
or from zfs ...

So it seems that FreeBSD does not like recordsize > 64k ...

For almost two months now I have had a mail server running with N zfs volumes
(one per user) to simulate quotas (+/- 1000 users), successfully and
completely stable, and performance is outstanding under all loads.
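
The setup itself is nothing fancy; roughly like this, with invented names,
one filesystem per user carrying its own quota:

zfs create tank/mail
zfs create -o quota=50M tank/mail/user1
zfs create -o quota=50M tank/mail/user2
zfs list -r tank/mail   # per-user usage shows up against each quota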

The web server (apache/php/mysql) showed major stability problems at first,
but distributing data (depending on workload) to zpools with different
recordsizes, never > 64k, solved my problems and I am apparently panic-free
now.

I run almost scsi-only; only my test machines are sata. The lowest config is
an X2 with 4G, the rest are X4s or Opterons with 8G or more, and I am
extremely satisfied and happy with zfs.

My backups run twice as fast as on ufs, mirroring in comparison to gmirror is
incredibly fast, and the zfs snapshot feature deserves an Oscar! ... and zfs
send|receive another.

So thank you all who had fingers in/on zfs! (Sometimes I press reset on my
home server just to see how fast it comes up) .. just kidding, but the truth
is: thanks again! zfs is thE fs.


-- 

João









Re: possible zfs bug? lost all pools

2008-05-19 Thread Claus Guttesen
 Did you run '/etc/rc.d/hostid start' first?
 IIRC, it is needed before zfs will mount in single-user mode.

 Just curious, as I've been wanting to fiddle around with ZFS in my spare
 time...  what is the solution here if you have failed hardware and you want
 to move your ZFS disks to another system (with a different host id)?

'zpool import', possibly with -f.  See man 1m zpool.
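
I.e. on the new system, something like this (pool name is an example):

zpool import          # lists the pools found on the attached disks
zpool import -f tank  # -f overrides the "last accessed by another system" check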

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: possible zfs bug? lost all pools

2008-05-19 Thread Fabian Keil
JoaoBR [EMAIL PROTECTED] wrote:

 the man page says that zfs can not be a dump device; not sure if I understand
 it as meant, but I can dump to zfs very well and fast as long as
 recordsize=128

I assume you tried dump(8), while the sentence in the man
page is about using a ZFS volume as a dumpon(8) target:

% sudo dumpon -v /dev/zvol/tank/swap
dumpon: ioctl(DIOCSKERNELDUMP): Operation not supported
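
dump(8) onto a file that lives on a ZFS filesystem works fine, e.g. something
like this (paths made up):

% sudo dump -0La -f /tank/backup/root.dump /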

Fabian




possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR

after trying to mount my zfs pools in single user mode I got the following 
message for each:

May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as 
it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 
0xbefb4a0f).  See: http://www.sun.com/msg/ZFS-8000-EY

Any zpool command returned nothing except that the zfs did not exist; it
seems the zfs info on the disks was gone.

To double-check I recreated them, rebooted into single-user mode and repeated
the story; same thing, trying /etc/rc.d/zfs start returns the above msg and
the pools are gone ...

I guess this is kind of wrong 


-- 

João









Re: possible zfs bug? lost all pools

2008-05-18 Thread Greg Byshenk
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
 
 after trying to mount my zfs pools in single user mode I got the following 
 message for each:
 
 May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as 
 it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 
 0xbefb4a0f).  See: http://www.sun.com/msg/ZFS-8000-EY
 
 any zpool cmd returned nothing except that the zfs did not exist; seems the
 zfs info on disks was gone
 
 to double-check I recreated them, rebooted in single user mode and repeated
 the story, same thing, trying to /etc/rc.d/zfs start returns the above msg
 and pools are gone ...
 
 I guess this is kind of wrong 


I think that the problem is related to the absence of a hostid when in
single-user.  Try running '/etc/rc.d/hostid start' before mounting.
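
In other words, from the single-user prompt:

/etc/rc.d/hostid start   # sets kern.hostid
/etc/rc.d/zfs start      # now the pools should come up normally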

http://lists.freebsd.org/pipermail/freebsd-current/2007-July/075001.html


-- 
greg byshenk  -  [EMAIL PROTECTED]  -  Leiden, NL


Re: possible zfs bug? lost all pools

2008-05-18 Thread Torfinn Ingolfsen
On Sun, 18 May 2008 09:56:17 -0300
JoaoBR [EMAIL PROTECTED] wrote:

 after trying to mount my zfs pools in single user mode I got the
 following message for each:
 
 May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
 loaded as it was last accessed by another system (host:
 gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
 http://www.sun.com/msg/ZFS-8000-EY

Did you run '/etc/rc.d/hostid start' first?
IIRC, it is needed before zfs will mount in single-user mode.

HTH
-- 
Regards,
Torfinn Ingolfsen



Re: possible zfs bug? lost all pools

2008-05-18 Thread Jeremy Chadwick
On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
 On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
  On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
   after trying to mount my zfs pools in single user mode I got the
   following message for each:
  
   May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
   loaded as it was last accessed by another system (host:
   gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
   http://www.sun.com/msg/ZFS-8000-EY
  
   any zpool cmd returned nothing except that the zfs did not exist; seems
   the zfs info on disks was gone
  
   to double-check I recreated them, rebooted in single user mode and
   repeated the story, same thing, trying to /etc/rc.d/zfs start returns
   the above msg and pools are gone ...
  
   I guess this is kind of wrong
 
  I think that the problem is related to the absence of a hostid when in
  single-user.  Try running '/etc/rc.d/hostid start' before mounting.
 
 well, obviously that came to my mind after seeing the msg ...
 
 anyway, the pools should not vanish, don't you agree?
 
 and if necessary /etc/rc.d/zfs should start hostid itself, or at least set a
 different REQUIRE and warn

I've been in the same boat you are, and I was told the same thing.  I've
documented the situation on my Wiki, and the necessary workarounds.

http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

This sort of thing needs to get hammered out before ZFS can be
considered usable from a system administration perspective.  Expecting
people to remember to run an rc.d startup script before they can use any
of their filesystems borders on unrealistic.

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
 On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
  On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
   On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
after trying to mount my zfs pools in single user mode I got the
following message for each:
   
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
loaded as it was last accessed by another system (host:
gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
http://www.sun.com/msg/ZFS-8000-EY
   
any zpool cmd returned nothing except that the zfs did not exist; seems the
zfs info on disks was gone
   
to double-check I recreated them, rebooted in single user mode and
repeated the story, same thing, trying to /etc/rc.d/zfs start
returns the above msg and pools are gone ...
   
I guess this is kind of wrong
  
   I think that the problem is related to the absence of a hostid when in
   single-user.  Try running '/etc/rc.d/hostid start' before mounting.
 
  well, obviously that came to my mind after seeing the msg ...
 
  anyway, the pools should not vanish, don't you agree?
 
  and if necessary /etc/rc.d/zfs should start hostid itself, or at least
  set a different REQUIRE and warn

 I've been in the same boat you are, and I was told the same thing.  I've
 documented the situation on my Wiki, and the necessary workarounds.

 http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues


Nice work on this page, thanks.

 This sort of thing needs to get hammered out before ZFS can be
 considered usable from a system administration perspective.  Expecting
 people to remember to run an rc.d startup script before they can use any
 of their filesystems borders on unrealistic.

Yes, but on the other hand we know it is new stuff, and sometimes the price is
what happened to me this morning; but then it also helps to make things better.

Anyway, a little fix to rc.d/zfs like

if [ -z "$(sysctl -n kern.hostid 2>/dev/null)" ]; then echo "zfs needs hostid first"; exit 0; fi

or something like it as a precmd, or first in zfs_start_main, should fix this
issue.
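
Spelled out as an rc.d-style precmd it might look like this; a sketch only,
and note kern.hostid seems to exist even before it is set (it just reads 0),
so testing for 0 is probably safer than testing for empty output:

start_precmd="zfs_precmd"

zfs_precmd()
{
	# refuse to start while no hostid is set; rc.subr skips the
	# start method when a precmd returns non-zero
	if [ "$(sysctl -n kern.hostid 2>/dev/null)" = "0" ]; then
		echo "zfs: kern.hostid not set, run /etc/rc.d/hostid start first"
		return 1
	fi
}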


Talking about it, there are more things I found that are still not working:


swapon|off from the rc.d/zfs script does not work either.

Not sure what it is, because the same part of the script run as root works;
adding a dash to #!/bin/sh does not help either, and from rc.d/zfs the state
returns a dash.

I do not see the sense in rc.d/zfs running `zfs share`, since sharing is the
default when the sharenfs property is enabled.
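
(The property alone already does it; e.g. with a made-up dataset:)

zfs set sharenfs=on tank/export   # exported via NFS automatically from then on
zfs get sharenfs tank/export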

Man page typo: it says swap -a ... not swapon -a.

The subcommands volinit and volfini are not in the manual at all.

The man page says that zfs can not be a dump device; not sure if I understand
it as meant, but I can dump to zfs very well and fast as long as
recordsize=128.

But in the end, for the short time zfs has been around, it gives me
respectable performance results and it is stable for me as well.

-- 

João









Re: possible zfs bug? lost all pools

2008-05-18 Thread Charles Sprickman

On Sun, 18 May 2008, Torfinn Ingolfsen wrote:


On Sun, 18 May 2008 09:56:17 -0300
JoaoBR [EMAIL PROTECTED] wrote:


after trying to mount my zfs pools in single user mode I got the
following message for each:

May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
loaded as it was last accessed by another system (host:
gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
http://www.sun.com/msg/ZFS-8000-EY


Did you run '/etc/rc.d/hostid start' first?
IIRC, it is needed before zfs will mount in single-user mode.


Just curious, as I've been wanting to fiddle around with ZFS in my spare 
time...  what is the solution here if you have failed hardware and you 
want to move your ZFS disks to another system (with a different host id)?


Thanks,

Charles


HTH
--
Regards,
Torfinn Ingolfsen
