RE: iSCSI lvm and reboot

2010-03-29 Thread Geoff Galitz


Take a look at configuring multipathing between the KVM server and the
fileserver.  You can take advantage of failover, or simply block IO
until the fileserver returns to service, after which everything should resume
normally.
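As a sketch (device strings and exact syntax vary with your multipath-tools version, so treat this as illustrative, not a complete config), the "block IO until the filer returns" behaviour maps to no_path_retry in /etc/multipath.conf:

```conf
# /etc/multipath.conf -- illustrative fragment only
defaults {
    user_friendly_names yes
    # Queue IO indefinitely while no path to the target is available,
    # rather than failing it up to the filesystem (which is what
    # triggers the read-only remount described in the quoted mail).
    no_path_retry       queue
}
```

With this in place, IO issued during the filer outage simply stalls in the multipath layer and resumes when a path comes back, so the guest filesystems never see an error.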

It works for me.

-geoff


-
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/


 -Original Message-
 From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com] On
 Behalf Of Raimund Sacherer
 Sent: Sunday, 28 March 2010 10:29
 To: open-iscsi
 Subject: iSCSI lvm and reboot
 
 
 I am new to iSCSI and filer systems, but I am evaluating whether they make
 sense for our clients.
 
 So I set up my test lab and created a KVM server with 3 instances:
 
 2 x Ubuntu (one for Zimbra LDAP, one for Zimbra Mailserver)
 1 x Ubuntu (with some OpenVZ virtual machines in it)
 
 These 3 KVM instances have RAW LVM disks which are on a Volume in the
 iSCSI Filer.
 
 Yesterday I tried rebooting the filer, without doing anything to the KVM
 machines, to simulate an outage/human error.
 
 The reboot went fine and the iSCSI targets are exposed again, but the KVM
 guests now have their filesystems mounted read-only.
 
 Is there any way to get the LVM volumes on the KVM server machine, or
 inside the virtualized guests, back to R/W mode?
 
 I could not figure it out; running vgchange -aly storage on the KVM server
 did not change anything.
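For reference, once the target is reachable again, the usual recovery path (a sketch only: the device and mount point below are placeholders, and an in-place remount is refused if the kernel has recorded filesystem errors) looks like this:

```shell
# Find out which filesystems were flipped to read-only:
grep ' ro,' /proc/mounts

# Try an in-place remount to read-write (placeholder device/mount point;
# this is refused if the filesystem recorded errors):
mount -o remount,rw /dev/mapper/storage-guestlv /var/lib/libvirt/images

# If the remount is refused, unmount, check, and mount again:
umount /var/lib/libvirt/images
fsck -y /dev/mapper/storage-guestlv
mount /dev/mapper/storage-guestlv /var/lib/libvirt/images
```

Note that ext3/ext4 flip to read-only because of the default errors=remount-ro mount option, so the clean fix is to prevent the IO error from ever reaching the filesystem (e.g. via multipath queueing) rather than to remount after the fact.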
 
 Is the only way to handle this situation a cold reset of the KVM guests?
 Since a reboot command was taking very long, I guess because of the
 read-only mounts, I had to reset them.
 
 A push in the right direction (e.g. toward documentation) or any other help
 is much appreciated.
 
 
 Thank you,
 best
 
 -
 RunSolutions
  Open Source IT Consulting
 -
 Email: r...@runsolutions.com
 
 Parc Bit - Centro Empresarial Son Espanyol
 Edificio Estel - Local 3D
 07121 -  Palma de Mallorca
 Baleares
 
 --
 You received this message because you are subscribed to the Google Groups
 open-iscsi group.
 To post to this group, send email to open-is...@googlegroups.com.
 To unsubscribe from this group, send email to
 open-iscsi+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/open-iscsi?hl=en.




RE: Write back cache

2009-12-15 Thread Geoff Galitz

 You might want to post to linux-kernel or linux-scsi list. The iscsi
 initiator does not control this, but I think there are some scsi or
 block layer tools that can do it if the target/disk supports it.

I'll do that. I did try to use sgmode to enable this, but that just did not
work.
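For what it's worth, the SCSI write-cache-enable (WCE) bit can sometimes be inspected and toggled from the initiator side with sdparm, assuming the target honours MODE SELECT for the caching mode page (many software targets ignore it); the device name below is a placeholder:

```shell
# Query the current write-cache-enable (WCE) bit on the iSCSI disk
# (/dev/sdb is a placeholder -- substitute your iSCSI-backed device):
sdparm --get=WCE /dev/sdb

# Attempt to enable write-back caching; this only takes effect if the
# target actually implements MODE SELECT for the caching mode page:
sdparm --set=WCE=1 /dev/sdb

# hdparm offers a similar knob for ATA-style devices:
hdparm -W1 /dev/sdb
```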

-geoff



-
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/







Write back cache

2009-12-14 Thread galitz

Hello,

I am connecting to iSCSI targets on a QNAP TS-439U.  They connect with
write-through caching enabled, but I cannot seem to tune the cache on
the QNAP side.  Is there a way to enable/configure write-back caching on
the Linux initiator side?

I am actually doing this via XenServer, but it seems to use standard
open-iscsi from what I can tell.

Thanks,
Geoff





RE: open-iscsi initiator machine hangs when iscsitarget is rebooted

2009-05-12 Thread Geoff Galitz



 [...]
  reboot. Is this a normal behaviour? Because of the server hang, i'm
  unable to check the logs and always have to go for a hard
  reset/shutdown. noob. help pls.
 


I've worked around similar issues by implementing dm-multipath for any
systems using iSCSI.  Any I/O that is destined for the iSCSI layer will get
queued up and the system should continue running.

I'm picking up this thread in the middle, so sorry if this has already been
brought up.

-geoff


-
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/







RE: iSCSI and filesystem error recovery

2008-06-23 Thread Geoff Galitz


Thanks for the info.

One more question: I just looked through the iscsi.conf file and the README,
but I do not see a setting for explicitly enabling dm-multipath.  How do I
enable it?
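(For context: dm-multipath is not configured in iscsi.conf at all; it is a separate device-mapper layer managed by multipath-tools. A rough sketch for a RHEL-era system follows; package and service names differ by distribution:)

```shell
# dm-multipath lives outside open-iscsi: install and start multipath-tools
# (this sketch assumes a RHEL/CentOS-style system of that era):
yum install device-mapper-multipath

# Create /etc/multipath.conf, then start the daemon and enable it at boot:
service multipathd start
chkconfig multipathd on

# Verify that the iSCSI LUNs were picked up as multipath maps:
multipath -ll
```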

-geoff


Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org


-Original Message-
From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On
Behalf Of Mike Christie
Sent: Monday, 23 June 2008 23:44
To: open-iscsi@googlegroups.com
Subject: Re: iSCSI and filesystem error recovery

galitz wrote:
 
 
 I am evaluating iSCSI in our production environment and have a
 question.
 
 When I induce a failure by powering down the iSCSI target while there
 is active traffic and then restore the iSCSI target 5+ minutes later,
 the filesystem remains in read-only mode.  Fair enough, I see by
 reading the docs that anytime a filesystem error is generated the
 filesystem is made read-only.
 
 I can clear this by ummounting and then remounting the filesystem.  Is
 there a more elegant or a recommended way of restoring the filesystem
 to a read-write state once the iSCSI target has returned to service?
 
 Ideally we'd like this to be a transparent process.  Perhaps dm-
 multipath is what I need?
 

I think dm-multipath is best. But if you set up dm-multipath to eventually
return IO errors to the layer above it, then you will have the same problem.

At the iscsi layer you can set node.session.timeo.replacement_timeout to
a higher value; that is how long we will hold onto IO before failing
it (the default is 2 minutes). There is a bug in this code where we can only
hold onto it for so long. I just did a patch against 269.2 which
allows you to set node.session.timeo.replacement_timeout to 0, which
will hold onto the IO until we reconnect.
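In iscsid.conf terms that is one line (a sketch; the 0-means-queue-forever behaviour assumes the patch mentioned above is applied):

```conf
# /etc/iscsi/iscsid.conf -- relevant setting only.
# How long the initiator queues IO for a broken session before failing
# it upward. The default is 120 seconds; with the patch above applied,
# 0 means hold IO until the session is re-established.
node.session.timeo.replacement_timeout = 0
```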

At the dm-multipath layer you can set no_path_retry to do the same
thing. If you set it to queue, it will hold onto IO forever, or until
the user intervenes.

But like I said, if you get FS errors then you have to unmount and
remount. If your question was about that, then there is nothing I can do.



