Re: part after colon @ initiatorname.iscsi
On 31 Mar 2009 at 5:46, HIMANSHU wrote:
> iqn.2005-03.org.open-iscsi:d612b128bb59
> This is my initiatorname.iscsi. What does the part after the colon
> actually signify, and where does it come from? If I install open-iscsi
> on a different machine, do I also get the same number (d612b128bb59)
> after the colon?

For SLES10, a unique number is created during RPM installation. See page 33 of RFC 3720 (3.2.6.3.1. Type "iqn." (iSCSI Qualified Name)):

The iSCSI qualified name string consists of:

- The string "iqn.", used to distinguish these names from "eui." formatted names.

- A date code, in yyyy-mm format. This date MUST be a date during which the naming authority owned the domain name used in this format, and SHOULD be the first month in which the domain name was owned by this naming authority at 00:01 GMT of the first day of the month. This date code uses the Gregorian calendar. All four digits in the year must be present. Both digits of the month must be present, with January == "01" and December == "12". The dash must be included.

- A dot "."

- The reversed domain name of the naming authority (person or organization) creating this iSCSI name.

- An optional, colon (:) prefixed, string within the character set and length boundaries that the owner of the domain name deems appropriate. This may contain product types, serial numbers, host identifiers, or software keys (e.g., it may include colons to separate organization boundaries). With the exception of the colon prefix, the owner of the domain name can assign everything after the reversed domain name as desired. It is the responsibility of the entity that is the naming authority to ensure that the iSCSI names it assigns are worldwide unique. For example, "Example Storage Arrays, Inc.", might own the domain name example.com.

> The initiatorname is supposed to be unique, right?

Yes!

Regards,
Ulrich

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups "open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---
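For anyone wanting to experiment with the naming rules quoted above, here is a small Python sketch. The date code and domain are just the example values from this thread, and `make_iqn` is a made-up helper name, not anything from the open-iscsi tools:

```python
# Sketch: compose an iSCSI Qualified Name (IQN) per RFC 3720 section 3.2.6.3.1.
# "iqn." + yyyy-mm date code + "." + reversed domain + optional ":" suffix.
import re
import uuid

def make_iqn(date_code: str, reversed_domain: str, suffix: str = "") -> str:
    """Build "iqn.<yyyy-mm>.<reversed-domain>[:<suffix>]"."""
    if not re.fullmatch(r"\d{4}-\d{2}", date_code):
        raise ValueError("date code must be in yyyy-mm format")
    name = f"iqn.{date_code}.{reversed_domain}"
    if suffix:
        name += f":{suffix}"  # the colon-prefixed, owner-chosen part
    return name

# A random suffix, similar in spirit to what "uuidgen -r" would give:
iqn = make_iqn("2005-03", "org.open-iscsi", str(uuid.uuid4()))
print(iqn)  # e.g. iqn.2005-03.org.open-iscsi:fe5a7f1a-8f4f-49b1-bec0-7ccfdf0cb850
```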
Re: part after colon @ initiatorname.iscsi
On 31 Mar 2009 at 11:19, Mike Christie wrote:
> HIMANSHU wrote:
> > iqn.2005-03.org.open-iscsi:d612b128bb59
> > This is my initiatorname.iscsi. What does the part after the colon
> > actually signify, and where does it come from?
>
> It is just a unique id. You can set it to whatever you want if you have
> a different naming scheme you prefer. The default value is just a random
> number, which I guess is not random enough :)

In case someone is thinking about how to make a unique random string: there's a utility named "uuidgen -r" (part of e2fsprogs) that creates strings that should be unique enough (like fe5a7f1a-8f4f-49b1-bec0-7ccfdf0cb850). Unfortunately a bare UUID is not a valid iSCSI naming scheme, so you'll have to append the UUID (RFC 4122) after the colon.

> The name is generated with the attached program. This gets run when you
> do a make install.

Hi, having had a small look at it, I wonder (please see RFC 4086 on "Randomness Requirements for Security"): when picking 16 random bytes, why feed those into MD5, add more data of little randomness, and finally select six bytes at random from the result? If the first 16 bytes are random, those operations don't add anything to the randomness; if the initial bytes are not very random, they add little either. Why not simply use the hex string of those 16 bytes (or fewer)?

Also, these days SHA-1 is much preferable to MD5, and the RFC recommends AES, but maybe that's overkill for the purpose. With six bytes making 48 bits (12 hex characters), one could also use alphanumeric characters to encode more bits: unless I'm wrong, a 12-character string like 7FSsmEnHiSCW encodes 71 bits, and even an 11-character string encodes 65 bits. With a 22-character string you can encode the full 128 bits (actually 131) of the initial random sequence.

> > If I install open-iscsi on a different machine, do I also get the same
> > number (d612b128bb59) after the colon?
>
> Is this something you can easily reproduce?
>
> > The initiatorname is supposed to be unique, right?
>
> Yeah, it is supposed to be unique.

Regards,
Ulrich
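Ulrich's bit arithmetic can be double-checked with a few lines of Python (this is just the entropy calculation, nothing from iscsi-iname itself):

```python
# How many random bits can an N-character string carry, given the alphabet
# size? Alphanumeric A-Z, a-z, 0-9 gives 62 symbols, i.e. ~5.95 bits/char.
import math

def bits_encoded(n_chars: int, alphabet_size: int = 62) -> float:
    """Information capacity in bits of an n-character string."""
    return n_chars * math.log2(alphabet_size)

print(round(bits_encoded(12)))      # 71  (vs. 48 bits for 12 hex characters)
print(round(bits_encoded(11)))      # 65
print(round(bits_encoded(22)))      # 131 (covers the full 128-bit input)
print(round(bits_encoded(12, 16)))  # 48  (hex alphabet, for comparison)
```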
Q: multipathd: queue_if_no_path
Hi,

this is a bit off-topic, but essential: after experiencing a network failure for about four minutes, I was watching the syslog (the system had had no problems so far):

Mar 30 15:24:33 testhost multipathd: sdc: tur checker reports path is up
Mar 30 15:24:33 testhost multipathd: 8:32: reinstated
Mar 30 15:24:33 testhost multipathd: L112_09: queue_if_no_path enabled
Mar 30 15:24:33 testhost multipathd: L112_09: Recovered to normal mode
Mar 30 15:24:33 testhost multipathd: L112_09: remaining active paths: 1
Mar 30 15:24:33 testhost multipathd: L112_09: switch to path group #2
Mar 30 15:24:33 testhost multipathd: L112_09: switch to path group #2

I'm surprised: I thought queue_if_no_path would be enabled when the device fails, not when the device recovers!

The multipath configuration for the device looks like this:

device {
        vendor                  "HP"
        product                 "HSV2.*"
        path_grouping_policy    group_by_prio
        path_checker            tur
        prio_callout            "mpath_prio_alua /dev/%n"
        failback                immediate
        #polling_interval       30
        no_path_retry           1000
        features                "1 queue_if_no_path"
}

Does anybody have similar experience?

Regards,
Ulrich
multipath iSCSI installs
Hi all,

I am trying to install RHEL5.3 on an iSCSI disk with two paths. I booted with the mpath option, but the installer picked up only a single path. Is this the expected behavior when I use iBFT?

The install went fine on a single path. I was trying to convert the single path to multi-path by running mkinitrd. RHEL was unable to boot (panics) with the new initrd image. The only difference between the old initrd image and the new one is that the old image was using the iBFT method and the new image was trying to use the values from the existing session(s) at initrd creation time. For some reason the latter method doesn't work. Is this a known bug?

I also tried installing SLES10 and SLES11. I believe they recommend installing on a single path and then converting to multipath. I have found that SLES11's initrd image can only find one path, even after including multipath support in the initrd image. It creates a dm-multipath device with a single path and only later converts it to a dm-multipath device with two paths, when it runs the scripts in /etc/init.d. SLES10's behavior might be the same, but I didn't analyse it. Does anyone know if SLES11's initrd image can find more than one iSCSI path?

Thanks, Malahal.
Re: multipath iSCSI installs
On Wed, Apr 1, 2009 at 6:42 PM, Pasi Kärkkäinen <pa...@iki.fi> wrote:
> On Wed, Apr 01, 2009 at 05:13:10AM -0700, mala...@us.ibm.com wrote:
> > Hi all, I am trying to install RHEL5.3 on an iSCSI disk with two
> > paths. I booted with the mpath option, but the installer picked up
> > only a single path. Is this the expected behavior when I use iBFT?
>
> I've installed RHEL 5.3 (and CentOS 5.3) to multipath-root using the
> mpath installer option. It worked fine. I didn't use iBFT though..
>
> > The install went fine on a single path. I was trying to convert the
> > single path to multi-path by running mkinitrd. RHEL was unable to boot
> > (panics) with the new initrd image. [...] Is this a known bug?
>
> Yeah, conversion from single path to multipath root can be tricky.. I
> think it might require manual customization/editing of the initrd image
> (scripts).

I know it is tricky, and I hand-edited the mkinitrd script (not the image itself). But it doesn't matter whether I edit the script or not: I installed RHEL5.3 with iBFT and then ran "mkinitrd /boot/initrd-version.img version" without changing anything anywhere. I couldn't boot! It has to be a bug. Did anyone run mkinitrd successfully after an iBFT iSCSI install?

Thanks, Malahal.
RE: multipath iSCSI installs
Hi Malahal.

There is a BZ for SLES: https://bugzilla.novell.com/show_bug.cgi?id=436463

I have not been able to get RHEL 5.3 to install either.

Regards,
Wayne.

-----Original Message-----
From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com] On Behalf Of mala...@us.ibm.com
Sent: Wednesday, April 01, 2009 8:13 AM
To: open-iscsi@googlegroups.com
Subject: multipath iSCSI installs
Re: Q: multipathd: queue_if_no_path
On Wed, Apr 01, 2009 at 01:05:30PM +0200, Ulrich Windl wrote:
> Mar 30 15:24:33 testhost multipathd: sdc: tur checker reports path is up
> Mar 30 15:24:33 testhost multipathd: 8:32: reinstated
> Mar 30 15:24:33 testhost multipathd: L112_09: queue_if_no_path enabled
> Mar 30 15:24:33 testhost multipathd: L112_09: Recovered to normal mode
>
> I'm surprised: I thought queue_if_no_path would be enabled when the
> device fails, not when the device recovers!

It should be enabled all the time - I think you are seeing the code path that figures out that queue_if_no_path is set, and it prints an informational message about it (the queueing of I/O only happens when access to the disk is offline).
Re: part after colon @ initiatorname.iscsi
Ulrich Windl wrote:
> Hi, having had a small look at it [the iscsi-iname program], I wonder
> (please see RFC 4086 on "Randomness Requirements for Security"): when
> picking 16 random bytes, why feed those into MD5, add more data of
> little randomness, and finally select six bytes at random from the
> result? If the first 16 bytes are random, those operations don't add
> anything to the randomness; if the initial bytes are not very random,
> they add little either. Why not simply use the hex string of those 16
> bytes (or fewer)?
>
> Also, these days SHA-1 is much preferable to MD5, and the RFC recommends
> AES, but maybe that's overkill for the purpose. With six bytes making 48
> bits (12 hex characters), one could also use alphanumeric characters to
> encode more bits.

I will look into this. We just took the iscsi-iname program from the old linux-iscsi code and have not worried about or even looked at it much until now.
Re: Q: multipathd: queue_if_no_path
Ulrich Windl wrote:
> I'm surprised: I thought queue_if_no_path would be enabled when the
> device fails, not when the device recovers!

I think it might be a bad log message.

> The multipath configuration for the device looks like this:
>
> device {
>         vendor                  "HP"
>         product                 "HSV2.*"
>         [...]
>         no_path_retry           1000
>         features                "1 queue_if_no_path"
> }

I think you might have conflicting values here. The no_path_retry value of 1000 puts an upper limit on how long it queues I/O; it would eventually fail the I/O after the 1000 retries are used up. But I think queue_if_no_path means to queue until the problem is resolved or dm is stopped. You might want to ask dm-devel or see the docs; I am not 100% sure. I normally use one or the other.
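To make the one-or-the-other point concrete, a hypothetical multipath.conf fragment (values are illustrative, taken from this thread; check your distro's multipath.conf documentation before relying on either setting):

```
device {
        vendor                  "HP"
        product                 "HSV2.*"
        path_grouping_policy    group_by_prio
        path_checker            tur
        failback                immediate
        # EITHER: bounded queueing - fail I/O after 1000 checker intervals
        no_path_retry           1000
        # OR: queue forever until a path returns (don't combine with the above)
        #features               "1 queue_if_no_path"
}
```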
Re: multipath iSCSI installs
mala...@us.ibm.com wrote:
> Hi all, I am trying to install RHEL5.3 on an iSCSI disk with two paths.
> I booted with the mpath option, but the installer picked up only a
> single path. Is this the expected behavior when I use iBFT?

For this mail, "ibft boot" means the boot process where the ibft implementation hooks into the box, logs into the target, and brings over the kernel and initrd.

It could be. If the ibft implementation uses only one session during the ibft boot and then exports only that one session, then yeah, it is expected, because the iscsi tools only know what ibft tells us.

If the ibft implementation uses one session but exports all the targets in the ibft info, then in RHEL 5.3 the installer only picks up the session used for the ibft boot, but the initrd root-boot code used after the install should log into all the targets in ibft, whether they were used for the ibft boot or not. The different behavior is a result of the installer goofing up and using the wrong API.

> The install went fine on a single path. I was trying to convert the
> single path to multi-path by running mkinitrd. RHEL was unable to boot
> (panics) with the new initrd image. The only difference between the old
> initrd image and the new one is that the old image was using the iBFT
> method and the new image was trying to use the values from the existing
> session(s) at initrd creation time. For some reason the latter method
> doesn't work. Is this a known bug?

This should work, but there are some other issues. For the boot that panics, what was the problem? Did the session get logged in, or was it a panic in the iscsi code? Were you trying to force mkinitrd to use the session values at initrd building time, or did you want it to use the ibft ones at runtime?

There was a bug where mkinitrd would use the current sessions' values, stick them in the initrd, and use them. In 5.3, if you were using ibft, then the initrd code should use the ibft values. Check out /sbin/mkinitrd:emit_iscsi_device. It should call iscsi_is_ibft and figure out that ibft is used.

It is still sort of broken: if you changed the ibft value and then ran mkinitrd, we would not figure out that ibft is being used, because it checks this by matching the ibft values against the current sessions.
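A sketch of a less fragile check than matching session values: just ask whether the firmware exported an iBFT table at all. On Linux the iscsi_ibft driver exposes it under /sys/firmware/ibft; the `sysfs_root` parameter is only there so the logic can be exercised without real firmware, and `ibft_present` is a made-up helper, not part of mkinitrd:

```python
# Sketch: detect an ibft boot by looking for the sysfs directory the
# iscsi_ibft kernel module creates, instead of comparing ibft values
# against the current sessions (the fragile matching described above).
import os

def ibft_present(sysfs_root: str = "/sys/firmware") -> bool:
    """True if the kernel exposes an iBFT table under <sysfs_root>/ibft."""
    return os.path.isdir(os.path.join(sysfs_root, "ibft"))

if ibft_present():
    print("ibft boot detected: initrd should use the firmware values")
```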
Re: [Fwd: [PATCH] [Target_Core_Mod/Persistent_Reservations]: Add support for PREEMPT_AND_ABORT SA]
On Mon, 2009-03-30 at 19:34 -0500, Mike Christie wrote:
> Nicholas A. Bellinger wrote:
> > Greetings Mike, Hannes and co:
> > Here is the OOPs that I am seeing with upstream Open-iSCSI on
> > kernel.org v2.6.27.10 with the TASK_ABORTED status getting returned to
> > outstanding
>
> It looks like you are returning TASK_ABORTED when a R2T has been sent to
> the initiator and it was responding, or do we have InitialR2T=No and are
> doing that first part of the write.

*nod*

> If so it might be the following: Novell's target sent a check condition
> when a R2T was being processed, and at that time, our reading of 10.4.2.
> Status:
>
>     "If a SCSI device error is detected while data from the initiator is
>     still expected (the command PDU did not contain all the data and the
>     target has not received a Data PDU with the final bit Set), the
>     target MUST wait until it receives a Data PDU with the F bit set in
>     the last expected sequence before sending the Response PDU."
>
> We took this to mean that if we were sending data in response to a r2t,
> or with the command pdu as part of InitialR2T=No handling, then the
> target would not send a scsi command response pdu on us. If it did, then
> that r2t handling is basically dangling around, and when the task is
> reallocated we see it did not get handled in that check.

Good eye, I completely agree with your interpretation of 10.4.2.

> If we read the spec right then the fix for us is simple. Fix your target :)

Ok, I will fix LIO-Target to follow 10.4.2..

> If I goofed then the fix for us might be simple. See the attached patch.
> I would have to look at the other drivers to make sure it works for them.

Of course, not running into this BUG on the initiator side (even when targets are being naughty) would be the ideal solution. :-)

I will give our test a run and see what happens.

Many thanks for your most valuable of time,

--nab

plain text document attachment (force-cleanup.patch):

--- linux-2.6.27.20/drivers/scsi/libiscsi.c.orig	2009-03-30 19:32:26.0 -0500
+++ linux-2.6.27.20/drivers/scsi/libiscsi.c	2009-03-30 19:33:25.0 -0500
@@ -332,6 +332,9 @@ static void iscsi_complete_command(struc
 	struct iscsi_session *session = conn->session;
 	struct scsi_cmnd *sc = task->sc;
 
+	if (task->state != ISCSI_TASK_PENDING)
+		conn->session->tt->cleanup_task(conn, task);
+
 	list_del_init(&task->running);
 	task->state = ISCSI_TASK_COMPLETED;
 	task->sc = NULL;
@@ -402,8 +405,6 @@ static void fail_command(struct iscsi_co
 	 * the cmd in the sequencing */
 		conn->session->queued_cmdsn--;
-	else
-		conn->session->tt->cleanup_task(conn, task);
 	/*
 	 * Check if cleanup_task dropped the lock and the command completed,
 	 */
Re: [Fwd: [PATCH] [Target_Core_Mod/Persistent_Reservations]: Add support for PREEMPT_AND_ABORT SA]
Nicholas A. Bellinger wrote:
> Of course, not running into this BUG on the initiator side (even when
> targets are being naughty) would be the ideal solution. :-)

Yeah, I sort of agree. I am just worried about adding regressions. I have some patches that fix up that code's locking, and while doing the patches I was thinking about Hannes's target, and they do handle this now. So I am not sure I want to fix this now, test it out, then do the locking patches and test them out again - just not enough time.

> I will give our test a run and see what happens.
>
> Many thanks for your most valuable of time,

No problem. Thanks for the bug reports.
Re: [Fwd: [PATCH] [Target_Core_Mod/Persistent_Reservations]: Add support for PREEMPT_AND_ABORT SA]
Nicholas A. Bellinger wrote:
> Hi Mike,
>
> Hope you are well.. :-)
>
> Any chance that I can bribe you at LSF next week to take a look at
> this..?

Did you want me to review the target code for upstream inclusion, or review the problem? I am not going to LSF. I couldn't think of anything interesting to talk about, and I am still not done with what I talked about last year :) so I did not submit anything.

> Many thanks for your most valuable of time,
>
> --nab
>
> On Fri, 2009-03-27 at 02:02 -0700, Nicholas A. Bellinger wrote:
> > Greetings Mike, Hannes and co:
> > Here is the OOPs that I am seeing with upstream Open-iSCSI on
> > kernel.org v2.6.27.10 with the TASK_ABORTED status getting returned to
> > outstanding struct scsi_cmnd from a lio-core.2.6.git provided fabric.
> > Here is the code from drivers/scsi/iscsi_tcp.c:iscsi_tcp_task_init():
> >
> >     BUG_ON(__kfifo_len(tcp_task->r2tqueue));
> >
> > Is this something that has been fixed in recent Open-iSCSI code..?
> > Any ideas..?
> > Many thanks for your most valuable of time,
> > --nab

email message attachment, ----- Forwarded Message -----

From: Nicholas A. Bellinger <n...@linux-iscsi.org>
To: linux-scsi <linux-s...@vger.kernel.org>, LKML <linux-ker...@vger.kernel.org>
Cc: James Bottomley <james.bottom...@hansenpartnership.com>, Mike Christie <micha...@cs.wisc.edu>, FUJITA Tomonori <fujita.tomon...@lab.ntt.co.jp>, Hannes Reinecke <h...@suse.de>, Martin K. Petersen <martin.peter...@oracle.com>, Alan Stern <st...@rowland.harvard.edu>, Douglas Gilbert <dgilb...@interlog.com>, Alasdair G Kergon <a...@redhat.com>, Philipp Reisner <philipp.reis...@linbit.com>, Ming Zhang <blackmagic02...@gmail.com>
Subject: [PATCH] [Target_Core_Mod/Persistent_Reservations]: Add support for PREEMPT_AND_ABORT SA
Date: Fri, 27 Mar 2009 01:46:36 -0700

Greetings all,

This patch adds support for the PROUT PREEMPT_AND_ABORT service action to core_scsi3_emulate_pro_preempt() and core_tmr_lun_reset() using the existing TAS (TASK_ABORTED status) emulation. The logic assigns the initiator-port-provided pre-registered reservation key to allocated SCSI Task and Task Management through logical unit ACLs in target_core_mod code.

This patch uses a struct list_head preempt_and_abort_list inside core_scsi3_emulate_pro_preempt() to track the PR registration/reservation data structure t10_pr_registration_t once it has been removed from the se_device_t list, and before it gets released back into struct kmem_cache t10_pr_reg_cache in core_scsi3_release_preempt_and_abort().

These patches are made against lio-core-2.6.git/master and tested on v2.6.29 x86 32-bit HVM using sg_persist and sg_request from sg3_utils. The lio-core-2.6.git tree can be found at:

http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=summary

So far, I have been primarily testing the following scenario, which is what linux-cluster userspace in RHEL/CentOS (and everyone else) does with fence_scsi today for their WRITE Exclusive, Registrants Only persistent reservation setup.
Here is the test setup I am using:

# The same /sys/kernel/config/target/core/$HBA/$DEV Linux LVM object mapped across
# two different LIO-Target iSCSI target ports

initiator# lsscsi --transport
[3:0:0:0]  disk  iqn.2003-01.org.linux-iscsi.target.i686:sn.cff3eedbd2fd,t,0x1  /dev/sde
[4:0:0:0]  disk  iqn.2003-01.org.linux-iscsi.target.i686:sn.e475ed6fcdd0,t,0x1  /dev/sdf

*) Initiator side using Open/iSCSI:

[ 185.682972] scsi3 : iSCSI Initiator over TCP/IP
[ 185.998817] scsi 3:0:0:0: Direct-Access LIO-ORG IBLOCK 3.0 PQ: 0 ANSI: 5
[ 186.009172] sd 3:0:0:0: [sde] 128000 4096-byte hardware sectors (524 MB)
[ 186.013274] sd 3:0:0:0: [sde] Write Protect is off
[ 186.013286] sd 3:0:0:0: [sde] Mode Sense: 2f 00 00 00
[ 186.034372] sd 3:0:0:0: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 186.045854] sd 3:0:0:0: [sde] 128000 4096-byte hardware sectors (524 MB)
[ 186.054955] sd 3:0:0:0: [sde] Write Protect is off
[ 186.054967] sd 3:0:0:0: [sde] Mode Sense: 2f 00 00 00
[ 186.072648] sd 3:0:0:0: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 186.073401] sde: unknown partition table
[ 186.084108] sd 3:0:0:0: [sde] Attached SCSI disk
[ 186.085052] sd 3:0:0:0: Attached scsi generic sg4 type 0
[ 186.316470] alua: device handler registered
[ 186.321957] sd 3:0:0:0: alua: supports implicit TPGS
[ 186.326881] sd 3:0:0:0: alua: port group 00 rel port 02
[ 186.331445] sd 3:0:0:0: alua: port group 00 state A supports tousNA
[ 186.372696] sd 3:0:0:0: alua: port group 00 state A supports tousNA
[ 188.204172] scsi4 : iSCSI Initiator over TCP/IP
[ 188.481410] scsi 4:0:0:0: Direct-Access LIO-ORG IBLOCK 3.0 PQ: 0 ANSI: 5
[ 188.489828] sd 4:0:0:0: [sdf] 128000 4096-byte hardware sectors (524 MB)
[