Re: upgrade from 2.0.865 to 2.0.869

2008-05-28 Thread Padmanabhan

Thanks Mike 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Connection Errors

2008-05-28 Thread swejis


> > Can you remind me what target you are using and how many sessions you
> > should have?
The m500i has two portals, so two sessions are started, one for each
portal.

tcp: [1] 192.168.43.6:3260,2 iqn.1994-12.com.promise.target.a9.39.4.55.1.0.0.20
tcp: [2] 192.168.43.5:3260,1 iqn.1994-12.com.promise.target.a9.39.4.55.1.0.0.20

> > It looks like only one session has problems.

True indeed, it seems only one of the two connections ever reports an
error. I actually tried shifting to the second portal when doing the
discovery, to see if that would make any difference.

> > The other/s
> > look like they are just fine. Are the errors now (before I understood
> > that it happened when no IO was running) only occuring when you put IO
> > on the session/disk?

The error only occurs when there is I/O on the connection. I actually
thought we had fixed the problem when I had not seen any errors for
days, but during that time the machine just idled; as soon as I put
some I/O on it, the error came back immediately.

Brgds Jonas



Re: Connection Errors

2008-05-28 Thread Pasi Kärkkäinen

On Wed, May 28, 2008 at 01:44:02PM -0500, Mike Christie wrote:
> 
> swejis wrote:
> > OK, new logfile found here: http://www.wehay.com/messages.new.gz
> > 
> 
> Can you remind me what target you are using and how many sessions you 
> should have? It looks like only one session has problems. The other/s 
> look like they are just fine. Are the errors now (before I understood 
> that it happened when no IO was running) only occuring when you put IO 
> on the session/disk?
> 

From the first mail of this thread:

"The target is the infamous Promise m500i"

-- Pasi




Re: Connection Errors

2008-05-28 Thread Mike Christie

swejis wrote:
> OK, new logfile found here: http://www.wehay.com/messages.new.gz
> 

Can you remind me what target you are using and how many sessions you 
should have? It looks like only one session has problems. The others 
look like they are just fine. Are the errors now (before I understood 
that it happened when no IO was running) only occurring when you put IO 
on the session/disk?




Re: open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Pasi Kärkkäinen

On Wed, May 28, 2008 at 01:17:17PM -0500, Mike Christie wrote:
> 
> Pasi Kärkkäinen wrote:
> > Hello list!
> > 
> > Unfortunately I had to upgrade a server running CentOS 4.6 (sfnet 
> > initiator) 
> > to CentOS 5.1 (open-iscsi initiator) and now I have some problems with it
> 
> You are using the open-iscsi code that comes with Centos right?
> 

Yep, the default open-iscsi that comes with CentOS 5.1 (and the latest
updates installed).

> 
> You can turn nops off
> 
> open-iscsi
>  > node.conn[0].timeo.noop_out_interval = 0
>  > node.conn[0].timeo.noop_out_timeout = 0
> 

Does turning nops off have any side effects? 

> But I think the problem with promise was that it needed new firmware or 
> something right? If it did not work with sfnet and open-iscsi then I 
> think that was the problem. If it just did not work on open-iscsi then 
> it may have been something else. Did you search the list by any chance?
>

Yep, I was searching.. I think a similar problem with an Infortrend target
was fixed with a firmware upgrade. 

I'm running the latest firmware on that Promise.. so that doesn't help in
this case. 

And yep, I had/have issues with both sfnet (CentOS 4) and open-iscsi (CentOS 5)
when I use this Promise target.. 

Here's some other recent thread about problems with the same target:
http://www.mail-archive.com/open-iscsi@googlegroups.com/msg00692.html

-- Pasi




Re: FreeBSD initiator, Solaris target

2008-05-28 Thread Mike Christie

Matt Herzog wrote:
> Has anyone set up a FreeBSD 7 initiator to mount a Solaris 10 target?
> I spent the better part of a day trying to "make it go" and failed. 
> 
> I just want to know if anyone on this list has succeeded.
> 

I think you are on the wrong list. open-iscsi has not worked on BSD for 
a long time, and I think BSD has its own initiator. Is FreeBSD's 
initiator named open-iscsi too?




Re: open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Mike Christie

Pasi Kärkkäinen wrote:
> Hello list!
> 
> Unfortunately I had to upgrade a server running CentOS 4.6 (sfnet initiator) 
> to CentOS 5.1 (open-iscsi initiator) and now I have some problems with it

You are using the open-iscsi code that comes with Centos right?

> (then again I was expecting it.. I hate this Promise array).
> 
> /var/log/messages:
> 
> May 28 15:14:16 server1 multipathd: path checkers start up
> May 28 15:15:39 server1 iscsid: Nop-out timedout after 10 seconds on 
> connection 14:0 state (3). Dropping session.
> May 28 15:15:42 server1 iscsid: connection14:0 is operational after recovery 
> (2 attempts)
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057296
> May 28 15:19:21 server1 kernel: device-mapper: multipath: Failing path 8:48.
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057552
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057560
> May 28 15:19:21 server1 multipathd: sdd: readsector0 checker reports path is 
> down
> May 28 15:19:21 server1 multipathd: checker failed path 8:48 in map 
> promise_test1
> May 28 15:19:21 server1 multipathd: promise_test1: remaining active paths: 1
> May 28 15:19:21 server1 iscsid: Nop-out timedout after 10 seconds on 
> connection 14:0 state (3). Dropping session.
> May 28 15:19:25 server1 iscsid: connection14:0 is operational after recovery 
> (2 attempts)
> May 28 15:19:26 server1 multipathd: sdd: readsector0 checker reports path is 
> up
> May 28 15:19:26 server1 multipathd: 8:48: reinstated
> May 28 15:19:26 server1 multipathd: promise_test1: remaining active paths: 2
> May 28 15:19:26 server1 multipathd: promise_test1: switch to path group #1
> 
> $ iscsiadm -m node --targetname  | grep timeo 
> node.session.timeo.replacement_timeout = 15
> node.session.err_timeo.abort_timeout = 10
> node.session.err_timeo.reset_timeout = 30
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.active_timeout = 5
> node.conn[0].timeo.idle_timeout = 60
> node.conn[0].timeo.ping_timeout = 5
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 10
> node.session.timeo.replacement_timeout = 15
> node.session.err_timeo.abort_timeout = 10
> node.session.err_timeo.reset_timeout = 30
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.active_timeout = 5
> node.conn[0].timeo.idle_timeout = 60
> node.conn[0].timeo.ping_timeout = 5
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 10
> 
> Basicly those "Nop-out timedout" errors keep showing up all the time when
> there is IO going on.. and if I have "dd if=/dev/mpath of=/dev/null" running 
> IO rates seem to go down every 20 seconds or so and stay stalled (at 0) for 
> 5 seconds or so.. weird.
> 
> Initiator is the default RHEL/CentOS 5.1 version.
> 
> Most probably the problem is in the Promise target because I had a lot of 
> issues
> with it earlier too.. It took some time before I got it to work "ok" with
> CentOS 4.6. 
> 
> With CentOS 4.6 (sfnet initiator) I was using this in iscsid.conf:
> 
> ConnFailTimeout=5
> PingTimeout=10
> 
> and also:
> echo 60 > /sys/block/sdc/device/timeout
> echo 60 > /sys/block/sdd/device/timeout
> 
> But I remember seeing errors / failing paths in the logs then too.. 
> 
> Anyway, is there anything I can do about these errors, or should I just let
> multipath do its job :)
> 

You can turn nops off:

open-iscsi:
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

sfnet:
PingTimeout=0
ActiveTimeout=0
IdleTimeout=0
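As a sketch of how those open-iscsi values could be flipped in bulk, the loop below edits an iscsid.conf-style file with sed. It runs against a temporary copy here, since the real path varies by distro (often under /etc/iscsi/), and existing sessions would still need to log out and back in before the change takes effect:

```shell
# Sketch: set both noop_out settings to 0 in an iscsid.conf-style file.
# Demonstrated on a temporary copy; point it at your real config yourself.
conf=$(mktemp)
cat > "$conf" <<'EOF'
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
EOF

# Rewrite only the two noop_out lines, preserving everything else.
sed -i \
  -e 's/^\(node\.conn\[0\]\.timeo\.noop_out_interval\) = .*/\1 = 0/' \
  -e 's/^\(node\.conn\[0\]\.timeo\.noop_out_timeout\) = .*/\1 = 0/' \
  "$conf"
cat "$conf"
```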

But I think the problem with promise was that it needed new firmware or 
something right? If it did not work with sfnet and open-iscsi then I 
think that was the problem. If it just did not work on open-iscsi then 
it may have been something else. Did you search the list by any chance?




Re: A few newbie questions

2008-05-28 Thread Mike Christie

jergendutch wrote:
> Hello,
> 
> I have iscsi setup on some boxes. They can all mount a central target
> in succession but not at the same time. This is fine, I have installed
> GFS to make this work.
> 
> I read somewhere (and this is where I need the help) that open-iscsi
> limits the number of connections per lun to one, and I need to
> increase this to the number of hosts. Can anyone tell me where this
> is, I cannot find it any more.


If you have a target with LUN1 and one portal, and hostA and hostB are 
both running open-iscsi, then hostA will create one connection to the 
target, and through that connection it will access LUN1 and any other 
logical units on the target. If the target only has one portal, then 
hostA will only create one connection to it (open-iscsi basically 
creates one connection per portal on the target). For hostB, the 
initiator running there will create another connection to the target.


> 
> My second question is about startup.
> At the moment I start iscsi, then I run the discovery command, then
> restart iscsi to see the disk.
> 

After discovery you can just run

iscsiadm -m node -T target -p ip:port -l
to log in to a specific portal that was found. There are lots of 
variations on this. See the README.
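As a sketch, the per-portal login step above can be parameterized. The helper below only prints the iscsiadm command it would run (the target iqn and portal are placeholders), so it is safe to try without an iSCSI target present:

```shell
# Hypothetical wrapper around the login step: given a target iqn and a
# portal, print the iscsiadm invocation that would log in to it.
# It executes nothing itself.
login_cmd() {
  target=$1
  portal=$2
  printf 'iscsiadm -m node -T %s -p %s -l\n' "$target" "$portal"
}

login_cmd iqn.1994-12.com.promise.target.example 192.168.43.5:3260
```

On a real system you would run the printed command directly, or use `iscsiadm -m node -L all` to log in to every discovered portal at once.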




Re: upgrade from 2.0.865 to 2.0.869

2008-05-28 Thread Mike Christie

Padmanabhan wrote:
> Hello,
> I have already installed 2.0.865 and want to install the latest
> release 2.0.869.
> 
> Which files should I remove from the previous installation ?
> 

You should not need to remove any files. Everything should be backward 
compatible. You might want to use the new iscsid.conf file in 2.0.869 
and replace the one that was used for 2.0.865, because there are some 
new features.




Re: Help! On Ubuntu or RHEL 5.1 client I always get: iscsiadm: discovery session to [IP] received unexpected opcode 0x20

2008-05-28 Thread Mike Christie

[EMAIL PROTECTED] wrote:
> As the title says, I get this error when I try to find iscsi targets.
> I cannot for the life of me get open-iscsi to work and always get this
> error.  I am trying to connect to a working sanfly iscsi target from
> either RHEL 5.1 client or the latest ubuntu.
> 

What target are you using? Is it a Cisco or LSI box? Some targets send 
the initiator nops during discovery, which is not in the iSCSI spec, and 
as a result open-iscsi does not support this.




Re: open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Pasi Kärkkäinen

On Wed, May 28, 2008 at 07:10:08PM +0300, Pasi Kärkkäinen wrote:
> 
> > > 
> > > Basicly those "Nop-out timedout" errors keep showing up all the time when
> > > there is IO going on.. and if I have "dd if=/dev/mpath of=/dev/null" 
> > > running 
> > 
> > You can expand the timeout to a higher value? 30 seconds ? Also you might
> > want to limit the node.session.queue_depth to a lower value as well.
> > 
> 
> I tried this.. doesn't seem to help much. I still get the same errors. 
> 
> I'll try limiting queue depth too.. 
> 

The default queue depth is 32. 

I ran:
echo 8 > /sys/block/sdc/device/queue_depth
echo 8 > /sys/block/sdd/device/queue_depth

and re-ran the "dd test". Same problem. Log entries:

iscsid: Nop-out timedout after 10 seconds on connection 14:0 state (3). Dropping session.
iscsid: connection14:0 is operational after recovery (2 attempts)

Then again, it seems I get these errors less often now (with the smaller
queue depth), so it does seem to help.
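The per-device echo commands earlier can be generalized to every sd disk at once. The sketch below demonstrates the loop against a mock sysfs tree so it is safe to run anywhere; on a real system you would point it at /sys as root, and the device names here are just examples:

```shell
# Build a mock sysfs tree with two disks at the default depth of 32.
SYSROOT=$(mktemp -d)
mkdir -p "$SYSROOT/block/sdc/device" "$SYSROOT/block/sdd/device"
echo 32 > "$SYSROOT/block/sdc/device/queue_depth"
echo 32 > "$SYSROOT/block/sdd/device/queue_depth"

# Lower the queue depth on every sd* disk in one pass.
for f in "$SYSROOT"/block/sd*/device/queue_depth; do
  echo 8 > "$f"
done

cat "$SYSROOT/block/sdc/device/queue_depth"
```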

I'm not totally sure about this, but it could be that sometimes when I can see 
the "io stall" (with iostat) I also get that Nop-out timedout.. and sometimes 
not. 

With a smaller queue depth it just stalls, but with a bigger queue depth it 
also drops the session (more often).


Results from the "dd test" with a noop_out_timeout of 30 seconds and a queue 
depth of 32:

iscsid: Nop-out timedout after 30 seconds on connection 18:0 state (3). Dropping session.
iscsid: connection18:0 is operational after recovery (2 attempts)
kernel: sd 20:0:0:0: SCSI error: return code = 0x0002
kernel: end_request: I/O error, dev sdd, sector 13510024
kernel: device-mapper: multipath: Failing path 8:48.
multipathd: 8:48: mark as failed
multipathd: promise_test1: remaining active paths: 1
iscsid: Nop-out timedout after 30 seconds on connection 18:0 state (3). Dropping session.
iscsid: connection18:0 is operational after recovery (2 attempts)
multipathd: sdd: readsector0 checker reports path is up
multipathd: 8:48: reinstated
multipathd: promise_test1: remaining active paths: 2
multipathd: promise_test1: switch to path group #1

So hmm.. it looks like lowering the queue depth helps with the session drops, 
while increasing the noop_out_timeout doesn't make much difference.

Or actually, it could be that increasing the noop_out_timeout makes the 
stalls happen less often.. hmm :)

Thanks for the help/comments!

-- Pasi




Re: Help! On Ubuntu or RHEL 5.1 client I always get: iscsiadm: discovery session to [IP] received unexpected opcode 0x20

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 09:56:38AM -0700, [EMAIL PROTECTED] wrote:
> 
> Has anyone experienced this error?  I have no firewall, no SELinux
> running, etc.  The iSCSI target should be fine as windows clients were
> able to utilize the sanfly targets before (or ones like it).

Well, the error is just bizarre. It looks as if the target is misbehaving
and the initiator can't handle that.

Can you capture the TCP data and provide it on the mailing list? That can
help a bit in narrowing down the problem. Search the list for 'tcpdump' and
for e-mails from Mike Christie on how to sniff your TCP session data.

> 
> Is there a howto or directions I can follow?  Can I not do a discovery
> and just connect to it directly? What commands should I be using?  I

Those are the proper steps (well, you can substitute the /etc/init.d/open-iscsi
restart with "iscsiadm -m node -L all"). What is your iSCSI target?




Re: Design Questions

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 01:15:36PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> Arturo 'Buanzo' Busleiman wrote:
> > So, the obvious question here: I want to store the data in the SAN. 
> > Should I get my sessions running in the host, or inside each virtual 
> > machine?
> If this is not the correct group to ask this question, I'd gladly accept 
> suggestions for other groups! :)

I am not sure how you are partitioning your space. Does each guest
have an iSCSI target (or LUN) assigned to it? Or is it one big
drive that they run from? Also are you envisioning using this
with LiveMigration (or whatever it is called with your virtualization
system)?




RE: Help! On Ubuntu or RHEL 5.1 client I always get: iscsiadm: discovery session to [IP] received unexpected opcode 0x20 -

2008-05-28 Thread Steve Marfisi

Regarding MujZeptu's post on the unexpected opcode, we have identified an
issue with sanFly when using open-iscsi in discovery sessions. This was not
seen in testing with other iSCSI initiators. A patch for sanFly will be
released once tested in-house.

Steve Marfisi
emBoot Inc.





Re: Help! On Ubuntu or RHEL 5.1 client I always get: iscsiadm: discovery session to [IP] received unexpected opcode 0x20

2008-05-28 Thread MujZeptu

Has anyone experienced this error?  I have no firewall, no SELinux
running, etc.  The iSCSI target should be fine as windows clients were
able to utilize the sanfly targets before (or ones like it).

Is there a howto or directions I can follow?  Can I not do a discovery
and just connect to it directly? What commands should I be using?  I
followed the directions at: 
http://www.cyberciti.biz/faq/howto-setup-debian-ubuntu-linux-iscsi-initiator/

and always run into this error.  Any help you can provide or potential
howtos you can point me to is greatly appreciated!



On May 27, 7:02 pm, [EMAIL PROTECTED] wrote:
> As the title says, I get this error when I try to find iscsi targets.
> I cannot for the life of me get open-iscsi to work and always get this
> error.  I am trying to connect to a working sanfly iscsi target from
> either RHEL 5.1 client or the latest ubuntu.



Re: Design Questions

2008-05-28 Thread Arturo 'Buanzo' Busleiman

Arturo 'Buanzo' Busleiman wrote:
> So, the obvious question here: I want to store the data in the SAN. 
> Should I get my sessions running in the host, or inside each virtual 
> machine?
If this is not the correct group to ask this question, I'd gladly accept 
suggestions for other groups! :)





Re: open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Pasi Kärkkäinen

On Wed, May 28, 2008 at 10:16:40AM -0400, Konrad Rzeszutek wrote:
> 
> On Wed, May 28, 2008 at 03:34:37PM +0300, Pasi Kärkkäinen wrote:
> > 
> > Hello list!
> > 
> > Unfortunately I had to upgrade a server running CentOS 4.6 (sfnet 
> > initiator) 
> > to CentOS 5.1 (open-iscsi initiator) and now I have some problems with it
> > (then again I was expecting it.. I hate this Promise array).
> > 
> > /var/log/messages:
> > 
> > May 28 15:14:16 server1 multipathd: path checkers start up
> > May 28 15:15:39 server1 iscsid: Nop-out timedout after 10 seconds on 
> > connection 14:0 state (3). Dropping session.
> > May 28 15:15:42 server1 iscsid: connection14:0 is operational after 
> > recovery (2 attempts)
> > May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> > 0x0002
> > May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> > 190057296
> > May 28 15:19:21 server1 kernel: device-mapper: multipath: Failing path 8:48.
> > May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> > 0x0002
> > May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> > 190057552
> > May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> > 0x0002
> > May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> > 190057560
> > May 28 15:19:21 server1 multipathd: sdd: readsector0 checker reports path 
> > is down
> > May 28 15:19:21 server1 multipathd: checker failed path 8:48 in map 
> > promise_test1
> > May 28 15:19:21 server1 multipathd: promise_test1: remaining active paths: 1
> > May 28 15:19:21 server1 iscsid: Nop-out timedout after 10 seconds on 
> > connection 14:0 state (3). Dropping session.
> > May 28 15:19:25 server1 iscsid: connection14:0 is operational after 
> > recovery (2 attempts)
> > May 28 15:19:26 server1 multipathd: sdd: readsector0 checker reports path 
> > is up
> > May 28 15:19:26 server1 multipathd: 8:48: reinstated
> > May 28 15:19:26 server1 multipathd: promise_test1: remaining active paths: 2
> > May 28 15:19:26 server1 multipathd: promise_test1: switch to path group #1
> > 
> > $ iscsiadm -m node --targetname  | grep timeo 
> > node.session.timeo.replacement_timeout = 15
> > node.session.err_timeo.abort_timeout = 10
> > node.session.err_timeo.reset_timeout = 30
> > node.conn[0].timeo.logout_timeout = 15
> > node.conn[0].timeo.login_timeout = 15
> > node.conn[0].timeo.auth_timeout = 45
> > node.conn[0].timeo.active_timeout = 5
> > node.conn[0].timeo.idle_timeout = 60
> > node.conn[0].timeo.ping_timeout = 5
> > node.conn[0].timeo.noop_out_interval = 5
> > node.conn[0].timeo.noop_out_timeout = 10
> > node.session.timeo.replacement_timeout = 15
> > node.session.err_timeo.abort_timeout = 10
> > node.session.err_timeo.reset_timeout = 30
> > node.conn[0].timeo.logout_timeout = 15
> > node.conn[0].timeo.login_timeout = 15
> > node.conn[0].timeo.auth_timeout = 45
> > node.conn[0].timeo.active_timeout = 5
> > node.conn[0].timeo.idle_timeout = 60
> > node.conn[0].timeo.ping_timeout = 5
> > node.conn[0].timeo.noop_out_interval = 5
> > node.conn[0].timeo.noop_out_timeout = 10
> > 
> > Basicly those "Nop-out timedout" errors keep showing up all the time when
> > there is IO going on.. and if I have "dd if=/dev/mpath of=/dev/null" 
> > running 
> 
> You can expand the timeout to a higher value? 30 seconds ? Also you might
> want to limit the node.session.queue_depth to a lower value as well.
> 

I tried this.. doesn't seem to help much. I still get the same errors. 

I'll try limiting queue depth too.. 

> > IO rates seem to go down every 20 seconds or so and stay stalled (at 0) for 
> > 5 seconds or so.. weird.
> 
> That could be due to the NOP not getting its response and stalling the session
> until it receives the response.
> 

Ok.. that would explain it.

I haven't had any problems with the Equallogic target so it has to be
something to do with Promise.. 

-- Pasi




FreeBSD initiator, Solaris target

2008-05-28 Thread Matt Herzog

Has anyone set up a FreeBSD 7 initiator to mount a Solaris 10 target?
I spent the better part of a day trying to "make it go" and failed. 

I just want to know if anyone on this list has succeeded.

-- 
"Outside of a dog, a book is a man's best friend. Inside of a dog, it is too 
dark to read."   
--  Groucho Marx





Re: A few newbie questions

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 07:23:32AM -0700, jergendutch wrote:
> 
> On 28 Mai, 16:13, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
> > On Wed, May 28, 2008 at 05:32:03AM -0700, jergendutch wrote:
> >
> > > Hello,
> >
> > > I have iscsi setup on some boxes. They can all mount a central target
> > > in succession but not at the same time. This is fine, I have installed
> > > GFS to make this work.
> >
> > > I read somewhere (and this is where I need the help) that open-iscsi
> > > limits the number of connections per lun to one, and I need to
> > > increase this to the number of hosts. Can anyone tell me where this
> > > is, I cannot find it any more.
> >
> > Not sure where you read it, but that is false. The default LUN max is
> > 512.
> >
> 
> Okay, so I have another problem then. Damn :/

An easy way to check for this is to run the sg_luns program from 'sg3_utils':

sg_luns /dev/sg1
Lun list length = 8 which implies 1 lun entry
Report luns [select_report=0]:


This tells you that there is one LUN.  If you see one entry and you
think you should have more, then the target is not allowing you
to see them. 
> 
> > > My second question is about startup.
> > > At the moment I start iscsi, then I run the discovery command, then
> > > restart iscsi to see the disk.
> >
> > > This seems wrong. Is there a better way?
> >
> > When you run the discovery command the results are cached. When you
> > log-in the session is also cached. So you init script should
> > take advantage of that and automaticly log-in to those targets.
> >
> > You did log-in to those targets after the discovery, right?
> 
> I don't require login for the targets, they are on a private subnet.

Yes, you do. You need to run 'iscsiadm -m node -L all' to log in to
the targets. There are three phases of iSCSI: discovery,
login/logout, and operational. You will know you have reached the last
phase when the block devices show up.
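As a rough check for that last phase, new iSCSI disks show up as sd* SCSI block devices under /sys/block on Linux. A small sketch (it only lists what is visible and makes no changes):

```shell
# Sanity check for the operational phase: list sd* block devices if any.
# Exits 0 either way; prints a note when nothing is visible (e.g. not
# logged in yet, or not running on Linux).
found=$(ls /sys/block 2>/dev/null | grep '^sd' || true)
if [ -n "$found" ]; then
  echo "visible SCSI block devices:"
  echo "$found"
else
  echo "no sd devices visible"
fi
```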




Re: A few newbie questions

2008-05-28 Thread jergendutch

On 28 Mai, 16:13, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
> On Wed, May 28, 2008 at 05:32:03AM -0700, jergendutch wrote:
>
> > Hello,
>
> > I have iscsi setup on some boxes. They can all mount a central target
> > in succession but not at the same time. This is fine, I have installed
> > GFS to make this work.
>
> > I read somewhere (and this is where I need the help) that open-iscsi
> > limits the number of connections per lun to one, and I need to
> > increase this to the number of hosts. Can anyone tell me where this
> > is, I cannot find it any more.
>
> Not sure where you read it, but that is false. The default LUN max is
> 512.
>

Okay, so I have another problem then. Damn :/

> > My second question is about startup.
> > At the moment I start iscsi, then I run the discovery command, then
> > restart iscsi to see the disk.
>
> > This seems wrong. Is there a better way?
>
> When you run the discovery command the results are cached. When you
> log-in the session is also cached. So you init script should
> take advantage of that and automaticly log-in to those targets.
>
> You did log-in to those targets after the discovery, right?

I don't require login for the targets, they are on a private subnet.

I will try again to see if they are cached (centos 5.1)



Re: open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 03:34:37PM +0300, Pasi Kärkkäinen wrote:
> 
> Hello list!
> 
> Unfortunately I had to upgrade a server running CentOS 4.6 (sfnet initiator) 
> to CentOS 5.1 (open-iscsi initiator) and now I have some problems with it
> (then again I was expecting it.. I hate this Promise array).
> 
> /var/log/messages:
> 
> May 28 15:14:16 server1 multipathd: path checkers start up
> May 28 15:15:39 server1 iscsid: Nop-out timedout after 10 seconds on 
> connection 14:0 state (3). Dropping session.
> May 28 15:15:42 server1 iscsid: connection14:0 is operational after recovery 
> (2 attempts)
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057296
> May 28 15:19:21 server1 kernel: device-mapper: multipath: Failing path 8:48.
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057552
> May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
> 0x0002
> May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
> 190057560
> May 28 15:19:21 server1 multipathd: sdd: readsector0 checker reports path is 
> down
> May 28 15:19:21 server1 multipathd: checker failed path 8:48 in map 
> promise_test1
> May 28 15:19:21 server1 multipathd: promise_test1: remaining active paths: 1
> May 28 15:19:21 server1 iscsid: Nop-out timedout after 10 seconds on 
> connection 14:0 state (3). Dropping session.
> May 28 15:19:25 server1 iscsid: connection14:0 is operational after recovery 
> (2 attempts)
> May 28 15:19:26 server1 multipathd: sdd: readsector0 checker reports path is 
> up
> May 28 15:19:26 server1 multipathd: 8:48: reinstated
> May 28 15:19:26 server1 multipathd: promise_test1: remaining active paths: 2
> May 28 15:19:26 server1 multipathd: promise_test1: switch to path group #1
> 
> $ iscsiadm -m node --targetname  | grep timeo 
> node.session.timeo.replacement_timeout = 15
> node.session.err_timeo.abort_timeout = 10
> node.session.err_timeo.reset_timeout = 30
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.active_timeout = 5
> node.conn[0].timeo.idle_timeout = 60
> node.conn[0].timeo.ping_timeout = 5
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 10
> node.session.timeo.replacement_timeout = 15
> node.session.err_timeo.abort_timeout = 10
> node.session.err_timeo.reset_timeout = 30
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.active_timeout = 5
> node.conn[0].timeo.idle_timeout = 60
> node.conn[0].timeo.ping_timeout = 5
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 10
> 
> Basically those "Nop-out timedout" errors keep showing up all the time when
> there is IO going on.. and if I have "dd if=/dev/mpath of=/dev/null" running 

Can you try raising the timeout to a higher value, say 30 seconds? You might
also want to lower node.session.queue_depth.

> IO rates seem to go down every 20 seconds or so and stay stalled (at 0) for 
> 5 seconds or so.. weird.

That could be due to the NOP not getting its response and stalling the session
until it receives the response.
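If it helps, here is a rough sketch of changing those two settings. It works on
a scratch copy of the config so it can be tried safely; the values 30 and 32 are
just examples, and on a real box you would edit /etc/iscsid.conf (or use
`iscsiadm -m node -o update`) instead:

```shell
# Sketch: raise the NOP-out timeout and lower the queue depth.
# Uses a scratch copy of the config; adapt paths/values to your setup.
cat > iscsid.conf.example <<'EOF'
node.conn[0].timeo.noop_out_timeout = 10
node.session.queue_depth = 128
EOF

# Bump the NOP-out timeout to 30s and drop the queue depth to 32:
sed -i \
    -e 's/^node\.conn\[0\]\.timeo\.noop_out_timeout = .*/node.conn[0].timeo.noop_out_timeout = 30/' \
    -e 's/^node\.session\.queue_depth = .*/node.session.queue_depth = 32/' \
    iscsid.conf.example

cat iscsid.conf.example
```

Note that updated node settings only take effect after the session is logged
out and logged back in.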




Re: A few newbie questions

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 05:32:03AM -0700, jergendutch wrote:
> 
> Hello,
> 
> I have iscsi setup on some boxes. They can all mount a central target
> in succession but not at the same time. This is fine, I have installed
> GFS to make this work.
> 
> I read somewhere (and this is where I need the help) that open-iscsi
> limits the number of connections per lun to one, and I need to
> increase this to the number of hosts. Can anyone tell me where this
> is, I cannot find it any more.

Not sure where you read it, but that is false. The default LUN max is
512.

> 
> My second question is about startup.
> At the moment I start iscsi, then I run the discovery command, then
> restart iscsi to see the disk.
> 
> This seems wrong. Is there a better way?

When you run the discovery command the results are cached. When you
log in, the session is also cached. So your init script should
take advantage of that and automatically log in to those targets.

You did log in to those targets after the discovery, right?
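For what it's worth, a dry-run sketch of the usual flow: discover once, mark
the node records for automatic startup, then log in. The portal address is
made up; the `run` wrapper only echoes the commands so this can be pasted
without touching a live initiator:

```shell
# Dry-run sketch of the typical open-iscsi setup flow.
# The portal 192.168.43.5:3260 is an example -- replace with your target's.
run() { echo "+ $*"; }   # dry-run wrapper; on a real box call iscsiadm directly

run iscsiadm -m discovery -t sendtargets -p 192.168.43.5:3260
run iscsiadm -m node -o update -n node.startup -v automatic
run iscsiadm -m node --loginall=all
```

With node.startup set to automatic, the CentOS 5 iscsi init script should log
the sessions back in at boot without re-running discovery.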




open-iscsi with Promise M500i dropping session / Nop-out timedout

2008-05-28 Thread Pasi Kärkkäinen

Hello list!

Unfortunately I had to upgrade a server running CentOS 4.6 (sfnet initiator) 
to CentOS 5.1 (open-iscsi initiator) and now I have some problems with it
(then again I was expecting it.. I hate this Promise array).

/var/log/messages:

May 28 15:14:16 server1 multipathd: path checkers start up
May 28 15:15:39 server1 iscsid: Nop-out timedout after 10 seconds on connection 
14:0 state (3). Dropping session.
May 28 15:15:42 server1 iscsid: connection14:0 is operational after recovery (2 
attempts)
May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
0x0002
May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
190057296
May 28 15:19:21 server1 kernel: device-mapper: multipath: Failing path 8:48.
May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
0x0002
May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
190057552
May 28 15:19:21 server1 kernel: sd 16:0:0:0: SCSI error: return code = 
0x0002
May 28 15:19:21 server1 kernel: end_request: I/O error, dev sdd, sector 
190057560
May 28 15:19:21 server1 multipathd: sdd: readsector0 checker reports path is 
down
May 28 15:19:21 server1 multipathd: checker failed path 8:48 in map 
promise_test1
May 28 15:19:21 server1 multipathd: promise_test1: remaining active paths: 1
May 28 15:19:21 server1 iscsid: Nop-out timedout after 10 seconds on connection 
14:0 state (3). Dropping session.
May 28 15:19:25 server1 iscsid: connection14:0 is operational after recovery (2 
attempts)
May 28 15:19:26 server1 multipathd: sdd: readsector0 checker reports path is up
May 28 15:19:26 server1 multipathd: 8:48: reinstated
May 28 15:19:26 server1 multipathd: promise_test1: remaining active paths: 2
May 28 15:19:26 server1 multipathd: promise_test1: switch to path group #1

$ iscsiadm -m node --targetname  | grep timeo 
node.session.timeo.replacement_timeout = 15
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.active_timeout = 5
node.conn[0].timeo.idle_timeout = 60
node.conn[0].timeo.ping_timeout = 5
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
node.session.timeo.replacement_timeout = 15
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.active_timeout = 5
node.conn[0].timeo.idle_timeout = 60
node.conn[0].timeo.ping_timeout = 5
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10

Basically those "Nop-out timedout" errors keep showing up all the time when
there is IO going on, and if I have "dd if=/dev/mpath of=/dev/null" running,
IO rates seem to drop every 20 seconds or so and stay stalled (at 0) for
about 5 seconds. Weird.

Initiator is the default RHEL/CentOS 5.1 version.

Most probably the problem is in the Promise target, because I had a lot of
issues with it earlier too. It took some time before I got it to work "ok"
with CentOS 4.6.

With CentOS 4.6 (sfnet initiator) I was using this in iscsid.conf:

ConnFailTimeout=5
PingTimeout=10

and also:
echo 60 > /sys/block/sdc/device/timeout
echo 60 > /sys/block/sdd/device/timeout
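That per-device timeout can also be applied in a loop instead of one disk at a
time. A sketch below, using a mock directory so it is safe to run anywhere; on
a real box the tree would be /sys/block and the device names would be whatever
sd* nodes the iSCSI sessions created:

```shell
# Sketch: set the SCSI command timeout to 60s for a list of disks.
# SYS points at a mock tree here; on a real system use SYS=/sys/block.
SYS=./mock-sys
for d in sdc sdd; do
    mkdir -p "$SYS/$d/device"          # mock setup only; /sys already exists
    echo 30 > "$SYS/$d/device/timeout" # pretend current value
done

for d in sdc sdd; do
    echo 60 > "$SYS/$d/device/timeout"
done

cat "$SYS/sdc/device/timeout" "$SYS/sdd/device/timeout"
```

The sysfs value is not persistent, so the loop has to be re-run after reboot
or session recovery (e.g. from rc.local or a udev rule).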

But I remember seeing errors / failing paths in the logs then too.. 

Anyway, is there anything I can do about these errors, or should I just let
multipath do its job? :)

-- Pasi




A few newbie questions

2008-05-28 Thread jergendutch

Hello,

I have iscsi setup on some boxes. They can all mount a central target
in succession but not at the same time. This is fine, I have installed
GFS to make this work.

I read somewhere (and this is where I need the help) that open-iscsi
limits the number of connections per lun to one, and I need to
increase this to the number of hosts. Can anyone tell me where this
is, I cannot find it any more.

My second question is about startup.
At the moment I start iscsi, then I run the discovery command, then
restart iscsi to see the disk.

This seems wrong. Is there a better way?

Thanks.
J



Re: Connection Errors

2008-05-28 Thread swejis

OK, new logfile found here: http://www.wehay.com/messages.new.gz

TIA
// Jonas



Re: [PATCH 3/3] bnx2i: Add bnx2i iSCSI driver.

2008-05-28 Thread Hannes Reinecke

Hi all,

Jeff Garzik wrote:
> Michael Chan wrote:
>> If we change the implementation to use a separate IP address and
>> separate MAC address for iSCSI, will it be acceptable?  The iSCSI IP/MAC
>> addresses will be unknown to the Linux TCP stack and so no sharing of
>> the 4-tuple space will be needed.
>>
>> The patches will be very similar, except that all inet calls and
>> notifiers will be removed.
> 
> 
> IMO a totally separate MAC and IP would definitely be preferred...
> 
And as it happens, the machines I have here claim to use a different
MAC address for iSCSI anyway. So we should be using them.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke   zSeries & Storage
[EMAIL PROTECTED] +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)
