Re: iscsiadm: InitiatorName is required on the first Login PDU

2009-04-03 Thread Ulrich Windl

On 2 Apr 2009 at 18:31, Boaz Harrosh wrote:

> [r...@testlin2]$ ps ax |grep iscsi
> 32268 ?        S<     0:00 [iscsi_eh]
> 32284 ?        Ss     0:00 iscsid
> 32285 ?        S      0:00 iscsid
> 32313 pts/0    S+     0:00 grep iscsi
> 

Hi,

just a question: why do we see two iscsid processes? Most daemons appear only
once.

Regards,
Ulrich





Re: Kernel / iscsi problem under high load

2009-04-03 Thread Ulrich Windl

On 2 Apr 2009 at 18:19, Gonçalo Borges wrote:

[...]
> I have the following multipath devices:
[...]
> [r...@core26 ~]# multipath -ll
> sda: checker msg is "rdac checker reports path is down"
> iscsi06-apoio1 (3600a0b80003ad1e50f2e49ae6d3e) dm-0 IBM,VirtualDisk
> [size=2.7T][features=1 queue_if_no_path][hwhandler=0]

Very interesting: Our SAN system allows only 2048 GB of storage per LUN.
Looking into the SCSI protocol, it seems there is a 32-bit number of 512-byte
blocks to count the LUN capacity. Thus roughly 4Gi blocks times 0.5kB makes
2TB. I wonder how your system represents 2.7TB in the SCSI protocol.
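(For the arithmetic: a 32-bit count of 512-byte blocks caps a LUN at about
2^32 * 512 bytes; a quick check in bash gives

$ echo $((2**32 * 512))
2199023255552

i.e. roughly 2.2 * 10^12 bytes, or 2 TiB.)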

[...]
> [r...@core26 ~]# fdisk -l /dev/sdb1
> Disk /dev/sdb1: 499.9 GB, 49983104 bytes

Isn't that a bit small for 2.7TB? I think you should use fdisk on the disk,
not on the partition!
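Something like the following (device name taken from your output above) should
report the whole disk:

fdisk -l /dev/sdb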

> 255 heads, 63 sectors/track, 60788 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk /dev/sdb1 doesn't contain a valid partition table

See above!
[...]
> [r...@core26 ~]# df -k
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda1             90491396   2008072  83812428   3% /
> tmpfs                   524288         0    524288   0% /dev/shm
> /dev/mapper/iscsi06-apoio1p1
>                      480618344    202804 456001480   1% /apoio06-1
> /dev/mapper/iscsi06-apoio2p1
>                      480618344    202800 456001484   1% /apoio06-2
> 
> The sizes, although not exactly the same (but that doesn't happen also for
> the system disk), are very close.

So you have roughly 500GB on a 2.7TB LUN in use.

> 
> 
> 
> > Then one could compare those sizes to those reported by the kernel. Maybe
> > the setup is just wrong, and it takes a while until the end of the device
> > is reached.
> >
> 
> 
> I do not think the difference I see in previous commands is big enough to
> justify a wrong setup. But I'm just guessing and I'm not really an expert.

It now depends where the partition is located on the disk (use a corrected
fdisk invocation to find out).

> 
> 
> >
> > Then I would start slowly, i.e. with one izone running on one client.
> >
> 
> 
> I've already performed the same tests with 6 Raid 0 and 6 Raid 1 instead of
> 2 Raid 10 in similar DS 3300 systems without having this kind of error. But
> probably, I could be hitting some kind of limit..
> 
> 
> >
> > BTW, what do you want to measure: the kernel throughput, the network
> > throughput, the iSCSI throughput, the controller throughput, or the disk
> > throughput? You should have some concrete idea before starting the
> > benchmark. Also, with just 12 disks I see little sense in having that many
> > threads accessing the disk. To shorten a lengthy test, it may be advisable
> > to reduce the system memory (iozone recommends creating a file size at
> > least three times the amount of RAM, and even 8GB on a local disk takes
> > hours to perform)
> 
> 
> I want to measure the I/O performance for the RAID in sequential and random
> write/reads. What matters for the final user is that he is able to
> write/read at XXX MB/s. I want to stress the system to know the limit of the
> iSCSI controllers (this is why I'm starting so many threads). In theory, at
> the controllers' limit, they should take a lot of time to deal with the I/O
> traffic from the different clients, but they are not supposed to die.

I was able to reach the limit of our system (380MB/s over 4Gb FC) with a
single machine. As a summary: performance is best if you write large blocks
(1MB) sequentially. Anything else is bad.
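For example, a sequential large-record run of the kind meant here might look
like this with iozone (sizes and path purely illustrative; -i 0 is sequential
write, -i 1 sequential read, -r the record size, -s the file size):

iozone -i 0 -i 1 -r 1m -s 8g -f /apoio06-1/iozone.tmp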

Regards,
Ulrich








Re: Kernel / iscsi problem under high load

2009-04-03 Thread Gonçalo Borges
>
>
>
> Sure.. but the normal rdac handler (that comes with the kernel) doesn't
> spit those errors. It looks as a proprietary module.
>
> If this is the proprietary module, what happens when you use the one that
> comes with
> the RHEL5U2 kernel?
>


This RDAC handler is suggested in
http://publib.boulder.ibm.com/infocenter/systems/topic/liaai/rdac/BPMultipathRDAC.pdf,
and I had to download it from
http://www.lsi.com/rdac/rdac-LINUX-09.02.C5.16-source.tar.gz, and compile
it. I haven't tested the RDAC from the Kernel... Do you have any info on how
to do it?

What I have done previously was to test the DM-multipath with the
"path_checker readsector0" in /etc/multipath. I got the same problems in
this Raid 10 configuration for the DS3300. However, dividing the same DS3300
in 6 R1, I had no problems either with the present RDAC or with readsector0,
but I got better I/O performance with the RDAC.

Cheers
Goncalo




Re: Kernel / iscsi problem under high load

2009-04-03 Thread Gonçalo Borges
Hi...

> [r...@core26 ~]# multipath -ll
> > sda: checker msg is "rdac checker reports path is down"
> > iscsi06-apoio1 (3600a0b80003ad1e50f2e49ae6d3e) dm-0 IBM,VirtualDisk
> > [size=2.7T][features=1 queue_if_no_path][hwhandler=0]
>
> Very interesting: Our SAN system allows only 2048 GB of storage per LUN.
> Looking into the SCSI protocol, it seems there is a 32-bit number of
> 512-byte blocks to count the LUN capacity. Thus roughly 4Gi blocks times
> 0.5kB makes 2TB. I wonder how your system represents 2.7TB in the SCSI
> protocol.
>


Is this 2048 GB limit imposed by iSCSI? Because there is nothing in SCSI
itself which forces you to this limit... Nowadays, you can have huge
partitions (if you do GPT partitions with parted)... So, if there is a
limit, it should come from iSCSI...


>
> > [r...@core26 ~]# fdisk -l /dev/sdb1
> > Disk /dev/sdb1: 499.9 GB, 49983104 bytes
>
> Isn't that a bit small for 2.7TB ? I think you should use fdisk on the
> disk, not
> on the partition!



Here is the output of fdisk on the disk:

[r...@core26 ~]# fdisk -l /dev/sdb
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk
doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 2998.9 GB, 2998998663168 bytes
255 heads, 63 sectors/track, 364607 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  EFI GPT

We do GPT partitions with parted in order to overcome the deficiency of the
(old!) 2048GB limit. Here is the output of parted (just to be sure):

[r...@core26 ~]# parted /dev/mapper/iscsi06-apoio2
GNU Parted 1.8.1
Using /dev/mapper/iscsi06-apoio2
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux device-mapper (dm)
Disk /dev/mapper/iscsi06-apoio2: 2999GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   EndSize   File system  NameFlags
 1  17.4kB  500GB  500GB  ext3 iscsi06-apoio2
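For reference, a layout like this one can be created along the following
lines (device name as above; only a sketch, and mklabel wipes the existing
partition table, so double-check before running):

parted /dev/mapper/iscsi06-apoio2 mklabel gpt
parted /dev/mapper/iscsi06-apoio2 mkpart primary ext3 0 500GB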


> [r...@core26 ~]# df -k
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/sda1             90491396   2008072  83812428   3% /
> > tmpfs                   524288         0    524288   0% /dev/shm
> > /dev/mapper/iscsi06-apoio1p1
> >                      480618344    202804 456001480   1% /apoio06-1
> > /dev/mapper/iscsi06-apoio2p1
> >                      480618344    202800 456001484   1% /apoio06-2
> >
> > The sizes, although not exactly the same (but that doesn't happen also
> for
> > the system disk), are very close.
>
> So you have roughly 500GB on a 2.7TB LUN in use.
>

That is right... I have a logical volume of 2.7TB but a partition of 500GB.
But isn't this allowed?

> > I do not think the difference I see in previous commands is big enough to
> > justify a wrong setup. But I'm just guessing and I'm not really an expert.
> 
> It now depends where the partition is located on the disk (use a corrected
> fdisk invocation to find out).
>

In principle, the partition should be at the beginning of the logical
volume, but I cannot confirm it with parted. If this is the case, everything
should work fine. However, if there is a limit of 2048 GB of storage per
LUN, this may confuse the setup.. don't know for sure.


Cheers and Thanks
Goncalo




Re: Kernel / iscsi problem under high load

2009-04-03 Thread Ulrich Windl

On 3 Apr 2009 at 11:42, Gonçalo Borges wrote:

[...]
> Is this 2048 GB limit imposed on iSCSI? Because there is nothing in SCSI
> itlself which forces you to this limit... Nowadays, you could have huge
> partitions (if you do GPT partitions with PARTED)... So, if there is a
> limit, it should come from iSCSI...

Hi, I just looked it up (search for "T10 SBC-2"): SCSI (see SBC-2, section
4.1) seems to use "short LBA" (four bytes) and "long LBA" (eight bytes) to
address blocks in block devices. So it seems our storage system only supports
"short LBA". I don't know what Linux supports. I'd guess sizes up to 2^32-1
blocks are safe, however.
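If sg3_utils is installed, you can check which variant your device answers;
the first command issues a READ CAPACITY(10), the second the 16-byte "long"
one:

sg_readcap /dev/sdb
sg_readcap --long /dev/sdb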

> 
> 
> >
> > > [r...@core26 ~]# fdisk -l /dev/sdb1
> > > Disk /dev/sdb1: 499.9 GB, 49983104 bytes
> >
> > Isn't that a bit small for 2.7TB ? I think you should use fdisk on the
> > disk, not
> > on the partition!
> 
> 
> 
> Here goes the output of fdisk on the disk:
> 
>  [r...@core26 ~]# fdisk -l /dev/sdb
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk
> doesn't support GPT. Use GNU Parted.
> Disk /dev/sdb: 2998.9 GB, 2998998663168 bytes
> 255 heads, 63 sectors/track, 364607 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>Device Boot  Start End  Blocks   Id  System
> /dev/sdb1   1  267350  2147483647+  ee  EFI GPT

As the utility said, MS-DOS partition tables can only handle partitions up to
2TB (2047 point something GB). I have had little experience with parted yet,
so you must find out yourself. At least your utilities seem to have done the
right thing.

> 
> We do GPT partitions with parted in order to overcome the deficienty of the
> (old!) 2048GB limit. Here goes the output of a parted (just to be sure):
> 
> [r...@core26 ~]# parted /dev/mapper/iscsi06-apoio2
> GNU Parted 1.8.1
> Using /dev/mapper/iscsi06-apoio2
> Welcome to GNU Parted! Type 'help' to view a list of commands.
> (parted) print
> Model: Linux device-mapper (dm)
> Disk /dev/mapper/iscsi06-apoio2: 2999GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start   EndSize   File system  NameFlags
>  1  17.4kB  500GB  500GB  ext3 iscsi06-apoio2

Ok, so your 500GB partition is at the start of the device. That should be
safe for Linux.

[...]
> In principle, the partitition should be on the beguining of the logical
> volume but I can not confirm it with parted. If this is the case, everything
> shoudl work fine. However, If there is the limit of 2048 GB of storage per
> LUN, this may confuse the setup.. don't know for sure.

Now you'll have to compare the sector number Linux complains about (as being
past the end of the device/partition) with the actual limit. Linux shouldn't
access a device past the limit. Usually the commands that create filesystems
do that correctly, so Linux shouldn't exceed the limits.

If there is an access outside the valid range, it could be some corruption via
iSCSI. You could use something like "dd if=/dev/zero
of=a_big_file_in_your_filesystem" to fill your filesystem completely. Linux
shouldn't complain about access past the end of the device. If it does, you'll
have to dig further into the details.
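A minimal sketch of that fill test, with the mount point taken from the df
output earlier in the thread (remove the file afterwards):

dd if=/dev/zero of=/apoio06-1/bigfile bs=1M
rm /apoio06-1/bigfile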

Regards,
Ulrich





RE: multipath iSCSI installs

2009-04-03 Thread Shyam_Iyer

 
Mike Christie wrote:
>mala...@us.ibm.com wrote:
>> Hi all,
>> 
>>  I am trying to install RHEL5.3 on an iSCSI disk with two paths.
>> I booted with the "mpath" option but the installer picked up only a single
>> path. Is this the expected behavior when I use "iBFT"?
>> 
>
>For this mail, ibft boot means the boot process where the ibft
>implementation hooks into the box and logs into the target and brings over
>the kernel and initrd.
>
>
>It could be. If the ibft implementation uses only one session during the
>ibft boot and then only exports that one session, then yeah, it is expected,
>because the iscsi tools only know what ibft tells us.

Mike - Here is an old thread that we had on the fwparam tool's limitation
with multiple sessions at boot time:
http://www.mail-archive.com/open-iscsi@googlegroups.com/msg01659.html

You had suggested a few iface bindings that could be done to have
multiple sessions with an extra flag that uses the hint from the iBFT to
have multiple sessions.
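For reference, such bindings are created roughly like this (the iface and
NIC names here are made up):

iscsiadm -m iface -I iface0 -o new
iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0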

I guess this might be the problem that malahal is having. The iBFT might be
configured to have multiple sessions, and thus have two paths, but the
iscsiadm shipped with the installer will not take the hint from iBFT to
connect to both paths.

-Shyam Iyer





RE: multipath iSCSI installs

2009-04-03 Thread Shyam_Iyer

mala...@us.ibm.com wrote:
>Mike Christie [micha...@cs.wisc.edu] wrote:
>> If the ibft implementation uses one session, but exports all the
>> targets in the ibft info, then in RHEL 5.3 the installer only picks up
>> the session used for the ibft boot up, but the initrd root-boot code
>> used after the install should log into all the targets in ibft whether
>> they were used for the ibft boot or not. The different behavior is a
>> result of the installer goofing up and using the wrong API.
>
>It is quite likely that my iBFT implementation uses & exports a single
>session.

My bad! Malahal has fixed the issue I guess.
 
But it does seem like we will hit an issue if the iBFT does export
multiple sessions and iscsiadm doesn't use them in the installer to find
multiple paths.

-Shyam





Manually adding targets

2009-04-03 Thread Matthew Richardson
I'm trying to find a way of manually adding targets to an initiator,
rather than using the iscsiadm discovery command.

Is there an easier way of doing this than manually creating entries in
the nodes/ directory?

I've seen some references to a 'static' directory for this, but can't
find any documentation on what this does.

Alternatively, is there a way to specify a list of portals and have
discovery run against them at boot time? (other than writing my own
init.d script).

Thanks,

Matthew


-- 

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.






Re: multipath iSCSI installs

2009-04-03 Thread Mike Christie

shyam_i...@dell.com wrote:
>  
> Mike Christie wrote:
>> mala...@us.ibm.com wrote:
>>> Hi all,
>>>
>>> I am trying to install RHEL5.3 on an iSCSI disk with two paths.
>>> I booted with the "mpath" option but the installer picked up only a
>>> single path. Is this the expected behavior when I use "iBFT"?
>>>
>> For this mail, ibft boot means the boot process where the ibft
>> implementation hooks into the box and logs into the target and brings
>> over the kernel and initrd.
>>
>>
>> It could be. If the ibft implementation uses only one session during
>> the ibft boot and then only exports that one session, then yeah, it is
>> expected, because the iscsi tools only know what ibft tells us.
> 
> Mike - Here is an old thread that we had on the fwparam tool's limitation
> with multiple sessions at boot time:
> http://www.mail-archive.com/open-iscsi@googlegroups.com/msg01659.html
> 
> You had suggested a few iface bindings that could be done to have
> multiple sessions with an extra flag that uses the hint from the iBFT to
> have multiple sessions.
> 
> I guess this might be the problem that malahal is having. The iBFT

There are multiple issues, remember?

1. Do we want to bind sessions to specific nics?

- You need the iface bindings like you mentioned above.

2. If the ibft exports all the targets set up, even if they were not used
for the ibft boot, do we want to log into them?

- We used to only log into the one used for boot. Now for boot we log
into all of them, using the default behavior where we let the network
layer route. For install in RHEL, there was a goof where it always only
logs into the one used for boot.
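For boot, the initrd typically ends up running something along the lines of

iscsistart -b

which (with a reasonably recent open-iscsi) logs into everything found in
the firmware/ibft table.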


> might be configured to have multiple sessions, and thus have two paths,
> but the iscsiadm shipped with the installer will not take the hint from
> iBFT to connect to both paths.

That is what I said in the mail you replied to, more or less. For boot it
is fixed: iscsiadm/iscsistart logs into all the targets found in ibft,
whether they were used for the ibft boot or not. For install there is still
a bug, because the installer code used the wrong API, which only logs into
the session used for boot.

When I did the fix for the iscsi tools, I think you even replied on the
thread :)




Re: Manually adding targets

2009-04-03 Thread Mike Christie

Matthew Richardson wrote:
> I'm trying to find a way of manually adding targets to an initiator,
> rather than using the iscsiadm discovery command.
> 
> Is there an easier way of doing this than manually creating entries in
> the nodes/ directory?

Not sure if it is easier but you can use the iscsiadm node addition command:

iscsiadm -m node -T your_target -p ip:port,tpgt -o new

If you want it bound to an iface then throw that in too:

iscsiadm -m node -T your_target -p ip:port,tpgt -I iface0 -I iface1 -o new

> 
> I've seen some references to a 'static' directory for this, but can't
> find any documentation on what this does.
> 
> Alternatively, is there a way to specify a list of portals and have
> discovery run against them at boot time? (other than writing my own
> init.d script).
> 

There is not. I could write something up though. You would just want it
to log into all the portals/targets it found at discovery then, right?

What target are you using this for, or why do you want to do this? We
used to do this in linux-scsi. I actually thought that was nicer. On the
target side I would just make sure I set up some access group so the
initiator only saw the targets/portals/devices it should know about. Then
on the initiator I did not have to worry about the node record config.
It turns out a lot of targets are not so nice and just send you
everything they have :( and we ended up having what we have now.




Re: iscsiadm: InitiatorName is required on the first Login PDU

2009-04-03 Thread Mike Christie

Ulrich Windl wrote:
> On 2 Apr 2009 at 18:31, Boaz Harrosh wrote:
> 
>> [r...@testlin2]$ ps ax |grep iscsi
>> 32268 ?        S<     0:00 [iscsi_eh]
>> 32284 ?        Ss     0:00 iscsid
>> 32285 ?        S      0:00 iscsid
>> 32313 pts/0    S+     0:00 grep iscsi
>>
> 
> Hi,
> 
> just a question: why do we see two iscsid processes? Most daemons appear
> only once.
> 

One is the daemon process that does the iscsi work like login/relogin
and error handling. The other is for logging. We cannot do logging from
the iscsi work process: if it tried, say, to write out log data to an
iscsi disk we were in the middle of relogging into, we would deadlock,
because we would be blocked on the same disk we need to reconnect to.




Re: Kernel / iscsi problem under high load

2009-04-03 Thread Konrad Rzeszutek

On Fri, Apr 03, 2009 at 10:42:31AM +0100, Gonçalo Borges wrote:
> >
> >
> >
> > Sure.. but the normal rdac handler (that comes with the kernel) doesn't
> > spit those errors. It looks as a proprietary module.
> >
> > If this is the proprietary module, what happens when you use the one that
> > comes with
> > the RHEL5U2 kernel?
> >
> 
> 
> This RDAC handler is suggested in
> http://publib.boulder.ibm.com/infocenter/systems/topic/liaai/rdac/BPMultipathRDAC.pdf,
> and I had to download it from
> http://www.lsi.com/rdac/rdac-LINUX-09.02.C5.16-source.tar.gz, and compile
> it. I haven't tested the RDAC from the Kernel... Do you have any info on how
> to do it?

Move the modules it created out of the way (those would be the mpp*.ko
files) and make sure that there is a dm-rdac.ko in your
/lib/modules/`uname -r`/ directory.

Boot a normal initrd, not the one the LSI package created.

The multipath.conf that you posted will work. You can check that by running
lsmod | grep rdac

and you should see dm_rdac loaded.

> 
> What I have done previously was to test the DM-multipath with the
> "path_checker readsector0" in /etc/multipath. I got the same problems in

Yikes. You don't want that.

> this Raid 10 configuration for the DS3300. However, dividing the same DS3300
> in 6 R1, I had no problems either with the present RDAC or with readsector0,

6 R1 ?

> but I got better I/O performance with the RDAC.




Re: Manually adding targets

2009-04-03 Thread Matthew Richardson
Thanks for the fast response!

> Not sure if it is easier but you can use the iscsiadm node addition command:
> 
> iscsiadm -m node -T your_target -p ip:port,tpgt -o new
> 

Yep - I discovered that one, but it still requires running a command
somewhere in the startup process.

> There is not. I could write something up though. You would just want it 
> to log into all the portals/targets it found at discovery then right?
> 
> What target are you using this for or why do you want to do this? 

Ideally, open-iscsi would have a file somewhere that lists combinations
of portals, targets and ifaces, and would do either discovery or 'manual
add' (as in the command listed above), depending on how much information
was given.

I'm currently using scsi-target-utils (i.e. http://stgt.berlios.de) -
which does allow IP and user-based access controls - so the simplest
'list of portals' approach would be enough for my needs.

As an aside, I'm also on holiday for the next 2 weeks, so if you want
any testing/further comments, there might be a delay :)

Thanks,

Matthew

-- 

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.






Re: Manually adding targets

2009-04-03 Thread Mike Christie

Matthew Richardson wrote:
> Thanks for the fast response!
> 
>> Not sure if it is easier but you can use the iscsiadm node addition command:
>>
>> iscsiadm -m node -T your_target -p ip:port,tpgt -o new
>>
> 
> Yep - I discovered that one, but it still requires running a command
> somewhere in the startup process.
> 
>> There is not. I could write something up though. You would just want it 
>> to log into all the portals/targets it found at discovery then right?
>>
>> What target are you using this for or why do you want to do this? 
> 
> Ideally, open-iscsi would have a file somewhere that lists combinations
> of portals, targets and ifaces, and would do either discovery or 'manual
> add' (as in the command listed above), depending on how much information
> was given.

What are you trying to accomplish? The manual add threw me.

Normally when you do
iscsiadm -m discovery -t st -p .

it will create some files in /etc/iscsi/nodes (or /var/lib/iscsi/nodes
in some distros). Then you can set the node.startup value to control
whether the tools automatically log in for you when the iscsi service is
started.
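For example, to make a record log in automatically when the service starts
(target and portal here are placeholders):

iscsiadm -m node -T your_target -p ip:port -o update -n node.startup -v automatic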


With the
iscsiadm -m node -T .. -p .. -o new
it creates some files in the nodes dir and you can again set whether you
want the tools to log in automatically or not.

For the second option you listed (the manual add one), it seems like if
you were going to write out a list of portal,target,iface tuples that
you want to log into, then it is the same as doing the iscsiadm -o new
command (iscsiadm -o new would basically be writing the file for you).
Are you just trying to not have to run iscsiadm at all?



> 
> I'm currently using scsi-target-utils (ie http://stgt.berlios.de) -
> which does allow IP and user-based access controls - so the simplest
> 'list of portals' approach would be enough for my needs.
> 
> As an aside, I'm also on holiday for the next 2 weeks, so if you want
> any testing/further comments, there might be a delay :)
> 
> Thanks,
> 
> Matthew
> 





Re: Manually adding targets

2009-04-03 Thread Matthew Richardson
Mike Christie wrote:
> What are you trying to accomplish? The manual add threw me.
> 
> Normally when you do
> iscsiadm -m discovery -t st -p .
> 
> it will create some files in /etc/iscsi/nodes (or /var/lib/iscsi/nodes
> in some distros). Then you can set the node.startup value to control
> whether the tools automatically log in for you when the iscsi service is
> started.
> 
> 
> With the
> iscsiadm -m node -T .. -p .. -o new
> it creates some files in the nodes dir and you can again set whether you
> want the tools to log in automatically or not.
> 
> For the second option you listed (the manual add one), it seems like if
> you were going to write out a list of portal,target,iface tuples that
> you want to log into, then it is the same as doing the iscsiadm -o new
> command (iscsiadm -o new would basically be writing the file for you).
> Are you just trying to not have to run iscsiadm at all?

At the moment I'd have to either put a startup script in place, or
manually run the above commands at least once to get targets etc. added
to the iscsi database.

I'd like to be able to deliver a config file to the system, and have the
system do the creation of ifaces, and the discovery (or manual add) of
targets as part of its startup (e.g. in the init.d script), as well as
the 'loginall' it currently does.  A config file listing ifaces,
portals, targets etc seemed to be the right way to do this - it being
parsed by a startup script.
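Purely as an illustration (the file name and format are made up; nothing
like this exists in open-iscsi today), the startup fragment could be as
simple as:

# /etc/iscsi/portals.conf: one "portal [iface]" per line
while read portal iface; do
    case "$portal" in "" | \#* ) continue ;; esac
    iscsiadm -m discovery -t st -p "$portal" ${iface:+-I "$iface"}
done < /etc/iscsi/portals.conf
iscsiadm -m node --loginall=automatic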

Hope this makes sense.

Thanks,

Matthew

-- 

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.






Re: Manually adding targets

2009-04-03 Thread Mike Christie

Matthew Richardson wrote:
> Mike Christie wrote:
>> What are you trying to accomplish? The manual add threw me.
>>
>> Normally when you do
>> iscsiadm -m discovery -t st -p .
>>
>> it will create some files in /etc/iscsi/nodes (or /var/lib/iscsi/nodes
>> in some distros). Then you can set the node.startup value to control
>> whether the tools automatically log in for you when the iscsi service is
>> started.
>>
>>
>> With the
>> iscsiadm -m node -T .. -p .. -o new
>> it creates some files in the nodes dir and you can again set whether you
>> want the tools to log in automatically or not.
>>
>> For the second option you listed (the manual add one), it seems like if
>> you were going to write out a list of portal,target,iface tuples that
>> you want to log into, then it is the same as doing the iscsiadm -o new
>> command (iscsiadm -o new would basically be writing the file for you).
>> Are you just trying to not have to run iscsiadm at all?
> 
> At the moment I'd have to either put a startup script in place, or
> manually run the above commands at least once to get targets etc added
> to the iscsi database.
> 
> I'd like to be able to deliver a config file to the system, and have the
> system do the creation of ifaces, and the discovery (or manual add) of
> targets as part of its startup (e.g. in the init.d script), as well as

When we parse the config file and make ifaces and do discovery or manual
add, do you want us to also add the info to the iscsi DB, or just use that
info to create sessions while the iscsi service is running that one time?



> the 'loginall' it currently does.  A config file listing ifaces,
> portals, targets etc seemed to be the right way to do this - it being
> parsed by a startup script.
> 
> Hope this makes sense.
> 
> Thanks,
> 
> Matthew
> 





RE: multipath iSCSI installs

2009-04-03 Thread Shyam_Iyer

Mike Christie wrote:
>That is what I said in the mail you replied to, more or less. For boot it
>is fixed: iscsiadm/iscsistart logs into all the targets found in ibft,
>whether they were used for the ibft boot or not. For install there is
>still a bug, because the installer code used the wrong API, which only
>logs into the session used for boot.

>When I did the fix for the iscsi tools, I think you even replied on the
>thread :)

I need coffee (:. Sorry about that.

So, for the installers, will that be fixed with the new library approach
that we are implementing?




Re: Manually adding targets

2009-04-03 Thread Matthew Richardson
Mike Christie wrote:

>> I'd like to be able to deliver a config file to the system, and have the
>> system do the creation of ifaces, and the discovery (or manual add) of
>> targets as part of its startup (e.g. in the init.d script), as well as
> 
> When we parse the cofig file and make ifaces and do discovery or 
> manually add, do you want us to also add the info the iscsi DB or just 
> use that info to create sessions while the iscsi service is running that 
> one time?
> 

I think the latter would be fine - as long as the config file is there, a
restart of the service would reinitiate the creation of ifaces, re-run the
discovery, etc.  But I imagine that there might be situations where the
opposite is true, and I have no idea how much work it would be to create
non-persistent sessions...

Perhaps an option in the config file? Either to use non-persistent sessions,
or to simply throw away the contents of the persistent database on service
restart?

Thanks,

Matthew



The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.