Thank you for your answers,

The motivation behind the original question is to reduce the total waiting
time across multiple iscsi connection logins
in case some of the portals are down.

We have a limitation on our RHEV system where all logins to the listed iscsi
targets must finish within 180 seconds in total.
In our current implementation we serialize the iscsiadm node logins one
after the other, each for a specific target and portal.
In this scheme, each login waits 120 seconds if its portal is down
(default 15-second login timeout * 8 login retries), so if 2 or
more connections are down we spend at least 240 seconds,
which exceeds our 180-second limit and the entire operation is
considered failed (RHEV-wise).
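
For completeness, the 120 seconds per down portal comes straight from the
defaults (15-second login timeout * 8 retries), so one way to shrink the
serial worst case would be to lower those two settings on each node record
before logging in. The values below are purely illustrative, not a
recommendation:

```
# iscsid.conf defaults (also settable per node record with
# iscsiadm -m node ... -o update -n <name> -v <value>):
#   node.conn[0].timeo.login_timeout = 15
#   node.session.initial_login_retry_max = 8
# Illustrative lower values bounding a down portal at 5 * 4 = 20 seconds:
node.conn[0].timeo.login_timeout = 5
node.session.initial_login_retry_max = 4
```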

Testing [1] of different login schemes is summarized in the following table
(logins to 2 targets with 2 portals each).
It seems that logging in to all nodes at once after creating them, as
suggested in the previous answer here, is comparable in total time spent
to doing specific node logins concurrently (i.e. running iscsiadm -m node
-T target -p portal -I interface -l in parallel, one per
target-portal pair), both when all portals are online and when
one portal is down:

Login scheme               Online Portals   Active Sessions   Total Login Time (seconds)
----------------------------------------------------------------------------------------
All at once                2/2              4                 2.1
All at once                1/2              2                 120.2
Serial target-portal       2/2              4                 8.5
Serial target-portal       1/2              2                 243.5
Concurrent target-portal   2/2              4                 2.1
Concurrent target-portal   1/2              2                 120.1

Using concurrent target-portal logins seems preferable from our
perspective, as it lets us connect only to the
specified targets and portals without the risk of intermixing with other
potential iscsi targets.
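
For concreteness, the concurrent scheme we tested can be sketched roughly as
below. The helper name is ours, and the target/portal pairs are the example
values from the earlier commands; the first argument is the iscsiadm binary
(parameterized here only so the sketch can be dry-run with a stub):

```shell
# login_all: run one "iscsiadm -m node ... -l" per "target portal" pair
# in parallel, then wait for all of them; returns non-zero if any failed.
login_all() {
    iscsi=$1; shift
    pids=""
    for pair in "$@"; do
        target=${pair% *}    # word before the space
        portal=${pair#* }    # word after the space
        "$iscsi" -m node -T "$target" -I default -p "$portal" -l &
        pids="$pids $!"
    done
    rc=0
    for pid in $pids; do
        wait "$pid" || rc=1
    done
    return $rc
}

# Example (hypothetical pairs, as in the earlier commands):
# login_all iscsiadm \
#     "iqn.2003-01.org.vm-18-198.iqn2 10.35.18.121:3260,1" \
#     "iqn.2003-01.org.vm-18-198.iqn2 10.35.18.166:3260,1"
```

The total time is then bounded by the slowest single login (about 120
seconds with the defaults) rather than the sum over down portals.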

The node creation part is kept serial in all tests here, since we have seen
it may cause iscsi DB issues when run in parallel.
But running only the node logins in parallel has not shown issues in at
least 1000 runs of our tests.

The question to ask here: is this advisable by open-iscsi?
I know I have already been answered that iscsiadm is racy, but does that
apply to node logins as well?

The other option is to use one login-all call without parallelism, but that 
would have other implications on our system to consider.
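
That login-all variant would look roughly like the sketch below (helper name
and pairs are ours). Note that iscsiadm -m node -l logs in to every node
record in the DB, not just the ones created here, which is the intermixing
risk we want to avoid:

```shell
# login_all_nodes: create a node record per "target portal" pair serially,
# then issue a single login-all. First argument is the iscsiadm binary
# (parameterized only so the sketch can be dry-run with a stub).
login_all_nodes() {
    iscsi=$1; shift
    for pair in "$@"; do
        "$iscsi" -m node -o new -T "${pair% *}" -p "${pair#* }" || return 1
    done
    # One call logs in to every node record in the iscsi DB, including
    # any records created outside this script.
    "$iscsi" -m node -l
}

# Example (hypothetical pairs):
# login_all_nodes iscsiadm \
#     "iqn.2003-01.org.vm-18-198.iqn2 10.35.18.121:3260,1" \
#     "iqn.2003-01.org.vm-18-198.iqn2 10.35.18.166:3260,1"
```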

Your answers would be helpful once again.

Thanks,
- Amit

[1] https://gerrit.ovirt.org/#/c/110432

On Tuesday, June 30, 2020 at 8:02:15 PM UTC+3 The Lee-Man wrote:

> On Tuesday, June 30, 2020 at 8:55:13 AM UTC-7, Donald Williams wrote:
>>
>> Hello,
>>  
>>  Assuming that devmapper is running and MPIO properly configured you want 
>> to connect to the same volume/target from different interfaces. 
>>
>> However in your case you aren't specifying the same interface. "default"  
>> but they are on the same subnet.  Which typically will only use the default 
>> NIC for that subnet. 
>>
>
> Yes, generally best practices require that each component of your two 
> paths between initiator and target are redundant. This means that, in the 
> case of networking, you want to be on different subnets, served by 
> different switches. You also want two different NICs on your initiator, if 
> possible, although many times they are on the same card. But, obviously, 
> some points are not redundant (like your initiator or target). 
>
>>
>> What iSCSI target are you using?  
>>
>>  Regards,
>> Don
>>
>> On Tue, Jun 30, 2020 at 9:00 AM Amit Bawer <aba...@redhat.com> wrote:
>>
> [Sorry if this message is duplicated, haven't seen it is published in the 
>>> group]
>>>
>>> Hi,
>>>
>>> Have couple of question regarding iscsiadm version 6.2.0.878-2:
>>>
>>> 1) Is it safe to have concurrent logins to the same target from 
>>> different interfaces? 
>>> That is, running the following commands in parallel:
>>>
>>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
>>> 10.35.18.121:3260,1 -l
>>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
>>> 10.35.18.166:3260,1 -l
>>>
>>> 2) Is there a particular reason for the default values of  
>>> node.conn[0].timeo.login_timeout and node.session.initial_login_
>>> retry_max?
>>> According to comment in iscsid.conf it would spend 120 seconds in case 
>>> of an unreachable interface login:
>>>
>>> # The default node.session.initial_login_retry_max is 8 and
>>> # node.conn[0].timeo.login_timeout is 15 so we have:
>>> #
>>> # node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max 
>>> =
>>> #                                                               120 
>>> seconds
>>>
>>>
>>> Thanks,
>>> Amit
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/0aed6f01-5c36-46db-af27-5b6c353fd7b0n%40googlegroups.com.
