Many thanks for the explanations, Christian!

Here is some more clarification on my setup.

Indeed, I have done a "discover" for my NAS, and I have a node configured.
The other element is that the NAS is generally off, which means that if
open-iscsi tried to communicate with the NAS at startup, it could time out.

What I don't yet fully understand about iscsi is this: when I need my
NAS's LUN, since I have "discovered" it once and for all, I just run

$ sudo iscsiadm -m node -l

First, this works whether or not the 3 services are enabled at startup... I am
not sure it should, and it makes me wonder what the use of those services is
(at least in my case!).
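
(For reference, this is how I check their state; I am assuming the 3 units in
question are the ones Ubuntu ships, i.e. iscsid.service, iscsid.socket and
open-iscsi.service:)

$ systemctl status iscsid.service iscsid.socket open-iscsi.service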

But also, unlike the same open-iscsi on 16.04, the command line does NOT
return... which does not prevent the remote mount from working perfectly fine.

When I'm done with the mount, I just unmount it, press Ctrl-C on the
command line (which was not necessary in 16.04), and do

$ sudo iscsiadm -m node -u
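
(Summarised, the whole manual workflow looks like this; the device and mount
point names are only placeholders for illustration:)

$ sudo iscsiadm -m node -l        # log in; on 20.04 this does not return
$ sudo mount /dev/sdX1 /mnt/nas   # placeholder device and mount point
$ sudo umount /mnt/nas            # when done with the share
$ sudo iscsiadm -m node -u        # log out, after Ctrl-C on the login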


What I also don't get from reading the man page is the difference between
what you explain:

$ sudo iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2000-01.com.synology:diskstation.blocks, portal: 192.168.0.100,3260] (multiple)


And the command you quote from open-iscsi says:

$ sudo iscsiadm -m node --loginall=automatic
iscsiadm: No records found


Is it because my node configuration has "manual" somewhere?

$ sudo cat /etc/iscsi/nodes/iqn.2000-01.com.synology:diskstation.blocks/192.168.0.100,3260,0/default
# BEGIN RECORD 2.0-874
node.name = iqn.2000-01.com.synology:diskstation.blocks
node.tpgt = 0
node.startup = manual
....
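
(If I read the man page correctly, --loginall=automatic only considers records
whose node.startup is set to automatic, which would explain the "No records
found" above. I suppose a single node could be flipped with something like
this, using the target and portal from my record:)

$ sudo iscsiadm -m node -T iqn.2000-01.com.synology:diskstation.blocks \
       -p 192.168.0.100:3260 --op update -n node.startup -v automatic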

I didn't knowingly put it there; it is apparently the default value set when
issuing the "discovery" command:

$ sudo iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.100
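
(If I understand the defaults correctly, that "manual" is inherited from the
node.startup setting in /etc/iscsi/iscsid.conf at the time the record is
created, which is easy to check:)

$ grep node.startup /etc/iscsi/iscsid.conf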


Now, following your explanations, I tried with only the two iscsid-related
units enabled (without open-iscsi.service) and I get the same behaviour.

I guess my thinking was right. The logic you explain is that iscsi
assumes the nodes are needed for the startup of the machine (when it finds
some) and then waits for the network to become ready (at least), and
possibly longer to reach the nodes (?)

I don't think iscsi by itself takes a lot of time, or even that there is
a timeout against my NAS that is not powered on; the whole "graphical"
part of the boot is delayed simply because it has to wait for the network.
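
(One way to see what the graphical target is actually waiting on, using
systemd's standard tooling:)

$ systemd-analyze critical-chain graphical.target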


Two pieces of evidence for that.
I have a fuse mount of my own (1fichierfs:
https://gitlab.com/BylonAkila/astreamfs) that runs at session start, since you
want to run fuse mounts as the user, and not as root.
For 20.04, and also for Raspberry Pi OS which does the same "trick", I
introduced an optional "wait for network" feature in the mount itself.

Here is what I get in the log:
[1fichierfs     0.000] NOTICE: started: Monday 15 June 2020 at 22:22:36
[1fichierfs     0.000] INFO: successfuly parsed arguments.
[1fichierfs     0.000] INFO: log level is 7.
[1fichierfs     0.000] INFO: user_agent=1fichierfs/1.7.1.1
[1fichierfs     0.008] INFO: <<< API(in) (iReq:1) folder/ls.cgi POST={"folder_id":0,"files":1} name=/
[1fichierfs     8.071] NOTICE: Waited 8 seconds for network at startup.


As you can see, it waited 8 seconds after the point when "programs at session
start" are kicked off. It can also be less: sometimes 6 seconds, sometimes 2.


The second exhibit is the SVG startup graph with and without iscsi (again
with only the 2 iscsid-related units).
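
(In case anyone wants to reproduce them, such graphs can be generated with:)

$ systemd-analyze plot > startup.svg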

What you want to look at is the red line for "plymouth-quit-wait.service",
which marks, as the name says, the moment plymouth exits and the user
starts seeing the desktop.

As you can see, right after "plymouth-quit-wait" comes gdm, a predecessor
of user-1000.slice, which I guess starts what we commonly call the user
session.

You can observe in the first graph that the "network-online" target
occurs about 7 to 8 seconds after the start of user-1000.slice, which is
consistent with my 1fichierfs log.

But by the time the network is finally online, almost everything is
ready, apart from things like openvpn (which obviously needs the network
to be online to start its clients) and a few others.


In the second graph, "with iscsi", you can see that after unattended-upgrades
and the snapd service, nothing happens: we simply sleep waiting for the
network, because:
- iscsi wants it,
- and (probably) iscsi declared (I couldn't find how; one way to check is
shown below) that it is needed before starting users, so gdm +
plymouth-quit-wait are now started AFTER iscsi, which here means after the
network.
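
(One way to inspect those declared relationships, assuming Ubuntu's unit
names:)

$ systemctl show -p Wants -p After open-iscsi.service iscsid.service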


So there we also waited for about 7 to 8 seconds, but a lot of services
that, in the "normal" run, start without waiting for the network are now
"not started yet", which means we have lost even more by not taking the
opportunity to run tasks in parallel.


Of course, this is for a desktop machine with "auto-login". Without
"auto-login", you would probably barely notice a difference, unless you
are very quick at typing your password.

** Attachment added: "Startup Graph with NO iscsi"
   https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1882986/+attachment/5384159/+files/startup_noiscsi.svg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1882986

Title:
  open-iscsi is slowing down the boot process

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1882986/+subscriptions
