Re: ZPool on iSCSI storage not available after a reboot

2024-03-12 Thread Dennis Clarke

On 3/12/24 15:41, Alan Somers wrote:

On Tue, Mar 12, 2024 at 1:28 PM Dennis Clarke  wrote:

[...]

Yes, this looks exactly like an ordering problem.  zpools get imported
early in the boot process, under the assumption that most of them are
local.  Networking comes up later, under the assumption that
networking might require files that are mounted on ZFS.  For you, I
suggest setting proteus's cachefile to a non-default location and
importing it from /etc/rc.local, like this:

zpool set cachefile=/var/cache/iscsi-zpools.cache proteus

Then in /etc/rc.local:
zpool import -a -c /var/cache/iscsi-zpools.cache \
    -o cachefile=/var/cache/iscsi-zpools.cache



That seems to be perfectly reasonable.

I will give that a test right now.

I was messing with the previous zpool, proteus, and destroyed it.
Easy enough to re-create :


titan# gpart add -t freebsd-zfs /dev/da0
da0p1 added

titan#
titan# gpart show /dev/da0
=>          40  4294967216  da0  GPT  (2.0T)
            40           8       - free -  (4.0K)
            48  4294967200    1  freebsd-zfs  (2.0T)
    4294967248           8       - free -  (4.0K)

titan#
titan# zpool create -O compress=zstd -O checksum=sha512 -O atime=off \
    -o compatibility=openzfs-2.0-freebsd -o autoexpand=off -o autoreplace=on \
    -o failmode=continue -o listsnaps=off -m none proteus /dev/da0p1

titan# zpool set cachefile=/var/cache/iscsi-zpools.cache proteus
titan#
titan# ls -lapb /etc/rc.local
ls: /etc/rc.local: No such file or directory
titan# ed /etc/rc.local
/etc/rc.local: No such file or directory
a
zpool import -a -c /var/cache/iscsi-zpools.cache -o cachefile=/var/cache/iscsi-zpools.cache

.
f
/etc/rc.local
w
92
q
titan#
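For the record, the same single-line /etc/rc.local can be created without
an editor; an equivalent sketch :

printf 'zpool import -a -c /var/cache/iscsi-zpools.cache -o cachefile=/var/cache/iscsi-zpools.cache\n' > /etc/rc.local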

After reboot ... yes ... this seems to get the job done neatly !



root@titan:~ #
root@titan:~ # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
iota     7.27T   321G  6.95T        -         -     0%     4%  1.00x  ONLINE  -
proteus  1.98T  1.03M  1.98T        -         -     0%     0%  1.00x  ONLINE  -
t0        444G  40.8G   403G        -         -     4%     9%  1.00x  ONLINE  -

root@titan:~ #
root@titan:~ # uptime
 8:21PM  up 3 mins, 1 user, load averages: 0.02, 0.04, 0.01
root@titan:~ #

Looks good.

Thank you very much :)



--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken




Re: ZPool on iSCSI storage not available after a reboot

2024-03-12 Thread Alan Somers
On Tue, Mar 12, 2024 at 1:28 PM Dennis Clarke  wrote:
>
>
> This is a somewhat odd problem and may have nothing to do with iSCSI
> config at all. Suffice it to say that I have the following in the server
> /etc/rc.conf :
>
> #
> # the iSCSI initiator
> iscsid_enable="YES"
> iscsictl_enable="YES"
> iscsictl_flags="-Aa"
> #
>
> During boot I see this on the console :
>
>
> pid 55 (zpool) is attempting to use unsafe AIO requests - not logging anymore
> cannot import 'proteus': no such pool or dataset
>         Destroy and re-create the pool from
>         a backup source.
> cachefile import failed, retrying
> no pools available to import
>
>
> Sure enough the machine brings up a 10Gbit link with jumboframes *after*
> the above messages :
>
>
> ix0: flags=1008843 metric 0 mtu 9000
>         options=4e53fbb
>         ether 8c:dc:d4:ae:18:b8
>         inet 10.0.0.2 netmask 0xff00 broadcast 10.0.0.255
>         media: Ethernet autoselect (10Gbase-Twinax)
>         status: active
>         nd6 options=29
>
>
> Then a little later I see iscsi doing its goodness :
>
>
> da0 at iscsi1 bus 0 scbus8 target 0 lun 0
> da0:  Fixed Direct Access SPC-5 SCSI device
> da0: Serial Number MYSERIAL
> da0: 150.000MB/s transfers
> da0: Command Queueing enabled
> da0: 2097152MB (4294967296 512 byte sectors)
> add net ::0.0.0.0: gateway ::1
> Starting iscsid.
> Starting iscsictl.
>
> The storage exists just fine and iSCSI seems to be doing its thing :
>
> root@titan:~ #
> root@titan:~ # camcontrol devlist
>  at scbus0 target 0 lun 0 (pass0,ada0)
>   at scbus1 target 0 lun 0 (pass1,ada1)
>at scbus2 target 0 lun 0 (ses0,pass2)
>at scbus6 target 0 lun 0 (ses1,pass3)
>   at scbus7 target 0 lun 1 (pass4,nda0)
>  at scbus8 target 0 lun 0 (da0,pass5)
> root@titan:~ #
> root@titan:~ # gpart show da0
> =>          40  4294967216  da0  GPT  (2.0T)
>             40           8       - free -  (4.0K)
>             48  4294967200    1  freebsd-zfs  (2.0T)
>     4294967248           8       - free -  (4.0K)
>
> root@titan:~ #
>
> However the zpool therein is not seen :
>
> root@titan:~ #
> root@titan:~ # zpool list
> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> iota  7.27T   597G  6.68T        -         -     0%     8%  1.00x  ONLINE  -
> t0     444G  40.8G   403G        -         -     4%     9%  1.00x  ONLINE  -
> root@titan:~ #
>
>
> Of course I can manually import it :
>
>
> root@titan:~ # zpool import
>    pool: proteus
>      id: 15277728307274839698
>   state: ONLINE
>  status: Some supported features are not enabled on the pool.
>          (Note that they may be intentionally disabled if the
>          'compatibility' property is set.)
>  action: The pool can be imported using its name or numeric identifier, though
>          some features will not be available without an explicit 'zpool upgrade'.
>  config:
>
>         proteus     ONLINE
>           da0p1     ONLINE
> root@titan:~ #
>
> It seems as if something is out of sequence and the iSCSI processes
> should be happening earlier in the boot process? I really do not know,
> and I am wondering why the zpool proteus on the iSCSI storage needs to
> be manually imported after a reboot.
>
> Any insights would be wonderful.
>
> --
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken

Yes, this looks exactly like an ordering problem.  zpools get imported
early in the boot process, under the assumption that most of them are
local.  Networking comes up later, under the assumption that
networking might require files that are mounted on ZFS.  For you, I
suggest setting proteus's cachefile to a non-default location and
importing it from /etc/rc.local, like this:

zpool set cachefile=/var/cache/iscsi-zpools.cache proteus

Then in /etc/rc.local:
zpool import -a -c /var/cache/iscsi-zpools.cache \
    -o cachefile=/var/cache/iscsi-zpools.cache

-Alan
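
A slightly more defensive variant of that /etc/rc.local would wait for the
iSCSI LUN to attach before importing. A minimal sketch, assuming the disk
shows up as /dev/da0 and a 30-second ceiling is acceptable:

#!/bin/sh
# Wait for the iSCSI-backed disk to attach (device path and timeout are
# assumptions; adjust to the actual setup).
n=0
while [ ! -e /dev/da0 ] && [ "$n" -lt 30 ]; do
        sleep 1
        n=$((n + 1))
done
# Import pools recorded in the dedicated cache file, keeping the
# non-default cachefile property so the early-boot import keeps skipping them.
zpool import -a -c /var/cache/iscsi-zpools.cache \
        -o cachefile=/var/cache/iscsi-zpools.cache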



ZPool on iSCSI storage not available after a reboot

2024-03-12 Thread Dennis Clarke



This is a somewhat odd problem and may have nothing to do with iSCSI 
config at all. Suffice it to say that I have the following in the server 
/etc/rc.conf :


#
# the iSCSI initiator
iscsid_enable="YES"
iscsictl_enable="YES"
iscsictl_flags="-Aa"
#
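
For reference, the -Aa flags make iscsictl attach every target defined in
/etc/iscsi.conf; a minimal sketch of such an entry, with a placeholder
portal address and target name:

t0 {
        TargetAddress = 10.0.0.1
        TargetName    = iqn.2024-03.com.example:target0
}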

During boot I see this on the console :


pid 55 (zpool) is attempting to use unsafe AIO requests - not logging anymore
cannot import 'proteus': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
cachefile import failed, retrying
no pools available to import


Sure enough the machine brings up a 10Gbit link with jumboframes *after* 
the above messages :



ix0: flags=1008843 metric 0 mtu 9000
        options=4e53fbb
        ether 8c:dc:d4:ae:18:b8
        inet 10.0.0.2 netmask 0xff00 broadcast 10.0.0.255
        media: Ethernet autoselect (10Gbase-Twinax)
        status: active
        nd6 options=29


Then a little later I see iscsi doing its goodness :


da0 at iscsi1 bus 0 scbus8 target 0 lun 0
da0:  Fixed Direct Access SPC-5 SCSI device
da0: Serial Number MYSERIAL
da0: 150.000MB/s transfers
da0: Command Queueing enabled
da0: 2097152MB (4294967296 512 byte sectors)
add net ::0.0.0.0: gateway ::1
Starting iscsid.
Starting iscsictl.

The storage exists just fine and iSCSI seems to be doing its thing :

root@titan:~ #
root@titan:~ # camcontrol devlist
 at scbus0 target 0 lun 0 (pass0,ada0)
  at scbus1 target 0 lun 0 (pass1,ada1)
   at scbus2 target 0 lun 0 (ses0,pass2)
   at scbus6 target 0 lun 0 (ses1,pass3)
  at scbus7 target 0 lun 1 (pass4,nda0)
 at scbus8 target 0 lun 0 (da0,pass5)
root@titan:~ #
root@titan:~ # gpart show da0
=>          40  4294967216  da0  GPT  (2.0T)
            40           8       - free -  (4.0K)
            48  4294967200    1  freebsd-zfs  (2.0T)
    4294967248           8       - free -  (4.0K)

root@titan:~ #

However the zpool therein is not seen :

root@titan:~ #
root@titan:~ # zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
iota  7.27T   597G  6.68T        -         -     0%     8%  1.00x  ONLINE  -
t0     444G  40.8G   403G        -         -     4%     9%  1.00x  ONLINE  -

root@titan:~ #


Of course I can manually import it :


root@titan:~ # zpool import
   pool: proteus
     id: 15277728307274839698
  state: ONLINE
 status: Some supported features are not enabled on the pool.
         (Note that they may be intentionally disabled if the
         'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
 config:

        proteus     ONLINE
          da0p1     ONLINE
root@titan:~ #

It seems as if something is out of sequence and the iSCSI processes
should be happening earlier in the boot process? I really do not know,
and I am wondering why the zpool proteus on the iSCSI storage needs to
be manually imported after a reboot.


Any insights would be wonderful.

--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken



Re: Request for Testing: TCP RACK

2024-03-12 Thread tuexen
> On 12. Mar 2024, at 14:39, Nuno Teixeira  wrote:
> 
> Hello,
> 
> I'm curious about tcp RACK.
> 
> As I do not run any server workloads, only a laptop and an RPi4 for
> poudriere, git, browsing, some torrents, and ssh/sftp connections, will
> I see any difference using RACK?
> What tests should I do for comparison?
You might want to read the following article in the FreeBSD Journal:
https://freebsdfoundation.org/our-work/journal/browser-based-edition/rack-and-alternate-tcp-stacks-for-freebsd/

There is no specific area for testing. Just test the stack on the
systems you use, with the workloads you run, and report back any
issues...
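
For a quick start, the two commands from the original announcement can be
made persistent across reboots; a sketch, assuming the usual loader.conf
module_load and sysctl.conf conventions:

# load the RACK stack now and make it the default
kldload tcp_rack
sysctl net.inet.tcp.functions_default=rack

# persist across reboots: load the module at boot ...
echo 'tcp_rack_load="YES"' >> /boot/loader.conf
# ... and keep it as the default stack
echo 'net.inet.tcp.functions_default=rack' >> /etc/sysctl.conf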

Best regards
Michael
> 
> Thanks,
> 
 wrote (Thursday, 16/11/2023 at 15:10):
>> 
>> Dear all,
>> 
>> recently the main branch was changed to build the TCP RACK stack
>> which is a loadable kernel module, by default:
>> https://cgit.FreeBSD.org/src/commit/?id=3a338c534154164504005beb00a3c6feb03756cc
>> 
>> As discussed on the bi-weekly transport call, it would be great if people
>> could test the RACK stack for their workload. Please report any problems
>> to the net@ mailing list or open an issue in the bug tracker and drop me
>> a note via email. This includes regressions in CPU usage, regressions in
>> performance, or any other unexpected change you observe.
>> 
>> You can load the kernel module using
>> kldload tcp_rack
>> 
>> You can make the RACK stack the default stack using
>> sysctl net.inet.tcp.functions_default=rack
>> 
>> Based on the feedback we get, the default stack might be switched to the
>> RACK stack.
>> 
>> Please let me know if you have any questions.
>> 
>> Best regards
>> Michael
>> 
>> 
>> 
> 
> 
> -- 
> Nuno Teixeira
> FreeBSD Committer (ports)




Re: Request for Testing: TCP RACK

2024-03-12 Thread Pete Wright




On 3/12/24 6:39 AM, Nuno Teixeira wrote:

Hello,

I'm curious about tcp RACK.

As I do not run any server workloads, only a laptop and an RPi4 for
poudriere, git, browsing, some torrents, and ssh/sftp connections, will
I see any difference using RACK?
What tests should I do for comparison?



I found this blog post from Klara to be a good backgrounder that got me
comfortable with testing out RACK:

https://klarasystems.com/articles/using-the-freebsd-rack-tcp-stack/

I've been using it on a busy-ish server and a workstation without any
issues, but I'll defer to others as to which areas need focused testing.
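
One way to double-check which stack connections are actually using (a
sketch; recent FreeBSD sockstat has an -S flag that adds a protocol-stack
column):

# list the TCP stacks the kernel currently offers
sysctl net.inet.tcp.functions_available
# per-connection view; the STACK column should read "rack" once it is default
sockstat -46 -S -P tcp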


-pete

--
Pete Wright
p...@nomadlogic.org



Re: Request for Testing: TCP RACK

2024-03-12 Thread Nuno Teixeira
Hello,

I'm curious about tcp RACK.

As I do not run any server workloads, only a laptop and an RPi4 for
poudriere, git, browsing, some torrents, and ssh/sftp connections, will
I see any difference using RACK?
What tests should I do for comparison?

Thanks,

 wrote (Thursday, 16/11/2023 at 15:10):
>
> Dear all,
>
> recently the main branch was changed to build the TCP RACK stack
> which is a loadable kernel module, by default:
> https://cgit.FreeBSD.org/src/commit/?id=3a338c534154164504005beb00a3c6feb03756cc
>
> As discussed on the bi-weekly transport call, it would be great if people
> could test the RACK stack for their workload. Please report any problems
> to the net@ mailing list or open an issue in the bug tracker and drop me
> a note via email. This includes regressions in CPU usage, regressions in
> performance, or any other unexpected change you observe.
>
> You can load the kernel module using
> kldload tcp_rack
>
> You can make the RACK stack the default stack using
> sysctl net.inet.tcp.functions_default=rack
>
> Based on the feedback we get, the default stack might be switched to the
> RACK stack.
>
> Please let me know if you have any questions.
>
> Best regards
> Michael
>
>
>


-- 
Nuno Teixeira
FreeBSD Committer (ports)