Send netdisco-users mailing list submissions to
        netdisco-users@lists.sourceforge.net

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.sourceforge.net/lists/listinfo/netdisco-users
or, via email, send a message with subject or body 'help' to
        netdisco-users-requ...@lists.sourceforge.net

You can reach the person managing the list at
        netdisco-users-ow...@lists.sourceforge.net

When replying, please edit your Subject line so it is more specific
than "Re: Contents of netdisco-users digest..."

Today's Topics:

   1. Re: Fortinet FortiOS Shenanigans (Christian Vo)
   2. Re: Fortinet FortiOS Shenanigans (Christian Vo)
--- Begin Message ---
Thanks Christian,
I’ve made the modification to line 146 and I’m now able to collect ARP entries.
I’ll continue to test and report back; much appreciated!



From: Christian Ramseyer <ramse...@netnea.com>
Sent: Tuesday, March 18, 2025 3:45 PM
To: Christian Vo <christian...@synaptics.com>
Cc: netdisco-users@lists.sourceforge.net; Michael Butash <mich...@butash.net>
Subject: Re: [Netdisco] Fortinet FortiOS Shenanigans



On 18.03.2025 21:43, Christian Vo wrote:


[2888572] 2025-03-18 20:35:50 debug output collected: Current virtual domain: FG-traffic
[2888572] 2025-03-18 20:35:50 debug output collected: Max number of virtual domains: 10
[2888572] 2025-03-18 20:35:50 debug output collected: Virtual domains status: 2 in NAT mode, 0 in TP mode
[2888572] 2025-03-18 20:35:50 debug output collected: Virtual domain configuration: split-task
[2888572] 2025-03-18 20:35:50 debug output collected: FIPS-CC mode: disable
[2888572] 2025-03-18 20:35:50 debug output collected: Current HA mode: a-p, primary
[2888572] 2025-03-18 20:35:50 debug output collected: Cluster uptime: 1221 days, 20 hours, 22 minutes, 58 seconds
[2888572] 2025-03-18 20:35:50 debug output collected: Cluster state change time: 2022-11-26 20:06:54
[2888572] 2025-03-18 20:35:50 debug output collected: Branch point: 0418
[2888572] 2025-03-18 20:35:50 debug output collected: Release Version Information: GA
[2888572] 2025-03-18 20:35:50 debug output collected: FortiOS x86-64: Yes
[2888572] 2025-03-18 20:35:50 debug output collected: System time: Wed Mar 19 04:35:50 2025
[2888572] 2025-03-18 20:35:50 debug output collected: Last reboot reason: warm reboot
[2888572] 2025-03-18 20:35:50 debug output collected: Fortinet-UUT-FW
[2888572] 2025-03-18 20:35:50 debug output collected: get system arp
[2888572] 2025-03-18 20:35:50 debug output collected: command parse error before 'arp'
[2888572] 2025-03-18 20:35:50 debug output collected: Command fail. Return code -61
[2888572] 2025-03-18 20:35:50 debug output collected: Fortinet-UUT-FW

Well, it looks like you have a setup that is new to this script; it actually
checks for "Virtual domain configuration: multiple" to decide whether to enter
VDOM mode.
You'd need to experiment a bit. Maybe it works if you just change line 146 of
lib/App/Netdisco/SSHCollector/Platform/FortiOS.pm to


if ($_ && /^Virtual domain configuration: (multiple|split-task)$/) {


Unfortunately I'm not deep enough into Fortinet to know whether other changes
would be needed, or whether this split-task setup will work nicely with the
commands the module sends; you'll have to experiment a bit.
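
If you want to sanity-check the pattern before editing the module, a quick
standalone test along these lines should do. The first two banner lines are
copied from your debug output above; the "disable" variant is my assumption
for what a box without VDOMs prints:

#!/usr/bin/env perl
use strict;
use warnings;

# Banner lines as they appear in "get system status" output. The first
# two are from the debug output above; "disable" is an assumed variant
# for a box without VDOMs.
my @banners = (
    'Virtual domain configuration: multiple',
    'Virtual domain configuration: split-task',
    'Virtual domain configuration: disable',
);

for (@banners) {
    if ($_ && /^Virtual domain configuration: (multiple|split-task)$/) {
        print "$_ => matched '$1', would enter VDOM mode\n";
    }
    else {
        print "$_ => no match, would stay in global mode\n";
    }
}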

Cheers
Christian

--- End Message ---
--- Begin Message ---
Hi all,

I’m running into a Fortinet-related arpnip issue: it seems the CLI "get system
arp" command needs to be executed from within a specific VDOM.
An initial SSH login with the account specified in deployment.yml fails to run
that command.

I noticed we needed to do the following from the CLI:

  *   config vdom
  *   edit FG-traffic (not sure if this is a default VDOM name; I do see root
as the other option)
  *   get system arp

I do realize netdisco-sshcollector is deprecated, so I'm not entirely sure
what is needed on my end for these commands to be run properly.
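
For reference, here is a rough sketch of what driving that sequence over SSH
might look like with Expect. This is just my guess, not the actual collector
code; the host, user, prompt pattern, and the VDOM name "FG-traffic" are
assumptions, and key-based login is assumed:

#!/usr/bin/env perl
use strict;
use warnings;
use Expect;

# Assumed host and user; substitute your own. Key-based login is
# assumed so no password prompt needs handling.
my $exp = Expect->spawn('ssh', 'netdisco@192.0.2.1')
    or die "cannot spawn ssh: $!";

# Assumed prompt pattern: FortiOS prompts end in "# " or "$ ".
my $prompt = '[#$] $';

$exp->expect(10, -re => $prompt) or die "no initial prompt";
for my $cmd ('config vdom', 'edit FG-traffic', 'get system arp', 'end') {
    $exp->send("$cmd\n");
    $exp->expect(10, -re => $prompt) or die "no prompt after '$cmd'";
    # Everything before the new prompt (command echo plus output)
    # is available via before(); print the ARP table.
    print $exp->before if $cmd eq 'get system arp';
}
$exp->soft_close;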

Please help


Christian



From: Michael Butash <mich...@butash.net>
Sent: Sunday, March 16, 2025 4:23 PM
To: Christian Ramseyer <ramse...@netnea.com>
Cc: netdisco-users@lists.sourceforge.net
Subject: Re: [Netdisco] Fortinet FortiOS Shenanigans


Ahh, yes, chmod 755 on the directory worked for the arpnip! OK, I guess I'm
not a very good sysadmin for not knowing/realizing that's a security thing,
but I've never thought much about it.

Great. I was going for completeness, since I saw there's a Forti SSH collector
in there now, and neighbors aren't figuring themselves out for me at all here.
LLDP still isn't discovering neighbors between my Catalysts and FortiSwitches,
but it's a small enough network that I just added manual links for them
anyway. I may still follow up separately on that, as LLDP info is being found,
just not shown on ports or used to link topology neighbors.

Regarding having to bulkwalk_no the single host: I'd probably blame Fortinet
if it weren't for the fact that 2 out of the 3 work normally; another hub and
a branch spoke both poll just fine. Even weirder, it gets stuck on a stack OID
that most certainly isn't present on the FortiGate, repeatedly and with no
delay or waiting for a response. The FortiGate seems to ignore it as an
invalid MIB, but the requests show up as quickly as the terminal will scroll.
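
For anyone searching the archives later: what worked for me was listing the
device under bulkwalk_no in deployment.yml, something like the following (the
IP is an example; if I read the docs right, the setting also accepts a plain
true to disable bulkwalk globally):

bulkwalk_no:
  - '10.0.0.10'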

I'm curious enough now that I'm going to pull a full debug on the working and
non-working devices (both are virtual in Azure; the spoke is a physical 100F)
and compare, and maybe open a ticket too, as I've had enough weirdness on this
deployment that Fortinet support and I are old friends. I'll follow up if I
get a better answer. I'm happy to share the debugs unicast back to you if
you're interested in having a look.

Oh, a separate note on your memory leak; maybe unrelated, but I thought I'd
mention it... Another long-time customer of mine randomly started having
FortiGate issues back in October, with IPS randomly OOM-ing the box into
conservation mode. After a few months it turned out to be a bad IPS engine
update they had pushed (October 26th, if I remember right). I think it was
fixed officially in a January maintenance release, but I had to get a
specifically fixed IPS engine version for some older 7.2 boxes we didn't want
to upgrade yet. IPS stuck out as the top memory consumer out of nowhere, on
only one box of many. I was surprised it didn't get more public attention
given how many folks it must have affected; Fortinet support only seemed to
acknowledge it after a few cases over months of running through hoops.

Thanks again Christian, really appreciate the answers and your experience!

-mb


On Sun, Mar 16, 2025 at 2:42 PM Christian Ramseyer <ramse...@netnea.com> wrote:


On 16.03.2025 22:12, Michael Butash wrote:
> Ahh, so money! Yeah, once I found a reference on how to set bulkwalk_no
> (thanks, mailing list; your docs *should* really give an example of its
> use.. :)), it ran right through with no issues using getnext. Reading
> the doc, it didn't make sense where to use it; searches turn up nothing
> on how or where to declare it, and even chatgpt said I should jab it
> into the device_auth section, but otherwise... Thank you!!
>
> So now the question is why ND is misbehaving. There really is little
> configuration difference between the working fortigate and the
> non-working one, particularly nothing special around SNMP, so I have no
> idea why ND would behave like this for one fortigate and not another.
> This seems more of an ND problem than a fortigate one.

Nice, we've made some progress, excellent :)

I doubt it's ND; crappy bulkget implementations are a tradition across
many vendors. I'm pretty sure you'll get the same result when pointing
"snmpbulkwalk" at the same ifStackStatus subtree.

>
> And yes, I'm a dork re: discover vs arpnip, I was doing discovery.
> Sorry for barking up the wrong tree.
>
> Still, though, it seems to try, but fails weirdly with an error about
> .libnet-openssh-perl not being secure. I wasn't really sure what part
> it was considering "not secure"; chatgpt seemed to think it was the
> directory, but that's chmod 700 to netdisco only, and I'm not sure how
> much more secure it wants it. I can otherwise ssh to the device
> normally with that account from the server.
>
> [2179658] 2025-03-16 20:41:55 error  [10.0.0.10] ssh connection error [ctl_dir /opt/netdisco/.libnet-openssh-perl/ is not secure]
> [2179658] 2025-03-16 20:41:55 debug ⬅ (defer) arpnip failed: could not SSH connect to 10.0.0.10
>

It's probably the /opt/netdisco directory that's still group-writable or
worse; OpenSSH doesn't like that. chmod 755 (or 700, 750) on it should fix it.
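
I.e., something along these lines (the path matches a default /opt/netdisco
install; adjust to yours):

ls -ld /opt/netdisco
chmod 755 /opt/netdisco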

Cheers
Christian

--- End Message ---
_______________________________________________
Netdisco mailing list - Digest Mode
netdisco-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/netdisco-users
