Re: recent Trixie upgrade removed nfs client

2024-05-01 Thread Brad Rogers
On Tue, 30 Apr 2024 15:48:09 -0400
Gary Dale  wrote:

Hello Gary,

>Yes but: both gdb and nfs-client installed fine. Moreover, the 
>nfs-client doesn't appear to be a dependency of any of the massive load 
>of files updated lately.  The gdb package however is but for some

This transition is ongoing; just because a package is uninstallable
today doesn't mean the same will be true tomorrow.  Sometimes,
dependencies transfer in the wrong order.

Minor point: nfs-client doesn't appear to exist in Debian.  At least,
not according to my search of https://packages.debian.org  The closest
packages I could find are nfs-common and nbd-client.

>Shouldn't autoremove only offer to remove packages that used to be a 
>dependency but aren't currently (i.e. their status has changed)?

There are lots of inter-dependent relationships (that I don't even
pretend to understand).  It's not as simple as 'X doesn't depend on Y, so
it should not be removed'.  There's (nearly) always something else going
on at times like this.  For example, it wasn't until today that I could
get libllvmt64 to install and replace libllvm.  For several days,
attempting the replacement would have ended up with broken packages, so
the upgrade was not allowed.

Sometimes, only upgrading a subset of the packages offered can help;
apt isn't perfect at resolving all the issues.  Assuming the issues are
solvable in the first place.  This is not a criticism of apt, because
aptitude and synaptic can have difficulties, too.  Each tool has its
foibles.

On top of all that, I've found quite a few library packages don't
automatically migrate to their t64 counterpart.  Whether that's by
accident or design, IDK.  What I do know is that the act of installing
(manually) the t64 version will force the removal of the old version.
There's usually a 'complaint' about such an action (warning about
removing an in use library), but it proceeds without problems.
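
As an illustration, a minimal sketch of that manual migration; the package
names here are placeholders, not real packages:

apt-cache rdepends libexample1   # check what still depends on the old library
apt install libexample1t64       # the t64 build replaces libexample1; apt
                                 # warns about the in-use library, then proceeds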

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
Two sides to every story
Public Image - Public Image Ltd




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Gary Dale

On 2024-04-30 10:58, Brad Rogers wrote:

> On Tue, 30 Apr 2024 09:51:01 -0400
> Gary Dale  wrote:
> 
> Hello Gary,
> 
> > Not looking for a solution. Just reporting a spate of oddities I've
> > encountered lately.
> 
> As Erwan says, this is 'normal'.  Especially ATM due to the t64
> transition.
> 
> As you've found out, paying attention to removals is a Good Idea(tm).
> Sometimes those removals cannot be avoided.  Of course, removal of
> 'library' to be replaced with 'libraryt64' is absolutely fine.
> 
> If the upgrade wants to remove (say) half of the base packages of KDE,
> waiting a few days would be prudent.   :-D
> 
> You may also notice quite a few packages being reported as "local or
> obsolete".  This is expected as certain packages have had to be removed
> from testing to enable a smoother flow of the transition.  Many will
> return in due course.  I do know of one exception, however; deborphan
> has been removed from testing and, as things stand, it looks like it
> might be permanent - I fully understand why, but I shall mourn its
> passing, as I find it to be quite handy for weeding out cruft.

Yes but: both gdb and nfs-client installed fine. Moreover, the 
nfs-client doesn't appear to be a dependency of any of the massive load 
of files updated lately.  The gdb package, however, is a dependency, but 
for some reason apt didn't want to install it.


The point is that apt didn't handle the situation reasonably. If it 
wanted a package that was installable, should it not have installed it? 
And while nfs-client isn't a dependency of other installed packages, why 
should autoremove remove it? Its status of not being a dependency 
didn't change. There are lots of packages that aren't depended on by 
other packages that I have installed (e.g. every end-user application). 
Shouldn't autoremove only offer to remove packages that used to be a 
dependency but aren't currently (i.e. their status has changed)?
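
For what it's worth, autoremove keys off apt's auto/manual install flag
rather than a change in status: any package marked "automatically
installed" whose reverse dependencies have gone away is a candidate. A
minimal sketch of inspecting and pinning that state (using nfs-common,
the real package name):

apt-mark showauto | grep -i nfs   # NFS-related packages marked as auto-installed
apt-mark manual nfs-common        # mark as manual so autoremove leaves it alone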




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread songbird
Gary Dale wrote:

> I'm running Trixie on an AMD64 system.
>
> Yesterday after doing my usual morning full-upgrade, I rebooted because 
> there were a lot of Plasma-related updates. When I logged in, I found I 
> wasn't connected to my file server shares. I eventually traced this down 
> to a lack of nfs software on my workstation. Reinstalling nfs-client 
> fixed this.
>
> I guess I need to pay closer attention to what autoremove tells me it's 
> going to remove, but I'm confused as to why it would remove nfs-client & 
> related packages.
>
> This follows a couple of previous full-upgrades that were having 
> problems. The first, a few days ago, was stopped by gdb not being 
> available. However, it installed fine manually (apt install gdb). I 
> don't see why apt full-upgrade didn't do this automatically as a 
> dependency for whatever package needed it.
>
> The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I 
> can see this as legitimate because it looks like you don't need both so 
> the package manager lets you decide which you want.
>
> Not looking for a solution. Just reporting a spate of oddities I've 
> encountered lately.

  the on-going time_t transitions may be causing some packages
to be removed for a while as dependencies get adjusted.

  i've currently not been doing full upgrades because there are
many Mate packages that would be removed.


  songbird



Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Brad Rogers
On Tue, 30 Apr 2024 09:51:01 -0400
Gary Dale  wrote:

Hello Gary,

>Not looking for a solution. Just reporting a spate of oddities I've 
>encountered lately.

As Erwan says, this is 'normal'.  Especially ATM due to the t64
transition.

As you've found out, paying attention to removals is a Good Idea(tm).
Sometimes those removals cannot be avoided.  Of course, removal of 
'library' to be replaced with 'libraryt64' is absolutely fine. 

If the upgrade wants to remove (say) half of the base packages of KDE,
waiting a few days would be prudent.   :-D

You may also notice quite a few packages being reported as "local or
obsolete".  This is expected as certain packages have had to be removed
from testing to enable a smoother flow of the transition.  Many will
return in due course.  I do know of one exception, however;  deborphan
has been removed from testing and, as things stand, it looks like it
might be permanent -  I fully understand why, but I shall mourn its
passing, as I find it to be quite handy for weeding out cruft.

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
He looked the wrong way at a policeman
I Predict A Riot - Kaiser Chiefs




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Erwan David
On Tue, Apr 30, 2024 at 03:51:01PM CEST, Gary Dale  
said:
> I'm running Trixie on an AMD64 system.
> 
> Yesterday after doing my usual morning full-upgrade, I rebooted because
> there were a lot of Plasma-related updates. When I logged in, I found I
> wasn't connected to my file server shares. I eventually traced this down to
> a lack of nfs software on my workstation. Reinstalling nfs-client fixed
> this.
> 
> I guess I need to pay closer attention to what autoremove tells me it's
> going to remove, but I'm confused as to why it would remove nfs-client &
> related packages.
> 
> This follows a couple of previous full-upgrades that were having problems.
> The first, a few days ago, was stopped by gdb not being available. However,
> it installed fine manually (apt install gdb). I don't see why apt
> full-upgrade didn't do this automatically as a dependency for whatever
> package needed it.
> 
> The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I can
> see this as legitimate because it looks like you don't need both so the
> package manager lets you decide which you want.
> 
> Not looking for a solution. Just reporting a spate of oddities I've
> encountered lately.
> 

Trixie is undergoing major transitions. You must be careful and check what each 
upgrade wants to uninstall, but that is normal for a "testing" distribution.

In those cases I use the curses interface of aptitude to check which upgrades 
would remove packages that I want, and limit my upgrades to the ones that do 
not break my system. Usually some days later it is OK 
(sometimes weeks for major transitions).
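
A minimal sketch of that kind of selective upgrade with plain apt (the
package name is only an example):

apt-mark hold plasma-desktop      # stop apt upgrading or removing it for now
apt upgrade --without-new-pkgs    # upgrade only what needs no new packages
apt-mark unhold plasma-desktop    # release once the transition settles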

-- 
Erwan



recent Trixie upgrade removed nfs client

2024-04-30 Thread Gary Dale

I'm running Trixie on an AMD64 system.

Yesterday after doing my usual morning full-upgrade, I rebooted because 
there were a lot of Plasma-related updates. When I logged in, I found I 
wasn't connected to my file server shares. I eventually traced this down 
to a lack of nfs software on my workstation. Reinstalling nfs-client 
fixed this.


I guess I need to pay closer attention to what autoremove tells me it's 
going to remove, but I'm confused as to why it would remove nfs-client & 
related packages.


This follows a couple of previous full-upgrades that were having 
problems. The first, a few days ago, was stopped by gdb not being 
available. However, it installed fine manually (apt install gdb). I 
don't see why apt full-upgrade didn't do this automatically as a 
dependency for whatever package needed it.


The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I 
can see this as legitimate because it looks like you don't need both so 
the package manager lets you decide which you want.


Not looking for a solution. Just reporting a spate of oddities I've 
encountered lately.




Uninterruptible sleep apache process while accessing nfs on debian 12 bookworm

2024-04-29 Thread El Mahdi Mouslih
Hi

We recently migrated to a new NFS server running on Debian 12 bookworm.

On the client, Apache processes started randomly switching to D state.

In the Apache fullstatus output, process 93661 took 10786 sec:
=

4-1 93661 1598/ W 15.92 10786 0 2367404 0.0 71.45 142.44 172.20.1.47 http/1.1 
sisca.groupe-mfc.fr:80 POST 
/deverrouille-fiche-ajax.php?sTable=prospects&iCode=243239



ps aux ==> Process 93661 in uninterruptible sleep

root@hexaom-v2-vm-prod-front2:~# while true; do date; ps auxf | awk '{if($8=="D") print $0;}'; sleep 1; done
Fri 26 Apr 2024 12:37:59 PM CEST
www-data   93661  0.1  1.4 315100 120468 ?  D  08:45  0:14  \_ /usr/sbin/apache2 -k start
www-data  119374  0.2  0.0      0      0 ?  D  11:33  0:10  \_ [apache2]
www-data  127425  0.1  0.8 214520  68308 ?  D  12:27  0:00  \_ /usr/sbin/apache2 -k start



process stack:  (can't attach using gdb, gcore, etc.)
===
root@hexaom-v2-vm-prod-front2:~# cat /proc/93661/stack
[<0>] wait_on_commit+0x71/0xb0 [nfs]
[<0>] __nfs_commit_inode+0x131/0x180 [nfs]
[<0>] nfs_wb_all+0xb4/0x100 [nfs]
[<0>] nfs4_file_flush+0x6f/0xa0 [nfsv4]
[<0>] filp_close+0x2f/0x70
[<0>] __x64_sys_close+0x1e/0x60
[<0>] do_syscall_64+0x30/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x62/0xc7
=
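
For bulk collection, a small sketch that dumps the same kernel stack for
every D-state process (run as root):

for p in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
    echo "== PID $p =="
    cat /proc/$p/stack
done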



On the client (Debian 11):
=
 rpcdebug -m nfs -s all
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.693854] decode_attr_fs_locations: fs_locations done, error = 0
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.693871] nfs41_sequence_process: Error 0 free the slot
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.694161] nfs41_sequence_process: Error 0 free the slot
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.694301] nfs41_sequence_process: Error 0 free the slot
=

No errors on the nfsd server even with debug all: rpcdebug -m nfsd -s all

Information :  on client and server
***



client :


root@hexaom-v2-vm-prod-front2:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


root@hexaom-v2-vm-prod-front2:~# uname -a
Linux hexaom-v2-vm-prod-front2 5.10.0-28-amd64 #1 SMP Debian 5.10.209-2 
(2024-01-31) x86_64 GNU/Linux


root@hexaom-v2-vm-prod-front2:~# dpkg -l | grep -i nfs
ii  liblockfile1:amd64   1.17-1+b1   amd64   NFS-safe locking library
ii  libnfsidmap2:amd64   0.25-6      amd64   NFS idmapping library
ii  nfs-common           1:1.3.4-6   amd64   NFS support files common to client and server


fstab:
192.20.2.30:/NFS/sessions_v2 /srv/sessions nfs 
defaults,rw,relatime,vers=4.1,hard,timeo=100,retrans=4,_netdev 0 0
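
To experiment with those options without touching fstab, the export can be
mounted by hand on a scratch mountpoint; a sketch using the same server
path as above (run as root):

mkdir -p /mnt/nfstest
mount -t nfs -o rw,vers=4.1,hard,timeo=100,retrans=4 192.20.2.30:/NFS/sessions_v2 /mnt/nfstest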


=




Server:
=
root@SERVSESSION01:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


Linux SERVSESSION01 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 
(2024-02-01) x86_64 GNU/Linux

*
root@SERVSESSION01:~# dpkg -l | grep nfs
ii  libnfsidmap1:amd64   1:2.6.2-4    amd64   NFS idmapping library
ii  nfs-common           1:2.6.2-4    amd64   NFS support files common to client and server
ii  nfs-kernel-server    1:2.6.2-4    amd64   support for NFS kernel server
root@SERVSESSION01:~# dpkg -l | grep rpc
ii  libtirpc-common      1.3.3+ds-1   all     transport-independent RPC library - common files
ii  libtirpc3:amd64      1.3.3+ds-1   amd64   transport-independent RPC library
ii  rpcbind              1.2.6-6+b1   amd64   converts RPC program numbers into universal addresses
root@SERVSESSION01:~#
**


*
root@SERVSESSION01:~# cat /etc/default/nfs-common
# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".

# Do you want to start the statd daemon?

Re: very poor nfs performance

2024-03-09 Thread Ralph Aichinger
On Sat, 2024-03-09 at 13:54 +0100, hw wrote:
> 
> NFS can be hard on network card drivers
> IPv6 may be faster than IPv4
> the network cable might suck
> the switch might suck or block stuff

As iperf and other network protocols were confirmed to be fast by the
OP, it is very unlikely that this is a straight network problem. Yes,
these effects do exist occasionally (weird interactions of higher-level
protocols and the low-level stuff), but they are very rare. A cable so
specifically broken that it slows down NFS but not scp might exist, but
it is very unlikely.

/ralph




Re: very poor nfs performance

2024-03-09 Thread hw
On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> Hello guys,
> 
> I hope someone can help me with my problem.
> Our NFS performance is very bad, like ~20MB/s; the mount options look like this:

Reading or writing, or both?

Try testing with files on a different volume.

> rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none

try IPv6

> [...]
> Only one nfs client(debian 12) is connected via 10G,

try a good 1GB network card

> since we host also database files on the nfs share,

bad idea

> 'sync'-mountoption is important (more or less), but it should still
> be much faster than 20MB/s

I wouldn't dare to go without sync other than for making backups
maybe.  Without sync, you probably need to test with a larger file.

> so can somebody tell me whats wrong or what should I change to speed
> that up?

Guesswork:

NFS can be hard on network card drivers
IPv6 may be faster than IPv4
the network cable might suck
the switch might suck or block stuff
ZFS might suck in combination with NFS
network cards might happen to be in disadvantageous slots
network cards can get too hot
try Fedora instead of Debian (boot a live system on the server,
configure NFS and see what happens when serving files from BTRFS)
do you see any unusual system load while transferring files?
do you need to run more NFS processes (increase their limit)
are you running irqbalance?
are you running numad if you're on a numa machine?
what CPU governors are you using?
do the i/o schedulers interfere?
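
A few of the guesses above can be checked quickly from a shell; a rough
sketch (device and governor paths vary per machine):

systemctl is-active irqbalance
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/block/sda/queue/scheduler    # current i/o scheduler shown in [brackets]
vmstat 1                              # watch load/iowait during a transfer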





Re: very poor nfs performance

2024-03-08 Thread Dan Ritter
Mike Kupfer wrote: 
> Stefan K wrote:
> 
> > > Can you partition the files into 2 different shares?  Put the database
> > > files in one share and access them using "sync", and put the rest of the
> > > files in a different share, with no "sync"?
> > this could be a solution, but I want to understand why it is so slow and
> > fix that
> 
> It's inherent in how sync works.  Over-the-wire calls are expensive.
> NFS implementations try to get acceptable performance by extensive
> caching, using asynchronous operations when possible, and by issuing a
> smaller number of large RPCs (rather than a larger number of small
> RPCs).  The sync option defeats all of those mechanisms.

It is also the case that databases absolutely need sync to work
properly, so running them over NFS is a bad idea. At most, a
sqlite DB can be OK -- because sqlite is single user.

-dsr-



Re: Aw: Re: Re: very poor nfs performance

2024-03-08 Thread Mike Kupfer
Stefan K wrote:

> > Can you partition the files into 2 different shares?  Put the database
> > files in one share and access them using "sync", and put the rest of the
> > files in a different share, with no "sync"?
> this could be a solution, but I want to understand why it is so slow and fix
> that

It's inherent in how sync works.  Over-the-wire calls are expensive.
NFS implementations try to get acceptable performance by extensive
caching, using asynchronous operations when possible, and by issuing a
smaller number of large RPCs (rather than a larger number of small
RPCs).  The sync option defeats all of those mechanisms.
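
The effect is easy to demonstrate with a crude experiment, assuming an NFS
mount at /mnt/nfs (the path is only an example):

# one flush at the end: the client can batch and pipeline large writes
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=512 conv=fdatasync
# synchronous semantics: each write waits on a server round trip
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=512 oflag=sync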

mike



Re: very poor nfs performance

2024-03-08 Thread debian-user
Stefan K  wrote:
> > Run the database on the machine that stores the files and perform
> > database access remotely over the net instead. ?  
> 
> yes, but this doesn't resolve the performance issue with nfs

But it removes your issue that forces you to use the sync option.



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Run the database on the machine that stores the files and perform
> database access remotely over the net instead. ?

yes, but this doesn't resolve the performance issue with nfs



Aw: Re: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Can you partition the files into 2 different shares?  Put the database
> files in one share and access them using "sync", and put the rest of the
> files in a different share, with no "sync"?
this could be a solution, but I want to understand why it is so slow and fix
that



Re: very poor nfs performance

2024-03-08 Thread debian-user
Stefan K  wrote:
> > You could try removing the "sync" option, just as an experiment, to
> > see how much it is contributing to the slowdown.  
> 
> If I don't use sync I get around 300MB/s (tested with a 600MB file) ..
> that's OK (far from great), but since there are database files on the
> NFS share it can cause file/database corruption, so we would like to
> use the sync option

Run the database on the machine that stores the files and perform
database access remotely over the net instead. ?

> best regards
> Stefan



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could try removing the "sync" option, just as an experiment, to see
> how much it is contributing to the slowdown.

If I don't use sync I get around 300MB/s (tested with a 600MB file) .. that's OK 
(far from great), but since there are database files on the NFS share it can 
cause file/database corruption, so we would like to use the sync option

best regards
Stefan



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could test with noatime if you don't need access times.
> And perhaps with lazytime instead of relatime.
Mount options are:
type zfs (rw,xattr,noacl)
I get your point, but when you look at my fio output, the performance is quite 
good

> Could you provide us
> nfsstat -v
server:
nfsstat -v
Server packet stats:
packets      udp          tcp          tcpconn
509979521    0            510004972    2

Server rpc stats:
calls        badcalls     badfmt       badauth      badclnt
509971853    0            0            0            0

Server reply cache:
hits         misses       nocache
0            0            509980028

Server io stats:
read         write
1587531840   3079615002

Server read ahead cache:
size  0-10%  10-20%  20-30%  30-40%  40-50%  50-60%  60-70%  70-80%  80-90%  90-100%  notfound
0     0      0       0       0       0       0       0       0       0       0        0

Server file handle cache:
lookup  anon  ncachedir  ncachenondir  stale
0       0     0          0             0

Server nfs v4:
null          compound
2          0% 509976662 99%

Server nfs v4 operations:
op0-unused    op1-unused    op2-future    access        close
0          0% 0          0% 0          0% 5015903    0% 3091693    0%
commit        create        delegpurge    delegreturn   getattr
314634     0% 149836     0% 0          0% 1615740    0% 390748077 20%
getfh         link          lock          lockt         locku
2573550    0% 0          0% 17         0% 0          0% 15         0%
lookup        lookup_root   nverify       open          openattr
3931149    0% 0          0% 0          0% 3131045    0% 0          0%
open_conf     open_dgrd     putfh         putpubfh      putrootfh
0          0% 3          0% 510522216 26% 0          0% 4          0%
read          readdir       readlink      remove        rename
59976532   3% 421791     0% 0          0% 429965     0% 244564     0%
renew         restorefh     savefh        secinfo       setattr
0          0% 0          0% 542231     0% 0          0% 845324     0%
setcltid      setcltidconf  verify        write         rellockowner
0          0% 0          0% 0          0% 404569758 21% 0          0%
bc_ctl        bind_conn     exchange_id   create_ses    destroy_ses
0          0% 0          0% 4          0% 2          0% 1          0%
free_stateid  getdirdeleg   getdevinfo    getdevlist    layoutcommit
15         0% 0          0% 0          0% 0          0% 0          0%
layoutget     layoutreturn  secinfononam  sequence      set_ssv
0          0% 0          0% 2          0% 509980018 26% 0          0%
test_stateid  want_deleg    destroy_clid  reclaim_comp  allocate
10         0% 0          0% 1          0% 2          0% 164        0%
copy          copy_notify   deallocate    ioadvise      layouterror
297667     0% 0          0% 0          0% 0          0% 0          0%
layoutstats   offloadcancel offloadstatus readplus      seek
0          0% 0          0% 0          0% 0          0% 0          0%
write_same
0          0%


client:
nfsstat -v
Client packet stats:
packets      udp          tcp          tcpconn
0            0            0            0

Client rpc stats:
calls        retrans      authrefrsh
37415730     0            37425651

Client nfs v4:
null          read          write         commit        open
1          0% 4107833   10% 30388717  81% 2516       0% 55493      0%
open_conf     open_noat     open_dgrd     close         setattr
0          0% 194252     0% 0          0% 247380     0% 75890      0%
fsinfo        renew         setclntid     confirm       lock
459        0% 0          0% 0          0% 0          0% 4          0%
lockt         locku         access        getattr       lookup
0          0% 2          0% 131533     0% 1497029    4% 318056     0%
lookup_root   remove        rename        link          symlink
1          0% 31656      0% 15877      0% 0          0% 0          0%
create        pathconf      statfs        readlink      readdir
7019       0% 458        0% 170522     0% 0          0% 30007      0%
server_caps   delegreturn   getacl        setacl        fs_locations
917        0% 118109     0% 0          0% 0          0% 0          0%
rel_lkowner   secinfo       fsid_present  exchange_id   create_session
0          0% 0          0% 0          0% 2          0% 1          0%
destroy_session  sequence   get_lease_time  reclaim_comp  layoutget
0          0% 0

Re: very poor nfs performance

2024-03-08 Thread Michel Verdier
On 2024-03-07, Stefan K wrote:

> I hope someone can help me with my problem.
> Our NFS performance is very bad, like ~20MB/s; the mount options look like this:
> rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none

What are the mount options for your zfs filesystem?

You could test with noatime if you don't need access times.
And perhaps with lazytime instead of relatime.

Could you provide us
nfsstat -v



Re: very poor nfs performance

2024-03-07 Thread Mike Kupfer
Stefan K wrote:

> 'sync'-mountoption is important (more or less), but it should still be
> much faster than 20MB/s

I don't know if "sync" could be entirely responsible for such a
slowdown, but it's likely at least contributing, particularly if the
application is doing small I/Os at the system call level.

You could try removing the "sync" option, just as an experiment, to see
how much it is contributing to the slowdown.
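
A sketch of that experiment, with placeholder names for the server and
mountpoint; not something to do while the databases are live:

umount /mnt/share
mount -t nfs -o rw,vers=4.2 server:/export /mnt/share   # same mount, minus 'sync'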

mike



Aw: Re: very poor nfs performance

2024-03-07 Thread Stefan K
Hi Ralph,

I just tested it with scp and I got 262MB/s
So it's not a network issue, just an NFS issue, somehow.

best regards
Stefan

> Gesendet: Donnerstag, 07. März 2024 um 11:22 Uhr
> Von: "Ralph Aichinger" 
> An: debian-user@lists.debian.org
> Betreff: Re: very poor nfs performance
>
> On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> > Hello guys,
> > 
> > I hope someone can help me with my problem.
> > Our NFS performance is very bad, like ~20MB/s; the mount options look
> > like this:
> 
> Are both sides agreeing on MTU (using Jumbo frames or not)?
> 
> Have you tested the network with iperf (or simiar), does this happen
> only with NFS or also with other network traffic?
> 
> /ralph
> 
>



Re: very poor nfs performance

2024-03-07 Thread Ralph Aichinger
On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> Hello guys,
> 
> I hope someone can help me with my problem.
> Our NFS performance is very bad, like ~20MB/s; the mount options look
> like this:

Are both sides agreeing on MTU (using Jumbo frames or not)?

Have you tested the network with iperf (or simiar), does this happen
only with NFS or also with other network traffic?

/ralph



very poor nfs performance

2024-03-07 Thread Stefan K
Hello guys,

I hope someone can help me with my problem.
Our NFS performance is very bad, like ~20MB/s; the mount options look like this:
rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none
The NFS server (Debian 12) is a ZFS fileserver with a 40G network connection, 
and its local read/write performance is good:
fio --rw=readwrite  --rwmixread=70 --name=test --size=50G --direct=1 
--bsrange=4k-1M --numjobs=30 --group_reporting 
--filename=/zfs_storage/test/asdfg --runtime=600 --time_based
   READ: bw=11.1GiB/s (11.9GB/s), 11.1GiB/s-11.1GiB/s (11.9GB/s-11.9GB/s), 
io=6665GiB (7156GB), run=64-64msec
  WRITE: bw=4875MiB/s (5112MB/s), 4875MiB/s-4875MiB/s (5112MB/s-5112MB/s), 
io=2856GiB (3067GB), run=64-64msec

Only one NFS client (Debian 12) is connected, via 10G. Since we also host 
database files on the NFS share, the 'sync' mount option is important (more or 
less), but it should still be much faster than 20MB/s.

so can somebody tell me what's wrong or what I should change to speed that up?

thanks in advance.
best regards
Stefan



Re: usrmerge on root NFS will not be run automatically

2024-02-13 Thread fabjunkm...@gmail.com
Very unimpressed with the so-called "fix" for #842145 of just blocking the
script from running on an NFS mount rather than fixing the script to work
properly with NFS.

The problem with the script is that it does not ignore the .nfs*
files. An explanation of these files is available here:

https://unix.stackexchange.com/questions/348315/nfs-mount-device-or-resource-busy
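
A quick way to check for those files before attempting the conversion (a
sketch; -xdev keeps find on the root filesystem):

find / -xdev -name '.nfs*' -ls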

I am not a programmer, so I will not share my crappy attempt at getting
the script to work. I will describe a workaround I did, and maybe that
will help someone come up with their own workaround for a system with an
NFS root (where the NFS server is not running Linux). Or, even better,
maybe the package maintainer might adjust the script within usrmerge to
work properly with NFS using these ideas.

- Starting from bullseye (I rolled back to a snapshot pre-install of
usrmerge), downloaded src of usrmerge to get access to the
convert-usrmerge script.

- modify script to try and get it to ignore/avoid changing any objects
with ".nfs000" in its name

- run the script so it hopefully completes successfully (In my case it
did not fully complete. It made most of the changes but I still had
some directories in root such as /lib - probably from mistakes in my
code changes)

- rebooted nfs client to clear open file handles on nfs which would
remove .nfs000* files

- ran the original unmodified script which this time completed
successfully including converting eg /lib to a symlink. I think this
worked as most of the files had moved and symlinks created so there
was not much left for it to do except sort out the remaining few
moves/symlinks.

- installed usrmerge package which this time completed (it detected
the merge completed and did not complain about nfs)

From there I expect it should be safe to upgrade to bookworm without
usrmerge breaking the upgrade (not tested yet).

good luck



/usr on NFS (was: Re: disable auto-linking of /bin -> /usr/bin/)

2024-01-10 Thread Andy Smith
Hello,

On Wed, Jan 10, 2024 at 10:41:05AM -0800, Mike Castle wrote:
> I did that for years.
> 
> Then again, when I started doing that, I was using PLIP over a
> null-printer cable.  But even after I could afford larger harddrives
> (so I had room to install /usr), and Ethernet cards (and later a hub),
> I still ran /usr over NFS.

You can still do it if you want, as long as your initramfs mounts
/usr from NFS, which I'm pretty sure it will without any difficulty
if you have the correct entry in /etc/fstab. I don't think anything
has gone out of its way to break that use; it's just that it's been
given up on, and I don't blame Debian for that, since it would mean
lots of work to bend upstream authors to a use case that they have
no interest in.
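
A hedged sketch of what such an fstab entry might look like (server name
and export path are placeholders):

nfsserver:/export/usr  /usr  nfs4  ro,vers=4.2,_netdev  0  0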

Time moved on and the way to do "immutable OS" evolved.

Just a couple of days ago I retired a Debian machine that had been
running constantly for 18½ years from the mostly-read-only 512M
CompactFlash boot device it came with. 😀

https://social.bitfolk.com/@grifferz/111704438510674644

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: NFS: IPV6

2024-01-06 Thread Leandro Noferini
Pocket  writes:

[...]

> I am in the process of re-configuring NFS for V4 only.

Could it be there is some misunderstanding?

IPv4 and IPv6 are quite different concepts from NFSv4: I think NFSv4
works on either IPv4 or IPv6.

--
Ciao
leandro



Re: NFS: IPV6

2024-01-05 Thread Andy Smith
Hello,

On Fri, Jan 05, 2024 at 07:04:21AM -0500, Pocket wrote:
> I have this in the exports, ipv4 works
> 
> /srv/Multimedia 192.168.1.0/24(rw,no_root_squash,subtree_check)
> /srv/Other 192.168.1.0/24(rw,no_root_squash,subtree_check)
> #/home 2002:474f:e945:0:0:0:0:0/64(rw,no_root_squash,subtree_check)

This syntax in /etc/exports works for me:

/srv 2001:0db8:a:b::/64(rw,fsid=0)

And then on a client you have to surround the server IP with []
e.g.:

# mount -t nfs4 '[2001:0db8:a:b::1]':/srv /mnt

I have not tested what degree of quoting is required in /etc/fstab,
i.e. may or may not work without the single quotes.
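
For what it's worth, the equivalent fstab line would presumably look like
this (untested, per the above, and using the same documentation prefix):

[2001:0db8:a:b::1]:/srv  /mnt  nfs4  defaults  0  0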

How's your migration away from Debian and all is evils going, by
the way?

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: NFS: IPV6

2024-01-05 Thread Greg Wooledge
On Fri, Jan 05, 2024 at 09:54:54AM +, debian-u...@howorth.org.uk wrote:
> plus FWIW...
> 
> https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html
> 
> "NFS software and Remote Procedure Call (RPC) software support IPv6 in a
> seamless manner. Existing commands that are related to NFS services
> have not changed. Most RPC applications also run on IPv6 without any
> change. Some advanced RPC applications with transport knowledge might
> require updates."

I wouldn't assume Oracle (Solaris?) documentation is authoritative for
Debian systems.  The NFS implementations are probably quite different.



Re: NFS: IPV6

2024-01-05 Thread Pocket

On 1/5/24 04:54, debian-u...@howorth.org.uk wrote:

> Marco Moock  wrote:
> > Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:
> > 
> > > Where can I find information on how to configure NFS to use ipv6
> > > addresses both server and client.
> > 
> > Does IPv6 work basically on your machine, including name resolution?
> > 
> > Does it work if you enter the address directly?
> > 
> > https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/
> > 
> > How does your fstab look like?
> 
> plus FWIW...
> 
> https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html
> 
> "NFS software and Remote Procedure Call (RPC) software support IPv6 in a
> seamless manner. Existing commands that are related to NFS services
> have not changed. Most RPC applications also run on IPv6 without any
> change. Some advanced RPC applications with transport knowledge might
> require updates."




According to the NFSServerSetup page on the Debian wiki:

Additionally, rpcbind is not strictly needed by NFSv4 but will be
started as a prerequisite by nfs-server.service. This can be
prevented by masking rpcbind.service and rpcbind.socket.
sudo systemctl mask rpcbind.service
sudo systemctl mask rpcbind.socket

I am going to do that to use NFSv4 only.

I believe that my issue is in the /etc/exports file but I don't know 
for sure.

I have this in the exports, ipv4 works

/srv/Multimedia 192.168.1.0/24(rw,no_root_squash,subtree_check)
/srv/Other 192.168.1.0/24(rw,no_root_squash,subtree_check)
#/home 2002:474f:e945:0:0:0:0:0/64(rw,no_root_squash,subtree_check)

I am looking for an example

I have commented out the ipv6 line for now because I want to get NFSv4-only 
working first; after that I want to get ipv6 mounts working and change the 
ipv4 mounts to use ipv6.
/srv/Multimedia and /srv/Other are root mounts and there isn't any bind 
mounts
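
For comparison, an IPv6 export would presumably mirror the IPv4 lines; a
sketch only, reusing the prefix from the commented-out /home entry:

/srv/Multimedia 2002:474f:e945::/64(rw,no_root_squash,subtree_check)
/srv/Other 2002:474f:e945::/64(rw,no_root_squash,subtree_check)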



--
Hindi madali ang maging ako




Re: NFS: IPV6

2024-01-05 Thread Pocket

On 1/5/24 03:35, Marco Moock wrote:

> Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:
> 
> > Where can I find information on how to configure NFS to use ipv6
> > addresses both server and client.
> 
> Does IPv6 work basically on your machine, including name resolution?

Yes, I have bind running and ssh to the host is working.

> Does it work if you enter the address directly?
> 
> https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/
> 
> How does your fstab look like?




I followed some info that I found on the internet and it didn't work.

I am in the process of re-configuring NFS for V4 only.

I should have that done shortly and will then try the NFS mounts again.




--
Hindi madali ang maging ako




Re: NFS: IPV6

2024-01-05 Thread debian-user
Marco Moock  wrote:
> Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:
> 
> > Where can I find information on how to configure NFS to use ipv6 
> > addresses both server and client.  
> 
> Does IPv6 work basically on your machine, including name resolution?
> 
> Does it work if you enter the address directly?
> 
> https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/
> 
> How does your fstab look like?

plus FWIW...

https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html

"NFS software and Remote Procedure Call (RPC) software support IPv6 in a
seamless manner. Existing commands that are related to NFS services
have not changed. Most RPC applications also run on IPv6 without any
change. Some advanced RPC applications with transport knowledge might
require updates."



Re: NFS: IPV6

2024-01-05 Thread Marco Moock
Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:

> Where can I find information on how to configure NFS to use ipv6 
> addresses both server and client.

Does IPv6 work basically on your machine, including name resolution?

Does it work if you enter the address directly?

https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/

How does your fstab look like?



NFS: IPV6

2024-01-04 Thread Pocket



Where can I find information on how to configure NFS to use ipv6 
addresses both server and client.


I haven't found any good information on how to do that, and what I did 
find was extremely sparse.


I have NFS mounts working using ipv4 and want to change that to ipv6


--
Hindi madali ang maging ako



Re: Update on problem mounting NFS share

2023-10-05 Thread David Christensen

On 10/5/23 05:01, Steve Matzura wrote:

On 10/4/2023 2:32 PM, David Christensen wrote:

On 10/4/23 05:03, Steve Matzura wrote:

On 10/3/2023 6:06 PM, David Christensen wrote:

On 10/3/23 12:03, Steve Matzura wrote:

I gave up on the NFS business and went back to good old buggy
but reliable SAMBA (LOL), ...


I have attempted to document the current state of Samba on my 
SOHO, below.  ...


Wow but that's an awful lot of work for something that seems to 
be a timing problem. But at least I learned something.


What is posted is the result of a lot of work -- trying countless
combinations of settings on the server and on the client via edit,
reboot, and test cycles, until I found a combination that
seems to work.

Your OP of /etc/fstab:

//192.168.1.156/BigVol1 /mnt/bigvol1 civs vers=2.0,credentials=/root/smbcreds,ro

* The third field is "civs".  Should that be "cifs"?

* The fourth field contains "ro".  Is that read-only?  If so, how 
do you create, update, and delete files and directories on 
/mnt/bigvol1?


The 'civs' is indeed cifs. That was hand-transcribed, not 
copied/pasted. My bad.



Please "reply to List".


Please using inline posting style.


Copy-and-paste can require more effort, but precludes transcription 
errors.  On the Internet, errors are not merely embarrassing: they can 
cause confusion forever.



I mount the drive read-only because it 
contains my entire audio and video library, so anyone to whom I give 
access on my network must not ever have the slightest possibility of 
being able to modify it.



So, you create, update, and delete content via some means other than 
/mnt/bigvol1 (?).



TIMTOWTDI for multiple-user Samba shares.


ACL's would seem to be the canonical solution, but they introduce 
complexities -- Unskilled users?  Usage consistency from Windows 
Explorer, Command Prompt, Finder, Terminal, Thunar, Terminal Emulator, 
etc.?  Unix or Windows ACL's?  Backup and restore?  Integrity auditing 
and validation?



I chose to implement a "groupshare" share with read-write access for all 
Samba users and a social contract for usage (I also implemented a 
"groupshare" group/user and configured Samba to force ownership of content):


2023-10-05 11:51:52 toor@samba ~
# cat /usr/local/etc/smb4.conf
[global]
local master = Yes
netbios name = SAMBA
ntlm auth = ntlmv1-permitted
passdb backend = tdbsam
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
workgroup = WORKGROUP

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist

[groupshare]
create mask = 0777
directory mask = 0777
force create mode = 0666
force directory mode = 0777
force unknown acl user = Yes
force user = groupshare
path = /var/local/samba/groupshare
read only = No

2023-10-05 11:54:48 toor@samba ~
# grep groupshare /etc/group /etc/passwd
/etc/group:groupshare:*:999:
/etc/passwd:groupshare:*:999:999:Groupshare:/home/groupshare:/usr/sbin/nologin

2023-10-05 12:00:55 toor@samba ~
# find /var/local/samba/ -type d -depth 1 | egrep 'dpchrist|groupshare' 
| xargs ls -ld
drwxrwxr-x  98 dpchristdpchrist102 Oct  3 14:13 
/var/local/samba/dpchrist
drwxrwxr-x   8 groupshare  groupshare   13 Oct  5 11:32 
/var/local/samba/groupshare



Debian client:

2023-10-05 11:45:42 root@taz ~
# egrep 'dpchrist|groupshare' /etc/fstab | perl -pe 's/\s+/ /g'
//samba/dpchrist /samba/dpchrist cifs 
noauto,vers=3.0,user,username=dpchrist 0 0

//samba/groupshare /samba/groupshare cifs noauto,vers=3.0,user 0 0



By the way, another friend to whom I showed my problem came up with a
similar solution surrounding my original hypothesis that there is a
delay between the time fstab is processed and networking is 
available. He said he tested it probably a dozen times and it worked 
every time. His suggested fstab line is this:


//192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
vers=3.1.1,credentials=,rw,GID=1000,uid=1000,noauto,x-systemd.automount



It's a matter of edit-reboot-test cycles with a consistent and complete 
test protocol.



"GID=1000,uid=1000" -- looks similar to my "groupshare" idea, but from 
the client side.  That should produce the same results for most use-cases.



"credentials=,noauto,x-systemd.automount" -- I assume 
this is a work-around for unreliable mounting at system start-up (?).  I 
tried similar tricks, and ended up going back to KISS -- typing mount(8) 
commands by hand once per boot.



David



Re: Update on problem mounting NFS share

2023-10-04 Thread David Christensen

On 10/4/23 05:03, Steve Matzura wrote:

On 10/3/2023 6:06 PM, David Christensen wrote:

On 10/3/23 12:03, Steve Matzura wrote:

I gave up on the NFS business and went back to good old buggy
but reliable SAMBA (LOL), ...




I have attempted to document the current state of Samba on my
SOHO, below.  ...



Wow but that's an awful lot of work for something that seems to be a
timing problem. But at least I learned something.


What is posted is the result of a lot of work -- trying countless
combinations of settings on the server and on the client via edit,
reboot, and test cycles, until I found a combination that seems to work.


Your OP of /etc/fstab:

//192.168.1.156/BigVol1 /mnt/bigvol1 civs
vers=2.0,credentials=/root/smbcreds,ro

* The third field is "civs".  Should that be "cifs"?

* The fourth field contains "ro".  Is that read-only?  If so, how do you
create, update, and delete files and directories on /mnt/bigvol1?


David



Re: Update on problem mounting NFS share

2023-10-03 Thread David Christensen

On 10/3/23 12:03, Steve Matzura wrote:
I gave up on the NFS business and went back to good old buggy but 
reliable SAMBA (LOL), which is what I was using when I was on Debian 8, 
and which worked fine. Except for one thing, everything's great.



In /etc/fstab, I have:


//192.168.1.156/BigVol1 /mnt/bigvol1 civs 
vers=2.0,credentials=/root/smbcreds,ro



That should work, right? Well, it does, but only sometimes. If I boot 
the system, the remote share isn't there. If I unmount everything with 
'umount -a', wait a few seconds, then remount everything with 'mount 
-a', I sometimes have to do it twice. Sometimes, the first time I get a 
message from mount about error -95, but if I wait the space of a couple 
heartbeats and try 'mount -a' again, the share mounts. If I look through 
/var/log/kern.log for errors, I don't find anything that stands out as 
erroneous, but would be glad to supply extracts here that might help me 
to trace this down and fix it.



Using Samba to share files over the network requires various steps and 
settings on both the server and on the clients.  I put a lot of effort 
into Samba back in the day, and only went far enough to get basic file 
sharing working.  Since then, I have copied-and-pasted.  But Microsoft 
has not stood still, nor has Samba.



I have attempted to document the current state of Samba on my SOHO, 
below.  But beware -- my Samba setup is insecure and has issues.



My username is "dpchrist" on all computers and on Samba.


My primary group is "dpchrist" on all Unix computers.


My UID and GID are both "12345" (redaction) on all Unix computers.


The server is FreeBSD (I previously used Debian, but switched to get 
native ZFS):


2023-10-03 12:20:58 toor@f3 ~
# freebsd-version -kru
12.4-RELEASE-p5
12.4-RELEASE-p5
12.4-RELEASE-p5


The latest version of Samba seemed to want Kerberos, so I chose an older 
version that does not:


2023-10-03 12:25:25 toor@samba ~
# pkg version | grep samba
samba413-4.13.17_5 =


I configured Samba to share files:

2023-10-03 14:49:00 toor@samba ~
# cat /usr/local/etc/smb4.conf
[global]
local master = Yes
netbios name = SAMBA
ntlm auth = ntlmv1-permitted
passdb backend = tdbsam
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
workgroup = WORKGROUP

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist



I validate the configuration file with testparm(1):

2023-10-03 13:37:31 toor@samba ~
# testparm
Load smb config files from /usr/local/etc/smb4.conf
Loaded services file OK.
Weak crypto is allowed
Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

# Global parameters
[global]
ntlm auth = ntlmv1-permitted
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
idmap config * : backend = tdb

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist



I created a Samba user account:

root@samba:~ # pdbedit -a dpchrist
new password:
retype new password:


Whenever I change anything related to Samba on the server, I reboot and 
verify before I attempt to connect from a client.



On Debian clients:

2023-10-03 12:44:39 root@taz ~
# cat /etc/debian_version ; uname -a
11.7
Linux taz 5.10.0-25-amd64 #1 SMP Debian 5.10.191-1 (2023-08-16) x86_64 
GNU/Linux



I installed the Samba client file sharing package:

2023-10-03 12:55:06 root@taz ~
# dpkg-query -W cifs-utils
cifs-utils  2:6.11-3.1+deb11u1


I created a mount point for the incoming share:

2023-10-03 12:58:13 root@taz ~
# ls -ld /samba/dpchrist
drwxr-xr-x 2 dpchrist dpchrist 0 Jun 18 14:31 /samba/dpchrist


I created an /etc/fstab entry for the incoming share:

2023-10-03 12:59:41 root@taz ~
# grep samba\/dpchrist /etc/fstab
//samba/dpchrist	/samba/dpchrist		cifs	 
noauto,vers=3.0,user,username=dpchrist		0	0



I mount the incoming share manually:

2023-10-03 13:01:07 dpchrist@taz ~
$ mount /samba/dpchrist
Password for dpchrist@//samba/dpchrist:

2023-10-03 13:01:46 dpchrist@taz ~
$ mount | grep samba\/dpchrist
//samba/dpchrist on /samba/dpchrist type cifs 
(rw,nosuid,nodev,relatime,vers=3.0,cache=strict,username=dpchrist,uid=12345,forceuid,gid=12345,forcegid,addr=192.168.5.24,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,user=dpchrist)



Note that there is a maddening issue with Samba on Unix clients -- the 
Unix execute bits vs. MS-DOS System, Hidden, and Archive bits:


https://unix.stackexchange.com/questions/103415/why-are-files-in-a-smbfs-mounted-share-created-with-executable-bit-set


On Wind

Re: Update on problem mounting NFS share

2023-10-03 Thread piorunz

On 03/10/2023 20:03, Steve Matzura wrote:

> I gave up on the NFS business


Why?


> and went back to good old buggy but reliable SAMBA (LOL)


:o

Sorry, but I think you created a bigger problem than you already had. NFS
works great; I've been using it for years and it has never failed me. I
cannot imagine what was not working for you. Anyway, good luck.

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄



Update on problem mounting NFS share

2023-10-03 Thread Steve Matzura
I gave up on the NFS business and went back to good old buggy but 
reliable SAMBA (LOL), which is what I was using when I was on Debian 8, 
and which worked fine. Except for one thing, everything's great.



In /etc/fstab, I have:


//192.168.1.156/BigVol1 /mnt/bigvol1 civs 
vers=2.0,credentials=/root/smbcreds,ro



That should work, right? Well, it does, but only sometimes. If I boot 
the system, the remote share isn't there. If I unmount everything with 
'umount -a', wait a few seconds, then remount everything with 'mount 
-a', I sometimes have to do it twice. Sometimes, the first time I get a 
message from mount about error -95, but if I wait the space of a couple 
heartbeats and try 'mount -a' again, the share mounts. If I look through 
/var/log/kern.log for errors, I don't find anything that stands out as 
erroneous, but would be glad to supply extracts here that might help me 
to trace this down and fix it.



TIA


Re: usrmerge on root NFS will not be run automatically

2023-10-03 Thread Marco
On Thu, 14 Sep 2023 22:17:35 +0200
Marco  wrote:

> If I screw with this I'd prefer to do it at night or on a weekend to
> keep the systems running during business hours.

Followup:

I went through the list and resolved each conflict manually. I
launched usrmerge after every change and deleted/merged the
offending files.

Note that I ran usrmerge on the individual hosts themselves, on NFS
root. Although usrmerge complained that this won't work, it somehow
did. Systems rebooted, all came up fine, no broken packages and the
programs are working.

Thanks for all the support. Case solved.

Marco



Re: Can't mount NFS NAS after major upgrade

2023-09-18 Thread debian-user
Steve Matzura  wrote:
 
> mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro

In addition to what others have observed it might be worth mentioning
that the -v option to mount (i.e. verbose) often gives more information
about what's going on.



Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread tomas
On Sun, Sep 17, 2023 at 02:43:16PM -0400, Steve Matzura wrote:

As Charles points out, this looks rather like CIFS, not NFS:

> # NAS box:
> //192.168.1.156/BigVol1 /mnt/bigvol1 cifs
   
> _netdev,username=,password=,ro 0 0

If Charles's (and my) hunch is correct, perhaps this wiki page
contains leads for you to follow:

  https://wiki.debian.org/Samba/ClientSetup

Cheers
-- 
tomás




Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread Tom Dial




On 9/17/23 12:43, Steve Matzura wrote:

> I upgraded a version 8 system to version 11 from scratch--i.e., I totally
> reinitialized the internal drive and laid down an entirely fresh install of 11.
> Then 12 came out about a week later, but I haven't yet upgraded to 12 because I
> have a show-stopper on 11 which I absolutely must solve before moving ahead,
> and it's the following:
> 
> For years I have had a Synology NAS that was automatically mounted and
> directories thereon bound during the boot process via the following lines at
> the end of /etc/fstab:
> 
> # NAS box:
> //192.168.1.156/BigVol1 /mnt/bigvol1 cifs
> _netdev,username=,password=,ro 0 0
> 
> Then I had the following line, replicated for several directories on bigvol1,
> to bind them to directories on the home filesystem, all in a script called
> /root/remount that I executed manually after each reboot:
> 
> mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro
> 
> I had directories set up on the home filesystem to accept these binds, like
> this:
> 
> mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro
> 
> None of this works any more on Debian 11. After boot, /mnt/bigvol1 is empty, so
> there's no need to even try the remount script because there's nothing to which
> those directories can bind, so even if those mount commands are correct, I
> would never know until bigvol1 mounts correctly and content appears in at least
> 'ls -ld /mnt/bigvol1'.



Are there relevant messages in the output of dmesg or in the systemd journal? 
If so, they might give useful information.
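
For example, a generic sketch (adjust the filters to taste):

dmesg -T | grep -i -e cifs -e smb
journalctl -b -p warning    # warnings and worse from the current boot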

This is out of range of my usage and experience, but from others I have found 
that some consumer NAS units still use, and are effectively stuck at, SMB1. SMB 
version 1 has a fairly serious uncorrectable vulnerability and Microsoft 
deprecated it (but continued to support it through, I think, Windows 11). I 
believe Samba no longer supports it by default, but it can still be configured 
to use it, with some effort, if you wish. Another, and preferable, fix would be 
to configure the Synology to use SMB version 3, if that appears to be the cause 
of the problem.

Regards,
Tom Dial


> Research into this problem made me try similar techniques after having
> installed nfs-utils. I got bogged down by a required procedure entailing
> exportation of NFS volume information in order to let nfs-utils know about the
> NFS drive, but before I commit to that, I thought I'd ask in here to make sure
> I'm not about to do anything horribly wrong.
> 
> So, summarily put, what's different about mounting a networked NFS drive from 8
> to 11 and 12?
> 
> Thanks in advance.






Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread Charles Curley
On Sun, 17 Sep 2023 14:43:16 -0400
Steve Matzura  wrote:

> # NAS box:
> //192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
> _netdev,username=,password=,ro 0 0

Possibly part of the problem is that this is a CIFS (Samba) mount, not
an NFS mount.

Is samba installed?

If you try to mount that mount manually, what error message(s) and
return value do you get? e.g. a successful mount:

root@jhegaala:~# mount /home/charles/samba 
root@jhegaala:~# echo $?
0
root@jhegaala:~# 

You may also want to rethink that line in fstab. I have, e.g.:

//samba.localdomain/samba /home/charles/samba cifs 
_netdev,rw,credentials=/etc/samba/charles.credentials,uid=charles,gid=charles,file_mode=0644,noauto
  0   0

The noauto is in there because this is a laptop, and I have scripts to
mount it only if the machine is on a home network. For a desktop, I
remove the noauto and have x-systemd.automount instead.


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Can't mount NFS NAS after major upgrade

2023-09-17 Thread Steve Matzura
I upgraded a version 8 system to version 11 from scratch--i.e., I 
totally reinitialized the internal drive and laid down an entirely fresh 
install of 11. Then 12 came out about a week later, but I haven't yet 
upgraded to 12 because I have a show-stopper on 11 which I absolutely 
must solve before moving ahead, and it's the following:



For years I have had a Synology NAS that was automatically mounted and 
directories thereon bound during the boot process via the following 
lines at the end of /etc/fstab:



# NAS box:
//192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
_netdev,username=,password=,ro 0 0


Then I had the following line, replicated for several directories on 
bigvol1, to bind them to directories on the home filesystem, all in a 
script called /root/remount that I executed manually after each reboot:



mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro

I had directories set up on the home filesystem to accept these binds, 
like this:



mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro


None of this works any more on Debian 11. After boot, /mnt/bigvol1 is 
empty, so there's no need to even try the remount script because there's 
nothing to which those directories can bind, so even if those mount 
commands are correct, I would never know until bigvol1 mounts correctly 
and content appears in at least 'ls -ld /mnt/bigvol1'.



Research into this problem made me try similar techniques after having 
installed nfs-utils. I got bogged down by a required procedure entailing 
exportation of NFS volume information in order to let nfs-utils know 
about the NFS drive, but before I commit to that, I thought I'd ask in 
here to make sure I'm not about to do anything horribly wrong.



So, summarily put, what's different about mounting a networked NFS drive 
from 8 to 11 and 12?



Thanks in advance.



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Fri, 15 Sep 2023 17:55:06 +
Andy Smith  wrote:

> I haven't followed this thread closely, but is my understanding
> correct:
> 
> - You have a FreeBSD NFS server with an export that is a root
>   filesystem of a Debian 11 install shared by multiple clients

Almost. It's not *one* Debian installation, it's many (diskless
workstations PXE boot). Each host has it's own root on the NFS.
Some stuff is shared, but that's not relevant here.

> - You're trying to do an upgrade to Debian 12 running on one of the
>   clients.

Not on one, on *all* clients.

> - It tries to do a usrmerge but aborts because NFS is not supported
>   by that script?

Correct. Strangely, the usrmerge script succeeded on one host. But on
all the others it throws errors, either relating to NFS being not
supported or because of duplicate files.

> If so, have you tried reporting a bug on this yet?

No I haven't. As far as I understand it's a known issue and the
developer has decided to just have the script fail on NFS.

> If you don't get anywhere with that, I don't think you have much
> choice except to take away the root directory tree to a Linux host,
> chroot into it and complete the merge there, then pack it up again
> and bring it back to your NFS server. Which is very far from ideal.

I'll try to solve the conflicts manually. If that fails, that's what
I have to do, I guess. I didn't expect that level of fiddling with
system files for a simple upgrade. But hey, here we are now.

> The suggestions about running a VM on the NFS server probably aren't
> going to work as you won't be able to take the directory tree out of
> use and export it as a block device to the VM.

Indeed.

> The option of making the usrmerge script work from FreeBSD might not
> be too technically challenging but I wouldn't want to do it without
> assistance from the Debian developers responsible for the script.

I won't do that. I don't speak Perl and will not rewrite the
usrmerge script.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Andy Smith
Hello,

On Fri, Sep 15, 2023 at 01:52:27PM +0200, Marco wrote:
> On Thu, 14 Sep 2023 16:43:09 -0400
> Dan Ritter  wrote:
> > Each of these things could be rewritten to be compatible with
> > FreeBSD; I suspect it would take about twenty minutes to an hour,
> > most of it testing, for someone who was familiar with FreeBSD's
> > userland
> 
> I'm not going down that route.

I haven't followed this thread closely, but is my understanding
correct:

- You have a FreeBSD NFS server with an export that is a root
  filesystem of a Debian 11 install shared by multiple clients

- You're trying to do an upgrade to Debian 12 running on one of the
  clients.

- It tries to do a usrmerge but aborts because NFS is not supported
  by that script?

If so, have you tried reporting a bug on this yet? It seems like an
interesting problem which although being quite a corner case, might
spark the interest of the relevant Debian developers.

If you don't get anywhere with that, I don't think you have much
choice except to take away the root directory tree to a Linux host,
chroot into it and complete the merge there, then pack it up again
and bring it back to your NFS server. Which is very far from ideal.

The suggestions about running a VM on the NFS server probably aren't
going to work as you won't be able to take the directory tree out of
use and export it as a block device to the VM. Or rather, you could
do that, but it's probably not quicker/easier than the method of
taking a copy of it elsewhere then bringing it back.

The option of making the usrmerge script work from FreeBSD might not
be too technically challenging but I wouldn't want to do it without
assistance from the Debian developers responsible for the script.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Stefan Monnier
> So the file in /lib appears to be newer. So what to do? Can I delete
> the one in /usr/lib ?

Yes.


Stefan
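
A minimal sketch of that cleanup, using the two paths from this thread
(compare first, keep a backup, and only delete the /usr/lib copy once
you're sure):

  diff -u /lib/udev/rules.d/60-libsane1.rules /usr/lib/udev/rules.d/60-libsane1.rules
  cp /usr/lib/udev/rules.d/60-libsane1.rules /root/60-libsane1.rules.bak  # back up the copy being removed
  rm /usr/lib/udev/rules.d/60-libsane1.rules                              # keep the newer /lib copy
  /usr/lib/usrmerge/convert-usrmerge                                      # then re-run the conversion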



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:54:27 -0400
Stefan Monnier  wrote:

> Still going on with this?

I am.

> Have you actually looked at those two files:
> 
> /lib/udev/rules.d/60-libsane1.rules and
> /usr/lib/udev/rules.d/60-libsane1.rules
> 
> to see if they're identical or not and to see if you might have an
> idea how to merge them?

Yes, I did. On some hosts they are identical, on others they're
different. That's why I asked how to handle that.

> `usrmerge` did give you a pretty clear explanation of the problem it's
> facing (AFAIC)

It does indeed.

> and I believe it should be very easy to address it

Everything is easy if you only know how to do it.

As I said, on some hosts they are identical. So what to do? Can I
delete one of them? If yes, which one?

On other hosts they differ, here the first lines:

/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.1.1-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION=="remove", GOTO="libsane_rules_end"

…

/usr/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.0.31-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION!="add", GOTO="libsane_rules_end"

…

So the file in /lib appears to be newer. So what to do? Can I delete
the one in /usr/lib ?

> (no need to play with anything funny like setting up a VM or
> mounting the disk from some other system).

Which is good because that's not that easy, apparently.

Thank you for your replies and support regarding this matter.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:43:09 -0400
Dan Ritter  wrote:

> The heart of the convert-usrmerge perl script is pretty
> reasonable. However:
> 
> […]
> 
> Similarly, there are calls to stat and du which probably have
> some incompatibilities.
> 
> The effect of running this would be fairly safe, but also not do
> anything: you would get some errors and then it would die.

Ok, then I'll not try that. Would be a waste of time.

> Each of these things could be rewritten to be compatible with
> FreeBSD; I suspect it would take about twenty minutes to an hour,
> most of it testing, for someone who was familiar with FreeBSD's
> userland

I'm not going down that route.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Stefan Monnier
Still going on with this?

Have you actually looked at those two files:

/lib/udev/rules.d/60-libsane1.rules and 
/usr/lib/udev/rules.d/60-libsane1.rules

to see if they're identical or not and to see if you might have an idea
how to merge them?
[ as I suggested a week ago.  ]

`usrmerge` did give you a pretty clear explanation of the problem it's
facing (AFAIC) and I believe it should be very easy to address it (no
need to play with anything funny like setting up a VM or mounting
the disk from some other system).

If you're not sure what to do with those two files, show them to us.


Stefan


Marco [2023-09-14 20:28:59] wrote:

> On Thu, 14 Sep 2023 13:20:09 -0400
> Dan Ritter  wrote:
>
>> > FreeBSD (actually a TrueNAS appliance)  
>> 
>> If it supports the 9P share system, Debian can mount that with
>> -t 9p.
>> 
>> I don't know whether TrueNAS enabled that.
>
> No it does not. I just confirmed, the only choices are raw disk
> access (ZVOL), NFS and Samba.
>
> However, usrmerge is a perl script. Can I run it on the server
> (after chroot'ing) in a jail (under FreeBSD)? Or does this mess
> things up? Just a thought.
>
> Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 15:01:50 -0400
Dan Ritter  wrote:

> Is this a mission-critical server?

I'd say so, yes. It's not one single server. It's *all*
workstations.

> i.e. will screwing it up for a day cause other people to be upset

Yes, because no one can use their computer.

> or you to lose money?

Yes.

If I screw with this I'd prefer to do it at night or on a weekend to
keep the systems running during business hours.

> Do you have a good, current backup?

Yes.

> Since it's TrueNAS, I assume you are using ZFS, so: have you sent
> snapshots to some other device recently?

Yes, every three hours. And one before every system upgrade.



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 13:20:09 -0400
Dan Ritter  wrote:

> > FreeBSD (actually a TrueNAS appliance)  
> 
> If it supports the 9P share system, Debian can mount that with
> -t 9p.
> 
> I don't know whether TrueNAS enabled that.

No it does not. I just confirmed, the only choices are raw disk
access (ZVOL), NFS and Samba.

However, usrmerge is a perl script. Can I run it on the server
(after chroot'ing) in a jail (under FreeBSD)? Or does this mess
things up? Just a thought.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 11:00:17 -0400
Dan Ritter  wrote:

> What VM software are you using

bhyve

…which I know very little about. It's supported on the server, I've
tried it, set up a VM, it works. But the server is mainly serving
NFS shares to various clients.

> and what's the OS on which that runs?

FreeBSD (actually a TrueNAS appliance)

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Dan Ritter
Marco wrote: 
> On Fri, 8 Sep 2023 12:26:38 -0400
> Dan Ritter  wrote:
> > * have the VM mount the filesystem directly
> 
> How? I can only attach devices (=whole disks) to the VM or mount the
> FS via NFS. I can't attach it as a device because it's not a device
> but rather a directory holding the root file systems of several hosts
> directly on the server. So that doesn't work.


What VM software are you using, and what's the OS on which that
runs?

-dsr-



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Fri, 8 Sep 2023 12:26:38 -0400
Dan Ritter  wrote:

> Can you start a temporary VM directly on the server?

I just checked. I can, yes.

> If so, you can
> * stop your remote Debian machine

Ok, no problem.

> * run a Debian rescue image in the VM on the NFS server

No problem.

> * have the VM mount the filesystem directly

How? I can only attach devices (=whole disks) to the VM or mount the
FS via NFS. I can't attach it as a device because it's not a device
but rather a directory holding the root file systems of several hosts
directly on the server. So that doesn't work.

This leaves me with an NFS mount in the VM. But NFS mounts are not
supported by usrmerge, that's the whole issue I'm facing here.

So this VM-on-the-server idea doesn't work in my case or am I
missing something here?

Another question: if usrmerge complains that the file is present in
/lib as well as /usr/lib, what's the correct thing to do if

i)  the files are identical
ii) the files are different ?

Regards
Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-11 Thread Javier Barroso
Hello,

On Sun, 10 Sept 2023 at 21:55, Marco wrote:

> On Fri, 8 Sep 2023 12:26:38 -0400
> Dan Ritter  wrote:
>
> > > That is quite an involved task. I didn't expect such fiddling for a
> > > simple OS update. I'm a bit worried that the permissions and owners
> > > go haywire when I copy stuff directly off the server onto a VM and
> > > back onto the server. Is there a recommended procedure or
> > > documentation available?
> >
> > Can you start a temporary VM directly on the server?
>
> I might actually. I'll have to check the following days.
>
> > If so, you can
> > * stop your remote Debian machine
> > * run a Debian rescue image in the VM on the NFS server
> > * have the VM mount the filesystem directly
> > * chroot, run usrmerge
> > * unmount
>
> Ok, that's also quite a task, but it seems less error-prone than
> copying a bunch of system files across the network and hoping for the
> best. I'll try.
>
> Marco
>

Maybe you can open a new bug asking for better documentation of what
should be done in this case.

Maybe dpkg -L with both files can help to clarify what should be done.
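
A sketch of that check with the two paths from this thread (note that
dpkg -S looks up which package owns a given file, while dpkg -L lists
the files a given package ships):

  dpkg -S /lib/udev/rules.d/60-libsane1.rules      # which package owns the /lib copy?
  dpkg -S /usr/lib/udev/rules.d/60-libsane1.rules  # and the /usr/lib copy?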

Regards



Re: usrmerge on root NFS will not be run automatically

2023-09-10 Thread Marco
On Fri, 8 Sep 2023 12:26:38 -0400
Dan Ritter  wrote:

> > That is quite an involved task. I didn't expect such fiddling for a
> > simple OS update. I'm a bit worried that the permissions and owners
> > go haywire when I copy stuff directly off the server onto a VM and
> > back onto the server. Is there a recommended procedure or
> > documentation available?  
> 
> Can you start a temporary VM directly on the server?

I might actually. I'll have to check the following days.

> If so, you can
> * stop your remote Debian machine
> * run a Debian rescue image in the VM on the NFS server
> * have the VM mount the filesystem directly
> * chroot, run usrmerge
> * unmount

Ok, that's also quite a task, but it seems less error-prone than
copying a bunch of system files across the network and hoping for the
best. I'll try.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread Stefan Monnier
>   root@foobar:~# /usr/lib/usrmerge/convert-usrmerge 
>
>   FATAL ERROR:
>   Both /lib/udev/rules.d/60-libsane1.rules and 
> /usr/lib/udev/rules.d/60-libsane1.rules exist.

The problem is that "usrmerge" needs to unify those two and doesn't
know how.  So you need to do it by hand.
E.g. get rid of one of those two (or maybe if you can make
them 100% identical `usrmerge` will be happy as well).


Stefan



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread Marco
On Fri, 8 Sep 2023 16:55:23 +0200
zithro  wrote:

> On 08 Sep 2023 12:54, Marco wrote:
> >Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will
> > not be run automatically. See #842145 for details.  
> 
> Read:
> - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842145

  “I repeated it a few times. I had to restart various services in between
  retries (I think I restarted everything by the end). Eventually it
  succeeded.”

I tried it 30 times to no avail. The report doesn't offer another
solution.

> - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1039522

  “instead of converting the client, convert the server first.”

I don't want to convert the server. The server is running fine and
has no issues. I don't have a clue what this has to do with the
server.

  “So there is a workaround when your NFS server is a Linux machine
  and you may use chroot on it, at least.”

The server doesn't run Linux. So also no solution there.

> carefully copy the files over a Linux machine, chroot+convert
> there, then move back to the NFS server.

That is quite an involved task. I didn't expect such fiddling for a
simple OS update. I'm a bit worried that the permissions and owners
go haywire when I copy stuff directly off the server onto a VM and
back onto the server. Is there a recommended procedure or
documentation available?

> Can help: 
> https://unix.stackexchange.com/questions/312218/chroot-from-freebsd-to-linux

I cannot install stuff on the server unfortunately.

Thanks for your quick reply.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread zithro

On 08 Sep 2023 12:54, Marco wrote:

   Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will not be run
   automatically. See #842145 for details.


Read:
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842145
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1039522

The solution would be:
"convert-usrmerge can be run in a chroot on the NFS server"

If your NFS server is not Linux-based, carefully copy the files over a 
Linux machine, chroot+convert there, then move back to the NFS server.

Make backups ;)
Can help: 
https://unix.stackexchange.com/questions/312218/chroot-from-freebsd-to-linux



As per the why, I don't know.
Maybe the symlink handling by NFS?

Reading the script "/usr/lib/usrmerge/convert-usrmerge", you get the
reasons why it fails:


142 # The other cases are more complex and there are some corner cases that
143 # we do not try to resolve automatically.
144
145 # both source and dest are links
[...]
168 # the source is a link
[...]
175 # the destination is a link
[...]
191 # both source and dest are directories
192 # this is the second most common case

You may change the script to detect where/why it fails, so edit
fatal("Both $n and /usr$n exist");
to
fatal("Both $n and /usr$n exist - err line 145");
fatal("Both $n and /usr$n exist - err line 168");

Also found this on
"https://en.opensuse.org/openSUSE:Usr_merge#Known_Problems"; :

- "File systems that do not support RENAME_EXCHANGE such as ZFS or NFS 
cannot perform live conversion (Bug 1186637)"
- "Conversion fails if there's a mount point below 
(/usr)/{bin,sbin,lib,lib64}"


Good luck

--
++
zithro / Cyril



usrmerge on root NFS will not be run automatically

2023-09-08 Thread Marco
Hi,

I'm in the process of upgrading my Debian stable hosts and run into
a problem with usrmerge:

  Setting up usrmerge (35) ...

  Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will not be run
  automatically. See #842145 for details.

  E: usrmerge failed.
  dpkg: error processing package usrmerge (--configure):
   installed usrmerge package post-installation script subprocess returned 
error exit status 1
  Errors were encountered while processing:
   usrmerge
  E: Sub-process /usr/bin/dpkg returned an error code (1)

True, root is mounted via NFS. So I ran usrmerge manually:

  root@foobar:~# /usr/lib/usrmerge/convert-usrmerge 

  FATAL ERROR:
  Both /lib/udev/rules.d/60-libsane1.rules and 
/usr/lib/udev/rules.d/60-libsane1.rules exist.

  You can try correcting the errors reported and running again
  /usr/lib/usrmerge/convert-usrmerge until it will complete without errors.
  Do not install or update other Debian packages until the program
  has been run successfully.

It instructs me to:

  You can try correcting the errors reported and running again

But it's not mentioned anywhere *how* to correct those errors. It's
true that both files exist. I've read

  https://wiki.debian.org/UsrMerge

But the page doesn't cover the error I face.

How to fix the error? Is there a command I can run (e.g. rsync?) to
fix whatever usrmerge complains about, like keeping only the newest
file or deleting the old one? I feel there's very little info out
there on how to recover from this situation. Any tips are much
appreciated.

Marco

Debian stable 6.1.0-11-amd64



Re: Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread Matthias Scheler
On Sat, Jul 29, 2023 at 05:44:59PM +0100, piorunz wrote:
> Edit /etc/nfs.conf file:
> [nfsd]
> udp=y
> 
> then:
> sudo systemctl restart nfs-server

Yes, that fixed my NFS problem.

Thank you very much

-- 
Matthias Scheler  http://zhadum.org.uk/



Re: Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread piorunz

On 29/07/2023 16:00, Matthias Scheler wrote:


Hello,

after upgrading one of my systems from Debian 11 to 12 the kernel NFS server
doesn't seem to accept NFS requests over UDP on port 2049 anymore:

>rpcinfo -p | grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl

This causes problems for a non-Linux NFS client whose automounter
tries to perform the mount over UDP. Is there a way to re-enable
the UDP port?

Kind regards



Edit /etc/nfs.conf file:
[nfsd]
udp=y

then:
sudo systemctl restart nfs-server

Result:
$ rpcinfo -p | grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄



Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread Matthias Scheler


Hello,

after upgrading one of my systems from Debian 11 to 12 the kernel NFS server
doesn't seem to accept NFS requests over UDP on port 2049 anymore:

>rpcinfo -p | grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl

This causes problems for a non-Linux NFS client whose automounter
tries to perform the mount over UDP. Is there a way to re-enable
the UDP port?

Kind regards

-- 
Matthias Scheler  http://zhadum.org.uk/



Booting Debian from NFS (using EFI PXE GRUB)

2023-03-03 Thread tuxifan
Hey!

As kind of a network-wide fallback system (and for some diskless computers in 
our network) I'd like to set up EFI PXE boot for a simple Debian system. 
However, I have only been able to find very sparse and possibly outdated 
information on how to actually tell the kernel/initramfs to mount an NFS 
filesystem as the filesystem root. I even asked ChatGPT, and it replied with its 
usual hallucinations, unable to provide real links to sources of information.

This is what my TFTP root currently looks like:

├── grub
│   └── grub.cfg
├── grubnetx64.efi
├── initrd.img (Generic Debian Testing initrd.img)
└── vmlinuz (Generic Debian Testing vmlinuz)
(1 directory, 4 files)

And my NFS root currently isn't much more than the result of a Debian Stable 
debootstrap.

Do you have any tips and ideas on how to get Linux to mount that NFS root as 
the filesystem root?

Thanks in advance
Tuxifan
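
For what it's worth, a minimal sketch of the usual kernel-command-line
route (the server address and export path below are placeholders; this
relies on the NFS boot support that Debian's initramfs-tools normally
includes):

  # grub/grub.cfg on the TFTP server -- hypothetical entry
  menuentry "Debian (NFS root)" {
      linux /vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot ip=dhcp rw
      initrd /initrd.img
  }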




Re: NAS + serveur de fichier SMB NFS SFTP avec Debian 11

2022-12-26 Thread Gilles Mocellin
On Monday, 26 December 2022 at 16:51:56 CET, Jean-François Bachelet wrote:
> Hello ^^)
> 
> On 26/12/2022 at 16:05, Olivier Back my spare wrote:
> > Hello
> > 
> > Is it possible to build a NAS file server doing SMB, NFS and SFTP + LDAP
> > with Debian?
> > I bought a new computer for my mother and recovered her old i3 with
> > 8 GB of RAM and a 1 TB HDD.
> > I'd like to turn it into a NAS file server doing SMB, NFS and SFTP + LDAP
> > without using Openvault. Is it possible to get the same result
> > with Debian 11?
> > I plan to put in a 128 GB SSD for the OS and swap, and two 4 TB HDDs
> > on a "SSU-sata3-t2.v1" SATA RAID 1 card for the data.
> 
> If you're set on Debian you can use FreeNAS Scale, it's based on it ^^)
> 
> Jeff

Hello,

Among the NAS-dedicated distributions based on Debian, there is also
OpenMediaVault: https://www.openmediavault.org/ [1]

Perhaps simpler to install after the fact on a Debian system than
FreeNAS Scale?

--- Oops, I just noticed you mentioned Openvault; surely a typo.

But to answer your question: yes, all the software is present in Debian
to do everything a NAS should be able to do.
It's only a question of the time spent selecting the software (there are
often several options) and configuring it.
But if it's a hobby and you enjoy it, that's the way to do it; it's very
educational.

On the other hand, if I were you, I wouldn't bother with a hardware RAID
card; I'd do software RAID, either with MD or with ZFS.

Hardware RAID cards have only one use in my view: letting people who know
nothing about them swap the disks.
With MD or ZFS, you generally have to go and type commands to replace a
disk.


[1] https://www.openmediavault.org/
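
As a rough illustration of the MD route (a sketch only; /dev/sdb and
/dev/sdc are hypothetical names for the two 4 TB disks, so check lsblk
before running anything destructive):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc  # build the RAID 1 mirror
  mkfs.ext4 /dev/md0                                                    # filesystem for the data
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf                        # persist the array definition
  update-initramfs -u                                                   # make the initramfs aware of it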


Re: Mount NFS hangs

2022-10-04 Thread Greg Wooledge
On Tue, Oct 04, 2022 at 12:04:56PM +0100, tony wrote:
> I can successfully do (pls ignore spurious line break):
> 
> mount -t nfs -o _netdev tony-fr:/mnt/sharedfolder
> /mnt/sharedfolder_client
> 
> but the user id is incorrect.

What do you mean, "the user id"?  As if there's only one?

This isn't a VFAT file system.  It's NFS.  The file system on the server
has multiple UIDs and GIDs and permissions.  These are reflected on
the client.

If a file (say, /mnt/sharedfolder_client/foo.txt) is owned by UID 1001
on the server, then it is also owned by UID 1001 on the client.

Unless of course you're doing ID mapping stuff, which I have never done,
and really don't recommend.

Just make sure UID 1001 on the server and UID 1001 on the client map to
the same person.  And similarly for all the other UIDs and GIDs.
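
A quick sketch of that check (UID 1001 and the mount point are just the
examples from this message):

  ls -n /mnt/sharedfolder_client   # list numeric UIDs/GIDs instead of names
  getent passwd 1001               # run on both machines; the same UID should map to the same person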



Mount NFS hangs

2022-10-04 Thread tony

Hi,

I need to mount a directory from a debian 11 server to a debian 10 client.

I can successfully do (pls ignore spurious line break):

mount -t nfs -o _netdev tony-fr:/mnt/sharedfolder
/mnt/sharedfolder_client

but the user id is incorrect. If I now try:

mount -t nfs -o _netdev,uid=1002 tony-fr:/mnt/sharedfolder 
/mnt/sharedfolder_client


the command hangs in the terminal. UID 1002 is valid in the /etc/passwd file 
on both machines.


Any suggestion on how to fix this please?

cheers, Tony



Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-09-01 Thread mj

Hi,

A suggestion: we've had issues in the past where, on an NFS root,
setting "Linux Capabilities" (setcap) fails because NFS does not
support the extended attributes needed to store them.


Perhaps that is your issue as well?

MJ
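
A minimal way to test that hunch against the NFS root (the path is
hypothetical; setcap and getcap come from the libcap2-bin package):

  cp /bin/true /root/captest           # scratch file on the NFS-backed filesystem
  setcap cap_net_raw+p /root/captest   # expect an "Operation not supported" style failure if caps can't be stored
  getcap /root/captest                 # shows the capability only if it was actually stored
  rm /root/captest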

On 16-08-2022 at 21:58, Lie Rock wrote:

Hi,

I'm trying to bring up the Debian 10 root file system on an ARM SoC 
board. When the rootfs was in an SD card the board worked well. When I 
put the rootfs on an NFS server and tried to boot the board through NFS 
mount, it reported error through serial port:


[FAILED] Failed to start Create System Users. See 'systemctl status
systemd-sysusers.service' for details.


And this is the only error message printed out. The board went all the 
way to the login input, but I could not login with any of 
the preset accounts including root (because no users have been created 
as it suggested?), and I didn't see any way to run commands to check 
system status for details.


So how is the process "create system users" performed when Linux/Debian 
starts? What can be contributing to this error?


Any suggestions would be greatly appreciated.

Rock





Re: nfs-kernel-server

2022-08-20 Thread Greg Wooledge
On Sat, Aug 20, 2022 at 06:21:21PM -0700, Wylie wrote:
> 
> i am getting this error ... on a fresh install of nfs-kernel-server
> 
>   mount.nfs: access denied by server while mounting
> 192.168.42.194:/ShareName
> 
> i'm not having this issue on other machines installed previously
> i've tried re-installing Debian and nfs several times

What's in your /etc/exports file on the server?  What's the client's
IP address and hostname?  If you attempt to resolve the client's IP
address from the server, what do you get?

If the client changed IP address or name, or if you changed its entry
in /etc/exports on the server, did you restart the NFS server service?



nfs-kernel-server

2022-08-20 Thread Wylie


i am getting this error ... on a fresh install of nfs-kernel-server

  mount.nfs: access denied by server while mounting 
192.168.42.194:/ShareName


i'm not having this issue on other machines installed previously
i've tried re-installing Debian and nfs several times


Wylie!


Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread tomas
On Tue, Aug 16, 2022 at 04:20:36PM -0400, Greg Wooledge wrote:
> On Tue, Aug 16, 2022 at 03:58:30PM -0400, Lie Rock wrote:
> > So how is the process "create system users" performed when Linux/Debian
> > starts? What can be contributing to this error?
> 
> unicorn:~$ grep -ri 'create system users' /lib/systemd
> /lib/systemd/system/systemd-sysusers.service:Description=Create System Users

[...]

Good research, and "thank you" from a systemd-abstainer, that's
my way to learn, after all :)

I'd contribute my hunch: perhaps systemd is trying to get sysusers
up "too early", before the root file system is pivoted-in?

Feeding my search engine with "NFS root" and +systemd turns up a
bunch of interesting suggestions (e.g. network has to be up before
NFS has to be mounted, etc:).

Good luck... and tell us what it was ;-)

Cheers
-- 
t


signature.asc
Description: PGP signature


Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread Greg Wooledge
On Tue, Aug 16, 2022 at 03:58:30PM -0400, Lie Rock wrote:
> So how is the process "create system users" performed when Linux/Debian
> starts? What can be contributing to this error?

unicorn:~$ grep -ri 'create system users' /lib/systemd
/lib/systemd/system/systemd-sysusers.service:Description=Create System Users

unicorn:~$ systemctl cat systemd-sysusers.service
[...]
Documentation=man:sysusers.d(5) man:systemd-sysusers.service(8)
[...]
ExecStart=systemd-sysusers

unicorn:~$ man systemd-sysusers
[...]
   systemd-sysusers creates system users and groups, based on the file
   format and location specified in sysusers.d(5).

That's enough to get you started down the rabbit hole(s).  You should
also definitely check the logs on your system (e.g.
 journalctl -u systemd-sysusers) to see what *exactly* went wrong.



"Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread Lie Rock
Hi,

I'm trying to bring up the Debian 10 root file system on an ARM SoC board.
When the rootfs was in an SD card the board worked well. When I put the
rootfs on an NFS server and tried to boot the board through NFS mount, it
reported error through serial port:

[FAILED] Failed to start Create System Users.
See 'systemctl status systemd-sysusers.service' for details.

And this is the only error message printed out. The board went all the way
to the login input, but I could not login with any of the preset accounts
including root (because no users have been created as it suggested?), and I
didn't see any way to run commands to check system status for details.

So how is the process "create system users" performed when Linux/Debian
starts? What can be contributing to this error?

Any suggestions would be greatly appreciated.

Rock


Re: Mounting NFS share from Synology NAS

2022-02-10 Thread Anssi Saari
Andrei POPESCU  writes:

> Are you sure you're actually using NFSv4? (check 'mount | grep nfs').

Yes, I'm sure. The mount output is all 'host:path on mountpoint type
nfs4', and the options include vers=4.2.

Also the bog standard auto.net these days has code to mount using NFSv4.

> In my experience in order to make NFSv4 work it's necessary to configure 
> a "root" share with fsid=0 or something like that and mount
> the actual shares using a path relative to it (my NFS "server" is 
> currently down, so I can't check exactly what I did).

That's the weirdness I meant. But it's not true these days and hasn't
been for years. Or maybe it's hidden? But I can do, for example:

# mount zippy:/tmp /mnt/foo  
# mount|grep zip
zippy:/tmp on /mnt/foo type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)

I don't have anything about that in fstab. This is actually a tmpfs
mount where I have fsid=something in /etc/exports but I don't know if
that's required today. zfs mounts the same way from zippy and I don't
have any fsid stuff there. Of course it could be handled automatically.

Autofs mounts a little differently, this is like the old way:

zippy:/ on /net/zippy type nfs4 
(rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)
zippy:/tmp on /net/zippy/tmp type nfs4 
(rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)

> As far as I know ZFS is using the kernel NFS server, it's just providing 
> a convenient method to share / unshare so it's not necessary to mess 
> with /etc/exports if all your shares are ZFS data sets.

Good to know.



Re: Mounting NFS share from Synology NAS

2022-02-09 Thread Andrei POPESCU
On Mi, 02 feb 22, 13:49:38, Anssi Saari wrote:
> Greg Wooledge  writes:
> 
> > I'm unclear on how NFS v4 works.  Everything I've read about it in the
> > past says that you have to set up a user mapping, which is shared by
> > the client and the server.  And that this is *not* optional, and *is*
> > exactly as much of a pain as it sounds.
> 
> I've never done that, as far as I remember. NFS (NFSv4, these days)
> mounts in my home network use autofs but I haven't done anything there
> either specifically for NFS of any version. I remember there was some
> weirdness at some point with NFSv4 and I didn't bother with it much. I
> had maybe two computers back then so not much of network. But over the
> years my NFS mounts just became NFSv4.

Are you sure you're actually using NFSv4? (check 'mount | grep nfs').

In my experience in order to make NFSv4 work it's necessary to configure 
a "root" share with fsid=0 or something like that and mount
the actual shares using a path relative to it (my NFS "server" is 
currently down, so I can't check exactly what I did).

> Access for me is by UID. Service is by the kernel driver or in the case
> of zfs, the NFS service it provides. I've thought about setting up
> Kerberos but haven't gotten around to it. One thing is, I don't know if
> Kerberos would work with the NFS service zfs provides? No big deal
> either way though.

As far as I know ZFS is using the kernel NFS server, it's just providing 
a convenient method to share / unshare so it's not necessary to mess 
with /etc/exports if all your shares are ZFS data sets.

(zfs-utils Suggests: nfs-kernel-server and 
https://wiki.debian.org/ZFS#NFS_shares implies the same)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Mounting NFS share from Synology NAS

2022-02-03 Thread Christian Britz



On 2022-02-03 08:52 UTC+0100, Tixy wrote:
> On Wed, 2022-02-02 at 17:06 -0500, Bob Weber wrote:
> [...]
>> I second the sshfs approach.   I use it between several Debian servers and 
>> have 
>> been happy with the results.  Once setup in the fstab a click in a GUI or 
>> mount 
>> command on the cli mounts the remote server on a directory specified in the 
>> fstab.
> 
> If you use a GUI, you can also have gvfs installed (default in some
> desktops) and then in your file manager just use the directory path
> like ssh://user@host/ with user being optional if you want the current
> user. You can add bookmarks in your filemanager for the paths you use
> frequently.
> 
> I use this for quick access for copying and editing files on other
> machines. For proper automated backup and bulk storage I use NFS on a
> NAS/router box (an ARM based computer running Debian).

I use x-systemd.automount option in the fstab and have the path
available in my file manager.



Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Tixy
On Wed, 2022-02-02 at 17:06 -0500, Bob Weber wrote:
[...]
> I second the sshfs approach.   I use it between several Debian servers and 
> have 
> been happy with the results.  Once setup in the fstab a click in a GUI or 
> mount 
> command on the cli mounts the remote server on a directory specified in the 
> fstab.

If you use a GUI, you can also have gvfs installed (default in some
desktops) and then in your file manager just use the directory path
like ssh://user@host/ with user being optional if you want the current
user. You can add bookmarks in your filemanager for the paths you use
frequently.

I use this for quick access for copying and editing files on other
machines. For proper automated backup and bulk storage I use NFS on a
NAS/router box (an ARM based computer running Debian).

-- 
Tixy



Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Christian Britz



On 02.02.22 23:06, Bob Weber wrote:
> On 2/2/22 07:36, gene heskett wrote:
>>
>> Sounds like how my network grew, with more cnc'd machines added. But I 
>> was never able to make NFSv4 Just Work for anything for more than the 
>> next reboot of one of the machines.  Then I discovered sshfs which Just 
>> Does anything the user can do, it does not allow root access, but since I 
>> am the same user number on all machines, I just put whatever needs root 
>> in a users tmp dir then ssh login to that machine, become root and then 
>> put the file wherever it needs to go. I can do whatever needs done, to 
>> any of my machines, currently 7, from a comfy office chair.
>> Stay well all.
>>
>> Cheers, Gene Heskett.
> 
> I second the sshfs approach.   I use it between several Debian servers
> and have been happy with the results.  Once setup in the fstab a click
> in a GUI or mount command on the cli mounts the remote server on a
> directory specified in the fstab.
> 
> A sample of a line in the fstab (check docs for more options):
[...]

Thanks Gene and Bob, I didn't think of sshfs, although I have used it on
other occasions in the past. It works perfectly and I have disabled the
other file share options on the NAS. The performance feels even better
compared to SMB and NFS.

In the long term, I will setup my own Debian based home server, there
are many usefull suggestions in the other thread.



Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Bob Weber

On 2/2/22 07:36, gene heskett wrote:


Sounds like how my network grew, with more cnc'd machines added. But I
was never able to make NFSv4 Just Work for anything for more than the
next reboot of one of the machines.  Then I discovered sshfs which Just
Does anything the user can do, it does not allow root access, but since I
am the same user number on all machines, I just put whatever needs root
in a users tmp dir then ssh login to that machine, become root and then
put the file wherever it needs to go. I can do whatever needs done, to
any of my machines, currently 7, from a comfy office chair.
Stay well all.

Cheers, Gene Heskett.


I second the sshfs approach.   I use it between several Debian servers and have 
been happy with the results.  Once setup in the fstab a click in a GUI or mount 
command on the cli mounts the remote server on a directory specified in the fstab.


A sample of a line in the fstab (check docs for more options):

sshfs#r...@172.16.0.xxx:/   /mnt/deb-test  fuse user,noauto,rw    0   0

The user at the remote system is root in this example.  Not a good idea unless 
you are the only one who can login to your system. I use ssh keys always.  If 
they are created without a password sshfs won't ask for one when it is mounted 
(I need this for my backup system Backuppc).  I even use sshfs to access a 
Digital Ocean droplet I have over the internet.


The current NAS you have might work with sshfs if their ssh server supports 
SFTP.


--


*...Bob*

Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Christian Britz



On 2022-02-02 02:01 UTC+0100, Christian Britz wrote:

> Thank you, that was the right hint, the solution to get it work (with
> NFS4 support) with IP based "security" was:

[...]

> Is my assumption right, that I would have to setup a Kerberos server to
> achieve real security?

I am thinking about going the Kerberos path indeed. Sometimes there are
guests on my LAN and I think it is a good opportunity to broaden my
knowledge.

Unfortunately Synology does not ship a Kerberos server and my DS115j
model is not capable of running docker. In the long term, I might
replace the NAS.

Now I am thinking about running the Kerberos server components on my
client. Is it possible at all to run server and client on the same
network interface?

Any hints would be welcome.



Re: Mounting NFS share from Synology NAS

2022-02-02 Thread gene heskett
On Wednesday, February 2, 2022 6:49:38 AM EST Anssi Saari wrote:
> Greg Wooledge  writes:
> > I'm unclear on how NFS v4 works.  Everything I've read about it in
> > the
> > past says that you have to set up a user mapping, which is shared by
> > the client and the server.  And that this is *not* optional, and *is*
> > exactly as much of a pain as it sounds.
> 
> I've never done that, as far as I remember. NFS (NFSv4, these days)
> mounts in my home network use autofs but I haven't done anything there
> either specifically for NFS of any version. I remember there was some
> weirdness at some point with NFSv4 and I didn't bother with it much. I
> had maybe two computers back then so not much of network. But over the
> years my NFS mounts just became NFSv4.

Sounds like how my network grew, with more cnc'd machines added. But I 
was never able to make NFSv4 Just Work for anything for more than the 
next reboot of one of the machines.  Then I discovered sshfs which Just 
Does anything the user can do, it does not allow root access, but since I 
am the same user number on all machines, I just put whatever needs root 
in a users tmp dir then ssh login to that machine, become root and then 
put the file wherever it needs to go. I can do whatever needs done, to 
any of my machines, currently 7, from a comfy office chair.

> Access for me is by UID. Service is by the kernel driver or in the case
> of zfs, the NFS service it provides. I've thought about setting up
> Kerberos but haven't gotten around to it. One thing is, I don't know
> if Kerberos would work with the NFS service zfs provides? No big deal
> either way though.
> 
> > I'm looking at <https://help.ubuntu.com/community/NFSv4Howto> for
> > example and there's discussion back and forth on the page about how
> > the user mapping is not working as expected, and try this and that,
> > and see this bug
> 
> It's a wiki by random people. Last updated in 2017, looks like. Did you
> think it has particular relevance to Debian or NFS today?
> 
> .
Stay well all.

Cheers, Gene Heskett.
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>





Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Anssi Saari
Greg Wooledge  writes:

> I'm unclear on how NFS v4 works.  Everything I've read about it in the
> past says that you have to set up a user mapping, which is shared by
> the client and the server.  And that this is *not* optional, and *is*
> exactly as much of a pain as it sounds.

I've never done that, as far as I remember. NFS (NFSv4, these days)
mounts in my home network use autofs but I haven't done anything there
either specifically for NFS of any version. I remember there was some
weirdness at some point with NFSv4 and I didn't bother with it much. I
had maybe two computers back then so not much of network. But over the
years my NFS mounts just became NFSv4.

Access for me is by UID. Service is by the kernel driver or in the case
of zfs, the NFS service it provides. I've thought about setting up
Kerberos but haven't gotten around to it. One thing is, I don't know if
Kerberos would work with the NFS service zfs provides? No big deal
either way though.

> I'm looking at <https://help.ubuntu.com/community/NFSv4Howto> for example
> and there's discussion back and forth on the page about how the user
> mapping is not working as expected, and try this and that, and see this
> bug

It's a wiki by random people. Last updated in 2017, looks like. Did you
think it has particular relevance to Debian or NFS today?



Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Christian Britz



On 2022-02-01 17:28 UTC+0100, Henning Follmann wrote:
> On Tue, Feb 01, 2022 at 04:32:57PM +0100, Christian Britz wrote:
>> 2. Accessing the mounted share with my personal user: The access rights
>> for /Daten look right, the user on the NAS has the same name as the user
>> on my machine. But:
> 
> And how about the userId?
> The username does not mean anything. The access control is 
> based on Id.

Thank you, that was the right hint, the solution to get it work (with
NFS4 support) with IP based "security" was:

1. Recreate the user and group on the NAS web interface with the same
names as on my localhost.
2. Assign the right group via SSH to the user on the NAS.
3. chown -R the files on the NAS to the new user and group.
4. Change UID and GID on my localhost to match the UID and GID on the
NAS (I read somewhere that the Synology crap has problems if you change
UID and GID on the server).
5. Fix ownership of files on localhost
=> Works!

Drawback 1, compared to my previous SMB mount method: The NAS internal
sub-directories named "@eaDir" are visible when accessing the share via
NFS. Workaround: Deleting them. Should be relatively safe according to
the Web. In the worst case, they get recreated.

Drawback 2: Security is only relying on the client IP. This would
probably be not acceptable, if I were not the only user on my network.
Is my assumption right, that I would have to setup a Kerberos server to
achieve real security?

Big advantage, compared to my previous SMB mount method: the modified
timestamp is finally shown correctly. This didn't seem to work correctly
with SMB.

Thank you all.



Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Christian Britz



On 2022-02-01 17:36 UTC+0100, Bob Weber wrote:
> On 2/1/22 10:32, Christian Britz wrote:
>> This is my entry in /etc/fstab:
>> diskstation:/volume1/Medien /Daten nfs
>> nfsvers=4,rw,x-systemd.automount,noauto 0 0
>>
> Have you tried the user option in fstab? 
> 
> user - Permit any user to mount the filesystem.
> 
> nouser - Only permit root to mount the filesystem. This is also a
> default setting.

This works, one step further! :-)



Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Tixy
On Tue, 2022-02-01 at 11:43 -0500, Greg Wooledge wrote:
[...]
> I'm unclear on how NFS v4 works.  Everything I've read about it in the
> past says that you have to set up a user mapping, which is shared by
> the client and the server.  And that this is *not* optional, and *is*
> exactly as much of a pain as it sounds.
> 
> I'm looking at <https://help.ubuntu.com/community/NFSv4Howto> for example
> and there's discussion back and forth on the page about how the user
> mapping is not working as expected, and try this and that, and see this
> bug
> 
> I've never actually used NFS v4 myself.  In fact, at work I have to go out
> of my way to *prevent* it from being used, because some of the NFS servers
> to which I connect (which are not under my control) don't support it.
> 
> The comment about the access being based on UID is certainly true for
> NFS v3, though.  NFS v3 ("regular, traditional NFS") controls mounting
> options by the host's IP address, and controls file system access by
> UID and GID.  There may be some way to circumvent that, but I've never
> done it.  I just make sure the UIDs and GIDs match, the way you're
> supposed to.
> 
> For a home network, I can't really imagine a need to go through all of
> the NFS v4 hoops.  I would just use NFS v3 with synchronized UIDs.

Perhaps because I didn't know better, but I used NFSv4 since first
setting up my home network. My install notes for my clients just
have...

   Edit /etc/default/nfs-common to have

NEED_IDMAPD=yes

   Edit /etc/idmapd.conf, make sure these aren't commented out or missing...

Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs # before jessie this was 
/var/lib/nfs/rpc_pipefs

Presumably that's the voodoo I found on the internet when I set things
up many years ago. I do have all my UIDs and GIDs matching across all
machines at home. Everything works seamlessly here. (On the server the
exports have the no_root_squash option, which lets root use the NFS
filesystem too.)

-- 
Tixy



Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Greg Wooledge
On Tue, Feb 01, 2022 at 11:28:55AM -0500, Henning Follmann wrote:
> On Tue, Feb 01, 2022 at 04:32:57PM +0100, Christian Britz wrote:
> > This is my entry in /etc/fstab:
> > diskstation:/volume1/Medien /Daten nfs
> > nfsvers=4,rw,x-systemd.automount,noauto 0 0
> > 
> > Mounting only works as root, I guess this is expected without further
> > configuration.
> > 
> > 1. Security: It seems that the only security check is the check for my
> > IP address. Is it possible to achieve more without dealing with Kerberos?
> > 
> > 2. Accessing the mounted share with my personal user: The access rights
> > for /Daten look right, the user on the NAS has the same name as the user
> > on my machine. But:
> 
> And how about the userId?
> The username does not mean anything. The access control is 
> based on Id.

I'm unclear on how NFS v4 works.  Everything I've read about it in the
past says that you have to set up a user mapping, which is shared by
the client and the server.  And that this is *not* optional, and *is*
exactly as much of a pain as it sounds.

I'm looking at <https://help.ubuntu.com/community/NFSv4Howto> for example
and there's discussion back and forth on the page about how the user
mapping is not working as expected, and try this and that, and see this
bug

I've never actually used NFS v4 myself.  In fact, at work I have to go out
of my way to *prevent* it from being used, because some of the NFS servers
to which I connect (which are not under my control) don't support it.

The comment about the access being based on UID is certainly true for
NFS v3, though.  NFS v3 ("regular, traditional NFS") controls mounting
options by the host's IP address, and controls file system access by
UID and GID.  There may be some way to circumvent that, but I've never
done it.  I just make sure the UIDs and GIDs match, the way you're
supposed to.

For a home network, I can't really imagine a need to go through all of
the NFS v4 hoops.  I would just use NFS v3 with synchronized UIDs.
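
For reference, a bare-bones sketch of that (export path, subnet, and
server name are hypothetical):

  # server: /etc/exports, then run 'exportfs -ra'
  /srv/media 192.168.1.0/24(rw,sync,no_subtree_check)

  # client: /etc/fstab, pinning protocol version 3
  server:/srv/media /mnt/media nfs nfsvers=3,rw 0 0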



Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Bob Weber

On 2/1/22 10:32, Christian Britz wrote:


This is my entry in /etc/fstab:
diskstation:/volume1/Medien /Daten nfs
nfsvers=4,rw,x-systemd.automount,noauto 0 0


Have you tried the user option in fstab?

user - Permit any user to mount the filesystem.

nouser - Only permit root to mount the filesystem. This is also a default 
setting.

--


*...Bob*

Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Henning Follmann
On Tue, Feb 01, 2022 at 04:32:57PM +0100, Christian Britz wrote:
> Hello,
> 
> I am playing with NFS on my home network for the first time and I have
> some difficulties/questions.
> 
> The server is a Synology NAS, it is based on Linux, supports NFS4 and
> gets configured by a web interface.
> The NAS offers a Kerberos authentification for NFS but I did not
> configure this. Instead, something called AUTH_SYS is enabled. Only one
> specific host is allowed to access the share.
> 
> 
> This is my entry in /etc/fstab:
> diskstation:/volume1/Medien /Daten nfs
> nfsvers=4,rw,x-systemd.automount,noauto 0 0
> 
> Mounting only works as root, I guess this is expected without further
> configuration.
> 
> 1. Security: It seems that the only security check is the check for my
> IP address. Is it possible to achieve more without dealing with Kerberos?
> 
> 2. Accessing the mounted share with my personal user: The access rights
> for /Daten look right, the user on the NAS has the same name as the user
> on my machine. But:

And how about the userId?
The username does not mean anything. The access control is 
based on Id.

> 
> ls -ahl /Daten/
> ls: cannot open directory '/Daten/': Permission denied
> 
> sudo ls -ahl /Daten/
> [sudo] password for xyz:
> total 340K
> drwxrwxrwx 14 xyz root  4.0K Jan 30 21:31 .
> drwxr-xr-x 19 root   root  4.0K Jan 24 09:58 ..
> drwxrwxrwx  5 xyz users 4.0K Jan 30 21:31 Directory1
> drwxrwxrwx  4 xyz users 4.0K Aug 10 10:28 Directory2
> 
> Why can't user xyz access the mountpoint?
> 
> Thank you for your support.
> 
> Regards,
> Christian
> 


-H

-- 
Henning Follmann   | hfollm...@itcfollmann.com



Mounting NFS share from Synology NAS

2022-02-01 Thread Christian Britz
Hello,

I am playing with NFS on my home network for the first time and I have
some difficulties/questions.

The server is a Synology NAS, it is based on Linux, supports NFS4 and
gets configured by a web interface.
The NAS offers Kerberos authentication for NFS but I did not
configure this. Instead, something called AUTH_SYS is enabled. Only one
specific host is allowed to access the share.


This is my entry in /etc/fstab:
diskstation:/volume1/Medien /Daten nfs
nfsvers=4,rw,x-systemd.automount,noauto 0 0

Mounting only works as root, I guess this is expected without further
configuration.

1. Security: It seems that the only security check is the check for my
IP address. Is it possible to achieve more without dealing with Kerberos?

2. Accessing the mounted share with my personal user: The access rights
for /Daten look right, the user on the NAS has the same name as the user
on my machine. But:

ls -ahl /Daten/
ls: cannot open directory '/Daten/': Permission denied

sudo ls -ahl /Daten/
[sudo] password for xyz:
total 340K
drwxrwxrwx 14 xyz root  4.0K Jan 30 21:31 .
drwxr-xr-x 19 root   root  4.0K Jan 24 09:58 ..
drwxrwxrwx  5 xyz users 4.0K Jan 30 21:31 Directory1
drwxrwxrwx  4 xyz users 4.0K Aug 10 10:28 Directory2

Why can't user xyz access the mountpoint?

Thank you for your support.

Regards,
Christian



Re: systemd nfs mount blocked until first entered

2021-07-02 Thread Greg Wooledge
On Fri, Jul 02, 2021 at 07:46:31PM +0200, Reiner Buehl wrote:
> I think I found a solution! For whatever reason, my network interface
> enp5s11 was not in the "auto" line in /etc/network/interfaces. After adding
> it there and a reboot, the filesystem is mounted correct without any of
> the  x-systemd mount options.

This happens a *lot*.



Re: systemd nfs mount blocked until first entered

2021-07-02 Thread Reiner Buehl
I think I found a solution! For whatever reason, my network interface
enp5s11 was not in the "auto" line in /etc/network/interfaces. After adding
it there and a reboot, the filesystem is mounted correct without any of
the  x-systemd mount options.
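
For anyone searching later, the stanza in question looks like this (a
sketch assuming DHCP; the "auto" keyword is what brings the interface up
during boot, before remote filesystems are mounted):

  # /etc/network/interfaces
  auto enp5s11
  iface enp5s11 inet dhcp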

On Fri, 2 Jul 2021 at 19:30, Reiner Buehl <reiner.bu...@gmail.com> wrote:

> Hello,
>
> this is the full unit:
>
> # /etc/systemd/system/vdr.service
> [Unit]
> Description=Video Disk Recorder
>
> Wants=systemd-udev-settle.service
> After=systemd-udev-settle.service
>
> [Service]
> Type=notify
> ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh "commands"
> ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh "reccmds"
> ExecStart=/usr/bin/vdr
> Restart=on-failure
> RestartPreventExitStatus=0 2
>
> [Install]
> WantedBy=multi-user.target
>
> # /etc/systemd/system/vdr.service.d/override.conf
> [Unit]
> After=remote-fs.target
> Requires=remote-fs.target
>
> I only added the x-systemd options to /etc/fstab because the filesystems
> where not mounted at boot time at all with the old fstab options that I
> used before the upgrade to Debian (I did use yavdr before - a distro that
> was based on a super old 12.x version of Ubuntu). There I just used
>
> 192.168.1.2:/video /video   nfs
> defaults,rsize=8192,wsize=8192,soft,nolock,noatime  0   0
>
> If I try with this entry, the auto-generated video.mount unit fails as it
> seems to be started too early:
>
> ● video.mount - /video
>Loaded: loaded (/etc/fstab; generated)
>Active: failed (Result: exit-code) since Fri 2021-07-02 19:26:02 CEST;
> 2min 46s ago
> Where: /video
>  What: 192.168.1.2:/video
>  Docs: man:fstab(5)
>man:systemd-fstab-generator(8)
>
> Jul 02 19:26:02 vdr systemd[1]: Mounting /video...
> Jul 02 19:26:02 vdr mount[403]: mount.nfs: Network is unreachable
> Jul 02 19:26:02 vdr systemd[1]: video.mount: Mount process exited,
> code=exited, status=32/n/a
> Jul 02 19:26:02 vdr systemd[1]: video.mount: Failed with result
> 'exit-code'.
> Jul 02 19:26:02 vdr systemd[1]: Failed to mount /video.
>
> Best regards,
> Reiner
>
> On Fri, 2 Jul 2021 at 19:15, Reco wrote:
>
>> Hi.
>>
>> On Fri, Jul 02, 2021 at 06:12:58PM +0200, Reiner Buehl wrote:
>> > I have a directory that is mounted via NFS from a remote server.
>>
>> Actually, you have an autofs mountpoint, because you set
>> x-systemd.automount option in fstab.
>> Only if something starts using that mountpoint an NFS filesystem should
>> be mounted there.
>>
>> In other words - you do not require your NFS filesystem to be mounted
>> at boot time, and thus remote-fs.target does not include your NFS
>> filesystem.
>>
>>
>> > If I boot the vdr daemon fails during startup with the error message
>>
>> In other words, vdr fails to trigger automounting of the filesystem in
>> question. As usual with journald, the actual reason of this is not
>> present in this log.
>>
>>
>> > The vdr.service has an override of
>> >
>> > [Unit]
>> > After=remote-fs.target
>> > Requires=remote-fs.target
>> >
>> > to ensure that the filesystem is mounted.
>>
>> These dependencies are useless for your service given the current state
>> of your fstab.
>> The reason being - "autofs" filesystems belong to local-fs.target, not
>> remote-fs.target, and explicitly depending on local-fs.target is useless
>> anyway (it's one of the default dependencies for the most units).
>> What you probably need here is a dependency for a .mount unit
>> corresponding to your NFS filesystem.
>>
>>
>> > If I try to restart vdr.service, it fails again with the same error but
>> if
>> > I just cd to the directory and then try to restart it, it starts and
>> works
>> > fine.
>>
>> Can you show the result of "systemctl cat vdr" please?
>>
>> > What is systemd doing here that blocks the mount point for the vdr
>> process?
>>
>> Many things are possible here. Perhaps you have ProtectSystem=full set
>> in the unit, or PrivateMounts=true set in there.
>>
>> > Do I need different fstab options?
>>
>> It depends. x-systemd.automount is useful, because it does not require
>> your NFS server to be present at boot time.
>> I'll refrain from suggesting certain hacks for now, I'd like to see your
>> unit in full first.
>>
>> Reco
>>
>>


Re: systemd nfs mount blocked until first entered

2021-07-02 Thread Reiner Buehl
Hello,

this is the full unit:

# /etc/systemd/system/vdr.service
[Unit]
Description=Video Disk Recorder

Wants=systemd-udev-settle.service
After=systemd-udev-settle.service

[Service]
Type=notify
ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh "commands"
ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh "reccmds"
ExecStart=/usr/bin/vdr
Restart=on-failure
RestartPreventExitStatus=0 2

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/vdr.service.d/override.conf
[Unit]
After=remote-fs.target
Requires=remote-fs.target

I only added the x-systemd options to /etc/fstab because the filesystems
were not mounted at boot time at all with the old fstab options that I
used before the upgrade to Debian (I did use yavdr before - a distro that
was based on a super old 12.x version of Ubuntu). There I just used

192.168.1.2:/video /video   nfs
defaults,rsize=8192,wsize=8192,soft,nolock,noatime  0   0

If I try with this entry, the auto-generated video.mount unit fails as it
seems to be started too early:

● video.mount - /video
   Loaded: loaded (/etc/fstab; generated)
   Active: failed (Result: exit-code) since Fri 2021-07-02 19:26:02 CEST;
2min 46s ago
Where: /video
 What: 192.168.1.2:/video
 Docs: man:fstab(5)
   man:systemd-fstab-generator(8)

Jul 02 19:26:02 vdr systemd[1]: Mounting /video...
Jul 02 19:26:02 vdr mount[403]: mount.nfs: Network is unreachable
Jul 02 19:26:02 vdr systemd[1]: video.mount: Mount process exited,
code=exited, status=32/n/a
Jul 02 19:26:02 vdr systemd[1]: video.mount: Failed with result 'exit-code'.
Jul 02 19:26:02 vdr systemd[1]: Failed to mount /video.

Best regards,
Reiner

On Fri, 2 Jul 2021 at 19:15, Reco wrote:

> Hi.
>
> On Fri, Jul 02, 2021 at 06:12:58PM +0200, Reiner Buehl wrote:
> > I have a directory that is mounted via NFS from a remote server.
>
> Actually, you have an autofs mountpoint, because you set the
> x-systemd.automount option in fstab.
> Only if something starts using that mountpoint will an NFS filesystem
> be mounted there.
>
> In other words, you do not require your NFS filesystem to be mounted
> at boot time, and thus remote-fs.target does not include your NFS
> filesystem.
>
>
> > If I boot, the vdr daemon fails during startup with the error message
>
> In other words, vdr fails to trigger automounting of the filesystem in
> question. As usual with journald, the actual reason for this is not
> present in this log.
>
>
> > The vdr.service has an override of
> >
> > [Unit]
> > After=remote-fs.target
> > Requires=remote-fs.target
> >
> > to ensure that the filesystem is mounted.
>
> These dependencies are useless for your service given the current state
> of your fstab.
> The reason being - "autofs" filesystems belong to local-fs.target, not
> remote-fs.target, and explicitly depending on local-fs.target is useless
> anyway (it's one of the default dependencies for most units).
> What you probably need here is a dependency for a .mount unit
> corresponding to your NFS filesystem.
>
>
> > If I try to restart vdr.service, it fails again with the same error but
> if
> > I just cd to the directory and then try to restart it, it starts and
> works
> > fine.
>
> Can you show the result of "systemctl cat vdr" please?
>
> > What is systemd doing here that blocks the mount point for the vdr
> process?
>
> Many things are possible here. Perhaps you have ProtectSystem=full set
> in the unit, or PrivateMounts=true set in there.
>
> > Do I need different fstab options?
>
> It depends. x-systemd.automount is useful, because it does not require
> your NFS server to be present at boot time.
> I'll refrain from suggesting certain hacks for now, I'd like to see your
> unit in full first.
>
> Reco
>
>


Re: systemd nfs mount blocked until first entered

2021-07-02 Thread Reco
Hi.

On Fri, Jul 02, 2021 at 06:12:58PM +0200, Reiner Buehl wrote:
> I have a directory that is mounted via NFS from a remote server.

Actually, you have an autofs mountpoint, because you set the
x-systemd.automount option in fstab.
Only if something starts using that mountpoint will an NFS filesystem
be mounted there.

In other words, you do not require your NFS filesystem to be mounted
at boot time, and thus remote-fs.target does not include your NFS
filesystem.
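
One way to see this (a sketch from memory, not output from this
system): before anything touches the mountpoint,

$ findmnt /video
TARGET SOURCE    FSTYPE OPTIONS
/video systemd-1 autofs rw,relatime,fd=...

shows only the autofs trap; the nfs entry appears stacked on top of it
after the first access.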


> If I boot the vdr daemon fails during startup with the error message

In other words, vdr fails to trigger automounting of the filesystem in
question. As usual with journald, the actual reason for this is not
present in this log.


> The vdr.service has an override of
> 
> [Unit]
> After=remote-fs.target
> Requires=remote-fs.target
> 
> to ensure that the filesystem is mounted.

These dependencies are useless for your service given the current state
of your fstab.
The reason being - "autofs" filesystems belong to local-fs.target, not
remote-fs.target, and explicitly depending on local-fs.target is useless
anyway (it's one of the default dependencies for most units).
What you probably need here is a dependency for a .mount unit
corresponding to your NFS filesystem.
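
A minimal sketch of such a dependency (video.mount is the unit name that
systemd-fstab-generator derives from the /video mountpoint):

# /etc/systemd/system/vdr.service.d/override.conf
[Unit]
RequiresMountsFor=/video

RequiresMountsFor= is equivalent to adding Requires=video.mount and
After=video.mount by hand.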


> If I try to restart vdr.service, it fails again with the same error but if
> I just cd to the directory and then try to restart it, it starts and works
> fine.

Can you show the result of "systemctl cat vdr" please?

> What is systemd doing here that blocks the mount point for the vdr process?

Many things are possible here. Perhaps you have ProtectSystem=full set
in the unit, or PrivateMounts=true set in there.

> Do I need different fstab options?

It depends. x-systemd.automount is useful, because it does not require
your NFS server to be present at boot time.
I'll refrain from suggesting certain hacks for now, I'd like to see your
unit in full first.

Reco



Re: systemd nfs mount blocked until first entered

2021-07-02 Thread Greg Wooledge
On Fri, Jul 02, 2021 at 06:12:58PM +0200, Reiner Buehl wrote:
> I have a directory that is mounted via NFS from a remote server. The mount
> is done via an /etc/fstab entry like this:
> 
> 192.168.1.2:/video /video   nfs
> defaults,x-systemd.automount,x-systemd.requires=network-online.target,x-systemd.device-timeout=10,rsize=8192,wsize=8192,soft,nolock,noatime
>  0   0

That's a lot of options.  I wonder what they all do.

If you simply boot the machine and then login and run 'df', do you see
the file system mounted?

I'm wondering in particular about that x-systemd.automount option.  Does
that mean something like "don't mount this until I think someone really
wants it"?

https://manpages.debian.org/buster/systemd/systemd.automount.5.en.html
says that these are "activated when the automount path is accessed", but
it doesn't say what counts as "accessed".

I wonder if removing the x-systemd.automount option would help you.


The other thing you'll want to look at is how your network interface
is configured.  You've got x-systemd.requires=network-online.target
which *sounds* reasonable, but only if the network interface is actually
configured to be waited upon.

If you're using /etc/network/interfaces (the Debian default) for your
interface config, make sure the interface is marked as "auto" rather
than as "allow-hotplug"; the latter causes systemd NOT to wait for
the interface.
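
A sketch of the difference (interface name borrowed from elsewhere in
this thread; the addressing method is an assumption):

# waited for at boot; network-online.target blocks until it is up
auto enp5s11
iface enp5s11 inet dhcp

# raised only on a udev hotplug event; systemd does not wait for it
allow-hotplug enp5s11
iface enp5s11 inet dhcp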



systemd nfs mount blocked until first entered

2021-07-02 Thread Reiner Buehl
Hi all,

I have a directory that is mounted via NFS from a remote server. The mount
is done via an /etc/fstab entry like this:

192.168.1.2:/video /video   nfs
defaults,x-systemd.automount,x-systemd.requires=network-online.target,x-systemd.device-timeout=10,rsize=8192,wsize=8192,soft,nolock,noatime
 0   0

If I boot, the vdr daemon fails during startup with the error message:

● vdr.service - Video Disk Recorder
   Loaded: loaded (/etc/systemd/system/vdr.service; enabled; vendor preset:
enabled)
  Drop-In: /etc/systemd/system/vdr.service.d
   └─override.conf
   Active: failed (Result: exit-code) since Thu 2021-07-01 23:27:25 CEST;
8h ago
  Process: 523 ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh commands
(code=exited, status=0/SUCCESS)
  Process: 533 ExecStartPre=/bin/sh /usr/lib/vdr/merge-commands.sh reccmds
(code=exited, status=0/SUCCESS)
  Process: 543 ExecStart=/usr/bin/vdr (code=exited, status=2)
 Main PID: 543 (code=exited, status=2)
Jul 01 23:27:25 vdr systemd[1]: Starting Video Disk Recorder...
Jul 01 23:27:25 vdr vdr[543]: [543] ERROR: can't access /video
Jul 01 23:27:25 vdr vdr[543]: vdr: can't access video directory /video
Jul 01 23:27:25 vdr systemd[1]: vdr.service: Main process exited,
code=exited, status=2/INVALIDARGUMENT
Jul 01 23:27:25 vdr systemd[1]: vdr.service: Failed with result 'exit-code'.
Jul 01 23:27:25 vdr systemd[1]: Failed to start Video Disk Recorder.

The vdr.service has an override of

[Unit]
After=remote-fs.target
Requires=remote-fs.target

to ensure that the filesystem is mounted.

If I try to restart vdr.service, it fails again with the same error but if
I just cd to the directory and then try to restart it, it starts and works
fine.

What is systemd doing here that blocks the mount point for the vdr process?
Do I need different fstab options?

Best regards,
Reiner


Re: Permissions on NFS mounts

2020-12-10 Thread Michael Stone

On Thu, Dec 10, 2020 at 04:48:36PM +0300, Reco wrote:

I'd just like to remind you of the original question:

Is there a way to put an account "beyond use", in any way including su,
sudo etc,

*In any way* includes the way I've described above IMO.


So you're asking if there's a way to prevent someone from using sudo to 
do something sudo has been specifically configured to do? Kind of a 
weird question, IMO. If you don't want to allow someone to sudo to a 
particular user then...don't configure sudo to allow them to do that.


Also worth pointing out that having a passwd entry isn't even relevant 
to whether root can setuid. At some point if you've provided enough rope 
then setting a bunch of artificial constraints for the sake of argument 
is just a waste of time.


# id
uid=0(root) gid=0(root) groups=0(root)
# id 1234
id: ‘1234’: no such user
# python3 -c 'import os; os.setuid(1234); os.execl("/bin/bash", "bash")'
$ id
uid=1234 gid=0(root) groups=0(root)



Re: Permissions on NFS mounts

2020-12-10 Thread Michael Stone

On Thu, Dec 10, 2020 at 10:42:36AM -0500, Greg Wooledge wrote:

In the context of the original question, having a consistent set of
local user accounts (name/UID pairs) across all of your systems in
an NFS environment is useful for making sure all files have consistent
ownership.  Even on the systems where, say, charlie will never log in,
seeing that the files in /home/charlie are owned by user "charlie" is
helpful.


It's practically impossible to sync everything on a modern system in the 
presence of dynamically allocated IDs. The best you can hope for is to sync 
a certain *range* of IDs and by convention only use IDs in that range 
within NFS exports. If something outside that range happens to sneak 
into the export it'll look weird, but has no real effect on security. 
(If you're using sec=sys on an NFS mount you have no security outside of 
what the client chooses to implement.)


Historically this could be done by being diligent in manually creating 
passwd entries, via yp/nis to distribute a common passwd file, or via 
various configuration management schemes to automate local passwd file 
management. In most normal (heterogeneous) environments these only 
managed a certain range, and trying to sync system users was simply not 
done because it was harder than it was worth.
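
A hedged sketch of that managed-range idea (the name and the 5000+
range are mine, not from this thread): create shared accounts with
pinned IDs identically on every host, and leave system accounts alone:

# run the same commands on every machine that touches the export
groupadd -g 5001 charlie
useradd -u 5001 -g 5001 -m charlie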




Re: Permissions on NFS mounts

2020-12-10 Thread Michael Stone

On Wed, Dec 09, 2020 at 03:38:21PM -0500, Paul M Foster wrote:

I have two users on the client: paulf 1000 and nancyf 1001. On the
server, I have two users: pi 1000 and paulf 1001. I can mount the NFS
share from the server to /mnt on my client. But any files belonging to
me (user 1001 on the server) look like they belong to nancy (user 1001
on the client). More importantly, if I copy files to this share from the
client, they will look like they belong to pi (user 1000) on the server.

Is there some way in the /etc/exports file to adjust the parameters so
that files retain my ownership on the server?


Traditional NFS depends on the uid/gid matching across all the systems 
in a tightly controlled local network. Your solution involves changing 
the IDs so they match.
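
A sketch of that renumbering for the setup described above (2001 is an
arbitrary unused UID; run the equivalent on both machines so paulf ends
up identical everywhere):

usermod -u 2001 paulf
# usermod only chowns the home directory; fix anything else by hand,
# substituting the old UID (1001 on the server, 1000 on the client)
find / -xdev -uid 1001 -exec chown -h paulf {} +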


The newer model for NFS depends on cryptographic authentication 
(generally kerberos) of requests rather than assuming that everything is 
trusted and consistently configured. In this model you can have the 
uid/gid be random, but you need a kerberos server.


It is theoretically possible to do uid mapping without the 
authentication component, but that's all disabled by default and I'm not 
sure how current any of the directions or even the code is. You'd need 
to set up static maps in /etc/idmapd.conf and set 
nfs4_disable_idmapping=0 on the nfsd module. Also make sure you're using 
nfs4 and not nfs3. "idmapd.conf" and "nfs4_disable_idmapping" should be 
good google keywords to find instructions.
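
To give a flavor of what those directions describe (hedged; as said
above, I'm not sure how current this is, and the domain and user names
are placeholders):

# /etc/idmapd.conf, on both ends
[General]
Domain = example.lan

[Translation]
Method = static,nsswitch

[Static]
paulf@example.lan = paulf

# /etc/modprobe.d/nfs-idmap.conf on the server
options nfsd nfs4_disable_idmapping=0
# and on the client
options nfs nfs4_disable_idmapping=0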


Depending on your use case, you might also find that running samba and 
using cifs rather than nfs works better for you. (Or not.) It has a different 
authentication model and interface with its own pros and cons.




Re: Permissions on NFS mounts

2020-12-10 Thread Greg Wooledge
On Thu, Dec 10, 2020 at 03:35:50PM +, Tixy wrote:
> Why would you execute sudo or su on the target machine to change to one
> of these unneeded users, presumably you can do whatever mischief is
> your aim by using the account you are executing su or sudo from. Or by
> changing to another valid user on that machine if you are a legitimate
> user and were trying to cover your tracks.

If you have full sudo access, you're *already* at the top of the food
chain.  You can create a new user and switch to it.  You can delete
users.  You can lock and unlock users.  You can do literally everything,
because you're the superuser.

Putting additional entries in the passwd file is not a security issue,
unless those entries have guessable passwords, or some other means of
logging in as them from a remote system, or from a different non-root
user account.

Additional entries in passwd are useful for *lots* of things, such as
running a service as a UID that has no other access.  They are not a
reduction in security.  Properly used, they can increase security.
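
For example (a sketch; the account name is made up), a Debian service
account that nobody can log in as:

adduser --system --group --no-create-home --shell /usr/sbin/nologin mydaemon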

In the context of the original question, having a consistent set of
local user accounts (name/UID pairs) across all of your systems in
an NFS environment is useful for making sure all files have consistent
ownership.  Even on the systems where, say, charlie will never log in,
seeing that the files in /home/charlie are owned by user "charlie" is
helpful.



Re: Permissions on NFS mounts

2020-12-10 Thread David Wright
On Thu 10 Dec 2020 at 16:48:36 (+0300), Reco wrote:
> On Thu, Dec 10, 2020 at 03:36:47PM +0200, Andrei POPESCU wrote:
> > At least on Debian sudo has to be explicitly configured to allow a 
> > regular user to use '-u' with another user name. We can only assume the 
> > admin had good reasons for that, possibly on purpose (see below).
> 
> You're correct here, one has to explicitly allow such activity in
> sudoers in Debian and just about any OS I've encountered over the years
> (assuming it has sudo, of course).
> 
> I'd just like to remind you of the original question:
> 
> Is there a way to put an account "beyond use", in any way including su,
> sudo etc,
> 
> *In any way* includes the way I've described above IMO.

The original question was almost a textbook example of the X Y problem.

The opening statement says "you'll inevitably end up with situations
where users are created on some of the machines only for the purpose
of keeping the IDs in synch", and that's wrong. So why try to solve it.
Fortunately, this statement reveals X (which would be unreported in a
true textbook example).

Your reminder of the "original question" just quotes part of Mark's
attempted solution to problem Y, namely creating an account that's
barred. The answer to the real "original question" is to avoid
creating those accounts at all—then there's no need to bar them.

Cheers,
David.


