Re: Debian on raspberrypi: failed to configure wlan0

2019-12-18 Thread Andrei POPESCU
On Wed, 18 Dec 19, 18:34:03, Franco Martelli wrote:
> 
> Thanks for your answer. I've just solved it thanks to reading this link
> [1]: it was the gateway line; once commented out, everything works fine.
> Now the wlan0 configuration file is:

Apparently with ifupdown[2] you can't have the same default gateway for 
several interfaces.
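
A quick way to see the clash (a sketch; eth0 is assumed to be the other
interface already holding the default route from the OP's 192.168.0.0/24
network):

  ~# ip route show default
  default via 192.168.0.1 dev eth0
  ~# ifup wlan0    # ifupdown tries to add an identical default route
  RTNETLINK answers: File exists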

For the future, it would have helped if you had provided your complete 
network configuration.
 
> [1]
> https://raspberrypi.stackexchange.com/questions/13895/solving-rtnetlink-answers-file-exists-when-running-ifup

[2] Didn't investigate if/how this works with other network management 
frameworks.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Home made backup system

2019-12-18 Thread David Christensen

On 2019-12-18 09:02, rhkra...@gmail.com wrote:

Aside / Admission: I don't backup all that I should and as often as I should,
so I'm looking for ways to improve.  One thought I have is to write my own
backup "system" and use it, and I've thought about that a little, and provide
some of my thoughts below.

[... rest of the original message trimmed; it appears in full below ...]


I wrote and use a homebrew backup and archive solution that started with 
a Perl script to invoke rsync (backup) and tar/gzip (archive) over ssh 
from a central server according to configurable job files.  My thinking was:


1.  Use lowest-common 

Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread John Hasler
Celejar writes:
>  ...the problem only occurs when tethering.

Which is the only time the cellular encapsulation is being done.
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: Home made backup system

2019-12-18 Thread songbird
rhkra...@gmail.com wrote:
...
>>   if test -z "$home" -o \! -d "$home" ; then
>
> What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> -- 
  no, -o is logical OR in that context.
the backslash is just protecting the ! operator,
which is the NOT operator on what follows.
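
  spelled out, the quoted test reads like this
(an equivalent sketch, not taken from the script itself):

  if test -z "$home" -o \! -d "$home" ; then
      # true if $home is empty OR is not a directory
      echo "no usable home directory"
  fi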

  i'm not going to go any further with reading
whatever script that is.  i don't want to be
here all evening.  ;)

  when searching the bash man pages you have to
be aware of context as some of the operators
and options are used in many places but have 
quite different meanings.


  songbird



Re: Home made backup system

2019-12-18 Thread Charles Curley
On Wed, 18 Dec 2019 12:02:56 -0500
rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I
> should, so I'm looking for ways to improve.  One thought I have is to
> write my own backup "system" and use it, and I've thought about that
> a little, and provide some of my thoughts below.

There are different backup programs for different purposes. Some
thoughts:
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/



-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-18 Thread rhkramer
Thanks to all who replied!

This script (or elements of it) looks useful to me, but I don't fully 
understand it -- I plan to work my way through it.  I have a few questions 
now, and I'm sure I will have more after I get past the first 3 (or, more 
encouragingly, the first 6) lines.

Questions below:

On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:

>   #!/bin/bash
>   home=${HOME:-~}

What does that line do, or more specifically, what does the :-~ do -- note the 
following:

rhk@s19:/rhk/git_test$ echo ${HOME:-~}
/home/rhk
rhk@s19:/rhk/git_test$ echo ${HOME}
/home/rhk
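
For reference, ${var:-word} is bash's use-default-value expansion: it
yields $var unless var is unset or empty, in which case it falls back to
word.  A minimal illustration (not from the thread):

  $ unset HOME
  $ echo ${HOME:-~}    # falls back to ~, which tilde-expands via /etc/passwd
  /home/rhk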

>   if test -z "$home" -o \! -d "$home" ; then

What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner -- 
I guess I should look for it in man bash...

Hmm, but that means (in bash) the "history number" of the command

"  \! the history number of this command"

> echo "can't backup the homeless, sorry"
> exit 1
>   fi

I'm sure I'll have more questions as I continue, but that is enough for me for 
tonight.

> [... rest of the quoted script and explanation trimmed; tomas's message
> appears in full below ...]



Re: Problems with PATH dpkg

2019-12-18 Thread Felix Perez
On Wed, Dec 18, 2019 at 14:54, Pablo Ramirez
(pabloramirez1...@gmail.com) wrote:
>
> I'm installing as root
> Debian 10
> DaVinci 16 (16.1.2, I think it's the latest)
>
> dpkg -i
>
> Thanks for your reply, regards.
>
Please don't write to me off-list, and please don't top-post.  Thanks.

Have a look at this search:
https://www.google.com/search?source=hp=MNz6XfHPGuXG5OUP56qsqA4=dpkg%3A+atenci%C3%B3n%3A+%60ldconfig%27+no+se+ha+encontrado+en+el+PATH+o+no+es+ejecutable=dpkg%3A+atenci%C3%B3n%3A+%60ldconfig%27+no+se+ha+encontrado+en+el+PATH+o+no+es+ejecutable_l=psy-ab.3..0l2.425.425..939...0.0..0.173.173.0j1..02j1..gws-wiz.SwKhT5sVAb8=0ahUKEwjxp8iS0cDmAhVlI7kGHWcVC-UQ4dUDCAU=5

You have a problem with the environment variable.

Good luck
> On 18-12-2019, at 13:34, Felix Perez wrote:
> >
> > [... earlier exchange trimmed; it appears in full below ...]
> >



-- 
linux user  #274354
list rules:  http://wiki.debian.org/es/NormasLista
how to ask smart questions:
http://www.sindominio.net/ayuda/preguntas-inteligentes.html



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 13:58:38 -0600
John Hasler  wrote:

> Celejar writes:
> > I assume the phone first just routes them from wifi to cellular. I'm
> > not familiar with how it then transmits them over the cellular link.
> 
> It has to encapsulate them in some way for the cellular protocol.
> 
> > Yes, but that happens with virtually all this machine's network
> > connections.
> 
> I thought you said it only happened when using tethering.

I suppose I misunderstood you. I meant that the wifi encapsulation
takes place with virtually all the machine's network connections,
since it is rarely connected via wired ethernet, and that this
encapsulation was therefore not likely the cause of the problem,
since the problem only occurs when tethering.

Celejar



Re: Debian 10.2 does not boot

2019-12-18 Thread Pascal Hambourg

On 17/12/2019 at 01:39, G2PC wrote:


The free software community is getting more and more unpleasant.


I don't know that community, and I am not part of it.


The only remaining problem is the debian entries that do not work on my
system


Already answered:


man efibootmgr


What more do you need?



Re: Home made backup system

2019-12-18 Thread elvis
If you don't want to reinvent the wheel, and have more than one computer to 
back up...


try Bacula  www.bacula.org


It does everything you want.

On 19/12/19 3:02 am, rhkra...@gmail.com wrote:

Aside / Admission: I don't backup all that I should and as often as I should,
so I'm looking for ways to improve.  One thought I have is to write my own
backup "system" and use it, and I've thought about that a little, and provide
some of my thoughts below.

[... rest of the original message trimmed; it appears in full below ...]


--
If we aren't supposed to eat animals, why are they made of meat?



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread John Hasler
Celejar writes:
> I assume the phone first just routes them from wifi to cellular. I'm
> not familiar with how it then transmits them over the cellular link.

It has to encapsulate them in some way for the cellular protocol.

> Yes, but that happens with virtually all this machine's network
> connections.

I thought you said it only happened when using tethering.
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Debian display bug when partitioning and encrypting a disk

2019-12-18 Thread G2PC
Debian Live 10.2 XFCE

I select the option to encrypt the disk with LVM.

During the "Partition disks" step, the disk gets erased:



Erasing data on ..., partition #...

The installer is currently writing random data to SCSI4 (0,0,0),
partition #...; the rest of the message is truncated and runs off the
edge of the screen.


It would be worth reporting this bug, and proposing to display the
information like this:


The installer is currently writing random data to (the partition):

SCSI4 (0,0,0), partition #... (The information would no longer be truncated.)




Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 07:29:08 -0600
John Hasler  wrote:

> Celejar writes:
> > I'm not that familiar with the internals, but basically, the phone
> > presents a wifi access point, the computer connects to it as it would
> > to any AP, and the phone apparently routes packets to and from the
> > cellular network.
> 
> Yes, I know that.  However, the packets are being encapsulated in some
> way. They may be encapsulating the IP packets directly or they may be
> encapsulating the ethernet packets as PPP does.  Either way there is an

I assume the phone first just routes them from wifi to cellular. I'm
not familiar with how it then transmits them over the cellular link.

> opportunity for problems similar to those that have developed with
> PPPoE because encapsulization always involves adding headers.  The
> packets are being encapsulated for the WiFi too, of course.

Yes, but that happens with virtually all this machine's network
connections.

Celejar



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 07:47:18 -0600
John Hasler  wrote:

> Celejar writes:
> > It does work (and to automate it, I'll probably put it in e/n/i with a
> > post-up line) - I'm just looking for the "right" way to do it (and to
> > undo it on connection down, as I discussed in another message in this
> > thread).
> 
> I wouldn't worry about doing it the *right* way.  It's a kludge that
> should not be necessary but is.
> 
> Automatic adjustment would be interesting but I don't see an obvious
> method.

Fair enough.

Celejar



Re: Invisible IPv6 addresses

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 07:41:21 -0600
John Hasler  wrote:

> This may be what you have:
> https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresses

Thank you. As per my other mail, it turns out that it's apparently this:

https://en.wikipedia.org/wiki/IPv6_transition_mechanisms#NAT64
https://en.wikipedia.org/wiki/IPv6_transition_mechanisms#464XLAT

Celejar



[Solved] Re: Invisible IPv6 addresses

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 10:27:51 +0300
Reco  wrote:

>   Hi.
> 
> On Tue, Dec 17, 2019 at 04:54:17PM -0500, Celejar wrote:
> > But the IPv6 address e:f:g:h:i:j:k:l is not actually configured
> > anywhere on the router (as shown by 'ip a' and other tools)!
> 
> Either there's some IPv6 - IPv4 conversion involved, or Verizon just
> terminates inbound IPv6 connections on their end.

I did some investigating with tcpdump on the router, and IIUC, there is
indeed some sort of IPv6 - IPv4 conversion going on:

On the router:

~# tcpdump -i any icmp

On the remote box:

~$ ping e:f:g:h:i:j:k:l
PING e:f:g:h:i:j:k:l(e:f:g:h:i:j:k:l) 56 data bytes
64 bytes from e:f:g:h:i:j:k:l: icmp_seq=1 ttl=51 time=89.0 ms

But the tcpdump instance on the router sees the pings like this

IP ue.tmodns.net > pool-a-b-c-d.region.fios.verizon.net: ICMP echo request, id 44719, seq 1, length 64
IP pool-a-b-c-d.region.fios.verizon.net > ue.tmodns.net: ICMP echo reply, id 44719, seq 1, length 64

or (with -n):

IP 172.58.187.252 > a.b.c.d: ICMP echo request, id 47985, seq 1, length 64
IP a.b.c.d > 172.58.187.252: ICMP echo reply, id 47985, seq 1, length 64

ue.tmodns.net / 172.58.187.252 is owned by T-Mobile, my wireless
provider (via an MVNO), and apparently it's transparently translating
between IPv6 and IPv4.

I looked around a bit, and IIUC, this is a NAT64 server [1], which
T-Mobile uses as part of its 464XLAT (RFC 6877) architecture [2], a
system it developed to facilitate interoperability between its pure
IPv6 network and legacy IPv4 installations [3] (like my Verizon residential
service).

Thanks much for the help.

[1] https://en.wikipedia.org/wiki/NAT64
[2] https://en.wikipedia.org/wiki/IPv6_transition_mechanism#464XLAT
https://tools.ietf.org/html/rfc6877
[3] 
https://www.internetsociety.org/resources/deploy360/2014/case-study-t-mobile-us-goes-ipv6-only-using-464xlat/
https://www.reddit.com/r/tmobile/comments/5le5s7/tmobile_openvpn_connect_ipv6_nat64/dbv33j3/

Celejar



Re: Problems with PATH dpkg

2019-12-18 Thread Pablo Ramirez
I'm installing as root
Debian 10
DaVinci 16 (16.1.2, I think it's the latest)

dpkg -i

Thanks for your reply, regards.

> On 18-12-2019, at 13:34, Felix Perez wrote:
> 
> [... quoted message trimmed; it appears in full below ...]
> 



Re: Debian on raspberrypi: failed to configure wlan0

2019-12-18 Thread Franco Martelli
On 18/12/19 at 17:32, Nektarios Katakis wrote:
> 
> You should try to associate the wireless nic with your wifi by running
> only wpa_supplicant, to see if that succeeds (the link state should
> change; the mode shown by the `iwconfig` command should be Managed).
> 
> For example, this is how my config looks:
> ```
> allow-hotplug wlx000f00bf4a3f
> iface wlx000f00bf4a3f inet static
> address 192.168.1.71
> netmask 255.255.255.0
> gateway 192.168.1.254
> wpa-ssid ssid-name
> wpa-psk
> e8918bce6980814557b664fb52bda4d342174d2a2c95dd06078d7a29851de799
> ```
> 
> Hope this helps.

Thanks for your answer. I've just solved it thanks to reading this link
[1]: it was the gateway line; once commented out, everything works fine.
Now the wlan0 configuration file is:

~# cat /etc/network/interfaces.d/wlan0
# allow-hotplug wlan0
iface wlan0 inet static
address 192.168.0.9
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
# gateway 192.168.0.1
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Debian on raspberrypi works great: I've got cups, bind9, and isc-dhcp-server
running fine.  If you are unsure, give it a try.


[1]
https://raspberrypi.stackexchange.com/questions/13895/solving-rtnetlink-answers-file-exists-when-running-ifup
-- 
Franco Martelli



Re: Home made backup system

2019-12-18 Thread tomas
On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:
> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve [...]

> Part of the reason for doing my own is that I don't want to be trapped into 
> using a system that might disappear or change and leave me with a problem.

I just use rsync. The whole thing is driven from a minimalist script:

  #!/bin/bash
  home=${HOME:-~}                  # use $HOME, falling back to ~ if unset/empty
  if test -z "$home" -o \! -d "$home" ; then   # empty, or not a directory
    echo "can't backup the homeless, sorry"
    exit 1
  fi
  backup=/media/backup/${home#/}   # $home with its leading / stripped
  rsync -av --delete --filter="merge $home/.backup/filter" $home/ $backup/
  echo -n "syncing..."
  sync                             # flush everything to the (slow) stick
  echo " done."
  df -h

I mount a USB stick (currently 128G) on /media/backup (the stick has a
LUKS encrypted file system on it) and invoke backup.

The only not-quite-obvious thing is the option

  --filter="merge $home/.backup/filter"

which controls what (not) to back up. This one has a list of excludes
(much shortened) like so

  - /.cache/
  [...much elided...]
  - /.xsession-errors
  - /tmp
  dir-merge .backup-filter

The last line is interesting: it tells rsync to merge a file .backup-filter
in each directory it visits -- so I can exclude huge subdirs I don't need
to keep (e.g. because they are easy to re-build, etc.).

One example of that: I've a subdirectory virt, where I keep virtual images
and install media. Then virt/.backup-filter looks like this:

  + /.backup-filter
  + /notes
  - /*

i.e. "just keep .backup-filter and notes, ignore the rest".

This scheme has served me well over the last ten years. It does have its
limitations: it's sub-optimal with huge files, and it probably won't scale
well for huge amounts of data.

But it's easy to use and easy to understand.

Cheers
-- t




Re: Home made backup system

2019-12-18 Thread billium

On 18/12/2019 17:02, rhkra...@gmail.com wrote:

Aside / Admission: I don't backup all that I should and as often as I should,
so I'm looking for ways to improve.  One thought I have is to write my own
backup "system" and use it, and I've thought about that a little, and provide
some of my thoughts below.

[... rest of the original message trimmed; it appears in full below ...]

The rsync web site has some good examples: there is a daily rotating 
one, and overall backups also.  I use these to back up to a Debian NAS 
and a VPS.





Re: Home made backup system

2019-12-18 Thread Levente
It depends on what you want to back up. If it's code or text files, use
git. If it's photos, videos, or mostly binary data, use some script and
magnetic tapes.


Levente

On Wed, Dec 18, 2019, 18:03  wrote:

> Aside / Admission: I don't backup all that I should and as often as I
> should, so I'm looking for ways to improve.  One thought I have is to
> write my own backup "system" and use it, and I've thought about that a
> little, and provide some of my thoughts below.
>
> [... rest of the original message trimmed; it appears in full below ...]

Home made backup system

2019-12-18 Thread rhkramer
Aside / Admission: I don't backup all that I should and as often as I should, 
so I'm looking for ways to improve.  One thought I have is to write my own 
backup "system" and use it, and I've thought about that a little, and provide 
some of my thoughts below.

A purpose of sending this to the mailing-list is to find out if there already 
exists a solution (or parts of a solution) close to what I'm thinking about 
(no sense re-inventing the wheel), or if someone thinks I've overlooked 
something or am making a big mistake.

Part of the reason for doing my own is that I don't want to be trapped into 
using a system that might disappear or change and leave me with a problem.  (I 
subscribe to a mailing list for one particular backup system, and I wrote to 
that list with my concerns and a little bit of my thoughts about my own system 
-- at the time, I was hoping for a "universal" configuration file (the file 
that would specify what, where, when, and how each file, directory, or 
partition to be backed up would be treated), one that could be read and acted 
upon by a great variety (and maybe all) of future backup programs.)

The only response I got (IIRC) was that since their program was open source, 
it would never go away.  (Yet, if I'm not mixing up backup programs, they were 
transitioning from Python 2 as the underlying language to Python 3 -- 
I'm not sure Python 2 will ever go completely away, or become non-functional, 
but it reinforces my belief / fear that any (complex?) backup program, even 
an open source one, will someday become unusable.)

So, here are my thoughts:

After I thought about (hoped for) a universal config file for backup programs, 
and it seemed that no such thing exists (not surprising), I thought I'd try 
to create my own -- this morning, as I thought about it a little more (despite 
a headache and a non-working car that I should be working on), I decided that 
the simplest thing for me to do is write a bash script and a bash subroutine, 
something along these lines:

   * the backups should be in formats such that I can access them with a variety 
of other tools (as appropriate) if I need to -- if I back up an entire 
directory or partition, I should be able to easily access and restore any 
particular file from within that backup, and do so even if encrypted (i.e., 
encryption would be done by "standard programs" (a bad example might be 
ccrypt) that I could use "outside" of the backup system).

   * the bash subroutine (command) that I write should basically do the 
following:

  * check that the specified target exists (for things like removable 
drives or NAS type things) and has (sufficient) space (not sure I can tell that 
until after backup is attempted) (or an encrypted drive that is not mounted / 
unencrypted, i.e., available to write to)

  * if the right conditions don't exist (above) tell me (I'm thinking of 
an email as email is something that always gets my attention, maybe not 
immediately, but soon enough)

  * if the right conditions do exist, invoke the commands to backup the 
files

  * if the backup is unsuccessful for any reason, notify me (email again)

  * optionally notify me that the backup was successful (at least to the 
extent of writing something)

  * optionally actually do something to confirm that the backup is readable 
/ usable (need to think about what that could be -- maybe write it (to /tmp or 
to a ramdrive), do something like a checksum (e.g., SHA-256 or whatever makes 
sense) on it and the original file, and confirm they match)

  * ???

All of the commands invoked by the script should be parameters so that the 
commands can be easily changed in the future (e.g., cp / tar / rsync, SHA-256 
or whatever, ccrypt or whatever, etc.), as in the sketch below.
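
A rough sketch of that subroutine idea (everything here -- the names, the
mail subjects, and the coarse existence check -- is hypothetical):

  # usage: backup_one SRC TARGET_DIR BACKUP_CMD [ARGS...]
  backup_one() {
      local src=$1 target=$2
      shift 2
      if [ ! -d "$target" ]; then    # target missing / not mounted
          echo "target $target missing" | mail -s "backup: target missing" root
          return 1
      fi
      if ! "$@" "$src" "$target"; then    # run the parameterized backup command
          echo "backup of $src failed" | mail -s "backup: FAILED" root
          return 1
      fi
      echo "backup of $src succeeded" | mail -s "backup: ok" root
  }

  # e.g.: backup_one /home /media/backup rsync -a --delete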

Then the master script (actually probably scripts, e.g. one or more each for 
hourly, daily, weekly, ... backups), invoked by cron (or maybe by the at 
command? -- my computers run 24/7 unless they crash, but for others, at 
or something similar might be a better choice), would invoke that subroutine / 
command for each file, directory, or partition to be backed up, specifying the 
commands to use, what files to back up, where to back them up, encrypted or not, 
compressed or not, tarred or not, etc.

In other words, instead of a configuration file, the system would just use bash 
scripts with the appropriate commands, invoked at the appropriate time by 
cron (or with all backup commands in one script, with backup times specified 
with at or similar).
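
For illustration, the cron side might then look like this (the script names
are hypothetical):

  # one wrapper script per cadence
  0 * * * *    /usr/local/bin/backup-hourly
  30 2 * * *   /usr/local/bin/backup-daily
  45 3 * * 0   /usr/local/bin/backup-weekly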

Aside: even if Amanda (for example) were guaranteed to always exist, I don't 
really want to learn anything about it or any other program that might cease 
to be maintained in the future.



Re: Problems with PATH dpkg

2019-12-18 Thread Felix Perez
On Tue, Dec 17, 2019 at 13:45, Pablo Ramirez
(pabloramirez1...@gmail.com) wrote:
>
> Hello all,
>
> My name is Pablo and I'm new to the linux world. The specific issue is that 
> I'm trying to install a program, specifically DaVinci Resolve, but 
> when I run the dpkg command the following error comes up:
> dpkg: warning: 'ldconfig' not found in the PATH or not executable
> dpkg: warning: 'start-stop-daemon' not found in the PATH or not executable
> dpkg: error: 2 expected programs not found in the PATH or not executable
> NOTE: root's PATH should usually include /usr/local/sbin, 
> /usr/sbin and /sbin.
>

First of all:
- Are you installing as root?
- Debian version.
- DaVinci version
- check the DaVinci installation notes (requirements)
- the complete dpkg command used

> searching the internet, which is how I'm learning to use linux,
> I haven't managed to reach a solution; I'd appreciate your help,
> many thanks in advance
>

Specifically, what have you searched for and/or what have you done?

> Pablo.
>


-- 
linux user  #274354
list rules:  http://wiki.debian.org/es/NormasLista
how to ask smart questions:
http://www.sindominio.net/ayuda/preguntas-inteligentes.html



Re: Debian on raspberrypi: failed to configure wlan0

2019-12-18 Thread Nektarios Katakis
On Wed, 18 Dec 2019 16:28:40 +0100
Franco Martelli  wrote:

> Hi everybody,
> 
> Following the instructions reported on the Debian unofficial port
> home-site [1] I successfully installed Debian on raspberrypi 3B 2016.
> 
> ~# ifup wlan0
> RTNETLINK answers: File exists
> ifup: failed to bring up wlan0
> 
> [... rest of the original message, including the wlan0 and
> wpa_supplicant configuration, trimmed; it appears in full below ...]
> 
> [1] https://salsa.debian.org/raspi-team/image-specs

You should try to associate the wireless nic with your wifi by running
only wpa_supplicant, to see if that succeeds (the link state should
change; the mode shown by the `iwconfig` command should be Managed).
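
Something like this, run by hand, would do it (a sketch; wlan0 and your
existing wpa_supplicant.conf are assumed):
```
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
iwconfig wlan0   # once associated, Mode should show Managed with your ESSID
```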

For example, this is how my config looks:
```
allow-hotplug wlx000f00bf4a3f
iface wlx000f00bf4a3f inet static
address 192.168.1.71
netmask 255.255.255.0
gateway 192.168.1.254
wpa-ssid ssid-name
wpa-psk
e8918bce6980814557b664fb52bda4d342174d2a2c95dd06078d7a29851de799
```

Hope this helps.


-- 
Regards,
Nektarios Katakis



Debian on raspberrypi: failed to configure wlan0

2019-12-18 Thread Franco Martelli
Hi everybody,

Following the instructions reported on the Debian unofficial port
home-site [1] I successfully installed Debian on raspberrypi 3B 2016.  All
works fine for my needs, but the built-in wi-fi interface, once configured,
apparently works while still reporting errors:

~# ip addr show dev wlan0
3: wlan0:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether b8:27:eb:8b:10:67 brd ff:ff:ff:ff:ff:ff

~# ifup wlan0
RTNETLINK answers: File exists
ifup: failed to bring up wlan0

the ifup command reports that it failed to bring the interface up, but the
interface is configured:

~# ip addr show dev wlan0
3: wlan0:  mtu 1500 qdisc
pfifo_fast state DORMANT group default qlen 1000
link/ether b8:27:eb:8b:10:67 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.9/24 brd 192.168.0.255 scope global wlan0
   valid_lft forever preferred_lft forever

if I try to de-configure the interface it fails:

~# ifdown wlan0
ifdown: interface wlan0 not configured

How can I de-configure wlan0, and why do I get errors when I bring it up
with the ifup command?
Some useful information about my configuration:

~# cat /etc/network/interfaces.d/wlan0
# allow-hotplug wlan0
iface wlan0 inet static
address 192.168.0.9
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

~# cat /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=/var/run/wpa_supplicant
update_config=1
network={
ssid="myessid"
psk=dfd452fedacd69d6d54582770dc93acebfb6f2ec2aac7d2e3f24e6ecacafc487
}

~# systemctl is-enabled wpa_supplicant
disabled

Thanks for any answer, best regards.

[1] https://salsa.debian.org/raspi-team/image-specs
-- 
Franco Martelli



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread John Hasler
Celejar writes:
> It does work (and to automate it, I'll probably put it in e/n/i with a
> post-up line) - I'm just looking for the "right" way to do it (and to
> undo it on connection down, as I discussed in another message in this
> thread).

I wouldn't worry about doing it the *right* way.  It's a kludge that
should not be necessary but is.

Automatic adjustment would be interesting but I don't see an obvious
method.
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: Invisible IPv6 addresses

2019-12-18 Thread John Hasler
This may be what you have:
https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresses
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread John Hasler
Celejar writes:
> I'm not that familiar with the internals, but basically, the phone
> presents a wifi access point, the computer connects to it as it would
> to any AP, and the phone apparently routes packets to and from the
> cellular network.

Yes, I know that.  However, the packets are being encapsulated in some
way. They may be encapsulating the IP packets directly or they may be
encapsulating the ethernet packets as PPP does.  Either way there is an
opportunity for problems similar to those that have developed with
PPPoE because encapsulation always involves adding headers.  The
packets are being encapsulated for the WiFi too, of course.
-- 
John Hasler 
jhas...@newsguy.com
Elmwood, WI USA



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 10:59:11 - (UTC)
Curt  wrote:

> On 2019-12-18, Anthony DeRobertis  wrote:
> >
> > Another option might be to use Network Manager, I think its connections 
> > can set a custom MTU, but I'm not 100% sure as I've never tried it.
> 
> I've used tethering successfully with Network Manager but never had
> occasion to alter the mtu settings.
> 
> Why wouldn't
> 
>  ip link set dev <interface> mtu 1300
> 
> work?

It does work (and to automate it, I'll probably put it in e/n/i with a
post-up line) - I'm just looking for the "right" way to do it (and to
undo it on connection down, as I discussed in another message in this
thread).

Celejar



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Celejar
On Wed, 18 Dec 2019 02:30:01 -0500
Anthony DeRobertis  wrote:

> On 12/17/19 11:39 AM, Celejar wrote:
> >
> > Now I just have to figure out the best place to configure this.
> > I'm using dhcp via /etc/network/interfaces, but the 'dhcp' method
> > doesn't seem to support manual MTU setting. I could use a 'supersede
> > interface-mtu' line in dhclient.conf, but AFAICT, options there apply to
> > all dhcp connections, and I can't make out a simple way to set them on
> > a per connection basis. I suppose I could always just put 'ip link set
> > wlan0 mtu 1440' in a script and hook it in to the appropriate
> > 'iface' stanza in e/n/i with a 'post-up' line.
> 
> Personally, I'd just use the post-up line.

I'll probably do that - but I'm not sure how to restore the proper MTU
on connection down (since I'm not sure that my other connections are
setting it at all, as opposed to just leaving it as is). I suppose I can
assume that it should always be 1500, and use a pre/post-down line to
set it back to 1500 on connection down, but I wonder if there's a better
way. I suppose I could also save the current MTU to some file somewhere
when I lower the MTU, and then restore it afterward ...
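
Concretely, something like this in /etc/network/interfaces is what I have
in mind (a sketch; the 1440 tether MTU and a 1500 default are assumed):

  iface wlan0 inet dhcp
      post-up ip link set dev wlan0 mtu 1440
      post-down ip link set dev wlan0 mtu 1500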

> Ideally whichever device is the DHCP server would set the appropriate 
> MTU via its DHCP response, but if that's the phone, I have no idea how 
> you'd fix that easily. (That's why there is a supersede setting for it, 

Well, it's LineageOS, a fairly open OS, so I can probably figure out
how to hack it, if necessary.

> it can be part of the DHCP response — that's how different MTUs from 
> different DHCP connections should work.)
> 
> Another option might be to use Network Manager, I think its connections 
> can set a custom MTU, but I'm not 100% sure as I've never tried it.
> 
> BTW: dhclient.conf options can be per-interface (via an interface 
> section), so your wifi and wired adapters could be different.

I occasionally use wired, but I'm almost always on wifi.
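
For reference, the per-interface form would be something like this in
/etc/dhcp/dhclient.conf (a sketch; wlan0 and the 1440 MTU are assumed
from the discussion above):

  interface "wlan0" {
      supersede interface-mtu 1440;
  }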

Thanks,
Celejar



Re: Broken PMTUD / ICMP blackhole?

2019-12-18 Thread Curt
On 2019-12-18, Anthony DeRobertis  wrote:
>
> Another option might be to use Network Manager, I think its connections 
> can set a custom MTU, but I'm not 100% sure as I've never tried it.

I've used tethering successfully with Network Manager but never had
occasion to alter the mtu settings.

Why wouldn't

 ip link set dev <interface> mtu 1300

work?

> BTW: dhclient.conf options can be per-interface (via an interface 
> section), so your wifi and wired adapters could be different.
>
>


-- 
"J'ai pour me guérir du jugement des autres toute la distance qui me sépare de
moi." Antonin Artaud




Core dumps are instantly removed

2019-12-18 Thread Dmitry Katsubo
Dear Debian users,

Hopefully you can easily help me with my confusion.

I would like to collect / keep core dumps on the system. For that I use the 
systemd-coredump package, which is configured as follows:

=== cut /etc/systemd/coredump.conf ===
[Coredump]
Storage=external
MaxUse=5G
=== cut ===

Then I have created a script to send a daily core dump report to root:

=== cut /etc/cron.daily/coredump ===
YESTERDAY=`date --date="1 day ago" +%Y-%m-%d`
MESSAGE=`coredumpctl list --no-pager -r -S $YESTERDAY -U $(date +%Y-%m-%d) 2> /dev/null` \
  && (echo "$MESSAGE"; \
      echo -e "\nLast core dump info:\n"; coredumpctl info --no-pager; \
      echo -e "\nCore dumps:\n"; ls -l /var/lib/systemd/coredump; ) \
  | mail -s "Core dumps created yesterday $YESTERDAY" root
=== cut ===

What I get is:

=== cut ===
TIME                          PID   UID   GID SIG COREFILE  EXE
Tue 2019-12-10 11:27:26 CET  2537  1003   100   5 missing   /usr/bin/light-locker

Last core dump info:

   PID: 2537 (light-locker)
...
Signal: 5 (TRAP)
 Timestamp: Tue 2019-12-10 11:27:25 CET (20h ago)
  Command Line: light-locker
Executable: /usr/bin/light-locker
...
   Storage: /var/lib/systemd/coredump/core.light-locker.1003.810304...157597364500.lz4 (inaccessible)
   Message: Process 2537 (light-locker) of user 1003 dumped core.

Stack trace of thread 2537:
#0  0x7fde22515c75 n/a (libglib-2.0.so.0)
#1  0x7fde22516d0d g_log_default_handler (libglib-2.0.so.0)
#2  0x7fde22516f5f g_logv (libglib-2.0.so.0)
#3  0x7fde2251714f g_log (libglib-2.0.so.0)
#4  0x563b0e2f30a3 n/a (light-locker)
#5  0x7fde22615107 g_type_create_instance (libgobject-2.0.so.0)
...
Core dumps:

total 0
=== cut ===

As one can see, the stack trace is somehow captured by coredumpctl (where 
does it get it from?), but the core file is not there. I would like core 
dump files to be preserved for (say) 10 days.

light-locker generates a really tiny core dump when I start it from the console:

=== cut ===
$ /usr/bin/light-locker
Trace/breakpoint trap (core dumped)

# ls -l /var/lib/systemd/coredump/
-rw-r-----+ 1 root root 884992 Dec 13 14:16 core.light-locker.1003.8103...157624301300.lz4
=== cut ===

Also, I am aware of this setting:

=== cut /usr/lib/tmpfiles.d/systemd.conf ===
d /var/lib/systemd/coredump 0755 root root 3d
=== cut ===

but that configures the directory to be cleaned after three days, while the 
core files are removed sooner.
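
If that aging line were the issue, I suppose retention could be extended by
shadowing the shipped file, since files under /etc/tmpfiles.d override
same-named ones under /usr/lib/tmpfiles.d. A sketch, with 10d as the desired
age:

=== cut ===
~# cp /usr/lib/tmpfiles.d/systemd.conf /etc/tmpfiles.d/systemd.conf
~# # then edit the copy so the coredump line reads:
d /var/lib/systemd/coredump 0755 root root 10d
=== cut ===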

Any ideas? Thanks in advance!

P.S. I have read:

https://wiki.debian.org/HowToGetABacktrace#Core_dump
https://wiki.archlinux.org/index.php/Core_dump

but didn't find the answer.

-- 
With best regards,
Dmitry