Re: [RESEND] lxde error

2018-08-05 Thread Reco
Hi.

On Sun, Aug 05, 2018 at 03:26:28PM +0900, Byung-Hee HWANG (황병희, 黃炳熙) wrote:
> thanks for feedback, Reco^^^
> 
> 
> On 2018년 08월 04일 00:17, Reco wrote:
> > Hi.
> > 
> > On Fri, Aug 03, 2018 at 11:23:49PM +0900, Byung-Hee HWANG (황병희, 黃炳熙) wrote:
> > > thanks for feedback, Reco^^^
> > > 
> > > 
> > > On 2018년 08월 03일 15:22, Reco wrote:
> > > > Hi.
> > > > 
> > > > On Thu, Aug 02, 2018 at 11:04:37PM +0900, Byung-Hee HWANG (황병희) wrote:
> > > > > i'm new to debian. yesterday i did install debian on chromebook. i get
> > > > > some error when i start lxde. attached file with error [1]. how can i
> > > > > resolve it?
> > > > > 
> > > > > [1] 
> > > > > https://gitlab.com/soyeomul/stuff/raw/master/jessie-birch/20180802_214248.jpg
> > > > This seems very similar to Debian bug #864402 - [2]. Can you test the
> > > > workaround proposed at message 10 please?
> > > > 
> > > > Reco
> > > > 
> > > > [2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864402
> > > > 
> > > Ah yes i checked that above PR. Then i did look around autostart tab.
> > > however there is no "policy kit agent" in my config [3].
> > > 
> > > thanks,
> > > 
> > > [3] 
> > > https://gitlab.com/soyeomul/stuff/raw/master/jessie-birch/2018-08-03-231242_1920x1080_scrot.png
> > Let's do it the hard way then.
> > 
> > 1) Relogin.
> > 
> > 2) Launch terminal emulator (it should be lxterminal, I suppose, but any
> > will do), execute this:
> > 
> > ps -fP 
> > 
> > and, for the sake of completeness,
> > 
> > ps -fU $(id -u)
> > 
> > 3) Also please publish the contents of ~/.xsession-errors somewhere.
> > 
> > Reco
> > 
> attached file as [typescript], thanks!!!

So, pid 4865 is lxpolkit, which is expected.
Its parent is lxsession, which is expected too.
But then you have this:

soyeomul  4723  4715  0 13:30 ?00:00:00 /usr/bin/xinit /usr/local/bin/cr
soyeomul  4792  4723  0 13:30 ?00:00:00 /bin/sh -e /usr/local/bin/crouto
soyeomul  4801  4792  0 13:30 ?00:00:00 /bin/sh -e /usr/local/bin/crouto
soyeomul  4802  4792  0 13:30 ?00:00:00 /bin/sh -e /usr/local/bin/crouto
soyeomul  4814  4802  0 13:30 ?00:00:00 /bin/sh -e /usr/local/bin/crouto
soyeomul  4815  4802  0 13:30 ?00:00:00 /bin/sh -e /usr/local/bin/crouto

Feeding a local script to xinit is an old and respected tradition, so
that's ok. But launching such a script from itself is probably not.

My suspicion is that this '/usr/local/bin/crouto…' script tries to launch
lxsession multiple times, therefore launching lxpolkit multiple times as
well.

So the question is - what are the contents of this
'/usr/local/bin/crouto…' script?

And another one is - what are the contents of /etc/xdg/autostart?

Reco



Re: [RESEND] lxde error

2018-08-05 Thread Reco
Hi.

On Sun, Aug 05, 2018 at 07:14:15PM +0900, Byung-Hee HWANG (황병희, 黃炳熙) wrote:
> Sorry Reco, my attached file was somewhat odd. So re-send it as files.
> 

And now it's starting to make sense at last.
This Crouton thing starts xinit, which in turn starts startlxde, which
invokes lxsession, which parses the files in /etc/xdg/autostart and starts
lxpolkit along with other things.

lxpolkit, among other things, invokes the
"polkit_agent_register_listener" library function, which (supposedly
through D-Bus mumbo-jumbo) ends up in the
"polkit_unix_session_initable_init" function.

The choice of names is kind of meh, but that particular function invokes
the "sd_pid_get_owner_uid" function. Being part of systemd (sd-login.c to
be precise), it's hardly surprising that it returns anything meaningful
only if systemd is PID 1, the user has a registered logind session, etc.

Which is not the case for you, because Crouton is written for a
completely different use case. Moreover, the man himself spoke on a
similar issue at [1], and said "lxpolkit is broken, fix it".


The remaining question is - what to do with all this?
I suspect that patching out the problematic parts of policykit is out of
the question (it is for me, at least), so I propose either:

1) Removing /etc/xdg/autostart/lxpolkit.desktop.

2) Replacing /usr/bin/lxpolkit with a symlink to /bin/true.
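
A minimal sketch of option 2, run as root (using dpkg-divert is my
addition, so the change survives lxpolkit package upgrades):

dpkg-divert --divert /usr/bin/lxpolkit.distrib --rename /usr/bin/lxpolkit
ln -s /bin/true /usr/bin/lxpolkit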

Reco

[1] https://github.com/systemd/systemd/issues/833



Re: Please help with error message

2018-08-07 Thread Reco
Hi.

On Tue, Aug 07, 2018 at 09:05:28AM +0200, Rodolfo Medina wrote:
> Some little problems after `full-upgrade' to Sid: no sound...  Besides, when
> trying to install new packages, the following message appears:
> 
> # aptitude install alsaplayer-alsa pulseaudio 
> pulseaudio is already installed at the requested version (12.0-1)
> pulseaudio is already installed at the requested version (12.0-1)
> The following NEW packages will be installed:
>   alsaplayer-alsa alsaplayer-common{a} alsaplayer-gtk{a} libmikmod3{a} 
> 0 packages upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
> Need to get 0 B/505 kB of archives. After unpacking 1,410 kB will be used.
> Do you want to continue? [Y/n/?] y
> dpkg: warning: 'ldconfig' not found in PATH or not executable
> dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
> dpkg: error: 2 expected programs not found in PATH or not executable
> Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
> E: Sub-process /usr/bin/dpkg returned an error code (2)
> dpkg: warning: 'ldconfig' not found in PATH or not executable
> dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
> dpkg: error: 2 expected programs not found in PATH or not executable
> Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
> 
> Please help...  I'm not expert.

Your installation lacks /sbin/ldconfig and /sbin/start-stop-daemon from
the "libc-bin" and "dpkg" packages respectively. Or root's $PATH lacks
"/sbin" somehow.
Either way it's not normal; it's a wonder that you're able to boot or run
any binary executable at all.


I'd start fixing this mess by checking root's $PATH:

echo $PATH

It literally should have this value:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Next I'd try this:

apt-get install --reinstall dpkg libc-bin
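
If $PATH turns out to be the culprit, a temporary fix for the current
root shell (just a sketch; the permanent fix is sorting out root's shell
profile) is:

export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin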

Reco



Re: Please help with error message

2018-08-07 Thread Reco
Hi.

On Tue, Aug 07, 2018 at 10:08:06AM +0200, Rodolfo Medina wrote:
> Reco  writes:
> 
> > On Tue, Aug 07, 2018 at 09:05:28AM +0200, Rodolfo Medina wrote:
> >> Some little problems after `full-upgrade' to Sid: no sound...  Besides, 
> >> when
> >> trying to install new packages, the following message appears:
> >> 
> >> # aptitude install alsaplayer-alsa pulseaudio 
> >> pulseaudio is already installed at the requested version (12.0-1)
> >> pulseaudio is already installed at the requested version (12.0-1)
> >> The following NEW packages will be installed:
> >>   alsaplayer-alsa alsaplayer-common{a} alsaplayer-gtk{a} libmikmod3{a} 
> >> 0 packages upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
> >> Need to get 0 B/505 kB of archives. After unpacking 1,410 kB will be used.
> >> Do you want to continue? [Y/n/?] y
> >> dpkg: warning: 'ldconfig' not found in PATH or not executable
> >> dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
> >> dpkg: error: 2 expected programs not found in PATH or not executable
> >> Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and 
> >> /sbin
> >> E: Sub-process /usr/bin/dpkg returned an error code (2)
> >> dpkg: warning: 'ldconfig' not found in PATH or not executable
> >> dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
> >> dpkg: error: 2 expected programs not found in PATH or not executable
> >> Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and 
> >> /sbin
> >> 
> >> Please help...  I'm not expert.
> >
> > Your installation lacks /sbin/ldconfig and /sbin/start-stop-daemon from
> > "libc-bin" and "dpkg" packages respectively. Or root's $PATH lack
> > "/sbin" somehow.
> > Either way it's not normal, it's a little wonder that you're able to
> > boot or run any binary executable.
> >
> >
> > I'd start fixing this mess by checking root's $PATH:
> >
> > echo $PATH
> >
> > It literally should have this value:
> >
> > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> >
> > Next I'd try this:
> >
> > apt-get install --reinstall dpkg libc-bin
> 
> 
> Thanks...  I'm afraid it's bad...:

No, it's an honest mistake on your part, not a misconfigured system.


> $ echo $PATH
> /home/rodolfo/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
> rodolfo@sda6-acer:~$ su

Don't. Do. That. Ever.

'su' without arguments preserves your current environment.
$PATH suffers from this (which is annoying, and outright harmful in this
case), but the unforeseen side effects (including root-owned files in
your $HOME) start with the first X client that you run as root.

It's called 'su -'. Use it instead.
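
To illustrate the difference (a minimal sketch):

su -    # login shell: root's own $PATH, $HOME and working directory
su      # keeps your user's environment, hence the missing /sbin above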

Reco



Re: Please help with error message

2018-08-07 Thread Reco
Hi.

On Tue, Aug 07, 2018 at 12:01:02PM +0200, Stephan Seitz wrote:
> On Di, Aug 07, 2018 at 12:35:32 +0300, Reco wrote:
> > > rodolfo@sda6-acer:~$ su
> > Don't. Do. That. Ever.
> 
> That’s bullshit. I did it all the time until Debian decided to break things.

It never hurts to check an appropriate manpage *before* calling BS.
In this case:

The su command is used to become another user during a login session.
Invoked without a username, su defaults to becoming the superuser. The
optional argument - may be used to provide an environment similar to
what the user would expect had the user logged in directly.


> I never had your mentioned problems.

Either you have /sbin in your user's path, or you haven't run a single
apt-get in all these years. There are other possibilities, of course,
though less flattering.


> „su” doesn’t change the working directory. So if you compile software as a
> user you can then type „make install” after su.

True. But this tidbit does not relate to this particular problem at all.


> Now it is simpler to compile as root user.

It was always 'simpler'. But not 'smarter'.


> If you need to run an X11 program as root su preserved the DISPLAY variable.

And it also preserves $HOME. So any changed configuration file will be
owned by root. Not a big deal if you never try to run the program in
question as your user.


> Luckily you can switch back to the old behaviour, but this should be the
> default.

Care to provide a Debian bug number that you filed on this particular
issue? Because rants on debian-user do not transform into patches by
themselves.


> As Linus would say: „Don’t break user behaviour! Give them an
> option to switch to a new one.”.

A recent kernel update (linux-4.9.110-3+deb9u1) begs to differ.
Two notable behaviour changes without any way to disable them.

Reco



Re: Please help with error message

2018-08-07 Thread Reco
Hi.

On Tue, Aug 07, 2018 at 12:45:51PM +0200, Stephan Seitz wrote:
> On Di, Aug 07, 2018 at 01:18:59 +0300, Reco wrote:
> > > I never had your mentioned problems.
> > Either you have /sbin in your user's path, or you haven't run a single
> > apt-get all these years. There are other possibilities, of course,
> > though less flattering.
> 
> Bullshit again. You didn’t read the thread, did you?

Tsk-tsk. Personal attacks on debian-user, and it's not even a Friday.

> This is new behaviour in testing because Debian switched the source for the
> su binary.
> 
> Debian 9:
> stse@fsing:~$ echo $PATH
> /home/stse/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
> stse@fsing:~$ su
> Passwort:
> root@fsing /home/stse # echo $PATH
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> 
> Testing:
> [stse@osgiliath]: echo $PATH
> /home/stse/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/stse/wego/bin
> [stse@osgiliath]: su
> Passwort:
> osgiliath:/home/stse# echo $PATH
> /home/stse/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/stse/wego/bin
> 
> Testing with „ALWAYS_SET_PATH yes” in login.defs:
> [stse@osgiliath]: echo $PATH
> /home/stse/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/stse/wego/bin
> [stse@osgiliath]: su
> Passwort:
> osgiliath:/home/stse# echo $PATH
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> 
> I hope you see the difference.

So once again Debian aligned its behaviour with RHEL. Not the first
time, not the last. An interesting change, but a minor one.
All of us who have been using 'su -' all these years are not affected.
Users of plain 'su' may suffer, though.


> 
> > > „su” doesn’t change the working directory. So if you compile
> > > software as a user you can then type „make install” after su.
> > True. But this tidbit does not relate to this particular problem at all.
> 
> It does. Depending on your needs you could use „su” or „su -”.

And you're telling me to read the thread. How exactly does apt-get's
behaviour depend on the cwd?


> > > If you need to run an X11 program as root su preserved the DISPLAY
> > > variable.
> > And it also preserves $HOME. So any changed configuration file will be
> > owned by root. Not a big deal if you never try to run the program in
> 
> Only if the file never existed.

Or the program's developer is trying to be smart and writes the changed
configuration to a different file followed by a rename(2).

> 
> > > Luckily you can switch back to the old behaviour, but this should be
> > > the default.
> > Care to provide a Debian bug number that you filled on this particular
> > issue? Because rants on debian-user do not transform to patches by
> > themselves.
> 
> Which patches?

You're expressing a strong dislike of a certain change, but you're doing
so in the wrong place. The appropriate place for such dislikes is called
bugs.debian.org, and behaviour changes are accepted there in the form of
patches to source packages.


> > > As Linus would say: „Don’t break user behaviour! Give them an
> > > option to switch to a new one.”.
> > A recent kernel update (linux-4.9.110-3+deb9u1) begs to differ.
> > Two notable behaviour changes without any way to disable them.
> 
> Are these security changes? Then Linus permits it if there is no other way.
> By the way, what are these changes that are breaking user space?

Too lazy to read the changelog, eh?
The fix for CVE-2018-13405 breaks directory permissions.
The fix for CVE-2018-5390 changes the TCP stack.

Reco



Re: which program/command can show wireless connection quality?

2018-08-09 Thread Reco
Hi.

On Thu, Aug 09, 2018 at 12:29:31AM +, Long Wind wrote:
> i have a USB wireless card, it doesn't seem stable, sometimes it's slow
> i even suspect changing USB connector can affect network speed
> 
> which program can show connection quality?

These two should work with any network interface (watch for errors and
dropped frames):

netstat -ni 1

watch -n1 "ip -s a l"

Wireless specifics (signal strength, noise ratio, etc.) are better viewed
with airodump-ng from the aircrack-ng package, although stock iw or
iwconfig should do too.
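
For example (wlan0 is a placeholder; substitute your interface name):

watch -n1 "iw dev wlan0 link"

This refreshes the associated AP, signal level (dBm) and bitrate every
second.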

Also, consider wrapping a sheet of tin foil around the USB WiFi dongle,
transforming the stock omni-directional antenna into a uni-directional one.

Reco



Re: question about the kernel

2018-08-09 Thread Reco
Hi.

On Thu, Aug 09, 2018 at 08:15:42AM +0100, mick crane wrote:
> Am I right in thinking that the kernel is a single codebase agreed between
> all the kernel developers at any particular date

No. As [1] shows us, there's a mainline branch (aka the to-be-released
kernel), a stable branch (aka the released kernel) and longterm support
branches.
Also, anyone can make their own fork of the kernel. To name the most
widely used ones, there are Red Hat's fork and OpenWRT's fork.


> and that Linux
> distributions can take bits out from that for their release

Every Linux distribution effectively maintains its own branch of the
kernel, Debian included.
AFAIK Slackware is one of the distributions that tries to maintain the
least possible deviation from upstream.


> but shouldn't
> add bespoke stuff that isn't agreed by everybody else ?

Tell that to Red Hat, which single-handedly implemented its own special
way of signing the kernel and its modules (and which was not accepted by
upstream). Or Novell with their kGraft. Or Oracle with ksplice and DTrace.

Reco



Re: question about the kernel

2018-08-09 Thread Reco
Hi.

On Thu, Aug 09, 2018 at 12:45:39PM +, davidson wrote:
> On Thu, 9 Aug 2018, Reco wrote:
> 
> > Hi.
> > 
> > On Thu, Aug 09, 2018 at 08:15:42AM +0100, mick crane wrote:
> > > Am I right in thinking that the kernel is a single codebase agreed between
> > > all the kernel developers at any particular date
> > 
> > No. As [1] shows us, there's a mainline branch (aka to-be-released
> > kernel), stable branch (aka released kernel) and longterm support
> > branches.
> 
> For the record, I looked for the referent of "[1]", but couldn't find
> any pointer in Reco's message or OP's.
> 
> So I made a wild guess and went to
> 
>  https://www.kernel.org/

I missed that link indeed. Thank you.


> There I saw the list of downloads on the front page:
> 
> | mainline:   4.18-rc8 | stable: 4.17.14 | longterm:   4.14.62 |
> longterm:   4.9.119 | longterm:   4.4.147 | longterm:   3.18.118 [EOL] |
> longterm:   3.16.57 | linux-next: next-20180809
> 
> I'm going update my CV now: "Accomplished mind reader"

:)

Reco



Re: which program/command can show wireless connection quality?

2018-08-09 Thread Reco
Hi.

On Fri, Aug 10, 2018 at 11:44:42AM +1200, Richard Hector wrote:
> On 09/08/18 19:00, Reco wrote:
> > Also, consider wrapping a sheet of tin foil around USB WiFi dongle,
> > transforming stock omni-directional antenna to uni-directional.
> 
> Uni-directional or no-directional?
> 
> I'd have thought you want to be fairly specific and precise with your
> 'wrapping' to get a benefit ...

Google it. [2] uses a strainer, not tin foil, but it's pretty close to
what I meant.

[2] 
http://homestead-and-survival.com/diy-uni-directional-usb-wifi-range-extender/

Reco



Re: which program/command can show wireless connection quality?

2018-08-09 Thread Reco
Hi.

On Fri, Aug 10, 2018 at 12:16:38AM +, Long Wind wrote:
>  Thanks! 
> 
> my adapter is Asus WL-167g USB WLAN Adapter
> i don't understand why change directional from omni to uni

Because if it's the signal-to-noise ratio that's the trouble, this should
improve it.

> the adapter is problematic, sometimes it works fine, other times it doesn't 
> though link quality is good

Or it's your neighbours' APs that are giving you interference. Or a
nearby police car (radar).

Reco



Re: Monitoring copy file security

2018-08-14 Thread Reco
Hi.

On Mon, Aug 13, 2018 at 08:52:35PM +0200, Ilyass Kaouam wrote:
> Hi,
> 
> I have a database server in which I save the database (dump)
> let say
>  /home/backup directory.
> I would like to monitor this directory and find out if anyone is doing a cp
> or mv or.

apt install auditd

auditctl -a always,exit -F dir=/home/backup -F perm=war

md5sum /home/backup/* # any reading/writing command will do

tail /var/log/audit/audit.log

Reco



Re: Why is libc updated every time there's an update to the kernel

2018-08-15 Thread Reco
Hi.

On Wed, Aug 15, 2018 at 08:33:37AM -0300, Marcelo Lacerda wrote:
> I know that the kernel and libc are deeply integrated

On the contrary, libc merely states a minimal supported kernel version,
and you're free to use any more-or-less recent kernel with it.
You'll miss any new system calls, but that's all you should miss.


> but I imagine that a
> security update to it doesn't actually change anything to libc source code,
> so why do the two of them always upgrade together?

Today's stable kernel update brought a patched kernel and a 'perf' tool.
That's it, no libc upgrade.

Reco



Re: Why is libc updated every time there's an update to the kernel

2018-08-15 Thread Reco
On Wed, Aug 15, 2018 at 02:34:54PM +0200, Ulf Volmer wrote:
> On 15.08.2018 14:02, Reco wrote:
> > On Wed, Aug 15, 2018 at 08:33:37AM -0300, Marcelo Lacerda wrote:
> 
> >> but I imagine that a
> >> security update to it doesn't actually change anything to libc source code,
> >> so why do the two of them always upgrade together?
> > 
> > Today's stable kernel update brought a patched kernel and a 'perf' tool.
> > That's it, no libc upgrade.
> 
> package linux-libc-dev has been also updated today.

True. But libc6 (aka GNU libc) was not updated today, and it's the only
libc that counts.

Reco



Re: Predictable Network Interface Names

2018-08-15 Thread Reco
Hi.

On Wed, Aug 15, 2018 at 04:46:13PM +0200, Martin wrote:
> Hi ML members,
> 
> I have a bunch of machines, all virtual, where I have to swap the NIC type. 
> Three or four  NIC's per host, e1000 to vmxnet3 for those who may care about.

vmxnet3 is what's important here.

> With Predictable Network Interface Names enabled, it should be possible, to 
> do this automated.

It is now, once they fixed it.

> I got this 'ens' part, no problem. But where do the numbers come from?

Long story short, VMWare NICs were horribly broken with regard to
Predictable Network Interface Names. Since they fixed it the Red Hat way,
vmxnet3 NICs are named in accordance with the ID_NET_NAME_SLOT udev
parameter; see [1].

> It's about that PCI address numbers, right?

It was, but it's not now. Either VMWare was supplying the kernel a bogus
PCI address ([1] says so), or systemd upstream misinterpreted it. It
was not predictable back then, but it was nothing that was impossible to
fix (net.ifnames=0, .link files, the usual).
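
For completeness, a .link file sketch (the file name and MAC address are
made up; adjust to the NIC in question):

# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=00:0c:29:12:34:56

[Link]
Name=lan0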

Reco

[1] https://access.redhat.com/solutions/2592561



Re: How to file bug for installation images that don't boot

2018-08-15 Thread Reco
Hi.

On Wed, Aug 15, 2018 at 09:41:23AM -0700, Tabor Kelly wrote:
> I have an Intel NUC8i7HNK which does not boot Debian Stretch, testing
> alpha3, or the latest testing nightly. I have gathered some pertinent
> triage information which I won't bore everyone with here. My real
> question is: How do I report this bug?

Use plaintext e-mail as outlined at [1].
IMO the bug should be filed against either "grub" or "linux", depending
on where the boot process fails for you.

Reco

[1] https://www.debian.org/Bugs/Reporting



Re: Debian 9 network management

2018-08-16 Thread Reco
Hi.

On Thu, Aug 16, 2018 at 12:04:28PM +0200, Alessandro Vesely wrote:
> On Wed 15/Aug/2018 08:31:32 +0200 mick crane wrote:
> > On 2018-08-14 09:08, Remigio wrote:
> >> [...]
> >> Could you help me please to understand where are network configuration
> >> files and how to manage them?
> > 
> > I too have been wondering about this and the wiki seems clear.
> > https://wiki.debian.org/NetworkConfiguration#Setting_up_an_Ethernet_Interface
> 
> However, that doesn't cover how to properly coordinate setting up IP links,
> firewall, NAT, and netfilter daemons.

If you're using userspace daemons for netfilter then you're doing it
wrong. For instance, it has forced a non-existent distinction between the
firewall, NAT and netfilter in your e-mail.

All of these are merely the state of the running kernel, and while you
certainly need userspace tools to configure them, there's no need for any
userspace process to be running for these things to function.

> IIRC it is possible, but difficult to make and maintain, and seemingly
> fragile.

Difficulty is in the eye of the beholder.

Reco



Re: Installing package *NOT* in repository

2018-08-17 Thread Reco
Hi.

On Fri, Aug 17, 2018 at 07:27:01AM -0500, Richard Owlett wrote:
> On 08/17/2018 06:31 AM, Gene Heskett wrote:
> > On Friday 17 August 2018 05:29:07 Vincent Lefevre wrote:
> > 
> > > On 2018-08-13 09:38:48 -0300, Samuel Henrique wrote:
> > > > If you pass a file as parameter to apt install, like:
> > > > apt install ./package.deb
> > > > It will work, at least on buster.
> > > 
> > > And the "./" is important, otherwise it will not work (until now,
> > > for this reason, I didn't know that passing a file was supported).
> > > I don't know the exact rule, but it seems that the pathname needs
> > > to start with either "/", "./" or "../".
> > 
> > The effect is where the search for the given file is anchored
> > just a plain filename is assumed to be someplace in the $PATH
> > / means its in the root directory
> > ./ means its in the current directory the shell is cd'd to
> > ../ means its one directory level above the currently cd'd to directory
> > 
> > 
> 
> Can an absolute path [/home/richard/mydebs/aname.deb] be given?

Yes. I checked out this useful feature myself today.
An absolute pathname definitely works.
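
E.g., using the path from your question:

apt install /home/richard/mydebs/aname.deb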

Reco



Re: Debian 9 network management

2018-08-18 Thread Reco
Hi.

On Sat, Aug 18, 2018 at 11:25:15AM +0200, Alessandro Vesely wrote:
> On Thu 16/Aug/2018 14:02:08 +0200 Reco wrote:
> > On Thu, Aug 16, 2018 at 12:04:28PM +0200, Alessandro Vesely wrote:
> >> On Wed 15/Aug/2018 08:31:32 +0200 mick crane wrote:
> >>> 
> >>> I too have been wondering about this and the wiki seems clear.
> >>> https://wiki.debian.org/NetworkConfiguration#Setting_up_an_Ethernet_Interface
> >> 
> >> However, that doesn't cover how to properly coordinate setting up IP links,
> >> firewall, NAT, and netfilter daemons.
> > 
> > If you're using userspace daemons for netfilter then you're doing it
> > wrong. For instance, it has forced non-exsistent distinction between the
> > firewall, NAT and netfilter in your e-mail.
> > 
> > All these are merely the state of running kernel, and while you
> > certainly need userspace for configuring them, there's no need for any
> > userspace running for these things to function.
> 
> A netfilter queue daemon runs in userspace, but that doesn't make much of a
> difference.

True. But said daemon (whether it's used for NetFlow collection or L7
filtering) is not responsible for the netfilter rules themselves.


> The point is in what order things are configured/ enabled, and
> which files do you have to edit to check or change the corresponding 
> parameters.

Also true. And this is where all userspace "firewall" daemons lose. Not
a single one of them is able to stomach a netfilter rule that was not
added by it. At best they ignore it.


> >> IIRC it is possible, but difficult to make and maintain, and seemingly
> >> fragile.
> > 
> > A difficulty is in the eye of the beholder.
> 
> So is his/ her learning curve, especially in a system where network management
> leans toward casual laptop users rather than server admins —and rightly so.

I agree that a server and a desktop/laptop are configured differently.
One of the main differences boils down to the fact that one can expect a
netfilter rule set on a server, but it's a rare sight on a
desktop/laptop.
The reasons being mDNS, SSDP, video casting, IPTV (the multicast variant),
torrents, etc. Is it possible to allow all this via netfilter? Yes. Would
the end user bother? Hardly, as it's easier to disable all netfilter rules
altogether (in the case of Debian - not to enable them at all).
And if security is wanted by the end user (which is rare in my experience),
there is always intermediate network hardware for that.


> In any case, a sysadmin has to learn the syntax of say, sysctl, ip, iptables,
> vconfig, modprobe, and the like.  Hence, just running the right sequence of
> (kernel configuration) commands is more straightforward than trying to 
> discover
> how to have them run in the same sequence indirectly, by properly setting a
> number of configuration files, methinks.

You forgot to mention one crucial part - troubleshooting. For us mere
mortals, writing a set of netfilter rules on the first try without any
errors is hard, if not impossible.
And all these high-level tools are hardly suited for troubleshooting.


> In addition, the semantics of high
> level configuration files seems to be more likely to change across releases
> than that of lower level commands.

There's an answer for that, but it's hardly to anyone's liking:
Red Hat's firewalld. It's tricky, with a big 'S' for Security in its name,
and it's written in Python, but the end-user interface is stable.

Reco



Re: Repository Problem

2018-08-18 Thread Reco
Hi.

On Sat, Aug 18, 2018 at 11:13:04AM -0400, Stephen P. Molnar wrote:
> 
> 
> On 08/18/2018 10:20 AM, Dan Ritter wrote:
> > On Sat, Aug 18, 2018 at 08:15:12PM +1000, David wrote:
> > > On 18 August 2018 at 05:00, Stephen P. Molnar  
> > > wrote:
> > > > I have just installed Stretch on a new SSD on my platform.
> > > > 
> > > > During the installation I selected the University of Chicago mirror and
> > > > accepted the defaults plus backports.
> > > > 
> > > > When I fun apt-get install Thunderbird apt-get tries to log on to
> > > > prod.debian.map.fastly.net (2a04:4E42:2c::2040 and hangs. I can't find 
> > > > that
> > > > address anywhere in /etc/apt.  Why am I getting this behavior?
> > > As explained at [1], the debian-security repo [2] might be provided to
> > > you by fastly.net.
> > > 
> > > Access to the debian-security repo is important because it is the method
> > > by which your system will receive future security updates.
> > > 
> > > > Even more
> > > > important, how do I get rid of  the problem?
> > > If by "the problem" you mean the "hang", then you need to investigate why
> > > that is occurring.
> > Two cents says that he doesn't have upstream IPv6 connectivity.
> > 
> > If ping6 fails for both prod.debian.map.fastly.net and
> > www.google.com, that's a decent indicator I'm right.
> > 
> > Then the question is whether he expects to have IPv6
> > connectivity (and so it's broken) or whether he doesn't (and we
> > should tell Debian to stop using it).
> > 
> Thank for the reply.
> 
> Where can I send the two cents?  It looks as if that's correct.
> 
> The installer installed ipv6 without giving me any choice about the matter.

Don't blame the installer for that. The way IPv6 is provided, there's
nothing to configure on your host (and nothing to blame here either).
Your network hardware (aka the router), on the other hand, almost surely
advertises an IPv6 prefix. So put the blame there, or on your ISP.

> How do I get rid of ipv6 and replace it WITH ipv4?

1) Delicate way of doing it (apply after each boot):

ip6tables -I INPUT ! -i lo -p icmpv6 --icmpv6-type 134 -j DROP

2) Hardcore way of doing it (ditto):

sysctl -qw net.ipv6.conf.all.disable_ipv6=1

3) Right way of doing things:

Fix your router.
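
If you settle for 2), one way to avoid re-applying it after every boot
(my addition - the usual sysctl persistence mechanism):

echo 'net.ipv6.conf.all.disable_ipv6 = 1' > /etc/sysctl.d/99-disable-ipv6.conf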

Reco



Re: Fail2Ban Question: Can I do this without restarting the service?

2018-08-18 Thread Reco
Hi.

On Sat, Aug 18, 2018 at 05:55:50PM +0200, john doe wrote:
> On 8/17/2018 7:35 PM, Brian wrote:
> > On Fri 17 Aug 2018 at 19:16:07 +0200, john doe wrote:
> > 
> > > Also, a server without firewall capibility should never be facing 
> > > internet.
> > 
> > Why? "never" seems a little strong. Mine does; what's the problem?
> > 
> 
> Given the fact that the OP want's to use fail2ban and has at least two
> services running on his public host (apache, ssh) it was a reasonable guess
> to stress out that a firewall is a must in his situation.
> 
> I can not talk about your server configuration because I don't know anything
> about it! :)
> 
> In general, the requirements for firewalling a public host depends on the
> environment and other factors.
> Googling this topick will show that there is no formal answer.

There is. Google for "TCP RST flood".

Reco



Re: Repository Problem

2018-08-18 Thread Reco
Hi.

On Sat, Aug 18, 2018 at 01:46:31PM -0400, Stephen P. Molnar wrote:
> 
> 
> On 08/18/2018 11:51 AM, Reco wrote:
> > Hi.
> > 
> > On Sat, Aug 18, 2018 at 11:13:04AM -0400, Stephen P. Molnar wrote:
> > > 
> > > On 08/18/2018 10:20 AM, Dan Ritter wrote:
> > > > On Sat, Aug 18, 2018 at 08:15:12PM +1000, David wrote:
> > > > > On 18 August 2018 at 05:00, Stephen P. Molnar 
> > > > >  wrote:
> > > > > > I have just installed Stretch on a new SSD on my platform.
> > > > > > 
> > > > > > During the installation I selected the University of Chicago mirror 
> > > > > > and
> > > > > > accepted the defaults plus backports.
> > > > > > 
> > > > > > When I fun apt-get install Thunderbird apt-get tries to log on to
> > > > > > prod.debian.map.fastly.net (2a04:4E42:2c::2040 and hangs. I can't 
> > > > > > find that
> > > > > > address anywhere in /etc/apt.  Why am I getting this behavior?
> > > > > As explained at [1], the debian-security repo [2] might be provided to
> > > > > you by fastly.net.
> > > > > 
> > > > > Access to the debian-security repo is important because it is the 
> > > > > method
> > > > > by which your system will receive future security updates.
> > > > > 
> > > > > > Even more
> > > > > > important, how do I get rid of  the problem?
> > > > > If by "the problem" you mean the "hang", then you need to investigate 
> > > > > why
> > > > > that is occurring.
> > > > Two cents says that he doesn't have upstream IPv6 connectivity.
> > > > 
> > > > If ping6 fails for both prod.debian.map.fastly.net and
> > > > www.google.com, that's a decent indicator I'm right.
> > > > 
> > > > Then the question is whether he expects to have IPv6
> > > > connectivity (and so it's broken) or whether he doesn't (and we
> > > > should tell Debian to stop using it).
> > > > 
> > > Thank for the reply.
> > > 
> > > Where can I send the two cents?  It looks as if that's correct.
> > > 
> > > The installer installed ipv6 without giving me any choice about the 
> > > matter.
> > Don't blame the installer for that. The way IPv6 is provided there's
> > nothing to configure on your host (and there's nothing to blame here 
> > either).
> > You network hardware (aka router), on the other hand, most surely
> > advertizes IPv6 prefix. So put the blame there or on your ISP.
> > 
> > > How do I get rid of ipv6 and replace it WITH ipv4?
> > 1) Delicate way of doing it (apply after each boot):
> > 
> > ip6tables -I INPUT ! -i lo -p icmpv6 --icmpv6-type 134 -j DROP
> > 
> > 2) Hardcore way of doing it (ditto):
> > 
> > sysctl -qw net.ipv6.conf.all.disable_ipv6=1
> > 
> > 3) Right way of doing things:
> > 
> > Fix your router.
> > 
> > Reco
> > 
> > 
> 
> According to my AT&T BGW210 Router both ipv4 amd 1pv6 are active

And AT&T is known to have strange views on IPv6 (works for some, broken
for most).
Unless IPv6 is something that you just cannot live without, disable IPv6
advertising on the router.

I.e. - locate a page that looks like [1] and set all three knobs to Off.

Reco

[1] http://setuprouter.com/router/arris/bgw210-700-att/ipv6-86430-large.htm



Re: Deep Packet Inspection

2018-08-19 Thread Reco
Hi.

On Sun, Aug 19, 2018 at 08:31:42PM +0300, Mimiko wrote:
> Hello.
> 
> Maybe this was answered. Is there a Deep Packet Inspection to use in Debian 9 
> for a firewall setup? Opensource and maybe in repository.

Once upon a time there was the so-called l7filter (the main suite), which
was packaged for Debian, but it was excluded from the current stable.
Not a big loss IMO, as l7filter was only good for traffic classification
(the netfilter mangle table).

You may want to check a set of kernel patches called nDPI - [1] (sorry
for the GitHub link). It will take a patched kernel *and* a patched
iptables suite to make the thing run, and I suspect that amd64 is the
only supported architecture.

If software archeology is your thing, there's OpenDPI - [2] (sorry for
the GitHub link again).

As far as I can tell, there's no DPI software packaged for current
stable at all.

[1] https://github.com/vel21ripn/nDPI

[2] https://github.com/thomasbhatia/OpenDPI

Reco



Re: Sid: NFSv3 mounting problem

2018-08-19 Thread Reco
Hi.

On Sun, Aug 19, 2018 at 05:11:02PM +0200, Grzegorz Sójka wrote:
> On 08/19/18 16:52, deloptes wrote:
> > Grzegorz Sójka wrote:
> > 
> > > /home/trash 192.168.0.0/24(no_subtree_check,async,rw,all_squash)
> > 
> > and you are 100% sure you are using nfs v3 and not nfs v4 on the not working
> > client? you do not have firewalls enabled?
> 
> Yes, here is appropriate line from fstab:
> 
> Hermes:/home/trash /home/trash nfs vers=3,defaults,noatime,nodiratime 0 0

Out of pure curiosity, can you provide the result of

rpcinfo -p localhost

from the NFS server, 'good' NFS client and a 'problematic' one?

And, while we're at it anyway, the result of:

rpcinfo -p hermes

from both the clients?

Reco



Re: Deep Packet Inspection

2018-08-19 Thread Reco
Hi.

On Sun, Aug 19, 2018 at 09:03:10PM +0300, Eero Volotinen wrote:
> snort

Intrusion detection. Unsuitable for traffic shaping or filtering.

> and suricata.

Utilizes NFQUEUE. Friends don't let friends copy network packets
from kernelspace to userspace and back.

Reco



Re: Problem Mounting New Drives

2018-08-19 Thread Reco
Hi.

On Sun, Aug 19, 2018 at 02:04:39PM -0400, Stephen P. Molnar wrote:

Note that the second field in fstab(5) is the mountpoint, as one of the
stock records shows:

> UUID=8f4eeaae-a055-4262-bebb-cf99abe982a5 /varext4 defaults
> 0   2

Yours have a block device name instead of a mountpoint:

> UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07 /dev/sdb1 ext4 errors=remount-ro 0
> 1
> UUID=d65867da-c658-4e35-928c-9dd2d6dd5742 /dev/sdc1 ext4 errors=remount-ro 0
> 1
> UUID=007c1f16-34a4-438c-9d15-e3df601649ba /dev/sdc2 ext4 errors=remount-ro 0
> 1

So even a stock mount(1) should refuse to mount these.


> When I rebooted the computer the OS didn't like the new fstab and gave me a
> number of, at least to me, obscure messages.

systemd-mount can be cryptic, I agree.


> Obviously, I missed something important.  My question is what?

It's kind of a dumb question, but where do you need your sdb1, sdc1 and
sdc2 mounted? Your fstab(5) does not mention that.
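
For reference, a corrected line would look like this (the mountpoint is
just an example and has to exist beforehand, e.g. 'mkdir -p /mnt/sdb1'):

UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07 /mnt/sdb1 ext4 errors=remount-ro 0 2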

Reco



Re: Group ID conflicts between different distros: how to manage them with NIS?

2018-08-19 Thread Reco
Hi.

On Mon, Aug 20, 2018 at 12:51:24AM -0300, Joao Roscoe wrote:
> Hmmm...
> 
> If I create a NIS group (with a high ID), called serial_ports, dhould I
> just, as root, chgrp /dev/ttyS0 so that it's group is serial_ports ?

You could, and it may even work, but it would be temporary.
To make it truly work you should write your own udev rule for these (and
other) devices.

The reason being: udev creates everything under /dev (at system boot),
and udev changes everything under /dev (on VT switches and user relogins).

In that particular case you should override the changes made by
/lib/udev/rules.d/50-udev-default.rules.
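
A minimal sketch of such a rule (the file name, group and mode are just
examples):

# /etc/udev/rules.d/99-serial-ports.rules
KERNEL=="ttyS[0-9]*", GROUP="serial_ports", MODE="0660"

Followed by:

udevadm control --reload
udevadm trigger --subsystem-match=tty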

Reco



Re: Deep Packet Inspection

2018-08-19 Thread Reco
Hi.

On Sun, Aug 19, 2018 at 05:47:43PM -0400, Cindy-Sue Causey wrote:
> YES, I know. Overall, it still might not do the OP's job that's
> needed, but it used the SAME words I just read above in Reco's
> response.

That's true, I was brief. The main difference between, say, nDPI and
ngrep is that nDPI analyzes layer 7 of network communication *and*
allows creating filtering rules on top of them. ngrep merely analyzes
captured traffic.
And, if it's the network traffic analysis is what one needs, there's
wireshark. Go no further.

Reco



Re: Sid: NFSv3 mounting problem (again!)

2018-08-20 Thread Reco
Hi.

On Mon, Aug 20, 2018 at 12:42:14PM +0200, Grzegorz Sójka wrote:
> I added /etc/services /etc/rpc and now rpcbind is starting but:
> 
> #rpcinfo -p localhist
>program vers proto   port  service
> 104   tcp111  portmapper
> 103   tcp111  portmapper
> 102   tcp111  portmapper
> 104   udp111  portmapper
> 103   udp111  portmapper
> 102   udp111  portmapper
> 1000241   udp  49035  status
> 1000241   tcp  58571  status
> 
> there is no nlockmgr.

I always said that part of a result is better than none at all.
This one is suitable for the client, and it advertises NFS support from
version 2 to version 4 inclusive.
For an NFS server I'd expect to see 'nfs' (v3 or v4).

Reco



Re: Sid: NFSv3 mounting problem (again!)

2018-08-20 Thread Reco
Hi.

On Mon, Aug 20, 2018 at 04:04:01PM +0200, Grzegorz Sójka wrote:
> On 08/20/18 15:12, Reco wrote:
> > Hi.
> > 
> > On Mon, Aug 20, 2018 at 12:42:14PM +0200, Grzegorz Sójka wrote:
> > > I added /etc/services /etc/rpc and now rpcbind is starting but:
> > > 
> > > #rpcinfo -p localhist
> > > program vers proto   port  service
> > >  104   tcp111  portmapper
> > >  103   tcp111  portmapper
> > >  102   tcp111  portmapper
> > >  104   udp111  portmapper
> > >  103   udp111  portmapper
> > >  102   udp111  portmapper
> > >  1000241   udp  49035  status
> > >  1000241   tcp  58571  status
> > > 
> > > there is no nlockmgr.
> > 
> > I always said that part of the result is better that none at all.
> > This one is suitable for the client, and it advertizes NFS support from
> > version 2 to version 4 inclusive.
> > For NFS server I'd expect to see 'nfs' (v3 or v4).
> 
> If I'm getting this right nlockmgr is needed only on the server side?

And only if you're using NFSv3 and your mount options do not include
'nolock'. The question is - what's on the server?


> Anyway, I still get the following error:
> # mount -v /mnt/users
> mount.nfs: timeout set for Sat Aug 18 18:55:52 2018
> mount.nfs: trying text-based options 'nfsvers=3,addr=192.168.0.129'
> mount.nfs: prog 13, trying vers=3, prot=6
> mount.nfs: trying 192.168.0.129 prog 13 vers 3 prot TCP port 2049

A client tries NFSv3 via tcp:2049 and presumably meets TCP RST.

> mount.nfs: prog 15, trying vers=3, prot=17
> mount.nfs: trying 192.168.0.129 prog 15 vers 3 prot UDP port 37385
> mount.nfs: Protocol not supported

The server tells the client that NFSv3 over UDP is not supported here.

But without rpcinfo output from the server it's impossible to tell what's
really supported.


> On all the working clients I do have:
> 
> # rpcinfo -p
>program vers proto   port  service
> 104   tcp111  portmapper
> 103   tcp111  portmapper
> 102   tcp111  portmapper
> 104   udp111  portmapper
> 103   udp111  portmapper
> 102   udp111  portmapper
> 1000211   udp  44149  nlockmgr
> 1000213   udp  44149  nlockmgr
> 1000214   udp  44149  nlockmgr
> 1000211   tcp  37814  nlockmgr
> 1000213   tcp  37814  nlockmgr
> 1000214   tcp  37814  nlockmgr
> 1000241   udp  52987  status
> 1000241   tcp  55392  status
> 
> So, nlockmgr is missing only on the broken client.

Note that your client advertises NFSv4 support. Maybe that's what they
really use?

Reco



Re: Deep Packet Inspection

2018-08-21 Thread Reco
Hi.

Top posting is considered bad manners here.

On Tue, Aug 21, 2018 at 11:22:02AM +0300, Mimiko wrote:
> last update to OpenDPI was 6 years ago. Could it be used now without problems?

I sincerely doubt it. Hence my suggestion of nDPI.

Reco



Re: Microsoft Does It Again

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 11:14:48AM -0400, Stephen P. Molnar wrote:
> I'm not trying to start a flame war or bash Microsoft (let's fact it, they
> don't pay attention), but I have a problem with an Excel file.
> 
> For reasons unbeknownst to me, somehow it got saved as an Excel OOXML file.
> I find that i don't have a Linux app that seems to able to open the file.

That's to be expected. M$ are relative newbies when it comes to vendor
lock-in, but they have slowly mastered the trick over the last 30 years
or so.

> Is there a Linux application that will allow me to recover the contents of
> the file?

Try xlsx2csv. It's a nasty Python script, but it beats unpacking OOXML
(hint - it's a zip archive in disguise) and trying to decipher the
underlying XML parody.
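
Typical usage (the file names are just examples):

xlsx2csv broken.xlsx recovered.csv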

Reco



Re: Microsoft Does It Again

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 05:48:31PM +0200, to...@tuxteam.de wrote:
> > Is there a Linux application that will allow me to recover the
> > contents of the file?
> 
> As others have said, LibreOffice should do. I don't know about xlsx2csv,

It's the *best*. Small, relatively lightweight, and does the job in
milliseconds. The output is limited to CSV though.


> but an apt search on my box yields
> 
>   tomas@trotzki:~$ apt search ooxml
>   Sorting... Done
>   Full Text Search... Done
>   docx2txt/stable,stable,stable 1.4-0.1 all
> Convert Microsoft OOXML files to plain text

Not relevant here; the input is xlsx.


>   libapache-poi-java/stable,stable,stable 3.10.1-3 all
> Apache POI - Java API for Microsoft Documents
>   
>   libapache-poi-java-doc/stable,stable,stable 3.10.1-3 all
> Apache POI - Java API for Microsoft Documents (Documentation)

Suggesting Java to parse XML inside a ZIP archive can be considered
cruel and unusual punishment in certain countries. Besides, these are a
library and its documentation; the implementation of a working parser is
left as an exercise for the reader.


>   libexcel-writer-xlsx-perl/stable,stable,stable 0.95-1 all
> module to create Excel spreadsheets in xlsx format

That one's good, with one *little* problem.
Making the thing output any sensible result is *very* painful.


>   unoconv/stable,stable,stable 0.7-1.1 all
> converter between LibreOffice document formats

Don't. Just don't. unoconv is an ugly Python script that launches a
headless LibreOffice and feeds the file to the resulting web service.
It has all the disadvantages of LibreOffice (CPU/memory consumption,
abysmal parsing speed, LibreOffice format limitations), and a single
gain - the ability to batch-convert files.

If LibreOffice is unable to open the file, unoconv will do you absolutely
no good.

Also, 'apt search xlsx'.

Reco



Re: Microsoft Does It Again

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 06:28:57PM +0200, to...@tuxteam.de wrote:
> On Tue, Aug 21, 2018 at 07:02:32PM +0300, Reco wrote:
> > On Tue, Aug 21, 2018 at 05:48:31PM +0200, to...@tuxteam.de wrote:
> 
> [...]
> 
> > >   tomas@trotzki:~$ apt search ooxml
> > >   Sorting... Done
> > >   Full Text Search... Done
> > >   docx2txt/stable,stable,stable 1.4-0.1 all
> > > Convert Microsoft OOXML files to plain text
> > 
> > Not relevant. Input is xlsx.
> 
> Well, xlsx *is* OOXML (I like to call it "MOOXML" as in
> "Microsoft's..." -- you get the idea :)

That's like saying that apples and oranges are both fruits.
I.e. it's true, but one does not usually compare apples to oranges.

Both docx and xlsx are zip archives with XML inside. Their parsing is
different, and applying the parsing rules of one to the other yields no
useful result.

Parsing docx is easy; even I can do it (and did, actually).
Parsing xlsx, with all its gross formulas (sp?), macros and arcane date
formats, is the definition of pain. I gave up and became a happy
xlsx2csv user.

Reco



Re: what is special about unrar-free when we have unar ?

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 11:26:53PM +0530, shirish शिरीष wrote:
> Hi all,
> 
> Please CC me as I'm not following the list per-se (due to traffic
> constraints and just inability to manage information flow.)
> 
> Does anybody know why unrar-free is till in Debian repo. when we have
> unar which does the same or more (support for rarv5 among others) . I
> do get the idea that we should have alternatives for any application
> or is there something more that I don't know ?

It's more likely that you've been lucky and haven't met multipart solid
password-protected RAR5 archives. unar cannot chew on these.
bsdtar (which has some RAR support) cannot even handle the 'solid' part
of the previous statement.
So it's sad and gross, but unrar-nonfree stays in non-free for all those
corner cases.

Reco



Re: Data Recovery

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 11:36:20AM -0500, Josh W. wrote:
> Hello World of Debian,
> I was trying to setup a shared folder between my Debian Stretch
> system and my Raspberry Pi. I had created an "/export & /export/users
> directory" and had bound it to my "/home/users" directory. I had given up
> on the idea of sharing between the two OSes, because i was frustated and
> deleted my "/export/users" directory not thinking that it was still bound
> to my "/home/users" directory. When i deleted the "/export/users" directory
> it took my "/home/users" directory with it. So my question is, How can i
> recovery the lost data. What would be the best route to take in order to
> recover my personal files? Your help is Very Much Needed!

1) Immediately shut down the Raspberry Pi.

2) Make a block-level image copy of the SD card (a command sketch follows
after this list). Do not do the following steps on the SD card itself!

3) Mount a filesystem from said copy on a Debian PC (can be substituted
with any GNU/Linux PC).

4) Install testdisk on the Debian PC.

5) Run photorec as root on the filesystem from pt 3.

6) Consider doing regular filesystem backups in the future, preferably
automatic ones.
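
A sketch of steps 2-5 (device and file names are examples - adjust to
your setup; photorec can work on the image file directly):

dd if=/dev/mmcblk0 of=sdcard.img bs=4M conv=sync,noerror
apt install testdisk
photorec sdcard.img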

Reco



Re: Microsoft Does It Again

2018-08-21 Thread Reco
Hi.

On Tue, Aug 21, 2018 at 03:11:55PM -0400, Stephen P. Molnar wrote:
> 
> 
> On 08/21/2018 12:02 PM, Stephen P. Molnar wrote:
> > 
> > 
> > On 08/21/2018 11:48 AM, to...@tuxteam.de wrote:
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA1
> > > 
> > > On Tue, Aug 21, 2018 at 11:14:48AM -0400, Stephen P. Molnar wrote:
> > > > I'm not trying to start a flame war or bash Microsoft [...]
> > > No worries: I won't try to stop you.
> > > 
> > > > For reasons unbeknownst to me, somehow it got saved as an Excel
> > > > OOXML file [...]
> > > Go figure. There are masochists out there, aren't there ;-)
> > > 
> > > > [...] Contacting Microsoft Customer Service resulted in
> > > > instructions to upgrade Excel to the current version.  THere is no
> > > > way that I will spend the money to implement that suggestion.
> > > Bah. Microsoft at its best. "Trust us", says Nadella.
> > > 
> > > > Is there a Linux application that will allow me to recover the
> > > > contents of the file?
> > > As others have said, LibreOffice should do. I don't know about xlsx2csv,
> > > but an apt search on my box yields
> > > 
> > >tomas@trotzki:~$ apt search ooxml
> > >Sorting... Done
> > >Full Text Search... Done
> > >docx2txt/stable,stable,stable 1.4-0.1 all
> > >  Convert Microsoft OOXML files to plain text
> > >   libapache-poi-java/stable,stable,stable 3.10.1-3 all
> > >  Apache POI - Java API for Microsoft Documents
> > >   libapache-poi-java-doc/stable,stable,stable 3.10.1-3 all
> > >  Apache POI - Java API for Microsoft Documents (Documentation)
> > >   libexcel-writer-xlsx-perl/stable,stable,stable 0.95-1 all
> > >  module to create Excel spreadsheets in xlsx format
> > >   unoconv/stable,stable,stable 0.7-1.1 all
> > >  converter between LibreOffice document formats
> > > 
> > > Unoconv is the machinery behind LibreOffice import; there's docx2txt,
> > > which I don't know personally, and Perl's libexcel-writer-xlsx-perl,
> > > which I have used to good effect (some assembly required, but if you
> > > want to build it into some automatic workflow it might be your cup
> > > of tea). No idea about the java thingies.
> > > 
> > > Lots of choice :-)
> > > 
> > > Cheers
> > > - -- tomás
> > > -BEGIN PGP SIGNATURE-
> > > Version: GnuPG v1.4.12 (GNU/Linux)
> > > 
> > > iEYEARECAAYFAlt8NE8ACgkQBcgs9XrR2kaFBQCbBQky3qzgdbjxa/ENuwU88gPb
> > > Kl0An3o7wxkhWsfusgiFbpbhCi3NYC/R
> > > =ZnsD
> > > -END PGP SIGNATURE-
> > > 
> > > 
> > Many thanks to all who responded.
> > 
> > LibreOffice did it!!
> > 
> Sadly, I was mistaken.  I opened the wrong file.  LibreOffice wouldn't open
> the corrupted file.

apt-get install xlsx2csv

Reco



Re: Openssl ciphers is not means SSL supported?

2018-08-21 Thread Reco
Hi.

On Wed, Aug 22, 2018 at 02:01:23PM +0900, Miwa Susumu wrote:
> Hi all.
> 
> [question 1]
> 'openssl ciphers -v' output ciphers. include SSL protocol version.
> I have 'SSLv3' by 'openssl ciphers -v'
> but debian openssl package disable ssl3. by configure option.
> (see configure option in debian/rules file).
> 
> my openssl doesn't support SSLv3. is it right?

Debian's openssl does support the ciphers that were associated with SSLv3,
but all these ciphers can be used for TLS too.
The support for the SSLv3 protocol itself is disabled.


> [question 2]
> What can I know which SSL version is supported by openssl?

"openssl list -disabled" should show all disabled features, here they
include SSL3. The support for SSL2 was lost by openssl a long time ago.

So, which version of SSL does Debian's openssl support? No version at
all.
Which version of TLS does Debian's openssl support? 1.0, 1.1 and 1.2.
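
A quick client-side sanity check (the host name is just an example):

openssl s_client -connect www.debian.org:443 -tls1_2 < /dev/null

which should complete a TLS 1.2 handshake against any sane server.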

Reco



Re: Slow XFS write

2018-08-22 Thread Reco
Hi.

On Wed, Aug 22, 2018 at 04:15:30PM +0200, Martin LEUSCH wrote:
> To complete the description there is infos about the XFS partition:
> 
> meta-data=/dev/sda4  isize=256agcount=11, agsize=268435455 
> blks
>  =   sectsz=512   attr=2, projid32bit=1
>  =   crc=0finobt=0
> data =   bsize=4096   blocks=2920268544, imaxpct=5
>  =   sunit=0  swidth=0 blks
> naming   =version 2  bsize=4096   ascii-ci=0 ftype=0
> log  =internal   bsize=4096   blocks=521728, version=2
>  =   sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none   extsz=4096   blocks=0, rtextents=0

Nothing unusual IMO. Certainly nothing that could explain such a
slowdown.

Can you provide the result of 'perf top'? Just to be sure it's XFS that's
to blame.

> And infos about the RAID5 volume:

Looks suspiciously similar to an LSI MegaRAID.
Is the controller firmware current? Is it possible to upgrade it?
Since you seem to have a BBU, have you considered enabling WriteBack mode?

Reco



Re: A Rather Basic Cron Question

2018-08-23 Thread Reco
Hi.

On Thu, Aug 23, 2018 at 01:29:30PM -0500, Martin McCormick wrote:
> */5 5-12,13-23 * * * sh -c ". $HOME/.master.env; ./etc/do_mail"
...
>   In this case, no harm was done but shouldn't the cron
> runs have stopped at 12:00 and then resumed at 13:00?

No, it should not. It's a classic 'off by one' mistake:

$ perl -e 'my @a = (eval "5..12,13..23"); print join ",", @a;'

5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23

On the other hand, 

$ perl -e 'my @a = (eval "5..11,13..23"); print join ",", @a;'

5,6,7,8,9,10,11,13,14,15,16,17,18,19,20,21,22,23

So 5-11,13-23 should do what you want.
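
I.e. the corrected crontab entry would read:

*/5 5-11,13-23 * * * sh -c ". $HOME/.master.env; ./etc/do_mail"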

Reco



Re: mailing list vs "the futur"

2018-08-24 Thread Reco
Hi.

On Fri, Aug 24, 2018 at 04:31:40PM +, Curt wrote:
> On 2018-08-24, Gene Heskett  wrote:
> > On Friday 24 August 2018 09:23:14 Rodolfo Medina wrote:
> >
> >> rhkra...@gmail.com writes:
> >> > On Thursday, August 09, 2018 01:47:24 PM Greg Wooledge wrote:
> >> >> On Thu, Aug 09, 2018 at 05:39:36PM +, tech wrote:
> >> >> > Should'nt be time to move away from an old mail-listing to
> >> >> > something more modern like a bugzilla or else ???
> >> >>
> >> >> No.
> >> >
> >> > +1
> >>
> >> +2
> > -100
> >
> 
> -97

-INT_MAX. I win.

Reco



Re: mailing list vs "the futur"

2018-08-24 Thread Reco
Hi.

On Fri, Aug 24, 2018 at 06:45:00PM +0100, Eric S Fraga wrote:
> On Friday, 24 Aug 2018 at 20:15, Reco wrote:
> > -INT_MAX. I win.
> 
> -1 (wraps around so = INT_MAX) and I win!

Damn. Should've seen this. Will use long int next time.

Reco



Re: Swap priority in Debian

2018-08-25 Thread Reco
Hi.

On Sat, Aug 25, 2018 at 09:42:31AM +0530, Subhadip Ghosh wrote:
> Hi,
> 
> I am a Debian testing user. Recently I am experiencing freezing on my Debian
> system intermittently and during troubleshooting the same, I found out that
> the I have a swap partition with priority set to -2.

Same here with stable.


> But according to the below manpage:
> 
> https://manpages.debian.org/stretch/mount/swapon.8.en.html
> 
> swap priority should be between -1 and 32767.

So the swapon(8) utility has this restriction. Note that the system call
itself - swapon(2) - merely interprets swap priority as a signed integer,
so any priority number is actually possible (within integer limits, of
course).


> I have a swappiness value of 60 but I don't remember seeing the swap
> being used at all recently, the used swap is always 0%.

A hint: they invented this wonderful thing called sysstat decades ago, so
you don't have to remember your swap usage, along with the other things
sysstat gathers.


> My question is, do you think that the -2 priority is stopping the swap
> partition from actually being used and because of that, the system is
> getting frozen when the memory usage is high?

No, it definitely does not work this way.
A swap priority only comes into play once you have multiple swap areas.
If you have a single swap partition/LV/file, the priority value is
meaningless.
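
A quick way to see which swap areas are active, and with which priority:

swapon --show
cat /proc/swaps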

System freezes, on the other hand (did I mention sysstat?), could
indicate heavy swapping, barrier writes, kernel bugs (12309, anyone?),
OOM killer invocations, overheating and many other things.
A good starting point would be the kernel message log (aka
/var/log/kern.log) from the time of the freeze.

Reco



Re: Swap priority in Debian

2018-08-25 Thread Reco
Hi.

On Sat, Aug 25, 2018 at 06:07:01PM +0530, Subhadip Ghosh wrote:
> 
> 
> On Saturday 25 August 2018 04:29 PM, Reco wrote:
> > Hi.
> > 
> > On Sat, Aug 25, 2018 at 09:42:31AM +0530, Subhadip Ghosh wrote:
> > > Hi,
> > > 
> > > I am a Debian testing user. Recently I am experiencing freezing on my 
> > > Debian
> > > system intermittently and during troubleshooting the same, I found out 
> > > that
> > > the I have a swap partition with priority set to -2.
> > Same here with stable.
> > 
> > 
> > > But according to the below manpage:
> > > 
> > > https://manpages.debian.org/stretch/mount/swapon.8.en.html
> > > 
> > > swap priority should be between -1 and 32767.
> > So swapon(8) utility has this restriction. Note that the system call
> > itself - swapon(2) merely interprets swap priority as a signed integer,
> > so any priority number is actually possible (within integer limits of
> > course).
> > 
> > 
> > > I have a swappiness value of 60 but I don't remember seeing the swap
> > > being used at all recently, the used swap is always 0%.
> > A hint. They invented this wonderful thing called sysstat decades ago so
> > you don't have to remember your swap usage, along with other things
> > sysstat gathers.
> > 
> > 
> > > My question is, do you think that the -2 priority is stopping the swap
> > > partition from actually being used and because of that, the system is
> > > getting frozen when the memory usage is high?
> > No, it definitely does not work this way.
> > A swap priority only comes into play once you have multiple instances of
> > swaps. If you have a single swap partition/lv/file, a priority value is
> > meaningless.
> > 
> > A system freezes, on the other hand (did I mention sysstat?), could
> > indicate heavy swapping, barrier writes, kernel bugs (12309, anyone?),
> > oom killer invocations, overheating and many other things.
> > A good starting point would be kernel message log (aka
> > /var/log/kern.log) from the time of this freeze.
> Thanks for the suggestion. I checked the kern.log but did not find any
> suspicious log, in fact no entry from that exact time frame when the last
> freeze happened. Do you have any other suggestions to troubleshoot the
> freezes?

"sar -r ALL -f 25" and "sar -S -f 25" would be the next logical step.
Note that "25" means current date (i.e. 25th of current month).

Should be preceeded by "apt install sysstat", of course.
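
And, if I remember correctly, the Debian package ships with the
collector disabled, so something like this (as root) is needed before
sar has any data to show:

sed -i 's/^ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
service sysstat restart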

Reco



Re: question on spamd logging

2018-08-25 Thread Reco
Hi.

On Sat, Aug 25, 2018 at 11:27:32AM -0400, Gene Heskett wrote:
> This is expanding the syslog to the point of drowning out any real 
> actionable messages.
> 
> I think it used to have a log of its own. How, it this continues once 
> stretch is up and running, can we put those spamd messages back into 
> spamassassin's own log file? Seems like the logical place for them.

It's definitely possible with rsyslog's filtering feature.
Can you provide a sample of the records that annoy you?

Reco



Re: question on spamd logging

2018-08-25 Thread Reco
Hi.

On Sat, Aug 25, 2018 at 12:16:49PM -0400, Gene Heskett wrote:
> On Saturday 25 August 2018 12:12:09 Reco wrote:
> 
> > Hi.
> >
> > On Sat, Aug 25, 2018 at 11:27:32AM -0400, Gene Heskett wrote:
> > > This is expanding the syslog to the point of drowning out any real
> > > actionable messages.
> > >
> > > I think it used to have a log of its own. How, it this continues
> > > once stretch is up and running, can we put those spamd messages back
> > > into spamassassin's own log file? Seems like the logical place for
> > > them.
> >
> > It's definitely possible with rsyslog's filtering feature.
> > Can you provide a sample of the records that annoy you?
> >
> > Reco
> 
> Aug 25 12:10:01 coyote /USR/SBIN/CRON[20245]: (www-data) CMD ([ -x 
> /usr/share/awstats/tools/update.sh ] && /usr/share/awstats/tools/update.sh)
> Aug 25 12:11:33 coyote spamd[4854]: spamd: connection from localhost 
> [127.0.0.1]:43518 to port 783, fd 5
> Aug 25 12:11:33 coyote spamd[4854]: spamd: setuid to gene succeeded
> Aug 25 12:11:33 coyote spamd[4854]: spamd: processing message 
> <20180825161027.eaq2xy65oiar6...@p5k.home> aka 
>  for gene:1000
> Aug 25 12:11:34 coyote spamd[4854]: spamd: clean message (1.6/5.1) for 
> gene:1000 in 1.1 seconds, 10538 bytes.
> Aug 25 12:11:34 coyote spamd[4854]: spamd: result: . 1 - 
> BAYES_50,HEADER_FROM_DIFFERENT_DOMAINS,RDNS_NONE,T_DKIM_INVALID 
> scantime=1.1,size=10538,user=gene,uid=1000,required_score=5.1,rhost=localhost,raddr=127.0.0.1,rport=43518,mid=<20180825161027.eaq2xy65oiar6...@p5k.home>,rmid=,bayes=0.50,autolearn=no
>  
> autolearn_force=no
> Aug 25 12:11:35 coyote spamd[4707]: prefork: child states: II
> 
> Several hundred a day...

Try this:

cat > /etc/rsyslog.d/spamd.conf << EOF
:syslogtag, startswith, "spamd" /var/log/spamd.log
:syslogtag, startswith, "spamd" stop
EOF

service rsyslog restart

Consider adding logrotate configuration file for the new
/var/log/spamd.log.
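
Something like this should do as a starting point (an untested sketch,
adjust the rotation schedule and retention to taste):

cat > /etc/logrotate.d/spamd << EOF
/var/log/spamd.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF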

And, before you ask, documentation for rsyslogd lives in "rsyslog-doc"
package.

Reco



Re: question on spamd logging

2018-08-25 Thread Reco
Hi.

On Sat, Aug 25, 2018 at 01:49:53PM -0400, Gene Heskett wrote:
> > > Aug 25 12:11:35 coyote spamd[4707]: prefork: child states: II
> > >
> > > Several hundred a day...
> >
> > Try this:
> >
> > cat > /etc/rsyslog.d/spamd.conf << EOF
> >
> > :syslogtag, startswith, "spamd" /var/log/spamd.log
> > :syslogtag, startswith, "spamd" stop
> >
> > EOF
> >
> > service rsyslogd restart
> >
> no permission

I assumed that I could skip the obligatory 'please assume root privileges
before making systemwide changes'. Apparently I was wrong, but …


> so I cd to e/rs.d sudo -i and made this file
> :syslogtag, startswith, "spamd" /var/log/spamd.log
> :syslogtag, startswith, "spamd" stop

… since things worked themselves out, we now have this:


> And had to do the restart as root, which logged this:
> Aug 25 13:34:45 coyote rsyslogd: [origin software="rsyslogd" 
> swVersion="7.6.3" x-pid="3079" x-info="http://www.rsyslog.com";] exiting 
> on signal 15.
> Aug 25 13:34:45 coyote rsyslogd: [origin software="rsyslogd" 
> swVersion="7.6.3" x-pid="23099" x-info="http://www.rsyslog.com";] start

These two are your usual rsyslogd restart. Nothing to see here.


> Aug 25 13:34:45 coyote rsyslogd-3000: unknown priority name ""
> 
> No clue what that error might be, you?

But this one sure is cryptic, even if one takes [1] into account.
It's been a while since I've tinkered with wheezy's rsyslogd; try
replacing "stop" with "~". I.e. replace:

:syslogtag, startswith, "spamd" stop

with:

:syslogtag, startswith, "spamd" ~


> Thanks Reco.

You're welcome.


> > Consider adding logrotate configuration file for the new
> > /var/log/spamd.log.
> >
> > And, before you ask, documentation for rsyslogd lives in "rsyslog-doc"
> > package.
> 
> Synaptic says its installed, but its not on /usr/share?

It should be /usr/share/doc/rsyslog-doc.
I've made a habit of doing 'dpkg -L …' on newly installed packages.


> Ahh, found it but no mention of that exact syntax of :syslogtag

To put it simply, it's the thing that follows the hostname in your typical
syslog entry. It usually comes in the format "process_name[process_pid]".
In this case it's "spamd[4707]".

[1] https://www.rsyslog.com/?s=error+3000

Reco



Re: My wired connection doesn't work suddenly (Debian sid)

2018-08-26 Thread Reco
Hi.

On Sun, Aug 26, 2018 at 07:42:31AM -0300, Vinícius Couto wrote:
> Hi,
> 
> I use Debian sid and some weeks ago, a upgrade in some package caused a
> strange thing.
> 
> I use my computer, when my wired connection down. Sometimes only the IPv4,
> sometimes all the connections.

Yet another NetworkManager fault, most likely. The thing lives up to its
reputation once more.


> The symbol of the wired connection says that the connection is working.
> When i disable to use a wireless connection in the same network, it works.

Of all the ways to test a network connection, you just used the least
informative one. Please share the output of "ip a l" and "ip ro l" next
time this happens.


> When i use dmesg | grep r8169
> r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
> r8169 :02:00.0 eth0: RTL8168evl/8111evl at 0x(ptrval), ..., XID
> ... IRQ ..
> r8169 :02:00.0 eth0: jumbo features [frames: 9200 bytes, tx
> checksumming: ko]

Pretty normal.

> r8169 :02:00.0 enp2s0: renamed from eth0

That's a systemd thing.

> r8169 :02:00.0: firmware: direct-loading firmware rtl_nic/rtl8168e-3.fw

And here you've used a non-free blob, which is most likely completely
unnecessary. But it's unlikely that any of these are somehow related to
your problem.


> dmesg | grep enp2s0
> TCP: enp2s0: Driver has suspect GRO implementation, TCP performance may be
> compromised.

And that says that your NIC has a problematic Generic Receive Offload
implementation, yet the kernel enabled it anyway.
Try "ethtool -K enp2s0 gro off" if it bothers you, but it's unlikely
to have any visible effect.

> What is happening to my computer?

Network Manager. See above.

Reco



iproute, NM, customary Sunday rant (was: Re: My wired

2018-08-26 Thread Reco
connection doesn't work suddenly (Debian sid))
Reply-To: 
In-Reply-To: <201808260940.37925.ghesk...@shentel.net>

On Sun, Aug 26, 2018 at 09:40:37AM -0400, Gene Heskett wrote:
> On Sunday 26 August 2018 08:55:55 Reco wrote:
> > > The symbol of the wired connection says that the connection is
> > > working. When i disable to use a wireless connection in the same
> > > network, it works.
> >
> > Of all the ways to test network connection you just used the least
> > informative one. Please share the output of "ip a l" and "ip ro l"
> > next time this happens.
> >
> Why do I have to read a mailing list to learn how to use these  new ip 
> tools. Those two examples tell me more than than 20 readings of the man 
> page, thank you Reco.

You're welcome. It's how I learn stuff - commit to memory two or three
'use in all cases' invocations of the tool, and slowly learn the rest.
One can easily tell a good manpage from a bad one - a good manpage has
those two or three invocations in its EXAMPLES section. A bad manpage lacks
EXAMPLES. An abysmal manpage tells you the name of the tool, and that's
it (most of the binaries shipped with GNOME belong in this category).

iproute2 is an excellent, versatile toolset. Sadly, whoever wrote it
hated writing documentation, hence the manpages of questionable
quality.
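
For the record, the invocations I keep committed to memory are nothing
special (the comments are mine, and the old-tool equivalents are only
approximate):

ip a l     # addresses, roughly the old "ifconfig -a"
ip ro l    # routing table, roughly the old "route -n"
ip -s l    # per-interface counters
ip n       # neighbour/ARP table, roughly the old "arp -an"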


> > Network Manager. See above.
> 
> Rant mode ON
> 
> What I have never understood about N-M is why it tears down a perfectly 
> good, working connection, and spends 5 minutes trying to establish a new 
> one, and failing, leaving the poor user no way to ask a mailing list for 
> help. Thats unforgivable and unforgiven here.

They called the thing NetworkMangler for a reason. It's also known as
NotworkManager, and there's a reason for this too.


> Theres some keywords 
> (mentioned in the man page in obtuse language IIRC) to use in e-n-i to 
> tell N_M to keep its malicious hands off a given interface, but you have 
> to read between the lines with your logical superpowers to detect them. 

It's easily explained. NM is a RedHat project. More users = more
testing. The best testing is provided by paying customers, so modern
RHEL is unthinkable without NM (yes, it's possible to disable the thing,
and yes, they won't tell you how).

The Debian project has strived all these years to deviate from upstream as
little as possible, so in this case NM is forced down the throat of any
DE user, RedHat style.
Luckily here, in Debian, we have sid, and what's more important - those
poor souls who are willing to use sid on an everyday basis. That
includes the OP, and that's commendable to say the least - unearthing nasty
bugs so that we, stable users, won't have to.


> There now, but for the longest time removing its starter script via 
> chkconfig or removing N-M with the package manager, not possible a 
> decade ago without its dependencies tearing down the system so I used mc 
> for that, nuking the binaries.

While I prefer the "don't install what you don't need" approach, I fail to
see why a simple "apt-get purge network-manager" did not work for
you.
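
And, for completeness - if one would rather keep NM installed but away
from a particular NIC, something along these lines should work (a
sketch; "eth0" is obviously a placeholder for your interface name):

cat > /etc/NetworkManager/conf.d/unmanaged.conf << EOF
[keyfile]
unmanaged-devices=interface-name:eth0
EOF
service network-manager restart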

Reco



Re: problem with amd64 architecture

2018-08-26 Thread Reco
Hi.

On Sun, Aug 26, 2018 at 06:24:59PM +0200, Pierre Frenkiel wrote:
> hi,
>  get a amd64 deb package (franz, to name it), and converted it to i386,
>  using the recommended sequence:
> 
>dpkg-deb -x ../paquet_amd64.deb .
>dpkg -e ../paquet_amd64.deb
>in DEBIAN/control, replaced amd64 by i386
>dpkg-deb -b . ../paquet_i386.deb

So you haven't touched the contents of the package itself.


> but after installing the resulting .deb. I still get a amd64 binary.
> What did I miss?

See above. No magic here, the files provided by the package stayed the same.


> Is there a way to convert the binary itself to i386?

If the binary was produced by compiling conventional ASM/C/C++ source -
no. A rebuild from source is required.

Reco



Re: processing order for configuration files in /etc/network/interfaces.d

2018-08-27 Thread Reco
Hi.

On Mon, Aug 27, 2018 at 09:08:19AM -0400, Greg Wooledge wrote:
> > Hm. Interfaces man page refers to wordexp(3), but this one doesn't say
> > anything about sorted results
> 
> In the absence of such information, the best thing to conclude is that
> the order is unspecified.  It may be using the raw unsorted directory
> contents from readdir(3), or it may be starting them all in parallel
> threads, in which case the order will be nondeterministic.

wordexp(3) invokes glob(3).

glob(3) states that one *needs* to specify GLOB_NOSORT to get the resulting
pathnames in no particular order, as by default the result will be
sorted.

Unless I'm reading glibc source wrong, the only non-default argument
that wordexp(3) passes to glob(3) is GLOB_NOCHECK.
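
And since wordexp(3) performs shell-style expansion, a plain shell glob
should show the same order that ifupdown ends up seeing (a quick
illustration, nothing more):

echo /etc/network/interfaces.d/*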

Reco



Re: processing order for configuration files in /etc/network/interfaces.d

2018-08-27 Thread Reco
Hi.

On Mon, Aug 27, 2018 at 04:49:06PM +0200, to...@tuxteam.de wrote:
> On Mon, Aug 27, 2018 at 04:32:26PM +0300, Reco wrote:
> > Hi.
> > 
> > On Mon, Aug 27, 2018 at 09:08:19AM -0400, Greg Wooledge wrote:
> > > > Hm. Interfaces man page refers to wordexp(3), but this one doesn't say
> > > > anything about sorted results
> > > 
> > > In the absence of such information, the best thing to conclude is that
> > > the order is unspecified.  It may be using the raw unsorted directory
> > > contents from readdir(3), or it may be starting them all in parallel
> > > threads, in which case the order will be nondeterministic.
> > 
> > wordexp(3) invokes glob(3).
> > 
> > glob(3) states that one *needs* to specify GLOB_NOSORT to get resultes
> > pathnames in no particular order, as by default the result will be
> > sorted.
> > 
> > Unless I'm reading glibc source wrong, the only non-default argument
> > that wordexp(3) passes to glob(3) is GLOB_NOCHECK.
> 
> Thanks for looking into the source.
> 
> The remaining problem is, since the doc seems pretty fuzzy about that,
> whether one can rely on this behaviour, or whether this is just an
> implementation detail which can change under one at any time.

Assuming that glibc stays true to POSIX, 2001 standard - [1] says that:

The wordexp() function shall store the number of generated words into
pwordexp->we_wordc and a pointer to a list of pointers to words in
pwordexp->we_wordv. Each individual field created during field splitting
(see the Shell and Utilities volume of IEEE Std 1003.1-2001, Section
2.6.5, Field Splitting) or pathname expansion (see the Shell and
Utilities volume of IEEE Std 1003.1-2001, Section 2.6.6, Pathname
Expansion) shall be a separate word in the pwordexp->we_wordv list. The
words shall be in order as described in the Shell and Utilities volume
of IEEE Std 1003.1-2001, Section 2.6, Word Expansions.

The last sentence says to me that wordexp output should always be sorted.

Latest edition I could find - 2017 standard - [2] contains similar
wording.

Reco

[1] http://pubs.opengroup.org/onlinepubs/009695299/functions/wordexp.html

[2] http://pubs.opengroup.org/onlinepubs/9699919799/



Re: processing order for configuration files in /etc/network/interfaces.d

2018-08-27 Thread Reco
Hi.

On Mon, Aug 27, 2018 at 12:01:23PM -0400, Greg Wooledge wrote:
> On Mon, Aug 27, 2018 at 06:41:25PM +0300, Reco wrote:
> > Last sentence says to me that wordexp output should be always sorted.
> 
> This only tells us that it *reads* the config files in glob-sorted order.
> And peeking into the actual source code of ifupdown, yes, it appears to
> do this.  (File config.c starting at line 436, in the stretch source.)
> It even uses wordexp() as advertised.
> 
> What's less clear to me at this moment is what happens *after* the
> interface configuration file(s) have been read into memory.
> 
> Moving over to main.c, it looks like it reads the interfaces (line 639),
> and that function was called from line 1190.  After line 1190, it loops
> over the "target_iface" array, and processes them in the order they appear
> in that array.
> 
> So... next question is how they get packed into that array.
> 
> Returning to the select_interfaces function, it looks like the ordering
> is created by a call to find_allowup.
> 
> And... this is where I stopped reading, because it got confusing.  Maybe
> someone else can take a stab at deciphering that part.

select_interfaces() is called on "ifup -a" invocation.
read_interfaces() is called by it.
read_interfaces() calls read_interfaces_defn().
read_interfaces_defn() parses "iface" and "source" directives, and calls
read_interfaces_defn() once again on each file specified by "source" in
order defined by wordexp(3).

Therefore defn[] array should be filled in this order (assuming that one
specifies "source" only at /e/n/i):

1) any interface definition at /e/n/i in order encountered, until
"source" directive.
2) any interface definition provided by "source" directive in order
defined by wordexp(3).
3) any interface definition at /e/n/i in order encountered, after the
"source" directive.

I'd use ltrace(1) to check it, but building a test environment is
something that I lack the time to do.
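
To illustrate with a made-up /e/n/i (hypothetical interface names, of
course):

auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*

auto eth1
iface eth1 inet dhcp

If I'm right, lo comes first, then whatever interfaces.d provides in
wordexp(3) order, then eth1.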

Reco



Re: processing order for configuration files in /etc/network/interfaces.d

2018-08-27 Thread Reco
Hi.

On Mon, Aug 27, 2018 at 11:46:37AM -0500, David Wright wrote:
> On Mon 27 Aug 2018 at 19:24:01 (+0300), Reco wrote:
> > Hi.
> > 
> > On Mon, Aug 27, 2018 at 12:01:23PM -0400, Greg Wooledge wrote:
> > > On Mon, Aug 27, 2018 at 06:41:25PM +0300, Reco wrote:
> > > > Last sentence says to me that wordexp output should be always sorted.
> > > 
> > > This only tells us that it *reads* the config files in glob-sorted order.
> > > And peeking into the actual source code of ifupdown, yes, it appears to
> > > do this.  (File config.c starting at line 436, in the stretch source.)
> > > It even uses wordexp() as advertised.
> > > 
> > > What's less clear to me at this moment is what happens *after* the
> > > interface configuration file(s) have been read into memory.
> > > 
> > > Moving over to main.c, it looks like it reads the interfaces (line 639),
> > > and that function was called from line 1190.  After line 1190, it loops
> > > over the "target_iface" array, and processes them in the order they appear
> > > in that array.
> > > 
> > > So... next question is how they get packed into that array.
> > > 
> > > Returning to the select_interfaces function, it looks like the ordering
> > > is created by a call to find_allowup.
> > > 
> > > And... this is where I stopped reading, because it got confusing.  Maybe
> > > someone else can take a stab at deciphering that part.
> > 
> > select_interfaces() is called on "ifup -a" invocation.
> > read_interfaces() is called by it.
> > read_interfaces() calls read_interfaces_defn().
> > read_interfaces_defn() parses "iface" and "source" directives, and calls
> > read_interfaces_defn() once again on each file specified by "source" in
> > order defined by wordexp(3).
> > 
> > Therefore defn[] array should be filled in this order (assuming that one
> > specifies "source" only at /e/n/i):
> > 
> > 1) any interface definition at /e/n/i in order encountered, until
> > "source" directive.
> > 2) any interface definition provided by "source" directive in order
> > defined by wordexp(3).
> > 3) any interface definition at /e/n/i in order encountered, after the
> > "source" directive.
> > 
> > I'd use ltrace(1) to check it, but building test environment is
> > something that I lack the time to do.
> 
> That's still the order in which the stanzas are read. Don't we now
> need to know what the contents of each stanza is, ie is it an "auto"
> or an "allow-hotplug" stanza, etc?

I simplified things. After defn[] is filled at read_interfaces(),
find_allowup() is called, defn[] being the first argument.
find_allowup() iterates on defn[], using a pointer to defn->allowups.
defn->allowups, unless I'm reading it wrong, is filled by the same
read_interfaces_defn(), in the same order that /e/n/i and "source" are
evaluated.
This filling is done by add_allow_up() each time one specifies "auto" or
"allow-" stanzas.

Reco



Re: linux 4.19 scsi-mq default?

2018-08-27 Thread Reco
Hi.

On Sun, Aug 26, 2018 at 07:47:54PM -0400, Boyan Penkov wrote:
> Hello folks,
> 
> Likely not the exact right list to ask, but likely someone here knows — did 
> scsi-mq by default make it into linux 4.19?
> 
> http://lkml.iu.edu/hypermail/linux/kernel/1807.0/02224.htm

4.19-rc1 has SCSI_MQ_DEFAULT defined as "y" - [1].
Whether they build it that way in Debian remains to be seen.
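
Once it lands you can check what the Debian kernel was built with, and
override the default at boot regardless (a sketch; scsi_mod.use_blk_mq
has been around for a while, no promises they keep it forever):

grep SCSI_MQ_DEFAULT /boot/config-$(uname -r)
# to force it either way, add scsi_mod.use_blk_mq=1 (or =0)
# to the kernel command line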

Reco

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/scsi/Kconfig?h=v4.19-rc1



Re: programs freeze in /

2018-08-27 Thread Reco
Hi.

On Tue, Aug 28, 2018 at 03:57:42PM +1000, Gary Hodder wrote:
> Hi all,
> In midnight commander if I go to the / directory mc freezes.
> This also happens in leafpad the cursor just stays spinning and nothing
> happens.
> Both mc and leathpad were started from a root console.
> I have 2 machine both on 9.5 and both do the same.
> Is there a fix for this?

Sharing the result of "strace ls /" and "dmesg | tail -100" (should be
executed in this order) would be a good place to start.
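
If the output turns out to be long, dumping it into files makes it
easier to share:

strace -o /tmp/ls-slash.txt ls /
dmesg | tail -100 > /tmp/dmesg-tail.txt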

Reco



Re: Network issue

2018-08-29 Thread Reco
Hi.

Please do not top post. This is a mailing list, not a corporate e-mail
spamfest.

On Wed, Aug 29, 2018 at 01:18:52PM -0500, Nicholas Geovanis wrote:
> Hi, I've never used OVH. How certain are you that the e1000 network driver
> is the correct one?
> Under VMWare/ESX the network driver choice can be crucial, for example.

1) You cannot run Xen in VSphere/ESXi.

2) No sane public provider will use VSphere/ESXi for hosting, the costs
could rival the budget of a small country.

3) e1000e may be bad, but vmxnet3 will make the oom-killer an everyday
reality. SR-IOV is the way to go.


Now, to the issue at hand.

> On Wed, Aug 29, 2018 at 6:30 AM Kevin DAGNEAUX 
> wrote:

Please note that you seem to have "link down" first:

> > Aug 28 15:50:32 ovh-1 kernel: e1000e: enp1s0 NIC Link is Down

and 'malfunctioning' netfilter rules next.

> > Aug 28 15:50:32 ovh-1 kernel: DROPED packets IN=enp1s0 OUT=
> > MAC=ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ:ZZ SRC=YY.YY.YY.YY
> > DST=XX.XX.XX.XX LEN=40 TOS=0x00 PREC=0x00 TTL=55 ID=2229 DF PROTO=TCP
> > SPT=9610 DPT=80 WINDOW=0 RES=0x00 RST URGP=0

Observe the 'RST' flag on each and every 'DROPPED' message.
I find it highly unlikely that whatever PHP load test you did involved
sending massive amounts of TCP Reset packets.

It seems that the following scenario has much higher probability:

1) You fired your load generator application.

2) Your hosting provider immediately got a signal of a typical DOS
attack (not to be confused with DDOS) coming from what seems to be a
typical mobile phone.

3) DDOS protection kicked in:

a) Isolating your server from the network to stop DOS.

b) Sending forged TCP RSTs to your server to break existing connections
*and* terminate unneeded Apache workers (or whatever you have there).

c) Banning the initiator of DOS (i.e. you on mobile network) temporarily.

4) Real network outage of your server was 2 seconds (time between "link
down" and "link up").

Reco

> 
> 
> > Hi,
> >
> > I've a server in OVH datacenter, on this server i've 7 VMs, on 1 of them
> > in run Apache.
> > To debug a slow upload (who was ~2Mo/s instead 12Mo/s) i've installed an
> > HTML5/PHP speed test application.
> > When i use this app, i've no problem in general, but, when a make a speed
> > test from a source who have more bandwith than the server (the server is
> > limited at 100Mb/s by OVH and i make the test from a 4G+ network where i've
> > ~150Mb/s of bandwith), in this case, the DOM0 lost his network connection
> > (like the ethernet cable is unplugged) until i reboot the server.
> >
> > When i check the syslog of DOM0, i see that iptables drop incomming packet
> > on port 80 instead of routing them to the VM.
> >
> > This is my iptables script i use on DOM0 :
> >
> > #!/bin/bash
> >
> > IPT="/sbin/iptables"
> >
> >
> > ###
> > # Filter
> >
> > ## Remise par defaut des regles
> > $IPT -t filter -P INPUT   ACCEPT
> > $IPT -t filter -P FORWARD ACCEPT
> > $IPT -t filter -P OUTPUT  ACCEPT
> >
> > ## On purge les tables
> > $IPT -t filter -F
> >
> > ## On autorise lo
> > $IPT -t filter -A INPUT -i lo -j ACCEPT
> >
> > ## On ouvre les ports nécéssaires au DOM0
> > $IPT -t filter -A INPUT -m tcp -p tcp --dport 22  -j
> > ACCEPT ## SSH
> > $IPT -t filter -A INPUT -m udp -p udp --dport 53  -j
> > ACCEPT ## DNS
> > $IPT -t filter -A INPUT -m icmp -p icmp --icmp-type 8 -j
> > ACCEPT ## Ping
> > $IPT -t filter -A INPUT -s 10.0.0.0/24 -j ACCEPT
> >
> > ## On accepte si la connexion est déjà établie
> > $IPT -t filter -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
> >
> > ## On log ce qui n'a pas été matché par les règles précédente
> > $IPT -A INPUT -p tcp -j LOG --log-prefix "DROPED packets "
> >
> > ## On bloque tout le reste
> > $IPT -t filter -P INPUT DROP
> >
> >
> > 
> > # Nat
> >
> > ## Remise par defaut des regles
> > $IPT -t nat -P PREROUTING  ACCEPT
> > $IPT -t nat -P POSTROUTING ACCEPT
> > $IPT -t nat -P INPUT   ACCEPT
> > $IPT -t nat -P OUTPUT  ACCEPT
> >
> > ## On purge
> > $IPT -t nat -F
> >
> > ### Routage des ports entrants pour la VM "mails"
> > $IPT -t nat -A PRERO

Re: Network issue

2018-08-29 Thread Reco
Hi.

On Wed, Aug 29, 2018 at 03:37:52PM -0500, Nicholas Geovanis wrote:
> On Wed, Aug 29, 2018 at 2:00 PM Reco  wrote:
> > On Wed, Aug 29, 2018 at 01:18:52PM -0500, Nicholas Geovanis wrote:
> > > Hi, I've never used OVH. How certain are you that the e1000 network driver
> > > is the correct one?
> > > Under VMWare/ESX the network driver choice can be crucial, for example.
> >
> > 1) You cannot run Xen in VSphere/ESXi.
> 
> I didn't suggest that one could. Only that network driver choice is
> crucial to network performance in a virtual setting. Hence I wrote
> "For example".

And OP is running Xen. So this means real hardware.

Reco



Re: Network issue

2018-08-29 Thread Reco
Hi.

On Wed, Aug 29, 2018 at 06:43:00PM -0400, Michael Stone wrote:
> On Wed, Aug 29, 2018 at 09:58:47PM +0300, Reco wrote:
> > 3) e1000e may be bad, but vmxnet3 will make oom-killer an everyday
> > reality. SR-IOV is a way to go.
> 
> everyday? really? when did you last try this?

Saw the problem with my own eyes yesterday.

A relatively low amount of free RAM (like 64M out of 16G) and an oom killer
stack trace containing tcp_sendmsg + some vmxnet3 NICs.

Those RHEL kernels are outright broken when it comes to memory
management.

Reco



Re: Network issue

2018-08-30 Thread Reco
Hi.

On Thu, Aug 30, 2018 at 06:38:50AM -0400, Michael Stone wrote:
> On Thu, Aug 30, 2018 at 07:32:24AM +0300, Reco wrote:
> > On Wed, Aug 29, 2018 at 06:43:00PM -0400, Michael Stone wrote:
> > > On Wed, Aug 29, 2018 at 09:58:47PM +0300, Reco wrote:
> > > > 3) e1000e may be bad, but vmxnet3 will make oom-killer an everyday
> > > > reality. SR-IOV is a way to go.
> > > 
> > > everyday? really? when did you last try this?
> > 
> > Saw the problem with my own eyes yesterday.
> > 
> > Relatively low amount of free RAM (like 64M out of 16G) and a oom killer
> > stack trace containing tcp_sendmsg + some vmxnet3 NICs.
> > 
> > Those RHEL kernels are outright broken then it comes to memory
> > management.
> 
> Well, this is a debian list...

So? The kernel is a kernel, even if it's a RedHat fork.

Besides, I did not bring VMWare into this thread. In fact, VMWare has
nothing to do with OPs problem.

Reco



Re: Security issue ... please could someone help !!!

2020-04-05 Thread Reco
Hi.

On Sun, Apr 05, 2020 at 09:03:00PM +0100, Bhasker C V wrote:
> I kept digging down and saw that anything below 32 bytes is not accepted
> (by cryptsetup --key-file option) but anything above 32 bytes is
> discarded.

cryptsetup(8), "-s" option.


> Does this mean that cryptsetup plain with --key-file uses
> only 32 bytes ?

Yes, assuming the defaults.


> Am I doing anything wrong ?

Probably not.

By default cryptsetup uses the AES encryption algorithm with a key size of
256 bits. You're supplying your own key to cryptsetup, hence it takes
just the right amount of bytes from it (32 bytes = 256 bits).
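
If you want the whole of a longer keyfile to matter, ask for a bigger
key explicitly, e.g. (a sketch, not a recommendation - check that the
cipher/key size combination suits you, and the device name is a
placeholder):

cryptsetup open --type plain -c aes-xts-plain64 -s 512 --key-file /path/to/key /dev/sdXN secret

With -s 512 and aes-xts-plain64 the first 64 bytes of the key file are
used.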


> If only 32 bytes are used, it is (in my opinion) not so much secure
> isnt it  ?

It's sufficiently secure, unless you try to do something really wrong
(like storing a plain key somewhere), or generate your key predictably.

Reco



Re: advisable to use installer script?

2020-04-06 Thread Reco
Hi.

On Mon, Apr 06, 2020 at 09:43:48AM -0500, Anil F Duggirala wrote:
> hello,
> I am looking to install Anaconda in my machine, Debian Buster. There
> suggested installation method is using an installer that is downloaded
> from their site.

I'm assuming it's this one: https://www.anaconda.com/distribution/#linux
Somehow I find it unlikely that you want to install RHEL's installer
(called anaconda too) on Debian.


> Is it advisable to install software in this manner?

No more and no less than running an arbitrary binary from the nearest warez
dump. I.e. there are a few cases where you can probably get away with it,
but you're taking a risk by doing it.


> Can anyone give a broad idea of what this install script does?

Extracts a HUEG tar.bz2 archive full of (presumably) Python and R
modules of unknown quality. Why anyone (short of a Windoze user) would
willingly use this anaconda given the existence of pip and CRAN is
beyond me.
In Debian we have apt for this, and I kindly suggest you consider
using it instead.


> Does make changes to my system files?

Unlikely, the script *seems* to be limited to a user-writable $PREFIX.
A quick look at the script contents hasn't revealed anything fishy.
But I haven't run it, and do not intend to.

At most you're risking running some cryptominer (with user privileges)
or having the contents of your $HOME stolen or damaged. Just do not run
the thing as root.

Reco



Re: flatpak and root access

2020-04-06 Thread Reco
Hi.

On Mon, Apr 06, 2020 at 12:00:18PM -0500, Anil F Duggirala wrote:
> hello,
> I know there have been some security concerns with flatpak, which are
> too high level for me to understand,

It's simple, and security is just a part of a bigger problem here.
The very purpose of flatpak is to enable the user to run untrusted
software (i.e. software not obtained by the usual OS means).
So, for instance, if the author of the software wants their software to
perform "telemetry" - they just do it and their users will "enjoy" it.
A good software maintainer will just patch the offensive functions out,
because such a privacy violation is a legitimate cause for a bug report in
Debian (and yes, those *did* happen).
Likewise, flatpak by itself cannot do anything against a cryptominer
"helpfully" "bundled" with the software.


> but I want to ask, is it normal
> for flatpak to ask for the root password when installing a new package?

For so-called "system install" - yes, it's normal.
The reason for this being that "system" installed flatpaks expose their
binaries in /var/lib/flatpak/exports/bin, which is not user-writable.
For so-called "user install" - i.e. inside your $HOME, no it's not.


> Are these packages not supposed to be sandboxed?

It's rather that you have a different definition of "sandboxing" than the
flatpak authors. For them it's important to restrict access to the $HOME
files for anything that's running via flatpak (among other things).
Whatever collateral damage they do to the filesystem is usually limited to
/var/lib/flatpak.

Reco



Re: advisable to use installer script?

2020-04-06 Thread Reco
On Mon, Apr 06, 2020 at 08:49:53PM +0200, Alex Mestiashvili wrote:
> Regarding Python and R modules of unknown quality. What quality?

My question exactly. Who built it? From which source? What toolchain was
in use? How can I build the same in a reproducible way? What else was
bundled along the way? What about upgrading and deleting a module
(installing is always the easiest part)?


> Debian doesn't magically make any python module better or safer.

Yet it's known which source was used, which toolchain was there and
there are guarantees that the module in question does not change its
behaviour in the next five minutes.
Oh, and there are distribution-specific patches which *do* make packages
better and safer, python included. And, what's most important here -
compatible with *other* packages.


> Debian just packages a python module provided by upstream and can
> possibly provide some additional patches and support.

Nope, see above. Building a distribution is an engineering task more
complex than you seem to think it is.


> There are pros and cons for both apt and conda, but it totally depends
> on the use case.

Sure. On apt's side there's a unified way to install/upgrade/delete
anything, and on conda's side there's turning your system into
Slackware.


> So in general it is totally fine to use anaconda installer.

I agree. They call Debian the Universal OS because it can take an
impressive amount of such punishment from a determined user *and*
remain operational to a certain degree.
And it hardly matters whether the offending "tool" is called conda,
pip or docker.

Reco



Re: advisable to use installer script?

2020-04-06 Thread Reco
On Mon, Apr 06, 2020 at 10:49:13PM +0200, Alex Mestiashvili wrote:
> On 4/6/20 9:33 PM, Reco wrote:
> > On Mon, Apr 06, 2020 at 08:49:53PM +0200, Alex Mestiashvili wrote:
> >> Regarding Python and R modules of unknown quality. What quality?
> > 
> > My question exactly. Who build it? From which source? What toolchain was
> > in use? How can I build the same in a reproduceable way? What else was
> > bundled along the way? What about upgrading and deleting a module
> > (installing is always the easiest part)?
> 
> R packages and python modules as everything else packageable for Debian
> comes as source code, so how it is build is up to you and tools you use.

You don't get it. A python module may come with C source code to build an
ELF library (what they call an FFI).
The contents of the built library may differ even between two successive
builds *unless* the library is built in a very specific way.
The same goes for executables, but python modules do not usually ship
those.

What's the point of having the source code if one cannot verify that the
library is built from that source?

> Even more, binary packages might be suboptimal compared to locally built
> ones.

You're using the wrong distribution then. I'd suggest something like
Gentoo, but it's dying. I don't know, LFS maybe?

> >> Debian doesn't magically make any python module better or safer.
> > 
> > Yet it's known which source was used, which toolchain was there and
> > there are guarantees that the module in question does not change its
> > behaviour in the next five minutes.
> 
> There is no problem to track all the above with most open source
> projects.

That's the attitude I see a lot these days. I could not care less about
"open source" (i.e. you can see but you cannot touch).
I care only about "free software" (as in freedom), which, specifically,
allows me to modify anything for any purpose, and to do so in a meaningful
way.
The first one is a popular gimmick. The second one actually gives users
control.

> It's open source, nobody prevents you from checking every bit.

The best thing about Debian is that I usually don't have to, for their packages.

> >> Debian just packages a python module provided by upstream and can
> >> possibly provide some additional patches and support.
> > 
> > Nope, see above. Building a distribution is an engineering task more
> > complex than you seem to think it is.
> 
> I guess we are talking about different things, people are asking not
> about adopting dpkg for their linux from scratch, but about installing a
> software. Most users don't care about 90% of the stuff you mentioned.

Most users do not care about many things. These include, but aren't
limited to their own security, privacy, convenience, or, btw, getting
their job done in a meaningful amount of time.
But it was always that way, and I fail to see any problem here.

But I was under the impression that we're talking about software
developers here (python and R are developers' turf). Are you saying that
developers do not care about aforementioned things?

> The only thing they care about is working software. And even not the
> software, but the goal they solve with it. Software is a tool. And they
> are not interested in the internals.

And this is why it's important to have good tools, not flawed ones.

> >> There are pros and cons for both apt and conda, but it totally depends
> >> on the use case.
> > 
> > Sure. On apt's side there's unified way to install/upgrade/delete
> > anything, and on conda's side there's turning your system into
> > Slackware.
> 
> I am not convinced. I don't use conda but I am pretty sure it can do all
> above and even more. It's just different and has it's own strong sides.

I've yet to see any.

> Btw, are you aware that gitlab instance for salsa.debian.org is not
> using packaged gitlab?

No, but what difference does it make in this discussion?

> There are many softwares which simply don't fit into Debian's paradigm
> of a packaging.

True. For instance, software whose only strong side is how fast it was
written. Or software that dons the hat of "open source" while being
proprietary in fact (for instance, refusing to accept patches that
re-implement already existing proprietary bits).

> But nevertheless they are useful and open source.

See above. 

> >> So in general it is totally fine to use anaconda installer.
> > 
> > I agree. They call the Debian the Universal OS because it can take an
> > impressive amount of such punishment from the determined user *and*
> > remain operational to a certain degree.
> > And it's hardly matters whenever the offending "tool" is called conda,
> > pip or docker.
> 
> Don't forget cpan,

They take care of it with dh_cpan here.

> rvm,

Ruby's dead.

> maven

An abomination, plain and simple, and a needless reimplementation of the
wheel. But most Java shovelware falls here, so it's no surprise.
The good news is that Oracle is suffocating Java, so let's leave maven to
the dull enterprisey world.

Reco



Re: Microcom; What's this Script Feature?

2020-04-07 Thread Reco
Hi.

On Tue, Apr 07, 2020 at 08:08:26AM -0500, Martin McCormick wrote:
>   Everything seems to work as far as I can tell but what
> does a script look like?

Judging from the source, it should open a text file on your side (i.e.
"x filename") and feed its contents line by line to the other side.
So whatever you write in the file is specific to the device you're
connecting to.

Reco



Re: geolocation services disabled and Gnome maps

2020-04-10 Thread Reco
Hi.

On Fri, Apr 10, 2020 at 08:24:41AM -0500, Anil Felipe Duggirala wrote:
> On Thu, Apr 9, 2020, at 11:16 AM, John Hasler wrote:
> > It's just looking up your IP.  The method isn't reliable (it usually
> > puts me on the other side of the state) but it works more often than
> > not.
>
> I don't believe this is the case.

The software behaviour does not depend on one's beliefs.

$ apt show gnome-maps | grep Dep
Depends: ... libgeocode-glib0 (>= 3.16.2) ...

$ apt show libgeocode-glib0 | grep ^Desc
Description: geocoding and reverse geocoding GLib library using Nominatim

And the source of geocode-glib shows the actual server they're using:

GeocodeNominatim *
geocode_nominatim_get_gnome (void)
{
GeocodeNominatim *backend;

G_LOCK (backend_nominatim_gnome_lock);
backend = g_weak_ref_get (&backend_nominatim_gnome);
if (backend == NULL) {
backend = geocode_nominatim_new ("https://nominatim.gnome.org",
 "zeesha...@gnome.org");
g_weak_ref_set (&backend_nominatim_gnome, backend);
}
G_UNLOCK (backend_nominatim_gnome_lock);

return backend;
}


> Is there any way I could check to see exactly where Gnome Maps is getting the 
> location from?

Being GNOME software? The source is the only way to be sure.
I'd check tcp:443 connections to 8.43.85.23.
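
Something like this should show whether such a connection is currently
established, and which process holds it:

ss -tnp dst 8.43.85.23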


> What is the default geolocation service installed by Gnome or Debian?

That depends on your definition of "default Debian install". For
instance, the last time I used netboot I got no such service.

Reco



Re: geolocation services disabled and Gnome maps

2020-04-10 Thread Reco
On Fri, Apr 10, 2020 at 03:14:33PM -, Curt wrote:
> On 2020-04-10, Reco  wrote:
> >
> > The software behaviour does not depend on one's beliefs.
> 
> It does and can quite often depend on *user configuration*, though, and the 
> OP I
> believe has informed us he has *turned off* geolocation services.

And GNOME Maps has this neat library as a dependency that can use
geolocation regardless of the said setting.

Reco



Re: geolocation services disabled and Gnome maps

2020-04-10 Thread Reco
On Fri, Apr 10, 2020 at 03:35:01PM -, Curt wrote:
> On 2020-04-10, Reco  wrote:
> > On Fri, Apr 10, 2020 at 03:14:33PM -, Curt wrote:
> >> On 2020-04-10, Reco  wrote:
> >> >
> >> > The software behaviour does not depend on one's beliefs.
> >> 
> >> It does and can quite often depend on *user configuration*, though, and 
> >> the OP I
> >> believe has informed us he has *turned off* geolocation services.
> >
> > And GNOME Maps has this neat library as a dependency that can use
> > geolocation regardless of the said setting.
> 
> So you're saying that Gnome Maps *uses* the geolocation library even in
> the case of a user who has explicitly turned that "feature" off in his
> privacy settings, in blatant disregard of those settings?
> 
> That is really an egregious bug, then, and should be reported. 

I'm saying that it can. Too lazy to dig into the Javascript that GNOME
Maps is written in. It explains the behaviour the OP's seeing, IMO.

Reco



Re: geolocation services disabled and Gnome maps

2020-04-11 Thread Reco
Hi.

On Sat, Apr 11, 2020 at 09:28:51AM -0500, Anil F Duggirala wrote:
> On Fri, 2020-04-10 at 17:51 +0300, Reco wrote:
> > Hi.
> > 
> > On Fri, Apr 10, 2020 at 08:24:41AM -0500, Anil Felipe Duggirala
> > wrote:
> > > On Thu, Apr 9, 2020, at 11:16 AM, John Hasler wrote:
> > > > It's just looking up your IP.  The method isn't reliable (it
> > > > usually
> > > > puts me on the other side of the state) but it works more often
> > > > than
> > > > not.
> > > 
> > > I don't believe this is the case.
> > 
> > The software behaviour does not depend on one's beliefs.
> > 
> > $ apt show gnome-maps | grep Dep
> > Depends: ... libgeocode-glib0 (>= 3.16.2) ...
> > 
> > $ apt-show libgeocode-glib0 | grep ^Desc
> > Description: geocoding and reverse geocoding GLib library using
> > Nominatim
> > 
> > And the source of geocode-glib shows the actual server they're using:
> > 
> > GeocodeNominatim *
> > geocode_nominatim_get_gnome (void)
> > {
> > GeocodeNominatim *backend;
> > 
> > G_LOCK (backend_nominatim_gnome_lock);
> > backend = g_weak_ref_get (&backend_nominatim_gnome);
> > if (backend == NULL) {
> > backend = geocode_nominatim_new ("https://nominatim.gnome.org
> > ",
> >  "zeesha...@gnome.org");
> > g_weak_ref_set (&backend_nominatim_gnome, backend);
> > }
> > G_UNLOCK (backend_nominatim_gnome_lock);
> > 
> > return backend;
> > }
>
> Could you tell me if this code, by connecting to this service is
> getting my location simply by using my IP address?

Answering "yes" here would be a gross oversimplification.
Answering "no" here would be a deviation from the truth.

I'd say it this way:

1) Your instance of GNOME Maps connects to nominatim.gnome.org by using
https. Your local IP address does not matter here, as long as the
connection gets established.

2) Due to the way your home network is made (it's called NAT), nobody
sees your computer's IP but your router (or the router provided to you by
your ISP).

3) What nominatim.gnome.org is seeing is the IP that's used by your
ISP for outbound connections on your behalf. It may be the same IP
that your router is using for its WAN port, or it may be different.

4) If you're interested in what your outbound IP is - I suggest you use
some Internet service like https://www.whatismyip.org/ .

The IP from pt 3 is enough to pinpoint your location with country
precision at worst and city precision at best, and by the very definition
of what's happening here this information is available both to you (via
GNOME Maps) and to nominatim.gnome.org.

I do not use GNOME Maps so I cannot comment on whether it's possible to
disable this feature without rebuilding GNOME Maps from the source.


> > > Is there any way I could check to see exactly where Gnome Maps is
> > > getting the location from?
> > 
> > Being the GNOME software? The source is the only way to get sure.
> > I'd check tcp:443 connections to 8.43.85.23.
> > 
> There is a connection to that IP address and it starts when I open
> Gnome Maps (I think it connects to a different port though, Im a newbie
> though)

It's a so-called "ephemeral" port that's used on your side for the TCP
connection, and it's the usual thing.

What's important here is that it confirms that GNOME Maps is using the
aforementioned library for the purpose of establishing your location
regardless of the GNOME privacy setting.

Personally I see it as a privacy violation, but someone may see such a
(mis)feature as a crucial function of GNOME Maps. In any case, a bug
report with the severity of "minor" cannot do any harm, so I suggest
you install the "reportbug-gtk" package and report your findings against
the "gnome-maps" package to Debian's bugtracker [1].

Reco

[1] https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=gnome-maps



Debian is testing Discourse

2020-04-12 Thread Reco
Dear list,

[1] came to my attention today. To quote relevant parts:

What about the mailing lists?
  This may or may not be a replacement for any particular list. I
suspect there are some that would benefit greatly from having Discourse
be the primary interaction, and other places where this would be less
suitable.

Be specific!
  Ok... I think *debian-user*, debian-vote and possibly debian-project
would be better off in Discourse. I think debian-devel-announce should
stay as an email list (for now). However, I am not suddenly proposing
that we shut those lists down. The aim of this exercise is to see if
Discourse would work well for us.

Email is still important to me!
  Fine, you can interact with Discourse by email rather than the web
interface. It should be noted however, that there is *not 1:1 feature
parity* with email and the web interface, as Discourse does things that
can't easily be done with email. For the majority of users though, email
interaction should be "good enough".

Why are you doing this?
  I have two motivations. First, is *moderation*. Discourse has built in
tools to allow community moderation on a much better scale than our
email lists.  Secondly, I genuinely believe that ease of access to new
contributors is of paramount importance to the project.


So, thoughts, options?

Reco

[1] https://lists.debian.org/debian-project/2020/04/msg00074.html



Re: Debian is testing Discourse

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 10:57:05AM +0200, to...@tuxteam.de wrote:
> > Email is still important to me!
> >   Fine, you can interact with Discorse by email rather than the web
> > interface. It should be noted however, that there is *not 1:1 feature
> > partiy* with email and the web interface, as Discorse does things that
> > can't easily be done with email. For the majority of users though, email
> > interaction should be "good enough".
> 
> I have the honor to be mail participant in a discourse forum.
> 
> I don't like it *at all*.

About the biggest gripe I have with Discourse is its inability to
function without Javascript, even for simple reading.
But I don't participate in forums (fora? whatever) anymore, so I have a
limited view here.


> It's not that "Discorse does things that > can't easily be done with
> email" -- this is *framing*. Discourse makes things which were easy
> by email practically impossible (like, for example, following up
> something in private).

The million euro question here is how good (or bad) that
"e-mail interaction with Discourse" actually is.

Reco



Re: Debian is testing Discourse

2020-04-12 Thread Reco
On Sun, Apr 12, 2020 at 11:10:22AM +0200, to...@tuxteam.de wrote:
> On Sun, Apr 12, 2020 at 12:03:12PM +0300, Reco wrote:
> > Hi.
> > 
> > On Sun, Apr 12, 2020 at 10:57:05AM +0200, to...@tuxteam.de wrote:
> > > > Email is still important to me!
> > > >   Fine, you can interact with Discorse by email rather than the web
> > > > interface [...]
> 
> "...but we'll make your life miserable for that".

Yuck. So, worst case scenario, in several years debian-user will be dead
as we know it. That's unsettling, to say the least.

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-12 Thread Reco
On Sun, Apr 12, 2020 at 12:10:45PM +0200, to...@tuxteam.de wrote:
> That's why I cringe at the idea that browsers want to start doing
> name resolution over HTTPS.

This simple one line of dnsmasq configuration will disable this
problematic feature for good for Firefox (basically it creates a bogus
NXDOMAIN response for this particular site):

local=/use-application-dns.net/

Chromium does not do it *yet*, but I'll implement something akin to the
previous once it'll get there.

So, as long as you control your network - it does not concern you.
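
For the record, applying it on Debian is a one-liner too (assuming the
packaged dnsmasq, which reads /etc/dnsmasq.d by default):

echo 'local=/use-application-dns.net/' > /etc/dnsmasq.d/no-doh.conf
service dnsmasq restart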

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-12 Thread Reco
On Sun, Apr 12, 2020 at 12:35:44PM +0200, to...@tuxteam.de wrote:
> On Sun, Apr 12, 2020 at 01:21:08PM +0300, Reco wrote:
> > On Sun, Apr 12, 2020 at 12:10:45PM +0200, to...@tuxteam.de wrote:
> > > That's why I cringe at the idea that browsers want to start doing
> > > name resolution over HTTPS.
> > 
> > This simple one line of dnsmasq configuration will disable this
> > problematic feature for good for Firefox (basically it creates a bogus
> > NXDOMAIN response for this particular site):
> > 
> > local=/use-application-dns.net/
> 
> I don't quite understand [1] how the dnsmasq config has a say on
> whether the browser resolves things over HTTP (it won't ask the
> resolver in the first place, would it?), but thanks for the pointer
> anyway.
> 
> Cheers
> [1] That's not a rhethorical flourish, it's genuine. I know too
>little about DNS-over-HTTP to be of any use at this point.

The questionable idea behind DOH is that the browser makers do not trust
your local resolver. As usual, main arguments here are:

1) One can use a local resolver with the ability *not* to resolve
certain DNS queries, which refer to the sites which just happen to
contain advertisements, fingerprinting, tracking, cryptomining etc.
Since both major browser makers (Google and Mozilla) happen to rely
on revenue generated by advertising *and* users' browsing habits, this
obviously cannot be tolerated.

2) ISPs can intercept DNS queries, and modify them at their leisure.
Usually considered a first step to censorship, implemented in this
particular form in certain European countries.

3) Bad guys and gals can hijack DNS too, to the usual hilarious results.

With the advent of HTTPS all of this may be seen as a moot point (if you're
redirected elsewhere the certificate validation should fail), but
nevertheless DOH is being forced down the collective throat of Firefox users
as we speak (and Chrome users are likely to follow Soon™).
Currently a Firefox user is supposed to trust Cloudflare to do DNS
queries for them, and HTTPS is used for this purpose because Security™.


In its current form DOH has a huge gaping hole that every system
administrator worthy of the title is familiar with - local name
resolution - because Cloudflare cannot resolve hosts in your Intranet,
although they probably want to. And yes, your dirty /etc/hosts tricks
won't work here, because DOH simply skips parsing the contents of hosts
file.


Hence the trick. The first thing Firefox does is try to resolve
use-application-dns.net, on the assumption that if the local DNS does the
resolving then the user's host is connected to the Internet.
Because if it does not - then it's most probably a corporate Intranet, so
DOH should be disabled for the duration of this browser run.
I won't detail here on how many levels this logic is flawed or outright
broken.

I'll just say that it's enough to use it for your own good and to
disable DOH without rebuilding Firefox from the source. So, as I wrote
earlier - if you control your network, DOH is just another
questionable thing that you can be rid of.

Reco



Re: Debian is testing Discourse

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 04:30:05PM +0300, Andrei POPESCU wrote:
> On Du, 12 apr 20, 12:03:12, Reco wrote:
> > 
> > The million euro question here is how actually good (or bad) is that
> > "e-mail interaction with Discourse" is.
> 
> Having a highly e-mail centric community like Debian using it is likely 
> to have a significant impact on its development.

While I admire your optimistic view on things, there are a few things to
consider:

1) The proposition was made by Neil McGovern, who's currently listed as
an Executive Director at [1].

2) GNOME, for the last ten years or so, has been known for its own, unique
approach to usability and the pursuit of the Modern Desktop™. The end
result of this, to put it lightly, is not to everyone's liking.

3) Summing these two up, there's a distinct possibility that "e-mail
interaction with Discourse" will be considered UnModern in the
not-so-distant future, and we all know how they deal with the UnModern in
GNOME.

Reco

[1] https://www.gnome.org/foundation/staff/



Re: Synaptic error

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 09:07:07AM -0500, Richard Owlett wrote:
> As I said, there has been no previous problem with Synaptic.
> Where would I look for cause of missing archives sub-directory?

Unless you have an audit facility configured in your kernel - you have
to guess.

> Is it safe to blindly create it?

Yep. 0:0 as owner:group, 0755 as permissions.

$ ls -ald /var/cache/apt/archives/
drwxr-xr-x 3 root root 4096 Apr 11 12:00 /var/cache/apt/archives/

Reco



Re: Synaptic error

2020-04-12 Thread Reco
On Sun, Apr 12, 2020 at 10:39:34AM -0500, Richard Owlett wrote:
> On 04/12/2020 09:17 AM, Reco wrote:
> > Hi.
> > 
> > On Sun, Apr 12, 2020 at 09:07:07AM -0500, Richard Owlett wrote:
> > > As I said, there has been no previous problem with Synaptic.
> > > Where would I look for cause of missing archives sub-directory?
> > 
> > Unless you have an audit facility configured in your kernel - you have
> > to guess.
> 
> Unless an audit facility is installed by default, I don't have it.
> Sounds like something I should read up on.
> Suggested link(s) for someone coming at the subject cold?

apt install auditd

man auditctl

You'll need this:

auditctl -a always,exit -F dir=/var/cache/apt/archives -F perm=wa
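
Once the rule is in place, whatever touches that directory can be pulled
out of the audit log with (as root):

ausearch -f /var/cache/apt/archives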


> > > Is it safe to blindly create it?
> > 
> > Yep. 0:0 as group:owner, 0755 as permission.
> > 
> > $ ls -ald /var/cache/apt/archives/
> > drwxr-xr-x 3 root root 4096 Apr 11 12:00 /var/cache/apt/archives/
> 
> I created the sub-directory using Caja.
> # ls -ald /var/cache/apt/archives/
> drwxr-xr-x 2 root root 4096 Apr 12 09:41 /var/cache/apt/archives/
> 
> *NOTE* When I ran ls there were only 2 {NOT 3} hard links.
> I attempted to install via Synaptic again.
> The error message was:
> > W: Failed to fetch 
> > http://deb.debian.org/debian/pool/main/t/texinfo/install-info_6.3.0.dfsg.1-1+b2_i386.deb
> >   Could not open file 
> > /var/cache/apt/archives/partial/install-info_6.3.0.dfsg.1-1+b2_i386.deb - 
> > open (2: No such file or directory) [IP: 151.101.52.204 80]
> > 
> > W: Failed to fetch 
> > http://deb.debian.org/debian/pool/main/t/texinfo/info_6.3.0.dfsg.1-1+b2_i386.deb
> >   Could not open file 
> > /var/cache/apt/archives/partial/info_6.3.0.dfsg.1-1+b2_i386.deb - open (2: 
> > No such file or directory) [IP: 151.101.52.204 80]

Your local apt index file may be out of sync with the mirror.
There should be a button in Synaptic that performs the equivalent of
"apt-get update".
Also note that the fetch errors point at /var/cache/apt/archives/partial -
that subdirectory has to exist as well (which is why my listing above
shows 3 links while yours shows 2), so create it with the same owner and
permissions.

Reco



Re: Best practice to allow a program to write its logs

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 05:00:09PM +0200, l0f...@tuta.io wrote:
> Hi,
> 
> Oops, I didn't answer to that, sorry...

No big deal. This is a mailing list, we have no reason to hurry here :)


> > It all works - conventional POSIX permissions, ACLs, xattrs, SELinux
> > Labels, etc. Until you try to restore from a backup :)
> >
> What do you mean please? Why does the backup would pose a problem when 
> restored?

Neither cpio nor tar saves xattrs and SELinux labels. The archive
formats simply do not allow for it.
There were some experimental Red Hat patches for that, but they never
made it upstream.
rsync has similar deficiencies (although it is better in this regard).

And these three (cpio, tar, rsync) are the cornerstones of just
about any backup tool that Debian main provides.

Of course, there's dump(8) which *does* account for all these filesystem
oddities, but it's ext4-specific by definition and is not that popular.

All this applies to the backups of / and /var, mainly. Your typical
$HOME is usually free from all these advanced attributes.


> >> You mean for this use case writing into /var/log/msmtp?
> >> Actually, I don't really know why but I've decided to write user
> >> specific configuration with appropriate logs.  So my conf is not in
> >> /etc/msmtprc but in ~/.msmtprc and logging is in the same vein inside
> >> ~/.msmtp.log.
> >
> > My advice to you then - don't do this.
> > Best are not the logs that are written often or are verbose, and even
> > not those are written in a convenient filesystem locations.
> > Best are the logs which contain the problem, but do not contain assorted
> > junk (implies logs filtering), 
> >
> How to you filter your logs? Directly by tweaking severities in 
> (r)syslog(-ng).conf?

Personally I prefer rsyslog configuration files for this.
If you prefer syslog-ng - it should have similar filtering capabilities.
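
As an illustration only (the program name and the severity threshold are
assumptions, adjust to taste), an rsyslog drop-in such as
/etc/rsyslog.d/msmtp.conf could look like:

if $programname == 'msmtp' and $syslogseverity <= 6 then /var/log/msmtp.log
& stop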


> > are small (implies periodic rotation)
> >
> logrotate I presume?

Yep. There's no need to reinvent the wheel here.
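
Purely as an example (the path and schedule are made up, not something
msmtp ships), a minimal /etc/logrotate.d/msmtp could be:

/var/log/msmtp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}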


> > and, the most importantly, are stored not only on your host, but
> > elsewhere, preferrably in a centralized and indexed way.
> >
> Via (r)syslog(-ng)?

Yup. A single setting like this (rsyslog style):

*.info  @rsyslog.home

Implies a syslog daemon listening on udp/514 at rsyslog.home.
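
(A single "@" means plain UDP; if losing messages in transit bothers
you, "@@rsyslog.home" does the same over TCP.)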


> > A false dichotomy. Why reinventing a wheel with custom logging if they
> > given you that "--syslog=on" option?
> >
> I didn't notice that, thanks! :)
> So you recommend not specifying any logfile in /etc/msmtprc but just use 
> switch "--syslog=on" everytime msmtp is used? Maybe this can be viewed as a 
> constraint as one should remember to use that option... (except by using some 
> trick as an alias)

Yes, I do.


> > Ah, that's not a real MTA. My mistake.
> > A quick look at postinst script gives me:
> >
> > chgrp msmtp /usr/bin/msmtp
> > chmod 2755 /usr/bin/msmtp
> >
> In a nutshell, an application triggers actions under the identity of who 
> launched it initially, except if the application makes use of a specific 
> technical account, right?
> If so what is the best way to know if an application operates under a 
> specific account?

Reading the postinst (and possibly preinst) scripts from the package has
never failed me so far.
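
For a package that's already installed, the maintainer scripts can be
read straight from dpkg's database, e.g.:

less /var/lib/dpkg/info/msmtp.postinst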

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 07:46:38PM -0400, Lee wrote:
> > The questionable idea behind DOH is that the browser makers do not trust
> > your local resolver.
> 
> Mozilla claims it's a privacy issue:
> https://support.mozilla.org/en-US/kb/firefox-dns-over-https

It's a privacy issue, along with other things.
With the default settings the Firefox user is handing all DNS resolution
to Cloudflare. Not an equivalent to complete browsing history, but close
enough.


> > 1) One can use a local resolver with the ability *not* to resolve
> > certain DNS queries, which refer to the sites which just happen to
> > contain advertisements, fingerprinting, tracking, cryptomining etc.
> > Since all two major browser makers (Google and Mozilla) happen to rely
> > on revenue generated by advertising *and* users' browsing habits this
> > obviously can not be tolerated.
> 
> Wasn't there a fairly recent kerfluffle about an upcoming change to
> chrome that would break things like the uMatrix addon?

There was, indeed.


> If firefox wasn't a viable alternative to chrome, what are the chances
> that change would have been implemented?

It is implemented already; it's just that there are alternatives to
declarativeNetRequest which still work - so far.


> > 3) Bad guys and gals can hijack DNS too, to the usual hilarious results.
> 
> And the bad guys and gals can use DOH to "hide" their traffic and
> circumvent things like pihole.

There is tor or i2p for *that* already.


> I just did a quick search and couldn't find anything for smart TVs
> using DOH.

Probably because they aren't there yet. A typical smart TV is based on
Android, and Google hasn't had its final say on DOH so far.


> > With the advent of HTTPS all this may be seen as moot points (if you're
> > redirected elsewhere the certificate validation should fail), but
> > nevertheless DOH is forced upon the collective throat of Firefox users
> > as we speak (and Chrome users are likely to follow them Soon™).
> > Currently a Firefox user is supposed to trust Cloudflare to do DNS
> > queries for them, and HTTPS is used for this purpose because Security™.
> 
> For some values of "security", DOH _is_ more secure.

As far as the "last mile" is concerned - maybe. As far as the whole
Internet goes - not so much, as the overall security of DNS queries
depends on DNSSEC being implemented in every zone (and it ain't there yet).


> How many people use a dnssec validating resolver?

See above. Besides, DNSSEC is for the integrity of zones, not privacy.
You need DNS-over-TLS if you need the latter.


> At least Cloudflare resolvers have dnssec enabled.

*And* the ability to see users' DNS queries. Neat, right?

Reco



Re: Synaptic error

2020-04-12 Thread Reco
Hi.

On Sun, Apr 12, 2020 at 06:50:04PM -0500, David Wright wrote:
> On Sun 12 Apr 2020 at 15:46:45 (+0200), to...@tuxteam.de wrote:
> > On Sun, Apr 12, 2020 at 08:43:12AM -0500, Richard Owlett wrote:
> > > Using Synaptic I:
> > > 1. searched package names for "info"
> > > 2. selected it
> > > 3. clicked Apply
> > > 4. received error message saying
> > > >E: Could not open lock file /var/cache/apt/archives/lock - open (2: No 
> > > >such file or directory)
> > > >E: Could not open file descriptor -1
> > > >E: Unable to lock the download directory
> > 
> > Question: is there a /var/cache/apt/archives directory? There should
> > be one.
> 
> The impression given by the FHS is that, upon deletion, the system
> should be able to recreate anything in a cache directory.

That assumes the cache directory itself exists (i.e. /var/cache, and
everything below it in the filesystem hierarchy that's needed).
Note that the FHS part you're quoting specifically refers to "files",
not anything else.

A small distinction, but it's important here.

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-13 Thread Reco
Hi.

On Mon, Apr 13, 2020 at 11:16:02AM +0300, Andrei POPESCU wrote:
> On Lu, 13 apr 20, 08:47:22, Reco wrote:
> > On Sun, Apr 12, 2020 at 07:46:38PM -0400, Lee wrote:
> > 
> > > How many people use a dnssec validating resolver?
> > 
> > See above. Besides, DNSSEC is for integrity of zones, not privacy.
> > You need DNS-over-TLS if you need last one.
> > 
> > 
> > > At least Cloudflare resolvers have dnssec enabled.
> > 
> > *And* the ability to see users' DNS queries. Neat, right?
> 
> Whether DoH or DNS-over-TLS, you have to trust the DNS server.

Yup. That's why I have my own, and every Debian user can have their own
too, using only free software.

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-13 Thread Reco
On Mon, Apr 13, 2020 at 12:14:44PM +0100, Liam O'Toole wrote:
> On Mon, 13 Apr, 2020 at 12:57:54 +0300, Reco wrote:
> > Hi.
> > 
> > On Mon, Apr 13, 2020 at 11:16:02AM +0300, Andrei POPESCU wrote:
> 
> [...]
> 
> > > Whether DoH or DNS-over-TLS, you have to trust the DNS server.
> > 
> > Yup. That's why I have my own, and every Debian user can have their own
> > too, using only free software.
> > 
> 
> Pray tell us more. I use dnsmasq for clients on my LAN, but even that
> has to use an upstream name server --- in my case the one provided by my
> ISP.

1) Rent yourself a VPS, install bind there (there's no DNS but bind).
Replace bind with unbound if you need a caching-only nameserver
(a caching-only bind is possible, but it's overkill).

2) Apply [1] to your dnsmasq.

3) Your ISP gets a TLS-tunneled DNS request (and they can't do anything
about it), you get unmolested name resolution.

stunnel can be replaced with ipsec or openvpn or wireguard.
Whatever you use as a caching DNS on your end does not matter, as long
as it can forward DNS queries to another upstream DNS.
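
To give a rough idea of what [1] boils down to on the client side (the
hostnames and ports below are placeholders, not a recommendation) -
stunnel forwards a local port to the VPS over TLS:

[dns-over-tls]
client = yes
accept = 127.0.0.1:5353
connect = vps.example.org:853

and dnsmasq is then pointed at that tunnel (dnsmasq.conf syntax):

server=127.0.0.1#5353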

Reco

[1] https://kb.isc.org/docs/aa-01386



Re: Moderation (not!) [was: Debian is testing Discourse]

2020-04-13 Thread Reco
Hi.

On Mon, Apr 13, 2020 at 08:09:02AM -0500, John Hasler wrote:
> tomás writes:
> > Please, folks. The subject (mail vs discourse) is thorny enough,
> > eliciting strong emotions (myself included).
> 
> > Mixing it up with group moderation is going to kill any chance
> > of having a productive discussion.
> 
> > It is as easy to moderate a mailing list as it is a platform à la
> > discourse. So this isn't a criterion to decide between both.
> 
> > So please: let's take one thing at a time, will we?
> 
> The OP brought up moderation by giving the ease of doing it as a major
> reason to switch to Discourse.  This implies an intent to implement it.

A small nitpick - I'm the OP, and the proposition was made by Neil
McGovern, Debian Developer.

Reco



Re: Debian is testing Discourse

2020-04-13 Thread Reco
Hi.

On Mon, Apr 13, 2020 at 07:32:56AM -0500, Nate Bargmann wrote:
> I doubt that Russ reads this list and may not be aware of the
> experiences of us that have dealt with a project that wholesale replaced
> working mailing lists with Discourse.  Russ should be made aware that
> Discourse is not some magic software that does not require any learning.

That I agree with.


> On the contrary, it is different and requires a modern Web browser (how
> does the non-GUI user participate since it is noted that an email user
> is a distant second-class user?) but as he notes it is a centralized
> database that facilitates an amount of control that is lacking with
> email lists.  I think that the key to the discussion is that some people
> seek greater control over discussions.

Moreover, it raises two interesting aspects of the problem:

1) By its design, Discourse relies on Javascript executing in the user's
browser. While Discourse itself may be free software, such usage of
Discourse violates Software Freedom 1 ("change it so it does your
computing as you wish").

2) Centralization vs federation.
By its very design e-mail is decentralized, which has allowed it to
function successfully for about 40 years.
Moreover, such decentralization actually empowers the end user (i.e. all
of us), because "your MTA - your rules".

But Discourse is centralized, and while I trust the Debian project to
make the OS that I use daily (and strongly prefer to others), this move
strips end users of the limited power that they have here, at
debian-user.


> If the project wants to implement Discourse as an adjunct to existing
> communications channels, fine, I've no problem with that.

I don't feel easy reminding everyone of it, but the Debian project has
its share of proposed alternatives (GNOME vs XFCE, systemd vs upstart vs
sysvinit, for instance), which somehow ended with the majority of users
using only one of them.

And note that there were totally objective reasons for that, and users
were left with the final choice. Just like this time.

Reco



Re: Debian is testing Discourse

2020-04-13 Thread Reco
On Mon, Apr 13, 2020 at 06:09:59PM +0100, Brian wrote:
> > > On the contrary, it is different and requires a modern Web browser (how
> > > does the non-GUI user participate since it is noted that an email user
> > > is a distant second-class user?) but as he notes it is a centralized
> > > database that facilitates an amount of control that is lacking with
> > > email lists.  I think that the key to the discussion is that some people
> > > seek greater control over discussions.
> > 
> > Moreover, it brings two interesting aspects of the problem:
> > 
> > 1) By its design the Discourse relies on Javascript executing in user's
> > browser. While Discourse itself may be the free software, such usage of
> > Discourse violates Software Freedom 1 ("change it so it does your
> > computing as you wish").
> 
> I understand the concern with Javascript. I have read what there is of
> Discourse using Lynx. Am I missing out an anything?

Wow. Just wow. Thank you for the idea.
I confirm that it's enough to use any text-based browser (be it lynx,
links or w3m) to read, say, discourse.mozilla.org.
I cannot say whether it's possible to participate there with these fine
browsers (the answer is probably "no"), but it's a start.

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-13 Thread Reco
Hi.

On Mon, Apr 13, 2020 at 06:42:10PM -0400, Lee wrote:
> On 4/13/20, Reco  wrote:
> > On Sun, Apr 12, 2020 at 07:46:38PM -0400, Lee wrote:
> >> > The questionable idea behind DOH is that the browser makers do not
> >> > trust
> >> > your local resolver.
> >>
> >> Mozilla claims it's a privacy issue:
> >> https://support.mozilla.org/en-US/kb/firefox-dns-over-https
> >
> > It's a privacy issue along with the other things.
> > With the default settings the Firefox user is handing all DNS resolution
> > to Cloudflare. Not an equivalent to complete browsing history, but close
> > enough.
> 
> Right.  The ISP can't see what names the user is looking up but
> Cloudflare sees every single one.  On the other hand, take a look at
>   https://wiki.mozilla.org/Security/DOH-resolver-policy

An interesting declaration. For instance:

1. The resolver may retain user data (including identifiable data...)
but should do so only for the purpose of operating the service and
must not retain that data for longer than 24 hours.
...
2. Transparency Report. There must be a transparency report published at
least yearly that documents the policy for how the party operating the
resolver will handle law enforcement requests for user data and that
documents the types and number of requests received and answered, except
to the extent such disclosure is prohibited by law.


Thus:

a) Cloudflare is allowed to store whatever they want for 24 hours.
b) They aren't forbidden from giving that data to law enforcement, which
is not bound by Mozilla's Trusted Recursive Resolver program.
c) A law enforcement entity hires some independent contractor to help
them store this data.
d) Next thing you know, everyone's DNS history is world-accessible via
some unprotected ElasticSearch instance on AWS.
e) And the best thing here is - Mozilla legally allowed it to happen.


> >> If firefox wasn't a viable alternative to chrome, what are the chances
> >> that change would have been implemented?
> >
> > It is implemented already, it's just there are alternatives to
> > declarativeNetRequest that are working - so far.
> 
> Ahh.  I thought Google backed down on the change..

I happen to build chromium from source from time to time. Recent
versions (>=80) require the re2 regex library just for
declarativeNetRequest alone.


> >> > With the advent of HTTPS all this may be seen as moot points (if you're
> >> > redirected elsewhere the certificate validation should fail), but
> >> > nevertheless DOH is forced upon the collective throat of Firefox users
> >> > as we speak (and Chrome users are likely to follow them Soon™).
> >> > Currently a Firefox user is supposed to trust Cloudflare to do DNS
> >> > queries for them, and HTTPS is used for this purpose because Security™.
> >>
> >> For some values of "security", DOH _is_ more secure.
> >
> > As far as the "last mile" is concerned - maybe.
> 
> How about as far as the "end user" is concerned? (which is what I
> thought we were talking about -- clueless end-users having doh forced
> on them)

It's akin to tempered glass: less likely to break than normal glass,
but still transparent. It's also akin to building a castle on sand.

I.e. if the user does not care about security it may improve the overall
situation somewhat, but the cost of this improvement is privacy.


> >> How many people use a dnssec validating resolver?
> >
> > See above. Besides, DNSSEC is for integrity of zones, not privacy.
> > You need DNS-over-TLS if you need last one.
> 
> "integrity of zones" is part of "security" - yes?

Yes. My point is that it's only a part of the security. A needed part,
but a part nevertheless.


> DoT or DoH - either one gets you privacy from your ISP
> DoT is easy to block, DoH is harder to block, so somewhat censorship 
> resistant?

Both are easily blocked in their current form; in fact, DoH is easier in
this regard - it makes very distinct HTTPS (i.e. tcp:443) queries to at
most ten well-known IPs.


> >> At least Cloudflare resolvers have dnssec enabled.
> >
> > *And* the ability to see users' DNS queries. Neat, right?
> 
> Yup, and probably a net win for people that don't have a clue about
> dns .. or at least people in the US.  Do people in the EU have to
> worry about their ISP selling their usage data?

No. You do not worry about something that happens widely, you deal with
it one way or another. And it's possible to sidestep GDPR just by
injecting "trusted partners'" ads into users' HTTP traffic, for instance.

Reco



Re: DOH (was: geolocation services disabled and Gnome maps)

2020-04-14 Thread Reco
> >
> >> > See above. Besides, DNSSEC is for integrity of zones, not privacy.
> >> > You need DNS-over-TLS if you need last one.
> >>
> >> "integrity of zones" is part of "security" - yes?
> >
> > Yes. My point is that it's only a part of the security. A needed part,
> > but a part nevertheless.
> >
> >
> >> DoT or DoH - either one gets you privacy from your ISP
> >> DoT is easy to block, DoH is harder to block, so somewhat censorship
> >> resistant?

If it bothers you (it does bother me), you probably should not take DoH
as your only means of defence. An ipsec tunnel to a throwaway VPS has
served me well so far, for instance.


> > Both are easily blocked in their current form, in fact, DoH is easier in
> > this regard - it makes very distinct HTTPS (i.e. tcp:443) queries to a
> > ten (at most) well-known IPs.
> 
> I count a lot more than 10 DoH providers at
>   https://github.com/curl/curl/wiki/DNS-over-HTTPS

curl is ahead of Firefox in this regard. But it's a good list, something
one could consider for blocking the controversial feature.


> >> >> At least Cloudflare resolvers have dnssec enabled.
> >> >
> >> > *And* the ability to see users' DNS queries. Neat, right?
> >>
> >> Yup, and probably a net win for people that don't have a clue about
> >> dns .. or at least people in the US.  Do people in the EU have to
> >> worry about their ISP selling their usage data?
> >
> > No. You do not worry about something that widely happens, you deal with
> > it one way or another.
> 
> you're saying that EU ISPs sell their user's online activity data?

I won't call it "selling" per se. It's more like "we provide our
subcontractors with the data to increase the value of our services, and
we turn a collective blind eye to what the contractors do with the
data".
I cannot disclose any names here.

Reco




Re: Debian is testing Discourse

2020-04-14 Thread Reco
Hi.

On Tue, Apr 14, 2020 at 08:14:22AM -0400, rhkra...@gmail.com wrote:
> On Tuesday, April 14, 2020 05:56:39 AM Dan Purgert wrote:
> > On Apr 14, 2020, Andrei POPESCU wrote:
> > > On Ma, 14 apr 20, 08:19:50, Curt wrote:
> > > > On 2020-04-14, Andrei POPESCU  wrote:
> > > > > It doesn't matter much as nobody is proposing to replace debian-user
> > > > > with Discourse.
> > > > 
> > > > Nobody but Neil McGovern himself.
> 
> Does he have some current status in Debian that would make his thoughts any 
> more of indicative of the intentions of Debian than anyone else?

A Debian Developer, unless I misunderstood that:

https://lists.debian.org/msgid-search/e1jneel-0007cd...@mail.einval.com

The current DPL is Sam Hartman, and the election process for the next
DPL runs until Apr 18th.

Reco



Re: DOH

2020-04-14 Thread Reco
Hi.

On Tue, Apr 14, 2020 at 06:25:24PM +0100, Liam O'Toole wrote:
> I have two reservations about the approach advocated by Reco above.
> Maybe I'm not seeing some part of the big picture.
> 
> 1. The risk of DNS snooping is merely shifted from the ISP to the VPS 
> provider.

Usually you have a limited number of ISPs to choose from.
You have the whole Internet of VPS providers. Choose one that does not
screw with your traffic, and if you don't like that one - there's always
another.


> 2. Having completed a DNS lookup unbeknownst to the ISP, we still have
> to make a connection to the resulting IP address through the ISP's
> gateway. The ISP can perform a reverse DNS lookup of the IP address if
> they are determined to snoop.

And that is why it's important to use DNS over TLS.
Unless your ISP can magically decrypt TLS on the fly, the scenario
you're describing is impossible. 

Reco



Re: DOH

2020-04-14 Thread Reco
Hi.

On Tue, Apr 14, 2020 at 10:26:09PM +0100, Liam O'Toole wrote:
> On Tue, 14 Apr, 2020 at 23:42:48 +0300, Reco wrote:
> 
> [...]
> 
> > > 2. Having completed a DNS lookup unbeknownst to the ISP, we still have
> > > to make a connection to the resulting IP address through the ISP's
> > > gateway. The ISP can perform a reverse DNS lookup of the IP address if
> > > they are determined to snoop.
> > 
> > And that is why it's important to use DNS over TLS.
> > Unless your ISP can magically decrypt TLS on the fly, the scenario
> > you're describing is impossible. 
> 
> I think you misunderstand me. I'm talking about making a connection to
> an IP address that you have already obtained by (encrypted) DNS.

I misunderstood you indeed. While it's true that this particular threat
is something that DNS over TLS cannot guard against, I suggest you
consider this:

1) Not every IP on the Internet has a PTR record.
2) It's common for multiple sites (HTTPS included) to share the same IP.
3) For HTTPS (and TLS in general) there's a more precise method called
SNI snooping (encrypted SNI in TLSv1.3 counters *that*, but it's not
widely adopted).

Reco



Re: Accessing security.debian.org through https

2020-04-18 Thread Reco
Hi.

On Sat, Apr 18, 2020 at 06:48:59PM +0100, André Rodier wrote:
> I am investigating the option to enforce https access on my network,
> and I am surprised I have no way to access security.debian.org.

Technically, you can: https://deb.debian.org/debian-security
Not that using it will be useful in any way, as currently it just
serves an HTTP redirect to http://security.debian.org

> Is there any reason why https is not supported (yet?),

1) HTTPS vs HTTP is noticeable in terms of server load, especially if
the whole world tries to get the same package at the same time.

2) Release files are GPG signed, and contain multiple checksums for
every package served.
A package (or a Release file) that's been substituted by a third party
will be noticed by the local apt (so integrity is covered), and
confidentiality is not an issue here.

> especially with lets-encrypt.

Debian already uses certificates signed by that CA where appropriate
(deb.d.o, wiki.d.o, www.d.o, to name a few).


Reco



Re: Accessing security.debian.org through https

2020-04-20 Thread Reco
Hi.

On Mon, Apr 20, 2020 at 08:11:29AM -0400, Greg Wooledge wrote:
> On Sat, Apr 18, 2020 at 09:13:43PM +0300, Reco wrote:
> > Technically, you can: https://deb.debian.org/debian-security
> > Not that using it will not be useful in any way as currently it just
> > serves an HTTP redirect to http://security.debian.org
> 
> That doesn't seem to be true.  As I said last week, my workplace's
> firewall has recently started blocking Debian's package mirrors, but
> 
> deb https://deb.debian.org/debian-security buster/updates main contrib 
> non-free

I stand corrected.
It's https://deb.debian.org/debian-security verbatim that does the
redirect.
An attempt to get any file via this apt-proxy-ng instance will result in
a file served over HTTPS.
For instance, this should work without any redirect:

https://deb.debian.org/debian-security/pool/updates/main/c/chromium/chromium_80.0.3987.162-1~deb10u1.dsc

Reco



Re: The simpliest way to automatically rebuild few Debian packages ?

2020-04-21 Thread Reco
Hi.

On Tue, Apr 21, 2020 at 03:23:37PM +0200, Thomas Martin wrote:
> My goal is simple : I'm applying few modifications on some Debian
> packages and would like those packages to be rebuilt with my changes
> when a new package version is available.

apt install apt-build

It requires some scripting to run without human intervention, but it's
relatively simple.
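
A rough sketch of what that scripting could look like (the package name
is just an example; check apt-build(1) before relying on the
patch-related options):

apt-build update
apt-build install mypackage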

Reco



Re: The simpliest way to automatically rebuild few Debian packages ?

2020-04-21 Thread Reco
Hi.

On Tue, Apr 21, 2020 at 05:31:38PM +0200, Vincent Lefevre wrote:
> On 2020-04-21 16:44:57 +0300, Reco wrote:
> > On Tue, Apr 21, 2020 at 03:23:37PM +0200, Thomas Martin wrote:
> > > My goal is simple : I'm applying few modifications on some Debian
> > > packages and would like those packages to be rebuilt with my changes
> > > when a new package version is available.
> > 
> > apt install apt-build
> > 
> > Requires some scripting to run without a human intervention, it's
> > relatively simple.
> 
> Can it handle the rebuild for multiple architectures?

No. As far as I can tell, you cannot pass it the '-a' flag for
dpkg-buildpackage.
But then, '-a' means cross-compilation, and that tends to bring its own
unique problems.

> And can one provide a log message associated with each patch?

I'm unsure what you mean by that. It logs the whole build process to
stdout, and that includes applying the patches.

Reco



Re: Output from date command defaults to 12-hour in Buster.

2020-04-28 Thread Reco
Hi.

On Tue, Apr 28, 2020 at 08:36:11PM -0500, Martin McCormick wrote:
>   Is there any environment variable or local configuration
> variable which will make date produce the 24-hour time stamp
> similar to past implementations of date?

If you need it systemwide, consider doing this (will require relogin, at
least):

echo 'LC_TIME=C' >> /etc/default/locale

If you need it for your user only (will require opening a new terminal):

echo 'export LC_TIME=C' >> ~/.bashrc
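
To see the effect without changing anything first, a one-off test works
too:

LC_TIME=C date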

Reco



Re: armhf: buster: TLS / HTTPS partly broken

2020-05-03 Thread Reco
Hi.

On Sun, May 03, 2020 at 02:40:14PM +0200, Mark Jonas wrote:
> curl: (60) SSL certificate problem: unable to get local issuer certificate
>
> Does that mean a TLS library does not feature all required protocols on armhf?

The TLS library that curl uses (openssl) is perfectly fine, but it cannot
validate any certificate unless you provide it with root CA
certificates.
So it likely means you haven't installed the "ca-certificates" package.
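
A quick way to check, assuming the usual Debian paths:

dpkg -l ca-certificates
ls -l /etc/ssl/certs/ca-certificates.crt
curl --cacert /etc/ssl/certs/ca-certificates.crt https://www.google.com

If the last command succeeds, curl and openssl are fine and it's the
default CA bundle that's missing or not being found.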

Reco



Re: armhf: buster: TLS / HTTPS partly broken

2020-05-03 Thread Reco
On Sun, May 03, 2020 at 07:20:13PM +0200, Mark Jonas wrote:
> Hi Reco,
> 
> >> curl: (60) SSL certificate problem: unable to get local issuer certificate
> >>
> >> Does that mean a TLS library does not feature all required protocols on 
> >> armhf?
> >
> > TLS library that curl uses (openssl) is perfectly fine, but it cannot
> > validate any certificate unless you provide it with root CA
> > certificates.
> > So it likely means you haven't installed "ca-certificates" package.
> 
> This is what it looks like. But actually I installed ca-certificates.

Ok. Can you run tcpdump while you're running curl?
Specifically,

tcpdump -s0 -pnni any -w /tmp/curl.pcap tcp port 443
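
(The capture can be read back later with "tcpdump -nnr /tmp/curl.pcap",
or just attach the file.)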

Reco



Re: armhf: buster: TLS / HTTPS partly broken

2020-05-04 Thread Reco
Hi.

On Mon, May 04, 2020 at 09:27:14AM +0200, Mark Jonas wrote:
> >> >> curl: (60) SSL certificate problem: unable to get local issuer 
> >> >> certificate
> >> >>
> >> >> Does that mean a TLS library does not feature all required protocols on 
> >> >> armhf?
> >> >
> >> > TLS library that curl uses (openssl) is perfectly fine, but it cannot
> >> > validate any certificate unless you provide it with root CA
> >> > certificates.
> >> > So it likely means you haven't installed "ca-certificates" package.
> >>
> >> This is what it looks like. But actually I installed ca-certificates.
> >
> > Ok. Can you run tcpdump while you're running curl?
> > Specifically,
> >
> > tcpdump -s0 -pnni any -w /tmp/curl.pcap tcp port 443
> 
> I tried to dump from within the running container but failed.

It's way too complicated. Docker is basically one big NAT, so please
run tcpdump on the host instead.

But this hiccup gave me an idea - maybe libssl on armhf is perfectly
fine, and it's qemu which fails to emulate a certain CPU instruction.

Reco



Re: armhf: buster: TLS / HTTPS partly broken

2020-05-04 Thread Reco
Hi.

On Mon, May 04, 2020 at 01:49:34PM +0200, Mark Jonas wrote:
> Hi Reco,
> 
> > > > Ok. Can you run tcpdump while you're running curl?
> > > > Specifically,
> > > >
> > > > tcpdump -s0 -pnni any -w /tmp/curl.pcap tcp port 443
> > >
> > > I tried to dump from within the running container but failed.
> >
> > It's way too complicated. Docker is basically a one big NAT, so please
> > run tcpdump on a host instead.
> 
> I used the identical image to run the container on an amhf host
> (Raspberry Pi 3). So there is now no QEMU in the way.

Curious. Just tested it with curl on a Marvell Armada 385 (runs Debian
10, armhf), and it works as it's supposed to.
I could also test it on an Exynos 5422 (also runs Debian 10, armhf), but
it would likely be the same.


> > But this hiccup gave me an idea - maybe libssl on armhf is perfectly
> > fine, but it's qemu which fails to emulate certain CPU instruction.
> 
> curl https://www.google.com still fails on the armhf host. So QEMU is
> out of the game.

Ok. Is it possible to run curl via strace from inside the docker?
Something like this would be perfect (-o designates an output file):

strace -o /tmp/curl -e trace=file curl https://www.google.com


Specifically, it should try to open a symlink to
/etc/ssl/certs/GlobalSign_Root_CA_-_R2.pem.
Here it's called /etc/ssl/certs/4a6481c9.0, but the hash name may differ
on your machine.
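
If you want to double-check which hash name to expect on your box,
openssl can print it (the certificate path is the one mentioned above):

openssl x509 -noout -subject_hash -in /etc/ssl/certs/GlobalSign_Root_CA_-_R2.pem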


> Packet capturing now also worked. For capturing QEMU was the problem.
> I also captured aria2c (succeeds with warning) and wget (silently
> succeeds).

For the archives, curl's pcap looks like this:

SYN
SYN,ACK
TLS client hello
TLS server hello, change cipher spec (an upgrade from TLSv1.2 to TLS1.3)
Alert (Level: Fatal, Description: Unknown CA) <- curl sends this
RST
FIN,ACK

wget and aria2c show what they're supposed to (TLS handshake,
Application Data ping-pong).

Reco


