Re: Tracking the next Stable release

2019-04-09 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 05:42:46PM -0300, Francisco M Neto wrote:
> Greetings!
> 
> 
> https://twitter.com/debian_tracker
> 
Nice! What does the level of release-critical bugs need to fall to 
before a release can happen -- it's not zero, is it?

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 02:39:35PM +0100, Joe wrote:
> On Mon, 8 Apr 2019 21:33:03 +0900
> Mark Fletcher  wrote:
> 
> 
> > 
> > My image of an ideal solution is a piece of software that can present 
> > email to a remote MTA (ie an MTA not on the local machine) for
> > delivery, but is not itself an MTA, and certainly has no capability
> > to listen for incoming mail.
> > 
> 
> a) Sendmail. Not the full-featured MTA, but the utility.
> https://clients.javapipe.com/knowledgebase/132/How-to-Test-Sendmail-From-Command-Line-on-Linux.html
> 

Oh ah. Right, I hadn't separated the two in my mind. This may also do 
the job well, I'm guessing.
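Something like this, I imagine -- a sketch only, with an illustrative 
address; sendmail -t takes the recipients from the message headers:

    $ /usr/sbin/sendmail -t <<'EOF'
    To: mark@example.lan
    Subject: external IP changed

    New IP address detected.
    EOF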

> b) Write it yourself. If you can do simple scripting then you can write
> something that talks basic SMTP to a remote SMTP server.
> 
> Here's basic unencrypted SMTP:
> https://my.esecuredata.com/index.php?/knowledgebase/article/112/test-your-smtp-mail-server-via-telnet
> 



Yes, I had considered that too, and was going to script something up 
over a telnet session (inside my home LAN, albeit through a VPN to be 
able to tunnel back through a NAT'ing router) if this thread didn't turn 
up anything useful. But it did. :)
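The exchange such a script would have to drive is short -- roughly the 
following, with illustrative host names and addresses (the numeric 
replies are typical of a Postfix-style server):

    $ telnet smtp.example.lan 25
    220 smtp.example.lan ESMTP
    HELO lfsbox.example.lan
    250 smtp.example.lan
    MAIL FROM:<alert@example.lan>
    250 2.1.0 Ok
    RCPT TO:<mark@example.lan>
    250 2.1.5 Ok
    DATA
    354 End data with <CR><LF>.<CR><LF>
    Subject: external IP changed

    New IP address detected.
    .
    250 2.0.0 Ok: queued
    QUIT
    221 2.0.0 Bye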

Also, I'm an engineer by training and follow the principle of re-use -- 
if there's a tool out there that does what I want I'd rather use it than 
write a new one. I admit I sometimes stray from that in the name of 
learning, but on this occasion I just want to solve a problem and move 
on.

> 
> c) Use a standard MTA and tell it not to listen to anything from
> outside your network. Use your firewall to not accept SMTP on the WAN
> port, and unless you have previously received email directly then the
> SMTP port shouldn't be open anyway. 
> 
> Use the MTA's configuration to listen only to localhost. Restart it and
> check where it's listening with netstat -tpan as root. 
> 
> That way you have two mechanisms to prevent access, even if you
> misconfigure one of them you should still be OK. After you have the MTA
> running and sending email where you want it to go, use ShieldsUp!! on
> https://grc.com to check which ports are open to the outside. Select
> 'All Service Ports' to check TCP/1-1055.
> 

Yes, agreed, this should also work. One thing I didn't mention in my 
original post is that I have to build all software for the "client" 
machine from scratch, and I'd expect a full-strength MTA to be a large 
project to build from source (many and potentially complex dependencies 
and so on), while a simple tool is likely to have a smaller and less 
complex dependency tree. Also because security is important on this box, 
every package I add needs careful consideration to make sure it doesn't 
compromise that -- again nudging me towards the smaller, simpler tool 
with fewer dependencies.
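For the record, the netstat check suggested above would look something 
like this on a localhost-only exim4 (the output line is illustrative):

    # netstat -tpan | grep ':25 '
    tcp   0   0 127.0.0.1:25   0.0.0.0:*   LISTEN   1234/exim4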

Thanks for your suggestions.

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 02:14:33PM +0100, Thomas Pircher wrote:
> Mark Fletcher wrote:
> > mutt won't let me go back and edit the subject line.
> 
> Hi Mark,
> 
> Yes, have a look at the dma or nullmailer packages.  There used to be
> more of these programs in Debian (ssmtp, for example), but on my system
> (Buster) only those two seem to have survived.
> 

Thanks, of those dma looks like a perfect match and nullmailer also 
would work.

> You could also use one of the big MTAs and configure them to listen to
> local connections only, and/or block the SMTP ports with a firewall, but
> both dma and nullmailer do their job just fine. Besides, they are much
> simpler to configure.
> 

Yes, I could -- but given my own capacity for stupid mistakes, I'd feel 
safer using a piece of software that simply can't listen for mail, in 
this particular scenario. So dma and nullmailer both fit the bill. I 
will pore over their docs, as well as sSMTP's, and see which comes out 
best.
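For the curious, the sort of dma setup I have in mind is tiny -- a 
sketch only, with an illustrative smarthost name (check dma(8) for the 
options your build supports):

    # /etc/dma/dma.conf
    # deliver everything via the Stretch machine over the VPN
    SMARTHOST stretch.vpn.lan
    PORT 25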

Thanks a lot for your help

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 07:54:30AM -0500, Ryan Nowakowski wrote:
> You might check out sSMTP[1]
> 
> [1] https://wiki.debian.org/sSMTP
> 
Thanks, looks like sSMTP will do the job. As was pointed out elsewhere 
in the thread, it seems to have been dropped from Buster, but that is no 
barrier for me as I can build it myself on the LFS machine.
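For anyone following along, a minimal send-only configuration might 
look roughly like this -- a sketch, with illustrative host names:

    # /etc/ssmtp/ssmtp.conf
    # who gets mail for the system accounts
    root=mark@example.lan
    # the MTA that actually delivers (the Stretch box, through the VPN)
    mailhub=stretch.vpn.lan:25
    # the name this host identifies itself as
    hostname=lfsbox.example.lan
    # let the caller set the From: header
    FromLineOverride=YES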

Thanks a lot

Mark



Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
Hello all

As I wrote this I began to realise it is slightly OT for this list; my 
apologies for not putting OT in the subject line, but mutt won't let me 
go back and edit it.

Short version: Is it reasonable to expect a piece of software to exist 
that establishes a direct connection to a "remote" MTA and hands mail 
over to it for delivery, without also offering mail reception 
capabilities? If so, what would that software be? Or alternatively, is 
there a failsafe way to configure one of the MTAs (I have no strong 
allegiance to any MTA, although the only one I have experience with is 
exim4) such that even if I miss a configuration step it won't be 
contactable from outside? To be clear, I only wish to be able to send 
mail in one direction in this scenario...

The more detailed background:

My ISP has recently developed the unfortunate habit of changing my IP 
address moderately frequently. They're allowed -- I'm cheap so I haven't 
paid for a fixed IP. I'm shortly going to be moving so now really isn't 
a good time to reconsider that position.

They still aren't changing it crazily frequently, but I now run an 
OpenVPN server at home and it is a bit inconvenient when they change my 
home IP and I don't notice before going on a business trip or something.

I'd like to set up an alert that lets me know when my external IP 
address has changed.

The box that is in a position to notice that the IP address has changed 
is on the outer edge of my network connected directly to the internet. 
It runs LFS.

Deeper inside my network, accessible from the LFS box via the VPN, is a 
Debian Stretch machine where I do most of my work.

I've created a very simple script that parses the output of "ip addr", 
compares the address on the relevant interface to a stored address, and 
can thus tell whether the IP address has changed. What I'd like now is a 
means for the LFS box to notify me that the external-facing IP address 
has changed.
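For concreteness, the check amounts to something like this -- a sketch, 
assuming the external interface is eth0 and the last-seen address is 
cached in a file (both names illustrative):

    #!/bin/bash
    # compare the current IPv4 address on eth0 with the last one seen
    current=$(ip addr show eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
    cached=$(cat /var/tmp/last-ip 2>/dev/null)
    if [ "$current" != "$cached" ]; then
        echo "$current" > /var/tmp/last-ip
        # notification goes here, e.g. hand a mail to the Stretch box
    fi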

My Debian machine runs exim4 and has a reasonably basic setup that 
allows it to accept mail from other machines on the network (although I 
may need to fiddle around with getting mail to come through the VPN) and 
deliver it either locally or via a friendly mail provider as a 
smarthost. I've successfully sent and received mail between this machine 
and a Buster machine on the same network; those two machines can see 
each other without the VPN. The Buster machine was also running exim4.

The LFS machine is, by design, very sparsely configured, with only 
software I truly need installed. I am willing to add software but wish 
to minimise the risk of installing something that opens up 
external-facing vulnerabilities as much as possible. What I'd really 
like is a piece of software that can reach out to my Stretch machine 
through the VPN to present an email for delivery without offering a 
local MTA that, improperly configured, might end up listening to the 
outside world and thus present a security risk.

I've looked at sendmail, postfix and of course exim4; these are MTAs 
which could certainly do the job, but which also present the risk of 
listening to the internet, especially if I do something stupid in the 
configuration -- which is entirely feasible. And from some basic tests I 
did on my Stretch machine, I think the mail command expects there to be 
a local MTA for it to talk to...

My image of an ideal solution is a piece of software that can present 
email to a remote MTA (ie an MTA not on the local machine) for delivery, 
but is not itself an MTA, and certainly has no capability to listen for 
incoming mail.

Thanks in advance

Mark



Re: Access Debian 9 laptop by another device via wifi

2019-04-01 Thread Mark Fletcher
On Sat, Mar 23, 2019 at 08:31:34AM -0500, Tom Browder wrote:
> On Sat, Mar 23, 2019 at 5:12 AM Tom Browder  wrote:
> >
> > > > Is there any reliable way to either (1) always connect via the LAN or 
> > > > (2)
> > > > make the laptop broadcast its own LAN so I can login to it wirelessly 
> > > > from
> > > > the iPad?
> 
> Solved!!
> 
> I tried using my iPhone as a personal hotspot and connected the
> laptop AND iPad to it and I can ssh into the laptop with no problems.
> 
> -Tom
> 

I'm pleasantly surprised to hear that worked. I wouldn't have said it 
was a given that an iPhone personal hotspot would do routing between 
multiple WiFi devices connected to it. Obviously it routes between the 
WiFi devices and the phone itself, but...

Of course the downside of this approach is that the iPhone itself 
switches away from WiFi when you do this, and any data usage *it* does 
while you are working goes over the cellular network, potentially 
costing you money... I once ran up a >$2000 phone bill while roaming in 
HK because I didn't realise an online broker's app was still running on 
the phone, streaming prices...

Mark



Re: Bluetooth audio problem

2019-04-01 Thread Mark Fletcher
On Sat, Mar 23, 2019 at 08:24:30AM -0500, Nicholas Geovanis wrote:
> On Fri, Mar 22, 2019 at 9:29 AM Mark Fletcher  wrote:
> 
> >
> > So this turned out to be a weirdie -- if I dropped the "sudo" my
> > original command worked.
> > So now, suddenly from that update that started this thread, if I run the
> > pactl command as an unprivileged user, it works fine.
> 
> 
> Is it possible that you had previously started pulseaudio as root, and
> could no longer communicate with it as an unprivileged user?
> I ask this having been a pulseaudio victim myself sometimes.
> 
> 

Hmm, interesting idea, but the situation I was previously in persisted 
from when Stretch became Stable until shortly before my original mail in 
this thread (sometime in February, if I recall correctly). Over, 
naturally, multiple reboots.

For that period, I had to use sudo when issuing the pactl command (in 
Jessie and previously, the pactl command wasn't necessary at all).

So I guess I could have had some sort of configuration which repeatedly 
put me in that situation on every reboot, and the update that "created 
the problem" actually fixed whatever *that* problem was... otherwise, no 
I don't think so.

Thanks for the suggestion though

Mark



Re: Bluetooth audio problem

2019-03-23 Thread Mark Fletcher
On Fri, Mar 22, 2019 at 08:44:46PM +0100, deloptes wrote:
> Mark Fletcher wrote:
> 
> > So this turned out to be a weirdie -- if I dropped the "sudo" my
> > original command worked.
> > 
> > So now, suddenly from that update that started this thread, if I run the
> > pactl command as an unprivileged user, it works fine. I have no idea why
> > it changed but I'm just happy I have it working again.
> 
> you can also mark it as solved, if it is solved
> 

True, I could have. But I don't think it will kill interested people who 
follow after to read a 3-mail thread to see the resolution.



Re: Bluetooth audio problem

2019-03-22 Thread Mark Fletcher
On Sun, Mar 03, 2019 at 06:04:05PM +0100, deloptes wrote:
> Mark Fletcher wrote:
> 
> > Hello
> > 
> > Since upgrading to Stretch shortly after it became stable, I have had to
> > execute the following after a reboot before being able to connect to
> > bluetooth devices using the Gnome bluetooth applet:
> > 
> > $ sudo pactl load-module module-bluetooth-discover
> > 



> > Now, when I run the above command it is erroring out with:
> > 
> > xcb_connection_has_error() returned true
> > Connection failure: Connection refused
> > pa_context_connect() failed: Connection refused
> > 
> 
> 
> When I want to debug pulse I do
> 
> echo "autospawn = no" > ~/.pulse/client.conf
> 
> kill PA and run it from the command line with the -v option; you can
> also use --log-level (man pulseaudio)
> 
> perhaps you can see what the problem is there. If not, it might be a
> dbus issue with permissions - check the dbus settings
> 
> Also, sometimes it helps to remove the ~/.pulse directory and restart
> pulseaudio.
> 

So this turned out to be a weirdie -- if I dropped the "sudo" my 
original command worked.

So now, suddenly from that update that started this thread, if I run the 
pactl command as an unprivileged user, it works fine. I have no idea why 
it changed but I'm just happy I have it working again.

Mark



Bluetooth audio problem

2019-03-03 Thread Mark Fletcher
Hello

Since upgrading to Stretch shortly after it became stable, I have had to 
execute the following after a reboot before being able to connect to 
bluetooth devices using the Gnome bluetooth applet:

$ sudo pactl load-module module-bluetooth-discover

Without that command, needed once only after each reboot, the Gnome 
applet is unable to connect to any bluetooth audio devices, eg my 
headphones to be used as an audio sink, or my iPhone to be used as an 
audio source. Once that command has been issued once, everything works 
as it should, and continues to do so until the next reboot.
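As an aside, the load can usually be made persistent in PulseAudio's 
startup script -- a sketch, assuming a stock configuration; the stanza 
mirrors what default.pa normally ships with:

    ### in /etc/pulse/default.pa (or a per-user copy under ~/.config/pulse/)
    .ifexists module-bluetooth-discover.so
    load-module module-bluetooth-discover
    .endif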

I've been away for a couple of weeks and so hadn't installed updates to 
my stretch installation for something like 3 weeks, until Saturday this 
week when I installed updates. Unfortunately I didn't pay enough 
attention to exactly what was upgraded but I _believe_ I saw udev in the 
list of things getting upgraded.

Now, when I run the above command it is erroring out with:

xcb_connection_has_error() returned true
Connection failure: Connection refused
pa_context_connect() failed: Connection refused

Googling for this has only turned up old information which does not seem 
to relate to the problem I am facing. In most cases the context is audio 
not working at all; in my case audio output through speakers plugged 
into the sound card is working fine, and a USB mic connected by a wire 
is working fine; the only problem is anything bluetooth.

Bluetooth on this machine is provided by a USB bluetooth dongle which I 
have been using for ages.

Can anyone suggest steps to diagnose?

TIA

Mark



Re: Can't scan new disk

2019-02-25 Thread Mark Allums

On 2/25/19 8:48 AM, Curt wrote:

On 2019-02-25, Mark Allums  wrote:




This is not satisfactory.  Surely there is a way to neutralize a running
gvfsd/fuse mount on a device without reinstalling the whole OS.

Mark



man gvfsd says:

  ENVIRONMENT
GVFS_DISABLE_FUSE
If this environment variable is set, gvfsd will not start the fuse 
filesystem.

So maybe something like

  GVFS_DISABLE_FUSE=1
  export GVFS_DISABLE_FUSE

might be of aid in your neutralization efforts.


Doesn't work.




Or perhaps simply:

  gconftool --type Boolean --set /apps/nautilus/preferences/media_automount false

If not, sorry, and my respects to Martha.



I don't use Gnome/Nautilus.  MATE and/or Xfce.

Mark





Re: Can't scan new disk

2019-02-25 Thread Mark Allums

On 2/24/19 2:26 PM, David Christensen wrote:

On 2/24/19 6:42 AM, Mark Allums wrote:
Any advice as to how to stop the auto-mounter, gvfsd, or fuse, etc. 
from tying up my disk, or how to get fsck to scan it?


Use an OS and/or desktop that do not have automatic mounting.  I use 
Debian Stable with Xfce.  One of the first things I do after 
installation is to disable automounting via Thunar (Xfce file manager).



If that does not work, make your own Debian console-only live USB stick: 
download the latest Debian Stable installer, burn it to media, connect a 
16+ GB USB 3.0 flash drive, wipe the flash drive, and power down. 
Completely disconnect all HDD's and SSD's (power and data cables for 
internal drives; USB, Firewire, etc., cables for external drives), leave 
the USB flash drive connected, boot the install media, start the Debian 
installer, and install to the USB flash drive; at the "Choose software 
to install" menu select "SSH server" and "standard system utilities" 
only. Finish the install, reboot into the USB flash drive, remove the 
install media, SSH in from another machine, connect your USB dock and 
disk, and trouble-shoot.



David



This is not satisfactory.  Surely there is a way to neutralize a running 
gvfsd/fuse mount on a device without reinstalling the whole OS.


Mark



Re: Can't scan new disk

2019-02-24 Thread Mark Allums

On 2/20/19 3:20 AM, Alexander V. Makartsev wrote:

Maybe something simple like "lsof" command can shed some light on 
this problem?

 $ sudo lsof /dev/sdb
 $ sudo lsof /dev/sdb1


root@martha:~# lsof /dev/sdb
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system 
/run/user/1001/gvfs

  Output information may be incomplete.
root@martha:~# lsof /dev/sdb1
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system 
/run/user/1001/gvfs

  Output information may be incomplete.
root@martha:~#

There you have it. The "lsof" command should not output anything if the 
examined object is not in use.
I assume that "/dev/sdb1" gets auto-mounted by gvfsd [1] for the user 
with UID 1001.
AFAIK GIO and company implement a different mounting scheme that does 
not involve traditional kernel mounting, and allow mounted devices to be 
restricted to the user who mounted them.

So even the root user can't access them if they are mounted by another 
user. Try the gio [2] utility to check the status of, and unmount, the 
"/dev/sdb1" device.

[1] man gvfsd
[2] man gio


I man'ed them, but got nothing useful for my trouble.  How does one 
stop gvfsd, or tell it not to mount anything (right now)?  I'm about 
mid-grade in Linux skill, and the care and feeding of daemons is a 
little above my pay grade.


root@martha:~# gio mount -u /dev/sdb1
gio: file:///dev/sdb1: Containing mount for file /dev/sdb1 not found
root@martha:~# gio mount -e /dev/sdb1
gio: file:///dev/sdb1: Containing mount for file /dev/sdb1 not found

Any advice as to how to stop the auto-mounter, gvfsd, or fuse, etc. from 
tying up my disk, or how to get fsck to scan it?
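On systemd-based desktops the gvfs volume monitors run as per-user 
services, so something along these lines may neutralize the automounter 
for the current session -- a sketch; the unit name is an assumption and 
can vary between releases:

    $ systemctl --user stop gvfs-udisks2-volume-monitor.service
    $ systemctl --user mask gvfs-udisks2-volume-monitor.service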


Mark



Re: Can't scan new disk

2019-02-20 Thread Mark Allums

On 2/20/19 3:20 AM, Alexander V. Makartsev wrote:

On 20.02.2019 11:16, Mark Allums wrote:

On 2/17/19 10:59 PM, Alexander V. Makartsev wrote:

On 17.02.2019 1:21, Mark Allums wrote:

On 2/16/19 2:41 AM, Curt wrote:

On 2019-02-15, Mark Allums  wrote:
I just bought a new backup disk, and I want to check it. It's 
mounted in

a USB dock.

Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried
rebooting to rescue mode, but that didn't help.

Mark


People sometimes recommend 'fuser' in cases like these in order to
identify processes that might be accessing the drive.

I mean, the message says '/dev/sdb1 is in use.' Perhaps it is indeed.

  fuser -v -m /dev/sdb1

Worth a try, maybe, as no one else seems to have suggested it.


root@martha:~# fuser -v -m /dev/sdb1
root@martha:~#

No results.  Thanks.

Mark

Maybe something simple like "lsof" command can shed some light on 
this problem?

 $ sudo lsof /dev/sdb
 $ sudo lsof /dev/sdb1


root@martha:~# lsof /dev/sdb
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system 
/run/user/1001/gvfs

  Output information may be incomplete.
root@martha:~# lsof /dev/sdb1
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system 
/run/user/1001/gvfs

  Output information may be incomplete.
root@martha:~#

There you have it. The "lsof" command should not output anything if the 
examined object is not in use.
I assume that "/dev/sdb1" gets auto-mounted by gvfsd [1] for the user 
with UID 1001.
AFAIK GIO and company implement a different mounting scheme that does 
not involve traditional kernel mounting, and allow mounted devices to be 
restricted to the user who mounted them.

So even the root user can't access them if they are mounted by another 
user. Try the gio [2] utility to check the status of, and unmount, the 
"/dev/sdb1" device.

[1] man gvfsd
[2] man gio



The disk is not mounted.




Re: Can't scan new disk

2019-02-20 Thread Mark Allums

On 2/20/19 3:19 AM, Curt wrote:

On 2019-02-20, Mark Allums  wrote:



Maybe something simple like "lsof" command can shed some light on this
problem?
      $ sudo lsof /dev/sdb
      $ sudo lsof /dev/sdb1


root@martha:~# lsof /dev/sdb
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1001/gvfs
Output information may be incomplete.
root@martha:~# lsof /dev/sdb1
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1001/gvfs
Output information may be incomplete.
root@martha:~#



From what I'm reading you'll have to be martha for this and not root
(*just this once*), if martha mounted.


martha is the name of the server.



From man mount.fuse:


  SECURITY
The fusermount program is installed set-user-gid to fuse. This is done to
allow users from the fuse group to mount their own filesystem
implementations. There must however be some limitations, in order to
prevent Bad User from doing nasty things.  Currently those limitations
are:

1. The user can only mount on a mountpoint for which it has write
   permission

2. The mountpoint is not a sticky directory which isn't owned by the
   user (like /tmp usually is)

3. No other user (including root) can access the contents of the
   mounted filesystem.




The disk is not mounted.





Re: Can't scan new disk

2019-02-19 Thread Mark Allums

On 2/17/19 10:59 PM, Alexander V. Makartsev wrote:

On 17.02.2019 1:21, Mark Allums wrote:

On 2/16/19 2:41 AM, Curt wrote:

On 2019-02-15, Mark Allums  wrote:
I just bought a new backup disk, and I want to check it. It's 
mounted in

a USB dock.

Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried
rebooting to rescue mode, but that didn't help.

Mark


People sometimes recommend 'fuser' in cases like these in order to
identify processes that might be accessing the drive.

I mean, the message says '/dev/sdb1 is in use.' Perhaps it is indeed.

  fuser -v -m /dev/sdb1

Worth a try, maybe, as no one else seems to have suggested it.


root@martha:~# fuser -v -m /dev/sdb1
root@martha:~#

No results.  Thanks.

Mark

Maybe something simple like "lsof" command can shed some light on this 
problem?

     $ sudo lsof /dev/sdb
     $ sudo lsof /dev/sdb1


root@martha:~# lsof /dev/sdb
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1001/gvfs
  Output information may be incomplete.
root@martha:~# lsof /dev/sdb1
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1001/gvfs
  Output information may be incomplete.
root@martha:~#




Also show us an output of "gdisk" command:
     $ sudo gdisk -l /dev/sdb


root@martha:~# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 23437770752 sectors, 10.9 TiB
Model: ST12000NE0007-2G
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 6AFF425F-836E-4001-840E-FF40A0875F53
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 23437770718
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)   Size       Code  Name
   1              2048    23437768703    10.9 TiB   8300
root@martha:~#

Mark




Re: Can't scan new disk

2019-02-17 Thread Mark Allums



Seagate IronWolf Pro 12 TB
Buster, Orico USB 3.0 Dock


Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.



Does e2fsck work when the drive is connected to an internal SATA port?


Hasn't been tried, as I was not aware there was a difference between 
USB SATA and internal SATA. 




The idea is to devise a process of elimination to see if the problem is 
the drive, the USB dock, Debian, etc..


1.  Try another drive using that dock and Debian.


8 GB seagate drive, same error.



2.  Try another connection using that drive and Debian.


Tried different dock, rated for 12 TB.  Same result.

Used gparted to delete the original partition, create a new one.  The 
following ensued:



GParted 0.32.0 --enable-libparted-dmraid --enable-online-resize
Libparted 3.2

Create Primary Partition #1 (ext4, 10.91 TiB) on /dev/sdb  00:00:00  ( ERROR )

  create empty partition  00:00:00  ( SUCCESS )
    path: /dev/sdb1 (partition)
    start: 2048
    end: 23437768703
    size: 23437766656 (10.91 TiB)

  clear old file system signatures in /dev/sdb1  00:00:00  ( SUCCESS )
    write 512.00 KiB of zeros at byte offset 0  00:00:00  ( SUCCESS )
    write 4.00 KiB of zeros at byte offset 67108864  00:00:00  ( SUCCESS )
    write 4.00 KiB of zeros at byte offset 274877906944  00:00:00  ( SUCCESS )
    write 512.00 KiB of zeros at byte offset 12000136003584  00:00:00  ( SUCCESS )
    write 4.00 KiB of zeros at byte offset 12000136462336  00:00:00  ( SUCCESS )
    write 8.00 KiB of zeros at byte offset 12000136519680  00:00:00  ( SUCCESS )
    flush operating system cache of /dev/sdb  00:00:00  ( SUCCESS )

  set partition type on /dev/sdb1  00:00:00  ( SUCCESS )
    new partition type: ext4

  create new ext4 file system  00:00:00  ( ERROR )
    mkfs.ext4 -F -O ^64bit -L '' '/dev/sdb1'  00:00:00  ( ERROR )
      mke2fs 1.44.5 (15-Dec-2018)
      /dev/sdb1 is apparently in use by the system; will not make a
      filesystem here!






3.  Try another OS using that drive and dock.



Win 10 with 2nd dock worked fine.  Not hard drive.  Not dock(s).  Seems 
to be Debian.  Systemd?


Mark




Re: Can't scan new disk

2019-02-16 Thread Mark Allums

On 2/15/19 9:12 PM, David Christensen wrote:

On 2/15/19 3:24 PM, Mark Allums wrote:
I just bought a new backup disk, and I want to check it. It's mounted 
in a USB dock.


Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried 
rebooting to rescue mode, but that didn't help.



On 2/15/19 5:37 PM, Mark Allums wrote:
 > ... Seagate IronWolf Pro 12 TB


What is the make and model of the USB dock, and is it rated for 12 TB?


It is an Orico, I don't know the model, and I was not aware there was a 
"rating" for disk sizes.








Does e2fsck work when the drive is connected to an internal SATA port?


Hasn't been tried, as I was not aware there was a difference between USB 
SATA and internal SATA.  Also, it would be highly inconvenient.  I'd 
like to try other possible solutions first.


Thanks,

Mark





I use HDD trays and mobile docks for my backup drives:

https://www.startech.com/HDD/Mobile-Racks/Black-Serial-ATA-Drive-Drawer-with-Shock-Absorbers-Professional-Series~DRW115SATBK 



I will consider this.





David



Thanks a lot,
Mark



Re: Can't scan new disk

2019-02-16 Thread Mark Allums

On 2/16/19 2:41 AM, Curt wrote:

On 2019-02-15, Mark Allums  wrote:

I just bought a new backup disk, and I want to check it. It's mounted in
a USB dock.

Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried
rebooting to rescue mode, but that didn't help.

Mark


People sometimes recommend 'fuser' in cases like these in order to
identify processes that might be accessing the drive.

I mean, the message says '/dev/sdb1 is in use.' Perhaps it is indeed.

  fuser -v -m /dev/sdb1

Worth a try, maybe, as no one else seems to have suggested it.


root@martha:~# fuser -v -m /dev/sdb1
root@martha:~#

No results.  Thanks.

Mark



Re: Can't scan new disk

2019-02-15 Thread Mark Allums

On 2/15/19 6:08 PM, deb wrote:


On 2/15/2019 6:24 PM, Mark Allums wrote:

umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1 




Just curious, is it a Western Digital disk?





No, a Seagate IronWolf Pro 12 TB



Re: Can't scan new disk

2019-02-15 Thread Mark Allums

On 2/15/19 6:21 PM, songbird wrote:

Mark Allums wrote:


I just bought a new backup disk, and I want to check it. It's mounted in
a USB dock.

Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried
rebooting to rescue mode, but that didn't help.


   sure you got the right disk?


100% sure.




   does it show up in the journal/log when you
unplug it and plug it back in?

   something may be grabbing it automatically perhaps
so check your fstab entry and make sure it isn't
being mounted automatically if you don't want it to
be done like that.  (is it appearing in /mnt ? ).

   check the label, uuid, partition table before doing
anything too serious to it (to make sure you really
do have the right device).

   i never have backup disks being mounted automatically
because i don't always even have them running.


   songbird



It isn't mounted.



Can't scan new disk

2019-02-15 Thread Mark Allums
I just bought a new backup disk, and I want to check it. It's mounted in 
a USB dock.


Running the following gives an error:

root@martha:~# umount /dev/sdb1
root@martha:~# e2fsck -c -c -C 0 -f -F -k -p /dev/sdb1
/dev/sdb1 is in use.
e2fsck: Cannot continue, aborting.

What's causing this and how do I fix it?  It's not MATE; I tried 
rebooting to rescue mode, but that didn't help.


Mark



Re: kmail - just a little problem

2019-02-14 Thread mark
On Wednesday, February 13, 2019 11:25:56 AM EST Hans wrote:
> Hi folks,
> 
> I am running into a little problem with kmail in plasma.
> 
> The problem is, that the column on the very left side (the one where the
> folders like "kmail-folder" are shown)  with the the definition "name" is
> very, very big (more than 4000 pixels wide). But I can not get it smaller,
> and there appeared a scrollbar below.
> 
> I found no way to get rid of this and get the column smaller. It cannot be
> shifted in any way. If I add the other columns like "unread", "size" and
> "general", those can be made smaller and wider.
> 
> Any idea, how this behaviour appeared and how I can get rid of this?
> 
> Thank you for reading this message and for any help.
> 
> Best regards
> 
> Hans

What kmail version are you using?  I can't replicate your problem with 
my setup and kmail 5.7.3.

What does your kmail look like on the screen?  I have a folder list 
"window" on the left, and on the right I have 2 "windows": the top is a 
message list, the bottom is a message preview.  I can resize the message 
list and the message preview by hovering the mouse over the vertical 
separator, then clicking and holding it and dragging it left or right to 
meet my needs.

I hope this helps,
Mark




Re: Why popular sites are looking ugly

2019-01-23 Thread Mark Allums


Debian is blazing fast and smooth on my box compared to Windows.

The one problem I have is font rendering on Facebook, Twitter and some
others.

Concerning fonts, I heard that there is some patent issue with freetype
which prevents autohinting. However, that seems not to be the case, as
some other sites have really nice looking fonts (debian.org for example).



Try installing ttf-mscorefonts-installer.

Mark Allums



Re: Session Recording

2018-12-27 Thread Mark Fletcher
On Fri, Dec 28, 2018 at 0:46 Ilyass Kaouam  wrote:

> Hi,
>
> Please, do you know of any open source tools that can record a session?
> Can FreeIPA do this?
>
> Thanks
>
Depends what you mean by session.

For a textual record of a series of commands and their output, as might
be useful over ssh, look into the "script" command.

For recording a graphical desktop including audio voiceover, I find
“simplescreenrecorder” to be very good.

Both are packaged for stretch.
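For example, a typical "script" invocation looks like this -- the file
name is illustrative:

    $ script -a ~/session.log
    Script started, file is /home/mark/session.log
    $ # ...run commands as usual; everything typed and printed is captured
    $ exit
    Script done, file is /home/mark/session.log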

Mark


Fwd: You removed Weboob package over pollitical reasons?Whole Internet laughs at you

2018-12-24 Thread Mark Fletcher
On Tue, Dec 25, 2018 at 7:56 Miles Fidelman 
wrote:

> Not for nothing...


Please don’t top post.

but I'd never heard of weboob before.  Looks like a
> rather powerful set of functions.  All the controversy has probably
> provided some much needed visibility.
>
> Personally, I don't care about the packaging - I tend to find that
> packagers tend to just muck things up.  For anything except the most
> common stuff, I'll always stick with >make;make install
>

In that case, why use Debian? The packaging (and the policies to support
and govern it) are what makes Debian, Debian. Might as well use LFS if
you’re going to make ; make install everything anyway.

(Not that make ; make install is in any way evil; it’s great when it’s
needed, it’s just not needed very much by users in Debian)

Happy Holidays to all.

Mark

With apologies to Miles for previously accidentally replying to him
directly instead of replying only on-list...


Re: internet outages

2018-12-22 Thread Mark Neidorff
On Saturday, December 22, 2018 11:34:27 AM EST Jude DaShiell wrote:
> Thanks for the script and the tool recommendation.
> Has Linux got a tool to check up on the router and find out if the
> router is doing its job?
> Before I mail logs into comcast, I want to make sure I've done all due
> diligence on this end so if comcast isn't having a problem they don't
> catch any undeserved heat.  A router replacement can be done if that's
> the source of these problems.  On more than one of these outage
> occasions I have used a stylus and rebooted the router to clear any
> potential malware just in case.

Rebooting the router will not clear any malware that is in NVRAM.  That 
is Comcast's job to diagnose and fix.

IMO, your due diligence consists of testing things from the router to 
your PCs.  So, the list of suspects is: the router, inside network 
wiring, inside-the-building electrical wiring, and the PC.  The most 
likely point of failure is the router in this case.  You can also try 
jiggling ethernet wires to see if you can find a problem there.

Mark


Re: internet outages

2018-12-22 Thread mark
On Saturday, December 22, 2018 10:16:58 AM EST Jude DaShiell wrote:
> Has Linux got tools that can run while a computer runs that can poll
> several sites and log internet outages?  I figure a minute down time is a
> failure and have experienced several of these where my wifi connection had
> to be deactivated and reactivated to have the internet connection
> restored.  This is a new wifi router too.  The log would be sent into
> comcast along with payment requesting credits for the down times.
> 
> 
> 
> --

From your description of the problem, it sounds like it is the router 
that is not doing its job properly.  Before you send your request for 
credit, make sure that it is a Comcast problem.

But, if you just want to test when the net is down:

Here is a script that you can run from a cron job which will log 
Internet status and store the results in a file in your home folder 
called net-test.txt:

#! /bin/bash
# append a timestamp, then the result of a single ping, to the log
date >> ~/net-test.txt
ping -c 1 google.com >> ~/net-test.txt

#end of file
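To run it every five minutes, a crontab entry along these lines would do
-- the path to the script is illustrative:

# crontab -e, then add:
*/5 * * * * /home/jude/bin/net-test.sh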



Re: Looking for a "friendly" e-mail service

2018-12-10 Thread mark
On Monday, November 26, 2018 9:37:21 AM EST Mark Neidorff wrote:
> 
> It is time for me to give the static IP back and stop being my own e-mail
> service.  I'm moving from my static IP to Verizon FIOS, but I don't think
> that really matters.
> 
> If you know of an e-mail service that allows me  POP3 and SMTP connections,
> would you please post it in a reply.

Thank you all for your suggestions.  I found that Ionos (used to be 1&1, 
for those with long memories) gives me the best bang for the buck.  They 
accept mail for my domain (so I don't have to resubscribe to 
everything), provide POP3 and IMAP access, and have a 2 GB limit on the 
contents of the mailbox (not on traffic) for $1 per month.  For $2 per 
month, they allow up to 20 user names under the same domain.  More 
storage is available for more money.  Their help was excellent, and the 
process of getting set up was easy.

Certainly they are not the only company to provide this service, but they suit 
my needs.  

Thanks to everyone,
Mark



Fwd: Recommendation for Virtual Machine and Instructions to set it up?

2018-12-06 Thread Mark Fletcher
Darn it, forgot to monkey with the headers when replying from gmail...
please see intended list reply below.

-- Forwarded message -
From: Mark Fletcher 
Date: Fri, Dec 7, 2018 at 8:19
Subject: Re: Recommendation for Virtual Machine and Instructions to set it
up?
To: 




On Fri, Dec 7, 2018 at 6:03 deloptes  wrote:

> rhkra...@gmail.com wrote:
>
> > What would you recommend for the software to run a VM under Jessie (that
> > would probably run Ubuntu), and can you recommend a fairly simple set of
> > instructions to first set up the VM, and then at least begin the install
> > process to that VM.
>
> Recently I am using headless or sometimes visual virtualbox. If you want it
> headless virtualbox is better. There are packages to download. I don't know
> if and which work on jessie.
> I do not think you need a backup if you install the packages.



I second this. I’ve been using virtualbox since around etch or so I think —
anyway a while. Since you’re on Jessie it should be in the repos. From
stretch on you need to add a repo to get it as it fell out of the Debian
repo. But it is still in Jessie — at least it was when Jessie was stable.

I have always found virtualbox surprisingly easy to set up and use — a lot
of things that as a noob I expected to be hard just weren’t. There’s a good
visual setup screen for creating new VMs and the documentation is quite
good as I recall.

The only thing I’ve never got working properly is 3D acceleration.

HTH

Mark


Re: issues with stretch, issue 2 from many

2018-11-30 Thread Mark Fletcher
On Sat, Dec 1, 2018 at 0:59 Greg Wooledge  wrote:

>
> Now, please answer the following questions:
>
> 1) What version of Debian are you running?
>
> 2) How do you log in to your computer?  If it's by a display manager
>(graphical login), which one is it?
>
> 3) How do you start the X window system?
>
> 4) How have you configured whichever dot files are relevant to #2 and #3?
>
> 5) What is the actual problem you are having?
>
>
> #2 IS CRITICALLY IMPORTANT and I have never yet seen you answer it.
> Maybe I missed it somewhere in this incredibly drawn-out and unfocused
> thread, but I don't think so.
>

He said he uses xdm.


Looking for a "friendly" e-mail service

2018-11-26 Thread Mark Neidorff
(I know this is not Debian specific, but I think it is useful info for the 
members of the list.)

Admittedly, I'm spoiled.  I've had a static IP and my own domain for nearly 15 
years.  I set up a mailserver which has run without missing a beat in all that 
time.

It is time for me to give the static IP back and stop being my own e-mail 
service.  I'm moving from my static IP to Verizon FIOS, but I don't think that 
really matters.

Now, I don't like the webmail interfaces and the limited storage for old 
emails that the big players (gmail, yahoo,etc) use.  I like to download and 
process the email locally using either kmail or thunderbird (doesn't matter 
which to me.  I have experience with both.)

If you know of an e-mail service that allows me  POP3 and SMTP connections, 
would you please post it in a reply.

Thank you for any suggestions,

Mark
-- 
Why are games that any fool can play the best sellers?



Re: selinux and debian squeeze 9.5

2018-11-03 Thread Mark Fletcher
> squeeze! You could be very lucky and someone with the same outdated,
> no longer supported distribution and experiencing the same problem
> comes along. I wouldn't count on it though.
>
> > Any suggestions?
>
> The obvious.
>

Speaking of obvious — the OP says 9.5, so presumably they _meant_ to say
Stretch — no?

Mark


Re: what is sitting on USB device?

2018-10-25 Thread Mark Copper
On Tue, Oct 23, 2018 at 12:51 PM Curt  wrote:
>
> On 2018-10-23, Mark Copper  wrote:
> >
> > yes, there is a gnome environment variable that can stifle the gvfs
> > monitors and I have done that. Nor do I see any trace of the modules
> > mentioned in the error message.
>
> I didn't know that you had done that.
>
> > so I thought I'd try to go back to first principles and ask how one
> > might discover what is already using the device.
> >
> >
>
> If 'mount' is too confused, you might try 'lsblk'.
>
> --
> "Now she understood that Anna could not have been in lilac, and that her charm
> was just that she always stood out against her attire, that her dress could
> never be noticeable on her." Leo Tolstoy, Anna Karenina

I haven't sorted all this out, but here are a couple of things, probably
not stated quite correctly:

Using MTP, "media transfer protocol", to access the camera, Chromium
OS never actually mounts the external device on the file system.

On boot Chromium OS launches an MTP daemon which claims the camera
when plugged in. By disabling this daemon, gphoto2 in Debian inside
Crouton chroot can access the camera. Whew!



Re: what is sitting on USB device?

2018-10-23 Thread Mark Copper
On Tue, Oct 23, 2018 at 11:28 AM Curt  wrote:
>
> On 2018-10-23, Mark Copper  wrote:
> > Trying to connect to a device, I get this error message:
> >
> > *** Error ***
> > An error occurred in the io-library ('Could not claim the USB
> > device'): Could not claim interface 0 (Device or resource busy). Make
> > sure no other program (gvfs-gphoto2-volume-monitor) or kernel module
> > (such as sdc2xx, stv680, spca50x) is using the device and you have
> > read/write access to the device.
> > *** Error (-53: 'Could not claim the USB device') ***
> >
> > On general Linux principles, how does one go about what is keeping the
> > device busy? How does one distinguish between "busy" and a permissions
> > problem?
>
> On the internets I glanced at a forum thread where someone opined that
> 'gvfs-gphoto2-volume-monitor' might get in the way of camera-like
> thingamajiggers:
>
>  ps aux | grep gphoto
>
> to see whether this theory is viable or not (and if it is, you know,
> close or stop or kill that gvfs puppy maybe).

yes, there is a gnome environment variable that can stifle the gvfs
monitors and I have done that. Nor do I see any trace of the modules
mentioned in the error message.

so I thought I'd try to go back to first principles and ask how one
might discover what is already using the device.



Re: what is sitting on USB device?

2018-10-23 Thread Mark Copper
On Tue, Oct 23, 2018 at 11:13 AM  wrote:
>
> On Tue, Oct 23, 2018 at 11:03:05AM -0500, Mark Copper wrote:
> > Trying to connect to a device, I get this error message:
>
> What are you trying to do while this error show up? How does it
> show up (e.g. desktop pop up, some log file...)?
>
> > *** Error ***
> > An error occurred in the io-library ('Could not claim the USB
> > device'): Could not claim interface 0 (Device or resource busy). Make
> > sure no other program (gvfs-gphoto2-volume-monitor) or kernel module
> > (such as sdc2xx, stv680, spca50x) is using the device and you have
> > read/write access to the device.
> > *** Error (-53: 'Could not claim the USB device') ***
>
> Things to try:
>
>   - Issue (on a terminal, as root or sudo) "dmesg | tail", a short while
> after having inserted the USB device.
>   - If the USB device poses as a storage device, issue "mount", to check
> whether something on your box (your DE, perhaps) has mounted the
> file system.
>   - Look in /var/log/messages and/or /var/log/syslog (or however these
> things are called, should your init system be systemd: I'm not
> qualified for that, others will chime in, I guess).
> Note that USB devices can pose as different things "at the same
> time".
>
> HTH
> -- tomás

The error is generated in response to this command:

$ gphoto2 --summary

The camera is recognized properly in dmesg. But it might be relevant
that Chrome OS sees it as a storage device, and it's important not
to treat the camera as a storage device if one wants to use the
computer to control the camera. However, I cannot see that the device
is actually mounted. (The output of "mount" has become so complicated
these days...)

I don't see either messages or syslog under the chroot.



what is sitting on USB device?

2018-10-23 Thread Mark Copper
Trying to connect to a device, I get this error message:

*** Error ***
An error occurred in the io-library ('Could not claim the USB
device'): Could not claim interface 0 (Device or resource busy). Make
sure no other program (gvfs-gphoto2-volume-monitor) or kernel module
(such as sdc2xx, stv680, spca50x) is using the device and you have
read/write access to the device.
*** Error (-53: 'Could not claim the USB device') ***

On general Linux principles, how does one go about finding what is
keeping the device busy? How does one distinguish between "busy" and a
permissions problem?

I can see that the system detects the device by, say, lsusb:
bus 001 device 007 ... Nikon

I haven't got anywhere with "lsof", but at that point the
specifics of this system may come into play (Debian 9 installed as a
Crouton target on a Chromebook).
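For what it's worth, the raw USB device node itself can be interrogated,
not just a mounted filesystem -- a sketch, using the bus/device numbers
lsusb reported:

    # ask who holds the device node open
    lsof /dev/bus/usb/001/007
    fuser -v /dev/bus/usb/001/007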

Any suggestions off-hand?

Thanks.



Re: kmail2 and TLS problem

2018-09-22 Thread mark
On Wednesday, September 12, 2018 3:54:11 AM EDT Hans wrote:
> Hi folks,
> after last update of debian/testing I got into a problem with TLS.
> 
> I can not get access to the mail servers running TLS. Also in the settings
> menu of kmail, I can not scan the server. Message: Server not reachable.
> 
> However, the server is reachable, as kmail-trinity is working fine.
> 
> This mail was sent via kmail-trinity.
> 
> As I do not know, if this is a bug or a local problem on my system:
> 
> Has anybody run into the same problem with the current kmail2 + debian/testing?
> 
> Thank you very much for any feedback!
> 
> Best regards
> 
> Hans

Hello Hans,

Is the mail server "yours"?  

If not, what do you mean that you can not get access to the mail server? 

Can you ping the mailserver?  Can you traceroute to the mailserver?  

Has the network configuration on your debian-testing box changed?

These are some preliminary questions.  Let's see where this goes.

My first guess is that there is some sort of routing problem in your 
setup.  Second guess is that there is a bug in debian/testing (that is 
why it is called testing).

Mark


Re: Debian installation

2018-09-02 Thread Mark Fletcher
On Sun, Sep 02, 2018 at 09:34:40PM -0700, Harold Hartley wrote:
> I had ordered myself a Debian dvd 9.5 and everything was installing
> great until it came to scan the mirrors. It seems I was not able to
> scan a mirror so that I would be able to apt an app, but I tried many
> mirrors and nothing. Can someone on here help me with a solution to get
> it to scan a mirror successfully. I ordered the dvd from a link from
> the Debian website and nothing is wrong with the dvd.
> --

Hello Harold, and welcome to Debian!

If there are _no_ mirrors available, that sounds to me more like a 
problem with your local network setup; most likely the network device 
in your computer is not recognised properly. By any chance is it WiFi? 
Some WiFi devices especially in laptops need proprietary firmware and 
don't work out of the box off the first DVD. There is a 
"firmware-included" installer that might have a better chance of working 
in that case. I wouldn't _generally_ expect that to be an issue if you 
are connected by a wired ethernet connection, but wouldn't totally rule 
it out.

If you can give us more information about your hardware and how you 
access the internet, the community should be able to help more.

Mark



Re: Confused by Amanda

2018-09-02 Thread Mark Fletcher
On Sun, Sep 02, 2018 at 11:46:44AM -0400, Gene Heskett wrote:
> On Sunday 02 September 2018 06:27:01 Dan Ritter wrote:
> 
> > Amanda is not good for the situation you describe.
> 
> No, it's not ideal in some cases, which is why I wrote a wrapper script 
> for the make-a-backup portions of amanda. With the resources and configs 
> that existed at the time that backup was made actually appended to the 
> end of the vtape, pretty much an empty-drive recovery is possible. It 
> appends the /usr/local/etc/amanda/$config 
> and /usr/local/var/amanda/record_of_backups to each vtape it makes. So 
> those 2 files can be recovered with tar and gzip, put back on a freshly 
> installed linux of your favorite flavor, and a restore made that will be 
> a duplicate of what you had last night when backup.sh was run.
> 

Thanks Gene, I was hoping you would pipe up but didn't want to throw the 
spotlight on you if you weren't inclined to. This is exactly what I'm 
after so I will definitely check it out.

Thanks also to Dan and Jose; I can see what you mean, and it makes much 
of the Amanda documentation make more sense now. But as I mentioned, my 
configuration currently isn't an end state: I'm planning to expand it 
to cover other machines on my network, at which point Amanda will make 
more sense. I get the concept of two Amandas, one to back up the Amanda 
server of the first, but then you're into a "turtles all the way down" 
scenario, aren't you? It just seems overkill when one Amanda can look 
after its own server as well, albeit with some jiggery-pokery which Gene 
has kindly cast light on.

So I think we can agree, Amanda's expected usage model is ideally for 
situations where there are multiple machines to back up, you designate 
one machine the Amanda server (presumably the one with the easiest / 
fastest access to the backup media) and accept that that machine needs 
special, usually separate, arrangements for _its_ backup. But it's 
_possible_ with attention to the right details such as things Gene has 
pointed out, to include the Amanda server machine itself in the backup.

Thanks all, especially Gene for, I suspect, saving me a lot of work.

Mark



Confused by Amanda

2018-09-02 Thread Mark Fletcher
Hello

I use Amanda for daily backups on Stretch. I found it not too difficult 
to set up once I got my head around its virtual tape concept.

Recently, prompted by not very much, I have started to question whether 
having these backups really puts me in a position to restore the machine 
if I need to.

I recently messed up some files and decided to resort to the backup to 
recover them. I was able to do so, but the process left me wondering if 
I would really be in a position to do so in all cases. For example, 
Amanda configuration is in /etc/amanda -- what if /etc was what I needed 
to restore? Similarly, I gather there are files under /var/lib/amanda -- 
what happens if /var is damaged?

I have not been able to work out from the Amanda documentation all 
that I need to have in place to be able to recover from, say, a disk 
replacement after catastrophic failure. I'm imagining: the main 
disk goes to data heaven, I buy a new one, install Stretch again fresh, 
and now I want to re-install packages and restore their backed-up 
configuration as well as restore my data in /home etc. I know there are 
a few experienced users of Amanda on this list -- can anyone help me, or 
perhaps point me to a good resource that explains it, or even if there's 
a section in the documentation I've missed that makes it clear?

I guess a key point is that, in my configuration, the same machine is 
both Amanda server and Amanda client. I may expand this in the future 
to have this machine manage backups for other machines, but at the 
moment that is not happening. Of course, the disk that houses the Amanda 
virtual tapes is off-machine. 

What I'm looking for is something along the lines of "your nightly 
backup routine needs to be: run amdump, then rsync this, this and this 
directory somewhere safe", or whatever it is. Or alternatively, "don't 
be an idiot, you don't need to do any of that; amanda is magic in this, 
this and this way".

Thanks

Mark



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 23:25, Michael Stone wrote:
> On Tue, Aug 28, 2018 at 10:24:51PM +0100, Mark Rousell wrote:
>> If you have a bunch of users on remote SMTP and NNTP servers then it's
>> always a wash. (MUAs don't typically download the entire message body
>> unless asked to, just as news readers don't typically download the
>> entire message body unless asked to.) Basically, the efficiency
>> argument is bogus.
>>
>> I can only say that I disagree that the bandwidth efficiency is bogus
>> overall.
>>
>> If by "remote [...] NNTP servers" you mean other NNTP servers that are
>> federated with your own, then this is surely a bandwidth saving
>> compared to email. I.e. the data only needs to be sent once to the
>> remote NNTP servers for local distribution to users who connect to
>> those servers, thus reducing bandwidth usage overall.
>
> No, because the idea of having ISPs set up NNTP transit servers for
> individual small discussion groups is... unlikely at best.

Well, I expect you're not going to like me saying this but isn't
expecting ISPs to do this more Usenet-style thinking? People don't
expect ISPs to run forum servers or mail list servers, so why should
they expect them to run NNTP servers for private discussion groups?

> so stop talking about 'a generic architecture of NNTP transit servers
> that isn't usenet but is still open to arbitrary groups and users but
> doesn't have any of the problems of usenet because it's carefully
> controlled and thus doesn't have abuse issues but isn't prohibitively
> resource intensive and people will set up serves and join the network
> because nobody likes stupid old HTTP anyway'

I have at no stage advocated a "generic architecture of NNTP transit
servers".

I have at no stage advocated any NNTP servers being "open to arbitrary
groups", other than those created by group owners.

I have at no stage suggested that it wouldn't have abuse issues, only
that abuse issues can be handled just as they are right now on any
non-federated NNTP server, on any mail list, on any web forum, or similar.

Indeed, NNTP is not prohibitively resource intensive when used as a
private discussion group protocol. You got that bit right.

I have at no stage suggested that people necessarily would want to set
up their own servers (although in principle they could). This sort of
thing is more likely to be run as a service, just as mail servers, mail
list services, and web forums often are at present.

And I have at no stage suggested that people don't like HTTP.

What I have been talking about (since I mentioned I'll be working on
NNTP) is implementing NNTP as an alternative access methodology to
message resources accessible via other means as well. There's more to
the project than that but this is the aspect that seem related to this
thread.

> and instead start talking about something that's likely to be
> implemented: centralized NNTP gateways to the services most people
> will use via SMTP or HTTP, with NNTP client access to the gateway.

This sounds a bit like trying to reinvent Usenet. It's not going to
happen that way.

> You might see people create private transit servers for local access,
> but the number of clients using such servers instead of the primary
> one would suggest de minimis bandwidth savings. If anything, the
> private transit servers would end up like most private debian archive
> mirrors and consume more bandwidth than they save (because most of the
> transferred files never get used). And in this model, any putative
> "transfer efficiency of NNTP" is simply not compelling.

Quite possibly. Although we've discussed the bandwidth efficiency of
NNTP at excessive length, it's admittedly not the primary driving
motivation for this work.

But, all the same, if bandwidth efficiency is an issue then I'd say that
NNTP is good for this scenario. YMMV of course, and that's fine.

>
> It's so incredibly uncommon to find a REST based discussion forum that
> doesn't come with its own HTML UI that I don't consider it worth
> considering.

Fine, so you prefer web UIs, if I understand you correctly.

> So the "broad client support" in question would be a web browser and
> that basically includes everything. Welcome to the 21st century!

Except that web browsers accessing web forums in the 21st Century don't
do everything. They can't. Other tools do some things better. That's
rather the point. There are other tools that bring other capabilities to
the table. For example, some of these tools are the NNTP and SMTP and
IMAP protocols, and these can be accessed by mail clients and NNTP
clients. These protocols and clients facilitate users who prefer these
tools to do things like choose their own client apps and control their
own UI/UX.

Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
I was going to say that you and I have started going round in circles
and should just agree to disagree about certain things but this is a
different strand of the discussion that still seems to be advancing.

On 28/08/2018 20:01, Michael Stone wrote:
> On Tue, Aug 28, 2018 at 04:46:15PM +0100, Mark Rousell wrote:
>> web forums, app-based, IM-style, etc.) but none of that, to my mind,
>> lessens
>> NNTP's ideal applicability to getting private discussion group
>> messages from
>> place to place (the front end UI/UX being a different thing again).
>
> Ignoring the changes to user requirements for UI/UX is at least part
> of why NNTP is no longer a major factor in internet usage.

I agree to a considerable extent and I don't think I advocated ignoring
changes to user requirements for UI/UX. When I say "the front end UI/UX
being a different thing again" I mean that it's a different discussion,
not that a designer should ignore it.

And making NNTP available as an access method (certainly not a sole
access method) is not ignoring it.

>
>> The key advantage of NNTP over email/SMTP in terms of bandwidth
>> efficiency is
>> that, with NNTP, messages are only sent to users when they explicitly
>> ask for
>> them. This is more bandwidth-efficient than an email-based list since
>> the
>> email-based list must send out messages to each and every user
>> whether or not
>> they want them.
>
> It's more efficient at the provider level until someone decides they
> want all of the messages local, either as an archive or to run local
> search tools, etc. At that point you're transferring all of the
> messages just as you would via SMTP and it's basically a wash.

That's fine. Not everyone wants that but, for those who do, it's
certainly no worse than email. So I don't see it as a problem.

> If you have a bunch of users on remote SMTP and NNTP servers then it's
> always a wash. (MUAs don't typically download the entire message body
> unless asked to, just as news readers don't typically download the
> entire message body unless asked to.) Basically, the efficiency
> argument is bogus.

I can only say that I disagree that the bandwidth efficiency is bogus
overall.

If by "remote [...] NNTP servers" you mean other NNTP servers that are
federated with your own, then this is surely a bandwidth saving compared
to email. I.e. The data only needs to be sent once to the remote NNTP
servers for local distribution to users who connect to those servers,
thus reducing bandwidth usage overall.

> the bandwidth in question is so small as to not matter anyway.(We call
> this "premature optimization"; there are other concerns that are far
> more significant--like the UI/UX.) The entire 25 year archive of
> debian lists is probably on the order of one or two netflix movies.

Whilst I agree that UI/UX are important to users (which of course is
exactly why many prefer to receive their messages via NNTP, so that they
can control their UI/UX), it is not necessarily the case that bandwidth
is always small. The size of a static archive is not the same thing as
the bandwidth consumed in distributing messages.

>
>> To my mind, a REST protocol has a different (but overlapping) use
>> case to NNTP
>> or email lists. I know of no standard, open REST protocol that
>> replaces either
>> NNTP or email discussion lists, for example.
>
> But there are a heck of a lot more deployed REST clients than NNTP
> clients.

I know you know this but I'll say it anyway: REST isn't a single
protocol, it's just a type of protocol. There are loads of REST-based
protocols around. Which one do you choose? There are no standardised
REST protocols for message distribution that I am aware of. There's
nothing REST-based that is like SMTP, or POP3, or IMAP, or NNTP, or
anything else that has broad client support across a range of device
types in this context.

Or is there? Have I overlooked anything?

Furthermore, there's no point saying that there are other protocols when
those other protocols do not and cannot address the use case that one
particular protocol, NNTP in this case, can and does address.

I am not saying that REST does not have its place. It's just that NNTP
(alongside other protocols and types of protocol, both standardised and
proprietary) fulfils a use case that currently commonly goes unmet, and
one for which there is in fact demand.

> You can shake your fist at the cloud all you want, but reality is what
> it is. Consequently, a good experience for HTTP consumers is going to
> be a higher priority than NNTP users.

There are many possible priorities, and different businesses and service
providers have different customer bases with different sets of
priorities. I've not said that a web browser-based UI is unimportant (in
fact I think you'll note I've said that it is important for most users

Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 19:33, Mark Rousell wrote:
> And ISPs' historical problems Usenet's massive bandwidth due to
> binaries does not change the fact that NNTP is very good for message
> distribution.

Missing "with" in the above.

-- 
Mark Rousell
 
 
 



Re: [OT] Best (o better than yahoo) mail provider for malinglists

2018-08-28 Thread Mark Rousell
On 28/08/2018 19:08, Miles Fidelman wrote:
> I would suggest looking for somebody who runs Sympa.
>
> Open source, well supported, more "industrial strength" than Mailman
> (designed for universities, supporting lots of lists).
>
> I've been running it on our servers, for at least a decade (who's
> counting) - it's rock solid, well supported by both a core team (at
> Renater - the French Research & Education Network), and a larger
> community.  (For example, a patch for DMARC came out almost
> immediately.  It took a lot longer for a mailman patch to show up, and
> even longer for it to make it into the standard release).  Also, Sympa is
> built around a database, mailman isn't - makes a difference for folks
> running multiple lists.  Lots more things that can be customized.
>
> There's a list of hosting providers at
> https://www.sympa.org/users/custom - but they're mostly in France. 
> You might have to do a little hunting - or post on the sympa users list.
>
> There's also Groupserver (http://groupserver.org) - a rather
> interesting package that does a good job of melding traditional lists,
> with a web-based forum interface.  It's open source, with hosting
> available - from a small group in New Zealand.  It has a bit of
> traction in the "electronic democracy" community.

If I understand correctly, I think that Francesco was asking for a good
(free) email service provider at which he could receive emails from mail
lists, rather than a mail list provider.

Nevertheless, thanks for your mail list software suggestions. I've heard
of Sympa but never seen them described in the manner you did here. And I
am sorry to say that I had never heard of GroupServer before. Thanks for
the useful information.


-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 19:23, Miles Fidelman wrote:
> On 8/28/18 1:48 PM, Michael Stone wrote:
>
>> On Tue, Aug 28, 2018 at 05:02:08PM +0100, Mark Rousell wrote:
>>> Lots of people download files from FTP servers but that's a wholly
>>> different
>>> culture and use case than Usenet provided for in practice. And who
>>> said that
>>> binaries (whether legal or illegal) was not a big part of Usenet at
>>> its height?
>>
>> Anyone who argues that NNTP is the most efficient thing around? I
>> guarantee that for large files FTP is more efficient, and that when
>> one person is sending a file to a small number of other peopl, FTP is
>> dramatically more efficient. I guess NNTP binary distribution is more
>> efficient in some theoretical world where exactly the right
>> subscriptions are distributed to exactly the right people via local
>> transit servers, with no reposts. We can probably just write the
>> volume of such transfers off as noise in the real world.
>
> NNTP is exceptionally efficient for large scale message distribution -
> when compared to, say, a mailing list server that sends a message per
> subscriber.

Indeed.

And ISPs' historical problems with Usenet's massive bandwidth due to
binaries do not change the fact that NNTP is very good for message
distribution.


-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 18:48, Michael Stone wrote:
> I guarantee that for large files FTP is more efficient, and that when
> one person is sending a file to a small number of other people, FTP is
> dramatically more efficient.

I am sure. But it still doesn't make FTP meaningfully comparable to
Usenet or NNTP in the context of this sub-thread discussion.

> I guess NNTP binary distribution is more efficient in some theoretical
> world where exactly the right subscriptions are distributed to exactly
> the right people

I can only point you to the world as it actually stood where binary
distribution (for certain types of binary for a certain type of user
base) via Usenet was outstandingly common at one time (which you of
course know). FTP just wasn't a feasible candidate protocol for that
particular use case. As such, yes, NNTP was efficient enough. As I said
when I entered this sub-thread (with added comment in square brackets):

NNTP was inefficient in this regard compared to what other protocol
or protocols, exactly?

Compared to email? Well, email suffered from very similar issues
transferring binaries.

Compared to DCC over IRC? (DCC being a then-popular one-to-one
alternative to Usenet's one-to-many distribution model). I must
admit that I've never examined the details of the DCC protocol but
it is certainly inefficient in terms of /user experience/ compared
to Usenet over NNTP: In practice DCC was essentially synchronous,
one at a time, needing continuous user management whereas Usenet
facilitated a time-efficient asynchronous access mechanism for the
end user without continuous management.

So what one-to-many distribution platforms or protocols existed in
this timeframe against which to compare NNTP (or Usenet)?

I perhaps should have asked "NNTP was inefficient in this regard
compared to what other *relevant *protocol or protocols, exactly?".

You have observed, quite correctly of course, that FTP is a more
bandwidth-efficient protocol that was available in the timeframe under
discussion for binary file transfers but the fact nonetheless remains
that FTP did not and does not fulfil the particular mass volume and mass
user numbers one-to-many use case to which Usenet was put at that time.
FTP did not and does not have the federated, distributed, public access
nature that Usenet provided and that led to its success in this context.

Sure, Usenet became impossible for ISPs to cope with due to the volume
of binaries groups. But, from a user experience perspective, it was very
efficient indeed (for reasons I enumerated in other messages) for the
job it ended up being used for. And it was not significantly more
bandwidth-inefficient than any other suitable or relevant system or
protocol because, at the time, there were no other systems or protocols
that could really fulfil the Usenet use case.

FTP, despite more bandwidth-efficiently allowing binary transfers of
course, still did not fulfil the same use case.

Anyway, this part of the discussion is just more about Usenet history.
It has nothing to do with NNTP in a discussion group context which is
why I initially commented in this thread.

> via local transit servers, with no reposts. We can probably just write
> the volume of such transfers off as noise in the real world.

You seem to be again conflating Usenet's issues relating to huge
bandwidth due to mass distribution of binaries with the completely
different use case of NNTP that is the subject of this thread.



-- 
Mark Rousell
 
 
 



Re: [OT] Best (o better than yahoo) mail provider for malinglists

2018-08-28 Thread Mark Rousell
On 28/08/2018 17:12, Francesco Porro wrote:
> Ciao,
>
> As a member of this mailing list, I have a little (OT) question for you:
> which is the best free email service around to receive mailing lists?

I cannot personally recommend any free, proprietary email service providers.

Instead I'd say that running your own mail server would be best for
this, assuming you have some kind of always-on connection with a static
IP you can utilise.

Although incoming spam is a potential problem, the real difficulties with
running your own mail server in my opinion are (a) maintaining
deliverability of outgoing mail and (b) making sure you're not relaying
spam. Keeping software and configuration up to date is important.
However, in the sort of scenario you describe, you might not need to use
your mail server for outgoing mail which could simplify things. Ideally
you could use your ISP's or domain provider's mail server for outgoing
mail whilst directing incoming mail for your domain to your own server.
(I should add that using your own domain is always wise, rather than
relying on service providers' email addresses).
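
To make that split concrete (a sketch only, with invented names: the
domain example.org and the address 198.51.100.10 are placeholders), the
DNS for your domain would direct incoming mail to your own machine while
your MTA relays outgoing mail through the provider:

; hypothetical zone fragment for example.org
example.org.       IN  MX  10  mail.example.org.   ; incoming mail to your server
mail.example.org.  IN  A   198.51.100.10           ; your static IP

(The corresponding outgoing piece is your MTA's "smarthost" or relayhost
setting, pointed at the provider's server.)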

Learning how to do all this could involve a learning curve but it's
entirely feasible.

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 15:27, Michael Stone wrote:
> On Tue, Aug 28, 2018 at 02:52:36PM +0100, Mark Rousell wrote:
>> Except for perhaps hacked servers in some cases, FTP never did have
>> much of a
>> part to play in binaries distribution from what I could see.
>
> I guess you didn't use debian? Or are we only talking about the
> illegal content that I thought wasn't the reason usenet is important.
> It's so hard to keep track of what the point was supposed to be.

Oops, I meant to reply to this part.

Lots of people download files from FTP servers but that's a wholly
different culture and use case than Usenet provided for in practice. And
who said that binaries (whether legal or illegal) was not a big part of
Usenet at its height? I certainly said no such thing and nor did you, as
far as I could see.


-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 15:27, Michael Stone wrote:
> I will not bother to reply to the rest of the long discussion of
> usenet, since I don't want to be accused (again) of "incorrectly"
> talking about usenet instead of NNTP by someone who wrote a long
> message about usenet.

Note that I could not refute your apparent conflation of Usenet and NNTP
without writing about Usenet. Admittedly, I also got drawn into a
side-discussion of Usenet history with you.

Anyway, I agree that it is difficult to remember who said what (or,
perhaps more importantly, *why* they said it) when a thread gets as
convoluted as this one.

We can agree to agree about that which we agree about, and to disagree
about that which we disagree about. :-)

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 14:52, Mark Rousell wrote:
> Additionally, both FTP and HTTP are not federated, many-to-many
> services or systems. I say again that Usenet was unique in this
> timeframe for the use case of public access, one-to-many, binary
> distribution.

The above is not complete. I meant to write this:-

Additionally, both FTP and HTTP were not and are not federated,
one-to-many services or systems in the way that Usenet was (and is). I
say again that Usenet was unique in this timeframe for the use case of
public access, one-to-many, binary distribution.


-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
ong because they were possible (not that
there was, initially, demand for them). Now such secondary features are
expected by many classes of user. But, despite this, NNTP and web forums
(and other access methodologies) are not mutually exclusive.

To be clear, the kinds of users who choose to access a discussion group
of some sort via NNTP or email (one that most users access as a web
forum) are unlikely to care that they don't see a Like button. On the
other hand, any modern NNTP or mail client can in fact show them a
clickable Like button that actually works (unless they've chosen to
disable complex HTML, which is fine of course).

> As much as greybeards don't understand why the like button matters,
> it's really hard to convince actual users these days that the reason
> you don't have one is that they don't need it and you know better than
> they do.

I wouldn't attempt to do such a thing. :-) If I was designing a
discussion environment from scratch, there's no way I'd design it
without a web front end as one possible access method. But one should
not ignore the fact that the greybeard population is a large one and
actually is getting larger.

Even as the percentage of technical users on the Internet continues to
reduce as a proportion of the Internet population as a whole, the
absolute number of technical users who would like or might like
standardised access methods and tools grows. Their voices may be diluted
but are no less real. As individuals, they may prefer to use different
UIs and different access methodologies from different places: A
Tapatalk-like UI on their mobile device (speaking REST to the back end),
a NNTP read/write feed on their preferred main Usenet/NNTP client, or an
email read/write feed on their preferred main email client (accessed on
mobile and/or desktop), and perhaps a web browser UI for occasional
quick access from a different form factor device. All of these methods
can and should co-exist, in my opinion (ideally speaking).

> There are a lot of other usability issues that basically boil down to
> the change in user behavior from using a smart client at one location
> (possibly telnetting/sshing into there from elsewhere) to using a
> variety of dumb clients to access centralized resources. Protocols
> that transfer dumb messages but allow a lot of endpoint customization
> are great in the smart client model but tend to not have a good user
> experience in the dumb client model. (And vice versa--if you have the
> ability to highly customize a specific workstation, it's frustrating
> to not be able to.) Given demographic and technological trends,
> ignoring the dumb client paradigm is short sighted at best.

I agree. Nothing I've said should be taken to mean that I ignore the
dumb client scenario. But I also recognise that there are many use cases
with many types of user.

> This is exactly what I was talking about earlier. Yes, it's
> theoretically possible. But for a variety of real-world reasons, large
> scale NNTP-accessible web forums haven't succeeded. (If you look, you
> can find a lot of attempts that never really gained much traction.) It
> was more practical in the tapatalk case to invent a new protocol than
> to use NNTP. I'd suggest that it's worth understanding why rather than
> just insisting that everybody is wrong and NNTP is the answer that
> they just haven't figured out.

Indeed, but our discussion has ranged in various directions and is
beginning to conflate issues somewhat. I'm not pitching NNTP as a
replacement in this sort of scenario.

In the original context of this sub-thread, that is access to discussion
groups like this one, NNTP has advantages. This is its ideal use case
alongside email. It is the fact that both email and NNTP are
standardised that appeals to users in this context, as well as the
cleanness and efficiency of these protocols compared to web
browser-based access or other UIs or access methodologies.

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 13:16, Mark Rousell wrote:
>
> Footnote:-
> 1: A more recent example of a very similar skewed and confused view of
> things is the Casio F-91 watch. Certain elements of US intelligence
> had noticed that many terrorist suspects arrested in Iraq were wearing
> the Casio F-91W watch model. The intelligence reports extrapolated
> this apparent correlation to suggest, amongst other things, that the
> watch was chosen because its alarm capabilities allowed an alarm to be
> set more than 24 in the future (in fact that particular model allows
> no such thing, although some other Casio models do). In truth, the
> Casio F-91W model was and still is popular with third world terrorist
> suspects because it is (a) very cheap, and (b) it is produced in
> greater numbers than any other watch model in the world. I.e. Lots of
> people in third world countries wear Casio F-91Ws, not just
> terrorists. And yet the intelligence people were ignorant of the wider
> popularity of the F-91W and extrapolated incorrectly from the limited
> (skewed) data set of which they were aware. Similar errors of limited
> vision, confusion, and skew were made in the timeframe we're
> discussing here by some people running training course for professionals.

I just noticed a typo in the above. Here is the corrected version:

A more recent example of a very similar skewed and confused view of
things is the Casio F-91 watch. Certain elements of US intelligence had
noticed that many terrorist suspects arrested in Iraq were wearing the
Casio F-91W watch model. The intelligence reports extrapolated this
apparent correlation to suggest, amongst other things, that the watch
was chosen because its alarm capabilities allowed an alarm to be set
more than 24 hours in the future and so it would be useful as a bomb
timer. (In fact that particular model allows no such thing although some
other Casio watch models do). In truth, the Casio F-91W model was and
still is popular with third world terrorist suspects because it is (a)
very cheap, and (b) it is produced in greater numbers than any other
watch model in the world. I.e. Lots of people in third world countries
wear Casio F-91Ws, not just terrorists. And yet the intelligence people
were ignorant of the wider popularity of the F-91W and extrapolated
incorrectly from the limited (skewed) data set of which they were aware.
Similar errors of limited vision, confusion, and skew were made in the
timeframe we're discussing here by some people running training courses
for certain types of professional.



-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 13:55, Michael Stone wrote:
> On Tue, Aug 28, 2018 at 01:16:45PM +0100, Mark Rousell wrote:
>> NNTP was inefficient in this regard compared to what other protocol or
>> protocols, exactly?
>
> FTP and later HTTP, which handled binaries efficiently. In fact, one
> was even named in a way to suggest it was a good way to transfer
> files. :)

HTTP came later and wasn't relevant in the timeframe to which you referred.

Additionally, both FTP and HTTP were not and are not federated,
one-to-many services or systems in the way that Usenet was (and is). I
say again that Usenet was unique in this timeframe for the use case of
public access, one-to-many, binary distribution.

Except for perhaps hacked servers in some cases, FTP never did have much
of a part to play in binaries distribution from what I could see.

I think it was file sharing P2P protocols that eventually reduced
people's preference for Usenet coupled with (as you say) ISPs' great
difficulty in continuing to support Usenet servers.

> Yes, academic, commercial ISP, and paid subscription servers. I also
> had some insight into what it took to keep the servers running, not
> just the user-side view... I followed a number of text groups, until
> the signal to noise ratio got low enough to make it not worth the
> effort.

I too was a Usenet user in that timeframe and worked for an ISP at the
time, although not directly on Usenet/NNTP servers.

> You seem to have an overly idealistic view of the level of logging on
> most news servers 20 years ago.

You mean about the same amount of logging as on mail servers, FTP
servers, or anything else at the time, then?

Sure, there wasn't much logging in practice. I didn't say there was. I'm
not being idealistic. I am simply observing that, logging or not,
accessing a NNTP server did not hide one's IP address any more then than
it does now. Indeed, despite greater legislation-mandated logging in
many countries, the technical opportunities to access a server of
potentially any type in a genuinely anonymous way are much greater now
than they were back then due to widespread availability of VPN services!

I therefore do not agree that anonymity was a primary driving factor for
the use of Usenet for one-to-many distribution of binaries (although I
don't doubt that the essentially false idea of anonymity may have
influenced many less-expert users). I'm not being idealistic about the
amount of access logging that went on when I say this; I am simply being
pragmatic. I am being pragmatic because Usenet was simply the only
widely available, worldwide, federated, public system available to
distribute data (especially binaries) in a one-to-many manner. Other
systems or protocols such as FTP just couldn't do what Usenet could do
back then.

> Also, for the record, I don't think I ever had a "training course" on
> usenet.

That's good. :-)

> As far as being wrong...if LE seized an anonymous FTP server
> distributing illegal content and either reviewed its logs or monitored
> its link they could get a list of each IP that accessed content.

And the very same applied to a NNTP server attached to Usenet (or a
standalone NNTP server for that matter). It was and is no more difficult
for a NNTP server than for a FTP server, or an email server, or anything
else.

> There is no central point from which you can see who accessed usenet
> content.

But why would you expect there to be? It's a federated system. If you
were expecting such a thing then you were expecting the wrong thing.

One might also observe that looking for who accessed Usenet content is
surely a waste of time. If one is interested in preventing distribution
of illegal data of some sort then the primary concern is the sender, and
the sender was not anonymous with NNTP (regardless of the existence of
logs or not). Remember, the point here is one-to-many distribution, and
it is the sender that law enforcement should surely be interested in.

> The bottom line is that for a period of time, usenet was the easiest
> way to obtain certain illegal content. There were certainly overblown
> reports that usenet was nothing but illegal content, and it's
> certainly possible to transfer illegal content via other protocols,
> but it's naive and/or disingenuous to pretend that usenet didn't have a
> problem.

Oh I agree with you on this about Usenet.

But:
(a) I don't blame NNTP for this since it is not responsible for Usenet's
problems that ultimately derived from its massive scale, not its
protocols. The problems occurred as a result of the fact that Usenet was
and is a massive, worldwide, publicly federated one-to-many distribution
system. I.e. It was custom made (without its creators even realising
it!) for large scale distribution of binaries, whether legal or illegal.

(b) I dispute that Usenet had any real, reliable anonymity (although I
accept that some people may have erroneously believed it did). I have no
idealism or inf

Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
There is nothing about NNTP that requires it to be connected to Usenet.
You lose nothing whatsoever in the context under discussion here: That
is private discussion groups like this mail list. Sharing with Usenet
adds nothing whatsoever to this scenario.

Furthermore, you characterise NNTP as "kinda overweight for the simple
problem of transferring a few text messages in a client/server fashion"
and I could not disagree more. NNTP is ideally suited to sharing
messages in a client/server fashion. As I have observed, it is more
efficient in this regard (in terms of bandwidth-efficiency as well as
management efficiency) than email lists and it is at least as efficient
in these terms (and likely more bandwidth-efficient) than web forums.

> But this isn't a thing that's done much; the NNTP protocol is both too
> complicated and also too lacking in functionality that modern
> discussion group members look for.

It's not done much because NNTP (and, to a slightly lesser extent,
email lists) has fallen out of favour as web forums have increased in
popularity. However, as many members of this list have
observed, NNTP is still, for those who like it, an ideal method of
accessing this kind of discussion.

I am surprised to see you say that NNTP is "too complicated". In the
medium term future I'll have to implement a NNTP server and, having
looked at the RFCs, it doesn't look too bad. Have you experience of
implementing NNTP?
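
For what it's worth, the wire protocol itself is a simple line-based
command/response exchange. A minimal session might look like this (a
sketch: the server name, group, and article numbers are invented):

S: 200 news.example.org NNTP server ready (posting allowed)
C: GROUP misc.test
S: 211 1234 3000234 3002322 misc.test
C: ARTICLE 3000234
S: 220 3000234 <45223423@example.com> article follows
   [headers, a blank line, the body, then a line containing only "."]
C: QUIT
S: 205 Connection closing

Most of the implementation effort is in the server-side bookkeeping
(article storage, overview data, and so on) rather than in the protocol
exchange itself.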

As for "lacking in functionality", it all depends on what you expect it
to provide. As a way of getting messages from place to place, it seems
ideal. As a way of providing other features, it depends. I would not,
for example, expect NNTP on its own to provide a moderation UI or avatars.

Note that modern web forums often provide email alerts (and some allow
email participation). The fact that email is as 'primitive' as NNTP in
this respect compared to web forums' native interfaces does not detract
from the fact that it is useful for many users.

> There are probably more people using tapatalk for that purpose--even
> though it's hideous and proprietary--simply because it's a better fit
> than NNTP for a modern discussion group.

It all depends. I'm a member of a number of discussion groups: Mail
lists, NNTP-based discussion groups, web forums. For me, Tapatalk is, as
you say, hideous and proprietary and so it's not a good fit for my uses.
For me, all those discussion groups could perhaps better be transmitted
to me over NNTP. That would work better for my use case. It would be
ideal for me, in fact.

Yes, as I have observed, web forums (for which Tapatalk is a front end)
have grown in popularity as NNTP and email have declined in popularity
but what works best really does depend on the user (and type of user).

Note also that the front end user experience is not necessarily directly
dependent on the transport protocol. For example, it is entirely
feasible for a front end that looks and works much like Tapatalk or like
the mobile web versions of popular web forums to communicate with its
back end via NNTP, or to communicate with different back ends using a
range of protocols (e.g. NNTP, email, REST, and so on).

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
ny distribution medium*. In fact
it was effectively the *only* one-to-many distribution medium available
at all until the first peer-to-peer file sharing networks began to
appear (which is why I wonder what other system or protocol you are
comparing NNTP's binary transfer efficiency against).

I should add that I described Usenet as an "efficient" distribution
medium above and it most certainly was efficient in this respect. Even
though, as you say, NNTP needs to encode binaries, Usenet was still
efficient because of its one-to-many capability and its asynchronous
capability. It just worked.

And let me re-iterate that none of this history, whilst interesting,
particularly relates to NNTP's continued suitability for discussion
groups such as this one.

> It really doesn't seem like you ever looked at the stats on what
> fraction of the feed an ISP received was ever requested by any
> customer, or you wouldn't argue that this was an efficient mechanism.
> (But god forbid you stopped carrying
> alt.binaries.stupid.waste.of.space because then customers would tie up
> the support line complaining that your newsgroup count was lower than
> your competitor's newsgroup count.) Again, nice idea 30 years ago, but
> incapable of withstanding abuse on the modern internet.

You're still conflating Usenet with NNTP. What you refer to here was an
issue with Usenet. This tells us nothing whatsoever about the
suitability of NNTP for discussion group transport, something for which
NNTP was and is ideal. This use of NNTP is nothing to do with Usenet and
is nothing to do with Usenet's binary-related practical problems.



Footnote:-
1: A more recent example of a very similar skewed and confused view of
things is the Casio F-91 watch. Certain elements of US intelligence had
noticed that many terrorist suspects arrested in Iraq were wearing the
Casio F-91W watch model. The intelligence reports extrapolated this
apparent correlation to suggest, amongst other things, that the watch
was chosen because its alarm capabilities allowed an alarm to be set
more than 24 hours in the future (in fact that particular model allows no such
thing, although some other Casio models do). In truth, the Casio F-91W
model was and still is popular with third world terrorist suspects
because it is (a) very cheap, and (b) it is produced in greater numbers
than any other watch model in the world. I.e. Lots of people in third
world countries wear Casio F-91Ws, not just terrorists. And yet the
intelligence people were ignorant of the wider popularity of the F-91W
and extrapolated incorrectly from the limited (skewed) data set of which
they were aware. Similar errors of limited vision, confusion, and skew
were made in the timeframe we're discussing here by some people running
training courses for professionals.

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 28/08/2018 00:04, Gene Heskett wrote:
> My knowledge is based on a conversation I had with my then isp in about 
> 1993 or so, so its entirely possible that the protocol has been changed 
> since then. What they had then struck me as very very wastefull of 
> resources. Because I was such a PITA, they actually built another 
> machine for NNTP and had at bring in another oc3 circuit to feed it. I 
> had what was a full house Amiga 2000 with 64 megs of ram on a PP 040 
> board, had a pair of 1GB scsi seagates, their machine had a 47GB drive, 
> which was filled in just an hour or so, so the expire was set at 8 
> hours. So the last thing I did at night was dial them up and grab what I 
> wanted that was new, and the first thing in the morning, the same.
>
> Sheer economics has likely driven some major changes in how NNTP works 
> today. And I expect that's a hell of a lot better for the average ma & pa
> isp. By the time I built a new machine and put red hat 5 on it, in 1998 
> I think, NNTP had degenerated to 90% spam, so I never rejoined that pool 
> party, it was too polluted for me. Email was easier to filter, and here 
> I am still, almost 20 years later, and older too, I'll be 84 in a couple 
> more months if I don't miss morning roll call first.

As in my reply to Michael Stone, posted just now, you are conflating
NNTP with Usenet. The problems you describe above are all to do with
Usenet, not with the NNTP protocol per se.

NNTP is not a bandwidth hog. It is not now and never has been.

Usenet was (and still is) a bandwidth hog, but it would have been so no
matter what protocol was used to transmit it.

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-28 Thread Mark Rousell
On 27/08/2018 21:13, Michael Stone wrote:
> On Mon, Aug 27, 2018 at 12:28:35PM -0400, Dan Ritter wrote:
>> On Mon, Aug 27, 2018 at 11:37:48AM -0400, Gene Heskett wrote:
>>>
>>> That bandwidth limit is not on your side of the isp, its the bandwidth
>>> from the main trunk lines to the isp. NNTP is a huge bandwidth hog
>>> regardless of how much of it your isp accepts for spooling on local
>>> disk
>>> to serve you.
>>>
>>
>> This is not the case.
>
> Yes it is. Most ISPs stopped supporting NNTP because of the ridiculous
> bandwidth (and disk space) demands. Your rebuttal skipped over the
> part about people posting off-topic junk all over the place, and the
> fact that (the couple of cranks who actually just wanted to read
> comp.misc or whatever aside) most people who wanted "newsgroups"
> really wanted the high volume binary groups with pirated software or
> movies or whatever--so if an ISP dropped the high volume part of the
> NNTP feed, they basically had no reason not to drop the whole thing.
> Back in the late 90s when the handwriting was on the wall it was
> pushing toward 100GB/day to keep up with a full newsfeed.

You appear to be conflating the NNTP protocol with Usenet, the global
message transmission network. They are different things. Usenet as we
currently know it relies on NNTP but NNTP is not Usenet.

Whilst I agree that it is true that ISPs stopped running their own
Usenet-linked NNTP servers for the reasons you describe, it is
nevertheless wholly false to say that NNTP is the problem in this
context. The problem was Usenet and the massive bulk of binary groups.
NNTP was not and is not to blame for Usenet's excess. Any distribution
protocol would have been a bandwidth hog in those circumstances.

> In theory you can still use an NNTP client (vs a server) to follow a
> limited number of text-only groups fairly efficiently. In practice
> there's just not that much left worth following because the experience
> got to be so bad, and because so few people are even aware it exists
> anymore. If you purchase newsgroup service as a standalone from a
> specialized company you typically get a somewhat more curated
> experience (for a pretty sizable fraction of the total price of your
> internet connection, to pay the costs outlined above). The reality is
> that the primary use of these services is downloading pirated software
> and other binaries.

Even here, where you recognise that NNTP can be used for discussions
just like this mail list, you still seem to be viewing NNTP primarily
within the context of Usenet. There is no need for NNTP-based
discussions to involve the single, federated Usenet system.

It is certainly true that NNTP has fallen out of favour for private
discussion groups (nothing to do with Usenet) but there are lots of
reasons for this (a long and complex discussion in its own right), and
Usenet's problems with volume are only peripherally connected to the
reduction in the use of NNTP for text-based discussions of the nature
carried here.

Both mail list-based discussion groups and NNTP-based discussion groups
have reduced in popularity as web-based forums have increased in
popularity, despite the fact that web-based forums are not an exact
replacement for the use cases of either email or NNTP. This change in
popularity never had and does not have any connection with the bandwidth
requirements of Usenet (regardless of the protocol used to carry it).

It should be noted that private NNTP-based groups that were not shared
with Usenet existed long before ISPs stopped providing Usenet feeds as
part of their general service.

In truth, NNTP (colloquially but incorrectly referred to as "Usenet") is
still a great protocol for private discussion groups such as this mail
list and many others like it, even if few[1] use it. Used in this
manner, NNTP is not and *never was* a bandwidth hog. NNTP is probably
more bandwidth-efficient overall than an email discussion list, and as
bandwidth-efficient as (potentially more so than) a web-based forum.
NNTP may have fallen out of favour for this type of use case (primarily
in favour of web forums as things now stand) but it can and does still
do the job in a bandwidth-efficient manner.



Footnote:-
1: For example, Mozilla still use NNTP discussion groups which are
mirrored as email lists.

-- 
Mark Rousell
 
 
 



pactl and bluetooth

2018-08-26 Thread Mark Fletcher
Hello the list

I'm running stretch amd64, upgraded from at least jessie and I think 
wheezy -- memory's a bit hazy now. I use Gnome on this machine.

Every time I reboot I find I can't connect my bluetooth headphones to 
the computer. In the Gnome bluetooth applet, when I click the slide 
button to connect, it immediately slides from ON to OFF without seeming 
to do anything.

A while back the archives of this list helped me discover that in order 
to fix this I need to run as root:

pactl load-module module-bluetooth-discover

Doing this immediately makes it possible to connect the headphones.

Can anyone point me at documentation of how I could arrange things so 
this happens automatically and I don't have to type it by hand? I know 
there's a way but I am struggling to find, or remember, how.
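
(One likely arrangement, sketched here as an assumption based on stock
PulseAudio configuration rather than anything confirmed in this thread,
is to load the module from PulseAudio's startup script:

# in /etc/pulse/default.pa (or ~/.config/pulse/default.pa per user)
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif

The .ifexists guard is the same pattern the stock default.pa uses, so
startup still works on systems without the Bluetooth module installed.)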

Thanks

Mark



Re: Re: Re: DHCP to static without reboot?

2018-08-25 Thread Mark Pavlichuk
If I use a completely vanilla freshly installed VM (on VirtualBox)...  
LXDE, but otherwise standard options:


Even a simple "ifdown " with no configuration changes gives 
inconsistent results.  Three different results so far:


1) Sometimes it SEEMS like it releases the DHCP address, and completes 
successfully.  ifconfig however shows the interface still there, with an 
IP that can be pinged.


2) Again, seems to complete successfully BUT an ifconfig shows the 
interface without any IP.


3) The ifdown completes successfully, and gets the expected results.

I've waited to make sure I'm not being too quick for a process to 
complete or something.  I'm simply getting inconsistent results with 
identical VM snapshots, and I'm at a loss.


Oh, I forgot...  I think I mentioned earlier in the thread I got some 
kind of (uninformative) systemd message, but that was just one run out 
of perhaps 50 or so.


--
Mark Pavlichuk



Re: Re: DHCP to static without reboot?

2018-08-24 Thread Mark Pavlichuk
ifdown doesn't seem to work.  It seems to complete successfully with 
messages about a released DHCP address, but ifconfig shows the interface 
is still up, a ping will be returned etc...  Though if I try another 
ifdown it will say the interface is not configured.


If I then make configuration changes, then ifup, ifconfig will show no 
change.


--
Mark Pavlichuk



Re: DHCP to static without reboot?

2018-08-24 Thread Mark Pavlichuk

I spoke too soon...

I'm getting inconsistent results each time I try...  with things usually 
not working.  I usually brute-force problems by just exploring every 
combination of configuration and commands, but something very weird is 
going on here.


Now I'm getting :

Job for networking.service failed because the control process exited 
with error code.

See "systemctl status networking.service" and "journalctl -xe" for details.

...which gives:

Aug 25 12:00:23 stretch systemd[1]: networking.service: Unit entered 
failed state
Aug 25 12:00:23 stretch systemd[1]: networking.service: Failed with 
result 'exit-


My only guesses are there's either a weird background process doing 
things I don't want it to, or else perhaps there's a small chance of 
some kind of hardware issue (though this is in a VM).


--
Mark Pavlichuk



Re: DHCP to static without reboot?

2018-08-24 Thread Mark Pavlichuk
Ahh, the problem was the "allow-hotplug" that my interface acquired...  
though it's weird that even stopping dhclient and the network daemon 
didn't stop the interface from bouncing back up (with a DHCP address).  
Is there some other daemon I'm unaware of?  (I was experimenting with a 
fresh Stretch/LXDE default install).
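
For reference, the relevant stanza lives in /etc/network/interfaces; a
minimal sketch with a hypothetical interface name enp0s3:

# "allow-hotplug" hands the interface to udev hotplug events, which can
# bring it back up behind your back; "auto" keeps it under plain
# ifup/ifdown control. The fix described above is this one-line change:
#allow-hotplug enp0s3
auto enp0s3
iface enp0s3 inet dhcp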


--
Mark Pavlichuk



DHCP to static without reboot?

2018-08-24 Thread Mark Pavlichuk
I can't seem to change the configuration of a nic from DHCP to static in 
Stretch (without rebooting) - I do this a lot so I don't want to have to 
reboot every time.  I'm using the old "killall dhclient;ifdown eth0;ifup 
eth0" method which also still seems to be in the documentation... at 
least on the debian.org site.  I've also tried doing various 
combinations of "service stop/start/restart networking" along with 
ifup/ifdown commands to no avail.  Is there some special new magic?
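
For concreteness, here is the intended before/after as a sketch,
assuming classic ifupdown and an interface named eth0 (the name and the
addresses are placeholders):

# /etc/network/interfaces -- before
allow-hotplug eth0
iface eth0 inet dhcp

# /etc/network/interfaces -- after
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1

# applied without a reboot:
killall dhclient
ifdown eth0 && ifup eth0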


--
Mark Pavlichuk



Re: yabasic problem

2018-08-20 Thread Mark Fletcher
Isn’t the problem that you misspelled “experimental” in your original file
paths?

Mark
On Mon, Aug 20, 2018 at 21:13 Richard Owlett  wrote:

> On 08/20/2018 02:35 AM, Thomas Schmitt wrote:
> > David Wright wrote:
> >  [snip]
> >> Would you agree, though, that "BASIC" is the language that must
> >> have the biggest contrast between its well-endowed versions and
> >> the most dire cr*p.
> >
> > Well, back then i perceived HP BASIC as the best language of all. It made
> > me boss on all those expensive HP machines (from 9845B to 9000/320).
> > But C ran on all Unix workstations. And as soon as i became ambidextrous
> > enough, i fell in love with the display manager of the Apollo Domain
> DN3000.
> >
> > Microsoft's Visual Basic is said to have surpassed HP BASIC in the years
> > later.
> >
> >
> >> Where would yabasic fit?
>
> I think it would rate fairly well. I browsed the BASICs in the
> repository and it seemed best matched to my preferences, specifically no GUI.
>
> >
> > It seems to be inspired by C (see the syntax of "open"). But why use such
> > a C-BASIC when there is gcc, gdb and valgrind ?
> >
> 'Cause the last time I used C was about 4 *DECADES* ago ;}
> Although I have programmed, I would not claim to be a *programmer*.
>
>
>


Re: Debian 9 rocks, really

2018-08-14 Thread mark
On Saturday, March 24, 2018 6:31:11 PM EDT Andre Rodier wrote:
> Hello all,
> 
> I have been using Linux for more than 20 years, and Debian Linux since
> Potato. I even remember the time when you had to carefully read the
> documentation of your monitor to avoid damaging it, by choosing the
> wrong frequencies for the X server.
> 
> André Rodier.

Hello Andre,
I don't know if you are aware, but there is a large group of things which are
named *something*rocks.  A couple of examples: I have used qmailrocks to
help set up a mail server, and linuxrocks.online is out there (among many
others).  I just don't want to see anyone innocently getting into trouble.

Mark



Re: mailing list vs "the futur"

2018-08-10 Thread Mark Rousell
On 10/08/2018 00:03, Rich Kulawiec wrote:
>
> No.  This is an absolutely terrible idea.  Here's why mailing lists
> are (along with Usenet newsgroups) vastly superior to web-based anything:
> [excellent list redacted for brevity]

Well said! What a very useful list of the reasons that mail lists
continue to have great practical utility.

-- 
Mark Rousell
 
 
 



Re: mailing list vs "the futur"

2018-08-10 Thread Mark Rousell
On 09/08/2018 18:39, tech wrote:
> Shouldn't it be time to move away from an old mailing list to something
> more modern like a bugzilla or else ???

No. Mail lists work as well now as they did then.

Mail lists are efficient, to the point, and simple to use.

Don't try to fix what isn't broken.


-- 
Mark Rousell
 
 
 



Re: Specifying multiple NICs

2018-08-06 Thread mark
On Wednesday, August 1, 2018 2:56:55 PM EDT Brian wrote:
> On Wed 01 Aug 2018 at 19:57:32 +0200, Pascal Hambourg wrote:
> > Le 01/08/2018 à 19:32, Brian a écrit :
> > > On Wed 01 Aug 2018 at 12:00:41 -0400, Mark Neidorff wrote:
> > > > In the past, I referred to each NIC as eth0, eth1, ... but now,
> > > > these names are not permanent, and the designation can change on
> > > > boot.  I looked at the "Network Configuration" document which didn't
> > > > have a solution.  So, either how do I make the names for the NICs
> > > > permanent or what do I use for the names of the NICs?
> > > 
> > > Starting with v197, systemd/udev will automatically assign predictable,
> > > stable network interface names for all local Ethernet devices. jessie
> > > has udev v215. jessie-backports has v230.
> > 
> > Jessie still has the old persistent naming scheme using
> > /lib/udev/rules.d/75-persistent-net-generator.rules and
> > /etc/udev/rules.d/70-persistent-net.rules by default, and the new
> > predictable naming scheme is disabled (net.ifnames=0). The new predictable
> > naming scheme has been enabled by default only since Stretch.
> 
> Enable it, then.
> 
> Delete /etc/udev/rules.d/70-persistent-net.rules and put
> net.ifnames=1 on the kernel command line when booting.
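
Spelled out as a sketch (the rules filename follows the one given
earlier in the thread, and "quiet" stands in for whatever is already on
your command line):

# remove the old MAC-address-based naming rules
rm /etc/udev/rules.d/70-persistent-net.rules

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=1"

# regenerate the boot configuration, then reboot
update-grub

After the reboot the NICs get names derived from firmware/slot/path
information (enp3s0 and the like), which stay stable across boots.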


Thanks everyone for pointing me in the right direction.  This should all work 
its way out now.

Mark



Specifying multiple NICs

2018-08-01 Thread Mark Neidorff
I'm setting up a "just in case" replacement mailserver for my domain and my 
local network.  I'm using Debian Jessie, because the latest instructions for 
setting up the mailserver (qmail) are written for Jessie.  The mailserver has 2
NICs (one for local network, and one for Internet).

In the past, I referred to each NIC as eth0, eth1, ... but now, these names
are not permanent, and the designation can change on boot.  I looked at the
"Network Configuration" document which didn't have a solution.  So, either how
do I make the names for the NICs permanent or what do I use for the names of
the NICs?

Thanks,
Mark
-- 
If you find the going easy, you're probably going downhill.



Re: Nvidia drivers

2018-07-05 Thread Mark Allums

Sorry,
I failed to read your whole message.  Make sure you have installed the
kbuild and headers packages for your current running kernel.  Then consider
upgrading your nvidia driver to the latest version (in sid, still, I
believe).  If you do the latter, be sure to use the kernel parameter I
showed you in my earlier post.
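
In command form that is roughly (a sketch; the package name follows the
standard Debian pattern and should be checked against "uname -r"):

apt install linux-headers-$(uname -r)
dkms autoinstall

On Debian the headers package normally pulls in the matching
linux-kbuild as a dependency, after which dkms can rebuild the nvidia
module for the running kernel.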


Mark

On 7/5/18 8:03 PM, Mark Allums wrote:

On 7/5/18 5:42 PM, Francisco Mariano-Neto wrote:

Hey all,

I'm running kernel 4.15 with nvidia-driver 390.48-3 with no
problems. However, recently my kernel was automatically upgraded to 4.16
and it broke the nvidia driver.

Running 'dkms autoinstall --all' does not help, it complains
about not finding kernel headers (which are installed) and quits.

Any ideas on how I can rebuild the kernel module for the new
kernel version?

Thanks
Francisco


This requires a workaround, a kernel parameter at boot.

GRUB_CMDLINE_LINUX_DEFAULT="slab_common.usercopy_fallback=y"

Edit the config file like this:

$ sudo gedit /etc/default/grub

Then run:

$ sudo update-grub





Re: Nvidia drivers

2018-07-05 Thread Mark Allums

On 7/5/18 5:42 PM, Francisco Mariano-Neto wrote:

Hey all,

I'm running kernel 4.15 with nvidia-driver 390.48-3 with no
problems. However, recently my kernel was automatically upgraded to 4.16
and it broke the nvidia driver.

Running 'dkms autoinstall --all' does not help, it complains
about not finding kernel headers (which are installed) and quits.

Any ideas on how I can rebuild the kernel module for the new
kernel version?

Thanks
Francisco


This requires a workaround, a kernel parameter at boot.

GRUB_CMDLINE_LINUX_DEFAULT="slab_common.usercopy_fallback=y"

Edit the config file like this:

$ sudo gedit /etc/default/grub

Then run:

$ sudo update-grub



Re: basics of CUPS troubleshooting

2018-06-15 Thread Mark Copper
On Fri, Jun 15, 2018 at 3:36 PM, Brian  wrote:

> On Mon 11 Jun 2018 at 18:40:49 -0500, Mark Copper wrote:
>
> > On Tue, May 29, 2018 at 2:29 PM, Brian  wrote:
> >
> > > On Mon 28 May 2018 at 18:20:13 -0500, Mark Copper wrote:
> > >
> > > > Having upgraded to Stretch, a file that I need to print no longer
> prints
> > > > properly. (It did before.)
> > > >
> > > > I am sure the difficulty is so idiosyncratic no one here will have
> > > > experienced it. So I'm not asking how to fix it specifically.
> Rather I'm
> > > > looking for advice how to isolate the difficulty, generally speaking.
> > >
> > > What is offered in the printing section of the wiki could help.
> > >
> >
> > Thanks for this. I was unaware.
> >
> > The problem I'm seeing seems to be in the CUPS filter gstoraster. That
> is,
> > all is well at the preceding step:
> >
> > # /usr/sbin/cupsfilter -p /etc/cups/ppd/Zebra.ppd -m
> > application/vnd.cups-pdf -o orientation-requested=3 tag_generator.pdf >
> > out.pdf
>
> The output of pdfinfo for tag_generator.pdf would be useful.
>
> > Well, almost. It was necessary to include the "orientation-requested=3"
> > option to keep the document from rotating.
>
> That directive produces a rotation of 0 degrees. It is difficult to see
> how it promotes or stops rotation.
>
> > But at the next step, the document is laid on its side no matter the
> > orientation requested:
>
> "laid on its side" means?
>

Thanks for response.

Here is pdfinfo for the input pdf file:
$ pdfinfo tag_generator
Producer:   PDF::API2 2.023 [linux]
Tagged: no
UserProperties: no
Suspects:   no
Form:   none
JavaScript: no
Pages:  1
Encrypted:  no
Page size:  90 x 144 pts
Page rot:   0
File size:  14233 bytes
Optimized:  no
PDF version:1.4

I'm unable to reproduce the autorotation in the absence of the
orientation-requested=3 option.  I must have been mistaken. Sorry.

By "laid on its side" I mean rotated 90° clockwise.


Re: basics of CUPS troubleshooting

2018-06-11 Thread Mark Copper
On Tue, May 29, 2018 at 2:29 PM, Brian  wrote:

> On Mon 28 May 2018 at 18:20:13 -0500, Mark Copper wrote:
>
> > Having upgraded to Stretch, a file that I need to print no longer prints
> > properly. (It did before.)
> >
> > I am sure the difficulty is so idiosyncratic no one here will have
> > experienced it. So I'm not asking how to fix it specifically.  Rather I'm
> > looking for advice how to isolate the difficulty, generally speaking.
>
> What is offered in the printing section of the wiki could help.
>

Thanks for this. I was unaware.

The problem I'm seeing seems to be in the CUPS filter gstoraster. That is,
all is well at the preceding step:

# /usr/sbin/cupsfilter -p /etc/cups/ppd/Zebra.ppd -m
application/vnd.cups-pdf -o orientation-requested=3 tag_generator.pdf >
out.pdf

Well, almost. It was necessary to include the "orientation-requested=3"
option to keep the document from rotating.

But at the next step, the document is laid on its side no matter the
orientation requested:

# /usr/sbin/cupsfilter -p /etc/cups/ppd/Zebra.ppd -m
application/vnd.cups-raster  out.pdf > out.ras


Or, in more detail,

# pdfinfo out.pdf
Producer:   PDF::API2 2.023 [linux]
Tagged: no
UserProperties: no
Suspects:   no
Form:   none
JavaScript: no
Pages:  1
Encrypted:  no
Page size:  90 x 144 pts
Page rot:   0
File size:  14151 bytes
Optimized:  no
PDF version:1.4

output of "/usr/sbin/cupsfilter -p /etc/cups/ppd/Zebra.ppd -m
application/vnd.cups-raster  out.pdf > out.ras" to standard error says this:

DEBUG: argv[0]="cupsfilter"
DEBUG: argv[1]="1"
DEBUG: argv[2]="root"
DEBUG: argv[3]="out.pdf"
DEBUG: argv[4]="1"
DEBUG: argv[5]=""
DEBUG: argv[6]="out.pdf"
DEBUG: envp[0]=""
DEBUG: envp[1]="CONTENT_TYPE=application/pdf"
DEBUG: envp[2]="CUPS_DATADIR=/usr/share/cups"
DEBUG: envp[3]="CUPS_FONTPATH=/usr/share/cups/fonts"
DEBUG: envp[4]="CUPS_SERVERBIN=/usr/lib/cups"
DEBUG: envp[5]="CUPS_SERVERROOT=/etc/cups"
DEBUG: envp[6]="LANG=en_US.UTF8"
DEBUG: envp[7]="PATH=/usr/lib/cups/filter:/usr/bin:/usr/sbin:/bin:/usr/bin"
DEBUG: envp[8]="PPD=/etc/cups/ppd/Zebra.ppd"
DEBUG: envp[9]="PRINTER_INFO=cupsfilter"
DEBUG: envp[10]="PRINTER_LOCATION=Unknown"
DEBUG: envp[11]="PRINTER=cupsfilter"
DEBUG: envp[12]="RIP_MAX_CACHE=128m"
DEBUG: envp[13]="USER=root"
DEBUG: envp[14]="CHARSET=utf-8"
DEBUG: envp[15]="FINAL_CONTENT_TYPE=application/vnd.cups-raster"
INFO: pdftopdf (PID 8482) started.
INFO: gstoraster (PID 8483) started.
DEBUG: OUTFORMAT="(null)", so output format will be CUPS/PWG Raster
DEBUG: pdftopdf: Last filter determined by the PPD: rastertolabel;
FINAL_CONTENT_TYPE: application/vnd.cups-raster => pdftopdf will not log
pages in page_log.
DEBUG: Color Manager: Calibration Mode/Off
DEBUG: Calling FindDeviceById(cups-cupsfilter)
DEBUG: Failed to send: org.freedesktop.ColorManager.NotFound:device id
'cups-cupsfilter' does not exist
DEBUG: Failed to get find device cups-cupsfilter
DEBUG: Calling FindDeviceById(cups-cupsfilter)
DEBUG: Failed to send: org.freedesktop.ColorManager.NotFound:device id
'cups-cupsfilter' does not exist
DEBUG: Failed to get device cups-cupsfilter
INFO: Color Manager: no profiles specified in PPD
DEBUG: Color Manager: ICC Profile: None
DEBUG: Ghostscript using Any-Part-of-Pixel method to fill paths.
DEBUG: Ghostscript command line: gs -dQUIET -dPARANOIDSAFER -dNOPAUSE
-dBATCH -dNOINTERPOLATE -dNOMEDIAATTRS -sstdout=%stderr
-sOutputFile=%stdout -sDEVICE=cups -dAdvanceDistance=1000 -r300x300
-dDEVICEWIDTHPOINTS=90 -dDEVICEHEIGHTPOINTS=162 -dcupsBitsPerColor=1
-dcupsColorOrder=0 -dcupsColorSpace=3 -dcupsCompression=-1
-dcupsRowStep=200 -scupsPageSizeName=w90h162 -I/usr/share/cups/fonts -c
'<>setpagedevice' -f -_
DEBUG: envp[0]=""
DEBUG: envp[1]="CONTENT_TYPE=application/pdf"
DEBUG: envp[2]="CUPS_DATADIR=/usr/share/cups"
DEBUG: envp[3]="CUPS_FONTPATH=/usr/share/cups/fonts"
DEBUG: envp[4]="CUPS_SERVERBIN=/usr/lib/cups"
DEBUG: envp[5]="CUPS_SERVERROOT=/etc/cups"
DEBUG: envp[6]="LANG=en_US.UTF8"
DEBUG: envp[7]="PATH=/usr/lib/cups/filter:/usr/bin:/usr/sbin:/bin:/usr/bin"
DEBUG: envp[8]="PPD=/etc/cups/ppd/Zebra.ppd"
DEBUG: envp[9]="PRINTER_INFO=cupsfilter"
DEBUG: envp[10]="PRINTER_LOCATION=Unknown"
DEBUG: envp[11]="PRINTER=cupsfilter"
DEBUG: envp[12]="RIP_MAX_CACHE=128m"
DEBUG: envp[13]="USER=root"
DEBUG: envp[14]="CHARSET=utf-8"
DEBUG: envp[15]="FINAL_CONTENT_TYPE=application/vnd.cups-raster"
INFO: pdftopdf (PID 8482) exited with no errors.
INFO: Start rendering...
INFO: Processing
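
(To isolate gstoraster, it can also be invoked by hand with the
standard CUPS filter arguments -- job-id, user, title, copies, options,
file -- and the PPD passed in the environment. A sketch, untested,
using the paths from the log above:

# PPD=/etc/cups/ppd/Zebra.ppd /usr/lib/cups/filter/gstoraster \
    1 root test 1 "orientation-requested=3" out.pdf > out.ras

If the output is still rotated regardless of the option, the problem
is in gstoraster or the Ghostscript it drives.)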

Re: akondadi - should be really improved!

2018-06-11 Thread mark
On Thursday, June 7, 2018 3:18:01 PM EDT Hans wrote:
> Hi folks,
> 
> just a little feedback. IMO akonadi in kmail should be either really
> improved or dropped.
> 
> I do not want to moan, as I know there is lots of work done in free time,
> but akonadi at the moment (or rather, since the change to akonadi in kmail) is
> a pain.
> 
> Akonadi is slow, really slow. If you want to delete lots of mails in one
> go, it does not work. You have to delete them one after the other, which is
> annoying.
> 
> And even this is not always working. Rather often it hangs. Then you have to
> click to another folder (i.e. sent mail), then click the desired folder
> where you want to delete the mails and you can go on.
> 
> Looks like fast deletion is not possible.
> 
> Don't know if anyone got into the same trouble, but IMO akonadi should be
> improved.
> 
> Please don't mind, best regards
> 
> Hans

Hans,

I'm running KMail version 5.5.2.  From what you report, it sounds to me like 
you are running an older version.  The version I run allows me to select 
multiple e-mails and delete them all at once.  To be fair, all is not perfect.  
Sometimes an e-mail comes in which can not be deleted.  To deal with that, I 
downloaded and use akonadiconsole.  Once you get the hang of it (took me about 
5 minutes) you can find and delete those e-mails easily.

(for others) I do not want to enter into the Trinity vs current version 
debate.

Best of luck,

Mark



Re: new install of amd64, 9-4 from iso #1

2018-06-10 Thread Mark Fletcher
On Sun, Jun 10, 2018 at 04:44:16PM -0400, Gene Heskett wrote:
> Greetings all;
> 
> I have the dvd written, and a new 2T drive currently occupying 
> the /dev/sdc slot.
> 
> What I want, since the drive has been partitioned to /boot, /home, /, and 
> swap, is 1; for this install to not touch any other drive currently 
> mounted, and 2; use the partitions I've already setup on this new drive 
> without arguing with me.
> 
> and 3: to  treat the grub install as if there are no other drives hooked 
> up. I don't need grub to fill half the boot screen with data from the 
> other drives.
> 
> How do I best achieve that?
> 
> Thanks a bunch.
> 

1 and 2 are simply a matter of giving sensible answers to the 
appropriate questions from the installer. I can't remember exactly what 
the options are called, but there is an expert/manual partitioning mode 
that lets you lay out the disk however you want. I'd use that to verify 
the partitions are as you want without changing anything, map each part 
of the filesystem onto the partition it should live on, then 
continue.

If you don't tell it to install anything to the other disks then it won't.

For 3, I think I need to defer to the grub experts; I'm not sure 
whether you will have to preseed your install or if there is an easier way.
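
One thing that might be worth trying for 3 (hedged -- I haven't tested
this on a setup like yours): tell the new install's GRUB not to run
os-prober, so it won't pick up the other drives. In /etc/default/grub
on the new install, set

    GRUB_DISABLE_OS_PROBER=true

and then run update-grub to rebuild the menu.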

Mark



Re: Install matplotlib in Debian 9

2018-06-10 Thread Mark Fletcher
On Fri, Jun 08, 2018 at 09:21:23PM +0200, didier gaumet wrote:
> Le 08/06/2018 à 20:51, Markos a écrit :
> > Hi,
> > 
> > I'm starting my studies with Python 3 on Debian 9.
> > 
> > I have to install the matplotlib module, but I'm in doubt what is the
> > difference to install with the command:
> > 
> > pip3 install matplotlib
> > 
> > or
> > 
> > apt-get install python3-matplotlib
> > 
> > Is there any difference in the packages that are installed?
> > 
> > Thanks,
> > 
> > Markos
> > 
> 
> I suppose that this comparable to install a Firefox extension via
> apt-get or from Firefox: apt-get will provide an older version
> system-wide while pip3 will provide a more up-to-date version only in a
> user environment?
> Do not take my word for it, though: I have absolutely no competence in
> Python.
> 

Using pip is like building non-python software from source when it is 
already packaged for Debian -- possible, and occasionally necessary in 
some circumstances, but to be avoided where you can. If you use the 
Debian packaging system, Debian knows what you have installed and what 
libraries your system is dependent on, etc, and won't do anything to 
break your system for example when you upgrade. But if you install using 
pip Debian doesn't know anything about it (so won't upgrade it for you 
when you upgrade). In particular, but not limited to, upgrading a system 
that has a mix of manually-built and Debian-installed packages can be a 
pain.
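
Concretely, the two routes look like this (a sketch; --user keeps pip
out of the system directories):

# apt-get install python3-matplotlib    (system-wide, tracked by dpkg)
$ pip3 install --user matplotlib        (per-user, under ~/.local,
                                         invisible to dpkg)

Either way you can check what you ended up with:

$ python3 -c "import matplotlib; print(matplotlib.__version__)"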

I can tell you from experience the version of matplotlib in Debian 9, 
while not the latest and greatest, is plenty good enough. I use it quite 
a lot.

If this is you making a foray into data science with python, by the way, 
I also strongly recommend the pandas library (also in Debian, and again 
the version in Stretch is not latest but plenty new enough).

Mark



Re: The Internet locks up Buster

2018-06-07 Thread Mark Fletcher
On Thu, Jun 07, 2018 at 02:13:17PM -0400, Borden Rhodes wrote:
> > I.e. bug 12309 is back. It's an obscure and presumably fixed (at least four
> > times fixed) bug that happens with a relatively slow filesystem (be it
> > SSD/HDD/NFS or whatever) and a large amount of free RAM. I first
> > encountered the thing back in 2.6.18 days, where it was presumably
> > implemented (as in - nobody complained before ;).
> 
> Thank you, Reco and Abdullah, for providing some very helpful
> information. I'll retest with the kernel parameters. I went over to
> https://bugzilla.kernel.org/show_bug.cgi?id=12309 and it seems they've
> closed the bug and/or given up on this. Is there any value in
> continuing to whine about this problem? I mean, it's not like
> large-capacity RAM is going away.
> 

I feel like we are missing a trick here. Even with a relatively slow I/O 
device (I was faintly amused to see SSD in the list of relatively slow 
devices, if SSD is slow what is fast?) it should eventually catch up 
UNLESS something is generating an insane amount of I/O over a sustained 
period. Just browsing the web shouldn't do that unless RAM is very 
tight, and the O/P indicated they have lots of RAM.

I run my machine here with 24GB RAM and part of my filesystem is on an 
external USB hard drive cage. From reading this thread you'd think that 
when I run data analyses reading and writing that external drive cage, 
it would be a recipe for this bug, but it isn't. And that is because 
those processes do a lot of work and make the CPU work hard, but they 
don't do insane amounts of I/O. (lots, but not insane amounts)

So, I think the O/P should look into what is causing all the I/O in the 
first place, and why that I/O is sustained even when most of the 
processes on the system are blocked. Something isn't right there. The 
usual suspect would be swapping but again the O/P said they have 
"large-capacity RAM" and were just browsing the web with or without 
LibreOffice open -- this shouldn't trigger swapping.
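
If it were my box, I'd start by watching who is actually doing the I/O
while the stall is happening, something like (iotop is a separate
package):

# iotop -o     (show only processes currently doing I/O)
$ vmstat 1     (watch the b, wa and si/so columns for blocked tasks,
                iowait and swapping)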

Mark



Re: KPatience cards too small

2018-06-03 Thread Mark Neidorff
On Saturday, June 2, 2018 11:14:13 AM EDT arne wrote:
> Hi,
> 
> Looks like a bug in Kpat, the cards are way too small:
> Screenshot:
> 
> https://i.paste.pics/652e13761f68299de40c01f409392284.png
> 
> To find the cards: top center
> 
> Debian Stretch amd64 up-to-date
> 4k monitor
> Nvidia GeForce GTX 1050 videocard
> 
> How to solve this?
> 
> Thanks!

Hi,

First thing: which video driver are you using?  Is there another 
driver you could try?  That may be your simplest test for finding out where 
the problem is.
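
A quick way to see which kernel driver the card is using (assuming
pciutils is installed):

$ lspci -nnk | grep -A3 VGA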

Mark



basics of CUPS troubleshooting

2018-05-28 Thread Mark Copper
Having upgraded to Stretch, a file that I need to print no longer prints
properly. (It did before.)

I am sure the difficulty is so idiosyncratic no one here will have
experienced it. So I'm not asking how to fix it specifically.  Rather I'm
looking for advice how to isolate the difficulty, generally speaking.

The file to print is a pdf file and the printer is a Zebra label printer.
The file is sent to the printer via command line, lpr. The printer is
configured via CUPS web interface. The driver is provided by CUPS (Zebra
ZPL label printer) and no external ppd file is required. The file prints
without indication of error in the cups error log.

The problem is that the orientation of the file can no longer be changed.
(Under Jessie & Wheezy, the orientation would change automatically to
accommodate the label dimensions.)  Now, even when passing the
"orientation-requested" option, the document cannot be rotated 90
degrees; instead it just flips 180 degrees.
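
For reference, the invocation is along these lines (file name made up;
per the CUPS documentation 3 is portrait, 4 landscape, 5 reverse
landscape, 6 reverse portrait):

$ lpr -P Zebra -o orientation-requested=4 label.pdf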

So I'm wondering what changed? Can I somehow isolate the change to a file?
Is there documentation what changes might have been made to the file
somewhere?

Thanks for reading.


Re: making more room in root partition for distribution upgrade

2018-05-24 Thread Mark Copper
On Mon, May 21, 2018 at 12:23 PM, Pascal Hambourg <pas...@plouf.fr.eu.org>
wrote:

Le 21/05/2018 à 18:14, Mark Copper a écrit :
>
>> On Sun, May 20, 2018 at 3:19 AM, Pascal Hambourg <pas...@plouf.fr.eu.org>
>> wrote:
>>
>>> Le 18/05/2018 à 02:05, Mark Copper a écrit :
>>>
>>> You will have to move/delete and re-create the swap too.
>>> Gparted allows to resize and move an unused partition. Better have a
>>> backup
>>> though.
>>>
>>
>> yes, if I understand, the file system is lost on any partition,
>> primary or logical, whose first cylinder is changed.
>>
>
> Not with Gparted. Gparted moves the data to the new location of the
> partition. But things can go wrong during the operation (power failure,
> system crash...) so better keep a backup


Ah, gparted made it all very easy. (One little bump: some bug requiring a
little empty space separating logical partitions). "/" enlarged, stretch
installed with room to, er, stretch!  Thanks.


Re: making more room in root partition for distribution upgrade

2018-05-21 Thread Mark Copper
>
> The release notes even give detailed instructions as to how you might mount
> bind (or is bind mount?) a usb key as a temporary /var/cache/apt/archives
> directory.
>

That's an intriguing idea.  I'll look.  Thanks.
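
(For the archives, the bind mount in question would look something like
this -- the USB key path is hypothetical:

# mount --bind /media/usbkey /var/cache/apt/archives
  ... run the upgrade ...
# umount /var/cache/apt/archives )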



Re: making more room in root partition for distribution upgrade

2018-05-21 Thread Mark Copper
On Sun, May 20, 2018 at 3:19 AM, Pascal Hambourg <pas...@plouf.fr.eu.org> wrote:
> Le 18/05/2018 à 02:05, Mark Copper a écrit :
>>>
>>>
>>>> There was a day when a 10 gb partition seemed like plenty of space to
>>>> leave
>>>> for the system but now it's not. An upgrade to Stretch appears to need
>>>> more.
>
>
> How do you know ?

I don't, actually. I'm reacting to warnings of limited space received
when upgrading Jessie. And previously when upgrading from Wheezy IIRC.

>
>>>> Device BootStart   End   Sectors   Size Id Type
>>>> /dev/sda1  *2048  19531775  19529728   9.3G 83 Linux
>>>> /dev/sda2   19533822 312580095 293046274 139.8G  5 Extended
>>>> /dev/sda5   19533824  27578367   8044544   3.9G 82 Linux swap /
>>>> Solaris
>>>> /dev/sda6   27580416 312580095 284999680 135.9G 83 Linux
>>>>
>>>> $ cat /etc/fstab
>>>> # / was on /dev/sda1 during installation
>>>> # /home was on /dev/sda6 during installation
>>>> # swap was on /dev/sda5 during installation
>
>
>>>> This must be a FAQ. But there appear to be two ways forward.
>>>>
>>>> 1. Back-up /home, enlarge / partition, copy back-up back to new, smaller
>>>> /home partition (because /home will then start on a different cylinder
>>>> so
>>>> data will be lost).
>
>
> You will have to move/delete and re-create the swap too.
> Gparted allows to resize and move an unused partition. Better have a backup
> though.

yes, if I understand, the file system is lost on any partition,
primary or logical, whose first cylinder is changed.

>
>>>> 2. Carve out a new partition for /usr at end of disk which will free up
>>>> over 6 gb.
>
>
> The Debian initramfs supports a separate /usr since Jessie.

Given the system as it currently exists, this seems the easiest way to
go. (actually there are several boxes like this needing attention).

>
>> $ du -h /var
>> ...
>> 598M    /var
>>
>> but
>>
>> $ du -h /usr
>> ...
>> 4.2G    /usr/share
>> 6.5G    /usr
>
>
> What about the rest ? How much free space is available ?
> Maybe the upgrade requires more space in order to download and store the new
> packages. Have you considered moving /var/cache/apt/archives to the /home
> partition (through a symlink or bind mount) so that downloaded packages do
> not use space in the / filesystem ?
>

Yes, should have included that:

:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.1G  7.8G  870M  91% /
udev             10M     0   10M   0% /dev
tmpfs           402M  6.1M  396M   2% /run
tmpfs          1005M   92K 1005M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs          1005M     0 1005M   0% /sys/fs/cgroup
/dev/sda6       134G  6.0G  121G   5% /home
tmpfs           201M  8.0K  201M   1% /run/user/1000
/dev/sdb3       915G  5.8G  863G   1% /media/mark/d-live 9.4.0 gn amd64
/dev/sdb1       2.3G  2.3G     0 100% /media/mark/d-live 9.4.0 gn amd641

No, I had not considered playing with any part of /var. With /var
taking less than 1 gb and /var/cache/apt/archives less than 1mb, /usr
had seemed the elephant in the room. Might that be a way to go? I just
need to get to Stretch for now.
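
Before repartitioning, a couple of generic space-recovery steps that
sometimes trim / and /usr (results vary, so no promises):

# apt-get clean         (empties /var/cache/apt/archives)
# apt-get autoremove    (removes packages nothing depends on any more)
$ dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -n | tail
                        (lists the biggest installed packages)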

Thank you.



Re: making more room in root partition for distribution upgrade

2018-05-17 Thread Mark Copper
On Thu, May 17, 2018 at 6:32 PM, bw <bwtn...@yahoo.com> wrote:

>
>
> On Thu, 17 May 2018, Mark Copper wrote:
>
> > There was a day when a 10 gb partition seemed like plenty of space to
> leave
> > for the system but now it's not. An upgrade to Stretch appears to need
> more.
> >
> > ~# fdisk -l
> >
> > Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
> > Units: sectors of 1 * 512 = 512 bytes
> > Sector size (logical/physical): 512 bytes / 512 bytes
> > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > Disklabel type: dos
> > Disk identifier: 0x0007c9ed
> >
> > Device BootStart   End   Sectors   Size Id Type
> > /dev/sda1  *2048  19531775  19529728   9.3G 83 Linux
> > /dev/sda2   19533822 312580095 293046274 139.8G  5 Extended
> > /dev/sda5   19533824  27578367   8044544   3.9G 82 Linux swap /
> Solaris
> > /dev/sda6   27580416 312580095 284999680 135.9G 83 Linux
> >
> > $ cat /etc/fstab
> > # /etc/fstab: static file system information.
> > #
> > # Use 'blkid' to print the universally unique identifier for a
> > # device; this may be used with UUID= as a more robust way to name
> devices
> > # that works even if disks are added and removed. See fstab(5).
> > #
> > #
> > proc            /proc           proc    defaults          0 0
> > # / was on /dev/sda1 during installation
> > UUID=f2959403-fb9c-4e56-adbf-e5b7c1f63dd8 /     ext3    errors=remount-ro 0 1
> > # /home was on /dev/sda6 during installation
> > UUID=274b606c-c556-47cb-8db3-2733b7adac3f /home ext3    defaults          0 2
> > # swap was on /dev/sda5 during installation
> > UUID=5642269c-ada4-4466-a516-4a2360ee0ec1 none  swap    sw                0 0
> >
> >
> > This must be a FAQ. But there appear to be two ways forward.
> >
> > 1. Back-up /home, enlarge / partition, copy back-up back to new, smaller
> > /home partition (because /home will then start on a different cylinder so
> > data will be lost).
> >
> > or
> >
> > 2. Carve out a new partition for /usr at end of disk which will free up
> > over 6 gb.
> >
> > What have other people done?
> >
> > Thanks.
> >
>
> release notes on upgrading have some info about disk space.  It's maybe
> /var/cache/apt/archives taking up all your space?  I use 30 gb partitions
> usually and they very rarely get over 8-10 gigs.
>
> https://www.debian.org/releases/stable/amd64/release-
> notes/ch-upgrading.en.html#sufficient-space
>
> good luck!
>
>
I think I'm good there:

$ du -h /var
...
598M    /var

but

$ du -h /usr
...
4.2G    /usr/share
6.5G    /usr

Point well taken about removing packages though.  Thanks


Re: setting up a drive automount in systemd?

2018-05-17 Thread Mark Fletcher
On Thu, May 17, 2018 at 11:36:01AM +0100, Darac Marjal wrote:

> However, before you go doing that, consider that systemd ALSO comes with a
> program called "systemd-fstab-generator". Contrary to its name, this
> generates unit files FROM an fstab (rather than generating an fstab).
> Therefore, following the Principle of Least Surprise, the current thinking
> is to continue to maintain your mountpoints in /etc/fstab and let systemd do
> the translation on the fly.
> 
> In summary, then, while you CAN run systemd without /etc/fstab, that file is
> still recommended as the expected configuration file for mountpoints.
> 

That has the ring of good advice. Especially since there could easily be 
other programs on your system that expect to find information about the 
mountpoints on the system by looking in fstab -- it's a file on the 
system, it's a long-standing standard, people are allowed to look at it 
and it's not unreasonable to imagine some piece of software will.
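
To see the translation in action (assuming a stretch-era systemd), each
fstab mountpoint shows up after boot as a generated unit you can
inspect, e.g. for /home:

$ systemctl status home.mount
$ systemctl cat home.mount    (shows the generated file under
                               /run/systemd/generator)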

Mark



Re: GPG error when trying to update Lenny

2018-05-17 Thread Mark Fletcher
On Wed, May 16, 2018 at 03:20:09PM +, Marie-Madeleine Gullibert wrote:
> Hello to all, 
> 
> I'm relatively new to Debian. I'm helping out a small organization that has a 
> library server installed on Debian to update their system. They run currently 
> on Debian lenny so I'm first trying to upgrade the Debian system, but I keep 
> running into a GPG error when I try to first update. I've tried many things 
> but none have worked so far, and would gladly welcome any suggestions. I do 
> have debian-archive-keyring installed (and up to date) and I've tried 
> retrieving my expired keys from a two different keyservers to no avail. 
> 
> Here's what happens (I'm running as root): 
> 
> localhost:~# apt-get update
> Get:1 http://archive.debian.org lenny Release.gpg [1034B]
> Ign http://archive.debian.org lenny/main Translation-en_US
> Get:2 http://archive.debian.org lenny/updates Release.gpg [836B]
> Ign http://archive.debian.org lenny/updates/main Translation-en_US
> Ign http://archive.debian.org lenny/updates/contrib Translation-en_US
> Get:3 http://archive.debian.org lenny/volatile Release.gpg [481B]
> Ign http://archive.debian.org lenny/volatile/main Translation-en_US
> Hit http://archive.debian.org lenny Release
> Hit http://archive.debian.org lenny/updates Release
> Hit http://archive.debian.org lenny/volatile Release
> Get:4 http://archive.debian.org lenny Release [99.6kB]
> Get:5 http://archive.debian.org lenny/updates Release [92.4kB]
> Ign http://archive.debian.org lenny Release
> Get:6 http://archive.debian.org lenny/volatile Release [40.7kB]
> Ign http://archive.debian.org lenny/updates Release
> Ign http://archive.debian.org lenny/volatile Release
> Ign http://archive.debian.org lenny/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/main Sources/DiffIndex
> Ign http://archive.debian.org lenny/updates/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/updates/contrib Packages/DiffIndex
> Ign http://archive.debian.org lenny/updates/main Sources/DiffIndex
> Ign http://archive.debian.org lenny/updates/contrib Sources/DiffIndex
> Ign http://archive.debian.org lenny/volatile/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/volatile/main Sources/DiffIndex
> Hit http://archive.debian.org lenny/main Packages
> Hit http://archive.debian.org lenny/main Sources
> Hit http://archive.debian.org lenny/updates/main Packages
> Hit http://archive.debian.org lenny/updates/contrib Packages
> Hit http://archive.debian.org lenny/updates/main Sources
> Hit http://archive.debian.org lenny/updates/contrib Sources
> Hit http://archive.debian.org lenny/volatile/main Packages
> Hit http://archive.debian.org lenny/volatile/main Sources
> Fetched 235kB in 0s (301kB/s)
> Reading package lists... Done
> W: GPG error: http://archive.debian.org lenny Release: The following 
> signatures were invalid: KEYEXPIRED 1520281423 KEYEXPIRED 1337087218
> W: GPG error: http://archive.debian.org lenny/updates Release: The following 
> signatures were invalid: KEYEXPIRED 1356982504
> W: GPG error: http://archive.debian.org lenny/volatile Release: The following 
> signatures were invalid: KEYEXPIRED 1358963195
> W: You may want to run apt-get update to correct these problems
> 

Since you want to upgrade the installation to a later version, my 
suggestion is don't bother first trying to update Lenny. Just point 
your sources.list at the next release (Squeeze, in Lenny's case) and 
then upgrade as usual.
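
For a release that old the entries have to point at archive.debian.org;
something like the following (double-check against the components your
current sources.list uses):

deb http://archive.debian.org/debian/ squeeze main
deb http://archive.debian.org/debian-security/ squeeze/updates main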

Some releases recommended using aptitude rather than apt-get (or the 
other way round) to do the upgrade. I don't recall now whether the 
releases after Lenny did, but hopefully this comment will prompt 
someone who does remember to chime in. Google may still be able to find 
old copies of the upgrade guides that are published with each new 
Debian release.

The only other piece of advice I have is don't try to go straight to 
stretch or buster from lenny -- instead upgrade one major release at a 
time, as that path is better trodden and more likely to work, and any 
issues you encounter are more likely to have been well-discussed in 
places Google can find (including the archives of this list).

Mark



making more room in root partition for distribution upgrade

2018-05-17 Thread Mark Copper
There was a day when a 10 gb partition seemed like plenty of space to leave
for the system but now it's not. An upgrade to Stretch appears to need more.

~# fdisk -l

Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007c9ed

Device BootStart   End   Sectors   Size Id Type
/dev/sda1  *2048  19531775  19529728   9.3G 83 Linux
/dev/sda2   19533822 312580095 293046274 139.8G  5 Extended
/dev/sda5   19533824  27578367   8044544   3.9G 82 Linux swap / Solaris
/dev/sda6   27580416 312580095 284999680 135.9G 83 Linux

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
proc            /proc           proc    defaults          0 0
# / was on /dev/sda1 during installation
UUID=f2959403-fb9c-4e56-adbf-e5b7c1f63dd8 /     ext3    errors=remount-ro 0 1
# /home was on /dev/sda6 during installation
UUID=274b606c-c556-47cb-8db3-2733b7adac3f /home ext3    defaults          0 2
# swap was on /dev/sda5 during installation
UUID=5642269c-ada4-4466-a516-4a2360ee0ec1 none  swap    sw                0 0


This must be a FAQ. But there appear to be two ways forward.

1. Back-up /home, enlarge / partition, copy back-up back to new, smaller
/home partition (because /home will then start on a different cylinder so
data will be lost).

or

2. Carve out a new partition for /usr at end of disk which will free up
over 6 gb.

What have other people done?

Thanks.


Re: Email tutorial?

2018-04-24 Thread mark
On Tuesday, April 24, 2018 3:56:06 PM EDT J.W. Foster wrote:
> I am trying once again to get an email server to run on my server. I NEED a
> qualified tutorial or some real assistance in getting it operational and
> secure. I am aware that there are MANY primers or docs on this. Problem is
> they like most are done for an individuals system and are not really
> designed for my system. So here is what I'm working with:
> 1. all IP addresses are DHCP regulated by Spectrum internet.

<>

No offense meant, but without a static IP address for mail to be sent to, you 
are not able to run a mail server.  Why not?  Think of it this way: you live 
on a street with a row of houses (to keep this simple, ignore other streets 
or blocks with the same house numbers as yours).  You are in house #1.  
"Snail mail" can be sent to you as long as house #1 is specified.  Every 
couple of nights, someone comes along, takes the numbers off the houses and 
puts them back on randomly.  Now mail addressed to you (#1) may be delivered 
to a different house that happens to have the #1 on it.

DHCP is like that, with an added twist: when your IP address changes, the 
association between your mail domain and your machine's new address has to 
propagate to DNS servers across the Internet before you will be able to 
receive mail again.  That propagation can take days.
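
(You can watch this effect yourself by asking a public resolver what it
currently thinks your mail host's address is -- the hostname here is
made up:

$ dig +short mail.example.org @8.8.8.8 )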

So, step 1 for you is to either spend the money on a static IP address or 
check out one of the dynamic DNS services that present a stable name to the 
Internet and track your address when it changes.  My experience with those 
is that you will, from time to time, lose e-mail.  If you are serious about 
setting up a mail server, then complete step 1.

Mark




Re: I wish put another Debian, and with its command line.

2018-04-23 Thread mark
On Monday, April 23, 2018 4:08:38 PM EDT Gdsi wrote:
> Hi all.
> On my disk is a little free space at which I wish put another Debian, and
> with its command line. A few times I tried doing it but always there was a
> excess , as the installer don't say exactly what is into minimal inst-ion,
> and I'm afraid there's kernel only. If I shall not be setting check marks
> for additional components, will be there: 'apt', man pages and some editor?
> Thank.

What is your goal in doing another install?  If you want a text interface, 
then open a terminal window and make it full screen.  Poof!  Best of both 
worlds.

Mark



Re: Kmail - slow or no erasing of mails when deleting fast

2018-04-23 Thread mark
On Monday, April 23, 2018 3:07:49 AM EDT Hans wrote:
> Hi folks,
 

> I believe, that akonadi is to slow to write the changes into its database.
> When erasing mails slowly, then it is fast enough.
> 

Well,

Question, what version of KMail are you using (Help:About KMail)?

I'm using version 5.5.2 and while some things are a bit slow, this version is 
much better than those before it.  I've also seen "greyed out" e-mails which 
were artifacts of the folder not being updated properly, and as you point out, 
they go away when you force a folder refresh. Assuming you are using a current 
version, I'd post a bug report on the KDE site:

https://www.kde.org/applications/internet/kmail/

If you are running an "old" version, your report will be ignored as work is 
only done on new versions.

Good luck,
Mark



Re: openvpn client DNS security

2018-04-05 Thread Mark Fletcher
On Thu, Apr 05, 2018 at 11:48:51AM +0200, Roger Price wrote:
> Hi, I had a problem setting up DNS on an openvpn client.  I'll describe it
> here before submitting a bug report - I would appreciate comment on the
> security aspects.
> 

> 
> Looking more closely at script /etc/openvpn/update-resolv-conf, it begins
> with the line
> 
>  [ -x /sbin/resolvconf ] || exit 0
> 
> File /sbin/resolvconf is not present, because package resolvconf is not a
> prerequisite for openvpn, so the script fails silently!  This looks to me
> like a serious security problem.  Joe Road-Warrior is out there, connected
> to the "free" Wifi.  He follows corporate instructions to turn on his
> openvpn client, but because of the exit 0 he is still using the local
> thoroughly compromised DNS server.
> 

apt-cache rdepends resolvconf shows a dependency of openvpn on 
openresolv, which according to apt-file provides /sbin/resolvconf (and 
also, if I am reading apt-cache output correctly, depends on 
resolvconf...)

I can only assume one of the dependencies in that stack is a "suggests" 
rather than a "depends". If you are going to report a bug probably worth 
acknowledging this so you don't get turned away at the door.

... Yep, checking apt show openvpn, resolvconf is indeed a "suggests".
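
For anyone wanting to retrace that, the checks were roughly (apt-file
needs an apt-file update first):

$ apt-cache rdepends resolvconf
$ apt-file search sbin/resolvconf
$ apt show openvpn | grep -E '^(Depends|Recommends|Suggests):'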

Mark



Re: apt{-cache,-get,itude} show wrong version of package after update

2018-04-05 Thread Mark Fletcher
On Wed, Mar 28, 2018 at 09:31:11AM +0200, to...@tuxteam.de wrote:
> On Wed, Mar 28, 2018 at 07:47:05AM +0900, Mark Fletcher wrote:
> 
> [...]
> 
> > I'm not sure if you really did what it sounds like you did here, but if 
> > you did... you can't mix and match commands to apt-get and aptitude.
> 
> I think this is false, at least in such an unrestricted and
> sweeping way. Apt (and apt-get, its younger cousin) and aptitude
> are just front ends to dpkg and use the same data bases in
> the background.
> 
> In particular...
> 
> > You did apt-get update so you need to use apt-get upgrade, or 
> > dist-upgrade, or whatever the apt-get command is
> 
> ...apt update and apt-get update are equivalent (as most
> probably aptitude update is).
> 

It wasn't apt and apt-get that were being compared though, it was 
aptitude and apt-get. And there _is_ some sort of difference between 
those two such that you have to update with the right one; I'm sure I've 
seen discussion of that on this forum before (I don't have links 
though).

> > (I don't much use 
> > apt-get, have switched to the apt command since upgrading to stretch).
> 
> Apt is just a friendlier front-end for apt-get: the command
> outputs are not compatible (and you'll see a warning to that
> effect in apt, aimed at those who want to use apt's output
> in scripts), and aptitude has, AFAIK, some *extra* databases
> to record user intention, and a different dependency resolver,
> but the basic data sets (which packages are available, what
> state each is in, etc.) are common.

See above. The only person who mentioned apt was me, and even then only 
in the context of that's what I use nowadays. The OP never mentioned apt.

In any case, those "extra databases" are probably a pretty good reason 
not to mix and match front-ends in quite the way the OP was doing, even 
if it doesn't immediately lead straight to trouble trying to get one's 
system updated properly in the way I suggested it might.
> 
> > If you want to use aptitude upgrade, or dist-upgrade, or safe-upgrade, 
> > or whatever the command is (embarrassingly I have forgotten, I used 
> > aptitude for years _before_ upgrading to stretch) you need to first do 
> > aptitude update.
> > 
> > apt-get update followed by aptitude upgrade will lead to pain.
> 
> I don't think so: but I'm ready to be proven wrong!
> 

Certainly I have no proof except my experience and my (patchy) memory 
that I have seen discussion of this point on this list before.

Anyway the actual issue in this case turned out to be nothing to do with 
mixing and matching front-ends to dpkg. Glad the OP got his problem 
figured out.

Mark



Re: apt{-cache,-get,itude} show wrong version of package after update

2018-03-27 Thread Mark Fletcher
On Tue, Mar 27, 2018 at 07:50:03PM +0200, Jean-Baptiste Thomas wrote:
> After apt-get update, attempting to install ntp tries to
> download version 1:4.2.8p10+dfsg-3+deb9u1 and fails. It tries
> to download +deb9u1 because
> 
>   $ aptitude show ntp
>   Package: ntp
>   Version: 1:4.2.8p10+dfsg-3+deb9u1
>   State: not installed
>   [...]
> 
> and it fails because the version of the package in the Debian 9
> mirror listed in /etc/apt/sources.list is +deb9u2 :
> 
>   ntp_4.2.8p10+dfsg-3+deb9u2_amd64.deb
> 
> I don't understand what went wrong. apt-get update seemed to go
> well, only complaining about missing "DEP-11 64x64 Icons", which
> are presumably not a vital part of ntp.
> 
> How is this possible ? I'm confused.

I'm not sure if you really did what it sounds like you did here, but if 
you did... you can't mix and match commands to apt-get and aptitude.

You did apt-get update so you need to use apt-get upgrade, or 
dist-upgrade, or whatever the apt-get command is (I don't much use 
apt-get, have switched to the apt command since upgrading to stretch).

If you want to use aptitude upgrade, or dist-upgrade, or safe-upgrade, 
or whatever the command is (embarrassingly I have forgotten, I used 
aptitude for years _before_ upgrading to stretch) you need to first do 
aptitude update.

apt-get update followed by aptitude upgrade will lead to pain.
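
In other words, keep the pairs matched (safe-upgrade being the aptitude
spelling, if memory serves):

# apt-get update && apt-get dist-upgrade
# aptitude update && aptitude safe-upgrade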

Hope that helps

Mark



Re: Password Manager opinions and recommendations

2018-03-26 Thread Mark Fletcher
On Mon, Mar 26, 2018 at 08:34:28PM +0100, Brian wrote:
> On Sun 25 Mar 2018 at 22:43:26 +0200, Ángel wrote:
> 
> > On 2018-03-25 at 19:47 +0100, Brian wrote:
> > > 1 day after the breach your data had been compromised. Changing your
> > > password 10 days later on in your 1 month cycle doesn't seem to me to
> > > be reactive security. Better than nothing, I suppose, but closing the
> > > door after etc.
> > > 
> > > In any case, your 20 character, high entropy password was your ultimate
> > > defence. (Not unless Yahoo! didn't hash).
> > 
> > 
> > Sure. If someone stole your password, be that by compromising and
> > injecting a password-stealing javascript server side, due to a sslstrip
> > you didn't notice on that free wifi, perhaps just someone looking at the
> > keys you pressed when entering your password, etc. the data you had up
> > to that point in that service should be considered compromised.
> > 
> > However, if the password was changed N days/months later, as part of a
> > periodic password change, that would mean that data processed after that
> > date would no longer be in risk, whereas otherwise the account would
> > continue being accessible by the bad actors for years (assuming that you
> > are not using a pattern that removes the benefit or rotating the
> > password!).
> 
> I would be more accepting of this argument if it fitted with real world
> examples in other fields. Nobody offers the advice to change the locks
> on your front door or your car at regular intervals. But the computer
> security business has conjured up the "what if" argument to counteract
> commensense.
> 
It's pretty difficult to steal someone's keys without them realising it 
has happened. In contrast, password compromise happens without the 
victim's knowledge all the time.

Mark



Re: Debian 9 rocks, really

2018-03-26 Thread Mark Fletcher
On Mon, Mar 26, 2018 at 10:06:17AM +0200, Mart van de Wege wrote:
> Andre Rodier <an...@rodier.me> writes:
> 
> > Hello all,
> >
> > I have been using Linux since more than 20 years, and Debian Linux
> > since Potato.
> 
> Same here. I started out on Red Hat 6.2, and discovered Debian when it
> was on potato. I've been using some flavour of Debian personally since,
> and some flavour of it or RH professionally.
> 
> I love it. It's been great consistently, and 9 really shines. I even
> like systemd although I have some reservations about its design (I think
> it's a bit over-engineered).
> 
> Debian 9 give me dev tools, and tools to manage service resources better
> than ever. It's a lovely base system.
> 

#metoo , in a good way!

I started with Debian in 1996 -- the lovely Stephen Early, whom I 
occasionally see on this list, may have "fond" memories of porting the 
source code for the 1.3 development kernel onto my machine on floppy 
disks so he could help me get my brand spanking new ethernet card 
working! Yeah, probably not that fond memories...

In those days I dual-booted Windows and Debian. I was away for a few 
years after uni and then had a brief affair with SuSE but we don't 
talk about that in polite company. Came back to Debian when Woody was 
stable and been running Debian as the only OS on the box ever since. (I 
still run Windows but only in VMs, very much a minority use case for me 
now)

Over the years I have oscillated between the stable distribution and 
whatever was testing at the time. Now I run both, stretch on machines 
that are doing important stuff and buster to see what is coming.

As others have said, the combination of the philosophy, the dedication 
of the team, and the community make this a great way to spend one's 
computer's time. I love it.

Mark



Re: quick scripting 'is /P/Q mounted'

2018-03-14 Thread Mark Fletcher
On Tue, Mar 13, 2018 at 03:56:00PM -0400, The Wanderer wrote:
> On 2018-03-13 at 15:39, Joe wrote:
> 
> > On Tue, 13 Mar 2018 14:49:56 +0100 <to...@tuxteam.de> wrote:
> 
> That test can be spoofed, however, by the creation of a directory with
> the same name (and/or other characteristics) under the mount point while
> the mount is not active.
> 

Yes, but in most use cases one would not be worried about malicious 
actions, you are trying to protect against cock-ups.

> Even if you don't think anything malicious is ever going to try to spoof
> this in whatever case is at hand, can you be sure no script (or, for
> that matter, user) will ever attempt to create that directory under the
> mistaken impression that the mount is active?
> 

Yeah, that's a fair point though.

Mark



Re: quick scripting 'is /P/Q mounted'

2018-03-13 Thread Mark Fletcher
On Tue, Mar 13, 2018 at 08:49:58PM +1100, David wrote:
> On 13 March 2018 at 14:40, Mike McClain <mike.junk...@att.net> wrote:
> >
> > If my other computer is South40 and I want to mount South40's /docs
> > on my /south40/docs/ directory I can do that. As one script calls
> > another I want to know if I need to mount South40 without
> > $( mount | grep 'south40/docs').
> >
> > Suggestions?
> 
> Installing the package util-linux will provide the mountpoint command
> which exits true=0 if its argument is in use as a mountpoint. Example:
> 
> $ if mountpoint / ; then echo "exit status is $?" ; fi
> / is a mountpoint
> exit status is 0
> 
Unless I've misunderstood the question, you can tell if something is 
mounted at a mount point by checking whether anything is present under 
the mount point. E.g. if you know there is a directory /Y that gets 
mounted under mount point /X, make sure /X/Y doesn't exist while 
nothing is mounted, then check for the existence of /X/Y -- it will be 
there if the mount point is in use and not otherwise.
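
A sketch of that with the names from the thread -- 'known-entry' stands
for anything you know exists inside the export once mounted, and the
NFS mount syntax is illustrative (the mountpoint command upthread is
the cleaner test):

if [ -e /south40/docs/known-entry ]; then
    echo "South40 already mounted"
else
    mount South40:/docs /south40/docs
fi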

Mark



Re: can't install mplayer of stretch

2018-03-04 Thread Mark Fletcher
On Sun, Mar 04, 2018 at 10:01:23PM +, Long Wind wrote:
> there is a typo in my last post:  not -> now
> the cause might be i use installation CD 9.3.0 but now apt source is newest 
> (ftp.utexas.edu) 
> 
> On Sunday, March 4, 2018 4:55 PM, Long Wind <longwi...@yahoo.com> wrote:
>  
> 
>  the cause might be i use installation CD 9.3.0 but not apt source is newest 
> (ftp.utexas.edu)
> below is error msg of "apt-get install mplayer" -- is it possible to fix it? 
> Thanks!
> 
> Reading package lists...
> Building dependency tree...
> Reading state information...
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> 
> The following packages have unmet dependencies:
>  mplayer : Depends: libavcodec57 (>= 7:3.2.2) but it is not going to be 
> installed or
> libavcodec-extra57 (>= 7:3.2.2) but it is not going to be 
> installed
>Depends: libavformat57 (>= 7:3.2.2) but it is not going to be 
> installed
>Depends: libswresample2 (>= 7:3.2.2) but it is not going to be 
> installed
> E: Unable to correct problems, you have held broken packages.
> 
> 
>
Do apt update && apt upgrade as root, and see what it wants to do. You 
don't have to let it do it -- you can answer N when it asks for 
confirmation on the upgrade -- but you can see if it wants to upgrade a 
ton of packages. If it does, my suggestion would be to let it do so and 
then try the installation again.

The issue could be that it still thinks something is in the repository 
with a dependency on an older version of one of those libraries, and you 
need to update its understanding for it to see how it can fulfill your 
request to install mplayer.

If that doesn't help then you need to look closely at the libraries 
mplayer depends on and see if you have something pinning an older 
version of them or a conflict preventing them from being installed. I 
don't recall if apt has a why / why-not command, but aptitude does -- 
one early step would be to let the system tell you why it can't install 
the dependencies of mplayer. Note your error message complained about 
all of them, but it only takes one to actually have a problem, so check 
them all.
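
For the record, the commands I had in mind are along these lines:

# aptitude why-not libavcodec57
# apt-cache policy mplayer libavcodec57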

HTH

Mark



Re: requesting assistance troubleshooting Kmail

2018-02-24 Thread mark
This may or may not be helpful to Charlie, but others might find it helpful

I use kmail ( version 5.5.2).  I have it set up to download all e-mail as 
local IMAP(not stored on server) so that each message is a separate file on my 
PC.  Hasn't been a problem in a long time.  Recently, I've gotten some 
"ghostly" messages that I could not delete.  There was a cryptic error message 
pointing at the akonadi database.  

From searching the web, I found that there is a management program called 
akonadiconsole.  It allowed me to solve my problems.  
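
(On Debian it is packaged under the same name, so installing it should
just be a matter of, as root:

# apt install akonadiconsole )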

Here is what I did and some notes on what I found.  So, I downloaded and 
installed it.  Note that it should be run as a user (not as root) to access 
that user's e-mail.  The program opens with 5 windows.  Top left is 
"Collection" which are the various folders that the e-mail is stored in.  Top 
right doesn't have a title, but displays the "ID", "Remote Id"  and "MimeType" 
of each message.  What I found (your results may be different) is that the 
messages that I was having problems with did not have an entry in the MimeType 
column.  I right clicked on a message without a "MimeType" and chose "delete". 
(once I select a message, the body of the message displays below the message 
list) The problematic message was gone.  It took a very few minutes for me to 
go through my messages, and clean out the problems.

WORD OF WARNING:  This is a low level database access tool.  Changes that you 
make are immediate, and not reversible.  You can mess things up badly if you 
are not deliberate and careful.  (I did not try to mess my stuff up to find 
out), so be very careful.

Good luck, and best wishes,
Mark


On Saturday, February 17, 2018 4:07:27 PM EST charlie derr wrote:
> Apologies if this turns out to be a duplicate message. I tried to send
> to the list using my problematic Kmail client (and a message appeared
> in my sent mail folder) but so far I don't see it having posted when I
> look at the web archives for debian-user.
> 
> I sent a similar message to debian-kde this morning, but it appears
> there's not much activity there, so I figured I'd also try here.
> 
> I've been a debian GNU/linux user (I like KDE for my desktop, though I
> generally keep openbox (and gnome) available as "fallbacks") for quite
> some time now (decades at least) and am in general a very happy user.
> I've been using  both thunderbird and claws-mail as IMAP clients for
> some time, but recently (a few months ago) I set up Kmail (and I like
> it very much, as integration with gnupg was relatively painless). I am
> again using it as an IMAP client (with a different gmail address). I
> have two computers, both running debian 9 stretch. On one of them (the
> laptop), Kmail is functioning properly, but on the desktop machine
> which I'm writing from now, incoming email has not been appearing since
> the end of January. I don't see any obvious menu entries which would
> give me access to log files in order to try to troubleshoot the
> problem, so I'm looking for any advice as to how I might go about
> correcting the issue so this instance of Kmail can again receive email
> (it's possible that the problem is as simple as an incorrectly entered
> password though I've just attempted to reenter it with no improvement
> in mail fetching behavior). I'm not currently subscribed to this list,
> so CCing me is perfectly fine (though I'll monitor the web archives for
> responses on which I may not be CCed).
> 
>thanks so much in advance for any information anyone may have to
>share with me,
> 
>  ~c




Re: troubleshooting Kmail

2018-02-20 Thread Mark Neidorff
On Tuesday, February 20, 2018 2:07:29 AM EST deloptes wrote:
> Hi,
> 
<<>>
> > I am a long time kmail user.  I have noticed significant improvment in
> > stability and the filtering of incoming mail.  I use the filtering
> > extensively.
> > Before the last release, at the beginning of a KDE session, filtering was
> > OK,
> > but it slowed down with use.  In the latest version, it is extremely fast,
> > and
> > it doesn't get slower with use.  The only "bug" I have found in this
> > version
> > of kmail (5.5.2) is that an occasional "ghost" message will be in a folder
> > and
> > can't be removed.  I store emails locally via IMAP--one message per
> > file--and
> > except for the ghosts, I am extremely pleased.  I currently have over
> > 126,000
> > messages stored and about 8 "ghost" messages.  I searched through the
> > individual files that contain the e-mails and I can't find files for the
> > ghost
> > messages.
> > 
> > 
> > If the attitude of the KDE folks is the problem, please remember that they
> > are
> > not full time KDE programmers and customer service is probably not their
> > strong suit.
> 
> Look, either something works or does not work. Those bugs and KDE not
> fixing them is not acceptable.
> I know that they are not working full time or for profit. This is also not
> an excuse. Don't try to cover them and their attitude, please.
> It is pointless. When they bring up a working product, I will start using
> it and I mean working at acceptable level.
> Those problems you or others describe can not qualify the product as stable.
> I am willing to do some compromise on my requirements, but there is too
> much to compromise on, looking at KDE.
> And as I said - the biggest problem is their attitude. The attitude to
> release crap in stable and call it stable - call it whatever you want but
> not stable!
> 
> > I don't know if you consider this a valid comparison or not, but:
> > In October 2017 (as I recall), my bank (which shall remain nameless)
> > announced
> > that there would be a new version of the on-line access software coming
> > out on
> > January 1st.  Then, around January 10th they announced that the upgrade
> > had
> > some unresolved issues, and would not be rolled out until February 1st.
> > February 1st arrived and passed.  The new software was put in place on the
> > 12th.  Since then, I have been unable to login to my account.  No help on
> > the
> > screen.  When I called last week, they said that they were aware of the
> > problem
> > and were working very hard to resolve it.  No apology.  They can tell me
> > my
> > balance over the phone, but that is about it.  IMO, this is absurd.
> > 
> > Well this is what I am talking about - KDE is exactly the same - absurd!
> 
> I have to admit that KDE5 is much better that KDE4, but still - no stable
> and with that attitude and mind set, I doubt they will ever bring up
> something stable, which is really a pity.
> 
> I was involved in couple of discussions with them back in 2007 or 2008
> after they released the KDE4 crap. Can you imagine this was 10y ago.
> 
> regards

Deloptes,

I respect your opinion, and the many contributions that you have made to this 
list.  You and I have both been more than annoyed with bad attitudes, you with 
KDE me with my bank.  I pointed out the problem that I had and how it has been 
mishandled, IMO. You mentioned "those bugs" but you haven't given specific 
examples.  Please give the examples.

Thanks,
Mark

-- 
Its not whether you win or lose, its how you place the blame...


