Re: Konsole is not bash

2023-08-20 Thread Bob Weber

On 8/20/23 10:28, gene heskett wrote:
I cannot make bash's redirection (cmd 2>&1 >tmp/cmd.log) work in Konsole. What 
terminal actually uses bash for the heavy lifting?


Cheers, Gene Heskett.


In Konsole it's in the settings for the profile you are using.  Mine just says 
bash, not /usr/bin/bash.  If a profile uses ssh, that will be there also.  It's 
under "Settings/Edit current profile" or "Settings/Manage profiles".


--


*...Bob*

Re: apt-cacher internal error (died)

2022-09-21 Thread Bob Weber

On 9/21/22 10:36, Darac Marjal wrote:



On 21/09/2022 14:07, Adam Weremczuk wrote:

Hi David,

There is still something wrong with my /etc/apt/sources.list

Perhaps caused by stretch reaching end of life on 30 June 2022.

Can somebody provide me with a tested list of mirrors for stretch working in 
Sep 2022 for apt-cacher-ng server and clients?


I've tried several different sets, getting no errors from "apt update" on the 
server (which has internet connectivity).


Every time I repeat this list in /etc/apt/sources.list on a client, replacing 
the FQDN (e.g. deb.debian.org or security.debian.org) with my server's IP and 
port (192.168.100.1:3142), I get DNS errors for the security mirror as below:


Err:5 http://192.168.100.1:3142/debian-security stretch/updates Release
  503  DNS error for hostname debian-security: Name or service not known. If 
debian-security refers to a configured cache repository, please check the 
corresponding configuration file.


I'm no expert in apt-cacher-ng, but the error here says that it's trying to 
look up "debian-security" as a hostname. If I'm reading this page correctly, 
you shouldn't be changing /etc/apt/sources.list to point to apt-cacher-ng; 
instead, you should continue to point it to deb.debian.org or 
snapshot.debian.org and tell apt to use apt-cacher-ng as an HTTP proxy.


The protocols that an HTTP server and an HTTP proxy use are _slightly_ different. 
Instead of a client asking a server "Give me /path/to/index.html", it needs to 
tell the proxy "Give me /path/to/index.html from example.com". I suspect your 
problem comes from trying to download packages from apt-cacher-ng, rather than 
proxying through apt-cacher-ng.





or

503  DNS error for hostname security: No address associated with hostname.

Perhaps /etc/apt-cacher-ng/acng.conf on the server needs amending as well?

I've found a line there that reads:

Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian # Debian 
Archives


Regards,
Adam



On 13/09/2022 05:54, David Wright wrote:

Err:5 http://192.168.100.1:3142/security stretch/updates Release
    503  DNS error for hostname security: No address associated with
hostname. If security refers to a configured cache repository, please
check the corresponding configuration file.
E: The repository 'http://192.168.100.1:3142/security stretch/updates
Release' does not have a Release file.


I use apt-cacher-ng also.  The apt proxy is set up in each client with the 
following file:


-- /etc/apt/apt.conf.d/000apt-cacher-ng-proxy

# This configuration snippet is intended to be stored in /etc/apt/apt.conf.d/
# on the client system in order to change a regular setup to use apt-cacher-ng.
#
Acquire::http::Proxy "http://172.16.0.1:3142/";

# Little optimization. A value of 10 has been used in earlier versions of
# apt-get but was disabled in the beginning of the second decade because of
# incompatibilities with certain HTTP proxies. However, it is still beneficial
# with proxy servers that support it well enough (like apt-cacher-ng).
#
Acquire::http::Pipeline-Depth "23";

-

That tells apt to use a proxy on the server 172.16.0.1 at port 3142.  The source 
list files remain the same.


For example, for testing:

deb http://deb.debian.org/debian/ testing main non-free contrib

apt-cacher-ng can even connect to an external proxy (in its setup file), but if it is 
running on a machine that can get to the internet it will not need an additional 
proxy (why waste resources).


This makes apt-cacher-ng work just as if the client were connected directly to 
the internet; adding the 000apt-cacher-ng-proxy file above is the only change 
needed.


...bob



Re: Windows on VMware on Deb 11: safely usable?

2022-08-22 Thread Bob Weber

On 8/22/22 08:41, Tom Browder wrote:

On Wed, Aug 17, 2022 at 21:39 step...@gmail.com wrote:

On 8/17/22 19:35, Stefan Monnier wrote:
> Tom Browder [2022-08-17 05:53:05] wrote:
>> I would love to run Windows on a VM on Debian iff I can have it be 
reliable
>> enough to use with reasonable response (no games, just Office 360, IO
>> Drive, H&R Block, and such). I haven't kept up with the VM world but a
>> quick search shows VMware might be a good choice.
>
> Last I had to run a Windows VM I used kvm (aka Qemu) and that worked
> very nicely.  It's easy to install (it's in the Debian repositories),
> very featureful, and used for "real systems" (tho in my case I always
> used it very punctually to run some specific tool only available in
> Windows).

Yep; same. Ran multiple windows vms in kvm (libvirt/qemu). Stable and solid.


So I will try Debian 11's packages "qemu-kvm" and "aqemu" and install Windows 
10 as a test on my current main host, but only if I can remove it all if I need 
to and if it will not interfere with my smooth-running setup. Is that true?


Note if I proceed and need help, I will start a new thread.

-Tom


Why not try the virt-manager package?  At least it has been updated recently.  
See https://virt-manager.org/ for details. It even runs VMs on remote machines 
over ssh ... a function I've used before.


When you run a VM it displays it in a window with full graphics and mouse 
support.  I have 20 or so VMs ... even several Windows 10 ... to try updates for 
Debian testing before I commit them to my main host machine.  You can easily set 
the number of CPUs, memory, disk space, graphics, network and more for the VM.  
You can pass USB devices to the VM all through virt-manager.


I have run a weather station in a Debian VM for over 8 years with little 
problem.

I am running a 10+ year old AMD 6 core CPU with 24GB memory.  Windows 10 seems 
to run fine with 5GB of memory.



--


*...Bob*

Re: Is there an easy way to get the latest version number of a source package that is available?

2022-06-27 Thread Bob Weber

On 6/27/22 10:31, Tim Woodall wrote:

Hi,

apt-get --only-source --download-only source 

will download the latest version of the source package.

Is there a one liner that will give me the version of the package
(including the epoch) without downloading the package and parsing the
dsc?



Tim.



I use 2 aliases that have served me well (almost daily).

Search for any pattern of package names:

alias al='apt list | grep '

Search for only installed package names:

alias ali='apt list --installed | grep '

al linux-im gives me this:


linux-image-5.10.0-8-amd64/now 5.10.46-4 amd64 [installed,local]
linux-image-5.14.0-3-amd64/now 5.14.12-1 amd64 [residual-config]
linux-image-5.15.0-2-amd64/now 5.15.5-2 amd64 [installed,local]
linux-image-5.15.0-3-amd64/now 5.15.15-2 amd64 [installed,local]
linux-image-5.18.0-2-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-amd64-unsigned/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-amd64/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-cloud-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-cloud-amd64-unsigned/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-cloud-amd64/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-rt-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-rt-amd64-unsigned/testing,unstable 5.18.5-1 amd64
linux-image-5.18.0-2-rt-amd64/testing,unstable 5.18.5-1 amd64
linux-image-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-amd64-signed-template/testing,unstable 5.18.5-1 amd64
linux-image-amd64/testing,unstable 5.18.5-1 amd64
linux-image-cloud-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-cloud-amd64/testing,unstable 5.18.5-1 amd64
linux-image-rt-amd64-dbg/testing,unstable 5.18.5-1 amd64
linux-image-rt-amd64/testing,unstable 5.18.5-1 amd64

and ali linux-im


linux-image-5.10.0-8-amd64/now 5.10.46-4 amd64 [installed,local]
linux-image-5.15.0-2-amd64/now 5.15.5-2 amd64 [installed,local]
linux-image-5.15.0-3-amd64/now 5.15.15-2 amd64 [installed,local]


--


*...Bob*

Re: Networking book recommendation

2022-05-03 Thread Bob Weber

On 5/3/22 17:14, Tom Browder wrote:


I appreciate all the responses, and I realize, once again, that I should have 
given a little more background for the question:


I have been running 10+ websites using SNI on Apache on two leased remote 
servers for many years. I am now moving the whole operation, gradually, to 
operate out of my home on my own Debian server. During those years I've had 
several hardware failures that were hard to deal with remotely, hence the 
decision to come home (especially since I now have a bit more space for the 
additional equipment).


I have been using a firewall and iptables to minimize inbound traffic, but the 
details some have sent are very helpful for my current plan.


In addition to the webserver being accessed externally, I will be sshing into 
my home server while traveling.


Thanks to all.

-Tom


Have you thought of using a small VM in the cloud?  I have been running a 
droplet at Digital Ocean for several years.  For $5 a month I get a fast 1-cpu 
VM, 25G of file space, 1G of memory and a static IP address.  I have several 
web sites there, email for my family, and at times a VPN.  I run Debian ... it's 
just like my other systems so it's easier to maintain.  I use the free 
letsencrypt service for the certificates for my web sites. The only other cost 
is for the DNS names for my sites (which you would need if you did this from home).


I access it over ssh on a non-standard port to keep the knockers out.  I use ssh 
keys to log in, with passwords disabled.  If you mess up, you can access the site 
over a web-based shell.  I use shorewall for my firewall (iptables based) and 
fail2ban to watch my logs there and block IPs that are up to mischief.  I also 
block the IP ranges of China and Russia.


Depending on your needs you may need more memory or file space but for $5 a 
month this has been a great way to host my web sites, email and VPN.  You could 
even set up a VPN to connect back to your system at home when you are on the 
road.  So this keeps all the traffic off your home systems and network.


--


*...Bob*

Re: Mounting NFS share from Synology NAS

2022-02-02 Thread Bob Weber

On 2/2/22 07:36, gene heskett wrote:


Sounds like how my network grew, with more cnc'd machines added. But I
was never able the make MFSv4 Just Work for anything for more than the
next reboot of one of the machines.  Then I discovered sshfs which Just
Does anything the user can do, it does not allow root access, but since I
am the same user number on all machines, I just put whatever needs root
in a users tmp dir then ssh login to that machine, become root and then
put the file wherever it needs to go. I can do whatever needs done, to
any of my machines, currently 7, from a comfy office chair.
Stay well all.

Cheers, Gene Heskett.


I second the sshfs approach.   I use it between several Debian servers and have 
been happy with the results.  Once set up in the fstab, a click in a GUI or a 
mount command on the CLI mounts the remote server on the directory specified in 
the fstab.


A sample of a line in the fstab (check docs for more options):

sshfs#r...@172.16.0.xxx:/   /mnt/deb-test  fuse user,noauto,rw    0   0

The user at the remote system is root in this example.  Not a good idea unless 
you are the only one who can log in to your system. I always use ssh keys.  If 
they are created without a passphrase, sshfs won't ask for one when it is 
mounted (I need this for my backup system Backuppc).  I even use sshfs to 
access a Digital Ocean droplet I have over the internet.


The current NAS you have might work with sshfs if its ssh server supports 
SFTP.


--


*...Bob*

Re: Mounting NFS share from Synology NAS

2022-02-01 Thread Bob Weber

On 2/1/22 10:32, Christian Britz wrote:


This is my entry in /etc/fstab:
diskstation:/volume1/Medien /Daten nfs
nfsvers=4,rw,x-systemd.automount,noauto 0 0


Have you tried the user option in fstab?

user - Permit any user to mount the filesystem.

nouser - Only permit root to mount the filesystem. This is also the default 
setting.

--


*...Bob*

Re: what to do with USB stick that gives badblocks errors

2021-11-24 Thread Bob Weber

On 11/24/21 11:52, Kenneth Parker wrote:

Try Steve Gibson's InitDisk.  It claims:

"Experience has shown that USB thumb drives believed
to be dead may be brought back to life with InitDisk."

https://www.grc.com/initdisk.htm

Steve has done a lot of testing on USB flash drives and has discovered ways of 
getting down to the raw drive past the controller.


Check out his other freeware, especially ShieldsUP!

--


*...Bob*

Re: aboutdebian.com

2021-11-20 Thread Bob Weber

On 11/19/21 22:38, A_Man_Without_Clue wrote:



On 11/20/21 12:07 AM, Peter Ehlert wrote:

On 11/19/21 6:52 AM, Nicholas Geovanis wrote:

On Fri, Nov 19, 2021, 1:05 AM Nate Bargmann <n...@n0nb.us> wrote:

 * On 2021 18 Nov 23:00 -0600, A_Man_Without_Clue wrote:
 > Does anyone remember the site that existed in the past, aboutdebian.com?

 I can't say that I do.


I do remember it and it was a good resource at one time. My
recollection is that the relevant contents were moved to the Debian
wiki. But I can't support that.

I also remember it as being a topic of conversation...

 > I wonder if the contents were moved somewhere else or if they are not
 > available at all?

 It looks like the last time it was online with content was
 approximately
 29 Feb 2020:

 https://web.archive.org/web/20200229050405/http://www.aboutdebian.com/


something is odd, I see a post on the https://aboutdebian.com/ page
dated 2021-10-10


 I see that as of that date, the site had not been updated for Buster
 which by that time had been released nearly half a year earlier.

 After that the Web archive shows a blank page and captures from last
 month show nothing Debian related.

 - Nate

 --
 "The optimist proclaims that we live in the best of all
 possible worlds.  The pessimist fears this is true."
 Web: https://www.n0nb.us
 Projects: https://github.com/N0NB
 GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819



Sad, it's long gone and the contents are virtually lost. It was a very good
tutorial resource, so I just miss it. I have several pages printed out on
paper that I still use as a basic tutorial.


Thanks everybody.


It's on the Internet Archive at least back to about 2018 and before.

--


*...Bob*

Re: RAID-1 and disk I/O

2021-07-17 Thread Bob Weber

On 7/17/21 08:34, Urs Thuermann wrote:

Here, the noticable lines are IMHO

 Raw_Read_Error_Rate (208245592 vs. 117642848)
 Command_Timeout (8 14 17 vs. 0 0 0)
 UDMA_CRC_Error_Count(11058 vs. 29)

Do these numbers indicate a serious problem with my /dev/sda drive?
And is it a disk problem or a transmission problem?
UDMA_CRC_Error_Count sounds like a cable problem for me, right?

BTW, for a year or so I had problems with /dev/sda every couple of months,
where the kernel set the drive status in the RAID array to failed.  I
could always fix the problem by hot-plugging out the drive, wiggling
the SATA cable, re-inserting and re-adding the drive (without any
impact on the running server).  Now, I haven't seen the problem for
quite a while.  My suspicion is that the cable is still not working very
well, but failures are not frequent enough to set the drive to "failed"
status.

urs

I switched from Seagate to WD Red years ago since I couldn't get them to last 
more than a year or so.  I have one WD that is 6.87 years old with no errors.  
Well past the 5 year life expectancy. In recent years WD has pulled a marketing 
controversy on their Red drives.  See:


https://arstechnica.com/gadgets/2020/06/western-digital-adds-red-plus-branding-for-non-smr-hard-drives/

So be careful to get the Pro version if you decide to try WD. I use the 
WD4003FFBX (4T) drives (Raid 1) and have them at 2.8 years running 24/7 with no 
problems.


If you value your data get another drive NOW .. they are already 5 and 5.8 years 
old!  Add it to the array and let it settle in (sync) and see what happens.  I 
hope your existing array can hold together long enough to add a 3rd drive.  I 
would have replaced those drives long ago from all the errors reported.  You 
might want to get new cables also since you have had problems in the past.


I also run self tests weekly to make sure the drives are ok.  I run smartctl -a 
daily also.  I also run backuppc on a separate server to get backups of 
important data.


There are some programs in /usr/share/mdadm that can check an array but I would 
wait until you have a new drive added to the array before testing the array.  
Here is the warning that comes with another script I found:




DATA LOSS MAY HAVE OCCURRED.

This condition may have been caused by one of more of the following events:

. A LEGITIMATE write to a memory mapped file or swap partition backed by a
    RAID1 (and only a RAID1) device - see the md(4) man page for details.

. A power failure when the array was being written to.

. Data corruption by a hard disk drive, drive controller, cable etc.

. A kernel bug in the md or storage subsystems etc.

. An array being forcibly created in an inconsistent state using --assume-clean

This count is updated when the md subsystem carries out a 'check' or
'repair' action.  In the case of 'repair' it reflects the number of
mismatched blocks prior to carrying out the repair.

Once you have fixed the error, carry out a 'check' action to reset the count
to zero.

See the md (section 4) manual page, and the following URL for details:

https://raid.wiki.kernel.org/index.php/Linux_Raid#Frequently_Asked_Questions_-_FAQ

--

The problem is that if a mismatch count occurs, which drive (RAID 1) is 
correct?  I also run programs like debsums to check programs after an update so 
I know there is no bit rot in important programs, as explained above.


Hope this helps.

--



*...Bob*

Re: Clipboard

2021-06-23 Thread Bob Weber

On 6/23/21 08:08, William Lee Valentine wrote:

My comment may be rudimentary.

I copy onto the Windows clipboard all the time (by selecting text with
the mouse and then pressing control/C). I am not then able to paste the
selected text into a text document with one click: I must minimize the
original document (or close it), and I then have two choices.

(1) I can open a second document, click at some point within it, and
press control/V to paste my text into the second document.

(2) I can execute a text editor like Notepad and press control/V, to
paste the selected text into the text editor. I can then tell the text
editor to save the new file as a text document.

-- William Lee Valentine

You didn't mention which desktop environment you use.  I use KDE and it is really 
easy to cut and paste.  I believe other DEs have this same capability since I 
have used it in VMs running those other systems.   The page at 
"https://userbase.kde.org/Klipper" describes the clipboard:


"Within Plasma there are two different buffers. One is the clipboard and the 
other is the selection. The clipboard buffer is filled when you press Ctrl + X 
or Ctrl + C and pasted by using Ctrl + V. The selection buffer is filled by 
simply marking some text and pasted by pressing the middle mouse button. Having 
said that it is important to know that Klipper can be configured to hold both 
buffers."


So after simply highlighting text you can go to another window (say a new 
document) and paste the text with the middle mouse button (usually a scroll 
button, or emulated on a laptop by hitting both left and right touch-pad buttons 
at the same time).  No ctrl-c ctrl-v.  Plus your clipboard can hold as many 
items as you want (I have mine set to 60).  So you could go through one document 
highlighting things you want to copy over and over ... then go to a second 
document and hit Alt-c (quick display of highlighted text) to choose what 
you want to paste.  The item to be pasted is at the top of the list.  You can 
choose another item simply by left clicking that item ... going back to the new 
document and hitting the middle mouse button!


So you don't have to keep going back and forth between documents: just highlight 
multiple times until you are done (assuming you have set the count high enough) 
and then go back to the new document and paste each item in whatever order you want.


Hope this helps.

--



*...Bob*

Re: $PATH problem

2021-06-11 Thread Bob Weber

On 6/11/21 19:09, Gary L. Roach wrote:

systemctl status backuppc.service


That's good.  Does the web interface work? Try:

http://ip-of-server/backuppc/index.cgi?action=summary

You might have to log in.  Check the file /etc/backuppc/htpasswd. On my system it 
has the user and encrypted password for the backuppc user.  If you don't know the 
password then as root run "passwd backuppc".  That will allow you to change the 
backuppc user password.


--


*...Bob*

Re: $PATH problem

2021-06-11 Thread Bob Weber

On 6/11/21 16:48, Gary L. Roach wrote:

Hi all,

Operating System: Debian GNU/Linux 10
KDE Plasma Version: 5.14.5
Qt Version: 5.11.3
KDE Frameworks Version: 5.54.0
Kernel Version: 4.19.0-16-amd64
OS Type: 64-bit
Processors: 4 × AMD FX(tm)-4350 Quad-Core Processor
Memory: 15.6 GiB of RAM

I have been trying to install Backuppc on my system and have run into a 
problem with the $PATH settings. I put the proper path setting in the 
/etc/environment file which, on login, is supposed to supply that path to all 
users. This worked fine for gary and root but not for the backuppc user. The 
path for backuppc doesn't have all of the paths needed.


Any suggestions?


Gary R.

My backuppc server has no /etc/environment file.  All the commands needed by 
backuppc are in its config with full paths to each command.  There is a MyPath 
setting in the Server Main Configuration Editor settings under Server, but the 
doc says it's only for keeping perl happy.  You get to this under the web 
interface for backuppc.


The Debian install should have fixed all the settings for you so that the server 
part just works.  You just have to add machines and tell backuppc what to back 
up on each machine.


Does the backuppc service run?  You can see if there are any startup problems 
with

"journalctl -b -ru backuppc"

Each host has a log file and under the Server section there is a log file for 
backuppc itself.


The backuppc data is stored under /var/lib/backuppc (on my system that is a 
symbolic link to the actual disk/partition large enough to store all the backed 
up data, backuppc configs, log files and ssh keys).



--


*...Bob*

Re: passwordless SSH

2021-05-29 Thread Bob Weber

On 5/29/21 16:12, Gary L. Roach wrote:

Operating System: Debian GNU/Linux 10
KDE Plasma Version: 5.14.5
Qt Version: 5.11.3
KDE Frameworks Version: 5.54.0
Kernel Version: 4.19.0-16-amd64
OS Type: 64-bit
Processors: 4 × AMD FX(tm)-4350 Quad-Core Processor
Memory: 15.6 GiB of RAM

I have been trying to set up passwordless SSH for a Backuppc system. I have 
three Debian 10 systems (including the server). SSH sets up fine on one of 
the client machines and "ssh backu...@192.168.254.xx" starts without asking for 
a password. The other machine (supposedly identical) not only asks for a 
password but will not accept any of the known passwords.  If I go to the 
offending machine and attempt to su to the backuppc user, I am asked for a 
password and no passwords work. This doesn't allow the use of ssh-copy-id for 
transferring the encryption key to that machine. I have tried to reset the 
backuppc password three times but that did not solve the problem. In both systems 
the public key is stored in /var/lib/backuppc/.ssh as id_rsa.pub.


I also have a Windoz 7 laptop that I want to include and have managed to get 
ssh and rsync installed (what a mess that was). I have not tried to get 
passwordless access to that yet. For later.


Any insights?

Gary R.

The servers that are being backed up do not have a backuppc user.  The backuppc 
server needs root access on them to reach all the files you may need to back 
up.  These commands will set up that root access on each server being backed 
up, from the backuppc server and backuppc user.


You didn't say whether the working passwordless ssh was on the host 
(backuppc machine) or not, so I will give you general instructions here.  
Some commands will need root access on the backuppc server to run.



THESE COMMANDS WILL BE RUN ON THE BACKUPPC SERVER FOR EACH MACHINE TO BACKUP.

First make sure you can login to the backuppc user.  Look at your passwd file 
in /etc.  It will have an entry for backuppc ... it should have a user home 
directory and user command interpreter listed.  Look at your own entry to see 
how the entries are formatted or look at "man 5 passwd". The directory should 
be the backuppc base directory /var/lib/backuppc and command interpreter 
/bin/bash.  Create a password for the backuppc user (as root) with "passwd 
backuppc".  Now login to the backuppc user with that password (or just "su - 
backuppc" from root).



Now follow the instructions at:

https://linuxize.com/post/how-to-setup-passwordless-ssh-login/

You will need to follow those instructions for each linux server you want to 
back up.  The .ssh directory will be under the directory listed in the passwd 
file (/var/lib/backuppc). DO NOT USE A PASSPHRASE to create the key pair files! 
They should go into the /var/lib/backuppc/.ssh directory (only do this ONCE!).  
In step 03, the username should be root@ip-address (you will need root access on 
that machine to back up all files from the backuppc user on the backuppc 
server).  In step 04 you should be able to "ssh root@ip-address" without a 
password.



THESE COMMANDS ARE RUN ON EACH SERVER TO BE BACKED UP.

If you can't "ssh root@ip-address" without a password, you may also need the 
line

"PermitRootLogin yes"

in the /etc/ssh/sshd_config file on each server to be backed up.

If you want to, you can follow the instructions at "Disabling SSH Password 
Authentication".  Be very careful to follow the instructions closely.  These 
are not needed to get backuppc running!  You will need to be able to sudo into 
root from an unprivileged user to get root access, so be VERY careful to follow 
the instructions.


...Bob




Re: going beyond a ch341 uart-usb convertor

2021-04-12 Thread Bob Weber

On 4/11/21 23:55, Gene Heskett wrote:

Greetings all;

Building a design/builder for a 3d printer, which when a std usb to
printer cable is connected between the computer and the 3d printer,
Identifies as a ch341 convertor cable once it is plugged into the
printer.

conman seems helpless, as does cutecom.

cura has a monitor that can drive an ender 3 over this same cable but
I've not been able to establish a connection to cura, for whatever
reason.

Perhaps conman or cutecom is not the correct way to do it. IDK, and the
docs for this printer are as non-existent as the Chinese company that
made it. It's labeled as a NEWEREAL M-18-S, and is nearly twice as big as
an ender 3. Work envelope is 310x310x400mm, and it is the same bed-slinger
style as an ender 3.

What would be the next thing to try to discover why it's not working?

Cheers, Gene Heskett


I have used putty to connect to my ender 3 and a CNC3-3018.  I used 
/dev/ttyUSB0, 115200 baud and type serial.  It's half duplex so you don't see 
what you type unless you turn on echo mode.  As mentioned before, you need to be 
in the dialout group for this to work.  I have connected bCNC to the 3018 so I 
know that works.  I usually just transfer the gcode over memory cards from my 
desktop host.  I run freeCAD, slic3r and lightburn all in a VM (I don't trust 
appimages etc) on my desktop to do the design work.  I have cura but I haven't 
tried to connect it via usb yet.


--


*...Bob*

Re: transfer speed data

2020-12-22 Thread Bob Weber

On 12/22/20 7:55 PM, mick crane wrote:

hello,
I have a buster PC and a bullseye PC which are both supposed to have gigabit 
network cards connected via a little Gigabyte switch box.
Transferring files between them (I forget which of scp or rsync shows the 
transfer speed per file), the maximum is 50 Mbs per file.

Would you expect that to be quicker ?

mick

To check the network try iperf3.  Set one PC to be a server and the other a 
client.  It will show the transfer rate between the 2 PCs.  It uses port 5201, 
so if you have a firewall on the server-side PC you will need to open that port.



Here is what I get between 2 PCs running gigabit NICs.


Connecting to host bingo, port 5201
[  5] local 172.16.0.3 port 40752 connected to 172.16.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   939 Mbits/sec    2    277 KBytes
[  5]   1.00-2.00   sec   112 MBytes   943 Mbits/sec    0    277 KBytes
[  5]   2.00-3.00   sec   112 MBytes   942 Mbits/sec    0    277 KBytes
[  5]   3.00-4.00   sec   112 MBytes   938 Mbits/sec    0    277 KBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0    277 KBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0    277 KBytes
[  5]   6.00-7.00   sec   112 MBytes   938 Mbits/sec    0    277 KBytes
[  5]   7.00-8.00   sec   113 MBytes   944 Mbits/sec    0    277 KBytes
[  5]   8.00-9.00   sec   110 MBytes   922 Mbits/sec    0    277 KBytes
[  5]   9.00-10.00  sec   112 MBytes   938 Mbits/sec    0    277 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec    2             sender
[  5]   0.00-10.00  sec  1.09 GBytes   938 Mbits/sec                  receiver


iperf Done.


--



*...Bob*

Re: package of cfdisk

2020-11-01 Thread Bob Weber

On 11/1/20 5:53 PM, gregoire roumache wrote:

Hello,

I've found multiple bugs while using the command cfdisk. I've written a 
report to sub...@bugs.debian.org; however, it 
was rejected because I didn't specify a package (on the very first line 
of the mail body). Unfortunately, I couldn't determine what package cfdisk was 
part of. If you could give me its name, it would be very helpful!


Sincerely,

Grégoire Roumache


"apt-file search cfdisk" (without quotes) will also find the package.  You might 
have to install the apt-file package to use it.


--


*...Bob*


Re: Quick help on

2020-10-08 Thread Bob Weber

On 10/8/20 1:30 PM, Charles Curley wrote:

On Thu, 8 Oct 2020 10:34:33 -0400
rhkra...@gmail.com wrote:


I've recently found that if I leave the machines with the KVM set to
the Jessie machine, when I come back, the power light on the monitor
is red, but does not come back to life when I move the mouse or press
a key.

My work-around for a similar situation is CTL-ALT-F1, then release all
keys, then CTL-ALT-F7. Which I believe forces X to re-initialize itself.

I have the same problem in qemu VMs.  I do the equivalent of CTL-ALT-F1 then 
CTL-ALT-F7 to get the GUI back.  It started in the last year, so it is something 
in the system that is doing it.  Googling wasn't much help.  I have put all the 
settings (in KDE) to not sleep or turn off the monitor, to no avail.


--


*...Bob*


Re: Two questions about LUKS in a file container

2020-09-12 Thread Bob Weber

On 9/12/20 12:10 PM, rhkra...@gmail.com wrote:

I'm thinking about putting my backup encrypted files in a LUKS filesystem within
a file instead of on a dedicated partition (for a few reasons).

I have two questions about that:

* if I don't have that LUKS filesystem "mounted" and open and I write to it,
I assume (or hope) that nothing will get written and I will get a warning or
error message of some sort?

* doesn't exactly apply to this situation, but, on the other hand, if my
"source" / original / non-backup LUKS system is in a file instead of on a
dedicated partition, and I use commands (like rsync or such) to copy the
unencrypted files not on the LUKS system, but I use options like the ones to
stay on the current filesystem (--one-file-system), I assume (or hope) that the
stuff in the encrypted partition will not get copied?

  


I assume that you are referring to something like is described here:

https://willhaley.com/blog/encrypted-file-container-disk-image-in-linux/

The procedure described there creates a file, encrypted.img, that is a luks volume 
and requires a filesystem (mkfs.ext4) and mount point to be used as encrypted 
storage.  If you want, you can leave out --key-file mykey.keyfile and you will be 
asked for a pass phrase.


Files can be copied with rsync to the mount point $HOME/Private/ and they will 
be encrypted and not visible to the system after the umount and cryptsetup 
luksClose commands.


In my experiment, the file encrypted.img can be written to or truncated while it 
is being used as a mounted encrypted volume, but once you umount and luksClose, 
ALL DATA in the file is lost!  So to be safe, let the file encrypted.img belong to 
root (with mode 600) and let a normal user write to the mounted volume at 
$HOME/Private/ after the chown command is run for the user.  Once the file 
encrypted.img is unmounted and closed out with luksClose, it can be copied or 
moved to other places, like a flash drive, like any other file.


Warning: If you forget to open and mount the file encrypted.img to 
$HOME/Private/ and you copy files to $HOME/Private/ it will appear to work 
correctly but they will not be encrypted!  If you don't move the files out of 
$HOME/Private/ before you correct the mistake and mount encrypted.img you will 
not see those files in $HOME/Private/ until you unmount encrypted.img.


Note:

By saying mount encrypted.img I mean the 2 commands: "cryptsetup luksOpen 
encrypted.img myEncryptedVolume" and then "mount /dev/mapper/myEncryptedVolume 
$HOME/Private/".


The unmount encrypted.img commands are "umount $HOME/Private/" and "cryptsetup 
luksClose myEncryptedVolume".



I am not an expert on cryptsetup.  I have used these commands before, but I was 
curious to see whether the system protected encrypted.img while it was being 
used.  I see that root can muck around with or delete encrypted.img, making it 
unusable, so your only protection is, just like for other files, backup!




--


*...Bob*


Re: Slow SSH over WLAN? How to check?

2020-09-03 Thread Bob Weber

On 9/3/20 9:48 AM, riveravaldez wrote:

Hi,

I'm under the impression that one of my LAN-SSH connections is working
poorly. When I SSH from a wired desktop machine (generic) to a
Wi-Fi-ed notebook (ThinkPadX220) things take irregular and seemingly
excessive amounts of time to happen (you type and the text appears a
moment later, etc.). This is just a
desktop→cable→router→Wi-Fi→notebook (W)LAN scheme.
Issue appears also logging from notebook to desktop.

Also I've been having some apparent poor performance in simple
web-navigation with that notebook (always through Wi-Fi), so, I'm
suspecting: maybe some issue with the firmware-iwlwifi?

$ lspci | grep "Network controller"
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205
[Taylor Peak] (rev 34)

Both machines run Debian testing (updated).

What could I do to check/test the health/performance of the connection
in order to diagnose if there's effectively a problem?

Thanks a lot!

Try iperf3.  Install it on both machines and start one as a server and one as a 
client to see the network speed. Run with the -R option to see the reverse 
speed.  Make sure there is no firewall on the server machine, or open port 5201.



--


*...Bob*


Re: ot: hack me

2020-08-17 Thread Bob Weber

On 8/17/20 1:59 PM, gru...@mailfence.com wrote:

does anyone know of a reliable site that can stress test my firewall

Just go to grc.com and under services select ShieldsUP.  Steve Gibson runs this 
site and is a well known security expert.  You can test your firewall and get an 
explanation of what the open ports (if any) mean.


--


*...Bob*


Re: delimiters with more than one character? ...

2020-07-15 Thread Bob Weber

On 7/15/20 8:44 AM, Greg Wooledge wrote:

On Wed, Jul 15, 2020 at 08:34:36AM -0400, Bob Weber wrote:

My only purpose was to show how tr could be used to handle multiple
characters as a delimiter either as tr -s '\\\|' '\|' or

The problem is, it can't, at least not the way you showed.  The original
example, sadly, did NOT contain instances of the | and \ characters in
isolation, so one might be lulled into a false sense of security, and
write code that (for example) simply deletes all of the \ characters,
and then splits on the | characters.

But that won't work in the general case, where | and \ might appear as
literal data characters.

My own solution, which involved using awk to convert the \| pairs into
NUL bytes, is also technically incorrect.  However, there was an
additional stipulation: the stream was to be converted into a bash
array.  A bash array is a list of C strings, so they cannot contain
NUL bytes.  Therefore you can't possibly have NUL bytes in the original
input stream (at least, not and still produce a bash array), so my
conversion of the multi-character delimiters into NUL bytes will "work".

But it's a freaking ugly problem any way you look at it, and it just
got uglier when it was revealed that the OP might be trying to write
shell code that parses shell code.  Especially if the code in question
is a series of poorly written GNU-tainted grep commands.


Which is why I showed this:

tr -s '\\\|' '\|'

which replaces \| with a single character that is known not to be in the input 
data and is usable as an awk field separator.  It just happens to be a |, which is 
fine with awk and which I have used as a separator in code for over 30 years.

--


*...Bob*


Re: delimiters with more than one character? ...

2020-07-15 Thread Bob Weber

On 7/15/20 6:29 AM, Albretch Mueller wrote:

the thing is that that was the one liner passed to a find command
which then I need to use as an array

  lbrtchx


My only purpose was to show how tr could be used to handle multiple characters 
as a delimiter either as tr -s '\\\|' '\|' or


tr -d '\\' when used with awk.  Plus the use of awk's printf command to produce 
the output you wanted.  Please explain what you are trying to do so we can be 
more helpful.

--


*...Bob*


Re: external bluetooth keyboard / mouse paired but not used

2020-07-14 Thread Bob Weber

On 7/14/20 1:11 PM, Andrea Borgia wrote:


Il 12/07/20 20:17, Andrea Borgia ha scritto:

After the pairing, I can see in the logs that there are 3 new devices 
(keyboard, mouse, multimedia controls) as input devices but that's it: it 
doesn't work in parallel with the internal touchpad and keyboard.


I've tried both with KDE and on console, no go.

Are there any special settings I have to make?


No ideas?


I bought a bluetooth keyboard with the intention to use it with our Samsung 
tablets.  It just worked in KDE to my surprise.  Is the mouse a part of the 
keyboard or a separate device?


These comments are for KDE.

I use bluetooth all the time to play audio on bt headphones and through a bt 
gateway to my HIFI for audio.  I've been able to connect a bt headphone and the 
bt gateway at the same time but I don't think audio worked after that until I 
disconnected both and then reconnected to just one.  So try using just ONE bt 
device (like the keyboard) and see if that works.  If so you may need to get 
another usb bt device (plugs into your computer) for the mouse.


Have you gotten audio through to a bt device?

Do you have the bt tray widget visible ... a bt symbol in the tray.  If so then 
you should be able to see what is connected.  Make sure only one device is 
connected.  Some devices tend to auto connect after pairing so turn off all bt 
devices except the one your are testing.  You might have to reboot after you are 
sure just one device is on.  There is also an application bluetoothctl that can 
restart bt devices but it is just as easy to reboot.


If there is no bt widget in the tray, click on the up arrow to bring up the "Status 
and Notifications" panel and see if bt is there.  If so, click on bt and enable 
bt for the system.


--


*...Bob*


Re: delimiters with more than one character? ...

2020-07-14 Thread Bob Weber

On 7/14/20 8:52 AM, Albretch Mueller wrote:

  I have a string delimited by two characters: "\|"

  _S=" 34 + 45 \| abc \| 1 2 3 \| c\|123abc "

  which then I need to turn into a array looking like:

   _S_AR=(
" 34 + 45"
" abc"
" 1 2 3"
" c"
"123abc"
)


Try:

echo " 34 + 45 \| abc \| 1 2 3 \| c\|123abc " | tr -d '\\' | awk 'BEGIN { FS="|" 
} { printf " _S_AR=(\n\"%s\"\n\"%s\"\n\"%s\"\n\"%s\"\n\"%s\"\n)\n",$1,$2,$3,$4,$5}'


All one line.

First, tr deletes the backslash, leaving the pipe as a field separator for awk.  
You could also use tr -s '\\\|' '\|' to guarantee that you only work on the \| 
combination in case there are other backslashes not followed by a pipe.



--


*...Bob*


Re: Firefox non-ESR update needed

2020-07-06 Thread Bob Weber

On 7/6/20 5:28 PM, Gary Dale wrote:
This is a wish-list feature but I'm running Debian/Bullseye and the only 
version of Firefox is the ESR one. It's stable but it has display bugs, and I'd 
like to see if they are fixed in a newer version.


While generally the pages I create look pretty much the same on whatever 
browser I use, I have one where firefox_esr seems to reverse two columns 
(bootstrap 4 col class within a row class). Chromium gets the columns in the 
right order but has a different idea of vertical spacing than firefox_esr when 
looking through a small-device filter - plus it crashes within a minute of 
starting (I don't mind that because Chromium is just a pop-up generator anyway).


Strangely, the version of Chrome on my antique smartphone seems to agree with 
firefox_esr, as does the version of Firefox for Android that I'm using.


An updated (non-ESR) version of Firefox would allow me to figure out which 
spacing is the more accurate. I expect outside of the Debian world, most 
people are not using the ESR release and most are probably more current than 
what I'm using.


Anyway, just putting it out there that a non-ESR version might be nice...

Firefox 78 is in unstable.  I run testing, but I occasionally pick up things from 
unstable, like firefox, if they don't mess up testing.  I have used it for a while, 
mainly watching Netflix and for other sites that don't like Chrome beta (banking 
mostly).


--


*...Bob*


Re: Debian man pages have annoying feature(sic)

2020-06-01 Thread Bob Weber

On 6/1/20 6:02 PM, Richard Owlett wrote:

On 06/01/2020 04:02 PM, Ralph Katz wrote:

On 5/30/20 3:52 AM, Richard Owlett wrote:
...

*PROBLEM*
As package is not installed, that directory does *NOT* exist.

Where to find required documentation on the web?

NOTE BENE
This post is about man pages as a class.



apt show debian-goodies
...
debman - Easily view man pages from a binary .deb without extracting
    [man, apt* (via debget)]

So...  ~$ dman packagename   # will fetch the manpages as though they
were local.

Regards,
Ralph



Thank you. Looks interesting.
1st didn't work even after installing debian-goodies.
Suspect operator. Leaving now. will pursue in morning



Try the online manuals at https://manpages.debian.org/ .



--


*...Bob*


Re: Backup ideas

2020-04-28 Thread Bob Weber

On 4/28/20 3:15 PM, Default User wrote:

On 2020-04-28 [TU] at 14:18 EDT, Bob Weber  said:


According to the manual the -x option is:

-x, --one-file-system   don't cross filesystem boundaries

Question:

When you use rsync, do you ever do it on a live, mounted filesystem
from within said machine/filesystem (that is, using the same machine)?

Or do you do it on a "dormant" unmounted filesystem, either from
another machine or from a "live" [usb or .iso] utility distribution or
boot disk from which you have booted the same machine?

Most references to rsync I have seen just seem to accept as a given,
that you are doing it remotely, from across a LAN (or across the
world).

And don't seem to address whether the machine/filesystem they are
rsyncing to/from is "live" (mounted), or can/should be unmounted (like
it would be when imaging a disk with dd or Clonezilla, for example.

Yes that is the way I use rsnapshot (which uses rsync) ... on a live system.  I 
do this before an upgrade.  rsnapshot copies the files to a directory under 
/home.  I have a very big /home filesystem.  The files I am interested in are 
under /usr /etc and the various bin opt and lib directories.  These files will 
just be open for reading but not for writing.  There are log files and mail 
files under /var that may be copied in an open-for-write state, but I can lose 
those files if necessary.  Also, files under /tmp and /run may be lost, but those 
directories are usually cleared on reboot.  I do run PostgreSQL but the data 
files are under /home (mounted from another partition)  ... these are the kind 
of files that should be backed up on a "dormant" filesystem or after PostgreSQL 
is shut down.  I also run several virtual machines but their files are also 
under /home.


To restore the rsnapshot backup when I need to, I run sysrescuecd. I mount the 
filesystems on /root/src and /root/dst and use rsync with the correct options, 
including --delete to get rid of any extra files that were just upgraded.  I 
usually use  -aHXA --one-file-system --progress for rsync backup options.  That 
way I can be sure the file attributes will be preserved.


A note about borgbackup.  I used it a few years ago, but I found that not ALL 
attributes were backed up.  Maybe I didn't use the correct options.   The 
many options can be confusing.  The backup medium should be a journaling 
filesystem, according to the docs. rsnapshot just duplicates the file structure 
of the source, so you can get just one file back if you want, rather than having 
to use borg to retrieve the backed-up file.



--


*...Bob*


Re: Backup ideas

2020-04-28 Thread Bob Weber



6 - Finally, using rsync I actually am doing two separate backups:

date; sudo rsync -avvzHAXPSish --delete --stats
--exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found","/home/default/*"}
/ /media/default/USBHD005/Backup_of_Dell_Debian_dimwit/root_partition

[This backs up the filesystem EXCEPT for the home directory.]

And:

date; sudo rsync -avvzHAXPSish --delete --stats
--exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}
/home/default 
/media/default/USBHD005/Backup_of_Dell_Debian_dimwit/home_partition

[This backs up the home directory.]

Note: my home directory is on a separate (extended) partition from my
root directory.

Another note: rsync did NOT like the -x option.  I wanted to use that
to prevent getting into a recursive loop while backing up. Rsync just
refused, complaining with a sparse non-informative error code message.
But without it, it seems to work okay.  Go figure.


According to the manual the -x option is:

-x, --one-file-system   don't cross filesystem boundaries

I use that option all the time to keep from backing up my large home directory 
when I only want the system directories under root.  It even keeps rsync from 
copying system directories like /proc /dev and /sys.


Before I do a system update/upgrade I run rsnapshot (debian package) on the root 
system directories so I can get the system back in case of major failure in some 
of the updates (I run testing so I have to be careful).  I do run this on a live 
system, and on 3 or 4 occasions I have had to restore from the snapshot, 
successfully getting my system back alive.


Do you have another system you could back up to?  I can get around 50 megabytes 
per second transfer over 1Gb Ethernet, so you might try that.


My main backup is done by backuppc on a dedicated server.  I have 4 or 5 
systems that get unattended daily backups, with versions going back about a 
year.  All my systems use a 2-drive raid1 array so I can survive a single disk 
crash without having to resort to restoring a backup.  Every few months I 
install an extra drive in the backuppc server and have raid sync it to the 2 
drives in the server.  After syncing I pull the drive and put it in my barn for 
offsite storage.  Since it is a raid1 full copy, you can take that drive, 
mount it on another system and get the files back if you need to (running the 
raid array in degraded mode).



I have been using rsync to backup live, from within Debian.  Maybe not
a good idea.  I could instead try using rsync from a live usb, such as
SystemRescueCD, etc.  I'll try that later.  After all, it does seem to
make more sense to back up a dead filesystem from outside it than a
live filesystem from inside it.

Finally, after well over seven hours into rsyncing (with no end in
sight) from the first external usb drive to the second one (both are
HDD), I am beginning to wonder if that is a good idea.  Those first
full backups always take forever . . .



--


*...Bob*


Re: Small Open Source Digital Classroom

2020-03-31 Thread Bob Weber

On 3/31/20 12:44 PM, Markos wrote:


Hi Friends,

My wife is a teacher and is trying to teach remote lessons using only WhatsApp 
video calls.


To help her I am looking for a package in the Debian repository to organize 
virtual classes for small groups, 5 or 6 students maximum.


Any suggestion of an open-source program easy-to-configure and easy-to-use for 
this?


Preferably a Debian package?

Thank you,

Markos

Have you seen Debian education? Start here and see if there are any programs 
that your wife could use.  They probably would need the students to have Debian 
on their computers, or you could use vnc so they could all see what the teacher 
is doing on her computer (possible security problems; maybe make the students 
view-only, without write access).


https://blends.debian.org/edu/tasks/ 


--


*...Bob*


Re: Guest can't see hosts network connection using QEMU-KVM

2020-03-27 Thread Bob Weber

On 3/27/20 4:25 AM, Alexander V. Makartsev wrote:

On 27.03.2020 12:45, deloptes wrote:

Gary L. Roach wrote:


AMD-64 4 cpu processor

Host Debian Buster

Guest kubuntu 18.04

Virtual Machine QEMU

Why don't you take virtualbox or vmplayer?

Choosing QEMU in such a case is masochism pure.

I've had quite smooth sailing using KVM with libvirt\QEMU and it works just 
fine, just like virtualbox, and I don't have to install and manage additional 
hypervisor on my host.
With "virt-manager" [1] GUI app it is simple to manage guests and virtual 
network, at least for x86_64 architecture it is working great.


[1] https://virt-manager.org/

--
With kindest regards, Alexander.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀https://www.debian.org
⠈⠳⣄


I second this.  I run Windows (for taxes), debian, reactos, tails and suse VMs 
using virt-manager and libvirt\QEMU.


I even run weather station software 24/7 in a VM.

You can pass usb ports through to the vm ... like when updating a Garmin GPS in 
Windows.  The networking is taken care of by virt-manager.  You can use NAT (for 
a private address in the 192.168.x.x range) or a direct connection through the 
host ethernet port to the LAN (mine is in the 172.16.x.x range). virt-manager will 
even connect to a remote machine (over the LAN with an ssh tunnel) to run VMs on 
that machine just as if they were on the local host.  Each VM has its own window 
on the host (I use kde) where you can run text or graphical interfaces with mouse 
and keyboard support ... even VMs on a remote host.


--


*...Bob*


Re: Ethernet trouble

2020-01-31 Thread Bob Weber

On 1/31/20 1:41 PM, Michael Stone wrote:

On Fri, Jan 31, 2020 at 01:31:43PM -0500, Bob Weber wrote:

First I created /etc/systemd/network/10-eth0.link using the MAC address and
the name eth0.  If the MAC changes then there are other characteristics to add
to the [Match] section to uniquely define the port (see above link).

---

root@debian-ZFS ~ # cat /etc/systemd/network/10-eth0.link
[Match]
MACAddress=52:54:00:ea:e3:53

[Link]
Name=eth0


You went through more effort than you needed to. You can turn off predictable 
names by simply booting with net.ifnames=0 on the kernel command line (you can 
make that permanent by editing GRUB_CMDLINE_LINUX= in /etc/default/grub and 
running update-grub).


The net.ifnames=0 used to work on my 2-port machine but quit about a year ago.  
It messed up my firewall rules.  Not very nice to have the internet side connected 
to the LAN port!   That's when I went through the pain of understanding the systemd way!


--


*...Bob*


Re: Ethernet trouble

2020-01-31 Thread Bob Weber

On 1/31/20 1:36 PM, Greg Wooledge wrote:

On Fri, Jan 31, 2020 at 01:31:43PM -0500, Bob Weber wrote:

First I created  /etc/systemd/network/10-eth0.link using the MAC address and
the name eth0.  If the MAC changes then there are other characteristics to
add to the [Match] section to uniquely define the port (see above link).

---

root@debian-ZFS ~ # cat /etc/systemd/network/10-eth0.link
[Match]
MACAddress=52:54:00:ea:e3:53

[Link]
Name=eth0


Second I linked the default to /dev/null.

-

ln -s /dev/null /etc/systemd/network/99-Default.link

I'm almost 100% sure that should be all lower-case, if you expect
it to do anything.  The file it's overriding is 99-default.link
(lower-case).


Parsed configuration file /usr/lib/systemd/network/99-default.link
Skipping empty file: /etc/systemd/network/99-Default.link

It's going to use the 99-default.link file because you didn't actually
override it.  But since you're mapping explicitly on the MAC address of
the interface, it doesn't really matter.

Sorry I missed this.  I used the lower case in all the machines on my network 
... my mind thinks in upper/lower case ... too bad systemd can't.  I went back 
and renamed the file to lower case and got this output from the udevadm command.


-

SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/eth0
Trying to open "/etc/systemd/hwdb/hwdb.bin"...
Trying to open "/etc/udev/hwdb.bin"...
Trying to open "/usr/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/udev/hwdb.bin"...
=== trie on-disk ===
tool version:  244
file size:    10287564 bytes
header size 80 bytes
strings    2145012 bytes
nodes  8142472 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Skipping overridden file '/usr/lib/systemd/network/99-default.link'.
Skipping empty file: /etc/systemd/network/99-default.link
Parsed configuration file /usr/lib/systemd/network/73-usb-net-by-mac.link
Parsed configuration file /etc/systemd/network/10-eth0.link
Created link configuration context.
ID_NET_DRIVER=virtio_net
eth0: Config file /etc/systemd/network/10-eth0.link is applied
ethtool: autonegotiation is unset or enabled, the speed and duplex are not 
writable.
eth0: Device has name_assign_type=4
Using default interface naming scheme 'v243'.
eth0: Policies didn't yield a name, using specified Name=eth0.
ID_NET_LINK_FILE=/etc/systemd/network/10-eth0.link
ID_NET_NAME=eth0
Unload module index
Unloaded link configuration context.


New lines:

Skipping overridden file '/usr/lib/systemd/network/99-default.link'

Skipping empty file: /etc/systemd/network/99-default.link


That's more like it.


--


*...Bob*


Re: Ethernet trouble

2020-01-31 Thread Bob Weber

On 1/31/20 2:05 AM, ghe wrote:



On Jan 30, 2020, at 04:48 PM, Bob Weber  wrote:
"Example 3. Debugging NamePolicy= assignments" near the bottom of the page at
"https://www.freedesktop.org/software/systemd/man/systemd.link.html";

Yeah. That's one I looked at. The one with the table of the Ethernet speeds and 
duplexity. And the list and descriptions of data that're sometimes needed in 
the file.

I'll look at this again tomorrow, Bob, but I'm really not impressed with the way systemd 
is setting up the Ethernet interfaces. Like I said before, "Counting Ethernet 
interfaces isn't rocket science." But it can be made so if you make things complex 
and spread the config over several dirs and several files, some of which are explained in 
the dox but turn out not to exist on my Buster disk.

Somehow, back in the eth days, the data in Debian's /etc/network/interfaces 
file was enough to get networking going. Then, on an Ethernet network, the 
Ethernet chips pretty well figured out the best speed and duplex all by 
themselves as soon as they connected to something.


This naming configuration has worked on 5 Debian systems all running updated
testing.

And counting interfaces has worked for me for a couple decades, on many systems 
and several OSs. But I'll find your earlier email and try systemd one more 
time. It'd be nice for the interface names to be, as systemd calls it, 
'consistent.'

And, FWIF, I appreciate your help and advice...

I just ran a test on a VM that I installed last week so it is pretty much up to 
date.  I ran the command "ip a" which gave me the current undesirable name 
"enp1s0" and MAC address.


First I created  /etc/systemd/network/10-eth0.link using the MAC address and the 
name eth0.  If the MAC changes then there are other characteristics to add to 
the [Match] section to uniquely define the port (see above link).


---

root@debian-ZFS ~ # cat /etc/systemd/network/10-eth0.link
[Match]
MACAddress=52:54:00:ea:e3:53

[Link]
Name=eth0


Second I linked the default to /dev/null.

-

ln -s /dev/null /etc/systemd/network/99-Default.link


Next I ran the test command from Example 3 at the above link to see what udevadm 
thinks.


-

SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link 
/sys/class/net/enp1s0
Trying to open "/etc/systemd/hwdb/hwdb.bin"...
Trying to open "/etc/udev/hwdb.bin"...
Trying to open "/usr/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/udev/hwdb.bin"...
=== trie on-disk ===
tool version:  244
file size:    10287564 bytes
header size 80 bytes
strings    2145012 bytes
nodes  8142472 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Parsed configuration file /usr/lib/systemd/network/99-default.link
Skipping empty file: /etc/systemd/network/99-Default.link
Parsed configuration file /usr/lib/systemd/network/73-usb-net-by-mac.link
Parsed configuration file /etc/systemd/network/10-eth0.link
Created link configuration context.
ID_NET_DRIVER=virtio_net
enp1s0: Config file /etc/systemd/network/10-eth0.link is applied
ethtool: autonegotiation is unset or enabled, the speed and duplex are not 
writable.
enp1s0: Device has name_assign_type=4
Using default interface naming scheme 'v243'.
enp1s0: Policies didn't yield a name, using specified Name=eth0.
ID_NET_LINK_FILE=/etc/systemd/network/10-eth0.link
ID_NET_NAME=eth0
Unload module index
Unloaded link configuration context.


Notice the ID_NET_NAME=eth0 is what I wanted.  Also, the argument to the udevadm
command, /sys/class/net/enp1s0, contains the current undesirable name of the
interface (enp1s0).


Now I rebooted the VM.  I ran the "ip a" command again.

-
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
group default qlen 1000

    link/ether 52:54:00:ea:e3:53 brd ff:ff:ff:ff:ff:ff
    inet 192.168.240.228/24 brd 192.168.240.255 scope global dynamic 
noprefixroute eth0

   valid_lft 3550sec preferred_lft 3550sec
    inet6 fe80::5054:ff:feea:e353/64 scope link noprefixroute
   valid_lft forever preferred_lft forever


Just what I wanted.


Now running the udevadm command from before with the old name fails:

SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link 
/sys/class/net/enp1s0
Trying to open "/etc/systemd/hwdb/hwdb.bin"...
Trying to open "/etc/udev/hwdb.bin"...
Trying to open "/usr/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/udev/hwdb.bin"...
=== trie on-disk ===
tool version:  244
file size:    10287564 bytes
header size 80 bytes
strings    2145012 bytes
nodes  8142472 bytes
Load module index
Found contain

Re: Ethernet trouble

2020-01-30 Thread Bob Weber

On 1/30/20 6:17 PM, ghe wrote:

On 1/30/20 1:42 PM, Bob Weber wrote:


That's why I recommended you look into systemd link files.

I looked that up on the 'Net, and it seems pretty reasonable. I looked
around a bit and was told to edit

/usr/lib/systemd/network/99-default.link

(MAC addresses are back to hardware again, but easier to handle -- at
least they're the same whenever you look at them. And Debian puts config
files in /etc. Used to, anyway)

There's a line in 99-default.link about =persistent. The web
says that if I change that to 'none' I'll get the old names back.

I did, and I didn't.


Systemd has
the undesired effect of renaming interfaces.  You need to use the MAC
address to indicate which port should be eth0 , etc.

It looks like it'll take a lot more than changing a value in a config
file to have happen what I expect. I think I'll just leave things alone
for the time being. Now I know to expect systemd to break things, and
now I know to write around it. I was completely at a loss when those
numbers just changed for no apparent reason.

Counting Ethernet interfaces isn't rocket science.

Again, thanks list.

That's why I showed in the previous email a file for eth0 and eth1 matching their
MAC address.   The "99-default.link" file is taken out of the works by
(symbolically) linking it to /dev/null.  This means whatever was in that file
messing up the port names is gone.  The kernel command line option
"net.ifnames=0" may or may not be needed ... try without it at first.


After a reboot the names should be what you put in the [Link] section of the
files "/etc/systemd/network/10-eth0.link" and
"/etc/systemd/network/20-eth1.link", assuming you put the correct MAC address
in the [Match] section.


If the names are still not correct then there are examples of udevadm
commands, like "Example 3. Debugging NamePolicy= assignments" near the
bottom of the page at
"https://www.freedesktop.org/software/systemd/man/systemd.link.html"


This naming configuration has worked on 5 Debian systems all running updated
testing.


Note: the /sys/class/net/hub0 mentioned in Example 3 should be replaced by the 
current port name found in /sys/class/net directory.
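
For instance (a sketch; enp1s0 stands in for whatever "ls /sys/class/net" shows
on your machine):

SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/enp1s0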


--


*...Bob*


Re: Ethernet trouble

2020-01-30 Thread Bob Weber

On 1/30/20 1:58 PM, ghe wrote:

On 1/29/20 7:06 PM, David Wright wrote:


These boards, do their PCI addresses have the same bus number but
different slot/device numbers? dmesg or kern.log will give you
those: they look like NN:DD.F optionally preceded by DDDD:, where
DDDD is the domain (typically 0000), NN is the bus, DD the device
or slot, F the function(s) provided by that card, eg
pci 0000:00:0e.0: [10ec:8139] type 00 class 0x020000

Well, I don't in any way consider myself a hardware guy, but in Java,
Pascal, C, PERL, Python, FORTRAN, BashScripts, etc, '+' usually does the
same thing every time I type it.

I looked at dmesg a bit. I greped it for 'enp' and there was a funny
joke in the first 2 lines (of the grep output):

[    2.181317] e1000e 0000:08:00.0 enp8s0: renamed from eth1
[    2.422105] e1000e 0000:07:00.0 enp7s0: renamed from eth0

So something took the rational Ethernet interface names and,
intentionally I assume, broke hundreds of lines of code.

That's why I recommended you look into systemd link files. Systemd has the 
undesired effect of renaming interfaces.  You need to use the MAC address to 
indicate which port should be eth0 , etc.  See my previous post.



...Bob



Re: Ethernet trouble

2020-01-29 Thread Bob Weber

On 1/29/20 12:05 PM, ghe wrote:

On 1/29/20 8:04 AM, Curt wrote:


'p' indicates the PCI bus and 's' indicates the slot, was my
understanding of the naming scheme.

Yeah. That's what I was told too.


Would a BIOS/Firmware upgrade
modify the PCI bus and slot number of your Ethernet ports?

I doubt it. SuperMicro's BIOS writers aren't that stupid. I certainly
hope they aren't.

Besides:
1) There was no change to the BIOS.
2) The interfaces weren't moved anywhere. They're still soldered to the MB.

I have struggled with this for hours before.  The systemd naming convention is 
explained at:


https://www.freedesktop.org/software/systemd/man/systemd.link.html 



Pay attention to the Examples near the bottom of the page.  There are udevadm
commands that can help you check the configuration.   I ended up with the
four following configurations:



1.  kernel command line option in grub add "net.ifnames=0"


2.  /etc/systemd/network/10-eth0.link:

[Match]
MACAddress=00:xx:xx:xx:33:ce

[Link]
Name=eth0


3.  /etc/systemd/network/20-eth1.link:

[Match]
MACAddress=00:xx:xx:xx:33:d2

[Link]
Name=eth1


4.  And finally:

/etc/systemd/network/99-default.link  linked to /dev/null
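
That is, with a one-line command (note the all lower-case file name):

ln -s /dev/null /etc/systemd/network/99-default.link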



This is my main router machine so these names have to be the same on every boot
so the firewall rules work as desired.  There also seem to be bugs in systemd
(I use testing) that make these appear not to work.  I think the key was setting
99-default.link to /dev/null.


Hope this helps.
--


*...Bob*


Re: Displaying an arbitrary file in _both_ HEX and ASCII

2020-01-22 Thread Bob Weber

On 1/22/20 8:12 AM, Richard Owlett wrote:

I'm running Debian 9.8 with MATE desktop.
I'm exploring a data file with the intention of eventually parsing it in a 
useful fashion.


Just downloaded ghex. I like the display format.
Its tools are inconvenient.

I need to:
 1. Simultaneously display in _both_ HEX and ASCII format
 2. Know the current offset in *DECIMAL* format.
    {knowing the offset also in HEX might be nice}
 3. Goto to an offset - expressed in DECIMAL.
 4. Advance a specific number of bytes.
 5. Search for an ASCII string.
 6. Search for arbitrary sequence of bytes expressed as HEX.

Suggested tool(s) in Debian repository?
TIA

Have you tried mc?  It's a console file manager with a view/edit function
that appears to do what you want, except for moving a specific number of bytes
from the current position; the file position is shown in HEX, but you can move
to a position given in decimal or HEX.  It's the first program I install after
a new install.


On my large monitors I can see 40 characters per line. Smaller/Larger fonts or 
window sizes will give different characters per line.  Being a console program 
it can be run in GUI (I use KDE) or just text mode.  Read the help (F1) to see 
how to use the various function keys ... especially the TAB and ESC key.
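
A quick sketch of the hex-view workflow (key bindings as I read them in mc's
help; worth double-checking on your version): select the file and press F3 to
view it, F4 toggles the combined hex/ASCII display, F5 (Goto) takes a decimal
offset (or 0x-prefixed hex), and F7 searches.  The standalone viewer shipped
with mc also works:

mcview datafile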


--


*...Bob*


Re: dhclient and ipv6 DNS Servers

2020-01-14 Thread Bob Weber



Meanwhile, there are a few different ways to keep your resolv.conf
file untouched, rather than relying on isc-dhcp-client to continually
rewrite it in the form you want.  The wiki page describes some of
those ways.

Personally, I do not understand the appeal of the "put lines in
configuration file X so that isc-dhcp-client will use them when it
rewrites configuration file Y" approach.  I'd rather just edit file Y by
hand and tell isc-dhcp-client not to touch it at all.  It's a shame that
it's so incredibly difficult to do that.  But, that's why we have the
resolvconf package, and it's why we have the wiki page that describes
how to do it.

Just edit the file /etc/resolv.conf and make it immutable (chattr +i  
/etc/resolv.conf).  At least you will know what is in the file and that it can't 
be changed (mistakes and all).  I use this to keep chrome from changing the 
google-chrome-beta.list file every time it starts since I use the HTTPS/// 
option of  apt-cacher-ng to use https transfers outside my LAN.
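
A minimal sketch of the cycle (standard e2fsprogs commands):

chattr +i /etc/resolv.conf     # lock the file against all changes
lsattr /etc/resolv.conf        # the 'i' flag should now be listed
chattr -i /etc/resolv.conf     # unlock when you need to edit it again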


--


*...Bob*


Re: Easiest Way to forward an email Message from Linux to a Mac

2019-11-02 Thread Bob Weber

On 11/2/19 8:10 AM, Martin McCormick wrote:

Here is the setup.  We are on a private vlan as in 192.168.x.x.
All local host names are resolved via hosts files.  Messages to
go to the big wide world must go through Suddenlink's SMTP
smarthost and I definitely don't want to break that.

On rare occasions, I want to forward an email to the Mac
which normally doesn't send or receive emails.  What would be the
simplest way to "forward" an email from the Linux box to the
Mac's mailer?

The Mac only needs to be able to receive, not send any
email.

Thank you

Martin WB5AGZ

Why not create a user on the Linux box to receive such emails and have the Mac
client connect to that user on the Linux box.  You might have to install a POP
server (popa3d ... easiest to install and configure) or an IMAP server
(dovecot-imapd ... harder to configure and probably more than you need) on the
Linux box if one isn't installed already.


Your Mac would have to have an email client capable of connecting to a POP or
IMAP mailbox at the IP address of the Linux box, or at the host name in the
hosts file corresponding to the Linux box.


--


*...Bob*


Re: Good advice on Linux (debian) compatible microscope

2019-10-26 Thread Bob Weber

On 10/26/19 5:55 PM, deloptes wrote:

Doug McGarrett wrote:


You haven't said what you're going to look at, but in my humble opinion,
if you only want to LOOK, not record, a binocular optical microscope
with a ring light and under slide illumination option is the way to go.
I don't know if a high magnification microscope like you're describing
is available with a zoom function, as lower gain units are, but if there
is a zoom function available, get it.

OK - thank you this sounds like a classical microscope


I am personally familiar with much lower gain instruments, for
inspecting and assembling electronic circuits using surface mount
devices. That kind of microscope would use about 7X to 20X zoom
magnification.

yes but modern electronics get smaller and smaller - factor of 10x is not
good.


If you want to record, there are optical microscopes with a "third eye"
where a camera can be installed, and the camera could be an electronic
camera with output to a computer.

this is also a good idea


When you know for sure what kind of scope you want, look to eBay or a
similar source--microscopes are quite expensive!

--doug, retired RF Engineer

One thing I would like to use it for is electronics and another thing is for
the children that are in school. So I wouldn't spend too much for
professional optics (lenses) that are indeed quite expensive, but I may
consider the other options you mentioned. I am also kind of reserved when
it comes to modern things. They usually sell you some crap made in china.

thanks


If you aren't concerned about high quality then this might be of interest.

Jiusion 40 to 1000x Magnification Endoscope on amazon for about $22.  Best of 
all it works on linux!  No drivers needed on Debian.


Also watch:

https://www.youtube.com/watch?v=xxUPCV3gbqw

https://www.youtube.com/watch?v=EIGdKyXBQ5M

to see how it works.

just use mpv:

mpv /dev/video0
Playing: /dev/video0
 (+) Video --vid=1 (rawvideo 640x480 25.000fps)
[autoconvert] Converting yuyv422 -> yuv422p
VO: [gpu] 640x480 yuv422p

the actual device may be different.  My system had video0 and video1.

--


*...Bob*


Re: Signs of hard drive failure?

2019-10-22 Thread Bob Weber

On 10/22/19 1:01 PM, Ken Heard wrote:


At about 04:35 EDT today 2019-10-22 Tuesday I ran 'sudo smartctl -t
long /dev/sdb'.  At 12:48 I ran the command in the next line.  The
result is below after the next paragraph.

Based on the results below relating to /dev/sdb and the results
relating to /dev/sda quoted in my previous email today, I conclude
that while smartctl considers my two hard drives three years old and
consequently prone to failure -- what else really is new, all devices
are prone to failure -- my current backup problems are not related to
failure of either or both of these drives.  As I stated in my original
post, they have been in use for only 1.75 years.  I shall consequently
continue to examine at my scripts for errors in creating tarballs for
my backups.

Regards, Ken
---

ken@SOL:~$ sudo smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-9-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke,
www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1ER164
Serial Number:Z4Z0TKY8
LU WWN Device Id: 5 000c50 0796a3479
Firmware Version: CC25
User Capacity:2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate:7200 rpm
Form Factor:  3.5 inches
Device is:In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:Tue Oct 22 12:45:42 2019 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
 was completed without error.
 Auto Offline Data Collection:
Enabled.
Self-test execution status:  (   0) The previous self-test routine
completed
 without error or no self-test
has ever
 been run.
Total time to complete Offline
data collection:(   80) seconds.
Offline data collection
capabilities:(0x7b) SMART execute Offline immediate.
 Auto Offline data collection
on/off support.
 Suspend Offline collection
upon new
 command.
 Offline surface scan supported.
 Self-test supported.
 Conveyance Self-test supported.
 Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
 power-saving mode.
 Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
                                         General Purpose Logging supported.
Short self-test routine
recommended polling time:(   1) minutes.
Extended self-test routine
recommended polling time:( 219) minutes.
Conveyance self-test routine
recommended polling time:(   2) minutes.
SCT capabilities:  (0x1085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   115   099   006    Pre-fail  Always       -       90023624
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       557
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   073   060   030    Pre-fail  Always       -       24992023
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3834
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       556
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
189 High_Fly_Writes         0x003a   096   096   000    Old_age   Always       -       4
19

Re: Signs of hard drive failure?

2019-10-21 Thread Bob Weber

On 10/20/19 10:21 PM, Ken Heard wrote:


In the past week or so some in my computer procedures have become
sluggish, and some others have not worked at all.

For example the following script works:

#! /bin/bash
CURPWD=$PWD
cd /home/ken
tar -czf /media/fde/backups/kfinancescurrent.tgz --wildcards\
--exclude-from=docs/tarlists/kfinancesarchive.lst docs/01-kens/Finances
cd $CURPWD

Whereas this one does not work now but did two weeks ago:

#!/bin/bash
# Shell script to create a tgz file for the contents of the
# /home/ken/docs and the /usr/local/ directories,
# minus the files in file /home/ken/docs/tarlists/kexcludedocs.lst
# This script may be run from any directory to which the user has
write # permission.

# Start by creating a variable with the current directory.
CURPWD=$PWD
# Change directory to /
cd /
# Create the tarball.
tar -czpf media/fde/backups/kdocsfull.tgz  -X
/home/ken/docs/tarlists/kdocsfullexclude.lst -T
/home/ken/docs/tarlists/kdocsfullinclude.lst
# Return to the starting directory.
cd $CURPWD

Now when I try to run it it returns the following:

ken@SOL:~$ tar -czpf media/fde/backups/kdocsfull.tgz  -X
/home/ken/docs/tarlists/kdocsfullexclude.lst -T
/home/ken/docs/tarlists/kdocsfullinclude.lst
tar (child): media/fde/backups/kdocsfull.tgz: Cannot open: No such
file or directorytar: home/ken/docs: Cannot stat: No such file or
directory
tar: usr/local: Cannot stat: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

All the files/directories which this script cannot stat do in fact
exist, proven by the fact that the first script uses the same
directories, but different files in those directories.

As these symptoms can indicate I think hard drive failure or potential
failure I am trying to explore this possibility.

I am using Stretch and TDE with two 2 TB Seagate Barracuda hard drives
in a RAID 1 configuration.  Both drives were purchased at the same
time and were installed in the box on 2016-05-30.  Although three and
one half years ago, this particular box is only used six months out of
twelve.  I would not have thought that drives -- if they last the
first year -- would show signs of failure after only 1.75 years.

In any event, I ran smartctl -a on both drives.  For both "SMART
overall-health self-assessment test result [was] 'PASSED'"
Nevertheless for all the specific attributes, identical for both
drives, three of them had the indication 'Pre-fail' and 'Old-age' for
the other nineteen.

I also ran 'badblock -v'.  Both had 1953514583 blocks.  The test for
/dev/sda was interrupted at block 738381440, and for /dev/sdb at block
42064448.

I am not sure what all these test results mean, or how accurate they
would be for indicating if or when a failure would occur. It did occur
to me that if after copying all my data files to an external hard
drive I could replace the /dev/sdb device with a new one and copy all
the data in /dev/sda on the assumption with a new and presumably
pristine drive the OS given the choice would access the data it wanted
from the drive which responded the quicker.

If that approach worked I could replace the other drive in another
year or two (really one year of use) so that both drives in the RAID 1
would not be of the same age.

Comments and advice as to the best way of getting this computer back
to 'normal' to the extent that such a state could ever be 'normal'.

Regards Ken


I would first check if the raid was working.  Use "cat /proc/mdstat".  You will 
see something like this for each raid drive configured:


md0 : active raid1 sdb1[3] sda1[2]
  28754230 blocks super 1.2 [2/2] [UU]

Make sure that both U's are there.  If not be careful because the raid is 
operating on one disk.  Before you reboot copy all the important data from that 
raid drive.


Next use smartctl to do a long self test.  Use "smartctl -t long /dev/sda".  You 
can still use the machine but it will slow the test down.  The tests take a long 
time and smartctl will estimate how long.  Then do the second drive "smartctl -t 
long /dev/sdb".


If these pass then you could try booting with a system rescue CD.  First check
what drive names it has used by running "ls /dev/md*".  You will see something
like /dev/md0 or /dev/md123. Now check the filesystem on the raid drive with
"fsck -f /dev/mdx", replacing x with what you found in the previous command.


That should keep you busy for a while.  Let the list know what you found.


--


*...Bob*


Re: xorriso as a backup &/or archival tool

2019-08-28 Thread Bob Weber

Recently I was suggested I read

   https://www.gnu.org/software/xorriso/

and

   http://scdbackup.sourceforge.net/main_eng.html

which led to exploring "afio archives" and "zisofs compression".


Have you considered rsync?  I would make sure that a backup system can handle
all the file attributes a modern linux system uses.  I have a few files I made
immutable so programs can't change them on me.  Your 10TB drive could even be
located on a separate system.  rsync can compress files as they are transferred
(the -z option) so network bandwidth shouldn't be a problem ... assuming at
least a 1Gb connection.  rsync doesn't compress the files on the destination
though.


The simple command:

rsync -aHAXxv /  bkup:machine1/root

Would copy the root filesystem on the current machine to the bkup machine in the 
directory machine1/root.  The word bkup could be a DNS or host file entry or 
just an ip address.


The directory machine1 could be a mounted partition or just a directory on a
large ~10TB filesystem ... an advantage being that you wouldn't need to worry
about the size of the disk you are copying.   The lower case x option causes
rsync to only copy one file system (root /), in case you wanted to copy the
home directory separately if it is mounted on a separate partition on the
source system.
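
For example (a sketch, using the same bkup naming as above; the trailing slash
on /home/ copies its contents into machine1/home):

rsync -aHAXxv /home/ bkup:machine1/home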


Another advantage to this system is you can see/access every file just where you
expect it to be ... no mounting of an ISO file system or using another program to
access the extended ISO.  Also, I'm not sure I would rely on flash drives as
long term backup.  The bits are stored as a small electric charge that could
dissipate over time.



*...Bob*


Re: KDE program 'klipper' is missing from Jessie to Stretch

2019-08-15 Thread Bob Weber

On 8/15/19 9:49 AM, Keith Christian wrote:

Bob,

Thank you.

I assume the clipboard application that installs by default on stretch can be 
removed and replaced with the (far superior, IMHO) klipper app?


Or can (or should?) the two clipboard applications coexist?

Keith



On Wed, Aug 14, 2019 at 4:48 PM Bob Weber <mailto:bobrwe...@gmail.com>> wrote:


On 8/14/19 6:27 PM, Keith Christian wrote:

klipper was a feature-rich clipboard manager but I can't find it in Stretch.

This is all I see:

$ apt-cache search klipper
cairo-dock-clipper-plug-in - Clipper plug-in for Cairo-dock

cairo-dock-clipper-plug-in claims to be a clone of klipper but the
features seem to be hidden.

Anyone know if 'klipper' is deprecated or simply not in Debian?  Not
much found on web searches even targetingmail.kde.org  
<http://mail.kde.org>  mailing lists.


It's in plasma-workspace now.  Latest update:

[2019-02-24] plasma-workspace 4:5.14.5.1-1 MIGRATED to testing

It should have been installed with kde/plasma. "apt-file search klipper"
finds it.


I would be careful trying to remove klipper.  The other files in 
plasma-workspace are very important.


apt-file list plasma-workspace:

plasma-workspace: /usr/bin/gmenudbusmenuproxy
plasma-workspace: /usr/bin/kcheckrunning
plasma-workspace: /usr/bin/kcminit
plasma-workspace: /usr/bin/kcminit_startup
plasma-workspace: /usr/bin/kdostartupconfig5
plasma-workspace: /usr/bin/klipper
plasma-workspace: /usr/bin/krunner
plasma-workspace: /usr/bin/ksmserver
plasma-workspace: /usr/bin/ksplashqml
plasma-workspace: /usr/bin/kstartupconfig5
plasma-workspace: /usr/bin/kuiserver5
plasma-workspace: /usr/bin/plasma_waitforname
plasma-workspace: /usr/bin/plasmashell
plasma-workspace: /usr/bin/plasmawindowed
plasma-workspace: /usr/bin/startkde
plasma-workspace: /usr/bin/systemmonitor
plasma-workspace: /usr/bin/xembedsniproxy

These are just the ones in /usr/bin.  If you try to remove plasma-workspace then 
kde would be gone.


The new klipper seems to have plenty of options.  I use Alt-C -- Open klipper at 
mouse position  a lot (not sure if that is the default).  Makes it very easy to 
see and change what will be pasted.  I run testing here and didn't notice the 
change to the new klipper.  I see the older klipper package was removed back in 
2015 as part of kde-workspace.


-- 



*...Bob*



--


*...Bob*


Re: KDE program 'klipper' is missing from Jessie to Stretch

2019-08-14 Thread Bob Weber

On 8/14/19 6:27 PM, Keith Christian wrote:

klipper was a feature-rich clipboard manager but I can't find it in Stretch.

This is all I see:

$ apt-cache search klipper
cairo-dock-clipper-plug-in - Clipper plug-in for Cairo-dock

cairo-dock-clipper-plug-in claims to be a clone of klipper but the
features seem to be hidden.

Anyone know if 'klipper' is deprecated or simply not in Debian?  Not
much found on web searches even targeting mail.kde.org mailing lists.


It's in plasma-workspace now.  Latest update:

[2019-02-24] plasma-workspace 4:5.14.5.1-1 MIGRATED to testing

It should have been installed with kde/plasma. "apt-file search klipper" finds 
it.

--


*...Bob*


Re: Setting up bind9/DNS

2019-06-28 Thread Bob Weber

On 6/28/19 12:44 PM, Dennis Wicks wrote:

Greetings,

I have apache2 installed on my local machine with a bunch of virtual hosts 
that I use for test and development of html, wordpress, etc. It works fine to 
access the virt hosts locally, but I want to access them from other systems on 
my local network; windows/IE of various versions, smart phones, tablets, 
laptops, etc.


They all can access my base host name because my DSL modem/router has DHCP and 
DNS in it and when it sets up an address with DHCP it puts an entry in its DNS 
and everything is fine. (All systems on the local net use the modem/router for 
dns.) But nothing like this happens with the virtual hosts!


I was thinking that I could setup a nameserver on my machine with enries in it 
for the virtual hosts and have my local network address in the list of 
nameservers in my modem/router, and that is where I need the help.


I have installed bind9, running on buster. So how do I set up the name server 
and populate it with the info for my virtual hosts? Pointers to forums, 
cookbooks, etc. would be appreciated as well as hints and tips!


TIA!
Dennnis


First you will need to read about Apache virtual hosts here:

http://httpd.apache.org/docs/current/vhosts/name-based.html

Basically what happens is the browser sends the name that it is trying to reach
in its header and Apache uses that info to direct the request to the
appropriate directory.  All the different names will point to the same address
... the address of your host.  To use bind for this, all the hosts on your
network will have to use the bind DNS server as their DNS server.  I use a
debian box as my router/firewall so it is easy for me to change DNS entries for
my home network.


It might be easier if your router would allow you to add entries to its DNS 
server.  If not then you could use each machines hosts file to put in your 
private addresses.  You will have to make up your own names.  Example:


host1.home

cookbook.home

forum.home

These would all point to the address of host1 but Apache would be able to direct 
the requests to different directories under /var/www depending on the name 
used.  I use this method on a VM at digital ocean to serve 4 or 5 different web 
sites from the one address.
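
On each client the hosts entries might look like this (a sketch assuming
host1's LAN address is 192.168.1.10 ... substitute your own):

192.168.1.10    host1.home
192.168.1.10    cookbook.home
192.168.1.10    forum.home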


Your apache config might look like this:


<VirtualHost *:80>
    ServerName host1.home
    DocumentRoot "/var/www/host1"
</VirtualHost>

<VirtualHost *:80>
    ServerName cookbook.home
    DocumentRoot "/var/www/cookbook"
</VirtualHost>

etc...

Make sure all the files under /var/www are owned by www-data and group www-data 
(chown www-data.www-data files).


--


*...Bob*


Re: USB digital microscope from Walmart

2019-06-05 Thread Bob Weber

On 6/5/19 3:09 PM, Mike McClain wrote:

 I bought a USB digital microscope from Walmart that the ads
claimed would work under Win2K and Linux. So far the supplier has
failed to back up that claim with meaningful info.
Has anyone had any luck getting one of these working under Debian?
 This one claims 1000x magnification and the supplier is E4. They
don't answer the phone and email correspondence has so far proven
useless.
Thanks,
Mike
--
 If all the CHP drove the speed limit, perforce, so would the rest of us.
How many lives a year would that save? - MM

I have something that may be similar.  It's a Jiusion Digital Microscope.  It works
with the viewer guvcview.  It's in Debian so it should be safe.  I had to plug it
in several times to get the kernel to recognize it ... use lsusb.  First run
lsusb, then plug it in and see if there is any difference.  Mine just showed up
as Bus 001 Device 015: ID a16f:0304 with no name.  Yours will be different so
just look for the change.


I got the idea from Kris Occhipinti.  Link: 
https://www.youtube.com/watch?v=xxUPCV3gbqw is where he runs the microscope with 
cheese.


Hope this helps.

--


*...Bob*


Re: kvm win8 guest audio is terrible.

2019-05-21 Thread Bob Weber

On 5/20/19 10:35 PM, David Christensen wrote:

On 5/20/19 2:55 PM, R. Ramesh wrote:
I created a fresh install of debian stretch amd64. In that I created a 
qemu/kvm guest install of win8. It only accepted hda as a valid sound card.
All others show up without drivers. So, I am limited to only HDA as -soundhw.
Further HDA sounds so broken if I try to test with any sound file or youtube 
video. It is grainy/distorted and outright horrible.


Google searches mention something about MSI and those approaches are simply
not accepted by win8. So, I could not use them. I am wondering if there is
something fundamentally wrong with windows guests or if there is a setting that
I am missing.


Ramesh
Have you tried spice under Virtual Machine Manager?  I used to have similar
problems.  Letting all 6 cores run at maximum speed helped some but there were
still occasional dropouts in Win7 and Win10.  I switched to spice and audio and
video improved greatly.

--


*...Bob*


Re: bind gets permission errors in buster--systemd-related?

2019-05-15 Thread Bob Weber
I also have a similar problem accessing /run/named.  bind can't create the 
directory or any files in it.  The error messages:


couldn't mkdir '//run/named': Permission denied

could not create //run/named/session.key

Apparmor problems can be fixed by running aa-logprof and selecting the best
"fix" for your system.  I have done that as needed over the months since
apparmor was installed.  The other problem is that /run is a tmpfs, so it is
recreated at each boot and any manual fixes are lost after a reboot.  I also
have the same problem with the apt-cacher-ng program.  Since this machine is my
router for my home network it is rarely rebooted, so I have a temporary fix by
running the following script manually:


cd /run

mkdir named
chown bind.bind named
systemctl restart bind9

mkdir apt-cacher-ng

chown apt-cacher-ng.apt-cacher-ng apt-cacher-ng
systemctl restart apt-cacher-ng
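
A reboot-proof alternative would be a systemd-tmpfiles fragment, which
recreates the /run entries at every boot (a sketch; the file name is my own
choice, and the owners/modes should be adjusted to match your system):

-- /etc/tmpfiles.d/named-acng.conf

d /run/named          0755 bind          bind          -
d /run/apt-cacher-ng  0755 apt-cacher-ng apt-cacher-ng -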


My /etc/bind config directory has no reference to /run.  I do see a 
/run/resolvconf directory which has resolv.conf in it pointing to localhost and 
search domain.  This seems correct since bind is listening on localhost and you 
want to actually use bind to get and cache dns requests.


My bind is version 9.11.5.P4+dfsg-5.

--


*...Bob*


Re: A Basic Mount Observation

2019-05-07 Thread Bob Weber

On 5/7/19 12:02 PM, Cindy Sue Causey wrote:

I didn't fully *cognitively* grasp what you're saying, BUT I did grasp
enough to attempt the following via xfce4-terminal:

$ cd /mountpoint
$ ls
$ *(anticipated) crickets*
$ sudo (YEAH, I KNOW!) mount LABEL=buster-backup /mountpoint
$ ls
$ *mammoth-sized crickets*


I would never have thought to do it this way.  It always made more sense to be 
in the directory just above the actual mount point.  Say I have a directory 
/mnt/tom.  I would:


cd  /mnt

mount /dev/whatever tom

ls tom

... would show the contents of what was just mounted if there were no errors.

If you have the following line in /etc/fstab (and the fuse programs necessary):

sshfs#sue@cindyslaptop:/home/sue /mnt/sue fuse    user,noauto,rw    
0   0


and run:

cd /mnt

mount sue

ls sue

... would show sue's directory on cindyslaptop.  Notice that since there is an 
entry for /mnt/sue in fstab you only need to mount the directory /mnt/sue.


I use this to mount a directory on a remote machine locally after setting up
public key authentication so it doesn't even ask for a password.   This could
also be used with file systems in /dev, but they would always need to have the
same name (like /dev/sdc1), or some other way to identify the exact drive to be
mounted, like the drive's UUID.
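
For example, an fstab line keyed to the UUID instead of the device name (a
sketch; the UUID here is a placeholder, get the real one from "blkid
/dev/sdc1"):

UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/tom  ext4  noauto,user  0  0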



*...Bob*


Re: systemd mdadm spamming my syslog

2019-03-11 Thread Bob Weber

On 3/10/19 5:20 PM, Joe Pfeiffer wrote:

Since a recent update, my /var/log/syslog is getting spammed with huge
numbers of messages of the form

Mar 10 14:02:25 snowball systemd-udevd[18681]: Process '/sbin/mdadm
--incremental --export /dev/sda3 --offroot
/dev/disk/by-id/ata-ST3000DM001-1ER166_Z501MTQ3-part3
/dev/disk/by-partuuid/ad6f31ee-1866-400c-84f8-2c54da6abd2e
/dev/disk/by-path/pci-0000:00:11.0-ata-1-part3
/dev/disk/by-id/wwn-0x5000c50086c86ae8-part3' failed with exit code 1.

When I run the command by hand, I get

root@snowball:~# /sbin/mdadm --incremental --export /dev/sda3 --offroot
/dev/disk/by-id/ata-ST3000DM001-1ER166_Z501MTQ3-part3
/dev/disk/by-partuuid/ad6f31ee-1866-400c-84f8-2c54da6abd2e
/dev/disk/by-path/pci-0000:00:11.0-ata-1-part3
/dev/disk/by-id/wwn-0x5000c50086c86ae8-part3
mdadm: cannot reopen /dev/sda3: Device or resource busy.

Which at least gives me a small clue, but really not much of one.

I'm not even really clear on whther this is a systemd or mdadm bug.

So, some questions:

1) what is this command trying to do?  I do understand a little about mdadm,
and am running a RAID 1 array on this machine.  But this is using an
option (--offroot) that doesn't even appear in the man page, and I've
got no idea what it's trying to accomplish.

2) how can I make it stop?

I also have these kinds of messages for my 3 raid1 arrays.  The messages started
after an update to testing done on 2/26/19 and have continued up to 3/11/19
(today).   I looked at the terminal logs and I was running udev (240-6)
(installed 2/22) and mdadm (4.1-1) (installed 2/6).  My udev is now at 241-1 and
mdadm is still at 4.1-1.


Since the arrays are started correctly as shown by "cat /proc/mdstat" I haven't 
paid much attention to these messages. They only occur at boot.


My system is running testing and I do an upgrade just about every day.

--


*...Bob*


Re: Apache Backuppc problem

2018-08-11 Thread Bob Weber

On 8/11/18 1:50 PM, Gary Roach wrote:

Hi all.

Debian Stretch OS

I have installed the BackupPC /Apache2 package before and had no trouble 
accessing the BackupPC GUI at localhost/backuppc. This time (using apt install 
backuppc) I keep getting a window asking if I wish to save a bin file. The 
server downloads a file like W6gLcuk0.bin instead of serving up the GUI web 
page. I have tried several things suggested on line but nothing seems to work. 
Apache2 was installed using the .iso net install disk.


On past installations, the GUI worked out of the box. Not this time.

Any help will be sincerely appreciated.

Gary R


I see that there is an apache.conf file in the /etc/backuppc directory.  My
installation has a symbolic link to that file in the /etc/apache2/conf-enabled
directory.  The settings in that file control the backuppc cgi-bin
directory.  Here is what is in my file:


Alias /backuppc /usr/share/backuppc/cgi-bin/


<Directory /usr/share/backuppc/cgi-bin/>
    AllowOverride None
    Allow from all

    # Uncomment the line below to ensure that nobody can sniff important
    # info from network traffic during editing of the BackupPC config or
    # when browsing/restoring backups.
    # Requires that you have your webserver set up for SSL (https) access.
    #SSLRequireSSL

    Options ExecCGI FollowSymlinks
    AddHandler cgi-script .cgi
    DirectoryIndex index.cgi

    AuthUserFile /etc/backuppc/htpasswd
    AuthType basic
    AuthName "BackupPC admin"
    require valid-user
</Directory>


Hope this gets you in the right direction.

--


*...Bob*


Re: System freeze using web browsers

2018-05-09 Thread Bob Weber

On 5/9/18 5:17 PM, Alexander Beerhoff wrote:

Dear Debian Team,
I've found increasingly heavy problems using web browsers. At an earlier stage the
"system" froze (no ctrl-alt-function-key response) when watching long videos
online (say at least 30 min, say on youtube.com); now (as a
consequence of a system update??) I cannot use any browser: I tried chromium,
firefox (esr, stable, nightly), epiphany-browser and qutebrowser, and the result
is a system freeze or shutdown (watching video).
I've described the problem in very generic terms; I've filed a ticket with
mozilla for the earlier problem but cannot profile the browser since the gui is
very unresponsive when not frozen. Can you help? Waiting for your welcome advice.

Thank you for your attention and best regards.

Basic info:
Computer Acer Aspire V5 (121)
uname -v: #1 SMP Debian 4.16.5-1
--
Umi sukoschi
Niwa ni izumi no
Ko no ma ka na


Not knowing the details of your system I would suggest backing down to a 4.15-??
kernel.  I had the same problem watching Netflix on the official chrome browser
running the 4.16.0-1 amd64 kernel. I even downgraded chrome but the fix was
going back to kernel 4.15.17-1.  If you have other kernels installed just reboot
and select another kernel from the grub menu to see if that is the fix for you.
If you can't install the 4.15.17-1 kernel from your current sources try
http://snapshot.debian.org/ in the last week of April.


--


*...Bob*


Re: Backup problem using "cp"

2018-05-07 Thread Bob Weber

On 5/7/18 9:28 AM, Thomas Schmitt wrote:

Hi,

Richard Owlett wrote:

My goal was to copy root and its sub-directory to a directory on another
physical device.

Well understood.
In a slightly different scenario (backup on Blu-ray) i do this several
times per day.

But i would not dare to give the whole root tree as input to any copying
program or backup archiver. Not only because of the risk of stepping on my
own foot but also because there are several trees which do not deserve
backup or could even make trouble when being fully read.

In my root directory that would be: /dev /mnt /proc /run /sys
E.g. because of
   $ ls -l /proc/kcore
   -r 1 root root 140737477877760 May  7 15:22 /proc/kcore

(Somebody else shall try whether it's really readable and what comes out.
  The announced size is nearly 128 TiB.)


Have a nice day :)

Thomas


There is a program called rsnapshot that uses rsync for the actual work of 
copying but has a config file where you can supply exclude directories (like 
/media).  I just run "rsnapshot hourly" to copy my root file system before an 
apt upgrade command just in case a major problem occurs with the update.  The 
/proc /sys and /dev directories are not copied since they are "mounted" system 
directories.  rsnapshot uses hard links between backups so only the changed 
files are actually copied.  The number of versions to keep is configured in 
/etc/rsnapshot.conf.
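
A few of the relevant lines might look like this (a sketch, not a complete
config; rsnapshot insists on tabs, not spaces, between fields, and the
snapshot_root path here is my own example):

snapshot_root	/backup/snapshots/
retain	hourly	6
retain	daily	7
exclude	/media/
exclude	/mnt/
backup	/	localhost/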


In using your cp command, rsync or rsnapshot it is very important that the 
destination filesystem be able to handle hard links and all the file attributes 
of a linux file system.  So make sure that at least there is an ext3 or ext4 
type file system on the destination drive.  If you are not sure what file system 
is in use for the backup destination just run the mount command (as root) 
without any arguments and it will print out all the mounted file systems and 
types.  I get this line for my /home directory:


/dev/md2 on /home type ext4 (rw,relatime,data=ordered)

If I plug in a flash drive and have the system mount it I get (all one line):

/dev/sdg1 on /media/bob/3A78-573B type vfat 
(rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)


Notice the type vfat which is NOT what you want to see on the destination file 
system.


--


*...Bob*


Re: DNS server won't talk to me

2018-04-19 Thread Bob Weber

On 4/19/18 7:44 PM, Francois Gouget wrote:

So I'm running a bind server and while it works I ran into a domain name
that it refuses to resolve: maibokun.com.

Digging into it, it looks like one DNS server is refusing to talk to me:

On my box:
$ host maibokun.com
;; connection timed out; no servers could be reached
$ host maibokun.com 210.143.111.171
;; connection timed out; no servers could be reached

Same thing on my laptop. But if I connect the laptop to another Wifi
network (thus changing it public IP address) or run the command on a
computer on the other side of the atlantic I get:

$ host maibokun.com
maibokun.com has address 210.188.220.102
maibokun.com mail is handled by 10 mail.maibokun.com.
$ host maibokun.com 210.143.111.171
Using domain server:
Name: 210.143.111.171
Address: 210.143.111.171#53
Aliases:

maibokun.com has address 210.188.220.102
maibokun.com mail is handled by 10 mail.maibokun.com.


Are DNS servers banning queries from some residential addresses or
something like this? Anyone else seeing the same issue?


Try having bind forward the requests to another public DNS server like opendns.  
You could even protect yourself by having opendns block malware and other bad 
sites.   My bind named.conf.options file has the forwarding setup like this.


        forwarders {
    // opendns
    //    208.67.222.222;
    //    208.67.220.220;
    127.0.2.1;
    };
    forward only;

If you are really worried that your DNS queries are being diverted by man in the
middle attacks use dnscrypt-proxy.  I have dnscrypt-proxy listening on 127.0.2.1
(as shown above) and forwarding bind's DNS queries to opendns (cisco) over a
secure channel.  I even redirect all DNS queries (port 53 udp) destined for any
server to my bind with a shorewall redirect rule (firewall).


This setup returns this from a host command:

host  maibokun.com
maibokun.com has address 210.188.220.102
maibokun.com mail is handled by 10 mail.maibokun.com.


--


*...Bob*


Re: AppArmor permissions to create a specific directory

2018-03-11 Thread Bob Weber

On 3/11/18 2:13 PM, André Rodier wrote:

On 11/03/18 17:56, André Rodier wrote:

Hello,

I am working on a project to help self hosting emails with Debian.

I reached a point I am satisfied, but I have an issue with AppArmor some
experts may know how to solve.

I have set the rules with Dovecot and AppArmor, and it works very well
so far, except when the mail folder is not existing yet.

Is there any way to write a permission for AppArmor, that will let
dovecot create the maildir folder when it is not exists.

This is the error I have, the first time a user tries to access his mail
box:

Mar 11 17:45:05 homebox kernel: [  356.357353] audit: type=1400 audit(1520790305.235:176): apparmor="DENIED" operation="mkdir" 
profile="/usr/lib/dovecot/imap" name="/home/users/andre/mails/" pid=32645 comm="imap" requested_mask="c" 
denied_mask="c" fsuid=1001 ouid=1001

Obviously, I don't want to add a rule to let dovecot to write in the
home directory!

Thanks for your help,
André


OK, I am now creating the mail folders before the deployment of Dovecot,
for each user.

Actually, this makes more sense. The only issue is when creating new users.


Check out /etc/skel.  useradd with the -m option copies that directory to the
newly created home directory, changing permissions and ownership to the new user.
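
For example (a sketch; "andre" stands in for any new user):

mkdir -p /etc/skel/mails
useradd -m andre    # -m copies /etc/skel into the new home, owned by the new user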


--


*...Bob*


Re: My site has become unreachable when I've implemented SSL

2018-02-19 Thread Bob Weber

On 2/19/18 2:54 PM, Aldo Maggi wrote:

Thank you for your fast answer!

root@Casa-mia-1:~# lsof -i :443
COMMAND  PID USER      FD   TYPE DEVICE SIZE/OFF NODE NAME
apache2  879 root       6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2  948 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2  949 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2  950 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2  951 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2  952 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2 1385 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2 1386 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)
apache2 3386 www-data   6u  IPv6  20270      0t0  TCP *:https (LISTEN)

As for ufw, indeed port 443 was not enabled and I had problems in doing
it (bad port), at the end I wrote:
ufw allow https
Rule added
Rule added (v6)

now I have:

root@Casa-mia-1:~# ufw status
Status: active

To Action  From
-- --  
22/tcp ALLOW   Anywhere
CUPS   ALLOW   Anywhere
..
Telnet ALLOW   Anywhere
VNCALLOW   Anywhere
WWWALLOW   Anywhere
Anywhere   ALLOW   192.168.3.100
Anywhere   ALLOW   192.168.3.0/24
/tcp   ALLOW   Anywhere
5900:5910/tcp  ALLOW   Anywhere
2049   ALLOW   192.168.3.100
80/tcp ALLOW   Anywhere
443/tcpALLOW   Anywhere
22/tcp (v6)ALLOW   Anywhere (v6)
CUPS (v6)  ALLOW   Anywhere (v6)
...
WWW (v6)   ALLOW   Anywhere (v6)
/tcp (v6)  ALLOW   Anywhere (v6)
5900:5910/tcp (v6) ALLOW   Anywhere (v6)
80/tcp (v6)ALLOW   Anywhere (v6)
443/tcp (v6)   ALLOW   Anywhere (v6)

root@Casa-mia-1:~# systemctl restart apache2

but ... no avail, still "connection refused"

What else could be the culprit :-D

Thanks for your time!

Aldo :-)

P.S. Furthermore in /apache2/error.log I find:
PHP Warning:  PHP Startup: Unable to load dynamic library
'/usr/lib/php/20151012/apc.so' - /usr/lib/php/20151012/apc.so: cannot
open shared object file: No such file or directory in Unknown on line 0

Il giorno Mon, 19 Feb 2018 12:48:25 -0500
Greg Wooledge  ha scritto:


On Mon, Feb 19, 2018 at 06:36:01PM +0100, Aldo Maggi wrote:

Anyway, now if I browse writing my IP I get the Apache default page
(the browser tells me, anyway, that the site is unsecure), if I
write the name of the site I get (traslated from Italian):
Unable to reach the site
Connection denied by mysite.com

"Connection refused" (the correct English translation) means that
either the service is not listening to that port, or the packets
were rejected by a firewall.

You will need to examine both of those possibilities.

Making sure the service is listening on :443 should be fairly easy.
You can use "lsof -i :443" for example, or some ss or netstat command.

Checking whether you have a firewall blocking incoming 443 will be
a bit harder.



Looks like apache is only listening on IPv6 (see the lsof output above).  So if
the domain that you used in the command:


letsencrypt --apache -d mysite.com

resolves to an IPv4 address you need to tell apache to listen on your IPv4
address.  Your firewall looks like it has opened IPv4 and IPv6.  I also assume
that you try to access the site with that domain name in the URL in your
browser.  Check the file /etc/apache2/ports.conf.  It might be useful to run the
command "ip a" to see what addresses are assigned to your ethernet ports so you
can properly set up the ports.conf file.
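
A sketch of an explicit IPv4 listener in /etc/apache2/ports.conf (192.0.2.10 is
a placeholder for whatever IPv4 address "ip a" reports):

Listen 80
Listen 192.0.2.10:443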


--


*...Bob*


Re: Iptables at boot

2018-02-14 Thread Bob Weber

On 2/14/18 4:51 PM, Rodary Jacques wrote:

I was just going to give up , and I even installed shorewall, when my last 
attempt with my very old iptables config (from redhat 7.2) did work. I of 
course to still get rid of stupid systemd config, but I don't really care since 
my server is allways up!. Thank you anyway for your hints.
Jacques

If this server is connected directly to the internet, make sure the older config
is really working: run Steve Gibson's ShieldsUP (https://www.grc.com/shieldsup)
and scan at least the lower 1024 ports.  They should all be green unless you are
serving up data like a web page (port 80 and/or 443).


--


*...Bob*


Re: MIDI-to-USB on Debian?

2018-02-14 Thread Bob Weber

On Tue, Feb 13, 2018 at 03:00:20PM -0600, Nicholas Geovanis wrote:

Does anyone have a MIDI-to-USB adapter they could recommend for
Debian and/or linux?
This is just for a point-to-point connection from a Yamaha keyboard
to a laptop. Software on the laptop remains undetermined, probably
some combination of Rhythmbox, CSound, Supercollider and god knows
what else. Thanks..Nick
I've had good results with midiplus Tbox2X2 USB MIDI Interfaces. About $30 on 
amazon (https://www.amazon.com/gp/product/B00WU6F4M6). It has 2 interfaces for 2 
midi devices in a nice metal box with leds showing in/out activity.  It works 
with LMMS on debian testing without firmware download on a couple consumer grade 
keyboards I have.  I have only used one device at a time so far. Powered by usb 
interface.


*...Bob*


Re: Iptables at boot

2018-01-31 Thread Bob Weber

On 1/31/18 12:28 PM, Jacques Rodary wrote:


Hi

Many things happened since my first message: I first had to get rid of connman 
(connection manager), which insisted to preset iptables rules without any 
notice. My Debian box is uset as a DNS chrooted server (also I had to modify 
bind9.service behaviour), and I use iptables to do NAT, since I have one 
routable address for several clients. With Jessie I managed to have all this 
working. When upgrading to stretch, because of a stupid error with grub on my 
RAID system, and of an insufficient backup, I lost most of my config. Thanks 
for your help. When everything will be OK, I surely will have the use for your 
answers.


Jacques

Have you looked at shorewall?  I use it on all my debian linux installs.
Basically it's a front end to the kernel iptables network filters.  It sets up
the iptables entries and then goes away, so there is no additional program
running after it does its job.   It starts up on boot after you have set up the
rules the way you want.  You have to set a parameter in the
/etc/default/shorewall file to have it start, since you don't want to lose
connection to your machine if you are logging in through a network port.  That
way you can test it before you actually use it.  It is driven by several text
config files in /etc/shorewall. For instance NAT is set up easily by this
command in the snat file (my internet connection is on eth1 and local 172 net
is on eth0):


MASQUERADE  172.16.0.1/16   eth1

I redirect all the dns and time requests to my router machine even if the client
has requested these services from an outside address.  I use opendns for its
malware filters so bind is set to forward all non-local dns queries to opendns
servers.  I also use dnscrypt-proxy to get a secure connection to opendns so
that I can be assured that the data coming back from opendns hasn't been
tampered with.  These 2 lines in the rules file accomplish the redirection:


REDIRECT    Loc 53   tcp,udp   53 -
REDIRECT    Loc 123 tcp,udp  123    -

There is plenty of documentation and examples for simple setups available on the 
shorewall web site.


--


*...Bob*


Re: Cannot connect to WiFi.

2017-11-15 Thread Bob Weber
On 11/15/17 2:48 PM, Juan R. de Silva wrote:
> Hi folks,
>
> My ISP replaced my old modem with the new one. I changed my WiFi 
> Authentication key and the name of the WiFi network. Then I made Network 
> Manager to "forget" my old WiFi. Network Manager finds my new WiFi but I 
> cannot connect to it.
>
> When "Authentication Key is required" dialog pops up and the key is 
> entered,j Connect button remains grayed out/disabled. Thus there is no 
> way to get through but pressing Cancel button. 
>
> I'm running Debian Stretch.
>
> Could somebody help. It's quite urgent now.
>
> Thanks.
>
>
>
I had a similar problem with a new wifi router.  Under KDE I used the network
manager settings dialog to set up a wifi connection including a password for
WPA2.  It wouldn't connect.  You can view the password in NM and it looked
correct ... I checked several times but no luck.  I wiped out the password by
holding down the backspace key and I noticed I might have entered a leading
space by mistake.  So try to put the password in the network manager settings
dialog paying close attention to what you entered.  If that fails go back to the
router and re-enter the password again just in case you have some invisible
characters there.  This might also happen with the SSID.  If all else fails I
would set the SSID to something simple (say 3 characters) and the same for the
password, just for a test to see if you can connect.  Also make sure the WPA2 is
set to AES encryption.  This is the recommended setting and I know it works with
my NM in debian testing.

-- 


*...Bob*


Re: Sync two disks and hot swap

2017-11-09 Thread Bob Weber
On 11/9/17 2:01 AM, David Christensen wrote:
> On 11/08/17 17:44, Bob Weber wrote:
>> On 11/8/17 5:59 PM, David Christensen wrote:
>>> I have read articles about building a RAID 1 with three drives, migrating in
>>> data, pulling one drive and placing it off-site, operating in degraded mode 
>>> on
>>> two drives, and then periodically re-installing the third drive, 
>>> resilvering,
>>> pulling one drive and placing it off-site, and returning to degraded
>>> operations on two drives.  But STFW just now, I see a lot of posts with 
>>> titles
>>> indicating this is a bad idea.
> ...
>>> But what I really want is some form of snapshot technology (rsync/hard link,
>>> LVM, btrfs, ZFS) with all the goodies -- realtime compression, realtime
>>> de-duplication, and encryption.  I need a more powerful backup server (many
>>> core processor with AES-NI, 16+ GB RAM, SSD caches, etc.).
>
>> I have used raid 1 to make a drive I can take off site for backup.  You just
>> grow the raid 1 array by one disk and add the disk you want to take out 
>> (even on
>> a usb/sata connection ... but slow).   Of course the disk or partition(s) 
>> need
>> to be the same size as the array.  Let it sync and then boot to a live cd and
>> you can fail and remove that drive.   Or just power down and remove the 
>> drive.
>> That way the embedded file system will be unmounted correctly.  I have then
>> taken that one drive and connected it to another system and been able to run 
>> the
>> raid 1 in degraded mode and mount the embedded file system(s) to get to the
>> files.  To make the original raid happy just grow the array again setting the
>> number of drives back to what it was originally (you can grow to a smaller
>> number).  The syncing can be slow since every byte on the drive needs to be
>> "synced" instead of just the space the files take up.
>
> Okay.  What RAID technology were you using -- LVM, mdadm, btrfs, ZFS, other?
>
I use software raid with mdadm.  It's pretty forgiving about powering down and
removing a drive (after the sync) and growing the array back down to the original
size.  I mainly do this on my backup server running backuppc.  The files are
compressed and hard-linked between backups if they have not changed.  This makes
any other type of offsite backup pretty hard ... rsync just ran out of memory.
So adding a drive to the raid 1 and syncing is easy in comparison.  Having grub
make the drive bootable (since the OS is also on a raid 1 partition) makes the
drive very easy to just install in new hardware and get going again (assuming
the original backup system was destroyed).
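
As a sketch of that grow/sync/remove cycle with mdadm (the array and device
names here are just examples; adjust to your own setup):

mdadm /dev/md0 --add /dev/sdc1           # the take-away disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=3   # promote it; the resync starts
cat /proc/mdstat                         # wait here until the resync finishes
mdadm /dev/md0 --fail /dev/sdc1          # then fail it (or power down and pull it)
mdadm /dev/md0 --remove /dev/sdc1        # ...and remove it from the array
mdadm --grow /dev/md0 --raid-devices=2   # shrink back to the original size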

-- 


*...Bob*


Re: Sync two disks and hot swap

2017-11-08 Thread Bob Weber
On 11/8/17 5:59 PM, David Christensen wrote:
> On 11/08/17 02:49, Dominik George wrote:
>> Hi,
>>
>> I have the following scenario:
>>
>>   * A server with two hard drives in removable cases
>>   * A backup process writes data to both disks, making up a live backup 
>> server
>>   * A third disk is to be kept off-site
>>   * On a regular basis, I want to hot-swap one of the disks, as in, remove
>>     one of the two synced disks and replace it with the stale off-site copy,
>>     and put the now recent copy off-site
>>
>> I figure that a simple software RAID 1 would do the trick, but it is not
>> really made for it and would need some complex manual intervention in
>> order to not break the state on the removed disk.
>>
>> Any ideas on how to achieve this, or arguments that RAID 1 would indeed
>> be a good solution?
>
> Are the two drives in RAID (1?) or do they each have their own file system?
>
>
> I have read articles about building a RAID 1 with three drives, migrating in
> data, pulling one drive and placing it off-site, operating in degraded mode on
> two drives, and then periodically re-installing the third drive, resilvering,
> pulling one drive and placing it off-site, and returning to degraded
> operations on two drives.  But STFW just now, I see a lot of posts with titles
> indicating this is a bad idea.
>
>
> I have three drives in mobile dock drawers, each with LUKS and ext4. One is
> on-line in my backup server, one is near-site, and one is off-site. 
> Periodically, I put the near-site drive into the backup server, rsync the
> on-line drive to the near-site drive, remove the near-site drive, and then
> swap the near-site and off-site drives. Admittedly the wear is uneven, but
> it's KISS and it works.
>
>
> But what I really want is some form of snapshot technology (rsync/hard link,
> LVM, btrfs, ZFS) with all the goodies -- realtime compression, realtime
> de-duplication, and encryption.  I need a more powerful backup server (many
> core processor with AES-NI, 16+ GB RAM, SSD caches, etc.).
>
>
> David
>
>
I have used raid 1 to make a drive I can take off site for backup.  You just
grow the raid 1 array by one disk and add the disk you want to take out (even on
a USB/SATA connection ... but slow).  Of course the disk or partition(s) need
to be the same size as the array.  Let it sync, then boot to a live CD and
you can fail and remove that drive.  Or just power down and remove the drive.
That way the embedded file system will be unmounted correctly.  I have then
taken that one drive, connected it to another system, and been able to run the
raid 1 in degraded mode and mount the embedded file system(s) to get to the
files.  To make the original raid happy, just grow the array again, setting the
number of drives back to what it was originally (you can grow to a smaller
number).  The syncing can be slow, since every byte on the drive needs to be
"synced" instead of just the space the files take up.

I tried to use btrfs in several VMs running Debian, but I kept having to delete
snapshots to make sure I had enough free space.

-- 


*...Bob*


Re: DHCP server that itself gets an IP address by DHCP

2017-08-24 Thread Bob Weber
On 8/24/17 3:45 AM, Mark Fletcher wrote:
> Hello the list!
>
> [I suppose this is a little bit OT -- but you guys are the best 
> concentration of experts I know, so here goes anyway...]
>
> My local network consists of a bunch of Debian machines of various ages, 
> various iDevices, and the odd Windows machine connected either by wired 
> or wireless ethernet to a Buffalo AirStation, whose WAN port is 
> connected to a mini-ITX machine running LFS which acts as my firewall. 
> The firewall's other interface connects to my cable modem and thence to 
> the internet.
>
> For co-operation with my ISP my firewall gets its external IP address 
> via DHCP from the ISP. I use systemd-networkd to achieve this, and this 
> also takes care of populating /etc/resolv.conf with the name servers 
> provided by the ISP.
>
> So the firewall has 2 interfaces, the external facing one of which gets 
> an IP address from my ISP via DHCP, and the internal facing one has a 
> fixed private IP address.
>
> The AirStation is also set up to get its WAN IP address via DHCP, since 
> A) that is how it comes out of the box, B) the AirStation was for years 
> the last line of defence between my network and the internet and the 
> addition of the dedicated firewall is a relatively recent thing, and C) 
> both the instructions and the web configuration tool are in Japanese 
> and, this being a Japan-market-facing device, the language can't be 
> changed. So I like to futz with the settings on the AirStation as little 
> as possible.
>
> So I run dhcpd on the firewall machine, facing only the 
> local-network-facing interface, so that when the AirStation asks for an 
> IP address, it can be provided with one.
>
> The Airstation is _itself_ running a DHCP server on its LAN ports / 
> WiFi, which is how the rest of my machines on my network get their local 
> IP addresses. So the DHCP server on my firewall in effect services 
> _only_ the AirStation.
>
> My question is this -- I want to pass through the name servers my ISP is 
> providing, to the AirStation when it asks, so that the AirStation can 
> use the ISP's name servers. I did think about running a DNS on the 
> firewall also but this seems unnecessary, and would just create an extra 
> hop to answer DNS queries.
>
> Right now I have the name server IP addresses hard coded in the 
> dhcp.conf config file, which is fine as long as the ISP doesn't change 
> them. But, if the ISP were to change its name servers, the firewall 
> would pick up the changes but as things stand it would continue to 
> provide the old name server addresses to the AirStation, which would 
> mean the rest of the network would no longer be able to resolve DNS 
> queries the AirStation didn't already have cached.
>
> Is there any clever way to pass through the name server settings 
> the DHCP server provides, so that if the ISP should change its name 
> server IP addresses in the future, my local DHCP server would pass along 
> the new addresses when next asked?
>
> In other words, instead of specifying the name server addresses 
> explicitly in the dhcp.conf file, is there a way to specify that they 
> should be taken from the host the DHCP server is running on?
>
> Thanks
>
> Mark
>
>
I have a similar setup to yours, but I agree with Reco, as I have a caching DNS
server on my firewall machine along with dhcp.  It is set up to use DNSCrypt to
encrypt/protect the connection to opendns (most DNS is in the open and can be
hacked).  I also have a local domain (like mynamehome.net) so I can connect to
my local machines by name (bob.mynamehome.net).  I do have my wireless access
points only serving wireless connections (192.168.xxx.xxx/24), and the wired part
of my network connects directly to the firewall through a switch
(172.16.0.0/16).  I also have firewall rules set up to redirect all connections
going to external DNS servers (Google Chrome and Android devices sometimes make
their own connections to Google DNS) to my own DNS server, so I
am assured that all DNS is over an encrypted link.  All this allows you to be in
complete control over which DNS server is used, and assured that your ISP isn't
redirecting your internet connections through a botched DNS server returning
incorrect addresses (either on purpose or because of a hacked server).  Of
course the firewall machine has rules that block all external (internet)
connections while allowing internal connections through.  I use shorewall, which
makes setting up firewall rules a little easier.

-- 


*...Bob*


Re: kvm/qemu virtual machine can't find hard drives

2017-08-18 Thread Bob Weber
On 8/18/17 4:08 PM, Gary Roach wrote:
> I really appreciate all of you quick responses.
>
> For some unknown reason, when I searched the  Debian database virt came up
> empty. This time it didn't. So, at this point, its go RTFM.
>
> The manual will probably clear it up but When trying to run virt-manager, I
> did get the following error:
>
> unable to complete install: 'unsupported configuration: CPU mode 'custom' for
> x86_64 kvm domain on x86_64 host is not supported by hypervisor'
>
> Traceback (most recent call last):
>   File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in 
> cb_wrapper
>     callback(asyncjob, *args, **kwargs)
>   File "/usr/share/virt-manager/virtManager/create.py", line 2288, in
> _do_async_install
>     guest.start_install(meter=meter)
>   File "/usr/share/virt-manager/virtinst/guest.py", line 461, in start_install
>     doboot, transient)
>   File "/usr/share/virt-manager/virtinst/guest.py", line 396, in _create_guest
>     self.domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3523, in createXML
>     if ret is None:raise libvirtError('virDomainCreateXML() failed', 
> conn=self)
> libvirtError: unsupported configuration: CPU mode 'custom' for x86_64 kvm
> domain on x86_64 host is not supported by hypervisor
>
> I'll give things another try and may be back later. If not, thanks again.
>
> Gary R
>
>
>
According to AMD, the FX series CPU supports AMD Virtualization™ (AMD-V™)
Technology with IOMMU, so it may simply be turned off in the BIOS; check there.


...bob





Re: kvm/qemu virtual machine can't find hard drives

2017-08-17 Thread Bob Weber
On 8/17/17 1:06 PM, Gary Roach wrote:
> Hi all,
>
> Debian 9 (Stretch) system
> KDE Desktop
> MSI970A-G43 motherboard
> AMD FX 4350 processor - not overclocked
>
> Ive been trying to get a virtual machine set up and have run into problems
> with both virtualbox and kvm/qemu packages. I have a very messy project (Elmer
> fem) and want things completely walled off from my regular system. So I opted
> for a virtual machine.
>
> The kvm/qemu package seems to install properly (no errors). But when I try to
> install Debian 9 into it, the installer can't find my two hard drives sda and
> sdb. It then ask me to pick from a long list of drivers. I haven't been able
> to determine what the drivers should be for my system.  Sda is boot drive, 500
> Gb WD160 and the there is a WD10 1 Tb, ext4 blank drive. Both drives show up
> on Dolphin so they must be mounted.
>
> Am I missing some library or something? Any help will be greatly appreciated.
>
> Gary R.
>
Usually the qemu vm runs from a file (created by qemu-img) set up as a disk
drive by qemu.  I use Virtual Machine Manager (along with libvirtd) which can do
all the hard stuff for you.  I usually make my own virtual drive files with
qemu-img and let Virtual Machine Manager control access to them.  libvirtd does
a good job of letting you run qemu from your user account and access the
necessary resources on the host machine.  libvirtd sets up all the network
devices and bridges needed to access the real world. Virtual Machine Manager can
connect to USB devices on the host machine, manage CD drive access (either to
hardware cd drive or iso images ... like an install image), boot devices
(usually the sda drive or cd drive), the amount of memory for each vm and
additional drive you may want (like to try multi disk raid).   Virtual Machine
Manager opens up a vnc (or spice) window where you can see the output from the
vm when it is running on your KDE desktop ... either text mode or graphical
mode.  I have about 15 VMs defined.  One runs Debian with KDE to handle my
weather station.  Another runs Win 10 (ug) so I can do my taxes.  Most of the
others are Debian and KDE testing and unstable installs that I use to test
updates before I commit them to my host desktop machine.

Most Debian installs work easily with a 20 or 30 GB virtual drive.  You create
the necessary file with a command like this:

qemu-img create -f qcow2 /home/img/Mymachine/drive.img 30G

This assumes that /home is mounted on your 1TB drive.  Looks like the packages
to get you started are libvirt-daemon-system and virt-manager.
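
If you would rather define the guest from the command line, virt-install (from
the virtinst package) can do the same job as the Virtual Machine Manager wizard;
a sketch, with the paths and names here being just examples:

virt-install \
    --name Mymachine \
    --memory 2048 \
    --disk path=/home/img/Mymachine/drive.img,format=qcow2 \
    --cdrom /home/iso/debian-9-amd64-netinst.iso \
    --os-variant debian9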

Hope this helps.

...bob




Re: Question on using Debian with Davis weather station data loggers

2017-07-13 Thread Bob Weber
I have had a Davis Vantage Pro2 up since 2008, and I have been using weewx (since
2013) running on Debian installed in a VM.  It saves data in sqlite data files, which
can be converted to other data file types if needed.  I wrote some python programs
to convert the data (including the time data, to a human-readable format) to
postgresql and display it in customized graphs.  There is a user base to help you
through any problems.  It can update places like wunderground and produces its
own customizable html pages for viewing data on a web site (see mine at
weather.skyeweb.com).  It also supports several other brands of stations.  I use
an IP logger connected to an envoy, so all my data is transferred over my
network.  My Davis console can then be placed anywhere without having to be near
a computer.


*...Bob*
On 7/12/17 10:07 PM, Jason wrote:
> I have a Davis Vantage Pro2 weather station and would like to get a
> data logger for it but I'm not sure what it would take to retrieve the
> data with Linux. They have a USB version and a Serial version data
> logger available, both packaged with either Windows or Mac software.
>
> I'd be curious if anyone on here has any recommendations for which
> version would be most compatible with Linux and what is needed to make
> it work. All I want is to be able to retrieve the data (maybe in
> CSV format?) so I can import to a spreadsheet.
>
> Thanks.



Re: Relative stability of Testing vs Unstable

2017-07-05 Thread Bob Weber
On 7/5/17 8:17 PM, Jason Cohen wrote:
> I've been using Debian for a number of years, but my experience has
> typically been with servers where I have used the Stable branch for its
> reliability and security support.  However, I recently began using
> Debian Stretch for my desktop and foresee a need for more frequent
> software updates than the approximate 2 year cadence of the Stable
> release.  While the backports repository is great, it only covers a
> small subset of packages.
>
> My question is how Debian Testing and Unstable compare in terms of
> stability.  The Debian documentation suggests that Testing is more
> stable than Unstable because packages are delayed by 2-10 days and can
> only be promoted if no RC bugs are opened in that period [1].  Yet,
> other sources indicate that Testing can stay broken for weeks due to
> large transitions or the freeze before a new stable release [2].
>
I have been using testing/kde for several years now and have been bitten by some
of these transitions.  I use ext4, so my snapshot is made by rsnapshot (an rsync
wrapper).  I have had to restore from this snapshot several times when my system
wouldn't go into graphics mode.  Unfortunately, rsync has failed when a package has
a file with the same date and time but is actually different as seen by its
md5sum.  There is an explanation for this behavior, but I never understood it.
To me a file should never have the same date and time but actually be
different.  rsync's checksum option is very slow!  Apparently there is a fix
coming to the way packages are generated which will fix this!  I use debsums
-ca to check all the package files.  I also have an apt-cacher-ng server running
so that I can always go back to a previous package if needed, or pick up a
package that has failed debsums.  I also run several VMs with various testing
and unstable systems so I can see for myself what works.  I use testing and
unstable source list files and pinning to determine what each system uses.  For
testing:

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 800


Just reverse the pin priority for unstable.  This way, if I see a package in
unstable that I want to try on a testing system, I can specify the version number
in the install command like this:

apt install package=version

You can also use apt-mark to set a hold on a package if you see problems others
are having with a package on the various Debian lists.  I did this for some xorg
packages a while back to wait for a fix to propagate out.
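
The hold itself is just these standard apt-mark subcommands (the package name
here is only an illustration):

apt-mark hold xserver-xorg-core      # keep the installed version through upgrades
apt-mark showhold                    # list everything currently on hold
apt-mark unhold xserver-xorg-core    # release it once the fix has propagated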

I have a grub boot option to boot to an ISO copy of SystemRescueCd if I need to
restore files or tweak the system.  On top of all this, I have a backuppc server
backing up all my systems, so if I forget a snapshot before an upgrade I still
have a backup.  I use software raid 1 so I can survive a single disk failure.
It also allows me to add a disk to the array, pull it after it is synced up,
and put it in off-site storage, so I have a backup in case of a fire or water
damage event.

In my experience, testing gets very "stable" running up to a new release.  Just
before the last release I spent a day updating all my systems/laptops so that I
would be about at that stable point.  I have not updated my main system since,
and now apt wants to update 744 packages (after about 20 days)!  I haven't seen
any major problems in my VMs that I update every day, so I suppose I will update
it in a few days.


...Bob



Re: apt-cacher-ng related program and package suggestion

2017-06-15 Thread Bob Weber



> Simple workaround for now:
>
> /etc/apt/sources.list:
> 
> deb http://site1.ip.address/debian jessie-backports main
> #deb http://site2.ip.address/debian jessie-backports main
> 
>
> Now just change the comment from one to the other, run apt-get
> update and go.
>
> Second workaround, if you have lots of entries to be switched:
> create two sources.list files, named home.list and uni.list.
>
> When you move from place to place, copy the appropriate one 
> in to /etc/apt/sources.list and run apt-get update.
>
> -dsr-
>
It's even easier than that.  Don't mess with the .list files; just change the
file

/etc/apt/apt.conf.d/000apt-cacher-ng-proxy

according to the login.

The contents of that file are just the address/port of the apt-cacher-ng server:

Acquire::http::Proxy "http://172.16.0.1:3142/";

Just write a script to check which network you are on, like pinging the
apt-cacher-ng server(s) until one responds, and set
/etc/apt/apt.conf.d/000apt-cacher-ng-proxy accordingly.

Try the following line with your own existing and non-existing IPs:

for f in 172.16.0.1 172.16.0.2 172.16.0.33; do if ping -c3 $f; then echo $f
yes;else echo $f no;fi; done
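
Extended one step, here is a sketch that writes the proxy file for the first
server that answers (the IPs are examples; run it as root):

#!/bin/sh
for f in 172.16.0.1 172.16.0.2 172.16.0.33; do
    if ping -c1 -W2 "$f" > /dev/null 2>&1; then
        echo "Acquire::http::Proxy \"http://$f:3142/\";" \
            > /etc/apt/apt.conf.d/000apt-cacher-ng-proxy
        break
    fi
done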

bob




Re: Strange clicking noise from my laptop hard drive

2017-05-10 Thread Bob Weber
On 5/10/17 3:32 PM, Philip Ashmore wrote:
> Hi there.
>
> I managed to record it using my laptops built-in microphone.
>
> To those who would prefer a description:
> It starts fast and slows down. I opened the file with Audacity and determined
> the intervals to be 0.1, 0.3, 0.6, 10, 15 seconds, then it repeats.
>
> It sounds like some kind of protocol negotiation algorithm that requires a
> disk sync when the polling time doubles.
>
It's most likely the disk trying to read some bad sectors.  It is trying to
access the bad sector by approaching it from different ends of the disk ...
that's where the whine sound comes from at different intervals.  This is a bad
sign, so get any important data off the drive NOW.

If the system lets you, run 'smartctl -a /dev/sdx' (replace sdx with your
drive; if you get a command-not-found error, install the package smartmontools)
and look for things like UDMA_CRC_Error_Count, Offline_Uncorrectable and
Reallocated_Sector_Ct values that are high (might just be over zero).  There may be
other messages that indicate drive errors.  You could do a long self-test
('smartctl -t long /dev/sdx'), but that will only tell you where the errors are.
If you can, get a copy of SystemRescueCd from http://www.system-rescue-cd.org/.  It
is a live Linux repair disk that can do many things.  It has utilities that can
retrieve files from a bad disk and more.  This would give you a chance to mount
the drive and get the important data you need.

To attempt a repair I would boot the SystemRescueCd and run the command
'badblocks -svn /dev/sdx'.  That is a non-destructive read-then-write test; it
would try to read the bad blocks and, if successful, write them back, giving
the drive a chance to relocate the bad sector(s).  If this fails and you want to
try to fix the drive, then try spinrite (from grc.com).  It is a commercial
program, but it does a deep scrubbing of the disk and has been able to restore
bad disks.


...bob



Re: Live Fille System Backup

2017-05-05 Thread Bob Weber
On 5/4/17 7:17 PM, Sergei G wrote:
> I am running Raspberry PI and I would like to dump full file system without
> shutting down the system.  One machine runs nginx and another runs
> PostgreSQL.  I have had a good success with FreeBSD and dump software, because
> it is part of the OS and core team maintains it.  
>
> However, dump utility is no longer maintained by the original developer and is
> effectively getting too old.  I am no longer sure what is a dump-like solution
> that's easy to work with at a file system level.
>
> I would like a backup tool that does not bring a million dependencies with MBs
> of files.  Something that works on server without X Windows and can send
> backup to an externally attached USB drive.  Nothing fancy.   No network
> infrastructure.  Incremental backups would be greatly appreciated.  Ability to
> pipe to a compression program is a plus, just like I did with dump.
>
> I'd like to be able to apply a similar solution on actual Debian and Ubuntu
> VMs.  When I go to Ubuntu I might have to deal with newer ext file systems. 
> Not sure what is supported there.
>
> If there is no good live backup solution, I am willing to take the system down
> and back it up using another system.  It is not ideal, but it would work.
>
> Any advice would be greatly appreciated.
>
> P.S. All mentioned systems are Debian based and I feel that it is appropriate
> to ask on Debian user list.
>
>
> Thank you
I use rsnapshot to make a live backup of my root partition (home is mounted on
another partition) to a directory under home.  rsnapshot uses rsync with the
appropriate options (I do add some extras) to do a backup.  I do this before I do
an "apt upgrade", so that if things don't work well after the upgrade I can restore
what was there before.  I have used this "backup" several times to take my system
back to before the upgrade and have not had any problems.  I am not backing up
databases, so the remarks here about that sort of backup apply.  To restore, I
boot a copy of SystemRescueCd, mount the source and destination partitions, and
run a script that runs rsync with similar options to restore the root back to a
working state.  The rsnapshot backup is an exact copy (uncompressed) of the root,
so you can go get a single file easily if you need to.
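
A minimal /etc/rsnapshot.conf sketch of that kind of setup (the paths and counts
are assumptions, and the fields must be separated by real tabs):

snapshot_root   /home/root-snapshots/
retain          alpha   3
backup          /       localhost/      exclude=/home

Running "rsnapshot alpha" before an upgrade then rotates the three snapshots and
takes a fresh one.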

I also have several VMs of different Debian installs (unstable and testing)
that I upgrade every few days to follow any problems that arise, but since the
VMs can't duplicate my main system's xorg setup they don't catch everything.  I
use rsnapshot on these VMs also and have restored the rsnapshot backup several
times.  I have tried btrfs on these VMs, but I always ended up running out of
free space during an upgrade, so I would have to delete some older snapshots to
free up some space and run the upgrade again.  These VMs are running on 20 to
30 GB virtual hard drives, so there is not a lot of free space for the btrfs
snapshots.

Grub can also handle booting an ISO of SystemRescueCd, so I don't even have to
find a CD with SystemRescueCd on it.  Just select the grub menu option for
SystemRescueCd.

For a daily backup I use backuppc to back up all my local systems and a remote VM
at Digital Ocean.  These backups are all on live systems and would have the
problems with live databases as described by others.  You just need an ssh login
using RSA key pairs, so you don't have to supply a password to backuppc.
Postgresql has ways to back up databases, made very easy by using a program
like "pgadmin III" (in Debian as pgadmin3/testing,unstable,now 1.22.2-1 amd64).
You might have to stop database updates while you back up, though.  Individual
tables or the whole database can be backed up.

...bob





Re: A minimal relational database in Debian

2017-03-01 Thread Bob Weber
> Unacceptable. It affects the size of _everything_, not just font size.
> I collided with with a fatal side-effect. I tried a too large factor.
> Now the "accept" button is now off screen and *NOT* accessible.
> Don't see any option short of reinstall to resolve.
> I consider LibreOffice *DOA*
>
Try holding down the alt key and using the left mouse button.  It should allow you
to move the window anywhere on the screen, so you can see the button that is off
screen.
>>
>> For me this works:
>> Go to Tools > Options. Under LibreOffice, select View. Then find the Scaling
>> option under User Interface. The default value is 100%. Increase this number
>> until the font size is functional for you.
>>
>> Regards,
>> jvp.
>>
>>
>>
>
>
>
>



Re: HELP! Re: How to fix I/O errors? (SOLVED)

2017-02-12 Thread Bob Weber
On 02/12/2017 01:59 PM, Marc Shapiro wrote:
> On 02/12/2017 08:30 AM, Marc Auslander wrote:
>> I do not use LVM over raid 1.  I think it can be made to work,
>> although IIRC booting from an LVM over RAID partion has caused issues.
> my boot partitions are separate.  They are not under LVM.
>> LVM is useful when space requirements are changing over time and the
>> ability to add additional disks and grow logical partions is needed.
>> In my case, that isn't an issue.  I have only a small number of
>> paritions - 3 because of history but starting from scratch, I'd only
>> have two - root (including boot) and /home.
> I started using LVM when I had a much smaller disk (40GB).  With the current
> 1TB disk, even with three accounts on the box, and expanding several
> partitions when moving to the new disk, I have still partitioned less than
> half the disk and that is less than 1/3 used. So, no, LVM is probably not an
> issue any more.
>
> BTW, what is your third partition, and why would you not separate it now if
> starting from scratch?
>> I converted to mdamd raid as follows, IIRC.
>>
>> Install the second disk, and parition it the way I wanted.
>> Create a one disk raid 1 partion in each of the new paritions.
>> Take down my system, boot a live system from CD, and use a reliable
>> copy program like rsync to copy each of the partitions contents to the
>> equivalent raid partition.
>> Run grub to set the new disk as bootable.  This is by far the
>> trickiest part.
>> Boot the new system and verify it's happy.
>> Repartion the now spare disk to match the new one if necessary.
>> You may need to zero the front of each partion with dd if=/dev/zero
>> to avoid mdadm error checks.
>> Add the partitions from that disk to the mdadm paritions and let mdadm
>> do its thing.
>>
> On 02/12/2017 07:08 AM, Bob Weber wrote:
>>
>> I use raid 1 also for the redundancy it provides.  If I need a backup I just
>> connect a disk, grow each array and add it to the array (I have 3 arrays for
>> /, /home and swap).  It syncs up in a couple hours (depending on size of the
>> array).  If you have grub install itself on the added disk you have a
>> bootable copy of your system (mdadm will complain about a degraded array).  I
>> then remove the drive and place it in another outbuilding in case of fire. 
>> You can even use a external USB disk housing for the drive to keep from
>> shutting down the system.  The sync is MUCH slower ... just coma back the
>> next day and you will have your backup.  You then grow each array back to the
>> number of disks you had before and all is happy again.  Note that this single
>> disk backup will only work with raid 1.
>>
> So, how do you do a complete restore from backup?  Boot from just the single
> backup drive and add additional drives as Marc Auslander describes, above?

Yes, that is what you would need to do if there was a complete failure in your
machine and maybe you had to start over with a new motherboard and power supply.

>
>
> One other question.  If using raid, how do you know when a disk is starting to
> have trouble, as mine did?  Since the whole purpose of raid is to keep the
> system up and running I wouldn't expect errors to pop up like I was getting. 
> Do you have to keep an eye on log files?  Which ones?  Or is there some other
> way that mdadm provides notification of errors?  I've got to admit, even
> though I have been using Debian for 18 or 19 years (since Bo), log files have
> never been my favorite thing.  I generally only look at them when I have a
> problem and someone on this luist tells me what to look for and where.
>
> Marc
>
>
I use a program called ossec.  It watches the logs of all my Linux boxes, so I get
email messages about disk problems.  I also do periodic self-tests on all my
drives, controlled by smartd from the smartmontools package.  I also use a
package called logwatch, which summarizes my logs.  The messages from mdadm and
smartd are seen by ossec.  When I mess with an array to make it larger and add a
disk for backup, I get the messages in my mailbox about a degraded array.  As I'm
reading them I am startled until I remember ... oh, I did that!  I have a daily
cron job that emails the output of "smartctl -a /dev/sdx" for each drive on each
machine so I can keep a history of the parameters for each drive.
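
That daily job can be a one-line root crontab entry; a sketch, with the drive
list and the recipient as assumptions:

0 6 * * * for d in /dev/sda /dev/sdb; do /usr/sbin/smartctl -a "$d"; done | mail -s "smartctl report" root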

I also use backuppc on a dedicated server to back up all my boxes.  That way I
can get back files I deleted by mistake, or modified and need to roll back to a
previous version.  I now have all my machines on raid 1.  My wife just recently
gave up on Win 10 with all those updates that just took over her machine when
Windows wanted 

Re: HELP! Re: How to fix I/O errors? (SOLVED)

2017-02-12 Thread Bob Weber
I use raid 1 also for the redundancy it provides.  If I need a backup, I just
connect a disk, grow each array, and add the disk to the array (I have 3 arrays,
for /, /home and swap).  It syncs up in a couple of hours (depending on the size
of the array).  If you have grub install itself on the added disk, you have a
bootable copy of your system (mdadm will complain about a degraded array).  I then
remove the drive and place it in another outbuilding in case of fire.  You can
even use an external USB disk housing for the drive, to keep from shutting down
the system.  The sync is MUCH slower ... just come back the next day and you will
have your backup.  You then grow each array back to the number of disks you had
before and all is happy again.  Note that this single-disk backup will only work
with raid 1.


*...Bob*
On 02/11/2017 10:42 PM, Marc Shapiro wrote:
> On 02/11/2017 05:22 PM, Marc Auslander wrote:
>> You didn't ask for advice so take it or ignore it.
>>
>> IMHO, in this day and age, there is no reason not to run raid 1.  Two
>> disks, identially partitioned, each parition set up as a raid 1
>> partition with two copies.
>>
>> When a disk dies, you remove it from all the raid partitions, pop in a
>> new disk, partition it,  add the new partitions back into the raid
>> partitions and raid rebuilds the copies.
>>
>> Except for taking the system down to replace the disk (assuming you
>> don't have a third installed as a spare) you just keep running as if
>> nothing has happened.
>>
> I had been considering using raid 1 and I have not yet ruled it out entirely. 
> I have never used raid and have been reading up on it over the past couple of
> weeks.  AIUI you can use LVM over raid.  Is there any actual advantage to
> this?  I was trying to determine the advantages of using straight raid,
> straight LVM, or LVM over raid.  If I decide, later, to use raid, how dificult
> is it to add to a currently running system (with, or without LVM)?
>
>
> Marc
>
>



Re: Advice / recommendations on Inexpensive Managed Ethernet Switches

2017-02-03 Thread Bob Weber
You might look at the Ubiquiti EdgeRouter X Advanced Gigabit Ethernet Router
(ER-X, 256MB storage, 5 Gigabit RJ45 ports), about $50 on Amazon.  It actually
runs a small Debian-like OS.  It is configured by a web interface and a command
line interface, through ssh or embedded in the web interface.  It has counters and
displays graphs of the current throughput of each port.  The basic router
configuration (set up by wizards to get you started) has one port to connect
to the internet (your dsl modem), NATed to the other 4 ports set up like a
switch.  It has a DHCP server to assign internal IP addresses on your LAN if you
want.  Mirroring is also possible through the command line interface.  Port rate
limiting is also possible.  While I use a Debian box for my main router/firewall,
I have been experimenting with an ER-X for a while as a backup in case the Debian
box goes down.

I also have a TP-Link 5-Port Gigabit Ethernet Web Managed Easy Smart Switch
(TL-SG105E v2.0), about $28 on Amazon.  It has a web configuration interface
(make sure you get the V2.0) and can be easily set up to mirror ports.  This is
not a router, so it won't protect your internal LAN like the ER-X would.

Now, to actually monitor the traffic from a mirrored port connected to your
Debian desktop, you can use wireshark.  It can display traffic in real time,
showing source and destination addresses/names and protocols.  It can filter by
IP, so you could just see the traffic your son generates.  You can graph the data
also.  Wireshark has many ways to see the data it collects.  My favorite is
"conversations", which shows sources and destinations and packets/bytes
transferred.  For instance, you might see your son's internal IP going to youtube
and the data he uses just to watch a video.

Another program I use to just watch the amounts of data being used is vnstat.  It
can show data usage by hour, day or month.  Just install vnstat on each Debian
machine and have the results of "vnstat -i eth0 -d" emailed to you every day by
a crontab entry (a sketch follows the sample output below).  Here is an example
of what is on the outgoing port of my router box.

vnstat -i eth1 -d

 eth1  /  daily

     day         rx      |     tx      |    total    |   avg. rate
 ------------------------+-------------+-------------+---------------
  01/05/2017    4.82 GiB |  274.30 MiB |    5.09 GiB |  493.72 kbit/s
  01/06/2017    5.16 GiB |  250.13 MiB |    5.40 GiB |  524.53 kbit/s
  01/07/2017    4.13 GiB |  271.32 MiB |    4.39 GiB |  426.58 kbit/s
  01/08/2017    4.61 GiB |  267.46 MiB |    4.87 GiB |  472.95 kbit/s
  01/09/2017    3.35 GiB |  624.10 MiB |    3.96 GiB |  384.68 kbit/s
  01/10/2017    4.72 GiB |  263.63 MiB |    4.98 GiB |  483.42 kbit/s
  01/11/2017    5.02 GiB |  303.67 MiB |    5.32 GiB |  516.44 kbit/s
  01/12/2017    2.87 GiB |  194.76 MiB |    3.06 GiB |  297.22 kbit/s
  01/13/2017    4.44 GiB |  270.56 MiB |    4.70 GiB |  456.34 kbit/s
  01/14/2017    4.36 GiB |  244.49 MiB |    4.60 GiB |  446.73 kbit/s
  01/15/2017    4.04 GiB |  354.37 MiB |    4.39 GiB |  426.23 kbit/s
  01/16/2017    4.60 GiB |  360.85 MiB |    4.95 GiB |  480.43 kbit/s
  01/17/2017    4.07 GiB |  269.75 MiB |    4.34 GiB |  420.89 kbit/s
  01/18/2017    3.90 GiB |  272.31 MiB |    4.17 GiB |  404.66 kbit/s
  01/19/2017    4.70 GiB |  321.41 MiB |    5.01 GiB |  486.59 kbit/s
  01/20/2017    4.65 GiB |  294.00 MiB |    4.94 GiB |  479.26 kbit/s
  01/21/2017    7.12 GiB |  343.20 MiB |    7.45 GiB |  723.52 kbit/s
  01/22/2017    7.23 GiB |  379.96 MiB |    7.60 GiB |  737.88 kbit/s
  01/23/2017    5.54 GiB |  290.97 MiB |    5.82 GiB |  565.08 kbit/s
  01/24/2017    4.85 GiB |  355.95 MiB |    5.20 GiB |  505.09 kbit/s
  01/25/2017    3.48 GiB |  259.62 MiB |    3.73 GiB |  362.58 kbit/s
  01/26/2017   10.14 GiB |  469.21 MiB |   10.60 GiB |    1.03 Mbit/s
  01/27/2017    4.94 GiB |  324.84 MiB |    5.26 GiB |  510.76 kbit/s
  01/28/2017    5.75 GiB |  332.64 MiB |    6.08 GiB |  589.86 kbit/s
  01/29/2017    4.16 GiB |  291.04 MiB |    4.44 GiB |  431.41 kbit/s
  01/30/2017    5.93 GiB |  331.44 MiB |    6.25 GiB |  606.99 kbit/s
  01/31/2017    3.36 GiB |  247.76 MiB |    3.61 GiB |  350.02 kbit/s
  02/01/2017    3.22 GiB |  248.35 MiB |    3.47 GiB |  336.53 kbit/s
  02/02/2017    3.87 GiB |  257.72 MiB |    4.12 GiB |  399.78 kbit/s
  02/03/2017    1.21 GiB |  128.89 MiB |    1.34 GiB |  265.66 kbit/s
 ------------------------+-------------+-------------+---------------
  estimated     2.48 GiB |     262 MiB |    2.74 GiB |


I watch several hours of Netflix a day so this is pretty high usage. 
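
The crontab entry mentioned above can be as simple as this sketch (the interface
and the address are examples):

0 7 * * * /usr/bin/vnstat -i eth1 -d | mail -s "daily vnstat report" you@example.com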

Hope this helps.

*...Bob*

On 02/02/2017 10:42 PM, rhkra...@gmail.com wrote:
> Thanks for the replies (from Dan and Frank)!
>
> I'm going to do some thinking--at first I just wanted to find out how we were 
> using so much bandwidth, but, once I do, I might want to try blocking some of 
> it if t

Re: Problem google-chrome Flash Player with http://www.nbcnews.com/nightly-news

2017-01-14 Thread Bob Weber
I should have also added the exception "[*.]nbcnews.com" as well as
"[*.]nbc.com" in the Chrome Flash exception list.  Sorry I missed that.



*...Bob*
On 01/13/2017 09:40 PM, Hugo Vanwoerkom wrote:
> Hi
>
> The Flash Player that comes with Google-Chrome has become more noisy with
> newer versions of Google-Chrome.
>
> In the beginning Flash just displayed the video.
>
> Then it displayed the start symbol that you had to click in order for the
> video to start.
>
> Now it displays a puzzle piece with the message to right click the puzzle
> piece and then select "Run the plug-in".
>
> That works generally but it fails with NBC News pages. "Right-click to run
> Adobe Flash Player" appears overlayed by a rotating circle segment. but
> right-click never shows the option to run the plug-in.
>
> Can anyone verify this and what is to be done?
>
> Hugo
>
>



Re: Problem google-chrome Flash Player with http://www.nbcnews.com/nightly-news

2017-01-14 Thread Bob Weber
This is the safe way to browse the web.  The flash plugin is apparently disabled
by default.  What you can do is make an exception for nbc.  Go into
settings/advanced and click on the button for "Content settings".  Scroll down
to Flash.  Click on "Manage exceptions" and enter "[*.]nbc.com" without the
quotes.  Make sure Allow is set and hit the enter key.  This makes an exception
for all of nbc so that things are displayed correctly.  You should also check
that you don't have ad/malware-blocking extensions installed that might block
content on nbc.  If so, make sure nbc is whitelisted there also.



*...Bob*
On 01/13/2017 09:40 PM, Hugo Vanwoerkom wrote:
> Hi
>
> The Flash Player that comes with Google-Chrome has become more noisy with
> newer versions of Google-Chrome.
>
> In the beginning Flash just displayed the video.
>
> Then it displayed the start symbol that you had to click in order for the
> video to start.
>
> Now it displays a puzzle piece with the message to right click the puzzle
> piece and then select "Run the plug-in".
>
> That works generally but it fails with NBC News pages. "Right-click to run
> Adobe Flash Player" appears overlayed by a rotating circle segment. but
> right-click never shows the option to run the plug-in.
>
> Can anyone verify this and what is to be done?
>
> Hugo
>
>



Re: Urgent help needed - Debian boot hangs

2017-01-06 Thread Bob Weber
On 01/06/2017 09:57 AM, h...@hanswkraus.com wrote:
>
> Am 06.01.2017 15:45, schrieb Dan Ritter:
>
>> On Fri, Jan 06, 2017 at 03:32:01PM +0100, h...@hanswkraus.com
>>  wrote:
>>> Hi,
>>>
>>> urgent help needed. After adding an iptable entry for masquerading my
>>> local network and making it permanent with
>>> the package "iptables-persistent" my Debian server doesn't boot any
>>> more.
>>> It hangs with the line:
>>> A start job is running for LSB: Raise network interfaces. ( ... /no
>>> limit)
>>> The root file system where it boots from is on an Adaptec HW Raid
>>> interface, the motherboard is a Asrock.
>>>
>>> I habe tried to boot from a live CD (by pressing F11 as the short boot
>>> msg of the motherboard suggests), but with no avail,
>>> the boot menue doesn't appear. I don't see the Grub menue, the screen is
>>> momentarily blank.
>>> The computer insists on booting from the Adaptec.
>>>
>>> Is there any chance to interrupt the booting process? The other command
>>> windows (Alt-F2 .. Alt-F6) show only the line
>>> A start job is running for LSB: Raise network interfaces. ( ... /no
>>> limit)
>>
>> When you boot normally, do you get a GRUB menu flashing by? If
>> so, you can interrupt it there (try down-arrow) and edit the
>> boot line to change the init system to init=/bin/sh
>>
>> Then you can mount your root filesystem r/w
>> mount -o remount,rw /
>> and edit your iptables troubles away.
>>
>> -dsr-
>>
> Hi Dan,
>
> thanks for the help but I don't see a GRUB menu. Maybe it's there because I get
> an info
> from the screen: frequency out of range... After a few seconds the system
> beeps once
> and continues booting.
>

This is where your initial problem is.  Apparently your monitor can't sync to
the resolution your system is using during boot.  If you can, try another monitor
that might have a better sync range.  Maybe try to jump into the BIOS setup and
see if there are any options that would affect the video resolution, like setting
a generic VGA mode.  I've had this happen, but I have been able to ssh into the
machine after it boots to try different grub options.
 
>
> Kind regards,
> Hans
>
>  




*...Bob*


Re: Upgrade Wheezy to Jessie - gave up waiting for root device

2017-01-06 Thread Bob Weber
On 01/06/2017 08:38 AM, Steven Kauffmann wrote:
> Hi all,
>
> I'm busy with an upgrade from Wheezy to Jessie. I didn't have any issue during
> the upgrade so far, but I cannot boot into the new kernel (3.16.0-4-amd64).
>
> When booting I get the following output: give up waiting for root device ...
> I can still boot into the old kernel (3.2.0-4-amd64).
>
> What I did so far:
> * Reinstalled kernel image 3.16.0-4
> * Reinstalled grub in MBR
> * checked UUID of boot partition in grub
> * in the command line of grub I can access the boot partition (using ls
> hd(0,1) ). Vmlinux and initrd.img are available.
> * Added delay in grub settings
>
Have you tried "rootdelay=5"?  That has helped me in the past.  That option
makes the kernel wait 5 seconds for the root device.  If that helps, then try a
smaller number so your boot time won't be overly long.  Also add "noquiet" so
you will see all the kernel messages as it boots.

Just add those 2 kernel options to whatever is already in GRUB_CMDLINE_LINUX= in
/etc/default/grub.  "quiet" may already be there, so just add the "no" in front.
Then run update-grub so grub can update the /boot/grub/grub.cfg file.
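
A sketch of the edit, assuming "quiet" was the only option already there:

GRUB_CMDLINE_LINUX="noquiet rootdelay=5"

followed by:

update-grub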

> So far no result, all things I'v been trying end up in the "give up waiting
> for root device" error.
>
> This is how the partition table is looking like:
> Device Boot Start   End   Sectors  Size Id Type
> /dev/sda12048 121636863 121634816   58G 83 Linux
> /dev/sda2   164270080 167772159   3502080  1.7G  5 Extended
> /dev/sda3   121636864 164270079  42633216 20.3G 83 Linux
> /dev/sda5   164274176 167772159   3497984  1.7G 82 Linux swap / Solaris
>
> output blkid:
>  blkid
> /dev/sr0: UUID="2016-10-20-09-23-51-00" LABEL="GParted-live" TYPE="udf"
> PTUUID="647d59b6" PTTYPE="dos"
> /dev/sda1: UUID="e42240d0-f587-440e-a10d-8c568938fb84" TYPE="ext4"
> PARTUUID="000120c8-01"
> /dev/sda3: LABEL="users" UUID="0554d637-49cf-4981-8630-046385eb8b6c"
> TYPE="ext4" PARTUUID="000120c8-03"
> /dev/sda5: UUID="f9795e42-60d7-48eb-a762-026be37e8e8f" TYPE="swap"
> PARTUUID="000120c8-05"
>
>
> Any idea how to solve this?
>
...Bob



Re: New to iptables

2017-01-04 Thread Bob Weber
While your computer should be protected by a firewall (I use shorewall for
that), maybe you should look at privoxy.  privoxy is a Privacy Enhancing Proxy
that the browser can be set to go through to access web sites.

The privoxy setup for your sand-boxed install would be set to allow access only
to the banking sites by URL and block all others.  That way you don't have to
worry about the IP addresses a bank might have at the time you access it (they
may have multiple addresses for load sharing, for example).  Again, the
sand-boxed install should have a firewall that only lets outgoing requests get
through and blocks all incoming probes.  Shorewall can easily do this for you, so
you won't have to mess with the workings of iptables.
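
A sketch of that allow-only-the-bank policy in privoxy's user.action file (the
bank domain is a placeholder):

# /etc/privoxy/user.action -- block every URL by default...
{ +block{not on the banking whitelist} }
/

# ...then unblock only the bank's own domains:
{ -block }
.mybank.example.com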

Your open install should also use privoxy, with a more open setup that will help
you stay away from malware and ad sites.  The shorewall firewall can be set to
allow incoming access to any servers you might have, like ssh, and let outgoing
requests get through.

If your computer has a processor that supports virtual machines, at least
4GB of RAM, and a spare 20G or so of file space, you could easily install Debian
in a VM and add all the firewall and privoxy rules to get to your banking sites.
KVM/QEMU and virtual machine manager make this process easy.  To get to your
banking sites, you would just spin up the sand-boxed VM.  It would show up in a
separate window and allow you to have all the other stuff you were doing on your
un-sand-boxed host machine still visible.  It might even make more sense to make
the VM be your "dirty" machine, so that if it did get infected you would just
install Debian again.  Or keep a spare copy of the just-installed image file that
the VM runs off of, and simply copy the spare over the messed-up image file and
be back in business in a few minutes.

These are just a few examples of what you can do.  I use VMs all the time mostly
for testing updates before I commit them to my host desktop machine.  One VM
even runs my weather station software 24/7. 

 
*...Bob*
On 01/04/2017 11:54 AM, Richard Owlett wrote:
> I'm searching for an introduction to iptables that leads me to answers to the
> questions *I* have. I've got a flock of links I'm working thru.
>
>
> In the meantime I have a few questions.
>
> One of the links led to _Securing Debian Manual_ and in particular
> "Appendix F - Security update protected by a firewall"
> {https://www.debian.org/doc/manuals/securing-debian-howto/ap-fw-security-update.en.html}
>
>
> I follow the description as far as it goes - i.e. access is limited to a
> specific URL.
> QUESTION 1
> What happens if the URL is not "security.debian.org" but my bank.
> I assume that there is no problem with links within the same domain.
> I DO know however that the site gets information from other sites to handle my
> requests. From what I can follow they are JavaScripts applets(right word) to
> display information. What would happen?
>
> Because of my my uncertainties intend to have a "sandboxed" install. The
> associated partition will have only Debian and the browser.
>
> Question 2
> There will be a separate install of Debian that I will use for "everything
> else". Can the iptables of that install be set to allow access to any domain
> *EXCEPT* my bank's? The goal being minimization of "operator error".
>
> Question 3
> Is there a simple minded tool that I could enter the show in the example in
> "Appendix F".
>
> TIA
>
>
>



Re: debugging KVM connections

2017-01-01 Thread Bob Weber
Windows is another story.  You can't just change video cards, since you might
trigger Windows wanting to be authorized again.


I found logs for each VM in /var/log/libvirt/qemu.  Is that where you were 
looking?



*...Bob*
On 01/01/2017 05:36 PM, Gary Dale wrote:
> On 01/01/17 05:18 PM, Bob Weber wrote:
>>
>> I've had a similar problem after an xorg update on a VM running with the QXL
>> video display.   I downgrade the xorg packages and all is well.  If you are
>> not running X then this may not help you.  You might try another video
>> display like VMVGA.
>>
>> Have you tried ssh into the VM after the viewer closes?
>>
>> *...Bob*
>
> It's a Windows 7 virtual machine. That precludes a lot of what I could try.
>
> Any idea which logs might contain some mention of what went wrong? The Libvirt
> ones on both the local and remote machines have nothing around the time I'm
> connecting.
>



Re: debugging KVM connections

2017-01-01 Thread Bob Weber
I've had a similar problem after an xorg update on a VM running with the QXL
video display.   I downgrade the xorg packages and all is well.  If you are not
running X then this may not help you.  You might try another video display like
VMVGA.

Have you tried ssh into the VM after the viewer closes?

*...Bob*
On 01/01/2017 05:05 PM, Gary Dale wrote:
> I'm trying to track down a problem I'm having connecting to a remote virtual
> machine (that may or may not actually work). I have the router directing ssh
> traffic to the actual machine the virtual machine is running on.
>
> Virt-manager is able to make the connection to the remote machine and shows
> the virtual machine. I can open the virtual machine but when I run it, the
> viewer closes. The virt-manager shows the virtual machine is running but I
> can't connect to it or shut it down. I can force a shutdown, that's all.
>
> When I check the logs on my local machine and on the remote machine, I can't
> find anything related to the virtual machine or the viewer shutdown.
>
> Any ideas on how I proceed?
>
>



Re: Distinguishing between hardware, software, and operator induced symptoms

2016-11-18 Thread Bob Weber
Try system-rescue-cd at http://www.system-rescue-cd.org.  It has a full set of
Linux commands (boots several versions of Linux kernels) and some hardware
diagnostics including memtest86 you mentioned.  It is a full Linux OS even with
a graphical mode (run startx or select on main menu) to run a browser and other
graphical tools. 

One of the tools is a disk scan and repair utility.  It even has a remap mode to
remap bad sectors.

I have used this disk (burned to CD or put in a flash drive) for years repairing
my Linux or Windows OS's (mostly XP).  It has a grub boot utility to help repair
grub installations.

The ultimate disk repair utility is spinrite (at grc.com).  It is an $89 pay
program, but when you need data off a failing drive it's worth it.  There are many
testimonials about how a Windows installation can be made bootable again after
running a spinrite scan.

*...Bob*
On 11/18/2016 09:25 AM, Richard Owlett wrote:
> As noted in the "Invoking ddrescue" thread
> [https://lists.debian.org/debian-user/2016/11/msg00641.html], my laptop
> [dedicated to educational/experimental projects which could fail
> spectacularly] used to apparently successfully run ddrescue, malfunctioned.
>
> <*BACKGROUND*>
> The laptop is a used Lenovo R61 running Debian 8.6.0 with MATE D.E. installed
> from purchased set of DVDs. The "damaged" and destination drives were
> connected by separate USB adapters [each powered by separate wall warts].
>
> Sequence of events:
> A. Run ddrescue
>1. Power on laptop, responding "root" at login prompt.
>2. To force predictable /dev/sdX assignments, sequentially connect 
> destination
>   and "damaged" drives.
>3. Apparently run ddrescue to a successful conclusion.
>4. Disconnect "damaged" drive.
>5. Power down for the night.
> B. Setup to extract data in useful format from the rescued partitions
>[There are missing details as I report from memory, log files do *NOT* 
> exist]
>1. Power up sequence fails to run successfully
>   a. systemd reports it's checking a partition with mounting problems
>  [it is the same message as when the UUID of the swap partition does
>   not have the expected value.] I
>   b. I notice that wall wart for the destination drive is unplugged.
>  I power down, plug it in, power up laptop.
>   c. systemd again reports mounting problem. I allow sequence to continue.
>  I never receive login screen and assume fatal error related USB 
> drive.
>   d. Power down, disconnect USB drive, attempt reboot from scratch.
>  i. Don't recall if systemd complained that USB was not present.
> [It is mentioned in /etc/fstab -- see previous thread.]
> ii. Boot sequence appears to run to point where appearance of login
> screen expected. It does not. I'm not able to glean useful
> information
> from log files.
> 2. Decide to reinstall as there is no valuable data on the hard-drive.
>a. Neither the Install nor Live DVD's will load.
>b. On a second machine dd the Install DVD to a flash drive.
>   It installs as expected.
> < end *BACKGROUND*>
>
>
>
>
> QUESTIONS:
>
> A. Hardware diagnostics [as CD has already proved unreliable]
>1. Memory - I have memtest86+, will have to put it on flash drive
>2. Hard disk - Somewhere I have a Seagate specific diagnostic which has
>   proved useful on non-Seagate drives. Is there a recommended more
>   generic diagnostic?
>3. Are there recommended [for want of a better term] board level 
> diagnostics
>   that do not depend on an OS already being installed?
> B. OS integrity checks
>I would assume that being able to login without noticing any thing is a
>fairly good check. However, I have experience symptoms which may have 
> multiple
>unrelated causes. Is there a suggested system integrity check?
> C. Other
>I have a typical collection of diagnostic CD/DVDs from which I can create
>equivalent iso files. I have a vague recollection of a procedure to put 
> GRUB
>[or was it LILO?] on a bootable flash drive with multiple iso files and 
> being
>able to choose which to boot. Ring any bells? Suggested search terms?
>
> TIA
>
>
>



Re: Recommendation: Backup system

2016-10-02 Thread Bob Weber
On 10/02/2016 08:50 AM, rhkra...@gmail.com wrote:
>> Am 01.10.2016 um 23:06 schrieb Bob Weber:
>>> Like I said backuppc uses incremental and full backups.  The web
>>> interface lets you browse any backup (inc or full) and you see all the
>>> files backed up.  I set the incremental for each day up to a week.  So I
>>> have up to 7 of them.  The full can kept for for however long you want.
>>> I currently keep 12 weekly, 8 bi-weekly and 4 monthly full backups so
>>> that covers almost a year.
> I am not the op, but backuppc sounded pretty nice, so yesterday I tried to 
> install it on both my Wheezy and Jessie systems.  It didn't work (with 
> different failures) on either system--I won't give much detail for now, but 
> I'd 
> just ask a few questions:
>
>* what system (what version of Debian) are you using?
>
>* should I expect that it will properly configure a web server (on the 
> Wheezy system it talked about apache2, iirc), or must I have a properly 
> configured web server before installing backuppc?
>
> Some cryptic notes on the failures:
>
> On wheezy, I thought the installation completed successfully--it ran 
> something 
> it called a script, and, in or after the script it gave me a url to log in to 
> manage backuppc along with a username and password.  When I tried to go to 
> that URL, using either http or https on either of my browsers, it gave me a 
> 404 error.
>
> On jessie, it apparently did not complete the installation, it told me it 
> could not run that initial script.
>
> Suggestions?
>
>

I am running stable 8.5 on my backup machine.  It is an i386 install.  Backuppc is
version backuppc/stable,now 3.3.0-2 i386 [installed].

It uses apache2 (don't know if it will use other web servers) and has put the
following in the apache /etc/apache2/conf-enabled/backuppc.conf file:

Alias /backuppc /usr/share/backuppc/cgi-bin/

<Directory /usr/share/backuppc/cgi-bin/>
        AllowOverride None
        Allow from all

        # Uncomment the line below to ensure that nobody can sniff important
        # info from network traffic during editing of the BackupPC config or
        # when browsing/restoring backups.
        # Requires that you have your webserver set up for SSL (https) access.
        #SSLRequireSSL

        Options ExecCGI FollowSymlinks
        AddHandler cgi-script .cgi
        DirectoryIndex index.cgi

        AuthUserFile /etc/backuppc/htpasswd
        AuthType basic
        AuthName "BackupPC admin"
        require valid-user
</Directory>


So I would assume that there should be a working install of apache2 before
backuppc is installed.  The only install problem I remember (this was quite a
while ago) was that I wanted the backups put in a directory mounted on a
separate partition.  Even though there is a setting for the backup directory in
backuppc, the directory is hard-coded to "/var/lib/backuppc" in some of the
installed backuppc programs.  So to use another location you have to symbolically
link /var/lib/backuppc to that directory before install (or mount the partition
on /var/lib/backuppc).  So if you want to use another directory, delete
/var/lib/backuppc and make the link (ln -s /somewhere-else /var/lib/backuppc
or mount /dev/sdxx /var/lib/backuppc).  Then run "apt-get install
--reinstall backuppc" and hopefully things will be set up correctly at that new
location.  Since I access the backup server on my local net I don't use https.
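
A minimal sketch of that relocation (the new location /srv/backuppc is
illustrative):

# before (re)installing backuppc; this removes any existing pool, so only
# do it on a fresh setup or after copying the data elsewhere
rm -rf /var/lib/backuppc
mkdir -p /srv/backuppc
chown backuppc:backuppc /srv/backuppc
ln -s /srv/backuppc /var/lib/backuppc
apt-get install --reinstall backuppc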

I use rsync over ssh to connect to Linux servers, so the data transferred goes
over a secure tunnel and my backup of a remote VM is secure.  The backuppc user
should have a public key that is placed in the authorized_keys file of the clients
that are to be backed up.  http://backuppc.sourceforge.net/faq/ssh.html explains
this procedure.
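
A minimal sketch of that key setup (the client name is hypothetical; the FAQ
above is the authoritative reference):

# on the backup server, as the backuppc user
ssh-keygen -t rsa                     # accept the default key path
ssh-copy-id root@client.example.com   # appends the key to root's authorized_keys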

Hope this helps.  I have run backuppc for over 10 years at several locations where
I have worked and at home.  It just seems to run.

...Bob



Re: Recommendation: Backup system

2016-10-01 Thread Bob Weber
Like I said, backuppc uses incremental and full backups.  The web interface lets
you browse any backup (inc or full) and you see all the files backed up.  I set
the incremental for each day up to a week, so I have up to 7 of them.  The full
can be kept for however long you want.  I currently keep 12 weekly, 8 bi-weekly
and 4 monthly full backups, so that covers almost a year.

There is another solution you might like called rsnapshot.  I use it to back up
just my root directory on my desktop before I do updates.  That way if something
goes wrong I can boot into a rescue cd and restore the system to the state
before the update.  I just can't afford to have my desktop break.  rsnapshot
uses rsync, so it can back up any computer that has rsync.  It uses hard links so
duplicate files are only stored once.  You specify how many backups you want to
keep, and rsnapshot deletes older ones over that max before adding the new one.
That way you always have backups (assuming you set the count greater than 1)
that will be there even if there is a transfer error.  This is similar to your
script but is much more versatile.
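
A minimal sketch of what that looks like in /etc/rsnapshot.conf (paths
illustrative; the fields must be separated by real tabs):

snapshot_root   /home/snapshots/
retain  daily   7
backup  /       localhost/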


*...Bob*
On 10/01/2016 04:42 PM, mo wrote:
> Maybe this is a little OT, but what kind of backup strategy would you guys
> recommend? (Any advice Gene? :) )
>
>



Re: Recommendation: Backup system

2016-10-01 Thread Bob Weber
I use backuppc.  It is web-browser based for setup and usage.  It takes incremental
and full backups that can remain as long as you want or have space for.  You can
browse files by name or in a version mode where you can see the date when a
file changed and restore an earlier version if you want (or restore to a separate
download directory).  It compresses files to save space and only keeps one copy of
a file's data if it is located in different directories or servers (using hard
links as needed).  It can even back up user data for Windows users (Samba).  I
use the rsync transfer for Linux machines and even with Windows running Cygwin.

I currently back up 8 computers, going back almost 1 year.  I even back up a VM at
Digital Ocean.  Backuppc reports this:

144 full backups of total size 8951.56GB (prior to pooling and compression),
57 incr backups of total size 57.13GB (prior to pooling and compression).
Pool is 358.94GB comprising 1903010 files and 4369 directories (as of 10/1 01:09).

So 8951GB is compressed or pooled into just 358GB!

*...Bob*
On 10/01/2016 05:37 AM, mo wrote:
> Hi Debian users :)
>
> Information:
> Distributor ID: Debian
> Description:    Debian GNU/Linux 8.6 (jessie)
> Release:        8.6
> Codename:       jessie
>
> As the title say i'm in search for a backup application/system.
> Currently I manage my backups with a little script that I wrote... but it does
> not really serve my needs anymore.
> I want to be able to make backups on my main PC and also on my server, the
> backups i would then store on my NAS.
>
> Make a long story short:
> Do you guys have a recommendation for me?
> Is there a specific application you use for your backups?
>
> Btw: I don't mind configuring or playing around with new applications, every
> recommendation is welcome ;)
>
>
> Here is my current backup script (Which is run by cron daily):
> #!/bin/bash
>
> TO_BACKUP="/home /etc /var/log"
> BACKUP_DIR="/var/backup"
> BACKUP_ARCHIVE="backup-`date +%d_%m_%Y-%H:%M`.tar"
> TAR_OPTIONS='-cpf'
>
> delete_old_backup() {
> # note: a plain [ -f glob ] test breaks once more than one archive
> # matches, so just let rm -f handle the no-match case quietly
> rm -f ${BACKUP_DIR}/backup*.tar
> }
>
> create_new_backup() {
> tar $TAR_OPTIONS ${BACKUP_DIR}/$BACKUP_ARCHIVE $TO_BACKUP
> }
>
> main() {
> delete_old_backup
> create_new_backup
> }
>
> main
>
> Greets
> mo
>
>



Re: (OT kinda) Newly-discovered TCP flaw

2016-08-11 Thread Bob Weber
The way to do it is to put the line:

net.ipv4.tcp_challenge_ack_limit = 999999999

in a file in the /etc/sysctl.d directory named xxx.conf (replace xxx with your
preferred name).

Then run "sysctl -p xxx.conf" and the new value is installed in the kernel
tree.  My system had a value of 100 before I changed it.  At boot the file will
be read so the new value will be used then also.


Check what is there now with:

sysctl -a | grep net.ipv4.tcp_challenge_ack_limit

You should see 999999999 after you run the sysctl -p command.

When the kernel is fixed you just need to delete the xxx.conf file, and on the
next boot the default value will be used.
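
Putting it all together, a minimal sketch (the file name is illustrative):

echo 'net.ipv4.tcp_challenge_ack_limit = 999999999' > /etc/sysctl.d/99-challenge-ack.conf
sysctl -p /etc/sysctl.d/99-challenge-ack.conf
sysctl net.ipv4.tcp_challenge_ack_limit   # confirm the new value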

...Bob



Re: bluetooth headset connected but no sound

2016-07-10 Thread Bob Weber
Have you configured pavucontrol?  Under the "Output Devices" tab you should see
the BT headphones and the other audio devices in your system.  If you don't see
the BT device (and even if you do), go to the "Configuration" tab and make sure
the BT device is in hi-fi playback mode (A2DP) and not Off.  Now start
audacious and play something.  Go to the "Playback" tab in pavucontrol and see
what device audacious is playing on.  Assuming you have more than one playback
device configured, just click on the button next to audacious and select the BT
device from the drop-down list.  Make sure the volume control is up near 100%.
Go to the "Output Devices" tab and make sure the volume is up on the BT device.
Hopefully you are now listening to what audacious is playing.
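
The same move can also be done from a terminal with pactl (the stream index
and sink name below are illustrative; list yours first):

pactl list short sink-inputs   # find audacious's stream index
pactl list short sinks         # find the bluez sink name
pactl move-sink-input 7 bluez_sink.20_15_11_20_39_93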

Hope this helps.  I have been using BT devices for several years now in KDE. 

*...Bob*
On 07/10/2016 12:45 PM, Bob wrote:
> Hi list,
>
> I have connected BT headset to my debian box following the wiki
> https://wiki.debian.org/BluetoothUser/a2dp
>
> blueman applet successfully connects the headset and being connected the BT
> headset also announces the same.
>
> 
> Jul 10 16:41:05 localhost bluetoothd[21780]:
> /org/bluez/hci0/dev_20_15_11_20_39_93/fd14: fd(20) ready
> Jul 10 16:41:05 localhost kernel: [27291.168299] input: 20:15:11:20:39:93 as
> /devices/virtual/input/input33
> ~
>
> Through blueman I have selected Audio profile as A2DP sink. pavucontrol also
> shows the device. Till now all are as expected.
>
> But audacious doesn't send music to BT head set. Have I forgot anything to
> set/configure ?
>
> Any help is much appreciated,
> Bob
>
>



Re: Plasma5: After Update right click on desktop not working any more

2016-07-06 Thread Bob Weber
Brad

Do you know what is holding up the libkf5... transition to 5.23 in testing?  I
see there are some architectures that have build problems.  Will that hold up
amd64?  About half of my libkf5 packages are at 5.22 and the other half at 5.23.



*...Bob*
On 07/06/2016 04:58 PM, Brad Rogers wrote:
> On Wed, 06 Jul 2016 21:28:03 +0200
> Hans  wrote:
>
> Hello Hans,
>
>> Hi Brad,
>>> Can't find that package anywhere in Debian.  Do you, perhaps, mean
>>> libkf5plasma5?
>> Yes, sorry,. was a typo.
> Easily done;  I've done it myself many times.
>
> {snip}
>> P.S. I suppose, you are involved, so please tell the other developers
> Only in that I've been reading the relevant thread.
>
>> of my results. :) Would be nice!  
> Consider it done.  Also, I've updated libkf5plasma5 here.  I can confirm
> your findings.  Thanks for taking the time to pin down the culprit.
>



Re: Plasma5: After Update right click on desktop not working any more

2016-07-06 Thread Bob Weber
I installed just libkf5plasma5 (5.23) in my 4 test VMs running Debian testing
and confirmed that it appears to have fixed the right-click problem.  The test
VMs were upgraded this morning.

I just downloaded the libkf5plasma5 deb file and installed it with dpkg -i.
This kept me from adding unstable to the sources.list file and possibly
installing some unstable packages that would break other things.
https://packages.debian.org can be very helpful.
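
A minimal sketch of that kind of pinpoint install (the URL and version are
illustrative; get the real link from packages.debian.org):

wget http://ftp.debian.org/debian/pool/main/p/plasma-framework/libkf5plasma5_5.23.0-1_amd64.deb
dpkg -i libkf5plasma5_5.23.0-1_amd64.deb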



*...Bob*
On 07/06/2016 04:58 PM, Brad Rogers wrote:
> On Wed, 06 Jul 2016 21:28:03 +0200
> Hans  wrote:
>
> Hello Hans,
>
>> Hi Brad,
>>> Can't find that package anywhere in Debian.  Do you, perhaps, mean
>>> libkf5plasma5?
>> Yes, sorry,. was a typo.
> Easily done;  I've done it myself many times.
>
> {snip}
>> P.S. I suppose, you are involved, so please tell the other developers
> Only in that I've been reading the relevant thread.
>
>> of my results. :) Would be nice!  
> Consider it done.  Also, I've updated libkf5plasma5 here.  I can confirm
> your findings.  Thanks for taking the time to pin down the culprit.
>



Re: Plasma5: After Update right click on desktop not working any more

2016-07-04 Thread Bob Weber
I upgraded my testing system this morning and also lost the right-button
function on the desktop.  However, I also lost the right-click function on the
KDE menu button.  I had to create a desktop icon for kmenuedit to edit the KDE
menu.  Left-clicking the activities button in the upper left corner of the
screen (default position?) still brings up options for widgets and desktop
settings, so those functions are still available.


*...Bob*



Re: Recovering data from a Raid 1Sata HD

2015-12-23 Thread Bob Weber
Yes, you can mount the single raid1 disk.  Your wheezy system should have
recognized your drive as an md device when booted.  Run the command 'cat
/proc/mdstat'.  It will show you the current md devices and their status.  This
is what I get after connecting a drive to my system over a USB connection.  It
added drives /dev/sdg[1-3] to my system drive list (ls /dev/sd*).  I had to
assemble the drive partitions into md devices with the commands 'mdadm
--assemble /dev/md5 /dev/sdg1', 'mdadm --assemble /dev/md6 /dev/sdg2' and 'mdadm
--assemble /dev/md7 /dev/sdg3'.

cat /proc/mdstat
Personalities : [raid1]
md7 : active (auto-read-only) raid1 sdg3[3]
  211690544 blocks super 1.2 [3/1] [__U]
 
md6 : active (auto-read-only) raid1 sdg2[3]
  1048564 blocks super 1.2 [3/1] [__U]
 
md5 : active (auto-read-only) raid1 sdg1[3]
  31456184 blocks super 1.2 [3/1] [__U]
 
md2 : active raid1 sda3[3] sdb3[5]
  938881600 blocks super 1.2 [2/2] [UU]
 
md1 : active raid1 sda2[3] sdb2[5]
  4192192 blocks super 1.2 [2/2] [UU]
 
md0 : active raid1 sda1[0] sdb1[1]
  33554368 blocks [2/2] [UU]
 
unused devices: <none>

Where my original system devices are md0, md1 and md2.  You will also see that
this is a degraded array with the notation [3/1] [__U] which means that the
original array had 3 drives and now has one.

If you have just booted the system with the drive connected, it may have
assembled the md devices for you.  Look for the drive partitions of the old
drive and you will see the md devices your system has assigned to them.  It might
be something like /dev/md127, /dev/md126 and so on, depending on how many
partitions the drive has.
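
For example, lsblk gives a quick overview (your device names will differ):

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # shows sdX partitions and any mdN assembled on them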

Now mount the array with 'mount /dev/md5 /mnt/src' after first creating the
directory /mnt/src.  Of course your device names and drive names will be
different.  If the original array device was a swap partition (as my md6 above
is) it will not mount.  Once you are finished and have unmounted the array, you
should stop the array(s) with the command 'mdadm --stop /dev/md[5-7]'.  Warning:
of course, be careful not to try to stop the arrays your system is using.  I
think mdadm will warn you that an array is in use and cannot be stopped.


Hope this helps.

...Bob



Re: after update kde (plasma) will not start any more

2015-10-28 Thread Bob Weber
On 10/28/2015 03:46 PM, Hans wrote:
> On Wednesday, 28 October 2015 at 12:53:05, Bob Weber wrote:
>> Try this to see if it works:
> Hi Bob,
>> The solution is to downgrade libqt5x11extras5 to
>> libqt5x11extras5_5.4.2-2+b1_amd64 (or 385 if your system is 32bit).
>>
>> You should find libqt5x11extras5_5.4.2-2+b1_amd64.deb in the
>> /var/cache/apt/archives directory if you haven't run "apt-get clean"
>> recently. Mine was dated Aug 9.  Use "dpkg -i
>> libqt5x11extras5_5.4.2-2+b1_amd64.deb" to install it.  dpkg will notify you
>> that it is downgrading the package.
> yes, I already suspected this lib, but that version disappeared from the
> official repo. However, I found it on the internet. Installed it, and it worked.
>
> But as I suggested: the latest prior version should stay in the official repo
> for some time, just to be able to revert back.
>
>>
>>
>> *...Bob*
> Thanks anyway
>
> Best 
>
> Hans


I have been burned too many times running testing so I have developed the
following procedures to make things a little safer.

I use approx on my internet router/server machine to serve Debian packages to
all the other machines on my network.  That way I have a copy of every package
that has been installed since I started using approx.  I have disabled the cron
job for approx that cleans out older packages, so nothing it has fetched ever
gets lost.  That way I can use "apt-get clean" on each machine without worrying
that I will lose older packages I might still need.
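
A minimal sketch of that arrangement (the server address is illustrative;
approx listens on port 9999 by default):

# /etc/approx/approx.conf on the router
debian  http://deb.debian.org/debian

# /etc/apt/sources.list on each client
deb http://192.168.1.1:9999/debian testing main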

Besides daily backups using backuppc on a dedicated server, I also use rsnapshot
to make a snapshot of my root file system before I do an upgrade ... just in
case (home is on another file system, which is where these snapshots go).  If I
miss some problem, a quick reboot into SystemRescueCd and I copy the snapshot
back to the root file system so I can get going again!
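
The restore itself is just an rsync copy-back (mount points illustrative):

# booted from the rescue CD, with the snapshot and the root fs both mounted
rsync -aAXH --delete /mnt/home/snapshots/daily.0/ /mnt/root/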

I also upgrade 3 VMs running testing first, so I will know of any problems
before I upgrade my main machine.  I caught the libqt5x11extras5 problem that
way: I had the black screen of death on a VM, and googled for a solution to the
problem after seeing sddm crashing with a segfault in libxcb.so in syslog.

...Bob





Re: after update kde (plasma) will not start any more

2015-10-28 Thread Bob Weber
Try this to see if it works:

The solution is to downgrade libqt5x11extras5 to
libqt5x11extras5_5.4.2-2+b1_amd64 (or the i386 build if your system is 32-bit).

You should find libqt5x11extras5_5.4.2-2+b1_amd64.deb in the
/var/cache/apt/archives directory if you haven't run "apt-get clean" recently.  
Mine was dated Aug 9.  Use "dpkg -i libqt5x11extras5_5.4.2-2+b1_amd64.deb" to
install it.  dpkg will notify you that it is downgrading the package.



*...Bob*



Re: Running HAL and udev Simultanteouly

2015-09-15 Thread Bob Weber
I installed hal from Debian in a VM with Debian testing amd64.  Chrome beta was a
no-go with some error code; it won't even start hald.  Iceweasel plays the splash
screen and looks like it wants to play the program, but the screen stays blank,
and this time hald is running.  I can move the mouse along the timeline and see
previews, but no more.  I have found that the 3 main networks will play in Chrome
beta without hal on my desktop amd64 testing system.

*...Bob*
On 09/15/2015 05:43 PM, Bartek wrote:
> Still searching for a fix to stream TV via Hulu.  Worked fine with
> Chrome/pepperflash and Iceweasal/flash until about mid-July, around the
> release time of Windows 10. Hulu indicates that with Linux, HAL is
> needed. My system is custom: 64-bit Wheezy, no desktop, just Openbox WM
> and LXPanel, and udev.
>
> Wondering if installing HAL as Hulu suggested would cause problems? I
> have doubts that it will work anyway.
>
> Any advice or fixes appreciated.
>
> Thanks.
>
> B
>
>



Re: Me vs. Qemu

2015-08-20 Thread Bob Weber


On 08/20/2015 11:26 AM, Bob Bernstein wrote:
> I am back to the list to submit virt progress (or lack thereof) reports.
>
> I have what appears to be a working OpenBSD vm running in a very little tiny
> qemu window (have yet to launch X) on my Jessie.
>
> Please review the following steps I took. They may be flawed so much as to
> account for the _lack of networking_ on the openbsd instance.
>
> (I chose the install option of "dhcp" rather than
> give the vm a static ipv4 address That choice, dhcp,
> works when I use it with my Vmware Player on Windows 7
> and the same OpenBSD iso image. More later...)
>
> I sorta guessed my way thru some of this stuff:
>
> To make the raw disk:
> 1. # qemu-img create myimage.img 12G
>
> To make the vm:
> 2. # qemu-system-x86_64 -boot d -cdrom cd57.iso myimage.img
>
> To launch my new vm:
> 3. # qemu-system-x86_64 myimage.img
>
>
> Comments/corrections/excoriations are welcome!
>
If you run a GUI then Virtual Machine Manager makes this all a snap.  VMM
manages guest networks, guest storage and guests through libvirtd.  Just open a
guest window and click run.  That's all there is to it.  Of course you have to
configure the guest... things like CPU cores, memory, network and display
adapters.
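
For a command-line equivalent, a minimal sketch with virt-install (name and
sizes illustrative):

virt-install --name openbsd --ram 1024 --disk size=12 --cdrom cd57.iso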

...bob



Re: Virtual noobie

2015-08-18 Thread Bob Weber



*...Bob*
On 08/18/2015 07:43 PM, doug wrote:
>
> On 08/18/2015 07:17 PM, Bob Weber wrote:
>>
>>>
> /snip/
>>>
>> I also use the kvm/qemu packages.  There is also a GUI that makes setting up
>> and running VMs very easy.  Its Virtual Machine Manager and it uses libvirtd
>> to manage machines.  I have also at one time installed bsd and it ran fine. 
>> You can also try out different live cd by just downloading the ISO and
>> connecting it to a guest in Virtual Machine Manager.  Installs are done this
>> way also.
>>
>> Have fun!
>>
>> ...Bob
> somewhat off topic, I guess, but, question:
>
> I tried some time ago to install a BSD but it wanted something other than an
> ntfs or extx file system, and I had to make a special partition for it that
> nothing else could read.  I finally decided it was too different, but if you
> install a BSD under a VM, what does it do about the file system?
>
> --doug

The OS that is running as a guest is responsible for the file system it uses.
There is a file on the host, created with qemu-img, that is presented to the
guest as a hard disk and is totally isolated from the host.  Therefore, the host
doesn't need to mount the guest file system, so the guest is free to partition
and format the "disk" as it sees fit.  If the file system in the image file is
known by the host, it can be mounted and viewed by the host when the guest is
not running.  The commands for this are (on debian testing anyway):

# on the host run...
# install the nbd module
modprobe nbd max_part=8

# connect the image (replace <image>.img with your image file)
qemu-nbd --connect=/dev/nbd0 <image>.img

# mount partition one
mount /dev/nbd0p1 /mnt/foo

Make sure you unmount the partition and disconnect the /dev/nbd... device before
running the guest again.  Things get messed up badly if you forget this part.
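
For completeness, the teardown (same names as above):

umount /mnt/foo
qemu-nbd --disconnect /dev/nbd0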

When I decided to go with KVM, VirtualBox had been taken over by Oracle.  I was
worried that it would become proprietary or inhibited in some way.  I remember
an earlier Sun version wouldn't allow you to connect host USB devices ... only
commercial versions allowed this.  With qemu/kvm I have hot-connected a host USB
Bluetooth device to Debian and Win10 guests and been able to play music through
external Bluetooth headphones from the guest!  That's quite an achievement if
you have ever played with Bluetooth audio devices.

...bob


Re: Virtual noobie

2015-08-18 Thread Bob Weber

On 08/18/2015 06:06 PM, Bob Bernstein wrote:
> On Tue, 18 Aug 2015, Daniel Bareiro wrote:
>
>> # aptitude install qemu-kvm
>
> I'll give it a shot. Don't be surprised if you see me pop up here again!
>
>> # egrep '(vmx|svm)' --color=always /proc/cpuinfo
>
> This shows two instances of 'svm', in red. Is that what I'm looking for?
>
>> You should also check that virtualization is enabled in the BIOS.
>
> Probably I did not know what to look for, but I did not see anything in the
> bios that mentioned "virtual" in any context.
>
>
>> Namaste also for you :)
>
> Thank you. With the devil dancing all over this stupid planet the occasional
> blessing cannot possibly be misplaced.
>
>
I also use the kvm/qemu packages.  There is also a GUI that makes setting up and
running VMs very easy.  It's Virtual Machine Manager, and it uses libvirtd to
manage machines.  I have also at one time installed BSD and it ran fine.  You
can also try out different live CDs by just downloading the ISO and connecting
it to a guest in Virtual Machine Manager.  Installs are done this way also.

Have fun!

...Bob


Re: chromium >= 37.0.2062.117 on i386

2014-09-25 Thread Bob Weber
On Thu, Sep 25, 2014 at 09:33:50PM +0200, Sven Joachim wrote:
>> On 2014-09-25 21:10 +0200, Alex Andreotti wrote:
>>
>>> Hello, 
>>>
>>> is there a way to get a .deb of chromium >= 37.0.2062.117 on arch
>>> i386? (unstable/sid)
>>> (if need to be built it is ok if there's a debian/rules)
>> Since the package FTBFS on the buildd, your only chance is to build from
>> source.  Which is likely not to work, but at least you can try.
>>
>>> The cpu is an amd64 (with sse2 support) but I've the system installed
>>> as i386.
>> You want to have at least 4 GB of RAM and 15 GB disk space free.  The
>> build is likely going to take a few hours, unless it fails early.
>>
>> Good luck,
> I'll give it a try :)
> Thank you

Just install from Google: https://www.google.com/chrome/browser/beta.html

I am using it in testing (64-bit) and in a VM with Debian 7.5 (32-bit), both at
version 38.0.2125.77 beta.  It even comes with pepperflash version 15.0.0.152.

...bob




