Re: sendmail without DNS

2024-07-21 Thread Adam Weremczuk

Thanks for pointing that out.

I've noticed that installing the sendmail package removed postfix and 
vice versa.


That made me think these two were mutually exclusive.

After reinstalling postfix, logwatch suddenly started sending emails so 
everything is now working as expected.


---
Adam


On 21/07/2024 14:23, Greg Wooledge wrote:


Blimey.  You are COMPLETELY confused, aren't you.

If postfix (the package named "postfix") is installed, and if sendmail
(the package named "sendmail") is NOT installed, then you are using
Postfix to send mail.

Part of the postfix package is a /usr/sbin/sendmail program which
implements the command line interface for local programs to send mail.

EVERY MTA has to implement the /usr/sbin/sendmail program.

Including Postfix.

If you're running Postfix (*not* Sendmail) as your MTA, and if you've
got it configured how you want it, then you are DONE.  You don't need
to ask us how to configure Sendmail to do the same thing, because you're
not USING Sendmail.





Re: sendmail without DNS

2024-07-21 Thread Adam Weremczuk

Let me rephrase my question, which should be easier to answer.

What exactly should I substitute for:

mailer = "/usr/sbin/sendmail -t"

in /usr/share/logwatch/default.conf/logwatch.conf

to make logwatch use postfix (already working without DNS) instead of 
sendmail?
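(For what it's worth, a sketch of the likely answer, assuming the stock Debian logwatch defaults: with the postfix package installed, /usr/sbin/sendmail is Postfix's own compatibility interface, so no substitution should be needed.)

```
# /usr/share/logwatch/default.conf/logwatch.conf
# With postfix installed, /usr/sbin/sendmail IS Postfix's sendmail-compatible
# interface, so the default line already routes mail through Postfix:
mailer = "/usr/sbin/sendmail -t"
```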



On 21/07/2024 08:08, Jeff Pang wrote:

Sendmail is too old to be supported.
You may use postfix or exim instead. They are the mainstream MTA 
software today.




sendmail without DNS

2024-07-21 Thread Adam Weremczuk

This is in a way a continuation of my recent "purely local DNS" thread.

To recap: my objective is to send emails to a single domain, with DNS 
and all other email traffic disabled.


A simple working solution that I've found for Postfix is:

/etc/hosts
1.2.3.4 example.com

/etc/postfix/main.cf
smtp_dns_support_level = disabled
smtp_host_lookup = native

Now I'm trying to achieve the same thing for Sendmail to no avail.

So far I've tried:

- the above /etc/hosts entry

- DAEMON_OPTIONS(`Port-smtp,Addr=127.0.0.1, Name=MTA')dnl in sendmail.mc, 
followed by m4 sendmail.mc > sendmail.cf


- /etc/mail/mailertable
example.com esmtp:[1.2.3.4]
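(A sketch of what is usually also required for the mailertable to take effect, assuming the standard m4 configuration mechanism; the paths are the Debian defaults:)

```
# /etc/mail/sendmail.mc — the mailertable is only consulted if the
# feature is enabled before sendmail.cf is rebuilt:
FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl

# and the flat file must be compiled into its .db form:
#   makemap hash /etc/mail/mailertable < /etc/mail/mailertable
```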

1. Has anybody tried this and got it working?

2. What's the best way to engage with Sendmail forums / mailing list?

Both the comp.mail.sendmail and news:comp.mail.sendmail Usenet groups 
appear to be dead.


---
Adam



Re: purely local DNS

2024-07-17 Thread Adam Weremczuk

Thanks for the hint Todd.

I've replaced it with:

smtp_dns_support_level = disabled

and it's still working as expected.

---
Adam

On 15/07/2024 18:49, Todd Zullinger wrote:



It's probably worth noting that `disable_dns_lookups` has
been deprecated for a long time.  The postconf(5) man page
says:

 As of Postfix 2.11, this parameter is deprecated; use
 smtp_dns_support_level instead.

(Debian 12 has postfix-3.7.11; well past postfix-2.11.)

I don't know if `smtp_dns_support_level` is needed at all
with `smtp_host_lookup = native`.  I've never run an MTA
where I wanted DNS lookups disabled, so I don't have any
direct experience.

If it is needed, you'd surely be better off avoiding the
long-deprecated `disable_dns_lookups` parameter which will
just set you up for failure with some future update.





Re: purely local DNS

2024-07-16 Thread Adam Weremczuk
My intention was to send emails to a single domain with any other email 
traffic being disabled.


In order to achieve this I considered smart host, dnsmasq and even bind9.

The 3-liner solution that I've found seems the simplest, least intrusive 
and appears to be working fine.



On 16/07/2024 01:33, Max Nikulin wrote:


I assume that you are not trying to achieve "smart host" configuration 
for sending mail.


Perhaps you can run a dedicated dnsmasq instance with no upstream DNS 
servers. Options that might help: --dns-rr, --mx-host, --mx-target.
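(A hedged sketch of that idea, with option names taken from dnsmasq(8) and hypothetical addresses:)

```
# /etc/dnsmasq.conf — standalone instance, no upstream DNS servers
no-resolv                                  # never forward to upstream DNS
mx-host=example.com,mail.example.com,10    # synthesize an MX record locally
address=/mail.example.com/1.2.3.4          # and an A record for its target
```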







Re: purely local DNS

2024-07-15 Thread Adam Weremczuk

I'm using Postfix and this is all that was needed:

/etc/hosts
1.2.3.4 example.com

/etc/postfix/main.cf
disable_dns_lookups = yes
smtp_host_lookup = native



Re: [Back In Time] Request to update translations before upcoming release

2024-07-15 Thread Adam Sjøgren
c.bu...@posteo.jp writes:

> It would be great if you could help that project and offer Danish
> translations or review them [2] (currently at 57%).

Should be at 100% now:

   "Translation status

   384  Strings 100%
 2,369  Words   100%
14,692  Characters  100%"

 · https://translate.codeberg.org/projects/backintime/common/da/


  Best regards,

Adam

-- 
 "Wiggle, wiggle, wiggle like a ton of lead,Adam Sjøgren
  Wiggle - you can raise the dead" a...@koldfront.dk



Re: purely local DNS

2024-07-15 Thread Adam Weremczuk

I want to achieve the first objective, and the values are static.
I just hoped there was a one-liner hack (like A records in /etc/hosts) 
to achieve this rather than reconfiguring my MTA.



On 15/07/2024 14:33, Greg Wooledge wrote:

On Mon, Jul 15, 2024 at 14:00:03 +0100, Adam Weremczuk wrote:

What I need to configure for my Debian 12 VM:
- no public or LAN DNS whatsoever
- ability to fetch a single MX record for a single domain

I don't think I can add MX to /etc/hosts which only works for A records.

I'm after a similarly simple, "one liner" solution.


I'm *so* confused by this question.  You want to be able to *fetch* an MX
record?  You don't want to configure your MTA in a static way so that
it delivers mail properly for this domain right now?  You need to be able
to *fetch* the MX record in real time in case it changes?

And you want to do that *without* being able to contact the real DNS?

How does one reconcile these goal statements?  It's beyond me.
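(For the static-configuration route suggested above, a minimal Postfix sketch; transport_maps is the standard mechanism, and the addresses are the hypothetical ones from this thread:)

```
# /etc/postfix/transport — route one domain to a fixed host, no MX lookup
# (the [brackets] suppress MX resolution for the nexthop)
example.com    smtp:[1.2.3.4]

# /etc/postfix/main.cf
transport_maps = hash:/etc/postfix/transport

# rebuild the lookup table after editing:
#   postmap /etc/postfix/transport
```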





Re: purely local DNS

2024-07-15 Thread Adam Weremczuk

It doesn't work.

The mail.example.com record doesn't exist to start with.

Even if I add:

1.2.3.4 example.com
5.6.7.8 mail.example.com

to /etc/hosts

I get:

0A032940922 657 Mon Jul 15 14:40:01  user1@mymachine
(Host or domain name not found. Name service error for name=example.com 
type=MX: Host not found, try again)

 us...@example.com


On 15/07/2024 14:17, Jeff Pang wrote:


Say you want to send mail to foo.com, whose MX record is mail.foo.com, 
with IP 1.2.3.4.

Then write this entry in the hosts file:
1.2.3.4  foo.com

That should work for sending mail.

Regards

On 2024-07-15 21:00, Adam Weremczuk wrote:

What I need to configure for my Debian 12 VM:
- no public or LAN DNS whatsoever
- ability to fetch a single MX record for a single domain

I don't think I can add MX to /etc/hosts which only works for A records.

I'm after a similarly simple, "one liner" solution.

---
Adam






purely local DNS

2024-07-15 Thread Adam Weremczuk

What I need to configure for my Debian 12 VM:
- no public or LAN DNS whatsoever
- ability to fetch a single MX record for a single domain

I don't think I can add MX to /etc/hosts which only works for A records.

I'm after a similarly simple, "one liner" solution.
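(A small runnable illustration of why /etc/hosts can't carry an MX record; `getent` and the NSS hosts database are standard, and the lookup below uses only the stock localhost entry:)

```shell
# /etc/hosts maps names to addresses (A/AAAA-style data) and nothing else;
# its format has no field for MX or any other record type.
# getent consults /etc/hosts via NSS, so this works with DNS fully disabled:
getent hosts localhost
```

An MX record therefore has to come from the MTA's own configuration (or a local DNS server) rather than from the hosts file.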

---
Adam



Re: cli64 CPU segfaults

2024-01-30 Thread Adam Weremczuk

Thanks everyone for useful feedback :)

On 29/01/2024 21:05, Gremlin wrote:

On 1/29/24 14:35, Michael Kjörling wrote:
On 29 Jan 2024 19:20 +, from ad...@matrixscience.com (Adam 
Weremczuk):

I have 2 bare metal Debian 12.4 servers with fairly new Intel CPUs and
plenty of memory.

On both, dmesg continuously reports:

(...)
[Mon Jan 29 12:13:00 2024] cli64[1666090]: segfault at 0 ip 0040dd3b 
sp 7ffc2bfba630 error 4 in cli64[40+18a000] likely on CPU 41 (core 
17, socket 0)
(...)


What's cli64? A package search comes up empty for me.

https://packages.debian.org/search?searchon=contents&keywords=cli64&mode=exactfilename&suite=bookworm&arch=any 






https://www.advancedclustering.com/act_kb/what-is-cli64/







cli64 CPU segfaults

2024-01-29 Thread Adam Weremczuk

Hi all,

I have 2 bare metal Debian 12.4 servers with fairly new Intel CPUs and 
plenty of memory.


On both, dmesg continuously reports:

(...)
[Mon Jan 29 12:13:00 2024] cli64[1666090]: segfault at 0 ip 
0040dd3b sp 7ffc2bfba630 error 4 in cli64[40+18a000] 
likely on CPU 41 (core 17, socket 0)
[Mon Jan 29 12:13:00 2024] Code: 48 8b 45 c8 8b 80 cc 00 00 00 48 8b 55 
c8 48 98 0f b6 44 42 4d 0f b6 f0 bf a8 0a 79 00 e8 95 1b 01 00 48 89 45 
f0 48 8b 45 f0 <48> 8b 00 48 83 c0 10 48 8b 00 48 8b 7d f0 be b6 0a 41 
00 ff d0 8b
[Mon Jan 29 12:19:01 2024] cli64[1667727]: segfault at 0 ip 
0040dd3b sp 7ffde94347f0 error 4 in cli64[40+18a000] 
likely on CPU 16 (core 16, socket 0)
[Mon Jan 29 12:19:01 2024] Code: 48 8b 45 c8 8b 80 cc 00 00 00 48 8b 55 
c8 48 98 0f b6 44 42 4d 0f b6 f0 bf a8 0a 79 00 e8 95 1b 01 00 48 89 45 
f0 48 8b 45 f0 <48> 8b 00 48 83 c0 10 48 8b 00 48 8b 7d f0 be b6 0a 41 
00 ff d0 8b
[Mon Jan 29 12:24:02 2024] cli64[1669594]: segfault at 0 ip 
0040dd3b sp 7ffd305bebe0 error 4 in cli64[40+18a000] 
likely on CPU 40 (core 16, socket 0)
[Mon Jan 29 12:24:02 2024] Code: 48 8b 45 c8 8b 80 cc 00 00 00 48 8b 55 
c8 48 98 0f b6 44 42 4d 0f b6 f0 bf a8 0a 79 00 e8 95 1b 01 00 48 89 45 
f0 48 8b 45 f0 <48> 8b 00 48 83 c0 10 48 8b 00 48 8b 7d f0 be b6 0a 41 
00 ff d0 8b
[Mon Jan 29 12:29:03 2024] cli64[1675152]: segfault at 0 ip 
0040dd3b sp 7ffddbe853b0 error 4 in cli64[40+18a000] 
likely on CPU 43 (core 19, socket 0)
[Mon Jan 29 12:29:03 2024] Code: 48 8b 45 c8 8b 80 cc 00 00 00 48 8b 55 
c8 48 98 0f b6 44 42 4d 0f b6 f0 bf a8 0a 79 00 e8 95 1b 01 00 48 89 45 
f0 48 8b 45 f0 <48> 8b 00 48 83 c0 10 48 8b 00 48 8b 7d f0 be b6 0a 41 
00 ff d0 8b

(...)

$ sudo dmesg -T | grep cli64 | wc -l
1349

Other than that, they seem to be running OK.

I don't see it on similar, AMD-powered kit.

Somebody suggested a faulty memory module, or software trying to 
access a restricted part of memory. I'm not convinced.


Any ideas or hints?

Cheers,
Adam


Re: Request for translation of backup application "Back In Time"

2024-01-15 Thread Adam Sjøgren
l ikke blive 
sikkerhedskopieret med mindre du også inkluderer den sti.
 Vil du inkludere det som linket peger på i stedet?

 Failed to create new SSH key in {path}
 Oprettelsen af ny SSH nøgle i {path} mislykkedes

 Full snapshot path: 
 Fuld sti til tilstands-billeder: 

 enabled
 slået til

 disabled
 slået fra

 Restore Settings
 Genddan opsætning

 No config found
 Ingen opsætning fundet

 Options about comparing snapshots
 Valgmuligheder om sammenligning af tilstands-billeder

 Differing snapshots only
 Kun tilstands-billeder der er forskellige

 List only equal snapshots to: 
 Vis kun tilstands-billeder der er ens med: 

 Deep check (more accurate, but slow)
 Grundig sammenligning (mere præcis, men langsom)

 Delete
 Slet

 Select All
 Vælg alle

 Compare
 Sammenlign

 Do you really want to delete {file} in {count} snapshots?
 Vil du virkelig slette {file} i {count} tilstands-billeder?

 This cannot be revoked!
 Dette kan ikke tilbagekaldes!

 WARNING
 ADVARSEL

 Exclude {path} from future snapshots?
 Ekskludér {path} fra tilstands-billeder fremover?



  Best regards,

Adam

-- 
 "Rikstäckande rajtan-tajtan med Anders och Måns"   Adam Sjøgren
   a...@koldfront.dk



systemd-journald log location

2023-12-04 Thread Adam Weremczuk

Hi all,

Is it a good idea to move it from /run/log/journal to e.g. /var/journal-log?

I can't find a suitable option in /etc/systemd/journald.conf

I recently increased SystemMaxUse and RuntimeMaxUse which quickly filled 
up all space.


I didn't realise that our cloud provider decided to image the VM with 
only 100 MB space assigned to /run !


Not easily changeable. Debian 10.13.
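(A sketch of the standard approach: journald moves the journal to /var/log/journal, not to an arbitrary path, when persistent storage is enabled; see journald.conf(5). The size value below is hypothetical.)

```
# /etc/systemd/journald.conf
[Journal]
Storage=persistent    # keep the journal in /var/log/journal instead of /run/log/journal
SystemMaxUse=200M     # cap on-disk usage (value hypothetical)

# then: systemctl restart systemd-journald
# (the location itself is not freely configurable; persistent storage
#  always lives under /var/log/journal)
```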

Regards,
Adam



logrotate failed state

2023-11-29 Thread Adam Weremczuk

Hi all,

Debian 10.13

Is it normal for the logrotate service to go from "inactive (dead)" to 
"failed (Result: exit-code)" every time logrotate.timer 
(active-waiting) kicks in?


E.g. "sudo systemctl restart logrotate" brings it back to "inactive 
(dead)", but only until the next logrotate.timer trigger (midnight in 
my case).


Journal only goes back a couple of hours and the only warning/error I 
can see is:


"Warning: Journal has been rotated since unit was started. Log output is 
incomplete or unavailable."


Both systemd-journald and logrotate.timer run with the default settings.

Regards,
Adam



unexplained crash

2023-11-17 Thread Adam Weremczuk

Hello,

Yesterday SSH on my desktop PC running Ubuntu 20.04 became unresponsive.

The machine was responding to ping, and telnet to port 22 was briefly 
connecting before the connection closed.


The graphical login prompt was visible, but when I tried to log in, it 
threw me to the text console. No login prompt on any virtual consoles 
and Ctrl-Alt-Del did nothing.


There is no information logged in wtmp, kern.log or dmesg, and nothing 
interesting in auth.log.


The last 40 lines from syslog look as below:

Nov 16 17:04:23 ubu20 systemd[1]: session-5089.scope: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1]: Stopping User Manager for UID 0...
Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped target Main User Target.
Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped target Basic System.
Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped target Paths.
Nov 16 17:04:33 ubu20 systemd[1259208]: ubuntu-report.path: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped Pending report trigger 
for Ubuntu Report.

Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped target Sockets.
Nov 16 17:04:33 ubu20 systemd[1259208]: Stopped target Timers.
Nov 16 17:04:33 ubu20 systemd[1259208]: dbus.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed D-Bus User Message Bus 
Socket.

Nov 16 17:04:33 ubu20 systemd[1259208]: dirmngr.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed GnuPG network certificate 
management daemon.

Nov 16 17:04:33 ubu20 systemd[1259208]: gpg-agent-browser.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed GnuPG cryptographic agent 
and passphrase cache (access for web browsers).

Nov 16 17:04:33 ubu20 systemd[1259208]: gpg-agent-extra.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed GnuPG cryptographic agent 
and passphrase cache (restricted).

Nov 16 17:04:33 ubu20 systemd[1259208]: gpg-agent-ssh.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed GnuPG cryptographic agent 
(ssh-agent emulation).

Nov 16 17:04:33 ubu20 systemd[1259208]: gpg-agent.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed GnuPG cryptographic agent 
and passphrase cache.

Nov 16 17:04:33 ubu20 systemd[1259208]: pk-debconf-helper.socket: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed debconf communication socket.
Nov 16 17:04:33 ubu20 systemd[1259208]: snapd.session-agent.socket: 
Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Closed REST API socket for snapd 
user session agent.

Nov 16 17:04:33 ubu20 systemd[1259208]: Reached target Shutdown.
Nov 16 17:04:33 ubu20 systemd[1259208]: systemd-exit.service: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1259208]: Finished Exit the Session.
Nov 16 17:04:33 ubu20 systemd[1259208]: Reached target Exit the Session.
Nov 16 17:04:33 ubu20 systemd[1]: user@0.service: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1]: Stopped User Manager for UID 0.
Nov 16 17:04:33 ubu20 systemd[1]: Stopping User Runtime Directory 
/run/user/0...

Nov 16 17:04:33 ubu20 systemd[2567]: run-user-0.mount: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1294]: run-user-0.mount: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1]: run-user-0.mount: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1]: user-runtime-dir@0.service: Succeeded.
Nov 16 17:04:33 ubu20 systemd[1]: Stopped User Runtime Directory 
/run/user/0.

Nov 16 17:04:33 ubu20 systemd[1]: Removed slice User Slice of UID 0.

Does anybody have an idea what could have happened?

Regards,
Adam



bardzo prosze

2023-06-07 Thread adam poplawski

Good day,

My name is Adam and I am 36 years old. Since childhood I have suffered 
from quadriplegic cerebral palsy, which prevents me from functioning 
normally. Despite this, every day I try to be a cheerful and hopeful 
person with many dreams.

One of my greatest dreams is to be able to lead a normal life. 
Unfortunately, the disease is progressing, and without proper exercise 
my muscles waste away, which worsens the condition of my whole body. 
That is why rehabilitation camps are so important to me; thanks to 
them I can improve my motor coordination, which gives me the greatest 
trouble.

However, the cost of such a camp is an enormous expense for me - one 
course of therapy costs 6000 zloty.

At my age it is already difficult to obtain funding, and PFRON 
subsidizes at most 1600 zloty once every 2 years. That is why I am 
turning to you with a request for financial support. My modest means 
do not allow for regular rehabilitation, and only that gives me a 
chance to improve my health and fulfil my dreams.

My passions are electronic music and photography, but the most 
important thing for me is being able to enjoy life as it is. I know 
that with your help my dream of a normal life can come true. I will be 
immensely grateful for any amount, even the smallest, that you decide 
to donate.

Fundacja Avalon, ul. Domaniewska 50A, 02-672 Warszawa

transfer title: 825 Adam Popławski na pomoc

account no.: 62 1600 1286 0003 0031 8642 6001 (no fee on payments)

CHARITY SMS: just send an SMS to the number 75 165 with the text POMOC 
825. This exact text is important; unfortunately I had no influence on 
its wording. An SMS costs 6.15 zloty gross (VAT included), of which I 
receive 6 zloty.

HOW TO DONATE 1.5% OF YOUR TAX?

By filling in the appropriate box in the annual PIT tax return:

1. KRS NUMBER: 270809

2. The requested amount may not exceed 1%, rounded down to full tens 
of groszy.

3. SUPPLEMENTARY INFORMATION: 825 Popławski Adam

4. Tick "I give my consent"

With respect, Adam

smime.p7s
Description: S/MIME cryptographic signature


wybacz mi

2023-06-07 Thread adam poplawski



Error trying to compile kernel!

2023-02-19 Thread 43i3 Adam
hi, please a need help i never had this kind of error compile my own
kernel. please you can find there the version of kernel and the command
line a use for that. thanks ...

kernel version: linux-6.1.12
os type: 64-bit
processors: intel i5 .2.7
gcc version 10.2.1 20210110 (Debian 10.2.1-6)

the command line: make bindeb-pkg -j"$(nproc)" LOCALVERSION=-"$(dpkg
--print-architecture)" KDEB_PKGVERSION="$(make kernelversion)-1"

The error :
make[5]: *** [scripts/Makefile.build:500: drivers/media] Error 2
make[5]: *** Waiting for unfinished jobs
  CC [M]  drivers/staging/qlge/qlge_ethtool.o
  CC [M]  drivers/staging/qlge/qlge_devlink.o
  LD [M]  drivers/staging/qlge/qlge.o
make[4]: *** [scripts/Makefile.build:500: drivers] Error 2
make[3]: *** [Makefile:1992: .] Error 2
make[2]: *** [debian/rules:7: build-arch] Error 2
dpkg-buildpackage: error: debian/rules binary subprocess returned exit
status 2
make[1]: *** [scripts/Makefile.package:86: bindeb-pkg] Error 2
make: *** [Makefile:1636: bindeb-pkg] Error 2

Thanks ... long live linux. thanks to Sir Linus Torvalds


LTFS in Debian 11

2022-11-14 Thread Adam Weremczuk

Hi,

Has anybody had much success with it?

This is the closest thing that I've managed to find: 
https://github.com/LinearTapeFileSystem/


There's only a Debian 10 version, with no updates for almost 2 years :(

My tape drive is Quantum LTO8 HH SAS External and works pretty well with 
WS 2019.


Regards,
Adam



Re: Bruger og gruppe: mail mangler

2022-11-06 Thread Adam Sjøgren
Flemming writes:

> Precisely these lines are missing on my new VPS. It looks like a bug.
>
> ls -l /var/
>
> gives, among other things:
>
> drwxrwsr-x  2 root 8 4096 Aug 25  2021 mail

Yes, that looks wrong. Did you install Debian yourself?

For fun I just downloaded debian-11.5.0-amd64-netinst.iso and 
installed it on a virtual machine. After installation it looks like 
this:

asjo@debian:~$ ls -ld /var/mail/
drwxrwsr-x 2 root mail 6 Nov  6 17:34 /var/mail/
asjo@debian:~$ grep mail /etc/passwd /etc/group
/etc/passwd:mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
/etc/group:mail:x:8:
asjo@debian:~$ 


  Best regards,

   Adam

-- 
 "The strength to change what I can, the inability  Adam Sjøgren
  to accept what I can't, and the incapacity to tell   a...@koldfront.dk
  the difference."



Re: Fanget i DNS - Nu undsluppet

2022-11-05 Thread Adam Sjøgren
Flemming writes:

> Phew, what a jungle it has become to run your own mail server! It
> took a certain stubbornness to hack my way through the jungle. But
> many thanks for your help!!!

Lars Ingebrigtsen - one of the GNU Emacs maintainers - has written a
script that solves the problem:

 "So You Want To Run Your Own Mail Server…"
  · 
https://lars.ingebrigtsen.no/2020/03/25/so-you-want-to-run-your-own-mail-server/

> quickdns.dk - the nameserver was not accepted by dk-hostmaster.dk

That sounds odd.


Good to hear you found a combination that works for you!


  Best regards,

   Adam

-- 
 "Check my heart and respiration    Adam Sjøgren
  I feel I look a little pale  a...@koldfront.dk
  I just missed another station
  I'm stepping off the carousel"



Re: Fanget i DNS

2022-11-03 Thread Adam Sjøgren
Flemming writes:

> I have noticed that there are Danish alternatives with free DNS. Are
> there any you can recommend?

There is QuickDNS: https://www.quickdns.dk/ - I believe they don't
support DNSSEC, which was one of the reasons I keep a minimal virtual
server at Linode and at Digital Ocean, which I use together with my
home server to run DNS (and mail) myself.

bind is a bit fiddly to set up, but once it's running, it just runs.

As for reverse DNS and providers, Kviknet will set it up if you
contact customer service.

(Before Fullrate was bought by YouSee you could configure it yourself
in their user interface; naturally that disappeared after the merger.)

For my two VPSes, both Linode and Digital Ocean have rDNS set to the
hostname for both IPv4 and IPv6.


  Best regards,

    Adam

-- 
 "Many activities in life are either so difficult   Adam Sjøgren
  or so boring and repetitive that we would like   a...@koldfront.dk
  to delegate them to other people or to machines"



permanently adding driver to Debian live USB stick

2022-10-20 Thread Adam Weremczuk

Hi all,

My Deb 11.5 live bootable USB stick is missing a driver for Intel 
Wireless-AC 9560 adapter.


Intel provides a link to the missing firmware: 
https://www.intel.com/content/www/us/en/support/articles/05511/wireless.html


and instructs me to copy the file (iwlwifi-9000-pu-b0-jf-b0-34.ucode in 
this case) to /lib/firmware, which is nowhere to be found on the USB stick.


The directory structure looks like below:

(f) autorun.ico
(f) autorun.inf
(d) boot
(d) d-i
(d) dists
(d) EFI
(d) isolinux
(d) live
(d) pool
(f) syslinux.cfg

Is there a way of permanently including the firmware file so that the 
WiFi automatically becomes operational every time I boot Debian live?


Regards,
Adam



Re: apt-cacher internal error (died)

2022-09-21 Thread Adam Weremczuk
Thanks David. The server runs 9.2 and the clients 9.1, so hopefully it 
will work OK.


Another reason I kept changing sources.list in the client was in the 
aftermath of my recent attempts against apt-cacher.


In apt-cacher you change every line to point to apt-cacher:3142.
At least that's what I have in the legacy config I've inherited.
I'm assuming it worked at some point in the past.

On 21/09/2022 17:49, David Wright wrote:

  "One word of advice: serving to clients running a more recent
   distribution from a server running an older one can cause problems,
   because of minor improvements that are sometimes made in the way
   the archives and the cache are handled and stored."




Re: apt-cacher internal error (died)

2022-09-21 Thread Adam Weremczuk

Thank you Darac, that was my problem indeed!
What confused me was the fact that, despite being fundamentally 
misconfigured, it appeared to be almost working. Just complaining about 
one security mirror.


On 21/09/2022 15:36, Darac Marjal wrote:
I'm no expert in apt-cacher-ng, but the error here says that it's 
trying to look up "debian-security" as a hostname. If I'm reading this 
page correctly, you shouldn't be changing /etc/apt/sources.list to 
point to apt-cacher-ng; instead, you should continue to point it at 
deb.debian.org or snapshot.debian.org and tell apt to use 
apt-cacher-ng as an HTTP proxy.


The protocols that an HTTP server and an HTTP proxy speak are 
_slightly_ different. Instead of a client asking a server "Give me 
/path/to/index.html", it needs to tell the proxy "Give me 
/path/to/index.html from example.com". I suspect your problem comes 
from trying to download packages from apt-cacher-ng, rather than 
proxying through apt-cacher-ng.
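(A minimal sketch of the proxy-style setup described above; the Acquire::http::Proxy option is standard apt configuration, the file name is conventional, and the address is the one from this thread:)

```
# /etc/apt/apt.conf.d/00aptproxy  (on each client)
Acquire::http::Proxy "http://192.168.100.1:3142";

# sources.list then keeps the real mirror hostnames, e.g.:
#   deb http://deb.debian.org/debian stretch main
```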






Re: apt-cacher internal error (died)

2022-09-21 Thread Adam Weremczuk

Hi David,

There is still something wrong with my /etc/apt/sources.list

Perhaps caused by stretch reaching end of life on 30 June 2022.

Can somebody provide me with a tested list of mirrors for stretch 
working in Sep 2022 for apt-cacher-ng server and clients?


I've tried several different sets getting no errors from "apt update" on 
the server (which has internet connectivity).


Every time I repeat this list in /etc/apt/sources.list on a client 
replacing FQDN (e.g. deb.debian.org or security.debian.org) with my 
server's IP and port (192.168.100.1:3142) I get DNS errors for security 
mirror as below:


Err:5 http://192.168.100.1:3142/debian-security stretch/updates Release
  503  DNS error for hostname debian-security: Name or service not 
known. If debian-security refers to a configured cache repository, 
please check the corresponding configuration file.


or

503  DNS error for hostname security: No address associated with hostname.

Perhaps /etc/apt-cacher-ng/acng.conf on the server needs amending as well?

I've found a line there that reads:

Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian # 
Debian Archives


Regards,
Adam



On 13/09/2022 05:54, David Wright wrote:

Err:5 http://192.168.100.1:3142/security  stretch/updates Release
503  DNS error for hostname security: No address associated with
hostname. If security refers to a configured cache repository, please
check the corresponding configuration file.
E: The repository 'http://192.168.100.1:3142/security  stretch/updates
Release' does not have a Release file.




Re: apt-cacher internal error (died)

2022-09-12 Thread Adam Weremczuk

Hi David,

I've tried your list and the server updates fine.

On a client I see again:

sudo apt update
Ign:1 http://192.168.100.1:3142/debian stretch InRelease
Ign:2 http://192.168.100.1:3142/debian-security stretch/updates InRelease
Hit:3 http://192.168.100.1:3142/debian stretch-updates InRelease
Hit:4 http://192.168.100.1:3142/debian stretch-backports InRelease
Hit:5 http://192.168.100.1:3142/debian stretch Release
Err:6 http://192.168.100.1:3142/debian-security stretch/updates Release
  503  DNS error for hostname debian-security: Name or service not 
known. If debian-security refers to a configured cache repository, 
please check the corresponding configuration file.

Reading package lists... Done
E: The repository 'http://192.168.100.1:3142/debian-security 
stretch/updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is 
therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user 
configuration details.


It looks like there is something wrong with debian-security repo 
specifically.


Could the fact that the client has no internet connectivity play a role?
I would imagine this is a typical scenario where you want to use apt-cacher.

Regards,
Adam


On 12/09/2022 16:42, David Wright wrote:

On Mon 12 Sep 2022 at 14:33:59 (+0100), Adam Weremczuk wrote:

Switching to apt-cacher-ng brings no immediate joy :(

CLIENT (192.168.100.243)
sudo apt update
Err:5 http://192.168.100.1:3142/security stretch/updates Release
   503  DNS error for hostname security: No address associated with
hostname. If security refers to a configured cache repository, please
check the corresponding configuration file.
E: The repository 'http://192.168.100.1:3142/security stretch/updates
Release' does not have a Release file.


I would expect to see lines like:

Get:2 http://security.debian.org/debian-security bullseye-security/main amd64 
libgdk-pixbuf-2.0-0 amd64 2.42.2+dfsg-1+deb11u1 [147 kB]

but with stretch/updates rather than bullseye-security, of course,
as the syntax was regularised.


SERVER (192.168.100.1)
1662988978|E|769|192.168.100.243|security/dists/stretch/updates/InRelease
[HTTP error, code: 503]
1662988978|M|Download of security/dists/stretch/updates/Release aborted

Why a DNS error if I use IPs internally for this exercise?


AFAIK apt-cacher-ng still needs to make contact with debian-security
as part of the process.


Is there something wrong with sources.list on the SERVER:

deb http://ftp.uk.debian.org/debian stretch main contrib non-free
deb-src http://ftp.uk.debian.org/debian stretch main contrib non-free
deb http://security.debian.org/ stretch/updates main contrib non-free
deb-src http://security.debian.org/ stretch/updates main contrib non-free


Mine were as attached, with an extra part to security's address.


sudo apt update
Ign:1 http://ftp.uk.debian.org/debian stretch InRelease
Ign:2 http://hwraid.le-vert.net/debian stretch InRelease
Hit:3 http://ftp.uk.debian.org/debian stretch Release
Get:4 http://security.debian.org stretch/updates InRelease [59.1 kB]
Fetched 59.1 kB in 0s (119 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
381 packages can be upgraded. Run 'apt list --upgradable' to see them.


Cheers,
David.




Re: apt-cacher internal error (died)

2022-09-12 Thread Adam Weremczuk

Switching to apt-cacher-ng brings no immediate joy :(

CLIENT (192.168.100.243)
sudo apt update
Err:5 http://192.168.100.1:3142/security stretch/updates Release
  503  DNS error for hostname security: No address associated with 
hostname. If security refers to a configured cache repository, please 
check the corresponding configuration file.
E: The repository 'http://192.168.100.1:3142/security stretch/updates 
Release' does not have a Release file.


SERVER (192.168.100.1)
1662988978|E|769|192.168.100.243|security/dists/stretch/updates/InRelease [HTTP 
error, code: 503]

1662988978|M|Download of security/dists/stretch/updates/Release aborted

Why a DNS error if I use IPs internally for this exercise?
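[A likely explanation, assuming stock apt-cacher-ng behaviour: the first path component after the port is treated as the upstream host to contact, so `http://192.168.100.1:3142/security ...` makes the proxy try to resolve the literal hostname `security` (hence the DNS error, even though the client reached the proxy itself by IP). The client's sources.list should name the real upstream host right after the proxy address; a sketch:

```
# CLIENT /etc/apt/sources.list -- apt-cacher-ng URL form is
# http://<proxy>:3142/<real.upstream.host>/<path>
deb http://192.168.100.1:3142/ftp.uk.debian.org/debian stretch main contrib non-free
deb http://192.168.100.1:3142/security.debian.org/ stretch/updates main contrib non-free
```

Alternatively, `security` can be declared as a cache alias with a Remap-* line in /etc/apt-cacher-ng/acng.conf, in which case the short form works.]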

Is there something wrong with sources.list on the SERVER:

deb http://ftp.uk.debian.org/debian stretch main contrib non-free
deb-src http://ftp.uk.debian.org/debian stretch main contrib non-free
deb http://security.debian.org/ stretch/updates main contrib non-free
deb-src http://security.debian.org/ stretch/updates main contrib non-free

sudo apt update
Ign:1 http://ftp.uk.debian.org/debian stretch InRelease
Ign:2 http://hwraid.le-vert.net/debian stretch InRelease
Hit:3 http://ftp.uk.debian.org/debian stretch Release
Get:4 http://security.debian.org stretch/updates InRelease [59.1 kB]
Fetched 59.1 kB in 0s (119 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
381 packages can be upgraded. Run 'apt list --upgradable' to see them.

?

Thanks,
Adam


On 08/09/2022 19:12, Adam Weremczuk wrote:

Hi David,

 From SERVER:/etc/apt-cacher/apt-cacher.conf

user = www-data
group = www-data

sudo chown -R www-data:www-data /var/cache/apt/archives/*

Same error on the CLIENT :(

I think I'll give apt-cacher-ng a shot instead although I wouldn't mind 
knowing why apt-cacher keeps failing.


Regards,
Adam

On 08/09/2022 16:28, David Wright wrote:

Disclaimer: I run apt-cacher-ng, and have never looked at apt-cacher.

On Wed 07 Sep 2022 at 17:50:16 (+0100), Adam Weremczuk wrote:


SERVER

Wed Sep  7 17:06:40 2022|error [10088]: Failed to open/create
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb for return:
Permission denied at /usr/sbin/apt-cacher line 735,  line 4.
Wed Sep  7 17:07:58 2022|warn [20848]: Warning: unable to close
filehandle __ANONIO__ properly: Bad file descriptor at
/usr/sbin/apt-cacher line 1539.

Permissions seem fine:

ls -al /var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb
lrwxrwxrwx 1 proxy proxy 51 Aug 22 18:13
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb ->
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb

ls -al /var/cache/apt/archives/memtest86+_5.01-3_amd64.deb
-rw-r--r-- 1 myuser users 75142 Nov 18  2021
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb

All folders in both paths are 755.


I don't understand the commingling of apt-cacher and apt; is this
how apt-cacher is designed to work? When I install a new package
on a client, the server does not use /var/cache/apt/archives/,
but only its /var/cache/apt-cacher/ directories, from which it
will serve clients.

If someone was logged in to a client and installing package foo,
and I happened to be logged in to the server and installing foo
directly (not via apt-cacher), it would appear from your logs
that we'd both be trying to use the same directory. How would
the permissions work then, and if I cleaned apt's cache, where
would apt-cacher serve the deleted foo file from?

BTW Who is myuser and who is apt-cacher running as?

Cheers,
David.





Re: apt-cacher internal error (died)

2022-09-08 Thread Adam Weremczuk

Hi David,

From SERVER:/etc/apt-cacher/apt-cacher.conf

user = www-data
group = www-data

sudo chown -R www-data:www-data /var/cache/apt/archives/*

Same error on the CLIENT :(

I think I'll give apt-cacher-ng a shot instead although I wouldn't mind 
knowing why apt-cacher keeps failing.


Regards,
Adam

On 08/09/2022 16:28, David Wright wrote:

Disclaimer: I run apt-cacher-ng, and have never looked at apt-cacher.

On Wed 07 Sep 2022 at 17:50:16 (+0100), Adam Weremczuk wrote:


SERVER

Wed Sep  7 17:06:40 2022|error [10088]: Failed to open/create
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb for return:
Permission denied at /usr/sbin/apt-cacher line 735,  line 4.
Wed Sep  7 17:07:58 2022|warn [20848]: Warning: unable to close
filehandle __ANONIO__ properly: Bad file descriptor at
/usr/sbin/apt-cacher line 1539.

Permissions seem fine:

ls -al /var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb
lrwxrwxrwx 1 proxy proxy 51 Aug 22 18:13
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb ->
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb

ls -al /var/cache/apt/archives/memtest86+_5.01-3_amd64.deb
-rw-r--r-- 1 myuser users 75142 Nov 18  2021
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb

All folders in both paths are 755.


I don't understand the commingling of apt-cacher and apt; is this
how apt-cacher is designed to work? When I install a new package
on a client, the server does not use /var/cache/apt/archives/,
but only its /var/cache/apt-cacher/ directories, from which it
will serve clients.

If someone was logged in to a client and installing package foo,
and I happened to be logged in to the server and installing foo
directly (not via apt-cacher), it would appear from your logs
that we'd both be trying to use the same directory. How would
the permissions work then, and if I cleaned apt's cache, where
would apt-cacher serve the deleted foo file from?

BTW Who is myuser and who is apt-cacher running as?

Cheers,
David.





apt-cacher internal error (died)

2022-09-07 Thread Adam Weremczuk

Hi all,

The server runs Debian 9.2 and the client Debian 9.1

My errors:

CLIENT

$ sudo apt update
Ign:1 http://192.168.1.100:3142/debian stretch InRelease
Hit:2 http://192.168.1.100:3142/debian stretch-updates InRelease
Hit:3 http://192.168.1.100:3142/debian stretch Release
Reading package lists... Done
Building dependency tree
Reading state information... Done
91 packages can be upgraded. Run 'apt list --upgradable' to see them.

$ sudo apt install memtest86+
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  hwtools memtester kernel-patch-badram memtest86 mtools
The following NEW packages will be installed:
  memtest86+
0 upgraded, 1 newly installed, 0 to remove and 91 not upgraded.
Need to get 75.1 kB of archives.
After this operation, 2,448 kB of additional disk space will be used.
Err:1 http://192.168.1.100:3142/debian stretch/main amd64 memtest86+ 
amd64 5.01-3

  502  apt-cacher internal error (died)
E: Failed to fetch 
http://192.168.1.100:3142/debian/pool/main/m/memtest86+/memtest86+_5.01-3_amd64.deb 
 502  apt-cacher internal error (died)
E: Unable to fetch some archives, maybe run apt-get update or try with 
--fix-missing?


SERVER

Wed Sep  7 17:06:40 2022|error [10088]: Failed to open/create 
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb for return: 
Permission denied at /usr/sbin/apt-cacher line 735,  line 4.
Wed Sep  7 17:07:58 2022|warn [20848]: Warning: unable to close 
filehandle __ANONIO__ properly: Bad file descriptor at 
/usr/sbin/apt-cacher line 1539.


Permissions seem fine:

ls -al /var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb
lrwxrwxrwx 1 proxy proxy 51 Aug 22 18:13 
/var/cache/apt-cacher/packages/memtest86+_5.01-3_amd64.deb -> 
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb


ls -al /var/cache/apt/archives/memtest86+_5.01-3_amd64.deb
-rw-r--r-- 1 myuser users 75142 Nov 18  2021 
/var/cache/apt/archives/memtest86+_5.01-3_amd64.deb


All folders in both paths are 755.

Any ideas?

Regards,
Adam



Re: Getting a patch applied with an unresponsive maintainer

2022-05-03 Thread Adam Dinwoodie
On Tue, May 03, 2022 at 10:22:29AM +0100, Jonathan Dowland wrote:
> On Tue, May 03, 2022 at 09:19:36AM +0100, Adam Dinwoodie wrote:
> > So I guess the question now is: what, if anything, can I do to get that
> > code into a build and out the door and onto the Debian package
> > repositories?
> 
> Can you prepare an NMU patch (which incorporates the fix patch, as well
> as a changelog entry indicating it's a non-maintainer upload)? Then post
> that to the bug and explain you are seeking a sponsor to upload it.
> 
> <https://wiki.debian.org/NonMaintainerUpload>
> 
> You could ask for a sponsor on the debian-mentors or debian-devel
> mailing lists.

Exactly what I needed, thank you!

I hadn't known about the -mentors list, and I wasn't sure going straight
to -devel was appropriate, but I think that gives me my next steps here
:)
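[For reference, the changelog entry an NMU carries has a fixed shape, which `dch --nmu` from the devscripts package will generate: the version gains a `+nmu1` suffix and the first bullet reads "Non-maintainer upload." A sketch, with an invented version number, placeholder description, and placeholder address:

```
dhcpcd5 (7.1.0-2+nmu1) unstable; urgency=medium

  * Non-maintainer upload.
  * Apply the proposed fix from the BTS (Closes: #1008059)

 -- Adam Dinwoodie <adam@example.invalid>  Tue, 03 May 2022 12:00:00 +0100
```

The resulting source package (`dpkg-buildpackage -S`) is what a sponsor would review and upload.]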



Re: Getting a patch applied with an unresponsive maintainer

2022-05-03 Thread Adam Dinwoodie
On Mon, May 02, 2022 at 11:03:01AM -0400, songbird wrote:
> 
> Adam Dinwoodie wrote:
> 
> i've sent a private reply since i'm not sure gmane sent the
> Cc: i requested.

It didn't :(

> ...
> > Can anyone give any advice about what my next steps might be if I want
> > to get this patch made more widely available?
> 
>   in looking at the following i see there is a request for
> help and that not much else has happened.
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006264
> 
> also look at:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964947

Right.  That's ... not very promising.

>   do you have access to the repository on salsa?

With that hint, I've just found the repository[0].  It looks like the
maintainer created a patch to achieve effectively the same result as my
patch back in July 2019[1].  However, while it's in a branch labelled
"debian/sid", it doesn't appear to actually be available in Sid.

[0]: https://salsa.debian.org/smlx-guest/dhcpcd5
[1]: 
https://salsa.debian.org/smlx-guest/dhcpcd5/-/commit/62162b20e2fb14336f6d884ec6603ebf1d3ac463

So I guess the question now is: what, if anything, can I do to get that
code into a build and out the door and onto the Debian package
repositories?

> > I'm not a Debian developer, and as best I can tell I'd need to have
> > developer privileges already to be able to kick off a non-maintainer
> > upload.  And I don't think I currently have the spare bandwidth to do
> > justice to becoming a developer (I'm already the maintainer of a few
> > Cygwin packages, plus all of the other obligations of life...).  I am,
> > however, very happy to engage with discussions about patches and
> > approaches.
> >
> > Please keep me on the To/Cc list for any replies; I'm not currently
> > subscribed to this list.
> 
>   i've tried to do that via a Cc but i'm not sure gmane honors
> those.  we'll see...  :)

I'll try to remember to keep an eye on the mailing list archives in case
any other replies fall into the same hole...



Getting a patch applied with an unresponsive maintainer

2022-05-02 Thread Adam Dinwoodie
Hello!

I've found a bug in the dhcpcd5 package[0].  I've submitted a patch[1],
and I'd like to look at getting it included in the official package, not
least so I can stop maintaining my own local patches.  However -- unlike
when I've gone through this process after reporting bugs in other
packages -- the maintainer here doesn't seem to be monitoring or
responding to the BTS.

[0]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008059
[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008059#34

Can anyone give any advice about what my next steps might be if I want
to get this patch made more widely available?

I'm not a Debian developer, and as best I can tell I'd need to have
developer privileges already to be able to kick off a non-maintainer
upload.  And I don't think I currently have the spare bandwidth to do
justice to becoming a developer (I'm already the maintainer of a few
Cygwin packages, plus all of the other obligations of life...).  I am,
however, very happy to engage with discussions about patches and
approaches.

Please keep me on the To/Cc list for any replies; I'm not currently
subscribed to this list.

Adam



Re: swap maxed out when plenty of RAM available

2022-03-25 Thread Adam Weremczuk
[Tue Mar 22 00:24:10 2022] slab 12877824
[Tue Mar 22 00:24:10 2022] sock 0
[Tue Mar 22 00:24:10 2022] shmem 506744832
[Tue Mar 22 00:24:10 2022] file_mapped 270336
[Tue Mar 22 00:24:10 2022] file_dirty 0
[Tue Mar 22 00:24:10 2022] file_writeback 135168
[Tue Mar 22 00:24:10 2022] anon_thp 0
[Tue Mar 22 00:24:10 2022] inactive_anon 264527872
[Tue Mar 22 00:24:10 2022] active_anon 256671744
[Tue Mar 22 00:24:10 2022] inactive_file 905216
[Tue Mar 22 00:24:10 2022] active_file 0
[Tue Mar 22 00:24:10 2022] unevictable 0
[Tue Mar 22 00:24:10 2022] slab_reclaimable 6713344
[Tue Mar 22 00:24:10 2022] slab_unreclaimable 6164480
[Tue Mar 22 00:24:10 2022] pgfault 1443488376
[Tue Mar 22 00:24:10 2022] pgmajfault 9734703
[Tue Mar 22 00:24:10 2022] workingset_refault 9720216
[Tue Mar 22 00:24:10 2022] workingset_activate 2073753
[Tue Mar 22 00:24:10 2022] workingset_nodereclaim 0
[Tue Mar 22 00:24:10 2022] pgrefill 12141300
[Tue Mar 22 00:24:10 2022] pgscan 18087447
[Tue Mar 22 00:24:10 2022] pgsteal 9866147
[Tue Mar 22 00:24:10 2022] Tasks state (memory values in pages):
[Tue Mar 22 00:24:10 2022] [  pid  ]   uid  tgid  total_vm   rss  pgtables_bytes  swapents  oom_score_adj  name
[Tue Mar 22 00:24:10 2022] [   2211]     0   2211     14228   230     159744    125      0  systemd
[Tue Mar 22 00:24:10 2022] [   2691]   107   2691     11282    43     126976     71   -900  dbus-daemon
[Tue Mar 22 00:24:10 2022] [   2729]     0   2729     62560   312     135168     87      0  rsyslogd
[Tue Mar 22 00:24:10 2022] [   2731]     0   2731      9495    28     114688     89      0  systemd-logind
[Tue Mar 22 00:24:10 2022] [   2730]     0   2730      6998    17      98304     44      0  cron
[Tue Mar 22 00:24:10 2022] [   3025]     0   3025     20295    27     135168    172      0  master
[Tue Mar 22 00:24:10 2022] [   3029]   101   3029     20824    26     163840    170      0  qmgr
[Tue Mar 22 00:24:10 2022] [  13677]   101  13677     20812   192     155648      0      0  pickup
[Tue Mar 22 00:24:10 2022] [   2915]     0   2915     17488    30     176128    162  -1000  sshd
[Tue Mar 22 00:24:10 2022] [   2762]     0   2762      3164     0      65536     32      0  agetty
[Tue Mar 22 00:24:10 2022] [   2761]     0   2761      3164     1      69632     32      0  agetty
[Tue Mar 22 00:24:10 2022] [   2763]     0   2763      3164     1      73728     31      0  agetty
[Tue Mar 22 00:24:10 2022] [  21053]     0  21053      1069    17      53248      0      0  apt.systemd.dai
[Tue Mar 22 00:24:10 2022] [  21057]     0  21057      1069    31      53248      0      0  apt.systemd.dai
[Tue Mar 22 00:24:10 2022] [  21065]     0  21065     17753  2552     180224      0      0  apt-get
[Tue Mar 22 00:24:10 2022] [  21068]     0  21068      9475   110     110592      0      0  systemd-journal
[Tue Mar 22 00:24:10 2022] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ns,mems_allowed=0-1,oom_memcg=/lxc/101,task_memcg=/lxc/101/ns,task=apt-get,pid=21065,uid=0
[Tue Mar 22 00:24:10 2022] Memory cgroup out of memory: Killed process 21065 (apt-get) total-vm:71012kB, anon-rss:10208kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:176kB oom_score_adj:0
[Tue Mar 22 00:24:10 2022] oom_reaper: reaped process 21065 (apt-get), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB


On 22/03/2022 14:55, Adam Weremczuk wrote:


Hi all,

I run a tiny and lightweight Debian 9.9 LXC container on Proxmox 6.2-6.

It has 512 MB of memory and 512 MB of swap assigned and typically 
needs 50-100 MB to operate.


Last year I started seeing about half of swap being used with very 
little use of RAM.


I then made the following changes:

/etc/sysctl.d/60-my-swappiness.conf
vm.swappiness=10

/etc/sysctl.conf
vm.swappiness=10

and rebooted.

The container was running like that for several months until this 
morning when its core service (dhcp) started failing.


I logged in to investigate and noticed 100% of swap being used with 
maybe 10-20% of RAM in use.


Before I had time to look into details the container crashed (powered 
off).


I'll probably try to get rid of swap entirely as an experiment to see 
what happens.


Unless somebody has any better ideas and hints?

Regards,
Adam


swap maxed out when plenty of RAM available

2022-03-22 Thread Adam Weremczuk

Hi all,

I run a tiny and lightweight Debian 9.9 LXC container on Proxmox 6.2-6.

It has 512 MB of memory and 512 MB of swap assigned and typically needs 
50-100 MB to operate.


Last year I started seeing about half of swap being used with very 
little use of RAM.


I then made the following changes:

/etc/sysctl.d/60-my-swappiness.conf
vm.swappiness=10

/etc/sysctl.conf
vm.swappiness=10

and rebooted.

The container was running like that for several months until this 
morning when its core service (dhcp) started failing.


I logged in to investigate and noticed 100% of swap being used with 
maybe 10-20% of RAM in use.


Before I had time to look into details the container crashed (powered off).

I'll probably try to get rid of swap entirely as an experiment to see 
what happens.
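[Before the next crash, it may help to log swap usage over time. A minimal sketch reading /proc/meminfo; inside an LXC container these values are scoped to the container's memory cgroup, so the host's view can differ:

```shell
# Print swap used, in kB, from a meminfo-style file.
swap_used_kb() {
    awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' "$1"
}

# Sample it once; a cron line like the following (illustrative) logs it
# every minute:
#   * * * * * echo "$(date) $(swap_used_kb /proc/meminfo) kB swap used" >> /var/log/swapwatch.log
swap_used_kb /proc/meminfo
```

A climbing trend with flat RAM usage would point at the cgroup swappiness setting rather than the sysctl, since vm.swappiness is not namespaced per container.]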


Unless somebody has any better ideas and hints?

Regards,
Adam


Re: deprecated options in openssh

2021-09-10 Thread Adam Weremczuk

On 10/09/2021 17:46, Greg Wooledge wrote:


Depends on which syslog daemon implementation you're using, I think.


My environment: Linux deb10 5.4.44-1-pve #1 SMP PVE 5.4.44-1 (Fri, 12 
Jun 2020 08:18:46 +0200) x86_64 GNU/Linux


Pretty minimalistic set up.

Rsyslog 8.1901.0-1 out of the box, no customisation at all.

Not sure what else to say.



Re: deprecated options in openssh

2021-09-10 Thread Adam Weremczuk

On 10/09/2021 17:51, David Wright wrote:


When you commence your call, both you and the person at the other end
probably exchange some pleasantries, which confirm that you're both
who you say you are. These all get recorded too.

Ssh is no different.
Are you saying these entries could come from an ssh client attempting to 
connect, as part of the ssh handshake?


My messages are stamped 10:12:30 and 10:12:31. I run ntp across all 
hosts on the LAN. In auth.log there are no connection attempts logged 
between 10:10:07 and 10:14:05.


Could syslog also take time stamps from a client?



Re: deprecated options in openssh

2021-09-10 Thread Adam Weremczuk

On 10/09/2021 13:11, Greg Wooledge wrote:


Not matching what's in the file:

awk 'NR==25' /etc/ssh/sshd_config

awk 'NR==28' /etc/ssh/sshd_config

awk 'NR==29' /etc/ssh/sshd_config
# Lifetime and size of ephemeral version 1 server key

OK, so "it" is in fact "The warnings in syslog contain line numbers which
do not align with the line numbers of the file that I see"?

Seems harmless enough -- just comment out the offending options wherever
they are, ignoring the line numbers in the warnings.
All these lines have been commented out but, as David Wright pointed 
out, commenting them out isn't enough to stop them being the defaults.
Sshd doesn't seem to be reading the local /etc/ssh/sshd_config, since 
the line numbers don't match.

The service hasn't been restarted around that time and the file hasn't been
modified for even longer:

systemctl status ssh.service | grep running
    Active: active (running) since Wed 2021-08-18 17:36:45 UTC; 3 weeks 1
days ago

All right, now we're getting somewhere.

Is it possible that these lines are being remotely syslogged to you from
another host?

It's unfortunate that you omitted most of the systemctl output.  It would
have been nice to see whether PID 145 is actually sshd on this host.  You
could also check by hand, of course:  ps -fp 145   and   ps -ef | grep sshd


PID 145 doesn't match anything that I could identify.

This container:
openssh-server 7.9p1-10+deb10u2

systemctl status ssh.service
* ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Wed 2021-08-18 17:36:45 UTC; 3 weeks 
1 days ago

 Docs: man:sshd(8)
   man:sshd_config(5)
  Process: 137 ExecStartPre=/usr/sbin/sshd -t (code=exited, 
status=0/SUCCESS)

 Main PID: 165 (sshd)
    Tasks: 1 (limit: 4915)
   Memory: 11.1M
   CGroup: /system.slice/ssh.service
   `-165 /usr/sbin/sshd -D

LXC parent:
openssh-server 7.9p1-10+deb10u2

systemctl status sshd.service

● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Wed 2021-08-18 18:31:24 BST; 3 weeks 
1 days ago

 Docs: man:sshd(8)
   man:sshd_config(5)
  Process: 1659 ExecStartPre=/usr/sbin/sshd -t (code=exited, 
status=0/SUCCESS)

 Main PID: 1910 (sshd)
    Tasks: 1 (limit: 4915)
   Memory: 34.3M
   CGroup: /system.slice/ssh.service
   └─1910 /usr/sbin/sshd -D


You might also want to double-check "journalctl -u ssh" against the
contents of the syslog file.  As far as I know, the systemd journal
cannot accept input from a foreign host, so it should always show
info that comes from services running on localhost.

None of the deprecated options can be found in journalctl:

journalctl -u ssh | grep UsePrivilegeSeparation
journalctl -u ssh | grep KeyRegenerationInterval
etc.

There is actually a gap when the warnings are logged:

Aug 28 10:10:22 deb10 sshd[16443]: Did not receive identification string 
from...

Aug 28 10:14:05 deb10 sshd[16444]: Connection from...

The mysterious warnings arrive in 2 waves at:

Aug 28 10:12:30
Aug 28 10:12:31

Would it be possible for another host to log to syslog without a prior 
explicit manual configuration allowing that?
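[With a stock rsyslog the answer is no: messages from another host are only accepted if a network input module (imudp or imtcp) has been loaded. A quick check, sketched as a helper; paths assume Debian's default rsyslog layout:

```shell
# Report whether an rsyslog config file enables a network listener.
check_remote_syslog() {
    if grep -Eq '^[[:space:]]*(\$ModLoad[[:space:]]+im(udp|tcp)|module\(load="im(udp|tcp)")' "$1"
    then echo "remote input enabled"
    else echo "remote input disabled"
    fi
}

# Example (also worth running over /etc/rsyslog.d/*.conf):
#   check_remote_syslog /etc/rsyslog.conf
```

A complementary runtime check is whether anything is listening on the syslog port at all, e.g. `ss -ulnp | grep :514`.]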




Re: deprecated options in openssh

2021-09-10 Thread Adam Weremczuk

Hi all,

Weeks later it happened again and I'm not any less puzzled:

/var/log/syslog

Aug 28 10:12:30 deb10 sshd[145]: /etc/ssh/sshd_config line 25: 
Deprecated option UsePrivilegeSeparation
Aug 28 10:12:30 deb10 sshd[145]: /etc/ssh/sshd_config line 28: 
Deprecated option KeyRegenerationInterval
Aug 28 10:12:30 deb10 sshd[145]: /etc/ssh/sshd_config line 29: 
Deprecated option ServerKeyBits
Aug 28 10:12:30 deb10 sshd[145]: /etc/ssh/sshd_config line 49: 
Deprecated option RSAAuthentication
Aug 28 10:12:30 deb10 sshd[145]: /etc/ssh/sshd_config line 57: 
Deprecated option RhostsRSAAuthentication
Aug 28 10:12:31 deb10 sshd[207]: /etc/ssh/sshd_config line 25: 
Deprecated option UsePrivilegeSeparation
Aug 28 10:12:31 deb10 sshd[207]: /etc/ssh/sshd_config line 28: 
Deprecated option KeyRegenerationInterval
Aug 28 10:12:31 deb10 sshd[207]: /etc/ssh/sshd_config line 29: 
Deprecated option ServerKeyBits
Aug 28 10:12:31 deb10 sshd[207]: /etc/ssh/sshd_config line 49: 
Deprecated option RSAAuthentication
Aug 28 10:12:31 deb10 sshd[207]: /etc/ssh/sshd_config line 57: 
Deprecated option RhostsRSAAuthentication


Not matching what's in the file:

awk 'NR==25' /etc/ssh/sshd_config

awk 'NR==28' /etc/ssh/sshd_config

awk 'NR==29' /etc/ssh/sshd_config
# Lifetime and size of ephemeral version 1 server key

etc.

The service hasn't been restarted around that time and the file hasn't 
been modified for even longer:


systemctl status ssh.service | grep running
   Active: active (running) since Wed 2021-08-18 17:36:45 UTC; 3 weeks 
1 days ago


stat /etc/ssh/sshd_config
  File: /etc/ssh/sshd_config
  Size: 3864    Blocks: 9  IO Block: 4096 regular file
Device: 34h/52d Inode: 94834   Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/ root)
Access: 2021-09-10 06:48:08.449310637 +
Modify: 2021-07-06 07:15:34.222154544 +
Change: 2021-07-06 07:15:34.222154544 +
 Birth: -

This is a Proxmox LXC container and I thought that maybe the syslog 
entries were for some reason referring to the master host, but not!


awk 'NR==25' /etc/ssh/sshd_config
# Logging
awk 'NR==28' /etc/ssh/sshd_config

awk 'NR==29' /etc/ssh/sshd_config
# Authentication:

What's going on here? :)

Regards,
Adam

On 16/08/2021 18:27, David Wright wrote:

On Mon 16 Aug 2021 at 16:49:16 (+0100), Adam Weremczuk wrote:

Installation and configuration was straightforward:

sudo apt install logwatch

/etc/cron.daily/00logwatch
#execute
/usr/sbin/logwatch --detail low --mailto x...@domain.com

The master config file /usr/share/logwatch/default.conf/logwatch.conf
left with defaults.

Only one report per day arrives. Same as for the other dozen Debian
(mostly older) machines it's installed on, which don't show this
issue.

I presume logwatch is watching your logs, so the first place to check
is the actual logs themselves.

My guess (it's no more than that) is that one of the other dozen
machines that you occasionally log into has a slightly different
configuration from this one, perhaps older, with options that are
now considered less secure (but no extra lines inserted).

The options that are commented out in each machine's config file are
the defaults being used by the server, so they /are/ in force.
When you connect to a remote machine's server, I'm assuming it gets
told what the remote's options are, and it's remonstrating about them.
(The fact that options are commented will be irrelevant, therefore.)
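[Whether a line is commented out only matters in that the daemon then falls back to its compiled-in default. The directives genuinely set in a given file can be listed with a one-line helper (the full effective configuration, defaults included, comes from `sudo sshd -T`):

```shell
# List only the active (non-comment, non-blank) directives in a config file.
active_directives() {
    grep -Ev '^[[:space:]]*(#|$)' "$1"
}

# Example:
#   active_directives /etc/ssh/sshd_config
```

Comparing that listing between the two machines makes any "less secure but commented" differences visible at a glance.]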

Note that I may have all this in reverse: the remote machine could be
complaining about yours, and sending you the log by email. So, as I say,
the first step is to find the log entries that logwatch has watched for.

Cheers,
David.





Re: deprecated options in openssh

2021-08-16 Thread Adam Weremczuk

Installation and configuration was straightforward:

sudo apt install logwatch

/etc/cron.daily/00logwatch
#execute
/usr/sbin/logwatch --detail low --mailto x...@domain.com

The master config file /usr/share/logwatch/default.conf/logwatch.conf 
left with defaults.


Only one report per day arrives. Same as for the other dozen Debian 
(mostly older) machines it's installed on, which don't show this issue.


I've run a recursive search across the entire file system but no other 
occurrences of the problematic options have been found:


sudo find / -type f -exec grep -l UsePrivilegeSeparation {} \;

Still puzzled...

On 16/08/2021 15:34, Greg Wooledge wrote:

On Mon, Aug 16, 2021 at 03:06:30PM +0100, Adam Weremczuk wrote:

I run openssh 7.9p1-10+deb10u2 on Debian 10.10.

Logwatch, which runs daily, occasionally (maybe 2-3 times per month) reports
the following:

Sometimes you get warnings, and sometimes you don't?  That's a red flag
right off the bat.

Is this "logwatch" thing run by a crontab entry, or by a systemd timer?

Are the ones that give warnings run by a *different* crontab entry, or
a *different* systemd timer?


Why is logwatch still complaining and why is it getting the line numbers
wrong?

My first guess is that there's another sshd_config file somewhere else
that it's reading, on the occasions where you get the warnings, possibly
due to a second crontab entry or whatever.

Or maybe logwatch has a configuration file that defines different tasks
depending on the day, and one of the tasks is set to read the wrong file?





deprecated options in openssh

2021-08-16 Thread Adam Weremczuk

Hi all,

I run openssh 7.9p1-10+deb10u2 on Debian 10.10.

Logwatch, which runs daily, occasionally (maybe 2-3 times per month) 
reports the following:


 --------------------- SSHD Begin ------------------------ 

 Deprecated options in SSH config:
    KeyRegenerationInterval - line 28
    RSAAuthentication - line 49
    RhostsRSAAuthentication - line 57
    ServerKeyBits - line 29
    UsePrivilegeSeparation - line 25

I've checked /etc/ssh/sshd_config and the options are there but:

- they are all commented out with ## (why should I be forced to delete 
commented out lines?)

- none of these options is mentioned in any other file under /etc
- the line numbers are shifted by 2 (e.g. line 25 is in fact line 27) 
because of 2 custom lines that I added at the very beginning of the file


I've definitely restarted ssh service since making the changes.

Why is logwatch still complaining and why is it getting the line numbers 
wrong?


Regards,
Adam



chrome and chromium crashing every minute

2019-12-20 Thread Adam Weremczuk

Hi all,

My environment:

$ uname -a
Linux ubuntu 4.4.0-170-generic #199-Ubuntu SMP Thu Nov 14 01:45:04 UTC 
2019 x86_64 x86_64 x86_64 GNU/Linux

Ubuntu 16.04.6 LTS

$ dpkg -l | grep -i chrome
ii  chrome-gnome-shell 9-0ubuntu1~ubuntu16.04.3 all  GNOME Shell 
extensions integration for web browsers
ii  chromium-browser 79.0.3945.79-0ubuntu0.16.04.1 amd64    Chromium 
web browser, open-source version of Chrome
ii  google-chrome-stable 79.0.3945.88-1 amd64    The web browser 
from Google


My errors:

Dec 19 10:56:32 ubuntu chromium-browser.desktop[5181]: 
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:(tg)kill() 
failure
Dec 19 10:56:32 ubuntu kernel: [  387.726455] VizCompositorTh[6627]: 
segfault at 9dc ip 55b5b7d395b6 sp 7efde7ffcf40 error 6 in 
chromium-browser[55b5b2b96000+9174000]
Dec 19 10:56:32 ubuntu chromium-browser.desktop[5181]: 
[5181:5205:1219/105632.381276:FATAL:gpu_data_manager_impl_private.cc(990)] 
The display compositor is frequently crashing. Goodbye.
Dec 19 10:56:32 ubuntu kernel: [  387.859687] traps: 
Chrome_IOThread[5205] trap int3 ip:55cf87ed9c98 sp:7ff350cc5880 error:0


Dec 19 10:56:32 ubuntu google-chrome.desktop[4735]: 
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:(tg)kill() 
failure
Dec 19 10:56:32 ubuntu kernel: [  388.004152] VizCompositorTh[5440]: 
segfault at 539 ip 564dac5ef5d6 sp 7f3a4d866f00 error 6 in 
chrome[564da84ac000+717]
Dec 19 10:56:32 ubuntu google-chrome.desktop[4735]: 
[6697:6697:1219/105632.741548:ERROR:gpu_channel_manager.cc(450)] 
ContextResult::kFatalFailure: Failed to create shared context for 
virtualization.


Dec 19 10:57:52 ubuntu google-chrome.desktop[4735]: 
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:(tg)kill() 
failure
Dec 19 10:57:52 ubuntu kernel: [  467.937280] VizCompositorTh[6705]: 
segfault at a29 ip 555d2e7925d6 sp 7f3f5e058f00 error 6 in 
chrome[555d2a64f000+717]
Dec 19 10:57:52 ubuntu google-chrome.desktop[4735]: 
[6757:6757:1219/105752.681148:ERROR:gpu_channel_manager.cc(450)] 
ContextResult::kFatalFailure: Failed to create shared context for 
virtualization.


I've tried updating and reinstalling but the problem persists.
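[The "display compositor is frequently crashing" message generally points at GPU/driver trouble rather than the browser packages. As a diagnostic step (not a fix), GPU use can be disabled; on Ubuntu the chromium-browser wrapper reads extra switches from /etc/chromium-browser/default (that path is per Ubuntu's packaging; the flag itself is a standard Chromium switch):

```
# /etc/chromium-browser/default
# Extra switches passed to every chromium-browser invocation.
CHROMIUM_FLAGS="--disable-gpu"
```

Google Chrome has no equivalent file in this layout; for a one-off test it can be started as `google-chrome --disable-gpu`. If the crashes stop, the GPU driver stack is the next place to look.]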

Any ideas?

Thanks,
Adam



Re: debconf20 serà a Israel (!!??)

2019-09-04 Thread Adam Deosdad
Spain did not recognise Israel until 1985. Israel is the only state in the
region that supports the Kurds and where homosexuals are not executed.
There are many groups there opposed to the illegal settlements.
And as Golda Meir said, "peace will come when the Palestinians love their
children more than they hate us".


On Tue, 3 Sep 2019, 18:12, Ernest Adrogué wrote:

> 2019-09-03, 07:16 (+0200); Àlex writes:
> > And many people will be barred from entering Israel to attend the Debconf.
>
> Fair enough, that's a more solid argument. (If it's true, which I don't know.)
>
> Regards.
>
>


Re: clone ACL permissions

2019-07-23 Thread Adam Weremczuk

On 23/07/19 12:20, Thomas Schmitt wrote:


Hi,

consider this from man setfacl:

--restore=file
Restore a permission backup created by `getfacl -R' or similar. All
permissions of a complete directory subtree are restored using this
mechanism. If the input contains owner comments or group  comments,
setfacl  attempts  to  restore  the  owner and owning group. If the
input contains flags comments (which define the setuid, setgid, and
sticky bits), setfacl sets those three bits accordingly; otherwise,
it clears them. This option cannot  be  mixed  with  other  options
except `--test'.

You could write a program which reads the text blocks in the getfacl -R
file and only writes those to a new file, which contain lines other than
the chmod-related "user::", "group::", "other::".
This would curb the eagerness of setfacl --restore to the files which
really need it.
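[The filter Thomas describes can be sketched with awk's paragraph mode: each `getfacl -R` record is one blank-line-separated block, and only blocks carrying named entries, defaults, or a mask need restoring. Paths here are illustrative:

```shell
# Keep only getfacl -R records with non-trivial ACL entries, so that
# setfacl --restore leaves plain-mode files alone.
filter_nontrivial_acls() {
    awk 'BEGIN { RS=""; ORS="\n\n" }
         /\n(default:|mask:|(user|group):[^:])/ { print }' "$1"
}

# Example workflow:
#   getfacl -R /srv/samba > acl-full.txt
#   filter_nontrivial_acls acl-full.txt > acl-extra.txt
#   setfacl --restore=acl-extra.txt
```

The regex passes any record containing a `default:` or `mask:` line, or a `user:`/`group:` line with a named qualifier (i.e. not the bare `user::`/`group::` mode bits).]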

Depending on your situation. some manipulation of the file paths in the -R
file might be necessary.

(And of course you urgently need a backup of your ACLs. If your favorite
  backup tool does not record them, consider to run getfacl -R and to
  include the resulting file in your backup.)


Have a nice day :)

Thomas


Hi Thomas,

Thank you for a quick and useful reply.
I definitely need to review all backups and include ACLs where needed.
I'm also wondering if rsync is capable of syncing ACLs without touching 
anything else.

I vaguely recall using it for a similar purpose some time ago.
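For what it's worth, rsync can carry ACLs and extended attributes along with a copy; a minimal sketch, assuming an ACL-enabled rsync build, with purely illustrative paths:

```shell
# -a recurses and preserves the usual metadata; -A adds ACLs (and implies
# --perms); -X adds extended attributes. Both paths are illustrative.
rsync -aAX /srv/samba/sysvol/ /mnt/restore/samba/sysvol/
```

Note that since -A implies --perms, this keeps permissions and ACLs in step rather than updating ACLs in isolation.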

Thanks,
Adam



clone ACL permissions

2019-07-23 Thread Adam Weremczuk

Hi all,

I've just found out that my backups were missing ACL and the restore 
will not work until this is fixed.


Luckily I have the luxury of checking what the permissions should look 
like on a running system, e.g:


RESTORED:

# file: samba/sysvol
# owner: root
# group: 300
user::rwx
group::rwx
other::---

RUNNING:

# file: samba/sysvol
# owner: root
# group: 300
user::rwx
user:root:rwx
group::rwx
group:300:rwx
group:301:r-x
group:302:rwx
group:303:r-x
mask::rwx
other::---
default:user::rwx
default:user:root:rwx
default:group::---
default:group:300:rwx
default:group:301:r-x
default:group:302:rwx
default:group:303:r-x
default:mask::rwx
default:other::---

There are no trivial permission patterns to follow, so manual 
reconciliation would take very long and be error-prone.


QUESTION:

Is it possible to "clone" ACL permissions?

I.e. recursively read ACLs (getfacl?) on all files and folders and write 
them (setfacl?) to the same list of files and folders elsewhere?
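A minimal sketch of what such a clone could look like, assuming the two trees share the same relative layout (the parent paths here are illustrative):

```shell
# Dump all ACLs from the live tree; getfacl -R emits paths relative to
# the current directory, so run both halves from matching parents.
cd /srv/live && getfacl -R samba/sysvol > /tmp/sysvol.acl

# Replay the dump onto the restored tree with the same relative layout.
cd /srv/restored && setfacl --restore=/tmp/sysvol.acl
```

As the setfacl man page notes, --restore also re-applies owner, group and the setuid/setgid/sticky bits recorded in the dump, so it restores more than just the ACL entries.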


Thanks,
Adam



Re: apt-cacher errors

2019-03-25 Thread Adam Weremczuk

Hi Greg,

Thank you for taking the time to point out all the shortcomings.


On 25/03/19 13:21, Greg Wooledge wrote:

On Mon, Mar 25, 2019 at 12:11:21PM +, Adam Weremczuk wrote:

I've found 30 entries referencing wheezy and removed them all:

sudo find /var/cache/apt-cacher/ -type f -name *wheezy* | xargs rm

sudo find /var/cache/apt-cacher -type f -name '*wheezy*' -delete

There are three mistakes in your command:

1) The glob must be quoted, or the shell will expand it based on the files
in the current working directory, wherever that happens to be.
I would normally use \*wheezy\* but I knew that wouldn't make any 
difference as I saw the list earlier.

Any practical difference between \*wheezy\* and '*wheezy*' in this case?

2) xargs without -0 is unsafe to use for filenames, because they may contain
whitespace or single quotes or double quotes, all of which are special
to xargs.
Again, I saw the list so didn't bother, but a fair point indeed; it could 
easily fail or even turn disastrous.
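For the record, the xargs pitfall can be avoided while keeping the pipeline form by NUL-delimiting the names; a sketch using an illustrative scratch directory:

```shell
# -print0/-0 pass NUL-delimited names, so whitespace and quotes are safe;
# `rm --` stops filenames that begin with a dash being parsed as options.
mkdir -p /tmp/cache && touch "/tmp/cache/a wheezy 'file'" /tmp/cache/b-wheezy
find /tmp/cache -type f -name '*wheezy*' -print0 | xargs -0 rm --
ls -A /tmp/cache    # now empty
```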

3) You ran find with sudo privileges (probably not necessary), and failed
to run the rm with sudo privileges.  All of the removals are therefore
going to fail.
TBH I ran it as root and added a single sudo in front after pasting, so as 
not to promote bad practices :)


You might argue that "apt-cacher never has any files with spaces!"
That may be true.  But it's still a good habit to develop.  Also, -delete
is more efficient than | xargs rm, albeit not portable to POSIX scripts.

Never used it with find, good to know.


If you want it to be portable as well as safe, then:

sudo find /var/cache/apt-cacher -type f -name '*wheezy*' -exec rm {} +

That's less efficient than -delete, but it's the best you can do if
POSIX portability is required.

Agree.

Thanks again,
Adam



Re: apt-cacher errors

2019-03-25 Thread Adam Weremczuk

I've found 30 entries referencing wheezy and removed them all:

sudo find /var/cache/apt-cacher/ -type f -name *wheezy* | xargs rm

which appears to have fixed the issue.

Thanks,
Adam


On 25/03/19 11:43, Roberto C. Sánchez wrote:

On Mon, Mar 25, 2019 at 07:05:11AM -0400, Roberto C. Sánchez wrote:

On Mon, Mar 25, 2019 at 08:59:40AM +, Adam Weremczuk wrote:

Hi all,

On 24th March (last Sunday) I received the following (for the first time):

Subject: Cron test -x /usr/share/apt-cacher/apt-cacher-cleanup.pl && 
/usr/share/apt-cacher/apt-cacher-cleanup.pl

Error: cannot open ../headers/debian_dists_wheezy_Release for locking: No such 
file or directory
Failed to open filehandles for debian_dists_wheezy_Release. Resolve this 
manually.
Exiting to prevent deletion of cache contents.

The above all came from systems originally running wheezy which were upgraded 
to stretch about 2 years ago.

Questions:

1. How to get rid of these errors? I would prefer to avoid spending half a day 
deciphering a chain of Perl scripts which I'm not familiar with.
2. What specifically happened last week to trigger this behavior? Was it e.g. a 
permanent removal of all wheezy repos and references?


https://lists.debian.org/debian-devel-announce/2019/03/msg6.html

You will need to change your sources.list.

http://archive.debian.org/debian/README
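For an archived release, the sources.list entry would look something like the following (the components listed are illustrative):

```
# wheezy now lives on archive.debian.org; archived suites receive no
# updates and their Release files are no longer refreshed.
deb http://archive.debian.org/debian wheezy main contrib non-free
```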


Perhaps I spoke too soon.  I encountered this error in my apt-cacher-ng
log after replying to your message.

Checking/Updating debrep/dists/wheezy/Release...
Attempting to download the alternative version... Checking/Updating
debrep/dists/wheezy/InRelease...
404 Not Found

After seeing that I re-read your message and noticed that the system had
been initially installed as wheezy but now runs stretch.

It has been some years since I moved to apt-cacher-ng, so I've forgotten
how it differs from apt-cacher, but in my case I had to remove the
directories /var/cache/apt-cacher-ng/debrep/dists/wheezy* and then
re-run the expiration task.

You may need to do something similar.

Regards,

-Roberto





apt-cacher errors

2019-03-25 Thread Adam Weremczuk

Hi all,

On 24th March (last Sunday) I received the following (for the first time):

Subject: Cron test -x /usr/share/apt-cacher/apt-cacher-cleanup.pl && 
/usr/share/apt-cacher/apt-cacher-cleanup.pl

Error: cannot open ../headers/debian_dists_wheezy_Release for locking: No such 
file or directory
Failed to open filehandles for debian_dists_wheezy_Release. Resolve this 
manually.
Exiting to prevent deletion of cache contents.

The above all came from systems originally running wheezy which were upgraded 
to stretch about 2 years ago.

Questions:

1. How to get rid of these errors? I would prefer to avoid spending half a day 
deciphering a chain of Perl scripts which I'm not familiar with.
2. What specifically happened last week to trigger this behavior? Was it e.g. a 
permanent removal of all wheezy repos and references?

Regards,
Adam



GIMP Crash

2019-03-16 Thread Adam Haas
I was working with the Gnu Image Manipulation Program yesterday when a
segmentation fault occurred. Attached is the information spit out in
association with the event. Please let me know what additional information
you need from me and I will pass it along.

- Adam Haas
```
GNU Image Manipulation Program version 2.10.8
git-describe: GIMP_2_10_6-294-ga967e8d2c2
C compiler:
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/8/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 
8.2.0-13' --with-bugurl=file:///usr/share/doc/gcc-8/README.Bugs 
--enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr 
--with-gcc-major-version-only --program-suffix=-8 
--program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug 
--enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new 
--enable-gnu-unique-object --disable-vtable-verify --enable-libmpx 
--enable-plugin --enable-default-pie --with-system-zlib 
--with-target-system-zlib --enable-objc-gc=auto --enable-multiarch 
--disable-werror --with-arch-32=i686 --with-abi=m64 
--with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic 
--enable-offload-targets=nvptx-none --without-cuda-driver 
--enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu 
--target=x86_64-linux-gnu
Thread model: posix
gcc version 8.2.0 (Debian 8.2.0-13) 

using GEGL version 0.4.14 (compiled against version 0.4.12)
using GLib version 2.58.3 (compiled against version 2.58.1)
using GdkPixbuf version 2.38.1 (compiled against version 2.38.0)
using GTK+ version 2.24.32 (compiled against version 2.24.32)
using Pango version 1.42.3 (compiled against version 1.42.3)
using Fontconfig version 2.13.1 (compiled against version 2.13.1)
using Cairo version 1.16.0 (compiled against version 1.16.0)

```
> fatal error: Segmentation fault

Stack trace:
```
/usr/lib/libgimpbase-2.0.so.0(gimp_stack_trace_print+0x397)[0x7efecef12e27]
gimp-2.10(+0xd14a0)[0x564d3eac34a0]
gimp-2.10(+0xd18d8)[0x564d3eac38d8]
gimp-2.10(+0xd2037)[0x564d3eac4037]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x7efece20e730]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(+0x179e4)[0x7efecee749e4]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(pango_attribute_copy+0xf)[0x7efecee742df]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(pango_attr_list_copy+0x28)[0x7efecee74c98]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(+0x20697)[0x7efecee7d697]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(+0x24143)[0x7efecee81143]
/usr/lib/x86_64-linux-gnu/libpango-1.0.so.0(+0x26302)[0x7efecee83302]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x12802a)[0x7efecf15a02a]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_closure_invoke+0xb1)[0x7efece4d4b91]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(+0x244ec)[0x7efece4e84ec]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_valist+0xd8e)[0x7efece4f125e]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_by_name+0x4b4)[0x7efece4f1df4]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x19f4e8)[0x7efecf1d14e8]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x84a35)[0x7efecf0b6a35]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_closure_invoke+0xb1)[0x7efece4d4b91]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(+0x244ec)[0x7efece4e84ec]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_valist+0xd8e)[0x7efece4f125e]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_by_name+0x4b4)[0x7efece4f1df4]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x19f4e8)[0x7efecf1d14e8]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0xf53bc)[0x7efecf1273bc]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_closure_invoke+0xb1)[0x7efece4d4b91]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(+0x244ec)[0x7efece4e84ec]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_valist+0xd8e)[0x7efece4f125e]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_by_name+0x4b4)[0x7efece4f1df4]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x19f4e8)[0x7efecf1d14e8]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x84a35)[0x7efecf0b6a35]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x1a52d2)[0x7efecf1d72d2]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_closure_invoke+0xb1)[0x7efece4d4b91]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(+0x244ec)[0x7efece4e84ec]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_valist+0xd8e)[0x7efece4f125e]
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_signal_emit_by_name+0x4b4)[0x7efece4f1df4]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x19f4e8)[0x7efecf1d14e8]
/usr/lib/x86_64-linux-gnu/libgtk-x11-2.0.so.0(+0x84a35)[0x7efecf0b6a35]
/

Re: Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-18 Thread Adam Weremczuk

Has anybody tried: https://rclone.org ?

I still think something like this combined with 5 x G Suite Business 
accounts would be the best value for money.


I.e. $50 pm for unlimited storage.
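For what it's worth, a typical rclone invocation against such an account might look like this; `gdrive` is an assumed remote name that would have been created beforehand with `rclone config`, and the paths and flag values are illustrative:

```shell
# Copy a local tree to the remote; rclone retries and checksums by default.
# --transfers raises parallelism; --bwlimit helps stay under daily caps.
rclone copy /data/archive gdrive:archive --transfers 8 --bwlimit 8M --progress
```

rclone also accepts plain local paths as remotes, which makes it easy to dry-run the command shape before pointing it at cloud storage.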


On 18/02/19 08:57, Curt wrote:

No support for standard protocols that I can see and you need to install
their proprietary app (for which there is none for Linux, though they
say they're working on one).

So those constitute rather important cons that probably cancel out the
pros, at least until a linux client appears. I believe SpiderOak One has
a Debian client, BTW (storage of 50TB would be about $250.00 a month
with them if I'm calculating correctly).




Re: P2V Debian 9 with VMware Converter

2019-02-15 Thread Adam Weremczuk
I've investigated a bit more and it actually has something to do with EFI 
rather than the Debian version.



This Debian 9 clones fine:

Model: IBM ServeRAID M5014 (scsi)
Disk /dev/sda: 998GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End Size    File system  Name  Flags
 1  1049kB  2097kB  1049kB bios_grub
 2  2097kB  271MB   268MB   fat32  boot, esp
 3  271MB   998GB   998GB  lvm


And this one gives the error:

Model: DELL PERC H730P Mini (scsi)
Disk /dev/sda: 4197GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End Size    File system  Name  Flags
 1  1049kB  511MB   510MB   fat16    uefi  boot, esp
 2  511MB   767MB   256MB   ext3 boot
 3  767MB   4197GB  4196GB   lvm   lvm


All my existing VMs across all ESXi hosts are configured with BIOS boot.



Re: P2V Debian 9 with VMware Converter

2019-02-15 Thread Adam Weremczuk

Hi Calbaza,

I'm not surprised it works for Debian 8 (released in 2015), as the tool 
officially supports Ubuntu 16.04 (released in 2016).


It also works like a charm for Debian 7 (2013) but not Debian 9 (2017).

All my VMs run as version 11 too.

It fails with the same error when tried against 3 different ESXi hosts 
(6.0, 6.0 and 6.7) running on 2010, 2015 and 2017 server hardware 
respectively.


I think the errors are misleading and it should start working in one of 
the next releases of VMware vCenter Converter Standalone.


I'm now giving Veeam a shot at this (which also doesn't require the source 
machine to be shut down).


If it fails I will resort to old school manual VM creation and data copying.

Thanks,
Adam


On 15/02/19 15:38, Calabaza wrote:


I converted about 17 machines running Debian 8 with little or no problem.
(I understand that you need Debian 9, but that is my experience.)

I think your problem is the virtual hardware version of the destination
in your ESX.

I have all my machines on virtual hardware version 11.

Read these links; they may help you:

[0] What other requirements and considerations are there for virtual
machines with EFI firmware?
[0] https://communities.vmware.com/docs/DOC-28494
[1] https://communities.vmware.com/thread/519642

Here [2] it says that

"(...) EFI firmware is supported from hardware version 11 and above. (...)"

[2] https://communities.vmware.com/thread/584625
[3] 
https://akmyint.wordpress.com/2012/09/30/my-p2v-notes-stand-alone-servers-or-non-clustered/

I'm a Spanish speaker, sorry for my bad English.





Re: Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Adam Weremczuk

The upload volume is currently capped at 750GB per day:

https://support.google.com/a/answer/172541?hl=en

So it would take you about 67 days to push to a single account and about 
14 days if you split the data into 5 even chunks and push simultaneously 
to 5 accounts.


You would need a very fast internet connection to go much faster with 
any provider anyway.



On 15/02/19 09:48, Adam Weremczuk wrote:
Actually even a cheaper "Business" plan offers unlimited storage for 5 
or more users.


So you might spend as little as $50 per month:

https://gsuite.google.com/intl/en_us/pricing.html

My links are for UK and US.

Edit the URL to browse different regions.


On 15/02/19 09:34, Adam Weremczuk wrote:

Hi,

It could make sense to sign up for Google Enterprise subscription:

https://gsuite.google.co.uk/intl/en_uk/pricing.html

5 users will cost you 5 x £20 = £100 per month and give 5 accounts 
with unlimited storage in a trusted and reliable place.


AFAIK there is an upload speed cap in place, so it may take you many days 
to complete the initial push.


Moving forward I would recommend rsync or similar for differential 
data updates.


Not sure why you've posted your question to this list though?

Thanks,
Adam






Re: Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Adam Weremczuk
Actually even a cheaper "Business" plan offers unlimited storage for 5 
or more users.


So you might spend as little as $50 per month:

https://gsuite.google.com/intl/en_us/pricing.html

My links are for UK and US.

Edit the URL to browse different regions.


On 15/02/19 09:34, Adam Weremczuk wrote:

Hi,

It could make sense to sign up for Google Enterprise subscription:

https://gsuite.google.co.uk/intl/en_uk/pricing.html

5 users will cost you 5 x £20 = £100 per month and give 5 accounts 
with unlimited storage in a trusted and reliable place.


AFAIK there is an upload speed cap in place, so it may take you many days 
to complete the initial push.


Moving forward I would recommend rsync or similar for differential 
data updates.


Not sure why you've posted your question to this list though?

Thanks,
Adam




Re: Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data

2019-02-15 Thread Adam Weremczuk

Hi,

It could make sense to sign up for Google Enterprise subscription:

https://gsuite.google.co.uk/intl/en_uk/pricing.html

5 users will cost you 5 x £20 = £100 per month and give 5 accounts with 
unlimited storage in a trusted and reliable place.


AFAIK there is an upload speed cap in place, so it may take you many days to 
complete the initial push.


Moving forward I would recommend rsync or similar for differential data 
updates.


Not sure why you've posted your question to this list though?

Thanks,
Adam



Re: P2V Debian 9 with VMware Converter

2019-02-13 Thread Adam Weremczuk
Forgot to mention the source server was up and running the entire time; 
no downtime.



On 13/02/19 14:06, Alexandre GRIVEAUX wrote:

It's hard to beat 20-30 seconds it takes to click and type into VMware
Converter and leave it running.
It works well for Debian 7, probably 8 as well but not 9 :(




Re: P2V Debian 9 with VMware Converter

2019-02-13 Thread Adam Weremczuk
Short answer: because it would take significantly longer and potentially 
lead to more errors, especially if I have a number of servers with 
different structures and purposes.


It's hard to beat 20-30 seconds it takes to click and type into VMware 
Converter and leave it running.

It works well for Debian 7, probably 8 as well but not 9 :(


On 13/02/19 13:09, Alexandre GRIVEAUX wrote:

Hello,

Why did you use VMware Converter instead of configuring a VM and copying 
your data?


Regards,





P2V Debian 9 with VMware Converter

2019-02-13 Thread Adam Weremczuk

Hi all,

I persistently get "The destination does not support EFI firmware".

Apparently the latest Converter doesn't support Debian 9 (yet?).

More details on my issue here: 
https://communities.vmware.com/message/2837600


Has anybody had success tricking Converter to perform a migration?

Any other P2V tools you would recommend?

Thanks,
Adam



Re: dd performance test differences

2018-11-02 Thread Adam Weremczuk

Hi Mike,

Thanks for the suggestion.

New test results with suggested parameters below:

Slower server
W: 13107200000 bytes (13 GB, 12 GiB) copied, 97.5106 s, 134 MB/s
R: 13107200000 bytes (13 GB, 12 GiB) copied, 28.6353 s, 458 MB/s

Faster server
W: 13107200000 bytes (13 GB) copied, 83.7368 s, 157 MB/s
R: 13107200000 bytes (13 GB) copied, 11.6786 s, 1.1 GB/s

No huge discrepancy but still something making me scratch my head :-)

BTW, small values can be useful when e.g. a BBU fails and you want to get 
a quick idea of performance:


dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=sync
524288 bytes (524 kB, 512 KiB) copied, 25.101 s, 20.9 kB/s

---
Adam


On 02/11/18 12:40, Michael Stone wrote:
That's a uselessly small block size & count. Try again with something 
more like bs=128k count=10


Note that your dd test is a write test and your hdparm test is a read 
test. It would probably be useful to also do a dd read test with the 
parameters above. (if=test.bin of=/dev/null)


Mike Stone 
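The paired write-and-read test suggested above can be sketched as follows; the file path and sizes are illustrative, not the exact values from the thread:

```shell
# Write test: oflag=sync forces synchronous writes, defeating write-back caching.
dd if=/dev/zero of=/tmp/ddtest.bin bs=128k count=1024 oflag=sync
# Read test: without first dropping caches (echo 3 > /proc/sys/vm/drop_caches,
# root only), this largely measures the page cache rather than the disk.
dd if=/tmp/ddtest.bin of=/dev/null bs=128k
rm -f /tmp/ddtest.bin
```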




Re: dd performance test differences

2018-11-02 Thread Adam Weremczuk
Forgot to mention both run Debian (7.1 and 9.5) and filesystems are ext4 
on both.



On 02/11/18 11:58, Adam Weremczuk wrote:

Hi all,

Can somebody explain this huge difference between 2 (almost) identical 
servers:


- 



dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=sync

524288 bytes (524 kB) copied, 0.00133898 s, 392 MB/s

vs

524288 bytes (524 kB, 512 KiB) copied, 0.3026 s, 1.7 MB/s

- 



With "hdparm -tT /dev/sda" discrepancies are much smaller but still 
noticeable:


 Timing cached reads:   15976 MB in  2.00 seconds = 7996.66 MB/sec
 Timing buffered disk reads: 2134 MB in  3.00 seconds = 710.98 MB/sec

vs

 Timing cached reads:   14282 MB in  1.99 seconds = 7161.56 MB/sec
 Timing buffered disk reads: 1172 MB in  3.00 seconds = 390.61 MB/sec

- 



I would have thought all meaningful aspects are identical:
- server model (IBM/Lenovo X3650 M3)
- identical disks (count, brand, model, capacity) and RAID controller 
cards (LSI ServeRAID M5014 SAS/SATA Controller)

- RAID settings:

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name    :
RAID Level  : Primary-5, Secondary-0, RAID Level Qualifier-3
Size    : 3.629 TB
Sector Size : 512
Parity Size : 929.458 GB
State   : Optimal
Strip Size  : 128 KB
Number Of Drives per span:5
Span Depth  : 2
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache 
if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache 
if Bad BBU

Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disabled
Encryption Type : None
Is VD Cached: No

- both have BBUs in optimal states

- 



There is a slight difference in RAID firmware:

FW Package Build: 12.12.0-0085

vs

FW Package Build: 12.15.0-0248

With the newer giving lower results.

Shall I start digging with Lenovo / LSI or am I missing something?

Thanks,
Adam





dd performance test differences

2018-11-02 Thread Adam Weremczuk

Hi all,

Can somebody explain this huge difference between 2 (almost) identical 
servers:


-

dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=sync

524288 bytes (524 kB) copied, 0.00133898 s, 392 MB/s

vs

524288 bytes (524 kB, 512 KiB) copied, 0.3026 s, 1.7 MB/s

-

With "hdparm -tT /dev/sda" discrepancies are much smaller but still 
noticeable:


 Timing cached reads:   15976 MB in  2.00 seconds = 7996.66 MB/sec
 Timing buffered disk reads: 2134 MB in  3.00 seconds = 710.98 MB/sec

vs

 Timing cached reads:   14282 MB in  1.99 seconds = 7161.56 MB/sec
 Timing buffered disk reads: 1172 MB in  3.00 seconds = 390.61 MB/sec

-

I would have thought all meaningful aspects are identical:
- server model (IBM/Lenovo X3650 M3)
- identical disks (count, brand, model, capacity) and RAID controller 
cards (LSI ServeRAID M5014 SAS/SATA Controller)

- RAID settings:

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name    :
RAID Level  : Primary-5, Secondary-0, RAID Level Qualifier-3
Size    : 3.629 TB
Sector Size : 512
Parity Size : 929.458 GB
State   : Optimal
Strip Size  : 128 KB
Number Of Drives per span:5
Span Depth  : 2
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if 
Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if 
Bad BBU

Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disabled
Encryption Type : None
Is VD Cached: No

- both have BBUs in optimal states

-

There is a slight difference in RAID firmware:

FW Package Build: 12.12.0-0085

vs

FW Package Build: 12.15.0-0248

With the newer giving lower results.

Shall I start digging with Lenovo / LSI or am I missing something?

Thanks,
Adam



Re: Latest Thunderbird update breaks multiple plugins

2018-10-16 Thread Adam Weremczuk

Hi Martin,

I suffered this pain earlier this morning on Ubuntu 16.04.

For Lightning you might try substituting it with "xul-ext-lightning", 
but I ended up removing Thunderbird and rolling back to 52.9 
(https://ubuntu.pkgs.org/16.04/ubuntu-updates-main-amd64/thunderbird_52.9.1+build3-0ubuntu0.16.04.1_amd64.deb.html) 
which offered Lightning 5.4.


They appear to be working together fine just as they used to with all my 
local settings kept intact.


Following that I disabled updates:

$ sudo apt-mark hold thunderbird
thunderbird set on hold.

I guess that's your only option for discontinued plugins.

Hopefully a compatible version of Lightning will be out soon.

Cheers,
Adam


On 16/10/18 09:39, Martin wrote:

Hi list members,

with the latest Thunderbird update (60.2.1) some plugins are broken. E.g. Color 
Folders (last change July 2014), FireTray (last change May 2016, discontinued), 
Lightning (last change May 2017).
Do any of you have a clue whether one should just forget about those plugins, 
or what to do?
And, yes, I know, there may be Bugzilla.





Re: DRBD sync speed

2018-10-11 Thread Adam Weremczuk

Hi Dan,

Yes, I tried tweaking the config following that link, but for some reason 
the sync progress is not showing any more.

I guess I need to fiddle with it more.

I have 16 x 500 GB disks in each server and my layout is as below:

1-4: VD0: RAID10: 2 spans of 2 disks -> 1TB for Proxmox containers and VMs
5-14: VD1: RAID50: 2 spans of 5 disks -> 4TB for storage (which I'm 
trying to sync for redundancy using DRBD)

15-16: global hot spares

It appears to provide the best balance of performance, resiliency and 
space utilisation.
I've been referring to this chart: 
https://www.datarecovery.net/articles/raid-level-comparison.aspx


Is there anything fundamentally wrong with my architecture?

Thanks,
Adam


On 10/10/18 16:54, Dan Ritter wrote:

Have you read
https://serverfault.com/questions/740311/drbd-terrible-sync-performance-on-10gige

and edited your drbd.conf to suit?

Unrelated: how is it that you decided on RAID50 for 4TB of disk space?
If it's valuable, you should be looking at RAID10. If it's not
valuable, why 50 over 6 or RAIDZ2 or 3?

-dsr-




DRBD sync speed

2018-10-10 Thread Adam Weremczuk

Hi all,

I'm trying out DRBD Pacemaker HA Cluster on Debian 9.5

I have 2 identical servers connected with 2 x 1 Gbps links in bond_mode 
balance-rr.


The bond is working fine; I get a transfer rate of 150 MB/s with scp.

Following this guide: 
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/ 
everything was going smoothly up until:


drbdadm -- --overwrite-data-of-peer primary r0/0

cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-
    ns:10944 nr:0 dw:0 dr:10992 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f 
oos:3898301536

    [>] sync'ed:  0.1% (3806932/3806944)M
    finish: 483:25:13 speed: 2,188 (2,188) K/sec

The transfer rate is horribly slow and at this pace it's going to take 
20 days for two 4 TB volumes to sync!


That's almost 15 times slower comparing with the guide video (8:30): 
https://www.youtube.com/watch?v=WQGi8Nf0kVc


The volumes have been zeroed and contain no live data yet.

My sdb disks are logical drives (hardware RAID) set up as RAID50 with 
the defaults:


Strip size: 128 KB
Access policy: RW
Read policy: Normal
Write policy: Write Back with BBU
IO policy: Direct
Drive Cache: Disable
Disable BGI: No

Performance looks good when tested with hdparm:

hdparm -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   15056 MB in  1.99 seconds = 7550.46 MB/sec
 Timing buffered disk reads: 2100 MB in  3.00 seconds = 699.81 MB/sec


Any idea why the sync rate is so painfully slow and how to improve it?
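For DRBD 8.4, resync throughput is governed by the dynamic resync controller; a hedged sketch of the knobs involved (the values below are illustrative, not recommendations):

```
# drbd.conf / r0.res fragment, DRBD 8.4 syntax; values are illustrative.
disk {
  c-plan-ahead    0;     # 0 disables the dynamic controller...
  resync-rate     400M;  # ...so this fixed rate is used instead
  c-max-rate      400M;  # upper cap if the controller is left enabled
  c-fill-target   1M;    # in-flight data target for the controller
}
```

Changes like these can usually be applied on the fly with `drbdadm adjust r0` rather than restarting the resource.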

Regards,
Adam










[SOLVED] Re: dual port cross over cable bonding

2018-10-05 Thread Adam Weremczuk

A single dual port gigabit card on each server.

I'm more concerned about performance than redundancy.

The bond operates in 2 gigabit mode:

ethtool bond1
(...)
    Speed: 2000Mb/s
    Duplex: Full
(...)

My working config (will play with it a bit more):

auto bond1
iface bond1 inet static
    slaves ens1f0 ens1f1
    address 192.168.200.1
    netmask 255.255.255.252
    bond_miimon 100
    bond_mode balance-rr
    bond_xmit_hash_policy layer3+4

I've done a simple performance test: copied a file over ssh (which adds 
its own overhead).


90-100 MB/s over a single link and 150-160 MB/s over the dual bond.

So it's definitely working and making a substantial difference.
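To take the ssh cipher overhead out of the measurement, a raw TCP test is an option; a sketch assuming iperf3 is installed on both servers (the address matches the bond config above):

```shell
# On the first server: listen for test connections.
iperf3 -s
# On the second: run two parallel streams for 30 s, so balance-rr
# round-robin has a chance to use both physical links.
iperf3 -c 192.168.200.1 -P 2 -t 30
```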

Unplugging one cable doesn't break the transfer; the speed temporarily 
decreases but then goes up again after the cable is reinserted.



On 05/10/18 16:01, Dan Purgert wrote:

Are these "just" 10/100 cards?  I mean, if they're gbit capable,
crossover cables violate the spec, and you're limiting them to 100m
only...





[SOLVED] Re: dual port cross over cable bonding

2018-10-05 Thread Adam Weremczuk
My config was ok and it's working like a charm after both servers have 
been rebooted.

For some reason "systemctl restart networking" wasn't enough.



Re: dual port cross over cable bonding

2018-10-05 Thread Adam Weremczuk

Yes, 192.168.200.1 and 192.168.200.2 netmask 30
All ports and cables work when tested 1-1 but not with my 2-2 bonding 
config.



Looks like your LACP is correct but your IP addressing is wrong.

Do you have parentheses in it, or are you trying to suggest that
one is .1 and the other is .2?

-dsr-





dual port cross over cable bonding

2018-10-05 Thread Adam Weremczuk

Hello,

I have 2 servers running Proxmox 5.2 (based on Debian 9).

I've connected Ethernet ports between them with a pair of crossover cables.

When only 1 port on each and 1 cable are used connectivity looks fine 
with the following config:


---

auto ens1f0
iface ens1f0 inet static
    address 192.168.200.1(2)
    netmask 255.255.255.252

---

Then I connected 2 cables and attempted link aggregation:

---

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond1
iface bond1 inet static
    slaves ens1f0 ens1f1
    address 192.168.200.1(2)
    netmask 255.255.255.252
    bond_miimon 100
#    bond_mode 802.3ad
    bond_mode balance-rr
    bond_xmit_hash_policy layer3+4

---

Tried both 802.3ad and balance-rr modes.

AFAIK only these 2 provide link aggregation.

Ethtool appears to be happy:

ethtool bond1
Settings for bond1:
    Supported ports: [ ]
    Supported link modes:   Not reported
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes:  Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 2000Mb/s
    Duplex: Full
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Link detected: yes

Unfortunately in either mode cross pinging fails with "Destination Host 
Unreachable".


Own interfaces ping ok.

The same configuration works fine against managed switch ports (LACP/LAG).

So my question is why this is not working and whether it's possible at all?

Regards,
Adam



Re: increasing size of /run

2018-07-13 Thread Adam Weremczuk

Thank you both for useful advice.

For me the AIDE script is a bit too complex (700+ lines) to quickly 
analyse and experiment with.


I've decided to take a lazy path which is simply increasing the size of 
/run and retrying AIDE job.

No problem if it buzzes in the background for a couple of days again.

Cheers,
Adam



increasing size of /run

2018-07-13 Thread Adam Weremczuk

Hi all,

I have a one off big job (full AIDE report on millions of files) which 
I'm trying to run on old Debian 7.1.

The system uses 2 physical disks with plain partitions, no LVM.
It has 8 GB of RAM and 32 GB of swap, which appears to be just enough.
After running for a couple of days the job failed when it filled all the 
space on /run


I came across information that the size of /run is hard-coded to be 10% 
of RAM.

That would make sense, as it's currently showing as 800 MB:

Filesystem Type   Size  Used Avail Use% Mounted on
tmpfs  tmpfs  793M  264K  793M   1% /run

I have no entry for /run in /etc/fstab so decided to look into 
/usr/share/initramfs-tools/init

To my surprise the line was showing 20%, not 10%:

mount -t tmpfs -o "nosuid,size=20%,mode=0755" tmpfs /run

I've changed it to 60% (hoping to triple the size) and rebooted but the 
size of /run hasn't changed.


What's the safest and quickest way to temporarily triple the size of /run?

Thanks,
Adam
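
Since /run is a tmpfs, its size can be changed on the fly with a remount; no reboot is needed, and the boot-time mount in the initramfs script is later re-mounted anyway, which would explain why editing it had no visible effect. A sketch, run as root; the 2400M figure is just an example (roughly 3x the 793M shown above):

```shell
# Enlarge the running tmpfs in place (existing contents are preserved):
mount -o remount,size=2400M tmpfs /run
df -h /run    # confirm the new size

# To make it persist across reboots, an fstab entry works:
# tmpfs  /run  tmpfs  nosuid,mode=0755,size=2400M  0  0
```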



Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile
Oh nice, I'll check tomorrow or on Friday, thanks for this suggestion. It could 
also help a lot with third-party repos using weak timestamps.

On June 20, 2018 7:37:19 PM GMT+02:00, Don Armstrong  wrote:
>On Tue, 19 Jun 2018, Adam Cecile wrote:
>> On 06/19/2018 10:48 PM, Don Armstrong wrote:
>> > On Tue, 19 Jun 2018, Adam Cecile wrote:
>> > > That's a pity, don't you think so ? I think Debian should renew
>the
>> > > archive key, so we can still verify packages signatures.
>> > You can still verify them. Key expiration doesn't make existing
>> > signatures invalid. [Indeed, gpgv doesn't even check for expired
>keys.]
>> > 
>> With apt ? I had to set allowunauthenticated = 1 in apt.conf,
>otherwise apt
>> wouldn't install anything.
>
>Hrm; it looks like apt has its own internal version of gpgv which
>actually tests the time.
>
>In theory, [allow-weak=yes] should work, but I haven't actually tested
>this.
>
>-- 
>Don Armstrong  https://www.donarmstrong.com
>
>You are educated when you have the ability to listen to almost
>anything without losing your temper or self-confidence.
> -- Robert Frost

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile
Again, this is aimed at disabling Release timestamp validation; it's not related 
to GPG :/

On June 20, 2018 7:04:33 PM GMT+02:00, Curt  wrote:
>On 2018-06-20,   wrote:
>>
>> On Wed, Jun 20, 2018 at 02:27:24PM +0200, Adam Cecile wrote:
>>
>> [...]
>>
>>> I still thinks it *sucks* to have no alternative then considering
>>> packages signed by an expired key like unsigned packages
>>
>> That was my impression too: there should be a separate option for
>> "yes, I know the key is expired". Perhaps you can bring it up in
>> debian-devel or on the APT mailing list
>(https://lists.debian.org/deity/),
>> to see if they recommend filing a bug?
>>
>> Cheers
>> - -- tomás
>>
>>
>
>What does this do?
>
> -o Acquire::Check-Valid-Until=false update

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
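
To spell out the distinction discussed above: Acquire::Check-Valid-Until only controls the Release file's Valid-Until timestamp, not key expiry. Newer apt has separate knobs; a hedged sketch (whether each option exists depends on the apt version in use — a squeeze-era apt predates some of them):

```shell
# Skip the Release "Valid-Until" check (timestamp only, not GPG):
apt-get -o Acquire::Check-Valid-Until=false update

# apt >= 1.1 can downgrade repository signature failures to warnings:
apt-get -o Acquire::AllowInsecureRepositories=true update

# Or per repository in sources.list:
# deb [check-valid-until=no trusted=yes] http://archive.debian.org/debian squeeze main
```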

Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile
Exactly, thank you.

Actually I contributed to Debian quite a lot some time ago, and I don't think 
I've been rude, so please show some respect.

On June 20, 2018 5:57:45 PM GMT+02:00, "Roberto C. Sánchez" 
 wrote:
>On Wed, Jun 20, 2018 at 11:16:46AM -0400, Greg Wooledge wrote:
>> On Wed, Jun 20, 2018 at 11:12:18AM -0400, Roberto C. Sánchez wrote:
>> > The output appears to be from a step in a Dockerfile.
>> 
>> Then the Docker users should know how to use their stupid Dockers and
>> shouldn't require hand-holding from non-Docker mailing lists.
>> 
>To be fair, Adam appears to know what he is doing.  The issue he has
>raised is that there does not appear to be a way to get apt to ignore
>the key expiration.  The two options are apparently "all security on"
>or "all security off".  It is clear that there are users that need a
>middle ground option that does not currently exist.
>
>Regards,
>
>-Roberto
>
>-- 
>Roberto C. Sánchez

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile

On 06/20/2018 02:17 PM, Greg Wooledge wrote:

On Wed, Jun 20, 2018 at 08:47:39AM +0200, Adam Cecile wrote:

  ---> Running in 2300490ebb96

You didn't show the command that you typed.  That makes it harder to
give solutions.


W: GPG error: http://archive.debian.org squeeze Release: The following

Is a warning.  You can tell by the giant W.


WARNING: The following packages cannot be authenticated!

Is a warning.  You can tell by the giant WARNING.


E: There are problems and -y was used without --force-yes

Stop typing the -y option when you type the command in the terminal
that you're typing commands into.  Or, as a second-rate fallback
solution, consider typing the --force-yes option as well.

And if you mention the D word (the one that is not Debian) to me, or
if you say that you are not typing commands into a terminal, I will
most likely yell at you.

It's indeed a Dockerfile; way better to run that very old app in a 
Squeeze container than on a whole server.


Anyway, the command is apt-get install -y wget ca-certificates and it 
refuses to run because the packages are marked as unauthenticated, 
thanks to the expired key.


I still think it *sucks* to have no alternative other than treating 
packages signed by an expired key as unsigned packages.




Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile

On 06/20/2018 10:08 AM, john doe wrote:

On 6/20/2018 9:55 AM, Adam Cecile wrote:

On 06/20/2018 09:43 AM, john doe wrote:

On 6/20/2018 8:47 AM, Adam Cecile wrote:

On 06/20/2018 08:39 AM, john doe wrote:

On 6/19/2018 10:55 PM, Adam Cecile wrote:

On 06/19/2018 10:48 PM, Don Armstrong wrote:

On Tue, 19 Jun 2018, Adam Cecile wrote:
That's a pity, don't you think so ? I think Debian should renew 
the

archive key, so we can still verify packages signatures.

You can still verify them. Key expiration doesn't make existing
signatures invalid. [Indeed, gpgv doesn't even check for expired 
keys.]


With apt ? I had to set allowunauthenticated = 1 in apt.conf, 
otherwise apt wouldn't install anything.




Can you give us the warning/error you're getting?


  ---> Running in 2300490ebb96
Get:1 http://archive.debian.org squeeze Release.gpg [1655 B]
Get:2 http://archive.debian.org squeeze-lts Release.gpg [819 B]
Get:3 http://archive.debian.org squeeze Release [96.0 kB]
Ign http://archive.debian.org squeeze Release
Get:4 http://archive.debian.org squeeze-lts Release [34.3 kB]
Get:5 http://archive.debian.org squeeze/main amd64 Packages [8370 kB]
Get:6 http://archive.debian.org squeeze-lts/main amd64 Packages 
[390 kB]

Fetched 8893 kB in 0s (10.0 MB/s)
Reading package lists...
W: GPG error: http://archive.debian.org squeeze Release: The 
following signatures were invalid: KEYEXPIRED 1520281423 KEYEXPIRED 
1501892461

Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
   libssl0.9.8 openssl
The following NEW packages will be installed:
   ca-certificates libssl0.9.8 openssl wget
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 2980 kB of archives.
After this operation, 7578 kB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
   ca-certificates
E: There are problems and -y was used without --force-yes



As others have pointed out, if the expiration date on the key is not 
extended you're out of luck! :)


https://www.debian.org/News/2011/20110209

One workaround could be:
1) Download all required packages
2) Verify the downloaded packages using 'gpg --verify'
3) Install the verified packages

The best workaround would be to upgrade to Debian Stretch (6 to 7, 7 
to 8, 8 to 9)! :)


For sake of completeness:
  apt-key update  - update keys using the keyring package
  apt-key net-update  - update keys using the network


Well, that's a docker image, I'm not using Squeeze on production 
anywhere except this hacky stuff for a friend ;-)




Maybe:

From:

https://unix.stackexchange.com/questions/2544/how-to-work-around-release-file-expired-problem-on-a-local-mirror 




"-o Acquire::Check-Valid-Until=false
For example:
sudo apt-get -o Acquire::Check-Valid-Until=false update"

https://manpages.debian.org/stretch/apt/apt.conf.5.en.html

Sadly, it's already there. It actually just checks the Release file 
timestamp and ignores it if it's too old; it's not related to the GPG signature.




Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile

On 06/20/2018 08:39 AM, john doe wrote:

On 6/19/2018 10:55 PM, Adam Cecile wrote:

On 06/19/2018 10:48 PM, Don Armstrong wrote:

On Tue, 19 Jun 2018, Adam Cecile wrote:

That's a pity, don't you think so ? I think Debian should renew the
archive key, so we can still verify packages signatures.

You can still verify them. Key expiration doesn't make existing
signatures invalid. [Indeed, gpgv doesn't even check for expired keys.]

With apt ? I had to set allowunauthenticated = 1 in apt.conf, 
otherwise apt wouldn't install anything.




Can you give us the warning/error you're getting?


 ---> Running in 2300490ebb96
Get:1 http://archive.debian.org squeeze Release.gpg [1655 B]
Get:2 http://archive.debian.org squeeze-lts Release.gpg [819 B]
Get:3 http://archive.debian.org squeeze Release [96.0 kB]
Ign http://archive.debian.org squeeze Release
Get:4 http://archive.debian.org squeeze-lts Release [34.3 kB]
Get:5 http://archive.debian.org squeeze/main amd64 Packages [8370 kB]
Get:6 http://archive.debian.org squeeze-lts/main amd64 Packages [390 kB]
Fetched 8893 kB in 0s (10.0 MB/s)
Reading package lists...
W: GPG error: http://archive.debian.org squeeze Release: The following 
signatures were invalid: KEYEXPIRED 1520281423 KEYEXPIRED 1501892461

Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  libssl0.9.8 openssl
The following NEW packages will be installed:
  ca-certificates libssl0.9.8 openssl wget
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 2980 kB of archives.
After this operation, 7578 kB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  ca-certificates
E: There are problems and -y was used without --force-yes



Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile

On 06/20/2018 09:43 AM, john doe wrote:

On 6/20/2018 8:47 AM, Adam Cecile wrote:

On 06/20/2018 08:39 AM, john doe wrote:

On 6/19/2018 10:55 PM, Adam Cecile wrote:

On 06/19/2018 10:48 PM, Don Armstrong wrote:

On Tue, 19 Jun 2018, Adam Cecile wrote:

That's a pity, don't you think so ? I think Debian should renew the
archive key, so we can still verify packages signatures.

You can still verify them. Key expiration doesn't make existing
signatures invalid. [Indeed, gpgv doesn't even check for expired 
keys.]


With apt ? I had to set allowunauthenticated = 1 in apt.conf, 
otherwise apt wouldn't install anything.




Can you give us the warning/error you're getting?


  ---> Running in 2300490ebb96
Get:1 http://archive.debian.org squeeze Release.gpg [1655 B]
Get:2 http://archive.debian.org squeeze-lts Release.gpg [819 B]
Get:3 http://archive.debian.org squeeze Release [96.0 kB]
Ign http://archive.debian.org squeeze Release
Get:4 http://archive.debian.org squeeze-lts Release [34.3 kB]
Get:5 http://archive.debian.org squeeze/main amd64 Packages [8370 kB]
Get:6 http://archive.debian.org squeeze-lts/main amd64 Packages [390 kB]
Fetched 8893 kB in 0s (10.0 MB/s)
Reading package lists...
W: GPG error: http://archive.debian.org squeeze Release: The 
following signatures were invalid: KEYEXPIRED 1520281423 KEYEXPIRED 
1501892461

Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
   libssl0.9.8 openssl
The following NEW packages will be installed:
   ca-certificates libssl0.9.8 openssl wget
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 2980 kB of archives.
After this operation, 7578 kB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
   ca-certificates
E: There are problems and -y was used without --force-yes



As others have pointed out, if the expiration date on the key is not 
extended you're out of luck! :)


https://www.debian.org/News/2011/20110209

One workaround could be:
1) Download all required packages
2) Verify the downloaded packages using 'gpg --verify'
3) Install the verified packages

The best workaround would be to upgrade to Debian Stretch (6 to 7, 7 
to 8, 8 to 9)! :)


For sake of completeness:
  apt-key update  - update keys using the keyring package
  apt-key net-update  - update keys using the network


Well, that's a docker image, I'm not using Squeeze on production 
anywhere except this hacky stuff for a friend ;-)
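
The manual-verification workaround suggested earlier in the thread might look roughly like this (a sketch: the keyring path and package name are illustrative, and gpgv does not enforce key expiration, which is the point):

```shell
# 1) Fetch the package without installing it:
apt-get download wget    # or fetch the .deb from archive.debian.org directly

# 2) Verify the repository metadata signature despite the expired key:
gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
     Release.gpg Release

# 3) Check the .deb against the checksum listed in the verified
#    Packages file, then install it:
sha256sum wget_*.deb
dpkg -i wget_*.deb
```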




Re: Expired GPG keys of older release

2018-06-20 Thread Adam Cecile

On 06/19/2018 10:48 PM, Don Armstrong wrote:

On Tue, 19 Jun 2018, Adam Cecile wrote:

That's a pity, don't you think so ? I think Debian should renew the
archive key, so we can still verify packages signatures.

You can still verify them. Key expiration doesn't make existing
signatures invalid. [Indeed, gpgv doesn't even check for expired keys.]

With apt ? I had to set allowunauthenticated = 1 in apt.conf, otherwise 
apt wouldn't install anything.




Re: Expired GPG keys of older release

2018-06-19 Thread Adam Cecile
That's a pity, don't you think so ? I think Debian should renew the archive 
key, so we can still verify packages signatures.

On June 19, 2018 8:33:21 PM GMT+02:00, john doe  wrote:
>On 6/19/2018 9:22 AM, Adam Cecile wrote:
>> Hello,
>> 
>> 
>> GPG key that signed the Squeeze repo is now expired. How should I
>handle 
>> this properly ? Despite the key is expired, it use to be valid and I 
>> don't like much the idea of going for [trusted=yes] for each impacted
>
>> sources.list entry.
>> 
>
>Sadly, if the expiry date of the key is not extended there is little
>you 
>can do beyond insuring that the key in your keyring is up-to-date which
>
>is normaly done automatically on Debian.
>
>Googling this gives some things to try.
>
>-- 
>John Doe

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Expired GPG keys of older release

2018-06-19 Thread Adam Cecile

Hello,


The GPG key that signed the Squeeze repo is now expired. How should I handle 
this properly? Even though the key is expired, it used to be valid, and I 
don't much like the idea of going for [trusted=yes] for each impacted 
sources.list entry.


Thanks in advance,


Adam.



squid proxy fails to resolve LAN host names

2018-03-26 Thread Adam Weremczuk

Hi all,

Steps to reproduce (query any LAN host running a web server):

$ wget http://lanhost 2>&1 | grep response
HTTP request sent, awaiting response... 200 OK

$ export http_proxy=proxy:3128

$ wget http://lanhost 2>&1 | grep response
Proxy request sent, awaiting response... 503 Service Unavailable

The error message says:

---

Unable to determine IP address from host name lanhost.
The DNS server returned:
Name Error: The domain name does not exist.
This means that the cache was not able to resolve the hostname presented in the
URL.

---

It's not a burning problem since running global proxy settings in automatic
mode works fine.
We can also define them per application: browsers, wget, apt etc.

Any other DNS querying (ping, nslookup, host) I tried from LAN clients and 
the proxy container resolved fine.
Nothing helpful in logs.

Any idea why the proxy fails to resolve LAN host names only in this specific 
scenario?

Thanks
Adam
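
One plausible cause (an assumption, not confirmed by the logs above) is that Squid resolves unqualified single-label names differently from the host resolver, ignoring the resolver search list by default. If so, a hedged squid.conf sketch (the domain is a placeholder):

```
# /etc/squid/squid.conf -- let Squid qualify bare hostnames (sketch)
dns_defnames on             # honour the resolver search list for single-label names
append_domain .example.lan  # or append a fixed domain to unqualified names
```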




Re: password hash in shadow file

2018-03-13 Thread Adam Weremczuk

Quite possibly I changed it to the same password.
Not sure now as it was almost a month ago but can't find any better 
explanation.

Of course hashes are meant to be irreversible.
I guess I'm trying to catch my own shadow ;)


On 13/03/18 16:19, to...@tuxteam.de wrote:

Still strange. Are you sure that you stopped "passwd" early enough?
Had you entered the password already? Twice?




Re: password hash in shadow file

2018-03-13 Thread Adam Weremczuk
I think it was me invoking "passwd" as root and aborting (ctrl+D) 
without making any changes.

Would that be enough to update the shadow file?


On 13/03/18 15:47, to...@tuxteam.de wrote:

What I don't understand is how the system changed the hashing
method without getting you involved. You don't remember having
had to enter the root password?

That would be strange.

Cheers




password hash in shadow file

2018-03-13 Thread Adam Weremczuk

Hi all,

I've just spotted that on one of my old wheezy servers root entry in 
/etc/shadow was updated just over 3 weeks ago.


The root password is still the same and the lastchanged count is much 
higher than 3 weeks.


The difference I've noticed is the hashed password string being much longer.

It's now prefixed with $6$ (SHA-512 algorithm) comparing with $1$ (MD5) 
before the change.


My first suspect was a security patch but the system was not updated 
around that time.


Has anybody seen this before and could explain?

Thanks
Adam
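
The $id$ prefix in the second field of a shadow entry identifies the hashing scheme, which is how the change is visible ($1$ MD5 became $6$ SHA-512) even though the password itself stayed the same. A small illustration with a fabricated entry (the hash shown is not a real password hash):

```python
# Map crypt(3) prefix ids to algorithm names.
CRYPT_IDS = {"1": "MD5", "2y": "bcrypt", "5": "SHA-256", "6": "SHA-512"}

def hash_algorithm(shadow_line: str) -> str:
    """Return the hashing scheme used by one /etc/shadow line."""
    pw = shadow_line.split(":")[1]    # second field holds the hash
    if not pw.startswith("$"):
        return "DES (traditional crypt)"
    crypt_id = pw.split("$")[1]       # "$6$salt$hash" -> "6"
    return CRYPT_IDS.get(crypt_id, "unknown")

# Fabricated example entry:
line = "root:$6$saltsalt$fakehashfakehash:17560:0:99999:7:::"
print(hash_algorithm(line))           # -> SHA-512
```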



Re: gnats user

2018-03-06 Thread Adam Weremczuk

Thanks, a useful read.
But all it says regarding gnats is "." :)


On 2018-03-06 16:57, Reco wrote:

[1], chapter 12.1.12.1.

Reco

[1]https://www.debian.org/doc/manuals/securing-debian-howto/ch12.en.html




gnats user

2018-03-06 Thread Adam Weremczuk

Hi all,

Can somebody explain why the gnats user comes with even a minimal 
netinstall Debian?


I briefly used the GNATS bug-reporting system a long time ago, but AFAIR the 
project died in 2005.


It just feels weird for me to see it in /etc/passwd all the time.

Thanks
Adam



Re: Dell Open Manage

2018-02-13 Thread Adam Weremczuk

Please keep me updated Dave.
Especially if you discover something big :)
I'm not desperate, happy to wait for another few weeks or so.

Thanks
Adam


On 13/02/18 13:37, Dave Sherohman wrote:

Coincidentally, earlier today I installed srvadmin-idracadm8 on stretch
from

deb http://linux.dell.com/repo/community/ubuntu jessie openmanage

and racadm was able to read the network config from my drac with no
problems.  I haven't tried changing the network config yet, or any other
operations, so I can't say whether it works completely, but it's at
least partially functional.

-- Dave Sherohman




openssh-server change log

2018-02-13 Thread Adam Weremczuk

Hello,

Just to let you know - the link on the Debian website gives 404:

https://packages.debian.org/wheezy/openssh-server

http://metadata.ftp-master.debian.org/changelogs/main/o/openssh/openssh_6.0p1-4+deb7u7_changelog

I got it from the tarball so it's not critical.

Thanks
Adam



end of security support for wheezy LTS

2018-02-13 Thread Adam Weremczuk

Hi all,

Our PCI compliance scanner (probably falsely) claims it's 2018-05-01.

The Wikipedia page: https://en.wikipedia.org/wiki/Debian_version_history 
just says "May 2018".


Debian website:

https://www.debian.org/releases/wheezy/
https://wiki.debian.org/LTS

clearly states "end of May 2018".

I'm treating the latter as the more reliable source.

My related question:

Is it likely for this date to still be moved forward or back?
Can it happen on a short notice (say less than a month)?

Regards
Adam



Dell Open Manage

2018-02-12 Thread Adam Weremczuk

Hello,

I'm referring to:

http://linux.dell.com/repo/community/debian/

Does anybody know if the jessie version works well on stretch?

Or if an official release for stretch is going to be available soon?

Thanks
Adam



Re: Package pinning on origin AND version

2018-02-06 Thread Adam Cecile

On 02/06/2018 02:16 PM, The Wanderer wrote:

On 2018-02-06 at 07:52, Adam Cecile wrote:


On 02/06/2018 01:46 PM, The Wanderer wrote:


Pin: version 1.3.*, release o=packages.le-vert.net

Hello,

Thanks for the answer, sadly it's not working:

mesos:
Installed: 1.3.1-1+Debian-stretch-9.1
Candidate: 1.4.1-1+Debian-stretch-9.1

Not sure what may be causing that. Just as a stab in the dark, maybe try
it with 'Pin-Priority: 1001', in case something else is setting the
priority of the 1.4.x version to 1000? (Again per 'man apt_preferences',
when two versions have equal priority, the higher version number wins
out.)

Note that a priority greater than 1000 will lead the matching version to
be considered even when it would be a version-number downgrade from the
installed version. That may be what you want (it often is, for me), but
it's something that's good to be aware of.

If you're interested in testing yourself, it's pretty easy both 
repositories are public:


echo "deb http://packages.le-vert.net/mesos/debian stretch main" > 
/etc/apt/sources.list.d/packages.le-vert.net_mesos.list
echo "deb http://repos.mesosphere.io/debian jessie main" > 
/etc/apt/sources.list.d/mesosphere.list


I guess the comma works only if you're using sub-filters of "release", 
as stated in the documentation.
So maybe what I'm trying to do is simply not supported, but I'd like to 
have confirmation before I look for a different solution (any hint 
would be appreciated...)


Regards, Adam.



Re: Package pinning on origin AND version

2018-02-06 Thread Adam Cecile

On 02/06/2018 01:46 PM, The Wanderer wrote:

Pin: version 1.3.*, release o=packages.le-vert.net


Hello,

Thanks for the answer, sadly it's not working:

mesos:
  Installed: 1.3.1-1+Debian-stretch-9.1
  Candidate: 1.4.1-1+Debian-stretch-9.1



Package pinning on origin AND version

2018-02-06 Thread Adam Cecile

Hello,

I'd like to do something like this:

Package: mesos
Pin: version 1.3.*
Pin: release o=packages.le-vert.net
Pin-Priority: 1000

But sadly the last "Pin:" line overrides the previous one.


My problem here is that I'd like the version 
"1.3.1-1+Debian-stretch-9.1" to be the candidate one. Depending on which 
"Pin:" line is the latest one I get either "1.4.1-1+Debian-stretch-9.1" 
or "1.3.2-2.0.1".


mesos:
  Installed: 1.3.1-1+Debian-stretch-9.1
  Candidate: 1.4.1-1+Debian-stretch-9.1
  Version table:
 1.4.1-1+Debian-stretch-9.1 1000
    500 http://packages.le-vert.net/mesos/debian stretch/main amd64 
Packages

 1.4.1-2.0.1 500
    500 http://repos.mesosphere.io/debian jessie/main amd64 Packages
 1.4.0-2.0.1 500
    500 http://repos.mesosphere.io/debian jessie/main amd64 Packages
 1.3.2-2.0.1 500
    500 http://repos.mesosphere.io/debian jessie/main amd64 Packages
 *** 1.3.1-1+Debian-stretch-9.1 1000
    500 http://packages.le-vert.net/mesos/debian stretch/main amd64 
Packages

    100 /var/lib/dpkg/status
 1.3.1-2.0.1 500
    500 http://repos.mesosphere.io/debian jessie/main amd64 Packages
 1.3.0-2+Debian-stretch-9.0 1000
    500 http://packages.le-vert.net/mesos/debian stretch/main amd64 
Packages

 1.3.0-1+Debian-stretch-9.0 1000


Thanks in advance,

Adam
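
As far as I can tell, apt_preferences does not support combining a version pin and an origin pin in a single stanza (the comma syntax only joins release properties). One workable sketch is two stanzas: shut out the other repository entirely, then raise the wanted version range (the priorities are illustrative, and note that a priority above 1000 permits downgrades):

```
# /etc/apt/preferences.d/mesos (sketch)

# Never consider mesos from the mesosphere repo:
Package: mesos
Pin: origin "repos.mesosphere.io"
Pin-Priority: -1

# Prefer the 1.3.x series from whatever remains:
Package: mesos
Pin: version 1.3.*
Pin-Priority: 1001
```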



Re: l'ajuntament de bcn adopta gnu/linux

2018-01-16 Thread Adam Deosdad
I never take part, but in this case I'd like to contribute because I've been at
the council of a town of 12,000 inhabitants.

The councillors don't understand these things, so everything is left in the
hands of the technicians, who have to face the employees, who are the most
resistant to change. After I wrote the manual, and the boss gave the training to
the employees, they came to us with the excuse that they didn't want to read a
10-page manual/tutorial with about 3 screenshots per page. And it was a simple
web application, for requesting days off, going to the doctor, etc.
They even had problems at login, for not reading that it was the corporate
email and not the personal one.
The technicians can advise the politicians well, but the obstacle will be the
employee/civil servant.

On 17 Jan 2018 07:58, "Àlex"  wrote:

> On 16/01/18 at 20:36, Robert Marsellés wrote:
> > Hello,
> >
> > On 15/01/18 at 22:01, Ernest Adrogué wrote:
> >> Let's see how it goes...
> >>
> >> https://linux.slashdot.org/story/18/01/15/0415219/city-of-barcelona-dumps-windows-for-linux-and-open-source-software
> >>
> > Some previous international experiences that made a big splash [1] do
> > not suggest any kind of optimism [2].
> >
> > robert
> >
> > [1]
> > https://usatoday30.usatoday.com/money/industries/technology/2003-07-13-microsoft-linux-munich_x.htm
> >
> > [2]
> > https://www.theregister.co.uk/2017/11/13/munich_committee_says_all_windows_2020/
> >
> Microsoft is a very strong lobby, with many people "placed" in
> governments making decisions. One of the politicians who took the
> decision in Munich turned out to have been Microsoft's vice-president
> for Europe until 2015.
>
> But there are success stories too. Look at the French gendarmerie, for
> example. They've been on Linux for 10 years, more than happy, and on
> top of that they've saved 20 million euros in licences. But even today
> they receive orders from superiors, which they don't follow, to switch
> to Windows immediately. The Microsoft lobby currently has six advisers
> with direct connections to ministers and politicians.
>
>
> An interesting article to read:
>
> http://www.investigate-europe.eu/en/why-europes-dependency-on-microsoft-is-a-huge-security-risk/
>


hostname issue

2018-01-15 Thread Adam Weremczuk

Hi all,

Today when I logged into my Debian 9.2 VM I noticed that the hostname 
has changed to the first block of the IP address I was accessing this VM 
from last:


root@83:~#

"hostname" command was returning the full IP address 83.xxx.xxx.xxx

cat /etc/hostname was still showing the correct name.

After apt-get update, apt-get upgrade (to 9.3) and a reboot, things went 
back to normal.


Is it a known bug?

Has anybody seen anything like this before?

Thanks
Adam



Re: KVM PCI Passthrough NVidia GeForce GTX 1080 Ti error code 43

2017-11-14 Thread Adam Cecile

Hello,

I'm not sure why you are using an older driver but here what's available 
in Debian stretch at the moment: 375.82-1~deb9u1
I don't have 1080Ti myself but this driver can handle the regular 1080 
for sure and I bet it does handle the Ti as well.


If you want to give a try with a Stretch virtual machine, when calling 
virt-builder replace debian-8 by debian-9 and that's it ;-)


Adam.


On 11/14/2017 01:18 AM, Ramon Hofer wrote:

Dear Adam,

On Mon, 13 Nov 2017 22:49:02 +0100
Adam Cécile <adam.cec...@hitec.lu> wrote:


Here is my notes/scripts when I did that to attach an nvidia card
inside a KVM virtual machine:
https://github.com/eLvErDe/nvidia-docker-cuda-kvm-with-passthru/blob/master/create-kvm-for-nvidia-docker.sh

I remembered that I can just upgrade to Stretch the usual way by changing
the sources.list files.
Unfortunately it is exactly the same as what I described in my previous
email to Alexander. I could install and configure the driver but
startxfce4 does not start.
startxfce4 does not start.

startxfce4: https://pastebin.com/0MeCU492

xorg.conf: https://pastebin.com/a4qGhsxz

Maybe I do not startxfce4 correctly?
Do I need to change xorg.conf?
Or is there any way to check if the driver works in the guest?


Thanks again for your help.


Best regards,
Ramon





Re: KVM PCI Passthrough NVidia GeForce GTX 1080 Ti error code 43

2017-11-14 Thread Adam Cécile

Hi,

Here is my notes/scripts when I did that to attach an nvidia card inside 
a KVM virtual machine:

https://github.com/eLvErDe/nvidia-docker-cuda-kvm-with-passthru/blob/master/create-kvm-for-nvidia-docker.sh

Adam.

On 11/13/2017 10:37 PM, Ramon Hofer wrote:

Dear Alexander,

Thank you very much for your reply.


The system I am using:
lshw: https://pastebin.com/tB7FqqxN

Host OS:Debian 9 Stretch
Mainboard: Supermicro C7Z170-M (activated VT-d in Bios)
CPU: Intel Core i7-7700K CPU @ 4.20GHz
GPU: EVGA GeForce GTX1080 Ti

The GPU is not listed because I have blacklisted it:
 $ cat /etc/modprobe.d/blacklist.conf
 blacklist nouveau

lspci: https://pastebin.com/6qYuJRPg

I found this guide:
https://scottlinux.com/2016/08/28/gpu-passthrough-with-kvm-and-debian-linux/

After installing Win7 guest, enabling PCI passthrough using
virt-manager, installing the NVidia driver in the guest, Windows
reports the error 43 for the GPU.

Windows has stopped this device because it has reported problems.
(Code 43)

This is described in the above mentioned post and a workaround is
linked:
https://www.reddit.com/r/VFIO/comments/479xnx/guests_with_nvidia_gpus_can_enable_hyperv/

Unfortunately I do not know how to apply the workaround. I
understand
that I should create a file '/usr/libexec/qemu-kvm-hv-vendor' with
the
following content:

 #!/bin/sh
 exec /usr/bin/qemu-kvm \
 `echo "\$@" | sed 's|hv_time|hv_time,hv_vendor_id=whatever|g'`

Or according to the original redhat mailing list post by Alex
Williamson:
https://www.redhat.com/archives/vfio-users/2016-March/msg00092.html

 $ cat /usr/libexec/qemu-kvm-hv-vendor
 #!/bin/sh
 exec /usr/bin/qemu-kvm \
`echo "\$@" | sed
's|hv_time|hv_time,hv_vendor_id=KeenlyKVM|g'`

But since there is no qemu-kvm present and the directory
'/usr/libexec' does not exist on my system, I wonder how I should
proceed.
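
On reasonably recent libvirt/QEMU the qemu-kvm wrapper script is unnecessary: the Hyper-V vendor id can be set directly in the domain XML (available since roughly libvirt 1.3.3 / QEMU 2.5 — worth checking your versions), together with hiding the KVM signature. A sketch, edited in via virsh edit:

```xml
<!-- virsh edit <guest>: fake the Hyper-V vendor id and hide the KVM
     signature so the NVIDIA driver does not trip error 43 (sketch) -->
<features>
  <hyperv>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```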

This is interesting topic and I hope to find some time to spare to
implement and test this setup on my system. Can't suggest you anything
yet, because this "Code 43" error is generic and can happen even on
normal systems. The reasons could be limitless from driver version
conflict to bios\uefi firmware bug of your motherboard. I wonder, what
VEN_ID and DEV_ID are reported for your VGA in Windows guest? Have you
tried Windows 8.1 or 10 as guests? They could have more support for
virtualization in general.

Interesting. I thought I was just not able to setup KVM / QEMU
properly. Because I read and heard that NVidia deliberately switches
the card off when the driver detects that it is virtualised.

I am using the newest BIOS version (updated on Sunday) on the
motherboard:
Supermicro C7Z170-M
BIOS Version: 2.0a
BIOS Tag: 1088B
Date: 07/17/2017
Time: 15:51:37

Unfortunately I do not know anything about a bios\uefi firmware bug. Is
this a known issue of my mainboard version?

In the BIOS for the "Boot mode select" setting, I have chosen
"Legacy" (there would also be "UEFI" or "DUAL"). Do you think it might
be worth trying to change it to the other two?

I have uploaded dmesg output if it helps:
dmesg: https://pastebin.com/79Us7WMf

In the Windows 7 guest, the reported IDs are:
VEN_ID: 10DE
DEV_ID: 1B06

The driver version in the Windows 7 guest is:
23.21.13.8813 (Date: 27.10.2017)

I have thought about buying a Windows 10 copy, but it is not possible
to get the direct download version in Switzerland, so I postponed the
purchase due to lack of patience.

But if it helps, here is the information from a Debian 9 guest with the
nvidia-driver package installed:
ID: 10de:1b06
VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b06]
(rev a1)

This is also what nvidia-detect says:
$ sudo nvidia-detect
Detected NVIDIA GPUs:
00:09.0 VGA compatible controller [0300]: NVIDIA Corporation Device
[10de:1b06] (rev a1)

Checking card:  NVIDIA Corporation Device 1b06 (rev a1)
Your card is supported by the default drivers.
It is recommended to install the
 nvidia-driver
package.

So I installed nvidia-driver and rebooted the Debian 9 guest.
The display setting in XFCE4 still does not show the NVidia card and
the nvidia-setting program reports that I should run nvidia-xconfig as
root, which I did.
This is the resulting config file:
xorg.conf: https://pastebin.com/sCe30emi

Unfortunately lightdm fails to start. Here is the suggested log:
systemctl status lightdm.service: https://pastebin.com/VYgKuCy1

Since there is not much information in that log I have created a
pastebin for dmesg of the failed Debian 9 guest boot attempt:
dmesg for lightdm fail: https://pastebin.com/Djx2YycH


I am not sure if I can still get a copy of Windows 8 somewhere, but if
you think it helps, I can go and buy Windows 10.
Please let me know how I can help you help me :-)

The problem got me thinking yesterday and today I asked around if
anybody wants the card and if I should buy an AMD GPU. But since 

Re: Nagios package - hiding or not existing

2017-10-19 Thread Adam Cécile
Just in case it's not a priority for you, I "up-ported" Jessie's packages 
to Stretch and made them available as a repo here:

http://packages.le-vert.net/nagios3/

Regards, Adam.

On 10/19/2017 11:05 PM, Fekete Tamás wrote:

I'm sad to hear that, but thank you for the info!

2017-10-18 21:31 GMT+02:00 Brian <a...@cityscape.co.uk>:


On Wed 18 Oct 2017 at 21:03:19 +0200, Fekete Tamás wrote:

> does anyone know if there is a nagios3 package available for
Debian 9.x?
>
> According to:
> https://packages.debian.org/search?keywords=nagios3 there
> is no package for this generation of Debian. Is it possible?

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=845765

--
Brian.






Re: Problem reserving enough space for Java object heap since stretch upgrade

2017-10-09 Thread Adam Kessel

On 7/1/2017 10:37 AM, David Wright wrote:


I have been unable to execute Java with >=2048M memory allocation
since upgrading to stretch. I've changed nothing in my configuration
otherwise.

I have plenty of RAM:

# free
              total        used        free      shared  buff/cache   available
Mem:        5168396     3326140      245712       85320     1596544     1227812
Swap:       2255616      259204     1996412

# ulimit -v
unlimited

What does # ulimit -H -v say?


# java -Xmx2048M
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap

For the record, this problem is fixed for me in 4.9.0-4-686-pae ( = 
4.9.51-1 ).


Adam


