Anomaly from `freebsd-update IDS` in 12.4-RELEASE-p9 - dual entries for /etc/ssh/sshd_config

2023-12-10 Thread Walter Cramer
When running `freebsd-update IDS` on a few 12.4-RELEASE-p9 systems which have local changes in /etc/ssh/sshd_config, I get TWO separate lines of output about /etc/ssh/sshd_config: ... /etc/ssh/sshd_config has SHA256 hash XXX, but should have SHA256 hash
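A minimal sketch for reproducing and isolating the duplicate report, assuming the stock freebsd-update from base; the grep filter is only illustrative:

    # Compare installed files against the release's published hashes,
    # then keep only the lines that mention sshd_config:
    freebsd-update IDS | grep /etc/ssh/sshd_config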

Re: Zpool doesn't boot anymore after FreeBSD 12.1

2020-10-21 Thread Walter Cramer
n of da1 and da2 since both are already inserted on pool? I think it's not allowed... Thanks. On Wed, Oct 21, 2020 at 5:18 PM Walter Cramer wrote: My guess - there is a work-around or two, but you'll face a lot more grief, long-term, if you don't do things the right way (aka do a bunch of re

Re: Zpool doesn't boot anymore after FreeBSD 12.1

2020-10-21 Thread Walter Cramer
My guess - there is a work-around or two, but you'll face a lot more grief, long-term, if you don't do things the right way (aka do a bunch of re-install work) now. I'd start with 'gpart backup da0' (guessing that gpt/disk0 is on da0), to see how the original disk is partitioned. Then
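A sketch of that inspection step, assuming GPT partitioning and that gpt/disk0 really is on da0 as guessed in the thread; the restore half is an extension of the advice, not quoted from it:

    # Dump da0's partition table in a form gpart can re-create later:
    gpart backup da0 > /root/da0.gpt

    # Later, the same layout could be applied to a replacement disk
    # (-F destroys any existing scheme on the target -- use with care):
    gpart restore -F da1 < /root/da0.gpt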

Re: jexec as user?

2019-11-19 Thread Walter Cramer
On Tue, 19 Nov 2019, mike tancsa wrote: On 11/19/2019 8:09 AM, Christos Chatzaras wrote: On 19 Nov 2019, at 15:02, mike tancsa wrote: On 11/19/2019 6:42 AM, Ronald Klop wrote: Hi, Is it possible to jexec into a jail as a regular user. Or to enable that somewhere? Or is the way to do such a
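jexec itself requires root, so one common workaround (an assumption here, not quoted from the thread) is to delegate a fixed jexec invocation through sudo; the jail name `webjail` and group `jailops` are hypothetical:

    # /usr/local/etc/sudoers.d/jail-shell (requires the security/sudo port)
    # Let members of 'jailops' open a shell in exactly one named jail:
    %jailops ALL=(root) NOPASSWD: /usr/sbin/jexec webjail /bin/sh

A member of that group would then run `sudo jexec webjail /bin/sh`; pinning the full command line keeps the delegation narrow.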

Minor Security Issue - DNS, /etc/hosts, freebsd-update, pkg

2019-07-04 Thread Walter Cramer
Suspected severity: Low. Systems with inattentive administrators may not receive the latest updates, and no obvious error messages will point out the problem. Situation discovered in: A few older 11.2-RELEASE FreeBSD systems, with /etc/hosts entries like this: 96.47.72.72
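The risk is that a hard-coded address in /etc/hosts silently goes stale while updates keep "succeeding" against nothing. A sketch of a quick audit, with the pinned entry modeled on the one quoted above:

    # A pinning entry of the kind described:
    #   96.47.72.72  update.FreeBSD.org
    # Compare what /etc/hosts pins against what DNS currently returns:
    grep -i freebsd.org /etc/hosts
    drill update.FreeBSD.org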

Re: ZFS...

2019-05-08 Thread Walter Cramer
On Wed, 8 May 2019, Paul Mather wrote: On May 8, 2019, at 9:59 AM, Michelle Sullivan wrote: Paul Mather wrote: due to lack of space. Interestingly have had another drive die in the array - and it doesn't just have one or two sectors down it has a *lot* - which was not noticed by the
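A sketch of the kind of routine check that would have surfaced the failing drive earlier; smartctl comes from the sysutils/smartmontools port, and /dev/da1 is just a placeholder:

    # ZFS's own view of device health and accumulated errors:
    zpool status -v

    # The drive firmware's view: reallocated and pending sectors:
    smartctl -A /dev/da1 | grep -Ei 'realloc|pending'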

Re: ZFS...

2019-05-06 Thread Walter Cramer
On Mon, 6 May 2019, Patrick M. Hausen wrote: Hi! On 30.04.2019 at 18:07, Walter Cramer wrote: With even a 1Gbit ethernet connection to your main system, savvy use of (say) rsync (net/rsync in Ports), and the sort of "know your data / divide & conquer" tactics that Karl
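A hedged sketch of the rsync tactic mentioned; the paths and hostname are hypothetical:

    # Push one subtree at a time over the LAN; -a preserves metadata,
    # --delete keeps the copy in sync with deletions at the source:
    rsync -a --delete /tank/photos/ backuphost:/backup/photos/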

Re: ZFS...

2019-04-30 Thread Walter Cramer
Brief "Old Man" summary/perspective here... Computers and hard drives are complex, sensitive physical things. They, or the data on them, can be lost to fire, flood, lightning strikes, theft, transportation screw-ups, and more. Mass data corruption by faulty hardware or software is mostly

RE: Crontab Question

2019-04-10 Thread Walter Cramer
On Wed, 10 Apr 2019, Software Info wrote: OK. So although the script is located in my home directory, it doesn't start there? Sorry but I don't quite understand. Could you explain a little further please? Both 'cp' and 'ls' are located in /bin. But if I run the 'ls' command in /root,
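The point being: commands are found via PATH, but relative file arguments resolve against cron's working directory, not the script's location. A sketch of a crontab entry that makes the directory explicit (the script name and path are hypothetical):

    # /etc/crontab -- cd first, or use absolute paths throughout:
    30 4 * * *  root  cd /home/user && /home/user/report.sh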

Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Walter Cramer
I suggest caution in raising vm.v_free_min, at least on 11.2-RELEASE systems with less RAM. I tried "65536" (256MB) on a 4GB mini-server, with vfs.zfs.arc_max of 2.5GB. Bad things happened when the cron daemon merely tried to run `periodic daily`. A few more details - ARC was mostly full,
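A sketch for inspecting the current setting before touching it; vm.v_free_min counts pages, so multiplying by the page size gives the figure in bytes (65536 pages x 4 KiB is the 256MB above):

    # Current free-page floor, in pages and in bytes:
    sysctl -n vm.v_free_min
    echo "$(( $(sysctl -n vm.v_free_min) * $(sysctl -n hw.pagesize) )) bytes"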

Re: Jailed periodic daily scripts smashing CPU

2017-02-16 Thread Walter Cramer
Adding something like: 'sleep $(( $(sysctl -n security.jail.param.jid) * 15 )) && ' in front of more resource-intensive commands in /etc/crontab can reliably spread out the load from a larger number of jails. (But if you start and stop jails frequently enough to spread out the current list
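A sketch of that line in context, inside one jail's /etc/crontab; the 15-second spacing is the multiplier from the post:

    # Each jail sleeps (jid * 15) seconds before the heavy job, so jails
    # with different jids start staggered instead of all at once:
    1 3 * * *  root  sleep $(( $(sysctl -n security.jail.param.jid) * 15 )) && periodic daily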