RE: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-16 Thread Raffaele BELARDI
> -----Original Message-----
> From: J. Roeleveld 
> Sent: Monday, June 15, 2020 16:20
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video with
> NVIDIA driver
> 
> On Monday, June 15, 2020 9:56:39 AM CEST Raffaele BELARDI wrote:
> > > From: Dale 
> > > Sent: Wednesday, June 10, 2020 08:02
> > > To: gentoo-user@lists.gentoo.org
> > > Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video
> > > with NVIDIA driver
> > >
> > > Raffaele BELARDI wrote:
> > > > nomodeset did not change anything, but adding EFI_FB to the kernel
> > > > finally got me a functional console.  But if I startx from there I am
> > > > back at the same point: no X, no console switching with CTRL-ALT-Fn,
> > > > no crash in syslog; I have to SSH to get a working shell.  I'm not
> > > > getting anywhere, I think I'd better install from stage3.
> > >
> > > Odds are, if you start from stage3, you will get the same problem
> > > again unless you do something different.  When I first started using
> > > Gentoo, I didn't realize that one can restart an install pretty much
> > > anywhere in the install.  Starting over doesn't get you anything
> > > different if you repeat the same steps.
> > Just to update: I tried all the hints received here with no luck.
> > Since others on this list managed to get uefifb working with the
> > NVIDIA driver I believe the problem could be my mobo/UEFI FW/GPU
> > combination. I found some rather old posts ([1], [2]) supporting this
> > hypothesis. For the moment I switched to nouveau.
> 
> > Thanks again to all,
> >
> > raffaele
> >
> > [1] https://forums.developer.nvidia.com/t/uefi-nvidia-vga-console-complaints/37690
> > [2] https://forums.developer.nvidia.com/t/nvidia-devs-any-eta-on-fbdev-console-mode-setting-implementation/47043
> 
> Personally, I would not expect this to be related to mainboard firmware/bios
> issues as I have not had any issues with efifb and nvidia-drivers on several
> systems.

I still have some hopes, I intend to give NVIDIA another try later.
> 
> What is your kernel-commandline?
> 
> Mine is really simple:
> $ cat /proc/cmdline
> root=/dev/nvme0n1p3

root=/dev/sdb5 ro quiet raid=noautodetect

> 
> I get the following in my dmesg for "efifb":
> 
> [8.717047] efifb: probing for efifb
> [8.717061] efifb: framebuffer at 0xd100, using 3072k, total 3072k
> [8.717062] efifb: mode is 1024x768x32, linelength=4096, pages=1
> [8.717064] efifb: scrolling: redraw
> [8.717065] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
> [8.719748] fb0: EFI VGA frame buffer device
> 

Same here:
[0.705019] efifb: probing for efifb
[0.705029] efifb: framebuffer at 0xc000, using 3072k, total 3072k
[0.705030] efifb: mode is 1024x768x32, linelength=4096, pages=1
[0.705030] efifb: scrolling: redraw
[0.705031] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
[0.705122] Console: switching to colour frame buffer device 128x48
[0.706608] fb0: EFI VGA frame buffer device
 
> Which is nowhere near the real resolution my screen can handle, but for
> emergencies, this is definitely sufficient.
> 
> For completeness, these are the entries for nvidia:
> 
> $ dmesg | grep -i nvidia
> [   11.222893] nvidia: loading out-of-tree module taints kernel.
> [   11.222908] nvidia: module license 'NVIDIA' taints kernel.
> [   11.241368] nvidia-nvlink: Nvlink Core is being initialized, major device
> number 240
> [   11.241687] nvidia :01:00.0: vgaarb: changed VGA decodes:
> olddecodes=io+mem,decodes=none:owns=io+mem
> [   11.283229] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  440.82  Wed Apr  1 20:04:33 UTC 2020
> [   11.287732] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for
> UNIX platforms  440.82  Wed Apr  1 19:41:29 UTC 2020
> [   11.289189] [drm] [nvidia-drm] [GPU ID 0x0100] Loading driver
> [   11.289191] [drm] Initialized nvidia-drm 0.0.0 20160202 for :01:00.0 on
> minor 0
> [   11.861737] input: HDA NVidia HDMI/DP,pcm=3 as /devices/
> pci:00/:00:03.0/:01:00.1/sound/card1/input28
> [   11.862152] input: HDA NVidia HDMI/DP,pcm=7 as /devices/
> pci:00/:00:03.0/:01:00.1/sound/card1/input29
> [   11.979061] input: HDA NVidia HDMI/DP,pcm=8 as /devices/
> pci:00/:00:03.0/:01:00.1/sound/card1/input30
> [   11.979134] input: HDA NVidia HDMI/DP,pcm=9 as /devices/
> pci:00/:00:03.0/:01:00.1/sound/card1/input31
> 

I don't have these at the moment because I switched to nouveau to
stabilize the system; later I'll try NVIDIA again.

> On a side-note, anyone know how to prevent these sound-devices from
> appearing?
> I never use these on this system.
> 
> --
> Joost
> 
> 
> 




Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
David Haller wrote:
> Hello,
>
> On Mon, 15 Jun 2020, Dale wrote:
> [..]
>> While I'm at it, when running dd, I have zero and random in /dev.  Where
>> does a person obtain a one?  In other words, I can write all zeros, I
>> can write all random but I can't write all ones since it isn't in /dev. 
>> Does that even exist?  Can I create it myself somehow?  Can I download
>> it or install it somehow?  I been curious about that for a good long
>> while now.  I just never remember to ask. 
> I've wondered that too. So I just hacked one up just now.
>
>  ones.c 
> #include <stdio.h>
> #include <unistd.h>
> #include <stdlib.h>
> static unsigned int buf[BUFSIZ];
> int main(void) {
> unsigned int i;
> for(i = 0; i < BUFSIZ; i++) { buf[i] = (unsigned int)-1; }
> while( write(STDOUT_FILENO, buf, sizeof(buf)) );
> exit(0);
> }
> 
>
> Compile with:
> gcc $CFLAGS -o ones ones.c
> or
> gcc $(portageq envvar CFLAGS) -o ones ones.c
>
> and use/test e.g. like
>
> ./ones | dd of=/dev/null bs=8M count=1000 iflag=fullblock
>
> Here, it's about as fast as
>
> cat /dev/zero | dd of=/dev/null bs=8M count=1000 iflag=fullblock
>
> (but only about ~25% as fast as 
> dd if=/dev/zero of=/dev/null bs=8M count=1000 iflag=fullblock
> for whatever reason ever, but the implementation of /dev/zero is
> non-trivial ...)
>
> HTH,
> -dnh
>


Thanks David for the reply, and to the others as well.  I got some good
ideas from some experts, plus it gave me things to google.  More further
down.

For the /dev/one, I found some commands which seem to work.  They're
listed further down.  I think my google search terms were poor.  Google
doesn't have ESP for sure.  O_o

I mentioned once long ago that I keep a list of frequently used
commands.  I do that because, well, my memory at times isn't that
great.  Here are some commands I came up with based on posts here and
what google turned up when searching for things related to those posts.
I wanted to share just in case it may help someone else.  ;-)  dd
commands first.


root@fireball / # cat /root/freq-commands | grep dd
dd commands
dd if=/dev/zero of=/dev/sd bs=4k conv=notrunc
dd if=/dev/zero of=/dev/sd bs=4k conv=notrunc oflag=direct  #disables cache
dd if=/dev/zero of=/dev/sd bs=1M conv=notrunc
dd if=<(yes $'\01' | tr -d "\n") of=
dd if=<(yes $'\377' | tr -d "\n") of=
dd if=<(yes $'\xFF' | tr -d "\n") of=
root@fireball / #


The target device or file needs to be added to the end of course on the
last three.  I tend to leave out some of the target to make sure I don't
copy and paste something that ends badly.  dd can end badly if targeting
the wrong device. 
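One gotcha worth noting with the saved generators: `$'\01'` emits byte
0x01 (only the lowest bit set), while `$'\377'` and `$'\xFF'` both emit
0xFF, the true all-ones pattern.  A quick way to check what a generator
actually produces (a sketch, assuming a bash-style shell):

```shell
# Peek at the first bytes each generator emits (od dumps them as hex).
# $'\01' is byte 0x01 -- a single set bit, not all ones.
yes $'\01'  | tr -d '\n' | head -c 4 | od -An -tx1
# $'\xFF' (and octal $'\377') is 0xFF -- every bit set.
yes $'\xFF' | tr -d '\n' | head -c 4 | od -An -tx1
```

So for an all-ones wipe, the `\377`/`\xFF` variants are the ones to use.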


root@fireball / # cat /root/freq-commands | grep smartctl
smartctl -t long /dev/sd
smartctl -t full  ##needs research
smartctl -c -t short -d sat /dev/sd  ##needs research
smartctl -t conveyance -d sat /dev/sd  ##needs research
smartctl -l selftest -d sat /dev/sd  ##needs research
smartctl -t  /dev/sd  ##needs research
smartctl -c /dev/sd  ##displays test times in minutes
smartctl -l selftest /dev/sd
root@fireball / #


The ones where I have 'needs research' on the end, I'm still checking
the syntax of the command.  I haven't quite found exact examples of them
yet.  This also led to me wanting to print the man page for smartctl. 
That is a task in itself.  Still, google found me some options which are
here:


root@fireball / # cat /root/freq-commands | grep man
print man pages to text file
man  | col -b > /home/dale/Desktop/smartctl.txt
print man pages to .pdf but has small text.
man -t  > /home/dale/Desktop/smartctl.pdf
root@fireball / #


It's amazing sometimes how wanting to do one thing leads to learning
how to do many other things, well, trying to learn how anyway.  LOL 

I started the smartctl long test a while ago.  It's still running, but it
hasn't let the smoke out yet.  That's a good sign, I guess.  I only have
one SATA port left now.  I've got to order another PCI SATA card, I
guess.  :/  I really need to think on the NAS project. 

Thanks to all. 

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread William Kenworthy
In case no one has mentioned it, check out "stress" and "stress-ng" -
they have HDD tests available. (I am going to have to look into that
--ignite-cpu option ... :)

BillK

On 16/6/20 3:17 pm, Dale wrote:
> David Haller wrote:
>> Hello,
>>
>> On Mon, 15 Jun 2020, Dale wrote:
>> [..]
>>> While I'm at it, when running dd, I have zero and random in /dev.  Where
>>> does a person obtain a one?  In other words, I can write all zeros, I
>>> can write all random but I can't write all ones since it isn't in /dev. 
>>> Does that even exist?  Can I create it myself somehow?  Can I download
>>> it or install it somehow?  I been curious about that for a good long
>>> while now.  I just never remember to ask. 
>> I've wondered that too. So I just hacked one up just now.
>>
>>  ones.c 
>> #include <stdio.h>
>> #include <unistd.h>
>> #include <stdlib.h>
>> static unsigned int buf[BUFSIZ];
>> int main(void) {
>> unsigned int i;
>> for(i = 0; i < BUFSIZ; i++) { buf[i] = (unsigned int)-1; }
>> while( write(STDOUT_FILENO, buf, sizeof(buf)) );
>> exit(0);
>> }
>> 
>>
>> Compile with:
>> gcc $CFLAGS -o ones ones.c
>> or
>> gcc $(portageq envvar CFLAGS) -o ones ones.c
>>
>> and use/test e.g. like
>>
>> ./ones | dd of=/dev/null bs=8M count=1000 iflag=fullblock
>>
>> Here, it's about as fast as
>>
>> cat /dev/zero | dd of=/dev/null bs=8M count=1000 iflag=fullblock
>>
>> (but only about ~25% as fast as 
>> dd if=/dev/zero of=/dev/null bs=8M count=1000 iflag=fullblock
>> for whatever reason ever, but the implementation of /dev/zero is
>> non-trivial ...)
>>
>> HTH,
>> -dnh
>>
>
>
> Thanks David for the reply and others as well.  I got some good ideas
> from some experts plus gave me things to google.  More further down.
>
> For the /dev/one, I found some which seems to work.  They listed
> further down.  I think my google search terms was poor.  Google
> doesn't have ESP for sure.  O_o
>
> I mentioned once long ago that I keep a list of frequently used
> commands.  I do that because, well, my memory at times isn't that
> great.  Here is some commands I ran up on based on posts here and what
> google turned up when searching for things related on those posts.  I
> wanted to share just in case it may help someone else.  ;-)  dd
> commands first. 
>
>
> root@fireball / # cat /root/freq-commands | grep dd
> dd commands
> dd if=/dev/zero of=/dev/sd bs=4k conv=notrunc
> dd if=/dev/zero of=/dev/sd bs=4k conv=notrunc oflag=direct  #disables
> cache
> dd if=/dev/zero of=/dev/sd bs=1M conv=notrunc
> dd if=<(yes $'\01' | tr -d "\n") of=
> dd if=<(yes $'\377' | tr -d "\n") of=
> dd if=<(yes $'\xFF' | tr -d "\n") of=
> root@fireball / #
>
>
> The target device or file needs to be added to the end of course on
> the last three.  I tend to leave out some of the target to make sure I
> don't copy and paste something that ends badly.  dd can end badly if
> targeting the wrong device. 
>
>
> root@fireball / # cat /root/freq-commands | grep smartctl
> smartctl -t long /dev/sd
> smartctl -t full  ##needs research
> smartctl -c -t short -d sat /dev/sd  ##needs research
> smartctl -t conveyance -d sat /dev/sd  ##needs research
> smartctl -l selftest -d sat /dev/sd  ##needs research
> smartctl -t  /dev/sd  ##needs research
> smartctl -c /dev/sd  ##displays test times in minutes
> smartctl -l selftest /dev/sd
> root@fireball / #
>
>
> The ones where I have 'needs research' on the end, I'm still checking
> the syntax of the command.  I haven't quite found exact examples of
> them yet.  This also led to me wanting to print the man page for
> smartctl.  That is a task in itself.  Still, google found me some
> options which are here:
>
>
> root@fireball / # cat /root/freq-commands | grep man
> print man pages to text file
> man  | col -b > /home/dale/Desktop/smartctl.txt
> print man pages to .pdf but has small text.
> man -t  > /home/dale/Desktop/smartctl.pdf
> root@fireball / #
>
>
> It's amazing sometimes how wanting to do one thing, leads to learning
> how to do many other things, well, trying to learn how anyway.  LOL 
>
> I started the smartctl longtest a while ago.  It's still running but
> it hasn't let the smoke out yet.  It's a good sign I guess. I only
> have one SATA port left now.  I got to order another PCI SATA card I
> guess.  :/  I really need to think on the NAS project. 
>
> Thanks to all. 
>
> Dale
>
> :-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Mark Knecht wrote:
>
>
> On Mon, Jun 15, 2020 at 12:37 PM Dale wrote:
> >
> > Howdy,
> >
> > I finally bought a 8TB drive.  It is used but they claim only a
> short duration.  Still, I want to test it to be sure it is in grade A
> shape before putting a lot of data on it and depending on it.  I am
> familiar with some tools already.  I know about SMART but it is not
> always 100%.  It seems to catch most problems but not all.  I'm
> familiar with dd and writing all zeores or random to it to see if it
> can in fact write to all the parts of the drive but it is slow. It can
> take a long time to write and fill up a 8TB drive. Days maybe??  I
> googled and found a new tool but not sure how accurate it is since
> I've never used it before.  The command is badblocks.  It is installed
> on my system so I'm just curious as to what it will catch that others
> won't.  Is it fast or slow like dd?
> >
> > I plan to run the SMART test anyway.  It'll take several hours but
> I'd like to run some other test to catch errors that SMART may miss. 
> If there is such a tool that does that.  If you bought a used drive,
> what would you run other than the long version of SMART and its test? 
> Would you spend the time to dd the whole drive?  Would badblocks be a
> better tool?  Is there another better tool for this?
> >
> > While I'm at it, when running dd, I have zero and random in /dev. 
> Where does a person obtain a one?  In other words, I can write all
> zeros, I can write all random but I can't write all ones since it
> isn't in /dev.  Does that even exist?  Can I create it myself
> somehow?  Can I download it or install it somehow?  I been curious
> about that for a good long while now.  I just never remember to ask.
> >
> > When I add this 8TB drive to /home, I'll have 14TBs of space.  If I
> leave the 3TB drive in instead of swapping it out, I could have about
> 17TBs of space.  O_O
> >
> > Thanks to all.
> >
> > Dale
> >
> > :-)  :-)
>
> The SMART test, long version, will do a very reasonable job catching
> problems. Run it 2 or 3 times if it makes you feel better.
>
> Chris's suggestion about Spinrite is another option but it is slow,
> slow, slow. Might take you weeks? On a drive that large if it worked
> at all.
>
> As an aside, but important, I fear that you're possibly falling into
> the trap most of us do at home. Please don't. Once you have 17TB of
> space on your system how are you planning on doing your weekly
> backups? Do you have 17TB+ on an external drive or system? Will you
> back up to Blu-ray discs or something like that?
>
> Mark


Way back, we used Spinrite to test drives.  Think mid '90s.  Yea, it was
slow then, on what today is a tiny hard drive.  Can't imagine modern
drive sizes.  It is good tho.  It reads/writes every single part of a
drive.  It will generally find a fault if there is one. 

Right now, I'm backing up to an 8TB external drive; sadly it is an SMR
drive, but it works.  As I go along, I'll be breaking down my backups. 
Example.  I may have my Documents directory, which includes my camera
pics, backed up to one drive.  I may have videos backed up to another
drive.  Other directories may have to be on other drives.  The biggest
things I don't want to lose:  Camera pics that could not be replaced
except with a backup.  Videos, some of which are no longer available. 
That requires a large drive.  It currently is approaching 6TBs and I
have several videos in other locations that are not included in that. 
Documents which would be hard to recreate.  Since I have all my emails
locally, I don't want to lose those either.  Just a bit ago, I was
searching for posts regarding smartctl.  I got quite a few hits.

Even if I build a NAS setup, I still need a backup arrangement.  Even if
I have a RAID setup, still need backups.  It gets complicated for sure. 
Sort of expensive too.  Just imagine if my DSL was 10 times faster. 
O_O  I'd need to order drives by the case.

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
William Kenworthy wrote:
>
> In case no one has mentioned it, check out "stress" and "stress-ng" -
> they have HDD tests available. (I am going to have to look into that
> --ignite-cpu option ... :)
>
> BillK
>
>

I did see that mentioned somewhere but forgot about it.  Another
option.  May have to edit the frequent commands file again. 

Thanks.

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Wols Lists
On 16/06/20 08:34, Dale wrote:
> Right now, I'm backing up to a 8TB external drive, sadly it is a SMR
> drive but it works.  As I go along, I'll be breaking down my backups. 
> Example.  I may have my Documents directory, which includes my camera
> pics, backed up to one drive.  I may have videos backed up to another
> drive.  Other directories may have to be on other drives.  The biggest
> things I don't want to lose:  Camera pics that could not be replaced
> except with a backup.  Videos, some of which are no longer available. 
> That requires a large drive.  It currently is approaching 6TBs and I
> have several videos in other locations that are not included in that. 
> Documents which would be hard to recreate.  Since I have all my emails
> locally, I don't want to lose those either.  Just a bit ago, I was
> searching for posts regarding smartctl.  I got quite a few hits.

Streaming to an SMR should be fine. Doing a cp to a new directory, or
writing a .tar file, or stuff like that.

What is NOT fine is anything that is likely to result in a lot of
head-seeking as files and directories get modified ...

Remember that when backing up - so a btrfs with snapshots, or an lvm
snapshot with rsync in place, is most definitely not a good idea with SMR.

Cheers,
Wol



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Neil Bothwick
On Tue, 16 Jun 2020 02:34:55 -0500, Dale wrote:

> Even if I build a NAS setup, I still need a backup arrangement.  Even if
> I have a RAID setup, still need backups.  It gets complicated for sure. 
> Sort of expensive too.  Just imagine if my DSL was 10 times faster. 
> O_O  I'd need to order drives by the case.

Not necessarily, if the files are going to remain available online, you
only need to back up the URLs. Downloading again could well be faster
than restoring from backups. 


-- 
Neil Bothwick

Always proofread carefully to see if you any words out.




Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Neil Bothwick
On Tue, 16 Jun 2020 00:42:50 +0100, Peter Humphrey wrote:

> > > So it can work, then. I just have to work out what I'm doing wrong.
> > > I have a support request in with them; no reply yet.  
> > 
> > Do other AppImages work on your computer?  
> 
> This is the first one I've tried. Do you recommend any others in
> particular?

You could try this one: https://www.balena.io/etcher/

I just wondered if your problem was with the BitWarden AppImages or
AppImages in general.


-- 
Neil Bothwick

System halted - Press all keys at once to continue.




Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Neil Bothwick wrote:
> On Tue, 16 Jun 2020 02:34:55 -0500, Dale wrote:
>
>> Even if I build a NAS setup, I still need a backup arrangement.  Even if
>> I have a RAID setup, still need backups.  It gets complicated for sure. 
>> Sort of expensive too.  Just imagine if my DSL was 10 times faster. 
>> O_O  I'd need to order drives by the case.
> Not necessarily, if the files are going to remain available online, you
> only need to back up the URLs. Downloading again could well be faster
> than restoring from backups. 
>
>


That's the thing, some I have are no longer available anywhere that I
can find.  Even youtube deletes videos, with their censorship, not
because of copyright but because they don't like the content.  Anytime I
find a good video that I think may be useful later, I download it so
that I have it.  For the most part, it's a good point.  Thing is, my DSL
is far slower than any drive I have, even if it were a USB drive at USB1
speeds.  Downloading is the best way to ensure I can watch a video later. 

Of course, videos aren't the only thing I don't want to lose.  I have a
lot of things I don't want to get away from me.  As we know, the more
copies in different locations the safer it is.  I wish I could use a
cloud account but as slow as my download is, upload is even slower, as
usual. 

That said, I'm glad to have the info I have stored here.  I'm just
trying to make sure it doesn't get away from me. 

I wish your way could work tho. 

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Wols Lists wrote:
> On 16/06/20 08:34, Dale wrote:
>> Right now, I'm backing up to a 8TB external drive, sadly it is a SMR
>> drive but it works.  As I go along, I'll be breaking down my backups. 
>> Example.  I may have my Documents directory, which includes my camera
>> pics, backed up to one drive.  I may have videos backed up to another
>> drive.  Other directories may have to be on other drives.  The biggest
>> things I don't want to lose:  Camera pics that could not be replaced
>> except with a backup.  Videos, some of which are no longer available. 
>> That requires a large drive.  It currently is approaching 6TBs and I
>> have several videos in other locations that are not included in that. 
>> Documents which would be hard to recreate.  Since I have all my emails
>> locally, I don't want to lose those either.  Just a bit ago, I was
>> searching for posts regarding smartctl.  I got quite a few hits.
> Streaming to an SMR should be fine. Doing a cp to a new directory, or
> writing a .tar file, or stuff like that.
>
> What is NOT fine is anything that is likely to result in a lot of
> head-seeking as files and directories get modified ...
>
> Remember that when backing up - so a btrfs with snapshots, or an lvm
> snapshot with rsync in place, is most definitely not a good idea with SMR.
>
> Cheers,
> Wol

Yea, I've read up on them a bit.  They have uses where they work fine
and one can't really tell the difference between it and a PMR/CMR drive.
In my case, it works OK but I have to leave it on a little bit after I
complete my backups and even unmount it.  There was a thread on this
where I asked why I could feel the heads bumping around for a while
after my backup was done.  I think it was Rich who guessed it was an SMR
drive.  Before that, I'd never heard of the thing.  For the small backups I
do every day or two, it works fine.  After some research, it was
discovered that Rich guessed right.  SMR it is. 

I purposely made sure the drive I recently bought was not an SMR drive
tho.  I don't want /home, or any other filesystem for my OS, on one. 
Honestly, I don't plan to buy any SMR drives in the near future.  Maybe
when all the Linux tools figure out how to deal with and manage them. 

I might add, I don't have LVM on that drive.  I read it does not work
well with LVM, RAID etc as you say.  Most likely, that drive will always
be a external drive for backups or something.  If it ever finds itself
on the OS or /home, it'll be a last resort. 

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Wols Lists
On 16/06/20 10:04, Dale wrote:
> I might add, I don't have LVM on that drive.  I read it does not work
> well with LVM, RAID etc as you say.  Most likely, that drive will always
> be a external drive for backups or something.  If it ever finds itself
> on the OS or /home, it'll be a last resort. 

LVM it's probably fine with. Raid, MUCH less so. What you need to make
sure does NOT happen is a lot of random writes. That might make deleting
an lvm snapshot slightly painful ...

But adding an SMR drive to an existing ZFS raid is a guarantee of pain.
I don't know why, but "resilvering" causes a lot of random writes. I
don't think md-raid behaves this way.

But it's the very nature of raid that, as soon as something goes wrong
and a drive needs replacing, everything is going to get hammered. And
SMR drives don't take kindly to being hammered ... :-)

Even in normal use, an SMR drive is going to cause grief if it's not
handled carefully.

Cheers,
Wol



Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Peter Humphrey
On Tuesday, 16 June 2020 09:43:15 BST Neil Bothwick wrote:
> On Tue, 16 Jun 2020 00:42:50 +0100, Peter Humphrey wrote:
> > > > So it can work, then. I just have to work out what I'm doing wrong.
> > > > I have a support request in with them; no reply yet.
> > > 
> > > Do other AppImages work on your computer?
> > 
> > This is the first one I've tried. Do you recommend any others in
> > particular?
> 
> You could try this one: https://www.balena.io/etcher/
> 
> I just wondered if your problem was with the BitWarden AppImages or
> AppImages in general.

It turns out that BitWarden requires execution of a program in /tmp. They said 
to make sure /tmp wasn't mounted noexec!

So I created a ~/.cache/bwtmp directory and passed TMPDIR= to bitwarden, but 
then it threw another error. I'd better take this up with BitWarden.

Wynn Wolf Arbor suggested this explanation, but I can't find his message now. 
(Is KMail playing up again?)
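The TMPDIR experiment described above can be sketched as follows; the
cache path and the AppImage filename are illustrative, not BitWarden's
documented interface:

```shell
# Detect whether /tmp is mounted noexec, which blocks AppImages that
# need to execute a helper from their temp directory.
if command -v findmnt >/dev/null && findmnt -no OPTIONS /tmp | grep -q noexec; then
    echo "/tmp is mounted noexec; executing helpers from it will fail"
fi
# Point TMPDIR at a directory on an exec-capable filesystem instead.
mkdir -p "$HOME/.cache/bwtmp"
# TMPDIR="$HOME/.cache/bwtmp" ./Bitwarden.AppImage   # hypothetical filename
```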

-- 
Regards,
Peter.






Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Wols Lists wrote:
> On 16/06/20 10:04, Dale wrote:
>> I might add, I don't have LVM on that drive.  I read it does not work
>> well with LVM, RAID etc as you say.  Most likely, that drive will always
>> be a external drive for backups or something.  If it ever finds itself
>> on the OS or /home, it'll be a last resort. 
> LVM it's probably fine with. Raid, MUCH less so. What you need to make
> sure does NOT happen is a lot of random writes. That might make deleting
> an lvm snapshot slightly painful ...
>
> But adding a SMR drive to an existing ZFS raid is a guarantee for pain.
> I don't know why, but "resilvering" causes a lot of random writes. I
> don't think md-raid behaves this way.
>
> But it's the very nature of raid that, as soon as something goes wrong
> and a drive needs replacing, everything is going to get hammered. And
> SMR drives don't take kindly to being hammered ... :-)
>
> Even in normal use, a SMR drive is going to cause grief if it's not
> handled carefully.
>
> Cheers,
> Wol

From what I've read, I agree.  Basically, as some have posted in
different places, SMR drives are good when writing once and leaving it
alone, about like a DVD-R.  Let's say I moved a lot of videos around,
maybe moved the directory structure around, which means a lot of data to
move.  I think I'd just risk putting a new filesystem on it and then
backing everything up from scratch.  It may take a little longer given
the amount of data, but it would be easier on the drive.  It would keep
that drive from being hammered, as you put it, to death. 

I've also read about the resilvering problems too.  I think LVM
snapshots and btrfs snapshots have problems.  I've also read that on
windoze it can cause a system to freeze while it is trying to rewrite
the moved data.  It gets so slow, it actually makes the OS not respond.
I suspect it could happen on Linux too if the conditions are right.

I guess this is about saving money for the drive makers.  The part that
really seems to get under people's skin, tho, is them putting those
drives out there without telling people they made changes that affect
performance.  It's bad enough for people who use them where they work
well, but for the people that use RAID and such, it seems to bring them
to their knees at times.  I can't count the number of times I've read
that people support a class action lawsuit over shipping SMR without
telling anyone.  It could happen, and I'm not sure it shouldn't.  People
using RAID and such, especially in some systems, need performance, not
drives that beat themselves to death.

My plan: avoid SMR if at all possible.  Right now, I just don't need the
headaches.  With the one I got, I'm lucky it works OK, even if it does
bump around for quite a while after backups are done. 

My new-to-me hard drive is still testing.  Got a few more hours left
yet.  Then I'll run some more tests.  It seems to be OK tho. 

Dale

:-)  :-) 


Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Michael
On Tuesday, 16 June 2020 12:26:01 BST Dale wrote:
> Wols Lists wrote:
> > On 16/06/20 10:04, Dale wrote:
> >> I might add, I don't have LVM on that drive.  I read it does not work
> >> well with LVM, RAID etc as you say.  Most likely, that drive will always
> >> be a external drive for backups or something.  If it ever finds itself
> >> on the OS or /home, it'll be a last resort.
> > 
> > LVM it's probably fine with. Raid, MUCH less so. What you need to make
> > sure does NOT happen is a lot of random writes. That might make deleting
> > an lvm snapshot slightly painful ...
> > 
> > But adding a SMR drive to an existing ZFS raid is a guarantee for pain.
> > I don't know why, but "resilvering" causes a lot of random writes. I
> > don't think md-raid behaves this way.
> > 
> > But it's the very nature of raid that, as soon as something goes wrong
> > and a drive needs replacing, everything is going to get hammered. And
> > SMR drives don't take kindly to being hammered ... :-)
> > 
> > Even in normal use, a SMR drive is going to cause grief if it's not
> > handled carefully.
> > 
> > Cheers,
> > Wol
> 
> From what I've read, I agree.  Basically, as some have posted in
> different places, SMR drives are good when writing once and leaving it
> alone.  Basically, about like a DVD-R.  From what I've read, let's say I
> moved a lot of videos around, maybe moved the directory structure
> around, which means a lot of data to move.  I think I'd risk just
> putting a new file system on it and then backing up everything from
> scratch.  It may take a little longer given the amount of data but it
> would be easier on the drive.  It would keep from hammering, as you
> put it, that drive to death. 
> 
> I've also read about the resilvering problems too.  I think LVM
> snapshots and something about btrfs has problems.  I've also read
> that on windoze, it can cause a system to freeze while it is trying to
> rewrite the moved data too.  It gets so slow, it actually makes the OS
> not respond.  I suspect it could happen on Linux too if the conditions
> are right.
> 
> I guess this is about saving money for the drive makers.  The part that
> really gets under people's skin, though, is them putting those drives
> out there without telling people that they made changes that affect
> performance.  It's bad enough for people who use them where they work
> well, but for people that use RAID and such, it seems to bring them to
> their knees at times.  I can't count the number of times I've read that
> people support a class action lawsuit over shipping SMR without telling
> anyone.  It could happen and I'm not sure it shouldn't.  People using
> RAID and such, especially in some systems, need performance, not
> drives that beat themselves to death.
> 
> My plan, avoid SMR if at all possible.  Right now, I just don't need the
> headaches.  The one I got, I'm lucky it works OK, even if it does bump
> around for quite a while after backups are done. 
> 
> My new to me hard drive is still testing.  Got a few more hours left
> yet.  Then I'll run some more tests.  It seems to be OK tho. 
> 
> Dale
> 
> :-)  :-) 

Just to add my 2c's before you throw that SMR away, the use case for these 
drives is to act as disk archives, rather than regular backups.  You write 
data you want to keep, once.  SMR disks would work well for your use case of 
old videos/music/photos you want to keep and won't be overwriting every other 
day/week/month.  Using rsync with '-c' to compare checksums will also make 
sure what you've copied is as good/bad as the original fs source.
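That rsync pattern can be sketched with throwaway directories standing in for 
the real disks (the paths are illustrative, not from the thread):

```shell
# Sketch of the archive-and-verify pattern with rsync.  mktemp directories
# stand in for the source disk and the mounted SMR archive.
src=$(mktemp -d)   # stands in for e.g. a videos directory
dst=$(mktemp -d)   # stands in for the mounted archive drive

echo "some video data" > "$src/clip.mkv"

# -a preserves attributes; -c compares full checksums rather than the
# default size+mtime check, so both sides are actually re-read.
rsync -ac "$src/" "$dst/"

# Verification pass: with -c, --dry-run and -i (itemize), nothing should
# be listed as a changed file if the archive still matches the source.
rsync -aci --dry-run "$src/" "$dst/" | grep '^>f' || echo "archive verified"
```

Note that a '-c' pass re-reads every byte on both sides, so on a large archive 
it is slow; that is the price of the verification.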



Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Neil Bothwick
On Tue, 16 Jun 2020 12:05:33 +0100, Peter Humphrey wrote:

> It turns out that BitWarden requires execution of a program in /tmp.
> They said to make sure /tmp wasn't mounted noexec!
> 
> So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> bitwarden, but then it threw another error. I'd better take this up
> with BitWarden.

Have you tried remounting /tmp with exec, just to see if it works?
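The check itself can be scripted; a sketch (findmnt is from util-linux, and 
the remount line needs root):

```shell
# Report whether /tmp is mounted with the noexec option.  If /tmp is not a
# separate mount, findmnt prints nothing and the exec branch is taken.
if findmnt -no OPTIONS /tmp | tr ',' '\n' | grep -qx noexec; then
    echo "/tmp is noexec"
    # Temporary fix, as root (lost on reboot unless /etc/fstab is changed):
    # mount -o remount,exec /tmp
else
    echo "/tmp allows exec"
fi
```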


-- 
Neil Bothwick

Pound for pound, the amoeba is the most vicious animal on the earth.




Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Wynn Wolf Arbor
On 2020-06-16 12:05, Peter Humphrey wrote:
> So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> bitwarden, but then it threw another error. I'd better take this up
> with BitWarden.

I just tried getting it to work again. If this is anything like on my
system, once the noexec problem is fixed, the app fails here because it
doesn't find libsecret. I'm not sure why this is not bundled in the
AppImage, but emerging app-crypt/libsecret fixes this for me, and I can
run the app without any further issues. I might have other libs it
depends on installed already, so I can't say for sure without looking at
the error message.

Hope that helps.

-- 
Wolf



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Rich Freeman
On Tue, Jun 16, 2020 at 7:36 AM Michael  wrote:
>
> Just to add my 2c's before you throw that SMR away, the use case for these
> drives is to act as disk archives, rather than regular backups.  You write
> data you want to keep, once.

If your write pattern is more like a tape SMR should be ok in theory.
For example, if you wrote to a raw partition using tar (without a
filesystem) I suspect most SMR implementations (including
drive-managed) would work tolerably (a host-managed implementation
would perform identically to CMR).  Once you toss in a filesystem then
there is no guarantee that the writes will end up being sequential.
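Rich's tape-like pattern, sketched with a scratch file standing in for the raw 
device (on a real disk the of= target would be the device node itself, which 
destroys any existing data):

```shell
# Stream a tar archive as one long sequential write, no filesystem involved.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "archive me" > "$workdir/data/file.txt"

# Write: tar output piped straight to dd.  On the real SMR disk this would
# be of=/dev/sdX (placeholder!) instead of the scratch image file.
tar -C "$workdir" -cf - data | dd of="$workdir/disk.img" bs=1M status=none

# Read back: an equally sequential stream; list the archive contents.
dd if="$workdir/disk.img" bs=1M status=none | tar -tf -
```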

And of course the problem with these latest hidden SMR drives is that
they generally don't support TRIM, so even repeated sequential writes
can be a problem because the drive doesn't realize that after you send
block 1 you're going to send blocks 2-100k all sequentially.  If it
knew that then it would just start overwriting in place obliterating
later tracks, since they're just going to be written next anyway.
Instead this drive is going to cache every write until it can
consolidate them, which isn't terrible but it still turns every seek
into three (write buffer, read buffer, write permanent - plus updating
metadata).  If they weren't being sneaky they could have made it
drive-managed WITH TRIM so that it worked more like an SSD where you
get the best performance if the OS uses TRIM, but it can fall back if
you don't.  Sequential writes on trimmed areas for SMR should perform
identically to writes on CMR drives.
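Whether a given drive actually advertises discard/TRIM support can be read 
from sysfs (assuming sysfs is mounted; a nonzero granularity means the device 
accepts discards):

```shell
# List discard (TRIM) support per block device.  A value of 0 means the
# device does not accept discard commands at all.
for q in /sys/block/*/queue/discard_granularity; do
    [ -r "$q" ] || continue
    dev=$(basename "$(dirname "$(dirname "$q")")")
    printf '%-8s discard_granularity=%s\n' "$dev" "$(cat "$q")"
done
```

`lsblk -D` prints the same information in table form.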


-- 
Rich



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Michael wrote:
> On Tuesday, 16 June 2020 12:26:01 BST Dale wrote:
>
>> From what I've read, I agree.  Basically, as some have posted in
>> different places, SMR drives are good when writing once and leaving it
>> alone.  Basically, about like a DVD-R.  From what I've read, let's say I
>> moved a lot of videos around, maybe moved the directory structure
>> around, which means a lot of data to move.  I think I'd risk just
>> putting a new file system on it and then backing up everything from
>> scratch.  It may take a little longer given the amount of data but it
>> would be easier on the drive.  It would keep from hammering, as you
>> put it, that drive to death. 
>>
>> I've also read about the resilvering problems too.  I think LVM
>> snapshots and something about btrfs has problems.  I've also read
>> that on windoze, it can cause a system to freeze while it is trying to
>> rewrite the moved data too.  It gets so slow, it actually makes the OS
>> not respond.  I suspect it could happen on Linux too if the conditions
>> are right.
>>
>> I guess this is about saving money for the drive makers.  The part that
>> really gets under people's skin, though, is them putting those drives
>> out there without telling people that they made changes that affect
>> performance.  It's bad enough for people who use them where they work
>> well, but for people that use RAID and such, it seems to bring them to
>> their knees at times.  I can't count the number of times I've read that
>> people support a class action lawsuit over shipping SMR without telling
>> anyone.  It could happen and I'm not sure it shouldn't.  People using
>> RAID and such, especially in some systems, need performance, not
>> drives that beat themselves to death.
>>
>> My plan, avoid SMR if at all possible.  Right now, I just don't need the
>> headaches.  The one I got, I'm lucky it works OK, even if it does bump
>> around for quite a while after backups are done. 
>>
>> My new to me hard drive is still testing.  Got a few more hours left
>> yet.  Then I'll run some more tests.  It seems to be OK tho. 
>>
>> Dale
>>
>> :-)  :-) 
> Just to add my 2c's before you throw that SMR away, the use case for these 
> drives is to act as disk archives, rather than regular backups.  You write 
> data you want to keep, once.  SMR disks would work well for your use case of 
> old videos/music/photos you want to keep and won't be overwriting every other 
> day/week/month.  Using rsync with '-c' to compare checksums will also make 
> sure what you've copied is as good/bad as the original fs source.


I try to update about once a day, that way the changes or additions are
fairly small.  On occasion tho, I find a better version of a video which
means I have a new file and delete the old.  That may make it a little
harder for the SMR drive but the amount of data, given my slow DSL, is
not large enough to matter.  I think the biggest change rsync has
reported so far is about 4 or 5 GB. 

My general process is like this.  I find a point where I can backup.  I
power up the external drive, mount it using KDE's Device Notifier, use
rsync to update the files and then unmount the drive with DN.  After
that, I let it sit until I notice that it is not doing that bumping
thing for a bit.  Sometimes that is a couple minutes, sometimes it is 10
or 15 minutes or so.  Generally, it isn't very long really.  Sometimes I
go do something else, cook supper, mow the grass or whatever and cut it
off when I get back. 

In theory I could cut it off right after the backup is done and I've
unmounted it.  Thing is, the changes will build up depending on how
large the cache/buffer/whatever is that it stores as CMR.  From what
I've read, it has a PMR/CMR section and then the rest is SMR.  It writes
new stuff to the PMR/CMR section and when it has time, it moves it to
the SMR parts.  It then does its rewrite thing with the shingles.  I'm
sort of making it simple but that's what some have claimed it does. 

Let's keep in mind, the drive I just bought in this thread is a PMR
drive.  The SMR drive is one I've had a while in an external enclosure. 
Most of the time, it holds my desk down and a stack of Blu-ray discs
up.  That bumpy thing sometimes makes the discs fall off tho.  I need to
clean my desk off, again. 

While I wish my backup drive wasn't a SMR, at least it is acceptable in
performance for what I'm using it for.  If I had spent money on that
drive and put it on /home, then I'd be pretty upset.  We're talking
steam and smoke upset.  It's not like these drives are $20 or $30 or
something.  I got a good deal paying about $150 for this latest new to
me drive.  Still, that's $150 that I don't want to waste on something
that can't handle what I do.  Backup drive that is SMR, well, OK.  I'm
not really pleased about it but it works OK.  Having it on /home where
it could cause my system to freeze or something, well, that may remind
me of the hal days.  I'm sure some recall me and my love for

Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Peter Humphrey
On Tuesday, 16 June 2020 12:40:35 BST Neil Bothwick wrote:
> On Tue, 16 Jun 2020 12:05:33 +0100, Peter Humphrey wrote:
> > It turns out that BitWarden requires execution of a program in /tmp.
> > They said to make sure /tmp wasn't mounted noexec!
> > 
> > So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> > bitwarden, but then it threw another error. I'd better take this up
> > with BitWarden.
> 
> Have you tried remounting /tmp with exec, just to see if it works?

Yes, and it threw another error:

A JavaScript error occurred in the main process
Uncaught Exception:
Error: libsecret-1.so.0: cannot open shared object file: No such file or 
directory
at process.func (electron/js2c/asar.js:138:31)
at process.func [as dlopen] (electron/js2c/asar.js:138:31)
at Object.Module._extensions..node (internal/modules/cjs/loader.js:828:18)
at Object.func (electron/js2c/asar.js:138:31)
at Object.func [as .node] (electron/js2c/asar.js:147:18)
at Module.load (internal/modules/cjs/loader.js:645:32)
at Function.Module._load (internal/modules/cjs/loader.js:560:12)
at Module.require (internal/modules/cjs/loader.js:685:19)
at require (internal/modules/cjs/helpers.js:16:16)
at Object.<anonymous> 
(/home/prh/Bitwarden-1.18.0-x86_64/opt/Bitwarden/resources/app.asar/node_modules/keytar/lib/keytar.js:1:14)
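The failing dlopen() can be confirmed by asking the loader cache directly (a 
sketch; ldconfig's location varies by distro):

```shell
# Is libsecret visible to the dynamic linker?  The Bitwarden AppImage loads
# it at runtime via keytar instead of bundling it.
LDCONFIG=$(command -v ldconfig || echo /sbin/ldconfig)
if "$LDCONFIG" -p 2>/dev/null | grep -q 'libsecret-1\.so'; then
    echo "libsecret found"
else
    echo "libsecret missing - on Gentoo: emerge app-crypt/libsecret"
fi
```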

-- 
Regards,
Peter.






Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Peter Humphrey
On Tuesday, 16 June 2020 12:52:52 BST Wynn Wolf Arbor wrote:
> On 2020-06-16 12:05, Peter Humphrey wrote:
> > So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> > bitwarden, but then it threw another error. I'd better take this up
> > with BitWarden.
> 
> I just tried getting it to work again. If this is anything like on my
> system, once the noexec problem is fixed, the app fails here because it
> doesn't find libsecret. I'm not sure why this is not bundled in the
> AppImage, but emerging app-crypt/libsecret fixes this for me, and I can
> run the app without any further issues. I might have other libs it
> depends on installed already, so I can't say for sure without looking at
> the error message.
> 
> Hope that helps.

Certainly did - thanks! I'll tell them about it.

-- 
Regards,
Peter.






Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Neil Bothwick
On Tue, 16 Jun 2020 13:52:52 +0200, Wynn Wolf Arbor wrote:

> > So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> > bitwarden, but then it threw another error. I'd better take this up
> > with BitWarden.  
> 
> I just tried getting it to work again. If this is anything like on my
> system, once the noexec problem is fixed, the app fails here because it
> doesn't find libsecret. I'm not sure why this is not bundled in the
> AppImage, but emerging app-crypt/libsecret fixes this for me, and I can
> run the app without any further issues. I might have other libs it
> depends on installed already, so I can't say for sure without looking at
> the error message.

That explains why it just worked for me. /tmp is mounted without noexec
and libsecret is already installed.


-- 
Neil Bothwick

You can't teach a new mouse old clicks.




Re: [gentoo-user] emerge -u fails with "OSError: [Errno 12] Cannot allocate memory"

2020-06-16 Thread J. Roeleveld
On 16 June 2020 20:31:56 CEST, n952162  wrote:
>Admonished to get everything updated, I turned to my raspberry pi with
>Sakaki's binary image.  Synced and updated portage with no problem. 
>Then I did an emerge -u @world and got (after *hours* of dependency
>checking):
>
> >>> Jobs: 0 of 206 complete, 1 running Load avg: 2.84, 3.44, 3.85
> >>> Emerging binary (1 of 206) sys-libs/glibc-2.31-r5::gentoo
> >>> Jobs: 0 of 206 complete, 1 running Load avg: 2.84, 3.44, 3.85
> >>> Jobs: 0 of 206 complete Load avg: 3.60, 3.54, 3.87
> >>> Installing (1 of 206) sys-libs/glibc-2.31-r5::gentoo
> >>> Jobs: 0 of 206 complete Load avg: 3.60, 3.54, 3.87
>Exception in callback AsynchronousTask._exit_listener_cb(method...0x7f9180d9d8>>)
>handle: method...0x7f9180d9d8>>)>
>Traceback (most recent call last):
>   File "/usr/lib64/python3.6/asyncio/events.py", line 145, in _run
>     self._callback(*self._args)
>   File
>"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
>201, in _exit_listener_cb
>     listener(self)
>   File
>"/usr/lib64/python3.6/site-packages/_emerge/BinpkgPrefetcher.py", line
>31, in _fetcher_exit
>     self._start_task(verifier, self._verifier_exit)
>   File "/usr/lib64/python3.6/site-packages/_emerge/CompositeTask.py",
>line 113, in _start_task
>     task.start()
>   File
>"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
>30, in start
>     self._start()
>   File "/usr/lib64/python3.6/site-packages/_emerge/BinpkgVerifier.py",
>line 59, in _start
>     self._digester_exit)
>   File "/usr/lib64/python3.6/site-packages/_emerge/CompositeTask.py",
>line 113, in _start_task
>     task.start()
>   File
>"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
>30, in start
>     self._start()
>   File
>"/usr/lib64/python3.6/site-packages/portage/util/_async/FileDigester.py",
>line 30, in _start
>     ForkProcess._start(self)
>   File "/usr/lib64/python3.6/site-packages/_emerge/SpawnProcess.py",
>line 112, in _start
>     retval = self._spawn(self.args, **kwargs)
>   File
>"/usr/lib64/python3.6/site-packages/portage/util/_async/ForkProcess.py",
>line 24, in _spawn
>     pid = os.fork()
>   File "/usr/lib64/python3.6/site-packages/portage/__init__.py", line
>246, in __call__
>     rval = self._func(*wrapped_args, **wrapped_kwargs)
>OSError: [Errno 12] Cannot allocate memory
>
>What's the recommended course of action here?
>
>Log attached.

Suggestion:
1) ensure you only have 1 job running and absolutely no parallel builds: 
"--jobs 1" for emerge and "-j1" for make.

2) add swap, preferably on a USB stick/hard drive so as not to wear out the SD 
card.

Because Raspberry Pis are low on memory and have very specific uses, I tend not 
to bother with Gentoo on them.
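Point 1 can be pinned in make.conf so it applies to every run (a sketch of the 
relevant settings, not anyone's actual file):

```shell
# /etc/portage/make.conf -- force strictly serial builds on a low-memory box
MAKEOPTS="-j1"                                  # one compiler job at a time
EMERGE_DEFAULT_OPTS="--jobs=1 --load-average=1" # one package at a time
```

For point 2, a swap file on an attached USB disk (mkswap + swapon as root) 
gives fork() room to breathe without wearing out the SD card.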

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread n952162

On 06/10/20 15:19, n952162 wrote:


I updated my system and now characters typed into vbox over ssh are
not echo-ed until *after* a CR is entered.

I diffed the stty output, to see if I could spot anything:

10~>cat /tmp/sttydiff
2,3c2,3
<  rows 37
<  columns 100
---
>  rows 44
>  columns 88
21d20
<  discard = ^O
23c22,23
< min = 1
---
> discard = ^O
>  min = 1
30c30
< hupcl
---
> -hupcl
36c36
< brkint
---
> -brkint
48,49c48,49
< imaxbel
< iutf8
---
> -imaxbel
> -iutf8

Also, the font seems to be screwed up, because the last line of the
window only shows the top half of the line.

Anybody else encounter this or know what's wrong?

Vbox seems to work okay when run locally, on the machine it's
installed on.




I think this is resolved, kinda.
I just discovered that if I turn off the vbox menu bar, the command
entry line works properly again, both in X-less console mode and in X.
    Settings -> User Interface -> Enable menu bar (disable this)

I've always had that menu bar, and need it, so something got
changed/broken, and I still have a problem, but at least now I don't
have to enter commands blindly.


Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread J. Roeleveld
On 16 June 2020 21:07:56 CEST, n952162  wrote:
>On 06/10/20 15:19, n952162 wrote:
>>
>> I updated my system and now characters typed into vbox over ssh are
>> not echo-ed until *after* a CR is entered.
>>
>> I diffed the stty output, to see if I could spot anything:
>>
>> 10~>cat /tmp/sttydiff
>> 2,3c2,3
>> <  rows 37
>> <  columns 100
>> ---
>> >  rows 44
>> >  columns 88
>> 21d20
>> <  discard = ^O
>> 23c22,23
>> < min = 1
>> ---
>> > discard = ^O
>> >  min = 1
>> 30c30
>> < hupcl
>> ---
>> > -hupcl
>> 36c36
>> < brkint
>> ---
>> > -brkint
>> 48,49c48,49
>> < imaxbel
>> < iutf8
>> ---
>> > -imaxbel
>> > -iutf8
>>
>> Also, the font seems to be screwed up, because the last line of the
>> window only shows the top half of the line.
>>
>> Anybody else encounter this or know what's wrong?
>>
>> Vbox seems to work okay when run locally, on the machine it's
>> installed on.
>>
>>
>
>I think this is resolved, kinda.
>I just discovered that if I turn off the vbox menu bar, the command
>entry line works properly again, both in X-less console mode and in X.
>     Settings -> User Interface -> Enable menu bar (disable this)
>
>I've always had that menu bar, and need it, so something got
>changed/broken, and I still have a problem, but at least now I don't
>have to enter commands blindly.

Are these Virtualbox VMs critical?
If yes, I would suggest migrating them to a more reliable virtualisation 
technology.

I do not consider Virtualbox suitable for anything but a desktop-based VM 
method for a quick test or simulation.

For anything serious, I would suggest Xen, KVM or VMWare.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread n952162

On 06/16/20 22:36, J. Roeleveld wrote:

On 16 June 2020 21:07:56 CEST, n952162  wrote:

On 06/10/20 15:19, n952162 wrote:

I updated my system and now characters typed into vbox over ssh are
not echo-ed until *after* a CR is entered.

I diffed the stty output, to see if I could spot anything:

10~>cat /tmp/sttydiff
2,3c2,3
<  rows 37
<  columns 100
---
>  rows 44
>  columns 88
21d20
<  discard = ^O
23c22,23
< min = 1
---
> discard = ^O
>  min = 1
30c30
< hupcl
---
> -hupcl
36c36
< brkint
---
> -brkint
48,49c48,49
< imaxbel
< iutf8
---
> -imaxbel
> -iutf8

Also, the font seems to be screwed up, because the last line of the
window only shows the top half of the line.

Anybody else encounter this or know what's wrong?

Vbox seems to work okay when run locally, on the machine it's
installed on.



I think this is resolved, kinda.
I just discovered that if I turn off the vbox menu bar, the command
entry line works properly again, both in X-less console mode and in X.
     Settings -> User Interface -> Enable menu bar (disable this)

I've always had that menu bar, and need it, so something got
changed/broken, and I still have a problem, but at least now I don't
have to enter commands blindly.

Are these Virtualbox VMs critical?
If yes, I would suggest migrating them to a more reliable virtualisation 
technology.

I do not consider Virtualbox suitable for anything but a desktop based VM 
method for a quick test or simulation.

For anything serious, I would suggest Xen, KVM or VMWare.

--
Joost


Well, no, they're really not critical, but your comment surprises me. 
I've been using vbox for years, on various assignments, and never
encountered anything else.  Can you say a word or two to that, or
provide a URL?  Which free vm is "the best"?




Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread antlists

On 16/06/2020 12:26, Dale wrote:
> I've also read about the resilvering problems too.  I think LVM
> snapshots and something about btrfs has problems.  I've also read
> that on windoze, it can cause a system to freeze while it is trying to
> rewrite the moved data too.  It gets so slow, it actually makes the OS
> not respond.  I suspect it could happen on Linux too if the conditions
> are right.



Being all technical, what seems to be happening is ...

Random writes fill up the PMR cache. The drive starts flushing the cache, 
but unfortunately you need a doubly linked list or something - you need 
to be able to find the physical block from the logical address (for 
reading) and to find the logical block from the physical address (for 
cache-flushing). So once the cache fills, the drive needs "down time" to 
move stuff around, and it stops responding to the bus. There are reports 
of disk stalls of 10 minutes or more - bear in mind desktop drives are 
classed as unsuitable for raid because they stall for *up* *to* *two* 
minutes ...


> I guess this is about saving money for the drive makers.  The part that
> really gets under people's skin, though, is them putting those drives
> out there without telling people that they made changes that affect
> performance.  It's bad enough for people who use them where they work
> well, but for people that use RAID and such, it seems to bring them to
> their knees at times.  I can't count the number of times I've read that
> people support a class action lawsuit over shipping SMR without telling
> anyone.  It could happen and I'm not sure it shouldn't.  People using
> RAID and such, especially in some systems, need performance, not
> drives that beat themselves to death.


Most manufacturers haven't been open, but at least - apart from WD - 
they haven't been stupid either. Bear in mind WD actively market their 
Red drives as suitable for NAS or Raid; putting SMR in there was 
absolutely dumb. Certainly in the UK, as soon as the news gets round, 
they (or rather their retailers) will probably get shafted with loads 
of returns as "unfit for purpose". And, basically, they have a legal 
liability with no leg to stand on, because if a product doesn't do what 
it's advertised for, then the customer is *entitled* to a refund.


Dunno why, I've never been a WD fan, so I dodged that bullet. I just 
caught another one, because I regularly advise people they shouldn't be 
running Barracudas, while running two myself ... :-)


Cheers,
Wol



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread antlists

On 16/06/2020 13:25, Rich Freeman wrote:
> And of course the problem with these latest hidden SMR drives is that
> they generally don't support TRIM,


This, I believe, is a problem with the ATA spec. I don't understand 
what's going on, but something like for these drives you need v4 of the 
spec, and only v3 is finalised. Various people have pointed out holes in 
this theory, so you don't need to add to them :-) But yes, I do 
understand that apparently there is no official standard way to send a 
trim to these drives ...



> so even repeated sequential writes
> can be a problem because the drive doesn't realize that after you send
> block 1 you're going to send blocks 2-100k all sequentially.  If it
> knew that then it would just start overwriting in place obliterating
> later tracks, since they're just going to be written next anyway.


No, it can't do that: when it overwrites the end of the file, it will 
be obliterating other random files that aren't going to be 
overwritten ...



> Instead this drive is going to cache every write until it can
> consolidate them, which isn't terrible but it still turns every seek
> into three (write buffer, read buffer, write permanent - plus updating
> metadata). 


Which IS terrible if you don't give the drive down-time to flush the 
buffer ...



> If they weren't being sneaky they could have made it
> drive-managed WITH TRIM so that it worked more like an SSD where you
> get the best performance if the OS uses TRIM, but it can fall back if
> you don't.  Sequential writes on trimmed areas for SMR should perform
> identically to writes on CMR drives.


You're forgetting one thing - rewriting a block on SSD or CMR doesn't 
obliterate neighbouring blocks ... with SMR for every track you rewrite 
you have to salvage the neighbouring track too ...


Cheers,
Wol



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
antlists wrote:
> On 16/06/2020 12:26, Dale wrote:
>> I've also read about the resilvering problems too.  I think LVM
>> snapshots and something about BTFS(sp?) has problems.  I've also read
>> that on windoze, it can cause a system to freeze while it is trying
>> to rewrite the moved data too.  It gets so slow, it actually makes
>> the OS not respond.  I suspect it could happen on Linux to if the
>> conditions are right.
>>
> Being all technical, what seems to be happening is ...
>
> Random writes fillup the PMR cache. The drive starts flushing the
> cache, but unfortunately you need a doubly linked list or something -
> you need to be able to find the physical block from the logical
> address (for reading) and to find the logical block from the physical
> address (for cache-flushing). So once the cache fills, the drive needs
> "down time" to move stuff around, and it stops responding to the bus.
> There are reports of disk stalls of 10 minutes or more - bear in mind
> desktop drives are classed as unsuitable for raid because they stall
> for *up* *to* *two* minutes ...
>
>> I guess this is about saving money for the drive makers.  The part
>> that really gets under people's skin, though, is them putting those
>> drives out there without telling people that they made changes that
>> affect performance.  It's bad enough for people who use them where
>> they work well, but for people that use RAID and such, it seems to
>> bring them to their knees at times.  I can't count the number of
>> times I've read that people support a class action lawsuit over
>> shipping SMR without telling anyone.  It could happen and I'm not
>> sure it shouldn't.  People using RAID and such, especially in some
>> systems, need performance, not drives that beat themselves to death.
>
> Most manufacturers haven't been open, but at least - apart from WD -
> they haven't been stupid either. Bear in mind WD actively market their
> Red drives as suitable for NAS or Raid; putting SMR in there was
> absolutely dumb. Certainly in the UK, as soon as the news gets round,
> they (or rather their retailers) will probably get shafted with loads
> of returns as "unfit for purpose". And, basically, they have a legal
> liability with no leg to stand on, because if a product doesn't do
> what it's advertised for, then the customer is *entitled* to a refund.
>
> Dunno why, I've never been a WD fan, so I dodged that bullet. I just
> caught another one, because I regularly advise people they shouldn't
> be running Barracudas, while running two myself ... :-)
>
> Cheers,
> Wol
>
>


From what I've read, all the drive makers were selling SMR without
telling anyone at first.  It wasn't just WD but Seagate as well.  There
was another maker too, but I can't recall the brand.  I want to say HGST
but it could have been something else.  I tend to like WD and Seagate
and have had a couple of Toshibas as well.  I've had a WD go bad but
I've had a Seagate go bad too.  I'm of the mindset that most drives are
good but on occasion, you hit a bad batch.  No matter what brand it is,
there is a horror story out there somewhere.  I've been lucky so far.
It seems SMART catches that a drive is failing before it actually does.
I had one that gave the 24-hour warning and it wasn't kidding either.
Another just started reporting bad spots.  I replaced it before it
corrupted anything.  I've never lost data that I can recall tho. 

I've read that if there is a lawsuit, the EU will likely be first and
the easiest.  If you say something should work in a certain way and it
doesn't, refund for sure.  Given the large scale of this, a lawsuit is
possible.  I'm no lawyer but I do think what the makers did in hiding
this info is wrong.  It doesn't matter what brand it is, they should be
honest about their products.  This is especially true for situations
like RAID, NAS and other 24/7 systems.  Thing is, even my system falls
into that category.  I run 24/7 here except during power failures.  LVM
likely requires a better drive than a regular home type system that is
only used a little each day.  Commercial systems that are in heavy use
require really heavy-duty components.  Claiming a drive is up to that,
or leaving out info that shows it is not, is not good.  They should have
known it would bite them at some point.  People have far too many tools
to test drives and uncover the truth. 

Little update.  The drive passed its first SMART long test.  I started
badblocks hours ago and it is almost done.  It's at 96% right now.  I
think it lists bad blocks as it finds them and so far, it hasn't listed
any.  I'll post the results when it is done.  So far, the drive I bought
seems to be in very good condition. 
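For anyone following along, the sequence described above looks roughly like 
this (device commands are placeholders and need root; note that badblocks -w 
is destructive):

```shell
# On the real drive (replace /dev/sdX -- do not point these at the wrong disk):
#   smartctl -t long /dev/sdX       # start the drive's built-in long self-test
#   smartctl -l selftest /dev/sdX   # read the result once it finishes
#   badblocks -b 4096 -sv /dev/sdX  # read-only surface scan, lists bad blocks
# badblocks also accepts an ordinary file, which makes a safe dry run:
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1k count=64 status=none
badblocks -b 1024 -s "$scratch" && echo "scan finished, no bad blocks listed"
rm -f "$scratch"
```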

Now to wait on the last little bit to finish.  Just hope it doesn't get
right to the end and start blowing smoke.  :/

Dale

:-)  :-) 


Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread J. Roeleveld
On Tuesday, June 16, 2020 11:08:23 PM CEST n952162 wrote:
> On 06/16/20 22:36, J. Roeleveld wrote:
> > On 16 June 2020 21:07:56 CEST, n952162  wrote:
> >> On 06/10/20 15:19, n952162 wrote:
> >>> I updated my system and now characters typed into vbox over ssh are
> >>> not echo-ed until *after* a CR is entered.
> >>> 
> >>> I diffed the stty output, to see if I could spot anything:
> >>> 
> >>> 10~>cat /tmp/sttydiff
> >>> 2,3c2,3
> >>> <  rows 37
> >>> <  columns 100
> >>> ---
> >>> >  rows 44
> >>> >  columns 88
> >>> 21d20
> >>> <  discard = ^O
> >>> 23c22,23
> >>> < min = 1
> >>> ---
> >>> > discard = ^O
> >>> >  min = 1
> >>> 30c30
> >>> < hupcl
> >>> ---
> >>> > -hupcl
> >>> 36c36
> >>> < brkint
> >>> ---
> >>> > -brkint
> >>> 48,49c48,49
> >>> < imaxbel
> >>> < iutf8
> >>> ---
> >>> > -imaxbel
> >>> > -iutf8
> >>> 
> >>> Also, the font seems to be screwed up, because the last line of the
> >>> window only shows the top half of the line.
> >>> 
> >>> Anybody else encounter this or know what's wrong?
> >>> 
> >>> Vbox seems to work okay when run locally, on the machine it's
> >>> installed on.
> >> 
> >> I think this is resolved, kinda.
> >> I just discovered that if I turn off the vbox menu bar, the command
> >> entry line works properly again, both in X-less console mode and in X.
> >> 
> >>  Settings -> User Interface -> Enable menu bar (disable this)
> >> 
> >> I've always had that menu bar, and need it, so something got
> >> changed/broken, and I still have a problem, but at least now I don't
> >> have to enter commands blindly.
> > 
> > Are these Virtualbox VMs critical?
> > If yes, I would suggest migrating them to a more reliable virtualisation
> > technology.
> > 
> > I do not consider Virtualbox suitable for anything but a desktop-based VM
> > method for a quick test or simulation.
> > 
> > For anything serious, I would suggest Xen, KVM or VMWare.
> > 
> > --
> > Joost
> 
> Well, no, they're really not critical, but your comment surprises me. 
> I've been using vbox for years, on various assignments, and never
> encountered anything else.  Can you say a word or two to that, or
> provide a URL?  Which free vm is "the best"?

I never bothered bookmarking URLs about this, but can elaborate on my 
reasoning and experience.

Virtualbox is a nice product and I do use it when it is convenient. It is 
perfect for quickly starting a VM to test something. It integrates nicely with 
the desktop, making it easy to copy/paste data across and to connect to the 
filesystem on the host.

This is also the reason why it is NOT suitable for actual production 
use: it is a virtualisation tool for a desktop.

If you want your VMs to run as fast and stable as possible, you want the host 
to be as minimal as possible. This means:
- it runs headless (no GUI, just text) and the host has only 1 task: Run VMs.
- it doesn't contain anything else (only exception is stuff for monitoring)

Virtualbox does not (afaik) support block-devices for VMs. It only supports 
file-based disks. This is fine, as it allows you to "quickly" move these to 
different storage. But it adds another layer between the hardware and the VM 
(the filesystem on the host), which adds its own write-caching and potential 
corruption (I have had this on several occasions).

The virtualisation systems I mentioned in my previous email (Xen, KVM, VMWare) 
all support block-devices and sit as close to hardware as is possible. In the 
case of VMWare, I am talking about the server product, not the desktop 
product. The VMWare desktop product has the same problems as VirtualBox.

As for which free one is best, I am reluctant to answer specifically as both 
Xen and KVM are good.

Personally, I use Xen. I have been using it since one of the 2.x versions, 
when KVM didn't exist yet.
Xen has the hypervisor in a small "kernel" and the host runs as a VM with full 
privileges. You can add additional privileged VMs to provide storage, further 
separating tasks between VMs.
Citrix also provides a free version of their Xen product which can be managed 
remotely, but their remote tool was Windows-only last time I checked. I run Xen 
on top of Gentoo and manage everything from the CLI.

KVM runs inside a Linux kernel, and that instance automatically is the host. (I 
don't know enough to properly compare the two; there are plenty of comparisons 
online, though most are biased to one or the other.)

Both Xen and KVM can be managed with other tools like virt-manager. I don't as 
I don't like the way those tools want to manage the whole environment.

As for the use of these systems, looking only at companies I have experience 
with:

- VMWare is often used for virtualising servers
- Xen (Citrix) is often used to provide Virtual Desktop to users
- KVM is used by most VPS providers
- Virtualbox is used for training sessions

I have not come across MS HyperV outside of small businesses that need some
local VMs. These companies tend to put all their infrastructure with one of
the big cloud-VM providers (like AWS, Azure, Google, ...)

--
Joost

Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread n952162

On 06/17/20 06:48, J. Roeleveld wrote:

[J. Roeleveld's virtualisation survey quoted in full; trimmed]

--
Joost




Thank you for this excellent survey/summary.  It tells me that vbox is
good for my current usages, but I should start exposing myself to Xen as
a possible migration path.






[gentoo-user] WARNING: Do not update your system on ~amd64

2020-06-16 Thread Andreas Fink
Hello,
I've noticed a problem with the current PAM update to
sys-libs/pam-1.4.0.
The update adds the passwdqc USE flag to sys-auth/pambase, which pulls
in sys-auth/passwdqc.  However, sys-auth/passwdqc fails to build on my
system, leaving me with an installed sys-libs/pam-1.4.0 that is broken
and does not allow any new login.
The end result is that sys-libs/pam-1.4.0 was successfully merged, but
sys-auth/pambase will not be, due to the build failure in passwdqc.
Disabling the passwdqc USE flag for pambase allows pambase to update
too, and logins work again.
This is a warning to anyone out there who updates daily and runs ~amd64.

One system that I updated and restarted can no longer be logged into
(locally or via ssh).  On another system that I updated, and am
currently writing from, I am still logged in after the broken update,
and I can see the following error message (before disabling the
passwdqc USE flag for the package pambase):
PAM unable to dlopen(/lib64/security/pam_cracklib.so): 
/lib64/security/pam_cracklib.so: cannot open shared object file: No such file 
or directory
PAM adding faulty module: /lib64/security/pam_cracklib.so

After doing a
USE=-passwdqc emerge -a1 pambase
the error messages disappear from the system logs and I am able to
log in to my machine again.  However, if you reboot in the broken state,
you will have a hard time fixing it, since you cannot log in to your
machine anymore and will need a chroot from a live system.
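For a persistent fix (rather than a one-off USE= on the command line), the flag can also be disabled in package.use; a minimal sketch, assuming the usual /etc/portage layout (the file name is just a convention):

```
# /etc/portage/package.use/pambase
# keep passwdqc off until sys-auth/passwdqc builds again (bug 728528)
sys-auth/pambase -passwdqc
```

After adding this, a plain `emerge -a1 pambase` picks the flag change up without needing the environment override.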

The bug report for passwdqc is here:
https://bugs.gentoo.org/728528

Cheers
Andreas



Re: [gentoo-user] emerge -u fails with "OSError: [Errno 12] Cannot allocate memory" [ RESOLVED, kinda ]

2020-06-16 Thread n952162

On 06/16/20 21:35, J. Roeleveld wrote:

On 16 June 2020 20:31:56 CEST, n952162  wrote:

Admonished to get everything updated, I turned to my raspberry pi with
Sakaki's binary image.  Synced and updated portage with no problem.
Then I did an emerge -u @world and got (after *hours* of dependency
checking):


Jobs: 0 of 206 complete, 1 running Load avg: 2.84, 3.44, 3.85
Emerging binary (1 of 206) sys-libs/glibc-2.31-r5::gentoo
Jobs: 0 of 206 complete, 1 running Load avg: 2.84, 3.44, 3.85
Jobs: 0 of 206 complete Load avg: 3.60, 3.54, 3.87
Installing (1 of 206) sys-libs/glibc-2.31-r5::gentoo
Jobs: 0 of 206 complete Load avg: 3.60, 3.54, 3.87

Exception in callback AsynchronousTask._exit_listener_cb(>)
handle: >)>
Traceback (most recent call last):
   File "/usr/lib64/python3.6/asyncio/events.py", line 145, in _run
     self._callback(*self._args)
   File
"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
201, in _exit_listener_cb
     listener(self)
   File
"/usr/lib64/python3.6/site-packages/_emerge/BinpkgPrefetcher.py", line
31, in _fetcher_exit
     self._start_task(verifier, self._verifier_exit)
   File "/usr/lib64/python3.6/site-packages/_emerge/CompositeTask.py",
line 113, in _start_task
     task.start()
   File
"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
30, in start
     self._start()
   File "/usr/lib64/python3.6/site-packages/_emerge/BinpkgVerifier.py",
line 59, in _start
     self._digester_exit)
   File "/usr/lib64/python3.6/site-packages/_emerge/CompositeTask.py",
line 113, in _start_task
     task.start()
   File
"/usr/lib64/python3.6/site-packages/_emerge/AsynchronousTask.py", line
30, in start
     self._start()
   File
"/usr/lib64/python3.6/site-packages/portage/util/_async/FileDigester.py",
line 30, in _start
     ForkProcess._start(self)
   File "/usr/lib64/python3.6/site-packages/_emerge/SpawnProcess.py",
line 112, in _start
     retval = self._spawn(self.args, **kwargs)
   File
"/usr/lib64/python3.6/site-packages/portage/util/_async/ForkProcess.py",
line 24, in _spawn
     pid = os.fork()
   File "/usr/lib64/python3.6/site-packages/portage/__init__.py", line
246, in __call__
     rval = self._func(*wrapped_args, **wrapped_kwargs)
OSError: [Errno 12] Cannot allocate memory

What's the recommended course of action here?

Log attached.

Suggestion:
1) ensure you only have 1 job running and absolutely no parallel builds. "--jobs 
1" for both emerge and make

2) get SWAP, preferably on USB stick/harddrive so as not to kill the SD card.

Because Raspberry Pis are low on memory and have very specific uses, I tend not 
to bother with Gentoo on them.

--
Joost
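The single-job suggestion above can also be made permanent in make.conf; a minimal sketch (the values are illustrative for a low-memory board):

```
# /etc/portage/make.conf (fragment) -- serialize builds on low-memory hardware
MAKEOPTS="-j1"
EMERGE_DEFAULT_OPTS="--jobs=1 --load-average=1"
```

With this in place, both make and emerge stay at one job without having to remember the command-line options each time.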


I started getting a harddisk ready for a swap area, but then decided to
try emerging @system as a first step (using the -j 1 option this time -
thank you), and that completed, as did the subsequent emerge of @world.

It completed successfully (as I interpret it) with only 1 package being
emerged, but it also output these messages:

    WARNING: One or more updates/rebuilds have been skipped due to
   a dependency conflict:

    xfce-base/libxfce4ui:0


    !!! The following binary packages have been ignored due to non
   matching USE:

    =sys-devel/clang-9.0.1 python_single_target_python3_6
   -python_single_target_python3_7
    =sys-devel/clang-8.0.1 python_targets_python2_7


    !!! The following binary packages have been ignored due to
   changed dependencies:

 mail-mta/ssmtp-2.64-r3::gentoo
 sys-devel/llvm-9.0.1::gentoo
 sys-devel/llvm-8.0.1::gentoo


  Unclear to me is:

   * why the dependency conflict for xfce-base/libxfce4ui did not prevent
     the emerge, when dependency conflicts normally seem to do so
   * why the non-matching USE flags didn't cause the emerge to break this time
   * what the difference is between:
     - the WARNING above
     - the two "!!!" events
     - terminating errors in general






Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread Dale
Dale wrote:
> Howdy,
>
> I finally bought an 8TB drive.  It is used but they claim only a short
> duration.  Still, I want to test it to be sure it is in grade A shape
> before putting a lot of data on it and depending on it.  I am familiar
> with some tools already.  I know about SMART but it is not always
> 100%.  It seems to catch most problems but not all.  I'm familiar with
> dd and writing all zeroes or random data to it to see if it can in fact
> write to all parts of the drive, but it is slow.  It can take a long
> time to write and fill up an 8TB drive.  Days maybe??  I googled and
> found a new tool but not sure how accurate it is since I've never used
> it before.  The command is badblocks.  It is installed on my system so
> I'm just curious as to what it will catch that others won't.  Is it
> fast or slow like dd?
>
> I plan to run the SMART test anyway.  It'll take several hours but I'd
> like to run some other test to catch errors that SMART may miss.  If
> there is such a tool that does that.  If you bought a used drive, what
> would you run other than the long version of SMART and its test? 
> Would you spend the time to dd the whole drive?  Would badblocks be a
> better tool?  Is there another better tool for this? 
>
> While I'm at it, when running dd, I have zero and random in /dev. 
> Where does a person obtain a one?  In other words, I can write all
> zeros, I can write all random but I can't write all ones since it
> isn't in /dev.  Does that even exist?  Can I create it myself
> somehow?  Can I download it or install it somehow?  I been curious
> about that for a good long while now.  I just never remember to ask. 
>
> When I add this 8TB drive to /home, I'll have 14TBs of space.  If I
> leave the 3TB drive in instead of swapping it out, I could have about
> 17TBs of space.  O_O 
>
> Thanks to all.
>
> Dale
>
> :-)  :-) 
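
On the /dev/one question in the quoted message: there is no such device node, but a stream of all-ones bytes (0xFF) can be synthesized from /dev/zero with tr. A small sketch (the output path is just an illustration):

```shell
# There is no /dev/one; build a 1 MiB file of 0xFF bytes from /dev/zero.
# tr rewrites every NUL byte (\0) to 0xFF (\377 in octal); head caps the size.
tr '\0' '\377' < /dev/zero | head -c 1048576 > /tmp/ones.bin
```

The same pipeline pointed at a disk with dd (e.g. `tr '\0' '\377' < /dev/zero | dd of=/dev/sdX bs=1M`) would write all ones to the whole device, destroying its contents, so treat it exactly like the dd-with-zeroes test.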


Update.  I got a lot of info and suggestions from the replies.  Thanks
to all for those.  The drive has passed all the tests so far.  I ran
short and long SMART self-tests.  Results:


root@fireball / # smartctl -l selftest /dev/sde
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-5.6.7-gentoo] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%            24592  -
# 2  Extended offline    Completed without error       00%            24592  -
# 3  Short offline       Completed without error       00%            24213  -
# 4  Short offline       Completed without error       00%            23493  -
# 5  Short offline       Completed without error       00%            22749  -
# 6  Short offline       Completed without error       00%            22054  -
# 7  Short offline       Completed without error       00%            21310  -
# 8  Short offline       Completed without error       00%            20566  -
# 9  Short offline       Completed without error       00%            19846  -
#10  Short offline       Completed without error       00%            19101  -
#11  Short offline       Completed without error       00%            18381  -
#12  Short offline       Completed without error       00%            17637  -
#13  Short offline       Completed without error       00%            16893  -
#14  Short offline       Completed without error       00%            16173  -
#15  Short offline       Completed without error       00%            12108  -
#16  Short offline       Completed without error       00%            11940  -
#17  Short offline       Completed without error       00%            11772  -
#18  Short offline       Completed without error       00%            11604  -
#19  Short offline       Completed without error       00%            11436  -
#20  Short offline       Completed without error       00%            11268  -
#21  Short offline       Completed without error       00%            11100  -

root@fireball / #


I then ran badblocks to test it.  This is the results of it. 


root@fireball / # badblocks -b 4096 -s -v /dev/sde
Checking blocks 0 to 1953506645
Checking for bad blocks (read-only test):
done
Pass completed, 0 bad blocks found. (0/0/0 errors)
root@fireball / #

It doesn't show it now but I had it show the progress including run
time.  It took about 15 hours to run on this 8TB drive.  If anyone wants
to test a drive in the future, that may help estimate the amount of time
to run this test. 

I tried the conveyance test but this drive doesn't support it.  Since it
shows no errors and it passed the SMART tests as well, I'm thinking it
is time to put data on the thing.  Off to the LVM manual. 

Thanks to all for the tips, tricks and suggestions. 

Dale

:-)  :-) 


Re: [gentoo-user] virtualbox in headless configuration broken after update: delayed echo [ RESOLVED, kinda ]

2020-06-16 Thread J. Roeleveld
On Wednesday, June 17, 2020 7:42:30 AM CEST n952162 wrote:
> On 06/17/20 06:48, J. Roeleveld wrote:
> > On Tuesday, June 16, 2020 11:08:23 PM CEST n952162 wrote:
> >> On 06/16/20 22:36, J. Roeleveld wrote:



> > I have not come across MS HyperV outside of small businesses that need
> > some
> > local VMs. These companies tend to put all their infrastructure with one
> > of
> > the big cloud-VM providers (Like AWS, Azure, Googles,...)
> > 
> > --
> > Joost
> 
> Thank you for this excellent survey/summary.  It tells me that vbox is
> good for my current usages, but I should start exposing myself to Xen as
> a possible migration path.

I would actually suggest reading up on both Xen and KVM and trying both on 
spare machines.
See which best fits your requirements and also see if the existing management 
tools actually do things in a way that you can work with.

My systems have evolved over the past 25-odd years, and I started using Xen to 
reduce the number of physical systems I had running. At the time, VMWare was 
expensive and KVM didn't exist yet; for a few years after KVM appeared, it was 
still missing some features that are important to me (I am not sure whether it 
has them yet, as I have not found anything about them for KVM):
- limit the memory footprint of the host-VM during boot
- dedicate CPU-core(s) to the host

Limiting the memory size is important, because several parts of the kernel (and 
userspace) base their memory settings on this amount. This is really noticeable 
when the host thinks it has 384GB available while 370GB is passed to VMs.

Dedicating CPU-cores exclusively to the host means the host will always have 
CPU-resources available. This is necessary because all the context-switching 
is handled by the host and if this stalls, the whole environment is impacted.
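For Xen, the two items above correspond to hypervisor boot options (dom0_mem, dom0_max_vcpus and dom0_vcpus_pin are real Xen options; the values and the GRUB variable shown are illustrative and depend on your bootloader setup):

```
# /etc/default/grub (fragment) -- cap dom0 memory and pin dedicated cores to it
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M dom0_max_vcpus=2 dom0_vcpus_pin"
```

Regenerate the GRUB configuration after editing so the options reach the hypervisor command line.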

For a lab system, I was also missing the ability to save the full state of a 
VM in a snapshot. All the howtos and guides I can find online only talk about 
snapshotting the disks, not the memory as well. You will especially notice 
this issue if you are used to Virtualbox: when only the disk is snapshotted, 
restoring the snapshot is equivalent to having literally pulled the plug on 
your VM at that moment.

For KVM, I have found a few hints that this was planned, but nothing more. 
Virt-manager does not (last time I looked) support Xen's functionality of 
storing the memory when creating snapshots either, which is why I don't use 
it even for my lab/testing server.

As for tips/tricks (below works for Xen, but should also work with KVM):

The way I create a new Gentoo VM is simply to create a new block device 
(either LVM or ZFS), do all the initial steps in a chroot from the host, and 
when it comes to the first reboot, unmount the filesystems, hook the block 
device up to a new VM and start that.
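A minimal sketch of what the resulting guest definition might look like (Xen xl syntax; every name, path and size here is an assumption for illustration):

```
# /etc/xen/gentoo-vm1.cfg -- minimal PV guest backed by an LVM block device
name   = "gentoo-vm1"
memory = 2048
vcpus  = 2
kernel = "/var/lib/xen/kernels/vmlinuz-gentoo"
disk   = [ 'phy:/dev/vg0/gentoo-vm1,xvda,w' ]
vif    = [ 'bridge=br0' ]
root   = "/dev/xvda ro"
```

A guest defined this way would be started with `xl create /etc/xen/gentoo-vm1.cfg`.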

Because of this, I can update the host as follows:
- create new "partitions" for the host-system.
- Install the latest versions, migrate the config across
- reboot into the new host.

If all goes fine, I can clean up the "old" partitions and prepare them for 
next time. If there are issues, I have a working "old" version I can quickly 
revert to.

--
Joost





Re: [gentoo-user] emerge -u fails with "OSError: [Errno 12] Cannot allocate memory" [ RESOLVED, kinda ]

2020-06-16 Thread J. Roeleveld
On Wednesday, June 17, 2020 7:54:22 AM CEST n952162 wrote:
> On 06/16/20 21:35, J. Roeleveld wrote:
> > On 16 June 2020 20:31:56 CEST, n952162  wrote:
> >> Admonished to get everything updated, I turned to my raspberry pi with
> >> Sakaki's binary image.  Synced and updated portage with no problem.
> >> Then I did an emerge -u @world and got (after *hours* of dependency
> >> 
> >> checking):
> >> [emerge progress output and Python traceback trimmed]
> >> 
> >> OSError: [Errno 12] Cannot allocate memory
> >> 
> >> What's the recommended course of action here?
> >> 
> >> Log attached.
> > 
> > Suggestion:
> > 1) ensure you only have 1 job running and absolutely no parallel builds.
> > "--jobs 1" for both emerge and make
> > 
> > 2) get SWAP, preferably on USB stick/harddrive so as not to kill the SD
> > card.
> > 
> > Because Raspberry Pis are low on memory and have very specific uses, I tend
> > not to bother with Gentoo on them.
> > 
> > --
> > Joost
> 
> I started getting a harddisk ready for a swap area, but then decided to
> try to emerge @system as a first step (using the -j 1 option this time -
> thank you) and that completed, as did then the subsequent emerge of @world.

You're welcome.
Please be aware that SD cards are really not designed for the kind of write 
load a Gentoo update causes. I would definitely put at least the build 
directory (usually /var/tmp/portage) on an external drive to avoid excessive 
wear and tear (and catastrophic failures).

> It completed successfully (as I interpret it) with only 1 package being
> emerged, but it also output these messages:
> 
> [emerge WARNING and "!!!" output trimmed]
> 
> Unclear to me is:
> 
>  * why the dependency conflict for xfce-base/libxfce4ui did not prevent
>    the emerge, when dependency conflicts normally seem to do so
>  * why the non-matching USE flags didn't cause the emerge to break this time
>  * what the difference is between:
>    - the WARNING above
>    - the two "!!!" events
>    - terminating errors in general