Re: [gentoo-user] Update is blocked by its previous version...?

2017-12-07 Thread tuxic

Blocking the package which gets blocked by its previous one?
Is this a workaround or a solution?
Feels weird...


On 12/07 09:53, Jalus Bilieyich wrote:
> Put in package.mask:
> 
> >=app-emulation/containerd-1.0.0:0/0
> 
> On 12/07/2017 08:36 PM, tu...@posteo.de wrote:
> > Hi,
> > 
> > this morning, in its endless wisdom, emerge spake to me:
> > 
> > | WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict:
> > | 
> > | app-emulation/containerd:0
> > | 
> > |   (app-emulation/containerd-1.0.0:0/0::gentoo, ebuild scheduled for merge) conflicts with
> > | ~app-emulation/containerd-1.0.0_beta2_p20171019 required by (app-emulation/docker-17.11.0:0/0::gentoo, installed)
> > 
> > 
> > I removed containerd yesterday and installed it again only to get the same
> > error this morning again.
> > 
> > Any way around this?
> > 
> > Cheers
> > Meino
> > 
> > 
> > 
> > 
> 



Re: [gentoo-user] Update is blocked by its previous version...?

2017-12-07 Thread Jalus Bilieyich
Put in package.mask:

>=app-emulation/containerd-1.0.0:0/0
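
In case it saves someone a lookup: a minimal sketch, assuming the usual
/etc/portage layout (package.mask may be a single file or a directory of
files):

  # /etc/portage/package.mask
  # keep the final 1.0.0 release masked until docker stops pinning the beta
  >=app-emulation/containerd-1.0.0:0/0

Afterwards "emerge -puDN @world" should resolve docker-17.11.0 against the
installed ~containerd-1.0.0_beta2_p20171019 without the skipped-update
warning.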

On 12/07/2017 08:36 PM, tu...@posteo.de wrote:
> Hi,
> 
> this morning, in its endless wisdom, emerge spake to me:
> 
> | WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict:
> | 
> | app-emulation/containerd:0
> | 
> |   (app-emulation/containerd-1.0.0:0/0::gentoo, ebuild scheduled for merge) conflicts with
> | ~app-emulation/containerd-1.0.0_beta2_p20171019 required by (app-emulation/docker-17.11.0:0/0::gentoo, installed)
> 
> 
> I removed containerd yesterday and installed it again only to get the same
> error this morning again.
> 
> Any way around this?
> 
> Cheers
> Meino
> 
> 
> 
> 



Re: [gentoo-user] OT: git, how to compare a repo with a loose tree

2017-12-07 Thread Michael Orlitzky
On 12/07/2017 09:58 PM, Ian Zimmerman wrote:
> I would like to use "git diff" to show differences between the
> current state of a git repository and a normal directory tree somewhere
> on the filesystem, i.e. one without a .git subdirectory.  This is proving
> surprisingly hard to do.

If "git diff" isn't important, I was able to fake something close but
not quite like it:

  colordiff --recursive \
    --suppress-common-lines \
    --unified \
    --exclude=.git \
    --new-file \
    <repo-dir> \
    <loose-tree> \
  | most

That uses app-misc/colordiff to colorize the diff output, and
sys-apps/most as my pager.
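
Another angle that might do what Ian wants, if I remember the semantics
right: point git's work tree at the loose directory, so only tracked paths
are compared and the .git noise disappears (paths are examples):

  cd /path/to/repo
  git --work-tree=/somedir diff HEAD

The caveat is that files present in /somedir but untracked in the repo
won't show up at all.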



[gentoo-user] OT: git, how to compare a repo with a loose tree

2017-12-07 Thread Ian Zimmerman
I would like to use "git diff" to show differences between the
current state of a git repository and a normal directory tree somewhere
on the filesystem, i.e. one without a .git subdirectory.  This is proving
surprisingly hard to do.

git diff has a documented mode to compare general "paths" as they call
it: the --no-index option.  But when I try it like this inside a git repo,

 git diff --no-index . /somedir

git apparently "forgets" that the current directory is a repo, and just
basically apes diff -r.  This means it doesn't know which files are
tracked, and in particular it reports every freaking file under ./.git
as deleted.  And there is no exclude option that I see.  Argh!  How can
I get around this?

If it matters: I'm fine with assuming the repo is clean, i.e. no
uncommitted changes, so the current state can be represented as any of:
working tree, "index" or HEAD.

-- 
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
To reply privately _only_ on Usenet, fetch the TXT record for the domain.



[gentoo-user] Update is blocked by its previous version...?

2017-12-07 Thread tuxic
Hi,

this morning, in its endless wisdom, emerge spake to me:

| WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict:
| 
| app-emulation/containerd:0
| 
|   (app-emulation/containerd-1.0.0:0/0::gentoo, ebuild scheduled for merge) conflicts with
| ~app-emulation/containerd-1.0.0_beta2_p20171019 required by (app-emulation/docker-17.11.0:0/0::gentoo, installed)


I removed containerd yesterday and installed it again only to get the same
error this morning again.

Any way around this?

Cheers
Meino






Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Wols Lists
On 07/12/17 22:35, Frank Steinmetzger wrote:
>> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
>> there.)

> Ok, wasn’t aware of that. I thought I read in a ZFS article that this was a
> special thing.

Say you've got a four-drive raid-6, it'll be something like

data1   data2   parity1 parity2
data3   parity3 parity4 data4
parity5 parity6 data5   data6

The only thing to watch out for (and zfs is likely the same) is that if a
file fits inside a single chunk, it will be recoverable from a single
drive. And I think chunks can be anything up to 64MB.
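
If you want to check what your own array actually uses, something like
this should show it (device name is an example):

  mdadm --detail /dev/md0 | grep 'Chunk Size'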

Cheers,
Wol



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Rich Freeman
On Thu, Dec 7, 2017 at 11:04 AM, Frank Steinmetzger  wrote:
> On Thu, Dec 07, 2017 at 10:26:34AM -0500, Rich Freeman wrote:
>
>> […]  They want 1GB/TB RAM, which rules out a lot of the cheap ARM-based
>> solutions.  Maybe you can get by with less, but finding ARM systems with
>> even 4GB of RAM is tough, and even that means only one hard drive per
>> node, which means a lot of $40+ nodes to go on top of the cost of the
>> drives themselves.
>
> You can't really get ECC on ARM, right? So M-ITX was the next best choice. I
> have a tiny (probably one of the smallest available) M-ITX case for four
> 3.5″ bays and an internal 2.5″ mount:
> https://www.inter-tech.de/en/products/ipc/storage-cases/sc-4100
>

I don't think ECC is readily available on ARM (most of those boards
are SoCs where the RAM is integral and can't be expanded).  If CephFS
were designed with end-to-end checksums that wouldn't really matter
much, because the client would detect any error in a storage node and
could obtain a good copy from another node and trigger a resilver.
However, I don't think Ceph is quite there: checksums are used at
various points, but there appear to be gaps where no checksum is
protecting the data.  That is one of the things I don't like about it.

If I were designing the checksums for it I'd probably have the client
compute the checksum and send it with the data, then at every step the
checksum is checked, and stored in the metadata on permanent storage.
Then when the ack goes back to the client that the data is written the
checksum would be returned to the client from the metadata, and the
client would do a comparison.  Any retrieval would include the client
obtaining the checksum from the metadata and then comparing it to the
data from the storage nodes.  I don't think this approach would really
add any extra overhead (the metadata needs to be recorded when writing
anyway, and read when reading anyway).  It just ensures there is a
checksum on separate storage from the data and that it is the one
captured when the data was first written.  A storage node could be
completely unreliable in this scenario as it exists apart from the
checksum being used to verify it.  Storage nodes would still do their
own checksum verification anyway since that would allow errors to be
detected sooner and reduce latency, but this is not essential to
reliability.
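
To make the idea concrete, a toy sketch of the client side in shell - the
store/fetch helpers are hypothetical stand-ins, not Ceph calls:

  payload=/tmp/object.bin
  want=$(sha256sum "$payload" | cut -d' ' -f1)

  # hypothetical write: the cluster acks with the checksum it recorded in
  # its metadata, and the client compares it with its own
  got=$(store_object "$payload")
  [ "$got" = "$want" ] || echo "write corrupted en route" >&2

  # hypothetical read: verify the returned data against the metadata
  # checksum, so a lying storage node is caught at the client
  fetch_object objid > /tmp/readback.bin
  meta=$(fetch_checksum objid)
  [ "$(sha256sum /tmp/readback.bin | cut -d' ' -f1)" = "$meta" ] \
      || echo "storage node returned bad data" >&2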

Instead I think Ceph does not store checksums in the metadata.  The
client checksum is used to verify accurate transfer over the network,
but then the various nodes forget about it, and record the data.  If
the data is backed on ZFS/btrfs/bluestore then the filesystem would
compute its own checksum to detect silent corruption while at rest.
However, if the data were corrupted by faulty software or memory
failure after it was verified upon reception but before it was
re-checksummed prior to storage then you would have a problem.  In
that case a scrub would detect non-matching data between nodes but
with no way to determine which node is correct.

If somebody with more knowledge of Ceph knows otherwise I'm all ears,
because this is one of those things that gives me a bit of pause.
Don't get me wrong - most other approaches have the same issues, and I
can reduce some of that risk with ECC, but that isn't practical
when you want many RAM-intensive storage nodes in the solution.

-- 
Rich



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Frank Steinmetzger
On Thu, Dec 07, 2017 at 09:49:29PM +0000, Wols Lists wrote:

> > So in case I ever need to send in a drive for repair/replacement, no one can
> > read from it (or only in tiny bits'n'pieces from a hexdump), because each
> > disk contains a mix of data and parity blocks.
> >
> > I think I'm finally sold. :)
> > And with that, good night.
>
> So you've never heard of LUKS?

Sure thing, my laptop’s whole SSD is LUKSed and so are all my other home and
backup partitions. But encrypting ZFS is different, because every disk needs
to be encrypted separately since there is no separation between the FS and
the underlying block device.

This will result in a big computational overhead, choking my poor Celeron.
When I benchmarked reading from a single LUKS container in a ramdisk, it
managed around 160 MB/s IIRC. I might give it a try over the weekend before
I migrate my data, but I’m not expecting miracles. Should have bought an i3
for that.
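
(If you only want raw cipher throughput without setting up a container,
cryptsetup has a built-in benchmark - a quick sketch:

  cryptsetup benchmark --cipher aes-xts-plain64

It measures in-memory en-/decryption, so treat it as an upper bound rather
than real disk numbers.)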

> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
> there.)

Ok, wasn’t aware of that. I thought I read in a ZFS article that this was a
special thing.

-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

This is no signature.




Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Wols Lists
On 07/12/17 21:37, Frank Steinmetzger wrote:
> Ooooh, I just came up with another good reason for raidz over mirror:
> I don't encrypt my drives because they don't hold sensitive stuff. (AFAIK
> native ZFS encryption is available in Oracle ZFS, so it might eventually
> come to the Linux world).
> 
> So in case I ever need to send in a drive for repair/replacement, no one can
> read from it (or only in tiny bits'n'pieces from a hexdump), because each
> disk contains a mix of data and parity blocks.
> 
> I think I'm finally sold. :)
> And with that, good night.

So you've never heard of LUKS?

GPT
LUKS
MD-RAID
Filesystem

Simple stack so if you ever have to pull a disk, just delete the LUKS
key from it and everything from that disk is now random garbage.
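
A sketch of that pull-a-disk step (device name is an example):

  cryptsetup luksErase /dev/sdX2      # destroy all keyslots on the disk
  # or drop just one passphrase while the others keep working:
  cryptsetup luksRemoveKey /dev/sdX2

With no surviving keyslot the master key is unrecoverable and the
ciphertext really is random garbage.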

(Oh - and md raid-5/6 also mix data and parity, so the same holds true
there.)

Cheers,
Wol



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Frank Steinmetzger
On Wed, Dec 06, 2017 at 07:29:08PM -0500, Rich Freeman wrote:
> On Wed, Dec 6, 2017 at 7:13 PM, Frank Steinmetzger  wrote:
> > On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> >>
> >> IMO the cost savings for parity RAID trumps everything unless money
> >> just isn't a factor.
> >
> > Cost saving compared to what? In my four-bay-scenario, mirror and raidz2
> > yield the same available space (I hope so).
> >
> 
> Sure, if you only have 4 drives and run raid6/z2 then it is no more
> efficient than mirroring.  That said, it does provide more security
> because raidz2 can tolerate the failure of any two disks, while
> 2xraid1 or raid10 can tolerate only half of the combinations of two
> disks.


Ooooh, I just came up with another good reason for raidz over mirror:
I don't encrypt my drives because they don't hold sensitive stuff. (AFAIK
native ZFS encryption is available in Oracle ZFS, so it might eventually
come to the Linux world).

So in case I ever need to send in a drive for repair/replacement, no one can
read from it (or only in tiny bits'n'pieces from a hexdump), because each
disk contains a mix of data and parity blocks.

I think I'm finally sold. :)
And with that, good night.

-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“I think Leopard is a much better system [than Windows Vista] … but OS X in
some ways is actually worse than Windows to program for.  Their file system is
complete and utter crap, which is scary.” – Linus Torvalds




Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Wols Lists
On 07/12/17 20:17, Richard Bradfield wrote:
> On Thu, Dec 07, 2017 at 06:35:16PM +0000, Wols Lists wrote:
>> On 07/12/17 09:52, Richard Bradfield wrote:
>>> I did also investigate USB3 external enclosures, they're pretty
>>> fast these days.
>>
>> AARRGGH !!!
>>
>> If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!
>>
>> I don't know the details, but I gather the problems are very similar to
>> the timeout problem, but much worse.
>>
>> I know the wiki says you can "get away" with USB, but only for a broken
>> drive, and only when recovering *from* it.
>>
>> Cheers,
>> Wol
>>
> 
> I'm using ZFS on Linux, does that make you any less terrified? :)
> 
> I never ended up pursuing the USB enclosure, because disks got bigger
> faster than I needed more storage, but I'd be interested in hearing if
> there are real issues with trying to mount drive arrays over XHCI, given
> the failure of eSATA to achieve wide adoption it looked like a good
> route for future expansion.
> 
Sorry, not a clue. I don't know zfs.

The problem with USB, as I understand it, is that USB itself times out.
If that happens, there is presumably a tear-down/setup delay, which is
the timeout problem, which upsets mdadm.

My personal experience is that the USB protocol also seems vulnerable to
crashing and losing drives.

In the --replace scenario, the fact that you are basically streaming
from the old drive to the new one seems not to trip over the problem,
but anything else is taking rather unnecessary risks ...

As for eSATA, I want to get hold of a JBOD enclosure, but I'll then need
to get a PCI card with an external port-multiplier eSATA capability. I
suspect one of the reasons it didn't take off was the multiplicity of
specifications, such that people probably bought add-ons that were
"unfit for purpose" because they didn't know what they were doing, or
the mobo suppliers cut corners so the on-board ports were unfit for
purpose, etc etc. So the whole thing sank with a bad rep it didn't
deserve. Certainly, when I've been looking, the situation is, shall we
say, confusing ...

Cheers,
Wol




Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Richard Bradfield

On Thu, Dec 07, 2017 at 06:35:16PM +0000, Wols Lists wrote:
> On 07/12/17 09:52, Richard Bradfield wrote:
> > I did also investigate USB3 external enclosures, they're pretty
> > fast these days.
> 
> AARRGGH !!!
> 
> If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!
> 
> I don't know the details, but I gather the problems are very similar to
> the timeout problem, but much worse.
> 
> I know the wiki says you can "get away" with USB, but only for a broken
> drive, and only when recovering *from* it.
> 
> Cheers,
> Wol



I'm using ZFS on Linux, does that make you any less terrified? :)

I never ended up pursuing the USB enclosure, because disks got bigger
faster than I needed more storage, but I'd be interested in hearing if
there are real issues with trying to mount drive arrays over XHCI, given
the failure of eSATA to achieve wide adoption it looked like a good
route for future expansion.

--
Richard



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Wols Lists
On 07/12/17 14:53, Frank Steinmetzger wrote:
> When I configured my kernel the other day, I discovered network block
> devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> can do zpool replace with the drive-to-be-replaced still in the pool, which
> improves resilver read distribution and thus lessens the probability of a
> failure cascade.

Or with mdadm, there's "mdadm --replace". If you want to swap a drive
(rather than replace a failed drive), this both preserves redundancy and
reduces the stress on the array by doing disk-to-disk copy rather than
recalculating the new disk.
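
For reference, a sketch of that flow with example device names:

  mdadm /dev/md0 --add /dev/sde1        # new disk goes in as a spare
  mdadm /dev/md0 --replace /dev/sdc1    # copy sdc1 onto the spare
  mdadm /dev/md0 --remove /dev/sdc1     # old disk is marked faulty when done

Redundancy never drops, because sdc1 stays active until the copy finishes.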

Cheers,
Wol



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Wols Lists
On 07/12/17 09:52, Richard Bradfield wrote:
> I did also investigate USB3 external enclosures, they're pretty
> fast these days.

AARRGGH !!!

If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!

I don't know the details, but I gather the problems are very similar to
the timeout problem, but much worse.

I know the wiki says you can "get away" with USB, but only for a broken
drive, and only when recovering *from* it.

Cheers,
Wol



Re: [gentoo-user] preparing for profile switch -- major problem

2017-12-07 Thread Michael Orlitzky
On 12/07/2017 09:04 AM, John Covici wrote:
> 
> I have it set to 120 by default.
> 

Um... try "emerge -uDN1 perl" instead of @world? The perl
upgrade was tough, IIRC because it needed a large backtrack value to
succeed. But back when the perl upgrade was "new", a @world update was
simple enough to succeed. Now you've got all the python and profile
stuff going on too, so it might be necessary to upgrade one subsystem at
a time.



Re: [gentoo-user] grub-0.97-r16 and profile 17.0 change

2017-12-07 Thread Kai Peter

On 2017-12-07 15:22, Peter Humphrey wrote:
> On Thursday, 7 December 2017 12:04:08 GMT Kai Peter wrote:
> > On 2017-12-06 13:28, Peter Humphrey wrote:
> > > On Sunday, 3 December 2017 15:12:21 GMT Mick wrote:
> > >> On 03-12-2017, 10:57:33, Peter Humphrey wrote:
> 
> --->8
> 
> > > Sys-boot/grub-0.97-r17 compiled and installed all right, as a package,
> > > but when I went to install it to the MBR I got an error complaining of a
> > > mismatch or corruption in stage X. The wording was something like that,
> > > and I forget the value of X. There was no mention of disk space, and the
> > > boot partition is 2GB, so I think it's something else.
> > > 
> > > Installing sys-boot/grub-static-0.97-r12 instead went smoothly, so I've
> > > left it like that for the moment.
> > > 
> > > Does the team think I should go back to grub-0.97-r17, take proper
> > > records and file a bug report?
> > 
> > I question if this makes sense for a masked ebuild.
> 
> Masked? Not here, it isn't.

In the meaning of 'keyworded':
KEYWORDS="~amd64 ~x86 ~x86-fbsd"
(Why did I know that this would be misunderstood?)

Anyway, it's your choice to file a bug.

> > I'm curious about what was discussed until now. The issue seems to be
> > quite simple to solve.
> > 
> > The build fails, but portage/gcc gives clear info in this case: the
> > option "-nopie" has to be changed to "-no-pie". This option is set in
> > 860_all_grub-0.97-pie.patch. Here is a diff:
> 
> --->8
> 
> Yes, this has been made clear already, but it's not the problem I had.

Didn't find it in this thread - my fault. Btw, kernels don't necessarily
have to be stored in /boot - related to the posts about the size of the
boot partition. And maybe related to your problem: the r17 ebuild
differs heavily in the patches it uses.

> > Maybe the easiest way is to create a new grub-patches package, but there
> > are other ways to change this too. I expect upstream will change
> > this soon - within the remaining 5 weeks ;-).
> > 
> > Another thing: I question that grub-legacy has to be rebuilt at all.
> > I'm pretty sure it is safe to remove it from the world file or comment
> > it out.
> 
> Then the first emerge -c will remove it from the system.

Does anybody run emerge -c blindly w/o reviewing the packages first? If
yes, compile it outside of portage. Or back up the required files, do
emerge -c, and restore the backed-up files afterwards. Or ...

> > Anyhow, upgrading to grub2 is IMHO the right way. There are some
> > examples given in parallel threads how to write a grub.cfg by hand - and
> > keep it simple :-). Then usually nothing else than the grub2 binary and
> > grub2-install is required.
> 
> Long-standing readers may remember that I have reasons for avoiding
> grub-2. I still think it's a monstrosity and I'd much prefer never to
> have to wrestle with it again.

Now, AFAIK, grub2 wants to be a universal boot loader for different
architectures, whereas grub-legacy is for PCs only. If you still want to
rely on grub-legacy, the best solution would be to take over the
project or fork it.

> On the other hand, I suppose I could have another go at writing my own
> grub.cfg, just for the one little Atom box, if only for a quiet life.

--
Sent with eQmail-1.10



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Frank Steinmetzger
On Thu, Dec 07, 2017 at 10:26:34AM -0500, Rich Freeman wrote:
> On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger  wrote:
> >
> > I see. I'm always looking for ways to optimise expenses and cut down on
> > environmental footprint by keeping stuff around until it really breaks. In
> > order to increase capacity, I would have to replace all four drives, whereas
> > with a mirror, two would be enough.
> >
> 
> That is a good point.  Though I would note that you can always replace
> the raidz2 drives one at a time - you just get zero benefit until
> they're all replaced.  So, if your space use grows at a rate lower
> than the typical hard drive turnover rate that is an option.
> 
> >
> > When I configured my kernel the other day, I discovered network block
> > devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> > can do zpool replace with the drive-to-be-replaced still in the pool, which
> > improves resilver read distribution and thus lessens the probability of a
> > failure cascade.
> >
> 
> If you want to get into the network storage space I'd keep an eye on
> cephfs.

No, I was merely talking about the use case of replacing drives on-the-fly
with the limited hardware available (all slots are occupied). It was not
about expanding my storage beyond what my NAS case can provide.

Resilvering is risky business, more so with big drives and especially once
they get older. That's why I was talking about adding the new drive
externally, which allows me to use all old drives during resilvering. Once
it is resilvered, I install it physically.

> […]  They want 1GB/TB RAM, which rules out a lot of the cheap ARM-based
> solutions.  Maybe you can get by with less, but finding ARM systems with
> even 4GB of RAM is tough, and even that means only one hard drive per
> node, which means a lot of $40+ nodes to go on top of the cost of the
> drives themselves.

No need to overshoot. It's a simple media archive and I'm happy with what I
have, apart from a few shortcomings of the case regarding quality and space.
My main goal was reliability, hence ZFS, ECC, and a Gold-rated PSU. They say
RAID is not a backup. For me it is -- in case of disk failure, which is my
main dread.

You can't really get ECC on ARM, right? So M-ITX was the next best choice. I
have a tiny (probably one of the smallest available) M-ITX case for four
3.5″ bays and an internal 2.5″ mount:
https://www.inter-tech.de/en/products/ipc/storage-cases/sc-4100

Tata...
-- 
I cna ytpe 300 wrods pre mniuet!!!



Re: [gentoo-user] preparing for profile switch -- major problem

2017-12-07 Thread John Covici
On Thu, 07 Dec 2017 09:37:56 -0500,
Alan McKinnon wrote:
> 
> On 07/12/2017 07:44, John Covici wrote:
> > Hi. In preparing for the profile switch and the emerge -e world, I
> > have run into a serious problem with perl.  I think I saw on this list
> > where perl 5.26 was going to have problems -- maybe until it is
> > stabilized -- but if I mask it off,  I get the following:
> 
> Unmask the latest perl and update world with the old profile
> 
> This will go smoothly as the perl team did an excellent job making sure
> everything perl-ish in the tree works in concert with everything else.
> However, I do recall that trying to do it with a partial world update
> didn't work - too many affected packages - so just perl + deps was not
> enough. Rather, do a normal world update.
> 
> Once done, then switch your profile to 17.0 and do the giant emerge -e
> world that requires.
> 
> tl;dr
> 
> the news message about perl might make you think the sky will fall on
> your head and all your kittens will die, this is actually not true.
> 
> The v5.26 updates mostly had to do with perl's search path for perl
> modules. Just like how we've frowned on having "." in the shell PATH for
> decades, perl now implemented something like that for modules too. The
> possible problem anticipated is that modules would now break if a
> modules could not find another module it needs. But this really only
> applied to modules outside the perl distribution itself. And the Gentoo
> perl team dealt with all that already
> 
> It's widely discussed all over the internet in all the usual places, you
> are only affected if one of more of these applies:
> 
> - you write perl modules yourself (solution: update your own code)
> - you use many ancient cpan modules that no-one touched for yonks
> (solution: maybe use currently supported modules instead)
> - you heavily rely on a third party perl app that might not have been
> updated - musicbrainz and radiator come to mind as examples (solution:
> harass your app vendor)
> 
> -- 
> Alan McKinnon
> alan.mckin...@gmail.com
> 
> 

So, I have already switched to the new profile. Should I switch back and
do a regular world update, or do the world update with the new profile?
I am compiling gcc as I write; it's not finished yet, so I can interrupt
it if necessary.

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Rich Freeman
On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger  wrote:
>
> I see. I'm always looking for ways to optimise expenses and cut down on
> environmental footprint by keeping stuff around until it really breaks. In
> order to increase capacity, I would have to replace all four drives, whereas
> with a mirror, two would be enough.
>

That is a good point.  Though I would note that you can always replace
the raidz2 drives one at a time - you just get zero benefit until
they're all replaced.  So, if your space use grows at a rate lower
than the typical hard drive turnover rate that is an option.

>
> When I configured my kernel the other day, I discovered network block
> devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> can do zpool replace with the drive-to-be-replaced still in the pool, which
> improves resilver read distribution and thus lessens the probability of a
> failure cascade.
>

If you want to get into the network storage space I'd keep an eye on
cephfs.  I don't think it is quite to the point where it is a
zfs/btrfs replacement option, but it could get there.  I don't think
the checksums are quite end-to-end, but they're getting better.
Overall stability for cephfs itself (as opposed to ceph object
storage) is not as good from what I hear.  The biggest issue with it
though is RAM use on the storage nodes.  They want 1GB/TB RAM, which
rules out a lot of the cheap ARM-based solutions.  Maybe you can get
by with less, but finding ARM systems with even 4GB of RAM is tough,
and even that means only one hard drive per node, which means a lot of
$40+ nodes to go on top of the cost of the drives themselves.

Right now cephfs mainly seems to appeal to the scalability use case.
If you have 10k servers accessing 150TB of storage and you want that
all in one managed well-performing pool that is something cephfs could
probably deliver that almost any other solution can't (and the ones
that can cost WAY more than just one box running zfs on a couple of
RAIDs).


-- 
Rich



Re: [gentoo-user] grub-0.97-r16 and profile 17.0 change

2017-12-07 Thread Helmut Jarausch

On 12/07/2017 03:22:30 PM, Peter Humphrey wrote:
Long-standing readers may remember that I have reasons for avoiding  
grub-2.

I still think it's a monstrosity and I'd much prefer never to have to
wrestle with it again.

On the other hand, I suppose I could have another go at writing my own
grub.cfg, just for the one little Atom box, if only for a quiet life.


I've solved my grub2 problems as follows.

I've written a template file and a tiny Python script which replaces  
the variables

within that template.

If anybody is interested, I will share my template and Python script.

Helmut




Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Frank Steinmetzger
On Thu, Dec 07, 2017 at 09:52:55AM +0000, Richard Bradfield wrote:
> On Thu, 7 Dec 2017, at 09:28, Frank Steinmetzger wrote:
> > > I incorporated ZFS' expansion inflexibility into my planned
> > > maintenance/servicing budget.
> >
> > What was the conclusion? That having no more free slots meant that you
> > can just as well use the inflexible Raidz, otherwise would have gone with
> > Mirror?
>
> Correct, I had gone back and forth between RaidZ2 and a pair of Mirrors.
> I needed the space to be extendable, but I calculated my usage growth
> to be below the rate at which drive prices were falling, so I could
> budget to replace the current set of drives in 3 years, and that
> would buy me a set of bigger ones when the time came.

I see. I'm always looking for ways to optimise expenses and cut down on
environmental footprint by keeping stuff around until it really breaks. In
order to increase capacity, I would have to replace all four drives, whereas
with a mirror, two would be enough.

> I did also investigate USB3 external enclosures, they're pretty
> fast these days.

When I configured my kernel the other day, I discovered network block
devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
can do zpool replace with the drive-to-be-replaced still in the pool, which
improves resilver read distribution and thus lessens the probability of a
failure cascade.

[0] http://www.sharkoon.com/?q=de/node/2171
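
For the archives, the on-the-fly swap itself is a one-liner, with
placeholder names (the old disk stays in service until the resilver
completes):

  zpool replace tank <old-disk> <new-disk>
  zpool status tank     # watch the resilver progress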



Re: [gentoo-user] preparing for profile switch -- major problem

2017-12-07 Thread Alan McKinnon
On 07/12/2017 07:44, John Covici wrote:
> Hi. In preparing for the profile switch and the emerge -e world, I
> have run into a serious problem with perl.  I think I saw on this list
> where perl 5.26 was going to have problems -- maybe until it is
> stabilized -- but if I mask it off,  I get the following:

Unmask the latest perl and update world with the old profile

This will go smoothly as the perl team did an excellent job making sure
everything perl-ish in the tree works in concert with everything else.
However, I do recall that trying to do it with a partial world update
didn't work - too many affected packages - so just perl + deps was not
enough. Rather, do a normal world update.

Once done, then switch your profile to 17.0 and do the giant emerge -e
world that requires.
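
In shell terms the order is roughly this - the profile name is the plain
amd64 example, pick yours from the list:

  emerge -avuDN @world                          # still on the old profile
  eselect profile list                          # find the matching 17.0 entry
  eselect profile set default/linux/amd64/17.0
  emerge -e @world                              # the giant rebuild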

tl;dr

the news message about perl might make you think the sky will fall on
your head and all your kittens will die, this is actually not true.

The v5.26 updates mostly had to do with perl's search path for perl
modules. Just like how we've frowned on having "." in the shell PATH for
decades, perl now implemented something like that for modules too. The
possible problem anticipated is that modules would now break if a
module could not find another module it needs. But this really only
applied to modules outside the perl distribution itself. And the Gentoo
perl team dealt with all that already

It's widely discussed all over the internet in all the usual places, you
are only affected if one or more of these applies:

- you write perl modules yourself (solution: update your own code)
- you use many ancient cpan modules that no-one touched for yonks
(solution: maybe use currently supported modules instead)
- you heavily rely on a third party perl app that might not have been
updated - musicbrainz and radiator come to mind as examples (solution:
harass your app vendor)

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] grub-0.97-r16 and profile 17.0 change

2017-12-07 Thread Peter Humphrey
On Thursday, 7 December 2017 12:04:08 GMT Kai Peter wrote:
> On 2017-12-06 13:28, Peter Humphrey wrote:
> > On Sunday, 3 December 2017 15:12:21 GMT Mick wrote:
> >> On 03-12-2017, 10:57:33, Peter Humphrey wrote:

--->8

> > Sys-boot/grub-0.97-r17 compiled and installed all right, as a package,
> > but when I went to install it to the MBR I got an error complaining of a
> > mismatch or corruption in stage X. The wording was something like that,
> > and I forget the value of X. There was no mention of disk space, and the
> > boot partition is 2GB, so I think it's something else.
> > 
> > Installing sys-boot/grub-static-0.97-r12 instead went smoothly, so I've
> > left it like that for the moment.
> > 
> > Does the team think I should go back to grub-0.97-r17, take proper
> > records and file a bug report?
> 
> I question if this makes sense for a masked ebuild.

Masked? Not here, it isn't.

> I'm curious about what was discussed until now. The issue seems to be
> quite simple to solve.
> 
> The build fails, but portage/gcc gives clear info in this case: the
> option "-nopie" has to be changed to "-no-pie". This option is set in
> 860_all_grub-0.97-pie.patch. Here is a diff:

--->8

Yes, this has been made clear already, but it's not the problem I had.

> Maybe the easiest way is to create a new grub-patches package, but there
> are other ways to change this too. I expect upstream will change
> this soon - within the remaining 5 weeks ;-).
> 
> Another thing: I question that grub-legacy has to be rebuilt at all.
> I'm pretty sure it is safe to remove it from the world file or comment
> it out.

Then the first emerge -c will remove it from the system.

> Anyhow, upgrading to grub2 is IMHO the right way. There are some
> examples given in parallel threads how to write a grub.cfg by hand - and
> keep it simple :-). Then usually nothing else than the grub2 binary and
> grub2-install is required.

Long-standing readers may remember that I have reasons for avoiding grub-2. 
I still think it's a monstrosity and I'd much prefer never to have to 
wrestle with it again.

On the other hand, I suppose I could have another go at writing my own 
grub.cfg, just for the one little Atom box, if only for a quiet life.

-- 
Regards,
Peter.




Re: [gentoo-user] preparing for profile switch -- major problem

2017-12-07 Thread John Covici
On Thu, 07 Dec 2017 07:42:45 -0500,
Michael Orlitzky wrote:
> 
> On 12/07/2017 12:44 AM, John Covici wrote:
> > Hi. In preparing for the profile switch and the emerge -e world, I
> > have run into a serious problem with perl.  I think I saw on this list
> > where perl 5.26 was going to have problems -- maybe until it is
> > stabilized -- but if I mask it off,  I get the following:
> > 
> 
> Try adding "--backtrack=100" to your "emerge" command.
> 

I have it set to 120 by default.

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] preparing for profile switch -- major problem

2017-12-07 Thread Michael Orlitzky
On 12/07/2017 12:44 AM, John Covici wrote:
> Hi. In preparing for the profile switch and the emerge -e world, I
> have run into a serious problem with perl.  I think I saw on this list
> where perl 5.26 was going to have problems -- maybe until it is
> stabilized -- but if I mask it off,  I get the following:
> 

Try adding "--backtrack=100" to your "emerge" command.



Re: [gentoo-user] grub-0.97-r16 and profile 17.0 change

2017-12-07 Thread Kai Peter

On 2017-12-06 13:28, Peter Humphrey wrote:
> On Sunday, 3 December 2017 15:12:21 GMT Mick wrote:
> > On 03-12-2017, 10:57:33, Peter Humphrey wrote:
> > > On Saturday, 2 December 2017 12:30:57 GMT Mick wrote:
> > > > I'm getting this error after I changed my profile as per
> > > > '2017-11-30-new-17-profiles' news item:
> > > > 
> > > > >>> Compiling source in
> > > > >>> /data/tmp_var/portage/sys-boot/grub-0.97-r16/work/
> > > 
> > > [...]
> > > 
> > > > However, sys-boot/grub-0.97-r17 installed fine once keyworded on this
> > > > (mostly) stable system.  This may save time for others who come across
> > > > the same problem.
> > > 
> > > It has. Thanks Mick.
> > 
> > Unfortunately, an older system with only a 50MB /boot partition did not
> > have enough space to allow sys-boot/grub-0.97-r17 to install all its
> > files and fs drivers.  I ended up restoring /boot from a back up.  YMMV.
> 
> I spoke too soon, too. Sys-boot/grub-0.97-r17 compiled and installed all
> right, as a package, but when I went to install it to the MBR I got an
> error complaining of a mismatch or corruption in stage X. The wording was
> something like that, and I forget the value of X. There was no mention of
> disk space, and the boot partition is 2GB, so I think it's something else.
> 
> Installing sys-boot/grub-static-0.97-r12 instead went smoothly, so I've
> left it like that for the moment.
> 
> Does the team think I should go back to grub-0.97-r17, take proper
> records and file a bug report?

I question if this makes sense for a masked ebuild.

I'm curious about what was discussed until now. The issue seems to be
quite simple to solve.

The build fails, but portage/gcc gives clear info in this case: the
option "-nopie" has to be changed to "-no-pie". This option is set in
860_all_grub-0.97-pie.patch. Here is a diff:

--- a/860_all_grub-0.97-pie.patch	2012-05-31 01:00:13.000000000 +0200
+++ b/860_all_grub-0.97-pie.patch	2017-12-07 11:28:57.536089642 +0100
@@ -17,8 +17,8 @@
 +grub_cv_cc_fpie=no)
 +])
 +if test "x$grub_cv_cc_fpie" = xyes; then
-+  STAGE1_CFLAGS="$STAGE1_CFLAGS -nopie"
-+  STAGE2_CFLAGS="$STAGE2_CFLAGS -nopie"
++  STAGE1_CFLAGS="$STAGE1_CFLAGS -no-pie"
++  STAGE2_CFLAGS="$STAGE2_CFLAGS -no-pie"
 +fi
 	fi
  fi

Maybe the easiest way is to create a new grub-patches package, but there
are other ways to change this too. I expect upstream will change
this soon - within the remaining 5 weeks ;-).

Another thing: I question that grub-legacy has to be rebuilt at all.
I'm pretty sure it is safe to remove it from the world file or comment
it out.

Anyhow, upgrading to grub2 is IMHO the right way. There are some
examples given in parallel threads how to write a grub.cfg by hand - and
keep it simple :-). Then usually nothing else than the grub2 binary and
grub2-install is required.


Kai
--
Sent with eQmail-1.10



[gentoo-user] mesa 17.1.10 update caused slight tearing for Firefox context menu

2017-12-07 Thread Akater

After a recent mesa update on gentoo-hardened, the Firefox context menu
fades in with artifacts, as if the fade-in effect were drawn too slowly.
It's not a big deal, but it's certainly visible.

I can't be absolutely sure it's mesa, but it looks like it.

Info from eix output:

 Installed versions:  17.1.10^d(01:53:11 2017-12-05)(classic dri3 egl 
gallium gbm nptl pax_kernel pic vaapi -bindist -d3d9 -debug -gles1 -gles2 -llvm 
-opencl -openmax -osmesa -selinux -unwind -valgrind -vdpau -vulkan -wayland -xa 
-xvmc ABI_MIPS="-n32 -n64 -o32" ABI_PPC="-32 -64" ABI_S390="-32 -64" 
ABI_X86="64 -32 -x32" VIDEO_CARDS="i965 r600 -freedreno -i915 -imx -intel 
-nouveau -r100 -r200 -r300 -radeon -radeonsi -vc4 -vivante -vmware")

No other rendering issues were observed. However, I don't use demanding
graphics all that much; it has always been Firefox that produced
artifacts.

Not sure if this can be helped easily, unless there's something with my
USE flags that more experienced mailing list readers would find
suspicious.




Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Richard Bradfield
On Thu, 7 Dec 2017, at 09:28, Frank Steinmetzger wrote:
> > I incorporated ZFS' expansion inflexibility into my planned
> > maintenance/servicing budget.
> 
> What was the conclusion? That having no more free slots meant that you
> can just as well use the inflexible Raidz, otherwise would have gone with
> Mirror?

Correct, I had gone back and forth between RaidZ2 and a pair of Mirrors.
I needed the space to be extendable, but I calculated my usage growth
to be below the rate at which drive prices were falling, so I could
budget to replace the current set of drives in 3 years, and that
would buy me a set of bigger ones when the time came.

I did also investigate USB3 external enclosures, they're pretty
fast these days.

-- 
I apologize if my web client has mangled my message.
Richard



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-07 Thread Frank Steinmetzger
On Thu, Dec 07, 2017 at 07:54:41AM +0000, Richard Bradfield wrote:
> On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> >On Wed, Dec 6, 2017 at 6:28 PM, Frank Steinmetzger  wrote:
> >>
> >>I don’t really care about performance. It’s a simple media archive powered
> >>by the cheapest Haswell Celeron I could get (with 16 Gigs of ECC RAM though
> >>^^). Sorry if I more or less stole the thread, but this is almost the same
> >>topic. I could use a nudge in either direction. My workplace’s storage
> >>comprises many 2× mirrors, but I am not a company and I am capped at four
> >>bays.
> >>
> >>So, Do you have any input for me before I fetch the dice?
> >>
> >
> >IMO the cost savings for parity RAID trumps everything unless money
> >just isn't a factor.
> >
> >Now, with ZFS it is frustrating because arrays are relatively
> >inflexible when it comes to expansion, though that applies to all
> >types of arrays. That is one major advantage of btrfs (and mdadm) over
> >zfs.  I hear they're working on that, but in general there are a lot
> >of things in zfs that are more static compared to btrfs.
> >
> >-- 
> >Rich
> >
> 
> When planning for ZFS pools, at least for home use, it's worth thinking
> about your usage pattern, and if you'll need to expand the pool before
> the lifetime of the drives rolls around.

When I set the NAS up, I migrated everything from my existing individual
external harddrives onto it (the biggest of which was 3 TB). So the main
data slurping is over. Going from 6 to 12 TB should be enough™ for a long
time unless I start buying TV series on DVD for which I don't have physical
space.

> I incorporated ZFS' expansion inflexibility into my planned
> maintenance/servicing budget.

What was the conclusion? That having no more free slots meant that you can
just as well use the inflexible Raidz, otherwise would have gone with Mirror?

> I expect I'll do the same thing late next year, I wonder if 4TB will be
> the sweet spot, or if I might be able to get something larger.

Me thinks 4 TB was already the sweet spot when I bought my drives a year
back (regarding ¤/GiB). Just checked: 6 TB is the cheapest now according to
a pricing search engine. Well, the German version anyway[1]. The brits are a
bit more picky[2].

[1] https://geizhals.de/?cat=hde7s=10287_NAS~957_Western+Digital=r
[2] https://skinflint.co.uk/?cat=hde7s=10287_NAS%7E957_Western+Digital=r

-- 
This message was written using only recycled electrons.



Re: [gentoo-user] New profile 17: How urgent is the rebuild of world technically?

2017-12-07 Thread Neil Bothwick
On Thu, 7 Dec 2017 00:59:33 +0100, Frank Steinmetzger wrote:

> > How does a restarted emerge @world recognize packages which are
> > already compiled according to the new standard?  
> 
> I “circumvent” those questions by doing:
> emerge -pveD world > worldlist
> emerge -1O $(cat worldlist)
> 
> If the system for whatever reason fails and I need to interrupt the
> merge, I simply remove the lines from worldlist that have already been
> built and then repeat the last command. Plus I can exclude some
> packages that don’t need a rebuild: -bins, -docs, virtuals, most perl
> and tex packages and so on. This saves a bit of time on the slower
> laptop.

I wrote a script to handle this some years ago, and it has come in handy
this week. It emerges all packages that have not been done since a given
time.

In this case, I run

mergeolderthan -r glibc

since glibc was emerged right before the world emerge

#!/bin/bash
# Re-emerge every package last built before a given reference file or package.

EMERGE_ARGS="--oneshot --keep-going"

usage() {
    echo -e "\nUsage: $(basename $0) [-f file] [-r category/package[-version]] [-h]"
    echo "-f  re-emerge all packages older than this file"
    echo "-r  re-emerge all packages older than this package"
    echo "-h  Show this text"
    echo -e "\nAll other options are passed to the emerge command"
    echo -e "$*"
    exit
}

while getopts f:r:pvlh ARG; do
    case "${ARG}" in
        f) REFFILE=${OPTARG} ;;
        # use the install time of an installed package as the reference
        r) REFFILE=$(ls -1 /var/db/pkg/${OPTARG}*/environment.bz2 | head -n 1) ;;
        p) EMERGE_ARGS="${EMERGE_ARGS} --pretend" ;;
        v) EMERGE_ARGS="${EMERGE_ARGS} --verbose" ;;
        l) LIST="y" ;;   # only list the packages, don't emerge them
        h) usage ;;
    esac
done
shift $(expr ${OPTIND} - 1)

[[ "${REFFILE}" ]] || usage "\nYou must specify a reference with -f or -r\n"
[[ -f ${REFFILE} ]] || usage "\n${REFFILE} not found\n"

PKGLIST=$(mktemp -t mergeolderthan.XXXXXX)

# walk the full world list and keep every package whose installed copy
# is older than the reference file
emerge -ep --exclude gentoo-sources @world | grep -v sys-kernel/gentoo-sources | \
    awk -F] '/^\[ebuild/ {print $2}' | awk '{print $1}' | while read PKG; do
    if [[ /var/db/pkg/${PKG}/environment.bz2 -ot ${REFFILE} ]]; then
        echo "=${PKG}" >>$PKGLIST
    fi
done

if [[ "${LIST}" ]]; then
    cat ${PKGLIST} && rm -f ${PKGLIST}
else
    cat ${PKGLIST} | xargs --no-run-if-empty emerge ${EMERGE_ARGS} && rm -f ${PKGLIST}
fi



-- 
Neil Bothwick

Hickory Dickory Dock, The mice ran up the clock, The clock struck one, The
others escaped with minor injuries.

