Re: [gentoo-user] Re: Re-run grub-install to update installed boot code!

2024-02-18 Thread Dale
Grant Edwards wrote:
> On 2024-02-17, Dale  wrote:
>> Grant Edwards wrote:
>>> Today's routine update says:
>>>
>>> Re-run grub-install to update installed boot code!
>>>
>>> Is "sudo grub-install" really all I have to do?  [...]
>>>
>>> Or do I have to run grub-install with all the same options that
>>> were originally used to install grub?
>> I've been wondering the same since I saw this posted on -dev.  The news
>> item seems to mention EFI booting, but I'm sure we legacy-booting
>> users need to do the same.  At this point, I may skip updating grub
>> this week until I know exactly what I'm supposed to do as well.  I'd
>> think we need to reinstall like when we first did our install, but
>> I'm not sure.  :/
> That was my guess. I should have recorded the options originally
> passed to grub-install. Now that I have BIOS boot partitions (instead
> of using embedded blocklists) on all my machines, reinstalling grub
> should be trivial. I think all I have to do is tell grub-install the
> boot device.
>
>> It would suck to have an unbootable system.
> More than once I've had to boot from either systemrescuecd or a minimal
> gentoo install ISO so I could re-install (or re-configure) grub after
> something got messed up. It's not difficult, but it is annoying.
>
> --
> Grant


I updated my NAS box OS.  It updated grub as well.  I figured it would
be a good test system.  All I did was this:


nas / # grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
nas / #


I rebooted the system and it booted just fine here.  According to ls,
the files in /boot/grub/i386-pc/ were updated; they had today's date.

So, I guess it is pretty simple.  Now to remember to do this.  Heck,
I've never paid much attention to grub updating before.
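For what it's worth, whether a plain grub-install needs a device argument depends on how the box boots; a minimal sketch of picking the invocation (the /dev/sda device and the /efi mount point are assumptions for illustration, not from the thread):

```shell
# Sketch: choose the grub-install invocation by boot mode.
# /dev/sda and /efi are assumptions -- substitute your boot disk / ESP.
if [ -d /sys/firmware/efi ]; then
    cmd="grub-install --efi-directory=/efi"
else
    cmd="grub-install /dev/sda"    # BIOS/legacy, as in the transcript above
fi
echo "Would run: $cmd"
```

On a BIOS box the else branch matches the transcript above; EFI installs usually take the ESP path instead of a device argument.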

Dale

:-)  :-) 



[gentoo-user] Re: Re-run grub-install to update installed boot code!

2024-02-17 Thread Grant Edwards
On 2024-02-17, Dale  wrote:
> Grant Edwards wrote:
>> Today's routine update says:
>>
>> Re-run grub-install to update installed boot code!
>>
>> Is "sudo grub-install" really all I have to do?  [...]
>>
>> Or do I have to run grub-install with all the same options that
>> were originally used to install grub?
>
> I've been wondering the same since I saw this posted on -dev.  The news
> item seems to mention EFI booting, but I'm sure we legacy-booting
> users need to do the same.  At this point, I may skip updating grub
> this week until I know exactly what I'm supposed to do as well.  I'd
> think we need to reinstall like when we first did our install, but
> I'm not sure.  :/

That was my guess. I should have recorded the options originally
passed to grub-install. Now that I have BIOS boot partitions (instead
of using embedded blocklists) on all my machines, reinstalling grub
should be trivial. I think all I have to do is tell grub-install the
boot device.

> It would suck to have an unbootable system.

More than once I've had to boot from either systemrescuecd or a minimal
gentoo install ISO so I could re-install (or re-configure) grub after
something got messed up. It's not difficult, but it is annoying.

--
Grant





[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-08 Thread Martin Vaeth
Rich Freeman  wrote:
> emerge --sync works just fine if
> there are uncommitted changes in your repository, whether they are
> indexed or otherwise.

You are right. It seems to be somewhat "random" when git pull
refuses to work and when it does not; I could not detect a common
scheme. Maybe this mainly has to do with using overlayfs and git
becoming confused.




Re: [gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-08 Thread Rich Freeman
On Sun, Jul 8, 2018 at 4:28 AM Martin Vaeth  wrote:
>
> Rich Freeman  wrote:
>
> It's the *history* of the metadata which matters here:

You make a reasonable point here.

> > "The council does not require that ChangeLogs be generated or
> >   distributed through the rsync system. It is at the discretion of our
> >   infrastructure team whether or not this service continues."
>
> The wording already makes it clear that nobody wanted to
> put pressure on infra, and at that time it was expected that
> every user would switch to git anyway.

Git provides the history, and yes, in general the Council tries not
to forbid projects from providing services.  The intent was to
communicate that it was simply not an expectation that they do so.

> At that time also the gkeys project was very active, and git was
> (besides webrsync) the only expected way to get checksums for the
> full tree. In particular, rsync was inherently insecure.

Honestly, I don't think gkeys really played any part in this, but
there was definitely an intent for signature checking in the tree to
become more robust.  As you point out (in a part I trimmed) it ought
to be possible to do this.  Indeed, git support for signing commits
was considered a requirement for git implementation.

> >> 4. Even if the user made the mistake to edit a file, portage should
> >>not just die on syncing.
> >
> > emerge --sync won't die in a situation like this, in general.
>
> It does: git pull refuses to start if there are uncommitted changes.
>

I did a test before I made my post.  emerge --sync works just fine if
there are uncommitted changes in your repository, whether they are
indexed or otherwise.  I didn't test merge conflicts but I'd hope it
would fail if these exist.

-- 
Rich



[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-08 Thread Martin Vaeth
Rich Freeman  wrote:
>> I was speaking about gentoo's git repository, of course
>> (the one which was attacked on github), not about a Frankensteined one
>> with metadata history filling megabytes of disk space unnecessarily.
>> Who has that much disk space to waste?
>
> Doesn't portage create that metadata anyway when you run it

Better to have it created by egencache in portage-postsyncd;
moreover, you should download some other repositories as well
(news announcements, GLSA, dtd, xml-schema) which are maintained
independently; see e.g.
https://github.com/vaeth/portage-postsyncd-mv

It is the Gentoo way: download only the sources and build from there.
That is also a question of mentality, and why I think most gentoo users
who use git would prefer it that way.

> negating any space savings at the cost of CPU to regenerate the cache?

It's the *history* of the metadata which matters here:
Since regenerating each changed metadata file takes only a fraction
of a second, one can estimate rather well that several tens of
thousands of files are changed hourly/daily/weekly (the frequency
depending mainly on eclass changes: one change in some eclass requires
a change for practically every version of every package), so the
history of metadata changes produced over time is enormous. This
history, of course, is completely useless and stored completely in
vain. One of the weaknesses of git is that it is impossible, by
design, to omit such superfluous history selectively (once the files
*are* maintained by git).

>> For the official git repository your assertions are simply false,
>> as you apparently admit: It is currently not possible to use the
>> official git repo (or the github clone of it which was attacked)
>> in a secure manner.
>
> Sure, but this also doesn't support signature verification at all
> [...] so your points still don't apply.

Huh? This actually *was* my point.

BTW, portage could easily support signature verification if only the
distribution of the developers' public keys were properly
maintained (e.g. via gkeys, or more simply via some package):
after all, gentoo infra should always have an up-to-date list of
these keys anyway.
(If they don't, it would make it even more important to use the
source repo instead of trusting a signature which is given
without sufficient verification.)

>> Your implicit claim is untrue. rsync - as used by portage - always
>> transfers only whole files.
>
> rsync is capable of transferring partial files.

Yes, and portage explicitly disables this. (It costs a lot of
server CPU time and does not save much transfer data when the files
are small, because many hashes have to be transferred, and
calculated - CPU time! - instead.)

> However, this is based on offsets from the start of the file

There are newer algorithms which also support detection of insertions
and deletions via rolling hashes (e.g. for deduplicating filesystems).
Rsync uses quite an advanced algorithm as well, but I would
need to recheck its features.

Anyway, it plays no role in our discussion, because for such
small files it hardly matters, and portage disables
said algorithm anyway.
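For reference, portage's default PORTAGE_RSYNC_OPTS (set in
make.globals; the exact list varies between portage versions, so treat
this as a representative sketch rather than the verbatim default)
contain --whole-file, which is what disables rsync's delta algorithm:

PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times \
  --compress --force --whole-file --delete --stats --timeout=180"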

> "The council does not require that ChangeLogs be generated or
>   distributed through the rsync system. It is at the discretion of our
>   infrastructure team whether or not this service continues."

The wording already makes it clear that nobody wanted to
put pressure on infra, and at that time it was expected that
every user would switch to git anyway.
At that time also the gkeys project was very active, and git was
(besides webrsync) the only expected way to get checksums for the
full tree. In particular, rsync was inherently insecure.

The situation has meanwhile changed on both sides: gkeys was
apparently practically abandoned, and instead gemato was introduced
and is actively supported. That the gentoo-mirror repository is now
suddenly more secure than the git repository is also a side effect
of gemato, because only for it are the infra keys now distributed
in a package.

> If you're using squashfs git pull probably isn't the right solution for you.

Exactly. That's why I completely disagree with portage's regression
of replacing the previously working solution with the only partially
working "git pull".

>> 4. Even if the user made the mistake to edit a file, portage should
>>not just die on syncing.
>
> emerge --sync won't die in a situation like in general.

It does: git pull refuses to start if there are uncommitted changes.

> but I don't think the correct default in this case should be
> to just wipe out the user's changes.

I do: As with rsync, a user should not make changes to the distributed
tree (unless he submits a PR) but in an overlay; otherwise he will
permanently have outdated files which are not correctly updated.
*If* a user wants such changes, he should use git properly and commit.

But I am not against making this an opt-in option, enabled by a
developer (or advanced u

Re: [gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-07 Thread Rich Freeman
On Sat, Jul 7, 2018 at 5:29 PM Martin Vaeth  wrote:
>
> Rich Freeman  wrote:
> > On Sat, Jul 7, 2018 at 1:34 AM Martin Vaeth  wrote:
> >>
> >> Biggest issue is that git signature happens by the developer who
> >> last commited which means that in practice you need dozens/hundreds
> >> of keys.
> >
> > This is untrue. [...]
> > It will, of course, not work on the regular git repo [...]
> > You need to use a repo that is signed by infra
> > (which typically includes metadata/etc as well).
>
> I was speaking about gentoo's git repository, of course
> (the one which was attacked on github), not about a Frankensteined one
> with metadata history filling megabytes of disk space unnecessarily.
> Who has that much disk space to waste?

Doesn't portage create that metadata anyway when you run it, negating
any space savings at the cost of CPU to regenerate the cache?

>
> For the official git repository your assertions are simply false,
> as you apparently admit: It is currently not possible to use the
> official git repo (or the github clone of it which was attacked)
> in a secure manner.
>

Sure, but this also doesn't support signature verification at all (at
least not by portage - git can of course manually verify any commit),
so your points still don't apply.

> > and as a bonus they want them prepended to
> > instead of appended so that rsync resends the whole thing instead of
> > just the tail...
>
> Your implicit claim is untrue. rsync - as used by portage - always
> transfers only whole files.

rsync is capable of transferring partial files.  I can't vouch for how
portage is using it, but both the rsync command line program and
librsync can do partial file transfers.  However, this is based on
offsets from the start of the file, so appending to a file will result
in the first part of the file being identical, but prepending will
break rsync's algorithm.

>
> > But, this was endlessly debated before the decision was made.
>
> The decision was about removing the ChangeLogs from the git
> repository. This was certainly the correct decision, because -
> as you said - the ChangeLogs *can* be regenerated from the
> git history and thus it makes no sense to modify/store them
> redundantly.

There were two decisions:

https://projects.gentoo.org/council/meeting-logs/20141014-summary.txt

"do we need to continue to create new ChangeLog entries once we're
operating in git?"  No.

https://projects.gentoo.org/council/meeting-logs/20160410-summary.txt

"The council does not require that ChangeLogs be generated or
  distributed through the rsync system. It is at the discretion of our
  infrastructure team whether or not this service continues."
  Accepted (4 yes, 1 no, 2 abstention)

> > It probably should be a configurable option in repos.conf, but
> > honestly, forced pushes are not something that should be considered a
> > good practice.
>
> 1. portage shouldn't decide about practices of overlays.

Hence the reason I suggested it should be a repos.conf option.

> 2. also in the official gentoo repository force pushes happen
>    occasionally. Last occurrence was e.g. when undoing the
>    malevolent forced push ;)

Sure, but that was a fast-forward from the last good commit, so it
wouldn't require a force pull unless a user had done a force pull on
the bad repo.

> 3. git pull fails not only for forced pushes but also in several
>    other occasions; for instance, if your filesystem changed inode
>    numbers (e.g. squash + overlayfs after a resquash+remount).

If you're using squashfs git pull probably isn't the right solution for you.

> 4. Even if the user made the mistake to edit a file, portage should
>not just die on syncing.

emerge --sync won't die in a situation like this, in general.  Maybe it will
if there is a merge conflict, but I don't think the correct default in
this case should be to just wipe out the user's changes.  I'm all for
making that an option, however.

-- 
Rich



[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-07 Thread Martin Vaeth
Rich Freeman  wrote:
> On Sat, Jul 7, 2018 at 1:51 AM Martin Vaeth  wrote:
>> Davyd McColl  wrote:
>>
>> > I ask because prior to the GitHub incident, I didn't have signature
>> > verification enabled
>>
>> Currently, it is not practical to change this, see my other posting.
>
> You clearly don't understand what it actually checks.

Davyd and I were obviously speaking about the gentoo repository
(the official one and the one on github which got hacked).
For these repositories verification is practically not possible.
(That there are also *other* repositories - with huge metadata history -
which might be easier to verify is a different story).

Perversely, the official comments after the hack
suggested that you should have enabled signature verification for
the hacked repository, which was simply not practical.




[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-07 Thread Martin Vaeth
Rich Freeman  wrote:
> On Sat, Jul 7, 2018 at 1:34 AM Martin Vaeth  wrote:
>>
>> Biggest issue is that git signature happens by the developer who
>> last commited which means that in practice you need dozens/hundreds
>> of keys.
>
> This is untrue. [...]
> It will, of course, not work on the regular git repo [...]
> You need to use a repo that is signed by infra
> (which typically includes metadata/etc as well).

I was speaking about gentoo's git repository, of course
(the one which was attacked on github), not about a Frankensteined one
with metadata history filling megabytes of disk space unnecessarily.
Who has that much disk space to waste?

For the official git repository your assertions are simply false,
as you apparently admit: It is currently not possible to use the
official git repo (or the github clone of it which was attacked)
in a secure manner.

>> > unless you stick --force in your pull
>>
>> Unfortunately, it is not that simple: git pull --force only works if
> [...]
> You completely trimmed the context around my quote. [...]
> they simply would not be pulled without --force.

I was saying that they would not be pulled *with* --force either,
because pull --force is not as strong as you think it is (it would
have shown you conflicts to resolve manually).
You would have to use the commands that I have posted.

> You seem to be providing advice for how to do a pull with a shallow
> repository

No, what I said is not related to a shallow repository. It has to do
with pulling a forced push, in general.

>> At least since the ChangeLogs have been removed.
>> IMHO it was the wrong decision to not keep them in the rsync tree
>> (The tool to regenerate them from git was/is available).
>
> Changelogs are redundant with git, and they take a ton of space (which
> of late everybody seems to be super-concerned about)

Compared to the git history, they take very little space.
If you squash the portage tree, it is hardly measurable.
And with the ChangeLogs, rsync would still be a sane option for
most users. Without ChangeLogs many users are unnecessarily forced
to switch and to sacrifice the space for git history.

> and as a bonus they want them prepended to
> instead of appended so that rsync resends the whole thing instead of
> just the tail...

Your implicit claim is untrue. rsync - as used by portage - always
transfers only whole files.

> But, this was endlessly debated before the decision was made.

The decision was about removing the ChangeLogs from the git
repository. This was certainly the correct decision, because -
as you said - the ChangeLogs *can* be regenerated from the
git history and thus it makes no sense to modify/store them
redundantly.

But I was speaking about the distribution of ChangeLogs in rsync:
Whenever the infrastructure uses egencache to generate the metadata,
it could simply pass --update-changelogs so that rsync users
still would have ChangeLogs: They cannot get them from git history.

> My
> point is that the sorts of people who like Gentoo would probably tend
> to like git.

"Liking" git does not mean that one has to use it also for things
for which it brings nothing. And for most users none of its features
is useful for the portage tree. With one exception: ChangeLogs.
That's why I am advocating bringing them back to the rsync tree.

> The "keys problem" has nothing to do with the security of git
> verification, because those keys are not used by git verification on
> the end-user side.

Anyone so git/developer-affine that he prefers git although it
costs more disk space will certainly want to use the actual
source repository and not an inferior rsync-clone repository.

> It probably should be a configurable option in repos.conf, but
> honestly, forced pushes are not something that should be considered a
> good practice.

1. portage shouldn't decide about practices of overlays.
2. also in the official gentoo repository force pushes happen
   occasionally. Last occurrence was e.g. when undoing the
   malevolent forced push ;)
3. git pull fails not only for forced pushes but also on several
   other occasions; for instance, if your filesystem changed inode
   numbers (e.g. squash + overlayfs after a resquash+remount).
4. Even if the user made the mistake to edit a file, portage should
   not just die on syncing.




Re: [gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-07 Thread Rich Freeman
On Sat, Jul 7, 2018 at 1:34 AM Martin Vaeth  wrote:
>
> Rich Freeman  wrote:
> >
> > Biggest issue with git signature verification is that right now it
> > will still do a full pull/checkout before verifying
>
> Biggest issue is that git signature happens by the developer who
> last commited which means that in practice you need dozens/hundreds
> of keys.

This is untrue.  The last git signature is made by infra or the
CI-bot, and this is the signature that portage checks.

Portage will NOT accept a developer key, or any other key in your
keychain, as being valid.

It will, of course, not work on the regular git repo used for
committing for this reason.  You need to use a repo that is signed by
infra (which typically includes metadata/etc as well).

I'll trim most of the rest of your email and only reply to significant
bits, because you seem to not understand the point above which
invalidates almost everything you wrote.  The concerns you raise would
be an issue if you were checking individual developer keys.

>
> So currently, it is impossible to do *any* automatic tree verification,
> unless you manually fetch/update all of the developer keys.
>

As noted, you don't need to fetch any developer keys, and if you do
fetch them, portage will ignore them.

>
> > unless you stick --force in your pull
>
> Unfortunately, it is not that simple: git pull --force only works if
> the checked out tree is old enough (in which case git pull without --force
> would have worked also, BTW).

You completely trimmed the context around my quote.  I was talking
about the malicious commits in the recent attack.  They were
force-pushed, so it doesn't matter how complete your repository is -
they simply would not be pulled without --force.

You seem to be providing advice for how to do a pull with a shallow
repository, which I'm not talking about.

> > Honestly, I think git is a good fit for a lot of Gentoo users.
>
> At least since the ChangeLogs have been removed.
> IMHO it was the wrong decision to not keep them in the rsync tree
> (The tool to regenerate them from git was/is available).

Changelogs are redundant with git, and they take a ton of space (which
of late everybody seems to be super-concerned about).  I don't get
that on one hand people get twitchy about /usr/portage taking more
than 1GB, and on the other hand they want a bazillion text files
dumped all over the place, and as a bonus they want them prepended to
instead of appended so that rsync resends the whole thing instead of
just the tail...

But, this was endlessly debated before the decision was made.  Trust
me, I read every post before voting to have them removed.

>
> > it is different, but all the history/etc is the sort of thing I think
> > would appeal to many here.
>
> Having the ChangeLogs would certainly be sufficient for the majority
> of users. It is very rare that a user really needs to access the
> older version of the file, and in that case it is simple enough
> to fetch it manually from e.g. github.

It is very rare that somebody would want to use Gentoo at all.  My
point is that the sorts of people who like Gentoo would probably tend
to like git.  But, to each their own...

>
> > Security is obviously getting a renewed focus across the board
>
> Unfortunately, due to the mentioned keys problem, git is
currently the *least safe* method for syncing.

The "keys problem" has nothing to do with the security of git
verification, because those keys are not used by git verification on
the end-user side.  An infra-controlled key is used for verification
whether you sync with git or rsync.  Either way you're relying on
infra checking the developer keys at time of commit.

Now, as I already mentioned git syncing is currently less safe due to
it doing the checkout before the verification, and they are in the
process of fixing this.

> (BTW, due to the number of committers, the portage tree has quite a
> strict policy w.r.t. forced pushes. Overlays, especially those of
> single users, might have different policies and thus can fail quite
> often due to the "git pull" bug.)

It probably should be a configurable option in repos.conf, but
honestly, forced pushes are not something that should be considered a
good practice.  There are times that it is the best option, but those
are rare, IMO.

-- 
Rich



Re: [gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-07 Thread Rich Freeman
On Sat, Jul 7, 2018 at 1:51 AM Martin Vaeth  wrote:
>
> Davyd McColl  wrote:
>
> > I ask because prior to the GitHub incident, I didn't have signature
> > verification enabled
>
> Currently, it is not practical to change this, see my other posting.
>

You clearly don't understand what it actually checks.  It is
completely practical to enable this today (though not as secure as it
could be).  I'll elaborate in a reply to the other email.

-- 
Rich



[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-06 Thread Martin Vaeth
Davyd McColl  wrote:
> @Rich: if I understand the process correctly, the same commits are
> pushed to infra and GitHub by the CI bot?

Yes, the repositories are always identical (up to a few seconds delay).

> I ask because prior to the GitHub incident, I didn't have signature
> verification enabled

Currently, it is not practical to change this, see my other posting.

> then I should (in theory) be able to change my repo.conf
> settings, fiddle the remote in /usr/portage, and switch seamlessly from
> gentoo to GitHub?

If by "fiddle the remote in /usr/portage" you mean to edit
the .git/config file, you are right.
Note that just changing the remote in repos.conf only has an
effect if you completely remove /usr/portage so that portage
has to clone anew.
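For reference, a git-based entry in repos.conf looks roughly like this
(location and sync-uri here are illustrative; adjust to your setup):

[gentoo]
location = /usr/portage
sync-type = git
sync-uri = https://github.com/gentoo/gentoo.git
auto-sync = yes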




[gentoo-user] Re: Re[2]: Re: Portage, git and shallow cloning

2018-07-06 Thread Martin Vaeth
Rich Freeman  wrote:
>
> git has the advantage that it can just read the current HEAD and from
> that know exactly what commits are missing, so there is way less
> effort spent figuring out what changed.

I don't know the exact protocol, but I would assume that git is
even more efficient. I would assume:

1. git transfers only the changes between similar files
(in contrast: rsync could only do this if the filename had not
changed, and even that is switched off for portage syncing).

2. git transfers compressed data.

(Both are assumptions which perhaps some git guru might confirm.)
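Both assumptions can be made plausible with a throwaway repository: the sketch below commits a ~100 KB random file twice (with a small append in between) and repacks, after which the object store holds roughly one copy plus a small delta, not two full copies. File sizes and the setup are contrived for the demo, not measurements of the real tree.

```shell
# Toy demo of git's delta + zlib storage, in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
head -c 100000 /dev/urandom > big.bin       # incompressible base version
git add big.bin
git -c user.email=a@b -c user.name=t commit -qm v1
head -c 1000 /dev/urandom >> big.bin        # append a small change
git add big.bin
git -c user.email=a@b -c user.name=t commit -qm v2
git gc --quiet                              # repack: v2 stored as delta vs v1
# pack size in KiB: about one copy of big.bin plus a small delta,
# rather than the ~200 KiB two independent copies would need
pack_kb=$(du -sk .git/objects/pack | cut -f1)
echo "$pack_kb KiB packed"
```

The same pack format is what gets sent over the wire on fetch, which is why a git sync transfers compressed deltas rather than whole files.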




[gentoo-user] Re: Re[4]: Re: Portage, git and shallow cloning

2018-07-06 Thread Martin Vaeth
Rich Freeman  wrote:
>
> Biggest issue with git signature verification is that right now it
> will still do a full pull/checkout before verifying

Biggest issue is that git signing is done by the developer who
last committed, which means that in practice you need dozens/hundreds
of keys. No package is available for this; the only tool I know of
which was originally developed to manage these (app-crypt/gkeys)
is not ready for use for verification (gkeys-gpg --verify was
apparently never run by its developer, since its python code breaks
already during argument parsing), and its development has stalled.

Moreover, although I have written a dirty substitute for gkeys-gpg, it
is not clear how to use gkeys to update signatures and remove stale
ones: It appears that for each use you have to fetch all seeds and
keys anew. (And I am not even sure whether the seeds it fetches are
really still maintained.)

So currently, it is impossible to do *any* automatic tree verification,
unless you manually fetch/update all of the developer keys.

The safest bet, if you are a git user, is to verify manually whether
the "Verified" badge of the latest commit on github really belongs to
a gentoo developer and is not a fake account. (Though that may be hard
to decide.)

> until the patch makes its way into release (the patch will do a fetch
> and verify before it does a checkout

This does not help at all with getting all the correct keys (and no
fake keys!) you need to verify the signature.

> unless you stick --force in your pull

Unfortunately, it is not that simple: git pull --force only works if
the checked out tree is old enough (in which case git pull without --force
would have worked also, BTW).
The correct thing to do if git pull failed is:

git update-index --refresh -q --unmerged # -q is important here!
git fetch
git reset --hard $(git rev-parse --abbrev-ref \
  --symbolic-full-name @{upstream})

(The first command is needed to get rid of problems caused by filesystems
like overlayfs).

(If you are a developer and do not want to risk syncing overwriting
your uncommitted changes, you might want to replace --hard by --merge.)

> not a great idea for scripts and portage doesn't do this).

I think it is a very good idea. In fact, portage previously did this
*always* (with --merge instead of --hard), and the only reason it was
removed is that the
  git update-index --refresh -q --unmerged
takes quite some time, which is not necessary for people who do not
use a special filesystem like overlayfs for the portage tree.
The right thing to do IMHO is that portage would use this anyway as
a fallback if "git pull" fails. I usually patch portage to do this.
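The fallback sequence can be exercised safely against a throwaway pair of repositories; a sketch (the repository locations, the empty commits, and the simulated force-push are all contrived for the demo):

```shell
# Demo: recover from a rewritten upstream with fetch + reset --hard.
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=t commit -q --allow-empty -m first
git clone -q "$tmp/origin" "$tmp/clone"
# simulate upstream history being rewritten (a forced push)
git -C "$tmp/origin" -c user.email=a@b -c user.name=t commit -q --amend --allow-empty -m rewritten
cd "$tmp/clone"
# the recovery sequence from above ("git pull" would not fast-forward here)
git update-index --refresh -q --unmerged || true
git fetch -q
git reset -q --hard "$(git rev-parse --abbrev-ref --symbolic-full-name @{upstream})"
git log -1 --format=%s    # prints "rewritten"
```

The @{upstream} indirection is what makes the sequence work regardless of the branch name, which is why it suits a generic portage fallback.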

> that was just dumb luck

Exactly. That's why using "git pull" should not be considered a
security measure. It is only a safety measure if you are a
developer and want to avoid losing local changes at any price
if you mistakenly sync before committing (although the mentioned
--merge instead of --hard should be safe here, too).

> Honestly, I think git is a good fit for a lot of Gentoo users.

At least since the ChangeLogs have been removed.
IMHO it was the wrong decision to not keep them in the rsync tree
(The tool to regenerate them from git was/is available).

> it is different, but all the history/etc is the sort of thing I think
> would appeal to many here.

Having the ChangeLogs would certainly be sufficient for the majority
of users. It is very rare that a user really needs to access the
older version of the file, and in that case it is simple enough
to fetch it manually from e.g. github.

> Also, git is something that is becoming increasingly unavoidable

If you learn something about git from using it through portage,
this only indicates a bug in portage (like, e.g., the use of "git pull").

> Security is obviously getting a renewed focus across the board

Unfortunately, due to the mentioned keys problem, git is
currently the *least safe* method for syncing. The "git pull" bug
in portage is not appealing for normal usage, either.
(BTW, due to the number of committers, the portage tree has quite a
strict policy w.r.t. forced pushes. Overlays, especially those of
single users, might have different policies and thus can fail quite
often due to the "git pull" bug.)




[gentoo-user] Re: Re: Re: Re: SMTP on Kmail-4.14.10 does not work without kde-apps/kwalletd-16.04.3-r1

2017-06-08 Thread Jörg Schaible
Mick wrote:

> On Thursday 08 Jun 2017 16:56:21 Jörg Schaible wrote:
>> Mick wrote:
>> > On Thursday 08 Jun 2017 13:21:56 Jörg Schaible wrote:
>> >> > Yes, this seems to be the problem.  Starting Kmail does not launch
>> >> > kwalletd5 and as a consequence kmail starts asking for each email
>> >> > account password separately.
>> >> > 
>> >> > I guess until kmail:5 is installed I will have to start kwalletd5 by
>> >> > hand.
>> >> 
>> >> My situation is different, since I use kwallet-pam. That one will
>> >> start kwallet5 automatically and I am not asked by KMail for passwords
>> >> (after it now also uses kwallet5).
>> > 
>> > I'm puzzled:  I have kde-plasma/kwallet-pam-5.8.6 installed, but it
>> > will *not*
>> > start kwalletd5.  Bear in mind though, I do not run a full plasma
>> > desktop.
>> 
>> Do you run SDDM? Do you have those two lines in it?
>> 
>> -auth       optional    pam_kwallet5.so
>> -session    optional    pam_kwallet5.so auto_start
> 
> Ha!  Thanks for this hint!
> 
> I had these entries in my /etc/pam.d/sddm
> 
> -auth     optional    pam_kwallet.so kdehome=.kde4
> -auth     optional    pam_kwallet5.so
> -session  optional    pam_kwallet.so
> -session  optional    pam_kwallet5.so auto_start

It seems the only lines required are the kwallet5.so ones.

> but ... sddm has stopped working properly with enlightenment, which is my
> desktop of choice.  So I started using lightdm and forgot to add these
> entries - TBH I thought they were not needed because everything worked as
> it should until now without them.
> 
> So, I added the two lines you suggested and rebooted.  I checked that
> kwalletd5 is running:
> 
> $ ps axf | grep kwallet
>  4515 pts/1   SN+   0:00  \_ grep --color=auto kwallet
>  4088 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10

[snip]

> I think I'll give up on this.  It seems kwallet:4/5 is not working as
> intended
> on my set up.  I hope by the time kdepim has moved from :4 to :5 things
> will
> work as intended.  Until then I will keep the old kwallet:4 installed,
> because at least this works as always did.

First I thought you might have run into an incompatibility between sddm and 
lightdm ...

>> Does your system password match the password of your wallet?
> 
> No, all users have different kwallet and user login passwds.

... but that's it. kwallet-pam requires the same password for wallet and 
login, otherwise it cannot work. Simply adjust the kwallet password or ...

>> > Running Krusader:5 and trying to connect to a remote system starts
>> > kwalletd5 fine, but running kmail:4 it does not.
>> 
>> Does kmail:4 work after krusader:5 started kwallet5?
> 
> Yes, because Krusader:5 calls kwalletd5 and asks for its password.  Then
> this is available for all applications to use, including kmail:4.

... you will need something else to start kwallet5 then. KMail must be fixed 
first to start the new wallet daemon. Maybe you should open a bugzilla 
ticket. I am quite sure you will find an entry in the error log when KMail 
tries to access dbus for the old daemon.

Cheers,
Jörg




Re: [gentoo-user] Re: Re: Re: SMTP on Kmail-4.14.10 does not work without kde-apps/kwalletd-16.04.3-r1

2017-06-08 Thread Mick
On Thursday 08 Jun 2017 16:56:21 Jörg Schaible wrote:
> Mick wrote:
> > On Thursday 08 Jun 2017 13:21:56 Jörg Schaible wrote:
> >> > Yes, this seems to be the problem.  Starting Kmail does not launch
> >> > kwalletd5 and as a consequence kmail starts asking for each email
> >> > account password separately.
> >> > 
> >> > I guess until kmail:5 is installed I will have to start kwalletd5 by
> >> > hand.
> >> 
> >> My situation is different, since I use kwallet-pam. That one will start
> >> kwallet5 automatically and I am not asked by KMail for passwords (after
> >> it now also uses kwallet5).
> > 
> > I'm puzzled:  I have kde-plasma/kwallet-pam-5.8.6 installed, but it will
> > *not*
> > start kwalletd5.  Bear in mind though, I do not run a full plasma desktop.
> 
> Do you run SDDM? Do you have those two lines in it?
> 
> -auth     optional    pam_kwallet5.so
> -session  optional    pam_kwallet5.so auto_start

Ha!  Thanks for this hint!

I had these entries in my /etc/pam.d/sddm

-auth     optional    pam_kwallet.so kdehome=.kde4
-auth     optional    pam_kwallet5.so
-session  optional    pam_kwallet.so
-session  optional    pam_kwallet5.so auto_start

but ... sddm has stopped working properly with enlightenment, which is my 
desktop of choice.  So I started using lightdm and forgot to add these entries 
- TBH I thought they were not needed because everything worked as it should 
until now without them.

So, I added the two lines you suggested and rebooted.  I checked that 
kwalletd5 is running:

$ ps axf | grep kwallet
 4515 pts/1   SN+   0:00  \_ grep --color=auto kwallet
 4088 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10

Nevertheless, kmail (akonadi resources in particular) asks for each IMAP4 
account password to be entered when it launches, without asking for the 
kwalletd5 password (it is different to the login passwd of the user).

So, I start kwalletmanager5 which duly advises me:

"The wallet is currently closed"

So, I click 'Open' and the 'KDE Wallet Service' pops up to ask me for the 
kwallet passwd.


I now have more than one kwalletd5 running:

$ ps axf | grep kwallet
 4623 pts/1   SNl+  0:01  |   \_ kwalletmanager5
 4720 pts/2   SN+   0:00  \_ grep --color=auto kwallet
 4088 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10
 4629 ?       SNLl  0:01 /usr/bin/kwalletd5


I log out/in and notice I suddenly have two kwalletd5s running due to pam-
login:

$ ps axf | grep kwallet
 5063 pts/0   SN+   0:00  \_ grep --color=auto kwallet
 4088 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10
 4960 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10


So, I start kmail again and guess what?  It starts asking for IMAP4 passwds 
all over again.  

I start kwalletmanager5 and indeed the kwallet is currently ... closed!  I 
click to open it, enter password and all works fine again.

Now I have an even greater selection of kwalletd5 processes running:

$ ps axf | grep kwallet
 5197 pts/0   SNl+  0:01  |   \_ kwalletmanager5
 5411 pts/1   SN+   0:00  \_ grep --color=auto kwallet
 4088 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10
 4960 ?       S     0:00 /usr/bin/kwalletd5 --pam-login 8 10
 5202 ?       SNLl  0:00 /usr/bin/kwalletd5


I think I'll give up on this.  It seems kwallet:4/5 is not working as intended 
on my set up.  I hope by the time kdepim has moved from :4 to :5 things will 
work as intended.  Until then I will keep the old kwallet:4 installed, because 
at least this works as always did.


> Does your system password match the password of your wallet?

No, all users have different kwallet and user login passwds.


> > Running Krusader:5 and trying to connect to a remote system starts
> > kwalletd5 fine, but running kmail:4 it does not.
> 
> Does kmail:4 work after krusader:5 started kwallet5?

Yes, because Krusader:5 calls kwalletd5 and asks for its password.  Then this 
is available for all applications to use, including kmail:4.

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


[gentoo-user] Re: Re: Re: SMTP on Kmail-4.14.10 does not work without kde-apps/kwalletd-16.04.3-r1

2017-06-08 Thread Jörg Schaible
Mick wrote:

> On Thursday 08 Jun 2017 13:21:56 Jörg Schaible wrote:
>> > Yes, this seems to be the problem.  Starting Kmail does not launch
>> > kwalletd5 and as a consequence kmail starts asking for each email
>> > account password separately.
>> >
>> > I guess until kmail:5 is installed I will have to start kwalletd5 by
>> > hand.
>> 
>> My situation is different, since I use kwallet-pam. That one will start
>> kwallet5 automatically and I am not asked by KMail for passwords (after
>> it now also uses kwallet5).
> 
> I'm puzzled:  I have kde-plasma/kwallet-pam-5.8.6 installed, but it will
> *not*
> start kwalletd5.  Bear in mind though, I do not run a full plasma desktop.


Do you run SDDM? Do you have those two lines in it?

-auth     optional    pam_kwallet5.so
-session  optional    pam_kwallet5.so auto_start
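In context, these two entries sit alongside the distribution's usual include lines in /etc/pam.d/sddm. A sketch only; the `system-login` includes are an assumption about a typical Gentoo setup, not quoted from this thread:

```
# /etc/pam.d/sddm (illustrative; include lines vary by system)
auth      include     system-login
-auth     optional    pam_kwallet5.so

account   include     system-login
password  include     system-login

session   include     system-login
-session  optional    pam_kwallet5.so auto_start
```

The leading `-` makes PAM skip the module silently if pam_kwallet5.so is not installed, so the entries are harmless on systems without it.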

Does your system password match the password of your wallet?

> Running Krusader:5 and trying to connect to a remote system starts
> kwalletd5 fine, but running kmail:4 it does not.

Does kmail:4 work after krusader:5 started kwallet5?
 
>> >> Note, you should install knotify:4 if you want notifications about
>> >> failed mail delivery from KMail. That was removed for me in a
>> >> dependency clean-up, because I had no entry for it in world.
>> >
>> > 
>> >
>> > Hmm ... I thought kde-apps/knotify:4 was replaced with kde-
>> > frameworks/knotifyconfig:5?
>> 
>> I simply recognized following line in the error log:
>> 
>> akonadi_newmailnotifier_agent(6002)/kdeui (KNotification)
>> KNotification::slotReceivedIdError: Error while contacting notify daemon
>> "The name org.kde.knotify was not provided by any .service files"
>> 
>> After installing knotify:4 I suddenly got desktop messages from KMail
>> again.

[snip]
 
> I think both kwallet:4 and knotify:4 should have been retained as
> dependencies
> until all kde:4 packages are removed from portage.  I can't think why
> knotify should need to be in world, since it ought to be a dependency for
> kdepim-meta/kdepimlibs/kdepim-runtime, all of which I have installed here.

For me kwallet5 actually replaced kwallet:4 with the latest update. And I am 
glad about it, because the two wallets had started to diverge and I no 
longer had a kwalletmanager:4.

Cheers,
Jörg




Re: [gentoo-user] Re: Re: SMTP on Kmail-4.14.10 does not work without kde-apps/kwalletd-16.04.3-r1

2017-06-08 Thread Mick
On Thursday 08 Jun 2017 13:21:56 Jörg Schaible wrote:
> > Yes, this seems to be the problem.  Starting Kmail does not launch
> > kwalletd5 and as a consequence kmail starts asking for each email account
> > password separately.
> >
> > I guess until kmail:5 is installed I will have to start kwalletd5 by hand.
> 
> My situation is different, since I use kwallet-pam. That one will start 
> kwallet5 automatically and I am not asked by KMail for passwords (after it 
> now also uses kwallet5).

I'm puzzled:  I have kde-plasma/kwallet-pam-5.8.6 installed, but it will *not* 
start kwalletd5.  Bear in mind though, I do not run a full plasma desktop.

Running Krusader:5 and trying to connect to a remote system starts kwalletd5 
fine, but running kmail:4 it does not.


> >> Note, you should install knotify:4 if you want notifications about failed
> >> mail delivery from KMail. That was removed for me in a dependency
> >> clean-up, because I had no entry for it in world.
> >
> > 
> >
> > Hmm ... I thought kde-apps/knotify:4 was replaced with kde-
> > frameworks/knotifyconfig:5?
> 
> I simply recognized following line in the error log:
> 
> akonadi_newmailnotifier_agent(6002)/kdeui (KNotification) 
> KNotification::slotReceivedIdError: Error while contacting notify daemon 
> "The name org.kde.knotify was not provided by any .service files"
> 
> After installing knotify:4 I suddenly got desktop messages from KMail 
> again.
> 
> > A user complained that new messages no longer create a popup.
> 
> Seems to match the error log.
> 
> Cheers,
> Jörg

I think both kwallet:4 and knotify:4 should have been retained as dependencies 
until all kde:4 packages are removed from portage.  I can't think why knotify 
should need to be in world, since it ought to be a dependency for kdepim-
meta/kdepimlibs/kdepim-runtime, all of which I have installed here.
-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


[gentoo-user] Re: Re: SMTP on Kmail-4.14.10 does not work without kde-apps/kwalletd-16.04.3-r1

2017-06-08 Thread Jörg Schaible
Hi Mick,

Mick wrote:

> On Thursday 08 Jun 2017 02:04:44 Jörg Schaible wrote:
>> Mick wrote:
>> > On Tuesday 06 Jun 2017 16:35:40 you wrote:
>> >> Hi All,
>> >> 
>> >> I've updated a number of kde (plasma) packages, including kde-
>> >> frameworks/kwallet-5.34.0-r1.  A depclean action wanted to remove
>> >> kde-apps/kwalletd-16.04.3-r1 and I let it do its tha'ng.
>> >> 
>> >> Following a new login I discovered that *every* time I wanted to send
>> >> a
>> >> message I was being asked for the SMTP password.  For some users with
>> >> 6 or
>> >> more email accounts this soon became tiresome.  The pop up advises
>> >> that the password should be saved in kwallet and offers the choice to
>> >> save it in ...
>> >> the configuration file!  It also advises that although it will be
>> >> obfuscated
>> >> it will not be safe in the configuration file.  There are two buttons,
>> >> one to save the password in the configuration file and another to not
>> >> save it (at all).
>> >> 
>> >> So to retain what sanity I may have left, I had to re-install kde-
>> >> apps/kwalletd-16.04.3-r1, which appears to be able to manage SMTP
>> >> passwords without asking each time the user.
>> >> 
>> >> Have you noticed the same?  Is there a fix or workaround for this?
>> > 
>> > Just to clarify, there doesn't seem to be a problem with IMAP4
>> > passwords, only with SMTP.
>> > 
>> > When updating to kde-frameworks/kwallet-5.34.0-r1 I noticed this elog
>> > message, which implies I should no longer need the old kwalletd4:
>> > 
>> > "LOG: postinst
>> > Starting with 5.34.0-r1, kwallet is able to serve applications
>> > that still require old kwalletd4. After migration has finished,
>> > kde-apps/kwalletd can be removed."
>> > 
>> > So, do I have to wait for Kmail:4 to be updated to Kmail:5 before I
>> > give up on kwalletd4 despite the above message?
>> 
>> I had the same problem. But even if kwalletd is installed, it is no
>> longer started.
> 
> Yes, this seems to be the problem.  Starting Kmail does not launch
> kwalletd5 and as a consequence kmail starts asking for each email account
> password separately.
> 
> I guess until kmail:5 is installed I will have to start kwalletd5 by hand.

My situation is different, since I use kwallet-pam. That one will start 
kwallet5 automatically and I am not asked by KMail for passwords (after it 
now also uses kwallet5).

>> Note, you should install knotify:4 if you want notifications about failed
>> mail delivery from KMail. That was removed for me in a dependency
>> clean-up, because I had no entry for it in world.
> 
> Hmm ... I thought kde-apps/knotify:4 was replaced with kde-
> frameworks/knotifyconfig:5?

I simply recognized following line in the error log:

akonadi_newmailnotifier_agent(6002)/kdeui (KNotification) 
KNotification::slotReceivedIdError: Error while contacting notify daemon 
"The name org.kde.knotify was not provided by any .service files"

After installing knotify:4 I suddenly got desktop messages from KMail 
again.

> A user complained that new messages no longer create a popup.

Seems to match the error log.

Cheers,
Jörg




Re: [gentoo-user] Re: Re: Qt-4.8.7 bug

2017-05-25 Thread Peter Humphrey
On Wednesday 24 May 2017 08:58:53 Peter Humphrey wrote:
> On Tuesday 23 May 2017 23:16:48 Frank Steinmetzger wrote:
> > I, too, was affected by this. I did the libstdc++ rebuild after
> > upgrading
> > gcc (some 550 packages) a while back and now I was hit by the Qt
> > problem,
> > so another rebuild of 500 packages with --changed-deps world.
> > 
> > 
> > Once finished, it left me with a new problem:
> > KDE doesn’t find my beloved terminus font anymore, both on my PC and my
> > laptop. It does not show up in any font selection dialog. The same goes
> > for GTK applications such as gimp (GTK2) and firefox (GTK3). No Terminus
> > anywhere.
> > 
> > Does that ring a bell with anyone?
> 
> Not with me, no, but on looking at System Settings to see, I found all the
> icons missing. And the selected single-click-to-open setting was ignored
> - not everywhere, just in System Settings.
> 
> Remerging kde-plasma/systemsettings-5.8.6 hasn't helped.

Neither has emerge -e world + reboot, so something's broken somewhere.

-- 
Regards
Peter




Re: [gentoo-user] Re: Re: Qt-4.8.7 bug

2017-05-24 Thread Peter Humphrey
On Tuesday 23 May 2017 23:16:48 Frank Steinmetzger wrote:
> On Mon, May 22, 2017 at 09:49:01AM +0200, Jörg Schaible wrote:
> > Peter Humphrey wrote:
> > 
> > [snip]
> > 
> > well, this does not seem to be the complete truth. When I switched to
> > gcc
> > 5.x I did a revdep-rebuild for anything that was compiled against
> > libstdc++.so.6 just like the according news entry was recommending. And
> > I am quite sure that those Qt plugins were part of my 515 recompiled
> > packages.
> > 
> > Nevertheless, my KDE 4 apps were broken after the update to Qt 4.8.7.
> > Rebuilding anything that was using libQtCore.so.4 solved it, but I fail
> > to see how this is related to the gcc update two weeks ago.
> 
> I, too, was affected by this. I did the libstdc++ rebuild after upgrading
> gcc (some 550 packages) a while back and now I was hit by the Qt problem,
> so another rebuild of 500 packages with --changed-deps world.
> 
> 
> Once finished, it left me with a new problem:
> KDE doesn’t find my beloved terminus font anymore, both on my PC and my
> laptop. It does not show up in any font selection dialog. The same goes
> for GTK applications such as gimp (GTK2) and firefox (GTK3). No Terminus
> anywhere.
> 
> Does that ring a bell with anyone?

Not with me, no, but on looking at System Settings to see, I found all the 
icons missing. And the selected single-click-to-open setting was ignored - 
not everywhere, just in System Settings.

Remerging kde-plasma/systemsettings-5.8.6 hasn't helped.

-- 
Regards
Peter




Re: [gentoo-user] Re: Re: Qt-4.8.7 bug

2017-05-23 Thread Frank Steinmetzger
On Mon, May 22, 2017 at 09:49:01AM +0200, Jörg Schaible wrote:

> Peter Humphrey wrote:
>
> [snip]
>
> well, this does not seem to be the complete truth. When I switched to gcc
> 5.x I did a revdep-rebuild for anything that was compiled against
> libstdc++.so.6 just like the according news entry was recommending. And I am
> quite sure that those Qt plugins were part of my 515 recompiled packages.
>
> Nevertheless, my KDE 4 apps were broken after the update to Qt 4.8.7.
> Rebuilding anything that was using libQtCore.so.4 solved it, but I fail to
> see how this is related to the gcc update two weeks ago.

I, too, was affected by this. I did the libstdc++ rebuild after upgrading
gcc (some 550 packages) a while back and now I was hit by the Qt problem, so
another rebuild of 500 packages with --changed-deps world.


Once finished, it left me with a new problem:
KDE doesn’t find my beloved terminus font anymore, both on my PC and my laptop.
It does not show up in any font selection dialog. The same goes for GTK
applications such as gimp (GTK2) and firefox (GTK3). No Terminus anywhere.

Does that ring a bell with anyone?


$ eix -e terminus-font
[I] media-fonts/terminus-font
 Available versions:  4.39-r1 ~4.40 {X a-like-o +center-tilde distinct-l 
+pcf +pcf-unicode-only +psf quote raw-font-data ru-dv +ru-g ru-i ru-k}
 Installed versions:  4.39-r1(20:45:20 12/24/16)(X center-tilde pcf 
pcf-unicode-only psf ru-g -a-like-o -distinct-l -quote -raw-font-data -ru-dv 
-ru-i -ru-k)

$ fc-list | grep -i terminus
/usr/share/fonts/terminus/ter-x24b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x12b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x32b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x22b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x18b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x28b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x20b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x18n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x28n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x20n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x12n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x32n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x22n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x14n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x24n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x16b.pcf.gz: xos4 Terminus:style=Bold
/usr/share/fonts/terminus/ter-x16n.pcf.gz: xos4 Terminus:style=Regular
/usr/share/fonts/terminus/ter-x14b.pcf.gz: xos4 Terminus:style=Bold

$ qlop -l terminus-font|tail -n 1
Sat Dec 24 20:45:34 2016 >>> media-fonts/terminus-font-4.39-r1

I never used eselect fontconfig in the past (do I actually have to?), but
since terminus was disabled, I enabled it. It did not help either.

$ eselect fontconfig list|grep terminus
[50]  75-yes-terminus.conf *
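One avenue worth trying at this point (my own suggestion, not something confirmed in the thread) is forcing a rebuild of the fontconfig caches after the rule is enabled, since a stale cache can hide fonts from every toolkit at once. The index `50` is taken from the listing above and may differ on other systems:

```shell
# Enable the terminus fontconfig rule (index from `eselect fontconfig list`,
# illustrative) and rebuild the font caches from scratch.
eselect fontconfig enable 50

# Force-rescan all font directories, verbosely.
fc-cache -f -v

# Check whether the Terminus faces are indexed again.
fc-list | grep -i terminus
```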


Cheers.
-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Congratulations!  You’ve never been late as early as today.


signature.asc
Description: Digital signature


[gentoo-user] Re: Re: Re: Qt-4.8.7 bug

2017-05-22 Thread Jörg Schaible
Peter Humphrey wrote:

> On Monday 22 May 2017 09:49:01 Jörg Schaible wrote:
>> Hi Peter,
>> 
>> Peter Humphrey wrote:
>> 
>> [snip]
>> 
>> > Have you seen https://bugs.gentoo.org/show_bug.cgi?id=595618 ? It says
>> > that "Qt plugins compiled with gcc-4 are incompatible with [Qt compiled
>> > with gcc-5]", and [users could hardly] be expected to anticipate that.
>> > On the other hand, some kind of notice could be issued, and bug 618922
>> > is pursuing that. (That's the one I started this thread with.)
>> 
>> well, this does not seem to be the complete truth. When I switched to gcc
>> 5.x I did a revdep-rebuild for anything that was compiled against
>> libstdc++.so.6 just like the according news entry was recommending. And I
>> am quite sure that those Qt plugins were part of my 515 recompiled
>> packages.
>> 
>> Nevertheless, my KDE 4 apps were broken after the update to Qt 4.8.7.
>> Rebuilding anything that was using libQtCore.so.4 solved it, but I fail
>> to see how this is related to the gcc update two weeks ago.
> 
> I can only suggest you read bug report 618922 if you haven't already,
> including following its reference to bug 595618. It makes sense to me.

It does not for me. My packages were already compiled with gcc-5.4.0. Those 
Bugzilla issues only talk about (plasma/qt) packages compiled with the 
previous gcc-4.x, which are supposed to be incompatible. All of the 
plasma/qt-related packages that were rebuilt because they linked against 
libQtCore.so.4 had already been compiled with gcc5. I've checked my logs.

Cheers,
Jörg




Re: [gentoo-user] Re: Re: Qt-4.8.7 bug

2017-05-22 Thread Peter Humphrey
On Monday 22 May 2017 09:49:01 Jörg Schaible wrote:
> Hi Peter,
> 
> Peter Humphrey wrote:
> 
> [snip]
> 
> > Have you seen https://bugs.gentoo.org/show_bug.cgi?id=595618 ? It says
> > that "Qt plugins compiled with gcc-4 are incompatible with [Qt compiled
> > with gcc-5]", and [users could hardly] be expected to anticipate that.
> > On the other hand, some kind of notice could be issued, and bug 618922
> > is pursuing that. (That's the one I started this thread with.)
> 
> well, this does not seem to be the complete truth. When I switched to gcc
> 5.x I did a revdep-rebuild for anything that was compiled against
> libstdc++.so.6 just like the according news entry was recommending. And I
> am quite sure that those Qt plugins were part of my 515 recompiled
> packages.
> 
> Nevertheless, my KDE 4 apps were broken after the update to Qt 4.8.7.
> Rebuilding anything that was using libQtCore.so.4 solved it, but I fail to
> see how this is related to the gcc update two weeks ago.

I can only suggest you read bug report 618922 if you haven't already, 
including following its reference to bug 595618. It makes sense to me.

-- 
Regards
Peter




[gentoo-user] Re: Re: Qt-4.8.7 bug

2017-05-22 Thread Jörg Schaible
Hi Peter,

Peter Humphrey wrote:

[snip]

> Have you seen https://bugs.gentoo.org/show_bug.cgi?id=595618 ? It says
> that "Qt plugins compiled with gcc-4 are incompatible with [Qt compiled
> with gcc-5]", and [users could hardly] be expected to anticipate that. On
> the other hand, some kind of notice could be issued, and bug 618922 is
> pursuing that. (That's the one I started this thread with.)

well, this does not seem to be the complete truth. When I switched to gcc 
5.x I did a revdep-rebuild for anything that was compiled against 
libstdc++.so.6 just like the according news entry was recommending. And I am 
quite sure that those Qt plugins were part of my 515 recompiled packages.

Nevertheless, my KDE 4 apps were broken after the update to Qt 4.8.7. 
Rebuilding anything that was using libQtCore.so.4 solved it, but I fail to 
see how this is related to the gcc update two weeks ago.
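The rebuild step described above can be expressed with revdep-rebuild from app-portage/gentoolkit. A sketch only; whether a dry run is wanted first is a matter of taste:

```shell
# Find every installed package still linked against the old Qt4 core
# library -- a dry run first ...
revdep-rebuild --library 'libQtCore.so.4' --pretend

# ... then the actual rebuild, forwarding extra options to emerge.
revdep-rebuild --library 'libQtCore.so.4' -- --ask --verbose
```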

Cheers,
Jörg





[gentoo-user] Re: Re: Flashing hardware via WINE ?

2017-03-19 Thread Jörg Schaible
tu...@posteo.de wrote:

[snip]

> Hi Kai (that's a rhyme! :)
> 
> I have installed Virtualbox already and use the Linux Image I
> installed there for banking purposes only. Feels more secure.
> 
> I would prefer the WIndows-in-a-(virtual)box-solution) as you
> do -- if I would own a Windows installation disc. But do not.

You might give ReactOS a try: https://www.reactos.org/
Works for me in a VBox, but I have no idea if serial ports are supported.

[snip]

Cheers,
Jörg





Re: [gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-04 Thread Dale
Mick wrote:
> On Saturday 04 Feb 2017 01:33:24 Dale wrote:
>> Mick wrote:
>>> On Friday 03 Feb 2017 22:00:11 Dale wrote:
 Jörg Schaible wrote:
> Dale wrote:
>
> [snip]
>
>> Portage lock?  Sometimes, my brain does that too.  lol
> Hehe.
>
>> I thought about it after I hit send but figured you would get the
>> thought, maybe you had one or the other in a mask/unmask file or
>> something that resulted in a conflict?  I was sort of thinking it but
>> didn't type it in for some reason.  Still, if you did the same command
>> I
>> posted, you would have seen the difference and thought on it. 
>> Generally
>> if there is a difference like that, it's because of a local setting, or
>> a change in the tree due to different sync time, which would give the
>> idea of syncing again.
> Again the same issue on another box:
>
> === %< ==
> $ equery l -p boost boost-build
>
>  * Searching for boost ...
>
> [-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
> [IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
> [-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
> [-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
> [-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
> [-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
> [-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
> [-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
> [-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0
>
>  * Searching for boost-build ...
>
> [-P-] [  ] dev-util/boost-build-1.55.0:0
> [-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
> [IP-] [  ] dev-util/boost-build-1.56.0:0
> [-P-] [ ~] dev-util/boost-build-1.58.0:0
> [-P-] [ ~] dev-util/boost-build-1.59.0:0
> [-P-] [ ~] dev-util/boost-build-1.60.0:0
> [-P-] [ ~] dev-util/boost-build-1.61.0:0
> [-P-] [  ] dev-util/boost-build-1.62.0-r1:0
> [-P-] [ ~] dev-util/boost-build-1.63.0:0
> === %< ==
>
> Portage should be capable of an update.
>
>> Anyway, glad it is going.  That's what matters.
> Yep, glad that I have a solution for it now.
>
> - Jörg
 That is really weird.  That looks like exactly the same output I have
 and mine updated just fine.  At least, I don't recall having issues.  I
 read a couple other posts where people were having to run the same
 command more than once to get portage to find an upgrade path.  I wonder,
 does emerge/portage/tree have a hiccup somewhere?  Is this a bug that
 hasn't quite had a finger put on it??

 Weird.

 Dale

 :-)  :-)
>>> From what I have seen when there are two stable versions of the same
>>> package, portage needs to be told which one to install.
>> It should just update them.  At least that is what it has always done
>> for me.  Once they remove the keyword/mask for the packages, they should
>> be put in the upgrade list and portage just figure out which goes first,
>> if it can't go in parallel.
> There should be no keyword/mask to remove.  I am talking about a package 
> which 
> at this point in time has two unkeyworded and unmasked stable versions, like 
> boost and boost-build above.


I was talking about when the devs do it.  Once they remove the
keyword/mask and portage sees an update is available, it should just
update. 


>
>> I don't recall having to tell emerge to do this other than my usual
>> emerge -uvaDN world command.
> I'll post an example next time I come across this (assuming I don't forget), 
> as it happens every now and then here.  The solution is to emerge -1av
> [the package], as Neil has already posted.
>

I'd be interested in that.  Given some posts by others, it seems
something odd is going on at times. 

Dale

:-)  :-) 



Re: [gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-04 Thread Mick
On Saturday 04 Feb 2017 01:33:24 Dale wrote:
> Mick wrote:
> > On Friday 03 Feb 2017 22:00:11 Dale wrote:
> >> Jörg Schaible wrote:
> >>> Dale wrote:
> >>> 
> >>> [snip]
> >>> 
>  Portage lock?  Sometimes, my brain does that too.  lol
> >>> 
> >>> Hehe.
> >>> 
>  I thought about it after I hit send but figured you would get the
>  thought, maybe you had one or the other in a mask/unmask file or
>  something that resulted in a conflict?  I was sort of thinking it but
>  didn't type it in for some reason.  Still, if you did the same command
>  I
>  posted, you would have seen the difference and thought on it. 
>  Generally
>  if there is a difference like that, it's because of a local setting, or
>  a change in the tree due to different sync time, which would give the
>  idea of syncing again.
> >>> 
> >>> Again the same issue on another box:
> >>> 
> >>> === %< ==
> >>> $ equery l -p boost boost-build
> >>> 
> >>>  * Searching for boost ...
> >>> 
> >>> [-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
> >>> [IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
> >>> [-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
> >>> [-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
> >>> [-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
> >>> [-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
> >>> [-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
> >>> [-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
> >>> [-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0
> >>> 
> >>>  * Searching for boost-build ...
> >>> 
> >>> [-P-] [  ] dev-util/boost-build-1.55.0:0
> >>> [-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
> >>> [IP-] [  ] dev-util/boost-build-1.56.0:0
> >>> [-P-] [ ~] dev-util/boost-build-1.58.0:0
> >>> [-P-] [ ~] dev-util/boost-build-1.59.0:0
> >>> [-P-] [ ~] dev-util/boost-build-1.60.0:0
> >>> [-P-] [ ~] dev-util/boost-build-1.61.0:0
> >>> [-P-] [  ] dev-util/boost-build-1.62.0-r1:0
> >>> [-P-] [ ~] dev-util/boost-build-1.63.0:0
> >>> === %< ==
> >>> 
> >>> Portage should be capable of an update.
> >>> 
>  Anyway, glad it is going.  That's what matters.
> >>> 
> >>> Yep, glad that I have a solution for it now.
> >>> 
> >>> - Jörg
> >> 
> >> That is really weird.  That looks like exactly the same output I have
> >> and mine updated just fine.  At least, I don't recall having issues.  I
> >> read a couple other posts where people were having to run the same
> >> command more than once to get portage to find an upgrade path.  I wonder,
> >> does emerge/portage/tree have a hiccup somewhere?  Is this a bug that
> >> hasn't quite had a finger put on it??
> >> 
> >> Weird.
> >> 
> >> Dale
> >> 
> >> :-)  :-)
> > 
> > From what I have seen when there are two stable versions of the same
> > package, portage needs to be told which one to install.
> 
> It should just update them.  At least that is what it has always done
> for me.  Once they remove the keyword/mask for the packages, they should
> be put in the upgrade list and portage just figure out which goes first,
> if it can't go in parallel.

There should be no keyword/mask to remove.  I am talking about a package which 
at this point in time has two unkeyworded and unmasked stable versions, like 
boost and boost-build above.


> I don't recall having to tell emerge to do this other than my usual
> emerge -uvaDN world command.

I'll post an example next time I come across this (assuming I don't forget), 
as it happens every now and then here.  The solution is to emerge -1av 
[the package], as Neil has already posted.
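For the boost case discussed earlier in the thread, that one-shot invocation would look roughly as follows (the package atoms are illustrative):

```shell
# Re-emerge the stable versions portage would not pick on its own,
# without recording them in the world file (--oneshot / -1).
emerge --ask --oneshot dev-libs/boost dev-util/boost-build
```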

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-03 Thread Dale
Mick wrote:
> On Friday 03 Feb 2017 22:00:11 Dale wrote:
>> Jörg Schaible wrote:
>>> Dale wrote:
>>>
>>> [snip]
>>>
 Portage lock?  Sometimes, my brain does that too.  lol
>>> Hehe.
>>>
>>>> I thought about it after I hit send but figured you would get the
>>>> thought, maybe you had one or the other in a mask/unmask file or
>>>> something that resulted in a conflict?  I was sort of thinking it but
>>>> didn't type it in for some reason.  Still, if you did the same command I
>>>> posted, you would have seen the difference and thought on it.  Generally
>>>> if there is a difference like that, it's because of a local setting, or
>>>> a change in the tree due to different sync time, which would give the
>>>> idea of syncing again.
>>> Again the same issue on another box:
>>>
>>> === %< ==
>>> $ equery l -p boost boost-build
>>>
>>>  * Searching for boost ...
>>>
>>> [-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
>>> [IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
>>> [-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
>>> [-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
>>> [-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
>>> [-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
>>> [-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
>>> [-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
>>> [-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0
>>>
>>>  * Searching for boost-build ...
>>>
>>> [-P-] [  ] dev-util/boost-build-1.55.0:0
>>> [-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
>>> [IP-] [  ] dev-util/boost-build-1.56.0:0
>>> [-P-] [ ~] dev-util/boost-build-1.58.0:0
>>> [-P-] [ ~] dev-util/boost-build-1.59.0:0
>>> [-P-] [ ~] dev-util/boost-build-1.60.0:0
>>> [-P-] [ ~] dev-util/boost-build-1.61.0:0
>>> [-P-] [  ] dev-util/boost-build-1.62.0-r1:0
>>> [-P-] [ ~] dev-util/boost-build-1.63.0:0
>>> === %< ==
>>>
>>> Portage should be capable of an update.
>>>
>>>> Anyway, glad it is going.  That's what matters.
>>> Yep, glad that I have a solution for it now.
>>>
>>> - Jörg
>> That is really weird.  That looks like exactly the same output I have
>> and mine updated just fine.  At least, I don't recall having issues.  I
>> read a couple other posts where people were having to run the same
> >> command more than once to get portage to find an upgrade path.  I wonder,
>> does emerge/portage/tree have a hiccup somewhere?  Is this a bug that
>> hasn't quite had a finger put on it??
>>
>> Weird.
>>
>> Dale
>>
>> :-)  :-)
> From what I have seen when there are two stable versions of the same package, 
> portage needs to be told which one to install.

It should just update them.  At least that is what it has always done
for me.  Once they remove the keyword/mask for the packages, they should
be put in the upgrade list and portage just figures out which goes first,
if they can't go in parallel. 

I don't recall having to tell emerge to do this other than my usual
emerge -uvaDN world command. 

Dale

:-)  :-) 



Re: [gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-03 Thread Mick
On Friday 03 Feb 2017 22:00:11 Dale wrote:
> Jörg Schaible wrote:
> > Dale wrote:
> > 
> > [snip]
> > 
> >> Portage lock?  Sometimes, my brain does that too.  lol
> > 
> > Hehe.
> > 
> >> I thought about it after I hit send but figured you would get the
> >> thought, maybe you had one or the other in a mask/unmask file or
> >> something that resulted in a conflict?  I was sort of thinking it but
> >> didn't type it in for some reason.  Still, if you did the same command I
> >> posted, you would have seen the difference and thought on it.  Generally
> >> if there is a difference like that, it's because of a local setting, or
> >> a change in the tree due to different sync time, which would give the
> >> idea of syncing again.
> > 
> > Again the same issue on another box:
> > 
> > === %< ==
> > $ equery l -p boost boost-build
> > 
> >  * Searching for boost ...
> > 
> > [-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
> > [IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
> > [-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
> > [-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
> > [-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
> > [-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
> > [-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
> > [-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
> > [-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0
> > 
> >  * Searching for boost-build ...
> > 
> > [-P-] [  ] dev-util/boost-build-1.55.0:0
> > [-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
> > [IP-] [  ] dev-util/boost-build-1.56.0:0
> > [-P-] [ ~] dev-util/boost-build-1.58.0:0
> > [-P-] [ ~] dev-util/boost-build-1.59.0:0
> > [-P-] [ ~] dev-util/boost-build-1.60.0:0
> > [-P-] [ ~] dev-util/boost-build-1.61.0:0
> > [-P-] [  ] dev-util/boost-build-1.62.0-r1:0
> > [-P-] [ ~] dev-util/boost-build-1.63.0:0
> > === %< ==
> > 
> > Portage should be capable of an update.
> > 
> >> Anyway, glad it is going.  That's what matters.
> > 
> > Yep, glad that I have a solution for it now.
> > 
> > - Jörg
> 
> That is really weird.  That looks like exactly the same output I have
> and mine updated just fine.  At least, I don't recall having issues.  I
> read a couple other posts where people were having to run the same
> command more than once to get portage to find a upgrade path.  I wonder,
> does emerge/portage/tree have a hiccup somewhere?  Is this a bug that
> hasn't quite had a finger put on it??
> 
> Weird.
> 
> Dale
> 
> :-)  :-)

From what I have seen when there are two stable versions of the same package, 
portage needs to be told which one to install.
-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-03 Thread Dale
Jörg Schaible wrote:
> Dale wrote:
>
> [snip]
>
>> Portage lock?  Sometimes, my brain does that too.  lol
> Hehe.
>  
>> I thought about it after I hit send but figured you would get the
>> thought, maybe you had one or the other in a mask/unmask file or
>> something that resulted in a conflict?  I was sort of thinking it but
>> didn't type it in for some reason.  Still, if you did the same command I
>> posted, you would have seen the difference and thought on it.  Generally
>> if there is a difference like that, it's because of a local setting, or
>> a change in the tree due to different sync time, which would give the
>> idea of syncing again.
> Again the same issue on another box:
>
> === %< ==
> $ equery l -p boost boost-build
>  * Searching for boost ...
> [-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
> [IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
> [-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
> [-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
> [-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
> [-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
> [-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
> [-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
> [-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0
>
>  * Searching for boost-build ...
> [-P-] [  ] dev-util/boost-build-1.55.0:0
> [-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
> [IP-] [  ] dev-util/boost-build-1.56.0:0
> [-P-] [ ~] dev-util/boost-build-1.58.0:0
> [-P-] [ ~] dev-util/boost-build-1.59.0:0
> [-P-] [ ~] dev-util/boost-build-1.60.0:0
> [-P-] [ ~] dev-util/boost-build-1.61.0:0
> [-P-] [  ] dev-util/boost-build-1.62.0-r1:0
> [-P-] [ ~] dev-util/boost-build-1.63.0:0
> === %< ==
>
> Portage should be capable of an update.
>  
>> Anyway, glad it is going.  That's what matters.
> Yep, glad that I have a solution for it now.
>
> - Jörg
>

That is really weird.  That looks like exactly the same output I have
and mine updated just fine.  At least, I don't recall having issues.  I
read a couple other posts where people were having to run the same
command more than once to get portage to find an upgrade path.  I wonder,
does emerge/portage/tree have a hiccup somewhere?  Is this a bug that
hasn't quite had a finger put on it?? 

Weird.

Dale

:-)  :-) 



[gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-03 Thread Jörg Schaible
Dale wrote:

[snip]

> Portage lock?  Sometimes, my brain does that too.  lol

Hehe.
 
> I thought about it after I hit send but figured you would get the
> thought, maybe you had one or the other in a mask/unmask file or
> something that resulted in a conflict?  I was sort of thinking it but
> didn't type it in for some reason.  Still, if you did the same command I
> posted, you would have seen the difference and thought on it.  Generally
> if there is a difference like that, it's because of a local setting, or
> a change in the tree due to different sync time, which would give the
> idea of syncing again.

Again the same issue on another box:

=== %< ==
$ equery l -p boost boost-build
 * Searching for boost ...
[-P-] [  ] dev-libs/boost-1.55.0-r2:0/1.55.0
[IP-] [  ] dev-libs/boost-1.56.0-r1:0/1.56.0
[-P-] [ ~] dev-libs/boost-1.58.0-r1:0/1.58.0
[-P-] [ ~] dev-libs/boost-1.59.0:0/1.59.0
[-P-] [ ~] dev-libs/boost-1.60.0:0/1.60.0
[-P-] [ ~] dev-libs/boost-1.61.0:0/1.61.0
[-P-] [ ~] dev-libs/boost-1.61.0-r1:0/1.61.0
[-P-] [  ] dev-libs/boost-1.62.0-r1:0/1.62.0
[-P-] [ ~] dev-libs/boost-1.63.0:0/1.63.0

 * Searching for boost-build ...
[-P-] [  ] dev-util/boost-build-1.55.0:0
[-P-] [ ~] dev-util/boost-build-1.55.0-r1:0
[IP-] [  ] dev-util/boost-build-1.56.0:0
[-P-] [ ~] dev-util/boost-build-1.58.0:0
[-P-] [ ~] dev-util/boost-build-1.59.0:0
[-P-] [ ~] dev-util/boost-build-1.60.0:0
[-P-] [ ~] dev-util/boost-build-1.61.0:0
[-P-] [  ] dev-util/boost-build-1.62.0-r1:0
[-P-] [ ~] dev-util/boost-build-1.63.0:0
=== %< ==

Portage should be capable of an update.
 
> Anyway, glad it is going.  That's what matters.

Yep, glad that I have a solution for it now.

- Jörg




[gentoo-user] Re: Re: boost-1.62.0-r1 blocked by nothing ??

2017-02-02 Thread Jörg Schaible
Hi Neil,

Neil Bothwick wrote:

> On Thu, 2 Feb 2017 14:47:29 +0200, Nikos Chantziaras wrote:
> 
>> > now I have an emerge mystery myself: It claims boost is blocked
>> > by  ... nothing.
>> 
>> Same here. I don't know why, but the way I solved it is by unmerging
>> boost and then trying the update again.
>> 
>> When I unmerged both boost as well as boost-build, portage wanted to
>> re-install 1.62. The only way I could make it work is keep boost-build
>> 1.62 installed and only unmerge boost.
> 
> All I did was "emerge -1a boost boost-build" and it worked fine, as it
> has done in the past.

Thanks for the idea. Yes, this works. Strange nevertheless.

Cheers,
Jörg




[gentoo-user] Re: Re: Re: Re: Re: KWallet doesn't recognise my password

2016-12-16 Thread Jörg Schaible
Mick wrote:

> On Thursday 15 Dec 2016 14:02:39 Jörg Schaible wrote:
>> Mick wrote:
>> > On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
>> >> Mick wrote:
>> >> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
>> >> [snip]
>> >> 
>> >> >> No, that's the point: If you enable it, all kwallet-4 based apps
>> >> >> will fail. At least until 5.7. I've not tested 5.8 yet.
>> >> >> 
>> >> >> Cheers,
>> >> >> Jörg
>> >> > 
>> >> > This is what works here without any problems:
>> >> [snip]
>> >> 
>> >> Well, for me it broke KDE4-based apps on several different machines.
>> >> So, tell me, can you open with Konqueror local files?
>> >> 
>> >> Cheers,
>> >> Jörg
>> > 
>> > Yes, I can open files from Konqueror with a single click, unlike
>> > Dolphin which requires a double click to descend into a directory or
>> > open a file.
>> 
>> Well, for me it broke, on several machines, the file protocol for all
>> KDE 4 based apps (Konqueror only shows an error page about an unknown
>> protocol 'file'), and the FileOpen dialog no longer works (you cannot
>> open/save files in those apps using the default dialog, e.g.
>> libreoffice, amarok, ...).
>> 
>> As soon as I deactivate kwallet-pam, all apps start working properly.
>> 
>> Cheers,
>> Jörg
> 
> How do you activate/de-activate kwallet-pam?

By setting comments in /etc/pam.d/sddm

=== %< ===
$ cat /etc/pam.d/sddm
#%PAM-1.0

auth        include   system-login
account     include   system-login
password    include   system-login
session     include   system-login
#-auth      optional  pam_kwallet.so kdehome=.kde4
#-auth      optional  pam_kwallet5.so
#-session   optional  pam_kwallet.so
#-session   optional  pam_kwallet5.so auto_start
=== %< ===

> This is what my /etc/pam.d/kde contains, in case yours is different:
> 
> $ cat /etc/pam.d/kde
> #%PAM-1.0
> 
> auth   required pam_nologin.so
> 
> auth   include  system-local-login
> 
> account    include  system-local-login
> 
> password   include  system-local-login
> 
> session    include  system-local-login

=== %< ===
$ cat /etc/pam.d/kde
#%PAM-1.0

auth   required pam_nologin.so
auth   include  system-local-login
account     include   system-local-login
password    include   system-local-login
session     include   system-local-login
-auth       optional  pam_kwallet.so kdehome=.kde4
-auth       optional  pam_kwallet5.so
-session    optional  pam_kwallet.so
-session    optional  pam_kwallet5.so auto_start
=== %< ===

It obviously still contains the kwallet-pam entries, but AFAICS only the 
ones for your display manager are relevant. At least this is what the elog 
message indicates when you install it.

Cheers,
Jörg
 





Re: [gentoo-user] Re: Re: Re: Re: KWallet doesn't recognise my password

2016-12-15 Thread Mick
On Thursday 15 Dec 2016 14:02:39 Jörg Schaible wrote:
> Mick wrote:
> > On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
> >> Mick wrote:
> >> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
> >> [snip]
> >> 
> >> >> No, that's the point: If you enable it, all kwallet-4 based apps will
> >> >> fail. At least until 5.7. I've not tested 5.8 yet.
> >> >> 
> >> >> Cheers,
> >> >> Jörg
> >> > 
> >> > This is what works here without any problems:
> >> [snip]
> >> 
> >> Well, for me it broke KDE4-based apps on several different machines. So,
> >> tell me, can you open local files with Konqueror?
> >> 
> >> Cheers,
> >> Jörg
> > 
> > Yes, I can open files from Konqueror with a single click, unlike Dolphin
> > which requires a double click to descend into a directory or open a file.
> 
> Well, for me it broke, on several machines, the file protocol for all KDE 4
> based apps (Konqueror only shows an error page about an unknown protocol
> 'file'), and the FileOpen dialog no longer works (you cannot open/save files
> in those apps using the default dialog, e.g. libreoffice, amarok, ...).
> 
> As soon as I deactivate kwallet-pam, all apps start working properly.
> 
> Cheers,
> Jörg

How do you activate/de-activate kwallet-pam?

This is what my /etc/pam.d/kde contains, in case yours is different:

$ cat /etc/pam.d/kde
#%PAM-1.0

auth   required pam_nologin.so

auth   include  system-local-login

account    include  system-local-login

password   include  system-local-login

session    include  system-local-login

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] Re: Re: Re: KWallet doesn't recognise my password

2016-12-15 Thread Mick
On Thursday 15 Dec 2016 11:58:06 J. Roeleveld wrote:
> On December 15, 2016 7:23:21 AM GMT+01:00, Mick  
wrote:
> >On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
> >> Mick wrote:
> >> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
> >> [snip]
> >> 
> >> >> No, that's the point: If you enable it, all kwallet-4 based apps will
> >> >> fail. At least until 5.7. I've not tested 5.8 yet.
> >> >> 
> >> >> Cheers,
> >> >> Jörg
> >> > 
> >> > This is what works here without any problems:
> >> [snip]
> >> 
> >> Well, for me it broke KDE4-based apps on several different machines. So,
> >> tell me, can you open local files with Konqueror?
> >> 
> >> Cheers,
> >> Jörg
> >
> >Yes, I can open files from Konqueror with a single click, unlike Dolphin
> >which requires a double click to descend into a directory or open a file.
> 
> I think single and double click can be configured somewhere.
> I can still use single click in Dolphin.

Yes, it can be configured in InputDevices/Mouse Control/General, in 
systemsettings5, but for some unfathomable reason this setting won't take here 
on two different machines.  This has been so since the early days of Plasma.


> Not used Konqueror in a long time.
> 
> --
> Joost

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


[gentoo-user] Re: Re: Re: Re: KWallet doesn't recognise my password

2016-12-15 Thread Jörg Schaible
Mick wrote:

> On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
>> Mick wrote:
>> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
>> [snip]
>> 
>> >> No, that's the point: If you enable it, all kwallet-4 based apps will
>> >> fail. At least until 5.7. I've not tested 5.8 yet.
>> >> 
>> >> Cheers,
>> >> Jörg
>> > 
>> > This is what works here without any problems:
>> [snip]
>> 
>> Well, for me it broke KDE4-based apps on several different machines. So,
>> tell me, can you open local files with Konqueror?
>> 
>> Cheers,
>> Jörg
> 
> Yes, I can open files from Konqueror with a single click, unlike Dolphin
> which requires a double click to descend into a directory or open a file.

Well, for me it broke, on several machines, the file protocol for all KDE 4 
based apps (Konqueror only shows an error page about an unknown protocol 
'file'), and the FileOpen dialog no longer works (you cannot open/save files 
in those apps using the default dialog, e.g. libreoffice, amarok, ...).

As soon as I deactivate kwallet-pam, all apps start working properly.

Cheers,
Jörg




Re: [gentoo-user] Re: Re: Re: KWallet doesn't recognise my password

2016-12-15 Thread J. Roeleveld
On December 15, 2016 7:23:21 AM GMT+01:00, Mick  
wrote:
>On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
>> Mick wrote:
>> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
>> [snip]
>> 
>> >> No, that's the point: If you enable it, all kwallet-4 based apps will
>> >> fail. At least until 5.7. I've not tested 5.8 yet.
>> >> 
>> >> Cheers,
>> >> Jörg
>> > 
>> > This is what works here without any problems:
>> [snip]
>> 
>> Well, for me it broke KDE4-based apps on several different machines. So,
>> tell me, can you open local files with Konqueror?
>> 
>> Cheers,
>> Jörg
>
>Yes, I can open files from Konqueror with a single click, unlike Dolphin
>which requires a double click to descend into a directory or open a file.

I think single and double click can be configured somewhere.
I can still use single click in Dolphin.

Not used Konqueror in a long time.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] Re: Re: Re: KWallet doesn't recognise my password

2016-12-14 Thread Mick
On Wednesday 14 Dec 2016 09:08:11 Jörg Schaible wrote:
> Mick wrote:
> > On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
> [snip]
> 
> >> No, that's the point: If you enable it, all kwallet-4 based apps will
> >> fail. At least until 5.7. I've not tested 5.8 yet.
> >> 
> >> Cheers,
> >> Jörg
> > 
> > This is what works here without any problems:
> [snip]
> 
> Well, for me it broke KDE4-based apps on several different machines. So,
> tell me, can you open local files with Konqueror?
> 
> Cheers,
> Jörg

Yes, I can open files from Konqueror with a single click, unlike Dolphin which 
requires a double click to descend into a directory or open a file.
-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


[gentoo-user] Re: Re: Re: KWallet doesn't recognise my password

2016-12-14 Thread Jörg Schaible
Mick wrote:

> On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:

[snip]

>> No, that's the point: If you enable it, all kwallet-4 based apps will
>> fail. At least until 5.7. I've not tested 5.8 yet.
>> 
>> Cheers,
>> Jörg
> 
> This is what works here without any problems:

[snip]

Well, for me it broke KDE4-based apps on several different machines. So, 
tell me, can you open local files with Konqueror?

Cheers,
Jörg




Re: [gentoo-user] Re: Re: KWallet doesn't recognise my password

2016-12-13 Thread Mick
On Tuesday 13 Dec 2016 11:35:33 Jörg Schaible wrote:
> J. Roeleveld wrote:
> > On Tuesday, December 13, 2016 11:10:31 AM Jörg Schaible wrote:
> >> Peter Humphrey wrote:
> >> > Hello list,
> >> > 
> >> > Until this morning I've had no real problems with KMail and co. for
> >> > quite a while, but something's upset the wallet system so that my
> >> > password is no longer recognised when I start KMail-2. This is what
> >> > I've tried:
> >> > 
> >> > 1.   Re-created a blank /home partition and restored from yesterday's
> >> > backup. (Yesterday's setup was working nicely.)
> >> > No difference, so:
> >> > 
> >> > 2.   Deleted ~/kde4/share/apps/kwallet (while not running live) and
> >> > rebooted. No difference there either.
> >> > 
> >> > Is there something I can restore from backup to enable me to use the
> >> > wallet again - perhaps something in /etc/ssl or /var/tmp? Maybe I need
> >> > to remerge the wallet packages, or maybe I'll have to create an
> >> > entirely new user for myself. I hoped I'd seen the last of that kind of
> >> > masochism.
> >> > 
> >> > In case it's relevant, the appearance of this problem coincided with a
> >> > new kernel, 4.9.0, which I compiled as usual and rebooted. I just got a
> >> > blank screen. I had to revert to 4.8.14 and rebuild the associated
> >> > modules before I could boot, and then fsck ran to check all the file
> >> > systems.
> >> 
> >> Just in case: Did you activate kwallet-pam?
> > 
> > I don't think this is a requirement. I have this disabled and need to
> > enter the password twice. Once for kwallet-4 and once for kwallet-5.
> 
> No, that's the point: If you enable it, all kwallet-4 based apps will fail.
> At least until 5.7. I've not tested 5.8 yet.
> 
> Cheers,
> Jörg

This is what works here without any problems:

$ eix -l kwallet
[I] kde-apps/kwalletd
 Available versions:  
 (4)
16.04.3   (4/16.04)^t   [aqua debug gpg]
   [M]~ 16.04.3-r1 (4/16.04)^t  [aqua debug gpg] Michael Palimaka 
 (12 Nov 2016) Depends on masked app-crypt/gpgme[cxx]
 Installed versions:  16.04.3(4)^t(10:27:17 08/20/16)(-aqua -debug -gpg)
 Homepage:https://www.kde.org/
 Description: KDE Password Server

[I] kde-apps/kwalletmanager
 Available versions:  
 (4)
15.04.3-r1 (4/15.04)[aqua debug +handbook]
 (5)
16.04.3 [debug +handbook]
   ~16.04.3-r1  [debug +handbook]
   ~16.08.3 [debug +handbook]
 Installed versions:  16.04.3(5)(09:27:22 08/20/16)(handbook -debug)
 Homepage:https://www.kde.org/
 Description: KDE Wallet management tool

[I] kde-apps/signon-kwallet-extension
 Available versions:  
 (5)
16.04.3 [debug]
   ~16.08.3 [debug]
 Installed versions:  16.04.3(5)(08:51:14 08/20/16)(-debug)
 Homepage:https://01.org/gsso/
 Description: KWallet extension for signond

[I] kde-frameworks/kwallet
 Available versions:  
 (5)
5.26.0(5/5.26)  [debug gpg +man test]
   ~5.28.0(5/5.28)  [debug gpg +man test]
   [M]~ 5.28.0-r1 (5/5.28)  [debug gpg +man test] Michael Palimaka 
 (12 Nov 2016) Depends on masked app-crypt/gpgme[cxx]
 Installed versions:  5.26.0(5)(15:11:59 10/15/16)(man -debug -gpg -test)
 Homepage:https://www.kde.org/
 Description: Framework providing desktop-wide storage for 
passwords

[I] kde-plasma/kwallet-pam
 Available versions:  
 (5)
5.8.3   [debug +oldwallet]
   ~5.8.4   [debug +oldwallet]
 Installed versions:  5.8.3(5)(10:46:42 12/10/16)(oldwallet -debug)
 Homepage:https://www.kde.org/
 Description: KWallet PAM module to not enter password again

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


[gentoo-user] Re: Re: KWallet doesn't recognise my password

2016-12-13 Thread Jörg Schaible
J. Roeleveld wrote:

> On Tuesday, December 13, 2016 11:10:31 AM Jörg Schaible wrote:
>> Peter Humphrey wrote:
>> > Hello list,
>> > 
>> > Until this morning I've had no real problems with KMail and co. for
>> > quite a while, but something's upset the wallet system so that my
>> > password is no longer recognised when I start KMail-2. This is what
>> > I've tried:
>> > 
>> > 1. Re-created a blank /home partition and restored from yesterday's
>> > backup. (Yesterday's setup was working nicely.)
>> > No difference, so:
>> > 
>> > 2. Deleted ~/kde4/share/apps/kwallet (while not running live) and
>> > rebooted. No difference there either.
>> > 
>> > Is there something I can restore from backup to enable me to use the
>> > wallet again - perhaps something in /etc/ssl or /var/tmp? Maybe I need
>> > to remerge the wallet packages, or maybe I'll have to create an
>> > entirely new user for myself. I hoped I'd seen the last of that kind of
>> > masochism.
>> > 
>> > In case it's relevant, the appearance of this problem coincided with a
>> > new kernel, 4.9.0, which I compiled as usual and rebooted. I just got a
>> > blank screen. I had to revert to 4.8.14 and rebuild the associated
>> > modules before I could boot, and then fsck ran to check all the file
>> > systems.
>> 
>> Just in case: Did you activate kwallet-pam?
> 
> I don't think this is a requirement. I have this disabled and need to
> enter the password twice. Once for kwallet-4 and once for kwallet-5.

No, that's the point: If you enable it, all kwallet-4 based apps will fail. 
At least until 5.7. I've not tested 5.8 yet.

Cheers,
Jörg




[gentoo-user] Re: Re: KDE 5: Broken file protocol for KDE 4 apps

2016-10-13 Thread Jörg Schaible
Hi,

P Levine wrote:

> On Wed, Oct 12, 2016 at 5:54 PM, Jörg Schaible 
> wrote:
>> Anyone? After upgrading a second machine to KDE/Plasma 5, I have the same
>> behavior there. All KDE-4-based apps fail to interact with the file
>> system. Using KMail I can no longer add any attachment to an email nor
>> save an existing attachment to disk.
>>
>> Jörg Schaible wrote:
>>
>>> Hi,
>>>
>>> after the update to Plasma 5/KF5, I can no longer open (HTML) files from
>>> my local disk with Konqueror. It claims it does no longer know the file
>>> protocol. I get a similar error in Amarok when I try to apply a cover to
>>> an album from the local disk. It seems all KDE4-based applications are
>>> affected.
>>>
>>> Does anybody know what's causing this behavior and how to solve it?
>>>
>>> Cheers,
>>> Jörg
> 
> Try running the apps from command line, using '--nofork' and/or
> '--debug' where/if applicable.  Try to reproduce the behavior and see
> if something potentially relevant is printed out.

Nothing. No message on the console, no entry in .xsession-errors. I even 
tried to compare strace output for the same call:

  konqueror /usr/shared/docs/some/file.html

One time from a user account where this call behaves correctly, once from 
the account where konqueror fails to open it. Nothing obvious :-/

- Jörg




[gentoo-user] Re: Re: KDE 5: Broken file protocol for KDE 4 apps

2016-10-13 Thread Jörg Schaible
Michael Mol wrote:

> On Wednesday, October 12, 2016 11:54:48 PM Jörg Schaible wrote:
>> Anyone? After upgrading a second machine to KDE/Plasma 5, I have the same
>> behavior there. All KDE-4-based apps fail to interact with the file
>> system. Using KMail I can no longer add any attachment to an email nor
>> save an existing attachment to disk.
> 
> I'm running KMail (Gentoo doesn't have the KDE5 version in tree yet, so
> KDE4), and I send file attachments all the time. So I can say it's at
> least not *intrinsically* broken...

As already said, it is not KMail, it affects any KDE4-based app ... and 
there are still a lot of them.

> Much of KDE4 and KDE5 wind up installed side by side, FWIW. I'd suggest
> cycling through emerge @preserved-rebuild, revdep-rebuild, depclean, and
> see if that shakes something loose.

My dependency tree is fine.

> The KDE4->KDE5 transition was generally a royal PITA for me, too, though I
> can't remember what all broke...

I did it in the last 4 weeks on 3 machines. Two have this problem, the third 
does not.

However, it has to be something in the local configuration. If I log into 
another (unused) account in this machine, I don't have this problem :-/

Cheers,
Jörg





[gentoo-user] Re: Re: Re: Partition of 3TB USB drive not detected

2016-08-03 Thread Jörg Schaible
Hi Mick

Mick wrote:

> On Sunday 31 Jul 2016 22:38:22 Jörg Schaible wrote:
>> Hi Mick,
>> 
>> Mick wrote:
>> > On Sunday 31 Jul 2016 19:14:45 Jörg Schaible wrote:
>> >> Hi Daniel,
>> >> 
>> >> thanks for your response.
>> >> 
>> >> Daniel Frey wrote:
>> >> 
>> >> [snip]
>> >> 
>> >> > I can only think of two reasons, the kernel on the livecd doesn't
>> >> > support GPT (which is unlikely)
>> >> 
>> >> That would be really strange. However, how can I prove it?
>> > 
>> > If after you boot your systemrescuecd you can list:
>> > 
>> > /sys/firmware/efi
>> > 
>> > you have booted into UEFI mode.  If not, you have booted into legacy
>> > BIOS mode.
>> 
>> This machine has only plain old BIOS. The question is, why one kernel
>> detects the 3TB partition and the other one does not. How can I prove
>> GPT support for the kernel itself?
> 
> 
> I see.  In this case have a look at /proc/config (it may be compressed) or
> depending on your version of sysrescuecd and kernel choice, have a look
> here:
> 
> https://sourceforge.net/p/systemrescuecd/code/ci/master/tree/
> 
> then compare your configuration to theirs.  The kernel module for GPT is
> 'CONFIG_EFI_PARTITION' and it must be built in, rather than as a separate
> module.

Now it's getting weird.

My normal kernel (4.4.6) does not have that flag set. Nevertheless it 
detects the partition. The two kernels (both 4.4.12) of systemrescuecd have 
that flag. I've tested another machine with a kernel that also has the flag, 
and it does not detect this partition either.

However, I have another 6TB USB drive and that one has one big partition 
that is detected by both machines. The funny thing is, it reports having 
only an MBR with one 6TB partition (same output on both kernels):
== %< ==
~ # parted /dev/sde print
Model: WD My Book 1230 (scsi)
Disk /dev/sde: 6001GB
Sector size (logical/physical): 4096B/4096B
Partition Table: msdos
Disk Flags: 

Number  Start   End SizeType File system  Flags
 1  8389kB  6001GB  6001GB  primary  ext4

~ # gdisk /dev/sde
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***


Command (? for help): q
== %< ==

AFAICS this partition works fine, fsck does not report any problem. The 
funny thing is, it should not have been possible, because of the 2TB limit 
of MBR.

???
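A back-of-the-envelope calculation (a sketch by way of explanation, not from the thread) resolves the apparent contradiction: MBR stores partition start and size as 32-bit sector counts, so the ceiling depends on the logical sector size, and parted reported 4096-byte sectors for this drive.

```shell
# MBR partition entries hold 32-bit LBA fields, so the maximum
# addressable size is (2^32 - 1) sectors times the logical sector size.
limit_512=$((  4294967295 * 512  ))   # the familiar "2 TiB" limit
limit_4096=$(( 4294967295 * 4096 ))   # ~16 TiB with 4 KiB logical sectors
echo "512 B sectors:  $limit_512 bytes"
echo "4096 B sectors: $limit_4096 bytes"
```

With 512-byte sectors the limit is the usual ~2 TiB; with the 4096-byte logical sectors this enclosure reports, an msdos label can address ~16 TiB, which is why the 6TB partition is valid.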

- Jörg





Re: [gentoo-user] Re: Re: Partition of 3TB USB drive not detected

2016-07-31 Thread Mick
On Sunday 31 Jul 2016 22:38:22 Jörg Schaible wrote:
> Hi Mick,
> 
> Mick wrote:
> > On Sunday 31 Jul 2016 19:14:45 Jörg Schaible wrote:
> >> Hi Daniel,
> >> 
> >> thanks for your response.
> >> 
> >> Daniel Frey wrote:
> >> 
> >> [snip]
> >> 
> >> > I can only think of two reasons, the kernel on the livecd doesn't
> >> > support GPT (which is unlikely)
> >> 
> >> That would be really strange. However, how can I prove it?
> > 
> > If after you boot your systemrescuecd you can list:
> > 
> > /sys/firmware/efi
> > 
> > you have booted into UEFI mode.  If not, you have booted into legacy BIOS
> > mode.
> 
> This machine has only plain old BIOS. The question is, why one kernel
> detects the 3TB partition and the other one does not. How can I prove
> GPT support for the kernel itself?


I see.  In this case have a look at /proc/config (it may be compressed) or 
depending on your version of sysrescuecd and kernel choice, have a look here:

https://sourceforge.net/p/systemrescuecd/code/ci/master/tree/

then compare your configuration to theirs.  The kernel module for GPT is 
'CONFIG_EFI_PARTITION' and it must be built in, rather than as a separate 
module.
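The check above can be sketched as a quick pipeline. The sample config here is hypothetical stand-in data so the pipeline is visible; on a real system you would read /proc/config.gz instead (which requires the kernel to have been built with CONFIG_IKCONFIG_PROC):

```shell
# Hypothetical sample of a kernel config; on a real system replace the
# printf with:  zcat /proc/config.gz
sample='CONFIG_MSDOS_PARTITION=y
CONFIG_EFI_PARTITION=y'
result=$(printf '%s\n' "$sample" | grep EFI_PARTITION)
echo "$result"
```

A result of `CONFIG_EFI_PARTITION=y` means GPT ("EFI") partition-table support is built in; no output means the kernel cannot parse GPT labels at all.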

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.
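The UEFI-versus-BIOS test quoted earlier in this thread can be wrapped in a small sketch (assuming a Linux system; /sys/firmware/efi is the standard path the kernel exposes only when booted through UEFI firmware):

```shell
# Boot-mode check: the kernel creates /sys/firmware/efi only
# when the machine was booted in UEFI mode.
if [ -d /sys/firmware/efi ]; then
    mode="UEFI"
else
    mode="legacy BIOS"
fi
echo "Booted in $mode mode"
```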


[gentoo-user] Re: Re: Partition of 3TB USB drive not detected

2016-07-31 Thread Jörg Schaible
james wrote:

> On 07/31/2016 12:56 PM, Jörg Schaible wrote:
>> Jörg Schaible wrote:
>>
>>> Hi Daniel,
>>>
>>> thanks for your response.
>>>
>>> Daniel Frey wrote:
>>>
>>> [snip]
>>>
 I can only think of two reasons, the kernel on the livecd doesn't
 support GPT (which is unlikely)
>>>
>>> That would be really strange. However, how can I prove it?
>>>
 or you're booting a 32-bit kernel live
 USB. I am reasonably certain for drives > 2TB a 64-bit kernel and GPT
 are required.
>>>
>>> No, I've always chosen 64-bit kernels. I wonder what is so special about
>>> this partition ...
>>
>> Currently I wonder, why my system can find the partition at all:
>>
>>  %< 
>> # gdisk -l /dev/sdi
>> GPT fdisk (gdisk) version 1.0.1
>>
>> Partition table scan:
>>   MBR: protective
>>   BSD: not present
>>   APM: not present
>>   GPT: not present
> 
> If you have seen my recent thread,

I saw it, but did not read it in depth because I had the impression it is 
mainly about EFI systems. I'll re-read it ...

> much of this automounting during boot(strapping) is flaky. That is, much
> of what I have been searching for is a default (magical) partitioning
> schema that will eventually lead to clear documents on the current state
> of affairs, not only with old versus new motherboards (MBR --> EFI) and
> disks (MBR < 2.2T, GPT > 2.2T), but including all sorts of new ARM and
> other embedded (Linux) boards.
> 
> Different forms of Solid State memory are next on my list, with usb (1.x
> --> 3.x) being top of the SS memory mediums. (Sorry I do not have
> more atm).
>
>> Creating new GPT entries.
>> Disk /dev/sdi: 732566646 sectors, 2.7 TiB
>> Logical sector size: 4096 bytes
>> Disk identifier (GUID): 80C04475-9B51-4A44-A52F-1F165AE02695
>> Partition table holds up to 128 entries
>> First usable sector is 6, last usable sector is 732566640
>> Partitions will be aligned on 256-sector boundaries
>> Total free space is 732566635 sectors (2.7 TiB)
>>
>> Number  Start (sector)End (sector)  Size   Code  Name
>>  %< 
>>
>> However, it's mounted successfully, see system logs:
>>
>>  %< 
>> [22735.626752] sd 13:0:0:0: [sdi] 732566646 4096-byte logical blocks:
>> (3.00 TB/2.73 TiB)
>> [22735.629255]  sdi: sdi1
>> [23414.066315] EXT4-fs (sdi1): mounted filesystem with ordered data mode.
>> Opts: (null)
>>  %< 
>>
>> Has anyone ever tried the recovery option of GPT disk to rebuild GPT from
>> MBR?
> 
> I see some sort of 'auto correction' by gpt technology to convert many
> forms of perceived mbr to gpt to be used by the booting process for
> spinning rust. So this issue is not limited to usb medium. I would also
> point out that I'd look deeply into the usb specs for the vendor of your
> usb sticks, as they do some 'funky things' at the firmware level inside
> many of the newer/faster/larger usb devices. It's not just dumb memory
> like the early 1.x devices. Many are slanted to Microsoft business
> strategies. I'm not suggesting that is your current issues. I'm merely
> pointing out that some newer usb sticks are systems themselves complete
> with firmware so the device looks like dumb memory. Furthermore, the
> silicon vendors provide firmware options to usb stick vendors (like
> Texas Instruments), but also the vendors add to or change the hidden
> firmware to meet their multifaceted business objectives. Sadly, the NSA is
> deeply involved here, as are many nation states and large corporations.
> You'd be surprised what you'd find in a modern usb stick, should you take
> it into a class 6+ clean-room for analysis. The lower the particle count
> the more fantastic the tools
> to open up silicon and look deeply into what is actually going on.
> This is why folks love those classified research facilities that have
> govt contracts and folks hanging around. Lots of very, very cool toys
> you just do not hear about... Way beyond microscopes built by
> physicists.

Actually it is not that modern. ~5 year old Intenso 2GB. I'd be surprised if 
booting from the stick prevents partition detection of another USB drive, 
but who knows? Maybe I should burn the iso instead and boot that one ;-)

> Prolly not your issue, but still present. Cheap ass usb vendors often
> have corner issues that are unintentional, that is why well recognized
> vendors of SS memory are the best to deal with, for consistency of
> behavior.
> 
> I'd use as many different tools as you can find and read the vendor &
> silicon manufacturer's docs to see what you are really dealing with to
> ferret out this weirdness. (it's a darn time sink, just so you know).
> 
> 
> [1] http://www.cleanroom.byu.edu/particlecount.phtml
> 
> hth,
> James

Thanks,
Jörg
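As an aside, the "MBR: protective" result gdisk reported earlier in this thread can be verified by hand: a GPT disk carries exactly one MBR partition entry of type 0xEE. A small sketch (the `is_protective_mbr` helper is made up; entry 1 starts at byte 446, so its type byte sits at offset 450):

```shell
# Detect a protective MBR on a disk or image: the first MBR partition
# entry has type 0xEE (octal 356). Its type byte is at offset 446 + 4 = 450.
is_protective_mbr() {
    [ "$(dd if="$1" bs=1 skip=450 count=1 2>/dev/null | od -An -tx1 | tr -d ' ')" = "ee" ]
}

# Build a 512-byte fake MBR with type 0xEE in entry 1 and test it:
dd if=/dev/zero of=/tmp/fake.mbr bs=512 count=1 2>/dev/null
printf '\356' | dd of=/tmp/fake.mbr bs=1 seek=450 conv=notrunc 2>/dev/null
is_protective_mbr /tmp/fake.mbr && echo "protective MBR"
```

Running it against /dev/sdi (as root) would confirm gdisk's scan without touching the disk.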





[gentoo-user] Re: Re: Partition of 3TB USB drive not detected

2016-07-31 Thread Jörg Schaible
Hi Mick,

Mick wrote:

> On Sunday 31 Jul 2016 19:14:45 Jörg Schaible wrote:
>> Hi Daniel,
>> 
>> thanks for your response.
>> 
>> Daniel Frey wrote:
>> 
>> [snip]
>> 
>> > I can only think of two reasons, the kernel on the livecd doesn't
>> > support GPT (which is unlikely)
>> 
>> That would be really strange. However, how can I prove it?
> 
> If after you boot your systemrescuecd you can list:
> 
> /sys/firmware/efi
> 
> you have booted into UEFI mode.  If not, you have booted into legacy BIOS
> mode.


This machine has only plain old BIOS. The question is why one kernel 
detects the 3TB partition and the other one does not. How can I prove 
GPT support for the kernel itself?

Cheers,
Jörg




[gentoo-user] Re: Re: Update blocked by kdebase-startkde:4

2016-07-11 Thread Jörg Schaible
Daniel Frey wrote:

> On 07/09/2016 07:08 PM, Peter Humphrey wrote:
>> Thanks Dan. I tried your package.mask and thought I was getting
>> somewhere. But I had to add these to package.use (I have USE=-qt5 in
>> make.conf):
>> 
>> sys-auth/polkit-qt  qt5
>> dev-libs/libdbusmenu-qt qt5
>> media-libs/phonon   qt5
>> media-libs/phonon-vlc   qt5
>> 
>> Then I had to remove >kde-apps/kdebase-runtime-meta-4.15 from
>> package.mask to satisfy "(dependency required by
>> "kde-base/kdebase-startkde-4.11.22::gentoo"
>> [ebuild])". Guess what? Of course - it wanted to install the whole set of
>> [qt5
>> packages.
>> 
>> So I'm still stuck. I don't want to go to KDE-5 until I can find a way to
>> reduce the absurd amount of vertical space occupied by every line of
>> text. It will still be ugly, but at least more manageable.
>> 
>> I've attached screen shots of qt4 and qt5 versions of KMail to show what
>> I mean. The qt5 version is as close as I can get to the qt4.
>> 
> 
> I just tried and no luck here either. I made that list back in April
> when plasma made my machine unusable (hence the "might not work now"
> comment.) It's been several months now but I really don't want to go and
> try plasma again and waste days trying to get my machine usable again.
> I'm just not going to do any updates (and I guess I should do a stage4
> backup in case I have to restore...)
> 
> Right now my machine is nice and stable. What I don't understand is that
> plasma clearly isn't ready for primetime yet (IMHO) yet it seems KDE4 is
> not installable on Gentoo anymore.

It seems so. Really embarrassing, however, is that the dependencies of even 
already installed packages have been changed under the hood:

= %< ===
$ diff -u `locate kactivities-4.13.3-r2.ebuild`
--- /var/db/pkg/kde-base/kactivities-4.13.3-r2/kactivities-4.13.3-r2.ebuild 
2016-03-09 17:26:34.581846384 +0100
+++ /var/db/portage/gentoo/kde-base/kactivities/kactivities-4.13.3-r2.ebuild
2016-07-08 22:21:51.0 +0200
@@ -10,11 +10,11 @@
 
 DESCRIPTION="KDE Activity Manager"
 
-KEYWORDS="amd64 ~arm ~ppc ~ppc64 x86 ~x86-fbsd ~amd64-linux ~x86-linux"
+KEYWORDS="amd64 ~arm x86 ~x86-fbsd ~amd64-linux ~x86-linux"
 IUSE=""
 
 RDEPEND="
-   || ( $(add_kdebase_dep kactivitymanagerd) [...]
= %< ===

> I did try many things trying to get plasma working but everything I
> tried had no results. Plasma would crash even if you didn't do anything
> (no keyboard or mouse input.)
> 
> Dan

Jörg





Re: [gentoo-user] Re: Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Neil Bothwick
On Thu, 12 Nov 2015 10:35:14 +0100, Jörg Schaible wrote:

> > Then use emerge --keep-going and portage will take care of skipping
> > failing merges for you.  
> 
> Ah, no, that's not an option. It breaks for a reason. Sometimes I can
> ignore that and look for it later and in this case I skip it, but
> normally I fix the problem first. However, you have to take care, which
> package you're actually skipping. Especially if the build order is
> different with resume.

--keep-going will emerge all unaffected packages, meaning you are then
working with a much smaller list when you try to fix the problem. At
least, that's the approach that normally works for me.

--keep-going is intelligent enough to skip any packages that depend on
the failed package. That means you often end up with a package list that
is a single branch dependency tree, so the order is unlikely to change.
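The --skip-first behaviour being debated can be modelled on a saved merge list. A toy sketch (the list file, the package names, and the `resume_skip_first` helper are all made up; it only illustrates why a stored, reused order makes skipping predictable):

```shell
# Toy model of "emerge --resume --skip-first": if resume reuses the stored
# merge order, dropping the first entry deterministically skips exactly the
# package that failed last time.
resume_skip_first() {
    tail -n +2 "$1"    # emit the saved list minus its first (failed) entry
}

printf 'kde-base/kdelibs\napp-misc/foo\nsys-apps/bar\n' > /tmp/mergelist
resume_skip_first /tmp/mergelist
```

If instead the order were recomputed on resume, the entry removed could be a different package each time, which is precisely the complaint in this thread.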


-- 
Neil Bothwick

Rainbows are just to look at, not to really understand.




[gentoo-user] Re: Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Jörg Schaible
Neil Bothwick wrote:

> On Thu, 12 Nov 2015 09:48:48 +0100, Jörg Schaible wrote:
> 
>> >> Hmmm. And how can you then ever use
>> >> 
>> >>   emerge --resume --skip-first
>> >> 
>> >> if not even the first build is deterministic? I skip the first
>> >> package anyway only if the problematic package is the first one to
>> >> build after resume, but if I cannot even rely on that?
>> > 
>> > 
>> > Because it re-uses the previous build order, not re-generate a new
>> > one.
>> 
>> That's simply not true. Emerge resume calculates the order again and
>> for me it starts often with a different package.
> 
> Then use emerge --keep-going and portage will take care of skipping
> failing merges for you.

Ah, no, that's not an option. It breaks for a reason. Sometimes I can ignore 
that and look for it later and in this case I skip it, but normally I fix 
the problem first. However, you have to take care, which package you're 
actually skipping. Especially if the build order is different with resume.

Cheers,
Jörg




[gentoo-user] Re: Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Jörg Schaible
Alan McKinnon wrote:

> On 12/11/2015 10:48, Jörg Schaible wrote:
>> Alan McKinnon wrote:
>> 
>>> On 12/11/2015 10:29, Jörg Schaible wrote:
 Alan McKinnon wrote:

[snip]

 Hmmm. And how can you then ever use

  emerge --resume --skip-first

 if not even the first build is deterministic? I skip the first package
 anyway only if the problematic package is the first one to build after
 resume, but if I cannot even rely on that?
>>>
>>>
>>> Because it re-uses the previous build order, not re-generate a new one.
>> 
>> That's simply not true. Emerge resume calculates the order again and for
>> me it starts often with a different package.
> 
> I've never noticed that. For me --skip-first has always skipped the
> correct first package (the one that previously failed).

That's what I always did originally too, until my build suddenly broke at 
the same package again and I had to notice that it had skipped a completely 
different one.

> As long as a known build failure is not in the --resume list, I don't
> care what the build order is because it is irrelevant. The only time it
> becomes relevant is when an ebuild has a bug such as a missing dep. But
> that's a bug in the ebuild and is fixed there.

Well, normally I don't care about the sequence either, except when skipping 
the first ;-)

Cheers,
Jörg





Re: [gentoo-user] Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Alan McKinnon
On 12/11/2015 10:48, Jörg Schaible wrote:
> Alan McKinnon wrote:
> 
>> On 12/11/2015 10:29, Jörg Schaible wrote:
>>> Alan McKinnon wrote:
>>>
 On 11/11/2015 21:35, Walter Dnes wrote:
>   Ongoing installation.  I looked at 2 instances of
> "emerge -pv x11-base/xorg-server" and the order was somewhat different.
> Here are a couple of outputs, just a few seconds apart.  Is this a bug
> or a feature?  See attachments.
>


 Emerge order is not deterministic, especially with parallel builds. The
 reason is that it does not need to be according to the dep graph - if
 two packages are at the same level and do not depend on each other, then
 the order they are built in does not affect the final result.
 Practically all parallel processing works this way.

 What is deterministic, is that if you build the same set of packages
 twice and even if portage does them in different order, the binaries
 produced are functionally identical
>>>
>>> Hmmm. And how can you then ever use
>>>
>>>   emerge --resume --skip-first
>>>
>>> if not even the first build is deterministic? I skip the first package
>>> anyway only if the problematic package is the first one to build after
>>> resume, but if I cannot even rely on that?
>>
>>
>> Because it re-uses the previous build order, not re-generate a new one.
> 
> That's simply not true. Emerge resume calculates the order again and for me 
> it starts often with a different package.

I've never noticed that. For me --skip-first has always skipped the
correct first package (the one that previously failed).

As long as a known build failure is not in the --resume list, I don't
care what the build order is because it is irrelevant. The only time it
becomes relevant is when an ebuild has a bug such as a missing dep. But
that's a bug in the ebuild and is fixed there.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Neil Bothwick
On Thu, 12 Nov 2015 09:48:48 +0100, Jörg Schaible wrote:

> >> Hmmm. And how can you then ever use
> >> 
> >>   emerge --resume --skip-first
> >> 
> >> if not even the first build is deterministic? I skip the first
> >> package anyway only if the problematic package is the first one to
> >> build after resume, but if I cannot even rely on that?  
> > 
> > 
> > Because it re-uses the previous build order, not re-generate a new
> > one.  
> 
> That's simply not true. Emerge resume calculates the order again and
> for me it starts often with a different package.

Then use emerge --keep-going and portage will take care of skipping
failing merges for you.


-- 
Neil Bothwick

Every morning is the dawn of a new error...




[gentoo-user] Re: Re: Emerge order not deterministic !?

2015-11-12 Thread Jörg Schaible
Alan McKinnon wrote:

> On 12/11/2015 10:29, Jörg Schaible wrote:
>> Alan McKinnon wrote:
>> 
>>> On 11/11/2015 21:35, Walter Dnes wrote:
   Ongoing installation.  I looked at 2 instances of
 "emerge -pv x11-base/xorg-server" and the order was somewhat different.
 Here are a couple of outputs, just a few seconds apart.  Is this a bug
 or a feature?  See attachments.

>>>
>>>
>>> Emerge order is not deterministic, especially with parallel builds. The
>>> reason is that it does not need to be according to the dep graph - if
>>> two packages are at the same level and do not depend on each other, then
>>> the order they are built in does not affect the final result.
>>> Practically all parallel processing works this way.
>>>
>>> What is deterministic, is that if you build the same set of packages
>>> twice and even if portage does them in different order, the binaries
>>> produced are functionally identical
>> 
>> Hmmm. And how can you then ever use
>> 
>>   emerge --resume --skip-first
>> 
>> if not even the first build is deterministic? I skip the first package
>> anyway only if the problematic package is the first one to build after
>> resume, but if I cannot even rely on that?
> 
> 
> Because it re-uses the previous build order, not re-generate a new one.

That's simply not true. Emerge resume calculates the order again and for me 
it starts often with a different package.

Cheers,
Jörg
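The dep-graph point quoted above can be illustrated with coreutils' tsort: the graph only fixes a partial order, so packages with no edge between them may legally come out in either order. A toy sketch with made-up package names:

```shell
# Each input line "A B" means A must be merged before B. tsort prints one
# valid total order; app1 and app2 have no edge between them, so either
# relative order satisfies the graph - which is why two emerge runs can
# differ while producing the same final system.
printf 'libA app1\nlibA app2\n' | tsort
```

libA is guaranteed to come first, but nothing in the graph pins app1 relative to app2.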




[gentoo-user] Re: Re: dev-qt/qtwebkit-5.4.0

2015-02-06 Thread Jörg Schaible
Stefan G. Weichinger wrote:

> On 05.02.2015 17:59, Michael Palimaka wrote:
>> On 04/02/15 08:07, Stefan G. Weichinger wrote:
>>> Am 03.02.2015 um 20:30 schrieb Jörg Schaible:
>>>
 Consider a memcheck. Arbitrary failures while the CPU is high is often
 because some component starts dying. Sometimes cleaning the fans work
 wonders.
>>>
>>> Good suggestion, will check tmrw and clean the fans as well.
>>>
>>> It gave "internal compiler error" afai remember.
>> 
>> How much free memory do you have, and are you building with debug
>> symbols? qtwebkit:5 is exceptionally hungry, and I've seen it hit by the
>> kernel OOM killer a lot.
> 
> I removed one of the four DIMMs with 4 GB each .. so I am now working
> with 12 GB of RAM.
> 
> When I re-emerge qtwebkit:5 I see 5-7 GB of *free* RAM ... and this with
> /var/tmp/portage as tmpfs (for the first minutes ... )
> 
> The removed DIMM threw errors ... ok, so on with "only" 12 gigs.

;-)

Good that you could locate it!

Cheers,
Jörg 




[gentoo-user] Re: Re: automated code validation

2014-12-07 Thread Sam Bishop
To catch up a bit: I wasn't subscribed to the mailing list with this email
address at the time I found this thread, so if anything sounds odd, read
through to the end. I'm top-replying, so I'm leaving my 'backstory' till the end.

> Rich Freeman  gentoo.org> writes:
>
> > James  tampabay.rr.com> writes:
> >
> > I bet our friends at RackSpace will provide all the virtual HorsePower
> > you need, should google not provide the  hundreds/thousands or cores for
> > you to run on.
> >
> My guess is that the hardware to run all this on is the simplest part
> of the problem.  If somebody writes something good and we build the
> processes to support it, then I'm sure we can find donors to provide
> the CPU cycles.  ChromeOS could probably steal whatever we put
> together so there is an obvious synergy there, and I'm sure
> Amazon/Rackspace/etc are always looking for use cases to show off.
>

I agree about 80% with that, well put. The disagreeing 20% is pretty much
all about physical hardware; this is where, to quote a Red Hat employee I
know, 'the work gets interesting'. For the bulk of the work we can easily
use virtual machines, scavenge bargain-basement EC2 spot instance hours,
and have lots of other options. You're right that will be easy; the
x86/amd64 arch testing won't be hard to find a home for. It's all the
other arch work that won't be easy. I'm currently in the process of
obtaining AMD Opteron A1100 dev kit boards, and let's just say I'm not
expecting our software to 'boot first time'. Red Hat kindly keep the
Beaker project (https://beaker-project.org) moving forward, which will be
how I deal with these AMD dev kits. It only really helps
when you have hardware you can put aside for being part of a test pool. But
it is one of the few tools available to easily boot hardware, splat an OS
onto it, connect and perform automated tests on it, get as much info out
as possible even if there are kernel issues and it doesn't boot properly.
Once things progress I'd be amenable to letting my dev boards do ebuild
test runs when I'm not using them to port our software stack.

So while I don't have the 'idle hardware budget' of AWS, Google or Rackspace,
I am however building a cloud platform, with a customised version of CoreOS,
which is based on ChromeOS, which is based on Gentoo.
And the further the company progresses, the more drift I see between
our 'OS' and 'CoreOS'. I could not imagine tackling this if the entire thing
wasn't built on top of the foundation of a Gentoo-based OS. So consider me
an ardent supporter of actually getting Gentoo automatic testing.

> Rich Freeman  gentoo.org> writes:
>
> From past feedback from Diego and such the biggest issue with any kind
> of tinderbox is dealing with the output.  As has been pointed out
> there are folks running Repoman regularly already, and there have been
> past tinderbox efforts.  The real issue is to improve the signal/noise
> ratio.  You'll only get so far with that using code - there has to be
> process change to support it.>
>
> If we were to do something like this long-term I'd envision that this
> would run on every commit or something like that, and the commit
> doesn't go into the rsync tree until it passes.  If the tree goes red
> then people stop doing commits until it is fixed, and somebody gets
> their wrist slapped.  That is my sense of how most mature
> organizations do CI.  The tinderbox is really just doing verification
> - stuff should already be tested BEFORE it goes in the tree.  There
> also shouldn't be any false positives.  There would need to be a
> mechanism to flag ebuilds with known issues so that the tinderbox can
> ignore them, and of course we can monitor that to ensure it isn't
> abused.
>
> Basically doing this sort of thing right requires a big change in
> mindset.  You can't just throw stuff at the tree and wait for the bug
> reports to come in.  You can't just make dealing with the tinderbox
> the problem of the poor guy running the tinderbox.  The tinderbox
> basically has to become the boss and everybody has to make part of
> their job keeping the boss happy.
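The commit-gating workflow described above can be sketched as a thin wrapper (purely illustrative: the `qa_gate` helper and its messages are made up, while `repoman full` is the real Gentoo QA command one would plug in):

```shell
# Gate a commit on a QA command: the commit only proceeds toward the rsync
# tree when the check exits 0, matching the "tree goes red" model above.
qa_gate() {
    # $@: the QA command to run, e.g.: qa_gate repoman full
    if "$@"; then
        echo "QA passed - commit may proceed"
    else
        echo "QA failed - tree is red, commit blocked" >&2
        return 1
    fi
}

# Illustration with a stand-in command:
qa_gate true
```

The same wrapper could ignore ebuilds flagged with known issues by filtering them out before the check, as suggested above.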

1st - Mindset change, definitely required. Can't agree more.
2nd - CI on this kind of thing is a multi-headed hydra of a thing. The
processes we wind up with will be quite similar philosophically but not
necessarily similar in implementation, staging or any other area.
For starters most CI pipelines aren't testing an entire distro as complex
as Gentoo ;-)
3rd - Looking at this linearly is less than ideal. If an update breaks
5 downstream packages, the entire tree shouldn't go red; the only
person who should stop is the maintainer and/or committer who submitted
the broken package. It should be more like automated QA 'gates' than a
pass/fail build pipeline.
4th - Signal to noise is crucial! I'm going to be actively doing
something here because I have a build process that is building ebuilds
using portage, and when a build can take 30 minutes then an hour to test
the final mach

Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-27 Thread Alexander Kapshuk
On 07/26/2014 11:25 PM, Dale wrote:
> Alexander Kapshuk wrote:
>> On 07/26/2014 03:31 PM, Holger Hoffstätte wrote:
>>> On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:
>>>
 Which NTPd package would the list recommend using, ntp, openntpd, or
 some other package?
>>> chrony - no competition, even for servers. ntpd is way overrated,
>>> unnecessarily hard to setup correctly, fragile and contrary to
>>> popular belief not even that accurate, unless you use external
>>> HW clocks. Chrony is maintained by Red Hat in cooperation with the
>>> timekeeping code in the kernel.
>>>
 openntpd seems to be easier to set up according to wiki.gentoo.org.
>>> Many many years ago I helped port openntpd to Linux. It was OK-ish at
>>> the time and easier/less hassle than ntpd, but the portable version for
>>> Linux stopped working reliably many years ago due to kernel changes.
>>> IMHO it really should no longer be in the tree since it gives a false
>>> sense of accuracy.
>>>
>>> just my 0.01€..
>>>
>>> -h
>>>
>>>
>> Is this gentoo wiki article still relevant when it comes to configuring
>> chrony on gentoo?
>> http://www.gentoo-wiki.info/Chrony
>>
>> Or should I stick to the instructions given here:
>> /usr/share/doc/chrony-1.29.1/chrony.txt.bz2
>>
>> Thanks.
>>
>>
>>
>
> This is my chrony.conf without all the commented out parts. 
>
> server  64.6.144.6
> server  67.159.5.90
> server  67.59.168.233
> server  204.62.14.98
>
> server  69.50.219.51
> server  209.114.111.1
>
> driftfile /etc/chrony.drift
>
> keyfile /etc/chrony/chrony.keys
>
> commandkey 1
>
> logdir /var/log/chrony
> log measurements statistics tracking rtc
>
>
> The last two lines are optional.  Use those if you like to be nosy and
> watch it do its thing.  I still have ntpdate installed and use it to
> check and see how close it is on occasion.  This is what I get from the
> test:
>
> root@fireball / # ntpdate -b -u -q pool.ntp.org
> server 198.144.194.12, stratum 2, offset -0.003320, delay 0.10658
> server 173.44.32.10, stratum 2, offset -0.003313, delay 0.07515
> server 70.60.65.40, stratum 2, offset -0.003059, delay 0.09262
> server 38.229.71.1, stratum 2, offset -0.001002, delay 0.09563
> 26 Jul 15:16:00 ntpdate[10232]: step time server 173.44.32.10 offset
> -0.003313 sec
> root@fireball / # 
>
> I did a fair sized upgrade the other day and went to the boot runlevel
> afterwards to restart the services that were updated.  I'm pretty sure
> it has been doing its thing since then without me doing anything to it. 
> I think you can use mirrorselect to find the best mirrors for your
> area.  I can't recall the command but I bet a search of the Gentoo
> forums would find it fairly quick. 
>
> Looking at the howto, the only thing I do different is put it in the
> default runlevel.  Unless I am in the default runlevel, there is no
> internet access available anyway.  No internet access, no way to set the
> clock anyway.  ;-)
>
> Hope that helps.
>
> Dale
>
> :-)  :-) 
>
Terrific. Thanks.




Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Neil Bothwick
On Sat, 26 Jul 2014 20:10:12 +0100, Peter Humphrey wrote:

> > Chrony is maintained by Red Hat in cooperation with the
> > timekeeping code in the kernel.  
> 
> I didn't know Red Hat had taken over its maintenance - thanks for the
> info.

So the stories about Red Hat trying to force everyone to use systemd and
its components aren't true after all?


-- 
Neil Bothwick

Bury a lawyer 12 feet under, because deep down they're nice.




Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Dale
Alexander Kapshuk wrote:
> On 07/26/2014 03:31 PM, Holger Hoffstätte wrote:
>> On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:
>>
>>> Which NTPd package would the list recommend using, ntp, openntpd, or
>>> some other package?
>> chrony - no competition, even for servers. ntpd is way overrated,
>> unnecessarily hard to setup correctly, fragile and contrary to
>> popular belief not even that accurate, unless you use external
>> HW clocks. Chrony is maintained by Red Hat in cooperation with the
>> timekeeping code in the kernel.
>>
>>> openntpd seems to be easier to set up according to wiki.gentoo.org.
>> Many many years ago I helped port openntpd to Linux. It was OK-ish at
>> the time and easier/less hassle than ntpd, but the portable version for
>> Linux stopped working reliably many years ago due to kernel changes.
>> IMHO it really should no longer be in the tree since it gives a false
>> sense of accuracy.
>>
>> just my 0.01€..
>>
>> -h
>>
>>
> Is this gentoo wiki article still relevant when it comes to configuring
> chrony on gentoo?
> http://www.gentoo-wiki.info/Chrony
>
> Or should I stick to the instructions given here:
> /usr/share/doc/chrony-1.29.1/chrony.txt.bz2
>
> Thanks.
>
>
>


This is my chrony.conf without all the commented out parts. 

server  64.6.144.6
server  67.159.5.90
server  67.59.168.233
server  204.62.14.98

server  69.50.219.51
server  209.114.111.1

driftfile /etc/chrony.drift

keyfile /etc/chrony/chrony.keys

commandkey 1

logdir /var/log/chrony
log measurements statistics tracking rtc


The last two lines are optional.  Use those if you like to be nosy and
watch it do its thing.  I still have ntpdate installed and use it to
check and see how close it is on occasion.  This is what I get from the
test:

root@fireball / # ntpdate -b -u -q pool.ntp.org
server 198.144.194.12, stratum 2, offset -0.003320, delay 0.10658
server 173.44.32.10, stratum 2, offset -0.003313, delay 0.07515
server 70.60.65.40, stratum 2, offset -0.003059, delay 0.09262
server 38.229.71.1, stratum 2, offset -0.001002, delay 0.09563
26 Jul 15:16:00 ntpdate[10232]: step time server 173.44.32.10 offset
-0.003313 sec
root@fireball / # 
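As a side note, the lowest-delay server in such 'ntpdate -q' output can be picked mechanically. A small awk sketch over lines like the ones above (the `best_server` helper is made up for illustration):

```shell
# Pick the server with the smallest delay from "ntpdate -q"-style output.
best_server() {
    awk '/^server/ {
        s = $2; sub(/,$/, "", s)        # server address, trailing comma stripped
        d = $NF + 0                      # delay is the last field
        if (best == "" || d < min) { min = d; best = s }
    } END { print best }'
}

best_server <<'EOF'
server 198.144.194.12, stratum 2, offset -0.003320, delay 0.10658
server 173.44.32.10, stratum 2, offset -0.003313, delay 0.07515
server 70.60.65.40, stratum 2, offset -0.003059, delay 0.09262
EOF
```

For the sample above it picks 173.44.32.10, the 0.07515 s delay entry.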

I did a fair sized upgrade the other day and went to the boot runlevel
afterwards to restart the services that were updated.  I'm pretty sure
it has been doing its thing since then without me doing anything to it. 
I think you can use mirrorselect to find the best mirrors for your
area.  I can't recall the command but I bet a search of the Gentoo
forums would find it fairly quick. 

Looking at the howto, the only thing I do different is put it in the
default runlevel.  Unless I am in the default runlevel, there is no
internet access available anyway.  No internet access, no way to set the
clock anyway.  ;-)

Hope that helps.

Dale

:-)  :-) 



Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Alexander Kapshuk
On 07/26/2014 09:38 PM, Holger Hoffstätte wrote:
> On Sat, 26 Jul 2014 21:14:04 +0300, Alexander Kapshuk wrote:
>
>> Is this gentoo wiki article still relevant when it comes to configuring
>> chrony on gentoo?
>> http://www.gentoo-wiki.info/Chrony
>>
>> Or should I stick to the instructions given here:
>> /usr/share/doc/chrony-1.29.1/chrony.txt.bz2
> The wiki article is from 2008 and doesn't seem "too wrong",
> but the current ebuilds are a bit more up to date wrt.
> default config and init script. The current template config also
> contains very detailed instructions and is probably the best way
> to get started. How much you need to set up depends on your specific
> use case - pure client, steady/interrupted connectivity, server for
> other machines on the LAN..
>
> If you only want to be a client just add one or multiple servers
> to the config and you are good to go; chrony works well pretty much
> out of the box.
>
> -h
>
>
Understood. Thanks.

For the time being, I just want to be a client. These are the options
I've got enabled in the config:
grep '^[a-z][a-z]*' /etc/chrony/chrony.conf
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
maxupdateskew 5
driftfile /var/lib/chrony/drift
keyfile /etc/chrony/chrony.keys
commandkey 1
logdir /var/log/chrony
log measurements statistics tracking




Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Peter Humphrey
On Saturday 26 July 2014 12:31:55 Holger Hoffstätte wrote:
> On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:
> > Which NTPd package would the list recommend using, ntp, openntpd, or
> > some other package?
> 
> chrony - no competition, even for servers. ntpd is way overrated,
> unnecessarily hard to setup correctly, fragile and contrary to
> popular belief not even that accurate, unless you use external
> HW clocks. Chrony is maintained by Red Hat in cooperation with the
> timekeeping code in the kernel.

I too have been using chrony since before I can remember, when ntpd could only 
step the clock. Chrony just works - I haven't even bothered to look round for 
an alternative. As the docs say (somewhere or other), if you run any kind of 
mail service, you certainly don't want your clock to step backwards suddenly.

I didn't know Red Hat had taken over its maintenance - thanks for the info.

-- 
Regards
Peter




[gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Holger Hoffstätte
On Sat, 26 Jul 2014 21:14:04 +0300, Alexander Kapshuk wrote:

> Is this gentoo wiki article still relevant when it comes to configuring
> chrony on gentoo?
> http://www.gentoo-wiki.info/Chrony
> 
> Or should I stick to the instructions given here:
> /usr/share/doc/chrony-1.29.1/chrony.txt.bz2

The wiki article is from 2008 and doesn't seem "too wrong",
but the current ebuilds are a bit more up to date wrt.
default config and init script. The current template config also
contains very detailed instructions and is probably the best way
to get started. How much you need to set up depends on your specific
use case - pure client, steady/interrupted connectivity, server for
other machines on the LAN..

If you only want to be a client just add one or multiple servers
to the config and you are good to go; chrony works well pretty much
out of the box.

-h




Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Alexander Kapshuk
On 07/26/2014 03:31 PM, Holger Hoffstätte wrote:
> On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:
>
>> Which NTPd package would the list recommend using, ntp, openntpd, or
>> some other package?
> chrony - no competition, even for servers. ntpd is way overrated,
> unnecessarily hard to setup correctly, fragile and contrary to
> popular belief not even that accurate, unless you use external
> HW clocks. Chrony is maintained by Red Hat in cooperation with the
> timekeeping code in the kernel.
>
>> openntpd seems to be easier to set up according to wiki.gentoo.org.
> Many many years ago I helped port openntpd to Linux. It was OK-ish at
> the time and easier/less hassle than ntpd, but the portable version for
> Linux stopped working reliably many years ago due to kernel changes.
> IMHO it really should no longer be in the tree since it gives a false
> sense of accuracy.
>
> just my 0.01€..
>
> -h
>
>
Is this gentoo wiki article still relevant when it comes to configuring
chrony on gentoo?
http://www.gentoo-wiki.info/Chrony

Or should I stick to the instructions given here:
/usr/share/doc/chrony-1.29.1/chrony.txt.bz2

Thanks.




Re: [gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Alexander Kapshuk
On 07/26/2014 03:31 PM, Holger Hoffstätte wrote:
> On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:
>
>> Which NTPd package would the list recommend using, ntp, openntpd, or
>> some other package?
> chrony - no competition, even for servers. ntpd is way overrated,
> unnecessarily hard to setup correctly, fragile and contrary to
> popular belief not even that accurate, unless you use external
> HW clocks. Chrony is maintained by Red Hat in cooperation with the
> timekeeping code in the kernel.
>
>> openntpd seems to be easier to set up according to wiki.gentoo.org.
> Many many years ago I helped port openntpd to Linux. It was OK-ish at
> the time and easier/less hassle than ntpd, but the portable version for
> Linux stopped working reliably many years ago due to kernel changes.
> IMHO it really should no longer be in the tree since it gives a false
> sense of accuracy.
>
> just my 0.01€..
>
> -h
>
>
Thanks. That sounds interesting.




[gentoo-user] Re: re: which NTPd package to use?

2014-07-26 Thread Holger Hoffstätte
On Sat, 26 Jul 2014 15:05:23 +0300, Alexander Kapshuk wrote:

> Which NTPd package would the list recommend using, ntp, openntpd, or
> some other package?

chrony - no competition, even for servers. ntpd is way overrated,
unnecessarily hard to setup correctly, fragile and contrary to
popular belief not even that accurate, unless you use external
HW clocks. Chrony is maintained by Red Hat in cooperation with the
timekeeping code in the kernel.

> openntpd seems to be easier to set up according to wiki.gentoo.org.

Many many years ago I helped port openntpd to Linux. It was OK-ish at
the time and easier/less hassle than ntpd, but the portable version for
Linux stopped working reliably many years ago due to kernel changes.
IMHO it really should no longer be in the tree since it gives a false
sense of accuracy.

just my 0.01€..

-h




Re: [gentoo-user] Re: Re: [gentoo-user] kernel bug?

2014-07-16 Thread Gmail

I started using genkernel-next with the upgrade to GNOME 3.12 with systemd.

I must repeat: with kernel 3.12.13 there is no problem; with 3.12.2x kernels
the system hangs while loading the ramdisk.


I see many discussions of this problem (many still without a solution),
but nothing that solves it.


Gentoo Bugzilla asked me for a dmesg showing the problem's details,
but when the system freezes it doesn't produce any dmesg output.



On 16/07/2014 05:20, taozhijiang wrote:
Yes, genkernel-next should be used. Look at the "install Gentoo GNOME
with systemd from scratch" guide (sorry, I currently cannot access the
Internet so cannot provide the link).
I have tested genkernel-next with systemd (needed by GNOME 3.12); all
seems OK with kernel version 3.15.
But now I am using KDE 4.13 with OpenRC on ZFS. systemd sometimes
makes things strange, so I switched to KDE; all seems well currently.
;-)
2014-07-16

Thanks & Best Regards.
陶治江 | TAO Zhijiang
R&D Dept. | SOHO International Product Line
Tel: 3129
Mobile: 18938910923
Email: taozhijiang@tp-link.{net, com.cn}

*From:* Jc_García
*Sent:* 2014-07-16  05:26:08
*To:* gentoo-user
*Cc:*
*Subject:* Re: [gentoo-user] kernel bug?
2014-07-15 9:38 GMT-06:00 Gmail :
> My /usr partition is on the / partition.
>
> I just use an initrd; I've compiled the kernel with genkernel.
>
> I'm trying to compare row by row to see if there's some diff.
>
>
Are you also using genkernel to generate the initramfs? Booting
systemd with that is not supported by genkernel (that's pointed out in
the systemd installation guide in the wiki); you should be using either
sys-kernel/genkernel-next or sys-kernel/dracut (the latter has been the
most widely recommended on this list).




[gentoo-user] Re: Re: [gentoo-user] kernel bug?

2014-07-15 Thread taozhijiang
Yes, genkernel-next should be used. Look at the "install Gentoo GNOME with
systemd from scratch" guide (sorry, I currently cannot access the Internet
so cannot provide the link).
I have tested genkernel-next with systemd (needed by GNOME 3.12); all seems
OK with kernel version 3.15.
But now I am using KDE 4.13 with OpenRC on ZFS. systemd sometimes makes
things strange, so I switched to KDE; all seems well currently.
 
;-)

2014-07-16 



Thanks & Best Regards.

陶治江 | TAO Zhijiang
R&D Dept. | SOHO International Product Line
Tel: 3129
Mobile: 18938910923
Email:   taozhijiang@tp-link.{net, com.cn}





From: Jc_García 
Sent: 2014-07-16  05:26:08 
To: gentoo-user 
Cc: 
Subject: Re: [gentoo-user] kernel bug? 
 
2014-07-15 9:38 GMT-06:00 Gmail :
> My /usr partition is on the / partition.
>
> I just use an initrd; I've compiled the kernel with genkernel.
>
> I'm trying to compare row by row to see if there's some diff.
>
>
Are you also using genkernel to generate the initramfs? Booting
systemd with that is not supported by genkernel (that's pointed out in
the systemd installation guide in the wiki); you should be using either
sys-kernel/genkernel-next or sys-kernel/dracut (the latter has been the
most widely recommended on this list).


[gentoo-user] Re: Re: [gentoo-dev] About DELL ALPS touchpad

2014-07-03 Thread taozhijiang
It definitely was set to y:
CONFIG_MOUSE_PS2_ALPS=y


The touchpad works, but only basically.
I want it fully featured, with multi-touch and scrolling.

2014-07-04 



Thanks & Best Regards.

陶治江 | TAO Zhijiang
R&D Dept. | SOHO International Product Line





From: Chí-Thanh Christopher Nguyễn 
Sent: 2014-07-03  17:19:31 
To: gentoo-dev; gentoo-user 
Cc: 
Subject: Re: [gentoo-dev] About DELL ALPS touchpad 
 
taozhijiang wrote:
> Hello, everyone
>  
> I am using a DELL Latitude laptop, which comes with an ALPS touchpad.
>  
> When I installed the driver in Windows, this touchpad supported
> multi-touch very well.
> I am now using the latest Gentoo with the KDE desktop environment, and I
> also want to
> enjoy the multi-touch functions, but this is not supported.
Check your kernel configuration that CONFIG_MOUSE_PS2_ALPS is enabled.
Best regards,
Chí-Thanh Christopher Nguyễn


Re: [gentoo-user] Re: Re: Re: OT: Mapping random numbers (PRNG)

2014-06-30 Thread Matti Nykyri
On Sun, Jun 29, 2014 at 02:38:51PM +0200, Kai Krakow wrote:
Matti Nykyri wrote:
> 
> > That is why the possibility for 0 and 1 (after modulo 62) is twice as
> > large compared to all other values (2-61).
> 
> Ah, now I get it.
> 
> > By definition random means that the probability for every value should be
> > the same. So if you have 62 options and even distribution of probability
> > the probability for each of them is 1/62.
> 
> Still, the increased probability for single elements should hit different 
> elements each time. So for large sets it will distribute - however, I now 
> get why it's not completely random by definition.

Usually when you need random data, the quality needs to be good: keys,
passwords, etc. For example, if an attacker knows that your random number
generator produces the same or the next index with double probability, he
will most likely crack each character in half the tries. So for each
character in your password the search time is cut in half, and an
8-character password becomes 2^8 times easier to break than one built from
truly random data. This is just an example, though.

> > Try counting how often new_index = index and new_index = (index + 1) %
> > 62 and new_index = (index + 2) % 62. With your algorithm the last one
> > should be significantly less than the first two in a large sample.
> 
> I will try that. It looks like a good approach.

Ok. I wrote a little library that takes random data and mathematically 
accurately splits it into the wanted range. It is attached to this mail. You 
only need to specify the random source and the maximum number you wish 
to see in your set, so with 5 you get everything from 0 to 5 (6 elements 
in total). The library takes care of buffering, and most importantly keeps 
the probabilities equal :)
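
The core trick such a library relies on is the rejection sampling Matti described earlier in the thread; it can be sketched in a few lines (a simplified illustration, not the attached library's actual code — `demo_src` is a stand-in for a real random source):

```c
/* Draw 6 random bits; the values 62 and 63 are rejected and redrawn,
 * so every index 0..61 keeps an equal 1/62 probability. */
int uniform62(unsigned (*get6bits)(void)) {
    for (;;) {
        unsigned v = get6bits() & 0x3f;   /* 6 bits: 0..63 */
        if (v < 62)
            return (int)v;                /* accept */
        /* 62 or 63: discard and try again */
    }
}

/* Deterministic stand-in source, for demonstration only. */
static unsigned demo_state;
static unsigned demo_src(void) { return demo_state++ % 64; }
```

For example, starting `demo_state` at 62 makes the first two draws (62, 63) get rejected and the third draw (0) accepted.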

-- 
-Matti
VERSION=v0.1

prefix=/usr/local

CC=$(CROSS_COMPILE)g++
LD=$(CROSS_COMPILE)ld

SYS=posix

DEF=-DRNG_VERSION=\"$(VERSION)\"
OPT=-O2
XCFLAGS=-fPIC -DPIC -march=nocona
#XCFLAGS=-fPIC -DPIC -DDEBUG -march=nocona
XLDFLAGS=$(XCFLAGS) -Wl,--as-needed -Wl,-O1 -Wl,-soname=librng.so
CPPFLAGS=-Wall -std=gnu++98 $(XCFLAGS) $(INC) $(DEF) $(OPT)
LDFLAGS=-Wall -shared $(XLDFLAGS)
TESTLDFLAGS=-Wall
#TESTLDFLAGS=-Wall -lrng

bindir=$(prefix)/bin
libdir=$(prefix)/lib

BINDIR=$(DESTDIR)$(bindir)
LIBDIR=$(DESTDIR)$(libdir)

SLIBS=$(LIBS)

EXT=$(EXT_$(SYS))

LIBS=librng.so

all: $(LIBS) rng

install: $(LIBS)
	-mkdir -p $(BINDIR) $(LIBDIR)
	cp rng$(EXT) $(BINDIR)

clean:
	rm -f *.o *.so rng$(EXT)

rng: rng.o
	$(CC) $(TESTLDFLAGS) -o $@$(EXT) $@.o librng.o
rng.o: rng.cpp

librng.so: librng.o
	$(CC) $(LDFLAGS) -o $@$(EXT) librng.o
librng.o: librng.cpp
//#define BUFFER_SIZE 4096
//64 bits is 8 bytes: number of uint64_t in buffer
//#define NUM_SETS (4096 / 8)
//#define NUM_BITS 64
#include 

struct BinaryData {
  uint64_t data;
  int8_t bits;
};

class BitContainer {
public:
  BitContainer();
  ~BitContainer();
  
  bool has(int8_t bits);
  uint64_t get(int8_t bits);
  int8_t set(uint64_t data, int8_t bits);
  void fill(uint64_t *data);
  
  static void cpy(struct BinaryData *dest, struct BinaryData *src, int8_t bits);

private:
  void xfer();
  static void added(int8_t &stored, int8_t bits);

  struct BinaryData pri;
  struct BinaryData sec;
};

class Rng {
public:
  Rng(char* device, uint64_t max);
  ~Rng();
  
  const uint64_t setMax(const uint64_t max);
  uint64_t getMax();
  int setDevice(const char* device);
  
  uint64_t getRnd();  

  static uint64_t getMask(int8_t bits);
  static int8_t calculateBits(uint64_t level);
  
private:
  void fillBuffer();
  void readBuffer();
  
  void getBits(uint64_t *data, int8_t *avail, uint64_t *out);
  void saveBits(uint64_t save);
  void processBits(uint64_t max, uint64_t level, uint64_t data);
  
  void error(const char* str);

  int iRndFD;
  size_t lCursor;
  size_t lBuffer;
  uint64_t* pStart;
  uint64_t* pNext;
  uint64_t* pEnd;
  
  BitContainer sRnd;

  uint64_t lMax;
  uint64_t lOutMask;
  int8_t cOutBits;
};
#include 
#include 
#include 
#include "librng.h"

#ifdef DEBUG
 #include 
 #include 
 long* results = 0;
 long* results2 = 0;
 unsigned long dMax = 0;
 int pushed[64];
 long readData = 0;
 long readBuff = 0;
 long readBits = 0;
 long validBits = 0;
 long bitsPushed = 0;
 long readExtra = 0;
 int bits = 0;
 
 unsigned long totalBits = 0;
 unsigned long used = 0;
 unsigned long wasted = 0;
 
 unsigned long power(int exp) {
   unsigned long x = 1;
   
   for (int i = 0; i < exp; i++)
 x *= 2;
   
   return x;
 }
 
 void dump_results() {
   fprintf(stderr, "Rounds for each number:\n");
   for (unsigned long i = 0; i < dMax; i++)
 fprintf(stderr, "%li = %li\t", i, results[i]);
   fprintf(stderr, "\n");
   
   fprintf(stderr, "Rounds for each initial number:\n");
   for (unsigned long i = 0; i < power(bits); i++)
 fprintf(stderr, "%li = %li\t", i, results2[i]);
   fprintf(stderr, "\n");
   
   fprintf(stderr, "Rounds for extra bits: total pushed: \t%li\n", bitsPushed);
   for (in

[gentoo-user] Re: Re: Re: OT: Mapping random numbers (PRNG)

2014-06-29 Thread Kai Krakow
Matti Nykyri wrote:

> On Jun 29, 2014, at 0:28, Kai Krakow  wrote:
>> 
>> Matti Nykyri wrote:
>> 
 On Jun 27, 2014, at 0:00, Kai Krakow  wrote:
 
 Matti Nykyri wrote:
 
> If you are looking a mathematically perfect solution there is a simple
> one even if your list is not in the power of 2! Take 6 bits at a time
> of the random data. If the result is 62 or 63 you will discard the
> data and get the next 6 bits. This selectively modifies the random
> data but keeps the probabilities in correct balance. Now the
> probability for index of 0-61 is 1/62 because the probability to get
> 62-63 out of 64 is 0.
 
 Why not do just something like this?
 
 index = 0;
 while (true) {
 index = (index + get_6bit_random()) % 62;
 output << char_array[index];
 }
 
 Done, no bits wasted. Should have perfect distribution also. We also
 don't have to throw away random data just to stay within unaligned
 boundaries. The unalignment is being taken over into the next loop so
 the "error" corrects itself over time (it becomes distributed over the
 whole set).
>>> 
>>> Distribution will not be perfect. The same original problem persists.
>>> Probability for index 0 to 1 will be 2/64 and for 2 to 61 it will be
>>> 1/64. Now the addition changes this so that index 0 to 1 reflects to
>>> previous character and not the original index.
>>> 
>>> The distribution of like 10GB of data should be quite even but not on a
>>> small scale. The next char will depend on the previous char. It is 100% more
>>> likely that the next char is the same or one index above the previous
>>> char than any of the other ones in the series. So it is likely that you
>>> will have long sets of the same character.
>> 
>> I cannot follow your reasoning here - but I'd like to learn. Actually, I
>> ran this multiple times and never saw long sets of the same character,
>> even no short sets of the same character. The 0 or 1 is always rolled
>> over into the next random addition. I would only get sets of the same
>> character if rand() returned zero multiple times after each other - which
>> wouldn't be really random. ;-)
> 
> In your example that isn't true. You will get the same character if 6bit
> random number is 0 or if it is 62! This is what makes the flaw!
> 
> You will also get the next character if random number is 1 or 63.
> 
> That is why the possibility for 0 and 1 (after modulo 62) is twice as
> large compared to all other values (2-61).

Ah, now I get it.

> By definition random means that the probability for every value should be
> the same. So if you have 62 options and even distribution of probability
> the probability for each of them is 1/62.

Still, the increased probability for single elements should hit different 
elements each time. So for large sets it will distribute - however, I now 
get why it's not completely random by definition.

>> In my tests I counted how often new_index > index and new_index < index,
>> and it had a clear bias for the first. So I added swapping of the
>> selected index with offset=0 in the set. Now the characters will be
>> swapped and start to distribute that flaw. The distribution, however,
>> didn't change.
> 
> Try counting how often new_index = index and new_index = (index + 1) %
> 62 and new_index = (index + 2) % 62. With your algorithm the last one
> should be significantly less than the first two in a large sample.

I will try that. It looks like a good approach.

-- 
Replies to list only preferred.




Re: [gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-28 Thread Matti Nykyri
On Jun 29, 2014, at 0:28, Kai Krakow  wrote:
> 
> Matti Nykyri wrote:
> 
>>> On Jun 27, 2014, at 0:00, Kai Krakow  wrote:
>>> 
>>> Matti Nykyri wrote:
>>> 
 If you are looking a mathematically perfect solution there is a simple
 one even if your list is not in the power of 2! Take 6 bits at a time of
 the random data. If the result is 62 or 63 you will discard the data and
 get the next 6 bits. This selectively modifies the random data but keeps
 the probabilities in correct balance. Now the probability for index of
 0-61 is 1/62 because the probability to get 62-63 out of 64 is 0.
>>> 
>>> Why not do just something like this?
>>> 
>>> index = 0;
>>> while (true) {
>>> index = (index + get_6bit_random()) % 62;
>>> output << char_array[index];
>>> }
>>> 
>>> Done, no bits wasted. Should have perfect distribution also. We also
>>> don't have to throw away random data just to stay within unaligned
>>> boundaries. The unalignment is being taken over into the next loop so the
>>> "error" corrects itself over time (it becomes distributed over the whole
>>> set).
>> 
>> Distribution will not be perfect. The same original problem persists.
>> Probability for index 0 to 1 will be 2/64 and for 2 to 61 it will be 1/64.
>> Now the addition changes this so that index 0 to 1 reflects to previous
>> character and not the original index.
>> 
>> The distribution of like 10GB of data should be quite even but not on a
>> small scale. The next char will depend on the previous char. It is 100% more
>> likely that the next char is the same or one index above the previous char
>> than any of the other ones in the series. So it is likely that you will
>> have long sets of the same character.
> 
> I cannot follow your reasoning here - but I'd like to learn. Actually, I ran 
> this multiple times and never saw long sets of the same character, even no 
> short sets of the same character. The 0 or 1 is always rolled over into the 
> next random addition. I would only get sets of the same character if rand() 
> returned zero multiple times after each other - which wouldn't be really 
> random. ;-)

In your example that isn't true. You will get the same character if 6bit random 
number is 0 or if it is 62! This is what makes the flaw!

You will also get the next character if random number is 1 or 63.

That is why the possibility for 0 and 1 (after modulo 62) is twice as large 
compared to all other values (2-61).

By definition random means that the probability for every value should be the 
same. So if you have 62 options and even distribution of probability the 
probability for each of them is 1/62. 
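
The double weighting is easy to verify exhaustively: reduce every 6-bit value modulo 62 and count how often each index comes up (a standalone check, not code from the thread):

```c
/* Count how many of the 64 possible 6-bit values map to a given index
 * when reduced modulo 62.  Indices 0 and 1 are hit twice (by 0/62 and
 * 1/63 respectively), indices 2..61 exactly once. */
int modulo62_hits(int index) {
    int count = 0;
    for (int v = 0; v < 64; v++)
        if (v % 62 == index)
            count++;
    return count;
}
```

So indices 0 and 1 carry probability 2/64 while the rest carry 1/64, exactly the imbalance described above.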

> Keep in mind: The last index will be reused whenever you'd enter the 
> function - it won't reset to zero. But still that primitive implementation 
> had a flaw: It will tend to select characters beyond the current offset, if 
> it is >= 1/2 into the complete set, otherwise it will prefer selecting 
> characters before the offset.

If you modify the sequence so that it looks random, it is pseudo-random. 

> In my tests I counted how often new_index > index and new_index < index, and 
> it had a clear bias for the first. So I added swapping of the selected index 
> with offset=0 in the set. Now the characters will be swapped and start to 
> distribute that flaw. The distribution, however, didn't change.

Try counting how often new_index = index and new_index = (index + 1) % 62 
and new_index = (index + 2) % 62. With your algorithm the last one should be 
significantly less than the first two in a large sample.
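
That count can be run directly against the accumulator scheme under discussion (a sketch; the 6-bit source here is a small LCG with a fixed seed, chosen only so the run is deterministic, not a quality RNG):

```c
/* Simulate index = (index + r) % 62 with r a 6-bit draw, and count how
 * often the new index lands exactly `offset` positions past the old one.
 * Offsets 0 and 1 should come up roughly twice as often as offset 2,
 * reflecting the 2/64 vs 1/64 bias. */
static unsigned lcg_state;

static unsigned draw6(void) {
    lcg_state = lcg_state * 1103515245u + 12345u;  /* classic LCG step */
    return (lcg_state >> 16) & 0x3f;               /* take 6 mid bits */
}

long count_offset_hits(long samples, int offset) {
    lcg_state = 1;                 /* fixed seed: run is deterministic */
    long hits = 0;
    int index = 0;
    for (long i = 0; i < samples; i++) {
        int next = (index + (int)draw6()) % 62;
        if (next == (index + offset) % 62)
            hits++;
        index = next;
    }
    return hits;
}
```

Over 100000 samples, offsets 0 and 1 should each land near 100000 * 2/64 ≈ 3125 hits while offset 2 stays near 100000 / 64 ≈ 1562.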

> Of course I'm no mathematician, I don't know how I'd calculate the 
> probabilities for my implementation because it is sort of a recursive 
> function (for get_rand()) when looking at it over time:
> 
> int get_rand() {
>  static int index = 0;
>  return (index = (index + get_6bit_rand()) % 62);
> }
> 
> char get_char() {
>  int index = get_rand();
>  char tmp = chars[index];
>  chars[index] = chars[0];
>  return (chars[0] = tmp);
> }
> 
> However, get_char() should return evenly distributed results.
> 
> What this shows, is, that while distribution is even among the result set, 
> the implementation may still be flawed because results could be predictable 
> for a subset of results. Or in other words: Simply looking at the 
> distribution of results is not an indicator for randomness. I could change 
> get_rand() in the following way:
> 
> int get_rand() {
>  static int index = 0;
>  return (index = (index + 1) % 62);
> }
> 
> Results would be distributed even, but clearly it is not random.
> 
> -- 
> Replies to list only preferred.
> 
> 



Re: [gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-28 Thread Canek Peláez Valdés
On Sat, Jun 28, 2014 at 7:37 PM,   wrote:
> On Sat, Jun 28 2014, Canek Peláez Valdés wrote:
>
>> That doesn't matter. Take a non-negative integer N; if you flip a coin
>> an infinite number of times, then the probability of the coin landing
>> on the same face N times in a row is 1.
>
> This is certainly true.
>
>> This means that it is *guaranteed* to happen
>
> That is not as clear.

Let me be more precise (and please correct me if I'm wrong): It is
guaranteed to happen at some point in the infinite sequence of random
coin flips, but we cannot know when it will happen, only that it will
happen.

That's the way I got it when I took my probability courses, admittedly
many years ago.

In any case, even if I'm wrong and it is not guaranteed, the main point
remains true: the probability of getting a long run of the same number
from a truly random RNG is 1, and therefore seeing a long run of the
same number from a RNG doesn't (technically) mean that it is broken.

Regards.
-- 
Canek Peláez Valdés
Profesor de asignatura, Facultad de Ciencias
Universidad Nacional Autónoma de México



Re: [gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-28 Thread gottlieb
On Sat, Jun 28 2014, Canek Peláez Valdés wrote:

> That doesn't matter. Take a non-negative integer N; if you flip a coin
> an infinite number of times, then the probability of the coin landing
> on the same face N times in a row is 1.

This is certainly true.

> This means that it is *guaranteed* to happen

That is not as clear.  Prob = 1 does not always mean certain (when there
are infinite possibilities).  For example, the probability is zero that a
random real number chosen uniformly between 0 and 1 is exactly 1/2.  So the
probability is 1 that the number is not 1/2.  However it is not certain
that the random choice will not be 1/2.

allan



Re: [gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-28 Thread Canek Peláez Valdés
On Sat, Jun 28, 2014 at 4:28 PM, Kai Krakow  wrote:
[ ... ]
> I cannot follow your reasoning here - but I'd like to learn. Actually, I ran
> this multiple times and never saw long sets of the same character, even no
> short sets of the same character. The 0 or 1 is always rolled over into the
> next random addition.

That doesn't matter. Take a non-negative integer N; if you flip a coin
an infinite number of times, then the probability of the coin landing
on the same face N times in a row is 1. This means that it is
*guaranteed* to happen, and it *will* happen for any N you want:
1,000,000, a thousand billions, a gazillion. That is a mathematical
fact.
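
A sketch of why this is guaranteed (in the "probability 1" sense), via the second Borel–Cantelli lemma, with N fixed and the flips fair and independent:

```latex
% Let A_k be the event that flips kN+1, ..., (k+1)N all show the same
% chosen face.  The A_k are independent, each with probability 2^{-N}, so
\sum_{k=0}^{\infty} P(A_k) \;=\; \sum_{k=0}^{\infty} 2^{-N} \;=\; \infty,
% and the second Borel--Cantelli lemma then gives
P\bigl(A_k \text{ occurs for infinitely many } k\bigr) \;=\; 1.
```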

This of course is a consequence of "infinite" being really really
large, but it means that technically you cannot rule a RNG as broken
only because you saw that it produced the same result N times, which
is the crux of the Dilbert joke.

In practice, of course, it's a big sign that something is wrong. But
there is a non-zero probability that it's actually correct.

Because with randomness, you can never be sure.

Regards.
-- 
Canek Peláez Valdés
Profesor de asignatura, Facultad de Ciencias
Universidad Nacional Autónoma de México



[gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-28 Thread Kai Krakow
Matti Nykyri wrote:

>> On Jun 27, 2014, at 0:00, Kai Krakow  wrote:
>> 
>> Matti Nykyri wrote:
>> 
>>> If you are looking a mathematically perfect solution there is a simple
>>> one even if your list is not in the power of 2! Take 6 bits at a time of
>>> the random data. If the result is 62 or 63 you will discard the data and
>>> get the next 6 bits. This selectively modifies the random data but keeps
>>> the probabilities in correct balance. Now the probability for index of
>>> 0-61 is 1/62 because the probability to get 62-63 out of 64 is 0.
>> 
>> Why not do just something like this?
>> 
>> index = 0;
>> while (true) {
>>  index = (index + get_6bit_random()) % 62;
>>  output << char_array[index];
>> }
>> 
>> Done, no bits wasted. Should have perfect distribution also. We also
>> don't have to throw away random data just to stay within unaligned
>> boundaries. The unalignment is being taken over into the next loop so the
>> "error" corrects itself over time (it becomes distributed over the whole
>> set).
> 
> Distribution will not be perfect. The same original problem persists.
> Probability for index 0 to 1 will be 2/64 and for 2 to 61 it will be 1/64.
> Now the addition changes this so that index 0 to 1 reflects to previous
> character and not the original index.
> 
> The distribution of like 10GB of data should be quite even but not on a
> small scale. The next char will depend on the previous char. It is 100% more
> likely that the next char is the same or one index above the previous char
> than any of the other ones in the series. So it is likely that you will
> have long sets of the same character.

I cannot follow your reasoning here - but I'd like to learn. Actually, I ran 
this multiple times and never saw long sets of the same character, even no 
short sets of the same character. The 0 or 1 is always rolled over into the 
next random addition. I would only get sets of the same character if rand() 
returned zero multiple times after each other - which wouldn't be really 
random. ;-)

Keep in mind: The last index will be reused whenever you'd enter the 
function - it won't reset to zero. But still that primitive implementation 
had a flaw: It will tend to select characters beyond the current offset, if 
it is >= 1/2 into the complete set, otherwise it will prefer selecting 
characters before the offset.

In my tests I counted how often new_index > index and new_index < index, and 
it had a clear bias for the first. So I added swapping of the selected index 
with offset=0 in the set. Now the characters will be swapped and start to 
distribute that flaw. The distribution, however, didn't change.

Of course I'm no mathematician, I don't know how I'd calculate the 
probabilities for my implementation because it is sort of a recursive 
function (for get_rand()) when looking at it over time:

int get_rand() {
  static int index = 0;
  return (index = (index + get_6bit_rand()) % 62);
}

char get_char() {
  int index = get_rand();
  char tmp = chars[index];
  chars[index] = chars[0];
  return (chars[0] = tmp);
}

However, get_char() should return evenly distributed results.

What this shows, is, that while distribution is even among the result set, 
the implementation may still be flawed because results could be predictable 
for a subset of results. Or in other words: Simply looking at the 
distribution of results is not an indicator for randomness. I could change 
get_rand() in the following way:

int get_rand() {
  static int index = 0;
  return (index = (index + 1) % 62);
}

Results would be distributed even, but clearly it is not random.

-- 
Replies to list only preferred.




Re: [gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-27 Thread Neil Bothwick
On Fri, 27 Jun 2014 19:50:15 +0200, Kai Krakow wrote:

> You can actually learn from Dilbert comics. ;-)

Unless you're a PHB, they never learn.


-- 
Neil Bothwick

"You know how dumb the average person is? Well, statistically, half of
them are even dumber than that" - Lewton, P.I.


signature.asc
Description: PGP signature


[gentoo-user] Re: Re: OT: Mapping random numbers (PRNG)

2014-06-27 Thread Kai Krakow
thegeezer wrote:

> On 06/26/2014 11:07 PM, Kai Krakow wrote:
>>
>> It is worth noting that my approach has the tendency of generating random
>> characters in sequence.
> 
> sorry but had to share this http://dilbert.com/strips/comic/2001-10-25/

:-)

I'm no mathematician, but well, I think the swapping approach fixes it. What 
this makes clear, however, is that randomness on its own does not completely 
ensure unpredictable items if combined with other functions. One has to 
carefully think about it. I, for myself, would always stay away from using 
modulo to clip the random numbers. It will always create bias. My first idea 
introduced predictable followers (you always knew that the next char had a 
specific probability related to the tail length of the list).

You can actually learn from Dilbert comics. ;-)

-- 
Replies to list only preferred.




Re: [gentoo-user] Re: Re: Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-24 Thread Rich Freeman
On Tue, Jun 24, 2014 at 2:34 PM, Kai Krakow  wrote:
> I'm not sure if multiple partitions can share the same cache device
> partition but more or less that's it: Initialize bcache, then attach your
> backing devices, then add those bcache devices to your btrfs.

Ah, if you are stuck with one bcache partition per cached device then
that will be fairly painful to manage.

> Yes, it will write double the data to the cache then - but only if btrfs
> also did actually read both copies (which it probably does not because it
> has checksums and does not need to compare data, and lets just ignore the
> case that another process could try to read the same data from the other
> raid member later, that case should become optimized-out by the OS cache).

I didn't realize you were proposing read caching only.  If you're only
caching reads then obviously that is much safer.  I think with btrfs
in raid1 mode with only two devices you can tell it to prefer a
particular device for reading in which case you could just bcache that
drive.  It would only read from the other drive if the cache failed.

However, I don't think btrfs lets you manually arrange drives into
array-like structures.  It auto-balances everything which is usually a
plus, but if you have 30 disks you can't tell it to treat them as 6x
5-disk RAID5s vs one 30-disk raid5 (I think).

Rich



[gentoo-user] Re: Re: Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-24 Thread Kai Krakow
Rich Freeman wrote:

> On Sun, Jun 22, 2014 at 7:44 AM, Kai Krakow  wrote:
>> I don't see where you could lose the volume management features. You just
>> add device on top of the bcache device after you initialized the raw
>> device with a bcache superblock and attached it. The rest works the same,
>> just that you use bcacheX instead of sdX devices.
> 
> Ah, didn't realize you could attach/remove devices to bcache later.
> Presumably it handles device failures gracefully, ie exposing them to
> the underlying filesystem so that it can properly recover?

I'm not sure if multiple partitions can share the same cache device 
partition but more or less that's it: Initialize bcache, then attach your 
backing devices, then add those bcache devices to your btrfs.

I don't know how errors are handled, tho. But as with every caching 
technique (even in ZFS) your data is likely toast if the cache device dies 
in the middle of action. Thus, you should put bcache on LVM RAID if you are 
going to use it for write caching (i.e. write-back mode). Read caching 
should be okay (write-through mode). Bcache is a little slower than other 
flash-cache implementations because it only reports data as written back to 
the FS if it reached stable storage (which can be the cache device, tho, if 
you are using write-back mode). It was also designed with unexpected reboots 
in mind: it will replay transactions from its log on reboot. This 
means, you can have unstable data conditions on the raw device which is why 
you should never try to use that directly, e.g. from a rescue disk. But 
since bcache wraps the partition with its own superblock this mistake should 
be impossible.

I'm not sure how graceful device failures are handled. I suppose in write-
back mode you can get into trouble because it's too late for bcache to tell 
the FS that there is a write error when it already confirmed that stable 
storage has been hit. Maybe it will just keep the data around so you could 
swap devices or will report the error next time when data is written to that 
location. It probably interferes with btrfs RAID logic on that matter.

> The only problem with doing stuff like this at a lower level (both
> write and read caching) is that it isn't RAID-aware.  If you write
> 10GB of data, you use 20GB of cache to do it if you're mirrored,
> because the cache doesn't know about mirroring.

Yes, it will write double the data to the cache then - but only if btrfs 
also did actually read both copies (which it probably does not because it 
has checksums and does not need to compare data, and let's just ignore the 
case that another process could try to read the same data from the other 
raid member later, that case should become optimized-out by the OS cache). 
Otherwise both caches should work pretty individually with their own set of 
data depending on how btrfs uses each device individually. Remember that 
btrfs raid is not a block-based raid where block locations would match 1:1 
on each device. Btrfs raid can place one mirror of data in two completely 
different locations on each member device (which is actually a good thing in 
case block errors accumulate in specific locations for a "faulty" model of a 
disk). In case of write caching it will of course cache double the data 
(because both members will be written to). But I think that's okay for the 
same reasons, except it will wear your cache device faster. But in that case 
I suggest using individual SSDs for each btrfs member device anyway. It's 
not optimal, I know. Could be useful to see some best practices and 
pros/cons on that topic (individual cache device per btrfs member vs. bcache 
on LVM RAID with bcache partitions on the RAID for all members). I think the 
best strategy depends on if you are write-most or read-most.

Thanks for mentioning. Interesting thoughts. ;-)

> Offhand I'm not sure
> if there are any performance penalties as well around the need for
> barriers/etc with the cache not being able to be relied on to do the
> right thing in terms of what gets written out - also, the data isn't
> redundant while it is on the cache, unless you mirror the cache.

This is partially what I outlined above. I think in the case of write-caching, 
no barrier pass-through is needed. Bcache will confirm the barriers and 
that's all the FS needs to know (because bcache is supervising the FS, all 
requests go through the bcache layer, no direct access to the backing 
device). Of course, it's then bcache's job to ensure everything gets written 
out correctly in the background (whenever it feels to do so). But it can use 
its own write-barriers to ensure that for the underlying device - that's 
nothing the FS has to care about. Performance should be faster anyway 
because, well, you are writing to a faster device - that is what bcache is 
all about, isn't it? ;-)

I don't think write-barriers for read caching are needed, at least not from 
the point of view of the FS. The caching layer, tho,

Re: [gentoo-user] Re: Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-22 Thread Rich Freeman
On Sun, Jun 22, 2014 at 7:44 AM, Kai Krakow  wrote:
> I don't see where you could lose the volume management features. You just
> add device on top of the bcache device after you initialized the raw device
> with a bcache superblock and attached it. The rest works the same, just that
> you use bcacheX instead of sdX devices.

Ah, didn't realize you could attach/remove devices to bcache later.
Presumably it handles device failures gracefully, ie exposing them to
the underlying filesystem so that it can properly recover?

>
> From that point of view, I don't think something like ZIL should be
> implemented in btrfs itself but as a generic approach like bcache so every
> component in Linux can make use of it. Hot data relocation OTOH is
> interesting from another point of view and may become part of future btrfs
> as it benefits from knowledge about the filesystem itself, using a generic
> interface like "hot data tracking" in VFS - so other components can make use
> of that, too.

The only problem with doing stuff like this at a lower level (both
write and read caching) is that it isn't RAID-aware.  If you write
10GB of data, you use 20GB of cache to do it if you're mirrored,
because the cache doesn't know about mirroring.  Offhand I'm not sure
if there are any performance penalties as well around the need for
barriers/etc with the cache not being able to be relied on to do the
right thing in terms of what gets written out - also, the data isn't
redundant while it is on the cache, unless you mirror the cache.
Granted, if you're using it for write intent logging then there isn't
much getting around that.

> Having to prepare devices for bcache is kind of a show-stopper because it is
> no drop-in component that way. But OTOH I like that approach better than dm-
> cache because it protects from using the backing device without going
> through the caching layer which could otherwise severely damage your data,
> and you get along with fewer devices and don't need to size a meta device
> (which probably needs to grow later if you add devices, I don't know).

And this is the main thing keeping me away from it.  It is REALLY
painful to migrate to/from.  Having it integrated into the filesystem
delivers all the same benefits of not being able to mount it without
the cache present.

Now excuse me while I go fix my btrfs (I tried re-enabling snapper and
it again got the filesystem into a worked-up state after trying to
clean up half a dozen snapshots at the same time - it works fine until
you go and try to write a lot of data to it, then it stops syncing
though you don't necessarily notice until a few hours later when the
write cache exhausts RAM and on reboot your disk reverts back a few
hours).  I suspect that if I just treat it gently for a few hours
btrfs will clean up the mess and it will work normally again, but the
damage apparently persists after a reboot if you go heavy in the disk
too quickly...

Rich



[gentoo-user] Re: Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-22 Thread Kai Krakow
Rich Freeman wrote:

> On Sat, Jun 21, 2014 at 3:24 PM, Kai Krakow  wrote:
>> And while we are at it, I'd also like to mention bcache. Tho, conversion
>> is not straight forward. However, I'm going to try that soon for my
>> spinning rust btrfs.
> 
> I contemplated that, but I'd really like to see btrfs support
> something more native.  Bcache is way too low-level for me and strikes
> me as inefficient as a result.  Plus, since it sits UNDER btrfs you'd
> probably lose all the fancy volume management features.

I don't see where you could lose the volume management features. You just 
add device on top of the bcache device after you initialized the raw device 
with a bcache superblock and attached it. The rest works the same, just that 
you use bcacheX instead of sdX devices.

Bcache is a general approach and it seems to work very well for that 
already. There are hot data tracking patches and proposals to support adding 
a cache device to the btrfs pool and let btrfs migrate data back and forth 
between each. That would be native. But it still would lack the advanced 
features ZFS implements to make use of such caching devices, implementing 
even different strategies for ZIL, ARC, and L2ARC. That's the gap bcache 
tries to bridge.
 
> ZFS has ssd caching as part of the actual filesystem, and that seems
> MUCH cleaner.

Yes, it is much more mature in that regard. Comparing with ZFS, bcache is a 
lot like ZIL, while hot data relocation in btrfs would be a lot like L2ARC. 
ARC is a special purpose RAM cache separate from the VFS caches which has 
special knowledge about ZFS structures to keep performance high. Some 
filesystems implement something similar by keeping tree structures 
completely in RAM. I think, both bcache and hot data tracking take parts of 
the work that ARC does for ZFS - note that "hot data tracking" is a generic 
VFS interface, while "hot data relocation" is something from btrfs. Both 
work together but it is not there yet.

From that point of view, I don't think something like ZIL should be 
implemented in btrfs itself but as a generic approach like bcache so every 
component in Linux can make use of it. Hot data relocation OTOH is 
interesting from another point of view and may become part of future btrfs 
as it benefits from knowledge about the filesystem itself, using a generic 
interface like "hot data tracking" in VFS - so other components can make use 
of that, too.

A ZIL-like cache and hot data relocation could probably solve a lot of 
fragmentation issues (especially a ZIL-like cache), so I hope work for that 
will get pushed a little more soon.

Having to prepare devices for bcache is kind of a show-stopper because it is 
no drop-in component that way. But OTOH I like that approach better than dm-
cache because it protects from using the backing device without going 
through the caching layer which could otherwise severely damage your data, 
and you get along with fewer devices and don't need to size a meta device 
(which probably needs to grow later if you add devices, I don't know).

-- 
Replies to list only preferred.




Re: [gentoo-user] Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-21 Thread Rich Freeman
On Sat, Jun 21, 2014 at 3:24 PM, Kai Krakow  wrote:
> And while we are at it, I'd also like to mention bcache. Tho, conversion is
> not straight forward. However, I'm going to try that soon for my spinning
> rust btrfs.

I contemplated that, but I'd really like to see btrfs support
something more native.  Bcache is way too low-level for me and strikes
me as inefficient as a result.  Plus, since it sits UNDER btrfs you'd
probably lose all the fancy volume management features.

ZFS has ssd caching as part of the actual filesystem, and that seems
MUCH cleaner.

Rich



[gentoo-user] Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-21 Thread Kai Krakow
Peter Humphrey wrote:

> On Friday 20 June 2014 19:48:14 Kai Krakow wrote:
>> microcai  schrieb:
>> > rsync is doing a bunch of 4k random IO when updating the portage tree,
>> > that will kill SSDs with a much higher Write Amplification Factor.
>> > 
>> > I have a 2-year-old SSD that has reported a Write Amplification Factor
>> > of 26. I think the only reason is that I put the portage tree on this SSD
>> > to speed it up.
>> 
>> Use a file system that turns random writes into sequential writes, like
>> the pretty newcomer f2fs. You could try using it for your rootfs but
>> currently I suggest just creating a separate partition for it and either
>> mount it as /usr/portage or symlink that dir into this directory (that
>> way you could use it for other purposes, too, that generate random short
>> writes, like log files).
> 
> Well, there's a surprise! Thanks for mentioning f2fs. I've just converted
> my Atom box's seven partitions to it, recompiled the kernel to include it,
> changed the fstab entries and rebooted. It just worked.

It's said to be twice as fast with some workloads (especially write 
workloads). Can you confirm that? I didn't try it that much yet - usually I 
use it for pendrives only. I have no experience using it for rootfs.
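As a sketch of the setup being discussed (partition and mount point are examples only; reformatting destroys existing data on the partition):

```shell
# Kernel must have CONFIG_F2FS_FS enabled (the recompile mentioned above).
mkfs.f2fs -l portage /dev/sdb3        # destroys whatever was on /dev/sdb3

# /etc/fstab line for mounting it as the portage tree:
#   /dev/sdb3   /usr/portage   f2fs   noatime   0 0
mount /usr/portage
```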

And while we are at it, I'd also like to mention bcache. Tho, conversion is 
not straight forward. However, I'm going to try that soon for my spinning 
rust btrfs.

-- 
Replies to list only preferred.




[gentoo-user] Re: Re: [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-21 Thread Kai Krakow
Rich Freeman wrote:

> On Sat, Jun 21, 2014 at 10:27 AM, Peter Humphrey 
> wrote:
>>
>> I found that fstrim can't work on f2fs file systems. I don't know whether
>> discard works yet.
> 
> Fstrim is to be preferred over discard in general.  However, I suspect
> neither is needed for something like f2fs.  Being log-based it doesn't
> really overwrite data in place.  I suspect that it waits until an
> entire region of the disk is unused and then it TRIMs the whole
> region.

F2fs prefers to fill an entire erase block before touching the next. It also 
tries to coalesce small writes into 16k blocks before submitting them to 
disk. And according to the docs it supports trim/discard internally.
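For reference, the two approaches being compared look like this on filesystems that support them (whether the f2fs of that era honoured fstrim is exactly what was in question; paths are examples):

```shell
# One-shot trim of free space, typically run periodically from cron:
fstrim -v /usr/portage

# Versus continuous discard as a mount option in /etc/fstab:
#   /dev/sdb3   /usr/portage   f2fs   noatime,discard   0 0
```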

> However, I haven't actually used it and only know the little I've read
> about it.  That is the principle of a log-based filesystem.

There's an article at LWN [1] and in the comments you can find some 
important information about the technical details.

Posted Oct 11, 2012 21:11 UTC (Thu) by arnd:
| * Wear leveling usually works by having a pool of available erase blocks
|   in the drive. When you write to a new location, the drive takes on block
|   out of that pool and writes the data there. When the drive thinks you
|   are done writing to one block, it cleans up any partially written data
|   and puts a different block back into the pool.
| * f2fs tries to group writes into larger operations of at least page size
|   (16KB or more) to be efficient, current FTLs are horribly bad at 4KB
|   page size writes. It also tries to fill erase blocks (multiples of 2MB)
|   in the order that the devices can handle.
| * logfs actually works on block devices but hasn't been actively worked on
|   over the last few years. f2fs also promises better performance by using
|   only 6 erase blocks concurrently rather than 12 in the case of logfs. A
|   lot of the underlying principles are the same though.
| * The "industry" is moving away from raw flash interfaces towards eMMC and
|   related technologies (UFS, SD, ...). We are not going back to raw flash
|   any time soon, which is unfortunate for a number of reasons but also has
|   a few significant advantages. Having the FTL take care of bad block
|   management and wear leveling is one such advantage, at least if they get
|   it right.
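The "horribly bad at 4KB page size writes" point can be made concrete with a back-of-envelope calculation (figures are illustrative, not from any real drive): if the FTL internally writes in 16 KiB pages, every 4 KiB random write burns a full page, so the write amplification floor is 4 before garbage collection even starts.

```shell
# Minimum write amplification from page-size mismatch alone.
host_write_kib=4    # size of each random host write
ftl_page_kib=16     # internal FTL write granularity (per the LWN comment)

waf=$(( ftl_page_kib / host_write_kib ))
echo "minimum WAF for ${host_write_kib}KiB random writes: ${waf}"
```

Coalescing four such writes into one 16 KiB submission, as f2fs tries to do, brings that floor back down to 1.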

According to wikipedia [2], some more interesting features are on the way, 
like compression and data deduplication to lower the impact of writes.
 
[1]: http://lwn.net/Articles/518988/
[2]: http://en.wikipedia.org/wiki/F2FS

-- 
Replies to list only preferred.




Re: [gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Volker Armin Hemmann
On 11.10.2013 10:28, Steven J. Long wrote:
> On Tue, Oct 01, 2013 at 06:35:58PM +0200, Volker Armin Hemmann wrote:
 wrong analogy and it goes down from here. Really.
>>> Ohh, but they are inspired on YOUR analogy, so guess how wrong yours was.
>> your trolling is weak. And since I never saw anything worth reading
>> posted by you, you are very close to plonk territory right now.
> If his analogies are weak, that's deliberate: to show that your analogy is 
> just
> as weak. Irrespective of why /usr was first added, or that it was in fact what
> /home now is, it's proven useful in many contexts. That you don't accept that,
> won't convince anyone who's lived that truth. All you'll do is argue in 
> circles
> about irrelevance.
>  
>>> The setup of a separate /usr on a networked system was used in amongst
>>> other places a few Swedish universities.
>> separate /usr on a network has been used in a lot of places. So what? Does
>> that prove anything?
>> Nope, it doesn't.
> Er quite obviously it proves that a separate /usr can be useful. In fact so
> much so that all the benefits of the above setup are claimed by that god-awful
> "why split usr is broken because we are dumbasses who got kicked out of the
> kernel and think that userspace doesn't need stability" post, as if they never
> existed before, and could not exist without a rootfs/usr merge.
>  
>> Seriously, /var is a good candidate for a separate partition. /usr is not.
> They both are. Not very convincing is it?
> Seriously, if you don't see the need for one, good for you. Just stop telling
> us what to think, will you?
>
>> too bad POSIX is much older than LSB or FHS.
> Too bad separate /usr is much older than initramfs.
 too bad that initramfs and initrd are pretty good solutions to the
 problem of hidden breakage caused by separate /usr.
 If you are smart enough to setup an nfs server, I suppose you are smart
 enough to run dracut/genkernel&co.
>>> If you are smart enough to run "dracut/genkernel&co" I suppose you are
>>> smart enough to see the wrongness of your initial statement "too bad
>>> POSIX is much older than LSB or FHS."
>> too bad I am right and you are an idiot.
>>
>> Originally, the name "POSIX" referred to IEEE Std 1003.1-1988, released
>> in 1988. The family of POSIX standards is formally designated as IEEE
>> 1003 and the international standard name is ISO/IEC 9945.
>> The standards, formerly known as IEEE-IX, emerged from a project that
>> began circa 1985. Richard Stallman suggested the name POSIX to the IEEE.
>> The committee found it more easily pronounceable and memorable, so it
>> adopted it.
>>
>> That is from wikipedia.
>>
>> 1985/1988. When were LSB/FHS created again?
>>
>> FHS in 1994. Hm
> You really are obtuse. You should try to consider what *point* the other 
> person
> is trying to make before you mouth off with "superior knowledge" that 
> completely
> misses it.
>
>> *plonk*
> ditto. AFAIC you're the one who pulled insults out, when in fact you were
> *completely* missing the point.
>
> Bravo. 
>
you know, I just reread this subthread and the other crap you just
posted today.

Complaining, insulting, being 'obtuse' - that describes you very well.
Or not reading at all.

Very well, I can live without your emails. Really, I can.



Re: [gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Mark David Dumlao
On Fri, Oct 11, 2013 at 4:16 PM, Steven J. Long
 wrote:
> On Mon, Sep 30, 2013 at 11:37:53PM +0100, Neil Bothwick wrote:
>> initramfs is the new /, for varying values of new since most distros have
>> been doing it that way for well over a decade.
>
> Only it's not, since you're responsible for keeping it in sync with the main
> system. And for making sure it has everything you need. And hoping they don't
> change incompatibly between root and initramfs.

You have ALWAYS been responsible for keeping / in sync with /usr. ALWAYS.
Putting / out of sync with /usr will almost definitely result in breakage for
practically every use case where / and /usr have been separated. You cannot
reliably upgrade one without the other. If anything, it's easier to keep an init
thingy in sync with /usr than to keep / in sync with /usr because our
init thingies
have automated tools for calculating what to put in them. / does not, and the
problem of deciding what goes there is harder than with an init thingy.

Likewise, updating / without updating the init thingy, _if you dont know what
you're doing_ is a recipe for trouble.

Thus the analogy stands.
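The "automated tools" in question are things like dracut and genkernel, which rebuild the image from the installed system rather than requiring manual curation (commands assume root and a standard install):

```shell
# Regenerate the initramfs for the currently running kernel, pulling in
# the current lvm/mdadm/udev bits from the installed system:
dracut --force --kver "$(uname -r)"

# Gentoo's genkernel equivalent:
genkernel --install initramfs
```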

-- 
This email is:[ ] actionable   [x] fyi[ ] social
Response needed:  [ ] yes  [x] up to you  [ ] no
Time-sensitive:   [ ] immediate[ ] soon   [x] none



Re: [gentoo-user] Re: Re: separate / and /usr to require initramfs 2013-11-01

2013-10-11 Thread Neil Bothwick
On Fri, 11 Oct 2013 14:11:55 +0100, Peter Humphrey wrote:

> > While I'm loathe to use words like underhanded, ...  
> 
> 
>   Not "loathe" here but "loath" or even "loth".
> 

Ouch!


-- 
Neil Bothwick

Mac screen message: "Like, dude, something went wrong."


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: Re: separate / and /usr to require initramfs 2013-11-01

2013-10-11 Thread Peter Humphrey
On Friday 11 Oct 2013 12:55:55 Neil Bothwick wrote:

> While I'm loathe to use words like underhanded, ...


Not "loathe" here but "loath" or even "loth".


(Just to help non-native speakers avoid confusion, you understand.)

:-)

-- 
Regards,
Peter




Re: [gentoo-user] Re: Re: separate / and /usr to require initramfs 2013-11-01

2013-10-11 Thread Neil Bothwick
On Fri, 11 Oct 2013 12:27:59 +0100, Steven J. Long wrote:

> > I don't understand why people keep banging on about Poettering in
> > this, previously finished, thread.  
> 
> You brought up the background, wrt Greg K-H. Regardless of how you
> feel, I'm not alone in considering Poettering's (and Sievers')
> behaviour underhanded.

You're not. While I'm loathe to use words like underhanded, I certainly
don't like the direction things are taking with systemd. I'm not
defending them, but I don't see this as their fault. The potential for
breakage was always there, their way of doing things just found it sooner.

> And all this stuff about the "situation just arose" is only true, if you
> accept Poettering's propaganda^W arguments as given. So yes, he's very
> relevant.

We'll just have to disagree on his relevance here. The problem is that
the split is arbitrary, there is no clear definition of what is and is
not needed at boot time for all systems, and that is going to lead to
incorrect decisions made with the best of intentions (not that I am
accusing the previously mentioned of having those).


-- 
Neil Bothwick

"I can picture in my mind a world without war, a world without hate. And I
can picture us attacking that world, because they'd never expect it."


signature.asc
Description: PGP signature


[gentoo-user] Re: Re: separate / and /usr to require initramfs 2013-11-01

2013-10-11 Thread Steven J. Long
On Fri, Oct 11, 2013 at 09:42:33AM +0100, Neil Bothwick wrote:
> On Fri, 11 Oct 2013 09:36:02 +0100, Steven J. Long wrote:
> 
> > > It's evolution. Linux has for years been moving in this direction,
> > > now it has reached the point where the Gentoo devs can no longer
> > > devote the increasing time needed to support what has now become an
> > > edge case.  
> > 
> > Yeah and that's just vague crap without content ;)
> 
> I bow to your superior expertise in that field :)

Yup I have to filter out crap all day every day, usually crap I wrote.
 
> > > So which was it, one specific person or a coven of conspirators? This
> > > is open source, secret conspiracies don't really work well. If this
> > > really was such a bad move, do you really think the likes of Greg K-H
> > > would not have stepped in? Or is he a conspirator too?  
> > 
> > No he's just a bit naive: he wants to believe the best of people and did
> > not realise quite how sneaky Poettering is. No doubt he still doesn't.
> But I'm sure he never foresaw some of their shenanigans, such as
> > claiming their newly inserted breakage was the fault of device-drivers
> > and everyone should switch to their funky new way of loading modules.
> > No-one seemed to think what Torvalds said was incorrect, even if they
> > disagreed with his tone.
> 
> I don't understand why people keep banging on about Poettering in this,
> previously finished, thread.

You brought up the background, wrt Greg K-H. Regardless of how you feel, I'm
not alone in considering Poettering's (and Sievers') behaviour underhanded.

And all this stuff about the "situation just arose" is only true, if you
accept Poettering's propaganda^W arguments as given. So yes, he's very
relevant.

Sorry for not keeping current with the threads; I'll not post any more to
respect the deadline..

-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



[gentoo-user] Re: Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Steven J. Long
On Fri, Oct 11, 2013 at 09:50:05AM +0200, Alan McKinnon wrote:
> On 11/10/2013 09:54, Steven J. Long wrote:
> > On Mon, Sep 30, 2013 at 12:04:38AM +0200, Alan McKinnon wrote:
> >> On 29/09/2013 23:41, Dale wrote:
> >>> Alan McKinnon wrote:
> From that one single action this entire mess of separate /usr arose as
>  folks discovered more and more reasons to consider it good and keep it
>  around
> > 
> > Yes you elide over that part, but it's central: there were more and more
> > reasons to consider it good, and to use it. You said it.
> > 


> >> It has always been broken by
> >> design becuase it's a damn stupid idea that just happened to work by
> >> fluke.
> > 
> > *cough* bullsh1t.
> > 
> >> IT and computing is rife with this kind of error.
> > 
> > Indeed: and even more rife with a history of One True Way. So much so
> > that it's a cliche. Somehow it's now seen as "hip" to be crap at your
> > craft, unable to recognise an ABI, and cool to subscribe to "N + 1"
> > True Way, as that's an "innovation" on the old form of garbage.
> > 
> > And yet GIGO will still apply, traditional as it may be.
> 
> I have no idea what you are trying to communicate or accomplish with this.

Oh my bad, I thought this was an informal discussion. On a formal level, I
was correcting your assumption, presented as a fact, that the only reason root
and /usr split has worked in the past is some sort of fluke.

Further your conflation of basic errors in software design with a "solution"
to anything at all: the same problems still go on wrt initramfs, only now
the effort is fractured into polarised camps.

> All I see in all your responses is that you are railing against why
> things are no longer the way they used to be.

That's just casting aspersions, so I'll treat it as beneath you.

It's certainly beneath me.
-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



Re: [gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Neil Bothwick
On Fri, 11 Oct 2013 09:16:50 +0100, Steven J. Long wrote:

> > initramfs is the new /, for varying values of new since most distros
> > have been doing it that way for well over a decade.  
> 
> Only it's not, since you're responsible for keeping it in sync with the
> main system.

No I'm not, the kernel makefile takes care of that very nicely thank you
very much.
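Presumably this refers to the kernel's built-in initramfs support: point the build at a source directory (path below is an example) and every kernel rebuild repacks the initramfs automatically, keeping the two in lockstep:

```shell
# Relevant .config options:
#   CONFIG_BLK_DEV_INITRD=y
#   CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs"
cd /usr/src/linux && make -j"$(nproc)"   # repacks the initramfs into the image
```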


-- 
Neil Bothwick

Hell:  Filling out the paperwork to get into Heaven.


signature.asc
Description: PGP signature


[gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Steven J. Long
On Tue, Oct 01, 2013 at 06:35:58PM +0200, Volker Armin Hemmann wrote:
> >> wrong analogy and it goes down from here. Really.
> > Ohh, but they are inspired on YOUR analogy, so guess how wrong yours was.
> 
> your trolling is weak. And since I never saw anything worth reading
> posted by you, you are very close to plonk territory right now.

If his analogies are weak, that's deliberate: to show that your analogy is just
as weak. Irrespective of why /usr was first added, or that it was in fact what
/home now is, it's proven useful in many contexts. That you don't accept that,
won't convince anyone who's lived that truth. All you'll do is argue in circles
about irrelevance.
 
> > The setup of a separate /usr on a networked system was used in amongst
> > other places a few Swedish universities.
> 
> separate /usr on a network has been used in a lot of places. So what? Does
> that prove anything?
> Nope, it doesn't.

Er quite obviously it proves that a separate /usr can be useful. In fact so
much so that all the benefits of the above setup are claimed by that god-awful
"why split usr is broken because we are dumbasses who got kicked out of the
kernel and think that userspace doesn't need stability" post, as if they never
existed before, and could not exist without a rootfs/usr merge.
 
> Seriously, /var is a good candidate for a separate partition. /usr is not.

They both are. Not very convincing is it?
Seriously, if you don't see the need for one, good for you. Just stop telling
us what to think, will you?

>  too bad POSIX is much older than LSB or FHS.
> >>> Too bad separate /usr is much older than initramfs.
> >> too bad that initramfs and initrd are pretty good solutions to the
> >> problem of hidden breakage caused by separate /usr.
> >> If you are smart enough to setup an nfs server, I suppose you are smart
> >> enough to run dracut/genkernel&co.
> > If you are smart enough to run "dracut/genkernel&co" I suppose you are
> > smart enough to see the wrongness of your initial statement "too bad
> > POSIX is much older than LSB or FHS."
> 
> too bad I am right and you are an idiot.
> 
> Originally, the name "POSIX" referred to IEEE Std 1003.1-1988, released
> in 1988. The family of POSIX standards is formally designated as IEEE
> 1003 and the international standard name is ISO/IEC 9945.
> The standards, formerly known as IEEE-IX, emerged from a project that
> began circa 1985. Richard Stallman suggested the name POSIX to the IEEE.
> The committee found it more easily pronounceable and memorable, so it
> adopted it.
> 
> That is from wikipedia.
> 
> 1985/1988. When were LSB/FHS created again?
> 
> FHS in 1994. Hm

You really are obtuse. You should try to consider what *point* the other person
is trying to make before you mouth off with "superior knowledge" that completely
misses it.

> *plonk*

ditto. AFAIC you're the one who pulled insults out, when in fact you were
*completely* missing the point.

Bravo. 

-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



[gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Steven J. Long
On Mon, Sep 30, 2013 at 11:37:53PM +0100, Neil Bothwick wrote:
> On Mon, 30 Sep 2013 17:05:39 -0400, Walter Dnes wrote:
> 
> > > If *something1* at boot time requires access to *something2* at boot
> > > time that isn't available then I would say that *something1* is broken
> > > by design not the *something2*.  
> > 
> >   What about the case where *something2* *USED TO BE AVAILABLE, BUT HAS
> > BEEN MOVED TO /USR* ?
> 
> What about the case where something1 wasn't required at boot time but
> changed circumstances mean it now is?

What about it? Honestly it's like you lot don't know the basics of scripting
or something. $PATH ffs.

(And don't start on at me about badly-coded apps: fix the apps, or the ebuilds
not the OS: it's not broken, and certainly does not need to worked-around.)

> > > So I would argue that devs relying on /usr always being there have
> > > broken the "system".  
> > 
> >   So I would argue that unnecessarily moving stuff into /usr is
> > deliberate sabotage, designed to break *something1*.
> 
> Define unnecessarily in that context? You can't, not for all use cases.
> There are many files that clearly need to be available early on, and many
> more that clearly do not. Between them is a huge grey area, files that
> some need and some don't, that may be needed now or at some indeterminate
> point in the future. If you put everything that may conceivably be needed
> at early boot into /, you shift a large chunk of /usr/*bin/ and /usr/lib*
> into /, effectively negating the point of a small, lean /. That puts us
> right back where we started, try to define a point of separation that
> cannot be defined.

Funny, sounds a lot like deciding what to put in an initramfs. And frankly
it's untrue[2]. Most of the core system utilities have long been intended to
run people's systems. All you need to do is stop pretending "nu-skool" rubbish
is as good as the stuff that's survived decades of use. By definition the
latter is a much smaller pool of much higher-quality than the mountains of
new unproven and untested stuff, that keeps falling over in real life.

Exactly the same happened back then: we just don't see the admittedly smaller
mountains of crap that fell by the wayside after a year or five.

> initramfs is the new /, for varying values of new since most distros have
> been doing it that way for well over a decade.

Only it's not, since you're responsible for keeping it in sync with the main
system. And for making sure it has everything you need. And hoping they don't
change incompatibly between root and initramfs.
 
The point is the burden has shifted, and made the distribution less of a
distribution and more of a "DIY, and tough sh1t if it don't work, you get
to pick up the pieces we broke" irrespective of how many scripts you provide
to do work that was never needed before, and technically is not needed now[1]

It will break. Everything does at some point or another. So I for one don't
need the extra hassle from a totally unnecessary extra point of failure.

Good luck to you if that's how you roll; just don't tell me what choices I
should make, thanks.

Regards,
steveL.

[1] http://forums.gentoo.org/viewtopic-t-901206.html
[2] http://forums.gentoo.org/viewtopic-t-901206-start-75.html
..shows how few things you actually need to move. Note portage is fine with
the directory symlinks from /usr to / (I checked with zmedico before I wrote
it up.) Also the bug in lvm initscript got fixed, but I still much prefer my
machine to have the few extra MB in rootfs, and be able to chuckle at all
the eleventy-eleven FUD about those 2 directories.
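The layout described in [2] — the few early-boot files on rootfs, with directory symlinks from /usr pointing back at / so both paths keep working — can be sketched in a scratch directory. This is only an illustration (file and directory names are invented, not taken from the forum post):

```python
import os, tempfile

# Scratch-directory sketch of the scheme: binaries needed at early
# boot live on the root filesystem, and the corresponding directory
# under /usr is a symlink back to /, so both spellings still resolve.
root = tempfile.mkdtemp()                      # stands in for /
os.makedirs(os.path.join(root, "bin"))         # real location on rootfs
os.makedirs(os.path.join(root, "usr"))

# Something needed before /usr could be mounted sits in /bin...
with open(os.path.join(root, "bin", "agetty"), "w") as f:
    f.write("#!/bin/sh\n")

# ...and /usr/bin is a relative directory symlink pointing back at it,
# so it works whether or not /usr is a separate mount.
os.symlink("../bin", os.path.join(root, "usr", "bin"))

# Both spellings of the path name the same file.
same = (os.path.realpath(os.path.join(root, "usr", "bin", "agetty"))
        == os.path.realpath(os.path.join(root, "bin", "agetty")))
print(same)  # True
```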

-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



Re: [gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Alan McKinnon
On 11/10/2013 09:54, Steven J. Long wrote:
> On Mon, Sep 30, 2013 at 12:04:38AM +0200, Alan McKinnon wrote:
>> On 29/09/2013 23:41, Dale wrote:
>>> Alan McKinnon wrote:
>>>> On 29/09/2013 18:33, Dale wrote:
>>>>>> that gnome is very hostile when it comes to KDE or choice is not news.
>>>>>> And their dependency on systemd is just the usual madness. But they are
>>>>>> not to blame for separate /usr and the breakage it causes.
>>>>> If not, then what was it?  You seem to know what it was that started it
>>>>> so why not share?
>
>>>> He already said it. Someone added a hard disk to a PDP-9 (or was it an 11?)
>>>>
>>>> Literally. It all traces back to that. In those days there was no such
>>>> thing as volume management or raid. If you added a (seriously expensive)
>>>> disk the only feasible way to get its storage in the system was to
>>>> mount it as a separate volume.
>>>>
>>>> From that one single action this entire mess of separate /usr arose as
>>>> folks discovered more and more reasons to consider it good and keep it
>>>> around
> 
> Yes you elide over that part, but it's central: there were more and more
> reasons to consider it good, and to use it. You said it.
> 
> They haven't gone away just because some prat's had a brainwave and needs a
> lie-down, not encouragement. In fact most of them are touted as "USPs" in the
> propaganda we get told is a reasoned argument for ditching all our collective
> experience.
> 
>>>
>>> That wasn't the question tho.  My question wasn't about many years ago
>>> but who made the change that broke support for a separate /usr with no
>>> init thingy.  The change that happened in the past few years.
>>>
>>> I think I got my answer already tho.  Seems William Hubbs answered it
>>> but I plan to read his message again.  Different thread tho.
>>
>>
>>
>> Nobody "broke" it.
>>
>> It's the general idea that you can leave /usr unmounted until some
>> random arb time later in the startup sequence and just expect things to
>> work out fine that is broken.
>>
>> It just happened to work OK for years because nothing happened to use
>> the code in /usr at that point in the sequence.
> 
> Actually because people put *thinking* into what things were needed in early
> boot and what were not. In fact *exactly the same* thinking that goes into
> sorting out an initramfs. Only you don't need to keep syncing it, and you
> don't need to worry about missing stuff. Or you never used to, given a
> reasonably competent distro. Which was half the point in using one.
> 
> Thankfully software like agetty deliberately has tight linkage, and it's
> simple enough to move the two or three things that need it to rootfs; it's
> even officially fine as far as portage is concerned (though I do get an
> _anticipated_ warning on glibc upgrades.)
> 
>> More and more we are
>> seeing that this is no longer the case.
>>
>> So no-one broke it with a specific commit.
> 
> True enough. Cumulative lack of discipline is to blame, although personally
> I blame gmake's insane rewriting of lib deps before the linker even sees
> them, that makes $+ a lot less useful than it should be, and imo led to a
> general desire not to deal with linkage in the early days of Linux, that
> never went away.
> 
>> It has always been broken by
>> design because it's a damn stupid idea that just happened to work by
>> fluke.
> 
> *cough* bullsh1t.
> 
>> IT and computing is rife with this kind of error.
> 
> Indeed: and even more rife with a history of One True Way. So much so
> that it's a cliche. Somehow it's now seen as "hip" to be crap at your
> craft, unable to recognise an ABI, and cool to subscribe to "N + 1"
> True Way, as that's an "innovation" on the old form of garbage.
> 
> And yet GIGO will still apply, traditional as it may be.

I have no idea what you are trying to communicate or accomplish with this.

All I see in all your responses is that you are railing against why
things are no longer the way they used to be.



-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: Re: Flexibility and robustness in the Linux organisim

2013-10-11 Thread Steven J. Long
On Mon, Sep 30, 2013 at 12:04:38AM +0200, Alan McKinnon wrote:
> On 29/09/2013 23:41, Dale wrote:
> > Alan McKinnon wrote:
> >> On 29/09/2013 18:33, Dale wrote:
> >>>> that gnome is very hostile when it comes to KDE or choice is not news.
> >>>> And their dependency on systemd is just the usual madness. But they are
> >>>> not to blame for separate /usr and the breakage it causes.
> >>> If not, then what was it?  You seem to know what it was that started it
> >>> so why not share?
> >>>
> >> He already said it. Someone added a hard disk to a PDP-9 (or was it an 11?)
> >>
> >> Literally. It all traces back to that. In those days there was no such
> >> thing as volume management or raid. If you added a (seriously expensive)
> >> disk the only feasible way to get its storage in the system was to
> >> mount it as a separate volume.
> >>
> >> From that one single action this entire mess of separate /usr arose as
> >> folks discovered more and more reasons to consider it good and keep it
> >> around

Yes you elide over that part, but it's central: there were more and more
reasons to consider it good, and to use it. You said it.

They haven't gone away just because some prat's had a brainwave and needs a
lie-down, not encouragement. In fact most of them are touted as "USPs" in the
propaganda we get told is a reasoned argument for ditching all our collective
experience.

> > 
> > That wasn't the question tho.  My question wasn't about many years ago
> > but who made the change that broke support for a separate /usr with no
> > init thingy.  The change that happened in the past few years.
> > 
> > I think I got my answer already tho.  Seems William Hubbs answered it
> > but I plan to read his message again.  Different thread tho.
> 
> 
> 
> Nobody "broke" it.
> 
> It's the general idea that you can leave /usr unmounted until some
> random arb time later in the startup sequence and just expect things to
> work out fine that is broken.
> 
> It just happened to work OK for years because nothing happened to use
> the code in /usr at that point in the sequence.

Actually because people put *thinking* into what things were needed in early
boot and what were not. In fact *exactly the same* thinking that goes into
sorting out an initramfs. Only you don't need to keep syncing it, and you
don't need to worry about missing stuff. Or you never used to, given a
reasonably competent distro. Which was half the point in using one.

Thankfully software like agetty deliberately has tight linkage, and it's
simple enough to move the two or three things that need it to rootfs; it's
even officially fine as far as portage is concerned (though I do get an
_anticipated_ warning on glibc upgrades.)
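
The kind of thinking described — working out which binaries can run before /usr is mounted — boils down to checking where their shared-library dependencies resolve. A hedged sketch of that check (an illustrative script, not anything from the thread; the `ldd` output parsing is deliberately simplified):

```python
import shutil
import subprocess

def libs_under_usr(binary):
    """List shared-library dependencies of `binary` that resolve under /usr.

    A non-empty result means the binary can't run before /usr is mounted.
    Parses `ldd` output lines of the form "libfoo.so => /path/libfoo.so (0x...)".
    """
    out = subprocess.run(["ldd", binary], capture_output=True, text=True)
    deps = []
    for line in out.stdout.splitlines():
        if "=>" in line:
            target = line.split("=>")[1].split()[0]
            if target.startswith("/usr/"):
                deps.append(target)
    return deps

# Example path only; guard so the sketch is a no-op where ldd is absent.
if shutil.which("ldd"):
    print(libs_under_usr("/bin/sh"))  # empty means safe without /usr mounted
```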

> More and more we are
> seeing that this is no longer the case.
> 
> So no-one broke it with a specific commit.

True enough. Cumulative lack of discipline is to blame, although personally
I blame gmake's insane rewriting of lib deps before the linker even sees
them, that makes $+ a lot less useful than it should be, and imo led to a
general desire not to deal with linkage in the early days of Linux, that
never went away.

> It has always been broken by
> design because it's a damn stupid idea that just happened to work by
> fluke.

*cough* bullsh1t.

> IT and computing is rife with this kind of error.

Indeed: and even more rife with a history of One True Way. So much so
that it's a cliche. Somehow it's now seen as "hip" to be crap at your
craft, unable to recognise an ABI, and cool to subscribe to "N + 1"
True Way, as that's an "innovation" on the old form of garbage.

And yet GIGO will still apply, traditional as it may be.

Peace and hugs ;)
steveL
-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



Re: [gentoo-user] Re: Re: Re: Fresh install and problem with net.* init.d script

2013-07-24 Thread Alan McKinnon
On 24/07/2013 22:18, Steven J. Long wrote:
> Alan McKinnon wrote:
>> Peace and hugz OK?
> 
> Definitely :-)
> 
> "POSIX 4: Programming for the Real World" (Gallmeister, 1995)
> "UNIX Network Programming vol 2: Interprocess Communications" (Stevens, 1999)
> 
> iirc the first is on safari-online; you can download code from the second here:
> http://www.kohala.com/start/unpv22e/unpv22e.html
> 
> More here:
> https://foss.aueb.gr/posix/
> 
> If you've not had the pleasure of W Richard Stevens' writing, you have a treat
> in store. I'd guess you guys have at least read some of the TCP/IP Illustrated
> series, though.
> 
> Regards,
> steveL.
> 


I'll look into those, but do take note those books are 14 and 18 years
old - that's eternity in our world.

Basics never change, details do. Some features are here for the long
haul and I doubt anything will really change them: pipes, named pipes,
unix sockets and things of that ilk. The real bugbear with IPC is people
reinventing the wheel over and over and over to do simple messaging -
writing little daemons that do very little except listen for a small
number of messages from localhost and react to them.

Use a generic message bus for that! It fits nicely in the grand Unix
tradition of do one job and do it well, and few apps have passing
messages around as their core function. Hand it off to the system,
that's what it's there for.
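
For illustration, here is a minimal sketch of exactly the roll-your-own pattern being criticised: a tiny "daemon" whose entire job is to listen on a local socket for a couple of fixed messages and react — the wheel a generic message bus already provides. All names are invented for the sketch:

```python
import os
import socket
import tempfile
import threading

# Invented socket path for the demo; a real daemon would use /run.
SOCK = os.path.join(tempfile.mkdtemp(), "mini.sock")
ready = threading.Event()

def serve_one():
    """Accept one connection, answer one command, then exit."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK)
        srv.listen(1)
        ready.set()                 # socket exists; a client may now connect
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(64).decode()
            # The entire "protocol": two commands, everything else rejected.
            table = {"ping": "pong", "reload": "ok"}
            conn.sendall(table.get(cmd, "unknown").encode())

t = threading.Thread(target=serve_one)
t.start()
ready.wait()

# The "client" side every such daemon also reinvents.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
    c.connect(SOCK)
    c.sendall(b"ping")
    reply = c.recv(64).decode()
t.join()
print(reply)  # pong
```

Multiply this little listener by every app that carries one, and the audit Alan proposes starts to look worthwhile.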

One day I might well do an audit of a typical server base system and
count all the apps that have a hidden roll-your-own message process in
place. I'm certain the results will be scary.


-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: Re: Re: Fresh install and problem with net.* init.d script

2013-07-24 Thread Steven J. Long
Alan McKinnon wrote:
> Peace and hugz OK?

Definitely :-)

"POSIX 4: Programming for the Real World" (Gallmeister, 1995)
"UNIX Network Programming vol 2: Interprocess Communications" (Stevens, 1999)

iirc the first is on safari-online; you can download code from the second here:
http://www.kohala.com/start/unpv22e/unpv22e.html

More here:
https://foss.aueb.gr/posix/

If you've not had the pleasure of W Richard Stevens' writing, you have a treat
in store. I'd guess you guys have at least read some of the TCP/IP Illustrated
series, though.

Regards,
steveL.
-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)



Re: [gentoo-user] Re: Re: Fresh install and problem with net.* init.d script

2013-07-24 Thread Alan McKinnon
On 24/07/2013 19:51, Steven J. Long wrote:
> Alan McKinnon wrote:
>> you forgot that shared library nonsense. Every app should just bundle
>> static copies of everything it needs and leave it up to the dev to deal
>> with bugs and security issues
> 
> And you forgot: -lc prob'y because it's not required. -lrt comes into play too.
> I'd recommend a book or two, but I have the feeling you're not a coder, and your
> only response has been derogatory, so I don't think you'd get very far with them.
> 
> Shame really, you and Neil were two of the people I most respected on this list.
> 


Hey dude, lighten up a bit.

Neil and I are more than double the average age on this list.
We're full of shit. And both British. So we're both full of shit twice.

Peace and hugz OK?



-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: Re: Fresh install and problem with net.* init.d script

2013-07-24 Thread Steven J. Long
Alan McKinnon wrote:
> you forgot that shared library nonsense. Every app should just bundle
> static copies of everything it needs and leave it up to the dev to deal
> with bugs and security issues

And you forgot: -lc prob'y because it's not required. -lrt comes into play too.
I'd recommend a book or two, but I have the feeling you're not a coder, and your
only response has been derogatory, so I don't think you'd get very far with them.

Shame really, you and Neil were two of the people I most respected on this list.

-- 
#friendly-coders -- We're friendly, but we're not /that/ friendly ;-)


