Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-19 Thread lee
Daniel Frey <djqf...@gmail.com> writes:

> On 12/19/2016 10:15 AM, lee wrote:
>> "Walter Dnes" <waltd...@waltdnes.org> writes:
>> 
>>>   Similarly, the vast majority of home users have a machine with one
>>> ethernet port, and in the past it's always been eth0.
>> 
>> For 10 years or so now, the default has been two ports.
>
> Not in any of the computers I've built. Generally only high end or
> workstation/server boards have two ports.
>
> i.e. not what the typical home user would buy.

It is not reasonable to assume that a "typical home user" would want a
computer with a crappy board to run Linux on (or for anything else).
If they are that cheap, they're better off buying a used one.  And when
they are sufficiently clueless to want something like that, what does it
matter what the network interfaces are called?

>>> Now the name varies in each machine depending on the motherboard
>>> layout; oogabooga11? foobar42?  It may be static, but you don't know
>>> what it'll be, without first booting the machine.  In a truly
>>> Orwellian twist, this "feature" is referred to as "Predictable"
>>> Network Interface Names.  It only makes things easier for corporate
>>> machines acting as gateways/routers, with multiple ports.  Again, the
>>> average home user is being jerked around for a corporate agenda.
>> 
>> Perhaps the hidden agenda was to make the names indistinguishable and
>> unrecognisable, forcing everyone to use copy and paste --- after at
>> least double-checking which port is which --- to eliminate human and
>> typing errors in order to get more predictable results.
>> 
>> Otherwise, how would using unrecognisable names for network ports make
>> anything easier for corporate machines?
>> 
>
> It is even more frustrating that these so-called predictable network
> names actually can change on a reboot, it's happened to me more than
> once when multiple network cards are detected in a different order.

I haven't had that happen with the unrecognisable names.  Aren't they
supposed to prevent things like that?
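
(For reference, the old eth0-style names can be brought back; a sketch,
assuming a udev recent enough to do the renaming via its net-setup-link
rule:)

# either on the kernel command line:
net.ifnames=0

# or by masking the udev rule that does the renaming:
ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules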



Re: [gentoo-user] xterm menu

2016-12-19 Thread lee
Jorge Almeida <jjalme...@gmail.com> writes:

> On Sun, Dec 18, 2016 at 1:44 PM, lee <l...@yagibdah.de> wrote:
>> Jorge Almeida <jjalme...@gmail.com> writes:
>>
>
>>
>> This works for me:
>>
>
> Nope. No change.
>
>>
>> Perhaps it has to do with a font not being available in the size needed
>> for the menu?
>>
>
> Maybe, but I'm out of ideas.
>
>
>>
>>> can't imagine why the menu would require a "usable ISO8859 font"...
>>
>> Try using another window manager?
>>
> This implies installing another WM. I'll try fvwm. Never thought the
> problem might be with the WM (openbox, a very unproblematic WM)
>
>
> Thanks

I'm using fvwm.  I was having trouble with xterm once when I still used
Fedora, and though I'm not sure, the results might differ between window
managers (I seem to remember something about that).

Other than that, there's a program called, IIRC, 'map' (available with
Fedora, apparently not with Gentoo) which shows the memory usage of a
process in detail, including memory used for fonts.  So if we can find
such a program, we might be able to find out which fonts are being used
by xterm and see if there's a difference.
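
(Possibly the program meant is 'pmap' from sys-process/procps, which is
available on Gentoo; a sketch, assuming a single running xterm, and that
the fonts in question are client-side Xft fonts, since server-side core
fonts would not appear in the client's mappings:)

pmap -x $(pidof xterm) | grep -i -e font -e ttf -e otf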


PS:

Font Path:
  /usr/share/fonts/misc/,/usr/share/fonts/TTF/,/usr/share/fonts/OTF/,/usr/share/fonts/Type1/,/usr/share/fonts/100dpi/,/usr/share/fonts/75dpi/,built-ins


That's by default.

perl -e 'print "$_\n" foreach(split(/,/, "/usr/share/fonts/misc/,/usr/share/fonts/TTF/,/usr/share/fonts/OTF/,/usr/share/fonts/Type1/,/usr/share/fonts/100dpi/,/usr/share/fonts/75dpi/,built-ins"));' | xargs ls

... shows files in each directory, except 'built-ins', of course.


That brings up the question of whether there is an alternative to perl's
split in coreutils or bash.  The split from coreutils appears to be
intended for something else entirely and rather useless here?
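
(FWIW, both coreutils and bash can do the splitting; the coreutils
'split' splits files into pieces, which is why it looks useless here.
A sketch:)

fontpath=/usr/share/fonts/misc/,/usr/share/fonts/TTF/,/usr/share/fonts/OTF/,/usr/share/fonts/Type1/,/usr/share/fonts/100dpi/,/usr/share/fonts/75dpi/,built-ins

# coreutils: tr turns the commas into newlines
tr ',' '\n' <<< "$fontpath" | xargs ls

# pure bash: IFS does the splitting, into an array
IFS=',' read -ra dirs <<< "$fontpath"
printf '%s\n' "${dirs[@]}" | xargs ls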



Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-19 Thread lee
"Walter Dnes"  writes:

>   Similarly, the vast majority of home users have a machine with one
> ethernet port, and in the past it's always been eth0.

For 10 years or so now, the default has been two ports.

> Now the name varies in each machine depending on the motherboard
> layout; oogabooga11? foobar42?  It may be static, but you don't know
> what it'll be, without first booting the machine.  In a truly
> Orwellian twist, this "feature" is referred to as "Predictable"
> Network Interface Names.  It only makes things easier for corporate
> machines acting as gateways/routers, with multiple ports.  Again, the
> average home user is being jerked around for a corporate agenda.

Perhaps the hidden agenda was to make the names indistinguishable and
unrecognisable, forcing everyone to use copy and paste --- after at
least double-checking which port is which --- to eliminate human and
typing errors in order to get more predictable results.

Otherwise, how would using unrecognisable names for network ports make
anything easier for corporate machines?



Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-19 Thread lee
Alan McKinnon  writes:

>> That doesn't keep me from noticing that what is being said is very
>> different from what is being done.  If the bunch of people wants to
>> change that, /they/ need to do so.
>> 
>
>
> I recommend you brush up on your social skills.
>
> Figuring out what people really mean as opposed to what they say
> (because those 2 never map exactly) is a very useful skill to cultivate;
> things are seldom as they appear to your eyes.

No problem, I already figured it out.  That still doesn't mean anyone
else could solve their problem for them.



Re: [gentoo-user] xterm menu

2016-12-18 Thread lee
Jorge Almeida  writes:

> On Sun, Dec 18, 2016 at 6:52 AM, Andrew Savchenko  wrote:
>> On Sun, 18 Dec 2016 02:48:28 -0800 Jorge Almeida wrote:
>>> I tried Ctrl+click (any button) on an xterm window, to bring up the
>>> menu (which I never used before; after reading a recent thread about X
>>> (in)security, I was trying to access the secure mode for password
>>> entering).
>>>
>>> This crashes xterm. The logs:
>>
>> On xterm-325 "secure keyboard" mode works perfectly fine for me.
>>
>> Try to change font used by xterm, there are many ways to do this, I
>> prefer to put in ~/.Xresources:
>>
>> xterm*faceName: DejaVu Sans Mono:style=Bold
>> xterm*faceSize: 15
>>
>> Anyway, application should not crash, so if your system is
>> up-to-date (not only xterm, but Xorg, freetype and friends as well,
>> so better update all system) and bug is still here, please report
>> it on bugzilla.
>>
> My system (stable) is up-to-date, and I actually have XTerm*faceName:
> xft:DejaVu Sans Mono:style=book:antialias=true in
> .Xresources.
>
> The logs complain about helvetica, and I found similar stuff in the
> net (not necessarily about xterm). This appears to be a font problem,
> which is essentially voodoo to me. xterm crashing instead of just
> failing to bring up the menu seems to be an xterm bug indeed, but the
> real problem is what to do to solve the missing fonts problem.

This works for me:


XTerm*termName:  xterm-256color

XTerm*activeIcon:   true
XTerm*background:   black
XTerm*foreground:   green
XTerm*cursorColor:  Red

XTerm*multiScroll:  on
XTerm*jumpScroll:   on

xterm*FaceName: xft:Source Code Pro:pixelsize=14:style=Regular

XTerm*ScrollBar:false
XTerm*SaveLines:1024


There's a Gentoo package for that font (which I can highly recommend).

Perhaps it has to do with a font not being available in the size needed
for the menu?
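
(Since the logs complain about helvetica, it may just be that the
classic server-side bitmap fonts are missing; a quick check, a sketch
assuming x11-apps/xlsfonts is installed:)

xlsfonts -fn '-adobe-helvetica-*'

# nothing listed?  Then install the bitmap fonts the menu asks for:
emerge media-fonts/font-adobe-75dpi media-fonts/font-adobe-100dpi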

> I have xterm emerged with "openpty truetype unicode" USE flags. I

same here

> can't imagine why the menu would require a "usable ISO8859 font"...

Try using another window manager?



Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-18 Thread lee
Alan McKinnon <alan.mckin...@gmail.com> writes:

> On 18/12/2016 18:47, lee wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>> 
>>> The universe of Linux systems that are running Firefox but not
>>> Pulseaudio is fairly small at this point.
>> 
>> Pulseaudio eats away about 10% CPU without any benefit whatsoever, not
>> to mention that it makes things more complex and less reliable.  Why
>> would anyone use it?
>> 
>> Developers might try to make their lives easier by developing software
>> to the point where nobody wants to use it, except for the few developers
>> perhaps.  With firefox, a policy like that contradicts their claims.
>> 
>> 
>> This is another issue which comes up quite often with FOSS.  Developers
>> claim to be doing something in the interest of their users and are
>> asking for support.  When you take a closer look, you find that they
>> don't, and when you offer support, they do not want it.
>> 
>> Why can't they just say that they are making software for themselves the
>> way they want it and don't care about what anyone else says or wants?
>> It only gives reason to distrust someone when you find that they do not
>> do what they claim to be doing.
>> 
>
> I think you are over-simplifying the situation here. Step back and look
> at the problem from the angle of "it's a bunch of people doing stuff"
> and not from a tech-centric angle. It's a people problem.
>
> You could make a valid case that the Mozilla devs are outright lying -
> they said they want xyz, and your offer to help provide xyz was
> rejected. But is it really that simple? I think it's more a case of the
> devs wanting contributions for xyz while not mentioning the "everyone
> knows" hidden assumption of environment abc and general method def.
> Ah, that's the usual tripping point.
>
> I don't know the specifics of your particular case, but my first
> approximation guess is that there's an abc and def in there which the
> devs didn't think to mention. Happens all the time, usually with
> stunningly obvious stuff that "everyone" thought "everyone else" knew
> about. Things like future roadmaps, planned features, and the individual
> personal preferences of each dev.
>
> I guess what I'm saying is: don't be too quick to shoot from the hip -
> more looking and less assuming is often the better path.

It really is that simple because it is the way it turns out.  It doesn't
matter /why/ it turns out that way.

There is no assuming involved, and I have no reason to try to figure out
what hidden agenda a bunch of developers might have, or to make
assumptions about one.  It won't change anything.

That doesn't keep me from noticing that what is being said is very
different from what is being done.  If the bunch of people wants to
change that, /they/ need to do so.



Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-18 Thread lee
Dutch Ingraham <s...@gmx.us> writes:

> On Sun, Dec 18, 2016 at 05:47:39PM +0100, lee wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>
>> Why can't they just say that they are making software for themselves the
>> way they want it and don't care about what anyone else says or wants?
>
> Openbsd and Archlinux will (do) say exactly that.  If that attitude
> suits you, you will be right at home there.

I'd prefer such a more honest attitude.  That doesn't mean I'd be "right
at home there".



Re: [gentoo-user] from Firefox52: NO pure ALSA?, WAS: Firefox 49.0 & Youtube... Audio: No

2016-12-18 Thread lee
Rich Freeman  writes:

> The universe of Linux systems that are running Firefox but not
> Pulseaudio is fairly small at this point.

Pulseaudio eats away about 10% CPU without any benefit whatsoever, not
to mention that it makes things more complex and less reliable.  Why
would anyone use it?

Developers might try to make their lives easier by developing software
to the point where nobody wants to use it, except for the few developers
perhaps.  With firefox, a policy like that contradicts their claims.


This is another issue which comes up quite often with FOSS.  Developers
claim to be doing something in the interest of their users and are
asking for support.  When you take a closer look, you find that they
don't, and when you offer support, they do not want it.

Why can't they just say that they are making software for themselves the
way they want it and don't care about what anyone else says or wants?
It only gives reason to distrust someone when you find that they do not
do what they claim to be doing.



Re: [gentoo-user] Re: Well, I went about updating my system again. (day 6)

2016-12-18 Thread lee
Kevin Monceaux  writes:

> On Wed, Dec 07, 2016 at 06:42:21PM -0500, Alan Grimes wrote:
>  
>> -> Updating weekly, as I used to do is a Good Idea, Agreed.
>
> Sounds like a good idea.  I update anywhere from daily to a few times a
> week.  Every once in a while I lose track of the time and go a week or so
> between updates.  A "long time" between updates for me would be a couple of
> weeks.

Updating once every three months is a short time between updates.
Updating Gentoo is always extremely painful, time consuming and prone to
break something.

Just look at how many posts about update problems there are on this
list, and there are probably many more problems that never get posted
about.

Spending 80 or 90% of your time at the computer on trying to update
the system wasn't necessary 20 years ago and shouldn't be nowadays.  If
updating took about 5 minutes and could be expected to go without
problems, I might be able to do it monthly.



Re: [gentoo-user] Re: Well, I went about updating my system again. (day 6)

2016-12-18 Thread lee
Grant Edwards  writes:

> On 2016-12-08, Kevin Monceaux  wrote:
>> On Wed, Dec 07, 2016 at 06:42:21PM -0500, Alan Grimes wrote:
>>
>>> --> X11 would probably need to be shut down too, which is equivalent to a
>>> reboot on a desktop system anyway.
>>
>> Shutting down X11 doesn't appear to be equivalent to a reboot on my desktop.
>> If I shut down X11, my uptime still keeps accumulating.  
>
> I think he meant that from a "desktop productivity" standpoint, the
> two are the same: you have to close every single program you are using
> and then start over.

It's an annoyance factor.

When I have to shut down the X11 session, I might as well reboot.
There's enough software that cannot reasonably be run within tmux.

It's like having to entirely clear out your fridge and/or freezer every
couple of days because it needs to be rebooted, and then putting everything
back in.  Who would put up with that?



[gentoo-user] how to upgrade perl

2016-06-20 Thread lee

Hi,

how do you do an update despite perl blocking it?


emerge -a --update --newuse --deep --with-bdeps=y --keep-going @world
[...]
dev-lang/perl:0

  (dev-lang/perl-5.22.2:0/5.22::gentoo, ebuild scheduled for merge) pulled in by
    =dev-lang/perl-5.22* required by (virtual/perl-IO-Zlib-1.100.0-r6:0/0::gentoo, installed)

    (and 8 more with the same problem)

  (dev-lang/perl-5.20.2:0/5.20::gentoo, installed) pulled in by
    dev-lang/perl:0/5.20=[-build(-)] required by (dev-perl/Encode-Locale-1.30.0-r1:0/0::gentoo, installed)

    =dev-lang/perl-5.20* required by (virtual/perl-Pod-Parser-1.620.0:0/0::gentoo, installed)

    (and 56 more with the same problems)
[...]


It seems that some people found perl 5.24 somewhere, which doesn't seem to
be available:


eix dev-lang/perl
[I] dev-lang/perl
     Available versions:  5.20.2(0/5.20) ~5.20.2-r1(0/5.20) ~5.22.0(0/5.22) ~5.22.1(0/5.22) {berkdb debug doc gdbm ithreads}
     Installed versions:  5.20.2(01:11:43 AM 04/06/2015)(berkdb gdbm -debug -doc -ithreads)
     Homepage:            http://www.perl.org/
     Description:         Larry Wall's Practical Extraction and Report Language


An update is overdue.  What should I do?
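
(For reference, the usual way out of these virtual/perl-* blockers is
app-admin/perl-cleaner; a sketch:)

emerge --sync
emerge --ask --oneshot --update dev-lang/perl
perl-cleaner --all
emerge --ask --update --newuse --deep --with-bdeps=y @world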



Re: [gentoo-user] how to use two graphics cards with one display

2016-06-11 Thread lee
R0b0t1 <r03...@gmail.com> writes:

> On Jun 9, 2016 4:25 PM, "lee" <l...@yagibdah.de> wrote:
>>
>> R0b0t1 <r03...@gmail.com> writes:
>>
>> > Use Bumblebee. It is the FOSS version of Optimus.
>>
>> That seems to be for laptops having peculiar hardware.
>>
>
> Nope. Works regardless.

If that works with two NVIDIA cards, the PCI bus might not be fast
enough.  Even if it is fast enough, I have no idea how it would perform,
considering that it's possible (even likely) that the two PCI slots for
the graphics cards are not connected to the same CPU --- depending on
how boards with two slots wire this up, it might not be an issue at all,
or it might perform much worse than using a single card.

Do you have such a setup in use?


And the docs say:


"WARNING:You must install the Nvidia binaries in a way that will not
break Mesa’s LibGL, it is needed for 3D acceleration on the Intel
card. This means that on most distros you will need a Bumblebee specific
package for it to run, the stock packages on most cases will break
LibGL."[1]


I do not have an Intel card.  And what are these Optimus cards they
mention?

Besides, IIUC this is intended for /switching between/ different
cards. I do not want to switch between cards but /use both of them at
the same time/.  There's no point in switching between them, and
apparently that would be more a disadvantage than anything else due to
the overhead involved.


[1]: https://github.com/Bumblebee-Project/Bumblebee/wiki/Supported-drivers



Re: [gentoo-user] how to use two graphics cards with one display

2016-06-09 Thread lee
Andrew Savchenko <birc...@gentoo.org> writes:

> Hi,
>
> On Sun, 05 Jun 2016 19:34:15 +0200 lee wrote:
>> Hi,
>> 
>> is there a way to reasonably use two graphics cards with a single
>> display?
>> 
>> SLI won't work because it's retarded in requiring the GPUs to be the
>> same, which they aren't --- not to mention that the cards would be too
>> far away from each other in the slots for a bridge to fit.
>> 
>> So what I'm thinking of is like using one card as a default and being
>> able to use the other one to play a video in some window on the same
>> display, preferably managed by the same fvwm, with the window optionally
>> being fullscreen in size.  I'd like to do that because the card I have
>> isn't powerful enough to play a video while an OpenGL application is
>> running at the same time.
>> 
>> I'll probably get a better card once prices come down a bit, but it
>> might have the same problem, and why would I want to waste an otherwise
>> perfectly good graphics card.
>
> Yes, but it depends on your hardware setup. What's yours and why
> you need such unusual thing: connect two video cards to a single
> monitor, or do you mean by display X display spawn over multiple
> monitors?

a single monitor

> In case of laptops such configuration is quite common: they may
> have two video cards with single switchable output: intel card is
> used for general work to save power and nvidia card is used for
> applications, requiring high GPU performance. Switching is done
> using sys-power/bbswitch. But looks like this is not your case,
> since you are talking about card replacement, since most laptop GPU
> cards are not replaceable.

Right, it's not a laptop, and I don't want to switch between different
cards.

> If you want a multihead setup using two cards, this is trivial using
> either xinerama or X screens depending on your taste.

That is only simple when you have multiple monitors.

> As far as I understand your e-mail, you are trying to mux video
> outputs of two GPU cards to a single monitor (excuse me if I'm
> wrong, but it is hard to understand what your hardware is), this is
> also doable if your monitor supports dual input (most modern
> monitors do). This way separate X screens may be used to achieve
> your goal. (Xinerama setup is also possible, but GL acceleration
> will be limited to abilities of the weakest card).

Exactly, but I don't want to use the picture-in-picture feature of the
monitor, and I don't want separate X screens, and I don't have room to
fit another monitor on my desk.

I simply want to use one of the graphics cards to handle an application
that uses OpenGL and the other one to play a video.

> But honestly I don't get why you need this: if you have a powerful
> GPU and it is not a laptop, where power consumption is critical,
> why just don't use that card? Most cards have multiple outputs, so
> it is not a problem to setup multihead with a single card either.

The GPU isn't quite powerful enough for some of what I'm doing.
Otherwise, it's a perfectly good card.

So I need to get a better graphics card, and once I do, it would be a
pity to have the current one lying around uselessly.  I wouldn't get
much if I tried to sell it, so I'd rather keep it in case I need a
spare.  Buying another one of the same model, to use SLI, won't help,
either.

IIUC, it takes some processing power to decode a video, so why not use
one of the cards for just that?  Multiple cards should be able to work
together.
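
(For reference, with the open-source Mesa drivers this already exists
as PRIME GPU offloading; a sketch, the video file name is made up:)

# list the render providers X knows about
xrandr --listproviders

# run only the video player on the second GPU; everything else,
# including the OpenGL application, stays on the first
DRI_PRIME=1 mpv some-video.mkv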



Re: [gentoo-user] how to use two graphics cards with one display

2016-06-09 Thread lee
R0b0t1  writes:

> Use Bumblebee. It is the FOSS version of Optimus.

That seems to be for laptops having peculiar hardware.



Re: [gentoo-user] Foss hardened router?

2016-06-09 Thread lee
James  writes:

> Hello,
>
>
> Ideally, there would be a gentoo-based hardened router for sale somewhere,
> where I could merely configure iptables and add a few extra codes?
> But, I cannot seem to be able to locate such a product for sale, 
> nor router company willing to configure such for an extra fee.
>
>
> So, dropping the Gentoo requirement (although still desired) is
> there a hardened (linux distro) router out there for sale that
> work for a small office environment?  I have too many projects, atm,
> so I know I can build something, but was hoping for an off the shelf
> Foss hardened router. (3) gigE ports with or without wifi is fine.

https://www.ubnt.com/edgemax/edgerouter-lite/

It lacks good documentation; otherwise it's a great product.

It's surprising that there are so few routers to choose from, even when
you don't limit your selection to FOSS.


On a side note, never buy Cisco, not even used: they won't let you
download or otherwise obtain a replacement for the damaged firmware
image (not to mention an update) that came with the device, unless you
have a support contract with them.  Without the firmware, the device is,
of course, useless.

No other manufacturer, not even a budget one like TP-Link --- who also
makes great products and has responsive support --- gives you issues
like that, while Cisco simply does not stand behind its products and
lets its customers down.



[gentoo-user] how to use two graphics cards with one display

2016-06-05 Thread lee

Hi,

is there a way to reasonably use two graphics cards with a single
display?

SLI won't work because it's retarded in requiring the GPUs to be the
same, which they aren't --- not to mention that the cards would be too
far away from each other in the slots for a bridge to fit.

So what I'm thinking of is like using one card as a default and being
able to use the other one to play a video in some window on the same
display, preferably managed by the same fvwm, with the window optionally
being fullscreen in size.  I'd like to do that because the card I have
isn't powerful enough to play a video while an OpenGL application is
running at the same time.

I'll probably get a better card once prices come down a bit, but it
might have the same problem, and why would I want to waste an otherwise
perfectly good graphics card.



[gentoo-user] speech recognition?

2016-05-15 Thread lee
Hi,

is there speech recognition software or the like capable of listening
in on a phone call in order to put on screen, as text, what the other
person is saying?

I'd like to connect that to a softphone so that someone who suffers from
very bad hearing can talk to people on the phone more easily.  It must
work for German.

If there's a phone capable of this, I'd like to know about it.

Surely, with today's technology, we should be able to achieve this.



Re: [gentoo-user] Arduino development on GENTOO Linux

2016-04-30 Thread Ming-Che Lee
Hi Meino

Am 30.04.2016 um 12:13 schrieb meino.cra...@gmx.de:

> One question: 
> Did you download the arduino-1.6.8 binary distribution or
> the sources and compile those locally on your GENTOO box?

I wanted a quick start so I downloaded the binary distribution:

https://www.arduino.cc/download_handler.php?f=/arduino-1.6.8-linux64.tar.xz

It's good enough for me to start with as a newbie to Arduino :-)

> PS: In a few minutes I will send you a mail offlist
> with some infos about what I have found and for what
> I will use the arduino. I would be happy, if you
> can use it for your Arduino project also!

Thank you for the infos!

Best regards,

Ming-Che



Re: [gentoo-user] Arduino development on GENTOO Linux

2016-04-30 Thread Ming-Che Lee
Hi Meino

Am 30.04.2016 um 07:36 schrieb meino.cra...@gmx.de:
> Hi,
>
> WARNING! I AM __VERY__ NEW TO ARDUINO!  :)
>
> For a little project I need to program an Arduino board.
> Since all needed libs/sketches/scripts - or whatever it
> is called in case of the Arduino - are already implemented
> by someone else I will not reinvent the wheel a second time :)
>
> Therefore I need the Arduino IDE.
>

I am also new to Arduino, so here is what I did last week to install the
latest Arduino IDE 1.6.8.

I started with installing Arduino IDE 1.0.5 from GENTOO:

$ emerge arduino

It pulled in all needed dependent packages for the IDE. Look carefully
at the information displayed at the end of the emerge. You have to run

$ crossdev -s4 avr

to have the Tool-Chain for Arduino compiled.

Unfortunately I could not use IDE 1.0.5 because it didn't show some
highlighted commands in the IDE correctly. So I downloaded IDE 1.6.8 and
unzipped it in /opt. Next step was to run the installer

$ /opt/arduino-1.6.8/install.sh

To start IDE 1.6.8: /opt/arduino-1.6.8/arduino or via menu item.

Hope this helps.

Best regards,

Ming-Che




Re: [gentoo-user] logrotate: name of log file after it's rotated?

2016-04-01 Thread lee
Alan McKinnon <alan.mckin...@gmail.com> writes:

> On 25/03/2016 13:46, lee wrote:
>> 
>> Hi,
>> 
>> is there a built-in way (like a place holder) to figure out what name a
>> rotated log file has been given by logrotate?
>> 
>> Here's what I'm trying to do:
>> 
>> 
>> ,----[ cat /etc/logrotate.d/exim ]
>> | /var/log/exim/exim*.log {
>> | daily
>> | missingok
>> | rotate 800
>> | compress
>> | delaycompress
>> | notifempty
>> | create 640 mail mail
>> | postrotate
>> | /usr/sbin/eximstats <rotated-log> | mail -s "eximstats" root
>> | endscript
>> | }
>> `----
>> 
>> 
>> I want <rotated-log> replaced with the name that the rotated log
>> file has been given.  I can think of other ways to do this, like
>> writing a script that figures out the name of the file, or using
>> 'prerotate' instead.
>> 
>> It just won't make any sense if logrotate doesn't already have some kind
>> of place holder for this.
>> 
>
>
> It depends. There are options to tell logrotate to use, or not use,
> dates in the new filename, and what compression to use or not use. So
> the names can vary.

Exactly, and that's why there needs to be some sort of place holder for
the file name.

> By far the easiest solution is to put your "| mail" into prerotate
> section. That way you know exactly what the name is. Or maybe not due to
> that * in the name glob...

The problem is that the file can still be written to while it is being
examined if the examination happens before rotation.  That can lead to
false results.

> Perhaps look into renamecopy described in man logrotate

Thanks, that sounds as if it will provide exactly what I'm looking for
:)
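
(For reference, another approach is to make the rotated name
deterministic with 'dateext'; a sketch, relying on logrotate passing
the unrotated path to postrotate as $1 and on the default date suffix
-%Y%m%d:)

/var/log/exim/exim*.log {
    daily
    missingok
    rotate 800
    compress
    delaycompress
    dateext
    notifempty
    create 640 mail mail
    postrotate
        # with dateext and delaycompress, today's rotated file
        # is "$1-YYYYMMDD", not yet compressed
        /usr/sbin/eximstats "$1-$(date +%Y%m%d)" | mail -s "eximstats" root
    endscript
}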



Re: [gentoo-user] New Laptop Will Be Here in Few Days

2016-03-28 Thread Lee
Thanks so much!
On Mar 28, 2016 1:37 AM, "Adam Carter" <adamcart...@gmail.com> wrote:

> On Mon, Mar 28, 2016 at 5:21 PM, Lee <ny6...@gmail.com> wrote:
>
>> Thanks!
>>
> FWIW i also used ~amd64 gcc, and;
> CFLAGS="-march=broadwell -O2 -pipe"
> VIDEO_CARDS="intel i965"
> CPU_FLAGS_X86="aes avx avx2 fma3 mmx popcnt sse sse2 sse3 sse4_1 sse4_2
> ssse3"
>


Re: [gentoo-user] New Laptop Will Be Here in Few Days

2016-03-28 Thread Lee
Thanks!
On Mar 27, 2016 11:12 PM, "Adam Carter" <adamcart...@gmail.com> wrote:

> On Sun, Mar 27, 2016 at 1:39 AM, Lee <ny6...@gmail.com> wrote:
>
>> New Clevo W670RZQ Laptop with 6th Gen i7 cpu, hm170 Intel chipset and on-
>> board graphics.
>>
>> Will the latest stable kernel and firmware packages work with it? Will
>> the most recent minimal install disc be adequate for my needs, or should I
>> go with another distribution for the purpose of having a working NIC?
>>
> Skylake/6th gen + Intel HD 520 video working fine for me. ~amd64 kernel
> but amd64 linux-firmware. I went straight to ~amd64 for the kernel since
> the hardware is pretty new, and using the latest kernel seems prudent to me
> when using recently released hardware. System was built when ~amd64 was
> 4.4. Everything worked.
>


[gentoo-user] New Laptop Will Be Here in Few Days

2016-03-26 Thread Lee
New Clevo W670RZQ Laptop with 6th Gen i7 cpu, hm170 Intel chipset and on-
board graphics.

Will the latest stable kernel and firmware packages work with it? Will the
most recent minimal install disc be adequate for my needs, or should I go
with another distribution for the purpose of having a working NIC?

Anyone have first hand knowledge?


[gentoo-user] logrotate: name of log file after it's rotated?

2016-03-25 Thread lee

Hi,

is there a built-in way (like a place holder) to figure out what name a
rotated log file has been given by logrotate?

Here's what I'm trying to do:


,----[ cat /etc/logrotate.d/exim ]
| /var/log/exim/exim*.log {
| daily
| missingok
| rotate 800
| compress
| delaycompress
| notifempty
| create 640 mail mail
| postrotate
| /usr/sbin/eximstats <rotated-log> | mail -s "eximstats" root
| endscript
| }
`----


I want <rotated-log> replaced with the name that the rotated log file
has been given.  I can think of other ways to do this, like writing a
script that figures out the name of the file, or using 'prerotate'
instead.

It just won't make any sense if logrotate doesn't already have some kind
of place holder for this.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow <hurikha...@gmail.com> writes:

> Am Sat, 20 Feb 2016 10:48:57 +0100
> schrieb lee <l...@yagibdah.de>:
>
>> Kai Krakow <hurikha...@gmail.com> writes:
>> 
>> > Am Fri, 22 Jan 2016 00:52:30 +0100
>> > schrieb lee <l...@yagibdah.de>:
>> >
>> >> Is WSUS of any use without domains?  If it is, I should take a
>> >> look at it.
>> >
>> > You can use it with and without domains. What domains give you
>> > through GPO is just automatic deployment of the needed registry
>> > settings in the client.
>> >
>> > You can simply create a proper .reg file and deploy it to the
>> > clients however you like. They will connect to WSUS and receive
>> > updates you control.
>> >
>> > No magic here.
>> 
>> Sounds good :)  Does it also solve the problem of having to make
>> settings for all users, like when setting up a MUA or Libreoffice?
>> 
>> That means settings on the same machine for all users, like setting up
>> seamonkey so that when composing an email, it's in plain text rather
>> than html, a particular email account every user should have and a
>> number of other settings that need to be the same for all users.  For
>> Libreoffice, it would be the deployment of a macro for all users and
>> making some settings.
>
> Well... Depends on the software. Some MUAs may store their settings to
> the registry, others to files. You'll have to figure out - it should
> work. Microsoft uses something like that to auto-deploy Outlook
> profiles to Windows domain users if an Exchange server is installed.
> Thunderbird uses a combination of registry and files. You could deploy
> a preconfigured Thunderbird profile to the users profile dir, then
> configure the proper profile path in the registry. Firefox works the
> same: Profile directory, reference to it in the registry.
>
> I think LibreOffice would work similar to MS Office: Just deploy proper
> files after figuring out its path. I once deployed OpenOffice macros
> that way to Linux X11 terminal users.

It's possible --- and tedious --- to copy a seamonkey profile to other
users.  Then you find you have a number of users who require a more or
less different setup, or you add more users later with more or less
different profiles, or you need to add something to the profile for all
users, and you're back to square one.

I'd find it very useful to be able to do settings for multiple users
with some sort of configuration software which allows me to make
settings for them from an administrative account: change a setting,
select the users it should apply to, apply it and be done with it.

The way it is now, I need to log in as every user that needs some change
of settings and do that for each of them over and over again.  This
already sucks with a handful of users.  What do you do when you have
hundreds of users?
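
(For what it's worth, Mozilla-based programs like seamonkey support a
central autoconfig file, which would cover at least the MUA part; a
sketch, where the paths and the compose_html pref name are assumptions
to check against your install:)

In <installdir>/defaults/pref/autoconf.js:

pref("general.config.filename", "mozilla.cfg");
pref("general.config.obscure_value", 0);

In <installdir>/mozilla.cfg (the first line must be a comment):

// centrally managed settings for all users
lockPref("mail.identity.default.compose_html", false);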



[gentoo-user] incremental ZFS backups

2016-03-04 Thread lee

Hi,

when you want to use zfs send/receive to make incremental backups, do
you need to keep all the snapshots you're making the backups from around
indefinitely?

I haven't found any documentation about how to deal with all the
snapshots which would be created over time.  Can they be destroyed once
the backup is finished?  A full backup took about 48 hours, so something
faster is needed, and I don't want to end up with hundreds or thousands
of snapshots by making new ones every day without being able to ever
destroy them.

The manpage is entirely confusing:


,----
|-i snapshot|bookmark
| 
|Generate  an  incremental  send  stream.   The
|incremental

Incremental in which way?

|source must be an earlier snapshot  in  the  destination's
|history.  It  will  commonly be an earlier snapshot in the

I don't want to back up the destination, and I don't care about its
history.  It's not like I'd be modifying the backup in between the
increments.

|destination's filesystem, in which case it can  be  speci‐
|fied as the last component of the name (the # or @ charac‐
|ter and following).

Huh?

|If the incremental target  is  a  clone,  the  incremental
|source  can be the origin snapshot, or an earlier snapshot
|in the origin's filesystem, or the origin's origin, etc.
`----

There is only one source, which is the current data I want to back up.
Should I make an incremental clone on the destination machine?


Basically, the documentation says that such incremental backups are
awesome because you get a 1:1 copy and only need to transfer what has
changed since the previous backup, as with rsync --- except that it's
better than that and takes almost no time.  It doesn't really say how
to actually do that and what to do with all the snapshots, though.

I can also only guess that enabling compression on the target FS won't
work unless compression is enabled at the source, though it would be
rather useful to have the backups compressed while the source is not.
You could do that with rsync, but I don't know how to access the
snapshot for that.

So how does this work?
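
(For what it's worth, here is a minimal sketch of the cycle as far as I
understand it; pool and snapshot names are made up, and '@monday' is
assumed to exist on both sides from the previous run:)

# take a new snapshot of the live filesystem
zfs snapshot tank/data@tuesday

# send only the delta between the two snapshots
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh backuphost zfs receive backup/data

# only the newest snapshot common to both sides is needed as the base
# for the next increment; older ones can be destroyed on both sides
zfs destroy tank/data@monday
ssh backuphost zfs destroy backup/data@monday

# compression is a property of the receiving dataset: received data is
# compressed on write, so the source does not need it enabled
ssh backuphost zfs set compression=lz4 backup/data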



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow <hurikha...@gmail.com> writes:

> Am Sat, 20 Feb 2016 11:24:56 +0100
> schrieb lee <l...@yagibdah.de>:
>
>> > It uses some very clever ideas to place files into groups and into
>> > proper order - other than using file mod and access times like other
>> > defrag tools do (which even make the problem worse by doing so
>> > because this destroys locality of data even more).  
>> 
>> I've never heard of MyDefrag, I might try it out.  Does it make
>> updating any faster?
>
> Ah well, difficult question... Short answer: It uses countermeasures
> against performance after updates decreasing too fast. It does this by
> using a "gapped" on-disk file layout - leaving some gaps for Windows to
> put temporary files. By this, files don't become a far spread as
> usually during updates. But yes, it improves installation time.

What difference would that make with an SSD?

> Apparently it's unmaintained since a few years but it still does a good
> job. It was built upon a theory by a student about how to properly
> reorganize file layout on a spinning disk to stay at high performance
> as best as possible.

For spinning disks, I can see how it can be beneficial.

>> > But even SSDs can use _proper_ defragmentation from time to time for
>> > increased lifetime and performance (this is due to how the FTL works
>> > and because erase blocks are huge, I won't get into detail unless
>> > someone asks). This is why mydefrag also supports flash
>> > optimization. It works by moving as few files as possible while
>> > coalescing free space into big chunks which in turn relaxes
>> > pressure on the FTL and allows to have more free and continuous
>> > erase blocks which reduces early flash chip wear. A filled SSD with
>> > long usage history can certainly gain back some performance from
>> > this.  
>> 
>> How does it improve performance?  It seems to me that, for practical
>> use, almost all of the better performance with SSDs is due to reduced
>> latency.  And IIUC, it doesn't matter for the latency where data is
>> stored on an SSD.  If its performance degrades over time when data is
>> written to it, the SSD sucks, and the manufacturer should have done a
>> better job.  Why else would I buy an SSD.  If it needs to reorganise
>> the data stored on it, the firmware should do that.
>
> There are different factors which have impact on performance, not just
> seek times (which, as you write, is the worst performance breaker):
>
>   * management overhead: the OS has to do more house keeping, which
> (a) introduces more IOPS (which is the only relevant limiting
> factor for SSD) and (b) introduces more CPU cycles and data
> structure locking within the OS routines during performing IO which
> comes down to more CPU cycles spend during IO

How would that be reduced by defragmenting an SSD?

>   * erasing a block is where SSDs really suck at performance wise, plus
> blocks are essentially read-only once written - that's how flash
> works, a flash data block needs to be erased prior to being
> rewritten - and that is (compared to the rest of its performance) a
> really REALLY HUGE time factor

So let the SSD do it when it's idle.  For applications in which it isn't
idle enough, an SSD won't be the best solution.

>   * erase blocks are huge compared to common filesystem block sizes
> (erase block = 1 or 2 MB vs. file system block being 4-64k usually)
> which happens to result in this effect:
>
> - OS replaces a file by writing a new, deleting the old
>   (common during updates), or the user deletes files
> - OS marks some blocks as free in its FS structures, it depends on
>   the file size and its fragmentation if this gives you a
>   continuous area of free blocks or many small blocks scattered
>   across the disk: it results in free space fragmentation
> - free space fragments happen to become small over time, much
>   smaller then the erase block size
> - if your system has TRIM/discard support it will tell the SSD
>   firmware: here, I no longer use those 4k blocks
> - as you already figured out: those small blocks marked as free do
>   not properly align with the erase block size - so actually, you
>   may end up with a lot of free space but essentially no complete
>   erase block is marked as free

Use smaller erase blocks.

> - this situation means: the SSD firmware cannot reclaim this free
>   space to do "free block erasure" in advance so if you write
>   another block of small data you may end up with the SSD going
>   into a direct "r

Re: [gentoo-user] {OT} Allow work from home?

2016-03-04 Thread lee
Daniel Frey <djqf...@gmail.com> writes:

> On 02/21/2016 04:36 PM, lee wrote:
>> Daniel Frey <djqf...@gmail.com> writes:
>> 
>>> On 02/20/2016 02:27 AM, lee wrote:
>>>> Daniel Frey <djqf...@gmail.com> writes:
>>>>> I looked up x2go and rebuilt openssh on my home server as it suggested
>>>>> to try it out. 
>>>
>>> I should mention I undid the hpn USE-flag change (x2go suggested
>>> building without it) and it works fine, the newer versions have patches
>>> that don't require hpn to be disabled.
>>>
>>> Still using x2go, still works wonderfully.
>> 
>> IIRC, I wanted to try it, and it turned out to be incompatible with
>> current X servers --- perhaps they fixed that in the meantime ...
>> 
>
> What version are you using?

I'm not using it because I would have had to downgrade the X server to
be able to install it.  There was a bug report about something which led
to the package being marked as incompatible with current X servers.

> I'm using the most recent stable and it works for me:
>
> $ equery list xorg-server
>  * Searching for xorg-server ...
> [IP-] [  ] x11-base/xorg-server-1.17.4:0/1.17.4

Maybe the problem has been recently fixed entirely.



Re: [gentoo-user] {OT} Allow work from home?

2016-02-22 Thread lee
Daniel Frey <djqf...@gmail.com> writes:

> On 02/20/2016 02:27 AM, lee wrote:
>> Daniel Frey <djqf...@gmail.com> writes:
>>> I looked up x2go and rebuilt openssh on my home server as it suggested
>>> to try it out. 
>
> I should mention I undid the hpn USE-flag change (x2go suggested
> building without it) and it works fine, the newer versions have patches
> that don't require hpn to be disabled.
>
> Still using x2go, still works wonderfully.

IIRC, I wanted to try it, and it turned out to be incompatible with
current X servers --- perhaps they fixed that in the meantime ...



Re: [gentoo-user] {OT} Allow work from home?

2016-02-21 Thread lee
Rich Freeman  writes:

> develop.  (Before somebody points out LUKS, be aware that Bitlocker
> lets you do full-disk encyption that is secure without having to
> actually type a decryption key at any point.  Remove the hard drive or
> boot from a CD, and the disks are unreadable - you can only read them
> if you boot off them on the original PC.)

And how do you read the disks when this original machine is broken?

It doesn't seem very secure, either.  When your laptop that uses
Bitlocker gets into the wrong hands, whoever has it can read the disks.



Re: [gentoo-user] {OT} Allow work from home?

2016-02-21 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Mon, Jan 18, 2016 at 7:57 PM, lee <l...@yagibdah.de> wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>>> On Sun, Jan 17, 2016 at 7:26 PM, lee <l...@yagibdah.de> wrote:
>>>> Rich Freeman <ri...@gentoo.org> writes:
>>>>
>>>>> However, while an RDP-like solution protects you from some types of
>>>>> attacks, it still leaves you open to many client-side problems like
>>>>> keylogging.  I don't know any major corporation that lets people RDP
>>>>> into their applications in general.
>>>>
>>>> What do they use instead?
>>>>
>>>
>>> As I mentioned in my previous email - they just hand all their
>>> employees laptops.  Control the hardware, control the software,
>>> control the security...
>>
>> I mean instead of rdp.  It's a simple solution which works really well
>> on a LAN with Windoze.  What's the equivalent that works with Linux?
>
> Well, I've never been in a company that runs Linux on the desktop, or
> which even provides VDIs for Windows.

I'm doing that at work, and nothing speaks against doing it on the
thin clients other than that the users would need to get used to it,
plus the poor graphics performance --- you can't really call that
"performance" --- of thin clients.  Other than that, we'd be much better
off.

What we would need are cheap thin clients that can each drive at least
two 4k displays, and there are none that could even drive one.  I don't
understand why they make thin clients that aren't usable because their
graphics "performance" is from the '90s.

> The most common solution is to provide windows laptops to users with
> various software packages for management/security/etc.

Laptops have slightly better graphics and add a maintenance overhead
thin-clients don't have, and they cost more.  Other than that, they
could replace the thin-clients, and nothing speaks against putting
Gentoo onto them.

Desktop machines require too much electricity.  That's another thing I
don't understand:  Why can't they finally manufacture hardware which is
really power efficient /and/ provides decent performance?

> The closest thing to RDP for Linux that I'm aware of us various
> NX-based implementations, like x2go, which I've mentioned a few times.
> It can be somewhat finicky.  And of course there is VNC, which is much
> less efficient.  I don't think either really gets to the level of RDP
> in general.
>
> I do sometimes wonder how the #1 server OS in the world somehow lacks
> decent facilities for graphical remote login, and for sharing files
> across the network.  (For the latter NFS is a real pain to set up in a
> remotely secure fashion - part of the problem is that it is hard to
> use some kind of a UUID to drive file permissions, and kerberos/etc is
> a pain to set up.  There is certainly nothing approaching the ease of
> just setting a password on a share or connecting to a windows domain
> (even a samba-driven one)).

Indeed, it's really strange that there's such a big lack.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow <hurikha...@gmail.com> writes:

> Am Wed, 20 Jan 2016 01:46:29 +0100
> schrieb lee <l...@yagibdah.de>:
>
>> The time before, it wasn't
>> a VM but a very slow machine, and that also took a week.  You can have
>> the fastest machine in the world and Windoze always manages to bring
>> it down to a slowness we wouldn't have accepted even 20 years ago.
>
> This is mainly an artifact of Windows updates destroying locality of
> data pretty fast and mainly a problem when running on spinning rust.
> DLLs and data files needed for booting or starting specific
> software become spread wide across the hard disk. Fragmentation isn't
> the issue here - NTFS is pretty good at keeping it low. Still, the
> right defragmentation tool will help you:

You can't very well defragment the disk while updates are being
performed.  Updating goes like this:


+ install from an installation media

+ tell the machine to update

+ come back the next day and find out that it's still looking for
  updates or trying to download them, or that it wants to be restarted

+ restart the machine

+ start over with the second step until all updates have been installed


That usually takes a week.  When it's finally done, disable all
automatic updates because if you don't, the machine usually becomes
unusable when it installs another update.

It doesn't matter if you have the fastest machine in the world or some
old hardware you wouldn't actually use anymore; it always takes about a
week.

> I always recommend staying away from the 1000 types of "tuning tools",
> they actually make it worse and take away your chance of properly
> optimizing the on-disk file layout.

I'm not worried about that.  One of the VMs is still on an SSD, so I
turned off defragging.  The other VMs that use files on a hard disk
defrag themselves regularly overnight.

> And I always recommend using MyDefrag and using its system disk
> defrag profile to reorder the files in your hard disk. It takes ages
> the first time it runs but it brings back your system to almost out of
> the box boot and software startup time performance.

That hasn't been an issue with any of the VMs yet.

> It uses some very clever ideas to place files into groups and into
> proper order - other than using file mod and access times like other
> defrag tools do (which even make the problem worse by doing so because
> this destroys locality of data even more).

I've never heard of MyDefrag, I might try it out.  Does it make updating
any faster?

> But even SSDs can use _proper_ defragmentation from time to time for
> increased lifetime and performance (this is due to how the FTL works
> and because erase blocks are huge, I won't get into detail unless
> someone asks). This is why mydefrag also supports flash optimization.
> It works by moving as few files as possible while coalescing free space
> into big chunks which in turn relaxes pressure on the FTL and allows to
> have more free and continuous erase blocks which reduces early flash
> chip wear. A filled SSD with long usage history can certainly gain back
> some performance from this.

How does it improve performance?  It seems to me that, for practical
use, almost all of the better performance with SSDs is due to reduced
latency.  And IIUC, it doesn't matter for the latency where data is
stored on an SSD.  If its performance degrades over time when data is
written to it, the SSD sucks, and the manufacturer should have done a
better job.  Why else would I buy an SSD.  If it needs to reorganise the
data stored on it, the firmware should do that.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow <hurikha...@gmail.com> writes:

> Am Fri, 22 Jan 2016 00:52:30 +0100
> schrieb lee <l...@yagibdah.de>:
>
>> Is WSUS of any use without domains?  If it is, I should take a look at
>> it.
>
> You can use it with and without domains. What domains give you through
> GPO is just automatic deployment of the needed registry settings in the
> client.
>
> You can simply create a proper .reg file and deploy it to the clients
> however you like. They will connect to WSUS and receive updates you
> control.
>
> No magic here.
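
(For reference, the registry settings in question are the WSUS client
policy keys; a sketch, the server URL is made up:)

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus.example.local:8530"
"WUStatusServer"="http://wsus.example.local:8530"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001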

Sounds good :)  Does it also solve the problem of having to make
settings for all users, like when setting up a MUA or Libreoffice?

That means settings on the same machine for all users, like setting up
seamonkey so that when composing an email, it's in plain text rather
than html, a particular email account every user should have and a
number of other settings that need to be the same for all users.  For
Libreoffice, it would be the deployment of a macro for all users and
making some settings.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow <hurikha...@gmail.com> writes:

> Am Wed, 20 Jan 2016 01:46:29 +0100
> schrieb lee <l...@yagibdah.de>:
>
>> >> Overcommitting disk space sounds like a very bad idea.
>> >> Overcommitting memory is not possible with xen.  
>> >
>> > Overcommitting diskspace isn't such a bad idea, considering most
>> > installs never utilize all the available diskspace.  
>> 
>> When they do not use it anyway, there is no reason to give it to them
>> in the first place.  And when they do use it, how do the VMs handle
>> the problem that they have plenty disk space available, from their
>> point of view, while the host which they don't know about doesn't
>> allow them to use it?
>> 
>> Besides, overcommitting disk space means to intentionally create a
>> setup which involves that the host can run out of disk space easily.
>> That is not something I would want to create for a host which is
>> required to function reliably.
>> 
>> And how much do you need to worry about the security of the VMs when
>> you build in a way for the users to bring the whole machine, or at
>> least random VMs, down by using the disk space which has been
>> assigned to them?  The users are somewhat likely to do that even
>> unintentionally, the more the more you overcommit.
>
> Overcommitting storage is for setups where it's easy to add storage
> pools when needed, like virtual SAN. You just monitor available space
> and when it falls below a threshold, just add more to the storage pool
> whose filesystem will grow.
>
> You just overcommit to whatever storage requirments you may ever need
> combined over all VMs but you initially only buy what you need to start
> with including short term expected growth.
>
> Then start with clones/snapshots from the same VM image (SANs provide
> that so you actually do not have to care about snapshot dependencies
> within your virtualization software).
>
> SANs usually also provide deduplication and compression, so at any
> point you can coalesce the images back into smaller storage
> requirements.
>
> A sane virtualization solution also provides RAM deduplication and
> compaction so that you can overcommit RAM the same way as storage. Of
> course it will at some point borrow RAM from swap space. Usually you
> will then just migrate one VM to some other hardware - even while it is
> running. If connected to a SAN this means: You don't have to move the
> VM images itself. The migration is almost instant: The old VM host acts
> as some sort of virtualized swap file holding the complete RAM, the new
> host just "swaps in" needed RAM blocks over network and migrates the
> rest during idle time in the background. This can even be automated by
> monitoring the resources and let the VM manager decide and act.
>
> The Linux kernel lately gained support for all this so you could
> probably even home-brew it.

Ok, that makes sense when you have more or less unlimited resources to
pay for all the hardware you need for this.  I wonder how much money
you'd have to put out to even get started with a setup like this ...



Re: [gentoo-user] {OT} Allow work from home?

2016-02-21 Thread lee
Daniel Frey  writes:

> On 01/17/2016 10:10 AM, Rich Freeman wrote:
>> On Sun, Jan 17, 2016 at 1:03 PM, J. Roeleveld  wrote:
>>>
>>> I would prefer a method that is independent of OS used. And provides server 
>>> side limitations with regards to filesharing and clipboard access.
>>>
>> 
>> x2go is just X11, so it should be OS-independent as long as you have a
>> client/server for it.  It just logs in as the appropriate user on the
>> remote host, so access beyond that is whatever you'd get if you just
>> logged in on a console.
>> 
>> Now, I can't vouch for how many OSes anybody has bothered to implement it on.
>> 
>
> Thanks for that tip on x2go - I'd struggled with freenx and eventually
> gave up and freenx isn't even in the tree anymore.
>
> I looked up x2go and rebuilt openssh on my home server as it suggested
> to try it out. Other than restarting sshd, I didn't have to do any
> configuration and it just *worked*. I've, like, never ever had that
> happen before. Even when I set up my tigervnc with xinetd it was days of
> experimenting before I got it to work. tigervnc also was hanging up X
> upgrades, so now I can successfully ditch tigervnc.
>
> x2go is so much faster it's unbelievable. I have a gigabit LAN here at
> home and VNC was lagging pretty badly (to the point where I decided
> against even trying to use it remotely.)
>
> Some things to note: there's no android client, but there is one for
> Windows/linux/MacOS. I haven't tried it on my Windows laptop yet, but
> one of these days I'll dig it out and try it.

Thank you for letting us know, I'll keep x2go in mind.

> Makes me wonder if it would be possible to spin up a VM on demand with
> x2go on and preconfigured if OP requires users not to be on the same host.

It probably is; I guess you'd need something to start the VM when a
connection is attempted.



Re: [gentoo-user] {OT} Allow work from home?

2016-01-22 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Tuesday, January 19, 2016 11:22:02 PM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > [...]
>> > If disk-space is considered too expensive, you could even have every VM
>> > use
>> > the same base image. And have them store only the differences of the disk.
>> > eg:
>> > 1) Create a VM
>> > 2) Snapshot the disk (with the VM shutdown)
>> > 3) create a new VM based on the snapshot
>> > 
>> > Repeat 2 and 3 for as many clones you want.
>> > 
>> > Most installs don't change that much when dealing with standardized
>> > desktops.
>> How does that work?  IIUC, when you create a snapshot, any changes
>> you make to the snapshotted (or however that is called) file system
>> are referenced by the snapshot, which you can either destroy or
>> abandon.  When you destroy it, the changes you made are applied to the
>> file system you snapshotted (because someone decided to use a very
>> misleading terminology), and when you abandon it, the changes are
>> thrown away and you end up with the file system as it was before the
>> snapshot was created.
>> 
>> In any case, you do not get multiple versions (which only reference the
>> changes made) of the file system you snapshotted but only one current
>> version.
>> 
>> Do you need to use a special file system or something which provides
>> this kind of multiple copies when you make snapshots?
>
> I use LVM for this.
>
> Steps are simple:
> 1) Create a LV (lv_1)
> 2) Create and install a VM using this LV (lv_1)
> 3) Stop the VM
> 4) Create multiple snapshots based on lv_1 (slv_1a, slv_1b, ..)
> 5) Create multiple VMs using the snapshots (vm1a -> slv_1a, vm1b -> slv_1b, ...)
>
> Start the VMs
>
> This way you can overcommit on the actual diskspace as only changes are
> taking up diskspace.
> If you force everyone on the same base-image, the differences should not
> be too large.
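
In LVM commands, those steps would look roughly like this (VG name and
sizes made up):

  lvcreate -L 20G -n lv_1 vg0                 # 1) the base volume
  # 2) install the VM onto /dev/vg0/lv_1, then 3) shut it down
  lvcreate -s -L 5G -n slv_1a /dev/vg0/lv_1   # 4) snapshots; 5G holds the diffs
  lvcreate -s -L 5G -n slv_1b /dev/vg0/lv_1
  # 5) point vm1a at /dev/vg0/slv_1a, vm1b at /dev/vg0/slv_1b, ...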

I don't use lvm anymore.  It requires you to have unused space in the
same VG to make a snapshot (which, of course, I didn't have), and when
you need to move a volume from one machine to another, you're screwed:
the only way to get the volume out of the volume group is to attach
some other media to the VG, move the volume onto it, and detach that
media after the move.  Moving the volume to the new machine is likewise
a pita.  I lost a whole VM when I did that, and I have no idea what
happened to it.  I did copy it, and yet it somehow disappeared.
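
The dance I mean looks roughly like this (device and VG names made up):

  pvcreate /dev/sdx1                    # the transport disk
  vgextend vg0 /dev/sdx1
  pvmove -n lv_vm /dev/sda2 /dev/sdx1   # push only that LV's extents over
  vgsplit vg0 vg_travel /dev/sdx1       # LV must be inactive and entirely
                                        # on that PV
  vgexport vg_travel                    # now the disk can be detached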

> If you also force users to store files on a shared filesystem, it
> shouldn't be too much of a difficulty to occasionally move everyone to a
> new base-image when the updates are causing the snapshots to grow too
> much.

How do you force users to do that?  I tried that with some windoze 7
VMs: according to the rules, users are not allowed to save anything on
their desktops, and nonetheless they can.  The installed applications
also create data in the disk space of the VM.  Their MUAs do that, for
example, and you may find users who have accumulated over 300GB of
email storage.  Make the disk read-only, and the VM probably won't even
start.



Re: [gentoo-user] {OT} Allow work from home?

2016-01-22 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Wednesday, January 20, 2016 01:46:29 AM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > On Tuesday, January 19, 2016 01:46:45 AM lee wrote:
>> >> "J. Roeleveld" <jo...@antarean.org> writes:
>> >> > On Monday, January 18, 2016 02:02:27 AM lee wrote:
>> >> >> "J. Roeleveld" <jo...@antarean.org> writes:
>
>> >> > 
>> >> > Yes
>> >> > 
>> >> >> That would be a huge waste of resources,
>> >> > 
>> >> > Diskspace and CPU can easily be overcommitted.
>> >> 
>> >> Overcommitting disk space sounds like a very bad idea.  Overcommitting
>> >> memory is not possible with xen.
>> > 
>> > Overcommitting diskspace isn't such a bad idea, considering most installs
>> > never utilize all the available diskspace.
>> 
>> When they do not use it anyway, there is no reason to give it to them in
>> the first place.  And when they do use it, how do the VMs handle the
>> problem that they have plenty of disk space available, from their point of
>> view, while the host which they don't know about doesn't allow them to
>> use it?
>
> 1 word: Monitoring.
> When you overcommit any resource, you need to put monitoring in place.
> Then you also need to ensure you have the ability to increase that resource 
> when required.
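
Such a check can be as simple as this, assuming the LVM snapshot setup
discussed above (the VG name is made up):

  lvs vg0                                       # Data% shows snapshot fill
  lvs --noheadings -o lv_name,data_percent vg0  # script-friendly; cron it
                                                # and alert above, say, 80%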

So you shrink your VMs back, more or less frequently, whenever the
monitoring tells you to?  Isn't it more reasonable not to
overcommit but to increase the resource when required?

>> Besides, overcommitting disk space means intentionally creating a setup
>> in which the host can easily run out of disk space.  That is not
>> something I would want for a host which is required to function
>> reliably.
>
> The host should not crash when a VM does or when the storage assigned to VMs 
> fills up.
> If it does, go back to the drawing board and fix your design.

I didn't say that the host would crash.  I wouldn't consider a VM that
is bound to run out of disk space reliable, especially when it runs out
because other VMs, which are equally bound to run out of disk space,
are using up the space it would need.

>> And how much do you need to worry about the security of the VMs when you
>> build in a way for the users to bring the whole machine, or at least
>> random VMs, down by using the disk space which has been assigned to
>> them?  The users are somewhat likely to do that even unintentionally,
>> and the more you overcommit, the more likely it becomes.
>
> See comment about monitoring.
> If all your users tend to fill up all available diskspace, you obviously
> cannot overcommit on diskspace.

Have you ever seen a disk that doesn't fill up?  The larger the disk,
the more it fills.

>> > Overcommitting memory is, i think, on the roadmap for Xen. (Disclaimer: At
>> > least, I seem to remember reading that somewhere)
>> 
>> That would be a nice feature.
>
> For VDIs, I might consider using it.
> But considering most OSs tend to fill up all available memory with caches, I 
> expect performance issues.

It depends on how you use it.

>> >> >> plus having to take care of a lot of VMs,
>> >> > 
>> >> > Automated.
>> >> 
>> >> Like how?
>> > 
>> > How do you manage a large amount of physical machines?
>> > Just change physical to VMs and do it the same.
>> > With VMs you have more options for automation.
>> 
>> Individually, for lack of a better way.  Per user when it comes to
>> setting up their MUAs and the like, for lack of any better way.  It
>> doesn't make a difference whether it's a VM or not, provided that you
>> have remote access to the machine.
>
> This is where management tools come into play. (Same methods apply to
> physical and virtual.)
>
> When talking MS Windows, domains with their policies are very useful. Couple 
> that with WSUS for the patching and software distribution tools for the 
> additional software installs, and you have a very nice setup.

I don't like what they call "domains".  They tend to get in the way, and
when you want to take a machine out of one, all the users need to be set
up anew.

Is WSUS of any use without domains?  If it is, I should take a look at
it.

> For Linux, I would recommend tools like Ansible or Puppet to control the 
> software on the machines.
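
(A minimal ad-hoc Ansible sketch of what that looks like; the inventory
group name is made up:

  ansible workstations -m package -a 'name=thunderbird state=latest'

one command, applied to every machine in the group at once.)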

Does it really have an advantage over logging in remotely?

> For any OS, I would

Re: [gentoo-user] {OT} Allow work from home?

2016-01-22 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Tue, Jan 19, 2016 at 5:08 PM, lee <l...@yagibdah.de> wrote:
>>
>> BTW, is it as easy to give a graphics card to a container as it is to
>> give it a network card?
>
> I've never tried it, but I'd think that the container could talk to a
> graphics card.

Maybe ... it's really easy with network cards.

>> What if you have a container for each user who
>> somehow logs in remotely to an X session?  Do (can) you run X sessions
>> that do not have a console and do not need a (dedicated) graphics card
>> (just for users logging in remotely)?
>
> You don't need to even have a graphics card to serve X11 via vnc or
> nx.  You could probably serve them even if your only server console
> was a serial console.  Just run x11vnc or whatever it is called - it
> is an X server whose only framebuffer is a VNC session.  I think NX
> uses the same server, but I'd have to check.  Of course, you wouldn't
> have 3D acceleration with this server, not that you'd be using it
> over NX/VNC.
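
(A minimal sketch of that, using Xvfb plus x11vnc; the display number,
window manager, and password file are made up:

  Xvfb :1 -screen 0 1280x1024x24 &   # X server without any hardware
  DISPLAY=:1 startfluxbox &          # some session on that display
  x11vnc -display :1 -rfbauth ~/.vnc/passwd -forever
)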

That might be a problem when you want to use kde or gnome?

And I thought vnc sends a copy of what is displayed on the screen, so if
you were running a program that renders something on the screen and
uses/requires a graphics card for that, you should be able to see what
it renders.  If you can't see that, vnc is of very limited use.  How
does RDP deal with this?



Re: [gentoo-user] {OT} Allow work from home?

2016-01-22 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Tue, Jan 19, 2016 at 5:22 PM, lee <l...@yagibdah.de> wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>>
>> How does that work?  IIUC, when you create a snapshot, any changes you
>> make to the snapshotted (or whatever it is called) file system are being
>> referenced by the snapshot which you can either destroy or abandon.
>> When you destroy it, the changes you made are being applied to the
>> file system you snapshotted (because someone decided to use a very
>> misleading terminology), and when you abandon it, the changes are thrown
>> away and you end up with the file system as it was before the snapshot
>> was created.
>>
>> In any case, you do not get multiple versions (which only reference the
>> changes made) of the file system you snapshotted but only one current
>> version.
>>
>> Do you need to use a special file system or something which provides
>> this kind of multiple copies when you make snapshots?
>>
>
> And that is exactly what zfs and btrfs provide. Snapshots are full
> citizens.  If I create a snapshot of a directory in btrfs it is
> essentially indistinguishable from running cp -a on the directory,
> except the snapshot takes only seconds to create almost entirely
> regardless of size, and takes almost no space until changes are made.
> Later I can delete the snapshot, or delete the original, or keep both
> indefinitely making changes to either.
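
(In btrfs commands, assuming /data is a subvolume; the paths are made
up:

  btrfs subvolume snapshot /data /data.snap   # near-instant, shares extents
  # ... modify /data and /data.snap independently ...
  btrfs subvolume delete /data.snap           # or delete /data instead
)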

Hm, I must be misunderstanding snapshots entirely.

What happens when you remove a snapshot after you modified the
"original" /and/ the snapshot?  You destroy at least one of them, so you
can never get rid of the snapshot in a non-destructive way?

My understanding is that when you make a snapshot, you get a copy that
doesn't change which you can somehow use to make backups.  When the
backup is finished, you can remove the snapshot, and the changes that
were made in the meantime are not lost --- unless you decide to throw
them away when removing the snapshot, in which case you get a rollback.

To make things more complicated, I've seen zfs refuse to remove a
snapshot, saying that something was recursive (IIRC), which didn't make
any sense.  So I left everything as it was because I didn't want to
lose data, and a while later, I removed this very same snapshot without
running into the earlier issues.  Weird behaviour makes snapshots
rather scary, so I avoid them now.

There seems to be some sort of relationship between a snapshot and the
"original" which limits what you can do with a snapshot, like the
snapshot is somehow attached to the "original".  At least that makes
some sense to me because no real copy is created when you make a
snapshot.  But how do you detach a snapshot from the "original" so that
you could safely modify both?
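
For ZFS, at least, that detaching seems to be spelled clone and promote
(dataset names made up); whether it explains the recursion error, I
can't say:

  zfs snapshot tank/vm@base
  zfs clone tank/vm@base tank/vm2   # writable, but depends on the snapshot
  zfs promote tank/vm2              # reverses the dependency; the former
                                    # original can now be destroyed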



Re: [gentoo-user] {OT} Allow work from home?

2016-01-22 Thread lee
Alec Ten Harmsel <a...@alectenharmsel.com> writes:

> On Tue, Jan 19, 2016 at 10:56:21PM +0100, lee wrote:
>> Alec Ten Harmsel <a...@alectenharmsel.com> writes:
>> >
>> > Depends on how the load is. Right now I have a 500GB HDD at work. I use
>> > VirtualBox and vagrant for testing various software. Every VM in
>> > VirtualBox gets a 50GB hard disk, and I generally have 7 or 8 at a time.
>> > Add in all the other stuff on my system, which includes a 200GB dataset,
>> > and the disk is overcommitted. Of course, none of the VirtualBox disks
>> > use anywhere near 50GB.
>> 
>> True, but that's for testing, where you know that the disk space will
>> not be used and where there's no trouble when it is.  When you have the
>> VMs in production and users (employees) using them, you don't know when
>> they will run out of disk space, and trouble ensues.
>
> Almost. Here is an equal example: I am an admin on an HPC cluster. We
> have a shared Lustre filesystem that people store work files in while
> they are running jobs. It has around 1PB of capacity. As strange as this
> may sound, this filesystem is overcommitted (we have 20,000 cores,
> that's only 52GB per core, not even close to enough for more than half a
> year of data accumulation).  Unused data is deleted after 90 days, which
> is why it can be overcommitted.

Why do you need to overcommit in the first place when you don't need
that much disk space anyway?  And it only works because you "shrink" the
disk space used by deleting data.

> Extending this to a more realistic example without automatic data
> deletion is trivial. Imagine you are a web hosting provider. You allow
> each client unlimited disk space, so you're automatically overcommitted.
> In the aggregate, even though one client may increase their usage
> extremely quickly, total usage rises slowly, giving you more than enough
> time to increase the storage capacity of whatever backing filesystem is
> hosting their files.

I'm a customer of such a provider that used to do that, and they stopped
giving their customers unlimited disk space years ago.  I guess they
found out that they can't possibly keep up with the demand, at least not
without charging more.

>> > All Joost is saying is that most resources can be overcommitted, since
>> > all the users will not be using all their resources at the same time.
>> 
>> How do you overcommit disk space and then shrink the VMs automatically
>> when disk usage gets lower again?
>> 
>
> Sorry, my previous example was bad, since the normal strategy is to
> expand when necessary as far as I know. See above.

Well, that's exactly the problem.  Once a VM has grown, it won't shrink
automatically, which soon breaks the overcommitment.



Re: [gentoo-user] {OT} Allow work from home?

2016-01-19 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Tuesday, January 19, 2016 01:46:45 AM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > On Monday, January 18, 2016 02:02:27 AM lee wrote:
>> >> "J. Roeleveld" <jo...@antarean.org> writes:
>> >> > On 17 January 2016 18:35:20 CET, Mick <michaelkintz...@gmail.com>
>> >> > wrote:
>> >> > 
>> >> > [...]
>> >> > 
>> >> >>I use the icaclient provided by Citrix to access my virtual desktop at
>> >> >>work,
>> >> >>but have never tried to set up something similar at home.  What
>> >> >>opensource
>> >> >>software would I need for this?  Is there a wiki somewhere to follow?
>> >> >>
>> >> > I'd love to do this myself as well.
>> >> > 
>> >> > Citrix sells the full package as 'XenDesktop'. To do it yourself you
>> >> > need
>> >> > a VMserver (Xen or similar) and a remote desktop tool that hooks into
>> >> > the
>> >> > VM display. (Spice or VNC)
>> >> > 
>> >> > Then you need some way of authenticating users and providing access to
>> >> > the
>> >> > client software. [...]
>> >> 
>> >> You would have a full VM for each user?
>> > 
>> > Yes
>> > 
>> >> That would be a huge waste of resources,
>> > 
>> > Diskspace and CPU can easily be overcommitted.
>> 
>> Overcommitting disk space sounds like a very bad idea.  Overcommitting
>> memory is not possible with xen.
>
> Overcommitting diskspace isn't such a bad idea, considering most installs 
> never utilize all the available diskspace.

When they do not use it anyway, there is no reason to give it to them in
the first place.  And when they do use it, how do the VMs handle the
problem that they have plenty of disk space available, from their point of
view, while the host which they don't know about doesn't allow them to
use it?

Besides, overcommitting disk space means intentionally creating a setup
in which the host can easily run out of disk space.  That is not
something I would want for a host which is required to function
reliably.

And how much do you need to worry about the security of the VMs when you
build in a way for the users to bring the whole machine, or at least
random VMs, down by using the disk space which has been assigned to
them?  The users are somewhat likely to do that even unintentionally,
and the more you overcommit, the more likely it becomes.

> Overcommitting memory is, i think, on the roadmap for Xen. (Disclaimer: At 
> least, I seem to remember reading that somewhere)

That would be a nice feature.

>> >> plus having to take care of a lot of VMs,
>> > 
>> > Automated.
>> 
>> Like how?
>
> How do you manage a large amount of physical machines?
> Just change physical to VMs and do it the same.
> With VMs you have more options for automation.

Individually, for lack of a better way.  Per user when it comes to
setting up their MUAs and the like, for lack of any better way.  It
doesn't make a difference whether it's a VM or not, provided that you
have remote access to the machine.

When you have one VM for many users, you install the MUA only once, and
when you need to do updates, you do them only once.  When you have many
VMs, like one for each user, you have to install and update many times,
once on each VM.

>> >> plus having to buy  a lot of Windoze licenses
>> > 
>> > Volume licensing takes care of that.
>> 
>> expensive
>
> Depends on the requirements. It's cheaper than a few hundred separate
> windows licenses.

It's still more expensive than one, or than a handful, isn't it?

>> >> and taking about a week to install the updates
>> >> after installing a VM.
>> > 
>> > Never heard of VM templates?
>> 
>> It still takes a week to put the updates onto the template.
>
> Last time I had to fully reinstall a windows machine it took me a day to do 
> all the updates. Microsoft even has server software that will keep them 
> locally and push them to the clients.

That would be useful to have.  Where could I download that?

Last time I installed a VM, it took a week until the updates were
finally installed, and you have to check on it every now and then to
find out whether it's even doing anything at all.  The time before, it
wasn't a VM but a very slow machine, and that also took a week.  You can
have the fastest machine in the world and Windoze always manages to
bring it down to a slowness we w

Re: [gentoo-user] {OT} Allow work from home?

2016-01-19 Thread lee
Rich Freeman  writes:

> On Mon, Jan 18, 2016 at 9:45 PM, Alec Ten Harmsel
>  wrote:
>>
>> All Joost is saying is that most resources can be overcommitted, since
>> all the users will not be using all their resources at the same time.
>>
>
> Don't want to sound like a broken record, but this is precisely why
> containers are so attractive.  You can set hard limits wherever you
> want, but otherwise absolutely everything can be
> over-comitted/shared/etc to the degree you desire.  They're just
> processes and namespaces and cgroups and so on.  You just have to be
> willing to live with whatever kernel is running on the host.  Of
> course, it isn't a solution for Windows, and there aren't any mature
> VDI-oriented solutions I'm aware of.  However, running as non-root in
> a container should be very secure so there is no reason it couldn't be
> done.  I just spun up a new container yesterday to test out burp
> (alas, ago beat me to the stablereq) and the server container is using
> all of 54M total / 3M RSS (some of that because I like to run sshd and
> so on inside).  I can afford to run a LOT of those.

Yes, I prefer containers over xen and kvm.  They are easy to set up,
have basically no overhead, no noticeable performance impact or loss,
and handing over devices, like a network card, to a container is easy
and painless.  Unfortunately, as you say, you can't use them when you
need Windoze VMs.
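
For a network card, the handover is essentially this (interface and
namespace names made up; LXC's lxc.network.type = phys amounts to the
same thing):

  ip netns add guest
  ip link set eth1 netns guest             # eth1 vanishes from the host
  ip netns exec guest ip addr add 192.0.2.10/24 dev eth1
  ip netns exec guest ip link set eth1 up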

BTW, is it as easy to give a graphics card to a container as it is to
give it a network card?  What if you have a container for each user who
somehow logs in remotely to an X session?  Do (can) you run X sessions
that do not have a console and do not need a (dedicated) graphics card
(just for users logging in remotely)?

Having a container for each user would be much less painful than having
a VM for each user.  That brings back the question what to use when you
want to log in remotely to an X session ...



Re: [gentoo-user] {OT} Allow work from home?

2016-01-19 Thread lee
Alec Ten Harmsel <a...@alectenharmsel.com> writes:

> On Tue, Jan 19, 2016 at 01:46:45AM +0100, lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> 
>> > On Monday, January 18, 2016 02:02:27 AM lee wrote:
>> >> "J. Roeleveld" <jo...@antarean.org> writes:
>> >> > On 17 January 2016 18:35:20 CET, Mick <michaelkintz...@gmail.com> wrote:
>> >> > 
>> >> > [...]
>> >> > 
>> >> >>I use the icaclient provided by Citrix to access my virtual desktop at
>> >> >>work,
>> >> >>but have never tried to set up something similar at home.  What
>> >> >>opensource
>> >> >>software would I need for this?  Is there a wiki somewhere to follow?
>> >> >>
>> >> > I'd love to do this myself as well.
>> >> > 
>> >> > Citrix sells the full package as 'XenDesktop'. To do it yourself you 
>> >> > need
>> >> > a VMserver (Xen or similar) and a remote desktop tool that hooks into 
>> >> > the
>> >> > VM display. (Spice or VNC)
>> >> > 
>> >> > Then you need some way of authenticating users and providing access to 
>> >> > the
>> >> > client software. [...]
>> >> 
>> >> You would have a full VM for each user?
>> >
>> > Yes
>> >
>> >> That would be a huge waste of resources,
>> >
>> > Diskspace and CPU can easily be overcommitted.
>> 
>> Overcommitting disk space sounds like a very bad idea.  Overcommitting
>> memory is not possible with xen.
>> 
>
> Depends on how the load is. Right now I have a 500GB HDD at work. I use
> VirtualBox and vagrant for testing various software. Every VM in
> VirtualBox gets a 50GB hard disk, and I generally have 7 or 8 at a time.
> Add in all the other stuff on my system, which includes a 200GB dataset,
> and the disk is overcommitted. Of course, none of the VirtualBox disks
> use anywhere near 50GB.

True, but that's for testing, where you know that the disk space will
not be used and where there's no trouble when it is.  When you have the
VMs in production and users (employees) using them, you don't know when
they will run out of disk space, and trouble ensues.

> All Joost is saying is that most resources can be overcommitted, since
> all the users will not be using all their resources at the same time.

How do you overcommit disk space and then shrink the VMs automatically
when disk usage gets lower again?



Re: [gentoo-user] {OT} Allow work from home?

2016-01-19 Thread lee
"J. Roeleveld"  writes:


> [...]
> If disk-space is considered too expensive, you could even have every VM use 
> the same base image. And have them store only the differences of the disk.
> eg:
> 1) Create a VM
> 2) Snapshot the disk (with the VM shutdown)
> 3) create a new VM based on the snapshot
>
> Repeat 2 and 3 for as many clones you want.
>
> Most installs don't change that much when dealing with standardized desktops.

How does that work?  IIUC, when you create a snapshot, any changes you
make to the snapshotted (or whatever it is called) file system are being
referenced by the snapshot which you can either destroy or abandon.
When you destroy it, the changes you made are being applied to the
file system you snapshotted (because someone decided to use a very
misleading terminology), and when you abandon it, the changes are thrown
away and you end up with the file system as it was before the snapshot
was created.

In any case, you do not get multiple versions (which only reference the
changes made) of the file system you snapshotted but only one current
version.

Do you need to use a special file system or something which provides
this kind of multiple copies when you make snapshots?



Re: [gentoo-user] {OT} Allow work from home?

2016-01-18 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Monday, January 18, 2016 02:02:27 AM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > On 17 January 2016 18:35:20 CET, Mick <michaelkintz...@gmail.com> wrote:
>> > 
>> > [...]
>> > 
>> >>I use the icaclient provided by Citrix to access my virtual desktop at
>> >>work,
>> >>but have never tried to set up something similar at home.  What
>> >>opensource
>> >>software would I need for this?  Is there a wiki somewhere to follow?
>> >>
>> > I'd love to do this myself as well.
>> > 
>> > Citrix sells the full package as 'XenDesktop'. To do it yourself you need
>> > a VMserver (Xen or similar) and a remote desktop tool that hooks into the
>> > VM display. (Spice or VNC)
>> > 
>> > Then you need some way of authenticating users and providing access to the
>> > client software. [...]
>> 
>> You would have a full VM for each user?
>
> Yes
>
>> That would be a huge waste of resources,
>
> Diskspace and CPU can easily be overcommitted.

Overcommitting disk space sounds like a very bad idea.  Overcommitting
memory is not possible with xen.

>> plus having to take care of a lot of VMs,
>
> Automated.

Like how?

>> plus having to buy  a lot of Windoze licenses
>
> Volume licensing takes care of that.

expensive

>> and taking about a week to install the updates
>> after installing a VM.
>
> Never heard of VM templates?

It still takes a week to put the updates onto the template.

>> Add to that that the xen host goes down at
>> random time intervals (because the sending queue of the network card
>> times out for reasons that cannot be determined) which can be as long as
>> a day, a week or even up to three weeks, and you are likely to become a
>> rather unhappy administrator.
>
> Sorry, but I consider that a bug in your hardware. If it's really that 
> unstable, replace it.
> I've been running Xen enabled servers for nearly 15 years. Never had issues 
> like that. If it were truly that unstable, it wouldn't be gaining popularity.

The hardware has already been replaced, and the problem persists.  Other
machines of identical hardware that don't run xen don't show any issues.

>> Try kvm instead, and you'll find that
>> it's impossible to migrate the VMs from xen to to kvm when you want to
>> use virtio drivers because you can't install them on an existing Windoze
>> VM.
>
> Not a problem with the virtualisation technology. It is an issue with driver 
> management inside MS Windows.
> There are ways to migrate VMs succesfully, I just don't see the point in 
> wasting time for that.

It's time consuming when you have to reinstall the VMs to migrate them
to kvm.  And when you don't have the installers of all the software
that's on some of the VMs and can't get them, you either have to run
them without virtio drivers or you can't migrate them.

> The biggest reason why I don't use KVM is the lack of full snapshot
> functionality. Snapshotting disks is nice, but you end up with an
> unclean-shutdown situation and anything that's not yet committed to disk
> is gone.

I'm not sure what you mean.  When you take a snapshot while the VM is not
shut down, what difference does it make whether you use xen or kvm?

>> Then there's the question how well vnc or spice connections work over a
>> VPN that goes over the internet.
>
> VNC works quite well, as long as you use a minimal desktop. (like blackbox).
> Don't expect KDE or Gnome to be usable.
> I haven't tried Spice yet, but I've read that it performs better.

It's not like you have a choice when you have Windoze VMs.

>> It's not like the employees could get
>> reliable internet connections with sufficient bandwidth, not to mention
>> that the company would have to get one in the first place, which isn't
>> much easier to get, if at all.
>
> That depends on where you are.

In this country, you have to be really lucky to find a place where you
can get a decent internet connection.

> The company could host the servers in a decent datacentre, which should take 
> care of the bandwidth issues.

And give all their data out of hands?  And how much does that cost?

> For the employees, if they want to work from home, it's up to them to ensure 
> they have a reliable connection.

It is just as much the company's problem when it wants the employees to
work from home.  And the employees don't have much of a choice; they can
only get whatever connection is available to them.

>> It might work in theory.  How would it be feasible in practise?
>
> Plenty of companies do it this way. If you d

Re: [gentoo-user] {OT} Allow work from home?

2016-01-18 Thread lee
<waben...@gmail.com> writes:

> lee <l...@yagibdah.de> wrote:
>
>> Rich Freeman <ri...@gentoo.org> writes:
>> 
>> > On Sun, Jan 17, 2016 at 6:38 AM, lee <l...@yagibdah.de> wrote:
>> Suppose you use a VPN connection.  How does the client
>> >> (employee) secure their own network and the machine they're using
>> >> to work remotely then?
>> >
>> > Poorly, most likely.  Your data is probably not nearly as important
>> > to them as their data is, and most people don't take great care of
>> > their own data.
>> 
>> That's not what I meant to ask.  Assume you are an employee supposed
>> to work from home through a VPN connection:  How do you protect your
>> LAN?
>
> Depends on the VPN connection. If you use an OpenVPN client on your PC
> then it is sufficient to use a well configured firewall (ufw, iptables 
> or whatever) on this PC.

The PC would be connected to the LAN, even if only to have an internet
connection for the VPN.  I can only guess: wouldn't that require putting
this PC behind a firewall that separates it from the LAN, to protect the
LAN?

> If you use a VPN gateway then you could 
> configure this gateway (or a firewall behind) in a way that it blocks 
> incoming connections from the VPN tunnel. 

Hm.  I'd prefer to avoid having to run another machine as such a
firewall because electricity is way too expensive here.  And I don't
know if the gateway could be configured in such a way.

> IMHO there is no more risk to use a VPN connection than with any other
> Internet connection.

But it's a double connection, one to the internet, and another one to
another network, so you'd have to somehow manage to set up some sort of
double protection.  Setting up a VPN alone is more than difficult enough
already.



Re: [gentoo-user] {OT} Allow work from home?

2016-01-18 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Sun, Jan 17, 2016 at 7:26 PM, lee <l...@yagibdah.de> wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>>
>>> However, while an RDP-like solution protects you from some types of
>>> attacks, it still leaves you open to many client-side problems like
>>> keylogging.  I don't know any major corporation that lets people RDP
>>> into their applications in general.
>>
>> What do they use instead?
>>
>
> As I mentioned in my previous email - they just hand all their
> employees laptops.  Control the hardware, control the software,
> control the security...

I mean instead of rdp.  It's a simple solution which works really well
on a LAN with Windoze.  What's the equivalent that works with Linux?

I wouldn't try it over an internet connection, though, it requires too
much bandwidth.



Re: [gentoo-user] {OT} Allow work from home?

2016-01-17 Thread lee
"J. Roeleveld"  writes:

> On 17 January 2016 18:35:20 CET, Mick  wrote:

> [...]
>>I use the icaclient provided by Citrix to access my virtual desktop at
>>work, 
>>but have never tried to set up something similar at home.  What
>>opensource 
>>software would I need for this?  Is there a wiki somewhere to follow?
>
> I'd love to do this myself as well.
>
> Citrix sells the full package as 'XenDesktop'. To do it yourself you need a 
> VMserver (Xen or similar) and a remote desktop tool that hooks into the VM 
> display. (Spice or VNC)
>
> Then you need some way of authenticating users and providing access to the 
> client software.
> [...]

You would have a full VM for each user?  That would be a huge waste of
resources, plus having to take care of a lot of VMs, plus having to buy
a lot of Windoze licenses and taking about a week to install the updates
after installing a VM.  Add to that that the xen host goes down at
random time intervals (because the sending queue of the network card
times out for reasons that cannot be determined) which can be as long as
a day, a week or even up to three weeks, and you are likely to become a
rather unhappy administrator.  Try kvm instead, and you'll find that
it's impossible to migrate the VMs from xen to kvm when you want to
use virtio drivers because you can't install them on an existing Windoze
VM.

Then there's the question how well vnc or spice connections work over a
VPN that goes over the internet.  It's not like the employees could get
reliable internet connections with sufficient bandwidth, not to mention
that the company would have to get one in the first place, which isn't
much easier to get, if at all.

It might work in theory.  How would it be feasible in practice?



Re: [gentoo-user] {OT} Allow work from home?

2016-01-17 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Sun, Jan 17, 2016 at 6:38 AM, lee <l...@yagibdah.de> wrote:
>> Suppose you use a VPN connection.  How does the client (employee)
>> secure their own network and the machine they're using to work remotely
>> then?
>
> Poorly, most likely.  Your data is probably not nearly as important to
> them as their data is, and most people don't take great care of their
> own data.

That's not what I meant to ask.  Assume you are an employee supposed to
work from home through a VPN connection:  How do you protect your LAN?


> [...]
>> What's the Linux equivalent of RDP sessions?  Some sort of VNC seems to
>> usually require a lot of bandwidth, and I wouldn't know how to run it as
>> a service so that someone could just start a client (like rdesktop) and
>> log in to the server as they can do with Windoze servers. --- I only
>> found x11rdp which appears to be incompatible with current X servers.
>
> There is stuff like xtogo and other NX-like technologies, but the
> trend seems to be towards client-side rendering which makes them
> perform about as well as VNC.  I mostly gave up on it ages ago - it
> was fairly fragile to keep working as well.  I do know one of the
> maintainers - perhaps it has gotten better in recent years.
>
> However, while an RDP-like solution protects you from some types of
> attacks, it still leaves you open to many client-side problems like
> keylogging.  I don't know any major corporation that lets people RDP
> into their applications in general.

What do they use instead?

This sounds as if it's basically impossible to work from a remote
location, at least when Linux comes into it at some point.

> [...]



Re: [gentoo-user] {OT} Allow work from home?

2016-01-17 Thread lee
Mick  writes:

> On Saturday 16 Jan 2016 09:39:24 Alan McKinnon wrote:
>> On 16/01/2016 06:17, Grant wrote:
>> > I'm considering allowing some employees to work from home but I'm
>> > concerned about the security implications.  Currently everybody shows up
>> > and logs into their locked down Gentoo system and from there is able to
>> > access the company webapps which are restricted to the office IP
>> > address.  I guess I would have to allow webapp access from any IP for
>> > those users and trust that their computer is secure?  Should that not be
>> > scary?
>> > 
>> > - Grant
>> 
>> I have experience in this area. I work at ISPs where working from home
>> is routine and required for overnight standby.
>> 
>> You need a VPN, I'd recommend OpenVPN. It's easy to set up and offers
>> the security levels you need. Use the Layer3 routing option that uses
>> tun drivers (not tap) and issue the certificates to the users yourself.
>> Then allow your servers to accept connections from the VPN range as well
>> as the internal office range
>> 
>> As for the security levels of their personal machines, tell them what
>> you require and from that point on you really have to trust your people
>> so be security aware and with the program.
>
> Some other alternatives and thoughts to solutions already proposed are:
>
> 1.  Only allow access through the office firewall and webapp servers to
> the IP addresses of your employees.  This would only work if your
> employees have static IP addresses and are few in number - otherwise you
> are creating an administrative burden.  I assume that the client
> connection to the webapp server will be over some secure protocol, e.g.
> SSH, SSL/TLS.  Otherwise, you'll need an encrypted tunnel (see below).
>
> 2. Instead of OpenVPN which has been recommended I suggest that you take a 
> look at IPSec with IKEv2.  IPSec + IKEv2 provides higher throughput because 
> encryption/decryption is performed in the kernel, rather than userspace and 
> because it allows for multi-threading, which last time I looked OpenVPN does 
> not.  In addition, IKEv2 employs the MOBIKE protocol which allows mobile 
> client roaming.  Changing client IP addresses is handled automatically, 
> without having to restart manually the VPN session.  All this said, if your 
> use case has low throughput demand then OpenVPN would work fine.  In both 
> cases, use strong encryption.  
>
> 3. If you go with OpenVPN, following Alan's suggestion to use tun instead
> of tap, I should add that if you have deployed MSWindows or other clients
> and services with non-IP protocols, then you'll probably need a tap
> bridge to make sure that all services can get through.  The client
> machines will then become part of your LAN.  Depending on client numbers
> you may need more than one VLAN segment and multiple OpenVPN servers.
>
> 4. An easier and simpler alternative may be to run SSH SOCKS proxy on the 
> server and proxychains on the clients.  Any software run with proxychains on 
> the client will be tunnelled via SSH to the server and from a network 
> perspective will be connected to the office LAN.  Webapps should be able to 
> run quite efficiently in this way and connect to the LAN server.  Public key 
> authentication and an SSH high port should keep pests away.

Suppose you use a VPN connection.  How does the client (employee)
secure their own network and the machine they're using to work remotely
then?

What's the Linux equivalent of RDP sessions?  Some sort of VNC seems to
usually require a lot of bandwidth, and I wouldn't know how to run it as
a service so that someone could just start a client (like rdesktop) and
log in to the server as they can do with Windoze servers. --- I only
found x11rdp which appears to be incompatible with current X servers.

Then there's LTSP.  Leaving aside that there are no thin clients with
sufficient graphics performance:  would it be possible to do that over a
VPN connection, provided that the VPN connection doesn't put the rest of
the network on the client side at risk?

That said, I'm finding OpenVPN anything but easy to set up.  How is
that with IPsec and IKEv2?

Proxychains sounds interesting.  Is it possible to run rdesktop through
that?
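
In principle it should be; a sketch with made-up host names, assuming
proxychains.conf points at 127.0.0.1:1080:

  ssh -N -D 1080 user@office-gateway &   # SOCKS proxy over the SSH tunnel
  proxychains rdesktop ts.office.lan     # rdesktop's TCP goes through it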



Re: [gentoo-user] (Re-) Configuring X11 with two graphic cards (NVidia)

2016-01-17 Thread lee
meino.cra...@gmx.de writes:

> Hi,
>
> previously there were two graphic cards installed in my Gentoo box:
>
> Geforce GT 430
> Geforce GT 560TI
>
> The first was used for desktop purposes only and the second was used
> only by Blender as "render engine".
>
> Then the Geforce GT 560TI went crazy and died and had to change it
> with another one, a Geforce GTX 960.
>
> I grepped through /etc and checked for "560" and similar to find
> things which need to be changed.
>
> Reboot.
>
> Rendering runs now faster, which means that Blender has found its
> new "render engine". But...
>
> The GUI of Blender starts lagging...
>
> The desktop "feels" the same... but I cannot tell whether it is
> handled by the first or the second graphics card, since the 960 may
> be capable of handling both... don't know for sure.
>
> Nvidia settings recognizes both cards...from the thermal readings
> I would guess that the GTX 960 is definitely used for rendering
> purposes...but I think using the desktop will not heat up either
> card...;)
>
> Is there any way to check whether the current setup is working as
> wanted (GT 430 for desktop only, GTX 960 for rendering only) and
> whether the "full power of the GTX 960" is available for rendering?
>
> I cannot get rid of the feeling that I am driving with the brakes on

Doesn't nvidia-settings show to which display the cards sync?

Did you specify which card is to be used by the X server?
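
If not, a Device section keyed on the bus ID would pin that down; the
IDs here are made up, lspci shows the real ones:

  Section "Device"
      Identifier "desktop-card"
      Driver     "nvidia"
      BusID      "PCI:1:0:0"
  EndSection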

Does it hurt to remove the GT 430?  There is worlds of difference
between a GT 430 and a GTX 960.  When you compare these two cards, you
may find that running the 430 isn't worth the electricity it costs,
unless you get it for free or almost free.  You may also get the
impression that your whole desktop, or at least your web browser, is
slow when you go from the 960 back to the 430 :)

I think I'd just remove the 430 and see how it goes.  If you're then
happy with the performance, there's no need to put the 430 back.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-13 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Mon, 11 Jan 2016 23:55:27 +0100, lee wrote:
>
>> > Firstly, things like Flash and Skype are not special cases, they are
>> > widely used and many of us have to use them, whether we like it or
>> > not.  
>> 
>> They are special cases.  Flash never really worked, and when it does,
>> it's pretty much unusable because it's too crappy.  Skype only kinda
>> works and is not usable due to total lack of privacy.
>
> You can call those and other binary-only programs special cases as much
> as you like, and maybe they are for you. But for many people there is no
> real choice but to use them.

Of course it's a choice, no matter whether people make it or not.

>> > Secondly, no one is forcing you to use anything? There is a
>> > no-multilib profile,  
>> 
>> There doesn't seem to be a desktop profile that isn't multilib.
>
> Probably for the reasons I've already suggested, but you don't have to
> use a desktop profile.
>
>> > and nothing to stop you creating a no-multilib version of your
>> > preferred desktop profile if you so wish (the desktop profiles are
>> > basically a different set of default USE flags).  
>> 
>> I wouldn't know how to do that.
>
> emerge --info with the desktop profile to get a list of USE flags, then
> set those same flags on the no-multilib profile. What's so hard?
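
(Concretely, something along these lines; the profile path is a sketch
and may differ, eselect shows the real list:

  emerge --info | grep '^USE='    # note the flags of the desktop profile
  eselect profile list
  eselect profile set default/linux/amd64/13.0/no-multilib
  # then put the desired flags into USE in /etc/portage/make.conf
)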

It is something you need to know before you can do it.  Look at the
instructions on the wiki for installing kde, for example.  They tell you
to use the corresponding profile.  That profile is a multilib profile,
and you cannot switch to a multilib profile from a non-multilib one.  Of
course, I choose a non-multilib profile when I install.

>> In any case, the default is simply wrong.
>
> Of course it is, it's a default, it can't be right for everyone. That's
> why it is a default starting point, not an enforced setting. If you don't
> want to move away from the defaults, what are you using Gentoo?

Ok, it's not only wrong, it's badly made.  Who knows what all a profile
does.

>> > Multilib should be going away in due time. Until then you have two
>> > courses of action: complain about it or use a no-multilib profile
>> > with your preferred flags. Only one of those choices has any real
>> > benefit.  
>> 
>> There is no non-multilib profile one could use when they want a desktop
>> profile.  Perhaps multilib goes away in 20 years or so, or never.  That
>> doesn't help.
>
> However long it takes, the timescale will not be altered by one second by
> any amount of complaining in here.

Well, I suppose you have no idea how awfully stupid it is to respond to
criticism, suggestions, or questions by claiming that someone is merely
complaining.



Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Tue, Jan 5, 2016 at 5:16 PM, lee <l...@yagibdah.de> wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>>
>>>
>>> I would run btrfs on bare partitions and use btrfs's raid1
>>> capabilities.  You're almost certainly going to get better
>>> performance, and you get more data integrity features.
>>
>> That would require me to set up software raid with mdadm as well, for
>> the swap partition.
>
> Correct, if you don't want a panic if a single swap drive fails.
>
>>
>>> If you have a silent corruption with mdadm doing the raid1 then btrfs
>>> will happily warn you of your problem and you're going to have a
>>> really hard time fixing it,
>>
>> BTW, what do you do when you have silent corruption on a swap partition?
>> Is that possible, or does swapping use its own checksums?
>
> If the kernel pages in data from the good mirror, nothing happens.  If
> the kernel pages in data from the bad mirror, then whatever data
> happens to be there is what will get loaded and used and/or executed.
> If you're lucky the modified data will be part of unused heap or
> something.  If not, well, just about anything could happen.
>
> Nothing in this scenario will check that the data is correct, except
> for a forced scrub of the disks.  A scrub would probably detect the
> error, but I don't think mdadm has any ability to recover it.  Your
> best bet is probably to try to immediately reboot and save what you
> can, or a less-risky solution assuming you don't have anything
> critical in RAM is to just do an immediate hard reset so that there is
> no risk of bad data getting swapped in and overwriting good data on
> your normal filesystems.

Then you might be better off with no swap unless you put it on a file
system that uses checksums.

>> It's still odd.  I already have two different file systems and the
>> overhead of one kind of software raid while I would rather stick to one
>> file system.  With btrfs, I'd still have two different file systems ---
>> plus mdadm and the overhead of three different kinds of software raid.
>
> I'm not sure why you'd need two different filesystems.

btrfs and zfs

I won't put my data on btrfs for at least quite a while.

> Just btrfs for your data.  I'm not sure where you're counting three
> types of software raid either - you just have your swap.

btrfs raid is software raid, zfs raid is software raid, mdadm is
software raid.  That makes three different software raids.

> And I don't think any of this involves any significant overhead, other
> than configuration.

mdadm does have a very significant performance overhead.  ZFS mirror
performance seems to be rather poor.  I don't know how much overhead is
involved with zfs and btrfs software raid, yet since they basically all
do the same thing, I have my doubts that the overhead is significantly
lower than the overhead of mdadm.

>> How would it be so much better to triple the software raids and to still
>> have the same number of file systems?
>
> Well, the difference would be more data integrity insofar as hardware
> failure goes, but certainly more risk of logical errors (IMO).

There would be a possibility for more data integrity for the root file
system, assuming that btrfs is as reliable as ext4 on hardware raid.  Is
it?

That's about 10GB, mostly read and not written to.  It would be a
very minor improvement, if any.

>>>> When you use hardware raid, it
>>>> can be disadvantageous compared to btrfs-raid --- and when you use it
>>>> anyway, things are suddenly much more straightforward because everything
>>>> is on raid to begin with.
>>>
>>> I'd stick with mdadm.  You're never going to run mixed
>>> btrfs/hardware-raid on a single drive,
>>
>> A single disk doesn't make for a raid.
>
> You misunderstood my statement.  If you have two drives, you can't run
> both hardware raid and btrfs raid across them.  Hardware raid setups
> don't generally support running across only part of a drive, and in
> this setup you'd have to run hardware raid on part of each of two
> single drives.

I have two drives to hold the root file system and the swap space.  The
raid controller they'd be connected to does not support using disks
partially.

>>> and the only time I'd consider
>>> hardware raid is with a high quality raid card.  You'd still have to
>>> convince me not to use mdadm even if I had one of those lying around.
>>
>> From my own experience, I can tell you that mdadm already does have
>> significant overhead when you use a raid1 of two disks and a raid5 with
>> three disks.  This overhead may

Re: [gentoo-user] Nouveau blank screen

2016-01-12 Thread lee
Håkon Alstadheim  writes:

> I have an old but good graphics card, "NVIDIA Corporation GT200GL
> [Quadro FX 3800]". The proprietary driver is EOL, not supported after
> kernel 3.14.*, so I'd like to switch to nouveau. I'm having trouble
> getting nouveau to work at all, it is giving me a blank screen and
> apparently not grabbing my keyboard (ctrl:swapcaps has no effect).
>
> Nothing stands out as errors in Xorg.0.log, same errors are both under
> nvidia and nouveau, but nvidia gives me a useable desktop. Both seem
> to detect my monitor (benq) correctly.
>
> ---
> $ grep '(EE)' Xorg.0.log.nvidia Xorg.0.log.nouveau | grep -v '(WW)'
> Xorg.0.log.nvidia:[39.193] (EE) systemd-logind: failed to get
> session: PID 2112 does not belong to any known session
> Xorg.0.log.nouveau:[35.428] (EE) systemd-logind: failed to get
> session: PID 2167 does not belong to any known session
> Xorg.0.log.nouveau:[37.322] (EE) NOUVEAU(0): [COPY] failed to
> allocate class.
> ---
> The PID belongs to /usr/bin/X, see below.
> ---
> I'm running gentoo-sources-4.3.3 kernel with experimental feature to
> select Haswell architecture. The host is a virtual machine running
> under app-emulation/xen-4.6.0-r6. Driver is
> x11-drivers/xf86-video-nouveau-1.0.11, use-flag glamor enabled.

Have you passed the graphics card through to the VM?

Is the user trying to run the X server in the video group?

Systemd appears to complicate things greatly.  Have you tried to use
startx?

Can you attach x11vnc to the session (assuming that one does exist) to
see what would be on the screen from a remote machine?



Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Neil Bothwick  writes:

> On Tue, 5 Jan 2016 18:22:59 -0500, Rich Freeman wrote:
>
>> > There's no need to use RAID for swap, it's not like it contains
>> > anything of permanent importance. Create a swap partition on each
>> > disk and let the kernel use the space as it wants.  
>> 
>> So, while I tend not to run swap on RAID, it isn't an uncommon
>> approach because if you don't put swap on raid and you have a drive
>> failure while the system is running, then you are likely to have a
>> kernel panic.  Since one of the main goals of RAID is availability, it
>> is logical to put swap on RAID.
>
> That's a point I hadn't considered, but I think I'll leave things as they
> are for now. I have three drives with a swap partition on each. My system
> uses very little swap as it is, so the chances of one of those drives
> failing exactly when something is using that particular drive are pretty
> small. There's probably more chance of my winning the lottery...

It seems far more likely for a drive to fail when it is used than when
it is not used.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-12 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Mon, 11 Jan 2016 08:25:05 +0100, lee wrote:
>

> [...]
>> 
>> That there are a few special cases for which some people still need it
>> doesn't mean that everyone should be forced to use a multilib profile
>> when 100% of the software they're running is 64bit.
>
> Firstly, things like Flash and Skype are not special cases, they are
> widely used and many of us have to use them, whether we like it or not.

They are special cases.  Flash never really worked, and when it does,
it's pretty much unusable because it's too crappy.  Skype only kinda
works and is not usable due to total lack of privacy.

> Secondly, no one is forcing you to use anything? There is a no-multilib
> profile,

There doesn't seem to be a desktop profile that isn't multilib.

> and nothing to stop you creating a no-multilib version of your
> preferred desktop profile if you so wish (the desktop profiles are
> basically a different set of default USE flags).

I wouldn't know how to do that.

In any case, the default is simply wrong.

> Multilib should be going away in due time. Until then you have two courses
> of action: complain about it or use a no-multilib profile with your
> preferred flags. Only one of those choices has any real benefit.

There is no non-multilib profile one could use when they want a desktop
profile.  Perhaps multilib goes away in 20 years or so, or never.  That
doesn't help.



Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Tue, 05 Jan 2016 23:16:48 +0100, lee wrote:
>
>> > I would run btrfs on bare partitions and use btrfs's raid1
>> > capabilities.  You're almost certainly going to get better
>> > performance, and you get more data integrity features.  
>> 
>> That would require me to set up software raid with mdadm as well, for
>> the swap partition.
>
> There's no need to use RAID for swap, it's not like it contains anything
> of permanent importance. Create a swap partition on each disk and let
> the kernel use the space as it wants.
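
(In /etc/fstab that is simply this, with made-up device names; equal
priorities make the kernel stripe across the disks:

  /dev/sda2  none  swap  sw,pri=5  0 0
  /dev/sdb2  none  swap  sw,pri=5  0 0
  /dev/sdc2  none  swap  sw,pri=5  0 0
)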

When a disk that a swap partition is on fails, the system is likely to
go down.  Raid is not a replacement for backups.

>> The relevant advantage of btrfs is being able to make snapshots.  Is
>> that worth all the (potential) trouble?  Snapshots are worthless when
>> the file system destroys them with the rest of the data.
>
> You forgot the data checksumming.

Not at all, I'm seeing it as an advantage, especially when you want to
store large amounts of data.  Since I don't trust btrfs with that, I'm
using ZFS.

A system partition of 50 or 60GB --- of which about 10GB are used --- is
not exactly storing large amounts of data, and the data on it doesn't
change much.  In this application, checksums would still be a benefit,
yet a rather small one.  So as I said, the /relevant/ advantage of btrfs
is being able to make snapshots.  And that isn't worth the trouble.

> If you use hardware RAID then btrfs
> only sees a single disk. It can still warn you of corrupt data but it
> cannot fix it because it only has the one copy.

or it corrupts the data itself ;)

>> Well, then they need to make special provisions for swap files in btrfs
>> so that we can finally get rid of the swap partitions.
>
> I think there are more important priorities, it's not like having a swap
> partition or two is a hardship or limitation.

Still needing swap partitions and removing the option to use swap files
instead simply defeats the purpose of btrfs and makes it significantly
harder to use.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-10 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Fri, 08 Jan 2016 21:37:55 +0100, lee wrote:
>
>> > What about things like flash plugins? Those are often wanted on
>> > desktops and need multilib.  
>> 
>> Flash sucks, and fortunately, it's dead.
>
> It should be, but it's not. There are still many sites that require it.

It's not my problem when they are still using it.

>> 64bit should be the default for all profiles, with the option to add
>> 32bit support in case you need it.  Which parts of gnome, kde or another
>> IDE don't compile as 64bit?
>
> That's where the ABI_* stuff comes in, which I believe should replace
> multilib eventually. The problem is not software you compile, it is
> precompiled software. If you don't like flash, here's another example
> that you probably hate but is needed by a lot of people - Skype.

Skype sucks --- and it's not usable at all because there is no privacy
whatsoever.  They will listen in and record whatever they like.

You may also have some ancient games you might want to play, or you can
have an old computer that doesn't do 64bit.  And if you do want to use
32bit software or an old computer, nothing would prevent you from
selecting a profile that gives you 32bit support, or you can have
everything in 32bit.

That there are a few special cases for which some people still need it
doesn't mean that everyone should be forced to use a multilib profile
when 100% of the software they're running is 64bit.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-08 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Tue, 05 Jan 2016 18:41:25 +0100, lee wrote:
>
>> > Try Neil's suggestion of using a chroot and NFS exporting. I use it
>> > here to good effect.  
>> 
>> I must be missing his posting?
>
> It's in the "QEMU/distcc combination question" thread, among other places.

Thanks, I'll take a look at that.

>> Of course, I installed the client no-multilib and found that gnome
>> cannot be installed on no-multilib, so I had to reinstall.  Then I
>> decided to use KDE because I don't want systemd, and installing gnome
>> without it is a PITA.
>> 
>> However, I don't see why gnome --- or anything else --- should require
>> multilib.  It's not like I'd want to run anything 32bit.
>
> What about things like flash plugins? Those are often wanted on desktops
> and need multilib.

Flash sucks, and fortunately, it's dead.

Last time I looked, years ago, there was a 64bit version and it was
announced that no new versions will be published.


64bit should be the default for all profiles, with the option to add
32bit support in case you need it.  Which parts of gnome, kde or another
IDE don't compile as 64bit?



Re: [gentoo-user] KDE5 stuff and media-libs/mlt conflict

2016-01-05 Thread lee
Peter Humphrey  writes:

> On Tuesday 05 January 2016 04:26:20 Dale wrote:
>> Howdy,
>> 
>> I was going to try out some of the KDE5 stuff just to see if I'm going
>> to like it or not.  Anyway, I added a BUNCH of stuff to a keywords file
>> related to KDE and got past that part.  I think I got them all.  Now I
>> get this:
>
> --->8
>
> I was afraid to mess up my KDE4 system with KDE5 in some sort of parallel 
> fashion, so I found some spare disk space and installed KDE5 into it, using 
> the kde overlay and the .../desktop/plasma profile. It's much easier to 

How did you manage to install that?  I tried it yesterday (without the
overlays) and found it impossible to resolve the dependency problems, so
I ended up installing the 'normal' kde (I'm running out of time).  I'd
rather switch that over to plasma from the beginning.
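
(Adding the overlay itself should just be, assuming layman is installed:

# layman -a kde

Sorting out the keywords and dependencies afterwards is the hard part.)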



Re: [gentoo-user] KDE5 stuff and media-libs/mlt conflict

2016-01-05 Thread lee
Peter Humphrey <pe...@prh.myzen.co.uk> writes:

> On Tuesday 05 January 2016 14:34:38 lee wrote:
>> Peter Humphrey <pe...@prh.myzen.co.uk> writes:
>> > On Tuesday 05 January 2016 04:26:20 Dale wrote:
>> >> Howdy,
>> >> 
>> >> I was going to try out some of the KDE5 stuff just to see if I'm going
>> >> to like it or not.  Anyway, I added a BUNCH of stuff to a keywords file
>> >> related to KDE and got past that part.  I think I got them all.  Now I
>> > 
>> >> get this:
>> > --->8
>> > 
>> > I was afraid to mess up my KDE4 system with KDE5 in some sort of
>> > parallel
>> > fashion, so I found some spare disk space and installed KDE5 into it,
>> > using the kde overlay and the .../desktop/plasma profile. It's much
>> > easier to
>> How did you manage to install that?  I tried it yesterday (without the
>> overlays) and found it impossible to resolve the dependency problems, so
>> I ended up installing the 'normal' kde (I'm running out of time).  I'd
>> rather switch that over to plasma from the beginning.
>
> I found I needed the overlay. It greatly simplifies the dependencies. It 
> still isn't perfect, as changes occur during development, but it's definitely 
> well worth having.
>
> Here's my current package.keywords:
> [...]
>
> I've no doubt that after the next overlay update I'll have to add or remove 
> some things.

It seemed to me that it's daring to try this because it's a work in
progress, and with the overlay, it seemed to be even worse.  So I went
with the "normal" kde when I found out that plasma doesn't work.
That's already painful to install.  Install on a slow machine, and you
can't even try much simply because it takes too long.

Should I try to upgrade?  That might take another day, which I don't
have ...



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
Peter Humphrey <pe...@prh.myzen.co.uk> writes:

> On Tuesday 05 January 2016 14:18:12 lee wrote:
>
>> Is there a way to offload the preprocessing to the server, and can
>> compiling on localhost be avoided as much as possible somehow?
>
> Try Neil's suggestion of using a chroot and NFS exporting. I use it here to 
> good effect.

I must be missing his posting?

So export / via NFS, mount it on the server and chroot into it?  That
might not work because of the stupid multilib requirement:  The server
is no-multilib, the client is not.
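
In outline, I suppose that amounts to something like this (a sketch;
'buildhost' is a placeholder and the export options are only one
plausible choice):

client    # echo '/ buildhost(rw,no_root_squash,sync,no_subtree_check)' >> /etc/exports
client    # exportfs -ra
buildhost # mount -t nfs client:/ /mnt/clientroot
buildhost # chroot /mnt/clientroot /bin/bash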

Of course, I installed the client no-multilib and found that gnome
cannot be installed on no-multilib, so I had to reinstall.  Then I
decided to use KDE because I don't want systemd, and installing gnome
without it is a PITA.

However, I don't see why gnome --- or anything else --- should require
multilib.  It's not like I'd want to run anything 32bit.

Why are there no no-multilib profiles when you need a desktop profile?

It has taken two days now to install Gentoo, and it's still not finished
...



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
Jeremi Piotrowski <jeremi.piotrow...@gmail.com> writes:

> On Tue, Jan 5, 2016 at 12:49 PM, lee <l...@yagibdah.de> wrote:
>> The gui monitor doesn't seem to exist.
>
> Recompile distcc with the gtk use flag.

Oh, I thought that was an extra package ...
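
For anyone else wondering, that would be (a sketch):

# echo 'sys-devel/distcc gtk' >> /etc/portage/package.use/distcc
# emerge --ask --oneshot sys-devel/distcc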



Re: [gentoo-user] snapshots?

2016-01-05 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Fri, Jan 1, 2016 at 5:42 AM, lee <l...@yagibdah.de> wrote:
>> "Stefan G. Weichinger" <li...@xunil.at> writes:
>>
>>> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>>>
>>> The general recommendation now is to stay at level-1 for now. That fits
>>> your 2-disk-situation.
>>
>> Well, what shows better performance?  No btrfs-raid on hardware raid or
>> btrfs raid on JBOD?
>
> I would run btrfs on bare partitions and use btrfs's raid1
> capabilities.  You're almost certainly going to get better
> performance, and you get more data integrity features.

That would require me to set up software raid with mdadm as well, for
the swap partition.
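
That part at least is small --- a sketch, assuming sda2 and sdb2 are the
swap partitions:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# mkswap /dev/md0 && swapon /dev/md0

It's still another layer, though.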

> If you have a silent corruption with mdadm doing the raid1 then btrfs
> will happily warn you of your problem and you're going to have a
> really hard time fixing it,

BTW, what do you do when you have silent corruption on a swap partition?
Is that possible, or does swapping use its own checksums?

> [...]
>
>>>
>>> I would avoid converting and stuff.
>>>
>>> Why not try a fresh install on the new disks with btrfs?
>>
>> Why would I want to spend another year to get back to where I'm now?
>
> I wouldn't do a fresh install.  I'd just set up btrfs on the new disks
> and copy your data over (preserving attributes/etc).

That was the idea.

> I wouldn't do an in-place ext4->btrfs conversion.  I know that there
> were some regressions in that feature recently and I'm not sure where
> it stands right now.

That adds to the uncertainty of btrfs.


> [...]
>>
>> There you go, you end up with an odd setup.  I don't like /boot
>> partitions.  As well as swap partitions, they need to be on raid.  So
>> unless you use hardware raid, you end up with mdadm /and/ btrfs /and/
>> perhaps ext4, /and/ multiple partitions.
>
> [...]
> There isn't really anything painful about that setup though.

It's still odd.  I already have two different file systems and the
overhead of one kind of software raid while I would rather stick to one
file system.  With btrfs, I'd still have two different file systems ---
plus mdadm and the overhead of three different kinds of software raid.

How would it be so much better to triple the software raids and to still
have the same number of file systems?

>> When you use hardware raid, it
>> can be disadvantageous compared to btrfs-raid --- and when you use it
>> anyway, things are suddenly much more straightforward because everything
>> is on raid to begin with.
>
> I'd stick with mdadm.  You're never going to run mixed
> btrfs/hardware-raid on a single drive,

A single disk doesn't make for a raid.

> and the only time I'd consider
> hardware raid is with a high quality raid card.  You'd still have to
> convince me not to use mdadm even if I had one of those lying around.

From my own experience, I can tell you that mdadm already does have
significant overhead when you use a raid1 of two disks and a raid5 with
three disks.  This overhead may be somewhat due to the SATA controller
not being as capable as one would expect --- yet that doesn't matter
because one thing you're looking at, besides reliability, is the overall
performance.  And the overall performance very noticeably increased when
I migrated from mdadm raids to hardware raids, with the same disks and
the same hardware, except that the raid card was added.

And that was only 5 disks.  I also know that the performance with a ZFS
mirror with two disks was disappointingly poor.  Those disks aren't
exactly fast, but still.  I haven't tested yet if it changed after
adding 4 mirrored disks to the pool.  And I know that the performance of
another hardware raid5 with 6 disks was very good.

Thus I'm not convinced that software raid is the way to go.  I wish they
would make hardware ZFS (or btrfs, if it ever becomes reliable)
controllers.

Now consider:


+ candidates for hardware raid are two small disks (72GB each)
+ data on those is either mostly read, or temporary/cache-like
+ this setup works without any issues for over a year now
+ using btrfs would triple the software raids used
+ btrfs is uncertain, reliability questionable
+ mdadm would have to be added as another layer of complexity
+ the disks are SAS disks, genuinely made to be run in a hardware raid
+ the setup with hardware raid is straightforward and simple, the setup
  with btrfs is anything but


The relevant advantage of btrfs is being able to make snapshots.  Is
that worth all the (potential) trouble?  Snapshots are worthless when
the file system destroys them with the rest of the data.

> [...]
>> How's btrfs's performance when you use swap files instead of swap
>> partitions to avoi

Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
Frank Steinmetzger  writes:

> On Mon, Jan 04, 2016 at 04:38:56PM -0600, Dale wrote:
>
>> >>> what's taking so long when emerging packages despite distcc being used?
>> >>> […]
>> >>> Some compilations are being run on the remote machine, so distcc does
>> >>> work.  The log file on the remote machine shows compilation times of a
>> >>> few milliseconds up to about 1.5 seconds at most.  The distcc server
>> >>> would be finished with the emerging within maybe 15 minutes, and the
>> >>> client takes several hours already.
>> >>>
>> >>> Is there something going wrong?  Is there a way to speed things up as
>> >>> much as I would expect from using distcc?
>> > […]
>> > Can it be that the client is simply too slow compared to the server to
>> > give it any significant load?  (The client isn't exactly slow; it's slow
>> > compared to the server.)
>>
>> Once a really long time ago I tried doing this sort of thing.  What I
>> found is that the network speed between the two systems was what was
>> slowing it down.  It just couldn't transfer the data back and forth fast
>> enough.  I had a network card that really didn't have any good drivers
>> for it.  Anyway, it may not be your problem but it may be worth looking
>> at to be sure.  Using iftop or some similar tool should tell you
>> something.
>
> Well I’m using distcc over WiFi which gives me shy of 2 MB per second (only
> the big PC which acts as server is connected to the router via cable). For
> such cases I recommend using compression. It definitely increased throughput.

Wireless is a bad crutch which is only useful when it's entirely
impossible to use a cable.  I'd recommend using a cable, especially in
this case where the CPU is already compiling so slowly.
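
For reference, compression is a per-host option in distcc's host list,
and as far as I know pump mode requires both ',cpp' and ',lzo'
('buildhost' and the job limit are placeholders):

DISTCC_HOSTS="buildhost/8,lzo"       # plain mode, LZO-compressed
DISTCC_HOSTS="buildhost/8,cpp,lzo"   # pump mode needs cpp and lzo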

Dale, thanks for the suggestion --- the network is fine and transfers
about 100MB/sec+.

> What I observe on my setup, though, is that sometimes a package builds with
> distcc, and then all of a sudden I get (the meaning of) “distributing via
> distcc failed, building locally” and after a while it works again. No idea
> what’s going on there.

The server might be busy, or it's not possible to compile remotely.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
<waben...@gmail.com> writes:

> lee <l...@yagibdah.de> wrote:
>
>> <waben...@gmail.com> writes:
>> 
>> > lee <l...@yagibdah.de> wrote:
>> >
>> >> Hi,
>> >> 
>> >> what's taking so long when emerging packages despite distcc being
>> >> used?
> [...]
>> Can it be that the client is simply too slow compared to the server to
>> give it any significant load?  (The client isn't exactly slow; it's
>> slow compared to the server.)
>
> I used a pentium 4 laptop as client and two phenom2 quadcore pc as 
> server. I don't remember the settings that I used but I think it
> was something about -j10 or so.
>
> When I compiled large programs, the load count of the servers was
> high most of the time and they were very busy with compiling. Only
> at linking time they were waiting for new data.
> Compilation time was much lower than without distcc.

The load average only goes high on the client.  The server is too fast
to notice.

> However when I compiled small programs, the benefit of distcc was 
> very small or even null. Also compilation time of OpenOffice was
> very long, because of the -j1 setting in the ebuild.

I haven't emerged libreoffice yet --- that might take very long.

> I don't know the reason of your problem. Maybe you should try it
> without pump mode to see if this makes a difference.

Hm, that's worth a try.

> Have you used distccmon to see what happens while compiling? IIRC
> it shows you exactly what's going on at each host (preprocessing,
> compiling, waiting). Maybe this will bring some light into the 
> whole thing.

It doesn't show anything but blank lines --- it might compile too fast
for anything to show up.  I can see in /var/log/messages that things are
being compiled, like this:


distccd[29727]: (dcc_job_summary) client: 192.168.3.33:38604 COMPILE_OK
exit:0 sig:0 core:0 ret:0 time:372ms x86_64-pc-linux-gnu-g++ drvvtk.cpp


The gui monitor doesn't seem to exist.



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
 writes:

> Frank Steinmetzger  wrote:
>
>> On Mon, Jan 04, 2016 at 09:48:42PM +0100, waben...@gmail.com wrote:
>> 
>> > P.S.: distccmon is a good tool to watch the compilation processes.
>> 
>> I never got it to display anything. I just tried it again: synced
>> portage and ran a world update -- 16 Packages, among them
>> kdevplatform, a lengthy Qt package (which by the way is one of those
>> who benefit greatly from compression if distcc’ed over a slow
>> network).
>> 
>> At no time during building did I see any activity in distccmon-gui. I
>> started it on both client and server and as my own user as well as
>> root. Nada. Can you give a suggestion? Thanks.
>> 
>
> I remembered something:
>
> It is important to use the same value for the DISTCC_DIR environment 
> variable as the user running the client and that this directory is 
> readable by the user that is running distccmon. 

Hm.  Are you saying you can run it only on the client?

Oh, I can see it now!  Preprocessing seems to be done on localhost only,
and some compilation, too.  Some is compiled on the server.
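
For reference, emerge runs with portage's own DISTCC_DIR, so the
monitor apparently has to be pointed at the same directory --- a sketch,
assuming the default portage tmpdir:

# DISTCC_DIR=/var/tmp/portage/.distcc distccmon-text 2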

I tried to remove 'distcc' and leaving only 'distcc-pump' in make.conf
to force preprocessing to the server.  With that, nothing shows up.

Is there a way to offload the preprocessing to the server, and can
compiling on localhost be avoided as much as possible somehow?



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
Frank Steinmetzger  writes:

> On Mon, Jan 04, 2016 at 09:48:42PM +0100, waben...@gmail.com wrote:
>
>> P.S.: distccmon is a good tool to watch the compilation processes.
>
> I never got it to display anything. I just tried it again: synced portage
> and ran a world update -- 16 Packages, among them kdevplatform, a lengthy
> Qt package (which by the way is one of those who benefit greatly from
> compression if distcc’ed over a slow network).
>
> At no time during building did I see any activity in distccmon-gui. I
> started it on both client and server and as my own user as well as root.
> Nada. Can you give a suggestion? Thanks.

I've set log level to 'notice' and can see messages in
/var/log/messages.  distccmon-gui doesn't seem to exist, and nothing
shows up with distccmon-text.
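
For the server side, that log level is a daemon option --- a sketch
using Gentoo's conf.d file, per the distccd man page:

# echo 'DISTCCD_OPTS="${DISTCCD_OPTS} --log-level notice"' >> /etc/conf.d/distccd
# /etc/init.d/distccd restart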



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-05 Thread lee
 writes:

>  wrote:
>
>> 
>> I used a pentium 4 laptop as client and two phenom2 quadcore pc as 
>> server. I don't remember the settings that I used but I think it
>> was something about -j10 or so.
>
> Sorry, I think it was about -j16 (twice the total amount of CPUs).

The wiki says to use twice the number of CPUs plus 1.  That's 57 here.  That should be
fast.  I could add more servers and bring it up to -j81, but since the
server is pretty much idle, that won't help.



[gentoo-user] emerging with distcc: What's taking so long?

2016-01-04 Thread lee
Hi,

what's taking so long when emerging packages despite distcc being used?

I have disallowed compiling on the local machine (which is the one
emerge is running on) through distcc settings because the local machine
is relatively slow.  Yet I can see some gcc processes running on the
local machine, and emerging goes painfully slow.  Using distcc doesn't
seem to make it any faster, though disabling local compiling seems to
help a bit.

Some compilations are being run on the remote machine, so distcc does
work.  The log file on the remote machine shows compilation times of a
few milliseconds up to about 1.5 seconds at most.  The distcc server
would be finished with the emerging within maybe 15 minutes, and the
client takes several hours already.

Is there something going wrong?  Is there a way to speed things up as
much as I would expect from using distcc?



Re: [gentoo-user] emerging with distcc: What's taking so long?

2016-01-04 Thread lee
<waben...@gmail.com> writes:

> lee <l...@yagibdah.de> wrote:
>
>> Hi,
>> 
>> what's taking so long when emerging packages despite distcc being used?
>> 
>> I have disallowed compiling on the local machine (which is the one
>> emerge is running on) through distcc settings because the local
>> machine is relatively slow.  Yet I can see some gcc processes running
>> on the local machine, and emerging goes painfully slow.  Using distcc
>> doesn't seem to make it any faster, though disabling local compiling
>> seems to help a bit.
>> 
>> Some compilations are being run on the remote machine, so distcc does
>> work.  The log file on the remote machine shows compilation times of a
>> few milliseconds up to about 1.5 seconds at most.  The distcc server
>> would be finished with the emerging within maybe 15 minutes, and the
>> client takes several hours already.
>> 
>> Is there something going wrong?  Is there a way to speed things up as
>> much as I would expect from using distcc?
>
> You can try pump mode. Preprocessing is then done on the remote server.
> Depending on your hardware, this could be faster.
>
> But read carefully the  manpages of pump and distcc before you use it. 
> There are some restrictions you should be aware of.

I followed the instructions on the wiki which suggest to have
'FEATURES="distcc distcc-pump"' in make.conf and give instructions how
to set the CPUs.
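
In make.conf terms, roughly (a sketch following those instructions; the
host name and job counts are placeholders):

FEATURES="distcc distcc-pump"
MAKEOPTS="-j57"

plus the host list, e.g.:

# distcc-config --set-hosts "buildhost/28,cpp,lzo"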

> You can also try to optimize the number of concurrent compile processes 
> (-j). Watching the load counts of your client and server(s) will help
> you to find out the best value.

Using -j doesn't really help.  The server is pretty much idling --- or
you could say waiting for stuff to compile --- while the client
progresses awfully slowly and isn't overloaded with compilation
processes.  If the server got more load, emerging could be much
much faster.

Can it be that the client is simply too slow compared to the server to
give it any significant load?  (The client isn't exactly slow; it's slow
compared to the server.)



Re: [gentoo-user] Re: Gcc 5.3

2016-01-01 Thread lee
Paul Colquhoun <paul...@andor.dropbear.id.au> writes:

> On Wed, 30 Dec 2015 17:32:44 lee wrote:
>> Neil Bothwick <n...@digimed.co.uk> writes:
>> > On Tue, 29 Dec 2015 19:21:01 +0100, lee wrote:

> [...]
>> >> So if I'd never explicitly update everything but run emerge --sync
>> >> frequently, things would be updated over time, occasionally?
>> > 
>> > No, nothing would get updated. To do that you need to run emerge @world
>> > after emerge --sync.
>> 
>> Well, yes, but what if I want to install a package that hasn't been
>> installed yet, or re-emerge an installed package with different USE
>> flags, after updating the portage tree?  Will a more recent version be
>> installed than would have been installed before the tree was updated,
>> maybe updating other packages to more recent versions because they are
>> needed for the new package?
>
>
> You have a couple of options.
>
> First, start with "emerge -p whatever" and see what update would happen with 
> no adjustments.
>
> Then try again, but specify the version you want and see if that works: 
> "emerge -p =whatever-1.2.3"
>
> If it is still trying to install updated versions of libraries or other 
> dependencies, make a file like /etc/portage/package.mask/whatever and block 
> anything higher than the library/dependency versions you already have.
>
> A bit more work, but probably not much.
>
> However, if you get too far behind, the versions you want may have been 
> removed from the portage tree. This is still not a deal breaker. Old ebuilds 
> are available from the Gentoo attic at 
> https://sources.gentoo.org/cgi-bin/viewvc.cgi and can be installed in a local 
> overlay. (I put mine in 
> /usr/local/portage). Just put "PORTDIR_OVERLAY=/usr/local/portage" into 
> /etc/portage/make.conf and you should be set.
>
> You could also use the local overlay to just add the updated ebuilds for 
> things you do want to upgrade (and required dependency upgrades, etc) but I 
> think that would quickly become very unwieldy.

Thank you for the explanation.

I've installed gcc 5.2.0 and am running into trouble when trying to
compile the test application.  That just won't work.

It also runs out of memory too easily.

OTOH, I've compiled a kernel with it (unless the compilation somehow
picks a different version automatically), and it works fine.

>> > Exactly, run gcc-config, compile/emerge the program, run gcc-config again.
>> 
>> And what about ccache?  Will it use the new version automatically and
>> detect that the compiler version has changed so that files in the cache
>> need to be recompiled?

To answer my own question:  That also works without any further ado ---
presumably because ccache includes the compiler's identity (by default
its mtime and size) in the hash, so switching versions just produces
cache misses instead of stale hits.



Re: [gentoo-user] Kmail2 - I have not given up ... yet

2016-01-01 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Wednesday, December 30, 2015 09:32:55 PM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > On Tuesday, December 29, 2015 08:03:25 PM Mick wrote:
>> >> On Tuesday 29 Dec 2015 17:37:25 lee wrote:
>> >> > Are we at the point where users are accepting to have to install and
>> >> > maintain a fully fledged RDBMS just for a single application which
>> >> > doesn't even need a database in the first place?
>> >> 
>> >> Yes, a sad state of affairs indeed.  I was hoping for the last 5-6 years
>> >> that someone  who can code would come to their senses with this
>> >> application
>> >> and agree that not all desktop application use cases require some
>> >> enterprise level database back end architecture, when a few flat data
>> >> files
>> >> have served most users perfectly fine for years.  I mean, do I *really*
>> >> need a database for less that 60 entries in my address book?!!
>> > 
>> > I'm no longer convinced a database isn't needed.
>> > Kmail1 was slower than kmail2 is these days.
>> 
>> We are talking here about a single application.  Are users nowadays
>> generally willing, inclined and in the position to deploy a RDBMS just
>> in order to use a single application?  Can they be expected to run
>> several RDBMSs when the next application comes along and suggests mysql
>> instead of postgresql?
>
> Most applications use a database of one type or another.
> Flatfiles are a bad idea when performance is important with large datasets.

Then why don't they all use postgresql or mysql?  It might then make
sense to install either of them.

> My email is a large dataset.

Not every large dataset is suited to be stored in a database like mysql
or postgresql.  That's particularly true for email.

>> Ironically, in this case you require the RDBMS to be able to use an
>> application which is too unstable to be used even without one.  Why not
>> use a better application for the same purpose instead?  You wouldn't
>> have to worry about your emails then.
>
> I don't worry about my emails.
> I find kmail2 to be more stable and usable then kmail1.

I'm surprised you're not worried when it seems not unusual that kmail
becomes unstable and even randomly deletes email.



Re: [gentoo-user] snapshots?

2016-01-01 Thread lee
"Stefan G. Weichinger" <li...@xunil.at> writes:

> On 12/30/2015 10:14 PM, lee wrote:
>> Hi,
>> 
>> soon I'll be replacing the system disks and will copy over the existing
>> system to the new disks.  I'm wondering how much merit there would be in
>> being able to make snapshots to be able to revert back to a previous
>> state when updating software or when installing packages to just try
>> them out.
>> 
>> To be able to make snapshots, I could use btrfs on the new disks.  When
>> using btrfs, I could use the hardware RAID-1 as I do now, or I could use
>> the raid features of btrfs instead to create a RAID-1.
>> 
>> 
>> Is it worthwhile to use btrfs?
>
> Yes.
>
> ;-)
>
>> Am I going to run into problems when trying to boot from the new disks
>> when I use btrfs?
>
> Yes.
>
> ;-)
>
> well ... maybe.
>
> prepare for some learning curve. but it is worth it!

So how does that go?  Having trouble to boot is something I really don't
need.

>> Am I better off using the hardware raid or software raid if I use btrfs?
>
> I would be picky here and separate "software raid" from "btrfs raid":
>
> software raid .. you think of mdadm-based software RAID as we know it in
> the linux world?

I'm referring to the software raid btrfs uses.

> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>
> The general recommendation now is to stay at level-1 for now. That fits
> your 2-disk-situation.

Well, what shows better performance?  No btrfs-raid on hardware raid or
btrfs raid on JBOD?

>> Suggestions?
>
> I would avoid converting and stuff.
>
> Why not try a fresh install on the new disks with btrfs?

Why would I want to spend another year to get back to where I'm now?

> You can always step back and plug in the old disks.
> You could even add your new disks *beside the existing system and set up
> a new rootfs alongside (did that several times here).

The plan is to replace the 3.5" SAS disks with 1TB disks.  There is no
room to fit any more 3.5" disks.  Switching disks all the time is not an
option.

That's why I want to use the 2.5" SAS disks.  But I found out that I
can't fit those as planned.  Unless I tape them to the bottom of the
case or something, I'm out of options :(  However, if I tape them, I could
use 4 instead of two ...

> There is nearly no partitioning needed with btrfs (one of the great
> benefits).

That depends.  Try to install on btrfs when you have 4TB disks.  That
totally sucks, even without btrfs.  Add btrfs and it doesn't work at
all --- at least not with Debian, though I was thinking all the time
that if that wasn't Debian but Gentoo, it would just work ...

With 72GB disks, there's nearly no partitioning involved, either.  And
the system is currently only 20GB, including two VMs.

> I never had /boot on btrfs so far, maybe others can guide you with this.
>
> My /boot is plain extX on maybe RAID1 (differs on
> laptops/desktop/servers), I size it 500 MB to have space for multiple
> kernels (especially on dualboot-systems).
>
> Then some swap-partitions, and the rest for btrfs.

There you go, you end up with an odd setup.  I don't like /boot
partitions.  As well as swap partitions, they need to be on raid.  So
unless you use hardware raid, you end up with mdadm /and/ btrfs /and/
perhaps ext4, /and/ multiple partitions.  When you use hardware raid, it
can be disadvantageous compared to btrfs-raid --- and when you use it
anyway, things are suddenly much more straightforward because everything
is on raid to begin with.

We should be able to get away with something really straightforward,
like btrfs-raid on unpartitioned devices and special provisions in btrfs
for swap space so that we don't need extra swap partitions anymore.  The
swap space could even be allowed to grow (to some limit) and shrink back
to a starting size after a reboot.

> So you will have something like /dev/sd[ab]3 for btrfs then.

But I want straightforward :)

> Create your btrfs-"pool" with:
>
> # mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3
>
> Then check for your btrfs-fs with:
>
> # btrfs fi show
>
> Oh: I realize that I start writing a howto here ;-)

That doesn't work without an extra /boot partition?

How's btrfs's performance when you use swap files instead of swap
partitions to avoid the need for mdadm?

> In short:
>
> In my opinion it is worth learning to use btrfs.
> checksums, snapshots, subvolumes, compression ... bla ...
>
> It has some learning curve, especially with a distro like gentoo.
> But it is manageable.

Well, I found it pretty easy since you can always look up how to do
something.  The question is whether it's worthwhile or not.  If I had
time, I could do 

Re: [gentoo-user] major problem after samba update

2015-12-30 Thread lee
cov...@ccs.covici.com writes:

> lee <l...@yagibdah.de> wrote:
>
>> cov...@ccs.covici.com writes:
>> 
>> > lee <l...@yagibdah.de> wrote:
>> >
>> >> cov...@ccs.covici.com writes:
>> >> 
>> >> > lee <l...@yagibdah.de> wrote:
>> >> >
>> >> >> cov...@ccs.covici.com writes:
>> >> >> 
>> >> >> > Hi.  I just upgraded from samba 4.1.x to 4.2.7 and in one of my 
>> >> >> > shares,
>> >> >> > I can not access any subfolders of that share.  It usually gives me 
>> >> >> > some
>> >> >> > kind of windows permission error, or just location not available.
>> >> >> > Windows tells me I can't even display the advanced security settings 
>> >> >> > for
>> >> >> > any folder.  Anyone know what they did and how to fix?  There is a 
>> >> >> > hard
>> >> >> > blocker to downgrading, so maybe something is up.
>> >> >> >
>> >> >> > Thanks in advance for any suggestions.
>> >> >> 
>> >> >> Do they have a changelog which you looked at?  Can you mount these
>> >> >> shares from a Linux client?
>> >> >
>> >> > These are on a Linux server, so there is no problem there.
>> >> 
>> >> Can you definitely mount the share from a remote Linux client without
>> >> problems?
>> >> 
>> >> > Changelog doesn't say anything but the version number.
>> >
>> > I don't have any remote linux client and this is samba, used so that
>> > windows can access the share.
>> 
>> You could make a copy of everything in the inaccessible share, make a
>> new share with settings identical to the settings of the shares that are
>> still accessible when copying has finished, and try to access the new
>> share with a remote client.
>> 
>> If you can access the new share, either something with the old one is
>> weird, or you have changed something like permissions or extended
>> attributes by copying.
>> 
>> 
>> If you cannot access the new share, try a different kernel version (or
>> try a different kernel version first).  I've had a case in which a
>> kernel would freeze/panic when the directory contents of a directory
>> that was exported via NFS were displayed with ls on a NFS client.
>> 
>> IIRC samba uses kernel support on the server.  Perhaps you have a
>> version mismatch between the new samba version and what the kernel
>> supports.
>
> The share is my whole system, so obviously I cannot copy to a new
> share.

You can still try a different kernel, preferably one that is compatible
with the samba version you're using.

Other than that, I don't understand why anyone would try to export the
whole system like that.  It sounds like a recipe for failure to me.



Re: [gentoo-user] Re: Gcc 5.3

2015-12-30 Thread lee
Neil Bothwick <n...@digimed.co.uk> writes:

> On Tue, 29 Dec 2015 19:21:01 +0100, lee wrote:
>
>> > As 4.9.3 is marked stable, I guess that's what you'd get per
>> > default.  
>> 
>> 4.8.5
>> 
>> I'd have to run emerge --sync to know about more recent versions.  How
>> is that supposed to be used, btw?  I only run that when I do want to
>> update everything.  Now if I didn't want to update anything but gcc,
>> could I run emerge --sync and install gcc 5.x without having trouble
>
> Emerge --sync only updates the portage tree, so
>
> emerge --sync
> emerge -a sys-devel/gcc:5
>  
>> with anything else I might install before actually updating everything?
>> So if I'd never explicitly update everything but run emerge --sync
>> frequently, things would be updated over time, occasionally?
>
> No, nothing would get updated. To do that you need to run emerge @world
> after emerge --sync.

Well, yes, but what if I want to install a package that hasn't been
installed yet, or re-emerge an installed package with different USE
flags, after updating the portage tree?  Will a more recent version be
installed than would have been installed before the tree was updated,
maybe updating other packages to more recent versions because they are
needed for the new package?

Other distributions usually (want to) update a lot of packages once you
update the information about available packages.

>> > Stuff compiled with older gcc's should run with newer libgcc*[0], but
>> > stuff compililed with a newer gcc might not run with the older
>> > libgcc*. Same goes, with more problems IIRC, for libstdc++.
>> > So beware of that. Apart from that? I'm not aware of problems.  
>> 
>> Uhm ... So I might break the system by switching between compiler
>> versions?
>
> That's highly unlikely as software that has been compiled with the old
> compiler will still work.

And if not?

Just yesterday I tried to update a Fedora install and it failed so that
the machine is now unusable because it only keeps rebooting.  I expected
it to fail, just not that badly ...  If I could find my USB stick, I'd
be putting Gentoo on it now.

> You may find that some programs fail to
> recompile with the new compiler, but I didn't experience that with the
> 4.9>5 step, although I had some that would build with 4.8 but not 4.9.
>
>> I have an application which I would like to compile with gcc
>> 5.x just to see if that's even possible.  I could switch, try it, and
>> then switch back.
>
> Exactly, run gcc-config, compile/emerge the program, run gcc-config again.

And what about ccache?  Will it use the new version automatically and
detect that the compiler version has changed so that files in the cache
need to be recompiled?



Re: [gentoo-user] Maybe bug? (glibc related?)

2015-12-30 Thread lee
Elias Diem  writes:

>> Whether this is a bug or not depends on what you're supposed to expect,
>> which I don't know.  If someone would run the test suite on a
>> non-hardened profile and got the same warning from gcc, but vim wouldn't
>> be terminated when the segmentation fault occurs, then I'd be worried.
>
> Ok. Well, I don't know either what to expect. I haven't got 
> enough knowledge to analyse this. I posted it here because I 
> was told so ;-)

Maybe someone knows ...

How do you run the test thing?  I could try and see what happens.



Re: [gentoo-user] Kmail2 - I have not given up ... yet

2015-12-30 Thread lee
"J. Roeleveld" <jo...@antarean.org> writes:

> On Tuesday, December 29, 2015 08:03:25 PM Mick wrote:
>> On Tuesday 29 Dec 2015 17:37:25 lee wrote:
>> > Are we at the point where users are accepting to have to install and
>> > maintain a fully fledged RDBMS just for a single application which
>> > doesn't even need a database in the first place?
>> 
>> Yes, a sad state of affairs indeed.  I was hoping for the last 5-6 years
>> that someone  who can code would come to their senses with this application
>> and agree that not all desktop application use cases require some
>> enterprise level database back end architecture, when a few flat data files
>> have served most users perfectly fine for years.  I mean, do I *really*
>> need a database for less that 60 entries in my address book?!!
>
> I'm no longer convinced a database isn't needed.
> Kmail1 was slower than kmail2 is these days.

We are talking here about a single application.  Are users nowadays
generally willing, inclined and in the position to deploy a RDBMS just
in order to use a single application?  Can they be expected to run
several RDBMSs when the next application comes along and suggests mysql
instead of postgresql?

Ironically, in this case you require the RDBMS to be able to use an
application which is too unstable to be used even without one.  Why not
use a better application for the same purpose instead?  You wouldn't
have to worry about your emails then.



[gentoo-user] snapshots?

2015-12-30 Thread lee
Hi,

soon I'll be replacing the system disks and will copy over the existing
system to the new disks.  I'm wondering how much merit there would be in
being able to make snapshots to be able to revert back to a previous
state when updating software or when installing packages to just try
them out.

To be able to make snapshots, I could use btrfs on the new disks.  When
using btrfs, I could use the hardware RAID-1 as I do now, or I could use
the raid features of btrfs instead to create a RAID-1.


Is it worthwhile to use btrfs?

Am I going to run into problems when trying to boot from the new disks
when I use btrfs?

Am I better off using the hardware raid or software raid if I use btrfs?


The installation/setup is simple: 2x3.5" are to be replaced by 2x2.5",
each 15krpm, 72GB SAS disks, so no fancy partitioning is involved.

(I need the physical space to plug in more 3.5" disks for storage.  Sure
I have considered SSDs, but they would cost 20 times as much and provide
no significant advantage in this case.)


I could just replace one disk after the other and let the hardware raid
do it all for me.  A rebuilt takes only 10 minutes or so.  Then I could
convert the file system to btrfs, or leave it as is.  That might even be
the safest bet because I can't miss anything when copying.  (What the
heck do I have it for? :) )


Suggestions?



Re: [gentoo-user] Maybe bug? (glibc related?)

2015-12-29 Thread lee
Elias Diem <li...@webconect.ch> writes:

> Hi lee
>
> On 2015-12-29, lee wrote:
>
>> Elias Diem <li...@webconect.ch> writes:
>> 
>> > Hi
>> >
>> > I just got the following while running Vim's testsuite.
>> >
>> > 
>> > *** buffer overflow detected ***: vim terminated; report to 
>> > <http://bugs.gentoo.org/>
>> > Makefile:151: recipe for target 'af.ck' failed
>> > make[2]: *** [af.ck] Killed
>> > 
>> >
>> > The compiler gave me the following warning.
>> >
>> > [...]
>> > /usr/include/bits/string3.h:110:3: warning: call to __builtin___strcpy_chk 
>> > will always overflow destination buffer
>> >return __builtin___strcpy_chk (__dest, __src, __bos (__dest));
>> >
>> > [...]
>> >
>> > Should I file a bug?
>> 
>> The test was successful because the buffer overflow was detected?
>
> I think I don't quite understand your question.
>
> `make test` failed. Therefore I'd say the test was not 
> successful.
>
> I run a hardened profile. I guess that's why the overflow 
> was detected and vim terminated.

When you perform a strcpy() and overflow the destination buffer, you are
supposed to experience a segmentation fault.  It shouldn't matter
whether you run a hardened profile or not for detecting these.

I imagine it was discovered that a segmentation fault did occur, and
that it inevitably would occur --- since gcc tells you that one will
occur when using __builtin___strcpy_chk() --- and the application was
terminated.  Otherwise, the test would have been unsuccessful.

Whether this is a bug or not depends on what you're supposed to expect,
which I don't know.  If someone would run the test suite on a
non-hardened profile and got the same warning from gcc, but vim wouldn't
be terminated when the segmentation fault occurs, then I'd be worried.
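
The mechanism is easy to reproduce in isolation --- a sketch; with
glibc, -O2 and -D_FORTIFY_SOURCE=2 (which the hardened toolchain enables
by default) turn strcpy() into __strcpy_chk(), gcc prints the same
compile-time warning, and the process is aborted at run time with the
same "buffer overflow detected" message:

$ cat > overflow.c <<'EOF'
#include <string.h>
int main(void)
{
    char dst[4];                         /* provably too small */
    strcpy(dst, "definitely too long");
    return 0;
}
EOF
$ gcc -O2 -D_FORTIFY_SOURCE=2 overflow.c -o overflow
$ ./overflow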



Re: [gentoo-user] Kmail2 - I have not given up ... yet

2015-12-29 Thread lee
Mick  writes:

> On Tuesday 29 Dec 2015 14:18:20 J. Roeleveld wrote:
>
>> sqlite is nice, for single threaded applications.
>> For anything more advanced, either a wrapper is required or something more
>> advanced needs to be used.
>
> I like sqlite because it is self-contained, embedded in the application that 
> uses it and accesses the data directly with functional calls, rather than 
> looping around port/socket interfaces to speak to a server.  This is why I 
> kept it, since with Kmail1 it is not used much.
>
> With Kmail2 the database will be hammered so as you say will need something 
> that can process things in parallel at speed and in higher volumes. So, I'm 
> planning to install postgresql for this purpose, since in my experience mysql 
> has had a number of hickups with akonadi.

Are we at the point where users are accepting to have to install and
maintain a fully fledged RDBMS just for a single application which
doesn't even need a database in the first place?

Quite a few times I've been thinking it would be nice to have a database
to implement a particular feature for an application, and I've always
decided not to do it because it seems to be a totally unreasonable
requirement, and because it seems rather unlikely that any user would be
willing to do it.  It would make some sense if an RDBMS were a
requirement already, used by all kinds of software --- though I'm
finding it very questionable if we should go there (and find ourselves
with a single point of failure and bottleneck).

A MUA must be doing something very wrong to have such a requirement.
And what kind of performance can you expect with a laptop that has only
4GB and is already overloaded with KDE?



Re: [gentoo-user] major problem after samba update

2015-12-29 Thread lee
cov...@ccs.covici.com writes:

> lee <l...@yagibdah.de> wrote:
>
>> cov...@ccs.covici.com writes:
>> 
>> > lee <l...@yagibdah.de> wrote:
>> >
>> >> cov...@ccs.covici.com writes:
>> >> 
>> >> > Hi.  I just upgraded from samba 4.1.x to 4.2.7 and in one of my shares,
>> >> > I can not access any subfolders of that share.  It usually gives me some
>> >> > kind of windows permission error, or just location not available.
>> >> > Windows tells me I can't even display the advanced security settings for
>> >> > any folder.  Anyone know what they did and how to fix?  There is a hard
>> >> > blocker to downgrading, so maybe something is up.
>> >> >
>> >> > Thanks in advance for any suggestions.
>> >> 
>> >> Do they have a changelog which you looked at?  Can you mount these
>> >> shares from a Linux client?
>> >
>> > These are on a Linux server, so there is no problem there.
>> 
>> Can you definitely mount the share from a remote Linux client without
>> problems?
>> 
>> > Changelog doesn't say anything but the version number.
>
> I don't have any remote linux client and this is samba, used so that
> windows can access the share.

You could make a copy of everything in the inaccessible share, make a
new share with settings identical to the settings of the shares that are
still accessible when copying has finished, and try to access the new
share with a remote client.

If you can access the new share, either something with the old one is
weird, or you have changed something like permissions or extended
attributes by copying.


If you cannot access the new share, try a different kernel version (or
try a different kernel version first).  I've had a case in which a
kernel would freeze/panic when the directory contents of a directory
that was exported via NFS were displayed with ls on a NFS client.

IIRC samba uses kernel support on the server.  Perhaps you have a
version mismatch between the new samba version and what the kernel
supports.



Re: [gentoo-user] Re: Gcc 5.3

2015-12-29 Thread lee
David Haller <gen...@dhaller.de> writes:

> Hello,
>
> On Tue, 29 Dec 2015, lee wrote:
>>Andrew Savchenko <birc...@gentoo.org> writes:
>>> There will be no 5.3.1 likely. Numeration scheme is changed from 5.x
>>> series: what was middle version is now major, what was minor is now
>>> middle. So 5.3 is a patch version of 5.0 the same as 4.9.3 is a
>>> patch version of 4.9.0.
>>
>>What do you currently get as default when you update, and can you easily
>>go back to a previous version, or have several versions installed and
>>switch between them?
>
> I'd guess 4.9.3. And yes and yes.
>
> # eix sys-devel/gcc
> [I] sys-devel/gcc
>  Available versions:  
>  (2.95.3) ~2.95.3-r10^s
>  (3.3.6) (~)3.3.6-r1^s
>  (3.4.6) 3.4.6-r2^s
>  (4.0.4) **4.0.4^s
>  (4.1.2) 4.1.2^s
>  (4.2.4) (~)4.2.4-r1^s
>  (4.3.6) 4.3.6-r1^s
>  (4.4.7) 4.4.7^s
>  (4.5.4) 4.5.4^s
>  (4.6.4) 4.6.4^s
>  (4.7)  4.7.4^s
>  (4.8)  (~)4.8.0^s (~)4.8.1-r1^s (~)4.8.2^s 4.8.3^s 4.8.4^s 4.8.5^s
>  (4.9)  ~*4.9.0^s ~*4.9.1^s (~)4.9.2^s 4.9.3^s{tbz2}
>  (5)**5.1.0^s **5.2.0^s (~)5.3.0^s{tbz2}
> [..]
>  Installed versions:  4.9.3(4.9)^s{tbz2}[..]
>   5.3.0(5)^s{tbz2}[..]
> [..]
>
> # gcc-config -l
>  [1] x86_64-pc-linux-gnu-4.9.3 *
>  [2] x86_64-pc-linux-gnu-5.3.0
>
> Basically, you can install one of each slot, i.e. the first column in
> () of the eix output. From (2.95.3) to (5). And switch as you like.
>
> As 4.9.3 is marked stable, I guess that's what you'd get per default.

4.8.5

I'd have to run emerge --sync to know about more recent versions.  How
is that supposed to be used, btw?  I only run that when I do want to
update everything.  Now if I didn't want to update anything but gcc,
could I run emerge --sync and install gcc 5.x without having trouble
with anything else I might install before actually updating everything?
So if I'd never explicitly update everything but run emerge --sync
frequently, things would be updated over time, occasionally?

> Stuff compiled with older gcc's should run with newer libgcc*[0], but
> stuff compililed with a newer gcc might not run with the older
> libgcc*. Same goes, with more problems IIRC, for libstdc++.
> So beware of that. Apart from that? I'm not aware of problems.

Uhm ... So I might break the system by switching between compiler
versions?  I have an application which I would like to compile with gcc
5.x just to see if that's even possible.  I could switch, try it, and
then switch back.

> BTW: why is gcc not also handled via eselect? Even if that'd just
> call gcc-config?

What about ccache?  How's that handled when you have multiple versions
of gcc installed?

> HTH,
> -dnh
>
> [0] e.g. old Loki games, probably compiled with 2.95.x or even older
> still run fine on a system built with gcc-4.6

If they were 64bit ...  Too bad that there basically aren't any games
anymore.



Re: [gentoo-user] major problem after samba update

2015-12-29 Thread lee
cov...@ccs.covici.com writes:

> lee <l...@yagibdah.de> wrote:
>
>> cov...@ccs.covici.com writes:
>> 
>> > Hi.  I just upgraded from samba 4.1.x to 4.2.7 and in one of my shares,
>> > I can not access any subfolders of that share.  It usually gives me some
>> > kind of windows permission error, or just location not available.
>> > Windows tells me I can't even display the advanced security settings for
>> > any folder.  Anyone know what they did and how to fix?  There is a hard
>> > blocker to downgrading, so maybe something is up.
>> >
>> > Thanks in advance for any suggestions.
>> 
>> Do they have a changelog which you looked at?  Can you mount these
>> shares from a Linux client?
>
> These are on a Linux server, so there is no problem there.

Can you definitely mount the share from a remote Linux client without
problems?

> Changelog doesn't say anything but the version number.

They have lots of changelogs which say more than that:
https://www.samba.org/samba/history/



Re: [gentoo-user] IPTABLES

2015-12-29 Thread lee
"siefke_lis...@web.de"  writes:

> Hello,
>
> I try to run iptables to block bad IPs and to lock the system down.
>
> I want to run a firewall which blocks all INPUT and only ALLOWs the
> services I define.  I want to use ipset to block the spam IPs; that
> beats setting all the rules manually.

After reading a good iptables tutorial, you may want to take a look at
shorewall and it's documentation.

If you're referring to IP addresses from which you receive emails that
are spam, I'd recommend getting familiar with exim and perhaps
spamassassin.  For extreme cases, you might want to use something like
fail2ban.
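
As a starting point, the default-deny plus ipset pattern looks roughly
like this (a sketch --- the set name and the allowed port are
placeholders to adjust):

# ipset create spamips hash:ip
# ipset add spamips 192.0.2.1
# iptables -P INPUT DROP
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -m set --match-set spamips src -j DROP
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT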



Re: [gentoo-user] Re: Gcc 5.3

2015-12-29 Thread lee
Andrew Savchenko  writes:

> On Fri, 25 Dec 2015 12:40:48 -0800 walt wrote:
>> On Thu, 24 Dec 2015 10:18:27 -0500
>> Alan Grimes  wrote:
>> 
>> > Hey, thanks for putting out gcc 5.3...
>> > 
>> > Unfortunately, it fails to bootstrap on my machine. I am getting
>> > differences between the stage 2 and stage 3 compilers and it's
>> > dying... =(
>> 
>> I'm waiting for 5.3.1 before I even try to install it on my main
>> desktop machine.
>
> There will be no 5.3.1 likely. Numeration scheme is changed from 5.x
> series: what was middle version is now major, what was minor is now
> middle. So 5.3 is a patch version of 5.0 the same as 4.9.3 is a
> patch version of 4.9.0.

What do you currently get as default when you update, and can you easily
go back to a previous version, or have several versions installed and
switch between them?



Re: [gentoo-user] Maybe bug? (glibc related?)

2015-12-29 Thread lee
Elias Diem  writes:

> Hi
>
> I just got the following while running Vim's testsuite.
>
> 
> *** buffer overflow detected ***: vim terminated; report to 
> 
> Makefile:151: recipe for target 'af.ck' failed
> make[2]: *** [af.ck] Killed
> 
>
> The compiler gave me the following warning.
>
> [...]
> /usr/include/bits/string3.h:110:3: warning: call to __builtin___strcpy_chk 
> will always overflow destination buffer
>return __builtin___strcpy_chk (__dest, __src, __bos (__dest));
>
> [...]
>
> Should I file a bug?

The test was successful because the buffer overflow was detected?



Re: [gentoo-user] major problem after samba update

2015-12-28 Thread lee
cov...@ccs.covici.com writes:

> Hi.  I just upgraded from samba 4.1.x to 4.2.7 and in one of my shares,
> I can not access any subfolders of that share.  It usually gives me some
> kind of windows permission error, or just location not available.
> Windows tells me I can't even display the advanced security settings for
> any folder.  Anyone know what they did and how to fix?  There is a hard
> blocker to downgrading, so maybe something is up.
>
> Thanks in advance for any suggestions.

Do they have a changelog which you looked at?  Can you mount these
shares from a Linux client?



Re: [gentoo-user] arp question

2015-12-27 Thread lee
Adam Carter  writes:

>> Yes, I already tried that and didn't get any traffic listed.
>>
>
> In that case it sounds like linux has bridged them across from the other
> interface. Does this find anything?
> tcpdump -i enp2s0 net 192.168.1.0/24
>
> If it doesn't maybe generate some layer2 broadcast traffic on enp1s0 to see
> if you can see that traffic in the tcpdump on enp2s0. Something like;
> echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
> ping 192.168.1.255
>
> After the test is done turn it back on with;
> echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

Thanks!  I tried it, and nothing shows up.

> I've never bridged with linux. Bridging is usually a bad option - if you
> can I suggest you move to a routed and/or NATed solution. Clean and simple
> is best.

Most people seem to recommend bridging as the clean and simple solution.
How come you say that bridging is usually bad?

And how do you start a container without having a bridge on the host?
Not being able to do that is why I have the bridge in the first place.



Re: [gentoo-user] arp question

2015-12-27 Thread lee
Rich Freeman <ri...@gentoo.org> writes:

> On Sat, Dec 26, 2015 at 9:14 AM, lee <l...@yagibdah.de> wrote:
>>
>> They are connected to different vlans on the same switch, so they don't
>> share the same broadcast domain.  The switch shows the mac addresses of
>> the phones only in the expected vlan.
>>
>
> Out of curiosity, have you tried actually sending a broadcast on the
> VLAN to verify that it actually is implemented correctly?  If your
> switch is mixing ARP across VLANs that would explain this behavior.

Not yet --- and it won't exactly be an easy thing to do.

It's a high-quality switch.  If it couldn't keep vlans separated, the
customers it was designed for would have them pretty much all replaced
under warranty.

> I've never messed with VLAN on linux but I'd think that you could

Me neither; so far, the switch does it.

> probably implement VLAN in software and actually save yourself a
> physical network interface as well (both interfaces could go out over
> the same wire and be handled appropriately by the switch).

Hm.  That might even be possible, in a very complicated setup.  Maybe
some day, I can do that, after lots of learning.
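
For the record, a tagged VLAN interface under Linux seems to be just a
couple of iproute2 commands --- a sketch with a made-up VLAN id and
address:

# ip link add link enp1s0 name enp1s0.10 type vlan id 10
# ip addr add 192.168.10.1/24 dev enp1s0.10
# ip link set enp1s0.10 up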



Re: [gentoo-user] arp question

2015-12-26 Thread lee
Adam Carter  writes:

>> They are wrong because there is no way for network traffic from the
>> devices on the LAN to make it to the interface enp2s0.  Or, if they do
>> make it there, then there is something else seriously wrong.
>>
>
> tcpdump -i enp2s0 arp
>
> will tell you if the arps are being generated from something on the wire
> side. If there's not much traffic then clear the arp entry and ping the IP
> address to generate traffic.

Yes, I already tried that and didn't get any traffic listed.

>> | heimdali ~ # route -n
>> | Kernel IP Routentabelle
>> | Ziel            Router          Genmask         Flags Metric Ref    Use Iface
>> | 0.0.0.0         192.168.75.1    0.0.0.0         UG    4005   0        0 ppp0
>> | 127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
>> | 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br_dmz
>> | 192.168.3.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0
>> | 192.168.3.80    0.0.0.0         255.255.255.255 UH    0      0        0 enp1s0
>> | 192.168.3.81    0.0.0.0         255.255.255.255 UH    0      0        0 enp1s0
>> | 192.168.75.1    0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
>> | heimdali ~ #
>> `
>
> What is the purpose of the static host routes?  The connected
> 192.168.3.0/24 route will take care of those hosts, so they shouldn't be
> required.

They shouldn't be needed.  I added them manually only to see if it would
make a difference.

> What are enp1s0 and enp2s0 connected to? Same hub or same vlan on the
> switch? If so they will both see all the layer 2 broadcast traffic from
> each subnet.

They are connected to different vlans on the same switch, so they don't
share the same broadcast domain.  The switch shows the mac addresses of
the phones only in the expected vlan.


Even when enp2s0 is not connected to the switch but directly to the wire
the PPPoE connection comes from, the arp entries are updated.

Having that said, there's one more test I can make: disconnect enp2s0
entirely and see if the arp entries still persist.


As to the other reply: The firewall is IP based, and for what reason,
and by what route, would any traffic go from a device on the LAN to an
interface that is not connected to the LAN but to the internet, and is
on a different network than the LAN, when all IP traffic from the device
to the internet is being dropped?

The firewall doesn't know enp2s0 but ppp0.  But still, I don't see how
these arp entries could appear on enp2s0, yet they do.


On a side note: I've verified that VOIP quality issues do not come from
anything on the LAN (by playing music to the phones and making internal
phone calls) but from the rather poor internet connection.  So at least
the wrong arp entries don't interfere with VOIP.



Re: [gentoo-user] arp question

2015-12-25 Thread lee
Rich Freeman  writes:

> On Fri, Dec 25, 2015 at 9:00 PM, Adam Carter  wrote:
>>> grandstream.yagibdah.de (192.168.3.80) auf 00:0b:82:16:ed:9e [ether] auf
>>> enp2s0
>>> grandstream.yagibdah.de (192.168.3.80) auf 00:0b:82:16:ed:9e [ether] auf
>>> enp1s0
>>> spa.yagibdah.de (192.168.3.81) auf 88:75:56:07:44:c8 [ether] auf enp2s0
>>> spa.yagibdah.de (192.168.3.81) auf 88:75:56:07:44:c8 [ether] auf enp1s0
>>>
>>>
>>> enp2s0 is an interface dedicated to a PPPoE connection, and enp1s0
>>> connects to the LAN.
>>>
>>> IIUC, this is bound to cause problems.
>>>
>>> How is it possible for the wrong entries to be created, and what can I
>>> do to prevent them?
>>>
>>
>> arp mappings are untrusted so your machine will accept anything it sees on
>> the network. That's what makes MITM so easy on a connected subnet. What
>> makes you think they are wrong? Also, the output of ifconfig would be
>> helpful.
>
> I suspect those interfaces are getting bridged or something, but I'm
> not an expert on such things.

No physical interface is connected to the bridge interface, though
selected traffic from the devices can reach one of the VMs that are
connected to the bridge.

> If a given IP has a MAC on more than one interface, the interface the
> packets go out to is still controlled by the routing rules.  If the
> routing rule says that 1.1.1.1 is on eth0 it doesn't matter that eth0
> doesn't have an ARP entry and eth1 does - I believe it will just be
> undelivered or sent to the gateway for eth0 if it isn't on a local
> subnet for that interface.

There are no arp entries created for interfaces of the host.  No traffic
from the devices can make it to enp2s0.

> If you have some kind of routing loop it could actually make its way
> back to the interface on eth1.

The traffic would have to be routed back via enp2s0 from somewhere.
Since traffic from the devices doesn't go over enp2s0, it cannot be
routed back there.
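
A quick way to confirm which interface the kernel would actually use (a
sketch):

    # ask the routing table directly instead of guessing
    ip route get 192.168.3.80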

> ARP doesn't come into play until the kernel goes to send something on
> an interface and determines it is on a subnet for that interface.

The devices are not on a subnet of the interface, hence no arp entries
for them should be created for that interface.

> Again, I'm not an expert in this and there could be some nuance to the
> rules that I'm missing.

Me neither ...  The devices are functional, though I can't tell if
traffic from or to them can be misdirected because of the arp entries
that shouldn't exist.  The devices might still work if some of their
traffic is misdirected, just not as well as they otherwise could.

Since they are VOIP devices, they are required to work well --- and the
VOIP quality is actually not good enough.  So I'm looking for possible
causes, and wrong arp entries might be one.
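
In case it helps: Linux's ARP handling on multi-homed hosts can be
tightened with sysctls.  This is a sketch of knobs that are commonly
suggested for this kind of symptom, not something I have verified here:

    # make ARP replies and announcements per-interface rather than per-host
    sysctl -w net.ipv4.conf.all.arp_ignore=1    # reply only for addresses on the ingress interface
    sysctl -w net.ipv4.conf.all.arp_announce=2  # use the best local address in ARP requests
    sysctl -w net.ipv4.conf.all.arp_filter=1    # reply only on the interface the kernel would route through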



Re: [gentoo-user] arp question

2015-12-25 Thread lee
Adam Carter  writes:

>>
>> grandstream.yagibdah.de (192.168.3.80) at 00:0b:82:16:ed:9e [ether] on
>> enp2s0
>> grandstream.yagibdah.de (192.168.3.80) at 00:0b:82:16:ed:9e [ether] on
>> enp1s0
>> spa.yagibdah.de (192.168.3.81) at 88:75:56:07:44:c8 [ether] on enp2s0
>> spa.yagibdah.de (192.168.3.81) at 88:75:56:07:44:c8 [ether] on enp1s0
>>
>>
>> enp2s0 is an interface dedicated to a PPPoE connection, and enp1s0
>> connects to the LAN.
>>
>> IIUC, this is bound to cause problems.
>>
>> How is it possible for the wrong entries to be created, and what can I
>> do to prevent them?
>>
>>
> arp mappings are untrusted so your machine will accept anything it sees on
> the network. That's what makes MITM so easy on a connected subnet. What
> makes you think they are wrong?

They are wrong because there is no way for network traffic from the
devices on the LAN to make it to the interface enp2s0.  Or, if they do
make it there, then there is something else seriously wrong.
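
Watching the interface should show where they come from (a sketch):

    # see the raw ARP frames arriving on enp2s0, if any
    tcpdump -e -n -i enp2s0 arp
    # or watch the kernel's neighbour table being updated live
    ip monitor neigh | grep enp2s0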

> Also, the output of ifconfig would be helpful.


,
| heimdali ~ # ifconfig -a
| br_dmz: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
| inet 192.168.1.1  netmask 255.255.255.0  broadcast 192.168.1.255
| inet6 fe80::5cce:2bff:fedc:dce0  prefixlen 64  scopeid 0x20<link>
| ether fe:18:b0:e9:78:47  txqueuelen 0  (Ethernet)
| RX packets 5124752  bytes 3554838408 (3.3 GiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 5080086  bytes 3508269156 (3.2 GiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
|
| enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
| inet 192.168.3.20  netmask 255.255.255.0  broadcast 192.168.3.255
| inet6 fe80::7aac:c0ff:fe3c:2dc8  prefixlen 64  scopeid 0x20<link>
| ether 78:ac:c0:3c:2d:c8  txqueuelen 1000  (Ethernet)
| RX packets 998350  bytes 217325937 (207.2 MiB)
| RX errors 0  dropped 7332  overruns 0  frame 0
| TX packets 965281  bytes 274572349 (261.8 MiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
| device interrupt 17
|
| enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
| inet 185.55.75.245  netmask 255.255.255.255  broadcast 185.55.75.245
| inet6 fe80::7aac:c0ff:fe3c:2dc9  prefixlen 64  scopeid 0x20<link>
| ether 78:ac:c0:3c:2d:c9  txqueuelen 1000  (Ethernet)
| RX packets 5157535  bytes 4875664995 (4.5 GiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 3377329  bytes 413568759 (394.4 MiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
| device interrupt 16
|
| lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
| inet 127.0.0.1  netmask 255.0.0.0
| inet6 ::1  prefixlen 128  scopeid 0x10<host>
| loop  txqueuelen 0  (Local Loopback)
| RX packets 276299  bytes 78159006 (74.5 MiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 276299  bytes 78159006 (74.5 MiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
|
| ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1492
| inet 185.55.75.245  netmask 255.255.255.255  destination 192.168.75.1
| ppp  txqueuelen 3  (Point-to-Point connection)
| RX packets 7250  bytes 3180943 (3.0 MiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 6123  bytes 711342 (694.6 KiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
|
| veth5CBR3D: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
| inet6 fe80::fc18:b0ff:fee9:7847  prefixlen 64  scopeid 0x20<link>
| ether fe:18:b0:e9:78:47  txqueuelen 1000  (Ethernet)
| RX packets 5077428  bytes 3616056439 (3.3 GiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 5031817  bytes 3495334672 (3.2 GiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
|
| vethYXJVKH: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
| inet6 fe80::fcd0:65ff:fec5:7b44  prefixlen 64  scopeid 0x20<link>
| ether fe:d0:65:c5:7b:44  txqueuelen 1000  (Ethernet)
| RX packets 47324  bytes 10528497 (10.0 MiB)
| RX errors 0  dropped 0  overruns 0  frame 0
| TX packets 48502  bytes 13062823 (12.4 MiB)
| TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
|
| heimdali ~ # brctl show
| bridge name     bridge id           STP enabled     interfaces
| br_dmz          8000.fe18b0e97847   no              veth5CBR3D
|                                                     vethYXJVKH
| heimdali ~ # route -n
| Kernel IP routing table
| Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
| 0.0.0.0         192.168.75.1    0.0.0.0         UG    4005   0        0 ppp0
| 127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
| 192.168.1.0     0.0.0.0         

[gentoo-user] arp question

2015-12-25 Thread lee
Hi,

any idea why I have entries in the arp table like this:


grandstream.yagibdah.de (192.168.3.80) at 00:0b:82:16:ed:9e [ether] on enp2s0
grandstream.yagibdah.de (192.168.3.80) at 00:0b:82:16:ed:9e [ether] on enp1s0
spa.yagibdah.de (192.168.3.81) at 88:75:56:07:44:c8 [ether] on enp2s0
spa.yagibdah.de (192.168.3.81) at 88:75:56:07:44:c8 [ether] on enp1s0


enp2s0 is an interface dedicated to a PPPoE connection, and enp1s0
connects to the LAN.

IIUC, this is bound to cause problems.

How is it possible for the wrong entries to be created, and what can I
do to prevent them?
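
One blunt way to prevent them, assuming nothing else is wrong, would be
static entries --- a sketch, untested here:

    # pin the phones' MACs to the LAN interface so learned entries can't move them
    ip neigh replace 192.168.3.80 lladdr 00:0b:82:16:ed:9e dev enp1s0 nud permanent
    ip neigh replace 192.168.3.81 lladdr 88:75:56:07:44:c8 dev enp1s0 nud permanent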



[gentoo-user] mouse cannot be used until reattached

2015-12-05 Thread Wallance Lee
Hi, everyone.
I have had a very strange phenomenon for a long time.  I noticed that when
I boot my computer with the mouse attached, the mouse isn't recognized on
the sddm login UI, even though I can see that the mouse is powered before
sddm starts.  Strangely, the mouse is recognized after I pull it out and
plug it in again.  I cannot confirm whether this is caused by sddm.


Any help would be very much appreciated.
Thanks.
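
PS: One workaround that is sometimes suggested (an untested sketch; it
re-announces the USB devices, which may mimic the unplug/replug):

    # re-trigger USB "add" events after logging in
    udevadm trigger --subsystem-match=usb --action=add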






Re: [gentoo-user] mouse cannot be used until reattached

2015-12-05 Thread Wallance Lee
I cannot use my mouse at the sddm greeter, but I can use my Synaptics
touchpad.  When I log in to KDE, I just reattach the mouse and it works
again.
I am using sddm 0.13-r1 with kf-5.15, plasma-5.4.1, qt-5.4.2, gcc-4.9.3,
glibc-2.21-r1.




On 2015-12-06 08:46:41, "Wallance Lee" <ha...@126.com> wrote:

Hi, everyone.
I have had a very strange phenomenon for a long time.  I noticed that when
I boot my computer with the mouse attached, the mouse isn't recognized on
the sddm login UI, even though I can see that the mouse is powered before
sddm starts.  Strangely, the mouse is recognized after I pull it out and
plug it in again.  I cannot confirm whether this is caused by sddm.


Any help would be very much appreciated.
Thanks.






Re: [gentoo-user] resolving names of local hosts locally

2015-12-03 Thread lee
Peter Humphrey <pe...@prh.myzen.co.uk> writes:

> On Wednesday 02 December 2015 20:37:34 lee wrote:
>> Hi,
>> 
>> is there a way to configure bind so that the names of local hosts,
>> i. e. the ones bind is authoritative for, can be resolved without a
>> connection to the internet?
>> 
>> I don't like it at all that when the internet connection goes out, no
>> name resolution at all is possible.  Since the information about the
>> local hosts is known to bind from its configuration files, why can't it
>> just resolve them?
>
> Have you looked into net-dns/dnsmasq?

"Small forwarding DNS server"?  Why would I want a forwarding one?



Re: [gentoo-user] resolving names of local hosts locally

2015-12-03 Thread lee
Alan McKinnon <alan.mckin...@gmail.com> writes:

> On 02/12/2015 21:37, lee wrote:
>> Hi,
>> 
>> is there a way to configure bind so that the names of local hosts,
>> i. e. the ones bind is authoritative for, can be resolved without a
>> connection to the internet?
>> 
>> I don't like it at all that when the internet connection goes out, no
>> name resolution at all is possible.  Since the information about the
>> local hosts is known to bind from its configuration files, why can't it
>> just resolve them?
>> 
>
>
> There are several problems with your idea. First, the configured
> nameservers in resolv.conf are caching servers, not authoritative
> servers. You never configure an auth server to act as a cache. Yes, it
> can be done. No, it's an awful idea and things break horribly.

I thought it was caching anyway.  What's the point of forgetting the
answers to queries right away after answering them?

> Secondly, nothing else on your network can know your auth server is
> authoritative without first being informed so by the delegating server.

The name server itself knows this from its configuration, and it's the
only thing that needs to know this because it's the only thing
everything on the network is asking.

> Or in other words, if you own example.com and an auth server for
> example.com is on your network, you have to first go via .com to know
> that. Weird, but that's how it works.

The name server doesn't know what domains it's supposed to give answers
for without asking others first?

> DNS was designed to need a network connection because most of the DNS is
> out there somewhere else

Then how do you solve the problem of being unable to even resolve the
names of hosts on the LAN when the connection goes down?

> What you should do, is run your own caching server on the local network
> and set the TTL for your own zones to something sane i.e. 1 day (as
> opposed to the current idiotic fad of making it 10 minutes). The query
> your cache for your entire zone once a day. Unless your internet
> connection goes out for more than a day, you're good.

Hm, I just tried that, and it seems to work.  It didn't work before I made
some small changes last night; that's why I'm asking.  Weird ...
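
For reference, the TTL change amounts to one line at the top of the zone
file (a sketch; the SOA values here are illustrative, not mine):

    $TTL 86400      ; cache answers for one day, as suggested
    @   IN  SOA ns1.example.com. hostmaster.example.com. (
                2015120301      ; serial
                3600            ; refresh
                900             ; retry
                604800          ; expire
                86400 )         ; negative-caching TTL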



[gentoo-user] resolving names of local hosts locally

2015-12-02 Thread lee
Hi,

is there a way to configure bind so that the names of local hosts,
i. e. the ones bind is authoritative for, can be resolved without a
connection to the internet?

I don't like it at all that when the internet connection goes out, no
name resolution at all is possible.  Since the information about the
local hosts is known to bind from its configuration files, why can't it
just resolve them?
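
For what it's worth, serving a zone authoritatively from local files needs
only a fragment like this in named.conf (a sketch; the zone name is taken
from the hostnames above, and the file path is an assumption):

    // answer for the LAN zone locally, no upstream connection needed
    zone "yagibdah.de" IN {
            type master;
            file "/etc/bind/pri/yagibdah.de.zone";
    };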



Re: [gentoo-user] Weird "df" output

2015-11-27 Thread lee
waltd...@waltdnes.org writes:

> On Thu, Nov 26, 2015 at 11:57:20AM +0100, lee wrote
>
>> He said that he "has a primary partition 1, which covers the entire
>> hard drive" and "a small / partition".  That made me think that he
>> has two disks.
>
>   Primary partitions are numbered 1 through 4 and logical partitions are
> numbered 5 and up.  The "primary partition" is the entire physical disk.

Hm, I don't consider extended partitions as primary ones but as extended
ones.  When I need more than four partitions, I create three primary
ones, an extended one and logical ones within the extended one.  Why
would I do that any other way?
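
i.e. a layout like this (illustrative; the device name is assumed):

    /dev/sda1   primary
    /dev/sda2   primary
    /dev/sda3   primary
    /dev/sda4   extended        (container only)
    /dev/sda5+  logical         (inside /dev/sda4)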

You cannot have a primary partition that covers the entire disk and then
some.


