Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Mick
On Tuesday 02 Aug 2016 00:33:57 waltd...@waltdnes.org wrote:
> On Tue, Aug 02, 2016 at 01:11:24AM +0200, Jeremi Piotrowski wrote
> 
> > Does it make sense to compile your own versions of these packages
> > and then binary merge, when portage already contains binary ebuilds
> > for these packages? (firefox-bin/libreoffice-bin/google-chrome)
> 
>   I've got an underpowered netbook that needs all the help it can get.
> I build in the VM with...
> 
> -O2 -march=bonnell -mfpmath=sse -pipe -fomit-frame-pointer
> -fno-unwind-tables -fno-asynchronous-unwind-tables
> 
>   Even older desktops benefit.  One case in point is my former Dell
> D530 Core2 Duo.  When Gentoo had been installed, it could not keep up
> with the slowest stream of NHL Gamecenter Live.  Everything was generic
> x86 with SSE2 thrown in, from the stage3.  After re-emerging system and
> world optimized for the machine's cpu, it could keep up with not only
> the lowest quality stream, but a medium-quality stream.  So yes, it
> helps.
> 
>   From http://gentoo-en.vfose.ru/wiki/Safe_Cflags#-march.3Dnative to
> find out exactly what your cpu is, run the following command on the
> *TARGET* machine...
> 
> gcc -march=native -E -v - </dev/null 2>&1 | grep cc1
> 
>   Ignore the flag output, which may be over-optimistic.  Just look at
> what it says for "-march=".

Yes, I've had similar experiences here with own-built binaries being faster 
than the generic *-bin packages offered by portage.  The 32bit box in question 
is running a single-core Pentium4 ... I could bet it feels slower than my 
AppleTV1 with its 1.00GHz Pentium-M.  :-)
-- 
Regards,
Mick



Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread james

On 08/01/2016 01:03 PM, Rich Freeman wrote:

On Mon, Aug 1, 2016 at 12:49 PM, J. Roeleveld  wrote:

On Monday, August 01, 2016 08:43:49 AM james wrote:


Sure this part is only related to
transaction processing as there was much more to the "five 9s" legacy,
but imho, that is the heart of what was the precursor to ACID property's
now so greatly espoused in SQL codes that Douglas refers to.

Do folks concur or disagree at this point?


ACID is about data integrity. The "best 2 out of 3" voting was, in my opinion,
a work-around for unreliable hardware. It is based on a clever idea, but when
2 computers having the same data and logic come up with 2 different answers, I
wouldn't trust either of them.


I agree, this was a solution for hardware issues.  However, hardware
issues can STILL happen today, so there is an argument for it.  There
are really two ways to get to robustness: clever hardware, and clever
software.  The old way was to do it in hardware, the newer way is to
do it in software (see Google with their racks of cheap motherboards).
I suspect software will always be the better way, but you can't just
write a check to get better software the way you can with hardware.
Doing it right with software means hiring really good people, which is
something a LOT of companies don't want to do (well, they think
they're doing it, but they're not).

Basically I believe the concept with the mainframe was that you could
probably open the thing up, break one random board with a hammer, and
the application would still keep running just fine.  IBM would then
magically show up the next day and replace the board without anybody
doing anything.  All the hardware had redundancy, so you can run your
application for a decade or two without fear of a hardware failure.


Not with today's clusters and cheap hardware.  As you pointed out, 
expertise (and common sense) are the quintessential qualities for staff 
and managers.




However, you pay a small fortune for all of this.


Not today; those exorbitant prices were back then.  Sequoia made so much 
money, I'm pretty sure that's how they ultimately became a VC firm?





The other trend as
I understand it in mainframes is renting your own hardware to you.


Yes, find a CPA that spent 10 years or so inside the IRS and you get 
even more aggressive profitability vectors.  Some accountants move 
hardware, assets and corporations around and about the world in a shell 
game and never pay taxes, just recycling assets among billionaires.  It's 
pretty sickening, if you really learn the details of what goes on.



That is, you buy a box, and you can just pay to turn on extra
CPUs/etc.  You can imagine what the margins are like for that to be
practical, but for non-trendy businesses that don't want to offer free
ice cream and pay Silicon Valley wages I guess it is an alternative to
building good software.


Investment credits, sell/rent hardware to an overseas division, then move 
them to another country that pays you to relocate and bring a few jobs. 
Heck, even the US states play that stupid game with recruiting 
corporations.  Get an IRS career agent drunk some time and pull a few 
stories out of them.



You have seen how "democracies" work, right? :)
The more voters involved, the longer it takes for all the votes to be counted.
With a small number, it might actually still scale, but when you pass a magic
number (no clue what this would be), the counting time starts to exceed any
time you might have gained by adding more voters.

Also, this, to me, seems to counteract the whole reason for using clusters:
Have different nodes handle a different part of the problem.


I agree.  The old mainframe way of doing things isn't going to make
anything faster.  I don't think it will necessarily make things much
slower as long as all the hardware is in the same box.  However, if
you want to start doing this at a cluster scale with offsite replicas
I imagine the latencies would kill just about anything.  That was one
of the arguments against the Postgres vacuum approach where replicas
could end up having in-use records deleted.  The solutions are to
delay the replicas (not great), or synchronize back to the master
(also not great).  The MySQL approach apparently lets all the replicas
do their own vacuuming, which does neatly solve that particular
problem (presumably at the cost of more work for the replicas, and of
course they're no longer binary replicas).


Why, Rich, using common sense?  What's wrong with you?  I thought you were 
a good corporate lackey?  Bob from accounting has already presented to 
the BOD and got approval.  Rich, can you be a team player (silent idiot) 
just once for the team?







The way Uber created the cluster is useful when having 1 node handle all the
updates and multiple nodes providing read-only access while also providing
failover functionality.


I agree.  I do remember listening to a Postgres talk by one of the
devs and while everybody's holy grail is the magical replica

Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread waltdnes
On Tue, Aug 02, 2016 at 01:11:24AM +0200, Jeremi Piotrowski wrote

> Does it make sense to compile your own versions of these packages
> and then binary merge, when portage already contains binary ebuilds
> for these packages? (firefox-bin/libreoffice-bin/google-chrome)

  I've got an underpowered netbook that needs all the help it can get.
I build in the VM with...

-O2 -march=bonnell -mfpmath=sse -pipe -fomit-frame-pointer -fno-unwind-tables 
-fno-asynchronous-unwind-tables

  Even older desktops benefit.  One case in point is my former Dell
D530 Core2 Duo.  When Gentoo had been installed, it could not keep up
with the slowest stream of NHL Gamecenter Live.  Everything was generic
x86 with SSE2 thrown in, from the stage3.  After re-emerging system and
world optimized for the machine's cpu, it could keep up with not only
the lowest quality stream, but a medium-quality stream.  So yes, it
helps.

  From http://gentoo-en.vfose.ru/wiki/Safe_Cflags#-march.3Dnative to
find out exactly what your cpu is, run the following command on the
*TARGET* machine...

gcc -march=native -E -v - </dev/null 2>&1 | grep cc1

  Ignore the flag output, which may be over-optimistic.  Just look at
what it says for "-march=".
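  For example, on my netbook that reports -march=bonnell, which then
goes into /etc/portage/make.conf on the build box.  A minimal sketch
(the extra flags are just my choices, not required):

CFLAGS="-O2 -march=bonnell -mfpmath=sse -pipe"
CXXFLAGS="${CFLAGS}"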

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread james

On 08/01/2016 11:49 AM, J. Roeleveld wrote:

On Monday, August 01, 2016 08:43:49 AM james wrote:

On 08/01/2016 02:16 AM, J. Roeleveld wrote:

On Saturday, July 30, 2016 06:38:01 AM Rich Freeman wrote:

On Sat, Jul 30, 2016 at 6:24 AM, Alan McKinnon wrote:

On 29/07/2016 22:58, Mick wrote:

Interesting article explaining why Uber are moving away from PostgreSQL.
I am running both DBs on different desktop PCs for akonadi and I'm also
running MySQL on a number of websites.  Let's see which one goes sideways
first.  :p

 https://eng.uber.com/mysql-migration/


I don't think your akonadi and some web sites compare in any way to
Uber and what they do.

FWIW, my Dev colleagues support an entire large corporate ISP's
operational and customer data on PostgreSQL-9.3.  With clustering.
With no db-related issues :-)


Agree, you'd need to be fairly large-scale to have their issues,


And also have your database designed by people who think MySQL actually
follows common SQL standards.


but I
think the article was something anybody interested in databases should
read.  If nothing else it is a really easy-to-follow explanation of
the underlying architectures.


Check the link posted by Douglas.
Uber's article has some misunderstandings about the architecture, with
conclusions drawn that are, at least in part, caused by their database
design and usage.


I'll probably post this to my LUG mailing list.  I think one of the
Postgres devs lurks there so I'm curious to his impressions.

I was a bit surprised to hear about the data corruption bug.  I've
always considered Postgres to have a better reputation for data
integrity.


They do.


And of course almost any FOSS project could have a bug.  I
don't know if either project does the kind of regression testing to
reliably detect this sort of issue.


Not sure either, I do think PostgreSQL does a lot with regression tests.


I'd think that it is more likely
that the likes of Oracle would (for their flagship DB (not for MySQL),


Never worked with Oracle (or other big software vendors), have you? :)


and they'd probably be more likely to send out an engineer to beg
forgiveness while they fix your database).


Only if you're a big (as in, spend a lot of money with them) customer.


Of course, if you're Uber
the hit you'd take from downtime/etc isn't made up for entirely by
having somebody take a few days to get everything fixed.


--
Joost


I certainly respect your skills and posts on databases, Joost, as
everything you have posted in the past is 'spot on'.


Comes with a keen interest and long-term (think decades) experience of 
working with different databases.


Granted, I'm no database expert, far from it.


Not many people are, nor do they need to be.


But I want to share a few thing with you,
and hope you  (and others) will 'chime in' on these comments.

Way back, when the earth was cooling and we all had dinosaurs for pets,
some of us hacked on AT&T "3B2" unix systems. They were known for their
'roll back and recovery', triplicated (or more) transaction processes
and a 'voters' system to ferret out whether a transaction was complete and
correct. There was no ACID, the current 'gold standard' if you believe
what Douglas and others write about concerning databases.

In essence, (from crusted-up memories) a basic (SS7) transaction related
to the local telephone switch was run on 3 machines. The results were
compared. If they matched, the transaction went forward as valid. If 2/3
matched,


And what about the likely case when only 1 was correct?


1/3 was a failure; in fact, X<1 could be defined (via parameter setting) as a 
failure, depending on the need.



Have you seen the movie "Minority Report"?
If yes, think back to why Tom Cruise was found 'guilty' when he wasn't and how
often this actually occurred.


Apples to oranges.  The (3) "pre-cogs" were not equal: albeit they voted, 
most of the time all three were in agreement, but the dominant pre-cog was 
always on the correct side of the issue.  But that is make-believe. 
Comparing results of code run on 3 different processors or separate 
machines for agreement within tolerances is quite different.  The very 
essence of using voting, where a result less than 1.0 (that is, (n-1)/n 
or (n-x)/n) was acceptable, rested on identical (replicated) processes 
all returning the same result (expecting either a 0 or a 1).  Results 
were either logical matches or within rounding error of acceptance. 
Surely we need not split hairs.  I was merely pointing out that those 
basic telecom systems formed the early basis of the widespread 
transaction processing industry and are the granddaddy of the ACID 
models/norms/constructs of modern transaction processing.  And Douglas is 
dead wrong that those sorts of (ACID) transactions cannot be made to fly 
on clusters versus a single machine.  For massively parallel needs, 
distributed processing rules, but it is not trivial, and hence Uber, with 
mostly a bunch of kids, seems to be struggling and to have made bad 
decisions.  Probably, their mid managers and

Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread Rich Freeman
On Mon, Aug 1, 2016 at 1:31 PM, J. Roeleveld  wrote:
> On Monday, August 01, 2016 11:01:28 AM Rich Freeman wrote:
>> Neither my employer nor the big software provider
>> in question is likely to attract top-notch DB talent (indeed, mine has
>> steadily gotten rid of anybody who knows how to do anything in Oracle
>> beyond creating schemas it seems,
>
> Actively? Or by simply letting the good ones go while replacing them with
> someone less clued up?

A bit of both.  A big part of it was probably sacking anybody doing
anything other than creating tables (since you can't keep operating
without that), and outsourcing to 3rd parties and wanting
bottom-dollar prices.

There are accidentally some reasonably competent people in IT at my
company, but I don't think it is because we really are good at
targeting world-class talent.

>
> The problem is that the likes of Informatica (one
> of the leading ETL software vendors) don't actually support PostgreSQL.

Please tell me that it actually does support xml in a sane way, and it
is only our incompetent developers who seem to be hand-generating xml
files by printing strings?

I have an integration that involves Informatica, and another solution
that just synchronizes files from an smb share to a foreign FTP site.
Of course I don't have access to the share that lies in-between, so
when the interface breaks I get to play with two different groups to
try to figure out where the process died.  Informatica appears to be
running on Unix and I get helpful questions from the maintainers about
what path the files are on, as if I'd have any idea where some SMB
share (whose path I am not told) is mounted on some Unix server I have
no access to.

Gotta love division of labor.  Heaven forbid anybody have visibility
to the full picture so that the right group can be engaged on the
first try...

-- 
Rich



Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread Rich Freeman
On Mon, Aug 1, 2016 at 7:18 PM, Alan McKinnon  wrote:
>
> So the original article very much seems to have been written with a skewed
> bias and wrong focus. That's bias as in "shifted to one side as used in
> math" not bias as in "opinionated asshat beating some special drum"
>

Well, I wouldn't say "wrong focus" so much as "particular focus."  The
original article doesn't really purport to be a holistic comparison of
the two systems, just an explanation of why they're migrating.  I
think people are reading a bit too much into it.

However, the original article would probably benefit from a few
caveats thrown in.

-- 
Rich



Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread Alan McKinnon

On 01/08/2016 17:01, Rich Freeman wrote:

On Mon, Aug 1, 2016 at 3:16 AM, J. Roeleveld  wrote:

>
> Check the link posted by Douglas.
> Uber's article has some misunderstandings about the architecture, with
> conclusions drawn that are, at least in part, caused by their database
> design and usage.

I've read it.  I don't think it actually alleges any misunderstandings
about the Postgres architecture, but rather that it doesn't perform as
well in Uber's design.  I don't think it actually alleges that Uber's
design is a bad one in any way.



He does also make the stinger at the end:

In 2013 Uber migrated FROM MySQL TO Postgres, and now in 2016 they 
migrated FROM Postgres TO Schemaless (which just happens to have InnoDB 
as backend).


So the original article very much seems to have been written with a 
skewed bias and wrong focus. That's bias as in "shifted to one side as 
used in math" not bias as in "opinionated asshat beating some special drum"




Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Jeremi Piotrowski
On Mon, Aug 01, 2016 at 10:21:20PM +0100, Mick wrote:
> 
> I think libreoffice, chromium and firefox will be compiled in a chroot from 
> now 
> on and then emerged as binaries.  This is the difference for libreoffice:
> 
>  Sat Aug 29 06:09:09 2015 >>> app-office/libreoffice-4.4.4.3
>merge time: 15 hours, 34 minutes and 2 seconds.
> 
>  Sun Sep 13 01:36:03 2015 >>> app-office/libreoffice-4.4.5.2
>merge time: 15 hours, 13 minutes and 17 seconds.
> 
>  Sun Nov 29 02:30:04 2015 >>> app-office/libreoffice-5.0.3.2
>merge time: 16 hours, 54 minutes and 28 seconds.
> 
>  Sun Mar 27 09:31:20 2016 >>> app-office/libreoffice-5.0.5.2
>merge time: 17 hours and 8 seconds.
> 
>  Mon Aug  1 22:17:15 2016 >>> app-office/libreoffice-5.1.4.2
>merge time: 1 minute and 31 seconds.
> 
> (chromium takes even longer!)  :-)
>

Does it make sense to compile your own versions of these packages and then
binary merge, when portage already contains binary ebuilds for these
packages? (firefox-bin/libreoffice-bin/google-chrome)




Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Mick
On Monday 01 Aug 2016 18:57:53 Mick wrote:
> On Monday 01 Aug 2016 17:32:58 Mick wrote:
> > On Monday 01 Aug 2016 12:19:41 waltd...@waltdnes.org wrote:
> > > > What chroot() actually does is fairly simple, it modifies pathname
> > > > lookups for a process and its children so that any reference to a path
> > > > starting '/' will effectively have the new root, which is passed as
> > > > the single argument, prepended onto the path. The current working
> > > > directory is left unchanged and relative paths can still refer to
> > > > files outside of the new root.
> > 
> > Thanks Walter, it's present along with the whole of the 32bit OS fs:
> > 
> > gentoo-32bit # ls -la /mnt/iso/gentoo-32bit/bin/bash
> > -rwxr-xr-x 1 root root 677244 Jan 16  2016 /mnt/iso/gentoo-32bit/bin/bash
> > 
> > gentoo-32bit # file /mnt/iso/gentoo-32bit/bin/bash
> > /mnt/iso/gentoo-32bit/bin/bash: ELF 32-bit LSB executable, Intel 80386,
> > version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for
> > GNU/Linux 2.6.32, stripped
> > 
> > 
> > Am I missing something in the amd64 kernel to be able to execute 32bit
> > code?
> No, I was missing the *whole* of the 32bit fs /lib directory.  O_O
> 
> Apologies for the noise.

I think libreoffice, chromium and firefox will be compiled in a chroot from now 
on and then emerged as binaries.  This is the difference for libreoffice:

 Sat Aug 29 06:09:09 2015 >>> app-office/libreoffice-4.4.4.3
   merge time: 15 hours, 34 minutes and 2 seconds.

 Sun Sep 13 01:36:03 2015 >>> app-office/libreoffice-4.4.5.2
   merge time: 15 hours, 13 minutes and 17 seconds.

 Sun Nov 29 02:30:04 2015 >>> app-office/libreoffice-5.0.3.2
   merge time: 16 hours, 54 minutes and 28 seconds.

 Sun Mar 27 09:31:20 2016 >>> app-office/libreoffice-5.0.5.2
   merge time: 17 hours and 8 seconds.

 Mon Aug  1 22:17:15 2016 >>> app-office/libreoffice-5.1.4.2
   merge time: 1 minute and 31 seconds.

(chromium takes even longer!)  :-)
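A minimal sketch of the workflow I have in mind (package name just an 
example): build binary packages inside the chroot, then install them 
outside without recompiling:

# inside the 32bit chroot
FEATURES="buildpkg" emerge -1a app-office/libreoffice

# on the target, with PKGDIR pointing at the chroot's packages
emerge --usepkgonly -1a app-office/libreoffice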

-- 
Regards,
Mick



[gentoo-user] SSD over the SAS controller

2016-08-01 Thread Raphael MD
Hi,

I've a question about using SATA SSDs on the SAS controller of
workstation motherboards.

I didn't find any benchmark comparing the CPU/memory load, for example.

I suppose that if I install an SSD on the SAS controller, my CPU/memory
load will be lower and I will get better performance, because the SAS
controller's dedicated CPU/memory will do the job using its own resources.

Is it right?


Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread Rich Freeman
On Mon, Aug 1, 2016 at 12:49 PM, J. Roeleveld  wrote:
> On Monday, August 01, 2016 08:43:49 AM james wrote:
>
>> Sure this part is only related to
>> transaction processing as there was much more to the "five 9s" legacy,
>> but imho, that is the heart of what was the precursor to ACID property's
>> now so greatly espoused in SQL codes that Douglas refers to.
>>
>> Do folks concur or disagree at this point?
>
> ACID is about data integrity. The "best 2 out of 3" voting was, in my opinion,
> a work-around for unreliable hardware. It is based on a clever idea, but when
> 2 computers having the same data and logic come up with 2 different answers, I
> wouldn't trust either of them.

I agree, this was a solution for hardware issues.  However, hardware
issues can STILL happen today, so there is an argument for it.  There
are really two ways to get to robustness: clever hardware, and clever
software.  The old way was to do it in hardware, the newer way is to
do it in software (see Google with their racks of cheap motherboards).
I suspect software will always be the better way, but you can't just
write a check to get better software the way you can with hardware.
Doing it right with software means hiring really good people, which is
something a LOT of companies don't want to do (well, they think
they're doing it, but they're not).

Basically I believe the concept with the mainframe was that you could
probably open the thing up, break one random board with a hammer, and
the application would still keep running just fine.  IBM would then
magically show up the next day and replace the board without anybody
doing anything.  All the hardware had redundancy, so you can run your
application for a decade or two without fear of a hardware failure.

However, you pay a small fortune for all of this.  The other trend as
I understand it in mainframes is renting your own hardware to you.
That is, you buy a box, and you can just pay to turn on extra
CPUs/etc.  You can imagine what the margins are like for that to be
practical, but for non-trendy businesses that don't want to offer free
ice cream and pay Silicon Valley wages I guess it is an alternative to
building good software.

>
> You have seen how "democracies" work, right? :)
> The more voters involved, the longer it takes for all the votes to be counted.
> With a small number, it might actually still scale, but when you pass a magic
> number (no clue what this would be), the counting time starts to exceed any
> time you might have gained by adding more voters.
>
> Also, this, to me, seems to counteract the whole reason for using clusters:
> Have different nodes handle a different part of the problem.

I agree.  The old mainframe way of doing things isn't going to make
anything faster.  I don't think it will necessarily make things much
slower as long as all the hardware is in the same box.  However, if
you want to start doing this at a cluster scale with offsite replicas
I imagine the latencies would kill just about anything.  That was one
of the arguments against the Postgres vacuum approach where replicas
could end up having in-use records deleted.  The solutions are to
delay the replicas (not great), or synchronize back to the master
(also not great).  The MySQL approach apparently lets all the replicas
do their own vacuuming, which does neatly solve that particular
problem (presumably at the cost of more work for the replicas, and of
course they're no longer binary replicas).

>
> The way Uber created the cluster is useful when having 1 node handle all the
> updates and multiple nodes providing read-only access while also providing
> failover functionality.

I agree.  I do remember listening to a Postgres talk by one of the
devs and while everybody's holy grail is the magical replica where you
just have a bunch of replicas and you do any operation on any replica
and everything is up to date, in reality that is almost impossible to
achieve with any solution.

-- 
Rich



Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Mick
On Monday 01 Aug 2016 17:32:58 Mick wrote:
> On Monday 01 Aug 2016 12:19:41 waltd...@waltdnes.org wrote:

> > > What chroot() actually does is fairly simple, it modifies pathname
> > > lookups for a process and its children so that any reference to a path
> > > starting '/' will effectively have the new root, which is passed as
> > > the single argument, prepended onto the path. The current working
> > > directory is left unchanged and relative paths can still refer to
> > > files outside of the new root.
> 
> Thanks Walter, it's present along with the whole of the 32bit OS fs:
> 
> gentoo-32bit # ls -la /mnt/iso/gentoo-32bit/bin/bash
> -rwxr-xr-x 1 root root 677244 Jan 16  2016 /mnt/iso/gentoo-32bit/bin/bash
> 
> gentoo-32bit # file /mnt/iso/gentoo-32bit/bin/bash
> /mnt/iso/gentoo-32bit/bin/bash: ELF 32-bit LSB executable, Intel 80386,
> version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for
> GNU/Linux 2.6.32, stripped
> 
> 
> Am I missing something in the amd64 kernel to be able to execute 32bit code?

No, I was missing the *whole* of the 32bit fs /lib directory.  O_O

Apologies for the noise.
-- 
Regards,
Mick



Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread J. Roeleveld
On Monday, August 01, 2016 11:01:28 AM Rich Freeman wrote:
> On Mon, Aug 1, 2016 at 3:16 AM, J. Roeleveld  wrote:
> > Check the link posted by Douglas.
> > Uber's article has some misunderstandings about the architecture, with
> > conclusions drawn that are, at least in part, caused by their database
> > design and usage.
> 
> I've read it.  I don't think it actually alleges any misunderstandings
> about the Postgres architecture, but rather that it doesn't perform as
> well in Uber's design.  I don't think it actually alleges that Uber's
> design is a bad one in any way.

It was written quite diplomatically. Seeing the create table statements for the 
sample tables already makes me wonder how they designed their database schema, 
especially from a performance point of view. But that is a separate discussion :)

> But, I'm certainly interested in anything else that develops here...

Same here, and I am hoping some others will also come up with some interesting 
bits.

> >> And of course almost any FOSS project could have a bug.  I
> >> don't know if either project does the kind of regression testing to
> >> reliably detect this sort of issue.
> > 
> > Not sure either, I do think PostgreSQL does a lot with regression tests.
> 
> Obviously they missed that bug.  Of course, so did Uber in their
> internal testing.  I've seen a DB bug in production (granted, only one
> so far) and they aren't pretty.  A big issue for Uber is that their
> transaction rate and DB size is such that they really don't have a
> practical option of restoring backups.

From the slides on their migration from MySQL to PostgreSQL in 2013, I see it 
took them 45 minutes to migrate 50GB of data.
To me, that seems like a very bad transfer rate for what I would consider a 
dev environment. It's only about 20MB/s.
I've seen "badly performing" ETL processes reading from 300GB of XML files and 
loading that into 3 DB-tables within 1.5 hours. That's about 57MB/s, 
with the XML-engine using up nearly 98% of the total CPU-load.

If the data had been supplied in CSV files, it would have been roughly 
100GB of data. This could easily be loaded within 20 minutes, equalling 
85MB/s (filling up the network bandwidth).
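As a rough sketch of the kind of bulk load I mean (database, table and 
file names hypothetical), PostgreSQL can ingest CSV directly:

psql -d mydb -c "\copy big_table FROM 'data.csv' WITH (FORMAT csv)"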

I think their database design and infrastructure isn't optimized for their 
specific work-load. Which is, unfortunately, quite common.

> Obviously they'd do that in a
> complete disaster, but short of that they can't really afford to do
> so.  By the time a backup is recorded it would be incredibly out of
> date.  They have the same issue with the lack of online upgrades
> (which the responding article doesn't really talk about).  They really
> need it to just work all the time.

When I migrate PostgreSQL to a new major version, I migrate 1 database at a 
time to minimize downtime. This is done by piping the output of the backup 
process straight into a restore process connected to the new server.
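In its simplest form that pipe looks like this (hostnames and database 
name illustrative):

pg_dump -h oldserver -Fc mydb | pg_restore -h newserver -d mydb

or, with a plain-text dump, pg_dump piped straight into psql on the new 
server.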

If it were even more time-critical, I would develop a migration process that 
would:
1) copy all the current data (as in, needed today) to the new database
2) disable the application
3) copy all the latest changes for today to the new database
4) re-enable the application (pointing to the new database)
5) copy all the historical data I might need

I would first add a note on the website and send out an email informing the 
customers that the data is being migrated and historical data might be 
incomplete during this process.

> >> I'd think that it is more likely
> >> that the likes of Oracle would (for their flagship DB (not for MySQL),
> > 
> > Never worked with Oracle (or other big software vendors), have you? :)
> 
> Actually, I almost exclusively work with them.  Some are better than
> others.  I don't work directly with Oracle, but I can say that the two
> times I've worked with an Oracle consultant they've been worth their
> weight in gold, and cost about as much.

They do have some good ones...

> The one was fixing some kind
> of RDB data corruption on a VAX that was easily a decade out of date
> at the time; I was shocked that they could find somebody who knew how
to fix it.  Interestingly, it looks like they only abandoned RDB
> recently.

Probably one of the few people in the world. And he/she might have been hired 
in by Oracle for this particular issue.

> They do tend to be a solution that involves throwing money at
> problems.  My employer was having issues with a database from another
> big software vendor which I'm sure was the result of bad application
> design, but throwing Exadata at it did solve the problem, at an
> astonishing price.

I was at Collaborate last year and spoke to some of the guys from Oracle. (Not 
going into specifics to protect their jobs). When asked if one of my customers 
should be using Oracle RAC or Exadata, the answer came down to: "If you think 
RAC might be sufficient, it usually is"

Exadata, however, is a really nice design. But throwing faster machines at a 
problem should only be part of the solution.
I know s

Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread J. Roeleveld
On Monday, August 01, 2016 08:43:49 AM james wrote:
> On 08/01/2016 02:16 AM, J. Roeleveld wrote:
> > On Saturday, July 30, 2016 06:38:01 AM Rich Freeman wrote:
> >> On Sat, Jul 30, 2016 at 6:24 AM, Alan McKinnon wrote:
> >>> On 29/07/2016 22:58, Mick wrote:
>  Interesting article explaining why Uber are moving away from PostgreSQL.
>  I am running both DBs on different desktop PCs for akonadi and I'm also
>  running MySQL on a number of websites.  Let's see which one goes sideways
>  first.  :p
>  
>   https://eng.uber.com/mysql-migration/
> >>> 
> >>> I don't think your akonadi and some web sites compare in any way to
> >>> Uber and what they do.
> >>> 
> >>> FWIW, my Dev colleagues support an entire large corporate ISP's
> >>> operational and customer data on PostgreSQL-9.3.  With clustering.
> >>> With no db-related issues :-)
> >> 
> >> Agree, you'd need to be fairly large-scale to have their issues,
> > 
> > And also have your database designed by people who think MySQL actually
> > follows common SQL standards.
> > 
> >> but I
> >> think the article was something anybody interested in databases should
> >> read.  If nothing else it is a really easy-to-follow explanation of
> >> the underlying architectures.
> > 
> > Check the link posted by Douglas.
> > Uber's article has some misunderstandings about the architecture, with
> > conclusions drawn that are, at least in part, caused by their database
> > design and usage.
> > 
> >> I'll probably post this to my LUG mailing list.  I think one of the
> >> Postgres devs lurks there so I'm curious to his impressions.
> >> 
> >> I was a bit surprised to hear about the data corruption bug.  I've
> >> always considered Postgres to have a better reputation for data
> >> integrity.
> > 
> > They do.
> > 
> >> And of course almost any FOSS project could have a bug.  I
> >> don't know if either project does the kind of regression testing to
> >> reliably detect this sort of issue.
> > 
> > Not sure either, I do think PostgreSQL does a lot with regression tests.
> > 
> >> I'd think that it is more likely
> >> that the likes of Oracle would (for their flagship DB (not for MySQL),
> > 
> > Never worked with Oracle (or other big software vendors), have you? :)
> > 
> >> and they'd probably be more likely to send out an engineer to beg
> >> forgiveness while they fix your database).
> > 
> > Only if you're a big (as in, spend a lot of money with them) customer.
> > 
> >> Of course, if you're Uber
> >> the hit you'd take from downtime/etc isn't made up for entirely by
> >> having somebody take a few days to get everything fixed.
> > 
> > --
> > Joost
> 
> I certainly respect your skills and posts on databases, Joost, as
> everything you have posted in the past is 'spot on'.

Comes with a keen interest and long-term (think decades) experience of 
working with different databases.

> Granted, I'm no database expert, far from it.

Not many people are, nor do they need to be.

> But I want to share a few thing with you,
> and hope you  (and others) will 'chime in' on these comments.
> 
> Way back, when the earth was cooling and we all had dinosaurs for pets,
> some of us hacked on AT&T "3B2" unix systems. They were known for their
> 'roll back and recovery', triplicated (or more) transaction processes
> and a 'voters' system to ferret out whether a transaction was complete and
> correct. There was no ACID, the current 'gold standard' if you believe
> what Douglas and others write about concerning databases.
> 
> In essence, (from crusted-up memories) a basic (SS7) transaction related
> to the local telephone switch was run on 3 machines. The results were
> compared. If they matched, the transaction went forward as valid. If 2/3
> matched,

And what about the likely case when only 1 was correct?
Have you seen the movie "Minority Report"?
If yes, think back to why Tom Cruise was found 'guilty' when he wasn't and how 
often this actually occurred.

> and the switch was so configured, the code would
> essentially 'vote' and majority ruled. This is what led to phone calls
> (switched phone calls) having variable delays, often in the order of
> seconds, mis-connections and other problems we all encountered during
> periods of excessive demand.

Not sure if that was the cause in the past, but these days it can also still 
take a few seconds before the other end rings. This is due to the phone system 
(all PBXs in the path) needing to set up the routing between both end-points 
prior to the ring-tone actually starting.
When the system is busy, these lookups will take time and can even time out. 
(Try wishing everyone you know a happy new year using a wired phone and you'll 
see what I mean. Mobile phones have a separate problem at that time.)

> That scenario was at the heart of how old, crappy AT&T unix (SVR?) could
> perform so well and therefore established the gold standard for RT
> transaction processing, aka the "five 9s" 99.999% of

Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Mick
On Monday 01 Aug 2016 12:19:41 waltd...@waltdnes.org wrote:
> On Mon, Aug 01, 2016 at 04:46:24PM +0100, Mick wrote
> 
> > On Monday 01 Aug 2016 11:23:03 waltd...@waltdnes.org wrote:
> > >   I recommend going with one of 3 "cheats"...
> > > 
> > > 1) A 32-bit chroot in a 64-bit machine
> > > 
> > > 2) A QEMU (or VirtualBox) 32-bit guest on a 64-bit host
> > > 
> > > 3) If you have a spare 64-bit machine, install 32-bit Gentoo on it
> > > 
> > >   I use option 2) both as my distccd server and to manually build Pale
> > > 
> > > Moon.  The target in both cases is an ancient 32-bit-only Atom netbook.
> > 
> > I'm trying your cheat (1) above, but I must be doing something wrong:
> > 
> > gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit /bin/bash
> > chroot: failed to run command ‘/bin/bash’: No such file or directory
> > 
> > gentoo-32bit # ls -la /bin/bash
> > -rwxr-xr-x 1 root root 705400 Jan  9  2016 /bin/bash
> > 
> > gentoo-32bit # ls -la ./bin/bash
> > -rwxr-xr-x 1 root root 677244 Jan 16  2016 ./bin/bash
> > 
> > gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit ./bin/bash
> > chroot: failed to run command ‘./bin/bash’: No such file or directory
> 
>   I believe that "/bin/bash" is the pathname after you switch to the
> chroot environment.  So you would need a 32-bit bash located at
> /mnt/iso/gentoo-32bit/bin/bash *BEFORE CHROOTING*.  See
> https://lwn.net/Articles/252794/
> 
> > What chroot() actually does is fairly simple, it modifies pathname
> > lookups for a process and its children so that any reference to a path
> > starting '/' will effectively have the new root, which is passed as
> > the single argument, prepended onto the path. The current working
> > directory is left unchanged and relative paths can still refer to
> > files outside of the new root.

Thanks Walter, it's present along with the whole of the 32bit OS fs:

gentoo-32bit # ls -la /mnt/iso/gentoo-32bit/bin/bash
-rwxr-xr-x 1 root root 677244 Jan 16  2016 /mnt/iso/gentoo-32bit/bin/bash

gentoo-32bit # file /mnt/iso/gentoo-32bit/bin/bash
/mnt/iso/gentoo-32bit/bin/bash: ELF 32-bit LSB executable, Intel 80386, 
version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for 
GNU/Linux 2.6.32, stripped


Am I missing something in the amd64 kernel to be able to execute 32bit code?
-- 
Regards,
Mick



Re: [gentoo-user] cross-compile attempt

2016-08-01 Thread Mick
On Monday 01 Aug 2016 16:49:15 Mick wrote:

> Thank you Peter, I seem to have posted a few seconds before I received your
> message.  From what you're showing above I seem to have not performed a
> correct mount of the chroot fs.  I better rinse and repeat ...

Hmm ... I followed the handbook this time to make sure all is correctly 
mounted:

gentoo-32bit # mount | grep 32bit
proc on /mnt/iso/gentoo-32bit/proc type proc (rw,relatime)
sysfs on /mnt/iso/gentoo-32bit/sys type sysfs 
(rw,nosuid,nodev,noexec,relatime)
debugfs on /mnt/iso/gentoo-32bit/sys/kernel/debug type debugfs 
(rw,nosuid,nodev,noexec,relatime)
fusectl on /mnt/iso/gentoo-32bit/sys/fs/fuse/connections type fusectl 
(rw,nosuid,nodev,noexec,relatime)
efivarfs on /mnt/iso/gentoo-32bit/sys/firmware/efi/efivars type efivarfs 
(rw,nosuid,nodev,noexec,relatime)
cgroup_root on /mnt/iso/gentoo-32bit/sys/fs/cgroup type tmpfs 
(rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755)
openrc on /mnt/iso/gentoo-32bit/sys/fs/cgroup/openrc type cgroup 
(rw,nosuid,nodev,noexec,relatime,release_agent=/lib64/rc/sh/cgroup-release-
agent.sh,name=openrc)
cpuset on /mnt/iso/gentoo-32bit/sys/fs/cgroup/cpuset type cgroup 
(rw,nosuid,nodev,noexec,relatime,cpuset)
cpu on /mnt/iso/gentoo-32bit/sys/fs/cgroup/cpu type cgroup 
(rw,nosuid,nodev,noexec,relatime,cpu)
cpuacct on /mnt/iso/gentoo-32bit/sys/fs/cgroup/cpuacct type cgroup 
(rw,nosuid,nodev,noexec,relatime,cpuacct)
blkio on /mnt/iso/gentoo-32bit/sys/fs/cgroup/blkio type cgroup 
(rw,nosuid,nodev,noexec,relatime,blkio)
freezer on /mnt/iso/gentoo-32bit/sys/fs/cgroup/freezer type cgroup 
(rw,nosuid,nodev,noexec,relatime,freezer)
net_cls on /mnt/iso/gentoo-32bit/sys/fs/cgroup/net_cls type cgroup 
(rw,nosuid,nodev,noexec,relatime,net_cls)
net_prio on /mnt/iso/gentoo-32bit/sys/fs/cgroup/net_prio type cgroup 
(rw,nosuid,nodev,noexec,relatime,net_prio)
pids on /mnt/iso/gentoo-32bit/sys/fs/cgroup/pids type cgroup 
(rw,nosuid,nodev,noexec,relatime,pids)
dev on /mnt/iso/gentoo-32bit/dev type devtmpfs 
(rw,nosuid,relatime,size=10240k,nr_inodes=1915538,mode=755)
mqueue on /mnt/iso/gentoo-32bit/dev/mqueue type mqueue 
(rw,nosuid,nodev,noexec,relatime)
devpts on /mnt/iso/gentoo-32bit/dev/pts type devpts 
(rw,nosuid,noexec,relatime,gid=5,mode=620)
shm on /mnt/iso/gentoo-32bit/dev/shm type tmpfs 
(rw,nosuid,nodev,noexec,relatime)


I must be making some newbie error ... because I still get:

gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit /bin/bash
chroot: failed to run command ‘/bin/bash’: No such file or directory


what would this error be?  :-/

-- 
Regards,
Mick



Re: [gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread waltdnes
On Mon, Aug 01, 2016 at 04:46:24PM +0100, Mick wrote
> On Monday 01 Aug 2016 11:23:03 waltd...@waltdnes.org wrote:
> 
> >   I recommend going with one of 3 "cheats"...
> > 
> > 1) A 32-bit chroot in a 64-bit machine
> > 
> > 2) A QEMU (or VirtualBox) 32-bit guest on a 64-bit host
> > 
> > 3) If you have a spare 64-bit machine, install 32-bit Gentoo on it
> > 
> >   I use option 2) both as my distccd server and to manually build Pale
> > Moon.  The target in both cases is an ancient 32-bit-only Atom netbook.
> 
> I'm trying your cheat (1) above, but I must be doing something wrong:
> 
> gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit /bin/bash
> chroot: failed to run command ‘/bin/bash’: No such file or directory
> 
> gentoo-32bit # ls -la /bin/bash
> -rwxr-xr-x 1 root root 705400 Jan  9  2016 /bin/bash
> 
> gentoo-32bit # ls -la ./bin/bash
> -rwxr-xr-x 1 root root 677244 Jan 16  2016 ./bin/bash
> 
> gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit ./bin/bash
> chroot: failed to run command ‘./bin/bash’: No such file or directory

  I believe that "/bin/bash" is the pathname after you switch to the
chroot environment.  So you would need a 32-bit bash located at
/mnt/iso/gentoo-32bit/bin/bash *BEFORE CHROOTING*.  See
https://lwn.net/Articles/252794/

> What chroot() actually does is fairly simple, it modifies pathname
> lookups for a process and its children so that any reference to a path
> starting '/' will effectively have the new root, which is passed as
> the single argument, prepended onto the path. The current working
> directory is left unchanged and relative paths can still refer to
> files outside of the new root.
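  A quick sanity check before chrooting (paths as in your example) is to
confirm that both the shell *and* its interpreter exist under the new
root:

file /mnt/iso/gentoo-32bit/bin/bash          # should report ELF 32-bit
ls /mnt/iso/gentoo-32bit/lib/ld-linux.so.2   # the interpreter must exist too

If /lib is missing, the kernel cannot load the binary, and chroot
reports the same misleading "No such file or directory".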

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] cross-compile attempt

2016-08-01 Thread Mick
On Monday 01 Aug 2016 16:31:18 Peter Humphrey wrote:
> On Monday 01 Aug 2016 14:51:02 Mick wrote:
> > Given Andrew's steer I had another look and found this guide:
> > 
> > https://wiki.gentoo.org/wiki/Project:AMD64/32-bit_Chroot_Guide
> > 
> > Is this approach still valid, or have things moved on since this article
> > was authored (2012) and different configuration/approach is now
> > recommended?
> 
> I use that method to maintain a 32-bit Atom box. I export its /usr/portage
> via NFS to a chroot on this i7 box and build packages in the chroot. Then I
> install from packages on the Atom.
> 
> I'm sure it would be just as easy, or more so, to mount the whole Atom file
> system and work in it as though I had an i7 processor on the Atom file
> system. I may try that again. Meanwhile, here's my /etc/init.d/atom script.
> The mtab I copy in contains enough entries to forestall error messages:
> 
> $ cat /etc/init.d/atom
> #!/sbin/openrc-run
> depend() {
>need localmount
>need bootmisc
> }
> start() {
> ebegin "Mounting 32-bit chroot dirs under /mnt/atom"
> mount -t proc /proc /mnt/atom/proc
> mount --rbind /dev /mnt/atom/dev
> mount --rbind /sys /mnt/atom/sys
> mount --rbind /var/tmp/portage /mnt/atom/var/tmp/portage
> mount -t nfs 192.168.1.2:/usr/portage/packages /mnt/atom/usr/portage/packages
> cp /etc/mtab.atom /mnt/atom/etc/mtab
> eend $? "Error mounting 32-bit chroot directories"
> }
> stop() {
> ebegin "Unmounting 32-bit /mnt/atom chroot dirs"
> rm /mnt/atom/etc/mtab
> umount -f /mnt/atom/var/tmp/portage
> umount -f /mnt/atom/sys/firmware/efi/efivars
> umount -f /mnt/atom/sys/fs/pstore
> umount -f /mnt/atom/sys/fs/cgroup/openrc
> umount -f /mnt/atom/sys/fs/cgroup/cpuset
> umount -f /mnt/atom/sys/fs/cgroup/cpu
> umount -f /mnt/atom/sys/fs/cgroup/cpuacct
> umount -f /mnt/atom/sys/fs/cgroup/freezer
> umount -f /mnt/atom/sys/fs/cgroup
> umount -f /mnt/atom/sys/kernel/config
> umount -f /mnt/atom/sys/kernel/debug
> umount -f /mnt/atom/dev/pts
> umount -f /mnt/atom/dev/shm
> umount -f /mnt/atom/dev/mqueue
> umount -f /mnt/atom/proc
> umount -f /mnt/atom/sys
> umount -f /mnt/atom/dev
> umount -f /mnt/atom/usr/portage/packages
> eend $? "Error unmounting 32-bit chroot directories"
> }
> 
> Of course I haven't bothered with idiot-proofing it, as I'm the only one
> here.
> 
> HTH.

Thank you Peter, I seem to have posted a few seconds before I received your 
message.  From what you're showing above I seem to have not performed a 
correct mount of the chroot fs.  I better rinse and repeat ...
-- 
Regards,
Mick



Re: [gentoo-user] How to correctly handle multiple Qt versions (qt4 X qt5)

2016-08-01 Thread Fernando Rodriguez

On 08/01/2016 08:17 AM, Francisco Ares wrote:
> Hi all.
> 
> In this Gentoo system, there are packages that still need Qt-4, while the
> newest KDE, for instance, needs Qt-5.
> 
> Even after inserting entries in "/etc/portage/package.use" for the packages
> that need qt4, the emerge still fails, arguing that the package needs Qt-4.
> 
> On this system, "qtchooser" has never worked properly - as far as I could
> understand it - so I'm used to managing the "default.conf" symlink at
> "/etc/xdg/qtchooser/" - is it correct to do so?  If not, what should I do?
> 
> Thanks!
> Francisco
> 

Just resolve the conflicts and each program will use the right version. So post
the emerge output.

-- 

Fernando Rodriguez



[gentoo-user] Re: [was cross-compile attempt] 32bit chroot

2016-08-01 Thread Mick
On Monday 01 Aug 2016 11:23:03 waltd...@waltdnes.org wrote:

>   I recommend going with one of 3 "cheats"...
> 
> 1) A 32-bit chroot in a 64-bit machine
> 
> 2) A QEMU (or VirtualBox) 32-bit guest on a 64-bit host
> 
> 3) If you have a spare 64-bit machine, install 32-bit Gentoo on it
> 
>   I use option 2) both as my distccd server and to manually build Pale
> Moon.  The target in both cases is an ancient 32-bit-only Atom netbook.

I'm trying your cheat (1) above, but I must be doing something wrong:

gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit /bin/bash
chroot: failed to run command ‘/bin/bash’: No such file or directory

gentoo-32bit # ls -la /bin/bash
-rwxr-xr-x 1 root root 705400 Jan  9  2016 /bin/bash

gentoo-32bit # ls -la ./bin/bash
-rwxr-xr-x 1 root root 677244 Jan 16  2016 ./bin/bash

gentoo-32bit # linux32 chroot /mnt/iso/gentoo-32bit ./bin/bash
chroot: failed to run command ‘./bin/bash’: No such file or directory

gentoo-32bit # mount | grep 32bit
dev on /mnt/iso/gentoo-32bit/dev type devtmpfs 
(rw,nosuid,relatime,size=10240k,nr_inodes=1915538,mode=755)
devpts on /mnt/iso/gentoo-32bit/dev/pts type devpts 
(rw,nosuid,noexec,relatime,gid=5,mode=620)
shm on /mnt/iso/gentoo-32bit/dev/shm type tmpfs 
(rw,nosuid,nodev,noexec,relatime)
proc on /mnt/iso/gentoo-32bit/proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /mnt/iso/gentoo-32bit/sys type sysfs 
(rw,nosuid,nodev,noexec,relatime)
/dev/sdb3 on /mnt/iso/gentoo-32bit/usr/portage type btrfs 
(rw,noatime,space_cache,subvolid=5,subvol=/)
tmpfs on /mnt/iso/gentoo-32bit/tmp type tmpfs (rw,nosuid,noatime,nodiratime)

Any clues?
-- 
Regards,
Mick



Re: [gentoo-user] cross-compile attempt

2016-08-01 Thread Peter Humphrey
On Monday 01 Aug 2016 14:51:02 Mick wrote:

> Given Andrew's steer I had another look and found this guide:
> 
> https://wiki.gentoo.org/wiki/Project:AMD64/32-bit_Chroot_Guide
> 
> Is this approach still valid, or have things moved on since this article
> was authored (2012) and different configuration/approach is now
> recommended?

I use that method to maintain a 32-bit Atom box. I export its /usr/portage
via NFS to a chroot on this i7 box and build packages in the chroot. Then I
install from packages on the Atom.

I'm sure it would be just as easy, or more so, to mount the whole Atom file
system and work in it as though I had an i7 processor on the Atom file
system. I may try that again. Meanwhile, here's my /etc/init.d/atom script.
The mtab I copy in contains enough entries to forestall error messages:

$ cat /etc/init.d/atom
#!/sbin/openrc-run
depend() {
   need localmount
   need bootmisc
}
start() {
ebegin "Mounting 32-bit chroot dirs under /mnt/atom"
mount -t proc /proc /mnt/atom/proc
mount --rbind /dev /mnt/atom/dev
mount --rbind /sys /mnt/atom/sys
mount --rbind /var/tmp/portage /mnt/atom/var/tmp/portage
mount -t nfs 192.168.1.2:/usr/portage/packages 
/mnt/atom/usr/portage/packages
cp /etc/mtab.atom /mnt/atom/etc/mtab
eend $? "Error mounting 32-bit chroot directories"
}
stop() {
ebegin "Unmounting 32-bit /mnt/atom chroot dirs"
rm /mnt/atom/etc/mtab
umount -f /mnt/atom/var/tmp/portage
umount -f /mnt/atom/sys/firmware/efi/efivars
umount -f /mnt/atom/sys/fs/pstore
umount -f /mnt/atom/sys/fs/cgroup/openrc
umount -f /mnt/atom/sys/fs/cgroup/cpuset
umount -f /mnt/atom/sys/fs/cgroup/cpu
umount -f /mnt/atom/sys/fs/cgroup/cpuacct
umount -f /mnt/atom/sys/fs/cgroup/freezer
umount -f /mnt/atom/sys/fs/cgroup
umount -f /mnt/atom/sys/kernel/config
umount -f /mnt/atom/sys/kernel/debug
umount -f /mnt/atom/dev/pts
umount -f /mnt/atom/dev/shm
umount -f /mnt/atom/dev/mqueue
umount -f /mnt/atom/proc
umount -f /mnt/atom/sys
umount -f /mnt/atom/dev
umount -f /mnt/atom/usr/portage/packages
eend $? "Error unmounting 32-bit chroot directories"
}

Of course I haven't bothered with idiot-proofing it, as I'm the only one
here.
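For anyone copying the idea: it's a normal OpenRC service, so something
like

rc-service atom start
rc-update add atom default    # optional, to start it at boot

should do (untested outside my own setup).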

HTH.

-- 
Rgds
Peter




Re: [gentoo-user] cross-compile attempt

2016-08-01 Thread waltdnes
On Sun, Jul 31, 2016 at 07:40:37PM +0100, Mick wrote
> Hi All,
> 
> I am dipping my toe into cross-compile territory, in order to build i686 
> binaries for a 32bit box, which is too old to do its own emerges.  I am using 
> an amd64 box which is significantly faster to do all the heavy lifting and 
> started applying this page:
> 
> https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Creating_a_cross-compiler

  "True cross-compiling" is a pain.  It will not work for non trivial
stuff, which builds against glib/glibc and other system libraries,
unless you pull in a whole bunch of "compatability libraries" for the
target architecture.  On my "no-multilib" system, it doesn't work.
Sure, distcc "works transparently", but a bunch of builds get sent back
to the target machine to be done.  That defeats the whole point of
distcc, which is to do all the work on the more powerful machine...

  I recommend going with one of 3 "cheats"...

1) A 32-bit chroot in a 64-bit machine

2) A QEMU (or VirtualBox) 32-bit guest on a 64-bit host

3) If you have a spare 64-bit machine, install 32-bit Gentoo on it

  I use option 2) both as my distccd server and to manually build Pale
Moon.  The target in both cases is an ancient 32-bit-only Atom netbook.
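  As a rough illustration of option 2) (image name, memory size and
flags are just examples, not a recommendation):

qemu-system-i386 -m 2048 -smp 2 -hda gentoo32.img -net nic -net user

with an ordinary 32-bit stage3 installed inside the disk image.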

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] Genlop oddity

2016-08-01 Thread Peter Humphrey
On Monday 01 Aug 2016 10:03:53 John Blinka wrote:

> Glad to see that someone else has experienced the same thing, and I'm not
> going crazy (although some might argue this is hardly proof...)

No, but it'll do as a bit of evidence pro tem.  :)

-- 
Rgds
Peter




Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread Rich Freeman
On Mon, Aug 1, 2016 at 3:16 AM, J. Roeleveld  wrote:
>
> Check the link posted by Douglas.
> Uber's article has some misunderstandings about the architecture, with
> conclusions drawn that are, at least in part, caused by their database
> design and usage.

I've read it.  I don't think it actually alleges any misunderstandings
about the Postgres architecture, but rather that it doesn't perform as
well in Uber's design.  I don't think it actually alleges that Uber's
design is a bad one in any way.

But, I'm certainly interested in anything else that develops here...

>
>> And of course almost any FOSS project could have a bug.  I
>> don't know if either project does the kind of regression testing to
>> reliably detect this sort of issue.
>
> Not sure either, I do think PostgreSQL does a lot with regression tests.
>

Obviously they missed that bug.  Of course, so did Uber in their
internal testing.  I've seen a DB bug in production (granted, only one
so far) and they aren't pretty.  A big issue for Uber is that their
transaction rate and DB size is such that they really don't have a
practical option of restoring backups.  Obviously they'd do that in a
complete disaster, but short of that they can't really afford to do
so.  By the time a backup is recorded it would be incredibly out of
date.  They have the same issue with the lack of online upgrades
(which the responding article doesn't really talk about).  They really
need it to just work all the time.

>> I'd think that it is more likely
>> that the likes of Oracle would (for their flagship DB (not for MySQL),
>
> Never worked with Oracle (or other big software vendors), have you? :)

Actually, I almost exclusively work with them.  Some are better than
others.  I don't work directly with Oracle, but I can say that the two
times I've worked with an Oracle consultant they've been worth their
weight in gold, and cost about as much.  The one was fixing some kind
of RDB data corruption on a VAX that was easily a decade out of date
at the time; I was shocked that they could find somebody who knew how
to fix it.  Interestingly, it looks like they only abandoned RDB
recently.

They do tend to be a solution that involves throwing money at
problems.  My employer was having issues with a database from another
big software vendor which I'm sure was the result of bad application
design, but throwing Exadata at it did solve the problem, at an
astonishing price.  Neither my employer nor the big software provider
in question is likely to attract top-notch DB talent (indeed, mine has
steadily gotten rid of anybody who knows how to do anything in Oracle
beyond creating schemas it seems, though I can only imagine how much
they pay annually in their license fees; and yes, I'm sure 99.9% of
what they use Oracle (or SQL Server) for would work just fine in
Postgres).

>
> Only if you're a big (as in, spend a lot of money with them) customer.
>

So, we are that (and I think a few of our IT execs used to be Oracle
employees, which I'm sure isn't hurting their business).  I'll admit
that Uber might not get the same attention.  Seems like Oracle is the
solution at work for everything from software that runs the entire
company to software that hosts one table for 10 employees (well, when
somebody notices and gets it out of Access).  Well, unless it involves
an MS-oriented dev or Sharepoint, in which case somebody inevitably
wants it on SQL Server.  I did mention that we're not a world-class IT
shop, didn't I?

-- 
Rich



Re: [gentoo-user] Genlop oddity

2016-08-01 Thread John Blinka
On Sun, Jul 31, 2016 at 6:47 AM, Peter Humphrey 
wrote:

>
> How is it possible for genlop's reported ETA to increase while its time
> spent so far also increases? Could the concurrent gnutls merging have
> affected it? Surely not.


I've noticed the same oddity recently while building a couple of new amd64
boxes.  Unfortunately, I can't document my observations
with precise numbers, but I have noticed it on multiple occasions when
simultaneously emerging, say, two packages that each
take hours to build.  As an example, if the estimate for the remaining time
was another 1/2-1 hr while the 2nd build was sharing system
resources with the 1st, I've seen the 2nd build's estimate jump to 1-2 hr once
it had exclusive access to system resources after the 1st
finished. It's almost as if somewhere in the code there's logic that
estimates time remaining as proportional to available system
resources instead of inversely proportional.
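
To illustrate what I mean -- a purely hypothetical sketch (I haven't read
genlop's source, which is Perl; the names here are invented, this is just
the shape of the suspected mistake):

# Hypothetical Python sketch of the suspected estimator bug.
def eta_expected(remaining_work_secs, cpu_share):
    # Correct intuition: with half the CPU the same work takes twice as
    # long, so the ETA is inversely proportional to the CPU share.
    return remaining_work_secs / cpu_share

def eta_suspected(remaining_work_secs, cpu_share):
    # What I'm observing looks as if the ETA were scaled proportionally
    # to the CPU share instead.
    return remaining_work_secs * cpu_share

# 30 min of work left; share goes from 0.5 (two merges) to 1.0 (alone):
print(eta_expected(1800, 0.5), eta_expected(1800, 1.0))    # 3600.0 1800.0
print(eta_suspected(1800, 0.5), eta_suspected(1800, 1.0))  # 900.0 1800.0

The first estimate falls when the build gets the machine to itself, as
you'd expect; the second jumps up, which is exactly the behavior I saw.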

This isn't a subtle anomaly, in my opinion.  I've used genlop in this way
to monitor build times for ages and I've never seen it behave
this way before.  And I haven't changed my build options by fooling around
with -jN or by trying to do other things on the system while
emerging.  New behavior for genlop, or a 32/64 bit thing?  (I've recently
gone 64 bit and that's where I see the discrepancy.)  Glad
to see that someone else has experienced the same thing, and I'm not going
crazy (although some might argue this is hardly proof...)

John Blinka


Re: [gentoo-user] cross-compile attempt

2016-08-01 Thread Mick
On Sunday 31 Jul 2016 23:31:29 you wrote:
> On Sunday 31 Jul 2016 23:18:00 Andrew Savchenko wrote:
> > On Sun, 31 Jul 2016 19:40:37 +0100 Mick wrote:
> > > Hi All,
> > > 
> > > I am dipping my toe into cross-compile territory, in order to build i686
> > > binaries for a 32bit box, which is too old to do its own emerges.  I am
> > > using an amd64 box which is significantly faster to do all the heavy
> > > lifting and started applying this page:
> > > 
> > > https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Creating_a_cross-compiler
> > > 
> > > which I followed up with:
> > > 
> > > https://wiki.gentoo.org/wiki/Cross_build_environment
> > 
> > And here comes this misconception again... Please, tell me, why on
> > earth is cross-compiling needed for amd64 to produce i686
> > binaries?!
> 
> I thought it did.  From what you're saying I got this wrong.  When I read
> the first use case bullet point, on the 2nd URL above, I thought I had
> arrived at the right place.  :-/
> 
> > amd64 CPU _natively_ supports x86 instructions, amd64 kernel
> > natively supports x86 code (this can be disabled during kernel
> > config, but usually it isn't), amd64 gcc *can* produce x86 binaries.
> 
> I thought amd64 can run x86 binaries, but I wasn't aware that it can compile
> them too, or what is needed to achieve this.  My knowledge on gcc is pretty
> much minimal.  I did search the Wiki, gentoo.org and Google for it, but all
> I could come across was cross-compiling.
> 
> > There are two ways to help older x86 boxes to build packages faster:
> > 
> > 1. Set up distcc to produce x86 code on your amd64 processors. Just
> > add -m32 to your *FLAGS.
> 
> I read somewhere in these unsuccessful searches of mine that distcc is
> deprecated and it is better to use cross-compiling instead ...
> 
> > 2. Copy the old box's system to a chroot dir on amd64. Run setarch i686
> > and chroot to that directory, and build 32-bit packages as usual!
> > There are two ways to deliver them:
> > 
> > 2.a. Generate binary packages on new box and install them on old
> > boxes.
> 
> OK, I'll uninstall crossdev and try 2.a in the first instance.  Is there a
> Wiki page explaining which parts of the x86 system need to be carried
> across to the amd64 guest_root_fs?  I wouldn't think I'd need the whole
> x86 fs?  Anything else I need to pay attention to?
> 
> > 2.b. Instead of copying old box's root, mount it over NFS.
> 
> I'll look into this later, after I get 2.a going.
> 
> > I'm currently using 1, but planning to switch to 2.a, because
> > distcc can't help with everything (execution of java, python,
> > autotools and other stuff can't be helped with distcc).
> > 
> > I used 2.b earlier on a very old box (it is dead now).
> > 
> > 3. Well, one can do full cross-compilation as you proposed, but
> > this is ridiculous. Cross-compilation is always a pain and if it
> > can be avoided, it should be avoided.
> 
> Thanks for this advice.  I am not particularly interested in using crossdev
> if it is not the best-suited tool for the job, but I wasn't aware of the
> alternatives you suggested and haven't as yet found any HOWTOs on them.

Given Andrew's steer I had another look and found this guide:

https://wiki.gentoo.org/wiki/Project:AMD64/32-bit_Chroot_Guide

Is this approach still valid, or have things moved on since this article was
authored (2012) and a different configuration/approach is now recommended?
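
In case it helps frame the question, my reading of that guide boils down
to a few commands.  A minimal sketch, assuming an illustrative chroot at
/mnt/gentoo32 populated from either an i686 stage3 or a copy of the old
box's root:

# On the amd64 box, as root (paths and tarball name are illustrative):
mkdir -p /mnt/gentoo32
tar xpf stage3-i686-*.tar.bz2 -C /mnt/gentoo32  # or rsync the old box's /
mount -t proc proc /mnt/gentoo32/proc
mount --rbind /dev /mnt/gentoo32/dev
mount --rbind /sys /mnt/gentoo32/sys
cp -L /etc/resolv.conf /mnt/gentoo32/etc/
setarch i686 chroot /mnt/gentoo32 /bin/bash
# Inside the chroot, build binary packages for the old box:
emerge --buildpkgonly some-category/some-package

The resulting packages should land under PKGDIR (/usr/portage/packages by
default), ready for the old box to install with emerge --usepkgonly.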

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread james

On 08/01/2016 02:16 AM, J. Roeleveld wrote:

On Saturday, July 30, 2016 06:38:01 AM Rich Freeman wrote:

On Sat, Jul 30, 2016 at 6:24 AM, Alan McKinnon 

wrote:

On 29/07/2016 22:58, Mick wrote:

Interesting article explaining why Uber are moving away from PostgreSQL.
I am
running both DBs on different desktop PCs for akonadi and I'm also
running
MySQL on a number of websites.  Let's see which one goes sideways first.  :p

 https://eng.uber.com/mysql-migration/


I don't think your akonadi and some web sites compare in any way to Uber
and what they do.

FWIW, my Dev colleagues support an entire large corporate ISP's
operational and customer data on PostgreSQL-9.3. With clustering. With no
db-related issues :-)


Agree, you'd need to be fairly large-scale to have their issues,


And you'd also have to have your database designed by people who think MySQL
actually follows common SQL standards.


but I
think the article was something anybody interested in databases should
read.  If nothing else it is a really easy to follow explanation of
the underlying architectures.


Check the link posted by Douglas.
Uber's article has some misunderstandings about the architecture, with
conclusions that are at least partly caused by their own database design
and usage.


I'll probably post this to my LUG mailing list.  I think one of the
Postgres devs lurks there so I'm curious to his impressions.

I was a bit surprised to hear about the data corruption bug.  I've
always considered Postgres to have a better reputation for data
integrity.


They do.


And of course almost any FOSS project could have a bug.  I
don't know if either project does the kind of regression testing to
reliably detect this sort of issue.


Not sure either, I do think PostgreSQL does a lot with regression tests.


I'd think that it is more likely
that the likes of Oracle would (for their flagship DB (not for MySQL),


Never worked with Oracle (or other big software vendors), have you? :)


and they'd probably be more likely to send out an engineer to beg
forgiveness while they fix your database).


Only if you're a big (as in, spend a lot of money with them) customer.


Of course, if you're Uber
the hit you'd take from downtime/etc isn't made up for entirely by
having somebody take a few days to get everything fixed.


--
Joost




I certainly respect your skills and posts on databases, Joost, as
everything you have posted in the past is 'spot on'.  Granted, I'm no
database expert, far from it.  But I want to share a few things with you,
and hope you (and others) will 'chime in' on these comments.


Way back, when the earth was cooling and we all had dinosaurs for pets,
some of us hacked on AT&T "3B2" unix systems.  They were known for their
'roll back and recovery', triplicated (or more) transaction processes
and a 'voter' system to ferret out whether a transaction was complete and
correct.  There was no ACID, the current 'gold standard' if you believe
what Douglas and others write about concerning databases.


In essence (from crusted-up memories), a basic (SS7) transaction related
to the local telephone switch was run on 3 machines.  The results were
compared.  If they matched, the transaction went forward as valid.  If 2/3
matched, and the switch was so configured, then the code would
essentially 'vote' and majority ruled.  This is what led to phone calls
(switched phone calls) having variable delays, often on the order of
seconds, mis-connections and other problems we all encountered during
periods of excessive demand.
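
A minimal sketch of that 2-out-of-3 voting idea, in Python for clarity --
my own illustration, not the 3B2's actual code, and the function names
are invented:

from collections import Counter

def vote(results):
    # Tally the replicas' answers; accept only a strict majority
    # (e.g. 2 of 3), otherwise refuse the transaction.
    answer, count = Counter(results).most_common(1)[0]
    if 2 * count > len(results):
        return answer
    raise RuntimeError("no majority -- transaction must be retried")

print(vote(["connect", "connect", "connect"]))  # unanimous -> connect
print(vote(["connect", "drop", "connect"]))     # 2/3 -> majority rules
# vote(["connect", "drop", "busy"]) would raise: no majority.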


That scenario was at the heart of how old, crappy AT&T unix (SVR?) could
perform so well and therefore established the gold standard for RT
transaction processing, aka the "five 9s": 99.999% uptime (about 5
minutes of downtime per year).  Sure, this part is only related to
transaction processing and there was much more to the "five 9s" legacy,
but imho that is the heart of what was the precursor to the ACID
properties now so greatly espoused in the SQL codes that Douglas refers
to.


Do folks concur or disagree at this point?


The reason this is important to me (and others?) is that, if this idea
(granted, there is much more detail to it) is still valid, then it can
form the basis for building up superior-ACID processes that meet or
exceed the properties of an expensive (think Oracle) transaction
process on distributed (parallel) or clustered systems, to a degree of
accuracy limited only by the (odd) number of voter codes involved in
the distributed and replicated parts of the transaction.  I even added
some code where replicated routines were written in different languages
and the results compared, to add an additional layer of verification
before the voter step (gotta love assembler?).


I guess my point is that 'Douglas' is full of stuffing, OR that this is
what folks are doing when they 'roll their own solution specifically
customized to their specific needs', as he alludes to near the end of his
commentary?  (I'd

[gentoo-user] How to correctly handle multiple Qt versions (qt4 X qt5)

2016-08-01 Thread Francisco Ares
Hi all.

In this Gentoo system, there are packages that still need Qt-4, while the
newest KDE, for instance, needs Qt-5.

Even after inserting entries in "/etc/portage/package.use" for the packages
that need qt4, the emerge still fails, complaining that the package needs
Qt-4.

On this system, "qtchooser" has never worked properly - as far as I could
understand it - so I'm used to managing the "default.conf" symlink at
"/etc/xdg/qtchooser/" - is it correct to do so?  If not, what should I do?

Thanks!
Francisco


Re: [gentoo-user] Partition of 3TB USB drive not detected

2016-08-01 Thread james

On 08/01/2016 01:45 AM, J. Roeleveld wrote:

On Sunday, July 31, 2016 03:37:55 PM Jörg Schaible wrote:

Hi,

for my backups I use a 3TB USB drive (one big ext4 partition) without any
problems. Just plug in the cable, mount it and perform the backup. The
partition (sdi1) is detected and mountable without any problems:

=== %< ==
$ ls -l /dev/disk/by-id
total 0



=== %< ==

However, when I boot a rescue system from a USB stick, the partition on the
USB is not detected. I already tried latest SystemRescueCD (default and
alternate kernel), Knoppix and the Gentoo Admin CD. Nothing, the partition
is not available.

What's the difference? Why does my kernel find this partition while the other
ones do not? It's pretty silly to have a backup drive and not be able to
access it when it's needed ;-)


Which kernel do you boot?
The SystemRescueCD has 4 kernels:
2 * 64bit and 2 * 32bit.

By default, it boots the "default" one for the architecture you are booting.
Have you tried booting the "alternate" kernel?

I have 1 system that I need to boot using the "alternate" kernel as the
"default" one is too old. (Yes, by default it boots an old kernel)

It could easily be that the kernel you are using does not support your USB3
adapter or something else you used.

E.g., apart from all the 'ls' statements, also check "uname" and
"/proc/config.gz" for differences.


I was just reading about "IOMMU" and how often, if it is not "correctly
configured" in the kernel and other places, your memory map to other
hardware, like USB, can be flaky or not work at all.  Fixes often
require loading the latest BIOS for your motherboard.  The 'rev' of your
motherboard and other details also matter.
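
If you want to experiment before chasing a BIOS update, the usual knobs
are stock kernel command-line parameters (whether they help is very
board-specific -- treat these as things to try, not a fix):

# Appended to the kernel line in the bootloader config, one at a time:
iommu=soft      # fall back to software bounce buffers
amd_iommu=off   # disable the AMD IOMMU altogether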


I have a Gigabyte GA-990FXA-UD3 that seems to be a victim of this bug.
No, I have not had time to ferret out this issue, so here are a few raw
links where it is talked about:


https://www.reddit.com/r/linux/comments/4ixnyg/question_about_iommu/

https://en.wikipedia.org/wiki/Input–output_memory_management_unit

http://developer.amd.com/community/blog/2008/09/01/iommu/

http://pages.cs.wisc.edu/~basu/isca_iommu_tutorial/

https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware


(more posted if you ask)...


hth,
James







Re: [gentoo-user] Genlop oddity

2016-08-01 Thread Peter Humphrey
On Sunday 31 Jul 2016 18:52:23 Fernando Rodriguez wrote:

--->8

> Just out of curiosity what are the differences between the original genlop
> calculation and yours, and how long did it actually take? and what is the
> output of 'genlop -t '.

$ genlop -t gcc -f /mnt/rescue/var/log/emerge.log
using logfile /mnt/rescue/var/log/emerge.log
 * sys-devel/gcc

 Sat Apr 16 10:07:24 2016 >>> sys-devel/gcc-4.9.3
   merge time: 25 minutes and 41 seconds.

 Sun May 15 10:03:34 2016 >>> sys-devel/gcc-4.9.3
   merge time: 12 minutes and 19 seconds.

 Sun Jul 31 11:33:57 2016 >>> sys-devel/gcc-4.9.3
   merge time: 17 minutes and 58 seconds.

This is a fairly new box, so there aren't many records yet. The last one 
above is the one I was talking about.

I can't tell you how long I thought it would take - sorry.

> I think it would be inaccurate in most cases (at least it is for me) but
> if you think the calculation is wrong you should file a bug. For me it
> seems reasonably accurate for packages with consistent build times.

Just those three samples are enough to confirm the inaccuracy, but I don't 
really expect anything much better given all the factors we've mentioned.

I was just curious. Thanks for your interest though.

-- 
Rgds
Peter




Re: [gentoo-user] PostgreSQL Vs MySQL @Uber

2016-08-01 Thread J. Roeleveld
On Saturday, July 30, 2016 06:38:01 AM Rich Freeman wrote:
> On Sat, Jul 30, 2016 at 6:24 AM, Alan McKinnon  
wrote:
> > On 29/07/2016 22:58, Mick wrote:
> >> Interesting article explaining why Uber are moving away from PostgreSQL.
> >> I am
> >> running both DBs on different desktop PCs for akonadi and I'm also
> >> running
> >> MySQL on a number of websites.  Let's see which one goes sideways first.  :p
> >> 
> >>  https://eng.uber.com/mysql-migration/
> > 
> > I don't think your akonadi and some web sites compare in any way to Uber
> > and what they do.
> > 
> > FWIW, my Dev colleagues support an entire large corporate ISP's
> > operational and customer data on PostgreSQL-9.3. With clustering. With no
> > db-related issues :-)
> 
> Agree, you'd need to be fairly large-scale to have their issues,

And you'd also have to have your database designed by people who think MySQL
actually follows common SQL standards.

> but I
> think the article was something anybody interested in databases should
> read.  If nothing else it is a really easy to follow explanation of
> the underlying architectures.

Check the link posted by Douglas.
Uber's article has some misunderstandings about the architecture, with
conclusions that are at least partly caused by their own database design
and usage.

> I'll probably post this to my LUG mailing list.  I think one of the
> Postgres devs lurks there so I'm curious to his impressions.
> 
> I was a bit surprised to hear about the data corruption bug.  I've
> always considered Postgres to have a better reputation for data
> integrity.

They do.

> And of course almost any FOSS project could have a bug.  I
> don't know if either project does the kind of regression testing to
> reliably detect this sort of issue.

Not sure either, I do think PostgreSQL does a lot with regression tests.

> I'd think that it is more likely
> that the likes of Oracle would (for their flagship DB (not for MySQL),

Never worked with Oracle (or other big software vendors), have you? :)

> and they'd probably be more likely to send out an engineer to beg
> forgiveness while they fix your database).

Only if you're a big (as in, spend a lot of money with them) customer.

> Of course, if you're Uber
> the hit you'd take from downtime/etc isn't made up for entirely by
> having somebody take a few days to get everything fixed.

--
Joost