Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:50 PM, Julien Cristau  wrote:

> Everyone, please avoid followups to debian-po...@lists.debian.org.
> Unless something is relevant to *all* architectures (hint: discussion of
> riscv or arm issues don't qualify), keep replies to the appropriate
> port-specific mailing list.

 apologies, julien: as an outsider i'm not completely familiar with
the guidelines.  the reduction in memory usage at the linker phase
via "-Wl,--no-keep-memory", however - and the associated inherent,
slowly-but-inexorably-increasing size of packages - is, i feel,
definitely something that affects all ports.

 it is really *really* tricky to get any kind of traction *at all*
with people on this.  it's not gcc's problem to solve, it's not one
package's problem to solve, it's not any one distro's problem to solve,
it's not any one port's problem to solve and so on, *and* it's a
slow-burn problem that's taking *literally* a decade to become more
and more of a problem.  consequently getting reports and feedback to
the binutils team is... damn hard.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:06 PM, John Paul Adrian Glaubitz
 wrote:
> On 06/29/2018 10:41 AM, Luke Kenneth Casson Leighton wrote:
>> On Fri, Jun 29, 2018 at 8:16 AM, Uwe Kleine-König  
>> wrote:
>>
>>>> In short, the hardware (development boards) we're currently using to
>>>> build armel and armhf packages aren't up to our standards, and we
>>>> really, really want them to go away when stretch goes EOL (expected in
>>>> 2020).  We urge arm porters to find a way to build armhf packages in
>>>> VMs or chroots on server-class arm64 hardware.
>>
>>  from what i gather the rule is that the packages have to be built
>> native.  is that a correct understanding or has the policy changed?
>
> Native in the sense that the CPU itself is not emulated which is the case
> when building arm32 packages on arm64.

 ok.  that's clear.  thanks john.

> I think that building on arm64 after fixing the bug in question is the
> way to move forward. I'm surprised the bug itself hasn't been fixed yet,
> doesn't speak for ARM.

 if you mean ARM hardware (OoO), it's too late.  systems are out there
with OoO speculative execution bugs in the hardware (and certainly
more to be found), and they're here to stay unfortunately.

 if you mean that buildd on 32-bit systems could be modified to pass
"-Wl,--no-keep-memory" to all linker phases, to see if that results in
the anticipated dramatic reduction in memory usage, then that's
straightforward to try and has nothing to do with ARM themselves.
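
 to illustrate what "trying it" might look like - purely a sketch, i
don't know the actual buildd configuration, and it assumes the
packages being built honour dpkg-buildflags:

    # inject the linker flag for a whole build from the environment
    export DEB_LDFLAGS_APPEND="-Wl,--no-keep-memory"
    dpkg-buildpackage -b -uc -us
    # packages with hand-rolled link rules that bypass dpkg-buildflags
    # will silently ignore this, so results need checking per package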

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:23 PM, Adam D. Barratt
 wrote:

>>  i don't know: i'm an outsider who doesn't have the information in
>> short-term memory, which is why i cc'd the debian-riscv team as they
>> have current facts and knowledge foremost in their minds.  which is
>> why i included them.
>
> It would have been wiser to do so *before* stating that nothing was
> happening as if it were a fact.

 true... apologies.

>>  ah.  so what you're saying is, you could really do with some extra
>> help?
>
> I don't think that's ever been in dispute for basically any core team
> in Debian.

 :)

> That doesn't change the fact that the atmosphere around the change in
> question has made me feel very uncomfortable and unenthused about SRM
> work. (I realise that this is somewhat of a self-feeding energy
> monster.)

 i hear ya.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Fri, Jun 29, 2018 at 10:35 AM, Adam D. Barratt
 wrote:

>>  what is the reason why that package is not moving forward?
>
> I assume you're referring to the dpkg upload that's in proposed-updates
> waiting for the point release in two weeks time?

 i don't know: i'm an outsider who doesn't have the information in
short-term memory, which is why i cc'd the debian-riscv team as they
have current facts and knowledge foremost in their minds.  which is
why i included them.

> I'm also getting very tired of the repeated vilification of SRM over
> this, and if there were any doubt can assure you that it is not
> increasing at least my inclination to spend my already limited free
> time on Debian activity.

 ah.  so what you're saying is, you could really do with some extra help?

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Wed, Jun 27, 2018 at 9:03 PM, Niels Thykier  wrote:

> armel/armhf:
> 
>
>  * Undesirable to keep the hardware running beyond 2020.  armhf VM
>support uncertain. (DSA)
>- Source: [DSA Sprint report]

 [other affected 32-bit architectures removed but still relevant]

 ... i'm not sure how to put this other than to just ask the question.
has it occurred to anyone to think through the consequences of not
maintaining 32-bit versions of debian for the various different
architectures?  there are literally millions of ARM-based tablets and
embedded systems out there which will basically end up in landfill if
a major distro such as debian does not take a stand and push back
against the "well everything's going 64-bit so why should *we*
bother?" meme.

 arm64 is particularly inefficient and problematic compared to
aarch32: the change in the instruction set dropped some of the more
efficiently-encoded instructions, which inflates 64-bit program size,
requires a whopping FIFTY PERCENT increase in instruction-cache size
to compensate, and pushes power consumption up by over 15%.

 in addition, arm64 is usually speculative OoO (Cavium ThunderX V1
being a notable exception), which means it's vulnerable to spectre and
meltdown attacks, whereas 32-bit ARM is exclusively in-order.  if you
want to GUARANTEE that you've got spectre-immune hardware you need
either any 32-bit system (where even the Cortex A7 has virtualisation)
or, if 64-bit is absolutely required, the Cortex A53.

 basically, abandoning or planning to abandon 32-bit ARM *right now*
leaves security-conscious end-users in a really *really* dicey
position.


> We are currently unaware of any new architectures likely to be ready in
> time for inclusion in buster.

 debian-riscv has been repeatedly asking, for at least six months
now, for a single zero-impact line to be included in *one* file in
*one* dpkg-related package, which would allow riscv to stop being an
NMU architecture and become part of debian/unstable (and quickly
beyond).  cc'ing the debian-riscv list because they will know the
details about this.  it's really quite ridiculous that a single
one-line change, having absolutely no effect on any other architecture
whatsoever, is not being actioned, and that it is holding debian-riscv
back.
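
 (for context, and purely as a sketch from memory rather than the
actual patch: my understanding is that the line in question is an
entry for riscv64 in dpkg's architecture table,
/usr/share/dpkg/cputable, along these lines - exact column layout as
per that file's own header:)

    riscv64         riscv64         riscv64         64      little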

 what is the reason why that package is not moving forward?

 l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 8:16 AM, Uwe Kleine-König  
wrote:

> Hello,
>
> On Wed, Jun 27, 2018 at 08:03:00PM +, Niels Thykier wrote:
>> armel/armhf:
>> 
>>
>>  * Undesirable to keep the hardware running beyond 2020.  armhf VM
>>support uncertain. (DSA)
>>- Source: [DSA Sprint report]
>>
>> [DSA Sprint report]:
>> https://lists.debian.org/debian-project/2018/02/msg4.html
>
> In this report Julien Cristau wrote:
>
>> In short, the hardware (development boards) we're currently using to
>> build armel and armhf packages aren't up to our standards, and we
>> really, really want them to go away when stretch goes EOL (expected in
>> 2020).  We urge arm porters to find a way to build armhf packages in
>> VMs or chroots on server-class arm64 hardware.

 from what i gather the rule is that the packages have to be built
native.  is that a correct understanding or has the policy changed?

>
> If the concerns are mostly about the hardware not being rackable, there
> is a rackable NAS by Netgear:
>
> 
> https://www.netgear.com/business/products/storage/readynas/RN2120.aspx#tab-techspecs
>
> with an armhf cpu. Not sure if cpu speed (1.2 GHz) and available RAM (2
> GiB) are good enough.

 no matter how much RAM there is, it's never going to be "enough",
and letting systems go into swap is also not a viable option [2].

 i've been endeavouring to communicate this issue - the building
(linking) of very large packages - for a long, *long* time.  as it's a
strategic cross-distro problem that's been very slowly creeping up on
*all* distros as packages inexorably grow in size, reaching people
about the problem and possible solutions is extremely difficult.
eventually i raised a bug on binutils, and it took several months to
communicate the extent and scope of the problem even to the developer
of binutils:

 https://sourceware.org/bugzilla/show_bug.cgi?id=22831

 the problem is that ld from binutils - unlike gcc, which looks
dynamically at how much RAM is available - by default loads absolutely
all object files into memory and ASSUMES that swap space is going to
take care of any RAM deficiencies.

 unfortunately, due to the amount of cross-referencing that takes
place in the linker phase, this "strategy" causes MASSIVE thrashing,
even when it is only a single object file's worth of overshoot that
pushes the process into swap.

 this is particularly pertinent for systems which compile with debug
info switched on as it is far more likely that a debug compile will go
into swap, due to the amount of RAM being consumed.
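
 (for anyone wanting to see the effect on their own package: a sketch
only, assuming GNU time is installed as /usr/bin/time and that the
final link command has been reproduced by hand - "bigapp" and the
object files are made-up names:)

    # peak memory with ld's default keep-everything strategy:
    # look at "Maximum resident set size" in the output
    /usr/bin/time -v g++ -o bigapp *.o
    # the same link again with the workaround discussed below
    /usr/bin/time -v g++ -Wl,--no-keep-memory -o bigapp *.o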

 firefox now requires 7GB of resident RAM, making it impossible to
compile on 32-bit systems.  webkit-based packages require well over
2GB of RAM (and have done for many years).  i saw one scientific
package a couple of years back that could not be compiled for 32-bit
systems either.

 all of this is NOT the fault of the PACKAGES [1], it's down to the
fact that *binutils* - ld's default memory-allocation strategy - is
far too aggressive.

 the main developer of ld has this to say:

Please try if "-Wl,--no-keep-memory" works.

 now, that's *not* a good long-term "solution" - it's a drastic,
drastic hack that cuts the optimisation of keeping object files in
memory stone dead.  it'll work... it will almost certainly result in
32-bit systems being able to successfully link applications that
previously failed... but it *is* a hack.  someone really *really*
needs to work with the binutils developer to *properly* solve this.
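
 for a package maintainer wanting to try it, a minimal dh-style
sketch (DEB_LDFLAGS_MAINT_APPEND is the standard dpkg-buildflags
maintainer hook; the rest of any real debian/rules is of course
package-specific):

    #!/usr/bin/make -f
    # debian/rules: append the linker workaround for this package only
    export DEB_LDFLAGS_MAINT_APPEND = -Wl,--no-keep-memory

    %:
    	dh $@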

 if any package maintainer manages to use the above hack to
successfully compile 32-bit packages that previously ran completely
out of RAM or otherwise took days to complete, please do put a comment
to that effect in the binutils bugreport: it will help everyone in the
entire GNU/Linux community.

l.

[1] really, it is... developers could easily split packages into
dynamically-loadable modules, where each module easily compiles well
below even 2GB or 1GB of RAM.  they choose not to, choosing instead to
link hundreds of object files into a single executable (or library).
asking so many developers to change their strategy, however... yeah :)
 big task, i ain't taking responsibility for that one.
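
 (purely illustrative of footnote [1] - made-up names, objects
assumed built with -fPIC, and a sketch of the idea rather than any
real package's build system:)

    # one monolithic link: ld holds every object at once
    bigapp-monolithic: main.o a.o b.o c.o d.o
    	$(CC) -o $@ main.o a.o b.o c.o d.o

    # split into shared modules: each ld invocation sees far fewer
    # objects, so its peak memory is correspondingly smaller
    libfoo.so: a.o b.o
    	$(CC) -shared -o $@ a.o b.o
    libbar.so: c.o d.o
    	$(CC) -shared -o $@ c.o d.o
    bigapp-split: main.o libfoo.so libbar.so
    	$(CC) -o $@ main.o -L. -lfoo -lbar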

[2] the amount of memory being required for the linker phase of large
packages over time goes up, and up, and up, and up... when is it going
to stop?  never.  so just adding more RAM is never going to "solve"
the problem, is it?  it just *avoids* the problem.  letting even
64-bit systems go into swap is a huge waste of resources as builds
that go into swap will consume far more resources and time.  so *even
on 64-bit systems* this needs solving.



Re: Bug#399608: fixed in sysvinit 2.88dsf-59.1

2015-05-18 Thread Luke Kenneth Casson Leighton
On Sun, May 17, 2015 at 3:48 PM, Andreas Henriksson  wrote:
> Hello Adrian!
>
> Thanks for raising awareness about this issue. If there's anything
> I can do to help please tell me. That the new util-linux version hasn't
> been built yet sounds like it can't be avoided as it was just uploaded
> and unfortunately the sysvinit and util-linux update is a lockstep
> upgrade where both change at the same time as things are moved between
> the packages. There's no intermediate step possible, because the
> moved binaries always needs to be available at all times and thus
> have tight dependencies in both directions. Not sure how dependencies
> affects the build of these packages though... They should both be
> able to build on systems with older versions of the packages installed
> and build independently.

 that sounds like the kind of thing that would cause nightmare
circular build dependencies for anyone porting to a new architecture
[which i'm considering doing: mvp from icubecorp].

 would that be correct - that if there *is* no "older version" it
would now be impossible to build both [or either] of the packages - or
am i mistaken?

 if it is correct, do you happen to know if they would cross-build, at all?
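
 (for the record, a sketch of what i'd try first - not a claim that
these particular packages do cross-build; "newarch" is a placeholder
and a matching cross toolchain is assumed to be installed:)

    dpkg-buildpackage -a newarch -Pcross,nocheck -b -uc -us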

 l.

