Bug#996000: general: System does not boot with second monitor attached

2021-10-09 Thread Andy Simpkins

Control: Severity -1 normal



Package: general
Severity: important
X-Debbugs-Cc: gaff...@live.com

Dear Maintainer,

I installed Debian 11 on a new computer (with a single monitor during 
installation, connected with HDMI).

Installation went well, but the monitor came up with a very limited resolution 
(1024x768, I think).


That isn't surprising: the installer (DI) will try to use only a minimal 
set of possible configurations, in order to function with most hardware 
it encounters.

After a bit of googling, I found that the drivers for the Intel graphics on this board (Rocket Lake, UHD 750) were not 
included in the 5.10 kernel that came with Debian Bullseye.  I installed kernel 5.14 from Debian Testing, and that seemed to 
solve the issue - I got full resolution (still with a single monitor attached).


Ack, that sounds about right to me too: AFAICT Rocket Lake / UHD 750 is 
not yet officially supported in the Linux kernel; the i915 series has a 
generic driver, but not with full support for all devices in the series.
This is perhaps a bit too new, and you may have to wait a while for the 
kernel drivers to arrive. [0]

 


Taking the system into use as my main system, I set it up with two monitors, one connected with an HDMI cable, one with a 
DVI cable.  (Both monitors are Benq 24 inch 1920x1080.)



Booting the system, it hangs during boot, with a message "VMX (outside TXT) disabled 
by bios".


This is stating that you haven't enabled 'Intel virtualization technology'. 
This should have no effect on graphics driver support; you would only need 
to enable it if you want to run virtual machines.



Booting the system with only the HDMI-connected monitor attached works as expected, the system completes the boot sequence, 
I can log in and use the system.


Attaching the second monitor after boot also works; both monitors are 
recognized and work.

As a workaround, I tried enabling "Intel virtualization technology" in the BIOS.  Booting with both monitors attached, there 
is no longer any error message, but the system still hangs during boot (with a blank screen).


I would expect the system to boot also with two monitors attached.


I would expect this as well - but until there is official support in the 
kernel for your new hardware, I am afraid that you may have to put up 
with only connecting the second monitor after boot.

Trying newer, backported kernels, when they become available, is probably 
your best bet.  You could watch the kernel.org releases, looking 
specifically for mentions of i915, Rocket Lake, or UHD 750, before you 
try. [1]
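In the meantime, a quick way to confirm which driver is actually bound to the GPU is to inspect `lspci -k` output. A minimal sketch of that check; note the sample output below is illustrative, not captured from the reporter's machine:

```python
# Sketch: find which kernel driver is bound to the first VGA device in
# `lspci -k` output.  SAMPLE is illustrative output for a Rocket Lake
# UHD 750 system, not captured from a real machine.
import re

SAMPLE = """\
00:02.0 VGA compatible controller: Intel Corporation RocketLake-S GT1 [UHD Graphics 750]
\tSubsystem: Example Vendor Device 1234
\tKernel driver in use: i915
\tKernel modules: i915
"""

def gpu_driver(lspci_output):
    """Return the 'Kernel driver in use' for the first VGA device, or None."""
    in_vga = False
    for line in lspci_output.splitlines():
        if "VGA compatible controller" in line:
            in_vga = True
        elif in_vga:
            m = re.match(r"\s*Kernel driver in use:\s*(\S+)", line)
            if m:
                return m.group(1)
            if not line.startswith(("\t", " ")):
                in_vga = False  # next device started; no driver line seen

    return None

# On a live system, feed in real output instead of SAMPLE, e.g.:
#   subprocess.run(["lspci", "-k"], capture_output=True, text=True).stdout
print(gpu_driver(SAMPLE))  # "i915" means the kernel driver is bound
```

If this reports no driver (or a fallback such as a plain framebuffer), that would be consistent with the kernel not yet supporting the device.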


Very best wishes, and good luck with trying new kernels when they arrive

/Andy
(RattusRattus)

[0] https://gist.github.com/Postrediori/556706b28aff3b831d9e41acb47418c5
[1] https://www.kernel.org/



Re: Mozilla Firefox DoH to CloudFlare by default (for US users)?

2019-09-11 Thread Andy Simpkins




On 11/09/2019 06:16, Ingo Jürgensmann wrote:

Am 10.09.2019 um 07:50 schrieb Florian Lohoff :


On Mon, Sep 09, 2019 at 03:31:37PM +0200, Bjørn Mork wrote:

I for one, do trust my ISPs a lot more than I trust Cloudflare or
Google, simply based on the jurisdiction.

There are tons of setups which are fine tuned for latency because they
are behind sat links etc or low bandwidth landlines. They have dns
caches with prefetching to reduce typical resolve latency down to sub
milliseconds although your RTT to google/cloudflare is >1000ms.

Switching from your system's resolver fed by DHCP to DoH in Firefox will
make the resolve latency go from sub-ms to multiple seconds as the
HTTP/TLS handshake will take multiple RTTs. This will effectively break
ANY setup behind sat links, e.g. all cruise ships at sea.


I can confirm (based on experience in my day job) that this can be a real 
problem, affecting thousands or even hundreds of thousands of users.

Having the *option* to use DoH is maybe a good idea, but making it the default 
is not.
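To put rough numbers on the latency point made above (the RTT figure and the three-round-trip handshake breakdown are assumptions for illustration, not measurements):

```python
# Illustrative first-lookup cost over a high-RTT link.  The RTT value and
# the handshake round-trip count are assumptions, not measurements.
def first_doh_lookup_ms(rtt_ms, handshake_rtts=3):
    # TCP SYN/ACK (1 RTT) + TLS handshake (~1 RTT) + HTTP request (1 RTT)
    return rtt_ms * handshake_rtts

local_cache_ms = 1      # prefetching local resolver: around a millisecond
sat_rtt_ms = 1000       # the ">1000ms" satellite RTT quoted above

print(first_doh_lookup_ms(sat_rtt_ms), "ms vs", local_cache_ms, "ms locally")
```

Three orders of magnitude is the difference between an unnoticeable lookup and a visibly stalled page load.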




I appreciate that Mozilla are trying to enhance privacy by introducing 
DoH as an option (but clearly not for children! [0][1]), but are we not 
missing the major point here?  DNS does not belong in the browser.


If we wish to deploy DoH (I think it would get my vote) then it should 
be system-wide and transparent to applications, using the same methods 
already available.  If every application were to deploy its own resolver 
service then total chaos would ensue.
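As an aside, the mechanics are simple enough that system-wide deployment is entirely plausible: an RFC 8484 DoH GET request is just a binary DNS query, base64url-encoded into a URL parameter. A minimal sketch (the resolver URL in the comment is a placeholder; this builds the request bytes only, nothing is sent):

```python
# Sketch of the RFC 8484 GET encoding a DoH client performs: a binary DNS
# query, base64url-encoded with padding stripped, in a "dns=" URL
# parameter.  This only builds the request bytes; no network traffic.
import base64
import struct

def dns_query(name, qtype=1):  # qtype 1 = A record
    # Header: ID 0 (RFC 8484 recommends 0 for cache friendliness), RD flag
    # set, one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN
    return header + question

def doh_get_param(name):
    raw = dns_query(name)
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

param = doh_get_param("example.com")
# A client would then fetch (resolver URL is a placeholder):
#   https://resolver.example/dns-query?dns=<param>
```

Nothing here depends on a browser; a system resolver could do exactly the same on behalf of every application.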


Yes, I know browsers already offer alternative resolver and proxy methods; 
unfortunately that ship has sailed. Provided they are turned OFF by 
default, that is acceptable.  The same goes for in-browser DoH: as long 
as it is OFF by default I don't see an issue.


/Andy

[0] "Respect user choice for opt-in parental controls and disable DoH if 
we detect them" 
https://blog.mozilla.org/futurereleases/2019/09/06/whats-next-in-making-dns-over-https-the-default/


[1] In-browser DoH will break a lot of 'parental control / supervisor' 
applications that block traffic based on black & white lists.  IMO this 
is another reason why DoH shouldn't be inside the browser - already 
Mozilla are deploying workarounds for certain use cases...




Re: regarding non-free firmware for wi-fi and ethernet

2019-07-26 Thread Andy Simpkins
On 25/07/19 14:00, Abibula Aygun wrote:
> 
> Hello Debian Team,

Hi there :-)


> We have a little problem.
> The installer can't detect a lot of simple wi-fi or ethernet hardware.
> Things that were ok in the Stretch version.

Are you able to tell us WHICH wifi / ethernet hardware worked without
non-free drivers in Stretch and now needs drivers from non-free for Buster?


> What can we do to insert the Firmwares on our distribution ?
The same as we do - have a "non-free" area in your archive (but satisfy
yourselves that you are free to distribute)

> You have an unofficial Debian with all non-free firmwares. As far as I
> know it is ok to include in the AcademiX distro the non-free firmware
> for wi-fi and ethernet without altering the licence of the manufacturer
> or the code of the firmware.

> Anyway, the non-free firmwares are used by the installer and not by the
> final installed system.
I am somewhat surprised if this is correct...

Regards
/Andy



Testing release images - Call for help

2019-06-30 Thread Andy Simpkins
Hi there,

We have the release of Buster scheduled to happen next Saturday.  As
always on a release day, new ISO images are generated, and *before* they
get signed we try to smoke-test them to make sure that the builds went
OK and nothing critical is missing from the manifests.  We do the same
for live images as well.

If you can spare the time your help would be greatly appreciated in
testing some of these images on the day. If you have time to test before
then too, that would be even more helpful!

Along with the usual amd64/i386/arm64 installer builds we'll be
explicitly testing, we need specific help for testing LIVE images
(including 2 different install methods) on BARE METAL PCs (i.e. NOT a
VM).  We are looking for tests to be carried out on machines running
BIOS as well as UEFI. Buster will also be our first Debian release to
include support for UEFI Secure Boot, and more test coverage there would
be lovely.

Additionally we are also looking for people who can test the following:
 * debian-mac.10.0.0-amd64-netinst.iso
 * debian-mac.10.0.0-i386-netinst.iso
 * debian-10.0.0-mips-*.iso (Multiple images)
 * debian-10.0.0-mipsel-*.iso   (Multiple images)
 * debian-10.0.0-mips64el-*.iso (Multiple images)
 * debian-10.0.0-ppc64el-*.iso  (Multiple images)
 * debian-10.0.0-s390x-*.iso   (Multiple images)

On release day (Saturday 6th July), we are expecting installer images to
become available from around 1300 UTC, with live images between 1.5 and
2 hours later.

To get a reasonable coverage of DI there is a wiki matrix [0] of the
install tests that we will perform (although feel free to try 'your'
specific install options).

A wiki is not the *best* solution for a matrix that will change quite a
bit during the test process; it is far too easy to have edit clashes.  To
reduce this we try to coordinate our actions in #debian-cd on
irc.debian.org.

Please check in on irc *before* starting a test to reduce duplicate
tests being carried out.

Finally, if you want to get in some early testing of DI over the next few
days, please do.  If you find a critical problem there may just be enough
time to fix it.

Let's make buster the smoothest release yet :-)

/Andy


[0]  https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r0






Re: Proposition: Simlify the Installation

2019-06-24 Thread Andy Simpkins
There has been a lot of dissension over the past couple of weeks about 
DI and what it could do to be better.


I think it is important that I join that debate with a couple of 
requirements for any replacement / enhancement:



(1)  Must work on all architectures supported by Debian
(2)  Must work on all platforms supported by Debian

This (1&2) means that DI must provide a UI on any machine that can run 
Debian.
It must attempt to discover what interface the user has chosen to use 
during the installation.  This could be a keyboard, mouse, and screen; 
equally it could be a serial device (UART, USB, Ethernet, etc.).

There may be any number of these possible interfaces, in any combination.

DI must also attempt to detect all of these devices and configure them 
correctly - some of these devices (laptop screens, for example) may or 
may not correctly report their "console modes". DI may also have to 
probe around in order to configure the display backlight and turn it 
on...  there may be several of these as well.  DI must determine which 
controller is associated with the display.


Of course it also means that we must be able to install from various 
different boot media, and support different network and storage methods 
(some of which may need non-free firmware/drivers to work correctly).
DI must work with different boot processes such as BIOS / EFI and the 
different and poor implementations that exist on different machines.



(3) Must be accessible

What do you mean you can't read English?  What do you mean your language 
doesn't use a 'Roman-based alphabet', read top left to bottom right, 
etc.?  DI must still work...
Blind? DI must support audio and TTS or braille 'screens' - this means we 
need to be very careful of layout, windowing, pop-ups, etc.
Sight problems? Large font sizes, high-contrast colour schemes 
(different colours for different users).
Other conditions? Typically these have a specialist interface for the 
user, but these *normally* present (or consume) the same low-level 
interface to the computer system - meaning that DI doesn't need to do 
anything special.


Accessibility needs may limit the number of interfaces on which we *can* 
present a UI in a suitable format to the user.



(4) Configurable

We must also remember that Debian is open to all. DI may well be the 
first interaction that the user has with Debian; that user may equally 
well be an 'old hand'.
This installation may be for a desktop, a server, a deeply embedded 
device, a mobile device, a VM - the list goes on...

This may be a local or a remote install.
What about automated installations?
What about distribute then configure on first run?  (AKA OEM 
installations - with or without a recovery 'partition')?



DI may not be able to do all of these things, and it may not do all of 
them well at the moment.  If, however, an overhaul of DI is to happen, 
then we should start by deciding what it must be able to do BEFORE 
deciding on how to go about it.  And for me that also means that the 
above points MUST be demonstrable in any prototype / first offering, 
because experience has shown me that unless it is all baked in from the 
beginning, such features are often impossible to add later.



/Andy




On 24/06/2019 06:15, Marc Haber wrote:

On Sun, 23 Jun 2019 23:10:29 +0200, patrick.dre...@gmx.net wrote:

Proposition: Simlify the Installation. Dosn't have 2 Installation options.
..., Graphical Installation.

Rebuttal: I work with Debian every day, and I have not seen the
graphical installer in a decade.

Not everything that works for you should be default or even the only
option.


With kind Greetings!

Ebenso.

Grüße
Marc




Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"

2019-05-23 Thread Andy Simpkins
Sam.
Whilst I agree that "assets" in some packages may not have sources with them, 
and the application may still be in main if it pulls in those assets from 
contrib or non-free, I am trying to suggest the same thing here.  If the data 
set is unknown, this is the *same* as a dependency on a random binary blob 
(music / fonts / game levels / textures, etc.), and we wouldn't put that in 
main.

It is my belief that we should consider training data sets as 'source' in 
much the same way.

/Andy

On 23 May 2019 16:33:24 BST, Sam Hartman  wrote:
>>>>>> "Andy" == Andy Simpkins  writes:
>
>Andy> *unless* we can reproduce the same results, from the same
>Andy> training data, you cannot classify as group 1, "Free
>Andy> Model", because verification that training has been
>Andy> carried out on the dataset explicitly licensed under a
>Andy> free software license can not be achieved.  This should be
>Andy> treated as a severe bug and the entire suite should be
>Andy> classified as group 2, "ToxicCandy Model", until such time
>Andy> that verification is possible.
>
>I don't think that's entirely true.
>If we've done the training we can have confidence that it's free.
>Reproducibility is still an issue, but is no more or less an issue than
>with any other software.
>
>
>Consider how we treat assets for games or web applications.  And yes
>there are some confusing areas there and areas where we'd like to
>improve.  But let's be consistent in what we demand from various
>communities to be part of Debian.  Let's not penalize people for being
>new and innovative.
>
>
>--Sam

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"

2019-05-23 Thread Andy Simpkins



On 22/05/2019 03:53, Mo Zhou wrote:

Hi Tzafrir,

On 2019-05-21 19:58, Tzafrir Cohen wrote:

Is there a way to prove in some way (reproducible build or something
similar) that the results were obtained from that set using the specific
algorithm?

I wrote a dedicated section about reproducibility:
https://salsa.debian.org/lumin/deeplearning-policy#neural-network-reproducibility


I suppose that the answer is negative, but it would have been nice to
have that.

In simple cases, fixing the seed for random number generator is enough.

If any upstream has ever claimed that their project aims to be of high
quality, then being unable to reproduce is very likely a fatal bug.

Reproducibility is also a headache among the machine learning and
deep learning communities. They are trying to improve the situation.
Everyone likes reproducible bits.


I agree completely.

Your wording, "The model /should/ be reproducible with a fixed random 
seed.", feels correct, but I wonder whether guidance notes along the 
following lines should be added:


    *unless* we can reproduce the same results, from the same training
    data, you cannot classify as group 1, "Free Model", because
    verification that training has been carried out on the dataset
    explicitly licensed under a free software license can not be
    achieved.  This should be treated as a severe bug and the entire
    suite should be classified as group 2, "ToxicCandy Model", until
    such time that verification is possible.

Finally,
Thank you for your work on this.

/Andy



Re: FYI/RFC: early-rng-init-tools

2019-03-07 Thread Andy Simpkins
On 03/03/19 17:59, Kurt Roeckx wrote:
> I think the only sane things are:
> - Use a hardware RNG (CPU, TPM, chaos key, ...)
> - Credit a seed file stored during the previous boot
> - Wait for new entropy from other sources
> 
> Note that it can be a combination of all 3.
> 
> We currently do not credit the seed file, for various good
> reasons. We should provide an option to users that need it to
> trust that file and credit that file. Note that it does not need
> to be fully trusted, we could for instance say it only provides 64
> bits of entropy.
> 
> Most people will actually have at least 2 hardware RNGs: One in
> the CPU and one in the TPM. 

No.  This may be true for AMD64, but it is not the case for other platforms.

> We can make the kernel trust those as
> entropy source without using something in userspace to feed it.
> I'm not sure in the kernel has the option to use the TPM directly
> as source, but it makes it available as /dev/hwrng. (The TPM might
> be disabled in the BIOS.) Some people don't trust them, I suggest
> they buy something they do trust, and disable the ones they don't
> trust. I think we should trust all hardware RNGs by default, and
> then also actually extract data from all of them.
> 

Yes - if there is a HRNG we should use it by default.  If the user does
NOT want to use this, they are free to disable it (we need to ensure
that this is not a complex procedure).

> Note that the internal state of an RNG is only 256 bit / 32 byte.
> If you make that output something, it can't have more than that 256
> bit of entropy. It does not make sense to take more bytes of the RNG
> than that to feed back in it. It can make sense to do this at
> different times, after the RNG has reseeded, but both should be
> limited to that 256 bit / 32 byte. It doesn't make sense to do
> this at more than 2 different points in time.
> 
> There is no point in using an other RNG to stretch something. Just
> use the kernel RNG to stretch it by just asking more data from it.
> 
> Do not feed the output of the kernel during boot back into the
> kernel, even if you don't credit it. If there is something random
> in it, the kernel will already have used that. If you do it, there
> is no point in using something like md5, the kernel will take care
> of that itself.
> 
> Other than the entropy you feed it, it can be useful to feed it
> data that does not need to be secret but is very likely different
> on each boot, including things like the current time, and an
> incrementing counter. It would not be credited as having entropy.

Agreed - it provides a 'delta' that is especially useful for VMs.
(Serial numbers and MACs may be useful sources of 'pseudo-uniqueness' here.)
> The seed file currently acts as this. I have no idea if the kernel
> does anything like that itself, like the mount count of a
> filesystem. It might be useful that we feed it some boot counter.
> 
> 
> Kurt
> 
> 

/Andy





Re: FYI/RFC: early-rng-init-tools

2019-02-25 Thread Andy Simpkins



On 24/02/2019 20:00, Philipp Kern wrote:

On 2/24/2019 8:52 PM, Thorsten Glaser wrote:

In buster/sid, I noticed a massive delay booting up my laptop
and some virtual machines, which was reduced by hitting the
Shift and Ctrl keys multiple times randomly during boot; a
message “random: crng init done” would appear, and boot would
continue.

This is a well-known problem, and there are several bugs about
this; new in buster/sid compared to stretch is that it also
blocks urandom reads (I was first hit in the tomcat init script
from this). This is especially noticeable if you use a sysvinit
non-parallel boot, but I’m sure it also affects all others.

FTR this is supposedly fixed on the main architectures featuring an RNG
in the CPU by linux 4.19.20-1, which enabled RANDOM_TRUST_CPU, and which
Ben announced on this list[1] earlier this month.


Be aware RANDOM_TRUST_CPU depends on: CONFIG_X86 || CONFIG_S390 || CONFIG_PPC


I should have thanked Ben sooner for turning this on; in the meantime I 
am preparing an email to the list for other architectures (mainly ARM at 
the moment, I admit).


/Andy



Re: package management symlink

2019-02-06 Thread Andy Simpkins

Sören please see:

https://xkcd.com/927/

/Andy


On 05/02/2019 06:20, Sören Reinecke wrote:

Dear Debian mailing list community,

I am Sören, alias Valor Naram, and I founded the project "goeasyLinux". I 
want to help make Linux more user-friendly.

A short introduction to "goeasyLinux" can be found at 
https://github.com/ValorNaram/goeasylinux/blob/master/README.md

The specification I wrote in order to make a cross platform symlink to package 
management systems: 
https://github.com/ValorNaram/goeasylinux/blob/master/package%20management/package%20install.md

With your help I want to make package installing/removing equal on all linux 
systems without disturbing the diversity we have across linux distributions. In 
order to do that we need just a symlink, no replacement of existing software.


Best wishes

Sören alias Valor Naram





Re: Handling of entropy during boot

2019-01-21 Thread Andy Simpkins
Hi,

This thread seems to have gone quiet for some time.  Re-reading the
thread, I don't see any solutions being proposed that will truly suit
everyone.
everyone.

If I have correctly understood the problem we are seeing a change from a
more open and trusting software environment to one with more emphasis on
security that is also less trusting:
* More packages are requiring the use of the kernel's high quality
entropy pool (including aspects of the kernel itself)
* At the same time questions are being asked over how much we can trust
our entropy sources. There is no agreement on which sources we should
trust; this appears to be based upon cultural perspective rather than
evidence.
* Different platforms may have different entropy sources available to
them (think desktops, mobile devices, headless servers, small IoT
devices & virtualised instances)

What does this mean for Buster?

Some services may take a long time to start.  I am not talking about a
few seconds here, but instead minutes or even hours.  I myself see sshd
timing out and being restarted by systemd several times before finally
starting some 7 min after the rest of the system on my ARM64 Mustang
platform.  I have seen reports of taking literally several hours for all
services to start on some NAS boxes.

Unfortunately some services fail to start completely, others are
terminated and unlimited restart attempts are made.

In all cases, that I have seen, there is no mention of the reason for
the failed start being that there is insufficient entropy available.
This itself is a bug whatever your view on how to address lack of
available entropy during start-up.

We should at the very least state the reason a service has not started.
I believe that systemd has the ability to only start services when a
given event has happened (e.g. wait for network).  Should we be asking to
wait for “entropy pool > x bytes” before starting a given service?



Should we add to or change the possible entropy sources?

Increasing the number of different sources of entropy may well reduce
the time waiting for sufficient entropy, (although this is not an excuse
not to explain why a service has failed to start).

There has been some discussion about adding in further possible entropy
sources, and whether or not each source should be enabled by default.
In general nobody appears to be arguing against having the ability to
use additional entropy sources; the only debate is over which should be
enabled by default within Debian.

This debate appears to boil down to ‘do I trust this source’, and it is
accepted that this is very much dependent upon what the installation is
going to be used for AND your geo-political leanings.  E.g. you may well
trust a HRNG in an Intel device if you are American, but be less
inclined to trust one from China, and vice versa.

I don't think that we can OR SHOULD make a sensible decision for an
out-of-the-box experience that will be suitable for all users.
Perhaps instead we should consider a tool (to be included in DI as well
as in the archive) that can present the different options and allow
the user to decide?

If this is the way we as a project decide to go I would very much like
to be involved in this new package.  Such a tool is probably beyond my
ability to write, however I would be very happy to work on the design,
UI and testing.

Is this the right approach to take?

Best regards

Andy



Re: Conflict over /usr/bin/dune

2018-12-19 Thread Andy Simpkins



On 18/12/2018 17:48, Ian Jackson wrote:

Ian Jackson writes ("Re: Conflict over /usr/bin/dune"):

  https://www.google.com/search?q=dune+software
  https://en.wikipedia.org/wiki/Dune_(software)
  https://www.google.com/search?q=%2Fusr%2Fbin%2Fdune

Under the circumstances it seems obvious that, at the very least, the
ocaml build tool should not be allowed the name /usr/bin/dune.


I agree, the name space is clearly already occupied and even whitedune is 
on thin ice.


---8<--


I just checked and `odune' seems to be available.  For a build tool a
reasonably short name is justified.  The `o' prefix is often used with
ocaml and though there is of course a risk of clashes with both
individual programs and with some suites like the old OpenStep stuff,
it seems that `/usr/bin/odune', odune(1) et al, are not taken.

Sounds like a very reasonable solution


HTH.  I know this may just be seen as my usual opinion in these
Judgement of Solomon cases and that the underlying policy is
controversial.  But whenever something like this happens and causes a
major stink, it serves to demonstrate to others what they want to, and
can, avoid.

Ian.

No, you have simply put into words my thoughts...  +1

/Andy





Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-24 Thread Andy Simpkins
On 24/11/18 14:14, bret curtis wrote:
>>> But even here in this place I have seen *a lot* of "cheap" arm64 boards. 
>>> Yes,
>>> the RPI3[+] is ubiquitous. And having to render Open GL stuff by CPU is
>>> precisely not the fastest thing around.
>>
>>  "I have a Raspberry Pi (or similar mobile class system that
>> has migrated / is migrating away from armel to arm64) and this has
>> forced a move from 'mobile' OpenGLES to 'Desktop' OpenGL.  The result of
>> which is that because that platform (and those like it) do not have
>> hardware acceleration for OpenGL but DO for OpenGLES you think we should
>> change the whole architecture for your use case." 
>>
> 
> This is a very wrong assumption, the OpenGL on a RPi (all of them) is
> hardware accelerated via the VC4 mesa driver by Eric Anholt which is
> shipped, by default, by Raspbian. It supports up to OpenGL 2.1 and
> if you plan on having hardware accelerated X11 or Wayland, you need
> the VC4 driver. You'll need "Desktop" OpenGL otherwise nothing will
> work on a RPi system, which as of 2015 has over 5 million units
> shipped. This is not an insignificant user base.
> 
> IMHO, the decision to switch away from 'Desktop' OpenGL to GLES was
> the wrong decision and should be reversed until a solution is found to
> support both.
> 
> Cheers,
> Bret
> 

Apologies for using the Raspberry Pi as my example of a 'mobile' class SoC.

IIRC the Pi was being used as the primary argument for switching away
from OpenGL to OpenGLES as this is selling in large volumes.  If the Pi
already supports OpenGL then the argument to move solely to OpenGLES is
reduced somewhat.

I will try OpenGL on a RPi this week (I normally run RPi headless so no
desktop installed).

/Andy



Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-24 Thread Andy Simpkins



On 24/11/2018 02:05, Lisandro Damián Nicanor Pérez Meyer wrote:

Andy: explicitly CCing you because I think it answers part of a question you
did but in another part of the thread.

El viernes, 23 de noviembre de 2018 06:58:13 -03 Steve McIntyre escribió:

On Fri, Nov 23, 2018 at 03:27:57AM +0300, Dmitry Eremin-Solenikov wrote:

[snip]

Can you build two packages and allow user to select, which one he wants to
install? Or those packages will be binary incompatible?

That's a good question, yes. It's what I was wondering too.

And that's a perfectly valid question, one we asked ourselves in 2015;
Ubuntu tried it out (as Dmitry pointed out) and it did not work.

Why?

Short story: really *too* complicated and error prone.

Long story:

Please first check this image:



That's almost all of Qt for 5.10 (we now have new submodules, so I need to
update it).
Understood


The Desktop/GLES decision is done at the root of the graph, qtbase. This
decision changes the API/ABI of libqt5gui5, one of the libraries provided by
qtbase.

Ack


So, as the API/ABI changes, we would need to (probably) ship two sets of
headers and (for sure) two different libraries, let's say libqt5gui5 for
Desktop and libqt5gui5gles for GLES.

Yes that sounds right


But it doesn't end there. The whole graph you saw is actually the *entire*
Qt. Upstream provides it either as a big fat tarball or as submodules. We took
the submodules route because building the whole tarball as one would take
literally days on slow arches.

The time taken for an automated process to run (or fail) should not be a 
justification not to do something.
We need to be able to build the entire archive, not just Qt, and this is 
an automated process.


As an aside: the current arm64 buildds are plenty fast enough to build 
the entire archive in a few days (IIRC sledge has done this several 
times recently). I also believe that the buildds (and porter boxes?) are 
being (have been?) replaced with newer and faster boxes (also easier for 
DSA to maintain).
I believe that they are also able to build / will build native armhf 
(and armel).  It is my understanding that bug reports & fixes are in 
progress.




And a single mistake could be disastrous.

Not relevant - a single mistake in any package is called a bug. As a 
distribution we have many of these; we strive not to introduce new ones 
and fix those that we can...

Now whatever switch is applied to qtbase it's "inherited" by the rest of the
submodules. So if we ship two versions of libqt5gui5 then we would probably
need to ship two versions of the libs provided by qtdeclarative, which is
affected by this switch.
Absolutely - everything in the subsystem would need to be duplicated up 
to the point of common API.

This waterfall schema means *multiple* libraries would have to start doing
this two-binaries thing, as Ubuntu devs discovered. But remember that Qt is
really a set of submodules, so in any later version any submodule could start
using this switch for something. So whatever change could mean yet another set
of binaries with a transition with multiple rebuilds of the big part of rdeps
of Qt... no, we don't want to enter that mess.


No. The libraries do not need to have any knowledge about the other 
subsystem / collection of submodules, i.e. 'desktop' does not need to 
be aware of 'mobile' and vice versa.




So we either keep the status quo of keeping arm64 in Desktop GL or switch to
GLES. The question is: which use case gives more benefit for our users for the
next stable release?


So far I personally know 0 people with an arm64 board with PCI slots,
while I know many with arm64 boards with hardware GLES support.

I'm working with big arm64 iron, so for me a server arm64 board with PCIe
slots (and thus PCIe graphic cards) and on-board Aspeed "VGA card" is more
common compared to GLES-enabled arm64 SoC.

How many Qt-based applications do you use there? Which ones use OpenGL?


Yeah - it depends exactly on your background. There's a small (but
growing) set of arm64 desktop users, and it would be unfortunate to
cut them off.

Let's be fair: I live almost at the end of the world, probably at the very
least 600 km away from the next DD, and in a country in which buying new
hardware is not exactly the easiest thing (my current machine, currently the
only one I have working, is now 10 years old...). So yes, as Steve says, it
depends on your background.

But even here in this place I have seen *a lot* of "cheap" arm64 boards. Yes,
the RPI3[+] is ubiquitous. And having to render OpenGL stuff on the CPU is
not exactly the fastest thing around.


Right so this is the crux of the matter.

I am putting words in your mouth here - please accept my apologies. I am 
trying to describe how I have perceived your comments; these are clearly 
not the words you have used, but that is how I am parsing them.  I have 
tried several times to re-word this better, and it still feels 
confrontational, but...

Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-23 Thread Andy Simpkins



On 23/11/2018 00:17, Lisandro Damián Nicanor Pérez Meyer wrote:

Hi! Please let me reply first to your last part:


Is there any possible way to support *BOTH* OpenGL / OpenGLES?  Mutually
exclusive from an install POV, but give the end user the choice which to
install?  Why should we have one Architecture forced down a path
different to another architecture?

No, I'm afraid there is no way to do that. We did consider it many times, but
it is definitely too much work to hack on.

So this is a large parcel of work that the team doesn't want to do, but 
it is possible.


I do understand that there would be a lot of effort required to support 
OGL and OGLES, but as you have already pointed out "you are doing this 
already": OGL is provided for all platforms except armel & armhf, which 
have OGLES - that means you are already tracking changes for *BOTH* 
ecosystems.

Having OGL & OGLES available on the same architecture would involve setup 
work in creating two package streams, but once done the actual build 
process is automated.
Yes, there are now twice as many resulting sets of binaries for this 
layer, but it is reasonable to assume that functional testing of each 
strand can be split across the testing for all architectures (where not 
automated), so the increased workload again shouldn't differ by much 
(just the supporting of the automation).
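For illustration only, here is a sketch of what two mutually exclusive
package streams could look like as a debian/control fragment. The package
name libqt5gui5-gles and the virtual package libqt5gui5-any are hypothetical
here - this is not something the Qt team has committed to, just one way the
"mutually exclusive from an install POV" idea is commonly expressed in
Debian packaging:

```
# Hypothetical debian/control fragment: two builds of the same library,
# installable only one at a time via Conflicts/Replaces, with a shared
# virtual package that reverse dependencies can depend on.
Package: libqt5gui5
Provides: libqt5gui5-any
Conflicts: libqt5gui5-gles
Replaces: libqt5gui5-gles

Package: libqt5gui5-gles
Provides: libqt5gui5-any
Conflicts: libqt5gui5
Replaces: libqt5gui5
```

The cost the team describes still applies: everything above this layer that
links against the GL-affected ABI would need the same duplication.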

I am sure my view is naive and a little too simplistic...



So we need to force an architecture (actually, all of them!) to either one or
the other.


El jueves, 22 de noviembre de 2018 20:04:33 -03 Andy Simpkins escribió:

On 22/11/18 22:33, Lisandro Damián Nicanor Pérez Meyer wrote:

El jueves, 22 de noviembre de 2018 15:37:29 -03 Dmitry Shachnev escribió:

Hi all!

The Qt framework can be built either with “desktop” OpenGL, or with OpenGL
ES support. At the moment we are building it with OpenGL ES on armel and
armhf, and with desktop OpenGL on all other architectures.

Maybe we missed to properly explain the main point of this change:
currently most arm64 boards are using software rasterization because
their video cards do not support Desktop OpenGL.

I am not sure that is correct.  I certainly don't agree...

There is no special case here.  If you have a video card in your ARM64
PC then it is likely the same video card that you have for an AMD64 PC -
i.e. it is an off the shelf PCIe card.

Now it is correct that there is a large number of ARM64-based SoC
solutions out there with an embedded GPU - these are aimed mainly at the
mobile market (but as the computational power in these SoCs increases we
are already seeing that it is enough for a lot of people's 'PC' needs)

I guess what I am trying to say here is the GPU architecture is NOT tied
to the CPU architecture.

- GPU architecture is not tied to the arch: right.
- Qt is tied to either Desktop or GLES: yes

So we need to pick one. The question is then which one will benefit our users
most.

So far I personally know 0 people with an arm64 board with PCI slots, while I
know many with arm64 boards with hardware GLES support.
I have quite a lot of ARM boards (for the record I am neither an ARM nor 
a Linaro employee, but I do design hardware using ARM cores).

I have 2 arm64 motherboards - both have PCIe slots and no GPU built into 
the SoC. These are both Mini-ITX form factor boards and are drop-in 
replacements for amd64-based systems.  They both have multiple SATA 
interfaces, DIMM slots, etc.

I have several armhf boards - these all have OpenGLES-supporting GPUs on 
the SoC. Only one of them has a (single) SATA interface, and none of them 
have DIMM slots (instead having between 512MB and 2GB of LPDDR soldered 
to the board).  None of these are desktop PC replacements - they are 
embedded systems (think tablet / mobile / dedicated-task systems).

As of today there are considerably more 'mobile' arm devices.  I suspect 
that this will continue because they are lower-cost, mass-market products. 
Full 'desktop' on arm64 has felt very close for the last few years, but 
the hardware isn't there just yet. There are some quite big server SoCs 
out there, but the desktop & laptop world isn't well served.



If we switch to GLES then most arm64 boards
will be able to render using their video hardware, thus greatly improving
speed to the point of being actually usable for some stuff.

I imagine (but would *love* hard data) that any PCI video card added to an
arm64 machine will probably also support GLES, so they will still have
use.

So 
any PCI video card added to s/amr64/AMD64 machine will probably also
support GLES, so they will still have use.
OK that is true - lets enact this across ALL architectures, but I
suspect that there may be a bit of pushback from the AMD64 heavy graphic
users...


No need to use sarcasm. Yes, it's a matter of choice. No one noted yet that
all archs except armel and armhf have Desktop support and not GLES. And this
is because, so far and to the best of our knowledge, that ha

Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-22 Thread Andy Simpkins
On 22/11/18 22:33, Lisandro Damián Nicanor Pérez Meyer wrote:
> El jueves, 22 de noviembre de 2018 15:37:29 -03 Dmitry Shachnev escribió:
>> Hi all!
>>
>> The Qt framework can be built either with “desktop” OpenGL, or with OpenGL
>> ES support. At the moment we are building it with OpenGL ES on armel and
>> armhf, and with desktop OpenGL on all other architectures
> 
> Maybe we missed to properly explain the main point of this change: currently 
> most arm64 boards are using software rasterization because their video cards 
> do not support Desktop OpenGL. 

I am not sure that is correct.  I certainly don't agree...

There is no special case here.  If you have a video card in your ARM64
PC then it is likely the same video card that you have for an AMD64 PC -
i.e. it is an off the shelf PCIe card.

Now it is correct that there is a large number of ARM64-based SoC
solutions out there with an embedded GPU - these are aimed mainly at the
mobile market (but as the computational power in these SoCs increases we
are already seeing that it is enough for a lot of people's 'PC' needs)

I guess what I am trying to say here is the GPU architecture is NOT tied
to the CPU architecture.


> If we switch to GLES then most arm64 boards
> will be able to render using their video hardware, thus greatly improving 
> speed to the point of being actually usable for some stuff.
> 
> I imagine (but would *love* hard data) that any PCI video card added to an 
> arm64 machine will probably also support GLES, so they will still have use.
> 

So 
any PCI video card added to s/amr64/AMD64 machine will probably also
support GLES, so they will still have use.
OK that is true - lets enact this across ALL architectures, but I
suspect that there may be a bit of pushback from the AMD64 heavy graphic
users...


> But one thing is for sure: it's not a decision in which everyone wins, so we 
> are trying to make a decision on which *most* of our users wins.  
> 
> 
Agreed

Is there any possible way to support *BOTH* OpenGL / OpenGLES?  Mutually
exclusive from an install POV, but give the end user the choice which to
install?  Why should we have one Architecture forced down a path
different to another architecture?

/Andy



Re: Opt-in to continue as DD/DM? (was: I resigned in 2004)

2018-11-13 Thread Andy Simpkins
Speaking as someone who has had his world shattered by betrayal and 
breach of trust by an organisation (not Debian related), I can 
completely understand how any correspondence opens old wounds.
I am not trying to justify anything here; only to say that I understand 
it from *both* sides, and (I may be projecting my own experiences into 
this too much) can see how things can get out of

control so fast



An organisation, and its ideals can be your friend (an idol? a belief? a 
religion?).  You invest a lot in that organisation - you live your life 
through it.

In many ways it defines you.
Belief and principles *are* that strong.
This may not be a healthy relationship, but relationship it is.

You contribute, you work hard and in return you are appreciated; you are 
shown to be valued, to have worth.  oh how good it feels to have someone 
tell you that when outside that organisation you are taken for granted, 
derided, bullied, and as a result you have very low self esteem.



When that organisation fractures or turns away from your aligned 
beliefs, or when something nasty happens and the organisation doesn't 
defend you from those within, your trust / belief is shattered in an instant.
There can be no reconciliation, no forgiveness, at best there can be 
acceptance, to move on,  lick your wounds and start a new life.



These wounds do not heal, they fester and grow deeper over time.
You armour yourself with righteousness, you weave stories of what did 
happen (or what did happen when from your viewpoint when hurting) to 
shield you from the pain.
You put it in a box at the back of your mind and do not ever want to go 
there again.


Years later you still wake at night, replaying the events over and over 
again in your head.  You tell yourself you did the right thing to leave; 
you conclude that given the same events again you would probably have 
done the same things, acted the same way, but wish that you could have 
acted without as much venom.



Fast forward a decade or more

Someone from that organisation opens that box

You go from your usual calm and controlled self to burning hate monster 
instantly.  You lash out, 'how dare you cause me to relive all this 
pain?'.  'leave me alone - I have told you before; go away'
Some days later when you look back on this you are not proud of what you 
did, but the pain is back, why should I care about others who have just 
hurt me again?


These wounds do not heal, they fester and grow deeper over time. They 
have been reopened and alongside them sits a new gash in your heart.
You armour yourself with righteousness, you weave stories of what did 
happen (or what did happen when from your viewpoint when hurting) to 
shield you from the pain.
You put it in a box at the back of your mind and do not ever want to go 
there again.


You still wake at night, replaying the events over and over again in 
your head.  You tell yourself you did the right thing to leave; you 
conclude that given the same events again you would probably have done 
the same things, acted the same way, but wish that you could have acted 
without as much venom.


In time you get more and more trouble free nights of sleep.

In time...


Andy
//trying very hard not to break down whilst typing this.





Accepted libstring-shellquote-perl 1.03-1.2 (source all) into unstable

2016-02-20 Thread Andy Simpkins
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Format: 1.8
Date: Sat, 20 Feb 2016 16:43:17 +
Source: libstring-shellquote-perl
Binary: libstring-shellquote-perl
Architecture: source all
Version: 1.03-1.2
Distribution: unstable
Urgency: medium
Maintainer: Roderick Schertler <roder...@argon.org>
Changed-By: Andy Simpkins <rattusrat...@debian.org>
Description:
 libstring-shellquote-perl - quote strings for passing through the shell
Closes: 800302
Changes:
 libstring-shellquote-perl (1.03-1.2) unstable; urgency=medium
 .
   * Non-maintainer upload.
   * changed from debhelper compatibility level 3 to 9:
 closes: #800302
 - changed (control) Build-Depends: debhelper (>= 9)
 - added (control) Depends: ${misc:Depends}
 - removed (rules) export DH_COMPAT=3
 - added new (compat) file
 - replaced (rules) (deprecated) dh_clean -k
   with dh_prep
   * resolved lintian warnings:
 - changed (copyright) FSF address
 - updated (control) ancient-standards-version 3.6.1 to 3.9.6
Checksums-Sha1:
 b7db3055afe86f38ed653c704ac0d359e8b3225c 1795 
libstring-shellquote-perl_1.03-1.2.dsc
 7635c2c6dae4c183aabecb86eb91bec759ea6c2f 2778 
libstring-shellquote-perl_1.03-1.2.diff.gz
 54de852b5a8774c6b434f26a3e0be4c50a682e9e 12188 
libstring-shellquote-perl_1.03-1.2_all.deb
Checksums-Sha256:
 2c41c7b3b0efd4c62126a74938a3137ba5a0550225fdb9d1c56d43fbae37ab63 1795 
libstring-shellquote-perl_1.03-1.2.dsc
 5a3918c7a2009184ce25ae47cb5d1c9f74586d5a8475f445460bc602440bef7e 2778 
libstring-shellquote-perl_1.03-1.2.diff.gz
 050c49ade44e4148cfee6bfc29a5d553ab0e49de528f267d748ad53234f72107 12188 
libstring-shellquote-perl_1.03-1.2_all.deb
Files:
 7ca6f57fba70f61739f598d632a2d2fe 1795 perl optional 
libstring-shellquote-perl_1.03-1.2.dsc
 7687d884fa2d7c9f8d6a3e7861e805da 2778 perl optional 
libstring-shellquote-perl_1.03-1.2.diff.gz
 c52e69edf7ad64c068470563e1760ed8 12188 perl optional 
libstring-shellquote-perl_1.03-1.2_all.deb

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWyLOqAAoJEGN9Nzz68Z6gDzcQAIhsLo1PgL2++iZ/GFRM0lif
s2vF4WjCPOj8gh5B/TSJgsKeItNrNBQsHcNiLcxjX824oPHQTIQNLG6J87qq3vMY
GiXxdzErjhZHhlrqnLy2cuU8CbRoiso15Q8FBJlFxy6pG8LN3IDTkfShL+MM+Xpg
XkuJUq8T2z5c8iF6895Z2CyhiWo1rmobFDD5EInehFEWUBbZi4RodHSB2CKCym/x
IKQXhAy6XStzP7uSeX4pCVsM94dTXVr+C2it0a7mIHhBTzb8rewe1EEVOHz5ICtu
fPC50zSfYkOjz4q3a/itTUvTsjENKc44MIqPLnnV1SgHHBAU/A0cO6sHYO1GGyTC
I35c9Flt9vCBBllnXLMxwDyrABptJXSY6Ie4+D8bhe4tT70t3rINrRu3dcGW471t
RmWH9aCWdg7j8pJ6uoqm2trzy9V5NCDKIMIr96AEEYQ4hkpiFjmtnSm6oWAuc7ww
ZhiDAyAjuXM44psddgnbxobc0zVqUmFH8/j+qAfZ2ly2/lrHfe0u7V76JkcjOmZI
aO5NovztZ8fNo2pFhG3Io3Br8Dcfdn0ClH6dWXKB5JJeJ9L0HC151uxtoQl3nCrY
lws1ZYhF0Ylm8/PRp/F8DDSDtraFk4tkcZfsHCH835FTFXmyMGAFb4C7lU7wDNXL
Hst2a2t8BEwhkJR7Oeow
=Bhqd
-END PGP SIGNATURE-
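The compat-level changes listed in the changelog above amount to roughly the
following shell sketch, replayed against a hypothetical throwaway tree (the
`demo/` paths and stub rules file are illustrative only; the real package
carries much more):

```shell
# Replay the debhelper compat 3 -> 9 bump from the changelog
# (hypothetical tree under ./demo).
mkdir -p demo/debian
# Stand-in for the old-style rules file the changelog describes.
printf 'export DH_COMPAT=3\n\tdh_clean -k\n' > demo/debian/rules
# Add the new debian/compat file.
echo 9 > demo/debian/compat
# Drop the exported DH_COMPAT=3 and swap the deprecated dh_clean -k
# for dh_prep, as the changelog entries state.
sed -i -e '/^export DH_COMPAT=3$/d' \
       -e 's/dh_clean -k/dh_prep/' demo/debian/rules
cat demo/debian/compat demo/debian/rules
```

(Modern packaging would declare `debhelper-compat` in Build-Depends instead
of a compat file, but that postdates this upload.)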



Accepted libtext-unaccent-perl 1.08-1.2 (source amd64) into unstable

2016-02-20 Thread Andy Simpkins
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Format: 1.8
Date: Sat, 20 Feb 2016 14:17:59 +
Source: libtext-unaccent-perl
Binary: libtext-unaccent-perl
Architecture: source amd64
Version: 1.08-1.2
Distribution: unstable
Urgency: medium
Maintainer: Loic Dachary (OuoU) <l...@debian.org>
Changed-By: Andy Simpkins <rattusrat...@debian.org>
Description:
 libtext-unaccent-perl - provides functions to remove accents using UTF16 as a 
pivot
Closes: 800257
Changes:
 libtext-unaccent-perl (1.08-1.2) unstable; urgency=medium
 .
   * Non-maintainer upload.
   * changed from debhelper compatibility level 3 to 9:
 closes: #800257
 - changed (control) Build-Depends: debhelper (>= 9)
 - added (control) Depends: ${misc:Depends}
 - removed (rules) export DH_COMPAT=3
 - added new (compat) file
 - replaced (rules) (deprecated) dh_clean -k
   with dh_prep
   * resolved lintian warnings:
 - changed (copyright) FSF address
 - added version in (copyright)
   /usr/share/common-licenses/GPL-2
 - added (rules) targets build-indep & build-arch
 - changed (rules) stop ignoring errors in make clean
Checksums-Sha1:
 bd80caac7e2948720143d0fbc2cb9b7adc38da32 1767 
libtext-unaccent-perl_1.08-1.2.dsc
 2a362f3bff1ed963e65e94b97cc407f3ad191975 2700 
libtext-unaccent-perl_1.08-1.2.diff.gz
 a31eb328a919c2d176430ad883df345d6d2c72ef 4012 
libtext-unaccent-perl-dbgsym_1.08-1.2_amd64.deb
 223342fad1af51a0e009cff8a0beed44c5173ea8 20792 
libtext-unaccent-perl_1.08-1.2_amd64.deb
Checksums-Sha256:
 35d481d881ffe9ed7277c0a890b22a010cf4b0cf251c45d069469166ffd529d6 1767 
libtext-unaccent-perl_1.08-1.2.dsc
 fb82aded74818c40dd7f854ea15478c1e775b64e6d0fc14fd1b26de8315b8a44 2700 
libtext-unaccent-perl_1.08-1.2.diff.gz
 02a19e955478f7c194f966a5d6417d805fd0ea1d309469c51449c205ac06ccb8 4012 
libtext-unaccent-perl-dbgsym_1.08-1.2_amd64.deb
 bdf2e48a74a4ad7870c75e9d907c02dff5cf800cc9c96760e661608c9953d47e 20792 
libtext-unaccent-perl_1.08-1.2_amd64.deb
Files:
 fcacd083802f2375744aa61131cc6358 1767 interpreters optional 
libtext-unaccent-perl_1.08-1.2.dsc
 7d22a5d84836bc448082aa79930b01c5 2700 interpreters optional 
libtext-unaccent-perl_1.08-1.2.diff.gz
 ffd34ae57c0a3854af839034e8f0031b 4012 debug extra 
libtext-unaccent-perl-dbgsym_1.08-1.2_amd64.deb
 cb3159c59ebda7b55245a588eb9ae56b 20792 interpreters optional 
libtext-unaccent-perl_1.08-1.2_amd64.deb

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWyIlRAAoJEGN9Nzz68Z6gGN0P/iT2amzymeTXCuZQShaDvLqW
e2sjGzWWSdGvX9A0v95X0ubjwe2vB3SM7Z0T/7j6xx8EdAe5NUr16OxwnfIya89W
GtyrRkOUlbM1s56S7DG7y2nKTcGnrI5/X/cYpV1mOckMMYokB8KgXLzOfRTWY/XI
3Idy2LVCtaKDp66r5eKMOnANp/4r/laBp2dbu8fteV972047BJCusGrOoEcFe7fH
CmcUmp0uz9PqiPVGth1OwySZAQJ+jmo9R+LN/sD1J616nAI98f3OevWrPlUvV0Vv
eViS7Fgv8vM2oWRHYwFbrUp8ly6OBaw2aiHoJu/2VhLz7a2nNoqD93ViX1FgnJNm
lsFlbzX3dPbnB+ZQVy5toX92ztfOVHyuC/NswU36gZOGSlhLp4TWYe+solPANl5F
v47hcaFgOsDeVZ7QlKMNrxTCicHiAr+sT8FtObUzg8xOqZNoaA+OFsRLq5Do7EIC
sKFL3xh6KLxWItE5tNEQ8z7p4jKrpryq4ZJPzybaEn/vQgWP2HCyNhkgjbuNf4Jy
Z8pgDNU4FbrK8nXks+DHb6fIt0QxDQFQ7wj1dkyzkrqJpTfEPci3vqPDTRaR0B+0
AsStPExDBkBiKea4GyTlJSq6kTqDdx8W00O7zPXsSwmArvGe0iGQyw6YUptURzRR
anM59snC4lcZXMJoIFPQ
=m5fe
-END PGP SIGNATURE-



Video-team sprint 4-6 November

2015-10-14 Thread Andy Simpkins

CC to correct address this time

Hi.

First my apologies for not getting round to this sooner.  I should have
done this weeks ago.  Sorry.

As part of the Cambridge Miniconf [1], ARM have kindly offered the video
team the opportunity to run a video team sprint in the preceding few
days. We will also be providing video services for the miniconf.

Sprint details are on the wiki [2]

I have spoken with Neil who has verbally approved the sprint, but this
email should hopefully officially sanction the event.

Over the next couple of days I shall arrange transport of equipment to
site, which has kindly been sponsored by Cosworth [3] [4]

We will be requesting budget for travel costs to enable a number of the
team to attend (presently it is expected to be for 3-4 people) and
should be sub £2500.

Regards,

Andy
(RattusRattus)


[1] https://wiki.debian.org/DebianEvents/gb/2015/MiniDebConfCambridge
[2] https://wiki.debian.org/sprints/2015/VideoTeam
[3]
https://wiki.debian.org/DebianEvents/gb/2015/MiniDebConfCambridge#Sponsors
[4] http://www.cosworth.com/









