Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Jessica Clarke
On 17 Nov 2020, at 21:51, Samuel Thibault  wrote:
> 
> Svante Signell, le mar. 17 nov. 2020 22:47:04 +0100, a ecrit:
>> On Tue, 2020-11-17 at 21:59 +0100, Samuel Thibault wrote:
>>> Svante Signell, le mar. 17 nov. 2020 21:56:33 +0100, a ecrit:
>> 
>>>> Got it. Can some of the drivers be tested with software rendering,
>>>> like swrast?
>>> 
>>> software rendering does not use drm/dri. That's already what we are
>>> using (and have been using for years) for -vga std or cirrus.
>> 
>> So having more mesa packages building is worthless?
> 
> I don't mean it's worthless, I mean it already used to be building and
> working fine. Now that it builds fine again, I have little doubt that
> software rendering works fine.

Especially since software rendering (swrast in mesa) uses LLVM these
days (llvmpipe), having mesa continue to build is useful, otherwise it
can get caught up in transitions and block a lot of downstream software
from building (cmake -> qt -> mesa is an annoying build dependency
chain that quickly becomes problematic).

Jess




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 22:47:04 +0100, a ecrit:
> On Tue, 2020-11-17 at 21:59 +0100, Samuel Thibault wrote:
> > Svante Signell, le mar. 17 nov. 2020 21:56:33 +0100, a ecrit:
> 
> > > Got it. Can some of the drivers be tested with software rendering,
> > > like swrast?
> > 
> > software rendering does not use drm/dri. That's already what we are
> > using (and have been using for years) for -vga std or cirrus.
> 
> So having more mesa packages building is worthless?

I don't mean it's worthless, I mean it already used to be building and
working fine. Now that it builds fine again, I have little doubt that
software rendering works fine.

Samuel



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
On Tue, 2020-11-17 at 21:59 +0100, Samuel Thibault wrote:
> Svante Signell, le mar. 17 nov. 2020 21:56:33 +0100, a ecrit:

> > Got it. Can some of the drivers be tested with software rendering,
> > like swrast?
> 
> software rendering does not use drm/dri. That's already what we are
> using (and have been using for years) for -vga std or cirrus.

So having more mesa packages building is worthless? You are very
encouraging.

Thanks!




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 21:56:33 +0100, a ecrit:
> On Tue, 2020-11-17 at 15:32 +0100, Samuel Thibault wrote:
> > Svante Signell, le mar. 17 nov. 2020 15:31:03 +0100, a ecrit:
> 
> > > > > > dri cannot work. Your changes in libdrm only introduced some
> > > > > > stub interface. An actual drm implementation is needed to get
> > > > > > any kind of direct rendering working.
> > > > > 
> > > > > The changes to libdrm are similar to the kFreeBSD solution. So
> > > > > you mean that dri won't work there either?
> > > > 
> > > > kFreeBSD does have some drm infrastructure.
> > > 
> > > Ok. What is needed for Hurd to have drm infrastructure, kernel
> > > support? 
> > There were some discussions about it on the list some years ago. Yes,
> > some kernel support probably, and coordination between user processes
> > using it. Not a simple hack anyway.
> 
> Got it. Can some of the drivers be tested with software rendering, like
> swrast?

software rendering does not use drm/dri. That's already what we are
using (and have been using for years) for -vga std or cirrus.

Samuel



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
On Tue, 2020-11-17 at 15:32 +0100, Samuel Thibault wrote:
> Svante Signell, le mar. 17 nov. 2020 15:31:03 +0100, a ecrit:

> > > > > dri cannot work. Your changes in libdrm only introduced some
> > > > > stub interface. An actual drm implementation is needed to get
> > > > > any kind of direct rendering working.
> > > > 
> > > > The changes to libdrm are similar to the kFreeBSD solution. So
> > > > you mean that dri won't work there either?
> > > 
> > > kFreeBSD does have some drm infrastructure.
> > 
> > Ok. What is needed for Hurd to have drm infrastructure, kernel
> > support? 
> There were some discussions about it on the list some years ago. Yes,
> some kernel support probably, and coordination between user processes
> using it. Not a simple hack anyway.

Got it. Can some of the drivers be tested with software rendering, like
swrast?

Thanks!




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 15:31:03 +0100, a ecrit:
> On Tue, 2020-11-17 at 15:22 +0100, Samuel Thibault wrote:
> > Svante Signell, le mar. 17 nov. 2020 15:22:02 +0100, a ecrit:
> > > On Tue, 2020-11-17 at 14:57 +0100, Samuel Thibault wrote:
> > > > Svante Signell, le mar. 17 nov. 2020 14:53:56 +0100, a ecrit:
> > > > > > > Which of these (and xorg* packages) are needed?
> > > > > > 
> > > > > > Needed for what?
> > > > > 
> > > > > For testing if some of the dri/drv drivers work on Hurd: e.g.
> > > > > r200,
> > > > > r300 etc.
> > > > 
> > > > dri cannot work. Your changes in libdrm only introduced some stub
> > > > interface. An actual drm implementation is needed to get any kind of
> > > > direct rendering working.
> > > 
> > > The changes to libdrm are similar to the kFreeBSD solution. So you
> > > mean
> > > that dri won't work there either?
> > 
> > kFreeBSD does have some drm infrastructure.
> 
> Ok. What is needed for Hurd to have drm infrastructure, kernel
> support? 

There were some discussions about it on the list some years ago. Yes,
some kernel support probably, and coordination between user processes
using it. Not a simple hack anyway.

> Another question: libgbm1 now builds, and many packages build-depend
> on libgbm-dev (and maybe other packages not yet available): mpv,
> directfb, virglrenderer, ukvm, kmscube, etc. Are they also unusable
> if/when built?

It depends whether they require direct rendering or not.

Samuel



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
On Tue, 2020-11-17 at 15:22 +0100, Samuel Thibault wrote:
> Svante Signell, le mar. 17 nov. 2020 15:22:02 +0100, a ecrit:
> > On Tue, 2020-11-17 at 14:57 +0100, Samuel Thibault wrote:
> > > Svante Signell, le mar. 17 nov. 2020 14:53:56 +0100, a ecrit:
> > > > > > Which of these (and xorg* packages) are needed?
> > > > > 
> > > > > Needed for what?
> > > > 
> > > > For testing if some of the dri/drv drivers work on Hurd: e.g.
> > > > r200,
> > > > r300 etc.
> > > 
> > > dri cannot work. Your changes in libdrm only introduced some stub
> > > interface. An actual drm implementation is needed to get any kind of
> > > direct rendering working.
> > 
> > The changes to libdrm are similar to the kFreeBSD solution. So you
> > mean
> > that dri won't work there either?
> 
> kFreeBSD does have some drm infrastructure.

Ok. What is needed for Hurd to have drm infrastructure, kernel
support? 

Another question: libgbm1 now builds, and many packages build-depend
on libgbm-dev (and maybe other packages not yet available): mpv,
directfb, virglrenderer, ukvm, kmscube, etc. Are they also unusable
if/when built?

Are there any plans to work on mesa/drm for Hurd at all?

Thanks!




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
On Tue, 2020-11-17 at 14:57 +0100, Samuel Thibault wrote:
> Svante Signell, le mar. 17 nov. 2020 14:53:56 +0100, a ecrit:
> > > > Which of these (and xorg* packages) are needed?
> > > 
> > > Needed for what?
> > 
> > For testing if some of the dri/drv drivers work on Hurd: e.g. r200,
> > r300 etc.
> 
> dri cannot work. Your changes in libdrm only introduced some stub
> interface. An actual drm implementation is needed to get any kind of
> direct rendering working.

The changes to libdrm are similar to the kFreeBSD solution. So you mean
that dri won't work there either?

Thanks!




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 15:22:02 +0100, a ecrit:
> On Tue, 2020-11-17 at 14:57 +0100, Samuel Thibault wrote:
> > Svante Signell, le mar. 17 nov. 2020 14:53:56 +0100, a ecrit:
> > > > > Which of these (and xorg* packages) are needed?
> > > > 
> > > > Needed for what?
> > > 
> > > For testing if some of the dri/drv drivers work on Hurd: e.g. r200,
> > > r300 etc.
> > 
> > dri cannot work. Your changes in libdrm only introduced some stub
> > interface. An actual drm implementation is needed to get any kind of
> > direct rendering working.
> 
> The changes to libdrm are similar to the kFreeBSD solution. So you mean
> that dri won't work there either?

kFreeBSD does have some drm infrastructure.

Samuel



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Samuel Thibault, le mar. 17 nov. 2020 14:57:06 +0100, a ecrit:
> The std and cirrus drivers work just fine with qemu.

(and yes, the cirrus driver has some dri capabilities. Apparently not
DRI2, but at least XFree86-DRI).

Samuel



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 14:53:56 +0100, a ecrit:
> On Tue, 2020-11-17 at 14:22 +0100, Samuel Thibault wrote:
> > Svante Signell, le mar. 17 nov. 2020 11:41:52 +0100, a ecrit:
> > > I managed to build more packages from mesa based on that libdrm is
> > > now available. Is there any way to test these packages with qemu?
> > 
> > Which packages? Those mentioned below in your mail? They don't depend
> > that much on the emulated hardware, and rather on the software being
> > used.
> 
Testing xorg with other drivers than the default?

But again, you need hardware (virtual or physical) to run these drivers.

> > > qemu-system-x86_64 --help shows
> > > -vga [std|cirrus|vmware|qxl|xenfb|tcx|cg3|virtio|none]
> > > select video card type
> > 
> > Yes, that's basically all. Apart from the virtual devices meant for
> > virtualization, qemu doesn't propose many virtual cards.
> 
> Not emulating any of the radeon/nvidia/etc graphics cards then. So Hurd
> has to run on real hardware?

? No
The std and cirrus drivers work just fine with qemu.

> Maybe that will become possible when rumpdisk is working properly!?

That has nothing to do at all with disks.

> > > Which of these (and xorg* packages) are needed?
> > 
> > Needed for what?
> 
> For testing if some of the dri/drv drivers work on Hurd: e.g. r200,
> r300 etc.

dri cannot work. Your changes in libdrm only introduced some stub
interface. An actual drm implementation is needed to get any kind of direct
rendering working.

> > (In general, relying on gettid() to provide non-reusable thread ids
> > is nonsense: tids do wrap around as well).
> 
> So you mean that all calls to gettid() can be replaced with
> pthread_self() for non-linux systems?

Even for Linux systems that should just work well enough.
And theoretically no worse than gettid(), actually.
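
For what it's worth, such a replacement could look roughly like the minimal
sketch below. This is only an illustration, not libva's actual code: the
helper name va_trace_thread_id() is invented here, and it assumes that
va_trace.c only needs a stable per-thread tag for its trace output rather
than a real kernel tid.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#if defined(__linux__)
# include <unistd.h>
# include <sys/syscall.h>
#endif

/* Return an integer identifying the calling thread, good enough for
   tagging trace lines.  */
static uintptr_t
va_trace_thread_id (void)
{
#if defined(__linux__)
  /* On Linux the kernel tid is available, so keep using it there.  */
  return (uintptr_t) syscall (SYS_gettid);
#else
  /* Elsewhere (GNU/Hurd, GNU/kFreeBSD, ...), pthread_self() already
     distinguishes the threads of one process, which is all a trace tag
     needs.  pthread_t is opaque, so copy its bits into an integer
     instead of casting it.  */
  pthread_t self = pthread_self ();
  uintptr_t id = 0;
  memcpy (&id, &self, sizeof id < sizeof self ? sizeof id : sizeof self);
  return id;
#endif
}

int
main (void)
{
  printf ("thread id tag: %#lx\n", (unsigned long) va_trace_thread_id ());
  return 0;
}

As noted above, ids obtained this way can be reused after a thread exits,
but that is no worse than what gettid() guarantees anyway.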

Samuel



Re: Broken stack traces on crashed programs

2020-11-17 Thread Ludovic Courtès
Hi!

Samuel Thibault  skribis:

> Ludovic Courtès, le mar. 17 nov. 2020 10:57:43 +0100, a ecrit:
>> I’ve noticed that I’d always get “broken” stack traces in GDB when (1)
>> attaching to a program suspended by /servers/crash-suspend, (2)
>> examining a core dump, or (3) spawning a program in GDB and examining it
>> after it’s received an unhandled signal like SIGILL.
>> 
>> At best I can see the backtrace of the msg thread, the other ones are
>> all question-marky:
>
> Silly question, but still important to ask: did you build with -g?

Yes.

I get pretty back traces until the program gets an unhandled signal,
AFAICT.  This makes me think it could have something to do with how GDB
obtains thread state info for suspended threads.

> (meaning: no, I don't have such kind of issue with gdb 9.2 in debian)

(Same with GDB 10.1.)

It could be that we’re missing a libc patch that Debian has, or (more
likely) that we’re miscompiling something on the way (this is all
cross-compiled from x86_64-linux-gnu).

To be continued…

Ludo’.



Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
On Tue, 2020-11-17 at 14:22 +0100, Samuel Thibault wrote:
> Svante Signell, le mar. 17 nov. 2020 11:41:52 +0100, a ecrit:
> > I managed to build more packages from mesa based on that libdrm is
> > now available. Is there any way to test these packages with qemu?
> 
> Which packages? Those mentioned below in your mail? They don't depend
> that much on the emulated hardware, and rather on the software being
> used.

Testing xorg with other drivers than the default?

> > qemu-system-x86_64 --help shows
> > -vga [std|cirrus|vmware|qxl|xenfb|tcx|cg3|virtio|none]
> > select video card type
> 
> Yes, that's basically all. Apart from the virtual devices meant for
> virtualization, qemu doesn't propose many virtual cards.

Not emulating any of the radeon/nvidia/etc graphics cards then. So Hurd
has to run on real hardware? Maybe that will become possible when rumpdisk
is working properly!?

> > Which of these (and xorg* packages) are needed?
> 
> Needed for what?

For testing if some of the dri/drv drivers work on Hurd: e.g. r200,
r300 etc.

> > I had to make a dirty non-linux patch for libva's va/va_trace.c
> > since gettid() or syscall(__NR_gettid) is not available on GNU/Hurd
> > (or GNU/kFreeBSD). Any ideas on how to make a workaround?
> > pthread_self() does not seem to work: see
> > http://clanguagestuff.blogspot.com/2013/08/gettid-vs-pthreadself.html
> 
> "work" wholy depends what it is used for. gettid is a very linuxish
> thing. What I read from va/va_trace.c looks like it could be very
> fine with using pthread_self().
> 
> (In general, relying on gettid() to provide non-reusable thread ids
> is nonsense: tids do wrap around as well).

So you mean that all calls to gettid() can be replaced with
pthread_self() for non-linux systems?

Thanks!




Re: Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Samuel Thibault
Svante Signell, le mar. 17 nov. 2020 11:41:52 +0100, a ecrit:
> I managed to build more packages from mesa based on that libdrm is now
> available. Is there any way to test these packages with qemu?

Which packages? Those mentioned below in your mail? They don't depend
that much on the emulated hardware, and rather on the software being
used.

> qemu-system-x86_64 --help shows
> -vga [std|cirrus|vmware|qxl|xenfb|tcx|cg3|virtio|none]
> select video card type

Yes, that's basically all. Apart from the virtual devices meant for
virtualization, qemu doesn't propose many virtual cards.

> Which of these (and xorg* packages) are needed?

Needed for what?

> I had to make a dirty non-linux patch for libva's va/va_trace.c since
> gettid() or syscall(__NR_gettid) is not available on GNU/Hurd (or
> GNU/kFreeBSD). Any ideas on how to make a workaround? pthread_self()
> does not seem to work: see
> http://clanguagestuff.blogspot.com/2013/08/gettid-vs-pthreadself.html

"work" wholy depends what it is used for. gettid is a very linuxish
thing. What I read from va/va_trace.c looks like it could be very fine
with using pthread_self().

(In general, relying on gettid() to provide non-reusable thread ids is
nonsense: tids do wrap around as well).

Samuel



Re: Broken stack traces on crashed programs

2020-11-17 Thread Samuel Thibault
Ludovic Courtès, le mar. 17 nov. 2020 10:57:43 +0100, a ecrit:
> I’ve noticed that I’d always get “broken” stack traces in GDB when (1)
> attaching to a program suspended by /servers/crash-suspend, (2)
> examining a core dump, or (3) spawning a program in GDB and examining it
> after it’s received an unhandled signal like SIGILL.
> 
> At best I can see the backtrace of the msg thread, the other ones are
> all question-marky:

Silly question, but still important to ask: did you build with -g?

(meaning: no, I don't have such kind of issue with gdb 9.2 in debian)

Samuel



Testing direct rendering/more video cards with qemu?

2020-11-17 Thread Svante Signell
Hello,

I managed to build more packages from mesa based on that libdrm is now
available. Is there any way to test these packages with qemu?

qemu-system-x86_64 --help shows
-vga [std|cirrus|vmware|qxl|xenfb|tcx|cg3|virtio|none]
select video card type

(or is it only possible with real hardware?)

Which of these (and xorg* packages) are needed?
(dbgsym packages excluded)
libegl1-mesa_20.2.2-1.1_hurd-i386.deb
libegl1-mesa-dev_20.2.2-1.1_hurd-i386.deb
libegl-mesa0_20.2.2-1.1_hurd-i386.deb
libgbm1_20.2.2-1.1_hurd-i386.deb
libgbm-dev_20.2.2-1.1_hurd-i386.deb
libgl1-mesa-dev_20.2.2-1.1_hurd-i386.deb
libgl1-mesa-dri_20.2.2-1.1_hurd-i386.deb
libgl1-mesa-glx_20.2.2-1.1_hurd-i386.deb
libglapi-mesa_20.2.2-1.1_hurd-i386.deb
libgles2-mesa_20.2.2-1.1_hurd-i386.deb
libgles2-mesa-dev_20.2.2-1.1_hurd-i386.deb
libglx-mesa0_20.2.2-1.1_hurd-i386.deb
libosmesa6_20.2.2-1.1_hurd-i386.deb
libosmesa6-dev_20.2.2-1.1_hurd-i386.deb
mesa-common-dev_20.2.2-1.1_hurd-i386.deb
mesa-va-drivers_20.2.2-1.1_hurd-i386.deb
mesa-vdpau-drivers_20.2.2-1.1_hurd-i386.deb

I had to make a dirty non-linux patch for libva's va/va_trace.c since
gettid() or syscall(__NR_gettid) is not available on GNU/Hurd (or
GNU/kFreeBSD). Any ideas on how to make a workaround? pthread_self()
does not seem to work: see
http://clanguagestuff.blogspot.com/2013/08/gettid-vs-pthreadself.html

Thanks!





Re: pci-arbiter + rumpdisk

2020-11-17 Thread Damien Zammit
Somehow I was able to boot / via rumpdisk and then the arbiter
still worked afterwards, so networking via netdde started working.
This is the first time I've had a rumpdisk / with network access!

Alas, I cannot seem to make it work via the arbiter though.
My latest attempt looks like the arbiter started but could not give
rumpdisk access to the I/O ports:

start pci-arbiter: PCI start
PCI machdev start
Hurd bootstrap pci Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016
The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.

NetBSD 7.99.34 (RUMP-ROAST)
total memory = unlimited (host limit)
timecounter: Timecounters tick every 10.000 msec
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
cpu0 at thinair0: rump virtual cpu
root file system type: rumpfs
kern.module.path=/stand/i386/7.99.34/modules
mainbus0 (root)
pci: I/O space init error 22, I/O space not available
pci0 at mainbus0 bus 0
pci0: memory space enabled, rd/line, rd/mult, wr/inv ok

Damien



Broken stack traces on crashed programs

2020-11-17 Thread Ludovic Courtès
Hello!

I’ve noticed that I’d always get “broken” stack traces in GDB when (1)
attaching to a program suspended by /servers/crash-suspend, (2)
examining a core dump, or (3) spawning a program in GDB and examining it
after it’s received an unhandled signal like SIGILL.

At best I can see the backtrace of the msg thread, the other ones are
all question-marky:

--8<---cut here---start->8---
(gdb) thread 1
[Switching to thread 1 (process 310)]
#0  0x080f08c0 in ?? ()
(gdb) bt
#0  0x080f08c0 in ?? ()
#1  0x in ?? ()
(gdb) thread 2
[Switching to thread 2 (process 1)]
#0  0x0159282c in mach_msg_trap () at /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/mach/mach_msg_trap.S:2
2       /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/mach/mach_msg_trap.S: No such file or directory.
(gdb) bt
#0  0x0159282c in mach_msg_trap () at /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/mach/mach_msg_trap.S:2
#1  0x01592f2a in __GI___mach_msg (msg=0x2802aa0, option=3, send_size=96, rcv_size=32, rcv_name=109, timeout=0, notify=0) at msg.c:111
#2  0x017dc8ab in __crash_dump_task (crashserver=132, task=1, file=133, signo=11, sigcode=2, sigerror=2, exc=1, code=2, subcode=210986494, cttyid_port=102, cttyid_portPoly=19)
    at /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/hurd/RPC_crash_dump_task.c:254
#3  0x015b248c in write_corefile (detail=, signo=) at hurdsig.c:296
#4  post_signal (untraced=) at hurdsig.c:947
#5  0x015b274b in _hurd_internal_post_signal (ss=0x1800808, signo=11, detail=0x2802e5c, reply_port=0, reply_port_type=17, untraced=0) at hurdsig.c:1235
#6  0x015b3fc1 in _S_catch_exception_raise (port=96, thread=39, task=1, exception=1, code=2, subcode=210986494) at catch-exc.c:88
#7  0x017c09b4 in _Xexception_raise (InHeadP=0x2802f20, OutHeadP=0x2803f30) at /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/mach/mach/exc_server.c:155
#8  0x017c0a52 in _S_exc_server (InHeadP=0x2802f20, OutHeadP=0x2803f30) at /tmp/guix-build-glibc-cross-i586-pc-gnu-2.31.drv-0/build/mach/mach/exc_server.c:208
#9  0x015a7a09 in msgport_server (inp=0x2802f20, outp=0x2803f30) at msgportdemux.c:49
#10 0x015934c3 in __mach_msg_server_timeout (demux=0x15a79b0 , max_size=4096, rcv_name=96, option=0, timeout=0) at msgserver.c:108
#11 0x01593607 in __mach_msg_server (demux=0x15a79b0 , max_size=4096, rcv_name=96) at msgserver.c:195
#12 0x015a7a86 in _hurd_msgport_receive () at msgportdemux.c:67
#13 0x011eda50 in entry_point (self=0x804ac20, start_routine=0x15a7a30 <_hurd_msgport_receive>, arg=0x0) at pt-create.c:62
#14 0x in ?? ()
--8<---cut here---end--->8---

(This is on Guix System with GDB 9.2.)

Does that ring a bell?

Thanks,
Ludo’.