Re: [PATCH build] Implement auto-clone.

2010-09-18 Thread Michael Cree

On 19/09/10 15:05, Trevor Woerner wrote:

From: Trevor Woerner

If the module/component to be processed doesn't exist, assume it
needs to be cloned from the repository first. Removes the need for
an explicit clone command.


I would prefer the build script to not clone by default.  I only have 
copies of the drivers I am actually interested in, for example, and 
having the script skip to the next module when one is missing is nice.


Cheers
Michael.
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH synaptics 1/2] Added "friction physics" so coasting can stop on its own.

2010-08-19 Thread Michael Cree

On 20/08/2010, at 1:38 PM, Chris Bagwell wrote:

On Thu, Aug 19, 2010 at 8:13 PM, Peter Hutterer
 wrote:


+.BI "Option \*qCoastingFriction\*q \*q" float \*q
+Number of scrolls per second per second to decrease the coasting  
speed.  Default


  ^ typo?


Presumably the friction is being stated as an acceleration, so how  
about "scrolls per second squared" or maybe "scrolls per squared  
second" since some argue that the former is ambiguous as to what the  
square applies to.  Or maybe it should be "(scrolls per second) per  
second" to emphasise that is the rate at which speed is decreased...


Cheers
Michael.

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: libpciaccess problem with new IO interface

2010-02-18 Thread Michael Cree
On 16/02/10 05:57, Adam Jackson wrote:
> On Sat, 2010-02-13 at 16:23 +1300, Michael Cree wrote:
>> The trouble with this implementation is that reading and writing the PCI
>> resource files in the Linux sysfs does not appear to be supported by the
>> kernel.  Only memory mapping the resource files is supported.
>
> It depends on the architecture and the resource type.
>
> On x86, for I/O resources, only read/write are supported.
> On x86, for memory resources, only mmap() is currently supported.

Thanks for that. That explains quite a bit.  Maybe I should step back to 
explain what I am trying to solve which, in short, is to get the Xserver 
going on old Alpha architectures that cannot do byte and word accesses 
to I/O ports via dense memory.  There's a special sparse mapping to 
achieve that.

Now a specific example of that problem: The ati radeon video driver 
memory maps the PCI resource at bar 2.  It's a memory resource however 
the way the radeon driver uses it it is quite clear that it is in fact 
I/O ports.  Writing to a location in that memory resource, for example, 
does not guarantee a read will return the same value that was just 
written.  The radeon driver memory maps the resource using 
pci_device_map_range() if libcpiaccess is available otherwise the old 
xf86MapPciMem() interface is used. Then it uses the Xserver's 
MMIO_IN/OUT routines defined in compiler.h for all accesses to the 
memory mapped resource.

In the Xserver's xf86MapPciMem() routine on older Alpha architecture it 
does two memory mappings - a dense one and a sparse one.  The sparse map 
lands at a fixed offset in memory from the dense one, and the 
MMIO_IN/OUT routines use the dense mapping for longword or quadword 
accesses, and the sparse mapping for word and byte accesses.  The fixed 
offset from the dense mapping to the sparse mapping is hard programmed 
into the MMIO_IN/OUT routines so only the address of the dense mapping 
is passed to them.

The libpciaccess's pci_device_map_range() routine doesn't do the dense 
and sparse mappings. As a test I modified it so it would do both 
mappings, and return only the dense mapping address to the caller. 
Unfortunately, the sparse mapping done via the Linux sysfs is at a 
different location to that which occurs with the old xf86MapPciMem() 
interface.  Hence when the radeon driver uses the old MMIO_IN/OUT 
routines on the memory map it fails because there is nothing mapped at 
the location where they expect to find the sparse map.

I wondered whether I could use the new pci_device_open_io() routines in 
libpciaccess, but with my testing, and your description of the PCI 
resources on the x86, it is clear that this is not going to work.  The 
resource at bar 2 on a radeon card, despite being used as "I/O ports", 
is in fact a memory resource, and has to be memory mapped.

It's easy enough to get pci_device_map_range() to map both the dense and 
the sparse mapping on the older Alphas (I have already implemented 
that).  The problem is getting the location of the sparse map, in 
addition to the location of the dense map, returned back to the calling 
video driver as pci_device_map_range() returns only one address and an 
error code.  Somehow the address of the sparse mapping has to be made 
available to the Xserver's MMIO_IN/OUT routines.  Any suggestions how?

One thought is to add an extra routine to libpciaccess that returns the 
address of the sparse mapping of a previously mapped resource and NULL 
if the sparce map doesn't exist.  And in the video drivers that matter 
for the older Alphas add, maybe within a #ifdef __alpha__ section, after 
the call to  pci_device_map_range(), a call to the new routine to get 
the sparse mapping address and pass it to the Xserver to use with the 
MMIO_IN/OUT routines.  Thus the extra work to libpciaccess is minimal, 
and the current interface is not broken in anyway, and only a couple or 
so extra lines of code need to be added to video drivers of interest. 
(Some minor modification to the Alpha specific code in the MMIO_IN/OUT 
routines will also be needed.)

Thoughts?  Other suggestions?

Cheers
Michael.
___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


libpciaccess problem with new IO interface

2010-02-12 Thread Michael Cree
I have been trying out the new IO interface routines 
(pci_device_open_io, etc.) in libpciaccess with a view that they might 
solve the problem of MMIO on idiosyncratic architectures such as the old 
Alphas.


In short, those routines do not work, at least not in the Linux sysfs 
implementation.


In that implementation libpciaccess opens the sysfs PCI resource file as 
a file then reads and writes (via pread and pwrite) are performed on the 
resource file for IO byte, word and longword access.


The trouble with this implementation is that reading and writing the PCI 
resource files in the Linux sysfs does not appear to be supported by the 
kernel.  Only memory mapping the resource files is supported.


If I modify the read I/O routines in the file linux-sysfs.c to report 
any IO errors as in:


static uint32_t
pci_device_linux_sysfs_read32(struct pci_io_handle *handle, uint32_t port)
{
uint32_t ret;

if (pread(handle->fd, &ret, 4, port) != 4) {
fprintf(stderr, "ERROR: pread failed %d (%s)\n",errno, strerror(errno));
}

return ret;
}

then error number 5 (Input/output error) is returned. Oh, In that 
routine I have also modified the offset passed to pread() to be port 
only as the offset should be from the start of the resource file, not 
the start of memory, methinks.


I test on amd64 architecture a Radeon RV710 card, in particular the 
resource at bar 2 which is a 65535 byte region of IO ports. 
Interestingly the PCI config reports this resource as memory but it is 
obviously IO ports by the way the radeon video driver uses it.  So for 
my tests I modified pci_device_open_io() in libpciaccess to allow access 
to PCI resources indicated as memory.


I attach my test code.  I run it as so:

./test-libpci 2:00.0 2 7200 ff

where 2:00.0 is the Radeon card, 2 is the bar, and it will access ports 
at 0x7200 through to 0x72ff (which might be recognised by some to be the 
default range the radeonhd project's rhd_dump utility dumps).  The 
utility prints that memory range using the  pci_device_map_range() and 
direct memory access first, then prints it out using 
pci_device_open_io() and pci_io_read32().  Output is:


Found bar 2 at 0xfdde of size 65536.

7200:0000
7210:0000
7220:0000
7230:00  65a20000
7240:0000
7250:0000
7260:0000
7270:0000
7280:0000
7290:0400
72a0:0000
72b0:000   40
72c0:0000
72d0:0000
72e0:0000
72f0:0000

ERROR: pread failed 5 (Input/output error)
ERROR: pread failed 5 (Input/output error)
ERROR: pread failed 5 (Input/output error)
ERROR: pread failed 5 (Input/output error)
7200: 7f7a 7f7a 7f7a 7f7a

And so on with more pread errors.

Cheers
Michael.

/* 
   For testing libpciaccess routines
*/

#include 
#include 
#include 
#include 

#include 


int main(int argc, char *argv[]) 
{
struct pci_device *device = NULL;
void *io;
int bus, dev, func;
int bar;
unsigned int start, size;
int j;

if (argc != 5) {
	printf("Usage: test-libpci bus:dev.func bar start size\n");
	printf("WARNING: be careful what you use this on!\n");
	return 1;
}

j = sscanf(argv[1], "%x:%x.%x", &bus, &dev, &func);
if (j != 3) {
	j = sscanf(argv[1], "%x:%x:%x", &bus, &dev, &func);
	if (j != 3) {
	j = sscanf(argv[1], "%d:%d.%d", &bus, &dev, &func);
	if (j != 3) {
		j = sscanf(argv[1], "%d:%d:%d", &bus, &dev, &func);
	}
	}
}
if (j != 3) {
	fprintf(stderr, "ERROR: Can't parse pci tag: %s\n", argv[1]);
	return 1;
}

bar = -1;
sscanf(argv[2], "%d", &bar);
if (bar < 0 || bar > 5) {
	fprintf(stderr, "ERROR: Invalid bar: %s\n", argv[2]);
	return 1;
}

start = -1;
sscanf(argv[3], "%x", &start);
if (start < 0) {
	fprintf(stderr, "ERROR: Invalid start value: %s\n", argv[3]);
	return 1;
}

size = -1;
sscanf(argv[4], "%x", &size);
if (size < 0) {
	fprintf(stderr, "ERROR: Invalid size value: %s\n", argv[4]);
	return 1;
}

if ((j = pci_system_init())) {
fprintf(stderr, "ERROR: pciaccess failed to initialise PCI bus"
" (error %d)\n", j);
return 1;
}

if ((device = pci_device_find_by_slot(0, bus, dev, func)) == NULL) {
	fprintf(stderr, "ERROR: Unable to find PCI device at %02X:%02X.%02X.\n",
		bus, dev, func);
	return 1;
}

pci_device_probe(device);

if (device->regions[bar].base_addr == 0) {
	fprintf(stderr, "ERROR: Failed to find required resource on PCI card.\n");
return 1;
}

printf("Found bar %d at %p of 

Re-implementing Alpha sparse I/O access in libpciaccess/Xserver

2010-02-04 Thread Michael Cree
With the shift of PCI code to libpciaccess and the removal of Jensen 
support and other Alpha cruft from the Xserver the code to support 
Sparse I/O mapping on non-BWX capable Alphas is broken.  It appears 
there is some interest in getting the sparse I/O mapping working again, 
see: http://bugs.freedesktop.org/show_bug.cgi?id=25678

What I am wondering about is the best way to implement this as it 
requires some interaction between libpciaccess and the Xserver's 
compiler.h MMIO_INx and MMIO_OUTx routines.

Drivers call pci_device_map_range() to memory map the PCI resource file. 
If we wish to keep that interface for the drivers then on a non-BWX 
Alpha this routine should map the resourceX_dense resource file, and if 
it is an I/O bar also map the resourceX_sparse resource file.  It is 
safe to return the address of the dense mapping only as the sparse 
mapping address is easily calculated from the dense mapping address.

But the problem is then communicating to the MMIO_INx/MMIO_OUTx routines 
that they should use the sparse memory map for byte and word accesses.

In the old Xserver all this was setup in the PCI initialisation code but 
now the PCI setup is shifted to libpciaccess and the Alpha specific code 
was left hanging and uncalled.  I think Matt removed much of it when I 
pointed out to him that it was no longer being called from anywhere.

My thought at the moment is to modify libpciaccess's 
pci_device_linux_sysfs_map_range() (and unmap routine) on the failure to 
map resourceX to try to map resourceX_dense and if it is an I/O bar to 
map resourceX_sparse as well.  This could be enclosed with #ifdef ALPHA 
type conditionals if desired.  I suspect this might require a change (an 
addition of a field) to the private part of struct pci_device but 
involves no change to the API.

On the Xserver side, in some initialisation routine, when detecting the 
system type, if it is an non-BWX alpha then overtake the 
MMIO_INx/MMIO_OUTx routines in compiler.h with ones that use the sparse 
mapping.

There is a disconnect here -- libpciaccess is detecting an Alpha system 
by the presence of resourceX_sparse/dense files in sysfs.  The Xserver 
is doing it by more fundamental means (it will have to talk to the 
kernel and/or procfs to determine the Alpha system type, etc.).  Maybe 
an opportunity for one to be out of sync with the other.

Also this implementation will only work with Linux sysfs.

Any thoughts or suggestions as to whether this is on the right track?

Cheers
Michael.
___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: -fno-strict-aliasing in CWARNFLAGS?

2010-02-03 Thread Michael Cree
On 04/02/10 14:28, Soeren Sandmann wrote:
> Michael Cree  writes:
>
>> What I do see is that the variables a, r, g and b are essentially
>> declared unsigned char (what I presume uint8_t is typedefed to) and a
>> calculation is performed that will lose its intended result due to
>> shifting an unsigned char more bits to the left than is available in
>> the unsigned char.
>
> The variables are promoted to int before the shift takes place, so the
> code works fine, apart from the aliasing issue.

Yeah, I remembered that once I thought about it a bit more after hitting 
the send button...  oops :-/

I once knew all this stuff intimately; I could've even written out a 
complete operator precedence table from memory!  Having only programmed 
in C on occasion over the last 12 years I now realise that some of that 
knowledge is getting a little hazy.

While I unfortunately polluted the thread with my misguided ramblings I 
have nevertheless found this discussion very useful.

Cheers
Michael.
___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: -fno-strict-aliasing in CWARNFLAGS?

2010-02-03 Thread Michael Cree
On 04/02/10 09:17, Peter Harris wrote:
> On 2010-02-03 15:02, Michael Cree wrote:
>> On 04/02/10 07:55, Soeren Sandmann wrote:
>>>
>>> I recently turned it on in pixman because completely reasonable code
>>> like this:
>>>
>>>   void
>>>   pixman_contract (uint32_t *  dst,
>>>const uint64_t *src,
>>>int width)
>>>   {
>>>   int i;
>>>
>>>   /* Start at the beginning so that we can do the contraction in
>>>* place when src == dst
>>>*/
>
>>> is actually illegal under the C aliasing rules, and GCC can and will
>>> break it unless you use -fno-strict-aliasing.
>>
>> I'm confused.  Why does this break the aliasing rules?
>
> If *dst and *src point to (alias) the same memory, it breaks the rules
> since they are different types.

Thanks, yes, it's now obvious.

Looking back at the code I now see that I completely missed the comment 
that says it is possible that src == dest.

When I read code and see two different pointer types in the functions 
arguments I naturally assume that it is _intended_ that they are two 
different areas of memory.  I always program like that and I guess that 
programming practice comes from once knowing the C standard well many 
years ago.

Cheers
Michael.
___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: -fno-strict-aliasing in CWARNFLAGS?

2010-02-03 Thread Michael Cree
On 04/02/10 07:55, Soeren Sandmann wrote:
>
> I recently turned it on in pixman because completely reasonable code
> like this:
>
>  void
>  pixman_contract (uint32_t *  dst,
>   const uint64_t *src,
>   int width)
>  {
>  int i;
>
>  /* Start at the beginning so that we can do the contraction in
>   * place when src == dst
>   */
>  for (i = 0; i<  width; i++)
>  {
>  const uint8_t a = src[i]>>  56,
>r = src[i]>>  40,
>g = src[i]>>  24,
>b = src[i]>>  8;
>
>  dst[i] = a<<  24 | r<<  16 | g<<  8 | b;
>  }
>  }
>
> is actually illegal under the C aliasing rules, and GCC can and will
> break it unless you use -fno-strict-aliasing.

I'm confused.  Why does this break the aliasing rules?

What I do see is that the variables a, r, g and b are essentially 
declared unsigned char (what I presume uint8_t is typedefed to) and a 
calculation is performed that will lose its intended result due to 
shifting an unsigned char more bits to the left than is available in the 
unsigned char.  I doubt that this code works as intended whether 
-fno-strict-aliasing is defined or not!

Cheers
Michael.
___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH app-xfs 2/3] doc: remove pdf target for developer's doc

2010-01-21 Thread Michael Cree
On 21/01/2010, at 11:16 AM, Rémi Cardona wrote:
> Le 20/01/2010 21:41, Michael Cree a écrit :
>> Hope you don't mind a question from a user who is running xfs:  What
>> replaces xfs?  Should we now be running some other application for
>> font serving?
>
> Please see this thread [1] in Gentoo's bugzilla for more info, I'm  
> sure
> your questions have all been asked there.

Thanks, that is very helpful

The problem is that we are running a modern application, namely the  
commercial package Mathematica, that would appear to require server  
side font rendering.  It comes with its own set of maths fonts that  
are installed with the application on a high powered server that is  
behind locked doors.  Users access it with any of Xwin32 on MS Windows  
PCs (the most likely), thin clients, X on a Mac, or a Linux desktop.   
Mathematica crashes or locks up if the X-server doesn't have access to  
the maths fonts (ain't that nice!).  The easiest solution up to now  
has been to run xfs, except for X on Mac OS X for which we have to  
install the fonts on every user's Mac, which is problematic because we  
can't predict who might try to run Mathematica and abandons it without  
informing IT support when it does not work.

I guess the solution is:

1) Write to Wolfram to advise them to fix their program.

2) When xfs is finally gone, and assuming Wolfram hasn't got its A  
into G, install the maths fonts on every possible machine that someone  
might be connecting from.  A PITA, if I may say, when xfs was such an  
efficient and easy solution (from the point of view of IT  
administration).

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH app-xfs 2/3] doc: remove pdf target for developer's doc

2010-01-20 Thread Michael Cree
On 21/01/2010, at 8:21 AM, Julien Cristau wrote:
> On Wed, Jan 20, 2010 at 14:10:45 -0500, Gaetan Nadon wrote:
>
>> The folks at Gentoo were not:
>> http://bugs.gentoo.org/show_bug.cgi?id=274925#c12
>> It's not in my distro, so I presume Debian were not able either.
>
> I'm not sure we've tried.  xfs is unmaintained in debian, and I'd like
> to see it disappear

Hope you don't mind a question from a user who is running xfs:  What  
replaces xfs?  Should we now be running some other application for  
font serving?

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH] [alpha] Remove unused code, _dense_* functions, iobase stuff

2009-11-15 Thread Michael Cree
On 9/11/2009, at 4:39 PM, Matt Turner wrote:
>> I'm happy to test this, but it may be a couple of days before I can  
>> do
>> so.
>
> When do you get a chance, you'll want to test the rebased (attached)
> patch instead.
>
> Thanks a lot,
> Matt
> <0001-alpha-Use-glibc-s-in-out-routines.patch>

Tested-by: Michael Cree 

Sorry, it took much longer than I originally said to get around to  
testing this.  The rebased patch applies and compiles fine and I had  
the patched Xserver up and running.

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH] Add DEC Alpha sum_s16 fast path

2009-11-05 Thread Michael Cree
On 6/11/2009, at 11:25 AM, Matt Turner wrote:

> Lifted from Compaq's Compiler Writer's Guide for the Alpha 21264,
> appendix B.
>
> http://h18000.www1.hp.com/cpq-alphaserver/technology/literature/cmpwrgd.pdf
>
> Signed-off-by: Matt Turner 

While the code does not appear to use any EV6 (21264) specific  
instructions so should work on any Alpha it is carefully scheduled to  
be optimised for an EV6.  I wonder what performance gains are achieved  
on older Alphas?

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PATCH] [alpha] Remove unused code, _dense_* functions, iobase stuff

2009-11-01 Thread Michael Cree
On 2/11/2009, at 12:06 PM, keithp wrote:
> Excerpts from Mark Kettenis's message of Sun Nov 01 13:26:11 -0800  
> 2009:
>
>> Ah well, if you're not implying that this is now handled by
>> libpciaccess, and if dense support on Linux still works after this
>> change, this should be fine.
>
> Any chance I could get a 'Test-by:' here? I'm not exactly in a
> position to even build-test alpha at this point (my last alpha box at
> the CPU when the power supply died).

I'm happy to test this, but it may be a couple of days before I can do  
so.

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: [PULL] fixes and clean ups for alpha

2009-10-15 Thread Michael Cree
On 14/10/2009, at 2:24 PM, Matt Turner wrote:

> Matt Turner (3):
>  [alpha] don't return from void functions

Do you realise that the routines therein (i.e. _dense_outb(),  
_dense_inb(), etc.) are not currently called from anywhere in the  
Xserver?  At least, that is the conclusion I came to when examining  
the code relevant to Alpha.  The routine _alpha_iobase_query() changes  
_alpha_outb() and similar to be _dense_outb(), etc., if a dense I/O  
mapping is detected, but this routine used to be called from some of  
the old PCI detection software that was shifted out to libpciaccess,  
and, as far as I can tell, is now not called from anywhere.  So  
_alpha_outb() remains pointing at _outb() even on dense systems and  
the _dense_outb(), etc.,  routines are now unused.

The _dense_outb() routines just check whether the port is 0x or  
below; if so it calls _outb() else it does a direct memory access  
using the BWX instructions.  At the moment the Xserver code just calls  
_outb() without the check.  Whether that matters, or what the reason  
was for introducing the _dense_outb() routines in the first place, I  
don't know.  Maybe achieving some improved efficiency for dense systems?

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: Debugging xserver on Alpha

2009-10-12 Thread Michael Cree
On 6/10/2009, at 12:28 PM, Michael Cree wrote:
> On 6/10/2009, at 11:16 AM, Matt Turner wrote:
>> On Sat, Oct 3, 2009 at 7:05 PM, Michael Cree 
>>> I strongly suspect that the xserver is lost cycling around in the
>>> X86EMU_exec() routine and never exits it.
>>>
>>> Since the xserver 1.5 branch works on Alpha and the 1.6 branch
>>> doesn't, I
>>> did a diff of the code in the int10, x86emu, os-support/linux, etc.,
>>> directories to search for changes that might prove problematic on
>>> Alpha but
>>> I didn't spot anything that looked suspicious.
>>
>> Does the SIS card work with 1.5?
>
> I should really test the SIS card with 1.5 to completely verify and  
> eliminate any
> surprises.

Oh, surprise, the SIS card does _not_ work with xserver 1.5.  I am  
seeing the same 100% continuous forever CPU usage in the int10  
initialisation code as seen with later xserver versions.  Maybe the  
problem was introduced with the PCI rewrite between xserver 1.4 and 1.5?

It also means that the problem I saw with the Radeon HD2400 is  
probably not linked with the one seen with the SIS card (i.e. my  
working hypothesis for the last couple of months has been proved  
false).  I therefore have retried the Radeon HD2400 card and it is  
working with xserver 1.7. The kernel oops I reported with xserver  
1.6.3 appears fixed in 1.7.

Given that I now have the Radeon HD2400 working, which is what I  
really wanted, I am not going to explore the problem with the (rather  
old) SIS card any further.

I did quickly test a 24bit colour capable TGA card with the tga video  
driver.  The display appears with incorrect colours - they are a bit  
mixed up.  Actually this problem extends back quite a way, certainly  
back to xserver 1.4.2 as I remember it occurring with Debian Lenny.  
Unless some Alpha users announce they desperately need to use TGA  
cards I am not going to explore the problem any further.

In summary I have xserver 1.7 running/tested on the follow:

Working with Radeon 9200 (RV280) on Compaq Alpha XP1000.  DRI works too.

Working with Radeon HD2400 (RV610) with radeon (ati) driver on  
PWS600au.  Yet to test radeonhd driver and DRI.

Working with old (1997) Matrox card on DEC PWS600au.

Problems with old SIS card (known to work with Debian Lenny v1.4.2  
xserver but not xserver 1.5 and later) and v. old TGA card (problems  
with incorrect colours going back to at least xserver 1.4.2).

>> Maybe KMS is going to be the only way we can support R600+ cards on
>> Alpha. It would be interesting to see if you got the same results  
>> as I
>> did. (http://bugs.freedesktop.org/show_bug.cgi?id=23227)

I have installed a 2.6.31.3 kernel on the Compaq Alpha XP1000 with the  
Radeon 9200 card to test KMS.  The kernel comes up with the consoles  
in a 1680x1050 mode fine.  The xserver started with gdm and switched  
fine between the gdm login and the virtual consoles, but when I tried  
to log in to the Gnome desktop the system locked up; the screen went  
solid grey and I lost the keyboard, couldn't ssh in, and the console  
on the serial port locked up.  I got a response with ping. Once reset  
and rebooted I couldn't find anything in the logs that would help  
identify the problem.

I then tried running the xserver with xdm.  The xserver didn't  
complete initialisation and exited with a bus error, IIRC, in the  
fbBlt routine.  I can investigate and report on that one further if  
desired.

I was running drm git master with the --with-experimental-radeon-api  
configure option.  Have I done everything I should to properly enable  
KMS?

Cheerz
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: Debugging xserver on Alpha

2009-10-06 Thread Michael Cree

On 7/10/2009, at 2:12 PM, Matt Turner wrote:

I'm going to get my xserver repository working ASAP and get these
patches in it. I'll respond when I've got it set up.


OK, my repository is available at http://cgit.freedesktop.org/~mattst88/xserver/

(Gosh git it a lot easier to use than it used to be for me.)

I guess I'm just missing your _X_EXPORTs patch. Send it along when you
can and I'll apply it to my tree.


Follows below.

Cheers
Michael.


diff --git a/hw/xfree86/os-support/linux/lnx_axp.c 
b/hw/xfree86/os-support/linux/lnx_axp.c
index 8571c04..34129cc 100644
--- a/hw/xfree86/os-support/linux/lnx_axp.c
+++ b/hw/xfree86/os-support/linux/lnx_axp.c
@@ -125,12 +125,12 @@ extern unsigned int _dense_inb(unsigned long);
 extern unsigned int _dense_inw(unsigned long);
 extern unsigned int _dense_inl(unsigned long);
 
-void (*_alpha_outb)(char, unsigned long) = _outb;
-void (*_alpha_outw)(short, unsigned long) = _outw;
-void (*_alpha_outl)(int, unsigned long) = _outl;
-unsigned int (*_alpha_inb)(unsigned long) = _inb;
-unsigned int (*_alpha_inw)(unsigned long) = _inw;
-unsigned int (*_alpha_inl)(unsigned long) = _inl;
+_X_EXPORT void (*_alpha_outb)(char, unsigned long) = _outb;
+_X_EXPORT void (*_alpha_outw)(short, unsigned long) = _outw;
+_X_EXPORT void (*_alpha_outl)(int, unsigned long) = _outl;
+_X_EXPORT unsigned int (*_alpha_inb)(unsigned long) = _inb;
+_X_EXPORT unsigned int (*_alpha_inw)(unsigned long) = _inw;
+_X_EXPORT unsigned int (*_alpha_inl)(unsigned long) = _inl;
 
 static long _alpha_iobase_query(unsigned, int, int, int);
 long (*_iobase)(unsigned, int, int, int) = _alpha_iobase_query;


___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: Debugging xserver on Alpha

2009-10-06 Thread Michael Cree
On 6/10/2009, at 12:28 PM, Michael Cree wrote:
> On 6/10/2009, at 11:16 AM, Matt Turner wrote:
>> On Sat, Oct 3, 2009 at 7:05 PM, Michael Cree 
>> wrote:
>>>
>>> With commit c7680befe5ae on the xserver 1.7 branch only support for
>>> Alphas
>>> with sparse I/O remains.  I have already sent you and the list a
>>> patch that
>>> reenables the code path for Alphas with dense I/O mapping.
>>
>> I can't see anything in this commit that would break dense systems.  
>> It
>> just removed Jensen support (which is a _third_ memory mapping model,
>> i.e., not the same as sparse or dense). This commit was present in
>> xserver-1.5, which I can confirm works on alpha, so I don't think  
>> it's
>> got anything to do with the problems we've seen.

I can't find commit c7680befe5ae on the xserver-1.5-branch or in its  
ancestral commits so I am a little confused as to why you say it is  
present in xserver-1.5.

As part of the commit the functions xf86SlowBCopyFromBus() and  
xf86SlowBCopyToBus() are modified, e.g., part of the commit is:

  xf86SlowBCopyFromBus(unsigned char *src, unsigned char *dst, int  
count)
  {
-if (isJensen())
-{
-   unsigned long addr;
-   long result;
-
-   addr = (unsigned long) src;
-   while( count ){
-   result = *(volatile int *) addr;
-   result >>= ((addr>>SPARSE) & 3) * 8;
-   *dst++ = (unsigned char) (0xffUL & result);
-   addr += 1<>= ((addr>>SPARSE) & 3) * 8;
+   *dst++ = (unsigned char) (0xffUL & result);
+   addr += 1<  I should really test
> the SIS card with 1.5 to completely verify and eliminate any
> surprises.  I haven't compiled the 1.5 xserver for quite a while and I
> think I have lost the list of proto/lib/etc versions that was required
> for it.  Is there someway I can get all the proto and lib modules set
> to the correct versions for xserver 1.5 branch build efficiently?

OK, I have found that the xorg git supermodule has the versions of all  
modules necessary for the xserver 1.5 branch.  I will confirm within  
the next day or two that the SIS video card and the Radeon HD2400 card  
do indeed work on the xserver 1.5 branch.

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: Debugging xserver on Alpha

2009-10-05 Thread Michael Cree
On 6/10/2009, at 11:16 AM, Matt Turner wrote:
> On Sat, Oct 3, 2009 at 7:05 PM, Michael Cree   
> wrote:
>>
>> With commit c7680befe5ae on the xserver 1.7 branch only support for  
>> Alphas
>> with sparse I/O remains.  I have already sent you and the list a  
>> patch that
>> reenables the code path for Alphas with dense I/O mapping.
>
> I can't see anything in this commit that would break dense systems. It
> just removed Jensen support (which is a _third_ memory mapping model,
> i.e., not the same as sparse or dense). This commit was present in
> xserver-1.5, which I can confirm works on alpha, so I don't think it's
> got anything to do with the problems we've seen.

Hm.   I will have to check it again - I'm not at the computer with  
the xserver git so will have to wait a few hours.

>> I have tested on three Alphas (all have the BWX extension, hence a  
>> dense I/O
>> mapping model) and a number of (mostly older) video cards.  I am  
>> still
>> running a 2.6.30 variant kernel, hence haven't tested KMS.
>>
>> The xserver 1.7 branch with the two patches I mention above works  
>> on an
>> Compaq Alpha XP1000 (ev67 CPU) with a Radeon 9250 card.  At least I  
>> have a
>> gnome desktop opened on it but haven't done extensive usage testing.
>
> Excellent news! To clarify, this is with the _alpha_{in,out} functions
> marked with _X_EXPORT, and what was the other patch?

Yes, the alpha_in and out routines needed _X_EXPORT and the other  
patch is at http://article.gmane.org/gmane.comp.freedesktop.xorg.devel/2095

Without that second patch I get a segmentation error in one of the  
SlowBCopy routines.  It is because it executed the sparse copy code,  
thus the pointer for the copy is incremented by a far too big value  
for a dense system and eventually gets incremented out of valid memory  
space.

>> However, on the DEC Alpha PWS600au (an ev56 CPU), I am seeing  
>> lockups/kernel
>> oops with other video cards emanating from the vbe/int10 code.
>>
>> With a newer Radeon HD2400 I get a kernel oops (which I reported a  
>> couple or
>> so months ago to the linux kernel mail list) which appears to  
>> happen in the
>> int10 code.  Note that this card is not POSTed at boot; it is up to  
>> the
>> xserver to POST it.  This kernel oops was seen with the 1.6 xserver  
>> branch
>> and I haven't tested it since as I've put this video card aside as  
>> the video
>> cards I discuss below seem to offer a better chance of finding the  
>> problem
>> (and the kernel oops is nasty - it corrupted a disc partition on one
>> occasion).
>
> Maybe KMS is going to be the only way we can support R600+ cards on
> Alpha. It would be interesting to see if you got the same results as I
> did. (http://bugs.freedesktop.org/show_bug.cgi?id=23227)

Okay, I will need to compile a new kernel - I see that 2.6.31.2 is now  
out which apparently fixes some issues with the USB/serial port which  
I require.  What else do I need to do to enable KMS?  I don't see any  
configuration options in the xserver.

> (BTW: there's
> now a Radeon 4350 PCI card, so R700 on Alpha is possible. :)

Yes, I saw one in a local computer shop while browsing last weekend.   
I was tempted to get it but didn't.  Maybe I should allow myself to be  
tempted even more?

>> When I use an old 1997 Matrox card the xserver (1.7 branch) comes  
>> up fine..
>> An examination of the xf86-video-mga driver reveals that it doesn't  
>> load
>> int10 unless there is a request for that in xorg.conf.
>
> So is int10 needed to start X with the matrox?

No, it doesn't use int10 be default. The SRM successfully POSTs it and  
the Xserver comes up fine on it.

>> When I use an old Sis card (with the xf86-video-sis) driver the  
>> int10 code
>> is loaded but the xserver gets lost in the int10 initialisation  
>> code and
>> eats 100% cpu forever.  Connecting to the X process with gdb reveals
>> backtraces of the following nature:
>>
>> 0x026507e8 in inline_bwx_inb (addr=)
>>   at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
>> 359../sysdeps/unix/sysv/linux/alpha/ioperm.c: No such file or  
>> directory.
>>   in ../sysdeps/unix/sysv/linux/alpha/ioperm.c
>> (gdb) bt
>> #0  0x026507e8 in inline_bwx_inb (addr=)
>>   at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
>> #1  dense_inb (addr=)
>>   at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:444
>> #2  0x02650960 in _inb (port=2199033660378)
>>   at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:826
>> #3  0x000120135008 in

Re: checked sources of xorg-server 1.7.0 with static code analysis tool cppcheck

2009-10-05 Thread Michael Cree
On 6/10/2009, at 9:09 AM, Ping wrote:

> On Oct 3, 2009, at 03:53, Michel Dänzer wrote:
>
> On Fri, 2009-10-02 at 18:25 -0700, Jeremy Huddleston wrote:
> On Oct 2, 2009, at 16:17, Michel Dänzer wrote:
>
> On Fri, 2009-10-02 at 23:10 +0200, Martin Ettl wrote:
>
> diff --git a/exa/exa_classic.c b/exa/exa_classic.c
> index 1eff570..c9c7534 100644
> --- a/exa/exa_classic.c
> +++ b/exa/exa_classic.c
> @@ -144,14 +144,14 @@ Bool
> exaModifyPixmapHeader_classic(PixmapPtr pPixmap, int width, int
> height, int depth,
>int bitsPerPixel, int devKind, pointer pPixData)
> {
> +if (!pPixmap)
> +return FALSE;
> +
>   ScreenPtr pScreen = pPixmap->drawable.pScreen;
>   ExaScreenPrivPtr pExaScr;
>   ExaPixmapPrivPtr pExaPixmap;
>   Bool ret;
>
> -if (!pPixmap)
> -return FALSE;
> -
>   pExaScr = ExaGetScreenPriv(pScreen);
>   pExaPixmap = ExaGetPixmapPriv(pPixmap);
>
> [...]
> What purpose is that? If these functions were actually called with a
> NULL PixmapPtr, surely the current code would have crashed with a
> segmentation fault.
>
> My experience is "don't change it unless you know it is broken".   
> Gracefully returning a warning (FALSE) is better than making  
> yourself (xserver) look bad (crashing).

Be aware that the gcc compiler will remove the statement "if (! 
pPixmap) return FALSE;" in the original version if optimisation is  
turned on as the dereference of pPixmap to initialise pScreen implies  
that the pointer cannot be NULL.  Gcc takes the access via the pointer  
as a clear indication that the check for NULL is superfluous and  
optimises the test for NULL away.  The upshot is that without the  
patch applied, and using gcc with optimisation turned on, there is  
_no_ check whatsoever for the NULL pointer.

The person who said "If these functions were actually called with a  
NULL PixmapPtr, surely the current code would have crashed with  
segmentation fault" is only correct some of the time.  It is possible  
(particularly with Linux kernels more than a couple or so months old)  
to map memory at location 0 (which is typically the value of NULL)  
thus a segmentation violation would not occur.  Admittedly enabling  
memory maps at location 0 is not what the normal user does (with very  
few exceptions) but it is what a malicious user might do.

This coding practice (checking for NULL after dereferencing) appeared  
in the Linux kernel and the behaviour of gcc of optimising the check  
for NULL away formed a step in a published, and very ingenious, root  
exploit of the Linux kernel.

Cheers
Michael.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Debugging xserver on Alpha

2009-10-03 Thread Michael Cree
Matt and the xorg-devel maillist,

I think it is time to report where I am with debugging the Xserver on Alpha
and to ask for advice on how to proceed.

Using the xserver 1.7 branch I find that I must wrap _X_EXPORTs around
_alpha_inb et al. else the int10, etc., modules have undefined symbols and
won't load.

With commit c7680befe5ae on the xserver 1.7 branch only support for Alphas
with sparse I/O remains.  I have already sent you and the list a patch that
reenables the code path for Alphas with dense I/O mapping.

I have tested on three Alphas (all have the BWX extension, hence a dense I/O
mapping model) and a number of (mostly older) video cards.  I am still
running a 2.6.30 variant kernel, hence haven't tested KMS.

The xserver 1.7 branch with the two patches I mention above works on an
Compaq Alpha XP1000 (ev67 CPU) with a Radeon 9250 card.  At least I have a
gnome desktop opened on it but haven't done extensive usage testing.

However, on the DEC Alpha PWS600au (an ev56 CPU), I am seeing lockups/kernel
oops with other video cards emanating from the vbe/int10 code.

With a newer Radeon HD2400 I get a kernel oops (which I reported a couple or
so months ago to the linux kernel mail list) which appears to happen in the
int10 code.  Note that this card is not POSTed at boot; it is up to the
xserver to POST it.  This kernel oops was seen with the 1.6 xserver branch
and I haven't tested it since as I've put this video card aside as the video
cards I discuss below seem to offer a better chance of finding the problem
(and the kernel oops is nasty - it corrupted a disc partition on one
occasion).

When I use an old 1997 Matrox card the xserver (1.7 branch) comes up fine.
An examination of the xf86-video-mga driver reveals that it doesn't load
int10 unless there is a request for that in xorg.conf.

When I use an old Sis card (with the xf86-video-sis) driver the int10 code
is loaded but the xserver gets lost in the int10 initialisation code and
eats 100% cpu forever.  Connecting to the X process with gdb reveals
backtraces of the following nature:

0x026507e8 in inline_bwx_inb (addr=)
at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
359../sysdeps/unix/sysv/linux/alpha/ioperm.c: No such file or directory.
in ../sysdeps/unix/sysv/linux/alpha/ioperm.c
(gdb) bt
#0  0x026507e8 in inline_bwx_inb (addr=)
at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
#1  dense_inb (addr=)
at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:444
#2  0x02650960 in _inb (port=2199033660378)
at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:826
#3  0x000120135008 in _dense_inb (port=2199033660378) at lnx_ev56.c:124
#4  0x02a2a930 in inb (port=986)
at ../../../hw/xfree86/common/compiler.h:344
#5  x_inb (port=986) at helper_exec.c:333
#6  0x02a34cbc in x86emuOp_in_byte_AL_DX (op1=)
at ./../x86emu/ops.c:9737
#7  0x02a45158 in X86EMU_exec () at ./../x86emu/decode.c:122
#8  0x02a2d5f8 in xf86ExecX86int10 (pInt=0x12024e550)
at xf86x86emu.c:40
#9  0x02a2e8a8 in xf86ExtendedInitInt10 (entityIndex=0,
Flags=) at generic.c:285
#10 0x02a10410 in VBEExtendedInit (pInt=0x0, entityIndex=0, Flags=3)
at vbe.c:68
#11 0x029881b8 in SiS_LoadInitVBE (pScrn=0x120248870)
at sis_driver.c:2828
#12 0x0298d504 in SISPreInit (pScrn=0x120248870,
flags=) at sis_driver.c:5996
#13 0x00012008bef0 in InitOutput (pScreenInfo=0x12022c758, argc=4,
argv=0x11fd45738) at xf86Init.c:817
#14 0x000120024da0 in main (argc=4, argv=0x11fd45738, envp=0x11fd45760)
at main.c:204

I strongly suspect that the xserver is lost cycling around in the
X86EMU_exec() routine and never exits it.

Since the xserver 1.5 branch works on Alpha and the 1.6 branch doesn't, I
did a diff of the code in the int10, x86emu, os-support/linux, etc.,
directories to search for changes that might prove problematic on Alpha but
I didn't spot anything that looked suspicious.

I see the emu86 code has debugging and disassembling capabilities.  I tried
setting DEBUG in the Makefile in the x86emu directory but discovered I will
probably have to insert a call to X86EMU_trace_on() before any debugging
output will occur.

Am I on the right track here?  How does one go about debugging the int10 and
vbe code?  Since an examination of code changes between a working xserver
and a broken xserver has failed to highlight anything, and the problem seems
to be somewhere in the x86 emulation is the best approach to enable the x86
emulator debugging modes and track exactly what it is doing?  Is there
someone who is happy to give me a bit of guidance on doing this?  I am quite
unfamiliar with the x86 emulator, and while I have programmed in various
assembler languages in the past I am unfamiliar with Intel x86.

Cheers
Michael Cree.

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


Re: fix for xf86-video-vesa with xserver 1.7

2009-09-29 Thread Michael Cree
Matthieu Herrb pronounced:
> I need those 2 fixes to run the vesa driver with xserver 1.7. ok?
> 
> commit 9829de7a1b2a9734d20a239d3ed84a73ddaf70f1
> Author: Matthieu Herrb 
> Date:   Mon Sep 28 23:00:27 2009 +0200
> 
> fix vesa for xserver 1.7 branch.
> 
> - convert slowbcopy_frombus() to xf86SlowBcopy().
> - add missing shadowRemove() in VESACloseScreen().

Using xf86SlowBCopy() directly will break the Vesa driver on the Alpha
architecture, well..., at least on those older Alphas that have a sparse
I/O memory map model.

I see the compiler.h, etc.,  commit that broke the Vesa driver has been
reverted, at least on the 1.7 branch, so I presume the conversion of
slowbcopy_frombus() to xf86SlowBCopy() shouldn't be necessary anymore.

My preference (which probably carries no weight since my prior
contribution to the Xorg project is almost zilch) would be the patch
that I just mailed to the list under the subject "[patch] alpha: Fix
SlowBCopy ..." which, I think, should fix the slowbcopy_frombus() call
on the Alpha architecture.

Cheerz
Michael.


___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel


[patch] alpha: Fix SlowBCopy [was: xserver: Branch 'master']

2009-09-28 Thread Michael Cree
For your consideration:

Commit "Remove the remnants of Jensen
support" (c7680befe5aebd0f4277d11ff3984d8a7deb9d5b) not only removed the
Jensen support but also the dense support leaving behind only sparse
support for Linux Alpha.  This breaks the SlowBCopy routines on all BWX
enabled Alphas.

Re-add the test for sparse and dense systems and re-introduce the dense
code path.

Michael Cree.

diff --git a/hw/xfree86/os-support/misc/SlowBcopy.c b/hw/xfree86/os-support/misc/SlowBcopy.c
index 182a3e6..21ab116 100644
--- a/hw/xfree86/os-support/misc/SlowBcopy.c
+++ b/hw/xfree86/os-support/misc/SlowBcopy.c
@@ -59,6 +59,8 @@ xf86SlowBcopy(unsigned char *src, unsigned char *dst, int len)
 
 #ifdef linux
 
+extern unsigned long _bus_base_sparse(void);
+
 #define SPARSE (7)
 
 #else
@@ -70,32 +72,38 @@ xf86SlowBcopy(unsigned char *src, unsigned char *dst, int len)
 void
 xf86SlowBCopyFromBus(unsigned char *src, unsigned char *dst, int count)
 {
-unsigned long addr;
-long result;
-
-addr = (unsigned long) src;
-while( count ){
-	result = *(volatile int *) addr;
-	result >>= ((addr>>SPARSE) & 3) * 8;
-	*dst++ = (unsigned char) (0xffUL & result);
-	addr += 1<>= ((addr>>SPARSE) & 3) * 8;
+	*dst++ = (unsigned char) (0xffUL & result);
+	addr += 1<___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel