Re: [Intel-gfx] GMA 950 Intel 945G gallium driver

2012-06-22 Thread Alan W. Irwin

On 2012-06-22 11:18-0700 Stéphane Marchesin wrote:




On Sun, Jun 3, 2012 at 5:44 AM, Emam Hossain  wrote:
  Hello Everyone,

  Recently I tested one of my old desktops, which has an Intel 945G with a
Dual Core CPU. I installed Ubuntu 11.10 with XServer 1.11, kernel 3.2 and
xf86-video-intel 2.18.

  What I have found is that the Gallium i915g driver from Mesa 7.11 and 8
performs better than the officially supported DRI i915 driver.

  For example, when tested against the following games:

  BEEP, http://www.desura.com/games/beep (Gallium plays fine while DRI does not)
  BIT.TRIP.RUNNER from Humble Bundle,
http://bittripgame.com/bittrip-runner.html (Gallium gives smooth gameplay, DRI is slow)
  and many more.

  Moreover, Windows games under WINE are not playable at all or are broken with
the DRI driver, while they run well with Gallium. For example, with these
games:

  Need for Speed Underground
  Flatout 1
  Need for Speed Most Wanted

  gallium does the job while DRI does not.

  So my question is: why not support the Gallium driver when it is performing
better than the DRI driver? Why not make the Gallium driver better? Since the
Intel 945G does not have hardware support for many features, the DRI driver is
simply slow for modern games (apart from GL 1.1 games), while the Gallium
driver uses the CPU to perform those missing hardware features and at least
makes games run. Moreover, the Windows driver takes a similar approach to
Gallium3D.


I feel that the reason is that the classic i915 driver is in maintenance mode 
and focus is on newer GPUs. The gallium i915 driver is what
we use on some Chrome OS machines, and that's the main reason I've been working 
on it.

With that said, I'm pondering exposing GL 2.1 on it, since it seems legit per 
the spec to hack sRGB texture support with U8 + fragment
shader instructions. That'd allow some unigine-based games to run.



The i915g driver sounds like an interesting alternative for driving
older Intel equipment.  For example, one of my computers (which I am
using as a thin client/X terminal) is an ASUS Eee netbook with the
945GME chipset.  The Debian stable version of the classic driver works
okay on that.  For example, I can run "env LIBGL_ALWAYS_INDIRECT=1
foobillard" on our principal machine and display the results on the
thin client without obvious issues.  However, that is a pretty old
version of X, and there have been numerous changes to the Intel
graphics stack since then without much official testing on old
equipment (or on thin clients for that matter) by the Intel software
team.  Therefore, I am not too sure whether the newer version of the
Intel graphics stack will work well on that equipment when I upgrade
to Debian testing, and the original post in this thread (quoted above)
isn't exactly reassuring on that issue.

Therefore, I would like to try out the i915g driver myself. Are there
build instructions somewhere for that driver or better yet is there a
Debian (or Ubuntu) package that includes it?
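
In case it helps anyone answer, here is my rough guess at a build recipe; the
repository URL and the configure flag are assumptions on my part (based on
what I have read about the Mesa autoconf build), so please correct them if
they are wrong:

git clone git://anongit.freedesktop.org/mesa/mesa
cd mesa
# select the gallium i915 driver in addition to (or instead of) the classic one
./autogen.sh --with-gallium-drivers=i915
make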

Alan
__
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state
implementation for stellar interiors (freeeos.sf.net); the Time
Ephemerides project (timeephem.sf.net); PLplot scientific plotting
software package (plplot.sf.net); the libLASi project
(unifont.org/lasi); the Loads of Linux Links project (loll.sf.net);
and the Linux Brochure Project (lbproject.sf.net).
__

Linux-powered Science
__


[Intel-gfx] Two GMA 950 issues

2012-09-25 Thread Alan W. Irwin

I have recently upgraded my Asus Eee b202 box (with 945GM chipset and
GMA 950 graphics core) from Debian stable to Debian testing.

Before, I was just using this rather underpowered box as a thin client
(using the X -query method to reach a remote box's xdm, log in to that
remote box to actually do my work, or play low-end 3D games such as
foobillard).  However, as a test I tried installing KDE on
this box. This new KDE version is much faster than the Debian stable
version so that experiment turned out to be a huge success.  So on the
whole I am completely satisfied with the KDE graphics experience I have
directly on this box.
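
For concreteness, the thin-client setup is nothing more exotic than starting
the local X server with an XDMCP query against the remote box; "remotebox"
below is just a placeholder for the real hostname:

X :1 -query remotebox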

However, I have noticed two issues with this box that probably indicate
that the Intel graphics driver needs some additional testing/maintenance
for GMA 950 graphics cores.

(1) The first issue (full bug report at
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=688812) is that fiddling
with desktop effects (specifically turning the "Outline" effect off and
then on again) brings the GPU to a near halt (it took about a minute
to respond to even hovering the mouse over part of the desktop and
another minute to respond to a mouse click).  I ascribe the
slowdown to the GPU because "top" showed the CPU was idle and there
was plenty of free memory available as well. That huge slowdown is
"permanent" in the sense that logging off and/or (warm) rebooting does
not solve the issue.  The only way out of this trap that I found was
to mv the .kde directory aside and reconfigure everything from
scratch.

(2) The second issue (full bug report at 
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=688822)

is a regression (when compared to Debian stable) in playing
remote games with LIBGL_ALWAYS_INDIRECT=1 using the X -query method to gain 
access
to a remote box.  Interestingly, I can run those remote games just fine
if I use the ssh method rather than X -query to gain access to
the remote box.

The Eee box was first introduced only 4 years ago so it is fairly
modern equipment.  Also, it might experience a bit of a renaissance
now that sophisticated Linux desktop environments like KDE appear
to run on it with no speed issues at all.  So it would be a
shame if the above bugs were not addressed in some way by the Intel
developers here.

My own feeling is that perhaps the best way to deal with such bugs is not
to worry too much about the high-level specifics that triggered them,
but instead to run full-blown tests on equipment with a GMA 950 graphics
core, similar to the tests Intel runs on their latest hardware.  In the
long run, such tests are the only way to make sure the latest Intel
graphics stack works properly on somewhat older equipment like this.

I would be happy to run such comprehensive tests, but this is a
production box (i.e., I am trying to use this box to help develop my
own free software).  I don't mind pausing my own development work to
run the tests, but if setting up such tests is too time-consuming or
would take too much of my time learning about how to build the Intel
graphics stack from scratch, I would prefer someone else to run the
tests instead.

Alan


Re: [Intel-gfx] Intel 2011Q4 graphics package

2011-12-05 Thread Alan W. Irwin

On 2011-12-05 23:03-0200 Eugeni Dodonov wrote:


Hi,
 
We’d like to announce the Intel 2011Q4 graphics package, focused on performance
and stability improvements in the Intel Linux Graphics stack.
 
Please check http://intellinuxgraphics.org/2011Q4.html for the recommended 
stack, list of new features and known issues.

I'd also like to thank all the developers, community, our users and testers for 
helping us to improve the drivers. Thanks a lot for all
your work, help and support!

And as usual, if you find any new issues, please let us know by filing bugs
following the http://intellinuxgraphics.org/how_to_report_bug.html guidelines.

Thanks,
Eugeni Dodonov
Intel Open Source Technology Center


Hi Eugeni:

My primary interest in Intel graphics is that I just want something that is
completely stable for a mostly 2D desktop with the ability to play
some low-end 3D games.  Thus, I have been reasonably content with what
I get from Debian stable for my g33 graphics hardware, but I would
like to upgrade to Debian testing for a number of reasons.  However,
what's been holding me back from that upgrade is concern that g33 has
not been validated for a while by the Intel team (e.g., no mention of
g33 for your first quarterly release this year).

Therefore, I was very glad to see g33 mentioned as one of the
validated platforms both for this release and also for the 3rd
quarterly release this year so my hopes that g33 will work with no big
issues for Debian testing are much higher than they were. However,
testing results for only some (Pineview 32-bit, and
Sandybridge/Ivybridge for 64-bit) of your validated platforms are
linked at the above URL.  Is there a link you can give me for the g33
test results (preferably 64-bit)?

Alan


Re: [Intel-gfx] Intel 2011Q4 graphics package

2011-12-06 Thread Alan W. Irwin

On 2011-12-06 09:14-0200 Eugeni Dodonov wrote:


2011/12/6 Alan W. Irwin 
  On 2011-12-05 23:03-0200 Eugeni Dodonov wrote:

Hi,
 
We’d like to announce the Intel 2011Q4 graphics package, focused on
performance and stability improvements in the Intel Linux Graphics stack.
 
Please check http://intellinuxgraphics.org/2011Q4.html for the 
recommended stack, list of new features and
known issues.

I'd also like to thank all the developers, community, our users and 
testers for helping us to improve the
drivers. Thanks a lot for all
your work, help and support!

I'd also like to thank all the developers, community, our users and
filing bugs following the
http://intellinuxgraphics.org/how_to_report_bug.html guidelines.

Thanks,
Eugeni Dodonov
Intel Open Source Technology Center


Hi Eugeni:

My primary interest in Intel graphics is that I just want something that is
completely stable for a mostly 2D desktop with the ability to play
some low-end 3D games.  Thus, I have been reasonably content with what
I get from Debian stable for my g33 graphics hardware, but I would
like to upgrade to Debian testing for a number of reasons.  However,
what's been holding me back from that upgrade is concern that g33 has
not been validated for a while by the Intel team (e.g., no mention of
g33 for your first quarterly release this year).

Therefore, I was very glad to see g33 mentioned as one of the
validated platforms both for this release and also for the 3rd
quarterly release this year so my hopes that g33 will work with no big
issues for Debian testing are much higher than they were. However,
testing results for only some (Pineview 32-bit, and
Sandybridge/Ivybridge for 64-bit) of your validated platforms are
linked at the above URL.  Is there a link you can give me for the g33
test results (preferably 64-bit)?


Hi Alan,

g33 has not been the main focus of release validation and testing lately, so
it receives limited validation, mostly checking for regressions.

However, for all the kernel/mesa/drm/2d releases, the expectation is for it to
work in a "no regressions" mode. It should be similar to pineview for most
test cases and workloads.

Having said that, we do intend to provide good support for it. So if you
observe any new issues or bugs which are not previously filed in our bugzilla,
please let us know. Even if we do not focus on full validation and testing for
it as part of the stack release, we still do intend to have a good level of
functionality for it. So if there are stability or regression issues, or its
functionality under Debian testing is somehow worse than with Debian stable,
we'd like to know about it.



Thanks to you and Gordon for your up-front replies.  I am disappointed
you currently have such limited resources for release validation, and
for your Linux customers' sake I hope you can convince Intel to
give you some additional resources (man-hours and hardware) to make
that validation effort more comprehensive.  After all, the Intel
X stack is being constantly developed to handle new hardware and fix
bugs for old hardware, and without release testing for a wider variety
of hardware some regressions are inevitably going to creep in for
hardware that currently has limited release validation or no release
validation at all.  Thus, the inevitable result of your currently
limited resources for release validation is concerned Linux customers
like me who are forced to be extremely cautious about software
"upgrades".  That is a characteristic historically associated with
Windows users, and we don't want that to become the norm for Linux
users as well!

That said, I will try to help you out where I can. This g33 computer
is used by two people simultaneously in a home office (one directly
and one via an X terminal) as a production box so it will take a while
before we can find a convenient time to upgrade to Debian testing, but
when we do so (probably one or two months from now) I will get back to
you with an upgrade report (whether positive or negative) for the g33
graphics of that principal box.  If all goes well for that upgrade, we
also plan to upgrade the nettop computer we use for the X-terminal
from Debian stable to Debian testing and give you a similar upgrade
report for the 945GM/GMS/GME graphics of that system which are
somewhat older than the g33 graphics of our principal box.

Alan

Re: [Intel-gfx] Updated -next

2012-01-21 Thread Alan W. Irwin

On 2012-01-21 15:12+0100 Daniel Vetter wrote:


drm-intel-testing is drm-intel-next and drm-intel-fixes merged together
(as of the time when I pushed things out). Gordon Jin said that he
prefers to qa one single branch and that qa will take the job of finding
out whether an issue has been introduced in -fixes or in -next. I agree
that it makes more sense to test everything together, otherwise you'll
miss some of the bugfixes in -fixes.


As an Intel graphics user whose number-one concern is stability, I
have to make a comment here.  I fully appreciate that the top priority
for qa should be the cutting edge so that Intel developers get quick
feedback on their changes.  But that leaves the -fixes branch untested
_on its own_ by qa, and I urge Gordon Jin to rethink that decision.
After all, the -fixes branch is quite important to the end user of
Intel graphics since it generally propagates sooner than
-intel-testing to the users. Also, doing qa for both -intel-testing
and -fixes should not double the burden on the qa group since -fixes
is much less volatile so doesn't have to be tested nearly as often as
-intel-testing.

In sum, my feeling is that if the -fixes branch is to have any
separate meaning at all, it has to go through the same qa process
(although not as often) as drm-intel-testing.

My $0.02.

Alan


Re: [Intel-gfx] i915GM 2D+3D intel driver regression

2010-04-30 Thread Alan W. Irwin

On 2010-04-30 08:45-0700 SD wrote:


Dear all.

I have been using Linux for two years already, on an Intel 915GM video card in
a Lenovo laptop. With:
(II) Loading /usr/lib/xorg/modules//drivers/intel_drv.so
(II) Module intel: vendor="X.Org Foundation"
compiled for 1.5.2, module version = 2.5.0
Module class: X.Org Video Driver
ABI class: X.Org Video Driver, version 4.1
And with the XAA acceleration module, when I watch an Xvid movie full screen,
according to top:
X uses 4% of CPU
SMPlayer uses 10-12% of CPU.

glxgears gives me:
3216 frames in 5.0 seconds = 643.117 FPS



Now I tried Fedora13 (test) with intel driver:
[27.854] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
[27.855] (II) Module intel: vendor="X.Org Foundation"
[27.855]  compiled for 1.8.0, module version = 2.11.0
[27.855]  Module class: X.Org Video Driver
[27.855]  ABI class: X.Org Video Driver, version 7.0

And can you imagine, glxgears gives me:
165 frames in 5.1 seconds = 32.612 FPS

Watching an Xvid movie:
X uses ~40% of CPU
SMPlayer uses ~10-20% of CPU.
Even when I switch workspaces X11 uses ~20% of CPU - there is no 2D
acceleration at all.

After all of this I would like to ask:
Do you respect customers who use Linux?
Does anyone check your driver and UXA with i915?
Why, just why, did the developers throw away XAA from the driver? Your UXA
works the same as EXA did - awful. Awful with 3D and, more importantly, awful
with 2D.

Why can't the developers just leave alone what was done well for i915?
Why was it necessary to screw everything up? It looks like you just turned EXA
into UXA.

I do not know about other Intel chipsets, but i915 works really slowly with
the new and the previous driver on UXA.

So, for i915GM the new driver is a BIG, BIG REGRESSION and a big step backward.


Personally, I think you were a little hard on the Intel developers.  I think
we should all give them some slack so they have the freedom to get on with
the job of the huge X stack changes that have been necessary over the last
several years to deal with the capabilities of modern video chipsets
(including Intel ones).

However, I think those developers are entirely on your side that
_eventually_ these large X stack changes should be refined to the point that
they will not severely impact older hardware performance.  For example,
there have been reassurances in the past from the Intel developers on
exactly this point.  Clearly, from your xvid and smplayer numbers (they will
dismiss glxgears numbers for reasons that have been stated many times in the
past) they are currently doing poorly at this job, and that is quite
worrying.  For example, I am sticking to XAA for my older g33 Intel video
chipset using the Debian stable X stack because of speed and stability
concerns with the new X stack and new intel driver, and your post has
reaffirmed that decision.  But both of us (and all the other users of older
Intel hardware out there) cannot use old distributions forever, so I hope
the Intel developers will once again reassure us, in answer to your post,
that _soon_ (rather than "eventually") they will address the real-world (as
opposed to glxgears) performance regressions compared to the old X stack and
XAA.

Alan


Re: [Intel-gfx] [i845G] stuck in 1024x768

2010-06-13 Thread Alan W. Irwin

On 2010-06-13 19:13-0400 Felix Miata wrote:


On 2010/06/13 23:10 (GMT+0100) Andy Lutomirski composed:


On Jun 13, 2010, at 9:49 PM, Felix Miata  wrote:



Using openSUSE 11.3M7 (1.8.0 server/2.11.0 AFAICT) I've been unable to figure
out how to get the server to obey xorg.conf entries for NoDDC, NoRandr,
PreferredMode or DisplaySize.

[...]

xorg.conf as last modified by me:
http://fm.no-ip.com/Tmp/Linux/Xorg/xorg.conf-t2240-s113-20100613a


Hi Felix:

Here is how I configured PreferredMode _for an old server_ (Debian Lenny)
in the Monitor section:

#gtf 1024 768 85
# 1024x768 @ 85.00 Hz (GTF) hsync: 68.60 kHz; pclk: 94.39 MHz
Modeline "1024x768_85.00"  94.39  1024 1088 1200 1376  768 769 772 807 -HSync + 
Vsync
Option "PreferredMode" "1024x768_85.00"

I found in the past that PreferredMode would not work with Intel if you used
a standard modeline name such as "PreferredMode" "1600x1200" like you do in
your xorg.conf.  Instead, I suggest you use gtf to calculate a 1600x1200
modeline, and use the generated non-standard modeline name with a suffix
corresponding to the vertical refresh rate.  No guarantees, but specifying a
special modeline like above with a non-standard modeline name was the only
way I could get PreferredMode to work in the past, and it is possible those
constraints on PreferredMode still apply for modern X servers.  Anyhow, it
is worth a try.
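
For completeness, here is roughly how those lines sit in a full Monitor
section (the Identifier is whatever your Screen section already refers to;
"Monitor0" below is just a placeholder):

Section "Monitor"
    Identifier "Monitor0"
    #gtf 1024 768 85
    # 1024x768 @ 85.00 Hz (GTF) hsync: 68.60 kHz; pclk: 94.39 MHz
    Modeline "1024x768_85.00"  94.39  1024 1088 1200 1376  768 769 772 807 -HSync +Vsync
    Option "PreferredMode" "1024x768_85.00"
EndSection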


Just as importantly, I've not yet figured out why anyone should have to do
manually (presuming they can even figure out how) what used to work
automatically. For years, no modelines in xorg.conf were required, and X
just used the first usable entry on the applicable modes line in 'Section
"Screen"'. Later someone decided a PreferredMode entry in 'Section "Monitor"'
was required to perform the same function, but now it no longer works.

Supposedly the overhaul of X begun two years ago was to make
operation/startup/configuration (?more?) automatic, not less, but I, always
using Trinitron CRTs, have only observed quite the contrary so far. X for me
has regressed from the jet age back to piston engined biplanes without
electric starters.


Like you, I hope the Intel jet age comes back soon.

By the way, my 15-year-old Trinitron finally gave up the ghost earlier this
year and I replaced it with an LED-backlit ASUS LCD for $130.  That was a
superb deal, and I can say the new monitor is better in all respects
(brightness, colours, resolution, and size) except for width of viewing
angle. However, I doubt very much it will last as long as Trinitrons do; the
Trinitrons are not that much worse in quality; and the "use it up, wear it
out" philosophy helps the environment (and the bank balance).  Thus, I am
hanging on to my remaining 10-year-old Trinitron monitor until it also dies. That
particular monitor is attached to a computer with SIS video chipset (ugh),
but when that computer fails I won't replace it with Intel unless
PreferredMode works properly.  So I hope my suggestion above works for you,
but if not, I hope the Intel developers get PreferredMode working
properly again.

Alan


Re: [Intel-gfx] Remaining set of small patches for -rc1

2010-08-11 Thread Alan W. Irwin

On 2010-08-11 05:46-0400 sh...@35point5.com wrote:


Dear Chris,

Please stop sending me these messages.



Dear Chris and other Intel developers,

Please do keep sharing your excellent work with intel-gfx.

Dear Shadi:

It appears you have subscribed to intel-gfx without being aware of
what type of traffic you will get from that list.  To say this in the
nicest way I can, you have two choices: (a) get used to
the type of traffic on this list, or (b) unsubscribe.

Alan


Re: [Intel-gfx] Problems configuring unique xorg.conf file

2010-08-26 Thread Alan W. Irwin

On 2010-08-26 10:47- Nasa wrote:


Hi,


I am trying to configure a rather obscure resolution (800x480), which is the
native resolution for my monitor.  The monitor doesn't provide EDID or DDC
information (it's connected over VGA), and the default settings chosen by the
driver produce a display that doesn't fit the monitor properly.  I would like
to construct an xorg.conf file with appropriate vertical refresh rates,
horizontal sync ranges, and/or modelines to work correctly with the monitor.
However, there doesn't seem to be a way to turn off the driver defaults for
those items.  I tried options like NoDDC, UseEDID, etc. with no luck.  I also
tried using xrandr to change resolutions after X has started.  The results
end up being worse than the initial problem I was trying to fix (i.e., the
screen is bigger than the area available to display it).  I expect my
inability to find a suitable solution is due to my lack of knowledge -- so I
hope someone can fill me in on what I am missing.  Thanks in advance,


Earlier this year, before I replaced my long-time Sony monitor with an
LCD and upgraded from Debian Lenny to Debian testing, the results of
gtf and PreferredMode worked for me. For example, my xorg.conf file
for that monitor had the following lines in the Monitor section:

#gtf 1024 768 85
# 1024x768 @ 85.00 Hz (GTF) hsync: 68.60 kHz; pclk: 94.39 MHz
Modeline "1024x768_85.00"  94.39  1024 1088 1200 1376  768 769 772 807 -HSync +Vsync
Option "PreferredMode" "1024x768_85.00"

Of course, instead of using the above example, you will want to run
something like

gtf 800 480 85

from the command line (man gtf), paste the results into your Monitor
section, and update the mode name used by PreferredMode to match.

I emphasize that the above configuration lines worked for an old version of the
Intel driver (Debian Lenny), and I don't know whether they would work
for a modern version.  But it is worth a try.
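
Concretely, the only extra piece you add by hand is the PreferredMode option,
using the mode name that gtf generates (shown here for your 800x480 case; use
the timing numbers gtf prints for you rather than any from my example above):

gtf 800 480 85
# paste the Modeline line it prints into the Monitor section, then add:
Option "PreferredMode" "800x480_85.00"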

Alan


Re: [Intel-gfx] Problems configuring unique xorg.conf file

2010-08-26 Thread Alan W. Irwin

On 2010-08-26 17:37- Nasa wrote:



- "Alan W. Irwin"  wrote:


On 2010-08-26 10:47- Nasa wrote:


Hi,


I am trying to configure a rather obscure resolution (800x480), which is the
native resolution for my monitor.  The monitor doesn't provide EDID or DDC
information (it's connected over VGA), and the default settings chosen by the
driver produce a display that doesn't fit the monitor properly.  I would like
to construct an xorg.conf file with appropriate vertical refresh rates,
horizontal sync ranges, and/or modelines to work correctly with the monitor.
However, there doesn't seem to be a way to turn off the driver defaults for
those items.  I tried options like NoDDC, UseEDID, etc. with no luck.  I also
tried using xrandr to change resolutions after X has started.  The results
end up being worse than the initial problem I was trying to fix (i.e., the
screen is bigger than the area available to display it).  I expect my
inability to find a suitable solution is due to my lack of knowledge -- so I
hope someone can fill me in on what I am missing.  Thanks in advance,


Earlier this year, before I replaced my long-time Sony monitor with an
LCD and upgraded from Debian Lenny to Debian testing, the results of
gtf and PreferredMode worked for me. For example, my xorg.conf file
for that monitor had the following lines in the Monitor section:

#gtf 1024 768 85
# 1024x768 @ 85.00 Hz (GTF) hsync: 68.60 kHz; pclk: 94.39 MHz
Modeline "1024x768_85.00"  94.39  1024 1088 1200 1376  768 769 772 807 -HSync +Vsync
Option "PreferredMode" "1024x768_85.00"

Of course, instead of using the above example, you will want to run
something like

gtf 800 480 85

from the command line (man gtf), paste the results into your Monitor
section, and update the mode name used by PreferredMode to match.

I emphasize that the above configuration lines worked for an old version of
the Intel driver (Debian Lenny), and I don't know whether they would work
for a modern version.  But it is worth a try.

Alan


Thanks Alan,

I actually attempted this via cvt, which ended up with "horizontal sync out
of range" errors.  The man page for cvt doesn't show any option to set that.
Does gtf have this capability?


No, but it doesn't matter.  Play with either cvt or gtf (I don't think
there is any real difference between them) with a fixed resolution and
varying vertical refresh rate, and you will see that the horizontal
sync frequency of the generated modeline is proportional to the
vertical refresh you specify.  So for your desired resolution, if the
generated horizontal sync is lower than your allowed range, increase
the vertical refresh until you have a value within the allowed range.
Or if it is above (extremely unlikely for such a low resolution),
reduce the vertical refresh.

I have assumed above that the horizontal frequency limits have been
set correctly for your particular monitor.  That is not always the
case.  Check your monitor manual for the correct vertical and
horizontal frequency limits, and if your X log shows those are not
being discovered properly by X, then specify the correct ranges using the
VertRefresh and HorizSync values in the Monitor section.  In my case
I used

HorizSync   30-96
VertRefresh 48-120

corresponding to values published in my Sony g200 manual, but your
monitor manual is very likely to require different ranges.
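
If you want to see that scaling for yourself, just generate the same mode at
a couple of different refresh rates and compare the "hsync:" figures in the
comment lines of the output (I leave the numbers out here; run these and
compare for your own case):

gtf 800 480 60
gtf 800 480 85
cvt 800 480 60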

Alan


Re: [Intel-gfx] Problems configuring unique xorg.conf file

2010-08-26 Thread Alan W. Irwin

On 2010-08-26 18:40- Nasa wrote:



- "Alan W. Irwin"  wrote:

[...]I have assumed above that the horizontal frequency limits have been
set correctly for your particular monitor.  That is not always the
case.  Check your monitor manual for the correct vertical and
horizontal frequency limits and if your X log shows those are not
being discovered properly by X, then specify the correct ranges using
the
VertRefresh and HorizSync values in the Monitor section.  In my case
I used

HorizSync   30-96
VertRefresh 48-120

corresponding to values published in my Sony g200 manual, but your
monitor manual is very likely to require different ranges.

Alan


Therein lies the problem.  I have created an xorg.conf file with
the settings you mention -- but the settings are ignored.  Trying to
tell Xorg not to use EDID/DDC/default values hasn't worked (see the first
part of my original message).  Worse, the log file doesn't report
what values it is using!  So it's been a guessing game...


Look through the log file for the "ranges" string.  For my modern (LCD
monitor on Debian testing) system the result is

(II) intel(0): Ranges: V min: 55 V max: 75 Hz, H min: 30 H max: 80
kHz, PixClock max 140 MHz

(II) intel(0): Using hsync ranges from config file
(II) intel(0): Using vrefresh ranges from config file

However, despite those messages, it turns out HorizSync (and probably
VertRefresh) in my xorg.conf are ignored, and instead the values
reported by the monitor are used. (The above values are consistent
with those reported on the web for my particular monitor.  I set those
same values using HorizSync and VertRefresh except that I specified a
smaller H max via HorizSync as an experiment and it was ignored.)

Ignoring the frequencies in xorg.conf didn't hurt in my modern
LCD/Debian testing case, but it probably does in yours.  To confirm
that, what is the exact result you get for Ranges in the log file?
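
(On a stock install, the quickest way to check is something like

grep -i ranges /var/log/Xorg.0.log

though the log location may differ on your system.)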

In general, I am troubled by any misguided tendency of Intel
developers to remove xorg.conf capability.  Sure, it is nice to
generally not require that file at all by default, but when you really
need control for situations where bad values or no values are being
reported by a monitor, a fully capable xorg.conf file is absolutely
essential.  So let's hope this ignoring of frequencies specified in
xorg.conf (at least for my Debian testing Intel X stack)
is a temporary aberration by the Intel developers that has been fixed
in later versions.

Alan


Re: [Intel-gfx] PROBLEM: i915 modesetting - weird offset graphics (v2.6.37-rc1-27-gff8b16d)

2010-11-04 Thread Alan W. Irwin

On 2010-11-04 13:41- Chris Wilson wrote:


On Thu, 04 Nov 2010 09:37:01 -0400, Jon Masters  wrote:

Also, the intel-gfx list is moderated with auto-rejection. Do I really
need to subscribe to the list just to get a mail through?


A few of the Intel gfx engineers are at Plumbers. If you tell Jesse about
it, he may be able to convince the powers-that-be to change the policy.
-Chris


I think the present list policy of allowing posts from subscribers
only is a good one.  My experience with some mailing
lists for particular SourceForge projects that I admin is that if you allow
unsubscribed users to post, then lots of spam tends to get posted
despite every effort by SourceForge to filter out spam.  Just out of
curiosity I have forwarded failing spam attempts for all SourceForge
mailing lists I admin to my own personal e-mail account to track what
is going on.  Most lists I admin have no spam attacks on them, but the
most active of them (plplot-devel) has had thousands of thwarted attempts
to spam it over, say, the last year.  So it appears the spammers keep
track of which mailing lists are active and don't waste their time on
relatively inactive lists.

However, it should be pointed out that this list is more active than
plplot-devel so I assume the spamming would start quite quickly if the
present list policy were changed.  Or this list may already be under
lots of spam attacks which are thwarted by the present policy.

Such spam shouldn't matter that much to list subscribers if they have
good spam filters (like I do with spam_bayes), but spamming does tend
to fill up your mailing-list archive with a load of crap which makes
it more difficult to browse or search that archive.

In sum, spammers appear to be generally lazy so that the small barrier
to entry of demanding all posts be from the poster's subscription
address continues to be an amazingly good tactic against spam in my
experience.

Alan


Re: [Intel-gfx] artefacts on 855 graphic

2010-11-27 Thread Alan W. Irwin

On 2010-11-27 15:49+0100 Daniel Vetter wrote:


On Sat, Nov 27, 2010 at 10:07:24AM +0100, Alexey Fisher wrote:

Hello all,

I know it is a known issue that the 855 is not really working with the current
driver. But I want to know if there is work in progress or any interest
in bug reports?


Well, it's i8xx in general that's in a very sorry state with GEM.
Unfortunately there's no easy fix available; even within Intel no one
knows anymore how to work around these problems (the hw people designed these
chips approx. 8 years ago and moved on). I have ideas that might fix these
problems. Unfortunately this requires rather massive code rewriting. But
I've been (very) slowly moving towards this in the past few months. Don't
hold your breath, though.

Meanwhile Chris Wilson's shadowfb support should give you Xv accel +
reasonably fast 2D (CPU-rendered, but the GPU was never really faster for
2D on these chips, anyway).


Frankly, it harms Intel's Linux reputation that this regression in 3D
support for old chips has been allowed to develop.  Shuttle was kind
enough back in 2004 to donate two of their shuttle boxes to the LUG I
happened to belong to at that time.  Those boxes had Intel Extreme
Graphics 2 chipsets (I assume 855GM's), and low-end 3D games worked
well with the Intel driver then according to my own experience as well
as the wonderful Linux reviews that particular shuttle box (the
SB62G2) was getting at the time.  I have felt positive about Shuttle
and Intel ever since that good experience, but this regression in 3D
support for old hardware is giving me second thoughts about Intel.

I realize it would probably take some modest additional personnel
resources from Intel to support those old devices (probably with a
completely separate and minimally maintained driver that was forked
from the last edition of the Intel driver that worked properly for
those devices since that old hardware appears not to be compatible
with GEM). The allocation of such personnel resources from Intel would
give their Linux reputation a much-needed boost since extended
software support times are a big selling point for hardware that is
run with open-source device drivers.

Alan


Re: [Intel-gfx] artefacts on 855 graphic

2010-11-27 Thread Alan W. Irwin

On 2010-11-28 00:01+0100 Clemens Eisserer wrote:


Hi Alan,


Frankly, it harms Intel's Linux reputation that this regression in 3D
support for old chips has been allowed to develop.


Well, I also own an 855GM-powered laptop, but I have to add that I
understand Intel's decision not to devote a lot of development resources
to it anymore.


Hi Clemens:

Yeah, I agree that Intel should not devote huge resources to this.
That was why I was talking about Intel supporting a minimally
maintained fork of an old working driver that did have good 3D support
for legacy Intel hardware.


3D on that hardware doesn't make a lot of sense


I disagree with you there.  I know from experience that 855GM hardware
would still be fairly powerful with the right software support.
Low-end 3D games worked well on that Shuttle box for me, and
it also received good reviews for 3D elsewhere with the caveat of not
expecting much for demanding high-end 3D games.  I didn't try it for
3D desktop effects at the time (I am not sure they even existed in
2004) but assuming the software and hardware demands of current 3D
desktop effects are similar to those of low-end 3D games, then a
minimally maintained fork of the old Intel driver that resurrected the
3D responsiveness I remember for that hardware should be more than
adequate to support 3D desktop effects.


Furthermore, it's not different from what other GPU vendors do with
legacy hardware.
Take nvidia for example - their old chips don't get driver updates
anymore, and the more-or-less official recommendation is to use nv
instead. I have to use the reverse-engineered nouveau driver on those
machines (and I am really happy with it) to get at least usable 2D.
And for AMD, the HD2100-powered mainboard I bought exactly one year
ago no longer receives driver updates for Windows either.


Yup, this is the known downside of proprietary software drivers, but
why would Intel want to copy that bad practice?  People like me want
to be free to "use up and wear out" our computer devices rather than
being the victims of forced hardware upgrades.  That's one of the big
advantages of open-source driver software from a user's perspective.
For example, the open-source nouveau (Nvidia) and radeon (AMD/ATI)
drivers do support legacy hardware. Why would Intel not want to follow
that good open-source practice?

Alan


Re: [Intel-gfx] General Purpose Programming GPU/(G)MCH

2010-12-14 Thread Alan W. Irwin

On 2010-12-14 10:42+0100 Clemens Eisserer wrote:


 .. they started to call the chip inside the GMCH a GPU since the 965). I
would be grateful for some more links to documentation and code.


I guess just because the term "GPU" was uncommon before. Fact is, gen3
contains programmable pixel shaders.
However, if it isn't just for fun, I'd discourage going down that route;
the shaders are extremely slow compared to today's CPUs (keep in mind
gen3 is now ~5 years old).



I personally wouldn't be quite that discouraging on the speed front.
Today's CPUs are of similar clock speed to those of yesteryear.  I bought
an entry-level 2.4 GHz box back in 2003, and entry-level boxes are
still of that order in raw speed.  The real advances for modern CPUs
are in computing power efficiency (gigaflops/watt) and in the number of CPUs.
Thus, modern CPUs give you a real speed advance only if your
application can take advantage of multiple CPUs, but that is often
not the case. I agree some speed advances have been made in the Intel
GPU case (see http://en.wikipedia.org/wiki/Intel_GMA for some of the
speed details), but as in the CPU case it again appears to me that
most of the advances in the GPU case are in the number of execution
units.

Alan


[Intel-gfx] An apparent large performance regression of 3D display over the network

2011-01-19 Thread Alan W. Irwin

Three years ago on a Debian testing system that was to become Debian
lenny, I made a test of running low-end 3D games (tuxracer and
foobillard) on a remote box on my 100Mbit LAN while displaying them on
my local Debian testing box X server with g33 chipset.  The games were
quite playable (in fact indistinguishable from playing the games
locally if I recall correctly).

Fast forward to today, when I once again tried the foobillard part of
the test (I didn't bother with tuxracer) with the same local (g33)
hardware but this time with Debian testing (squeeze) installed on that
locally displaying box.  foobillard has become completely unplayable
over the network with the former smooth movement reduced to what looks
like a series of snapshots with large gaps in between.  The LAN
network I have now is 1 gigabit as opposed to the older 100Megabit LAN
network I had when the remote 3D games worked well.  I can play
foobillard and tuxracer just fine locally on that machine so it
appears local 3D is in reasonable shape.

foobillard and tuxracer are just subjective tests of whether 3D
rendering works reasonably efficiently over the network, but my
impression is the regression in that regard is at least one or two
orders of magnitude in speed in order to reduce smooth effects to a
series of snapshots.  Thus, objective tests of remote 3D efficiency of
the old Intel stack from three years ago versus the current one should
pick up this performance regression easily.

I recently bought another computer (ASUS Eee Box with 945GME chipset)
that shows foobillard is unplayable over the network in the same way
while local running of that game is fine on that box.  I have now
configured that box to be an X-terminal (a configuration I far prefer
because it reduces sysadmin issues a lot).  The 2D KDE desktop
displays well for that configuration, but I have extreme doubts
(haven't tried them yet) about whether remote 3D desktop effects will
work at all considering this huge slowdown I get with remote running
of foobillard over the local 1 Gigabit LAN with that X-terminal.

One possibility is there may be something extra I have to do now to
make remote 3D display efficient over an ssh connection.  Advice in
that regard would be helpful.  (Currently, I just set ForwardX11 yes
and ForwardAgent yes for the host in question in .ssh/config for
the local computer.)
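
For reference, the relevant stanza in my local ~/.ssh/config amounts to no
more than the following ("remotebox" here is a placeholder for the real
hostname):

Host remotebox
    ForwardX11 yes
    ForwardAgent yes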

But if ssh configuration is not the issue, then it appears there has
been an efficiency regression for remote 3D at least for the 945GME
(GMA 950) and g33 (GMA 3100) chipsets. Has anyone here found
foobillard or similar low-end 3D games to be playable or 3D desktop
effects to work reasonably efficiently over fast LAN networks with
today's Intel graphics driver?

Of course, if this really turns out to be a general regression in
remote 3D display efficiency, then that regression obviously
corresponds with the X stack reorganization by Intel that has occurred
over the last 3 years. I expect making local 3D display efficient for
that newly organized stack is still one of the top priorities for
Intel developers, but I hope dealing with this efficiency regression
for remote 3D (if that is what it is) is at least on the agenda. After
all, with 3D desktop effects becoming more and more important and with
low-end 3D games as a "would be nice", reasonably efficient X network
transparency for 3D display is an important issue for those using X
terminals.

Let me know if there are more quantitative tests of efficiency you
would like me to run between local and remote display of 3D on either
the 945GME or g33 boxes.

Alan


Re: [Intel-gfx] An apparent large performance regression of 3D display over the network

2011-01-25 Thread Alan W. Irwin

I would really appreciate one of the Intel developers taking a quick
shot at answering these general questions.  To summarize: do you
confirm bad results (typically 1 frame per second) for running remote
3D apps over the network while displaying them locally?  If so, is a
fix for this performance regression, relative to the Intel X stack of
several years ago, on the agenda?

On 2011-01-19 01:01-0800 Alan W. Irwin wrote:


Three years ago on a Debian testing system that was to become Debian
lenny, I made a test of running low-end 3D games (tuxracer and
foobillard) on a remote box on my 100Mbit LAN while displaying them on
my local Debian testing box X server with g33 chipset.  The games were
quite playable (in fact indistinguishable from playing the games
locally if I recall correctly).

Fast forward to today, when I once again tried the foobillard part of
the test (I didn't bother with tuxracer) with the same local (g33)
hardware but this time with Debian testing (squeeze) installed on that
locally displaying box.  foobillard has become completely unplayable
over the network with the former smooth movement reduced to what looks
like a series of snapshots with large gaps in between.  The LAN
network I have now is 1 gigabit as opposed to the older 100Megabit LAN
network I had when the remote 3D games worked well.  I can play
foobillard and tuxracer just fine locally on that machine so it
appears local 3D is in reasonable shape.

foobillard and tuxracer are just subjective tests of whether 3D
rendering works reasonably efficiently over the network, but my
impression is the regression in that regard is at least one or two
orders of magnitude in speed in order to reduce smooth effects to a
series of snapshots.  Thus, objective tests of remote 3D efficiency of
the old Intel stack from three years ago versus the current one should
pick up this performance regression easily.

I recently bought another computer (ASUS Eee Box with 945GME chipset)
that shows foobillard is unplayable over the network in the same way
while local running of that game is fine on that box.  I have now
configured that box to be an X-terminal (a configuration I far prefer
because it reduces sysadmin issues a lot).  The 2D KDE desktop
displays well for that configuration, but I have extreme doubts
(haven't tried them yet) about whether remote 3D desktop effects will
work at all considering this huge slowdown I get with remote running
of foobillard over the local 1 Gigabit LAN with that X-terminal.

One possibility is there may be something extra I have to do now to
make remote 3D display efficient over an ssh connection.  Advice in
that regard would be helpful.  (Currently, I just set ForwardX11 yes
and ForwardAgent yes for the host in question in .ssh/config for
the local computer.)

But if ssh configuration is not the issue, then it appears there has
been an efficiency regression for remote 3D at least for the 945GME
(GMA 950) and g33 (GMA 3100) chipsets. Has anyone here found
foobillard or similar low-end 3D games to be playable or 3D desktop
effects to work reasonably efficiently over fast LAN networks with
today's Intel graphics driver?

Of course, if this really turns out to be a general regression in
remote 3D display efficiency, then that regression obviously
corresponds with the X stack reorganization by Intel that has occurred
over the last 3 years. I expect making local 3D display efficient for
that newly organized stack is still one of the top priorities for
Intel developers, but I hope dealing with this efficiency regression
for remote 3D (if that is what it is) is at least on the agenda. After
all, with 3D desktop effects becoming more and more important and with
low-end 3D games as a "would be nice", reasonably efficient X network
transparency for 3D display is an important issue for those using X
terminals.

Let me know if there are more quantitative tests of efficiency you
would like me to run between local and remote display of 3D on either
the 945GME or g33 boxes.

Alan

Re: [Intel-gfx] An apparent large performance regression of 3D display over the network

2011-01-27 Thread Alan W. Irwin

I think that you're supposed to set LIBGL_ALWAYS_INDIRECT=true (on the
client, the one running the X server), otherwise you'll end up with
software rendering.

You can check if you're getting software rendering or not with
"glxinfo | grep renderer".

But on my system I can't seem to get this to work; I'll either end up
using the client's or the server's software renderer...

A huge thanks to you for this key LIBGL_ALWAYS_INDIRECT suggestion!  I
cannot find a man page where LIBGL_ALWAYS_INDIRECT is documented.  I
looked specifically for it in the Xorg, xorg.conf, intel, and even
radeon man pages, and also did a Google search for a man page
reference.  That Google search did find some non-man documentation
that indicated the value should be "1" rather than "true", which
explains why it is not working for you.

Here are some interesting LIBGL_ALWAYS_INDIRECT results on the (g33)
raven machine (where the X clients like foobillard are located) from
my 945GME X-terminal where the X server is running.

When LIBGL_ALWAYS_INDIRECT is not set, I get the following results:

irwin@raven> glxinfo |grep render
direct rendering: Yes
OpenGL renderer string: Software Rasterizer

So it was that Software Rasterizer that was killing the speed of
foobillard.

When LIBGL_ALWAYS_INDIRECT is set to 1, I get very different results.

irwin@raven> export LIBGL_ALWAYS_INDIRECT=1
irwin@raven> glxinfo |grep render
direct rendering: No (LIBGL_ALWAYS_INDIRECT set)
OpenGL renderer string: Mesa DRI Intel(R) 945GME GEM 20091221 2009Q4
x86/MMX/SSE2

That clearly identifies the 945GME (where the X server is located in
this case) as where OpenGL rendering will be done, and indeed with
LIBGL_ALWAYS_INDIRECT=1, foobillard is playable via my X-terminal!  So
thanks again for directing me to this solution.
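
(For anyone who would rather make this check from inside a program
than with glxinfo, a minimal C/GLX sketch along the following lines
should report the same two pieces of information.  This is only an
illustration, not code taken from glxinfo or from the driver, and it
assumes the GL/GLX development headers are installed so that it can
be built with "cc renderer_check.c -o renderer_check -lGL -lX11".)

/* renderer_check.c: minimal sketch that prints the same "direct rendering"
 * and "OpenGL renderer string" values that glxinfo reports, so you can see
 * whether indirect rendering is in effect for a given DISPLAY and
 * LIBGL_ALWAYS_INDIRECT setting. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    /* Pick an RGBA, double-buffered GLX visual. */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable GLX visual\n"); return 1; }

    /* Ask for a direct context; GLX falls back to indirect if necessary,
     * and LIBGL_ALWAYS_INDIRECT=1 forces the indirect path. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx) { fprintf(stderr, "glXCreateContext failed\n"); return 1; }

    /* An unmapped window with a matching visual and colormap is enough
     * to make the context current and query the renderer string. */
    Window root = RootWindow(dpy, vi->screen);
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window win = XCreateWindow(dpy, root, 0, 0, 64, 64, 0, vi->depth,
                               InputOutput, vi->visual,
                               CWColormap | CWBorderPixel, &swa);
    glXMakeCurrent(dpy, win, ctx);

    printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "Yes" : "No");
    printf("OpenGL renderer string: %s\n",
           (const char *) glGetString(GL_RENDERER));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}

Running it on raven with and without LIBGL_ALWAYS_INDIRECT=1 should
print the same two contrasting results shown above.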

I don't think I had to set this environment variable for my
corresponding tests several years ago, but I may simply be
misremembering.  However, it is clearly necessary now, and I wish the
variable were better documented.

I have asked this question in a number of places where I expected
someone would have the answer (including lxer.com and Dave Richards's
blog; he manages ~500 thin Linux clients for the City of Largo,
Florida). But nobody did (the general ignorance of X network
transparency among veteran Linux users is frightening) until you came
through. You have made my day.


[...] And just for clarity, I'm neither an Intel employee nor an X dev - just
another user :)

That's allowed and even encouraged.  :-) This list was started for the
Intel X community of users and developers, which is why I joined it at
its inception as a user.  It is great to get fundamental help like
this here from another user.

I hate to end on a negative note, but I have also tried etracer
(extreme tuxracer) with LIBGL_ALWAYS_INDIRECT=1, and it is not
playable from my X-terminal (there are difficulties even using the
mouse-driven menus) even though it can be run directly (locally)
without issues.  So there is clearly still a performance regression of
3D display over the network (assuming etracer is not that different
from tuxracer) compared to the classic Intel X stack I used for my
tuxracer tests several years ago, but it is not as extreme as I first
reported, since at least foobillard works smoothly with
LIBGL_ALWAYS_INDIRECT=1.  Nevertheless, I am still hoping for an
answer from the Intel developers on whether improving the efficiency
of 3D display over the network is at least on their agenda.

Alan
__
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state implementation
for stellar interiors (freeeos.sf.net); PLplot scientific plotting software
package (plplot.org); the libLASi project (unifont.org/lasi); the Loads of
Linux Links project (loll.sf.net); and the Linux Brochure Project
(lbproject.sf.net).
__

Linux-powered Science
__


Re: [Intel-gfx] An apparent large performance regression of 3D display over the network

2011-02-14 Thread Alan W. Irwin

On 2011-02-14 15:34+0100 Sven Arvidsson wrote:


Hi,

Not sure if you have seen this patch (I just stumbled upon it myself)
but it sounds like it makes quite a difference with indirect rendering:
http://lists.x.org/archives/xorg-devel/2011-January/018623.html


Yes, I noticed that patch from Chris Wilson as well and look forward
to when it is propagated to Debian.

Chris, does that patch make the etracer initial menu usable with
"export LIBGL_ALWAYS_INDIRECT=1"?  That menu is almost unusable here
(much flickering, extremely slow response to cursor motions) with
LIBGL_ALWAYS_INDIRECT=1.  That's for the case of running etracer
directly on the machine where the X server is located.  For those same
conditions with LIBGL_ALWAYS_INDIRECT not set, the etracer initial
menu is fine.

As an X-terminal user for the last decade, I am glad to see that there
is at least some attention being paid to efficiency issues with the
LIBGL_ALWAYS_INDIRECT=1 case.

Alan
__
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state implementation
for stellar interiors (freeeos.sf.net); PLplot scientific plotting software
package (plplot.org); the libLASi project (unifont.org/lasi); the Loads of
Linux Links project (loll.sf.net); and the Linux Brochure Project
(lbproject.sf.net).
__

Linux-powered Science
__


[Intel-gfx] Where should I report fill issues for the X stack?

2011-02-25 Thread Alan W. Irwin

The PLplot development team have just implemented a demanding 2D
rendering test for X by modifying our standard example 27 (see
http://plplot.sourceforge.net/examples.php?demo=27).  The standard
version of that example simply draws the outlines of increasingly
complex, self-intersecting "spirographic" polygons.  We have modified
it to fill the areas bounded by those outlines, with some really
strange results.  The rendering is awful (many missing filled areas
and/or areas filled when they shouldn't be) for all spirographic cases
if I use the EvenOddRule fill rule.  If I use the WindingRule instead,
the results are more acceptable, although there are still some obvious
fill issues for the more complex spirographic boundaries.
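
(For anyone who wants to see the two fill rules side by side without
building PLplot, a minimal Xlib sketch like the following should do.
It is only an illustration of XSetFillRule/XFillPolygon with a simple
self-intersecting polygon, not our example 27 itself, and it builds
with "cc fillrule_test.c -o fillrule_test -lX11".)

/* fillrule_test.c: minimal sketch that fills one self-intersecting polygon
 * (a five-pointed star) twice, once with each X fill rule.  On a correctly
 * behaving server the central pentagon is left hollow under EvenOddRule
 * (left star) and filled under WindingRule (right star). */
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     420, 220, 1, BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, scr));

    /* The outline of a five-pointed star crosses itself, which is the
     * kind of boundary that exposes fill-rule differences. */
    XPoint star[5] = { {100, 10}, {160, 190}, {10, 80}, {190, 80}, {40, 190} };
    XPoint star2[5];
    int i;
    for (i = 0; i < 5; i++) {            /* same star, shifted to the right */
        star2[i].x = star[i].x + 210;
        star2[i].y = star[i].y;
    }

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            XSetFillRule(dpy, gc, EvenOddRule);   /* centre left unfilled */
            XFillPolygon(dpy, win, gc, star, 5, Complex, CoordModeOrigin);
            XSetFillRule(dpy, gc, WindingRule);   /* centre filled */
            XFillPolygon(dpy, win, gc, star2, 5, Complex, CoordModeOrigin);
        } else if (ev.type == KeyPress) {
            break;                       /* any key quits */
        }
    }

    XFreeGC(dpy, gc);
    XCloseDisplay(dpy);
    return 0;
}

The bad results we see show up with the much more complex
self-intersecting spirographic boundaries of example 27, but this
sketch exercises the same fill-rule machinery (XSetFillRule plus
XFillPolygon) that is at issue.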

The first obvious question is where I should report these X stack
fill bugs.  Does the X server handle all fills (so these issues should
be reported/discussed on the X list), or does the driver (or some
other component of the X stack) do the fills, in which case the
discussion should obviously be here, since I am using the Intel X
stack (Debian Squeeze version)?

Alan
______
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state implementation
for stellar interiors (freeeos.sf.net); PLplot scientific plotting software
package (plplot.org); the libLASi project (unifont.org/lasi); the Loads of
Linux Links project (loll.sf.net); and the Linux Brochure Project
(lbproject.sf.net).
__

Linux-powered Science
__


Re: [Intel-gfx] Where should I report fill issues for the X stack?

2011-03-01 Thread Alan W. Irwin

On 2011-02-25 17:38-0800 Alan W. Irwin wrote:


The PLplot development team have just implemented a demanding 2D
rendering test for X by modifying our standard example 27 (see
http://plplot.sourceforge.net/examples.php?demo=27).  The standard
version of that example simply draws the outlines of increasingly
complex, self-intersecting "spirographic" polygons.  We have modified
it to fill the areas bounded by those outlines, with some really
strange results.  The rendering is awful (many missing filled areas
and/or areas filled when they shouldn't be) for all spirographic cases
if I use the EvenOddRule fill rule.  If I use the WindingRule instead,
the results are more acceptable, although there are still some obvious
fill issues for the more complex spirographic boundaries.

The first obvious question is where I should report these X stack
fill bugs.  Does the X server handle all fills (so these issues should
be reported/discussed on the X list), or does the driver (or some
other component of the X stack) do the fills, in which case the
discussion should obviously be here, since I am using the Intel X
stack (Debian Squeeze version)?


Just to follow up, this fill issue appears not to be specific to the
Intel driver, since the fbdev and vesa drivers show it as well.  I
made a Debian bug report
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=615491) and followed
advice from Michel Dänzer there about where to file the report
upstream (see https://bugs.freedesktop.org/show_bug.cgi?id=34877).
That upstream report gives screenshots and a cookbook for replicating
these X fill bugs with self-intersecting boundaries.

Alan
______
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state implementation
for stellar interiors (freeeos.sf.net); PLplot scientific plotting software
package (plplot.org); the libLASi project (unifont.org/lasi); the Loads of
Linux Links project (loll.sf.net); and the Linux Brochure Project
(lbproject.sf.net).
__

Linux-powered Science
__