Re: Graphics cards with Free drivers

2008-04-29 Thread Helge Hafting

Lennart Sorensen wrote:

Well it certainly works with 2.6.24 which is the kernel currently in
unstable.  Until 2.6.25 enters unstable I won't have a clue if the
nvidia driver works with it, nor will I particularly care. :)
  

The nvidia-breaking change was before 2.6.24, so there is hope.
I have to use a recent kernel for reasons other than nvidia.

Maybe you are lucky in not having exactly the same hardware as me, then.
There are many nvidia cards, after all.



That is certainly true.  I have a 6600GT and an 8600GT at home, and a
5200FX and 6200 at work.  So far so good.

  

That could happen - but I don't think so, since all the trouble goes away when
using the vesa X driver. Vesa having usable (although not fantastic) performance
also means I'm probably not using the GPU all that hard when it locks the
machine.



When you enable 3D mode the power consumption of a video card can go up
by quite a lot (close to 100W on high end cards).  So 2D and hence VESA
could work great with a crap power supply, but 3D mode would fail.
  

That argument is fine _if_ I use the 3D stuff heavily - such as some game
with a decent number of fps. But I did ordinary work using X, lightweight
stuff that vesa can handle too.  Nvidia's driver probably uses more GPU
features than vesa, but the amount of work needed for a plain desktop with
no eye candy is minimal. I don't think that does much for power consumption.
No continuous repainting of the screen, just using the GPU a bit to help
out with the occasional rectangle or text string.


Helge Hafting





Re: Graphics cards with Free drivers

2008-04-28 Thread Helge Hafting

Lennart Sorensen wrote:

On Wed, Apr 23, 2008 at 10:23:47AM +0200, Helge Hafting wrote:
  

That is one big problem with nvidia - my laptop needs a more recent
kernel than that in order to have sound and wifi. So
I use 2.6.25 with a wifi patch now, and with luck that patch will
go into 2.6.26 so I can run a standard kernel again. Perhaps
nvidia catches up someday. Anyway, the 3D effects in xlockmore are not
too heavy to run in software.



nvidia is perfectly up to date.  It isn't nvidia's fault if you try to
run a two-year-old driver with a one-month-old kernel.  If you want a new
kernel and a new nvidia driver, run unstable, or backport things you
need yourself (or get someone to do it for you).
  

There is an nvidia driver in unstable now that works with 2.6.25?
Interesting - I can try it when I get time then.  I run mostly
testing, with some unstable stuff now and then.

The worst part of nvidia is the occasional lockup though. A surprise
freeze or two a week is definitely too much - the machine runs stable
without nvidia. :-/



Never seen that myself.  The only thing that has ever locked up X on me
is the stupid flash plugin from adobe.  That is a piece of unstable crap.
  

Maybe you are lucky in not having exactly the same hardware as me, then.
There are many nvidia cards, after all.

I heard about an older version of the driver that didn't freeze up.
It took some effort to find it and downgrade X. It was much better, but
eventually it too hung the machine.



I run the latest 169.12 driver and no lockups on any of my machines so
far (all of which have nvidia cards in them).

Sometimes I wonder if the machines crashing have bad power supplies or
bad ram or something.
  

That could happen - but I don't think so, since all the trouble goes away when
using the vesa X driver. Vesa having usable (although not fantastic) performance
also means I'm probably not using the GPU all that hard when it locks the
machine.



Helge Hafting





Re: apt-* vs aptitude vs synaptic

2008-04-27 Thread Helge Hafting

Lennart Sorensen wrote:

On Wed, Apr 23, 2008 at 10:40:29AM +0200, Helge Hafting wrote:
  

Aptitude may be better - it sure does the job. But it spends a lot of time
at every invocation on building dependency trees and tag databases.



Well yes, aptitude is a huge pig of an object oriented C++ program.

  

It is therefore noticeably slower than apt-get to use for simple tasks
like installing a single uncontroversial package. Whenever I upgrade a single
package, I get a list of all the not-upgraded packages - I didn't ask for that.
And when it needs to remove something (despite the better resolution)
I have to confirm several times instead of just once with apt-get.



I do tend to use apt-get myself, just because I am used to typing that
and am rather impatient.

  

So, better in some ways, but also much clunkier. If it needs a
dependency tree and a tag database almost all the time,
why not keep the information around in a cache? Perhaps an
apt-get or manual dpkg might invalidate the cache, but
the information should at least stay current as long as I use
aptitude exclusively. That'd make aptitude much more pleasant.



They are cached, but also have to be updated whenever the available
package lists change.  Well, at least some stuff is cached.

And if I only change that by doing aptitude update, then no database
building should be necessary at aptitude install time.  But it happens
every time. :-/

Helge Hafting






Re: Graphics cards with Free drivers

2008-04-23 Thread Helge Hafting

Lennart Sorensen wrote:

On Thu, Apr 10, 2008 at 07:54:43AM +0200, Heikki Levanto wrote:
  

Thanks, and sorry for being dense, but when you say 'the 196 driver' (or 1xx
driver, as you say in your howto), what should I be getting? I seem to be
unable to find any debian packages with that number in it.

You have a nice table that explains how to substitute
'nvidia-kernel-legacy-71xx-source' for 'nvidia-kernel-source' for an old
card, but I can't make anything similar to work no matter where I plug in the
magical 169. Obviously I am doing something wrong here...



169 is the current one.  No substitution required.  Actually it is current
on unstable (and probably testing).  Stable was released a long time ago
when 8776 was current.  This is why Etch can't support new cards, the
driver is simply too old.

  
The plain 
  sudo m-a a-i -i -t -f nvidia-kernel-source

gets to the compile errors I reported earlier.



Are you running Etch, Lenny or Sid?  Are you running the Debian kernel
that came with it?

If you install a 2.6.24 kernel on Etch, then you won't be able to
compile the nvidia driver, since the one in Etch doesn't work with
kernels much newer than the 2.6.18 that Etch uses.
  

That is one big problem with nvidia - my laptop needs a more recent
kernel than that in order to have sound and wifi. So
I use 2.6.25 with a wifi patch now, and with luck that patch will
go into 2.6.26 so I can run a standard kernel again. Perhaps
nvidia catches up someday. Anyway, the 3D effects in xlockmore are not
too heavy to run in software.

The worst part of nvidia is the occasional lockup though. A surprise
freeze or two a week is definitely too much - the machine runs stable
without nvidia. :-/

I heard about an older version of the driver that didn't freeze up.
It took some effort to find it and downgrade X. It was much better, but
eventually it too hung the machine.

Helge Hafting





Re: apt-* vs aptitude vs synaptic

2008-04-23 Thread Helge Hafting

Lennart Sorensen wrote:


I believe the officially recommended tool in Debian is aptitude as of
the Etch release.  It simply does dependency resolution better than the
other tools.  I personally tend to mostly use apt-get still, mostly out
of habit.  Of course any apt-get command can be issued with aptitude the
same way, except you get the more advanced dependency resolution.
aptitude will offer possible solutions to conflicts and let you pick an
option, while apt-get simply makes a decision and asks if you want to
proceed.
  

Aptitude may be better - it sure does the job. But it spends a lot of time
at every invocation on building dependency trees and tag databases.

It is therefore noticeably slower than apt-get to use for simple tasks
like installing a single uncontroversial package. Whenever I upgrade a single
package, I get a list of all the not-upgraded packages - I didn't ask for that.
And when it needs to remove something (despite the better resolution)
I have to confirm several times instead of just once with apt-get.

So, better in some ways, but also much clunkier. If it needs a
dependency tree and a tag database almost all the time,
why not keep the information around in a cache? Perhaps an
apt-get or manual dpkg might invalidate the cache, but
the information should at least stay current as long as I use
aptitude exclusively. That'd make aptitude much more pleasant.

Helge Hafting









Re: initng

2008-02-25 Thread Helge Hafting

Alex Samad wrote:

As for your udev/ldap problems - please tell what these problems are
if you want help with them.

yeah I think in initramfs or really early on, udev starts up and tries to mount 
??? or get access to the passwd DB, but ldap hasn't started.
  

Turn on all sorts of debugging (in PAM/NSS) and find out
exactly what names/UIDs the PC tries to look up.
Make sure those exist in /etc/passwd. After that, the system shouldn't
need to consult ldap at all during bootup - not until ordinary
users (presumably found in ldap) start logging in.
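
For example, with ldap stopped you can check whether a given name or UID
still resolves locally (getent goes through the normal NSS lookup):

  getent passwd root
  getent passwd 0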

Helge Hafting





Re: initng

2008-02-12 Thread Helge Hafting

Alex Samad wrote:

Hi

I was just reading an article on slashdot about Ubuntu and sysvinit. In it, it
mentioned replacement inits, one being initng (which it called prominent).


I was wondering if people on the list were using it.  Will it solve my
udev/ldap (libnss-ldap) start-up problems? Is it worth the effort to change?
  

Initng isn't about solving problems - it is about starting up faster.
sysvinit does one job at a time, and starts one service at a time.

Initng does such sequencing only when one operation depends on another
(i.e. you must mount the filesystems before starting services, and so on).

Things that don't depend on each other all start in parallel. That is much
faster, because:
* Multiple processors or dual cores are used, if you have them.
* Starting up something normally involves several disk accesses where
  the process just waits. This waiting time can now be used by other
  processes starting - even on a single-core machine.

You can boot up in 30s or so - more if your bios wastes lots of time
before loading the kernel. Even less time if you don't use X.

As for your udev/ldap problems - please tell what these problems are
if you want help with them.

I have had one recurring problem with libnss-ldap - various processes
look up user id 0 (name: root) before ldap is started. That fails.
Possibly the same problem exists with other daemons that must run before
ldap and under usernames of their own.
The simple fix for this is to set up nsswitch.conf to try the old
passwd file first, and then ldap.  You can then have a small /etc/passwd
file that has root and a few other system accounts.

Initial lookup of every name in a *small* /etc/passwd file will not be a 
performance problem.
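
For example, the relevant /etc/nsswitch.conf lines would look something
like this (a minimal sketch - adjust to whatever other sources you use):

  passwd: files ldap
  group:  files ldap
  shadow: files ldap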



Helge Hafting





Re: reinstalling Debian - part I

2008-02-06 Thread Helge Hafting
 based locale then, possibly English if you prefer that.
Also make sure the console is set up for UTF-8.  You will want
a framebuffer console, not vga, as vga is limited to 256 or 512
distinct symbols and you might need more.

In X, use a terminal emulator that supports unicode well.
I think several of them do - uxterm should be a safe bet.

How can i guarantee a default Unicode system? Which brings us to the
next question.
  

Install locales, set it up for unicode locales and no others.


Fonts. While fiddling with the default X meta-package (oh :(, i'd
forgotten about that) i ran into 3 different locations for fonts.
Apparently Xfs is deprecated. I want my fonts to be central and
unicode, available to all programs, at least. I don't want fonts that
are not unicode - any tweaks?
  

Install only unicode fonts then. I don't know which ones those would be.


Short of compiling it how can i assure that my X server will be
adapted to my hardware? It often installs drivers for a bunch of cards
unnecessarily, for instance. And this motherboard has an onborad
nVidia chip which i'd like to use to the max (and how could i test
that?). Also i know this monitor (Samtron 55E) supports more than
800x600 resolutions, but i can't really know if it's using something
above that. Also there doesn't seem to be a standard as far as icons
(and their size/behaviour) go...
  

Install the correct driver for your hardware then.
lspci tells you what you have. For nvidia, use the proprietary
driver to use it to the max or the nv driver if you
want to stay open-source.
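
For example, to see which graphics chip you actually have:

  lspci | grep -i vga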

The installation: i want to be sure i'll only instal the most basic
packages, the minimal system. I could use the netinst CD i used last
time (May) but it would be interesting to use a USB pen-drive. I have
a 2GB Kingston, i assume that's feasible.
  

Sure, you can install a minimal system on a 2GB drive.
I once had Debian on a 240MB harddisk...

Helge Hafting





Re: How to wifi with ipw3945 on Dell Vostro ???

2008-01-17 Thread Helge Hafting

Kelly Anderson wrote:


Thought I'd throw in a suggestion that you look into the iwlwifi 
driver.  Intel has moved on to the next best thing.  The iwl driver 
doesn't require the stupid daemon (a big step).  And my initial 
impression is that it will probably support WEP/WPA more effectively.  
I haven't used WEP/WPA with it yet but it's on my agenda.  I haven't 
had any issues since I switched from ipw to iwlwifi.
Thanks - this tip made it possible to upgrade to 2.6.23 for me. I didn't
know they changed the driver. And it works nicely with WPA too, something
the old driver didn't.

There is another problem though.
My homemade script uses iwlist scan in order to see
where the machine is (at home, at work, at a friend's or family house)
and then selects the appropriate essid, key and other settings.

This part used to work well - now I always have to try bringing the
network up 2-4 times before it actually works. If I run it manually, I see
that iwlist comes up empty many times before it suddenly sees
the available networks and access points.

Setting a longer timeout between bringing up the driver and running
iwlist didn't seem to help. The old driver needed 2s. I tried 4s, but still
have to try many times before iwlist will see anything. Is there a trick
to make this work? Running iwlist in a loop is not what I want; that is
error-prone and wastes CPU. I use the machine in places with no network too.
I want to detect the available networks in minimum time and with only
one attempt.
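
For reference, what the script does is roughly this (a minimal sketch -
the interface name and ESSIDs are just examples):

  iwlist eth1 scan > /tmp/scan.$$
  if grep -q 'ESSID:"homenet"' /tmp/scan.$$; then
      iwconfig eth1 essid homenet key 0123456789
  elif grep -q 'ESSID:"worknet"' /tmp/scan.$$; then
      iwconfig eth1 essid worknet key off
  fi
  rm -f /tmp/scan.$$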


Helge Hafting










Re: status of native java JRE ?

2008-01-07 Thread Helge Hafting

Michael wrote:
Hey, 


What's the status for native-amd64 java support for mozilla browsers in debian 
unstable today ?

AFAIKS there's sun java5/6 jre and gij 4.2 but they have no browser plugin.
  

You can use konqueror, a 64-bit web browser that is able to use
Sun java without a mozilla-style plugin. Many sites, especially
java compliance test sites, work fine with this, as do many java games.

Some sites do not work - stupid web designers sometimes assume
that java is running _as a plugin_, just as they sometimes
assume that you use IE.


Helge Hafting





Re: How to wifi with ipw3945 on Dell Vostro ???

2008-01-04 Thread Helge Hafting

Joost Kraaijeveld wrote:

Hi,

I am running Debian Lenny on a Dell Vostro laptop which has a ipw3945
wifi card.

The card is recognised by the software and it even works somehow  (I can
see all the wifi networks in my building using wifi-radar). But whatever
I do I cannot get a  (DHCP) ip address from my Zyxel AP which is using a
WEP key (and it did when I still used Windows Vista so I know for a fact
that it is possible). 
  

You need:
* A driver module. If you use the 2.6.22-3 kernel from debian testing,
  install the ipw3945-modules-2.6.22-3-amd64 package.
  Then make sure that /etc/modules contains a line with ipw3945.
  You probably have this already, or you wouldn't be able to use the
  card at all.

* The package ipw3945d. Install it and make sure the daemon is
  running, or the card won't work properly. Without this, the card
  will seem ok but anything you do will fail silently and mysteriously.


After that, set it up to associate with your access point.
For a quick test, use iwconfig directly. For a permanent setup, put
something like this in /etc/network/interfaces:

iface eth2 inet dhcp
pre-up iwconfig eth2 essid YOURSSID key YOURKEY

(Assuming the card is eth2. Use key off if there is no
encryption. If there is WPA encryption, get additional
software for supporting WPA. Get the card working
on an open or WEP-encrypted net first, to rule out driver problems.
WPA is trickier to set up than WEP.)

Hexadecimal keys are easiest to deal with, as there are
two incompatible ways of specifying the key as a text string.
The driver uses one way, some access points use the other way.
Hex is more typing but works every time.

Also make sure you have a package with DHCP client software,
for example dhcp3-client.
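
A quick manual test could then look something like this (eth2 is just an
example name; use whatever your card shows up as):

  modprobe ipw3945                           # if not already loaded
  iwconfig eth2 essid YOURSSID key YOURKEY
  iwconfig eth2        # check that it now shows an access point
  ifup eth2            # get an address using the stanza above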

Helge Hafting








Re: Opinion question (Core2 Duo)

2007-09-18 Thread Helge Hafting

Zaq Rizer wrote:
I have an Intel Core2 Duo arriving in the mail in a couple of days, 
and I read online that these processors can run in either 32bit or 
64bit mode (just like Athlons can).


Thing is, the 32bit chroot and ia32-compatibility libraries, are, imo, 
a total mess and a real pain in the rear to deal with on a daily basis.

But usually, there is no need to deal with it. Almost all software is
available 64-bit.
The main problems seem to be some netscape plugins and wine.
(Java works 64-bit in konqueror, adobe acrobat has good 64-bit
alternatives, so I don't count those.)

I do run a 32-bit program in 32-bit wine - but that was a
"set-up hassle once, then it just works forever" case.


I'm looking for people's opinions on whether I should stick with 
debian-amd64, or do a reinstall of debian from the main branch (32bit)?


What, truly, are the real performance differences?  Simply support for 
4+G of ram, or something else?

The processor also uses 16 registers in 64-bit mode, as opposed to only
8 in 32-bit mode. (Working with registers is _much_ faster than
accessing memory, but as you see, there aren't many of them.)

If the inner loop of some time-consuming operation needs more than
8 registers but fewer than 16, then you get a nice noticeable speedup
from using 64-bit mode.

Of course, any usage that actually does integer math on quantities
bigger than 32 bits also speeds up noticeably. Graphics operations
not done by the graphics adapter itself will also be in this category.

My experimental sudoku solving program is 3x faster on
a 1.8GHz 64-bit opteron than on a 2.4GHz 32-bit pentium. In this case,
a slow 64-bit processor beats a faster 32-bit processor by 3x.


Helge Hafting





Re: confused about performance

2007-06-20 Thread Helge Hafting

Sebastian Kuzminsky wrote:

Hi folks, I just bought a pair of AMD64 systems for a work project,
and I'm confused about the performance I'm getting from them.  Both are
identically configured Dell Dimension C521 systems, with Athlon 64 X2
3800+ CPUs and 1 GB RAM.

On one I installed using the Etch (4.0r0) i386 netinst CD, then upgraded
to Lenny.  This one's running linux-image-2.6.21-1-686.

On the other I installed using the current (as of 2007-06-13) Lenny d-i
amd64 snapshot netinst CD.  This one's running linux-image-2.6.21-1-amd64.

The one with the x86 userspace and 686 kernel is faster than the one
with x86_64 userspace and amd64 kernel.  The difference is consistently
a few percent in favor of x86 over x86_64.

My only benchmark is compiling our internal source tree (mostly running
gcc, some g++, flex, bison, etc).  We're using gcc-4.1 and g++-4.1.
I've tried it with a cold disk cache and hot disk cache, in both cases
x86 is faster than x86_64.

I was expecting a win for 64 bit.  What's going on here?
  

64-bit has both advantages and disadvantages; for each program it
all depends on how they balance out. Test many different
cpu-intensive programs - one benchmark alone won't tell you much:

Disadvantages:
* 64-bit code uses some more memory. More memory accesses
 take a little more time. In a borderline case, using more memory
 might cause more swapping, which is very noticeable.
* Quality differences in the compilers for 32-bit and 64-bit. This will
  likely improve a lot, given that we're seeing more and more 64-bit
  machines, and many of the 32-bit specific optimizations are already done.


Advantages:
* Faster floating point.
* 64-bit code lets a program use more than about 3GB trivially.
  Such software simply can't run 32-bit.
* 16 registers instead of 8.  For some programs this won't matter for
  timing; for other cases it means a many-fold speedup, as some important
  inner loop doesn't need to access memory at all, just those 16 registers.
  (Or smaller improvements when the loop accesses less memory thanks
  to more variables being held in registers.)
* Much faster computations on 64-bit datatypes, such as the
  long long type in C. Again, it depends on whether the source code
  specifies 64-bit types (or the compiler manages to do this as an
  optimization). I wrote a sudoku solver that mainly uses 64-bit
  and some 128-bit datatypes. It is, not surprisingly, several times
  faster 64-bit than 32-bit. :-)

Helge Hafting






Re: current chroot howto location?

2007-04-25 Thread Helge Hafting

Lubos Vrbka wrote:

Max Alekseyev wrote:

Try  the link from the Links section at the bottom of
http://www.debian.org/ports/amd64/

hi,
regarding this topic - does anyone know any howto showing how to set 
up schroot to automatically bind mount specified directories (/home, 
/dev, /tmp, /proc) when entering the chroot and then automatically 
unmount them when the chroot is left? i want to get rid of the 
permanent stuff in my fstab...


this possibility is mentioned in several docs, but always with the 
notice 'consult the man page'. unfortunately, the schroot man page 
(and related stuff) doesn't seem to provide me with the necessary 
information :(

Try the manpage for schroot.conf instead.  Look at run-setup-scripts in
particular.  You can have schroot run scripts for you, and these
scripts  may have the mount commands you need to mount /proc etc.
inside the chroot.
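
For example, an entry in schroot.conf could look roughly like this (a
sketch only - the chroot name, path and exact key names depend on your
setup and schroot version):

  [ia32]
  description=32-bit chroot
  type=directory
  location=/var/chroot/ia32
  run-setup-scripts=true
  run-exec-scripts=true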

Helge Hafting





Re: ETA amd-64 Java And Flash

2007-04-11 Thread Helge Hafting

Karl Schmidt wrote:

Jérôme Warnier wrote:


Well, some people would argue that Flash is hardly used for anything
else than stupid games and advertisement while Java is not.


Wish that was true in our experience - we have to deal with many sites
that are stupidly flash-only!




But we tend to see everyday less Java applets (while Java is really used
on webapps containers, but this surely works for ages on AMD64).


If you buy things off the web - you will find that about 1/3 of the 
shopping carts require Java ...  Perhaps 1 in 10 require Flash - but 
what if it is the only place that has what you need?

It is an incredibly stupid webshop that doesn't have an html fallback.
Why lose 1-5% of the customers for no reason at all?
Even those with no java problems turn it off for speed/security sometimes.

If I have the slightest trouble with a webshop, I find another.
In the rare cases where nobody else has what I want, I place the
order by sending email instead.  Email usually works, and one
can add a note about how their java requirement locks
some customers out too. Sometimes that makes them think.

Helge Hafting





Re: cpu overheating

2007-04-11 Thread Helge Hafting

Constantine Kousoulos wrote:

Hello,

When i do a cold start, the cpu fan barely spins. The result is that 
in a few minutes the cpu overheats and the system automatically shuts 
down. When i immediately turn it on again, the system detects the 
cpu's high temperature and begins spinning at higher speeds for the 
rest of the time. In both cases, the cpu fan never increases or
decreases its speed, it just continues spinning at a constant speed
since the system's startup.


I have a debian-amd64 notebook, with a turion64 processor. The above 
mentioned behavior remains no matter what kernel i use.


Any solutions to that?

Thanks,
Constantine
To me, this looks like the bios sets the fan speed to what is right at
the moment, expecting linux to take over fan control.  And then linux
does nothing.
Do you have the acpid package installed? It works for me,
turning the fan on and off now and then.
The powersaved package might also be a good idea.

Helge Hafting





Re: how to get all options from dpkg-reconfigure

2006-09-23 Thread Helge Hafting

Gabor Gombas wrote:

On Wed, Sep 20, 2006 at 10:21:48AM -0400, Lennart Sorensen wrote:

  

Try dpkg-reconfigure -p low xserver-xorg

Perhaps some of the questions are at a different priority than they used
to be.  Not sure.



Looking at the postinst script, those questions are only asked when the
package is installed for the first time or when upgrading from a really
old version. Otherwise it's up to the admin to edit xorg.conf.

An apt-get install --reinstall xserver-xorg _may_ work, I did not
test.
  

You can always make it behave like a first-time install, like this:
1. back up xorg.conf just to be safe
2. dpkg --purge xserver-xorg --force-depends
3. apt-get install xserver-xorg

The force-depends is so you won't have to remove every package
that depends on xserver-xorg.  That would just take a lot of extra time.

Helge Hafting





Re: USB rescue/boot disk

2006-09-12 Thread Helge Hafting

Joost Kraaijeveld wrote:

Hi ,

I want a bootable USB stick that will boot any machine that allows me to
boot from USB: a Debian Live USB (and not CD). I have found a howto on
the internet (http://feraga.com/) but that one does not seem to work for
me.

Is it actually possible to create an USB rescue/boot disk that contains
a Debian Etch AMD64  or i386 based installation? Is there an image
available somewhere (as the Debian Live Project does not have such an
image (yet?) )?
  

Just about any live/rescue CD/diskette with usb support should
do the trick, I think. Knoppix, for example?
Of course some USB sticks are smaller than a
full CD, in which case you go for one of the smaller rescue/live CDs.
DSL is 50MB, for example.

Helge Hafting





Re: [POLL] To continue 64 or not?

2006-09-08 Thread Helge Hafting

Andrew Robinson wrote:

Well I made the rough decision last night and switched back to 32b.
After reading some benchmarks it didn't look like 64b was going to
benefit me much. This is a home desktop computer so the 32b will fit
me fine. Just a shame to give up the extra functionality.

Perhaps in a few years I may try 64b again when more libraries and
software bundles are packaged as 64b. Until then I thought that it
would be nice if things just worked. I maintain a couple of
slackware boxes, and it would be nice to finally have one box that is
extremely low maintenance.

32-bit may indeed be the way to go for you - for now.
I still recommend using a 64-bit kernel, while having everything else
32-bit.

First, you get a small speedup of the kernel itself.  This is hardly
noticeable, as the PC shouldn't spend much time in the kernel anyway.

Much more important is that the 64-bit kernel can hand out more
memory to the processes than a 32-bit kernel can, because it hides
itself outside the 32-bit memory range that the applications live in.

This could make a difference if you have 2GB or more of memory;
the difference between swapping and _not_ swapping can
be felt sometimes.  Of course, only if you have a process
that needs so much memory.
Helge Hafting





Re: future of ATX?

2006-09-08 Thread Helge Hafting

[EMAIL PROTECTED] wrote:

Slightly off topic but I'm soon going to be making a new computer (my
first in 12 years) and will be using AMD with Debian.

The standard board/power/box form-factor has been ATX for a while.  Does
anyone see anything else on the horizon?  I plan to buy a good case and
power supply with excellent cooling for long life.  If there's a new
standard soon to replace ATX then I'll wait and get that;  if not, I'll
stick with ATX.
  

There is mini-itx, and I believe there is a board that
takes an amd processor.  Of course it is not going
to replace ATX, no way.  Nice if you
want something really compact though, and don't need
multiple pci slots...

Helge Hafting





Re: [POLL] To continue 64 or not?

2006-09-01 Thread Helge Hafting

Andrew Robinson wrote:

Looking for some constructive feedback on other people's experiences
with amd64 as a workstation. I have been only using it a few weeks,
and the setup is a pain due to the amount of 32 bit only programs. I
am having some issues with:

Eclipse + ExadelStudio plugin (plugin is 32-bit only)
Firefox + Flash (Flash is 32-bit only)
OpenOffice
Cedega
Wine

Unfortunately, this makes up a decent percentage of what I run daily
(especially eclipse). I am the type to sacrifice pain for new
functionality, but this one has got me thinking, is it worth it?

I run 64-bit, but with less 32-bit software.
I have a chroot so I can run my digital camera raw converter in wine.
The chroot is painless to maintain - it is actually the remains of
my old 32-bit installation from before I got the opteron.
So I run apt-get update ; apt-get dist-upgrade in there occasionally.
I have deleted lots of packages that are not needed anymore there.

I don't have cedega or eclipse.  I don't need flash - there are
some fun flash games on the net, but I can do without those.
Debian has plenty of little games anyway.


I have no need for openoffice. Gnumeric starts much faster
for spreadsheets, lyx is superior for writing anyway, is faster,
and can also be used for presentations.  Well, perhaps it isn't
the best tool for presentations, but I don't do many of those anyway.


As a server I would think it would be great but as a workstation it
seems to be a pain. The chroot jail works, but having to maintain the
apt libraries (I am using etch, so they change fairly often), it more
of a pain.

Maintaining a chroot should be no harder than maintaining
an extra pc?  Sure, you have to run the apt-get commands twice,
once in 64-bit, once in 32-bit.  This just works, assuming you
have a separate /var for the 32-bit software.
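
For example, the 32-bit half of the routine can be as simple as this
(the chroot path is just an example):

  chroot /var/chroot/sid32 apt-get update
  chroot /var/chroot/sid32 apt-get dist-upgrade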


For those that have the knowledge, I know that the 64 bit architecture
is faster due to addressing and such, but is it that noticeable? Also,
I have read that the 32-bit kernel still supports the dual-core nature of
the amd64 even though running in 32-bit mode (is this true? my source
wasn't 100% reliable).

64-bit is nice and fast for cpu-intensive stuff.  Waiting for the
disk while loading openoffice won't be faster with 64-bits though.

Compiling is very fast - my 1.8 GHz opteron is so much
faster than the 2.4GHz pentium-m at work that it is very noticeable.
The same goes for typesetting long documents with latex, which
is what lyx does whenever I print.


Just looking for some constructive feedback on other's opinions.

Well, you may want to go 32-bit if that is much less pain.  But
then you'll have to reinstall?  A correctly set-up chroot
is no more work to maintain than an extra pc.  Actually less,
as the chroot will only have a handful of programs and their
support libraries installed.


If your chroot is much more cumbersome, please describe what
you have to do, and I'll see if something can be done to make
it easier.

Helge Hafting





Re: OpenOffice in chroot -- fonts?

2006-08-30 Thread Helge Hafting

Andrew Robinson wrote:

Tried using xfs. I have more fonts now in open office, and the Andale
Mono is now available, but that didn't help

What is interesting, is that the fonts are correct in the actual
content (in oocalc, the fonts in the cells are fine). It is only the
windowing fonts (menu bar, dialog text, etc.).

I'm wondering if it could be a gnome or kde config issue inside the 
chroot.


Where do the font settings come from in the chroot (KDE, gnome, xfce)?
Outside of the chroot I am running XFCE with KDE support enabled.

Well, you ran the software at a time when few fonts were available,
so perhaps it defaulted to hopeless but available fonts.
It doesn't change automatically now just because you made better fonts
available. The document look changed because the app tries to load the
correct document fonts each time you open a document - and now
the fonts are there.

You will probably have to change the font settings for the app(s)
in question - I don't use gnome/kde so I don't know how.


Helge Hafting





Re: OpenOffice in chroot -- fonts?

2006-08-21 Thread Helge Hafting

Andrew Robinson wrote:

I managed to get openoffice to install fine in a ia32 chroot
environment (from pages like [1]). It runs okay, but everything is
extremely large. The fonts (menu bars, dialogs, etc.) are about 18pt
font or so (buttons in the dialogs are huge).

I'm not sure if this is a dpi, font or resolution problem. Would
appreciate some advice. I'm starting to debate ditching amd64 for i386
as much as I'd hate to, to avoid all these problems (mplayer codecs,
flash in firefox, openoffice, wine, etc.).


Use a font server like xfs.  That way, the same fonts will be
available to 32-bit and 64-bit software.
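
A sketch of the xfs route (the X server is the 64-bit one, so only its
config needs the extra FontPath entry):

  apt-get install xfs
  # and in the Files section of /etc/X11/xorg.conf:
  #   FontPath "unix/:7100"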

Alternatively, install your font collection in the chroot too.

Helge Hafting





Re: Commercial programs in Debian

2006-05-10 Thread Helge Hafting

jmt wrote:


  Sorry for the disturbance, but I would like to mention some things.
I have been thinking about if it was possible to set up some bug list as a
kind of quality assurance for commercial programs in Debian. Most
commercial programs I have seen are only said to be compatible with RedHat
and sometimes SuSe. 
   




Mind that most commercial programs, targeted for RedHat or SuSe, are generally 
i386 ONLY !


Second point : commercial program editors want to rely on some kind of system 
certification ; even if what RedHat or SuSe provide is far from satisfaction, 
it can take place in a business process, as a mention to good practice.


Third : what made Apple's fortune : a very narrow hardware selection ! If a 
commercial program had to rely on 
 


Well, amd64 is a much narrower platform than i386, which has
various enhancements for i586, i686, as well as lots of little issues
for special i386-compatible processors.

If you want to run on all of i386, then you can't take advantage of
i586, i686.  If you go for max performance (i686), then it won't run
on i586, which is still popular.  amd64 does not yet have such
problems.

Helge Hafting





Re: Commercial programs in Debian

2006-05-10 Thread Helge Hafting

Gudjon I. Gudjonsson wrote:


Hi again
  Thank you Goswin and Alexander for nice ideas. I will do something into 
these directions.
  About the idea below. Debian and more or less Linux has now been banned 
from my institution even if I have been able to solve a lot of peoples 
problems with it. 
  Looking at a guy copying plots directly from some commercial program into 
Word on a Windows computer, 10 to 100 times faster than I can do with gnuplot 
makes me wonder if I am on the right track. The programs that I have 
mentioned need to work on Debian and they need to work better with open 
source programs if I will be able to continue use Debian or even Linux for 
the desktop applications. I could switch to Windows, get a perfect GUI and 
run the calculations on a Linux backend as most people do. It might save me 
time.
 

I don't know gnuplot - perhaps that one is particularly tricky.
Stuffing graphics into a wordprocessor document on linux
tends to be easy enough though.

Now, microsoft is good at making things seem user-friendly,
but the guy above has at least one problem.  He's stuck
with mediocre word-quality documents.  The open source
world does much better than that with latex, and
you may use lyx as a frontend to latex to get the user-friendly part.

Helge Hafting





Re: Nvidia driver bypasses xorg.conf settings

2006-02-17 Thread Helge Hafting

A J Stiles wrote:


On Thursday 16 Feb 2006 19:13, Lennart Sorensen wrote:
 

If 60Hz is the preferred rate, then why is the monitor telling the computer 
that it should run at 75Hz?  Surely *that* is the question.  {Of course, it 
could all just be due to a badly-shielded cable ... that wouldn't really
be your fault if it were hard-wired at the monitor end, though.}
 


As far as I know, the communication protocol doesn't support
a preferred rate.  All it says is "this screen supports from 60 to 75 Hz",
as well as some other information.  In the CRT days, the highest
supported refresh rate was what you wanted, so many a driver still
tries for that.  With an LCD you may want a lower rate, like in this case,
but drivers don't do that automatically.

Helge Hafting





Re: Sarge NR_CPUS limit of 1 reached. Processor ignored.

2006-02-09 Thread Helge Hafting

Siju George wrote:


but I faced a problem removing mysql-server. Details below.
How can you remove all installed packages with their configs ( Purge)
and get back the original base system???

I cannot do a re-install cause the server is not near by :-(

Thankyou so much :-)

kind regards

Siju

   



Sorry, Forgot to give the details in previous mail :-(

--
# dpkg -P mysql-server-5.0
(Reading database ... 30784 files and directories currently installed.)
Removing mysql-server-5.0 ...
Stopping MySQL database server: mysqld.
Purging configuration files for mysql-server-5.0 ...
rm: cannot remove directory `/var/lib/mysql': Device or resource busy
 


Is /var/lib/mysql the working directory (of your shell,
or some other process) when you try?  Make sure
that is not the case. cd away from it.

Is this a mountpoint? (Unlikely but possible)
Is mysql running?  A broken remove script may
fail to stop it properly first - if so stop/kill mysql yourself
before attempting removal.
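
A quick way to see what is keeping the directory busy (fuser is in the
psmisc package):

  fuser -v /var/lib/mysql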

Is there something else in the directory?
Have a look.  If you don't find anything worth having, what
happens if you do rm -r /var/lib/mysql/* as root?

If weird things happen, go single-user, umount /var, and
use fsck.  If /var is part of the root fs, run that fsck from
a cd-boot or boot into single-user, mount read-only, run
fsck, then boot.

Try removing the package again after emptying the directory
manually.  You may also want to try

dpkg --force-all -P mysql-server-5.0

But take the warnings seriously if you do so.


dpkg: error processing mysql-server-5.0 (--purge):
subprocess post-removal script returned error exit status 1
Errors were encountered while processing:
mysql-server-5.0
oss40:/var/cache/apt/archives# man dpkg
Reformatting dpkg(8), please wait...
oss40:/var/cache/apt/archives# apt-get install mysql-server-5.0
Reading Package Lists... Done
Building Dependency Tree... Done
mysql-server-5.0 is already the newest version.
 


If you want to simply reinstall the package (someone deleted
an important file?) then do:

apt-get install --reinstall mysql-server-5.0

This should overwrite all existing files. Another option
is to use: dpkg -i package-file.deb
After an apt-get run, you'll find the .deb file in
/var/cache/apt/archives


0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
oss40:/var/cache/apt/archives# apt-get remove --purge mysql-server-5.0
Reading Package Lists... Done
Building Dependency Tree... Done
The following packages will be REMOVED:
 mysql-server-5.0*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
Need to get 0B of archives.
After unpacking 40.9MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 30626 files and directories currently installed.)
Removing mysql-server-5.0 ...
Purging configuration files for mysql-server-5.0 ...
rm: cannot remove directory `/var/lib/mysql': Device or resource busy
dpkg: error processing mysql-server-5.0 (--purge):
subprocess post-removal script returned error exit status 1
Errors were encountered while processing:
mysql-server-5.0
E: Sub-process /usr/bin/dpkg returned an error code (1)
 


As a last resort, consider using
dpkg -L mysql-server-5.0
to see what files this package consists of, then remove them manually.
Or pipe the output of the above command into some form of | xargs rm
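
For example, something along these lines (a sketch - it removes plain
files only and leaves the directories alone):

  dpkg -L mysql-server-5.0 | while read f; do [ -f "$f" ] && rm -- "$f"; done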

Consider using reportbug to report the problems with this package.

Helge Hafting





Re: How big will the 32-bit chroot end up being? What goes in these days?

2006-01-27 Thread Helge Hafting

Thomas Steffen wrote:


On 1/23/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 


So, as I understand it, the following stuff would need to go into the
32-bit chroot (assuming one wants/needs these things):

- Sun's J2RE

- OpenOffice

- Flash

- RealPlayer/Helix/whatever

- win32codecs + other misc A/V codecs one might scrounge up elsewhere

- Any web browser that you want to be able to use Java/Flash/embedded
AV stuff in

- the Acrobat Reader

- cdrecord/cdrdao plus whatever front end you're using to call them

Is that correct?  Anything I'm missing?
   



Don't forget all the dependencies that are pulled in by these apps.
You might end up with nearly a typical Linux installation.
 


Not quite that bad.  There is no need for the xserver, for example,
as the 32-bit processes have no problem talking to the 64-bit server.
And of course the chroot doesn't need any utilities, bootup scripts,
window managers, printing subsystem, login software...

Most of the dependencies are libraries.  You could end up with a
sizable chunk of those, though.

Also consider 64-bit equivalents.  CD burning can be done in 64-bit.
Frontend software can always be 64-bit even if it controls a 32-bit
program doing the work. (Browser+plugin is different, as the plugin
isn't a freestanding program.  It links into the browser.)

Adobe acrobat has a 64-bit alternative in xpdf.  Xpdf sure looks
different - maybe it isn't a perfect replacement - but it is fine
for reading and printing pdf documents.

32-bit java is also something you may be able to do without.
64-bit java is fine for all non-browser use - but of course there is
no plugin for mozilla.  For java in a 64-bit webbrowser, use
64-bit konqueror and 64-bit java 1.5.0-4 from Sun.  It passes the
test at http://www.java.com/en/download/help/testvm.xml,
and works with java games at www.darkfish.com.  I have not
been able to use it for internet banks that use java, but only a
stupid bank forces the customers like that - there are other banks!

Some people need openoffice, but there are certainly good alternatives.
I use lyx for writing (better typography and much faster),
gnumeric for spreadsheets (much faster, and just as excel-compatible),
and abiword (much faster) when I need to exchange
something word-compatible with others.

Helge Hafting





Re: SATA, RAID, A8N-E, 3800+ help

2006-01-27 Thread Helge Hafting

Keith Ballantyne wrote:


Hi,

I recently bought an Asus A8N-E mobo and AMD64 3800+ CPU.  I then 
bought RAM, a case, and an ATI 300-based PCI Express 16 video card.  I 
used netinstall sarge and managed to install 32-bit using the 2.6 
kernel (as the 2.4 kernel wouldn't recognize my network card).


...a few days passed, and I realized that I wanted the 64-bit core.  I 
recompiled the kernel, and then realized there was a 64-bit 
distribution, so I downloaded both the testing release (Etch) and the 
31r1a netinstalls. 31r1a didn't recognize my onboard network, but Etch 
did.


...a few more days passed and I decided to buy 4 SATA II drives in an 
attempt to run RAID 0+1.  I configured the BIOS and ran the installer, 
but the installer sees hda through hdd rather than a single RAID 
drive.  In subsequent research (including this list archive) it 
appears that the BIOS RAID is considered 'inferior' to the software 
RAID support in Linux.  So, my questions are:


   1) Why is the ASUS BIOS RAID inferior to software RAID on Linux?


Not necessarily inferior, but _unnecessary_.

As others have pointed out, this is not a real raid controller. The bios
does software raid, and the windows driver for the card does software
raid for windows (if you use windows at all, that is).

The fact that you _need_ a raid driver in windows is a strong hint that the
raid is implemented in software - a hw raid controller can be made to look
like a single disk on a standard controller - both linux and windows can
then handle it without special drivers.

So, in order to use a raid with this controller you need a software driver
for it.  And one exists - the linux software raid (md).  It offers raid 0+1,
among other things.  You can get more space from your 4 disks with raid-5,
but raid 0+1 will probably have better performance if you get the stripe
size right.


Note that bios raid and linux sw raid aren't compatible, so you have to
turn bios raid off in order to use linux sw raid.  There is no loss
involved in doing this though.
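
Creating the array with mdadm could look roughly like this (a sketch -
the device names are just examples, and md's raid10 level gives a
striped+mirrored layout comparable to the 0+1 setup discussed above):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1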

   2) Is it possible to install 31r1a instead of the Etch release (I'm 
not overly keen on working with Etch, but will do it if it's a better 
64-bit option...In the few days I played with 64-bit Etch it seemed to 
work well, but I didn't have most of the utilities I need available 
from the installer/package manager).

   3) Do I need to flash the BIOS for things to work?


No - linux doesn't use the bios.  The bios is only used to set up some
hardware at power-on time, and to load the linux kernel, which then takes
over everything with its own drivers.  Flashing the bios is sometimes
useful if the old bios has bugs.  Or if you want to do something radical
like running linuxbios.



   4) Is ATI 300 support better in Etch?


I haven't tried, but I have the impression that unstable/experimental
have more and better support for 3D.  But of course there are more
bugs to stumble over as well. You may want to try xserver-xorg
6.9.0.dfsg.1-4.

(Also upgrade all the accompanying libx... packages, xserver-common,
libdrm and mesa packages.
Write down the list of packages upgraded, so you can back out if
you hit some showstopper bug.)


Helge Hafting





Re: Mount USB memory stick in 32 bit chroot as well

2006-01-09 Thread Helge Hafting

Adam Skutt wrote:


Andrew Sharp wrote:



I can't tell if you answered the question or not.  Does the directory 
show

up in the chroot environment?  If so, then your problem lies elsewhere,
like maybe a permissions thing.


It does.  The issue is that a normal bind mount won't bind any mounts
underneath it.


For example, if you have /usr and /usr/local and bind mount just /usr, 
/usr/local won't appear mounted in the bound mount.


As Daniel Foote suggested, rbinds may be a solution, but I haven't had 
the chance to play myself.  My only concern would be that it may or 
may not like mounts randomly appearing/disappearing.


One simple solution:
Always mount the stick in the chroot, and have the
64-bit environment provide a symbolic link into the chroot
so that 64-bit apps also find the stick.

Helge Hafting





Re: Matrox 550 / Debian Etch/ X.Org 6.8.2 / Xinerama not working??

2006-01-04 Thread Helge Hafting

Joost Kraaijeveld wrote:


Hi,

After an update  of XFree86 to X.org my Xinerama setup stopped working.

I cannot get my two screens working at the same time, except when I use
the second screen (an LCD screen, the little inferior one) as my main
(or first, the one with the menubar) screen, and my first screen ( a
beautiful 21 inch CRT screen) as my second screen.

Is that a known feature or a plain bug? Is there a workaround for it?
 


Looks like a bug.  What goes wrong if you switch the
vga connectors around?

Helge Hafting





Re: Problems installing on AMD Athlon 64 system

2006-01-04 Thread Helge Hafting

Austin (Ozz) Denyer wrote:


On Sun, 18 Dec 2005 15:03:58 -0800, lordSauron
[EMAIL PROTECTED] wrote:
 


noo!!!  another one lost to Gnome!  I really hope you meant
Kubuntu, since KDE is really much nicer than Gnome for people who need
to get something done (working with Gnome is like having your neck
amputated!)
   



Even Linus himself recently stated that people should use KDE rather
than Gnome.  I won't repeat what Linus called the people behind the
Gnome interface for fear of invoking Godwin's Law #;-D  I have to say
that Gnome has been dumbed down to the point of being virtually useless.


So even Linus seems to think the choice is between KDE and Gnome? :-/
There is a third alternative - use neither and be much more efficient.

Go for icewm or some other simple window manager -
and start X, log in, and get the window manager
up and running in a few seconds only.  No long
wait for KDE to start (what the heck is it spending time on?
Half a minute on an otherwise nice fast machine? Is it
compiling itself first?)
No insanely stupid hardware detection that locks up some machines.
(Why, oh why does a _GUI_ do hardware detection at all?  That is the
job of the kernel - or sometimes the xserver.)

I don't mind the look of KDE, but the startup time (and hangs)
offset any benefit for me.

Helge Hafting





Re: How to package SuperCollider (or, whats the deal with multiarch)

2005-12-14 Thread Helge Hafting

Mario Lang wrote:


[EMAIL PROTECTED] (Lennart Sorensen) writes:

 


On Tue, Dec 13, 2005 at 01:53:04PM +0100, Mario Lang wrote:
   


SuperCollider is for some reason I am going to explain below a bit
strange when it comes to porting it to AMD64:

SC consists of two separate programs, the synthesis server (connects to JACK),
and the language client which executes SCLang code.  Both processes
communicate with each other via UDP (OSC).

For reasons of design and speed vs. space tradeoffs, the language
*has* to be compiled as a 32bit binary.  The 64bit build fails
at runtime due to pointer size overflows.  (if anyone wants to know
more about it, the main reason for this is that SCLang uses a kind
of slot-mechanism to hold direct and indirect values all in a 64bit
double, therefore a 64bit pointer + tag does not fit).


Does it have to be a pointer, or could this code be rewritten to use
an array index instead?  You may then take care to keep the array
smaller than 4GB, and only need 32-bit offsets. Even on a 64-bit platform.


Brilliant.  I just love it when programmers write non-portable code for the
sake of performance. :)
   




I've looked at it, and it is doable.  However, I think it would require
some preprocessor magic, since currently, copying slots is done by
assignment, which is definitely faster than a memcpy of a struct...

But I agree, a proper patch which is acceptable by upstream would
be favourable.  Only to a purist I am afraid, the userbase seems
quite happy with the current situation.  And upstream is not going
to accept a solution which reduces performance for a 32bit address space
version.
 


Then there is the possibility of #ifdef, which lets you keep the
32-bit code unchanged and do it differently in the 64-bit case.
Perhaps this is necessary, seeing how the optimization is nice
but 32-bit specific.

Helge Hafting





Re: Sarge NR_CPUS limit of 1 reached. Processor ignored.

2005-12-14 Thread Helge Hafting

Siju George wrote:


On 12/14/05, Lubos Vrbka [EMAIL PROTECTED] wrote:
 


Is there a way to install the kernel using apt-get??
   


 aptitude install kernel-image-2.6.8-11-amd64-k8-smp

   


Also will I have to re-install the packages to support SMP?
   


you don't have to reinstall any apps. single processor apps will run on
SMP just fine, you can just run more of them (2 in your case) at the
some time without slowing down...

   



Thankyou somuch lubos for the reply :-)

I installed some software from dotdeb following the advice on

http://dotdeb.org/news/dotdeb_now_available_for_amd64

hope it will automatically get packages for 64-bit amd and not 32 bit x86 ?
how do I check if the packages installed are for 64 bit?
 


Test binary files using the file command.  Example for testing /bin/ls:
$ file /bin/ls
/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for 
GNU/Linux 2.2.0, dynamically linked (uses shared libs), stripped


So /bin/ls on my office pentium machine is a 32-bit executable.
A 64-bit executable will have ELF 64-bit instead, and it
won't be 80386 either.


is it that only 64-bit packages can be installed or 32-bit packages
can also slip in if the /etc/apt/source.list is not edited carefully?

 


Apt will get you what you ask for, but you may of course get
in trouble when trying to _run_ an executable that doesn't fit the platform.

For a quick check:
file /bin/* /sbin/* /lib/* /usr/bin/* /usr/sbin/* /usr/lib/* 
/usr/bin/X11/* /usr/games/* | grep 32


If you get nothing, then there is no 32-bit software on your system.

Helge Hafting





Re: Open office

2005-12-02 Thread Helge Hafting

sigi wrote:

Hi, 

 


Some ms-office compatibility is nice, fortunately openoffice is not
needed for that.  I chuck the .doc/.rtf at abiword, and gnumeric handles
the excel stuff.  About a hundred times faster than openoffice, which
I only ever use to read the _openoffice_ documents people occasionally
send me.
   



But abiword doesn't support doc/rtf files as well as openoffice does.
There are lots of misrepresentations here...
 


That used to be a problem for me - I haven't had such problems
with abiword 2.4.1.  Still, you may receive documents different
from mine, of course.  I don't worry about small faults in a document
I am only going to read on screen anyway.  I consider most word
documents I get too ugly anyway, with the jagged right margins
everybody uses.  (Yes, I know word can do better than that - people are
part of the problem.)

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: changed ownership of / by mistake...

2005-12-01 Thread Helge Hafting

Craig Hagerman wrote:


Thanks for the replies Gilles and Cameron. I tried the suggestion to do

apt-get --reinstall -u install

and it does seem to have gotten things back ALMOST to where they
should be.  Still a couple of obvious problems.  I have no sound for
some reason.  I checked and things like /dev/audio0 ARE correct
(root:audio) now.  I get an error message when I try to open the
volume control, and volume is greyed out in something like totem.  Not
really sure what might still be wrong to prevent audio from
working.
 


Ok, the device file has correct ownership.
Ownership of /dev and / is ok too?
How about the sound driver module, probably in
/lib/modules/... somewhere?  Or the module loader,
/sbin/insmod or modprobe?

Use the find command to find files owned by craig outside of /home.
Then change them or reinstall the package.  Don't forget
directories not supposed to be owned by craig.
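A quick sketch of such a search (assuming the username really is craig,
and that /home is the only place his files belong):

$ find / -path /home -prune -o -user craig -print

That prints everything craig owns while skipping the /home tree.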

If chown root:root * fails, try chown 0:0 *.   Looking up
names like root may very well fail on a hosed system,
numbers tend to work.  And root always has UID 0.

Make sure any suid binaries in /bin, /usr/sbin, and /usr/bin are all
owned by root.

If they're owned by craig then they will switch uid to craig
when run, lose their privileges and then fail. . .


The second thing I have noticed is that the terminal doesn't work. I
CAN open up the terminal (gui), but there is an error message saying
cannot open child process, and the terminal itself is unusable.
There is no prompt and I can't actually type anything in it. (Still
going in by ssh from a remote machine.)

 


Ownership of the xterm/gterm/kterm/rxvt binary is ok?
Check ownerships in /dev - nothing should be owned by craig, but
not all of it is supposed to be owned by root:root either.  Consider
recreating /dev using MAKEDEV.



Any ideas on these anyone?

By the way Cameron, I did try knoppix, but since it is only a 32 bit
OS I can't chroot into my existing filesystem. 


No need to chroot.  Just mount the fs under /mnt, then go into /mnt
and use chown to set ownership correctly.  Then umount the fs and reboot.
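Roughly like this (the device name is a placeholder - use whatever your
root partition actually is):

$ mount /dev/hda2 /mnt
$ cd /mnt
$ chown 0:0 .        # and fix the other wrongly-owned files the same way
$ cd / && umount /mnt

A 32-bit Knoppix handles this fine, since chown only touches metadata on
the filesystem.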

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Open office

2005-12-01 Thread Helge Hafting

Lars Schimmer wrote:



Adam Stiles wrote:
 


On Tuesday 22 November 2005 08:09, Rob van Kraanen wrote:

   

A lot of people think they need OOo for the same reason that a lot of people 
think they need MS Office, or Windows.
   



Because it IS used.
We are all NOT private persons, and if you are at the office, you NEED some
software which IS stable for some years and which works nicely with MS
Office docs and produces some nice files.
And you need something which works on nearly all platforms: Apple OS X,
Windows XP AND Linux.


Some ms-office compatibility is nice, fortunately openoffice is not
needed for that.  I chuck the .doc/.rtf at abiword, and gnumeric handles
the excel stuff.  About a hundred times faster than openoffice, which
I only ever use to read the _openoffice_ documents people occasionally
send me.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: RAID

2005-11-16 Thread Helge Hafting

lordSauron wrote:


Okay... so, let's just say that hypothetically I choose to be a little
less than intelligent and try to use the motherboard's built-in software
RAID.  Would that work?

Also, I'm a little skeptical of Linux's software RAID being better
(i.e. faster) than my motherboard's.  Are there any statistics I can look
at in regard to this?
 


It depends on _how_ the motherboard does RAID.  If the motherboard
has a true RAID controller that does RAID in hardware - then sure,
it may beat Linux software RAID.  And it may not - Linux certainly beats
some not-so-good hw RAID controllers.

However, if the motherboard RAID is merely a stupid BIOS sw RAID,
then Linux will perform much better.  How to know?  If windows needs
a driver (supplied with the board) to use RAID on that controller,
but no driver if the controller uses only one disk - then you have
one of those common fake RAID controllers.  Another way to know - if
you have a real hw RAID controller, then it was expensive too.


Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Advice in switching from Mandriva 64 to Debian 64

2005-11-03 Thread Helge Hafting

Giacomo Mulas wrote:


On Thu, 20 Oct 2005, Jean-Jacques de Jong wrote:


Hi,

I am planning to switch to Debian from Mandriva. I have an AMD64 and 
I would like to exploit the 64 bits for those programs that really 
need it (video editing/transcoding, photo editing), and still run 
Firefox with Flash, OpenOffice, and Wine (CodeWeavers and Cedega).


The issue is that the 32 bit applications need to be run by the rest 
of the family, and I fear a chroot environment would be too complex 
for them (they just want to click on an icon and it must work).



They don't need to be aware of the chroot for it to work, if it's
configured properly. I have a similar setup and my users usually have
no indication but just run what they want. This is how I did it:

1) install the 64 bit system first, with everything you want in there.

2) create a chroot, following the howto in the debian documentation,
including bind mounts and the sort, so that home directories, X etc. are
indeed available in the chroot.

3) compare the passwd, shadow, group and gshadow files in the 64 and 
32 bit sides, make sure to make them as nearly equal as you can.


Or make them hardlinks, if they are on the same filesystem.  That way,
you won't need to maintain these files.  It saves some stress when
some package installs yet another daemon user.
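A minimal sketch of the hard-link trick (the chroot path /chroot/ia32 is
an assumption - use whatever you created in step 2):

# hard-link the account files into the chroot (ln -f replaces the copies)
for f in passwd shadow group gshadow; do
    ln -f /etc/$f /chroot/ia32/etc/$f
done

This only works when / and the chroot live on the same filesystem, as
noted above.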



4) get the list of installed packages in the 64 bit system, e.g. with
dpkg --get-selections > my64bitselects

5) install the same packages in the 32 bit chroot, e.g. with dpkg
--set-selections < my64bitselects (run this in the chroot)


Ouch - much waste of diskspace.  I see no need to make a complete mirror.
Install the complete 64-bit system.

Install the 32-bit base system using debootstrap or cdebootstrap.
Then install those 32-bit packages that are necessary.
In this case - the webbrowser with a complete set of plugins, wine
and openoffice.


Thanks to the nice packaging system, every other package these
apps depend on will also be pulled in.  It'll be a lot, but nowhere near
a mirror of the 64-bit installation.  (A mirror may even be
impossible, if you have network server software like sshd or
apache installed.)
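A rough sketch of that route (suite name, mirror and chroot path are
assumptions - and the 32-bit package names of the day may differ):

# create a minimal 32-bit base system
debootstrap --arch i386 sarge /chroot/ia32 http://ftp.debian.org/debian
# then add only what is actually needed, inside the chroot
chroot /chroot/ia32 apt-get install mozilla-firefox wine openoffice.org

Everything these packages depend on comes along automatically; the rest
of the 32-bit archive stays out.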



8) in the 64 bit side, configure something as dchroot or schroot so 
that your users can run programs in the 32 bit chroot, and create 
scripts in /usr/local/bin for those programs you want to run in the
chroot (e.g. firefox, mozilla, OOo, acroread...) and arrange the 
default path to look in /usr/local/bin _before_ /usr/bin. In this way, 
you can just type the command and, if you arranged for a script to be 
run in /usr/local/bin, it will take precedence over the
64 bit app, even if it is present, unless you call the latter with the 
full path (which you usually don't do).


Nice trick.  And 32-bit apps will automatically get on the usual desktop
menus too, when they're available on the PATH.
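A hedged example of such a wrapper (assuming the chroot is registered
with dchroot under the name ia32 and has a 32-bit firefox installed) -
/usr/local/bin/firefox could be just:

#!/bin/sh
# hand the whole command line to the 32-bit firefox in the ia32 chroot
exec dchroot -c ia32 firefox "$@"

Make it executable with chmod +x, and it shadows any 64-bit firefox on
the PATH, exactly as described above.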

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: 32-bit memory limits IN DETAIL (Was: perspectives on 32 bit vs 64 bit)

2005-11-03 Thread Helge Hafting

Martin Kuball wrote:


Am Tuesday, 25. October 2005 02:31 schrieb [EMAIL PROTECTED]:
[snip]
 


Because the kernel address space has to hold more than just RAM (in
particular, it also has to hold memory-mapped PCI devices like
video cards), if you have 1G of physical memory, the kernel will by
default only use 896M of it, leaving 128M of kernel address space
for PCI devices.

A different user/kernel split can help there.  I use 2.75/1.25G on
1G RAM machines, but if you use PAE or NX, the split has to be on a
1G boundary.


But these are all workarounds.  The real solution is to use a
larger virtual address space so that the original, efficient
technique of mapping both the user's virtual address space and the
kernel's address space (basically a copy of physical memory) will
both fit.
   



And what about 64bit systems? How is the splitting done there? Do I 
have to worry?
 


The problem is exactly the same, but on a larger scale.
For 32-bit processors, you get trouble when your programs
need near 2^32 bytes or more. (I.e. 4GB.)  For a true 64-bit
processor, you get the same troubles the day you need
near 2^64 bytes or more per process.  Nobody is anywhere
near this limit yet. 


Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: perspectives on 32 bit vs 64 bit

2005-10-19 Thread Helge Hafting

Adam Skutt wrote:


Helge Hafting wrote:


Adam Skutt wrote:


Helge Hafting wrote:


You can address more than 4GiB by using the always-unpopular
segment registers found on intel processors.




How?  In protected-mode, they're in use as segment descriptor
selectors.  Certain bits have specific meanings you cannot override,
as they're part of the memory protection mechanism.




Yes, so?


That means it's logically impossible to have a 48-bit pointer, at all 
period.



You are right that this isn't a true 48-bit pointer.  The upper 16
bits of such a pointer are not a numerical part that can be incremented
the ordinary way.  But it _is_ a way that lets you have more than
32 bits of address space, although this way is so cumbersome that
nobody sane would bother to implement it.

(Pointer arithmetic no longer being simple add/subtract, precisely
due to the descriptors, invoking the _swapper_ whenever we
reference a pointer to another 4G area . . .)


Sigh.  All mechanisms that let the OS support more than 4GB for
several processes can be used to support more than 4GB for a
single process as well.  That is trivial, although also less efficient
than only supporting 4GB.


Yes, but it's obvious now you didn't understand what I said.

You /cannot/ have more than 32-bits of virtual address space.  Period.
There is no way to do it.

What you can do is remap the same virtual space to different physical 
addresses.  Which is different from having extra v.a.s.





 Whenever the app reloads a segment register,
 (i.e. trying to use a 48-bit pointer where the segment descriptor
 differs from the last pointer used)


This isn't a 48-bit pointer, because descriptor selectors aren't 
pointers.



Not a true 48-bit pointer, it doesn't give you 48 bits of address space.
But it gives you more than 32 bits, that's my point.  And I called it a
48-bit pointer because storing such a pointer indeed takes 48 bits
for the selector & offset.

And it won't work anyway.  How do I get a base offset higher than 
0x?  And if I add to it, what behavior is yielded?



You don't get a higher base offset than that - but I never said so
either.  Your compiler has to support a segment switch whenever
you cross a 4GB boundary.  Needless to say, this makes all
pointer arithmetic slow.


Not what is desired, to say the least.


Nobody desires this way of programming - but it is possible.
I never claimed it was useful - get a 64-bit processor instead, I said.



You can't have more than 32-bit v.a.s.  Anytricks to get around that 
don't really get around that, they just have the same addresses the 
user-space code sees point to different physical addresses.


I really don't see how this is possible leafing through the IA-32
System Programming Guide, so links or text would be preferred.


No guide will tell you how, they'll guide you towards something saner.
It is all there in the specs though, and is easier to understand if
you compare to a similar situation in the 1980's:

Nobody ever used the 48-bit pointer system, but a 32-bit pointer
system (16-bit selector + 16-bit offset) was widely used to support more
than 1MB on the 80286 processor.  Of course these weren't true
32-bit pointers either, they needed 32 bits of storage space but
merely allowed a 24-bit address space.  Pointer arithmetic was
highly nontrivial due to the selector part of the pointer, but it worked.
The compilers did support data structures bigger than 64kB (and bigger
than 1MB), even though you couldn't have an offset bigger than 64kB.
They supported this by changing the segment selector when necessary.
Such pointer arithmetic was time-consuming and slow -
and programmers laughed at it because
true 32-bit processors were available at the time.  But those didn't run
Microsoft Windows.

At least two operating systems used this programming model -
Windows 3.0 and OS/2 1.3.  The 80286 was popular, unfortunately.

Using 48-bit pointers (16-bit selector + 32-bit offset) works much
the same way, but with an added problem:  Where the 80286
created a 24-bit address from a 32-bit segmented pointer,
the 80386 creates a 32-bit address from a 48-bit segmented pointer.
This is the only extra problem that we get; other problems,
such as the offset not being greater than 32 bits, are solved the
same way as 80286 programmers solved the problem of the offset
not being more than 16 bits.  The offset limitation doesn't stop us, it
is merely a performance problem.

The 32-bit address problem is solved by having only one segment selector
marked present at any time.  Accessing any other selector will then
give a segment-not-present trap, similar to a page fault.  The OS
can then resolve the problem by changing the PAE-extended page
tables, mapping a different 32-bit address space, marking the new
selector present
(and marking the previously used one not-present) and then
restarting the instruction.  This step makes 48-bit segmented pointers
even slower than the 32-bit

Re: perspectives on 32 bit vs 64 bit

2005-10-14 Thread Helge Hafting

Adam Skutt wrote:


Helge Hafting wrote:


You can address more than 4GiB by using the always-unpopular
segment registers found on intel processors.


How?  In protected-mode, they're in use as segment descriptor
selectors.  Certain bits have specific meanings you cannot override,
as they're part of the memory protection mechanism.


Yes, so?




Simply have all but one segment not present and rely on the OS
to trap access and remap the page tables whenever the code switches
segments.


Remap the tables to what?  The address used for the lookup with a PTE 
is 32-bit.


Sigh.  All mechanisms that let the OS support more than 4GB for
several processes can be used to support more than 4GB for a
single process as well.  That is trivial, although also less efficient
than only supporting 4GB.

History:
8086: 16-bit addressing, limited to 64kB.  But segments allowed addressing
 of up to 1MB.
80286 protected mode:  Still 16-bit addressing.  The segment registers
 are turned into segment descriptor selectors.  Now you can
 address up to 16 MB.

80386 32-bit mode without extensions:  32-bit addressing, limited to 4GB.
 There is also a set of page tables, so that virtual and physical
 addresses may be different.  Physical addresses are still limited
 to 32 bits.

80386 with PAE extensions:  Still 32-bit addressing, but the page tables
 can remap the 32-bit virtual addresses into a bigger address space.

 The _simple_ use of this is to support more than 4GB, but only
 4GB per process.

 If you want more than 4GB for a _single_ process, then you need
 to change page table mappings as needed as the process runs.
 This can be done in two ways:
 1. The process explicitly calls into the memory management system
 to do this.  That means accessing more than 4GB isn't transparent,
 you have to code for it explicitly - it isn't transparent to the
 programmer.

 2. Use the segment descriptors.  Now, each segment still can't
 map more than 4GB, but the OS can mark most segments as
 not present.  Whenever the app reloads a segment register
 (i.e. tries to use a 48-bit pointer where the segment descriptor
 differs from the last pointer used), the OS gets a trap similar
 to a page fault.  The OS can then look up which segment descriptor
 the app loaded, change the page tables accordingly, mark the
 segment present, and let the code continue.  Performance may be
 reasonable for code that stays inside the same 4GB most of the
 time.  Code that moves all over the place will take a very big
 performance hit, as every memory operation page faults and incurs
 the same overhead as a context switch.  Still, this mode of
 operation is transparent to the programmer.  I.e. you can recompile
 ordinary portable code (assuming you have a compiler supporting
 this memory model) and have it work without change.  (Assuming the
 code makes no assumptions about the size or layout of a pointer;
 they are 48-bit and cannot be incremented with simple arithmetic
 only.)  This is why I said you don't want to do this.  Doable, but
 hard and not efficient.

amd64 mode:  real 64-bit addressing, which is much easier to work with.
 It is better than 80386 PAE in the same way as 80386 32-bit
 was better than 80286 protected mode.  Addressing more than
 4GB is now trivial.  No tricks at all, as pointers are 64-bit.
 Old code that doesn't make assumptions about pointer size may be
 compiled without change.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: perspectives on 32 bit vs 64 bit

2005-10-10 Thread Helge Hafting

Adam Skutt wrote:


Helge Hafting wrote:

The limit is 4GB. 


No, wrong.  The limit is 64 GiB physical RAM, period.  PAE has been part
of the ISA for practically forever now, so it's silly to say it's
anything else.


Well, correct.



However, practically, Linux can only use 16 GiB physical RAM without
special patches, because of how the virtual memory is split by default
(1GiB/3GiB).  Windows is subject to the same limitation.

Fully getting around this requires a 4G/4G patch, which is a terrible
performance penalty because it requires a full TLB flush on every
context switch.

 (Well,


you can theoretically address more, but you definitely don't want
to do the work necessary to do that.  


No, you cannot.  One process cannot address more than 4 GiB virtual
memory at a time, period.  


You can address more than 4GiB by using the always-unpopular
segment registers found on Intel processors.  That means 48-bit
pointers, which certainly can address more than 4G.


And yes, I know that the segmentation mechanism creates a 32-bit
address from the 48-bit pointer, but that can be worked around.
Simply have all but one segment not present and rely on the OS
to trap access and remap the page tables whenever the code switches
segments.

It is ugly and bad for performance, which is why I said you don't
want to do it.  But doable - certainly.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: perspectives on 32 bit vs 64 bit

2005-10-06 Thread Helge Hafting

Hamish Moffatt wrote:


On Tue, Oct 04, 2005 at 03:26:12PM +0200, Thomas Steffen wrote:
 


And you might want to give Ubuntu a try. The amd64 version is quite
   



How nice of you to say so on the debian-amd64 list! More like how
insulting...
 

There is no need to take offense.  We all know what distro ubuntu is
based on; where would they be without debian?  Now, ubuntu may have a few
nice addons not in debian - if that is a problem, just grab & include them.
Ubuntu has benefited from debian, nothing stops us from going the other
way.

I believe the hw detection stuff in ubuntu is free stuff. . .

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: perspectives on 32 bit vs 64 bit

2005-10-04 Thread Helge Hafting

Faheem Mitha wrote:



Dear People,

I have an Opteron server, on which I am running the AMD64 Debian port. 
For various reasons, I'm contemplating going back to 32 bit.


The major reason is that aside from the packages in the AMD64 Debian 
archive, it is not always easy to find Debian packages for AMD64, 
since i386 is still very much the default. Also, not all software 
compiles with AMD64.


There should be no need to go back to 32-bit for this reason.
Please note that the opteron, in 64-bit mode with a 64-bit kernel
running and all the 64-bit software you can get, is still capable of
running the odd 32-bit program just fine.




I am aware that 64 bit computing has considerable advantages as well.


Indeed, so get the best of both worlds:
* 64-bit performance for all software that _is_ ported, which is most
 of it,
* and 32-bit software for those few programs that either are proprietary
 or proved surprisingly hard to port.



I'm looking for perspectives from people who have experience with 
both, and what their feelings about this are.


I run both kinds of software on my home machine.  Almost everything
64-bit, but a 32-bit chroot so I can run a 32-bit webbrowser in order
to use 32-bit java/flash plugins.   (There is as yet no good 64-bit
java webbrowser plugin, although there is a 64-bit java.)



Specifically, I was looking for clarifications about memory issues. I 
have looked at stuff on the web, but am still confused.


What is the 4 Gig limit for 32 bit processors that people talk about? 
Does this mean that each process/thread can only get a limit of 4 Gig? 
Is there any workaround for this?


32-bit programs cannot address more than 4GB, because that's
all you can address with a 32-bit pointer.  Various trickery exists that
lets 32-bit Intel machines access more than 4GB _in total_ (but still
limited to 4GB per process), but these tricks rob you of some performance!

There are no such problems with 64 bit.  Of course there is a limit,
but it is at 17179869184 GB.  Nothing to worry about today. :-)

What are the other limits? I read elsewhere that a 32 bit Linux system 
has an effective limit of 16 Gig usable memory total.


The limit is 4GB.  Intel has various tricks to up this limit a bit, all with
some performance impact and limitations.  The main limitation, of course,
is that one _single_ process won't get more than 4GB anyway.  (Well,
you can theoretically address more, but you definitely don't want
to do the work necessary to do that.  First, make a compiler to generate
such code, then port the kernel to use 48-bit segmented pointers;
by the time you're finished all 32-bit hw is obsolete and people will
be worrying more about the Y10K problem. :-)

It is so much easier to just go 64-bit, and then
2GB or 4GB aren't special numbers at all any more.  A single process
can use billions of GB, if you can afford a machine that big.  A single
data structure can be bigger than 4GB . . .

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Problems with Acrobat Reader 7.0 in pure64 environment

2005-09-26 Thread Helge Hafting

Michelasso wrote:


I make presentations using a latex class (powerdot) that gives as
output a .pdf file, and often I have to present it using computers
running windows in which there is only Acrobat Reader installed, so I
want to be sure that everything is ok testing it with this polluting
closed source software.
Of course I could ask the organizers of the workshops where I am used
to show my presentations that I refuse to attend if they don't provide
me a pc running Debian or Knoppix, but this seems to me not feasible
at the moment...


Just tell them you need a PC with a CD-ROM drive.  Then boot Knoppix
from a CD, where you may have your presentation as well. :-)
If you have time, you could even modify the Knoppix installation
to boot straight into the presentation - no need to fuss around with
menus and such. :-)

And if their PC somehow won't boot your Knoppix CD, then windows
will still be able to get the pdf off it.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Onboard raid controllers a problem?

2005-09-23 Thread Helge Hafting

Steve Dondley wrote:


Is anyone out there running an AMD64 system with onboard raid
controllers?  Any problems?  I'm looking at buying a server with a
Tyan Thunder K8S Pro S2882G3NR motherboard with an onboard controller
but the server company states on their website that:

Support for onboard RAID controllers has been depreciated in the
Linux 2.6 Kernel. If you desire RAID the system may be configured with
software RAID or you may select a hardware RAID controller.

Can anyone out there back this up or refute it?
 


Most cheap onboard RAID controllers are merely
software RAID anyway.  They need no Linux support
because Linux has its own software RAID solution that
works very well.


Set it to not be a RAID in the BIOS setup, so you'll get plain disks.
Then, set up the RAID in Linux instead.  You don't lose anything
this way; the RAID driver for windows is just a software RAID too,
and probably not nearly as good as the Linux driver.  Linux
sw RAID has a history of outperforming some hw RAID controllers too...
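A minimal sketch of that Linux-side setup with mdadm (device names and
RAID level are assumptions - pick your own):

# mirror two plain disks into /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# then put a filesystem on the array as usual
mkfs.ext3 /dev/md0

The BIOS never sees a RAID at all; the kernel's md driver does the work.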

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Migrate running IA32 system to Debian AMD 64?

2005-09-22 Thread Helge Hafting

[EMAIL PROTECTED] wrote:


Hello,

I have a running critical production server, running Debian Sarge 32 Bits 
version.

I would like to migrate to Debian-AMD64 system, how can I do it, without 
interrupting for more
than 5 minutes my services which are running on it?

I mounted the ISO debian-31r0a-amd64-binary-1.iso at /mnt but I don't know how 
to start manually
the Debian installation program.
How can I start the install program manually?

I would like to install the Debian AMD64 distribution to a temporary device (USB key), then I do a chroot, if 
all works fine I will try to boot on the USB key, and then if all works very well, I will copy the USB key 
files to the old harddisk partition to complete the migration.


Is this possible?

 


My idea:

1. Compile a 64-bit kernel, with 32-bit emulation support.
2. Reboot into that new kernel, without changing any other programs.

The new kernel should work fine with the old 32-bit programs.
Assuming you have enough disk space, use debootstrap to
install a 64-bit system in a chroot while the system is up
and running.  (Take care not to start any
network sw in the chroot.)  Your users will be using the 32-bit
software while you work.

Note that you can bind-mount directories from the 32-bit
system in the 64-bit chroot, so that directories like /home
are accessible from both 32-bit land and 64-bit land.

3. Do a gradual update by stopping one 32-bit service,
copying stuff over to the installed 64-bit service, and
restarting the 64-bit service instead of the 32-bit service.
Test that it works well before proceeding to the next service.

Downtime on each service will then be short. If something
doesn't work, restart the known working 32-bit service
before pondering what went wrong.

4. After a while, there is no 32-bit code left running.  Wait some
days to be sure. 


5. Finalize by removing the 32-bit stuff and make the 64-bit
root the real root instead of a chroot.  (If you later need
to run some rare 32-bit only sw, install that in a chroot instead.)
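A rough sketch of steps 2 and 3 above, assuming the 64-bit chroot lives
in /srv/amd64 and using apache as the example service (all names and
paths here are placeholders):

# step 2: make shared data visible inside the chroot
mount --bind /home /srv/amd64/home
mount --bind /var/mail /srv/amd64/var/mail

# step 3: move one service at a time
/etc/init.d/apache stop                      # stop the 32-bit service
chroot /srv/amd64 /etc/init.d/apache start   # start the 64-bit one
# test it; if it misbehaves, reverse the two lines above

Downtime per service is then just the stop/start gap.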

I did something like this, but it was only a home machine.  Still,
family members get upset if they can't check their mail whenever
they want to . . .

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Dupal Opteron on Sarge

2005-09-19 Thread Helge Hafting

Ron Johnson wrote:


On Sun, 2005-09-18 at 13:18 +1000, Hamish Moffatt wrote:
 


On Sat, Sep 17, 2005 at 06:49:55PM -0700, lordSauron wrote:
   


pentium 4s use a 21 stage pipeline or something like that... so they
take approximately 21 clock cycles to get anything done.  AMD uses
about 7 stages (or something in that neighbourhood) so if you divide
2.8 by 21 and 2.0 (my Athlon64) by 7, you get a really interesting
breakdown.  You'll certainly find a HUGE increase in performance,
 


That's a terrible simplification. Yes, it takes longer to get the first
   



Not only is it a simplification, it's wrong.

 


result (21 cycles versus 7) but the idea of the pipeline is that you can
get a result every clock cycle after that.
   



But when you context-switch or branch, the pipeline gets dirty,
and the new process needs to fill up the pipeline.

Short pipelines like in Athlon & G4 are easier on branching,
but other techniques like speculative fetching and OOE mitigate
that somewhat.

And then, deep pipelines let you ramp up the clock much easier
than do short pipelines.  Don't know why, though.

 


You can ramp up the clock speed on a deep pipeline because
each stage in the pipeline does very little.  Therefore, it can be done
faster than the more complex stages found in a shorter pipeline
doing the same job.

Unfortunately, the ability to ramp up the clock doesn't help when
a faster clock becomes necessary just to keep up.  We all know
how an Opteron beats any Pentium with the _same_ clock rating
by a wide margin.

A 21-stage pipeline _is_ too deep - on average, x86 code has
a branch roughly every 7th instruction, and each branch
may invalidate that deep pipeline.  A shorter pipeline has fewer of
these problems.


The latency is higher but the
throughput is also higher (more clock cycles per second).
   


Yes, except when throughput is ruined by latency from all those
branches.  People try fixing this by unrolling loops, but then
they use up the instruction cache instead.  A good pipeline should
be short enough to be (almost) full most of the time.  Otherwise
it is going to lose, no matter what clock speed.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: problem with firefox and mozilla

2005-08-24 Thread Helge Hafting

Harald Wenninger wrote:


Hi,

I have problems with mozilla+firefox. I am using the unstable
distribution.
Every time I want to start firefox or mozilla, no window is opened. The
.mozilla-dir and its subdirectories get created, though. There's no
error displayed in the terminal from which I start the browsers.
Does anybody know something about this problem?
 

I noticed such a problem.  Firefox did not start for _new_ users using 
icewm.

The error is in firefox (or some library),
and the workaround is this:

For each user that cannot get firefox going:
1. log in as that user running X
2. $ xhost +
(Yes, disable access control!)
3. start firefox which will now work
4. close firefox
5. $ xhost -
(Normal access control again.)

Firefox will now work.
Firefox does something stupid and mistakenly thinks it is being denied
access to the X display.  Let it run once without access control, and
it'll save something to the user's .mozilla/ that makes it
work the next time.

The permanent fix is:
$ reportbug firefox
And complain loudly about this idiocy.  (Check to see if the bug is
reported already.)  If it is not, report it and make sure to get all
details about error messages and such.  If it _is_ reported, just
add your details and opinions to the existing report.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: more weird apt - bzip2.

2005-08-23 Thread Helge Hafting

Chris Wakefield wrote:


Hi Bob.

Yes, I get no output from:  perl -e 0
install-docs says:  no such file  but I think it's probably run inside 
the perl shell somehow, so ...


I screwed things up even worse, as I copied over apt-get and dpkg from another 
install and I have a big mess now.
 

Ouch!  Try not to do that. 

You are right, I have to fix the original problem first, but I can't fix 
without apt-get  dpkg working right, so gotta do a reinstall I think.


 


Do they not work _at all_, or is it merely this upgrade problem always
getting stuck on some broken packages?

If dpkg works, then you can fix your apt by downloading the apt
package manually from one of the mirrors.
Then install it using dpkg.


After that, play safe by also reinstalling dpkg,
i.e: apt-get install --reinstall dpkg

If dpkg is _broken_, then download the dpkg deb manually
and unpack it yourself.  A .deb is an ar archive containing tarballs
(control.tar.gz and data.tar.gz), so it can be unpacked with ar and
tar.
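Something along these lines (a sketch - the exact .deb filename will
differ):

ar x dpkg_1.13.11_amd64.deb     # yields data.tar.gz among other members
tar -xzf data.tar.gz -C /       # unpack the files over the root fs

That should give you a working dpkg again, after which you can reinstall
the package properly.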


Make sure you get the amd64 package . . .

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: CPU burns 3

2005-08-11 Thread Helge Hafting

antonio giulio wrote:


Hi,

A few weeks ago I posted about a strange behaviour of my amd64
cpu: GPG or molecular dynamics calcs make the cpu burn in 3/4 minutes.
The notebook is now at the Acer labs for checking (under guarantee).
A technician (on the phone) tells me that his Windows cpu-stress program
found nothing (3 days running normally; anyway, the notebook comes back
next week).  And so, could it be a Linux-related problem?!  Ok, it's
impossible that software damages hardware...

 


Actually, sw can damage hw these days.  Some motherboards,
particularly portables, have fans whose speed is controlled
by the cpu.  (So they can slow down for silent running at times
of low load.)
If you deliberately program the fans to stop, then
the cpu will overheat and break.  The same goes for any other
cooling/power consumption feature under cpu control.  An ill-designed
board could theoretically have cooling that needs a correct driver to
_start_,
but I guess all sane designs start the fans at full speed and only need
a driver for the optional slowdown.

I am sure you didn't do anything like that, but it is just the sort of
thing a virus writer might do.  Other possibilities for sw destroying
hardware:

* Some old CRT monitors let you stop the electron beam in one spot if you
  program the screen update frequencies out of spec.  That will
  quickly burn out the screen in that spot.
  (That pixel gets 640x480=307200 times the normal energy . . .)

  There are few of those around - but see the documentation for X
  modelines for details.

* Most boards have a programmable BIOS these days, so you can download
  updates.  Program something non-executable, and the machine
  won't be able to boot anymore.  Technically not destroyed, but you'll
  have to remove the chip and have it reprogrammed outside the pc.
  Most people don't have the necessary equipment and will have to pay.
  It might also be near impossible if the chip is soldered to the board.


Do you have any ideas for other tests, or what to look for?
 


Surely the service people can download a Linux live CD (64-bit, of course)
and run GPG for a few hours in a room/enclosure as hot as the PC spec
allows.

If it happens with a live Linux CD from one of the well-known distributors,
then they can't claim that you had a bad Linux setup.
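If GPG itself is awkward to script, any sustained cpu load will do; a
trivial stand-in (my own sketch, not anything the service people use):

# keep one core busy hashing zeros until interrupted
while true; do
    dd if=/dev/zero bs=1M count=512 2>/dev/null | md5sum
done

Run one copy per core and keep an eye on the temperatures.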

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: amd64-specific bug in XPDF based programs?

2005-07-14 Thread Helge Hafting

Volker Schlecht wrote:


Hi,

 


I am seeing similar problems - garbled text.  A windows user sent me
a .pdf this week and it exhibited the same characteristics - I assumed
it was a windows-specific problem ;(

Will you file a bug?
   



Unless anyone tells me before tomorrow evening that the PDF in question
displays perfectly on his system (amd64 sid, pretty much in sync with what
is on the german mirrors), I'll do that, yes.


Otherwise I'll assume that the problem is somehow caused by my setup and
dig into it.


regards,
Volker
 


Looks fine with xpdf Version: 3.00-13 running on amd64, using
the X display on an Intel Pentium.  The X display used shouldn't
make a difference - font problems in X would likely affect
other apps using the same fonts too.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Confused about 64bit

2005-06-21 Thread Helge Hafting

Graham Smith wrote:


Hi, I'm confused and I hope that you can help me out.

I'm running the pure64 port of Debian on an AMD64.

My understanding about the AMD64 was that is could run in three modes:

1)32 bit emulation mode where you are using a 32bit kernel and 32 bit 
libraries (eg WinXP).
2)mixed mode with a 64 bit kernel and 32 bit libraries (eg Normal 
Debian with 64 bit kernel)

3)full 64 bit mode (64bit Debian port)


Note that 2) and 3) are exactly the same, from the processor's point of
view.  Case 3) simply has more 64-bit programs and libraries installed -
it will run 32-bit software just fine.


32-bit sw in case 3) often runs into a practical problem though: most
programs don't stand alone, they use libraries.  32-bit programs want
32-bit libraries (or some compatibility layer) and 64-bit programs want
64-bit libraries.  Unfortunately, libraries have the same name no matter
if they're 32- or 64-bit, so you can only have one kind at a time.
Unless you set up a chroot or use other tricks.

I now run 64-bit debian.  I have my entire old 32-bit installation available
on disk though; I occasionally chroot into it in order to use 32-bit-only
programs like opera and wine.  (Also a nice option for those who like
openoffice.)




I thought that in full 64 bit mode it was impossible to run 32 bit 
applications unless the 32 bit libraries had been tweaked to allow it. 
But, and here's the strange bit, I am running the 32 bit Sun Java VM 
under the 64 bit environment quite happily without the ia32-libs 
package installed. How come this works? Surely the VM is compiled 
against 32 bit libraries and therefore shouldn't work when running 
against the 64 bit ones? That of course then raises the question why 
don't other 32 bit applications like OO work under 64 bit?


I suspect there is a hole in my knowledge but I don't know where. A 
quick lesson would be appreciated. Many thanks,


Now this seems strange.  A standalone 32-bit program will work of course,
but then it can't use shared libraries at all.  Not even the C library.
Few such programs exist.
You are sure this _is_ 32-bit software?  And you are sure you don't have
ia32-libs in any way or form?  Not even some old unofficial package or a
source install under /usr/local?
And you are sure it is this 32-bit java that gets used?  You don't have
a 64-bit one also?

Anyway, nice that it works. :-)

If you want to know what libraries a program uses, use ldd.  Example:
ldd /bin/ls
   librt.so.1 => /lib/librt.so.1 (0x002a9566c000)
   libacl.so.1 => /lib/libacl.so.1 (0x002a95774000)
   libc.so.6 => /lib/libc.so.6 (0x002a9587b000)
   libpthread.so.0 => /lib/libpthread.so.0 (0x002a95aba000)
   /lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2
(0x002a95556000)

   libattr.so.1 => /lib/libattr.so.1 (0x002a95bcf000)

Here we see what libs ls needs to run.  You can do something similar
for your java.
To check whether executables and libraries are 32- or 64-bit, use the
file command:


file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for 
GNU/Linux 2.4.0, dynamically linked (uses shared libs), stripped


file /lib/librt.so.1
/lib/librt.so.1: symbolic link to `librt-2.3.2.so'

file /lib/librt-2.3.2.so
/lib/librt-2.3.2.so: ELF 64-bit LSB shared object, AMD x86-64, version 1 
(SYSV), stripped


Here we see that ls and one of its libraries indeed are 64-bit on my 
machine.


Helge Hafting



--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Success with Acrobat Reader 7.0 (tweaking module pathes)

2005-06-17 Thread Helge Hafting

Jörg Ebeling wrote:


Huam...

Okay, let me go some OT:

I know about the other readers like xpdf, kpdf and the like, but besides
the fact that they all have problems with some special images (i.e. JPEG
transparency) or are sometimes ugly in the handling, I have one
BIG problem with them... maybe it's just ignorance on my part:


They all have NO reload possibility!


Wrong.  With xpdf, hit R on the keyboard, or right-click the document
and select reload from the context menu.  Just tested it. :-)

Imagine you're developing a PDF and need to check the result very
often.
When using Adobe Acrobat as a browser plugin, you have the possibility
to press the reload button of the browser = one click.


With kpdf or the like you need to close and re-open the file.

I'm working the whole day in my AMD64 environment.
Everything works fine.
But when I need to develop a PDF I really restart my system back
into IA32, only because of the mouse-click case.

I know, it's stupid.

Maybe someone knows another way to reload PDFs with one click.


No problem with xpdf.

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: X dislikes int10 on amd64?

2005-06-07 Thread Helge Hafting

Helge Hafting wrote:


I have tried to run X on a secondary graphics card.  It needs
int10 initialization, which works fine when using ia32.  But
the same XF86Config-4 fails with pure64:

Here is the end of the log:

(II) RADEON(0): Using 8 bits per RGB (8 bit DAC)
(II) Loading sub module int10
(II) LoadModule: int10
(II) Reloading /usr/X11R6/lib/modules/linux/libint10.a
(II) RADEON(0): initializing int10
(**) RADEON(0): Option InitPrimary on
(II) Truncating PCI BIOS Length to 53248

  *** If unresolved symbols were reported above, they might not
  *** be the reason for the server aborting.

Fatal server error:
Caught signal 11.  Server aborting


Is there any special tricks to get int10 working on amd64?
I have installed the ia32-libs package, so that 32-bit software works.
The xserver is the 64bit one though.
 


The problem turned out to be that X ships with two libint10.a files,
one in /usr/X11R6/lib/modules and one in /usr/X11R6/lib/modules/linux.
The latter is preferred as it offers better performance on x86, but
it is only the former that actually works on amd64.  So the solution is
to delete (or rename) the file /usr/X11R6/lib/modules/linux/libint10.a
so it won't get used.
int10 initialization works after that.
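In concrete terms, something like this (renaming rather than deleting,
so it is easy to undo):

mv /usr/X11R6/lib/modules/linux/libint10.a \
   /usr/X11R6/lib/modules/linux/libint10.a.disabled

X then falls back to the generic libint10.a one directory up.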

Helge Hafting


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Idea for structure of Apt-Get

2005-03-29 Thread Helge Hafting
Patrick Carlson wrote:
Hello.  I'm not sure if anyone has suggested something like this or
not but I was thinking about the apt-get system and bittorrent today. 
What if the apt-get system was redesigned so that users could download
updates and upgrades from other users?  This way they would trickle
out to people, slowly at first, but then more and more people would
have the update and thus more people could get it faster.  I know
 

Faster than what?  Today's system is very fast:
One user (the maintainer) uploads a new version and everybody
has instant access to it as soon as they do the apt-get upgrade.
No slow trickle at first.

there would probably be a lot of security issues involved but then
maybe people wouldn't have to worry about setting up .deb mirrors and
trying to get the latest upgrades.  Just a thought.  If it's a bad
one, let me know. :)
 

Oh, you're worried about the internet slowing down as everybody
upgrades and downloads the same stuff?  There is a much
better solution to this, and it is called caching proxies.  Many an ISP
has a caching proxy already that caches both ftp and http, which are
the protocols usually used by apt over the net.  Caching proxies have
two big advantages over changing apt:
* Nothing has to be done to apt itself at all!
* Proxies also cache other things than debian packages.
(Pointing apt at an existing proxy takes one line of configuration - see
below.)
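For example, a client box can be told to fetch through a proxy with a
single apt.conf line (the proxy host and port here are made up):

# /etc/apt/apt.conf
Acquire::http::Proxy "http://proxy.example.net:3128/";

After that every apt-get run on that box goes through the cache, and the
mirrors only serve each package once per site.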
Helge Hafting
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]


X dislikes int10 on amd64?

2005-03-08 Thread Helge Hafting
I have tried to run X on a secondary graphics card.  It needs
int10 initialization, which works fine when using ia32.  But
the same XF86Config-4 fails with pure64:

Here is the end of the log:

(II) RADEON(0): Using 8 bits per RGB (8 bit DAC)
(II) Loading sub module int10
(II) LoadModule: int10
(II) Reloading /usr/X11R6/lib/modules/linux/libint10.a
(II) RADEON(0): initializing int10
(**) RADEON(0): Option InitPrimary on
(II) Truncating PCI BIOS Length to 53248

   *** If unresolved symbols were reported above, they might not
   *** be the reason for the server aborting.

Fatal server error:
Caught signal 11.  Server aborting


Is there any special tricks to get int10 working on amd64?
I have installed the ia32-libs package, so that 32-bit software works.
The xserver is the 64bit one though.

Helge Hafting


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]