Re: Kernel panics on new machine

2007-10-14 Thread M. Edward (Ed) Borasky

Rob Klingsten wrote:

Hi folks, I am over my head here with kernel panics...

I've got a shiny new system: Tyan S2865 (nForce4 Ultra), AMD Athlon X2 
3800+, a single SATA-2 drive and 1 GB of DDR400 RAM.  The board and CPU 
are brand new; the drive and RAM came from a desktop machine which I had 
no problems with, so I think those parts are OK.


Does the motherboard require the RAM modules to be in pairs? If so, are 
they?


The usual warnings about seating of modules and cables also apply.


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Kernel panics on new machine

2007-10-14 Thread M. Edward (Ed) Borasky

Rob Klingsten wrote:
Yes, I have the DIMMs in banks 1 and 2 out of 4 and I've removed and 
reseated them;  I have swapped out the SATA cable, there are no PCI or 
PCI Express cards (system is headless.)


And it looks like it was just that easy;  I pulled one DIMM and ran the 
system on a single 512, problem gone;  I wrote out 3 separate 100gb 
files without incident.
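
For the record, a write test along those lines can be scripted; the sizes and paths below are placeholders, not from the original post (the original test used ~100 GB files):

```shell
# Write several large files with dd and force them to disk with fsync;
# a flaky DIMM or cable often shows up as I/O errors or a panic here.
# TARGET and SIZE_MB are placeholders -- scale SIZE_MB up for a real test.
TARGET=/tmp/stress
SIZE_MB=100
for i in 1 2 3; do
    dd if=/dev/zero of="$TARGET.$i" bs=1M count="$SIZE_MB" conv=fsync || exit 1
done
echo "all writes completed"
rm -f "$TARGET.1" "$TARGET.2" "$TARGET.3"
```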


It makes no sense to me, as the RAM was fine in the other system; I never 
had problems with it.


Oh well, guess I'm getting new RAM.

thanks, sorry to trouble the list with such a trivial thing.

Rob


Well ... if you've got the cash, go get four 1 GB DIMMs. Athlon64 X2s 
work very well with 4 GB of RAM. :)






Re: AMD 64 X2

2007-06-28 Thread M. Edward (Ed) Borasky
Lennart Sorensen wrote:
 On Thu, Jun 28, 2007 at 04:51:17PM +0200, Daniel Tryba wrote:
 If Ed in this thread is correct that the svm flag in /proc/cpuinfo
 indicates Pacifica, then the Turion64 X2 TL-52 (1.6Ghz) has it (also
 there is a flag in the bios on this machine).
 
 Oh right.  I forgot they went from Mobile Athlon 64 to Turion 64.  It
 appears from what I can find that all Turion 64 X2's have virtualization
 (some of the non-X2's have it too, but not all of them).
 
 --
 Len Sorensen
 
 

Yes ... I looked at Turion 64 dual-core laptops briefly before buying
the desktop Athlon64 X2. There were some Turion 64 dual cores without
the virtualization assist, but I think they all have it now. The main
reason I didn't get a laptop is that I wanted a reasonable price for 4
GB of RAM and a fast hard drive, not because of the processors. :)
Besides, I think my next laptop will be a Mac because that's what most
of the Ruby geeks use. ;)





Re: AMD 64 X2

2007-06-27 Thread M. Edward (Ed) Borasky
[EMAIL PROTECTED] wrote:
 hello,
 
 I want to buy a new system with an AMD64 X2. As I want to try
 virtualization, I need to know which processors include the Pacifica
 (now AMD-V) instruction set.
 Does anybody have any information about that? The documents
 I have found are not crystal clear.
 
 Regards
 
 Storm66
 
 
 

If the system you want has been assembled, just get a LiveCD with a
recent kernel, boot it up and look at /proc/cpuinfo. I just got (late
March) an Athlon64 X2 4200+ at CompUSA and it does have the magic
virtualization stuff (the svm entry in flags is the giveaway, IIRC). This is
what mine looks like with a 2.6.21 kernel:

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 75
model name  : AMD Athlon(tm) 64 X2 Dual Core Processor 4200+
stepping: 2
cpu MHz : 2210.046
cache size  : 512 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips: 4422.77
TLB size: 1024 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
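
If you just want the yes/no answer without reading the whole dump, a quick check is:

```shell
# 'svm' in the flags line means AMD-V (Pacifica) is present;
# 'vmx' is the corresponding flag on Intel chips.
if grep -qw svm /proc/cpuinfo; then
    echo "AMD-V (svm) supported"
else
    echo "no svm flag -- no AMD hardware virtualization"
fi
```

Note that some BIOSes can still disable SVM even when the flag shows up, so check the BIOS setup too.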





Re: confused about performance

2007-06-16 Thread M. Edward (Ed) Borasky
Douglas Allan Tutty wrote:
 I meant the whole shebang.  
 
 I don't mean to distribute it.  More of a massive benchmark to compare
 the gcc with the best commercial compiler and see what difference it
 made to applications we use all the time.
 
 It would be interesting to do this for both 32-bit and 64-bit.  Since
 32-bit gcc is quite mature it could provide some insight into how to
 make it better.

We have the technology. :) Part of the charm of gcc is that it has
multiple targets and multiple source languages, is extremely well
documented, and can be used as a cross-compiler. On top of that it
supports language-level application coverage analysis and profiling.

I suspect that the language-independent and target hardware independent
parts of gcc are as good as they can be, given the constraints imposed
by such things as NP-completeness for a lot of the guts of compile-time
optimization. And given the trend towards dynamic languages like Perl,
Python, JavaScript, Ruby and the bastard cousin of Perl called PHP, the
focus of the open source communities seems to be on making *them* run
more efficiently -- gcc is good enough for a Ruby interpreter, a
Python bytecode interpreter, etc. But not, apparently, good enough for
the Parrot virtual machine -- aren't they using Haskell?

In any event, for most of the Intel chips (Pentium MMX and later) and
the AMD64 chips (*before* Barcelona, I think) the resources at

http://www.agner.org/optimize

are freely available and can help both compiler writers and compiler
users if they don't have the budget for a chip-specific compiler.

 
 As for debian's free software roots, as a comparison, even OpenBSD, which
 rants against the GPL at any opportunity, still uses gcc since there's no
 alternative; they're too small to be able to make a bsdcc.

Speaking of which, another bit of nostalgia. I was a happy Red Hat 9
user when Red Hat announced there would not be a Red Hat 10. So I went
and looked at all the distros. I ended up on Debian for a number of reasons:

1. The primary application I had in mind was algorithmic composition and
synthesis of music. Most of the projects used either Red Hat or Debian,
and since I had ruled out Red Hat, Debian was the logical choice.

2. I was also interested in alternative kernels, and the GNU Hurd kernel
was hosted by Debian.

3. The size of the repositories: Woody (stable at the time) had
something like 8 or 9 thousand packages, and Sarge had something like 15
thousand.

After about six months, though, it looked like the Hurd was a dead
project. It depended on specific technologies that weren't active, and
there were a number of more interesting and more active kernel research
projects, like the Dresden Real-Time Operating System (DROPS).
Eventually I got too busy with other projects, and the Linux kernel's
audio-specific pieces were stabilizing and improving rapidly, so I quit
pursuing the alternative kernels.






Re: confused about performance

2007-06-15 Thread M. Edward (Ed) Borasky
Giacomo Mulas wrote:
 Two or three months ago, when I last compiled the latest version of the big
 quantum chemistry code (NWChem) I use (which spends a lot of time doing
 floating point linear algebra). The computer on which I tested is a
 relatively old Athlon 64 3500+, so your mileage may vary on other machines.
 Oddly enough, an old version of GOTO was the fastest, followed closely by
 the optimised acml, then head to head the internal implementation provided
 by the quantum chemistry package and the (then) current GOTO, then atlas.
 Differences among all but atlas were measurable (i.e. reproducible) but
 very small, within 2%; atlas was ~10% slower. The Intel compilers produced
 much faster code than the gcc suite (both 3.4 and 4.1), despite running on
 AMD processors. This is a VERY specific test, of course, so I do not claim
 my conclusions should apply to anyone but me. On the other hand, they
 do apply rather well to my scenario by definition :)

I used to do this stuff (tune quantum chemistry codes) for a living.
Nostalgia ain't what it used to be. :) The only one I spent any great
length of time on was Gaussian (Gaussian 80, way back then), and it proved
extremely difficult to vectorize and parallelize, despite the efforts of
the developers to make it work well on such beasts as the CDC Cyber 205,
which had very deep pipelines. If NWChem is anything like that, I'm not
surprised the Intel compilers do a better job than GCC -- I don't think
GCC knows much about such things as keeping data in caches, data re-use,
chip-level parallelism, etc. If NWChem is open source, I'm sure someone
will come along and profile and tweak it.

 I will obviously evaluate atlas and new versions of the gcc suite again
 if/when it's worth the effort. I look forward to seeing an up-to-date
 version of atlas included in debian. I would actually be very glad to be
 able to switch to a completely clean environment using gcc, since I
 currently have to keep around hosts of libraries compiled with different
 compilers, and it's somewhat messy to maintain.

I wouldn't throw away that Intel compiler just yet. For that matter, I'd
give serious consideration to switching to a Core 2 Duo and a copy of
Intel's tuning tools ... they are quite good. Life's too short to wait
for calculations. :)
 
 Bye
 Giacomo
 





Re: confused about performance

2007-06-15 Thread M. Edward (Ed) Borasky
Douglas Allan Tutty wrote:
 On Fri, Jun 15, 2007 at 06:50:57AM -0700, M. Edward (Ed) Borasky wrote:
  
 I wouldn't throw away that Intel compiler just yet. For that matter, I'd
 give serious consideration to switching to a Core 2 Duo and a copy of
 Intel's tuning tools ... they are quite good. Life's too short to wait
 for calculations. :)
 
 I don't suppose, for fun, it would be possible to compile debian amd64
 on one of the good commercial compilers?
 
 Doug.
 
 
You mean the kernel, or all of the packages, or both? ;)

I don't think it would be much fun, given Debian's free software roots. ;)





Re: confused about performance

2007-06-14 Thread M. Edward (Ed) Borasky
Leopold Palomo-Avellaneda wrote:
 On Thursday 14 June 2007 at 15:21, Lennart Sorensen wrote:
 On Wed, Jun 13, 2007 at 04:40:52PM -0600, Sebastian Kuzminsky wrote:
 Hi folks, I just bought a pair of AMD64 systems for a work project,
 and I'm confused about the performance I'm getting from them.  Both are
 identically configured Dell Dimension C521 systems, with Athlon 64 X2
 3800+ CPUs and 1 GB RAM.

 On one I installed using the Etch (4.0r0) i386 netinst CD, then upgraded
 to Lenny.  This one's running linux-image-2.6.21-1-686.

 On the other I installed using the current (as of 2007-06-13) Lenny d-i
 amd64 snapshot netinst CD.  This one's running
 linux-image-2.6.21-1-amd64.

 The one with the x86 userspace and 686 kernel is faster than the one
 with x86_64 userspace and amd64 kernel.  The difference is consistently
 a few percent in favor of x86 over x86_64.

 My only benchmark is compiling our internal source tree (mostly running
 gcc, some g++, flex, bison, etc).  We're using gcc-4.1 and g++-4.1.
 I've tried it with a cold disk cache and hot disk cache, in both cases
 x86 is faster than x86_64.

 I was expecting a win for 64 bit.  What's going on here?
 64 bit is faster at some things.  For things like gcc you may simply be
 gaining nothing and losing a few percent due to having to move around 8
 bytes per pointer rather than 4 bytes per pointer.  Certainly on sparc64
 I believe that is known to cause a slight slowdown.  On sparc most
 programs are 32bit, I believe, with only the few specific ones that gain
 from 64bit (like using lots of ram) compiled for 64 bit.

 Now anything using floating point should gain significantly on 64bit.
 Of course none of your list of tests do.  SSE (the only floating point
 used on x86_64) is much faster than the old stack based x87
 instructions.

 There are also some programs that gain some performance benefit from the
 extra registers that you get in 64bit mode, but for most programs it
 probably doesn't really matter.  gcc may also not be very good at using
 those extra registers yet.

 Of course if you need more than about 3GB ram in your system, 64bit will
 probably win simply by avoiding the (not insignificant) overhead of PAE
 (needed to access more than 4GB address space on x86).  Also if a
 program can take advantage of more than 2 or 3GB ram by itself, on 64bit
 you can use however much ram you have for the application, while on x86
 you are limited to 3GB ram per application.
 
 really, reading this makes me doubt the whole port. How many apps do we
 have in the Debian pool that actually gain any performance?
 
 Regards,
 
 Leo
 
 

Well ...

1. 32-bit isn't going away. The new chips are all going to be
64-bit-capable (and virtualization-capable, multi-core, etc.) but the
software is going to lag that.

2. There are two reasons you'd want a 64-bit machine:

   a. You have a production application that *needs* a 64-bit machine.
For the moment, most but not all of those are high-performance
scientific computing.

   b. You want to develop 64-bit applications

3. I've got an Athlon64 X2 (64 bit, dual-core, virtualization capable).
I'm running a 64-bit OS (Gentoo, although Etch works just fine on it). I
haven't done any 32-bit vs. 64-bit benchmarking or GCC benchmarking on
it because quite frankly, I think such efforts are wasted and
irrelevant. I got the machine for b. above -- I want to develop 64-bit
multi-core virtualization-aware applications.

As far as the applications in the Debian pool, or open source in
general, are concerned, I would first look at large-scale scientific
computing. And I would start with making sure that you have the Atlas
(automatically tuned linear algebra subroutines) libraries and their
BLAS and LAPACK interfaces all tuned up and ready to install on all the
architectures. The ATLAS team is just about ready to release 3.8 -- they
are at 3.7.32 at the moment and I think what's in Debian is still 3.6.0.
The last test I ran on my Athlon64 X2 4200+ (2.2 GHz) got me about 10
gigaflops in 32-bit arithmetic and about half of that in 64-bit arithmetic.





Re: confused about performance

2007-06-14 Thread M. Edward (Ed) Borasky
Lennart Sorensen wrote:
 On Fri, Jun 15, 2007 at 07:53:44AM +1000, Alex Samad wrote:
 sounds like a sane thing to do then: run an amd64 kernel and build your
 apps in 32-bit mode.  They get the advantage of 32 over 64, but you get
 the advantage of having lots more of them running under your 64-bit kernel?
 
 Well I run a 64bit machine with a 64bit kernel with 32bit user space,
 and a 64bit chroot.  The choice of that setup was because the target of
 the development runs 32bit x86 code only and 64bit is just to play with.
 
 I also haven't seen any 32bit program that is actually faster than 64bit
 that I have tried lately, so perhaps it is actually 32bit programs that
 are the exception for good performance rather than the rule.  Keep a
 32bit chroot for those few programs where 32bit still somehow has better
 performance, but use 64bit for most things since it seems to be
 generally the fastest.
 
 --
 Len Sorensen
 
 

The strategy the Gentoo devs recommend is to run full 64 bit and have a
*32-bit* chroot for those apps which don't work when compiled for a
64-bit architecture. Of course, since most everything in Gentoo is
compiled on the user's machine and not pre-compiled like Debian, that
strategy might not be optimal for Debian. I have three systems, though,
only one of which is 64-bit (and none Intel) ;) so if an app doesn't
work on a 64-bit box, I just run it on one of the others.
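
On Debian, the usual way to build such a 32-bit chroot is debootstrap; a sketch, where the chroot path and mirror are illustrative:

```shell
# Build a 32-bit (i386) Etch chroot on an amd64 host (requires root).
# /srv/chroot/etch32 and the mirror URL are placeholders.
apt-get install debootstrap
debootstrap --arch i386 etch /srv/chroot/etch32 http://ftp.debian.org/debian
# linux32 makes uname report i686 inside the chroot, which some
# 32-bit build systems expect.
linux32 chroot /srv/chroot/etch32 /bin/bash
```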





Re: confused about performance

2007-06-14 Thread M. Edward (Ed) Borasky
Giacomo Mulas wrote:
 On Thu, 14 Jun 2007, Leopold Palomo-Avellaneda wrote:
 
 The last test I ran on my Athlon64 X2 4200+ (2.2 GHz) got me about 10
 gigaflops in 32-bit arithmetic and about half of that in 64-bit
 arithmetic.

 I don't understand that. Are you saying that the 64-bit was really
 worse than 32?
 
 He is saying that the version of the ATLAS linear algebra libraries
 distributed by default in debian is much better optimised for performance
 in the 32 bit version than in the 64 bit version. However, if you are in
 it for speed, you are way better off using a better optimised linear
 algebra library, such as the GOTO library (hand-optimised assembler
 written by Kazushige Goto), or the acml libraries provided by AMD for
 AMD processors or the MKL libraries provided by Intel for Intel
 processors. All of these provide vastly better performance than the
 current incarnations of ATLAS in 64bit.

How recent is your data? I was under the impression that Atlas had
caught up with GOTO and ACML, and possibly even the Intel libraries on
the Core 2 Duo. I'm on the Atlas mailing list -- I can ask there. In any
event, there is enough assembly code in Atlas that I'd expect it to be
competitive with both GOTO and the vendor libraries on AMD and Intel
64-bit chips. And I think 3.7.32 cleaned out some bottlenecks in the
64-bit SPARC code as well, so for SPARC it's probably worthwhile for
Debian to package 3.7.32, but not older versions.

 ATLAS can be expected to improve
 a lot quickly, but currently is far behind in the 64 bit version. 

As I noted above, that's not the impression I got.

 Also,
 if you are after performance, you should consider using some commercial
 compiler (if you can afford it) instead of the GCC suite, until GNU
 compilers become as good at optimising for x86_64 processors as they are
 for x86.

You're probably right here, at least for 4.1 GCC and older. I haven't
seen anything on GCC 4.2 yet. If you have an *Intel* chip, you
definitely should look at the Intel compilers. They are written by some
good folks and neighbors of mine -- I used to work with a couple of them
in a galaxy long ago and far away. :) And they have access to all the
magic counters on the chip, which as far as I know, isn't even on the
GCC road map.

Finally, for those of you who love the joy of doing this sort of land
speed record chasing, there's an excellent collection of resources at

http://www.agner.org/optimize/






Re: confused about performance

2007-06-14 Thread M. Edward (Ed) Borasky
Stephen Olander-Waters wrote:
 On Thu, 2007-06-14 at 16:08 +0200, Leopold Palomo-Avellaneda wrote:
 really, reading this makes me doubt the whole port. How many apps do we
 have in the Debian pool that actually gain any performance?
 
 Personally, I don't care. I went 64-bit for *FUN*, not performance,
 though I am hoping that World Community Grid will port its apps to
 64-bit and take advantage of the new registers.
 
 Geek out,
 -s
 
 
 

Yeah ... I did it for fun too. :)





Re: confused about performance

2007-06-14 Thread M. Edward (Ed) Borasky
Leopold Palomo-Avellaneda wrote:
 On Thursday 14 June 2007 at 16:40, M. Edward (Ed) Borasky wrote:
 [...]
b. You want to develop 64-bit applications
 
 Why? Wanting to develop applications is one thing; whether they run on a
 32- or 64-bit system is another. Maybe they run better or worse, but the
 final architecture shouldn't itself be the target, I think.

Well in my case the fun is developing 64-bit multicore applications
and maxing out the machine. I'm not doing this with a profit motive as
yet, so I don't have a problem with limiting myself to things that will
only run in a 64-bit machine and which are tuned for multicore.
 
 [...]
 
 As far as the applications in the Debian pool, or open source in
 general, are concerned, I would first look at large-scale scientific
 computing. And I would start with making sure that you have the Atlas
 (automatically tuned linear algebra subroutines) libraries and their
 BLAS and LAPACK interfaces all tuned up and ready to install on all the
 architectures. The ATLAS team is just about ready to release 3.8 -- they
 are at 3.7.32 at the moment and I think what's in Debian is still 3.6.0.
 
 OK, but I think these apps are developed in Fortran, and is gfortran
 sufficiently developed to make a good difference? Maybe GSL or MTL...

I don't do much FORTRAN and even less C. When I want raw speed I program
in Forth, and when I want convenience I use R for number crunching, Perl
for quick scripting and Ruby for longer-lived scripting projects.

 
 The last test I ran on my Athlon64 X2 4200+ (2.2 GHz) got me about 10
 gigaflops in 32-bit arithmetic and about half of that in 64-bit arithmetic.
 
 I don't understand that. Are you saying that the 64-bit was really worse
 than 32?

It's natural that 64-bit operations would take longer than 32-bit ones:
in 32-bit you're moving and adding/multiplying half the number of bits.
That's kind of a cheat, though, because only some signal and image
processing operations and some iterative particle-based simulation
models work really well in 32-bit arithmetic. Big matrix jobs require
64-bit arithmetic and sometimes more. But you can do a *lot* of physics,
signal processing and image processing with a fast 32-bit floating point
unit, so it's a useful thing for at least one class of user.





Re: deciding on a new amd64 system

2007-05-22 Thread M. Edward (Ed) Borasky

Alexandru Cardaniuc wrote:

Hi All!

I've been using an HP Pavilion zv5260 as a desktop replacement for a while
and now decided to get a real desktop. I am not sure if I should build a
new box myself or buy a pre-built one. I need a home workstation that is
going to be used primarily for writing and debugging code, browsing the
internet, and occasionally watching DVDs. I don't edit video and don't play
video games. So I figured I don't need that powerful and expensive a
computer. In this case does it make sense to build one myself?

I googled and found that Dell offers Dimension n Series E521.
http://www.dell.com/content/topics/segtopic.aspx/e510_nseries?c=uscs=19l=ens=dhs~ck=mn

It comes with no Windows OS preinstalled. And Dell claims that it is
ready to work under linux. 


Does anybody here have this machine? Are there any compatibility issues?
I plan to use it with Debian Etch. Etch comes with linux kernel 2.6.18

Googling I found out about a problem with USB freezing mice and
keyboards, but it seems that this problem was solved with BIOS update
that Dell issued in January, 2007. Google doesn't show any more problems
with it...


I am thinking about choosing these parts:
-
Dell Dimension E521N  AMD Athlon 64 X2 Dual-Core 4000+

Operating System: FreeDOS included in the box, ready to install

Memory  1GB Dual Channel DDR2 SDRAM at 667MHz- 2DIMMs

Dell USB Keyboard and Dell Optical USB Mouse

19 inch SP1908FP Silver Flat Panel Monitor TrueLife (Glossy Screen)

256MB NVIDIA Geforce 7300LE TurboCache

Hard Drive 250GB Serial ATA Hard Drive (7200RPM) w/DataBurst Cache

No Floppy Drive Included

Integrated 10/100 Ethernet

Modem   56K PCI Data Fax Modem

CD ROM/DVD ROM  16x DVD+/-RW Drive

Integrated 7.1 Channel Audio

Speakers Dell AS501 10W Flat Panel Attached Spkrs for UltraSharp Flat
Panels

Limited Warranty, Services and Support Options 1Yr In-Home Service,
Parts + Labor - Next Business Day*

FREE GROUND SHIPPING!   

Total Price (taxes included)$757.30 
-

It seems like the price is right. Before, I always built computers
myself, but now would I actually be able to build a box myself for this
price? Well, I don't necessarily want cheap, I just don't need a very
powerful machine for what I am using it for...



Any advice or suggestions will be much appreciated!

Thanks in advance...


  
I just had a box built at CompUSA. It took me a while to get it up and 
running, but it's happily running Linux now. I looked at the Dell 
Linux-ready systems but ended up with a custom system mostly because:

1. I didn't want to wait.

2. The Dell AMD systems didn't include the option to remove the monitor. 
The Intel systems did, but I wanted AMD.

3. The Dell memory prices were too high.

So I ended up with a 4 GB Athlon64 X2 4200+.

If you already have a monitor, you could get the low-end Dell Intel 
Linux-ready system without one and save about $150 US, IIRC.






Re: deciding on a new amd64 system

2007-05-22 Thread M. Edward (Ed) Borasky

Lennart Sorensen wrote:

On Mon, May 21, 2007 at 11:24:11PM -0700, Alexandru Cardaniuc wrote:
  

Hi All!

I've been using an HP Pavilion zv5260 as a desktop replacement for a while
and now decided to get a real desktop. I am not sure if I should build a
new box myself or buy a pre-built one. I need a home workstation that is
going to be used primarily for writing and debugging code, browsing the
internet, and occasionally watching DVDs. I don't edit video and don't play
video games. So I figured I don't need that powerful and expensive a
computer. In this case does it make sense to build one myself?

I googled and found that Dell offers Dimension n Series E521.
http://www.dell.com/content/topics/segtopic.aspx/e510_nseries?c=uscs=19l=ens=dhs~ck=mn

It comes with no Windows OS preinstalled. And Dell claims that it is
ready to work under linux. 


Does anybody here have this machine? Are there any compatibility issues?
I plan to use it with Debian Etch. Etch comes with linux kernel 2.6.18

Googling I found out about a problem with USB freezing mice and
keyboards, but it seems that this problem was solved with BIOS update
that Dell issued in January, 2007. Google doesn't show any more problems
with it...


I am thinking about choosing these parts:
-
Dell Dimension E521N  AMD Athlon 64 X2 Dual-Core 4000+



Personally at this time I would buy a Core 2 Duo instead.  Faster and
more efficient.

Oh, and it's a Dell, so the power supply and mainboard and possibly other
things are probably proprietary and not replaceable.  And the power
supply is probably only barely large enough to handle the system, so
upgrades could be tricky.  At least that is how Dell Dimension PCs were
in the past.

  

Operating System: FreeDOS included in the box, ready to install

Memory  1GB Dual Channel DDR2 SDRAM at 667MHz- 2DIMMs



Why does Dell (and other name brands that rip off the clueless consumer)
insist on putting slow ram in machines with fast CPUs?  800MHz ram
doesn't cost that much more.  I guess they figure their customers only
care about price.

  

Dell USB Keyboard and Dell Optical USB Mouse

19 inch SP1908FP Silver Flat Panel Monitor TrueLife (Glossy Screen)

256MB NVIDIA Geforce 7300LE TurboCache



And then they slow down the ram some more by making the video card
borrow from it.

  

Hard Drive 250GB Serial ATA Hard Drive (7200RPM) w/DataBurst Cache

No Floppy Drive Included

Integrated 10/100 Ethernet

Modem   56K PCI Data Fax Modem

CD ROM/DVD ROM  16x DVD+/-RW Drive

Integrated 7.1 Channel Audio

Speakers Dell AS501 10W Flat Panel Attached Spkrs for UltraSharp Flat
Panels



Well all that stuff is probably typical.

  

Limited Warranty, Services and Support Options 1Yr In-Home Service,
Parts + Labor - Next Business Day*

FREE GROUND SHIPPING!   

Total Price (taxes included)$757.30 
-

It seems like the price is right. Before, I always built computers
myself, but now would I actually be able to build a box myself for this
price? Well, I don't necessarily want cheap, I just don't need a very
powerful machine for what I am using it for...



The difficult part in getting a price like Dells is that most people
building a computer aren't willing to cut the corners Dell likes to cut.

Let us try though:

Athlon 64 X2 4000 $122
2 x 512MB DDR2-6400 800MHz OCZ platinum ram $80
Asus M2V mainboard (10/100/1000 ethernet, 5.1 audio) $90
WD 250GB SATA $79
LG 18x DVD+-RW $38
Antec SLK1650 (case with 350W PS) $70
USB mouse/keyboard $30
7300 video card $63
19" LCD screen $200

Total: $772 (canadian) which is about $730 US.

Significantly higher quality components than the Dell, but you would
have to buy and assemble the parts yourself, and you don't get tech support
or a warranty (well, warranty on the parts, not the system).

But overall, Dells price is just OK, not great.  Remember the Dell is
full of cheap junk which helps them keep the price down.

Modem (if you actually need one) which is actually a hardware modem that
works with linux is probably $75 or so.  Haven't bought one in years.  I
tend to assume most people don't need it so I will ignore it.  I would
be surprised if dell included anything other than a winmodem in their
system.

Personally I would go with spending more on a Core 2 Duo if I was buying
one, but I am not at the moment. :)  And I would get a 7600GT rather
than a 7300, and I would go for a Silverstone TJ04-B case and probably a
Silverstone 450W power supply.  And I wouldn't go for less than a 20"
screen, since I hate 1280x1024 screens, while 20" gives you 1600x1200.
Of course those changes would probably add another $500 to the price.

--
Len Sorensen


  
Well ... I don't want to get into Intel vs. AMD (until the AMD Quad 
Cores are out, anyhow) :). But I can't conceive of running a processor 
that fast in Linux with only a GB of RAM, and I can't conceive of 

Re: Problems starting X in a new Athlon 3800+ system

2007-05-16 Thread M. Edward (Ed) Borasky

[EMAIL PROTECTED] wrote:

Hi.  I've been using the latest unstable with a Pentium III for quite a while 
now with no problems at all.  I decided to upgrade my h/w to an Athlon 3800+ 
and I thought I could just install the disks from the old system in the new h/w 
and work from there.

The new system, as I said, is an Athlon 3800+ with an ATI Radeon 9200 (from 
Sapphire).  Everything worked just fine until gdm started up.  It looks like X 
will start, but when the screen blanks it just stays that way and I cannot 
break X or reboot the system with ctrl-alt-del.

My old system was running an ATI Rage 128 Pro, but what I did was boot the new 
system into single user mode and reconfigure 'xserver-xorg', accepting sensible 
options.  The configuration recognized the card and the monitor, etc.  But X 
just doesn't work.

At this point, I decided to reinstall using the Debian 4.0 netinst cd thinking 
that the reinstallation would default to a useful value, but the 
re-installation has the same problem with X.  Does anyone have any suggestions 
I might follow to get my system back up?  Everything works fine in the text 
console; it's just X that seems to be having problems.  BTW, I've tried running 
a livecd (centos 4.3) and everything works just fine  (gdm starts up, etc.).  
I'd appreciate any help.  Thanks

-Jose


  
Isn't there a log file -- something like /var/log/Xorg.0.log? Whenever I 
screw up an X config or something screws it up for me, I usually can get 
the answer in that log file.
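
A quick way to pull just the interesting lines out of that log; the path is the stock Xorg default and may differ on some systems:

```shell
# Xorg prefixes errors with (EE) and warnings with (WW); listing those
# lines usually points straight at a broken driver or mode setting.
LOG=${LOG:-/var/log/Xorg.0.log}
grep -nE '\((EE|WW)\)' "$LOG"
```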






Re: New box crashes

2007-05-07 Thread M. Edward (Ed) Borasky

Jack Malmostoso wrote:

I've gotten a
few traces in /var/log/messages, which I'll post to the appropriate
place as soon as I find out what the appropriate place is.



If it's kernel related, the LKML is the right place.
  
Yeah ... once I try a 32-bit kernel, that's where I'm going. Etch, 
Feisty Fawn, Gentoo 2006.1 and CentOS all have similar problems, so it's 
pretty much got to be either hardware or the upstream 64-bit kernel.

In order:

1) Check the memory with memtest for 12+ hours
  
So far it has about five hours with no errors. I'm hoping there are 
other diagnostics I can use.
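
One userspace option beyond memtest86 is the memtester package (the size below is a placeholder; leave headroom for the OS out of the 1 GB):

```shell
# memtester locks a chunk of RAM and runs bit-pattern tests on it from
# userspace, so it can run while the system is up -- no reboot needed.
# Requires root to lock memory.
apt-get install memtester
memtester 512M 3      # test 512 MB for 3 passes
```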

2) Check that the HD cables are correctly seated in their sockets
  
This seems unlikely -- the messages I'm getting look more like memory 
management than I/O. I have one GB -- I'm thinking that might not be 
enough for a 64-bit kernel.

3) Check that the computer isn't running too hot
  
What's the package in Etch that does that? I couldn't find it in the 
Gnome desktop.
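
One answer, assuming the package names of that era are right: the lm-sensors tools read the motherboard's monitoring chip, and Gnome applets such as sensors-applet sit on top of them.

```shell
# Probe for hardware monitoring chips, then read temperatures, fan
# speeds and voltages. sensors-detect asks questions and loads modules.
apt-get install lm-sensors
sensors-detect
sensors
```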
4) Check with another distro. Check the md5sum of the iso BEFORE burning 
it
  
Pretty much all the distros and kernels ranging from 2.6.17 through 
2.6.18 do this. If I can get the 2.6.20 kernel to build without a crash 
during the compile, I'll check it out.
5) If nothing yields results, disassemble the computer and remount it 
with more care and love


Good luck!
  
By the way, so far, Etch has been the most stable and it's the only one 
that's configured the video right. Feisty Fawn crashed during the 
install, CentOS can't give me a reasonable looking screen and Gentoo 
hasn't been able to do a kernel build without crashing. If I can figure 
out make-kpkg and start building my own kernels, lenny may be the 
best choice for this system. It's a scientific workstation.
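
The make-kpkg workflow is roughly as follows; the kernel version, source path, and revision string are illustrative:

```shell
# Build a Debian kernel package from a vanilla source tree with
# kernel-package, then install the resulting .deb.
apt-get install kernel-package libncurses5-dev fakeroot
cd /usr/src/linux-2.6.21          # unpacked kernel source (example path)
make menuconfig                   # configure; start from a /boot/config-* file if unsure
make-kpkg --rootcmd fakeroot --initrd --revision=custom.1 kernel_image
dpkg -i ../linux-image-2.6.21_custom.1_amd64.deb
```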






New box crashes

2007-05-06 Thread M. Edward (Ed) Borasky
I just got a new box (Athlon 64 X2 4200+ on a Gigabyte Technology NVIDIA 
GeForce 6100 Socket AM2 AMD ATX Motherboard). I'm getting miscellaneous 
crashes on Etch. They usually occur during I/O intensive operations, and 
at this point I have no reason to suspect the hardware. I've gotten a 
few traces in /var/log/messages, which I'll post to the appropriate 
place as soon as I find out what the appropriate place is.


So, where does one take this sort of thing? I don't have enough 
information yet to rule out hardware or enough decent debug traces to 
file a defect anywhere. If I can find a non-SMP kernel for Etch on an 
AMD64, I'll probably install it just to see if this stuff goes away.


