(I'm reposting this message in hopes someone with the answers will find
it.)
Hi everyone,
Well, I just upgraded my network to 100baseTx. WOW! What a difference it
made! I used all Linksys products: an 8-port dual-speed hub and 10/100 PCI
NICs. I have set up one system as an X server using KDM login and two
systems as X terminals (a sketch of how the terminals start is below). The
server runs on a 266 MHz AMD K6 with 128 Megs and two WD Caviar 6.4's. The
terminals are an AMD P-90 and a Cyrix PR-200 with 16 and 32 Megs, a Seagate
1 Gig and a Caviar 2 Gig. Video is S3 Virge and Riva 128, each with 4 Megs.
The server runs the onboard SiS, and with the latest SVGA server from
XFree86 3.3.3 it works fairly well. All machines run at 1280x1024 in 16-bit
color. I am also running X-WinPro 5.0 on top of "98" on another machine.
Nice package -- the X-WinPro, that is. ;-)
The server also provides Samba disk services to the "98" machine, makes use
of its NEC (Windows-only -- GAK!) laser printer, and serves intranet web
pages via Apache. Linux is the best thing since sliced bread! And SuSE is
the butter on that bread!
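In case anyone wants to duplicate the terminal setup: each terminal just runs
its local X server and asks the KDM box for a login via XDMCP. The host name
below is a placeholder, and XDMCP has to be enabled on the server side, but
the idea is roughly:

    # on a terminal, instead of starting a local display manager:
    X -query kdmserver    # kdmserver = the machine running KDM with XDMCP enabled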
Having worked with Linux regularly for only about a year, I know I have
lots yet to learn. But I hope to make the terminals completely diskless, or
at least floppy-only, in the near future. As it stands, they have a fairly
minimal SuSE installation. Setting up the terminals via the server's CD-ROM
over NFS was a snap (the export I used is sketched below). I have
experimented with finding the optimal amount of RAM for the terminals using
DIMMs I had handy. I started with 8 Megs, which proved too little -- lots
of swapping. Now one has 16 and the other 32, and both seem to work fine. I
suppose I was under the mistaken impression that system RAM was not that
important once X was loaded; I would greatly appreciate it if someone could
explain why that isn't so. Would, for instance, more RAM on the video cards
reduce the amount of system RAM needed, given that the terminals run
nothing but the X server locally? Being diskless excludes, of course, using
a swap partition.
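For the curious, the NFS part really is just one line on the server plus a
mount on each terminal. The paths and host names here are placeholders, not
my exact setup:

    # /etc/exports on the server -- CD-ROM exported read-only to the terminals
    /cdrom    term1(ro) term2(ro)

    # on a terminal:
    mount -t nfs server:/cdrom /cdrom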
Now, all of that said, here is my main question: why does ifconfig report
the NICs as running at only 10 Mbps? There is no doubt that the cards are
linked at 100 Mbps. Their greatly increased performance over 10 Mbps is
obvious, and, of course, the indicator lights on the cards and the hub show
they are linked at 100 Mbps.
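For what it's worth, the same Becker site also carries a small diagnostic
utility (mii-diag, if I remember the name right) that reads the transceiver
registers directly, which seems like a more trustworthy way to see what was
actually negotiated than whatever ifconfig prints. Assuming it is compiled
and sitting in the current directory, running it is just:

    ./mii-diag eth0    # dumps the MII transceiver registers and the negotiated link for eth0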
I downloaded and built the latest tulip module from Donald Becker's NASA
CESDIS site. It is v0.90 and is definitely being loaded at boot. I have
tried several different options in modules.conf ("100baseTx", etc.; an
example of what I have in there now is below), and the valid options are
listed in the Makefile. Where does the 100 Mbps actually take place? Is it
solely on what I would call the hardware side, i.e., NIC > hub > NIC? Or
does the driver module play a role in actual data throughput? Simply put,
does the module influence performance, and if it does, how do I get it to
run at 100 Mbps?
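For reference, here is the sort of thing I have in modules.conf at the
moment. The media index is just a number I pulled out of the table in the
driver source, so treat it as a placeholder rather than the right answer:

    # bind the first ethernet interface to the tulip driver and force a media type
    alias eth0 tulip
    options tulip options=5    # media index from the table in tulip.c / the Makefile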
Thanks a Million for any Help! :-)
--
D. R. Grindstaff
MACROSHAFT Corp. has Performed an Illegal Operation and Will be Shut Down !!!!!
Powered by SuSE 5.3