FS: APC SUA1000RM2U Smart-UPS

2019-08-02 Thread Dave Johnson


It's heavy so selling online/shipping isn't practical.  Anyone interested?

Available for pickup in Hollis or I might be convinced to bring it
somewhere close by.  Looking for $225, but make me an offer.


APC SUA1000RM2U Smart-UPS 2U Rackmount w/AP9617 Network Card & 4-post rails

Tested and working; it was in use in my basement rack until no longer
needed.

Includes Fresh Batteries (about 1 year old)

Includes AP9617 Network Management Card

Includes both 2-post and 4-post rails with screws and hardware


https://www.apc.com/shop/gy/en/products/APC-Smart-UPS-1000VA-USB-Serial-RM-2U-120V/P-SUA1000RM2U


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Local inexpensive media destruction?

2014-12-22 Thread Dave Johnson

Anyone know of a local/inexpensive media destruction service?


As I'm sure many of you do, I have a box with 15+ years of obsolete
hard drives, floppies, tapes, CD-Rs, DVD-Rs, flash media, etc. in my house.

While there have been many trips to e-scrap over the years to get rid of
computers, the media is always removed beforehand.

There are plenty of companies that will provide me with destruction
services including secure transport or on-site processing, liability
insurance, certification, logs of all items, videos of my actual items
being shredded, etc.. they all come with equally impressive bills.

Simple degaussing and/or physical destruction (preferably while I watch,
because that'd be cool), with the resulting scrap recycled, is sufficient.


If not, sounds like an opportunity for a data destruction party where
someone has/rents a degausser and an appropriate shredder.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


free: Supermicro 5012B6 1U server

2013-04-19 Thread Dave Johnson

Supermicro 5012B6 1U server

http://www.supermicro.com/products/system/1u/5012/sys-5012b-6.cfm

  Intel P4 Celeron 1800MHz
  Single 512MB ECC DIMM
  Two U160 SCA 36GB hotswap SCSI drives (plus 1 spare drive)

Ran reliably for 5+ years with multi-year uptimes and zero unexpected
crashes over its lifetime; retired a few years ago.

Was fully functional at time of retirement, includes rackmount rails
and original boxes/packing materials.

It's going to e-scrap Monday morning unless someone wants it.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Widget to manipulate parallel port signals ?

2010-09-08 Thread Dave Johnson
Michael ODonnell writes:
 
 Anybody know of a (commandline or GUI) utility that I could use to
 wiggle/sense the individual data/control lines of a parallel port?
 I'd prefer that it operate using one of the standard drivers (like
 parport_pc) via ioctls rather than poking around directly in I/O
 or memory space at hardcoded addresses as I'd hope it'd be flexible
 enough to work with either an integral legacy device or an add-in
 device connected via PCI, USB, etc.
 
 I found (but for the moment have misplaced the URL of) one that
 appears to be somebody's homework project; it has a GUI that presents
 the control/data lines separated into two clusters of buttons whose
 color indicates current state and that you can click to change
 their state, etc, etc, but it looked like I'd have to duplicate
 his C++ development environment to get it working, so before I go
 down that road I'm hoping somebody knows of one that'd be more of
 a turn-key experience...


I wrote a test program when I was making my LCD control daemon for my
router:

http://centerclick.org/temp/lcd.tgz

The part you want is lcdraw.c.

It takes commands from the command line and performs all the basic
operations on the parallel port.

Usage: ./obj/lcdraw command [command...]
  command is one of:
rr   data direction in
rw   data direction out
dr   data read
dw value data write
cr   control read
cw value control write
sr   status read
kr   lcd data ram read
kw value lcd data ram write
ir   lcd instruction ram read
iw value lcd instruction ram write

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How can I retrieve the mount count for an ext3 volume?

2009-10-06 Thread Dave Johnson
Alex Hewitt writes:
 My Ubuntu 8.10 system uses EXT3 for the root filesystem and will 
 automatically fschk the volume every 35 mounts.  I haven't been able to 
 find out where the mount count is stored or how that data can be 
 retrieved. I don't want to change the automated fschk but I'd like to 
 display the count so I can anticipate when the volume will be checked. 
 
 -Alex


$ dumpe2fs -h /dev/sda1
[...]
Mount count:  9
Maximum mount count:  24
Last checked: Tue Mar 10 22:06:05 2009
Check interval:   15552000 (6 months)
Next check after: Sun Sep  6 22:06:05 2009
[...]
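
If you'd rather not wade through the whole superblock dump, tune2fs shows
the same fields (the device name is just an example):

$ tune2fs -l /dev/sda1 | grep -i 'mount count'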


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How can I retrieve the mount count for an ext3 volume?

2009-10-06 Thread Dave Johnson
Ben Scott writes:
 On Tue, Oct 6, 2009 at 10:43 AM, Tom Buskey t...@buskey.name wrote:
  And RAIDs should be scrubbed periodically.
 
   Modern RAID controllers usually feature something called patrol
 read, which reads all the blocks on the physical disks in the
 background, when otherwise idle.
 
   Is there a similar feature in Linux's RAID implementation?

Yes, md has a 'check' action that you can issue to the kernel to have
it run a RAID consistency check.

mdadm issues this once a week (by default).

You can also issue this manually with something like:
$ echo check > /sys/block/md0/md/sync_action
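
You can watch the progress and the result with something like (md0 is
just an example):

$ cat /proc/mdstat
$ cat /sys/block/md0/md/mismatch_cnt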

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Generator testing

2009-09-10 Thread Dave Johnson
Ben Scott writes:
   But it's funny, lots of people have disaster stories... it seems
 everyone knows what *not* to do, or what can go wrong... but if you
 ask what you *should* do, and people are less certain.  :)

Umm... yeah.  Speaking of what not to do: don't keep adding load to your
building transformer without noticing it.

I worked at a place where, after adding more and more servers and lab
equipment every month, the transformer outside the building was way
overloaded 24/7.

I think it was already scheduled for an upgrade, but it wasn't soon
enough.  It started leaking oil into the parking lot one day, where
someone noticed it and called the power company.  They got there
really quickly and shut it off right away before it could fail
spectacularly.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Finding *unfiltered* free WiFi? (was: WAP/Router for use with OpenVPN)

2009-07-10 Thread Dave Johnson
Bill McGonigle writes:
 
  We've got the `open database of general knowledge' (Wikipedia), the
  open database of maps (OpenStreetMap), the open database of
  speed-limit signs (Wikispeedia), the open database of GSM cell-sites
  (OpenBmap)..., why not one for WiFi-hotspots?
 
 We actually talked about this a bit at the DLSLUG meeting on
 OpenStreetMap.  A WiFi node is just another type of node, with a certain
 tag.  I think somebody said wardrivers have already automated this?  It
 makes more sense to add the data to OpenStreetMap than to create another
 database.

Google has been recording location data of WiFi APs (no surprise
there); too bad the data isn't exported in a friendly way.  From what
I can tell, anywhere that has been Street View'ed has also had all
WiFi APs recorded as the car passed by taking pictures.

This was rather obvious when using the iPhone 2G (no GPS).  It would
contact some server via HTTPS and (presumably) send nearby WiFi AP
data in an attempt to get a more precise location.

It worked great when driving down a street that had been Street View'ed.
Whenever an AP from someone's house got in range, it would narrow down
the location rather well.  As soon as you went to a street with no
Street View pictures, it would revert back to the less accurate cell
tower location.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: WAP/Router for use with OpenVPN

2009-07-07 Thread Dave Johnson
Ben Scott writes:
  And they have to have enough computing-power to run WPA, right?
 
   Wireless crypto may be implemented in dedicated hardware in the
 wireless chipset, not on the general-purpose processor (where Linux
 and OpenVPN run), so that may not mean anything.

Yes, WiFi crypto will definitely be done in hardware.


If you're going to use OpenVPN without a hardware assist (like a
HiFn chip, etc.), CPU performance may be a concern.  Hardware assist in
OpenSSL is usually hard to find in general.

It shouldn't be an issue for occasional single-user connectivity though,
unless you need many mbps.


I have my OpenVPN links use Blowfish instead of AES for the data
channel because it's less CPU intensive, especially for small
block sizes.
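
If you want to compare ciphers on your own hardware, openssl's built-in
benchmark gives a rough idea (a synthetic test, so real tunnel numbers
will be lower):

$ openssl speed bf-cbc aes-128-cbc

The data channel cipher is picked with a 'cipher' line in the openvpn
config on both ends (e.g. 'cipher BF-CBC').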

For comparison, I use a Soekris net4801 (266MHz NSC/AMD Geode) for
my router/firewall and OpenVPN endpoints.

It does IPv4 forwarding fine, but only up to about 50mbps for large
packets.  Definitely not good for LAN-LAN, but fine for LAN-WAN.

Throw in OpenVPN, and the crypto and compression will drop the VPN data
rate to a few mbps.  Good for connectivity, just not performance.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: recommendations on virtualization software

2009-03-19 Thread Dave Johnson
Mark Ellison writes:
 Hi,
 
 I am seeking recommendations and pros/cons of different virtualization 
 software.
 
 The physical machine is a Intel T9400 quad core with 8GB ram, 2x500GB 
 sata disks and 1Gb nic.  My current plan is to run 64 bit Fedora Core 10 
 (or 11 as available) as the host OS.  The guest OSes will include a mix 
 vista, xp and other UNIX variants.
 
 I am aware of the commercially available VMware workstation, VirtualBox 
 and Xen.  Any feedback and recommendations are appreciated.

I've been impressed with kvm and have it running several guests right
now.

Pros:
* supported by linux kernel proper
* active development
* emulates guest NICs that work with jumbo frames (Intel 82540EM, e1000)
* built-in VNC server for the VGA console (standards compliant, not some
custom viewer; see the example invocation below)
* serial console via telnet
* designed for headless hosts
* faster than needed

Cons:
* no support for multi-cpu/core guests
* virtio isn't fully baked yet, but looks promising
* had to write some custom start/stop scripts
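
Something like this is a minimal sketch of an invocation (paths, names,
and the tap networking setup are just examples, and options vary a bit
by qemu-kvm version):

kvm -m 512 -smp 1 \
    -hda /var/lib/kvm/guest1.img \
    -net nic,model=e1000 -net tap,ifname=tap0 \
    -vnc :1 \
    -serial telnet:127.0.0.1:4441,server,nowait \
    -daemonize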


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: /proc/$$/fd mystery - {t,}csh vs. bash

2009-01-08 Thread Dave Johnson
Michael ODonnell writes:
   e521:~ 514--- /bin/csh
  e521:~ echo My PID is $$, contents of my /proc/$$/fd follow... ; ls -l 
 /proc/$$/fd
  My PID is 27244, contents of my /proc/27244/fd follow...
  total 0
  lrwx-- 1 mod mod 64 2009-01-08 20:47 15 - /dev/pts/0
  lrwx-- 1 mod mod 64 2009-01-08 20:47 16 - /dev/pts/0
  lrwx-- 1 mod mod 64 2009-01-08 20:47 17 - /dev/pts/0
  lrwx-- 1 mod mod 64 2009-01-08 20:47 18 - /dev/pts/0
  lrwx-- 1 mod mod 64 2009-01-08 20:47 19 - /dev/pts/0


Ah the wonders of strace

Because on startup /bin/csh dups stdin/out/err to higher fds...

execve(/bin/csh, [/bin/csh], [/* 31 vars */]) = 0
[... blah blah ...]
dup2(0, 16) = 16
fcntl64(16, F_SETFD, FD_CLOEXEC)= 0
dup2(1, 17) = 17
fcntl64(17, F_SETFD, FD_CLOEXEC)= 0
dup2(2, 18) = 18
fcntl64(18, F_SETFD, FD_CLOEXEC)= 0
dup2(16, 19)= 19
fcntl64(19, F_SETFD, FD_CLOEXEC)= 0
[... blah blah ...]
close(0)= 0
close(1)= 0
close(2)= 0

This means /bin/csh uses FDs 16, 17, and 18 as its stdin/out/err instead
of 0/1/2.

When running a program, after clone() it simply dup()s them back to
stdin/out/err before exec():

clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0xb7d266f8) = 11073
[... blah blah ...]
[pid 11073] close(0)= -1 EBADF (Bad file descriptor)
[pid 11073] dup(19) = 0
[pid 11073] fcntl64(0, F_SETFD, 0)  = 0
[pid 11073] close(1)= -1 EBADF (Bad file descriptor)
[pid 11073] dup(17) = 1
[pid 11073] fcntl64(1, F_SETFD, 0)  = 0
[pid 11073] close(2)= -1 EBADF (Bad file descriptor)
[pid 11073] dup(18) = 2
[pid 11073] fcntl64(2, F_SETFD, 0)  = 0
[... blah blah ...]
[pid 11073] execve(/bin/ls, [ls, -l, /proc/11071/fd], [/* 37 vars */]) 
= 0




-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: /proc/$$/fd mystery - {t,}csh vs. bash

2009-01-08 Thread Dave Johnson
Michael ODonnell writes:
 
 
  Because on startup /bin/csh dups stdin/out/err to higher fds...
   [...]
  this means /bin/csh uses FDs 16,17,18 as it's stdin/out/err instead
  of 0/1/2.
   [...]
  when running a program: after clone(), it just dup()s back to
  stdin/out/err before exec():
 
 Cool - thanks for the analysis.  Why might they be doing this?

from csh.h:

 /*
  * The shell moves std in/out/diag and the old std input away from units
  * 0, 1, and 2 so that it is easy to set up these standards for invoked
  * commands.
  */
 #define FSHTTY  15  /* /dev/tty when manip pgrps */
 #define FSHIN   16  /* Preferred desc for shell input */
 #define FSHOUT  17  /* ... shell output */
 #define FSHERR  18  /* ... shell diagnostics */
 #define FOLDSTD 19  /* ... old std input */


It does make things less complicated if you invoke programs with weird
redirection arguments.

If fds 0/1/2 are all unused just before you exec(), then it's simple to
dup/open them based on whatever redirection you want the program to
run under.  If they are in use, you may have to move things out of the
way before setting up redirection, or accidentally misplace the
shell's actual stdin/out/err while doing so.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is this a postfix problem? Receiving mail from cellphone

2008-11-30 Thread Dave Johnson
Dan Coutu writes:
 This is a RHEL server running postfix.
 
 Sending email to the server from my cell phone is giving an error and I 
 don't understand why. I'm hoping that someone here can shed light on it 
 for me.
 
 Here's the mail log entries that show the problem:
 
 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: connect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]
 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: NOQUEUE: reject: 
 RCPT from 150.sub-69-78-129.myvzw.com[69.78.129.150]: 450 4.7.1 
 njbrspamp5.vtext.com: Helo command rejected: Host not found; 
 from=[EMAIL PROTECTED] to=[EMAIL PROTECTED] proto=ESMTP 
 helo=njbrspamp5.vtext.com
 Dec  1 01:55:32 ec2-75-101-156-55 postfix/smtpd[31695]: disconnect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]
 
 Thanks for any help or pointers to resolving this.

Your phone (IP 69.78.129.150) is using an SMTP client (or SMTP relay)
that identifies itself in the SMTP HELO command as
njbrspamp5.vtext.com, which doesn't resolve in DNS.

njbrspamp5.vtext.com is likely an SMTP relay that simply
doesn't have a public DNS entry.

You can (carefully) loosen the HELO restrictions on your mail server
if you want to bypass this check for specific hosts.  Since the
Postfix default is to allow everything, you've likely already
modified this.  See:

http://www.postfix.org/postconf.5.html#smtpd_helo_restrictions
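
For example, a sketch that exempts just that relay while keeping the
check for everyone else (file names are just examples, and the last
restriction is whatever unknown-HELO check you already have configured):

In /etc/postfix/main.cf:

    smtpd_helo_restrictions =
        permit_mynetworks,
        check_helo_access hash:/etc/postfix/helo_access,
        reject_unknown_helo_hostname

In /etc/postfix/helo_access:

    njbrspamp5.vtext.com    OK

Then run 'postmap /etc/postfix/helo_access' and 'postfix reload'.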

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Old rackmount equipment?

2008-10-21 Thread Dave Johnson
Drew Van Zandt writes:
 Hey all,
I'm looking for old, not necessarily functional rackmount equipment to
 fill a rack with for some airflow experiments I'm doing, and I was wondering
 if anyone had junk lying around that would qualify.  Anything I can bolt
 into a rack will do, from shelves to token-ring switches.  Bonus points if I
 can plug it in and it produces heat (but not fire, preferably.)

I've got 3 Bay/Nortel BayStack 450 switches (1.5U) with rack-mount
hardware, stacking modules, various uplink modules, etc...  Fully
functional (though I think one has a single broken fan).  Free to you
or anyone else who wants to put them to actual use and can pick them
up from Dracut MA.

Their airflow is left to right instead of front to back; not sure how
that'll play into your airflow experiments.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Blackberry webpages

2008-09-27 Thread Dave Johnson
Travis Roy writes:
 This is a bit off topic, but does anybody know of any programs or
 webpages that will let me test a website to see how it will look on a
 blackberry? I'm trying to get a webpage to view properly and not
 having a blackberry to test it on makes for slow going.

It doesn't cover BlackBerry, but it's still an interesting site:

http://browsershots.org/

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Serial admin console program

2008-09-25 Thread Dave Johnson
Shawn O'Shea writes:
 Back in my days of managing my Sun boxes, we had a Lantronix and I hated it.
 My experience with them is *years* old though. I'm partial to the Cyclades
 (now Avocent) myself. Formerly had the TS2000, now Avocent pushes the ACS48.
 I've also used Logical Solutions SCS series (liked supporting them when I
 was in CT because they were local, Milford, CT, got to visit their offices)
 and some guys here at work also use the Raritan Dominions. They are all
 Linux based.
 
 If you are looking to roll-your-own, basically all of these Linux products
 do their port management with Portstlave.
 
 Linkroll:
 Avocent Cyclades ACS: http://www.avocent.com/Products/Default.aspx?id=6846
 Logical Solutions SCS: http://www.thinklogical.com/product.asp?ID=27
 Raritan Dominion: http://raritan.com/products/serial-console-switches/
 Portslave: http://sourceforge.net/projects/portslave/

I use both the Cyclades TS2000 and Avocent ACS32 at work.  The Cyclades
boxes have an underpowered CPU (they grind to a halt if you actually try
to run 32 interactive sessions plus 32 sniff sessions, all pushing 115k
at full rate over 64 different ssh sessions), but the Avocent ones are
rock solid.

They're a very expensive solution, but they run Linux, which makes them
cool.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux network behavior wierdness

2008-09-05 Thread Dave Johnson
Paul Lussier writes:
 
 Hi all,
 
 I have a wierd problem where some hosts respond to a broadcast ping
 packet and others don't.
 
 I have some hosts which, when I do a ping -b 10.95.0.0 everything answers.
 On other hosts, doing exactly the same thing, I get no response.
 
 A reboot resets a broken host, but over time it will re-develop the problem.
 And I can't seem to figure out how to make the problem occur...
 
 I can't figure out if it's something we're doing which is causing this
 change, or if it's a kernel thing where some threshold is reached and
 it just stops.
 
 Fwiw, we're running Debian/Etch with a 2.6.18 kernel.  Most NICs are
 Intel e1000, though some are broadcom.  In general, I've seen it
 happen across our lab on different hardware platforms with different
 motherboards and nics.
 
 Any insights would be most appreciated.

10.95.0.0 is an unusual broadcast address; how did you end up with
that?

Watch out for /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts.  Between
2.6.13 and 2.6.14 the default was changed to ignore broadcast pings.
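
You can check (and, as root, flip) it at runtime; 0 means the kernel will
answer broadcast pings again:

$ cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
# echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts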

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux network behavior wierdness

2008-09-05 Thread Dave Johnson
Paul Lussier writes:
 Paul Lussier [EMAIL PROTECTED] writes:
 
  Watch out for /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts.  The
  default was changed between 2.6.13 and 2.6.14 to ignore by default.
 
  Ooh, I didn't know that, thanks!
 
 Fascinating, so how does this work?
 
  $ ssh farm-519 ping -b 10.95.255.255 -c 1
  64 bytes from 10.95.34.112: icmp_seq=1 ttl=64 time=0.074 ms
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
 
  $ ssh farm-519 cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
  1

Ping can still send; it's just that farm-519's kernel won't reply.  I'd
assume 10.95.34.112 is some other host on the network that does reply,
right?


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How do you determine the amount of system memory?

2008-07-30 Thread Dave Johnson
Paul Lussier writes:
 Thomas Charron [EMAIL PROTECTED] writes:
 
  On 7/29/08, Paul Lussier [EMAIL PROTECTED] wrote:
  Thomas Charron [EMAIL PROTECTED] writes:
   Are you talking about a real bug, or the fact that meminfo only
   reports non kernel memory?
  A real bug.
 
A bug in that /proc/meminfo doesn't report the amount of physical
  memory uder MemTotal?
 
 Yes, and that possibly, over time, the amount of memory in
 /proc/meminfo actually decreases.

I'm assuming you mean total memory, not free memory :)  If so, which
total?  Mem, High, Low, Swap, Vmalloc?

Are you saying the *Total lines change while the system is running,
or simply that the MemTotal line isn't what you expect at boot time?

 I know there's an official, known-to-be-a-problem bug on this
 somewhere in the kernel bug tracking.  I just don't know the exact
 details.  I was given the task of figure out how we can test this.
 And, just being back from vacation, my mind is rather fuzzy on just
 about everything, including the time of day and day of week...

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: mythtv and digital tv

2008-07-26 Thread Dave Johnson
Derek Atkins writes:
 The biggest difference is that cable companies can choose to
 encrypt their QAM, which means you either need a cablebox or a
 device with a cablecard.  As far as I know no device usable by
 mythtv directly can use a cablecard without having yet another
 Digital-Analog-Digital conversion.

The 2nd biggest difference is that cable companies tend to re-compress the
heck out of their digital channels to squeeze as many channels into as
little bandwidth as possible.

Still, just 1-2 HDTV channels on cable take on the order of what is
devoted to Internet bandwidth (10-12mbps).  Imagine if the cable
companies dropped all TV (both analog and digital) and did only
broadband Internet over coax.  Throw in some 20mbps+ IPTV streams
via multicast and it'd give FiOS some competition.

As mentioned before, Comcast at least leaves the broadcast channels'
QAM unencrypted in both SD and HD, which my pcHDTV HD3000 grabs
bit-for-bit off the wire just fine.  That's way better quality than having
the SD cable box decode the QAM MPEG2 and send it over horrible composite
video/RCA audio (the stupid cable box doesn't even have S-Video) into my
PVR250, which then re-encodes it back into MPEG2.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: laptop fails to hear replies

2008-07-26 Thread Dave Johnson
Lloyd Kvam writes:
 When my laptop boots, it is configured to use a dhcp server.  For some
 time now, that processing has failed.  I can run 
 tcpdump -n ether host mac address
 and see the packets going out, but not the replies.
 
 At the dhcp server, the same tcpdump command shows the requests and
 replies.
 
 I can get the laptop network stack to function by manually assigning a
 valid IP address and then repeatedly pinging other computers on the
 network.  The arp table is empty, showing incompletes for the mac
 addresses.  Finally a ping will work and the arp table gets filled in
 and the networking functions are OK from then on.  I can restart the
 network service and now dhcp works.  (This behavior occurs at other
 sites, so it is not a switch or cable problem.)
 
 Since everything eventually works OK, I figure the hardware is good and
 I've fouled up some configuration item.  I've tried enabling and
 disabling NetworkManager, but it does not seem to have any impact.
 Stopping iptables also has no effect.
 
 I'm hoping someone can suggest a debugging approach or possibly stuff I
 misconfigured to create this problem.

Sounds like your switch(es) are running STP, LACP, and/or EAPoL.
Any of these can disable your ethernet port on the switch side
temporarily after link-up.  Those protocols will give up after 30-40
seconds and enable your port.

LACP has a state where a newly linked-up port can send into the network
but not receive from it; however, I think this only applies
when a port is entering a LAG (a staging state called
the 'collecting state' that is used to allow a new port to join a
pre-existing group without causing packet loss).

How long does dhcp try before giving up?  You can control this in ISC
dhcp3 with the 'timeout' setting in /etc/dhcp3/dhclient.conf.
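
For example, in /etc/dhcp3/dhclient.conf (60 seconds is just an example
value, long enough to outlast a typical STP/LACP hold-off):

    timeout 60;
    retry 10;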

You can see if your switches are causing the problem by manually
configuring the laptop, then bouncing the link on the cable and seeing
whether you recover immediately or it takes time after each link bounce.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: 1Gb ramdisk in RHEL3

2008-07-25 Thread Dave Johnson
Michael ODonnell writes:
 
 
 OK, the ramdisk story is looking a bit better now.
 
 First, the tmpfs trick is definitely cool and worth remembering
 but isn't what we're looking for because some of what makes it
 cool also makes it unsuitable for our purposes.  In particular,
 tmpfs has close ties to the kernel's buffer cache so files
 in tmpfs may actually be on backing store and since we're
 investigating a ramdisk solution to get some low-latency
 characteristics that makes tmpfs unsuitable for our purpose.
 Too bad, because rigging tmpfs is easier than rigging a ramdisk.
 
 The key to getting the [EMAIL PROTECTED]@!!!  ramdisk to be usable was 
 figuring
 out that ext2 and the ramdisk device driver had to agree on
 block size.  The default for ramdisk is 1k and the default for
 ext2 is 4k.  At first I tried to specify ramdisk_size=1048576
 and ramdisk_blocksize=4096 on the kernel commandline (units are
 Kb for the former and bytes for the latter) and all would have
 been well.  except that the machine in question boots from an
 initrd and the ext2 filesystem therein was already specified as
 using 1k blocks and the kernel panic'd when it failed to mount
 the initrd due to that mismatch.
 
 So, since I don't feel like rebuilding the initrd's filesystem
 with 4k blocksize I had the ramdisk fall back to the default 1k
 and told mkfs.ext2 to build the filesystem in the ramdisk using
 a 1k blocksize, thus:
 
dd if=/dev/zero bs=1024k count=1048576 of=/dev/ram9
mkfs.ext2 -b 1024 /dev/ram9
mount /dev/ram9 /mnt/test

If you're using a ramdisk for purposes other than booting and are
running a kernel older than 2.6.24, make sure you have backported the
fix below.

If you don't, your ramdisk blocks will disappear once you use all the
RAM in the system and the kernel needs to free memory:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5d0360ee96a5ef953dbea45873c2a8c87e77d59b


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How do you determine the amount of system memory?

2008-07-17 Thread Dave Johnson
Paul Lussier writes:
 
 Hi all,
 
 Recent Linux kernels have had a minor bug in that the amount of memory
 reported in /proc/meminfo is incorrect.  I'm trying to find a way to
 determine whether the amount reported is correct or not.
 
 I need some means of reliably knowing whether this value is accurate
 or not.  Does anyone have any ideas?  Physically looking is
 insufficient, given that I a) need to test 400+ systems, and b) I may
 need to run this test on boxes to which I have no physical access.


One more way:

$ grep 'System RAM' /proc/iomem

This is basically the same info as from the e820 map, but in a nicer format:


system 1


$ grep 'System RAM' /proc/iomem 
-0009f3ff : System RAM
0010-7fe4 : System RAM
1-107fffefff : System RAM

system 2


$ grep 'System RAM' /proc/iomem 
-0009efff : System RAM
0010-efed : System RAM
1-10fff : System RAM

system 3


$ grep 'System RAM' /proc/iomem 
-0009fbff : System RAM
0010-7ded : System RAM

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How do you determine the amount of system memory?

2008-07-17 Thread Dave Johnson
 Paul Lussier writes:
  
  Hi all,
  
  Recent Linux kernels have had a minor bug in that the amount of memory
  reported in /proc/meminfo is incorrect.  I'm trying to find a way to
  determine whether the amount reported is correct or not.
  
  I need some means of reliably knowing whether this value is accurate
  or not.  Does anyone have any ideas?  Physically looking is
  insufficient, given that I a) need to test 400+ systems, and b) I may
  need to run this test on boxes to which I have no physical access.

You can also just go to the source and read the SPD EEPROM off each
DIMM.

This is hit-or-miss depending on how well the i2c and eeprom kernel
modules find your hardware, plus the SPD EEPROMs aren't guaranteed to be
at the expected addresses (typically 0x50-0x5f), as that's up to the
motherboard manufacturer.


$ sensors
eeprom-i2c-0-52
Adapter: SMBus I801 adapter at e8a0
Memory type:DDR2 SDRAM DIMM
Memory size (MB):   2048

eeprom-i2c-0-50
Adapter: SMBus I801 adapter at e8a0
Memory type:DDR2 SDRAM DIMM
Memory size (MB):   2048
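
If sensors doesn't show the DIMMs, loading the modules by hand sometimes
helps, and decode-dimms (decode-dimms.pl on older lm-sensors versions)
dumps the full SPD contents; the i2c driver name varies by chipset:

# modprobe i2c-i801
# modprobe eeprom
# decode-dimms | grep -i size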


-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How do you determine the amount of system memory?

2008-07-16 Thread Dave Johnson
Stephen Ryan writes:
 
 On Wed, 2008-07-16 at 15:20 -0400, Paul Lussier wrote:
  Hi all,
  
  Recent Linux kernels have had a minor bug in that the amount of memory
  reported in /proc/meminfo is incorrect.  I'm trying to find a way to
  determine whether the amount reported is correct or not.
  
  I need some means of reliably knowing whether this value is accurate
  or not.  Does anyone have any ideas?  Physically looking is
  insufficient, given that I a) need to test 400+ systems, and b) I may
  need to run this test on boxes to which I have no physical access.
  
  Thanks.
 
 
 dmidecode.  Look for sections titled Memory Module Information.


The e820 info from the BIOS can help too (provided your BIOS is
bug-free); get it from dmesg or /var/log/dmesg.


You'll need to account for memory both below 4GB and above 4GB PA.  The
top of low memory is usually between 2GB and 3.75GB, with the remaining
memory starting at 4GB PA.

Below 4GB includes the low 640KB-1MB reserved region, plus ACPI, SMM, and
other reserved areas at the top of low memory, but from the 'usable'
sections you can determine where the top-of-low-memory boundary is (just
round up to the next few MB past the last usable section):

system 1


BIOS-provided physical RAM map:
 BIOS-e820:  - 0009f400 (usable)
 BIOS-e820: 0009f400 - 000a (reserved)
 BIOS-e820: 000f - 0010 (reserved)
 BIOS-e820: 0010 - 7fe5 (usable)
 BIOS-e820: 7fe5 - 7fe58000 (ACPI data)
 BIOS-e820: 7fe58000 - 8000 (reserved)
 BIOS-e820: fec0 - fed0 (reserved)
 BIOS-e820: fee0 - fee1 (reserved)
 BIOS-e820: ffc0 - 0001 (reserved)
 BIOS-e820: 0001 - 00107000 (usable)

below 4GB PA is from  - 8000 ( 2GB)
above 4GB PA is from 0001 - 00108000 (62GB)

This system has 64GB of RAM, split between physical addresses 0GB-2GB
and 4GB-66GB.


system 2


 BIOS-provided physical RAM map:
  BIOS-e820:  - 0009f000 (usable)
  BIOS-e820: 0009f000 - 000a (reserved)
  BIOS-e820: 000f - 0010 (reserved)
  BIOS-e820: 0010 - efee (usable)
  BIOS-e820: efee - efee3000 (ACPI NVS)
  BIOS-e820: efee3000 - efef (ACPI data)
  BIOS-e820: efef - eff0 (reserved)
  BIOS-e820: f000 - f400 (reserved)
  BIOS-e820: fec0 - 0001 (reserved)
  BIOS-e820: 0001 - 00011000 (usable)

below 4GB PA is from  - f000 (3840MB)
above 4GB PA is from 0001 - 00011000 ( 256MB)

This system has 4GB of RAM, split between physical addresses 0MB-3840MB
and 4096MB-4352MB.


system 3


BIOS-provided physical RAM map:
 BIOS-e820:  - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000f - 0010 (reserved)
 BIOS-e820: 0010 - 7dee (usable)
 BIOS-e820: 7dee - 7dee3000 (ACPI NVS)
 BIOS-e820: 7dee3000 - 7def (ACPI data)
 BIOS-e820: 7def - 7df0 (reserved)
 BIOS-e820: fec0 - 0001 (reserved)

below 4GB PA is from  - 8000 (2GB)
above 4GB PA is no memory

This system has 2GB of RAM, all of which is from 0MB-2GB PA.
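
If you want to script this across a bunch of machines, a rough sketch
(assuming gawk, and e820 lines of the form
'BIOS-e820: <start> - <end> (usable)' as above; it sums only the usable
ranges, so it will come out a bit under the installed total):

$ grep 'BIOS-e820.*usable' /var/log/dmesg | \
    gawk '{ sum += strtonum("0x" $4) - strtonum("0x" $2) }
          END { printf "%.0f MB usable\n", sum / (1024*1024) }'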





-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How do you determine the amount of system memory?

2008-07-16 Thread Dave Johnson
Dave Johnson writes:
 The e820 info from the BIOS can help too (provided your BIOS is
 bug-free), get it from dmesg or /var/log/dmesg

 system 3
 
 
 BIOS-provided physical RAM map:
  BIOS-e820:  - 0009fc00 (usable)
  BIOS-e820: 0009fc00 - 000a (reserved)
  BIOS-e820: 000f - 0010 (reserved)
  BIOS-e820: 0010 - 7dee (usable)
  BIOS-e820: 7dee - 7dee3000 (ACPI NVS)
  BIOS-e820: 7dee3000 - 7def (ACPI data)
  BIOS-e820: 7def - 7df0 (reserved)
  BIOS-e820: fec0 - 0001 (reserved)
 
 below 4GB PA is from  - 8000 (2GB)
 above 4GB PA is no memory
 
 This system has 2GB RAM all of which are from 0MB-2GB PA

One thing to watch out for: system 3 above has integrated graphics,
where the BIOS has stolen 32MB (7e000000-80000000).

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Alternatives to Comcast

2008-05-22 Thread Dave Johnson
Ben Scott writes:
   If what you see from your modem is PPPoE, then what happens is:
 Your PC builds an IP connection over PPP.  It puts the PPP frames into
 Ethernet frames and sends them to the modem.  The modem takes the PPP
 frames off Ethernet and puts them in ATM frames and beams them over
 the wire.  At the DSLAM, the ATM frames go into whatever equipment
 Verizon has until they get to the IP provider.  The IP provider takes
 the PPP frames out of the ATM frames and does IP routing with them.
 
   If what you see from your modem is IP-over-Ethernet (RFC-894),
 then the modem is either doing PPP termination locally and acting as
 a router, or it's encapsulating IP datagrams in ATM frames.  (Some
 implementations are worse, and encapsulate the entire Ethernet frame
 in the ATM frames.)  If the former, from the modem onward it's the
 same as for the PPPoE drill above.  For the later, the encapsulated
 frames get to the IP provider, are dencapsulated, and then handled
 appropriately.
 
   Vitts Networks was one of the providers doing the worse
 Ethernet-over-ATM scenario I describe.  Their DSL modem was
 basically just an Ethernet bridge.  In the CO, they had boxes which
 basically took DSL on one side and spit Ethernet out the other.  Then
 they'd patch each Ethernet port into a managed switch.  So now instead
 of running PPP over the serial DSL, you're running an emulation of
 broadcast-based Ethernet, which actually had *higher* overhead than
 PPP would have.

That sounds bad...

For both VZ's direct PPPoE customers and those customers that
have a different ISP, the DSL modem puts your packets directly into
AAL5 ATM and sends them over a VC to the CO (VPI/VCI 0/32 if I
recall).

For bridged IPoE, that's IP over ETH over LLC/SNAP over AAL5 over ATM.


For anyone with a Westell-brand DSL modem on VZ, you may be interested
in a program I wrote to gather data from the modem.

It sends the modem magic packets that cause it to send statistics data
periodically to the LAN.  The modem uses a multicast group to send
these; I gather it uses multicast packets because in a bridged
configuration the device doesn't have an IP address (at least not one
that you are supposed to see from your LAN).

http://davej.org/programs/westell.tgz

The program can generate human- or machine-readable output periodically,
such as:

 $ westell --monitor --rate 10
 Waiting for Packet
 Got packet on 2008-05-22-09:20:34:
   Status : online
   Uptime : 214D 20H 44M (since 2007-10-20-12:35:40)
   Upstream
 Signal/Noise Ratio   : 12.0 db 
 Power: 12.0 dbm
 Attenuation  : 27.0 db 
 Data Rate (kbps) : 864 atm, 782 aal5, 764 1500byte, 522 64byte
   Downstream
 Signal/Noise Ratio   :  5.5 db   [ -0.5 db ]
 Power: 16.5 dbm
 Attenuation  : 47.0 db 
 Data Rate (kbps) : 3360 atm, 3043 aal5, 2972 1500byte, 2029 64byte
   ATM
 Signal Lost  :  0
 Frame Lost   :  0
 FEC Errors   :  0
 CRC Errors   :   5655
 HEC Errors   :   3068
 OAM Cells:527
 Loopback Cells   :  0
 TX Cells :   13172471[ 16616 cells,  705 kbps ]
 RX Cells :   14095746[   543 cells,   23 kbps ]
   ETH
 Dropped Packets  :150
 TX Packets   :  229991791[ 264 ]
 RX Packets   :  249194133[ 524 ]

I've tested the program on both my bridged non-VZ-ISP connection and a
friend's PPPoE VZ-ISP connection.  It seems to work on PPPoE, however
the data rate calculations may be off due to the different
encapsulation.

It's good to note that VZ provisions the raw data rate higher than
advertised, so that once the LLC/SNAP header, AAL5 trailer, and ATM cell
headers are removed, a 1500-byte IP packet can run at the advertised
bandwidth.  Unfortunately smaller packets suffer due to the 32 bytes of
overhead plus the sawtooth effect of ATM SARing.  My 3mb downstream rate
is only 2mb with 64-byte IP packets.
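
A rough sketch of the per-packet cell math for the bridged case (using
the ~32 bytes of Ethernet + LLC/SNAP + AAL5 overhead mentioned above and
48-byte cell payloads):

$ ip=64; overhead=32
$ cells=$(( (ip + overhead + 47) / 48 ))   # AAL5 pads up to whole cells (the sawtooth)
$ echo "$ip + $overhead -> $cells cells -> $(( cells * 53 )) bytes on the wire"
64 + 32 -> 2 cells -> 106 bytes on the wire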

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Alternatives to Comcast

2008-05-20 Thread Dave Johnson
Ben Scott writes:
   Where are you located?  If we know where you're at, we might be able
 to make specific recommendations.  In the Manchester area, give MV a
 call (http://www.mv.com).  Around Haverhill or Amesbury (MA), try USAi
 (http://www.usai.net).

For those in MA, I'd highly recommend AceDSL via Verizon's ATM cloud.
They (along with many other ISPs) should be able to get you a line in
most of Verizon's COs in eastern MA.

http://www.acedsl.com/acedsl_coverage.php?state=MA

I've had a dry line DSL (no voice) 3m/768k connection to them for
almost 3 years.

Pure IP from their network to the DSL bridge, none of this PPPoE
garbage you'd get with Verizon as the ISP.

The 3 or 4 times I've had to call them, I got a real person right away
who knew what they were doing.  Oh, and someone on their generic
support email even knows what a PTR record is without having it
explained, and will change them to whatever you want.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Alternatives to Comcast

2008-05-20 Thread Dave Johnson
Jarod Wilson writes:
 I'm rather fond of my Verizon FiOS here in Tyngsboro, MA. No PPoE, PPoE,
 etc., just straight up fiber to the optical network terminal on the
 outside of the box, ethernet from there to my switch, off which I hang
 stuff, utilizing the 5 static IP addresses I've got. $99/mo, wide-open
 EULA, 20Mbps down, 5Mbps up (business-class package, but at my
 residence).

Well of course if that was an option I'd switch :(

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


re: server uptime

2008-03-22 Thread Dave Johnson
Warren Luebkeman writes:
 I am curious how common it is for peoples servers to go extremely
 long periods of time without crashing/reboot.  Our server, running
 Debian Sarge, which serves our email/web/backups/dns/etc has been
 running 733 days (two years) without a reboot.  Its in an 4U IBM
 chassis with dual power supplies, which was old when we fired it up
 (PIII Server).
 
 Does anyone have similar uptime on their mission critical servers?
 Whats the longest uptime someone has had with Windows?   

I have a Sharp Zaurus SL-5500 PDA that's been accumulating a rather
impressive uptime sitting unused in its cradle.

Just checked, and it's now up to 1594 days.  However, the OpenZaurus
kernel it's running has a 32-bit jiffies counter, so it has wrapped its
uptime 3 times so far and only shows 103 days; the other 3*497 days
are there but hidden :(
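
(Where the 497 comes from, assuming that kernel ticks at HZ=100:

$ echo $(( 2**32 / 100 / 86400 ))
497

i.e. 2^32 jiffies is about 497 days per wrap.)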

It's survived many power outages by simply auto-suspending itself when
power is lost; just resume it after the outage and the uptime picks up
where it left off.  I think there's probably another month of missed
uptime due to forgetting to resume it after power outages.

If only I could find a more useful purpose for it besides accumulating
uptime. 

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: gcc linking / libtools / building stuff

2008-01-16 Thread Dave Johnson
Bruce Labitt writes:
 So I tried to just build it from the command line and this is what I got.
 
 $ gcc main_tb_fftw.cpp -otbfft -I/usr/local/include -L/usr/local/lib 
 -lfftw3 -lm
 /tmp/ccJMISfw.o: In function 
 `__static_initialization_and_destruction_0(int, int)':
 main_tb_fftw.cpp:(.text+0x23): undefined reference to 
 `std::ios_base::Init::Init()'
 /tmp/ccJMISfw.o: In function `__tcf_0':
 main_tb_fftw.cpp:(.text+0x6c): undefined reference to 
 `std::ios_base::Init::~Init()'
 /tmp/ccJMISfw.o:(.eh_frame+0x12): undefined reference to 
 `__gxx_personality_v0'
 collect2: ld returned 1 exit status
 
 So what does all that mean???  The program did compile and link using 
 Dev-C++ (gcc) on windoze.  It looks like it can't find iostream ?  I 
 have that included in main_tb_fftw.
 
 Thanks!

Add '-lstdc++'.

It might also be more correct to invoke 'g++' instead of 'gcc'.  gcc will
pick the language from the filename extension, but if you compile and
link in two separate commands, you'd need to invoke g++
for the link stage instead of gcc.
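
I.e. something like (the same command as above, just linked with g++):

$ g++ main_tb_fftw.cpp -o tbfft -I/usr/local/include -L/usr/local/lib -lfftw3 -lm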

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Motherboard capable of supporting over 32 GB RAM

2007-09-13 Thread Dave Johnson
Dan Jenkins writes:
 Does anyone have any recommendations? Preferably with DDR-800 support.
 It'll run both Linux and Windows XP 64 and is used for simulations.
 Thanks.

You're probably in for a Xeon 5xxx or Opteron 2xx or 2xxx board if you
want this much RAM.

Check Tyan, Supermicro, or, for full systems, Dell, HP, and IBM.

You can probably pick up an Opteron 2xx system relatively cheap if
you don't need the latest and greatest.

See here:
http://www.supermicro.com/products/motherboard/Xeon1333/
http://www.supermicro.com/Aplus/motherboard/Opteron/Op200.cfm
http://www.supermicro.com/Aplus/motherboard/Opteron2000/
http://www.tyan.com/product_board_list.aspx?cpuid=1&socketid=10&chipsetid=9
http://www.tyan.com/product_board_list.aspx?cpuid=4&socketid=9&chipsetid=9
http://www.tyan.com/product_board_list.aspx?cpuid=4&socketid=16&chipsetid=9

Most of those are extended ATX or custom sized though.

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: sshd config problem.

2007-09-03 Thread Dave Johnson
Steven W. Orr writes:
 For reasons that are probably very boring, I want to make a change to my 
 sshd config. I have two interfaces, eth0 and eth1. I had previously 
 modified my listening port from 22 to something with a couple of extra 
 digits for the kiddys. Now I have a situation where I want to maintain 
 that setup only on eth0 (a.b.c.d) but I want to make it listen on port 22 
 for eth1 (192.168.0.2).
 
 I tried
 ListenAddress 192.168.0.2:22
 ListenAddress 207.172.210.41:1234
 but I must be doing something wrong.
 
 Anyone?

That should work, but you should also remove any 'Port' lines from the
config file for the ListenAddress directives to take effect, especially
if Port appears below the ListenAddress lines.
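
I.e. something like this as the only listen-related lines (a sketch using
the addresses from your message), followed by an sshd restart:

    ListenAddress 192.168.0.2:22
    ListenAddress 207.172.210.41:1234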

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Success stories with MythTV and Schedule Direct?

2007-09-03 Thread Dave Johnson
Ted Roche writes:
 Just checking in to find out if anyone has switched their MythTV setups
 over to Schedules Direct [1]? (Schedules Direct is a non-profit
 organization that provides raw U.S./Canadian tv listing data to Free and
 Open Source Applications. Those applications then use the data to
 provide things like PVR functionality, search tools, and private channel
 grids.)

Signed up Friday, and then went through the painful process of
upgrading 1 backend and 2 frontends from 0.19 to 0.20.2.

The prepackaged Myth binaries required tons of upgrades as usual
(Debian sarge to etch plus a few dozen unstable packages), breaking
lots of minor things like squid, cups, NIS, and NFS (NFS sure breaks in
subtle ways when rpc.statd just isn't started for some reason
post-upgrade).

Schedules Direct seems to work just fine.  The upgrade of MythTV itself
wasn't bad, but now I have to get used to a different set of bugs...

Only 3 mysterious backend crashes so far :(

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Charging UPS batteries outside the UPS

2007-08-07 Thread Dave Johnson
Ben Scott writes:
   Now, I don't want to have to spend $350 on a replacement battery
 only to find out that it's the UPS itself that's shot.  I'm thinking
 if I can find some way to charge up the battery to minimum levels, I
 can at least test the UPS to see if it works.  It doesn't have to hold
 a load.

$350 is a bit high.  I've bought replacement batteries from here
many times without issues:

http://www.powersupersite.com/MZIproducts.cfm?full=1&ID=RBK12-f

$175.00 + ~$25 shipping for the RBC12


   The battery consists of eight smaller units, wired together.  The
 wiring is easily disconnected.  Each unit is labeled 12 V, 7.2
 Ah/20HR.  Anyone if I can just hook each unit up to a regular
 automotive battery charger (one at a time) and charge them that way?

I'd say probably so, but don't blame me if they explode.

useful info here:

http://en.wikipedia.org/wiki/Lead-acid_battery

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Charging UPS batteries outside the UPS

2007-08-07 Thread Dave Johnson
[EMAIL PROTECTED] writes:
 First, confirm that these batteries are wired IN PARALLEL, NOT IN
 SERIES.  (If they are in series, you're looking at a highly
 non-standard 96V implementation.  No problem, though... just
 disconnect them all and charge them separately.)

This UPS takes 2 sets of batteries wired 2-parallel by 2-series,
providing 2 24V plugs with 4 batteries each.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Excessive processor usage

2007-08-06 Thread Dave Johnson
Sean writes:
 I am beginning to think it is my drive as the problem, it 
 seems to be writing a great deal when I see the problem.
 Time for backups!

try 'vmstat 5'

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Network In Flight data sizes

2007-07-24 Thread Dave Johnson
Star writes:
 Does anyone know of a tool that will help determine
 Internet-application performance throughput for overall data window
 size?
 
 My company has a client that depends on a hosted application.  While
 only one of their offices used this app, things worked very well for
 them.  Now that they're rolling it globally, they're noticing
 significant slowdowns in certain areas.  We already use Akamai and
 some fairly extreme caching settings to keep the dead-bits to a
 minimum, but the dynamic parts are showing some trouble.
 
 Essentially, they're telling us that they're seeing choke points in
 the 8k range for throughput.  We've gone through all of our equipment
 and assured that we're using 64k windows sizes on the send and receive
 sides.  Still they see this.  It's one of those it must be on your
 end discussions and we're working hard to get all the data that they
 request, but it's hard to quantify this in flight number that they
 keep touting and showing pretty graphs of.
 
 The tool that the client's group is using is Opnet IT Guru/ACE which
 is a fine tool...  but if I can get a req for $50k for software in
 under 6 months, Hell may have a need for those double hockey-sticks...
 
 Any advice is much appreciated.
 
 ~ Star

FYI: linux should be using window size scaling to get windows larger than
64k (there's a sysctl to turn this off, but I think it's on by
default).

But more fundamentally, a tcpdump/ethereal/etc. capture will easily tell
you the amount of data outstanding, and netstat/etc. will tell you
the fullness of the socket buffers.
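
For example (interface, host, and file names here are placeholders), along
with the window-scaling sysctl mentioned above:

$ sysctl net.ipv4.tcp_window_scaling        # 1 = window scaling enabled
$ tcpdump -i eth0 -s 0 -w client-side.pcap host app.example.com
$ netstat -tn                               # Recv-Q / Send-Q show buffer fullness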

If that isn't helping, you should get a capture on both sides so you
can compare and make sure there isn't a proxy or an evil router/firewall
mucking with your TCP stream (i.e. reassembling it, then adding un-QoS,
jitter, backpressure, shaping, random drops, etc.).

Any retransmissions, SACKs, etc. are a sign of packet loss.  If you see
flags or window sizes magically changing from when the same packet was
sent, then some intermediate device is modifying them.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to open a device for exclusive access?

2007-07-16 Thread Dave Johnson
Ben Scott writes:
 On 7/15/07, Steven W. Orr [EMAIL PROTECTED] wrote:
  ... my particular problem would be
  solved if the char device I was trying to open would only honor O_EXCL.
 
   My C is somewhat rusty, but doesn't O_EXCL just prevent one from
 creating/opening the file if it already exists?

For regular files yes, but char and block drivers can do whatever they
want with the open flags.  Some drivers use O_EXCL for exclusive
access to a device.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to open a device for exclusive access?

2007-07-14 Thread Dave Johnson
Steven W. Orr writes:
 I have a device (it happens to be a somewhat exotic serial port) which is 
 managed by a server process. I want my server to detect whether another 
 instance of that server already has that device open. I'm guaranteed that 
 no other program other that the one I'm in control of is capable of 
 opening that particular device.
 
 I was hoping that I could maybe not have to use either mandatory or 
 advisory file locking. I have tried O_EXCL | O_RDWR | O_NONBLOCK when 
 opening the device node hoping to get back EBUSY or EAGAIN.
 
 Does anyone know if I'm SOOL or is there a way to do it?

Block devices support O_EXCL for exclusive access (something that
fdisk can make use of to prevent mounting of a partition while it has
the device open).

Since char devices are more generic, it's up to the individual
driver to either support or not support O_EXCL on open.  The driver
can also support exclusive opens all the time if it wants.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: OT: PC Gigabit Throughput Question

2007-06-16 Thread Dave Johnson

Most have already covered the PCI 33/32 bandwidth limits (avoid PCI
33/32 if you can). Integrated into the chipset, PCIX bus (faster but
still shared), or PCIe (dedicated) will solve the bandwidth problems.

Just about any PCIX/PCIe/Integrated MAC will give you 100% gigabit
full duplex line rate provided the driver and memory controller can
keep up.

Look for a useful TOE/TSO/GSO hardware engine as well as hardware
TCP/UDP checksumming on both RX and TX.

Hardware checksumming will reduce the CPU power requirements
dramatically, and a segmentation engine will reduce the number of packets
and give you the equivalent of jumbo frames (from the driver's point of
view) without actually using jumbo frames on the network.
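
A quick way to see what a given NIC/driver combination actually offers
(eth0 is just an example):

$ ethtool -k eth0     # rx/tx checksumming, scatter-gather, TSO
$ ethtool -c eth0     # interrupt coalescing settings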

Which brings me to the next point.  A bad driver will make a huge
difference and may be a limiting factor especially at 500mbps.

A good driver will make use of NAPI for overload conditions, and
interrupt coalescing for moderate load conditions.

A segment engine isn't likely on a PCI33/32 MAC, but some do have
checksumming.

One other neat feature I've seen on nvidia integrated MACs is that they
use MSI-X to offer 3 different interrupts for each MAC: one for TX
service, one for RX service, and a third for PHY events.  This allows
the driver to have completely separate RX/TX/misc code paths.
Very cool.

Anyway, I'd recommend Intel (e1000) or Broadcom (tg3, bnx2) MACs.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: dual-core vs. HyperThreading

2007-04-28 Thread Dave Johnson
Thomas Charron writes:
 On 4/25/07, Michael ODonnell [EMAIL PROTECTED] wrote:
   I can definitely confirm that my core 2 quad system has no HT
   available to turn on in the bios (but does show ht in the cpuflags,
   which aligns with other comments in the thread).
  I've heard it suggested several times now that features
  like HyperThreading can be enabled or disabled in the BIOS
  and I don't understand how that can be possible.  AFAIK a CPU
  either does or doesn't have a given feature and there's nothing
  (short of modifying the CPU's microcode) that any BIOS can do
  to disable it, at least not in any way that would prevent a
  capable OS from doing as it pleased once it was booted.  So,
  what's the story?
 
   I don't know the specifics, but the BIOS has to 'turn the bits on'.
 There are thousands of Toshiba laptop owners who wished Toshiba would
 enable the VT bit in their damned BIOS screens for their Intel Core 2
 Duo chips.

Unfortunately, this is a feature.  CPUs, chipsets, etc. usually keep
things like this in write-once or write-lock registers.  After reset,
the register can be written exactly once, or can be set to lock in the
values until the next reset.

BIOS code makes sure to write all of these registers once to
enable/disable features as it sees fit.  This is usually good, as
typically they are used for board-specific enables/disables.  It makes
no sense to enable a PCI device or bridge if there is nothing
connected to it on the board, for example.  Another use is for
features that can only be enabled or modified early in the boot
process, before memory is configured or I/O devices are set up; making
changes afterwards would give unpredictable results.  Locking these
down makes sense, as any changes could lock up the system.

Turning HT on/off on the fly seems unlikely to work at the OS level, as
I'm sure there are initializations needed that aren't trivial or
likely known to the OS.  The same goes for VT.
-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: slow last 128MB of RAM in a 2GB system?

2007-04-20 Thread Dave Johnson
Ben Scott writes:
 On 4/20/07, Bill McGonigle [EMAIL PROTECTED] wrote:
  Ouch - is that simply a matter of cache impact on
  performance?  I wouldn't have guessed it would be so high.
 
   It could be (although it could be something else I'm just not aware of).

50x isn't too bad.  I'd expect you would end up with different results
depending on which memory happens to end up in that last 32MB.  The
highmem area can hold disk cache, userspace code, and userspace data,
but not kernel code or kernel lowmem allocations.  If you were to run a
bunch of programs first and then the test program, you may end up with
the test program in cached memory; but if some critical disk blocks end
up uncached (say the test program's code segment, or even worse an
important part of libc), then it'll crawl.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Dell 690 only seeing 3 GB RAM (was: slow last 128MB...)

2007-04-19 Thread Dave Johnson
Ben Scott writes:
   I'm not sure if disabling MTRR is a good idea or not.  It may be
 important for other reasons, and sometimes disabling something that is
 causing a problem just leads to bigger problems down the road.
 Perhaps Dave or someone else who understands it better can comment on
 that aspect?

If by disabling you mean CONFIG_MTRR, all that will do is remove
/proc/mtrr and prevent video drivers from modifying MTRR.  MTRR
is still present in the CPU, linux just won't touch it.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: slow last 128MB of RAM in a 2GB system?

2007-04-13 Thread Dave Johnson
Bill McGonigle writes:
 Into -h units:
 
 639K  - BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
 1K    - BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
 128K  - BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
 2014M - BIOS-e820: 0000000000100000 - 000000007dfd0000 (usable)
 60K   - BIOS-e820: 000000007dfd0000 - 000000007dfdf000 (ACPI data)
 132K  - BIOS-e820: 000000007dfdf000 - 000000007e000000 (ACPI NVS)
 4K    - BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 8M    - BIOS-e820: 00000000ff780000 - 0000000100000000 (reserved)
 
 1919M - limit_regions endfor: 0000000000100000 - 0000000078000000 (usable)

0MB to 2016MB is basically RAM.

The remaining 32MB isn't listed in the e820 map.  It's likely the 32MB
is being reserved for the integrated graphics.

Now this will cause a challenge to the BIOS's MTRR logic.  It will
want to cover that entire 2016MB exactly with the MTRR registers.

The MTRR registers allow you to specify a base address and size, but
the size must be a power of two.

There are 2 basic ways BIOSs can do this:

1) It can list the cacheable memory using power of 2's.
   That would be: 1024+512+256+128+64+32.

reg00: base=0x00000000 (   0MB), size=1024MB: write-back, count=1
reg01: base=0x40000000 (1024MB), size= 512MB: write-back, count=1
reg02: base=0x60000000 (1536MB), size= 256MB: write-back, count=1
reg03: base=0x70000000 (1792MB), size= 128MB: write-back, count=1
reg04: base=0x78000000 (1920MB), size=  64MB: write-back, count=1
reg05: base=0x7c000000 (1984MB), size=  32MB: write-back, count=1

2) It can list all 2048MB as cachable, then work backwards for the
   uncachable part by doing:

reg00: base=0x00000000 (   0MB), size=2048MB: write-back, count=1
reg01: base=0x7e000000 (2016MB), size=  32MB: uncachable, count=1

Also, note it is customary to leave 1 or 2 MTRR registers available
for the OS to use.  On intel cpus, there are 6 MTRR registers so the
BIOS will tend to only use 4.

Your BIOS is using #1 above, preferring to list out each cacheable part.

When you had 1GB of RAM, you needed 5 MTRR registers to fully express
992MB, but with 2GB you need 6 MTRR registers to fully express 2016MB.

Using method #2 is much preferable for the case where some amount
of memory at the end of RAM is reserved.  With this you only need 2
MTRR registers no matter what the size of RAM.

You can easily fix this using /proc/mtrr.  Since the BIOS filled
out the first 5 entries in #1 above, you can simply add the 6th one
yourself.  You could also convert the table into #2 above, but unless
you plan to run Xorg you don't need any more MTRR registers.

Simply add this to a startup script and you should be all set:

echo "base=0x7c000000 size=0x2000000 type=write-back" > /proc/mtrr

After that the output should look exactly like #1 above.


 While I feel it should be the manufacturer's responsibility to get  
 this right, I think linux would be stronger for knowing how to deal  
 with as much buggy BIOS as is out there.  I'll run bug reports up the  
 flagpole as necessary if you can set me on the right track.

Good luck.

 aside: Is it just more or is this on the uptick? - I'm seeing 'buggy  
 BIOS' as the cause of so many problems recently.  I've been through  
 nasty problems with nVidia, ATI, and SiS BIOS bugs, all in the past  
 year, where I haven't worried much about it previously.  Intel  
 chipsets are, so far, batting 1000 recently.

Yes, my ASUS M2N-SLI shipped with a buggy BIOS where it wouldn't place
any memory over 4GB, limiting usable memory to 3.75GB.  Upgrading the
bios to the latest version fixed that, but broke ACPI.

-- 
Dave

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: slow last 128MB of RAM in a 2GB system?

2007-04-13 Thread Dave Johnson
Dave Johnson writes:
 Using #2 method is much more preferable for the case were some amount
 of memory at the end of RAM is reserved.  With this you only need 2
 MTRR registers no matter what the size of RAM.

Make that: provided the size of RAM is a power of 2 (512MB, 1GB, 2GB,
etc.).  1.5GB would require 1 more MTRR.

 Yes, my ASUS M2N-SLI shipped with a buggy BIOS where it wouldn't place
 any memory over 4GB, limiting usable memory to 3.75GB.  Upgrading the
 bios to the latest version fixed that, but broke ACPI.

Woops, I mean it broke the IOAPIC, not ACPI. I always get those
confused.

-- 
Dave

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: slow last 128MB of RAM in a 2GB system?

2007-04-12 Thread Dave Johnson
Bill McGonigle writes:
 Long story short: I put 2GB in my soho server and if I use the last  
 128MB or RAM or so the machine is murderously slow. If I boot the  
 machine mem=1920M it's just fine.
 
 Some googling around has lead me to look at linux himem handling and  
 mtrr cache stuff, but I'm not really sure what's going on.
 
 1920MB is practically just fine, but I can't leave well enough alone...
 
 $cat /proc/mtrr
 reg00: base=0x00000000 (   0MB), size=1024MB: write-back, count=1
 reg01: base=0x40000000 (1024MB), size= 512MB: write-back, count=1
 reg02: base=0x60000000 (1536MB), size= 256MB: write-back, count=1
 reg03: base=0x70000000 (1792MB), size= 128MB: write-back, count=1
 reg04: base=0x78000000 (1920MB), size=  64MB: write-back, count=1
 
 I believe this is telling me that only the first 1920MB of RAM is  
 being cached.  But I'm not sure why or what's happening to the last  
 128MB.  32MB of that last 128MB is allocated to the integrated  
 graphics controller.   Maybe the last 128MB is mapped to the graphics  
 controller anyway?  But then why doesn't linux detect this?  Also,  
 this wasn't a problem with 1GB in the system.
 
 Documentation/mtrr.txt hasn't been particularly helpful for me, and  
 Googling has gotten me a fix, but not much in the way of theory  
 (other than MTRR is an old nasty x86 hack and linux ought to dump  
 MTRR on modern hardware and use PAT like Windows does - whatever that  
 is).
 
 Has anybody figured this out before?

I unfortunately know way too much about these types of problems, having
debugged and fixed MTRR and E820 bugs in a commercial BIOS.

The MTRR map is (by default) filled out by the BIOS.  It also fills
out the E820 memory map.

The first question is: what did the BIOS fill out for the E820 map?
This is printed at the beginning of the kernel startup with the text
'BIOS-provided physical RAM map'.

Also, what is the output of /proc/iomem and 'lspci -vv' (run as root
please).

The BIOS may be providing a memory break at 1984MB with the remaining
64MB at 4GB and forgetting to add the MTRR entry for that, or it may
be reserving that 64MB for use by ACPI/SMM/etc... and forgetting to
mark those regions reserved in the E820 map.  Either way it's usually
a BIOS bug, but linux should be able to work around either case
(though that might mean some kernel changes).


-- 
Dave

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SSH to one address, different ports, different hosts

2007-03-11 Thread Dave Johnson
Ben Scott writes:
   Anyone else have thoughts or ideas to offer?

ssh client doesn't mind if there is more than one entry for a given
host in the known_hosts file.

Because of this you can simply manually edit the known_hosts file to
have multiple entries (one for each actual host) all with the same
hostname.  ssh client won't do this automatically, but once you know
the public keys for each host you can then edit the file and add all
of them.

Once you add them all it will accept any of them for that hostname.
Example:

some.host.com ssh-rsa KEY-TEXT-FOR-HOST-1.
some.host.com ssh-rsa KEY-TEXT-FOR-HOST-2.

You can then ssh to some.host.com on some port and ssh client will
accept EITHER key listed in the file.
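
One way to collect the keys in the first place is ssh-keyscan against
each port (the port numbers below are just examples, and the output
format varies a bit between versions, so you may need to edit the
hostname field before pasting the lines in):

ssh-keyscan -t rsa -p 2201 some.host.com
ssh-keyscan -t rsa -p 2202 some.host.com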

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: DHCP question: dhclient won't request same IP.

2007-01-31 Thread Dave Johnson
Scott Garman writes:
 To test things further, I placed both machines behind a Linux DHCP
 server so I could look for any unusual error messages. Sure enough, I
 found the following in my logs after the Debian system exhibited the
 IP-changing behavior. The Debian system started out with IP
 192.168.1.215, and ended up with 192.168.1.210:
 
 Jan 31 12:32:03 dhcpd: DHCPDISCOVER from 00:d0:c9:9e:ae:59 via eth1
 Jan 31 12:32:03 dhcpd: ICMP Echo reply while lease 192.168.1.215 valid.
 Jan 31 12:32:03 dhcpd: Abandoning IP address 192.168.1.215: pinged
 before offer
 Jan 31 12:32:03 dhcpd: Wrote 0 deleted host decls to leases file.
 Jan 31 12:32:03 dhcpd: Wrote 0 new dynamic host decls to leases file.
 Jan 31 12:32:03 dhcpd: Wrote 9 leases to leases file.
 Jan 31 12:32:05 dhcpd: DHCPDISCOVER from 00:d0:c9:9e:ae:59 via eth1
 Jan 31 12:32:06 dhcpd: DHCPOFFER on 192.168.1.210 to 00:d0:c9:9e:ae:59
 via eth1
 Jan 31 12:32:08 dhcpd: DHCPREQUEST for 192.168.1.210 (192.168.1.1) from
 00:d0:c9:9e:ae:59 via eth1
 Jan 31 12:32:08 dhcpd: DHCPACK on 192.168.1.210 to 00:d0:c9:9e:ae:59 via
 eth1


It appears the client is sending a DHCPDISCOVER while still using the
old address (the server can ping it).

For a renewal of an existing lease the client should first send a
DHCPREQUEST instead of DHCPDISCOVER.  Only if the DHCPREQUEST is
ignored/lost should it send a DHCPDISCOVER in an attempt to find a new
lease.

The server is doing the right thing in this case.  If the server
received a DHCPREQUEST first it would extend the existing lease.

The logs from the client computer may reveal the requests, and whether
it is trying a REQUEST first with no response.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: DHCP question: dhclient won't request same IP.

2007-01-31 Thread Dave Johnson
Dave Johnson writes:
 
 It appears the client is sending a DHCPDISCOVER while still using the
 old address (the server can ping it).
 
 For a renewal of an existing lease the client should first send a
 DHCPREQUEST instead of DHCPDISCOVER.  Only if the DHCPREQUEST is
 ignored/lost should it send a DHCPDISCOVER in an attempt to find a new
 lease.
 
 The server is doing the right thing in this case.  If the server
 recieved a DHCPREQUEST first it would extend the existing lease.
 
 The logs from the client computer may reveal the requests and if it is
 trying a REQUEST first with no response.

Also, one thing to note: I've had good luck with 'dhcp3-client'
instead of the older v2 versions or other clients such as pump.  Under
Debian, you can:

apt-get install dhcp3-client


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: PVR-150 (1042, 1045, 1062)?

2007-01-28 Thread Dave Johnson

FYI:

Rumor has it Hauppauge has been putting HVR-1600 into PVR-150 boxes
even though the box still says PVR-150.  See the most recent comments
here:

http://www.newegg.com/Product/Product.asp?Item=N82E16815116625

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux, gobs of RAM, RAID and performance suckage...

2006-12-01 Thread Dave Johnson
Bill McGonigle writes:
 I've forgotten some 2.4 stuff but there was a big-mem version of the  
 2.4 kernel at one point to work around problems with too much RAM.

Ah, that's right.  If you're going to run 2.4 with >1GB RAM you need to
apply the rmap patches or performance will get worse the more RAM you
have.  Severely so with >2GB.

Google says rmap patches are here, but the server is giving me a 403
at the moment.

http://www.surriel.com/patches/


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux, gobs of RAM, RAID and performance suckage...

2006-11-30 Thread Dave Johnson
Paul Lussier writes:
 
 This is bizarre.
 
 We've got an NFS server with Dual 3Ghz Xeon CPUs as our NFS server
 connected to a Winchester OpenSAN FC-based RAID array.  The array is a
 single 1TB partition (unfortunately).
 
 Before yesterday we were noticing lots of NFS drop-outs on the clients
 (300+ of them) and we correllated this pretty much to the backups
 (amanda).  The theory was that local disk I/O was beating out
 nfs-client requests.
 
 We also noticed that our memory utilization was through the roof.  We
 had 2GB of PC2300, ECC, DDR, Registered RAM.  That RAM was averaging
 the following usage patterns:
 
  active - 532M
  inactive   - 1.2G
  unused -  39M
  cache  - 1.3G
  slab cache - 255M
  swap cache -   6M
  apps   -  78M
  buffers- 350M
  swap   -  11M
 
 We were topping out our memory usage and occasionally dipping into
 swap space on disk.
 
 Yesterday we added 2GB of RAM and our memory utilization now looks like this:
 
  active - 793M
  inactive   - 2.3G
  unused - 213M
  cache  - 2.9G
  slab cache - 194M
  swap cache -   2M
  apps   -  71M
  buffers- 313M
  swap   - 4.5M
 
 So, it appears we really only succeeded in doubling the cache
 available to the system, and a little more than halving the amount of
 swap that was getting touched.
 
 However, now when backups are run, the system becomes completely
 unresponsive from an NFS client perspective, and the load average
 skyrockets (e.g. into the 40s!).
 
 Does anyone have any ideas ?  I'm at a complete loss on this one.


General nfs server comments:

1)
Make sure you are exporting the nfs shares async, otherwise most
operations will seem slow from the client's point of view.  Check
/proc/fs/nfs/exports for 'async'.  If it's not there, set it in your
/etc/exports (see the sketch below).

2)
Make sure your server has enough nfsd threads to handle the client
load.  With 300 clients, you should have at least 100 nfsd threads in
my opinion.  Check your init.d scripts for how to set this.
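
A rough sketch of both settings (the path, network, and thread count
here are just examples, not your actual config):

# /etc/exports -- note the 'async' option
/export/home  10.0.0.0/255.255.255.0(rw,async)

# nfsd thread count; the variable is RPCNFSDCOUNT in /etc/sysconfig/nfs
# or /etc/default/nfs-kernel-server depending on the distribution
RPCNFSDCOUNT=128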


Other stuff:

11MB of swap doesn't mean anything is wrong.  It's actually a
good thing: it means your kernel found 11MB of stuff that wasn't needed
and booted it out to swap space in order to make more room available
for cache.

You should check the disk rates and cpu usage with 'vmstat 5' for a
few minutes.  This will also show how much time the CPUs are spending
in wait.  The full output of /proc/meminfo may also be useful.

I assume you are using an x86_64 kernel?  Using a 32bit kernel is ok as
long as you don't run out of low memory.  The kernel's heap/slab as
well as filesystem metadata (buffers) must come from low memory while
filesystem data (cache) and userspace processes can be in high memory.

If your SAN is on a LAN dedicated to the server only, you should
investigate converting that subnet to support jumbo frames.

Since you're using Xeons, not Opterons, you should make sure irqbalance
is installed and running to spread irq load across all cpu cores (this
may not be a good idea on a multi-node Opteron system though).  You can
run top in multi-cpu mode (press '1') to see if any cpus are
overloaded with irq or wait load while others are idle.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux, gobs of RAM, RAID and performance suckage...

2006-11-30 Thread Dave Johnson
Paul Lussier writes:
 Yesterday we added 2GB of RAM and our memory utilization now looks like this:
 
  active - 793M
  inactive   - 2.3G
  unused - 213M
  cache  - 2.9G
  slab cache - 194M
  swap cache -   2M
  apps   -  71M
  buffers- 313M
  swap   - 4.5M

When you are in this condition, how large is Dirty and Writeback in
/proc/meminfo?

If you have a large amount of Dirty (read: >300MB) and the underlying
filesystem is using ordered journaling (the default for ext3) you can
cause very long delays if a process requests an fsync on a file (vi
does this a lot).  Instead of writing just that single file to disk
right away, the filesystem must write all other outstanding data to
disk first!

If your clients are write heavy and you are using ext3 you should
consider data=writeback when mounting the filesystem.
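
For example (an illustrative fstab line, not your actual device or
mount point):

/dev/sdb1  /export  ext3  defaults,data=writeback  0  2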

I've run into this issue on a server with 64GB of RAM with dirty
commonly exceeding 4GB under load.  Poor users of vi had to keep
waiting for the server to write out 4GB of data every time they saved
a file.  data=writeback helped tremendously, as an fsync() then only
forces out the file being synced and the journalled metadata instead
of everything.


 Correct.  The nfsd's on the server get pushed to the bottom of the
 queue (in top) and what I see is [kupdated] and tar (from the backups)
 rotating between the top position.  There was one or two other
 processes up there too, one of which I think was a kernel thread
 ([bdflush] ?).  As far a I know, the tar's were not making processes,
 but I can't be totally sure of that.  Nothing *else* was making
 progress, that's for sure.

kupdated?!?! A 2.4 kernel??!?!? 


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Kill-a-watt devices

2006-11-28 Thread Dave Johnson
Tom Buskey writes:
 
 $33 http://www.invertersrus.com/killawatt.html
 $21 http://www.sustainabilitysystems.com/products.php?cat=8
 
 
 Measures:
 
 # Cumulative kWh over time (using built in timer)
 # Watts (active power)
 # VA (apparent power)
 # Volts
 # Current (amps)
 # Frequency (hertz)
 # Power Factor
 
 1875VA limit
 15 A limit.
 
 I googled a review that said it showed 0.06 kWh after 2 hours for a TV.
 
 I'd imagine the utility co's unit has a higher amp rating.

I have a kill-a-watt too.  The only thing (besides a computer
interface) I don't like about it is the lack of battery-backed or
otherwise persistent cumulative usage.  Every time you unplug it or
the power goes out the totals are lost, but that's what you get for
the price.

There are other alternatives, but a bit more expensive.  The 'Watts Up'
and 'Watts Up Pro' go for about $100-$200 but have more features
including persistent logging.

See: https://www.doubleed.com/products.html


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: MythTV hw question

2006-11-12 Thread Dave Johnson
Derek Atkins writes:
 Some VGA outs can only get up to 1080i and can't get the throughput
 of 1080p, especially since I'm asking for nearly 995328000 B/s (unless
 I've miscalulated 1920 x 1080 x 4 x 120...

That's the first issue: since these boards use main memory for the
video, you'll need high-bandwidth RAM to feed the chipset.  A 1080p
frame is only about 8MB, so size isn't an issue, but bandwidth may be.

The second issue is the max dot clock the chipset can drive out the VGA
port.

My CN400 shows: (II) VIA(0): Clock range:  20.00 to 230.00 MHz

That clock must include not just the visible picture but the sync and
blanking after each line needed by the actual display device on the
other end.

Some searching found these lines for 60hz:

Modeline 1920x1080i60  74.25  1920 1976 2008 2200  1080 1083 1085 1125 +Hsync 
+Vsync Interlace
Modeline 1920x1080p60 148.50  1920 1976 2008 2200  1080 1083 1085 1125 +Hsync 
+Vsync

A 148MHz clock is needed for 1080p@60; 1080p@120 would require a
297MHz clock, which is beyond the VIA chipset's limit of 230MHz.

There is also good info here:

http://www.linuxis.us/linux/media/howto/linux-htpc/video_card_configuration.html



I plugged in the 1080p@60 line and got this:

(II) VIA(0): ViaValidMode: Validating 1920x1080 (148500)
(II) VIA(0): ViaModePrimaryVGAValid
(II) VIA(0): Required bandwidth is not available. (497664000 > 461000000)

The clock was ok, but memory bandwidth was insufficient.  This system
has a PC2700 DIMM; a PC3200 DIMM would probably be enough, and DDR2
would be fine too.  The X driver actually measures the memory
bandwidth on startup.




-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: MythTV hw question

2006-11-11 Thread Dave Johnson

Not sure about the CN700, but the CN400 on my SP13000 can do 720p;
however, it isn't one of the standard resolutions the X driver
will offer.  You need to add a modeline to the XF86 config and it'll
then output it just fine.
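
For reference, the standard 720p60 timing is usually written as a
modeline along these lines (these are the commonly published numbers,
not necessarily the exact line from my config):

Modeline "1280x720" 74.25  1280 1390 1430 1650  720 725 730 750 +HSync +VSync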

Last I checked Unichrome Pro was still not supported in the standard
XF86 or X.org distributions.  Compiling the external unichrome pro X
driver from CVS is quite a pain and was still a bit buggy last time I
upgraded.

This recent discussion prompted me to finish up this page today,
something I've been meaning to finish for a few months.  It has just
about everything for my MythTV frontend computer.

http://centerclick.org/yukon/

-- 
Dave

Derek Atkins writes:
 They don't specify the VGA output resolution, and I can't seem to
 find that specification anywhere.  I even downloaded their PDF
 Manual and that info isn't in it.
 
 -derek
 
 Thomas Charron [EMAIL PROTECTED] writes:
 
  http://www.via.com.tw/en/products/mainboards/mini_itx/epia_en/
 
They've specifically built the board above for HD signal processing, as 
  well
  as 5.1 sound.
 
Thomas
 

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: MythTV hw question

2006-11-09 Thread Dave Johnson
Paul Lussier writes:
 
 So, before I start spending a ton of money on new hardware, I'd like
 to play around with MythTV a little.  I have an old PIII 300Mhz system
 collecting dust somewhere.  Would that be sufficient to play with if I
 threw a little memory and a new PVR-350 at it?
 
 I don't expect any great performance from it, and would expect to
 migrate the video card over to a new system once I figure out what I
 was doing.

That's probably fine for a backend, but may be a bit slow for decoding
for the frontend.  I've noticed cpu power requirements for playback
are proportional to the bitrate of the MPEG2, so you can always crank
down the bitrate setting when recording to find what it can handle.

At least on my PVR-250 you can set any width/height/bitrate you want
and it'll do MPEG2 hardware encoding on that.  You should have no
problems cranking them way down just for the experimentation phase.
At least that'll give you time to play around with the functionality
even if the video quality isn't great.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Tivo vs MythTV (was: *pout* HDTV No Recordee....)

2006-11-08 Thread Dave Johnson
Ben Scott writes:
   Those of you here who are already using MythTV, how do you find it
 works in day-to-day usage?

I've been using MythTV for about 2 years or so.  I like it a lot better
than the TiVo I had before that.  It has its bugs, but they are
usually minor and can be worked around without too much trouble.

I really like the split model [backend + frontend(s)] which allows me
to have a fanless, diskless, no-moving-parts frontend computer for the
TV and a big server in the garage^H^H^H^H^H^H^H server-room.

Note you can get unencrypted HD off of Cable in addition to OTA.

My MythTV backend has a Hauppauge WinTV-PVR 250 connected to the comcrap
set-top box and a pcHDTV HD-3000 connected direct to the comcrap coax.
The latter can do direct digital captures of all unencrypted channels (SD
and HD) with no conversion or recompression (direct MPEG2 to disk).

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Tivo vs MythTV (was: *pout* HDTV No Recordee....)

2006-11-08 Thread Dave Johnson
Ben Scott writes:
  Note you can get unencrypted HD off of Cable in addition to OTA.
 
   Any idea how many unencrypted HD channels there are?

Every broadcast channel.  In my case that's the HD versions of: 2, 4,
5, 7, 9, 25, 38 last I checked

In addition, comcast sends 2 versions of the analog channels over the
coax: an analog version for people without set-top boxes and a
digital SD version for people with digital set-top boxes.

The set-top box actually uses the digital version.  When you tell it
to tune to, say, channel 5, it picks up the digital version of channel 5
instead of the analog version.  This eliminates the 'fuzzy' problems
you get with low signal strength or interference.

The pcHDTV card can get the unencrypted SD digital version too.

This allows me to avoid the digital [coax] - set-top box - analog
[composite] - capture card - digital conversion of the Hauppauge
Card. MythTV only uses the Hauppauge card if there is a conflict or for
channels that are encrypted.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Tivo vs MythTV

2006-11-08 Thread Dave Johnson
Paul Lussier writes:
 Dave Johnson [EMAIL PROTECTED] writes:
  You can also just plug in the coax to a HDTV with a digital tuner and
  get the same result.
 
 Hmm, I didn't know that.  How do I find the HD channels?  When I tune
 my TV to the channel number advertised by Comcast, I get a black
 screen. Presumably that number is only meaningful to their box.


Sorry, should have said 'have the same issue' instead of 'get the same
result'.

Without a cable card (that's a different issue) you have the same
channel numbering problem.  Remember that ATSC and QAM both allow
multiple subchannels per frequency band.  You can get about 2 HD and
about 12 SD channels per frequency band.

At least on my HDTV (Samsung), channels are represented by
channel-subchannel so PBS is on 87-1 or something like that even
though comcast maps it as 802.  Something else is on 87-2 and so
forth. Just doing a full scan in the TV's tuner setup will pick up
everything (including all the encrypted channels, but those are
blank).


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Getting started w/ MythTV [was Re: Tivo vs MythTV]

2006-11-08 Thread Dave Johnson
Paul Lussier writes:
 So, for these 2 systems, can someone give me a rough list of the
 essential-to-have hardware and the cost analysis?
 
 My main goal here is ease of installation/configuration and minimizing
 as much as possible hardware incompatibility frustration.  I.e., if
 there's a certain MoBo I ought to use or specific hardware combination
 that just works, tell me.  Cost savings *is* an issue, but time is
 more of an issue.  I don't want to fight with hardware, I want to
 build an appliance that will do it's job with as little interaction
 from me as possible :)

For analog cards, getting one with MPEG2 hardware compression is a
very good idea.  Besides needing sufficient bandwidth on your PCI bus
and disk controller/disks, recording uses very little cpu (read: <2%
on any modern cpu).

The bandwidth requirements aren't that big.  SD should run you about
2-5Mbps, HD about 10-20Mbps (that's bits, not bytes).  As long as you
can sustain the average and don't have a congested system that could
cause DMA underruns, any motherboard and cpu should be fine.

Note that mythtv, at least IMHO, is quite bloated and the code way too
complicated and confusing to look at.

If you have a lot of channels and capture cards the backend database
can get quite large.  Keeping all that in RAM will improve the
performance of the frontend and mythweb, especially when getting
channel listings or doing searches.  My /var/lib/mysql/mythconverg/ is
about 400MB.  The more memory for the backend the better, especially
because it will be doing disk caching.

My frontend is a VIA EPIA SP13000 with 512MB RAM, which PXE boots with
an nfsroot.  It's plenty fast for SD, but barely fast enough for HD; I
think this is actually a video driver problem though.  I'd stick with
a CLE266 based VIA system, as the CN400 graphics is a pain to
get working in X.

CPU requirements aren't that bad for the frontend provided you have
an X driver that supports all the appropriate acceleration extensions;
that's the most important thing to consider.

mythfrontend is using about 120MB of RAM on my frontend computer right now.
Add X and other apps and 256MB or 512MB RAM should be fine.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: good distro/instructions for soekris box?

2006-11-06 Thread Dave Johnson
Kevin D. Clark writes:
 
 Does anybody here have a favorite distro or a pointer to a favorite
 set of directions for installing Linux/BSD onto a Soekris box?
 
 I plan on using a 1MB CF card for my storage (no moving parts).  I'll
 probably end up running HTTP/SSH/DHCP/firewall and possibly some other
 stuff on the box.
 
 There's lots of information out there.  I can find all of that.  I'm
 just curious if somebody here has already found a particularly helpful
 set of instructions.

I've got a soekris net4801, works great.

I started with a Debian install on it, then modified things until
writes to the rootfs were completely eliminated under normal operation,
to save the CF from failure.
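
The usual tricks are mounting the CF read-only and putting the
frequently-written paths on tmpfs; roughly something like this in
/etc/fstab (illustrative, not my exact setup):

/dev/hda1  /         ext2   ro,noatime   0  0
tmpfs      /tmp      tmpfs  size=16m     0  0
tmpfs      /var/log  tmpfs  size=16m     0  0
tmpfs      /var/run  tmpfs  size=4m      0  0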

Some kernel and boot patches are here, but I haven't documented the
userspace changes:

http://centerclick.org/net4801/

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: CPUs with variable speed clocks ?

2006-11-03 Thread Dave Johnson
Thomas Charron writes:
   Aye, the Intel Core 2 Duo's have 'Advanced Intel Speedstep' capabilities.
 The clock can be dynamically modified by multipliers, I believe up to 8
 different speeds.  I'll give you more info as I investigate it, as the
 kernel I built last night I enabled for it.
 
   Thomas
 

speedstep and powernow work fine under linux, but usually require a
kernel that knows about your specific cpu.

Auto-detection isn't fully baked, and I have to load the appropriate
modules, but it works fine after that.

in my case:
* cpufreq_userspace
* powernow-k8

On my X2 I get 4 speed choices, and both cores need to run at the same
speed:

pmg-alliance:10002:/sys/devices/system/cpu/cpu0/cpufreq/ cat 
scaling_available_frequencies 
2200000 2000000 1800000 1000000 
pmg-alliance:10003:/sys/devices/system/cpu/cpu0/cpufreq/ cat scaling_cur_freq 
1000000
pmg-alliance:10004:/sys/devices/system/cpu/cpu0/cpufreq/ cat affected_cpus 
0 1

I'd recommend powernowd (v0.97) using the userspace governor, but there
are other choices out there.
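
With the userspace governor you can also drive the speed by hand
through sysfs, something like this (assuming the modules above are
loaded):

echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 1800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed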

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Long-term (hours+) solution for power outage: PBX, lights, and computers

2006-08-31 Thread Dave Johnson
Ted Roche writes:
 A client suffered a five-hour downtime recently, and has been looking  
 for a reasonably priced solution. He's working out of leased office  
 space, and a generator or major structural changes are unreasonable.  
 He happened upon:
 
 http://www.sentrypowertechnology.com/
 
 And they offer 6 KW and 12 KW units.
 
 The client would like to keep a half-dozen computers, LCD screens and  
 a laser printer or two running for a 4- to 6-hour period. Anyone have  
 experience with a solution that would work? Things to look out for?  
 For example, each computer has its own small APC UPS, so there would  
 be two sources of battery power in series. It's possible the  
 generated AC would not be stable enough in voltage or frequency to  
 keep the UPSes from tripping on and off. I know this is a problem  
 with the smaller, less-regulated generator units.
 
 Is there a good solution for a small office? Of should he be looking  
 for a 10K VA UPS?

Most UPS manufacturers have extended-run models to which you can attach
multiple external battery packs for hours and hours of runtime.

Are we talking about equipment that is distributed throughout the
office or a bunch of network equipment and servers?  If they are
centralized then one big extended run UPS is in order.

It's generally not a good idea to chain UPSs.

For really long runs (4+ hours), a generator to power the UPS is usually
a better solution.  3KVA-6KVA generators aren't that big, but you do
need to wheel them outside before turning them on.


--
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: kernel headers question.

2006-08-20 Thread Dave Johnson
Steven W. Orr writes:
 So having said all that, is it safe to say that for all versions of
 2.4 and 2.6 series kernels, it's ok to have one old version of
 glibc-kernheaders? Or is there some reason that I am constrained to
 having seperate kernel headers for each project thereby necessitating
 that each version of gcc, binutils, and UClibc is dependant on the
 version of linux that it will run against?


When building gcc/binutils you generally do not need the specific
version of the kernel headers (or more specifically
--with-headers=something from gcc's configure line).

The only really linux-specific part in gcc/binutils is the crt*.o
files.  Even libgcc.a, libiberty.a, and libstdc++.a aren't very kernel
specific, but they do require something along the lines of header files
to build.

I generally use something along the lines of:

construct sys include from a 'close-enough kernel'
build binutils cross
build gcc cross stage 1 (only c support)
build glibc cross
build gcc cross stage 2 (c & c++)
blow away the glibc results
install stage 2 gcc, binutils, and sys includes system wide.

(if you want c++ support you unfortunately need to build a cross
libc in between stages 1 and 2.)
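
As a very rough sketch of what those steps look like (version numbers,
the target triple, and the install prefix are placeholders, and real
builds want separate build directories and a few more options):

TARGET=powerpc-linux ; PREFIX=/opt/cross
../binutils-2.x/configure --target=$TARGET --prefix=$PREFIX && make && make install
../gcc-x.y/configure --target=$TARGET --prefix=$PREFIX \
    --with-headers=$PREFIX/$TARGET/include \
    --enable-languages=c --disable-shared --disable-threads && make && make install
../glibc-x.y/configure --host=$TARGET --prefix=$PREFIX/$TARGET \
    --with-headers=$PREFIX/$TARGET/include && make && make install
../gcc-x.y/configure --target=$TARGET --prefix=$PREFIX \
    --enable-languages=c,c++ --enable-shared && make && make install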


The specific kernel headers are more important when building your
*libc as any change in the syscall area has a big effect on libc.

Note that this part settled down a long time ago and you can
usually get away with a close-enough kernel here too.  If you're not
changing syscalls or other kernel headers then you can build your
*libc once and forget it.

workareas consist of:

build kernel
build *libc  (rebuilding isn't needed that often)
build everything else

If you're not changing your libc or making major changes in your kernel
(that would affect *libc) you can build the libc and install it system
wide along with gcc/binutils.

Also note, you need to make sure binutils and gcc don't look in
/usr/include or /usr/lib by default, but instead use the sys includes
you built with and the libgcc.a, libiberty.a and libstdc++.a that you
built.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: kernel headers question.

2006-08-20 Thread Dave Johnson
Steven W. Orr writes:
 =The only really linux specific part in gcc/binutils is the crt*.o
 =.  Even libgcc.a, libiberty.a, libstdc++.a aren't very kernel
 =specific, but do require something in the lines of header files to
 =build.
 =
 =I generally use something along the lines of:
 =
 =construct sys include from a 'close-enough kernel'
 
 This is probably really the heart of the question. Is any early 2.4 set of 
 headers going to be good enough for any *libc to be run on any 2.4 series 
 that is equal to or later than the set of headers used? Do I need to have 
 a different set for 2.6 kernels or can I continue use of the headers that 
 came from a 2.4 kernel?

x86 should be rather good between 2.4 and 2.6, but other architectures
may differ more and have conflicts; however, I don't recall any
issues.

You may run into problems building the latest and greatest glibc
against 2.4 headers as it may be looking for new syscalls such as
sched_getaffinity() which were added in 2.6.

 =build binutils cross
 =build gcc cross stage 1 (only c support)
 =build glibc cross
 =build gcc cross stage 2 (c  c++)
 =blow away this glibc results
 =install stage 2 gcc, binutils, and sys includes system wide.
 =
 =(if you want c++ support you unfortunately need to build a cross
 =libc inbetween stage 1 and 2.)
 
 I don't understand why?

In order to build shared versions of libraries ld needs crt1.o as well
as -lc (both built as part of *libc) to link a .so.  So more correctly
I should have said --enable-shared (highly desirable in stage 2)
requires glibc although I recall c++ may also need it.

I just ran a build skipping glibc and the first failure in stage 2 was
building libgcc_s.so.1 because it couldn't find crt1 and libc.

 =The specific kernel headers are more important when building your
 =*libc as any change in the syscall area has a big effect on libc.
 
 Agreed, but I'm not sure I understand why syscall comes into play. I get 
 that there are constants to syscall that correspond to new system calls in 
 newer kernels, but I'm not sure I understand why i care.

see above.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Easy to use CAD / architectural design sw suggestions?

2006-08-13 Thread Dave Johnson
Bruce Labitt writes:
 Bruce Labitt wrote:
  I need to make up a set of scale drawings for a workshop I'm setting 
  up.  Does anyone have some experience with architectural drawing 
  package that is affordable / cheap / free ?  I used to have an ancient 
  copy of Generic Cadd that ran on DOS.  It was quite powerful.  
  Anything available for Linux?  I need a big step up from pencil.
 
  Anyone have experience on getting one of the ~$49 CAD software 
  packages for sale for that other OS to run in Linux?
  ___
  gnhlug-discuss mailing list
  gnhlug-discuss@mail.gnhlug.org
  http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
 
 I'm attempting to compile  QCAD.  I get an error about iso c++ not 
 supporting 'long long's
 
 In file included from /usr/lib/qt3/include/qobjectdefs.h:42,
  from /usr/lib/qt3/include/qobject.h:42,
  from actions/rs_actioninterface.h:31,
  from actions/rs_actioninterface.cpp:28:
 /usr/lib/qt3/include/qglobal.h:714: error: ISO C++ does not support 
 `long long'
 /usr/lib/qt3/include/qglobal.h:715: error: ISO C++ does not support 
 `long long'
 
 So I googled compile qcad and found a thread at 
 http://groups.yahoo.com/group/qcad-user/message/1406 .
 It said to
 
 grep -rn -pedantic the source directory
 go to the file with -pedantic in it and delete -pedantic (it's a
 compiler switch)
 compile, that should do it.
 
 
 I'm sure it is that I am a total novice at this...  I go to the 
 directory where I have the source installed and execute
 
 grep -rn -pedantic *
 
 grep tells me:
 grep: invalid option -- p
 Usage: grep [OPTION]... PATTERN [FILE]...
 Try `grep --help' for more information.
 
 This wasn't too helpful ;)  at least not for me...  Can anyone offer a 
 helpful hint?  Be gentle, I only hack about at this after hours.  I did 
 try enclosing in double quotes, same answer...
 
 Thanks! 


Yep, just downloaded qcad-2.0.5.0-1-community.src.tar.gz and I get:

$ cd scripts
$ QMAKESPEC=/usr/share/qt3/mkspecs/linux-g++ QTDIR=/usr/share/qt3 
./build_qcad.sh
[...]
make[2]: Entering directory 
`/home/pmg/tmp/qcad-2.0.5.0-1-community.src/qcadlib/src'
g++ -c -pipe -pedantic -Wall -W -O2  -DRS_NO_COMPLEX_ENTITIES -DQT_NO_DEBUG 
-DQT_SHARED -DQT_THREAD_SUPPORT -I/usr/share/qt3/mkspecs/linux-g++ -I. 
-I../include -I../../dxflib/include -I../../fparser/include 
-I../../qcadcmd/include -I/usr/include/qt3 -Imoc/ -o obj/rs_actioninterface.o 
actions/rs_actioninterface.cpp
In file included from /usr/include/qt3/qobjectdefs.h:42,
 from /usr/include/qt3/qobject.h:42,
 from actions/rs_actioninterface.h:31,
 from actions/rs_actioninterface.cpp:28:
/usr/include/qt3/qglobal.h:712: error: ISO C++ does not support `long long'
/usr/include/qt3/qglobal.h:713: error: ISO C++ does not support `long long'
make[2]: *** [obj/rs_actioninterface.o] Error 1
make[2]: Leaving directory 
`/home/pmg/tmp/qcad-2.0.5.0-1-community.src/qcadlib/src'
make[1]: *** [lib/libqcad.a] Error 2
make[1]: Leaving directory `/home/pmg/tmp/qcad-2.0.5.0-1-community.src/qcadlib'
make: *** [all] Error 2
Building qcadlib failed
$

The change needed is in:

qcad-2.0.5.0-1-community.src/mkspecs/defs.pro:

QMAKE_CXXFLAGS_DEBUG += -pedantic
QMAKE_CXXFLAGS += -pedantic

just comment out those 2 lines and it should build fine.
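
(As for the grep error you hit: grep was parsing -pedantic as an
option rather than a pattern.  Telling it explicitly which is which
works, e.g.:

grep -rn -e -pedantic *

or 'grep -rn -- -pedantic *'.)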


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Easy to use CAD / architectural design sw suggestions?

2006-08-13 Thread Dave Johnson
Bruce Labitt writes:
 I got this after 20 minutes or so...
 
  Building Translations 
 sh: ./release_translations.sh: No such file or directory
 Building qcad binary failed
 
 I think the community.src doesn't contain translations?
 
 Did you get that far?
 

You need to run './build_qcad.sh notrans'

By the way, qcad is most likely included as part of your distribution,
and you shouldn't need to compile it unless you want to.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: UPSes - MinuteMan, others?

2006-08-11 Thread Dave Johnson
Ben Scott writes:
   I suspect this type of device is perfect for situations where line
 voltage is consistently out of tolerance by a certain amount.
 However, they do not provide isolation from line transients or noise,
 nor can they cope with severe undervoltages (< 95 VAC or so).  We have
 both here.  Sometimes, when everything else is running and then the
 big mill kicks in, the lights go out for a fraction of a second.

If the motor is powerful enough to brown out everything else in your
building when it starts up, chances are the transformer feeding the
building can't handle the startup load and is underpowered for your
needs.

An electrician can hook up monitoring equipment to watch the
current/voltage of your feed, or just the circuit to the motor, and see
if this is the case.

Your utility can replace the transformer with a bigger one, but that
may cost money and downtime.

This won't solve everything, but it's just another area to explore.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Which CPU are we waiting for to get VM hypervisors in hw?

2006-07-30 Thread Dave Johnson
Ted Roche writes:
 It's time to start shopping for a new development laptop. I'm a big  
 fan of big iron: the laptop is my primary workstation and I'd prefer  
 to lug around a 4 kg machine than lack for power. I'm tempted both by  
 the ThinkPad T60p and the MacBookPro. Both run Core Duo.  While I  
 know these are capable of running VMs, I'm hearing that a next  
 generation chip is going to have more capability in the chips to  
 provide more powerful VMs. The technical details are beyond me, but  
 my questions are simple enough: does anyone know what's the timeline  
 on these new chips, and will their delivery in laptops make all that  
 much perceivable difference or are the features more aimed at big  
 iron (8-way and up) machines.
 

Merom will be out in 1-2 months.  Probably worth the wait.


http://www.dailytech.com/article.aspx?newsid=3543

http://www.pcper.com/article.php?type=expertaid=276




-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: LACIE USB disk and SuSE 10.1

2006-07-09 Thread Dave Johnson
Jon maddog Hall writes:
 Hi folks,
 
 It is not often I have a problem, but here one is.
 
 I recently bought a LACIE 600 GByte USB 2.0 disk, unwrapped it and plugged it
 into my SuSE 10.1 distribution.
 
 Normally I plug in a USB storage device and it mounts, with a nice little
 icon that comes up.  No such luck.  I hear the drive spin up and the heads 
 load,
 but that is about it.
 
 So I do a dmesg and I get this out of it:
 
 usb 4-1: new high speed USB device using ehci_hcd and address 10
 usb 4-1: new high speed USB device using ehci_hcd and address 11
 usb 4-1: new device found, idVendor=0451, idProduct=6250
 usb 4-1: new device strings: Mfr=2, Product=3, SerialNumber=1
 usb 4-1: Product: TUSB6250 Boot Device
 usb 4-1: Manufacturer: Texas Instruments
 usb 4-1: SerialNumber: AAB74D460127
 usb 4-1: configuration #1 chosen from 1 choice
 
 I expected to see something like /dev/sdX but that did not come up.
 
 I poked around the net, found some information about ehci_hcd drivers, etc.
 and it told me to do a lsusb:
 
 [EMAIL PROTECTED]:~ lsusb
 Bus 003 Device 001: ID :
 Bus 004 Device 011: ID 0451:6250 Texas Instruments, Inc.
 Bus 004 Device 001: ID :
 Bus 002 Device 001: ID :
 Bus 001 Device 001: ID :
 

'lsusb -v' will show the class of the device:

Interface Descriptor:
  bLength 9
  bDescriptorType 4
  bInterfaceNumber0
  bAlternateSetting   0
  bNumEndpoints   3
  bInterfaceClass 8 Mass Storage
  bInterfaceSubClass  6 SCSI
  bInterfaceProtocol 80 Bulk (Zip)
  iInterface  0 

If it isn't a Mass Storage class usb device you need to create a
manual mapping of vendor/device (0451/6250 in your case) to have
hotplug map it to the usb-storage driver.

This will tell hotplug to load the usb-storage module when that device
is discovered.  You can manually modprobe usb-storage to see if that
helps.

I'm not too familiar with SuSE, but you need to find one of the mapping
files, usually under /etc/hotplug or /etc/hotplug/usb, and add an entry
mapping the vendor/device to the usb-storage driver.

I can never remember what all the fields are but something like:

usb-storage 0x000f 0x0451 0x6250 0x0 0x0 0x00 0x00 0x00 0x00 0x00 0x00 
0x


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Recycled computers

2006-05-30 Thread Dave Johnson
Ted Roche writes:
 Branching the One Laptop thread...
 
 On May 27, 2006, at 10:27 PM, David Ecklein wrote:
 
  I don't understand this fixation on laptops.  These are commodities  
  for the
  I would rather see an effort mounted to refurbish the many usable  
  desktops
  that are going to the dump every day.
 
 LUG members have done this. A group of MonadLUG members lead by Tim  
 Lind worked on this for a while last year. Tim's apparently succeeded  
 in getting a couple refurbished machines into local libraries and  
 elder care facilities. However, there's a lot of resistance to these  
 contributions: concern over support, toxic waste disposal, security,  
 etc.
 
 For recycling, there are some good options:
 
 http://www.des.state.nh.us/swtas/recycle_electronics.htm
 http://www.des.state.nh.us/swtas/comp_recyclers.htm

Most electronics recycling companies already have re-use deals in
place for working components and will then go after the materials on
whatever is left over (broken or obsolete).

I brought a car-load of old stuff (some working some broken) to this
place about a year ago because they would take everything I had for
_FREE_ as long as I transported it to them (they charge for pickup).

http://www.recyclingelectronics.com/

They have a reasonable amount of info on their website about what they
will accept.

It was quite a drive (they are in Brockton, MA) but well worth it to
avoid the hassles and fees for CRTs and other electronics from my
local town or just about any other company I found.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Liquid Cooling

2006-05-30 Thread Dave Johnson
Ben Scott writes:
 On 5/30/06, Sean [EMAIL PROTECTED] wrote:
  I actually figured there would have been a few people here who would be
  using liquid cooling, but no responses.
 
   For whatever reason, the members of this group appear to tend
 towards a pragmatic bent, and/or are IT professionals.  Liquid cooling
 is generally not cost-effective (it gives you a comparatively small
 performance gain, for a high cost and risk).  It's major application
 seems to be so people can indulge in games of one-up-manship.  Nothing
 wrong with that, but you don't see it much in a business environment.
 
 -- Ben

If you're after a quieter computer (as opposed to over-clocking) there
are some easier steps to take before liquid cooling:

* Invest in a quieter cpu HS/Fan with adjustable speed (manual or
  temperature based).  I've had great success with Zalman parts.

* Get a case with BIG intake/exhaust fans, 120mm is very good, 60mm
  should be avoided at all costs.

* If the fan covers are punched out of the case material, cut them out,
  especially if they are made up of tiny holes.  Replace them with an
  attachable wire grill.  I've seen some cases with more than 66% of
  the fan covered with only very small holes cut out.  You can double or
  triple the airflow by opening up the fan.

* If your case rattles or vibrates get some sound dampening material.
  I've had very good success with Akasa PaxMate brand material.

* If your motherboard has a north-bridge fan, consider replacing it
  with a large passive heat sink.  Some companies sell these, but this
  of course isn't for the novice.  The same goes for a GPU fan.

* Get an adjustable fan speed controller for your case fans.  If
  you've done the above steps you can under-volt your case fans
  significantly and still get more than enough airflow.  A good 80mm
  case fan can be very quiet at 1300rpm and still push a lot of
  airflow if it is unobstructed.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Using the serial port for GPIO.

2006-05-13 Thread Dave Johnson
Scott Garman writes:
 Tom Buskey wrote:
  People  usually use the parallel port for this kind of stuff.  8 outputs 
  and  at least 5 inputs.  More inputs are possible w/ the newer 
  bidirectional ports.
  
  parpin on sourceforge lets you work on individual pins.  There's tons of 
  info on using the parallel port for digital I/O out there.
 
 Thanks Tom, this is extremely useful and exactly what I'm looking for 
 for some of the I/O I'll be doing!


/dev/parport0 is your friend.  No kernel work needed.  I use that for
controlling my LCD screen at work.  See http://centerclick.org/lcd/

Note that I've found that most modern PCs provide very poor voltages
on the parallel port (but still within the TTL ranges).  I've even
seen different voltages on the Data and Status pins on the same port!

Don't expect to draw very much current from it either.  After much
frustration I ended up adding some octal line drivers with a very wide
input range between the pport and device to boost everything to
acceptable levels.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Hard drive troubleshooting

2006-05-11 Thread Dave Johnson
Ted Roche writes:
 In /var/log/messages, smartd is reporting:
 
 Device: /de/hdb, 3 Currently unreadable (pending) sectors
 
 I'm trying to figure out where they are and how to fix them or mark  
 them as bad.
 
 Took the machine down last night and ran SpinRite 6 on it. It  
 reported no problems on the drive.
 
 Ran the smartctl tests, both short and long, and they finished  
 without errors.

My laptop HD developed the same problem a few years ago.  smartd keeps
reporting pending sectors.  After a few years nothing seems to be bad
and the HD keeps on working; however, backups are always a good idea.


intrepid:~# grep unreadable /var/log/syslog
May 11 08:34:43 intrepid smartd[1557]: Device: /dev/hda, 7 Currently unreadable 
(pending) sectors 


intrepid:~# smartctl -iA /dev/hda
smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model: IBM-DJSA-220
Serial Number:44W4ZNF4226
Firmware Version: JS4OAC0A
Device is:In smartctl database [for details use: -P show]
ATA Version is:   5
ATA Standard is:  ATA/ATAPI-5 T13 1321D revision 1
Local Time is:Thu May 11 08:34:53 2006 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE  UPDATED  
WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x000b   100   100   062Pre-fail  Always   
-   0
  2 Throughput_Performance  0x0005   105   105   040Pre-fail  Offline  
-   6041
  3 Spin_Up_Time0x0007   128   128   033Pre-fail  Always   
-   0
  4 Start_Stop_Count0x0012   091   091   000Old_age   Always   
-   14316
  5 Reallocated_Sector_Ct   0x0033   100   100   005Pre-fail  Always   
-   0
  7 Seek_Error_Rate 0x000b   100   100   067Pre-fail  Always   
-   0
  8 Seek_Time_Performance   0x0005   095   095   040Pre-fail  Offline  
-   142
  9 Power_On_Hours  0x0012   093   093   000Old_age   Always   
-   3124
 10 Spin_Retry_Count0x0013   100   100   060Pre-fail  Always   
-   0
 12 Power_Cycle_Count   0x0032   100   100   000Old_age   Always   
-   1372
191 G-Sense_Error_Rate  0x000a   100   100   000Old_age   Always   
-   0
192 Power-Off_Retract_Count 0x0032   100   100   000Old_age   Always   
-   2
193 Load_Cycle_Count0x0012   092   092   050Old_age   Always   
-   86909
196 Reallocated_Event_Count 0x0032   100   100   000Old_age   Always   
-   27
197 Current_Pending_Sector  0x0022   100   100   000Old_age   Always   
-   7  ---HERE
198 Offline_Uncorrectable   0x0008   100   100   000Old_age   Offline  
-   0
199 UDMA_CRC_Error_Count0x000a   200   200   000Old_age   Always   
-   0

intrepid:~# 

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Ethernet device sit0

2006-04-29 Thread Dave Johnson
Jim Kuzdrall writes:
 On Saturday 29 April 2006 10:14 am, Michael ODonnell wrote:
  That laptop reportedly has
  a 3c905 Enet so theoretically most Linux distributions should
 
 Yes, I had YaST do its hardware search then save the results in a 
 file.  The file had 3Com.3C920 buried in a long string of other 
 device names, but it never showed up in the GUI listing of the results.
 
  output of dmesg?
 
 dmesg | grep -i showed nothing for warn or error.  But, boot.msg did 
 say eth-id-00:06:07:00:1b:67 No interface found.
 
 I checked the installed software.  All the network stuff appears to 
 be installed.  I will check some more.  I have to shut down to see if 
 the networking has to be enabled in BIOS.  I don't remember that being 
 so, but...
 
 Jim Kuzdrall
 

There are various NIC options in the C400 bios:

http://support.dell.com/support/topics/global.aspx/support/kb/en/document?dn=1062407l=enlangid=1c=uscs=s=gen


If none of those helps, make sure the 3c59x kernel module is loaded
(or CONFIG_VORTEX is enabled statically).

'lspci' of course will also reveal what, if anything, is actually there.


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: DNS migration and folks that don't play nice

2006-04-10 Thread Dave Johnson
Cole Tuininga writes:
 
 Preface - 
 
 The folks on the sys-admin list are talking about the migration of
 services from the older server to the newer server.  Of course, one of
 the issues that's come up is DNS.  This led to the following snippet:
 
 On Sat, 2006-04-08 at 09:04 -0400, wrote:
   Well, there's at least one easy workaround for that, aside from the
   obvious (shorten TTL ahead of time, to force fast propagation).
  
  Unfortunately, shortening the TTL doesn't work for clients (like AOL)
  that cache/maintain their own DNS.
 
 I was curious - how do folks in general deal with this?  While AOL can
 certainly constitute a large number of users, my inclination is to say
 hell with 'em.  If they can't conform to proper netiquette, why should
 I be bending over backwards to support them?
 
 I was just curious to get other folks' take on this quasi-philosophical
 point.

For HTTP you can create temporary A/PTRs that have never existed, then
use a 302 to redirect from old to new.

For example:

old server has www.example.com that responds with a 302 redirecting to
www2.example.com

new server hosts both www and www2 with the same content.

That way people with an old cache will request a new lookup for www2
(which is new and never had the old address).

This of course means you need to keep the www2 name around
indefinately because it could end up in people's bookmarks/links.
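
If the old box runs Apache, the redirect can be as small as this (just a
sketch -- the vhost and names are placeholders, and it assumes mod_alias
is available):

  <VirtualHost *:80>
      ServerName www.example.com
      # "Redirect temp" sends the 302 pointing old caches at the new name
      Redirect temp / http://www2.example.com/
  </VirtualHost>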


If bandwidth isn't an issue for the short term, the better solution is
to NAT requests that arrive at the old server over to the new server.
Use both SNAT and DNAT in iptables to redirect the important UDP/TCP
ports on the old server to the new server.
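
Roughly like this on the old box (untested sketch -- the addresses and
port list are just examples, and forwarding has to be turned on):

  OLD=192.0.2.10      # old server's address (example)
  NEW=198.51.100.20   # new server's address (example)

  # rewrite the destination of traffic still hitting the old address...
  iptables -t nat -A PREROUTING -d $OLD -p tcp -m multiport --dports 25,80,443 \
    -j DNAT --to-destination $NEW
  # ...and source-NAT it so replies come back through the old box
  iptables -t nat -A POSTROUTING -d $NEW -p tcp -m multiport --dports 25,80,443 \
    -j SNAT --to-source $OLD
  echo 1 > /proc/sys/net/ipv4/ip_forward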

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: this has to be a bug...

2006-03-24 Thread Dave Johnson
Paul Lussier writes:
 
 From the 'More for your dollar' department:
 
[snip]
$ sudo /sbin/ifconfig eth0:0 del 10.107.33.189
$ ifconfig
[snip]
eth0:0:1  Link encap:Ethernet  HWaddr 00:0C:F1:E2:A8:6B  
  inet addr:10.107.33.189  Bcast:0.0.0.0  Mask:0.0.0.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  Base address:0xac00 Memory:ff7e-ff80 
[snip]
 
 I ask to delete a non-existent interface, and instead, I get a totally
 new one I didn't ask for :)


Ya, a bug in net-tools, but you are using an invalid syntax.  Buggy
software and sudo definitely don't mix :(

Calling 'del' with an IPv4 IP on an interface that doesn't exist
causes the call to for_all_interfaces() from set_ifstate() to not find
the interface.  It doesn't handle this case and assumes you want to
add the address as a secondary address on the base interface.


Two bugs here:

1) del with IPv4 actually adds if what you specify doesn't exist.

2) add and del assume you only call them with a parent interface name
   ('eth0', not 'eth0:anything'), and blindly append another ':nnn' anyway.

The following actually work:

ifconfig eth0 add 10.107.33.189
ifconfig eth0 del 10.107.33.189

but run 'del' a second time and it adds it back, but with the correct
name (bug)



see below:

(gdb) r
Starting program: /tmp/net-tools-1.60/ifconfig eth0:0 del 10.107.33.189

Breakpoint 1, set_ifstate (parent=0xbf8a1a10 "eth0:0", ip=3173083914, nm=0, 
    bc=0, flag=0) at ifconfig.c:1118
1118        pt.base = parent;
(gdb) n
1119        pt.baselen = strlen(parent);
(gdb) 
1121        pt.flag = flag;
(gdb) 
1120        pt.addr = ip;
(gdb) 
1122        memset(searcher, 0, sizeof(searcher));
(gdb) 
1121        pt.flag = flag;
(gdb) 
1122        memset(searcher, 0, sizeof(searcher));
(gdb) 
1123        i = for_all_interfaces((int (*)(struct interface *,void *))do_ifcmd, 
(gdb) p pt
$4 = {flag = 0, addr = 3173083914, base = 0xbf8a1a10 "eth0:0", baselen = 6}
(gdb) n
1125        if (i == -1)
(gdb) 
1127        if (i == 1)
(gdb) p i
$5 = 0
(gdb) n
1131        for (i = 0; i < 256; i++)
(gdb) 
1132            if (searcher[i] == 0)
(gdb) 
1131        for (i = 0; i < 256; i++)
(gdb) p searcher
$6 = "\001", '\0' <repeats 254 times>
(gdb) n
1132            if (searcher[i] == 0)
(gdb) 
1135        if (i == 256)
(gdb) 
1138        if (snprintf(buf, IFNAMSIZ, "%s:%d", parent, i) > IFNAMSIZ)
(gdb) p parent
$7 = 0xbf8a1a10 "eth0:0"
(gdb) n
1140        if (set_ip_using(buf, SIOCSIFADDR, ip) == -1)
(gdb) p buf
$8 = "eth0:0:1\000\031\212¿W¹\004\b"
(gdb) p ip
$9 = 3173083914
(gdb) p /x ip
$10 = 0xbd216b0a
(gdb) show endian
The target endianness is set automatically (currently little endian)
(gdb) 


SIOCSIFADDR on 'eth0:0:1' will create the interface.  The kernel is doing
what it's told, although with a very non-standard interface name.  As
long as the name starts with 'eth0:' it considers it an alias.
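
If you want to see it for yourself, this should reproduce it on a scratch
box (same example address as above -- don't try it on an interface you
care about):

  ifconfig eth0:0 del 10.107.33.189   # alias doesn't exist, so this *adds* one
  ifconfig | grep eth0:               # note the bogus eth0:0:1 alias
  ifconfig eth0:0:1 down              # should tear the stray alias back down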


-- 
Dave


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Network testing and latency

2006-03-08 Thread Dave Johnson
Kevin D. Clark writes:
 
 Dave Johnson writes:
 
  Latency and jitter are side effects due to queuing prior to the
  bandwidth limited hop. 
 
 I think that latency has more to do with your transmission medium.
 And I think that jitter has more to do with contention.

Ya, I meant to say queuing introduces jitter and latency.  The
transmission delay of the medium adds latency regardless of
congestion.



  Protocols such as TCP are designed to avoid
  introducing latency when a slow link is in the path.
 
 I think that the design of TCP more has to do with using the network
 efficiently and with operating reliably in the presence of congestion.
 
 I'm not sure how TCP is designed to avoid intoducing latency.  The
 protocol tries to operate reliably but doesn't really make any
 guarantees that the bytestream will make it to the destination by a
 certain time.

Should have said avoid introducing excessive congestion. Latency is a
side-effect of that.

-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Network testing and latency

2006-03-08 Thread Dave Johnson
Paul Lussier writes:
 Dave Johnson [EMAIL PROTECTED] writes:
 
  Note that as mentioned before, limiting bandwidth and introducing
  latency and/or jitter are different things.
 
  If you want to simulate a bandwidth limited link you need to both
  limit bandwidth and queue packets.  If you simply drop and don't queue
  then there is no possibility of latency.
 
  Latency and jitter are side effects due to queuing prior to the
  bandwidth limited hop.  Protocols such as TCP are designed to avoid
  introducing latency when a slow link is in the path.
 
  Anyway, onto the implementation.
 
  Below script limits bandwidth in both directions when forwarding
  through two interfaces.  Note you'll need to setup the appropriate
  interfaces and routes.  Each side has it's own bandwidth and queue
  with a max size in bytes.  This is equivilant to a full-duplex T1
  pipe using a linux box and 2 ethernet interfaces representing the two
  endpoints of the T1.
 
 Dave, does this add jitter as well?  I assume that since you're
 queueing, it does to some extent.  Would it also be wise to inject
 traffic on the simulated T1 connection by having other hosts
 communicating as well?  For example I could have 1 linux system in
 between 2 switches, on which were several other systems all
 communicating with each other doing things like generating web
 requests, copying large files, etc.
 
 It seems this approach would add to the randomness of the connection
 to some extent, and increase both the latency and jitter experienced
 by the 2 systems we're really trying to test.
 
 Thanks again!
 

Latency and jitter due to queuing will be accurate; the only thing this
won't do is introduce the inherent latency due to link speed, distance,
etc...

Sending 1500 bytes over a T1 takes 7.8ms, plus forwarding delays,
distance, etc...  If you chain multiple connections together you've
got even more.  The tc setup forwards each packet as fast as
possible and just introduces delays between packets to simulate a
bandwidth limit.

In any case, if you want a 'real-world' experience you'll need to send
other stuff over the link at the same time as your test.  TCP
connections are easy.  If you want to bring the link to a crawl you can
use my UDP network test tool to send packets over the link at any rate:

http://davej.org/programs/untt/

# server listening on port 1
./untt -l -v -p 1

# client sending 400 byte packets at 4mbps (uni-directional)
./untt -vv -p 1 -s 400 -r 4000 -c 100 192.168.11.2


The more bursts the more jitter, the more input the more latency.

Also note the queue size relates directly to the max latency
because bandwidth is fixed.  The 256KB queue gives a max latency of
1.333 seconds in each direction.
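
That figure is just queue size divided by link rate; a quick
back-of-the-envelope (whether you plug in the script's 1500kbit or a T1's
~1536kbit payload rate moves it between roughly 1.3 and 1.4 seconds):

  # max one-way queuing delay ~= queue bytes * 8 / bottleneck rate
  echo "scale=2; 256*1024*8 / 1500000" | bc   # ~1.4 s at the script's 1500kbit
  echo "scale=2; 256*1000*8 / 1536000" | bc   # ~1.33 s at a T1's 1536kbit payload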

Flooding the link with a constant 4mbps above fills one of the queues
in just seconds and we reach max latency in no time:

64 bytes from 192.168.11.2: icmp_seq=161 ttl=63 time=1350 ms
64 bytes from 192.168.11.2: icmp_seq=162 ttl=63 time=1345 ms
64 bytes from 192.168.11.2: icmp_seq=163 ttl=63 time=1353 ms
64 bytes from 192.168.11.2: icmp_seq=164 ttl=63 time=1348 ms
64 bytes from 192.168.11.2: icmp_seq=165 ttl=63 time=1349 ms
64 bytes from 192.168.11.2: icmp_seq=166 ttl=63 time=1356 ms
64 bytes from 192.168.11.2: icmp_seq=167 ttl=63 time=1350 ms
64 bytes from 192.168.11.2: icmp_seq=168 ttl=63 time=1350 ms


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Network testing and latency

2006-03-07 Thread Dave Johnson
Ben Scott writes:
 On 3/7/06, Kevin D. Clark [EMAIL PROTECTED] wrote:
  If all that you want to do is to introduce latency, I would suggest
  using iptables dstlimit and fuzzy modules.
 
   Will that really create a realistic reproduction of a higher latency
 link, though?  Assuming the throughput demands were minimal, latency
 would remain low, wouldn't it?  And higher throughput demands would
 prolly result in TCP throttling or retransmissions or some such, which
 would be seen more as a bandwidth limit or bad line, not long
 transmission time, per se.

dstlimit and fuzzy are both packets-per-second limiters, not bandwidth
limiters.  That makes a big difference if your packets aren't all the
same size.

Note that as mentioned before, limiting bandwidth and introducing
latency and/or jitter are different things.

If you want to simulate a bandwidth limited link you need to both
limit bandwidth and queue packets.  If you simply drop and don't queue
then there is no possibility of latency.

Latency and jitter are side effects due to queuing prior to the
bandwidth limited hop.  Protocols such as TCP are designed to avoid
introducing latency when a slow link is in the path.

Anyway, onto the implementation.

The script below limits bandwidth in both directions when forwarding
through two interfaces.  Note you'll need to set up the appropriate
interfaces and routes.  Each side has its own bandwidth and queue
with a max size in bytes.  This is equivalent to a full-duplex T1
pipe, using a Linux box and 2 ethernet interfaces representing the two
endpoints of the T1.

-- 
Dave


run this:



# the 2 interfaces you are forwarding through
SIDE_A=eth1
SIDE_B=eth2

# line rates (real and fake)
REALRATE=100mbit
FAKERATE=1500kbit

# max queue size on both ends (bytes)
QUEUESIZE=256kb

# SIDE B - SIDE A
tc qdisc del dev $SIDE_A root
tc qdisc add dev $SIDE_A root handle 2: cbq bandwidth $REALRATE avpkt 1000
tc class add dev $SIDE_A parent 2: classid 2:1 cbq bandwidth $REALRATE \
  rate ${FAKERATE} allot 1514 maxburst 10 avpkt 1000 bounded prio 8
tc qdisc add dev $SIDE_A parent 2:1 handle 20: bfifo limit $QUEUESIZE
tc filter add dev $SIDE_A protocol ip parent 2: prio 100 u32 match ip \
  protocol 0 0 flowid 2:1

# SIDE A - SIDE B
tc qdisc del dev $SIDE_B root
tc qdisc add dev $SIDE_B root handle 3: cbq bandwidth $REALRATE avpkt 1000
tc class add dev $SIDE_B parent 3: classid 3:1 cbq bandwidth $REALRATE \
  rate ${FAKERATE} allot 1514 maxburst 10 avpkt 1000 bounded prio 8
tc qdisc add dev $SIDE_B parent 3:1 handle 30: bfifo limit $QUEUESIZE
tc filter add dev $SIDE_B protocol ip parent 3: prio 100 u32 match ip \
  protocol 0 0 flowid 3:1

# show current usage
watch tc -s -d qdisc ls

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: using make to convert images to thumbs

2006-02-26 Thread Dave Johnson
Python writes:
 I have a client that laid out their images and thumbs into almost
 parallel directory structures.
   /img              /thumb
     /x                /img
       /y                /x
         /*.jpg            /y
                             /*.jpg
 
 x and y are two digit directory names used to shorten the directory
 scans to retrieve files.  I thought I could use a Makefile to automate
 the creation of thumbs, but this pattern has me stumped.
 
 I am going to just write a script to step through the source tree and
 check the corresponding time stamp unless someone offers a suggestion.
 (No they will not shuffle things around because they have lots of logic
 in place.)


here you go:


.DEFAULT: all
.DELETE_ON_ERROR:
.PHONY: all clean

IMAGES=$(wildcard img/*/*/*.jpg)
THUMBS=$(patsubst %,thumb/%,$(IMAGES))

all: $(THUMBS)

thumb/%: %
	@mkdir -p $(@D)
	convert $^ -scale 160x120 -quality 25 $@

clean:
	$(RM) $(THUMBS)
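
For the record, running it from the directory that holds img/ is just
(the -j number is only a guess at sane parallelism):

  make -j4      # only regenerates thumbs whose source jpg is newer
  make clean    # wipes the generated thumbs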




-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Connection Reset By Peer on ssh sessions

2006-02-04 Thread Dave Johnson
Lloyd Kvam writes:
 On Sat, 2006-02-04 at 09:36 -0500, Fred wrote:
  I've got an annoying problem with the new Verizon Fios service.
  
  If I leave an ssh session open and sits idle for longer than 2-5
  minutes, it 
  is killed with a Connection Reset by Peer error message.
  
 I've seen this kind of behavior where there is a stateful, inspection
 firewall processing packets, though never with a timeout this small.
 When the firewall dropped the connection info from its state tables, any
 subsequent packets would be mangled and unacceptable to the remote end
 which would then close the connection - generating the Connection Reset
 by Peer message at the local end.
 
 I ran tcpdump at both endpoints to document what was happening.  The
 firewall managers were unwilling to make any changes.
 
 I do not know if you will be able to get Verizon to do anything to fix
 the problem.  At least ssh has a keep-alive feature that should be
 somewhat configurable.  Hopefully you can send a keep-alive packet every
 2 minutes.

Ya, definitely a firewall or NAT device is aging out your connection
from its connection table.  It can be this short of a time if the
device is overloaded with connections (such as in a DDoS) or its table
size is simply too small for the traffic flowing through it and it
needs to throw out old connections to make room for new ones.

Best bet is to use '-oProtocolKeepAlives=90' or
'-oServerAliveInterval=60 -oServerAliveCountMax=3' depending on what
version of openssh you are using.
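
If you don't want to type that every time, the same thing can go in
~/.ssh/config (sketch; the host pattern is just an example):

  Host *.example.com
      # application-level keepalive every 60s, give up after 3 missed replies
      ServerAliveInterval 60
      ServerAliveCountMax 3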


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Cohosting around Lebanon - suggestions?

2006-01-27 Thread Dave Johnson
Christopher Schmidt writes:
 On Fri, Jan 27, 2006 at 01:05:22AM -0500, Ken D'Ambrosio wrote:
  On Thu, January 26, 2006 11:54 pm, Bill McGonigle wrote:
  
   We'd probably want to fund a terminal server/remote power unit to share
   for decent non-driving management.  I have a Zyplex with lots of serial
   ports but it only speaks telnet, so there would be need for a pokey ssh
   box in front of it, which might not be worth another U.
  
  Somewhere, I've got a power strip that allows remote access.  Not sure
  what protocols it speaks.  I think it's an APC, so that probably says
  something to someone.  I'd be glad to contribute it for this project; I
  imagine poking around with the docs could get it up and running fairly
  quickly.  [I, too, have an older-than-death Ethernet-to-RS-232 gizmo. 
  Since it actually has an AUI port, in addition to the 10-Base-T port, I
  imagine it only supports telnet.]
 
 Presumably this is an APC Masterswitch. I actually wrote a perl script
 to talk to one of those things at Wedu. They're typically pretty simple:
 You telnet in, you can get a status of plugs, you can turn them off or
 on or cycle.
 
 We used it to do our heartbeat STONITH (Shoot the Other Node In the
 Head) step. Worked pretty well when we wanted to kill a machine and
 din't want to drive to the colo (even though it was only a mile away).
 
 It does only support telnet, and only 8 char passwords at that. (At
 least, ours does.) Note that this was determined by trial and error, and
 was not documented anywhere obvious.

APC MasterSwitches can also be controlled via SNMP, which is how I
control mine.  See http://centerclick.org/temp/ms-reboot
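
The guts of it are a single snmpset (rough sketch from memory of the
PowerNet MIB -- double-check the sPDUOutletCtl OID against whatever MIB
your firmware ships; hostname and community string are placeholders, and
value 3 means "reboot" on outlet 4 here):

  snmpset -v1 -c private masterswitch.example.org \
      .1.3.6.1.4.1.318.1.1.4.4.2.1.3.4 i 3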

Also, I have a spare 24-port 10/100 managed rackmount ethernet switch
that I can bring along.  It supports dot1q so we could use a separate
VLAN for local equipment such as the masterswitch/console server/etc..


-- 
Dave

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss