Help me fix this annoying RPI4 Buster printing problem

2021-01-08 Thread Bruce Labitt
At the moment I'm on an RPi4 running Raspberry Pi OS.  My laptop died 
recently.  I'm trying to figure out how this OS works.  Have to say, it's 
been painful in many ways.  One is that the RPi4 is slow...  It will be a 
little while before I get a replacement laptop, so I have to deal with this.

Although I have configured a network printer (served by a CUPS server on 
an RPi2), some apps on my RPi4, like GIMP, gpaint, and LibreOffice, seem 
to retain old, incorrect printers.

My GoogleFu seems to be terrible.  I can't find a way to stop these old 
entries from showing up.  There has to be a way to do this!

I've found this experience to be pretty darned exasperating.  I find 
lots of apps, including gpaint, don't just work on RPi4 Buster.  Doing a 
print, or a print preview, from gpaint should NOT cause the app to fail 
and terminate, especially when the network printer actually is configured 
correctly.  I can print a test page just fine.

Where are system wide print settings managed?  File names and 
directories?  Does gpaint have a shadow set of printer settings? Where 
are they stored?
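
For the CUPS side, a minimal sketch of where queue settings live (the 
printer name below is just a placeholder):

     lpstat -p -d                  # list the printers CUPS knows about, plus the default
     sudo lpadmin -x OldPrinter    # remove a stale queue from CUPS
     cat ~/.cups/lpoptions         # per-user default printer and options, if present
     cat /etc/cups/lpoptions       # system-wide defaults

GTK apps and LibreOffice also remember the last printer they used on 
their own, separately from CUPS, so a stale name can survive in an app's 
saved settings even after the CUPS queue itself is gone.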



Linking problem

2019-09-13 Thread Bruce Labitt
Puzzling over the use of ldconfig.  As I understand it ldconfig can be 
used to rebuild/locate all the shared libraries.  It looks in ld.so.conf 
for the directories to use. In my case ld.so.conf has one line in it:

"include /etc/ld.so.conf.d/*.conf"

I have 3 conf files in ld.so.conf.d.

libc.conf:

     /usr/local/lib

x86_64-linux-gnu.conf:

     /lib/x86_64-linux-gnu

     /usr/lib/x86_64-linux-gnu

i386-linux-gnu.conf:

     /lib/i386-linux-gnu

     /usr/lib/i386-linux-gnu

If I run "sudo rm /etc/ld.so.cache" and then "sudo ldconfig -v", I get the message

     /sbin/ldconfig.real: Path `/lib/x86_64-linux-gnu' given more than once
     /sbin/ldconfig.real: Path `/usr/lib/x86_64-linux-gnu' given more 
than once

Why would this happen?  Is this ok?

I haven't gotten to my actual question yet, but this is puzzling me.  
Ubuntu 18.04 LTS, if this matters.  I'm trying to figure out if things 
are ok enough to ask why the linker can't find a file, even though I see 
it in ldconfig.  Maybe what I am asking is how to force a new 
configuration after deleting the ld.so.cache.
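
For reference, the rebuild-and-verify sequence I understand to be the 
usual one (library and program names below are just placeholders):

     sudo ldconfig                  # rebuild /etc/ld.so.cache from ld.so.conf and the trusted dirs
     ldconfig -p | grep libfoo      # confirm the library now shows up in the cache
     ldd ./myprog                   # see what the run-time linker actually resolves

As I understand it, the "given more than once" warnings just mean the same 
directory is reachable through more than one configuration entry, and they 
are harmless.  Also, the compile-time linker (ld) searches -L paths and its 
own default directories rather than ld.so.cache, so a library visible to 
ldconfig can still be missed at link time.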





ip6tables problem

2014-01-03 Thread Curt Howland
So I'm trying to put together a nice firewall, and there is one
command that is just not working:

ip6tables -A INPUT -m limit --limit 3/min --limit-burst 10 -j LOG
--log-prefix "[INPUT6]: "

ip6tables is acting as if -j LOG is trying to jump to a chain that
has not been defined.

ip6tables: No chain/target/match by that name.

Problem is that every example I can find online makes this seem to just work.

Any suggestions?

Curt-


Re: ip6tables problem

2014-01-03 Thread Shawn O'Shea
The LOG target is apparently implemented as an iptables extension, but it is a
standard extension shipped with the iptables package. On an Ubuntu LTS system I
have, it is /lib/xtables/libipt_LOG.so and part of the iptables Ubuntu
packages. In a 64-bit CentOS 6 system I have, it's
/lib64/xtables/libipt_LOG.so and part of the iptables RPM.

What distro are you using? Maybe your distro broke the extensions out into
a separate package? Or at least see if this library exists on your system.
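
A quick way to check both pieces, with paths as they are on my Ubuntu box, 
so treat them as examples:

     ls /lib/xtables/ | grep -i log          # is the userspace extension library there?
     dpkg -S /lib/xtables/libip6t_LOG.so     # which package owns it (Debian/Ubuntu)
     lsmod | grep -i log                     # is the matching kernel module loaded?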

-Shawn


On Fri, Jan 3, 2014 at 2:54 PM, Curt Howland howl...@priss.com wrote:

 So I'm trying to put together a nice firewall, and there is one
 command that is just not working:

 ip6tables -A INPUT -m limit --limit 3/min --limit-burst 10 -j LOG
 --log-prefix [INPUT6]: 

 ip6tables is acting as if -j LOG is trying to jump to a chain that
 has not been defined.

 ip6tables: No chain/target/match by that name.

 Problem is that every example I can find online makes this seem to just
 work.

 Any suggestions?

 Curt-



Re: ip6tables problem

2014-01-03 Thread Shawn O'Shea
I fired up a Debian wheezy vm. It looks like there is a corresponding
kernel module. When I try your ip6tables command, it works, and autoloads
the necessary kernel modules.
root@debian:~# lsmod | grep LOG
root@debian:~# ip6tables -A INPUT -m limit --limit 3/min --limit-burst 10
-j LOG --log-prefix "[INPUT6]: "
root@debian:~# lsmod | grep LOG
ip6t_LOG   12609  1
ip6_tables 22175  2 ip6table_filter,ip6t_LOG
x_tables   19118  4 ip6_tables,ip6table_filter,xt_limit,ip6t_LOG

The module lives in the matching kernel version module tree, like:
/lib/modules/`uname -r`/kernel/net/ipv6/netfilter/ip6t_LOG.ko

Is that on your system? Have you created a custom compiled kernel that
perhaps does not have the proper option enabled to build it?
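
A rough way to check and, if necessary, force the issue (module and rule 
exactly as above):

     ls /lib/modules/$(uname -r)/kernel/net/ipv6/netfilter/ | grep -i log
     modprobe ip6t_LOG
     ip6tables -A INPUT -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[INPUT6]: "

If the ip6t_LOG.ko file isn't there at all, the kernel was built without the 
ip6tables LOG target.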

-Shawn

On Fri, Jan 3, 2014 at 3:49 PM, Curt Howland howl...@priss.com wrote:

 On Fri, Jan 3, 2014 at 3:24 PM, Shawn O'Shea sh...@eth0.net wrote:
  it is /lib/xtables/libipt_LOG.so and part of the iptables Ubuntu
 packages.
  In a 64-bit CentOS 6 system I have, it's /lib64/xtables/libipt_LOG.so and
 Debian stable,

 Sure enough,

 /lib/xtables
 # ls -a | grep -i log
 libip6t_LOG.so
 libipt_LOG.so
 libipt_ULOG.so
 libxt_NFLOG.so

 So I guess the next question is how to kick ip6tables into using it.

 Curt-



Thunderbird problem fixed + plus an additional problem

2013-07-15 Thread Donald Leslie
I renamed the Inbox file and created a new Inbox. Thunderbird then 
downloaded everything and now only gets new messages. I did look at the 
popstate.dat file. Before, it had a few "k" entries but was mostly "d" 
entries. It now has only "k" entries since the Inbox was rebuilt.

This has been more of an issue recently, since I can no longer use 
Comcast web-mail with Firefox or Konqueror. When you select the email 
tab at the Comcast web site, the page goes away and then returns to the 
same location, not to email. I tried using Chrome a few days ago and 
discovered that it works fine. I wonder if it has anything to do with 
their JavaScript processing?






Re: Thunderbird problem

2013-07-12 Thread Jerry Feldman
Go to Edit / Account Settings / your username@comcast.net / Server Settings.
Make sure that "Leave messages on server" is NOT checked.


On 07/10/2013 06:10 PM, Donald Leslie wrote:
 I regularly attended the meetings in Nashua until I moved away in 
 2006. The mozilla community does not seem to respond to problems, so I 
 thought I would try here.

 Recently thunderbird started re-reading my entire Comcast mail inbox 
 when looking for new mail. It does not happen with gmail. Comcast is a 
 pop server and gmail is imap. How does thunderbird know that mail is 
 new and need to be read? Any suggestions?

 I am including the trouble shooting information that thunderbird 
 generates :

 Application Basics

 Name: Thunderbird
 Version: 17.0.6
 User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130510 
 Thunderbird/17.0.6
 Profile Directory: Open Directory

 (Local drive)
 Application Build ID: 2013051000
 Enabled Plugins: about:plugins
 Build Configuration: about:buildconfig
 Crash Reports: about:crashes
 Memory Use: about:memory

 Mail and News Accounts
 account1:
 INCOMING: account1, , (pop3) mail.comcast.net:110, alwaysSTARTTLS, 
 passwordCleartext
 OUTGOING: smtp.comcast.net:587, alwaysSTARTTLS, passwordCleartext, true

 account2:
 INCOMING: account2, , (none) Local Folders, plain, passwordCleartext

 account3:
 INCOMING: account3, , (imap) imap.googlemail.com:993, SSL, 
 passwordCleartext
 OUTGOING: smtp.googlemail.com:465, SSL, passwordCleartext, true

 Extensions
 Test Pilot for Thunderbird, 1.3.9, true, tbtestpi...@labs.mozilla.com

 Important Modified Preferences

 Name: Value

 browser.cache.disk.capacity: 358400
 browser.cache.disk.smart_size_cached_value: 358400
 browser.cache.disk.smart_size.first_run: false
 browser.cache.disk.smart_size.use_old_max: false
 extensions.lastAppVersion: 17.0.6
 mailnews.database.global.datastore.id: 
 a8d574ca-3cfb-4e4d-b031-9901685c4a1
 mail.openMessageBehavior.version: 1
 network.cookie.prefsMigrated: true
 places.database.lastMaintenance: 1372950413
 places.history.expiration.transient_current_max_pages: 98466

 Graphics

 Adapter Description: Intel Open Source Technology Center -- Mesa DRI 
 Intel(R) Ironlake Mobile
 Vendor ID: nter
 Device ID: ile
 Driver Version: 2.1 Mesa 9.0.2
 WebGL Renderer: Blocked for your graphics driver version. Try updating 
 your graphics driver to version Not Mesa or newer.
 GPU Accelerated Windows: 0/1




-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id:3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90



Thunderbird problem

2013-07-12 Thread Donald Leslie
I left it checked so I can read the email when I am not on my laptop. It 
had worked fine that way until a few days ago.
The problem is only with Comcast. The gmail account does not re-read already 
read email.


Re: Thunderbird problem

2013-07-12 Thread Jerry Feldman
On 07/12/2013 01:21 PM, Donald Leslie wrote:
 I left it checked so I can read the email when I am not on my laptop. It
 had worked fine that way until a few days ago.
 The problem is only with Comcast. The gmail account does re-read already
 read email.
I don't think this is an email client problem (i.e. Thunderbird). It 
appears to be an issue with Comcast. WRT gmail: since gmail uses IMAP, 
it is an entirely different protocol. With IMAP, when you read an email 
the server is notified, but with POP it occurs when the email is 
downloaded. So, with gmail, you may see 3 unread messages in the client, 
and the same 3 should be unread on the server (allowing for timing 
differences). If you read an email on the server, that should show up 
eventually in the client. With POP, the email on the server should reflect 
the download, and should be marked read on the server, but I have seen 
many cases where it was not.

I have not read the POP spec in a number of years so my info may be rusty.

-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id:3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90



Re: Thunderbird problem

2013-07-12 Thread mikebw
The Thunderbird client, as far as I know, synchronizes a folder with the server 
via POP3 by trying UIDL first, then XTND XLST if that is unsupported, and then 
TOP. If none of those extensions to POP3 work, keeping mail on the server is 
not supported.

As a practical matter, nearly all widely used POP3 servers support UIDL. When 
things go wrong, the first thing to check is whether the UIDL database has gone 
corrupt on either the client or the server. I'm not sure if there is any hope 
of getting Comcast to verify the server side, but reindexing the mailbox on the 
client side is worth a try.
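
If you want to eyeball it yourself, you can speak POP3 to the server by 
hand; a rough sketch against the account settings shown earlier in the 
thread (username and password are placeholders):

     openssl s_client -connect mail.comcast.net:110 -starttls pop3 -crlf
     USER your_username
     PASS your_password
     UIDL
     QUIT

UIDL should print one line per message, the message number followed by its 
unique ID.  If it errors out, Thunderbird is falling back to the other 
mechanisms above; if it returns IDs, the client-side popstate.dat is the 
next thing to suspect.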

-- Mike


Jerry Feldman g...@blu.org wrote:
On 07/12/2013 01:21 PM, Donald Leslie wrote:
 I left it checked so I can read the email when I am not on my laptop.
It
 had worked fine that way until a few days ago.
 The problem is only with Comcast. The gmail account does re-read
already
 read email.
I don't think this is an email client problem (ie Thunderbird). It 
appears to be an issue with Comcast. WRT: gmail. Since gmail uses iMap 
it is an entirely different protocol. With iMAP when you read an email,

then the server is notified but in POP it occurs when the email is 
downloaded. So, with gmail, in gmail, you may see 3 unread messages,
and 
the same 3 should be unread on the server (allowing for timing 
differences),. If you read an email on the server, that should show up 
eventually in the client. In POP the email on the server should reflect

the download, and should be market read on the server, but I have seen 
many cases where it was not.

I have not read the POP spec in a number of years so my info may be
rusty.



RE: Thunderbird problem

2013-07-12 Thread Cook, Larry
Hi Don,

On 07/10/2013 06:10 PM, Donald Leslie wrote:
 Recently thunderbird started re-reading my entire Comcast mail inbox 
 when looking for new mail. It does not happen with gmail. Comcast is a 
 pop server and gmail is imap. How does thunderbird know that mail is 
 new and need to be read? Any suggestions?

Thunderbird uses this file to track your downloaded POP3 messages:

http://kb.mozillazine.org/Popstate.dat

There have been some bugs in the past that have left the popstate.dat file 
empty or corrupted.  A full disk led to me hitting the problem.  You could 
search https://bugzilla.mozilla.org/ to see which bugs have been fixed and which 
ones are still open.  You may want to update Thunderbird to the latest release 
also.
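
If it helps, popstate.dat lives inside the account's folder in your 
Thunderbird profile; a quick way to find it (assuming the usual Linux 
profile location):

     find ~/.thunderbird -name popstate.dat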

Larry



Re: TCL problem. Can someone help?

2010-11-05 Thread Bruce Dawson


On 11/04/2010 11:09 PM, Steven W. Orr wrote:
 On 11/4/2010 11:41 AM, David Rysdam wrote:
 An agent or agents purporting to be Steven W. Orr said:
 I have a stupid question in tcl that I'm just not getting. I'm hoping to get
 lucky here.

 I have a script in tcl/expect that spawns su and needs to pass its arguments
 to su.

 argv in tcl has the command line args. I lop off the first couple of args 
 that
 I need in my script via:

 set user [lindex $argv 0]
 set passwd [lindex $argv 1]

 Then I want to pass the *rest* of the args to su. What I have is this:

 spawn su - $user -c [lrange $argv 2 end]

 If I call me script

 sss me secret 'pwd; ls'

 Then what happens is this:

 spawn su - swagent -c {pwd; ls;}
 Password:
 -bash: -c: line 0: syntax error near unexpected token `}'
 -bash: -c: line 0: `{pwd; ls;}'

 I vershtumped about where the braces are coming from. found out that if I 
 pass
 a single command without any semicolon , it works ok.
 The braces are there because the result of the lrange is a list.  The
 way we handle this in our code (which may not be The Right Way) is by
 doing something this:

 set args [lrange $argv 2 end]
 eval {spawn su - $user -c} $args

 (I don't know if you need the quotes for either spawn or su, so those
 might be extraneous or need some special quoting or something.)

 Newer versions for Tcl (and therefore Expect?) also have an expand
 operator that probably does this better, but I don't know if you are
 using that.
 I'm impressed. Thanks

Note that the 'join' operator removes the top level of braces. This may
not be the right thing to do in all circumstances, but it frequently
helps when you need to get rid of extraneous braces.

--Bruce


Re: TCL problem. Can someone help?

2010-11-05 Thread David Rysdam
An agent or agents purporting to be Bruce Dawson said:
 
 On 11/04/2010 11:09 PM, Steven W. Orr wrote:
  On 11/4/2010 11:41 AM, David Rysdam wrote:
  An agent or agents purporting to be Steven W. Orr said:
  I have a stupid question in tcl that I'm just not getting. I'm hoping to 
  get
  lucky here.
 
  I have a script in tcl/expect that spawns su and needs to pass its 
  arguments
  to su.
 
  argv in tcl has the command line args. I lop off the first couple of args 
  that
  I need in my script via:
 
  set user [lindex $argv 0]
  set passwd [lindex $argv 1]
 
  Then I want to pass the *rest* of the args to su. What I have is this:
 
  spawn su - $user -c [lrange $argv 2 end]
 
  If I call me script
 
  sss me secret 'pwd; ls'
 
  Then what happens is this:
 
  spawn su - swagent -c {pwd; ls;}
  Password:
  -bash: -c: line 0: syntax error near unexpected token `}'
  -bash: -c: line 0: `{pwd; ls;}'
 
  I vershtumped about where the braces are coming from. found out that if I 
  pass
  a single command without any semicolon , it works ok.
  The braces are there because the result of the lrange is a list.  The
  way we handle this in our code (which may not be The Right Way) is by
  doing something this:
 
  set args [lrange $argv 2 end]
  eval {spawn su - $user -c} $args
 
  (I don't know if you need the quotes for either spawn or su, so those
  might be extraneous or need some special quoting or something.)
 
  Newer versions for Tcl (and therefore Expect?) also have an expand
  operator that probably does this better, but I don't know if you are
  using that.
  I'm impressed. Thanks
 
 Note that the 'join' operator removes the top level of braces. This may
 not be the right thing to do in all circumstances, but it frequently
 helps when you need to get rid of extraneous braces.

join turns a list into a string and you are right that that might be
the right thing to do for either su or spawn, depending on what it is
expecting (see what I did there?!).  Sometimes a command (internal or
external) wants a single string with embedded spaces (i.e. from join)
and sometimes it wants multiple items (i.e. the result of expanding
the list).

It's a subtle Tcl thing that's bitten me a million times, which is why
I recognized it immediately.


Re: TCL problem. Can someone help?

2010-11-05 Thread Tyson Sawyer
On Fri, Nov 5, 2010 at 7:16 AM, David Rysdam da...@rysdam.org wrote:
 It's a subtle Tcl thing that's bitten me a million times, which is why
 I recognized it immediately.

At one time I was very good at Tcl quoting and expanding/evaluating.
 ...but then found that Python was a better solution. ;-)

Cheers!
Ty

-- 
Tyson D Sawyer

A strong conviction that something must be done is the parent
of many bad measures.   - Daniel Webster



TCL problem. Can someone help?

2010-11-04 Thread Steven W. Orr
I have a stupid question in tcl that I'm just not getting. I'm hoping to get 
lucky here.

I have a script in tcl/expect that spawns su and needs to pass its arguments 
to su.

argv in tcl has the command line args. I lop off the first couple of args that 
I need in my script via:

set user [lindex $argv 0]
set passwd [lindex $argv 1]

Then I want to pass the *rest* of the args to su. What I have is this:

spawn su - $user -c [lrange $argv 2 end]

If I call my script

sss me secret 'pwd; ls'

Then what happens is this:

spawn su - swagent -c {pwd; ls;}
Password:
-bash: -c: line 0: syntax error near unexpected token `}'
-bash: -c: line 0: `{pwd; ls;}'

I'm vershtumped about where the braces are coming from. I found out that if I pass 
a single command without any semicolons, it works OK.

I also found that if I pass multiple commands, it works if the variable 
that contains the command list has a preceding space, e.g.

foo=' pwd;ls;'
sss me secret $foo

Is there a proper way to write the tcl script so that it passes through just 
what I send it?

TIA

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net


Re: TCL problem. Can someone help?

2010-11-04 Thread David Rysdam
An agent or agents purporting to be Steven W. Orr said:
 I have a stupid question in tcl that I'm just not getting. I'm hoping to get 
 lucky here.
 
 I have a script in tcl/expect that spawns su and needs to pass its arguments 
 to su.
 
 argv in tcl has the command line args. I lop off the first couple of args 
 that 
 I need in my script via:
 
 set user [lindex $argv 0]
 set passwd [lindex $argv 1]
 
 Then I want to pass the *rest* of the args to su. What I have is this:
 
 spawn su - $user -c [lrange $argv 2 end]
 
 If I call me script
 
 sss me secret 'pwd; ls'
 
 Then what happens is this:
 
 spawn su - swagent -c {pwd; ls;}
 Password:
 -bash: -c: line 0: syntax error near unexpected token `}'
 -bash: -c: line 0: `{pwd; ls;}'
 
 I vershtumped about where the braces are coming from. found out that if I 
 pass 
 a single command without any semicolon , it works ok.

The braces are there because the result of the lrange is a list.  The
way we handle this in our code (which may not be The Right Way) is by
doing something like this:

set args [lrange $argv 2 end]
eval {spawn su - $user -c} $args

(I don't know if you need the quotes for either spawn or su, so those
might be extraneous or need some special quoting or something.)

Newer versions of Tcl (and therefore Expect?) also have an expand
operator that probably does this better, but I don't know if you are
using that.
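
For completeness, the two shapes that usually come up here, as a sketch 
using the same variable names (the {*} form needs Tcl 8.5 or later):

     set user   [lindex $argv 0]
     set passwd [lindex $argv 1]
     set rest   [lrange $argv 2 end]

     # join collapses the list into one string, so su -c gets a single command
     spawn su - $user -c [join $rest]

     # {*} expands the list into separate arguments instead
     spawn su - $user -c {*}$rest

Which one you want depends on whether the command being spawned expects one 
string or separate arguments.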


Re: TCL problem. Can someone help?

2010-11-04 Thread Steven W. Orr
On 11/4/2010 11:41 AM, David Rysdam wrote:
 An agent or agents purporting to be Steven W. Orr said:
 I have a stupid question in tcl that I'm just not getting. I'm hoping to get
 lucky here.

 I have a script in tcl/expect that spawns su and needs to pass its arguments
 to su.

 argv in tcl has the command line args. I lop off the first couple of args 
 that
 I need in my script via:

 set user [lindex $argv 0]
 set passwd [lindex $argv 1]

 Then I want to pass the *rest* of the args to su. What I have is this:

 spawn su - $user -c [lrange $argv 2 end]

 If I call me script

 sss me secret 'pwd; ls'

 Then what happens is this:

 spawn su - swagent -c {pwd; ls;}
 Password:
 -bash: -c: line 0: syntax error near unexpected token `}'
 -bash: -c: line 0: `{pwd; ls;}'

 I vershtumped about where the braces are coming from. found out that if I 
 pass
 a single command without any semicolon , it works ok.

 The braces are there because the result of the lrange is a list.  The
 way we handle this in our code (which may not be The Right Way) is by
 doing something this:

 set args [lrange $argv 2 end]
 eval {spawn su - $user -c} $args

 (I don't know if you need the quotes for either spawn or su, so those
 might be extraneous or need some special quoting or something.)

 Newer versions for Tcl (and therefore Expect?) also have an expand
 operator that probably does this better, but I don't know if you are
 using that.

I'm impressed. Thanks

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net


Re: UPS electrical problem

2010-07-08 Thread Greg Rundlett (freephile)
The battery actually had a bulge in the case to the point where it cracked
-- although it didn't appear to leak anywhere.  Must have dried out.  New
battery through the intertubes for $16 with $12 shipping, versus $40 at Staples.

Greg Rundlett


Re: UPS electrical problem

2010-07-01 Thread Steven W. Orr

On 06/30/10 09:09, quoth Tom Buskey:
 
 
 There's one called apcups that I've used also.  You can run a server on
 one system with clients that will poll the server for status and
 shutdown.  This is useful with several systems plugged into one UPS or
 other UPS that don't have monitoring. 
 
 

Here's mine. I get about 3.5 hours in a blackout, and the apcupsd process
works very well.

http://steveo.syslang.net/cgi-bin2/multimon.cgi

Running a UPS without monitoring software is sort of pointless, no matter how
long it runs. As far as whether the batteries are any good, that's the easy part:
pull the plug and see how long it lasts.

--
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net


Re: UPS electrical problem

2010-07-01 Thread Benjamin Scott
On Thu, Jul 1, 2010 at 6:22 PM, Steven W. Orr ste...@syslang.net wrote:
 Pull the plug and see how long it lasts

  It's best not to literally pull the plug out of the receptacle.
Doing so disconnects the earth ground.  That can cause problems.  Some
signal links can be perturbed by loss of ground, or change in
potential.  And if you've got a signal cable connecting you to
equipment still grounded, you can get current following *that* path to
ground and causing all manner of weird behavior, possibly even damage.

  Flipping a switch on a power strip, or a circuit breaker, will just
disconnect the hot wire, leaving neutral and ground connected.

-- Ben


Re: UPS electrical problem

2010-07-01 Thread Greg Rundlett (freephile)
Thanks for all the responses.  I'm going to both check the status with
apcupsd and likely replace the battery.

Greg Rundlett


Re: UPS electrical problem

2010-06-30 Thread Jerry Feldman
On 06/29/2010 09:33 PM, Greg Rundlett (freephile) wrote:
 I have my Linux Media Center, a Dell Studio Hybrid PC, 2 1TB Fantom
 external drives and a 25 lcd monitor plugged into an APC Backups Pro
 500 UPS on the battery+surge protection side.

 Within the past week, when the household thermostat kicks on or off
 the central A/C system, the PC shuts off instantaneously.  Other
 electronics do not seem to be affected, although they probably are.
  It's just harder to confirm since a flicker in the drive or monitor
 power would be hard to observe.   I assume that a sag or surge gets
 through the UPS and affects my PC, but don't understand how that can
 happen when I thought that's what the UPS was supposed to prevent.
  And, short of replacing the UPS, I'm not sure what to do about it.
  Is there something I can do to prevent this sag/surge event -- since
 it's likely to affect something else?

On the BLU list we were discussing online backups, and Jack Coats sent
me some email regarding both surge suppressors and UPS systems. Not only
can the battery go bad, but the UPS itself can be toast. This is also
true for surge suppressors.

-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846






Re: UPS electrical problem

2010-06-30 Thread Benjamin Scott
On Tue, Jun 29, 2010 at 9:33 PM, Greg Rundlett (freephile)
g...@freephile.com wrote:
 Within the past week, when the household thermostat kicks on or off the
 central A/C system, the PC shuts off instantaneously.

  What happens if the electrical power supply to the UPS is
disconnected?  (For example, flip the circuit breaker for the outlet.
Don't test a UPS by unplugging it, that cuts the ground which can
cause other problems.)

  Some possible causes, in roughly decreasing order of likelihood:

  As others have suggested, the battery is the most likely culprit.
UPS batteries are generally only good for a few years at best.  Heavy
battery usage can shorten that further.  For example, if your UPS has
been kicking on and off battery every time your air conditioner starts
or stops...

  It could be a UPS overload.  Overloads usually trigger an alarm or
trip a UPS's built-in circuit breaker, but not always.  I've seen
overloads manifest as the UPS simply dropping the load.  Note that a worn-out
battery and an overload can have similar symptoms -- dropping a
regular load, but a tiny load seems okay.  (This is one reason I
suggest a clean disconnect test: If the UPS is fine on a clean
disconnect, then it's not an overload.)

  As others have said, it may be a question of the low-voltage
threshold being set wrong, or simply not being sensitive enough in the
design.  Air conditioner causes a voltage sag, UPS thinks everything
is okay, but PC disagrees.  (Another reason for the clean disconnect
test.  The UPS can't argue about zero volts.)

  It's possible for the UPS's power or control electronics to fail.
Like anything else.

  Another possibility is the UPS is allowing power transients through
which your load doesn't like.  Many UPSes are just a glorified surge
suppressor when they're not on battery.  When they sense trouble, they
switch over.  That switch takes time.  That's long enough to let some
kinds of power transients through, which is enough to piss off some
loads.

  Better UPSes include some kind of power
conditioning/filtering/regulator/magic to address switching delays and
line transients.  APC calls their magic "line interactive", which I've
never seen a convincing technical explanation of.

  Also, cheaper UPSes will not output a true AC sine wave -- they'll
use a square or stepped approximation.  Some loads *really* hate that
-- AC motors, for example.  Computers are almost always okay, though.

  The best UPSes always run off the battery+inverter, so there's no AC
switching or AC transient possible.  Names for this include
continuous or double conversion or on-line.  They are expensive,
both up-front and over time.

-- Ben


Re: UPS electrical problem

2010-06-30 Thread Tom Buskey
On Tue, Jun 29, 2010 at 10:01 PM, Ben Eisenbraun b...@klatsch.org wrote:


 Last I looked, APC had Windows/Mac clients for checking/changing their
 settings, and I think there are some 3rd party linux/UNIX tools that will
 allow you to do it as well.  Network UPS Tools (NUT) is one I have used in
 the past.

 -ben


There's one called apcups that I've used also.  You can run a server on one
system with clients that will poll the server for status and shutdown.  This
is useful with several systems plugged into one UPS or other UPS that don't
have monitoring.
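
For reference, a rough sketch of the network setup as I remember the 
apcupsd directives (host name is a placeholder):

     # on the box cabled to the UPS, in /etc/apcupsd/apcupsd.conf:
     NETSERVER on
     NISIP 0.0.0.0
     NISPORT 3551

     # on each additional machine hanging off the same UPS:
     UPSCABLE ether
     UPSTYPE net
     DEVICE ups-host:3551

     # then on any of them:
     apcaccess status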


UPS electrical problem

2010-06-29 Thread Greg Rundlett (freephile)
I have my Linux Media Center, a Dell Studio Hybrid PC, 2 1TB Fantom
external drives and a 25" LCD monitor plugged into an APC Backups Pro 500
UPS on the battery+surge protection side.

Within the past week, when the household thermostat kicks on or off the
central A/C system, the PC shuts off instantaneously.  Other electronics do
not seem to be affected, although they probably are.  It's just harder to
confirm since a flicker in the drive or monitor power would be hard to
observe.   I assume that a sag or surge gets through the UPS and affects my
PC, but don't understand how that can happen when I thought that's what the
UPS was supposed to prevent.  And, short of replacing the UPS, I'm not sure
what to do about it.  Is there something I can do to prevent this sag/surge
event -- since it's likely to affect something else?

Thanks,

Greg Rundlett


Re: UPS electrical problem

2010-06-29 Thread Dan Jenkins
  On 6/29/2010 9:33 PM, Greg Rundlett (freephile) wrote:
 I have my Linux Media Center, a Dell Studio Hybrid PC, 2 1TB Fantom
 external drives and a 25 lcd monitor plugged into an APC Backups Pro 500
 UPS on the battery+surge protection side.

 Within the past week, when the household thermostat kicks on or off the
 central A/C system, the PC shuts off instantaneously.  Other electronics do
 not seem to be affected, although they probably are.  It's just harder to
 confirm since a flicker in the drive or monitor power would be hard to
 observe.   I assume that a sag or surge gets through the UPS and affects my
 PC, but don't understand how that can happen when I thought that's what the
 UPS was supposed to prevent.  And, short of replacing the UPS, I'm not sure
 what to do about it.  Is there something I can do to prevent this sag/surge
 event -- since it's likely to affect something else?
In my experience, that is usually a sign that the UPS battery is shot. 
On any power twitch, the UPS switches over to the battery, which has no 
capacity, and shuts everything off. How old is the UPS' battery? The 
Backups Pro 500 is a fairly old model, as in Windows 98 era, if I recollect.

-- 
Dan Jenkins, Rastech Inc., Bedford, NH, USA, 1-603-206-9951
*** Technical Support Excellence for four decades.



Re: UPS electrical problem

2010-06-29 Thread Ben Eisenbraun
On Tue, Jun 29, 2010 at 09:54:52PM -0400, Dan Jenkins wrote:
   On 6/29/2010 9:33 PM, Greg Rundlett (freephile) wrote:
  Within the past week, when the household thermostat kicks on or off the
  central A/C system, the PC shuts off instantaneously.
 
 In my experience, that is usually a sign that the UPS battery is shot. 
 On any power twitch, the UPS switches over to the battery, which has no 
 capacity, and shuts everything off. How old is the UPS' battery? The 
 Backups Pro 500 is a fairly old model, as in Windows 98 era, if I recollect.

That would be my guess as well, but the other thing to check is that most
of the APC models have user-settable voltage cutover points for over/under
current events.  It's possible that if the under-voltage setting is too
low, then the UPS battery might still be good, and it's just that it's not
cutting over soon enough and the PC power supply can't survive the dip,
which causes the machine to reboot.

Last I looked, APC had Windows/Mac clients for checking/changing their
settings, and I think there are some 3rd party linux/UNIX tools that will
allow you to do it as well.  Network UPS Tools (NUT) is one I have used in
the past.

-ben

--
when i read about the evils of drinking, i gave up reading.
   henry youngman


Re: UPS electrical problem

2010-06-29 Thread Dan Jenkins
  On 6/29/2010 10:01 PM, Ben Eisenbraun wrote:
 On Tue, Jun 29, 2010 at 09:54:52PM -0400, Dan Jenkins wrote:
On 6/29/2010 9:33 PM, Greg Rundlett (freephile) wrote:
 Within the past week, when the household thermostat kicks on or off the
 central A/C system, the PC shuts off instantaneously.
 In my experience, that is usually a sign that the UPS battery is shot.
 On any power twitch, the UPS switches over to the battery, which has no
 capacity, and shuts everything off. How old is the UPS' battery? The
 Backups Pro 500 is a fairly old model, as in Windows 98 era, if I recollect.
 That would be my guess as well, but the other thing to check is that most
 of the APC models have user-settable voltage cutover points for over/under
 current events.  It's possible that if the under-voltage setting is too
 low, then the UPS battery might still be good, and it's just that it's not
 cutting over soon enough and the PC power supply can't survive the dip,
 which causes the machine to reboot.

 Last I looked, APC had Windows/Mac clients for checking/changing their
 settings, and I think there are some 3rd party linux/UNIX tools that will
 allow you to do it as well.  Network UPS Tools (NUT) is one I have used in
 the past.
On some of the older UPS models, there were dip switches on the back for 
the cutover.
That is a good point, if there is enough load and the PC power supply 
can't handle any dropout.

I have an old IBM server which will survive the lights going off and on 
(it doesn't have a UPS).
Some of my newer units can't handle even a flicker of the power. Cheaper 
power supplies.

-- 
Dan Jenkins, Rastech Inc., Bedford, NH, USA, 1-603-206-9951
*** Technical Support Excellence for four decades.



YAC linking Problem

2010-05-10 Thread bruce . labitt
Fellow list members, I've got a linux linking problem, which has me 
stumped.  Since I've been coding mostly in python, lately, my 'C' brain 
has atrophied...

I've got a C (umm actually C++) program that won't link to some ATLAS 
libraries which I recently compiled.  The program itself will compile, 
link and run if I link to the baseline (non-optimized) ATLAS libraries.  I 
think it even gives the expected result, as an added bonus!

However, if I use /usr/local/atlas/include/cblas.h instead of the system 
/usr/include/cblas.h in the code snippet below,

extern "C" {
#include "/usr/local/atlas/include/cblas.h"
}

the build reports:
$ ./build_cblas_test
Building cblas1
/tmp/ccSMmtzW.o: In function `main':
cbas_tb.cpp:(.text+0x899): undefined reference to `cblas_zgemm'
collect2: ld returned 1 exit status
Build complete

My build file -- not a make file yet because it is just a one liner, is:

g++ -O3 -m64 -I/usr/local/atlas/include -L/usr/local/atlas/lib -lcblas -lm 
-Wall -Wcast-qual -o cblas1 cblas_tb.cpp

I've tried copying the include file and lib file and stuffing them in the 
same directory as the cpp file, but to no avail.  (Yes I modified the 
command above.)

Before anyone asks, yes, there are files at the locations.  The new 
cblas.h file looks very close to the old, although I have not run a diff 
on them.  ( cblas_zgemm is in the new header, I looked.)  The library is 
static.  Is there a tool to look inside an .a file? 

I figure the problem is probably an operator error, sigh, but I'm not sure 
what it is.  If someone could point me in the right direction, I'd 
appreciate it.  Assume nothing - it is probably basic.  :(

I can post the code if people think it would help.  (Just didn't want to 
make this email any longer than necessary.)

-Bruce





Re: YAC linking Problem

2010-05-10 Thread Michael ODonnell


Try running your compile command with -v so it announces what it's
doing, and then use readelf and grep to verify that the symbol in
question is defined/resolved in the objects you expect.
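
For example (paths assume the ATLAS install prefix from the original post):

     nm /usr/local/atlas/lib/libcblas.a | grep -i zgemm         # list symbols in the static archive
     readelf -s /usr/local/atlas/lib/libcblas.a | grep -i zgemm
     ar t /usr/local/atlas/lib/libcblas.a                       # list the object files inside the .a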



Re: YAC linking Problem

2010-05-10 Thread bruce . labitt
gnhlug-discuss-boun...@mail.gnhlug.org wrote on 05/10/2010 10:21:50 AM:

 Fellow list members, I've got a linux linking problem, which has me 
 stumped.  Since I've been coding mostly in python, lately, my 'C' brain 
 has atrophied...
 
 I've got a C (umm actually C++) program that won't link to some ATLAS 
 libraries which I recently compiled.  The program itself will compile, 
 link and run if I link to the baseline (non-optimized) ATLAS libraries. 
I 
 think it even gives the expected result, as an added bonus!
 
 However, if I use /usr/local/atlas/include/cblas.h instead of the 
system 
 /usr/include/cblas.h in the code snippet below,
 
 extern C{
 #include /usr/local/atlas/include/cblas.h
 }
 
 the build reports:
 $ ./build_cblas_test
 Building cblas1
 /tmp/ccSMmtzW.o: In function `main':
 cbas_tb.cpp:(.text+0x899): undefined reference to `cblas_zgemm'
 collect2: ld returned 1 exit status
 Build complete
 
 My build file -- not a make file yet because it is just a one liner, 
is:
 
 g++ -O3 -m64 -I/usr/local/atlas/include -L/usr/local/atlas/lib -lcblas 
-lm 
 -Wall -Wcast-qual -o cblas1 cblas_tb.cpp
 
 I've tried copying the include file and lib file and stuffing them in 
the 
 same directory as the cpp file, but to no avail.  (Yes I modified the 
 command above.)
 
 Before anyone asks, yes, there are files at the locations.  The new 
 cblas.h file looks very close to the old, although I have not run a diff 

 on them.  ( cblas_zgemm is in the new header, I looked.)  The library is 

 static.  Is there a tool to look inside an .a file? 
 
 I figure the problem is probably an operator error, sigh, but I'm not 
sure 
 what it is.  If someone could point me in the right direction, I'd 
 appreciate it.  Assume nothing - it is probably basic.  :(
 
 I can post the code if people think it would help.  (Just didn't want to 

 make this email any longer than necessary.)
 

For the sake of completeness: here is the source:
//  start of source file 

/*
** cblas_tb.cpp -- an initial stab at implementing a matrix matrix multiply 
using the blas library.  This is a precursor file for LAPACK

Origin Date: 5 May 2010

*/

#include <stdio.h>
#include <complex>


extern "C" { 
#include "/usr/local/atlas/include/cblas.h"
}

using namespace std;
using std::complex;
typedef complex<double> dcomp;  /* Define complex double data type */

int main(void)
{
    int i, j, M, N;
    M = 5; N = 5;
    dcomp A[M][N];
    // init A to eye(5), sort of...
    printf("Initial value of A\n");
    for (i=0; i<M; i++)
    {
        for (j=0; j<N; j++)
        {
            if (i==j)
            {
                A[i][j] = dcomp(1.0, 0.1);
            }
            else
            {
                A[i][j] = dcomp(0.0, 0.0);
            }
            printf("A[%i][%i]= %f, %f\n", i, j, real(A[i][j]), imag(A[i][j]));
        }
    }
    printf("\n");
    dcomp C[M][N]; 

    double NN = 1.0/double(N);
    dcomp alpha = dcomp(NN, 0.0);
    dcomp beta  = dcomp(0.0, 0.0);
 
    int m, k, n;  // matrix dimensions, A = [m x k], B = [k x n]
    m = 5; k = 5; n = 5;
    int ldA, ldB, ldC;
    ldA = 5; ldB = 5; ldC = 5;
    cblas_zgemm(CblasRowMajor, CblasNoTrans, CblasConjTrans, m, n, k, &alpha, 
                A, ldA, A, ldB, &beta, C, ldC);
    printf("Computed value for C\n");
    for (i=0; i<5; i++)
    {
        for (j=0; j<5; j++)
        {
            printf("C[%i][%i]= %f, %f\n", i, j, real(C[i][j]), imag(C[i][j]));
        }
    }
 
    return 0;
}
//  end of source file 

Adding a -v to the compile reveals:

$ ./buildcblas_test
Building cblas1
Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 
4.4.3-4ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs 
--enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr 
--enable-shared --enable-multiarch --enable-linker-build-id 
--with-system-zlib --libexecdir=/usr/lib --without-included-gettext 
--enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 
--program-suffix=-4.4 --enable-nls --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-plugin --enable-objc-gc --disable-werror 
--with-arch-32=i486 --with-tune=generic --enable-checking=release 
--build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) 
COLLECT_GCC_OPTIONS='-O3' '-m64' '-v' '-I/usr/local/atlas/include' 
'-L/usr/local/atlas/lib/' '-Wall' '-Wcast-qual' '-o' 'cblas1' 
'-shared-libgcc' '-mtune=generic'
 /usr/lib/gcc/x86_64-linux-gnu/4.4.3/cc1plus -quiet -v 
-I/usr/local/atlas/include -D_GNU_SOURCE cblas_tb.cpp -D_FORTIFY_SOURCE=2 
-quiet -dumpbase cblas_tb.cpp -m64 -mtune=generic -auxbase cblas_tb -O3 
-Wall -Wcast-qual -version -fstack-protector -o /tmp/ccMNCUj8.s
GNU C++ (Ubuntu 4.4.3-4ubuntu5) version 4.4.3 (x86_64-linux-gnu)
compiled by GNU C version 4.4.3, GMP version 4.3.2, MPFR version 
2.4.2-p1.
GGC heuristics: --param ggc-min-expand=100

Re: YAC linking Problem [SOLVED]

2010-05-10 Thread bruce . labitt
gnhlug-discuss-boun...@mail.gnhlug.org wrote on 05/10/2010 11:15:28 AM:

 gnhlug-discuss-boun...@mail.gnhlug.org wrote on 05/10/2010 10:21:50 AM:
 
  Fellow list members, I've got a linux linking problem, which has me 
  stumped.  Since I've been coding mostly in python, lately, my 'C' 
brain 
  has atrophied...
  
  I've got a C (umm actually C++) program that won't link to some ATLAS 
  libraries which I recently compiled.  The program itself will compile, 

  link and run if I link to the baseline (non-optimized) ATLAS 
libraries. 
 I 
  think it even gives the expected result, as an added bonus!
  
  However, if I use /usr/local/atlas/include/cblas.h instead of the 
 system 
  /usr/include/cblas.h in the code snippet below,
  
  extern C{
  #include /usr/local/atlas/include/cblas.h
  }
  
  the build reports:
  $ ./build_cblas_test
  Building cblas1
  /tmp/ccSMmtzW.o: In function `main':
  cbas_tb.cpp:(.text+0x899): undefined reference to `cblas_zgemm'
  collect2: ld returned 1 exit status
  Build complete
  
  My build file -- not a make file yet because it is just a one liner, 
 is:
  
  g++ -O3 -m64 -I/usr/local/atlas/include -L/usr/local/atlas/lib -lcblas 

 -lm 
  -Wall -Wcast-qual -o cblas1 cblas_tb.cpp
  

[SOLVED] g++ -O3 -m64 -I/usr/local/atlas/include -lm -Wall -Wcast-qual -o 
cblas1 cblas_tb.cpp /usr/local/atlas/libcblas.a 
/usr/local/atlas/libatlas.a
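
That ordering is almost certainly why it works: with static archives the 
linker only pulls in members that satisfy symbols which are already 
undefined, so the libraries have to come after cblas_tb.cpp on the command 
line, and libcblas (which itself calls into libatlas) has to come before 
libatlas.  The equivalent -L/-l form would be roughly this, assuming the 
libraries sit under /usr/local/atlas/lib as in the earlier command:

     g++ -O3 -m64 -I/usr/local/atlas/include -Wall -Wcast-qual -o cblas1 \
         cblas_tb.cpp -L/usr/local/atlas/lib -lcblas -latlas -lm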

Many thanks to the anonymous list member who pointed me in the right 
direction!

Who knew C was such an ugly language?  Discuss :P

-Bruce




MySQL table key corruption problem

2010-04-15 Thread Dan Coutu
I've spent a couple hours trying to track down a solution to this.
Perhaps one of you knows of a solution or at least can point me at some
new information that would help resolve it.

I have an openfire IM server running on RHEL 5. Apparently due to a
MySQL bug I have a problem with a key file related to one of the tables
used by openfire. I could not repair the table and found some references
that indicated that upgrading MySQL could very well fix it. So I
upgraded to 5.1.45 of MySQL but the problem remains. Here's an example
of what I'm seeing.

mysql> analyze table ofPubsubNodeJIDs;
+-------------------------+---------+----------+--------------------------------------------------------------------+
| Table                   | Op      | Msg_type | Msg_text                                                           |
+-------------------------+---------+----------+--------------------------------------------------------------------+
| jabber.ofPubsubNodeJIDs | analyze | Error    | Incorrect key file for table 'ofPubsubNodeJIDs'; try to repair it  |
| jabber.ofPubsubNodeJIDs | analyze | error    | Corrupt                                                            |
+-------------------------+---------+----------+--------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> repair table ofPubsubNodeJIDs extended;
+-------------------------+---------+----------+--------------------------------------------------------------------+
| Table                   | Op      | Msg_type | Msg_text                                                           |
+-------------------------+---------+----------+--------------------------------------------------------------------+
| jabber.ofPubsubNodeJIDs | repair  | Error    | Incorrect key file for table 'ofPubsubNodeJIDs'; try to repair it  |
| jabber.ofPubsubNodeJIDs | repair  | error    | Corrupt                                                            |
+-------------------------+---------+----------+--------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> repair table ofPubsubNodeJIDs quick;
+-------------------------+---------+----------+--------------------------------------------------------------------+
| Table                   | Op      | Msg_type | Msg_text                                                           |
+-------------------------+---------+----------+--------------------------------------------------------------------+
| jabber.ofPubsubNodeJIDs | repair  | Error    | Incorrect key file for table 'ofPubsubNodeJIDs'; try to repair it  |
| jabber.ofPubsubNodeJIDs | repair  | error    | Corrupt                                                            |
+-------------------------+---------+----------+--------------------------------------------------------------------+
2 rows in set (0.00 sec)

As you can see the suggestion to try to repair the table is not really
helpful because the repair operation fails!

I've also found suggestions that indicate that if all else fails then
the use_frm option to the repair command should do the trick. Of course
life is just not that easy.

mysql> repair table ofPubsubNodeJIDs use_frm;
+------------------+--------+----------+------------------------------------------+
| Table            | Op     | Msg_type | Msg_text                                 |
+------------------+--------+----------+------------------------------------------+
| ofPubsubNodeJIDs | repair | error    | Failed repairing incompatible .frm file  |
+------------------+--------+----------+------------------------------------------+
1 row in set (0.00 sec)

Investigation shows that this indicates a need to upgrade the frm file.
So how do you upgrade the frm file? You run repair on the table and it
automatically upgrades the frm file. But wait, the repair fails so the
upgrade doesn't happen. Catch 22.

I would really rather not have to rebuild the entire openfire db from
scratch, adding about 40 user accounts with preserved passwords and so
forth.

Does anyone have ideas how I can fix this without losing data?

Thanks!

Dan


Re: MySQL table key corruption problem

2010-04-15 Thread Lloyd Kvam
On Thu, 2010-04-15 at 10:38 -0400, Dan Coutu wrote:
 I would really rather not have to rebuild the entire openfire db from
 scratch, adding about 40 user accounts with preserved passwords and so
 forth.
 
 Does anyone have ideas how I can fix this without losing data?

mysqldump database [table_one] [table_two] ... > dump.sql

should preserve all of your data.  Feeding the resulting sql
file back into mysql will rebuild the tables.  You can examine
the dump file to make sure that it seems to be complete.  You'll
need to block access to the database to prevent unwanted
transactions while you do the dump and restore.  The restore
could be done as:

mysql database < dump.sql


Could you have failed to run an upgrade script in the past?  I have dim
memories of scripts to upgrade 3 => 4 => 5.  The dump and restore will
not get around changes within the mysql (/var/lib/mysql/mysql) database
where user permissions and accounts are managed.
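
In concrete terms for the database in the error messages above (the -u/-p 
flags are whatever you normally use; I'm assuming the schema is named 
jabber):

     mysqldump -u root -p jabber > jabber-backup.sql    # dump every table in the openfire schema
     mysql -u root -p jabber < jabber-backup.sql        # recreate the tables from the dump

Stop openfire, or otherwise block writes, for the duration.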


-- 
Lloyd Kvam
Venix Corp
DLSLUG/GNHLUG library
http://dlslug.org/library.html
http://www.librarything.com/catalog/dlslug
http://www.librarything.com/rsshtml/recent/dlslug
http://www.librarything.com/rss/recent/dlslug



Re: MythTV - HVR-2250 installation problem

2010-03-31 Thread Bill McGonigle
I don't have experience with that particular card, but in general you 
should be able to do:

   mplayer /dev/video0

or for your 500:

   mplayer /dev/video1

To see if the hardware and module are working.  I have a 500 and it's 
given me no end of headaches for everything besides the 
/dev/video0 part, which works pretty well.


-Bill

-- 
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
Telephone: +1.603.448.4440
Email, IM, VOIP: b...@bfccomputing.com
VCard: http://bfccomputing.com/vcard/bill.vcf
Social networks: bill_mcgonigle/bill.mcgonigle


Re: MythTV - HVR-2250 installation problem

2010-03-31 Thread James R. Van Zandt

Bill McGonigle b...@bfccomputing.com writes:
   I don't have experience with that particular card, but in general you 
   should be able to do:

  mplayer /dev/video0

   or for your 500:

  mplayer /dev/video1

   To see if the hardware and module are working.  I have a 500 and it's 
   given me no end of headaches, but for everything else besides the 
   /dev/video0 part, which works pretty well.

Thanks.  I got the PVR-500 working again, by configuring it as 
Card Type: IVTV MPEG-2 encoder card.

By the way, is there a way to get Mythtv to print out a plain text
version of its configuration?  Screen dumps of the GUI are awkward to
include in email.


My HVR-2250 still scans but doesn't let me see any programs.  But I
have both capture cards connected to the same source which I call
Comcast.  
I'm now thinking I need to:
 - configure two different sources (say, Comcast-analog and
Comcast-QAM), 
 - use different cards to scan them, then
 - use the editor to manually delete all the analog channels from
 Comcast-QAM and 
 - delete all the digital channels from Comcast-analog

Is there a better way?

I've also just discovered that in watch TV mode, the M key will
bring up a menu with a switch source option.  Not sure how to use
it, though.  At first I thought it was choose the source for this
channel number, but it doesn't seem to work that way.  E.g. I'd tune
to some digital channel showing snow, change the source (I see three
options, all labeled Comcast), and get the ion channel.  But I can
tune to a different digital channel, go through the same procedure,
and get the ion channel again.

A simpler question: Comcast is using QAM-256 for the digital channels, right?

 - Jim Van Zandt


Re: MythTV - HVR-2250 installation problem

2010-03-31 Thread Jarod Wilson
On Wed, Mar 31, 2010 at 10:34 PM, James R. Van Zandt j...@comcast.net wrote:

 Bill McGonigle b...@bfccomputing.com writes:
   I don't have experience with that particular card, but in general you
   should be able to do:

      mplayer /dev/video0

   or for your 500:

      mplayer /dev/video1

   To see if the hardware and module are working.  I have a 500 and it's
   given me no end of headaches, but for everything else besides the
   /dev/video0 part, which works pretty well.

 Thanks.  I got the PVR-500 working again, by configuring it as
 Card Type: IVTV MPEG-2 encoder card.

 By the way, is there a way to get Mythtv to print out a plain text
 version of its configuration?  Screen dumps of the GUI are awkward to
 include in email.

Off the top of my head, this is about as good as it gets:

$ mysql -u mythtv -p mythconverg
Password: ***
mysql> select * from settings;
<copious amounts of config triplet spew>
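
A rough non-interactive equivalent, if you want something you can paste 
into email (table names are from the mythconverg schema as I remember it, 
so double-check them):

     mysql -u mythtv -p mythconverg -e 'select value, data, hostname from settings' > myth-settings.txt
     mysqldump -u mythtv -p mythconverg capturecard cardinput videosource channel > myth-tuner-config.sql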

 My HVR-2250 still scans but doesn't let me see any programs.  But I
 have both capture cards connected to the same source which I call
 Comcast.
 I'm now thinking I need to:
  - configure two different sources (say, Comcast-analog and
 Comcast-QAM),
  - use different cards to scan them, then
  - use the editor to manually delete all the analog channels from
  Comcast-QAM and
  - delete all the digital channels from Comcast-analog

 Is there a better way?

Nope. You do indeed need two different lineups, one for the analog
side, one for the digital side. It gets even more fun if you've also
got a set top box in the mix and want to record channels on it that
you don't get on either of the other two routes (I have this very
setup myself).

 I've also just discovered that in watch TV mode, the M key will
 bring up a menu with a switch source option.

iirc, 'y' cycles sources directly as well.

 Not sure how to use
 it, though.  At first I thought it was choose the source for this
 channel number, but it doesn't seem to work that way.

Yeah, it's "choose a video capture source". Naming your cards w/useful
descriptions helps here.

 A simpler question: Comcast is using QAM-256 for the digital channels, right?

Generally, yes.

-- 
Jarod Wilson
ja...@wilsonet.com

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


MythTV - HVR-2250 installation problem

2010-03-28 Thread James R. Van Zandt

I apologize in advance for the long email, but I want to put all the
evidence in one place.

I have bought an Hauppauge HVR-2250 dual tuner card so I could
continue to record the programs Comcast is moving from analog to
digital.  However, I'm having trouble setting it up, and now my old
PVR-500 isn't working either.  The symptom is that when I select
"Watch TV", it displays the "Please wait" message for six
seconds, then returns to the menu.  I can no longer watch or record TV
with myth.

lspci finds my older PVR-500 and the new card:

  02:08.0 Multimedia video controller: Internext Compression Inc iTVC16 
(CX23416) MPEG-2 Encoder (rev 01)
Subsystem: Hauppauge computer works Inc. Device e807
Flags: bus master, medium devsel, latency 64, IRQ 18
Memory at e800 (32-bit, prefetchable) [size=64M]
Capabilities: access denied
Kernel driver in use: ivtv
Kernel modules: ivtv
  
  02:09.0 Multimedia video controller: Internext Compression Inc iTVC16 
(CX23416) MPEG-2 Encoder (rev 01)
Subsystem: Hauppauge computer works Inc. Device e817
Flags: bus master, medium devsel, latency 64, IRQ 19
Memory at e400 (32-bit, prefetchable) [size=64M]
Capabilities: access denied
Kernel driver in use: ivtv
Kernel modules: ivtv
  
  05:00.0 Multimedia controller: Philips Semiconductors Device 7164 (rev 81)
Subsystem: Hauppauge computer works Inc. Device 8851
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at ef40 (64-bit, non-prefetchable) [size=4M]
Memory at ef00 (64-bit, non-prefetchable) [size=4M]
Capabilities: access denied
Kernel driver in use: saa7164
Kernel modules: saa7164

Following directions at

  http://linuxtv.org/wiki/index.php/Hauppauge_WinTV-HVR-2200

I downloaded and installed firmware from

  http://www.steventoth.net/linux/hvr22xx/

and the developmental sa7164 driver using

  hg clone http://kernellabs.com/hg/~stoth/saa7164-dev/

I built and installed the module.  When I loaded it, it looked for a
newer version of the firmware, v4l-saa7164-1.0.3-3.fw, which I
downloaded from

  http://steventoth.net/linux/hvr22xx/firmwares/4038864/

The driver runs and recognizes the new card.  Here are the syslog
messages relevant to the two video cards:

  saa7164 driver loaded
  ACPI: PCI Interrupt Link [APC8] enabled at IRQ 16
alloc irq_desc for 16 on node -1
alloc kstat_irqs on node -1
  saa7164 :05:00.0: PCI INT A - Link[APC8] - GSI 16 (level, low) - IRQ 16
  CORE saa7164[0]: subsystem: 0070:8851, board: Hauppauge WinTV-HVR2250 
[card=7,autodetected]
  saa7164[0]/0: found at :05:00.0, rev: 129, irq: 16, latency: 0, mmio: 
0xef40
  saa7164 :05:00.0: setting latency timer to 64
  Linux video capture interface: v2.00
  saa7164_downloadfirmware() no first image
  saa7164_downloadfirmware() Waiting for firmware upload 
(v4l-saa7164-1.0.3-3.fw)
  saa7164 :05:00.0: firmware: requesting v4l-saa7164-1.0.3-3.fw
  ivtv: Start initialization, version 1.4.1
  ivtv0: Initializing card 0
  ivtv0: Autodetected Hauppauge card (cx23416 based)
  ACPI: PCI Interrupt Link [APC3] enabled at IRQ 18
alloc irq_desc for 18 on node -1
alloc kstat_irqs on node -1
  ivtv :02:08.0: PCI INT A - Link[APC3] - GSI 18 (level, low) - IRQ 18
  ivtv0: Unreasonably low latency timer, setting to 64 (was 32)
  tveeprom 2-0050: Hauppauge model 23552, rev E587, serial# 9865756
  tveeprom 2-0050: tuner model is Samsung TCPN 2121P30A (idx 87, type 70)
  tveeprom 2-0050: TV standards NTSC(M) (eeprom 0x08)
  tveeprom 2-0050: second tuner model is Philips TEA5768HL FM Radio (idx 101, 
type 62)
  tveeprom 2-0050: audio processor is CX25843 (idx 37)
  tveeprom 2-0050: decoder processor is CX25843 (idx 30)
  tveeprom 2-0050: has radio
  ivtv0: Autodetected WinTV PVR 500 (unit #1)
  cx25840 2-0044: cx25843-24 found @ 0x88 (ivtv i2c driver #0)
  tuner 2-0060: chip found @ 0xc0 (ivtv i2c driver #0)
  tea5767 2-0060: type set to Philips TEA5767HN FM Radio
  tuner 2-0061: chip found @ 0xc2 (ivtv i2c driver #0)
  HDA Intel :00:09.0: power state changed by ACPI to D0
  ACPI: PCI Interrupt Link [AAZA] enabled at IRQ 22
  HDA Intel :00:09.0: PCI INT A - Link[AAZA] - GSI 22 (level, low) - IRQ 
22
  HDA Intel :00:09.0: setting latency timer to 64
  wm8775 2-001b: chip found @ 0x36 (ivtv i2c driver #0)
  saa7164_downloadfirmware() firmware read 4038864 bytes.
  saa7164_downloadfirmware() firmware loaded.
  Firmware file header part 1:
   .FirmwareSize = 0x0
   .BSLSize = 0x0
   .Reserved = 0x3da0d
   .Version = 0x3
  saa7164_downloadfirmware() SecBootLoader.FileSize = 4038864
  saa7164_downloadfirmware() FirmwareSize = 0x1fd6
  saa7164_downloadfirmware() BSLSize = 0x0
  saa7164_downloadfirmware() Reserved = 0x0
  saa7164_downloadfirmware() Version = 0x1d1c
  tuner-simple 2-0061: creating new instance
  tuner-simple 2-0061: type set to 70 (Samsung TCPN 

VPN problem...

2009-10-01 Thread Alex Hewitt
I recently was relating on the list how a client was having a problem 
with their Linksys BEFSX41 router and the solution was that Linksys 
RMA'd the router. They apparently have removed the BEFSX41 model from 
their active product list so they sent me a BEFVP41 v2 model. I received 
it yesterday, configured it and tested it from my office network. The 
router was set to obtain its WAN address dynamically from its WAN 
connection. It connected fine to a wireless bridge that I use for this 
purpose and I could surf the web from behind it with a PC. I then 
configured the VPN tunnel exactly as the old router was set up and it 
immediately connected to the customer's end point and I could ping 
systems located at the end point LAN. I tore down the setup and put the 
router in a container to set up at my client's location this morning.

I got to the client site and thought that all that was going to be 
necessary was to set the WAN address of the Linksys router to match the 
static address being provided by Comcast at the customer location. As 
soon as I did that I was able to connect to the internet from behind the 
router. But I then noticed that the VPN was not connected. Since the VPN 
settings were identical to the previous router there shouldn't have been 
a problem. For the fun of it I set the router to obtain its WAN address 
dynamically and immediately the VPN tunnel connected. I checked the logs 
but didn't see anything obviously wrong. I did notice that when the 
router is set up to use a dynamic address, it has the correct date and 
time. When it's set up with a static address the status page says "time 
unavailable". I think this might be part of the problem. If the router 
doesn't know the time (perhaps the clock can't be used?) then the VPN 
connection might not work. I'm also puzzled as to what server it's 
requesting date/time data from. It has the ability to manually set the 
time zone but doesn't give any choices as to which ntp server to use.

Does anyone have any ideas? So far Linksys support hasn't been very useful.

-Alex

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Ben Scott
On Thu, Oct 1, 2009 at 4:59 PM, Alex Hewitt hewitt_t...@comcast.net wrote:
 If the router doesn't know the time .. then the VPN
 connection might not work.

  Quite possible.  If it's using X.509 certificates (like SSL does),
one can specify effective and expiration dates in the certificate.  If
they are set, and the LinkSys box is checking them, having the wrong
time will likely cause it to conclude its certificate is invalid.

  Any idea what protocols the LinkSys is using?  IPsec?  IKE?  SSL/TLS?  X.509?

 Does anyone have any ideas?

  (1) Check for a firmware update.

  (2) Look for a way to set the clock manually (no time server).

  (3) Set up a DHCP reservation on the WAN side for the LinkSys box,
and give an NTP server in the DHCP options, in the hope that time is
actually the problem, and the LinkSys box will listen.

  Beyond that, you're at the mercy of the vendor.  Which leads me to:

  (4) I've never heard anything good about SOHO+VPN scenarios.

Which in turn leads me to:

  (4)(a) Throw out the SOHO crap and buy a real VPN appliance.

  (4)(b) Grab a couple PCs, install Linux and OpenVPN, and use that.

  Again: SOHO stuff has its uses.  I had a LinkSys router+WAP+switch
at home, and was happy with it.  Their products are appropriate for
home use, and I recommend them for that.  If you're running a real
business on them, you're crazy.  :)

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Lloyd Kvam
On Thu, 2009-10-01 at 16:59 -0400, Alex Hewitt wrote:
 For the fun of it I set the router to obtain it's WAN address 
 dynamically and immediately the VPN tunnel connected. I checked the
 logs 
 but didn't see anything obviously wrong. I did notice that when the 
 router is setup to use a dynamic address, it has the correct date and 
 time. When it's set up with a static address the status page says
 time 
 unavailable. I think this might be part of the problem. If the
 router 
 doesn't know the time (perhaps the clock can't be used?) then the VPN 
 connection might not work. I'm also puzzled as to what server it's 
 requesting date/time data from. It has the ability to manually set
 the 
 time zone but doesn't give any choices as to which ntp server to use.
 
 Does anyone have any ideas? So far Linksys support hasn't been very
 useful.

Name servers???  The DHCP server provides good name servers.  You have a
static name server setup that limps along at the client (blocked from
recursive queries??).

-- 
Lloyd Kvam
Venix Corp
DLSLUG/GNHLUG library
http://dlslug.org/library.html
http://www.librarything.com/catalog/dlslug
http://www.librarything.com/rsshtml/recent/dlslug
http://www.librarything.com/rss/recent/dlslug

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Ben Scott
On Thu, Oct 1, 2009 at 5:50 PM, Hewitt_Tech hewitt_t...@comcast.net wrote:
  Any idea what protocols the LinkSys is using?  IPsec?  IKE?  SSL/TLS?
  X.509?

 It's definitely using IKE.

  Okay, IPsec with IKE can use PSK or X.509 certificates.  Which one
is your LinkSys using?

  If it's PSK (pre-shared keys, also called a shared secret), you
have to enter the same password into both devices.  There will be no
clock time element involved.  So that isn't the problem.  (I think.)

  If it's X.509 certificates, you either register the device with a
Certificate Authority, or you exchange peer certificates between each
device.  X.509 allows the time stuff, so that *MAY* be the problem.

  If you want to pursue the certificate+time thing: Does the device
have the option of letting you load your own certificate and key?  If
so, you could use OpenSSL's CA support on a Linux box to generate
certificates for each device, specifying a Not Before date of
1/1/1900 or whatever the device thinks the date is.
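Roughly (untested, and assuming you already have a CA directory laid
out the way the ca(1) man page expects; file names here are made up),
the per-device steps would look something like:

  # key + signing request for the LinkSys
  openssl req -new -newkey rsa:2048 -nodes \
      -keyout linksys.key -out linksys.csr

  # sign it, back-dating the Not Before field (here to 2000-01-01) so
  # the box's bogus clock still lands inside the validity window
  openssl ca -config ./openssl.cnf -startdate 000101000000Z \
      -days 3650 -in linksys.csr -out linksys.crt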

  One word of warning: If you haven't used the OpenSSL CA stuff
already, it is extremely cryptic and very poorly documented.  Even by
Linux standards.  It doesn't help that X.509 is a nightmare, too.  It
will probably be cheaper to just buy a real VPN box than spend the
time and effort in figuring it all out -- especially since we're not
even sure that's the problem.

-- Ben

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Lloyd Kvam
On Thu, 2009-10-01 at 18:50 -0400, Hewitt_Tech wrote:
 Thanks for the help guys. I fixed it by setting up the cable modem as
 I was describing. I changed the Linksys router to get it's WAN address
 dynamically. I then re-configured the cable modem to create a DMZ
 which only has one computer (in this case the router). I changed the
 cable modem's DHCP lease to forever so that the IP address being
 used by the Linksys router wouldn't change. I then noticed that the
 WAN IP address was switched by the cable modem to what had previously
 been the gateway address (which was one off the original WAN IP
 address). 

I've seen DSL modems with 2 modes of behavior:
  * bridge mode where they simply translate DSL/Ethernet bits which
is what you want if you supply the router
  * NAT/router mode where the modem assumes router/firewall duties

I don't know if the cable modems offer similar capabilities.  The DSL
mode was controlled by the phone company and set by tech-support.  I was
helping a friend install a wireless router and was lucky to encounter a
phone company tech support person who knew what she was doing.

Perhaps your cable modem switched from bridge mode to router mode when
the customer router was removed.

 So it's up and running despite the weirdness that the Linksys router
 was displaying.
 
 -Alex
-- 
Lloyd Kvam
Venix Corp
DLSLUG/GNHLUG library
http://dlslug.org/library.html
http://www.librarything.com/catalog/dlslug
http://www.librarything.com/rsshtml/recent/dlslug
http://www.librarything.com/rss/recent/dlslug

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Ben Scott
On Thu, Oct 1, 2009 at 7:48 PM, Lloyd Kvam pyt...@venix.com wrote:
 I've seen DSL modems with 2 modes of behavior:
      * bridge mode ...
      * NAT/router ...

 I don't know if the cable modems offer similar capabilities.

  It's a bit different in cable-modem-land.  DSL is typically running
some kind of PPP feed, so in bridge mode, you need to run your own
PPPoE service.  Cable is presented more like a regular Ethernet
broadcast medium.

  Most cable modems I've seen act like bridges: You're on one big
subnet with all your neighbors.  You can see their broadcast traffic.
You request a DHCP lease, just like you do on a corporate LAN, and get
it from a cable company DHCP server somewhere.  This is what I've seen
Comcast provide in every residential install.

  I have seen cable modems with integrated routers.  Conceptually,
these are the same as other SOHO routers, except the Internet port
is a coaxial F connector instead of an Ethernet jack.  They typically
combine a NAT router, firewall, WAP, Ethernet switch, coffee maker,
etc., just like the more general SOHO gateways do.

  When we subscribed to Comcast's business service with a static IP
address, they gave us something like the later.  It appears to be a
halfheartedly[1] re-badged SMC8014.  Built-in four port Ethernet
switch.  It was configured to do NAT, and assigned IP addresses via
DHCP in the 10.1.10.0/24 subnet.  But the static IP address is also
configured on the Ethernet switch.  In other words, the LAN side of
the integrated router has multiple IP addresses.

  You can manage the LAN side by going to http://10.1.10.1/.
Default username is cusadmin; default password is highspeed.  I
recommend changing the password.  :)

[1] The front panel says Comcast, but the top of the case still has
a giant SMC molded into the plastic.

-- Ben

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Joshua Judson Rosen
Ben Scott dragonh...@gmail.com writes:

 On Thu, Oct 1, 2009 at 5:50 PM, Hewitt_Tech hewitt_t...@comcast.net wrote:
   Any idea what protocols the LinkSys is using?  IPsec?  IKE?  SSL/TLS?
   X.509?
 
  It's definitely using IKE.
 
   Okay, IPsec with IKE can use PSK or X.509 certificates.  Which one
 is your LinkSys using?
[...]
   If you want to persue the certificate+time thing: Does the device
 have the option of letting you load your own certificate and key?  If
 so, you could use OpenSSL's CA support on a Linux box to generate
 certificates for each device, specifying a Not Before date of
 1/1/1900 or whatever the device thinks the date is.
 
   One word of warning: If you haven't used the OpenSSL CA stuff
 already, it is extremely cryptic and very poorly documented.  Even by
 Linux standards.  It doesn't help that X.509 is a nightmare, too.  It
 will probabbly be cheaper to just buy a real VPN box than spend the
 time and effort in figuring it all out -- especially since we're not
 even sure that's the problem.

When I started using x.509 certificates with openVPN, I found that the
OpenSSL CA stuff was sufficiently documented in an easy-to-understand
way--just not in the OpenSSL documentation :)

The *OpenVPN* manpage actually provided (and still does) simple
instructions in the style of `this is the command that you need to run
to generate a CA key and certificate, and this is the commands that
you need to run on each system to generate keys and associated
certificates signed by the CA that you just created'.
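(For anyone who hasn't seen it, the sequence in the OpenVPN docs boils
down to roughly the easy-rsa 2.x scripts below -- the path varies by
distro, and this is from memory rather than a fresh install:)

  cd /usr/share/doc/openvpn/examples/easy-rsa/2.0   # location varies
  . ./vars                   # edit vars first (KEY_COUNTRY, KEY_ORG, ...)
  ./clean-all
  ./build-ca                 # creates keys/ca.crt and keys/ca.key
  ./build-key-server server  # server key/cert, signed by that CA
  ./build-key client1        # one of these per client
  ./build-dh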

When I forget how to use OpenSSL, I still refer to the OpenVPN
documentation.

-- 
Don't be afraid to ask (Lf.((Lx.xx) (Lr.f(rr.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: VPN problem...

2009-10-01 Thread Bill McGonigle
On 10/01/2009 08:06 PM, Ben Scott wrote:
 [1] The front panel says Comcast, but the top of the case still has
 a giant SMC molded into the plastic.

Same model here.  After turning off all of its 'features', it seems to
work well.

The only trick was changing the management interface to run on a private
IP range I wasn't using on my LAN and setting the static route on my
real firewall so I could still get to it.  But with that done I haven't
been able to blame anything on their router (up since May 1).
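(In case anyone wants to copy that: e.g. if the SMC's management side
is moved to 10.250.250.0/24 and the firewall's outside interface is
eth1 -- both values invented here -- the route is just:

  ip route add 10.250.250.0/24 dev eth1

or the equivalent route add -net line on older setups.)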

-Bill

-- 
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
Telephone: +1.603.448.4440
Email, IM, VOIP: b...@bfccomputing.com
VCard: http://bfccomputing.com/vcard/bill.vcf
Social networks: bill_mcgonigle/bill.mcgonigle
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is this a postfix problem? Receiving mail from cellphone

2008-12-01 Thread Thomas Charron
On Sun, Nov 30, 2008 at 9:01 PM, Dan Coutu [EMAIL PROTECTED] wrote:
 This is a RHEL server running postfix.

 Sending email to the server from my cell phone is giving an error and I
 don't understand why. I'm hoping that someone here can shed light on it
 for me.

  Sending email from your phone, or sending SMS messages to an email address?

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is this a postfix problem? Receiving mail from cellphone

2008-12-01 Thread Dan Coutu
Thomas Charron wrote:
 On Sun, Nov 30, 2008 at 9:01 PM, Dan Coutu [EMAIL PROTECTED] wrote:
   
 This is a RHEL server running postfix.

 Sending email to the server from my cell phone is giving an error and I
 don't understand why. I'm hoping that someone here can shed light on it
 for me.
 

   Sending email from your phone, or sending SMS messages to an email address?

   
Sending SMS messages to an email address.

Dan
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Is this a postfix problem? Receiving mail from cellphone

2008-11-30 Thread Dan Coutu
This is a RHEL server running postfix.

Sending email to the server from my cell phone is giving an error and I 
don't understand why. I'm hoping that someone here can shed light on it 
for me.

Here's the mail log entries that show the problem:

Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: connect from 
150.sub-69-78-129.myvzw.com[69.78.129.150]
Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: NOQUEUE: reject: 
RCPT from 150.sub-69-78-129.myvzw.com[69.78.129.150]: 450 4.7.1 
njbrspamp5.vtext.com: Helo command rejected: Host not found; 
from=[EMAIL PROTECTED] to=[EMAIL PROTECTED] proto=ESMTP 
helo=njbrspamp5.vtext.com
Dec  1 01:55:32 ec2-75-101-156-55 postfix/smtpd[31695]: disconnect from 
150.sub-69-78-129.myvzw.com[69.78.129.150]

Thanks for any help or pointers to resolving this.

Dan

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is this a postfix problem? Receiving mail from cellphone

2008-11-30 Thread Dave Johnson
Dan Coutu writes:
 This is a RHEL server running postfix.
 
 Sending email to the server from my cell phone is giving an error and I 
 don't understand why. I'm hoping that someone here can shed light on it 
 for me.
 
 Here's the mail log entries that show the problem:
 
 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: connect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]
 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: NOQUEUE: reject: 
 RCPT from 150.sub-69-78-129.myvzw.com[69.78.129.150]: 450 4.7.1 
 njbrspamp5.vtext.com: Helo command rejected: Host not found; 
 from=[EMAIL PROTECTED] to=[EMAIL PROTECTED] proto=ESMTP 
 helo=njbrspamp5.vtext.com
 Dec  1 01:55:32 ec2-75-101-156-55 postfix/smtpd[31695]: disconnect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]
 
 Thanks for any help or pointers to resolving this.

Your phone (IP 69.78.129.150) is using an SMTP client (or SMTP relay)
that is identifying itself in the HELO SMTP message as
njbrspamp5.vtext.com, which doesn't resolve using DNS.

njbrspamp5.vtext.com is likely an SMTP relay which simply
doesn't have a public DNS entry.

You can (carefully) loosen the HELO restrictions on your mail server
if you want to bypass this check for specific hosts.  Since the
postfix default is to allow everything, you've likely already
modified this. See:

http://www.postfix.org/postconf.5.html#smtpd_helo_restrictions
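
As a rough sketch (untested, directive names from the postconf docs;
the access-map file name is arbitrary), a whitelist-then-check setup
would look something like:

  # main.cf
  smtpd_helo_required = yes
  smtpd_helo_restrictions =
      permit_mynetworks,
      check_helo_access hash:/etc/postfix/helo_access,
      reject_unknown_helo_hostname

  # /etc/postfix/helo_access -- then run: postmap /etc/postfix/helo_access
  njbrspamp5.vtext.com    OK

(On older postfix releases the last restriction is spelled
reject_unknown_hostname.)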

-- 
Dave
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is this a postfix problem? Receiving mail from cellphone

2008-11-30 Thread Bruce Dawson
Dan Coutu wrote:
 This is a RHEL server running postfix.

 Sending email to the server from my cell phone is giving an error and I 
 don't understand why. I'm hoping that someone here can shed light on it 
 for me.

 Here's the mail log entries that show the problem:

 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: connect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]
 Dec  1 01:55:27 ec2-75-101-156-55 postfix/smtpd[31695]: NOQUEUE: reject: 
 RCPT from 150.sub-69-78-129.myvzw.com[69.78.129.150]: 450 4.7.1 
 njbrspamp5.vtext.com: Helo command rejected: Host not found; 
 from=[EMAIL PROTECTED] to=[EMAIL PROTECTED] proto=ESMTP 
 helo=njbrspamp5.vtext.com
   

It appears that njbrspamp5.vtext.com doesn't exist according to:

[EMAIL PROTECTED]:~$ host njbrspamp5.vtext.com
Host njbrspamp5.vtext.com not found: 3(NXDOMAIN)
[EMAIL PROTECTED]:~$ host vtext.com
vtext.com has address 69.78.67.39
vtext.com has address 69.78.128.199
vtext.com mail is handled by 50 smtp-bb.vtext.com.
vtext.com mail is handled by 50 smtp-sl.vtext.com.

 Dec  1 01:55:32 ec2-75-101-156-55 postfix/smtpd[31695]: disconnect from 
 150.sub-69-78-129.myvzw.com[69.78.129.150]

 Thanks for any help or pointers to resolving this.

 Dan
So the problem appears to lie with one of:

    * vtext.com, which doesn't have an entry for njbrspamp5
    * your cell phone settings, which seem to be logging into the SMTP
      server with HELO njbrspamp5.vtext.com
    * Verizon Wireless, which is probably forwarding the
      email from your phone.

--Bruce

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


problem with nfs mount on boot

2008-10-23 Thread Frank DiPrete

I've been seeing a strange problem common to a bunch of servers.

The servers need to NFS-mount storage on boot, but half of the time the 
mount fails with "server not available", even though right afterwards I can 
mount the storage manually.

I took the mount out of fstab and wrote a delay in rc.local (15 secs) 
before mounting and that helped but has still not solved the problem. I 
could loop and wait but I'd rather figure out what the problem is.
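
(If I do end up falling back to a loop, the sketch I have in mind is
roughly this -- untested, and the mount point is made up:)

  # in rc.local: retry the NFS mount for up to a minute, 5s at a time
  # (assumes a noauto fstab entry, or spell out server:/export here)
  tries=0
  until mount /mnt/storage || [ $tries -ge 12 ]; do
      tries=$((tries + 1))
      sleep 5
  done

(The other usual workaround is the nfs "bg" mount option in fstab, which
lets the boot continue and keeps retrying the mount in the background.)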

I am using bonded interfaces - ifenslave-2.6 on debian systems with 
e1000 nics into cisco gig-e switches.

Bonding is configured as failover links (mode 1), using MII for link 
detection, polling every 100 ms. For this test config the NFS server is 
also a Debian (etch) system.

My next guess is the updelay parameter but it's a guess.

Anybody else run into a problem mounting nfs on boot?
Just looking for areas to look more closely,

Thanks



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Problem upgrading memory on Dell Inspiron 5100

2008-08-02 Thread Ben Scott
On Sat, Aug 2, 2008 at 2:52 PM, Greg Rundlett [EMAIL PROTECTED] wrote:
 According to 3rd-party websites, crucial.com
 (http://www.crucial.com/store/partspecs.aspx?imodule=CT12864X335), and
 newegg's own 'comments' section, this memory should be compatible

  Well, careful, now.  You're comparing two different products, saying
they have the same advertised specs, so they must be
interchangeable.  That's not always the case.  For example, I know
with some motherboards, the density of the modules matters.

  Crucial's Memory Finder claims the Inspiron 5100 is limited to 512
MB modules:

http://www.crucial.com/store/listparts.aspx?model=Inspiron+5100+Series

  According to Dell's own product specifications, the Inspiron 5100 is
limited to 1 GB max system memory:

http://support.dell.com/support/edocs/systems/ins5100/en/i5100-om.pdf

  You're looking at a support forum, and finding some information from
people who managed to get it to work.  The problem with that is that
the specs say what you're trying to do *won't work*.  So according to
all the authoritative information, what's happening is exactly what
should happen.  This is the problem when one exceeds the area of the
supported and well-defined.  You pays your money and you takes your
chance.

  I suspect you're SOL.  NewEgg has a very strict return policy.  If
something is defective, they'll get you a new one, but if you bought
the wrong thing, that's your problem.  If I'm at all unsure about a
purchase, I always go through a reseller with a more liberal return
policy.  It usually costs about 10% more, but I'm getting something
for that.  Then again, you paid $16 for a RAM module; at that price,
it's almost disposable.  :-)

 Dell's support site gives no information about the upgrade of the
 BIOS.

  If I go to pull up the BIOS downloads for an Inspiron 5100, I get
offered a .EXE and a text file.  The text file contains revision
history going back to A20, which is labeled Initial release.  No
mention of RAM or memory is made.

  You state you updated from A06, which obviously contradicts the
release notes.  Is it possible you have some other model of laptop?
For example, I know Dell had both an Inspiron 5000 and an Inspiron
5000e; despite the similar names they had very different internals.

 Since I don't have Windows on this machine, I used an ISO found online for the
 BIOS rather than Dell's .exe installer.

  Where did you find this ISO?  From a trusted site, like Dell's Linux
site?  Or some random website?  There's a lot of malware out there;
I'd be careful if I were you.  Linux's much-talked about better
security won't help you if you willingly boot someone's malware on
your machine.

 I could retry installing the BIOS - either by using a Windows XP boot
 CD or actually installing Windows to a partition.

  Windows NT install CDs are only good for installing Windows NT.
(Windows XP is Windows NT version 5.1.)  Unlike most Linux distro
install CDs, they can't run arbitrary programs.

  The Dell .EXE BIOS updater says it can run from an MS-DOS boot
floppy.  If you don't have one, let me know; I'm sure I can find a
legit disk and just give it to you for free.  Not much call for
Windows 95 these days.  (Windows 95 (which is still the classic
Windows product) still booted and ran on top of MS-DOS.  So their
install CDs are useful for running arbitrary MS-DOS programs.)

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-13 Thread Ben Scott
On Fri, Jul 11, 2008 at 11:30 AM, Hewitt_Tech [EMAIL PROTECTED] wrote:
 One of my colleagues ran into a Comcast throttling problem while
 doing an rsync at a different location. He said the rsync ran at full speed
 for about 30 seconds and then basically dropped to about ten percent
 performance after that.

  That sounds like Comcast's Speed Boost feature.  Basically, they
give you a burst of higher bandwidth for 30 seconds when you first
start pulling packets, and then clamp back to nominal rate.  After
some inactivity (5 seconds?), they reset for the next burst.  This
makes web browsing go a lot faster, since web browsing is very bursty
(Click, load, read.  Click, load, read.).  Obviously, it's not useful
for bulk data transfers.  Unlike, say, their TCP-port-blocking or
torrent-throttling activities, Comcast is pretty up-front about this
feature.  They even advertise it heavily on TV.

  If you want to sustain a large file transfer at steady rate, use
something that limits your file transfer to the nominal rate (not the
burst rate).  You'll get a slower initial rate, but it won't plunge
after 30 seconds.  Also, make sure your send rate is limited to the
upload speed of the Comcast feed.  Since the feed is asymmetric, it
can pull data in faster than it can send it out.  That can lead to
congestion, which leads to dropped packets, which leads to TCP
back-off, which causes your transfer rate graph to look like a bandsaw
blade.
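
If the transfer is (or can be) rsync, capping it just below the nominal
upstream rate is a one-liner -- e.g., assuming roughly a 3 Mbit/s uplink
(host and paths invented):

  # --bwlimit is in KBytes/sec; ~300 KB/s stays under a ~3 Mbit/s uplink
  rsync -av --bwlimit=300 /data/studies/ user@nashua-office:/incoming/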

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-12 Thread Alex Hewitt
  I've had suggestions from at least two colleagues that we may be the 
  victims of peer to peer throttling. I'm going back to the Nashua site 
  later today and I'm going to replace a small internal router that used 
  to replace a failed router Monday. I don't believe the internal router 
  has any bearing on the problem because the customer noticed the problem 
  when there was no internal router in place (we bypassed it as a 
  workaround). I'm not sure if there is any kind of tool that can be used 
  to check for throttling. One of my colleagues ran into a Comcast 
  throttling problem while doing an rsync at a different location. He said 
  the rsync ran at full speed for about 30 seconds and then basically 
  dropped to about ten percent performance after that. I need to see if 
  something similar is going on at the Bedford site.
  
  -Alex
  
  P.S. I'll probably put in a call to One Communications today to have 
  them check the connection/routing.
 
Replacing the router at the Bedford site had no effect (as expected). I
called One Communications and talked to an engineer who was more than
willing to help. He said they use an application called hyper-trace. He
was checking the connection at the Bedford end and concluded that there
wasn't anything obviously wrong. He then started checking towards the
Nashua end and said "there's something strange". He then went on to say
that the Comcast end had "too many hops".  So I think we're back
looking at Comcast. He gave me his name and a trouble ticket number and
said he'd be more than happy to assist the Comcast folks should they
need to talk to him. While he was testing we  were chatting and he
mentioned that although his wife really likes Macs he ran an Ubuntu
system. Nice to know that the folks responsible for networks are also
open source enthusiasts. 

-Alex


P.S. Since I also have less than wonderful performance from my Comcast
service to the Comcast service in Nashua I might just call up and
complain as an actual direct customer rather than on behalf of my
clients. I'm also getting pretty irritated about this whole mess and the
amount of time I've wasted troubleshooting it. I think I'm going to call
MV Monday morning and see if they can provision a DSL connection at the
Nashua end. I'm also going to withdraw my Comcast business
recommendations for the 20-30 clients I have that use them...


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-11 Thread Mark Greene
On Thu, Jul 10, 2008 at 7:36 PM, Alex Hewitt [EMAIL PROTECTED]
wrote:

 I have clients with an interesting network problem. One location in
 Bedford New Hampshire using a fractionated T1 has routinely been
 transmitting studies to an office in Nashua New Hampshire. There have
 been no problems with this for at least 18 months. However recently
 (about a week ago), the transmissions suddenly became slow, really slow.
 A transmission that was taking around 10 minutes suddenly jumped to 2-3
 hours. The customer in Bedford New Hampshire is using One
 Communications.


I'd bet money that One Communications is the culprit, and that they are
doing different routing on their network to you vs. to your Nashua client's
office.  They *may* be doing selective throttling based on content ala
Comcast, but this may also be a non-malicious mistaken config problem too.

mark
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-11 Thread Hewitt_Tech
Mark Greene wrote:
 
 
 On Thu, Jul 10, 2008 at 7:36 PM, Alex Hewitt [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 I have clients with an interesting network problem. One location in
 Bedford New Hampshire using a fractionated T1 has routinely been
 transmitting studies to an office in Nashua New Hampshire. There have
 been no problems with this for at least 18 months. However recently
 (about a week ago), the transmissions suddenly became slow, really slow.
 A transmission that was taking around 10 minutes suddenly jumped to 2-3
 hours. The customer in Bedford New Hampshire is using One
 Communications. 
 
 
 I'd bet money that One Communications is the culprit, and that they are 
 doing different routing on their network to you vs. to your Nashua 
 client's office.  They *may* be doing selective throttling based on 
 content ala Comcast, but this may also be a non-malicious mistaken 
 config problem too.
 
 mark
 
 
 
 
 
 
 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

I've had suggestions from at least two colleagues that we may be the 
victims of peer to peer throttling. I'm going back to the Nashua site 
later today and I'm going to replace a small internal router that was used 
to replace a failed router Monday. I don't believe the internal router 
has any bearing on the problem because the customer noticed the problem 
when there was no internal router in place (we bypassed it as a 
workaround). I'm not sure if there is any kind of tool that can be used 
to check for throttling. One of my colleagues ran into a Comcast 
throttling problem while doing an rsync at a different location. He said 
the rsync ran at full speed for about 30 seconds and then basically 
dropped to about ten percent performance after that. I need to see if 
something similar is going on at the Bedford site.

-Alex

P.S. I'll probably put in a call to One Communications today to have 
them check the connection/routing.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-11 Thread Alex Hewitt
On Fri, 2008-07-11 at 11:30 -0400, Hewitt_Tech wrote:
 Mark Greene wrote:
  
  
  On Thu, Jul 10, 2008 at 7:36 PM, Alex Hewitt [EMAIL PROTECTED] 
  mailto:[EMAIL PROTECTED] wrote:
  
  I have clients with an interesting network problem. One location in
  Bedford New Hampshire using a fractionated T1 has routinely been
  transmitting studies to an office in Nashua New Hampshire. There have
  been no problems with this for at least 18 months. However recently
  (about a week ago), the transmissions suddenly became slow, really slow.
  A transmission that was taking around 10 minutes suddenly jumped to 2-3
  hours. The customer in Bedford New Hampshire is using One
  Communications. 
  
  
  I'd bet money that One Communications is the culprit, and that they are 
  doing different routing on their network to you vs. to your Nashua 
  client's office.  They *may* be doing selective throttling based on 
  content ala Comcast, but this may also be a non-malicious mistaken 
  config problem too.
  
  mark
  
  
  
  
  
  
  ___
  gnhlug-discuss mailing list
  gnhlug-discuss@mail.gnhlug.org
  http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
 
 I've had suggestions from at least two colleagues that we may be the 
 victims of peer to peer throttling. I'm going back to the Nashua site 
 later today and I'm going to replace a small internal router that used 
 to replace a failed router Monday. I don't believe the internal router 
 has any bearing on the problem because the customer noticed the problem 
 when there was no internal router in place (we bypassed it as a 
 workaround). I'm not sure if there is any kind of tool that can be used 
 to check for throttling. One of my colleagues ran into a Comcast 
 throttling problem while doing an rsync at a different location. He said 
 the rsync ran at full speed for about 30 seconds and then basically 
 dropped to about ten percent performance after that. I need to see if 
 something similar is going on at the Bedford site.
 
 -Alex
 
 P.S. I'll probably put in a call to One Communications today to have 
 them check the connection/routing.

Actually the site I'm going to replace the router at is the Bedford
site. I want to make sure I've done everything humanly possible to be
100% sure the problem isn't in equipment that I can control.

-Alex

 
 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


How to troubleshoot wide area network performance problem?

2008-07-10 Thread Alex Hewitt
I have clients with an interesting network problem. One location in
Bedford New Hampshire using a fractionated T1 has routinely been
transmitting studies to an office in Nashua New Hampshire. There have
been no problems with this for at least 18 months. However recently
(about a week ago), the transmissions suddenly became slow, really slow.
A transmission that was taking around 10 minutes suddenly jumped to 2-3
hours. The customer in Bedford New Hampshire is using One
Communications. So far I haven't asked them to look at this problem
because I've been trying to clarify it. The office in Nashua has
Comcast business class service with a static IP address. 

Here's where it gets interesting. I had the Bedford client transmit the
data to my system in Manchester New Hampshire. I have Comcast
residential service. The data usually takes about 8 minutes to arrive at
my location. I then send the data to the Nashua office and it typically
takes 25-30 minutes. The payload is a collection of images that are
typically between 65 and 70 MB. 

Today Comcast at the request of the customer sent someone on site to the
Nashua site. The tech did some speed tests using the DSLReports
Speakeasy test suite. He was getting  20 mbs down, 3+ mbs up which is
pretty decent. For the fun of it I had him download a 47 MB antivirus
program. His first try was ridiculous telling him it was going to take 4
+ hours. I had him break the connection and try again and this time the
download took around a minute. 

And it gets more interesting...another client in Salem New Hampshire
needed to send their data to the Nashua site (they use Verizon DSL). It
arrived in about 8 minutes.

So my Comcast connection which is fairly decent is taking a half hour to
send 65-70 MB to the Nashua site. The Salem site is taking 8 minutes for
something approximately the same size and the Bedford site is taking
several hours.

Traceroute doesn't show much interesting - it craps out after the first
5 hops. Pinging (standard payload) from my office to the Nashua site is
averaging less than 20 ms. One odd thing is that when I'm in the process
of sending data to the Nashua site my pings jump up to 650 - 800 ms. 

The Comcast tech was happy to conclude that the Nashua site was working
properly. They checked transmission levels, noise and of course the guy
downloaded some files and ran the Speakeasy speed tests and all of that
looked good.

Any ideas how to proceed on a problem like this? Currently I'm having
the customer transmit their data to me and then I re-transmit because my
connection although slow is probably 4 or 5 times faster than theirs.

-Alex




___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-10 Thread Alex Hewitt
On Thu, 2008-07-10 at 19:36 -0400, Alex Hewitt wrote:
 I have clients with an interesting network problem. One location in
 Bedford New Hampshire using a fractionated T1 has routinely been
 transmitting studies to an office in Nashua New Hampshire. There have
 been no problems with this for at least 18 months. However recently
 (about a week ago), the transmissions suddenly became slow, really slow.
 A transmission that was taking around 10 minutes suddenly jumped to 2-3
 hours. The customer in Bedford New Hampshire is using One
 Communications. So far I haven't asked them to look at this problem
 because I've been trying to clarify it. The office in Nashua has
 Comcast business class service with a static IP address. 
 
 Here's where it gets interesting. I had the Bedford client transmit the
 data to my system in Manchester New Hampshire. I have Comcast
 residential service. The data usually takes about 8 minutes to arrive at
 my location. I then send the data to the Nashua office and it typically
 takes 25-30 minutes. The payload is a collection of images that are
 typically between 65 and 70 MB. 
 
 Today Comcast at the request of the customer sent someone on site to the
 Nashua site. The tech did some speed tests using the DSLReports
 Speakeasy test suite. He was getting  20 mbs down, 3+ mbs up which is
 pretty decent. For the fun of it I had him download a 47 MB antivirus
 program. His first try was ridiculous telling him it was going to take 4
 + hours. I had him break the connection and try again and this time the
 download took around a minute. 
 
 And it gets more interesting...another client in Salem New Hampshire
 needed to send their data to the Nashua site (they use Verizon DSL). It
 arrived in about 8 minutes.
 
 So my Comcast connection which is fairly decent is taking a half hour to
 send 65-70 MB to the Nashua site. The Salem site is taking 8 minutes for
 something approximately the same size and the Bedford site is taking
 several hours.
 
 Traceroute doesn't show much interesting - it craps out after the first
 5 hops. Pinging (standard payload) from my office to the Nashua site is
 averaging less than 20 ms. One odd thing is that when I'm in the process
 of sending data to the Nashua site my pings jump up to 650 - 800 ms. 
 
 The Comcast tech was happy to conclude that the Nashua site was working
 properly. They checked transmission levels, noise and of course the guy
 downloaded some files and ran the Speakeasy speed tests and all of that
 looked good.
 
 Any ideas how to proceed on a problem like this? Currently I'm having
 the customer transmit their data to me and then I re-transmit because my
 connection although slow is probably 4 or 5 times faster than theirs.
 
 -Alex
 
 

A few more bits of information - I replaced the router in the Nashua
office (Netgear FVS 114) with a new identically configured model. The
download performance and speed tests were run with the Netgear router in
place (all good). I disconnected the router from the cable modem and
hooked the Mac that runs the client application directly to the cable
modem. Again all download tests look normal. I replaced the original Mac
with a newer model. The old system was a Mac Mini with 1 GB of Ram and a
G4 CPU. The replacement model was a dual core Intel based Mini with 2 GB
of Ram. The new system is definitely snappier but doesn't affect the
problem at all. 

-Alex

 
 
 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How to troubleshoot wide area network performance problem?

2008-07-10 Thread Bruce Dawson
Alex Hewitt wrote:
 I have clients with an interesting network problem. One location in
 Bedford New Hampshire using a fractionated T1 has routinely been
 transmitting studies to an office in Nashua New Hampshire. There have
 been no problems with this for at least 18 months. However recently
 (about a week ago), the transmissions suddenly became slow, really slow.
 A transmission that was taking around 10 minutes suddenly jumped to 2-3
 hours. The customer in Bedford New Hampshire is using One
 Communications. So far I haven't asked them to look at this problem
 because I've been trying to clarify it. The office in Nashua has
 Comcast business class service with a static IP address. 

 Here's where it gets interesting. I had the Bedford client transmit the
 data to my system in Manchester New Hampshire. I have Comcast
 residential service. The data usually takes about 8 minutes to arrive at
 my location. I then send the data to the Nashua office and it typically
 takes 25-30 minutes. The payload is a collection of images that are
 typically between 65 and 70 MB. 
   

That sounds like a typical asymmetric cable modem connection.
 Today Comcast at the request of the customer sent someone on site to the
 Nashua site. The tech did some speed tests using the DSLReports
 Speakeasy test suite. He was getting  20 mbs down, 3+ mbs up which is
 pretty decent. For the fun of it I had him download a 47 MB antivirus
 program. His first try was ridiculous telling him it was going to take 4
 + hours. I had him break the connection and try again and this time the
 download took around a minute.
   

It's hard to tell if that problem was on the server end or some router
between the local and remote system.
 And it gets more interesting...another client in Salem New Hampshire
 needed to send their data to the Nashua site (they use Verizon DSL). It
 arrived in about 8 minutes.
   
This would imply the Nashua site is OK.
 So my Comcast connection which is fairly decent is taking a half hour to
 send 65-70 MB to the Nashua site. The Salem site is taking 8 minutes for
 something approximately the same size and the Bedford site is taking
 several hours.
   

Before paying for a tech to go to the Bedford site, I would try a
*short* flood ping to the ISP's first advertised router (short = 5
seconds) and see what sort of loss you get. This will tell you if the
problem is in the ISP's on-site equipment (and if so, the tech can
diagnose it). Then try pinging to the first router outside of the ISP's
network. This should tell you if the problem is inside/outside the ISP
network. Armed with this info, you can then call the Bedford ISP and ask
them what's going on.
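
Concretely, something like this from a box behind the Bedford router
(the flood ping needs root; get the hop addresses from a plain
"traceroute -n" to some outside host first):

  # ~5-second flood ping to the ISP's first advertised router (hop 1)
  ping -f -w 5 first.hop.address

  # then the first router outside the ISP's network (hop 2 or 3)
  ping -f -w 5 next.hop.address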
 Traceroute doesn't show much interesting - it craps out after the first
 5 hops. Pinging (standard payload) from my office to the Nashua site is
 averaging less than 20 ms. One odd thing is that when I'm in the process
 of sending data to the Nashua site my pings jump up to 650 - 800 ms. 

 The Comcast tech was happy to conclude that the Nashua site was working
 properly. They checked transmission levels, noise and of course the guy
 downloaded some files and ran the Speakeasy speed tests and all of that
 looked good.

 Any ideas how to proceed on a problem like this? Currently I'm having
 the customer transmit their data to me and then I re-transmit because my
 connection although slow is probably 4 or 5 times faster than theirs.
   
Sounds to me like the Bedford ISP/Carrier needs a clue bat.

--Bruce
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-19 Thread Ben Scott
On Fri, Apr 18, 2008 at 1:54 AM, Bill McGonigle [EMAIL PROTECTED] wrote:
  Yeah?  I've seen benchmarks with postfix spanking sendmail on
  performance, and exim handing it right to postfix.

  I've seen benchmarks that say just about anything.  Even when
they're not designed to produce a certain desired outcome, benchmarks
are very often biased by factors that aren't always immediately
obvious.  For one, they tend to reflect the knowledge base of the
people configuring the software under test.  If Acme Consulting Group
compares Postfix and Sendmail and finds Sendmail performs better,
what's more likely: That Sendmail beats Postfix, or that Acme's staff
simply knows Sendmail better than they know Postfix?

  High-performance benchmarking can also reveal performance
characteristics in the OS or hardware that happen to favor a
particular program.  And maybe the OS wasn't tuned best for this or
that program.  When you're talking high-performance, the whole system
really does matter.

  If the benchmarks you're doing are for your own use, then most of
these concerns don't matter, because of course it's what you're using
and what you know that counts.  But taking someone else's benchmarks
and generalizing them to other situations can be a very misleading
thing to do.

  This is not to defend Sendmail in this respect.  I honestly have no
idea.  Just... Lies, damn lies, and statistics.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-17 Thread Bill McGonigle
On Apr 14, 2008, at 20:18, Paul Lussier wrote:

 Sendmail is still (and probably will be for as long as Eric Allman is
 alive/maintaining it) the work-horse of the internet.  If I need speed
 and throughput, I'd still choose sendmail.  If I need massive
 scalability, I'll choose sendmail.  If I need to deal with wacky and
 bizarre, I'll probably choose sendmail.

Yeah?  I've seen benchmarks with postfix spanking sendmail on  
performance, and exim handing it right to postfix.  The guys I've  
talked to who handle _big_ mail heart exim.  Where sendmail really  
shines is when you need to do something none of the others have  
thought about yet - sendmail lets you control everything with .cf.   
If writing a quick policy daemon for postfix was out of the question,  
that is.

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Sun, 2008-04-13 at 18:23 -0400, Shawn O'Shea wrote:
 
 
 On Sun, Apr 13, 2008 at 8:13 AM, Ben Scott [EMAIL PROTECTED]
 wrote:
 On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane
 [EMAIL PROTECTED] wrote:
   A more helpful suggestion is that you may want to set the
 
   default_destination_recipient_limit
 in /etc/postfix/main.cf  ... to 5.
 
  I don't know much of anything about Postfix, but I'm guessing
 that
 will impact all destination MXes.  The goal here was to just
 limit
 connections to *Yahoo* to 5 recipients per envelope.  The
 above will
 penalize all connections, right?  How would one specify that
 for just
 Yahoo?
 I don't have a ton of Postfix experience, but using this Postfix FAQ
 question ( http://www.postfix.org/faq.html#incoming ) as a template of
 sorts (and reading bits from the O'Reilly postfix book and the postfix
 man pages):
 
 You would create a transport map file, say /etc/postfix/transport. Add
 entries for the domains you want to limit and assign them to a
 transport name, let's say lamedomains
 
 yahoo.com  lamedomains:
 
 You need to then run: postmap /etc/postfix/transport
 
 Then in the postfix main.cf, add lines to tell it about the transport
 and to tell it that anything in that transport has the recipient
 limit.
 transport_maps = hash:/etc/postfix/transport
 lamedomains_destination_recipient_limit = 5
 
 So now you've created a transport, put some domains in it, changed the
 default behavior of postfix for that transport, you just need to tell
 postfix what to do with that transport (aka, deliver it with smtp).
 
 Add a line to master.cf:
 lamedomains  unix  -   -   -   -   -   smtp
  
 Now tell postfix to reload its config: postfix reload

OMG you're my hero. New stuff learned every day.

 
 Again, I haven't tested this, so you may need to read man pages and
 play with that a little, but that should set a postfix user in the
 right direction
 
 -Shawn

Thanks for that little tidbit, that will be very helpful in the future.

I'd like to also point out another feature of Postfix that some of you
might also not be familiar with.

Notice the hash: above in the 
transport_maps = hash:/etc/postfix/transport line. If you compile
Postfix with the -DHAS_MYSQL option, then you can replace this with
mysql: and the filename after the : is the location of a
specially-formatted .cf file that tells postfix to connect to a mysql
table and where to get the information that it wants.

Postfix uses a database-abstraction model for maintaining most of these
mappings in the system. Pretty much any configuration option that
accepts such a parameter can be turned into a MySQL table. This greatly
increases your ability to perform dynamic run-time configuration changes
at will (without restarting postfix).
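
As a rough illustration (untested; table name and credentials invented,
and newer postfix takes a single query parameter where older releases
use the table/select_field/where_field form), the map file looks
something like this:

  # /etc/postfix/mysql-transport.cf
  user     = postfix
  password = secret
  hosts    = localhost
  dbname   = mail
  query    = SELECT transport FROM transport WHERE domain = '%s'

and then in main.cf:

  transport_maps = mysql:/etc/postfix/mysql-transport.cf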

I believe that PostgreSQL support also exists as well, for those of you
who are that way inclined.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Sun, 2008-04-13 at 19:24 -0400, Ben Scott wrote:
 On Sun, Apr 13, 2008 at 2:34 PM, Steven W. Orr [EMAIL PROTECTED] wrote:
  Anyone want to give a presentation on switching from Sendmail to
  Postfix?
 
   Why would you ever want to do that?
 
   Primarily: Cleaner, easier configuration.  I find it costs me more
 to learn a new feature in Sendmail than it appears it would cost me to
 learn the corresponding feature in Postfix.
 
   I've been using Sendmail since I started with *nix, so the
 incremental cost of learning one new feature when I need it has been
 lower than the cost of learning all of Postfix.  But every time I do
 so, I think of all the cost I've been accumulating over the years.  A
 common situation, really.  The field of IT systems administration is
 largely about turning Better the devil you know into a way of life.
 
  Sendmail has more flexibility.
 
   More than I need.  The higher flexibility comes with a corresponding
 cost.  So I'm paying for something I don't need.  Like commuting into
 work by driving an 18-wheeler.
 
 -- Ben

I tend to agree here. Sendmail may be the ultimate mail server software
ever, but you practically need a formal degree in Sendmail to get it to
perform many of the complex operations that many other mailservers can
do in a seemingly more straight-forward manner. 

For instance, Shawn O'Shea just pointed out that you can dynamically
define new transports for postfix, and then address this problem by
setting up a lameservers transport that behaves in the
5-rcpts-per-message manner using configuration options that are much
more lexically understandable.

Maybe sendmail *is* the best option if your primary job is a 24/7 mail
relay operator... but I don't want to have to learn a (sort of) brand
new language for telling my mailserver what to do. I have got better
things to do with my time. I'd take the less features, but easily
configurable mailserver over the mailserver that you could write a .mc
that would compile the mailserver itself if you wanted it to, because
I'd spend less of my time administering my mailserver, and more time on
Paying Job (TM), and hobby projects (FreeBSD, etc...).

-- 
Coleman Kane



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Tom Buskey
On Sun, Apr 13, 2008 at 2:34 PM, Steven W. Orr [EMAIL PROTECTED] wrote:

 On Sunday, Apr 13th 2008 at 08:13 -, quoth Ben Scott:

 =On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane [EMAIL PROTECTED] wrote:
 =  Anyone want to give a presentation on switching from Sendmail to
 =Postfix?  I really need to get around to doing that, one of these
 =decades...

 Why would you ever want to do that? Sendmail has more flexibility.


Security:

Sendmail has a long history of security problems.  In its defense, it's been
beaten to death the last decade and had not had as many security problems.
Also, I think OpenBSD uses sendmail by default.

Postfix & Qmail have been designed from the beginning to be secure. Sendmail
has had it added.  It's very hard to add after the fact.

Multiple layers for security in depth.  Run qmail outside the firewall,
postfix inside and sendmail/exchange on local boxes.

Simplicity:

Simpler configuration syntax.  Fewer tools needed.  Fewer transports
supported (UUCP not needed usually).  Smaller footprint.  Faster operation?

Smaller code/fewer features also mean fewer places for exploits to hide.
Easier to code review.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Ben Scott
On Mon, Apr 14, 2008 at 10:59 AM, Tom Buskey [EMAIL PROTECTED] wrote:
 Sendmail has a long history of security problems.

  I have to point out that the above statement would be equally true
if one wrote Unix instead of Sendmail.  (This is not a snide
remark, although it may qualify as subtle.)

  Separate from the above: From what I know of it, Postfix has a more
modular design than Sendmail.  Such designs usually lend themselves to
task isolation and least-privilege, which is usually good for
security.  It's interesting, but despite Sendmail's more flexible
design, implementation of these concepts came later.  When they did
arrive, though, they were implemented using the same Sendmail
configuration facilities already existent.  I'm not sure that last
part really matters, much, though.  The source code to everything is
readily available.  What difference does it make if one has to write a
new .c file vs a new .cf file?  That might matter on a
slavery-software platform, but surely we all know that story by now.

  It may be worth noting that Postfix was created by Wietse Venema,
the same person who created tcp_wrappers.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Tom Buskey
On Mon, Apr 14, 2008 at 11:34 AM, Ben Scott [EMAIL PROTECTED] wrote:

 On Mon, Apr 14, 2008 at 10:59 AM, Tom Buskey [EMAIL PROTECTED] wrote:
  Sendmail has a long history of security problems.

   I have to point out that the above statement would be equally true
 if one wrote Unix instead of Sendmail.  (This is not a snide
 remark, although it may qualify as subtle.)


I can't disagree with you there.  I used to work at a paranoid security
firm.  Sendmail was written by 1 person  they avoided all code by that
person because of the coding techniques/style lent itself to buffer
overflows.  Unix had many more authors and different coding styles.

  Separate from the above: From what I know if it, Postfix has a more
 modular design than Sendmail.  Such designs usually lend themselves to
 task isolation and least-privilege, which is usually good for
 security.  It's interesting, but despite Sendmail's more flexible


Security was part of the design goal from day one.  Sendmail was created in
a different era.  In fact, the 1st internet worm in 1988 was enabled because
of the root access backdoor written into Sendmail.  That stuff isn't in
Sendmail anymore of course.

design, implemention of these concepts came later.  When they did
 arrive, though, they were implemented using the same Sendmail
 configuration facilities already existent.  I'm not sure that last
 part really matters, much, though.  The source code to everything is
 readily available.  What difference does it make if one has to write a
 new .c file vs a new .cf file?  That might matter on a
 slavery-software platform, but surely we all know that story by now.

  It may be worth noting that Postfix was created by Wietse Venema,
 the same person who created tcp_wrappers.


Qmail was written by DJ Bernstein, also with a security mindset.

I know Qmail hasn't accepted outside code.  I don't think Sendmail has.
Does Postfix? Does Exim? Does any MTA have multiple authors?
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Mon, 2008-04-14 at 11:55 -0400, Tom Buskey wrote:
 
 
 On Mon, Apr 14, 2008 at 11:34 AM, Ben Scott [EMAIL PROTECTED]
 wrote:
 On Mon, Apr 14, 2008 at 10:59 AM, Tom Buskey [EMAIL PROTECTED]
 wrote:
  Sendmail has a long history of security problems.
 
 
  I have to point out that the above statement would be equally
 true
 if one wrote Unix instead of Sendmail.  (This is not a
 snide
 remark, although it may qualify as subtle.)
 
 I can't disagree with you there.  I used to work at a paranoid
 security firm.  Sendmail was written by 1 person  they avoided all
 code by that person because of the coding techniques/style lent itself
 to buffer overflows.  Unix had many more authors and different coding
 styles.  
 
 
   Separate from the above: From what I know if it, Postfix has
 a more
 modular design than Sendmail.  Such designs usually lend
 themselves to
 task isolation and least-privilege, which is usually good for
 security.  It's interesting, but despite Sendmail's more
 flexible
 
 Security was part of the design goal from day one.  Sendmail was
 created in a different era.  In fact, the 1st internet worm in 1988
 was enabled because of the root access backdoor written into Sendmail.
 That stuff isn't in Sendmail anymore of course. 
 
 
 design, implemention of these concepts came later.  When they
 did
 arrive, though, they were implemented using the same Sendmail
 configuration facilities already existent.  I'm not sure that
 last
 part really matters, much, though.  The source code to
 everything is
 readily available.  What difference does it make if one has to
 write a
 new .c file vs a new .cf file?  That might matter on a
 slavery-software platform, but surely we all know that story
 by now.
 
  It may be worth noting that Postfix was created by Wietse
 Venema,
 the same person who created tcp_wrappers.
 
 Qmail was written by DJ Bernstien, also with a security mindset.

Additionally to this, djb has a long-standing (since 1997) reward of
$500 for anybody who can publish a verifiable security crack against
qmail. Since then, nobody has been able to provide this.

 
 I know Qmail hasn't accepted outside code.  I don't think Sendmail
 has.  Does Postfix? Does Exim? Does any MTA have multiple authors?
 

I believe that postfix is still maintained by the original author,
although he does accept patches for review and inclusion. Exim is
maintained by a group at the University of Cambridge (UK), though I
don't know how central the project's structure is regarding the main
author.

I really do have to say that my favorite all-time mailserver has been
qmail. The one thing qmail lacks is many of the more complex and regular
features that are common with systems like Postfix, Exim, and Sendmail,
as well as integration with heavier-weight IMAP back-ends. There is a
large amount of qmail-specific software out there, and I found qmail's
code to be wonderful to hack on when I needed to add extra features
(such as editing qmail-smtpd to do more stuff at the SMTP-end).

I haven't found a mailserver that scales better than qmail either for
handling gigantic amounts of email flow. That said, finding others with
the breadth of knowledge that I have on qmail proves quite difficult.
For our IT clients, we just use Postfix because it is something that
everyone can administer (hooray pragmatism).

At a previous job, I hosted all client mail (for 30k+ domains) through
two machines using one as the mail-store (w/ courier-imap) and one as
the front-end filter/remailer (for email forward accounts). It was
wonderful.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Jon 'maddog' Hall
the 1st internet worm in 1988 was enabled because of the root access
backdoor written into Sendmail.

If I remember correctly, that back door was only available when
Sendmail was run in debug mode.  I seem to remember this because a
friend of mine, Henry no relation Hall, was going to join us for
dinner at the Hacienda Mexican restaurant on Daniel Webster Highway in
Nashua with a bunch of other Digital people.   Then I heard on the car
radio about this worm that was attacking systems.  I gleaned enough
from the radio broadcast (an oddity in itself) to relay this to Henry,
who instead of coming to dinner made sure that the gateway email servers
in Digital were running Sendmail with debug mode turned offkeeping
Digital from being affected.

Unfortunately for most Unix systems, distributing and running Sendmail
in debug mode was much more the practice of the day in 1984.

md
-- 
Jon maddog Hall
Executive Director   Linux International(R)
email: [EMAIL PROTECTED] 80 Amherst St. 
Voice: +1.603.672.4557   Amherst, N.H. 03031-3032 U.S.A.
WWW: http://www.li.org

Board Member: Uniforum Association
Board Member Emeritus: USENIX Association (2000-2006)

(R)Linux is a registered trademark of Linus Torvalds in several
countries.
(R)Linux International is a registered trademark in the USA used
pursuant
   to a license from Linux Mark Institute, authorized licensor of Linus
   Torvalds, owner of the Linux trademark on a worldwide basis
(R)UNIX is a registered trademark of The Open Group in the USA and other
   countries.


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Paul Lussier
Steven W. Orr [EMAIL PROTECTED] writes:

 On Sunday, Apr 13th 2008 at 08:13 -, quoth Ben Scott:

 Why would you ever want to do that? Sendmail has more flexibility. 

This has been answered, however, I just wanted to add my .02 drachma:

Just because something has more flexibility is not necessarilly a
reason for choosing it over something less flexible.  The majority of
vehicles on the road are cars, yet both pickup trucks and
tractor/trailers are more flexible.

With great flexibility comes great complexity.  99.999% of the
flexibility in sendmail is unnecessary for 99.999% of the sites
requiring the use of an MTA.  I don't, nor do most people I know,
need the ability to gateway between the Internet, ARPANet, or UUCP.

Postfix has, as far as I know, a good majority of the flexibility of
sendmail at a fraction of the cost in terms of readability and
maintainability.

Sendmail is still (and probably will be for as long as Eric Allman is
alive/maintaining it) the work-horse of the internet.  If I need speed
and throughput, I'd still choose sendmail.  If I need massive
scalability, I'll choose sendmail.  If I need to deal with wacky and
bizarre, I'll probably choose sendmail.

If I need simplicity, readability, ease of maintenance, and basic
configuration, I'll go with Postfix.

I know them both equally as well (which is to say, neither as well as
I ought to, both far more than I want to), and in general, I prefer
postfix.

There, I think that's about .04 drachma :)
-- 
Seeya,
Paul
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Sendmail question. Problem with yahoo.

2008-04-13 Thread Bill McGonigle
On Apr 10, 2008, at 12:58, Steven W. Orr wrote:

 It seems they now
 have a limit of 5 recipients per envelope.

Are you getting special treatment?  I'm not finding info on this in  
web searches.  I haven't experimented either.

 MAILER_DEFINITIONS
 Mesmtp_mailer_maxmsgs_5,P=[IPC], F=mDFMuXa,
 S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP/HdrFromSMTP, E=\r\n, L=990,
   m=5, T=DNS/RFC822/SMTP,
   A=TCP $h

Thank you, my head just exploded.

On Apr 12, 2008, at 15:24, Coleman Kane wrote:

 default_destination_recipient_limit = 5

OK, stuffed back in now.  Phew.

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-13 Thread Ben Scott
On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane [EMAIL PROTECTED] wrote:
  A more helpful suggestion is that you may want to set the
  default_destination_recipient_limit in /etc/postfix/main.cf  ... to 5.

  I don't know much of anything about Postfix, but I'm guessing that
will impact all destination MXes.  The goal here was to just limit
connections to *Yahoo* to 5 recipients per envelope.  The above will
penalize all connections, right?  How would one specify that for just
Yahoo?

  Anyone want to give a presentation on switching from Sendmail to
Postfix?  I really need to get around to doing that, one of these
decades...

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-13 Thread Steven W. Orr
On Saturday, Apr 12th 2008 at 22:53 -, quoth Paul Lussier:

=Coleman Kane [EMAIL PROTECTED] writes:
=
= A more helpful suggestion is that you may want to set the
= default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
= main.cf is located on your particular install) to 5. Adding (or
= changing) the following line in the file should do:
=
= default_destination_recipient_limit = 5
=
=Thanks!  And just to clarify, does this limit the total number of
=recipients to 5, or does it just batch 5 recipients at a time when
=sending to the total list of recipients?  In other words, if I sent to
=20 people, does it get send in 4 batches of 5, or do 15 people not
=recieve the mail?
=
=I'm assuming the former, i.e. 4 batches of 5.

It's a good question. There is a difference between sending a single
message to a list of people and sending a message to a list.

In the former case, I could send something really important to everyone I 
know with lots of people in the to line. In the latter case, I would send 
a message to the address of a list that is being run by a mailinglist 
manager. The manager would then explode the message to all the people that 
are subscribed to the list. This latter case is what is specified by the m 
option of the sendmail mailer. The former case would be handled (I 
believe) by the r option of the mailer.

And the point about postfix setting default_destination_recipient_limit is 
not equivalent because the goal is to cause proper mail delivery in groups 
of 5 only for yahoo and not for all outgoing traffic.

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-13 Thread Steven W. Orr
On Sunday, Apr 13th 2008 at 08:13 -, quoth Ben Scott:

=On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane [EMAIL PROTECTED] wrote:
=  A more helpful suggestion is that you may want to set the
=  default_destination_recipient_limit in /etc/postfix/main.cf  ... to 5.
=
=  I don't know much of anything about Postfix, but I'm guessing that
=will impact all destination MXes.  The goal here was to just limit
=connections to *Yahoo* to 5 recipients per envelope.  The above will
=penalize all connections, right?  How would one specify that for just
=Yahoo?
=
=  Anyone want to give a presentation on switching from Sendmail to
=Postfix?  I really need to get around to doing that, one of these
=decades...

Why would you ever want to do that? Sendmail has more flexibility. 
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-13 Thread Shawn O'Shea
On Sun, Apr 13, 2008 at 8:13 AM, Ben Scott [EMAIL PROTECTED] wrote:

 On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane [EMAIL PROTECTED] wrote:
   A more helpful suggestion is that you may want to set the
   default_destination_recipient_limit in /etc/postfix/main.cf  ... to 5.

  I don't know much of anything about Postfix, but I'm guessing that
 will impact all destination MXes.  The goal here was to just limit
 connections to *Yahoo* to 5 recipients per envelope.  The above will
 penalize all connections, right?  How would one specify that for just
 Yahoo?

I don't have a ton of Postfix experience, but using this Postfix FAQ
question ( http://www.postfix.org/faq.html#incoming ) as a template of sorts
(and reading bits from the O'Reilly postfix book and the postfix man pages):

You would create a transport map file, say /etc/postfix/transport. Add
entries for the domains you want to limit and assign them to a transport
name, let's say lamedomains

yahoo.com  lamedomains:

You need to then run: postmap /etc/postfix/transport

Then in the postfix main.cf, add lines to tell it about the transport and to
tell it that anything in that transport has the recipient limit.
transport_maps = hash:/etc/postfix/transport
lamedomains_destination_recipient_limit = 5

So now you've created a transport, put some domains in it, changed the
default behavior of postfix for that transport, you just need to tell
postfix what to do with that transport (aka, deliver it with smtp).

Add a line to master.cf:
lamedomains  unix  -   -   -   -   -   smtp

Now tell postfix to reload its config: postfix reload
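
(To sanity-check the map afterwards, postmap -q yahoo.com
hash:/etc/postfix/transport should print back lamedomains: -- if it prints
nothing, the lookup key didn't match.)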

Again, I haven't tested this, so you may need to read man pages and play
with that a little, but that should set a postfix user in the right
direction

-Shawn
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-13 Thread Ben Scott
On Sun, Apr 13, 2008 at 2:34 PM, Steven W. Orr [EMAIL PROTECTED] wrote:
 Anyone want to give a presentation on switching from Sendmail to
 Postfix?

  Why would you ever want to do that?

  Primarily: Cleaner, easier configuration.  I find it costs me more
to learn a new feature in Sendmail than it appears it would cost me to
learn the corresponding feature in Postfix.

  I've been using Sendmail since I started with *nix, so the
incremental cost of learning one new feature when I need it has been
lower than the cost of learning all of Postfix.  But every time I do
so, I think of all the cost I've been accumulating over the years.  A
common situation, really.  The field of IT systems administration is
largely about turning Better the devil you know into a way of life.

 Sendmail has more flexibility.

  More than I need.  The higher flexibility comes with a corresponding
cost.  So I'm paying for something I don't need.  Like commuting into
work by driving an 18-wheeler.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-12 Thread Coleman Kane
On Fri, 2008-04-11 at 10:40 -0400, Derek Atkins wrote:
 Paul Lussier [EMAIL PROTECTED] writes:
 
  Steven W. Orr [EMAIL PROTECTED] writes:
 
  Add this to the end of your sendmail.mc
 
  Anyone know what the postfix fix is?
 
 Yeah.  Install sendmail.  ;)
 
  Seeya,
  Paul
 
 -derek
 

A more helpful suggestion is that you may want to set the
default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
main.cf is located on your particular install) to 5. Adding (or
changing) the following line in the file should do:

default_destination_recipient_limit = 5

--
Coleman Kane



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-12 Thread Paul Lussier
Coleman Kane [EMAIL PROTECTED] writes:

 A more helpful suggestion is that you may want to set the
 default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
 main.cf is located on your particular install) to 5. Adding (or
 changing) the following line in the file should do:

 default_destination_recipient_limit = 5

Thanks!  And just to clarify, does this limit the total number of
recipients to 5, or does it just batch 5 recipients at a time when
sending to the total list of recipients?  In other words, if I sent to
20 people, does it get sent in 4 batches of 5, or do 15 people not
receive the mail?

I'm assuming the former, i.e. 4 batches of 5.
-- 
Seeya,
Paul
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-12 Thread Coleman Kane
On Sat, 2008-04-12 at 22:53 -0400, Paul Lussier wrote:
 Coleman Kane [EMAIL PROTECTED] writes:
 
  A more helpful suggestion is that you may want to set the
  default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
  main.cf is located on your particular install) to 5. Adding (or
  changing) the following line in the file should do:
 
  default_destination_recipient_limit = 5
 
 Thanks!  And just to clarify, does this limit the total number of
 recipients to 5, or does it just batch 5 recipients at a time when
 sending to the total list of recipients?  In other words, if I sent to
 20 people, does it get sent in 4 batches of 5, or do 15 people not
 receive the mail?
 
 I'm assuming the former, i.e. 4 batches of 5.

According to the Recipient limits section on this page:
http://www.postfix.org/rate.html

If an email message has more than $default_destination_recipient_limit
recipients at the same destination, the list of recipients will be
broken up into smaller lists, and multiple copies of the message will be
sent.

--
Coleman



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-11 Thread Paul Lussier
Steven W. Orr [EMAIL PROTECTED] writes:

 Add this to the end of your sendmail.mc

Anyone know what the postfix fix is?
-- 
Seeya,
Paul
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-11 Thread Derek Atkins
Paul Lussier [EMAIL PROTECTED] writes:

 Steven W. Orr [EMAIL PROTECTED] writes:

 Add this to the end of your sendmail.mc

 Anyone know what the postfix fix is?

Yeah.  Install sendmail.  ;)

 Seeya,
 Paul

-derek

-- 
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]PGP key available
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Sendmail question. Problem with yahoo.

2008-04-10 Thread Steven W. Orr
I run a list and all the mail to yahoo is backing up. It seems they now 
have a limit of 5 recipients per envelope. Can someone tell me how to 
change my sendmail mc file to fix this?

TIA

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Sendmail question. Problem with yahoo.

2008-04-10 Thread Ben Scott
On Thu, Apr 10, 2008 at 12:58 PM, Steven W. Orr [EMAIL PROTECTED] wrote:
 I run a list and all the mail to yahoo is backing up. It seems they now
  have a limit of 5 recipients per envelope.

  That violates RFC-821, I'm pretty sure.  I seem to recall it
requires implementations support at least 100 recipients.  It seems
like @yahoo.com is the new @aol.com -- don't expect your mail to
work if you use it.

 Can someone tell me how to
  change my sendmail mc file to fix this?

  You could try

define(confMAX_RCPTS_PER_MESSAGE, 5)

but I'm not sure that will do what you want.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Solved: Sendmail question. Problem with yahoo.

2008-04-10 Thread Steven W. Orr
On Thursday, Apr 10th 2008 at 13:35 -, quoth Ben Scott:

=On Thu, Apr 10, 2008 at 12:58 PM, Steven W. Orr [EMAIL PROTECTED] wrote:
= I run a list and all the mail to yahoo is backing up. It seems they now
=  have a limit of 5 recipients per envelope.
=
=  That violates RFC-821, I'm pretty sure.  I seem to recall it
=requires implementations support at least 100 recipients.  It seems
=like @yahoo.com is the new @aol.com -- don't expect your mail to
=work if you use it.
=
= Can someone tell me how to
=  change my sendmail mc file to fix this?
=
=  You could try
=
=  define(confMAX_RCPTS_PER_MESSAGE, 5)
=
=but I'm not sure that will do what you want.
=

No, that would only limit messages which exceed the max number of 
recipients that I'd like to receive. Here's the solution for the few who 
are dying to know:


Add this to the end of your sendmail.mc

MAILER_DEFINITIONS
Mesmtp_mailer_maxmsgs_5,P=[IPC], F=mDFMuXa, 
S=EnvFromSMTP/HdrFromSMTP, R=EnvToSMTP/HdrFromSMTP, E=\r\n, L=990,
m=5, T=DNS/RFC822/SMTP,
A=TCP $h

Then add this to your mailertable:

yahoo.com   esmtp_mailer_maxmsgs_5:yahoo.com

Then just restart/reload/kill-1 sendmail

and as the French say Eez beeg fat accomplishmente.

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: Solved: Sendmail question. Problem with yahoo.

2008-04-10 Thread Flaherty, Patrick
 and as the French say Eez beeg fat accomplishmente.

Extra points for Steven...
It took me until I ran it thru google's language tools to get the joke.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Mysql connection problem

2008-04-10 Thread Deepan
Hi All,
I am able to connect to Mysql via command line
using mysql client. I am also able to connect to
mysql via php if I run those php programs via
command line. However when I hit those php pages
via the browser it throws the error Can't connect
to local MySQL server through socket
'/tmp/mysql.sock' (2). Please note that this is
the same socket the mysql client tries to connect
to the server.
Regards 
Deepan 
Sudoku Solver: http://www.sudoku-solver.net/ 


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mysql connection problem

2008-04-10 Thread Coleman Kane
Deepan wrote:
 Hi All,
 I am able to connect to Mysql via command line
 using mysql client. I am also able to connect to
 mysql via php if I run those php programs via
 command line. However when I hit those php pages
 via the browser it throws the error Can't connect
 to local MySQL server through socket
 '/tmp/mysql.sock' (2). Please note that this is
 the same socket the mysql client tries to connect
 to the server.
 Regards 
 Deepan 
 Sudoku Solver: http://www.sudoku-solver.net/ 
   
The web server is going to be using a different user than the 
command-line is. What user are you using on the command line to test? 
You may need to change the socket so that it is group-readable and then 
put the web-server user into that group (and re-start the web server).

It would be helpful if you sent over the output of the following command:
ls -l /tmp/mysql.sock
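
If the socket turns out not to be readable by the web server's user, the
group-based fix described above would look roughly like this (untested;
assuming the web server runs as user apache and mysqld runs as group mysql
-- adjust for your install):

chgrp mysql /tmp/mysql.sock
chmod g+rw /tmp/mysql.sock
usermod -a -G mysql apache
# then restart the web server so it picks up the new group

Note that many installs create the socket world-accessible, in which case
the real problem is more likely the socket path itself.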

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mysql connection problem

2008-04-10 Thread Brian Karas
This often happens when you have a user configured only for localhost
connections.  Coming from the command line, the user will generally appear
to originate from localhost.  Coming from a PHP or CGI app the user will
generally appear to come from the hostname.

I'd start by checking the users table.


On 4/10/08 5:09 PM, Deepan [EMAIL PROTECTED] wrote:

 Hi All,
 I am able to connect to Mysql via command line
 using mysql client. I am also able to connect to
 mysql via php if I run those php programs via
 command line. However when I hit those php pages
 via the browser it throws the error Can't connect
 to local MySQL server through socket
 '/tmp/mysql.sock' (2). Please note that this is
 the same socket the mysql client tries to connect
 to the server.
 Regards 
 Deepan 
 Sudoku Solver: http://www.sudoku-solver.net/
 
 
 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mysql connection problem

2008-04-10 Thread Thomas Charron
On Thu, Apr 10, 2008 at 5:09 PM, Deepan [EMAIL PROTECTED] wrote:
 Hi All,
  I am able to connect to Mysql via command line
  using mysql client. I am also able to connect to
  mysql via php if I run those php programs via
  command line. However when I hit those php pages
  via the browser it throws the error Can't connect
  to local MySQL server through socket
  '/tmp/mysql.sock' (2). Please note that this is
  the same socket the mysql client tries to connect
  to the server.
  Regards

  Can you run them as the user the apache web server is running as?
Since you've said you can run the php from the command line, I'm
assuming it has to do with the user authority of some type.  If the
system is really, REALLY locked down, the user apache runs as may not
be able to open /tmp/mysql.sock.

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mysql connection problem

2008-04-10 Thread Thomas Charron
On Thu, Apr 10, 2008 at 5:42 PM, Thomas Charron [EMAIL PROTECTED] wrote:
 On Thu, Apr 10, 2008 at 5:09 PM, Deepan [EMAIL PROTECTED] wrote:
   Hi All,
I am able to connect to Mysql via command line
using mysql client. I am also able to connect to

'/tmp/mysql.sock' (2). Please note that this is
the same socket the mysql client tries to connect
to the server.
Regards
   Can you run them as the user the apache web server is running as?
  Since you've said you can run the php from the command line, I'm
  assuming it has to do with the user authority of some type.  If the
  system is really, REALLY locked down, the user apache runs as may not
  be able to open /tmp/mysql.sock.

  Additionally, ensure that mysql is creating the pipe in
/tmp/mysql.sock and not /var/lib/mysql/mysql.sock which is where at
least my system dumps the file.
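
  A quick way to check both ends (paths and config file locations vary by
distro, so treat this as a sketch):

mysqladmin variables | grep socket
grep -r socket /etc/my.cnf /etc/mysql/ 2>/dev/null
php -i | grep mysql.default_socket

If mysqld and PHP disagree, either point mysqld at /tmp/mysql.sock in
my.cnf ([mysqld] socket=/tmp/mysql.sock) or set mysql.default_socket to
the real path in the php.ini that Apache uses.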

-- 
-- Thomas
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Problem with usb serial port ordering.

2008-02-05 Thread Bill McGonigle
On Feb 3, 2008, at 23:26, Ben Scott wrote:

   If the devices are identical in model, you're likely SOL: The USB
 standard doesn't require a unique ID (e.g., hardware address, serial
 number), so there's no sure way to tell identical models apart.  You
 might be able to finagle something with port numbers or the like.

==Hack alert==

I have to admit to not really understanding the output of `lsusb -v`  
but I didn't see how to tie a device to a part number, however

If they are the same device and you like it that way (e.g. I'm using  
Keyspans because they actually work, unlike others I've tried) you  
could put two different USB hubs in the chain, and figure out which  
device is connected to which hub, then do the right thing by parsing  
`lsusb -t`.

I know, men have been shot for more elegant solutions.

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Problem with usb serial port ordering.

2008-02-05 Thread Ben Scott
On Feb 5, 2008 6:39 PM, Bill McGonigle [EMAIL PROTECTED] wrote:
 I have to admit to not really understanding the output of `lsusb -v`
 but I didn't see how to tie a device to a part number, however

  The output of lsusb -- and lspci, too -- is based on the ID
numbers reported by the various devices.  Every device reports a
vendor ID, and a device ID (PCI) or product ID (USB).  The tools are
informed by a large database of known ID numbers.  Without that
database, you get only eight hex digits of numeric ID.  (Incidentally, MS
Windows works the same way, except it builds that number-to-name
database by scanning those *.INF files.)

  Exactly what you can do when matching depends on the tool you're
using.  For example, on my Fedora 6 box at home, there is /etc/udev/
and a bunch of files under it.  The syntax looks pretty powerful in
general.  It can match by driver, kernel bus ID, sysfs attributes,
etc.  I haven't played with it much myself.  The udev(7) man page
documents some of it.  I say some of it because a lot of it appears
to be driven not by udev itself, but sysfs, and I haven't found TFM
for that yet.
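
  For what it's worth, plain lsusb already shows the raw IDs, e.g. (a
made-up sample line):

Bus 001 Device 004: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port

The 067b:2303 part is the vendor:product pair a rule can match on, and
udevinfo -a -p $(udevinfo -q path -n /dev/ttyUSB0) (udevadm info
--attribute-walk on newer systems) will walk the sysfs attributes if you
need something more specific than that.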

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Problem with usb serial port ordering.

2008-02-04 Thread Bill Sconce
On Sun, 3 Feb 2008 16:48:09 -0500 (EST)
Steven W. Orr [EMAIL PROTECTED] wrote:

 I have two usb devices which I shall call spm and fp. They end up at 
 ttyUSB0 and ttyUSB1 but the order that they get accessed causes us to not 
 know which device we're talking to until they are first referenced.
 
 Does anyone know if there's a way to cause a specific device to always 
 come up on a specified ttyUSB? e.g., if spm happens to want to come up as 
 ttyUSB1 and I want it to be on ttyUSB0, is there a way?


As Ben notes, the answer is probably sysfs/udev.  I'm just now working
on a similar problem (getting some new Firewire disks to appear as the
same name each time they're plugged in).  Here's what got me started:

(Kickstart)
Create your own udev rules to control removable devices
  http://ubuntuforums.org/showthread.php?t=168221

(Real tutorial)
Daniel Drake's Writing udev rules
  http://reactivated.net/writing_udev_rules.html


HTH,

Bill
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Problem with usb serial port ordering.

2008-02-03 Thread Steven W. Orr
I have two usb devices which I shall call spm and fp. They end up at 
ttyUSB0 and ttyUSB1 but the order that they get accessed causes us to not 
know which device we're talking to until they are first referenced.

Does anyone know if there's a way to cause a specific device to always 
come up on a specified ttyUSB? e.g., if spm happens to want to come up as 
ttyUSB1 and I want it to be on ttyUSB0, is there a way?

-- 
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor?Black holes are where God divided by zero. Listen to me! We are all- 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Problem with usb serial port ordering.

2008-02-03 Thread Ben Scott
On Feb 3, 2008 4:48 PM, Steven W. Orr [EMAIL PROTECTED] wrote:
 Does anyone know if there's a way to cause a specific device to always
 come up on a specified ttyUSB?

  Assuming the devices are actually different USB devices (and not two
units of identical models), you should be able to use the device
naming facilities on your system to set-up useful aliases, like
/dev/fp and /dev/spm.  Basically, you create pattern matches on
manufacturer/model, and assign names to the match.  Depending on
kernel version, distribution, etc., you will want to look in
/etc/udev/, /etc/usb/, /etc/hotplug, and/or /etc/hotplug.d/.  The
files are usually well commented.
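
  As a sketch (untested, and the vendor/product IDs below are invented --
pull the real ones from lsusb), a udev rules file along these lines would
give you stable names:

# e.g. /etc/udev/rules.d/99-usb-serial.rules
# older udev versions spell the match keys SYSFS{...} instead of ATTRS{...}
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="spm"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="fp"

After reloading udev (or re-plugging the devices), /dev/spm and /dev/fp
will point at the right ttyUSB node no matter which one enumerates first.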

  If the devices are identical in model, you're likely SOL: The USB
standard doesn't require a unique ID (e.g., hardware address, serial
number), so there's no sure way to tell identical models apart.  You
might be able to finagle something with port numbers or the like.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


USB automounting problem-resolved

2007-12-21 Thread Ed lawson
Awhile back I posted about a problem I had with automounting my USB 
drive.  Using Debian Sid and Gnome.  Turns out that the gparted program 
wrote a policy file prohibiting automounting in 
/usr/share/hal/fdi/policy/ when started and did not delete it when 
closed so it prevented automounting till deleted.

Not easy to discover cause of problem.

So for future reference, always check that directory as well as 
/etc/hal/fdi/policy/ if something goes amiss to see if a policy file has 
gone bonkers.
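
Something like this will show what's lurking in both places (file names
will vary, but on my box the gparted one was the give-away):

ls -l /usr/share/hal/fdi/policy/*/ /etc/hal/fdi/policy/
grep -rl mount /usr/share/hal/fdi/policy /etc/hal/fdi/policy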

-- 
Ed Lawson
Ham Callsign: K1VP
PGP Key ID:   1591EAD3
PGP Key Fingerprint:  79A1 CDC3 EF3D 7F93 1D28  2D42 58E4 2287 1591 EAD3

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: LVM problem

2007-12-18 Thread Dan Coutu
/archive/VG_00050.vg /dev/sdh1

Now it occurs to me that there is some chance that this may in fact work 
exactly right by changing the uuid of the /dev/sdb volume to match the 
old uuid that the display commnads are griping about. The only problem 
is that the file that would be used as the value for --restorefile 
contains this:

# Generated by LVM2: Mon Oct 23 13:23:55 2006

contents = Text Format Volume Group
version = 1

description = Created *before* executing 'vgscan --mknodes 
--ignorelockingfailure'

creation_host = culverco.culverco.com # Linux culverco.culverco.com 
2.6.9-42.0.3.ELsmp #1 SMP Mon Sep 25 17:2
8:02 EDT 2006 i686
creation_time = 1161624235  # Mon Oct 23 13:23:55 2006

VolGroup00 {
id = AuDV2N-7nfH-7OpL-KjCN-LWVD-ArpI-7AkTBy
seqno = 3
status = [RESIZEABLE, READ, WRITE]
extent_size = 65536 # 32 Megabytes
max_lv = 0
max_pv = 0

physical_volumes {

pv0 {
id = LdCKsY-xEZF-4koe-hnE1-LjVX-eSCi-bz1Asx
device = /dev/sda2# Hint only

status = [ALLOCATABLE]
pe_start = 384
pe_count = 4372 # 136.625 Gigabytes
}
}

logical_volumes {

LogVol00 {
id = 0ChzON-UBNj-xEdx-jrir-f5T1-nDKq-Wx4WUP
status = [READ, WRITE, VISIBLE]
segment_count = 1

segment1 {
start_extent = 0
extent_count = 4307 # 134.594 Gigabytes

type = striped
stripe_count = 1  # linear

stripes = [
pv0, 0
]
}
}

LogVol01 {
id = bI5vdI-uYbl-1ME1-8LvS-VLJ8-SOyn-0tgxVZ
status = [READ, WRITE, VISIBLE]
segment_count = 1

segment1 {
start_extent = 0
extent_count = 62 # 1.9375 Gigabytes

type = striped
stripe_count = 1  # linear

stripes = [
pv0, 4307
]
}
}
}
}


You will note that nowhere in there is any mention of the problematic 
uuid. Also there is no mention of the physical volume sdb, only of sda2. 
I'm sure that I can manage to edit the file in order to add the proper 
information if only I can figure out what the proper information is. 
Maybe then it would reset the uuid for sdb and I'll be happy again. I hope.

Any ideas, suggestions, comments, etc.?

Thanks,

Dan

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: LVM problem

2007-12-18 Thread Dan Coutu
. The 
 command it mentions is something like this:

 pvcreate --uuid FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk --restorefile 
 /etc/lvm/archive/VG_00050.vg /dev/sdh1

 Now it occurs to me that there is some chance that this may in fact work 
 exactly right by changing the uuid of the /dev/sdb volume to match the 
 old uuid that the display commnads are griping about. The only problem 
 is that the file that would be used as the value for --restorefile 
 contains this:

 # Generated by LVM2: Mon Oct 23 13:23:55 2006

 contents = Text Format Volume Group
 version = 1

 description = Created *before* executing 'vgscan --mknodes 
 --ignorelockingfailure'

 creation_host = culverco.culverco.com # Linux culverco.culverco.com 
 2.6.9-42.0.3.ELsmp #1 SMP Mon Sep 25 17:2
 8:02 EDT 2006 i686
 creation_time = 1161624235  # Mon Oct 23 13:23:55 2006

 VolGroup00 {
 id = AuDV2N-7nfH-7OpL-KjCN-LWVD-ArpI-7AkTBy
 seqno = 3
 status = [RESIZEABLE, READ, WRITE]
 extent_size = 65536 # 32 Megabytes
 max_lv = 0
 max_pv = 0

 physical_volumes {

 pv0 {
 id = LdCKsY-xEZF-4koe-hnE1-LjVX-eSCi-bz1Asx
 device = /dev/sda2# Hint only

 status = [ALLOCATABLE]
 pe_start = 384
 pe_count = 4372 # 136.625 Gigabytes
 }
 }

 logical_volumes {

 LogVol00 {
 id = 0ChzON-UBNj-xEdx-jrir-f5T1-nDKq-Wx4WUP
 status = [READ, WRITE, VISIBLE]
 segment_count = 1

 segment1 {
 start_extent = 0
 extent_count = 4307 # 134.594 Gigabytes

 type = striped
 stripe_count = 1  # linear

 stripes = [
 pv0, 0
 ]
 }
 }

 LogVol01 {
 id = bI5vdI-uYbl-1ME1-8LvS-VLJ8-SOyn-0tgxVZ
 status = [READ, WRITE, VISIBLE]
 segment_count = 1

 segment1 {
 start_extent = 0
 extent_count = 62 # 1.9375 Gigabytes

 type = striped
 stripe_count = 1  # linear

 stripes = [
 pv0, 4307
 ]
 }
 }
 }
 }


 You will note that nowhere in there is any mention of the problematic 
 uuid. Also there is no mention of the physical volume sdb, only of sda2. 
 I'm sure that I can manage to edit the file in order to add the proper 
 information if only I can figure out what the proper information is. 
 Maybe then it would reset the uuid for sdb and I'll be happy again. I hope.

 Any ideas, suggestions, comments, etc.?

 Thanks,

 Dan
   
Hey! I found an interesting and possibly useful file! 
/etc/lvm/backup/VolGroup00 contains the same stuff as the above archives 
PLUS this useful bit (pruned to keep things smaller)

physical_volumes {

pv0 {
id = LdCKsY-xEZF-4koe-hnE1-LjVX-eSCi-bz1Asx
device = /dev/sda2# Hint only

status = [ALLOCATABLE]
dev_size = 286535340# 136.631 Gigabytes
pe_start = 384
pe_count = 4372 # 136.625 Gigabytes
}

pv1 {
id = oACqnk-YQTQ-IiGy-F6Pj-UoBB-kUqM-g6Yu3D
device = /dev/sdb # Hint only

status = [ALLOCATABLE]
dev_size = 286746624# 136.731 Gigabytes
pe_start = 384
pe_count = 4375 # 136.719 Gigabytes
}
}

Whoo hoo! There's my bogus uuid. I'm thinking that I should be able to 
re-create things with this information.
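
If I go that route, the sequence would presumably be something like the
following (untested, and per Red Hat's advice only with verified full
backups in hand first), using the uuid and device from the backup file:

pvcreate --uuid oACqnk-YQTQ-IiGy-F6Pj-UoBB-kUqM-g6Yu3D \
 --restorefile /etc/lvm/backup/VolGroup00 /dev/sdb
vgcfgrestore -f /etc/lvm/backup/VolGroup00 VolGroup00
vgchange -ay VolGroup00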

Dan

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: LVM problem

2007-12-18 Thread Ben Scott
On Dec 18, 2007 9:36 AM, Dan Coutu [EMAIL PROTECTED] wrote:
 Well, I have my response from Red Hat ... Make sure you have full backups of
 everything, reinstall Linux, and restore your files from backups.

  Take off and nuke the site from orbit.  It's the only way to be sure.

  I suspect they're more afraid of the unknowns.  The current system
is a bit of a question mark.  A total reload will provide assurance
the system config is intact and known.  That's the safer route in
their book.  Of course, they're not the ones who have to rebuild the
system :-(

 Here is the output of various display commands:

  Which all seem to indicate that the volume group named VolGroup00
is not working.

  You did say this system was running, right?

  I'm wondering just how screwed up things are.  If what LVM is
reporting is right, there shouldn't be a system to run...

  Have you rebooted since the borkage occurred?  If not, I suggest
avoiding doing so.  The system may not come back up.

  I'm especially worried that you have holes in your LVs that you
just haven't run into yet.

  If you have backups from before the borkage, I'd pull the media out
of the rotation pool, and keep them indefinitely, in case you discover
trouble later.

  At some point, once you're out of the woods, and after making new
backups, I'd suggest scheduling downtime for an fsck, and/or a
database or other application-level consistency check.  I know
downtime can be hard to come by, but... better to run the checks at 4
AM on a Sunday than 1:32 PM on a Thursday after the system crashes.

 You will note that nowhere in there is any mention of the problematic
 uuid. Also there is no mention of the physical volume sdb, only of sda2.

  That may or may not be a good thing.  It all hinges on whether any
extents from the PV that was on sdb got allocated to any LVs at some
point.

  If the LVs have extents mapped to the damaged PV on sdb, then
restoring the metadata of the damaged PV is what you want.

  If the LVs exist entirely on the PV on sda2, you may be better off
restoring the VolGroup00 metadata from before the trouble happened.
Then just start over -- create the PV on sda2 as if none of this
ever happened.

  Normally, lvdisplay -m will show you the LE-to-PE mapping.  But
according to the lvdisplay -v you posted, there isn't any such
mapping.  At all.

  Try the --partial switch to the various LVM commands.  According
to the man page, it will not allow modification of metadata, so it
should be safe.  There are also some words in there about re-creating
a missing PV and restoring LVM config that you may want to read.
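
  Something along the lines of (read-only, if the man page is to be
believed):

vgdisplay --partial -v VolGroup00
lvdisplay --partial -m /dev/VolGroup00/LogVol00

might at least tell you whether any LV extents actually live on the
damaged PV.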

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/

