Re: who offers cheap (personal) 1U colo?

2004-03-14 Thread Jeff S Wheeler
On Sun, 2004-03-14 at 05:42, Bohdan Tashchuk wrote:
> Question: Why can't a provider sell virtual PC colocation, instead of 
> physical PC colocation?

Many companies are providing "Virtual Private Servers" these days, which
range in implementation from OS jails to virtual hardware like vmware.

--
Jeff






Re: apache uses 100 % cpu

2004-02-28 Thread Jeff S Wheeler
Regarding that mail-filtering message: it seems to have come from some
third party who reads the list.  I guess their software is not mailing-list aware.

On Sat, 2004-02-28 at 15:14, Marty Landman wrote:
> Jeff, do you think that the apps are trying to flock the file? I'm curious 
> what the app level issue is and how it could be done properly -- being more 
> of a developer than sysadmin.

No. If Apache were trying to flock the file, the strace output would
show an flock(2) system call in progress.
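For what it's worth, here is a minimal sketch (in Python, just for illustration) of the flock(2) call that would have shown up in the strace output if locking were the problem:

```python
# Sketch: take and release an flock(2) lock. Under `strace -e trace=flock`
# this would show flock(fd, LOCK_EX) / flock(fd, LOCK_UN) calls -- exactly
# the kind of call that was absent from the Apache trace.
import fcntl
import tempfile

with tempfile.TemporaryFile() as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # exclusive lock; would block if contended
    fcntl.flock(f, fcntl.LOCK_UN)   # release it
print("flock taken and released")
```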

> How do you find the stability of Apache mod_perl?

We do not let our customers use mod_perl on shared servers, as there are
way too many things they could fuck up.  In all my experience, mod_perl
is quite stable when administered by competent folks and when running
relatively sane code.  I certainly choose mod_perl over PHP for any web
application development that I do.

--
Jeff S Wheeler


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: apache uses 100 % cpu

2004-02-28 Thread Jeff S Wheeler
On Sat, 2004-02-28 at 13:48, Timo Veith wrote:
> apache  24290  root  8r  REG  8,17  13115  25739308
> /home/jebu0001/homepage/jens/chat/php_chat_log

> I tried gdb, but there are no debug symbols,
> that's the default with most debian packages I assume.


It's a shame the debug symbols aren't available.  If this happens on a
routine basis, I would definitely suggest rebuilding apache and all its
modules with debugging symbols left in.

Barring that, though, what your apache processes are doing is trying to
read that file, /home/jebu0001/homepage/jens/chat/php_chat_log, over and
over again, most likely in a `tail -f` nature, but no new data is
appearing.  Without question, if you were to fire up strace again, and
append a few bytes to that file, they would show up in your strace
output as bytes being read.
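The read-at-EOF behaviour behind that strace output can be reproduced in a few lines (Python used here purely as a sketch):

```python
# read(2) at end-of-file returns 0 bytes; a script that keeps calling read
# without waiting for new data spins exactly like the traced Apache process.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 4096) == b"hello\n"   # first read returns the data
assert os.read(fd, 4096) == b""          # at EOF: read(8, "", 4096) = 0
os.close(fd)
os.unlink(path)
```

A well-behaved tail-follower sleeps (or uses select/inotify) between reads instead of looping on a zero-byte result.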

Unfortunately apache does not have any .php files or similar still open,
so it may be difficult to identify exactly what script the culprit is. 
In any case, I would contact that user and tell them that PHP script is
causing you grief, and have them stop running it.

PHP is the single biggest cause of shared-server problems at my company.
I wish the PHP CGI mode worked correctly; if it did, we would use it
instead of the Apache PHP module.  It may be slower, but at least it
would limit what users can fuck up with third-party PHP scripts. :(

I hope this helps!

--
Jeff S Wheeler





Re: apache uses 100 % cpu

2004-02-28 Thread Jeff S Wheeler
On Sat, 2004-02-28 at 10:14, Timo Veith wrote:
> This is the output of strace:
> 
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
> read(8, "", 4096)   = 0
>   looping forever as it seems.


First of all, let me compliment you on the good level of detail you've
provided in your request for trouble-shooting aid.  If these processes
are still running, I'd really like to see what is on descriptor 8 of the
process that generated the above strace output.  To find that out, make
sure you have the "lsof" package installed and issue `lsof -p <pid>`,
then take note of the FD column in the output.  That indicates which
file descriptor is being examined; the remaining columns to the right
give details about that descriptor.
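On Linux, lsof pulls its per-descriptor information out of /proc; a quick sketch (Python, Linux-specific) of what the FD column corresponds to:

```python
# Each entry in /proc/<pid>/fd is one open file descriptor; lsof's FD
# column is this number plus an access-mode suffix (r/w/u).
# Linux-specific sketch.
import os

fds = sorted(os.listdir("/proc/self/fd"), key=int)
print(fds)  # includes "0", "1", "2" for stdin, stdout, stderr
assert {"0", "1", "2"} <= set(fds)
```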

I can't imagine why apache itself would be caught in the scenario you
are experiencing, but perhaps a CGI/PHP script or buggy module is the
culprit.  If that is the case, simple knowledge of what apache is trying
to read may be helpful.

If you have gdb available (from the gdb package) and your apache binary
is not stripped of debugging symbols, you can also issue `gdb -p <pid>`,
which will attach the debugger to that running process.  I'm not sure
what the output will look like while it's issuing garbage reads
constantly, but you want to issue the gdb command `backtrace` and send
that output to the mailing list.  Just issue `q` afterwards to detach.

What version of Apache are you running, and with what modules?

--
Jeff





Re: Starting isp and going to use Debian

2004-02-21 Thread Jeff S Wheeler
On Sat, 2004-02-21 at 14:50, charlie derr wrote:
> > 5. Drive usage control (i.e. user only get 10M for mail and 15M for web)
> 
> We have quotas implemented on the web and mail servers.  This is a daily
> task though (raising quotas of people who've exceeded their default)

You could automate much of that task.  There is a Perl module for
manipulating dquota. http://search.cpan.org/~tomzo/Quota-1.4.10/Quota.pm
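As a sketch of what that automation could look like (a hypothetical policy; the actual quota change would go through the Quota module or setquota(8), and the names and thresholds here are made up):

```python
# Hypothetical policy: once a user is over 90% of their soft limit,
# raise the limit by 25%. Thresholds and function name are invented
# for illustration only.
def next_quota(used_kb: int, soft_kb: int, threshold: float = 0.9,
               bump: float = 1.25) -> int:
    if used_kb >= threshold * soft_kb:
        return int(soft_kb * bump)
    return soft_kb

assert next_quota(9500, 10000) == 12500   # over threshold: raised
assert next_quota(1000, 10000) == 10000   # under threshold: unchanged
```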

-- jsw






Re: Netgear FA311 and natsemi issue

2004-02-14 Thread Jeff S Wheeler
On Fri, 2004-02-13 at 19:21, [EMAIL PROTECTED] wrote:
> OS: Debian 3.0R2 stable 
> PC: Old Pentium Pro box with Intel 440FX chipset 
> Kernel: 2.4.18 (standard Debian kernel with natsemi support builtin) 
> natsemi driver: came with Debian dist (1.14 IIRC) 

I had endless problems with several Netgear ns83820 based cards last
year, and finally decided to throw them out in favor of Intel cards and
the Intel e1000 driver.  I am very pleased with this decision, and
recommend you make the same one, rather than cause yourself more grief
by trying to get the poorly supported Netgear cards to work.

--
Jeff





Re: I/O performance issues on 2.4.23 SMP system

2004-01-29 Thread Jeff S Wheeler
On Tue, 2004-01-27 at 16:49, Benjamin Sherman wrote:
> I have a server running dual 2.66Ghz Xeons and 4GB RAM, in a 
> PenguinComputing Relion 230S system. It has a 3ware RAID card with 3 
> 120GB SATA drives in RAID5. It is currently running Debian 3.0 w/ 
> vanilla kernel 2.4.23, HIGHMEM4G=y, HIGHIO=y, SMP=y, ACPI=y. I see the 
> problem with APCI and HT turned off OR if I leave them on.

I don't know anything about this 2.4.23 I/O problem, but I will tell you
that RAID 5 is not the way to go for heavy SQL performance. In a RAID 5
array, small writes incur a parity read-modify-write, so multiple heads
must move for every operation. You already spent a lot of money on that
server; I suggest you buy more disks and use RAID 10.

--
Jeff






Re: AOL testing new anti-spam technology

2004-01-25 Thread Jeff S Wheeler
On Sat, 2004-01-24 at 13:07, Joey Hess wrote:
> One thing I've been wondering about is pseudo-forged @debian.org From
> addresses (like mine) and spf. It would seem we can never turn it on for
> toplevel debian.org without some large changes in how developers send
> their email.

I don't understand how this problem will be solved for folks who travel.
For example, many hotel access services redirect your outbound SMTP TCP
sessions to a local smarthost these days, as, quite simply, that is the
easiest way to keep customers from being unable to send mail due to
relay restrictions on their office or ISP mail server.

--
Jeff






ntpd listening on alias interfaces seems non-trivial

2004-01-17 Thread Jeff S Wheeler
I have been attempting, without success, to get ntpd listening on an
alias interface on one of my general purpose boxes. It seems that ntpd
prefers to listen on localhost:ntp and eth0addr:ntp. It opens a socket
for *:ntp as well, but does not respond to queries on other addresses.
Here is some lsof output demonstrating this:

# lsof -p 16667 | grep UDP
ntpd  16667  root  4u  IPv4  4493134  UDP  *:ntp
ntpd  16667  root  5u  IPv4  4493135  UDP  localhost:ntp
ntpd  16667  root  6u  IPv4  4493136  UDP  hostname:ntp

I checked the archives, and it seems another poster had similar trouble
in Dec'02, but there were no apparent follow-up posts. Google has also
been less than revealing on this topic. All suggestions entertained.
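One detail worth noting: a plain wildcard UDP socket does answer on any local address, as this sketch (Python, for illustration) shows, which suggests ntpd is matching requests against the per-interface sockets it opened at startup rather than simply serving from *:ntp:

```python
# A socket bound to 0.0.0.0 receives datagrams addressed to any local
# address on that port; so ntpd's silent *:ntp socket implies it filters
# requests by its per-interface sockets.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("0.0.0.0", 0))                  # wildcard, like ntpd's *:ntp
srv.settimeout(5)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"ping", ("127.0.0.1", port))
data, _ = srv.recvfrom(64)
assert data == b"ping"
print("wildcard socket answered on 127.0.0.1")
```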

--
Jeff






Re: CPU Utiliaztion on a ethernet bridge

2003-11-20 Thread Jeff S Wheeler
On Thu, 2003-11-20 at 22:34, Donovan Baarda wrote:
> Do you really mean poll-based, or DMA based? Traditionally polling is
> evil CPU wise... but there could be reasons why polling is better if
> that is the only thing you are doing. Possibly PC DMA is probably so old
> and crappy that it's not worth using?

It is my understanding that the modern e1000 driver polls the NIC to
find out when new frames are available. You may be correct that it just
looks in the DMA rx ring, though; I am a bit out of my league at this
point. In either case, the PRO/100 and PRO/1000 cards, using the Intel
e100/e1000 drivers, are superb. I suspect the 3c59x driver is not quite
so modern, and the kernel is preempted by NIC interrupts frequently when
new frames come in under your existing bridge configuration.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.





Re: CPU Utiliaztion on a ethernet bridge

2003-11-19 Thread Jeff S Wheeler
On Wed, 2003-11-19 at 21:42, Simon Allard wrote:
> I have replaced NIC's as I thought it might of been the drives also. I
> moved to the eepro100 cards. Same problem.

You should be using NICs with a poll-based driver, as opposed to an
interrupt-based driver. This will preempt the kernel less often, and
allow it to service the NIC more efficiently.

The e1000 driver is excellent in this respect. We run more than 100Mb
through a Linux router with a full eBGP table (~127k FIB entries) with
no appreciable CPU consumption. The only time the box is substantially
taxed is when a BGP peer flaps, in which case zebra consumes a lot of
CPU power reconfiguring the FIB. It's a shame that the Linux kernel
doesn't make the FIB structures accessible directly via an interface
similar to /dev/kmem so zebra could simply mmap(2) it in and make large
writes instead of small ioctl(2) calls.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.






Re: Route Question!

2003-11-19 Thread Jeff S Wheeler
First, I strongly suggest you move your thread to the quagga-users list
at [EMAIL PROTECTED] You can find numerous configuration
examples in the archives at http://lists.quagga.net. This is the best
forum for help with Zebra/Quagga. I suggest you follow-up on that list,
which I also participate on.

On Wed, 2003-11-19 at 16:16, kgb wrote:
> router i have bgp all my traffic which are bgpeer (all traffic in my
> country) and int (outside my country or with two words international

First, you need to figure out how you will identify "bgpeer" traffic and
"international" traffic. AS-PATH works but it is not the best way to go.

Please provide details about how each of your eBGP sessions reaches your
network. Are they all presently on your Cisco? What type of ports do you
use, e.g. E3/DS3, FastEthernet, etc?

> cisco router and bgp on debian linux router to be with some access list
> _permit_ as_number _denied_ as_number can someone explane how that can

You can accomplish what you want with AS-PATH access lists, however it
will be a pain in the ass to maintain. What you really want is a BGP
community based route filtering system. In my shop(s), I set communities
on all routes learned via eBGP sessions. This helps me identify where I
learned a route (which POP); who it came from (customer, transit, peer);
and if it should have any special local-preference or export concerns. I
then use route-maps that match based on communities to export only my
customer routes to peers and transit providers, for example.

To do this, every eBGP session needs its own route-map. Below is just an
example; you will need some additional parameters for your peer ASes and
your transit ASes, as I understand you. I can produce a better example
when you provide more information. Please, follow up to the quagga list.

router bgp 10
neighbor 20.20.20.20 remote-as 20
neighbor 20.20.20.20 description AS 20 transit
neighbor 20.20.20.20 soft-reconfiguration inbound
neighbor 20.20.20.20 route-map transit_AS20_in in
neighbor 20.20.20.20 route-map transit_AS20_out out
neighbor 30.30.30.30 remote-as 30
neighbor 30.30.30.30 description AS 30 peer
neighbor 30.30.30.30 soft-reconfiguration inbound
neighbor 30.30.30.30 route-map peer_AS30_in in
neighbor 30.30.30.30 route-map peer_AS30_out out
neighbor 40.40.40.40 remote-as 40
neighbor 40.40.40.40 description AS 40 customer
neighbor 40.40.40.40 soft-reconfiguration inbound
neighbor 40.40.40.40 route-map cust_AS40_in in
neighbor 40.40.40.40 route-map cust_AS40_out out
!
ip community-list standard cust_routes permit 10:14
ip community-list standard peer_routes permit 10:17
ip community-list standard transit_routes permit 10:19
!
route-map transit_AS20_in permit 100
set local-preference 100
set community 10:19 # this is the "learnt from transit" community
set ip next-hop 20.20.20.20 # always enforce next-hop
!
route-map transit_AS20_out permit 100
match community cust_routes
set community none # don't send our communities to transit
set ip next-hop 20.20.20.21 # this is our interface to AS20
!
route-map peer_AS30_in permit 100
set local-preference 300
set community 10:17 # this is the "learnt from peer" community
set ip next-hop 30.30.30.30
!
route-map peer_AS30_out permit 100
match community cust_routes
set community none # unless the peer wants your communities
set ip next-hop 30.30.30.31
!
route-map cust_AS40_in permit 100
set local-preference 500
set community 10:14 # this is the "learnt from customer" community
set ip next-hop 40.40.40.40
!
route-map cust_AS40_out permit 100
match community transit_routes
on-match goto 1000
!
route-map cust_AS40_out permit 110
match community peer_routes
on-match goto 1000
!
route-map cust_AS40_out permit 120
match community cust_routes
on-match goto 1000
!
route-map cust_AS40_out deny 999
!
route-map cust_AS40_out permit 1000
set community none
set ip next-hop 40.40.40.41

> be done in more details i want that because my cisco router is too weak
> and can't work well with 50-60Mbit traffic and if i can do that to split

With your level of traffic, 50Mb/s - 60Mb/s, you will want NICs with a
poll-based driver, as opposed to an interrupt-driven one. The Intel
e1000 cards are superb.

I hope this is a helpful start. You'll need to do some configuration
work on OSPF and Zebra itself as well, but we'll need to look at more
specifics of your setup to do that.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>
Five Elements, Inc.






Re: Packet Shaping

2003-11-15 Thread Jeff Waugh
On Sun, 2003-11-16 at 14:16, Splash Tekalal wrote:
> I know Debian can do packet shaping and set up rules for types of packets 
> to get priority and such, but I'm at a loss as to where to start on setting 
> it up.

You might want to try shorewall. In addition to firewall stuff, it also
provides a nice way of configuring QoS parameters. Worth a try. :-)

-- 
Jeff Waugh <[EMAIL PROTECTED]>
Flow Communications Pty. Ltd.






Re: two ethernet ports on one PCI NIC?

2003-10-09 Thread Jeff S Wheeler
On Thu, 2003-10-09 at 15:57, Chris Evans wrote:
> but only one PCI slot.  Anyone know of a reliable dual ethernet NIC 
> for PCI that has linux drivers (Debian tested ideally)?


Chris,

The Intel PRO/100 and PRO/1000 ethernet cards are excellent and
inexpensive. You can also purchase mainboards with several of these
chipsets on-board. I have a number of Tyan mainboards with as many as 3
on-board Intel-based ethernet ports.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>





Re: Funny NFS

2003-09-22 Thread Jeff S Wheeler
On Mon, 2003-09-22 at 06:09, Dave wrote:
> [EMAIL PROTECTED]'s password:
> Last login: Mon Sep 22 09:04:05 2003 from 192.168.11.2 on pts/1
> Linux valhalla 2.4.20-valhalla #1 Thu Sep 18 08:21:07 SAST 2003 i686 unknown
> struct nfs_fh * fh;
> const char *name;
> unsigned intle
> Last login: Mon Sep 22 09:04:05 2003 from 192.168.11.2
> valhalla:~#

The first thing I would do is log in to an account without any
startup-script commands, e.g. biff, setting permissions on the tty,
umask changes. If you still get the message, I would start a login
shell of your new, clean account under strace with the follow-forks
options (`strace -f -ff -o soandso`), which write each child's syscalls
to its own soandso.PID file. You can then grep those files to figure
out where that output is coming from.

I doubt that you have a kernel problem, but I suppose it is feasible. It
would be better to check other options first. Incidentally, I am running
2.4.20 on my home NFS server and have no similar problems. I have not
upgraded to 2.4.20 on any of my NFS clients yet.

--
Jeff S Wheeler





Re: Servers with X.

2003-08-18 Thread Jeff Waugh
On Tue, 2003-08-19 at 09:28, Rudi Starcevic wrote:
> Is it bad practise to use X on your Debian ISP/Hosting machines ?
> Here I have 4 boxes all without X. I've always been of the impression
> X on servers was not good.

It's not a terrible thing to do, unless you forget to correctly firewall
your machines. :-)

> I have one box, a database server - PostgreSQL, which has a cool TCL 
> monitoring app.
> I'm interested in using. This GUI app. monitors server load and running 
> queries etc.
> I'll need to install X in order to use it - which I'm not sure is such a 
> good idea.

You don't need to install an X server on the local machine to use it. If
you install the tcl app, and ssh to the box using X forwarding (-X), you
can display the program on your own local X server.

[ desktop ]   -->   [ firewall ]   -->   [ db-server ]
 X server            ssh     ssh          no X server

Fully encrypted, secure access to X software on your db-server, without
running (or even having) a full X server on the machine. :-)

- Jeff

-- 
 Systems Administrator
 Flow Communications
 p: +612 9263 5052
 f: +612 9263 5050





Re: sane trouble-ticket systems

2003-08-12 Thread Jeff Waugh
On Sat, 2003-08-09 at 09:27, Brad Lay wrote:
> Anybody know of a backport of request-tracker2 from testing/unstable? even
> rt3 would do, so long as it'll work in Woody.

It's a very simple, uncomplicated backport. You could do it very easily
yourself. rt3 is significantly more difficult, however.

- Jeff

-- 
 Systems Administrator
 Flow Communications
 p: +612 9263 5052
 f: +612 9263 5050





RWHOIS daemon options

2003-07-03 Thread Jeff S Wheeler
Dear debian-isp list,

I've just been asked to set up an rwhois server in order to satisfy ARIN
policy without SWIPing a large number of customer blocks via email. I
have downloaded the daemon available at http://www.rwhois.net, but it
leaves much to be desired. The example configurations are lacking, the
config file formats themselves aren't great, data is kept in text files
in a rather obtuse directory structure (by default), and I am wholly
unimpressed with the documentation. I'm a big IRC guy, and none of my
IRC netops pals seem to have much love, or success, with rwhoisd.

Does anyone else on the list run an RWHOIS server, and if so, which one?
An apt-cache search revealed little, as did a freshmeat.net query. If
others on the list are in the same boat I am, perhaps we could put our
heads together and come up with a free-as-in-debian alternative.

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>






Unidentified subject!

2003-06-26 Thread jeff



gre tunnel MTU adjustment

2003-05-15 Thread Jeff S Wheeler
Dear List,

I have a GRE tunnel set up between a debian linux/zebra router at my
co-lo and my home office.  This allows me to have a /27 without coughing
up $7/IP to the local cable monopoly.  There are no other broadband IP
options available.

My problem is I can't raise the MTU on the intermediate links over which
the tunneled packets must travel, thus the MTU of my GRE tunnel is less
than 1500.  Many popular Internet sites, including paypal, hotmail,
portions of Yahoo, and my beloved friendster, have utterly broken Path
MTU Discovery.  The problem is widespread, and I don't think these
sites are going to correct their problem or disable PMTUD on their
servers, load balancers, and whatnot.

Cisco routers have the ability to fragment and reassemble IP packets
traversing GRE tunnels in order to effectively increase the tunnel MTU. 
The command syntax is e.g. `ip mtu 1500` in interface configuration.

Is similar functionality available on linux?  If not, can someone with
iptables clue give me an example of how to disable the IP Don't-Fragment
bit on ip packets that are being routed to my tunnel, allowing them to
be fragmented even though the transmitting TCP stack has set DF?
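[An aside not from the original thread: rather than clearing DF, the
workaround most often suggested for this situation is to clamp the TCP
MSS on traffic entering the tunnel so oversized segments are never
generated in the first place. A sketch, assuming the tunnel interface
is named gre1 and has the typical GRE MTU of 1476 (MSS = MTU - 40):]

```shell
# Clamp the MSS on TCP SYNs forwarded out the tunnel so peers never
# send segments larger than the tunnel can carry.  Interface name and
# MSS value are assumptions, not from the original message.
iptables -t mangle -A FORWARD -o gre1 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --set-mss 1436
```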

Kind thanks,

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>


signature.asc
Description: This is a digitally signed message part


Re: [Help] Anybody has auth_ldap with ssl deb can share to me ??? Thanks.

2003-04-11 Thread Jeff Waugh


> ==
> In file included from auth_ldap.c:20:
> auth_ldap.h:33: ldap.h: No such file or directory
> auth_ldap.h:34: lber.h: No such file or directory
> auth_ldap.h:53: ldap_ssl.h: No such file or directory
> make[1]: *** [auth_ldap.o] Error 1
> make[1]: Leaving directory `/root/source/libapache-auth-ldap-1.6.0'
> make: *** [build-stamp] Error 2
> ==

Looks like you're missing the devel packages for openldap. Install those,
try again. Make sure you check the build-depends of the package you're
building.

- Jeff

-- 
linux.conf.au 2004: Adelaide, Australia http://lca2004.linux.org.au/
 
  "In addition to these ample facilities, there exists a powerful
   configuration tool called gcc." - Elliot Hughes, author of lwm




Unidentified subject!

2003-03-31 Thread jeff



Re: BGP memory/cpu req

2003-03-11 Thread Jeff S Wheeler
On Tue, 2003-03-11 at 05:58, Teun Vink wrote:
> Check out the Zebra mailinglist, it has been discussed there over and
> over. Basically, a full routing table would require 512Mb at least. CPU
> isn't that much of an issue, any 'normal' CPU (P3) would do...
512MB is more than enough for zebra.  I would be comfortable running
zebra on as little as 256MB of memory.  If you want to use the box for
other tasks like squid, etc. you might become constrained.  We use ours
for some RRD polling.

I have a box with two full sessions of about 120k prefixes from transit
providers, both sessions with soft-reconfig enabled.  CPU is never an
issue, as far more CPU time is spent handling ethernet card Rx
interrupts than BGP or OSPF updates.  The box forwards an average of
11kpps and 60Mbit/sec, peaks around 16kpps and 90Mbit/sec.

[EMAIL PROTECTED]:~# ps u `cat /var/run/zebra.pid` `cat /var/run/bgpd.pid` `cat /var/run/ospfd.pid`
USER   PID %CPU %MEM   VSZ   RSS TTY STAT START   TIME COMMAND
root   197  0.1  2.4 25916 24872 ?   S    2002 143:03 /usr/local/sbin/zebra -d -f/etc/zebra/zebra.conf
root   200  0.6  6.2 65756 64960 ?   S    2002 786:20 /usr/local/sbin/bgpd -d -f/etc/zebra/bgpd.conf
root   828  0.0  0.1  2292  1204 ?   S    2002  20:06 /usr/local/sbin/ospfd -d -f/etc/zebra/ospfd.conf
[EMAIL PROTECTED]:~# free
             total    used    free  shared  buffers   cached
Mem:       1033380  864148  169232       0    21348   622244
-/+ buffers/cache:  220556  812824
Swap:       497972       0  497972
[EMAIL PROTECTED]:~# 

-- 
Jeff S Wheeler <[EMAIL PROTECTED]>


signature.asc
Description: This is a digitally signed message part



Neighbour table overflow problem

2003-03-07 Thread Jeff S Wheeler
Dear list,

I have a linux 2.4 box running zebra and acting as a default gateway for
a number of machines.  I am concerned about "Neighbour table overflow"
output in my dmesg.  From some articles I've read on usenet, this is
related to the arp table becoming full.  Most of the posters solved
their problems by configuring a previously unused loopback interface, or
realizing that they had a /8 configured on one IP interface and a router
on their subnet that was using proxy-arp to fulfill the arp requests.

Neither of those is my situation, though.  I simply have a lot of hosts
on the segment.  When the network is busy I've seen as many as 230+ arp
entries, but it never seems to break 256.  Is this an artificial limit
on the number of entries that can be present in my arp table?  If so, I
would like to increase the limit to 2048 or so and give myself some
headroom.  I am concerned that might slow down packet forwarding, but I
can probably live with that.

Has anyone on the list encountered similar problems?  If so, is this the
approach you took to solve them or did you do something else?
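[An aside not in the original thread: on 2.4 kernels the ceiling
described above comes from the neighbour table's garbage-collection
thresholds, whose defaults (128/512/1024) explain a cap in the
mid-hundreds; gc_thresh3 is the hard limit at which "Neighbour table
overflow" is logged. A sketch, with the raised values chosen purely
for illustration:]

```shell
# Raise the neighbour (arp) table GC thresholds; defaults are
# 128 / 512 / 1024.  Values below are illustrative, not prescriptive.
sysctl -w net.ipv4.neigh.default.gc_thresh1=512
sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
sysctl -w net.ipv4.neigh.default.gc_thresh3=4096
```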

Thanks,

--
Jeff S Wheeler <[EMAIL PROTECTED]>

[EMAIL PROTECTED] uname -a
Linux mr0 2.4.20 #1 Mon Dec 16 14:13:15 CST 2002 i686 unknown
[EMAIL PROTECTED] arp -an |wc -l
239



signature.asc
Description: This is a digitally signed message part



Re: Determinig configure options in .debs

2003-01-14 Thread Jeff S Wheeler
On Tue, 2003-01-14 at 17:15, Jan V wrote:
> If you want to know the compile-options for eg cowsay: 'apt-get source
> cowsay' then go to the debian dir that has been created
  
 ________________________________________
/ I enjoyed your cowsay reference. It is \
\ very popular on EFnet.                 /
 ----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||







Re: DNS servers

2002-11-22 Thread Jeff S Wheeler
The draconian license you use to distribute tinydns and other software
is problematic for me.  I can accept different zone file syntax with
ease, and can even adapt myself to the notion that the filesystem is used
as a configuration database.  I can also understand that your resistance
to a license that would allow binary distribution, or distribution of
patched sources, is well-intentioned, but I cannot agree with it.

--
Jeff S Wheeler <[EMAIL PROTECTED]>









Re: how to upgrade dozens of debian servers

2002-11-14 Thread Jeff Waugh
> I have some debian servers and hav a pain when these is security
> upgrade  package available, for I have to check and upgrade them one by
> one, making  sure they are in safe status.
>
> I wonder how the administrator manage dozens or even hundreds of debian
>  servers in this case? Any tool or administration tips?

*nix tools save the day. I use a for loop and ssh in a bash script. "Low
tech" solutions are often highly efficient and flexible. :-)

- Jeff

-- 
  So, "Jeffrey" seems to mean "the ineffectual, victimised guy in
  American movies" in four different languages.








Re: New BIND 4 & 8 Vulnerabilities

2002-11-13 Thread Jeff S Wheeler
My BIND 8 zone files are working perfectly.  We do have TTL values on
every RR in every zone, though.  Perhaps that was your difficulty?  I
believe I made that change when we upgraded from 4.x to 8.x ages ago.

If there is no such script and you have difficulty with your zonefiles,
let me know the apparent differences and I'd be happy to whip up a Perl
script and post it to the debian-isp list.  We have hundreds of zones as
well, and if there had been a file format problem, I would have had to
do so in order to make the upgrade work.
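[A minimal sketch of such a fix-up, in shell rather than Perl, and
hypothetical rather than anything actually offered in this thread: it
prepends a $TTL directive to any zonefile that lacks one, which is the
main syntactic change BIND 9 enforces. The 86400 default and the
function name are assumptions.]

```shell
# Prepend "$TTL 86400" to each named zonefile that has no $TTL
# directive yet; files that already have one are left untouched.
fix_zone_ttl() {
    for zone in "$@"; do
        if ! grep -q '^\$TTL' "$zone"; then
            printf '$TTL 86400\n' | cat - "$zone" > "$zone.new" &&
                mv "$zone.new" "$zone"
        fi
    done
}
```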

--
Jeff S Wheeler <[EMAIL PROTECTED]>

On Tue, 2002-11-12 at 19:04, Craig Sanders wrote:
> On Tue, Nov 12, 2002 at 12:53:51PM -0600, Sonny Kupka wrote:
> > Why not use Bind 9.2.1..
> > 
> > It's in woody.. When I came over from Slackware to Debian I installed
> > it and haven't looked back..
> > 
> > The file format was the same from 8.3.* to 9.2.1 I didn't have to do
> > anything..
> 
> is this fully backwards-compatible?
> 
> last time i looked at bind9, the zonefile format had some slight
> incompatibilities - no problem if you only have a few zonefiles that
> need editing, but a major PITA if you have hundreds.
> 
> if there are zonefile incompatibilities, is there a script
> to assist in converting zonefiles?
> 
> craig
> 
> -- 
> craig sanders <[EMAIL PROTECTED]>
> 
> Fabricati Diem, PVNC.
>  -- motto of the Ankh-Morpork City Watch
> 



signature.asc
Description: This is a digitally signed message part



Re: New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
I've taken Sonny's suggestion and upgraded to the bind9 package. 
Initially I thought I had a serious problem, as named was not answering
any queries, however it seems to have "fixed itself".  Ordinarily that
would spook me, but in this situation I think I'd rather have spooky
software than known-to-be-exploitable software :-)

Thanks for the suggestion, Sonny.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Tue, 2002-11-12 at 13:53, Sonny Kupka wrote:
> Why not use Bind 9.2.1..
> 
> It's in woody.. When I came over from Slackware to Debian I installed it 
> and haven't looked back..
> 
> The file format was the same from 8.3.* to 9.2.1 I didn't have to do 
> anything..
> 
> ---
> Sonny
> 
> 
> At 01:08 PM 11/12/2002 -0500, Jeff S Wheeler wrote:
> >See ISC.ORG for information on new BIND vulnerabilities.  Current bind
> >package in woody is 8.3.3, which is an affected version.  Patches are
> >not available yet, it seems.
> >
> >http://www.isc.org/products/BIND/bind-security.html
> >
> >--
> >Jeff S Wheeler   [EMAIL PROTECTED]
> >Software DevelopmentFive Elements, Inc
> >http://www.five-elements.com/~jsw/
> 
> 



signature.asc
Description: This is a digitally signed message part



New BIND 4 & 8 Vulnerabilities

2002-11-12 Thread Jeff S Wheeler
See ISC.ORG for information on new BIND vulnerabilities.  Current bind
package in woody is 8.3.3, which is an affected version.  Patches are
not available yet, it seems.

http://www.isc.org/products/BIND/bind-security.html

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


signature.asc
Description: This is a digitally signed message part



Re: Fw: VIRUS IN YOUR MAIL (W32/BugBear.A (Clam))

2002-10-17 Thread Jeff S Wheeler
On Thu, 2002-10-17 at 04:51, Brian May wrote:
> AFAIK transparent proxying in Linux is limited to redirecting all ports
> to a given port another host. It is not possible for the proxy server to
> tell, for instance what the original destination IP address was.
Is this true, or will getsockname(), performed on a TCP socket that is
one endpoint of a transparently proxied connection, return the client's
intended destination address?  I do not know.

> A transparent HTTP proxy relies on the server name HTTP1.1 request
> field to determine what host the client really wanted to connect to.
> (this has been tested with Pacific's transparent proxy).
I do know that all HTTP/1.1 requests must contain a Host: header to be
valid.  Even if you knew the destination IP address, if you did not have
a Host: header you couldn't successfully complete an HTTP/1.1 request.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


signature.asc
Description: This is a digitally signed message part



[Fwd: VU#210321]

2002-09-10 Thread Jeff S Wheeler
Below is a message some CERT folk posted to NANOG-L this morning.  I
personally think it's a crock of shit, and that CERT is damaging their
credibility by advising based purely on rumor and speculation, however
perhaps someone on this list has additional information?

Facts and first-hand information only, please.

--
Jeff S Wheeler <[EMAIL PROTECTED]>


-Forwarded Message-

From: CERT(R) Coordination Center <[EMAIL PROTECTED]>
To: nanog@merit.edu
Cc: CERT(R) Coordination Center <[EMAIL PROTECTED]>
Subject: VU#210321
Date: 10 Sep 2002 10:16:14 -0400


-BEGIN PGP SIGNED MESSAGE-

Hello,

The CERT/CC has recently seen discussions in a public forum detailing
potential vulnerabilities in several TCP/IP implementations (Linux,
OpenBSD, and FreeBSD). We are particularly concerned about these types
of vulnerabilities because they have the potential to be exploited
even if the target machine has no open ports.

The messages can be found here:

http://lists.netsys.com/pipermail/full-disclosure/2002-September/001667.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001668.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001664.html
http://lists.netsys.com/pipermail/full-disclosure/2002-September/001643.html

Note that one individual claims two exploits exist in the
underground. At this point in time, we do not have any more
information, nor have we been able to confirm the existence of these
vulnerabilities.

We would appreciate any feedback or insight you may have. We will
continue to keep an eye out for further discussions regarding this
topic.

FYI,
Ian

Ian A. Finlay
CERT (R) Coordination Center
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA  USA  15213-3890
-BEGIN PGP SIGNATURE-
Version: PGPfreeware 5.0i for non-commercial use
Charset: noconv

iQCVAwUBPX3/VqCVPMXQI2HJAQFEqQQAr54e9c5SGgrIfmK5+EWqSOdvySKRtjwa
6dE4Z4DcoyHS57W5BEwW2OSXSGwrBL+mzippfTEnwAVT/otLYAADsnlPSQioRYNi
qHVh8yRXgh3kBgx3cMdhe3NC6zaSWffOsc/EvhkCDo2xa8FQItOqE5MjOeASjt1L
st5qq4mgM+E=
=kHt1
-END PGP SIGNATURE-




signature.asc
Description: This is a digitally signed message part


Re: creepy-crawlers from TW

2002-08-08 Thread Jeff S Wheeler
I suggest you email the abuse contact for seed.net.tw, which appears to
own that network block (139.175/16), and take it up with them.  I
assume you already tried to go through openfind.com.tw and did not get a
satisfactory response.

You could always use this approach as well  :-)  Or if you do not have
access to your routing tables, add host routes to loopback0 on your web
servers.  I do this for customers when they request IP blocks.
  ip route 139.175.250.23/32 null0

What I definitely recommend against is trying to use apache's access
controls to block based on IP.  It's not very smart, and will do a DNS
lookup on every request even if you are trying to block by IP.  If the
IP route null0 method ever fails me, I will patch apache to fix this.
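[An editorial aside: on a Linux router, the equivalent of the Cisco
null route shown above is an iproute2 blackhole route, which drops the
packets in the forwarding path instead of in Apache:]

```shell
# Drop all traffic to the offending host in the routing table,
# mirroring the Cisco "ip route ... null0" example above.
ip route add blackhole 139.175.250.23/32
```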

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Thu, 2002-08-08 at 14:38, Martin WHEELER wrote:
> Does *anyone* have a solution for keeping the site-sucking bots from
> openfind.tw.com out of my machine?
> 
> They don't obey any sort of international guidelines;, and tie my
> machine up for hours on end once they find a way of getting in and
> latching on.
> 
> I'm getting desperate.
> 
> Any help appreciated.
> 
> msw
> -- 
> 
> 
> -- 
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
> 



signature.asc
Description: This is a digitally signed message part


Re: Maildirs in Debian

2002-08-02 Thread Jeff Waugh


> Jeff,
>  please share the cons/pros with us

The following document provides a good analysis of why Maildir was more
appropriate to Courier IMAP's general audience and tasks (the SELECT.1
benchmarks are telling):

  http://www.courier-mta.org/mbox-vs-maildir/

To me, the differences can be summarised as a compromise between random
access, speed and memory. On my server (I use Postfix and Courier IMAP),
Maildir provides very fast random access to email, low memory usage, and no
locking/access issues.

On my desktop machine I use mbox because my usage patterns and requirements
lean towards the use of massive, searchable mail folders and little interest
in saving memory. Once the mailboxes are open, access is enormously fast. I
have no serious locking issues, because it's just me and procmail writing to
the mboxes.

I don't think either system is ultimately (or religiously) the best, because
they're appropriate for different uses. Our role as technology providers is
to analyse these choices, rather than defend them. :-)

- Jeff

-- 
 "Evil will always triumph over good, because good is dumb." - Dark 
 Helmet, Spaceballs 




Re: Maildirs in Debian

2002-08-01 Thread Jeff Waugh


> > There are plenty of reasons to not use Maildir, too.
> 
> Aren't they mostly to do with backwards compatibility? If everything in
> Debian could handle it, wouldn't this be a non-issue?

No. I use maildirs on my IMAP server and mboxes on my desktop because they
are appropriate to each. They operate very differently, and have pros/cons
for different uses.

- Jeff

-- 
  "Love never misses the chance to put the boot in." - Kelly, SLOU  




Re: Maildirs in Debian

2002-08-01 Thread Jeff Waugh


> Failing that, a migration to pure maildir would probably be good, provided
> the migration could be handled transperantly.

There are plenty of reasons to not use Maildir, too.

- Jeff

-- 
 "What's up with that word though... it's like something you did to 
  frogs in grammar school." - Ani DiFranco on bisexuality   




Re: Linux box

2002-07-31 Thread Jeff S Wheeler
If you want to deal with the hassle of DNS switch-over from Net1 to Net2
in the event of an outage, you can do that.  You can also easily set up
a linux box or one of your Cisco routers to have two default routes.  One
would be the flat-cost circuit, the other would be per-packet circuit.

The per-packet circuit would have a higher "metric" than the flat-cost. 
You can do this with the standard linux ip routing mechanisms.  I'm sure
you have noticed the Metric column in route(8) before and perhaps not
known what it meant.  This is its use.  See the route(8) man page for
help on how to install two routes for the same network with different
metrics and gateways.  Note that higher metric is _lower_ preference. 
So use metric 0 on your least expensive gateway, metric 1 on your
"backup" route, or whatnot.


If you are doing some sort of web hosting, or something where the
general Internet is accessing services at your site, you would be _far_
smarter to colocate one or more PCs with a colocation supplier, than to
try to do fail-over with DNS.  It's a bad solution, won't work all the
time, you'll have TTL issues, etc. etc. but it is possible.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/



signature.asc
Description: This is a digitally signed message part


Re: Linux box

2002-07-31 Thread Jeff S Wheeler
Riccardo,

You describe that you want all traffic originating from Net1 to traverse
Router1, and traffic originating from Net2 to traverse Router2, in order
to reach the Internet.  That is called policy-based routing, and you can
implement it with iproute2 on Linux.

You cannot both multihome using BGP, and policy-route in that manner. 
No currently deployed bgp-speakers can configure your routing table to
implement that policy.

In addition, although it seems like you have a firm understanding of
what you want to do on this level, your organization probably lacks the
necessary know-how to successfully deploy BGP, and your two ISPs may not
even be staffed or equipped to deliver BGP sessions to you.  If you want
to undertake it anyway, I strongly urge you to contract a consultant who
can help you and possibly your ISPs through the process.

I hope this helps.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/



signature.asc
Description: This is a digitally signed message part


Re: transfer rate

2002-07-04 Thread Jeff S Wheeler
Hi, I suggest you check the duplex mode on your ethernet interface and
your switch.  I had a problem similar to yours just a couple of months
ago, and tracked it down to the interface auto-negotiating into half
instead of full duplex.  On a busy ethernet interface that can cause
enough collisions to affect TCP throughput substantially, due to that
small amount of packet loss.

Unfortunately under Linux there is no "good way" to find out the link
speed and duplex condition portably among different ethernet adapters,
at least that I am aware of.  Here is what I do:

$ dmesg |egrep eth[0-2]
eth1: Intel Corp. 82557 [Ethernet Pro 100], 00:A0:C9:39:4C:2C, IRQ 19.
eth2: ns83820 v0.15: DP83820 v1.2: 00:40:f4:17:74:8a io=0xfebf9000
irq=16 f=h,sg
eth2: link now 1000 mbps, full duplex and up.

Also unfortunately, most ethernet drivers don't bother reporting this,
although you can hack it into your drivers if it is important to you.
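[A partial option, added as an aside and subject to exactly the
per-driver caveat above: on adapters whose drivers implement the MII
ioctls, mii-tool can both report and force link settings. The
interface name is an example.]

```shell
# Query the negotiated link state, then pin it to 100 Mbit full
# duplex.  Only works where the driver supports the MII ioctls.
mii-tool eth1
mii-tool -F 100baseTx-FD eth1
```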

But another good way to check is to examine your switch:

switch0#sh int fa0/2
  ...
  Auto-duplex (Full), Auto Speed (100), 100BaseTX/FX
  ...
This shows 100/full auto-negotiated.  Really, this is a bad thing to do,
as we should be setting all our ports to 100/full in the configuration,
but it probably won't be done until it bites us in the ass.  :-)

switch0#sh int fa0/24
  ...
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ...
This is a port which has been fixed to 100/full, because it *did* bite
us in the ass.  It's an uplink to a router and does several mbits/sec
24x7, and that packet loss affected all the TCP sessions going over it,
limiting them to around 400Kbits/sec throughput due to TCP backoff :-(


I hope this is helpful.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


On Wed, 2002-07-03 at 22:20, Rajeev Sharma wrote:
> hi all
> 
>   I am stuck with  a problem of my network..
> 
>   my one debian box is very unstable ..some time it transfer data
>   smoothly(997.0 kB/s)and sometime it hanged up at 123.0 kB/s..
>   and when i use
> 
>   ping -f 192.168.x.x
> 
>  it shows 1% data loss and some time 0% data loss..
> 
> 
>  i have checked my cables,switch ...but no use 
> 
>  pliz help me
> 
**snip**


pgpYUOTYpLYRj.pgp
Description: PGP signature


Re: transfer rate

2002-07-04 Thread Jeff S Wheeler

Hi, I suggest you check the duplex mode on your ethernet interface and
your switch.  I had a problem similar to yours just a couple of months
ago, and tracked it down to the interface auto-negotiating into half
instead of full duplex.  On a busy ethernet interface that can cause
enough collisions to affect TCP throughput substantially, due to that
small amount of packet loss.

Unfortunately under Linux there is no "good way" to find out the link
speed and duplex condition portably among different ethernet adapters,
at least that I am aware of.  Here is what I do:

$ dmesg |egrep eth[0-2]
eth1: Intel Corp. 82557 [Ethernet Pro 100], 00:A0:C9:39:4C:2C, IRQ 19.
eth2: ns83820 v0.15: DP83820 v1.2: 00:40:f4:17:74:8a io=0xfebf9000
irq=16 f=h,sg
eth2: link now 1000 mbps, full duplex and up.

Also unfortunately, most ethernet drivers don't bother reporting this,
although you can hack it into your drivers if it is important to you.

But another good way to check is to examine your switch:

switch0#sh int fa0/2
  ...
  Auto-duplex (Full), Auto Speed (100), 100BaseTX/FX
  ...
This shows 100/full auto-negotiated.  Really, this is a bad thing to do,
as we should be setting all our ports to 100/full in the configuration,
but it probably won't be done until it bites us in the ass.  :-)

switch0#sh int fa0/24
  ...
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ...
This is a port which has been fixed to 100/full, because it *did* bite
us in the ass.  It's an uplink to a router and does several mbits/sec
24x7, and that packet loss affected all the TCP sessions going over it,
limiting them to around 400Kbits/sec throughput due to TCP backoff :-(


I hope this is helpful.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/


On Wed, 2002-07-03 at 22:20, Rajeev Sharma wrote:
> hi all
> 
>   I am stuck with  a problem of my network..
> 
>   my one debian box is very unstable ..some time it transfer data
>   smoothly(997.0 kB/s)and sometime it hanged up at 123.0 kB/s..
>   and when i use
> 
>   ping -f 192.168.x.x
> 
>  it shows 1% data loss and some time 0% data loss..
> 
> 
>  i have checked my cables,switch ...but no use 
> 
>  pliz help me
> 
**snip**





Re: increase mysql max connections over 1024

2002-06-16 Thread Jeff S Wheeler
On Sun, 2002-06-16 at 04:24, Osamu Aoki wrote:
  *snip*
> If what you say is true, I can tell you that ANY program which is
> involved with mysql and which used local_lim.h needs to be recompiled.
> What I do not know is whether this involves glibc (libc6) or not.

Why would this be the case?  I might be missing something, but I believe
the poster is just discussing making a change to the mysql-server, NOT
the libmysqlclient library.

Any library dependencies of the mysqld server (ldd bin/mysqld ?) would
need to be rebuilt, probably including libc, but you could always keep
private copies of them and use LD_LIBRARY_PATH to avoid changing the
system-wide libc, and thus necessitating a rebuild of other sources
which depend on that limit being consistent between themselves and their
dependencies.

Am I off my rocker?  I know it's not a real clean solution, keeping a
separate copy of libc, but it seems workable.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]




Re: how to design mysql clusters with 30,000 clients?

2002-05-27 Thread Jeff S Wheeler
Everything I've heard about experiences with mysql on NFS has been
negative.  If you do want to try it, though, keep in mind that
100Mbit/sec ethernet is going to give you at most 12.5MByte/sec of I/O
performance, and less in practice.  GIGE cards are cheap these days, as
are switches
with a few GIGE ports.  1000baseT works, take advantage of it.

I hope you'll think about a solution other than mysql for this problem,
though.  It's not the right tool for session management on such a scale.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Mon, 2002-05-27 at 07:54, Patrick Hsieh wrote:
> Hello Nicolas Bougues <[EMAIL PROTECTED]>,
> 
> I'd like to discuss the NFS server in this network scenario.
> Say, if I put a linux-based NFS server as the central storage device and
> make all web servers as well as the single mysql write server attached
> over the 100Base ethernet. When encountering 30,000 concurrent clients, 
> will the NFS server be the bottleneck? 
> 
> I am thinking about to put a NetApp filer as the NFS server or build a
> linux-based one myself. Can anyone give me some advice?
> 
> If I put the raw data of MySQL write server in the NetApp filer, if the
> database crashes, I can hopefully recover the latest snapshot backup
> from the NetApp filer in a very short time. However, if I put on the
> local disk array(raid 5) or linux-based NFS server with raid 5 disk
> array attached, I wonder whether it will be my bottleneck or not.
> 
> How does mysql support the NFS server? Is it wise to put mysql raw data
> in the NFS?
> 
> 
> -- 
> Patrick Hsieh <[EMAIL PROTECTED]>
> GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg
> 
> 
> -- 
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
> 








Re: how to design mysql clusters with 30,000 clients?

2002-05-24 Thread Jeff S Wheeler
I don't know if anyone else who followed-up on this thread has ever
implemented a high traffic web site of this calibre, but the original
poster is really just trying to band-aid a poor session management
mechanism into working for traffic levels it wasn't really intended for.

While he may still need a large amount of DB muscle for other things,
using PHP/MySQL sessions for a site that really expects to have 30,000
different HTTP clients at peak instants is not very bright.  We have
cookies for this.  Server-side sessions are a great fallback for
paranoid end-users who disable cookies in their browser, but it is my
understanding that PHP relies on a cookie-based session ID anyway?

I tried to follow up with the original poster directly but I can't
deliver mail to his MX for some reason.  *shrug*

Look into signed cookies for authen/authz/session, using a shared secret
known by all your web servers.  This is not a new concept, nor a
difficult one.  It can even be implemented using PHP, though a C apache
module is smarter.
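
For anyone unfamiliar with the idea: the cookie carries its payload in
the clear plus an HMAC of that payload keyed with the shared secret, so
any web server holding the secret can verify it without touching a
database.  A minimal sketch (the payload format and names here are
invented for illustration, not any particular product's scheme):

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret-known-to-all-web-servers"  # hypothetical

def make_cookie(user_id):
    payload = "%s:%d" % (user_id, int(time.time()))
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + ":" + sig

def verify_cookie(cookie, max_age=3600):
    """Return the user id for a valid, unexpired cookie, else None."""
    try:
        user_id, issued, sig = cookie.rsplit(":", 2)
    except ValueError:
        return None
    payload = "%s:%s" % (user_id, issued)
    want = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, want):
        return None          # tampered with, or signed by someone else
    if time.time() - int(issued) > max_age:
        return None          # expired
    return user_id
```

Verification is a few lines of HMAC work per request, which is why it
beats a database hit so badly under load.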

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/








Re: postfix and relayhost question

2002-05-06 Thread Jeff S Wheeler

On Mon, 2002-05-06 at 11:43, Patrick Hsieh wrote:
> I configure my relayhost in main.cf of postfix.
> When I send a mail with multiple recipients in an envelope, how to make
> it just one single smtp connection from the postfix server to the
> relayhost? I tried to use netstat -nt to view the smtp connection and
> found postfix use 5 smtp connections to my relayhost. It seems to divide
> the envelope into multiple single recipient and send them individually.

Probably, this is a product of *_destination_recipient_limit being set
too low.  I guess if you change default_ or smtp_ to something higher
than your current value, say 500, it will reduce the number of
connections used.
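
For reference, the change amounts to one line in main.cf (500 here is
an arbitrary example value, not a recommendation; check what your
relayhost will actually accept per transaction):

```
# main.cf -- allow up to 500 recipients per SMTP delivery
smtp_destination_recipient_limit = 500
```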

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: network cabling management

2002-04-18 Thread Jeff S Wheeler

I asked a BellSouth central office supervisor this question once.  They
didn't have any organized method of tracking much of their cable plant. 
Surprising in one respect, because you'd think some huge software
company would be very motivated to write and sell such software to Bell.

But on the other hand, it's not surprising that they weren't organized
enough to realize they spend a lot of time figuring out where things go.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/

On Wed, 2002-04-17 at 11:01, Emile van Bergen wrote:
> On Wed, 17 Apr 2002, Tommy van Leeuwen wrote:
> 
> > What kind of tools are you using for network cabling and patches
> > management? We've tried txtfiles, acessdatabases and such but we're
> > looking at an easier to manage tool..
> 
> Perhaps the middle ground between those two: a spreadsheet?
> 
> Cheers,
> 
> 
> Emile.
> 
> --
> E-Advies / Emile van Bergen   |   [EMAIL PROTECTED]
> tel. +31 (0)70 3906153|   http://www.e-advies.info
> 
> 
> -- 
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
> 







Re: Email header parser?

2002-04-13 Thread Jeff Waugh



> Do you know of any better shell tools for extracting from, cc, subject etc. 
> from the headers than procmail/formail?

How about Python and its RFC822 modules?
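
For example, a sketch (using the modern email package, the descendant
of the old rfc822 module, with a made-up message):

```python
import email
from email import policy

raw = """\
From: Alice <alice@example.com>
To: Bob <bob@example.com>
Subject: header parsing demo

body text
"""

# Parse the headers and pull out the fields by name.
msg = email.message_from_string(raw, policy=policy.default)
print(msg["From"])     # Alice <alice@example.com>
print(msg["Subject"])  # header parsing demo
```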

- Jeff

-- 
  "But in the software world, that's daily business." - Kent Beck   
 "That's pissing money away and leaving scar tissue." - Alan Cooper 






Re: apache BASIC authentication w/large userbase

2002-04-04 Thread Jeff S Wheeler

On Thu, 2002-04-04 at 03:06, Stephane Bortzmeyer wrote:
> On Wed, Apr 03, 2002 at 06:35:22PM -0500,
>  Jeff S Wheeler <[EMAIL PROTECTED]> wrote 
>  a message of 39 lines which said:
> 
> > would not go for that because apparently a disproportionate number of
> > their end-users disable cookies in their web browser.  Stupid media
> > privacy paranoia.
> 
> You are wrong.
>  

Well, we deal with a lot of adult webmasters, including a few large
ones.  I don't do a lot of CGI-ish stuff, or session tracking for those
sites, however our in-house guy who does do that work claims nearly 30%
of the visitors to one high-profile site we work on have a browser with
cookies disabled.  I haven't generated the data myself, so I don't know
if I believe the 30% figure, but I believe "disproportionate" is pretty
safe given the users.

It's probably a stretch for you to state that I am wrong given who their
userbase is, however if you have information on similar sites to back up
your statement I certainly will be interested.  I'll see if we can track
that precisely on some of our customer sites.

> So you reinvented LDAP :-)

LDAP didn't occur to me at all, I'm glad you suggested it.  We have no
LDAP resources or experience in-house, but honestly would like to move
to it for a more sane a/a system for our unix, ftp, and mail accounts as
well.  There seems to be a real lack of a good, thorough HOWTO though. 
Have I not looked in the right place?

Is LDAP really the best tool here?  Keep in mind hundreds of authen
requests per second, although I don't doubt that large shops with a lot
of users probably have that kind of volume in regular unixy stuff.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







apache BASIC authentication w/large userbase

2002-04-03 Thread Jeff S Wheeler

I have a customer who requires BASIC authentication for their site. 
They have a fair amount of traffic as well as a very quickly growing
userbase.  They were on mod_auth_mysql before, but with hundreds of
apache children that is not very practical.

I suggested a change to a signed-session-cookie type system, but they
would not go for that because apparently a disproportionate number of
their end-users disable cookies in their web browser.  Stupid media
privacy paranoia.

The userbase is presently around 100K and growing 5K/day or so.  They
were having things go so slowly that users could not login.  In the
short term we replaced mod_auth_mysql with an apache module I whipped up
to send requests out via UDP to a specified host/port, and wait for a
reply (with a 3 second timeout).  Then I hacked out a quick Perl program
to handle those requests, hit mysql for actual user/password info, and
to cache the user information in ram for the duration of the daemon's
lifetime.
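
The daemon side of such a scheme boils down to very little code.  Here
is a Python sketch with an invented wire format ("user password" in,
OK/NO out) rather than the real one, and a stub where the real daemon
hits mysql:

```python
import socket

cache = {}          # user -> password, filled on demand, kept in ram
                    # for the lifetime of the daemon

def lookup(user):
    """Cache-miss path; the real daemon queries mysql here."""
    return None     # stand-in: treat every unknown user as a miss

def handle(data):
    """Check one request datagram against the cache."""
    try:
        user, password = data.decode().split(" ", 1)
    except ValueError:
        return b"NO"
    if user not in cache:
        pw = lookup(user)
        if pw is None:
            return b"NO"
        cache[user] = pw
    return b"OK" if cache[user] == password else b"NO"

def serve(port=9999):                       # hypothetical port
    """Answer UDP auth requests forever."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    while True:
        data, addr = s.recvfrom(512)
        s.sendto(handle(data), addr)
```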

Obviously this won't work forever without a serious change to my caching
strategy, but before I put more work into this mechanism, what do other
folks on the list do for high-traffic, large-userbase BASIC authen?  I
know it's a poor limitation but *shrug* the customer knows their needs.

I figured DBM would be sluggish, and the customer already tried text
files, but moved to mod_auth_mysql when that ran out of steam.

Your Input Is Appreciated.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: postfix problem

2002-03-24 Thread Jeff Waugh



> Mar 24 22:29:08 lyta postfix/master[21216]: warning: process /usr/lib/postfix/cleanup pid 21253 killed by signal 6
> Mar 24 22:29:08 lyta postfix/master[21216]: warning: /usr/lib/postfix/cleanup: bad command startup -- throttling
> 
> Any suggestions?

Sounds like what happens if master.cf isn't upgraded properly when updating
to newer postfixes; I had this happen with the Debian packages too. Check
the postinst file, or the postfix lists.

- Jeff

-- 
"Think video. Think text flickering over your walls. Think games at 
work. Think anything where a staid, link-based browser is useless." 
  "This person wrote for Ab Fab, right?" - Rich Welykochy   






Re: RAID starter

2002-03-20 Thread Jeff Waugh



> Russel, would you recommend software RAID with a production system?  Have
> you tried it?  Curious.

I would, and have.

- Jeff

-- 
 He's not an idiot. 
The doctor said so. 






Re: RAID 0 risky ?

2002-03-20 Thread Jeff S Wheeler

On Wed, 2002-03-20 at 01:03, Dave Watkins wrote:
> replicate the data somehow. RAID 5 obviously does the least replication 
> while still keeping fault tolerance, although it does cost a small amount 
> of computing power (not a problem if you have a RAID card)

Some RAID cards are substantially slower at RAID 5 than 0/1/0+1.  The
3ware ATA RAID boards are excellent, for example, except at RAID 5. 
They now produce a couple of boards with more CPU power or different
ASICs, or whatever, to make up for this shortfall.  But they are more
expensive.

IIRC the 7x10 series is quite slow at RAID 5, but the 7x50 series
improved upon this greatly.  My source is www.storagereview.com,
though; I do not use any of their newer RAID 5 boards myself.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: Debian testing suitable for productive?

2002-03-12 Thread Jeff S Wheeler

I've got two different groups of production machines.  One is customer
web servers, which runs on stable.  It's antiquated and we'll probably
just move them up to unstable soon.  The other group of machines are my
SQL database and billing systems, which is already on unstable.

I think the suggestion to stay 2 - 3 days behind is good.  What I do now
is just upgrade a non-critical box, and assuming everything works okay,
I upgrade the others.  Is there an easy way to just keep the packages a
few days behind with apt?

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: Antiviral checking for small server using postfx

2002-03-09 Thread Jeff Waugh



>  I'd like to do antiviral filtering but budget is low.  Any 
> recommendations?

postfix + amavis + nod32 (www.nod32.com). Happens to be the best, too.

- Jeff

-- 
  There's no horse higher, no mailing list taunt lower, no developer base   
  wider. Rock My Software in the Bosom of Debian.   






Re: redundant office of redundancy

2002-03-06 Thread Jeff S Wheeler

Hi, yes, debian-isp gets posts like this with some regularity.  I firmly
believe that no one is ever happy with the half-assed solutions they
come up with, and it's certainly not something you should have hosting
customers rely upon.  Mail isn't so difficult, but web traffic is a
different animal.

Basically you need a dynamic dns service for all your web sites, and you
also need to just switch to your "backup" IP address if your primary one
fails.  You'll have to worry about TTLs and such.  Or, have one DNS box
on each IP link, have each one have their own zone files, and use both
in your root server entries for all your domains.  That sounds sane,
because if a DNS query can't get to the server on DSL, the client will
query the one on cable, which will respond with a working IP.  Again you
need a really small TTL to make this function remotely.
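
In zone-file terms that looks roughly like this (names and addresses
invented; the short TTL on the address record is what makes a DNS-based
switch feasible at all):

```
; example.com zone fragment -- one nameserver on each uplink
example.com.   86400  IN  NS  ns1.example.com.   ; on the DSL link
example.com.   86400  IN  NS  ns2.example.com.   ; on the cable link
www            60     IN  A   203.0.113.10       ; 60s TTL, repoint on failure
```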

Shops that want to do this "the right way" buy a couple of circuits from
service providers offering BGP, apply with ARIN for an ASN, and announce
their own space.  Sounds like that is well beyond your financial ability
since you are on DSL and looking at $50/mo for cable, though these days
it is probably within the technical grasp of many folks.


If anyone on this list has done this and is satisfied with their
solution I would like to hear about your experiences, however, I think
you will find that the general opinion is you need to colocate with some
bigger shop, or be satisfied with what you have, or resell.

On Wed, 2002-03-06 at 16:40, David Bishop wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Howdy!  As you can tell from my subject line, I am interested today in making 
> sure that I can always surf por^W^Wserve webpages.  My business (consulting & 
> small-time webhosting) is dependent on my always having an internet 
> connection.  Currently, I have a fairly stable dsl line that serves my needs, 
> but some stupid redback issues on my isp's side have made me wary.  I figure 
> the chances of both a dsl line and cable going out at the same time are 
> fairly small, and throwing $50 a month at the problem is acceptable.  Now, as 
> I'm planning on doing this, it begs the question(s): 1) how to aggregate the 
> bandwidth of both pipes into one, transparently (I will be using two 
> computers as well, might as well do it right); 2) how do you go about setting 
> up "failover", such that if one of the machines drops out, the other takes 
> over dns/mail/web?
> 
> I know some of you out there are about to exclaim "Get your isp to do this, 
> idiot!"  Well, I'm large enough to seriously look at this, but small enough 
> (and geeky enough) that I'd really like to take care of it myself.  I have a 
> decent background in setting up linux as a firewall/proxy/nat box, and a 
> basic understanding of "real" routing.  Pointers, hints, tips, all are 
> welcomed gratefully.   
> 
> To sum up: currently, my setup is 2 machines hot to the 'net, the rest nat'ed 
> off, all using 1 dsl and a block of ips, all nat routing through 1 of the 
> machines.  I would like to end up with dual-connections, bandwidth aggregated 
> through both the machines, and failover for high-priority services.  
> 
> Thanks!
> 
> - -- 
> D.A.Bishop
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.0.6 (GNU/Linux)
> Comment: For info see http://www.gnupg.org
> 
> iD8DBQE8hozkEHLN/FXAbC0RAqILAJ4+m/vgTuCluGdDjP+zj9U24QxBQgCfdNTg
> 4wcJpD5lrFxyV6B6kTfywh8=
> =T1Ff
> -END PGP SIGNATURE-
> 
> 
> -- 
> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
> with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
> 
-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Inexpensive gigabit copper NICs?

2002-03-05 Thread Jeff S Wheeler

Can anyone recommend some inexpensive GIGE NICs that use CAT 5 instead
of fibre pairs?  I just want to run some back-to-back from a busy NFS
server to a couple of its clients for now.  I have not even looked into
GIGE copper switches, but I imagine the ROI would not be very high for
my shop just yet :-)

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







Re: BGP4/OSPF routing daemon for Linux?

2002-03-01 Thread Jeff S Wheeler
IOS doesn't have protected memory, is that not correct?  It's like old
multitasking systems where you didn't have virtual memory. :/

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/







true x86 PCI bus speeds/specs

2002-02-23 Thread Jeff S Wheeler
Often, folks post on topics such as maximum network performance or disk
performance they should expect to see from their x86-based server,
firewall, etc.  And almost as often, some uninformed person posts a
reply that says something to the effect of, "Your PCI bus is only 66MHz,
which limits you to 66Mbit/sec", or something similar.  This is wrong.

The most common PCI bus is 32 bits wide, and operates at 33MHz.  Its
maximum throughput is thus 32*33/8 million bytes/second.  That's about
132MBytes/sec.  Some PCI buses are 64 bits wide at 33MHz, such as on
several popular Tyan Thunder models.  Those have a maximum throughput of
264MBytes/sec.  Other boards are 64 bits wide at 66MHz, which is limited
to 528MBytes/sec.  And numerous motherboard implementations have more
than one PCI bus, so you could but high-bandwidth perhipherals on each
of the two buses, and not substantially impact performance or cause them
to compete for resources.
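
The arithmetic, spelled out:

```python
def pci_mbytes_per_sec(width_bits, clock_mhz):
    """Peak PCI throughput: width (bits) * clock (MHz) / 8 bits-per-byte."""
    return width_bits * clock_mhz / 8

print(pci_mbytes_per_sec(32, 33))   # 132.0 MBytes/sec -- common desktop PCI
print(pci_mbytes_per_sec(64, 33))   # 264.0
print(pci_mbytes_per_sec(64, 66))   # 528.0
print(100 / 8)                      # 12.5  -- a saturated 100Mbit NIC
```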

Now, all card/driver combinations have some overhead associated with
them.  The bus isn't 100% efficient, but on many "consumer-grade"
mainboards the 32 bit / 33MHz bus will push 110MBytes/sec or more in
real-world use.  If you don't believe me, check the 3ware RAID card
reviews on storagereview.com (assuming SR is still up).

This means a 100Mbit/sec network throughput, which is 12.5MBytes/sec,
will easily fit within the maximum throughput of the PCI bus.  The real
issue is kernel efficiency.  Zero-copy TCP and things like that are
going to improve linux network performance by leaps and bounds.  Going
from a 132MByte/sec bus to a 528MByte/sec bus will disappoint you :-)


This is a popular form of confusion.  Mr Billson is not the first person
to give someone a misleading answer in this respect, nor will he be the
last.  I do not intend to put him down by correcting his answer, but I
hope my post serves to better inform the readership of this list.

On Sat, 2002-02-23 at 09:10, Peter Billson wrote:
>   There was some discussion last January (2001) about this type of
> thing. The problem you will run into if you are using POTS Intel
> hardware is the PCI bus speed, so you are going to have a tough time
> filling one 100Mbs connection with an old Pentium - assuming an old
> 66Mhz PCI bus. You can forget about filling two or more. Also, cheap
> NICs will do more to kill your max. throughput.
>   That being said, I run old Pentium 133s with 64Mb RAM in several
> applications as routers and can notice no network latency on a 100BaseT
> network, but I have never benchmarked the machines. Usually the
> bottlenecks are elsewhere - i.e. server hard drive throughput. Packet
> routing, filtering, masquerading really doesn't require much CPU
> horsepower.
-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/






opinions on swap size and usage?

2002-02-12 Thread Jeff S Wheeler
For years I've been configuring my machines with "small" swap spaces, no
larger than 128MB in most cases, even though most of my systems have
512MB - 1GB of memory.  My desktop computer has zero swap, although I
have more ram than even X + gnome + mozilla + xemacs can use. :-)

I do this because I think if they need to swap that much, there is
probably Something Wrong, and all that disk access is just going to make
the machine unusable.  May as well let it grind to a halt quickly
rather than drag it out, I always said.

Alexis Bory's post earlier today made me think about swap a bit more
than I usually do.  What do other folks on this list do?  Zero swap?  As
much swap as physical memory?  More?  Why?  Can you change the swapper's
priority, and does this help when your machine starts swapping heavily?

Thanks for the opinions.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software DevelopmentFive Elements, Inc
http://www.five-elements.com/~jsw/






unstable is "unstable"; stable is "outdated"

2002-02-01 Thread Jeff S Wheeler
On Fri, 2002-02-01 at 01:42, Jason Lim wrote:
> We have production boxes running unstable with no problem. Needed to run
> unstable because only unstable had some new software, unavailable in
> stable. Its a pity stable gets so outdated all the time as compared to
> other distros like Redhat and Caldera (stable still on 2.2 kernel), but
> thats a topic for a separate discussion.

This is really a shame.  It's my biggest complaint with Debian by far. 
The tools work very well, but the release cycle is such that you can't
use a "stable" revision of the distribution and have modern packages
available.

I can't imagine this issue is being ignored; is it discussed on a
policy list, perhaps?  It seems like FreeBSD's -RELEASE, -STABLE,
-CURRENT scheme works much better than what Debian has.  I've never seen
big political arguments on this mailing list, but I always hear that
Debian as an organization is often too burdened with internal bickering
and politics to move forward with big changes.  Is that the case here?

Just curious, not trying to start a flame war.

-- 
Jeff S Wheeler   [EMAIL PROTECTED]
Software Development    Five Elements, Inc
http://www.five-elements.com/~jsw/









Iptables and PPTP

2002-01-28 Thread Bender, Jeff

Anyone here have any luck with PPTP through NAT with IPtables?  I have
recompiled my kernel with PPTP VPN MASQ support and loaded the module.  I
have even verified that the module is loaded with lsmod.  It tells me that
it is unused.  I can't seem to authenticate with PPTP to my work's VPN.  I
use Windows XP VPN client, with no luck.

Next step is to install Freeswan and setup the client connection at the
Linux box.  Maybe I will have better luck doing it this way.






Re: scp, no ssh

2002-01-09 Thread Jeff Norman
On Wed, 2002-01-09 at 21:23, Joel Michael wrote:
> On Thu, 2002-01-10 at 12:19, Tim Quinlan wrote:
> > how about setting the user's shell to /bin/true.  this allows ftp, but no 
> > login shell.  so it may work for scp as well.
> > 
> This is true, but you can still (probably) use ssh to execute commands,
> like /bin/sh, and effectively get a shell.

The above assumption is wrong: ssh executes commands by passing them as
arguments to the user's shell.
For example, if the user executes the ls command by connecting with

$ ssh [EMAIL PROTECTED] ls /home/bob/

and the user's shell is set to /bin/bash, then the shell executed on
remotehost.com will be equivalent to executing

/bin/bash -c "ls /home/bob/"

instead of the user's normal login shell.

So, if you set the user's shell to /bin/true, all that will happen when
your friendly hacker tries to connect with

$ ssh [EMAIL PROTECTED] /bin/bash

is that instead of running a shell, the benign command

/bin/true -c "/bin/bash"

will run.

After the "shell" that ssh tries to execute exits, ssh immediately
closes the connection.  Since /bin/true exits right away, the connection
is closed at once, and the remote user gets no chance to do anything
further.
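
That behaviour is easy to check locally; /bin/true ignores its
arguments and exits 0, so the "command" never runs:

```shell
# /bin/true ignores all of its arguments and exits 0 immediately,
# so the requested command is never executed.
/bin/true -c "/bin/bash"
echo $?   # prints 0, and no shell was started
```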



This is all fine and dandy, but it doesn't answer the initial question
of how to make scp work, but not allow shell logins; here goes on that
one:

Scp, in simplified terms, is just a wrapper for ssh.  If I run

$ scp [EMAIL PROTECTED]:/home/bob/file.txt /home/alice/file.txt

scp will make the connection to the remote host with ssh, and request
that another copy of scp be run on the remote host. The details aren't
important, but it runs something like the command

ssh [EMAIL PROTECTED] scp -f /home/bob/file.txt

which in turn connects to remotehost.com and passes the specified
command to bob's shell as

/bin/bash -c "scp -f /home/bob/file.txt"

Now, the trick is to replace bob's shell with a (perl?) script that
takes the -c argument passed and checks whether scp is the intended command.
If scp *isn't* the intended command, it merely exits, thus closing the
remote connection and effectively denying access to other commands.
If scp *is* what was requested, the script could just exec scp with the
requested options in place of itself and everything should continue as
normal. If you wanted to, you could even get really fancy and have the
script deny access to certain directories or types of files. 

Of course, I don't imagine that the ssh/scp combo was intended to be
used like this, so one should be careful while implementing, but other
than that, the only downside I can think of is that the user on the
remote system becomes useless for any purpose other than scp-ing.


Hope that makes sense.
Later,

Jeff












Re: user traffic accounting

2002-01-08 Thread Jeff Waugh


> anyway, this is wicked, and i immediately want to give a virtual machine
> to every single one of my users.

Nice idea, but it's not going to work. Perhaps with some real love and
affection from someone who purely wanted to achieve this (and wasn't
primarily interested in using it as a debugging tool), it may happen, but in
its current state, UML is not appropriate for this.

- Jeff

-- 
"I'm taking no part in your merry 5-way clusterfuck - sort that mess
 out between yourselves." - Alexander Viro  








Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-07 Thread Jeff Waugh


> > 3) Add this to authorized_keys for the above account, specifying the
> > command that logins with this key are allowed to run. See command="" in
> > sshd(1).
> 
> I can't find the document about this section, can you show me
> some reference or examples? Many thanks.

man sshd, down the bottom.

- Jeff

-- 
   No clue is good clue.








Re: long email names

2002-01-02 Thread Jeff Waugh


> I have a customer who wants to host his own email server, and he wants 
> to have long email addresses, like .@domain.com , 
> and map it to a local name that is less than 8 chars.

This is a sensible request...

> What is the best email server to do this kind of mapping?

But this is just emotional blackmail! ;)

Postfix has a very handy canonical_maps (also canonical_sender and
canonical_recipient maps) setting. It means that you can make the switcheroo
'at the border', both ways. So everyone sees 'jeff.waugh @ perkypants.org'
on the outside when you send, and it gets changed back to 'jdub @
perkypants.org' when mail comes in.

Just about every MTA will do similar, or a fairly close approximation,
though. (I'm just familiar and happy with postfix.)
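
For the postfix case, the pieces involved look roughly like this (a
config sketch, not tested; file paths are the usual defaults and the map
contents are examples):

```
# /etc/postfix/main.cf -- rewrite at the border, one map per direction
sender_canonical_maps    = hash:/etc/postfix/sender_canonical
recipient_canonical_maps = hash:/etc/postfix/recipient_canonical

# /etc/postfix/sender_canonical -- outgoing, short -> long
jdub@perkypants.org         jeff.waugh@perkypants.org

# /etc/postfix/recipient_canonical -- incoming, long -> short
jeff.waugh@perkypants.org   jdub@perkypants.org

# rebuild after editing, e.g.:
#   postmap /etc/postfix/sender_canonical && postfix reload
```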

- Jeff

-- 
  I wonder how many bugs have gone unfixed due to misspellings of "FIXME".  








Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Jeff Waugh


> OK. My problem is, if I use rsync+ssh with blank passphrase among servers
> to automate rsync+ssh backup procedure without password prompt, then the
> cracker will not need to send any password as well as passphrase when ssh
> login onto another server, right?

No, password and rsa/dsa authentication are different authentication
mechanisms.

> Is there a good way to automate rsync+ssh procedure without
> password/passphrase prompt, while password/passphrase is still required
> when someone attempts to ssh login?

1) Use a minimally-privileged account for the rsync process, disable the
password on this account, so it cannot be used to login.

2) Generate a passphrase-less ssh key with ssh-keygen.

3) Add this to authorized_keys for the above account, specifying the
command that logins with this key are allowed to run. See command="" in
sshd(8).

Thus, no one can actually log in with the account normally, you can only
connect with the rsa/dsa key, and you can only run a particular process.
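
Concretely, steps 2 and 3 might look like this (the key file name,
account names, and the rsync server command line are all examples; the
exact --server arguments depend on how the client invokes rsync):

```shell
# 2) generate a passphrase-less key pair (run as the backup account;
#    the file name is an example):
ssh-keygen -t rsa -N "" -f backup_key

# 3) one line in the remote account's ~/.ssh/authorized_keys, pinning
#    this key to a single command (illustrative only):
#
#   command="rsync --server --sender -logDtpr . /srv/backup/",no-pty,no-port-forwarding ssh-rsa AAAA... backup@client
```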

ssh-agent doesn't really help you in this instance, it's generally used to
provide single passphrase authentication for a user's session. (I use it to
log in to the ~30-40 machines I have my public key on, without typing
passwords every five minutes.)

- Jeff

-- 
 "jwz? no way man, he's my idle" - James Wilkinson  








Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Jeff Waugh


> I am sorry I could be kind of off-topic. But I want to know how to
> cross-site rsync without authentication, say ssh auth.,?

That's the best way.

> I've read some doc. using ssh-keygen to generate key pairs, appending the
> public keys to ~/.ssh/authorized_hosts on another host to prevent ssh
> authentication prompt. Is it very risky? Chances are a cracker could
> compromise one machine and ssh login others without  any authentication.

It's not "without authentication" - you're still authenticating, you're
just using a different means. There's two parts to rsa/dsa authentication
with ssh; first there's the key, then there's the passphrase.

If a cracker gets your key, that's tough, but they'll need the passphrase to
authenticate. If you make a key without a passphrase (generally what you'd
do for scripted rsyncs, etc) then they *only need the key*. So, you should
keep the data available with passphrase-less keys either read-only or backed
up, depending on its importance, etc.

- Jeff

-- 
   "I think we agnostics need a term for a holy war too. I feel all left
out." - George Lebl 








Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
> /mnt/disk2/  :-/

This is the point at which we have one of those "Brady Bunch Moments", when
everyone stands around chuckling at what they've learned, and the credits
roll.

- Jeff

-- 
"And that's what it sounds like if you *download* it!" - John, They 
  Might Be Giants   








Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> > It's called RAID-1.
> 
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly

That's what they do post-sync.

> and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking?

That's what they do when they sync (in very rough terms).

> This would cause the 2 live HDs to be mirrored to the backups, and then
> disengage the 2 "backup" HDs so they aren't constantly synced.
> 
> Would the above work? Sorry if I seem naive, but I haven't tried this
> "once in a while" RAID method before.

It's a dirty hack to make it do what you want it to, that's all. Russell's
solution was better, as at least you were getting the benefit of the running
mirror if a drive failed (and buying three disks is not expensive).
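
On Linux software RAID, the "once in a while" attach/sync/detach cycle
described above could be scripted along these lines (a sketch only; the
device names are made up, and this assumes mdadm-managed md arrays):

```
# attach the backup disk to the running mirror; a resync begins
#   mdadm /dev/md0 --add /dev/sdc1
# wait until /proc/mdstat shows the resync is complete
#   watch cat /proc/mdstat
# then mark the backup disk failed and pull it back out
#   mdadm /dev/md0 --fail /dev/sdc1
#   mdadm /dev/md0 --remove /dev/sdc1
```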

- Jeff

-- 
  "And up in the corporate box there's a group of pleasant  
   thirtysomething guys making tuneful music for the masses of people who   
can spell "nihilism", but don't want to listen to it in the car." - 
Richard Jinman, SMH 



