Re: History of FOSS and stuff (was: The Silent Woman)

2008-04-04 Thread Coleman Kane
David W. Aquilina wrote:
> On Fri, Apr 04, 2008 at 08:21:39AM -0400, Tom Buskey wrote:
>   
>> I downloaded 0.1 of Jolitz on 30-40 5.25" floppies and it wouldn't boot on
>> my system.  OS/2 wouldn't either.  Then I got Linux and it booted and ran.
>> If FreeBSD had cleaned up the Jolitz version sooner, Linux might not have
>> gained its foothold.
>> 
>
> I'd venture a guess that the AT&T lawsuit had a very significant impact on 
> the adoption rate of BSD vs. Linux in the early days. 
>   
Indeed it did. This is one of the predominant reasons why Linux
implemented its own TCP/IP stack and filtering, rather than bringing
in the widely-accepted-as-superior-at-the-time Berkeley stack and BPF.
It is also probably a big reason why GNU Hurd was so slow to get the
missing parts (and thus why Linux was chosen as the kernel).

--
Coleman Kane
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: History of FOSS and stuff

2008-04-04 Thread Coleman Kane
Paul Lussier wrote:
> Coleman Kane <[EMAIL PROTECTED]> writes:
>
>   
> >> Indeed it did. This is one of the predominant reasons why Linux 
> >> implemented its own TCP/IP stack and filtering, rather than bringing 
> >> in the widely-accepted-as-superior-at-the-time Berkeley stack and BPF. 
>> 
>
> echo $above | sed 's/\(Berkeley\)/(and still) \1/'
>
> Linux's TCP/IP stack still has lots of problems fixed by the Berkeley
> code many years ago.  And the new OpenBSD pf code is light-years
> better than anything Linux has ever had.
>   
I agree with the above; I was just avoiding a flamewar (/me ducks!).
I've used FreeBSD since 1999 on most of what I can get my hands on
(including my daily-use laptop/workstation). So far, I have not seen a
GNU/Linux distro that offers as good a solution as FreeBSD for my
desktop needs. Although I may be biased, since I am a committer and all ;).

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Mysql connection problem

2008-04-10 Thread Coleman Kane
Deepan wrote:
> Hi All,
> I am able to connect to Mysql via command line
> using mysql client. I am also able to connect to
> mysql via php if I run those php programs via
> command line. However when I hit those php pages
> via the browser it throws the error Can't connect
> to local MySQL server through socket
> '/tmp/mysql.sock' (2). Please note that this is
> the same socket the mysql client tries to connect
> to the server.
> Regards 
> Deepan 
> Sudoku Solver: http://www.sudoku-solver.net/ 
>   
The web server runs as a different user than your command-line session
does. What user are you using on the command line to test? You may need
to change the socket's permissions so that it is group-accessible, put
the web-server user into that group, and then restart the web server.

It would be helpful if you sent over the output of the following command:
ls -l /tmp/mysql.sock
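
As a rough sketch of what I mean (the "www-data" user, the "mysql"
group, and Apache itself are only examples here; substitute whatever
your web server and MySQL setup actually use):

ps aux | grep -E 'httpd|apache'   # confirm which user the web server runs as
chmod g+rw /tmp/mysql.sock        # make the socket group-accessible
usermod -a -G mysql www-data      # add the web-server user to the socket's group
apachectl restart                 # restart the web server so the new group applies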

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-12 Thread Coleman Kane
On Fri, 2008-04-11 at 10:40 -0400, Derek Atkins wrote:
> Paul Lussier <[EMAIL PROTECTED]> writes:
> 
> > "Steven W. Orr" <[EMAIL PROTECTED]> writes:
> >
> >> Add this to the end of your sendmail.mc
> >
> > Anyone know what the postfix fix is?
> 
> Yeah.  Install sendmail.  ;)
> 
> > Seeya,
> > Paul
> 
> -derek
> 

A more helpful suggestion is that you may want to set the
default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
main.cf is located on your particular install) to 5. Adding (or
changing) the following line in the file should do it:

default_destination_recipient_limit = 5
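
As a sketch (assuming a stock Postfix install with postconf available),
you can also set it and reload without editing the file by hand:

postconf -e 'default_destination_recipient_limit = 5'
postfix reload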

--
Coleman Kane



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-12 Thread Coleman Kane
On Sat, 2008-04-12 at 22:53 -0400, Paul Lussier wrote:
> Coleman Kane <[EMAIL PROTECTED]> writes:
> 
> > A more helpful suggestion is that you may want to set the
> > default_destination_recipient_limit in /etc/postfix/main.cf (or wherever
> > main.cf is located on your particular install) to 5. Adding (or
> > changing) the following line in the file should do:
> >
> > default_destination_recipient_limit = 5
> 
> Thanks!  And just to clarify, does this limit the total number of
> recipients to 5, or does it just batch 5 recipients at a time when
> sending to the total list of recipients?  In other words, if I sent to
> 20 people, does it get send in 4 batches of 5, or do 15 people not
> recieve the mail?
> 
> I'm assuming the former, i.e. 4 batches of 5.

According to the "Recipient limits" section on this page:
http://www.postfix.org/rate.html

"If an email message has more than $default_destination_recipient_limit
recipients at the same destination, the list of recipients will be
broken up into smaller lists, and multiple copies of the message will be
sent."

--
Coleman



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Sun, 2008-04-13 at 18:23 -0400, Shawn O'Shea wrote:
> 
> 
> On Sun, Apr 13, 2008 at 8:13 AM, Ben Scott <[EMAIL PROTECTED]>
> wrote:
> On Sat, Apr 12, 2008 at 3:24 PM, Coleman Kane
> <[EMAIL PROTECTED]> wrote:
> >  A more helpful suggestion is that you may want to set the
> 
> >  default_destination_recipient_limit
> in /etc/postfix/main.cf  ... to 5.
> 
>  I don't know much of anything about Postfix, but I'm guessing
> that
> will impact all destination MXes.  The goal here was to just
> limit
> connections to *Yahoo* to 5 recipients per envelope.  The
> above will
> penalize all connections, right?  How would one specify that
> for just
> Yahoo?
> I don't have a ton of Postfix experience, but using this Postfix FAQ
> question ( http://www.postfix.org/faq.html#incoming ) as a template of
> sorts (and reading bits from the O'Reilly postfix book and the postfix
> man pages.
> 
> You would create a transport map file, say /etc/postfix/transport. Add
> entries for the domains you want to limit and assign them to a
> transport name, let's say lamdomains
> 
> yahoo.com  lamedomains:
> 
> You need to then run: postmap /etc/postfix/transport
> 
> Then in the postfix main.cf, add lines to tell it about the transport
> and to tell it that anything in that transport has the recipient
> limit.
> transport_maps = hash:/etc/postfix/transport
> lamedomains_destination_recipient_limit = 5
> 
> So now you've created a transport, put some domains in it, changed the
> default behavior of postfix for that transport, you just need to tell
> postfix what to do with that transport (aka, deliver it with smtp).
> 
> Add a line to master.cf:
> lamedomains  unix  -   -   -   -   -   smtp
>  
> Now tell postfix to reload it's config: postfix reload

OMG you're my hero. New stuff learned every day.

> 
> Again, I haven't tested this, so you may need to read man pages and
> play with that a little, but that should set a postfix user in the
> right direction
> 
> -Shawn

Thanks for that little tidbit, that will be very helpful in the future.

I'd also like to point out another feature of Postfix that some of you
might not be familiar with.

Notice the "hash:" above in the 
"transport_maps = hash:/etc/postfix/transport" line. If you compile
Postfix with the -DHAS_MYSQL option, then you can replace this with
"mysql:" and the filename after the ":" is the location of a
specially-formatted .cf file that tells postfix to connect to a mysql
table and where to get the information that it wants.
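
A rough, untested sketch of what that looks like (the database, table,
and column names here are made up, and the exact parameters supported
depend on your Postfix version; see the mysql_table documentation that
ships with it):

In main.cf:
transport_maps = mysql:/etc/postfix/mysql-transport.cf

In /etc/postfix/mysql-transport.cf:
hosts = localhost
user = postfix
password = secret
dbname = mail
query = SELECT transport FROM transport_map WHERE domain = '%s'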

Postfix uses a database-abstraction model for maintaining most of these
"mappings" in the system: pretty much any configuration option that
accepts such a parameter can be backed by a MySQL table instead. This
greatly increases your ability to make dynamic run-time configuration
changes at will (without restarting Postfix).

I believe PostgreSQL support exists as well, for those of you who are
so inclined.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Sun, 2008-04-13 at 19:24 -0400, Ben Scott wrote:
> On Sun, Apr 13, 2008 at 2:34 PM, Steven W. Orr <[EMAIL PROTECTED]> wrote:
> >> Anyone want to give a presentation on switching from Sendmail to
> >> Postfix?
> >
> >  Why would you ever want to do that?
> 
>   Primarily: Cleaner, easier configuration.  I find it costs me more
> to learn a new feature in Sendmail than it appears it would cost me to
> learn the corresponding feature in Postfix.
> 
>   I've been using Sendmail since I started with *nix, so the
> incremental cost of learning one new feature when I need it has been
> lower than the cost of learning all of Postfix.  But every time I do
> so, I think of all the cost I've been accumulating over the years.  A
> common situation, really.  The field of IT systems administration is
> largely about turning "Better the devil you know" into a way of life.
> 
> > Sendmail has more flexibility.
> 
>   More than I need.  The higher flexibility comes with a corresponding
> cost.  So I'm paying for something I don't need.  Like commuting into
> work by driving an 18-wheeler.
> 
> -- Ben

I tend to agree here. Sendmail may be the ultimate mail server software
ever written, but you practically need a formal degree in Sendmail to
get it to perform many of the complex operations that other mailservers
can do in a far more straightforward manner.

For instance, Shawn O'Shea just pointed out that you can dynamically
define new transports for Postfix, and then address this problem by
setting up a "lamedomains" transport that behaves in the
5-rcpts-per-message manner, using configuration options that are far
easier to read.

Maybe Sendmail *is* the best option if your primary job is being a 24/7
mail relay operator... but I don't want to have to learn a (sort of)
brand new language just to tell my mailserver what to do. I have better
things to do with my time. I'd take the "fewer features, but easily
configurable" mailserver over the "mailserver whose .mc could compile
the mailserver itself if you wanted it to", because I'd spend less of my
time administering my mailserver, and more time on Paying Job (TM) and
hobby projects (FreeBSD, etc...).

-- 
Coleman Kane



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Solved: Sendmail question. Problem with yahoo.

2008-04-14 Thread Coleman Kane
On Mon, 2008-04-14 at 11:55 -0400, Tom Buskey wrote:
> 
> 
> On Mon, Apr 14, 2008 at 11:34 AM, Ben Scott <[EMAIL PROTECTED]>
> wrote:
> On Mon, Apr 14, 2008 at 10:59 AM, Tom Buskey <[EMAIL PROTECTED]>
> wrote:
> > Sendmail has a long history of security problems.
> 
> 
>  I have to point out that the above statement would be equally
> true
> if one wrote "Unix" instead of "Sendmail".  (This is not a
> snide
> remark, although it may qualify as "subtle".)
> 
> I can't disagree with you there.  I used to work at a paranoid
> security firm.  Sendmail was written by 1 person & they avoided all
> code by that person because of the coding techniques/style lent itself
> to buffer overflows.  Unix had many more authors and different coding
> styles.  
> 
> 
>   Separate from the above: From what I know if it, Postfix has
> a more
> modular design than Sendmail.  Such designs usually lend
> themselves to
> task isolation and least-privilege, which is usually good for
> security.  It's interesting, but despite Sendmail's more
> flexible
> 
> Security was part of the design goal from day one.  Sendmail was
> created in a different era.  In fact, the 1st internet worm in 1988
> was enabled because of the root access backdoor written into Sendmail.
> That stuff isn't in Sendmail anymore of course. 
> 
> 
> design, implemention of these concepts came later.  When they
> did
> arrive, though, they were implemented using the same Sendmail
> configuration facilities already existent.  I'm not sure that
> last
> part really matters, much, though.  The source code to
> everything is
> readily available.  What difference does it make if one has to
> write a
> new .c file vs a new .cf file?  That might matter on a
> slavery-software platform, but surely we all know that story
> by now.
> 
>  It may be worth noting that Postfix was created by Wietse
> Venema,
> the same person who created tcp_wrappers.
> 
> Qmail was written by DJ Bernstien, also with a security mindset.

In addition, djb has had a long-standing (since 1997) $500 reward for
anybody who can publish a verifiable security hole in qmail. To date,
nobody has collected it.

> 
> I know Qmail hasn't accepted outside code.  I don't think Sendmail
> has.  Does Postfix? Does Exim? Does any MTA have multiple authors?
> 

I believe that Postfix is still maintained by the original author,
although he does accept patches for review and inclusion. Exim is
maintained by a group at the University of Cambridge (UK), though I
don't know how centralized the project is around its main author.

I really do have to say that my favorite all-time mailserver has been
qmail. What qmail lacks are many of the more complex features that come
standard in systems like Postfix, Exim, and Sendmail, as well as
integration with heavier-weight IMAP back-ends. There is a large amount
of qmail-specific software out there, and I found qmail's code wonderful
to hack on when I needed to add extra features (such as extending
qmail-smtpd to do more at the SMTP end).

I haven't found a mailserver that scales better than qmail for handling
gigantic amounts of email flow, either. That said, finding others with
the breadth of knowledge that I have on qmail proves quite difficult.
For our IT clients, we just use Postfix because it is something that
"everyone can administer" (hooray, pragmatism).

At "previous job", I hosted all client mail (for 30k+ domains) through
two machines, using one as the mail-store (w/ courier-imap) and one as
the front-end filter/remailer (for email forwarding accounts). It was
wonderful.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Performance Tuning

2008-04-15 Thread Coleman Kane
On Tue, 2008-04-15 at 11:06 -0400, Kenny Lussier wrote:
> Hi All,
> 
> This should be a fairly easy one for someone out there I have made
> some modifications to a system for performance reasons. One of the
> changes that I made was setting the read_ahead_kb value to 1024 (up
> from 128). I used the blockdev command to do this (blockdev --setra
> 2048 /dev/sdb). My question is, how do I make this persistent across
> reboots??
> 
> TIA<
> Kenny

My cheap trick for making something like this persistent is to stick it
into my "local startup script". Typically this is /etc/rc.local,
/etc/conf.d/local.start, or something similar, depending upon your
distro.

My guess, if you are using udev, is that you can add a rule to the udev
configuration that identifies the drive by its UUID (or some more
general parameters) and runs the blockdev command line above whenever
the drive is probed. That would also make the setting persist across
drive hot-swaps.
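
Something along these lines is what I have in mind (untested, and the
device name and readahead value are just placeholders):

# in /etc/rc.local (or your distro's local startup script):
blockdev --setra 2048 /dev/sdb

# or as a udev rule, e.g. /etc/udev/rules.d/99-readahead.rules, so it
# re-fires whenever the disk is (re)probed:
ACTION=="add", KERNEL=="sdb", RUN+="/sbin/blockdev --setra 2048 /dev/%k"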

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Intro and Questions...

2008-04-16 Thread Coleman Kane
On Wed, 2008-04-16 at 11:01 -0400, Gerry Hull wrote:
> Hello All,
>  
> I've been lurking on this list for a few weeks, and thought I'd
> introduce myself and ask a question...
>  
> I'm an almost-50 software engineer and been writing code for about 25
> years now... mostly in the Windows world...
>  
> In my day job, I work in the telecom world, writing .Net software that
> works with PBX products from the likes of Cisco, Nortel, Avaya and
> Siemens.
>  
> In my free time, I've discovered Linux and open software.I run a
> lot of servers at my house, and one of them is an Asterisk
> distribution called PBX-in-a-Flash (www.pbxinaflash.com).   I've been
> writing a lot of php and bash scripts and having a lot of fun with it.
>  
> My question has nothing to do with Asterisk, though.   A friend and I
> are building an Applications Server product that we want to run on
> both Windows and Linux.   I am planning to do the Linux side using
> Mono, (LAMM, Linux, Apache, MySQL, Mono instead of LAMP) so I can
> leverage my .Net codebase.For some of you purists, that may not be
> the "right" strategy, but for us it's time-to-market issue.
>  
> My question is this:  Have any of you had experience with Mono?  What
> distro did you use?   What distro would you recommend?
> Our end goal is to build a custom distro, which will install the OS
> from an iso, an after initial boot, downloads the latest application
> code
> and install it.
>  
> TIA,
>  
> Gerry Hull
> Greenfield, NH
> [EMAIL PROTECTED] (email/sip)
> +1-603-547-4005
>  

Hi,

I've been messing around with Mono a bit lately and it looks promising
(especially considering the insular approach that Sun Micro has taken
with the Java language / VM). I was able to successfully build some .exe
files on my FreeBSD amd64 system using "mcs", the Mono C# compiler.
These were just console apps.

I was able to use "mono" to run these locally without any trouble. I
copied them to my x86 (32-bit) WinXP machine and ran them there and they
ran fine as well.
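
For a trivial test (hello.cs here being any small console program you
write yourself), the round trip looks something like this:

mcs hello.cs        # produces hello.exe
mono hello.exe      # runs the same .exe that MS .NET runs on Windows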

I haven't done any significant performance testing on the code, but it
generally seems to be decently supported. One thing I do see is that MS
.NET already implements the .NET API v3.0 and v3.5 (as well as v1.1 and
v2.0), while Mono implements all of v1.1 and "most" of v2.0.

One of the really cool features of the Mono project is that it releases
much of its code under the MIT license (which carries distribution
rights and responsibilities similar to the BSD license). This means that
you can ship C# stuff based on Mono as an embedded GNU/Linux distributor
without losing proprietary rights (if such things concern you). The next
release (2.0) is supposed to put the C# compiler under this license as
well.

Novell has been the "Sun Micro" to the Mono project and seems to be very
good to it.

Links:
  - http://www.mono-project.com - main page
  - http://www.mono-project.com/Roadmap - project's roadmap (done, and
TODO)
  - http://www.mono-project.com/Todo - immediate TODO list (nudge)

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Intro and Questions...

2008-04-18 Thread Coleman Kane
On Fri, 2008-04-18 at 01:41 -0400, Bill McGonigle wrote:
> On Apr 16, 2008, at 11:22, Coleman Kane wrote:
> 
> > Novell has been the "Sun Micro" to the mono project and seems to be  
> > very
> > very good to it.
> 
> 
> Yeah, if you're doing mono work you better be using a SuSE-based  
> distribution, since only Novell has a patent-indemnification pact  
> with Microsoft, and mono is likely patent-encumbered by Microsoft,  
> and Microsoft has already promised to take action against those it  
> feels are abusing the patents it holds which are being used by FLOSS  
> software.

I remember there being a large amount of fear surrounding the project.
From what I can tell, the patent-sensitivity mainly pertains to the
following components:
  * ASP.NET
  * ADO.NET
  * Windows Forms

The community at-large seems to accept that there are probably no
enforceable patent claims on the Base system and the compiler. This is,
for instance, what is used for Gtk# software such as Tomboy and F-Spot.

> 
> I'd say running anything non-SuSE would be dangerous, but ask your  
> counsel for a real opinion. :)

I suppose I'd probably need to read up on their agreement to see what
Novell considers a "customer" or a "developer". If I download and
install SLED, am I covered forever? How does this not also cover me if I
run the software on other OSes (as long as I run one SLED install
somewhere)?

So the question is: if I can install SLED and become covered, then why
can't I become covered just by downloading and installing Mono?

> 
> My non-legal opinion is that this is a good reason to stay away from  
> mono; it's a patent trap.  It's also, unfortunately why I've been  
> moving away from GNOME after more than a decade of using and testing  
> (to KDE).  If somebody points out to me that mono has gone under  
> GPL3, I'll take back everything I said.
> 
> -Bill
> 

I'll keep reading up on it, thanks for the pointers...

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Reformat an NTFS disk to FAT32? (You don't have to live w/ FAT)

2008-04-20 Thread Coleman Kane
On Sun, 2008-04-20 at 17:31 -0400, Alex Hewitt wrote:
> On Sun, 2008-04-20 at 16:40 -0400, Ben Scott wrote:
> > On Sun, Apr 20, 2008 at 3:42 PM, Bruce Labitt <[EMAIL PROTECTED]> wrote:
> > >  Now that I think about this, all that I want is a format that I can read
> > >  and write to for the WinXP machines that I have to live with and with
> > >  linux.
> > 
> >   Ah, then yah, FAT32 is likely your best bet.  That seems to have
> > become the "lingua franca" for filesystem interoperability.
> > 
> > > Unfortunately when I received the disk it already was preformatted
> > > NTFS.
> > 
> >   I'd say your best bet is to change the partition type of the
> > existing partition to 0x0C using fdisk, and then format it using
> > mkdosfs.
> 
> Believe it or not, if you want a > 32 GB partition you need to do it
> with Linux or a manufacturer supplied utility (Western Digital provides
> one for some of their 2.5 external hard drives). Microsoft doesn't
> believe you should be using > 32 GB FAT32 partitions even though the
> file system will support operations much greater.
> 
> -Alex
> 

If you guys don't know already, there's an NTFS driver based upon FUSE
that's supposed to be really good (read/write in Linux and
Ownership/Permission support):

http://www.ntfs3g.org/

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Comcast blocks port 25 incoming, yet again

2008-04-25 Thread Coleman Kane
Hi all,

I just had to deal with Comcast tech support today to resolve their
unannounced block of my TCP port 25. The first level of tech support
listened to my explanation that I own some domains and have the email
coming in locally on port 25. The guy explained that an "abuse ticket"
had been filed for the action, which said that a single spam email
"supposedly originating" from my system was responsible for the block.
He was nice enough to unblock my cable modem, and I asked if I could get
the content of the abuse ticket, so that I could look at it and resolve
anything that I might have missed on my end.

I got passed to their "abuse" department to get this resolved and the
"abuse representative" explains that I am not allowed to get the content
of the report because it is "proprietary". What sort of crap is that ?!?
In the "real world" if someone leverages a lawful complaint against me I
have a right to the complaint as well as confronting my accuser.
However, in the "comcast world" neither of these rights are granted to
me. I highly doubt that the email cited in the issue was actually
sourced from my system. I was recently the victim of some spammer
putting my email address in the From: header of a large spam bomb and
now I receive all of the failure notices in my inbox (which are pared
down handily by about 90% by my spamfilter).

Furthermore, I do host my own "websites" and "email" on my local
connection, but none of it is used for commercial or business purposes.
The Comcast representative then proceeded to inform me that my hosting
violates their terms and that I can either get another provider or use
their "business class" service. He warned me that they'll be
specifically monitoring my traffic for the next 30 days, and if I don't
"stop it" they will turn off my access.

As much as this may seem commonplace to you, I have never had an issue
with this setup from any provider of mine in the past (TW Cincinnati,
Cincinnati Bell/Zoomtown, TW El Paso, TW San Jose). Most of the time the
only usage that is strictly barred on a residential line is "commercial
activity", which has typically been described in terms of monetary
exchange...

Anyhow, I did speak to FairPoint who informed me that I can get DSL
service (at the same speed) for a fraction of the rate that I pay to
Comcast right now (I don't have a TV for their 99% mind-numbing cable
programming racket, so I pay their higher net fee). I can also have
unlimited usage and the sales person tells me that they don't block
access. They also provide month-to-month service, instead of locking me
into a contract. Additionally, I can provide my own DSL equipment if I
have it.

So I am now curious if anyone else has moved to FairPoint, and how they
have been doing with it.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again

2008-04-25 Thread Coleman Kane
On Fri, 2008-04-25 at 14:39 -0400, Shawn O'Shea wrote:
> 
> 
> Furthermore, I do host my own "websites" and "email" on my
> local
> connection but none of it is used for commercial or business
> use. The
> comcast representative then proceeded to inform me that my
> hosting
> violates their terms and that I can get another provider, or I
> can use
> their "business class" service. He warned me that they'll be
> specifically monitoring my traffic for the next 30 days and if
> I don't
> "stop it" they will turn off my access.
> 
> This is Comcast's SOP. Their Terms of Service that you agreed to when
> getting Comcast service says "no servers" , regardless of their
> commercial use or not. I'm not defending them, because I don't agree
> with the policy either, just that it is fact, and that by getting
> their service you agree to abide by their rules, dumb or not.
> From section: I. Prohibited Uses and Activities
> "use or run dedicated, stand-alone equipment or servers from the
> Premises that provide network content or any other services to anyone
> outside of your Premises local area network ("Premises LAN"), also
> commonly referred to as public services or servers. Examples of
> prohibited equipment and servers include, but are not limited to,
> e-mail, Web hosting, file sharing, and proxy services and servers;"
> http://www6.comcast.net/terms/use/
> 
> -Shawn
> 

Yeah, I realize this *now*; however, it still doesn't excuse them from
cutting off service without notice. They can contact me; they do have my
phone number and email address.

I am probably moving to FairPoint DSL. Generally I've had better service
in the past with DSL than with cable in the city anyhow. Too bad
FairPoint didn't offer this service back when I first moved here.

I recommend that anybody living in NH look at FairPoint for internet
access. They seem "less bad" than Comcast. Comcast can go screw
themselves, as far as I am concerned.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again

2008-04-25 Thread Coleman Kane
On Fri, 2008-04-25 at 16:31 -0400, Kevin D. Clark wrote:
> Coleman Kane writes:
> 
> > Anyhow, I did speak to FairPoint who informed me that I can get DSL
> > service (at the same speed) for a fraction of the rate that I pay to
> > Comcast right now (I don't have a TV for their 99% mind-numbing cable
> > programming racket, so I pay their higher net fee). I can also have
> > unlimited usage and the sales person tells me that they don't block
> > access. They also provide month-to-month service, instead of locking me
> > into a contract. Additionally, I can provide my own DSL equipment if I
> > have it.
> 
> I would be curious to know if, in Fairpoint's DSL ToS, the term
> "unlimited usage" is defined.  I would also like to know if in this
> ToS the subject of running a server at the customer side of the
> connection is discussed.  What does the ToS say about these cases?
> 
> A quick perusal of their web site yields no details regarding these
> matters.
> 
> Thanks very much,
> 
> --kevin

I'm going to discuss this further with their sales person on Monday,
hopefully when I set up my new account. I did explain that Comcast was
blocking my service and that I want to handle my own mail for my
domains. Whether the person on the other end of the line understood, I
am not sure. It sounded to me like they really didn't care.

I also came across the following "Acceptable Use Policy" on their
website:
http://632fpbe.fairpoint.com/forms/acceptable_use_policy.php

It states:
"Serving of any kind is NOT allowed without express written consent from
ISP. Consent should be given in a separate service contract and should
be producible by the customer upon request from ISP."

I am not entirely sure what "ISP" refers to in that sentence, but it
sounds like a livable policy (the service is available on an elective
basis). We shall see how this goes...

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again

2008-04-25 Thread Coleman Kane
On Fri, 2008-04-25 at 17:35 -0400, Ben Scott wrote:
> On Fri, Apr 25, 2008 at 3:11 PM, Coleman Kane <[EMAIL PROTECTED]> wrote:
> >  Yeah, I realize this *now*, however it doesn't still excuse them from
> >  unannouncedly denying service.
> 
>   Actually, per their ToS, they're within their rights to simply
> terminate your account and keep your money.  You *did* read that
> contract you agreed to, right?  ;-)
> 
>   FWIW, if you find you want to continue with Comcast (not sure how
> you'd reach that conclusion, but...), they offer a premium class of
> service which allows hosting services.  At work in Amesbury, MA, we're
> paying $65/month for something that's pretty speedy, with a static IP
> address.  YMMV.

As far as I can tell, I need to get in touch with their business reps to
figure out a business package that works for me. Most providers I've
used have a "home user w/ static IP" option that's typically a $10 fee
above the normal rate. I did find their "teleworker" package, which must
be purchased in lots of ten by an employer and runs a whopping $99 each.
This is the same package that Time-Warner typically provides in its
jurisdictions for less than half that.

> 
> > They [FairPoint] seem "less bad" than Comcast.
> 
>   Yah, when the choice was Verizon vs Comcast, I always said that it's
> not that I liked Comcast, but that I hated Verizon more.  In my
> experience, all telcos suck; some just suck more than others.  (And
> cablecos are telcos, if you didn't know already.)  If FairPoint
> manages to start Verizon's FTTP rollout back up again, I'll almost
> certainly be switching.  Cable Internet is usually much faster than
> DSL, so that's a tougher call.  If I hear really good things about
> FairPoint's customer service, I might consider it, but they'd really
> have to be astoundingly good things.  (Remember, all telcos suck.)

When in Cincinnati, I had good service relations with Cincinnati Bell
out there. That may be due in part to them being the only remaining
local telco that wasn't a former vital organ of AT&T... They actually
didn't suck:
   * They were receptive to my desire to run servers and even accepted
my diagnoses using traceroute, ping, etc...
   * They fixed a cable I dug up and broke in my yard for free
   * They strung cat-5 in one apartment to improve my DSL access, for
free
   * Customer service was not indignant when confronted with the rare
billing error
   * Didn't get fined for breaking contract

Of course, your best bet with their DSL is if you live within the
inner-city limits. Outside of that (in the burbs), the CO-per-square-mile
ratio drops so far that you just end up stuck with cable unless you're
lucky.

> 
> -- Ben

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again

2008-04-28 Thread Coleman Kane
On Sun, 2008-04-27 at 19:46 -0400, Bill Sconce wrote:
> On Fri, 25 Apr 2008 17:38:31 -0400
> "Ben Scott" <[EMAIL PROTECTED]> wrote:
> 
>  
> > Seriously.  GNHLUG gets better service from MV
> > for free then I've ever been able to *pay for* with somebody else.  I
> > can't say enough good things about MV.  Hugely recommended.
> > 
> >   http://www.mv.com
> 
> 
> Another vote.  As I've also said each time ISPs have been discussed, 
> I've been using MV for years, and wouldn't consider switching.  I'd
> tried two other local ISPs, neither of them a telco, and finally 
> thought to call MV when the second one started double-dipping on its
> billing.
> 
> Been happy ever since.  Totally.  Should have *started* with MV. Every
> call a pleasure; never one problem with the service.
> 
> -Bill

They do look promising, and they appear to have cheaper rates than
FairPoint, but that $60 setup fee is pretty killer, especially
considering that I will probably be moving again in August.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] - bad bad humor

2008-05-02 Thread Coleman Kane
On Wed, 2008-04-30 at 15:15 -0400, Star wrote:
> http://en.wikipedia.org/wiki/Comparison_of_file_systems
> 
> Go to the Feature Compairison...  Note the last feature column.
> 

I take it you mean this:
http://en.wikipedia.org/w/index.php?title=Comparison_of_file_systems&oldid=209285146

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: IMAP Server

2008-05-02 Thread Coleman Kane
On Fri, 2008-05-02 at 08:49 -0400, Matt Snell wrote:
> Morning all,
> 
>   I'm looking for help choosing an IMAP server for my own simple needs.
> I currently store my email in maildir format, I use fetchmail to retrieve it,
> procmail to filter it and mutt to read it.  If possible, I'd still like to
> use these tools and add Thunderbird (or any IMAP capable client) on
> Linux and Windows machines to access my mail trough an SSH tunnel.  If it
> matters, the server won't be facing the Internet.
> 
>   Can anyone recommend an IMAP server that will allow me to do this?
> I'm hoping for something simple but effective.  If you need more information
> from me, please let me know.
> 

I've always liked courier-imap for dealing with Maildirs.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Converting mailboxes from mbox to maildir.

2008-05-02 Thread Coleman Kane
On Fri, 2008-05-02 at 13:53 -0400, Scott Garman wrote:
> Since the subject of IMAP servers has come up, I thought I'd ask about 
> something I really need to get around to soon. I have issues with mbox 
> corruption about 1-2 times per year, and still haven't made the switch 
> to Maildir. I'd like to do it before I get an urgent crisis again. :)
> 
> I'm using dovecot as my IMAP/POP server. Does anyone know if it's 
> possible for it to work with both mbox and Maildir at the same time so I 
> can convert my users' mailboxes one user account at a time? From my 
> research so far I'm under the impression that I can configure doevcot to 
> use either mbox or Maildir for all user accounts, but not both.
> 
> War stories and other advice on migrating from mbox to Maildir are welcome.
> 
> Thanks,
> 
> Scott

You can install procmail and use the formail and procmail programs to
perform the conversion.

I believe you want to set up a procmail rule that delivers mail into a
maildir (specify the destination with a trailing slash), and then run
something like this for each mbox file:

formail -s procmail < mboxfile

I recommend reading the procmail and formail manpages before doing this,
to make sure my memory serves me right.
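
As a rough, untested sketch (adjust the maildir path to taste), the
.procmailrc only needs the maildir destination, trailing slash included:

# ~/.procmailrc
DEFAULT=$HOME/Maildir/

Then feed each mbox through the formail command above.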

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: Octave Make failure

2008-05-05 Thread Coleman Kane
On Mon, 2008-05-05 at 15:02 -0400, Jarod Wilson wrote:
> On Mon, 2008-05-05 at 14:33 -0400, Labitt, Bruce wrote:
> > On Mon, 2008-05-05 at 12:12 -0400, Labitt, Bruce wrote:
> > > I'm trying to compile octave on scientific linux 5.1 x86-64.  I have a
> > > make failure that I am trying to diagnose.  I saved the make log.  The
> > > failure seems to appear at line 2648 or so.
> > > 
> > > ../src/liboctinterp.so: undefined reference to
> > `__cxa_get_exception_ptr'
> > > ../src/liboctinterp.so: undefined reference to
> > `std::basic_istream > > std::char_traits >::ignore()'
> > > collect2: ld returned 1 exit status
> > > make[2]: *** [octave] Error 1
> > > make[2]: Leaving directory
> > > `/home/BDLabitt/octavesource/octave-3.0.1/src'
> > > make[1]: *** [src] Error 2
> > > make[1]: Leaving directory `/home/BDLabitt/octavesource/octave-3.0.1'
> > > make: *** [all] Error 2
> > > 
> > > Can anyone suggest what may be missing?  libxxx?  It looks like a very
> > > basic library - like a c++ lib or something.
> > 
> > Google suggests you're mixing compilers and libstdc++ from different
> > versions of gcc.
> > 
> > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20170
> [...]
> > [Labitt, Bruce]  It looks like I have libstdc++ and gcc and gcc-c++
> > at
> > the same level: 4.1.2-14.el5.  However, I do have both the i386 and
> > x86_64 compiler tools  installed.  Should I get rid of the 32 bit
> > ones?
> 
> Shouldn't hurt to have both installed. Although I suppose it could be
> that the wrong one gets used to build and the right one tries to get
> used at runtime, or vice-versa... If you want to make sure you're
> building a 64-bit binary, yeah, yank the 32-bit pieces and see what
> happens... (for me, 'yum remove \*.i386 \*.i686' is part of my usual
> post-install process on x86_64 boxes... :).

Bruce,

It would be helpful if you posted the linker line that produced this
error message as well. This is a common type of error you'll see when
the linker is not linking your C++ objects against libstdc++.so.

My guess would be that the linker is being called through gcc instead of
g++, the latter of which would add libstdc++ to the linker (ld) command
line.
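
A quick illustration of what I mean (file names made up):

gcc  main.o foo.o -o prog             # undefined references to std:: symbols
g++  main.o foo.o -o prog             # links cleanly; g++ adds libstdc++ for you
gcc  main.o foo.o -o prog -lstdc++    # also works, naming the library explicitly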

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: Octave Make failure

2008-05-05 Thread Coleman Kane
On Mon, 2008-05-05 at 15:27 -0400, Labitt, Bruce wrote:
> OK.  Here is the previous line(s) that cause grief.
> 
> 
> gcc -c  -I. -I.. -I../liboctave -I../src -I../libcruft/misc
> -DHAVE_CONFIG_H  -Wall -W -Wshadow -g -O2 main.c -o main.o
> 
> g++  -I. -I.. -I../liboctave -I../src -I../libcruft/misc
> -DHAVE_CONFIG_H  -Wall -W -Wshadow -Wold-style-cast -g -O2 -rdynamic \
> -L..  -fPIC  -o octave \
> main.o  \
> -L../liboctave -L../libcruft -L../src -Wl,-rpath
> -Wl,/usr/local/lib/octave-3.0.1 \
> -loctinterp -loctave  -lcruft   \
>  \
> \
> -lfftw3 -lreadline  -lncurses -ldl -lz -lm
> -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
> -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
> -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/lib/../lib64
> -L/usr/lib/../lib64 -lz -lfrtbegin -lg2c -lm
> ../src/liboctinterp.so: undefined reference to
> `__cxa_get_exception_ptr'
> ../src/liboctinterp.so: undefined reference to
> `std::basic_istream >::ignore()'
> collect2: ld returned 1 exit status
> make[2]: *** [octave] Error 1
> make[2]: Leaving directory
> `/home/BDLabitt/octavesource/octave-3.0.1/src'
> make[1]: *** [src] Error 2
> make[1]: Leaving directory `/home/BDLabitt/octavesource/octave-3.0.1'
> make: *** [all] Error 2
> 
> Thanks for everyone's help so far...
> 
> Regards,
> Bruce

From the above line, it looks like you are performing the link against
the GCC 3.4.6 libraries. I seem to remember you mentioning 4.1.2 or so
earlier... this might be one of the problems.

What is the output of?:
g++ -v

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: Octave Make failure

2008-05-05 Thread Coleman Kane
On Mon, 2008-05-05 at 15:52 -0400, Labitt, Bruce wrote:
> On Mon, 2008-05-05 at 15:27 -0400, Labitt, Bruce wrote:
> > OK.  Here is the previous line(s) that cause grief.
> > 
> > 
> > gcc -c  -I. -I.. -I../liboctave -I../src -I../libcruft/misc
> > -DHAVE_CONFIG_H  -Wall -W -Wshadow -g -O2 main.c -o main.o
> > 
> > g++  -I. -I.. -I../liboctave -I../src -I../libcruft/misc
> > -DHAVE_CONFIG_H  -Wall -W -Wshadow -Wold-style-cast -g -O2 -rdynamic
> \
> > -L..  -fPIC  -o octave \
> > main.o  \
> > -L../liboctave -L../libcruft -L../src -Wl,-rpath
> > -Wl,/usr/local/lib/octave-3.0.1 \
> > -loctinterp -loctave  -lcruft   \
> >  \
> > \
> > -lfftw3 -lreadline  -lncurses -ldl -lz -lm
> > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
> > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
> > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/lib/../lib64
> > -L/usr/lib/../lib64 -lz -lfrtbegin -lg2c -lm
> > ../src/liboctinterp.so: undefined reference to
> > `__cxa_get_exception_ptr'
> > ../src/liboctinterp.so: undefined reference to
> > `std::basic_istream >::ignore()'
> > collect2: ld returned 1 exit status
> > make[2]: *** [octave] Error 1
> > make[2]: Leaving directory
> > `/home/BDLabitt/octavesource/octave-3.0.1/src'
> > make[1]: *** [src] Error 2
> > make[1]: Leaving directory
> `/home/BDLabitt/octavesource/octave-3.0.1'
> > make: *** [all] Error 2
> > 
> > Thanks for everyone's help so far...
> > 
> > Regards,
> > Bruce
> 
> From the above line, it looks like you are performing the link
> against
> the GCC 3.4.6 libraries. I seem to remember you mentioned using 4.1.2
> or
> so earlier... this might be one of the problems.
> 
> What is the output of?:
> g++ -v
> 
> -- 
> Coleman Kane
> [Labitt, Bruce] 
> 
> Could that be some "compatibility" stuff that I have installed
> (recognizes old c/c++) ?

No such thing.

> 
> $ g++ -v
> 
> Using built-in specs.
> Target: x86_64-redhat-linux
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-libgcj-multifile
> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
> --enable-java-awt=gtk --disable-dssi --enable-plugin
> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
> --with-cpu=generic --host=x86_64-redhat-linux
> Thread model: posix
> gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)
> 
> $
> 
> Regards,
> Bruce

Try doing this link again, omitting the arguments that match:
-L/usr/lib/gcc/x86_64-redhat-linux/3.4.6

Most likely your problem is that libstdc++.so from GCC 3.4.6 is being
preferred to the libstdc++.so from the compiler you are running (namely,
GCC 4.1.2). You are obviously not using the v3.4.6 headers (because
there is no -I argument to #include them), so you are trying to use the
GCC 4.1.2 C++ API with the GCC 3.4.6 C++ library, which is bad.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: Octave Make failure

2008-05-05 Thread Coleman Kane
On Mon, 2008-05-05 at 16:19 -0400, Jarod Wilson wrote:
> On Mon, 2008-05-05 at 16:01 -0400, Coleman Kane wrote:
> > > Could that be some "compatability" stuff that I have installed
> > > (recognizes old c/c++) ?
> > 
> > No such thing.
> 
> Yes and no. In addition to a gcc package, Red Hat ships a gcc34 package
> (and dependent bits). So it could be some sort of mixing. But this sorta
> thing generally doesn't happen. Was the system upgraded from an earlier
> release, by chance? Perhaps something didn't get completely updated...

The specific thing I was pointing out is that his linker command is
being told to source in the GCC 3.4.6 library paths:

"-L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 ... " in the g++ command line.

This is likely the badness. While it is true that Red Hat ships a gcc34
package, that is only so you can use GCC 3.4 to compile and link
software that requires GCC 3.4. There is no "GCC 3.4 compatibility
library for GCC 4.x", especially none installed at
/usr/lib/gcc/x86_64-redhat-linux/3.4.6, and mixing the two *definitely*
isn't sanctioned.

> 
> > Most likely your problem is that libstdc++.so from GCC 3.4.6 is being
> > preferred to the libstdc++.so from the compiler you are running (namely,
> > GCC 4.1.2). You are obviously not using the v3.4.6 headers (because
> > there is no -I argument to #include them), so you are trying to use the
> > GCC 4.1.2 C++ API with the GCC 3.4.6 C++ library, which is bad.
> 
> Yep, sounds like it. 'ldconfig -p | grep stdc++' might provide some
> further insight.

I doubt that this will really reveal the problem, as it is a link-time
problem, and ldd is only useful in this manner for tracking down
dependencies in already-linked dynamic ELF images. For some reason,
octave wants to use GCC 3.4.6 at link time, but it really should have
thought about that back when it was compiling, too. If octave uses
configure to generate its Makefiles, I'd recommend looking at config.log
to see where the "link with -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6"
instruction is coming from.

Probably, if it depends upon other software, the configure script is
sourcing in the CFLAGS/LDFLAGS from a dependency... you may have a
dependency that was built with GCC 3.4.
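
Something like this (just a guess at where to look) should show where
the stale path is coming from:

grep -n '3\.4\.6' config.log
grep -n '3\.4\.6' Makefile */Makefile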

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


SuSE Linux

2008-05-06 Thread Coleman Kane
Hi,

I made a comment yesterday about SuSE Linux...

  * SLED stands for "SuSE Linux Enterprise Desktop" and is the
desktop-linux distribution that they provide. ("Sled")
  * SLES stands for "SuSE Linux Enterprise Server" and is the
server-linux distribution that they provide. ("Sles")

Apologies to anyone I confused; they've gone through a number of
renaming cycles since being SuSE GmbH (now just a division of Novell,
Inc.).

For those interested, this is also one of the few Server Distros that
I've found to use OpenLDAP by default as the "passwd database", rather
than the flat files in /etc/.

Red Hat Enterprise Linux, RHEL (sometimes pronounced "Our-Hell" by many
a disillusioned sysadmin), is analogous to SLES, while Fedora is more
analogous to SLED. I mistakenly compared SLED to RHEL in the discussion.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: SuSE Linux

2008-05-06 Thread Coleman Kane
On Tue, 2008-05-06 at 11:13 -0400, Jarod Wilson wrote:
> On Tue, 2008-05-06 at 10:49 -0400, Coleman Kane wrote:
> > Hi,
> > 
> > I made a comment yesterday about SuSE Linux...
> > 
> >   * SLED stands for "SuSE Linux Enterprise Desktop" and is the
> > desktop-linux distribution that they provide. ("Sled")
> >   * SLES stands for "SuSE Linux Enterprise Server" and is the
> > server-linux distribution that they provide. ("Sles")
> > 
> > Apologies to anyone that I confused they've gone through a number of
> > renaming cycles since being SuSE GmbH (now just a division of Novell,
> > Inc.).
> > 
> > For those interested, this is also one of the few Server Distros that
> > I've found to use OpenLDAP by default as the "passwd database", rather
> > than the flat files in /etc/.
> > 
> > Red Hat Enterprise Linux, RHEL (sometimes pronounced "Our-Hell" by many
> > a disillusioned sysadmin), is analogous to SLES, while Fedora is more
> > analogous to SLED.
> 
> No. Fedora is more analogous to openSUSE. SLED and SLES share the bulk
> of their package base. Its more like this: SLES is analogous to Red Hat
> Enterprise Linux Server, SLED is analogous to Red Hat Enterprise Linux
> Workstation.
> 

I suppose that's a better comparison... I didn't even know RedHat sold a
desktop distro, but now that I check their site it is here:
http://www.redhat.com/rhel/desktop/

Well, enjoy!

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Spam-Filter-Free Options (Was: Computer repair shop)

2008-05-06 Thread Coleman Kane
On Tue, 2008-05-06 at 17:13 -0400, Ben Scott wrote:
> On Tue, May 6, 2008 at 4:50 PM,  <[EMAIL PROTECTED]> wrote:
> >  Maybe spam prevention (much like virus prevention) is more about not
> >  making yourself a target, than it is about defending yourself against
> >  those who target you.
> 
>   Running a business means you have to make yourself accessible.
> 
> -- Ben

The same is true of being active in open-source projects. I get loads
of spam per day, but my system is usually able to pare these down to
about 10 or so that actually end up in my mailbox.

Such is the plight of having @FreeBSD.org and @openoffice.org addresses
and participating on the bug-tracking and mailing lists for those and
other projects.

I'm using SpamAssassin with as many of the add-on modules as I could
find in Gentoo to plug into it. I also started forwarding my mail
through my office's MX relay last week, and this has resulted in better
spam-catching (to the point where the only spams that make it through
untagged are the smaller "note-like" emails, and rarely at that). I
think both systems use similar, but differently-populated, Bayesian
classifiers, which seems to be more effective. Additionally, I've got
some of the OCR plugins (for those spams that are images) and all of the
public database lookups (Razor, Pyzor, DNSRBL, RBL, DCC, and more).
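
For anyone who wants to try the same plugins, the loadplugin lines live
in SpamAssassin's .pre files (paths and plugin availability vary by
version and distro, so treat this as the general shape rather than an
exact recipe):

# e.g. /etc/mail/spamassassin/v310.pre
loadplugin Mail::SpamAssassin::Plugin::Razor2
loadplugin Mail::SpamAssassin::Plugin::Pyzor
loadplugin Mail::SpamAssassin::Plugin::DCC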

Running two mailservers in series like this might be helpful for anyone
who's getting spammed to death. Make sure you get powerful CPUs (and
more than one), though.

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Palm vs other smart phones/PDAs

2008-05-09 Thread Coleman Kane
On Fri, 2008-05-09 at 09:45 -0400, Neil Joseph Schelly wrote:
> On Friday 09 May 2008 09:07, Tom Buskey wrote:
> > IMO, the original Palm UI and apps still hold up very well.  I've been
> > using Palm with Unix since I got a Pilot 1000.  I have a Blackberry for
> > work and my wife uses an iPhone.
> 
> That is true - I've been using Palms for nearly a decade now and whenever 
> I've 
> watched someone use a Windows Mobile, Blackberry, or an iPhone, I've only 
> ever wondered how they can get anything done on such crowded screens with 
> such obtrusive interfaces.

I've got a Treo 680 and I've liked it pretty well. Occasionally it
resets itself (like a couple of times per month), but not the kind of
reset where it loses all data; it just reboots. I got it second-hand,
though, so the previous owner might have dropped it in a sink or
something. I had a data plan for a while, and the mini-browser that they
use is pretty nice. I'm a bit dismayed that I can't get an Opera or
Mozilla browser for it like you can for other handhelds.

> 
> I have nothing against the Palm software platform, but the hardware and 
> driver 
> support is lacking lately.  Bluetooth is very hit-or-miss with my Treo 650.  
> The Bluetooth modem has stopped working altogether and headsets can become 
> unpaired and nearly impossible to re-pair simply by running out of battery.  
> There's no voice dialing and I think wifi would be very helpful.  This all 
> amounts to little more than annoying, and I still look at my wife trying to 
> do anything useful with her Windows Mobile phone and realize it's all very 
> minor in comparison at least.

I'm using FreeBSD and I've never had any trouble with bluetooth access
on my device. I've used it both ways for internet access, as well as
using openobex and obexapp to interface with it for sending/receiving
files.

I haven't been able to use the GUI tools to do it, but that's because I
can't find any decent ones that are reliable *and* support bluetooth. I
suppose I could always write one using the commandline tools.

> 
> If I can get my hands on an OpenMoko and decide that I don't like the 
> calendaring or address book in it, I could just run the PalmOS emulator I 
> suppose.  But at least I'll have the hardware that will do whatever I want.  
> I really hope they pull it off - they've been so close for so long, it seems.
> -N

Yeah, I've been following that project too. It totally looks like a
hacker's dream phone.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Palm vs other smart phones/PDAs

2008-05-09 Thread Coleman Kane
On Fri, 2008-05-09 at 09:28 -0400, Brian Chabot wrote:
> 
> Tom Buskey wrote:
> 
> > IMO, the original Palm UI and apps still hold up very well.  I've been
> > using Palm with Unix since I got a Pilot 1000.  I have a Blackberry for
> > work and my wife uses an iPhone.
> 
> I keep an old Handspring Visor Pro handy, myself.  I like that I can
> back it up to a CF card (with memplug) and not have to keep fresh
> batteries in it.  It uses AAA batteries so when I need to use it, I
> install them and load my data from the CF card.  It syncs fine in Linux,
> too.
> 
> I used to own (among many other PDAs) a Palm Treo 350. (Actually, I
> still have it, but it's bricked at the moment...) and a phone accessory
> for the Visor.
> 
> I recently got a Blackberry Curve 8320.  I really like the chicklet
> keyboard and the vast range of communications options (Edge, GMRS, WiFi,
> Bluetooth) and the fact that I can sync it with KDE PIM and back it all
> up to my hard drive.
> 
> Even better is Google's support for calendar sync and Blackberry's
> lightening fast push email.  With J2ME there are a lot of apps
> available, though not as many free ones as with the older Palm OS.  I do
> wish the Blackberry's development kit was a little more open (and Linux
> compatible) but again, there is the option of generic J2ME.
> 
> > I find the other devices don't improve on the basic apps and in the case
> > of the Blackberry's calendar, fall short.
> 
> Yes, the internal calendar alone does fall short.  BUT... the Google
> Calendar sync is quite nice.
> 
> > Palm hasn't updated it significantly.  They've made a number of abortive
> > attempts at modernizing the OS to a Linux based one.  They have added
> > web browsing and phone use.
> 
> Palm dropped the ball IMO when they split their hardware and software
> groups apart.
> 
> Brian

Yeah. After buying Be, Inc.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again

2008-05-15 Thread Coleman Kane
On Fri, 2008-04-25 at 18:01 -0400, Ben Scott wrote:
> On Fri, Apr 25, 2008 at 5:47 PM, Coleman Kane <[EMAIL PROTECTED]> wrote:
> >  As far as I can tell, I need to get in touch with their business reps in
> >  order to figure out a business package that works for me.
> 
>   Yah, their residential division cannot sell the business packages,
> and indeed, are often not even aware of then.  If you seriously want
> to go that route, I suggest identifying yourself to Comcast as a
> business.  If you say you're calling from a residence you'll just
> confuse them.  Say you have a small business office and want service.
> This isn't even necessarily being misleaning; an individual can run a
> sole proprietorship pretty much just by saying they are.
> 
> > I did find their "teleworker" package that must
> >  be purchased in lots of ten by an employer and are a whopping $99 each.
> 
>   Yah, in addition to lousy customer service and draconian AUP,
> Comcast's rates are also quite high.  Good, fast, cheap: Pick none.
> 
> >  When in Cincinnati, I had good service relations with Cincinnati Bell
> >  out there. That may be due in part to them being the only remaining
> >  local telco that wasn't a former vital organ of AT&T...
> 
>   That -- not being a Baby/Big Bell -- actually makes a really big
> difference most of the time.  NH used to have a number of small local
> telcos, who -- from what I've been told -- generally had good service.
>  But anything that used to be Ma Bell -- forget it.  They practically
> invented bad customer service[1].  "We don't care.  We don't have to.
> We're the phone company."
> 
> [1] Well, actually, banks invented bad customer service, but the
> telcos automated it.
> 
> >  Of course, your best bet with their DSL is if you live within the
> >  inner-city limits.
> 
>   Yah, and even that can be really iffy in New England.  Some of the
> outside plant (lines on the poles, junction boxes, etc.) is incredibly
> old and outdated.  It's not at all uncommon to find stuff over 50
> years old, and which hasn't been properly maintained, either.  You're
> lucky to be able to run 28 Kbit/sec modem over it, let alone DSL.  In
> my old hometown of Newton, I remember when they had to replace a large
> junction box because the tree it was nailed to grew far enough to
> start pulling the wires off the termination blocks.
> 
> -- Ben

So... an update to all of this...

I got Verizon DSL this week, and it turns out that they do block some
traffic. They specifically block incoming port 80 traffic and nothing
else, with the explicit reason that they want to block people from
running webservers. I learned this after the salesperson assured me that
they don't block inbound traffic. I was also occupied for two hours
arguing with multiple first-tier technicians who told me (in broken
English) that it had to be my problem and that Verizon/FairPoint doesn't
block *any* inbound traffic. Additionally, their usage policy doesn't
state anything about blocking incoming traffic. It turns out that there
is a paragraph that states that they don't want you to run a server, but
it says that I agree to Verizon reducing my bandwidth or disconnecting
my service if I exceed their (unspecified) bandwidth limits.

Additionally, they don't block any other inbound traffic. So (if I were a
luser), my inbound ports 137-139 are open, as well as port 449 and port
25. So, is it just me, or are they specifically picking on web servers
here? The policy is quite absurd, in my mind. It is almost like they are
choosing to pick on home-web-servers because of some inbred prejudice.
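
If you want to verify what your own ISP filters, a quick probe from a
machine outside their network is usually enough; a minimal sketch (the
hostname here is just a placeholder for your connection's address):

  $ nc -vz example.dyndns.org 80
  $ nc -vz example.dyndns.org 25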

The only upside is that Verizon gave me a 30-day free-trial deal that I can
run out, and I don't have to pay anything before I switch to another
provider. I am looking into mv.com right now as my best option.
Speakeasy is nice, but they are expensive and provide more than I need.
MV sounds great, but the activation fee is high (especially since I am
pretty certain I'll be moving again in August).

I did find another company named DSLExtreme (http://www.dslextreme.com/)
that apparently allows servers and even provides a web-interface for
blocking/unblocking port 25. Additionally, they endorse the use of their
connection for home-serving. There's a helpful FAQ here:
http://www2.dslextreme.com/Support/KB/Details.aspx?questionid=11128

Right now I am looking into them as my best bet. The agent on the phone
has told me that they don't charge activation right now... and the
prices are lower than any of my other options, so they are worth looking
into. The downside is that they're located in
Salt Lake City, UT... so no local office.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Comcast blocks port 25 incoming, yet again (and the evils of Verizon/FairPoint)

2008-05-15 Thread Coleman Kane
On Thu, 2008-05-15 at 14:15 -0400, Ben Scott wrote: 
> On Thu, May 15, 2008 at 1:04 PM, Coleman Kane <[EMAIL PROTECTED]> wrote:
> > I got Verizon DSL this week, and it turns out that they do block some
> > traffic.
> ...
> > I learned this, after the sales person assured me
> > that they don't block inbound traffic.
> 
>   Wow.  I'm shocked -- *SHOCKED* -- to hear that.
> 
>   I know everyone always likes to only pay for what they can get away
> with, rather than paying for what is delivered, but when push comes to
> shove, the TOS/AUP is always the controlling document.  People really
> need to come to terms with that.  What the sales guy or tech rep or
> anyone else says is not worth the paper is isn't written on.  Just
> stop wasting your time (and everyone else's) worrying about what the
> sales person said, because *it doesn't matter*.  The TOS is the boss,
> and the TOS spells this out in clear, unambiguous language.

That is all fine and good, but it doesn't absolve them of the fact that
an agent of the company was not properly informed (perhaps strategically)
about their service regulations. This would have been fine if it stopped
there, but two other technicians also argued with me that the company does
not filter any traffic.

> 
>   Specifically: The TOS of big ISPs pretty much *always* forbid
> hosting services on residential connections.  If you get away with
> more, don't ever forget that you're getting something more than what
> you've been promised, and as such, it can evaporate at any time.  They
> can change it at any time.  They can block TCP/25 ever other day and
> still be within their rights, because they are still giving you
> exactly what they said they would.
> 
>   Don't be surprised when you get exactly what you signed up for.
> 
> > Additionally, their usage policy doesn't state anything about blocking
> > incoming traffic.  It turns out that there is a paragraph that states that
> > they don't want you to run a server ...
> 
>   Um...  they explicitly forbid you from doing what you're trying to
> do.  While they don't say that they may block TCP ports to enforce
> that policy, the fact that *they explicitly forbid you from doing what
> you're trying to do* is kind of a clue, don't ya think?
> 
>   For those of you playing along at home:
> 
> http://www2.verizon.net/policies/tos.asp
> Section 4, Subsection 3

I read this too, and explained to the tech that the paragraph led me to
believe that it fell under their bandwidth regulations, where they have
some maximum bandwidth number (that, of course, they can't tell you)
that will be modified to restrict your traffic. I suppose blocking port
80 might fall under "bandwidth restrictions".

The aggravating thing is that they never actually come out and say these
things. It is like they don't really want to let on that they are
blocking traffic, but they want to do it anyhow.

> 
> > Additionally they don't block any other inbound traffic.
> 
>   So?

I don't think that anybody here can argue that port 80 traffic is more
prone to misuse than traffic on ports 137-139 and 449.

They did not tell me that they are employing this restriction to
safeguard users. The only reason that I was given was that they wanted
to prevent home users from serving web servers. If that is the case,
then the policy should state that port 80 will be blocked. They also
didn't even know if the business-tier blocks port 80 (which, at this
point, I wouldn't even try). As far as I can tell from researching the
matter, Verizon probably blocks port 80 for all but the highest level of
Business DSL (which has up to 29 static IPs).

So, I am left to guess that Verizon doesn't provide a solution for me. I
could ask them, but they cannot be trusted to tell the truth on the
matter, so it is better not to use them at all.

> 
> > The policy is quite absurd, in my mind. It is almost like they are
> > choosing to pick on home-web-servers because of some inbred prejudice.
> 
>   It is extremely rare, in any part of any activity of any kind
> anywhere in the world, to find that a law, rule, or policy is enforced
> with absolute totality.  You don't get a ticket every single time you
> exceed the speed limit.  You don't die every time you do something
> risky in life.  I don't get fired every time I screw off at work.  I
> don't ban people from the list server every time they break a rule.
> This is pretty much the way the entire world works, and thank goodness
> for that.
> 
>   I suspect the reason they're just blocking TCP/80 inbound is that is
> where the problems were.  Whatever motivation they have for blocking
> the hos

Re: Disable environment settings

2008-06-03 Thread Coleman Kane
On Tue, 2008-06-03 at 12:49 -0400, Bill McGonigle wrote:
> On Jun 3, 2008, at 12:36, Kenny Lussier wrote:
> 
> > Unfortunately, people get tired of typing it when they need to run  
> > 100+
> > commands as another user to diagnose a problem.
> 
> Yeah, me too. :)
> 
> Are they special programs or common utilities?
> 
> If they're special, you can wrap them, e.g.:
> 
> /usr/local/bin/foobar:
> 
>#!/bin/sh
>sudo /usr/local/realbin/foobar $*
> 
> If you're dealing with mv, cp, ln, and friends this gets less happy.

Yeah, especially with the always annoying:
/usr/local/sudobin/ln -sf /home/mydir/mypwnedshadow /etc/shadow

> 
> -Bill
> -
> Bill McGonigle, Owner   Work: 603.448.4440
> BFC Computing, LLC  Home: 603.448.1668
> [EMAIL PROTECTED]   Cell: 603.252.2606
> http://www.bfccomputing.com/Page: 603.442.1833
> Blog: http://blog.bfccomputing.com/
> VCard: http://bfccomputing.com/vcard/bill.vcf

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [HUMOR] $500 patch cable

2008-06-16 Thread Coleman Kane
On Mon, 2008-06-16 at 21:12 -0400, Ben Scott wrote:
> Denon AKDL1 Dedicated Link Cable
> http://www.amazon.com/gp/product/B000I1X6PM/
> $500 RJ-45 patch cable
> 
>   Be sure to read the reviews/comments.
> 
> -- Ben

I don't see what's so funny; this is a perfectly cromulent cable for
embiggening your audio experience. ;)

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: When is a UPS battery actually bad? APC SUA750

2008-06-17 Thread Coleman Kane
On Tue, 2008-06-17 at 11:26 -0400, Alex Hewitt wrote:
> On Tue, 2008-06-17 at 11:11 -0400, Thomas Charron wrote:
> > On 6/17/08, Alex Hewitt <[EMAIL PROTECTED]> wrote:
> > > Which basically says - the indicator doesn't necessarily mean the
> > > battery is bad. It has some kind of timer which turns the LED on
> > > theoretically one to two months before the battery "might" need
> > > replacing.
> > 
> >   Woot!  Shared technology between UPS batteries and car engine status
> > lights!  :-D
> > 
> 
> A good chunk of the time the UPS has long since passed it's "sell by
> date" and the users just keep the things plugged in. The batteries are
> useless as is the UPS but hey it's cheaper than buying a new one (that
> works). ;^)
> 
> -Alex
> 

At that point, in my experience, some of them start to become IPSes:
Interrupting Power Supplies. Some of them exhibit a behavior whereby
they gauge battery health by occasionally cutting the power and running
off the battery for a few seconds or minutes, storing how the battery
responds to the power draw in tables for use by the firmware. If the
battery is beyond dead, the unit will simply cause a power cycle for you
and your happily connected devices.
Of course, this is well after the UPS has already just become an
overly-expensive surge protector (with bonus monitoring features!). You
*were* expected to replace the battery at the proper intervals...

Then there are the ones that produce the incessant beeping noise...

I kind of favor my laptop for these reasons, as it has a built-in
UPS ;).

I wasn't aware of the earlier comment about the UPS destroying
surge protectors attached to it (rendering them tap-only). That is
interesting...

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Firefox 3 AwesomeBar

2008-06-18 Thread Coleman Kane
On Wed, 2008-06-18 at 08:01 -0400, Ben Scott wrote:
> On Tue, Jun 17, 2008 at 11:04 PM,  <[EMAIL PROTECTED]> wrote:
> > Typing "gnhl" in the NEW address bar would get me
> > "http://www.gnhlug.org/";, among other URLs.  Typing "gnhl" in the OLD
> > address bar wouldn't do squat.
> 
>   I dunno what's different for you, but for me, Firefox 2.x finds
> "www.gnhlug.org" if I type "gnh".  I don't know if it's doing matching
> on domain labels ("between dots") or simply stripping "www" as a
> special case.
> 
> > There are also situations in which you can remember the name (or a
> > keyword in the title of) a site you visited, but don't remember its
> > domain name.
> 
>   Maybe you do.  For some reason, I rarely did that.  If I needed to,
> I'd pull up a history search to find that sort of thing.
> 
> -- Ben

Ben,

Try: https://addons.mozilla.org/en-US/firefox/addon/6227

Maybe that will help you out some... BTW, it is expected to somewhat
suck at first. After you've been regularly using it for about a week,
the AwesomeBar gets pretty good. I've been tracking the betas on another machine
ever since B3 was announced, and the bar works pretty well for me on
that box.

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Firefox 3 AwesomeBar

2008-06-18 Thread Coleman Kane
On Wed, 2008-06-18 at 12:19 -0400, Ben Scott wrote:
> On Wed, Jun 18, 2008 at 11:34 AM, Coleman Kane <[EMAIL PROTECTED]> wrote:
> > Try: https://addons.mozilla.org/en-US/firefox/addon/6227
> 
>   Aware of OldBar; indeed, I noted it in my original post.  ;-)
> 
>   FYI: OldBar changes the look back to the one-line-per-match format.
> It doesn't change the behavior.  The "about:config" tweaks I posted
> modify that behavior to be more like FF 2.x, although it is not exact.
> 
>   BTW, there have been 4000+ additional downloads of OldBar since last
> night, so apparently I'm not the only one who prefers the old way.
> 
>   For the record, I've got no problem with people who prefer the
> AwesomeBar, or even making it the default.  The main reason I'm pissed
> is there is apparently no way to opt-out of the new style, and the
> developers just dismissed complaints out-of-hand.
> 
> > BTW, it [AwesomeBar] is expected to somewhat suck at first. After you've 
> > been
> > regularly using it for about a week ...
> 
>   Are the specifics of its behavior documented somewhere?  If I knew
> how it work, maybe I could adapt my habits to it.  (Although 5 or so
> years of muscle memory are hard to un-learn.)
> 
> -- Ben

Maybe this helps?

http://developer.mozilla.org/en/docs/Places:Awesomebar

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: HP releases AdvFS under GPL-2

2008-06-23 Thread Coleman Kane
On Mon, 2008-06-23 at 13:48 -0400, Bill McGonigle wrote:
> On Jun 23, 2008, at 12:21, Bayard Coolidge wrote:
> 
> >  But, does AdvFS have any features now
> > that it didn't 6 years ago that are superior to, say, Ext3 or Ext4,  
> > or any other
> > filesystems available under Linux (e.g. ReiserFS, xfs, etc.)? I'd  
> > like to see that
> > question answered - has someone created a feature comparion chart?
> 
> It had some pretty neat snapshotting capabilities we used to use for  
> backups; Linux didn't have this functionality until much later with  
> LVM.  Perhaps there are still some superior aspects of AdvFS over LVM?
> 
> If so, it might be handy since ZFS isn't coming to Linux any time  
> soon, AFAICT, and some apps react poorly to NFS.  Would it be too  
> cynical to suspect that HP simply doesn't want to maintain it anymore  
> but has customers who like it?
> 
> -Bill

I saw this being chattered about on the FreeBSD developers list this morning.
There were comments about it being a potential solution to the lack of ZFS on
Linux (which, BTW, is well supported under FreeBSD). A quick SoTFW
reveals that there is currently a FUSE project for ZFS underway (in
Beta) to get around the CDDL-GPL incompatibilities:
http://www.wizy.org/wiki/ZFS_on_FUSE. Additionally, Sun has indicated
that a port to the Linux kernel is "being investigated", likely to
determine what they themselves can and can't release under the GPL.

I would imagine that your last paragraph is pretty close to the truth.
Facing the possibility of either losing all AdvFS clients to other
systems (Solaris or Linux), they made the play to put it out under the
GPL.

Maybe it is indicative of a larger play by HP into the Linux ring?

Link: http://advfs.sourceforge.net/

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: DRBD

2008-06-24 Thread Coleman Kane
On Tue, 2008-06-24 at 12:19 -0400, Chip Marshall wrote:
> On June 24, 2008, Mark Komarinski sent me the following:
> > Anyway, DRBD is self-contained and is automatic (if the remote system
> > disappears, it'll automatically resync when it reappears). You can
> > encrypt the stream, you can enforce how fast it syncs, and setup is
> > pretty easy.
> 
> Is this a Linux only project, or is it portable to other operating
> systems?
> 
> I'm currently working on a project that involves mirroring some storage
> accross machines, the current thinking to do a UFS snapshot and rsync.
> DRBD sounds like a much better solution, but all my stuff is FreeBSD.
> 

yay. FreeBSD++

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: gnuplot 64 bits?

2008-06-24 Thread Coleman Kane
On Tue, 2008-06-24 at 14:36 -0400, Labitt, Bruce wrote:
> I'm trying to compile gnuplot for 64 bits and large files.  How does one
> set the CXXFLAGS in configure?  configure --help is not helpful.
> 
> Pseudocode
> 
> $ ./configure CXXFLAGS = "-m64 -D_FILE_OFFSET_BITS=64"
> 
> This barfs with no real indication of what to do. :)
> 
> TIA
> 
> -Bruce

I'm not 100% sure, but you should try specifying it as:

./configure CXXFLAGS="-m64 -D_FILE_OFFSET_BITS=64"

The spaces around the "=" in the assignment above probably confuse the
heck out of configure: the shell passes them through as three separate
arguments. The proper way is without the spaces.
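
If you want to see exactly how the shell splits those words, here is a
quick sketch (printf is just standing in for configure):

  $ printf '[%s]\n' CXXFLAGS = "-m64 -D_FILE_OFFSET_BITS=64"
  [CXXFLAGS]
  [=]
  [-m64 -D_FILE_OFFSET_BITS=64]

  $ printf '[%s]\n' CXXFLAGS="-m64 -D_FILE_OFFSET_BITS=64"
  [CXXFLAGS=-m64 -D_FILE_OFFSET_BITS=64]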

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: gnuplot 64 bits?

2008-06-24 Thread Coleman Kane
On Tue, 2008-06-24 at 16:43 -0400, Labitt, Bruce wrote:
> Thanks!  The space did hose things up.  Stuff is building now!  Now to
> see if it fixes the original problem!
> 
> -Bruce
> 

BTW,

If you run "./configure --help", it provides you with all sorts of
verbose information on the vars and flags that are accepted.

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: General Procedure to get ATI/DRI card running?

2008-07-01 Thread Coleman Kane
On Tue, 2008-07-01 at 18:41 -0400, Ben Scott wrote:
> On Tue, Jul 1, 2008 at 6:30 PM, Labitt, Bruce
> <[EMAIL PROTECTED]> wrote:
> > In any case, first I'd like to take a look at finding more current xorg.
> 
>   If you try and replace the X-related packages provided by the
> distro, you'll probably end up having to rebuild practically every
> X-based package on the system.  They will all depend on the X
> libraries.  You'd basically be recompiling the entire distro (minus
> stuff that doesn't use X).

If you are starting from X.org 1.4, this isn't so much the case anymore.
You can update many of the packages independently of one another, taking
care simply to make sure you also get the upstream dependencies that are
affected. For the most part, packages such as Mesa and libX11 have been
developed to maintain a relatively rigid API, so they can be upgraded
without too much breakage to apps.

> 
>   So if you want to have a go at upgrading X, I would suggest doing a
> parallel install of the X server into a different directory (e.g.
> /usr/X11R7.4/ or /usr/local/X11/ or something like that).  You should,
> in theory, be able to run a newer X server, while having all the X
> clients use the libraries shipped with the distribution.  The X
> protocol is known for being fairly backwards compatible.
> 
>   That might get messy for DRI (accelerated 3D) stuff, though.  I have
> no idea how DRI works internally, but the "Direct Rendering" part of
> the name would seem to suggest version differences might matter more.
> :)
> 
> > xorg is not particularly simple to figure out what to do.
> 
>   I've never had to build X, but from what I've been told, it's one of
> the more challenging things to do.

In the monolithic days, it was a terrible mess. It has gotten easier as
they've modularized the codebase. Still, there are many packages and no
straightforward answer as to what to do with them.

> 
> -- Ben

This is one of those cases where source-based distributions rule (and
the main reason I use them exclusively).

I was successfully able to get the new xf86-video-radeonhd (R500/R600
X.org driver) working well by updating the following packages from
freedesktop.org git repositories:
  * dri2proto
  * mesa/drm
  * git
  * glproto
  * inputproto
  * kbproto
  * libX11
  * libXdamage
  * libxcb
  * mesa/mesa
  * xorg-server
  * xf86-input-keyboard
  * xf86-input-mouse
  * xf86driproto
  * xextproto
  * randrproto
  * x11proto
  * libXext
  * libXi
  * libXrandr
  * libpciaccess
  * libxkbfile
  * libxkbui
The rest of my x.org packages are the ones I built from the 1.4
releases, which are in FreeBSD's ports collection. You should be able to
use either the latest xf86-video-radeonhd or xf86-video-ati to get DRI,
AIGLX, EXA, Compositing, and other niceties. I hear the R5xx cards are
easier to support than my RS690.
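
For reference, each of those modules builds in roughly the same way from
git; a minimal sketch for one of them (the repository URL and prefix here
are from memory and only illustrative, so check freedesktop.org for the
current paths):

  $ git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-radeonhd
  $ cd xf86-video-radeonhd
  $ ./autogen.sh --prefix=/usr/local
  $ make
  $ sudo make install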

Beware, it is not for the faint of heart. You will probably have to
suffer through many hurdles like any good beta tester, if you want the
goods. The good news is that it will pay off in the end. I've been
corresponding closely with the guys who've been doing a lot of the
FreeBSD work, and passing my patches and bug reports along. They've been
a very prompt and responsive bunch.

If you go down this path, I recommend getting on *both* the
xf86-video-ati and xf86-video-radeonhd mailing lists (visit
http://www.radeonhd.org/). Additionally, both teams manage IRC channels
on Freenode, where there is nightly activity going on, and you can
usually get interactive help that way.

A good place to get started:
  * http://www.x.org/wiki/radeonhd
  * http://www.x.org/wiki/radeon (less helpful)

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: General Procedure to get ATI/DRI card running?

2008-07-02 Thread Coleman Kane
It is really important to understand, though, that the X.org project is
really one without a "home" or "owner". The X Consortium (x.org) really
only committed to provide some hosting, and maintain a central
repository of protocol, format, and other project-related standards and
specifications. Through its history as X.org and XFree86, it has never
gotten significant support from the hardware vendors that it is expected
to support. Support for 3Dfx hardware, for instance, didn't really get
solid until after 3Dfx shut its doors and released all their docs
on-line for free. When all the packages went modular, it was supposed to
nudge the bigger distributors (RedHat, SuSE/Novell, Debian, etc...) to
maintain their own X.org-derived distributions of X.org software, and
perform stability testing on the feature snapshots and release their own
distributions as development went on on the various freedesktop.org
projects. Basically, to take a more active role in testing and reporting
X.org problems.

To this day, that hasn't happened except in the case of the OpenBSD
project. Now everyone suffers because graphics hardware is getting close
to having a shorter lifespan than X.org releases. The X.org consortium,
to its credit, has finally recognized this and recently announced it is
going to change its release schedule to be more aggressive. The likely
result of this is a much quicker time-to-market for new features, at the
expense of an increase in bugs exposed to the public (and hopefully,
found quicker and fixed quicker).

Today graphics hardware provides all sorts of features not considered by
the developers when X.org 1.3 or 1.4 were released. The development
trees are where all this stuff is being developed (EXA, DRI2, next-gen
RandR code, new DRM). They've done a heck of a lot of overhaul in the
Mesa and Xserver source code trees in the past couple months. The best I
can say is that, following the lists myself and the chats, they are
working really hard at getting this stuff together. You should probably
see 1.5 released with a lot of this new feature-set around August. If
you want to improve the odds of this happening, you should get involved.

I would, at the very least, recommend you try forwarding your email to
the developers list at X.org and also your Linux distribution. Both are
culpable in the problems that you are experiencing.

IMO, every distribution should be maintaining a -devel package for every
one of the released versions of their X.org software modules. We are
never going to get to the point where the features supported in a new
graphics card can be safely and reliably implemented in a new or
existing driver for the X-server in 3 months time, let alone at the
initial release date of the graphics card, if all the distributions
insist upon only using the "last stable release" of X.org packages. At
least with tracking -devel packages, they would be able to give much
greater exposure of newly developed features, without forcing users to
violate their distribution's deployment hierarchy and install from
sources manually (likely leaving out any distro-specific patches).

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: General Procedure to get ATI/DRI card running?

2008-07-02 Thread Coleman Kane
On Wed, 2008-07-02 at 09:38 -0400, Jarod Wilson wrote:
> On Wed, 2008-07-02 at 09:12 -0400, Coleman Kane wrote:
> > It is really important to understand, though, that the X.org project
> > is
> > really one without a "home" or "owner". The X Consortium (x.org)
> > really
> > only committed to provide some hosting, and maintain a central
> > repository of protocol, format, and other project-related standards
> > and
> > specifications. Through its history as X.org and XFree86, it has never
> > gotten significant support from the hardware vendors that it is
> > expected
> > to support. Support for 3Dfx hardware, for instance, didn't really get
> > solid until after 3Dfx shut its doors and released all their docs
> > on-line for free. When all the packages went modular, it was supposed
> > to
> > nudge the bigger distributors (RedHat, SuSE/Novell, Debian, etc...) to
> > maintain their own X.org-derived distributions of X.org software, and
> > perform stability testing on the feature snapshots and release their
> > own
> > distributions as development went on on the various freedesktop.org
> > projects. Basically, to take a more active role in testing and
> > reporting
> > X.org problems.
> 
> Nb: there's now an upstream xorg release manager (Adam Jackson), who
> also happens to be the X lead here at Red Hat.

That's great to hear. I haven't followed RH since FC3, so my comments
above about distro participation might be somewhat dated. I expect that
SuSE, who's leading the radeonhd work, also has some similar thing going
on.

X.org has been assembling "reference releases", but I have recently been
seeing the chatter about having a more active role from the X Consortium
in releases, to get them to happen more frequently.

> 
> > To this day, that hasn't happened except in the case of the OpenBSD
> > project. Now everyone suffers because graphics hardware is getting
> > close
> > to having a shorter lifespan than X.org releases. The X.org
> > consortium,
> > to its credit, has finally recognized this and recently announced it
> > is
> > going to change its release schedule to be more aggressive. The likely
> > result of this is a much quick time-to-market for new features, at the
> > expense of an increase in bugs exposed to the public (and hopefully,
> > found quicker and fixed quicker).
> > 
> > Today graphics hardware provides all sorts of features not considered
> > by
> > the developers when X.org 1.3 or 1.4 were released. The development
> > trees are where all this stuff is being developed (EXA, DRI2, next-gen
> > RandR code, new DRM). They've done a heck of a lot of overhaul in the
> > Mesa and Xserver source code trees in the past couple months. The best
> > I
> > can say is that, following the lists myself and the chats, they are
> > working really hard at getting this stuff together. You should
> > probably
> > see 1.5 released with a lot of this new feature-set around August. If
> > you want to improve the odds of this happening, you should get
> > involved.
> > 
> > I would, at the very least, recommend you try forwarding your email to
> > the developers list at X.org and also your Linux distribution. Both
> > are
> > culpable in the problems that you are experiencing.
> 
> My distribution of choice is already shipping a 1.5 pre-release with all
> these goodies. :)
> 
> Funnily enough though, Fedora 9 actually got a lot of flak for shipping
> with the 1.5 pre-release code, mostly since the binary nvidia driver was
> broken at release time... Overall though, its definitely been worth it,
> particularly the new randr stuff for my own usage.

Yeah, same here. I gave up on most binary-only distributions because of
their being tied to software that was way out of date, and many of the
maintainers' reluctance to forward-port more frequently. I'm actually
tracking on the 1.6 development branch (xorg-server master from fd.o
git), and it runs pretty well.

In all, my platform consists of:
  * FreeBSD 8.0-CURRENT, sources as of two nights ago, with some local
    patches that I've got to have to get around the lack of a PCI MMIO
    remapper in FreeBSD and that the HP BIOS writers overlap my AHCI and
    HD-Audio MMIO regions (yay!)
  * The latest masters from the git repository for the packages that I
    listed earlier, from freedesktop.org
  * FireFox-3
  * All on amd64

Generally, with the exception of maybe once or twice every three months,
I have a perfectly stable, reliable system, and can be sure that I can
safely update any number of those packages to get the latest features,
yet

RE: General Procedure to get ATI/DRI card running?

2008-07-09 Thread Coleman Kane
On Wed, 2008-07-09 at 15:13 -0400, Labitt, Bruce wrote:
> No joy so far.  Still getting Mesa GLX Indirect.  Any other ideas?
> 
> Does the order in the file matter?

Did you update to the latest development versions of mesa, drm,
dri2proto, xorg-server, and friends?

Also, you need to enable AIGLX in your xserver.
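
For the AIGLX part, the relevant xorg.conf pieces are only a few lines; a
minimal sketch (the rest of each section is left out here):

  Section "ServerFlags"
      Option "AIGLX" "on"
  EndSection

  Section "Module"
      Load "glx"
      Load "dri"
  EndSection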

If you post your /var/log/Xorg.0.log after running an unsuccessful X
session, it will be easier to diagnose the problem.

> 
> 
> From: Arc Riley [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, July 09, 2008 2:06 PM
> To: Labitt, Bruce
> Cc: Greater NH Linux User Group
> Subject: Re: General Procedure to get ATI/DRI card running?
> 
> Looks like you're missing the glx module, based on your paste not including 
> it.
> 
> Section "Module"
> Load  "glx"
> 
> In the future I'll be sure to ask what distro you're running before 
> recommending hardware.  Apparently everyone that isn't running Gentoo or 
> another up-to-date distro is a second-class citizen left to toil in the 
> fields if they want anything even remotely new.
> 
> # emerge -av xf86-video-radeonhd 
> On Wed, Jul 9, 2008 at 1:53 PM, Labitt, Bruce <[EMAIL PROTECTED]> wrote:
> Arc,
>  
> My kernel is 2.6.18-92.1.6.el5
>  
> in /etc/X11/xorg.conf I have
>  
> Section "Device"
>   Identifier "Videocard0"
>   Driver "radeonhd"
> EndSection
>  
> Section "Screen"
>   Identifier "Screen0"
>   Device "Videocard0"
>   DefaultDepth 24
>   SubSection "Display"
>  Viewport 0 0
>  Depth    24
>   EndSubSection
> EndSubSection
>  
> Regards,
> Bruce
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 
-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: General Procedure to get ATI/DRI card running?

2008-07-09 Thread Coleman Kane
0)
> (II) RADEONHD(0): PCI FB Address (BAR) is at 0xC0000000 while card
> Internal Address is 0xE000
> (II) RADEONHD(0): Mapped FB at 0x2b66ffa81000 (size 0x1000)
> (II) RADEONHD(0): Using 3495 scanlines of offscreen memory
> (II) RADEONHD(0): Using XFree86 Acceleration Architecture (XAA)
>   Screen to screen bit blits
>   Solid filled rectangles
>   8x8 mono pattern filled rectangles
>   Indirect CPU to Screen color expansion
>   Solid Lines
>   Scanline Image Writes
>   Offscreen Pixmaps
>   Setting up tile and stipple cache:
>   32 128x128 slots
>   32 256x256 slots
>   14 512x512 slots
> (==) RADEONHD(0): Backing store disabled
> (==) RADEONHD(0): Silken mouse enabled
> (II) RADEONHD(0): Setting up "1920x1200" ([EMAIL PROTECTED])
> (II) RADEONHD(0): Shutting down DAC A
> (II) RADEONHD(0): Shutting down DAC B
> (II) RADEONHD(0): Shutting down TMDS B
> (II) RADEONHD(0): Using HW cursor
> (==) RandR enabled
> (II) Initializing built-in extension MIT-SHM
> (II) Initializing built-in extension XInputExtension
> (II) Initializing built-in extension XTEST
> (II) Initializing built-in extension XKEYBOARD
> (II) Initializing built-in extension XC-APPGROUP
> (II) Initializing built-in extension SECURITY
> (II) Initializing built-in extension XINERAMA
> (II) Initializing built-in extension XFIXES
> (II) Initializing built-in extension XFree86-Bigfont
> (II) Initializing built-in extension RENDER
> (II) Initializing built-in extension RANDR
> (II) Initializing built-in extension COMPOSITE
> (II) Initializing built-in extension DAMAGE
> (II) Initializing built-in extension XEVIE
> (EE) AIGLX: DRI module not loaded
> (II) Loading local sub module "GLcore"
> (II) LoadModule: "GLcore"
> (II) Loading /usr/lib64/xorg/modules/extensions/libGLcore.so
> (II) Module GLcore: vendor="X.Org Foundation"
>   compiled for 7.1.1, module version = 1.0.0
>   ABI class: X.Org Server Extension, version 0.3
> (II) GLX: Initialized MESA-PROXY GL provider for screen 0
> (**) Option "CoreKeyboard"
> (**) Keyboard0: Core Keyboard
> (**) Option "Protocol" "standard"
> (**) Keyboard0: Protocol: standard
> (**) Option "AutoRepeat" "500 30"
> (**) Option "XkbRules" "xorg"
> (**) Keyboard0: XkbRules: "xorg"
> (**) Option "XkbModel" "pc105"
> (**) Keyboard0: XkbModel: "pc105"
> (**) Option "XkbLayout" "us"
> (**) Keyboard0: XkbLayout: "us"
> (**) Option "CustomKeycodes" "off"
> (**) Keyboard0: CustomKeycodes disabled
> (WW) : No Device specified, looking for one...
> (II) : Setting Device option to "/dev/input/mice"
> (--) : Device: "/dev/input/mice"
> (==) : Protocol: "Auto"
> (**) Option "CorePointer"
> (**) : Core Pointer
> (==) : Emulate3Buttons, Emulate3Timeout: 50
> (**) : ZAxisMapping: buttons 4 and 5
> (**) : Buttons: 9
> (II) XINPUT: Adding extended input device "" (type:
> MOUSE)
> (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD)
> (--) : PnP-detected protocol: "ExplorerPS/2"
> (II) : ps2EnableDataReporting: succeeded
> 
> -Original Message-
> From: Coleman Kane [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, July 09, 2008 3:25 PM
> To: Labitt, Bruce
> Cc: Arc Riley; gnhlug-discuss@mail.gnhlug.org
> Subject: RE: General Procedure to get ATI/DRI card running?
> 
> On Wed, 2008-07-09 at 15:13 -0400, Labitt, Bruce wrote:
> > No joy so far.  Still getting Mesa GLX Indirect.  Any other ideas?
> > 
> > Does the order in the file matter?
> 
> Did you update to the latest development versions of mesa, drm,
> dri2proto, xorg-server, and friends?
> 
> Also, you need to enable AIGLX in your xserver.
> 
> If you post your /var/log/Xorg.0.log after running an unsuccessful X
> session, it will be easier to diagnose the problem.
> 
> > 
> > 
> > From: Arc Riley [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, July 09, 2008 2:06 PM
> > To: Labitt, Bruce
> > Cc: Greater NH Linux User Group
> > Subject: Re: General Procedure to get ATI/DRI card running?
> > 
> > Looks like you're missing the glx module, based on your paste not
> including it.
> > 
> > Section "Module"
> > Load  "glx"
> > 
> 
-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: General Procedure to get ATI/DRI card running?

2008-07-09 Thread Coleman Kane
On Wed, 2008-07-09 at 16:58 -0400, Labitt, Bruce wrote:
> Umm, thanks for your frank assessment. 
> 
> So which is the lesser of evils - using the AMD/ATI proprietary drivers
> for 3D, or totally rebuilding my system from the ground up?  I presume
> that I will still have to mess around to get things going.  I've fooled
> around with this a few days now, I don't like wasting my time - I have
> plenty to do.  

Have you tried their proprietary drivers on your current system yet? Do
they work on such an old server?

You could always move to a Linux distro that has much newer components
to it, and start from there. The reason I posted "slackware" was just
that I've already done that route and felt it would actually be faster
to do than to shoehorn the development-class X server components into
your current system. It will be much cleaner.

If you were to just go and download all the development code for the
X.org modules and start building them, you would start to run into
compiler problems where some of the X.org headers that you have in
your /usr/include/* need to actually be removed so that they don't
override package-local versions of those headers. I don't have a
verified list of which ones they were but there are a bunch of them. So,
by trial and error you would waste immense time trying to get these
packages built for your system.

Starting from a fresh, empty base, you are more likely to have a full
working product much quicker.

> 
> If I were to do this from the ground up, which distro to choose?  Why
> slackware?  Why not Gentoo?  I suppose I can have a daily overnight
> update and recompile everything for the morning.  
> 
> I had originally wanted a relatively stable system.  It appears I can't
> get any work done with a stable system :(
> 

If you want to keep a stable system, you won't be able to easily do that
with cutting-edge hardware AND get all the cutting-edge features. This
is even beginning to be the case with Windows nowadays too (and they
have no excuse). 

From my experience, your options are:
  - Cutting edge system
  - Stable system

Choose one. :-)

In my case, I chose the first and use FreeBSD. The "cutting edge" is
"stable enough" for me, but I would never deploy a system like this onto
a bunch of office workstations. I would probably use hardware that is at
least a whole year old, and install FreeBSD 6.2 on them, after verifying
that all of the hardware has an existing track record of working well
under FreeBSD (either by buying a test system first, or researching it
online from someone else who's already bought the hardware).

> Any other solutions available?  Second opinion?  Anyone?
> 
> Bruce

Maybe it would be worth your time to investigate using the most recent
development snapshot of the xf86-video-ati driver, from its git repo? It
*might* be more compatible with older X servers, as it is at least that
old. The build/install procedure is pretty similar to what you've
already done with the radeonhd driver from what I can tell. You'll just
want to change the "radeonhd" into "radeon" in your conf file after you
build and install the driver.
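
In other words, keep the Identifier you already have and only swap the
driver name, so the Device section ends up looking something like:

  Section "Device"
      Identifier "Videocard0"
      Driver "radeon"
  EndSection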

> 
> -Original Message-
> From: Coleman Kane [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, July 09, 2008 4:37 PM
> To: Labitt, Bruce
> Cc: Arc Riley; gnhlug-discuss@mail.gnhlug.org
> Subject: RE: General Procedure to get ATI/DRI card running?
> 
> On Wed, 2008-07-09 at 16:19 -0400, Labitt, Bruce wrote:
> > Arc led me to believe that I did not have to do that yet.  He said
> that
> > the drm did not support radeonhd yet.
> > 
> > Believe me, this is more complicated than I had anticipated... :)
> > 
> > Here is the logfile
> > 
> 
> First of all, I can tell just by looking at this log output that you are
> in for a long headache. Your X server is over 2 years old, and won't be
> able to support DRI on the radeonhd. Your X server might not even
> support AIGLX on many of the drivers that will work with its older DRI
> implementation today.
> 
> The latest X server is v1.4.1, and you are using v1.1.1. The oldest one
> that will support DRI using radeonhd is v1.4.99.something, from the v1.5
> snapshots branch in the xorg-server git repository.
> 
> Basically, you are trying to use a brand new driver for a brand new
> piece of hardware with an ancient installation of X-Windows. If your
> distro at least had a v1.4+ X-server, you might be able to get by just
> by rebuilding about five modules.
> 
> Likely, you will need to rebuild almost all of X from scratch, and try
> to make sure that it doesn't accidentally bring in headers from the old
> X installation.
> 
> IOW, to get it working on your system, you are in for a wild ride. It i

RE: General Procedure to get ATI/DRI card running?

2008-07-09 Thread Coleman Kane
On Wed, 2008-07-09 at 17:29 -0400, Labitt, Bruce wrote:
> Hmm, not sure I’m scared of Gentoo – I don’t know enough to be
> scared!  I’ve used SuSE in the past, it is ok.
> 
>  
> 
> How hard is it to set up Gentoo?
> 
>  
Visit:
  * http://www.gentoo.org/doc/en/handbook/index.xml

Pick your architecture from the first row of the first table, with the
description labeled "Latest version, one page per chapter, perfect for
online viewing".

Go read Chapter 1. If it makes sense to you then that is how easy it
will be.

> 
>
> __
> From: Arc Riley [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, July 09, 2008 5:27 PM
> To: Labitt, Bruce
> Cc: Coleman Kane; gnhlug-discuss@mail.gnhlug.org
> Subject: Re: General Procedure to get ATI/DRI card running?
> 
> 
>  
> 
> Everyone I work with who uses the radeonhd drivers uses Gentoo.
> 
> I agree with Coleman's assessment - it was said earlier in this thread
> that you'd likely need to upgrade your X server, it really is ancient,
> and likely Mesa too.
> 
> The output shows that the radeonhd driver does support your card and
> is detecting it, but something else is going wrong down the chain.
> Since newer Mesa's have expanded OpenGL support (ie, OpenGL 2.0) some
> apps may not even work unless you're running a semi-recent version of
> it.
> 
> If Gentoo scares you, the newest OpenSUSE may be your best bet.
> 
> 
-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


On the evils of Comcast

2008-07-11 Thread Coleman Kane
I saw this in the news today, FCC ruled *against* COMCAST. Since this is
like the "secondary topic" of the mailing list, here's the link:

  * http://biz.yahoo.com/ap/080711/internet_regulation.html?.v=5

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: sudo problems. anyone feeling up to it?

2008-07-21 Thread Coleman Kane
On Mon, 2008-07-21 at 14:25 -0400, Shawn O'Shea wrote:
> 
> 
> 
> The problem is that under the FF release, sudo is acting
> broken, i.e., not
> like the man page sez it's supposed to. Under FF, I lose my
> HOME envvar.
> I'm not supposed to lose it.
> 
> 503 > sudo python
> Python 2.5.1 (r251:54863, Mar  7 2008, 04:10:12)
> [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on
> linux2
> Type "help", "copyright", "credits" or "license" for more
> information.
> >>> import os
> >>> os.system('bash')
> [EMAIL PROTECTED]:/home/sorr# echo $HOME
> 
> 
> We starting "losing" some environment variables when we upgraded Macs
> from OS X 10.4 to 10.5. 10.5 includes a new sudo that wipes the
> environment other than specific variables it says to keep. We needed
> to add lines to sudoers to keep our proxy server environment
> variables. Maybe adding something like this would fix HOME?
> 
> Defaults env_keep += "HOME"
> 
> -Shawn 

Yeah, this is the trick I've used to handle this case, as well as for
other variables I'd like to pass (such as SSH_ORIGINAL_COMMAND).
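
A minimal sketch of what that looks like in sudoers (edit with visudo;
the second variable is just an example of another one you might keep):

  Defaults env_keep += "HOME"
  Defaults env_keep += "SSH_ORIGINAL_COMMAND"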

> 
-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Booting NOT-Windows

2008-08-20 Thread Coleman Kane
On Wed, 2008-08-20 at 14:25 -0400, Peg Harris wrote:
> I realize this isn't a Linux question, but maybe there's a Linux
> answer ;)
>  
> I have a Dell laptop, that should be running XP, and is less than a
> year old.  This morning, it decided that there was a file missing and
> it won't completely boot.  It doesn't even want to do a "SAFE" boot.
> I put in XP's "recovery CD" and that lands me into a recovery console
> (which is basically a DOS prompt.)  I figured if I can at least backup
> some of my significant files, I won't even mind reinstalling the whole
> system, since there aren't too many "extras" (just a few freebies,
> like Open Office and Gimp) installed.
>  
> If I boot into this recovery mode with my external USB disk plugged
> in, I can see files on the main hard drive and on this USB disk.  But,
> I can not copy from one to the other.  I called Dell support, and was
> told that since the initial installation did not give the
> "Administrator" account a password, I've missed out on some useful
> recovery tools (that would have been a nice warning to get during the
> install.) 
>  
> Since this machine has the ability to boot off a USB device or a CD, I
> wonder if I can boot up something else that will see all of my other
> disks, and let me copy my XP files onto my USB drive before I end up
> shipping this machine back to Dell (who said they'd just blindly
> replace the hard drive) or reinstalling from scratch.  
>  
> Peg

Peg,

You may want to look into KNOPPIX:
http://www.knopper.net/knoppix/index-en.html

It comes with the NTFS-3G FUSE driver, which I have heard is pretty good
at reading most of the newer features of NTFS in WinXP+. I don't know
about writing.
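
Once it boots, copying files off would look roughly like this (the device
names and paths are only examples; check the output of dmesg and
"fdisk -l" for the real ones):

  # mkdir -p /mnt/windows /mnt/usb
  # ntfs-3g -o ro /dev/sda1 /mnt/windows
  # mount /dev/sdb1 /mnt/usb
  # cp -a "/mnt/windows/Documents and Settings" /mnt/usb/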

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: (OT) Laptop Repair

2008-08-21 Thread Coleman Kane
On Thu, 2008-08-21 at 09:45 -0500, [EMAIL PROTECTED] wrote:
> Good Morning.
> 
> My daughter has a Dell Inspiron (5100) and she dropped it. Consequently, the 
> LCD is cracked in a couple places.
> I assume Dell sold a bazillion of these machines, so also, I assume parts are 
> available...
> 
> Where might I go in the Southern NH or Mass. areas, to
> get this Laptop repaired ???
> 
> Thanks In Advance
> 
> paulc
> 

If you are adventurous, check out eBay for similar laptop models and see
if you can track down an LCD there (you may need the proper
inverter board for it too). You'd be surprised at how easy it actually
is to disassemble laptops and service them. The key is knowing where to
look for the screws, snaps, etc...

Generally, a lot of people on eBay will sell dead laptops for parts for
really cheap.

-- 
Coleman Kane


signature.asc
Description: This is a digitally signed message part
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Fwd: New local Linux Kernel Contract...

2009-03-24 Thread Coleman Kane
On Tue, 2009-03-24 at 17:55 -0400, James R. Van Zandt wrote:
> 
> 
> ...must put up with management that still wants Word .doc format documents...

Try OpenOffice.org... it actually works really really well now (contrary
to the notorious history of the 2.x series).

In fact, I've found it to be the savior for the numerous people who
are forced by others to read OOXML documents (.docx) but have already
shelled out for Office 2003. OpenOffice.org will open these.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] goofy-expensive Denon cables - spoof?

2009-09-30 Thread Coleman Kane
On Wed, 2009-09-30 at 15:43 -0400, Ben Scott wrote:
> On Wed, Sep 30, 2009 at 1:18 PM, Michael ODonnell
>  wrote:
> >> Greg, it's obviously your Ethernet cables.  I bet
> >> they're no name.  I suggest you give the Denon AK-DL1's
> >> (http://www.usa.denon.com/ProductDetails/3429.asp) a try.
> >
> > I really want to believe that's a spoof site but, if so, their
> > deadpan is very good.  OMFG, do people actually order that stuff?
> 
>   As usual, GNHLUG is ahead of the curve.  We riff'ed that back in
> June '08.  ;-)
> 
> http://www.mail-archive.com/gnhlug-discuss@mail.gnhlug.org/msg23181.html
> 
> -- Ben
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 

My personal favorite has always been the following review:
  * 
http://www.amazon.com/review/R2VDKZ4X1F992Q/ref=cm_cr_pr_viewpnt#R2VDKZ4X1F992Q

of Marjorie Flack's wonderful introductory guide to network
troubleshooting entitled "The Story About PING"

-- 
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How Apple makes more profit on their systems...

2009-10-04 Thread Coleman Kane
To be completely fair, there are a considerable number of hardware
components in a Mac beyond screen size, RAM, hard drive space, and CPU.

Let's take the screen; I have some experience in this dept., as I've
been working on a project for the past couple of years that has evaluated
about five different LVDS displays (the same type used in laptops).
Pricing varies quite a bit in this department when you consider
properties other than screen size, such as: the number of light levels
for the RGB bands of your display, the overall maximum brightness,
whether it is backlit from 1, 2, or all 4 sides, and what the actual
physical resolution is (my HP by default came with a 1400x864 display,
but for added $$$ I got the 1680x1050 size screen). In addition to this,
there is the clarity (at the brightest setting, how much of the light
produced by the LCD manages to pass through to the user, versus getting
diffused and scattered) and the viewing angle, to cite two examples.

For the Hard Drive, I can get an extremely cheap 4800rpm 320GB drive, or
I can get (for a higher price) a faster 7200rpm
low-random-access-latency drive for my laptop. In addition to this,
there's the question of which ATA controller both laptops use, which one
is more expensive, and which one is faster.

For the RAM, there's always the question of the CAS latency and the
timings that are programmed into the SPD chip. Lower latency modules and
faster timings of course mean that your system uses fewer bus cycles to
fetch/store data in RAM.

Then there are the keyboard, trackpad, battery, and laptop power
management and cooling system, all of which Apple develops in-house, but
HP likely only does the cooling system part of this list themselves.
Apple has spent considerable time over the years attempting to perfect these
components, and I still feel that the keyboards and trackpads on my
PowerBook are the best that I've played with. Even many of my
office-mates have switched to using an Apple keyboard for their PC's
because they are USB, type very very nicely, and are very sturdy yet
small.

Anyhow, getting back to my point, I decked out an HP Compaq 6700
series (which is one of the sturdier business models that actually uses
metal alloys for some of the external case), selected a lot of
components that were higher-end choices for that laptop, and managed to
arrive at a price that was slightly higher than the comparable Apple
model.

It depends upon what you're looking for in a laptop, and Apple is still
a niche vendor, so it is unlikely they're targeting you, but as far as I
can tell, they use more expensive components, and I think that's how
they arrive at a more expensive laptop.

As for the VGA adapter, disassemble it and see if it is any more
complicated than a simple re-wire.

On Sun, 2009-10-04 at 09:38 -0400, Jefferson Kirkland wrote:
> While I find the Apple OS to be pretty sweet, mostly due to the fact
> that it is Unix based now, I just don't see any justification for the
> cost of their systems.  Someone I follow on twitter tried to convince
> me of how cheap their systems are and I ended up halting the
> conversation with a small comparison.  Take the low end Mac Book.  It
> has a 13" screen, 2 gig ram, 160 gig hard drive and a 2.13 Ghz
> processor, starts at $999.  Meanwhile, my HP Pavillion dv7 laptop
> (only about 3 months old at this point) has a 2.10 Ghz processor, 4
> gig ram, 320 gig hard drive and a 17" screen cost  $649.  $350 cheaper
> and I get so much more.  On my last laptop, a Dell Inspiron 9200, I
> was able to (about a year ago) install and run Apple OSx 10.4.  While
> I really liked it and enjoyed the chance to play with it, I did not
> have the time to dedicate to work on getting the wireless working.
> (yes, a driver was available and I have it downloaded).  
> 
> My opinion is, unless you are either a Mac aficionado or have some
> reason for running OSx over Windows or Linux, I just cannot justify
> the cost of their machines.  But, that is my opinion.  
> 
> Regards,
> 
> Jeff
> 
> 
> 
>  
> 
> On Sun, Oct 4, 2009 at 9:12 AM, Alex Hewitt 
> wrote:
> Yesterday some friends asked me to accompany them to the Apple
> store in
> Salem to help them purchase a Mac. I had talked to them
> previously about 
> some of the advantages of the platform including decent
> reliability and
> in their case the much lower amount of malware targeting the
> system.
> 
> But before going I decided to check out the Apple web site.
> They were
> planning on buying a Mac Mini which is probably Apple's best
> bargain for
> their budget. Recently a customer had purchased the current
> (early 2009)
> model and I already knew that if they were going to use their
> VGA CRT
> type monitor they were going to need an adapter. The Mac Mini
> used to
> have a full size DVI connector on the back capable of

Re: How Apple makes more profit on their systems...

2009-10-05 Thread Coleman Kane
On Mon, 2009-10-05 at 13:46 -0400, Ben Scott wrote:
> On Mon, Oct 5, 2009 at 12:42 PM, Tom Buskey  wrote:
> >>> ... I got a mini displayport to composite adapter.  *bzzt*.  
> >>> ... the mini just works for everything I want to do. ...
> >>
> >>  Reality Distortion Field is in effect, I see.
> >
> > Nope.  I wanted ... While I waited to get the HDTV, it didn't work with
> > the old TV ...
> 
>   The fact remains that everything "just worked" only after you
> adjusted your definition of "just worked".  This appears to be a
> common symptom of RDF exposure: Repeated claims of "it just works",
> but careful attention notices that any scenario in which it does not
> or would not "just work" is excluded by changing the terms of the
> test.  It becomes impossible for things to not "just work" by careful
> manipulation of the scenario.
> 
> -- Ben

I think, in general, the claim that Mac stuff "just works" with other Mac
stuff is implicitly a comparison to how PC stuff typically doesn't "just
work" with other PC stuff.

However, I think that all of us techies can pretty much agree that
nothing in computing (whether PC, Mac, or Mainframe) ever really "just
works" as much as we want it to, and that's why we're so dedicated to
FOSS.

Case in point:

OS X never "just worked" with my old HP psc1315 inkjet
Printer/Scanner/Copier, which claimed support "out of the box". I
actually had to install ESP GhostScript, CUPS 1.3.x, HPLIP, and the
Foomatic filter database over the top of the versions that shipped with
Mac OS X. The ramification of this was that I had to go back and
reinstall all of those packages when a new Mac OS X patch-release
clobbered them via Apple Software Updater. However, using the HPLIP
driver package, I was able to get it to "just work" under GNU/Linux
systems as well as under FreeBSD systems. This includes the
scanner/copier portion of the device.

Later, when Leopard (10.5) was released, they "fixed" some of the
problems by simply removing support for the built-in scanner/copier
portions.

Even in Windows, the support for the printer was pretty lackluster. I
had to resort to serving it up via CUPS with a PostScript filter, and
using the ImageWriter driver from Windows.

Of these platforms, GNU/Linux and FreeBSD are the only two that actually
empowered the user to investigate and attempt to solve the problems. The
other two are quite hostile to this approach, and a user could easily
render their system unusable, incompatible with another software
package, or have their work unwittingly undone by a future software
update.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: How To Ask Questions The Smart Way

2009-10-10 Thread Coleman Kane
On Sat, 2009-10-10 at 09:04 -0400, Alex Hewitt wrote:
> 
> Lori has hit it on the head.  The document reeks of "us and them". I 
> taught programming for several years at a community college. I told my 
> students that there were no stupid questions. I told them that if they 
> asked me a question 5 times I'd answer them every time. I told them that 
> I'd wonder about them around the third time they asked but never the 
> less I'd answer them.
> 
> Working in the industry I found myself working with people at widely 
> varying skill levels. My favorite people to work with were those who 
> were both brilliant and who had a self deprecating sense of humor.  One 
> engineer in particular, our kernel architect was incandescently 
> brilliant. Of the 300+ engineers who worked with him, virtually all felt 
> that they weren't qualified to carry his lunch bag into his office. 
> Instead of being a pain in the ass to work with he was always cheerful 
> and made you want to impress him that you had done your homework before 
> "bothering" him with your (for him) trivial question. His attitude and 
> style made people want to work with him. Other, otherwise bright 
> engineers would crap on anyone who approached them with less than 
> wonderful questions. Needless to say they didn't get nearly as much 
> cooperation as they might have otherwise gotten.
> 
> In the engineering field you sometimes hear the term "ego-less 
> programming". I have found that those ego-less programmers are quite 
> often the best.
> 
> So ESR's document is reasonable in terms of explaining why and how 
> someone should do their research in order to get better results but the 
> tone is borderline nasty.
> 
> One other small note - on one compiler project that I worked on, newbies 
> were looked on as another chance to get things right.  The newbie, not 
> knowing all the ingrained habits of the seasoned developers wouldn't 
> understand poorly written or incorrect documentation. They wouldn't 
> configure their environment to avoid the build problems which inevitably 
> creped  into  project resources. They usually improved the product 
> because they didn't know what they were supposed to know...
> 
> -Alex

As far as RTFM goes, have any of you sat down and read some of the
manual pages that sometimes accompany certain families of free software?
Notorious are those from the FSF and OpenBSD communities, which can
reek of the same sort of elitist badgering: for instance, the FSF man
pages' tendency to very tersely direct the reader to the more
painful-to-use "info" pages, or OpenBSD's commitment to slamming "other
unixes'" implementations of tools or APIs.

Not exactly newbie-friendly, IMHO.

-- 
Coleman

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Gnu-autotools

2009-10-14 Thread Coleman Kane
On Tue, 2009-10-13 at 21:17 -0400, D. Bahi wrote:
> Lori Nagel wrote:
> > I was wondering where I could find some good tutorials on gnu-autotools and 
> > using them in free software projects -thanks. 
> > 
> 
> I would also like to see a condensed tutorial that would demystify this
> art for me... till then I've made use of these:
> 
> the IDE "Anjuta"
> 
>   http://projects.gnome.org/anjuta/features.shtml
> 
> has a project wizard (using autogen) to create autotool based projects.
> it also has an autotool based project manager.
> 
> 
> more generically
> 
> short (hah) - http://fsmsh.com/2753
> 
> longer - http://sourceware.org/autobook/

Sourceware's HOWTO has always been a bit outdated. There is currently a
push in the automake and autoconf projects to better structure their
work, hopefully deprecating the 'aclocal' tool in favor of a better
alternative in the future.

> 
> and sources:
> 
> http://www.gnu.org/software/autogen/
>   http://www.gnu.org/software/autogen/manual/autogen.html
> 
> http://www.gnu.org/software/automake/
>   http://www.gnu.org/software/automake/manual/automake.html
> 
> http://www.gnu.org/software/autoconf/
>   http://www.gnu.org/software/autoconf/manual/autoconf.html
> 
> http://www.gnu.org/software/libtool/
>   http://www.gnu.org/software/libtool/manual/libtool.html
> 

I've found the source code for the X.org projects to be a set of good
examples for the common use cases for Autotools:
  * Build and Install a shared library using libtool
  * Build and Install an executable program
  * Build and/or Install a package consisting only of Header files
  * Build and/or Install a non-software package (like a set of bitmaps 
or fonts)

You can view them at:
  * http://cgit.freedesktop.org/

Typically, the team there attempts to upgrade their configure.ac files
(the proper name now, so stop using 'configure.in') as well as their
Makefile.am files to track the features and syntax in the latest
releases from autotools.
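
For anybody who just wants to see the moving parts, here is a minimal
sketch of the two files for a lone C program (the project and file
names are made up for illustration; they are not taken from any real
tree):

configure.ac:
  AC_INIT([hello], [1.0])
  AM_INIT_AUTOMAKE([foreign])
  AC_PROG_CC
  AC_CONFIG_FILES([Makefile])
  AC_OUTPUT

Makefile.am:
  bin_PROGRAMS = hello
  hello_SOURCES = hello.c

Bootstrap and build with:
  autoreconf --install
  ./configure
  make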

You can also ask me... I'm a pretty good resource for this stuff. I've
been playing with it for a long long time, and I keep track of the devel
lists for libtool, automake, and autoconf.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: It's official: Linux has become Microsoft Windows

2009-10-16 Thread Coleman Kane
On Fri, 2009-10-16 at 10:59 -0400, Tom Buskey wrote:
> 
> 
> On Thu, Oct 15, 2009 at 10:47 PM, Ben Scott 
> wrote:
>  This is actually from 2005, but I just found it now:
> 
> https://lists.ubuntu.com/archives/desktop-bugs/2005-August/002500.html
> 
>  Yes, that's right.  Rather than fix broken software, the
> sanctioned
> course of action is to reboot the system if HAL or DBus need
> to be
> restarted/refreshed.
> 
>  Can anyone recommend a Free, Unix-like operating system that
> supports a wide variety of hardware?  That used to be Linux,
> but it
> now fails on the second item.
> 
>  
> NetBSD comes closest, especially for CPU architectures.  FreeBSD might
> beat NetBSD for peripherals.  I'm not sure if OpenBSD is head of
> OpenSolaris.  Darwin is another possibility.
> 
> Of course, these are Unix systems and you asked for Unix-like (which
> linux technically is).
> 
> Haiku probably isn't unix-like enough.  Is Hurd far enough along yet?
> Debian on BSD or Hurd?
> 
> What about a Linux distro that doesn't use HAL or DBus.  Slackware?

I think you're confusing "all of Linux" with Ubuntu. The subject should
be "Ubuntu has become Microsoft Windows", or maybe even "GNOME has
become Microsoft Windows".

In my case, I am not using GDM or XDM or any of the other *DMs; instead
I just run X from the command line. If I upgrade hald or dbus, I simply
log out of X11 (using GNOME's "System->Log out ..."), then run startx
again. No reboot necessary. This is running on FreeBSD, of course. Some
of you might argue that this amounts to a reboot; let me assure you,
from a time-consumed perspective it most certainly does not.
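
For the curious, the whole cycle on my FreeBSD box looks roughly like
the sketch below (the rc.d script names come from the ports packages
here and assume the services are enabled in rc.conf; run the restarts
as root):

  # after upgrading the packages, and after logging out of X11:
  /usr/local/etc/rc.d/dbus restart
  /usr/local/etc/rc.d/hald restart
  # then, back as my regular user:
  startx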

Ubuntu gratuitously seems to want a reboot for any upgrade of a service
process running in X, or in the init system. This seems to be a
heavy-handed anti-foot-shooting measure intended to ensure a stable
experience at the expense of some efficiency.

As far as all the complaints go in that issue, there is still one
striking difference between Ubuntu and Microsoft Windows: You, the user,
are empowered to fix the behavior if you don't like it so much, because
you have access to the source code to the whole system. Ubuntu doesn't
*have to* be restarted after every invasive upgrade, if you would just
add the code to those packages that would fix the problem. I'm certain
that there's even a boilerplate recipe out there for this exact problem
that applies to both hald and dbus.

The only reason Ubuntu opts for this is the same 80/20 rule that
Microsoft employs: For 20% more rebooting, you can avoid 80% of the work
that would need to be performed to achieve a fully
no-reboot-necessary-on-upgrade OS.

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Git Help

2010-03-10 Thread Coleman Kane
Don't use 'git branch' any more for creating new branches; instead, use
'git checkout'.

From your description, I think I know what you are looking for. Consider
the following example where you edit the file 'test.c' and want to
commit those changes to a new branch named 'branch-2' rather than the
current branch ('master').

In your git-managed source repository, edit the file 'test.c' and save
it.

Then, run (this will create a new branch 'branch-2' from the current
working branch):
  git checkout -b branch-2

Your file 'test.c' will still remain "edited", but the .git repo will
now have 'branch-2', which is almost identical to 'master' (same rev
history). You will automatically be operating on 'branch-2' instead of
'master' when the operation completes.

Add the changes to 'test.c' to the present branch ('branch-2'):
  git add test.c

Finally, commit these additions to the present branch ('branch-2'):
  git commit


To switch back to branch 'master':
  git checkout master

To switch back to branch 'branch-2':
  git checkout branch-2

The output of these two will differ:
  git log master
  git log branch-2

The latter should have your edits, while the former will not.
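
If you only want to see the commits that are unique to the new branch,
a handy shorthand (using the branch names from this example) is:

  git log master..branch-2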


Beware the HOWTOs out there, as the above sequence is the 'new way' of
doing this in git. The old way actually used 'git branch' for all of the
branching operations and was clumsier. This change happened within the
past two years.

-- 
Coleman Kane


On Wed, 2010-03-10 at 11:30 -0500, Thomas Charron wrote:
> As a git newb, I've got a couple of questions about git which
> confuse me.  Any git users who might be able to explain?
> 
>   If I'm in a directory and have made local modifications, and then I
> issue a git branch, what does the branch contain a copy of if I check
> it out?  The latest head?  Or does the branch branch whatever I'm
> working on to a new branch, which contains a copy of the state of the
> original branch?
> 


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Git Help

2010-03-10 Thread Coleman Kane
On Wed, 2010-03-10 at 12:27 -0500, Thomas Charron wrote:
> On Wed, Mar 10, 2010 at 11:46 AM, Coleman Kane  wrote:
> > Do't use 'git branch' any more for creating new branches, instead use
> > 'git checkout'.
> 
>   Actually, I already knew that one, I was separating the logic in
> case it confused things.  :-D
> 
> > >From your description, I think I know what you are looking for. Consider
> > the following example where you edit the file 'test.c' and want to
> > commit those changes to a new branch named 'branch-2' rather than the
> > current branch ('master').
> 
>   And if my current branch is 'tom', with edited files, and I issue a
> git checkout -b tom2, then tom2 will now be a copy of tom, PLUS the
> edited file?
> 
> > Beware the HOWTO's out there, as the above sequence is the 'new way' of
> > doing this in git. The old way actually used 'git branch' for all the
> > branching operations and was more clumsy. This change happened within
> > the past two years.
> 
>   I gave up on the HOWTO's and just dove in.  :-D  Now I'm using git
> to allow an application which has a single configuration directory,
> ~/.skeinforge, to have multiple configuration by using a git wrapper
> which uses git branches as 'profiles'.
> 

Basically, the "repository" is the .git/ subdirectory. The working
directory is your local copy. Think of the current working directory as
your "local client directory" when you run "svn checkout http://...".
Think of the .git/ subdirectory as the subversion repository on your
server.

Thinking in terms of SVN, when you run 'git checkout -b tom2' while on
the 'tom' branch, it's like running:
  svn cp http://your-server.com/repos/toms-project/branches/tom \
    http://your-server.com/repos/toms-project/branches/tom2
followed by:
  svn switch http://your-server.com/repos/toms-project/branches/tom2

The result is that you are now considered by the git software to be
working on files that belong in the 'tom2' branch, whereas before you
were working on files considered to belong in the 'tom' branch. However,
the status of all of the files in the working directory remains the
same. You still need to commit your changes to the present branch before
they'll be stored in the repository. In other words, the files marked as
edited under 'tom' will still be marked 'edited', and not checked in,
after the 'git checkout -b tom2'.
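
A quick way to convince yourself of that behavior (the file name here
is just a made-up example):

  git checkout tom
  echo "tweak" >> settings.conf    # un-committed edit while on 'tom'
  git checkout -b tom2             # the edit follows you onto 'tom2'
  git status                       # settings.conf still shows as modified
  git add settings.conf
  git commit -m "tom2-only change"
  git checkout tom                 # 'tom' does not contain that commit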

-- 
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: We need a better Internet in America

2010-04-07 Thread Coleman Kane
Thanks Ric,

You just made a FreeBSD user's morning.


On Wed, 2010-04-07 at 00:14 -0400, Ric Werme wrote:
> From: "Greg Rundlett (freephile)" 
> 
> > I hope this message is considered "on topic" because 
> > a) the Internet was/is built on Linux
> 
> You just lost all of us who worked on ARPAnet.  Of course, there aren't that
> many of us, so maybe it doesn't matter.  The follow on to the ARPAnet, the
> Internet, started around 1980 with the publishing of the core Internet
> protocols and porting classics like the new (1973) FTP and Telnet protocols
> and new ones like NFS and the rest of ONC-RPC.  Linux didn't appear until 1991
> or so. I was "off net" in 1980, but I think BSD Unix is to the Internet as
> TENEX and PDP-10s were to the ARPAnet.  Linux and Windows came along later.
> 
> In V2, you might try dropping the "was".
> 
> Sorry, I guess that wasn't the point you were making
> 
>   -Ric
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Fw: Re: We need a better Internet in America

2010-04-07 Thread Coleman Kane
I'm sorry, but I'd like to know the "better" alternative to government
regulations that prohibit the marketing and sale of elixirs such as the
following (ca. 1915):

http://www.orau.org/PTP/collection/quackcures/standradiumsolution.htm

Sure, today we are all taught that radiation is bad, and so we all know
it is. However, how much of this knowledge is due to government
regulation via the FDA, etc., and to public standards of education? What
alternative to these institutions has a track record of providing
sufficient confidence in our consumables marketplace?

-- 
Coleman Kane


On Wed, 2010-04-07 at 14:23 -0500, Seth Cohn wrote:
> On Wed, Apr 7, 2010 at 12:56 PM, Jerry Feldman  wrote:
> > Libraries have been public in the US primarily since the late 1700s.
> > There is an ongoing debate as to which is the first.
> 
> http://en.wikipedia.org/wiki/Public_library
> 
> "The library in the New Hampshire town of Peterborough claims to be
> the first publicly-funded library; it opened in 1833."
> 
> There is a big difference between public and publicly _funded_.  Most
> of the libraries you cite as being 'public' in the 1700s were
> 'private' in most every sense you'd recognize today, despite being
> open to the 'public'
> 
> But this list isn't for debating library history.  My overall point
> was that looking toward governmental regulation of the net, even for
> 'good reasons', as with all 'governmental regulation' in general is a
> mistaken approach to whatever problems you might want to solve.  There
> are _always_ better answers.
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
> 


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Postal services (was: better Internet)

2010-04-08 Thread Coleman Kane
On Thu, 2010-04-08 at 19:53 +0430, Jeffry Smith wrote:
> >  The USPS *does* receive subsidies -- some Federal tax dollars go to
> > support it (or did, last I knew).  That's something else entirely.
> >
> Not for years  - that's one of their problems in that they have to
> both deliver anywhere and avoid a debt (not necessarily make a
> profit).
> 
> jeff
> 

This requirement also precludes the federal gov't from using USPS
profits for closing budget holes.

-- 
Coleman

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Fw: Re: We need a better Internet in America

2010-04-08 Thread Coleman Kane
On Thu, 2010-04-08 at 14:27 -0400, Bill McGonigle wrote:
> On 04/07/2010 04:08 PM, Coleman Kane wrote:
> > (ca. 1915):
> >
> > http://www.orau.org/PTP/collection/quackcures/standradiumsolution.htm
> >
> > Sure, today we all are taught that radiation is bad today, and so we all
> > know it is. However, how much of this knowledge is due to government
> > regulation via the FDA, etc... and public standards of education?
> 
> Marie Curie died in 1934 of radiation poisoning.  You'd expect an FDA to 
> know in 1915 that it was dangerous?

Yes, considering that it had been widely blamed for the deaths of many
others since its discovery in 1898. I suggest you look up the history of
the U.S. Radium company and the "Radium Girls" episode. Radium had gone
well into mainstream use prior to Curie's death.

Curie's death in 1934 occurred long after radium was determined to be a
health hazard: a fact that could have been established much earlier had
there been an avenue of appeal for the complainants.


> 
> > What
> > alternative to these institutions has a track record of providing
> > sufficient confidence in our consumables marketplace?
> 
> Underwriters Laboratories is a great example - insurance companies use 
> it to control the risk of the assets they insure, and people buy 
> insurance to control their own risks.  A great negative-feedback loop.

Not great enough as we found out recently.

> 
> There's little competition to the FDA in the US because it's hard to 
> compete against a 'free' government program.  But I do subscribe to 
> Nutrition Action from CSPI ($12/yr) to get a much more science-based and 
> less corrupt idea of what foods are good or bad for me.  In other 
> countries without a strong central food authority there are independent 
> third-party evaluators and certifiers.  If they become 
> unreliable/corrupt, they'll lose reputation and be replaced.  Not so 
> much with the FDA, even now with Monsanto's chief lobbyist as the FDA's 
> 'food-safety czar'. _Food Inc._ is a great watch for a sub-two-hour 
> summation (on Netflix streaming, BTW).  The Stonyfield/WalMart 
> partnership against rBGH is a striking contrast.

By what means, or after what consequences, do they lose their
reputations? As corrupt as the FDA appears today (which you only know
about because of transparency, unlike private agencies), you cannot
write it off without a review of the history that led to its creation:
widespread use of harmful additives in food products, as well as
medicinal products making baseless claims. The utopian notion that a
bunch of certification agencies will compete in good faith on a level
playing field hasn't earned much historical credit in this
country. Rather, vertical integration and monopolistic practices
intended to control production, distribution, and certification have
been the standard in the absence of oversight (such as the events that
led to the FDA's creation).

> 
> In a thinly-veiled effort to remain on topic, the same potential applies 
> with the FCC, though I don't know their agency to have such corruption 
> problems.  Except that an agency tasked with maintaining radio frequency 
> registrations (a natural scarcity) is busy trying to tell private 
> network operators how to manage their networks.

Because maintaining RF registrations isn't, and never was, the entire
scope of the FCC's duties.

> 
> -Bill
> 

-- 
Coleman

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Computer dinosaurs

2007-11-06 Thread Coleman Kane
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Kent Johnson wrote:
> Sheesh. It's bad enough we have threads about "when I was a boy, we
> had to punch paper tapes by hand, uphill both ways!", now we have a
>  meta-discussion about other people talking about the old days.
>
> Sigh.
>
> Kent
>
> Shawn K. O'Shea wrote:
>> Along the same lines, I actually have met the guy that owns this
>> site (and most if not all the computers on said site)
>> http://trailingedge.com/
>>
>> -Shawn
>>
>> On Nov 6, 2007 11:08 AM, Ted Roche <[EMAIL PROTECTED]> wrote:
>>
>>> It seems we all love to discuss our fond memories of computers
>>> long past, so here's a photo album I think many will enjoy:
>>>
>>> "Gallery: Ancient Marvels Abound at Vintage Computer Festival"
>>>
>>>
http://www.wired.com/gadgets/pcs/multimedia/2007/11/gallery_vintage_computers
>>>
Back when I lived in the SF Bay Area, I went and visited the Computer
History Museum located in Mountain View. If you ever have a trip to SF
or SJ CA, you should drop by and check it out. They had a Xerox X
terminal! They also have a number of the old Crays and all sorts of
other goodies!

- --
Coleman Kane

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.4 (FreeBSD)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHMKM2cMSxQcXat5cRAsRZAJ9sbUKI2owyUElhmwa1Ez2MiULUhACffSgG
TdzM0h0Ufri0JVbLkpWq7mc=
=JhuR
-END PGP SIGNATURE-

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: OLPC - Nov 12 launch

2007-11-07 Thread Coleman Kane
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ted Roche wrote:
> Ben Scott wrote:
>> On 11/7/07, Tech Writer <[EMAIL PROTECTED]> wrote:
>>> Has anyone gotten further information about the "Give 1 Get 1"
>>> program on the One Laptop Per Child site?
>> My dilemma is, do I go for the XO-1, the Classmate PC, the Eee,
>> or something else I'm not yet aware of?  So many toys, so little
>> time and money.
>>
>
> Koolu! www.koolu.com
>
> I think you'll find the XO-1 might be neat for a niece or nephew,
> but too small for your fingers to type on, despite being
> oh-so-cool. If you want to play with the Sugar software, you can
> download a VM image from the www.laptop.org site:
>
> http://www.laptop.org/en/laptop/software/developers.shtml
>
> And run it on most machines. (It also runs on the Koolu ;)
>
> maddog brought a Classmate PC to the board meeting (Ben knows this,
> for the benefit of others) running Ubuntu and I had the same
> problem with the keyboard and I have relatively small hands. Cool,
> but a tad smallish. Small screen, too. Is it a mini-laptop, or a
> maxi-PDA?
>
> I haven't seen the Eee yet. It does look pretty neat. But I'm more
> inclined to use some low-power machines as mini-servers (servlets?)
>  around the SOHO, and control them with a nice big screen that
> accommodates my old eyes and a humpback keyboard and ergonomic
> mouse for my RSI-strained hands than go for the portable form
> factor. What sort of uses are you thinking of applying your toys
> towards?
I found the following one, named Fit-PC, which manages to sport two
Ethernet ports. It seems roughly similar in specs to the Koolu. The two
ports make it nice as a smart networking device.

http://www.fit-pc.com/new/fit-pc/about-fit-pc.html

- --
Coleman Kane


-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.4 (FreeBSD)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHMj55cMSxQcXat5cRAgW+AJ4kpniYs4rBZYFWBGG39r0vyhV64gCcDLa2
KlxMKsGFKnfa1OmO2+BQ/mw=
=VRg3
-END PGP SIGNATURE-

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: OLPC - Nov 12 launch

2007-11-08 Thread Coleman Kane
Paul Lussier wrote:
> Dan Coutu <[EMAIL PROTECTED]> writes:
>
>   
>> Does anyone know if remote desktop is installed (or installable) on any 
>> of these?
>> 
>
> If they actually do run Linux, I would expect rdesktop and VNC to run
> on them fairly easily.  If they don't run Linux, then I have no idea.
>   
They should all run Linux. You will probably need the i386 kernel
compiled for the Geode target rather than the PC target, though... I
think that we've got the Fit-PC machine running one of our in-house
cannibalized Gentoo distros. If you're concerned about X running on the
video hardware, you can probably just use the Xvfb server (I think it's
the xorg-vfbserver module in X.org 7.3) to get a network-access-only
X server.
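
A rough sketch of running a headless server that way (the display
number and geometry here are arbitrary):

  Xvfb :1 -screen 0 1024x768x16 &
  DISPLAY=:1 xterm &   # or point remote clients at DISPLAY=thathost:1,
                       # assuming the server is left listening on TCP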

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Ignition (was Re: tftp config problem (ltsp))

2007-11-11 Thread Coleman Kane
Ben Scott wrote:
> On Nov 10, 2007 5:05 PM, Michael ODonnell <[EMAIL PROTECTED]> wrote:
>   
>> all I said was that problems like bitrate mismatches at
>> the PHY level or receiver overruns due to protocol errors
>> are "unlikely" to be the culprit.
>> 
>
>   They may be "unlikely", but they sure do happen a lot.
>
>   When these problems happen, the cards usually talk just fine, and
> you can usually "ping" and do other simple diagnostics, but put a
> heavy traffic load on and everything goes to hell.
>
>   I suspect it's that one end can send so much faster than the other
> that buffers fill up and frames start getting dropped by the switch.
> The Ethernet stuff is all Working As Designed, but that doesn't mean
> your computers will successfully transfer data.
>
> -- Ben
>   
Have you looked into the ethernet switch as the culprit, rather than the
cards / drivers themselves?

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Ignition (was Re: tftp config problem (ltsp))

2007-11-12 Thread Coleman Kane
Ben Scott wrote:
> On Nov 11, 2007 11:29 AM, Coleman Kane <[EMAIL PROTECTED]> wrote:
>   
>> Have you looked into the ethernet switch as the culprit, rather than the
>> cards / drivers themselves?
>> 
>
>   It's not a problem with the switch, the cards, the drivers, or
> anything by itself.  It's not an implementation defect at all.  It's a
> design limitation of how Ethernet works (or fails to, depending on
> your point-of-view).
>
>   You've got a server connected to the switch at 1000 Mbit/sec.
> You've got a client connected to the same switch at 100 Mbit/sec.  The
> server is sending data faster than the client can accept it.  The
> switch will buffer frames, in the hopes that the server will stop
> sending soon, and then the switch will be able to empty the buffer
> into the client at a slower rate.  If the server does not stop
> sending, eventually the switch runs out of buffer memory, and has to
> start dropping frames.  There's nothing else it can do.
>
>   Well, not exactly.  The switch could send an Ethernet flow control
> "pause" message to the server, asking it to stop transmitting.
> However, this is not as simple as it seems.  Ethernet flow control is
> all-or-nothing.  If the switch asks the server to stop sending, it
> stops sending *for everybody*, even if there are other clients which
> are not having problems keeping up.  So a single overwhelmed port
> would bottleneck the entire network.  Thus, most switches do not send
> pause requests unless the entire switch is overwhelmed.
>
>   And who knows if the server would even honor a pause request?
> Ethernet is a jungle of optional features and poorly-implemented
> standards.
>
>   Good write-up here:
>
> http://www.networkworld.com/netresources/0913flow2.html
>
>   Obviously, the ideal thing to do would be to use a data link
> protocol which actually works under adverse conditions, but the world
> went with Ethernet instead.
>
> -- Ben
>   
I suppose that answers my query. By "the switch as the culprit" I was
getting at the question of whether or not the switch was implementing
the appropriate flow control. Though, you point out that flow control is
a two-way street and the other end must obey it too, or else it is
useless...

So the solution is not to bog down the network switch with slower devices?

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: A plague of daemons and the Unix Philosophy

2007-11-12 Thread Coleman Kane
Steven W. Orr wrote:
> On Monday, Nov 12th 2007 at 11:14 -, quoth Neil Joseph Schelly:
>
> =>On Monday 12 November 2007 10:50, Steven W. Orr wrote:
> =>> This disturbs me. I hear great things about Ubuntu, but AFAICT, Fedora is
> =>> the best and most cutting edge distro AND it's RPM based. I'm sorry, but
> =>> I have no desire to move to a deb based system.
> =>>
> =>> If I was to contemplate a different distro, is there anything that is RPM
> =>> based that people can say better things about?
> =>
> =>Why is RPM or DEB a determining factor?  If you have a distro you like, 
> then 
> =>you use it.  If you're looking for another one, then you're already looking 
> =>to change the respositories and the filesystem layouts and daemons and 
> other 
> =>nuances that differentiate one distro from another.  I guess if you're in 
> the 
> =>market for a new distro already, why eliminate DEB-based ones?
>
> The simple answer is that I highly prefer rpm over debian. The access is 
> far simpler. Full use of deb files implies about 13 different packages be 
> loaded just to do deb things. I'm in a situation right now where I have to 
> create .deb files and, while I'm getting my job done, I can tell you there 
> is no book that you can buy to teach you all you need to know about the 
> hundreds of places where documentation exists on how it all works 
> together. 
>
> I'm living with it and I have a few things I know how to do, but compared 
> to RPM and the available docs for it, deb files suck big green donkey 
> dicks.
>
> This is not a question of liking what you're familiar with. All of us know 
> the difference between learning from man pages and having a proper source 
> for learning idiomatic usage.
>   
Sounds like you're in a good position to put together that document ;).

--
Coleman
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Gimme that old time interface...

2007-11-14 Thread Coleman Kane
Star wrote:
> I know the benies to using KDE and Gnome, i'm wondering about the
> multitude of users with Black/Whitebox with PERLed out menues and the
> likes...
>   
For me, it has been only recently that my hardware has actually been of
a relatively current breed. During college, I relied upon a string of
antiquated Thinkpads beginning with the compact 486 model that had the
spring-loaded collapsing, pop-out keyboard.

As such, KDE/GNOME never really "fit" these notebooks. Instead, I
managed to craft together some pretty nifty solutions built around VTWM
(http://www.vtwm.org/) and a set of perl scripts to make the most out of
the applications being run therein. It served my needs, as a minimal
window manager that was relatively easy to hack on, yet still provided a
virtual desktop.

I experimented with evilwm (http://www.6809.org.uk/evilwm/) for a bit,
and managed to convert one of my former classmates to it for life.

I also used SCWM (http://scwm.sourceforge.net/) for a long time until
some incompatibility between it and guile crept in. All my configuration
was belong to Scheme.

Later on, I found golem (http://golem.sourceforge.net/) which is another
minimal one, but this one supports some of the WindowMaker features
while adhering to a minimal, yet powerful configuration language and
plugin API. If you want some simple changes, write a script for it. If
you want some bigger changes, write a plugin in C. A true "hacker WM".

Eventually, I ended up moving to GNOME and Metacity because of the
desktop integration into many of the apps I used and some of the more
advanced visual effects (and when I got an AMD64 laptop, the speed
problem was not an issue anymore). Also, non-GNOME window managers seem
to confuse some aspects of GNOME/Gtk2 applications.

I did find that you can make the most of Gtk2/GNOME-based applications
under other WMs by making sure the following daemons are running (via
.xinitrc or otherwise):
gnome-settings-daemon, gnome-vfs-daemon, dbus-launch

They typically help make Gtk2 apps look better, without taking the full
resource hit that GNOME typically does.
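
A bare-bones ~/.xinitrc along those lines might look like the sketch
below (the daemon and window manager names depend on what your install
actually provides; golem is just the example here):

  #!/bin/sh
  # hook the session up to a D-Bus session bus
  eval `dbus-launch --sh-syntax --exit-with-session`
  # the other helper daemons can be started the same way
  gnome-settings-daemon &
  exec golem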

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: verizon DNS "helper"

2007-11-14 Thread Coleman Kane
Rob Lembree wrote:
> Verizon recently (I think) put in a handy DNS "helper" that redirects  
> DNS requests that result in a "not found" to their own servers.
> This completely breaks lots of stuff, and they should be lashed 50  
> times with a wet noodle for doing so.   It breaks the internet.
>
> To their credit, they have an "opt out" option that you can use.  If  
> you go to the configure page for your router, note your DNS settings.   
> Get to the point where you can choose to take DNS settings from DHCP,  
> and hard code your own DNS settings to the same addresses that DHCP  
> had given you, but replacing the '.12' with '.14', e.g.,  
> "192.168.0.12" would become "192.168.0.14".
>
> So I wanted to tell Verizon what a stupid move this is, and at the  
> same time tell them thanks for making opting out so relatively  
> painless, but I found that Verizon internet doesn't actually let you  
> speak with people.  Maybe those big windowless buildings *are* a sign  
> that the phone company is really and truly run entirely by machines.
>
>  >sigh<
>   
I don't know if you recall, but some time ago the registry for .com and
.net (VeriSign!) tried to pull this one. They had all unresolved .com
and .net requests go to their "register this new domain" page. Very
"independent" on the part of the company that is supposed to be neutral
in the administration of those servers, as per the ICANN agreements. It,
again, broke the Internet for many of us. I was working for a registrar
at the time (one that was very outraged by this abuse of the system),
and we all came down hard on them (as did ISPs, who were suddenly
flooded with much more traffic). They backed off and undid their
breakage, but not until after we had to rewrite all of those Perl scripts.

For those not familiar with the system, domain name registration is
handled by "registries" that maintain the delegation information for
their designated "top level domains" (.com, .net, .org, .cc, .co.uk,
.ac, etc...), while "registrars" are the designated vendors of domain
names in the system. In this structure, the "registry" basically maps
domain names to the "registrar" that maintains the account information
for that domain. There is usually a single entity that maintains this
information per top-level domain, and they are expected to behave in a
manner that doesn't artificially benefit any one registrar (a regulation
attempting to prevent monopolization of the Internet). However, the
registry companies are not barred from also being a registrar, so they
can still sell the domains that they control. In the above example, the
registry was directing users to the registrar that it also ran, by
modifying the data it exclusively controlled so that browsers were sent
to its own registrar whenever a DNS lookup should have resulted in
NXDOMAIN.

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Gimme that old time interface...

2007-11-16 Thread Coleman Kane
Bill McGonigle wrote:
> On Nov 15, 2007, at 08:09, Ted Roche wrote:
>
>   
>> Is this a corollary to the Peter Principle that any software project
>> will expand to the point where it has lost track of what it was  
>> supposed
>> to be doing?
>
> I'd like to offer an emacs exception to your corollary as emacs is  
> _supposed_ to be doing everything.
>
> -Bill
>   
I propose that any software project is likely to expand to the point
where it has lost track of what it was supposed to be doing. Unchecked,
and on a long enough timeline, any software project will do everything.
Emacs simply has had a much longer timeline than GNOME, but I think this
hypothesis applies in both places. The likely future will result in two
large, competing GNOEMACS projects arguing over which one does
everything according to "the spec".

--
Coleman Kane
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


ASUS Eee sub-sub-notebook

2007-11-20 Thread Coleman Kane
I was forwarded this today:

http://www.reghardware.co.uk/2007/11/16/review_asus_eee_pc/

ASUS ultra-portable shipped with Xandros.

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Linux Math software (was Simple math considered physics...)

2007-12-02 Thread Coleman Kane
Michael Costolo wrote:
>
>
> On Nov 22, 2007 7:05 AM, Jim Kuzdrall <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> On Wednesday 21 November 2007 23:27, Brian Chabot wrote:
>
>Has anyone tried Maxima for Linux?  I use its predecessor, Macsyma,
> on Win98 and absolutely love it.  No, more honestly, I invested enough
> time working with it to become proficient - and don't want to go
> through that again.
>
>A link to Maxima is at maxima.sourceforge.net
> <http://maxima.sourceforge.net>.  It gives some
> history of the public domain version (now GPL).
>
>
> There is also Octave (http://www.gnu.org/software/octave/), which is
> an open source Matlab clone.  It will run most Matlab scripts without
> modification (which can be rather handy).  It uses Gnuplot for
> graphical output.
>
> And my favorite, R (essentially Gnu S), found at
> http://www.r-project.org/.  It is generally considered a statistics
> package, but it is jammed full of usefulness.
>
> -Mike-
>
> -- 
> "America is at that awkward stage.  It's too late to work within the
> system, but too early to shoot the bastards."
> --Claire Wolfe
A not-quite-a-full-CAS, but an efficient and advanced "calculator"
replacement that I really like is called Mathomatic:
http://www.mathomatic.com/math/index.html . It kind of like a bc or dc
that implements a lot of the functionality of some newer TI calculators,
and performs syntax highlighting of equations (especially for
parenthetical matching) to boot. Very handy if you need a solver without
the extra girth that Maxima or Mathematica provide.

--
Coleman Kane
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Free Software Replacement for Maple

2008-02-20 Thread Coleman Kane
Gurhan wrote:
> On Wed, Feb 20, 2008 at 12:40 PM, Lori Nagel <[EMAIL PROTECTED]> wrote:
>   
>> Does anyone know of a Free-Software replacement for Maple?  My husband is
>> taking a an electrical engineering graduate level statistics  class and says
>> he needs  it to do  some of his  homework.  Having never gotten far enough
>> in the maths myself,  I'm not  really sure what features it needs.  All I
>> know is proprietary license keys are a real pain.
>>
>> 
>
>   Does it have to be maple-compatible? I mean will he need to turn in
> maple code for his assignments? If you are just looking for a
> mathematical software GNU Octave is an excellent one. It's intended to be
> a free software clone of Matlab, and does the job pretty good.
>
> http://www.octave.org
>
> Thanks,
> gurhan
>   
I've used Maxima in the past, and it has proven to be pretty good for me
when I needed it.
http://maxima.sourceforge.net/

For a smaller CAS (a glorified solver), I like Mathomatic:
http://mathomatic.orgserve.de/math/

--
Coleman Kane
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Microsoft flooding sites with fake traffic

2008-02-20 Thread Coleman Kane
Arc Riley wrote:
> Hey guys
>
> Do yourselves a favor and search your logs for connections from
> 131.107.* 65.52.* 65.53.* 65.54.* and 65.55.*
>
> I found a good % of traffic we got, not reported to Google Analytics
> so I didn't see it sooner, was referred from http://search.live.com/
> for search queries involving pornography, cars, drugs, and random
> gibberish.  The landing pages from these searches were subversion
> changesets, source code in the Trac browser, and other places those
> search queries certainly don't exist in.
>
> All of it, well 97.2%, from the above two subnets, belonging to
> Microsoft.  It'd be humorous if I didn't just purchase a new colo
> server to handle the large volume of traffic pysoy.org
> <http://pysoy.org> gets.  I can't tell if MS is trying to skew the
> statistics in favor of MSIE/Live/etc or if it's conducting a denial of
> service attack against free software project sites, perhaps both (two
> birds with one stone?).
>
> If you see the similar childish behavior in your logs, please join me
> in blocking them and being very vocal as to why.
>
An interesting find. I just checked my sites and I see the same thing;
however, most of the search queries seem to be pretty pertinent to the
content of the pages that they reference. It is almost like there's some
script running on a farm of Windows computers that just performs
single-word searches on their Windows LiveSearch database and visits
the results (sending, of course, the LiveSearch referrer in the request).

Here's my distribution:

cat apachelogs/*  | grep live.com  | cut -d\  -f1 | cut -d. -f1,2 | sort
| uniq -c | sort -rn

308 65.55
 10 131.107
  4 85.159
  3 142.161
  2 71.164
  2 68.95
  2 4.246
  2 207.224
  1 86.144
  1 84.202

There are many, many more with single visits, but I left them off the
list because they probably represent normal livesearch users.

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Microsoft flooding sites with fake traffic

2008-02-20 Thread Coleman Kane
Coleman Kane wrote:
> Arc Riley wrote:
>   
>> Hey guys
>>
>> Do yourselves a favor and search your logs for connections from
>> 131.107.* 65.52.* 65.53.* 65.54.* and 65.55.*
>>
>> I found a good % of traffic we got, not reported to Google Analytics
>> so I didn't see it sooner, was referred from http://search.live.com/
>> for search queries involving pornography, cars, drugs, and random
>> gibberish.  The landing pages from these searches were subversion
>> changesets, source code in the Trac browser, and other places those
>> search queries certainly don't exist in.
>>
>> All of it, well 97.2%, from the above two subnets, belonging to
>> Microsoft.  It'd be humorous if I didn't just purchase a new colo
>> server to handle the large volume of traffic pysoy.org
>> <http://pysoy.org> gets.  I can't tell if MS is trying to skew the
>> statistics in favor of MSIE/Live/etc or if it's conducting a denial of
>> service attack against free software project sites, perhaps both (two
>> birds with one stone?).
>>
>> If you see the similar childish behavior in your logs, please join me
>> in blocking them and being very vocal as to why.
>>
>> 
> An interesting find. I just checked my sites and I see the same thing,
> however most of the search queries seem to be pretty pertinent to the
> content of the pages that they reference. It is almost like theres some
> script running on a farm of windows computers that just performs
> single-word searches on their Windows LiveSearch database, and visits
> the results (posting, of course, the LiveSearch referral in the request).
>
> Here's my distribution:
>
> cat apachelogs/*  | grep live.com  | cut -d\  -f1 | cut -d. -f1,2 | sort
> | uniq -c | sort -rn
>
> 308 65.55
>  10 131.107
>   4 85.159
>   3 142.161
>   2 71.164
>   2 68.95
>   2 4.246
>   2 207.224
>   1 86.144
>   1 84.202
>
> There are many, many more with single visits, but I left them off the
> list because they probably represent normal livesearch users.
>
> --
> Coleman Kane
>   
I went a little further and found that all of my 65.55 traffic comes
from the 65.55.165 class C. I decided to pass all of the visitor
addresses through the 'host' program and found that they all have PTR
records like this: livebot-65-55-165-87.search.live.com. The 131.107
traffic was all from two machines: tide525.microsoft.com and
tide526.microsoft.com.

Maybe some others could look at their logs and pull information on the
other subnets?
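
If it helps, here is roughly the pipeline I used to pull the PTR
records out of my logs (the path and log format are assumed to match
the earlier example):

  cat apachelogs/* | grep live.com | cut -d\  -f1 | sort -u | \
    while read ip; do host "$ip"; done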

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Microsoft flooding sites with fake traffic

2008-02-20 Thread Coleman Kane
Arc Riley wrote:
> Do you happen to be running google analytics on your site?
No, I'm just parsing the logs. I use awstats
(http://awstats.sourceforge.net) for collecting stats from my logs. I'm
not really familiar with many of google.com's services.

--
Coleman Kane

>
> On Wed, Feb 20, 2008 at 6:08 PM, Coleman Kane <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> Coleman Kane wrote:
> > Arc Riley wrote:
> >
> >> Hey guys
> >>
> >> Do yourselves a favor and search your logs for connections from
> >> 131.107.* 65.52.* 65.53.* 65.54.* and 65.55.*
> >>
> >> I found a good % of traffic we got, not reported to Google
> Analytics
> >> so I didn't see it sooner, was referred from
> http://search.live.com/
> >> for search queries involving pornography, cars, drugs, and random
> >> gibberish.  The landing pages from these searches were subversion
> >> changesets, source code in the Trac browser, and other places those
> >> search queries certainly don't exist in.
> >>
> >> All of it, well 97.2%, from the above two subnets, belonging to
> >> Microsoft.  It'd be humorous if I didn't just purchase a new colo
> >> server to handle the large volume of traffic pysoy.org
> <http://pysoy.org>
> >> <http://pysoy.org> gets.  I can't tell if MS is trying to skew the
> >> statistics in favor of MSIE/Live/etc or if it's conducting a
> denial of
> >> service attack against free software project sites, perhaps
> both (two
> >> birds with one stone?).
> >>
> >> If you see the similar childish behavior in your logs, please
> join me
> >> in blocking them and being very vocal as to why.
> >>
> >>
> > An interesting find. I just checked my sites and I see the same
> thing,
> > however most of the search queries seem to be pretty pertinent
> to the
> > content of the pages that they reference. It is almost like
> theres some
> > script running on a farm of windows computers that just performs
> > single-word searches on their Windows LiveSearch database, and
> visits
> > the results (posting, of course, the LiveSearch referral in the
> request).
> >
> > Here's my distribution:
> >
> > cat apachelogs/*  | grep live.com <http://live.com>  | cut -d\
>  -f1 | cut -d. -f1,2 | sort
> > | uniq -c | sort -rn
>     >
> > 308 65.55
> >  10 131.107
> >   4 85.159
> >   3 142.161
> >   2 71.164
> >   2 68.95
> >   2 4.246
> >   2 207.224
> >   1 86.144
> >   1 84.202
> >
> > There are many, many more with single visits, but I left them
> off the
> > list because they probably represent normal livesearch users.
> >
> > --
> > Coleman Kane
> >
> Went a little further and found that all my 65.55 traffic comes
> from the
> 65.55.165 class C. I decided to pass all the visitors to the host
> program and found that all of the visitors have PTR records like this:
> livebot-65-55-165-87.search.live.com
> <http://livebot-65-55-165-87.search.live.com>. The 131.107 traffic
> was all from
> two machines: tide525.microsoft.com <http://tide525.microsoft.com>
> and tide526.microsoft.com <http://tide526.microsoft.com>
>
> Maybe some others could look at their logs and pull information on the
> other subnets?
>
> --
> Coleman Kane
>
>

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Microsoft flooding sites with fake traffic

2008-02-21 Thread Coleman Kane
Kent Johnson wrote:
> Ed lawson wrote:
>
>   
>> I know nothing from the technical side of this, but I mentioned this to
>> someone who works at MSFT and their first comment was that it was
>> likely Live Search crawling to build an index.
>> 
>
> Except:
> - the referrer is a single-word search at search.live.com, e.g.
> http://search.live.com/results.aspx?q=marketing&mrt=en-us&FORM=LIVSOP
>
> - The client acts like a browser, in that it fetches CSS and JavaScript 
> files as well as the primary page, and the User-Agent seems to be MSIE 7:
> "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> Here is a complete sequence from my logs:
> 65.55.165.51 - - [20/Feb/2008:02:22:16 -0500] "GET 
> /category/Web-Marketing/ HTTP/1.1" 200 15810 
> "http://search.live.com/results.aspx?q=marketing&mrt=en-us&FORM=LIVSOP"; 
> "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> 65.55.165.51 - - [20/Feb/2008:02:22:18 -0500] "GET 
> /media/public/css/blogcosm.css HTTP/1.1" 200 8114 
> "http://blogcosm.com/category/Web-Marketing/"; "Mozilla/4.0 (compatible; 
> MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> 65.55.165.51 - - [20/Feb/2008:02:22:19 -0500] "GET 
> /media/public/css/category_detail.css HTTP/1.1" 200 2952 
> "http://blogcosm.com/category/Web-Marketing/"; "Mozilla/4.0 (compatible; 
> MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> 65.55.165.51 - - [20/Feb/2008:02:22:19 -0500] "GET 
> /media/public/css/toc.css HTTP/1.1" 200 399 
> "http://blogcosm.com/category/Web-Marketing/"; "Mozilla/4.0 (compatible; 
> MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> 65.55.165.51 - - [20/Feb/2008:02:22:19 -0500] "GET 
> /media/public/css/one-liners.css HTTP/1.1" 200 223 
> "http://blogcosm.com/category/Web-Marketing/"; "Mozilla/4.0 (compatible; 
> MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
> 65.55.165.51 - - [20/Feb/2008:02:22:19 -0500] "GET /css/colors.css 
> HTTP/1.1" 200 4410 "http://blogcosm.com/category/Web-Marketing/"; 
> "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>
>
> I seem to have one of these roughly every 1/2 hour though the interval 
> varies widely.
>
> Kent
>   
It's not really outside the realm of possibility that Microsoft could be
using a farm of Windows machines running IE7 to gather the data... It's
also not necessarily outside the realm of possibility that their
indexing algorithm is trying to find single-keyword results. Maybe they
perform the union/intersection of multiple search terms on their end.

--
Coleman Kane


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Microsoft flooding sites with fake traffic

2008-02-21 Thread Coleman Kane
Cole Tuininga wrote:
> On Thu, 2008-02-21 at 08:56 -0500, Kent Johnson wrote:
>   
>> Except:
>> - The client acts like a browser, in that it fetches CSS and 
>> JavaScript 
>> files as well as the primary page, and the User-Agent seems to be MSIE 7:
>> "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322)"
>> 
>
> This *could* be explained by wanting to be able to display a thumbnail 
> version of the website.  Just a thought.
>   
Just an aside:

It is extremely amusing to me that MSIE still identifies as "Mozilla/4.0"...

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: eliminating gnome-terminal colors

2008-02-21 Thread Coleman Kane
Tom Buskey wrote:
> My google-fu is lacking.
>
> I want to force gnome-terminal (and konsole and xterm and...) to not
> allow colors.
>
> When I bring up lynx, I want black & white with bold.  I don't want
> the white on blue status at the bottom or the yellow on white I can't
> read either.
>
> I don't want ls to use colors.
> I don't want vim to use colors.
>
> I don't want anything to use anything except black on white text with
> bold.
Try setting your TERM environment variable to "vt102" or "vt220",
instead of "linux", "xterm", or "xterm-color".

For the xterm program, you can achieve this by running:
xterm -tn vt220

Instead of plain old xterm.

I don't know how to do it with the others; however, I am sure they have
their own means.

You can also just set your TERM env var in ~/.profile, ~/.cshrc, or
~/.bashrc (depending upon your shell type). You can get real creative
and reset it only if the current value of it is "xterm" or something
similar.
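
For example, a small sketch for ~/.bashrc (the csh-family syntax for
~/.cshrc would differ):

  case "$TERM" in
      xterm*|linux) export TERM=vt220 ;;
  esac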

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: RAM Mapping Script

2008-03-02 Thread Coleman Kane
Jim Kuzdrall wrote:
> I have a Dell 2650 with one byte of bad RAM.  Unfortunately, it is 
> in low RAM which resides on the board.  It is not practical to replace 
> the memory chip.
>
> The kernel has a function, mmap(), which allows one to reserve 
> 4,8,16,... bytes of memory at a specific physical address for a 
> process.  As I understand it, one reserves the bad memory for a 
> do-nothing (or possibly not-loaded) process.
>
> Is there a command line equivalent of mmap() that can be put in one 
> of the starting scripts?  Or is there a better way to take the bytes 
> out of service?
>
> The error occurs at address 64h.  The memory test from the SuSE 9.3 
> installation CD reports 5 memory errors at this location in 250 passes 
> (90 hours of testing).  The same testing program reports no errors on a 
> Thinkpad T60 after 138 test passes through 5 times more RAM, so it does 
> not appear to be imagining the defect.
>
> Jim Kuzdrall
>   
This low-memory byte falls within your real-mode (16-bit mode) interrupt
vector for software interrupt 25 (19h): each real-mode vector is 4 bytes
long, so vector 19h lives at 19h * 4 = 64h. Note that these vectors are
not necessarily related to the IRQs that are assigned to devices (this
one in particular has nothing to do with an IRQ).

You can find out what the purpose of this routine is here:
http://www.ctyme.com/intr/rb-2270.htm
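
If you're curious what is actually sitting in that vector: each real-mode IVT
entry is 4 bytes (offset word, then segment word) at linear address 4 * the
interrupt number, so int 19h lands at 4 * 0x19 = 0x64. A quick peek from a
root shell (untested sketch, and it assumes your kernel lets you read
/dev/mem):

    # dump the 4-byte vector for int 19h (offset 100 decimal == 0x64)
    dd if=/dev/mem bs=1 skip=100 count=4 2>/dev/null | hexdump -C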


Contrary to popular belief, it is possible that this memory does get
accessed later on. For example, anything trying to use V86 mode, such as
your VBE / int 10h routines for VESA framebuffer access on the console
or in X.org will copy the first ~1MB of system RAM into a local space
for mapping to the V86 process space. Additionally, it looks like INT
19h *might* be called from real mode to reboot your system, depending
upon your system's BIOS. Yes, Linux can switch the machine back down to
16-bit real mode in a number of circumstances; one of these is system
shutdown and reboot.

Is this actually causing you trouble while you're using the system? It
seems that this could cause errors on reboot, shutdown, suspend, or at
system boot-up, but otherwise it would be a harmless thing that you can
safely ignore. You don't have to worry about any kernel structures being
mapped to this area of system RAM.

Can you disable the on-board memory from Dell's BIOS ?
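
Failing that, and assuming your kernel is new enough to support it (check
Documentation/kernel-parameters.txt for your version), the memmap= boot
parameter can mark a physical range as reserved so the kernel never hands it
out. An untested sketch for a GRUB "kernel" line -- the paths here are made
up, use whatever your menu.lst already has -- reserving the whole first 4K
page that contains 0x64:

    kernel /boot/vmlinuz root=/dev/hda1 ro memmap=4K$0x0

Depending on how your boot loader parses the line, the "$" may need quoting
or escaping.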

--
Coleman
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: RAM Mapping Script

2008-03-03 Thread Coleman Kane
Ben Scott wrote:
> On Sun, Mar 2, 2008 at 9:45 PM, Jim Kuzdrall <[EMAIL PROTECTED]> wrote:
>   
>>There is nothing resembling memmap or any related thing on this Linux
>>  system
>> 
>
>   What distribution and kernel version?
>
>   If you're running an older distribution using the 2.4 kernel, it may
> be that the "bad RAM patch" will be useful to you after all.
>
>   
>> The RAM at the memory error acts like it is in the stack area
>> 
>
>   ?
>
>   
>>So, the next thing to try is a program that executes right after boot
>>  and puts 128 bytes of zeros on the stack and stays in the background
>>  doing nothing.
>> 
>
>   I suspect that is unlikely to accomplish anything useful.  Every
> userland process has its own stack.  By the time your program gets
> around to running, several other userland processes will have started
> and possibly exited (any initrd programs, init, various initscripts).
> And the kernel itself has its own stack.  If anything in particular is
> using the page at address 0, I would expect it to be the kernel.
>
> -- Ben
>   
You won't be able to do this from userland. That spot of memory is
off-limits because the kernel needs to preserve it in the event that
another process wants to enter a vm86 mode.

What problem is the memory defect actually causing that is troublesome
(besides causing memtest86 to tell you that it is bad)? Is there a
stack-trace, kernel panic message, etc... ?

--
Coleman

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: RAM Mapping Script

2008-03-03 Thread Coleman Kane
Ben Scott wrote:
> On Mon, Mar 3, 2008 at 9:25 AM, Coleman Kane <[EMAIL PROTECTED]> wrote:
>   
>> That spot of memory is off-limits because the kernel needs to preserve it
>> in the event that another process wants to enter a vm86 mode.
>> 
>
>   If a process switches to virtual x86 mode, wouldn't virtual memory
> page for address 0 (which includes the software interrupt table) get
> mapped to a per-process page in physical memory?  Would that have
> anything to do with the RAM at physical address 0x64?
>   
What usually happens under Linux / BSD, if I remember properly, is
that much of the memory content of that 1MB is duplicated to another
memory page, which is then mapped to the process's memory space. Ranges
that are supposed to represent hardware I/O (framebuffer at 0xA000:0000,
console at 0xB800:0000, NIC at 0xD800:0000 *maybe*, etc...) are
actually virtualized for the vm86 session. So if you got read error(s)
on that cell of RAM, you'll likely hit it when the process setup occurs.
This memory is "shadowed" so that multiple vm86 machines can't interfere
with one another. Much of the stuff mapped by devices into your memory
at 0xA000:0000 and above would be software code (such as the VBE stuff),
and would be accessed in a read-only / execute-only sort of manner. In
the old days, Memory-Mapped I/O was much less common and it was much
more SOP to have the device's BIOS map some software code into this
"high memory area" that could be executed by your 8086-compatible CPU
and have it perform the proper port I/O operations to get the desired
result.

I suppose it is likely that all of this could initially be mapped
read-write with CoW, making it allocate new pages on-demand, but actual
writing to the memory located at 0x0000:0064 (or 0x0006:0004 if you want
to get fancy) is definitely a no-no from your user program. This memory
must remain free from writes from userland, as the devices will still
respond to write operations in this area.

Usually, your OS or host application is providing a monitor on top of
the vm86 session, so that when your program writes to 0xB800:0000,
you'll get a nice printing of characters inside of your xterm, rather
than having them not show up at all (if your graphics card is in raster
mode), or worse (under standard-console-mode, writes here would make
characters get printed all over your screen).
>>  What problem is the memory defect actually causing that is troublesome ...
>> 
>
>   The OP stated, "Every few weeks some Linux program would go wacky.
> Each time is was a different program and a different wackiness.
> (There were not enough wacky observations to conclude that the same
> effect in the same program never ever repeated.)  Reboot restored
> proper operation."
>
>   He hasn't mentioned anything that really proves the wackiness is due
> to the bad RAM, although I think that's a reasonable assumption to
> make at this point.  My fear would actually be that the bad cell at
> physical address 0x64 is just the tip of the iceberg, and that other
> cells are developing issues over the week-long periods he mentions.
>
> -- Ben
>   
That's what it sounds like to me. Is there no way to turn off the on-board
RAM on that system via the BIOS menu, or even a jumper on the motherboard?


--
Coleman

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: RAM Mapping Script

2008-03-03 Thread Coleman Kane
Michael ODonnell wrote:
>   
>> You won't be able to do this from userland.  That spot of memory
>> is off-limits because the kernel needs to preserve it in the event
>> that another process wants to enter a vm86 mode.
>> 
>
> Maybe I haven't been following this thread closely enough, but I
> don't see how either User mode accesses or vm86 mode would matter.
>
> I figure that once the kernel has pointed the IDT register elsewhere
> that zero'th page of RAM loses any special status and goes into a pool
> where it's eligible for allocation like any other.
>
> Well, OK - maybe not like any other.  Using the "crash" utility
> I queried the system's mem_map[] and it appears (at least on this
> steam-powered RHEL3 machine, assuming I'm looking at the correct
> bit definitions in include/linux/mm.h) that the first dozen pages
> are reserved:
>
>   .
>   .
>   .
> crash> kmem -p
>     PAGE    PHYSICAL   MAPPING  INDEX  CNT  FLAGS
> c100002c           0         0      0    0   8000
> c1000068        1000         0      0    0   8000
> c10000a4        2000         0      0    0   8000
> c10000e0        3000         0      0    0   8000
> c100011c        4000         0      0    0   8000
> c1000158        5000         0      0    0   8000
> c1000194        6000         0      0    0   8000
> c10001d0        7000         0      0    0   8000
> c100020c        8000         0      0    0   8000
> c1000248        9000         0      0    0   8000
> c1000284        a000         0      0    0   8000
> c10002c0        b000         0      0    0   8000
> c10002fc        c000         0      0    0      0
> c1000338        d000         0      0    0      0
> c1000374        e000         0      0    0      0
> c10003b0        f000         0      0    0      0
> c10003ec       10000         0      0    0      0
> c1000428       11000         0      0    0      0
> c1000464       12000         0      0    0      0
> c10004a0       13000         0      0    0      0
> c10004dc       14000         0      0    0      0
>   .
>   .
>   .
>
> ...so just for fun I wrote NULLs to that first page (with
> before and after displays for verification) thus:
>
>dd bs=1k count=1  if=/dev/mem | hexdump -C
>dd bs=1k count=1 if=/dev/zero of=/dev/mem
>dd bs=1k count=1  if=/dev/mem | hexdump -C
>
> ...and the system kept on running, which indicates only that the
> IDT is elsewhere, though I'm still not %100 certain the page isn't
> in use for some other purpose.
>   
You are right, the IDT is elsewhere. In fact, the real-mode IDT (also
called the IVT, which would be at that location) and the protected-mode
IDT that you are talking about are two completely different data
structures with different formats. This was my point, that this memory
would never be used by the Linux kernel or any applications during
normal operation. However, when vm86 mode is set up, the first 1MB of
system RAM is typically copied over to populate the "virtual 1MB" in your
vm86 process. So my point with the vm86 mode talk was that if the memory at
64h were the *only* memory that was actually bad, then this problem
shouldn't affect your system except in cases where the kernel is attempting
to set up a vm86 process.

My guess is that more RAM is also bad, just less obviously bad. Have you
tried running memtest86 with some of the more exhaustive tests on this
system?

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Laptop Saved! (was RAM Mapping Script)

2008-03-06 Thread Coleman Kane
Jon 'maddog' Hall wrote:
>> Cost of saving Dell Inspiron 2650 (original cost ~$800)
>> "Technician" @ $40/hr 57 hr ..  $2280.00
>> Book "Understanding the Linux Kernel"  49.95
>> Ice cream to sooth nerves (6 times)33.37
>> Replacement hard drive (160GB) 93.75
>> Rebate from wife for saving environment   -22.78
>>  Total   2434.29
>> 
>
> Cost of all that technical help from the GNHLUG mailing
> list...Priceless!
>
> md
>   
I would have said:

More than you ever felt comfortable knowing about the i386 architecture:
Priceless!

But it's all the same. Glad to hear that you solved your problem.
Diagnosing hardware failure can be grueling.

--
Coleman
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Writing FOSS for Win32: A good LUG meeting topic?

2008-03-20 Thread Coleman Kane
Ben Scott wrote:
> On Wed, Mar 19, 2008 at 9:49 AM,  <[EMAIL PROTECTED]> wrote:
>   
>>  Does anyone here think that "Writing FOSS applications for Win32"
>>  would be a good presentation topic for a LUG meeting?
>> 
>
>   I do, and would also be interested for myself.  It even has
> potential to be a good "outreach event", attracting a larger
> /different audience than our usual suspects.  Good idea.
>
>   This could go beyond software development, too.  (Either as part of
> the same presentation, or a different one.)  Just "Using FOSS on
> Win32" could go a long way.
>
>   
>> Is anyone here expert on this topic?  (Note: I'm not!)
>> 
>
>   Me neither.
>
> -- Ben
>   
I've got a good amount of experience building Win32 apps with MinGW32 
(binutils 2.18, GCC 4.2.1), cross-compilation, and all the quirky glory 
that is Win32 DLL linking.
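
For anyone curious what that looks like in practice, a minimal cross-build
from a Linux box goes something like this (sketch only -- the exact tool
prefix depends on how your MinGW cross toolchain was packaged; i386-mingw32
is a common one):

    # build a Win32 console executable
    i386-mingw32-gcc -Wall -o hello.exe hello.c

    # build a DLL plus an import library to link other code against
    i386-mingw32-gcc -shared -o mylib.dll mylib.c -Wl,--out-implib,libmylib.dll.a

The "quirky glory" mostly shows up once you start juggling
__declspec(dllexport)/__declspec(dllimport) decorations and .def files to
control which symbols the DLL actually exposes.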

--
Coleman Kane

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Rally crash space/afterparty in DC next weekend

2010-10-22 Thread Coleman Kane
We've since moved back to Cincinnati. However, we will be going to the rally. 
It would be great to see you guys again!

Sent from my iPhone

On Oct 21, 2010, at 6:35 PM, Arc Riley  wrote:

> Anyone from NH headed to DC next weekend for the Rally to Restore Sanity or 
> Keep Fear Alive?
> 
> -- Forwarded message --
> From: Event Manager 
> 
> Hail from HacDC - Washington DC's Hacker Space!
> 
> Thanks to Comedy Central many of you will be headed our way next weekend for 
> the Rally to Restore Sanity/Keep Fear Alive.  We've reserved a good portion 
> of the church we rent space from for crash space for our fellow hackers and 
> friends traveling in from out of town, we'd like to extend an invitation to 
> our fellow geeks and hackers to crash at our space.
> 
> We'd also like to extend an invitation to join us Halloween Weekend (Friday 
> and Saturday night, Oct 29/30) for the HacDC Halloween Party.  The party, as 
> all events at HacDC, is free and open to the public.  Costume optional (but 
> encouraged).
> 
> Crash space is in a separate part of the building so those wanting to sleep 
> early won't be kept up by the party.  The suggested donation for people 
> crashing at the space is $20/night and includes breakfast both mornings.  
> Nobody will be turned away for lack of funds, however space is limited so 
> please RSVP (crashsp...@hacdc.org).
> 
> Hope to see some of you next weekend!
> 
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Allegation that the OpenBSD IPSEC stack contains FBI backdoors

2010-12-14 Thread Coleman Kane
On 12/14/2010 10:50 PM, Benjamin Scott wrote:
> On Tue, Dec 14, 2010 at 9:35 PM, Roger H. Goun  wrote:
>> http://marc.info/?l=openbsd-tech&m=129236621626462&w=2
> "Since we had the first IPSEC stack available for free, large parts of
> the code are now found in many other projects/products.   Over 10
> years..."
>
>   And no one else in the world has looked at the code and noticed the
> backdoors in the intervening ten years?  While possible, it makes the
> story a bit harder to swallow.
>
> -- Ben
It is not always so obvious. Many encryption algorithms and key exchange
systems back then relied upon large hard-coded constant-value lookup
tables for various purposes in the algorithm. These would either be hard
coded right in the source code, or would be the product of a run-time
formula that generated dynamic tables according to the key provided by
the user.

The origins of the hard-coded cases could be entirely mysterious to any
developers, and by their nature would be good places to hide an obscure
weakness. The 3DES algorithm was, for years, rumored to contain such a
weakness in the design of its hard-wired S-box lookup tables. Much
research has gone in to studying them, and it appears more likely that
they actually do work as intended: they increase resistance to some
common cryptanalysis techniques.

In another case, it was rumored for a time that Rijndael was chosen over
Twofish as the sanctioned AES algorithm because of supposed weaknesses in
its S-box generation code that would let the US crack it.

In some cases, the algorithms used may just rely upon arbitrarily-picked
integer constants, and in other cases, like above, they might have been
very specifically selected. In many cases, the author's word and some
published research are accepted without much further scrutiny. The trust that many
non-mathematician security developers put into block cipher algorithms
is akin to the trust in OpenVPN that you or I may have in simply
installing it and assuming that it is keeping our stuff private.

My guess is that this event may spark an urgent code-audit on the common
security systems which we rely upon out there. It's good to have these
come along every once in awhile, as it reminds us that we need to keep
studying this stuff. There are scant few pieces of software that we rely
upon on a daily basis that are more complex than encryption libraries, and
they also happen to be the most opaque to us as well.

-- 
Coleman Kane
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: mint

2012-01-02 Thread Coleman Kane
A friend of mine at work (GE) uses Mint as his primary desktop and swears by 
it. I think GNOME 3 is the desktop environment (rather than Unity).

Sent from my iPhone

On Jan 2, 2012, at 1:42 PM, Bruce Labitt  wrote:

> I'm tempted to try out mint.  There are quite a few options.  From what I 
> gather, Mint is based on Ubuntu.  So the latest Mint 12 is based on Oneiric.  
> I have Oneiric now and really think it is a steaming bucket as far as 
> productivity.  It really is not made for doing work, it seems to be 
> more oriented towards eye-candy.  It seems to take a lot more mouse movement 
> and difficult navigation to get anything done.
> 
> Anyone have experience with the Mint family?  How is the desktop handled?
> 
> Should I take the plunge to LMDE?  I've never run debian before.
> 
> I'm not looking for which distro is the best ever, unless folks want to have 
> fun.  Just looking for something that is closer to what 10.04 LTS was.  
> I run and maintain that at work for my two servers.  Anything that is a bit 
> more modern than 10.04 that 1) runs on older hardware (video especially) and 
> can support vlc and myth?
> 
> Maybe I'm just looking to hold on to gnome.  It worked well enough.  It 
> didn't have too much stupid stuff and was relatively easy to maintain.
> 
> Any insights?
> ___
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Slashdot: NH Passes Open-Source bill

2012-02-05 Thread Coleman Kane
On 02/05/2012 04:49 PM, Jon "maddog" Hall wrote:
> On Sun, 2012-02-05 at 15:13 -0500, Bruce Dawson wrote:
>> Kudos to Seth...
>>
>> http://yro.slashdot.org/story/12/02/04/2259227/new-hampshire-passes-open-source-bill
>>  
>>
>> ___
>> gnhlug-discuss mailing list
>> gnhlug-discuss@mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
> I will second that "Kudos".  With so much going on in government these
> days that is "questionable", this is a good example of helping
> government "make the right choice", in both the use of Free Software,
> and (even more importantly) Open Standards and Open Data.
>
> md
I've since moved back to Cincinnati, but I've reconnected with many of
my "successor" evangelists at the University here. I try to explain to
them that, no matter how hopeless or cynical the system appears, there
is a pretty big ideas vacuum. That same vacuum that allows silly ideas
like SOPA and PIPA needs to be taken advantage of more by our community.
In this case, it has really helped having a "person on the inside". If
we make sure to be part of the process, by working toward contributing
to it, we have plenty of opportunity right now to be influential in
lawmaking.

Good work, Seth! I'll make sure I pass this along to my local reps here
in Cincinnati. I know the new city council that was elected last year
has placed on this year's agenda a plan to look into Open Source
solutions for long-term planning (to overcome recurring licensing &
support costs for proprietary closed-source legacy systems).
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Is your kids' school forcing Zoom on them too?

2020-08-08 Thread Coleman Kane
On Fri, Aug 07, 2020 at 10:26:57PM -0400, Kyle Smith wrote:
> On Fri, Aug 7, 2020 at 7:18 PM Matt Minuti  wrote:
> 
> > Virtually all of the security "issues" are irrelevant for the use case of
> > public schools. All the "hacking" I've heard of has been nothing more than
> > people doing the modern equivalent of wardialing, joining in meetings that
> > have no password by picking random numbers. That's not zooms fault, that's
> > just bad IT policy on any platform (which schools ought to know how to
> > address now).
> >
> > There's been no remote execution exploits (AFAIK), so that's a non-issue.
> >
> > Maybe I'm missing something, but what exactly is the problem with Zoom in
> > this context, and what better alternative are you proposing? Jitsi is cool
> > and open source (yay!), and a thousand times better than WebEx, but it's
> > subject to similar server-side concerns as zoom (compromised server MITM),
> > and I wouldn't put much trust in the local SAU IT guy to handle installing
> > it let alone running it securely for hundreds or thousands of simultaneous
> > users.
> >
> 
> This is essentially the main benefit of a hosted solution. Even if there
> are open-source alternatives that are equivalent or superior, most school
> don't have the resources (e.g. IT staff) to do this correctly. At least
> with Zoom it's consistent, and when security fixes go out they go out to
> everyone.

Hi everyone, long time since I chatted with many of you since moving back to
Cincinnati. However, as I am in a similar boat and also working in a cyber
security capacity for the past 10 years, I'll provide some insights around
Zoom that I and my friends are recommending. Mind you, Zoom can be as secure
as any other SaaS offering (Google Meet, WebEx, etc.).

All of the "security concerns" around Zoom boil down to two main categories:

1) Insecure by default - Default config options being "weak" to favor usability
or availability were the driving factor behind many of its embarrassing press
pieces early on. From what I can tell, none of these are much different from the
problems typically resulting from common (and flawed) software engineering 
methodologies. A lot of these are fixable, it just requires going exhaustively
through all of the system options prior to rolling it out.

My recommendation would be to offer to consult for your local school district
for free, to help them lock down their Zoom deployment and also build a list
of SOPs to distribute to employees of the district.

2) Privacy concerns - supposedly a large amount of Zoom's contracted labor
workforce is located in China. People have inferred that this means a lot of
the server infrastructure is located there as well. I'm not 100% sure, but I
am pretty skeptical of this claim - the bandwidth concerns alone would
seem to make this very unlikely to produce a working system. That said, there
had still been concerns early on about the lack of E2E encryption, and weak
algorithms, but Zoom has since fixed both of those. Now, even the free Zoom
accounts support E2E encryption. By my estimate, Zoom is about on-par with
MS Teams, Google Meet, and Cisco WebEx nowadays.

My kid's school district uses Google Enterprise Suite for education, which
works really well, and provides Google Meet for meetings (rather than Zoom).
It's too late this year, but if your school district is seeking out some sort
of lower-cost alternative to MS+O365, the Google Suite is a nice alternative
that also allows you to "activate" any Chromebook with the student's managed
account - basically managing their "desktop" in the Google cloud. At the very
least, everyone's data exists within the district "enclave". 

As always the above presumes some concessions around personal privacy, which
I realize can be a hot button topic. Not my biggest preference, but there are
so many usability and availability benefits to these SaaS productivity systems
that they're becoming commonplace for any large organizations that lack the
buying power of big corporate entities in their IT departments. Solutions like
the above can make it easier for the district's IT dept to manage and secure
what is going on within the student body, in scalable ways that installing a
bunch of dedicated-server software may not.

Coleman Kane

> 
> 
> > On Fri, Aug 7, 2020, 5:52 PM Joshua Judson Rosen 
> > wrote:
> >
> >> So..., pandemic. That's still a thing, and school is about to start up.
> >>
> >> I hear a lot of schools have decided to make everyone use Zoom,
> >> whether they're at school or remote. That's apparently what's happening
> >> at my k