[Discuss] unRAID

2011-07-25 Thread Tom Metro
t a lukewarm reception, and I don't hear much about it
these days. Part of that was because their initial product used a USB
interface rather than being a NAS. Later they offered an add-on to make
it a NAS which was not competitively priced. And it was consistently
panned for its performance.

It seems unRAID has worked through some of these issues: minimizing the
performance hit (you give up striping, but otherwise maintain single
drive performance), and offering a software-only option so you can
minimize hardware costs. Looks like the pricing for unRAID is:

Free - 3 drives, no cache drive
$70 - 6 drives + cache drive
$120 - 21 drives + cache drive

Oddly their marketing makes no mention of the protocols used to share
files, but does make a few references to "My Network Places," implying
SMB/CIFS. (And here I thought we were moving away from a Windows-centric
world.) The FAQ on their Wiki notes:
http://lime-technology.com/wiki/index.php?title=FAQ#How_do_I_configure_NFS_mounts.3F

  Starting with unRAID 4.4.1, unRAID includes the ability to serve files
  using the NFS protocol. This release includes NFS export ability for
  disk and user shares.
  ...
  Please consider NFS support to be experimental. For example, there has
  been no performance tuning done whatsoever (as of 4.5beta1/4.4.2).
  However, see this thread for improved settings and better performance.

This is a good illustration of why we'd be better off getting
unRAID-like functionality added to a solid, widely used OS, like Linux
or FreeBSD. Though if you dig through the FAQ it does say unRAID is
"based on Slackware Linux" w/kernel 2.6.

Anyone tried out unRAID?
Anyone tried building the equivalent with ZFS or some other open source
technology?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://blu.org/mailman/listinfo/discuss


Re: [Discuss] Firefox vs. Chrome

2011-07-25 Thread Tom Metro
Rich Braun wrote:
> I can make it go away for that short amount of time by killing
> and restarting the browser, but a day later Firefox is /always/ painfully slow
> -- thrashing through memory (not disk) in some inefficient piece of core code.

This doesn't fit your description, but one of the inefficiencies in FF
is that it writes out the sessionstore.js file (session data for crash
recovery) in its entirety every 10 seconds, which can cause a very
noticeable hang once your session has grown. This happens after any
action that alters the session, and that can include an action as
innocuous as scrolling in a page.
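If the constant session saves are the culprit, one mitigation is to raise
the save interval via about:config. The pref name is real; the value here
is just a suggestion:

```
// in about:config or user.js; milliseconds between session saves (default 10000)
user_pref("browser.sessionstore.interval", 60000);
```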

I often have 50 to 60 tabs open and after a day or so, FF does get
annoyingly slow.

One thing I really don't like in Chrome is the bookmark manager. The FF
manager was bad enough at being feature sparse, and the Chrome one is
even more so. It doesn't even support saving notes with a bookmark,
which I often use.

What's worse is that Chrome doesn't interoperate with FF's bookmark
manager - you can't drag a link from Chrome to the FF bookmark manager.
So you can't relegate FF to being just a bookmark manager, while using
Chrome for browsing.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] BLU Meeting: Arduino Hacking 101: Importing the Universe

2011-07-25 Thread Tom Metro
Jerry Feldman wrote:
> Federico previews the talk he's giving at this year's OSCON
> 
> So, you have learned the essentials of the Arduino platform, its
> software package, and completed a few tutorials - learned some basic
> electronics in the process, including blinking the inevitable LED. Now what?
> 
> This session aims to give you the tools to import the real world into
> the programming scope of your trusty $30 microcontroller, by covering
> the technology fundamentals and Arduino integration essentials of a wide
> variety of sensors and actuators.

If you found this talk of interest, join us on the BLU Hardware Hacking
list:
http://lists.blu.org/mailman/listinfo/hardwarehacking

I'll be asking Federico some follow-up questions there.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] unRAID

2011-07-26 Thread Tom Metro
 do easily with a JBOD is
replace a populated drive with another one of higher capacity. (Though
easier with a JBOD than a RAID5.)

Really JBOD + parity is an apt description for unRAID.

I'm thinking you could accomplish most of what unRAID does with a FUSE
driver that maintains a parity disk. You also need UnionFS
functionality, but I don't think it would work to layer them, so perhaps
a hacked version of UnionFS.
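
The parity part of such a FUSE driver wouldn't need much more than XOR
bookkeeping. A toy sketch of the single-parity idea in Python, with drives
modeled as equal-length byte strings (everything here is illustrative, not
how unRAID actually implements it):

```python
# unRAID-style single parity: data disks keep their own filesystems;
# a dedicated parity disk stores the XOR of all data disks.

def compute_parity(disks):
    """XOR the corresponding bytes of every disk passed in."""
    parity = bytearray(len(disks[0]))
    for disk in disks:
        for i, b in enumerate(disk):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(survivors, parity):
    """Rebuild a single failed disk from the survivors plus parity."""
    return compute_parity(list(survivors) + [parity])

disks = [b"disk-one", b"disk-two", b"disk-3.."]
parity = compute_parity(disks)

# Lose disk 1; rebuild it from disks 0 and 2 plus the parity disk.
rebuilt = reconstruct([disks[0], disks[2]], parity)
assert rebuilt == disks[1]
```

Any one failed drive can be rebuilt this way; lose two at once and only the
files on those drives are gone, which is the failure mode discussed earlier
in the thread.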

Reading through the wiki further it also seems that the unRAID approach
is far less automated compared to Drobo. Replacing a failed drive or
upgrading to a higher capacity drive both require several steps to be
performed through the unRAID UI. These ought to be fully automated.

Thanks for the additional research, Bill.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Firefox vs. Chrome

2011-07-26 Thread Tom Metro
MBR wrote:
> What have they warned you about Firebug?

In my experience, several years ago when I was using a more resource
constrained machine, it caused FF to hit the point of unusability much
sooner than without it.


> Does it affect performance if Firebug's installed even if you're not
> using it?

Yes. At least it did.

My solution was to create a separate profile that had FB and other dev
extensions installed, and run a parallel instance of FF that would be
used for dev work. (It takes some trickery, but you can coax FF to run
multiple instances.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] unRAID

2011-07-26 Thread Tom Metro
[Forwarding on behalf of Rich Braun, whose list subscription needs
adjustment, and apparently the list management UI isn't working yet. -Tom]

 Original Message 
Subject: Re: unRAID
Date: Tue, 26 Jul 2011 15:04:43 +
From: Rich Braun 

Edward Ned Harvey opined:
> But even losing 5% of your
> files is usually considered fatal, so that's why people usually adopt the
> strategy of never losing more than their redundancy level, and make sure you
> have backups.

Indeed; the unRAID sales pitch smacks of a solution in search of a
problem. A problem I didn't know I had.

The built-in Linux RAID5 does it quite well for me, allowing mismatched
drives, providing quite-high performance given today's high-speed
processors, and with far superior monitoring capability than any
hardware solution I've used in the past.  (My RAID arrays are all in my
Nagios config, without any difficult-to-configure device driver that
needs to be overhauled whenever I do a hardware upgrade.)  Once or twice
a year, I get an alert telling me to swap out a drive.  Usually the
drive is under warranty so I just send it back to the manufacturer and
get a replacement in a week or so.  Good enough for me. And, arguably,
good enough for all but the most demanding corporate data
centers.

I /always/ use RAID even for a desktop.  If I want some of the benefits
that unRAID promises, namely the ability to recover an entire filesystem
from a single drive, then I use RAID1.

Backups used to be a harder problem but some of the online services have
gotten good enough to make this a whole lot more automatic without a lot
of cost.

With terabyte drives in the $50 price range, I can't see a situation
where kernel-based software RAID1 or RAID10 wouldn't be good enough
(performance and pricewise) for virtually any demanding situation.

The equation will be different in a couple of years when solid-state
storage finally starts to eclipse rotating media after a half-century of
dominance by the latter.

-rich


Re: [Discuss] BLU Meeting: Arduino Hacking 101: Importing the Universe

2011-07-28 Thread Tom Metro
[Resending as the list server was down when this was originally sent.]

Jerry Feldman wrote:
> Federico previews the talk he's giving at this year's OSCON
> 
> So, you have learned the essentials of the Arduino platform, its
> software package, and completed a few tutorials - learned some basic
> electronics in the process, including blinking the inevitable LED. Now what?
> 
> This session aims to give you the tools to import the real world into
> the programming scope of your trusty $30 microcontroller, by covering
> the technology fundamentals and Arduino integration essentials of a wide
> variety of sensors and actuators.

If you found this talk of interest, join us on the BLU Hardware Hacking
list:
http://lists.blu.org/mailman/listinfo/hardwarehacking

I'll be asking Federico some follow-up questions there.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] D-I-Y NAS enclosures, Backblaze

2011-07-29 Thread Tom Metro
Kurt Keville wrote:
> I have been following this dialogue at various locations... like
> http://openstoragepod.org/ ... it is remarkable how cheap DIY NAS is
> getting...

Thanks for the link. It says they were inspired by the Backblaze
project. For those not familiar, Backblaze is in the business of
providing online storage, and they published the plans for the low-cost
petabyte storage servers they used internally:
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

This is great to see, and I've looked into some of the components they
use, like the SATA port multiplier backplanes, but Backblaze and
OpenStoragePod are interested in solving the problem for petabyte-scale
storage, which is an order of magnitude (or two or three) beyond what
I'm interested in at the moment.

I had hoped to see multiple vendors start offering the SATA backplanes,
but years later the item is still hard to find.

Compared to the enterprise alternatives, a Backblaze is a bargain, but
much of it doesn't scale down cost-effectively to 6 to 12 drives. They
paid $748 for their steel enclosure alone. A smaller one would obviously
cost less, but any custom enclosure is going to run $200+.

What's on the market for small-scale NASs is already cheap by enterprise
standards. But there is still a noticeable "server tax" on these small
systems. At least some of it is justifiable due to lower volumes. So it
is a harder problem to solve.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] D-I-Y NAS enclosures, 2.5" drives

2011-07-29 Thread Tom Metro
Kurt Keville wrote:
> I wonder if this
> approach would scale up and down to laptop drives? It may be that you
> get higher density with that form factor... it will be more robust I
> would think.

Higher density, sure, but robust? Because 2.5" drives are more hardened
against physical shock?

I see a lot more enterprisy products for 2.5" drives these days. You can
find RAID cages, rack servers, and blades all designed for 2.5" drives.

When you have to halt selling services because you've run out of space
in your data center, then using smaller drives, even if they cost a
premium, makes sense. (I've seen numerous ISPs saying they can't
provision any new servers at the moment because their data center is full.)

If you don't have those space constraints, then obviously the 3.5"
desktop drives still offer the most storage for the money. At 1TB
capacity, you pay about a 100% premium for 2.5".


Shirley Márquez Dúlcey wrote:
> The first true 1TB laptop drive came out recently; there was an
> earlier 2.5" 1TB drive from Western Digital but it was too thick to
> fit in most modern laptops.

When I recently bought a 1TB 2.5" drive, I noticed the WD offering was
12.5mm, and so I bought a 9.5mm Samsung:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822152291

which NewEgg now lists as deactivated. I wonder why.

I see there is also a Seagate "enterprise-class nearline drive for
space-constrained data centers" that is 15mm. Probably too thick to fit
most laptops, and thus the pitch for enterprise markets.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] D-I-Y NAS enclosures

2011-07-29 Thread Tom Metro
Daniel Feenberg wrote:
> And what would be wrong with the Antec Twelve Hundred case, available
> from Microcenter for $185?
> 
> http://www.microcenter.com/single_product_results.phtml?product_id=0361137
> 
> Not rack-mountable, but otherwise a fine, quiet case with  lots of air
> movement and space for 12 drives.

Seems fine if you don't care about space. (It's 20" high and 22" deep.)
If that size case works for you, I'm pretty sure you can find others
like it for half as much.

My SOHO-class NAS spec would house 5 or 6 drives in an enclosure no
larger than necessary to hold the drives, plus a power supply and
mini-ITX motherboard.

The drive bays should be trayless, like these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816215081&Tpk=iStarUSA%20BPU-350SA

The problem with these bays is that they cost as much as the entire case
and power supply should cost ($115), which is no doubt why you never see
them used on SOHO NASs, where the bays are integral to the case design.

Also, many of these bays aren't space efficient, as they are made to go
into 5.25" bays, with wasted space on the side (1.75"). However 5-bay
units are designed to mount with the 3.5" drives vertically oriented,
resulting in less wasted space (0.89" - you need some space for the eject
mechanism). (They should provide better ventilation, too.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] BLU Meeting: Arduino Hacking 101: Importing the Universe

2011-07-29 Thread Tom Metro
James Kramer wrote:
> Has a copy of Federico's talk posted on the site?

He gave the same talk at OSCON. Looks like he posted the slides there:

http://www.oscon.com/oscon2011/public/schedule/detail/18831

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] unRAID

2011-07-31 Thread Tom Metro
Rich Braun wrote:
> ...the unRAID sales pitch smacks of a solution in search of a problem. 
> 
> The built-in Linux RAID5 does it quite well for me, allowing mismatched
> drives...
> 
> With terabyte drives in the $50 price range, I can't see a situation
> where kernel-based software RAID1 or RAID10 wouldn't be good enough
> (performance and pricewise) for virtually any demanding situation.

The most appealing aspect of unRAID is the way it handles capacity
upgrades - either adding additional drives, or increasing the capacity
of existing drives.

Further investigation into unRAID shows it falls far short of its
potential, but much of the infrastructure is there to make it possible
to fully automate capacity upgrades once the media is physically
inserted into the NAS.

In contrast, consider the steps involved in replacing one of your 1 TB
drives in a 4-drive RAID5 set with a 2 TB drive and making use of that
additional space.

Earlier in the thread we found plenty of flaws in the unRAID approach,
but it is getting closer to an idealized expandable pool of storage.
Upgrading your storage capacity really shouldn't be harder than
upgrading your RAM capacity.


> If I want some of the benefits that unRAID promises, namely the
> ability to recover an entire filesystem from a single drive, then I
> use RAID1.

A storage model you described on this list several years ago is about the
closest approximation to an expandable storage pool that I've seen using
stock Linux. This is where you use RAID1 sets with a filesystem that
supports expansion. Periodically you buy whatever drive is at the sweet
spot for cost-per-capacity, and you replace the smaller drive in the
set. Steps are something like:

1. mdadm command to remove smaller drive
2. partition new drive
3. mdadm command to add new drive to array.
4. resync array. (and wait...)
5. expand partition or LVM.
6. file system command to expand FS.
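
For concreteness, a rough dry-run of those steps as commands (device
names, partitioning tool, and resize2fs are my assumptions for an ext3/4
on md setup; this sketch just assembles the strings rather than running
anything):

```python
# Dry-run sketch of the RAID1 capacity-upgrade steps as shell commands.
# /dev/md0, /dev/sdb1, /dev/sdc are hypothetical device names.

steps = [
    "mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1",  # 1. drop smaller drive
    "parted /dev/sdc mklabel gpt mkpart primary 0% 100%",  # 2. partition new drive
    "mdadm /dev/md0 --add /dev/sdc1",                      # 3. add it to the array
    "watch cat /proc/mdstat",                              # 4. wait for the resync
    "mdadm --grow /dev/md0 --size=max",                    # 5. grow the md device
    "resize2fs /dev/md0",                                  # 6. grow the filesystem
]

for cmd in steps:
    print(cmd)
```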

Aside from this being more complicated than necessary, your files are
vulnerable to hardware faults during steps 1 through 4, and subject to
software faults during steps 5 and 6.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] BLU Job Posting Policy

2011-08-02 Thread Tom Metro
The BLU job posting policy:
http://blu.wikispaces.com/job+posting+policy

lists these requirements:

  All positions offered must be for jobs that require proficiency in
  Linux or UNIX and which can be performed by employees located in the
  Boston area. Please include at least the following:

   1. Required skill-set.
   2. Contract or permanent position?
   3. Pay range and incentives.
   4. Placement through a recruiter, or directly with the company? If
  you are a recruiter or search firm, you must say so.
   5. Location: include the City or Town where the applicant will be
  working, and if there is access to public transportation, please
  mention it.
   6. Whether telecommuting is available, and if it is, what percentage
  of the work week may be fulfilled offsite.
   7. Company's product or service.

Are there any changes we want to make to this list?

Should approval of the posting be on an all-or-nothing basis, or should
this list be ordered from "must-have" to "nice-to-have"?

I've rejected several postings for lacking #3. Most don't address #2,
#4, #5, or #6 explicitly, though it is often fairly easy to infer the
answers.

Ideally, I'd like to have these postings supplied through a form, so
they are prompted to supply all the required elements. (Anyone with some
time on their hands want to build such a form? I looked at a few
outsourced services to see if there was something quick and cheap, but
none were capable of sending a custom formatted email in response to
each form submission.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] SAS drives

2011-08-05 Thread Tom Metro
Daniel Feenberg wrote:
> Oh, I see it about backplanes that don't use a cable. So they solve the
> problem for a small segment of the market - the segment that has the
> skills to do the right thing anyway.

As Dan suggests, the drive is an end-user replaceable component, while
the cabling is performed by the computer builder, or a D-I-Yer who knows
what they are doing.


Dan Ritter wrote:
> The PC2012 specification says that all drives must be mounted in
> no-sled ejectable backplane slots

"No sled," eh? impressive that the spec says that. I haven't shopped
specifically for SAS components, but I know you have to hunt to find
trayless backplanes for SATA. Not sure why manufacturers would rather throw
more steel and parts at the problem. Do they think there is more money
in selling trays? Are they not confident in their ability to design an
eject mechanism? (Heck, all you really need is some doubled over packing
tape stuck to the top and bottom of the drive, providing a pair of
protruding tabs you can pull on. APC has been doing this for years with
batteries in their UPSs.)


> This makes changing disks much easier for the home user, of
> course. They just need to turn off the computer, press the right
> eject button, pull out the old disk, take it to the store, have
> the contents copied into a new, larger/faster disk, and then
> they go home, push it in and turn it on.
> 
> The above two paragraphs are purest drivel, as the personal
> computer industry couldn't agree on doing something so smooth in
> a hundred years. I hope it was at least amusing to contemplate.

Ha! Sadly true.

Though the necessary standards are already in place. I'm not sure why
some manufacturer doesn't capitalize on it as a differentiator. (Well, a
scant few have.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Thunderbird question

2011-08-06 Thread Tom Metro
Jerry Feldman wrote:
> I used to have a setting where when I reply (especially to some lists)
> TB chooses the correct identity. ... In TBird 3.1 and
> previous it used the identity I associated with that list. I forgot
> where I set those.

In TB 3.x and earlier you need to use an extension to do that. There may
be others, but this is the one I use:

Folder Account
https://addons.mozilla.org/en-US/thunderbird/addon/folder-account/

I see it has been updated to work with TB 5, implying TB 5 doesn't have
the functionality built-in either.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Power consumption

2011-08-07 Thread Tom Metro
Scott Ehrlich wrote:
> The [stb from Verizon], surprisingly, whether Powered "on"
> or "off" maintained 15 watts continuous.

Were you inspired by one of the articles on this that made the rounds a
few months ago?

Our Set-Top Boxes Suck Up $3 Billion In Energy Every Year
http://gizmodo.com/5812142/our-dvrs-and-cable-boxes-suck-up-3-billion-in-energy-every-year

  ...the 160 million set-top boxes installed in 80% of American homes
  consume more than $3 billion in annual power costs. Mostly from after
  we turn them off.

  [Natural Resources Defense Council study said,] "In 2010, set-top
  boxes in the United States consumed approximately 27 billion
  kilowatt-hours of electricity, which is equivalent to the annual
  output of nine average (500 MW) coal-fired power plants."

  ..consider that a recent model HD-DVR consumes more power than an
  Energy Star-certified 42" LCD screen and consumes more than half the
  power of your new household refrigerator.
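
The quoted numbers hold up as a back-of-envelope check (the $0.15/kWh
electricity rate is my assumption):

```python
# Back-of-envelope check on the set-top-box figures quoted above.
boxes = 160e6            # installed set-top boxes (from the article)
total_kwh = 27e9         # annual consumption (NRDC estimate)
rate = 0.15              # assumed electricity price, $/kWh

avg_watts = total_kwh * 1000 / (boxes * 8760)   # kWh -> Wh, spread over a year
print(round(avg_watts, 1))       # ~19.3 W continuous per box, close to the
                                 # 15 W measured for the Verizon STB

cost_per_box = 15 / 1000 * 8760 * rate          # one 15 W box, per year
print(round(cost_per_box, 2))    # ~$19.71/yr
```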


Jack Coats wrote:
> My guess is that the strip really contains a GFI.  They normally consume
> 1 to 3 watts.

Leakage through the line filter[1] used to suppress transient voltages
is more probable than a GFI on a common power strip.  (Most mid-range
power strips contain some sort of filtering. Rarely do they have a GFI.)
The filtering circuit contains components like capacitors that are
placed across the line.

1. http://www.cor.com/Series/PEM/C/ (see Electrical Schematics)
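
The across-the-line leakage is easy to estimate: the current through a
line-filter capacitor is I = 2*pi*f*C*V. With an assumed (but typical)
4.7 nF capacitor on 120 V / 60 Hz mains:

```python
import math

# Leakage current through a line-filter capacitor: I = 2 * pi * f * C * V.
# The 4.7 nF value is an assumption, typical for filter caps.
f, C, V = 60.0, 4.7e-9, 120.0
i_leak = 2 * math.pi * f * C * V
print(f"{i_leak * 1000:.2f} mA")   # ~0.21 mA
```

That current is mostly reactive, so by itself it can't explain watts of
real draw; bleeder resistors, MOVs, and indicator lamps in the filter
network contribute the real dissipation.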

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] [OT] Apple is the AOL of consumer electronics

2011-08-07 Thread Tom Metro
I had the thought that Apple is the AOL of consumer electronics. Sure
enough, not an original thought. Google finds, "Apple is the new AOL
because of iPad, which offers a safe, easy way to consume content
without venturing out onto the vast Information Superhighway."[1] (And
deeper analysis in the latter half of this blog posting[2].)

1.
http://www.betanews.com/joewilcox/article/Apple-is-the-new-AOL-and-new-Microsoft-and-whoa-that-cant-be-a-good-thing/1275406379
2.
http://battellemedia.com/archives/2010/05/is_the_ipad_a_disappointment_depends_when_you_sold_your_aol_stock.php

Anyway, my thought wasn't limited to the iPad and not quite as literal
as these comparisons between AOL's restricted information sources and
the similar restricted information sources on some Apple devices. (The
latter article makes the comparison that you can't deep link between iOS
apps, and it was this very lack of linking with the broader world that
was AOL's downfall.)

It was more along the lines of how those of us who knew that the
Internet, not CompuServe or AOL, was going to be the future avoided AOL,
even if it offered some short-term ease-of-use advantages.

I raise the point not for the purpose of Apple bashing, but instead to
ask the question, do these long term bets pay off for those of us who
bet on the "winning" side?

What if we are right, and the phone and tablet markets do evolve into
something that more closely resembles Android, or perhaps something even
more open, like the PC market once was? What do we gain by avoiding the
long-term losers in the short term?

You can argue that it is better for the market, and expedites reaching
the desired end-state, by investing in the long term solution, rather
than the easy short term solution, but that isn't necessarily in one's
best self-interest in the near term.

On the other hand, avoiding a siloed environment does potentially reduce
frustration, if you are the type of person who wants to be able to do
anything with the hardware you own, even at the price of using a less
polished end-product.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Copy MAC address list from on Linksys router to another

2011-08-09 Thread Tom Metro
Jerry Feldman wrote:
> We do both WPA and MAC address filter authentication. The MAC
> address is a pain because not only do we have the MAC addresses of all
> the PCs in our office but also we've had new PCs as well as employees
> visiting us.

You might find this easier to manage using a captive portal, like:
http://www.dd-wrt.com/wiki/index.php/NoCatSplash
http://dev.wifidog.org/

These both require a third party firmware on your WRT (DD-WRT and
OpenWRT respectively).

There's also:
http://www.tocpcs.com/setting-up-chillispot-with-freeradius-on-tomato/

and a bunch of other options to explore. Google returns plenty of links.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Thunderbird 5.0

2011-08-12 Thread Tom Metro
Jerry Feldman wrote:
> Every once in a while Thunderbird freezes.

Have you turned off indexing?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Creating a Wiki server

2011-08-28 Thread Tom Metro
Kyle Leslie wrote:
> Any suggestions on a set up.  I have seen some things about TikiWiki and
> MediaWiki.

My vote is for Twiki (http://twiki.org/).

MediaWiki obviously has the advantage that almost everyone is familiar
with it due to Wikipedia, but I find that the markup syntax is
inconsistent enough that I still refer to the documentation, despite
perhaps a decade of semi-regular document authoring in MediaWiki.

Can you even get a WYSIWYG editor for MediaWiki? It doesn't seem to
provide much of a framework for setting up a document hierarchy with
standard navigation controls to move up/down the hierarchy. (Maybe with
a plugin?) Instead you get a breadcrumb trail that reflects whatever
random path you took to the current document, with no real indication
where the document fits in or how to navigate to siblings or parents.

Aside from a consistent markup syntax, Twiki can be skinned and
customized largely from the admin UI to look unlike a wiki, if you so
desire. There's a vibrant community supplying plugins (or at least there
was a few years back when I was actively using it).

Lastly, it has a lot of programmable functionality accessible from
within the templates. You can, for example, generate self-maintaining
lists (directories) for documents that fall into a specific branch of
the hierarchy, are tagged with a keyword, or some other attribute. You
can also create forms and buttons to create new documents using a
specific template.

I've used this functionality to create a self-maintaining table of
specifications, where one column showed the client spec, and the next
column showed the developer spec, and the page had a button to create a
new spec document using the appropriate template. That's all doable
without modifying the core code.
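
A sketch of that kind of self-maintaining table using TWiki's %SEARCH%
variable (the form field and classification names here are hypothetical;
the exact parameters are in the TWiki docs):

```
%SEARCH{ "TopicClassification='SpecDoc'"
         type="query" nonoise="on"
         header="| *Spec* | *Client Spec* | *Developer Spec* |"
         format="| [[$topic]] | $formfield(ClientSpec) | $formfield(DeveloperSpec) |"
}%
```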

Like Dan said, wikis need maintenance, and a wiki that lets you automate
some of that helps.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] ZFS

2011-09-27 Thread Tom Metro
Rich Braun wrote:
> ...at least not on Linux until ZFS is made available
> as a stable kernel module.  (The usual patent and licensing crap is
> responsible for this situation.

ZFS has been integrated into the FreeBSD kernel (as I'm sure you know),
and despite being a less lucrative target for patent suits, is
theoretically subject to the same patent infringement liability, yet I
haven't heard of Sun/Oracle pursuing that.

As I understand it, it is the license under which ZFS is open sourced
that poses the clear and obvious barrier to Linux kernel integration.

A more interesting question is whether Oracle has done any code transfer
from ZFS to Btrfs, now that they own both, and the latter is integrated
into the Linux kernel. (I haven't read much about either project since
Oracle acquired Sun.) Or are they keeping ZFS away from open source as
much as possible, mirroring what they did with Solaris?


> So I am curious--how far along is the ZFS kernel dev?

Or similarly, how much has Btrfs matured?

A 2009-era article:
"BTRFS Intro & Benchmarks"
http://www.linux-mag.com/id/7308/

The short term solution seems to be to build a filer with FreeNAS. (Or
OpenSolaris, or Solaris-kernel-based Nexenta.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] The America Invents Act

2011-09-27 Thread Tom Metro
Rich Braun wrote:
> The America Invents Act of 16-Sep-2011 is about to make things a
> whole lot worse for future open-source dev, I fear.

I haven't read the details of the act. Can you elaborate on that?

 -Tom



Re: [Discuss] LVM Re: A really interesting chain of functionality

2011-09-27 Thread Tom Metro
Bill Bogstad wrote:
> Any snapshot implementation is going to require two different blocks
> on disk for every block written while the snapshot exists.  (i.e. the
> original contents and the new contents of each virtual block which
> has been written)
>
> As I understand it, LVM uses the original location for new versions
> of each block while the original contents of that location are
> written to newly allocated blocks.  Thus you get two writes (and a
> read) the first time any block is written.
>
> The opposite approach (new contents are written to newly allocated
> blocks) only require a single write.
>
> The problem occurs when you go to delete a snapshot. With LVM, you
> just deallocate the storage where the old data was copied too and do
> some meta-data cleanup.   With the alternative approach, deleting a
> snapshot is more complicated.  Assuming that you want to actually
> release the storage where all of the new data was written while a
> snapshot was turned on, you have to copy all of that data back to the
> corresponding locations in the originally allocated space.  (i.e. a
> read and a write.)  So either design requires the same total number
> of IO operations...

That last paragraph is incorrect. (I'm surprised Ed didn't chime in on
this.)

A ZFS snapshot is essentially metadata pointing to "stale" blocks. If
you want to get rid of the snapshot, you just delete the head node in
the metadata, and poof, it's gone.

If you've ever played with rsync snapshots, the principle is the same,
only at the file level instead of the block level. With rsync snapshots
you have a set of real files, and then one or more directory trees that
consist of nothing but hard links to the real files, with the exception
of any files that have changed.

Modified files are written as new real files, while unmodified files
are recorded as hard links.

When you delete a snapshot, you mostly remove hard links and a few real
directory entries. Essentially snapshots consist mostly of metadata,
plus a bit of stale data.
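Schemes like this are usually built on hard links (that's what rsync's
--link-dest and "cp -al" do), so a snapshot entry is just another
directory entry for the same inode. A toy sketch of the idea in Python
(paths and file names here are invented):

```python
import os
import shutil
import tempfile

def take_snapshot(live, snap, changed=()):
    # Toy snapshot: copy modified files, hard-link everything else.
    # (Real schemes do this with `rsync --link-dest` or `cp -al`.)
    os.makedirs(snap)
    for name in os.listdir(live):
        src, dst = os.path.join(live, name), os.path.join(snap, name)
        if name in changed:
            shutil.copy2(src, dst)  # modified: a real second copy
        else:
            os.link(src, dst)       # unmodified: just a directory entry

root = tempfile.mkdtemp()
live = os.path.join(root, "live")
os.makedirs(live)
for name in ("a.txt", "b.txt"):
    with open(os.path.join(live, name), "w") as f:
        f.write("v1\n")

snap = os.path.join(root, "snap.0")
take_snapshot(live, snap)

# Unchanged files share storage: link count 2, no duplicated data blocks.
print(os.stat(os.path.join(live, "a.txt")).st_nlink)  # 2

# Deleting the snapshot is mostly metadata cleanup; live data is untouched.
shutil.rmtree(snap)
with open(os.path.join(live, "a.txt")) as f:
    print(f.read().strip())  # v1
```

Deleting the snapshot tree costs only directory operations, which is
the same reason ZFS snapshot removal is cheap.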

ZFS snapshots are light weight, fast to create or remove, and space
efficient.

As a non-storage professional, my opinion is that LVM snapshots were
glued on as an afterthought and have limited applications. ZFS
snapshots (and similar in NetApp filers) were designed into the
low-level file system architecture.

Generally speaking, the whole LVM concept seems quaint and requires a
lot of manual management. I haven't deployed it on a system since 2006.
Unfortunately, there aren't good alternatives for it on Linux.

 -Tom



Re: [Discuss] ZFS

2011-09-27 Thread Tom Metro
m...@ciranttechnologies.com wrote:
> ZFS was released under the CDDL license, which is open source...there
> is no ground for infringement suits, because everything
> released as open source remains in the community.

A software license generally addresses copyright restrictions, not
patents. Unless the CDDL a) grants royalty-free use of any applicable
Sun/Oracle patents and b) indemnifies against third-party patent claims
against ZFS, any user of a project incorporating ZFS has liability
exposure to patent suits. (The same can be said for any other open
source software project. Ain't software patents grand?)

CDDL might cover "a", but it's unlikely to cover "b."

In any case, it is the licensing, not the patents, that keeps ZFS out
of the Linux kernel.

 -Tom



Re: [Discuss] ZFS

2011-09-29 Thread Tom Metro
Bill Bogstad wrote:
> m...@ciranttechnologies.com wrote:
>> Yeah, against commercial vendors, not end users.
> 
> I'm 99% sure that Ed was right.   In that RMS' video being discussed
> on a different thread, he talks about...that end users can be sued
> directly.

Correct.

The first example that came to mind was the way SCO sued DaimlerChrysler
and AutoZone:
http://www.linuxinsider.com/story/33031.html

but that was actually over (alleged) copyright (licensing) violations.

However, it's easy to find other examples, like Microsoft suing HTC and
others for patent violations in their Android phones.
http://www.guardian.co.uk/technology/2010/oct/04/microsoft-motorola-android-patent-lawsuit

This one is a bit more murky, as it hasn't been disclosed exactly what
was in violation, and whether it was specific to customizations HTC
made, or if Microsoft just went after them as a lucrative target. (Some
articles say it is the HTC Sense UI, but that doesn't explain why they
went after Motorola.)

Generally with commercially licensed software, the publisher will
indemnify the licensee from third party suits. You generally don't get
this with open source, as there are no deep pockets behind the project
(or defensive patent portfolio) to fend off attacks.

 -Tom



Re: [Discuss] ZFS

2011-10-02 Thread Tom Metro
Rich Braun wrote:
> ZFS kernel module for Linux is not an Oracle/Sun-sponsored product, so far
> as I can tell.  Lawrence Livermore Labs appears to be the current sponsor (see
> zfsonlinux.org) of the Linux upstream.  A firm in India called KQ Infotech
> pioneered this port but then got bought out by STEC earlier this year.

Ah, so this isn't the FUSE user-space driver we've heard about before,
but an actual kernel driver.

http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue

  One way to resolve this issue is to implement ZFS in user space with
  FUSE where it is not considered a derived work of the kernel. This
  approach resolves the licensing issues but it has some technical
  drawbacks. There is another option though. The CDDL does not restrict
  modification and release of the ZFS source code which is publicly
  available as part of OpenSolaris. The ZFS code can be modified to
  build as a CDDL licensed kernel module which is not distributed as
  part of the Linux kernel. This makes a Native ZFS on Linux
  implementation possible if you are willing to download and build it
  yourself.

Interestingly the very next question directs Ubuntu users to a
repository of ready-built binary packages. :-)


> There is a snapshot-oriented filesystem project sponsored by Oracle: 
> OCFS2.  It's actually quite good.  I haven't looked at its snapshots yet.

What? Oracle wasn't busy enough developing Btrfs? They had to create
another one? :-)

I see the focus of OCFS2 is clustering, which is not necessarily the
case for Btrfs. There doesn't seem to be a leading choice for clustering
file systems for Linux. Plenty of options, but no clear leader.


> Neither ZFS nor OCFS2 can compete for raw performance with ext4...

Reference?

One data point is:
http://zfsonlinux.org/faq.html#PerformanceConsideration

"...it should be made clear that the ZFS on Linux implementation has not
yet been optimized for performance..."

Of course performance is relative. If you are building a 4-drive NAS for
a SOHO application, the performance difference between ZFS and Ext4 may
be indistinguishable (or not? need to see some benchmarks), or at least
a justified cost for a more self-maintaining storage appliance.

 -Tom



Re: [Discuss] more on software patent

2011-10-02 Thread Tom Metro
 better for electronic
cross-referencing of concepts, with the ability to let others tag and
link related art, and comment on the originality.

Eventually you could provide an API and have tools that semi-automate
the process of breaking down source files into "patentable" chunks,
prompting the author to fill in an abstract describing what each chunk does.

There are already several companies that build databases of open source
code to address copyright violations (for example, Black Duck Software,
who makes their database available via koders.com). A variation for
patents - though far more complicated - could be layered on top.


A better solution to the "obviousness" problem is peer review. You want
industry professionals reviewing patents from their industry, as they
will already be well familiar with the prior art and will know what
should be considered obvious. Large businesses will also find it is in
their best interest to pay employees with expertise to review patents,
as it would be far cheaper to knock out a competitor's patent at this stage.

The PTO has made it known that they'd welcome this, and I think I've
read a few times about proposals to formally adopt such a system. To do
it right, you need early public disclosure of the application. I've been
wondering if the AIA incorporates this.

Until formally adopted, anyone can volunteer to review applications
using monitoring sites like http://www.freshpatents.com/ .


> Accordingly, "patent" means "disclosure" and not
> monopoly by itself.  So, what the patent system does is that the government
> solicit disclosure of ideas from smart inventors, and award them with some
> "exclusive rights" with certain "limited time" if these smart people would
> tell the public in detail what the invention is...
> 
> To take away patent protections from all subject matter, inventors would be
> less inclined to disclose their invention.  Take blue LED for example, it
> took Nakamura 20 years of lab life to find his formula.  If Nakamura keeps
> his technology secret, I can't predict how long it would take for another
> person to develop the same thing.

The NPR piece cited above makes the point that current patents fail at
the objective of disclosure because they are intentionally worded to
obscure the invention as much as possible while also being as broad as
possible. (This is the difference between a "professionally" written
patent and an inventor's self-written patent.)

Anyone here tried building or coding something based on the description
in a patent?

Occasionally you'll see one that is straightforward, but clearly if the
objective were disclosure, the norms for patent language wouldn't be
what they are.

 -Tom



Re: [Discuss] ZFS

2011-10-04 Thread Tom Metro
Edward Ned Harvey wrote:
> In all of the above (and btrfs) there are different architectures and very
> efficiently written code.
> 
> It's unfair and inaccurate to make the generalization that one is better or
> faster than the other.  They're each better in specific cases.  Know the
> architecture gains and losses of each one, and use the best tool for
> whatever job you're trying to do.

Good point. If performance matters to you, your comparison benchmarks
should emulate your intended usage patterns.

I think what is getting blended together is not just the inherent
differences in file systems due to architectural differences, but also
the impression that these newer file systems (ZFS, Btrfs) are either
immature or immature on Linux, and as such haven't been optimized for
the platform.

 -Tom



Re: [Discuss] Is MythTV dead?

2011-10-09 Thread Tom Metro
Rich Braun wrote:
> Is MythTV dead?

DVRs in general are in decline. The TV listing service typically used
with MythTV and other open source projects, Schedules Direct, reports
that subscriptions are down (and as a consequence, prices will be going up).

I think for most consumers the rise of video-on-demand services offered
by cable companies has taken away the motivation to purchase a DVR.
(VOD services - at least what Comcast provides - are a really poor
substitute for a DVR, but for a casual viewer who doesn't really care
about seeing episodes in order or a season to completion, it's adequate.)

Then for the early adopters there is streaming video. MythTV (last I
checked) doesn't do anything to address this emerging market.

The wide adoption of encrypted digital signals has pretty much killed
DVRs for the rest of us. It'll be interesting to see if the Silicondust
CableCard tuners help address this.


> ...whether to dump it in favor of something else?

What's the alternative?


> In the meantime I guess I could revisit how I build my own
> front-ends.  Have been using the PackMan repo to install 0.24 or
> 0.24.1 ...

I originally headed down the alternative front-end software path due to
wanting to use low-power, appliance-like hardware, and at the time, the
Hauppauge MediaMVP was the best option, and you couldn't run the full
front-end on it.

After years of using mvpmc as a front-end client, and more recently
using XBMC (though not as a MythTV client), I really can't see the
appeal in using the MythTV front-end. I still use it as a desktop player
(in a small window), but I've always found the UI really clunky.

As mvpmc has gone obsolete (no HD support), probably a majority of its
users have moved on to XBMC.

If you use MythTV as a front-end, have you tried XBMC? If so, why do you
prefer MythTV's front-end?


Derek Atkins wrote:
> Note: I've been using Myth since 0.11 and I currently run 0.22 on FC12
> systems.  I've had no reason to upgrade the OS or Myth systems, they
> work well.  I'll consider upgrading when I move next month and have to
> add a few more frontend boxes.

I've been treating my MythTV back-end like an appliance and let it and
the LTS version of Ubuntu it runs on fall obsolete. Aside from storage
upgrades, there isn't anything really compelling driving an upgrade of
the back-end.

On the other hand, I'm badly in need of a front-end upgrade, and I'm
trying to figure out the best hardware to run XBMC on.


Dan Ritter wrote:
> My video is NVidia VDPAU across the board. When Debian goes to a
> 3.0 kernel, I expect to be able to move one frontend off the
> GT220 board and on to the integrated Intel graphics.

I hear the integrated Intel graphics are becoming a more attractive option.


> I'm thinking about buying an HDHR Prime (cablecard)...

Likewise. Currently on sale at Micro Center for $200.

I'd buy it tomorrow if I thought my old version of MythTV supported it.
(I gather it isn't API compatible with the original HDHR.) I guess this
will be the compelling reason to build a new back-end.


Jarod Wilson wrote:
> That said, I'm actually thinking about not using MythTV anymore. For
> one, most of what the kids watch anymore is Netflix. And its often
> done via iDevices, which is another sore spot -- there's no great
> integration between MythTV and iDevices.

That there is any need for i-specific integration is a failing of the
industry.

UPnP or DLNA protocol should have addressed this. In part I think it was
never solidly supported by MythTV (though maybe not a problem in the
latest versions). The other part is that I get the impression the
protocol falls far short of being able to control a DVR in the way most
people would want. It seems you can't create a generic DVR front-end
client based on DLNA, only a watered-down media player.

But even pre-dating UPnP support in MythTV, I think this is an artifact
of the MythTV back-end not integrating well with *anything* other than
the MythTV front-end. The client-server architecture has always been
sloppy, without good separation between the two halves.

What should have been there is a MythTV protocol feature that lets the
front-end negotiate with the back-end for supported video formats, and
employ VLC-style on-the-fly transcoding.

 -Tom



Re: [Discuss] open protocols for IP-TV

2011-10-09 Thread Tom Metro
Rich Braun wrote:
> Jarod Wilson:
>> MythTV is still actively being
>> developed, its just not moving at break-neck pace these days.
> 
> It needs to be.
> 
>> That said, I'm actually thinking about not using MythTV anymore. For
>> one, most of what the kids watch anymore is Netflix.
> 
> That's why:  the whole way people use TV sets is getting ripped out and
> rearranged in fundamental ways.  MythTV simply isn't keeping up.

I agree completely. Though this isn't justification for dumping your
DVR. Just the front-end.


> In order to have a user-friendly front end, it has to be able to pull
> online content from a variety of sources (Hulu, Netflix, Youtube,
> plus whatever else got invented in the past 15 minutes) and convert
> it into a unified look and feel to match the familar PVR and
> DVD/BluRay content that most people have.
> 
> My overall point is that these open-standard UI systems like MythTV
> *ARE* dead unless a more aggressive posture is taken.

I agree.

The streaming video model is evolving from using either full-custom
clients (on desktops and appliances), or web UIs with plug-in
dependencies, to an app model. If Google wins, every video source is
going to need to provide its own Android app to run on your Google TV.

I've been surprised to see the open source communities and projects tied
to television sitting on the sidelines in this area. The glaring obvious
absence here is an open protocol for IP-TV. One ought to be able to
create a single generic IP-TV client and have access to thousands of
diverse video sources.

By analogy, the model we have now would be like if the web in 1995
required every web site to develop and provide their own web browser.

I get why companies like Hulu and Netflix have no interest in this. They
like customer lock-in, and they're OK with taking the time to negotiate
with every hardware vendor to get their custom client installed.

Plus, of course, there is the DRM issue. Though an open protocol doesn't
preclude supporting codecs that incorporate DRM.

In order for small original content creators to thrive, we need an open
model with low barriers to entry. We also need a public directory of
available content (TV guide) (with its own protocol and client).

And the best way to break free of the old-world TV model that the
existing studios, networks, and cable companies are clinging to is to
reduce barriers for the new upstarts to reach our living rooms.


> Suppose I wanted to go off and write such a thing from the ground up.
> What would stand in the way of creating a new open-source project to
> accomplish exactly that?

Likely patents (covering the codecs) and the usual uphill battle to
convince enough content providers and users to use the protocol so you
achieve critical mass.

(In the context of your original question - creating an open source DVR
with integrated Netflix/Hulu support - licensing and DRM. A sort of
pointless endeavor. The best bet is to create a big enough audience
using open clients that the commercial content providers will be
compelled to support the format.)

 -Tom



Re: [Discuss] open protocols for IP-TV

2011-10-09 Thread Tom Metro
Dan Ritter wrote:
> Tom Metro wrote:
>> The glaring obvious
>> absence here is an open protocol for IP-TV.
> 
> Yeah, we have that. It's called DVB-IPTV. 

I see.  http://en.wikipedia.org/wiki/DVB-IPTV

Is this the IPTV protocol that MythTV supports? (I know it has had
experimental support for an IPTV "tuner" type for a long time.)

According to:
http://www.mythtv.org/wiki/DVB-IPTV

MythTV has partial support.

I see there are other open source projects that support it:
http://mumudvb.braice.net/mumudrupal/

http://www.netup.tv/en-EN/iptv_pc_client.php (Windows, Linux, and
Android clients)

And China is producing set top boxes that might support this:
http://leddream.en.alibaba.com/product/497408433-212784089/IPTV_Android_2_3_with_DVB_T_HDD_Player.html


> ...you can easily do it yourself inside your house.

What would be the benefit in doing that?

The real win is if content creators/distributors adopt it.

Although it may be an applicable protocol for the HDHR to use. The
MuMuDVB page above links to this network tuner that uses it:
http://www.elgato.com/elgato/int/mainmenu/products/tuner/netstreamdtt/product1.en.html


> US TV providers have no incentive to use it, when they can charge
> you for permanent rental on set-top boxes, enforce weird
> (profitable) channel blocks, and otherwise maintain their
> monopolies. 

Yeah, but you're talking about cable companies and networks that
currently depend upon them for distribution. It's a given they won't
support anything that threatens their existing business model.

The point is to create infrastructure to let new providers thrive who
aren't tied to that old model.

If DVB-IPTV works, then the next step is to get clients widely deployed,
and a directory to promote the available content.

 -Tom



Re: [Discuss] Server Room Power

2011-10-13 Thread Tom Metro
Edward Ned Harvey wrote:
> Hold it.  P=VI is a DC rule.  Power is more complex in AC.
> What's the difference between VA and W?
> 
> If you have inefficient power supplies, you might be overpaying 30% 
> for power.

You're referring to power factor:
http://en.wikipedia.org/wiki/Power_factor

  The power factor of an AC electric power system is defined as the
  ratio of the real power flowing to the load over the apparent power in
  the circuit,[1][2] and is a dimensionless number between 0 and 1
  (frequently expressed as a percentage, e.g. 0.5 pf = 50% pf).
  [...]
  Circuits containing purely resistive [loads] have a power factor of
  1.0. Circuits containing inductive or capacitive elements (electric
  motors, solenoid valves, lamp ballasts, and others ) often have a
  power factor below 1.0.

So when PF=1.0, VA==Watts. The better the quality of your power supply,
the closer its PF will be to 1.0. In the last decade it has become
common for name brand computer power supplies to specify a PF as a
selling point.

See also:
http://en.wikipedia.org/wiki/Switching_regulator#Power_factor

for discussion of PF with respect to computer power supplies.
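To make that concrete: real power is the time average of v(t)*i(t),
apparent power is Vrms*Irms, and PF is their ratio. A quick numerical
sketch (made-up 120 V / 10 A sine waveforms, with the current lagging
the voltage by 60 degrees, so PF should come out to cos(60) = 0.5):

```python
import math

def rms(xs):
    # Root-mean-square of a sampled waveform.
    return math.sqrt(sum(x * x for x in xs) / len(xs))

n = 10000                                   # samples over one full cycle
phi = math.radians(60)                      # current lags voltage by 60 deg
v = [170 * math.sin(2 * math.pi * t / n) for t in range(n)]        # ~120 Vrms
i = [10 * math.sin(2 * math.pi * t / n - phi) for t in range(n)]   # ~7 Arms

real_power = sum(vt * it for vt, it in zip(v, i)) / n   # watts
apparent_power = rms(v) * rms(i)                        # volt-amperes
pf = real_power / apparent_power

print(round(pf, 3))  # 0.5 -- i.e. cos(60 degrees)
```

With a purely resistive load phi would be 0 and the two numbers would
be identical, which is the VA == Watts case above.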


> When you're talking about 208, you're talking 3-phase.

You can attach single phase loads to a multi-phase supply, as long as
they are balanced:
http://en.wikipedia.org/wiki/Three-phase_electric_power#Single-phase_loads
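Incidentally, that's also where the 208 figure comes from: two hot legs
of a 120V wye service are 120 degrees apart in phase, so line-to-line
you measure 120 * sqrt(3), about 208V, rather than 240V. A quick check
of the arithmetic:

```python
import cmath
import math

# Two hot legs of a 120 Vrms wye service, 120 degrees apart, as phasors.
leg_a = 120 * cmath.exp(1j * 0)
leg_b = 120 * cmath.exp(1j * math.radians(120))

line_to_line = abs(leg_a - leg_b)
print(round(line_to_line, 1))         # 207.8 -- the nominal "208 V"
print(round(120 * math.sqrt(3), 1))   # 207.8 -- same via 120 * sqrt(3)
```

(A 240V residential circuit instead uses two legs 180 degrees apart,
which gives exactly 120 * 2.)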


> If you want to use 3-phase 208, you need a special power supply in the
> server.  Generally you don't have such a thing...

Old power supplies used to have a 120V/240V mechanical switch. Most
modern switching supplies will work fine with any input voltage from
like 90V up to 250V (check your supply specifications). The ability to
handle a wide input range is a byproduct of the switching regulator design:

http://en.wikipedia.org/wiki/Switching_regulator

 -Tom



Re: [Discuss] Server Room Power

2011-10-14 Thread Tom Metro
Matt Shields wrote:
> The clampon ammeter means I don't have to shut the
> server's off. I can also clamp on to the rack's main power feed...

I gather there is a point in the power feed where you can access the hot
conductor separately from the neutral?

Others may not be aware that you can't just put the clamp over the power
cord to the server. If you do that the electromagnetic field in the two
conductors cancel each other out. You would typically need to unplug the
device and plug it in through a line separator, like:
http://www.mouser.com/ProductDetail/Extech/480172/?qs=sGAEpiMZZMvxDRaL6U%2fGjYgmQwn6WPwUnwCdDDtlhA4%3d

which separates the two conductors and lets you put a clamp over either
one individually.


Edward Ned Harvey wrote:
> ...when you wrap the clamp around two wires together, in order to
> measure the power of both combined...

Are you talking about pairing up the hot conductors from a pair of
redundant power supplies, or are you talking about the power cords?

If the latter, see above.


> It's important to ensure both wires are connected to the same
> power source. ...they're not necessarily in-phase with each
> other.

If the former, and you're using a true RMS meter, I wouldn't expect it
would matter whether the two current waveforms are in sync. The meter
would just see it as a waveform distortion, and should still give you
the RMS power.

 -Tom



Re: [Discuss] Econonomic contribution

2011-10-15 Thread Tom Metro
ma...@mohawksoft.com wrote:
> Had a little debate, at work, about the importance of the work two men.
> Steve Jobs and Dennis Ritchie.
> Who contributed more to the world...

Looking at it from the perspective of general consumers (not developers,
which is a much smaller audience), this is an infrastructure vs. facade
type of comparison. Infrastructure alone doesn't add value. You need a
finished product made suitable for end-users to be useful. But you can't
build a facade without the backing infrastructure, so in that regards
Ritchie's contributions are more significant and pervasive. You may
think Apple products are wide spread, but just about anything with a CPU
in it is likely to be running software that has been influenced by C or
UNIX.

What's less clear is if Ritchie had not developed the infrastructure he
did, would it have soon become the obvious approach to a practitioner in
the field, or was it really a radical departure from what preceded it,
and
without Ritchie these technologies would have been delayed by 5 or 10
years and/or not as good.

It's a little bit easier to see how others aimed for the same targets
as Jobs and repeatedly failed, so you can argue that cell phones and
tablets would likely not be at their current state for another 2 to 5
years without his vision (comparatively the easy part) and execution of
his business strategy (the hard part).

 -Tom



Re: [Discuss] Econonomic contribution

2011-10-16 Thread Tom Metro
ma...@mohawksoft.com wrote:
> How much of "Jobs'" accomplishments were his own? I argue none.
>
> I submit that all his accomplishments were purely the work of a
> collaborative process. Yes, he chose the final versions, but he never
> made any of it. He never drew something out and said "Make this."
> Very creative people created designs, and as Jobs was presented
> designs, he took their creativity and made it his own. He learned
> more from the designers than the designers learned from him. Ritchie
> on the other hand, did all the things he did, first hand.

Richard Pieri wrote:
> Then I suggest making a fair comparison.  To wit, your dismissal of
> The Steve's perceived lack of creativity because of his genuine lack
> of technical expertise is unfair.

I agree with Richard here. Mark seems to be judging Jobs'
accomplishments from the perspective of engineering, but he wasn't an
engineer.

As technical people we may respect the non-technical fields less, but
they are still an essential part of any successful product.

I think you'd be hard pressed to name anyone from the last 50 years who
made substantial contributions to their industry in the absence of a
collaborative process.

Having a great gadget that either 1. no one knows about, 2. never gets
funded and built, or 3. has such bad usability that few want to use it,
is not a great accomplishment, even if you did conceive of it and build
a prototype all by yourself.


ma...@mohawksoft.com wrote:
> It is a mixture of character and circumstance. It is not certain that 
> either Jobs or Ritchie, outside of the circumstances of their life, 
> would have been so accomplished. 

I'm sure there aren't many fans of Bill Gates here, but in interviews he
was
always quick to credit his circumstances for much of his success (and
thus why he now funds educational programs).


> The question is, and has always been, despite the digression, who
> contributed more to the industry as a whole. The guy who, in very 
> general and simplistic terms, created the environment or a salesman
> who contributed to making it pretty?

Unless you come up with some objective criteria to quantify the
contributions, this will largely come down to your biases for
engineering or business.

About 11 minutes into episode 322 of This Week In Tech[1] they touch on
this topic, and conclude it is the popularizer of the technology that
gets the credit in the history books, not the inventor.

1. http://twit.tv/show/this-week-in-tech/322

 -Tom



Re: [Discuss] Do you know of a Dynamic DNS service that doesn't demand I install software?

2011-10-24 Thread Tom Metro
Bill Horne wrote:
> I'm looking for a free dynamic dns service provider...
> ...I can't find a...provider that I can use without installing
> software on my machine.

I thought most DDNS providers supported a mode where you could register
your IP using a simple HTTP request, which could always be triggered
from an if-up or whatever script via wget/curl.

Here's the HTTP REST API documentation for DynDNS:
http://dyn.com/support/developers/api/perform-update/
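Per that documentation, the update is a single authenticated HTTP GET,
so no client software is required. A sketch of building the request
(hostname, IP, and credentials below are placeholders; verify the
endpoint against the page above):

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute your own host and current address.
host = "myhost.dyndns.org"
new_ip = "203.0.113.10"

# The DynDNS update API is a single GET; credentials go in HTTP basic
# auth, the hostname and address in the query string.
url = "https://members.dyndns.org/nic/update?" + urlencode(
    {"hostname": host, "myip": new_ip})
print(url)

# Equivalent one-liner for an if-up script:
#   curl -u user:password \
#     "https://members.dyndns.org/nic/update?hostname=myhost.dyndns.org&myip=203.0.113.10"
```

Fire that from an if-up hook (or cron) and you've got dynamic DNS with
nothing installed beyond curl or Python.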

DHIS (http://dhis.org/) also supports this (and provides other
advantages). See the documentation:
http://dhis.org/WebEngine.ipo?context=dhis.website.updating.updates

 -Tom



Re: [Discuss] Verizon ADSL and port blocking

2011-10-26 Thread Tom Metro
Bill Horne wrote:
> If there are other ways to scan my IP, please tell m how. 

ShieldsUP
https://www.grc.com/x/ne.dll?bh0bkyd2

 -Tom



Re: [Discuss] lvm snapshot cloning

2011-10-26 Thread Tom Metro
ma...@mohawksoft.com wrote:
> Suppose you have a 1TB hard disk. How on earth do you back that up? 
> Think now if you want to pipe that up to the cloud for off-site backup.
> 
> The snapshot device "knows"
> what is different. You don't have to really backup 1TB every time, you
> only have to backup the changes to it since the last full backup.

If you are at the point of considering developing software to do this,
instead of just using off-the-shelf solutions, then you should consider
using inotify[1]. I believe using this library you could log to a
database the inodes that have been altered over a given period of time,
which another tool could then use to package up the data and send it to
your local or remote backup server.
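A minimal sketch of that "log what changed, back up only that" idea (the event-driven inotify-tools form is shown in a comment; below it, a crude but portable timestamp-marker equivalent, with placeholder paths):

```shell
# With inotify-tools installed, an event-driven change log would look
# roughly like:
#   inotifywait -m -r -e modify,create,delete --format '%w%f' /data >> changes.log
# Below, a crude polling equivalent using a marker file (placeholder paths):
mkdir -p /tmp/demo-src
touch /tmp/demo-src/old.txt
touch /tmp/demo-marker                  # records the time of the last backup
sleep 1
echo changed > /tmp/demo-src/new.txt    # simulated activity after the marker
find /tmp/demo-src -type f -newer /tmp/demo-marker   # candidates for backup
```

The packaging tool would then feed that candidate list to tar/rsync for the differential transfer.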

Of course, without snapshots you'd need to take other steps to ensure
consistency, if that matters for your application.

Seems DRBD[2] would be another way to address this, though not practical
for a remote backup server, as it'll send every change to the remote
server in real time.

Generally though, it seems like you are trying to re-invent ZFS, which
does both the snapshotting you want, as well as communicating
differential changes to a remote server. But I understand your
objections to ZFS.

1. https://github.com/rvoicilas/inotify-tools/wiki/
2. http://www.drbd.org/


> I can't give too much away...

Why? Developing something you plan to patent? Turning it into a
commercial product whose technology you want to keep a trade secret?

If those don't apply, then why not detail your scheme? Better to have
the idea vetted before you've invested a lot of time into building it.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] lvm snapshot cloning

2011-10-31 Thread Tom Metro
Bill Bogstad wrote:
>> ...you should consider using inotify[1].
> 
> I'm not sure how inotify would help that much. Mark...wants to
> replicate/backup only the blocks that have changed due to the files'
> large sizes.

You are correct. I was mistakenly thinking that inodes==blocks, which
of course isn't the case.

I haven't looked at the inotify library in a while, but it isn't out of
the realm of possibility that in addition to the notification messages
identifying the inode, it may also identify the blocks or byte range
impacted by the I/O operation.

 -Tom


-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Security

2011-11-03 Thread Tom Metro
Dan Ritter wrote:
> Everyone wants to connect their iPad or phone... so we got a
> cheap cable modem from Comcast, wired up a WiFi router, and 
> let them play. 

Good approach. Obviously it can also be implemented using appropriate
router/firewall/VLAN rules, rather than a physically separate WAN
connection.


> I can point to complete physical separation when the auditors
> come. That's worth more than the Comcast bill.

Sure, but aren't there dozens of other places in your infrastructure
where your security *is* dependent on firewall rules, and thus you still
need to assure the auditors of the integrity of those systems?


I bet when these "foreign" devices need access to the corporate network,
you're still using a VPN, which then makes the whole corporate LAN
accessible to the infected machine.

I get that it can be complicated to forward specific ports (via ssh or
otherwise), but I never got why large corporations were always so
willing to completely open their internal networks to their employees'
home computers, and always preferred VPNs to port forwarding (which I
find far simpler to set up than a VPN client).
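For illustration, forwarding just the services an employee actually needs over ssh might look like this (all hosts and ports are hypothetical):

```shell
# Hypothetical: expose only specific internal services over ssh tunnels,
# instead of routing the whole LAN through a VPN. Hosts/ports are placeholders.
FORWARDS="-L 1143:imap.corp.example:143 -L 15432:db.corp.example:5432"
CMD="ssh -N $FORWARDS user@gateway.corp.example"
echo "$CMD"   # run the echoed command to bring up the tunnels
```

The infected home machine can then reach only those two ports, not every host on the corporate LAN.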

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Security

2011-11-04 Thread Tom Metro
Dan O'Donovan wrote:
> Hsuan-Yeh Chang wrote:
>> Is there a way to encrypt data stored with cloud services (such as
>> dropbox) that can be decrypted only by the data owner...
> 
> Sure it can, but one of the reasons that DropBox is great is because
> it saves incremental backups of your files (tracking changes and the
> like). If you start encrypting them you loose this...

Not only is the problem solvable, but there are existing open source
(rsyncrypto) and commercial (Wuala) solutions.

The only real challenge a DropBox-like service faces when implementing
client-side encryption is one of convenience - such as how do you
provide the user with access to their files if all they have is a web
browser? Decryption would need to be done in Java or JavaScript, and the
user would have to have their key, or an adequately strong passphrase to
generate a key. It's the usability challenges that undoubtedly led to
DropBox taking the less secure approach.
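The core client-side idea is simple enough to sketch with openssl (a stand-in for what rsyncrypto and Wuala do with far more sophistication; passphrase and paths are placeholders):

```shell
# Encrypt before upload, so the provider only ever holds ciphertext.
# Placeholder passphrase/paths; openssl with PBKDF2 support assumed.
echo 'private data' > /tmp/plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:correct-horse \
    -in /tmp/plain.txt -out /tmp/plain.enc
# /tmp/plain.enc is what would be uploaded; only the passphrase holder
# can reverse it:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
    -in /tmp/plain.enc -out /tmp/roundtrip.txt
cat /tmp/roundtrip.txt
```

Note this simple form defeats the provider's deduplication and incremental-diff features, which is exactly the tradeoff rsyncrypto was designed to soften.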


Cloud storage is easy compared to securing cloud applications. With the
latter the affiliated application data needs to be unencrypted in order
for the application to interact with it.

There is, however, a researcher at IBM who is working on a type of
encryption that makes it possible to perform certain mathematical
operations on encrypted data, such that the transformation persists
after decryption. If this worked, you'd upload encrypted data to the
cloud, process it in the cloud while still encrypted, and finally
download and decrypt it. It sounds like something that will never be
possible in the general case (beyond a few specific mathematical
transformations).

About the best you can hope for is a cloud vendor that uses an
architecture where your login generates a key and your data gets
decrypted on the fly. When you log out, the key gets flushed from
memory, and your data resumes being inaccessible to anyone but you.


> Hsuan-Yeh Chang wrote:
>> If I send an e-mail (with attachment) from Gmail to Hotmail, would
>> both Google and Microsoft keep this e-mail on their respective servers
>> forever?

No, not if you delete it. (Though backups are a different story.)

Many people don't realize it, but it is possible to purge messages out
of a Gmail account. I have a few Gmail hosted accounts, and I
periodically purge all the messages out of them.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] Android doesn't support S/MIME out of the box, either.

2011-11-05 Thread Tom Metro
Richard Pieri wrote:
> Android doesn't support S/MIME out of the box, either.

Speaking of which, what are people using for an IMAP client on Android?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Linux box for under $20? TRENDnet

2011-11-09 Thread Tom Metro
Rich Braun wrote:
> On an impulse, I bought a wifi router at Microcenter a few days ago,
> thinking that heck for $25 I wouldn't mind have an upgrade from "g"
> to "n".
> 
> * The vendor supplies full GPL source code on its website
> * It's on the OpenWRT hardware compatibility list

About 2 years ago I applied the same reasoning to buying a TRENDnet
TEW-652BRP at Micro Center.


> Anyone else here have comments about the TRENDnet routers?

I reported my experiences with the TEW-652BRP on this list back around
the Fall of 2009/Winter 2010. In summary, I found that several devices
on my wireless network had trouble establishing a connection, even
though they worked flawlessly with the WRT54G I used previously.
Additionally, my N supporting clients saw no speed improvement over the
G connection they previously used.

I set up the TEW-652BRP to log to another server via syslog, and over
the course of the year I used it, it captured kernel buffer overruns and
at least a few panics, requiring restarts.
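Capturing a router's logs on another box like this needs only a couple of config fragments (a sketch; rsyslog is assumed on the receiving server, and the router-side syntax varies by firmware):

```
# On the receiving server, an /etc/rsyslog.conf fragment to accept UDP syslog:
module(load="imudp")
input(type="imudp" port="514")

# On the router (where configurable), point syslog at the server, e.g. a
# "remote log server" field set to 192.168.1.10:514 (placeholder address).
```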

Other than recommending a hard reset, TRENDnet support wasn't helpful.
(Which is consistent with the level of support they provide for their IP
cameras when I've tried reporting bugs or seeking workarounds for said
bugs.)

My WiFi network improved greatly when I switched to an ASUS RT-N16
running Tomato USB firmware.


> ...I found it interesting that prices for a "Linux box" have dropped
> below $20.

That's novel for a device claiming to support 802.11N, but otherwise
there have been several Linux-based routers at or near that price point.
Several of the ASUS models that used to be popular with 3rd party
firmware users were around that price.


> I don't yet see a reason to swap out the vendor's software for OpenWRT

Likewise, I ran it stock for the time I had it deployed, but now that it
is sitting on a shelf gathering dust, I ought to try loading a 3rd party
firmware onto it and put it to some use.

I wish Dnsmasq was designed to operate in a master/slave fail-over
arrangement. If it was, I'd set it up as a backup DHCP/DNS server.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Linux box for under $20? TRENDnet

2011-11-11 Thread Tom Metro
Bill Bogstad wrote:
>> About 2 years ago I applied the same reasoning to buying a TRENDnet
>> TEW-652BRP at Micro Center.
> 
> I have had that same model on my home network for about
> the same time with no problems. ...
> One possible reason for different experiences
> is that TRENDnet used that model number for two completely different
> hardware versions.   I have the V2 hardware with the 2.00.22 firmware
> revision.

Could be. According to my notes, I have H/W: V1.1R, Firmware Version:
1.10.14  (came with 1.00b0008).

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Perl OO question

2011-11-16 Thread Tom Metro
Derek Martin wrote:
> I...have two implementations of a Perl module, and
> I want to compare them programmatically.

I would take the approach of running Perltidy[1] on both, followed by
diff or other text processing of your choice. (For example, once you
have methods formatted consistently, a simple grep can list all the
methods in a file.)

This isn't perfect if your two modules differ radically, as you can
have dynamically generated methods, or an implementation that is spread
across multiple modules using inheritance.

1. http://perltidy.sourceforge.net/
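A sketch of that pipeline (perltidy assumed installed; here only the "grep for methods" step is exercised, against a throwaway sample module):

```shell
# With real modules, first normalize formatting and diff:
#   perltidy -st ModuleA.pm > a.tidy
#   perltidy -st ModuleB.pm > b.tidy
#   diff -u a.tidy b.tidy
# Once formatting is consistent, listing/counting methods is a one-liner:
cat > /tmp/Sample.pm <<'EOF'
package Sample;
sub new { my $class = shift; return bless {}, $class }
sub frobnicate { return 42 }
1;
EOF
grep -c '^sub ' /tmp/Sample.pm
```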


> Is there a way to do this *in code* *in Perl*?

I see you ran across how to dive into the symbol table.


> The namespace is polluted by members of other imported modules used
> within that one, a fact which I have very little control over.

Using an OO framework like Moose avoids this, but that doesn't help you
if the code you are analyzing doesn't use it.

OO in Perl 5 is "glued on" rather than native, unlike Perl 6, and this
becomes evident when you start introspecting objects.


> I was also reminded that I (still) hate Perl.

Perl makes easy stuff convenient, and hard stuff possible. What you were
looking to do is out of the realm of what's typical.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Postfix Wildcard email forwarding

2011-11-16 Thread Tom Metro
Dave Peters wrote:
> ...how to configure postfix to forward wildcard emails to anther email 
> account.
> 
> what I would like to do: 
> 
> _re...@mymail.com --> re...@mymail.com
> _nore...@mymail.com --> nore...@mymail.com
> xxx_s...@mymail.com --> s...@mymail.com

So in your example you want the "x" replaced with a wild card that
matches any one character?

This should be doable using a regexp_table[1] or pcre_table[2], with a
file containing a line like (untested):
/_re...@mymail.com/ re...@mymail.com

and then in main.cf set alias_maps or virtual_alias_maps to
regexp:/path/to/file.

1. http://www.postfix.org/regexp_table.5.html
2. http://www.postfix.org/pcre_table.5.html
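As an illustration of the shape such a table might take (the addresses here are hypothetical, since the originals are obfuscated in the archive, and this is untested like the snippet above):

```
# /etc/postfix/prefix_strip.pcre -- hypothetical rule: strip everything
# up to and including the last "_" from the local part
/^.*_([^_@]+)@mymail\.com$/   ${1}@mymail.com
```

with main.cf containing something like:

```
virtual_alias_maps = pcre:/etc/postfix/prefix_strip.pcre
```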

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] mbox questions

2011-11-16 Thread Tom Metro
Jerry Feldman wrote:
> ...one of the solutions is to remove all the index
> (.msf) files and compact all the mbox files. In my situation I have
> emails in local folders going back years. This should be no problem...

Be aware that although the documentation would have you believe that the
.msf files are there strictly for performance and are otherwise
disposable, they actually store some unique meta data - at least with TB
version 3.x and earlier, and for local folders. For example, if you
delete your .msf file, you will lose any tagging you had set.

Even doing something as seemingly innocuous as moving your local mail
store to a different file system (path) can throw TB off and cause the
meta data to be lost.


John Abreau wrote:
> Now I keep all my local backups of mail folders in Maildir format
> on one of my home servers, with Dovecot sitting on top for
> easier access via IMAP.

I second that approach.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] to Ubuntu or not to Ubuntu

2011-11-23 Thread Tom Metro
ma...@mohawksoft.com wrote:
> I think I've decided to move away from Ubuntu. Maybe I'm a dinosaur, but
> I'm not liking the changes.
> 
> CentOS? OpenSuSE? Fedora?

I'm pondering the same question, but personally I plan to stay within
the Debian universe if I do stray from Ubuntu. The Debian frame is still
solid, even if the Ubuntu chrome on top has gotten too distracting.

It makes me wonder where community maintained distributions will find
technical contributors if they optimize their UIs for non-technical
people to the point that it drives them away. (Maybe not an issue for
Ubuntu, which has paid developers.)


William Ricker wrote:
> If I got totally alienated, I'd check out Mint...

That seems to be the alternative distro most frequently mentioned for
Ubuntu expats.


> ...How-To to replace Unity with traditional shell...

To me this comes down to which will be better supported: 1. running a
3rd tier distribution like Mint, or 2. substituting a major Ubuntu
component? Will most all Ubuntu packages still work with #2? Will
developers ignore your bug reports if you aren't running Unity? Will you
have to constantly switch back to Unity to test whether it makes an
observed problem go away? Which of these two options will have a larger
user community?

In the early days with Ubuntu, it was apparent how many layers you were
from a package's developer. It was "upstream" (original authors) ->
Debian package maintainers -> Ubuntu package maintainers. Most bugs had
to travel up that ladder, and fixes back down, which was slow. But over
time Ubuntu became embraced directly by more and more developers as they
adopted it for their personal use and the user base grew.

Either of the above options is likely to inject a step in that ladder again.


Matthew Gillen wrote:
> Note that if you're just mad about gnome3...

Although there are similarities and shared elements, Unity != GNOME3.
But it sounds like running even GNOME3 on Ubuntu 11+ will put you in the
minority. And I get the impression that neither option will provide a
smooth transition if you have a well tuned and customized GNOME2
environment.

The question I'd put to the community is: If you have been using Ubuntu
since 10.04 or earlier, and you've upgraded or are planning to, what
option did you take?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] BLU archive?

2011-11-24 Thread Tom Metro
John Abreau wrote:
> The main list archive is hosted at gmane.org:
> 
> http://dir.gmane.org/gmane.org.user-groups.linux.boston.discuss
> 
> We also have a secondary list archive at nabble.com:
> 
> 
> http://old.nabble.com/Boston-Linux-UNIX-General-Discussion-List-f24859.html

FYI, there are more. The definitive list can be found at:
http://blu.wikispaces.com/mailing+lists

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Competition of broadband

2011-12-05 Thread Tom Metro
The alternate approach taken by European governments was mentioned
earlier in this thread. I highly recommend watching this video on the topic:
http://www.engadget.com/2011/06/28/why-is-european-broadband-faster-and-cheaper-blame-the-governme/

(A variation of it ran on PBS earlier this year.)

It seems to show a successful model for creating inexpensive (at least
an order of magnitude cheaper than ours), faster, ubiquitous broadband.

What's less clear is whether this model could successfully be
extrapolated from suburban-density areas to rural areas. (But the video
does cover rural deployments.)


Ben Eisenbraun wrote:
> The only reason network providers build network to unprofitable
> areas, i.e. low-density, rural areas, is because regulations force
> them to.

True. The problem with the purely commercial provider model is that the
company investing in the infrastructure may not see a payback on running
a wire to a distant rural customer for hundreds of years, because the
only return is the $30 or whatever monthly fee they are getting.

If you instead view the problem from the perspective of the community
(town government), you now have much greater opportunity for return on
investment. A well wired town attracts more people, people who can
telecommute and get higher paying jobs, people who spend money at local
businesses, and those conditions also attract employers to the area
(premium property tax revenues).

So the answer seems to be some combination of public-private infrastructure.

Some towns in the US have attempted to build their own fiber plants only
to be thwarted by telecom lobbyists getting state laws passed
prohibiting anyone but the telecoms from doing such a thing.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] Ubuntu alternatives and upgrading distributions

2011-12-10 Thread Tom Metro
A couple of articles relating to the thread on Ubuntu alternatives and
Linux desktop distributions in general.

Linux Mint 12 "Lisa" now available, is most popular open source OS
http://www.slashgear.com/linux-mint-12-lisa-now-available-is-most-popular-open-source-os-29198793/

  In just the last twelve months, Linux Mint has surpassed Ubuntu as the
  most popular open source operating system on open source ranking
  website DistroWatch. Why, you ask? Perhaps because the latter has been
  looking with a new perspective on the user interface, and begun aiming
  at mobile platforms instead. However, note that Linux Mint is actually
  built on Ubuntu, so it has quite a few of Ubuntu's advantages while
  doing away with some of its shortcomings...

I thought Mint was still pretty much a minority player, though we
shouldn't confuse DistroWatch rankings with how widely deployed a
distribution is. I'm sure Ubuntu, which spent years at or near the top
of the DistroWatch rankings, still dwarfs the number of Mint
deployments. Still, DistroWatch could be accurately reflecting a trend...

I see that this version of Mint uses GNOME 3, so don't expect Mint to
insulate you from substantial UI changes. The article describes it as
"a solid desktop interface that isn't trying to do something crazy."


Elsewhere Alex Handy in an SDTimes blog posting looks at the problem of
Linux OS upgrades from a server/application developer-perspective:
http://www.sdtimes.com/blog/post/2011/09/16/Youre-doin-it-wrong.aspx

  ...there really isn't a 10-year Linux. ...when it comes to long term
  support...you're looking at 2 to 5 years of support, max.

  You can't use Debian, because their release cycle is glacial. You
  can't use Ubuntu because their release cycle is break-neck. Is there
  room for a Linux that is released once a year, but each year's release
  has all security patches and bug fixes ported to previous versions?

  ...the short answer...is no. But this isn't because the market is
  moving away...It's because we're all doing it wrong.

  In 2003, you didn't want to touch the OS layer. Your app worked on a
  single instance of an OS, and it ran on a specific version, due to
  some requirement, or compatibility issue. But today, that whole
  paradigm of sticking with an old system to avoid change is rather
  wrong headed. When it comes down to it, if you're building a Web
  application, you really don't need to worry about the OS layer too
  much.

  The days of a Linux kernel patch breaking your application should
  really be behind you. When it comes right down to it, it's the items
  in your stack over which you should be executing version control. The
  right versions of the right libraries and components are still
  essential, and likely will be for years to come.

  But the actual OS you're using should be getting more and more
  irrelevant. The OS is just the container for your application...

  So, we're now drawing a line in the sand. If you're worried about
  upgrading an OS that's below a Web or mobile application, you're doing
  it wrong.

  And, of course, the inverse of this whole blog entry seems also to be
  true: as the OS layer becomes less relevant, there's less reason to
  upgrade it, ever. If it works, you might as well keep that virtual
  machine and disk image in its present state forever.

(I've edited this heavily, removing several qualifiers the author
included, so if you disagree with his premise, read the full text first.)

While this is talking about web apps running on servers, you should be
able to extrapolate the same ideas to the desktop...yet you can't.
Surprisingly the core of the OS - the kernel - is actually relatively
easy to upgrade without upgrading the full OS. Instead the linchpin for
any desktop OS version seems to be the libc version and secondly the GUI
environment version.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Off-Topic [IP] BufferBloat: What's Wrong with the Internet?

2011-12-14 Thread Tom Metro
Rich Braun wrote:
> Bill Horne:
>> At some point, the Internet will need a major overhaul.
> 
> Will it?

I think we have a very long history of incremental tweaks ahead of us...


>> For what common carriers are trying to do...TCP/IP can't be made to fit.

I'd buy that TCP may not be part of the future for real-time
communications - heck, it hardly is now, given that Skype, SIP, RTP are
all UDP - but I think what you intended is that the packet-switched
infrastructure used by IP isn't workable.

On that I disagree, as the desire to provide super cheap communications
is too great. Unless a cheaper alternative comes along for the OSI layer
2 and 3 infrastructure[1] we currently have, we'll see creative
solutions at layer 4 (TCP, UDP, SCTP[2]), or just faster pipes, as Rich
suggests - whatever ends up being cheaper to implement.

1. http://en.wikipedia.org/wiki/OSI_model#Layer_3:_network_layer
2. http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol

Providers are currently racing at breakneck speed towards zero cost
telecommunications. Consider Google Voice, or Republic Wireless (whose
parent, Bandwidth.com, provides the infrastructure for Google Voice,
Skype, etc.), which obtained its own country code so your friends and
family in other countries can call your Republic Wireless phone for free[3].

3.
http://techcrunch.com/2011/12/13/republic-wireless-is-launching-free-international-calling-powered-by-their-own-country-code/

As phone service becomes like a disposable commodity, people will be
more tolerant of reliability problems, and likely will have a variety of
alternate channels to choose from if one doesn't work to their liking at
the moment.

Also consider that many of us are using less and less real-time
communication, replacing phone calls with text messages and IMs.


>> This fight will be
>> about which mega-corporations carve out virtual slices of Internet
>> bandwidth so that they can avoid paying for their own.

There will likely emerge premium services that give you guaranteed
latency, using things like RSVP[4] or NSIS[5], but I think the vast
majority of users will find the commodity service to be "good enough."

4. http://en.wikipedia.org/wiki/Resource_reservation_protocol
5. http://en.wikipedia.org/wiki/Next_Steps_in_Signaling


> The gearheads who recognize BufferBloat will ultimately do the
> obvious: crank down the buffering and adjust the retry parameters.
> And flatten out the number of hops from source to destination.

Right.

[Thanks to Stephen Ronan for sharing the article. I'd heard about this
buffer bloat issue before, but hadn't read the details.]

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] live podcast BLU meetings

2011-12-20 Thread Tom Metro
Live podcast BLU meetings?
 -Tom

http://techcrunch.com/2011/12/20/twitter-for-audio-app-spreaker-now-live-on-android/

  ...the self-described "Twitter for audio" app Spreaker is now publicly
   launching Android. The app, which lets users broadcast live over the
  Internet, is a DIY broadcasting/podcasting solution with social
  sharing mechanisms built in. You connect the app to your Twitter and
  Facebook accounts to broadcast live as a status update. Afterwards,
  you can download the MP3 created to publish it as a proper podcast
  complete with music and sound effects.

  The Spreaker app is unique in that it streams audio in 128 kilobits
  and includes features like live chat and Skype call-in, in addition to
  social sharing. There's also an online web platform that lets you edit
  the recording...



-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] keeping an eye on congress

2011-12-20 Thread Tom Metro
Bill Horne wrote:
> Email is analyized and weighted for keywords, after being run through
> /very/ expertly devised filters which identify "mail bomb" auto-writing
> campaigns and chain letters. Printed mail is often simply weighed, after
> being sorted by zip code. Only hand-written letters get seen by a real
> person.

This is all true (from what I've heard), but it shouldn't be.

(Actually, in most cases I could care less whether my representatives
read my specific words, as long as my "vote" on the issue gets equal
weight to all the others.)

There really should be a more efficient mechanism for constituents to
express an opinion on pending legislation. We certainly have the
technology available to do this. It should not only be convenient for
us, but also for the representative. Why force them to run keyword
searches on correspondence to categorize it, if it will just end up
being aggregated as a "for" or "against" vote? Provide an actual voting
mechanism, and offer separate channels for actual hand written
correspondence.

But what bugs me more about the inefficient use of technology by our
representatives is that it is easier to keep tabs on any random
celebrity than it is to follow what our government is up to.

There ought to be a service (and perhaps there is, but I haven't run
across it) that publishes a summarized, easily digestible report of what
legislation is coming up in the House of Representatives and the Senate.
Probably a version that is packaged up weekly, and another version that
sends out a daily report. Something you can get via email, RSS, or
Twitter. Something that includes links to a feedback channel that allows
you to supply your "for" or "against" vote, and optional comments, to
your reps.

We may have CSPAN, but it is largely an after-the-fact tool. By the time
legislation is covered there, the deals have already been made. The
lobbyists are certainly on top of pending legislation and have gotten
their opinion injected before the vote, so why shouldn't we?

The reports should also cover how your specific reps voted on recent
legislation. So much bad legislation gets through Congress because the
vast majority of constituents are completely unaware it existed, and
have no idea how their reps voted on it. Even for major legislation that
you've heard of, do you know how your rep. voted? (Unless the vote ends
up being something the rep's challenger can use in a campaign ad or that
they can brag about, you rarely learn how your reps voted.)

Sure, this blurs the line between representative and direct government,
but with reps. spending most of their time in Washington instead of in
their districts, this just leverages technology to improve communication.

Undoubtedly such a system would be used by only a small minority of
constituents, and would have inherent demographic biases, but it still
may shine enough sunlight on the inner workings of Congress to make
them think twice about whether they can get away with anything, thanks
to the obscurity of CSPAN.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] list etiquette for posting links

2011-12-20 Thread Tom Metro
Dan O'Donovan wrote:
> Nilanjan Palit  wrote:
>> http://www.washingtonpost.com/national/on-innovations/code-for-america-an-elegant-solution-for-government-it-problems/2011/12/16/gIQAXrIu2O_story.html
>> 
> 
> This is a good story? This is a bad story? You agree with this? You
> don't agree? This story is worth us spending time to read, but not
> you to comment on?

In my opinion (not BLU list policy), as a matter of mailing list
etiquette, I don't think it is necessary for the OP citing a link to
provide commentary on it, but I do think all links necessitate some form
of summary, be it a self-explanatory title, a quote from the article,
or a hand-written summary.

Remember that your posting will live on in the archives, probably long
after the link has ceased to work.

Not to mention if you want to motivate someone to follow the link, it
generally helps to provide some sort of a teaser. (My personal policy is
that I rarely follow links posted without explanation. Along the lines
of what Dan said: if you can't be bothered to summarize, then I can't be
bothered to click on it.)

 -Tom



Re: [Discuss] live podcast BLU meetings

2011-12-20 Thread Tom Metro
Richard Pieri wrote:
> Would be nifty, but it adds a legal twist.  You cannot legally
> distribute a recording of a speaker's presentation without obtaining
> that speaker's permission.

OK, but I don't think that poses much of a problem.

BLU already records the video for talks with some regularity. Presumably
such permission is already being sought and granted.

 -Tom



Re: [Discuss] keeping an eye on congress

2011-12-20 Thread Tom Metro
Dan Ritter wrote:
> http://www.congress.org/congressorg/megavote/
> 
> There you go. Everything except a different name.

The description:

  Track your Senators' and Representative's votes by e-mail
  Each week (that Congress is in session) you will receive:
  o Key votes by your two Senators and U.S. Representative.
  o Links to send e-mail to your members of Congress using pre-addressed
forms.
  o Upcoming votes for your review and a chance to offer e-mail input
before they vote.
  Use this weekly vote monitor to track the decisions made by your
  elected officials on key issues.


...sounds spot on. Just fill in your zip code and email address. (Note
that
the site accepts an email address containing a "+" character, but
doesn't escape it correctly if you hit the edit link. I reported the bug.)

I signed up. We'll see how it goes.

Thanks for the link.

 -Tom



Re: [Discuss] build my own cell phone

2012-01-02 Thread Tom Metro
Stephen Adler wrote:
> ...I've set out for my new years resolution, to build my own cell
> phone...
> Anyone on BLU ever attempt or know of anyone who attempted to build
> their own cell phone?

Didn't the Openmoko (http://www.openmoko.com/) guys run into some
significant roadblocks when it came to creating open firmware for the
GSM radio? The carriers don't take kindly to arbitrary code running at
that layer.

However, your goal is a bit fuzzy. Presumably you aren't planning to
build your own GSM radio. How much of the phone do you need/want to
build to achieve your goal? Your best bet might be to start with a
purchased phone and mod it in some fashion. I'm not aware of any phones
that are designed to permit swapping components.

 -Tom



[Discuss] [OT] Microsoft's Standalone System Sweeper

2012-01-02 Thread Tom Metro
I heard Microsoft's Standalone System Sweeper mentioned on the Security
Now podcast sometime last year, and recently when several friends and
relatives, who are still unfortunate enough to be running Windows,
asked me for advice on repairing malware infections, I recommended they
try it. They've all had positive results. Also it is turn-key enough
that non-technical users can employ it themselves. It has saved me from
making on-site visits.

To use Microsoft's Standalone System Sweeper you download an installer
on an uninfected Windows machine, and run it to produce a bootable CDR,
DVD, or USB drive. You then boot the infected system with the media you
created and it scans/repairs the system.

I think it is about time there was a commercial solution for malware
remediation that didn't depend on the infected OS. I always found the
idea of downloading and running repair tools on an infected system to be
tenuous. For the technically inclined, the best option was always to
boot a live CD (Linux or Windows) and run repair tools from that.

Microsoft seems to recommend SS only if other methods have failed, but I
tend to think that if you notice malware symptoms despite running
real-time protection (say Microsoft Security Essentials), then your
first response should be a tool like SS. I plan to recommend to my
friends and clients that they run SS prophylactically every 6 months.

I would, however, like to know more about what System Sweeper does. For
example, why do they have both a 32-bit and 64-bit version? (The
architecture needs to match the target system that will be
scanned/repaired.) It raises the possibility that they are bundling
repair files onto the CDR to replace commonly damaged files, and that
the CDR only has enough capacity to handle one target type.

Why doesn't Microsoft provide an optional ISO file to download? It would
permit you to use more secure systems (like Linux) to create the media,
and if all you had available was an infected system, it would probably
be less risky to download and burn an ISO than to run the installer.
Sure, the tool
would need the latest virus signatures, but a scheduled job could
regenerate the ISO file on Microsoft's servers periodically.

What does SS actually do when it scans a system? It seems to both detect
and repair problems. Can it replace corrupt or infected Windows files?
Does it include replacement files, or does it just know how to repair
the on-disk files from specific types of damage? Does it exclusively
scan for virus signatures, or does it also compare the hash of system
files against a database of hashes of known good files? Does it repair
the MBR? How does it determine the MBR is bad, and will it consider
alternate bootloaders, like GRUB or Truecrypt, as infected and replace them?

 -Tom



[Discuss] Full disk encryption

2012-01-02 Thread Tom Metro
The EFF recently tweeted
(http://twitter.com/#!/EFF/status/153306301965938688):
  @EFF
  Call to action for 2012: full disk encryption on every machine you
  own! Who's with us? eff.org/r.3Ng

Which links to this article:
https://www.eff.org/deeplinks/2011/12/newyears-resolution-full-disk-encryption-every-computer-you-own

  Many of us now have private information on our computers: personal
  records, business data, e-mails, web history, or information we have
  about our friends, family, or colleagues.  Encryption is a great way
  to ensure that your data will remain safe when you travel or if your
  laptop is lost or stolen.
  [...]
  Choosing a Disk Encryption Tool
  [...]
  -Microsoft BitLocker in its most secure mode is the gold standard
   because it protects against more attack modes than other software.
   Unfortunately, Microsoft has only made it available with certain
   versions of Microsoft Windows.
  -TrueCrypt has the most cross-platform compatibility.
  -Mac OS X and most Linux distributions have their own full-disk
   encryption software built in.


What makes Microsoft BitLocker better than TrueCrypt?

Are you using full disk encryption? If so, what tool are you using?

 -Tom



Re: [Discuss] D-I-Y NAS enclosures

2012-01-02 Thread Tom Metro
Benjamin Carr wrote:
> I am personally enamored of the HP Proliant Microserver... It has
> a 64bit AMD Athlon II Neo processor, two DIMM slots (supports ECC), one
> gigabit NIC, a four drive cage (not hot-swap)...
> It is $330 from NewEgg with a "throw away" 250GB drive and 1GB of Ram. I
> wish they would sell it "bare" for $50 less but the don't.

Did that come loaded with Windows Home Server?

I see HP went on to produce an Atom version with 2GB Memory and 1TB HD:
http://www.newegg.com/Product/Product.aspx?Item=N82E16859105777

I looked it up for comparison when I recently ran across Acer's product
in this space:
http://www.newegg.com/Product/Product.aspx?Item=N82E16859321016

a smaller 8.5" x 8" x 7" cube with a 2 TB drive. (Plus 5 USB and 1 eSATA
ports.) Currently selling for $260. Possibly discounted due to being
loaded with an obsolete version of Windows Home Server.

(I wonder how much the "windows tax" is on this server and what a bare
bones version without the OS and drive would sell for.)

My biggest concern with these NAS boxes is whether the motherboards are
proprietary, and whether you'd be stuck if one died.

Seems like a good deal, if the included drive is useful to you.
According to camelegg.com, it is on a downward price trend, so it may be
discounted further:
http://camelegg.com/product/N82E16859321016?utm_campaign=firefox_ext&utm_source=product_link_ttp&utm_medium=www

 -Tom



Re: [Discuss] Full disk encryption

2012-01-03 Thread Tom Metro
Bill Horne wrote:
> Oa k'wala wrote:
>> Any thoughts on the kind of security risk I might be vulnerable to
>> because I only encrypt my home dir as opposed to the full disk?
> 
> Many applications use /tmp or /var files as working storage, and they
> leave ghosts behind.

As does swap.

 -Tom



Re: [Discuss] BitLocker

2012-01-03 Thread Tom Metro
Richard Pieri wrote:
> On Jan 2, 2012, at 7:55 PM, Tom Metro wrote:
>> What makes Microsoft BitLocker better than TrueCrypt?
> 
> "... because it protects against more attack modes than other software."

Granted, I was being lazy by asking the question rather than looking it
up, but repeating the quote I included doesn't exactly answer the question.


Chris O'Connell wrote:
> I prefer BitLocker for a couple of  reasons:
> 
> The password used to decrypt the disk and log in to Windows is the same.
> Thus the process is more transparent for users.

Makes sense. More convenient. Though less secure. (An attacker has more
opportunity to get at your network login password using social
engineering, fake login prompts, and server hacking.)


Kyle Leslie wrote:
> At my company we are using BitLocker.
> 
> One of the huge benefits I think is that the encryption keys/recovery keys
> can be stored in AD.  So that if you need to unlock or change the drives
> around you don't need to have the user store that some place to get
> lost/stolen.  It stores in AD and can be recovered when we need it.

OK, so again more convenient, but in the grand scheme of things, not
more secure.


Edward Ned Harvey wrote:
> Bitlocker is easier to use - No password necessary at boot time.  The TPM
> performs some system biometrics (checksum the BIOS, serial number, various
> other magic ingredients, and only unlock the hard drive if the system has
> been untampered.  Therefore you are actually as secure as your OS.)

This finally suggests a Bitlocker security advantage. I gather TrueCrypt
doesn't use the TPM? Answered in their FAQ:

http://www.truecrypt.org/faq

  Will TrueCrypt use TPM?

  No. Those programs use TPM to protect against attacks that require the
  attacker to have administrator privileges, or physical access to the
  computer, and the attacker needs you to use the computer after such an
  access. However, if any of these conditions is met, it is actually
  impossible to secure the computer (see below) and, therefore, you must
  stop using it (instead of relying on TPM).

  If the attacker has administrator privileges, he can, for example,
  reset the TPM, capture the content of RAM (containing master keys) or
  content of files stored on mounted TrueCrypt volumes (decrypted on the
  fly), which can then be sent to the attacker over the Internet or
  saved to an unencrypted local drive (from which the attacker might be
  able to read it later, when he gains physical access to the computer).

  If the attacker can physically access the computer hardware (and you
  use it after such an access), he can, for example, attach a malicious
  component to it (such as a hardware keystroke logger) that will
  capture the password, the content of RAM (containing master keys) or
  content of files stored on mounted TrueCrypt volumes (decrypted on the
  fly), which can then be sent to the attacker over the Internet or
  saved to an unencrypted local drive (from which the attacker might be
  able to read it later, when he gains physical access to the computer
  again).

  The only thing that TPM is almost guaranteed to provide is a false
  sense of security (even the name itself, "Trusted Platform Module", is
  misleading and creates a false sense of security). As for real
  security, TPM is actually redundant (and implementing redundant
  features is usually a way to create so-called bloatware). Features
  like this are sometimes referred to as security theater.

  For more information, please see the sections Physical Security and
  Malware in the documentation.


The Wikipedia article on TPM[1] points out another advantage to it: it
provides hardware prevention of dictionary attacks so "the user can opt
for shorter or weaker passwords which are more memorable."

1. http://en.wikipedia.org/wiki/Trusted_Platform_Module


A dated (2008, TrueCrypt v.5) comparison of BitLocker and TrueCrypt says:

http://4sysops.com/archives/system-drive-encryption-truecrypt-5-vs-bitlocker/

  So Bitlocker's biggest advantages are its TPM support and its
  sophisticated recovery options [like storing keys on a USB drive or in
  ActiveDirectory]. TrueCrypt is much easier to handle and practically
  needs no preparations.


 -Tom



Re: [Discuss] Full disk encryption, why bother?

2012-01-03 Thread Tom Metro
Richard Pieri wrote:
> Tom Metro wrote:
>> Are you using full disk encryption?
> 
> I don't.  I take care of my gear.  I made this statement before: I
> see WDE as enabler for carelessness.

The EFF article I quoted references a prior EFF article on border
crossing inspections. The encouragement to encrypt was more for privacy
than for theft prevention.

As someone who goes through US Customs several times a year, this gives
me some concern, albeit minor. You may think you have nothing to hide,
but why open yourself up to a potential fishing expedition? With the way
copyright laws are trending (see SOPA), it wouldn't surprise me if being
caught with a downloaded broadcast TV show on your computer will someday
 result in felony charges.


> Never mind that I have a pair of Mac Minis playing server.  Sometimes
> they need to be restarted remotely.  Can't do that with WDE.

I guess for that you'd need a console server.


Daniel Feenberg wrote:
> I don't see much point in encrypting data on a network server - if the
> disk is mounted then the plain-text is available to an intruder and the
> addition of an encrypted version doesn't enhance security.

It does if the intruder is physically stealing the disk drive or the
server. This would also likely apply in a government seizure scenario.
They'd likely remove the equipment from the premises first, and attempt
access later. (Though maybe they've wised up to this possibility?)

So yeah, you're guarding against a highly unlikely scenario, but it
still has some benefit.


> I have used Truecrypt, but am put off by the documentation, which
> suggests that the primary purpose of encryption is to avoid police
> inspection. As xkcd pointed out, this is hopeless
> ( http://xkcd.com/538/ ).

[The cartoon makes the point that you can be tortured with a $5 wrench
to give up your password, so your high-tech encryption is pointless.]

But this is what plausible deniability is all about:
http://www.truecrypt.org/docs/?s=plausible-deniability

If you're in a situation where law enforcement *knows* you have
something they want on your disk, you've got bigger problems than your
choice of full disk encryption software. :-)

 -Tom



Re: [Discuss] Full disk encryption

2012-01-03 Thread Tom Metro
Daniel Feenberg wrote:
> The built-in Fedora encryption is no trouble to establish...

What tool do they use? Any other distributions that provide an
integrated solution?

 -Tom



Re: [Discuss] Full disk encryption and backups

2012-01-03 Thread Tom Metro
Richard Pieri wrote:
> And this is the great big rub with WDE: backups.  File-level backups
> are decrypted when sent to the backup system unless the backup system
> itself re-encrypts everything.

I'm not sure I see the big problem with backups, unless you simply find
file-level backups undesirable in general.

If you are performing backups while on your LAN, sending the data in the
clear should be of minor concern. The backup system can then encrypt.

If you are off-site, then use one of the backup systems that encrypt
locally before sending the data over the wire. Systems like this are
becoming increasingly common.

 -Tom



[Discuss] Hardware Hacking list

2012-01-03 Thread Tom Metro
Stephen Adler wrote:
> I didn't know there was a hardware hacking list, I'll sign up!

Yup. More info here:
http://blu.wikispaces.com/Hardware+Hacking

And for anyone else who has gotten some new toys for Christmas - like a
new iPhone and you want to hack Siri, a new Android phone you want to
root, a router you want to load Tomato onto, an Arduino starter kit you
want to get started with, or any other hardware you want to hack - join
up and post your questions.

 -Tom



Re: [Discuss] Full disk encryption and backups

2012-01-03 Thread Tom Metro
Richard Pieri wrote:
> Tom Metro wrote:
>> I'm not sure I see the big problem with backups, unless you simply
>> find file-level backups undesirable in general.
> 
> With WDE, you either decrypt-recrypt everything during backups which
> means that there is a point in the process where you have no
> security/privacy on the data...

Ummm...yeah. You do realize that in order to use your data you need to
decrypt it, right? :-)

You can make a case that decrypting and then re-encrypting data before
you send it off the machine to your backup service is inefficient, but
it isn't insecure.

 -Tom



Re: [Discuss] Full disk encryption

2012-01-03 Thread Tom Metro
Daniel Feenberg wrote:
> Tom Metro wrote:
>> What tool do they use?
> 
> http://fedoraproject.org/wiki/Implementing_LUKS_Disk_Encryption#Introduction_to_LUKS
>   Fedora 9's default implementation of LUKS is AES 128 with a SHA256
>   hashing.

I'm assuming they're using an existing OSS encryption project and didn't
invent their own. According to:
http://en.wikipedia.org/wiki/Linux_Unified_Key_Setup

LUKS is a specification to facilitate interoperability between
encryption software. It says dm-crypt is the reference implementation of
LUKS on Linux:
http://en.wikipedia.org/wiki/Dm-crypt

The Fedora article makes no mention of dm-crypt, but does reference
cryptsetup, which is built on dm-crypt (so it seems):
http://code.google.com/p/cryptsetup/

 -Tom



Re: [Discuss] Full disk encryption, why bother?

2012-01-04 Thread Tom Metro
ma...@mohawksoft.com wrote:
> I doubt it is as simple as a mere power plug. It seems to be able to
> act as a UPS when power loss is detected.

Presumably it would need UPS-like circuitry to synchronize the
synthesized waveform to the AC power, and activate the output when loss
of power was detected.

I wouldn't be surprised if an off-the-shelf UPS could be applied this
way. (With the aforementioned risks to your wellbeing.)


> ...scary as a way for "the man" to
> take your computer without powering it down.

Actually pretty easily thwarted if you anticipate it.

All you need is a few trip switches wired in series and to the reset
line on the motherboard. Say one on any removable panels, one with a
plunger protruding from the bottom of the case, and one to a mercury
switch located somewhere deep in the interior of the computer. Really,
the mercury switch is all you need, and it alone is less likely to be
noticed and bypassed. (Though the switches on the panels might still be
a good idea in case they attempt an on-the-spot memory dump. Although I
suppose if you've got Firewire, that can be done without opening the case.)

Of course if you live in earthquake country, be prepared for your server
to reboot on every tremor. :-)

 -Tom



Re: [Discuss] box.net

2012-01-05 Thread Tom Metro
Edward Ned Harvey wrote:
> You may already know, I use disposable email addresses.

Likewise. Mostly of the address extension variety, as mentioned
elsewhere in this thread.

It's particularly handy for weeding out phish emails if you know all
legitimate messages from your bank will be sent to me+mybank. (For
banking sites, I've taken to throwing in some cryptographically random
characters to make guessing the address improbable.)
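A minimal sketch of that scheme in Python (the function name, the "-"
separator, and the token length are my own choices; it assumes your
mail server supports Sendmail/Postfix-style "+" subaddressing):

```python
import secrets

def tagged_address(local, domain, site, token_bytes=4):
    """Build a per-site disposable address, e.g. me+mybank-1a2b3c4d@...

    Illustrative sketch only: assumes the receiving mail server treats
    everything after "+" in the local part as a routable extension.
    """
    token = secrets.token_hex(token_bytes)  # hard-to-guess suffix
    return f"{local}+{site}-{token}@{domain}"

addr = tagged_address("me", "example.com", "mybank")
```

Mail arriving at any other tagged variant is then a strong hint of a
phish or a leaked address.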


> Two weeks ago, I signed up for box.net (also now box.com) using
> box...@nedharvey.com.  I haven't given that email address out to anybody
> else, and I started receiving enlargement email on that address today.

The bank situation you mentioned was scary. A few months back I had this
happen with an address I gave to Equifax - you know, one of the three
major credit reporting agencies that has everyone's credit records? That
was disturbing. I can only hope that it was some secondary server or
outsourced email marketing vendor that got compromised.

Equifax has never disclosed the breach, if they are even aware of it.
(Contacting large organizations is pointless. They design their customer
service organizations to firewall the tech people from any useful
customer feedback. I did, however, tweet about it.)


Theodore Ruegsegger wrote:
> But how is this address disposable? ...how do I "dispose" of the
> address? Do you mean I add a line in my filters to reject any mail
> with that address?

In the case of Gmail, I don't think I've manually written a filter yet.
I use Gmail via IMAP and I simply move such messages, if Gmail hasn't
already flagged them as spam, to the spam folder, and that seems to help
tune Google's statistical filters.


> ...won't spammers simply add a script truncate any gmail address with
> a + in it, yielding a valid and no-longer-traceable address? Or can
> we count on them to be really, really lazy?

So far they seem to be lazy. I've yet to spot any evidence of extensions
being stripped.


Edward Ned Harvey wrote:
> That's actually an RFC standard that's supported by many MTA's.  

I'm not aware of the address extension mechanism being codified in any
RFC. (Reference?) Of course the "+" character is permitted by RFCs, but
it isn't imbued with special meaning.

My understanding is that it is a de facto standard pioneered by Sendmail.


> The only problem is... As mentioned, lots of companies / websites /
> whatever reject your email address if there's a + character in it...

This is a major annoyance with this technique. I'd estimate something
like 80% of the web forms I encounter will incorrectly reject addresses
containing "+".

The sad part is that major providers, like Gmail, could easily have
avoided this by using a universally accepted character, like "-". (I was
part of the Gmail beta and complained frequently about this. Of course
it fell on deaf ears, and now with millions of Gmail accounts, it is
impossible to change.)


> ...because some other RFC standard defines the + character
> as a bad character for email addresses.

Again, I've never heard of that.

My best guess is that there are a handful of popular email address
validation libraries that all share this bug. (Perhaps some stock PHP
library? I know the Perl libraries don't have this bug.) That, and every
other web developer thinks they can easily whip up an email validation
regular expression from scratch in a matter of minutes.
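To illustrate the suspected bug, here are two regular expressions of
the kind such libraries might use (both are simplified illustrations,
not production-grade validators):

```python
import re

# An overly strict pattern of the sort many web forms seem to use:
# the local-part character class omits "+", so tagged addresses fail.
strict = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

# The same pattern with "+" added to the local part, which the email
# address syntax actually permits.
permissive = re.compile(r"^[A-Za-z0-9._+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

addr = "me+mybank@example.com"
strict_ok = bool(strict.match(addr))          # the form rejects it
permissive_ok = bool(permissive.match(addr))  # accepted
```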

 -Tom



Re: [Discuss] box.net

2012-01-05 Thread Tom Metro
Richard Pieri wrote:
> Because the plus sign screws up their SQL queries.

Because the plus sign screws up their URLs. (If unescaped, the "+"
translates into a space in URLs.)

While 80% of sites outright refuse to accept addresses with a "+", of
the 20% that do, probably 40% of them have buggy code elsewhere that
sticks an unescaped email address into a URL in an email, redirect, or
multi-stage form, such that clicking the link to - say unsubscribe - breaks.
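Python's urllib makes the failure mode easy to demonstrate: an
unescaped "+" in a query string decodes as a space, while a properly
escaped address round-trips intact (hypothetical address; standard
query-string decoding rules):

```python
from urllib.parse import parse_qs, quote

addr = "me+mybank@example.com"

# A naive site drops the raw address into a query string...
naive_qs = "email=" + addr
# ...and when the server decodes it, "+" comes back as a space:
decoded = parse_qs(naive_qs)["email"][0]

# Percent-escaping the address first round-trips correctly:
safe_qs = "email=" + quote(addr, safe="")
roundtrip = parse_qs(safe_qs)["email"][0]
```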

Fortunately this is usually easy to fix, but I have run across a few
sites using multi-stage forms and redirects that are nearly impossible
to work around by correcting URLs in the address bar. The web browser
escapes form field data for you, so managing to pass along an unescaped
email address takes some creatively bad programming.

 -Tom



Re: [Discuss] box.net

2012-01-05 Thread Tom Metro
Edward Ned Harvey wrote:
>> I'm not aware of the address extension mechanism being codified in any
>> RFC. (Reference?)
> 
> RFC5233 "subaddressing" is an optional configuration for your MTA.

Close. RFC5233 is about:

   On email systems that allow for 'subaddressing' or 'detailed
   addressing' (e.g., "ken+si...@example.org"), it is sometimes
   desirable to make comparisons against these sub-parts of addresses.
   This document defines an extension to the Sieve Email Filtering
   Language that allows users to compare against the user and detail
   sub-parts of an address.

So this RFC is about using "subaddressing" (I've always heard them
referred to as "extensions") in a Sieve filter. (Sieve filters are a way
of managing mail filtering rules that run on your mail server instead of
your client.)

It refers to "subaddressing" as some existing convention, and implies it
is defined elsewhere, but interestingly provides no link or footnote to
where it is defined.

It does reference RFC2822 (an update of the classic RFC822) as the place
where email addresses are defined. RFC2822 doesn't contain
"subaddressing" and only mentions "extension" in the context of
Multipurpose Internet Mail Extensions (MIME).

So while I haven't specifically looked for an RFC covering address
extensions, these two RFCs would likely reference one, if it existed.

Consider also that the extension separator character is changeable via
configuration parameter in most MTAs (at least Sendmail and Postfix),
further suggesting that the "+" character itself isn't a universal
standard. (On my own Postfix installation I have it set to "-".)

 -Tom



Re: [Discuss] Least worst Internet Service Provider in Somerville

2012-01-13 Thread Tom Metro
d...@geer.org wrote:
> What do those of you who specify a nameserver
> (different than your ISP's) use?

If you really want to optimize your DNS server selection, run a
benchmark. Here is a message from the BLU archives discussing
benchmarking DNS:

http://thread.gmane.org/gmane.org.user-groups.linux.boston.discuss/37806/focus=37825
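If you'd rather roll your own quick benchmark, here is a rough Python
sketch that times a raw DNS A query (per RFC 1035) against a candidate
server; the packet layout is standard, but the hostname, server list,
and timeout are arbitrary choices:

```python
import socket
import struct
import time

def build_query(hostname, query_id=0x1234):
    """Build a minimal DNS query packet for an A record (RFC 1035)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts all zero.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A), QCLASS=1 (IN).
    return header + qname + struct.pack(">HH", 1, 1)

def time_resolver(server, hostname="example.com", timeout=2.0):
    """Return the round-trip time in seconds for one query, or None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()
        sock.sendto(build_query(hostname), (server, 53))
        sock.recv(512)  # we only care that *a* reply arrived
        return time.monotonic() - start
    except OSError:
        return None
    finally:
        sock.close()

# e.g. compare your ISP's resolver against some public ones:
# for server in ["8.8.8.8", "208.67.222.222"]:
#     print(server, time_resolver(server))
```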

The next best thing is to run a local caching proxy, like Dnsmasq, and
configure it with a list of DNS servers that includes your ISP's and the
ones you've seen recommended here. Dnsmasq will periodically "speed
test" the servers and pick the fastest.

 -Tom



[Discuss] Comcast gets rid of the remaining analog channels

2012-01-14 Thread Tom Metro
I received a letter this past week saying that Comcast is getting rid
of the remaining analog channels on their Newton/Needham system (other
towns are likely to follow). After their "digital conversion" a while
back they continued to transmit the "basic" channels (largely local
broadcast) in analog, but by March 13 the system will be strictly digital.

If I recall correctly these same channels were the few you could receive
unencrypted with a digital tuner. I'm not sure if that will remain the case.

So if you haven't already wired DTAs up to your DVR with analog tuners,
or upgraded to new CableCARD tuners, you've got about two months to do so.

 -Tom



Re: [Discuss] Comcast gets rid of the remaining analog channels

2012-01-15 Thread Tom Metro
Jerry Feldman wrote:
> On 01/14/2012 08:23 PM, Bill Bogstad wrote:
>> As I understand it, they are required by FCC regulations to transmit
>> rebroadcasts of over the air channels in the clear.
>
> They must provide free equipment to their basic subscribers...

Yes, the letter reiterated the same deal they provided when the
"extended basic" channels went digital: all subscribers can get up to
two DTAs for free, with no rental fee.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Is MythTV dead?

2012-01-15 Thread Tom Metro
Rich Braun wrote:
> Tom Metro wrote:
>> If you use MythTV as a front-end, have you tried XBMC? If so, why do you
>> prefer MythTV's front-end?
> 
> Thanks to your posting, I just did.  It was a F R U S T R A T I N G waste of 2
> hours of my life.  The bottom line is summed up at
> http://forum.xbmc.org/showthread.php?t=85488&page=2

That thread didn't illuminate much, other than to say XBMC doesn't work
with MythTV v.0.24 because MythTV changed the protocol.

mvpmc faces the same obstacle every time MythTV alters its protocol or
schema. This is just another artifact of the poor architecture employed
by MythTV.

It also doesn't help that the non-MythTV clients are treated like 2nd
class members of the MythTV ecosystem. With the increasing popularity of
XBMC, it may become harder for MythTV devs to ignore the alternative
clients.


> The XBMC developers have stalled on a "hard problem" the same way the
> MythTV folks have.  Neither has had a major release in over 12 
> months.

What's the "hard problem" in the case of XBMC?


> There isn't even a patch out there to solve the problem: if you are
> running MythTV version 0.24, you can't run XBMC as a front-end.

An early message in the thread you referenced said you could get it to
work if you built XBMC from source. Perhaps that was inaccurate.

XBMC actually uses the MythTV client library developed for mvpmc and I
see on the mvpmc users list that they had a 0.24 compatible firmware
build in September:
http://thread.gmane.org/gmane.comp.multimedia.mvpmc.user/2992

And in December on the developers list they got started working on
0.24.1 compatibility:
http://thread.gmane.org/gmane.comp.multimedia.mvpmc.devel/6349

So I'm not sure why XBMC doesn't have a 0.24 compatible build (if that
is indeed still the case). A search of the forums turns up:
http://forum.xbmc.org/showthread.php?t=110694&highlight=0.24

which is announcing the availability of a new plugin for XBMC that takes
a different approach (see below), and supports MythTV 0.24.


> Period, end of story, it'll never work.  *Sigh*. 

I'm assuming you are exaggerating to make a point...


> And you wonder why people run out to Best Buy or the Apple Store
> to buy things that Just Plain Work.

Sure, but this has always been true if you are willing to accept
limited, canned functionality. And it has often been the case with open
source, even though we should be striving to eliminate these
complications. As an experienced open source user you can't really
claim to be surprised by this.

Getting a smoothly running home theater setup with open source software
is no different from how you approached building a Linux PC in years
past. It takes research to find what hardware and software components go
together well.

So for example, if you knew you wanted to run XBMC as your front end,
and found it didn't support MythTV 0.24, then use MythTV 0.23. (Granted,
not a very practical option if you've already upgraded to 0.24.)

A lot of the frustrations you seem to be expressing are centered around
being able to run the very latest MythTV, but I don't recall you saying
why that was important. The older MythTV releases do the job pretty well.

The good news is that even if DVRs are past peak, the open source media
player field is still very active and expanding. See last week's
announcement from Canonical of Ubuntu TV. And there are still enough
open source developers using MythTV who want to make it accessible from
the newer platforms. See for example this early stage MythTV client for
Android (developed by the lead mvpmc developer):
https://github.com/cmyth/lcm_android

perhaps someday becoming a Google TV app.


>> What should have been there is a MythTV protocol feature that lets the
>> front-end negotiate with the back-end for supported video formats, and
>> employ VLC-style on-the-fly transcoding.
> 
> Agreed.  But the cows left the barn too many years ago.  Even a project
> started from scratch, that included this architecture, no longer has any real
> hope of widespread adoption. 

My thought is that the best way to approach this is to create a proxy
server that runs on the MythTV back-end. This would be something that on
one side is tightly integrated with MythTV (knows about the database
schema, etc.) and gets revisioned with MythTV, while providing some
generalized DVR protocol as a public interface. (It looks like the
MythTV devs have taken some steps towards reducing the need to monkey
with the database directly by providing info in XML:
http://www.mythtv.org/wiki/Category:MythXML )
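As a sketch of what the proxy's translation layer might look like, here
is a minimal Python fragment that flattens a MythXML-style program
listing into a neutral structure a generic DVR client could consume.
The tag and attribute names are illustrative stand-ins, not the real
MythXML schema; the point is that only this thin layer would need
revisioning alongside MythTV:

```python
import xml.etree.ElementTree as ET

def recordings_from_xml(xml_text):
    """Translate a MythXML-style <Programs> document into plain dicts
    forming a backend-agnostic 'generalized DVR' view of recordings.
    (Element/attribute names here are hypothetical.)"""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": prog.get("title"),
            "start": prog.get("startTime"),
            "watched": prog.get("watched") == "1",
        }
        for prog in root.iter("Program")
    ]
```

Everything above this layer could then speak one stable, public DVR
protocol, whatever MythTV does to its database schema underneath.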

While chasing links in the XBMC thread you referenced I ran across this:

http://wiki.xbmc.org/index.php?title=MythTV_PVR_Addon
(see also: http://forum.xbmc.org/showthread.php?t=82015 )

which appears to be talking about an XBMC plugin using s

Re: [Discuss] open protocols for IP-TV

2012-01-15 Thread Tom Metro
Rich Braun wrote:
> Tom Metro suggested:
>> And the best way to break free of the old-world TV model that the
>> existing studios, networks, and cable companies are clinging to is to
>> reduce barriers for the new upstarts to reach our living rooms.
> 
> Go to Best Buy and take a look at their TV department.  Not much ever changes,
> thanks to the fact that Best Buy eliminated virtually all their competition...

I'm not sure the lack of local retail competition really impacts product
selection available to us. Inconvenient, sure, but I don't think you can
make the case that manufacturers are innovating less because Best Buy has
become a lazy, dominant retailer in the US. (And on tenuous ground,
according to recently released financials.)


> ...set-top boxes aka media players.
> They are all the same price, $99, unless they have local storage.

That depends on when you look. Boxee hit the market at $200 and can be
found now for $150. Google TV was $300 and eventually got marked down to
$100. Neither had local storage. Then Roku has had players for under
$100 for a while. (A bunch of "no name" players can also be found for
under $100.)


> Roku and Apple aside, the others support the Digital Living Network Alliance
> standard...
> There are now about 4 DLNA-compliant open source media server projects: 
> MythTV, MediaTomb, Rygel and Twonky. 

Also Vuze.

My understanding is that DLNA treats the DVR like a passive provider of
read-only media. If that impression is still accurate, that's a less
than satisfying way to interact with your DVR. Can you delete programs?
Is there any support for higher-level concepts, like marking a show
watched, or not? Bookmarking positions? Scheduling? Commercial skip?

It may be that this functionality can be added to DLNA, or implemented
with a protocol that operates alongside DLNA, but either way there
needs to be an open protocol for remotely controlling a DVR. Something
that avoids the architecture mess that results in a MythTV client
messing with the back-end's MySQL tables.


> ...as recently as the 1990s (and even the early 2000s) we had a
> single device that could play back and record all TV content regardless
> of whether it originated on pay TV or broadcast TV.
> We called it...the VCR.
> 
> Dire Straits wrote the anthem for this concept:  "I want my MTV!"  Suppose
> they changed the lyrics to:  "I want my VCR!"  And suppose instead of
> government /being/ the problem, the government could be used to /solve/ the
> problem by imposing standards compliance.

I think you are chasing an obsolete problem. This thread started with
the question, "Is MythTV dead?" and my answer was that *all* DVRs are
dying. DVRs are past their peak with mainstream audiences.

If you are in the minority that needs the DVR-like control of your
programming, you'll just have to put up with the inconvenience that
comes with being out of the mainstream. Your best hope for an easy
solution in the future is some sort of "DVR-in-the-cloud" service,
assuming some company can successfully negotiate the necessary
licensing deals to avoid being sued[1] by the TV networks and the MPAA.
(The cloud music services cropping up lately provide some precedent for
this possibility.)

1. http://arstechnica.com/old/content/2006/05/6913.ars


> Those $99 media players at Best Buy are an endangered species regardless: 
> their functionality will be folded into all new TV sets within the next couple
> years.

True.

Recent stats show that consumers are actually buying TVs more frequently
than they used to, on the order of every 3 to 5 years, similar to the
frequency of personal computer purchases. This has been attributed to
the combination of ever increasing screen sizes (buying a new, bigger TV
for the living room and moving the old one to the bedroom) and falling
prices. (Perhaps the new OLED TVs will spur another wave of upgrades.)

This will lessen the major concern about obsolescence of IP TV hardware
built into TVs. Plus there will be a spectrum of other solutions, from
TVs with upgradeable firmware (some have this now), to bundled and
replaceable hardware (like the latest Roku player that's the size of a
USB Flash drive). Who knows, maybe the industry might even agree on a
standard for a CableCARD-like hardware module that can be upgraded.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] Debian is now the most popular Linux distribution on web servers

2012-01-15 Thread Tom Metro
http://w3techs.com/blog/entry/debian_is_now_the_most_popular_linux_distribution_on_web_servers
  Debian lost that #1 position in June 2010 to CentOS and gained it back
  now, after a head-to-head race in the last year.

  Debian is now used by 9.6% of all websites (up from 8.9% one year ago,
  and 8% two years ago), which is equivalent to 29.4% of all Linux-based
  sites.

  It is also the fastest growing operating system at the moment: every
  day 54 of the top 1 million sites switch to Debian.

Not such a big deal if you look at the chart shown in the article, which
shows CentOS and Debian neck-and-neck at the top, trading off the lead
while on relatively flat trajectories, and both way ahead of the two
runners-up, Ubuntu and Red Hat.

It's easy to speculate that Red Hat doesn't get the volume that CentOS
does because it costs money, but what does that say about Ubuntu Server,
which, although commercial support is available, can be installed for free?

Given that it is free, Ubuntu Server should serve as its own "CentOS"
alternative for business users who want to have a homogeneous
environment, yet not have to pay for support on all the boxes they run.

I thought it might be that Debian retains a lead through inertia, having
predated Ubuntu Server by a significant period, and Ubuntu Server
doesn't provide compelling enough reasons to switch to it from Debian.
But then the article says:

  This growth comes primarily from websites that are starting to use
  Linux, because we see in the technology change report that many sites
  subsequently switch from Debian to the Ubuntu distribution (which is
  based on Debian). Debian gains market share from all other Linux
  distributions, mostly from CentOS, SuSE and Fedora.


If you run a Debian-based distribution on your servers, which flavor,
and why?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Debian is now the most popular Linux distribution on web servers

2012-01-17 Thread Tom Metro
Dan Ritter wrote:
> Tom Metro wrote:
>> If you run a Debian-based distribution on your servers, which flavor,
>> and why?
> 
> My company runs Debian because the stable branch is both stable
> and well-supported...

You don't buy in to the idea of Ubuntu LTS (Long Term Support) releases
as being
well supported?

Or care about the predictable release schedule?

Do you feel that the Ubuntu LTS releases aren't at least as stable as
Debian stable? (I'd say regular Ubuntu releases are less stable, but the
comparison here is to Ubuntu Server LTS.)


> ...and the systems administration tools are second to none.

That sounds more like an RPM vs. deb comparison. Aren't all the same
tools available for both Debian and Ubuntu?


Drew Van Zandt wrote:
> I used Ubuntu Server for a couple of months, and then switched back to
> Debian.  US was a little short on non-mainstream packages, and I 
> didn't see the point in using it if I was immediately going to need the 
> Debian repositories in the apt sources anyway.

Interesting. Over what time frame was that? I ran into similar issues
when I first started using Ubuntu back in 2006 and for the subsequent
few years. As it became he dominant Linux distribution, it became more
common to see projects directly supply Ubuntu packages. (I've rarely
pulled packages from Debian repositories, but I have made extensive use
of Launchpad PPAs in order to get backported versions of newer tools.)

One problem that still potentially exists is that most mainstream
packages are still sourced from Debian, so bug reports made in Launchpad
need to flow upstream, then you need to wait for the fix to flow back
down to Ubuntu. Which raises the question of whether you get enough
value from Ubuntu to justify that added delay and middleman.

I also got the impression that Debian maintainers are slightly more
inclined to fix bugs in Debian Stable compared to Ubuntu developers who
always seem to have already moved on to one or two releases beyond the
current one, so it's less likely you'll ever see a reported bug fixed prior
to the next OS upgrade. Supposedly this is less of an issue with LTS. (I
still see some activity on bugs I reported against 8.04 LTS.)


Jay Kramer wrote:
> I find Debian to be more responsive and easier to use.

More responsive? Can you elaborate on that?

There are some cases where Ubuntu's attempts to provide more
hand-holding have the side effect of making troubleshooting and
customization more complex. Network Manager is an example.


Richard Pieri wrote:
> Vanilla Debian...because setting it up as a Xen domU is as simple as 
> dirt.

Any reason why this shouldn't be just as easy with Ubuntu?

Supposedly Ubuntu has done a lot of work to optimize it as both a host
and guest for virtualization.


[I've used both Debian and Ubuntu for servers, though Ubuntu mostly for
desktops. My line of questions here is an attempt to see if there is a
compelling story for using Ubuntu Server instead of Debian.]

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] Ubuntu TV

2012-01-17 Thread Tom Metro
Announced at CES this past week.

http://www.internetnews.com/blog/skerner/ubuntu-bringing-linux-to-your-tv.html

  Ubuntu TV is an effort that pushes the Ubuntu interface (Dash) for
  TV's as a way to get content, including shows, movies and TV
  programming content. There is a whole new site setup for those of us
  not at CES to see what it's all about at: http://www.ubuntu.com/tv

  Linux on TVs is nothing new and shouldn't come as a surprise. If
  you've got a DVR device [or] an 'enhanced' media device (WD, seagate,
  boxee etc..) you've already got a Linux TV. What Ubuntu is doing here
  is associating their brand of Linux (and make no mistake about it,
  Canonical is doing its best to push Ubuntu as a brand) for consumer
  electronics vendors.


Looks nice...
Anyone tried it out? If you want to, here are the install instructions:

How To Install Ubuntu TV From A PPA
http://www.webupd8.org/2012/01/how-to-install-ubuntu-tv-from-ppa.html


Ubuntu TV Features and Goals
http://news.softpedia.com/news/Ubuntu-TV-Features-and-Goals-245241.shtml

  Ubuntu TV offers easy integration of online and broadcast services,
  and applications, offering a modern broadcast TV experience, allowing
  users to search, record, buy, rent and playback.

  Users will be able to integrate satellite and cable services into
  Ubuntu TV, providing support for EU and US standard formats, for both
  high definition and standard contents.

  Ubuntu One integration will be provided, and users can install various
  TV optimized apps from the Ubuntu Software Center.

The feature set, such as being built into TVs and supporting apps,
makes it sound a lot like Google TV.

Ubuntu One integration? As in playing content a user has uploaded to the
cloud? A lawsuit waiting to happen.

I wondered if this push to the TV was one of the big motivators behind
Canonical's effort to port Ubuntu to ARM CPUs, but an Ubuntu employee I
mentioned this to said it is consistent with Mark Shuttleworth's goal of
"Ubuntu everywhere," which also covers supporting ARM in low-power
data center servers.

What I'm really curious about is how or if they will be supporting MythTV.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] running Snort on a consumer-grade router

2012-01-18 Thread Tom Metro
Anyone tried running Snort on a consumer-grade router?

I was curious if it could be installed on a router running Tomato
firmware, and ran across this:

http://tomatousb.org/forum/t-305093/snort-and-dansguardian-on-tomatousb

  ...you must first install Optware...
  Then you can install Snort and Dansguardian

Optware (a Debian-like package management system) was expected, but I
hadn't heard of DansGuardian[1], which is a "web content filter."
Something I have no interest in, and I'm assuming just an optional,
related tool mentioned because the OP asked about it.

1. http://dansguardian.org/?page=whatisdg

More importantly another post in the same thread says:

  Snort, on the other hand, is FAR too memory-hungry for use on a router
  unless you go with a pitifully reduced ruleset. It barely fit on an
  otherwise-empty RT-N16 with reasonable rules defined.

As I understand it, Snort relies on libpcap to inspect the packets
flowing through the router. I wonder if there are any mechanisms for
running libpcap on the router as usual, but running the more memory
intensive packet analysis on a full server inside the LAN. This should
constrain the memory footprint, though I could see such a setup still
adding CPU overhead on the router if it has to send every inbound packet
to two destinations. Perhaps if you don't need the full packet for logging
or analysis, the proxy code on the router could pass on just the packet
headers.
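One crude way to approximate that split is to stream a header-only
capture off the router and let a LAN server run Snort over it. This is
only a sketch: it assumes tcpdump has been installed on the router
(e.g. via Optware), the hostname and interface are hypothetical, and
whether your Snort build will read a pcap stream from stdin may vary
(a named pipe is the fallback):

```shell
# Run on the LAN analysis server. -s 96 truncates each packet to
# roughly the headers, keeping the stream (and the router's CPU cost)
# small; -U flushes per packet rather than buffering.
ssh root@router "tcpdump -i vlan2 -s 96 -U -w -" \
  | snort -c /etc/snort/snort.conf -r -
```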

Or maybe the warning was overstated. On the next page of the thread a
user reports being able to successfully run Snort on an RT-N16, but they
didn't report whether they ever got custom rules working.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] running Snort on a consumer-grade router

2012-01-18 Thread Tom Metro
Chris O'Connell wrote:
> When you saying "running snort on such and such router" are you talking
> about installing the source for snort on the router? 

Yes, this is what the thread I referenced was talking about.


> Or do you just mean you want to use Snort to listen to traffic on
> said router by installing it on a separate computer?

This is what I was speculating about.
Is there a mechanism to do this?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] running Snort on a consumer-grade router

2012-01-18 Thread Tom Metro
Dan Ritter wrote:
> Running Snort at home doesn't seem to have brought me much advantage
> over my reasonably paranoid firewalling; I will probably drop it.

I generally like belt-and-suspenders systems. Trust, but verify.

What bugs me about LANs is that there is no easy way to visualize the
traffic, and spot when rogue traffic is present.

I'd like to have some mechanism - ideally as independent from the router
as possible - that can be used to detect unexpected packet traffic and
trigger an alert, so if the router has a bug or misconfiguration, the
problem can be spotted.

There's also a curiosity factor in seeing reports of what attacks are
happening against the router, which the router is successfully fending
off. That can be interesting, but generally just amounts to useless noise.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] running Snort on a consumer-grade router

2012-01-19 Thread Tom Metro
Chris O'Connell wrote:
> Or do you just mean you want to use Snort to listen to traffic on
> said router by installing it on a separate computer?

I was hoping to hear more about how this might be possible at the talk.
You mentioned "sensors", but that seemed to be shorthand for a
commercial appliance with physical network connections, not a chunk of
software that can be put on a router.

So I'm still wondering if Snort supports a "client-server" like model
where pcap runs on a monitoring point and streams the data to an
analysis server elsewhere.

Another option to consider would be placing a Snort monitoring appliance
on the WAN side of the router (connected via a hub or using port
mirroring, as mentioned in the talk). This could potentially be
implemented using inexpensive hardware, like another consumer router.
(Something with a bit of power, like the previously mentioned ASUS
RT-N16.) It could even use an attached USB disk for storage. This way if
Snort gets bogged down, it won't impact your network speed.

But sticking another box outside your firewall has just doubled the
number of machines you need to secure. And having a machine that
captures network statistics and interesting packets (potentially
containing plain text passwords, etc.) makes it a useful target for a
hacker.

Plus, this setup doesn't tell you what has made it past your firewall,
which is the whole point to intrusion detection. (And it wouldn't be
wise to make one Snort appliance do double duty by monitoring packets
both inside and outside the firewall. You've now created a convenient
bridge if the appliance gets breached. So that means having yet another
appliance.)


In other matters, I'm curious to know what alternatives to Snort you
evaluated, if any.

On the host intrusion detection side of things I've had success with
Integrit (a file hash comparison tool equivalent to Tripwire; custom
configured to issue delta reports, rather than accumulating all
anomalies until the operator regenerates the database) and logwatch (log
monitoring tool). Both use email as the primary communication channel.
Both require a fair bit of customization and configuration to make them
minimally noisy. (A noisy monitoring tool quickly gets ignored.)
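For anyone curious what the core of that approach looks like, here's a
rough Python sketch of the hash-manifest-with-delta-report idea. It
illustrates the technique only; it is not Integrit's actual code or
on-disk format:

```python
import hashlib
import os

def hash_tree(root):
    """Walk a directory tree, returning {relative_path: sha256_hex}."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                manifest[os.path.relpath(path, root)] = \
                    hashlib.sha256(f.read()).hexdigest()
    return manifest

def delta_report(old, new):
    """Report only what changed since the last run, so the operator
    sees a short delta instead of an ever-growing list of anomalies."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(p for p in set(old) & set(new)
                          if old[p] != new[p]),
    }
```

In a real deployment the stored manifest has to be kept out of reach of
an attacker and the report delivered by email; that, plus deciding what
to exclude, is where most of the configuration effort goes.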

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] running Snort on a consumer-grade router

2012-01-19 Thread Tom Metro
David Miller wrote:
> In my experience most consumer routers barely have enough cpu power to
> get out of their own way.

As mentioned elsewhere, the Asus RT-N16 is a newer class of router with
beefier hardware than your typical WRT54G-era box. 128 MB of RAM and a
480 MHz CPU:
http://infodepot.wikia.com/wiki/Asus_RT-N16

And that's the hardware I'd be using. (This model was released in mid
2009, and even today there are only a handful of routers in the same
price class with a faster CPU, and of those almost none have as much RAM.)


> I'd love to see a speedtest.net with and without
> snort to see what sort of impact it has on performance.

Yes, that would be a good comparison to make.


> At home I'm currently running snort on an embedded Alix (800MHz AMD
> Geode cpu) w/ 256mb of ram on pfSense. 

I'm familiar with the Alix boards, have written about them here before,
and considered them. At the time they were selling in the ~$150 range
after you added a power supply and enclosure. The specs are hard to beat
for that price.

I had interest in FreeBSD/pfSense due to its ability to run in a fail
over configuration on redundant hardware. I believe they've since hacked
up an equivalent solution for Linux/iptables.

I'm hoping we'll see some of the consumer routers switch to ARM CPUs,
and less proprietary switch hardware, which should hopefully permit
FreeBSD to run on them. I suspect we will see 800 MHz+/128 MB+ consumer
routers in the $100 range in 2012. (There are already non-router
consumer products with these specs, like http://www.tonidoplug.com/, but
they lack built-in switches. In theory, you could pair it up with a $50
5-port switch that does VLAN tagging[1].)

1. http://www.newegg.com/Product/Product.aspx?Item=N82E16833122342
(This does port mirroring too. Perhaps the same low cost switch someone
mentioned at the talk.)


> It seems to run on this reasonably well on it but you still have to
> be careful as to what rule sets you enable and which Memory
> Performance option you use.

Good to know. Thanks.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


[Discuss] Visualizing LAN traffic

2012-01-19 Thread Tom Metro
To reiterate a point I made in the Snort thread:
> What bugs me about LANs is that there is no easy way to visualize the
> traffic, and spot when rogue traffic is present.

The Snort talk showed that what you get out of Snort is still a textual
log of anomalies.

Anyone seen a tool for visualizing LAN traffic? Something that can
distill what's going on down to a dynamic infographic of sorts, with
ways of indicating unusual behavior?

I've heard of tools that let you listen to LAN traffic, where supposedly
you can easily hear the differences when something unusual happens. But
I'd expect such a tool to get annoying fast.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Programming vs Engineering

2012-01-23 Thread Tom Metro
Mark Woodward wrote:
> http://www.mohawksoft.org/?q=node/86
> Does anyone have any comment?

I was expecting something more controversial when I saw the size of the
thread, but I agree completely with your thesis.

The interesting bit to me is:

> It takes a competent software engineer to do it in a way that promotes 
> improvement and "anticipates" the future. All too often management only 
> looks at the output of programmers and not the important engineering 
> work.

with the follow-up question, "How do we make quality software
engineering practices visible to management, so they can be appreciated?"

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] Programming vs Engineering

2012-01-23 Thread Tom Metro
Richard Pieri wrote:
> Engineers, like doctors, are licensed professionals.  They are held
> to codified standards of practice and ethics.

You are conflating engineer with Professional Engineer (PE). The latter
has a specific definition with licensing requirements. The
"Professional" part of the title isn't just an adjective describing any
random engineer that happens to get paid to work.


> Says the Commonwealth of Massachusetts:
> http://www.mass.gov/ocabr/licensee/dpl-boards/en/

As others pointed out, this is clearly a definition for a *Professional*
Engineer.

Dating back well before the invention of Software Engineers we had
Mechanical Engineers, Electrical Engineers, and Chemical Engineers, all
of whom were not licensed as Professional Engineers, unless they elected
to do so.

The "engineer" title is largely a reflection of education, but as
Matthew Gillen pointed out, the definition is more broad and anyone who
practices engineering can have the title applied.

Where you draw the line between technician or tradesperson and engineer
is a matter of convention within each industry.

I'll agree that the software field probably should have a "software
technician" title (perhaps it is "software developer", or "programmer",
as someone else mentioned), and some better understood guidelines for
what distinguishes it from a "software engineer." You don't, however,
need a government issued license to make that distinction.


> Programmers typically are not held to any
> standards other than those set by their employers...
> I think this is a problem.  I think that Software Engineers should
> join the ranks of Professional Engineering professions and that they
> should be trained and tested and licensed the same as other
> Professional Engineers.

I'll second Mark's points that the lack of such standards is largely due
to the immaturity of the field, and that jumping to establish standards
could easily have a stagnating effect on the state of the art.

We do have a whole litany of certifications available. If you dismiss
the validity/value of such certifications, what makes you believe that a
single industry administered test will prove more accurate at qualifying
software developers as being of Professional Engineer quality?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/


Re: [Discuss] What Happens when a cloud service shuts down

2012-01-23 Thread Tom Metro
Jerry Feldman wrote:
> These days, the buzz word is cloud. You put stuff into a cloud, you
> expect your data to be safe and accessible. 
> ...assume you are using a backup service and it suddenly declares
> bankruptcy.

If you are talking specifically about cloud-based backup services, then
the answer is that it is a backup, not primary service. If the provider
goes away, you find another.

The general answer applicable to all cloud services is that you apply
redundancy, just as you would in any other situation. Unfortunately the
proprietary nature of most SaaS applications makes this impossible. At
best, your provider might provide a way of backing up your data outside
of their cloud. (As Google does for many of its services.) But having
your data without the app may be of limited value, at least in the short
term.

This is why open source is still important, and your first
choice for a SaaS should be a hosted open source application. The next
best option is to deploy an open source application yourself to a PaaS
or IaaS provider.

I've been hoping that we'd see more open source apps in hosted form, and
we have seen some, but nothing widespread. Take, for example, virtual
PBXs. It's entirely feasible that we could have seen an "Asterisk
hosting" market develop much like web hosting, but it didn't happen.
There is maybe one vendor that I know of that provides Asterisk hosting.
The rest stick a proprietary GUI on top, or use an entirely proprietary
solution. If things don't work out with your PBX provider, there is no
way to download your config and prompts and upload them to another
provider.


> ...should be in several different locations fully mirrored. ...
> Companies like Amazon, Google, Microsoft, IBM, HP are huge and have
> multiple datacenters so if one datacenter gets destroyed by a
> hurricane, tornado, or a bomb, the other data centers continue
> without much of an issue.

Netflix uses Amazon to host some of its services, but I believe they
don't use them exclusively. Ultimately that's what any business with
critical infrastructure in the cloud needs to do: use multiple vendors.
Common APIs, tools like Eucalyptus (http://www.eucalyptus.com/ mentioned
elsewhere in this thread), and projects like OpenStack
(http://openstack.org/) suggest this may be more practical in the future.
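As a toy sketch of why those common APIs matter (the class and method
names below are invented for illustration, not the actual Eucalyptus or
OpenStack interfaces), application code written against a thin
provider-neutral layer can switch vendors with a one-line change:

```python
# Hypothetical sketch of a provider-neutral cloud interface. Real
# projects like OpenStack or Apache Libcloud define far richer APIs;
# every name here is made up.

class CloudProvider:
    """Common interface each vendor adapter implements."""
    def start_instance(self, image):
        raise NotImplementedError

class VendorA(CloudProvider):
    def start_instance(self, image):
        return "vendor-a:" + image  # would call vendor A's real API here

class VendorB(CloudProvider):
    def start_instance(self, image):
        return "vendor-b:" + image  # would call vendor B's real API here

def deploy(provider, image):
    # Application code depends only on the common interface, so moving
    # between vendors doesn't require rewriting the application.
    return provider.start_instance(image)

print(deploy(VendorA(), "web-frontend"))
print(deploy(VendorB(), "web-frontend"))
```

That swap is exactly what's impossible when each vendor exposes only a
proprietary API.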

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] cloud computing defined

2012-01-23 Thread Tom Metro
icle on Facebook's "open hardware" servers, which
I'll discuss in a separate post.


> Is it just a way of saying that you have a distributed, parallel app
> whose individual nodes can come online (or go offline) dynamically
> without interrupting the service?

I would say that although many cloud services include those attributes,
none of them are strictly required to fit my definition, but it looks
like you are pretty close to the NIST definition. (It could be argued
that "distributed" is required if you count the user's location and the
server's location being different as distributed. I wouldn't.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Software Engineering redux

2012-01-23 Thread Tom Metro
Daniel C. wrote:
> What I am curious about is what a standard in software engineering
> would look like... 
> 
> On a more general level, what would the goals of such a standard be?
> 
> Obviously "writing programs that don't kill people" is one of them,
> but what else?

Actually, it wouldn't, but more on that below.


ma...@mohawksoft.com wrote:
> Then there is a lack of consensus about what such a certification would
> look like. Would it be based on Unix, Windows? C,C++, Java, Perl, Python?
> Would it be very abstract, favoring no existing technology?

Yes, abstract in the sense of going up a layer from the language.
Concentrating on design, architecture, and good "coding hygiene."

One could envision a "fundamentals of software engineering"
certification that covered general concepts such as code reuse,
readability - heck, even something as basic as avoiding convoluted
logic would be a good baseline.
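To make that baseline concrete, here's a toy illustration (invented for
this post, not drawn from any certification syllabus) of convoluted
nesting versus the guard-clause equivalent:

```python
# Toy example of "avoiding convoluted logic": deeply nested
# conditionals versus guard clauses. Both functions behave identically.

def discount_nested(member, total):
    if member:
        if total > 100:
            return total * 0.9
        else:
            return total
    else:
        return total

def discount_flat(member, total):
    # Guard clauses make each exit path obvious at a glance.
    if not member:
        return total
    if total <= 100:
        return total
    return total * 0.9
```

Nothing language-specific there, which is the point: it's the sort of
thing a fundamentals certification could test abstractly.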

That's not to say this wouldn't still be controversial and take decades
to reach an industry consensus. Do you include design patterns? Which
ones? Test Driven Development? Agile practices? And then there will be a
ton of areas where some fundamentals are either impossible to implement
or in conflict with (the philosophy) of some languages. (For example,
there are lots of design patterns that become irrelevant once you move
from lower-layer languages like C++ and Java to higher order languages
like Perl.)


> If it were too abstract, how could it be anything but valueless?

Think how much easier our lives would be if every time we ran across
some piece of legacy code, the developer had *at least* the above
fundamentals.

There would still be plenty of room for someone with a Professional
Software Engineer (PSE) cert to write bad code.


Daniel C. wrote:
> ...avionics systems in aircraft...
> We might eventually want to take a look at the process used to develop
> these systems and think about whether those practices could be
> generalized to other areas of software development.

I'm just regurgitating an argument I've run across several times, and
agree with, which is that the vast majority of software doesn't need
this rigor, and applying it will impose unnecessary costs with little
benefit. The Agile principle of implementing only what provides value
to the customer is effective and efficient. It does not, however,
provide an excuse for not using a good architecture.

That's not to say there wouldn't be value to a separate certification
for "life-critical software engineering" for those working in fields
where that is applicable.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] PBX Standards?

2012-01-23 Thread Tom Metro
Daniel C. wrote:
> Web servers are almost a commodity at this point.

Shared hosting absolutely is. Virtual private servers pretty much are
too. Cloud hosting is getting there.


> Why isn't there some similar standard for PBX's?  It can't be
> that hard to slap together an XML standard that defines your phone
> network, options, etc. and that all vendors agree on.

It's not a lack of standards that holds it back. Vendors could simply
coalesce around Asterisk and the Asterisk config files could be the
portable files. (You could trivially bundle them up with the prompts
into a zip file or some such for easy transport.)
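Bundling them up really is trivial; a sketch using nothing but the
standard library (the paths are illustrative; a real install keeps
configs under /etc/asterisk and prompts under /var/lib/asterisk/sounds):

```python
import os
import zipfile

def bundle_pbx(archive_path, config_files, prompt_files):
    # Package Asterisk config files and sound prompts into a single
    # portable archive, ready to hand to another provider.
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in config_files + prompt_files:
            # Store by base name so the archive unpacks cleanly elsewhere.
            zf.write(path, arcname=os.path.basename(path))
```

The hard part was never the packaging; it's getting two vendors to
accept the same files.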

The limiting factor in my opinion is that there is no market pressure to
commoditize the service. If you think about web hosting, what people
want to do with it dictates that it needs to follow some basic standards
in order to be useful and general purpose.

There are specialized platforms in web hosting where you get a
proprietary GUI for building your site - like Square, or Yahoo Stores -
but there are enough people that want some other web application that
there is a huge demand for a more generic offering.

In contrast, you can pare down the functionality of a PBX into something
that meets the needs of 80 to 90% of the customers that would consider a
cloud hosted PBX. The result is that the more knowledgeable customers
who see value in portability, and recognize the financial benefits to
commodity competition, aren't numerous enough to attract vendors.


> Why hasn't anyone put together a web app that works in tandem with a
> phone tree, allowing the caller to view the options on screen while
> simultaneously hearing them read over the phone?

Create one. Twilio (http://www.twilio.com/), and its competitors,
provide telephony web services where you can create a telephony
application as easily as you can create a web app.

(This is a kind of commoditization happening in the cloud telecom space.
But until the industry standardizes their APIs, it won't really be a
commodity.)
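To give a flavor of the model: your web app answers each incoming call
with an XML document (Twilio calls it TwiML) describing what the caller
hears. A minimal sketch using only the standard library - the menu
wording and the /handle-key endpoint are made up, though Say and Gather
are real TwiML verbs:

```python
import xml.etree.ElementTree as ET

def phone_tree_menu():
    # Build a TwiML-style response: read the menu aloud, then collect
    # one keypress and POST it back to our (hypothetical) handler URL.
    # A web app could render the same options as HTML for the
    # simultaneous on-screen view.
    resp = ET.Element("Response")
    gather = ET.SubElement(resp, "Gather", numDigits="1",
                           action="/handle-key")
    say = ET.SubElement(gather, "Say")
    say.text = "Press 1 for sales. Press 2 for support."
    return ET.tostring(resp, encoding="unicode")

print(phone_tree_menu())
```

Since the menu is just data generated by your app, driving a phone
prompt and a browser view from the same source is mostly plumbing.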


> "audiolizing" (Which is probably why we have a word for thinking
> about things with our visual centers, while I had to make up a word
> for the auditory equivalent.)

(See the recent network monitoring thread ("Visualizing LAN traffic").
Apparently the word is "auralizing." At least according to the author of
the project mentioned.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] New SMART disk errors reported

2012-01-25 Thread Tom Metro
Chuck Anderson wrote:
> ...I always replace the drive after even a single
> reallocation event.

Wow, you must send back a lot of drives.

Modern drives really push the limits of the technology, and are
constantly performing error correction. Occasionally that results in
sector reallocation. As long as the defect rate isn't rapidly growing,
and you aren't close to exceeding the number of spare sectors, you
should be fine.

I've had drives with a handful of reallocated sectors where the count
has remained static for years.


As for the OP's question, an accelerating defect rate and a count that
high seems reason for concern. It is, however, possible that the defect
region will cease growing. If all the failing sectors are physically
clustered together, there's a better chance that it may stabilize than
if they are scattered across the drive. See the note at the end of:

Testing your hard drive in Linux
http://mypage.uniserve.ca/~thelinuxguy/doc/hdtest.html

If it is part of a RAID set and nothing more than a minor inconvenience
if it fails, then take a wait-and-see approach. It would also be a good
idea to run a read-write surface scan using the technique mentioned in
the article above, or if you are using Linux RAID, run checkarray (resync).
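If you want to track the trend yourself, the count is easy to pull out
of `smartctl -A` output; a rough sketch (the growth threshold is an
arbitrary judgment call, not a vendor number):

```python
# Sketch: extract the reallocated-sector count from `smartctl -A`
# output and flag growth between runs. The sample line mirrors
# smartctl's attribute table format; RAW_VALUE is the last column.

def realloc_count(smartctl_output):
    for line in smartctl_output.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])
    return None

def growing(prev, curr, limit=10):
    # A static count is usually tolerable; rapid growth is not.
    return curr - prev > limit

sample = ("  5 Reallocated_Sector_Ct   0x0033   100   100   005"
          "    Pre-fail  Always       -       12")
print(realloc_count(sample))
```

Run it from cron against `smartctl -A /dev/sdX` and you'll know whether
your defect count is static or accelerating.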

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] special BLU meeting Feb 1: single board computers

2012-01-25 Thread Tom Metro
[A cross post from the Hardware Hacking list
(http://blu.wikispaces.com/Hardware+Hacking)]

Kurt Keville wrote:
> actually some of them are available now... like the TI Beaglebone...
> (yup, bone)..
> http://beagleboard.org/bone
> 
> Looks like we will see the sun4i-crane soon as well (EOMA compliant!)
> http://rhombus-tech.net/allwinner_a10/orders/
> 
> and of course, we have the Raspberry Pi Director coming to BLU on Feb. 1
> http://blu.org/cgi-bin/calendar/2012-raspi1
> 
> We may also be getting the Cotton Candy (FXITech ) CEO... stay tuned...

Will any be demoed running XBMC or Ubuntu TV?

Graphics hardware in $25 Raspberry Pi Linux box outperforms iPhone 4S GPU
http://arstechnica.com/gadgets/news/2012/01/tiny-25-raspberry-pi-linux-board-reportedly-offers-twice-the-performance-of-iphone-4s-gpu.ars

  The Raspberry Pi Foundation is building a low-cost Linux computer with
  a 700MHz ARM11 CPU. The board, which is roughly the size of a pack of
  playing cards, entered the manufacturing stage last month. There will
  be two models, priced at $25 and $35, with different specifications.

  ...developers from the XBMC project demonstrated their software
  running on a Raspberry Pi board.

  The demo, which can be viewed in a YouTube video, shows that XBMC runs
  reasonably well on the Raspberry Pi hardware and is relatively
  responsive. It was able to smoothly play an H.264-encoded 1080p video.


BeagleBone board boots up XBMC Eden, shows off its media prowess
http://www.engadget.com/2012/01/25/beaglebone-board-boots-up-xbmc-eden-shows-off-its-media-prowess/

  ...some intrepid devs managed to get the second beta of Eden up and
  running on the ARM A8 dev board. The vid stutters a bit during
  playback but, overall, it's a respectable performance considering this
  is a CPU that would get laughed out of most modern smartphones.


 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Samba4

2012-01-27 Thread Tom Metro
Warren Luebkeman wrote:
> ...we released the latest version of...which is a free/open source
> Linux domain controller based on Samba4.
> If [you are] an IT shop or consultant, I think it could be quite useful
> for you.

I don't mean to disparage the project mentioned here, which I am not
familiar with, and may in fact be an excellent packaging of Samba, but
it does raise the question of whether it is still a wise recommendation
for "an IT shop or consultant" to be deploying Samba, unless their
customers are already well invested in it.

When I first started using Samba I thought it was a fantastic idea, and
I thought the old-school UNIX guys that disparaged it were just being
anti-Microsoft, but after using it for a decade I came to view it as a
mess of a protocol with an unreliable and insecure authentication model.

I'm sure with enough care and feeding it can be coerced into behaving
well, but my experience with small scale deployments is that I've
inevitably run into unexplainable situations where share security had to
be relaxed in order to accomplish what was needed. I've never had that
experience with NFS. And that's not even getting into performance
comparisons.

Of course it isn't fair to simply compare NFS to Samba, as Samba also
encompasses name resolution and network-based authentication, but these
only make the situation more complicated, and the inevitable failures
harder to diagnose.

Would you choose to deploy Samba on a newly setup network?
Have your experiences with Samba been different?

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] wiki suggestions?

2012-01-29 Thread Tom Metro
Eric Chadbourne wrote:
> I'm going to use a wiki to share some classes on-line.  Some of the
> teachers who will be using the wiki are not very technical.  So a simple
> user interface and ease of use are important.

You already mentioned the software I'd recommend, but I'd also consider
a hosted solution, unless you need to have it under a domain you control
or just want the experience of installing and maintaining the software.
For example, Wikispaces, which has a WYSIWYG editor, and is what we use
for http://blu.wikispaces.com/ .

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] 4-Bay NAS

2012-02-03 Thread Tom Metro
Richard Pieri wrote:
> I'm shortly going to be in the market for a 4-bay NAS box for my home storage.

See prior thread:
http://www.mail-archive.com/discuss@blu.org/msg03133.html

  Re: [Discuss] D-I-Y NAS enclosures
  ...I recently ran across Acer's product in this space:
  http://www.newegg.com/Product/Product.aspx?Item=N82E16859321016

  a smaller 8.5" x 8" x 7" cube with a 2 TB drive. (Plus 5 USB and 1
  eSATA ports.) Currently selling for $260. Possibly discounted due to
  being loaded with an obsolete version of Windows Home Server.

NewEgg has since discontinued all the Acer Easystore models it carried.
Google tells me other vendors still have it, but for $360+, which is not
cost effective.

The search also turns up several vendors selling just a refurbished
motherboard for this product at prices ranging from $131 to $212, which
seems to confirm my fear mentioned in the above post that it uses a
proprietary motherboard.

I think these small servers aimed at the Windows Home Server market are
pretty close to the ideal hardware arrangement for a small D-I-Y NAS
platform, if they just: 1. used commodity ITX motherboards, 2. came
without Windows, and 3. had a price around $150. (Screwless/trayless
drive bays would be really nice, too.)

I'm still disappointed that there aren't better options for a mini-ITX
case with 4 or 5 screwless/trayless drive bays. There are a few, like:

http://www.e-itx.com/cfi-a7879.html
http://www.google.com/products/catalog?hl=en&q=mini-itx+nas&cid=6160286187527426797&ei=eCwsT4HaF9uwmQeG8tTMAg&ved=0CAkQrRI

(I like the layout of the first case better, but it uses superfluous
trays and costs $140.)

but they're generally priced at $120 or more. Add a motherboard, RAM,
boot device (Flash), and you're well over $200.

Clearly this is more a matter of sales volume than raw materials, as you
can get much larger steel enclosures with higher wattage power supplies
for half the price.

Consider that you can get a 5-bay eSATA enclosure with port multiplier
electronics for $150:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132015

or a 4-bay version for about $100:
http://www.newegg.com/Product/Product.aspx?Item=N82E1681677&nm_mc=OTC-Froogle&cm_mmc=OTC-Froogle-_-Server+-+RAID+Sub-Systems-_-SANS+DIGITAL-_-1677

which means you should be able to get a similar case + power supply for
about $60 ~ $80.

For a real D-I-Y approach, I'd start with a CD duplicator enclosure like:
http://www.directron.com/5baybk.html

and then put a 5-bay trayless hot-swap cage into it, like:
http://www.google.com/products/catalog?q=BPU-350SATA&cid=15975077119275943947&sa=button#scoring=tp

In theory there should be enough space to mount a 6" square mini-ITX
motherboard at the bottom or near the top of the enclosure. The end
result is no cheaper, not smaller, and probably not better looking, but
will have better quality drive bays. (You might see a cost advantage if
you scale it up to a 7-bay CD duplicator case with 2 5-bay cages, but to
support 10 drives you'll need to add port multiplier hardware
(http://www.amazon.com/dp/B000VEMNAU/). Or replace the expensive cages
with bare-bones versions: http://www.amazon.com/dp/B001LF40KE and either
manually wire them up, or add a port multiplier backplane, like the hard
to find CFI-B53PM used in the Backblaze server
(http://blog.backblaze.com/2009/10/07/backblaze-storage-pod-vendors-tips-and-tricks/).)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Network Traffic Visualization

2012-02-03 Thread Tom Metro
Derek Atkins wrote:
> I have mrtg set up to show me aggregate network usage at my choke-point
> (router).  Sometimes it would be nice if there were a way for me to
> pinpoint usage based on host and port or based on session threads. 
> 
> My router is running dd-wrt...

Tomato has built in logging and charting of aggregate bandwidth:
http://en.wikipedia.org/wiki/File:Tomato_Firmware_-_Bandwidth_Real_Time.PNG

that is probably comparable to what you get from mrtg, and if you are
using QoS, it provides some handy pie charts to show what chunk of your
bandwidth is being used up by each classification:
http://www.polarcloud.com/img/ssqosg108.png

(Which protocols are included in each classification depends on the QoS
rules you have created.)

Some of the less official Tomato builds also support per-machine
bandwidth monitoring. On one of my routers that runs one of these
builds, I noticed an "enable bandwidth monitoring" checkbox on the form
where you create a static DHCP mapping for a machine. I haven't yet seen
how or if you can view that data. And it seems like a significant
limitation if the data is only captured for statically mapped hosts (the
thread indicates they've remedied this limitation). See:

http://www.linksysinfo.org/index.php?threads/per-ip-bandwidth-monitoring-how-to.35533/

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Network Traffic Visualization

2012-02-03 Thread Tom Metro
Daniel C. wrote:
> - What problems do you have that a visualization tool could help
> solve?

Here's the use case that inspired the original thread: If you notice
something out of the ordinary happening with your network - slowness,
increased ping times, whatever - how do you determine what is happening?

Traditionally we turn to very narrowly focused tools that spit out a
specific number. This provides a very 1-dimensional view of what is
happening.

Generally, our networks are black boxes, and we have little insight as
to what is really happening. Bad things (both intentionally malicious
and inadvertent misconfiguration) could be happening, and we're unaware
of it.

The ideal is perhaps to consider the fantasy depiction of networks by
Hollywood where the hero trivially pulls up a visual representation of
their network and is easily able to spot the intruder's activity. It's
about time some of that absurdity became reality.


> - What do you need to see in order to solve the problem(s)?

Broadly speaking, some sort of overview representation, with the ability
to drill down (i.e. look at specific hosts or protocols) interactively.


> - Do you have any preference for how you see it?
> ...single window that you check on occasionally,
> but is otherwise minimized?

Most likely a tool that is consulted on an as-needed basis. (Presumably
this tool would be layered on top of a network monitoring infrastructure
which would include rules to trigger alerts when recognized anomalies
occurred. Alerts are more effective than expecting someone to be
constantly watching the visualization.)


> - What workflows are currently in place to tackle the problems that
> could be improved by having access to a visualization tool?

As mentioned above, there are a number of existing tools to help with
diagnosing network problems and spotting optimization opportunities, but
they typically show you only a few metrics at a time, and require the
use of multiple tools to get a fuller picture.
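The drill-down itself could be as simple as rolling flow records up by
host and then by port; a toy sketch with fabricated records (a real tool
would read them from pcap or NetFlow data):

```python
from collections import defaultdict

def summarize(flows):
    # Aggregate (src_host, dst_port, bytes) flow records into a
    # per-host, per-port byte count: the overview level, ready for
    # interactive drill-down.
    per_host = defaultdict(lambda: defaultdict(int))
    for src, port, nbytes in flows:
        per_host[src][port] += nbytes
    return per_host

flows = [("10.0.0.5", 443, 9000), ("10.0.0.5", 443, 1000),
         ("10.0.0.7", 22, 500)]
totals = summarize(flows)
print(totals["10.0.0.5"][443])
```

The visualization layer is the hard part; the aggregation underneath it
is this mundane.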

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss

