Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Frank DiPrete
On Mon, 2007-09-03 at 16:58 -0400, Ted Roche wrote:
 Just checking in to find out if anyone has switched their MythTV setups
 over to Schedules Direct [1]? (Schedules Direct is a non-profit
 organization that provides raw U.S./Canadian tv listing data to Free and
 Open Source Applications. Those applications then use the data to
 provide things like PVR functionality, search tools, and private channel
 grids.)
 
 For those not following along, a subsidiary of the Tribune Media cartel
 (Zap2It Labs) had been providing the data gratis to the community using
 their own server resources; something's changed and they've decided not
 to do that. Schedules Direct was formed as a non-profit and has
 scrambled to license the data and pass it on to the many PVR
 communities. Interestingly, they approached a number of companies which
 had accumulated the data, but were not able to work out a payment with
 anyone other than the Tribune folks.
 
 [1] http://www.schedulesdirect.org
 

sd sign up and confirm: worked
compile mythtv 0.20.2:  worked
update from 0.20-svn to 0.20.2: worked

Add sd source to mythtv:  flawed
Only one option to remove old zap2it source: delete all.
deleted all, then had to re-add sd.

Retrieve sd channel list: extremely flawed
Channel names not retrieved / added.
99% of channels added as "adding Channel #"
have to manually edit the channel list.


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: fedora 7 on laptop no longer burns CDs or DVDs

2007-09-13 Thread Lloyd Kvam
On Wed, 2007-09-12 at 23:08 -0400, Stephen Ryan wrote:
 What about going the other way around?  Try the GUI CD burner - you
 should be able to right-click on the .iso and select Write to
 Disc  

That was my starting point.  I glossed over that since there was no
useful error output.  The GUI seems to provide a wrapper to the
underlying command-line tools.  I went to the command line simply to get
better error messages.

And that GUI interface worked nicely in Fedora 6.

-- 
Lloyd Kvam
Venix Corp.
1 Court Street, Suite 378
Lebanon, NH 03766-1358

voice:  603-653-8139
fax:320-210-3409



Re: Motherboard capable of supporting over 32 GB RAM

2007-09-13 Thread Dave Johnson
Dan Jenkins writes:
 Does anyone have any recommendations? Preferably with DDR-800 support.
 It'll run both Linux and Windows XP 64 and is used for simulations.
 Thanks.

You're probably in for a Xeon 5xxx or Opteron 2xx or 2xxx board if you
want this much RAM.

Check Tyan, Supermicro, or for full systems Dell, HP, IBM.

You can probably pick up an opteron 2xx system for relatively cheap if
you don't need the latest and greatest system.

See here:
http://www.supermicro.com/products/motherboard/Xeon1333/
http://www.supermicro.com/Aplus/motherboard/Opteron/Op200.cfm
http://www.supermicro.com/Aplus/motherboard/Opteron2000/
http://www.tyan.com/product_board_list.aspx?cpuid=1&socketid=10&chipsetid=9
http://www.tyan.com/product_board_list.aspx?cpuid=4&socketid=9&chipsetid=9
http://www.tyan.com/product_board_list.aspx?cpuid=4&socketid=16&chipsetid=9

Most of those are extended ATX or custom sized though.

-- 
Dave


Re: Perl best practices (was: question ... Split operator in Perl)

2007-09-13 Thread Ben Scott
On 9/13/07, John Abreau [EMAIL PROTECTED] wrote:
 s/^[\x20\t]*//; # trim leading space
 s/[\x20\t]*$//; # trim trailing space

 Any particular reason to use [\x20\t] instead of \s ?

  \s would also eat newlines and similar.  At a minimum, one would have
to explicitly print with "\n" and use the -n switch instead of the -p
switch.  Which would be fine.  But if the file contains non-native
line endings, it can result in those getting mangled, or so I've
found.  I've got a lot of such files hanging around on my system.
Just eating space and tab worked better for me.

  OTOH, \s should eat other kinds of in-line whitespace that might be
encountered, including anything Unicode dishes up.  So that might be
better for some situations.

  YMMV.  Or, since this is Perl we're talking about: TIMTOWTDI.  ;-)

-- Ben


Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Jarod Wilson

On Sep 13, 2007, at 07:31, Frank DiPrete wrote:


 On Mon, 2007-09-03 at 16:58 -0400, Ted Roche wrote:
  Just checking in to find out if anyone has switched their MythTV setups
  over to Schedules Direct [1]? (Schedules Direct is a non-profit
  organization that provides raw U.S./Canadian tv listing data to Free and
  Open Source Applications. Those applications then use the data to
  provide things like PVR functionality, search tools, and private channel
  grids.)

  For those not following along, a subsidiary of the Tribune Media cartel
  (Zap2It Labs) had been providing the data gratis to the community using
  their own server resources; something's changed and they've decided not
  to do that. Schedules Direct was formed as a non-profit and has
  scrambled to license the data and pass it on to the many PVR
  communities. Interestingly, they approached a number of companies which
  had accumulated the data, but were not able to work out a payment with
  anyone other than the Tribune folks.

  [1] http://www.schedulesdirect.org

 sd sign up and confirm: worked
 compile mythtv 0.20.2:  worked
 update from 0.20-svn to 0.20.2: worked

 Add sd source to mythtv:  flawed
 Only one option to remove old zap2it source: delete all.
 deleted all, then had to re-add sd.

 Retrieve sd channel list: extremely flawed
 Channel names not retrieved / added.
 99% of channels added as "adding Channel #"
 have to manually edit the channel list.


Hrm, my experience fetching channels was nothing like that, on both a
0.20.2 test system and an svn trunk system. Are you sure that was
from "fetch channel listings from source" and not perhaps "scan for
channels"? Scanning for channels won't get channel names in most
cases, leaving you with something along the lines of what you describe.


--
Jarod Wilson
[EMAIL PROTECTED]




PGP.sig
Description: This is a digitally signed message part


Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Derek Atkins
Jarod Wilson [EMAIL PROTECTED] writes:

 sd sign up and confirm:  worked
 compile mythtv 0.20.2:   worked
 update from 0.20-svn to 0.20.2:  worked

 Add sd source to mythtv:  flawed

This was not my experience.  I just went into my video sources
and converted by TMS DataDirect to SchedulesDirect, entered
my SD login info, and it worked just fine.

 Only one option to remove old zap2it source: delete all.
 deleted all, then had to re-add sd.

That wasn't the case for me.  I could just convert from DD to SD
directly.  I had to go up instead of down to get back to the
SD configuration in the video source, but it worked great for me!

-derek
-- 
   Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
   Member, MIT Student Information Processing Board  (SIPB)
   URL: http://web.mit.edu/warlord/    PP-ASEL-IA N1NWH
   [EMAIL PROTECTED]    PGP key available


Re: Perl best practices

2007-09-13 Thread Paul Lussier
Ben Scott [EMAIL PROTECTED] writes:

   Personally, in the proper context, I find this:

When writing code which will be used, looked at, modified, and
maintained by no one else, doing whatever makes you happy and more
efficient makes sense, and is more efficient and expedient.

When writing code which will be used, looked at, modified, and
maintained by a group of people, it is best to agree upon and strictly
adhere to a common set of coding standards.  This makes the entire
group more efficient.  The personal likes and/or dislikes of any one
person may or may not bother anyone else in the group.


   @foo = split m{blah};

 to be easier to read and comprehend at a glance than this:

   @foo = split (m{blah}, $_);


I find both of these bothersome :)  *I* prefer:

@foo = split (/blah/);
or: @foo = split (/blah/, $actualVariableName);

The former implies $_, which need not be explicitly stated in this case.
The latter clearly denotes where you're getting your data from.

$_ has a *lot* of magical properties which can really screw things up,
especially in cases like:

   map { grep { ... $_ } $_ } @foo;

Which $_ is which, and where is each getting its data from?  This is
where I find the use of named variables to be better than just
depending upon built-ins like $_.

 Explicitly specifying $_ over and over again just clutters up the
 code with pointless syntax.  It's one more thing my brain has to
 recognize and process.

Right, which is why you shouldn't depend upon $_ in these contexts and
explicitly state a variable name (which should also be my'ed into the
proper scope :)

   I don't arbitrarily assign to $_ and use it at random, the way some
 people do.  And I do make use of parenthesis, braces, and such, even
 when they are not needed, when I find it makes the code clearer.  But
 I also leave them out when I find it makes the code clearer.

No arguments with that.  In general, IMO, clarity is of the utmost
importance.  There are many best practices which can aid
clarity, though.  I find one such practice is to always use func(args)
because it makes it blatantly obvious you're calling a function.
(Perhaps the one exception is with print, but even then, I find myself
very often using it there too.)  To me:

print (join(" ", "Some text", func(args), "more text",),
       "\n"
      );

is far more readable than

print join " ", "Some text", func(args), "more text", "\n";

In the former, if I need to add stuff to the join, it's blatantly
obvious where it goes.  In the latter, it is not.

   For a slightly less contrived example, take a script which trims
 leading and trailing whitespace from each line in an input file.  I
 already have one implementation, and I just wrote up another one.
[...]
   Assuming the reader is familiar with the language, which do you
 think will be easier/quicker to comprehend?

Both of these hurt my eyes! :)

This one is short and sweet:

 #!/usr/bin/perl -wp
 s/^[\x20\t]*//; # trim leading space
 s/[\x20\t]*$//; # trim trailing space

but I'd rewrite it as:

  #!/usr/bin/perl -p

  s/(^\s*|\s*$)//g; # trim leading/trailing whitespace


For a script which optionally took stdin, I'd write it as:

  #!/usr/bin/perl -w

  use English;

  my $file = shift;

  if (!$file) {
    *FH = *STDIN;
  } else {
    open(FH, $file) || die("Could not open $file: $ERRNO\n");
  }

  while (my $line = <FH>) {
    $line =~ s/(^\s*|\s*$)//g; # trim leading/trailing whitespace
    print("$line\n");
  }
  close(FH);


 It may be true that someone who *isn't* familiar with Perl would
 find it easier to puzzle out the meaning of the longer version.

I'm fairly comfortable with perl.  I could puzzle out the meaning
fairly easily.  And I'll even concede that as far as most perl, it's
pretty good.  But, as is true I'm sure even with my own code, there's
always room for improvement :)

 (/me waiting for Kevin to pipe in here in 4...3...2...1... ;)

 But I don't find that a particularly compelling argument.

The compelling argument is this: It should be blatantly obvious to
whomever is going to be maintaining your code in 6 months what you
were thinking :) The easier you make it up front to read your code and
discern your mindset, the less time it take the maintainer in 6
months.  Many times, that future maintainer is *you* :)

 I write Perl programs with the assumption that the reader
 understands Perl, the same way I am assuming readers of this message
 understand English.  :)

Ahh, yes.  But as the superintendent of the Lawrence, MA, School
system has recently shown, even those who *claim* to know the
language, often times are just fooling themselves.  Just axe him :)

 This may mean Perl, as practiced, is harder to learn than a language
 which is more rigid and always verbose.

I think perl is incredibly easy to learn if you learn from a good
source.  The documentation is one such source.  Other people's code is
most often NOT a good source!

 Many say similar things 

Re: Perl best practices

2007-09-13 Thread Kevin D. Clark

Paul Lussier writes:

  (/me waiting for Kevin to pipe in here in 4...3...2...1... ;)

Ben and Paul are competent Perl programmers.  They write good code.
Code should be written to be clear.  While it is nice if code written
in a given language is understandable by people who don't know the
language, this property isn't guaranteed.  Cryptic one-liners can be
hard to follow, but they can also be beautiful and useful.

If I write anything else, it would just be a combination of me
nit-picking for no purpose and hot air.

Kind regards,

--kevin
-- 
GnuPG ID: B280F24E It is best to forget the great sky
alumni.unh.edu!kdc And to retire from every wind
 -- Mumon


Re: Perl best practices

2007-09-13 Thread Ben Scott
On 9/13/07, Paul Lussier [EMAIL PROTECTED] wrote:
 When writing code which will be used, looked at, modified, and
 maintained by a group of people, it is best to agree upon and strictly
 adhere to a common set of coding standards.

  Yes.  But when the designated group of people is all of humanity,
the problem of agreeing on said standards becomes difficult.  :)

 I find both of these bothersome :)  *I* prefer:

 @foo = split (/blah/);

  Er, yes.  "blah" in this case was meta-syntactic, and I was still
thinking of the first example in this discussion, which had LTS
(Leaning Toothpick Syndrome).  I will use // if the regexp doesn't
suffer from LTS.  I use m{} or s{}{} when the regexp otherwise
contains slashes.

 @foo = split (/blah/);
 @foo = split (/blah/, $actualVariableName);
 The former implies $_, which need not be explicitly stated in this case.

  Right.  That's exactly what I was saying.  ;-)

 map { grep { ... $_ } $_ } @foo;

 Which $_ is which, and where is each getting its data from?

  Like I said, I use things like parenthesis, braces, named variables,
etc., liberally when I find the meaning/intent is not obvious in
context.  But it's a case-by-case call, not an absolute, inviolable
rule.

  "A foolish consistency is the hobgoblin of little minds."  -- Ralph
Waldo Emerson

  As a completely non-contrived example, here is an illustration of
when I think implicit use of $_ is very appropriate.  It's from a
Squid log analysis tool I wrote, where I wanted to condense the MIME
content type into something smaller and more appropriate for a log
report.  Here's the code (as usual, view in a monospace font to get
this to line up properly):

sub condense_type($) {
# condense a MIME content type to something shorter, for people
$_ = $_[0];
s{^text/plain$} {text};
s{^text/html$}  {html};
s{^text/css$}   {css};
s{^text/javascript$}{jscript};
s{^text/xml$}   {xml};
s{^text/}   {};
s{^image/.*}{image};
s{^video/.*}{video};
s{^audio/.*}{audio};
s{^multipart/byteranges}{bytes};
s{^application/}{};
s{^octet-stream$}   {binary};
s{^x-javascript$}   {jscript};
s{^x-shockwave-flash$}  {flash};
s{\*/\*}{stars};# some content gets marked */*
return $_;
}

  I could have used a regular named variable (say, $type) and repeated
$type =~ over and over again for 14 lines.  I believe that would
actually harm the readability of the code.  I find it clearer with use
of implicit $_, because it puts the focus on the fact that I'm doing a
bunch of transformations on the same thing, over and over again.

  As a counter-example from the same script, here's something using
explicit names and grouping which isn't strictly needed, because I
find it clearer:

sub condense_size($) {
    # condense a byte-count into K/M/G
    my $size = $_[0];
    if    ($size > $gigabyte) { $size = ($size / $gigabyte) . "G"; }
    elsif ($size > $megabyte) { $size = ($size / $megabyte) . "M"; }
    elsif ($size > $kilobyte) { $size = ($size / $kilobyte) . "K"; }
    return $size;
}

 Explicitly specifying $_ over and over again just clutters up the
 code with pointless syntax.  It's one more thing my brain has to
 recognize and process.

 Right, which is why you shouldn't depend upon $_ in these contexts and
 explicitly state a variable name ...

  A named variable would be *two* more things.  ;-)

   s/(^\s*|\s*$)//g; # trim leading/trailing whitespace

  Er, yah, that would be even better.  Not sure why I didn't just use
s/// with /g when I wrote that the first time around.

  (The actual script in my ~/bin/ has several lines of comments
explaining certain design decisions, but that's not one of them.)

   my $file = shift;

  You're using an implicit argument to shift there.  ;-)

 The compelling argument is this: It should be blatantly obvious to
 whomever is going to be maintaining your code in 6 months what you
 were thinking

  I do not think I could agree with you more here.  The thing you seem
to be ignoring in my argument is that clarity is subjective and
often depends on context.  :)

 I write Perl programs with the assumption that the reader
understands Perl ...

 ... even those who *claim* to know the language, often times are
 just fooling themselves.

  I'm not going to penalize the competent because there are others who
are incompetent.

 Many say similar things about Unix.  Or Emacs.  :-) I don't argue
 that one approach is right and the other wrong, but I do think that
 both approaches have their merits.

 Which approaches are you talking about?  Approaches to learning, or to
 writing?

  Yes.  :)

  Let me restate: A pattern which is powerful and easy-to-use is
sometimes unavoidably non-obvious.

  Or perhaps an example of a similar principle in a different context:
When invoking tar from a shell script, which of the following do you
prefer?

tar --create --gzip --verbose --preserve-permissions
--file=/path/to/file.tar.gz /etc

tar 

Re: Perl best practices

2007-09-13 Thread Ben Scott
On 13 Sep 2007 12:10:58 -0400, Kevin D. Clark [EMAIL PROTECTED] wrote:
 If I write anything else, it would just be a combination of me
 nit-picking for no purpose and hot air.

  Welcome to the Internet!   ;-)

-- Ben


Re: sendmail masquerading question

2007-09-13 Thread Ben Scott
On 9/13/07, Bill McGonigle [EMAIL PROTECTED] wrote:
 For some reason, the problems I run into seem to not have
 an .mc macro defined. :(   Sometimes I can find a hint online, but
 the community seems to be sparse.

  I've long suspected that some people just have bad karma with
certain software, and no amount of reading the man pages, searching
the web, etc., will ever find the right answer.

 Try searching on "how to change the hostname sendmail sends for HELO
 greeting" (no peeking below).

  If you just want to alter Sendmail's idea of the system's hostname,
the README for the config stuff[1] does have a section on this,
entitled "Who Am I?", which I found by looking in the table of
contents.  :)

[1]  I knew to find this in /usr/share/sendmail/README on Red Hat
systems.  I believe it is under cf/README in the Sendmail source
distribution.

  Now, said docs do assume your problem is Sendmail not being able to
get a FQDN from the bare hostname, and not the hostname itself.
Points off there.  But the instructions and examples do give me enough
information to realize that I would want to define the M4 variable
`confDOMAIN_NAME', or perhaps define $w and/or $m explicitly.
Precisely which action to take would depend on precisely what the
problem is.  :)
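For the record, the .mc-level change described above looks something like this (the hostname is a placeholder; you'd still need to rebuild sendmail.cf from the .mc and restart sendmail afterward):

```
dnl # in sendmail.mc -- override the canonical name Sendmail announces
define(`confDOMAIN_NAME', `mail.example.org')dnl
```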

 Djlibrescu.bfccomputing.com

  I had to look it up in Ye Olde Sendmail Installation and Operation
Guide[2], but said FM says that $j is the FQDN.  $w is the hostname
(first word of FQDN).  $m is the domain name (where domain name
means organization's domain name, i.e., parent domain of the host's
FQDN).  So that may not be doing what you want, because it leaves $w
set to whatever it used to be set to, and you seem to be implying your
system's hostname is somehow bogus for purposes of Sendmail.

[2] I Google'ed for sendmail manual, since I didn't remember the
exact name.  I recognized the title of the second match.

-- Ben


Re: AMD Releases 900+ Pages Of GPU Specs

2007-09-13 Thread Tom Buskey
On 9/12/07, Tony Lambiris [EMAIL PROTECTED] wrote:

 Three cheers for AMD, who have been leading the way, showing the rest
 of the world there is nothing to fear in releasing docs on how your
 hardware works (you know, the thing that I paid for and now own). Even


While I'm applauding AMD/ATI, someone more prominent has been releasing
hardware information already: http://www.opensparc.net/


[GNHLUG] Tech-North Summit, Manchester, NH 19-Sept-2007

2007-09-13 Thread Ted Roche
FYI, a free conference and job fair in Manchester, NH next week.

Register online at http://www.tech-north.com/

Sessions are free, but meals cost extra.

The City of Manchester, New Hampshire is producing the first-ever
Tech-North Summit: an event to be held on Wednesday, September 19, 2007
at the Center of New Hampshire – Radisson in downtown Manchester.
Sponsored in part by UNH Manchester, Tech-North will explore the links
between higher education and high tech through keynote addresses, panel
discussions and informative break-out sessions. Breakfast and luncheon
events, an evening reception and the High Tech Job Fair and Exhibition
will provide networking, employment and social opportunities as well as
the chance to see and experience new technologies, products and services.

Sounds like an interesting assortment of speakers, sponsors and vendors.

-- 
Ted Roche


Software Freedom Day, Souhegan Valley Team

2007-09-13 Thread Bill Sconce
Here's a copy of the press release I cooked up for SFD.  Thanks to
Bill Poliquin at GotInk4U(*) it was sent out to ~17,000 e-mail recipients
today.  If it isn't raining too hard on Saturday you might want to
stop by(**).  Feel free to stop by regardless!  And help out if you
wish.

-Bill (and Janet and Mark and Ted and Roseann and Bill)

(*) Good folks.  I get all my toner/inkjet supplies there, and computer
stuff too.  They get it about Linux, and although few customers ask
them for a Linux box they've been putting OpenOffice.org on every
machine they build...

(**) For heckling, if nothing else, since Ben sez he's going to
be there.




Software Freedom Day comes to Nashua
Nashua, New Hampshire


Mark this coming Saturday (September 15th) in your calendar. 
It's Software Freedom Day, an international celebration of free
software, and of the principles which make it possible. These
are the same principles which make science possible, and 
literature, and mathematics: the ability to share what we write.

Although many people may not have heard of free software, almost
everyone has used it. More than half of all Web sites are powered
by free software (Apache, and Linux); many people are taking
advantage of the best Internet browser available (Firefox), and
it's free software; the best office suite available (OpenOffice.org)
is free software, and better still, OpenOffice.org creates its files
in a reliable format (an international standard, ISO 26300), which
ensures that what you write will be readable around the world -- and
perhaps more importantly, will be readable five or ten or a hundred
years from now.

Free software is not shareware. Free means free as in freedom, not
as in free lunch -- it means that there is no catch, no hidden pitch to
send in money later.  You are free to share this kind of software with
as many friends as you want (just as we are free to share it with you),
and you are free to change the software to suit your needs.

On Software Freedom Day teams get together all over the world to
celebrate free software.  Our local Souhegan Valley Team, which has
participated each of the past three years, will this year be handing
out CDs, discussing free software, and (for as long as they last)
sharing milk and cookies next to the Nashua Airport, in the little
parking lot on Charron Avenue. Saturday morning, September 15th, 
09:30 AM till 2:00PM or so.  In case of rain the fine folks at 
GotInk4U (themselves users of free software) have offered us shelter
-- they're directly away from the runway in the plaza.

Drop by!  Get your own copy of Firefox, OpenOffice.org,
7Zip, PDF Creator, and more -- free (in both senses).  You can
download any and all of these from the Internet, but the SFD
worldwide team has packaged them on a CD to save you the trouble.

(P.S. The cookies are in the oven.  Chocolate!  --Bill)

Bill Sconce, Lyndeborough
Janet Levy, Lyndeborough
Mark Boyajian, Pepperell
Ted Roche, Contoocook
Roseann Day, Amherst
Bill Poliquin, Nashua
Ben Scott, Dover

For more information visit:
   http://softwarefreedomday.org
   http://theopencd.org


Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Frank DiPrete
On Thu, 2007-09-13 at 09:44 -0400, Jarod Wilson wrote:
 On Sep 13, 2007, at 07:31, Frank DiPrete wrote:
 
  On Mon, 2007-09-03 at 16:58 -0400, Ted Roche wrote:
  Just checking in to find out if anyone has switched their MythTV  
  setups
  over to Schedules Direct [1]? (Schedules Direct is a non-profit
  organization that provides raw U.S./Canadian tv listing data to  
  Free and
  Open Source Applications. Those applications then use the data to
  provide things like PVR functionality, search tools, and private  
  channel
  grids.)
 
  For those not following along, a subsidiary of the Tribune Media  
  cartel
  (Zap2It Labs) had been providing the data gratis to the community  
  using
  their own server resources; something's changed and they've  
  decided not
  to do that. Schedules Direct was formed as a non-profit and has
  scrambled to license the data and pass it on to the many PVR
  communities. Interestingly, they approached a number of companies  
  which
  had accumulated the data, but were not able to work out a payment  
  with
  anyone other than the Tribune folks.
 
  [1] http://www.schedulesdirect.org
 
 
  sd sign up and confirm: worked
  compile mythtv 0.20.2:  worked
  update from 0.20-svn to 0.20.2: worked
 
  Add sd source to mythtv:  flawed
  Only one option to remove old zap2it source: delete all.
  deleted all, then had to re-add sd.
 
  Retrieve sd channel list: extremely flawed
  Channel names not retrieved / added.
  99% of channels added as adding Channel #
  have to manually edit the channel list.
 
 Hrm, my experience fetching channels was nothing like that, on both a  
 0.20.2 test system and an svn trunk system. Are you sure that was  
 from fetch channel listings from source and not perhaps scan for  
 channels? Scanning for channels won't get channel names in most  
 cases, leaving you with something along the lines of what you describe.

In the channel editor, I do not have an option for "fetch channel
listings from source", and the channel scan did cause the problem.

To fix it I deleted all channels from the mythtv channel editor then ran
mythfilldatabase from command line.

Thanks for the tip.



 


Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Thomas Charron
On 9/13/07, Frank DiPrete [EMAIL PROTECTED] wrote:
  Hrm, my experience fetching channels was nothing like that, on both a
  0.20.2 test system and an svn trunk system. Are you sure that was
  from fetch channel listings from source and not perhaps scan for
  channels? Scanning for channels won't get channel names in most
  cases, leaving you with something along the lines of what you describe.
 in the channel editor, I do not have an option for fetch channel
 listings from source and channel scan did cause the problem.
 To fix it I deleted all channels from the mythtv channel editor then ran
 mythfilldatabase from command line.

  It isn't on that screen.  It's on the same screen where you define the
use of SchedulesDirect.

-- 
-- Thomas


Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Frank DiPrete
On Mon, 2007-09-03 at 16:58 -0400, Ted Roche wrote:
 Just checking in to find out if anyone has switched their MythTV setups
 over to Schedules Direct [1]? (Schedules Direct is a non-profit
 organization that provides raw U.S./Canadian tv listing data to Free and
 Open Source Applications. Those applications then use the data to
 provide things like PVR functionality, search tools, and private channel
 grids.)
 
 For those not following along, a subsidiary of the Tribune Media cartel
 (Zap2It Labs) had been providing the data gratis to the community using
 their own server resources; something's changed and they've decided not
 to do that. Schedules Direct was formed as a non-profit and has
 scrambled to license the data and pass it on to the many PVR
 communities. Interestingly, they approached a number of companies which
 had accumulated the data, but were not able to work out a payment with
 anyone other than the Tribune folks.
 
 [1] http://www.schedulesdirect.org
 

Anybody else using SD with comcast? (Nashua)

After fixing the channel name problem, the feed for 7 channels returns no
program data. Notable channels are 62 (scifi), 63 (animal), and 54 (food).








Re: Success stories with MythTV and Schedule Direct?

2007-09-13 Thread Jeff Creem
Frank DiPrete wrote:
 On Mon, 2007-09-03 at 16:58 -0400, Ted Roche wrote:
   
 Just checking in to find out if anyone has switched their MythTV setups
 over to Schedules Direct [1]? (Schedules Direct is a non-profit
 organization that provides raw U.S./Canadian tv listing data to Free and
 Open Source Applications. Those applications then use the data to
 provide things like PVR functionality, search tools, and private channel
 grids.)

 For those not following along, a subsidiary of the Tribune Media cartel
 (Zap2It Labs) had been providing the data gratis to the community using
 their own server resources; something's changed and they've decided not
 to do that. Schedules Direct was formed as a non-profit and has
 scrambled to license the data and pass it on to the many PVR
 communities. Interestingly, they approached a number of companies which
 had accumulated the data, but were not able to work out a payment with
 anyone other than the Tribune folks.

 [1] http://www.schedulesdirect.org

 

 Anybody else using SD with comcast ? (nashua)

 After fixing the channel name prob, The feed for 7 channels returns no
 program data. notable channels are 62 scifi, 63 animal, and 54 food.


   
I am using SD with comcast in Nashua. I had no channel name problems as 
a result of the switch, and I have program data for 62, 63, and 54.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Perl best practices

2007-09-13 Thread Paul Lussier

For all those just tuning in, Ben and I are in violent and vocal
agreement with each other, and at this point are merely quibbling over
semantics :)

Ben Scott [EMAIL PROTECTED] writes:

   Er, yes.  "blah" in this case was meta-syntactic, and I was still
 thinking of the first example in this discussion, which had LTS
 (Leaning Toothpick Syndrome).  I will use // if the regexp doesn't
 suffer from LTS.  I use m{} or s{}{} when the regexp otherwise
 contains slashes.

Something about the use of {} and () in regexps really bothers me.  I
think it's because in general, perl overloads too many things to begin
with.  To use {} for regexp delimiting is confusing and completely
non-intuitive to me. They are meant to denote either a hash element or
a code block.  Trying to make my mind use them for regexps hurts :)

To avoid LTS and backslashitis in a regexp, I tend to do something like:

   s|/foo/bar|/bar/baz|g;

The | is close enough to / that it's instantly clear to me.
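For anyone following along at home, here's a minimal sketch of the three delimiter styles under discussion side by side (the path and patterns are made up for illustration, not from either of our scripts):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $path = "/usr/local/bin/foo";

# Classic slashes suffer from LTS when the pattern itself contains slashes:
my $lts = ($path =~ /^\/usr\/local\//);

# Braces avoid the backslashes, at the cost of overloading {}:
my $braces = ($path =~ m{^/usr/local/});

# Pipes read almost like slashes and need no escaping either:
my $pipes = ($path =~ m|^/usr/local/|);

print "all three match\n" if $lts && $braces && $pipes;
```

All three are equivalent to the regexp engine; the argument is purely about what the eye parses fastest.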

 A foolish consistency is the hobgoblin of little minds.  -- Ralph
 Waldo Emerson

Yeah, what the old, dead guy said :)

 As a completely non-contrived example, here is an illustration of
 when I think implicit use of $_ is very appropriate.
[...]
 sub condense_type($) {
 # condense a MIME content type to something shorter, for people
 $_ = $_[0];
 s{^text/plain$} {text};
 s{^text/html$}  {html};
 s{^text/css$}   {css};
 s{^text/javascript$}{jscript};
 s{^text/xml$}   {xml};
 s{^text/}   {};
 s{^image/.*}{image};
 s{^video/.*}{video};
 s{^audio/.*}{audio};
 s{^multipart/byteranges}{bytes};
 s{^application/}{};
 s{^octet-stream$}   {binary};
 s{^x-javascript$}   {jscript};
 s{^x-shockwave-flash$}  {flash};
 s{\*/\*}{stars};# some content gets marked */*
 return $_;
 }

 I could have used a regular named variable (say, $type) and
 repeated $type =~ over and over again for 14 lines.  I believe
 that would actually harm the readability of the code.

Agreed, though we (by "we" I mean the company which currently puts
food on my table :) do things like this slightly differently,
completely avoiding the $_ dilemma:

  my $match = shift;
  my %mimeTypes =
    ('^text/plain$'          => "text",
     '^text/html$'           => "html",
     '^text/css$'            => "css",
     '^text/javascript$'     => "jscript",
     '^text/xml$'            => "xml",
     '^text/'                => "",
     '^image/.*'             => "image",
     '^video/.*'             => "video",
     '^audio/.*'             => "audio",
     '^multipart/byteranges' => "bytes",
     '^application/'         => "",
     '^octet-stream$'        => "binary",
     '^x-javascript$'        => "jscript",
     '^x-shockwave-flash$'   => "flash",
     '\*/\*'                 => "stars",  # some content gets marked */*
    );

  foreach my $mtype (keys %mimeTypes) {
    if ($match =~ /$mtype/) {
      return $mimeTypes{$mtype};
    }
  }

Also, the foreach could be written as:

  map { ($match =~ /$_/) && return $mimeTypes{$_} } keys %mimeTypes;

Though I find this completely readable, it suffers from the problem
that it's not easily extensible.  If you decide you need to do more
processing within the loop, the foreach is much easier to extend.  You
just plonk another line in there and operate on the already existing
variables.  With the map() style loop, this becomes more difficult.

So, though I love map(), I would have to argue this is not the best
place to use it.  Once readability has been achieved, the next
priority ought to be future maintenance and extensibility, IMO.
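A middle ground neither of us has mentioned yet (so take it as my own suggestion, not part of the earlier exchange) is first() from the core List::Util module: it short-circuits like the foreach's early return, but stays a one-liner like map(). A trimmed-down sketch of the table idea:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(first);

# A small subset of the pattern table, just for illustration.
my %mimeTypes = (
    '^text/plain$' => "text",
    '^text/html$'  => "html",
    '^image/'      => "image",
);

sub condense_type {
    my ($match) = @_;
    # first() returns the first key whose pattern matches, then stops,
    # so we never fall into map()'s keep-iterating-after-a-hit trap.
    my $hit = first { $match =~ /$_/ } keys %mimeTypes;
    return defined $hit ? $mimeTypes{$hit} : $match;
}

print condense_type("text/html"), "\n";   # prints "html"
```

If you later need more work done per candidate, you still have the foreach escape hatch; first() just covers the common find-one-and-stop case readably.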

   As a counter-example from the same script, here's something using
 explicit names and grouping which isn't strictly needed, because I
 find it clearer:

 sub condense_size($) {
 # condense a byte-count into K/M/G
 my $size = $_[0];
 if    ($size > $gigabyte) { $size = ($size / $gigabyte) . "G"; }
 elsif ($size > $megabyte) { $size = ($size / $megabyte) . "M"; }
 elsif ($size > $kilobyte) { $size = ($size / $kilobyte) . "K"; }
 return $size;
 }

I tend to like this style too, though I'd use a slightly different
syntax.  It's otherwise exactly the same.

  my $size = shift;
  ($size > $gigabyte) && return (($size/$gigabyte) . "G");
  ($size > $megabyte) && return (($size/$megabyte) . "M");
  ($size > $kilobyte) && return (($size/$kilobyte) . "K");

Or, perhaps, if you wanted to be a little cleverer:

  my %units = ($gigabyte => sub { int($_[0]/$gigabyte) . 'G' },
               $megabyte => sub { int($_[0]/$megabyte) . 'M' },
               $kilobyte => sub { int($_[0]/$kilobyte) . 'K' },
              );

  foreach my $base (sort { $b <=> $a } keys %units) {
    if ($size > $base) {
      print $units{$base}->($size), "\n";
      last;
    }
  }

This last approach is too clever by one, but it's also slightly easier
to maintain: to add another size, you add one line, and you can add
that line anywhere you want in the hash.
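To make that one-line extensibility concrete, here's a runnable sketch of the dispatch-table version with a terabyte entry added. The byte constants are my own assumption (the snippets above never define $gigabyte and friends), and I've wrapped the loop in a sub that returns rather than prints:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $kilobyte = 2**10;
my $megabyte = 2**20;
my $gigabyte = 2**30;
my $terabyte = 2**40;   # the "one extra line" the hash style buys you

my %units = (
    $terabyte => sub { int($_[0] / $terabyte) . 'T' },
    $gigabyte => sub { int($_[0] / $gigabyte) . 'G' },
    $megabyte => sub { int($_[0] / $megabyte) . 'M' },
    $kilobyte => sub { int($_[0] / $kilobyte) . 'K' },
);

sub condense_size {
    my ($size) = @_;
    # Largest base first, so 3 GB reports as "3G" rather than "3145728K".
    foreach my $base (sort { $b <=> $a } keys %units) {
        return $units{$base}->($size) if $size > $base;
    }
    return $size;   # under 1K: report raw bytes
}

print condense_size(3 * $gigabyte), "\n";   # prints "3G"
```

Note the numeric sort on the keys: hash order is effectively random in Perl, so without it the terabyte entry could land anywhere in the iteration.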