Re: OT(ish): Advice

2002-12-06 Thread Patrick Mulvany
On Fri, Dec 06, 2002 at 01:04:26PM +, Dominic Mitchell wrote:
> Neil Fryer wrote:
> >And had you done anything for a company or was it personal stuff that you
> >were working on?
> 
> This was stuff I'd worked on for the company - a config file 
> distribution mechanism.  But I think that they'd have been happy with 
> any code even if it was just personal stuff.  It basically proves that 
> you can code (hopefully, nicely).

One piece of advice I would give here is to make sure you really understand the code you 
choose to show your proposed employer. I have interviewed prospective employees where it 
became evident very fast that they had copied and pasted code, or hadn't authored it 
themselves at all. The "why did you do a map rather than a for loop?" type of question 
should start a nice discussion about coding styles and efficiency, not leave you both 
feeling stupid.

Just my bit

Paddy




Re: Informative CVs

2002-12-07 Thread Patrick Mulvany
On Sat, Dec 07, 2002 at 04:06:01PM +, Lusercop wrote:
> On Sat, Dec 07, 2002 at 03:49:34PM +0100, Paul Johnson wrote:
> > This job ad on Perl Jobs made me smile.
> >   CVs with the following information will only be considered if they include
> >   the following information:
> >   - CV
> >   - ...
> 
> Submit it to the campaign for Plain English!
>

The final requirement was interesting too.

- A line of whether or not you have experience with Perl, mySQL, ASP and Mason.

;) 

Patrick Mulvany 




Re: Parsing PHP code from Perl

2002-12-13 Thread Patrick Mulvany
On Fri, Dec 13, 2002 at 12:21:06PM +, Alex McLintock wrote:
> >Kris Boulez wrote:
> >Given an existing PHP application that get it's configuration by
> >"include" ing a 'config.php3' file. I now want to use this config file
> >from a perl script. Is there an easy way to get all the
> >variables/defines/.. out of the PHP config file. I can of course write a
> >little parser for this.
> 
> Perhaps you should write another php script which displays all the
> variables you want as a web page containing comma separated values, or
> and XML file if you aren't averse to over kill.
> 
> Fetching a web page in perl, and splitting it based on commas (or some 
> other separater) is pretty trivial.
> 
> Alex
> 

Such an approach is also a potential security loophole, considering that config files 
tend to hold information such as database usernames, passwords, directory and file 
locations, pipes and ports.

Paddy 




[OT] Oldest machine still running perl

2003-01-20 Thread Patrick Mulvany
On Sun, Jan 19, 2003 at 01:38:47AM +, Piers Cawley wrote:
> Paul Makepeace <[EMAIL PROTECTED]> writes:
> 
> > On Sat, Jan 18, 2003 at 06:25:34PM +, Shevek wrote:
> >> I'd have to check my P60s to find out the exact dates.
> >
> > Wow, I thought my PII-233 was old!
> 
> 386SX/25 ran my first linux box. But I think we've already done this.
> 

I will open the bidding with a 486DX2-66 running Debian 2.2 (still working as a 
firewall/NAT box).

Paddy





Re: CVS Client

2003-01-22 Thread Patrick Mulvany
Hi,

Just a mild digression.

If you are considering simultaneous Windows/Linux development, be careful about your 
Windows editors; it is a real pain when someone on Windows checks in a file with only 
the line endings changed.

Paddy

On Wed, Jan 22, 2003 at 03:03:23PM -, Neil Fryer wrote:
> Hi All
> 
> Can anyone recommend a decent free CVS client, for W2K? To connect to a
> Linux CVS server?
> 
> Thanks in advance
> 
> Neil Fryer
> Systems Administrator
> 12Snap UK Ltd
> Level 8
> 10 Wardour Street
> London
> W1D 6QF
> www.12snap.co.uk
> 
> Office: +44(0)20 7534 7322
> mobile: +44 (0)7855 400 309
> Fax: +44(0)20 7534 7301
> email: [EMAIL PROTECTED]
> 
> 
> 
> 




Re: CVS Client

2003-01-24 Thread Patrick Mulvany
On Thu, Jan 23, 2003 at 09:39:32AM +, Nicholas Clark wrote:
> On Wed, Jan 22, 2003 at 11:39:04PM +, Roger Burton West wrote:
> > On Wed, Jan 22, 2003 at 06:06:00PM +, Patrick Mulvany wrote:
> > 
> > >If you are considering simultanious windows/linux development becareful about 
>your Windows editors it is a real pain when someone on windows checks in a file with 
>only the line endings changed.
> > 
> > I'm fairly sure wincvs allows an "ascii mode" for files in order to
> > remove this problem.
> 
> You get problems if you start using samba to mount files from unix, edit
> them on Windows, and then check them in on unix.
> 
> Something I never got a round tuit for - I believe that cvs uses diff3 to
> work out how to merge changes. There is an option in standard diff (but not
> diff3) to ignore whitespace. You could kill two birds with one stone
> (people re-indenting, and \r\n line endings) by adding an option to ignore
> whitespace at the beginnings and ends of lines. It wouldn't make a difference
> to languages such as C that forbid multi-line string constants, and a
> perl coding style of "no multiline strings" might be tolerable.

Ignoring whitespace changes can be very annoying and may even be just plain wrong. 
There are occasions where whitespace is significant.

$sql = q{SELECT id FROM users WHERE fullname like '%  %'};

is functionally different from :-

$sql = q{SELECT id FROM users WHERE fullname like '% %'};

Say the first was checked in and the system ignored whitespace changes: how would you 
fix the bug? By adding a # at the end to make the update significant?

Documentation such as POD can also have significant whitespace.

My personal preference is generally towards setting a standard and adhering to it, 
e.g. all editors set up with Unix line endings, tab emulation and a standard tab size.
 
Just a few thoughts

Paddy





Re: weird eval

2003-05-30 Thread Patrick Mulvany
On Wed, May 28, 2003 at 05:41:49PM +0100, Ben wrote:
> What circumstances are there under which eval {}; will not trap a program exit ?
> 
> I assume a naughty XS module segfaulting will do for it - but are there any
> others?

The XS module may not segfault directly, but rather leave garbage around that causes a 
segfault when it is cleaned up.

perl -e 'eval {use Clone qw(clone); $a={}; undef $a; $c=clone $a;print "cloned\n";}'

clone does not appear to cause the segfault directly; the cleanup of $c going out of 
scope does.

See http://rt.cpan.org/NoAuth/Bug.html?id=2252 for a fuller explanation.

Hope that helped some

Paddy 



Re: 501 Not Implemented

2003-06-16 Thread Patrick Mulvany
On Mon, Jun 16, 2003 at 11:32:13AM +0100, Ben wrote:
> Hi,
> 
> I'm having a nasty HTTP implementation mismatch.
> 
> Some server I'm talking to using LWP (I can't tell what the server actually is,
> and Netcraft don't know either) is returning a 501 to a POST, not sending any
> response body and then closing the connection. (As a side effect, this is causing
> a segfault somewhere down in the XS code, but I think I understand how to
> workaround this and will isolate the cause and try and patch it later).
> 
> Now, I know that a 501 SHOULD contain a response body, but that's kind-of not
> relevant. What I want to know is what server conditions could cause it to
> think that a 501 is an appropriate thing to send back.
> 
> I know that Not Implemented could apply to a request method or a transfer-coding
> but are there any other examples that people know of that could trigger this?
> 

Hi Ben,

Are you setting the Referer correctly for those pages? I have known people to have 
PerlTransHandlers that try to detect scraping and return errors based on that. As far 
as a lot of companies are concerned, if it only works with IE that is perfectly 
acceptable.

Your best bet is to browse the site while capturing the datastream. Then you know what 
it really accepts, rather than what you thought it needed. From there you can pare it 
back to a minimal request.
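As a sketch of the comparison step, HTTP::Request's as_string shows the exact request LWP would send, which you can then diff line by line against a browser capture. The URL, Referer and body here are placeholders, not the real service:

```perl
use strict;
use warnings;
use HTTP::Request;

# Build the POST by hand and print it for comparison with a capture.
my $req = HTTP::Request->new(POST => 'http://example.com/endpoint');
$req->header('Referer'      => 'http://example.com/form');
$req->header('Content-Type' => 'application/x-www-form-urlencoded');
$req->content('some=data');

print $req->as_string;
# Once it matches the capture, send it for real with
# LWP::UserAgent->new->request($req).
```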

Hope it helps

Paddy




Re: 501 Not Implemented

2003-06-19 Thread Patrick Mulvany
On Thu, Jun 19, 2003 at 02:04:04AM +0100, Ben wrote:
> Ah, you see, this isn't a page which a browser would ever see. It's an 
> application that I POST some XML to, and it gives me back another XML
> document based on what I send it. I have a document which describes how
> this thing is supposed to work, and it doesn't behave the way it's
> supposed to.
> 
> If I send it garbage, well-formed but invalid XML or correctly formed 
> but incorrect XML, then I get an error message. If I send it something 
> correct, I get a 501 and no return document. Which is why I think it
> shouldn't be the transfer-coding or anything else. It clearly *can*
> interpret my requests, as it knows what incorrect ones look like.  
> 
> > Best bet it to browse it while capturing the datastream. 
> > Then you really know what it will accept rather that what you 
> > thought it needed. Then you can backout to a minimum data request.
> 
> I've tried that. Same behaviour in the browser. I'm now pretty sure the
> problem is with them, but the only contacts I have are non-technical. I
> tried asking for someone technical's email address and got told that my
> existing contact would be happy to answer my questions, which *really* 
> isn't what I wanted.
>

Can they provide you with a working test example? They must surely have one for their 
own internal use, if nothing else. Failing a test example, a capture of the datastream 
they generate during their own tests would do.

You just know that it is going to be something they have missed or left ambiguous in 
their spec (e.g. they failed to mention it requires DOS line endings).

If people refuse to give you access to a technical contact, I have generally found that 
asking a complex technical question will get you one of three responses:
 1. "I don't know what you are talking about."
 2. "I will pass it on", and nothing happens.
 3. Someone else contacts you.

I am having this kind of problem with a data supplier at the moment. They have decided 
to change the data format they are using and have given everyone two months to get 
their systems working with the new data.

When they provided test data, it covered only the most simplistic cases. It appears 
they don't really understand the complexity and implications of their own changes.

Hope it helps

Paddy
 



Re: OT: More sybase related - IDENTIFIER TOO LONG

2003-07-07 Thread Patrick Mulvany
Hi,

This seems to be more of a Sybase issue, but I have just been having the same 
conversation relating to SQL Server.

In SQL Server this relates to being able to create tables whose variable-length fields 
have a total size larger than the page size.

The result is tables where one varchar field works fine at max length, but filling all 
the fields to their max produces the error, as the record no longer fits in a single 
page.

Hope this helps

Paddy 
 

On Mon, Jul 07, 2003 at 12:22:55PM -, Raf wrote:
> 
> I saw this post, which doesn't appear to have been followed up:
> 
> http://lists.ibiblio.org/pipermail/freetds/2002q1/006299.html
> 
> I have the same problem:
> 
> Where:
>  (4)   attribute_valuevarchar(350) NOT NULL
> 
> =head2 MyError
> 
> 1>  insert into attributes_string (
> object_id,attribute_id,object_insert_id,attribute_value,creation_time,modification_time,modified_by
> ) values (18, 75, 7236,
> ''
> ,'2003/07/01', '2003/07/01',2)
> 2> go
> Msg 103, Level 15, State 1:
> Server 'pelican', Line 1:
> The identifier that starts with 'aa' is too
> long. Maximum length is 30.
> 
> =cut
> 
> This crashed my transaction and is giving me the same result in isql and
> via DBD::Sybase.
> 
> I know that it's not dbi specific , however it works when I reduced the
> string "a"x350 to "a"x10.  Given that the field is varchar(350) I really
> don't get this?
> 
> I've just moved to sybase from postgrest,sql,oracle and wonder if this
> error is familiar to other developers?
> 
> Cheers,
> 
> Raf
> 
> 
> 
> 



Re: Database Connection Conversions

2003-07-15 Thread Patrick Mulvany
On Tue, Jul 15, 2003 at 01:57:55AM -0700, Dave Cross wrote:
> A client of mine is thinking of buying a datafeed from a third
> party supplier.

> As part of the way that this feed works, there is an application
> that writes the data into a database. This app is written in
> VB and assumes that the database it is writing to is MS SQL Server.

Does it actually assume it is SQL Server and check, or is it just connecting to a 
named connection?

> Now we don't like MS SQL Server. We don't want to have to buy
> a license for it when we have a perfectly good Oracle database
> running on Solaris.

A nice, sane attitude... anyone for SQL Slammer?

> So what we're looking for is an application that can sit between
> the VB app and the Oracle database, translating the ODBC/SQL
> Server calls to Oracle API calls. It needs to look like an SQL
> Server to the VB app and then actually put the data into Oracle.

Without more information and trying things it would be very hard to judge, but if it is 
just using a named connection, try creating that connection pointing at the Oracle 
server with the same tables. This may well work, but I will give the following 
provisos:

A. Tables and fields have sane names, i.e. ANSI SQL compliant
B. No stored procedures have been created or used on the SQL Server
C. The client isn't expecting the server to return a select from a stored
 procedure (Oracle can only do this as a ref cursor, not directly)

If this fails, the next suggestion is to try Sybase. SQL Server has diverged a lot from 
its Sybase origins, but quite often even now you can switch between the two with less 
work than moving to Oracle.

Hope this helps

Paddy





Re: Database Connection Conversions

2003-07-15 Thread Patrick Mulvany
On Tue, Jul 15, 2003 at 01:57:55AM -0700, Dave Cross wrote:
> 
> A client of mine is thinking of buying a datafeed from a third
> party supplier.
> 
> As part of the way that this feed works, there is an application
> that writes the data into a database. This app is written in
> VB and assumes that the database it is writing to is MS SQL Server.

This wouldn't be from a well-known car data provider, would it?

Just curious

Paddy 



Re: Avoiding $1, $2, ...

2003-07-29 Thread Patrick Mulvany
On Tue, Jul 29, 2003 at 03:53:52PM +0200, Rafael Garcia-Suarez wrote:
> Paul Makepeace wrote:
> > I'd like to dump regex matches into an array without explicitly naming
> > $1, $2, ...
> > 
> > =head1 NOT WORKING CODE
> > ($month, $day, $time, $host, $process, $pid, $message) =
> > /^(\w+) (\d+) (\d\d+:\d\d:\d\d) (\w+) ([()\w\/]+)\[(\d+)\]: (.*)$/ ||
> > /^(\w+) (\d+) (\d\d+:\d\d:\d\d) (\w+) ([()\w\/]+)():\s+(.*)$/;
> > =cut
> > 
> > I.e. if the first regex fails, try the other one.
> 
> You can try to fiddle with $+ and/or $^N.
> Recent perl needed.
>

You can do something like :-

$data =~ m/^(\w+) (\d+) (\d\d+:\d\d:\d\d) (\w+) ([()\w\/]+)\[(\d+)\]: (.*)$/;

my @data = map { substr($data, $-[$_], $+[$_] - $-[$_]) } 1 .. $#-;

See the perlvar docs for an explanation of @+ and @-, the start and end offsets of the 
last match's capture groups.
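Alternatively, on the original two-pattern problem: the || in the non-working code forces the first match into scalar context, so no captures come back. A low-precedence "or" keeps both matches in list context; a failed match assigns an empty list, which is false, so the second pattern runs. The log line here is a made-up example:

```perl
use strict;
use warnings;

my $line = 'Jul 29 15:53:52 myhost sshd[1234]: session opened';

# Try the first pattern; if it captures nothing, fall through to the second.
my ($month, $day, $time, $host, $process, $pid, $message) =
        $line =~ /^(\w+) (\d+) (\d\d+:\d\d:\d\d) (\w+) ([()\w\/]+)\[(\d+)\]: (.*)$/
    or ($month, $day, $time, $host, $process, $pid, $message) =
        $line =~ /^(\w+) (\d+) (\d\d+:\d\d:\d\d) (\w+) ([()\w\/]+)():\s+(.*)$/;

print "$process $pid: $message\n";
```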

Hope it helps

Paddy

 



Re: Bizarre DBI error

2003-08-18 Thread Patrick Mulvany
On Sun, Aug 17, 2003 at 10:15:22PM +0100, Paul Makepeace wrote:
> I'm seeing an error where an ->execute fails but no errstr or state is
> set, and then upon immediate retry succeeds.

The only time I have seen a similar issue is when the $sth was generated pre-fork under 
Apache, so multiple processes shared one connection.

The sequence would be :-
   Process A executes SQL
   Process B executes SQL
   Process A fetches data
   Process B has no data to fetch, but does not error
Hope it helps

Paddy




Re: perl on Solaris vs Linux

2003-09-15 Thread Patrick Mulvany
On Mon, Sep 15, 2003 at 12:04:21PM +0100, Andy Ford wrote:
> I have a perl script that works perfectly on my Gentoo Linux distro but
> fails on my Solaris 2.8 box. I am running v5.8.0 on both platforms and I
> have absolutely no clue on how to get it working on solaris. 
> 
> Its actually a collection of scripts that use the following CPAN
> modules...
> 
> IPC::Shareable
> Net::Pcap
> NetPacket::Ethernet
> NetPacket::IP
> Net::RawIP
> 
> The error I have is the following...
> 
> p is not of type pcap_tPtr at ./icmp_sniffer line 67.
> 
> Line 67 & 68 of icmp_sniffer is ...
> die "unable to compile $pktfilter\n"
> if (Net::Pcap::compile($pcap_t ,\$compprog,$pktfilter,0,$netmask)) ;
> 
> This script works perfectly on my Linux box but not Solaris.
> 
> Anyone offering some useful/helpful pointers would be much appreciated
> 
> Thanks
> 
> Andy
> 
>

From the Net::RawIP README :-

NOTE: Ethernet related methods currently implemented only on Linux and *BSD!
Help with port eth.c to other platforms is very appreciated.

This would not be related would it?

Paddy
 



Re: How many lines of Perl code?

2003-09-16 Thread Patrick Mulvany
On Tue, Sep 16, 2003 at 11:42:53AM +0100, Michael Stevens wrote:
> On Tue, Sep 16, 2003 at 10:51:12AM +0100, Andy Wardley wrote:
> >   Well, let's say that there are ~200 london.pm members who live in the 
> >   london catchment area and can be bothered to go to a london.pm meeting.
> 
> I'd say nearer 50-100 myself.
> 
> >   Approx 1 in 5 Perl programmers care about Perl enough to go to a meeting.
> 
> I suspect this is a significant overestimate. My wild guess would
> be closer to 1 in 20.
> 
> >   That suggests there are around 1000 Perl programmers in the London area.
> > 
> >   The London catchment area covers approx one fifth of the UK population.
> 
> Yes, but IT jobs are not evenly distributed over the UK - I would guess
> there's a significant bias towards the south-east and London, even more
> so than the greater population density would imply.
> 
> Your final numbers look very high to me, but I don't have the data to
> argue against them in detail. I suspect you're overestimating the
> productivity of the average programmer.
> 
> > amusing way of estimating?  CPAN must be a rich source of clues.
> 
> I keep meaning to play around with sloccount and CPAN, generate some
> meaningless numbers on how long various CPAN modules took to write,
> and how long all of CPAN took to write. I believe someone on #perl
> has already done this, but I haven't seen detailed numbers.
> 
> sloccount estimates that perl took about 120 person-years to write,
> IIRC.
> 
> Michael
>

A possible way of analysing this would be to shred the whole of CPAN into lines and 
then count the distinct MD5 checksums.

Anyone want to have a try?
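As a starting point, here is a sketch of the counting step. The corpus below is a hard-coded sample; a real run would feed every line of every .pm file from an unpacked CPAN mirror through the same hash:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Count distinct lines of code by MD5 checksum.
my @lines = (
    "use strict;\n",
    "use strict;\n",      # duplicate, counted once
    "my \$x = 1;\n",
);

my %seen;
$seen{ md5_hex($_) }++ for @lines;

printf "%d lines, %d distinct\n", scalar @lines, scalar keys %seen;
```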

Paddy

 



Re: SWF spider

2003-09-18 Thread Patrick Mulvany
You might find Ming worth looking at:

http://sourceforge.net/projects/ming/

They seem to like PHP for some strange reason, but last time I checked it did have a 
Perl interface.

Paddy


On Thu, Sep 18, 2003 at 03:01:50AM +0100, Paul Makepeace wrote:
> Do the any .swf parsers provide a list of all the anchors and referred-
> to media in a movie?
> 
> Put another way, given a .swf, is there a way of knowing what further
> HTTP requests would and could be generated from loading and/or then
> interacting with it?
> 
> Paul
> 
> -- 
> Paul Makepeace ... http://paulm.com/
> 
> "If I could count every drop of water in the sea, then I'd be eating
>  through a permanent tube."
>-- http://paulm.com/toys/surrealism/
> 



Re: Unresponsive module authors

2003-09-25 Thread Patrick Mulvany
On Thu, Sep 25, 2003 at 10:56:56AM +0100, Nick Cleaton wrote:
> On Thu, Sep 25, 2003 at 10:45:13AM +0100, David Cantrell wrote:
> > What's the etiquette for dealing with unresponsive module authors?
> 
> I've found rt.cpan.org very handy for that.  One author failed to fix
> security problems in response to emails for over a year, but had a new
> version up within a couple of days being RTed about it.
>

But even that can take some time, depending on the author. Here's one I did earlier :-

http://rt.cpan.org/NoAuth/Bug.html?id=2264

Subject   Clone 0.13 cuases Segfault 
Created   Thu Mar 20 04:40:43 2003 
Updated   Sun Sep 7 02:33:41 2003  
 
It got fixed in the end, but I ended up replacing the module with an equivalent 
(Storable).

Good luck

Paddy





Re: Well, maybe there's hope for more buffy yet. Or flying pigs...

2003-09-26 Thread Patrick Mulvany

Well, it will certainly be a better use of the BBC's money than Eldorado was.

Paddy

On Fri, Sep 26, 2003 at 01:42:42AM -0400, David H. Adler wrote:
> Doctor Who returns to BBC Television:
> 
> http://news.bbc.co.uk/1/hi/entertainment/tv_and_radio/3140786.stm
> 
> Get a PAL->NTSC converter ready for me, ok?  :-)
> 
> dha
> 
> -- 
> David H. Adler - <[EMAIL PROTECTED]> - http://www.panix.com/~dha/
> My theory is that his ignorance clouded his poor judgement.
>   - Alice, in Dilbert's office
> 



Re: keyboards/RSI/switching costs (was Looking for a secondhand Datahand Pro II)

2009-10-23 Thread Patrick Mulvany
On Thu, Oct 22, 2009 at 12:23:51PM +0100, Smylers wrote:
> 
> Civil Service guidelines for new software procurement probably insist on
> decent accessibility support[*2], but continuing to use legacy systems
> which predate[*3] those guidelines isn't inself illegal.
> 

Actually, this isn't the worst problem you will have. All government departments have 
very strict controls on software deployment, so you would have to get the software 
tested and approved before you could get it installed. Even upgrading from one version 
of a package to another is a pain, never mind trying to get a new piece of software 
installed. I would not be surprised if the total time from requesting a piece of 
software, through approval and installation, was six months to a year, and that is 
with everyone co-operating. If anyone feels the need to stick his/her oar in, it could 
be sunk without trace in 30 seconds.

Just my thoughts

Paddy