Testing Database Schema
We have an in-house procedure that says the SQL definition for a table should be included in the __DATA__ section of the class that represents it (we're using Class::DBI), and is to be treated as the definitive version of the schema.

When the code gets deployed to a new server, we'd like to be able to run a test as part of the normal 'make test' that tells us whether or not the schema on that server is the same as what's in the code. So if someone makes a change that adds a new column to a table, for example, but forgets to make this change on one of the servers, the test will fail.

We're having difficulty thinking of a sane way to do this, however. For now it just needs to cope with MySQL. But MySQL has an interesting 'feature' whereby the CREATE TABLE schema you feed it isn't the same as the SHOW CREATE TABLE schema you get back: it fills in lots of extra defaults, quotes column names, etc.

The two best ideas we've had so far are to either run the SQL in the code against a temporary database and then compare both SHOW CREATE TABLE outputs, or to use something like SQL::Translator to convert both lots of SQL to a common format. Both seem much too cumbersome, however.

Anyone have any brighter ideas?

Thanks, Tony
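Not a full answer, but the purely textual half of the problem (backtick quoting, keyword case, whitespace) can be canonicalized cheaply before reaching for a temp database. A minimal sketch in Python of that idea (function name and regexes are my own; it deliberately ignores the defaults MySQL fills in, which still need the round-trip or SQL::Translator approach):

```python
import re

def canonicalize(sql):
    """Naively canonicalize a CREATE TABLE statement so that a
    hand-written schema and SHOW CREATE TABLE output can be compared.
    Only cosmetic differences are handled here: identifier quoting,
    letter case, and whitespace."""
    sql = sql.replace('`', '')                   # strip backtick quoting
    sql = re.sub(r'\s+', ' ', sql)               # collapse whitespace/newlines
    sql = re.sub(r'\s*([(),])\s*', r'\1', sql)   # normalize punctuation spacing
    return sql.strip().lower()

# Two cosmetically different spellings of the same table:
a = "CREATE TABLE foo (\n  `id` INT NOT NULL,\n  `name` VARCHAR(32)\n)"
b = "create table `foo` (id int not null, name varchar(32))"
same = canonicalize(a) == canonicalize(b)
```

This catches a schema drift like a renamed or added column, but a column whose type MySQL silently rewrites (or a default it fills in) will still show up as a false mismatch, which is exactly where the temp-database comparison earns its keep.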
Why do users need FileHandles?
I was thinking about the discussions about the open function, and about the capabilities of strings. Given that we'll have things like $str.bytes, etc., it doesn't seem a stretch to suggest that we could also have $str.lines. Once we have that, and also a level of pervasive laziness (lazy evaluation), it seems to me that we don't really need user-visible file handles for most of the common operations on files. Imagine:

  my $text is TextFile("/tmp/foo.txt");
  for $text.lines { ... }

The for loop is equivalent to the old while(<FH>) construct, but is more general in the sense that it applies to any list-like thing, not just file handles. It also makes it easy to see how a user would apply a grammar to the contents of a file.

Not all files are TextFiles, of course. We might have XML files, RawBinary files, etc. If we can think in terms of tying, then we can use all these things without ever touching a file handle. Writing a file is similarly encapsulated:

  my $text is TextFile("/tmp/bar");
  $text = "hello";         # writes, truncates
  $text ~= ", world\n";    # appends
  $text.print "again\n";   # for old-times' sake

Another use of filehandles is for interactive communication over (e.g.) a socket. Interactive communication can be thought of as message passing -- which perhaps should be unified under a general mechanism for sending messages between threads/processes/etc. A producer sends messages to a consumer, and thus has an object (proxy/handle) that represents that consumer. The fact that the implementation is a file handle is not something that a user needs to worry about (usually).

I guess what I'm saying is that if we can make tying the standard idiom, then we can relax your huffmanization worries for things like the open function.

Dave.
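For what it's worth, Python's file objects already behave roughly this way on the reading side: the handle is itself a lazy iterable of lines, so most loops never touch an explicit read/seek API. A small illustration (the temp-file plumbing is just scaffolding standing in for /tmp/foo.txt):

```python
import os
import tempfile

# Scaffolding: create a small text file to iterate over.
tmp = tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False)
tmp.write("alpha\nbeta\n")
tmp.close()

# The analogue of "for $text.lines { ... }": the open file is a lazy
# iterator of lines; no explicit read calls are needed.
lines = [line.rstrip("\n") for line in open(tmp.name)]

os.remove(tmp.name)
```

The difference from the proposal above is that Python still hands you a visible handle object; the TextFile idea pushes the handle entirely behind a tied variable.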
Re: Why do users need FileHandles?
DW> my $text is TextFile("/tmp/bar");
DW> $text = "hello";         # writes, truncates
DW> $text ~= ", world\n";    # appends
DW> $text.print "again\n";   # for old-times' sake

Anyhow, we still need $text.flush() or $text.close() methods.

--
[EMAIL PROTECTED]
Re: Why do users need FileHandles?
Andrew Shitov <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...

> DW> my $text is TextFile("/tmp/bar");
> DW> $text = "hello";         # writes, truncates
> DW> $text ~= ", world\n";    # appends
> DW> $text.print "again\n";   # for old-times' sake
>
> Anyhow, we still need $text.flush() or $text.close() methods.

I'd argue that these are only rarely needed. Even with Perl5 filehandles, I rarely find myself needing to close a file. Garbage collection does the job adequately.

The case of flush is more interesting. It is really used for two purposes: forcing the RAM contents to be stored on disk, so they can be recovered even if the process dies violently, is one use; ensuring a message is sent so it can be received by a consumer is the other. Even if both these roles have the same underlying implementation, we might want to abstract them differently. C<flush> is as good a name as any for the sync-to-disk usage; but the second might be better named C<send> -- and probably wouldn't be on a text object, anyway. I'm not sure that the details of inter-thread messaging have been decided yet, but I'd think that a common messaging paradigm would apply.

Dave.
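The sync-to-disk role of flush is easy to demonstrate in Python, where files are block-buffered by default: a short write is invisible to a second reader until the buffer is pushed out. (That five bytes stay buffered depends on the default buffer size, so treat this as an illustration of typical behaviour, not a guarantee of the spec.)

```python
import os
import tempfile

tmp = tempfile.NamedTemporaryFile('w', delete=False)
tmp.write("hello")                 # sits in the process's write buffer

before = open(tmp.name).read()     # a second reader sees nothing yet
tmp.flush()                        # force the buffer out to the file
after = open(tmp.name).read()      # now the data is visible

tmp.close()
os.remove(tmp.name)
```

The messaging-flavoured use (making sure a peer on a socket actually receives what you wrote) looks identical at the API level, which is exactly why splitting the concept into C<flush> and C<send> is an abstraction choice rather than an implementation one.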
Re: Why do users need FileHandles?
Dave Whipp wrote:

> I was thinking about the discussions about the open function, and of the
> capabilities of strings. Given that we'll have things like $str.bytes,
> etc., it doesn't seem a stretch to suggest that we could also have
> $str.lines. Once we have that, and also a level of pervasive laziness
> (lazy evaluation), it seems to me that we don't really need user-visible
> file handles for most of the common operations on files. Imagine:
>
>   my $text is TextFile("/tmp/foo.txt");
>   for $text.lines { ... }

[snip]

> I guess what I'm saying is that if we can make tying the standard idiom,
> then we can relax your huffmanization worries for things like the open
> function.

Uhm, my impression was that most of the huffmanization discussion was centered around declaring a file handle to be read-only, write-only, read-write, exclusive, etc. Masking the file handle with what basically amounts to a file handle subclass like you describe will still need to allow the user to specify all those attributes. So you would still need to allow:

  my $text is TextFile("/tmp/foo.txt", :rw);
  my $text is TextFile("/tmp/foo.txt", :excl);

Not that having wrapper classes for file handles is a bad idea, it just doesn't relate to what I saw being discussed.

Oh, and TextFile should be spelled IO::File::Text, IMHO.

-- Rod Adams
Re: Why do users need FileHandles?
Rod Adams <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...

> Uhm, my impression was that most of the huffmanization discussion was
> centered around declaring a file handle to be read-only, write-only,
> read-write, exclusive, etc. Masking the file handle with what basically
> amounts to a file handle subclass like you describe will still need to
> allow the user to specify all those attributes. So you would still need
> to allow:
>
>   my $text is TextFile("/tmp/foo.txt", :rw);
>   my $text is TextFile("/tmp/foo.txt", :excl);

  my $text is TextFile("/tmp/foo") is rw;
  my $text is TextFile("/tmp/foo") is const;

Truncate vs. append would be inferred from usage (assignment implies truncation). One might be able to infer read vs. write in a similar way -- open the file based on the first access; re-open it (behind the scenes) if we write to it after reading it. :excl would probably need to be an option, but is not sufficiently common to be aggressively huffmanized:

  my $text is TextFile("foo.txt", :no_overwrite);
  my $text is TextFile("foo.txt") does no_overwrite;

> Not that having wrapper classes for file handles is a bad idea, it just
> doesn't relate to what I saw being discussed.
>
> Oh, and TextFile should be spelled IO::File::Text, IMHO.

Possibly, but it would need a huffmanized alias for common use. Possibly just C<file>:

  my Str $text is file("foo.txt") does no_follow_symlink does no_create;

Do we have an antonym for C<does>?

Dave.
Re: Why do users need FileHandles?
Dave Whipp wrote:

> Rod Adams <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
>
>> Uhm, my impression was that most of the huffmanization discussion was
>> centered around declaring a file handle to be read-only, write-only,
>> read-write, exclusive, etc. Masking the file handle with what basically
>> amounts to a file handle subclass like you describe will still need to
>> allow the user to specify all those attributes. So you would still need
>> to allow:
>>
>>   my $text is TextFile("/tmp/foo.txt", :rw);
>>   my $text is TextFile("/tmp/foo.txt", :excl);
>
>   my $text is TextFile("/tmp/foo") is rw;
>   my $text is TextFile("/tmp/foo") is const;
>
> Truncate vs. append would be inferred from usage (assignment implies
> truncation). One might be able to infer read vs. write in a similar way
> -- open the file based on the first access; re-open it (behind the
> scenes) if we write to it after reading it.

Case 1: So I want to do a read/write scan. I create my TextFile and start reading in data, so the file is opened for reading. Then I come to the part where I want to update something, so I do a write command. Suddenly the file has to be closed and then re-opened for read and write, and all my buffers, file pointers and the like are reset (though curable with very careful planning), leaving me in a bad spot. Better if I could just declare the file open for read and write at open time.

Case 2: I meant to use some critical data file in read-only mode, accidentally use a write command somewhere I didn't mean to, and have silently just clobbered /etc/passwd. Better if I could have opened the file read-only and triggered an error on the write command.

What design philosophy would you envision TextFile taking to handle both these cases in a coherent fashion?
> :excl would probably need to be an option, but is not sufficiently
> common to be aggressively huffmanized:
>
>   my $text is TextFile("foo.txt", :no_overwrite);
>   my $text is TextFile("foo.txt") does no_overwrite;
>
>> Not that having wrapper classes for file handles is a bad idea, it just
>> doesn't relate to what I saw being discussed.
>>
>> Oh, and TextFile should be spelled IO::File::Text, IMHO.
>
> Possibly, but it would need a huffmanized alias for common use. Possibly
> just C<file>:

s/file/open/ and we're back where we started.

>   my Str $text is file("foo.txt") does no_follow_symlink does no_create;

  my $text = open("foo.txt", :no_follow_symlink, :no_create);

I don't think anyone (read: Larry) has declared exactly what the capabilities of the default file handle object are yet. It seems to me that you could very well get what you want.

-- Rod Adams
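One way to reconcile the two positions above is to fix the mode at declaration time (avoiding the silent clobbering of Case 2) while still deferring the actual open until first use (keeping the laziness Dave wants, and avoiding Case 1's mid-stream re-open). A Python sketch of that compromise -- the class name and methods are hypothetical, not any proposed Perl 6 API:

```python
import os
import tempfile

class LazyTextFile:
    """Hypothetical sketch: the mode is fixed when the object is
    declared, so a stray write on a read-only file raises instead of
    silently clobbering anything; the underlying open() is deferred
    until the first access."""

    def __init__(self, path, mode='r'):
        self.path = path
        self.mode = mode
        self._fh = None               # not opened yet

    def _handle(self):
        if self._fh is None:          # open lazily, on first use
            self._fh = open(self.path, self.mode)
        return self._fh

    def lines(self):
        return (line.rstrip('\n') for line in self._handle())

    def print(self, text):
        if 'r' in self.mode and '+' not in self.mode:
            raise PermissionError("declared read-only at creation time")
        self._handle().write(text)

# Demo: read works, writing to a read-only declaration raises.
tmp = tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False)
tmp.write("one\ntwo\n")
tmp.close()

ro = LazyTextFile(tmp.name)           # mode defaults to read-only
got = list(ro.lines())
try:
    ro.print("oops")
    clobbered = True
except PermissionError:
    clobbered = False

os.remove(tmp.name)
```

Inferring truncate vs. append from usage could still layer on top of this; the key point is that read vs. write is declared once, up front, rather than guessed from the first operation.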
Re: push with lazy lists
On Friday 16 July 2004 18:23, Jonadab the Unsightly One wrote:

Please take my words as my understanding, i.e. with no connection to mathematics or number theory or whatever. I'll just say what I believe is practical.

[...]

>> I'd believe that infinity can be integer, i.e. has no numbers after the
>> comma; and infinity is in the natural numbers (?), which are a subset
>> of integers.
>
> If that were the case, 0/Inf would == 0.

Isn't that so?

  0/+Inf == 0
  0/-Inf == 0   (or -0, if you wish :-)

> Also, if that were the case, 0..Inf would be a finite list. (It is
> trivial to prove that 0..N is a finite list with finite cardinality for
> all natural numbers N. So if you set N equal to Inf, 0..Inf would have
> finite cardinality, if Inf is a natural number.) This is obviously some
> new definition of Inf of which I was not previously aware.

Well, after reading my sentence one more time, I see what may have caused some trouble. Inf is not in N; but *in my understanding* it fits naturally as an extension to N, that is, Inf is (or can be) an integer that comes after all of N... This won't be written in math books, I know.

> Also, if that were the case, 0..Inf would be a finite list. (It is
> trivial to prove that 0..N is a finite list with finite cardinality for
> all natural numbers N. So if you set N equal to Inf, 0..Inf would have
> finite cardinality, if Inf is a natural number.)

If I extend the natural numbers N with Inf to a new set NI (N with Inf), then 0 .. n (for n in NI) need not be finite...

Sorry for my (very possibly wrong) opinion...

Regards, Phil
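For comparison, IEEE-754 floating point (which is where most languages get their Inf) agrees with parts of both positions: 0/Inf really is 0 (with a signed zero for -Inf), but Inf is explicitly not an integral value, and the closest thing to 0..Inf is a lazy stream you can only ever take finite prefixes of. In Python terms:

```python
import itertools
import math

inf = math.inf

# 0 divided by infinity is 0 either way:
ratio_pos = 0 / inf        # 0.0
ratio_neg = 0 / -inf       # -0.0, which still compares equal to 0

# Inf is a float but not an integral value, so it is not "in N":
inf_is_integral = inf.is_integer()

# The analogue of a lazy 0..Inf: an unbounded counter, of which only
# finite prefixes can ever be materialized.
prefix = list(itertools.islice(itertools.count(0), 5))
```

So under the usual floating-point model, extending N with Inf is exactly the move the standard declines to make: Inf compares greater than every integer without being one.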