Future of support for older Perl versions in DBI
Question: Are there any plans to sunset support for older Perl versions such as 5.8.x in DBI in the foreseeable future, or is the plan to continue to support 5.8.1+ indefinitely until something unforeseen prevents it? Is there any information about how many people are actually still using 5.8.x, for example? I ask because I plan to introduce new database-related CPAN modules and am starting to think about what minimum Perl version I should support. By default the answer would be 5.8.0+, because that doesn't exclude anything that DBI itself supports. But if DBI later decided to require, say, 5.10.0+ or something else newer than 5.8.x, then I would likely follow suit. I don't actually need any Perl features newer than what 5.8.0 provides, but I would leverage some for cleaner coding if they were available. Thank you for any insight. -- Darren Duncan
Re: RaiseWarn attribute for DBI
Generally speaking, DBI is one of those things where backwards compatibility should be the highest concern after security. There is a time and place to break compatibility, and this Print/Raise thing seems far too minor to me for that. I support the version of this that is backwards-compatible, not the breaking version. -- Darren Duncan

On 2019-01-17 2:43 AM, p...@cpan.org wrote:
> On Thursday 17 January 2019 11:06:22 Alexander Hartmaier wrote:
>> I'm aware of that; semantic versioning and major versions exist to handle API breakage.
> Such a thing is unsupported by CPAN clients, so we cannot use it. Anyway, this is a question for Tim as DBI maintainer, but I guess he does not want to change the API of DBI.
>> My question is how to detect if an exception is because of a warn or a die when RaiseWarn is true.
> I guess you can use the $dbh->err() method. See: https://metacpan.org/pod/DBI#err
> A driver may return 0 from err() to indicate a warning condition after a method call. Similarly, a driver may return an empty string to indicate a 'success with information' condition. In both these cases the value is false but not undef. Note that 'success with information' is not a warning, and therefore DBI's PrintWarn and RaiseWarn ignore it.
>> Best regards, Alex
>>
>> On 2019-01-17 10:53, p...@cpan.org wrote:
>>> On Thursday 17 January 2019 10:23:25 Alexander Hartmaier wrote:
>>>> I don't see the benefit; Print* should die
>>> This would break the existing API of DBI. Print just prints and Raise dies. This cannot be changed, as there are many applications which depend on this API.
>>>> and I'd personally release a major version and change the defaults as a breaking change: PrintError false, RaiseError true. Can you name a use case, and how to differentiate between an error and a warning on the error-handling side?
>>> It is up to the DBI driver or database server what results in a warning and what in an error.
>>>> Best regards, Alex
>>>>
>>>> On 2019-01-17 10:04, p...@cpan.org wrote:
>>>>> Hello!
What do you think about adding a new attribute $dbh->{RaiseWarn} which causes warnings reported by DBI drivers to behave like errors? For errors, DBI has the $dbh->{PrintError} and $dbh->{RaiseError} attributes. The first is true by default and the second false by default. When PrintError is true, all errors from the DBI driver are passed to perl's "warn" function, and when RaiseError is true, errors are passed to perl's "die" function. (Plus there is the ability to register your own error handler function.) Currently DBI has only the $dbh->{PrintWarn} attribute to control warnings. When it is set to true (the default), all warnings from the DBI driver are passed to perl's "warn" function. So I would propose to add a $dbh->{RaiseWarn} attribute (off by default) to behave like $dbh->{RaiseError}, but for warnings. I have implemented this attribute and the patch is here: https://github.com/perl5-dbi/dbi/pull/71/files
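A minimal sketch of how the proposed attribute would sit alongside the existing ones, and of telling the conditions apart via err() as discussed above. Note that RaiseWarn here is the attribute from the pull request, not part of released DBI, and the DSN is a placeholder:

```perl
use strict;
use warnings;
use DBI;

# Sketch only: RaiseWarn is proposed in the pull request above and is
# not part of released DBI; the DSN and credentials are placeholders.
my $dbh = DBI->connect('dbi:SomeDriver:dbname=test', 'user', 'password', {
    PrintError => 1,   # default: errors are passed to warn()
    RaiseError => 0,   # default: errors do not die()
    PrintWarn  => 1,   # default: driver warnings are passed to warn()
    RaiseWarn  => 1,   # proposed, off by default: driver warnings die()
});

# After a method call, err() distinguishes the conditions:
my $err = $dbh->err;
if (defined $err) {
    if    ($err eq '') { print "success with information\n" }  # ignored by PrintWarn/RaiseWarn
    elsif ($err == 0)  { print "warning condition\n" }
    else               { print "error $err\n" }
}
```

The `eq ''` check must come before `== 0`, since an empty string is also numerically zero in Perl.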
Re: DBD::mysql next steps
I agree in principle with Patrick's plan. My strong recommendation for continuing development under a different module name was based on the assumption (but not the knowledge) that the Unicode/BLOB problems were rooted in the DBD::mysql codebase in such a way that they blocked the ability to use the newer MySQL/MariaDB/etc client libraries properly, and that maintaining both behaviors as user-configurable would be too difficult to do without making bugs worse. However, I also believe that Patrick's proposal (a single DBD::mysql under that name where the incompatible new behavior is toggled by a per-connection switch) is actually the best and most elegant solution for satisfying all parties, on the assumption that there are savvy developers who fully understand the problem and are able and willing to support such a more-complicated codebase. -- Darren Duncan

On 2017-11-10 7:13 AM, Patrick M. Galbraith wrote:
> Greetings all! Michiel and I have been talking, weighing options of what course to take in moving forward -- with the goal of offering both stability and the choice to have the latest functionality and bug fixes, as well as giving contributors the opportunity to be part of overall improvements to the driver. What we are going to do is: add to the connection the ability to turn on proper UTF-8 handling with 'mysql_enable_proper_unicode'. This gives the user the option to knowingly toggle whether they want the new functionality, understanding that data structures returned or accepted by the driver might differ from those without this setting. The other options had their merits, but we think this will solve the issue while keeping the driver unified and preventing divergence. Thank you for your input over the last couple of months -- we look forward to moving ahead! Patrick and Michiel
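For concreteness, the proposed per-connection switch might look like this from the application side. The attribute name 'mysql_enable_proper_unicode' is the one floated in this thread, not a shipped DBD::mysql attribute, so treat this as a sketch:

```perl
use strict;
use warnings;
use DBI;

# Sketch: opting in to the new Unicode behaviour per connection.
# The attribute name comes from the proposal above and may differ
# in any eventual release; DSN and credentials are placeholders.
my $dbh = DBI->connect(
    'DBI:mysql:database=test;host=localhost',
    'user', 'password',
    {
        RaiseError                  => 1,
        mysql_enable_proper_unicode => 1,   # proposed opt-in switch
    },
);
```

Connections that omit the attribute would keep the legacy behaviour, which is what makes the single-module approach backwards compatible.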
Re: DBD::mysql path forward
On 2017-11-09 10:50 PM, Alexander Hartmaier wrote:
> What about DBD::MariaDB, as that's the name of the open-source version these days?

No, MariaDB is simply a fork of MySQL. Both MariaDB and MySQL are open source; saying only MariaDB is open source is wrong. I also disagree with using the MariaDB name, as that sends the wrong message. Keeping the MySQL name says it's for all the (API-compatible) forks, while using the MariaDB name implies that the MariaDB fork is more special than the others. -- Darren Duncan
Re: DBD::mysql path forward
Michael, why can't you accept moving forward under a new module name? Why does it have to be under the old name? When people purposefully want to upgrade they purposefully choose the new module name in order to do so. What is the actual problem in that? -- Darren Duncan On 2017-11-09 10:59 PM, Michiel Beijen wrote: On Fri, Nov 10, 2017 at 7:16 AM, Darren Duncan wrote: I agree with everything Dan said here. Its what I proposed, in fewer words. Do all new development under a new name, including all of Pali's work, and leave the current name for a product with no further effort applied to develop it. -- Darren Duncan This is NOT an option to me - it simply can't because the world moves forward and because of bitrot. The 'old' version - the version that works for most people, the current version of DBD::mysql, the one which would then receive no more maintenance as it is no longer compiles with the latest version of libmysqlclient and it does not compile with libmariadb. This will only get worse in the future. I'll stick with my earlier proposal - I'll propose to go back to the *current* latest DBD::mysql release which does not break backcompat for our users; add the patches that we discarded when we rolled back one by one, such as testing on many different lib/db options, memory leaks and so on, and make a new release so we can be on the move again. If possible I'd want to add back the *breaking* unicode changes that were introduced but they should be either in a separate namespace OR under a specific configuration option. Currently this whole thing has cost us loosing MONTHS of progress and then MONTHS of nothing and that is simply not good. Patrick: let me know if you're OK with this and then let's get start again!
Re: DBD::mysql path forward
On 2017-11-09 8:32 AM, Dan Book wrote:
> It seems to me like the remaining option that can make everyone "happy" is the previously-suggested option of maintaining a legacy branch and doing new development (reinstating 4.042) in another branch which would be released as a new distribution, like DBD::mysql2, by the same maintainers. (I would not favor DBD::MariaDB as a name, since this new distribution would also be the favored way to connect to MySQL.) After doing so, DBD::mysql can be deprecated, with migration instructions added to the docs, and only receive critical security fixes if anything. If done this way the community should not be fractured; the current module will simply be abandoned for future development, and all support and maintenance will move to the new one. Patrick and Michiel, what do you think?

I agree with everything Dan said here. It's what I proposed, in fewer words. Do all new development under a new name, including all of Pali's work, and leave the current name for a product with no further effort applied to develop it. -- Darren Duncan
Re: DBD::mysql path forward
Pali, there's a very simple solution to what you said. The old DBD::mysql does not get further maintenance at all. It is simply frozen at the 4.041/3 state forever. This serves the primary reason for it to exist, which is that people whose package managers automatically upgrade to the highest version in a namespace don't get new corruption because a higher version changed behavior they relied on. All development can happen under the new namespace, full stop, assuming what you say is true that no one would want to backport anything to the otherwise frozen version. -- Darren Duncan

On 2017-11-09 12:54 AM, p...@cpan.org wrote:
> On Tuesday 07 November 2017 13:19:23 Darren Duncan wrote:
>> A maintenance branch would exist starting from the 4.041/3 release on which further stable releases named DBD::mysql 4.x would be made, and the primary goal of this branch is to not break/corrupt anything that relied on 4.041 behavior. So people with legacy projects that just use DBD::mysql and made particular assumptions about its handling of Unicode or BLOBs etc. won't have corruption introduced because DBD::mysql changed its handling of those things.
> As those people or projects misuse some internals of perl, we cannot guarantee anything; such code may be broken by updating any module from CPAN or an external source, entirely unrelated to DBD::mysql. Maintaining such a thing is something I believe nobody wants to do; I certainly don't. As already stated several times, if there is some code which depends on some internals, it should stay conserved at a tested version, as it can break on updating anything (even unrelated things). There is no other way to ensure that the code keeps working and does not corrupt something else.
>> People without knowledge of CPAN or the development process will get something that just continues to work for them and doesn't corrupt.
> As explained above (and also in the past), this is not possible to guarantee.
>> For all intents and purposes this branch would be frozen but could have select security or bug fixes that comply with the mandate and don't involve changing Unicode etc.
> Are you going to maintain that branch with all the problems it brings? Or is there anybody else who wants to maintain such crap? If not, then it does not make sense to talk about it, because nobody has expressed willingness to do so for six months, which is a really large amount of time.
Re: DBD::mysql path forward
On 2017-11-09 12:54 AM, p...@cpan.org wrote:
> On Tuesday 07 November 2017 13:19:23 Darren Duncan wrote:
>> The whole discussion on the mailing lists that I recall and participated in seemed to reach consensus on branching DBD::sqlite in order to best satisfy the
> I hope you are talking about DBD::mysql there...

Yes, that was a typo; I was talking exclusively about DBD::mysql here. That being said, as an aside, DBD::SQLite (and yes, I meant that) is still a valuable case study in that they also branched, back 14 years ago when SQLite 3 came out: in that case they created DBD::SQLite2, which bundled SQLite version 2, repurposed the plain DBD::SQLite name, and bumped from version 0.33 to 1.0 to bundle the incompatible SQLite 3. In their case, it was a break for people that simply upgraded in the usual CPAN manner, and the fork was for people that still needed to use SQLite 2 in Perl. DBD::mysql doesn't want to follow their example. -- Darren Duncan
Re: DBD::mysql path forward
Patrick and Pali, each of you please respond to the lists to confirm that what I say below is what you also understand to be the primary plan, and if not, say why not; in prior discussion I recall you agreed with it. Assuming they agree, the concerns of Night Light and those he is speaking for should be handily satisfied. The whole discussion on the mailing lists that I recall and participated in seemed to reach consensus on branching DBD::sqlite in order to best satisfy the needs of all stakeholders. There would be one version-control repository with 2 active release branches. A maintenance branch would exist starting from the 4.041/3 release, on which further stable releases named DBD::mysql 4.x would be made; the primary goal of this branch is to not break/corrupt anything that relied on 4.041 behavior. So people with legacy projects that just use DBD::mysql and made particular assumptions about its handling of Unicode or BLOBs etc. won't have corruption introduced because DBD::mysql changed its handling of those things. People without knowledge of CPAN or the development process will get something that just continues to work for them and doesn't corrupt. For all intents and purposes this branch would be frozen, but could have select security or bug fixes that comply with the mandate and don't involve changing Unicode etc. The master branch would then have all the short-term breaking changes, including the 4.042 Unicode fixes, and would have all the significant feature changes, including compatibility with newer MySQL major versions. All of its releases would be under a new namespace such as DBD::mysql2 and start at version 5.0, to signify that large changes happened which might possibly break code or cause corruption if user code doesn't specifically account for the differences.
Being a different name, this is strictly opt-in, so the only ones using DBD::mysql2 are those that explicitly opted in to it, and by doing so they also opted in to fixing anything in their code that needed fixing to not corrupt data while using it. The concept of having a single driver with toggled behavior, e.g. a mysql_enable_proper_unicode flag, was already rejected as unwieldy to implement, especially for such deep-rooted changes as properly handling Unicode. Note that I expect most other higher-level projects that use MySQL would NOT branch like this, instead having internal logic or their own toggle to work with DBD::mysql vs DBD::mysql2; a few may branch, but the key thing is that branching DBD::mysql does NOT necessitate doing so in the rest of the ecosystem. -- Darren Duncan

On 2017-11-07 12:07 PM, Night Light wrote:
> Proposed, but no decisions were posted afterwards. The last proposal is to re-commit the rejected 4.042 changes into the 4.043 master branch and only work on fixes that came after June. The GitHub issue regarding improper encoding of BLOBs was opened on April 6th (hence me sending the message, to prevent a recurring cycle). https://github.com/perl5-dbi/DBD-mysql/issues/117
>
> On Tue, Nov 7, 2017 at 8:57 PM, Michiel Beijen wrote:
>> To me, the only real option is to make a new option in DBD::mysql, mysql_enable_proper_unicode or the like, which you would knowingly set in your application code and which would expose the new behaviour. I understand this is difficult, but I really think it's the only way. If in the short term this is not feasible, it *could* be possible, in my eyes, to release a DBD::mysql2 or similar that does *correct* behaviour. Also in that case, this is something the application developer should set explicitly in his connection string. This DBD::mysql2 or similar could live in another git repository, but preferably in the same repo, and even in the same CPAN distribution as DBD::mysql; eventually the goal should be that they re-unite, and using DBD::mysql2 would really be the same as using the 'mysql_enable_proper_unicode' option in DBD::mysql. -- Michiel
>> On Tue, Nov 7, 2017 at 7:41 PM, Darren Duncan wrote:
>>> My understanding from the last discussion on this matter is that it was agreed DBD::mysql would be forked such that the existing DBD::mysql name would be frozen at 4.041/3 indefinitely and that the 4.042 changes plus any further feature development would take place under a new name such as DBD::mysql2 version 5.0.
Re: DBD::mysql path forward
My understanding from the last discussion on this matter is that it was agreed DBD::mysql would be forked such that the existing DBD::mysql name would be frozen at 4.041/3 indefinitely and that the 4.042 changes plus any further feature development would take place under a new name such as DBD::mysql2 version 5.0. So those people using "DBD::mysql" would not experience any changes in behavior from what they were used to, and BLOBs etc. should not break; whereas people that want the Unicode fixes and other features would use DBD::mysql2 and fix their code to be compatible with its breaking changes. I thought this was settled, and that it takes care of your concerns, so it was just a matter of "make it so". (Anyone replying to this message/thread, please do NOT include my personal email address as an explicit recipient; only send it to the list addresses, I will get those.) -- Darren Duncan

On 2017-11-07 3:41 AM, Night Light wrote:
> For the reason of "silence": I've spoken to other users, and heard that they have passively withdrawn from this discussion as they are not motivated to be part of a release where concerns about backwards compatibility are ignored. One of the users wrote earlier (replace the word "sector" with "release"): "When you're dealing with software whose purpose in life is to not corrupt data, and to have data there tomorrow, you go out of your way not to break that promise. There's no point in being involved in this sector if you don't care to deliver on that promise." Re-releasing the 4.042 changes will break the contract of a long-standing interface and corrupt all BLOB data when using "do()". These changes therefore do more harm than good. Putting these utf8 changes in the freezer until a suggestion is made that adds the functionality instead of replacing it is not a sin. The Pg driver, for instance, also has a similar issue open without plans to fix it anytime soon. https://rt.cpan.org/Public/Bug/Display.html?id=122991 What is your objection against using the current 4.043 branch and working on outstanding fixes, doing a pre-release, allowing a period of time for people to test/try it out, and then releasing?
Re: DBD::mysql path forward
What Night Light's post says to me is that there is a high risk of causing data corruption if any changes are made under the DBD::mysql name that have not been exhaustively tested to guarantee backwards-compatible behavior. This makes a stronger case to me that the DBD::mysql Git master (that which includes the 4.042 changes and any other default-breaking changes) should get a renamed Perl driver package, I suggest DBD::mysql2 version 5.0, and that any changes not guaranteed backwards compatible for whatever reason go there. If the Git legacy maintenance branch 4.041/3 can have careful security patches applied that don't require any changes to user code to prevent breakage, it gets them; otherwise only DBD::mysql2 gets any changes. By doing what I said, we can guarantee that users with no control over how DBD::mysql gets upgraded for them will not have corruption introduced simply by upgrading. -- Darren Duncan

On 2017-09-19 5:46 AM, Night Light wrote:
> Dear Perl gurus, This is my first post. I'm using Perl with great joy, and I'd like to express my gratitude for all you are doing to keep Perl stable and fun to use. I'd like to object to re-releasing this version, and to discuss how to make 4.043 backwards compatible instead. This change will with 100% certainty corrupt all BLOB data written to the database whenever the developer did not read the release notes before applying the latest version of DBD::mysql (and change their code accordingly). Knowing that sysadmins have the habit of not always reading the release notes of each updated package, the likelihood that this will happen is therefore high. I myself wasn't even shown the release notes, as it was a dependency of an updated package that I applied. The exposure of this change is big, as DBD::mysql affects multiple applications and many user bases.
> I believe deliberately introducing industry-wide database corruption is something that will significantly harm people's confidence in using Perl. I believe that not providing backwards compatibility is not in line with the Perl policy that has been carefully put together by the community to maintain the quality of Perl as it is today. http://perldoc.perl.org/perlpolicy.html#BACKWARD-COMPATIBILITY-AND-DEPRECATION I therefore believe the only solution is an upgrade that is backwards compatible by default, where it is the user who decides when to start UTF-8-encoding the input values of a SQL request. If that is too time-consuming or too difficult, it should be considered to park the UTF-8-encoding "fix" and release a version with the security fix first. I have the following objections against this release:
> 1. the upgrade will corrupt more records than it fixes (it does more harm than good)
> 2. the reason given for not providing backward compatibility ("because it was hard to implement") is not plausible given the level of unwanted side effects, especially knowing that there is already a mechanism in place to signal whether UTF-8 encoding is wanted or not (mysql_enable_utf8/mysql_enable_utf8mb4)
> 3. it costs more resources to coordinate/discuss a "way forward" or options than to implement a solution that addresses backwards compatibility
> 4. it is unreasonable to ask for changes to existing source knowing that depending modules may not be actively maintained, or may be proprietary; it can be argued that such modules should always be maintained, but that does not change the fact that a well-running Perl program becomes unusable
> 5. it does not inform the user that after upgrading, existing code will start writing corrupt BLOB records
> 6. it does not inform the user about the fact that a code review of all existing code is necessary, nor how the code needs to be changed and tested
> 7. it does not give the user the option to decide how the BLOBs should be stored/encoded (opt in)
> 8. it does not provide backwards compatibility; by doing so it does not respect the Perl policy that has been carefully put together by the community to maintain the quality of Perl as it is today: http://perldoc.perl.org/perlpolicy.html#BACKWARD-COMPATIBILITY-AND-DEPRECATION
> 9. it blocks users from using DBD::mysql upgrades as long as they have not rewritten their existing code
> 10. not all users of DBD::mysql can be warned beforehand about the side effects, as it is not known which private parties have code that uses DBD::mysql
> 12. I believe development will go faster when support for backwards compatibility is addressed
> 13. having to write 1 extra line for each SQL query value is a monk's job that will make the module less attractive to use
>
> About forking to DBD::mariadb?: The primary reason to create such a module is when the communication protocol of MariaDB has become incompatible with MySQL. To use this namespace to fix a bug in DBD::mysql does not meet that criteria, and causes confusion for developers and unnecessary pollution of the namespace.
Re: DBD::mysql path forward
On 2017-09-14 3:01 AM, H.Merijn Brand wrote:
> On Thu, 14 Sep 2017 09:44:54 +0200, p...@cpan.org wrote:
>> BYTE/BLOB/TEXT tests require three types of data
>> • Pure ASCII
>> • Correct UTF-8 (with complex combinations)
>> subtest: Correct UTF-8 TEXT with only code points in range U+00 .. U+7F (ASCII subset)
>> subtest: Correct UTF-8 TEXT with only code points in range U+00 .. U+FF (Latin1 subset)
> ASCII: U+00 .. U+7F
> iso-8859-*: + U+80 .. U+FF (includes cp1252)
> iso-10646: + U+000100 .. U+0007FF, + U+000800 .. U+00D7FF, + U+00E000 .. U+00FFFF
> utf-8 1): + U+010000 .. U+10FFFF, + surrogates, + bidirectionality, + normalization, + collation (order by)
> 1) some iso-10646 implementations already support supplementary codepoints, depending on the version of the standard

Regarding the Unicode subtests, I was going to respond to Pali's comment to say that there are more important ranges; H.Merijn addressed the main points I was going to raise, but I propose a simpler set of tests as being the main ones of importance for data being handled without corruption. Note: these comments apply to ALL DBI drivers, not just DBD::mysql. There are 8 main Unicode integer codepoint ranges of interest, representable by a signed 32-bit integer:

- negative integers    - rejected, invalid data
- 0x00 .. 0x7F         - ASCII subset, accepted
- 0x80 .. 0xFF         - non-ASCII 8-bit subset, accepted
- 0x100 .. 0xD7FF      - middle Basic Multilingual Plane, accepted
- 0xD800 .. 0xDFFF     - UTF-16 surrogates, rejected, invalid data
- 0xE000 .. 0xFFFF     - upper Basic Multilingual Plane, accepted
- 0x10000 .. 0x10FFFF  - the 16 supplementary planes, accepted
- 0x110000 and above   - rejected, invalid data

I would argue strongly that transit middleware like a DBI driver should strictly concern itself with the Unicode codepoint level: it shuttles data back and forth preserving the exact valid codepoints given, while rejecting invalid codepoints on both input and output, either with an error or by use of the Unicode replacement character U+FFFD. While Perl itself or MySQL itself can concern themselves with other matters such as graphemes/normalization/collation/etc., a DBI driver should NOT. Besides being logically correct, this means that DBI drivers can avoid needing code for the most complicated aspects of Unicode; they can avoid 99% of the complexity. -- Darren Duncan
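The accept/reject policy sketched above can be captured in a few lines of pure Perl. This classifier is only an illustration of the proposed policy, not code from any driver:

```perl
use strict;
use warnings;

# Illustration of the proposed codepoint policy: accept exactly the
# valid Unicode scalar values, reject negatives, UTF-16 surrogates,
# and anything above U+10FFFF.
sub codepoint_status {
    my ($cp) = @_;
    return 'rejected' if $cp < 0;          # invalid data
    return 'accepted' if $cp <= 0xD7FF;    # ASCII, 8-bit, middle BMP
    return 'rejected' if $cp <= 0xDFFF;    # UTF-16 surrogates
    return 'accepted' if $cp <= 0x10FFFF;  # upper BMP + supplementary planes
    return 'rejected';                     # beyond Unicode
}

# A rejecting driver would either raise an error or substitute U+FFFD.
printf "U+%04X %s\n", $_, codepoint_status($_)
    for 0x41, 0xD800, 0x1F600, 0x110000;
```

Running this prints accepted, rejected, accepted, rejected for the four sample codepoints, matching the range table above.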
Re: DBD::mysql path forward
On 2017-09-13 12:58 PM, Dan Book wrote:
> On Wed, Sep 13, 2017 at 3:53 AM, Peter Rabbitson wrote:
>> On 09/12/2017 07:12 PM, p...@cpan.org wrote:
>>> And here is the promised script:
>> The script side-steps showcasing the treatment of BLOB/BYTEA columns, which was one of the main (albeit not the only) reasons the userbase lost data. Please extend the script with a BLOB/BYTEA test.
> I'm not sure how to usefully make such a script, since correct insertion of BLOB data (binding with the SQL_BLOB type or similar) would work correctly both before and after the fix.

Perhaps the requirement of the extra tests is to ensure that BLOB/BYTEA data is NOT mangled during input or output: that on input any strings with a true utf8 flag are rejected, and that on output any strings have a false utf8 flag. Part of the idea is to provide regression testing so that changes to Unicode handling for text don't inadvertently break blob handling. -- Darren Duncan
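A sketch of what such a regression check might look like, assuming a hypothetical table `t` with a BLOB column `b` (the DSN, table, and column names are made up; SQL_BLOB binding itself is standard DBI):

```perl
use strict;
use warnings;
use DBI qw(:sql_types);

# Sketch: assert that BLOB round-trips are byte-identical and that
# fetched blobs come back without the utf8 flag. DSN, credentials,
# table, and column names are placeholders.
my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

my $blob = pack 'C*', 0x00 .. 0xFF;    # every byte value once

my $sth = $dbh->prepare('INSERT INTO t (b) VALUES (?)');
$sth->bind_param(1, $blob, SQL_BLOB);  # explicit binary binding
$sth->execute;

my ($out) = $dbh->selectrow_array('SELECT b FROM t');
die "blob mangled on round-trip\n" unless $out eq $blob;
die "fetched blob has utf8 flag\n" if utf8::is_utf8($out);
print "blob round-trip ok\n";
```

The pack of all 256 byte values is the interesting part: any driver that tries to UTF-8-encode it on the way in, or decode it on the way out, fails the byte-identity check.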
Re: DBD::mysql path forward
On 2017-09-13 6:31 AM, p...@cpan.org wrote:
> On Tuesday 12 September 2017 11:32:36 Darren Duncan wrote:
>> Regardless, following point 2, mandate that all Git pull requests are made against the new 5.x master; the 4.x legacy branch would have no commits except minimal back-porting.
> New pull requests are by default created against the "master" branch, so if 5.x development happens in "master" and 4.x in e.g. "legacy-4.0", then no change is needed. But on GitHub it is not possible to prevent users from creating pull requests against a non-default branch; when creating a pull request there is a button which opens a dialog to change the target branch.

When I said "mandate" I meant as a matter of project policy, not something GitHub enforces. Strictly speaking, there are times when a pull request against the legacy branch is appropriate. -- Darren Duncan
Re: DBD::mysql path forward
On 2017-09-12 6:45 PM, Patrick M. Galbraith wrote:
> Darren, I agree with this as well, with the exception of 4 and 5, keeping 5.0 "pure" to the new way of doing things.

I very much agree; pure is best. I only suggested 4 and 5 as half-hearted options because I knew some projects in our community liked to do things like that.

> For releases, I think I want to understand what this will mean. Sooner or later, a release shows up as a distribution package, installed on the OS, and I want to know the way of communicating that to the users and vendors so expectations are met. That's another question of "how do we handle that?" and "how do we inform an OS packager/vendor what to do?" Thank you for the great discussion!

Based on my experience, the fact that the major version number is being changed from 4 to 5 will communicate a lot by itself. Many software projects use semantic versioning (see http://semver.org for a formal definition, adopted by SQLite among other projects), and all the common package managers and package repositories understand it. Generally speaking, if the first number in a dot-separated version number is incremented, it is treated as a major version, and any version numbers having that first part in common are considered minor versions of the same major. Given the default behavior of package managers, any users performing a generic "keep me up to date with all the patches and bug fixes" should receive the latest version sharing the major version they already have, while going to a new major version would require a more explicit "install this separate thing" on the user's behalf. Tangentially, since DBD::mysql is a CPAN module, CPAN conventions can also be followed by the package managers. As for giving explicit communication to repository maintainers, e.g. Debian and CentOS and whatever, I think that's a matter of direct human communication; perhaps have the person managing the packages for MySQL itself handle it.

This all being said, for people using the CPAN clients, I believe their rules are simpler: they simply always get the highest numbers, ignoring major/minor distinctions, except for Perl itself. For the sake of CPAN clients, another possible alternative is to fork the namespace: the current DBD::mysql name means either the old legacy version or the new better version, and you pick a new name for the complement. A semi-relevant example of this is DBD::SQLite back in 2003 when SQLite 3 came out. In that case, SQLite 3 was a complete break from 2; neither could read the other's files. They renamed the old DBD::SQLite to DBD::SQLite2 (or some such), which was the one for SQLite 2, and the plain DBD::SQLite name then meant SQLite 3. That being said, all the SQLite 2 releases were 0.x versions, and version 1.0 was the first SQLite 3 version, so the fact that prior to the break the versions had 0.x semantics (anything might break) made the decision easier. I think the decision of which MySQL client keeps the old name depends on what you want to happen when users simply "get the latest" without testing. If the latest features and fixes stay with the old name, then the name for the legacy fork could be DBD::mysql::LegacyUnicode 4.044 AND DBD::mysql jumps to 5.0. If the latest features go to a new name, then you could have e.g. DBD::mysql2 version 1.0; it would be like a fork, but official. The former approach means users with broken code will have to update their code to the new module name but won't have to fix their Unicode; the latter approach means people wanting the new stuff change the driver name they call.

Another option that lets you keep the same name for both, but requires other parts of the ecosystem to at least update some code, is to stick with the originally discussed 4-to-5 move, and have other middleware modules add explicit internal code paths for "if DBD::mysql version >= 5 then expect correct Unicode behavior, otherwise expect and compensate for broken Unicode"; I expect DBIx::Class and friends would all add such checks anyway, short of hard-requiring the new version. Basically, it's all about tradeoffs: who you want to do what work, and who you want to be able to do zero work. -- Darren Duncan
Re: DBD::mysql path forward
On 2017-09-12 11:05 AM, p...@cpan.org wrote: On Tuesday 12 September 2017 19:00:59 Darren Duncan wrote: I strongly recommend that another thing happen, which is re-versioning DBD::mysql to 5.0. 1. From now on, DBD::mysql versions 4.x would essentially be frozen at 4.041/4.043. 2. From now on, DBD::mysql versions 5.x and above would be where all active development occurs, would be the only place where Unicode handling is fixed and new MySQL versions and features are supported, and where other features are added. I'm fine with it. Basically it means reverting to pre-4.043 code and continuing *git* development there with the version increased to 5.x. And once the code is ready it can be released to CPAN as a normal (non-engineering) 5.x release. Yes, as you say. With respect to Git I propose doing this immediately: 1. Create a Git tag/branch off of 4.043 which is the 4.x legacy support branch. 2. Revert Git master to the pre-4.043 code and then follow that with a commit to master that changes the DBD::mysql version to 5.0. Optionally a DBD::mysql version 5.0 could then be released, from the new master, that matches 4.042 in content; otherwise an additional round of planning or public consultation could happen first, in case any other possible breaking changes want to be introduced. Regardless, following point 2, mandate that all Git pull requests are made against the new 5.x master; the 4.x legacy branch would have no commits except minimal back-porting. -- Darren Duncan
Re: DBD::mysql path forward
On 2017-09-12 8:54 AM, Dan Book wrote: On Tue, Sep 12, 2017 at 11:04 AM, Patrick M. Galbraith wrote: Pali, Yes, I agree, we'll have to create a fork pre-revert and stop accepting PRs. How might we allow people time to test the fixes? Just have them use the fork, I would assume? To be clear, this sounds like a branch, not a fork. If your plan is to reinstate the mysql_enable_utf8 behavior as in 4.042 rather than adding a new option for this behavior, then branching from 4.042 seems reasonable to me; but you should be very clear if this is your intended approach, as this is what led to many people corrupting data when they sent blobs to MySQL with the same mysql_enable_utf8 option and expected them to accidentally not get encoded. Assuming that broken Unicode handling has been in DBD::mysql for a long time, that users expect this broken behavior, and that fixing DBD::mysql may break user code making those assumptions... I strongly recommend that another thing happen, which is re-versioning DBD::mysql to 5.0. A declared major version change would signify to a lot of people that there are significant changes from what came before and that they should be paying closer attention and expecting the possibility that their code might break unless they make some adjustments. Without the major version change they can more easily and reasonably expect no compatibility breaks. Part and parcel with this is that only DBD::mysql 5.0 would have the other changes required for compatibility with newer MySQL versions or features, which would be the other more normal-sounding reasons to use it. So I specifically propose the following: 1. From now on, DBD::mysql versions 4.x would essentially be frozen at 4.041/4.043. They would expressly never receive any breaking changes (but see point 3) and in particular no Unicode handling changes. They would expressly never receive any new features.
This is the option for people whose codebases and environments work now and who want to leave it alone. 2. From now on, DBD::mysql versions 5.x and above would be where all active development occurs, would be the only place where Unicode handling is fixed and new MySQL versions and features are supported, and where other features are added. Version 5.0 specifically would have all of the backwards-breaking changes at once that are currently planned or anticipated in the short term, in particular fixing the Unicode. Anyone who is already making changes to their environment by moving to a newer MySQL version or who wants newer feature support will have to use DBD::mysql 5, and in the process they will have to go through their Perl codebase and fix anything that assumes Unicode is broken. 3. As an exception to the complete freeze otherwise mentioned, there could be one or more DBD::mysql 4.x releases whose sole purpose is to back-port security fixes or select other bug fixes from 5.x. Also 4.x could gain minimalist documentation changes to indicate its new legacy status and point to 5.x. 4. Optional bonus 1: If it is reasonably possible to support both correct Unicode handling as well as the old broken handling in the same codebase, I suggest DBD::mysql 5.0 could also have a user config option where one could explicitly set it to make DBD::mysql use the old broken behavior, while the default would be to have the correct behavior. This would be analogous to Perl's "use experimental" feature. The idea is that it would provide a temporary stopgap for users migrating from 4.x where they could just make the minimal change of enabling that option but they otherwise wouldn't yet have to go through their codebase to fix all the assumptions of wrongness. Using this option would cause deprecation warnings to be emitted. Perhaps the feature could be lexical or handle-specific if appropriate to help support piecemeal migration to correct assumptions.
Alternately it may be best to not have this option at all and people simply have to fix their code on moving to 5.x. This matter is probably its own discussion, but the key part is it only applies to 5.x, and meanwhile the 5.x default is correct Unicode while 4.x freezes the old bad Unicode behavior. 5. Optional bonus 2: A user config option could be turned on, but all that it does is emit warnings in situations where the Unicode behavior was fixed, to prod users to check that particular spot in their code that may have been impacted; this would be more like a typical "use warnings", maybe. This is just a tool to help users convert their code and find impacted areas. Note, 4.x versions have already been in use for a full decade, so there shouldn't be new-version fatigue. So Patrick and Pali and others, what do you think of my proposal? -- Darren Duncan
Re: Fork DBD::mysql
On 2017-08-28 12:00 PM, Alexander Foken wrote: On 28.08.2017 18:19, Darren Duncan wrote: While a fork may be the best short term fix, as keeping up with security issues is important, ... Not being able to connect to MySQL / MariaDB from Perl is not acceptable. Having tons of known bugs, especially security-related ones, is as bad. Being forced to take the long detour through DBD::ODBC, an ODBC driver, and a MySQL / MariaDB ODBC driver is not pretty, and it won't be faster than the native DBD::mysql driver. Intentionally breaking DBD::mysql to drive people away from MySQL / MariaDB won't work. Instead, people will search for another language that does support connecting to MySQL / MariaDB, like PHP, Python or Java. And so, they will use MySQL / MariaDB without Perl. I agree with what you said, and I said as much myself (quoted above). The way I look at it is that keeping a good quality DBD::mysql is important, but that it would be conceptually a LEGACY SUPPORT feature. This is similar to how Perl supports, and should support, talking with any tools and systems that people have, so people always have the option to use Perl; even when the things they're working with aren't the best choices, it's still what they have. And Perl's support for MySQL should remain as high quality as possible as long as it exists. But I believe that's mainly a short term solution, and the longer term solution is to migrate to better DBMSs. -- Darren Duncan
Re: Fork DBD::mysql
On 2017-08-28 8:55 AM, p...@cpan.org wrote: Because of the current situation we in the GoodData company are thinking about forking DBD::mysql. We use DBI not only with DBD::mysql, but also with DBD::Pg. The annoying bugs in DBD::mysql still require hacks and workarounds, while similar bugs were fixed in DBD::Pg years ago. Also, the inability to use new versions of MySQL or MariaDB is a problem. Same for the open security issues. I would like to ask whether somebody else is interested in a DBD::mysql fork? Either as a user or as a developer? Maintaining a fork isn't a simple task, but with supporting users and contributing developers it should be easier. While a fork may be the best short term fix, as keeping up with security issues is important, I honestly believe that the best fix is to migrate to Postgres as soon as you can. You already use Pg so the experience in your company is there. I'd say move all your MySQL projects to Pg and then you won't have to think about MySQL anymore. It's not just about the Perl driver, but the differences in quality/features/etc of the DBMSs themselves. -- Darren Duncan
Re: RFC: Official DBI for Perl 6
On 2016-12-20 8:29 PM, Christopher Jones wrote: Here's a question: what is the GitHub repo? I see 0 repos under https://github.com/duncand (which I assume is you) I didn't mention it before because I didn't want to appear too spammy. To answer your question, I created a GitHub "organization" profile years ago to put all of my own projects under, which should provide the best foundation for others to collaborate who have the desire. By following either the list of organizations or contributor activity shown on my "duncand" profile, you would come to this GitHub account where my more recent and active projects live: https://github.com/muldis And this is the GitHub repo specific to the DBP spec, Perl 6 version: https://github.com/muldis/Muldis-DBP-Perl6 And its corresponding Perl 5 version: https://github.com/muldis/Muldis-DBP-Perl5 This is a work in progress and there is cross-influence between DBP and other projects in the Muldis organization. I'm still a few weeks away from announcing executable code here. -- Darren Duncan
Re: RFC: Official DBI for Perl 6
As a followup to this thread, I have now committed and updated on GitHub a rename of my API spec(s) as follows: Muldis::DBP - Formal spec of an abstract database protocol for Perl 6 The "DBP" was also one of Brock Wilcox's working name suggestions, so I also "acknowledge" this in the spec. And so, "Muldis DBP", or "MDBP" further abbreviated, will be my working title for the foreseeable future, used for both the Perl 6 and Perl 5 versions of this spec; in contexts where they need to be differentiated, they are called for example "Muldis DBP for Perl 6" or "Perl 6 Muldis DBP". This title appears to be a good fit considering the scope of the spec and the important qualities of what it defines. Unless someone else makes comments or asks questions, I probably won't post further on this subject until I have a reference implementation and fleshed out documentation, which will hopefully be within a month. -- Darren Duncan
Re: RFC: Official DBI for Perl 6
On 2016-12-09 7:41 AM, Greg Sabino Mullane wrote: Thanks for starting this thread. Muldis::DBI::Duck_Coupling If anyone wants to propose a better working title for the spec That's pretty wordy, and other than "DBI" doesn't tell us what it is. How about DBAPI? Or something short and sweet. For various reasons I have decided on a working title of the form Muldis::Foo where Foo is some short acronym of something descriptive. I haven't yet chosen one but am about to. Later on the proprietary Muldis:: prefix can be removed, at about the same time as the project is adopted by community management at least to the degree that DBI itself is, if that happens; in the meantime I don't pollute generic namespaces with working titles. The second main outcome is that Tim appears to endorse my primary design proposal that all implementations of the API are permitted to be at arm's length and there are no mandatory shared dependencies, in contrast to "DBI" being a mandatory shared dependency now in Perl 5. No mandatory dependencies includes no Perl 6 roles that must be consumed in order to declare provision of the API. Yeah, I have no major objections so far to the proposal. It would be nice to keep things close to DBI where it makes sense, but this seems like a good opportunity to, like Perl 6, keep the good stuff and replace the cruft. So in my case, I'm reading between the lines and distilling what I see as the essence of the Perl 5 DBI without considering any legacy API details to be sacred cows, not only for my Perl 6 version but also for my Perl 5 version. Likewise for JDBC, I am clearly intending to operate in a different space or at a different layer than Tim's long-stated preference for an API that directly mirrors JDBC. For now we can consider it as a layer Tim predicted as going directly over his JDBC-based one in his DBDI model, though it could also go underneath the JDBC-based one, with similar placement flexibility to the PSGI stuff.
That's a tall order, of course, which makes me wish we had started this thread about 10 years ago. You know, that time when Perl 6 was just about to be released. :) I believe we did, briefly. I joined the DBDI mailing list in 2004, around when it started, I think. The key difference now is that Perl 6 itself has actually been declared production ready by its creators, with a solid implementation and OS packaging infrastructure such that it is almost as easy for users to install and get into as Perl 5 or other common languages are, and presents a little-moving target to hit. On a tangent, and for posterity, I'll share a response I got in private from Brock Wilcox yesterday, which I quite liked. On 2016-12-08 7:06 PM, Brock Wilcox wrote: Random ideas: * DBDuck * DDuck * Duckie * ADI - Abstract Database Interface * QuackDB / QDB * DBP - DataBase Protocol * DBAL - DataBase Abstraction Layer * Dali - Database Abstraction Layer Interface * DAS - Database Abstraction Specification * DAR - Database Abstraction Role * DIR - Database Interface Role * DR - Database Role * Duncan's Database Role (DDR) In any case, great idea for doing this PSGI-style. -- Darren Duncan
Re: RFC: Official DBI for Perl 6
Tim, thank you for your response. I had posted my RFC more than 2 weeks ago and was starting to be concerned that no one had yet made the slightest comment on it. Your response of today has addressed my primary questions and concerns, which I greatly appreciate, and it will help me to move onward. The first main outcome is that I will not operate under any pretense of trying to name my work "DBI" and will seek some good alternate name. So far this is the working title NAME I have been using, for the API spec written in the form of a POD-only distribution, with separate Perl 6 and Perl 5 versions: Muldis::DBI::Duck_Coupling - Formal spec of a duck-coupling database interface for Perl 6 To partly explain, "Muldis" is a brand base name I use for my projects, while "DBI Duck Coupling" is descriptive for the project in common CPAN fashion. If anyone wants to propose a better working title for the spec, I would appreciate it. (My initial thought, just now, is that I could shorten the above to Muldis::Duck_Coupling, but that fails in the sense that a verb should be modifying a noun; otherwise I need a noun.) I know it just counts as bike shedding, but it would be nice to have something that rolls off the tongue easily. The second main outcome is that Tim appears to endorse my primary design proposal that all implementations of the API are permitted to be at arm's length and there are no mandatory shared dependencies, in contrast to "DBI" being a mandatory shared dependency now in Perl 5. No mandatory dependencies includes no Perl 6 roles that must be consumed in order to declare provision of the API. -- Darren Duncan On 2016-12-08 3:05 PM, Tim Bunce wrote: On Wed, Nov 23, 2016 at 05:06:39PM -0800, Darren Duncan wrote: This message is an RFC regarding Perl 6 and my proposed official successor there for the current Perl 5 "DBI" module, and in particular for usage of the "DBI" namespace.
I believe that now is the time for a serious look at having an official "DBI" for Perl 6. It's always been that time :) I disagree about "an" and "official" though. I'd much prefer to see some real discussion (i.e. more than one person actively involved) and some real code, rather than canonize any vaporware or any particular early experiments. I think it's fair to say that the DBI has been reasonably successful. That was due, in part, to the fact that several people spent a good year or so discussing and designing the spec, then spent another year or so refining it as we worked on prototypes (DBI and DBD::Oracle). I have already been working on a "Perl 6 DBI" or "Plack for databases" for awhile now, and in a few weeks I should be ready to show it off for evaluation. But in the meantime, I was hoping to get Tim Bunce and other community stakeholders on board with its philosophy and get a blessing to use the name "DBI" for it. Nope. Sorry. My position, FWIW, is the same as it was in this thread from Feb 2015: http://markmail.org/message/oavyl5l4dlme5dft which refers to this presentation: http://www.slideshare.net/Tim.Bunce/perl6-dbdi-yapceu-201008 In this message I will outline a few main points to start off the discussion, and other details can follow in the near future. That is, the new "DBI" would actually just be an API specification document for a duck-typing/etc API that conforming libraries and applications would implement for themselves. Which is also true for the "DBDI" proposed in the presentation. It's an interface definition that follows closely the JDBC API. (Which is mature, well documented, with a test suite, lots of books, lots of people with experience, and maps well to underlying database client library APIs. All *very* significant pluses.) There's no need for any implementation of a DBDI interface to share code with any other. Does this sound like a proposal you can get behind, You don't need me to get behind it.
Write some code, get people to help, build a team then a community. Anyone else can do the same. Give it any name you like for now. You can always rename it if it gains traction. (The DBI was called DBperl for the first couple of years.) is it okay to use "DBI" for the name reserved for the specification There's no need for that. I think I'd prefer if nothing used "DBI". (Except, one day, something that provided a very-DBI-like API over whatever has been adopted by the wider community as a database API.) Using it now, on an unproven experiment, seems like a poor idea. do you have any questions or counter-proposals, and so on? Write some code, get people to help, build a team. Give it any name you like. Write a test suite. Release early, release often. Make it correct and make it fast. Have fun. Good luck! :) Tim.
RFC: Official DBI for Perl 6
cks for whether packages or objects compose particular roles or have particular names or even assume particular hierarchies for implementing classes etc. 5. This new DBI would make no core assumptions on what query/instruction/etc languages each provider understands or uses natively or how data is represented or what data types are supported. It treats on equal terms whether we have relational or nonrelational, SQL or non-SQL, and so on. But it would still remain savvy to relational in particular. A consumer would be able to introspect the language capabilities using the API, including in the case of SQL what versions of SQL are supported, so that the consumer can programmatically know what options it has for providing queries. For example, the Enterprise DB version of Postgres may accept both Postgres and Oracle versions of SQL for querying. While the Perl 5 DBI gives lip-service to being agnostic to query language or paradigm, a lot of its hard-coded API is still specific to details of SQL or ODBC etc. 6. All query/instruction etc is treated as just data to the main API itself, same as any result or regular input data is, and executing instructions is a separate step from feeding them in the form of data into the provider. 7. The main API has generic Value objects representing data of any type, whether scalar or collection or code etc, and the API details for executing queries are arms length from the API details for marshalling data between regular Perl data types / data representations and these Value objects. So that's what I have to say in my first message, and other details are pending. Does this sound like a proposal you can get behind, is it okay to use "DBI" for the name reserved for the specification (occupied by POD files say but no code), do you have any questions or counter-proposals, and so on? Also, when this proposal is adopted in Perl 5, what namespace should it have? Thank you in advance. -- Darren Duncan
Re: Perl 6 and DBI
On 2015-02-06 10:50 AM, Tim Bunce wrote on dbi-users: On Thu, Feb 05, 2015 at 12:23:40AM -, Greg Sabino Mullane wrote: As you may have heard, Larry Wall gave a speech recently declaring the goal of releasing Perl 6 this year, 2015. Honestly, there is little chance of me using Perl 6 until it has a good, working DBI. Anyone know the state of things with DBI and Perl6? Is the goal still to implement what is basically DBI v2, or perhaps someone is working on a simple port of the existing DBI? Is a working DBI even on their list of blocker features for a release? On MoarVM the perl5 DBI can be accessed via the Inline::Perl5 module. That probably counts as a "reasonable working DBI" :) Back in 2007 I said this: http://www.slideshare.net/Tim.Bunce/dbi-for-parrot-and-perl-6-lightning-talk-2007 Then in 2010 I said this: http://www.slideshare.net/Tim.Bunce/perl6-dbdi-yapceu-201008 I still think that's the right way to go, for all the reasons expressed in the slides. I support this in principle, particularly the part about having a language-agnostic DBI that can be reused across multiple languages otherwise capable of running in the same VM. However, I still think there should be no mandatory sharing of code between different DBMS interfaces, for example one shouldn't be required to share any code between say a PostgreSQL interface and a MySQL interface. What we should have instead is documented API standards, and a shared test suite written to that API that each DBMS interface implements, and that this common API is what we call "DBI". For example with a minimal delta from how it works now in Perl 5, we go from this: $dbh = DBI->connect('dbi:pg:foo', ...); ... to this: $dbh = DBD::Pg->connect('foo', ...); ... and the $dbh etc essentially behaves the same, but now DBI is just an interface that DBD::Pg etc provides to the public rather than a mandatory extra piece of mediating code. What's missing is a team of people with the right skill willing to work on it. 
I've had little time to do more than the tinkering I've already done and I'm severely hampered by knowing ~zero perl6 or Java. Volunteers are most welcome to express interest on the dbi-dev mailing list. Tim. I am certainly interested in this, and so am expressing so on dbi-dev. That being said, for the moment I am working to tackle the problem from a different angle, which loosely resembles your proposal but is distinct. I hope to have the start of something executable in about 2 months that people can play with. (It will actually be in Perl 5 initially, but a Not Quite Perl port would follow.) Then we could either use that as a point of departure, or otherwise it could just help inform your proposal. -- Darren Duncan
Perl 6 and DBI - copied from dbi-users
A discussion started on dbi-users which I thought seemed more appropriate to continue on dbi-dev, assuming it carries on in the direction of a design discussion. I have quoted the important parts of each original message below, to get it started. -- Darren Duncan Forwarded Message Subject: Perl 6 and DBI Date: Thu, 5 Feb 2015 00:23:40 - From: Greg Sabino Mullane To: dbi-us...@perl.org As you may have heard, Larry Wall gave a speech recently declaring the goal of releasing Perl 6 this year, 2015. Honestly, there is little chance of me using Perl 6 until it has a good, working DBI. Anyone know the state of things with DBI and Perl 6? Is the goal still to implement what is basically DBI v2, or perhaps someone is working on a simple port of the existing DBI? Is a working DBI even on their list of blocker features for a release? Forwarded Message Subject: Re: Perl 6 and DBI Date: Wed, 4 Feb 2015 16:28:15 -0800 From: David E. Wheeler To: Greg Sabino Mullane CC: dbi-us...@perl.org This goes for me, too, and many, many other people too, I've little doubt. Forwarded Message Subject: Re: Perl 6 and DBI Date: Wed, 04 Feb 2015 17:19:54 -0800 From: Darren Duncan To: David E. Wheeler , Greg Sabino Mullane CC: dbi-us...@perl.org And in that case, there would/should be a discussion about what form this DBI for Perl 6 should take. Personally I think one of the most important design changes DBI should make is that DBI becomes an API spec that drivers/engines must conform to in order to be certifiably compatible. Part of the API is that user code can query the driver/engine to ask, "do you implement DBI" or "do you implement this version of the DBI API" etc. DBI should not be an actual code layer that applications have to go through in order to talk with the driver/engine, as it is now. However, there can exist Perl 6 roles or other shared libraries that a driver/engine may optionally use to help it implement the API rather than rolling its own.
Think of the kind of revolution that PSGI brought by working this way, just defining an API or protocol to conform to, rather than "being" a module that everything had to use. DBI should do the same thing. The actual details of the API / common interface are an orthogonal matter. The key thing I'm looking for is no mandatory shared code between engines/drivers. I also believe that what I said, the no mandatory shared code, would also work just as well in any language, including a major new Perl 5 version. Forwarded Message Subject:Re: Perl 6 and DBI Date: Wed, 4 Feb 2015 21:52:39 -0600 From: David Nicol To: Darren Duncan CC: David E. Wheeler , Greg Sabino Mullane , dbi-us...@perl.org Does this mean the floor is open for brainstorming? I'd like to see more transparent integration, so p6+dbi would be like pl/sql or pro*C or whatever that language Peoplesoft scripts used to be in that I was working with when I wrote DBIx::bind_param_inline. http://perlbuzz.com/2008/12/database-access-in-perl-6-is-coming-along-nicely.html http://www.mail-archive.com/dbdi-dev@perl.org/maillist.html doesn't have anything new since 2011.
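The PSGI-style design discussed in this thread (an API spec plus a capability query, with no mandatory shared code layer) could be sketched roughly as follows. This is an illustration only: the class and method names are invented for the sketch and are not part of any real module or of the proposal's actual text.

```perl
use strict;
use warnings;

# Illustrative only: a driver that implements the hypothetical spec
# answers version queries itself, with no shared "DBI" code in between.
package My::Hypothetical::Driver;
sub new { bless {}, shift }
sub implements_dbi_api {
    my ($self, $wanted_version) = @_;
    return $wanted_version <= 2;   # this driver supports API versions up to 2
}

package main;
my $drv = My::Hypothetical::Driver->new;
print $drv->implements_dbi_api(2) ? "compatible\n" : "incompatible\n";   # compatible
print $drv->implements_dbi_api(3) ? "compatible\n" : "incompatible\n";   # incompatible
```

The point of the sketch is that conformance is asserted by the driver and checked by the application at runtime, PSGI-style, rather than enforced by a mediating module everything must load.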
Re: Accidently added CONV in SQL::Statement
On 2014-12-15 1:40 AM, Jens Rehsack wrote: during review of SQL::Statement wrt. RT#100551 I realized that the introduced SQL_FUNCTION_CONV (https://github.com/perl5-dbi/SQL-Statement/blob/master/lib/SQL/Statement/Functions.pm#L454) is unnecessarily over-complex. Funny, when I first saw that sentence I had thought you added support for parsing SQL stored procedures/functions. Oh well. -- Darren Duncan
Re: Test::Database and Test::DSN
On 2014-04-03, 7:22 AM, Tim Bunce wrote: I'd caution against making a hard distinction between file and non-file DBDs as it may influence the design in ways that cause limitations later. It seems to me that the key distinction is "can we reliably and safely create and delete a private database". That's true for file-based DBDs (i.e. create a subdirectory and rm -rf it afterwards) but may also be true for some non-file DBDs in some situations. I agree with this. There is also precedent. As I recall, the DBD::Pg test suite will create a whole new Postgres database cluster and associated server process in order to do its testing; these are completely separate files in their own directory and a separate server on its own port, so isolated from any production/other use of Postgres that may be on the system. Such solutions should be utilized by a generic database testing infrastructure wherever possible. In such cases there is no distinction between "file based" and not. -- Darren Duncan
Re: Test::Database and Test::DSN
For actual or intended Test::Database use cases that you know about, which of these situations are represented: 1. The test requires the actual DBMS used in production, it is designed with only a single specific DBMS in mind, no substitutions allowed. 2. Like #1 except substitutions are allowed if the substitute is compatible for all the DBMS syntax/capabilities used. 3. The test will work with a variety of DBMS and works to least common denominator. 4. The test will work with a variety of DBMS and will do different things depending on which DBMS is in use as it recognizes their different syntax or capabilities. 5. Some other option? Thanks in advance for the info. -- Darren Duncan
Re: Making DBI (results) more strict
Max, you can distinguish a missing column from a null one quite easily in regular Perl. If "exists $hash->{key}" is false then the column doesn't exist, whereas if that is true but "defined $hash->{key}" is false then it exists but is null. -- Darren Duncan On 2/10/2014, 10:57 AM, Max Maischein wrote: Hi all, I recently discovered the greatness that is Hash::Util::lock_ref_keys, when used together with ->fetchall_arrayref() like this: ... my $rows= $sth->fetchall_arrayref( {} ); for( @$rows ) { lock_ref_keys( $_ ); }; ... This prevents me from accessing hash keys that don't exist. Usually, this is inconvenient, but with SQL query results, I (too) often encounter different case for the column names, or other inconsistencies. Especially when columns are allowed to be NULL, it may take me a while to figure out that I'm looking at the wrong key. I'd like to enable this returning of locked hashes from within DBI, preferably on a per-$dbh level: my $dbh= DBI->connect( $dsn, 'scott', 'tiger', { RaiseError => 1, StrictResults => 1 }); Alternatively, I'm also open to suggestions on how to best implement this feature in a separate module, tentatively named DBI::strict. I've thought about doing some AUTOLOAD reflection onto the real $dbh, but I'm unsure about how to best approach wrapping arbitrary DBD handles/statement handles with my DBI::strict::st without interfering. Also, I'd appreciate hints on what subroutine(s) would be the most appropriate to enable locking the hashes, as I want to write as little new code for such a feature as necessary. Thanks for reading, -max
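The exists/defined distinction described above can be demonstrated with a plain hashref, standing in for a fetched row (a minimal sketch with hard-coded data, not a real database handle):

```perl
use strict;
use warnings;

# A row hashref where one column is present but NULL (undef in Perl)
# and another column is absent entirely.
my $row = { name => 'Alice', email => undef };

# email: the key exists, so the column is there, but its value is NULL.
print "email column present\n" if exists $row->{email};
print "email is NULL\n" unless defined $row->{email};

# phone: the key does not exist at all, so the column is missing.
print "no phone column\n" unless exists $row->{phone};
```

With lock_ref_keys applied to $row, accessing $row->{phone} would die instead of silently returning undef, which is exactly the strictness being asked for.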
Re: RFI: Database URIs
Sorry, I think I hit send without adding a reply message; here it is. On 2013.11.22 5:13 PM, David E. Wheeler wrote: First, I'm using the database name as the scheme in the URL. Some examples: postgresql://user@localhost:5433/dbname sqlite:///path/to/foo.db By "database name" do you mean "DBMS name"? Because I'd say the "database name" is what's on the right-hand side of the //, not what's on the left. Another thing I was going to say is, if you wanted some standardization, you should distinguish the parts that are necessary to connect to a database from the parts that just select a default schema in the database for interacting with. By that I mean, remember that a PostgreSQL "database" and a MySQL "database" aren't actually the same concept. A PostgreSQL DBMS server gives access to multiple disjoint databases where you must name one to connect, and then separately from that is the optional concept of the current schema that you can select. A MySQL DBMS server gives access to exactly 1 database, which you can connect to without specifying a "database name", and selecting a current schema (what they call a "database") is optional for using MySQL. The reason I say this is that DBI's URI scheme uses the same syntax for both kinds of "database" even though how they're treated is actually very different. And you don't have to do the same. -- Darren Duncan
Re: RFI: Database URIs
On 2013.11.22 5:13 PM, David E. Wheeler wrote: DBI Folks & Gisle, I want to add support for specifying database connections as URIs to Sqitch, my DB change management system. I started working on it today, following the examples of JDBC and PostgreSQL. Before I release, though, I'd like a bit of feedback on a couple of things. First, I'm using the database name as the scheme in the URL. Some examples: postgresql://user@localhost:5433/dbname sqlite:///path/to/foo.db This is to make it easy to tell one DB from another. But I'm wondering if I should follow the example of JDBC more closely, and prepend "db:" or something to them. More like DBI DSNs, too. However, it would require a bit more work, as URI.pm does not currently recognize multiple colon-delimited strings as scheme names AFAICT. :-( Next, I've added a bunch of URI subclasses for various database engines. I'm not too familiar with some of them, so if you see any listed here where the name (to be used in the scheme) or the default port is wrong, please let me know: https://github.com/theory/uri-db/blob/master/t/db.t#L9 Thanks! David PS: Is this something the DBI might want to use?
Re: Best way to retire old code for unsupported database versions
On 2013.09.26 7:52 PM, Greg Sabino Mullane wrote: (MySQL 4.x? I know places still running 3.x!) Me too, such as shared web hosts that prize stability greatly, so if you got a server in 2000 and requested MySQL in 2001, you got 3.23, and then you were never given upgrades to it afterwards, except by having your whole account moved to a new server in 2013. -- Darren Duncan
might Alien::Base be useful in DBI or driver development?
I seem to recall a few years ago some discussion about future DBI or driver development, and that an ideal would be that one could more easily use the typical C client libraries for different DBMSs from Perl without having to write any XS necessarily. Have a look at this: http://blogs.perl.org/users/joel_berger/2013/05/alienbase-final-report.html I think that may be a useful tool to actually realize such an effort, and may be a significant improvement to the process of using databases in Perl. Part of the project is about being able to install C dependencies of Perl projects as easily as installing a Perl module. -- Darren Duncan
Re: DBD::mysql - bit worrying
On 2013.04.11 4:37 PM, Lyle wrote: Hmm... Got a reply to my bug report and it's a bit worrying: http://bugs.mysql.com/bug.php?id=68266 Seems like MySQL don't want to directly support DBD::mysql any more. Whether you like mysql or not, that's a heavily used DBD. I'm not aware that DBD::mysql was ever supported through the general MySQL bug reporter, and I understood it was a separate project. I think Patrick Galbraith is the person you should ask about this matter, as Patrick has been the one releasing DBD::mysql on CPAN for many years, a full decade at this point. -- Darren Duncan
Re: Compatability issue of DBI with SQLite
On 2013.03.15 4:05 AM, CV, Tejaswi (SPD) wrote: We have a C++ application which uses Perl 5.8.2, DBD-SQLite-1.13 and DBI-1.54. Due to a multithreading issue in perl which we are facing, we want to migrate to the latest Perl. We got the source of Perl 5.14.3, compiled it, and then got the latest source versions of DBD-SQLite and DBI, which are DBD-SQLite-1.37 and DBI-1.623. In order to compile the source of DBD-SQLite 1.37, we need to compile DBI 1.57. I tried compiling DBI 1.57 and got the below error. So, I took the latest source of DBI which is DBI 1.623 and compiled it; this compiled and I installed it also. But when I try to compile the SQLite 1.37 source, it says "DBI 1.57 is required to configure this module; please install it or upgrade your CPAN/CPANPLUS shell." But I cannot compile DBI 1.57 as I get the errors like above. Can you please suggest how I can resolve the DBI 1.57 compilation errors, or how I can compile DBD-SQLite-1.37 with DBI-1.623. When a Perl module says it needs version X of a module, it means X or later. So you should only be attempting this with DBI 1.623 and not with an older DBI version. You already said DBI 1.623 compiled and installed properly, so stick with that and don't try to do DBI 1.57. If DBI is actually installed (you did "make install") then the latest DBD::SQLite should work; if it doesn't then that's a separate problem. -- Darren Duncan
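The prerequisite rule described above ("needs X" means "X or later") can be sketched as follows. This is an illustrative Python sketch, not DBI code; it models plain decimal version numbers and ignores Perl v-strings:

```python
def satisfies(installed, required):
    """Perl-style prerequisite semantics: "requires DBI 1.57" means any
    DBI whose numeric $VERSION compares >= 1.57, not exactly 1.57."""
    return float(installed) >= float(required)

# DBI 1.623 satisfies DBD::SQLite's "DBI 1.57 is required" check,
# so there is no need to build the older DBI at all.
print(satisfies("1.623", "1.57"))  # True
print(satisfies("1.54", "1.57"))   # False
```

This is why installing the newest DBI is the right move: the configure-time check is a floor, not an exact-match requirement.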
Re: Potential for new database driver
On 2013.03.04 12:52 PM, Travis Elliott Davis wrote: My name is Elliott Davis and I am relatively new to this list. I am interested in writing a driver for Blueprints. I imagine this working similarly to the ODBC or the JDBC driver for DBI currently. I have read the DBI::DBD documentation (only once, not twice as instructed) and have found it very verbose. It seems that writing a C driver would be the optimal solution for speed, but a pure Perl driver would make the module more maintainable. If I were to work on the driver I think my goal would be a pure Perl driver. The DBI::DBD documentation suggests an initial email to the dev list before pursuing code on any driver, so here I am. If anyone is currently working on this or is interested in working on this could you please let me know. Does there already exist a C client library for Blueprints, especially one made by the same people as the Blueprints server? If so, then probably use that and make the DBD driver a C one that just interfaces to it. If there isn't a C client library and you're just writing to a network protocol, then write yours in Perl since that would be the easiest to do, given you're doing all the client work yourself anyway. -- Darren Duncan
Re: Shared DBI driver test suite
On 2013.01.23 1:07 AM, Jens Rehsack wrote: I'd *love* to see that error fixed so all driver authors could make use of, and contribute to, a common DBI driver test suite. You suggest a DBI::Test suite which is a separate module and is required by DBI? This is completely different to previous statements that DBI should have no additional dependencies ... I don't think having a sharable test suite as an external dependency is a problem. It's like the DBI distribution simply being split in two, such that other modules can then depend on just the test suite portion. In fact, if the DBI metadata is appropriately set up, then this external dependency would just be a "test requires", and so those simply wanting to install DBI, and who trust it already works, don't even need to pull in the other one. On a slight tangent, I finally have a mini-sabbatical from work of 4 weeks, during which time I hope to complete a working parser, syntax tree, and possibly executable of my Muldis D language. This is relevant to you because I have also designed a standardized way to represent arbitrary SQL as a syntax tree, the same one as my language, and this could form the basis of a more comprehensive cross-DBD test suite. In a nutshell, this involves taking the basic "rtn(args)" format of calls for user-defined SQL routines and using the same format for all the SQL built-ins too (eg, IUD and the parts of SELECT), rather than a pile of special syntax, and so analysis or further processing of parsed SQL is a lot easier. If you don't see the benefit, then never mind, and I'll come back to you when I have an executable, hopefully soon. -- Darren Duncan
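The uniform "rtn(args)" encoding described above can be illustrated with a small sketch (Python rather than Perl for brevity; the node shapes and routine names here are hypothetical illustrations, not part of any released Muldis D spec):

```python
# Every operation, built-in or user-defined, is a (routine_name, args) pair,
# so generic tree-walking code needs no special case per SQL construct.
def routines_used(node):
    """Collect every routine name appearing in a nested rtn(args) tree."""
    rtn, args = node
    names = {rtn}
    for value in args.values():
        if isinstance(value, tuple):   # a nested routine call
            names |= routines_used(value)
    return names

query = ("project",
         {"input": ("join",
                    {"left":  ("table", {"name": "books"}),
                     "right": ("table", {"name": "people"})}),
          "columns": ["title", "name"]})

print(routines_used(query))  # a set containing 'project', 'join', 'table'
```

The point is that one recursive function handles SELECT-like, IUD-like, and user-defined calls identically, which is what makes downstream analysis easier.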
Re: RDBMS comparison tool
On 2013.01.05 5:39 AM, Lyle wrote: On 05/01/2013 10:41, Darren Duncan wrote: Since the SQL standard, as well as some other programming languages, define "decimal" as being exact numeric, then it is absolutely wrong to map them to either FLOAT or DOUBLE. In fact, in Perl 5 the only native type that can map to a DECIMAL is a character string of numeric digits. Don't shoehorn this. There is no good reason to map DECIMAL to a floating point type. I'm not overly familiar with Perl's internal handling of numbers. I guess if you have a DECIMAL as a character string, Perl will switch it out to an approximation the moment you do a calculation on it. Furthermore, if the DBI (or the DBDs, I'm not sure where the distinction lies) is already putting it into a Perl decimal which is floating point, then the battle has already been lost before it gets to me. My understanding is that Perl 5 has 3 internal data types of relevance here, which are integers, floats, and strings. A DBMS integer can be represented perfectly by a Perl integer as long as it is within the Perl integer range, which AFAIK these days is typically a 64-bit machine int but used to be 32 bits, depending on your configure settings at compile time (perl -V can tell you). If the integer is larger than that, eg some DBMSs support 128 bit ints, you'd need to use a Perl string to represent it perfectly. A DBMS float might be exactly representable by a Perl float, maybe, depending on their details, or there might be a round-off in the conversion; now while that loss might not matter, it means round-tripping may not produce an identical value, which is a larger concern in my mind. A DBMS DECIMAL has no native Perl equivalent and you'd have to use a string in the general case. As for what DBDs actually do, well that's a different matter; but I'm talking about what *could* be done in the Perl somewhere, and typically I'd expect the DBD to make that decision on Perl's behalf.
Yes, doing naive calculation on numbers-as-strings could cause loss, but at least having it as strings initially gives the Perl code the choice for what to do because it has all the detail. Now Perl does also have BigInt, BigRat, BigFloat etc which could be utilized for exact transference, but AFAIK no DBD does that as it would hurt performance or add further dependencies. -- Darren Duncan
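The exact-vs-approximate point above is easy to demonstrate (a Python sketch using its `decimal` module, standing in for string-based or bignum-based exact handling in Perl):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so float
# arithmetic drifts; exact-numeric (decimal) arithmetic does not.
float_sum = float("0.1") + float("0.2")
exact_sum = Decimal("0.1") + Decimal("0.2")

print(float_sum)              # 0.30000000000000004
print(exact_sum)              # 0.3

# A DECIMAL kept as a string round-trips without any change in value,
# which is the round-tripping concern raised above.
print(str(Decimal("0.1")))    # 0.1
```

This is exactly why mapping a DBMS DECIMAL to FLOAT or DOUBLE is wrong: the conversion itself can change the value before any calculation happens.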
Re: RDBMS comparison tool
On 2013.01.04 4:17 PM, Lyle wrote: On 01/01/2013 03:12, Greg Sabino Mullane wrote: Lyle wrote: Similar situation for PostgreSQL character. Yep. Reviewing the PostgreSQL documentation on CHARACTER, it mentions things like short, long and very long character strings, but lacks detail, so I've emailed them about it. Those are internal details about how they are stored and won't affect anything as far as an application is concerned. I thought it might be useful to know strings below a certain length are stored uncompressed and so a little faster. Likewise very long strings have a different storage mechanism one might want to avoid. Although I've only just had a reply to my post asking for specifics, and haven't had a chance to look into it further. My understanding about Postgres either compressing strings or using "toast" segments for longer ones is that this is just an internal implementation detail and that user-facing concepts like data types should be ignoring these differences. Character data is just "text", a sequence of characters of arbitrary length, and that's all there is to it. MySQL's FLOAT and DOUBLE are linked to several ODBC types, perhaps PostgreSQL could do the same? Or is that bad practice on the MySQL driver's part? Hard to say, can you be more specific? Keeping in mind that I don't actually know a whole lot about the SQL/ODBC specs, and the differences therein. :) For SQL_DECIMAL, PostgreSQL and MySQL seem to disagree on whether FLOAT or DOUBLE should be used. Well, a postgres float with no precision is really the same as a double precision (as you hint at below). The whole thing is quite confusing when you start pulling in ODBC, SQL standard, IEEE, decimal vs. binary precision, etc. Have a good link to a doc that explains exactly what SQL_DECIMAL is meant to be? I've got a copy of the standard, but I'm pretty sure I'd be breaking some law if I copied and pasted bits. DECIMAL is supposed to be an "exact numeric".
Whereas FLOAT, REAL, DOUBLE PRECISION are "approximate numeric". So I guess DOUBLE is a better fit as it's supposed to be more accurate. But neither are a good match really. Here's the thing. The most important difference is "exact numeric" versus "approximate numeric". Your type list should clearly and explicitly separate these into separate rows from each other, and never cross types between them. Things like integers or rationals or DECIMAL etc are exact numeric. Things like FLOAT and DOUBLE are analogous to IEEE floats, which are inexact and are defined by using a certain number of bits to store every value of their type. I don't recall which of the above REAL goes in. If different DBMSs use the same name FOO to mean exact in one and approximate in another, still keep them in different rows. Since the SQL standard, as well as some other programming languages, define "decimal" as being exact numeric, it is absolutely wrong to map them to either FLOAT or DOUBLE. In fact, in Perl 5 the only native type that can map to a DECIMAL is a character string of numeric digits. Don't shoehorn this. There is no good reason to map DECIMAL to a floating point type. Likewise, under no circumstance map integers to floating types. It also has SQL_VARCHAR associated with TEXT instead of VARCHAR. Not sure about this one either - if there was a reason for that, I don't remember it offhand. Thanks for doing this work, it's good to see someone poking at this. +1 for getting the DBDs somewhat consistent, although it may be tricky as some (looking at you, MySQL!) play more fast and loose with their data type definitions than others. :) Yes, the hard part is certainly trying to find consistency, or a way of making them act/emulate some consistency. Since in Postgres a VARCHAR is just a restricted TEXT, map length-restricted character types to VARCHAR and unrestricted ones like Perl strings to TEXT; that would carry the best semantics. -- Darren Duncan
Re: RDBMS comparison tool
Yes, that is useful. I think you should add a column such that your leftmost column is some canonical type name you made for the report, and have the SQL standard name(s) in a separate column like the ODBC standard names are. This works best when no one list is a superset of the others, which is surely the case here; that way you avoid, say, any confusion about which things in the first column are actual SQL standard names vs. some placeholder you added from ODBC etc. -- Darren Duncan Lyle wrote: Hi All, Whilst working on another project it made sense to write a tool for comparing the various RDBMSs. I'm talking about the database management systems themselves, not databases within them. So far I've done parts that use $dbh->type_info_all() to compare what types SQL Server, Postgres, Oracle and MySQL have available and their details. Generating reports like: http://cosmicperl.com/rdbms/compare_types.html http://cosmicperl.com/rdbms/compare_type_details.html I'm not yet sure as to whether the mapping from the RDBMS's local SQL types to the ones the DBI recognises is done by the DBD driver, or whether this is already predetermined by the RDBMS... Let me know if this isn't interesting to you all and I'll keep it off list. Lyle
Re: Merging DBD::RAM into DBI
I question this proposal. Does DBI currently have any useful drivers bundled, such that you could install just DBI and use it effectively for real work without installing any other drivers? (I consider Proxy a special case, and even with it in use, the same question applies to whatever other DBI install is being proxied. And the other bundled driver-named classes seem to be more base classes than complete drivers.) I think it would be better for DBI to stick to defining an interface, and any implementations/drivers should be distributed separately. The only good exception I can think of is a driver bundled in order to bootstrap a more comprehensive DBI test suite that tests doing it for real, eg fetching or modifying data, whatever the API defines, rather than mocking. That ideally can then be reused with other DBI drivers. So, I would only support bundling such a RAM-based DBMS if DBI exploits it in its test suite to help test the DBI interface itself. -- Darren Duncan Jens Rehsack wrote: Hi all, I'd like to merge DBD::RAM into DBI for several reasons: 1) It's a tiny one without many dependencies; it should run fine with just DBI::SQL::Nano (or should) 2) I had an evil idea how to allow pure-Perl drivers to work together in one instance like: DBI->connect( "dbi:DBM:", , , { sql_join_drv => "DBD::RAM", ... } ); But there is something for testing required ;) 3) It currently doesn't get much love, even though it should. Any comments? Cheers
Re: Enhancements to Pure-Perl DBD's (primarily DBD::File)
Jens Rehsack wrote: today I got some people in irc://irc.perl.org/#dbi to discuss planned extensions to DBD::File. There were several goals to reach: 1) add readonly mode to DBI::DBD::SqlEngine ==> will be solved by adding a new attribute sql_readonly to DBI::DBD::SqlEngine and, if required, fixing DBI::SQL::Nano and SQL::Statement and bumping the SQL::Statement requirement in *::Nano I would recommend doing the complement, backwards-compatibility concerns aside. Make read-only the default behavior and have a new attribute sql_writable instead. If the goal is to make safer behavior possible, that should be the default. Then people don't "accidentally" make changes when they didn't mean to. If backwards-compatibility is the only reason not to do that, you can phase it in over time, maybe in a similar manner to how strictures were phased in in Perl; eg, if someone explicitly requires a version number that is greater than X, then readonly is the default; if they have no explicit version requirement, then something backwards-compatible is the default. 2) add support for other I/O layers to DBD::File a) add support for streams (PerlIO) b) add support for other kinds of fetch_row/push_row processing Does the current version work with files in a stream fashion, such as only reading in as much at a time as it needs, or does it read the whole file before doing anything? Regardless, streaming is great for scalability. It means you can have many-gigabyte files and be able to work with them. And security is helped as it isn't so easy for someone to DOS you by giving you a huge input. -- Darren Duncan
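The phase-in idea sketched above (read-only by default only for callers that explicitly request a new enough version, the way `use v5.12;` implies strictures) could look roughly like this. This is a hypothetical sketch in Python, not the actual DBI::DBD::SqlEngine API; the cutover version is invented:

```python
CUTOVER = 2.0  # hypothetical version at which read-only becomes the default

def readonly_default(requested_version=None):
    """Writable by default for legacy callers that state no version
    requirement; read-only by default once a caller opts into >= CUTOVER."""
    if requested_version is None:      # no explicit requirement: legacy default
        return False
    return float(requested_version) >= CUTOVER

print(readonly_default())        # False: backwards compatible
print(readonly_default("2.1"))   # True: opted in to the safer default
```

The design appeal is that nobody's existing code changes behavior; only code that explicitly asks for the new API version gets the safer default.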
Re: feature request for SQL::Statement::Structure
Jens Rehsack wrote: Am 11.04.2012 04:58, schrieb Darren Duncan: That's what I would do if I were making a SQL parser. Oh wait, I am! Just another SQL parser? Why not take/improve one of the existing ones? Or is it because they're not invented "here"? No, not "just another"; I don't do that kind of thing. I'm creating a new general purpose but database savvy programming language, first implemented in Perl 5 but that ultimately would self-host, which one can either use to write an application in or write a DBMS in. I am also writing a module in this language which provides all the API and semantics of SQL. Thereby, one can "write SQL" using either the native syntax of the language, which looks loosely like Perl, but I would also provide a parser for actual SQL syntax that maps to the SQL-semantics module, so one can take existing SQL code and run it in Perl. The initial Perl 5 implementation would implement a DBMS natively, but subsequently there is the option to farm out to or map to existing DBMSs, which can be quite direct for code mapped to the SQL-semantics module. I expect that I will try to utilize existing SQL parsing modules on CPAN for my project where they are capable enough or are designed in a manner that they can be made capable enough without a rewrite. The project's reason to exist isn't SQL parsing for its own sake, but SQL parsing would support the other things. Do you know any SQL parsing modules that can actually *run* the SQL in Perl or that generate Perl code from SQL that does the same thing with Perl data? -- Darren Duncan
Re: feature request for SQL::Statement::Structure
Tim Bunce wrote: With regard to migrating SQL::Statement to be built on SQL::Translator, that's clearly a big task, but one that I presume could be broken down into smaller tasks. I think that it might work better to treat SQL like a general purpose programming language, as much as is possible, rather than something inordinately special, and then use that as a starting point from which to design and build a parser for it. For example, think of your "create table" et al like a data type definition (columns are attributes, constraints are constraints, etc) plus variable declaration, and think of your "select" etc like a value expression, and so on; stored procedures etc are routine definitions; bind parameters are parameters. Perhaps something like PPI might be a better starting point (this is a stab in the dark) as it is designed for a full language. That's what I would do if I were making a SQL parser. Oh wait, I am! -- Darren Duncan
Re: [rt.cpan.org #76276] support 0-column input tables?
H.Merijn Brand wrote: On Wed, 4 Apr 2012 12:55:14 -0400, "Jens Rehsack via RT" can you throw it over into SQL::Statement Queue? It's reasonable to handle it there. Sure. I thought it would be DBD::File, as the *file* is empty and thus would do the same for all DBD::File related DBD's. It might even warrant a new attribute to allow this as normally in DBI a table always has columns (but not always rows). If a file is empty, it doesn't even have columns. Not all databases allow the creation of "empty" tables, as in a table with no rows at all. Postgres does, but Unify and Oracle do not. I have included the devel list to see how others think Damn right that tables/rows/etc with zero columns should be supported! All SQL or relational databases should support zero-column tables/rows. Those are very important to *the* relational model. For example, with (natural) join operations (or product or intersect), the zero-column table is analogous to the number 1 if it has one row and to the number 0 if it has none; the one-row version is the identity value for join. Or, if you define a unique/key constraint that ranges over zero columns, that is a simple way to restrict the table to having at most one row, which is useful say if you want to have a table storing singleton information. Practically speaking, supporting zero-column tables/rows just makes sense, it makes as much sense as supporting Perl array or hash variables that are allowed to be empty, or allowing empty sets, which are identity values for set unions or hash/array catenation. From DBI's perspective, as we use Array/Hash to represent rows, it is just the empty one of those to represent the zero-column one. Suffice it to say, if even one DBMS supports this, DBI should too, and this is an example others can follow. -- Darren Duncan
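The join-identity claim above can be checked with a tiny model of relations as lists of dicts (a Python sketch; in relational theory the two zero-column relations are conventionally called TABLE_DEE and TABLE_DUM):

```python
def natural_join(r, s):
    """Naive natural join: combine rows that agree on all shared columns."""
    out = []
    for a in r:
        for b in s:
            shared = set(a) & set(b)
            if all(a[c] == b[c] for c in shared):
                out.append({**a, **b})
    return out

people = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
dee = [{}]  # zero columns, one row: the identity value for join ("1")
dum = []    # zero columns, no rows: annihilates any join ("0")

print(natural_join(people, dee) == people)  # True: join with Dee is a no-op
print(natural_join(people, dum))            # []: join with Dum empties it
```

As the post says, a row with zero columns is just the empty hash, so DBI's Array/Hash row representations already accommodate this case naturally.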
Re: Removal of Oraperl from DBD::Oracle?
Yanick Champoux wrote: On 12-01-10 03:25 AM, Tim Bunce wrote: I can do that, no problem. The only thing I'm afraid of, is that by not giving a specific timeframe, human psychology will do what human psychology does and people will tend to disregard the deprecation notice. :-) But you are right, there is no rush to actually remove the module itself, and perhaps the warning alone will do its job of nudging most of the last stragglers toward DBI. There is another option, assuming that Oraperl is just a wrapper over DBD::Oracle. You could split off Oraperl into a separate CPAN distribution which depends on DBD::Oracle. People who need Oraperl install both, and those that don't, don't. After the first release of Oraperl in a separate distro, as long as you don't change something in DBD::Oracle that breaks the API that Oraperl depends on, you never have to release another version of Oraperl itself again. Then you don't have to think about deprecation of Oraperl so much as deprecation of DBD::Oracle features that Oraperl depends on, which is much less likely since other things probably use those features too. -- Darren Duncan
Re: Database/DBD Bridging?
Brendan Byrd wrote: On Fri, Sep 23, 2011 at 5:01 PM, Darren Duncan wrote: This essentially is exactly what you want to do, have a common query syntax where behind the scenes some is turned into SQL that is pushed to back-end DBMSs, and some of which is turned into Perl to do local processing. The great thing is as a user you don't have to know where it executes, but just that the implementation will pick the best way to handle particular code. I think of an analogy like LLVM that can compile selectively to a CPU or a GPU. Automatically, more capable DBMSs like Postgres get more work pushed to them to do natively, and less capable things like DBD::CSV or whatever have less pushed to them and more done in Perl. Yeah, that sounds right. So would this eventually become its own DBD module? Yes and no. It would not natively be a DBD module, but a separate module can exist that is a DBD module which wraps it. Kind of like how you have both SQLite and DBD::SQLite, say. Does it use DBI methods to figure out the specs of the system? For example, you were saying "less capable things like DBD::CSV". Is that determined by querying get_info for the ODBC/ANSI capability data? It would use whatever means make sense, which might be starting with the DBI methods for some basic functionality and then doing SELECT from the INFORMATION_SCHEMA to provide enhanced functionality. Of course. Something like this is huge, but it's also hugely important to make sure it gets into the hands of the Perl community. Absolutely. -- Darren Duncan
Re: Database/DBD Bridging?
I only got a copy of this message directly and not also via the list as expected, even though you addressed it to the list, but anyway ... Brendan Byrd wrote on 2011 Sep 22 at 6:25am PST/UTC-8: The problem with PostgreSQL's SQL/MED is that it's not Perl, and it won't work for some of the more abstract objects available as DBD. You may want to look into PL/Perl then, using Perl inside Postgres, to bring together some of these things, if it will work for you. I would like to tie this DBD::FederatedDB into DBIC, so that it can search and insert everything on-the-fly. Shoving everything into RAM isn't right, either, since DBD::AnyData can already do that. The whole point of having the databases process the rows one at a time is so that it can handle 10 million row tables without a full wasteful dump. Another thing to ask is whether what you're doing here is a batch process where some performance matters are less of an issue, or whether it is more on demand or more performance sensitive. It looks like Set::Relation can work out great for sucking in table_info/row_info data, and can be used as the temp cache as fractured rows come in. Perhaps, although Set::Relation is more about making database operations like join etc available in Perl, so you'll want to use various such tools to take advantage of it. But then no one besides myself has used it yet that I know of, and others often think of uses beyond what the creator imagined. I would be highly interested in developing this with you. I'm spread pretty thin with several other Perl modules, so I otherwise wouldn't tackle it right now. But, if you already have something started, we can try to finish it, and that's much better than starting from scratch alone. Do you have a repository for this new module yet? What are you calling it? I take it the module is building off of SQL::Statement?
If you mean the more robust/scalable solution, then that has 2 main parts, which is a standard query language specification, Muldis D, plus multiple implementations. It corresponds to but is distinct from the ecosystem of there being an ISO SQL standard and its implementations in various DBMSs. The query language, Muldis D, is not SQL but it is relevant here because it is designed to correspond to SQL and to be an intermediary form for generating/parsing SQL or translating between SQL dialects, or between SQL and other languages like Perl. (This means all SQL, including stored procedures.) This essentially is exactly what you want to do, have a common query syntax where behind the scenes some is turned into SQL that is pushed to back-end DBMSs, and some of which is turned into Perl to do local processing. The great thing is as a user you don't have to know where it executes, but just that the implementation will pick the best way to handle particular code. I think of an analogy like LLVM that can compile selectively to a CPU or a GPU. Automatically, more capable DBMSs like Postgres get more work pushed to them to do natively, and less capable things like DBD::CSV or whatever have less pushed to them and more done in Perl. The language spec is in github at https://github.com/muldis/Muldis-D and it is also published on CPAN in the pure-pod distribution Muldis-D, but the CPAN copy has fallen behind at the moment. The implementations I haven't started yet, or I did but canceled those efforts so to do it differently, so you can't run anything yet. But I know in my head exactly how I intend to do it. I intend to make a few more large updates to the Muldis D spec before starting in earnest on the implementation, so to make that simpler and easier to do (it is substantially complete other than some large refinements); some clues to this direction are in the file TODO_DRAFT in github. 
For timetable, if I could focus on this project I could have something usable in a few months; however, I also have a separate paying job that I'm currently focusing on which doesn't leave much time for the new project, though I hope to get more time to work on it maybe in mid-late October. If you are still interested in working on this, or you just want to follow it, please join the (low traffic) discussion list muldis-db-us...@mm.darrenduncan.net . FYI, this project is quite serious, not pie in the sky, and it has interest from some significant people in the industry, such as C.J. Date (well known for "An Introduction to Database Systems" that sold over 800K copies), and one of his latest co-authored books in 2010 explicitly covers part of my project with a chapter. -- Darren Duncan
Re: Database/DBD Bridging?
Brendan, Taking into account David's response ... I have several answers for you: 1. The functionality you are talking about is often referred to as a federated database, where you have one database engine that the client talks to which turns around and farms some of the work off to various other database engines, coordinating between them and doing some extra work itself. What you initially proposed is essentially using Perl as the client-most database engine, and what David proposed is using PostgreSQL as the client-most engine instead. 2. If PostgreSQL 9.1's SQL/MED abilities will do what you need, then use it. 3. If immense scalability isn't needed and you want to do this in Perl then I recommend you look at my Set::Relation CPAN module, which will handle a lot of the difficult work for you. Specifically, you would still manage for yourself the parceling out of queries to the various back-end DBMSs like Oracle or Excel or whatever, but Set::Relation will then take care of all the drudgery of taking the various rowsets returned from those and combine them into the various end-result queries you actually wanted to do. Each Set::Relation object contains a rowset and you can use its several dozen methods to do relational joins or aggregates or antijoins or various other things, all the functionality of SQL, but in Perl. Its main limiting factor is that it is entirely RAM-based, though this also makes it simpler. So you can do this right now. 4. I am presently implementing a relational DBMS in Perl which provides all the functionality you described and more, including query language support that lets you write code like you demonstrated. Strictly speaking the initial version is fully self-contained for simplicity, but a subsequent/spinoff version would add the ability to farm out to other database engines as per SQL/MED, and this *is* designed to scale. 
It even uses the paradigm you mention, where each underlying engine is essentially a namespace in which tables live, and you can join between them as if they were all local; or to be more accurate, each database *connection* has its own namespace and the underlying engine is just a quality of that connection, like with how DBI lets you have multiple connections with the same driver. So if this sounds like something you want to help create, please talk with me. -- Darren Duncan Brendan Byrd wrote: Okay, this is a big blue sky idea, but like all things open-source, it comes out of a need. I'm trying to merge together Excel (or CSV), Oracle, Fusion Tables, JSON, and SNMP for various data points and outputs. DBIC seems to work great for a large database with a bunch of tables, but what about a bunch of databases? I've searched and searched, and nobody seemed to have designed a DBD for multiple DBDs. There's DBD::Multi and Multiplex, but that's merely for replication. This would require reparsing of SQL statements. So, let's call this module idea DBD::IntegrateDB or MultiDB. It would be a module built from SQL::Statement (using the typical Embed instructions), so it would use that module's SQL Engine for parsing and processing SQL. We'll use a simple example of two databases: one Oracle, and one MySQL. This module loads both of them in with a DBI->connect string. Then the dev runs a prepare on the following SQL: SELECT book, title, b.person, age, dob FROM ora.books b INNER JOIN mysql.people p ON ( b.person_id = p.person_id ) So, "ora.books" is on the Oracle DB, and "mysql.people" is on the MySQL DB. The parser for this MultiDB would: 1. Use SQL::Parser to break down the SQL statement. 2. Figure out who owns what, in terms of tables and columns. (Complain about ambiguous columns if it has to.) 3. Use table_info calls to the separate DBI interfaces, including number of rows, cardinality (if available), etc. 4. Store the joining information. 5. 
Prepare two *separate* SQL statements for each DB. It would no longer be JOIN queries, but standard queries for the tables (including person_id, which wasn't included in the original SELECT statement). Then when the statement is executed:

1. The two SQL statements are executed for each DB.
2. The fetch_row sub would process each row one at a time for each DB.
3. If two IDs match, send a row back. Otherwise, cache the data and wait for something to match.
4. Repeat until the rows are exhausted on one or both sides. (One side for INNER, both sides for OUTER.)

Does anything like that exist? I'm not saying it's an easy operation, but if something like that can just start off with simple JOINs at first, it would be a miracle module. Imagine linking with more abstract DBI modules: Oracle to CSV to MySQL to Teradata to Sys to Sponge. Tell me you're not excited at the prospect of eventually creating fre
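[Editorial note: the row-matching described in the execute steps above is essentially a client-side join. A minimal sketch of the idea in Perl, with hypothetical statement handles for the two back ends; for brevity this uses an in-memory hash join rather than the streaming match-and-cache loop described:]

    use strict;
    use warnings;

    # Hypothetical: two statement handles, one per back-end database,
    # each already prepared/executed *without* the JOIN, both
    # selecting person_id so the rows can be matched client-side.
    sub hash_join_on_person_id {
        my ($sth_people, $sth_books) = @_;
        my (%people, @joined);

        # Build side: read every person row, keyed by person_id.
        while (my $p = $sth_people->fetchrow_hashref) {
            $people{ $p->{person_id} } = $p;
        }

        # Probe side: for each book, emit a combined row only when
        # the person_id matches (INNER JOIN semantics).
        while (my $b = $sth_books->fetchrow_hashref) {
            my $p = $people{ $b->{person_id} } or next;
            push @joined, { %$p, %$b };   # merge columns from both sides
        }
        return \@joined;
    }

An OUTER join would differ only in also emitting the unmatched rows, padded with undefs.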
Re: DBI drivers by duck-typing
Greg Sabino Mullane wrote: I believe that DBI should go away as an actual piece of code and instead be replaced by an API specification document, taking PSGI as inspiration. I'm having a hard time envisioning how this would work in practice. What I see is lots of duplicated code across the DBDs. So a "DBI bug" would be handled by updating the API, bumping the version, and waiting for individual DBDs to implement it? A recipe for a large collection of supported API versions out in the wild, I would imagine. I envision that DBDs will tend to still have a shared dependency with each other as they do now, and that shared code essentially being what DBI is now. So when the API is updated to handle this "DBI bug", then generally only the shared dependency would need to be updated, same as now. The main difference in my proposal is that this code sharing is no longer *mandatory* to use the API. The concept of driver-specific methods, like pg_*, just become ordinary DBD methods that are beyond what is defined by the DBI spec. Seems a sure recipe for namespace collisions. Also makes it much harder to spot any DBMS-specific hacks in your Perl code. You can still use namespaces under my proposal. The difference is that it is no longer mandatory for the shared code to be updated for a new DBMS to be supported. As for DBMS-specific hacks, those still abound today, because every DBMS has unique SQL syntax and unique behaviors for some of the same syntax; DBI has never abstracted these away; it just prevents the need for some other kinds of DBMS-specific hacks. -- Darren Duncan
Re: DBI drivers by duck-typing
Tim Bunce wrote: On Mon, Sep 12, 2011 at 01:13:58PM -0700, Darren Duncan wrote: To be brief, ... Darren, if you want to do something really directly useful for the DBI ecosystem I would encourage you (or anyone else) to work on creating a DBI test suite that's independent of the DBI distribution. Tim. p.s. I'm unlikely to pay much attention to any proposal that requires getting a significant number of driver authors to make significant changes. A common test suite is a far higher priority in my opinion, with more benefits for more people more quickly. Sure, I could do that, eventually, maybe in 2012. It would in all practical sense be a prerequisite for my proposal too, if only to help verify that any implementation of my proposal didn't break something that works now. -- Darren Duncan
Re: DBI drivers by duck-typing
David Nicol wrote: Are you asking for something beyond documenting the DBI/DBD interface to the point where a DBD can be used more directly than through the DBI? Aside from requesting that everyone abandon the framework mentality? Are you asking for a stronger set of conventions in DBDs that will make them more useful away from DBI? Are you proposing to write thinner glue? Okay, actually, for the short term I propose something considerably less controversial, which can set the stage and ease transition for my other proposal, but without committing us to it. I propose making a few additions to the DBI as it is now, such as a few routines in the various user-facing DBI packages, which are then documented as the new way to introspect certain things, such as whether the object or package someone is looking at does indeed implement the DBI protocol, and which version, etc., and document that practices like users testing $dbh for its package name are deprecated and not expected to continue working in the future. Also make additions to any or all DBDs as required so that one can use them directly if they want to, but they also continue to work the old way. Said DBDs would also document that direct use is the recommended way to use them, and that using them by way of "DBI" is deprecated. And have documentation describing/defining the DBI's API. Then wait a year or two, and then make further updates that cause uses of the deprecated APIs to warn. Then wait a year or two, and optionally remove those, or wait longer considering how mature these things are. So basically I propose making the small additions so that users can experience the inversion of control, but also use the DBI the old way too, and so give a solid transition period. Then, at any time, one can choose to create a new DBD that just works the new/inverted way, and users can use that and legacy DBDs at the same time. -- Darren Duncan
Re: DBI drivers by duck-typing
David Nicol wrote: On Mon, Sep 12, 2011 at 5:20 PM, Darren Duncan wrote: How mandatory, currently, is the "mandatory shared codebase?" Are there really traps and snares preventing a different framework from using DBD modules? I'm presuming that there aren't; ICBW. So getting away from the "framework" mentality is the point here. I've got the idea (and also the feeling that carrying on this discussion on-list is dubious; but I think I might speak for others) I see the dbi-dev list to be equally about discussing implementation/design matters that are common to DBD development, because they share a user API by design, and not just about the "DBI" distribution itself. So in that respect, I consider this very on topic, at least if my proposal is accepted as the official evolution of the DBI itself, from code to a spec, where the latter, for the foreseeable future, matches the former as exactly as possible. Ultimately then, this list would be about what it is currently about to a large extent, which is DBD development, and managing the API they agree to share. that your proposal is a change in one's way of looking at the architecture, rather than a proposal for a change to the DBI. A lot of other framework/module projects exist: firefox and qpsmtpd are but the first examples that come to mind. Sometimes another framework can replace the core, and plugins can be reused -- I think but am not certain that chrome can handle some firefox plugins without modification. I've often wanted to formalize the qpsmtpd plugin interface to the point where a different MTA could use qpsmtpd plugins unchanged, but haven't found the time for that. Actually the big SMTP plugin framework is "milter" which is defined as a clean interface to the point where other MTAs offer Milter interfaces. Are you asking for something beyond documenting the DBI/DBD interface to the point where a DBD can be used more directly than through the DBI? 
Aside from requesting that everyone abandon the framework mentality? Are you asking for a stronger set of conventions in DBDs that will make them more useful away from DBI? Are you proposing to write thinner glue? Fundamentally I propose an inversion of control, where users invoke DBD modules directly that optionally invoke or compose DBI to help them, rather than users invoking DBI that uses DBD modules to help it. The other stuff flows from that. This is an outline of what I propose:

1. There be a paradigm shift of sorts for the DBDs themselves, where they are conceived as something that is meant to be used directly by users, rather than having their use mediated by a separate module which is DBI. By normal Perl conventions this means that users should write something like this:

    use DBD::Pg;
    my $dbh = DBD::Pg->connect('...', ...);
    ...

rather than:

    use DBI;
    my $dbh = DBI->connect('dbi:pg:...', ...);
    ...

and otherwise use it the same. The choice of DBD can still be data-defined, for example involving: my $dbh = $dbd_name->connect('...', ...);

2. Although DBDs are intended to be used directly, they promise to implement a common user API, which is formally documented. Each DBD has its own documentation, but also refers to the spec, and the spec includes user documentation which each DBD effectively composes by reference.

3. The separate "DBI" module as it is known now would no longer exist as far as regular users are concerned. For those DBDs wanting to share that code, the DBI code just becomes an implementation detail of said DBD modules rather than something users have to think about.

4. As DBI is an implementation detail of DBDs, it can be refactored into something that doesn't know about individual DBDs, eg the hard-coded driver function prefix registry can be eliminated, and it would just provide default implementations of the routines defined in the documented API spec, to be composed by the DBDs, or utilities, so not a huge amount changes.

5. 
A separate lightweight "DBI" module could still exist, say one that just provides "DBI->connect()", so existing DBI users can upgrade with zero changes (rather than even a single change) without having anything break; but all it does is turn around and invoke the DBD using the defined API that users should normally use for new/updated projects. This shim would be considered deprecated. 5b. On the other hand, I would wonder how much code there is out there that does things like testing whether an object it is given is of a specific, "DBI::*"-named class, and so might break if it got a "DBD::Pg::*"-named class instead. Probably not most code, but if there is any, then it might break. Not that modern code should be doing this. Something to look into. -- Darren Duncan
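[Editorial note: the legacy shim described in point 5 could be very small. A rough sketch; the DSN parsing is deliberately simplified, and it assumes each DBD exposes a connect() per the proposed spec:]

    package DBI;   # legacy-compatible shim, not the real DBI
    use strict;
    use warnings;

    sub connect {
        my ($class, $dsn, $user, $pass, $attr) = @_;

        # "dbi:Pg:dbname=test" -> driver "Pg", rest "dbname=test"
        my (undef, $driver, $rest) = split /:/, $dsn, 3;
        my $pkg = "DBD::$driver";

        # Load the driver module and hand off to its own connect().
        (my $file = $pkg) =~ s{::}{/}g;
        require "$file.pm";
        return $pkg->connect($rest, $user, $pass, $attr);
    }

    1;

Legacy code calling DBI->connect('dbi:Pg:...') would then get whatever object DBD::Pg->connect() returns.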
Re: DBI drivers by duck-typing
David Nicol wrote: On Mon, Sep 12, 2011 at 3:13 PM, Darren Duncan wrote: So what say you? I think you can do this without any change to DBI. You have your own DBI-like framework; you could declare that anything that passes your conformance suite is compliant, and offer low-impact patches to your favorite DBD modules where the glue is too thick for your aesthetics. My own DBI-like project is a red herring for this discussion. How mandatory, currently, is the "mandatory shared codebase?" Are there really traps and snares preventing a different framework from using DBD modules? I'm presuming that there aren't; ICBW. My proposal here is all about today's DBI users, not my own DBI-like project. In fact, the whole point of my proposal is that there doesn't need to be a "framework" at all. No common piece of code you need to use any database. I mentioned PSGI for a reason. PSGI is a *protocol*, not a framework. You don't need any specific piece of code to use PSGI. I could have said HTTP etc instead, but PSGI was more appropriate because it basically defines a protocol consisting of a routine API that takes arguments formatted a certain way as input and has a result formatted a certain way, and no dependencies other than Perl itself. So getting away from the "framework" mentality is the point here. -- Darren Duncan P.S. I expressly do *not* call my project a "framework" either.
Re: DBI drivers by duck-typing
I was sent a response to this off-list, part of which I'll reply to on-list. The response bit was: "What happens to the 'which drivers are available' part of the DBI interface?" To this I say: The API definition would say that each DBD has something which can be easily scanned for, and so an external DBI utility function can easily find all the available drivers, same as it does now. That thing to scan for might be a simple is_a_DBI() main package routine, for example, or something else. In contrast, if the list of drivers to search for is hard-coded in DBI rather than DBI just checking all available packages on the system, well that solution could also exist as said external DBI utility function. -- Darren Duncan
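[Editorial note: such an external "which drivers are available" utility could work much as DBI->available_drivers does now, by walking @INC. A sketch; the spec-conformance check (e.g. the is_a_DBI() routine mentioned above) is left out, so this only finds candidate modules:]

    use strict;
    use warnings;
    use File::Spec;

    # Find installed driver modules by scanning @INC for DBD/*.pm.
    # Whether a found module really implements the documented API
    # would then be checked via the proposed introspection routine.
    sub available_drivers {
        my %seen;
        for my $dir (@INC) {
            my $dbd_dir = File::Spec->catdir($dir, 'DBD');
            next unless -d $dbd_dir;
            opendir my $dh, $dbd_dir or next;
            for my $file (readdir $dh) {
                $seen{$1} = 1 if $file =~ /^(\w+)\.pm$/;
            }
        }
        return sort keys %seen;
    }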
Re: DBI drivers by duck-typing
Replying to myself, ... I believe that this fundamental design change can be accomplished with almost full, or even entire, backwards compatibility for existing DBI-using codebases. This is partly achieved by a "DBI" package still being available which essentially provides shims for people saying "DBI->connect(...)" that wrap say "DBD::Pg->new(...)" for legacy code, while newer code would write the latter directly instead and cut out the shim. Either way, the result of invoking the above would be an object providing the API that a $dbh currently has, though its package name may be different. While it should carry a full major-version update because it isn't guaranteed compatible, I see that it could be made backwards-compatible much to the extent that Perl 5.14 will run code written for Perl 5.8 or 5.6 or such. I believe that the very first version with this fundamental change should keep all other kinds of changes to a minimum, to aid transition, and then other API changes can happen later, with new API versions. -- Darren Duncan Darren Duncan wrote: To be brief, ... I don't know if this has come up in past discussions about the next major DBI version, but I'll say it now, since it's also what I'm doing with my own DBI-alike ecosystem to be. I believe that DBI should go away as an actual piece of code and instead be replaced by an API specification document, taking PSGI as inspiration. A DBD would be re-defined as a module that implements said documented API specification, and that's it. The DBDs would be mutually severed from there being a mandatory shared DBI codebase which can aid in their maturity and optimizability. The existing DBI code can be refactored into reusable utility modules or roles that a DBD can *optionally* depend on in its implementation. This also means, for example, that said DBI roles could more freely be Moose roles, while DBDs that don't want that overhead aren't required to use it. 
Or the shared parts could be identical to the current DBI, more or less, same dependencies. Or the choice between XS vs pure Perl is more clean, and is a quality of just each individual DBD. The concept of gophering in DBI would change into a "middleware" concept. The concept of driver-specific methods, like pg_*, just become ordinary DBD methods that are beyond what is defined by the DBI spec. So the new DBD is anything that quacks like a DBI, as defined by documentation. And of course, the DBI document would be versioned, and part of the API it defines is that one can programmatically query what version(s) of the DBI spec the DBD provides. So what say you? -- Darren Duncan
DBI drivers by duck-typing
To be brief, ... I don't know if this has come up in past discussions about the next major DBI version, but I'll say it now, since it's also what I'm doing with my own DBI-alike ecosystem to be. I believe that DBI should go away as an actual piece of code and instead be replaced by an API specification document, taking PSGI as inspiration. A DBD would be re-defined as a module that implements said documented API specification, and that's it. The DBDs would be mutually severed from there being a mandatory shared DBI codebase which can aid in their maturity and optimizability. The existing DBI code can be refactored into reusable utility modules or roles that a DBD can *optionally* depend on in its implementation. This also means, for example, that said DBI roles could more freely be Moose roles, while DBDs that don't want that overhead aren't required to use it. Or the shared parts could be identical to the current DBI, more or less, same dependencies. Or the choice between XS vs pure Perl is more clean, and is a quality of just each individual DBD. The concept of gophering in DBI would change into a "middleware" concept. The concept of driver-specific methods, like pg_*, just become ordinary DBD methods that are beyond what is defined by the DBI spec. So the new DBD is anything that quacks like a DBI, as defined by documentation. And of course, the DBI document would be versioned, and part of the API it defines is that one can programmatically query what version(s) of the DBI spec the DBD provides. So what say you? -- Darren Duncan
Re: Add Unicode Support to the DBI
Another wrinkle to this is the fact that identifiers in the database, such as column names and such, are also character data, and have an encoding. So for any DBMSs that support Unicode identifiers (as I believe a complete one should, even if they have to be quoted in SQL) or identifiers with trans-ASCII characters, we have to account for those too, making sure that the various Perl-side code correctly matches or doesn't match those identifiers, and so on. -- Darren Duncan
about prefix requests
Is it necessary for each DBI driver to have a prefix registered in DBI itself, or is this only necessary when the driver wishes to expose extra public methods? If the DBD doesn't expose custom methods, can it be used without the prefix registration? -- Darren Duncan
Re: DBD::mongDB
Rudy Lippan wrote: On 11/03/2010 10:08 PM, Darren Duncan wrote: Darren Duncan wrote: Rudy Lippan wrote: Does anyone know if there are any DBD::drivers that do not use some variant of SQL? I ask because I am planning on implementing the driver using mongoDB's native query language initially, but having a query_language attribute so that it would be possible to add SQL later if desired/wanted. As I understand it, DBI is agnostic to the database query language in use ... At least that is what the docs say :) Ignoring what existing DBD:: support, do you see anything in the fundamental DBI-defined API that is SQL specific and has no analogy for other DBMSs? In my own effort to rewrite DBI itself (Muldis::Rosetta::Interface[|::*]) which wasn't constrained to just SQL-think, I thought through various matters and still ended up with fundamentally the same API that DBI has (Machine=driver, Process=connection,...) and in particular found that it could easily be agnostic to the DBMS query language and essentially just pass it through. The largest design difference I have is that while DBI's interface wraps its driver counterparts, Muldis' interface is just a set of roles which its drivers ("engines") compose and users invoke the driver/engine directly; hopefully DBIv2 will do likewise rather than continuing the wrapper approach. Furthermore, to answer your other question ... I'm not aware of any existing DBD:: that doesn't take some variant of SQL. If that is the case, let the hate mail begin... What hate mail, for what reason? -- Darren Duncan
Re: DBD::mongDB
Darren Duncan wrote: Rudy Lippan wrote: Does anyone know if there are any DBD::drivers that do not use some variant of SQL? I ask because I am planning on implementing the driver using mongoDB's native query language initially, but having a query_language attribute so that it would be possible to add SQL later if desired/wanted. As I understand it, DBI is agnostic to the database query language in use, same as ODBC etc is, and same as Muldis::Rosetta is, and so an implementer of the API it defines can use any query language it wants, or even support several. I recommend just supporting MongoDB's native query language initially as per your plan. If the ability to query in SQL is desired then you can provide that as an alternative, but under no circumstances should the ability to use the DBMS' native query language be denied as an option to users. Furthermore, to answer your other question ... I'm not aware of any existing DBD:: that doesn't take some variant of SQL. However, when I get around to making a DBD:: for my Muldis DBMS, it will support *both* Muldis D and SQL. -- Darren Duncan
Re: DBD::mongDB
Rudy Lippan wrote: Does anyone know if there are any DBD::drivers that do not use some variant of SQL? I ask because I am planning on implementing the driver using mongoDB's native query language initially, but having a query_language attribute so that it would be possible to add SQL later if desired/wanted. As I understand it, DBI is agnostic to the database query language in use, same as ODBC etc is, and same as Muldis::Rosetta is, and so an implementer of the API it defines can use any query language it wants, or even support several. I recommend just supporting MongoDB's native query language initially as per your plan. If the ability to query in SQL is desired then you can provide that as an alternative, but under no circumstances should the ability to use the DBMS' native query language be denied as an option to users. -- Darren Duncan
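[Editorial note: for illustration, the proposed query_language switch might look like this from the user's side. The DSN, attribute name, and query syntax are hypothetical, not an actual DBD::mongoDB API:]

    use strict;
    use warnings;
    use DBI;

    # Hypothetical usage: the driver defaults to the native query
    # language; a driver-private attribute selects the language.
    my $dbh = DBI->connect('dbi:mongoDB:host=localhost', undef, undef, {
        RaiseError     => 1,
        query_language => 'native',   # or 'sql', if later supported
    });

    # With the native language selected, prepare() would just pass
    # the query string through to the server unparsed.
    my $sth = $dbh->prepare(
        '{ "find": "people", "filter": { "age": { "$gt": 30 } } }'
    );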
Re: Perl 5.13.3+ MAY BREAK COMPILED DRIVERS - Please test DBI 1.613_71!
Okay, I'm not sure I understand the problem here. I thought that the point of PERL_POLLUTE was to provide things that were removed from the Perl core so that code which depends on the old removed APIs would continue to work with newer Perls that removed them if said code enables PERL_POLLUTE. DBD::SQLite *does* define PERL_POLLUTE, but if I take it out, then it still builds fine otherwise as-is on Perl 5.13.4 with DBI 1.613_71. So DBD::SQLite already appears to use the newer PL_ versions, so what is PERL_POLLUTE for? I found that under Perl 5.12.1, just removing PERL_POLLUTE caused build failures, which cited old versions without the PL_, but then I found that these old versions weren't in the DBD::SQLite source at all, but rather were being added by ppport.h. So then when I also removed ppport.h (and PERL_POLLUTE), then DBD::SQLite built with Perl 5.12.1 and DBI 1.613_71. I haven't tested on older Perls yet. So, in light of this, should I actually be changing anything in DBD::SQLite? Or should it be retaining both the PERL_POLLUTE and ppport.h? You seemed to be saying that including PERL_POLLUTE means broken on Perl 5.13.4 but that doesn't seem to be the case. Guidance please. Thank you. -- Darren Duncan Tim Bunce wrote: Short version: Please download, build, test *and install* DBI 1.613_71, then download, build and test any compiled drivers you use to check they work with DBI 1.613_71. Let us know about any failures *and* successes. Also grep the source code of the driver to see if it defines PERL_POLLUTE. If it does, let us know. Long version: Perl 5.13.3+ removes support for PERL_POLLUTE. PERL_POLLUTE enables use of old-style variable names, without the PL_ prefix (e.g. sv_undef instead of PL_sv_undef). The DBI has, for many years, enabled PERL_POLLUTE mode in DBIXS.h, so it's likely that compiled drivers use some old-style variable names. These drivers won't work with Perl 5.13.3+. 
To aid testing for this, DBI 1.613_71 doesn't enable PERL_POLLUTE mode. So please test compiled drivers against DBI 1.613_71. Thanks! Tim.
Re: Any reasons not to release DBI 1.614?
Tim Bunce wrote: On Tue, Aug 31, 2010 at 08:55:32AM -0700, David E. Wheeler wrote: On Aug 31, 2010, at 2:52 AM, Tim Bunce wrote: It's back in. I may remove it for 1.615 or, more likely, may leave it out and individual developers deal with failure reports on perl 5.13.3+/5.14. You may “remove it…or, more likely, leave it out”? Huh? Ug. I meant "may restore it or, more likely, leave it out". Thanks. Tim. I suggest releasing DBI *without* the pollute stuff and let the drivers catch up. The drivers would still work with Perls before 5.13 without changes. In particular, it will make it much easier to test that drivers are correct if DBI isn't muddling things up by perpetuating the pollution. -- Darren Duncan
Re: Any reasons not to release DBI 1.614?
Tim Bunce wrote: What's the state of play? Will DBI 1.614 still lack the POLLUTE or did you put that back in? -- Darren Duncan
Re: Perl 5.13.3+ MAY BREAK COMPILED DRIVERS - Please test DBI 1.613_71!
Tim Bunce wrote [to dbi-dev]: Short version: Please download, build, test *and install* DBI 1.613_71, then download, build and test any compiled drivers you use to check they work with DBI 1.613_71. Let us know about any failures *and* successes. Also grep the source code of the driver to see if it defines PERL_POLLUTE. If it does, let us know. Long version: Perl 5.13.3+ removes support for PERL_POLLUTE. PERL_POLLUTE enables use of old-style variable names, without the PL_ prefix (e.g. sv_undef instead of PL_sv_undef). The DBI has, for many years, enabled PERL_POLLUTE mode in DBIXS.h, so it's likely that compiled drivers use some old-style variable names. These drivers won't work with Perl 5.13.3+. To aid testing for this, DBI 1.613_71 doesn't enable PERL_POLLUTE mode. So please test compiled drivers against DBI 1.613_71. I didn't try to build them with DBI 1.613_71, but I did a grep search for PERL_POLLUTE in the latest CPAN versions of DBD-SQLite and DBD-Pg. * DBD-SQLite-1.30_05 *does* #define PERL_POLLUTE, once in SQLiteXS.h * DBD-Pg-2.17.1 does not have any PERL_POLLUTE Obviously, then, DBD::SQLite will need an update to fix this problem, though it sounds like the fix should be some relatively simple symbol renaming. I could probably do this, but not until at least a few days from now when I have more time to do that *and* test it. -- Darren Duncan
test DBD::SQLite 1.30_04 - write-ahead logging
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI Driver) version 1.30_04 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.30_04/ This developer release bundles the brand-new SQLite version 3.7.2, which (since 3.7.0) adds support for write-ahead logging (WAL). See http://sqlite.org/wal.html for the details of the WAL support that SQLite now has. WAL is an alternative method by which SQLite implements atomic commit and rollback, distinct from its rollback journal method. It offers much improved concurrency and performance in many circumstances, in part because database readers and writers don't block each other. There are also trade-offs. By default, SQLite and DBD::SQLite will continue to use the older rollback journal method, and you can switch to the new WAL method with the SQL command: PRAGMA journal_mode=WAL; There are also numerous other additions, changes, or fixes in either DBD::SQLite or SQLite itself since the last production DBD::SQLite release 1.29 of 2010 January, which bundles SQLite 3.6.22. For the change details since then, see http://sqlite.org/changes.html or http://search.cpan.org/src/ADAMK/DBD-SQLite-1.30_04/Changes as appropriate. TESTING NEEDED! Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. This 1.30_04 release will probably be released as a production 1.31 within a week if no show-stopper problems are found. Please note the compatibility caveats of using pre-3.7.x versions of SQLite on databases that had been used with WAL mode on. In order to use an older SQLite on the database, the database must have last been used by a 3.7.x in journal mode. See http://sqlite.org/wal.html for details. 
Please note that, if you receive nondescript "disk I/O error" errors from your code after the update, see if the failing code involves a process fork followed by unlinking of the database, such as if it was temporary for testing. The DBD::SQLite test suite had needed an update to act more correctly, which the update to 3.7.x from 3.6.x exposed; 3.6.x didn't complain about this. If you want in on DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administering): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. If you feel that a bug you find is in SQLite itself rather than the Perl DBI driver for it, the main users email forum for SQLite in general is at: http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ... where you can report it as an appropriate list post (the SQLite issue tracking system is no longer updateable by the public; posting in the list can cause an update there by a registered SQLite developer). Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
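[Editorial note: switching a database to WAL from DBD::SQLite is a one-liner; a minimal sketch, with an illustrative file name:]

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
        { RaiseError => 1 });

    # Switch this database to write-ahead logging. The pragma
    # returns the journal mode now in effect ("wal" on success).
    my ($mode) = $dbh->selectrow_array('PRAGMA journal_mode=WAL');
    print "journal mode is now: $mode\n";

    # To go back to the default rollback-journal method:
    # $dbh->do('PRAGMA journal_mode=DELETE');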
Re: Take care with version numbers (eg DBD::Pg)
Tim Bunce wrote: My take on this, for the record, is to prefer two part numbers in the form X.YYY where YYY never has a trailing zero. Short, sweet, simple. Tim. p.s. No one commented on the DBI going from 1.609 to 1.611 :) You mean now? 1.611 came out on April 29th. Or did you mean the completely different 1.611_93? Confusing! And that points to an example of something else that should become common practice for numbers. Projects that have any version X.Y_Z should never also have a version X.Y for the same X plus Y. Instead, the Y should always increment when moving between a developer release and a production release. See how DBD::SQLite does things for an example that I think is better. This is also analogous to Perl's own X.Y.Z versioning scheme, where there are never developer and production releases with the same Y. It's much less confusing that way. It also avoids the confusion of relating 1.002003 to 1.002_003, say; are those the same version or different versions? So, if the next DBI release after the latest 1.611_93 is going to be a stable release, then keep the current plan for it to be 1.612. Then, when making a subsequent dev release, call it 1.613_1 or 1.613_001 or such. Does that not make more sense? -- Darren Duncan
Re: New DBD::File feature - can we do more with it?
H.Merijn Brand wrote: A bit more thought leads to better portability ...

    my $hash = {
        meta => {
            f_dir      => "data",
            f_schema   => undef,
            f_ext      => ".csv/r",
            f_encoding => "utf8",
            f_lock     => 1,
        },

Something very important here, arguably the most important thing to get right at the start, is that the API has to be versioned. We have to start off with a ridiculously simple metaformat, such as simply a 2-element array. The first element of this array declares the version of the format of the metadata, which is provided as the second element. For example:

    my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
        meta => [['DBI','http://dbi.perl.org/','0.001'],{ ... }],
    }) or die DBI->errstr;

Or if that's too complicated, then:

    my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
        meta => ['0.001',{ ... }],
    }) or die DBI->errstr;

The format here is based on that used by Perl 6 for fully-qualified Perl or module names, with parts [base-name, authority, vnum]. Something similar is also mandated by Muldis D for starting off code written in it. For that matter, look at the META.yml files bundled with Perl modules these days, which declare the version of the spec they adhere to. The point is that we will give ourselves a lot of room to evolve cleanly if we start using declared versions at the beginning, so that any subsequent version of DBI can interpret the user arguments unambiguously to their intent, because the users are telling it: this input is in this version of the format. The second element can even change over time, say from a hash-ref to an array-ref or vice-versa, depending on what the first element is. So I strongly suggest starting from the point of mandating declared versions and going from there. -- Darren Duncan
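[Editorial note: the benefit of the declared-version idea is that the receiver can dispatch on the version element before it touches the payload at all. A sketch; the version string and handler behavior are hypothetical:]

    use strict;
    use warnings;

    # Interpret a versioned meta argument of the simpler
    # ['0.001', { ... }] form proposed above.
    sub interpret_meta {
        my ($meta) = @_;
        die "meta must be a 2-element array ref"
            unless ref $meta eq 'ARRAY' and @$meta == 2;
        my ($version, $payload) = @$meta;

        if ($version eq '0.001') {
            # In this (hypothetical) version the payload is a
            # hash ref of f_* options.
            die "payload must be a hash ref"
                unless ref $payload eq 'HASH';
            return $payload;
        }
        # A later format version could carry an array ref or
        # anything else; unknown versions are rejected outright.
        die "unsupported meta format version '$version'";
    }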
Re: New DBD::File feature - can we do more with it?
Jens Rehsack wrote: because of a bogus implementation for PRECISION, NULLABLE etc. attributes in DBD::CSV (forced by limitations of SQL::Statement) we have a new attribute for tables in DBD::File meta data: 'table_defs'. This is filled when a 'CREATE TABLE ...' is executed and copies the $stmt->{table_defs} structure (containing column name and some more information - whatever could be specified using (ANSI) SQL to create tables). Could it make sense to have a DBD::File supported way to store and load this meta-data (serialized, of course)? I would really like to do this - this would bring us a big step in the right direction. DBD::DBM could store it in its meta-data (instead of saving column names it could save the entire table_defs structure), but what should DBD::CSV do? You have several possible options and the short answer is "let the user tell you". For backwards or sideways compatibility, don't store any extra metadata by default; the only metadata is what can be gleaned from the normal CSV file itself. Then, if the user tells you to via extra DBI config args, retrieve or store extra metadata in accordance with those args. For example, the extra args may give you a second file name in which the meta data is / is to be stored. Or the extra args may indicate that one of the first rows in the CSV file contains metadata rather than normal data. For example, if all of your metadata is column-specific, the CSV file could contain 2 heading rows instead of the usual one (or none). The first heading row would be the name of the column as usual. The new second heading row would be the encoded type/constraint/etc definition that says how to interpret the data of the column, and then the third-plus rows are the data. But the point is, let the user tell you how to interpret the particular CSV files they throw at you, and so DBD::CSV is then more flexible and compatible. 
I will be taking a similar approach in the near future when implementing my Muldis D language over legacy systems such as SQL databases, whose metadata generally aren't as rich as mine. Over a SQL database, I would provide at least 3 options to users that they specify when connecting to a SQL database using Muldis D: 1. There is no extra metadata and just reverse-engineer the SQL metadata; this is also the degrade gracefully approach. 2. The extra metadata is stored in tables of another SQL schema of the same database and the user names that schema when connecting; this way, users can name their own stuff anything they want and they pick where my metadata goes, out of their way. 3. The extra metadata is stored outside the database in a local file; this file is named by the user when connecting. I think that #2 is the best option for that. -- Darren Duncan
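As a rough illustration of the two-heading-row idea suggested above (this is not an existing DBD::CSV feature, and the type syntax is invented; a real implementation would use Text::CSV rather than split):

```perl
use strict;
use warnings;

# Row 1: column names; row 2: invented per-column type/constraint
# declarations; remaining rows: ordinary data.
my @lines = (
    "id,name,joined",
    "INTEGER NOT NULL,VARCHAR(30),DATE",
    "1,Alice,2009-01-05",
);
my @cols  = split /,/, shift @lines;
my @types = split /,/, shift @lines;
my %type_of;
@type_of{@cols} = @types;   # e.g. $type_of{name} is "VARCHAR(30)"
# @lines now holds only the data rows, to be interpreted per %type_of.
```

Whether the second heading row is present would itself be one of the extra DBI config args, so files without it keep working unchanged.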
Re: Extract SQL-Engine basics from DBD::File
Jens Rehsack wrote: during refactoring of DBD::File and new wishes (e.g. SQL_IDENTIFIER_CASE from Martin Evans, getinfo() support, ...) and my own try of a pure-perl DBD for system tables (DBD::Sys) I see much common code. I would like to extract the common code into a base class DBD - maybe named DBD::SQLPerl (with SQL placed in front for private method and attribute naming - they should have the prefix "sql_"). I had already been thinking of this need for a while, but in that context it would have been a utility module, say with "common" in its name or something. Unless there is already a precedent otherwise, I recommend leaving the DBD::* namespace for actual DBI drivers, with utilities going in some other namespace, though perhaps DBD::Util::* or something. The choice may also be affected by whether the new module is meant to be subclassed by a DBI module or just "used" ... the latter makes more sense if it would be used by multiple component classes like the driver and statement handles, unless the utility has one for each of those, but I wouldn't make it a subclassing situation. YMMV. In any event, I intend to personally do something akin to what you propose but more comprehensively, sometime around the August-September 2010 timeframe. Around that time I would be making the reference implementation of my Muldis D language, which is a self-contained Perl relational DBMS with nearly all the features you'd see in a regular SQL DBMS, including multi-column joins, subqueries, unions, recursion, nested transactions, etc. This would be componentized such that most of it could be reused directly as utilities to do what you want, such as by adding a SQL parser on the front and changing the storage end to talk to the system metadata or Google or CSV files or whatever. Don't let that stop you from doing what you're doing though, which looks like more of a refactor, but I do intend to take it to the next level before long. -- Darren Duncan P.S. See also Genezzo.
ANN - DBD::SQLite 1.27 - test it!
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI Driver) version 1.27 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.27/ This release is the newest one intended for production use and has no known serious bugs. The previous version for production was 1.25, which was released on 2009 April 23. There were many improvements and changes between these 2 versions, and many bugs fixed; see http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.27/Changes for a complete list. A small sample of the changes since 1.25: - The bundled SQLite version is now 3.6.20, up from 3.6.13 (both were the Amalgamation). - Foreign key constraints are now supported and enforceable by SQLite. However, to aid backwards compatibility and give you a transition period to ensure your applications work with them, this feature is not enabled by default. You enable (or disable) foreign key enforcement by issuing a pragma. - Read the Changes file linked above, especially the sections that say "changes which may possibly break your old applications". As usual, testing of this release is appreciated and recommended. If you use referential constraints in your schema (which SQLite still ignores by default), you should do extensive testing to ensure that they will work when you issue "PRAGMA foreign_keys = ON". It is anticipated that foreign keys will be enabled by default within 1 or 2 production releases, and you will have to cope with it. If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. Whining is also welcome!
If you feel that a bug you find is in SQLite itself rather than the Perl DBI driver for it, the main users email forum for SQLite in general is at: http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ... where you can report it as an appropriate list post (the SQLite issue tracking system is no longer updateable by the public; posting in the list can cause an update there by a registered SQLite developer). Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
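For example, enabling the enforcement per connection looks like this (a sketch using an in-memory database; the connect attributes are just illustrative, but the pragma itself is the documented mechanism):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", undef, undef,
    { RaiseError => 1, AutoCommit => 1 });

# Foreign keys are off by default in this release; turn enforcement
# on for this connection (or OFF to keep the old behaviour).
$dbh->do("PRAGMA foreign_keys = ON");
```

Note the pragma is per-connection, so it must be issued after every connect, not once per database file.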
test DBD::SQLite 1.26_06 please
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI Driver) version 1.26_06 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.26_06/ TESTING NEEDED! Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. This developer release includes both several changes which *might break your applications* if not accounted for, and it has a lot of code refactoring. This release should also fix the known problem with full-text search (FTS3) that was reported in the 1.26_05 release but had existed in many prior versions; the included test for that problem now passes. From the Changes file: *** CHANGES THAT MAY POSSIBLY BREAK YOUR OLD APPLICATIONS *** - Removed undocumented (and most probably unused) reset method from a statement handle (which was only accessible via func().) Simply use "$sth->finish" instead. (ISHIGAKI) - Now DBD::SQLite supports foreign key constraints by default. Long-ignored foreign keys (typically written for other DB engines) will start working. If you don't want this feature, issue a pragma to disable foreign keys. (ISHIGAKI) - Renamed "unicode" attribute to "sqlite_unicode" for integrity. Old "unicode" attribute is still accessible but will be deprecated in the near future. (ISHIGAKI) - You can see file/line info while tracing even if you compile with a non-gcc compiler. (ISHIGAKI) - Major code refactoring. (ISHIGAKI) - Pod reorganized, and some of the missing bits (including pragma) are added. (ISHIGAKI) The bundled SQLite version (3.6.19) is unchanged from last time. 
If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. Whining is also welcome! If you feel that a bug you find is in SQLite itself rather than the Perl DBI driver for it, the main users email forum for SQLite in general is at: http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ... where you can report it as an appropriate list post (the SQLite issue tracking system is no longer updateable by the public; posting in the list can cause an update there by a registered SQLite developer). Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
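For example, the renamed attribute from the 1.26_06 changes above is passed at connect time (the file name here is made up; the old "unicode" spelling still works for now but is slated for deprecation):

```perl
use strict;
use warnings;
use DBI;

# New attribute spelling with the sqlite_ prefix:
my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", undef, undef,
    { RaiseError => 1, sqlite_unicode => 1 });
```

With the attribute set, text values fetched from the database are marked as Perl Unicode strings rather than raw bytes.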
test DBD::SQLite 1.26_05 - foreign keys!
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI Driver) version 1.26_05 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.26_05/ This developer release bundles the brand-new SQLite version 3.6.19, which adds support for enforcing SQL foreign keys. See http://sqlite.org/foreignkeys.html for the details of the foreign key support that SQLite now has. Also be sure to look at the section http://sqlite.org/foreignkeys.html#fk_enable , because you have to enable a pragma on each connect to use the foreign keys feature; it isn't yet on by default for backwards compatibility purposes. As I imagine many of you have been pining away for SQLite to support this feature for a long while, you'll want to dig in right away. TESTING NEEDED! Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. And especially try actually using foreign keys with SQLite. As the official release announcement says: "This release has been extensively tested (we still have 100% branch test coverage). [The SQLite developers] consider this release to be production ready. Nevertheless, testing can only prove the presence of bugs, not their absence. So if you encounter problems, please let us know." See also http://www.sqlite.org/changes.html for a list of everything else that changed in SQLite itself over the last few months. If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. Whining to /dev/null. 
If you feel that a bug you find is in SQLite itself rather than the Perl DBI driver for it, the main users email forum for SQLite in general is at: http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ... where you can report it as an appropriate list post (the SQLite issue tracking system is no longer updateable by the public; posting in the list can cause an update there by a registered SQLite developer). Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan P.S. DBD::SQLite has at least 1 known bug, also in version 1.25, with regard to full-text search (FTS3); there is an included new failing test, which currently is set to skip so the CPAN testers don't issue fails, but the issue behind it should hopefully be fixed before the next DBD::SQLite release. We decided that shipping DBD::SQLite now with the skipping test was preferable to waiting for that fix so you could get the new foreign keys feature the soonest.
Re: _concat_hash_sorted()
David E. Wheeler wrote: I've just released [DBIx::Connector](http://search.cpan.org/perldoc?DBIx::Connector) to the CPAN. It does connection caching and transaction management, borrowing pages from `connect_cached()`, Apache::DBI, and DBIx::Class, but usable in any of these environments. The transaction management is similar to that in DBIx::Class, but also includes savepoint support (hence my earlier post). Blog entry [here](http://www.justatheory.com/computers/programming/perl/modules/dbix-connector.html). I thought this the simplest thing to do, but I'm wondering, Tim, if it might be possible to expose this interface? It seems like it'd be generally useful. Thoughts? No thoughts about exposing the interface. But from what you've described in your blog to be the state of affairs, I think that having a distinct DBIx::Connector module is a good idea, versus embedding that functionality in a larger DBI-using module. I've never been in a situation to use cached connections before, but this module looks like it could be a good default practice when using DBI, since it seems to make connection caching work more correctly. Unless there might be some agreement for DBI itself to adopt those semantics (which it hasn't so far). -- Darren Duncan
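Typical usage of the module looks like this (a sketch: the DSN, credentials, and table are placeholders, and the method names reflect its documented run/txn interface, which very early releases may have spelled differently):

```perl
use strict;
use warnings;
use DBIx::Connector;

my $conn = DBIx::Connector->new(
    "dbi:Pg:dbname=test", "user", "secret",
    { RaiseError => 1, AutoCommit => 1 },
);

# Execute a block inside a transaction; in "fixup" mode the
# connection is re-established and the block retried once if the
# cached handle turns out to have gone away.
$conn->txn(fixup => sub {
    my $dbh = shift;
    $dbh->do("INSERT INTO log (msg) VALUES (?)", undef, "hello");
});
```

The design point is that code never holds a raw $dbh across calls; it always asks the connector, which hands back a handle known to be live (and fork/thread safe).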
Re: Savepoints
David E. Wheeler wrote: On Oct 5, 2009, at 5:01 AM, Tim Bunce wrote: I'd be interested if someone could do the research to list what databases support savepoints and what syntax they use for the main statements. DBIx::Class has done this for a lot of databases. Check out:

MSSQL:
  SAVE TRANSACTION $name;
  ROLLBACK TRANSACTION $name;

MySQL:
  SAVEPOINT $name;
  RELEASE SAVEPOINT $name;
  ROLLBACK TO SAVEPOINT $name;

Oracle:
  SAVEPOINT $name;
  ROLLBACK TO SAVEPOINT $name;

Pg:
  $dbh->pg_savepoint($name);
  $dbh->pg_release($name);
  $dbh->pg_rollback_to($name);

SQLite also has savepoints, since 3.6.8 around January. See http://sqlite.org/lang_savepoint.html for details.

SQLite:
  SAVEPOINT $name
  RELEASE [SAVEPOINT] $name
  ROLLBACK [TRANSACTION] TO [SAVEPOINT] $name

Adding that to DBIx::Class shouldn't be difficult. -- Darren Duncan
Re: Savepoints
Going the other way, in SQL there is a single terse SQL statement for starting/ending transactions, and likewise for savepoints. So, aside from maybe some consistency with legacy DBI features, why should DBI objects have begin/commit/rollback methods, or methods specific to starting/ending savepoints, at all? Why doesn't the user just do it in SQL like they do everything else in SQL? It's not like DBI is abstracting away other SQL language details, so why should it do so with the transaction/savepoint managing SQL? Unless some DBMSs support transactions but not via SQL? So maybe changing nothing in DBI is actually the best approach concerning savepoints. -- Darren Duncan
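In concrete terms, driving savepoints through plain SQL needs no new DBI methods at all; a sketch (SQLite/MySQL-style syntax, assuming $dbh is a connected handle with RaiseError on and a table t exists):

```perl
# Savepoints managed entirely through SQL, with no DBI extension:
$dbh->do("SAVEPOINT sp1");
eval {
    $dbh->do("INSERT INTO t (x) VALUES (2)");
    $dbh->do("RELEASE SAVEPOINT sp1");   # keep the work
};
if ($@) {
    # undo everything back to the savepoint
    $dbh->do("ROLLBACK TO SAVEPOINT sp1");
}
```

The obvious cost of this approach is portability: the application, not DBI, has to know which savepoint dialect its database speaks.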
Re: Savepoints
I think I've talked about this before. I see that the best generic interface for savepoints is to make them look like normal sub-transactions, that are scoped: 1. The basic idea is that we have nested transactions, and starting a child is defining a subunit that needs to succeed or be a no-op as a whole. 2. DBI is always in autocommit mode by default, because that treats each SQL statement as an innermost nested transaction of its own. There should not be an autocommit=0 in the interest of consistency. 3. A slightly higher level of abstraction would provide the greatest user-friendliness, and I strongly prefer the idea of sub-transactions being tied to a lexical scope or block, such as a try-block. So for example, entering a sub-transaction block starts a child transaction, exiting one normally commits that child, and exiting abnormally due to a thrown exception rolls back the child. Making things scope-tied is the safest and easiest to use because users don't have to explicitly call commit/rollback for every begin, similar to how automatic memory management helps us not need to remember to do a 'free' for each 'malloc' in spite of the many ways a code block might be exited. In this situation, there would not be any explicit begin()/commit()/rollback() methods, and also the SQL itself can't call those unpaired. As for implementation, well I think there is a Perl module that implements sentinel objects or whatever they're called, which could be looked at for ideas. 4. Less ideal for users, but perhaps closer to bare metal or what people are used to, DBI can keep its existing start/begin()/commit()/rollback() methods, and they just get reused for child transactions. There should be a transaction nesting level counter which DBI exposes with a getter method. When a connection starts, the level is 0. Starting a transaction increments this by 1, and ending (commit or rollback) decrements it; decrementing it below zero is an error. 
The start/begin() method starts a new child transaction, or a first transaction if there are none, and commit()/rollback() ends the innermost transaction. This all said, if you still want to have actual named savepoints, well David's proposal sounds fairly decent. -- Darren Duncan

David E. Wheeler wrote: Tim et al., Anyone given any thought to an interface for savepoints? They're a part of the SQL standard, and basically look like named subtransactions. The SQL looks like this:

BEGIN;
INSERT INTO table1 VALUES (1);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (2);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (3);
RELEASE SAVEPOINT my_savepoint;
COMMIT;

Compared to transactions, the statements look like this:

TRANSACTIONS | SAVEPOINTS
BEGIN;       | SAVEPOINT :name;
COMMIT;      | RELEASE :name;
ROLLBACK;    | ROLLBACK TO :name;

Given these terms, I think that DBD::Pg takes the correct approach, offering these functions: pg_savepoint($name), pg_release($name), pg_rollback_to($name). All you have to do is pass a name to them. I'd therefore propose that the DBI adopt this API, offering these functions: savepoint($name), release($name), rollback_to($name). These would essentially work just like transactions in terms of error handling and whatnot. The example might look like this:

$dbh->{RaiseError} = 1;
$dbh->begin_work;
eval {
    foo(...);   # do lots of work here
    $dbh->savepoint('point1');
    eval {
        bar(...);   # including inserts
        baz(...);   # and updates
    };
    if ($@) {
        warn "bar() and baz() failed because $@";
    }
    $dbh->commit;   # commit the changes if we get this far
};
if ($@) {
    warn "Transaction aborted because $@";
    # now rollback to undo the incomplete changes
    # but do it in an eval{} as it may also fail
    eval { $dbh->rollback };
    # add other application on-error-clean-up code here
}

If the transaction succeeds but the savepoint fails, the foo() code will be committed, but not bar() and baz(). Thoughts? Best, David
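A rough sketch of the scope-tied sub-transaction idea from point 3 above (all names here are invented, $dbh is assumed to be a connected handle, and a real version would generate unique savepoint names to allow nesting):

```perl
package TxnGuard;
use strict;
use warnings;

sub new {
    my ($class, $dbh) = @_;
    $dbh->do("SAVEPOINT guard");   # start the child transaction
    return bless { dbh => $dbh, done => 0 }, $class;
}

sub commit {
    my ($self) = @_;
    $self->{dbh}->do("RELEASE SAVEPOINT guard");
    $self->{done} = 1;
}

sub DESTROY {
    my ($self) = @_;
    # Leaving scope without an explicit commit (e.g. because an
    # exception was thrown mid-block) rolls the child back.
    $self->{dbh}->do("ROLLBACK TO SAVEPOINT guard")
        unless $self->{done};
}

package main;
{
    my $guard = TxnGuard->new($dbh);
    # ... child-transaction work here ...
    $guard->commit;
}   # dying before commit() triggers the rollback in DESTROY
```

This is the sentinel-object pattern mentioned above: the destructor pairs every begin with an end automatically, the same way garbage collection pairs frees with mallocs.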
ANN - DBD::SQLite version 1.24_01 - amalgamation
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI Driver) version 1.24_01 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.24_01/ The main feature of this release is that DBD::SQLite now uses the amalgamated source recommended at sqlite.org, meaning that the entire C source code of the SQLite library itself is now contained in a single file rather than being spread over several dozen files. Some advantages of this change include better performance due to cross-file optimization, and also easier compilation on platforms with more limited make systems. The last DBD::SQLite release that doesn't use the amalgamated source is version 1.23, which was released 2 days earlier. Also, the SQLite library bundled with both 1.23 and 1.24_01 has been updated to v3.6.13, from the v3.6.12 that 1.20 had. Further improvements in 1.24_01 over 1.20 involve mainly a significant modernization of the whole test suite, so it uses Test::More, and also more bugs fixed, minor enhancements made, and RT items addressed. See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.24_01/Changes as well as http://sqlite.org/changes.html for details. Given that the switch to amalgamated SQLite sources is arguably a very large change (or arguably a very small change), mainly in subtle ways that might affect build/compile systems (though actual SQLite semantics should be identical) ... Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome.
Ideas welcome. Testing welcome. Whining to /dev/null. Note that today's switch to amalgamated sources is the last major short term change to DBD::SQLite that I personally expected would happen (sans updates to the bundled SQLite library itself), but other developers probably have their own ideas for what directions the development will go next. Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
ANN - DBD::SQLite version 1.20
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI Driver) version 1.20 has been released on CPAN. http://search.cpan.org/dist/DBD-SQLite/ This follows on the heels of 10 developer releases starting 2009 March 27th (Adam "Alias" Kennedy has been doing release management). The previous production release of DBD::SQLite was version 1.14 about 18 months ago. Improvements in 1.20 over 1.14 include: * Updated the bundled SQLite library from v3.4.2 to v3.6.12, which carries many new features as well as bug fixes. * Added support for user-defined collations. * Added ->column_info(). * Resolved all but a handful of the 60+ RT items. * Many bug fixes and minor enhancements. * Added more tests, large refactoring of tests. * Minimum dependencies are now Perl 5.006 and DBI 1.57. See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.20/Changes as well as http://sqlite.org/changes.html for details. Testing is now especially important, since automatic updates from CPAN, such as with the CPAN/CPANPLUS utilities, will now pull this new 1.20 by default. Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. Whining to /dev/null. Regarding near future plans: Now, the current 1.20 uses the pristine several-dozen SQLite library source files, same as 1.14 did.
While reality may be different, I believe that the next major planned change to DBD::SQLite is to substitute in the "amalgamation" version, which combines all the SQLite source files into a single file; the amalgamation is the recommended form for users according to the SQLite core developers. See http://sqlite.org/download.html for a description of that. Meanwhile there should be another stable release with any bug fixes for 1.20 to come out first. Any other major changes or features for DBD::SQLite are expected to come out separately from and after the stabilized switch to the amalgamation sources. Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
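For instance, the newly added column_info() mentioned above follows the standard DBI method of that name (a sketch assuming $dbh is a connected DBD::SQLite handle and a table my_table exists):

```perl
# Standard DBI column metadata, now supported by DBD::SQLite:
my $sth = $dbh->column_info(undef, undef, 'my_table', '%');
while (my $row = $sth->fetchrow_hashref) {
    printf "%s: %s\n", $row->{COLUMN_NAME}, $row->{TYPE_NAME};
}
```

As with other drivers, the method returns a statement handle whose rows describe the matching columns, so the usual fetch methods apply.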
Re: RFC: developing DBD::SQLite
David E. Wheeler wrote: On Mar 27, 2009, at 2:39 AM, Darren Duncan wrote: I have tried emailing Matt several times without response already. Should I try telephoning him next? For all it looks like, Matt has abandoned the module. If someone knows better, or has been in contact recently, I'd be happy to hear. Looks like he responded to someone emailing him without CC'ing a mail list. I had also done that initially, before another post CC'ing a mail list. But anyway, it doesn't matter any more as the issue is resolved. Wow, I didn't know that you would be such a pushover. ;-) I don't use DBD::SQLite myself for any production purposes, and it has been a while since I've used it at all, so you should lean harder on people who really depend on it than on opinionated assholes like me. I'm not a pushover. It's more that I wasn't strongly opinionated on the matter in the first place and I was fishing; your response led me to realize that a simpler plan of action was better (and less work for both me and others). And that's the end of this thread I think. -- Darren Duncan
ANN - DBD::SQLite version 1.19_01
All, I am pleased to announce that DBD::SQLite (Self Contained SQLite RDBMS in a DBI Driver) version 1.19_01 has been released on CPAN. http://search.cpan.org/~adamk/DBD-SQLite-1.19_01/ This is the first CPAN release of DBD::SQLite since version 1.14 about 18 months ago. This is the change summary since 1.14: 1.19_01 Fri 27 Mar 2009 - Updated to SQLite 3.6.10, and bumped up the version requirement for installed sqlite3 to 3.6.0 as 3.6.x has backward incompatibility (ISHIGAKI) - fixed "closing dbh with active statement handles" issue with a patch by TOKUHIROM. (ISHIGAKI) - skip 70schemachange test for Windows users. (ISHIGAKI) - applied RT patches including #29497, #32723, #30558, #34408, #36467, #37215, #41047. (ISHIGAKI) - added TODO to show which issues are to be fixed. (ISHIGAKI) - license and configure_requires in Makefile.PL and META.yml (Alexandr Ciornii) - Spelling check for SQLite.pm (Alexandr Ciornii) - Adding arbitrary Perl 5.005 minimum Right now, DBD::SQLite has a new development team with Matt Sergeant's blessing, which is working to keep it updated and fix any outstanding bugs. Multiple people have made commits to it since Jan 24th. I am serving a role as project advocate among other things. So please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. And yes, we are aware that 3.6.10 isn't the latest; that will be fixed soon. If you want in to DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org Some discussion has also taken place in the dbi-dev list and there is also a general DBI related IRC channel, but the above DBD-SQLite forums were just created last night.
The version control is the one Adam "Alias" Kennedy setup around January 24th, which is a Subversion repo. Here are some change log and browse urls: http://fisheye2.atlassian.com/changelog/cpan/trunk/DBD-SQLite http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. Whining to /dev/null. In particular, we can use more people with C savvy, as we are somewhat bereft of that among the current team. For one thing, apparently using the amalgamation file from sqlite.org is incompatible with the XS code that talks to the multiplicity of original SQLite source code files, so some savvy is needed to patch it for that migration. Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
Re: takeover request - DBD::SQLite
Hello Steffen et al, thanks for your response. I was not previously aware that Adam had taken this up; no mention of it in any forums I frequent nor on CPAN. But I'm very happy to hear it nonetheless. If Adam is serious about this and wants to organize the effort to keep DBD::SQLite up to date, then I'm quite happy to step back from trying to start my own effort. The main reason I offered to take over or co-maintain back around Jan 12th was because the module seemed to be abandoned and no one else was stepping up. So I'm happy and prefer to join Adam's effort rather than doing a separate one. Adam, I see from the Changes file in your repo that your group's most recent work is later than my announcement, since I announced when SQLite 3.6.8 was new and your repo has 3.6.10, and you've made a number of bug fixes. Please indicate the main forum where your group is discussing this work so I can join it. And I look forward to a CPAN release. Or in helping you do it. Thank you. -- Darren Duncan Steffen Mueller wrote: Hi Darren, hi Matt, Matt, I hope you're well and simply too busy to answer! Darren Duncan wrote: Following up a discussion a couple months ago, Matt Sergeant still hasn't responded to that or any other message from me concerning DBD::SQLite, and so, AFAIK with the community's support, ... I would like to be registered as an official co-maintainer of DBD::SQLite, so I can start release managing it and CPAN will accept my uploads as authorized. I hadn't started work on this before now, but DBD::SQLite has now moved to the front of my queue of un-paid projects (now that Set::Relation 0.9.0 is out), and I plan to start working on it this week or next with a first developer release to CPAN estimated in 2-3 weeks from now and a first normal user release following a few weeks later when early adopters consider it ready. 
Note that I consider the dbi-dev list to be the official channel for all DBD::SQLite development, so if any of you have any input for this project then please just make it there. Also I have already made an RFC post there earlier today about my initial plans for DBD::SQLite's development. Darren, I've given you co-maintenance permissions via PAUSE. Matt, please note that this is a reversible action in case you severely object! Darren, I only just realized that people have imported the current state of DBD::SQLite into Adam Kennedy's SVN repository at http://svn.ali.as/cpan/trunk/DBD-SQLite/. Without looking at the details, I see that there have been some commits. I guess it would make sense if you joined that effort in order not to duplicate it. I am sure Adam (see CC) will give you access to the repository in no time if you don't have an account already. Best regards, Steffen (for the PAUSE admins)
Re: RFC: developing DBD::SQLite
Cosimo Streppone wrote: On 27 March 2009 at 03:30:10, Darren Duncan wrote: So, out of my un-paid projects, my promise to take over release management of DBD::SQLite (from the still incommunicado previous owner) has now come to the front of my queue (now that Set::Relation 0.9.0 is out) Hi Darren, I *seem* to remember, but I might be totally wrong, that ADAMK took over maintainership for DBD::SQLite some months ago. Well that's news to me. And great news if it is true. It also must have been a recent development (I'll look into it) as I didn't see any CPAN releases or announcements. Regarding feedback on DBD::SQLite, give me some time to read all your mail. I first made the proposal around January 12th of this year, quickly following my spreading the news that SQLite 3.6.8 came out with support for nested transactions, and getting responses that DBD::SQLite had all sorts of unresolved bugs and whatever, and seeing for myself that it hadn't been updated in a long time. Also Audrey Tang had released SQLite 3.6.1/2 a year ago in amalgamation form, but now neither Audrey nor Matt were doing anything, and I saw no evidence that anyone else was either. So I offered to do it. The only thing I can say right now is that DBD::SQLite rocks for me because it's a direct-from-CPAN install with no hassle, even on Windows. This is also one of the advantages of using the SQLite amalgamation version that sqlite.org provides and Audrey experimentally released. The complicated pre-compilation work is done in advance, so less capable build environments like Windows can then build SQLite without needing pricey tools. Don't know if Adam's version is the amalgamation but I'll look. Anyway, to repeat a prior reply, I've decided not to unbundle the library. -- Darren Duncan
Re: RFC: developing DBD::SQLite
David, thanks for your quick response. David E. Wheeler wrote: > You *really* need to get Matt to sign off on this, IMHO. I have tried emailing Matt several times without response already. Should I try telephoning him next? For all it looks like, Matt has abandoned the module. If someone knows better, or has been in contact recently, I'd be happy to hear. So following your response, I will eliminate a lot of the change items from my plan. In particular, I will continue to bundle the SQLite library as it has always been done, and won't download anything. I will also not add the version.pm dependency and will continue numbering releases where Matt left off, such as starting at version 1.15 or jumping up a bit and starting at 1.20_01, which should be harmless. In regards to which version to use, I will also leave that behaviour as Matt had it; his version could use either the bundled library or the system one, and I'll leave selecting which to the same mechanism Matt used, unless the users argue for a change. I think that means the bundled is the default. So then, what I'm planning to do is this: 1. Enter DBD::SQLite into public version control using Git, with the initial version being the last one Matt published on CPAN, and then I will merge in Audrey Tang's changes that were released as DBD::SQLite::Amalgamation; essentially there is no change except that several dozen SQLite library files are replaced with a single one. 2. Then I'll update the library to the latest one on sqlite.org, and then retest and deal with anything that breaks. 3. Then I'll look at various open RT items and apply various submitted bug fixes, again testing. And I'll accept collaborative fixes from other people such as those made against the version control. 4. Then I'll put an experimental-versioned release on CPAN. In fact, despite the larger sized releases, this probably is actually much less work for me when it comes down to it due to less complexity.
I really will just make the minimal changes possible from Matt's version, just to get it to run with the current SQLite library, which will now be the single amalgamation file. Also, the driver would mainly just be tested, and asserted by me to work, with the version of SQLite bundled with it. And if users replace the bundled amalgamation file with a different amalgamation file, generally it'll act as if the replacement was the original, so this isn't really a separate case. Now I know you said to prefer the system library, but to start I'll just leave it the way Matt had it; mind you, if there is a general consensus to change this default, I'll honor it. And my current plans are now back to what they were earlier this year when I proposed co-maintaining the driver, with a focus on minimalism, before I became more ambitious in today's first proposal. To conclude, I would be quite happy to not have this responsibility. I am doing this primarily because I want SQLite to continue to succeed in the Perl world, and no one is keeping the bindings up to date anymore, despite many people asking for updates. I'm doing it because no one else is. Now if Matt comes out of the shadows, addresses the languishing driver bug reports, updates the library to the new one, and continues to do so, I'll be the happiest, but so far he hasn't. And my email attempts have failed to get a response. So, should I try to phone Matt now? Thank you. -- Darren Duncan
RFC: developing DBD::SQLite
While I suppose this is technically a fork, I'll still just be using the name DBD::SQLite, so my versions are picked up automatically by users of the previous ones. Now as for version numbering, I decided already to not use 4-part versions like eg 3.8.11.0, but rather just use a 3.x.y_NNN format for my releases, with the x and y values following my own preferred pattern of x for feature changes and y for bug fixes. But since prior releases are already production status, I'll also use the _N suffix convention for my initial or experimental releases and just release without _N when testers report no show-stoppers, similar to how the likes of DBD::Pg or Moose etc are released now. So I'm wondering what would be the least contrived-looking numbering scheme in light of starting with _N, eg: would 3.0.0_001 be the best for the dev release start, and then either 3.1.0 or 3.0.1 (depending on whether features change from the _N releases) be the first non-dev release, or should I start the dev releases in a perhaps more contrived 2.y or 1.y space? Now I recall that someone had previously proposed making the SQLite library an external dependency when my initial proposal didn't. If they or anyone else want to speak up with specific advice or patches etc, I'd appreciate it. Also my DBD::SQLite will be in a public Git repository, either on utsl.gen.nz or github or both, so I can receive patches by pulling from someone's fork. It's not up yet though. Thank you in advance for any feedback. -- Darren Duncan
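For readers unfamiliar with the _N convention mentioned above, here is a minimal sketch of how it behaves; the version numbers are purely illustrative, not actual DBD::SQLite releases:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical numbers, just to illustrate the CPAN convention: an
# underscore in the version string marks a developer release, which
# CPAN clients skip during normal installs unless requested explicitly.
my $VERSION = '3.0.0_001';

# The customary idiom for runtime comparisons is to strip the
# underscore so the string can be treated as an ordinary version.
(my $numeric = $VERSION) =~ tr/_//d;

print "developer release\n" if $VERSION =~ /_/;
print "numeric form: $numeric\n";   # numeric form: 3.0.0001
```

Dropping the _N suffix (eg releasing 3.0.1 after 3.0.0_001) is then what promotes the line to a normal, indexable release.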
Re: request to become co-maintainer of DBD::SQLite
Erik, Thank you for volunteering; that help is appreciated. Also, I still haven't heard back from the normal maintainer, so I think I'm going to step up. However, real life has intruded (eg, paying clients want stuff now) so it will still be a few weeks before I begin release-managing DBD::SQLite. I currently estimate mid-March to get into it. -- Darren Duncan Erik Aronesty wrote: I do Perl and C and offer some help. Same here. I feel reasonably at home both in C and Perl, and I've written some simple XS code. I don't have any experience with DBI, but I will also do what I can to help, if anyone wants it. Also, is there a repository anywhere? It's not documented in the source or anywhere that I can find. I usually find it easier to submit patches if there's a repository involved. - There are some obvious patches and minor tweaks involving threading that can fix support on Win32. Notably, the CLONE method doesn't work on a win32 fork, so a workaround would be to store the thread id along with the handle and compare it to the current thread id to see if a clone is needed. (I posted a patch that simply disables testing for cloning on win32, which would probably satisfy 60% of the win32 users at the outset.) - Solaris and Win32 both have problems where the DBD xsi file creation line isn't included in the Makefile. I think this is a DBI issue, since the code for that is part of DBI's driver implementation, but I haven't actually traced it to the line that is failing to see why. I can do that if anyone wants me to, but my bet is that the DBI devs know about it already, or could take one look at SQLite's Makefile.PL and tell me why it fails or what a good workaround is. Mostly, I'm not too concerned with a lot of the "my system locks up" bugs, as my guess is they are issues that will be addressed in newer versions of SQLite, and are common to embedded databases that run on a mixed bag of operating systems. But compatibility = usage = support, and that's my primary concern.
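The thread-id workaround Erik describes might look something like this minimal sketch. All the names here are mine, not actual DBD::SQLite code; the idea is simply to record which thread created a handle and lazily redo the clone work when a different thread first touches it, instead of relying on CLONE being called:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only (hypothetical package and method names): store the id of
# the creating thread with each handle, and compare it to the current
# thread id on use, rather than depending on CLONE under win32 fork
# emulation.
package Handle;

sub new {
    my ($class) = @_;
    my $tid = eval { threads->tid } // 0;   # 0 when ithreads aren't loaded
    return bless { owner_tid => $tid }, $class;
}

# Returns 1 if thread-local state had to be rebuilt ("cloned"), 0 if not.
sub ensure_current_thread {
    my ($self) = @_;
    my $tid = eval { threads->tid } // 0;
    if ($tid != $self->{owner_tid}) {
        # Handle was created in another thread: rebuild any per-thread
        # state here, then adopt the handle into this thread.
        $self->{owner_tid} = $tid;
        return 1;
    }
    return 0;
}

package main;

my $h = Handle->new;
print $h->ensure_current_thread, "\n";   # same thread, so prints 0
```

A real driver would do this check at the top of each XS entry point, but the bookkeeping is the same.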
Re: [Dbix-class] request to become co-maintainer of DBD::SQLite
This is a reply to a post on the dbix-class list. However, if there is going to be ongoing discussion I prefer it happen on the dbi-dev list, since dbi-dev just seems the most on topic, I think. Goro Fuji wrote: Just now I am considering maintaining DBD::SQLite, and I have some ideas about that: (1) to separate SQLite from DBD::SQLite (not to be bundled), which will also separate problems of SQLite from those of DBD::SQLite. (2) to make a new module Alien::SQLite (see other Alien::* modules), which will allow different versions of SQLite. (3) to allow "use DBD::SQLite 1.15 -sqlite_version => q(3.6.8)" for testing. What do you think of it? That sounds like it could be a good idea on paper. I'm not sure I know yet the degree of changes it would involve. By default I think I'll start off with the fewest changes, now most likely using Audrey Tang's amalgamation as the point of departure, but DBD::SQLite could also go the way you say. I think this is an idea best discussed on dbi-dev; if you want to prototype and present your idea there, please do so. There are probably other issues to consider that people may bring up. Thank you. -- Darren Duncan
Re: [sqlite] request to become co-maintainer of DBD::SQLite
These are replies to posts on the sqlite-users list. However, if there is going to be ongoing discussion I prefer it happen on the dbi-dev list. Not that sqlite-users isn't very on topic itself; dbi-dev just seems *more* on topic, I think. Clark Christensen wrote: One of my first code changes will be to require DBI 1.607+ The current DBD-SQLite works fine under older versions of DBI. So unless there's a compelling reason to do it, I would prefer you not make what seems like an arbitrary requirement. I have 2 answers to that: 1. Sure, I can avoid changing the enforced dependency requirements for now, leaving them as Matt left them. However, I will officially deprecate support for the older versions and won't test on them. If something works with the newer dependencies but not the older ones, it will be up to those using or supporting the older dependencies to supply fixes. 2. On one hand I could say, why not update your DBI when you're updating DBD::SQLite, since even DBI itself has added lots of fixes one should have. On the other hand, I can understand the reality that you may have other legacy modules, like drivers for other old databases, that might break with a DBI update. I say might, since on the other hand they might not break. Still, I'll just go the deprecation angle for now. Otherwise, it sounds like a good start. Matt must be really busy with other work. I'll be happy to contribute where I can, but no C-fu here, either :-( Thank you. Ribeiro, Glauber wrote: > My only suggestion at the moment, please use the amalgamation instead of > individual files. This makes it much easier to upgrade when SQLite > releases a new version. Okay. Jim Dodgen wrote: > I'm for the amalgamation too. The rest of your ideas are great also. > Excellent idea to use Audrey Tang's naming convention. > > I have been stuck back at 3.4 for various issues. > > I do Perl and C and offer some help. Okay, and thank you. -- Darren Duncan
request to become co-maintainer of DBD::SQLite
One of my first code changes will be to require DBI 1.607+ and Perl 5.8.1+ (and the former requires the latter too), though I may only ever run it under 5.10.x on my machine. But if anyone knows that it will work with older versions, they can submit a patch to that effect. 7. I would also like to adopt the versioning scheme that Audrey Tang used, so that for example a first stable release with the current SQLite would be DBD::SQLite 3.6.8.0, with the last digit only being updated while updates to DBD::SQLite itself occur but updates to SQLite itself don't. One question I still have to figure out, though, is whether that can be done in combination with the _NN suffix to mark developer releases, eg as 3.6.8_0 or 3.6.8.0_0 etc, so that CPAN install tools work and nothing on CPAN/PAUSE/etc would break. Presumably I'd add a dependency on version.pm (bundled with Perl 5.10.x) in any event. The main benefit of this versioning scheme is that it is easy for users to know at a glance what they're getting, and also, if for some reason users need me to later bundle some older SQLite version, the space already exists for appropriate lower version numbers. Basically I'm doing this because someone has to do it, and I'm as good a default person as any until someone better suited (eg, with more C-fu) comes along and takes my place. Matt, thank you in advance for a quick reply. To everyone, please don't actually submit patches to me until I announce that I'm ready to receive them, or just send them to RT as you already were. -- Darren Duncan
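The 4-part scheme described above is the kind of thing version.pm handles directly; a quick illustrative sketch (the numbers are examples, not real releases):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use version;   # core since Perl 5.10; available from CPAN for 5.8.x

# Illustrative numbers only: the first three parts track the bundled
# SQLite release, and the last part tracks DBD::SQLite-only changes.
my $release  = version->parse('v3.6.8.0');
my $previous = version->parse('v3.6.7.17');

# Dotted-decimal versions compare component by component, so 3.6.8.0
# correctly sorts after 3.6.7.17 (a plain string compare would not).
print "upgrade available\n" if $release > $previous;
```

This componentwise comparison is the main mechanical reason the version.pm dependency would be needed for a scheme like 3.6.8.0.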
Re: Considering an SQLite-specific CPAN Module
Another distribution, DBD::SQLite::Amalgamation, is a quasi-fork of DBD::SQLite and is up to date with almost the latest SQLite sources; it also adds a few more features over the last DBD::SQLite, which hasn't seen a release in over a year. You may want to consider submitting your changes for inclusion in DBD::SQLite::Amalgamation if appropriate. -- Darren Duncan Tom Browder wrote: After looking at various modules for using SQLite, I've not found exactly what I'm looking for. I want a pm that is higher level than what I see in DBD::SQLite. I would like to see functions similar to what is available in Win32::DBIODBC or Win32::ODBC: TableList # list of tables in the db FieldNames ColAttributes etc. I have cobbled together a pm for my use, but it might be useful to others, and I would appreciate any opinions from this audience. Any comments are appreciated. -Tom
Re: Function Calling Methods
I want to throw my support behind this idea in principle, that principle being that it is important to be able to invoke database stored routines efficiently and easily. In my case, a dominant paradigm of Muldis D is to put all database access code in database stored routines, so having less indirection when implementing the language over existing DBMSs should only be helpful. Also, as commented, the solution should support inputting or returning arbitrary configurations of data, eg, multiple subject-to-update parameters, as SQL itself supports. -- Darren Duncan
Re: ANNOUNCE: DBI 1.55 RC2
At 3:02 PM +0100 4/27/07, Tim Bunce wrote: You can download it from http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.55-RC2.tar.gz All tests of just DBI itself pass or skip on my system, Perl 5.8.8 with no threads, gcc 4.0.1, Mac OS X 10.4.9 PPC. -- Darren Duncan
Re: Pre-release DBD::mysql 4.005 - any thoughts?
At 9:09 PM +0100 4/26/07, Tim Bunce wrote: Section 23.1 of ISO/IEC 9075-2:2003 :-) I could email you a copy of the draft I have if you'd like. Don't print it though. It's >1300 pages :) Don't bother emailing it. Anyone can download it from http://www.wiscorp.com/SQLStandards.html ; it's part of the "SQL 2003 Documents" zip file, linked to near the top. -- Darren Duncan
Re: ANNOUNCE: DBI 1.55 RC1 proxy
At 2:46 PM +0100 4/13/07, Tim Bunce wrote: You can download it from http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.55-RC1.tar.gz DBI's "make test" runs were all successful or skipped with Perl 5.8.8, with no threads, under Mac OS X 10.4.9. -- Darren Duncan
Re: ANNOUNCE: DBI 1.54 RC8
As a follow-up to what I previously said, which is quoted below ... I have just installed MySQL 5.1.15-beta on my machine from the tgz binary, and following that ... So, DBD-mysql-4.001 seems to build fine, and it passes all tests except for t/80procs.t, which mentions my not having permission to do something, but I don't see that failure necessarily having anything to do with DBI itself. So it appears that, on Mac OS X 10.4.8 PPC, GCC 4.0.1, Perl 5.8.8-nothread, all of DBI-1.54-RC8 and DBD-SQLite-1.13 and DBD-mysql-4.001 seem to build, self-test, and install fine. I have not actually tried to use any of the above with my own application code, however, nor am I currently equipped to do so. Maybe in a few months. -- Darren Duncan At 11:36 AM + 2/22/07, Tim Bunce wrote (off-list): On Wed, Feb 21, 2007 at 09:22:32PM -0800, Darren Duncan wrote: At 1:55 AM + 2/22/07, Tim Bunce wrote: >You can download it from: >http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.54-RC8.tar.gz With my usual setup, DBI's own tests are successful. -- Darren Duncan Thanks Darren. Can you install it and then try building and testing some DBD's? Since I don't have any external DBMSs installed on my Mac OS X home machine, for now my DBD testing here will have to be limited to stand-alone DBDs ... So, DBD-SQLite-1.13 successfully built and passed all of its tests fine here. As for external DBMSs, I would have to install them first ... not that this isn't doable, but then I'll report about it in a different email. I will, however, try doing one immediately ... -- Darren Duncan
Re: ANNOUNCE: DBI 1.54 RC5 - including cool new DBD::Gofer stateless proxy
At 3:38 PM + 2/18/07, Tim Bunce wrote: I've released an RC5 which uses $^X and moves the PERL5LIB setting to t/85gofer.t, along with some other more minor changes: http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.54-RC4.tar.gz I think you meant to spell that: http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.54-RC5.tar.gz ... as the other spelling just gets you RC4 again. Anyway, RC5 fixes the problem for me; all tests pass. -- Darren Duncan